forked from odoo/runbot
Syncing from upstream odoo/runbot (17.0-upgrade-rebuild-params-phase2-xdo) #816
Open
bt-admin wants to merge 444 commits into brain-tec:17.0-upgrade-rebuild-params-phase2-xdo from odoo:17.0-upgrade-rebuild-params-phase2-xdo
Conversation
I was pretty sure it wouldn't happen, but it doesn't hurt to make sure, and it also checks that splitting the batch does make things work out correctly.
- make sure inconsistent batches prevent merging
- don't take closed PRs into account for the consistency check (unless all PRs are closed)
- add a wizard to split PRs out of a batch when the inconsistency is legitimate
- notify that the batch is inconsistent on the PR dashboard

Fixes #911
A new version of numpy (2.0.0) was released in June 2024. Since matplotlib requires a version of numpy >= 1.17, this new version is installed instead of 1.x. Pinning the numpy version to the expected one on Ubuntu Jammy should solve the issue.

Failed to initialize database `65188118-17-0-runbot`.
Traceback (most recent call last):
  File "/data/build/odoo/odoo/service/server.py", line 1313, in preload_registries
    registry = Registry.new(dbname, update_module=update_module)
  File "<decorator-gen-16>", line 2, in new
  File "/data/build/odoo/odoo/tools/func.py", line 87, in locked
    return func(inst, *args, **kwargs)
  File "/data/build/odoo/odoo/modules/registry.py", line 113, in new
    odoo.modules.load_modules(registry, force_demo, status, update_module)
  File "/data/build/odoo/odoo/modules/loading.py", line 480, in load_modules
    processed_modules += load_marked_modules(env, graph,
  File "/data/build/odoo/odoo/modules/loading.py", line 364, in load_marked_modules
    loaded, processed = load_module_graph(
  File "/data/build/odoo/odoo/modules/loading.py", line 185, in load_module_graph
    load_openerp_module(package.name)
  File "/data/build/odoo/odoo/modules/module.py", line 395, in load_openerp_module
    __import__(qualname)
  File "/data/build/runbot/runbot/__init__.py", line 3, in <module>
    from . import controllers
  File "/data/build/runbot/runbot/controllers/__init__.py", line 5, in <module>
    from . import badge
  File "/data/build/runbot/runbot/controllers/badge.py", line 5, in <module>
    from matplotlib.font_manager import FontProperties
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/__init__.py", line 109, in <module>
    from . import _api, _version, cbook, docstring, rcsetup
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/rcsetup.py", line 27, in <module>
    from matplotlib.colors import Colormap, is_color_like
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/colors.py", line 56, in <module>
    from matplotlib import _api, cbook, scale
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/scale.py", line 23, in <module>
    from matplotlib.ticker import (
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/ticker.py", line 136, in <module>
    from matplotlib import transforms as mtransforms
  File "/home/odoo/.local/lib/python3.10/site-packages/matplotlib/transforms.py", line 46, in <module>
    from matplotlib._path import (
ImportError: numpy.core.multiarray failed to import
Should probably take priority over a PR being misconfigured.
Extract the current table generation into a separate function, and add another function to render an alert / list of PR targets if the batch is not consistent. This means an extra pass on the table contents to precompute the image size, but we can delay loading fonts until after etag computation, which might be a bigger gain all things considered: there aren't many cells in most PR tables, but fonts are rather expensive to load (I should probably load them at import and cache them in the module...).
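The parenthetical points at a simple fix; a minimal sketch of that caching idea, assuming matplotlib's `FontProperties` is what gets loaded (the helper name and defaults are invented):

```python
import functools

from matplotlib.font_manager import FontProperties


@functools.lru_cache(maxsize=None)
def _get_font(family='DejaVu Sans', size=11):
    # loaded once per process and reused, instead of being rebuilt for every table
    return FontProperties(family=family, size=size)
```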
See 66a1018#r144596695. Looks like 1.21.5 is not available on some distributions; 1.22 should be available on most systems. Note that the preferred solution would be to install matplotlib using `apt install python3-matplotlib` as stated in the readme.
The previous version worked but was extremely plodding and procedural. Initially I wanted to compose the table in a single pass, but that turns out not to really be possible, as the goal for #908 is to have a "drawer" for extended information about the current batch: this means different cells of the same row can have different heights, so we can't one-pass the image either vertically (later cells of the current column might be wider) or horizontally (later cells of the current row might be taller).

However, what can be done is to give the entire thing *structure*, essentially defining a very cut down and ad-hoc layout system before committing the layout to raster. This also deduplicates formatting and labelling information which was previously in both the computation first step and the rasterisation second step.
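To make the "measure first, raster second" idea concrete, here is a minimal illustrative sketch; none of these names come from the actual runbot code:

```python
def layout(rows, measure):
    """rows: rectangular grid of cell payloads; measure: cell -> (width, height) in px."""
    sizes = [[measure(cell) for cell in row] for row in rows]
    col_widths = [max(row[c][0] for row in sizes) for c in range(len(sizes[0]))]
    row_heights = [max(size[1] for size in row) for row in sizes]
    # only now is the full image size known, so rasterisation can happen in one go
    return col_widths, row_heights
```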
The PR dashboard picture provides a great overview of the batch state both horizontally and vertically, *but* apparently people can't for the life of them go check the actual dashboard when things don't line up. So expand the "current batch" to a view more similar to the dashboard *page*, which gives details of the sub-checks being performed and whether or not they are fulfilled. Fixes #908
The runbot colors can lack contrast in some cases, especially for colorblind people. This commit introduces a theme that should help a little. Note that this was done without much analysis and should be tweaked in the future.
Merge errors are logical failures, not technical ones; it doesn't make sense to log them out because there's nothing to be done technically, and a PR having consistency issues or a conflict is "normal". As such those messages are completely useless and just take unnecessary space in the logs, making them harder to use. Instead of sending them to logging, log staging attempts to the PR itself, and only do normal logging of the operation as an indicative item. And remove a bunch of `expect_log_errors` which don't stand anymore.

While at it, fix a missed issue in forward porting: if `root.head` doesn't exist in the repo, its `fetch` will immediately fail before `cat-file` can even run, so the second one is redundant and the first one needs to be handled properly. Do that, and leave checking for *that* specific condition as a logged error, as it should mean there's something very odd with the repository (how can a pull request have a head we can't fetch?). A rough sketch of that handling follows.
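The `repo.fetch` wrapper and its return shape below are assumptions, not the actual mergebot API:

```python
import logging

_logger = logging.getLogger(__name__)


def fetch_root_head(repo, remote_url, root_head):
    # `repo.fetch` is assumed to return a CompletedProcess-like object
    proc = repo.fetch(remote_url, root_head, check=False)
    if proc.returncode:
        # a PR whose head can't be fetched points at something very odd with the
        # repository, so this stays a loudly logged error
        _logger.error("unable to fetch %s from %s:\n%s", root_head, remote_url, proc.stderr)
        return False
    # no separate `cat-file` check needed: a successful fetch implies the object exists
    return True
```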
- add context, in case that allows tracking overlap between tests eventually
- use `-m` to run odoo; requires backporting `odoo/__main__.py`, but I'm not quite sure how I was running it previously, possibly via odoo-bin? It certainly doesn't seem to work out with a global `odoo` helper
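A guess at what the backported `odoo/__main__.py` amounts to, assuming it simply defers to the same entry point as `odoo-bin`:

```python
# odoo/__main__.py (sketch): lets `python -m odoo` behave like ./odoo-bin
from .cli import main

main()
```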
The order of requirements may have an impact on the final outcome. In documentation builds, we have two requirements with conflicting needs. Let's make the doc requirements install after the odoo ones (sequence dependent).
Found a bunch of old leftover `print` calls which should not be in the repo.
Mergebot / forwardport crons need to run in a specific ordering in order to flow into one another correctly. The default ordering being unspecified, it was not possible to use the normal cron runner (instead of the external driver running crons in sequence one at a time). This can be fixed by setting *sequences* on crons, as the cron runner (`_process_jobs`) will use that order to acquire and run crons.

Also override `_process_jobs` however: the built-in cron runner fetches a static list of ready crons, then runs that. This is fine for the normal situation where the cron runner runs in a loop anyway, but it's an issue for the tests, as we expect that cron A can trigger cron B, and we want cron B to run *right now* even if it hadn't been triggered before cron A ran. We can replace `_process_job` with a cut-down version which does that (cut down because we don't need most of the error handling / resilience: there are no concurrent workers, there's no module being installed, versions must match, ...). This allows e.g. the cron propagating commit statuses to trigger the staging cron, and both will run within the same `run_crons` session.

Something I didn't touch is that `_process_jobs` internally creates completely new environments, so there is no way to pass context into the cron jobs anymore (whereas it works for `method_direct_trigger`); this means the context values have to be shunted elsewhere for that purpose, which is gross. But even though I'm already replacing `_process_jobs`, fixing that seemed a bit too much of a change in cron execution semantics, so I left it out.

While at it, silence the spammy `py.warnings` stuff I can't do much about.
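As a rough illustration of such a cut-down runner (this is not the actual override): keep consuming pending cron triggers, in cron `sequence` order, until none are left, so a cron triggered by a previous one still runs within the same pass.

```python
from odoo import fields


def run_triggered_crons(env):
    # illustrative only: no locking, no error resilience, single-worker assumption
    while True:
        triggers = env['ir.cron.trigger'].search(
            [('call_at', '<=', fields.Datetime.now())])
        if not triggers:
            return
        for cron in triggers.cron_id.sorted('sequence'):
            cron.method_direct_trigger()
        triggers.unlink()
```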
When cleaning build errors before fingerprinting, it's only possible to replace the matching regex with something else, not with an empty string. Since Python 3.11 may add lines to error messages in order to visually improve them, the fingerprints of those errors no longer match between versions. With this commit, when the replacement string is two consecutive single quotes, the matching element is replaced by an empty string, allowing removal of unwanted characters.
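A minimal sketch of that convention (names are illustrative, not the actual runbot cleaning code): a replacement consisting of two consecutive single quotes means "strip the match entirely".

```python
import re


def clean_error(content, regex, replacement):
    # the literal two-character string '' is the marker for an empty replacement
    if replacement == "''":
        replacement = ''
    return re.sub(regex, replacement, content)


# e.g. dropping the ^^^ markers Python 3.11 adds under the offending expression
clean_error("ValueError: boom\n    ^^^^^^^", r"\n\s*\^+", "''")
```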
The cron had been converted to using mostly triggers in 7cd9afe but I forgot to update the *tests* to avoid explicitly triggering it.
These are pretty simple to convert as they are straightforward: an item is added to a work queue (table), then a cron regularly scans through the table, executing the items and deleting them. That means the cron trigger can just be added on `create` and things should work out fine (see the sketch below). There are just two wrinkles in the port_forward cron:

- It can be requeued in the future, so it needs a conditional trigger-ing in `write`.
- It is disabled during freeze (maybe something to change); as a result triggers don't enqueue at all, so we need to trigger immediately after the freeze re-enables the cron.
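A hedged sketch of that pattern on a hypothetical queue model (the model name and cron xmlid are invented for illustration):

```python
from odoo import api, fields, models


class QueueItem(models.Model):
    _name = 'forwardport.queue.item'  # hypothetical
    _description = 'work queue item'

    retry_after = fields.Datetime()

    @api.model_create_multi
    def create(self, vals_list):
        records = super().create(vals_list)
        # wake the processing cron as soon as work lands in the queue
        self.env.ref('forwardport.cron_port_forward')._trigger()
        return records

    def write(self, vals):
        res = super().write(vals)
        if vals.get('retry_after'):
            # requeued in the future: schedule a wakeup at that time instead of now
            self.env.ref('forwardport.cron_port_forward')._trigger(
                at=fields.Datetime.to_datetime(vals['retry_after']))
        return res
```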
The merge cron is the one in charge of checking staging state and either integrating the staging into the reference branch (if successful) or cancelling the staging (if failed). The most obvious trigger for the merge cron is a change in staging state from the states computation (transition from pending to either success or failure). Explicitly cancelling / failing a staging marks it as inactive, so the merge cron isn't actually needed there.

However, another major trigger is *timeout*, which doesn't have a trivial signal. Instead, it needs to be hooked on the `timeout_limit`, and has to be re-triggered at every update to the `timeout_limit`, which in normal operations mostly comes from "pending" statuses bumping the timeout limit. In that case, `_trigger` at the `timeout_limit`, as that's where / when we expect a status change. The worst-case scenario with this is parasitic wakeups of this cron, but having half a dozen unnecessary wakeups in an hour is still probably better than having a wakeup every minute.
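In code, the timeout hook amounts to something like the following (the cron xmlid is hypothetical):

```python
def bump_merge_cron(env, staging):
    # a "pending" status pushed the timeout back: schedule a wakeup at that instant
    env.ref('runbot_merge.merge_cron')._trigger(at=staging.timeout_limit)
```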
The staging cron turns out to be pretty reasonable to trigger, as we already have a handler on the transition of a batch to `not blocked`, which is exactly when we want to create a staging (that, and the completion of the previous staging). The batch transition is in a compute, which is not awesome, but on the flip side we also cancel active stagings in that exact scenario (if it applies), so that matches.

The only real finesse is that one of the tests wants to observe the instant between the end of a staging (and creation of splits) and the start of the next one, which, because the staging cron is triggered by the failure of the previous staging, is now "atomic". That requires disabling the staging cron, which means the trigger is skipped entirely, so this test has to trigger the staging cron by hand.
With the trigger-ification pretty much complete, the only cron that's still routinely triggered explicitly is the cross-PR check, and that's the case in all modules, so there's no cause to keep an overridable fixture.
Since the crons run on a fresh database for every test, on the first `run_crons` every single cron in the db will run even though almost none of them is relevant. Aside from the slight inefficiency, this creates unnecessary extra garbage in the test logs. By setting the `nextcall` of all crons to infinity in the template we avoid this issue: only triggered crons (or the crons whose nextcall we set ourselves) will run during tests. This requires adjusting the branch cleanup cron slightly: it didn't correctly handle the initial run (`lastcall` being False).
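In template-setup terms this boils down to something along these lines (illustrative, not the actual fixture code):

```python
def neuter_crons(env):
    # push every cron far into the future so only explicitly triggered crons
    # (or crons whose nextcall the test sets itself) run during tests
    env['ir.cron'].search([]).write({'nextcall': '4000-01-01 00:00:00'})
```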
A few crons (e.g. database maintenance) remain on timers, but most of the work crons should be migrated to triggers. The purpose of this change is mostly to reduce log spam, as crons get triggered very frequently (most of them every minute) but with little to do. Secondary purposes are getting experience with cron triggers, and better sussing out dependencies between actions and crons / queues. Fixes #797
`test_inconsistent_target` was, appropriately, inconsistent, but would only fail a very small fraction of the time: the issue is that a PR would switch target between `other` and `master` assuming neither was an intrinsic blocker *but* the branches are created independently, just with the same content. This means if a second ticked over between the creation of the `master` branch's commit and that of `other`, they would get different commit hashes (because different timestamp), thus the PR would get 2 commits (or complete nonsense) when targeted to `other`, and the PR itself would be blocked for lack of a merge method. The solution is to be slightly less lazy, and create `other` from `master` instead of copy/pasting the `make_commits` directive. This means the PR has the exact same number of commits whether targeted to `master` or `other`, and we now test what we want to test 60 seconds out of every minute.
The weekly maintenance would not prune refs. This is not an issue on odoo/odoo because development branches are in a separate repository, thus never fetched (we push to them but only using local commits and remote refs). However on repos like odoo/documentation the reference and development branches are collocated; the lack of pruning thus keeps every development branch alive locally, even years after the branch has been deleted in the repository. By pruning remote-tracking refs before GC-ing, we should have cleaner local clones, and better packing.
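In plain git terms the maintenance change boils down to something like this (a sketch using subprocess rather than runbot's own git wrapper):

```python
import subprocess


def weekly_maintenance(repo_path):
    # drop remote-tracking refs whose upstream branch no longer exists...
    subprocess.run(['git', 'remote', 'prune', 'origin'], cwd=repo_path, check=True)
    # ...so the subsequent GC can actually discard and repack the dead objects
    subprocess.run(['git', 'gc'], cwd=repo_path, check=True)
```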
twas heehee in the moment but it's not really a long term hoohoo haahaa.
Detailed statuses are useful in the actual PR dashboard as they allow direct access to the builds; however, in the PR, where it's only a picture, they're useless, so fold that information. Also fold it when a PR is staged, and while at it add a note / sub-title that the PR is staged. Fixes #919
Because the first operation the notification task performs is updating the commit, this has to be flushed in order to read back the commit hash (probably fixed in a later version). This means that if updating the flag triggers a serialization failure (which happens and is normal), we transition to the `exception` handler, which *tries to retrieve the sha again*, and we get an "in failed transaction" error on top of the current "serialization failure", leading to a giant traceback. Also an unhelpful one, since a serialization failure is expected in this cron. So:

- Move the reads with no write dependency out of the `try`: there's no reason for them to fail, and notably memoize the `sha`.
- Split the handling of serialization failures out of the normal one, and just log an `info` (we could actually log nothing, probably).
- Also set the precommit data just before the commit, in case staging tracking is ever a thing which could use it.

Fixes #909
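A rough sketch of the resulting shape, with illustrative names (not the actual cron code):

```python
import logging

from psycopg2 import errors as pg_errors

_logger = logging.getLogger(__name__)


def notify_statuses(env, commit):
    # reads with no write dependency stay outside the try, and the sha is memoized
    sha = commit.sha
    try:
        commit.to_check = False
        # ... build and send the statuses for `sha` ...
        env.cr.commit()
    except pg_errors.SerializationFailure:
        # expected under concurrent updates: the cron will simply pick it up again
        _logger.info("serialization failure while notifying %s, will retry", sha)
    except Exception:
        _logger.exception("failed to send statuses for %s", sha)
```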
The batch "prepare fill missing" won't work if the repo is single-version, in a foreign bundle. This commit simply adds the branches depending on their version instead of the bundle version.
Breakages:

- the entire http.py API was updated, requiring fixing the uses of `request.jsonrequest` and the patches to `WebRequest` to hook in sentry
- `fontawesome` was moved
- `*[@groups]` are now completely removed from the view if not matching, so any field inside of them which needs to be used outside (e.g. attrs) has to be added as invisible outside the element
- discuss removed the mail tracking value helpers from RPC in odoo/odoo#88547, so reimplement locally (and better)
bundle_id is not computed anymore
When the reference build for some build is not done yet a message is logged with a link to the referenced build, however due to a typo in the message the markdown was badly formatted.
- Add dropdown-toggle to the github tool button so that it doesn't appear as if a button should be following it.
- Add default btn classes to category icons; the button would otherwise be displayed as just an icon.
- Add a gap between the two toolbars.
- Add d-empty-none in case we have no category to display, to remove the useless gap.
Allows searching for a branch through a PR URL or the full branch name. Also fixes a crash when searching invalid values.
While it's useful to have a clickable link to the fixing pull request on the runbot error list, we cannot display the entire URL as it takes too much space, so a static `View PR` text is used. As this static text is of limited use, this commit adds a new widget that displays a shortened URL for GitHub pull requests, only showing the repo and the PR number.
Supports the format (?owner)/repo#id in the `_search_dname` method.
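One plausible way to parse that form (the regex is an illustration, not the actual `_search_dname` implementation):

```python
import re

# optional "owner/" prefix, then a repo name and a PR number
PR_REF = re.compile(r'^(?:(?P<owner>[\w.-]+)/)?(?P<repo>[\w.-]+)#(?P<number>\d+)$')

match = PR_REF.match('odoo/runbot#911')
if match:
    owner, repo, number = match['owner'], match['repo'], int(match['number'])
```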
`record._cr` is being deprecated in odoo/odoo#193636; this commit replaces the old `._cr` with `env.cr`.
Exceptions and their follow-ups are managed by their managers; this will prevent having to create a custom filter to get our own exceptions.
The current qualifying implementation is based solely on error content. With this commit, the fields on which the regular expression will apply can be chosen with checkboxes. It defaults to the `content` field.
Removes auto tags from a build if the build's branches match the build error's fixing branch.
fix typos and grammatical mistakes
fix URL to change the master password.
The row_number window function is quite unoptimized as it requires reading the whole table. Using a lateral join and/or DISTINCT makes better use of existing indexes.
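For illustration, the kind of rewrite this refers to, with invented table and column names (not the actual runbot query): fetch the latest build per bundle with a LATERAL subquery instead of `row_number()` over the whole table.

```python
def latest_build_per_bundle(cr):
    # the LATERAL subquery can walk an index on (bundle_id, create_date) once per
    # bundle instead of numbering every row of the build table
    cr.execute("""
        SELECT b.id AS bundle_id, last.id AS build_id
          FROM runbot_bundle b
          JOIN LATERAL (
                   SELECT id
                     FROM runbot_build
                    WHERE bundle_id = b.id
                 ORDER BY create_date DESC
                    LIMIT 1
               ) AS last ON TRUE
    """)
    return cr.fetchall()
```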
Looks like `bu.database_ids[1:].sorted('name')` was breaking the prefetch set, leading to one query per build. Fixing it by sorting without slicing and ignoring the first record. ~230 ms improvement over 1.7 seconds.
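Roughly the before/after, as a sketch (field names per the description; the exact fix may differ):

```python
def secondary_databases(build):
    # before: build.database_ids[1:].sorted('name') sliced first, which shrinks the
    # prefetch set and forces one query per build
    # after: sort the full recordset so prefetching stays intact, then skip a record
    return build.database_ids.sorted('name')[1:]
```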
The corresponding fields are stored on the build, no need to read the params.
This allows configuring a custom trigger to use the base commits. This can be useful to test a config on a branch that has some changes, to ensure it works properly.
- Fix the model on the parse_logs action window.
- Add a server action on build to open its errors.
- Fix a bad indentation.
This commit adds a `common_qualifiers` field on build error. Its purpose is mainly to find similarly qualified errors and similarly qualified error contents. This field is computed by finding the qualifiers common to all the qualifiers of the error contents linked to the error. A new `unique_qualifiers` field is also added for the same kind of purpose; it is computed by finding the non-contradictory qualifiers of the linked error contents. The fields can be used in 4 tabs added on the build error form.
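An illustrative helper for the two computations described above (plain Python, not the actual field definitions): common qualifiers carry the same value in every linked content, while unique (non-contradictory) qualifiers keep every key that never appears with two different values.

```python
def common_qualifiers(contents):
    """contents: list of {qualifier: value} dicts, one per error content."""
    if not contents:
        return {}
    common = dict(contents[0])
    for qualifiers in contents[1:]:
        common = {k: v for k, v in common.items() if qualifiers.get(k) == v}
    return common


def unique_qualifiers(contents):
    merged, contradictory = {}, set()
    for qualifiers in contents:
        for key, value in qualifiers.items():
            if key in merged and merged[key] != value:
                contradictory.add(key)
            merged.setdefault(key, value)
    return {k: v for k, v in merged.items() if k not in contradictory}
```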
c46b63e to 3ce4ca4 (Compare)
3ce4ca4 to 7f2859f (Compare)