Currently, we run all of our optimizer routines exactly once, starting at (or just after) half of our `max_examples` budget. This works well for the default `max_examples=100`, but I've noticed increasing use of Hypothesis with very large example budgets (and `target()`) for one-off searches for a particular example, and we can do much better for this workflow. Specifically, we should:
- Better support interleaving of ordinary generation with targeted optimization; e.g. "run an optimize step, then generate that many examples, then the next optimize step, ...". This still aims to spend up to half of our total examples on optimization.
- Pick a smoother heuristic for when to optimize. Initial proposal: maintain the current behaviour for `max_examples < 1000`; for `max_examples >= 1000`, use the incremental mixture instead, starting after 200 examples have been generated by our usual means.
- For long runs, the optimizer may finish before the budget is exhausted. How should we decide whether to start a second full pass? We don't want unproductive re-runs (as may be implicated in #2985, "Pareto-optimizer sometimes causes tests to run much more slowly"), but an additional run must sometimes make sense...
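The proposed when-to-optimize heuristic could be sketched roughly as follows. This is a minimal illustration of the thresholds described above; the function name and signature are hypothetical, not actual Hypothesis internals:

```python
def should_start_optimizing(examples_so_far: int, max_examples: int) -> bool:
    """Decide whether targeted optimization should have begun by now.

    Sketch of the proposed heuristic (illustrative only):
    - small budgets keep the current behaviour, starting at half the budget;
    - large budgets (>= 1000) start much earlier, after 200 examples have
      been generated by ordinary means, and would then interleave optimize
      steps with ordinary generation rather than running one big block.
    """
    if max_examples < 1000:
        # Current behaviour: start at (or just after) half of max_examples.
        return examples_so_far >= max_examples // 2
    # Large budgets: start after 200 ordinarily-generated examples.
    return examples_so_far >= 200
```

The discontinuity at `max_examples == 1000` is deliberate in this sketch, matching the proposal's "maintain current behaviour below 1000" framing; a production version might instead smooth the transition.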