Updating Benchmark Sets #1281
base: master
Conversation
Signed-off-by: AdityaPandeyCN <[email protected]>
It's going to need to wait for the registration to finish: JuliaRegistries/General#133534, JuliaRegistries/General#133535. Also, could you add the new PyCMA one which just merged around the same time? SciML/Optimization.jl#933 |
Sure |
Signed-off-by: AdityaPandeyCN <[email protected]>
@mxpoch it looks like PyCMA is giving a segfault and having some IO issues as part of the benchmark. Does PyCMA have to do IO in order to give the results? Seems like it could all just be in memory? |
Hi @ChrisRackauckas, do you think we should run the Python-based optimizers sequentially to avoid the errors? This may be happening because of multithreaded Python calls from Julia. |
@ChrisRackauckas Yep, the default behaviour of PyCMA is to store logs in an 'outcmaes' folder. If you pass verb_log=0 as an argument it shouldn't do any writes. |
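For illustration, here is a minimal Python-side sketch of turning off that logging, assuming the pycma package's cma.fmin2 interface; the sphere objective, starting point, and step size are made up for the example and are not taken from the benchmark:

```python
import cma  # pycma

# Toy objective used only for this sketch; the benchmark has its own problems.
def sphere(x):
    return sum(xi * xi for xi in x)

# verb_log=0 disables CMA-ES's on-disk data logging (no 'outcmaes' folder),
# so the run keeps everything in memory.
xbest, es = cma.fmin2(
    sphere,
    [0.5, 0.5, 0.5, 0.5],   # initial point
    0.3,                    # initial step size sigma0
    options={"verb_log": 0},
)
print(xbest)
```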
Signed-off-by: AdityaPandeyCN <[email protected]>
I have made changes to run the SciPy optimizers sequentially. I tried to run this on my local machine, but it became overwhelming for my machine. |
That seems like it would be over the limit? Is the maxiters not being passed on? |
Are we referring to the run_length here? If so, then yes, it seems like the SciPy optimizers are ignoring it and we have to set an explicit maxiters here. |
It looks like it is handled, though: https://github.com/SciML/Optimization.jl/blob/master/lib/OptimizationSciPy/src/OptimizationSciPy.jl#L331. It might be best to try and just isolate one problem. |
I think I have found the root cause in my ScipyBasinhopping wrapper (the inner optimiser runs unbounded). I've dropped Basinhopping from the current benchmark and am running this on my machine; if this works well I will push it and fix the wrapper code. |
oh nice! Does the nested optimizer need something like inner vs outer iterations? |
Basinhopping counts hops with niter (the outer loop), then calls a local minimizer (L-BFGS-B by default) that has its own gradient iterations. We already set niter, but we also need to pass something like minimizer_kwargs[:options]["maxiter"] = … so each inner L-BFGS-B run is limited; otherwise a single hop can spin for billions of iterations. |
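For reference, a sketch of what capping both loops looks like when calling SciPy's basinhopping directly from Python; the Rastrigin objective and the niter/maxiter values are illustrative only, and the Julia wrapper would need to forward the equivalent minimizer_kwargs:

```python
import numpy as np
from scipy.optimize import basinhopping

# Toy multimodal objective; the benchmark uses its own problem set.
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = basinhopping(
    rastrigin,
    x0=np.zeros(2),
    niter=100,  # outer loop: number of basin hops
    minimizer_kwargs={
        "method": "L-BFGS-B",
        # inner loop: cap each local L-BFGS-B run so a single hop
        # cannot spin through an unbounded number of iterations
        "options": {"maxiter": 100},
    },
)
print(result.x, result.fun)
```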
Signed-off-by: AdityaPandeyCN <[email protected]>
Hello @ChrisRackauckas, I commented out two of the global optimizers: ScipyBasinhopping (the reason we are discussing here) and ScipyDualAnnealing (this one was really slow on my computer). I ran the benchmark and this is the result: https://github.com/AdityaPandeyCN/juliabenchmark/blob/main/GlobalOptimization/blackbox_global_optimizers.md Please have a look. Also note that the DualAnnealing one was really slow but didn't crash, so I commented it out... it ran till here |
Checklist
- The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
Additional context
This PR adds the recently implemented SciPy global optimization algorithms from OptimizationSciPy.jl (#927) to the black-box global optimizer benchmarks.