Add initial version of OptimizationIpopt #915
Conversation
This package directly uses the C interface in Ipopt.jl. The implementation is based on OptimizationMOI, but it also adds Ipopt-specific elements, such as the callback handling.

Co-authored-by: Vaibhav Dixit <[email protected]>
Co-authored-by: Valentin Kaisermayer <[email protected]>
Co-authored-by: Fredrik Bagge Carlson <[email protected]>
Co-authored-by: Oscar Dowson <[email protected]>
…mples

- Add tests inspired by Ipopt C++ examples (recursive NLP, MyNLP, Luksan-Vlcek problems)
- Add tests for various optimization problem types:
  * Optimal control problems
  * Portfolio optimization
  * Geometric programming
  * Parameter estimation/curve fitting
  * Network flow problems
  * Robust optimization
- Add tests for advanced Ipopt features:
  * Custom tolerances and convergence criteria
  * Different linear solvers and scaling options
  * Barrier parameter (mu) strategies
  * Fixed variable handling
  * Derivative testing
- Add tests for different Hessian approximation methods (BFGS, SR1)
- Test warm start capabilities
- Add stress tests for high-dimensional and highly nonlinear problems
- Update OptimizationIpopt to support passing arbitrary Ipopt options via kwargs
- Use correct wrapper parameter names (verbose, maxiters) instead of Ipopt internals
- Add documentation for the test suite

This significantly improves test coverage and ensures the wrapper properly handles various problem types and Ipopt-specific features.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
46700a9 to 3fe089d
@ChrisRackauckas the tests are passing now: https://github.com/SciML/Optimization.jl/actions/runs/17166591928/job/48708247846?pr=915 I think this should be ready, let me know what you think.
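The wrapper-level keyword handling mentioned in the commit message (`verbose`, `maxiters`, arbitrary Ipopt options via kwargs) might be exercised roughly like the following sketch. This is a hypothetical usage example based only on the PR description; the solver type name `IpoptOptimizer` and the exact keyword spellings are assumptions, not confirmed API.

```julia
# Hypothetical usage sketch (type and keyword names assumed from the PR text)
using Optimization, OptimizationIpopt

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])

# Common wrapper arguments plus an arbitrary Ipopt option passed through as a kwarg
sol = solve(prob, IpoptOptimizer(); maxiters = 1000, verbose = false,
    hessian_approximation = "limited-memory")
```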
```julia
    return inds
end

function eval_hessian_lagrangian(cache::IpoptCache{T},
```
this seems like the wrong spot to go into this detail here... but can do it for now
```julia
u, z_L, z_U = zeros(n), zeros(n), zeros(n)
g, lambda = zeros(m), zeros(m)
```
would be good to make these cached in the future
If we cache them though, we would not be able to record the evolution. From what I've seen in other optimizers, the internal metrics are not cached, and you can record the evolution instead of needing a specialized callback. I'll have to see what the performance impact is, but I think that if you want to log these, you'd have to allocate new vectors anyway.
Yes, but you don't need to always allocate: it can modify in-place, and then allocate only if someone wants to log.
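The caching pattern discussed above could be sketched like this: keep preallocated buffers in the cache, let the intermediate callback fill them in-place, and copy only when the user actually wants to log the iterates. This is a minimal illustration, not the PR's implementation; the struct and function names are made up, and the comment about `GetIpoptCurrentIterate` assumes Ipopt's C API for querying the current iterate.

```julia
# Sketch only: hypothetical buffer struct and callback helper, not the PR's code.
mutable struct CallbackBuffers{T}
    u::Vector{T}       # primal iterate
    z_L::Vector{T}     # multipliers for lower bounds
    z_U::Vector{T}     # multipliers for upper bounds
    g::Vector{T}       # constraint values
    lambda::Vector{T}  # constraint multipliers
end

CallbackBuffers{T}(n, m) where {T} =
    CallbackBuffers{T}(zeros(T, n), zeros(T, n), zeros(T, n), zeros(T, m), zeros(T, m))

function intermediate_values!(buf::CallbackBuffers, log_iterates::Bool)
    # In the real wrapper, Ipopt would fill buf.u, buf.z_L, etc. in-place here,
    # e.g. via the C API's GetIpoptCurrentIterate (assumption, Ipopt >= 3.14).
    if log_iterates
        # Allocate fresh copies only when the user asked to record the evolution.
        return (u = copy(buf.u), g = copy(buf.g), lambda = copy(buf.lambda))
    end
    return nothing  # hot path: no allocation
end
```

This keeps the hot path allocation-free while still letting a logging callback snapshot the iterate when requested.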
This package directly uses the C interface in Ipopt.jl.
The implementation is based on OptimizationMOI, but it also adds Ipopt specific elements, such as the callback handling.
This PR is an initial draft and needs more work.
The advantage of this approach over OptimizationMOI is that it directly supports the common `maxiters`, `maxtime`, and `reltol` arguments.

I copied a bit of the code for handling symbolic systems, since I'm not sure I understand yet how it works, or why we essentially re-implement SymbolicUtils' codegen.
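Mapping those common arguments onto Ipopt could look roughly like the sketch below. The option names `max_iter`, `max_cpu_time`, and `tol` come from Ipopt's documented option list, and `AddIpoptIntOption`/`AddIpoptNumOption` are the C-API setters exposed by Ipopt.jl; the wrapper function itself is hypothetical.

```julia
# Hypothetical helper: translate common Optimization.jl keywords into Ipopt options.
using Ipopt

function map_common_kwargs!(ipopt_prob;
        maxiters = nothing, maxtime = nothing, reltol = nothing)
    # "max_iter", "max_cpu_time", "tol" are standard Ipopt option names.
    maxiters === nothing ||
        Ipopt.AddIpoptIntOption(ipopt_prob, "max_iter", maxiters)
    maxtime === nothing ||
        Ipopt.AddIpoptNumOption(ipopt_prob, "max_cpu_time", Float64(maxtime))
    reltol === nothing ||
        Ipopt.AddIpoptNumOption(ipopt_prob, "tol", reltol)
    return ipopt_prob
end
```

Going through the C interface directly like this avoids the MOI attribute layer, which is where OptimizationMOI has to route these settings.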
I think that some of the functionality of OptimizationMOI might be common enough that it can be moved to an upstream package to avoid code duplication.
I tried to give credit to the authors of the functionality that I copied over from OptimizationMOI by including them as co-authors on the commit. Let me know if that's okay.
Checklist

- I have read the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
Additional context