Restore Type Stability #122
Conversation
SSMProblems.jl/SSMProblems documentation for PR #122 is available at:
Codecov Report

❌ Patch coverage is […]. Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #122      +/-   ##
==========================================
- Coverage   66.30%   65.26%   -1.05%
==========================================
  Files          29       30       +1
  Lines        1211     1212       +1
==========================================
- Hits          803      791      -12
- Misses        408      421      +13
```
SSMProblems.jl/GeneralisedFilters documentation for PR #122 is available at:
FINALLY! Things are looking good. I have been running the variational filter with Float32s and all is now type stable.

The […]. @THargreaves I would definitely appreciate some feedback. Also wondering how we could better treat the […].
charlesknipp
left a comment
I've provided a little clarity on my design choices here. Just let me know if there's anything else that jumps out.
```julia
for p in particles
    p.log_w -= LSE_after
end
```
My only issue with this entire PR is this single line of code. If we could get rid of this, we would no longer have to rely on the mutation of `Particle`s and `ParticleDistribution`s.
I might be on the completely wrong page, but what's stopping us from just constructing new particles using the old weights minus `LSE_after`, rather than mutating?
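For illustration, a minimal non-mutating sketch of this suggestion. The `Particle` type and its fields here are assumptions for the sketch, not the PR's actual definitions:

```julia
# Hypothetical immutable particle type; field names are assumed, not the
# PR's actual definitions.
struct Particle{T,W<:Real}
    state::T
    log_w::W
end

# Build fresh particles from the old weights instead of mutating in place.
normalise(particles, LSE_after) =
    [Particle(p.state, p.log_w - LSE_after) for p in particles]
```

Since the particles are rebuilt rather than mutated, the struct can stay immutable, which is usually friendlier to type inference.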
Just some fixes I noticed when going through my review
THargreaves
left a comment
This is a really nice PR. Thank you so much for taking the time to go through and make all these changes. Looks like it was a bit of a slog but the result looks great.
I don't have any real comments, just a few points of confusion that you can probably clear up.
```julia
    LSE before adding observation weights. For APF with resampling, it includes first-stage
    correction terms computed during the APF resampling step.
    """
function marginalise!(state::ParticleDistribution)
```
Is there a reason for passing in both `state` and `particles`? Can we not just use `particles = state.particles`?
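A sketch of that simplification, using an assumed, stripped-down stand-in for `ParticleDistribution` (the real type has more fields) and a hypothetical function name:

```julia
# Simplified stand-in for the real ParticleDistribution type.
struct ParticleDistribution{P}
    particles::Vector{P}
end

# Single-argument version: pull the particle vector from the distribution
# itself instead of threading it through as a second argument.
function total_log_weight(state::ParticleDistribution)
    particles = state.particles
    return sum(p -> p.log_w, particles)
end
```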
I've had to update the trend inflation example to comply with the new […]. I've had to write

```julia
local_level_model = StateSpaceModel(
    GF.HomogeneousGaussianPrior(zeros(T, 1), PDMat([100.0;;])),
    LocalLevelTrend(),
    OutlierAdjustedObservation(),
)
```

even though I'd really like something like

```julia
local_level_model = StateSpaceModel(
    GF.HomogeneousGaussianPrior(zeros(T, 1), ScalMat(1, 100.0)),
    LocalLevelTrend(),
    OutlierAdjustedObservation(),
)
```

The reason for this is that if we currently set the prior to be a […]

I'm hoping that this PR will fix this behaviour so we can use the second code snippet.
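For context, a sketch of the relationship between the two covariance types from the snippets above (assuming they are PDMats.jl's `PDMat` and `ScalMat`): both represent the same 1×1 covariance, but `ScalMat` stores only the dimension and a scalar, making it the lighter-weight choice:

```julia
using PDMats

# Both represent a 1×1 covariance with variance 100; ScalMat avoids
# allocating a dense matrix.
dense  = PDMat([100.0;;])
scaled = ScalMat(1, 100.0)

Matrix(dense) == Matrix(scaled)
```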
@THargreaves I need you to take a look at the […].

@yebai I added a very basic unit test using JET. What I have is very naive and should be iterated on. I have very little experience with JET myself, so if you don't mind pointing me in the right direction (e.g. other repos with JET unit tests), I can swiftly iterate on this.

With those two items out of the way, I think we can safely merge this PR.
Mooncake uses JET fairly comprehensively to test type stability and allocation; see here.
The two main useful JET tests are […]. The docs contain examples. Let me know if anything is unclear.
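For reference, JET's two exported test macros are `@test_call` (error analysis) and `@test_opt` (optimization/type-stability analysis). A minimal sketch with a hypothetical stand-in function, not the repo's actual filtering call:

```julia
using JET, Test

# Hypothetical stand-in for a filtering step; replace with the real call.
stable_step(w::Float32, logp::Float32) = w + logp

@testset "type stability" begin
    @test_opt stable_step(0.0f0, -1.0f0)   # fails if dynamic dispatch remains
    @test_call stable_step(0.0f0, -1.0f0)  # fails on detected runtime errors
end
```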
You just had the […]. Agree, the definition of […].
Type stability is a very non-trivial problem, since filtering is recursive and the weights aren't initialized until the first step of the algorithm.

To get around this, I define a typeless initializer for unweighted particles that promotes to the correct type when weights are eventually computed in step 1: I initialize particles with a weight of `false` (this can be made more generic) using a basic implementation of `UnweightedParticles`. I use a Boolean since its type is generic enough to afford a single type promotion to encourage the correct types.
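The promotion trick can be seen in plain Julia: `false` acts as a "typeless" zero, so the first real weight computation fixes the element type without the initializer ever committing to one:

```julia
# `false` is a Bool zero: arithmetic promotes it to the other operand's
# type, so the weight type is fixed at the first actual computation.
log_w0 = false              # initial log-weight, type Bool
w32 = log_w0 + 1.5f0        # promotes to Float32
w64 = log_w0 + 1.5          # promotes to Float64
```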
**Auxiliary Filter**

The APF was somewhat of a challenge to get right. The code not only relied on updating particle weights, but required a sequence of marginalizing constants to keep canonical likelihoods. To remedy its shortcomings, I created a custom resampler, `AuxiliaryResampler`, which handles all of the code previously in `particles.jl` in a type-stable fashion. This was made possible by a cacophony of changes to the resampling behavior.

NOTE: I'm still a little shaky on the log-likelihood computation here (see here). Intuitively it makes sense, but it seems like way more `logsumexp`s than should be necessary. In particles it seems to be far less work to get the likelihoods.
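For comparison, the textbook (generic, not this PR's) form of the per-step likelihood increment for a particle filter uses two `logsumexp`s: one over the pre-update log-weights and one after adding the observation log-weights:

```julia
# Numerically stable logsumexp.
logsumexp(x) = (m = maximum(x); m + log(sum(exp.(x .- m))))

# Generic per-step likelihood increment:
#   ℓ_t = logsumexp(log_w .+ log_obs) - logsumexp(log_w)
step_loglik(log_w, log_obs) =
    logsumexp(log_w .+ log_obs) - logsumexp(log_w)
```

Anything beyond these two calls per step (e.g. extra `logsumexp`s for APF first-stage corrections) is overhead specific to how the auxiliary weights are tracked.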
**Resampling**

As mentioned, the resampling had quite a makeover. Instead of completely redefining the `resample` family of functions, I pass the combined weights (defaulting to `get_weights(state)`) and the auxiliary weights as a keyword argument. This intelligently dispatches to the correct method without the invasive overwriting I had in previous drafts. It should also be much more efficient than previous drafts, since we only need to `softmax` the weights once.
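A hedged sketch of that design (all names here are assumptions, not the PR's actual API): a single `resample` that takes precomputed weights as a keyword with a sensible default, so APF-style callers supply combined/auxiliary weights explicitly instead of a new method being defined:

```julia
using Random

softmax(x) = (e = exp.(x .- maximum(x)); e ./ sum(e))

# Multinomial resampling sketch; `weights` defaults to the softmaxed
# log-weights, and APF-style callers pass adjusted weights via the keyword.
function resample(rng::AbstractRNG, log_w::Vector{<:Real};
                  weights = softmax(log_w))
    c = cumsum(weights)
    return [searchsortedfirst(c, rand(rng)) for _ in eachindex(log_w)]
end
```

Because the default is computed once at the call boundary, the normalization (`softmax`) is not repeated inside the resampling routine.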