Turing.jl newsletter #2498
Turing.jl Newsletter 1 — 28 February 2025

New Turing behaviour

Recently we have been focused on reworking a number of internal data structures in DynamicPPL.jl (this is the package that allows you to define models). We haven't released this yet, but you might be interested to see the changelog on GitHub.

DifferentiationInterface migration

From a developer perspective, we have now fully switched over to DifferentiationInterface.jl for automatic differentiation of models. This work of course wouldn't have been possible without @gdalle's work on DI itself, and also his help with integrating it into DynamicPPL. This also paves the way for a long-standing goal of Turing, which is to expose a set of AD testing utilities that will allow AD package developers to test against a fixed set of models; this will let us formalise the idea of Turing being 'compatible' with a given AD package. (A sketch of how users select an AD backend follows at the end of this update.)

The plan for submodels

We have been discussing for a while how best to fully implement submodels (i.e. to be able to treat submodels like distributions, in the sense that we can sample from them and also condition models on values obtained from them). There is currently a proposal which we've written up on GitHub, and which goes into more depth about what we'd like to see and the underlying syntax. If this is a Turing feature that you use, do feel free to let us know what you think.

Turing.jl is now published (again!)

We recently published a new paper with a high-level overview of Turing.jl's features and implementation. Check it out! We have also published in the conference proceedings of the workshop on Languages for Inference (LAFI), which was held as part of POPL 2025.

Looking for Google Summer of Code students

We are keen to take students for GSoC in 2025! If you are interested in working on a Python/R interface to JuliaBUGS, or in making some improvements to TuringPosteriorDB, do get in touch.
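This sketch isn't from the newsletter itself, but it makes the idea of AD 'compatibility' concrete: users choose an AD backend for gradient-based samplers via an ADTypes object. A minimal, hedged example with a made-up model, assuming a recent Turing version where the sampler constructor takes an `adtype` keyword:

```julia
using Turing
using ADTypes: AutoForwardDiff

# A toy model, purely for illustration.
@model function coinflip(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

model = coinflip(rand(Bool, 20))

# ForwardDiff is the long-standing default; other ADTypes backends
# (e.g. AutoReverseDiff()) can be swapped in via the same keyword,
# provided the corresponding AD package is loaded. With the migration
# described above, these calls go through DifferentiationInterface.
chain = sample(model, NUTS(; adtype=AutoForwardDiff()), 1000)
```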
Turing.jl Newsletter 2 — 14 March 2025

DynamicPPL benchmarking

DynamicPPL.jl now has a set of benchmarks that are run on GitHub Actions! We measure how long it takes to evaluate a small selection of models and also to run AD on them. If you think there are specific models or features that we should add to the benchmarks, please feel free to create an issue and let us know.

Separately, we are planning to merge the benchmarking utilities in TuringBenchmarking.jl into DynamicPPL itself. There might be a little bit of API shake-up as part of this, but it's for the better, as it'll allow the benchmarking code to more easily stay in sync with DynamicPPL, letting us catch performance regressions in PRs. (A sketch of the current benchmarking workflow follows after this update.)

SSMProblems

The SSMProblems.jl and GeneralisedFilters.jl packages have now been merged into a single repository: https://github.com/TuringLang/SSMProblems.jl. This won't affect you if you are using the packages from the Julia General registry, but if you're looking to develop off the main branch you may have to use a different URL, or specify a subdirectory when adding the package (see the Pkg sketch below).

Smaller bits

Other code changes that have been merged:
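As a rough illustration of the workflow that is slated to move into DynamicPPL: at the time of writing, TuringBenchmarking.jl builds a BenchmarkTools suite from a model. A hedged sketch with an illustrative model; the function and keyword names (`make_turing_suite`, `adbackends`) reflect the package docs at the time of writing and may change as part of the merge:

```julia
using Turing, TuringBenchmarking, BenchmarkTools
using ADTypes: AutoForwardDiff, AutoReverseDiff

# An illustrative model to benchmark.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x .~ Normal(m, sqrt(s²))
end

model = gdemo(randn(100))

# Build a BenchmarkTools suite timing model evaluation and AD gradients
# for the chosen backends, then run it.
suite = make_turing_suite(model; adbackends=[AutoForwardDiff(), AutoReverseDiff()])
results = run(suite; verbose=true)
```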
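For completeness, Julia's package manager supports adding a package from a subdirectory of a repository. A minimal sketch; the subdirectory name below is an assumption rather than something stated in the newsletter:

```julia
using Pkg

# Registry installs are unaffected. For development installs from the merged
# repository, pass the repo URL plus a subdirectory. "GeneralisedFilters" is
# an assumed subdirectory name; check the repository for the actual layout.
Pkg.add(url="https://github.com/TuringLang/SSMProblems.jl", subdir="GeneralisedFilters")
```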
Turing.jl Newsletter 3 — 28 March 2025

Turing v0.37

We've now released v0.37 of Turing. This includes a lot of new functionality from DynamicPPL 0.35, including the new (simplified) […]. More generally, it's likely that from now on our releases will involve larger changes, because we are aggregating more changes into a single minor version. We are, however, also committed to providing thorough release notes that will help users and library authors upgrade more easily! Release notes will be available on GitHub, and you can see the notes for Turing 0.37 and DynamicPPL 0.35 here. If you have any trouble upgrading, just drop us a note.

AD backend testing

Right now we test a series of DynamicPPL models with several AD backends. It's rather ad hoc, and we are currently drafting a more formal interface for testing AD backends with Turing models. It's still early days, but if you are an AD package developer and want to know what this means for integration with Turing, get in touch (easiest way: ping me on Slack) 🙂

Unified interface for optimisation algorithms

There's an ongoing discussion about unifying the interface for MAP/MLE point estimates and variational inference (and potentially even MCMC). If you use more than one of these methods and have thoughts on what you'd like from an interface, we'd be very happy to hear from you! (A sketch of the current point-estimation API follows after this update.)
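For readers who haven't used the point-estimation side: a minimal sketch of the existing entry points that the unification discussion covers, using an illustrative model and assuming a recent Turing version that exports `maximum_likelihood` and `maximum_a_posteriori`:

```julia
using Turing

# An illustrative model.
@model function demo(x)
    μ ~ Normal(0, 1)
    x .~ Normal(μ, 1)
end

model = demo(randn(100))

# ML and MAP point estimates; these are the existing entry points that
# the interface-unification discussion would bring under one roof.
mle_est = maximum_likelihood(model)
map_est = maximum_a_posteriori(model)
```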
Turing.jl Newsletter 4 — 11 April 2025

Have you used Turing.jl?

Given that you're reading this, we hope so! We're currently putting together a list of papers and other outputs (e.g. tutorials, presentations, ...) which make use of Turing.jl. We'd love to have more examples; if you have any, please do get in touch (feel free to message me and I can forward it). Thank you!

State of the AD

Over the last few weeks we've been putting together a little project that tabulates the performance of different AD backends on a variety of Turing.jl models, and we're now quite excited to share it: https://turinglang.org/ADTests/ This will hopefully help to answer the perennial question of whether you should stick with good old ForwardDiff, or whether you should try something else. Do note that (as of the time of writing) this table is still in its alpha stage, and there are a lot of details that have yet to be ironed out 🙂 However, suggestions are always welcome!

JuliaBUGS.jl

The BUGS (Bayesian inference Using Gibbs Sampling) language provides a declarative way to specify complex Bayesian statistical models. For years, implementations like WinBUGS, OpenBUGS, and JAGS have been widely used tools for researchers applying these models. JuliaBUGS.jl is a modern implementation of the BUGS language, aiming for full backwards compatibility with standard BUGS models while also offering improved interoperability with the Julia ecosystem. (For details and examples of BUGS syntax, check out the JuliaBUGS documentation.)

A recent experimental update introduces significant performance improvements in JuliaBUGS: instead of relying solely on the previous graph-based approach, JuliaBUGS can now directly generate Julia code to compute the model's log-density. This code generation technique can yield >10x speedups compared to the graph-based method. Currently, this provides the most benefit for models with linear or hierarchical structures; support for state space models is planned for a future update. To use it, run `JuliaBUGS.set_evaluation_mode(your_model, JuliaBUGS.UseGeneratedLogDensityFunction())` after compiling your model (a fuller sketch follows after this update). We would love for you to test out this new functionality! If you have any feedback, please do feel free to open a GitHub issue or discussion.

Even more advanced HMC

Lastly, we have a paper of our own to share on Hamiltonian Monte Carlo methods! We will be looking to integrate these methods into Turing.jl in the future.
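To make the evaluation-mode switch concrete, here is a hedged sketch: a small illustrative model defined with the @bugs macro, compiled, and then switched to the generated log-density function. The model, data, and compile call are illustrative; consult the JuliaBUGS documentation for the authoritative API:

```julia
using JuliaBUGS

# A small illustrative model in BUGS syntax (via the @bugs macro).
model_def = @bugs begin
    for i in 1:N
        y[i] ~ dnorm(mu, tau)
    end
    mu ~ dnorm(0, 1.0e-4)
    tau ~ dgamma(0.001, 0.001)
end

data = (N = 10, y = randn(10))
model = compile(model_def, data)

# Switch from the graph-based evaluator to generated log-density code
# (assuming the call returns the updated model; see the JuliaBUGS docs).
model = JuliaBUGS.set_evaluation_mode(model, JuliaBUGS.UseGeneratedLogDensityFunction())
```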
Hello Turing.jl users!
We (the Turing.jl team) are starting a fortnightly series of updates on what we've been up to and what's in the works. We hope that this will provide you (our users) with some insight into the direction of the Turing ecosystem, and we'd also love for you to chip in with your thoughts if you have any.
You can keep up with this newsletter through any of the following methods:
We might post in other places like Discourse too; this is still in the works.