63 changes: 63 additions & 0 deletions _gsocproposals/2026/proposal_BioDynamo.md
@@ -0,0 +1,63 @@
---
title: BioDynaMo Large-Scale Antimatter Simulation
layout: gsoc_proposal
project: BioDynamo
year: 2026
difficulty: medium
duration: 350
mentor_avail: June-October
organization:
- CompRes
project_mentors:
- email: vvasilev@cern.ch
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: lukas.johannes.breitwieser@cern.ch
first_name: Lukas
last_name: Breitwieser
organization: CERN
---

## Description

This project will deliver a self-contained BioDynaMo module and research prototype enabling validated, reproducible simulations of charged antiparticle ensembles in Penning-trap-like geometries at scales beyond existing demonstrations. It generalizes prior BioDynaMo Penning-trap work into a reusable, documented, and scalable module suitable for antimatter-motivated studies and other charged-particle systems.

The student will extend BioDynaMo with a focused set of features (pluginized force models, neighbor search tuned for charged particles, elastic runtime hooks, and analysis/visualization pipelines), validate the models on canonical test cases (single-particle motion, small plasma modes), and demonstrate scaling and scientific workflows up to the largest feasible size within available resources. BioDynaMo already provides an agent/plugin API, parallel execution (OpenMP), and visualization hooks (ParaView/VTK). A prior intern report demonstrates a Penning-trap proof of concept and identifies directions for extension (custom forces, multi-scale runs, hierarchical models, CI, containerization) [[1]](https://repository.cern/records/7capf-rqp49).
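
For orientation, a minimal stand-alone sketch of the kind of force model the plugin would encapsulate, assuming an ideal Penning trap (uniform axial magnetic field plus a quadrupole electric field). The struct name and its interface are illustrative, not BioDynaMo's actual force-plugin API:

```cpp
#include <array>

// Illustrative force functor for one particle of charge q in an ideal
// Penning trap: B = (0, 0, B0) and E = V0/(2 d^2) * (x, y, -2z), the field
// of the quadrupole potential V0 (2z^2 - x^2 - y^2) / (4 d^2).
struct PenningTrapForce {
  double q;   // particle charge [C]
  double B0;  // axial magnetic field [T]
  double V0;  // trap potential [V]
  double d2;  // squared characteristic trap dimension [m^2]

  // Lorentz force F = q (E + v x B).
  std::array<double, 3> operator()(const std::array<double, 3>& r,
                                   const std::array<double, 3>& v) const {
    const double k = V0 / (2.0 * d2);
    // v x B with B along z is (v_y B0, -v_x B0, 0).
    return {q * (k * r[0] + v[1] * B0),
            q * (k * r[1] - v[0] * B0),
            q * (-2.0 * k * r[2])};
  }
};
```

In the actual module this computation would be vectorized over an SoA layout and combined with the pairwise Coulomb term supplied by the neighbor search.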

## Engineering Goals
* Implement a BioDynaMo plugin module (“AntimatterKernel”) optimized for charged-particle workloads, including SoA-compatible data layouts, spatial decomposition, and an efficient neighbor search.
* Enable elastic and reproducible execution via containerized workflows and runtime configuration for local, HPC, or cloud environments.
* Provide performance instrumentation and a small, well-documented benchmark suite integrated with BioDynaMo’s tooling.

## Physics/Scientific Goals
* Implement physics components as BioDynaMo plugins: Penning-trap external fields, Coulomb interactions (pairwise with documented extension points for approximations), stochastic annihilation handling, and basic species support.
* Validate against analytic and reference scenarios (single-particle trapping, basic plasma oscillation modes), with clearly stated assumptions and limits; a minimal analytic cross-check is sketched after this list.
* Perform a limited parameter sweep (e.g. density, magnetic field, trap voltage) at increasing scale to explore collective behavior observable within accessible regimes.
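
The single-particle scenario has closed-form eigenfrequencies, which makes it a convenient first validation target: a simulated trajectory's spectrum should reproduce the modified-cyclotron, magnetron, and axial lines. A minimal cross-check with placeholder (not project-mandated) parameter values:

```cpp
#include <cmath>
#include <cstdio>

// Analytic eigenfrequencies of an ideal Penning trap, the standard
// validation target for the single-particle test case.
int main() {
  const double q = 1.602e-19;  // charge [C] (e.g. a positron)
  const double m = 9.109e-31;  // mass [kg]
  const double B = 1.0;        // axial magnetic field [T]
  const double V0 = 10.0;      // trap potential [V]
  const double d2 = 1e-4;      // squared trap dimension [m^2]

  const double wc = q * B / m;                     // free cyclotron frequency
  const double wz = std::sqrt(q * V0 / (m * d2));  // axial frequency
  const double root = std::sqrt(wc * wc - 2.0 * wz * wz);
  const double wplus = 0.5 * (wc + root);          // modified cyclotron
  const double wminus = 0.5 * (wc - root);         // magnetron

  // Useful identities for the test: w+ + w- = wc and w+ * w- = wz^2 / 2.
  std::printf("w_c=%.3e  w_z=%.3e  w_+=%.3e  w_-=%.3e  [rad/s]\n",
              wc, wz, wplus, wminus);
  return 0;
}
```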

## Expected Results
* A BioDynaMo plugin/module implementing charged-particle dynamics suitable for antimatter-motivated simulations.
* A set of validated physics test cases reproducing canonical scenarios, with documented assumptions and limitations.
* A scalable and reproducible simulation workflow, including performance instrumentation and example benchmark configurations.
* Elastic execution artifacts (containers and run scripts) enabling consistent execution across local, HPC, and cloud systems.
* Analysis and visualization pipelines producing scientifically meaningful observables (e.g. density profiles, energy spectra, annihilation maps).
* A public open-source release with documentation and a short technical report or draft publication suitable for a workshop or conference.


## Requirements

* Automatic differentiation
* Parallel programming
* Reasonable expertise in C++ programming

## Links
* [Repo](https://github.com/BioDynaMo/biodynamo)

## AI Policy

AI assistance is allowed for this contribution. The applicant takes full responsibility for all code and results and must disclose AI use for non-routine tasks (algorithm design, architecture, complex problem-solving). Routine tasks (grammar, formatting, style) do not require disclosure.

## How to Apply

In addition to reaching out to the mentors by email, prospective candidates are required to complete [this form](https://forms.gle/AYgrJthYCRmBwwFL8).
52 changes: 52 additions & 0 deletions _gsocproposals/2026/proposal_CartopiaX.md
@@ -0,0 +1,52 @@
---
title: Enhancing a Next-Generation Platform for Computational Cancer Biology
layout: gsoc_proposal
project: BioDynamo
year: 2026
difficulty: medium
duration: 350
mentor_avail: June-October
organization:
- CompRes
project_mentors:
- email: vvasilev@cern.ch
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: lukas.johannes.breitwieser@cern.ch
first_name: Lukas
last_name: Breitwieser
organization: CERN
---

## Description

CartopiaX is an emerging simulation and modeling platform designed to support computational cancer research through large-scale, agent-based biological simulations. The project builds on modern high-performance scientific computing practices and leverages technologies inspired by platforms such as BioDynaMo to model tumor growth, tissue microenvironments, cell-cell interactions, and diffusion of signaling molecules.

CartopiaX aims to provide a flexible research environment that enables computational scientists and domain biologists to collaboratively design, execute, and analyze large-scale biological simulations. The platform combines high-performance C++ simulation kernels with user-friendly interfaces and scripting capabilities to enable rapid experimentation and reproducible research workflows. Currently, CartopiaX provides a performant core simulation engine but still requires improvements in usability, extensibility, and performance portability to support wider adoption in computational oncology and systems biology communities.

This project invites contributors to explore improvements that help integrate, extend, and deploy CartopiaX for real-world research applications. Students are encouraged to propose approaches that enhance developer productivity, accessibility for domain scientists, and computational performance.

## Possible Directions

* Easy integration: A possible direction focuses on improving the usability of CartopiaX by developing more intuitive ways for researchers to configure and run simulations. Currently, simulations rely heavily on static configuration files and parameter definitions. Students may explore designing graphical or web-based interfaces that allow researchers to interactively define experiments, create structured configuration systems using formats such as YAML or JSON (a minimal configuration-parsing sketch follows this list), and develop reusable experiment templates. This direction aims to make CartopiaX more accessible to domain scientists who may not have extensive programming experience, while improving reproducibility and workflow management.

* Flexibility: A second direction involves extending CartopiaX through Python integration to support flexible and rapid scientific experimentation. Many researchers in computational biology prefer Python due to its strong ecosystem for data analysis and prototyping. Students may investigate technologies such as cppyy to enable seamless interaction between the high-performance C++ simulation core and Python. This could allow scientists to define cell behaviors, simulation rules, or analysis pipelines directly in Python while preserving the performance advantages of the C++ backend, and provides opportunities to work on language interoperability and mixed-language scientific workflows.

* HPC: A third direction explores improving the performance and scalability of CartopiaX by identifying and optimizing computational bottlenecks within the simulation engine. Agent-based biological simulations frequently involve expensive processes such as diffusion modeling and large-scale cell-interaction calculations. Students may explore profiling the simulation engine, investigating GPU acceleration strategies for diffusion solvers or other parallelizable components (a minimal diffusion-step kernel is sketched after this list), and developing benchmarking tools to evaluate performance improvements. This direction is particularly suited to students interested in high-performance computing and parallel programming techniques.
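
A minimal sketch of structured, file-driven experiment configuration, assuming the widely used nlohmann/json library; the field names (`grid_size`, `diffusion_coefficient`, `time_steps`) are hypothetical, not CartopiaX's actual schema:

```cpp
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>  // https://github.com/nlohmann/json

int main() {
  // Hypothetical experiment file; every field name here is illustrative.
  std::ifstream in("experiment.json");
  const nlohmann::json cfg = nlohmann::json::parse(in);

  // value(key, default) falls back to a default when the key is absent,
  // which keeps reusable experiment templates short.
  const int grid = cfg.value("grid_size", 128);
  const double diffusion = cfg.value("diffusion_coefficient", 0.1);
  const int steps = cfg.value("time_steps", 1000);

  std::cout << "grid=" << grid << " D=" << diffusion
            << " steps=" << steps << "\n";
  return 0;
}
```

For the HPC direction, a sketch of the kind of kernel that typically dominates profiles: one explicit finite-difference step of 2D diffusion, parallelized with OpenMP. The grid layout and function name are illustrative, not taken from CartopiaX:

```cpp
#include <vector>

// One explicit Euler step of du/dt = D * laplacian(u) on an nx-by-ny
// row-major grid, leaving the boundary untouched. Stencils like this are
// natural first targets for OpenMP tuning or GPU offloading.
void diffusion_step(const std::vector<double>& u, std::vector<double>& out,
                    int nx, int ny, double D, double dt, double h) {
  const double a = D * dt / (h * h);
#pragma omp parallel for collapse(2)
  for (int j = 1; j < ny - 1; ++j) {
    for (int i = 1; i < nx - 1; ++i) {
      const int k = j * nx + i;
      // 5-point Laplacian stencil.
      out[k] = u[k] + a * (u[k - 1] + u[k + 1] + u[k - nx] + u[k + nx]
                           - 4.0 * u[k]);
    }
  }
}
```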

## Requirements

* Requirements vary with the chosen direction and may include graphical or web-based interface design, Python/C++ programming, and familiarity with parallel programming and simulation/modelling.

## Links
* [Repo](https://github.com/compiler-research/CARTopiaX)

## AI Policy

AI assistance is allowed for this contribution. The applicant takes full responsibility for all code and results and must disclose AI use for non-routine tasks (algorithm design, architecture, complex problem-solving). Routine tasks (grammar, formatting, style) do not require disclosure.

## How to Apply

In addition to reaching out to the mentors by email, prospective candidates are required to complete [this form](https://forms.gle/AYgrJthYCRmBwwFL8).
54 changes: 54 additions & 0 deletions _gsocproposals/2026/proposal_Clad-GPU.md
@@ -0,0 +1,54 @@
---
title: Consolidate and advance the GPU infrastructure in Clad
layout: gsoc_proposal
project: Clad
year: 2026
difficulty: medium
duration: 350
mentor_avail: June-October
organization:
- CompRes
project_mentors:
- email: vvasilev@cern.ch
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: david.lange@cern.ch
first_name: David
last_name: Lange
organization: Princeton University
---

## Description

Clad is a Clang-based automatic differentiation (AD) plugin for C++. Over the past years, several efforts have explored GPU support in Clad, including differentiation of CUDA code, partial support for the Thrust API, and prototype integrations with larger applications such as XSBench, LULESH, a tiny raytracer in the Clad repository, and LLM training examples (including work carried out last year). While these efforts demonstrate feasibility, they are fragmented across forks and student branches, are inconsistently tested, and lack reproducible benchmarking.

This project aims to consolidate and strengthen Clad’s GPU infrastructure. The focus is on upstreaming existing work, improving correctness and consistency of CUDA and Thrust support, and integrating Clad with realistic GPU-intensive codebases. A key goal is to establish reliable benchmarks and CI coverage: if current results are already good, they should be documented and validated; if not, the implementation should be optimized further so that Clad is a practical AD solution for real-world GPU applications.
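
For context, a minimal host-side example of the Clad API that the GPU work generalizes to CUDA kernels and Thrust algorithms (the flags for loading the plugin vary by installation):

```cpp
// Minimal host-side reverse-mode usage of Clad; the GPU work extends this
// pattern to CUDA kernels and Thrust algorithms. Built with clang and the
// Clad plugin loaded (exact flags depend on the installation).
#include "clad/Differentiator/Differentiator.h"
#include <cstdio>

double loss(double x, double y) { return x * x * y + y; }

int main() {
  // clad::gradient generates d(loss)/dx and d(loss)/dy at compile time.
  auto grad = clad::gradient(loss);
  double dx = 0.0, dy = 0.0;
  grad.execute(3.0, 4.0, &dx, &dy);
  std::printf("dloss/dx=%g dloss/dy=%g\n", dx, dy);  // expect 24 and 10
  return 0;
}
```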

## Expected Results

* Recover, reproduce, and upstream past Clad+GPU work, including prior student projects and LLM training prototypes.
* Integrate Clad with representative GPU applications such as XSBench, LULESH, and the in-tree tiny raytracer, ensuring correct end-to-end differentiation.
* Establish reproducible benchmarks for these codebases and compare results with other AD tools (e.g. Enzyme) where feasible.
* Reduce reliance on atomic operations, improve accumulation strategies, and add support for additional GPU primitives and CUDA/Thrust features.
* Add unit and integration tests and enable GPU-aware CI to catch correctness and performance regressions.
* Improve user-facing documentation and examples for CUDA and Thrust usage.
* Present intermediate and final results at relevant project meetings and conferences.

## Requirements

* Automatic differentiation
* Parallel/GPU programming
* Reasonable expertise in C++ programming

## Links
* [Repo](https://github.com/vgvassilev/clad)

## AI Policy

AI assistance is allowed for this contribution. The applicant takes full responsibility for all code and results and must disclose AI use for non-routine tasks (algorithm design, architecture, complex problem-solving). Routine tasks (grammar, formatting, style) do not require disclosure.

## How to Apply

In addition to reaching out to the mentors by email, prospective candidates are required to complete [this form](https://forms.gle/AYgrJthYCRmBwwFL8).
54 changes: 54 additions & 0 deletions _gsocproposals/2026/proposal_Clad-Libtorch.md
@@ -0,0 +1,54 @@
---
title: Clad as a first-class gradient engine in LibTorch
layout: gsoc_proposal
project: Clad
year: 2026
difficulty: medium
duration: 350
mentor_avail: June-October
organization:
- CompRes
project_mentors:
- email: vvasilev@cern.ch
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: david.lange@cern.ch
first_name: David
last_name: Lange
organization: Princeton University
---

## Description

This project will design, implement, benchmark, and integrate a proof-of-concept that uses Clad (compiler-based automatic differentiation) as a first-class gradient engine in LibTorch (the C++ API of PyTorch). The goal is to demonstrate how ROOT users can run high-performance, pure-C++ machine-learning training and inference pipelines, without relying on Python. The project will result in a working prototype that integrates Clad-generated backward routines into LibTorch via `torch::autograd::Function` or custom ATen operators.
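
A minimal sketch of that integration strategy, using a scalar toy function for clarity; `CladSquare`, the element-wise loop, and the assumption of a contiguous 1-D double tensor are illustrative simplifications rather than the intended final design:

```cpp
#include <torch/torch.h>
#include "clad/Differentiator/Differentiator.h"

// LibTorch manages tensors and the autograd graph; the backward pass calls
// a Clad-generated derivative instead of PyTorch autograd.
double square(double x) { return x * x; }

struct CladSquare : public torch::autograd::Function<CladSquare> {
  static torch::Tensor forward(torch::autograd::AutogradContext* ctx,
                               torch::Tensor input) {
    ctx->save_for_backward({input});
    return input * input;  // forward uses ordinary LibTorch ops
  }

  static torch::autograd::tensor_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::tensor_list grad_output) {
    auto input = ctx->get_saved_variables()[0];
    // Clad generates d(square)/dx at compile time.
    auto dsquare = clad::differentiate(square, "x");
    auto grad = torch::empty_like(input);
    // Apply the Clad derivative element-wise (naive loop for clarity;
    // assumes a contiguous 1-D kDouble tensor).
    auto in = input.accessor<double, 1>();
    auto out = grad.accessor<double, 1>();
    for (int64_t i = 0; i < input.size(0); ++i)
      out[i] = dsquare.execute(in[i]);
    return {grad_output[0] * grad};
  }
};
```

Calling `CladSquare::apply(x)` on a tensor with `requires_grad` enabled would then route the backward pass through the Clad-generated derivative while LibTorch continues to manage the rest of the autograd graph.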

Recent efforts have extended the ROOT framework with modern machine-learning capabilities. In particular, a ROOT Users Workshop 2025 contribution by Meyer-Conde et al. demonstrates the use of LibTorch directly inside ROOT for gravitational-wave data analysis [[1]](https://indico.cern.ch/event/1505384/contributions/6706597). Their “ROOT+” prototype library augments ROOT with advanced features such as complex tensor arithmetic on CPU/GPU and modern I/O mechanisms (HTTP, Kafka), while relying on LibTorch for ML training and inference. In practice, this enables ROOT to load and execute neural networks (via ONNX or LibTorch) entirely in C++, and to combine them seamlessly with ROOT’s data-processing tools such as RDataFrame and TMVA, all within a single environment.

In parallel, recent work in the Compiler Research community has demonstrated that Clad-generated gradients can match and even outperform PyTorch autograd on CPU when carefully optimized [[2]](https://compiler-research.org/blogs/gsoc25_rohan_final_blog). These results motivate a deeper exploration of compiler-driven automatic differentiation as a backend for machine-learning frameworks. Building on both efforts, this project will culminate in a ROOT integration demo (for example, a simplified gravitational-wave analysis workflow) and a reproducible benchmarking suite comparing Clad-based gradients with PyTorch autograd for realistic HEP and GW workloads.

This project is expected to deliver tangible performance and usability benefits for machine-learning workflows in ROOT. By offloading gradient computation to Clad’s compiler-generated routines, meaningful speedups are expected for CPU-bound training workloads; prior results report speedups over PyTorch autograd on CPU [[2]](https://compiler-research.org/blogs/gsoc25_rohan_final_blog). This makes the approach particularly attractive for offline HEP and gravitational-wave analyses, where CPU efficiency is often a limiting factor. In addition, the project will enable fully native C++ machine-learning workflows in ROOT, allowing users to define, train, and evaluate models without Python dependencies and to integrate ML tightly with existing C++ analysis code, ROOT I/O, and data pipelines. The Clad-enhanced LibTorch backend will naturally complement ROOT’s existing ML ecosystem, including TMVA, SOFIE, ONNX-based inference, and RDataFrame, providing a flexible “best-of-both-worlds” solution that combines modern deep-learning frameworks with ROOT’s mature analysis infrastructure. Beyond the immediate prototype, this work will establish a solid foundation for future research on compiler-driven optimizations such as kernel fusion, reduced memory traffic, and eventual GPU acceleration.

## Expected Results

* Create a small C++ demo in which a simple neural network (e.g. an MLP) is defined, and use Clad to generate its derivative functions. Integrate this with LibTorch by wrapping the Clad-generated gradient code as a custom `torch::autograd::Function` or operator. This follows the strategy outlined in the Clad-PyTorch project. The result is a model that uses LibTorch tensors for the forward pass and Clad’s generated code for the backward pass.
* Measure training (forward + backward) performance on CPU for representative tasks (e.g. MNIST or a simple GW signal classification). Compare Clad-derived gradients with PyTorch autograd, focusing on performance: optimize memory layout and avoid dynamic allocations to maximize throughput.
* Integrate the working prototype into the ROOT framework. For example, incorporate it into a ROOT macro or plugin so that C++ ML code can run under root.exe or in PyROOT. Provide examples using ROOT’s data structures (TTrees, RDataFrame) feeding into the Clad-powered model (see the sketch below). Investigate loading pretrained models (via ONNX or TorchScript) and whether Clad can backpropagate through them.
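
A sketch of what feeding ROOT data structures into the model could look like, assuming placeholder tree and branch names (`Events`, `amplitude`):

```cpp
#include <ROOT/RDataFrame.hxx>
#include <torch/torch.h>

// Read a column with RDataFrame and wrap it as a LibTorch tensor; tree and
// branch names are placeholders, not a prescribed data format.
int main() {
  ROOT::RDataFrame df("Events", "data.root");
  auto values = df.Take<float>("amplitude");  // materializes the column

  // from_blob does not copy; clone() detaches the tensor from ROOT's buffer.
  auto t = torch::from_blob(values->data(),
                            {static_cast<int64_t>(values->size()), 1},
                            torch::kFloat32)
               .clone();
  // t can now be passed through the model's forward pass.
  return 0;
}
```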

## Requirements

* Automatic differentiation
* Parallel programming
* C++ programming
* Experience with LibTorch is a plus

## Links
* [Repo](https://github.com/vgvassilev/clad)

## AI Policy

AI assistance is allowed for this contribution. The applicant takes full responsibility for all code and results and must disclose AI use for non-routine tasks (algorithm design, architecture, complex problem-solving). Routine tasks (grammar, formatting, style) do not require disclosure.

## How to Apply

In addition to reaching out to the mentors by email, prospective candidates are required to complete [this form](https://forms.gle/AYgrJthYCRmBwwFL8).
52 changes: 52 additions & 0 deletions _gsocproposals/2026/proposal_Clad-OpenMP.md
@@ -0,0 +1,52 @@
---
title: Enable automatic differentiation of OpenMP programs with Clad
layout: gsoc_proposal
project: Clad
year: 2026
difficulty: medium
duration: 350
mentor_avail: June-October
organization:
- CompRes
project_mentors:
- email: vvasilev@cern.ch
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: david.lange@cern.ch
first_name: David
last_name: Lange
organization: Princeton University
---

## Description

Clad is an automatic differentiation (AD) Clang plugin for C++. Given the C++ source code of a mathematical function, it can automatically generate C++ code that computes the function's derivatives. Clad is useful in powering statistical analysis and uncertainty-assessment applications. OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran.

This project aims to develop infrastructure in Clad to support the differentiation of programs that contain OpenMP primitives.
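
As a concrete example of the target, consider a reduction-based kernel; the `clad::gradient` call in the trailing comment shows intended usage once OpenMP support lands, not current Clad behavior:

```cpp
#include <omp.h>

// The kind of OpenMP code this project targets. The reduction over `sum`
// means the reverse pass must accumulate the adjoints of x and y without
// data races; here each iteration writes distinct d_x[i] and d_y[i], so the
// reversed loop parallelizes cleanly, while primitives such as `critical`
// sections or shared writes need a more careful handling strategy.
double dot(const double* x, const double* y, int n) {
  double sum = 0.0;
#pragma omp parallel for reduction(+ : sum)
  for (int i = 0; i < n; ++i)
    sum += x[i] * y[i];
  return sum;
}

// Intended usage once OpenMP support lands (hypothetical today):
//   auto grad = clad::gradient(dot, "x, y");
//   grad.execute(x, y, n, d_x, d_y);
```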

## Expected Results

* Extend Clad's pragma-handling support.
* List the most commonly used OpenMP concurrency primitives and prepare a plan for how they should be handled in both forward and reverse accumulation in Clad.
* Add support for concurrency primitives in Clad’s forward and reverse mode automatic differentiation.
* Add proper tests and documentation.
* Present the work at the relevant meetings and conferences.

## Requirements

* Automatic differentiation
* Parallel programming
* Reasonable expertise in C++ programming

## Links
* [Repo](https://github.com/vgvassilev/clad)

## AI Policy

AI assistance is allowed for this contribution. The applicant takes full responsibility for all code and results and must disclose AI use for non-routine tasks (algorithm design, architecture, complex problem-solving). Routine tasks (grammar, formatting, style) do not require disclosure.

## How to Apply

In addition to reaching out to the mentors by email, prospective candidates are required to complete [this form](https://forms.gle/AYgrJthYCRmBwwFL8).
2 changes: 2 additions & 0 deletions _gsocproposals/2026/proposal_Clad-STLConcurrency.md
@@ -13,9 +13,11 @@ project_mentors:
first_name: Vassil
last_name: Vassilev
is_preferred_contact: yes
organization: Princeton University
- email: david.lange@cern.ch
first_name: David
last_name: Lange
organization: Princeton University
---

## Description