Hello everyone,
I would like to propose a new feature: an interactive voxel annotation layer for manual segmentation directly within Neuroglancer. I am working at Ariadne.ai, and this capability would be a great addition to our platform and, I believe, to the broader Neuroglancer community. I also saw that this feature was mentioned in the following talk: https://www.youtube.com/watch?v=_XgfGcu81AA
I am aware that the contribution guidelines recommend discussing large features before implementation. To better understand the technical challenges, facilitate a more concrete discussion, and familiarize myself with Neuroglancer, I have built a prototype (see https://github.com/briossant/neuroglancer/tree/feature/voxel-annotation). I'm opening this issue now to share the proposed architecture and ask for community feedback.
DEMO video: https://youtu.be/7d_PiqV-h2w
## Proposal
The core idea is a new layer type, `vox`, that allows users to paint voxel values directly onto a volume. My prototype implements the full end-to-end workflow, from UI interaction to data persistence and multi-resolution updates.
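For illustration, such a layer might appear in the viewer state roughly as follows. The exact shape is not finalized; only the `vox` layer type itself comes from the prototype, and every other field here is a hypothetical placeholder:

```typescript
// Hypothetical viewer-state fragment for a `vox` layer. Only the layer
// type discriminator comes from the prototype; the remaining fields are
// illustrative placeholders.
const voxLayerSpec = {
  type: 'vox',
  source: 'zarr://https://example.org/annotations.zarr',
  tool: 'voxelBrush',  // active drawing tool
  brushSize: 5,        // brush radius in voxels
  paintValue: 1,       // label value written by the brush
};
```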
Here is a breakdown of the proposed architecture:
### 1. Frontend Components (`layer/vox/`, `ui/voxel_annotations.ts`)
- `VoxUserLayer`: A new `UserLayer` subclass that orchestrates the feature.
- Drawing Tools (`VoxelBrushLegacyTool`, etc.): These tools capture mouse events. On mousedown/drag, they calculate the target voxel coordinates based on the current view and `MouseSelectionState`.
- Frontend `VoxelEditController`: The tools call methods on this controller (e.g., `paintBrushWithShape`). This frontend controller is responsible for:
  - Optimistic Local Preview: It immediately applies the edit to the in-memory `VolumeChunk` data (`source.applyLocalEdits`). This provides instant visual feedback to the user without waiting for the backend. The modified chunks are marked for GPU re-upload.
  - Backend Communication: It commits the edits to its backend counterpart via an RPC (`commitEdits`). A simplified sketch of this path follows below.
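Here is a minimal sketch of the frontend edit path. The method names (`paintBrushWithShape`, `applyLocalEdits`, `commitEdits`) follow the prototype; the `VoxelEdit` shape and the RPC wrapper are simplified stand-ins, not the prototype's actual types:

```typescript
// Simplified sketch of the frontend edit path.
interface VoxelEdit {
  chunkKey: string;        // key of the affected VolumeChunk
  voxelIndices: number[];  // flat voxel indices within that chunk
  newValue: number;        // painted value
}

class VoxelEditControllerFrontend {
  constructor(
    private source: { applyLocalEdits(edits: VoxelEdit[]): void },
    private rpc: { invoke(method: string, args: unknown): void },
  ) {}

  paintBrushWithShape(center: Float32Array, radius: number, value: number) {
    // 1. Rasterize the brush footprint into per-chunk voxel edits.
    const edits = this.computeAffectedVoxels(center, radius, value);
    // 2. Optimistic local preview: patch the in-memory chunks and mark
    //    them for GPU re-upload so the viewer redraws immediately.
    this.source.applyLocalEdits(edits);
    // 3. Commit to the authoritative backend controller over RPC.
    this.rpc.invoke('commitEdits', { edits });
  }

  private computeAffectedVoxels(
    center: Float32Array,
    radius: number,
    value: number,
  ): VoxelEdit[] {
    // Chunk/voxel math elided; it depends on the layer's coordinate transform.
    return [];
  }
}
```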
### 2. Backend Controller (`voxel_annotation/edit_backend.ts`)
- `VoxelEditController` (Backend): A `SharedObject` on the web worker that serves as the authoritative state manager.
- Edit Batching: It receives edits, debounces them, and aggregates them into a single transaction to minimize write operations (a rough sketch follows after this list).
- Data Persistence: It calls a new `applyEdits` method on the appropriate `VolumeChunkSource` backend. This method handles the actual writing to the data store (e.g., Zarr, ...). It returns the original voxel values that were overwritten.
- Undo/Redo History: Upon a successful write, it stores the returned `VoxelChange` (containing indices, old values, and new values) on an `undoStack`. It also manages the `redoStack`.
- Downsampling Orchestration: After a successful commit, it adds the key of the modified chunk to a downsampling queue. After each downsampling level completes, an RPC (`callChunkReload`) is sent to the frontend to invalidate the now-stale chunks, forcing a re-fetch from the data source.
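A rough sketch of the backend batching and undo bookkeeping, under the same simplifications as the frontend sketch (the 100 ms debounce window and the async `applyEdits` signature are assumptions, not prototype code):

```typescript
// Rough sketch of backend edit batching, undo bookkeeping, and the
// downsampling queue.
type VoxelEdit = { chunkKey: string; voxelIndices: number[]; newValue: number };

interface VoxelChange {
  chunkKeys: string[];
  indices: Uint32Array;   // affected voxel indices
  oldValues: Uint8Array;  // overwritten values, returned by applyEdits
  newValues: Uint8Array;
}

class VoxelEditControllerBackend {
  private pendingEdits: VoxelEdit[] = [];
  private undoStack: VoxelChange[] = [];
  private redoStack: VoxelChange[] = [];
  private downsampleQueue: string[] = [];
  private flushTimer: ReturnType<typeof setTimeout> | undefined;

  constructor(
    private source: { applyEdits(edits: VoxelEdit[]): Promise<VoxelChange> },
  ) {}

  commitEdits(edits: VoxelEdit[]) {
    // Aggregate incoming edits and debounce the actual write.
    this.pendingEdits.push(...edits);
    clearTimeout(this.flushTimer);
    this.flushTimer = setTimeout(() => this.flush(), 100);
  }

  private async flush() {
    const batch = this.pendingEdits.splice(0);
    // Persist the whole batch in one transaction; applyEdits returns the
    // overwritten values so the edit can be undone later.
    const change = await this.source.applyEdits(batch);
    this.undoStack.push(change);
    this.redoStack.length = 0;  // a fresh edit invalidates the redo history
    this.downsampleQueue.push(...change.chunkKeys);
  }
}
```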
### Code Flow for a Single Brush Stroke
This diagram illustrates the sequence of events from a user drawing on the canvas to the data being persisted and downsampled.
```mermaid
sequenceDiagram
    participant User
    participant Tool as VoxelBrushLegacyTool
    participant ControllerFE as VoxelEditController (Frontend)
    participant SourceFE as VolumeChunkSource (Frontend)
    participant ControllerBE as VoxelEditController (Backend)
    participant SourceBE as VolumeChunkSource (Backend)
    User->>Tool: Mouse Down/Drag
    Tool->>ControllerFE: paintBrushWithShape(mouse, ...)
    note right of ControllerFE: Calculates affected chunks/voxels
    ControllerFE->>SourceFE: applyLocalEdits(chunkKeys, ...)
    SourceFE->>SourceFE: Modifies in-memory chunk data
    note right of SourceFE: Viewer redraws with updated chunk data (preview)
    SourceFE->>ChunkManager: Invalidates GPU texture for chunk
    ControllerFE->>ControllerBE: commitEdits(edits, ...) [RPC]
    activate ControllerBE
    ControllerBE->>ControllerBE: Debounces and batches edits
    ControllerBE->>SourceBE: applyEdits(chunkKeys, ...)
    activate SourceBE
    SourceBE-->>ControllerBE: Returns VoxelChange
    deactivate SourceBE
    ControllerBE->>ControllerBE: Pushes change to Undo Stack
    ControllerBE->>ControllerBE: Enqueues chunk for downsampling
    deactivate ControllerBE
    loop Downsampling Cascade
        ControllerBE->>ControllerBE: downsampleStep(chunkKeys)
        ControllerBE->>SourceBE: applyEdits(chunkKeys, ...)
        note right of SourceFE: Reload chunks affected by the downsampling
        ControllerBE->>ControllerFE: callChunkReload(chunkKeys) [RPC]
        activate ControllerFE
        ControllerFE->>SourceFE: invalidateChunks(chunkKeys)
        deactivate ControllerFE
    end
```
## Architectural Discussion Points
### 1. Low-Res Drawing / Big Brush
The prototype currently locks drawing to the highest-resolution level. An attempt was made to allow drawing at lower resolutions based on the brush size, but implementing the corresponding upscaling cascade is not trivial. Two approaches were considered:
- First, mimicking the downscaling process in reverse. Because of the exponential nature of upscaling, however, we quickly hit a limit on the number of upscale steps we can perform (see the sketch below).
- Second, delaying the upsampling until the moment the chunk is actually needed. This method is not compatible with generic data sources and requires complex logic to handle conflicts.
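To make the exponential blow-up of the first approach concrete, assume a pyramid that downsamples by 2 along each of the three axes per level; one edited chunk at level L then dirties 8^L full-resolution chunks:

```typescript
// Number of full-resolution (level-0) chunks dirtied by a single edited
// chunk at pyramid level L, assuming 2x downsampling along all three axes.
function finestChunksAffected(level: number): number {
  return 8 ** level; // each level up covers 2^3 = 8 children per step
}
// finestChunksAffected(1) === 8
// finestChunksAffected(2) === 64
// finestChunksAffected(4) === 4096
// A stroke touching even a handful of chunks at a coarse level therefore
// implies thousands of full-resolution chunk writes, which is why the
// naive "mirror the downscaling" cascade hits its limit so quickly.
```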
A pragmatic solution would be to allow low-resolution drawing while accepting that no upscaling is performed. This would allow the feature to be used to outline datasets, for example, but UI work is needed to avoid user confusion.
### 2. Live Preview on Compressed Chunks
The `applyLocalEdits()` method on the `VolumeChunkSource` writes edits directly into the in-memory chunk, which, as far as I know, is not possible for compressed chunks. As a temporary workaround, compressed chunks are decompressed, edited, and then re-compressed (roughly the round trip sketched below), but this obviously leads to poor performance. The only alternative I currently have in mind is to move the preview into a separate overlay that the render layer draws on top of the original chunks.
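For reference, the workaround amounts to the following; `decode`/`encode` are placeholders for whatever codec the chunk format actually uses, and `VoxelEdit` is the simplified shape from the earlier sketch:

```typescript
// Current (slow) preview path for compressed chunks: a full codec round
// trip on every brush stroke.
type VoxelEdit = { chunkKey: string; voxelIndices: number[]; newValue: number };

function applyLocalEditsCompressed(
  compressed: Uint8Array,
  edits: VoxelEdit[],
  decode: (buf: Uint8Array) => Uint8Array,
  encode: (raw: Uint8Array) => Uint8Array,
): Uint8Array {
  const raw = decode(compressed);  // full decompression
  for (const edit of edits) {
    for (const index of edit.voxelIndices) raw[index] = edit.newValue;
  }
  return encode(raw);              // full recompression, once per stroke
}
```

An overlay-based preview would avoid this round trip entirely, at the cost of an extra render pass.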
### 3. Annotating over Read-Only Sources & Multi-Datasource Behavior
The current implementation was designed with a single source in mind, but use cases like annotating over a read-only source require defining multi-datasource behavior. The trivial approach would be to read from and attempt to write to every source, then mix the chunks in the shader. I think a priority system would be better: read through the sources in priority order until one has the wanted chunk, and write to the highest-priority writable source (or let the user choose which sources are writable). A possible shape for that scheme is sketched below.
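All names in this sketch are hypothetical, not prototype code:

```typescript
// Reads fall through the source list in priority order; writes go to the
// first writable source.
interface PrioritizedSource {
  writable: boolean;
  readChunk(key: string): Promise<Uint8Array | undefined>;
}

async function readThrough(
  sources: PrioritizedSource[],  // ordered highest priority first
  key: string,
): Promise<Uint8Array | undefined> {
  for (const source of sources) {
    const chunk = await source.readChunk(key);
    if (chunk !== undefined) return chunk;  // first source holding the chunk wins
  }
  return undefined;
}

function writeTarget(sources: PrioritizedSource[]): PrioritizedSource | undefined {
  // Write to the highest-priority writable source (or let the user pick).
  return sources.find((s) => s.writable);
}
```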
### 4. Dataset Creation
I think the ability to create new datasets within Neuroglancer would be a great feature: it would remove the need for external tools or a dedicated backend, especially for the case of writing over read-only sources. This could be offered as an option when the data source URL points to an empty directory, for example; the user would then choose the dataset format, bounds, resolution, etc. (a hypothetical options shape is sketched below).
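Nothing here exists in the prototype yet; all fields are speculative:

```typescript
// Options the user could fill in when Neuroglancer is pointed at an empty
// directory.
interface NewDatasetOptions {
  format: 'zarr' | 'n5' | 'precomputed';
  dataType: 'uint8' | 'uint16' | 'uint32' | 'uint64';
  bounds: { lower: number[]; upper: number[] };  // voxel bounds per dimension
  resolution: number[];                          // physical voxel size, e.g. in nm
  chunkSize: number[];
  downsamplingLevels: number;                    // pyramid levels to pre-create
}
```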
Looking forward to any feedback. Thank you!