
Conversation

@ariostas (Collaborator) commented on Oct 2, 2025

This is still a work in progress.

I implemented reading from the cache, which was an important piece that was missing. I'm also doing some refactoring with the following two goals:

  • Support reading into virtual arrays.
  • Allocate only one array for reading, instead of allocating one array per cluster and then gluing them together (see the sketch after this list).
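To illustrate the single-allocation goal, here is a minimal NumPy sketch. This is not uproot's actual code; `read_field_single_allocation` and `read_cluster` are hypothetical names used only to show the idea: one destination array is allocated up front and each cluster writes into its own slice, so no per-cluster arrays are created and no concatenation is needed.

import numpy as np

def read_field_single_allocation(cluster_lengths, read_cluster, dtype=np.float64):
    # Allocate the full destination once, instead of one array per cluster.
    destination = np.empty(sum(cluster_lengths), dtype=dtype)
    start = 0
    for i, length in enumerate(cluster_lengths):
        stop = start + length
        # read_cluster(i, out) is assumed to fill `out` in place,
        # e.g. by decompressing cluster i directly into that slice.
        read_cluster(i, destination[start:stop])
        start = stop
    return destination

# Toy usage: each "cluster" just fills its slice with its own index.
data = read_field_single_allocation([3, 5, 2], lambda i, out: out.fill(float(i)))
print(data)  # [0. 0. 0. 1. 1. 1. 1. 1. 2. 2.]

The same preallocated buffer could then be handed to a virtual array, so that the actual read only happens when the array is materialized.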

if return_buffer:
    return destination

def gpu_read_clusters(self, fields, start_cluster_idx, stop_cluster_idx):
@ariostas (Collaborator, Author) commented:


I haven't tested this on GPU, so I'll leave this comment open until I do, so that I don't forget.

@ariostas changed the title from "feat: implement VirtualArray and cache support for RNTuples" to "feat: implement VirtualArray support for RNTuples" on Oct 20, 2025.
