Variant shredding #2
Open: carpecodeum wants to merge 12 commits into `main` from `variant-shredding`
# Which issue does this PR close?

- Related to apache#7395
- Closes apache#7495
- Closes apache#7377

# Rationale for this change

Let's update tonic to the latest. Given the open and unresolved questions on @rmn-boiko's PR apache#7377 from @Xuanwo and @sundy-li, I thought a new PR would result in a faster resolution.

# What changes are included in this PR?

This PR is based on apache#7495 from @MichaelScofield -- I resolved some merge conflicts and updated Cargo.toml in the integration tests.

# Are these changes tested?

Yes, by CI.

# Are there any user-facing changes?

New dependency version.

---------
Co-authored-by: LFC <[email protected]>
…pache#7922)

# Which issue does this PR close?

- Part of apache#7896

# Rationale for this change

In apache#7896, we saw that inserting a large number of field names takes a long time -- in this case ~45s to insert 2**24 field names. The bulk of this time is spent just allocating the strings, but we also see quite a bit of time spent reallocating the `IndexSet` that we're inserting into.

`with_field_names` is an optimization to declare the field names upfront, which avoids having to reallocate and rehash the entire `IndexSet` during field name insertion. Using this method requires at least 2 string allocations for each field name -- 1 to declare the field name upfront and 1 to insert the actual field name during object building.

This PR adds a new method `with_field_name_capacity` which allows you to reserve space in the metadata builder without needing to allocate the field names themselves upfront. In this case, we see a modest performance improvement when inserting the field names during object building.

Before: (screenshot of benchmark results)

After: (screenshot of benchmark results)
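As a rough illustration of the optimization described above, here is a minimal sketch that reserves dictionary capacity before building a wide object. It assumes `with_field_name_capacity` is exposed on `VariantBuilder` alongside the existing `with_field_names`; the exact receiver and signatures are not confirmed here.

```rust
// Hedged sketch: reserving field-name capacity up front so the metadata
// builder's IndexSet is not repeatedly reallocated and rehashed.
// The placement/signature of `with_field_name_capacity` is an assumption.
use parquet_variant::VariantBuilder;

fn build_wide_object(field_names: &[&str]) -> (Vec<u8>, Vec<u8>) {
    let mut builder = VariantBuilder::new()
        // Reserve space without allocating the field-name strings themselves.
        .with_field_name_capacity(field_names.len());

    let mut obj = builder.new_object();
    for &name in field_names {
        // Each insert registers the field name in the metadata dictionary.
        obj.insert(name, 1i64);
    }
    let _ = obj.finish();

    // Returns the (metadata, value) buffers of the encoded variant.
    builder.finish()
}
```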
…che#7914)

# Which issue does this PR close?

- Fixes apache#7907

# Rationale for this change

Appending a `VariantObject` or `VariantList` directly on the `VariantBuilder` currently panics.

# Changes to the public API

`VariantBuilder` now has these additional methods:

- `append_object`, which panics if shallow validation fails or the object has duplicate field names
- `try_append_object`, which performs full validation on the object before appending
- `append_list`, which panics if shallow validation fails
- `try_append_list`, which performs full validation on the list before appending

---------
Co-authored-by: Andrew Lamb <[email protected]>
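A hedged usage sketch of the new append methods: it copies an existing object into a fresh builder using `try_append_object`, which validates before appending instead of panicking. `Variant::new` and the exact method signatures are assumptions based on the description above.

```rust
// Hedged sketch: re-encoding an existing variant object through a new builder.
// Method and constructor signatures are assumptions based on the commit message.
use parquet_variant::{Variant, VariantBuilder};

fn copy_object(metadata: &[u8], value: &[u8]) -> (Vec<u8>, Vec<u8>) {
    // Parse the existing (metadata, value) pair into a Variant.
    let variant = Variant::new(metadata, value);

    let mut builder = VariantBuilder::new();
    if let Variant::Object(obj) = variant {
        // `try_append_object` performs full validation before appending,
        // returning an error rather than panicking on invalid input.
        builder
            .try_append_object(obj)
            .expect("object failed validation");
    }
    builder.finish()
}
```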
# Which issue does this PR close?

- Closes apache#7893

# What changes are included in this PR?

In parquet-variant:

- Add a new function `Variant::get_path`: this traverses the path to create a new Variant (does not cast any of it).
- Add a new module `parquet_variant::path`: adds structs/enums to define a path to access a variant value deeply.

In parquet-variant-compute:

- Add a new compute kernel `variant_get`: does the path traversal over a `VariantArray`. In the future, this would also cast the values to a specified type.
- Includes some basic unit tests. Not comprehensive.
- Includes a simple micro-benchmark for reference.

Current limitations:

- It can only return another `VariantArray`. Casts are not implemented yet.
- Only top-level object/list access is supported. It panics on finding a nested object/list. Needs apache#7914 to fix this.
- Perf is a TODO.

# Are these changes tested?

Some basic unit tests are added.

# Are there any user-facing changes?

Yes

---------
Co-authored-by: Andrew Lamb <[email protected]>
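A sketch of what using the `variant_get` kernel on a `VariantArray` might look like; the options struct, constructors, and signature shown here are assumptions rather than the confirmed surface of this commit.

```rust
// Hedged sketch of the `variant_get` kernel applied to a VariantArray.
// GetOptions, its constructor, and the path parsing shown are assumptions.
use arrow::array::ArrayRef;
use arrow::error::ArrowError;
use parquet_variant::path::VariantPath;
use parquet_variant_compute::{variant_get, GetOptions};

fn extract_city(input: &ArrayRef) -> Result<ArrayRef, ArrowError> {
    // Traverse `$.address.city` for every row of the input VariantArray.
    // The result is another VariantArray; casts to typed arrays are not
    // implemented yet, per the limitations listed above.
    let options = GetOptions::new_with_path(VariantPath::from("address.city"));
    variant_get(input, options)
}
```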
…he#7774)

# Which issue does this PR close?

- Part of apache#7762

# Rationale for this change

As part of apache#7762 I want to optimize applying filters by adding a new code path. To ensure that works well, let's ensure the filtered code path is well covered with tests.

# What changes are included in this PR?

1. Add tests for filtering batches with 0.01%, 1%, 10% and 90% selectivity and varying data types

# Are these changes tested?

Only tests, no functional changes

# Are there any user-facing changes?
# Path-based Field Extraction for VariantArray

This PR implements efficient path-based field extraction and manipulation capabilities for `VariantArray`, enabling direct access to nested fields without expensive unshredding operations. The implementation provides both high-level convenience methods and low-level byte operations to support various analytical workloads on variant data.

## Relationship to Concurrent PRs

This work builds directly on the path navigation concepts introduced in PR apache#7919, sharing the fundamental `VariantPathElement` design with its `Field` and `Index` variants. While PR apache#7919 provides a compute kernel approach with a `variant_get` function, this PR provides instance-based methods directly on `VariantArray`, with a fluent builder API using owned strings rather than PR apache#7919's vector-based approach.

This PR is complementary to PR apache#7921, which implements schema-driven shredding during array construction. This PR provides runtime path-based access to both shredded and unshredded data, creating a complete solution for both efficient construction and efficient access of variant data.
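As a concrete illustration of the instance-based, fluent access described above, the sketch below builds a path from owned strings and applies it to a single row. The constructors and the `value`/`get_path` methods are assumptions drawn from this description, not a confirmed public API.

```rust
// Hedged sketch of the fluent, instance-based access described in this PR.
// All names below follow the prose description; exact signatures are assumptions.
use parquet_variant::{path::{VariantPath, VariantPathElement}, Variant};
use parquet_variant_compute::VariantArray;

fn first_tag<'a>(array: &'a VariantArray) -> Option<Variant<'a, 'a>> {
    // Fluent path built from owned strings, equivalent to `$.tags[0]`.
    let path = VariantPath::new(vec![
        VariantPathElement::Field { name: "tags".to_string() },
        VariantPathElement::Index { index: 0 },
    ]);

    // Navigate row 0 of the array along the path without unshredding.
    array.value(0).get_path(&path)
}
```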
## What This PR Contributes

This PR introduces three capabilities missing from both concurrent PRs. Field removal operations, through methods like `remove_field` and `remove_fields`, enable efficient removal of specific fields from variant data -- crucial for shredding workflows where temporary or debug fields need to be stripped. A byte-level operations module (`field_operations.rs`) provides direct binary manipulation through functions like `get_path_bytes`, `extract_field_bytes`, and `remove_field_bytes`, which operate on the raw binary format without constructing intermediate objects. A binary parser (`variant_parser.rs`) supports all variant types, with specialized parsers for 17 different primitive types, providing the foundation for efficient binary navigation.
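A sketch of how the field-removal and byte-level helpers named above might be used together; the crate placement, signatures, and return types here are assumptions based on this description.

```rust
// Hedged sketch of the removal and byte-level APIs described in this PR.
// Module placement and signatures are assumptions, not a confirmed interface.
use parquet_variant_compute::{field_operations, VariantArray};

// High-level: strip temporary/debug fields from every row before shredding.
fn strip_debug_fields(array: &VariantArray) -> VariantArray {
    array.remove_fields(&["_debug", "_tmp"])
}

// Low-level: pull one field's raw bytes out of a single encoded variant,
// without constructing an intermediate Variant object.
fn extract_raw(metadata: &[u8], value: &[u8]) -> Option<Vec<u8>> {
    field_operations::extract_field_bytes(metadata, value, "address")
}
```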
## How This Benefits PR apache#7919

The performance-critical byte operations could serve as the underlying implementation for PR apache#7919's compute kernel, potentially providing better performance for batch operations by avoiding object construction overhead. The field removal capabilities could extend PR apache#7919's functionality beyond extraction to comprehensive field manipulation. The instance-based approach provides different ergonomics that complement PR apache#7919's compute kernel approach.
## Implementation Details

The implementation follows a three-tier architecture: high-level instance methods returning `Variant` objects for convenient manipulation, mid-level path operations using `VariantPath` and `VariantPathElement` types for type-safe nested access, and low-level byte operations for maximum performance where object construction overhead is prohibitive. This directly addresses the performance concerns identified in PR apache#7919 by providing direct binary navigation without full object reconstruction, enabling efficient batch operations, and implementing selective field access that avoids the quadratic work patterns identified in the original performance analysis.
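To make the tier choice concrete, the sketch below applies the low-level tier across a batch: per-row byte navigation that skips object construction entirely. The `value_bytes` accessor and the `get_path_bytes` signature are hypothetical, used only to illustrate the intended division of labor between tiers.

```rust
// Hedged sketch of the low-level tier applied row by row across a batch.
// `value_bytes` and `get_path_bytes` are hypothetical names here; they
// illustrate the byte-level tier described above rather than a confirmed API.
use arrow::array::Array; // for len() on the array
use parquet_variant::path::VariantPath;
use parquet_variant_compute::{field_operations, VariantArray};

fn project_path(array: &VariantArray, path: &VariantPath) -> Vec<Option<Vec<u8>>> {
    (0..array.len())
        .map(|row| {
            // Hypothetical accessor for the row's raw (metadata, value) buffers.
            let (metadata, value) = array.value_bytes(row);
            // Navigate the encoded bytes directly; no Variant objects are built,
            // which keeps per-row work proportional to the path depth.
            field_operations::get_path_bytes(metadata, value, path)
        })
        .collect()
}
```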
## What Remains Pending

This PR focuses on runtime access and manipulation rather than construction-time optimization, leaving build-time schema-driven shredding to PR apache#7921. Future work could explore integration with PR apache#7919's compute kernel approach, potentially using this PR's byte-level operations as the underlying implementation.