* A thread-safe `Clone` method is implemented (see the usage sketch after this list).
* `Index` and `IndexMut` traits are implemented for `usize` indices.
* Due to the possibly fragmented nature of the underlying pinned vec, and since we cannot return a reference to a temporarily created collection, indexing over a range is currently not possible. This would become possible with `IndexMove` (see rust-lang/rfcs#997).
* The `Debug` trait implementation is made more convenient, revealing the current length and capacity in addition to the elements.
* Upgraded dependencies: pinned-vec (3.7) and pinned-concurrent-col (2.6).
* Tests are extended:
  * concurrent clone is tested,
  * in addition to concurrent push & read, safety of concurrent extend & read is also tested.
* boxcar::Vec is added to the benchmarks.
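
A minimal usage sketch of the `Clone` and `Index` items above. It assumes that `push` returns the index of the pushed element and that indexing with `usize` yields the element itself; both are assumptions based on the notes above rather than a verified API reference.

```rust
use orx_concurrent_vec::ConcurrentVec;

fn main() {
    let vec = ConcurrentVec::new();

    // assumption: push returns the index of the newly added element
    let i = vec.push('x');
    let j = vec.push('y');

    // Index<usize> gives direct access to already-written elements
    assert_eq!(vec[i], 'x');
    assert_eq!(vec[j], 'y');

    // thread-safe Clone: the clone holds the elements written so far
    let cloned = vec.clone();
    assert_eq!(cloned.len(), vec.len());
}
```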
ConcurrentVec wraps a [`PinnedVec`](https://crates.io/crates/orx-pinned-vec) of [`ConcurrentOption`](https://crates.io/crates/orx-concurrent-option) elements. This composition leads to the following safety guarantees:
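
As an illustrative aside, the composition can be sketched as follows. This is not the crate's actual definition; `SplitVec` is used here only as one concrete `PinnedVec` implementation.

```rust
use orx_concurrent_option::ConcurrentOption;
use orx_split_vec::SplitVec;

/// Conceptual sketch only: a pinned vector of concurrent-option slots,
/// so that each position can be written once concurrently and read
/// safely only after it has been initialized.
struct ConcurrentVecSketch<T> {
    elements: SplitVec<ConcurrentOption<T>>,
}
```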
Together, the concurrency model of `ConcurrentVec` has the following properties:
*You may find the details of the benchmarks at [benches/collect_with_push.rs](https://github.com/orxfun/orx-concurrent-vec/blob/main/benches/collect_with_push.rs).*
In the experiment, `rayon`'s parallel iterator and the `push` methods of `AppendOnlyVec`, `boxcar::Vec` and `ConcurrentVec` are used to collect results from multiple threads. Further, different underlying pinned vectors of the `ConcurrentVec` are evaluated (a sketch of this push-based collection pattern follows the observations below). We observe that:
* The default `Doubling` growth strategy leads to efficient concurrent collection of results. Note that this variant does not require any input to construct.
* On the other hand, the `Linear` growth strategy performs significantly better. Note that the value of its argument means that each fragment of the underlying `SplitVec` will have a capacity of 2^12 (4096) elements. The improvement is potentially due to less memory waste, and `Linear` can be preferred with minor knowledge of the data to be pushed.
* Finally, the `Fixed` growth strategy is the least flexible and requires perfect knowledge of the hard-constrained capacity (it panics if the capacity is exceeded). Since it does not outperform `Linear`, we do not necessarily prefer `Fixed` even when we have this perfect knowledge.
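
The push-based collection pattern benchmarked here can be sketched as follows, using the default `Doubling` variant and scoped threads. The thread and item counts are illustrative, not the benchmark's actual configuration, and `push` returning the element's index is assumed as in the notes above.

```rust
use orx_concurrent_vec::ConcurrentVec;

fn main() {
    let num_threads: usize = 4;
    let num_items_per_thread: usize = 1024;

    // default Doubling growth; no configuration input required
    let vec = ConcurrentVec::new();

    std::thread::scope(|s| {
        for t in 0..num_threads {
            let vec = &vec;
            s.spawn(move || {
                for i in 0..num_items_per_thread {
                    // each thread pushes its results concurrently
                    vec.push(t * num_items_per_thread + i);
                }
            });
        }
    });

    assert_eq!(vec.len(), num_threads * num_items_per_thread);
}
```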
The performance can further be improved by using the `extend` method instead of `push`. You may see the results in the next subsection and the details in the [performance notes](https://docs.rs/orx-concurrent-bag/2.3.0/orx_concurrent_bag/#section-performance-notes) of `ConcurrentBag`, which has similar characteristics.
### Performance with `extend`
The only difference in this follow-up experiment is that we use `extend` rather than `push` (a usage sketch follows the observations below).
* There is no significant difference between extending by batches of 64 elements or batches of 65536 elements. We do not need a well-tuned number; a large enough batch size seems to be just fine.
* Not all scenarios allow extending in batches; however, the significant performance improvement makes it preferable whenever possible.
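
A minimal sketch of batched concurrent collection with `extend`. It assumes that `extend` accepts an exact-size iterator of values, as its `ConcurrentBag` counterpart does; the thread, batch and element counts are illustrative only.

```rust
use orx_concurrent_vec::ConcurrentVec;

fn main() {
    let num_threads: usize = 4;
    let batches_per_thread: usize = 16;
    let batch_size: usize = 64;

    let vec = ConcurrentVec::new();

    std::thread::scope(|s| {
        for t in 0..num_threads {
            let vec = &vec;
            s.spawn(move || {
                for b in 0..batches_per_thread {
                    // extend with a whole batch rather than pushing one by one
                    let begin = (t * batches_per_thread + b) * batch_size;
                    vec.extend(begin..(begin + batch_size));
                }
            });
        }
    });

    assert_eq!(vec.len(), num_threads * batches_per_thread * batch_size);
}
```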