Initial implementation of the global buffer arena #270
Conversation
Codecov Report

@@           Coverage Diff           @@
##           master    #270   +/-   ##
======================================
  Coverage     100%    100%
======================================
  Files           1       1
  Lines          30      30
======================================
  Hits           30      30

Continue to review full report at Codecov.
/// Since the whole point behind this byte buffer arena is to cache
/// allocated heap memory the number of concurrent byte buffers that are
/// in use at the same time should be kept small.
const IN_USE_LIMIT: usize = 1000;
So the limit is defined by the number of instances, but what is the byte limit per instance? I wonder if it makes more sense to define the arena limit as a number of bytes, since this would provide a better idea of the upper byte boundary.
Does the number have any reasoning behind it, or is it more a rough rule of thumb for now?
The limit is arbitrary.
However, imposing it makes sense since there is one way to abuse this global buffer arena: ending up with very many different buffers, i.e. when in_use
is a high number.
I expect normal use cases to spawn at most 5-10 buffers, so 1000 is already a very generous bound.
A regulation on the total bytes in use could be introduced, but it should definitely remain possible to have, for example, one giant buffer if that is what is needed.
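The limit discussed above can be sketched roughly as follows. This is a hypothetical, simplified illustration (not the actual ink! implementation): a single-threaded arena that tracks free and in-use buffer counts and refuses to hand out more than IN_USE_LIMIT buffers at once.

```rust
// Hypothetical sketch of an in-use limit on a buffer arena.
// `IN_USE_LIMIT` follows the snippet above; everything else is illustrative.
const IN_USE_LIMIT: usize = 1000;

struct BufferArena {
    free: Vec<Vec<u8>>, // cached buffers that are currently unused
    in_use: usize,      // number of buffers currently handed out
}

impl BufferArena {
    fn new() -> Self {
        BufferArena { free: Vec::new(), in_use: 0 }
    }

    /// Hands out a buffer, reusing a cached one if available.
    /// Panics if too many buffers are in use at the same time.
    fn get_buffer(&mut self) -> Vec<u8> {
        assert!(self.in_use < IN_USE_LIMIT, "too many buffers in use");
        self.in_use += 1;
        self.free.pop().unwrap_or_else(Vec::new)
    }

    /// Returns a buffer to the cache once its user is done with it.
    fn return_buffer(&mut self, buffer: Vec<u8>) {
        self.in_use -= 1;
        self.free.push(buffer);
    }
}

fn main() {
    let mut arena = BufferArena::new();
    let buf = arena.get_buffer();
    assert_eq!(arena.in_use, 1);
    arena.return_buffer(buf);
    assert_eq!((arena.in_use, arena.free.len()), (0, 1));
}
```

Note that the total allocated buffer count never needs its own field: as mentioned earlier in the thread, it can always be computed as free + in_use.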
* [core] initial implementation of the global buffer arena
* [core] add license header to buffer arena
* [core] add module level docs to buffer arena
* [core] move license header where it belongs (to the top)
* [core] add docs for diagnostic fields and getters
* [core] add tests to buffer arena
* [core] apply rust fmt
* [core] remove allocated field from BufferArena (the information is redundant since it can be computed as free + in_use)
* [core] improve buffer arena tests
* [core] export buffer arena public symbols from core::env2
* [core] fix doc comment link to AsRef and AsMut
* [core] remove nightly cell-update feature
* [core] enable no_std for BufferArena and mirror thread_local interfacing
* [core] fix some obvious no_std mis-compilations
* [core] apply rustfmt
* [core] apply rustfmt #2
* [core] fix clippy warning in buffer_arena
* [core] fix typo (Co-Authored-By: Michael Müller <[email protected]>)
* [core] slightly improve get_buffer impl (Co-Authored-By: Michael Müller <[email protected]>)
* [core] slight improvements
* [core] rename LocalKey to GlobalBufferArena
* [core] fix no_std build
Implements the buffer pool mentioned in this issue: #245
The implementation of the BufferArena shall solve the problem of throttling heap memory allocations due to heavy usage in the layer between SRML contracts and ink!. Most often buffers are short-lived since they are just used for encoding and decoding intermediate values. The BufferArena acts as a globally accessible heap allocation cache.

It provides a single API:
fn get_buffer(&self) -> BufferRef
where BufferRef implements all traits required for ink! and SRML contract interaction. When such a BufferRef goes out of scope, its internal buffer is added back to the global BufferArena instance.

The implementation tries to be as efficient as possible while maintaining checks to identify malicious smart contracts that accumulate many cached buffers.
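The described get_buffer / drop lifecycle can be sketched with a small RAII guard. This is a hypothetical illustration, not the actual ink! code: the type names follow the description above, but the shared-arena plumbing (Rc<RefCell<...>>) and the exact trait set are assumptions for the sketch.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical sketch: a BufferRef guard that hands its buffer back to the
// shared arena when dropped. Real ink! code would use a thread_local /
// no_std-compatible global instead of this Rc<RefCell<...>>.
type SharedArena = Rc<RefCell<Vec<Vec<u8>>>>;

struct BufferRef {
    buffer: Vec<u8>,
    arena: SharedArena,
}

// AsRef/AsMut are among the traits the description mentions for
// ink!/SRML contract interaction.
impl AsRef<[u8]> for BufferRef {
    fn as_ref(&self) -> &[u8] {
        &self.buffer
    }
}

impl AsMut<[u8]> for BufferRef {
    fn as_mut(&mut self) -> &mut [u8] {
        &mut self.buffer
    }
}

impl Drop for BufferRef {
    // Going out of scope moves the internal buffer back into the arena.
    fn drop(&mut self) {
        self.arena.borrow_mut().push(std::mem::take(&mut self.buffer));
    }
}

fn get_buffer(arena: &SharedArena) -> BufferRef {
    // Reuse a cached buffer if one is available, else allocate fresh.
    let buffer = arena.borrow_mut().pop().unwrap_or_default();
    BufferRef { buffer, arena: Rc::clone(arena) }
}

fn main() {
    let arena: SharedArena = Rc::new(RefCell::new(Vec::new()));
    {
        let buf = get_buffer(&arena);
        assert_eq!(buf.as_ref().len(), 0); // fresh, empty buffer
        assert_eq!(arena.borrow().len(), 0); // buffer currently in use
    } // `buf` dropped here: its buffer is returned to the arena
    assert_eq!(arena.borrow().len(), 1);
}
```

The key design point is that callers never return buffers explicitly; the Drop implementation guarantees the cache is refilled even on early returns or pan

ics during unwinding.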
This small module will be the backbone of the future dynamic allocator and maybe also of a reimplementation of the environmental accessor.