Decompressor/Compressor pool #34

Open · wants to merge 7 commits into main
Conversation

adam-fowler (Member):

Add a pool of decompressors and compressors for both gzip and deflate. When the middleware needs one, it takes it from the pool if available; otherwise it allocates a new one. When a compressor/decompressor is freed, it is returned to the pool unless the pool has already reached its maximum size.

This should considerably reduce the number of allocations made by zlib.
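The pooling behaviour described above can be sketched as follows. This is a minimal illustration, not the PR's actual API: the names `Pool`, `maxSize`, `acquire`, and `release` are hypothetical, and synchronization for concurrent use is omitted.

```swift
// Minimal sketch of the pooling pattern: acquire() reuses a pooled value
// or allocates a new one; release() returns it unless the pool is full.
// Note: not thread-safe; real middleware would need synchronization.
final class Pool<Value> {
    private let maxSize: Int
    private var values: [Value] = []
    private let create: () -> Value

    init(maxSize: Int, create: @escaping () -> Value) {
        self.maxSize = maxSize
        self.create = create
    }

    /// Use a pooled value if one exists, otherwise allocate a new one.
    func acquire() -> Value {
        if let value = values.popLast() { return value }
        return create()
    }

    /// Pass the value back unless the pool has reached its maximum size.
    func release(_ value: Value) {
        guard values.count < maxSize else { return } // dropped, deallocates
        values.append(value)
    }
}
```

With zlib, `create` would wrap the expensive `deflateInit`/`inflateInit` setup, so repeated requests reuse the same underlying state instead of reallocating it.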

@adam-fowler adam-fowler requested a review from Joannis as a code owner February 27, 2025 08:25

codecov bot commented Feb 27, 2025

Codecov Report

Attention: Patch coverage is 92.55319% with 7 lines in your changes missing coverage. Please review.

Project coverage is 93.30%. Comparing base (3708933) to head (6ccee8c).

Files with missing lines                                 Patch %   Missing
.../HummingbirdCompression/CompressedBodyWriter.swift    64.70%    6 ⚠️
...irdCompression/ResponseCompressionMiddleware.swift    94.73%    1 ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #34      +/-   ##
==========================================
- Coverage   94.55%   93.30%   -1.25%     
==========================================
  Files           3        4       +1     
  Lines         202      269      +67     
==========================================
+ Hits          191      251      +60     
- Misses         11       18       +7     


// keep finishing stream until we don't get a buffer overflow
while true {
    do {
        try lastBuffer.compressStream(to: &self.window, with: self.compressor, flush: .finish)
        try await self.parentWriter.write(self.window)
Member:

If this write fails, the allocation isn't freed. Can we use a non-copyable with a deinit to handle this instead? Or a class or something similar.

adam-fowler (Member, Author):

Now using an AllocatedValue class to hold these.

@Joannis (Member) left a review:

There's still a bug; I'd prefer not to manage allocations manually.

@Joannis (Member) commented Feb 28, 2025:

We can also lend an allocation using .with scoped resources.

@adam-fowler (Member, Author) replied:

> We can also lend an allocation using .with scoped resources

Unfortunately the way ResponseBodyWriters work doesn't lend itself to .with scoped resources. Also, making the writer ~Copyable would require the protocol to be ~Copyable. I checked whether that was possible in hummingbird, and it is, but it would be a breaking change.
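For contrast, the scoped-resource approach Joannis suggests could look roughly like this (a hypothetical sketch, not Hummingbird's API): the pool lends a value for the duration of a closure and always takes it back, so a failed write cannot leak the allocation. The catch is that the borrower's whole lifetime must fit inside the closure, which is the constraint ResponseBodyWriters run into.

```swift
// Hypothetical sketch of lending a pooled value via a scoped `with` call.
// The defer guarantees the value returns to the pool even if `body` throws,
// but the borrower's entire lifetime must fit inside the closure.
final class ScopedPool<Value> {
    private var values: [Value] = []
    private let create: () -> Value

    init(create: @escaping () -> Value) { self.create = create }

    /// Lend a pooled value to `body`, reclaiming it on every exit path.
    func with<R>(_ body: (Value) throws -> R) rethrows -> R {
        let value = values.popLast() ?? create()
        defer { values.append(value) } // always handed back
        return try body(value)
    }
}
```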

@adam-fowler (Member, Author) commented Feb 28, 2025:

> There's still a bug; I'd prefer not to manage allocations manually.

Yes, I understand, but we can't pool these without managing their allocation.
I wrapped every allocation in an AllocatedValue class, which manages returning allocations to the pool.
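The wrapper described above might look something like this. This is an illustrative sketch and the actual AllocatedValue in the PR may differ: a class whose deinit hands the allocation back to the pool, so it is recycled even when an intermediate write throws.

```swift
// Illustrative sketch of an AllocatedValue-style wrapper: the class's
// deinit returns the wrapped allocation to the pool, so reference
// counting, not manual bookkeeping, decides when it goes back.
final class AllocatedValue<Value> {
    let value: Value
    private let release: (Value) -> Void

    init(value: Value, release: @escaping (Value) -> Void) {
        self.value = value
        self.release = release
    }

    deinit { release(value) } // runs on all exit paths, including throws
}
```

Because the release hook fires from deinit, the write loop from the review comment no longer needs explicit cleanup: when the writer's reference goes away, for any reason, the allocation returns to the pool.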
