## Issue Description
While working with the Mplex stream multiplexer, I noticed potential memory issues with how stream buffers are managed. The current implementation could lead to significant memory consumption under heavy load.
## Problem
The default configuration allows:
- 1MB max message size
- 4MB stream buffer size
- 1024 inbound/outbound streams per connection
This means a single connection could theoretically use up to 8GB of memory (4MB × 2048 streams). I've seen memory usage grow significantly when consumers process data slower than it arrives.
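As a quick sanity check of the arithmetic above (constant names are mine, values copied from the defaults listed):

```typescript
// Hypothetical constants mirroring the defaults described above.
const MAX_STREAM_BUFFER = 4 * 1024 * 1024 // 4 MiB per-stream buffer
const MAX_INBOUND_STREAMS = 1024
const MAX_OUTBOUND_STREAMS = 1024

// Worst case: every stream's buffer is completely full at once.
const worstCaseBytes =
  MAX_STREAM_BUFFER * (MAX_INBOUND_STREAMS + MAX_OUTBOUND_STREAMS)

console.log(`${worstCaseBytes / (1024 ** 3)} GiB`) // 8 GiB
```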
## Current Behavior
When a stream's buffer fills up:
```js
if (stream.sourceReadableLength() > maxBufferSize) {
  // Stream just gets reset
  this._source.push({
    id: message.id,
    type: type === MessageTypes.MESSAGE_INITIATOR
      ? MessageTypes.RESET_RECEIVER
      : MessageTypes.RESET_INITIATOR
  })
  throw new StreamInputBufferError('Input buffer full')
}
```
This isn't ideal because:
- It can cause data loss
- There's no proper backpressure mechanism
- Memory usage can spike before the reset happens
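To illustrate the alternative: instead of resetting the stream once the buffer overflows, the producer side could be made to wait until the consumer drains. This is a minimal, self-contained sketch of that idea (not the Mplex API — `BoundedBuffer` and its methods are hypothetical):

```typescript
// A bounded buffer whose push() only resolves once there is room,
// so a slow consumer applies backpressure instead of triggering a reset.
class BoundedBuffer<T> {
  private items: T[] = []
  private waiters: Array<() => void> = []

  constructor (private readonly maxSize: number) {}

  async push (item: T): Promise<void> {
    // Block the producer while the buffer is at capacity.
    while (this.items.length >= this.maxSize) {
      await new Promise<void>(resolve => this.waiters.push(resolve))
    }
    this.items.push(item)
  }

  shift (): T | undefined {
    const item = this.items.shift()
    const waiter = this.waiters.shift()
    if (waiter != null) waiter() // wake one blocked producer
    return item
  }

  get length (): number { return this.items.length }
}
```

With this shape, no data is lost and memory stays bounded by `maxSize`; the cost is that the producer is slowed to the consumer's pace, which is exactly what backpressure should do.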
## Suggested Fix
We should consider:
- Adding proper backpressure signaling
  - Implement flow control mechanisms to slow down data producers
  - Signal upstream when buffers are approaching capacity
- Implementing dynamic buffer sizing based on system memory
  - Adjust buffer sizes based on available system memory
  - Scale buffer limits according to current memory pressure
- Adding monitoring for overall memory usage across streams
  - Track total memory consumption across all active streams
  - Implement alerts when memory thresholds are exceeded
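For the monitoring point, one possible shape is a shared accounting object that every stream reports its buffered byte count to, with an alert hook once a global limit is crossed. Everything here is a hypothetical sketch, not existing Mplex code:

```typescript
// Tracks total buffered bytes across all streams on a connection (or node).
// reserve() is called before buffering data, release() after it is consumed.
class MemoryAccountant {
  private total = 0

  constructor (
    private readonly globalLimit: number,
    private readonly onAlert: (usedBytes: number) => void
  ) {}

  /** Returns false (and fires the alert) if accepting `bytes` would exceed the limit. */
  reserve (bytes: number): boolean {
    if (this.total + bytes > this.globalLimit) {
      this.onAlert(this.total)
      return false
    }
    this.total += bytes
    return true
  }

  release (bytes: number): void {
    this.total = Math.max(0, this.total - bytes)
  }

  get used (): number { return this.total }
}
```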
## Questions
- Should we add a global memory limit across all streams?
- Would it make sense to implement automatic buffer size adjustment?
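On the second question, automatic adjustment could be as simple as dividing a global budget by the active stream count and clamping the result, so more streams means smaller per-stream buffers. A sketch under that assumption (function and parameter names are hypothetical):

```typescript
// Derive a per-stream buffer cap from a global byte budget.
function perStreamBufferSize (
  globalBudgetBytes: number,
  activeStreams: number,
  minBytes = 64 * 1024,       // never shrink below 64 KiB
  maxBytes = 4 * 1024 * 1024  // never exceed the current 4 MiB default
): number {
  const fairShare = Math.floor(globalBudgetBytes / Math.max(1, activeStreams))
  return Math.min(maxBytes, Math.max(minBytes, fairShare))
}
```

For example, with a 1 GiB budget and 1024 active streams each stream gets 1 MiB, while a connection with a single stream keeps the full 4 MiB cap.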