
Fix hardened_malloc error #10

Merged · 6 commits merged into master from divest_os on Jan 10, 2025
Conversation

JingMatrix
Owner

DivestOS users reported that `vector::resize` might trigger `hardened_malloc fatal error: detected write after free`.
From the error message, this can only happen when the for loop is executed multiple times.
Note that the for loop is effective only while data to read still exists in the socket queue, which is the case when we call `recv` with the MSG_PEEK flag.
Hence, after we call `recv` with zero flags, which consumes the data in the socket queue, we should break out of the for loop.

Note: previously, we broke out of the for loop based on `errno == EAGAIN`.

As a simplification, we also move the resize call to the initialization of `buf`.
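
As a rough illustration of this loop structure (a minimal sketch only: the function name `HandleReadable`, the buffer size, and the use of `MSG_DONTWAIT` are assumptions, not this repository's actual code), the buffer is sized once at initialization, `recv` with `MSG_PEEK` only checks whether data is still queued, and the loop breaks right after the consuming `recv` with zero flags:

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <vector>

// Minimal sketch under assumptions; names and the buffer size are illustrative.
void HandleReadable(int fd) {
    std::vector<char> buf(8192);  // resize moved to the initialization of buf
    for (;;) {
        // MSG_PEEK does not consume the queued data, so looping on it is safe.
        ssize_t peeked = recv(fd, buf.data(), buf.size(), MSG_PEEK | MSG_DONTWAIT);
        if (peeked <= 0) break;  // nothing queued (EAGAIN), closed, or error

        // A recv with zero flags consumes the data in the socket queue ...
        ssize_t n = recv(fd, buf.data(), buf.size(), 0);
        if (n <= 0) break;
        // ... process the n bytes in buf ...

        break;  // ... so break here instead of waiting for errno == EAGAIN
    }
}
```

Breaking right after the consuming `recv` ensures the loop body is not executed a second time, which is the condition the error message points at.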

@JingMatrix
Owner Author

From the logs, the crash happens in the first loop iteration, which is fairly strange.
verbose_2025-01-07T234520.020123.log

@JingMatrix force-pushed the divest_os branch 2 times, most recently from c4cef3d to e7b0d47 on January 8, 2025 at 19:41
DivestOS users reported that `vector::resize` might trigger `hardened_malloc fatal error: detected write after free`.
From the error message, this can only happen when the for loop is executed multiple times.
Note that the for loop is effective only while data to read still exists in the socket queue, which is the case when we call `recv` with the MSG_PEEK flag.
Hence, after we call `recv` with zero flags, which consumes the data in the socket queue, we should break out of the for loop.
Note: previously, we broke out of the for loop based on `errno == EAGAIN`.

As a simplification, we also move the resize call to the initialization of `buf`.
It is strange that the crash log contains no entries recorded when a new loop iteration starts.
In case the hardened_malloc error is a C++ problem, we choose to do the memory management manually.
Since we call free after the status is updated, it might cause problems when the status struct is destroyed.
Print the buf address to see whether it is duplicated.
From the log, we see that malloc always returns the same address from the heap.
Hence, we simply keep this buf for the SocketHandler.
This might solve the `write after free` issue, since we no longer free the buffer while handling events.
DivestOS users reported that `vector::resize` might trigger `hardened_malloc fatal error: detected write after free`.
From the error message, this can only happen when the memory allocator is called multiple times.

From the log, we see that malloc always returns the same address from the heap.
Hence, we can simply keep the buffer during the lifetime of SocketHandler.
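
As a rough sketch of that final approach (the class layout, member names, and buffer size below are assumptions, not the repository's actual SocketHandler), the buffer is allocated once and released only when the handler itself is destroyed, so no free happens while events are still being handled:

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>
#include <cstdlib>

// Minimal sketch under assumptions; member and method names are invented.
class SocketHandler {
public:
    SocketHandler() : buf_(static_cast<char *>(std::malloc(kBufSize))) {}
    ~SocketHandler() { std::free(buf_); }  // freed only with the handler itself
    SocketHandler(const SocketHandler &) = delete;
    SocketHandler &operator=(const SocketHandler &) = delete;

    void OnReadable(int fd) {
        // The same buffer is reused for every event and never freed here,
        // so no write can land on memory the allocator has already reclaimed.
        ssize_t n = recv(fd, buf_, kBufSize, 0);
        if (n > 0) {
            // ... parse the n bytes in buf_ ...
        }
    }

private:
    static constexpr std::size_t kBufSize = 8192;
    char *buf_;
};
```

Since the log showed malloc handing back the same address on every iteration anyway, a single long-lived allocation behaves the same in practice while removing the free that hardened_malloc was flagging.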
@JingMatrix merged commit 19fe140 into master on Jan 10, 2025
1 check passed
JingMatrix added a commit that referenced this pull request Jan 10, 2025
DivestOS users reported that `vector::resize` might trigger `hardened_malloc fatal error: detected write after free`.
From the error message, this can only happen when the buffer is freed multiple times.
Hence, we can simply keep the buffer during the lifetime of SocketHandler.

Note: from the log, we see that malloc always returns the same address from the heap.
JingMatrix added a commit that referenced this pull request Jan 10, 2025
DivestOS users reported that `vector::resize` might trigger `hardened_malloc fatal error: detected write after free`.
From the error message, this can only happen when the buffer is freed multiple times.
Hence, we can simply keep the buffer during the lifetime of SocketHandler.

Note: from the log, we see that malloc always returns the same address from the heap.