
Fix: improve join chunks to prevent memory exhaustion when joining large files #157

Closed

lohanidamodar wants to merge 2 commits into 0.18.x from fix-join-chunks

Conversation

@lohanidamodar (Contributor) commented Feb 23, 2026

  • Stream based copying to join chunks

Summary by CodeRabbit

  • Bug Fixes
    • Improved file operation reliability and efficiency by enhancing the file assembly process with better resource management and error handling.
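
The stream-based copying approach described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `joinChunks`, its signature, and the variable names are assumptions.

```php
<?php
// Sketch: join uploaded chunks into one file using stream copies, so memory
// usage stays flat regardless of chunk size (hypothetical helper, not the
// library's real API).
function joinChunks(string $dest, array $chunkPaths): void
{
    // 'wb' truncates any partial file left behind by an earlier failed attempt.
    $out = \fopen($dest, 'wb');
    if ($out === false) {
        throw new \Exception('Failed to open destination file '.$dest);
    }

    try {
        foreach ($chunkPaths as $chunk) {
            $in = \fopen($chunk, 'rb');
            if ($in === false) {
                throw new \Exception('Failed to open chunk '.$chunk);
            }
            // stream_copy_to_stream() copies via fixed-size internal buffers,
            // never loading a whole chunk into memory at once.
            \stream_copy_to_stream($in, $out);
            \fclose($in);
        }
    } finally {
        \fclose($out);
    }
}
```

The key point is that nothing ever calls `file_get_contents()` on a chunk, which is what exhausts memory when chunks are large.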

@coderabbitai

coderabbitai bot commented Feb 23, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/Storage/Device/Local.php (1)

211-212: Consider handling rmdir failure gracefully.

If rmdir() fails (e.g., directory not empty due to unexpected files), the failure is silent. This is low-impact since it's cleanup code, but logging or checking the result could aid debugging in edge cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Storage/Device/Local.php` around lines 211 - 212, The cleanup currently
calls \unlink($tmp) and then \rmdir(dirname($tmp)) without checking the result;
change this to check the return value of \rmdir(dirname($tmp)) and handle
failures gracefully by logging a warning (include dirname($tmp) and the $tmp
filename) rather than letting the failure be silent; use the existing logger
(e.g., $this->logger->warning(...)) if available, otherwise fallback to
error_log/trigger_error, but do not throw — just report the issue for debugging.
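
One way to implement the non-fatal cleanup the prompt describes is sketched below. `cleanupChunkDir` is a hypothetical helper, and falling back to `error_log()` is an assumption; the repository may have its own logger.

```php
<?php
// Sketch: remove the temporary chunk file and its directory, reporting (but
// not throwing on) rmdir failures, since cleanup should never fail the upload.
function cleanupChunkDir(string $tmp): void
{
    if (\file_exists($tmp)) {
        \unlink($tmp);
    }

    $dir = \dirname($tmp);
    // @ suppresses the PHP warning; we report the failure ourselves so it is
    // visible for debugging instead of silently ignored.
    if (!@\rmdir($dir)) {
        \error_log('Failed to remove temporary chunk directory '.$dir.' (tmp file: '.$tmp.')');
    }
}
```

If the directory still contains unexpected files, `rmdir()` returns false and the warning records which directory was left behind.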

Comment on lines 188 to 191

    $dest = \fopen($path, 'ab');
    if ($dest === false) {
        throw new Exception('Failed to open destination file '.$path);
    }
⚠️ Potential issue | 🟠 Major

Using append mode ('ab') can corrupt files on retry.

If the destination file already exists from a previous failed or interrupted upload attempt, opening with 'ab' will append the new chunks to the existing corrupted/partial content rather than overwriting it. This can result in a corrupted final file.

Consider using 'wb' (write binary) mode to ensure the destination file is created fresh or truncated if it exists.

🐛 Proposed fix
-        $dest = \fopen($path, 'ab');
+        $dest = \fopen($path, 'wb');
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (only the fopen mode changes; the error check stays as-is):

    $dest = \fopen($path, 'wb');
    if ($dest === false) {
        throw new Exception('Failed to open destination file '.$path);
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Storage/Device/Local.php` around lines 188 - 191, The code opens the
destination file with fopen($path, 'ab') which appends to any existing partial
file and can corrupt uploads on retry; change the open mode to 'wb'
(fopen($path, 'wb')) so the file is created fresh/truncated before writing,
updating the fopen call that assigns $dest and keeping the existing error check
that throws the Exception when $dest === false.
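
The corruption scenario the reviewer describes can be demonstrated in isolation. This is a standalone snippet, not the repository's code: it shows that a leftover partial file keeps its stale bytes under 'ab', while 'wb' truncates it.

```php
<?php
// Demonstrates why append mode corrupts retried uploads.
$path = sys_get_temp_dir().'/retry_demo.bin';

// Simulate a leftover partial file from a failed earlier attempt.
file_put_contents($path, 'PARTIAL');

// Retry with 'ab': the stale prefix survives.
$ab = fopen($path, 'ab');
fwrite($ab, 'DATA');
fclose($ab);
// File now holds 'PARTIALDATA'.

// Retry with 'wb': the file is truncated first.
$wb = fopen($path, 'wb');
fwrite($wb, 'DATA');
fclose($wb);
// File now holds exactly 'DATA'.
```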

…and handle temporary directory removal errors
