Limit the number of dirs and files an allocation could have #639
It seems that even with optimized database and root hash calculations, having many files and folders in a single allocation will inevitably become slow and unwieldy, if for nothing else than fetching directory/file listings. The list-all option is a particular problem. I know we have pagination in progress, but a user would rarely need all of this in one go; more likely they would want the complete folder structure first and then fetch file lists only for the folders they are interested in. So I believe we need tools that encourage users to organise their data in the most efficient way. Example. The main drawbacks with this, of course, are that files and folders couldn't be moved between allocations and the user would have multiple allocations to manage, but I think it is better to support a graceful approach like this than to impose hard limits and deter potential big-data storage.
@sculptex this is something the user can manage. We will limit the number of files/directories regardless of our future optimization.
In my opinion, splitting tables for … One obvious thing is that we should only limit one of the two: the number of files/directories within a directory, or the depth of directories. In our case we should limit the number of files/directories a directory can have. If anyone is taking over this task, don't forget to remove soft-deletion where applicable.
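To make the per-directory limit concrete, here is a minimal Go sketch of the kind of check that could run before a new file or sub-directory is created under a parent path. The `countChildren` helper, the `MaxDirFiles` constant, and the error wording are hypothetical illustrations, not the blobber's actual API.

```go
package reference

import (
	"context"
	"fmt"
)

// MaxDirFiles is an assumed per-directory cap; the real value would come
// from configuration rather than a constant.
const MaxDirFiles = 1000

// countChildren is a hypothetical helper that would query the reference
// table for the number of direct (non-deleted) children of parentPath.
func countChildren(ctx context.Context, allocationID, parentPath string) (int64, error) {
	// ... SQL query against the reference table omitted ...
	return 0, nil
}

// validateNewChild rejects the operation when the parent directory is full.
func validateNewChild(ctx context.Context, allocationID, parentPath string) error {
	n, err := countChildren(ctx, allocationID, parentPath)
	if err != nil {
		return err
	}
	if n >= MaxDirFiles {
		return fmt.Errorf("directory %q already has %d entries (limit %d)",
			parentPath, n, MaxDirFiles)
	}
	return nil
}
```

With soft-deletion removed, as suggested above, the count can be a plain `COUNT(*)` over the directory's children without filtering deleted rows.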
@peterlimg @cnlangzi @lpoli @guruhubb @sculptex I am planning to work on this next. I would like to confirm the following:
Conversation moved to Slack.
This is (perhaps) a temporary fix for the slow allocation root calculation when there are enormous numbers of dirs and files in an allocation. Issue #627 detailed the problem, but we may not solve it by splitting tables. Instead, we will consider optimizing it by organizing the dirs and files in an MPT tree, discussed here, just like what we do for the blockchain state. In any case, we will not put effort into that before mainnet. What we currently need to do is limit the number of dirs and files each allocation could have. The steps could be:
- Add `max_dirs_files` to the config file
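A hedged sketch of how the `max_dirs_files` setting mentioned above might be read from the config file and enforced at the allocation level. The use of viper, the default value, and the struct and function names are assumptions for illustration, not the blobber's actual code.

```go
package config

import (
	"fmt"

	"github.com/spf13/viper"
)

// Settings holds the limit read from the blobber config file; the struct
// and field names here are illustrative.
type Settings struct {
	MaxDirsFiles int64
}

// Load reads max_dirs_files from the already-initialised viper config,
// falling back to an assumed default when the key is absent.
func Load() Settings {
	viper.SetDefault("max_dirs_files", 50000) // assumed default, not from the issue
	return Settings{MaxDirsFiles: viper.GetInt64("max_dirs_files")}
}

// CheckAllocationLimit would run before inserting a new file or directory
// reference for the allocation.
func CheckAllocationLimit(currentCount int64, s Settings) error {
	if currentCount >= s.MaxDirsFiles {
		return fmt.Errorf("allocation already holds %d dirs/files; max_dirs_files is %d",
			currentCount, s.MaxDirsFiles)
	}
	return nil
}
```

On upload or directory creation, the blobber would count the allocation's existing references and call a check like `CheckAllocationLimit` before accepting the new object.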