Potential optimization for imported async functions #434
(Sorry for the slow reply; just getting back from holidays.) Currently, if there is only one flattened parameter, it is passed as a scalar instead of going through linear memory (see ). We do have to be a little careful, since increasing this limit will increase the static size of the engine-internal structure used to represent async calls, which must have space for the maximum number of flattened async parameters. But maybe a modest increase to, say, "4" would be reasonable? It'd be nice to have performance data to motivate this, though.
Proposed in #520.
Exported async functions were changed in November so that they receive their arguments on the value stack, so these calls now impose less overhead.
Imported async functions, on the other hand, still use a heap-allocated list to pass arguments. While this is motivated by keeping overhead minimal in the back-pressure case, it penalizes the common case of async function calls with few parameters, e.g. a string or a list. The overhead of storing the arguments on the host in the (less likely?) back-pressure case could be acceptable compared to this unconditional heap allocation on the client side.
When I brought this up in bytecodealliance/wit-bindgen#1082 (comment), Joel proposed opening a discussion here.