I noticed that string.encode_wtf16_array does not accept a range of the string to encode (something like a sourceStartIndex and sourceEndIndex).
Is it efficient enough to do a slice first and then call encode_wtf16_array, or would it be worthwhile for encode_wtf16_array to take additional parameters?
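To make sure we're talking about the same thing, here is a rough sketch of the "slice first" approach in the text format. It assumes the current draft's signatures (string.encode_wtf16_array taking the string, the target i16 array, and a start offset into the array, and returning the number of code units written); names like $encode_range are just illustrative:

```wat
(module
  (type $i16arr (array (mut i16)))

  ;; Sketch only: encode $str[$start..$end) into $buf at array offset 0
  ;; by materializing a substring via a wtf16 view + slice, then encoding
  ;; that whole substring.
  (func $encode_range
      (param $str stringref) (param $start i32) (param $end i32)
      (param $buf (ref $i16arr)) (result i32)
    (string.encode_wtf16_array
      (stringview_wtf16.slice
        (string.as_wtf16 (local.get $str))
        (local.get $start) (local.get $end))
      (local.get $buf)
      (i32.const 0))))
```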
Creating a slice should be fairly cheap (allocation of a small object, which could possibly be optimized out by engines in the long run). I'm also fine with adding parameters to encode_wtf16_array, or with having a second version of that instruction that takes such parameters. Either way, that instruction would probably belong in the world of stringview_wtf16 then.
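For illustration, such a parameterized variant might look something like the following. The name stringview_wtf16.encode_array and the operand order are made up here, purely to show the shape; nothing like this is in the proposal today:

```wat
;; Hypothetical only -- not part of the proposal. A range-taking encode that
;; operates on the view directly, so no intermediate slice is allocated.
;;
;;   stringview_wtf16.encode_array :
;;     [stringview_wtf16, (ref null (array (mut i16))),
;;      i32 (array start), i32 (source start), i32 (source end)] -> [i32]
(stringview_wtf16.encode_array
  (string.as_wtf16 (local.get $str))   ;; view of the source string
  (local.get $buf)                     ;; destination i16 array
  (i32.const 0)                        ;; start offset in the array
  (local.get $start)                   ;; first code unit to encode
  (local.get $end))                    ;; one past the last code unit
```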
I think it mostly boils down to: which scenario is more common in practice?
- wanting to encode an entire string, without having to spend binary size on providing (0, length) as the range?
- wanting to encode part of a string, without having to explicitly create a slice first?
If it helps, we could also implement the second version as an experiment, then you could test with microbenchmarks to see if it makes a measurable difference.
I don't think there will be any concerns around binary size here, so it's fine if there is only one instruction.
If the slice is effectively a view, I'm not particularly concerned about performance either, but if you'd like to experiment anyway, I can quickly adapt and provide you with some numbers.