This commit adds a new parameter, `max_size` (in bytes), which enforces an
upper limit on the overall HTTP POST body size. This is useful when trying
to maximize bulk import speed by reducing the number of roundtrips needed
to retrieve and send data.
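As a rough illustration (not the actual implementation), a byte-size cap can be layered on top of count-based batching by flushing the current batch whenever the next document would push the POST body over `max_size`. The helper names `import_in_batches`, `serialize`, and `bulk_post` below are hypothetical placeholders:

```python
import json

def bulk_post(lines):
    """Hypothetical stand-in for POSTing a bulk request body to Elasticsearch."""
    body = "\n".join(lines) + "\n"
    print(f"POST _bulk: {len(lines)} lines, {len(body.encode('utf-8'))} bytes")

def import_in_batches(records, batch_size=1_000, max_size=10_000_000):
    batch, batch_bytes = [], 0
    for record in records:
        line = json.dumps(record)               # one line of the bulk request body
        line_bytes = len(line.encode("utf-8"))  # its size in bytes
        # Flush before this record would push the POST body over max_size,
        # or when the count-based batch_size has been reached.
        if batch and (batch_bytes + line_bytes > max_size or len(batch) >= batch_size):
            bulk_post(batch)
            batch, batch_bytes = [], 0
        batch.append(line)
        batch_bytes += line_bytes
    if batch:
        bulk_post(batch)
```

Note that in this sketch a single document larger than `max_size` is still sent on its own; the cap only bounds how documents are grouped.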
This is needed in scenarios where there is no control over Elasticsearch's
maximum HTTP request payload size. For example, AWS's Elasticsearch offering
has either a 10 MiB or 100 MiB HTTP request payload size limit.
`batch_size` is good for bounding local runtime memory usage, but when
indexing large sets of big objects it is entirely possible to exceed a
service provider's underlying request size limit and fail the import
mid-run. This is even worse when `force` is true: the index is then left in
an incomplete state, with no obvious value to lower `batch_size` to in order
to sneak under the limit.
`max_size` defaults to `10_000_000` bytes, to catch the worst-case
scenario on AWS.
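Continuing the hypothetical sketch above, a caller on a cluster with the larger limit could raise the cap while keeping `batch_size` as the memory bound:

```python
# Assumed call site: the 100 MiB AWS limit applies, so only the byte cap changes.
import_in_batches(my_records, batch_size=5_000, max_size=100_000_000)
```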