[SPARK-50996][K8S] Increase `spark.kubernetes.allocation.batch.size` to 10

### What changes were proposed in this pull request?

This PR aims to increase the default value of `spark.kubernetes.allocation.batch.size` from 5 to 10 in Apache Spark 4.0.0.

### Why are the changes needed?

Since Apache Spark 2.3.0, Apache Spark has conservatively used `5` as the default executor allocation batch size for 8 years.

- #19468

Given the improvement of K8s hardware infrastructure over the last 8 years, we had better use a bigger value, `10`, starting with Apache Spark 4.0.0 in 2025. Concretely, when we request 1200 executor pods:

- Batch size `5` takes 4 minutes.
- Batch size `10` takes 2 minutes.

### Does this PR introduce _any_ user-facing change?

Yes, users will see faster Spark job resource allocation. The migration guide is updated correspondingly.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #49681 from dongjoon-hyun/SPARK-50996.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
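The 4-minute vs 2-minute figures above can be sanity-checked with simple arithmetic. The sketch below assumes one batch of pod requests per allocation cycle and the default `spark.kubernetes.allocation.batch.delay` of 1 second; the real allocator loop has more moving parts (pending-pod checks, API latency), so treat this as a rough lower bound, not Spark's actual scheduling logic.

```python
def allocation_minutes(total_pods: int, batch_size: int,
                       batch_delay_s: float = 1.0) -> float:
    """Rough lower bound on time to request `total_pods` executor pods,
    issuing one batch of `batch_size` requests every `batch_delay_s` seconds."""
    batches = -(-total_pods // batch_size)  # ceiling division
    return batches * batch_delay_s / 60.0

# 1200 pods, as in the example above:
print(allocation_minutes(1200, 5))   # batch size 5  -> 4.0 minutes
print(allocation_minutes(1200, 10))  # batch size 10 -> 2.0 minutes
```

Doubling the batch size halves the number of allocation cycles, which is exactly the 4-minute-to-2-minute improvement the PR description quotes.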