
[SPARK-50996][K8S] Increase spark.kubernetes.allocation.batch.size to 10 #49681

Closed · 1 commit

Conversation

@dongjoon-hyun (Member) commented on Jan 26, 2025

What changes were proposed in this pull request?

This PR aims to increase `spark.kubernetes.allocation.batch.size` from 5 to 10 in Apache Spark 4.0.0.
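Users who prefer the previous behavior can override the new default at submit time. A hypothetical sketch; the K8s master URL, container image, and application jar are placeholders, and only the first `--conf` line relates to this change:

```shell
# Hypothetical spark-submit invocation; master URL, image, and jar path are
# placeholders. Setting the property explicitly restores the pre-4.0 default.
spark-submit \
  --master k8s://https://example-k8s-apiserver:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.allocation.batch.size=5 \
  --conf spark.kubernetes.container.image=example/spark:4.0.0 \
  local:///opt/spark/examples/jars/spark-examples.jar
```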

Why are the changes needed?

Since Apache Spark 2.3.0, Apache Spark has conservatively used 5 as the default executor allocation batch size for 8 years.

Given the improvements in K8s hardware infrastructure over the last 8 years, a larger default of 10 is appropriate for Apache Spark 4.0.0 in 2025.

Technically, when requesting 1200 executor pods:

  • Batch Size 5 takes 4 minutes.
  • Batch Size 10 takes 2 minutes.
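The timings above follow from the allocator requesting one batch per allocation interval. A back-of-the-envelope sketch, assuming the default `spark.kubernetes.allocation.batch.delay` of 1 second between batches:

```python
import math

def allocation_minutes(num_executors: int, batch_size: int,
                       batch_delay_s: float = 1.0) -> float:
    """Rough lower bound on allocation time: one batch per delay interval."""
    batches = math.ceil(num_executors / batch_size)
    return batches * batch_delay_s / 60

print(allocation_minutes(1200, 5))   # 4.0 minutes with the old default
print(allocation_minutes(1200, 10))  # 2.0 minutes with the new default
```

This is a simplification; real allocation time also depends on the K8s API server and pod scheduling latency, so the figures are best-case estimates.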

Does this PR introduce any user-facing change?

Yes, users will see faster Spark executor allocation. The migration guide is updated accordingly.

How was this patch tested?

Passed the existing CIs.

Was this patch authored or co-authored using generative AI tooling?

No.

@dongjoon-hyun (Member, Author) commented:

Thank you, @HyukjinKwon ! Merged to master/4.0.

dongjoon-hyun added a commit that referenced this pull request on Jan 27, 2025:

[SPARK-50996][K8S] Increase `spark.kubernetes.allocation.batch.size` to 10

(The commit message repeats the PR description above; the original default of `5` dates to #19468.)

Closes #49681 from dongjoon-hyun/SPARK-50996.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit 9da1cd0)
Signed-off-by: Dongjoon Hyun <[email protected]>
@dongjoon-hyun deleted the SPARK-50996 branch on January 27, 2025 00:48.