
in_kafka: boost throughput #9800

Closed
wants to merge 4 commits into from

Conversation

@coreidcc commented Jan 6, 2025

We have a Kafka cluster that ingests about 40k messages (about 60 MB) per second. Fluent Bit in its current state stands no chance of keeping up with this load. Even Logstash is faster, and Vector consumes all these messages with ease.

Causes:
a) commits each message individually
b) a poll timeout of just 1 ms (this completely overrides fetch.wait.max.ms from Kafka)

probably related to "Batch processing is required in in_kafka. #8030"

Testing: To activate the changes, set:

[INPUT]
    Name                kafka
    threaded            true   -> sets the poll timeout to fetch.wait.max.ms + 50 ms (aligns our timeout with Kafka's, ensuring Kafka triggers the timeout first)
    enable_auto_commit  true   -> disables the explicit commit call

-> The change doesn't do any dynamic allocations at all and therefore can't introduce any memory leaks
-> The change has no impact on packaging

Throughput increased by more than an order of magnitude.
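
For readers who want to see the idea outside the plugin, here is a minimal standalone librdkafka consumer sketch (not the PR's code) showing the two changes described above: relying on batched auto-commit instead of committing every message, and deriving the poll timeout from fetch.wait.max.ms plus 50 ms of headroom. The broker address, group id, and topic name are placeholders.

```c
#include <stdio.h>
#include <stdlib.h>
#include <librdkafka/rdkafka.h>

int main(void)
{
    char errstr[512];
    char conf_val[32];
    size_t dsize = sizeof(conf_val);
    int poll_timeout_ms = 1;   /* old behaviour: fixed 1 ms */

    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                      errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "group.id", "throughput-test",
                      errstr, sizeof(errstr));

    /* change 1: let librdkafka commit offsets in batches instead of
     * committing every single message explicitly */
    rd_kafka_conf_set(conf, "enable.auto.commit", "true",
                      errstr, sizeof(errstr));

    /* change 2: align the consumer poll timeout with fetch.wait.max.ms,
     * adding 50 ms of headroom so the broker-side timeout fires first */
    if (rd_kafka_conf_get(conf, "fetch.wait.max.ms",
                          conf_val, &dsize) == RD_KAFKA_CONF_OK &&
        dsize <= sizeof(conf_val)) {
        poll_timeout_ms = atoi(conf_val) + 50;
    }

    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf,
                                  errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "consumer create failed: %s\n", errstr);
        return 1;
    }
    rd_kafka_poll_set_consumer(rk);

    rd_kafka_topic_partition_list_t *topics =
        rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_list_add(topics, "test-topic",
                                      RD_KAFKA_PARTITION_UA);
    rd_kafka_subscribe(rk, topics);
    rd_kafka_topic_partition_list_destroy(topics);

    /* poll for a while; offsets are committed in batches by the
     * auto-commit interval, no rd_kafka_commit_message() calls here */
    for (int i = 0; i < 1000; i++) {
        rd_kafka_message_t *msg = rd_kafka_consumer_poll(rk, poll_timeout_ms);
        if (!msg)
            continue;             /* timeout, nothing fetched */
        if (!msg->err)
            printf("got %zu bytes\n", msg->len);
        rd_kafka_message_destroy(msg);
    }

    rd_kafka_consumer_close(rk);
    rd_kafka_destroy(rk);
    return 0;
}
```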

@cosmo0920 (Contributor) left a comment

I'd recommend writing timeount as timeout here. This looks like a typo.

dsize = sizeof(conf_val);
res = rd_kafka_conf_get(kafka_conf, "fetch.wait.max.ms", conf_val, &dsize);
if (res == RD_KAFKA_CONF_OK && dsize <= sizeof(conf_val)) {
/* add 50ms so kafa triggers timeout */

kafa -> kafka

Polling every 1 ms and committing each message individually
results in rather poor performance in high-volume Kafka
clusters.

Committing in batches (relying on Kafka's auto-commit)
drastically improves performance.

Signed-off-by: CoreidCC <[email protected]>
Having a 1 ms timeout might make sense if the input plugin is
running in the main thread (not introducing delay for others),
but if we run in our very own thread then we should not
override the fetch.wait.max.ms configuration value from the
Kafka consumer.

This, in conjunction with using auto-commit, again boosts the
throughput significantly.

Signed-off-by: CoreidCC <[email protected]>
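
As a rough illustration of the logic this commit message describes (the helper name and the threaded flag are assumptions for the sketch, not the actual patch), the poll timeout could be chosen like this:

```c
#include <stdlib.h>
#include <librdkafka/rdkafka.h>

/* Illustrative only: choose the consumer poll timeout depending on
 * whether the plugin runs in its own thread. */
static int choose_poll_timeout_ms(rd_kafka_conf_t *kafka_conf, int threaded)
{
    char conf_val[32];
    size_t dsize = sizeof(conf_val);

    if (!threaded) {
        /* sharing the main event loop: keep the old, near non-blocking
         * 1 ms poll so other plugins are not delayed */
        return 1;
    }

    /* running in a dedicated thread: honor the consumer's own
     * fetch.wait.max.ms and add 50 ms of headroom so the broker-side
     * timeout fires before ours */
    if (rd_kafka_conf_get(kafka_conf, "fetch.wait.max.ms",
                          conf_val, &dsize) == RD_KAFKA_CONF_OK &&
        dsize <= sizeof(conf_val)) {
        return atoi(conf_val) + 50;
    }

    return 1;
}
```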
@cosmo0920 (Contributor) left a comment
Basically, this patch sounds good. Would you mind adding a unit test to confirm the newly introduced parameter, similar to this one?

https://github.com/fluent/fluent-bit/blob/master/tests/runtime/out_kafka.c

Just confirming that the newly introduced enable_auto_commit is handled correctly is enough for now.
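
For illustration, such a test might look roughly like the sketch below, loosely modeled on the existing runtime tests. The test name, property values, and broker address are placeholders, and the assumption that an unrecognized property would make flb_start() fail is mine, not part of the PR.

```c
#include <fluent-bit.h>
#include "flb_tests_runtime.h"

void flb_test_in_kafka_enable_auto_commit(void)
{
    flb_ctx_t *ctx;
    int in_ffd;
    int out_ffd;
    int ret;

    ctx = flb_create();
    TEST_CHECK(ctx != NULL);

    flb_service_set(ctx, "Flush", "1", "Grace", "1", NULL);

    /* kafka input with the newly introduced property set */
    in_ffd = flb_input(ctx, (char *) "kafka", NULL);
    TEST_CHECK(in_ffd >= 0);
    ret = flb_input_set(ctx, in_ffd,
                        "brokers", "localhost:9092",   /* placeholder broker */
                        "topics", "test",
                        "threaded", "true",
                        "enable_auto_commit", "true",
                        NULL);
    TEST_CHECK(ret == 0);

    /* discard records; we only care that the config is accepted */
    out_ffd = flb_output(ctx, (char *) "null", NULL);
    TEST_CHECK(out_ffd >= 0);
    flb_output_set(ctx, out_ffd, "match", "*", NULL);

    /* assumption: an unknown input property would cause startup to fail */
    ret = flb_start(ctx);
    TEST_CHECK(ret == 0);

    flb_stop(ctx);
    flb_destroy(ctx);
}

TEST_LIST = {
    {"in_kafka_enable_auto_commit", flb_test_in_kafka_enable_auto_commit},
    {NULL, NULL}
};
```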

@cosmo0920 (Contributor) left a comment

This PR looks good to me. It would be nice to have a test for the newly introduced parameter, but it's not mandatory for now, I believe.

@edsiper (Member) commented Mar 29, 2025

implemented through #10122

@edsiper closed this Mar 29, 2025
Labels: docs-required, ok-package-test