KAFKA-19860: Integration tests for KIP-1226 #20909
Conversation
AndrewJSchofield left a comment:
Thanks for the PR. Just a few comments.
            .toList();
    }

    private SharePartitionOffsetInfo sharePartitionDescription(Admin adminClient, String groupId, TopicPartition tp)
nit: Method name seems wrong. It's returning the share-partition offset info, not the description.
    }

    private List<Integer> topicPartitionLeader(Admin adminClient, String topicName, int partition)
        throws InterruptedException, ExecutionException {
nit: The indentation on this method and the next is a bit odd because the throws and the start of the method body are equally indented. I'd just put the throws on the same line as the argument list for these methods.
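The suggested layout can be sketched as follows. This is a hypothetical stub, not the PR's actual code: the method name and signature mirror the quoted helper, but the body is a placeholder (the real method would query the cluster through the Kafka Admin client). The point is purely the formatting, with the throws clause on the same line as the argument list so the method body's indentation stays visually distinct from the signature.

```java
import java.util.List;
import java.util.concurrent.ExecutionException;

public class FormattingSketch {
    // Hypothetical stub mirroring the quoted helper's signature. The throws
    // clause sits on the same line as the argument list, as the reviewer
    // suggests, instead of being indented to the same depth as the body.
    static List<Integer> topicPartitionLeader(String topicName, int partition)
            throws InterruptedException, ExecutionException {
        return List.of(partition); // placeholder body for illustration only
    }

    public static void main(String[] args) throws Exception {
        System.out.println(topicPartitionLeader("test-topic", 0)); // prints "[0]"
    }
}
```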
        newGroupCoordNodeId.size() == 1 && !Objects.equals(newGroupCoordNodeId.get(0), curShareCoordNodeId.get(0)) &&
        newTopicPartitionLeader.size() == 1 && !Objects.equals(newTopicPartitionLeader.get(0), curShareCoordNodeId.get(0));
    }, DEFAULT_MAX_WAIT_MS, DEFAULT_POLL_INTERVAL_MS, () -> "Failed to elect new leaders after broker shutdown");
    // After share coordinator shutdown and new leaderS election, check that lag is still 1
nit: new "leader's" not "leaderS"
    serverProperties = {
        @ClusterConfigProperty(key = "offsets.topic.num.partitions", value = "3"),
        @ClusterConfigProperty(key = "offsets.topic.replication.factor", value = "3"),
        @ClusterConfigProperty(key = "share.coordinator.state.topic.num.partitions", value = "3"),
You do need replication factor 3, but you only need one partition for each of these topics. Then you'd not have to fiddle around trying to calculate the partitions used for the internal topics.
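Applying that suggestion, the annotation block might look like the sketch below (a configuration fragment only, using the `@ClusterConfigProperty` annotation exactly as quoted above; the changed values are the suggestion, not the PR's merged state). With a single partition per internal topic, the test no longer has to compute which partition of `__consumer_offsets` or `__share_group_state` holds the group's state, while replication factor 3 still lets leadership move when a broker is shut down.

```java
// Sketch of the reviewer's suggestion: one partition per internal topic,
// replication factor 3 retained so leadership can move on broker shutdown.
serverProperties = {
    @ClusterConfigProperty(key = "offsets.topic.num.partitions", value = "1"),
    @ClusterConfigProperty(key = "offsets.topic.replication.factor", value = "3"),
    @ClusterConfigProperty(key = "share.coordinator.state.topic.num.partitions", value = "1"),
    @ClusterConfigProperty(key = "share.coordinator.state.topic.replication.factor", value = "3")
}
```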
        @ClusterConfigProperty(key = "offsets.topic.num.partitions", value = "3"),
        @ClusterConfigProperty(key = "offsets.topic.replication.factor", value = "3"),
        @ClusterConfigProperty(key = "share.coordinator.state.topic.num.partitions", value = "3"),
        @ClusterConfigProperty(key = "share.coordinator.state.topic.replication.factor", value = "3")
I read the code from the bottom so my comment about not needing 3 partitions for these internal topics stands here too. You do need replication factor 3.
AndrewJSchofield left a comment:
Looks good to me. We need a green build to merge.
This PR is part of KIP-1226.

It adds integration tests to the ShareConsumerTest.java file to verify that share-partition lag is reported correctly in various scenarios.

Reviewers: Andrew Schofield [email protected]