mzbench has been obsoleted by the feature benchmark framework (in test/feature-benchmark) and the cloudbench tool (bin/cloudbench). The kafka-avro-generator tool has been obsoleted by parallelizing kgen directly (#9841). So this commit removes mzbench.
To expound on the rationale for removing mzbench:
* mzbench configurations require an unmaintainable duplication of mzcompose.yml files. Each mzbench configuration contains 300+ lines of nearly identical definitions. There was talk of improving this (see #6676, "Add mzbench-friendly avro-insert benchmark"), but those plans never came to fruition.
* The interplay between mzbench and mzcompose is unnecessarily delicate. mzbench expects a composition with workflows named just so, and then parses their output. This makes it very difficult to refactor the underlying compositions, since you don't know whether you're breaking the contract with mzbench. I think most of mzbench's features could be recreated much more simply with, e.g., a `--num-trials` parameter to mzcompose (a minimal sketch follows this list).
* mzbench introduced quite a bit of complexity by trying to be both a demo of using Materialize to power a real-time dashboard [0] and a benchmarking framework. Experience suggests that this results in a tool that is both a suboptimal dashboard and a suboptimal benchmarking framework. It is better to have two separate tools, each optimized for its specific purpose.
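To make the `--num-trials` idea concrete, here is a minimal sketch of how repeated trials could be driven from outside a composition, assuming the usual `./mzcompose run <workflow>` entry point. The flag and this wrapper script are hypothetical and not part of this PR; they only illustrate the shape of the feature.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a `--num-trials` option: run an existing mzcompose
# workflow N times and report per-trial wall-clock timings, instead of having
# mzbench parse workflow output. Illustrative only; this flag does not exist.

import argparse
import statistics
import subprocess
import time


def main() -> None:
    parser = argparse.ArgumentParser(description="Run an mzcompose workflow repeatedly")
    parser.add_argument("workflow", help="name of the workflow to benchmark")
    parser.add_argument("--num-trials", type=int, default=3, help="number of repetitions")
    args = parser.parse_args()

    durations = []
    for trial in range(args.num_trials):
        start = time.monotonic()
        # `./mzcompose run <workflow>` is the standard way to invoke a workflow.
        subprocess.run(["./mzcompose", "run", args.workflow], check=True)
        elapsed = time.monotonic() - start
        durations.append(elapsed)
        print(f"trial {trial + 1}: {elapsed:.2f}s")

    print(f"median over {args.num_trials} trials: {statistics.median(durations):.2f}s")


if __name__ == "__main__":
    main()
```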
The new feature benchmark framework resolves the above concerns: it is focused solely on benchmarking and does not suffer from the code duplication problem.