
Implemented ML Pipeline Continuous new table rows RunInference #37647

Open
aIbrahiim wants to merge 7 commits into apache:master from aIbrahiim:ml-pipeline-continuous-inference

Conversation

@aIbrahiim
Contributor

Please add a meaningful description for your change here


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello @aIbrahiim, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Apache Beam's ML capabilities by introducing a robust example pipeline for performing continuous machine learning inference on structured table data. The new pipeline, built around the RunInference transform, is designed to handle both batch and streaming inputs, allowing for flexible data processing from sources like files or Pub/Sub, applying scikit-learn models, and outputting results to BigQuery while maintaining data schema. The changes also include comprehensive support infrastructure, such as data generation utilities, unit tests, and integration into the performance benchmarking system, ensuring the example is well-tested, documented, and its performance characteristics are measurable.

Highlights

  • New ML Pipeline for Table Row Inference: Introduced a new RunInference pipeline example (table_row_inference.py) designed to process continuous structured table rows, supporting both batch and streaming modes with BigQuery output.
  • Batch-Only Inference Example: Added a simplified batch-only version of the table row inference pipeline (table_row_inference_batch.py) for clearer demonstration and use cases.
  • Comprehensive Utilities and Testing: Included utility functions (table_row_inference_utils.py) for generating sample models and data, managing Pub/Sub resources, and provided dedicated unit tests (table_row_inference_test.py) for the new pipeline components.
  • Performance Benchmarking Integration: Integrated the new table row inference pipelines into the performance benchmarking framework, adding new benchmark definitions and updating existing cost benchmark logic to support streaming throughput metrics.
  • Documentation and Metrics Updates: Updated documentation and Looker metrics configurations to reflect the new table row inference pipelines, including dedicated performance pages for both batch and streaming variants.
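Independent of the Beam transforms, the row-level flow the highlights describe — parse a JSONL row, extract features in column order, run the model, and re-attach the prediction to the original schema — can be sketched in plain Python. The field names and the toy `predict` function below are invented for illustration, not taken from the PR.

```python
import json


def parse_row(line):
  """Parse one JSONL table row into a dict, preserving its schema."""
  return json.loads(line)


def run_inference(row, feature_columns, predict):
  """Extract features in column order, predict, and keep the original fields."""
  features = [row.get(col, 0.0) for col in feature_columns]
  result = dict(row)  # preserve the input schema
  result['prediction'] = predict(features)
  return result


# Stand-in for a scikit-learn model: just sums the features.
toy_model = lambda feats: sum(feats)

line = '{"id": "r1", "f1": 2.0, "f2": 3.0, "label": "x"}'
out = run_inference(parse_row(line), ['f1', 'f2'], toy_model)
# out keeps id/label and adds a 'prediction' field
```

In the real pipeline this shape is what `RunInference` with a scikit-learn model handler provides; the sketch only shows the schema-preservation idea.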


Changelog
  • .test-infra/tools/refresh_looker_metrics.py
    • Updated Looker metric IDs to include new Table Row Inference Sklearn Batch and Streaming pipelines.
  • sdks/python/apache_beam/examples/inference/README.md
    • Documented the new table_row_inference.py example, detailing prerequisites, model/data setup, and execution instructions for batch and streaming modes.
  • sdks/python/apache_beam/examples/inference/table_row_inference.py
    • Added a new RunInference pipeline for continuous table row processing, supporting batch (file input) and streaming (Pub/Sub) modes with BigQuery output.
  • sdks/python/apache_beam/examples/inference/table_row_inference_batch.py
    • Added a simplified batch-only inference pipeline for table rows, offering file or BigQuery output.
  • sdks/python/apache_beam/examples/inference/table_row_inference_test.py
    • Created unit tests for the table_row_inference.py pipeline components, including JSON parsing, schema building, model handling, and pipeline integration.
  • sdks/python/apache_beam/examples/inference/table_row_inference_utils.py
    • Introduced utility functions to create sample scikit-learn models, generate sample JSONL data, and manage Pub/Sub resources for testing and deployment.
  • sdks/python/apache_beam/ml/inference/table_row_inference_requirements.txt
    • Added a new requirements file specifying dependencies for table row inference, including scikit-learn, numpy, google-cloud-monitoring, protobuf, and requests.
  • sdks/python/apache_beam/testing/benchmarks/inference/README.md
    • Updated the README to include documentation for the new Table Row Inference (Sklearn) benchmark, detailing GCS artifacts and pipeline options.
  • sdks/python/apache_beam/testing/benchmarks/inference/table_row_inference_benchmark.py
    • Implemented a new benchmark test for the table row inference pipeline, extending DataflowCostBenchmark to measure performance metrics for both batch and streaming modes.
  • sdks/python/apache_beam/testing/load_tests/dataflow_cost_benchmark.py
    • Modified DataflowCostBenchmark to improve worker time interval extraction, add streaming throughput metrics from Pub/Sub, and refine PCollection throughput metric filtering.
  • sdks/python/apache_beam/testing/load_tests/load_test.py
    • Updated LoadTest to allow TestPipeline initialization with specific options classes, improving flexibility for benchmark configurations.
  • sdks/python/apache_beam/testing/test_pipeline.py
    • Added a get_options_list class method to TestPipeline for parsing command-line arguments related to test pipeline options.
  • website/www/site/content/en/performance/_index.md
    • Updated the performance index page to include links to the new Table Row Inference Sklearn Streaming and Batch performance pages.
  • website/www/site/content/en/performance/tablerowinference/_index.md
    • Added a new performance documentation page for 'Table Row Inference Sklearn Batch', detailing the model, accelerator, host, and linking to cost, version, and date metrics.
  • website/www/site/content/en/performance/tablerowinferencestreaming/_index.md
    • Added a new performance documentation page for 'Table Row Inference Sklearn Streaming', detailing the model, accelerator, host, and linking to cost, version, and date metrics.
  • website/www/site/data/performance.yaml
    • Configured new Looker dashboard IDs for 'tablerowinference' and 'tablerowinferencestreaming' performance metrics, including cost, date, and version tracking.
Ignored Files
  • Ignored by pattern: .github/workflows/** (3)
    • .github/workflows/beam_Inference_Python_Benchmarks_Dataflow.yml
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Table_Row_Inference_Batch.txt
    • .github/workflows/load-tests-pipeline-options/beam_Inference_Python_Benchmarks_Dataflow_Table_Row_Inference_Stream.txt
Activity
  • The pull request introduces new ML pipeline examples, utilities, tests, and performance benchmarks. No specific human activity (comments, reviews, etc.) was provided in the context.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

@github-actions
Contributor

Assigning reviewers:

R: @tvalentyn for label python.
R: @Abacn for label build.
R: @Abacn for label website.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@codecov

codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 2.83019% with 103 lines in your changes missing coverage. Please review.
✅ Project coverage is 57.01%. Comparing base (600bd61) to head (331aa64).
⚠️ Report is 181 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...beam/testing/load_tests/dataflow_cost_benchmark.py | 0.00% | 52 Missing ⚠️ |
| ...chmarks/inference/table_row_inference_benchmark.py | 0.00% | 46 Missing ⚠️ |
| sdks/python/apache_beam/testing/test_pipeline.py | 37.50% | 5 Missing ⚠️ |
Additional details and impacted files
```
@@             Coverage Diff              @@
##             master   #37647      +/-   ##
============================================
- Coverage     57.13%   57.01%   -0.13%
  Complexity     3515     3515
============================================
  Files          1228     1225       -3
  Lines        189092   188725     -367
  Branches       3656     3656
============================================
- Hits         108039   107596     -443
- Misses        77637    77713      +76
  Partials       3416     3416
```

| Flag | Coverage Δ |
| --- | --- |
| python | 80.61% <2.83%> (-0.19%) ⬇️ |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@aIbrahiim force-pushed the ml-pipeline-continuous-inference branch from 3af0279 to 331aa64 on February 21, 2026 at 18:22
@aIbrahiim closed this Feb 23, 2026
@aIbrahiim force-pushed the ml-pipeline-continuous-inference branch from 331aa64 to 077e777 on February 23, 2026 at 17:47
@aIbrahiim reopened this Feb 23, 2026
@Amar3tto requested a review from damccorm February 26, 2026 05:35
Contributor

@damccorm left a comment


Thanks - just had some minor feedback

```python
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```

This requirements file should be in the same folder as the test code that uses it

```python
e.exception))
msg = str(e.exception)
self.assertIn('singleton', msg, msg='Expected singleton view error')
self.assertIn('more than one', msg, msg='Expected multiple-elements error')
```

Why are we making changes to this test class in this PR?

@damccorm
Contributor

/gemini review

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a new example pipeline for table row inference with scikit-learn models, which is a valuable addition. The implementation covers both batch and streaming modes, and includes comprehensive tests, benchmarks, and documentation.

My review has identified a few areas for improvement:

  • Robustness: There are places where the code could be more robust against missing data or arguments.
  • Determinism: The use of Python's built-in hash() for generating fallback keys can lead to non-deterministic behavior in a distributed environment.
  • Code Structure: The introduction of table_row_inference_batch.py alongside table_row_inference.py (which also supports batch mode) is a bit confusing. While the former is described as 'simplified', it also has features the latter lacks (like file output). It would be beneficial to either consolidate these into a single, more capable script or clarify their distinct purposes in the documentation.
  • Exception Handling: Some utility functions catch overly broad exceptions.

I've left specific comments with suggestions to address these points. Overall, this is a great contribution that expands the ML examples in Beam.

Comment on lines +223 to +224

```python
parser.add_argument(
    '--feature_columns', help='Comma-separated list of feature column names')
```

high

The --feature_columns argument is essential for the pipeline to function correctly, but it is not marked as required. If a user runs the script without providing it, the pipeline will fail with an AttributeError at line 264. Please mark this argument as required.

Suggested change

```diff
-parser.add_argument(
-    '--feature_columns', help='Comma-separated list of feature column names')
+parser.add_argument(
+    '--feature_columns', required=True, help='Comma-separated list of feature column names')
```
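A quick stdlib illustration of what `required=True` buys: argparse rejects a missing flag up front instead of deferring the failure to an `AttributeError` deep inside the pipeline. The column names here are made up for the example.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '--feature_columns',
    required=True,
    help='Comma-separated list of feature column names')

# With the flag present, parsing succeeds and splits cleanly into a list.
args = parser.parse_args(['--feature_columns', 'age,income,score'])
columns = args.feature_columns.split(',')

# Without it, argparse exits immediately with a usage error.
try:
  parser.parse_args([])
  missing_rejected = False
except SystemExit:
  missing_rejected = True
```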

```python
features_array = []
for row in batch:
  row_dict = row._asdict()
  features = [row_dict[col] for col in self.feature_columns]
```

medium

This line will raise a KeyError if a feature column from self.feature_columns is not present in the row_dict. For better robustness, consider using row_dict.get(col, 0.0) to handle missing features gracefully, which also provides a default value. The table_row_inference_batch.py example already uses this pattern, and it would be good to be consistent.

Suggested change

```diff
-features = [row_dict[col] for col in self.feature_columns]
+features = [row_dict.get(col, 0.0) for col in self.feature_columns]
```
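The difference the suggestion describes can be seen with a plain dict; the row contents below are invented for illustration.

```python
# A row as it might arrive after _asdict(): one feature column is absent.
row_dict = {'id': 'r1', 'f1': 1.5}
feature_columns = ['f1', 'f2']

# Indexing raises KeyError on the missing column...
try:
  features = [row_dict[col] for col in feature_columns]
except KeyError:
  features = None

# ...while .get() substitutes a default and keeps the pipeline running.
safe_features = [row_dict.get(col, 0.0) for col in feature_columns]
```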

"""
data = json.loads(message.decode('utf-8'))

row_key = data.get('id', str(hash(message)))

medium

Using hash() for generating a fallback key is not recommended as its output is not stable across different Python processes or versions. This can lead to non-deterministic behavior, especially in a distributed environment. For a deterministic key, please use a standard hashing algorithm like SHA256 from the hashlib module. You will need to add import hashlib at the top of the file.

Suggested change

```diff
-row_key = data.get('id', str(hash(message)))
+row_key = data.get('id', hashlib.sha256(message).hexdigest())
```
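A small stdlib sketch of why `hashlib` is the right tool here: SHA-256 is a pure function of the message bytes, so every worker process derives the same key, whereas `hash()` on str/bytes is salted per interpreter (via `PYTHONHASHSEED`) and can differ between workers. The sample payload is made up.

```python
import hashlib

message = b'{"f1": 1.0, "f2": 2.0}'

# Deterministic: the same bytes always produce the same 64-char hex digest,
# across processes, machines, and Python versions.
key = hashlib.sha256(message).hexdigest()

# By contrast, hash(message) is randomized per interpreter process, so two
# Dataflow workers computing a fallback key this way would disagree.
```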

Comment on lines +19 to +29

```python
"""Batch inference pipeline for table rows using RunInference.

This is a simplified batch-only implementation of ML Pipelines #18.
It reads table data from files, runs ML inference, and writes results.

Key Features:
- BATCH PROCESSING ONLY (no streaming complexity)
- Reads from files (JSONL, CSV, or custom)
- Preserves table schema
- Writes to BigQuery or files
- Simple and easy to understand
```

medium

This batch-only pipeline seems largely redundant with table_row_inference.py, which already supports a batch mode. This can be confusing for users, especially since this 'simplified' version includes features like file output that the main script lacks. Consider consolidating the two scripts by adding file output support to table_row_inference.py and removing this file to avoid duplication and improve clarity.

"""
data = json.loads(line)

row_id = data.get('id', str(hash(line)))

medium

Using hash() for generating a fallback key is not recommended as its output is not stable across different Python processes or versions. This can lead to non-deterministic behavior. For a deterministic key, please use a standard hashing algorithm like SHA256 from the hashlib module. You will need to add import hashlib at the top of the file.

Suggested change

```diff
-row_id = data.get('id', str(hash(line)))
+row_id = data.get('id', hashlib.sha256(line.encode('utf-8')).hexdigest())
```

```python
try:
  publisher.get_topic(request={'topic': topic_path})
  logging.info('Topic %s already exists', topic_name)
except Exception:
```

medium

Catching a broad Exception can hide bugs or swallow exceptions that should be handled differently. It's better to catch a more specific exception. In this case, publisher.get_topic raises google.api_core.exceptions.NotFound when the topic doesn't exist, which is available as pubsub_v1.exceptions.NotFound. Please catch that specific exception. This feedback also applies to the except Exception: blocks on lines 194, 222, and 229.

Suggested change

```diff
-except Exception:
+except pubsub_v1.exceptions.NotFound:
```
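The get-or-create pattern with a narrow except clause can be sketched with stand-ins; `NotFound` and `FakePublisher` below are illustrative stubs, not the real `google.api_core.exceptions.NotFound` or Pub/Sub client.

```python
import logging


class NotFound(Exception):
  """Stand-in for google.api_core.exceptions.NotFound."""


class FakePublisher:
  """Minimal stub of a Pub/Sub publisher client, for illustration only."""
  def __init__(self):
    self.topics = set()

  def get_topic(self, request):
    if request['topic'] not in self.topics:
      raise NotFound(request['topic'])

  def create_topic(self, request):
    self.topics.add(request['name'])


def ensure_topic(publisher, topic_path):
  """Create the topic only when the narrow NotFound error says it is absent."""
  try:
    publisher.get_topic(request={'topic': topic_path})
    logging.info('Topic %s already exists', topic_path)
  except NotFound:
    # Any other failure (permissions, transport) still propagates,
    # instead of being silently swallowed by a broad `except Exception`.
    publisher.create_topic(request={'name': topic_path})


publisher = FakePublisher()
ensure_topic(publisher, 'projects/p/topics/t')  # creates it
ensure_topic(publisher, 'projects/p/topics/t')  # second call is a no-op
```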
