
Conversation

Imama-Kainat
Contributor

@Imama-Kainat Imama-Kainat commented Mar 8, 2025

In this update, I streamlined the documentation update process by integrating automation for detecting configuration changes and updating TSV files accordingly.

📌 Branch: ParameterDocUpdation

Here's what I did:

1️⃣ Updated TSV Files: Refactored the tables in the documentation to match the latest changes in the config file. This ensures that the documentation stays accurate and up to date.

2️⃣ Testing & Verification: I checked the updates by reading the rendered docs to confirm that everything displays correctly after the changes.

3️⃣ Automation Script:

  • Created a Scripts/ folder and added update_tsv_docs.py to automatically detect config file changes.
  • The script updates TSV files based on the modified classes:
    • BasePlot
    • Chromatogram
    • Mobilogram
    • PeakMap
    • Spectrum
  • This ensures that whenever a parameter is modified, the corresponding TSV files are adjusted dynamically.
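The detection step described above can be sketched with Python's ast module. The class-to-file mapping and sample source below are illustrative, not the script's actual tables:

```python
import ast

# Illustrative mapping; the real script covers the five plot classes listed above.
CLASS_TO_TSV_MAP = {
    "ChromatogramConfig": "docs/Parameters/chromatogramPlot.tsv",
    "SpectrumConfig": "docs/Parameters/spectrumPlot.tsv",
}

def extract_dataclass_fields(config_source: str) -> dict:
    """Return {class_name: [(field, annotation), ...]} for mapped classes."""
    tree = ast.parse(config_source)
    out = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef) and node.name in CLASS_TO_TSV_MAP:
            out[node.name] = [
                (stmt.target.id, ast.unparse(stmt.annotation))
                for stmt in node.body
                if isinstance(stmt, ast.AnnAssign) and isinstance(stmt.target, ast.Name)
            ]
    return out

sample = '''
class ChromatogramConfig:
    xlabel: str = "Retention Time"
    line_width: float = 1.0
'''
print(extract_dataclass_fields(sample))
# → {'ChromatogramConfig': [('xlabel', 'str'), ('line_width', 'float')]}
```

Parsing with ast (rather than importing the module) keeps the doc tooling independent of the package's runtime dependencies.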

4️⃣ Git Pre-Commit Hook:

  • Wrote a pre-commit hook to trigger the script whenever _config.py is modified.
  • This means that any future changes to the config file will automatically update the TSV files before committing.
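A hook along these lines would implement that trigger; this is a sketch based on the PR description, and the hook actually shipped with the PR may differ:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- regenerate TSV docs when _config.py is staged.
# A sketch based on the PR description; adjust the paths to your checkout.
if git diff --cached --name-only | grep -q "_config.py"; then
    python Scripts/update_tsv_docs.py || exit 1
    # Stage any TSV files the script rewrote so they ship in the same commit.
    git add docs/Parameters/*.tsv
fi
```

Note that files under .git/hooks/ are not versioned, so each contributor has to install the hook locally (for example by copying it into place or using a hook manager).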

5️⃣ Final Testing:

  • Successfully tested the entire workflow and will attach screenshots to demonstrate that the pre-commit hook and script function as expected.

Files Added:

.git/hooks/pre-commit → Ensures automatic documentation updates before every commit.
Scripts/update_tsv_docs.py → Detects config changes and updates TSV files accordingly.

What This Script Can Handle

  • Detects changes in _config.py
  • Automatically updates TSV files in docs/Parameters/
  • Supports parameter updates for multiple visualization classes
  • Ensures that documentation always reflects the latest code changes

This approach removes the need for manual TSV updates, making the process smoother and error-free. 🚀
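The "updates TSV files accordingly" behavior, where unchanged files are left untouched, can be sketched as a small guard (the function name is illustrative):

```python
import tempfile
from pathlib import Path

def write_if_changed(path: Path, content: str) -> bool:
    """Rewrite a TSV only when its content actually differs.

    Leaving unchanged files alone keeps their timestamps stable and avoids
    noisy re-staging in the pre-commit step.
    """
    if path.exists() and path.read_text() == content:
        return False
    path.write_text(content)
    return True

results = []
with tempfile.TemporaryDirectory() as tmp:
    tsv = Path(tmp) / "basePlot.tsv"
    header = "Parameter\tType\tDescription\n"
    results.append(write_if_changed(tsv, header))  # new file -> written
    results.append(write_if_changed(tsv, header))  # identical -> skipped
print(results)  # → [True, False]
```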

Summary by CodeRabbit

  • New Features

    • Introduced an automated tool to update documentation content, ensuring improved accuracy and consistency.
  • Documentation

    • Enhanced and expanded explanations for visual data plots, including ion mobility data.
    • Refined documentation formatting and corrected grammatical issues for better readability.
    • Updated documentation configuration to support interactive plotting features.
  • Refactor

    • Streamlined the underlying plot configuration system for consistent behavior and improved usability.
  • Chores

    • Adjusted version control settings to ignore temporary files.


coderabbitai bot commented Mar 8, 2025

Walkthrough

A new Python script (update_tsv_docs.py) has been added to automate updating TSV files by extracting dataclass definitions, processing inheritance information, and normalizing type annotations. The script implements logging, recursive attribute retrieval, and error handling, updating files only when changes occur. Additionally, documentation files in several parameter directories have been tweaked for formatting, clarity, and grammar. A Sphinx configuration file now includes a new Bokeh extension and a peak count variable. Finally, the foundational plot configuration class has been removed, and a new entry has been added to .gitignore.

Changes

  • Scripts/update_tsv_docs.py — New script added with functions for type normalization (normalize_type), docstring parsing (parse_docstring), recursive attribute extraction (get_all_attributes), and TSV updating (update_tsv_files) with logging and error handling.
  • docs/Parameters/{Chromatogram.rst, Mobilogram.rst, PeakMap.rst, Spectrum.rst}:
    • Chromatogram: adjusted blank lines before "Example Usage" and removed trailing blank line.
    • Mobilogram: expanded and clarified description text.
    • PeakMap: removed extra blank lines and added one before "Example Usage".
    • Spectrum: fixed grammar and removed table title from CSV table.
  • docs/conf.py — Added new Sphinx extension (bokeh.sphinxext.bokeh_plot), introduced new variable spectrum_peak_count = 500, and inserted a comment marker.
  • pyopenms_viz/_config.py — Removed the BasePlotConfig class along with its attributes and methods, affecting all inheriting plot configuration classes.
  • .gitignore — Added entry for .qodo to be ignored by version control.
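The type normalization the walkthrough mentions might look roughly like this sketch; the actual rules in Scripts/update_tsv_docs.py may differ:

```python
import re

def normalize_type(annotation: str) -> str:
    """Collapse verbose annotations into the short forms shown in the TSVs.

    A sketch of the idea only; the real normalize_type may apply other rules.
    """
    # Drop the typing-module prefix: "typing.Optional[str]" -> "Optional[str]".
    annotation = re.sub(r"\btyping\.", "", annotation)
    # Render Optional[X] as "X | None" for readability.
    annotation = re.sub(r"^Optional\[(.+)\]$", r"\1 | None", annotation)
    return annotation.strip()

print(normalize_type("typing.Optional[str]"))  # → str | None
print(normalize_type("Dict[str, int]"))        # → Dict[str, int]
```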

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant S as update_tsv_docs.py
    participant C as Config File
    participant D as Dataclass Processor
    participant T as TSV Handler

    U->>S: Execute script
    S->>C: Read configuration file
    S->>D: Parse class definitions & inheritance
    D-->>S: Return attributes & documentation info
    S->>T: Check & read existing TSV files
    S->>T: Update TSV content if changes detected
    T-->>S: Write updated TSV files

Poem

Hi, I'm a rabbit, hopping free,
I see our docs and scripts now glow with glee!
New features leap, and errors hide,
Configs updated, no bugs reside.
With every line of code so neat,
My little paws tap a joyful beat. 🐇💻
Hoppy changes make our code complete!


@Imama-Kainat
Contributor Author

image
Parameters rendering Check ✅

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
Scripts/update_tsv_docs.py (3)

5-5: Remove unused import.

The os module is not referenced elsewhere and can be safely removed.

-import os
 import re
 import ast
 import logging


9-9: Remove unused import.

is_dataclass is never utilized in this file and should be removed to keep the code clean.

-from dataclasses import is_dataclass
 from typing import Dict, List, Tuple
 from pathlib import Path


89-91: Combine nested if statements.

Consolidate the nested if conditions for readability.

     for node in parsed.body:
-        if isinstance(node, ast.ClassDef):
-            if any(d.id == 'dataclass' for d in node.decorator_list if isinstance(d, ast.Name)):
-                if node.name in CLASS_TO_TSV_MAP:
-                    attrs = process_dataclass(node)
+        if (isinstance(node, ast.ClassDef)
+            and any(d.id == 'dataclass' for d in node.decorator_list if isinstance(d, ast.Name))
+            and node.name in CLASS_TO_TSV_MAP):
+            attrs = process_dataclass(node)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 40549e8 and fd2841c.

⛔ Files ignored due to path filters (5)
  • docs/Parameters/basePlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/chromatogramPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/mobilogramPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/peakMapPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/spectrumPlot.tsv is excluded by !**/*.tsv
📒 Files selected for processing (8)
  • Scripts/update_tsv_docs.py (1 hunks)
  • docs/Parameters/Chromatogram.rst (1 hunks)
  • docs/Parameters/Mobilogram.rst (2 hunks)
  • docs/Parameters/PeakMap.rst (1 hunks)
  • docs/Parameters/Spectrum.rst (1 hunks)
  • docs/conf.py (2 hunks)
  • docs/readthedocs.yml (1 hunks)
  • pyopenms_viz/_config.py (1 hunks)

🔇 Additional comments (9)
pyopenms_viz/_config.py (1)

207-208: Whitespace Consistency Improved

A new blank line has been inserted after the declaration of the legend_config attribute in the BasePlotConfig class. This addition enhances readability without altering functionality.

docs/Parameters/Chromatogram.rst (1)

13-14: Improved Section Spacing

The insertion of blank lines before the "Example Usage" section creates a clearer visual break between the different parts of the document, improving overall readability.

docs/Parameters/PeakMap.rst (1)

16-17: Enhanced Readability Through Spacing

Adding a blank line before the "Example Usage" section strengthens the separation between the "Parameters" and "Example Usage" sections. This formatting adjustment parallels similar improvements across other documentation files.

docs/readthedocs.yml (1)

1-12: New Read the Docs Configuration

This new configuration file correctly specifies the Read the Docs settings, including the Sphinx configuration file (docs/conf.py), installation requirements from requirements.txt, and the designated Python version (3.8). This change will streamline the documentation build and deployment process.

docs/Parameters/Spectrum.rst (2)

4-5: Grammar Improvement

The descriptive sentence has been updated to use "can be plotted" instead of the previously incorrect phrasing. This correction clarifies the intended meaning and improves overall readability.


9-12: CSV Table Directive Refinement

The CSV table directive now omits a specific title (previously "Chromatogram Options"), standardizing the format across documentation files. Ensure that this change aligns with the overall documentation style.

docs/conf.py (1)

324-326: Ensure the new variable is used or remove it if unnecessary.

spectrum_peak_count = 500 appears to be a new configuration variable. Verify that it is used in the documentation or code paths to avoid confusion or stale config values.

docs/Parameters/Mobilogram.rst (2)

4-4: Clear and concise documentation.

Thanks for expanding the explanation on mobilograms. The updated text accurately clarifies their purpose and usage.


17-17: Example usage is well-provided.

Including the minigallery scripts helps users quickly see how to generate mobilogram plots. Good addition!

docs/conf.py Outdated
"sphinx.ext.autosectionlabel",
"sphinx.ext.githubpages",
"sphinx_copybutton",
"sphinx.ext.csv", # Add this line
@coderabbitai coderabbitai bot Mar 8, 2025

💡 Verification agent

🧩 Analysis chain

Verify extension availability.

The extension "sphinx.ext.csv" is not a standard built-in Sphinx extension. Ensure it is installed or recognized in your environment, or replace it with an equivalent extension if necessary.


🌐 Web query:

Is "sphinx.ext.csv" a built-in or third-party extension for Sphinx?

💡 Result:

The sphinx.ext.csv extension is not a built-in Sphinx extension. It is a third-party extension developed by Crate.IO, as evidenced by:

  1. Installation Requirements:
    The extension must be installed via pip (pip install sphinx-csv-filter) and explicitly added to Sphinx's extensions list in conf.py as crate.sphinx.csv[1]. Built-in extensions (e.g., sphinx.ext.autodoc) do not require installation.

  2. Functionality:
    It adds a custom csv-filter directive to enhance CSV table handling with row filtering and column inclusion/exclusion options, which are not part of Sphinx's core features[1].

  3. Documentation:
    Official Sphinx documentation[3][4] lists built-in extensions (prefixed with sphinx.ext.*), and sphinx.ext.csv is absent. Third-party extensions like this one are hosted separately (e.g., on GitHub)[1][2].

For comparison, built-in CSV handling in Sphinx uses the standard csv-table directive[8][16], while this extension adds advanced filtering capabilities.



CSV Extension Configuration Correction

In docs/conf.py (line 99), the extension entry "sphinx.ext.csv" is incorrect because it is not a built-in Sphinx extension. This third-party extension is developed by Crate.IO and must be installed via pip install sphinx-csv-filter. Additionally, you should update the extension name to "crate.sphinx.csv" in the configuration.

  • Install using: pip install sphinx-csv-filter
  • Update the config entry in docs/conf.py from "sphinx.ext.csv" to "crate.sphinx.csv"
  • Alternatively, if advanced CSV filtering isn’t required, consider using the built-in csv-table directive

Contributor Author

Done! I have removed the sphinx.ext.csv line and checked by rendering again.


@Imama-Kainat Great job! Removing the sphinx.ext.csv extension was the right approach. Sphinx already has built-in CSV table support through the csv-table directive, so you don't need any additional extensions for basic CSV functionality. Your documentation should render correctly now.

If you need to work with CSV data in your documentation in the future, you can use the standard directive like this:

.. csv-table:: Title
   :header: "Column 1", "Column 2", "Column 3"
   :widths: 15, 10, 30

   "data1", "data2", "data3"
   "data4", "data5", "data6"

✏️ Learnings added
Learnt from: Imama-Kainat
PR: OpenMS/pyopenms_viz#60
File: docs/conf.py:0-0
Timestamp: 2025-03-08T18:00:46.168Z
Learning: Sphinx has built-in CSV table support through the `csv-table` directive without requiring any additional extensions. The `sphinx.ext.csv` is not a standard extension and should not be added to the configuration.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@Imama-Kainat
Contributor Author

Imama-Kainat commented Mar 8, 2025

image

Auto-generated docs check ✅

@Imama-Kainat
Contributor Author

@jcharkow @t0mdavid-m Please review and let me know if any modifications are required.

Collaborator

@jcharkow jcharkow left a comment


Great job so far as this is one of the more challenging PRs.

I have not had the chance to test your script yet, but I trust that it is working as intended. One thing I did notice, though, is that inheritance might not be followed correctly in your script, as some parameters seem to be missing.

Also, the ordering of some parameters is important: x, y, z, by, and backend must be at the top because these are basic parameters that would be commonly modified. x, y (and possibly z) should have an "*" next to them to indicate that they are mandatory. It might be a good idea to just hardcode these in your script.

Something to consider is whether it is worth encoding the data structures with

x: str = field(metadata=dict(description="description of variable"))

as this might be easier to parse than the docstrings; however, you seem to be parsing the docstrings well already, so this might not be necessary.
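With the standard dataclasses API, that suggestion would look roughly like the sketch below (field accepts keyword arguments only, so the description lives in metadata; the class and descriptions here are illustrative, not from the codebase):

```python
from dataclasses import dataclass, field, fields
from typing import Optional

@dataclass
class ExampleConfig:
    # metadata is a free-form mapping; a "description" key could feed the TSVs.
    x: str = field(default="", metadata={"description": "x-axis column (mandatory)"})
    by: Optional[str] = field(default=None, metadata={"description": "Column to group traces by"})

descriptions = {f.name: f.metadata.get("description") for f in fields(ExampleConfig)}
print(descriptions)
# → {'x': 'x-axis column (mandatory)', 'by': 'Column to group traces by'}
```

Reading fields() at build time would replace docstring parsing entirely, at the cost of importing the module rather than inspecting it statically with ast.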

Please see comments above as I also have a few other minor comments.

Comment on lines 1 to 11
version: 2

sphinx:
  configuration: docs/conf.py

python:
  install:
    - requirements: requirements.txt
  version: 3.8

# Optional: Add any additional configuration here
Collaborator

There is already a .readthedocs.yml file in the main directory so this is likely unneeded.

Contributor Author

I have deleted it.

Comment on lines +325 to +326


Collaborator

remove extra lines

@Imama-Kainat
Contributor Author

Acknowledged!

@jcharkow
Collaborator

@Imama-Kainat please let me know when this is ready for re-review. I see that you have acknowledged all of the comments but I don't see any new commits

…ance, and Git Integration

 **Issues Resolved:**
✅ **Fixed attribute extraction from _config.py** – Now correctly retrieves all attributes, including inherited ones.
✅ **Handled missing attributes in dataclasses** – Ensured that no attributes are skipped when updating .tsv files.
✅ **Resolved file writing issues** – .tsv files now update only when necessary, preventing unnecessary commits.
✅ **Preserved existing descriptions in .tsv files** – Ensures no loss of documentation during updates.
✅ **Integrated with Git pre-commit hook** – Automatically stages updated .tsv files before commit.
✅ **Fixed Git detection of modified .tsv files** – Git now correctly recognizes changes to documentation files.

📌 **Key Changes:**
- Updated update_tsv_docs.py to correctly extract attributes and inheritance.
- Improved logging for better debugging and visibility of updates.
- Fixed issues where .tsv files were not updating due to skipped attributes.
- Enhanced pre-commit hook to detect and stage .tsv changes automatically.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
Scripts/update_tsv_docs.py (2)

12-12: Remove unused typing imports

The List type is imported but never used in the code. Consider removing it to keep imports clean.

-from typing import Dict, List, Tuple
+from typing import Dict, Tuple


90-90: Remove extra blank line

There's an unnecessary blank line that can be removed.

    return attributes

-
def update_tsv_files():
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 57b240b and 8595e55.

⛔ Files ignored due to path filters (5)
  • docs/Parameters/basePlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/chromatogramPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/mobilogramPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/peakMapPlot.tsv is excluded by !**/*.tsv
  • docs/Parameters/spectrumPlot.tsv is excluded by !**/*.tsv
📒 Files selected for processing (5)
  • .gitignore (1 hunks)
  • Scripts/update_tsv_docs.py (1 hunks)
  • docs/Parameters/Mobilogram.rst (1 hunks)
  • docs/Parameters/Spectrum.rst (1 hunks)
  • pyopenms_viz/_config.py (0 hunks)
💤 Files with no reviewable changes (1)
  • pyopenms_viz/_config.py
✅ Files skipped from review due to trivial changes (1)
  • .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
  • docs/Parameters/Spectrum.rst
  • docs/Parameters/Mobilogram.rst

🔇 Additional comments (7)
Scripts/update_tsv_docs.py (7)

7-7: Remove unused import

The os module is imported but never used in the code. This import can be safely removed to keep the codebase clean.

-import os


30-35: Good implementation of type normalization

The type normalization function effectively simplifies complex type annotations using regular expressions, making the output more readable in the TSV files.


37-51: Well-structured docstring parser

This function correctly extracts parameter descriptions from Sphinx-style docstrings and handles multi-line descriptions appropriately.
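For readers following along, a parser in that spirit might look like this (a sketch only, not the script's actual implementation):

```python
import re

def parse_docstring(doc: str) -> dict:
    """Extract {param: description} from Sphinx-style ':param name: text' lines."""
    params, current = {}, None
    for line in doc.splitlines():
        match = re.match(r"\s*:param (\w+):\s*(.*)", line)
        if match:
            current = match.group(1)
            params[current] = match.group(2)
        elif current and line.strip() and not line.strip().startswith(":"):
            # Fold continuation lines onto the preceding :param: entry.
            params[current] += " " + line.strip()
    return params

doc = """Plot a chromatogram.

:param x: Name of the retention-time column,
    as it appears in the dataframe.
:param y: Name of the intensity column.
:returns: A figure object.
"""
print(parse_docstring(doc))
```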


53-89: Solid implementation of attribute inheritance

The recursive attribute retrieval function correctly handles inheritance hierarchies while properly skipping abstract base classes. The logging statements provide good visibility into the process.


91-156: Well-implemented TSV update function

The main function correctly:

  1. Extracts class definitions and inheritance relationships
  2. Processes each configured class
  3. Preserves existing descriptions when available
  4. Only writes files when content has changed
  5. Provides appropriate error handling

This implementation aligns well with the PR objectives of automating documentation updates.


158-159: LGTM!

The main execution block is properly implemented.


1-160: Overall excellent script implementation

This script successfully fulfills the PR objective of automating documentation updates. It handles class inheritance properly, normalizes type annotations, and preserves existing descriptions. The error handling is robust, and the logging provides good visibility into the process.

A few minor suggestions above for improvement, but overall this is a well-written script that will help maintain documentation consistency.


@Imama-Kainat
Contributor Author

Hey @jcharkow ,

Apologies for the delayed response, and thanks for your detailed feedback! 😊 Actually, I went through all your comments and made the necessary changes, but your comment about inheritance really shifted my perspective.

Initially, the script was working perfectly because each class was independent and had the @dataclass decorator applied directly. However, after introducing inheritance, some classes are no longer being detected correctly. Here’s what I’ve observed:

1️⃣ Previously, all classes were processed because they had @dataclass applied directly.
2️⃣ After adding inheritance, the script stopped detecting certain classes since it was only processing @dataclass-decorated classes.
3️⃣ The BaseConfig class extends ABC (an abstract base class), which is not a dataclass, so the script skipped BaseConfig.
4️⃣ Since BaseConfig was skipped, its child classes (BasePlotConfig, ChromatogramConfig, SpectrumConfig, etc.) were also not detected.
5️⃣ However, LegendConfig was still detected because it directly extends BaseConfig without additional dependencies.
6️⃣ The script incorrectly assumed missing parent classes meant missing child classes, leading to incomplete class extraction.
7️⃣ As a result, TSV files were not updated because no attributes were extracted from skipped classes.

What I Tried So Far:
✅ Modified the script to check inheritance first before extracting attributes.
✅ Ensured that child classes are detected even if their parent is not a dataclass.
✅ Attempted to properly inherit attributes in the extraction process.

Despite these fixes, something still seems off. I’m sharing a diagram of the inheritance structure along with the current script format to provide more clarity. I’ll be stepping away for a bit and will revisit with fresh eyes after reviewing other PRs.

If you have any insights on what might be missing, I’d really appreciate it! Thanks for your patience, and I’ll refine this further soon. 😊
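One way around the skipped-parent problem described above is to index every ClassDef first and resolve bases afterwards, so whether an ancestor carries @dataclass (or extends ABC) no longer matters. Class names below are illustrative:

```python
import ast

sample = '''
class BaseConfig:            # not a dataclass; extends ABC in the real file
    title: str = ""

class BasePlotConfig(BaseConfig):
    width: int = 500

class ChromatogramConfig(BasePlotConfig):
    xlabel: str = "Retention Time"
'''

tree = ast.parse(sample)
# Pass 1: index every class definition, decorated or not.
classes = {n.name: n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}

def own_fields(node: ast.ClassDef) -> list:
    return [s.target.id for s in node.body
            if isinstance(s, ast.AnnAssign) and isinstance(s.target, ast.Name)]

def all_fields(name: str) -> list:
    """Merge attributes from ancestors first, then the class's own."""
    if name not in classes:      # e.g. ABC, defined outside this module
        return []
    node = classes[name]
    merged = []
    for base in node.bases:
        if isinstance(base, ast.Name):
            merged += all_fields(base.id)
    return merged + own_fields(node)

print(all_fields("ChromatogramConfig"))  # → ['title', 'width', 'xlabel']
```

Because base classes unknown to the index simply contribute nothing, BaseConfig's children are still processed even though ABC itself is never seen.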

@Imama-Kainat
Contributor Author

image
Inheritance that I am assuming from the config file.

@Imama-Kainat
Contributor Author

image

My current script flow.
