[Draft] Feat/autodiscovery application code #1217


Open · wants to merge 8 commits into base: feat/autodiscovery
Conversation

cwadhwani-splunk (Contributor)

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant.

  • Dependency update
  • Bug fix
  • New feature
  • Refactor/improvement
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details of your test configuration.

Checklist

  • My commit message is conventional
  • I have run pre-commit on all files before creating the PR
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have checked my code and corrected any misspellings

@omrozowicz-splunk (Contributor) commented:

Can you add another example to rendered and run make render? That way we'll see the k8s manifests of how it looks exactly.

@ajasnosz (Contributor) commented:

Let's keep the conventional commit names: feat, fix, docs, etc.

return value

@validator("port", pre=True)
def port_validator(cls, value):
Contributor commented:

It can include the same constraints as InventoryRecord:

            if value < 1 or value > 65535:
                raise ValueError(f"Port out of range {value}")

Contributor (Author) commented:

This validation is already handled in the schema check, so it's not needed here. But if this check should live here instead, it can be removed from the schema check.

Contributor commented:

I think we can keep it in both places, as the schema that we use only applies to the kubernetes deployment.

Contributor (Author) commented:

Added the validation here as well.
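As a sketch of the resolved state, the pydantic v1-style validator with the InventoryRecord-style range check might look like the following (the DiscoveryRecord model name and its fields are hypothetical, introduced only for illustration):

```python
from pydantic import BaseModel, validator


class DiscoveryRecord(BaseModel):  # hypothetical model name
    address: str
    port: int = 161

    @validator("port", pre=True)
    def port_validator(cls, value):
        # Same constraint as InventoryRecord: valid port range 1-65535.
        # pre=True lets us coerce string values from env/CSV first.
        value = int(value)
        if value < 1 or value > 65535:
            raise ValueError(f"Port out of range {value}")
        return value
```

Keeping the check here as well as in the schema means the docker compose path gets the same validation as the kubernetes deployment.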

LOG_LEVEL: ${WORKER_LOG_LEVEL:-INFO}
PREFETCH_COUNT: ${PREFETCH_SENDER_COUNT:-1}
DISCOVERY_PATH: ${DISCOVERY_PATH:-/app/discovery/discovery_devices.csv}
IPv6_ENABLED: ${IPv6_ENABLED:-false}
Contributor commented:

Can be switched to referencing *ipv6 in line 287.

Contributor (Author) commented:

Updated the code

@@ -284,13 +284,14 @@ services:
cpus: ${WORKER_DISCOVERY_CPU_RESERVATIONS:-0.25}
memory: ${WORKER_DISCOVERY_MEMORY_RESERVATIONS:-250M}
environment:
-        <<: [ *general_sc4snmp_data, *splunk_general_setup, *splunk_extended_setup,
-              *pysnmp_debug, *ipv6 ]
+        <<: [*general_sc4snmp_data, *pysnmp_debug, *workers_general_setup, ]
Contributor commented:

Are params from workers_general_setup necessary in discovery? I think they are more related to polling.

Contributor (Author) commented:

Correct, we don't require all of them. We just require SC4SNMP_VERSION and LOG_LEVEL, so I thought of reusing *workers_general_setup. But since it has extra variables, I updated the code accordingly.
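One way this could look in docker-compose.yaml is a dedicated extension anchor holding only the two required variables; a minimal sketch, assuming a hypothetical x-discovery_general_setup anchor name (the other anchors are from the existing file):

```yaml
x-discovery_general_setup: &discovery_general_setup
  SC4SNMP_VERSION: ${SC4SNMP_VERSION:-latest}
  LOG_LEVEL: ${WORKER_LOG_LEVEL:-INFO}

services:
  worker-discovery:
    environment:
      <<: [*general_sc4snmp_data, *pysnmp_debug, *discovery_general_setup]
```

This avoids pulling polling-only variables from *workers_general_setup into the discovery service.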

SC4SNMP_VERSION: ${SC4SNMP_VERSION:-latest}
LOG_LEVEL: ${WORKER_LOG_LEVEL:-INFO}
PREFETCH_COUNT: ${PREFETCH_SENDER_COUNT:-1}
DISCOVERY_PATH: ${DISCOVERY_PATH:-/app/discovery/discovery_devices.csv}
Contributor commented:

Shouldn't this be in volumes?

@cwadhwani-splunk (Contributor, Author) commented on Jul 29, 2025:

This is just the path inside the pod where the CSV will be created. It is required inside the code to add device details. The volume for this path is present in the volumes section.
Note: In the latest commit, I have renamed this env to DISCOVERY_CSV_PATH.

# Discovery
DISCOVERY_ENABLE=true
DISCOVERY_LOG_LEVEL=INFO
DISCOVERY_CONFIG_PATH=/app/discovery/discovery-config.yaml
Contributor commented:

I'm not sure I know which variable is responsible for what here; could you describe them in a reply:
DISCOVERY_FILE_ABSOLUTE_PATH
DISCOVERY_PATH
LOCAL_DISCOVERY_PATH
DISCOVERY_CONFIG_PATH

DISCOVERY_LOG_LEVEL - we use manage_logs.py to set up the logs for docker compose (as it is possible to send the logs directly from docker to splunk). Did you test the script with discovery?

Contributor (Author) commented:

There was some misunderstanding among the discovery-related env variables. I've removed the unnecessary ones and updated the code accordingly. The following variables are now used for discovery:

  • DISCOVERY_CONFIG_FILE_ABSOLUTE_PATH: the absolute path on host to the discovery config file
  • DISCOVERY_PATH: the path to the folder (on host) where the discovery CSV file will be created

The envs below are hardcoded in the docker-compose.yaml file; they are needed inside the code.

  • DISCOVERY_CONFIG_PATH: the path where the customer's discovery-config.yaml will be mounted inside the pod/container.
  • DISCOVERY_CSV_PATH: the path where the final output file will be stored inside the container.

I have not tested manage_logs.py; I'll check it.
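Inside the application, the two container-internal paths could then be read with plain defaults; a minimal sketch (the helper name is hypothetical, and the default values mirror the compose snippets above):

```python
import os


def discovery_paths():
    """Resolve the two container-internal discovery paths.

    Hypothetical helper; defaults mirror the values hardcoded
    in docker-compose.yaml.
    """
    config_path = os.environ.get(
        "DISCOVERY_CONFIG_PATH", "/app/discovery/discovery-config.yaml"
    )
    csv_path = os.environ.get(
        "DISCOVERY_CSV_PATH", "/app/discovery/discovery_devices.csv"
    )
    return config_path, csv_path
```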

Contributor (Author) commented:

I tested the manage_logs.py file, and it adds the logging sections with splunk as the log driver to the discovery and worker-discovery services. I further verified that the logs from the docker container are getting ingested into Splunk.
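For reference, a logging section using Docker's built-in splunk log driver typically looks like the following sketch (splunk-url, splunk-token, and splunk-index are standard driver options; the env variable names are assumptions, not taken from the actual manage_logs.py output):

```yaml
services:
  worker-discovery:
    logging:
      driver: splunk
      options:
        splunk-url: ${SPLUNK_HEC_URL}
        splunk-token: ${SPLUNK_HEC_TOKEN}
        splunk-index: ${SPLUNK_INDEX}
```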

3 participants