Summary
Over the last three years, we created and published the OCP Test & Validation Output specifications. They are based on a JSON schema that standardizes the output of a diag that wants to be OCP compliant (https://github.com/opencomputeproject/ocp-diag-core/blob/main/json_spec/README.md).
We could further improve diag replicability by describing the setup to use, even when we work in different environments.
Motivation
This proposal aims to create a standard for setting up the environment, achieving replicable and comparable results.
These are the goals to achieve:
Describe the diag running environment (including main and accessory dependencies)
Provide descriptive steps to set up, configure, and run the diag
Identify why this package could be useful
Explain how to manage the output
Guide-level explanation
This new spec works with different objects inside a main container: the "Package". This container holds all the information needed to replicate the environment, run the diag(s), and work with the output. It also holds some key information about the diag(s) and why the package is useful to the user (the Diagnosis Path).
The objects allowed in a package are:
Units: these objects define the diag(s)
Diagnosis Paths: what the package is trying to solve (e.g. memory debug)
Dependencies: all the resources the units need to run, except for those related to the environment
Executors: all the information needed to run the units
Worksets: the environments in which the units are able to provide the expected results
Processors: how to run the units (sequentially, in parallel, etc.)
Collectors: how to retrieve and parse the output of the units
An example root structure
{ "PackageDescription": "Diagnose silicon components", "PackageID": "awesome-package", "PackageName": "My wonderful package", "Units": [...], "DiagnosisPaths": [...], "Dependencies": [...], "Executors": [...], "Worksets": [...], "Processors": [...], "Collectors": [...] }Reference-level explanation
Unit
The core component of the spec. A Unit is the definition of a diag in all its aspects.
Example
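A minimal illustrative sketch of what a Unit object could look like; all field names below are hypothetical, not normative:
```json
{
  "UnitID": "memtest",
  "UnitName": "Memory stress test",
  "UnitDescription": "Stress-tests DRAM and emits OCP-compliant JSON output",
  "Version": "1.0.0"
}
```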
Diagnosis Paths
This object defines the diagnosis scope of the package.
Example
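An illustrative sketch of a Diagnosis Path object, with hypothetical field names:
```json
{
  "DiagnosisPathID": "memory-debug",
  "Description": "Identify faulty DIMMs and memory-controller issues"
}
```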
Dependencies
These are the resources needed to run the units (excluding the OS, common libraries, etc.).
Example
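An illustrative sketch of a Dependency object; the field names and the source URL are hypothetical placeholders:
```json
{
  "DependencyID": "memtest-binary",
  "Type": "binary",
  "Source": "https://example.com/downloads/memtest.tar.gz"
}
```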
Executors
These are the definitions of the unit binaries and their input parameters.
Example
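An illustrative sketch of an Executor object, referencing the hypothetical unit above; field names are not normative:
```json
{
  "ExecutorID": "memtest-executor",
  "UnitID": "memtest",
  "Binary": "/opt/diag/memtest",
  "Parameters": ["--duration", "3600", "--output", "json"]
}
```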
Worksets
These are the definitions of the environments in which to run the units. They can be public or private, and they can be based on any type of infrastructure-as-code definition (Docker, MS Cloud, Google Cloud, AWS Cloud, etc.).
Example
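An illustrative sketch of a Workset object based on a Docker definition; field names and the file path are hypothetical:
```json
{
  "WorksetID": "docker-x86",
  "Public": true,
  "Type": "Docker",
  "Definition": "docker/Dockerfile.x86_64"
}
```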
Processors
This section defines how to run the units, including their order and parallelism.
Example
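An illustrative sketch of a Processor object running the hypothetical unit sequentially; field names are not normative:
```json
{
  "ProcessorID": "sequential-run",
  "Mode": "sequential",
  "Units": ["memtest"]
}
```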
Collectors
The collectors define how to retrieve and parse the output. This makes the results comparable, regardless of any processing that may be applied.
Example
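An illustrative sketch of a Collector object; field names and values are hypothetical:
```json
{
  "CollectorID": "json-collector",
  "OutputPath": "/var/log/diag/memtest.json",
  "Parser": "ocp-json"
}
```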
Drawbacks
No drawbacks identified at the moment.
Rationale and alternatives
At the moment, I am not aware of anything comparable to this spec.
Prior art
NA
Unresolved questions
NA
Future possibilities
These specs open the door to additional tools and services.