ONNX provides an OpSchema object as a general way to describe an arbitrary operation through a specification. OpSchema is defined in `onnx/onnx/defs/schema.h`. All core operations on an OpSchema mutate it in place and return a reference to it, which allows a specification to be written in a piped (fluent) manner.
A new operator should be registered with the `ONNX_OPERATOR_SET_SCHEMA` macro. Below is an example of defining the LeakyRelu Op.
```c++
static const char* LeakyRelu_ver6_doc = R"DOC(
LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one
output data (Tensor<T>) where the function `f(x) = alpha * x for x < 0`,
`f(x) = x for x >= 0`, is applied to the data tensor elementwise.
)DOC";

ONNX_OPERATOR_SET_SCHEMA(
    LeakyRelu,  // Name
    6,          // Version
    OpSchema()  // Specification
        .Attr("alpha", "Coefficient of leakage.", AttributeProto::FLOAT, 0.01f)
        .SetDoc(LeakyRelu_ver6_doc)
        .Input(0, "X", "Input tensor", "T")
        .Output(0, "Y", "Output tensor", "T")
        .TypeConstraint(
            "T",
            {"tensor(float16)", "tensor(float)", "tensor(double)"},
            "Constrain input and output types to float tensors.")
        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
```
## Type Constraints

OpSchema provides a way to specify exactly which types are supported by your Op. This is done through the `TypeConstraint` function. In the example above, we define type `T` to be any tensor of floating-point values (`float16`, `float`, `double`). OpSchema allows you to define multiple type constraints as needed.
## Inputs and Outputs
Next we need to define the inputs and outputs of our Op. The `Input` and `Output` specifications follow the same pattern: `(index, name, description, type)`. The type used must be one declared via `TypeConstraint`.
## Shape Inference
OpSchema also provides a way to describe the shape inference portion of your Op. This is done by supplying a `std::function<void(InferenceContext&)>` that propagates shape information to output nodes. A simple implementation of this is `propagateShapeAndTypeFromFirstInput`, which gives the output the same shape and type as the first input.
An `InferenceContext` provides the inference function with access to the node's attributes and input types, and lets it write the inferred output types.
For more information on shape inference, please refer to the [shape inference documentation](https://github.com/onnx/onnx/blob/master/docs/ShapeInference.md).
## Annotation
Op annotations provide a way to express general properties of an Op. These annotations are completely optional but become very useful during ONNX optimization. They generalize the optimization framework by letting passes rely on high-level properties of Ops rather than on the individual Ops themselves. Our LeakyRelu Op, for example, can be annotated as being both elementwise and weakly monotonic increasing. Because of that annotation it automatically reaps the benefits of the `eliminate_nop_monotone_argmax` pass, which removes any monotonic Op before an ArgMax.
For a more detailed description of all the annotations available and their meaning, please refer to `onnx/defs/op_annotation.h`.
## Documentation
It is also necessary to provide a documentation string for each Op, which is done through the `SetDoc` function.