Commit 0628582

ArmenAghouseroad authored and committed

Implement Op Annotation's for ONNX (onnx#1648)

* skeleton for op annotation
* annotate OpSchemas + update optimizations
* remove unused code
* change crlf -> lf
* polish + comments + more annotations
* linting issue
* style changes + minor issues

1 parent ad9d2f7 commit 0628582

13 files changed: +442 −75 lines changed

docs/OpSchema.md

+59

@@ -0,0 +1,59 @@
# OpSchema

ONNX provides an OpSchema object as a general way to describe an arbitrary operation through a specification. OpSchema is defined in `onnx/onnx/defs/schema.h`. All core operations on an OpSchema are performed in place and return a reference to the OpSchema, which allows a specification to be written in a piped (method-chained) manner.

Constructing a new operator schema should be done with the `ONNX_OPERATOR_SET_SCHEMA` macro. Below is an example of defining the LeakyRelu Op.
```c++
static const char* LeakyRelu_ver6_doc = R"DOC(
LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one
output data (Tensor<T>) where the function `f(x) = alpha * x for x < 0`,
`f(x) = x for x >= 0`, is applied to the data tensor elementwise.
)DOC";

ONNX_OPERATOR_SET_SCHEMA(
    LeakyRelu, // Name
    6,         // Version
    OpSchema() // Specification
        .Attr("alpha", "Coefficient of leakage.", AttributeProto::FLOAT, 0.01f)
        .SetDoc(LeakyRelu_ver6_doc)
        .Input(0, "X", "Input tensor", "T")
        .Output(0, "Y", "Output tensor", "T")
        .TypeConstraint(
            "T",
            {"tensor(float16)", "tensor(float)", "tensor(double)"},
            "Constrain input and output types to float tensors.")
        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
        .AddOpAnnotation(OpAnnotationFlag::ElementwiseWeakMonotonicIncreasing));
```
## Type Support

OpSchema provides a way to specify exactly which types are supported by your Op, via the `TypeConstraint` function. In the example above, we define type `T` to be any floating-point tensor (`float16`, `float`, `double`). OpSchema allows you to define multiple type constraints if need be.
## Inputs and Outputs

Next we need to define the inputs and outputs of our Op. The specification of `Input` and `Output` follows the same pattern: `(index, name, description, type)`. The type used must be a type declared via `TypeConstraint`.
## Shape Inference

OpSchema also provides a way to describe the shape-inference behavior of your Op. To do so, define a `std::function<void(InferenceContext&)>` that propagates type and shape information to the output nodes. A simple implementation of this is `propagateShapeAndTypeFromFirstInput`, which gives the output the same type and shape as the first input.

InferenceContext provides us with the following information.
```c++
struct InferenceContext {
  virtual const AttributeProto* getAttribute(const std::string& name) const = 0;
  virtual size_t getNumInputs() const = 0;
  virtual const TypeProto* getInputType(size_t index) const = 0;
  virtual const TensorProto* getInputData(size_t index) const = 0;
  virtual size_t getNumOutputs() const = 0;
  virtual TypeProto* getOutputType(size_t index) = 0;
  virtual GraphInferencer* getGraphAttributeInferencer(
      const std::string& attribute_name) = 0;
  virtual ~InferenceContext() {}
};
```
For more information on shape inference, please refer to the [shape inference documentation](https://github.com/onnx/onnx/blob/master/docs/ShapeInference.md).
## Annotation

Op annotations provide a way to state general properties of an Op. These annotations are completely optional but become very useful during ONNX optimization: they make the optimization framework more general by letting passes rely not on individual ops but on high-level information about how they operate. In the example above, we annotate our LeakyRelu Op as being both elementwise and weakly monotonically increasing. Because of this annotation, it automatically reaps the benefits of the `eliminate_nop_monotone_argmax` pass, which removes any monotonic Op placed directly before an argmax.

For a more detailed description of all the available annotations and their meaning, please refer to `onnx/defs/op_annotation.h`.
## Documentation

It is also necessary to provide a documentation string for each Op, which is done through the `SetDoc` function.

onnx/common/ir.h

+8

@@ -21,6 +21,7 @@
 #include "onnx/common/interned_strings.h"
 #include "onnx/common/graph_node_list.h"
 #include "onnx/common/tensor.h"
+#include "onnx/defs/schema.h"

 #define ONNX_DISALLOW_COPY_AND_ASSIGN(TypeName) \
@@ -464,6 +465,13 @@ struct Node : public Attributes<Node> {
   stage_ = s;
   return this;
 }
+inline bool containsOpAnnotation(OpAnnotationFlag flag) const {
+  auto op_schema = OpSchemaRegistry::Instance()->Schema(this->kind().toString());
+  if (nullptr != op_schema) {
+    return op_schema->ContainsOpAnnotation(flag);
+  }
+  return false;
+}
 // NB: This returns an ArrayRef; that means that it will
 // get invalidated if you resize inputs (e.g., using addInput)
 // We can't return a std::vector<Node*>& because there's no

onnx/defs/math/defs.cc

+40 −10

@@ -81,6 +81,9 @@ will throw errors.
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.");
+    schema.AddOpAnnotation(OpAnnotationFlag::ElementwiseDependent);
+    schema.AddOpAnnotation(
+        OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing);
     schema.TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput);
   };
 }
@@ -223,7 +226,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Relu_ver6_doc = R"DOC(
 Relu takes one input data (Tensor<T>) and produces one output data
@@ -242,7 +248,9 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseWeakMonotonicIncreasing));

 static const char* LeakyRelu_ver6_doc = R"DOC(
 LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one
@@ -262,7 +270,9 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseWeakMonotonicIncreasing));

 static const char* Selu_ver6_doc = R"DOC(
 Selu takes one input data (Tensor<T>) and produces one output data
@@ -294,7 +304,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Elu_ver6_doc = R"DOC(
 Elu takes one input data (Tensor<T>) and produces one output data
@@ -315,7 +328,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Exp_ver6_doc = R"DOC(
 Calculates the exponential of the given input tensor, element-wise.
@@ -337,7 +353,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Log_ver6_doc = R"DOC(
 Calculates the natural log of the given input tensor, element-wise.
@@ -359,7 +378,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Tanh_ver6_doc = R"DOC(
 Calculates the hyperbolic tangent of the given input tensor element-wise.
@@ -381,7 +403,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* Pow_ver7_doc = R"DOC(
 Pow takes input data (Tensor<T>) and exponent Tensor, and
@@ -460,7 +485,10 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(
+            OpAnnotationFlag::ElementwiseStrictMonotonicIncreasing));

 static const char* HardSigmoid_ver6_doc = R"DOC(
 HardSigmoid takes one input data (Tensor<T>) and produces one output data
@@ -481,7 +509,9 @@ ONNX_OPERATOR_SET_SCHEMA(
         "T",
         {"tensor(float16)", "tensor(float)", "tensor(double)"},
         "Constrain input and output types to float tensors.")
-        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput));
+        .TypeAndShapeInferenceFunction(propagateShapeAndTypeFromFirstInput)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseIndependent)
+        .AddOpAnnotation(OpAnnotationFlag::ElementwiseWeakMonotonicIncreasing));

 std::function<void(OpSchema&)> ElementwiseMultiOpDocGenerator(
     const char* name) {

onnx/defs/op_annotation.cc

+7

@@ -0,0 +1,7 @@
+#include "onnx/defs/op_annotation.h"
+
+namespace ONNX_NAMESPACE {
+std::shared_ptr<OpAnnotationRegistry> OpAnnotationRegistry::instance_ =
+    std::shared_ptr<OpAnnotationRegistry>(new OpAnnotationRegistry());
+
+} // namespace ONNX_NAMESPACE
