Add guidance to Sparkplug 4 on how to deal with 'invalid metric values' #545

@wes-johnson

Description

This was created from #62

Directly related, I'd like if the Sparkplug specification could call out exactly how oversized values should be handled relative to the datatype field a metric was birthed with.

For example, if a metric is birthed with a datatype of Int8, but a value being passed is outside the range [-128, 127], what should the recipient do?

- ignore the metric
- clamp it to the min/max that is allowed in an Int8
- pointer-cast to the least significant 8 bits, possibly resulting in an unexpected value
- just handle the larger value and ignore the datatype the metric was birthed with

None of those answers is perfect, but I would prefer that the spec give guidance on which to choose, to minimize corner-case behavioral differences between implementations.
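To make the four options concrete, here is a minimal sketch of what each would do to an out-of-range value on a metric birthed as Int8 (valid range [-128, 127]). The function names are illustrative, not from the Sparkplug specification.

```python
# Candidate behaviors for a value outside the birthed Int8 range.
# These names are hypothetical; the spec currently prescribes none of them.
INT8_MIN, INT8_MAX = -128, 127

def ignore(value):
    """Option 1: drop the metric update entirely when out of range."""
    return value if INT8_MIN <= value <= INT8_MAX else None

def clamp(value):
    """Option 2: clamp to the Int8 min/max."""
    return max(INT8_MIN, min(INT8_MAX, value))

def truncate_cast(value):
    """Option 3: keep only the least significant 8 bits (two's complement),
    which can silently change the value."""
    return ((value + 128) % 256) - 128

def pass_through(value):
    """Option 4: ignore the birthed datatype and keep the larger value."""
    return value

# For value = 300 (out of range for Int8):
#   ignore(300)        -> None
#   clamp(300)         -> 127
#   truncate_cast(300) -> 44   (low 8 bits of 300)
#   pass_through(300)  -> 300
```

Option 3 in particular shows why silent truncation is risky: 300 quietly becomes 44 with no indication that anything was lost.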

There's an ambiguous disconnect right now between what datatype a metric is birthed with and how various applications store and handle values of that metric. I know Ignition will send out-of-range values compared to the Sparkplug birth datatype. MQTT Engine appears to convert everything into the most appropriate Java datatype (int or long) and then use that datatype in Sparkplug when sending messages back to a node.

If it's not obvious, I believe the datatype field a metric is birthed with takes precedence over any datatypes or values used, even if the node later sends the metric with a bigger datatype in a DATA payload. (Which I personally think should be against the spec, but currently is ambiguous.)
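Under that reading, a receiving application could enforce precedence with a simple check against the datatypes captured from the birth certificate. This is a hypothetical host-side sketch; the metric names and the string datatype labels are illustrative, not part of the spec.

```python
# Hypothetical check: the datatype recorded at birth takes precedence,
# so a DATA metric declaring a different datatype is flagged.
birth_datatypes = {"temperature": "Int8"}  # captured from NBIRTH (example)

def validate_data_metric(name, datatype):
    """Return 'ok', or a reason string describing the problem."""
    expected = birth_datatypes.get(name)
    if expected is None:
        return "unknown metric (never birthed)"
    if datatype != expected:
        return f"datatype mismatch: birthed {expected}, got {datatype}"
    return "ok"
```

Whether a mismatch should be a hard rejection or just a warning is exactly the kind of decision the spec could settle.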
