Releases: StyraInc/enterprise-opa

v1.14.0

07 Dec 14:20
4247492

OPA v0.59.0

This release updates the OPA version used in Enterprise OPA to v0.59.0, and integrates some performance improvements and a few bug fixes.

CLI

  • Fixed a panic when running eopa bundle convert on Delta Bundles.

Runtime

  • The Set and Object types received a few small performance optimizations in this release, netting speedups of around 1-7% on some benchmarks.
  • Set union operations are slightly faster now.

v1.13.0

13 Nov 17:12
3d02f2e

OPA v0.58.0
This release contains a security fix for gRPC handlers used with OpenTelemetry, various performance
enhancements, bug fixes, third-party dependency updates, and a way to have Enterprise OPA fall back
to "OPA-mode" when there is no valid license.

OpenTelemetry CVE-2023-47108

This release updates the gRPC handlers used with OpenTelemetry to address a security vulnerability (CVE-2023-47108, GHSA-8pgv-569h-w5rw).

Fallback to OPA

When using eopa run and eopa exec without a valid license, Enterprise OPA will now log a message
and continue executing as if it were an ordinary instance of OPA.

This is enabled by running the license check synchronously. The check is quick when the license file
or environment variables are missing.

If you don't want to fall back to OPA, because you expect your license to be present and valid, you can
pass --no-license-fallback to both eopa run and eopa exec: the license validation will then run asynchronously
and stop the process on failure.

Bug Fixes

  1. The gRPC API's decision logs now include the input sent with the request.
  2. An issue with result caching in mongodb.find and mongodb.find_one has been resolved.

v1.12.0

27 Oct 10:45
0a0319a

This release updates the OPA version used in Enterprise OPA to v0.58.0,
and integrates several performance improvements and a bug fix:

Function return value caching

Function calls in Rego now have their return values cached: when a function is called again with the same arguments,
subsequent evaluations use the cached value instead of re-evaluating the function body.

Currently, only simple argument types are subject to caching: numbers, booleans, and strings. Collection
arguments are exempt.

Library utils lazy loading

If your policy does not make use of any of the data.system.eopa.utils helpers for Enterprise OPA's
built-in functions, they are not loaded, avoiding superfluous work in the compiler.

Topdown-specific compiler stages

When evaluating a policy, certain compiler stages in OPA are now skipped: the Rego VM in
Enterprise OPA does not make use of OPA's rule and comprehension indices, so those indices are no longer
built during compilation.

Numerous Rego VM improvements

The Rego VM now performs fewer allocations, improving overall performance.

Preview API

Fixes a bug with "Preview Selection".

v1.11.1

20 Oct 09:44

This is a bug fix release addressing the following security issue:

OpenTelemetry-Go Contrib security fix CVE-2023-45142:

Denial of service in otelhttp due to unbound cardinality metrics.

Note: GO-2023-2102 (a malicious HTTP/2 client which rapidly creates requests and immediately resets them can cause excessive server resource consumption) was already fixed in v1.11.0.

v1.11.0

13 Oct 08:02

This release includes several bugfixes and a powerful new feature for data source integrations: Rego transform rules!

[New Feature] Data transformations are available for all data source integrations

Enterprise OPA now supports Rego transform rules for all data source plugins!

These transform rules allow you to reshape and modify data fetched by the data sources, before that data is stored in EOPA for use by policies.

This feature can be opted into for a data source by adding a rego_transform key to its YAML configuration block.

Example transform rule with the HTTP data source

For this example, we will assume we have an HTTP endpoint that responds with the following JSON document:

[
    {"username": "alice", "roles": ["admin"]},
    {"username": "bob", "roles": []},
    {"username": "catherine", "roles": ["viewer"]}
]

Here's what the OPA configuration might look like for a fictitious HTTP data source:

plugins:
  data:
    http:
      type: http
      url: https://internal.example.com/api/users
      method: POST            # default: GET
      body: '{"count": 1000}' # default: no body
      file: /some/file        # alternatively, read request body from a file on disk (default: none)
      timeout: "10s"          # default: no timeout
      polling_interval: "20s" # default: 30s, minimum: 10s
      follow_redirects: false # default: true
      headers:
        Authorization: Bearer XYZ
        other-header:         # multiple values are supported
        - value 1
        - value 2
      rego_transform: data.e2e.transform

The rego_transform key at the end means that we will run the data.e2e.transform Rego rule on the incoming data before that data is made available to policies on this EOPA instance.

We then need to define our data.e2e.transform rule. rego_transform rules generally take incoming messages as JSON via input.incoming and return the transformed JSON for later use by other policies.
Below is an example of what a transform rule might look like for our HTTP data source:

package e2e
import future.keywords
transform[id] := d {
  some entry in input.incoming
  id := entry.username
  d := entry.roles
}

In the above example, the transform policy will populate the data.http.users object with key-value pairs. Of note: the http key comes from the datasource plugin configuration above.

Each key-value pair will be generated by iterating across the JSON list in input.incoming, and for each JSON object, the key will be taken from the username field, and the value from the roles field.

Given our earlier data source, the result stored in EOPA for data.http.users will look like:

{
    "alice": ["admin"],
    "bob": [],
    "catherine": ["viewer"]
}
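The transform's effect can be sketched in Python; this illustrates the semantics only, not how Enterprise OPA executes the Rego rule:

```python
# Sketch of what the data.e2e.transform rule computes.
# `incoming` plays the role of input.incoming: the JSON document
# fetched by the HTTP data source.
incoming = [
    {"username": "alice", "roles": ["admin"]},
    {"username": "bob", "roles": []},
    {"username": "catherine", "roles": ["viewer"]},
]

# One key-value pair per entry: username -> roles.
transformed = {entry["username"]: entry["roles"] for entry in incoming}
```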

This general pattern applies to all the data source integrations in Enterprise OPA, including the Kafka data source (covered below).

In addition to input.incoming – containing the incoming information retrieved by the datasource – the value of input.previous can be used to refer to all of the data currently stored in the plugin's data. subtree.

[Changed Behavior] Updates to the Kafka data source's Rego transform rules

The Kafka data source now supports the new rego_transform rule system, the same as all of the other data source integrations. Concretely, it no longer expects the output of the transform rule to be a JSON Patch object applied to the existing data; instead, it expects the output to be the full data object to be persisted.

Because Kafka messages are often incremental updates, the input.previous value should be used to refer to the rest of the data subtree.
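For instance, a transform that merges each incoming message into the previously stored object could behave like this Python sketch (the message shape here is hypothetical, for illustration only):

```python
# Hypothetical sketch: merge an incoming Kafka message into the data
# previously stored by the plugin, mimicking a transform that reads
# input.previous. Field names ("key", "value") are assumptions.
previous = {"alice": {"active": True}}                  # input.previous
incoming = {"key": "bob", "value": {"active": False}}   # input.incoming

# The transform must return the FULL object to persist, not a JSON Patch.
result = dict(previous)
result[incoming["key"]] = incoming["value"]
```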

See the Reference documentation for more details and examples of the new transform rules.

[Changed Behavior] Updates to the dynamodb series of builtins

In this release, dynamodb.send has been split into two more specialized variants embodying the same functionality: dynamodb.get and dynamodb.query.

dynamodb.get

For normal key-value lookups in DynamoDB, dynamodb.get provides a straightforward solution.
Here is a brief usage example:

thread := dynamodb.get({
  "endpoint": "dynamodb.us-west-2.amazonaws.com",
  "region": "us-west-2",
  "table": "thread",
  "key": {
      "ForumName": {"S": "help"},
      "Subject": {"S": "How to write rego"}
  }
}) # => { "row": ...}

See the Reference documentation for more details.

dynamodb.query

For queries on DynamoDB, dynamodb.query allows control over the query expression and other parameters.
Here is a brief usage example:

music := dynamodb.query({
  "region": "us-west-1",
  "table": "foo",
  "key_condition_expression": "#music = :name",
  "expression_attribute_values": {":name": {"S": "Acme Band"}},
  "expression_attribute_names": {"#music": "Artist"}
}) # => { "rows": ... }

See the Reference documentation for more details.

[Changed Behavior] Removal of MongoDB plugin keys

The keys configuration for the MongoDB datasource plugin is now deprecated. Instead, MongoDB's native _id value will be used as the primary key for each document.

Any restructuring or renormalization of the data should now be done via rego_transform.

v1.10.1

02 Oct 22:26
9ace37d

New data source integration: MongoDB

It is now possible to use a single MongoDB collection as a data source, with optional filtering/projection at retrieval time.

For example, if you had collection1 in a MongoDB instance containing the following JSON documents:

[
  {"foo": "a", "bar": 0},
  {"foo": "b", "bar": 1},
  {"foo": "c", "bar": 0},
  {"foo": "d", "bar": 3}
]

If you configured a MongoDB data source to use collection1:

plugins:
  data:
    mongodb.example:
      type: mongodb
      uri: <your_db_uri_here>
      auth: <your_login_info_here>
      database: database
      collection: collection1
      keys: ["foo"]
      filter: {"bar": 0}

The configuration shown above would filter this collection down to just:

[
  {"foo": "a", "bar": 0},
  {"foo": "c", "bar": 0}
]

The keys parameter in the configuration shown earlier guides how the collection is transformed into a Rego Object, mapping the unique key field(s) to the corresponding documents from the filtered collection:

{
  "a": {"foo": "a", "bar": 0},
  "c": {"foo": "c", "bar": 0}
}
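The filter-then-key transformation can be sketched in Python; this is an illustration of the behavior described above, not the plugin's implementation:

```python
# Sketch of the MongoDB data source's filter + keys behavior.
collection1 = [
    {"foo": "a", "bar": 0},
    {"foo": "b", "bar": 1},
    {"foo": "c", "bar": 0},
    {"foo": "d", "bar": 3},
]

filter_query = {"bar": 0}  # the `filter` config value
key_field = "foo"          # the `keys` config value

# Keep only documents matching every field in the filter...
filtered = [d for d in collection1
            if all(d.get(k) == v for k, v in filter_query.items())]

# ...then map each document's key field to the document itself.
result = {d[key_field]: d for d in filtered}
```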

You could then use this data source in a Rego policy just like any other aggregate data type. As a simple example:

package hello_mongodb

import future.keywords.if

filtered_documents := data.mongodb.example

allow if {
  count(filtered_documents) == 2 # Want exactly 2 items in the collection.
}

v1.10.0

29 Sep 19:55
9ace37d

This release updates the OPA version used in Enterprise OPA to v0.57.0, and integrates several bugfixes and new features.

v1.9.5

07 Sep 18:41
9ace37d

These releases contain release engineering fixes to sort out automated publishing of this changelog, capabilities JSON files, and gRPC protobuf definitions.

v1.9.4

07 Sep 17:24
c2af245

These releases contain release engineering fixes to sort out automated publishing of this changelog, capabilities JSON files, and gRPC protobuf definitions.

v1.9.3

07 Sep 16:12

These releases contain release engineering fixes to sort out automated publishing of this changelog, capabilities JSON files, and gRPC protobuf definitions.