
Filters: fix remaining vale/markdownlint errors #2017

Merged

2 changes: 1 addition & 1 deletion SUMMARY.md
@@ -172,7 +172,7 @@
* [Sysinfo](pipeline/filters/sysinfo.md)
* [Tensorflow](pipeline/filters/tensorflow.md)
* [Throttle](pipeline/filters/throttle.md)
* [Type Converter](pipeline/filters/type-converter.md)
* [Type converter](pipeline/filters/type-converter.md)
* [Wasm](pipeline/filters/wasm.md)
* [Outputs](pipeline/outputs/README.md)
* [Amazon CloudWatch](pipeline/outputs/cloudwatch.md)
4 changes: 2 additions & 2 deletions pipeline/filters/ecs-metadata.md
@@ -10,7 +10,7 @@ The plugin supports the following configuration parameters:

| Key | Description | Default |
| :--- | :--- | :--- |
| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option allows you to control both the key names for metadata and the format for metadata values. | _none_ |
| `Add` | Similar to the `ADD` option in the [modify filter](https://docs.fluentbit.io/manual/pipeline/filters/modify). You can specify it multiple times. It takes two arguments: a `KEY` name and `VALUE`. The value uses Fluent Bit [`record_accessor`](https://docs.fluentbit.io/manual/v/1.5/administration/configuring-fluent-bit/record-accessor) syntax to create a template that uses ECS Metadata values. See the list of supported metadata templating keys. This option lets you control both the key names for metadata and the format for metadata values. | _none_ |
| `ECS_Tag_Prefix` | Similar to the `Kube_Tag_Prefix` option in the [Kubernetes filter](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) and performs the same function. The full log tag should be prefixed with this string, and after the prefix the filter must find the next characters in the tag to be the Docker container short ID (the first 12 characters of the full container ID). The filter uses this to identify which container the log came from so it can find which task it's a part of. See the design section for more information. If not specified, it defaults to an empty string, meaning that the tag must be prefixed with the 12-character container short ID. If you want to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks, don't set this parameter and enable the `Cluster_Metadata_Only` option instead. | empty string |
| `Cluster_Metadata_Only` | When enabled, the plugin will only attempt to attach cluster metadata values. Use this option to attach cluster metadata to system or OS logs from processes that don't run as part of containers or ECS Tasks. | `Off` |
| `ECS_Meta_Cache_TTL` | The filter builds a hash table in memory that maps each unique container short ID to its metadata. This option sets a max `TTL` for objects in the hash table. Set it if you have frequent container or task restarts. For example, if your cluster runs short-running batch jobs that complete in less than 10 minutes, there is no reason to keep any stored metadata longer than 10 minutes, so you would set this parameter to `10m`. | `1h` |
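
A minimal sketch of how these options can combine (the tag prefix, templating key, and TTL below are illustrative assumptions, not values taken from this page):

```yaml
# Sketch only: attach ECS metadata to container logs whose tag starts
# with "ecs." followed by the 12-character container short ID.
# All values here are illustrative assumptions.
pipeline:
  filters:
    - name: ecs
      match: 'ecs.*'
      # Tag prefix that precedes the Docker container short ID.
      ecs_tag_prefix: 'ecs.'
      # Template a new key from ECS metadata using record_accessor syntax;
      # $ClusterName is assumed to be a supported templating key.
      add: cluster $ClusterName
      # Expire cached metadata quickly for clusters with short-lived tasks.
      ecs_meta_cache_ttl: 10m
```
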
@@ -269,4 +269,4 @@ pipeline:
```

{% endtab %}
{% endtabs %}
{% endtabs %}
23 changes: 11 additions & 12 deletions pipeline/filters/grep.md
@@ -16,9 +16,9 @@ The plugin supports the following configuration parameters:
| `Exclude` | `KEY REGEX` | Exclude records where the content of `KEY` matches the regular expression. |
| `Logical_Op` | `Operation` | Specify a logical operator: `AND`, `OR`, or `legacy` (default). In `legacy` mode, the behavior is either `AND` or `OR` depending on whether the `grep` is including (uses `AND`) or excluding (uses `OR`). Available in version 2.1 or higher. |

### Record Accessor enabled
### Record accessor enabled

Enable the [Record Accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
Enable the [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md) feature to specify the `KEY`. Use the record accessor to match values against nested values.
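
As a hedged sketch (the label key and value are assumptions, not taken from this page), a nested value can be matched with record accessor syntax:

```yaml
# Sketch only: keep records whose nested Kubernetes "app" label
# matches "myapp". The label name and value are illustrative.
pipeline:
  filters:
    - name: grep
      match: '*'
      regex: "$kubernetes['labels']['app'] myapp"
```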

## Filter records

@@ -53,18 +53,18 @@ fluent-bit -i tail -p 'path=lines.txt' -F grep -p 'regex=log aa' -m '*' -o stdou
```yaml
service:
parsers_file: /path/to/parsers.conf

pipeline:
inputs:
- name: tail
path: lines.txt
parser: json

filters:
- name: grep
match: '*'
regex: log aa

outputs:
- name: stdout
match: '*'
@@ -95,8 +95,7 @@ pipeline:
{% endtab %}
{% endtabs %}


The filter allows to use multiple rules which are applied in order, you can have many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
The filter lets you use multiple rules, which are applied in order. You can have as many `Regex` and `Exclude` entries as required ([more information](#multiple-conditions)).
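
As a hedged illustration of rule ordering (the `log aa` pattern reuses this page's earlier example; the `bb` pattern is an assumption), a filter with one inclusion and one exclusion rule might look like:

```yaml
# Sketch only: rules run in the order given -- first keep records whose
# "log" field matches "aa", then drop those whose "log" also contains "bb".
pipeline:
  filters:
    - name: grep
      match: '*'
      regex: log aa
      exclude: log bb
```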

### Nested fields example

@@ -127,8 +126,8 @@ For example, to exclude records that match the nested field `kubernetes.labels.a
{% tab title="fluent-bit.yaml" %}

```yaml
pipeline:
pipeline:

filters:
- name: grep
match: '*'
@@ -162,7 +161,7 @@ The following example checks for a specific valid value for the key:

```yaml
pipeline:

filters:
# Use Grep to verify the contents of the iot_timestamp value.
# If the iot_timestamp key does not exist, this will fail
@@ -214,7 +213,7 @@ pipeline:
- name: dummy
dummy: '{"endpoint":"localhost", "value":"something"}'
tag: dummy

filters:
- name: grep
match: '*'
@@ -257,4 +256,4 @@ The output looks similar to:
```text
[0] dummy: [1674348410.558341857, {"endpoint"=>"localhost", "value"=>"something"}]
[0] dummy: [1674348411.546425499, {"endpoint"=>"localhost", "value"=>"something"}]
```
```
4 changes: 2 additions & 2 deletions pipeline/filters/kubernetes.md
@@ -302,7 +302,7 @@ parsers:
- name: custom-tag
format: regex
regex: '^(?<namespace_name>[^_]+)\.(?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)\.(?<container_name>.+)\.(?<container_id>[a-z0-9]{64})'

pipeline:
inputs:
- name: tail
@@ -560,7 +560,7 @@ Learn how to solve them to ensure that the Fluent Bit Kubernetes filter is opera

If the roles are configured correctly, it should respond with `yes`.

For instance, using Azure AKS, running the previous command might respond with:
For instance, using Azure Kubernetes Service (AKS), running the previous command might respond with:

```text
no - Azure does not have opinion for this user.
26 changes: 13 additions & 13 deletions pipeline/filters/log_to_metrics.md
@@ -4,7 +4,7 @@ description: Generate metrics from logs

# Logs to metrics

The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a guage for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.
The _log to metrics_ filter lets you generate log-derived metrics. It supports modes to count records, provide a gauge for field values, or create a histogram. You can also match or exclude specific records based on regular expression patterns for values or nested values.

This filter doesn't actually act as a record filter and therefore doesn't change or drop records. All records will pass through this filter untouched, and any generated metrics will be emitted into a separate metric pipeline.

@@ -53,13 +53,13 @@ The following example takes records from two `dummy` inputs and counts all messa
service:
flush: 1
log_level: info

pipeline:
inputs:
- name: dummy
dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
tag: dummy.log

- name: dummy
dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
tag: dummy.log2
@@ -154,13 +154,13 @@ The `gauge` mode needs a `value_field` to specify where to generate the metric v
service:
flush: 1
log_level: info

pipeline:
inputs:
- name: dummy
dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
tag: dummy.log

- name: dummy
dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
tag: dummy.log2
@@ -176,7 +176,7 @@ pipeline:
kubernetes_mode: on
regex: 'message .*el.*'
add_label: app $kubernetes['labels']['app']
label_field:
label_field:
- color
- shape

@@ -218,7 +218,7 @@ pipeline:
add_label app $kubernetes['labels']['app']
label_field color
label_field shape

[OUTPUT]
name prometheus_exporter
match *
@@ -278,13 +278,13 @@ Similar to the `gauge` mode, the `histogram` mode needs a `value_field` to speci
service:
flush: 1
log_level: info

pipeline:
inputs:
- name: dummy
dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
tag: dummy.log

- name: dummy
dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
tag: dummy.log2
@@ -342,7 +342,7 @@ pipeline:
add_label app $kubernetes['labels']['app']
label_field color
label_field shape

[OUTPUT]
name prometheus_exporter
match *
@@ -417,13 +417,13 @@ In the resulting output, there are several buckets by default: `0.005, 0.01, 0.0
service:
flush: 1
log_level: info

pipeline:
inputs:
- name: dummy
dummy: '{"message":"dummy", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 20, "color": "red", "shape": "circle"}'
tag: dummy.log

- name: dummy
dummy: '{"message":"hello", "kubernetes":{"namespace_name": "default", "docker_id": "abc123", "pod_name": "pod1", "container_name": "mycontainer", "pod_id": "def456", "labels":{"app": "app1"}}, "duration": 60, "color": "blue", "shape": "square"}'
tag: dummy.log2
@@ -496,7 +496,7 @@ pipeline:
regex message .*el.*
label_field color
label_field shape

[OUTPUT]
name prometheus_exporter
match *