chore: Fix readme linter errors for processor, aggregator, and parser plugins (#10960)

reimda 2022-06-06 17:04:28 -06:00 committed by GitHub
parent d13314332e
commit 34eff493ae
39 changed files with 591 additions and 338 deletions

View File

@ -1,7 +1,8 @@
# BasicStats Aggregator Plugin
The BasicStats aggregator plugin gives us count, diff, max, min, mean,
non_negative_diff, sum, s2 (variance), and stdev for a set of values, emitting
the aggregate every `period` seconds.
## Configuration
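A minimal configuration sketch (the `stats` option name and the defaults shown
are assumptions; the values only mirror the statistics listed above):

```toml
[[aggregators.basicstats]]
  ## Period over which values are collected before the statistics are emitted.
  period = "30s"
  ## Whether to drop the original metric once it has been aggregated.
  drop_original = false
  ## Statistics to emit (assumed option; values follow the list above).
  stats = ["count", "diff", "min", "max", "mean", "non_negative_diff", "sum", "s2", "stdev"]
```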

View File

@ -31,31 +31,36 @@ derivative = --------------------------------
variable_last - variable_first
```
**Make sure the specified variable is not filtered and exists in the metrics
passed to this aggregator!**
When using a custom derivation variable, you should change the `suffix` of the
derivative name. See the next section on [customizing the derivative
name](#customize-the-derivative-name) for details.
## Customize the Derivative Name
The derivatives generated by the aggregator are named `<fieldname>_rate`,
i.e. they are composed of the field name and a suffix `_rate`. You can
configure the suffix to be used by changing the `suffix` parameter.
## Roll-Over to next Period
Calculating the derivative for a period requires at least two distinct
measurements during that period. Whether those are available depends on the
configuration of the aggregator `period` and the agent `interval`. By default
the last measurement is used as the first measurement in the next aggregation
period. This enables a continuous calculation of the derivative. If an earlier
timestamp is encountered within the next period, this measurement will replace
the roll-over metric. A main benefit of this roll-over is the ability to cope
with multiple "quiet" periods, where no new measurement is pushed to the
aggregator. The roll-over will take place at most `max_roll_over` times.
### Example of Roll-Over
Let us assume we have an input plugin that generates a measurement with a
single metric "test" every 2 seconds. Let this metric increase during the first
10 seconds from 0.0 to 10.0 and then decrease during the next 10 seconds from
10.0 to 0.0:
| timestamp | value |
|-----------|-------|
@ -71,8 +76,9 @@ Let this metric increase the first 10 seconds from 0.0 to 10.0 and then decrease
| 18 | 2.0 |
| 20 | 0.0 |
To avoid thinking about border values, we consider periods to be inclusive at
the start but exclusive at the end. Using `period = "10s"` and `max_roll_over =
0` we would get the following aggregates:
| timestamp | value | aggregate | explanation |
|-----------|-------|-----------|--------------|
@ -90,9 +96,11 @@ Using `period = "10s"` and `max_roll_over = 0` we would get the following aggreg
||| -1.0 | (2.0 - 10.0) / (18 - 10)
| 20 | 0.0 |
If we now decrease the period with `period = "2s"`, no derivative could be
calculated since there would be only one measurement for each period. The
aggregator will emit the log message `Same first and last event for "test",
skipping.`. This changes if we use `max_roll_over = 1`, since now the end
measurement of a period is taken as the start for the next period.
| timestamp | value | aggregate | explanation |
|-----------|-------|-----------|--------------|
@ -108,10 +116,12 @@ This changes, if we use `max_roll_over = 1`, since now end measurements of a per
| 18 | 2.0 | -1.0 | (2.0 - 4.0) / (18 - 16) |
| 20 | 0.0 | -1.0 | (0.0 - 2.0) / (20 - 18) |
The default `max_roll_over = 10` allows for multiple periods without
measurements either due to configuration or missing input.
There may be a slight difference in the calculation when using `max_roll_over`
compared to running without. To illustrate this, let us compare the derivatives
for `period = "7s"`.
| timestamp | value | `max_roll_over = 0` | `max_roll_over = 1` |
|-----------|-------|-----------|--------------|
@ -130,10 +140,15 @@ To illustrate this, let us compare the derivatives for `period = "7s"`.
| 20 | 0.0 |
||| -1.0 | -1.0 |
The difference stems from the change of the value between periods, e.g. from 6.0
to 8.0 between the first and second period. Those changes are omitted with
`max_roll_over = 0` but are respected with `max_roll_over = 1`. That there are
no further differences in the calculated derivatives is due to the example data,
which has constant derivatives during the first and last period, even when
including the gap between the periods. Using `max_roll_over` with a value
greater than 0 may be important if you need to detect changes between periods,
e.g. when you have very few measurements in a period or quasi-constant metrics
with only occasional changes.
## Configuration
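A minimal configuration sketch tying the options discussed above together (the
`variable` option name and the values shown are assumptions, not taken from the
sample configuration):

```toml
[[aggregators.derivative]]
  ## Period over which the derivative is calculated and emitted.
  period = "30s"
  ## Suffix appended to the field name of the emitted derivative.
  suffix = "_rate"
  ## Field to use as the derivation variable instead of the timestamp
  ## (assumed name for the "custom derivation variable" described above).
  # variable = "uptime"
  ## Maximum number of times the last measurement of a period is rolled over
  ## into the next period.
  max_roll_over = 10
```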

View File

@ -4,25 +4,29 @@ The histogram aggregator plugin creates histograms containing the counts of
field values within a range.
If `cumulative` is set to true, values added to a bucket are also added to the
larger buckets in the distribution. This creates a [cumulative histogram][1].
Otherwise, values are added to only one bucket, which creates an [ordinary
histogram][1].
Like other Telegraf aggregators, the metric is emitted every `period` seconds.
By default bucket counts are not reset between periods and will be non-strictly
increasing while Telegraf is running. This behavior can be changed by setting
the `reset` parameter to true.
[1]: https://en.wikipedia.org/wiki/Histogram#/media/File:Cumulative_vs_normal_histogram.svg
## Design
Each metric is passed to the aggregator and this aggregator searches histogram
buckets for the fields that have been specified in the config. If buckets
are found, the aggregator will add 1 to the appropriate
bucket. Otherwise, it will be added to the `+Inf` bucket. Every `period`
seconds this data will be forwarded to the outputs.
The bucket hit-counting algorithm is based on the algorithm implemented in the
Prometheus [client][2].
[2]: https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go
## Configuration
@ -77,9 +81,10 @@ option. Optionally, if `fields` is set only the fields listed will be
aggregated. If `fields` is not set all fields are aggregated.
The `buckets` option contains a list of floats which specify the bucket
boundaries. Each float value defines the inclusive upper (right) bound of the
bucket. The `+Inf` bucket is added automatically and does not need to be
defined. (For left boundaries, these specified bucket borders and `-Inf` will
be used).
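For illustration, a sketch of a bucket definition, assuming the bucket settings
live in `config` sub-tables (the `measurement_name` key and the boundaries shown
are illustrative):

```toml
[[aggregators.histogram]]
  period = "30s"
  cumulative = true
  reset = false

  ## Boundaries are inclusive upper bounds; the +Inf bucket is added
  ## automatically.
  [[aggregators.histogram.config]]
    buckets = [0.0, 10.0, 50.0, 100.0]
    measurement_name = "cpu"
    fields = ["usage_idle"]
```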
## Measurements & Fields

View File

@ -1,4 +1,4 @@
# Merge Aggregator Plugin
Merge metrics together into a metric with multiple fields, which is the most
memory and network transfer efficient form.

View File

@ -1,7 +1,7 @@
# Quantile Aggregator Plugin
The quantile aggregator plugin aggregates specified quantiles for each numeric
field per metric it sees and emits the quantiles every `period`.
## Configuration
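A minimal configuration sketch (the algorithm names follow the sections below;
the `compression` option is an assumption tied to the t-digest algorithm):

```toml
[[aggregators.quantile]]
  ## Emission period for the aggregated quantiles.
  period = "30s"
  ## Quantiles to compute for every numeric field.
  quantiles = [0.25, 0.5, 0.75]
  ## Algorithm to use: "t-digest", "exact R7" or "exact R8" (see below).
  algorithm = "t-digest"
  ## Compression parameter of the t-digest algorithm (assumed option).
  # compression = 100.0
```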
@ -52,14 +52,15 @@ For implementation details see the underlying [golang library][tdigest_lib].
### exact R7 and R8
These algorithms compute quantiles as described in [Hyndman & Fan
(1996)][hyndman_fan]. The R7 variant is used in Excel and NumPy. The R8
variant is recommended by Hyndman & Fan due to its independence from the
underlying sample distribution.
These algorithms save all data for the aggregation `period`. They require a lot
of memory when used with a large number of series or a large number of
samples. They are slower than the `t-digest` algorithm and are recommended only
to be used with a small number of samples and series.
## Benchmark (linux/amd64)
@ -108,8 +109,9 @@ and the default setting for `quantiles` you get the following *output*
- maximum_response_ms_050 (float64)
- maximum_response_ms_075 (float64)
The `status` and `ok` fields are dropped because they are not numeric. Note
that the number of resulting fields scales with the number of `quantiles`
specified.
### Tags

View File

@ -1,16 +1,22 @@
# Starlark Aggregator Plugin
The `starlark` aggregator allows you to implement a custom aggregator plugin with
a Starlark script. The Starlark script needs to be composed of the three methods
defined in the Aggregator plugin interface, which are `add`, `push` and `reset`.
The Starlark Aggregator plugin calls the Starlark function `add` to add the
metrics to the aggregator, then calls the Starlark function `push` to push the
resulting metrics into the accumulator and finally calls the Starlark function
`reset` to reset the entire state of the plugin.
The Starlark functions can use the global function `state` to temporarily keep
the metrics to aggregate.
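As a sketch of how the three functions and `state` fit together, assuming the
aggregator accepts the same inline `source` option as the Starlark processor
plugin (a `script` file path could be used instead):

```toml
[[aggregators.starlark]]
  ## Inline Starlark script implementing the aggregator interface.
  source = '''
# Keep only the last metric seen and re-emit it every period.
def add(metric):
    state["last"] = metric

def push():
    return state.get("last")

def reset():
    state.clear()
'''
```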
The Starlark language is a dialect of Python, and will be familiar to those who
have experience with the Python language. However, there are major
[differences](#python-differences). Existing
Python code is unlikely to work unmodified. The execution environment is
sandboxed, and it is not possible to do I/O operations such as reading from
files or sockets.
The **[Starlark specification][]** has details about the syntax and available
@ -52,24 +58,27 @@ def reset():
## Usage
The Starlark code should contain a function called `add` that takes a metric as
an argument. The function will be called with each metric to add, and doesn't
return anything.
```python
def add(metric):
    state["last"] = metric
```
The Starlark code should also contain a function called `push` that doesn't take
any argument. The function will be called to compute the aggregation, and
returns the metrics to push to the accumulator.
```python
def push():
    return state.get("last")
```
The Starlark code should also contain a function called `reset` that doesn't
take any argument. The function will be called to reset the plugin, and doesn't
return anything.
```python
def reset():
@ -81,22 +90,28 @@ the [Starlark specification][].
## Python Differences
Refer to the section [Python
Differences](../../processors/starlark/README.md#python-differences) of the
documentation about the Starlark processor.
## Libraries available
Refer to the section [Libraries
available](../../processors/starlark/README.md#libraries-available) of the
documentation about the Starlark processor.
## Common Questions
Refer to the section [Common
Questions](../../processors/starlark/README.md#common-questions) of the
documentation about the Starlark processor.
## Examples
- [minmax](testdata/min_max.star) - A minmax aggregator implemented with a Starlark script.
- [merge](testdata/merge.star) - A merge aggregator implemented with a Starlark script.
[All examples](testdata) are in the testdata folder.
Open a Pull Request to add any other useful Starlark examples.

View File

@ -1,4 +1,4 @@
# Collectd Parser Plugin
The collectd format parses the collectd binary network protocol. Tags are
created for host, instance, type, and type instance. All collectd values are
@ -11,12 +11,13 @@ You can control the cryptographic settings with parser options. Create an
authentication file and set `collectd_auth_file` to the path of the file, then
set the desired security level in `collectd_security_level`.
Additional information including client setup can be found [here][1].
You can also change the path to the typesdb or add additional typesdb using
`collectd_typesdb`.
[1]: https://collectd.org/wiki/index.php/Networking_introduction#Cryptographic_setup
## Configuration
```toml

View File

@ -1,4 +1,4 @@
# CSV Parser Plugin
The `csv` parser creates metrics from a document containing comma separated
values.
@ -107,10 +107,10 @@ time using the JSON document you can use the `csv_timestamp_column` and
`csv_timestamp_format` options together to set the time to a value in the parsed
document.
The `csv_timestamp_column` option specifies the key containing the time value
and `csv_timestamp_format` must be set to `unix`, `unix_ms`, `unix_us`,
`unix_ns`, or a format string using the Go "reference time", which is defined
to be the **specific time**: `Mon Jan 2 15:04:05 MST 2006`.
Consult the Go [time][time parse] package for details and additional examples
on how to set the time format.
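A short sketch of these two options in a file input (the file name, column name
and header-row option shown are illustrative):

```toml
[[inputs.file]]
  files = ["metrics.csv"]
  data_format = "csv"
  csv_header_row_count = 1
  ## Column holding the timestamp and the layout it uses.
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```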

View File

@ -1,6 +1,11 @@
# Dropwizard Parser Plugin
The `dropwizard` data format can parse the [JSON Dropwizard][dropwizard]
representation of a single dropwizard metric registry. By default, tags are
parsed from metric names as if they were actual influxdb line protocol keys
(`measurement<,tag_set>`) which can be overridden by defining a custom [template
pattern][templates]. All field value types are supported, `string`, `number` and
`boolean`.
[templates]: /docs/TEMPLATE_PATTERN.md
[dropwizard]: http://metrics.dropwizard.io/3.1.0/manual/json/
@ -127,8 +132,9 @@ measurement,metric_type=histogram count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1
measurement,metric_type=timer count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0,stddev=1.0,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
```
You may also parse a dropwizard registry from any JSON document which contains a
dropwizard registry in some inner field. For example, to parse the following
JSON document:
```json
{

View File

@ -1,4 +1,4 @@
# Form Urlencoded Parser Plugin
The `form-urlencoded` data format parses `application/x-www-form-urlencoded`
data, such as commonly used in the [query string][].

View File

@ -1,4 +1,4 @@
# Graphite Parser Plugin
The Graphite data format translates graphite *dot* buckets directly into
telegraf measurement names, with a single value field, and optional tags.

View File

@ -1,10 +1,10 @@
# Grok Parser Plugin
The grok data format parses line delimited data using a regular expression like
language.
The best way to get acquainted with grok patterns is to read the logstash docs,
which are available [here][1].
The grok parser uses a slightly modified version of logstash "grok"
patterns, with the format:
@ -54,13 +54,16 @@ You must capture at least one field per line.
- ts-"CUSTOM"
CUSTOM time layouts must be within quotes and be the representation of the
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`. To match a
comma decimal point you can use a period in the pattern string. For example
`%{TIMESTAMP:timestamp:ts-"2006-01-02 15:04:05.000"}` can be used to match
`"2018-01-02 15:04:05,000"`. See the [Golang Time
docs](https://golang.org/pkg/time/#Parse) for more details.
Telegraf has many of its own [built-in patterns][] as well as support for most
of the Logstash builtin patterns using [these Go compatible
patterns][grok-patterns].
**Note:** Golang regular expressions do not support lookahead or lookbehind.
Logstash patterns that use these features may not be supported, or may use a Go
@ -69,8 +72,10 @@ friendly pattern that is not fully compatible with the Logstash pattern.
[built-in patterns]: /plugins/parsers/grok/influx_patterns.go
[grok-patterns]: https://github.com/vjeantet/grok/blob/master/patterns/grok-patterns
If you need help building patterns to match your logs, you will find the [Grok
Debug](https://grokdebug.herokuapp.com) application quite useful!
[1]: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
## Configuration
@ -160,7 +165,8 @@ Wed Apr 12 13:10:34 PST 2017 value=42
'''
```
This example input and config parses a file using a custom timestamp conversion
that doesn't match any specific standard:
```text
21/02/2017 13:10:34 value=42
@ -175,12 +181,14 @@ This example input and config parses a file using a custom timestamp conversion
'''
```
For cases where the timestamp itself is without offset, the `timezone` config
var is available to denote an offset. By default (with `timezone` either
omitted, blank or set to `"UTC"`), the times are processed as if in the UTC
timezone. If specified as `timezone = "Local"`, the timestamp will be processed
based on the current machine timezone configuration. Lastly, if using a
timezone from the list of Unix
[timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), grok
will offset the timestamp accordingly.
#### TOML Escaping
@ -227,8 +235,9 @@ A multi-line literal string allows us to encode the pattern:
#### Tips for creating patterns
Writing complex patterns can be difficult; here is some advice for writing a new
pattern or testing a pattern developed
[online](https://grokdebug.herokuapp.com).
Create a file output that writes to stdout, and disable other outputs while
testing. This will allow you to see the captured metrics. Keep in mind that

View File

@ -1,4 +1,4 @@
# Influx Line Protocol Parser Plugin
Parses metrics using the [Influx Line Protocol][].

View File

@ -1,10 +1,11 @@
# JSON Parser Plugin
The JSON data format parses a [JSON][json] object or an array of objects into
metric fields.
**NOTE:** All JSON numbers are converted to float fields. JSON strings and
booleans are ignored unless specified in the `tag_key` or `json_string_fields`
options.
## Configuration
@ -100,11 +101,11 @@ the Go "reference time" which is defined to be the specific time:
Consult the Go [time][time parse] package for details and additional examples
on how to set the time format.
When parsing times that don't include a timezone specifier, times are assumed to
be UTC. To default to another timezone, or to local time, specify the
`json_timezone` option. This option should be set to a [Unix TZ
value](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), such as
`America/New_York`, to `Local` to utilize the system timezone, or to `UTC`.
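A short sketch of a file input using these options (the `json_time_key` and
`json_time_format` option names are assumptions alongside the `json_timezone`
option described above; the file name, key and layout are illustrative):

```toml
[[inputs.file]]
  files = ["example.json"]
  data_format = "json"
  ## Key holding the timestamp, its layout, and the timezone to assume for
  ## values that carry no offset.
  json_time_key = "time"
  json_time_format = "2006-01-02 15:04:05"
  json_timezone = "America/New_York"
```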
## Examples

View File

@ -1,13 +1,14 @@
# JSON Parser Version 2 Plugin
This parser takes valid JSON input and turns it into line protocol. The query
syntax supported is [GJSON Path
Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md); you can test
out your GJSON path at this playground: [gjson.dev](https://gjson.dev). You can
find multiple examples under the `testdata` folder.
## Configuration
**Example configuration:**
```toml
[[inputs.file]]
urls = []
@ -63,6 +64,12 @@ You configure this parser by describing the line protocol you want by defining t
key = "int"
```
You configure this parser by describing the line protocol you want by defining
the fields and tags from the input. The configuration is divided into config
sub-tables called `field`, `tag`, and `object`. In the example above you can see
all the possible configuration keys you can define for each config table. In the
sections that follow these configuration keys are defined in more detail.
---
### root config options
@ -81,11 +88,24 @@ such as `America/New_York`, to `Local` to utilize the system timezone, or to `UT
### `field` and `tag` config options
`field` and `tag` represent the elements of [line protocol][lp-ref]. You can use
the `field` and `tag` config tables to gather a single value or an array of
values that all share the same type and name. With this you can add a field or
tag to a line protocol from data stored anywhere in your JSON. If you define the
GJSON path to return a single value then you will get a single resulting line
protocol that contains the field/tag. If you define the GJSON path to return an
array of values, then each field/tag will be put into a separate line protocol
(you use the # character to retrieve JSON arrays, find examples
[here](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md#arrays)).
Note that objects are handled separately, therefore if you provide a path that
returns an object it will be ignored. You will need to use the `object` config
table to parse objects, because `field` and `tag` don't handle relationships
between data. Each `field` and `tag` you define is handled as a separate data
point.
The notable difference between `field` and `tag` is that `tag` values will
always be type string while `field` can be multiple types. You can define the
type of `field` to be any [type that line protocol supports][types], which are:
* float
* int
@ -93,9 +113,18 @@ The notable difference between `field` and `tag`, is that `tag` values will alwa
* string
* bool
[lp-ref]: https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/
[types]: https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/#data-types-and-format
#### **field**
Using this field configuration you can gather non-array/non-object
values. Note this acts as a global field when used with the `object`
configuration: if you gather an array of values using `object` then the field
gathered will be added to each resulting line protocol without acknowledging its
location in the original JSON. This is defined in TOML as an array table using
double brackets.
* **path (REQUIRED)**: A string with valid GJSON path syntax to a non-array/non-object value
* **name (OPTIONAL)**: You can define a string value to set the field name. If not defined it will use the trailing word from the provided query.
@ -104,19 +133,26 @@ Using this field configuration you can gather a non-array/non-object values. Not
#### **tag**
Using this tag configuration you can gather non-array/non-object values. Note
this acts as a global tag when used with the `object` configuration: if you
gather an array of values using `object` then the tag gathered will be added to
each resulting line protocol without acknowledging its location in the original
JSON. This is defined in TOML as an array table using double brackets; a short
sketch follows the key list below.
* **path (REQUIRED)**: A string with valid GJSON path syntax to a non-array/non-object value
* **name (OPTIONAL)**: You can define a string value to set the field name. If not defined it will use the trailing word from the provided query.
* **optional (OPTIONAL)**: Setting optional to true will suppress errors if the configured Path doesn't match the JSON. This should be used with caution because it removes the safety net of verifying the provided path. An example case to use this is with the `inputs.mqtt_consumer` plugin when you are expecting multiple JSON files.
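A short sketch using the keys listed above (the nesting under
`inputs.file.json_v2` and the example paths are assumptions for illustration):

```toml
[[inputs.file]]
  files = ["example.json"]
  data_format = "json_v2"

  [[inputs.file.json_v2]]
    ## One tag and one field gathered from anywhere in the document.
    [[inputs.file.json_v2.tag]]
      path = "store.name"            # GJSON path to a single value
    [[inputs.file.json_v2.field]]
      path = "store.books.#.price"   # array path: one line protocol per value
      name = "price"                 # optional name override
```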
For good examples of using `field` and `tag` you can reference the following
example configs:
---
### object
With the configuration section `object`, you can gather values from [JSON
objects](https://www.w3schools.com/js/js_json_objects.asp). This is defined in
TOML as an array table using double brackets.
#### The following keys can be set for `object`
@ -155,7 +191,11 @@ The following describes the high-level approach when parsing arrays and objects:
**Object**: Every key/value in an object is treated as a *single* line protocol
When handling nested arrays and objects, the rules above continue to apply as
the parser creates line protocol. When an object has multiple arrays as values,
the arrays will become separate line protocol containing only non-array values
from the object. Below you can see an example of this behavior, with an input
JSON containing an array of book objects that has a nested array of characters.
Example JSON:
@ -216,7 +256,8 @@ You can find more complicated examples under the folder `testdata`.
## Types
For each field you have the option to define the types. The following rules are
in place for this configuration:
* If a type is explicitly defined, the parser will enforce this type and convert the data to the defined type if possible. If the type can't be converted then the parser will fail.
* If a type isn't defined, the parser will use the default type defined in the JSON (int, float, string)

View File

@ -1,4 +1,4 @@
# Logfmt Parser Plugin
The `logfmt` data format parses data in [logfmt] format.

View File

@ -1,4 +1,4 @@
# Nagios Parser Plugin
The `nagios` data format parses the output of nagios plugins.

View File

@ -1,9 +1,14 @@
# Prometheus Text-Based Format Parser Plugin
There are no additional configuration options for [Prometheus Text-Based
Format][]. The metrics are parsed directly into Telegraf metrics. It is used
internally in [prometheus input](/plugins/inputs/prometheus) or can be used in
[http_listener_v2](/plugins/inputs/http_listener_v2) to simulate Pushgateway.
[Prometheus Text-Based Format]: https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format
## Configuration
```toml
[[inputs.file]]
files = ["example"]

View File

@ -1,6 +1,8 @@
# Prometheus Remote Write Parser Plugin
Converts prometheus remote write samples directly into Telegraf metrics. It can
be used with [http_listener_v2](/plugins/inputs/http_listener_v2). There are no
additional configuration options for Prometheus Remote Write Samples.
## Configuration

View File

@ -1,4 +1,4 @@
# Value Parser Plugin
The "value" data format translates single values into Telegraf metrics. This
is done by assigning a measurement name and setting a single field ("value")
@ -6,18 +6,6 @@ as the parsed metric.
## Configuration
```toml
[[inputs.exec]]
## Commands array
@ -36,3 +24,15 @@ name of the plugin.
data_format = "value"
data_type = "integer" # required
```
You **must** tell Telegraf what type of metric to collect by using the
`data_type` configuration option. Available options are:
1. integer
2. float or long
3. string
4. boolean
**Note:** It is also recommended that you set `name_override` to a measurement
name that makes sense for your metric, otherwise it will just be set to the
name of the plugin.
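A sketch of the recommendation above (the exec command and the measurement name
are illustrative):

```toml
[[inputs.exec]]
  ## Command printing a single integer value to stdout.
  commands = ["cat /proc/sys/kernel/random/entropy_avail"]
  ## Give the metric a meaningful measurement name instead of the plugin name.
  name_override = "entropy_available"
  data_format = "value"
  data_type = "integer" # required
```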

View File

@ -1,4 +1,4 @@
# Wavefront Parser Plugin
Wavefront Data Format metrics are parsed directly into Telegraf metrics.
For more information about the Wavefront Data Format see
@ -6,8 +6,6 @@ For more information about the Wavefront Data Format see
## Configuration
```toml
[[inputs.file]]
files = ["example"]
@ -18,3 +16,6 @@ There are no additional configuration options for Wavefront Data Format line-pro
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "wavefront"
```
There are no additional configuration options for Wavefront Data Format
line-protocol.

View File

@ -1,10 +1,13 @@
# XPath Parser Plugin
The XPath data format parser parses different formats into metric fields using
[XPath][xpath] expressions.
For supported XPath functions check [the underlying XPath library][xpath lib].
__NOTE:__ The types of fields are specified using [XPath functions][xpath
lib]. The only exception is _integer_ fields, which need to be specified in a
`fields_int` section.
## Supported data formats
@ -17,21 +20,32 @@ __NOTE:__ The type of fields are specified using [XPath functions][xpath lib]. T
### Protocol-buffers additional settings
For using the protocol-buffer format you need to specify additional
(_mandatory_) properties for the parser. Those options are described here.
#### `xpath_protobuf_file` (mandatory)
Use this option to specify the name of the protocol-buffer definition file
(`.proto`).
#### `xpath_protobuf_type` (mandatory)
This option contains the top-level message type to use for deserializing the
data to be parsed. Usually, this is constructed from the `package` name in the
protocol-buffer definition file and the `message` name as
`<package name>.<message name>`.
#### `xpath_protobuf_import_paths` (optional)
In case you import other protocol-buffer definitions within your `.proto` file
(i.e. you use the `import` statement) you can use this option to specify paths
to search for the imported definition file(s). By default the imports are only
searched in `.` which is the current-working-directory, i.e. usually the
directory you are in when starting telegraf.
Imagine you have multiple protocol-buffer definitions (e.g. `A.proto`,
`B.proto` and `C.proto`) in a directory (e.g. `/data/my_proto_files`) where your
top-level file (e.g. `A.proto`) imports at least one other definition
```protobuf
syntax = "proto3";
@ -59,9 +73,7 @@ You should use the following setting
...
```
## Configuration
```toml
[[inputs.file]]
@ -124,14 +136,22 @@ In this configuration mode, you explicitly specify the field and tags you want t
ok = "Mode != 'ok'"
```
In this configuration mode, you explicitly specify the field and tags you want
to scrape out of your data.
A configuration can contain multiple _xpath_ subsections for e.g. the file plugin
to process the xml-string multiple times. Consult the [XPath syntax][xpath] and
the [underlying library's functions][xpath lib] for details and help regarding
XPath queries. Consider using an XPath tester such as [xpather.com][xpather] or
[Code Beautify's XPath Tester][xpath tester] for help developing and debugging
your query.
## Configuration (batch)
Alternatively to the configuration above, fields can also be specified in a
batch way. Instead of specifying the fields in a section, you can define a
`name` and a `value` selector used to determine the name and value of the fields
in the metric.
```toml
[[inputs.file]]
@ -210,83 +230,128 @@ metric.
```
_Please note_: The resulting fields are _always_ of type string!
It is also possible to specify a mixture of the two alternative ways of
specifying fields.
### metric_selection (optional)
You can specify an [XPath][xpath] query to select a subset of nodes from the XML
document, each used to generate a new metric with the specified fields, tags,
etc.
Relative queries in subsequent sections are relative to the
`metric_selection`. To specify absolute paths, please start the query with a
slash (`/`).
Specifying `metric_selection` is optional. If not specified all relative queries
are relative to the root node of the XML document.
### metric_name (optional)
By specifying `metric_name` you can override the metric/measurement name with
the result of the given [XPath][xpath] query. If not specified, the default
metric name is used.
### timestamp, timestamp_format (optional)
By default the current time will be used for all created metrics. To set the
time from values in the XML document you can specify an [XPath][xpath] query in
`timestamp` and set the format in `timestamp_format`.
The `timestamp_format` can be set to `unix`, `unix_ms`, `unix_us`, `unix_ns`, or
an accepted [Go "reference time"][time const]. Consult the Go [time][time parse]
package for details and additional examples on how to set the time format. If
`timestamp_format` is omitted, the result of the `timestamp` query is assumed to
be in `unix` format.
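A short sketch of a subsection using these two options (the element path and
the layout string shown are illustrative):

```toml
[[inputs.file]]
  files = ["example.xml"]
  data_format = "xml"

  [[inputs.file.xpath]]
    ## Take the metric time from the document instead of the parse time.
    timestamp = "/Gateway/Timestamp"
    timestamp_format = "2006-01-02T15:04:05Z"
```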
### tags sub-section
[XPath][xpath] queries in the `tag name = query` format to add tags to the
metrics. The specified path can be absolute (starting with `/`) or
relative. Relative paths use the currently selected node as reference.
__NOTE:__ Results of tag-queries will always be converted to strings.
### fields_int sub-section
[XPath][xpath] queries in the `field name = query` format to add integer typed
fields to the metrics. The specified path can be absolute (starting with `/`) or
relative. Relative paths use the currently selected node as reference.
__NOTE:__ Results of field_int-queries will always be converted to
__int64__. The conversion will fail in case the query result is not convertible!
### fields sub-section
[XPath][xpath] queries in the `field name = query` format to add non-integer
fields to the metrics. The specified path can be absolute (starting with `/`) or
relative. Relative paths use the currently selected node as reference.
The type of the field is specified in the [XPath][xpath] query using the type
conversion functions of XPath such as `number()`, `boolean()` or `string()`. If
no conversion is performed in the query the field will be of type string.
__NOTE: Path conversion functions will always succeed even if you convert a text
to float!__
### field_selection, field_name, field_value (optional)
You can specify an [XPath][xpath] query to select a set of nodes forming the
fields of the metric. The specified path can be absolute (starting with `/`) or
relative to the currently selected node. Each node selected by `field_selection`
forms a new field within the metric.
The _name_ and the _value_ of each field can be specified using the optional
`field_name` and `field_value` queries. The queries are relative to the selected
field if not starting with `/`. If not specified the field's _name_ defaults to
the node name and the field's _value_ defaults to the content of the selected
field node.
__NOTE__: `field_name` and `field_value` queries are only evaluated if a
`field_selection` is specified.
Specifying `field_selection` is optional. This is an alternative way to specify
fields especially for documents where the node names are not known a priori or
if there is a large number of fields to be specified. These options can also be
combined with the field specifications above.
__NOTE: Path conversion functions will always succeed even if you convert a text
to float!__
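A short sketch of a batch field definition (the selection queries shown are
illustrative):

```toml
[[inputs.file]]
  files = ["example.xml"]
  data_format = "xml"

  [[inputs.file.xpath]]
    metric_selection = "/Gateway/Sensor"
    ## Gather every child node of each selected sensor as a field; by default
    ## the node name becomes the field name and its content the field value.
    field_selection = "child::*"
    ## Optional overrides, evaluated relative to each selected field node:
    # field_name = "name()"
    # field_value = "."
```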
### field_name_expansion (optional)
When _true_, field names selected with `field_selection` are expanded to a
_path_ relative to the _selected node_. This is necessary if we e.g. select all
leaf nodes as fields and those leaf nodes do not have unique names; that is, in
case you have duplicate names in the fields you select, you should set this to
`true`.
### tag_selection, tag_name, tag_value (optional)
You can specify an [XPath][xpath] query to select a set of nodes forming the
tags of the metric. The specified path can be absolute (starting with `/`) or
relative to the currently selected node. Each node selected by `tag_selection`
forms a new tag within the metric.
The _name_ and the _value_ of each tag can be specified using the optional
`tag_name` and `tag_value` queries. The queries are relative to the selected tag
if not starting with `/`. If not specified the tag's _name_ defaults to the node
name and the tag's _value_ defaults to the content of the selected tag node.
__NOTE__: `tag_name` and `tag_value` queries are only evaluated if a
`tag_selection` is specified.
Specifying `tag_selection` is optional. This is an alternative way to specify
tags especially for documents where the node names are not known a priori or if
there is a large number of tags to be specified. These options can also be
combined with the tag specifications above.
### tag_name_expansion (optional)
When _true_, tag names selected with `tag_selection` are expanded to a _path_
relative to the _selected node_. This is necessary if we e.g. select all leaf
nodes as tags and those leaf nodes do not have unique names; that is, in case
you have duplicate names in the tags you select, you should set this to `true`.
## Examples
@ -354,12 +419,19 @@ Output:
file,gateway=Main,host=Hugin seqnr=12i,ok=true 1598610830000000000
```
In the _tags_ definition the XPath function `substring-before()` is used to
extract only the substring before the space. To get the integer value of
`/Gateway/Sequence` we have to use the _fields_int_ section as there is no XPath
expression to convert node values to integers (only float).
The `ok` field is filled with a boolean by specifying a query comparing the
query result of `/Gateway/Status` with the string _ok_. Use the type conversions
available in the XPath syntax to specify field types.
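A sketch of the field and tag definitions this explanation refers to; the
section names are assumed as above and the queries merely mirror the
description:

```toml
  [[inputs.file.xml]]
    [inputs.file.xml.tags]
      # Keep only the part of the gateway name before the first space.
      gateway = "substring-before(/Gateway/Name, ' ')"

    [inputs.file.xml.fields_int]
      # Integer fields have to be defined in the fields_int section.
      seqnr = "/Gateway/Sequence"

    [inputs.file.xml.fields]
      # An XPath comparison yields a boolean field.
      ok = "/Gateway/Status = 'ok'"
```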
### Time and metric names
This example shows how to derive the time and the name of the metric from the
XML document itself.
Config:
@ -387,11 +459,16 @@ Output:
Status,gateway=Main,host=Hugin ok=true 1596294243000000000
```
In addition to the basic parsing example, the metric name is defined as the
name of the `/Gateway/Status` node and the timestamp is derived from the XML
document instead of using the execution time.
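A sketch of the two settings involved (the key names `metric_name`,
`timestamp` and `timestamp_format` and the queries are assumptions here):

```toml
  [[inputs.file.xml]]
    # Use the node name of /Gateway/Status ("Status") as the metric name.
    metric_name = "name(/Gateway/Status)"
    # Derive the timestamp from the document instead of the execution time.
    timestamp        = "/Gateway/Timestamp"
    timestamp_format = "2006-01-02T15:04:05Z"
```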
### Multi-node selection
For XML documents containing metrics for e.g. multiple devices (like `Sensor`s
in the _example.xml_), multiple metrics can be generated using node
selection. This example shows how to generate a metric for each _Sensor_ in the
example.
Config:
@ -430,11 +507,18 @@ sensors,host=Hugin,name=Facility\ B consumers=1i,frequency=49.78,ok=true,power=1
sensors,host=Hugin,name=Facility\ C consumers=0i,frequency=49.78,ok=false,power=0.02,temperature=19.7 1596294243000000000
```
Using the `metric_selection` option we select all `Sensor` nodes in the XML
document. Please note that all field and tag definitions are relative to these
selected nodes. An exception is the timestamp definition which is relative to
the root node of the XML document.
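A sketch of the selection part; the exact paths depend on _example.xml_ and are
assumed here:

```toml
  [[inputs.file.xml]]
    # One metric is produced per selected Sensor node; field and tag queries
    # are evaluated relative to each Sensor.
    metric_selection = "//Sensor"
    # The timestamp query remains relative to the document root.
    timestamp = "/Gateway/Timestamp"
```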
### Batch field processing with multi-node selection
For XML documents containing metrics with a large number of fields or where the
fields are not known before (e.g. an unknown set of `Variable` nodes in the
_example.xml_), field selectors can be used. This example shows how to generate
a metric for each _Sensor_ in the example with fields derived from the
_Variable_ nodes.
Config:
@ -466,8 +550,14 @@ sensors,host=Hugin,name=Facility\ B consumers=1,frequency=49.78,power=14.3,tempe
sensors,host=Hugin,name=Facility\ C consumers=0,frequency=49.78,power=0.02,temperature=19.7 1596294243000000000
```
Using the `metric_selection` option we select all `Sensor` nodes in the XML
document. For each _Sensor_ we then use `field_selection` to select all child
nodes of the sensor as _field-nodes_. Please note that the field selection is
relative to the selected nodes. For each selected _field-node_ we use
`field_name` and `field_value` to determine the field's name and value,
respectively. The `field_name` query derives the name from the node's first
attribute, while `field_value` derives the value from that first attribute and
converts the result to a number.
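A sketch of the batch field definitions described above (section name assumed
as before):

```toml
  [[inputs.file.xml]]
    metric_selection = "//Sensor"
    # Select all child nodes of each Sensor as field nodes ...
    field_selection = "child::*"
    # ... and take the field name and numeric value from each node's first attribute.
    field_name  = "name(@*[1])"
    field_value = "number(@*[1])"
```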
[xpath lib]: https://github.com/antchfx/xpath
[json]: https://www.json.org/

View File

@ -4,16 +4,16 @@ The clone processor plugin creates a copy of each metric passing through it,
preserving the original metric untouched and allowing modifications in the
copied one.
The modifications allowed are the ones supported by input plugins and
aggregators:
* name_override
* name_prefix
* name_suffix
* tags
Select the metrics to modify using the standard [metric
filtering](../../../docs/CONFIGURATION.md#metric-filtering) options.
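For example, a minimal sketch that clones only `cpu` metrics and modifies the
copies; the suffix, tag and filter values are made up:

```toml
[[processors.clone]]
  # Standard metric filtering decides which metrics get cloned.
  namepass = ["cpu"]
  # The copy gets a suffix and an extra tag; the original passes through unchanged.
  name_suffix = "_copy"
  [processors.clone.tags]
    datacenter = "eu-west-1"
```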
Values of *name_override*, *name_prefix*, *name_suffix* and already present
*tags* with conflicting keys will be overwritten. Absent *tags* will be

View File

@ -1,15 +1,18 @@
# Converter Processor Plugin
The converter processor is used to change the type of tag or field values. In
addition to changing field types it can convert between fields and tags.
Values that cannot be converted are dropped.
**Note:** When converting tags to fields, take care to ensure the series is
still uniquely identifiable. Fields with the same series key (measurement +
tags) will overwrite one another.
**Note on large strings being converted to numeric types:** When converting a
string value to a numeric type, precision may be lost if the number is too
large. The largest numeric type this plugin supports is `float64`, and if a
string 'number' exceeds its size limit, accuracy may be lost.
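A brief sketch of both directions of conversion described above; the field and
tag names are hypothetical:

```toml
[[processors.converter]]
  [processors.converter.fields]
    # Promote the "port" field to a tag and force "load" to a float.
    tag   = ["port"]
    float = ["load"]
  [processors.converter.tags]
    # Turn the "status_code" tag into an integer field.
    integer = ["status_code"]
```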
## Configuration

View File

@ -1,8 +1,10 @@
# Defaults Processor Plugin
The _Defaults_ processor allows you to ensure certain fields will always exist
with a specified default value on your metric(s).
There are three cases where this processor will insert a configured default
field.
1. The field is nil on the incoming metric
1. The field is not nil, but its value is an empty string.
@ -32,7 +34,8 @@ Telegraf minimum version: Telegraf 1.15.0
## Example
Ensure a _status\_code_ field with _N/A_ is inserted in the metric when one is
not set by default:
```toml
[[processors.defaults]]

View File

@ -1,13 +1,13 @@
# Enum Processor Plugin
The Enum Processor allows the configuration of value mappings for metric tags
or fields. The main use-case for this is to rewrite status codes such as _red_,
_amber_ and _green_ with numeric values such as 0, 1, 2. The plugin supports
string, int, float64 and bool types for the field values. Multiple tags or
fields can be configured with separate value mappings for each. Default mapping
values can be configured to be used for all values that are not contained in
the value_mappings. The processor supports explicit configuration of a
destination tag or field. By default the source tag or field is overwritten.
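A minimal sketch of the status-code use case mentioned above; the field names
and the default value are illustrative:

```toml
[[processors.enum]]
  [[processors.enum.mapping]]
    # Map the textual status field onto a numeric status_code field.
    field = "status"
    dest  = "status_code"
    # Value used for anything not listed in value_mappings.
    default = -1
    [processors.enum.mapping.value_mappings]
      green = 0
      amber = 1
      red   = 2
```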
## Configuration

View File

@ -1,9 +1,9 @@
# Execd Processor Plugin
The `execd` processor plugin runs an external program as a separate process and
pipes metrics into the process's STDIN and reads processed metrics from its
STDOUT. The programs must accept influx line protocol on standard in (STDIN)
and output metrics in influx line protocol to standard output (STDOUT).
Program output on standard error is mirrored to the telegraf log.
@ -103,7 +103,8 @@ func main() {
}
```
To run it, you'd build the binary using go, e.g. `go build -o multiplier.exe
main.go`.
```toml
[[processors.execd]]

View File

@ -1,9 +1,8 @@
<!-- markdownlint-disable MD024 -->
# Filepath Processor Plugin
The `filepath` processor plugin maps certain go functions from
[path/filepath](https://golang.org/pkg/path/filepath/) onto tag and field
values. Values can be modified in place or stored in another key.
Implemented functions are:
@ -13,16 +12,19 @@ Implemented functions are:
* [Clean](https://golang.org/pkg/path/filepath/#Clean) (accessible through `[[processors.filepath.clean]]`)
* [ToSlash](https://golang.org/pkg/path/filepath/#ToSlash) (accessible through `[[processors.filepath.toslash]]`)
On top of that, the plugin provides an extra function to retrieve the final path
component without its extension. This function is accessible through the
`[[processors.filepath.stem]]` configuration item.
Please note that, in this implementation, these functions are processed in the
order that they appear above (except for `stem`, which is applied first).
Specify the `tag` and/or `field` that you want processed in each section and
optionally a `dest` if you want the result stored in a new tag or field.
If you plan to apply multiple transformations to the same `tag`/`field`, bear in
mind the processing order stated above.
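A small sketch combining two of the functions above; the tag names are
illustrative:

```toml
[[processors.filepath]]
  # Strip the extension from the final path component (applied first).
  [[processors.filepath.stem]]
    tag  = "path"
    dest = "stempath"
  # Keep the original "path" tag and store the directory-free name separately.
  [[processors.filepath.basename]]
    tag  = "path"
    dest = "basepath"
```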
Telegraf minimum version: Telegraf 1.15.0
@ -63,10 +65,11 @@ Telegraf minimum version: Telegraf 1.15.0
## Considerations
### Clean Automatic Invocation
Even though `clean` is provided as a standalone function, it is also invoked
when using the `rel` and `dirname` functions, so there is no need to use it
along with them.
That is:
@ -86,9 +89,10 @@ Is equivalent to:
tag = "path"
```
### ToSlash Platform-specific Behavior
The effects of this function are only noticeable on Windows platforms, because
of the underlying golang implementation.
## Examples
@ -174,9 +178,9 @@ The effects of this function are only noticeable on Windows platforms, because o
## Processing paths from tail plugin
This plugin can be used together with the [tail input
plugin](../../inputs/tail/README.md) to make modifications to the `path` tag
injected for every file.
Scenario:
@ -189,7 +193,8 @@ tag
* Just in case, we don't want to override the original path (if for some reason we end up having duplicates we might
want this information)
For this purpose, we will use the `tail` input plugin, the `grok` parser plugin
and the `filepath` processor.
```toml
# Performs file path manipulations on tags and fields
@ -205,7 +210,8 @@ For this purpose, we will use the `tail` input plugin, the `grok` parser plugin
dest = "stempath"
```
The resulting output for a job taking 70 seconds for the mentioned log file
would look like:
```text
myjobs_duration_seconds,host="my-host",path="/var/log/myjobs/mysql_backup.log",stempath="mysql_backup" 70 1587920425000000000

View File

@ -1,12 +1,13 @@
# Noise Processor Plugin
The _Noise_ processor is used to add noise to numerical field values. For each
field a noise is generated using a defined probability density function and
added to the value. The function type can be configured as _Laplace_, _Gaussian_
or _Uniform_. Depending on the function, various parameters need to be
configured:
## Configuration
```toml @sample.conf
# Adds noise to numerical fields
[[processors.noise]]
@ -31,8 +32,13 @@ Depending on the choice of the distribution function, the respective parameters
# exclude_fields = []
```
Depending on the choice of the distribution function, the respective parameters
must be set. Default settings are `noise_type = "laplacian"` with `mu = 0.0` and
`scale = 1.0`:
Using the `include_fields` and `exclude_fields` options a filter can be
configured to apply noise only to numeric fields matching it. The following
distribution functions are available.
### Laplacian
@ -54,7 +60,9 @@ The following distribution functions are available.
## Example
Add noise to each value the _inputs.cpu_ plugin generates, except for the
_usage\_steal_, _usage\_user_, _uptime\_format_, _usage\_idle_ fields and all
fields of the metrics _swap_, _disk_ and _net_:
```toml
[[inputs.cpu]]

View File

@ -8,10 +8,9 @@ supported by input plugins and aggregators:
* name_suffix
* tags
All metrics passing through this processor will be modified accordingly. Select
the metrics to modify using the standard [metric
filtering](../../../docs/CONFIGURATION.md#metric-filtering) options.
Values of *name_override*, *name_prefix*, *name_suffix* and already present
*tags* with conflicting keys will be overwritten. Absent *tags* will be

View File

@ -1,4 +1,4 @@
# Pivot Processor Plugin
You can use the `pivot` processor to rotate single valued metrics into a multi
field metric. This transformation often results in data that is more easily

View File

@ -1,10 +1,15 @@
# Port Name Lookup Processor Plugin
Use the `port_name` processor to convert a tag or field containing a well-known
port number to the registered service name.
Tag or field can contain a number ("80") or number and protocol separated by
slash ("443/tcp"). If protocol is not provided, it defaults to tcp but can be
changed with the `default_protocol` setting. An additional tag or field can be
specified for the protocol.
If the source was found in a tag, the service name will be added as a tag. If
the source was found in a field, the service name will also be a field.
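A minimal sketch of the lookup described above; the tag names are made up:

```toml
[[processors.port_name]]
  # Tag holding the port number, e.g. "80" or "443/tcp".
  tag = "port"
  # Name of the tag that receives the looked-up service name.
  dest = "service"
  # Protocol assumed when the value does not carry one.
  default_protocol = "tcp"
```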
Telegraf minimum version: Telegraf 1.15.0

View File

@ -1,13 +1,18 @@
# Regex Processor Plugin
The `regex` plugin transforms tag and field values with a regex pattern. If the
`result_key` parameter is present, it can produce new tags and fields from
existing ones.
The regex processor **only operates on string fields**. It will not work on
any other data types, like an integer or float.
For tags transforms, if `append` is set to `true`, it will append the
transformation to the existing tag value, instead of overwriting it.
For metrics transforms, `key` denotes the element that should be
transformed. Furthermore, `result_key` allows control over the behavior applied
in case the resulting `tag` or `field` name already exists.
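As an illustration, a sketch that rewrites a response-code tag into a class and
keeps the original; the tag name and pattern are made up:

```toml
[[processors.regex]]
  [[processors.regex.tags]]
    key         = "resp_code"
    # Collapse e.g. "404" into "4xx".
    pattern     = "^(\\d)\\d\\d$"
    replacement = "${1}xx"
    # Store the result in a new tag instead of overwriting "resp_code".
    result_key  = "resp_code_group"
```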
## Configuration

View File

@ -1,8 +1,9 @@
# S2 Geo Processor Plugin
Use the `s2geo` processor to add a tag with the S2 cell ID token of the
specified [cell level][cell levels]. The tag is used in `experimental/geo` Flux
package functions. The `lat` and `lon` field values should contain WGS-84
coordinates in decimal degrees.
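A minimal sketch of the options involved (the option names reflect the plugin's
configuration as I understand it and should be checked against the sample
config below):

```toml
[[processors.s2geo]]
  # Fields holding WGS-84 coordinates in decimal degrees.
  lat_field = "lat"
  lon_field = "lon"
  # Tag that receives the S2 cell ID token at the chosen cell level.
  tag_key    = "s2_cell_id"
  cell_level = 9
```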
## Configuration

View File

@ -1,13 +1,13 @@
# Starlark Processor Plugin
The `starlark` processor calls a Starlark function for each matched metric,
allowing for custom programmatic metric processing.
The Starlark language is a dialect of Python, and will be familiar to those who
have experience with the Python language. However, there are major
[differences](#python-differences). Existing Python code is unlikely to work
unmodified. The execution environment is sandboxed, and it is not possible to
do I/O operations such as reading from files or sockets.
The **[Starlark specification][]** has details about the syntax and available
functions.
@ -42,8 +42,8 @@ def apply(metric):
## Usage
The Starlark code should contain a function called `apply` that takes a metric
as its single argument. The function will be called with each metric, and can
return `None`, a single metric, or a list of metrics.
```python
@ -102,12 +102,13 @@ While Starlark is similar to Python, there are important differences to note:
### Libraries available
The ability to load external scripts other than your own is pretty limited. The
following libraries are available for loading:
- json: `load("json.star", "json")` provides the following functions: `json.encode()`, `json.decode()`, `json.indent()`. See [json.star](testdata/json.star) for an example. For more details about the functions, please refer to [the documentation of this library](https://pkg.go.dev/go.starlark.net/lib/json).
- log: `load("logging.star", "log")` provides the following functions: `log.debug()`, `log.info()`, `log.warn()`, `log.error()`. See [logging.star](testdata/logging.star) for an example.
- math: `load("math.star", "math")` provides [the following functions and constants](https://pkg.go.dev/go.starlark.net/lib/math). See [math.star](testdata/math.star) for an example.
- time: `load("time.star", "time")` provides the following functions: `time.from_timestamp()`, `time.is_valid_timezone()`, `time.now()`, `time.parse_duration()`, `time.parseTime()`, `time.time()`. See [time_date.star](testdata/time_date.star), [time_duration.star](testdata/time_duration.star) and/or [time_timestamp.star](testdata/time_timestamp.star) for an example. For more details about the functions, please refer to [the documentation of this library](https://pkg.go.dev/go.starlark.net/lib/time).
If you would like to see support for something else here, please open an issue.
@ -115,9 +116,10 @@ If you would like to see support for something else here, please open an issue.
**What's the performance cost of using Starlark?**
In local tests, it takes about 1µs (1 microsecond) to run a modest script to
process one metric. This is going to vary with the size of your script, but the
total impact is minimal. At this pace, it's likely not going to be the
bottleneck in your Telegraf setup.
**How can I drop/delete a metric?**
@ -158,7 +160,8 @@ def apply(metric):
```
When you use this form, it is not possible to modify the tags inside the loop;
if this is needed, you should use one of the `.keys()`, `.values()`, or
`.items()` methods:
```python
def apply(metric):
@ -169,17 +172,19 @@ def apply(metric):
**How can I save values across multiple calls to the script?**
Telegraf freezes the global scope, which prevents it from being modified,
except for a special shared global dictionary named `state`, which can be used
by the `apply` function. See an example of this in [compare with previous
metric](testdata/compare_metrics.star).
Other than the `state` variable, attempting to modify the global scope will fail
with an error.
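A small sketch of the `state` pattern, embedded in the plugin configuration;
the field names are made up:

```toml
[[processors.starlark]]
  source = '''
def apply(metric):
    # Remember the previous value of a field and emit the difference.
    previous = state.get("last_total")
    current = metric.fields.get("total")
    if previous != None and current != None:
        metric.fields["total_delta"] = current - previous
    if current != None:
        state["last_total"] = current
    return metric
'''
```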
**How to manage errors that occur in the apply function?**
In case you need to call some code that may return an error, you can delegate
the call to the built-in function `catch` which takes as argument a `Callable`
and returns the error that occurred if any, `None` otherwise.
So for example:
@ -199,7 +204,9 @@ def failing(metric):
**How to reuse the same script but with different parameters?**
In case you have a generic script that you would like to reuse for different
instances of the plugin, you can use constants as input parameters of your
script.
So for example, assuming that you have the following configuration:
@ -212,7 +219,8 @@ So for example, assuming that you have the next configuration:
somecustomstr = "mycustomfield"
```
Your script could then use the constants defined in the configuration as
follows:
```python
def apply(metric):
@ -223,30 +231,30 @@ def apply(metric):
### Examples
- [drop string fields](testdata/drop_string_fields.star) - Drop fields containing string values.
- [drop fields with unexpected type](testdata/drop_fields_with_unexpected_type.star) - Drop fields containing unexpected value types.
- [iops](testdata/iops.star) - obtain IOPS (to aggregate, to produce max_iops)
- [json](testdata/json.star) - an example of processing JSON from a field in a metric
- [math](testdata/math.star) - Use a math function to compute the value of a field. [The list of the supported math functions and constants](https://pkg.go.dev/go.starlark.net/lib/math).
- [number logic](testdata/number_logic.star) - transform a numerical value to another numerical value
- [pivot](testdata/pivot.star) - Pivots a key's value to be the key for another key.
- [ratio](testdata/ratio.star) - Compute the ratio of two integer fields
- [rename](testdata/rename.star) - Rename tags or fields using a name mapping.
- [scale](testdata/scale.star) - Multiply any field by a number
- [time date](testdata/time_date.star) - Parse a date and extract the year, month and day from it.
- [time duration](testdata/time_duration.star) - Parse a duration and convert it into a total amount of seconds.
- [time timestamp](testdata/time_timestamp.star) - Filter metrics based on the timestamp in seconds.
- [time timestamp nanoseconds](testdata/time_timestamp_nanos.star) - Filter metrics based on the timestamp with nanoseconds.
- [time timestamp current](testdata/time_set_timestamp.star) - Setting the metric timestamp to the current/local time.
- [value filter](testdata/value_filter.star) - Remove a metric based on a field value.
- [logging](testdata/logging.star) - Log messages with the logger of Telegraf
- [multiple metrics](testdata/multiple_metrics.star) - Return multiple metrics by using [a list](https://docs.bazel.build/versions/master/skylark/lib/list.html) of metrics.
- [multiple metrics from json array](testdata/multiple_metrics_with_json.star) - Builds a new metric from each element of a json array then returns all the created metrics.
- [custom error](testdata/fail.star) - Return a custom error with [fail](https://docs.bazel.build/versions/master/skylark/lib/globals.html#fail).
- [compare with previous metric](testdata/compare_metrics.star) - Compare the current metric with the previous one using the shared state.
- [rename prometheus remote write](testdata/rename_prometheus_remote_write.star) - Rename prometheus remote write measurement name with fieldname and rename fieldname to value.
[All examples](testdata) are in the testdata folder.
Open a Pull Request to add any other useful Starlark examples.

View File

@ -1,6 +1,7 @@
# Strings Processor Plugin
The `strings` plugin maps certain go string functions onto measurement, tag, and
field values. Values can be modified in place or stored in another key.
Implemented functions are:
@ -17,13 +18,21 @@ Implemented functions are:
- base64decode
- valid_utf8
Please note that in this implementation these are processed in the order that
they appear above.
Specify the `measurement`, `tag`, `tag_key`, `field`, or `field_key` that you
want processed in each section and optionally a `dest` if you want the result
stored in a new tag or field. You can specify lots of transformations on data
with a single strings processor.
If you'd like to apply the change to every `tag`, `tag_key`, `field`,
`field_key`, or `measurement`, use the value `"*"` for each respective
field. Note that the `dest` field will be ignored if `"*"` is used.
If you'd like to apply multiple transformations to the same `tag_key` or
`field_key`, note the processing order stated above. See the second example
below.
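For instance, a sketch applying two of the functions above to made-up keys:

```toml
[[processors.strings]]
  # Normalize an HTTP method tag in place.
  [[processors.strings.lowercase]]
    tag = "method"
  # Trim whitespace from a field and store the result under a new key.
  [[processors.strings.trim]]
    field = "message"
    dest  = "message_trimmed"
```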
## Configuration
@ -91,12 +100,14 @@ If you'd like to apply multiple processings to the same `tag_key` or `field_key`
### Trim, TrimLeft, TrimRight
The `trim`, `trim_left`, and `trim_right` functions take an optional parameter:
`cutset`. This value is a string containing the characters to remove from the
value.
### TrimPrefix, TrimSuffix
The `trim_prefix` and `trim_suffix` functions remove the given `prefix` or
`suffix` respectively from the string.
### Replace

View File

@ -1,4 +1,4 @@
# Template Processor Plugin
The `template` processor applies a Go template to metrics to generate a new
tag. The primary use case of this plugin is to create a tag that can be used
@ -62,7 +62,9 @@ Read the full [Go Template Documentation][].
### Add all fields as a tag
Sometimes it is useful to pass all fields with their values into a single
message for sending it to a monitoring system (e.g. Syslog, GroundWork). In
that case you can use `.FieldList` or `.TagList`:
```toml
[[processors.template]]

View File

@ -1,6 +1,8 @@
# TopK Processor Plugin
The TopK processor plugin is a filter designed to get the top series over a
period of time. It can be tweaked to calculate the top metrics via different
aggregation functions.
This processor goes through these steps when processing a batch of metrics:
@ -77,11 +79,14 @@ Notes:
### Tags
This processor does not add tags by default, but the setting `add_groupby_tag`
will add a tag if set to anything other than "".
### Fields
This processor does not add fields by default, but the settings
`add_rank_fields` and `add_aggregation_fields` will add one or several fields
if set to anything other than "".
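A sketch of a typical configuration using the settings above; the field names
and values are illustrative:

```toml
[[processors.topk]]
  # Keep the top 3 series by mean usage_user over a 60 second window.
  period      = 60
  k           = 3
  group_by    = ["host"]
  fields      = ["usage_user"]
  aggregation = "mean"
  # Optional extras described above.
  add_groupby_tag        = "groupby_key"
  add_rank_fields        = ["usage_user"]
  add_aggregation_fields = ["usage_user"]
```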
### Example

View File

@ -1,6 +1,8 @@
# Unpivot Processor Plugin
You can use the `unpivot` processor to rotate a multi field series into single
valued metrics. This transformation often results in data that is easier to
aggregate across fields.
To perform the reverse operation use the [pivot] processor.