chore: clean up all markdown lint errors in parser plugins (#10153)

Joshua Powers 2021-11-24 11:45:25 -07:00 committed by GitHub
parent 0036757afe
commit 779c1f0a59
16 changed files with 208 additions and 175 deletions


@@ -3,7 +3,7 @@
 This description explains at a high level what the parser does and provides
 links to where additional information about the format can be found.
-### Configuration
+## Configuration
 This section contains the sample configuration for the parser. Since the
 configuration for a parser is not a standalone plugin, use the `file` or
@@ -24,22 +24,23 @@ configuration for a parser is not a standalone plugin, use the `file` or
 example_option = "example_value"
 ```
-#### example_option
+### example_option
 If an option requires a more expansive explanation than can be included inline
 in the sample configuration, it may be described here.
-### Metrics
+## Metrics
 The optional Metrics section contains details about how the parser converts
 input data into Telegraf metrics.
-### Examples
+## Examples
 The optional Examples section can show an example conversion from the input
 format using InfluxDB Line Protocol as the reference format.
 For line delimited text formats a diff may be appropriate:
 ```diff
 - cpu|host=localhost|source=example.org|value=42
 + cpu,host=localhost,source=example.org value=42


@@ -17,7 +17,7 @@ Additional information including client setup can be found
 You can also change the path to the typesdb or add additional typesdb using
 `collectd_typesdb`.
-### Configuration
+## Configuration
 ```toml
 [[inputs.socket_listener]]
@@ -43,9 +43,9 @@ You can also change the path to the typesdb or add additional typesdb using
 collectd_parse_multivalue = "split"
 ```
-### Example Output
+## Example Output
-```
+```text
 memory,type=memory,type_instance=buffered value=2520051712 1560455990829955922
 memory,type=memory,type_instance=used value=3710791680 1560455990829955922
 memory,type=memory,type_instance=buffered value=2520047616 1560455980830417318


@@ -94,7 +94,7 @@ or a format string using the Go "reference time" which is defined to be the
 Consult the Go [time][time parse] package for details and additional examples
 on how to set the time format.
-### Metrics
+## Metrics
 One metric is created for each row with the columns added as fields. The type
 of the field is automatically determined based on the contents of the value.
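To make the timestamp handling above concrete, here is a minimal sketch; the file name and column layout are hypothetical, and it assumes the parser's `csv_timestamp_column` and `csv_timestamp_format` options:

```toml
[[inputs.file]]
  files = ["example.csv"]   # hypothetical input file
  data_format = "csv"
  csv_header_row_count = 1
  ## Parse the "time" column using a Go "reference time" layout
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```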
@@ -102,7 +102,7 @@ of the field is automatically determined based on the contents of the value.
 In addition to the options above, you can use [metric filtering][] to skip over
 columns and rows.
-### Examples
+## Examples
 Config:


@@ -5,7 +5,7 @@ The `dropwizard` data format can parse the [JSON Dropwizard][dropwizard] represe
 [templates]: /docs/TEMPLATE_PATTERN.md
 [dropwizard]: http://metrics.dropwizard.io/3.1.0/manual/json/
-### Configuration
+## Configuration
 ```toml
 [[inputs.file]]
@@ -51,76 +51,75 @@ The `dropwizard` data format can parse the [JSON Dropwizard][dropwizard] represe
 # tag2 = "tags.tag2"
 ```
-### Examples
+## Examples
 A typical JSON of a dropwizard metric registry:
 ```json
 {
     "version": "3.0.0",
     "counters" : {
         "measurement,tag1=green" : {
             "count" : 1
         }
     },
     "meters" : {
         "measurement" : {
             "count" : 1,
             "m15_rate" : 1.0,
             "m1_rate" : 1.0,
             "m5_rate" : 1.0,
             "mean_rate" : 1.0,
             "units" : "events/second"
         }
     },
     "gauges" : {
         "measurement" : {
             "value" : 1
         }
     },
     "histograms" : {
         "measurement" : {
             "count" : 1,
             "max" : 1.0,
             "mean" : 1.0,
             "min" : 1.0,
             "p50" : 1.0,
             "p75" : 1.0,
             "p95" : 1.0,
             "p98" : 1.0,
             "p99" : 1.0,
             "p999" : 1.0,
             "stddev" : 1.0
         }
     },
     "timers" : {
         "measurement" : {
             "count" : 1,
             "max" : 1.0,
             "mean" : 1.0,
             "min" : 1.0,
             "p50" : 1.0,
             "p75" : 1.0,
             "p95" : 1.0,
             "p98" : 1.0,
             "p99" : 1.0,
             "p999" : 1.0,
             "stddev" : 1.0,
             "m15_rate" : 1.0,
             "m1_rate" : 1.0,
             "m5_rate" : 1.0,
             "mean_rate" : 1.0,
             "duration_units" : "seconds",
             "rate_units" : "calls/second"
         }
     }
 }
 ```
 Would get translated into 4 different measurements:
-```
+```text
 measurement,metric_type=counter,tag1=green count=1
 measurement,metric_type=meter count=1,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
 measurement,metric_type=gauge value=1
@@ -133,27 +132,28 @@ Eg. to parse the following JSON document:
 ```json
 {
     "time" : "2017-02-22T14:33:03.662+02:00",
     "tags" : {
         "tag1" : "green",
         "tag2" : "yellow"
     },
     "metrics" : {
         "counters" : {
             "measurement" : {
                 "count" : 1
             }
         },
         "meters" : {},
         "gauges" : {},
         "histograms" : {},
         "timers" : {}
     }
 }
 ```
 and translate it into:
-```
+```text
 measurement,metric_type=counter,tag1=green,tag2=yellow count=1 1487766783662000000
 ```
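For reference, a minimal sketch of the options driving the translation above; it assumes the `dropwizard_*` path settings from the full sample configuration (elided in this hunk) and a hypothetical input file:

```toml
[[inputs.file]]
  files = ["registry.json"]   # hypothetical input file
  data_format = "dropwizard"
  ## Locate the metric registry, timestamp and tags inside the wrapper document
  dropwizard_metric_registry_path = "metrics"
  dropwizard_time_path = "time"
  dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
  dropwizard_tags_path = "tags"
```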


@@ -1,13 +1,12 @@
 # Form Urlencoded
 The `form-urlencoded` data format parses `application/x-www-form-urlencoded`
 data, as is commonly used in the [query string][].
 A common use case is to pair it with the [http_listener_v2][] input plugin to parse
 the request body or query params.
-### Configuration
+## Configuration
 ```toml
 [[inputs.http_listener_v2]]
@@ -29,11 +28,12 @@ the request body or query params.
 form_urlencoded_tag_keys = ["tag1"]
 ```
-### Examples
+## Examples
-#### Basic parsing
+### Basic parsing
 Config:
 ```toml
 [[inputs.http_listener_v2]]
 name_override = "mymetric"
@@ -44,12 +44,14 @@ Config:
 ```
 Request:
 ```bash
 curl -i -XGET 'http://localhost:8080/telegraf?tag1=foo&field1=0.42&field2=42'
 ```
 Output:
-```
+```text
 mymetric,tag1=foo field1=0.42,field2=42
 ```


@@ -6,7 +6,7 @@ By default, the separator is left as `.`, but this can be changed using the
 `separator` argument. For more advanced options, Telegraf supports specifying
 [templates](#templates) to translate graphite buckets into Telegraf metrics.
-### Configuration
+## Configuration
 ```toml
 [[inputs.exec]]
@@ -42,7 +42,7 @@ By default, the separator is left as `.`, but this can be changed using the
 ]
 ```
-#### templates
+### templates
 Consult the [Template Patterns](/docs/TEMPLATE_PATTERN.md) documentation for
 details.
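As a quick illustration of how a template maps bucket segments to metric parts, a minimal sketch; the command and template here are hypothetical:

```toml
[[inputs.exec]]
  commands = ["cat metrics.out"]   # hypothetical source of graphite lines
  data_format = "graphite"
  ## "measurement.host.field" maps the bucket "cpu.us-west.usage 100"
  ## to the metric: cpu,host=us-west usage=100
  templates = ["measurement.host.field"]
```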


@@ -4,13 +4,12 @@ The grok data format parses line delimited data using a regular expression like
 language.
 The best way to get acquainted with grok patterns is to read the logstash docs,
-which are available here:
-https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
+which are available [here](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html).
 The grok parser uses a slightly modified version of logstash "grok"
 patterns, with the format:
-```
+```text
 %{<capture_syntax>[:<semantic_name>][:<modifier>]}
 ```
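As a concrete illustration of the capture syntax, a minimal sketch; the file name and pattern are hypothetical:

```toml
[[inputs.file]]
  files = ["access.log"]   # hypothetical log file
  data_format = "grok"
  ## "method" becomes a string field, "bytes" is cast to an integer
  ## field via the :int modifier, and "status" becomes a tag
  grok_patterns = ['%{WORD:method} %{NUMBER:bytes:int} %{NUMBER:status:tag}']
```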
@@ -58,7 +57,7 @@ CUSTOM time layouts must be within quotes and be the representation of the
 "reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`.
 To match a comma decimal point you can use a period. For example `%{TIMESTAMP:timestamp:ts-"2006-01-02 15:04:05.000"}` can be used to match `"2018-01-02 15:04:05,000"`
 To match a comma decimal point you can use a period in the pattern string.
-See https://golang.org/pkg/time/#Parse for more details.
+See the [Golang Time docs](https://golang.org/pkg/time/#Parse) for more details.
 Telegraf has many of its own [built-in patterns][] as well as support for most
 of the Logstash builtin patterns using [these Go compatible patterns][grok-patterns].
@@ -71,9 +70,10 @@ friendly pattern that is not fully compatible with the Logstash pattern.
 [grok-patterns]: https://github.com/vjeantet/grok/blob/master/patterns/grok-patterns
 If you need help building patterns to match your logs,
-you will find the https://grokdebug.herokuapp.com application quite useful!
+you will find the [Grok Debug](https://grokdebug.herokuapp.com) application quite useful!
-### Configuration
+## Configuration
 ```toml
 [[inputs.file]]
 ## Files to parse each interval.
@@ -121,11 +121,11 @@ you will find the https://grokdebug.herokuapp.com application quite useful!
 # grok_unique_timestamp = "auto"
 ```
-#### Timestamp Examples
+### Timestamp Examples
 This example input and config parses a file using a custom timestamp conversion:
-```
+```text
 2017-02-21 13:10:34 value=42
 ```
@@ -136,7 +136,7 @@ This example input and config parses a file using a custom timestamp conversion:
 This example input and config parses a file using a timestamp in unix time:
-```
+```text
 1466004605 value=42
 1466004605.123456789 value=42
 ```
@@ -148,7 +148,7 @@ This example input and config parses a file using a timestamp in unix time:
 This example parses a file using a built-in conversion and a custom pattern:
-```
+```text
 Wed Apr 12 13:10:34 PST 2017 value=42
 ```
@@ -162,7 +162,7 @@ Wed Apr 12 13:10:34 PST 2017 value=42
 This example input and config parses a file using a custom timestamp conversion that doesn't match any specific standard:
-```
+```text
 21/02/2017 13:10:34 value=42
 ```
@@ -192,7 +192,7 @@ syntax with `'''` may be useful.
 The following config examples will parse this input file:
-```
+```text
 |42|\uD83D\uDC2F|'telegraf'|
 ```
@@ -208,6 +208,7 @@ backslash must be escaped, requiring us to escape the backslash a second time.
 We cannot use a literal TOML string for the pattern, because we cannot match a
 `'` within it. However, it works well for the custom pattern.
 ```toml
 [[inputs.file]]
 grok_patterns = ["\\|%{NUMBER:value:int}\\|%{UNICODE_ESCAPE:escape}\\|'%{WORD:name}'\\|"]
@@ -215,6 +216,7 @@ We cannot use a literal TOML string for the pattern, because we cannot match a
 ```
 A multi-line literal string allows us to encode the pattern:
 ```toml
 [[inputs.file]]
 grok_patterns = ['''
@@ -251,7 +253,8 @@ are a few techniques that can help:
 - Avoid using patterns such as `%{DATA}` that will always match.
 - If possible, add `^` and `$` anchors to your pattern:
-```
+```toml
 [[inputs.file]]
 grok_patterns = ["^%{COMBINED_LOG_FORMAT}$"]
 ```


@@ -5,7 +5,7 @@ metrics are parsed directly into Telegraf metrics.
 [line protocol]: https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/
-### Configuration
+## Configuration
 ```toml
 [[inputs.file]]
@@ -17,4 +17,3 @@ metrics are parsed directly into Telegraf metrics.
 ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
 data_format = "influx"
 ```


@@ -4,9 +4,9 @@ The JSON data format parses a [JSON][json] object or an array of objects into
 metric fields.
 **NOTE:** All JSON numbers are converted to float fields. JSON strings and booleans are
 ignored unless specified in the `tag_key` or `json_string_fields` options.
-### Configuration
+## Configuration
 ```toml
 [[inputs.file]]
@@ -73,7 +73,7 @@ ignored unless specified in the `tag_key` or `json_string_fields` options.
 json_timezone = ""
 ```
-#### json_query
+### json_query
 The `json_query` is a [GJSON][gjson] path that can be used to transform the
 JSON document before being parsed. The query is performed before any other
@@ -85,7 +85,7 @@ Consult the GJSON [path syntax][gjson syntax] for details and examples, and
 consider using the [GJSON playground][gjson playground] for developing and
 debugging your query.
-#### json_time_key, json_time_format, json_timezone
+### json_time_key, json_time_format, json_timezone
 By default the current time will be used for all created metrics; to set the
 time using the JSON document you can use the `json_time_key` and
@@ -106,10 +106,12 @@ to be UTC. To default to another timezone, or to local time, specify the
 [Unix TZ value](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
 such as `America/New_York`, to `Local` to utilize the system timezone, or to `UTC`.
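A minimal sketch of the time options described above, assuming a hypothetical document carrying a unix-epoch `time` key:

```toml
[[inputs.file]]
  files = ["example"]
  data_format = "json"
  ## Take the metric time from the "time" key, parsed as a unix epoch
  ## and interpreted in the given timezone
  json_time_key = "time"
  json_time_format = "unix"
  json_timezone = "America/New_York"
```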
-### Examples
+## Examples
-#### Basic Parsing
+### Basic Parsing
 Config:
 ```toml
 [[inputs.file]]
 files = ["example"]
@@ -118,6 +120,7 @@ Config:
 ```
 Input:
 ```json
 {
     "a": 5,
@@ -129,13 +132,15 @@ Input:
 ```
 Output:
-```
+```text
 myjsonmetric a=5,b_c=6
 ```
-#### Name, Tags, and String Fields
+### Name, Tags, and String Fields
 Config:
 ```toml
 [[inputs.file]]
 files = ["example"]
@@ -146,6 +151,7 @@ Config:
 ```
 Input:
 ```json
 {
     "a": 5,
@@ -159,16 +165,18 @@ Input:
 ```
 Output:
-```
+```text
 my_json,my_tag_1=foo a=5,b_c=6,b_my_field="description"
 ```
-#### Arrays
+### Arrays
 If the JSON data is an array, then each object within the array is parsed with
 the configured settings.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example"]
@@ -178,6 +186,7 @@ Config:
 ```
 Input:
 ```json
 [
     {
@@ -198,16 +207,18 @@ Input:
 ```
 Output:
-```
+```text
 file a=5,b_c=6 1136387040000000000
 file a=7,b_c=8 1168527840000000000
 ```
-#### Query
+### Query
 The `json_query` option can be used to parse a subset of the document.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example"]
@@ -218,6 +229,7 @@ Config:
 ```
 Input:
 ```json
 {
     "obj": {
@@ -235,7 +247,8 @@ Input:
 ```
 Output:
-```
+```text
 file,first=Dale last="Murphy",age=44
 file,first=Roger last="Craig",age=68
 file,first=Jane last="Murphy",age=47


@@ -1,6 +1,6 @@
 # JSON Parser - Version 2
-This parser takes valid JSON input and turns it into line protocol. The query syntax supported is [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md), you can go to this playground to test out your GJSON path here: https://gjson.dev/. You can find multiple examples under the `testdata` folder.
+This parser takes valid JSON input and turns it into line protocol. The query syntax supported is [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md), you can go to this playground to test out your GJSON path here: [gjson.dev/](https://gjson.dev). You can find multiple examples under the `testdata` folder.
 ## Configuration
@@ -79,13 +79,13 @@ such as `America/New_York`, to `Local` to utilize the system timezone, or to `UT
 Note that objects are handled separately, therefore if you provide a path that returns an object it will be ignored. You will need to use the `object` config table to parse objects, because `field` and `tag` don't handle relationships between data. Each `field` and `tag` you define is handled as a separate data point.
 The notable difference between `field` and `tag` is that `tag` values will always be type string while `field` can be multiple types. You can define the type of `field` to be any [type that line protocol supports](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/#data-types-and-format), which are:
 * float
 * int
 * uint
 * string
 * bool
 #### **field**
 Using this field configuration you can gather non-array/non-object values. Note this acts as a global field when used with the `object` configuration; if you gather an array of values using `object` then the field gathered will be added to each resulting line protocol without acknowledging its location in the original JSON. This is defined in TOML as an array table using double brackets.
@@ -98,7 +98,6 @@ Using this field configuration you can gather non-array/non-object values. Note
 Using this tag configuration you can gather non-array/non-object values. Note this acts as a global tag when used with the `object` configuration; if you gather an array of values using `object` then the tag gathered will be added to each resulting line protocol without acknowledging its location in the original JSON. This is defined in TOML as an array table using double brackets.
 * **path (REQUIRED)**: A string with valid GJSON path syntax to a non-array/non-object value
 * **name (OPTIONAL)**: You can define a string value to set the field name. If not defined it will use the trailing word from the provided query.
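A minimal sketch of the `field` and `tag` tables described above; the GJSON paths are hypothetical, only the `path` and `type` keys discussed in this document are used, and the `measurement_name` option is assumed from the full configuration:

```toml
[[inputs.file]]
  files = ["example.json"]   # hypothetical input file
  data_format = "json_v2"
  [[inputs.file.json_v2]]
    measurement_name = "sensor"   # hypothetical measurement name
    ## A global field and a global tag, each selected by a GJSON path
    [[inputs.file.json_v2.field]]
      path = "status.temperature"
      type = "float"
    [[inputs.file.json_v2.tag]]
      path = "status.location"
```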
@@ -193,7 +192,7 @@ Example configuration:
 Expected line protocol:
-```
+```text
 file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="A Long-expected Party"
 file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="The Shadow of the Past"
 file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",name="Bilbo",species="hobbit"


@@ -4,7 +4,7 @@ The `logfmt` data format parses data in [logfmt] format.
 [logfmt]: https://brandur.org/logfmt
-### Configuration
+## Configuration
 ```toml
 [[inputs.file]]
@@ -17,14 +17,14 @@ The `logfmt` data format parses data in [logfmt] format.
 data_format = "logfmt"
 ```
-### Metrics
+## Metrics
 Each key/value pair in the line is added to a new metric as a field. The type
 of the field is automatically determined based on the contents of the value.
-### Examples
+## Examples
-```
+```text
 - method=GET host=example.org ts=2018-07-24T19:43:40.275Z connect=4ms service=8ms status=200 bytes=1653
 + logfmt method="GET",host="example.org",ts="2018-07-24T19:43:40.275Z",connect="4ms",service="8ms",status=200i,bytes=1653i
 ```


@@ -2,7 +2,7 @@
 The `nagios` data format parses the output of nagios plugins.
-### Configuration
+## Configuration
 ```toml
 [[inputs.exec]]


@@ -2,7 +2,7 @@
 Converts prometheus remote write samples directly into Telegraf metrics. It can be used with [http_listener_v2](/plugins/inputs/http_listener_v2). There are no additional configuration options for Prometheus Remote Write Samples.
-### Configuration
+## Configuration
 ```toml
 [[inputs.http_listener_v2]]
@@ -16,31 +16,33 @@ Converts prometheus remote write samples directly into Telegraf metrics. It can
 data_format = "prometheusremotewrite"
 ```
-### Example Input
+## Example Input
-```
+```json
 prompb.WriteRequest{
     Timeseries: []*prompb.TimeSeries{
         {
             Labels: []*prompb.Label{
                 {Name: "__name__", Value: "go_gc_duration_seconds"},
                 {Name: "instance", Value: "localhost:9090"},
                 {Name: "job", Value: "prometheus"},
                 {Name: "quantile", Value: "0.99"},
             },
             Samples: []prompb.Sample{
                 {Value: 4.63, Timestamp: time.Date(2020, 4, 1, 0, 0, 0, 0, time.UTC).UnixNano()},
             },
         },
     },
 }
 ```
-### Example Output
+## Example Output
-```
+```text
 prometheus_remote_write,instance=localhost:9090,job=prometheus,quantile=0.99 go_gc_duration_seconds=4.63 1614889298859000000
 ```
 ## For alignment with the [InfluxDB v1.x Prometheus Remote Write Spec](https://docs.influxdata.com/influxdb/v1.8/supported_protocols/prometheus/#how-prometheus-metrics-are-parsed-in-influxdb)
 - Use the [Starlark processor rename prometheus remote write script](https://github.com/influxdata/telegraf/blob/master/plugins/processors/starlark/testdata/rename_prometheus_remote_write.star) to rename the measurement name to the fieldname and rename the fieldname to value.


@ -4,7 +4,7 @@ The "value" data format translates single values into Telegraf metrics. This
is done by assigning a measurement name and setting a single field ("value") is done by assigning a measurement name and setting a single field ("value")
as the parsed metric. as the parsed metric.
### Configuration ## Configuration
You **must** tell Telegraf what type of metric to collect by using the You **must** tell Telegraf what type of metric to collect by using the
`data_type` configuration option. Available options are: `data_type` configuration option. Available options are:
@ -33,4 +33,3 @@ name of the plugin.
data_format = "value" data_format = "value"
data_type = "integer" # required data_type = "integer" # required
``` ```
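To make the `data_type` requirement concrete, a minimal sketch; the command is hypothetical and the option names follow the sample configuration above:

```toml
[[inputs.exec]]
  commands = ["cat /proc/sys/kernel/random/entropy_avail"]   # hypothetical command
  data_format = "value"
  data_type = "integer"   # required: tells Telegraf how to parse the single value
```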


@@ -4,7 +4,7 @@ Wavefront Data Format metrics are parsed directly into Telegraf metrics.
 For more information about the Wavefront Data Format see
 [here](https://docs.wavefront.com/wavefront_data_format.html).
-### Configuration
+## Configuration
 There are no additional configuration options for Wavefront Data Format line-protocol.


@@ -6,7 +6,8 @@ For supported XPath functions check [the underlying XPath library][xpath lib].
 **NOTE:** The types of fields are specified using [XPath functions][xpath lib]. The only exception is *integer* fields that need to be specified in a `fields_int` section.
-### Supported data formats
+## Supported data formats
 | name | `data_format` setting | comment |
 | --------------------------------------- | --------------------- | ------- |
 | [Extensible Markup Language (XML)][xml] | `"xml"` | |
@@ -14,11 +15,14 @@ For supported XPath functions check [the underlying XPath library][xpath lib].
 | [MessagePack][msgpack] | `"xpath_msgpack"` | |
 | [Protocol buffers][protobuf] | `"xpath_protobuf"` | [see additional parameters](#protocol-buffers-additional-settings) |
-#### Protocol buffers additional settings
+### Protocol buffers additional settings
 For using the protocol-buffer format you need to specify a protocol buffer definition file (`.proto`) in `xpath_protobuf_file`. Furthermore, you need to specify which message type you want to use via `xpath_protobuf_type`.
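A minimal sketch of these two settings; the input file, definition file, and message type are hypothetical:

```toml
[[inputs.file]]
  files = ["example.dat"]   # hypothetical protobuf payload
  data_format = "xpath_protobuf"
  xpath_protobuf_file = "sensors.proto"     # hypothetical .proto definition file
  xpath_protobuf_type = "sensors.Gateway"   # hypothetical fully-qualified message type
```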
-### Configuration (explicit)
+## Configuration (explicit)
 In this configuration mode, you explicitly specify the fields and tags you want to scrape out of your data.
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -82,6 +86,7 @@ your query.
 As an alternative to the configuration above, fields can also be specified in a batch way. Instead of specifying each field in its own section, you can define a `name` and a `value` selector used to determine the name and value of the fields in the
 metric.
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -137,11 +142,12 @@ metric.
 device = "string('the ultimate sensor')"
 ```
 *Please note*: The resulting fields are _always_ of type string!
 It is also possible to specify a mixture of the two alternative ways of specifying fields.
-#### metric_selection (optional)
+### metric_selection (optional)
 You can specify an [XPath][xpath] query to select a subset of nodes from the XML document, each used to generate new
 metrics with the specified fields, tags etc.
@@ -150,11 +156,11 @@ For relative queries in subsequent queries they are relative to the `metric_sele
 Specifying `metric_selection` is optional. If not specified all relative queries are relative to the root node of the XML document.
-#### metric_name (optional)
+### metric_name (optional)
 By specifying `metric_name` you can override the metric/measurement name with the result of the given [XPath][xpath] query. If not specified, the default metric name is used.
-#### timestamp, timestamp_format (optional)
+### timestamp, timestamp_format (optional)
 By default the current time will be used for all created metrics. To set the time from values in the XML document you can specify an [XPath][xpath] query in `timestamp` and set the format in `timestamp_format`.
@@ -162,19 +168,19 @@ The `timestamp_format` can be set to `unix`, `unix_ms`, `unix_us`, `unix_ns`, or
 an accepted [Go "reference time"][time const]. Consult the Go [time][time parse] package for details and additional examples on how to set the time format.
 If `timestamp_format` is omitted `unix` format is assumed as result of the `timestamp` query.
-#### tags sub-section
+### tags sub-section
 [XPath][xpath] queries in the `tag name = query` format to add tags to the metrics. The specified path can be absolute (starting with `/`) or relative. Relative paths use the currently selected node as reference.
 **NOTE:** Results of tag-queries will always be converted to strings.
-#### fields_int sub-section
+### fields_int sub-section
 [XPath][xpath] queries in the `field name = query` format to add integer typed fields to the metrics. The specified path can be absolute (starting with `/`) or relative. Relative paths use the currently selected node as reference.
 **NOTE:** Results of field_int-queries will always be converted to **int64**. The conversion will fail in case the query result is not convertible!
-#### fields sub-section
+### fields sub-section
 [XPath][xpath] queries in the `field name = query` format to add non-integer fields to the metrics. The specified path can be absolute (starting with `/`) or relative. Relative paths use the currently selected node as reference.
@@ -183,8 +189,7 @@ If no conversion is performed in the query the field will be of type string.
 **NOTE: Path conversion functions will always succeed even if you convert a text to float!**
-#### field_selection, field_name, field_value (optional)
+### field_selection, field_name, field_value (optional)
 You can specify an [XPath][xpath] query to select a set of nodes forming the fields of the metric. The specified path can be absolute (starting with `/`) or relative to the currently selected node. Each node selected by `field_selection` forms a new field within the metric.
@@ -195,15 +200,16 @@ Specifying `field_selection` is optional. This is an alternative way to specify
 **NOTE: Path conversion functions will always succeed even if you convert a text to float!**
-#### field_name_expansion (optional)
+### field_name_expansion (optional)
 When *true*, field names selected with `field_selection` are expanded to a *path* relative to the *selected node*. This
 is necessary if we e.g. select all leaf nodes as fields and those leaf nodes do not have unique names. That is, if the fields you select contain duplicate names, you should set this to `true`.
-### Examples
+## Examples
 This `example.xml` file is used in the configuration examples below:
 ```xml
 <?xml version="1.0"?>
 <Gateway>
@@ -238,11 +244,12 @@ This `example.xml` file is used in the configuration examples below:
 </Bus>
 ```
-#### Basic Parsing
+### Basic Parsing
 This example shows the basic usage of the xml parser.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -260,18 +267,20 @@ Config:
 ```
 Output:
-```
+```text
 file,gateway=Main,host=Hugin seqnr=12i,ok=true 1598610830000000000
 ```
 In the *tags* definition the XPath function `substring-before()` is used to only extract the sub-string before the space. To get the integer value of `/Gateway/Sequence` we have to use the *fields_int* section as there is no XPath expression to convert node values to integers (only float).
 The `ok` field is filled with a boolean by specifying a query comparing the query result of `/Gateway/Status` with the string *ok*. Use the type conversions available in the XPath syntax to specify field types.
-#### Time and metric names
+### Time and metric names
 This is an example of using the time and the name of the metric from the XML document itself.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -291,16 +300,19 @@ Config:
 ```
 Output:
-```
+```text
 Status,gateway=Main,host=Hugin ok=true 1596294243000000000
 ```
 In addition to the basic parsing example, the metric name is defined as the name of the `/Gateway/Status` node and the timestamp is derived from the XML document instead of using the execution time.
-#### Multi-node selection
+### Multi-node selection
 For XML documents containing metrics for e.g. multiple devices (like `Sensor`s in the *example.xml*), multiple metrics can be generated using node selection. This example shows how to generate a metric for each *Sensor* in the example.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -329,7 +341,8 @@ Config:
 ```
 Output:
-```
+```text
 sensors,host=Hugin,name=Facility\ A consumers=3i,frequency=49.78,ok=true,power=123.4,temperature=20 1596294243000000000
 sensors,host=Hugin,name=Facility\ B consumers=1i,frequency=49.78,ok=true,power=14.3,temperature=23.1 1596294243000000000
 sensors,host=Hugin,name=Facility\ C consumers=0i,frequency=49.78,ok=false,power=0.02,temperature=19.7 1596294243000000000
@@ -337,11 +350,12 @@ sensors,host=Hugin,name=Facility\ C consumers=0i,frequency=49.78,ok=false,power=
 Using the `metric_selection` option we select all `Sensor` nodes in the XML document. Please note that all field and tag definitions are relative to these selected nodes. An exception is the timestamp definition which is relative to the root node of the XML document.
-#### Batch field processing with multi-node selection
+### Batch field processing with multi-node selection
 For XML documents containing metrics with a large number of fields or where the fields are not known before (e.g. an unknown set of `Variable` nodes in the *example.xml*), field selectors can be used. This example shows how to generate a metric for each *Sensor* in the example with fields derived from the *Variable* nodes.
 Config:
 ```toml
 [[inputs.file]]
 files = ["example.xml"]
@@ -363,7 +377,8 @@ Config:
 ```
 Output:
-```
+```text
 sensors,host=Hugin,name=Facility\ A consumers=3,frequency=49.78,power=123.4,temperature=20 1596294243000000000
 sensors,host=Hugin,name=Facility\ B consumers=1,frequency=49.78,power=14.3,temperature=23.1 1596294243000000000
 sensors,host=Hugin,name=Facility\ C consumers=0,frequency=49.78,power=0.02,temperature=19.7 1596294243000000000