chore: clean up all markdown lint errors in serializer plugins (#10158)

Joshua Powers 2021-11-24 11:47:23 -07:00 committed by GitHub
parent 4605c977da
commit c172df21a4
11 changed files with 64 additions and 52 deletions

View File

@@ -1,9 +1,9 @@
# Example
# Example README
This description explains at a high level what the serializer does and
provides links to where additional information about the format can be found.
### Configuration
## Configuration
This section contains the sample configuration for the serializer. Since the
configuration for a serializer is not a standalone plugin, use the `file`
@@ -24,22 +24,23 @@ or `http` outputs as the base config.
data_format = "example"
```
#### example_option
### example_option
If an option requires a more expansive explanation than can be included inline
in the sample configuration, it may be described here.
### Metrics
## Metrics
The optional Metrics section contains details about how the serializer converts
Telegraf metrics into output.
### Example
## Example
The optional Example section can show an example conversion to the output
format using InfluxDB Line Protocol as the reference format.
For line delimited text formats a diff may be appropriate:
```diff
- cpu,host=localhost,source=example.org value=42
+ cpu|host=localhost|source=example.org|value=42

View File

@@ -30,7 +30,7 @@ The `carbon2` serializer translates the Telegraf metric format to the [Carbon2 f
Standard form:
```
```text
metric=name field=field_1 host=foo 30 1234567890
metric=name field=field_2 host=foo 4 1234567890
metric=name field=field_N host=foo 59 1234567890
@@ -51,7 +51,7 @@ after the `_`.
This is the behavior of `carbon2_format = "metric_includes_field"` which would
make the above example look like:
```
```text
metric=name_field_1 host=foo 30 1234567890
metric=name_field_2 host=foo 4 1234567890
metric=name_field_N host=foo 59 1234567890
@@ -62,7 +62,7 @@ metric=name_field_N host=foo 59 1234567890
To sanitize the metric name, one can specify `carbon2_sanitize_replace_char`
to replace the following characters in the metric name:
```
```text
!@#$%^&*()+`'\"[]{};<>,?/\\|=
```
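As a hypothetical illustration (assuming `carbon2_sanitize_replace_char = ":"`, an example value rather than a statement of the default), a metric name containing one of these characters would be rewritten like this:
```text
metric=cpu/load field=value host=foo 42 1234567890
=>
metric=cpu:load field=value host=foo 42 1234567890
```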
@@ -78,13 +78,13 @@ There will be a `metric` tag that represents the name of the metric and a `field
If we take the following InfluxDB Line Protocol:
```
```text
weather,location=us-midwest,season=summer temperature=82,wind=100 1234567890
```
after serializing in Carbon2, the result would be:
```
```text
metric=weather field=temperature location=us-midwest season=summer 82 1234567890
metric=weather field=wind location=us-midwest season=summer 100 1234567890
```

View File

@@ -5,7 +5,7 @@ template pattern or tag support method. You can select between the two
methods using the [`graphite_tag_support`](#graphite-tag-support) option. When set, the tag support
method is used; otherwise the [Template Pattern](templates) is used.
### Configuration
## Configuration
```toml
[[outputs.file]]
@@ -41,7 +41,7 @@ method is used, otherwise the [Template Pattern](templates) is used.
# graphite_separator = "."
```
#### graphite_tag_support
### graphite_tag_support
When the `graphite_tag_support` option is enabled, the template pattern is not
used. Instead, tags are encoded using
@@ -52,14 +52,17 @@ added in Graphite 1.1. The `metric_path` is a combination of the optional
The tag `name` is reserved by Graphite; any conflicting tags will be encoded as `_name`.
**Example Conversion**:
```
```text
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
cpu.usage_user;cpu=cpu-total;dc=us-east-1;host=tars 0.89 1455320690
cpu.usage_idle;cpu=cpu-total;dc=us-east-1;host=tars 98.09 1455320690
```
With the `graphite_separator` option set to "_":
```
```text
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
cpu_usage_user;cpu=cpu-total;dc=us-east-1;host=tars 0.89 1455320690
@@ -72,7 +75,4 @@ When in `strict` mode Telegraf uses the same rules as metrics when not using tag
When in `compatible` mode Telegraf allows more characters through, and is based on the Graphite specification:
>Tag names must have a length >= 1 and may contain any ascii characters except `;!^=`. Tag values must also have a length >= 1, they may contain any ascii characters except `;` and the first character must not be `~`. UTF-8 characters may work for names and values, but they are not well tested and it is not recommended to use non-ascii characters in metric names or tags. Metric names get indexed under the special tag name, if a metric name starts with one or multiple ~ they simply get removed from the derived tag value because the ~ character is not allowed to be in the first position of the tag value. If a metric name consists of no other characters than ~, then it is considered invalid and may get dropped.
[templates]: /docs/TEMPLATE_PATTERN.md

View File

@@ -4,7 +4,7 @@ The `influx` data format outputs metrics into [InfluxDB Line Protocol][line
protocol]. This is the recommended format unless another format is required
for interoperability.
### Configuration
## Configuration
```toml
[[outputs.file]]
@@ -32,10 +32,11 @@ for interoperability.
influx_uint_support = false
```
### Metrics
## Metrics
Conversion is direct, taking into account some limitations of the Line Protocol
format (a brief illustration follows the list):
- Float fields that are `NaN` or `Inf` are skipped.
- Trailing backslash `\` characters are removed from tag keys and values.
- Tags with a key or value that is the empty string are skipped.
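A minimal sketch of the first rule, writing `NaN` only as pseudo-notation for an in-memory float field value (Line Protocol itself cannot represent it); the `NaN` field is dropped while the rest of the metric is kept:
```text
cpu,host=localhost usage_idle=NaN,usage_user=42 1234567890
=>
cpu,host=localhost usage_user=42 1234567890
```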

View File

@@ -2,7 +2,7 @@
The `json` output data format converts metrics into JSON documents.
### Configuration
## Configuration
```toml
[[outputs.file]]
@@ -28,9 +28,10 @@ The `json` output data format converts metrics into JSON documents.
#json_timestamp_format = ""
```
### Examples:
## Examples
Standard form:
```json
{
"fields": {
@@ -50,6 +51,7 @@ Standard form:
When an output plugin needs to emit multiple metrics at one time, it may use
the batch format. The use of batch format is determined by the plugin;
refer to the documentation for the specific plugin.
```json
{
"metrics": [

View File

@@ -1,14 +1,12 @@
# MessagePack:
# MessagePack
MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON.
[MessagePack](https://msgpack.org) is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON.
https://msgpack.org
### Format Definitions:
## Format Definitions
The output of this format is a MessagePack binary representation of metrics with a structure identical to the JSON below.
```
```json
{
"name":"cpu",
"time": <TIMESTAMP>, // https://github.com/msgpack/msgpack/blob/master/spec.md#timestamp-extension-type
@@ -28,7 +26,7 @@ Output of this format is MessagePack binary representation of metrics that have
MessagePack has its own timestamp representation. You can find additional information in the [MessagePack specification](https://github.com/msgpack/msgpack/blob/master/spec.md#timestamp-extension-type).
### MessagePack Configuration:
## MessagePack Configuration
There are no additional configuration options for MessagePack format.
@@ -42,4 +40,4 @@ There are no additional configuration options for MessagePack format.
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "msgpack"
```
```

View File

@@ -7,8 +7,8 @@ If you're using the HTTP output, this serializer knows how to batch the metrics
[ServiceNow-format]: https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/reference/mid-POST-metrics.html
An example event looks like:
```javascript
[{
"metric_type": "Disk C: % Free Space",
@@ -22,6 +22,7 @@ An example event looks like:
"source": “Telegraf”
}]
```
## Using with the HTTP output
To send this data to a ServiceNow MID Server with the Web Server extension activated, you can use the HTTP output. There are some custom headers that you need to add to manage the MID Web Server authorization; here's a sample config for an HTTP output:
@@ -53,7 +54,7 @@ To send this data to a ServiceNow MID Server with Web Server extension activated
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "nowmetric"
## Additional HTTP headers
[outputs.http.headers]
# # Should be set manually to "application/json" for json data_format
@@ -61,13 +62,13 @@ To send this data to a ServiceNow MID Server with Web Server extension activated
Accept = "application/json"
```
Starting with the London release, you also need to explicitly create event rule to allow binding of metric events to host CIs.
https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/task/event-rule-bind-metrics-to-host.html
Starting with the [London release](https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/task/event-rule-bind-metrics-to-host.html),
you also need to explicitly create an event rule to allow binding of metric events to host CIs.
## Using with the File output
You can use the file output to output the payload in a file.
You can use the file output to output the payload in a file.
In this case, just add the following section to your telegraf config file
```toml

View File

@@ -13,8 +13,7 @@ also update their expiration time based on the most recently received data.
If incoming metrics stop updating specific buckets or quantiles but continue
reporting others every bucket/quantile will continue to exist.
### Configuration
## Configuration
```toml
[[outputs.file]]
@@ -52,18 +51,20 @@ Prometheus labels are produced for each tag.
**Note:** String fields are ignored and do not produce Prometheus metrics.
### Example
## Example
**Example Input**
```
### Example Input
```text
cpu,cpu=cpu0 time_guest=8022.6,time_system=26145.98,time_user=92512.89 1574317740000000000
cpu,cpu=cpu1 time_guest=8097.88,time_system=25223.35,time_user=96519.58 1574317740000000000
cpu,cpu=cpu2 time_guest=7386.28,time_system=24870.37,time_user=95631.59 1574317740000000000
cpu,cpu=cpu3 time_guest=7434.19,time_system=24843.71,time_user=93753.88 1574317740000000000
```
**Example Output**
```
### Example Output
```text
# HELP cpu_time_guest Telegraf collected metric
# TYPE cpu_time_guest counter
cpu_time_guest{cpu="cpu0"} 9582.54

View File

@@ -9,21 +9,21 @@ somewhat, but not fully, mitigated by using outputs that support writing in
"batch format". When using histogram and summary types, it is recommended to
use only the `prometheus_client` output.
### Configuration
## Configuration
```toml
[[outputs.http]]
## URL is the address to send metrics to
url = "https://cortex/api/prom/push"
## Optional TLS Config
tls_ca = "/etc/telegraf/ca.pem"
tls_cert = "/etc/telegraf/cert.pem"
tls_key = "/etc/telegraf/key.pem"
## Data format to output.
data_format = "prometheusremotewrite"
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"

View File

@@ -8,6 +8,7 @@ If you're using the HTTP output, this serializer knows how to batch the metrics
[splunk-format]: http://dev.splunk.com/view/event-collector/SP-CAAAFDN#json
An example event looks like:
```javascript
{
"time": 1529708430,
@@ -22,7 +23,9 @@ An example event looks like:
}
}
```
In the above snippet, the following keys are dimensions:
* cpu
* dc
* user
@@ -53,6 +56,7 @@ you can send all of your CPU stats in one JSON struct, an example event looks li
}
}
```
To enable this mode, set the `splunkmetric_multimetric` option in the output module you plan on using.
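A minimal sketch of turning it on, shown here against the `file` output (an illustrative choice; set it in whichever output module you actually use):
```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout"]

  ## Output metrics in the Splunk HEC metric format.
  data_format = "splunkmetric"

  ## Emit all field values of a measurement in one JSON struct (multi-metric mode).
  splunkmetric_multimetric = true
```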
## Using with the HTTP output
@@ -100,15 +104,18 @@ to manage the HEC authorization, here's a sample config for an HTTP output:
```
## Overrides
You can override the default values for the HEC token you are using by adding additional tags to the config file.
The following aspects of the token can be overridden with tags:
* index
* source
You can either use `[global_tags]` or a more advanced configuration as documented [here](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md),
such as this example which overrides the index just on the cpu metric:
```toml
[[inputs.cpu]]
percpu = false
@@ -122,6 +129,7 @@ Such as this example which overrides the index just on the cpu metric:
You can use the file output when running telegraf on a machine with a Splunk forwarder.
A sample event when `hec_routing` is false (or unset) looks like:
```javascript
{
"_value": 0.6,
@@ -132,6 +140,7 @@ A sample event when `hec_routing` is false (or unset) looks like:
"time": 1529708430
}
```
Data formatted in this manner can be ingested with a simple `props.conf` file that
looks like this:
@@ -183,4 +192,3 @@ Splunk supports only numeric field values, so serializer would silently drop met
unhealthy = 2
none = 3
```

View File

@@ -2,7 +2,7 @@
The `wavefront` serializer translates the Telegraf metric format to the [Wavefront Data Format](https://docs.wavefront.com/wavefront_data_format.html).
### Configuration
## Configuration
```toml
[[outputs.file]]
@@ -22,7 +22,7 @@ The `wavefront` serializer translates the Telegraf metric format to the [Wavefro
data_format = "wavefront"
```
### Metrics
## Metrics
A Wavefront metric is equivalent to a single field value of a Telegraf measurement.
The Wavefront metric name will be: `<measurement_name>.<field_name>`
@@ -30,17 +30,17 @@ If a prefix is specified it will be honored.
Only boolean and numeric metrics will be serialized; all other types will
generate an error.
### Example
## Example
The following Telegraf metric
```
```text
cpu,cpu=cpu0,host=testHost user=12,idle=88,system=0 1234567890
```
will serialize into the following Wavefront metrics
```
```text
"cpu.user" 12.000000 1234567890 source="testHost" "cpu"="cpu0"
"cpu.idle" 88.000000 1234567890 source="testHost" "cpu"="cpu0"
"cpu.system" 0.000000 1234567890 source="testHost" "cpu"="cpu0"