chore: Fix readme linter errors for input plugins E-L (#11214)

reimda 2022-06-07 15:37:08 -06:00 committed by GitHub
parent 1b1482b5eb
commit 453e276718
54 changed files with 529 additions and 307 deletions

@@ -1,15 +1,14 @@
# Amazon ECS Input Plugin
Amazon ECS, Fargate compatible, input plugin which uses the Amazon ECS metadata
and stats [v2][task-metadata-endpoint-v2] or [v3][task-metadata-endpoint-v3] API
endpoints to gather stats on running containers in a Task.
The telegraf container must be run in the same Task as the workload it is
inspecting.
This is similar to (and reuses a few pieces of) the [Docker][docker-input] input
plugin, with some ECS specific modifications for AWS metadata and stats formats.
The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.
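As a minimal, hedged sketch of how this plugin might be configured (the option
names shown are assumptions; consult the plugin's sample config):
```toml
[[inputs.ecs]]
  ## Task metadata endpoint; when unset, the v3 endpoint is typically
  ## discovered via the ECS_CONTAINER_METADATA_URI environment variable
  # endpoint_url = "http://169.254.170.2"
  timeout = "5s"
```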

@@ -1,25 +1,31 @@
# Elasticsearch Input Plugin
The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
[Node Stats][1] and optionally [Cluster-Health][2] metrics.
In addition, the following optional queries are only made by the master node:
[Cluster Stats][3] [Indices Stats][4] [Shard Stats][5]
Specific Elasticsearch endpoints that are queried:
- Node: either /_nodes/stats or /_nodes/_local/stats depending on 'local'
configuration setting
- Cluster Health: /_cluster/health?level=indices
- Cluster Stats: /_cluster/stats
- Indices Stats: /_all/_stats
- Shard Stats: /_all/_stats?level=shards
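As a hedged illustration of how these settings map onto the endpoints above
(the defaults shown are assumptions; see the sample config):
```toml
[[inputs.elasticsearch]]
  servers = ["http://localhost:9200"]
  ## true  -> /_nodes/_local/stats, false -> /_nodes/stats
  local = true
  ## enables the /_cluster/health?level=indices query
  cluster_health = true
```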
Note that specific statistics information can change between Elasticsearch
versions. In general, this plugin attempts to stay as version-generic as
possible by tagging high-level categories only and using a generic json parser
to make unique field names of whatever statistics names are provided at the
mid-low level.
[1]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
[3]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html
[4]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
[5]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
## Configuration

@@ -1,17 +1,19 @@
# Elasticsearch Query Input Plugin
This [elasticsearch](https://www.elastic.co/) query plugin queries endpoints to
obtain metrics from data stored in an Elasticsearch cluster.
The following is supported:
- return number of hits for a search query
- calculate the avg/max/min/sum for a numeric field, filtered by a query,
aggregated per tag
- count number of terms for a particular field
## Elasticsearch Support
This plugin is tested against Elasticsearch 5.x and 6.x releases. Currently it
is known to break on 7.x or greater versions.
## Configuration
@@ -91,7 +93,8 @@ Currently it is known to break on 7.x or greater versions.
## Examples
Please note that the `[[inputs.elasticsearch_query]]` is still required for all
of the examples below.
### Search the average response time, per URI and per response status code
@@ -151,17 +154,32 @@ Please note that the `[[inputs.elasticsearch_query]]` is still required for all
### Required parameters
- `measurement_name`: The target measurement where the results of the
  aggregation query are stored.
- `index`: The index name to query on Elasticsearch
- `query_period`: The time window to query (eg. "1m" to query documents from
  the last minute). Normally this should be set to the same value as the
  collection interval
- `date_field`: The date/time field in the Elasticsearch index
### Optional parameters
- `date_field_custom_format`: Not needed if using one of the built in date/time
formats of Elasticsearch, but may be required if using a custom date/time
format. The format syntax uses the [Joda date format][joda].
- `filter_query`: Lucene query to filter the results (default: "\*")
- `metric_fields`: The list of fields to perform metric aggregation (these must
  be indexed as numeric fields)
- `metric_funcion`: The single-value metric aggregation function to be performed
  on the `metric_fields` defined. Currently supported aggregations are "avg",
  "min", "max", "sum" (see the [aggregation docs][agg]).
- `tags`: The list of fields to be used as tags (these must be indexed as
  non-analyzed fields). A "terms aggregation" will be done per tag defined
- `include_missing_tag`: Set to true to not ignore documents where the tag(s)
  specified above do not exist. (If false, documents without the specified tag
  field will be ignored in `doc_count` and in the metric aggregation)
- `missing_tag_value`: The value of the tag that will be set for documents in
  which the tag field does not exist. Only used when `include_missing_tag` is
  set to `true`.
[joda]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-daterange-aggregation.html#date-format-pattern
[agg]: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html
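A hedged end-to-end sketch combining the parameters above (the `urls` option
and the `aggregation` sub-table layout are assumptions based on the examples
section; some versions spell the function parameter `metric_funcion`, so verify
against your plugin version):
```toml
[[inputs.elasticsearch_query]]
  urls = ["http://localhost:9200"]
  [[inputs.elasticsearch_query.aggregation]]
    measurement_name = "responses"
    index = "my-index-*"
    date_field = "@timestamp"
    query_period = "1m"
    filter_query = "*"
    metric_fields = ["response_time"]
    metric_function = "avg"   # possibly `metric_funcion`, as listed above
    tags = ["URI.keyword"]
    include_missing_tag = true
    missing_tag_value = "unknown"
```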

@@ -1,6 +1,7 @@
# Ethtool Input Plugin
The ethtool input plugin pulls ethernet device stats. Fields pulled will depend
on the network device and driver.
## Configuration

@@ -6,9 +6,12 @@ This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.
The main focus for development of this plugin is Azure IoT hub:
1. Create an Azure IoT Hub by following any of the guides provided here: [Azure
IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
2. Create a device, for example a [simulated Raspberry
Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
3. The connection string needed for the plugin is located under *Shared access
policies*, both the *iothubowner* and *service* policies should work
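A hedged sketch for step 3 (the placeholder values are hypothetical):
```toml
[[inputs.eventhub_consumer]]
  ## Connection string from "Shared access policies", pointing at the
  ## IoT Hub's Event Hub-compatible endpoint
  connection_string = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=service;SharedAccessKey=<key>;EntityPath=<hub>"
  data_format = "json"
```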
## Configuration

@@ -4,33 +4,32 @@ The `example` plugin gathers metrics about example things. This description
explains at a high level what the plugin does and provides links to where
additional information can be found.
Telegraf minimum version: Telegraf x.x Plugin minimum tested version: x.x
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.
```toml @sample.conf
# This is an example plugin
[[inputs.example]]
example_option = "example_value"
```
### example_option
A more in depth description of an option can be provided here, but only do so if
the option cannot be fully described in the sample config.
## Metrics
Here you should add an optional description and links to where the user can get
more information about the measurements.
If the output is determined dynamically based on the input source, or there are
more metrics than can reasonably be listed, describe how the input is mapped to
the output.
- measurement1
- tags:

@@ -1,7 +1,8 @@
# Exec Input Plugin
The `exec` plugin executes all the `commands` in parallel on every interval and
parses metrics from their output in any one of the accepted [Input Data
Formats](../../../docs/DATA_FORMATS_INPUT.md).
This plugin can be used to poll for custom metrics from any source.
@@ -41,14 +42,16 @@ scripts that match the pattern will cause them to be picked up immediately.
## Example
This script produces static values; since no timestamp is specified, the
values are at the current time.
```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```
It can be paired with the following configuration and will be run at the
`interval` of the agent.
```toml
[[inputs.exec]]

@@ -1,10 +1,10 @@
# Execd Input Plugin
The `execd` plugin runs an external program as a long-running daemon. The
programs must output metrics in any one of the accepted [Input Data Formats][]
on the process's STDOUT, and is expected to stay running. If you'd instead like
the process to collect metrics and then exit, check out the [inputs.exec][]
plugin.
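A minimal sketch of such a setup (the program path is hypothetical):
```toml
[[inputs.execd]]
  ## Long-running program to start and keep running
  command = ["/opt/scripts/my-daemon.sh"]
  ## Signal sent on each collection interval (see below)
  signal = "none"
  data_format = "influx"
```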
The `signal` can be configured to send a signal to the running daemon on each
collection interval. This is used when you want to have Telegraf notify the
@@ -125,5 +125,5 @@ end
signal = "none"
```
[Input Data Formats]: ../../../docs/DATA_FORMATS_INPUT.md
[inputs.exec]: ../exec/README.md

@@ -3,8 +3,8 @@
The fail2ban plugin gathers the count of failed and banned ip addresses using
[fail2ban](https://www.fail2ban.org).
This plugin runs the `fail2ban-client` command which generally requires root
access. Acquiring the required permissions can be done using several methods:
- [Use sudo](#using-sudo) to run fail2ban-client.
- Run telegraf as root. (not recommended)
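For the sudo method, a minimal sketch using the plugin's `use_sudo` option:
```toml
[[inputs.fail2ban]]
  ## Run fail2ban-client via sudo; requires the sudoers entry shown below
  use_sudo = true
```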
@@ -49,7 +49,7 @@ Defaults!FAIL2BAN !logfile, !syslog, !pam_session
- failed (integer, count)
- banned (integer, count)
## Example Output
```shell
# fail2ban-client status sshd

@@ -1,7 +1,8 @@
# Fibaro Input Plugin
The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather values
of hooked devices. Those values could be true (1) or false (0) for switches,
percentage for dimmers, temperature, etc.
## Configuration

@@ -1,7 +1,7 @@
# File Input Plugin
The file plugin parses the **complete** contents of a file **every interval**
using the selected [input data format][].
**Note:** If you wish to parse only newly appended lines use the [tail][] input
plugin instead.
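As a brief sketch of how `files` pairs with a data format (the path is
hypothetical):
```toml
[[inputs.file]]
  ## Files to parse completely on each interval; globs are supported
  files = ["/var/log/app/metrics.out"]
  ## One of the supported input data formats
  data_format = "influx"
```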
@@ -38,5 +38,10 @@ plugin instead.
# file_tag = ""
```
## Metrics
The format of metrics produced by this plugin depends on the content and data
format of the file.
[input data format]: /docs/DATA_FORMATS_INPUT.md
[tail]: /plugins/inputs/tail

@@ -1,4 +1,4 @@
# Filestat Input Plugin
The filestat plugin gathers metrics about file existence, size, and other stats.

@@ -1,10 +1,14 @@
# Fluentd Input Plugin
The fluentd plugin gathers metrics from plugin endpoint provided by [in_monitor
plugin][1]. This plugin understands data provided by /api/plugin.json resource
(/api/config.json is not covered).
You might need to adjust your fluentd configuration, in order to reduce series
cardinality in case your fluentd restarts frequently. Every time fluentd starts,
`plugin_id` value is given a new random value. According to [fluentd
documentation][2], you are able to add `@id` parameter for each plugin to avoid
this behaviour and define custom `plugin_id`.
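On the telegraf side, a minimal sketch might look like this (the `endpoint`
option name is an assumption; check the sample config):
```toml
[[inputs.fluentd]]
  ## monitor_agent endpoint exposed by fluentd
  endpoint = "http://localhost:24220/api/plugins.json"
```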
example configuration with `@id` parameter for http plugin:
@@ -16,6 +20,9 @@ example configuration with `@id` parameter for http plugin:
</source>
```
[1]: https://docs.fluentd.org/input/monitor_agent
[2]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter
## Configuration
```toml @sample.conf

@@ -34,7 +34,7 @@ alternative method for collecting repository information.
# additional_fields = []
```
## Metrics
- github_repository
- tags:
@@ -61,17 +61,18 @@ When the [internal][] input is enabled:
- remaining - How many requests you have remaining (per hour)
- blocks - How many requests have been blocked due to rate limit
When specifying `additional_fields` the plugin will collect the specified
properties. **NOTE:** Querying these additional fields might require additional
API-calls. Please make sure you don't exceed the query rate-limit by specifying
too many additional fields. The following lists the available options with the
required API-calls and the resulting fields.
- "pull-requests" (2 API-calls per repository)
- fields:
- open_pull_requests (int)
- closed_pull_requests (int)
## Example Output
```shell
github_repository,language=Go,license=MIT\ License,name=telegraf,owner=influxdata forks=2679i,networks=2679i,open_issues=794i,size=23263i,stars=7091i,subscribers=316i,watchers=7091i 1563901372000000000

@@ -1,10 +1,15 @@
# gNMI (gRPC Network Management Interface) Input Plugin
This plugin consumes telemetry data based on the [gNMI][1] Subscribe method. TLS
is supported for authentication and encryption. This input plugin is
vendor-agnostic and is supported on any platform that supports the gNMI spec.
For Cisco devices:
It has been optimized to support gNMI telemetry as produced by Cisco IOS XR
(64-bit) version 6.5.1, Cisco NX-OS 9.3 and Cisco IOS XE 16.12 and later.
[1]: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md
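As a hedged sketch of a subscription (addresses, credentials and paths are
hypothetical):
```toml
[[inputs.gnmi]]
  addresses = ["10.49.234.114:57777"]
  username = "cisco"
  password = "cisco"
  [[inputs.gnmi.subscription]]
    name = "ifcounters"
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"
    subscription_mode = "sample"
    sample_interval = "10s"
```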
## Configuration

@@ -7,9 +7,11 @@ Plugin currently support two type of end points:-
- multiple (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/multiple`)
- namespace (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/namespace/{namespace}`)
End Point can be a mix of one multiple end point and several namespace end
points.
Note: if a namespace end point is specified, the metrics array will be ignored
for that call.
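A hedged sketch mixing the two end point types (the server address and metric
name are hypothetical):
```toml
[[inputs.graylog]]
  servers = [
    "http://10.0.0.1:9000/api/system/metrics/multiple",
    "http://10.0.0.1:9000/api/system/metrics/namespace/org.graylog2",
  ]
  ## ignored for the namespace end point, as noted above
  metrics = ["jvm.memory.heap.used"]
```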
## Configuration

@@ -1,9 +1,11 @@
# HAProxy Input Plugin
The [HAProxy](http://www.haproxy.org/) input plugin gathers [statistics][1]
using the [stats socket][2] or [HTTP statistics page][3] of a HAProxy server.
[1]: https://cbonte.github.io/haproxy-dconv/1.9/intro.html#3.3.16
[2]: https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
[3]: https://cbonte.github.io/haproxy-dconv/1.9/management.html#9
## Configuration
@@ -42,27 +44,28 @@ or [HTTP statistics page](https://cbonte.github.io/haproxy-dconv/1.9/management.
### HAProxy Configuration
The following information may be useful when getting started, but please consult
the HAProxy documentation for complete and up to date instructions.
The [`stats enable`][4] option can be used to add unauthenticated access over
HTTP using the default settings. To enable the unix socket begin by reading
about the [`stats socket`][5] option.
[4]: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-stats%20enable
[5]: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket
### servers
Server addresses must explicitly start with 'http' if you wish to use HAProxy
status page. Otherwise, addresses will be assumed to be a UNIX socket and any
protocol (if present) will be discarded.
When using socket names, wildcard expansion is supported so the plugin can
gather stats from multiple sockets at once.
To use HTTP Basic Auth add the username and password in the userinfo section of
the URL: `http://user:password@1.2.3.4/haproxy?stats`. The credentials are sent
via the `Authorization` header and not using the request URL.
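Putting the above together, a hedged `servers` sketch (the addresses are
hypothetical):
```toml
[[inputs.haproxy]]
  ## One HTTP stats page with Basic Auth plus a socket wildcard
  servers = ["http://user:password@1.2.3.4/haproxy?stats", "/var/run/haproxy*.sock"]
```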
### keep_field_names
@@ -88,7 +91,7 @@ The following renames are made:
## Metrics
For more details about collected metrics reference the [HAProxy CSV format
documentation][6].
- haproxy
- tags:
@@ -109,6 +112,8 @@ documentation](https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).
- `lastsess` (int)
- **all other stats** (int)
[6]: https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1
## Example Output
```shell

@@ -1,6 +1,10 @@
# HTTP Input Plugin
The HTTP input plugin collects metrics from one or more HTTP(S) endpoints. The
endpoint should have metrics formatted in one of the supported [input data
formats](../../../docs/DATA_FORMATS_INPUT.md). Each data format has its own
unique set of configuration options which can be added to the input
configuration.
## Configuration
@@ -75,7 +79,8 @@ The HTTP input plugin collects metrics from one or more HTTP(S) endpoints. The
## Metrics
The metrics collected by this input plugin will depend on the configured
`data_format` and the payload returned by the HTTP endpoint(s).
The default values below are added if the input format does not specify a value:
@@ -85,4 +90,11 @@ The default values below are added if the input format does not specify a value:
## Optional Cookie Authentication Settings
The optional Cookie Authentication Settings will retrieve a cookie from the
given authorization endpoint, and use it in subsequent API requests. This is
useful for services that do not provide OAuth or Basic Auth authentication,
e.g. the [Tesla Powerwall API][tesla], which uses a Cookie Auth Body to retrieve
an authorization cookie. The Cookie Auth Renewal interval will renew the
authorization by retrieving a new cookie at the given interval.
[tesla]: https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network
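A hedged sketch of these settings (the option names are inferred from the
description above; verify them against the sample config):
```toml
[[inputs.http]]
  urls = ["https://powerwall/api/meters/aggregates"]
  # cookie_auth_url     = "https://powerwall/login/Basic"
  # cookie_auth_method  = "POST"
  # cookie_auth_body    = '{"username": "customer", "password": "<secret>"}'
  # cookie_auth_renewal = "5m"
  data_format = "json"
```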

@@ -1,18 +1,18 @@
# HTTP Listener v2 Input Plugin
HTTP Listener v2 is a service input plugin that listens for metrics sent via
HTTP. Metrics may be sent in any supported [data format][data_format]. For
metrics in [InfluxDB Line Protocol][line_protocol] it's recommended to use the
[`influxdb_listener`][influxdb_listener] or
[`influxdb_v2_listener`][influxdb_v2_listener] instead.
**Note:** The plugin previously known as `http_listener` has been renamed
`influxdb_listener`. If you would like Telegraf to act as a proxy/relay for
InfluxDB it is recommended to use [`influxdb_listener`][influxdb_listener] or
[`influxdb_v2_listener`][influxdb_v2_listener].
## Configuration
```toml @sample.conf
# Generic HTTP write listener
[[inputs.http_listener_v2]]
@@ -68,7 +68,8 @@ This is a sample configuration for the plugin.
## Metrics
Metrics are collected from the part of the request specified by the
`data_source` param and are parsed depending on the value of `data_format`.
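For instance, a short sketch of the two params working together (defaults are
assumptions; see the sample config above):
```toml
[[inputs.http_listener_v2]]
  service_address = ":8080"
  ## which part of the request is parsed: "body" or "query"
  data_source = "body"
  data_format = "influx"
```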
## Troubleshooting

@@ -96,9 +96,12 @@ This input plugin checks HTTP/HTTPS connections.
### `result` / `result_code`
Upon finishing polling the target server, the plugin registers the result of the
operation in the `result` tag, and adds a numeric field called `result_code`
corresponding with that tag value.
This tag is used to expose network and plugin errors. HTTP errors are considered
a successful connection.
|Tag value |Corresponding field value|Description|
|-------------------------------|-------------------------|-----------|

@@ -1,8 +1,9 @@
# HTTP JSON Input Plugin
**DEPRECATED in Telegraf v1.6: Use [HTTP input plugin][] as replacement**
The httpjson plugin collects data from HTTP URLs which respond with JSON. It
flattens the JSON and finds all numeric values, treating them as floats.
## Configuration
@@ -60,18 +61,23 @@ The httpjson plugin collects data from HTTP URLs which respond with JSON. It fl
- httpjson
- response_time (float): Response time in seconds
Additional fields are dependent on the response of the remote service being
polled.
## Tags
- All measurements have the following tags:
- server: HTTP origin as defined in configuration as `servers`.
Any top level keys listed under `tag_keys` in the configuration are added as
tags. Top level keys are defined as keys in the root level of the object in a
single object response, or in the root level of each object within an array of
objects.
## Examples Output
This plugin understands responses containing a single JSON object, or a JSON
Array of Objects.
**Object Output:**
@@ -91,7 +97,9 @@ Given the following response body:
The following metric is produced:
```shell
httpjson,server=http://localhost:9999/stats/ b_d=0.1,a=0.5,b_e=5,response_time=0.001
```
Note that only numerical values are extracted and the type is float.
@@ -104,11 +112,14 @@ If `tag_keys` is included in the configuration:
Then the `service` tag will also be added:
```shell
httpjson,server=http://localhost:9999/stats/,service=service01 b_d=0.1,a=0.5,b_e=5,response_time=0.001
```
**Array Output:**
If the service returns an array of objects, one metric is created for each
object:
```json
[
@@ -133,7 +144,9 @@ If the service returns an array of objects, one metric is be created for each ob
]
```
```shell
httpjson,server=http://localhost:9999/stats/,service=service01 a=0.5,b_d=0.1,b_e=5,response_time=0.003
httpjson,server=http://localhost:9999/stats/,service=service02 a=0.6,b_d=0.2,b_e=6,response_time=0.003
```
[HTTP input plugin]: ../http/README.md

@@ -1,10 +1,11 @@
# Hugepages Input Plugin
Transparent Huge Pages (THP) is a Linux memory management system that reduces
the overhead of Translation Lookaside Buffer (TLB) lookups on machines with
large amounts of memory by using larger memory pages.
Consult <https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html>
for more details.
## Configuration

@@ -4,7 +4,9 @@ This plugin gather services & hosts status using Icinga2 Remote API.
The icinga2 plugin uses the icinga2 remote API to gather status on running
services and hosts. You can read Icinga2's documentation for their remote API
[here][1].
[1]: https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
## Configuration

@@ -1,12 +1,13 @@
# InfluxDB Input Plugin
The InfluxDB plugin will collect metrics on the given InfluxDB servers. Read our
[documentation][1] for detailed information about `influxdb` metrics.
This plugin can also gather metrics from endpoints that expose
InfluxDB-formatted endpoints. See below for more information.
[1]: https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/
## Configuration
```toml @sample.conf
@@ -39,7 +40,8 @@ InfluxDB-formatted endpoints. See below for more information.
## Measurements & Fields
**Note:** The measurements and fields included in this plugin are dynamically
built from the InfluxDB source, and may vary between versions:
- **influxdb_ae** _(Enterprise Only)_ : Statistics related to the Anti-Entropy (AE) engine in InfluxDB Enterprise clusters.
- **bytesRx**: Number of bytes received by the data node.

@@ -5,9 +5,9 @@ according to the [InfluxDB HTTP API][influxdb_http_api]. The intent of the
plugin is to allow Telegraf to serve as a proxy/router for the `/api/v2/write`
endpoint of the InfluxDB HTTP API.
The `/api/v2/write` endpoint supports the `precision` query parameter and can be
set to one of `ns`, `us`, `ms`, `s`. All other parameters are ignored and defer
to the output plugins configuration.
Telegraf minimum version: Telegraf 1.16.0
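A minimal, hedged sketch; clients would then write to
`/api/v2/write?precision=ms` on this address:
```toml
[[inputs.influxdb_v2_listener]]
  ## Address and port to listen on
  service_address = ":8086"
```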

@@ -1,13 +1,18 @@
# Intel Performance Monitoring Unit Plugin
This input plugin exposes Intel PMU (Performance Monitoring Unit) metrics
available through [Linux Perf](https://perf.wiki.kernel.org/index.php/Main_Page)
subsystem.
PMU metrics give insight into the performance and health of the IA processor's
internal components, including core and uncore units. With the number of cores
increasing and processor topology getting more complex, insight into those
metrics is vital to assure the best CPU performance and utilization.
Performance counters are CPU hardware registers that count hardware events such
as instructions executed, cache-misses suffered, or branches mispredicted. They
form a basis for profiling applications to trace dynamic control flow and
identify hotspots.
## Configuration
@@ -63,8 +68,10 @@ They form a basis for profiling applications to trace dynamic control flow and i
### Modifiers
Perf modifiers adjust event-specific perf attributes to fulfill particular
requirements. Details about the perf attribute structure can be found in the
[perf_event_open][man] syscall manual.
General schema of configuration's `events` list element:
@@ -89,48 +96,65 @@ where:
## Requirements
The plugin uses the [iaevents](https://github.com/intel/iaevents) library, a
golang package that makes accessing the Linux kernel's perf interface easier.
The Intel PMU plugin is only intended for use on **linux 64-bit** systems.
Event definition JSON files for specific architectures can be found at
[01.org](https://download.01.org/perfmon/). A script to download the event
definitions that are appropriate for your system (event_download.py) is
available at [pmu-tools](https://github.com/andikleen/pmu-tools). Please keep
these files in a safe place on your system.
## Measuring
The plugin allows measuring both core and uncore events. During plugin
initialization the event names provided by the user are compared with the event
definitions included in the JSON files and translated to perf attributes. Next,
those events are activated to start counting. During every telegraf interval,
the plugin reads the proper measurement for each previously activated event.
Each single core event may be counted separately on every available CPU core.
In contrast, uncore events can be placed in many PMUs within a specified CPU
package. The plugin allows choosing the core ids (core events) or socket ids
(uncore events) on which the counting should be executed. Uncore events are
separately activated on all of the socket's PMUs, and can be exposed as
separate measurements or summed up as one measurement.
Obtained measurements are stored as three values: **Raw**, **Enabled** and
**Running**. Raw is the total count of the event. Enabled and running are the
total time the event was enabled and running. Normally these are the same. If
more events are started than there are available counter slots on the PMU, then
multiplexing occurs and events only run part of the time. Therefore, the plugin
provides a fourth value called **scaled**, which is calculated using the
formula: `raw * enabled / running`.
Events are measured for all running processes.
### Core event groups
Perf allows assembling events as a group. A perf event group is scheduled onto
the CPU as a unit: it will be put onto the CPU only if all of the events in the
group can be put onto the CPU. This means that the values of the member events
can be meaningfully compared — added, divided (to get ratios), and so on — with
each other, since they have counted events for the same set of executed
instructions [(source)][man].
> **NOTE:** Be aware that the plugin will throw an error when trying to create
> a core event group of a size that exceeds the available core PMU counters.
> The error message from the perf syscall will be shown as "invalid argument".
> If you want to check how many PMUs are supported by your Intel CPU, you can
> use the [cpuid](https://linux.die.net/man/1/cpuid) command.
### Note about file descriptors
The plugin opens a number of file descriptors dependent on the number of
monitored CPUs and the number of monitored counters. It can easily exceed the
default per-process limit of allowed file descriptors. Depending on
configuration, it might be required to increase the limit of opened file
descriptors allowed. This can be done, for example, by using the `ulimit -n`
command.
## Metrics
@@ -208,3 +232,5 @@ pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.REF_XCLK_ANY,host=xyz enabled=2200963921
pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES_ANY,host=xyz enabled=2200933946i,running=1470322480i,raw=23631950i,scaled=35374798i 1621254412000000000
pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES,host=xyz raw=18767833i,scaled=28169827i,enabled=2200888514i,running=1466317384i 1621254412000000000
```
[man]: https://man7.org/linux/man-pages/man2/perf_event_open.2.html

@@ -1,8 +1,9 @@
# Intel RDT Input Plugin
The `intel_rdt` plugin collects information provided by monitoring features of
the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the
hardware framework to monitor and control the utilization of shared resources
(ex: last level cache, memory bandwidth).
## About Intel RDT
@@ -13,27 +14,31 @@ Intels Resource Director Technology (RDT) framework consists of:
- Cache Allocation Technology (CAT)
- Code and Data Prioritization (CDP)
As multithreaded and multicore platform architectures emerge, the last level
cache and memory bandwidth are key resources to manage for running workloads in
single-threaded, multithreaded, or complex virtual machine environments. Intel
introduces CMT, MBM, CAT and CDP to manage these workloads across shared
resources.
## Prerequisites - PQoS Tool
To gather Intel RDT metrics, the `intel_rdt` plugin uses the _pqos_ cli tool,
which is a part of the [Intel(R) RDT Software
Package](https://github.com/intel/intel-cmt-cat). Before using this plugin
please be sure _pqos_ is properly installed and configured, as the plugin runs
_pqos_ in `OS Interface` mode. This plugin supports _pqos_ version 4.0.0 and
above. Note: the pqos tool needs root privileges to work properly.
Metrics will be constantly reported from the following `pqos` commands within
the given interval:
### If telegraf does not run as the root user
The `pqos` command requires root-level access to run. There are two options to
overcome this if you run telegraf as a non-root user.
It is possible to update the pqos binary with setuid using `chmod u+s
/path/to/pqos`. This approach is simple and requires no modification to the
@@ -42,7 +47,8 @@ security implications for making such a command setuid root.
Alternately, you may enable sudo to allow `pqos` to run correctly, as follows:
Add the following to your sudoers file (assumes telegraf runs as a user named
`telegraf`):
```sh
telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
@@ -57,7 +63,8 @@ configuration (see below).
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
```
where `CORES` is equal to group of cores provided in config. User can provide
many groups.
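For example, a hedged `cores` sketch matching the command above (option names
assumed from the sample config):
```toml
[[inputs.intel_rdt]]
  ## Optionally point at a non-default pqos location
  # pqos_path = "/usr/local/bin/pqos"
  ## Each string defines one monitored group of cores
  cores = ["0-3", "4,5,6"]
```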
### In case of process monitoring
@@ -65,22 +72,24 @@ where `CORES` is equal to group of cores provided in config. User can provide ma
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
```
where `PIDS` is a group of process IDs whose names match the process names
provided in the config. The user can provide multiple process names, which
leads to the creation of multiple process groups.
In both cases `INTERVAL` is equal to sampling_interval from config.
Because PID associations within the system can change at any moment, the Intel
RDT plugin checks on every interval whether the desired processes have changed
their PID association. If a change is reported, the plugin will restart the
_pqos_ tool with new arguments. If a process name provided by the user does not
match any available process, it will be omitted and the plugin will constantly
check for the process availability.
## Useful links
- Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
- Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
- More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>
## Configuration
@@ -130,14 +139,17 @@ More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-
## Troubleshooting
Pointing to non-existing cores will cause _pqos_ to throw an error and the
plugin will not work properly. Be sure to check that the provided core numbers
exist within the desired system.
Be aware, reading Intel RDT metrics by _pqos_ cannot be done simultaneously on
the same resource. Do not use any other _pqos_ instance that is monitoring the
same cores or PIDs within the working system. It is not possible to monitor the
same cores or PIDs in different groups.
PIDs associated with the given process can be checked manually with the
`pidof` command, e.g.:
```sh
pidof PROCESS

@@ -16,7 +16,8 @@ plugin.
## Measurements & Fields
memstats are taken from the Go runtime:
<https://golang.org/pkg/runtime/#MemStats>
- internal_memstats
- alloc_bytes

@@ -1,6 +1,7 @@
# Internet Speed Monitor Input Plugin
The `Internet Speed Monitor` collects data about the internet speed on the
system.
## Configuration

@@ -1,6 +1,7 @@
# Interrupts Input Plugin
The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and
`/proc/softirqs`.
## Configuration

@@ -3,7 +3,8 @@
Get bare metal metrics using the command line utility
[`ipmitool`](https://github.com/ipmitool/ipmitool).
If no servers are specified, the plugin will query the local machine sensor
stats via the following command:
```sh
ipmitool sdr
@@ -15,13 +16,15 @@ or with the version 2 schema:
ipmitool sdr elist
```
When one or more servers are specified, the plugin will use the following
command to collect remote host sensor stats:
```sh
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
```
Any of the following parameters will be added to the aforementioned query if
they're configured:
```sh
-y hex_key -L privilege
@@ -114,7 +117,8 @@ ipmi device node. When using udev you can create the device node giving
KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
```
Alternatively, it is possible to use sudo. You will need the following in your
telegraf config:
```toml
[[inputs.ipmi_sensor]]

@@ -1,18 +1,27 @@
# Iptables Input Plugin
The iptables plugin gathers packet and byte counters for rules within a set of
tables and chains from the Linux iptables firewall.
Rules are identified through associated comment. **Rules without comment are
ignored**. Indeed we need a unique ID for the rule and the rule number is not a
constant: it may vary when rules are inserted/deleted at start-up or by
automatic tools (interactive firewalls, fail2ban, ...). Also when the rule set
is becoming big (hundreds of lines) most people are interested in monitoring
only a small part of the rule set.
Before using this plugin **you must ensure that the rules you want to monitor
are named with a unique comment**. Comments are added using the `-m comment
--comment "my comment"` iptables options.
The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You
have several options to allow telegraf to run iptables; a configuration sketch
follows the list below:
* Run telegraf as root. This is strongly discouraged.
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is
  the simplest and recommended option.
* Configure sudo to allow telegraf to run iptables. This is the most restrictive
  option, but requires sudo setup.
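A hedged configuration sketch combining comment-named rules with the sudo
option (the chain names are examples):
```toml
[[inputs.iptables]]
  ## See "Using sudo" below
  use_sudo = true
  table = "filter"
  ## Only rules with comments in these chains are reported
  chains = ["INPUT", "FORWARD"]
```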
## Using systemd capabilities
@@ -47,7 +56,11 @@ Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session
## Using IPtables lock feature
Defining multiple instances of this plugin in telegraf.conf can lead to
concurrent IPtables access resulting in "ERROR in input [inputs.iptables]: exit
status 4" messages in telegraf.log and missing metrics. Setting 'use_lock =
true' in the plugin configuration will run IPtables with the '-w' switch,
allowing a lock usage to prevent this error.
## Configuration

@@ -77,7 +77,8 @@ ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_add
ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000
```
Virtual server is configured using `proto+addr+port` and backed by 2 real
servers:
```shell
ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000

View File

@ -1,8 +1,10 @@
# Jenkins Input Plugin
The jenkins plugin gathers information about the nodes and jobs running in a
Jenkins instance.
This plugin does not require anything to be installed on Jenkins; it uses the
Jenkins API to retrieve all the information needed.
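For instance, you can verify that the API is reachable with the credentials
telegraf will use (host, port, user, and token below are placeholders):

```sh
# a JSON response confirms that the API and the credentials work
curl -u admin:api_token "http://jenkins.example.com:8080/api/json?pretty=true"
```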
## Configuration

View File

@ -1,6 +1,6 @@
# Jolokia Input Plugin
**Deprecated in version 1.5: Please use the [jolokia2][] plugin**
## Configuration

View File

@ -1,6 +1,8 @@
# Jolokia2 Input Plugin
The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX
metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP
protocol](https://jolokia.org/reference/html/protocol.html).
* [jolokia2_agent Configuration](jolokia2_agent/README.md)
* [jolokia2_proxy Configuration](jolokia2_proxy/README.md)
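As an illustration of the protocol, a single read request can be issued by
hand against an agent (the host and port are assumptions for a local agent):

```sh
# ask Jolokia for one MBean attribute; the response is a JSON document
curl "http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
```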
@ -9,7 +11,8 @@ The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX
### Jolokia Agent Configuration
The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia
agent](https://jolokia.org/agent/jvm.html) REST endpoints.
```toml @sample.conf
[[inputs.jolokia2_agent]]
@ -39,7 +42,9 @@ Optionally, specify TLS options for communicating with agents:
### Jolokia Proxy Configuration
The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_
by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html)
REST endpoint.
```toml
[[inputs.jolokia2_proxy]]
@ -84,7 +89,8 @@ Optionally, specify TLS options for communicating with proxies:
### Jolokia Metric Configuration
Each `metric` declaration generates a Jolokia request to fetch telemetry from a
JMX MBean.
| Key | Required | Description |
|----------------|----------|-------------|
@ -110,7 +116,8 @@ The preceding `jvm_memory` `metric` declaration produces the following output:
jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
```
Use `*` wildcards against `mbean` property-key values to create distinct series
by capturing values into `tag_keys`.
```toml
[[inputs.jolokia2_agent.metric]]
@ -120,7 +127,9 @@ Use `*` wildcards against `mbean` property-key values to create distinct series
tag_keys = ["name"]
```
Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and
`name` is used as a tag, the preceding `jvm_garbage_collector` `metric`
declaration produces two metrics.
```shell
jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000
@ -138,7 +147,8 @@ Use `tag_prefix` along with `tag_keys` to add detail to tag names.
tag_prefix = "pool_"
```
The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each
with a distinct `pool_name` tag.
```text
jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
@ -149,7 +159,10 @@ jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,Collecti
jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
```
Use substitutions to create fields and field prefixes with MBean property-keys
captured by wildcards. In the following example, `$1` represents the value of
the property-key `name`, and `$2` represents the value of the property-key
`topic`.
```toml
[[inputs.jolokia2_agent.metric]]
@ -159,13 +172,16 @@ Use substitutions to create fields and field prefixes with MBean property-keys c
tag_keys = ["topic"]
```
The preceding `kafka_topic` `metric` declaration produces a metric per Kafka
topic. The `name` MBean property-key is used as a field prefix to aid in
gathering fields together into a single metric.
```text
kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="messages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
```
Both `jolokia2_agent` and `jolokia2_proxy` plugins support default
configurations that apply to every `metric` declaration.
| Key | Default Value | Description |
|---------------------------|---------------|-------------|
@ -187,4 +203,5 @@ Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configuration
* [Weblogic](/plugins/inputs/jolokia2/examples/weblogic.conf)
* [ZooKeeper](/plugins/inputs/jolokia2/examples/zookeeper.conf)
Please help improve this list and contribute new configuration files by opening
an issue or pull request.

View File

@ -1,7 +1,11 @@
# JTI OpenConfig Telemetry Input Plugin
This plugin reads the Juniper Networks implementation of OpenConfig telemetry
data from listed sensors using the Junos Telemetry Interface. Refer to
[openconfig.net](http://openconfig.net/) for more details about OpenConfig and
the [Junos Telemetry Interface (JTI)][1].
[1]: https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html
## Configuration

View File

@ -3,8 +3,8 @@
The [Kafka][kafka] consumer plugin reads from Kafka
and creates metrics using one of the supported [input data formats][].
For old Kafka versions (< 0.8), please use the [kafka_consumer_legacy][] input
plugin and use the old zookeeper connection method.
## Configuration

View File

@ -1,12 +1,13 @@
# Kafka Consumer Legacy Input Plugin
**Deprecated in version 1.4. Please use [Kafka Consumer input plugin][]**
The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the line
protocol. [Consumer Group][1] is used to talk to the Kafka cluster so multiple
instances of telegraf can read from the same topic in parallel.
[1]: http://godoc.org/github.com/wvanbergen/kafka/consumergroup
## Configuration
@ -45,4 +46,4 @@ from the same topic in parallel.
Running integration tests requires running Zookeeper & Kafka. See Makefile
for kafka container command.
[Kafka Consumer input plugin]: ../kafka_consumer/README.md

View File

@ -90,9 +90,12 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.
## kapacitor
The `kapacitor` measurement stores fields with information related to [Kapacitor
tasks][tasks] and [subscriptions][subs].
[tasks]: https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks
[subs]: https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/
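These metrics come from Kapacitor's debug vars endpoint, which the plugin
polls; assuming a default local instance on port 9092, it can be inspected
directly:

```sh
# dump the raw JSON that the plugin parses into measurements
curl -s "http://localhost:9092/kapacitor/v1/debug/vars"
```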
### num_enabled_tasks
@ -115,23 +118,30 @@ The `kapacitor_alert` measurement stores fields with information related to
### notification-dropped
The number of internal notifications dropped because they arrive too late from
another Kapacitor node. If this count is increasing, Kapacitor Enterprise nodes
aren't able to communicate fast enough to keep up with the volume of alerts.
### primary-handle-count
The number of times this node handled an alert as the primary. This count should
increase under normal conditions.
### secondary-handle-count
The number of times this node handled an alert as the secondary. An increase in
this counter indicates that the primary is failing to handle alerts in a timely
manner.
---
## kapacitor_cluster
The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to
communicate][cluster] with one another. Specifically, these metrics track the
gossip communication between the Kapacitor nodes.
[cluster]: https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications
### dropped_member_events
@ -146,8 +156,9 @@ The number of gossip user events that were dropped.
## kapacitor_edges
The `kapacitor_edges` measurement stores fields with information related to
[edges][] in Kapacitor TICKscripts.
[edges]: https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines
### collected
@ -161,8 +172,8 @@ The number of messages emitted by TICKscript edges.
## kapacitor_ingress
The `kapacitor_ingress` measurement stores fields with information related to
data coming into Kapacitor.
### points_received
@ -173,7 +184,9 @@ The number of points received by Kapacitor.
## kapacitor_load
The `kapacitor_load` measurement stores fields with information related to the
[Kapacitor Load Directory service][load-dir].
[load-dir]: https://docs.influxdata.com/kapacitor/latest/guides/load_directory/
### errors
@ -183,7 +196,8 @@ The number of errors reported from the load directory service.
## kapacitor_memstats
The `kapacitor_memstats` measurement stores fields related to Kapacitor memory
usage.
### alloc_bytes
@ -341,14 +355,17 @@ The total number of unique series processed.
#### write_errors
The number of errors that occurred when writing to InfluxDB or other write
endpoints.
---
### kapacitor_topics
The `kapacitor_topics` measurement stores fields related to
[Kapacitor topics][topics].
[topics]: https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/
#### collected (kapacitor_topics)

View File

@ -3,8 +3,9 @@
This plugin is only available on Linux.
The kernel plugin gathers info about the kernel that doesn't fit into other
plugins. In general, it is the statistics available in `/proc/stat` that are
not covered by other plugins, as well as the value of
`/proc/sys/kernel/random/entropy_avail`.
The `/proc/stat` metrics are documented in `man proc` under the `/proc/stat`
section; `entropy_avail` is documented in `man 4 random`.
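The entropy value can be checked directly on any standard Linux system:

```sh
# prints the kernel's current entropy estimate
cat /proc/sys/kernel/random/entropy_avail
```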

View File

@ -1,10 +1,13 @@
# Kernel VMStat Input Plugin
The kernel_vmstat plugin gathers virtual memory statistics by reading
`/proc/vmstat`. For a full list of available fields see the `/proc/vmstat`
section of the [proc man page][man-proc]. For a better idea of what each field
represents, see the [vmstat man page][man-vmstat].
[man-proc]: http://man7.org/linux/man-pages/man5/proc.5.html
[man-vmstat]: http://linux.die.net/man/8/vmstat
```text
/proc/vmstat

View File

@ -63,10 +63,17 @@ Requires the following tools:
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)
From the root of this project execute the following script:
`./plugins/inputs/kibana/test_environment/run_test_env.sh`
This will build the latest Telegraf and then start up Kibana and
Elasticsearch. Telegraf will begin monitoring Kibana's status and write its
results to the file `/tmp/metrics.out` in the Telegraf container.
Then you can attach to the telegraf container to inspect the file
`/tmp/metrics.out` to see if the status is being reported.
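For example (the container name is an assumption; check `docker ps` for the
actual name):

```sh
# follow the metrics file from inside the running telegraf container
docker exec -it telegraf tail -f /tmp/metrics.out
```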
The Visual Studio Code [Remote - Containers][remote] extension provides an easy
user interface to attach to the running container.
[remote]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers

View File

@ -89,8 +89,8 @@ DynamoDB:
### DynamoDB Checkpoint
The DynamoDB checkpoint stores the last processed record in a DynamoDB table.
To leverage this functionality, create a table with the following string type
keys:
```shell
Partition key: namespace

View File

@ -7,8 +7,6 @@ underlying "knx-go" project site (<https://github.com/vapourismo/knx-go>).
## Configuration
```toml @sample.conf
# Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
[[inputs.knx_listener]]

View File

@ -1,6 +1,7 @@
# Kubernetes Inventory Input Plugin
This plugin generates metrics derived from the state of the following Kubernetes
resources:
- daemonsets
- deployments
@ -86,7 +87,13 @@ avoid cardinality issues:
## Kubernetes Permissions
If using [RBAC authorization][rbac], you will need to create a cluster role to
list "persistentvolumes" and "nodes". You will then need to make an [aggregated
ClusterRole][agg] that will eventually be bound to a user or group.
[rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[agg]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles
```yaml
---
@ -115,7 +122,8 @@ aggregationRule:
rules: [] # Rules are automatically filled in by the controller manager.
```
Bind the newly created aggregated ClusterRole with the following config file,
updating the subjects as needed.
```yaml
---
@ -135,8 +143,9 @@ subjects:
## Quickstart in k3s
When monitoring [k3s](https://k3s.io) server instances one can re-use the
already generated administration token. This is less secure than using the
more restrictive dedicated telegraf user but more convenient to set up.
```console
# an empty token will make telegraf use the client cert/key files instead
@ -294,7 +303,8 @@ tls_key = "/run/telegraf-kubernetes-key"
### pv `phase_type`
The persistentvolume "phase" is saved in the `phase` tag with a correlated
numeric field called `phase_type` corresponding with that tag value.
| Tag value | Corresponding field value |
| --------- | ------------------------- |
@ -307,7 +317,8 @@ The persistentvolume "phase" is saved in the `phase` tag with a correlated numer
### pvc `phase_type`
The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
numeric field called `phase_type` corresponding with that tag value.
| Tag value | Corresponding field value |
| --------- | ------------------------- |

View File

@ -6,13 +6,15 @@ is running as part of a `daemonset` within a kubernetes installation. This
means that telegraf is running on every node within the cluster. Therefore, you
should configure this plugin to talk to its locally running kubelet.
To find the IP address of the host you are running on, you can issue a command
like the following:
```sh
curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
```
In this case we used the downward API to pass in `$POD_NAMESPACE`, and
`$HOSTNAME` is the hostname of the pod, which is set by the Kubernetes API.
Kubernetes is a fast moving project, with a new minor release every 3 months. As
such, we will aim to maintain support only for versions that are supported by
@ -65,8 +67,8 @@ avoid cardinality issues:
## DaemonSet
For recommendations on running Telegraf as a DaemonSet see [Monitoring
Kubernetes Architecture][k8s-telegraf] or view the Helm charts:
- [Telegraf][]
- [InfluxDB][]

View File

@ -1,9 +1,11 @@
# Arista LANZ Consumer Input Plugin
This plugin provides a consumer for use with Arista Networks Latency Analyzer
(LANZ).
Metrics are read from a stream of data via TCP through port 50001 on the
switch's management IP. The data is in Protocol Buffers format. For more
information on Arista LANZ, see:
- <https://www.arista.com/en/um-eos/eos-latency-analyzer-lanz>
@ -13,11 +15,6 @@ This plugin uses Arista's sdk.
## Configuration
```toml @sample.conf
# Read metrics off Arista LANZ, via socket
[[inputs.lanz]]
@ -28,9 +25,15 @@ You will need to configure LANZ and enable streaming LANZ data.
]
```
You will need to configure LANZ and enable streaming LANZ data.
- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>
## Metrics
For more details on the metrics see
<https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>
- lanz_congestion_record:
- tags:

View File

@ -1,6 +1,8 @@
# LeoFS Input Plugin
The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
SNMP. See [LeoFS Documentation / System Administration / System
Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
## Configuration

View File

@ -1,6 +1,8 @@
# Linux Sysctl FS Input Plugin
The linux_sysctl_fs input provides Linux system level file metrics. The
documentation on these fields can be found at
<https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.
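The underlying files live under `/proc/sys/fs/`; for example, assuming a
standard Linux `/proc` layout:

```sh
# allocated, unused, and maximum file handles
cat /proc/sys/fs/file-nr
# allocated and free inode structures
cat /proc/sys/fs/inode-nr
```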
Example output:

View File

@ -1,6 +1,7 @@
# Logparser Input Plugin
**Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the
[`grok` data format][grok parser]**
The `logparser` plugin streams and parses the given logfiles. Currently it
has the capability of parsing "grok" patterns from logfiles, which also supports

View File

@ -1,7 +1,7 @@
# Logstash Input Plugin
This plugin reads metrics exposed by [Logstash Monitoring
API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
Logstash 5 and later is supported.
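The same data the plugin collects can be fetched by hand from the monitoring
API, assuming a local Logstash on its default port 9600:

```sh
# node stats in JSON, the endpoint family this plugin reads
curl -s "http://localhost:9600/_node/stats"
```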
@ -43,7 +43,8 @@ Logstash 5 and later is supported.
## Metrics
Additional plugin stats may be collected (because logstash doesn't consistently
expose all stats)
- logstash_jvm
- tags:

View File

@ -1,9 +1,10 @@
# Lustre Input Plugin
The [Lustre][]® file system is an open-source, parallel file system that
supports many requirements of leadership class HPC simulation environments.
This plugin monitors the Lustre file system using its entries in the proc
filesystem.
## Configuration
@ -28,7 +29,8 @@ This plugin monitors the Lustre file system using its entries in the proc filesy
## Metrics
From `/proc/fs/lustre/obdfilter/*/stats` and
`/proc/fs/lustre/osd-ldiskfs/*/stats`:
- lustre2
- tags:

View File

@ -5,9 +5,6 @@ physical volumes, volume groups, and logical volumes.
## Configuration
```toml @sample.conf
# Read metrics about LVM physical volumes, volume groups, logical volumes.
[[inputs.lvm]]
@ -15,6 +12,9 @@ sudo with the ability to run these commands, then set the `use_sudo` to true.
use_sudo = false
```
The `lvm` command requires elevated permissions. If the user has configured sudo
with the ability to run these commands, then set the `use_sudo` to true.
### Using sudo
If your account does not already have the ability to run commands