chore: Fix readme linter errors for input plugins E-L (#11214)
This commit is contained in:
parent
1b1482b5eb
commit
453e276718
@ -1,15 +1,14 @@
# Amazon ECS Input Plugin

Amazon ECS, Fargate compatible, input plugin which uses the Amazon ECS metadata
and stats [v2][task-metadata-endpoint-v2] or [v3][task-metadata-endpoint-v3] API
endpoints to gather stats on running containers in a Task.

The telegraf container must be run in the same Task as the workload it is
inspecting.

This is similar to (and reuses a few pieces of) the [Docker][docker-input] input
plugin, with some ECS specific modifications for AWS metadata and stats formats.

The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.
@ -1,25 +1,31 @@
# Elasticsearch Input Plugin

The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
[Node Stats][1] and optionally [Cluster-Health][2] metrics.

In addition, the following optional queries are only made by the master node:
[Cluster Stats][3], [Indices Stats][4] and [Shard Stats][5].

Specific Elasticsearch endpoints that are queried:

- Node: either /_nodes/stats or /_nodes/_local/stats depending on 'local'
  configuration setting
- Cluster Health: /_cluster/health?level=indices
- Cluster Stats: /_cluster/stats
- Indices Stats: /_all/_stats
- Shard Stats: /_all/_stats?level=shards

Note that specific statistics information can change between Elasticsearch
versions. In general, this plugin attempts to stay as version-generic as
possible by tagging high-level categories only and using a generic json parser
to make unique field names of whatever statistics names are provided at the
mid-low level.

[1]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
[3]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html
[4]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
[5]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html

## Configuration
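As a sketch, a minimal configuration for this plugin might look like the
following. The option names match the plugin's sample config, and the server
address is a placeholder; verify both with `telegraf --usage elasticsearch` for
your version.

```toml
[[inputs.elasticsearch]]
  ## Elasticsearch servers to query for node stats (placeholder address)
  servers = ["http://localhost:9200"]
  ## Query only the local node's stats (/_nodes/_local/stats)
  local = true
  ## Also query /_cluster/health (master node only)
  cluster_health = false
  ## Also query /_cluster/stats (master node only)
  cluster_stats = false
```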
@ -1,17 +1,19 @@
# Elasticsearch Query Input Plugin

This [elasticsearch](https://www.elastic.co/) query plugin queries endpoints to
obtain metrics from data stored in an Elasticsearch cluster.

The following is supported:

- return number of hits for a search query
- calculate the avg/max/min/sum for a numeric field, filtered by a query,
  aggregated per tag
- count number of terms for a particular field

## Elasticsearch Support

This plugin is tested against Elasticsearch 5.x and 6.x releases. Currently it
is known to break on 7.x or greater versions.

## Configuration
@ -91,7 +93,8 @@ Currently it is known to break on 7.x or greater versions.
## Examples

Please note that the `[[inputs.elasticsearch_query]]` is still required for all
of the examples below.

### Search the average response time, per URI and per response status code
@ -151,17 +154,32 @@ Please note that the `[[inputs.elasticsearch_query]]` is still required for all
### Required parameters

- `measurement_name`: The target measurement where the results of the
  aggregation query are stored.
- `index`: The index name to query on Elasticsearch
- `query_period`: The time window to query (eg. "1m" to query documents from
  the last minute). Normally this should be set to the same value as the
  collection interval.
- `date_field`: The date/time field in the Elasticsearch index

### Optional parameters

- `date_field_custom_format`: Not needed if using one of the built-in date/time
  formats of Elasticsearch, but may be required if using a custom date/time
  format. The format syntax uses the [Joda date format][joda].
- `filter_query`: Lucene query to filter the results (default: "\*")
- `metric_fields`: The list of fields to perform metric aggregation (these must
  be indexed as numeric fields)
- `metric_function`: The single-value metric aggregation function to be
  performed on the `metric_fields` defined. Currently supported aggregations
  are "avg", "min", "max", "sum" (see the [aggregation docs][agg]).
- `tags`: The list of fields to be used as tags (these must be indexed as
  non-analyzed fields). A "terms aggregation" will be done per tag defined
- `include_missing_tag`: Set to true to not ignore documents where the tag(s)
  specified above do not exist. (If false, documents without the specified tag
  field will be ignored in `doc_count` and in the metric aggregation)
- `missing_tag_value`: The value of the tag that will be set for documents in
  which the tag field does not exist. Only used when `include_missing_tag` is
  set to `true`.

[joda]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-daterange-aggregation.html#date-format-pattern
[agg]: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html
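Putting the parameters above together, an aggregation block might be sketched
as follows. The URL, index pattern, and field names are placeholder values;
check the layout against the plugin's sample config before use.

```toml
[[inputs.elasticsearch_query]]
  ## Elasticsearch cluster to query (placeholder address)
  urls = ["http://localhost:9200"]

  [[inputs.elasticsearch_query.aggregation]]
    ## Required parameters
    measurement_name = "responses"
    index = "my-index-*"
    date_field = "@timestamp"
    query_period = "1m"
    ## Optional parameters (example field names)
    filter_query = "*"
    metric_fields = ["response_time"]
    metric_function = "avg"
    tags = ["URI.keyword"]
```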
@ -1,6 +1,7 @@
# Ethtool Input Plugin

The ethtool input plugin pulls ethernet device stats. Fields pulled will depend
on the network device and driver.

## Configuration
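A minimal sketch of the configuration; the include/exclude option names follow
the plugin's sample config and the interface names are examples only.

```toml
[[inputs.ethtool]]
  ## Gather stats only for these interfaces (example values)
  # interface_include = ["eth0"]
  ## Or gather for all interfaces except these
  # interface_exclude = ["lo"]
```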
@ -6,9 +6,12 @@ This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.
The main focus for development of this plugin is Azure IoT hub:

1. Create an Azure IoT Hub by following any of the guides provided here: [Azure
   IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
2. Create a device, for example a [simulated Raspberry
   Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
3. The connection string needed for the plugin is located under *Shared access
   policies*, both the *iothubowner* and *service* policies should work

## Configuration
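The steps above end with a connection string; as a sketch, it is wired into the
plugin roughly like this. The connection string is a placeholder, and the
option names should be verified against the plugin's sample config.

```toml
[[inputs.eventhub_consumer]]
  ## Connection string from *Shared access policies* (placeholder value)
  connection_string = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=service;SharedAccessKey=secret;EntityPath=hubname"
  ## Data format of the received messages
  data_format = "json"
```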
@ -4,33 +4,32 @@ The `example` plugin gathers metrics about example things. This description
explains at a high level what the plugin does and provides links to where
additional information can be found.

Telegraf minimum version: Telegraf x.x
Plugin minimum tested version: x.x

## Configuration

```toml @sample.conf
# This is an example plugin
[[inputs.example]]
  example_option = "example_value"
```

Running `telegraf --usage <plugin-name>` also gives the sample TOML
configuration.

### example_option

A more in-depth description of an option can be provided here, but only do so if
the option cannot be fully described in the sample config.

## Metrics

Here you should add an optional description and links to where the user can get
more information about the measurements.

If the output is determined dynamically based on the input source, or there are
more metrics than can reasonably be listed, describe how the input is mapped to
the output.

- measurement1
  - tags:
@ -1,7 +1,8 @@
# Exec Input Plugin

The `exec` plugin executes all the `commands` in parallel on every interval and
parses metrics from their output in any one of the accepted [Input Data
Formats](../../../docs/DATA_FORMATS_INPUT.md).

This plugin can be used to poll for custom metrics from any source.
@ -41,14 +42,16 @@ scripts that match the pattern will cause them to be picked up immediately.
## Example

This script produces static values; since no timestamp is specified, the values
are at the current time.

```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```

It can be paired with the following configuration and will be run at the
`interval` of the agent.

```toml
[[inputs.exec]]
```
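The configuration shown in the hunk above is truncated by the diff; a complete
sketch could look like the following. The script path is hypothetical, and the
option names follow the plugin's sample config.

```toml
[[inputs.exec]]
  ## Commands to run in parallel on every interval (example path)
  commands = ["/opt/scripts/example.sh"]
  ## Timeout for each command to complete
  timeout = "5s"
  ## Parse the output as influx line protocol
  data_format = "influx"
```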
@ -1,10 +1,10 @@
# Execd Input Plugin

The `execd` plugin runs an external program as a long-running daemon. The
program must output metrics in any one of the accepted [Input Data Formats][]
on the process's STDOUT, and is expected to stay running. If you'd instead like
the process to collect metrics and then exit, check out the [inputs.exec][]
plugin.

The `signal` can be configured to send a signal to the running daemon on each
collection interval. This is used for when you want to have Telegraf notify the
@ -125,5 +125,5 @@ end
```toml
  signal = "none"
```

[Input Data Formats]: ../../../docs/DATA_FORMATS_INPUT.md
[inputs.exec]: ../exec/README.md
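As a sketch, a full `execd` configuration might look like the following; the
daemon path is hypothetical, and the option names should be checked against the
plugin's sample config.

```toml
[[inputs.execd]]
  ## Program to run as a long-running daemon (example path)
  command = ["/opt/telegraf/plugins/my-daemon"]
  ## Signal to send on each collection interval:
  ## "none", "STDIN", "SIGHUP", "SIGUSR1" or "SIGUSR2"
  signal = "none"
  ## Delay before restarting the program if it exits
  restart_delay = "10s"
  ## Data format of the program's STDOUT
  data_format = "influx"
```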
@ -3,8 +3,8 @@
The fail2ban plugin gathers the count of failed and banned ip addresses using
[fail2ban](https://www.fail2ban.org).

This plugin runs the `fail2ban-client` command, which generally requires root
access. Acquiring the required permissions can be done using several methods:

- [Use sudo](#using-sudo) to run fail2ban-client.
- Run telegraf as root. (not recommended)
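With the sudo approach, the plugin configuration itself stays small; as a
sketch, assuming the `use_sudo` option from the plugin's sample config:

```toml
[[inputs.fail2ban]]
  ## Run fail2ban-client via sudo instead of directly
  use_sudo = true
```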
@ -49,7 +49,7 @@ Defaults!FAIL2BAN !logfile, !syslog, !pam_session
- failed (integer, count)
- banned (integer, count)

## Example Output

```shell
# fail2ban-client status sshd
```
@ -1,7 +1,8 @@
# Fibaro Input Plugin

The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather values
of hooked devices. Those values could be true (1) or false (0) for switches,
percentage for dimmers, temperature, etc.

## Configuration
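A minimal sketch of the configuration; the controller address and credentials
are placeholders, and the option names follow the plugin's sample config.

```toml
[[inputs.fibaro]]
  ## Address of the Fibaro controller API (placeholder)
  url = "http://192.168.1.2/api/"
  ## HTTP Basic Auth credentials (placeholders)
  username = "user"
  password = "password"
```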
@ -1,7 +1,7 @@
# File Input Plugin

The file plugin parses the **complete** contents of a file **every interval**
using the selected [input data format][].

**Note:** If you wish to parse only newly appended lines use the [tail][] input
plugin instead.
@ -38,5 +38,10 @@ plugin instead.
```toml
  # file_tag = ""
```

## Metrics

The format of metrics produced by this plugin depends on the content and data
format of the file.

[input data format]: /docs/DATA_FORMATS_INPUT.md
[tail]: /plugins/inputs/tail
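As a sketch, a typical configuration pairs a file glob with a data format; the
path below is an example only.

```toml
[[inputs.file]]
  ## Files to parse completely on each interval; glob patterns are supported
  files = ["/var/log/example/*.json"]
  ## Input data format used to parse the file contents
  data_format = "json"
```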
@ -1,4 +1,4 @@
# Filestat Input Plugin

The filestat plugin gathers metrics about file existence, size, and other stats.
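A minimal sketch, using the `files` and `md5` options from the plugin's sample
config; the glob is an example only.

```toml
[[inputs.filestat]]
  ## Files to gather stats about; glob patterns are supported
  files = ["/var/log/**.log"]
  ## Also compute an md5 checksum of each file (more expensive)
  md5 = false
```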
@ -1,10 +1,14 @@
# Fluentd Input Plugin

The fluentd plugin gathers metrics from the plugin endpoint provided by
[in_monitor plugin][1]. This plugin understands data provided by the
/api/plugin.json resource (/api/config.json is not covered).

You might need to adjust your fluentd configuration in order to reduce series
cardinality in case your fluentd restarts frequently. Every time fluentd starts,
the `plugin_id` value is given a new random value. According to the [fluentd
documentation][2], you are able to add an `@id` parameter for each plugin to
avoid this behaviour and define a custom `plugin_id`.

Example configuration with `@id` parameter for the http plugin:
@ -16,6 +20,9 @@ example configuration with `@id` parameter for http plugin:
```
</source>
```

[1]: https://docs.fluentd.org/input/monitor_agent
[2]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter

## Configuration

```toml @sample.conf
```
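The sample config above is truncated by the diff; as a sketch, the plugin only
needs the monitor endpoint. Port 24220 is the conventional in_monitor port, but
treat the address as a placeholder.

```toml
[[inputs.fluentd]]
  ## URL of the in_monitor plugin endpoint (placeholder address)
  endpoint = "http://localhost:24220/api/plugins.json"
```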
@ -34,7 +34,7 @@ alternative method for collecting repository information.
```toml
  # additional_fields = []
```

## Metrics

- github_repository
  - tags:
@ -61,17 +61,18 @@ When the [internal][] input is enabled:
- remaining - How many requests you have remaining (per hour)
- blocks - How many requests have been blocked due to rate limit

When specifying `additional_fields` the plugin will collect the specified
properties. **NOTE:** Querying these additional fields might require additional
API-calls. Please make sure you don't exceed the query rate-limit by specifying
too many additional fields. Below we list the available options with the
required API-calls and the resulting fields.

- "pull-requests" (2 API-calls per repository)
  - fields:
    - open_pull_requests (int)
    - closed_pull_requests (int)

## Example Output

```shell
github_repository,language=Go,license=MIT\ License,name=telegraf,owner=influxdata forks=2679i,networks=2679i,open_issues=794i,size=23263i,stars=7091i,subscribers=316i,watchers=7091i 1563901372000000000
```
@ -1,10 +1,15 @@
# gNMI (gRPC Network Management Interface) Input Plugin

This plugin consumes telemetry data based on the [gNMI][1] Subscribe method. TLS
is supported for authentication and encryption. This input plugin is
vendor-agnostic and is supported on any platform that supports the gNMI spec.

For Cisco devices:

It has been optimized to support gNMI telemetry as produced by Cisco IOS XR
(64-bit) version 6.5.1, Cisco NX-OS 9.3 and Cisco IOS XE 16.12 and later.

[1]: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md

## Configuration
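As a sketch, a subscription-based configuration might look like the following.
The device address, credentials, and path are placeholders; the structure
follows the plugin's sample config and should be verified for your version.

```toml
[[inputs.gnmi]]
  ## gNMI device addresses (placeholder)
  addresses = ["10.0.0.1:57400"]
  ## Credentials (placeholders)
  username = "admin"
  password = "admin"

  [[inputs.gnmi.subscription]]
    ## Name used for the measurement
    name = "ifcounters"
    origin = "openconfig-interfaces"
    path = "/interfaces/interface/state/counters"
    ## Stream values at a fixed sample interval
    subscription_mode = "sample"
    sample_interval = "10s"
```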
@ -7,9 +7,11 @@ Plugin currently support two type of end points:-
- multiple (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/multiple`)
- namespace (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/namespace/{namespace}`)

End Point can be a mix of one "multiple" end point and several namespace end
points.

Note: if a namespace end point is specified, the metrics array will be ignored
for that call.

## Configuration
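As a sketch, one "multiple" endpoint with a metrics list could be configured
like this. The server address, credentials, and metric name are placeholders;
verify the option names against the plugin's sample config.

```toml
[[inputs.graylog]]
  ## One "multiple" endpoint or several namespace endpoints (placeholder address)
  servers = ["http://10.0.0.1:9000/api/system/metrics/multiple"]
  ## Metrics to fetch; ignored for namespace endpoints (hypothetical metric name)
  metrics = ["jvm.memory.heap.used"]
  ## Graylog API credentials (placeholders)
  username = "user"
  password = "password"
```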
@ -1,9 +1,11 @@
# HAProxy Input Plugin

The [HAProxy](http://www.haproxy.org/) input plugin gathers [statistics][1]
using the [stats socket][2] or [HTTP statistics page][3] of a HAProxy server.

[1]: https://cbonte.github.io/haproxy-dconv/1.9/intro.html#3.3.16
[2]: https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
[3]: https://cbonte.github.io/haproxy-dconv/1.9/management.html#9

## Configuration
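A sketch mixing both access methods; the addresses are placeholders, and the
`servers` option name follows the plugin's sample config.

```toml
[[inputs.haproxy]]
  ## A mix of an HTTP stats page and a UNIX stats socket (placeholder values)
  servers = [
    "http://user:password@10.0.0.1/haproxy?stats",
    "/run/haproxy/admin.sock"
  ]
```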
@ -42,27 +44,28 @@ or [HTTP statistics page](https://cbonte.github.io/haproxy-dconv/1.9/management.
### HAProxy Configuration

The following information may be useful when getting started, but please consult
the HAProxy documentation for complete and up to date instructions.

The [`stats enable`][4] option can be used to add unauthenticated access over
HTTP using the default settings. To enable the unix socket begin by reading
about the [`stats socket`][5] option.

[4]: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-stats%20enable
[5]: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket

### servers

Server addresses must explicitly start with 'http' if you wish to use the
HAProxy status page. Otherwise, addresses will be assumed to be a UNIX socket
and any protocol (if present) will be discarded.

When using socket names, wildcard expansion is supported so the plugin can
gather stats from multiple sockets at once.

To use HTTP Basic Auth add the username and password in the userinfo section of
the URL: `http://user:password@1.2.3.4/haproxy?stats`. The credentials are sent
via the `Authorization` header and not using the request URL.

### keep_field_names
@ -88,7 +91,7 @@ The following renames are made:
## Metrics

For more details about collected metrics reference the [HAProxy CSV format
documentation][6].

- haproxy
  - tags:
@ -109,6 +112,8 @@ documentation](https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).
- `lastsess` (int)
- **all other stats** (int)

[6]: https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1

## Example Output

```shell
```
@@ -1,6 +1,10 @@
 # HTTP Input Plugin

-The HTTP input plugin collects metrics from one or more HTTP(S) endpoints. The endpoint should have metrics formatted in one of the supported [input data formats](../../../docs/DATA_FORMATS_INPUT.md). Each data format has its own unique set of configuration options which can be added to the input configuration.
+The HTTP input plugin collects metrics from one or more HTTP(S) endpoints. The
+endpoint should have metrics formatted in one of the supported [input data
+formats](../../../docs/DATA_FORMATS_INPUT.md). Each data format has its own
+unique set of configuration options which can be added to the input
+configuration.

 ## Configuration

@@ -75,7 +79,8 @@ The HTTP input plugin collects metrics from one or more HTTP(S) endpoints. The

 ## Metrics

-The metrics collected by this input plugin will depend on the configured `data_format` and the payload returned by the HTTP endpoint(s).
+The metrics collected by this input plugin will depend on the configured
+`data_format` and the payload returned by the HTTP endpoint(s).

 The default values below are added if the input format does not specify a value:

@@ -85,4 +90,11 @@ The default values below are added if the input format does not specify a value:

 ## Optional Cookie Authentication Settings

-The optional Cookie Authentication Settings will retrieve a cookie from the given authorization endpoint, and use it in subsequent API requests. This is useful for services that do not provide OAuth or Basic Auth authentication, e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve an authorization cookie. The Cookie Auth Renewal interval will renew the authorization by retrieving a new cookie at the given interval.
+The optional Cookie Authentication Settings will retrieve a cookie from the
+given authorization endpoint, and use it in subsequent API requests. This is
+useful for services that do not provide OAuth or Basic Auth authentication,
+e.g. the [Tesla Powerwall API][tesla], which uses a Cookie Auth Body to retrieve
+an authorization cookie. The Cookie Auth Renewal interval will renew the
+authorization by retrieving a new cookie at the given interval.

+[tesla]: https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network
@@ -1,18 +1,18 @@
 # HTTP Listener v2 Input Plugin

 HTTP Listener v2 is a service input plugin that listens for metrics sent via
-HTTP. Metrics may be sent in any supported [data format][data_format]. For metrics in
-[InfluxDB Line Protocol][line_protocol] it's recommended to use the [`influxdb_listener`][influxdb_listener]
-or [`influxdb_v2_listener`][influxdb_v2_listener] instead.
+HTTP. Metrics may be sent in any supported [data format][data_format]. For
+metrics in [InfluxDB Line Protocol][line_protocol] it's recommended to use the
+[`influxdb_listener`][influxdb_listener] or
+[`influxdb_v2_listener`][influxdb_v2_listener] instead.

 **Note:** The plugin previously known as `http_listener` has been renamed
 `influxdb_listener`. If you would like Telegraf to act as a proxy/relay for
-InfluxDB it is recommended to use [`influxdb_listener`][influxdb_listener] or [`influxdb_v2_listener`][influxdb_v2_listener].
+InfluxDB it is recommended to use [`influxdb_listener`][influxdb_listener] or
+[`influxdb_v2_listener`][influxdb_v2_listener].

 ## Configuration

-This is a sample configuration for the plugin.
-
 ```toml @sample.conf
 # Generic HTTP write listener
 [[inputs.http_listener_v2]]

@@ -68,7 +68,8 @@ This is a sample configuration for the plugin.

 ## Metrics

-Metrics are collected from the part of the request specified by the `data_source` param and are parsed depending on the value of `data_format`.
+Metrics are collected from the part of the request specified by the
+`data_source` param and are parsed depending on the value of `data_format`.

 ## Troubleshooting

@@ -96,9 +96,12 @@ This input plugin checks HTTP/HTTPS connections.

 ### `result` / `result_code`

-Upon finishing polling the target server, the plugin registers the result of the operation in the `result` tag, and adds a numeric field called `result_code` corresponding with that tag value.
+Upon finishing polling the target server, the plugin registers the result of the
+operation in the `result` tag, and adds a numeric field called `result_code`
+corresponding with that tag value.

-This tag is used to expose network and plugin errors. HTTP errors are considered a successful connection.
+This tag is used to expose network and plugin errors. HTTP errors are considered
+a successful connection.

 |Tag value |Corresponding field value|Description|
 -------------------------------|-------------------------|-----------|
@@ -1,8 +1,9 @@
 # HTTP JSON Input Plugin

-## DEPRECATED in Telegraf v1.6: Use [HTTP input plugin][] as replacement
+**DEPRECATED in Telegraf v1.6: Use [HTTP input plugin][] as replacement**

-The httpjson plugin collects data from HTTP URLs which respond with JSON. It flattens the JSON and finds all numeric values, treating them as floats.
+The httpjson plugin collects data from HTTP URLs which respond with JSON. It
+flattens the JSON and finds all numeric values, treating them as floats.

 ## Configuration

@@ -60,18 +61,23 @@ The httpjson plugin collects data from HTTP URLs which respond with JSON. It fl
 - httpjson
 - response_time (float): Response time in seconds

-Additional fields are dependant on the response of the remote service being polled.
+Additional fields are dependant on the response of the remote service being
+polled.

 ## Tags

 - All measurements have the following tags:
 - server: HTTP origin as defined in configuration as `servers`.

-Any top level keys listed under `tag_keys` in the configuration are added as tags. Top level keys are defined as keys in the root level of the object in a single object response, or in the root level of each object within an array of objects.
+Any top level keys listed under `tag_keys` in the configuration are added as
+tags. Top level keys are defined as keys in the root level of the object in a
+single object response, or in the root level of each object within an array of
+objects.

 ## Examples Output

-This plugin understands responses containing a single JSON object, or a JSON Array of Objects.
+This plugin understands responses containing a single JSON object, or a JSON
+Array of Objects.

 **Object Output:**

@@ -91,7 +97,9 @@ Given the following response body:

 The following metric is produced:

-`httpjson,server=http://localhost:9999/stats/ b_d=0.1,a=0.5,b_e=5,response_time=0.001`
+```shell
+httpjson,server=http://localhost:9999/stats/ b_d=0.1,a=0.5,b_e=5,response_time=0.001
+```

 Note that only numerical values are extracted and the type is float.

@@ -104,11 +112,14 @@ If `tag_keys` is included in the configuration:

 Then the `service` tag will also be added:

-`httpjson,server=http://localhost:9999/stats/,service=service01 b_d=0.1,a=0.5,b_e=5,response_time=0.001`
+```shell
+httpjson,server=http://localhost:9999/stats/,service=service01 b_d=0.1,a=0.5,b_e=5,response_time=0.001
+```

 **Array Output:**

-If the service returns an array of objects, one metric is be created for each object:
+If the service returns an array of objects, one metric is be created for each
+object:

 ```json
 [

@@ -133,7 +144,9 @@ If the service returns an array of objects, one metric is be created for each ob
 ]
 ```

-`httpjson,server=http://localhost:9999/stats/,service=service01 a=0.5,b_d=0.1,b_e=5,response_time=0.003`
-`httpjson,server=http://localhost:9999/stats/,service=service02 a=0.6,b_d=0.2,b_e=6,response_time=0.003`
+```shell
+httpjson,server=http://localhost:9999/stats/,service=service01 a=0.5,b_d=0.1,b_e=5,response_time=0.003
+httpjson,server=http://localhost:9999/stats/,service=service02 a=0.6,b_d=0.2,b_e=6,response_time=0.003
+```

-[HTTP input plugin]: /plugins/inputs/http
+[HTTP input plugin]: ../http/README.md
@@ -1,10 +1,11 @@
 # Hugepages Input Plugin

-Transparent Huge Pages (THP) is a Linux memory management system that reduces the overhead of
-Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger
-memory pages.
+Transparent Huge Pages (THP) is a Linux memory management system that reduces
+the overhead of Translation Lookaside Buffer (TLB) lookups on machines with
+large amounts of memory by using larger memory pages.

-Consult <https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html> for more details.
+Consult <https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html>
+for more details.

 ## Configuration
@@ -4,7 +4,9 @@ This plugin gather services & hosts status using Icinga2 Remote API.

 The icinga2 plugin uses the icinga2 remote API to gather status on running
 services and hosts. You can read Icinga2's documentation for their remote API
-[here](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api)
+[here][1].

+[1]: https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api

 ## Configuration
@@ -1,12 +1,13 @@
 # InfluxDB Input Plugin

 The InfluxDB plugin will collect metrics on the given InfluxDB servers. Read our
-[documentation](https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/)
-for detailed information about `influxdb` metrics.
+[documentation][1] for detailed information about `influxdb` metrics.

 This plugin can also gather metrics from endpoints that expose
 InfluxDB-formatted endpoints. See below for more information.

+[1]: https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/

 ## Configuration

 ```toml @sample.conf

@@ -39,7 +40,8 @@ InfluxDB-formatted endpoints. See below for more information.

 ## Measurements & Fields

-**Note:** The measurements and fields included in this plugin are dynamically built from the InfluxDB source, and may vary between versions:
+**Note:** The measurements and fields included in this plugin are dynamically
+built from the InfluxDB source, and may vary between versions:

 - **influxdb_ae** _(Enterprise Only)_ : Statistics related to the Anti-Entropy (AE) engine in InfluxDB Enterprise clusters.
 - **bytesRx**: Number of bytes received by the data node.
@@ -5,9 +5,9 @@ according to the [InfluxDB HTTP API][influxdb_http_api]. The intent of the
 plugin is to allow Telegraf to serve as a proxy/router for the `/api/v2/write`
 endpoint of the InfluxDB HTTP API.

-The `/api/v2/write` endpoint supports the `precision` query parameter and can be set
-to one of `ns`, `us`, `ms`, `s`. All other parameters are ignored and
-defer to the output plugins configuration.
+The `/api/v2/write` endpoint supports the `precision` query parameter and can be
+set to one of `ns`, `us`, `ms`, `s`. All other parameters are ignored and defer
+to the output plugins configuration.

 Telegraf minimum version: Telegraf 1.16.0
@@ -1,13 +1,18 @@
 # Intel Performance Monitoring Unit Plugin

-This input plugin exposes Intel PMU (Performance Monitoring Unit) metrics available through [Linux Perf](https://perf.wiki.kernel.org/index.php/Main_Page) subsystem.
+This input plugin exposes Intel PMU (Performance Monitoring Unit) metrics
+available through [Linux Perf](https://perf.wiki.kernel.org/index.php/Main_Page)
+subsystem.

-PMU metrics gives insight into performance and health of IA processor's internal components,
-including core and uncore units. With the number of cores increasing and processor topology getting more complex
-the insight into those metrics is vital to assure the best CPU performance and utilization.
+PMU metrics gives insight into performance and health of IA processor's internal
+components, including core and uncore units. With the number of cores increasing
+and processor topology getting more complex the insight into those metrics is
+vital to assure the best CPU performance and utilization.

-Performance counters are CPU hardware registers that count hardware events such as instructions executed, cache-misses suffered, or branches mispredicted.
-They form a basis for profiling applications to trace dynamic control flow and identify hotspots.
+Performance counters are CPU hardware registers that count hardware events such
+as instructions executed, cache-misses suffered, or branches mispredicted. They
+form a basis for profiling applications to trace dynamic control flow and
+identify hotspots.

 ## Configuration

@@ -63,8 +68,10 @@ They form a basis for profiling applications to trace dynamic control flow and i

 ### Modifiers

-Perf modifiers adjust event-specific perf attribute to fulfill particular requirements.
-Details about perf attribute structure could be found in [perf_event_open](https://man7.org/linux/man-pages/man2/perf_event_open.2.html) syscall manual.
+Perf modifiers adjust event-specific perf attribute to fulfill particular
+requirements. Details about perf attribute structure could be found in
+[perf_event_open][man]
+syscall manual.

 General schema of configuration's `events` list element:

@@ -89,48 +96,65 @@ where:

 ## Requirements

-The plugin is using [iaevents](https://github.com/intel/iaevents) library which is a golang package that makes accessing the Linux kernel's perf interface easier.
+The plugin is using [iaevents](https://github.com/intel/iaevents) library which
+is a golang package that makes accessing the Linux kernel's perf interface
+easier.

 Intel PMU plugin, is only intended for use on **linux 64-bit** systems.

-Event definition JSON files for specific architectures can be found at [01.org](https://download.01.org/perfmon/).
-A script to download the event definitions that are appropriate for your system (event_download.py) is available at [pmu-tools](https://github.com/andikleen/pmu-tools).
-Please keep these files in a safe place on your system.
+Event definition JSON files for specific architectures can be found at
+[01.org](https://download.01.org/perfmon/). A script to download the event
+definitions that are appropriate for your system (event_download.py) is
+available at [pmu-tools](https://github.com/andikleen/pmu-tools). Please keep
+these files in a safe place on your system.

 ## Measuring

-Plugin allows measuring both core and uncore events. During plugin initialization the event names provided by user are compared
-with event definitions included in JSON files and translated to perf attributes. Next, those events are activated to start counting.
-During every telegraf interval, the plugin reads proper measurement for each previously activated event.
+Plugin allows measuring both core and uncore events. During plugin
+initialization the event names provided by user are compared with event
+definitions included in JSON files and translated to perf attributes. Next,
+those events are activated to start counting. During every telegraf interval,
+the plugin reads proper measurement for each previously activated event.

-Each single core event may be counted severally on every available CPU's core. In contrast, uncore events could be placed in
-many PMUs within specified CPU package. The plugin allows choosing core ids (core events) or socket ids (uncore events) on which the counting should be executed.
-Uncore events are separately activated on all socket's PMUs, and can be exposed as separate
+Each single core event may be counted severally on every available CPU's
+core. In contrast, uncore events could be placed in many PMUs within specified
+CPU package. The plugin allows choosing core ids (core events) or socket ids
+(uncore events) on which the counting should be executed. Uncore events are
+separately activated on all socket's PMUs, and can be exposed as separate
 measurement or to be summed up as one measurement.

-Obtained measurements are stored as three values: **Raw**, **Enabled** and **Running**. Raw is a total count of event. Enabled and running are total time the event was enabled and running.
-Normally these are the same. If more events are started than available counter slots on the PMU, then multiplexing
-occurs and events only run part of the time. Therefore, the plugin provides a 4-th value called **scaled** which is calculated using following formula:
-`raw * enabled / running`.
+Obtained measurements are stored as three values: **Raw**, **Enabled** and
+**Running**. Raw is a total count of event. Enabled and running are total time
+the event was enabled and running. Normally these are the same. If more events
+are started than available counter slots on the PMU, then multiplexing occurs
+and events only run part of the time. Therefore, the plugin provides a 4-th
+value called **scaled** which is calculated using following formula: `raw *
+enabled / running`.

 Events are measured for all running processes.

 ### Core event groups

-Perf allows assembling events as a group. A perf event group is scheduled onto the CPU as a unit: it will be put onto the CPU only if all of the events in the group can be put onto the CPU.
-This means that the values of the member events can be meaningfully compared — added, divided (to get ratios), and so on — with each other,
-since they have counted events for the same set of executed instructions [(source)](https://man7.org/linux/man-pages/man2/perf_event_open.2.html).
+Perf allows assembling events as a group. A perf event group is scheduled onto
+the CPU as a unit: it will be put onto the CPU only if all of the events in the
+group can be put onto the CPU. This means that the values of the member events
+can be meaningfully compared — added, divided (to get ratios), and so on — with
+each other, since they have counted events for the same set of executed
+instructions [(source)][man].

-> **NOTE:**
-> Be aware that the plugin will throw an error when trying to create core event group of size that exceeds available core PMU counters.
-> The error message from perf syscall will be shown as "invalid argument". If you want to check how many PMUs are supported by your Intel CPU, you can use the [cpuid](https://linux.die.net/man/1/cpuid) command.
+> **NOTE:** Be aware that the plugin will throw an error when trying to create
+> core event group of size that exceeds available core PMU counters. The error
+> message from perf syscall will be shown as "invalid argument". If you want to
+> check how many PMUs are supported by your Intel CPU, you can use the
+> [cpuid](https://linux.die.net/man/1/cpuid) command.

 ### Note about file descriptors

-The plugin opens a number of file descriptors dependent on number of monitored CPUs and number of monitored
-counters. It can easily exceed the default per process limit of allowed file descriptors. Depending on
-configuration, it might be required to increase the limit of opened file descriptors allowed.
-This can be done for example by using `ulimit -n command`.
+The plugin opens a number of file descriptors dependent on number of monitored
+CPUs and number of monitored counters. It can easily exceed the default per
+process limit of allowed file descriptors. Depending on configuration, it might
+be required to increase the limit of opened file descriptors allowed. This can
+be done for example by using `ulimit -n command`.

 ## Metrics

@@ -208,3 +232,5 @@ pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.REF_XCLK_ANY,host=xyz enabled=2200963921
 pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES_ANY,host=xyz enabled=2200933946i,running=1470322480i,raw=23631950i,scaled=35374798i 1621254412000000000
 pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES,host=xyz raw=18767833i,scaled=28169827i,enabled=2200888514i,running=1466317384i 1621254412000000000
 ```

+[man]: https://man7.org/linux/man-pages/man2/perf_event_open.2.html
@ -1,8 +1,9 @@
|
||||||
# Intel RDT Input Plugin
|
# Intel RDT Input Plugin
|
||||||
|
|
||||||
The `intel_rdt` plugin collects information provided by monitoring features of
|
The `intel_rdt` plugin collects information provided by monitoring features of
|
||||||
the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the hardware framework to monitor
|
the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the
|
||||||
and control the utilization of shared resources (ex: last level cache, memory bandwidth).
|
hardware framework to monitor and control the utilization of shared resources
|
||||||
|
(ex: last level cache, memory bandwidth).
|
||||||
|
|
||||||
## About Intel RDT
|
## About Intel RDT
|
||||||
|
|
||||||
|
|
@ -13,27 +14,31 @@ Intel’s Resource Director Technology (RDT) framework consists of:
|
||||||
- Cache Allocation Technology (CAT)
|
- Cache Allocation Technology (CAT)
|
||||||
- Code and Data Prioritization (CDP)
|
- Code and Data Prioritization (CDP)
|
||||||
|
|
||||||
As multithreaded and multicore platform architectures emerge, the last level cache and
|
As multithreaded and multicore platform architectures emerge, the last level
|
||||||
memory bandwidth are key resources to manage for running workloads in single-threaded,
|
cache and memory bandwidth are key resources to manage for running workloads in
|
||||||
multithreaded, or complex virtual machine environments. Intel introduces CMT, MBM, CAT
|
single-threaded, multithreaded, or complex virtual machine environments. Intel
|
||||||
and CDP to manage these workloads across shared resources.
|
introduces CMT, MBM, CAT and CDP to manage these workloads across shared
|
||||||
|
resources.
|
||||||
|
|
||||||
## Prerequsities - PQoS Tool
|
## Prerequsities - PQoS Tool
|
||||||
|
|
||||||
To gather Intel RDT metrics, the `intel_rdt` plugin uses _pqos_ cli tool which is a
|
To gather Intel RDT metrics, the `intel_rdt` plugin uses _pqos_ cli tool which
|
||||||
part of [Intel(R) RDT Software Package](https://github.com/intel/intel-cmt-cat).
|
is a part of [Intel(R) RDT Software
|
||||||
Before using this plugin please be sure _pqos_ is properly installed and configured regarding that the plugin
|
Package](https://github.com/intel/intel-cmt-cat). Before using this plugin
|
||||||
run _pqos_ to work with `OS Interface` mode. This plugin supports _pqos_ version 4.0.0 and above.
|
please be sure _pqos_ is properly installed and configured regarding that the
|
||||||
Note: pqos tool needs root privileges to work properly.
|
plugin run _pqos_ to work with `OS Interface` mode. This plugin supports _pqos_
|
||||||
|
version 4.0.0 and above. Note: pqos tool needs root privileges to work
|
||||||
|
properly.
|
||||||
|
|
||||||
Metrics will be constantly reported from the following `pqos` commands within the given interval:
|
Metrics will be constantly reported from the following `pqos` commands within
|
||||||
|
the given interval:
|
||||||
|
|
||||||
### If telegraf does not run as the root user
|
### If telegraf does not run as the root user
|
||||||
|
|
||||||
The `pqos` binary needs to run as root. If telegraf is running as a non-root user, you may enable sudo
|
The `pqos` binary needs to run as root. If telegraf is running as a non-root
|
||||||
to allow `pqos` to run correctly.
|
user, you may enable sudo to allow `pqos` to run correctly. The `pqos` command
|
||||||
The `pqos` command requires root level access to run. There are two options to
|
requires root level access to run. There are two options to overcome this if
|
||||||
overcome this if you run telegraf as a non-root user.
|
you run telegraf as a non-root user.
|
||||||
|
|
||||||
It is possible to update the pqos binary with setuid using `chmod u+s
|
It is possible to update the pqos binary with setuid using `chmod u+s
|
||||||
/path/to/pqos`. This approach is simple and requires no modification to the
|
/path/to/pqos`. This approach is simple and requires no modification to the
|
||||||
|
|
@ -42,7 +47,8 @@ security implications for making such a command setuid root.
|
||||||
|
|
||||||
Alternately, you may enable sudo to allow `pqos` to run correctly, as follows:
|
Alternately, you may enable sudo to allow `pqos` to run correctly, as follows:
|
||||||
|
|
||||||
Add the following to your sudoers file (assumes telegraf runs as a user named `telegraf`):
|
Add the following to your sudoers file (assumes telegraf runs as a user named
|
||||||
|
`telegraf`):
|
||||||
|
|
||||||
```sh
|
```sh
|
||||||
telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
|
telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
|
||||||
|
|
@ -57,7 +63,8 @@ configuration (see below).
|
||||||
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
|
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
|
||||||
```
|
```
|
||||||
|
|
||||||
where `CORES` is equal to group of cores provided in config. User can provide many groups.
|
where `CORES` is equal to group of cores provided in config. User can provide
|
||||||
|
many groups.
|
||||||
|
|
||||||
### In case of process monitoring
|
### In case of process monitoring
|
||||||
|
|
||||||
|
|
@ -65,22 +72,24 @@ where `CORES` is equal to group of cores provided in config. User can provide ma
|
||||||
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
|
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
|
||||||
```
|
```
|
||||||
|
|
||||||
where `PIDS` is group of processes IDs which name are equal to provided process name in a config.
|
where `PIDS` is group of processes IDs which name are equal to provided process
|
||||||
User can provide many process names which lead to create many processes groups.
|
name in a config. User can provide many process names which lead to create many
|
||||||
|
processes groups.
|
||||||
|
|
||||||
In both cases `INTERVAL` is equal to sampling_interval from config.
|
In both cases `INTERVAL` is equal to sampling_interval from config.
|
||||||
|
|
||||||

Because PIDs association within system could change in every moment, Intel RDT plugin provides a
functionality to check on every interval if desired processes change their PIDs association.
If some change is reported, plugin will restart _pqos_ tool with new arguments. If provided by user
process name is not equal to any of available processes, will be omitted and plugin will constantly
check for process availability.
Because PIDs association within system could change in every moment, Intel RDT
plugin provides a functionality to check on every interval if desired processes
change their PIDs association. If some change is reported, plugin will restart
_pqos_ tool with new arguments. If provided by user process name is not equal to
any of available processes, will be omitted and plugin will constantly check for
process availability.

## Useful links
Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>

- Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
- Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
- More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>

## Configuration

@ -130,14 +139,17 @@ More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-

## Troubleshooting

Pointing to non-existing cores will lead to throwing an error by _pqos_ and the plugin will not work properly.
Be sure to check provided core number exists within desired system.
Pointing to non-existing cores will lead to throwing an error by _pqos_ and the
plugin will not work properly. Be sure to check provided core number exists
within desired system.

Be aware, reading Intel RDT metrics by _pqos_ cannot be done simultaneously on the same resource.
Do not use any other _pqos_ instance that is monitoring the same cores or PIDs within the working system.
It is not possible to monitor same cores or PIDs on different groups.
Be aware, reading Intel RDT metrics by _pqos_ cannot be done simultaneously on
the same resource. Do not use any other _pqos_ instance that is monitoring the
same cores or PIDs within the working system. It is not possible to monitor
same cores or PIDs on different groups.

PIDs associated for the given process could be manually checked by `pidof` command. E.g:
PIDs associated for the given process could be manually checked by `pidof`
command. E.g:

```sh
pidof PROCESS

@ -16,7 +16,8 @@ plugin.

## Measurements & Fields

memstats are taken from the Go runtime: <https://golang.org/pkg/runtime/#MemStats>
memstats are taken from the Go runtime:
<https://golang.org/pkg/runtime/#MemStats>

- internal_memstats
  - alloc_bytes

@ -1,6 +1,7 @@

# Internet Speed Monitor
# Internet Speed Monitor Input Plugin

The `Internet Speed Monitor` collects data about the internet speed on the system.
The `Internet Speed Monitor` collects data about the internet speed on the
system.

## Configuration

@ -1,6 +1,7 @@

# Interrupts Input Plugin

The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/proc/softirqs`.
The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and
`/proc/softirqs`.

## Configuration

@ -3,7 +3,8 @@

Get bare metal metrics using the command line utility
[`ipmitool`](https://github.com/ipmitool/ipmitool).

If no servers are specified, the plugin will query the local machine sensor stats via the following command:
If no servers are specified, the plugin will query the local machine sensor
stats via the following command:

```sh
ipmitool sdr

@ -15,13 +16,15 @@ or with the version 2 schema:

ipmitool sdr elist
```

When one or more servers are specified, the plugin will use the following command to collect remote host sensor stats:
When one or more servers are specified, the plugin will use the following
command to collect remote host sensor stats:

```sh
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
```
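For instance, the remote query above corresponds to a configuration such as the following sketch. The `servers` string format mirrors the plugin's documented option; the credentials and address are placeholders:

```toml
[[inputs.ipmi_sensor]]
  ## Each entry maps to "-I lan -H <host> -U <user> -P <password>"
  ## on the ipmitool command line shown above
  servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
```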

Any of the following parameters will be added to the aformentioned query if they're configured:
Any of the following parameters will be added to the aforementioned query if
they're configured:

```sh
-y hex_key -L privilege

@ -114,7 +117,8 @@ ipmi device node. When using udev you can create the device node giving

KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
```

Alternatively, it is possible to use sudo. You will need the following in your telegraf config:
Alternatively, it is possible to use sudo. You will need the following in your
telegraf config:

```toml
[[inputs.ipmi_sensor]]

@ -1,18 +1,27 @@

# Iptables Input Plugin

The iptables plugin gathers packets and bytes counters for rules within a set of table and chain from the Linux's iptables firewall.
The iptables plugin gathers packets and bytes counters for rules within a set of
table and chain from the Linux's iptables firewall.

Rules are identified through associated comment. **Rules without comment are ignored**.
Indeed we need a unique ID for the rule and the rule number is not a constant: it may vary when rules are inserted/deleted at start-up or by automatic tools (interactive firewalls, fail2ban, ...).
Also when the rule set is becoming big (hundreds of lines) most people are interested in monitoring only a small part of the rule set.
Rules are identified through associated comment. **Rules without comment are
ignored**. Indeed we need a unique ID for the rule and the rule number is not a
constant: it may vary when rules are inserted/deleted at start-up or by
automatic tools (interactive firewalls, fail2ban, ...). Also when the rule set
is becoming big (hundreds of lines) most people are interested in monitoring
only a small part of the rule set.

Before using this plugin **you must ensure that the rules you want to monitor are named with a unique comment**. Comments are added using the `-m comment --comment "my comment"` iptables options.
Before using this plugin **you must ensure that the rules you want to monitor
are named with a unique comment**. Comments are added using the `-m comment
--comment "my comment"` iptables options.

The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You have several options to grant telegraf to run iptables:
The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You
have several options to grant telegraf to run iptables:

* Run telegraf as root. This is strongly discouraged.
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is the simplest and recommended option.
* Configure sudo to grant telegraf to run iptables. This is the most restrictive option, but require sudo setup.
* Run telegraf as root. This is strongly discouraged.
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is
  the simplest and recommended option.
* Configure sudo to grant telegraf to run iptables. This is the most restrictive
  option, but requires sudo setup.

## Using systemd capabilities

@ -47,7 +56,11 @@ Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session

## Using IPtables lock feature

Defining multiple instances of this plugin in telegraf.conf can lead to concurrent IPtables access resulting in "ERROR in input [inputs.iptables]: exit status 4" messages in telegraf.log and missing metrics. Setting 'use_lock = true' in the plugin configuration will run IPtables with the '-w' switch, allowing a lock usage to prevent this error.
Defining multiple instances of this plugin in telegraf.conf can lead to
concurrent IPtables access resulting in "ERROR in input [inputs.iptables]: exit
status 4" messages in telegraf.log and missing metrics. Setting 'use_lock =
true' in the plugin configuration will run IPtables with the '-w' switch,
allowing a lock usage to prevent this error.
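A minimal sketch of such a configuration, assuming the plugin's `use_lock`, `table`, and `chains` options (the table and chain values are illustrative):

```toml
[[inputs.iptables]]
  ## Run iptables with -w to wait for the xtables lock
  ## and avoid the "exit status 4" error described above
  use_lock = true
  table = "filter"
  chains = ["INPUT"]
```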

## Configuration

@ -77,7 +77,8 @@ ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_add

ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000
```

Virtual server is configured using `proto+addr+port` and backed by 2 real servers:
Virtual server is configured using `proto+addr+port` and backed by 2 real
servers:

```shell
ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000

@ -1,8 +1,10 @@

# Jenkins Input Plugin

The jenkins plugin gathers information about the nodes and jobs running in a jenkins instance.
The jenkins plugin gathers information about the nodes and jobs running in a
jenkins instance.

This plugin does not require a plugin on jenkins and it makes use of Jenkins API to retrieve all the information needed.
This plugin does not require a plugin on jenkins and it makes use of Jenkins API
to retrieve all the information needed.
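Since all data comes from the Jenkins API over HTTP, the plugin only needs the instance URL and credentials. A minimal sketch (option names follow the plugin's sample configuration; the URL and credentials are placeholders):

```toml
[[inputs.jenkins]]
  ## Jenkins instance to query via its REST API
  url = "http://my-jenkins-instance:8080"
  username = "admin"
  password = "admin"
```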

## Configuration

@ -1,6 +1,6 @@

# Jolokia Input Plugin

## Deprecated in version 1.5: Please use the [jolokia2][] plugin
**Deprecated in version 1.5: Please use the [jolokia2][] plugin**

## Configuration

@ -1,6 +1,8 @@

# Jolokia2 Input Plugin

The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP protocol](https://jolokia.org/reference/html/protocol.html).
The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX
metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP
protocol](https://jolokia.org/reference/html/protocol.html).

* [jolokia2_agent Configuration](jolokia2_agent/README.md)
* [jolokia2_proxy Configuration](jolokia2_proxy/README.md)

@ -9,7 +11,8 @@ The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX

### Jolokia Agent Configuration

The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints.
The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia
agent](https://jolokia.org/agent/jvm.html) REST endpoints.

```toml @sample.conf
[[inputs.jolokia2_agent]]

@ -39,7 +42,9 @@ Optionally, specify TLS options for communicating with agents:

### Jolokia Proxy Configuration

The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_ by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html) REST endpoint.
The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_
by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html)
REST endpoint.

```toml
[[inputs.jolokia2_proxy]]

@ -84,7 +89,8 @@ Optionally, specify TLS options for communicating with proxies:

### Jolokia Metric Configuration

Each `metric` declaration generates a Jolokia request to fetch telemetry from a JMX MBean.
Each `metric` declaration generates a Jolokia request to fetch telemetry from a
JMX MBean.

| Key | Required | Description |
|----------------|----------|-------------|

@ -110,7 +116,8 @@ The preceeding `jvm_memory` `metric` declaration produces the following output:

jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
```

Use `*` wildcards against `mbean` property-key values to create distinct series by capturing values into `tag_keys`.
Use `*` wildcards against `mbean` property-key values to create distinct series
by capturing values into `tag_keys`.

```toml
[[inputs.jolokia2_agent.metric]]

@ -120,7 +127,9 @@ Use `*` wildcards against `mbean` property-key values to create distinct series

  tag_keys = ["name"]
```

Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and `name` is used as a tag, the preceeding `jvm_garbage_collector` `metric` declaration produces two metrics.
Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and
`name` is used as a tag, the preceding `jvm_garbage_collector` `metric`
declaration produces two metrics.

```shell
jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000

@ -138,7 +147,8 @@ Use `tag_prefix` along with `tag_keys` to add detail to tag names.

  tag_prefix = "pool_"
```

The preceeding `jvm_memory_pool` `metric` declaration produces six metrics, each with a distinct `pool_name` tag.
The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each
with a distinct `pool_name` tag.

```text
jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000

@ -149,7 +159,10 @@ jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,Collecti

jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
```

Use substitutions to create fields and field prefixes with MBean property-keys captured by wildcards. In the following example, `$1` represents the value of the property-key `name`, and `$2` represents the value of the property-key `topic`.
Use substitutions to create fields and field prefixes with MBean property-keys
captured by wildcards. In the following example, `$1` represents the value of
the property-key `name`, and `$2` represents the value of the property-key
`topic`.

```toml
[[inputs.jolokia2_agent.metric]]

@ -159,13 +172,16 @@ Use substitutions to create fields and field prefixes with MBean property-keys c

  tag_keys = ["topic"]
```

The preceeding `kafka_topic` `metric` declaration produces a metric per Kafka topic. The `name` Mbean property-key is used as a field prefix to aid in gathering fields together into the single metric.
The preceding `kafka_topic` `metric` declaration produces a metric per Kafka
topic. The `name` MBean property-key is used as a field prefix to aid in
gathering fields together into the single metric.

```text
kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="messages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
```

Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configurations that apply to every `metric` declaration.
Both `jolokia2_agent` and `jolokia2_proxy` plugins support default
configurations that apply to every `metric` declaration.

| Key | Default Value | Description |
|---------------------------|---------------|-------------|

@ -187,4 +203,5 @@ Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configuration

* [Weblogic](/plugins/inputs/jolokia2/examples/weblogic.conf)
* [ZooKeeper](/plugins/inputs/jolokia2/examples/zookeeper.conf)

Please help improve this list and contribute new configuration files by opening an issue or pull request.
Please help improve this list and contribute new configuration files by opening
an issue or pull request.

@ -1,7 +1,11 @@

# JTI OpenConfig Telemetry Input Plugin

This plugin reads Juniper Networks implementation of OpenConfig telemetry data from listed sensors using Junos Telemetry Interface. Refer to
[openconfig.net](http://openconfig.net/) for more details about OpenConfig and [Junos Telemetry Interface (JTI)](https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html).
This plugin reads Juniper Networks implementation of OpenConfig telemetry data
from listed sensors using Junos Telemetry Interface. Refer to
[openconfig.net](http://openconfig.net/) for more details about OpenConfig and
[Junos Telemetry Interface (JTI)][1].

[1]: https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html
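A minimal subscription sketch under these assumptions (option names follow the plugin's sample configuration; the gRPC address, sensor path, and frequency are placeholders):

```toml
[[inputs.jti_openconfig_telemetry]]
  ## Device gRPC endpoints exposing the Junos Telemetry Interface
  servers = ["localhost:57000"]
  ## OpenConfig sensor paths to subscribe to
  sensors = ["/interfaces/"]
  sample_frequency = "1000ms"
```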

## Configuration

@ -3,8 +3,8 @@

The [Kafka][kafka] consumer plugin reads from Kafka
and creates metrics using one of the supported [input data formats][].

For old kafka version (< 0.8), please use the [kafka_consumer_legacy][] input plugin
and use the old zookeeper connection method.
For old kafka version (< 0.8), please use the [kafka_consumer_legacy][] input
plugin and use the old zookeeper connection method.
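For modern Kafka (>= 0.8) the broker connection is configured directly, as in the following sketch (broker address and topic are placeholders; `data_format` selects one of the supported input data formats):

```toml
[[inputs.kafka_consumer]]
  ## Kafka brokers to connect to
  brokers = ["localhost:9092"]
  ## Topics to consume
  topics = ["telegraf"]
  ## Data format of the consumed messages
  data_format = "influx"
```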

## Configuration

@ -1,12 +1,13 @@

# Kafka Consumer Legacy Input Plugin

## Deprecated in version 1.4. Please use [Kafka Consumer input plugin][]
**Deprecated in version 1.4. Please use [Kafka Consumer input plugin][]**

The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the
line protocol. [Consumer Group](http://godoc.org/github.com/wvanbergen/kafka/consumergroup)
is used to talk to the Kafka cluster so multiple instances of telegraf can read
from the same topic in parallel.
The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the line
protocol. [Consumer Group][1] is used to talk to the Kafka cluster so multiple
instances of telegraf can read from the same topic in parallel.

[1]: http://godoc.org/github.com/wvanbergen/kafka/consumergroup

## Configuration

@ -45,4 +46,4 @@ from the same topic in parallel.

Running integration tests requires running Zookeeper & Kafka. See Makefile
for kafka container command.

[Kafka Consumer input plugin]: /plugins/inputs/kafka_consumer
[Kafka Consumer input plugin]: ../kafka_consumer/README.md

@ -90,9 +90,12 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.

## kapacitor

The `kapacitor` measurement stores fields with information related to
[Kapacitor tasks](https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks)
and [subscriptions](https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/).
The `kapacitor` measurement stores fields with information related to [Kapacitor
tasks][tasks] and [subscriptions][subs].

[tasks]: https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks
[subs]: https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/

### num_enabled_tasks

@ -115,23 +118,30 @@ The `kapacitor_alert` measurement stores fields with information related to

### notification-dropped

The number of internal notifications dropped because they arrive too late from another Kapacitor node.
If this count is increasing, Kapacitor Enterprise nodes aren't able to communicate fast enough
to keep up with the volume of alerts.
The number of internal notifications dropped because they arrive too late from
another Kapacitor node. If this count is increasing, Kapacitor Enterprise nodes
aren't able to communicate fast enough to keep up with the volume of alerts.

### primary-handle-count

The number of times this node handled an alert as the primary. This count should increase under normal conditions.
The number of times this node handled an alert as the primary. This count should
increase under normal conditions.

### secondary-handle-count

The number of times this node handled an alert as the secondary. An increase in this counter indicates that the primary is failing to handle alerts in a timely manner.
The number of times this node handled an alert as the secondary. An increase in
this counter indicates that the primary is failing to handle alerts in a timely
manner.

---

## kapacitor_cluster

The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications) with one another. Specifically, these metrics track the gossip communication between the Kapacitor nodes.
The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to
communicate][cluster] with one another. Specifically, these metrics track the
gossip communication between the Kapacitor nodes.

[cluster]: https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications

### dropped_member_events
@ -146,8 +156,9 @@ The number of gossip user events that were dropped.
|
||||||
## kapacitor_edges
|
## kapacitor_edges
|
||||||
|
|
||||||
The `kapacitor_edges` measurement stores fields with information related to
|
The `kapacitor_edges` measurement stores fields with information related to
|
||||||
[edges](https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines)
|
[edges][] in Kapacitor TICKscripts.
|
||||||
in Kapacitor TICKscripts.
|
|
||||||
|
[edges]: https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines
|
||||||
|
|
||||||
### collected
|
### collected
|
||||||
|
|
||||||
|
|
@ -161,8 +172,8 @@ The number of messages emitted by TICKscript edges.
|
||||||
|
|
||||||
## kapacitor_ingress
|
## kapacitor_ingress
|
||||||
|
|
||||||
The `kapacitor_ingress` measurement stores fields with information related to data
|
The `kapacitor_ingress` measurement stores fields with information related to
|
||||||
coming into Kapacitor.
|
data coming into Kapacitor.
|
||||||
|
|
||||||
### points_received
|
### points_received
|
||||||
|
|
||||||
|
|
@ -173,7 +184,9 @@ The number of points received by Kapacitor.
|
||||||
## kapacitor_load
|
## kapacitor_load
|
||||||
|
|
||||||
The `kapacitor_load` measurement stores fields with information related to the
|
The `kapacitor_load` measurement stores fields with information related to the
|
||||||
[Kapacitor Load Directory service](https://docs.influxdata.com/kapacitor/latest/guides/load_directory/).
|
[Kapacitor Load Directory service][load-dir].
|
||||||
|
|
||||||
|
[load-dir]: https://docs.influxdata.com/kapacitor/latest/guides/load_directory/
|
||||||
|
|
||||||
### errors
|
### errors
|
||||||
|
|
||||||
|
|
@ -183,7 +196,8 @@ The number of errors reported from the load directory service.
|
||||||
|
|
||||||
## kapacitor_memstats
|
## kapacitor_memstats
|
||||||
|
|
||||||
The `kapacitor_memstats` measurement stores fields related to Kapacitor memory usage.
|
The `kapacitor_memstats` measurement stores fields related to Kapacitor memory
|
||||||
|
usage.
|
||||||
|
|
||||||
### alloc_bytes
|
### alloc_bytes
|
||||||
|
|
||||||
|
|
@ -341,14 +355,17 @@ The total number of unique series processed.
|
||||||
|
|
||||||
#### write_errors
|
#### write_errors
|
||||||
|
|
||||||
The number of errors that occurred when writing to InfluxDB or other write endpoints.
|
The number of errors that occurred when writing to InfluxDB or other write
|
||||||
|
endpoints.
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
### kapacitor_topics
|
### kapacitor_topics
|
||||||
|
|
||||||
The `kapacitor_topics` measurement stores fields related to
|
The `kapacitor_topics` measurement stores fields related to Kapacitor
|
||||||
Kapacitor topics](<https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/>).
|
topics][topics].
|
||||||
|
|
||||||
|
[topics]: https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/
|
||||||
|
|
||||||
#### collected (kapacitor_topics)
|
#### collected (kapacitor_topics)
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@@ -3,8 +3,9 @@
 This plugin is only available on Linux.
 
 The kernel plugin gathers info about the kernel that doesn't fit into other
-plugins. In general, it is the statistics available in `/proc/stat` that are
-not covered by other plugins as well as the value of `/proc/sys/kernel/random/entropy_avail`
+plugins. In general, it is the statistics available in `/proc/stat` that are not
+covered by other plugins as well as the value of
+`/proc/sys/kernel/random/entropy_avail`
 
 The metrics are documented in `man proc` under the `/proc/stat` section.
 The metrics are documented in `man 4 random` under the `/proc/stat` section.
 
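The kernel README above points Telegraf at `/proc/stat` and `/proc/sys/kernel/random/entropy_avail`. As a quick sanity check outside Telegraf, both files can be read directly (a sketch, Linux-only; the chosen counter names are just illustrative fields from `man proc`):

```shell
# Context switches, boot time, and process count, as exposed in /proc/stat
# (documented in `man proc`).
grep -E '^(ctxt|btime|processes)' /proc/stat

# The kernel's current entropy estimate (documented in `man 4 random`).
cat /proc/sys/kernel/random/entropy_avail
```

This shows the raw values the plugin parses, which is handy when checking whether a reported metric matches the host.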
@@ -1,10 +1,13 @@
 # Kernel VMStat Input Plugin
 
-The kernel_vmstat plugin gathers virtual memory statistics
-by reading /proc/vmstat. For a full list of available fields see the
-/proc/vmstat section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
-For a better idea of what each field represents, see the
-[vmstat man page](http://linux.die.net/man/8/vmstat).
+The kernel_vmstat plugin gathers virtual memory statistics by reading
+/proc/vmstat. For a full list of available fields see the /proc/vmstat section
+of the [proc man page][man-proc]. For a better idea of what each field
+represents, see the [vmstat man page][man-vmstat].
+
+[man-proc]: http://man7.org/linux/man-pages/man5/proc.5.html
+
+[man-vmstat]: http://linux.die.net/man/8/vmstat
 
 ```text
 /proc/vmstat
 
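The kernel_vmstat README above reads `/proc/vmstat`, which is a plain file of `name value` pairs, so the fields the plugin reports can be previewed directly (a sketch, Linux-only; the paging counters shown are just a sample of the full field list):

```shell
# Each line of /proc/vmstat is "<field> <value>"; show a few paging counters.
grep -E '^(pgpgin|pgpgout|pgfault)' /proc/vmstat
```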
@@ -63,10 +63,17 @@ Requires the following tools:
 - [Docker](https://docs.docker.com/get-docker/)
 - [Docker Compose](https://docs.docker.com/compose/install/)
 
-From the root of this project execute the following script: `./plugins/inputs/kibana/test_environment/run_test_env.sh`
+From the root of this project execute the following script:
+`./plugins/inputs/kibana/test_environment/run_test_env.sh`
 
-This will build the latest Telegraf and then start up Kibana and Elasticsearch, Telegraf will begin monitoring Kibana's status and write its results to the file `/tmp/metrics.out` in the Telegraf container.
+This will build the latest Telegraf and then start up Kibana and Elasticsearch,
+Telegraf will begin monitoring Kibana's status and write its results to the file
+`/tmp/metrics.out` in the Telegraf container.
 
-Then you can attach to the telegraf container to inspect the file `/tmp/metrics.out` to see if the status is being reported.
+Then you can attach to the telegraf container to inspect the file
+`/tmp/metrics.out` to see if the status is being reported.
 
-The Visual Studio Code [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension provides an easy user interface to attach to the running container.
+The Visual Studio Code [Remote - Containers][remote] extension provides an easy
+user interface to attach to the running container.
+
+[remote]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
 
@@ -89,8 +89,8 @@ DynamoDB:
 
 ### DynamoDB Checkpoint
 
-The DynamoDB checkpoint stores the last processed record in a DynamoDB. To leverage
-this functionality, create a table with the following string type keys:
+The DynamoDB checkpoint stores the last processed record in a DynamoDB. To
+leverage this functionality, create a table with the following string type keys:
 
 ```shell
 Partition key: namespace
 
@@ -7,8 +7,6 @@ underlying "knx-go" project site (<https://github.com/vapourismo/knx-go>).
 
 ## Configuration
 
-This is a sample config for the plugin.
-
 ```toml @sample.conf
 # Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
 [[inputs.knx_listener]]
 
@@ -1,6 +1,7 @@
 # Kubernetes Inventory Input Plugin
 
-This plugin generates metrics derived from the state of the following Kubernetes resources:
+This plugin generates metrics derived from the state of the following Kubernetes
+resources:
 
 - daemonsets
 - deployments
@@ -86,7 +87,13 @@ avoid cardinality issues:
 
 ## Kubernetes Permissions
 
-If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to list "persistentvolumes" and "nodes". You will then need to make an [aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.
+If using [RBAC authorization][rbac], you will need to create a cluster role to
+list "persistentvolumes" and "nodes". You will then need to make an [aggregated
+ClusterRole][agg] that will eventually be bound to a user or group.
 
+[rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+
+[agg]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles
+
 ```yaml
 ---
@@ -115,7 +122,8 @@ aggregationRule:
 rules: [] # Rules are automatically filled in by the controller manager.
 ```
 
-Bind the newly created aggregated ClusterRole with the following config file, updating the subjects as needed.
+Bind the newly created aggregated ClusterRole with the following config file,
+updating the subjects as needed.
 
 ```yaml
 ---
@@ -135,8 +143,9 @@ subjects:
 
 ## Quickstart in k3s
 
-When monitoring [k3s](https://k3s.io) server instances one can re-use already generated administration token.
-This is less secure than using the more restrictive dedicated telegraf user but more convienient to set up.
+When monitoring [k3s](https://k3s.io) server instances one can re-use already
+generated administration token. This is less secure than using the more
+restrictive dedicated telegraf user but more convienient to set up.
 
 ```console
 # an empty token will make telegraf use the client cert/key files instead
@@ -294,7 +303,8 @@ tls_key = "/run/telegraf-kubernetes-key"
 
 ### pv `phase_type`
 
-The persistentvolume "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
+The persistentvolume "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
 
 | Tag value | Corresponding field value |
 | --------- | ------------------------- |
@@ -307,7 +317,8 @@ The persistentvolume "phase" is saved in the `phase` tag with a correlated numer
 
 ### pvc `phase_type`
 
-The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
+The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
 
 | Tag value | Corresponding field value |
 | --------- | ------------------------- |
 
@@ -6,13 +6,15 @@ is running as part of a `daemonset` within a kubernetes installation. This
 means that telegraf is running on every node within the cluster. Therefore, you
 should configure this plugin to talk to its locally running kubelet.
 
-To find the ip address of the host you are running on you can issue a command like the following:
+To find the ip address of the host you are running on you can issue a command
+like the following:
 
 ```sh
 curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
 ```
 
-In this case we used the downward API to pass in the `$POD_NAMESPACE` and `$HOSTNAME` is the hostname of the pod which is set by the kubernetes API.
+In this case we used the downward API to pass in the `$POD_NAMESPACE` and
+`$HOSTNAME` is the hostname of the pod which is set by the kubernetes API.
 
 Kubernetes is a fast moving project, with a new minor release every 3 months. As
 such, we will aim to maintain support only for versions that are supported by
@@ -65,8 +67,8 @@ avoid cardinality issues:
 
 ## DaemonSet
 
-For recommendations on running Telegraf as a DaemonSet see [Monitoring Kubernetes
-Architecture][k8s-telegraf] or view the Helm charts:
+For recommendations on running Telegraf as a DaemonSet see [Monitoring
+Kubernetes Architecture][k8s-telegraf] or view the Helm charts:
 
 - [Telegraf][]
 - [InfluxDB][]
 
@@ -1,9 +1,11 @@
 # Arista LANZ Consumer Input Plugin
 
-This plugin provides a consumer for use with Arista Networks’ Latency Analyzer (LANZ)
+This plugin provides a consumer for use with Arista Networks’ Latency Analyzer
+(LANZ)
 
 Metrics are read from a stream of data via TCP through port 50001 on the
-switches management IP. The data is in Protobuffers format. For more information on Arista LANZ
+switches management IP. The data is in Protobuffers format. For more information
+on Arista LANZ
 
 - <https://www.arista.com/en/um-eos/eos-latency-analyzer-lanz>
@@ -13,11 +15,6 @@ This plugin uses Arista's sdk.
 
 ## Configuration
 
-You will need to configure LANZ and enable streaming LANZ data.
-
-- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
-- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>
-
 ```toml @sample.conf
 # Read metrics off Arista LANZ, via socket
 [[inputs.lanz]]
@@ -28,9 +25,15 @@ You will need to configure LANZ and enable streaming LANZ data.
 ]
 ```
 
+You will need to configure LANZ and enable streaming LANZ data.
+
+- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
+- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>
+
 ## Metrics
 
-For more details on the metrics see <https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>
+For more details on the metrics see
+<https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>
 
 - lanz_congestion_record:
   - tags:
 
@@ -1,6 +1,8 @@
 # LeoFS Input Plugin
 
-The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using SNMP. See [LeoFS Documentation / System Administration / System Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
+The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
+SNMP. See [LeoFS Documentation / System Administration / System
+Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
 
 ## Configuration
 
@@ -1,6 +1,8 @@
 # Linux Sysctl FS Input Plugin
 
-The linux_sysctl_fs input provides Linux system level file metrics. The documentation on these fields can be found at <https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.
+The linux_sysctl_fs input provides Linux system level file metrics. The
+documentation on these fields can be found at
+<https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.
 
 Example output:
 
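The linux_sysctl_fs README above covers the `/proc/sys/fs` counters documented in `sysctl/fs.txt`. For instance, `file-nr` holds three whitespace-separated values (allocated file handles, unused handles, and the system-wide maximum), which can be read directly (a sketch, Linux-only):

```shell
# /proc/sys/fs/file-nr: allocated handles, unused handles, system-wide maximum.
read -r allocated unused maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated unused=$unused maximum=$maximum"
```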
@@ -1,6 +1,7 @@
 # Logparser Input Plugin
 
-## Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the [`grok` data format][grok parser]
+**Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the
+[`grok` data format][grok parser]**
 
 The `logparser` plugin streams and parses the given logfiles. Currently it
 has the capability of parsing "grok" patterns from logfiles, which also supports
 
@@ -1,7 +1,7 @@
 # Logstash Input Plugin
 
-This plugin reads metrics exposed by
-[Logstash Monitoring API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
+This plugin reads metrics exposed by [Logstash Monitoring
+API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
 
 Logstash 5 and later is supported.
 
@@ -43,7 +43,8 @@ Logstash 5 and later is supported.
 
 ## Metrics
 
-Additional plugin stats may be collected (because logstash doesn't consistently expose all stats)
+Additional plugin stats may be collected (because logstash doesn't consistently
+expose all stats)
 
 - logstash_jvm
   - tags:
 
@@ -1,9 +1,10 @@
 # Lustre Input Plugin
 
-The [Lustre][]® file system is an open-source, parallel file system that supports
-many requirements of leadership class HPC simulation environments.
+The [Lustre][]® file system is an open-source, parallel file system that
+supports many requirements of leadership class HPC simulation environments.
 
-This plugin monitors the Lustre file system using its entries in the proc filesystem.
+This plugin monitors the Lustre file system using its entries in the proc
+filesystem.
 
 ## Configuration
 
@@ -28,7 +29,8 @@ This plugin monitors the Lustre file system using its entries in the proc filesy
 
 ## Metrics
 
-From `/proc/fs/lustre/obdfilter/*/stats` and `/proc/fs/lustre/osd-ldiskfs/*/stats`:
+From `/proc/fs/lustre/obdfilter/*/stats` and
+`/proc/fs/lustre/osd-ldiskfs/*/stats`:
 
 - lustre2
   - tags:
 
@@ -5,9 +5,6 @@ physical volumes, volume groups, and logical volumes.
 
 ## Configuration
 
-The `lvm` command requires elevated permissions. If the user has configured
-sudo with the ability to run these commands, then set the `use_sudo` to true.
-
 ```toml @sample.conf
 # Read metrics about LVM physical volumes, volume groups, logical volumes.
 [[inputs.lvm]]
@@ -15,6 +12,9 @@ sudo with the ability to run these commands, then set the `use_sudo` to true.
 use_sudo = false
 ```
 
+The `lvm` command requires elevated permissions. If the user has configured sudo
+with the ability to run these commands, then set the `use_sudo` to true.
+
 ### Using sudo
 
 If your account does not already have the ability to run commands