chore: Fix readme linter errors for input plugins A-D (#10964)

reimda 2022-06-07 15:10:18 -06:00 committed by GitHub
parent 54552ff43a
commit 1b1482b5eb
40 changed files with 431 additions and 271 deletions

diff --git a/plugins/inputs/activemq/README.md b/plugins/inputs/activemq/README.md

@@ -1,6 +1,7 @@
 # ActiveMQ Input Plugin

-This plugin gather queues, topics & subscribers metrics using ActiveMQ Console API.
+This plugin gather queues, topics & subscribers metrics using ActiveMQ Console
+API.

 ## Configuration
@@ -35,7 +36,8 @@ This plugin gather queues, topics & subscribers metrics using ActiveMQ Console A
 ## Metrics

-Every effort was made to preserve the names based on the XML response from the ActiveMQ Console API.
+Every effort was made to preserve the names based on the XML response from the
+ActiveMQ Console API.

 - activemq_queues
   - tags:
@@ -74,7 +76,7 @@ Every effort was made to preserve the names based on the XML response from the A
     - enqueue_counter
     - dequeue_counter

-### Example Output
+## Example Output

 ```shell
 activemq_queues,name=sandra,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000

diff --git a/plugins/inputs/aerospike/README.md b/plugins/inputs/aerospike/README.md

@@ -1,11 +1,13 @@
 # Aerospike Input Plugin

-The aerospike plugin queries aerospike server(s) and get node statistics & stats for
-all the configured namespaces.
+The aerospike plugin queries aerospike server(s) and get node statistics & stats
+for all the configured namespaces.

-For what the measurements mean, please consult the [Aerospike Metrics Reference Docs](http://www.aerospike.com/docs/reference/metrics).
+For what the measurements mean, please consult the [Aerospike Metrics Reference
+Docs](http://www.aerospike.com/docs/reference/metrics).

-The metric names, to make it less complicated in querying, have replaced all `-` with `_` as Aerospike metrics come in both forms (no idea why).
+The metric names, to make it less complicated in querying, have replaced all `-`
+with `_` as Aerospike metrics come in both forms (no idea why).

 All metrics are attempted to be cast to integers, then booleans, then strings.
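
The two normalization rules above are simple to state precisely. Here is a minimal Go sketch of them (helper names are illustrative, not the plugin's actual internals): every `-` in a metric name becomes `_`, and each raw value is tried as an integer, then as a boolean, then kept as a string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeName applies the documented rename: all "-" become "_".
func normalizeName(name string) string {
	return strings.ReplaceAll(name, "-", "_")
}

// castValue tries integer, then boolean, then falls back to string.
func castValue(raw string) interface{} {
	if i, err := strconv.ParseInt(raw, 10, 64); err == nil {
		return i
	}
	if b, err := strconv.ParseBool(raw); err == nil {
		return b
	}
	return raw
}

func main() {
	fmt.Println(normalizeName("client-connections"), castValue("42"))     // client_connections 42
	fmt.Println(normalizeName("stop-writes"), castValue("false"))         // stop_writes false
	fmt.Println(normalizeName("node-name"), castValue("BB9020011AC4202")) // node_name BB9020011AC4202
}
```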
@@ -55,7 +57,7 @@ All metrics are attempted to be cast to integers, then booleans, then strings.
   # num_histogram_buckets = 100 # default: 10
 ```

-## Measurements
+## Metrics

 The aerospike metrics are under a few measurement names:
@@ -90,8 +92,9 @@ are available from the aerospike `sets/<namespace_name>/<set_name>` command.
 ...
 ```

-***aerospike_histogram_ttl***: These are aerospike ttl hisogram measurements, which
-is available from the aerospike `histogram:namespace=<namespace_name>;[set=<set_name>;]type=ttl` command.
+***aerospike_histogram_ttl***: These are aerospike ttl hisogram measurements,
+which is available from the aerospike
+`histogram:namespace=<namespace_name>;[set=<set_name>;]type=ttl` command.

 ```text
 telnet localhost 3003
@@ -100,7 +103,10 @@ is available from the aerospike `histogram:namespace=<namespace_name>;[set=<set_
 ...
 ```

-***aerospike_histogram_object_size_linear***: These are aerospike object size linear histogram measurements, which is available from the aerospike `histogram:namespace=<namespace_name>;[set=<set_name>;]type=object_size_linear` command.
+***aerospike_histogram_object_size_linear***: These are aerospike object size
+linear histogram measurements, which is available from the aerospike
+`histogram:namespace=<namespace_name>;[set=<set_name>;]type=object_size_linear`
+command.

 ```text
 telnet localhost 3003

diff --git a/plugins/inputs/aliyuncms/README.md b/plugins/inputs/aliyuncms/README.md

@@ -1,13 +1,15 @@
 # Alibaba (Aliyun) CloudMonitor Service Statistics Input Plugin

-Here and after we use `Aliyun` instead `Alibaba` as it is default naming across web console and docs.
+Here and after we use `Aliyun` instead `Alibaba` as it is default naming across
+web console and docs.

 This plugin will pull Metric Statistics from Aliyun CMS.

 ## Aliyun Authentication

-This plugin uses an [AccessKey](https://www.alibabacloud.com/help/doc-detail/53045.htm?spm=a2c63.p38356.b99.127.5cba21fdt5MJKr&parentId=28572) credential for Authentication with the Aliyun OpenAPI endpoint.
-In the following order the plugin will attempt to authenticate.
+This plugin uses an [AccessKey][1] credential for Authentication with the Aliyun
+OpenAPI endpoint. In the following order the plugin will attempt to
+authenticate.

 1. Ram RoleARN credential if `access_key_id`, `access_key_secret`, `role_arn`, `role_session_name` is specified
 2. AccessKey STS token credential if `access_key_id`, `access_key_secret`, `access_key_sts_token` is specified
@@ -17,6 +19,8 @@ In the following order the plugin will attempt to authenticate.
 6. Environment variables credential
 7. Instance metadata credential

+[1]: https://www.alibabacloud.com/help/doc-detail/53045.htm?spm=a2c63.p38356.b99.127.5cba21fdt5MJKr&parentId=28572
+
 ## Configuration

 ```toml @sample.conf
@@ -124,7 +128,7 @@ In the following order the plugin will attempt to authenticate.

 ### Requirements and Terminology

-Plugin Configuration utilizes [preset metric items references](https://www.alibabacloud.com/help/doc-detail/28619.htm?spm=a2c63.p38356.a3.2.389f233d0kPJn0)
+Plugin Configuration utilizes [preset metric items references][2]

 - `discovery_region` must be a valid Aliyun [Region](https://www.alibabacloud.com/help/doc-detail/40654.htm) value
 - `period` must be a valid duration value
@@ -132,10 +136,13 @@ Plugin Configuration utilizes [preset metric items references](https://www.aliba
 - `names` must be preset metric names
 - `dimensions` must be preset dimension values

-## Measurements & Fields
+[2]: https://www.alibabacloud.com/help/doc-detail/28619.htm?spm=a2c63.p38356.a3.2.389f233d0kPJn0

-Each Aliyun CMS Project monitored records a measurement with fields for each available Metric Statistic
-Project and Metrics are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case)
+## Metrics
+
+Each Aliyun CMS Project monitored records a measurement with fields for each
+available Metric Statistic Project and Metrics are represented in [snake
+case](https://en.wikipedia.org/wiki/Snake_case)

 - aliyuncms_{project}
   - {metric}_average (metric Average value)

diff --git a/plugins/inputs/amd_rocm_smi/README.md b/plugins/inputs/amd_rocm_smi/README.md

@@ -1,6 +1,9 @@
 # AMD ROCm System Management Interface (SMI) Input Plugin

-This plugin uses a query on the [`rocm-smi`](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools) binary to pull GPU stats including memory and GPU usage, temperatures and other.
+This plugin uses a query on the [`rocm-smi`][1] binary to pull GPU stats
+including memory and GPU usage, temperatures and other.
+
+[1]: https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools

 ## Configuration
@@ -47,9 +50,10 @@ Linux:
 rocm-smi -o -l -m -M -g -c -t -u -i -f -p -P -s -S -v --showreplaycount --showpids --showdriverversion --showmemvendor --showfwinfo --showproductname --showserial --showuniqueid --showbus --showpendingpages --showpagesinfo --showretiredpages --showunreservablepages --showmemuse --showvoltage --showtopo --showtopoweight --showtopohops --showtopotype --showtoponuma --showmeminfo all --json
 ```

-Please include the output of this command if opening a GitHub issue, together with ROCm version.
+Please include the output of this command if opening a GitHub issue, together
+with ROCm version.

-### Example Output
+## Example Output

 ```shell
 amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=28,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572551000000000
@@ -57,10 +61,14 @@ amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=car
 amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572749000000000
 ```

-### Limitations and notices
+## Limitations and notices

-Please notice that this plugin has been developed and tested on a limited number of versions and small set of GPUs. Currently the latest ROCm version tested is 4.3.0.
-Notice that depending on the device and driver versions the amount of information provided by `rocm-smi` can vary so that some fields would start/stop appearing in the metrics upon updates.
-The `rocm-smi` JSON output is not perfectly homogeneous and is possibly changing in the future, hence parsing and unmarshaling can start failing upon updating ROCm.
+Please notice that this plugin has been developed and tested on a limited number
+of versions and small set of GPUs. Currently the latest ROCm version tested is
+4.3.0. Notice that depending on the device and driver versions the amount of
+information provided by `rocm-smi` can vary so that some fields would start/stop
+appearing in the metrics upon updates. The `rocm-smi` JSON output is not
+perfectly homogeneous and is possibly changing in the future, hence parsing and
+unmarshaling can start failing upon updating ROCm.

 Inspired by the current state of the art of the `nvidia-smi` plugin.

diff --git a/plugins/inputs/amqp_consumer/README.md b/plugins/inputs/amqp_consumer/README.md

@@ -1,10 +1,13 @@
 # AMQP Consumer Input Plugin

-This plugin provides a consumer for use with AMQP 0-9-1, a prominent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
+This plugin provides a consumer for use with AMQP 0-9-1, a prominent
+implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).

-Metrics are read from a topic exchange using the configured queue and binding_key.
+Metrics are read from a topic exchange using the configured queue and
+binding_key.

-Message payload should be formatted in one of the [Telegraf Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
+Message payload should be formatted in one of the [Telegraf Data
+Formats](../../../docs/DATA_FORMATS_INPUT.md).

 For an introduction to AMQP see:
@@ -13,8 +16,6 @@ For an introduction to AMQP see:

 ## Configuration

-The following defaults are known to work with RabbitMQ:
-
 ```toml @sample.conf
 # AMQP consumer plugin
 [[inputs.amqp_consumer]]

diff --git a/plugins/inputs/apache/README.md b/plugins/inputs/apache/README.md

@@ -1,8 +1,15 @@
 # Apache Input Plugin

-The Apache plugin collects server performance information using the [`mod_status`](https://httpd.apache.org/docs/2.4/mod/mod_status.html) module of the [Apache HTTP Server](https://httpd.apache.org/).
+The Apache plugin collects server performance information using the
+[`mod_status`](https://httpd.apache.org/docs/2.4/mod/mod_status.html) module of
+the [Apache HTTP Server](https://httpd.apache.org/).

-Typically, the `mod_status` module is configured to expose a page at the `/server-status?auto` location of the Apache server. The [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) option must be enabled in order to collect all available fields. For information about how to configure your server reference the [module documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).
+Typically, the `mod_status` module is configured to expose a page at the
+`/server-status?auto` location of the Apache server. The
+[ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus)
+option must be enabled in order to collect all available fields. For
+information about how to configure your server reference the [module
+documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).

 ## Configuration
@@ -29,7 +36,7 @@ Typically, the `mod_status` module is configured to expose a page at the `/serve
   # insecure_skip_verify = false
 ```

-## Measurements & Fields
+## Metrics

 - apache
   - BusyWorkers (float)
@ -56,7 +63,8 @@ Typically, the `mod_status` module is configured to expose a page at the `/serve
- TotalkBytes (float) - TotalkBytes (float)
- Uptime (float) - Uptime (float)
The following fields are collected from the `Scoreboard`, and represent the number of requests in the given state: The following fields are collected from the `Scoreboard`, and represent the
number of requests in the given state:
- apache - apache
- scboard_closing (float) - scboard_closing (float)
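
For orientation, the `Scoreboard` value in `mod_status` output is a string with one character per worker slot, and the `scboard_*` fields are per-character tallies. A rough Go sketch of that counting (the state letters follow the mod_status documentation; the exact field list here is illustrative):

```go
package main

import "fmt"

// tallyScoreboard counts each worker-slot character into its state bucket.
func tallyScoreboard(scoreboard string) map[string]float64 {
	states := map[rune]string{
		'_': "scboard_waiting", 'S': "scboard_starting", 'R': "scboard_reading",
		'W': "scboard_sending", 'K': "scboard_keepalive", 'D': "scboard_dnslookup",
		'C': "scboard_closing", 'L': "scboard_logging", 'G': "scboard_finishing",
		'I': "scboard_idle_cleanup", '.': "scboard_open",
	}
	fields := make(map[string]float64)
	for _, name := range states {
		fields[name] = 0 // report zero for states with no workers
	}
	for _, c := range scoreboard {
		if name, ok := states[c]; ok {
			fields[name]++
		}
	}
	return fields
}

func main() {
	fmt.Println(tallyScoreboard("W_____KK...."))
}
```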

diff --git a/plugins/inputs/apcupsd/README.md b/plugins/inputs/apcupsd/README.md

@@ -44,7 +44,7 @@ apcupsd should be installed and it's daemon should be running.
   - nominal_power
   - firmware

-## Example output
+## Example Output

 ```shell
 apcupsd,serial=AS1231515,status=ONLINE,ups_name=name1 time_on_battery=0,load_percent=9.7,time_left_minutes=98,output_voltage=230.4,internal_temp=32.4,battery_voltage=27.4,input_frequency=50.2,input_voltage=230.4,battery_charge_percent=100,status_flags=8i 1490035922000000000

diff --git a/plugins/inputs/aurora/README.md b/plugins/inputs/aurora/README.md

@@ -1,8 +1,10 @@
 # Aurora Input Plugin

-The Aurora Input Plugin gathers metrics from [Apache Aurora](https://aurora.apache.org/) schedulers.
+The Aurora Input Plugin gathers metrics from [Apache
+Aurora](https://aurora.apache.org/) schedulers.

-For monitoring recommendations reference [Monitoring your Aurora cluster](https://aurora.apache.org/documentation/latest/operations/monitoring/)
+For monitoring recommendations reference [Monitoring your Aurora
+cluster](https://aurora.apache.org/documentation/latest/operations/monitoring/)

 ## Configuration

diff --git a/plugins/inputs/bcache/README.md b/plugins/inputs/bcache/README.md

@@ -2,7 +2,7 @@

 Get bcache stat from stats_total directory and dirty_data file.

-## Measurements
+## Metrics

 Meta:
@@ -53,8 +53,6 @@ cache_readaheads

 ## Configuration

-Using this configuration:
-
 ```toml @sample.conf
 # Read metrics of bcache from stats_total and dirty_data
 [[inputs.bcache]]
@@ -68,14 +66,12 @@ Using this configuration:
   bcacheDevs = ["bcache0"]
 ```

-When run with:
+## Example Output

 ```shell
 ./telegraf --config telegraf.conf --input-filter bcache --test
 ```

-It produces:
-
 ```shell
 * Plugin: bcache, Collection 1
 > [backing_dev="md10" bcache_dev="bcache0"] bcache_dirty_data value=11639194

diff --git a/plugins/inputs/beanstalkd/README.md b/plugins/inputs/beanstalkd/README.md

@@ -1,6 +1,7 @@
 # Beanstalkd Input Plugin

-The `beanstalkd` plugin collects server stats as well as tube stats (reported by `stats` and `stats-tube` commands respectively).
+The `beanstalkd` plugin collects server stats as well as tube stats (reported by
+`stats` and `stats-tube` commands respectively).

 ## Configuration
@@ -17,7 +18,9 @@ The `beanstalkd` plugin collects server stats as well as tube stats (reported by

 ## Metrics

-Please see the [Beanstalk Protocol doc](https://raw.githubusercontent.com/kr/beanstalkd/master/doc/protocol.txt) for detailed explanation of `stats` and `stats-tube` commands output.
+Please see the [Beanstalk Protocol
+doc](https://raw.githubusercontent.com/kr/beanstalkd/master/doc/protocol.txt)
+for detailed explanation of `stats` and `stats-tube` commands output.

 `beanstalkd_overview` statistical information about the system as a whole
@@ -93,7 +96,7 @@ Please see the [Beanstalk Protocol doc](https://raw.githubusercontent.com/kr/bea
   - server (address taken from config)
   - version

-## Example
+## Example Output

 ```shell
 beanstalkd_overview,host=server.local,hostname=a2ab22ed12e0,id=232485800aa11b24,server=localhost:11300,version=1.10 cmd_stats_tube=29482i,current_jobs_delayed=0i,current_jobs_urgent=6i,cmd_kick=0i,cmd_stats=7378i,cmd_stats_job=0i,current_waiting=0i,max_job_size=65535i,pid=6i,cmd_bury=0i,cmd_reserve_with_timeout=0i,cmd_touch=0i,current_connections=1i,current_jobs_ready=6i,current_producers=0i,cmd_delete=0i,cmd_list_tubes=7369i,cmd_peek_ready=0i,cmd_put=6i,cmd_use=3i,cmd_watch=0i,current_jobs_reserved=0i,rusage_stime=6.07,cmd_list_tubes_watched=0i,cmd_pause_tube=0i,total_jobs=6i,binlog_records_migrated=0i,cmd_list_tube_used=0i,cmd_peek_delayed=0i,cmd_release=0i,current_jobs_buried=0i,job_timeouts=0i,binlog_current_index=0i,binlog_max_size=10485760i,total_connections=7378i,cmd_peek_buried=0i,cmd_reserve=0i,current_tubes=4i,binlog_records_written=0i,cmd_peek=0i,rusage_utime=1.13,uptime=7099i,binlog_oldest_index=0i,current_workers=0i,cmd_ignore=0i 1528801650000000000

diff --git a/plugins/inputs/beat/README.md b/plugins/inputs/beat/README.md

@@ -41,7 +41,7 @@ known to work with Filebeat and Kafkabeat.
   # insecure_skip_verify = false
 ```

-## Measurements & Fields
+## Metrics

 - **beat**
   - Fields:
@@ -135,7 +135,7 @@ known to work with Filebeat and Kafkabeat.
   - beat_name
   - beat_version

-## Example
+## Example Output

 ```shell
 $ telegraf --input-filter beat --test

diff --git a/plugins/inputs/bind/README.md b/plugins/inputs/bind/README.md

@@ -4,15 +4,16 @@ This plugin decodes the JSON or XML statistics provided by BIND 9 nameservers.

 ## XML Statistics Channel

-Version 2 statistics (BIND 9.6 - 9.9) and version 3 statistics (BIND 9.9+) are supported. Note that
-for BIND 9.9 to support version 3 statistics, it must be built with the `--enable-newstats` compile
-flag, and it must be specifically requested via the correct URL. Version 3 statistics are the
-default (and only) XML format in BIND 9.10+.
+Version 2 statistics (BIND 9.6 - 9.9) and version 3 statistics (BIND 9.9+) are
+supported. Note that for BIND 9.9 to support version 3 statistics, it must be
+built with the `--enable-newstats` compile flag, and it must be specifically
+requested via the correct URL. Version 3 statistics are the default (and only)
+XML format in BIND 9.10+.

 ## JSON Statistics Channel

-JSON statistics schema version 1 (BIND 9.10+) is supported. As of writing, some distros still do
-not enable support for JSON statistics in their BIND packages.
+JSON statistics schema version 1 (BIND 9.10+) is supported. As of writing, some
+distros still do not enable support for JSON statistics in their BIND packages.

 ## Configuration
@@ -35,8 +36,8 @@ not enable support for JSON statistics in their BIND packages.
 - **gather_views** bool: Report per-view query statistics.
 - **timeout** Timeout for http requests made by bind nameserver (example: "4s").

-The following table summarizes the URL formats which should be used, depending on your BIND
-version and configured statistics channel.
+The following table summarizes the URL formats which should be used, depending
+on your BIND version and configured statistics channel.

 | BIND Version | Statistics Format | Example URL                   |
 | ------------ | ----------------- | ----------------------------- |
@@ -47,7 +48,8 @@ version and configured statistics channel.

 ### Configuration of BIND Daemon

-Add the following to your named.conf if running Telegraf on the same host as the BIND daemon:
+Add the following to your named.conf if running Telegraf on the same host as the
+BIND daemon:

 ```json
 statistics-channels {
@@ -55,12 +57,12 @@ statistics-channels {
 };
 ```

-Alternatively, specify a wildcard address (e.g., 0.0.0.0) or specific IP address of an interface to
-configure the BIND daemon to listen on that address. Note that you should secure the statistics
-channel with an ACL if it is publicly reachable. Consult the BIND Administrator Reference Manual
-for more information.
+Alternatively, specify a wildcard address (e.g., 0.0.0.0) or specific IP address
+of an interface to configure the BIND daemon to listen on that address. Note
+that you should secure the statistics channel with an ACL if it is publicly
+reachable. Consult the BIND Administrator Reference Manual for more information.

-## Measurements & Fields
+## Metrics

 - bind_counter
   - name=value (multiple)
@@ -89,8 +91,8 @@ for more information.

 ## Sample Queries

-These are some useful queries (to generate dashboards or other) to run against data from this
-plugin:
+These are some useful queries (to generate dashboards or other) to run against
+data from this plugin:

 ```sql
 SELECT non_negative_derivative(mean(/^A$|^PTR$/), 5m) FROM bind_counter \

diff --git a/plugins/inputs/bond/README.md b/plugins/inputs/bond/README.md

@@ -27,7 +27,7 @@ The plugin collects these metrics from `/proc/net/bonding/*` files.
   # collect_sys_details = false
 ```

-## Measurements & Fields
+## Metrics

 - bond
   - active_slave (for active-backup mode)
@@ -75,7 +75,7 @@ The plugin collects these metrics from `/proc/net/bonding/*` files.
 - bond
   - mode

-## Example output
+## Example Output

 Configuration:

diff --git a/plugins/inputs/burrow/README.md b/plugins/inputs/burrow/README.md

@@ -1,7 +1,8 @@
 # Burrow Kafka Consumer Lag Checking Input Plugin

-Collect Kafka topic, consumer and partition status
-via [Burrow](https://github.com/linkedin/Burrow) HTTP [API](https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint).
+Collect Kafka topic, consumer and partition status via
+[Burrow](https://github.com/linkedin/Burrow) HTTP
+[API](https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint).

 Supported Burrow version: `1.x`
@@ -62,7 +63,9 @@ Supported Burrow version: `1.x`
 > unknown value will be mapped to 0

-## Fields
+## Metrics
+
+### Fields

 * `burrow_group` (one event per each consumer group)
   * status (string, see Partition Status mappings)
@@ -83,7 +86,7 @@ Supported Burrow version: `1.x`
 * `burrow_topic` (one event per topic offset)
   * offset (int64)

-## Tags
+### Tags

 * `burrow_group`
   * cluster (string)

diff --git a/plugins/inputs/cassandra/README.md b/plugins/inputs/cassandra/README.md

@@ -1,6 +1,8 @@
 # Cassandra Input Plugin

-**Deprecated in version 1.7**: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin with the [cassandra.conf](/plugins/inputs/jolokia2/examples/cassandra.conf) example configuration.
+**Deprecated in version 1.7**: Please use the [jolokia2](../jolokia2/README.md)
+plugin with the [cassandra.conf](../jolokia2/examples/cassandra.conf) example
+configuration.

 ## Plugin arguments
@@ -10,13 +12,20 @@

 ## Description

-The Cassandra plugin collects Cassandra 3 / JVM metrics exposed as MBean's attributes through jolokia REST endpoint. All metrics are collected for each server configured.
+The Cassandra plugin collects Cassandra 3 / JVM metrics exposed as MBean's
+attributes through jolokia REST endpoint. All metrics are collected for each
+server configured.

-See: [https://jolokia.org/](https://jolokia.org/) and [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
+See: [https://jolokia.org/](https://jolokia.org/) and [Cassandra
+Documentation][1]

-## Measurements
+[1]: http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html

-Cassandra plugin produces one or more measurements for each metric configured, adding Server's name as `host` tag. More than one measurement is generated when querying table metrics with a wildcard for the keyspace or table name.
+## Metrics
+
+Cassandra plugin produces one or more measurements for each metric configured,
+adding Server's name as `host` tag. More than one measurement is generated when
+querying table metrics with a wildcard for the keyspace or table name.

 ## Configuration
@@ -43,7 +52,7 @@ Cassandra plugin produces one or more measurements for each metric configured, a
   ]
 ```

-The collected metrics will be:
+## Example Output

 ```shell
 javaMemory,host=myHost,mname=HeapMemoryUsage HeapMemoryUsage_committed=1040187392,HeapMemoryUsage_init=1050673152,HeapMemoryUsage_max=1040187392,HeapMemoryUsage_used=368155000 1459551767230567084
@@ -51,7 +60,8 @@ javaMemory,host=myHost,mname=HeapMemoryUsage HeapMemoryUsage_committed=104018739

 ## Useful Metrics

-Here is a list of metrics that might be useful to monitor your cassandra cluster. This was put together from multiple sources on the web.
+Here is a list of metrics that might be useful to monitor your cassandra
+cluster. This was put together from multiple sources on the web.

 - [How to monitor Cassandra performance metrics](https://www.datadoghq.com/blog/how-to-monitor-cassandra-performance-metrics)
 - [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
@@ -117,7 +127,9 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster

 ### measurement = cassandraTable

-Using wildcards for "keyspace" and "scope" can create a lot of series as metrics will be reported for every table and keyspace including internal system tables. Specify a keyspace name and/or a table name to limit them.
+Using wildcards for "keyspace" and "scope" can create a lot of series as metrics
+will be reported for every table and keyspace including internal system
+tables. Specify a keyspace name and/or a table name to limit them.

 - /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=LiveDiskSpaceUsed
 - /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=TotalDiskSpaceUsed

diff --git a/plugins/inputs/ceph/README.md b/plugins/inputs/ceph/README.md

@@ -1,16 +1,22 @@
 # Ceph Storage Input Plugin

-Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
+Collects performance metrics from the MON and OSD nodes in a Ceph storage
+cluster.

-Ceph has introduced a Telegraf and Influx plugin in the 13.x Mimic release. The Telegraf module sends to a Telegraf configured with a socket_listener. [Learn more in their docs](https://docs.ceph.com/en/latest/mgr/telegraf/)
+Ceph has introduced a Telegraf and Influx plugin in the 13.x Mimic release. The
+Telegraf module sends to a Telegraf configured with a socket_listener. [Learn
+more in their docs](https://docs.ceph.com/en/latest/mgr/telegraf/)

 ## Admin Socket Stats

-This gatherer works by scanning the configured SocketDir for OSD, MON, MDS and RGW socket files. When it finds
-a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs it runs **ceph --admin-daemon $file perf dump**
+This gatherer works by scanning the configured SocketDir for OSD, MON, MDS and
+RGW socket files. When it finds a MON socket, it runs **ceph --admin-daemon
+$file perfcounters_dump**. For OSDs it runs **ceph --admin-daemon $file perf
+dump**

-The resulting JSON is parsed and grouped into collections, based on top-level key. Top-level keys are
-used as collection tags, and all sub-keys are flattened. For example:
+The resulting JSON is parsed and grouped into collections, based on top-level
+key. Top-level keys are used as collection tags, and all sub-keys are
+flattened. For example:

 ```json
 {
@@ -24,7 +30,8 @@ used as collection tags, and all sub-keys are flattened. For example:
 }
 ```

-Would be parsed into the following metrics, all of which would be tagged with collection=paxos:
+Would be parsed into the following metrics, all of which would be tagged with
+collection=paxos:

 - refresh = 9363435
 - refresh_latency.avgcount: 9363435
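
The grouping and flattening described above amount to a recursive walk over the decoded perf dump: the top-level key becomes the `collection` tag and nested keys are joined with dots. A simplified Go sketch of the idea (not the plugin's exact code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten joins nested keys with "." and keeps the numeric leaves.
func flatten(prefix string, v interface{}, out map[string]float64) {
	switch val := v.(type) {
	case map[string]interface{}:
		for k, child := range val {
			key := k
			if prefix != "" {
				key = prefix + "." + k
			}
			flatten(key, child, out)
		}
	case float64: // encoding/json decodes all JSON numbers as float64
		out[prefix] = val
	}
}

func main() {
	raw := []byte(`{"paxos": {"refresh": 9363435, "refresh_latency": {"avgcount": 9363435, "sum": 5378.79}}}`)
	var dump map[string]interface{}
	if err := json.Unmarshal(raw, &dump); err != nil {
		panic(err)
	}
	for collection, counters := range dump { // one collection tag per top-level key
		fields := make(map[string]float64)
		flatten("", counters, fields)
		fmt.Println("collection="+collection, fields)
	}
}
```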
@@ -32,10 +39,11 @@ Would be parsed into the following metrics, all of which would be tagged with co

 ## Cluster Stats

-This gatherer works by invoking ceph commands against the cluster thus only requires the ceph client, valid
-ceph configuration and an access key to function (the ceph_config and ceph_user configuration variables work
-in conjunction to specify these prerequisites). It may be run on any server you wish which has access to
-the cluster. The currently supported commands are:
+This gatherer works by invoking ceph commands against the cluster thus only
+requires the ceph client, valid ceph configuration and an access key to function
+(the ceph_config and ceph_user configuration variables work in conjunction to
+specify these prerequisites). It may be run on any server you wish which has
+access to the cluster. The currently supported commands are:

 - ceph status
 - ceph df
@@ -92,7 +100,8 @@ the cluster. The currently supported commands are:

 ### Admin Socket

-All fields are collected under the **ceph** measurement and stored as float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.
+All fields are collected under the **ceph** measurement and stored as
+float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.

 All admin measurements will have the following tags:
@@ -235,7 +244,7 @@ All admin measurements will have the following tags:
   - recovering_bytes_per_sec (float)
   - recovering_keys_per_sec (float)

-## Example
+## Example Output

 Below is an example of a custer stats:

diff --git a/plugins/inputs/cgroup/README.md b/plugins/inputs/cgroup/README.md

@@ -34,7 +34,7 @@ KEY0 ... VAL0\n
 KEY1 ... VAL1\n
 ```

-## Tags
+## Metrics

 All measurements have the `path` tag.
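
For the `KEY0 ... VAL0` layout shown in the hunk above, each line contributes one field named after its key. A small Go sketch of such a parser, assuming a numeric last column (the path and naming are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseKeyValueFile turns each "KEY ... VAL" line into a named field.
func parseKeyValueFile(path string) (map[string]int64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	fields := make(map[string]int64)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		parts := strings.Fields(scanner.Text())
		if len(parts) < 2 {
			continue
		}
		v, err := strconv.ParseInt(parts[len(parts)-1], 10, 64)
		if err != nil {
			continue // non-numeric values are skipped in this sketch
		}
		fields[parts[0]] = v
	}
	return fields, scanner.Err()
}

func main() {
	fields, err := parseKeyValueFile("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(fields)
}
```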
@@ -57,7 +57,7 @@ All measurements have the `path` tag.
   # files = ["memory.*usage*", "memory.limit_in_bytes"]
 ```

-## Example
+## Example Configurations

 ```toml
 # [[inputs.cgroup]]

diff --git a/plugins/inputs/chrony/README.md b/plugins/inputs/chrony/README.md

@@ -2,54 +2,62 @@

 Get standard chrony metrics, requires chronyc executable.

-Below is the documentation of the various headers returned by `chronyc tracking`.
+Below is the documentation of the various headers returned by `chronyc
+tracking`.

 - Reference ID - This is the refid and name (or IP address) if available, of the
 server to which the computer is currently synchronised. If this is 127.127.1.1
 it means the computer is not synchronised to any external source and that you
-have the local mode operating (via the local command in chronyc (see section local),
-or the local directive in the /etc/chrony.conf file (see section local)).
-- Stratum - The stratum indicates how many hops away from a computer with an attached
-reference clock we are. Such a computer is a stratum-1 computer, so the computer in the
-example is two hops away (i.e. a.b.c is a stratum-2 and is synchronised from a stratum-1).
-- Ref time - This is the time (UTC) at which the last measurement from the reference
-source was processed.
-- System time - In normal operation, chronyd never steps the system clock, because any
-jump in the timescale can have adverse consequences for certain application programs.
-Instead, any error in the system clock is corrected by slightly speeding up or slowing
-down the system clock until the error has been removed, and then returning to the system
-clocks normal speed. A consequence of this is that there will be a period when the
-system clock (as read by other programs using the gettimeofday() system call, or by the
-date command in the shell) will be different from chronyd's estimate of the current true
-time (which it reports to NTP clients when it is operating in server mode). The value
-reported on this line is the difference due to this effect.
+have the local mode operating (via the local command in chronyc (see section
+local), or the local directive in the /etc/chrony.conf file (see section
+local)).
+- Stratum - The stratum indicates how many hops away from a computer with an
+attached reference clock we are. Such a computer is a stratum-1 computer, so
+the computer in the example is two hops away (i.e. a.b.c is a stratum-2 and is
+synchronised from a stratum-1).
+- Ref time - This is the time (UTC) at which the last measurement from the
+reference source was processed.
+- System time - In normal operation, chronyd never steps the system clock,
+because any jump in the timescale can have adverse consequences for certain
+application programs. Instead, any error in the system clock is corrected by
+slightly speeding up or slowing down the system clock until the error has been
+removed, and then returning to the system clocks normal speed. A consequence
+of this is that there will be a period when the system clock (as read by other
+programs using the gettimeofday() system call, or by the date command in the
+shell) will be different from chronyd's estimate of the current true time
+(which it reports to NTP clients when it is operating in server mode). The
+value reported on this line is the difference due to this effect.
 - Last offset - This is the estimated local offset on the last clock update.
 - RMS offset - This is a long-term average of the offset value.
 - Frequency - The frequency is the rate by which the systems clock would be
-wrong if chronyd was not correcting it. It is expressed in ppm (parts per million).
-For example, a value of 1ppm would mean that when the systems clock thinks it has
-advanced 1 second, it has actually advanced by 1.000001 seconds relative to true time.
+wrong if chronyd was not correcting it. It is expressed in ppm (parts per
+million). For example, a value of 1ppm would mean that when the systems
+clock thinks it has advanced 1 second, it has actually advanced by 1.000001
+seconds relative to true time.
 - Residual freq - This shows the residual frequency for the currently selected
-reference source. This reflects any difference between what the measurements from the
-reference source indicate the frequency should be and the frequency currently being used.
-The reason this is not always zero is that a smoothing procedure is applied to the
-frequency. Each time a measurement from the reference source is obtained and a new
-residual frequency computed, the estimated accuracy of this residual is compared with the
-estimated accuracy (see skew next) of the existing frequency value. A weighted average
-is computed for the new frequency, with weights depending on these accuracies. If the
-measurements from the reference source follow a consistent trend, the residual will be
-driven to zero over time.
+reference source. This reflects any difference between what the measurements
+from the reference source indicate the frequency should be and the frequency
+currently being used. The reason this is not always zero is that a smoothing
+procedure is applied to the frequency. Each time a measurement from the
+reference source is obtained and a new residual frequency computed, the
+estimated accuracy of this residual is compared with the estimated accuracy
+(see skew next) of the existing frequency value. A weighted average is
+computed for the new frequency, with weights depending on these accuracies. If
+the measurements from the reference source follow a consistent trend, the
+residual will be driven to zero over time.
 - Skew - This is the estimated error bound on the frequency.
-- Root delay - This is the total of the network path delays to the stratum-1 computer
-from which the computer is ultimately synchronised. In certain extreme situations, this
-value can be negative. (This can arise in a symmetric peer arrangement where the computers
-frequencies are not tracking each other and the network delay is very short relative to the
-turn-around time at each computer.)
-- Root dispersion - This is the total dispersion accumulated through all the computers
-back to the stratum-1 computer from which the computer is ultimately synchronised.
-Dispersion is due to system clock resolution, statistical measurement variations etc.
+- Root delay - This is the total of the network path delays to the stratum-1
+computer from which the computer is ultimately synchronised. In certain
+extreme situations, this value can be negative. (This can arise in a symmetric
+peer arrangement where the computers frequencies are not tracking each other
+and the network delay is very short relative to the turn-around time at each
+computer.)
+- Root dispersion - This is the total dispersion accumulated through all the
+computers back to the stratum-1 computer from which the computer is ultimately
+synchronised. Dispersion is due to system clock resolution, statistical
+measurement variations etc.
 - Leap status - This is the leap status, which can be Normal, Insert second,
 Delete second or Not synchronised.

 ## Configuration
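
All of the headers documented above come from a single `chronyc tracking` invocation, so collection essentially reduces to splitting each output line on its first colon and normalizing the header into a field name. An illustrative Go sketch (not the plugin's code; values are kept as raw strings here):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("chronyc", "tracking").Output()
	if err != nil {
		fmt.Println("chronyc not available:", err)
		return
	}
	fields := make(map[string]string)
	for _, line := range strings.Split(string(out), "\n") {
		parts := strings.SplitN(line, ":", 2) // split on the first colon only
		if len(parts) != 2 {
			continue
		}
		// "System time" -> "system_time", etc.
		key := strings.ReplaceAll(strings.ToLower(strings.TrimSpace(parts[0])), " ", "_")
		fields[key] = strings.TrimSpace(parts[1])
	}
	fmt.Println(fields)
}
```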
@@ -60,7 +68,7 @@ Delete second or Not synchronised.
   # dns_lookup = false
 ```

-## Measurements & Fields
+## Metrics

 - chrony
   - system_time (float, seconds)
@@ -80,7 +88,7 @@ Delete second or Not synchronised.
   - stratum
   - leap_status

-### Example Output
+## Example Output

 ```shell
 $ telegraf --config telegraf.conf --input-filter chrony --test

diff --git a/plugins/inputs/cisco_telemetry_mdt/README.md b/plugins/inputs/cisco_telemetry_mdt/README.md

@@ -1,13 +1,16 @@
 # Cisco Model-Driven Telemetry (MDT) Input Plugin

-Cisco model-driven telemetry (MDT) is an input plugin that consumes
-telemetry data from Cisco IOS XR, IOS XE and NX-OS platforms. It supports TCP & GRPC dialout transports.
-RPC-based transport can utilize TLS for authentication and encryption.
-Telemetry data is expected to be GPB-KV (self-describing-gpb) encoded.
+Cisco model-driven telemetry (MDT) is an input plugin that consumes telemetry
+data from Cisco IOS XR, IOS XE and NX-OS platforms. It supports TCP & GRPC
+dialout transports. RPC-based transport can utilize TLS for authentication and
+encryption. Telemetry data is expected to be GPB-KV (self-describing-gpb)
+encoded.

-The GRPC dialout transport is supported on various IOS XR (64-bit) 6.1.x and later, IOS XE 16.10 and later, as well as NX-OS 7.x and later platforms.
+The GRPC dialout transport is supported on various IOS XR (64-bit) 6.1.x and
+later, IOS XE 16.10 and later, as well as NX-OS 7.x and later platforms.

-The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and later.
+The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and
+later.

 ## Configuration

diff --git a/plugins/inputs/clickhouse/README.md b/plugins/inputs/clickhouse/README.md

@@ -1,6 +1,7 @@
 # ClickHouse Input Plugin

-This plugin gathers the statistic data from [ClickHouse](https://github.com/ClickHouse/ClickHouse) server.
+This plugin gathers the statistic data from
+[ClickHouse](https://github.com/ClickHouse/ClickHouse) server.

 ## Configuration
@@ -184,7 +185,7 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
   - fields:
     - messages_last_10_min - gauge which show how many messages collected

-### Examples
+## Example Output

 ```text
 clickhouse_events,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 read_compressed_bytes=212i,arena_alloc_chunks=35i,function_execute=85i,merge_tree_data_writer_rows=3i,rw_lock_acquired_read_locks=421i,file_open=46i,io_buffer_alloc_bytes=86451985i,inserted_bytes=196i,regexp_created=3i,real_time_microseconds=116832i,query=23i,network_receive_elapsed_microseconds=268i,merge_tree_data_writer_compressed_bytes=1080i,arena_alloc_bytes=212992i,disk_write_elapsed_microseconds=556i,inserted_rows=3i,compressed_read_buffer_bytes=81i,read_buffer_from_file_descriptor_read_bytes=148i,write_buffer_from_file_descriptor_write=47i,merge_tree_data_writer_blocks=3i,soft_page_faults=896i,hard_page_faults=7i,select_query=21i,merge_tree_data_writer_uncompressed_bytes=196i,merge_tree_data_writer_blocks_already_sorted=3i,user_time_microseconds=40196i,compressed_read_buffer_blocks=5i,write_buffer_from_file_descriptor_write_bytes=3246i,io_buffer_allocs=296i,created_write_buffer_ordinary=12i,disk_read_elapsed_microseconds=59347044i,network_send_elapsed_microseconds=1538i,context_lock=1040i,insert_query=1i,system_time_microseconds=14582i,read_buffer_from_file_descriptor_read=3i 1569421000000000000

diff --git a/plugins/inputs/cloud_pubsub/README.md b/plugins/inputs/cloud_pubsub/README.md

@@ -85,7 +85,8 @@ and creates metrics using one of the supported [input data formats][].

 ### Multiple Subscriptions and Topics

 This plugin assumes you have already created a PULL subscription for a given
-PubSub topic. To learn how to do so, see [how to create a subscription][pubsub create sub].
+PubSub topic. To learn how to do so, see [how to create a subscription][pubsub
+create sub].

 Each plugin agent can listen to one subscription at a time, so you will
 need to run multiple instances of the plugin to pull messages from multiple

diff --git a/plugins/inputs/cloud_pubsub_push/README.md b/plugins/inputs/cloud_pubsub_push/README.md

@@ -1,18 +1,20 @@
 # Google Cloud PubSub Push Input Plugin

-The Google Cloud PubSub Push listener is a service input plugin that listens for messages sent via an HTTP POST from [Google Cloud PubSub][pubsub].
-The plugin expects messages in Google's Pub/Sub JSON Format ONLY.
-The intent of the plugin is to allow Telegraf to serve as an endpoint of the Google Pub/Sub 'Push' service.
-Google's PubSub service will **only** send over HTTPS/TLS so this plugin must be behind a valid proxy or must be configured to use TLS.
+The Google Cloud PubSub Push listener is a service input plugin that listens for
+messages sent via an HTTP POST from [Google Cloud PubSub][pubsub]. The plugin
+expects messages in Google's Pub/Sub JSON Format ONLY. The intent of the plugin
+is to allow Telegraf to serve as an endpoint of the Google Pub/Sub 'Push'
+service. Google's PubSub service will **only** send over HTTPS/TLS so this
+plugin must be behind a valid proxy or must be configured to use TLS.

 Enable TLS by specifying the file names of a service TLS certificate and key.

-Enable mutually authenticated TLS and authorize client connections by signing certificate authority by including a list of allowed CA certificate file names in `tls_allowed_cacerts`.
+Enable mutually authenticated TLS and authorize client connections by signing
+certificate authority by including a list of allowed CA certificate file names
+in `tls_allowed_cacerts`.

 ## Configuration

-This is a sample configuration for the plugin.
-
 ```toml @sample.conf
 # Google Cloud Pub/Sub Push HTTP listener
 [[inputs.cloud_pubsub_push]]
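
The mutual TLS arrangement described above maps onto Go's standard `crypto/tls` client-CA mechanism. A generic sketch, with placeholder file names standing in for the `tls_cert`, `tls_key`, and `tls_allowed_cacerts` settings:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Allowed CA list, analogous to tls_allowed_cacerts.
	caPEM, err := os.ReadFile("allowed-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8080",
		TLSConfig: &tls.Config{
			ClientCAs:  pool,
			ClientAuth: tls.RequireAndVerifyClientCert, // mutual TLS
		},
	}
	// The service certificate and key, analogous to tls_cert / tls_key.
	log.Fatal(server.ListenAndServeTLS("service-cert.pem", "service-key.pem"))
}
```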

diff --git a/plugins/inputs/cloudwatch/README.md b/plugins/inputs/cloudwatch/README.md

@@ -116,7 +116,8 @@ API endpoint. In the following order the plugin will attempt to authenticate.

 ## Requirements and Terminology

-Plugin Configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html) and access pattern to allow monitoring of any CloudWatch Metric.
+Plugin Configuration utilizes [CloudWatch concepts][1] and access pattern to
+allow monitoring of any CloudWatch Metric.

 - `region` must be a valid AWS [Region](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchRegions) value
 - `period` must be a valid CloudWatch [Period](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchPeriods) value
@@ -124,9 +125,10 @@ Plugin Configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/A
 - `names` must be valid CloudWatch [Metric](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Metric) names
 - `dimensions` must be valid CloudWatch [Dimension](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Dimension) name/value pairs

-Omitting or specifying a value of `'*'` for a dimension value configures all available metrics that contain a dimension with the specified name
-to be retrieved. If specifying >1 dimension, then the metric must contain *all* the configured dimensions where the the value of the
-wildcard dimension is ignored.
+Omitting or specifying a value of `'*'` for a dimension value configures all
+available metrics that contain a dimension with the specified name to be
+retrieved. If specifying >1 dimension, then the metric must contain *all* the
+configured dimensions where the the value of the wildcard dimension is ignored.

 Example:
@@ -160,20 +162,27 @@ Then 2 metrics will be output:
 - name: `p-example`, availabilityZone: `us-east-1a`
 - name: `p-example`, availabilityZone: `us-east-1b`

-If the `AvailabilityZone` wildcard dimension was omitted, then a single metric (name: `p-example`)
-would be exported containing the aggregate values of the ELB across availability zones.
+If the `AvailabilityZone` wildcard dimension was omitted, then a single metric
+(name: `p-example`) would be exported containing the aggregate values of the ELB
+across availability zones.

-To maximize efficiency and savings, consider making fewer requests by increasing `interval` but keeping `period` at the duration you would like metrics to be reported. The above example will request metrics from Cloudwatch every 5 minutes but will output five metrics timestamped one minute apart.
+To maximize efficiency and savings, consider making fewer requests by increasing
+`interval` but keeping `period` at the duration you would like metrics to be
+reported. The above example will request metrics from Cloudwatch every 5 minutes
+but will output five metrics timestamped one minute apart.
+
+[1]: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html

 ## Restrictions and Limitations

 - CloudWatch metrics are not available instantly via the CloudWatch API. You should adjust your collection `delay` to account for this lag in metrics availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html)
 - CloudWatch API usage incurs cost - see [GetMetricData Pricing](https://aws.amazon.com/cloudwatch/pricing/)

-## Measurements & Fields
+## Metrics

-Each CloudWatch Namespace monitored records a measurement with fields for each available Metric Statistic.
-Namespace and Metrics are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case)
+Each CloudWatch Namespace monitored records a measurement with fields for each
+available Metric Statistic. Namespace and Metrics are represented in [snake
+case](https://en.wikipedia.org/wiki/Snake_case)

 - cloudwatch_{namespace}
   - {metric}_sum (metric Sum value)
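
The snake-case convention above means a CloudWatch name such as `CPUUtilization` becomes the field prefix `cpu_utilization` (so its Sum statistic lands in `cpu_utilization_sum`). One way to express that conversion in Go, as a hedged sketch rather than the plugin's exact implementation:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// snakeCase lower-cases a CamelCase name, inserting "_" at word
// boundaries and keeping acronyms like "CPU" together.
func snakeCase(s string) string {
	var b strings.Builder
	runes := []rune(s)
	for i, r := range runes {
		if unicode.IsUpper(r) {
			// New word on a lower->upper boundary, or before the last
			// capital of an acronym (e.g. the "U" in "CPUUtilization").
			if i > 0 && (unicode.IsLower(runes[i-1]) ||
				(i+1 < len(runes) && unicode.IsLower(runes[i+1]))) {
				b.WriteByte('_')
			}
			b.WriteRune(unicode.ToLower(r))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(snakeCase("CPUUtilization") + "_sum")      // cpu_utilization_sum
	fmt.Println(snakeCase("NetworkPacketsIn") + "_average") // network_packets_in_average
}
```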
@@ -182,10 +191,11 @@ Namespace and Metrics are represented in [snake case](https://en.wikipedia.org/w
   - {metric}_maximum (metric Maximum value)
   - {metric}_sample_count (metric SampleCount value)

-## Tags
+### Tags

-Each measurement is tagged with the following identifiers to uniquely identify the associated metric
-Tag Dimension names are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case)
+Each measurement is tagged with the following identifiers to uniquely identify
+the associated metric Tag Dimension names are represented in [snake
+case](https://en.wikipedia.org/wiki/Snake_case)

 - All measurements have the following tags:
   - region (CloudWatch Region)
@@ -229,7 +239,7 @@ aws cloudwatch get-metric-data \
   ]'
 ```

-## Example
+## Example Output

 ```shell
 $ ./telegraf --config telegraf.conf --input-filter cloudwatch --test

diff --git a/plugins/inputs/conntrack/README.md b/plugins/inputs/conntrack/README.md

@ -37,13 +37,13 @@ For more information on conntrack-tools, see the
dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"] dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
``` ```
## Measurements & Fields ## Metrics
- conntrack
  - ip_conntrack_count (int, count): the number of entries in the conntrack table
  - ip_conntrack_max (int, size): the max capacity of the conntrack table
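For reference, a minimal configuration gathering both fields might look like
the following; the values mirror the defaults shown above:

```toml
[[inputs.conntrack]]
  ## Collect both the current count and the table capacity so that
  ## utilization can be derived downstream.
  files = ["ip_conntrack_count", "ip_conntrack_max"]
  dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
```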
### Tags
This input does not use tags.
@ -1,10 +1,13 @@
# Consul Input Plugin
This plugin will collect statistics about all health checks registered in
Consul. It uses the [Consul API][1] to query the data. It will not report the
[telemetry][2] but Consul can report those stats already using StatsD protocol
if needed.
[1]: https://www.consul.io/docs/agent/http/health.html#health_state
[2]: https://www.consul.io/docs/agent/telemetry.html
## Configuration
@ -81,10 +84,11 @@ report those stats already using StatsD protocol if needed.
- warning (integer)
`passing`, `critical`, and `warning` are integer representations of the health
check state. A value of `1` represents that the status was the state of the
health check at this sample. `status` is a string representation of the same
state.
## Example Output
```shell
consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
@ -1,6 +1,8 @@
# Hashicorp Consul Agent Metrics Input Plugin
This plugin grabs metrics from a Consul agent. Telegraf may be present on every
node and connect to the agent locally. In this case the URL should be something
like `http://127.0.0.1:8500`.
> Tested on Consul 1.10.4.
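A minimal sketch of such a local setup might look like this (the token line is
an assumption for ACL-enabled agents):

```toml
[[inputs.consul_agent]]
  ## Local agent address from the paragraph above
  url = "http://127.0.0.1:8500"
  ## Uncomment if the agent requires an ACL token (assumed setup)
  # token = "your-consul-token"
```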
@ -30,6 +32,7 @@ This plugin grabs metrics from a Consul agent. Telegraf may be present in every
## Metrics
Consul collects various metrics. For more details, please have a look at the
following Consul documentation:
- [https://www.consul.io/api/agent#view-metrics](https://www.consul.io/api/agent#view-metrics)
@ -1,7 +1,8 @@
# Couchbase Input Plugin
Couchbase is a distributed NoSQL database. This plugin gets metrics for each
Couchbase node, as well as detailed metrics for each bucket, for a given
Couchbase server.
## Configuration
@ -31,7 +32,7 @@ This plugin gets metrics for each Couchbase node, as well as detailed metrics fo
# insecure_skip_verify = false
```
## Metrics
### couchbase_node
@ -62,7 +63,8 @@ Default bucket fields:
- data_used (unit: bytes, example: 212179309111.0)
- mem_used (unit: bytes, example: 202156957464.0)
Additional fields that can be configured with the `bucket_stats_included`
option (a selection sketch follows the field list below):
- couch_total_disk_size
- couch_docs_fragmentation
@ -280,7 +282,7 @@ Additional fields that can be configured with the `bucket_stats_included` option
- swap_total
- swap_used
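A hypothetical selection of the fields above might look like this (server
address and credentials are placeholders):

```toml
[[inputs.couchbase]]
  servers = ["http://Administrator:password@localhost:8091"]
  ## Any field names from the list above may be chosen
  bucket_stats_included = ["couch_total_disk_size", "couch_docs_fragmentation"]
```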
## Example Output
```shell
couchbase_node,cluster=http://localhost:8091/,hostname=172.17.0.2:8091 memory_free=7705575424,memory_total=16558182400 1547829754000000000
@ -16,7 +16,7 @@ The CouchDB plugin gathers metrics of CouchDB using [_stats] endpoint.
# basic_password = "p@ssw0rd"
```
## Metrics
Statistics specific to the internals of CouchDB:
@ -65,7 +65,7 @@ httpd statistics:
- server (url of the couchdb _stats endpoint)
## Example Output
### Post Couchdb 2.0
@ -19,7 +19,8 @@ The `csgo` plugin gather metrics from Counter-Strike: Global Offensive servers.
## Metrics
The plugin retrieves the output of the `stats` command that is executed via
rcon.
If no servers are specified, no data will be collected.
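A minimal sketch of a server entry might therefore look like this (address and
password are placeholders):

```toml
[[inputs.csgo]]
  ## Each entry is an [address, rcon_password] pair
  servers = [["192.168.1.1:27015", "rcon_password"]]
```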
@ -1,6 +1,7 @@
# DC/OS Input Plugin
This input plugin gathers metrics from a DC/OS cluster's [metrics
component](https://docs.mesosphere.com/1.10/metrics/).
## Series Cardinality Warning
@ -77,7 +78,7 @@ dcos:adminrouter:ops:system-metrics full
dcos:adminrouter:ops:mesos full
```
Follow the directions to [create a service account and assign permissions][1].
Quick configuration using the Enterprise CLI:
@ -88,6 +89,8 @@ dcos security org users grant telegraf dcos:adminrouter:ops:system-metrics full
dcos security org users grant telegraf dcos:adminrouter:ops:mesos full
```
[1]: https://docs.mesosphere.com/1.10/security/service-auth/custom-service-auth/
### Open Source Authentication
The Open Source DC/OS does not provide service accounts. Instead you can use
@ -110,12 +113,14 @@ cluster secret. This will allow you to set the expiration date manually or
even create a never expiring token. However, if the cluster secret or the
token is compromised it cannot be revoked and may require a full reinstall of
the cluster. For more information on this technique reference
[this blog post][2].
[2]: https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add
## Metrics
Please consult the [Metrics Reference][3] for details about field
interpretation.
- dcos_node
  - tags:
@ -190,7 +195,9 @@ for details about field interpretation.
  - fields:
    - fields are application specific
[3]: https://docs.mesosphere.com/1.10/metrics/reference/
## Example Output
```shell
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/boot filesystem_capacity_free_bytes=918188032i,filesystem_capacity_total_bytes=1063256064i,filesystem_capacity_used_bytes=145068032i,filesystem_inode_free=523958,filesystem_inode_total=524288,filesystem_inode_used=330 1511859222000000000
@ -1,9 +1,17 @@
# Directory Monitor Input Plugin
This plugin monitors a single directory (without looking at sub-directories),
and takes in each file placed in the directory. The plugin will gather all
files in the directory at a configurable interval (`monitor_interval`), and
parse the ones that haven't been picked up yet.
This plugin is intended to read files that are moved or copied to the monitored
directory, and thus files should also not be used by another process or else
they may fail to be gathered. Please be advised that this plugin pulls files
directly after they've been in the directory for the length of the configurable
`directory_duration_threshold`, and thus files should not be written 'live' to
the monitored directory. If you absolutely must write files directly, they must
be guaranteed to finish writing before the `directory_duration_threshold`.
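As a sketch of how these timing options interact (paths are placeholders; the
full option set is in the Configuration section below):

```toml
[[inputs.directory_monitor]]
  directory = "/var/telegraf/incoming"          # hypothetical drop directory
  finished_directory = "/var/telegraf/finished" # hypothetical archive directory
  ## A file is only picked up after sitting untouched for this long,
  ## which is why files must not be written "live" into the directory.
  directory_duration_threshold = "50ms"
  data_format = "influx"
```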
## Configuration
@ -26,12 +26,12 @@ Note that `used_percent` is calculated by doing `used / (used + free)`, _not_
### Docker container
To monitor the Docker engine host from within a container you will need to
mount the host's filesystem into the container and set the `HOST_PROC`
environment variable to the location of the `/proc` filesystem. If desired, you
can also set the `HOST_MOUNT_PREFIX` environment variable to the prefix
containing the `/proc` directory; when present, this variable is stripped from
the reported `path` tag.
```shell
docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs -e HOST_PROC=/hostfs/proc telegraf
@ -72,7 +72,7 @@ It may be desired to use POSIX ACLs to provide additional access:
sudo setfacl -R -m u:telegraf:X /var/lib/docker/volumes/
```
## Example Output
```shell
disk,fstype=hfs,mode=ro,path=/ free=398407520256i,inodes_free=97267461i,inodes_total=121847806i,inodes_used=24580345i,total=499088621568i,used=100418957312i,used_percent=20.131039916242397 1453832006274071563
@ -67,10 +67,12 @@ docker run --privileged -v /:/hostfs:ro -v /run/udev:/run/udev:ro -e HOST_PROC=/
- merged_reads (integer, counter)
- merged_writes (integer, counter)
On Linux these values correspond to the values in [`/proc/diskstats`][1] and
[`/sys/block/<dev>/stat`][2].
[1]: https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats
[2]: https://www.kernel.org/doc/Documentation/block/stat.txt
### `reads` & `writes`
@ -124,13 +126,14 @@ SELECT non_negative_derivative(last("io_time"),1ms) FROM "diskio" WHERE time > n
### Calculate average queue depth
`iops_in_progress` will give you an instantaneous value. This will give you the
average between polling intervals.
```sql
SELECT non_negative_derivative(last("weighted_io_time"),1ms) from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
```
## Example Output
```shell
diskio,name=sda1 merged_reads=0i,reads=2353i,writes=10i,write_bytes=2117632i,write_time=49i,io_time=1271i,weighted_io_time=1350i,read_bytes=31350272i,read_time=1303i,iops_in_progress=0i,merged_writes=0i 1578326400000000000
@ -1,6 +1,7 @@
# Disque Input Plugin
[Disque](https://github.com/antirez/disque) is an ongoing experiment to build a
distributed, in-memory, message broker.
## Configuration
@ -1,10 +1,13 @@
# DMCache Input Plugin
This plugin provides native collection of dmsetup-based statistics for
dm-cache.
This plugin requires sudo, so you should ensure that telegraf is able to
execute sudo without a password.
`sudo /sbin/dmsetup status --target cache` is the full command that telegraf
will run, for debugging purposes.
## Configuration
@ -15,7 +18,7 @@ This plugin requires sudo, that is why you should setup and be sure that the tel
per_device = true
```
## Metrics
- dmcache
  - length
@ -1,6 +1,7 @@
# DNS Query Input Plugin
The DNS plugin gathers DNS query times in milliseconds - like
[Dig](https://en.wikipedia.org/wiki/Dig_\(command\)).
## Configuration
@ -66,7 +67,7 @@ The DNS plugin gathers dns query times in miliseconds - like [Dig](https://en.wi
|22 | BADTRUNC | Bad Truncation |
|23 | BADCOOKIE | Bad/missing Server Cookie |
## Example Output
```shell
dns_query,domain=google.com,rcode=NOERROR,record_type=A,result=success,server=127.0.0.1 rcode_value=0i,result_code=0i,query_time_ms=0.13746 1550020750001000000
@ -3,8 +3,12 @@
The docker plugin uses the Docker Engine API to gather metrics on running
docker containers.
The docker plugin uses the [Official Docker Client][1] to gather stats from the
[Engine API][2].
[1]: https://github.com/moby/moby/tree/master/client
[2]: https://docs.docker.com/engine/api/v1.24/
## Configuration
@ -85,12 +89,18 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.24/)
### Environment Configuration
When using the `"ENV"` endpoint, the connection is configured using the [cli
Docker environment variables][3].
[3]: https://godoc.org/github.com/moby/moby/client#NewEnvClient
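A minimal sketch of this mode (the environment variables themselves, such as
`DOCKER_HOST`, are set outside Telegraf):

```toml
[[inputs.docker]]
  ## Read connection settings from the standard Docker environment
  ## variables (e.g. DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH).
  endpoint = "ENV"
```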
### Security
Giving telegraf access to the Docker daemon expands the [attack surface][4]
that could result in an attacker gaining root access to a machine. This is
especially relevant if the telegraf configuration can be changed by untrusted
users.
[4]: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
### Docker Daemon Permissions
@ -115,14 +125,17 @@ volumes:
### source tag
Selecting the containers measurements can be tricky if you have many containers
with the same name. To alleviate this issue you can set the below value to
`true`:
```toml
source_tag = true
```
This will cause all measurements to have the `source` tag set to the first 12
characters of the container id. The first 12 characters are the common hostname
for containers that have no explicit hostname set, as defined by docker.
### Kubernetes Labels
@ -135,7 +148,8 @@ may prefer to exclude them:
### Docker-compose Labels
Docker-compose will add labels to your containers. You can restrict labels to
selected ones, e.g.
```toml
docker_label_include = [
@ -147,7 +161,7 @@ Docker-compose will add labels to your containers. You can limit restrict labels
]
```
## Metrics
- docker
  - tags:
@ -190,7 +204,8 @@ some storage drivers such as devicemapper.
- total
- used
The above measurements for the devicemapper storage driver can now be found in
the new `docker_devicemapper` measurement.
- docker_devicemapper
  - tags:
@ -355,7 +370,7 @@ status if configured.
- tasks_desired
- tasks_running
## Example Output
```shell
docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce n_containers=6i,n_containers_paused=0i,n_containers_running=1i,n_containers_stopped=5i,n_cpus=2i,n_goroutines=41i,n_images=2i,n_listener_events=0i,n_used_file_descriptors=27i 1524002041000000000
@ -64,14 +64,16 @@ When using the `"ENV"` endpoint, the connection is configured using the
## source tag
Selecting the containers can be tricky if you have many containers with the
same name. To alleviate this issue you can set the below value to `true`:
```toml
source_tag = true
```
This will cause all data points to have the `source` tag set to the first 12
characters of the container id. The first 12 characters are the common hostname
for containers that have no explicit hostname set, as defined by docker.
## Metrics
@ -63,7 +63,7 @@ the [upgrading steps][upgrading].
- mail_read_bytes (integer)
- mail_cache_hits (integer)
## Example Output
```shell
dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
@ -1,24 +1,32 @@
# Data Plane Development Kit (DPDK) Input Plugin
The `dpdk` plugin collects metrics exposed by applications built with [Data
Plane Development Kit](https://www.dpdk.org/) which is an extensive set of open
source libraries designed for accelerating packet processing workloads.
DPDK provides APIs that enable exposing various statistics from the devices
used by DPDK applications and enable exposing KPI metrics directly from
applications. Device statistics include e.g. common statistics available across
NICs, like received and sent packets, received and sent bytes, etc. In addition
to these generic statistics, an extended statistics API is available that
allows providing more detailed, driver-specific metrics that are not available
as generic statistics.
[DPDK Release 20.05](https://doc.dpdk.org/guides/rel_notes/release_20_05.html)
introduced an updated telemetry interface that enables DPDK libraries and
applications to provide their telemetry. This is referred to as the `v2`
version of this socket-based telemetry interface. This release enabled e.g.
reading driver-specific extended stats (`/ethdev/xstats`) via this new
interface.
[DPDK Release 20.11](https://doc.dpdk.org/guides/rel_notes/release_20_11.html)
introduced reading common statistics (`/ethdev/stats`) via the `v2` interface
in addition to the existing `/ethdev/xstats`.
Example usage of the `v2` telemetry interface can be found in the [Telemetry
User Guide](https://doc.dpdk.org/guides/howto/telemetry.html). A variety of
[DPDK Sample Applications](https://doc.dpdk.org/guides/sample_app_ug/index.html)
is also available for users to discover and test the capabilities of DPDK
libraries and to explore the exposed metrics.
> **DPDK Version Info:** This plugin uses this `v2` interface to read telemetry data from applications built with
> `DPDK version >= 20.05`. The default configuration includes reading common statistics from `/ethdev/stats` that is
@ -32,8 +40,6 @@ to discover and test the capabilities of DPDK libraries and to explore the expos
## Configuration
```toml @sample.conf
# Reads metrics from DPDK applications using v2 telemetry interface.
[[inputs.dpdk]]
@ -69,9 +75,13 @@ This plugin offers multiple configuration options, please review examples below
## dpdk_instance = "my-fwd-app"
```
This plugin offers multiple configuration options; please review the examples
below for additional usage information.
### Example: Minimal Configuration for NIC metrics
This configuration allows getting metrics for all devices reported via the
`/ethdev/list` command:
* `/ethdev/stats` - basic device statistics (since `DPDK 20.11`)
* `/ethdev/xstats` - extended device statistics
@ -82,15 +92,17 @@ This configuration allows getting metrics for all devices reported via `/ethdev/
device_types = ["ethdev"]
```
Since this configuration will query `/ethdev/link_status` it's recommended to
increase timeout to `socket_access_timeout = "10s"`.
The [plugin collecting interval](../../../docs/CONFIGURATION.md#input-plugins)
should be adjusted accordingly (e.g. `interval = "30s"`).
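Put together, a sketch honouring both recommendations might look like this:

```toml
[[inputs.dpdk]]
  device_types = ["ethdev"]
  ## Allow slow commands such as /ethdev/link_status to complete
  socket_access_timeout = "10s"
  ## Standard plugin option: gather less often to match the higher timeout
  interval = "30s"
```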
### Example: Excluding NIC link status from being collected
Checking link status depending on underlying implementation may take more time
to complete. This configuration can be used to exclude this telemetry command
to allow faster response for metrics.
```toml
[[inputs.dpdk]]
@ -100,14 +112,17 @@ This configuration can be used to exclude this telemetry command to allow faster
exclude_commands = ["/ethdev/link_status"]
```
A separate plugin instance with higher timeout settings can be used to get
`/ethdev/link_status` independently. Consult [Independent NIC link status
configuration](#example-independent-nic-link-status-configuration) and [Getting
metrics from multiple DPDK instances on same
host](#example-getting-metrics-from-multiple-dpdk-instances-on-same-host)
examples for further details.
### Example: Independent NIC link status configuration
This configuration allows getting `/ethdev/link_status` using separate
configuration, with higher timeout.
```toml
[[inputs.dpdk]]
@ -121,8 +136,9 @@ This configuration allows getting `/ethdev/link_status` using separate configura
### Example: Getting application-specific metrics
This configuration allows reading custom metrics exposed by applications.
Example telemetry command obtained from the [L3 Forwarding with Power
Management Sample Application][sample-app].
```toml
[[inputs.dpdk]]
@ -133,20 +149,25 @@ This configuration allows reading custom metrics exposed by applications. Exampl
exclude_commands = ["/ethdev/link_status"]
```
Command entries specified in `additional_commands` should match the DPDK
command format:
* Command entry format: either `command` or `command,params` for commands that expect parameters, where comma (`,`) separates command from params.
* Command entry length (command with params) should be `< 1024` characters.
* Command length (without params) should be `< 56` characters.
* Commands have to start with `/`.
Providing invalid commands will prevent the plugin from starting. Additional
commands allow duplicates, but they will be removed during execution so each
command will be executed only once during each metric gathering interval.
[sample-app]: https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html
### Example: Getting metrics from multiple DPDK instances on same host
This configuration allows getting metrics from two separate applications
exposing their telemetry interfaces via separate sockets. For each plugin
instance a unique tag `[inputs.dpdk.tags]` allows distinguishing between them.
```toml
# Instance #1 - L3 Forwarding with Power Management Application
@ -173,23 +194,27 @@ via separate sockets. For each plugin instance a unique tag `[inputs.dpdk.tags]`
dpdk_instance = "l2fwd-cat"
```
This utilizes Telegraf's standard capability of [adding custom
tags](../../../docs/CONFIGURATION.md#input-plugins) to an input plugin's
measurements.
## Metrics
The DPDK socket accepts `command,params` requests and returns metric data in
JSON format. All metrics from the DPDK socket are flattened using [Telegraf's
JSON Flattener](../../parsers/json/README.md) and exposed as fields. If a DPDK
response contains no information (is empty or is null) then such a response
will be discarded.
> **NOTE:** Since DPDK allows registering custom metrics in its telemetry framework the JSON response from DPDK > **NOTE:** Since DPDK allows registering custom metrics in its telemetry framework the JSON response from DPDK
> may contain various sets of metrics. While metrics from `/ethdev/stats` should be most stable, the `/ethdev/xstats` > may contain various sets of metrics. While metrics from `/ethdev/stats` should be most stable, the `/ethdev/xstats`
> may contain driver-specific metrics (depending on DPDK application configuration). The application-specific commands > may contain driver-specific metrics (depending on DPDK application configuration). The application-specific commands
> like `/l3fwd-power/stats` can return their own specific set of metrics. > like `/l3fwd-power/stats` can return their own specific set of metrics.
## Example Output
The output consists of the plugin name (`dpdk`) and a set of tags that identify
the querying hierarchy:
```shell
dpdk,host=dpdk-host,dpdk_instance=l3fwd-power,command=/ethdev/stats,params=0 [fields] [timestamp]
@ -212,7 +237,8 @@ When running plugin configuration below...
dpdk_instance = "l3fwd-power"
```
...expected output for `dpdk` plugin instance running on host named
`host=dpdk-host`:
```shell
dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000