fix: markdown: resolve all markdown issues with d-f (#10171)

Joshua Powers 2021-11-24 11:56:26 -07:00 committed by GitHub
parent 6fa29f2966
commit 0c02f245d6
GPG Key ID: 4AEE18F83AFDEB23
25 changed files with 292 additions and 259 deletions


@ -2,7 +2,7 @@
This input plugin gathers metrics from a DC/OS cluster's [metrics component](https://docs.mesosphere.com/1.10/metrics/).
**Series Cardinality Warning**
## Series Cardinality Warning
Depending on the work load of your DC/OS cluster, this plugin can quickly
create a high number of series which, when unchecked, can cause high load on
@ -18,7 +18,8 @@ your database.
- Monitor your databases
[series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
### Configuration:
## Configuration
```toml
[[inputs.dcos]]
## The DC/OS cluster URL.
@ -63,13 +64,14 @@ your database.
# path = ["/var/lib/mesos/slave/slaves/*"]
```
#### Enterprise Authentication
### Enterprise Authentication
When using Enterprise DC/OS, it is recommended to use a service account to
authenticate with the cluster.
The plugin requires the following permissions:
```
```text
dcos:adminrouter:ops:system-metrics full
dcos:adminrouter:ops:mesos full
```
@ -77,14 +79,15 @@ dcos:adminrouter:ops:mesos full
Follow the directions to [create a service account and assign permissions](https://docs.mesosphere.com/1.10/security/service-auth/custom-service-auth/).
Quick configuration using the Enterprise CLI:
```
```text
dcos security org service-accounts keypair telegraf-sa-key.pem telegraf-sa-cert.pem
dcos security org service-accounts create -p telegraf-sa-cert.pem -d "Telegraf DC/OS input plugin" telegraf
dcos security org users grant telegraf dcos:adminrouter:ops:system-metrics full
dcos security org users grant telegraf dcos:adminrouter:ops:mesos full
```
#### Open Source Authentication
### Open Source Authentication
The Open Source DC/OS does not provide service accounts. Instead you can use
one of the following options:
@ -95,7 +98,8 @@ of the following options:
Then `token_file` can be set by using the [dcos cli] to login periodically.
The cli can login for at most XXX days; you will need to ensure the cli
performs a new login before this time expires.
```
```shell
dcos auth login --username foo --password bar
dcos config show core.dcos_acs_token > ~/.dcos/token
```
@ -107,7 +111,7 @@ token is compromised it cannot be revoked and may require a full reinstall of
the cluster. For more information on this technique reference
[this blog post](https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add).
### Metrics:
## Metrics
Please consult the [Metrics Reference](https://docs.mesosphere.com/1.10/metrics/reference/)
for details about field interpretation.
@ -185,9 +189,9 @@ for details about field interpretation.
- fields:
- fields are application specific
### Example Output:
## Example
```
```shell
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/boot filesystem_capacity_free_bytes=918188032i,filesystem_capacity_total_bytes=1063256064i,filesystem_capacity_used_bytes=145068032i,filesystem_inode_free=523958,filesystem_inode_total=524288,filesystem_inode_used=330 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=dummy0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=docker0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000


@ -5,7 +5,7 @@ The plugin will gather all files in the directory at a configurable interval (`m
This plugin is intended to read files that are moved or copied to the monitored directory, and thus files should also not be used by another process or else they may fail to be gathered. Please be advised that this plugin pulls files directly after they've been in the directory for the length of the configurable `directory_duration_threshold`, and thus files should not be written 'live' to the monitored directory. If you absolutely must write files directly, they must be guaranteed to finish writing before the `directory_duration_threshold`.
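The threshold behavior described above can be sketched in Python (an illustrative sketch, not the plugin's actual Go code; `duration_threshold_s` stands in for `directory_duration_threshold`):

```python
import os
import time

def ready_files(directory, duration_threshold_s=0.05):
    """Yield only regular files whose modification time is older than the
    threshold, so files still being written are skipped until they settle."""
    now = time.time()
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) >= duration_threshold_s:
            yield path
```

A file copied into the directory is only picked up once it has sat untouched for the full threshold, which is why writing files "live" into the monitored directory is discouraged.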
### Configuration:
## Configuration
```toml
[[inputs.directory_monitor]]
@ -22,7 +22,7 @@ This plugin is intended to read files that are moved or copied to the monitored
## The amount of time a file is allowed to sit in the directory before it is picked up.
## This time can generally be low but if you choose to have a very large file written to the directory and it's potentially slow,
## set this higher so that the plugin will wait until the file is fully copied to the directory.
# directory_duration_threshold = "50ms"
#
## A list of the only file names to monitor, if necessary. Supports regex. If left blank, all files are ingested.
# files_to_monitor = ["^.*\.csv"]
@ -37,11 +37,11 @@ This plugin is intended to read files that are moved or copied to the monitored
#
## The maximum amount of file paths to queue up for processing at once, before waiting until files are processed to find more files.
## Lowering this value will result in *slightly* less memory use, with a potential sacrifice in speed efficiency, if absolutely necessary.
# file_queue_size = 100000
#
## Name a tag containing the name of the file the data was parsed from. Leave empty
## to disable. Cautious when file name variation is high, this can increase the cardinality
## significantly. Read more about cardinality here:
## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
# file_tag = ""
#


@ -4,9 +4,9 @@ The disk input plugin gathers metrics about disk usage.
Note that `used_percent` is calculated by doing `used / (used + free)`, _not_
`used / total`, which is how the unix `df` command does it. See
https://en.wikipedia.org/wiki/Df_(Unix) for more details.
[wikipedia - df](https://en.wikipedia.org/wiki/Df_(Unix)) for more details.
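The difference between the two formulas can be seen in a quick Python sketch (illustrative, not Telegraf's code; the sample values are taken from the example output later in this README):

```python
def used_percent(used, free):
    # Telegraf's calculation: reserved space is excluded from the denominator
    return 100.0 * used / (used + free)

def df_used_percent(used, total):
    # Unix df's calculation: used / total, where total also counts reserved blocks
    return 100.0 * used / total
```

With the sample values `used=100418957312`, `free=398407520256`, `total=499088621568`, Telegraf reports roughly 20.13% while the df-style division by `total` gives a slightly lower figure.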
### Configuration:
## Configuration
```toml
[[inputs.disk]]
@ -18,7 +18,7 @@ https://en.wikipedia.org/wiki/Df_(Unix) for more details.
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
```
#### Docker container
### Docker container
To monitor the Docker engine host from within a container you will need to
mount the host's filesystem into the container and set the `HOST_PROC`
@ -27,11 +27,11 @@ also set the `HOST_MOUNT_PREFIX` environment variable to the prefix containing
the `/proc` directory; when present, this variable is stripped from the
reported `path` tag.
```
```shell
docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs -e HOST_PROC=/hostfs/proc telegraf
```
### Metrics:
## Metrics
- disk
- tags:
@ -48,25 +48,27 @@ docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs -e HOST_PROC=/hostfs/pro
- inodes_total (integer, files)
- inodes_used (integer, files)
### Troubleshooting
## Troubleshooting
On Linux, the list of disks is taken from the `/proc/self/mounts` file and a
[statfs] call is made on the second column. If any expected filesystems are
missing ensure that the `telegraf` user can read these files:
```
```shell
$ sudo -u telegraf cat /proc/self/mounts | grep sda2
/dev/sda2 /home ext4 rw,relatime,data=ordered 0 0
$ sudo -u telegraf stat /home
```
It may be desired to use POSIX ACLs to provide additional access:
```
```shell
sudo setfacl -R -m u:telegraf:X /var/lib/docker/volumes/
```
### Example Output:
## Example
```
```shell
disk,fstype=hfs,mode=ro,path=/ free=398407520256i,inodes_free=97267461i,inodes_total=121847806i,inodes_used=24580345i,total=499088621568i,used=100418957312i,used_percent=20.131039916242397 1453832006274071563
disk,fstype=devfs,mode=rw,path=/dev free=0i,inodes_free=0i,inodes_total=628i,inodes_used=628i,total=185856i,used=185856i,used_percent=100 1453832006274137913
disk,fstype=autofs,mode=rw,path=/net free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0 1453832006274157077


@ -2,7 +2,7 @@
The diskio input plugin gathers metrics about disk traffic and timing.
### Configuration:
## Configuration
```toml
# Read metrics about disk IO by device
@ -34,7 +34,7 @@ The diskio input plugin gathers metrics about disk traffic and timing.
# name_templates = ["$ID_FS_LABEL","$DM_VG_NAME/$DM_LV_NAME"]
```
#### Docker container
### Docker container
To monitor the Docker engine host from within a container you will need to
mount the host's filesystem into the container and set the `HOST_PROC`
@ -44,11 +44,11 @@ it is required to use privileged mode to provide access to `/dev`.
If you are using the `device_tags` or `name_templates` options, you will need
to bind mount `/run/udev` into the container.
```
```shell
docker run --privileged -v /:/hostfs:ro -v /run/udev:/run/udev:ro -e HOST_PROC=/hostfs/proc telegraf
```
### Metrics:
## Metrics
- diskio
- tags:
@ -72,16 +72,16 @@ On linux these values correspond to the values in
and
[`/sys/block/<dev>/stat`](https://www.kernel.org/doc/Documentation/block/stat.txt).
#### `reads` & `writes`:
### `reads` & `writes`
These values increment when an I/O request completes.
#### `read_bytes` & `write_bytes`:
### `read_bytes` & `write_bytes`
These values count the number of bytes read from or written to this
block device.
#### `read_time` & `write_time`:
### `read_time` & `write_time`
These values count the number of milliseconds that I/O requests have
waited on this block device. If there are multiple I/O requests waiting,
@ -89,49 +89,51 @@ these values will increase at a rate greater than 1000/second; for
example, if 60 read requests wait for an average of 30 ms, the read_time
field will increase by 60*30 = 1800.
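The arithmetic above can be expressed directly (a trivial sketch, using the document's own example numbers):

```python
def io_time_field_increase(num_requests, avg_wait_ms):
    # read_time/write_time grow by (requests waiting) x (ms each waited),
    # so they can rise faster than 1000 ms per second of wall time
    return num_requests * avg_wait_ms
```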
#### `io_time`:
### `io_time`
This value counts the number of milliseconds during which the device has
had I/O requests queued.
#### `weighted_io_time`:
### `weighted_io_time`
This value counts the number of milliseconds that I/O requests have waited
on this block device. If there are multiple I/O requests waiting, this
value will increase as the product of the number of milliseconds times the
number of requests waiting (see `read_time` above for an example).
#### `iops_in_progress`:
### `iops_in_progress`
This value counts the number of I/O requests that have been issued to
the device driver but have not yet completed. It does not include I/O
requests that are in the queue but not yet issued to the device driver.
#### `merged_reads` & `merged_writes`:
### `merged_reads` & `merged_writes`
Reads and writes which are adjacent to each other may be merged for
efficiency. Thus two 4K reads may become one 8K read before it is
ultimately handed to the disk, and so it will be counted (and queued)
as only one I/O. These fields let you know how often this was done.
### Sample Queries:
## Sample Queries
#### Calculate percent IO utilization per disk and host:
```
### Calculate percent IO utilization per disk and host
```sql
SELECT non_negative_derivative(last("io_time"),1ms) FROM "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
```
#### Calculate average queue depth:
### Calculate average queue depth
`iops_in_progress` will give you an instantaneous value. This will give you the average between polling intervals.
```
```sql
SELECT non_negative_derivative(last("weighted_io_time"),1ms) from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
```
### Example Output:
## Example
```
```shell
diskio,name=sda1 merged_reads=0i,reads=2353i,writes=10i,write_bytes=2117632i,write_time=49i,io_time=1271i,weighted_io_time=1350i,read_bytes=31350272i,read_time=1303i,iops_in_progress=0i,merged_writes=0i 1578326400000000000
diskio,name=centos/var_log reads=1063077i,writes=591025i,read_bytes=139325491712i,write_bytes=144233131520i,read_time=650221i,write_time=24368817i,io_time=852490i,weighted_io_time=25037394i,iops_in_progress=1i,merged_reads=0i,merged_writes=0i 1578326400000000000
diskio,name=sda write_time=49i,io_time=1317i,weighted_io_time=1404i,reads=2495i,read_time=1357i,write_bytes=2117632i,iops_in_progress=0i,merged_reads=0i,merged_writes=0i,writes=10i,read_bytes=38956544i 1578326400000000000
```


@ -2,11 +2,10 @@
[Disque](https://github.com/antirez/disque) is an ongoing experiment to build a distributed, in-memory, message broker.
### Configuration:
## Configuration
```toml
[[inputs.disque]]
## An array of URI to gather stats about. Specify an ip or hostname
## with optional port and password.
## ie disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
@ -14,8 +13,7 @@
servers = ["localhost"]
```
### Metrics
## Metrics
- disque
- disque_host


@ -6,7 +6,7 @@ This plugin requires sudo, that is why you should setup and be sure that the tel
`sudo /sbin/dmsetup status --target cache` is the full command that telegraf will run for debugging purposes.
### Configuration
## Configuration
```toml
[[inputs.dmcache]]
@ -14,33 +14,33 @@ This plugin requires sudo, that is why you should setup and be sure that the tel
per_device = true
```
### Measurements & Fields:
## Measurements & Fields
- dmcache
- length
- target
- metadata_blocksize
- metadata_used
- metadata_total
- cache_blocksize
- cache_used
- cache_total
- read_hits
- read_misses
- write_hits
- write_misses
- demotions
- promotions
- dirty
### Tags:
## Tags
- All measurements have the following tags:
- device
### Example Output:
## Example Output
```
```shell
$ ./telegraf --test --config /etc/telegraf/telegraf.conf --input-filter dmcache
* Plugin: inputs.dmcache, Collection 1
> dmcache,device=example cache_blocksize=0i,read_hits=995134034411520i,read_misses=916807089127424i,write_hits=195107267543040i,metadata_used=12861440i,write_misses=563725346013184i,promotions=3265223720960i,dirty=0i,metadata_blocksize=0i,cache_used=1099511627776i,cache_total=0i,length=0i,metadata_total=1073741824i,demotions=3265223720960i 1491482035000000000


@ -2,7 +2,8 @@
The DNS plugin gathers DNS query times in milliseconds, like [Dig](https://en.wikipedia.org/wiki/Dig_\(command\))
### Configuration:
## Configuration
```toml
# Query given DNS server and gives statistics
[[inputs.dns_query]]
@ -26,7 +27,7 @@ The DNS plugin gathers dns query times in miliseconds - like [Dig](https://en.wi
# timeout = 2
```
### Metrics:
## Metrics
- dns_query
- tags:
@ -40,8 +41,8 @@ The DNS plugin gathers dns query times in miliseconds - like [Dig](https://en.wi
- result_code (int, success = 0, timeout = 1, error = 2)
- rcode_value (int)
## Rcode Descriptions
### Rcode Descriptions
|rcode_value|rcode|Description|
|---|-----------|-----------------------------------|
|0 | NoError | No Error |
@ -65,9 +66,8 @@ The DNS plugin gathers dns query times in miliseconds - like [Dig](https://en.wi
|22 | BADTRUNC | Bad Truncation |
|23 | BADCOOKIE | Bad/missing Server Cookie |
### Example
### Example Output:
```
```shell
dns_query,domain=google.com,rcode=NOERROR,record_type=A,result=success,server=127.0.0.1 rcode_value=0i,result_code=0i,query_time_ms=0.13746 1550020750001000000
```


@ -6,7 +6,7 @@ docker containers.
The docker plugin uses the [Official Docker Client](https://github.com/moby/moby/tree/master/client)
to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.24/).
### Configuration:
## Configuration
```toml
# Read metrics about docker containers
@ -46,23 +46,23 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.24/)
## Whether to report for each container per-device blkio (8:0, 8:1...),
## network (eth0, eth1, ...) and cpu (cpu0, cpu1, ...) stats or not.
## Usage of this setting is discouraged since it will be deprecated in favor of 'perdevice_include'.
## Default value is 'true' for backwards compatibility, please set it to 'false' so that 'perdevice_include' setting
## is honored.
perdevice = true
## Specifies for which classes a per-device metric should be issued
## Possible values are 'cpu' (cpu0, cpu1, ...), 'blkio' (8:0, 8:1, ...) and 'network' (eth0, eth1, ...)
## Please note that this setting has no effect if 'perdevice' is set to 'true'
# perdevice_include = ["cpu"]
## Whether to report for each container total blkio and network stats or not.
## Usage of this setting is discouraged since it will be deprecated in favor of 'total_include'.
## Default value is 'false' for backwards compatibility, please set it to 'true' so that 'total_include' setting
## is honored.
total = false
## Specifies for which classes a total metric should be issued. Total is an aggregated of the 'perdevice' values.
## Possible values are 'cpu', 'blkio' and 'network'
## Total 'cpu' is reported directly by Docker daemon, and 'network' and 'blkio' totals are aggregated by this plugin.
## Please note that this setting has no effect if 'total' is set to 'false'
# total_include = ["cpu", "blkio", "network"]
@ -83,23 +83,23 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.24/)
# insecure_skip_verify = false
```
#### Environment Configuration
### Environment Configuration
When using the `"ENV"` endpoint, the connection is configured using the
[cli Docker environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).
#### Security
### Security
Giving telegraf access to the Docker daemon expands the [attack surface](https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface) that could result in an attacker gaining root access to a machine. This is especially relevant if the telegraf configuration can be changed by untrusted users.
#### Docker Daemon Permissions
### Docker Daemon Permissions
Typically, telegraf must be given permission to access the docker daemon unix
socket when using the default endpoint. This can be done by adding the
`telegraf` unix user (created when installing a Telegraf package) to the
`docker` unix group with the following command:
```
```shell
sudo usermod -aG docker telegraf
```
@ -108,12 +108,12 @@ within the telegraf container. This can be done in the docker CLI by adding the
option `-v /var/run/docker.sock:/var/run/docker.sock` or adding the following
lines to the telegraf container definition in a docker compose file:
```
```yaml
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
#### source tag
### source tag
Selecting the containers' measurements can be tricky if you have many containers with the same name.
To alleviate this issue you can set the below value to `true`
@ -124,20 +124,20 @@ source_tag = true
This will cause all measurements to have the `source` tag be set to the first 12 characters of the container id. The first 12 characters are the common hostname for containers that have no explicit hostname set, as defined by docker.
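The truncation described above amounts to a one-liner (an illustrative sketch, not plugin code; the sample id is taken from the example output later in this README):

```python
def source_tag(container_id):
    # Docker uses the first 12 characters of the container id as the default
    # hostname; the source tag follows the same convention
    return container_id[:12]
```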
#### Kubernetes Labels
### Kubernetes Labels
Kubernetes may add many labels to your containers; if they are not needed you
may prefer to exclude them:
```
```toml
docker_label_exclude = ["annotation.kubernetes*"]
```
### Docker-compose Labels
#### Docker-compose Labels
Docker-compose will add labels to your containers. You can restrict labels to selected ones, e.g.
```
```toml
docker_label_include = [
"com.docker.compose.config-hash",
"com.docker.compose.container-number",
@ -147,15 +147,14 @@ Docker-compose will add labels to your containers. You can limit restrict labels
]
```
### Metrics:
### Metrics
- docker
- tags:
- unit
- engine_host
- server_version
+ fields:
- fields:
- n_used_file_descriptors
- n_cpus
- n_containers
@ -171,12 +170,12 @@ Docker-compose will add labels to your containers. You can limit restrict labels
The `docker_data` and `docker_metadata` measurements are available only for
some storage drivers such as devicemapper.
+ docker_data (deprecated see: `docker_devicemapper`)
- docker_data (deprecated see: `docker_devicemapper`)
- tags:
- unit
- engine_host
- server_version
+ fields:
- fields:
- available
- total
- used
@ -186,7 +185,7 @@ some storage drivers such as devicemapper.
- unit
- engine_host
- server_version
+ fields:
- fields:
- available
- total
- used
@ -198,7 +197,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- engine_host
- server_version
- pool_name
+ fields:
- fields:
- pool_blocksize_bytes
- data_space_used_bytes
- data_space_total_bytes
@ -208,7 +207,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- metadata_space_available_bytes
- thin_pool_minimum_free_space_bytes
+ docker_container_mem
- docker_container_mem
- tags:
- engine_host
- server_version
@ -216,7 +215,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- container_name
- container_status
- container_version
+ fields:
- fields:
- total_pgmajfault
- cache
- mapped_file
@ -261,7 +260,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- container_status
- container_version
- cpu
+ fields:
- fields:
- throttling_periods
- throttling_throttled_periods
- throttling_throttled_time
@ -272,7 +271,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- usage_percent
- container_id
+ docker_container_net
- docker_container_net
- tags:
- engine_host
- server_version
@ -281,7 +280,7 @@ The above measurements for the devicemapper storage driver can now be found in t
- container_status
- container_version
- network
+ fields:
- fields:
- rx_dropped
- rx_bytes
- rx_errors
@ -327,8 +326,8 @@ status if configured.
- container_status
- container_version
- fields:
- health_status (string)
- failing_streak (integer)
- docker_container_status
- tags:
@ -356,9 +355,9 @@ status if configured.
- tasks_desired
- tasks_running
### Example Output:
## Example
```
```shell
docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce n_containers=6i,n_containers_paused=0i,n_containers_running=1i,n_containers_stopped=5i,n_cpus=2i,n_goroutines=41i,n_images=2i,n_listener_events=0i,n_used_file_descriptors=27i 1524002041000000000
docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce,unit=bytes memory_total=2101661696i 1524002041000000000
docker_container_mem,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,server_version=17.09.0-ce active_anon=8327168i,active_file=2314240i,cache=27402240i,container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",hierarchical_memory_limit=9223372036854771712i,inactive_anon=0i,inactive_file=25088000i,limit=2101661696i,mapped_file=20582400i,max_usage=36646912i,pgfault=4193i,pgmajfault=214i,pgpgin=9243i,pgpgout=520i,rss=8327168i,rss_huge=0i,total_active_anon=8327168i,total_active_file=2314240i,total_cache=27402240i,total_inactive_anon=0i,total_inactive_file=25088000i,total_mapped_file=20582400i,total_pgfault=4193i,total_pgmajfault=214i,total_pgpgin=9243i,total_pgpgout=520i,total_rss=8327168i,total_rss_huge=0i,total_unevictable=0i,total_writeback=0i,unevictable=0i,usage=36528128i,usage_percent=0.4342225020025297,writeback=0i 1524002042000000000


@ -12,7 +12,7 @@ The docker plugin uses the [Official Docker Client][] to gather logs from the
[Official Docker Client]: https://github.com/moby/moby/tree/master/client
[Engine API]: https://docs.docker.com/engine/api/v1.24/
### Configuration
## Configuration
```toml
[[inputs.docker_log]]
@ -54,14 +54,14 @@ The docker plugin uses the [Official Docker Client][] to gather logs from the
# insecure_skip_verify = false
```
#### Environment Configuration
### Environment Configuration
When using the `"ENV"` endpoint, the connection is configured using the
[CLI Docker environment variables][env]
[env]: https://godoc.org/github.com/moby/moby/client#NewEnvClient
### source tag
## source tag
Selecting the containers can be tricky if you have many containers with the same name.
To alleviate this issue you can set the below value to `true`
@ -72,7 +72,7 @@ source_tag = true
This will cause all data points to have the `source` tag be set to the first 12 characters of the container id. The first 12 characters are the common hostname for containers that have no explicit hostname set, as defined by docker.
### Metrics
## Metrics
- docker_log
- tags:
@ -85,9 +85,9 @@ This will cause all data points to have the `source` tag be set to the first 12
- container_id
- message
### Example Output
## Example Output
```
```shell
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:\"371ee5d3e587\", Flush Interval:10s" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Tags enabled: host=371ee5d3e587" 1560913872000000000
docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded outputs: file" 1560913872000000000


@ -6,7 +6,7 @@ metrics on configured domains.
When using Dovecot v2.3 you are still able to use this protocol by following
the [upgrading steps][upgrading].
### Configuration:
## Configuration
```toml
# Read metrics about dovecot servers
@ -23,50 +23,49 @@ the [upgrading steps][upgrading].
## Type is one of "user", "domain", "ip", or "global"
type = "global"
## Wildcard matches like "*.com". An empty string "" is same as "*"
## If type = "ip" filters should be <IP/network>
filters = [""]
```
### Metrics:
## Metrics
- dovecot
- tags:
- server (hostname)
- type (query type)
- ip (ip addr)
- user (username)
- domain (domain name)
- fields:
- reset_timestamp (string)
- last_update (string)
- num_logins (integer)
- num_cmds (integer)
- num_connected_sessions (integer)
- user_cpu (float)
- sys_cpu (float)
- clock_time (float)
- min_faults (integer)
- maj_faults (integer)
- vol_cs (integer)
- invol_cs (integer)
- disk_input (integer)
- disk_output (integer)
- read_count (integer)
- read_bytes (integer)
- write_count (integer)
- write_bytes (integer)
- mail_lookup_path (integer)
- mail_lookup_attr (integer)
- mail_read_count (integer)
- mail_read_bytes (integer)
- mail_cache_hits (integer)
### Example Output
### Example Output:
```
```shell
dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
```


@ -1,4 +1,5 @@
# Data Plane Development Kit (DPDK) Input Plugin
The `dpdk` plugin collects metrics exposed by applications built with [Data Plane Development Kit](https://www.dpdk.org/),
which is an extensive set of open source libraries designed for accelerating packet processing workloads.
@ -23,13 +24,15 @@ to discover and test the capabilities of DPDK libraries and to explore the expos
> `DPDK version >= 20.05`. The default configuration include reading common statistics from `/ethdev/stats` that is
> available from `DPDK version >= 20.11`. When using `DPDK 20.05 <= version < DPDK 20.11` it is recommended to disable
> querying `/ethdev/stats` by setting corresponding `exclude_commands` configuration option.
>
> **NOTE:** Since DPDK will most likely run with root privileges, the socket telemetry interface exposed by DPDK
> will also require root access. This means that either access permissions have to be adjusted for socket telemetry
> interface to allow Telegraf to access it, or Telegraf should run with root privileges.
## Configuration
This plugin offers multiple configuration options; please review the examples below for additional usage information.
```toml
# Reads metrics from DPDK applications using v2 telemetry interface.
[[inputs.dpdk]]
@ -50,7 +53,7 @@ This plugin offers multiple configuration options, please review examples below
## List of custom, application-specific telemetry commands to query
## The list of available commands depends on the application deployed. Applications can register their own commands
## via telemetry library API http://doc.dpdk.org/guides/prog_guide/telemetry_lib.html#registering-commands
## For e.g. L3 Forwarding with Power Management Sample Application this could be:
## additional_commands = ["/l3fwd-power/stats"]
# additional_commands = []
@ -60,28 +63,34 @@ This plugin offers multiple configuration options, please review examples below
exclude_commands = ["/ethdev/link_status"]
## When running multiple instances of the plugin it's recommended to add a unique tag to each instance to identify
## metrics exposed by an instance of DPDK application. This is useful when multiple DPDK apps run on a single host.
## [inputs.dpdk.tags]
## dpdk_instance = "my-fwd-app"
```
### Example: Minimal Configuration for NIC metrics
This configuration allows getting metrics for all devices reported via the `/ethdev/list` command:
* `/ethdev/stats` - basic device statistics (since `DPDK 20.11`)
* `/ethdev/xstats` - extended device statistics
* `/ethdev/link_status` - up/down link status
```toml
[[inputs.dpdk]]
device_types = ["ethdev"]
```
Since this configuration will query `/ethdev/link_status` it's recommended to increase the timeout to `socket_access_timeout = "10s"`.
The [plugin collecting interval](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#input-plugins)
should be adjusted accordingly (e.g. `interval = "30s"`).
### Example: Excluding NIC link status from being collected
Depending on the underlying implementation, checking link status may take more time to complete.
This configuration can be used to exclude this telemetry command to allow a faster response for metrics.
```toml
[[inputs.dpdk]]
device_types = ["ethdev"]
@ -89,13 +98,16 @@ This configuration can be used to exclude this telemetry command to allow faster
[inputs.dpdk.ethdev]
exclude_commands = ["/ethdev/link_status"]
```
A separate plugin instance with higher timeout settings can be used to get `/ethdev/link_status` independently.
Consult [Independent NIC link status configuration](#example-independent-nic-link-status-configuration)
and [Getting metrics from multiple DPDK instances running on same host](#example-getting-metrics-from-multiple-dpdk-instances-running-on-same-host)
examples for further details.
### Example: Independent NIC link status configuration
This configuration allows getting `/ethdev/link_status` using a separate configuration, with a higher timeout.
```toml
[[inputs.dpdk]]
interval = "30s"
@ -107,8 +119,10 @@ This configuration allows getting `/ethdev/link_status` using separate configura
```
### Example: Getting application-specific metrics
This configuration allows reading custom metrics exposed by applications. The example telemetry command was obtained from the
[L3 Forwarding with Power Management Sample Application](https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html).
```toml
[[inputs.dpdk]]
device_types = ["ethdev"]
@ -117,18 +131,22 @@ This configuration allows reading custom metrics exposed by applications. Exampl
[inputs.dpdk.ethdev]
exclude_commands = ["/ethdev/link_status"]
```
Command entries specified in `additional_commands` should match DPDK command format:
* Command entry format: either `command` or `command,params` for commands that expect parameters, where a comma (`,`) separates the command from its params.
* Command entry length (command with params) should be `< 1024` characters.
* Command length (without params) should be `< 56` characters.
* Commands have to start with `/`.
Providing invalid commands will prevent the plugin from starting. Duplicates in `additional_commands` are allowed, but they
will be removed during execution, so each command is executed only once during each metric gathering interval.
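The command entry rules above can be sketched as a small validation helper (a hypothetical Python illustration; the plugin itself implements these checks in Go):

```python
def validate_command_entry(entry):
    """Check a custom telemetry command entry against the format rules above."""
    # The whole entry (command plus params) should be < 1024 characters.
    if len(entry) >= 1024:
        return False
    # A comma separates the command from its params, if any.
    command = entry.split(",", 1)[0]
    # The command alone (without params) should be < 56 characters.
    if len(command) >= 56:
        return False
    # Commands have to start with '/'.
    return command.startswith("/")

def dedupe_commands(commands):
    # Duplicates are allowed but collapsed, so each command runs only once.
    return list(dict.fromkeys(commands))
```

For example, `validate_command_entry("/l3fwd-power/stats")` passes, while an entry without the leading `/` is rejected.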
### Example: Getting metrics from multiple DPDK instances running on same host
This configuration allows getting metrics from two separate applications exposing their telemetry interfaces
via separate sockets. A unique tag `[inputs.dpdk.tags]` for each plugin instance allows distinguishing between them.
```toml
# Instance #1 - L3 Forwarding with Power Management Application
[[inputs.dpdk]]
@ -153,22 +171,26 @@ via separate sockets. For each plugin instance a unique tag `[inputs.dpdk.tags]`
[inputs.dpdk.tags]
dpdk_instance = "l2fwd-cat"
```
This utilizes Telegraf's standard capability of [adding custom tags](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#input-plugins)
to an input plugin's measurements.
## Metrics
The DPDK socket accepts `command,params` requests and returns metric data in JSON format. All metrics from the DPDK socket
are flattened using [Telegraf's JSON Flattener](../../parsers/json/README.md) and exposed as fields.
If a DPDK response contains no information (is empty or null), it will be discarded.
> **NOTE:** Since DPDK allows registering custom metrics in its telemetry framework, the JSON response from DPDK
> may contain various sets of metrics. While metrics from `/ethdev/stats` should be most stable, the `/ethdev/xstats`
> may contain driver-specific metrics (depending on DPDK application configuration). The application-specific commands
> like `/l3fwd-power/stats` can return their own specific set of metrics.
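The flattening step can be sketched as follows (an illustrative approximation that joins nested keys with underscores; see the JSON parser docs for the flattener's exact behaviour):

```python
def flatten(obj, prefix=""):
    """Recursively flatten nested JSON into a flat field map.

    Lists are omitted for brevity; empty or null responses yield no fields,
    matching the discard behaviour described above.
    """
    fields = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            name = f"{prefix}_{key}" if prefix else key
            fields.update(flatten(value, name))
    elif obj is not None:
        fields[prefix] = obj
    return fields

# A hypothetical nested response and its flattened fields:
fields = flatten({"stats": {"ipackets": 98, "ibytes": 7092}, "status": "up"})
```

Here `fields` contains `stats_ipackets`, `stats_ibytes`, and `status` as flat field names.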
## Example output
The output consists of the plugin name (`dpdk`) and a set of tags that identify the querying hierarchy:
```
```shell
dpdk,host=dpdk-host,dpdk_instance=l3fwd-power,command=/ethdev/stats,params=0 [fields] [timestamp]
```
@ -177,9 +199,10 @@ dpdk,host=dpdk-host,dpdk_instance=l3fwd-power,command=/ethdev/stats,params=0 [fi
| `host` | hostname of the machine (consult [Telegraf Agent configuration](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#agent) for additional details) |
| `dpdk_instance` | custom tag from `[inputs.dpdk.tags]` (optional) |
| `command` | executed command (without params) |
| `params` | command parameter, e.g. for `/ethdev/stats` it is the id of NIC as exposed by `/ethdev/list`<br>For DPDK app that uses 2 NICs the metrics will output e.g. `params=0`, `params=1`. |
| `params` | command parameter, e.g. for `/ethdev/stats` it is the id of NIC as exposed by `/ethdev/list`. For DPDK app that uses 2 NICs the metrics will output e.g. `params=0`, `params=1`. |
When running the plugin configuration below...
```toml
[[inputs.dpdk]]
device_types = ["ethdev"]
@ -189,7 +212,8 @@ When running plugin configuration below...
```
...the expected output for a `dpdk` plugin instance running on a host named `dpdk-host` is:
```
```shell
dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
dpdk,command=/ethdev/xstats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 out_octets_encrypted=0,rx_fcoe_mbuf_allocation_errors=0,tx_q1packets=0,rx_priority0_xoff_packets=0,rx_priority7_xoff_packets=0,rx_errors=0,mac_remote_errors=0,in_pkts_invalid=0,tx_priority3_xoff_packets=0,tx_errors=0,rx_fcoe_bytes=0,rx_flow_control_xon_packets=0,rx_priority4_xoff_packets=0,tx_priority2_xoff_packets=0,rx_illegal_byte_errors=0,rx_xoff_packets=0,rx_management_packets=0,rx_priority7_dropped=0,rx_priority4_dropped=0,in_pkts_unchecked=0,rx_error_bytes=0,rx_size_256_to_511_packets=0,tx_priority4_xoff_packets=0,rx_priority6_xon_packets=0,tx_priority4_xon_to_xoff_packets=0,in_pkts_delayed=0,rx_priority0_mbuf_allocation_errors=0,out_octets_protected=0,tx_priority7_xon_to_xoff_packets=0,tx_priority1_xon_to_xoff_packets=0,rx_fcoe_no_direct_data_placement_ext_buff=0,tx_priority6_xon_to_xoff_packets=0,flow_director_filter_add_errors=0,rx_total_packets=99,rx_crc_errors=0,flow_director_filter_remove_errors=0,rx_missed_errors=0,tx_size_64_packets=0,rx_priority3_dropped=0,flow_director_matched_filters=0,tx_priority2_xon_to_xoff_packets=0,rx_priority1_xon_packets=0,rx_size_65_to_127_packets=99,rx_fragment_errors=0,in_pkts_notusingsa=0,rx_q0bytes=7162,rx_fcoe_dropped=0,rx_priority1_dropped=0,rx_fcoe_packets=0,rx_priority5_xoff_packets=0,out_pkts_protected=0,tx_total_packets=0,rx_priority2_dropped=0,in_pkts_late=0,tx_q1bytes=0,in_pkts_badtag=0,rx_multicast_packets=99,rx_priority6_xoff_packets=0,tx_flow_control_xoff_packets=0,rx_flow_control_xoff_packets=0,rx_priority0_xon_packets=0,in_pkts_untagged=0,tx_fcoe_packets=0,rx_priority7_mbuf_allocation_errors=0,tx_priority0_xon_to_xoff_packets=0,tx_priority5_xon_to_xoff_packets=0,tx_flow_control_xon_packets=0,tx_q0packets=0,tx_xoff_packets=0,rx_size_512_to_1023_packets=0,rx_priority3_xon_packets=0,rx_q0errors=0,rx_oversize_errors=0,tx_priority4_xon_packets=0,tx_priority5_xoff_packets=0,rx_priority5_xon_packets=0,rx_total_missed_packets=0,rx_priority4_mbuf_allocation_errors=0,tx_priority1_xon_packets=0,tx_management_packets=0,rx_priority5_mbuf_allocation_errors=0,rx_fcoe_no_direct_data_placement=0,rx_undersize_errors=0,tx_priority1_xoff_packets=0,rx_q0packets=99,tx_q2packets=0,tx_priority6_xon_packets=0,rx_good_packets=99,tx_priority5_xon_packets=0,tx_size_256_to_511_packets=0,rx_priority6_dropped=0,rx_broadcast_packets=0,tx_size_512_to_1023_packets=0,tx_priority3_xon_to_xoff_packets=0,in_pkts_unknownsci=0,in_octets_validated=0,tx_priority6_xoff_packets=0,tx_priority7_xoff_packets=0,rx_jabber_errors=0,tx_priority7_xon_packets=0,tx_priority0_xon_packets=0,in_pkts_unusedsa=0,tx_priority0_xoff_packets=0,mac_local_errors=33,rx_total_bytes=7162,in_pkts_notvalid=0,rx_length_errors=0,in_octets_decrypted=0,rx_size_128_to_255_packets=0,rx_good_bytes=7162,tx_size_65_to_127_packets=0,rx_mac_short_packet_dropped=0,tx_size_1024_to_max_packets=0,rx_priority2_mbuf_allocation_errors=0,flow_director_added_filters=0,tx_multicast_packets=0,rx_fcoe_crc_errors=0,rx_priority1_xoff_packets=0,flow_director_missed_filters=0,rx_xon_packets=0,tx_size_128_to_255_packets=0,out_pkts_encrypted=0,rx_priority4_xon_packets=0,rx_priority0_dropped=0,rx_size_1024_to_max_packets=0,tx_good_bytes=0,rx_management_dropped=0,rx_mbuf_allocation_errors=0,tx_xon_packets=0,rx_priority3_xoff_packets=0,tx_good_packets=0,tx_fcoe_bytes=0,rx_priority6_mbuf_allocation_errors=0,rx_priority2_xon_packets=0,tx_broadcast_packets=0,tx_q2bytes=0,rx_priority7_xon_packets=0,out_pkts_untagged=0,rx_priority2_xoff_packets=0,rx_priority1_mbuf_allocation_errors=0,tx_q0bytes=0,rx_size_64_packets=0,rx_priority5_dropped=0,tx_priority2_xon_packets=0,in_pkts_nosci=0,flow_director_removed_filters=0,in_pkts_ok=0,rx_l3_l4_xsum_error=0,rx_priority3_mbuf_allocation_errors=0,tx_priority3_xon_packets=0 1606310780000000000

View File

@ -14,7 +14,7 @@ formats.
The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.
### Configuration
## Configuration
```toml
# Read metrics about ECS containers
@ -45,7 +45,7 @@ present in the metadata/stats endpoints.
# timeout = "5s"
```
### Configuration (enforce v2 metadata)
## Configuration (enforce v2 metadata)
```toml
# Read metrics about ECS containers
@ -76,7 +76,7 @@ present in the metadata/stats endpoints.
# timeout = "5s"
```
### Metrics
## Metrics
- ecs_task
- tags:
@ -92,7 +92,7 @@ present in the metadata/stats endpoints.
- limit_cpu (float)
- limit_mem (float)
+ ecs_container_mem
- ecs_container_mem
- tags:
- cluster
- task_arn
@ -158,7 +158,7 @@ present in the metadata/stats endpoints.
- usage_percent
- usage_total
+ ecs_container_net
- ecs_container_net
- tags:
- cluster
- task_arn
@ -200,7 +200,7 @@ present in the metadata/stats endpoints.
- io_serviced_recursive_total
- io_serviced_recursive_write
+ ecs_container_meta
- ecs_container_meta
- tags:
- cluster
- task_arn
@ -221,10 +221,9 @@ present in the metadata/stats endpoints.
- started_at
- type
## Example
### Example Output
```
```shell
ecs_task,cluster=test,family=nginx,host=c4b301d4a123,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a desired_status="RUNNING",known_status="RUNNING",limit_cpu=0.5,limit_mem=512 1542641488000000000
ecs_container_mem,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a active_anon=40960i,active_file=8192i,cache=790528i,pgpgin=1243i,total_pgfault=1298i,total_rss=40960i,limit=1033658368i,max_usage=4825088i,hierarchical_memory_limit=536870912i,rss=40960i,total_active_file=8192i,total_mapped_file=618496i,usage_percent=0.05349543109392212,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",pgfault=1298i,pgmajfault=6i,pgpgout=1040i,total_active_anon=40960i,total_inactive_file=782336i,total_pgpgin=1243i,usage=552960i,inactive_file=782336i,mapped_file=618496i,total_cache=790528i,total_pgpgout=1040i 1542642001000000000
ecs_container_cpu,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,cpu=cpu-total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a usage_in_kernelmode=0i,throttling_throttled_periods=0i,throttling_periods=0i,throttling_throttled_time=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",usage_percent=0,usage_total=26426156i,usage_in_usermode=20000000i,usage_system=2336100000000i 1542642001000000000
@ -242,4 +241,4 @@ ecs_container_meta,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs
[docker-input]: /plugins/inputs/docker/README.md
[task-metadata-endpoint-v2]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html
[task-metadata-endpoint-v3] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html
[task-metadata-endpoint-v3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html

View File

@ -12,6 +12,7 @@ In addition, the following optional queries are only made by the master node:
[Shard Stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html)
Specific Elasticsearch endpoints that are queried:
- Node: either /_nodes/stats or /_nodes/_local/stats depending on the 'local' configuration setting
- Cluster Health: /_cluster/health?level=indices
- Cluster Stats: /_cluster/stats
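The node endpoint selection above can be sketched as follows (a hypothetical helper, assuming a boolean `local` setting as in the sample config):

```python
def node_stats_url(base_url, local):
    """Build the node stats URL: the _local variant restricts the query
    to the node the plugin connects to, as described above."""
    path = "/_nodes/_local/stats" if local else "/_nodes/stats"
    return base_url.rstrip("/") + path
```

For example, with `local = true` and a server of `http://localhost:9200`, the plugin would query `http://localhost:9200/_nodes/_local/stats`.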
@ -20,7 +21,7 @@ Specific Elasticsearch endpoints that are queried:
Note that specific statistics information can change between Elasticsearch versions. In general, this plugin attempts to stay as version-generic as possible by tagging high-level categories only and using a generic JSON parser to build unique field names from whatever statistics names are provided at the mid-to-low level.
### Configuration
## Configuration
```toml
[[inputs.elasticsearch]]
@ -81,7 +82,7 @@ Note that specific statistics information can change between Elasticsearch versi
# num_most_recent_indices = 0
```
### Metrics
## Metrics
Emitted when `cluster_health = true`:
@ -169,7 +170,7 @@ Emitted when `cluster_stats = true`:
- shards_total (float)
- store_size_in_bytes (float)
+ elasticsearch_clusterstats_nodes
- elasticsearch_clusterstats_nodes
- tags:
- cluster_name
- node_name
@ -230,7 +231,7 @@ Emitted when the appropriate `node_stats` options are set.
- tx_count (float)
- tx_size_in_bytes (float)
+ elasticsearch_breakers
- elasticsearch_breakers
- tags:
- cluster_name
- node_attribute_ml.enabled
@ -291,7 +292,7 @@ Emitted when the appropriate `node_stats` options are set.
- total_free_in_bytes (float)
- total_total_in_bytes (float)
+ elasticsearch_http
- elasticsearch_http
- tags:
- cluster_name
- node_attribute_ml.enabled
@ -402,7 +403,7 @@ Emitted when the appropriate `node_stats` options are set.
- warmer_total (float)
- warmer_total_time_in_millis (float)
+ elasticsearch_jvm
- elasticsearch_jvm
- tags:
- cluster_name
- node_attribute_ml.enabled
@ -480,7 +481,7 @@ Emitted when the appropriate `node_stats` options are set.
- swap_used_in_bytes (float)
- timestamp (float)
+ elasticsearch_process
- elasticsearch_process
- tags:
- cluster_name
- node_attribute_ml.enabled

View File

@ -2,7 +2,7 @@
The ethtool input plugin pulls Ethernet device stats. The fields pulled will depend on the network device and driver.
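The available stats can be previewed from the command line with `ethtool -S <interface>`. A minimal sketch of parsing that style of output into fields (illustrative only; the plugin reads the stats through the kernel interface rather than by shelling out):

```python
def parse_ethtool_stats(output):
    """Parse 'name: value' lines into an integer field map."""
    fields = {}
    for line in output.splitlines():
        name, sep, value = line.partition(":")
        if not sep:
            continue
        name, value = name.strip(), value.strip()
        # Skip non-numeric lines such as the "NIC statistics:" header.
        if value.lstrip("-").isdigit():
            fields[name] = int(value)
    return fields

sample = """NIC statistics:
     tx_packets: 1257017
     rx_packets: 25926536
     rx_crc_errors: 0"""
fields = parse_ethtool_stats(sample)
```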
### Configuration:
## Configuration
```toml
# Returns ethtool statistics for given interfaces
@ -30,13 +30,13 @@ Interfaces can be included or ignored using:
Note that loopback interfaces will be automatically ignored.
### Metrics:
## Metrics
Metrics are dependent on the network device and driver.
### Example Output:
## Example Output
```
```shell
ethtool,driver=igb,host=test01,interface=mgmt0 tx_queue_1_packets=280782i,rx_queue_5_csum_err=0i,tx_queue_4_restart=0i,tx_multicast=7i,tx_queue_1_bytes=39674885i,rx_queue_2_alloc_failed=0i,tx_queue_5_packets=173970i,tx_single_coll_ok=0i,rx_queue_1_drops=0i,tx_queue_2_restart=0i,tx_aborted_errors=0i,rx_queue_6_csum_err=0i,tx_queue_5_restart=0i,tx_queue_4_bytes=64810835i,tx_abort_late_coll=0i,tx_queue_4_packets=109102i,os2bmc_tx_by_bmc=0i,tx_bytes=427527435i,tx_queue_7_packets=66665i,dropped_smbus=0i,rx_queue_0_csum_err=0i,tx_flow_control_xoff=0i,rx_packets=25926536i,rx_queue_7_csum_err=0i,rx_queue_3_bytes=84326060i,rx_multicast=83771i,rx_queue_4_alloc_failed=0i,rx_queue_3_drops=0i,rx_queue_3_csum_err=0i,rx_errors=0i,tx_errors=0i,tx_queue_6_packets=183236i,rx_broadcast=24378893i,rx_queue_7_packets=88680i,tx_dropped=0i,rx_frame_errors=0i,tx_queue_3_packets=161045i,tx_packets=1257017i,rx_queue_1_csum_err=0i,tx_window_errors=0i,tx_dma_out_of_sync=0i,rx_length_errors=0i,rx_queue_5_drops=0i,tx_timeout_count=0i,rx_queue_4_csum_err=0i,rx_flow_control_xon=0i,tx_heartbeat_errors=0i,tx_flow_control_xon=0i,collisions=0i,tx_queue_0_bytes=29465801i,rx_queue_6_drops=0i,rx_queue_0_alloc_failed=0i,tx_queue_1_restart=0i,rx_queue_0_drops=0i,tx_broadcast=9i,tx_carrier_errors=0i,tx_queue_7_bytes=13777515i,tx_queue_7_restart=0i,rx_queue_5_bytes=50732006i,rx_queue_7_bytes=35744457i,tx_deferred_ok=0i,tx_multi_coll_ok=0i,rx_crc_errors=0i,rx_fifo_errors=0i,rx_queue_6_alloc_failed=0i,tx_queue_2_packets=175206i,tx_queue_0_packets=107011i,rx_queue_4_bytes=201364548i,rx_queue_6_packets=372573i,os2bmc_rx_by_host=0i,multicast=83771i,rx_queue_4_drops=0i,rx_queue_5_packets=130535i,rx_queue_6_bytes=139488035i,tx_fifo_errors=0i,tx_queue_5_bytes=84899130i,rx_queue_0_packets=24529563i,rx_queue_3_alloc_failed=0i,rx_queue_7_drops=0i,tx_queue_6_bytes=96288614i,tx_queue_2_bytes=22132949i,tx_tcp_seg_failed=0i,rx_queue_1_bytes=246703840i,rx_queue_0_bytes=1506870738i,tx_queue_0_restart=0i,rx_queue_2_bytes=111344804i,tx_tcp_seg_good=0i,tx_queue_3_restart=0i,rx_no_buffer_count=0i,rx_smbus=0i,rx_queue_1_packets=273865i,rx_over_errors=0i,os2bmc_tx_by_host=0i,rx_queue_1_alloc_failed=0i,rx_queue_7_alloc_failed=0i,rx_short_length_errors=0i,tx_hwtstamp_timeouts=0i,tx_queue_6_restart=0i,rx_queue_2_packets=207136i,tx_queue_3_bytes=70391970i,rx_queue_3_packets=112007i,rx_queue_4_packets=212177i,tx_smbus=0i,rx_long_byte_count=2480280632i,rx_queue_2_csum_err=0i,rx_missed_errors=0i,rx_bytes=2480280632i,rx_queue_5_alloc_failed=0i,rx_queue_2_drops=0i,os2bmc_rx_by_bmc=0i,rx_align_errors=0i,rx_long_length_errors=0i,interface_up=1i,rx_hwtstamp_cleared=0i,rx_flow_control_xoff=0i 1564658080000000000
ethtool,driver=igb,host=test02,interface=mgmt0 rx_queue_2_bytes=111344804i,tx_queue_3_bytes=70439858i,multicast=83771i,rx_broadcast=24378975i,tx_queue_0_packets=107011i,rx_queue_6_alloc_failed=0i,rx_queue_6_drops=0i,rx_hwtstamp_cleared=0i,tx_window_errors=0i,tx_tcp_seg_good=0i,rx_queue_1_drops=0i,tx_queue_1_restart=0i,rx_queue_7_csum_err=0i,rx_no_buffer_count=0i,tx_queue_1_bytes=39675245i,tx_queue_5_bytes=84899130i,tx_broadcast=9i,rx_queue_1_csum_err=0i,tx_flow_control_xoff=0i,rx_queue_6_csum_err=0i,tx_timeout_count=0i,os2bmc_tx_by_bmc=0i,rx_queue_6_packets=372577i,rx_queue_0_alloc_failed=0i,tx_flow_control_xon=0i,rx_queue_2_drops=0i,tx_queue_2_packets=175206i,rx_queue_3_csum_err=0i,tx_abort_late_coll=0i,tx_queue_5_restart=0i,tx_dropped=0i,rx_queue_2_alloc_failed=0i,tx_multi_coll_ok=0i,rx_queue_1_packets=273865i,rx_flow_control_xon=0i,tx_single_coll_ok=0i,rx_length_errors=0i,rx_queue_7_bytes=35744457i,rx_queue_4_alloc_failed=0i,rx_queue_6_bytes=139488395i,rx_queue_2_csum_err=0i,rx_long_byte_count=2480288216i,rx_queue_1_alloc_failed=0i,tx_queue_0_restart=0i,rx_queue_0_csum_err=0i,tx_queue_2_bytes=22132949i,rx_queue_5_drops=0i,tx_dma_out_of_sync=0i,rx_queue_3_drops=0i,rx_queue_4_packets=212177i,tx_queue_6_restart=0i,rx_packets=25926650i,rx_queue_7_packets=88680i,rx_frame_errors=0i,rx_queue_3_bytes=84326060i,rx_short_length_errors=0i,tx_queue_7_bytes=13777515i,rx_queue_3_alloc_failed=0i,tx_queue_6_packets=183236i,rx_queue_0_drops=0i,rx_multicast=83771i,rx_queue_2_packets=207136i,rx_queue_5_csum_err=0i,rx_queue_5_packets=130535i,rx_queue_7_alloc_failed=0i,tx_smbus=0i,tx_queue_3_packets=161081i,rx_queue_7_drops=0i,tx_queue_2_restart=0i,tx_multicast=7i,tx_fifo_errors=0i,tx_queue_3_restart=0i,rx_long_length_errors=0i,tx_queue_6_bytes=96288614i,tx_queue_1_packets=280786i,tx_tcp_seg_failed=0i,rx_align_errors=0i,tx_errors=0i,rx_crc_errors=0i,rx_queue_0_packets=24529673i,rx_flow_control_xoff=0i,tx_queue_0_bytes=29465801i,rx_over_errors=0i,rx_queue_4_drops=0i,os2bmc_rx_by_bmc=0i,rx_smbus=0i,dropped_smbus=0i,tx_hwtstamp_timeouts=0i,rx_errors=0i,tx_queue_4_packets=109102i,tx_carrier_errors=0i,tx_queue_4_bytes=64810835i,tx_queue_4_restart=0i,rx_queue_4_csum_err=0i,tx_queue_7_packets=66665i,tx_aborted_errors=0i,rx_missed_errors=0i,tx_bytes=427575843i,collisions=0i,rx_queue_1_bytes=246703840i,rx_queue_5_bytes=50732006i,rx_bytes=2480288216i,os2bmc_rx_by_host=0i,rx_queue_5_alloc_failed=0i,rx_queue_3_packets=112007i,tx_deferred_ok=0i,os2bmc_tx_by_host=0i,tx_heartbeat_errors=0i,rx_queue_0_bytes=1506877506i,tx_queue_7_restart=0i,tx_packets=1257057i,rx_queue_4_bytes=201364548i,interface_up=0i,rx_fifo_errors=0i,tx_queue_5_packets=173970i 1564658090000000000
```

View File

@ -2,15 +2,15 @@
This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.
### IoT Hub Setup
## IoT Hub Setup
The main focus for development of this plugin is Azure IoT hub:
1. Create an Azure IoT Hub by following any of the guides provided here: https://docs.microsoft.com/en-us/azure/iot-hub/
1. Create an Azure IoT Hub by following any of the guides provided here: [Azure IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
2. Create a device, for example a [simulated Raspberry Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
3. The connection string needed for the plugin is located under *Shared access policies*, both the *iothubowner* and *service* policies should work
### Configuration
## Configuration
```toml
[[inputs.eventhub_consumer]]
@ -98,7 +98,7 @@ The main focus for development of this plugin is Azure IoT hub:
data_format = "influx"
```
#### Environment Variables
### Environment Variables
[Full documentation of the available environment variables][envvar].

View File

@ -7,7 +7,7 @@ additional information can be found.
Telegraf minimum version: Telegraf x.x
Plugin minimum tested version: x.x
### Configuration
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.
@ -17,12 +17,12 @@ generate it using `telegraf --usage <plugin-name>`.
example_option = "example_value"
```
#### example_option
### example_option
A more in-depth description of an option can be provided here, but only do so
if the option cannot be fully described in the sample config.
### Metrics
## Metrics
Here you should add an optional description and links to where the user can
get more information about the measurements.
@ -39,7 +39,7 @@ mapped to the output.
- field1 (type, unit)
- field2 (float, percent)
+ measurement2
- measurement2
- tags:
- tag3
- fields:
@ -49,29 +49,30 @@ mapped to the output.
- field6 (float)
- field7 (boolean)
### Sample Queries
## Sample Queries
This section can contain some useful InfluxDB queries that can be used to get
started with the plugin or to generate dashboards. For each query listed,
describe at a high level what data is returned.
Get the max, mean, and min for the measurement in the last hour:
```
```sql
SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1=bar AND time > now() - 1h GROUP BY tag
```
### Troubleshooting
## Troubleshooting
This optional section can provide basic troubleshooting steps that a user can
perform.
### Example Output
## Example
This section shows example output in Line Protocol format. You can often use
`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
this information.
```
```shell
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455
```

View File

@ -5,7 +5,7 @@ their output in any one of the accepted [Input Data Formats](https://github.com/
This plugin can be used to poll for custom metrics from any source.
### Configuration:
## Configuration
```toml
[[inputs.exec]]
@ -32,15 +32,17 @@ This plugin can be used to poll for custom metrics from any source.
Glob patterns in the `command` option are matched on every run, so adding new
scripts that match the pattern will cause them to be picked up immediately.
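The per-run glob expansion can be illustrated as follows (a sketch using a throwaway directory):

```python
import glob
import os
import tempfile

scripts_dir = tempfile.mkdtemp()
open(os.path.join(scripts_dir, "first.sh"), "w").close()

# First gather interval: one script matches the pattern.
first_run = sorted(glob.glob(os.path.join(scripts_dir, "*.sh")))

# A script added later is picked up on the next interval,
# because the glob is re-expanded on every run.
open(os.path.join(scripts_dir, "second.sh"), "w").close()
second_run = sorted(glob.glob(os.path.join(scripts_dir, "*.sh")))
```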
### Example:
## Example
This script produces static values; since no timestamp is specified, the values are reported at the current time.
```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```
It can be paired with the following configuration and will be run at the `interval` of the agent.
```toml
[[inputs.exec]]
commands = ["sh /tmp/test.sh"]
@ -48,18 +50,19 @@ It can be paired with the following configuration and will be run at the `interv
data_format = "influx"
```
### Common Issues:
## Common Issues
#### My script works when I run it by hand, but not when Telegraf is running as a service.
### My script works when I run it by hand, but not when Telegraf is running as a service
This may be related to the Telegraf service running as a different user. The
official packages run Telegraf as the `telegraf` user and group on Linux
systems.
#### With a PowerShell on Windows, the output of the script appears to be truncated.
### With a PowerShell on Windows, the output of the script appears to be truncated
You may need to set a variable in your script to increase the number of columns
available for output:
```
```shell
$host.UI.RawUI.BufferSize = new-object System.Management.Automation.Host.Size(1024,50)
```

View File

@ -1,7 +1,7 @@
# Execd Input Plugin
The `execd` plugin runs an external program as a long-running daemon.
The programs must output metrics in any one of the accepted
[Input Data Formats][] on the process's STDOUT, and are expected to
stay running. If you'd instead like the process to collect metrics and then exit,
check out the [inputs.exec][] plugin.
@ -13,7 +13,7 @@ new line to the process's STDIN.
STDERR from the process will be relayed to Telegraf as errors in the logs.
### Configuration:
## Configuration
```toml
[[inputs.execd]]
@ -41,9 +41,9 @@ STDERR from the process will be relayed to Telegraf as errors in the logs.
data_format = "influx"
```
### Example
## Example
##### Daemon written in bash using STDIN signaling
### Daemon written in bash using STDIN signaling
```bash
#!/bin/bash
@ -62,7 +62,7 @@ done
signal = "STDIN"
```
##### Go daemon using SIGHUP
### Go daemon using SIGHUP
```go
package main
@ -96,7 +96,7 @@ func main() {
signal = "SIGHUP"
```
##### Ruby daemon running standalone
### Ruby daemon running standalone
```ruby
#!/usr/bin/env ruby

View File

@ -9,7 +9,7 @@ Acquiring the required permissions can be done using several methods:
- [Use sudo](#using-sudo) to run fail2ban-client.
- Run telegraf as root. (not recommended)
### Configuration
## Configuration
```toml
# Read metrics from fail2ban.
@ -18,7 +18,7 @@ Acquiring the required permissions can be done using several methods:
use_sudo = false
```
### Using sudo
## Using sudo
Make sure to set `use_sudo = true` in your configuration file.
@ -26,20 +26,21 @@ You will also need to update your sudoers file. It is recommended to modify a
file in the `/etc/sudoers.d` directory using `visudo`:
```bash
$ sudo visudo -f /etc/sudoers.d/telegraf
sudo visudo -f /etc/sudoers.d/telegraf
```
Add the following lines to the file. These commands allow the `telegraf` user
to call `fail2ban-client` without needing to provide a password and disable
logging of the call in the auth.log. Consult `man 8 visudo` and `man 5
sudoers` for details.
```
```text
Cmnd_Alias FAIL2BAN = /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *
telegraf ALL=(root) NOEXEC: NOPASSWD: FAIL2BAN
Defaults!FAIL2BAN !logfile, !syslog, !pam_session
```
### Metrics
## Metrics
- fail2ban
- tags:
@ -50,7 +51,7 @@ Defaults!FAIL2BAN !logfile, !syslog, !pam_session
### Example Output
```
```shell
# fail2ban-client status sshd
Status for the jail: sshd
|- Filter
@ -63,6 +64,6 @@ Status for the jail: sshd
`- Banned IP list: 192.168.0.1 192.168.0.2
```
```
```shell
fail2ban,jail=sshd failed=5i,banned=2i 1495868667000000000
```
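The mapping from the `fail2ban-client status` output above to the `failed` and `banned` fields can be sketched as follows (illustrative only, not the plugin's actual parser):

```python
def parse_jail_status(output):
    """Pull the current failed/banned counters out of
    'fail2ban-client status <jail>' output."""
    fields = {}
    for line in output.splitlines():
        line = line.strip("|`- \t")  # drop the tree-drawing characters
        if line.startswith("Currently failed:"):
            fields["failed"] = int(line.split(":", 1)[1])
        elif line.startswith("Currently banned:"):
            fields["banned"] = int(line.split(":", 1)[1])
    return fields

sample = """Status for the jail: sshd
|- Filter
|  |- Currently failed: 5
`- Actions
   |- Currently banned: 2"""
fields = parse_jail_status(sample)
```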

View File

@ -3,7 +3,7 @@
The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather values of hooked devices.
Those values could be true (1) or false (0) for switches, a percentage for dimmers, a temperature, etc.
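The value mapping described above can be sketched as follows (a hypothetical helper, assuming the controller API returns values as strings):

```python
def to_value(raw):
    """Map a device value to the float stored in the `value` field:
    switches report true/false (1/0), dimmers report a percentage."""
    if raw in ("true", "false"):
        return 1.0 if raw == "true" else 0.0
    return float(raw)
```

For example, a switch reporting `"true"` yields `value=1`, while a dimmer reporting `"99"` yields `value=99`.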
### Configuration:
## Configuration
```toml
# Read devices value(s) from a Fibaro controller
@ -20,7 +20,7 @@ Those values could be true (1) or false (0) for switches, percentage for dimmers
# timeout = "5s"
```
### Metrics:
## Metrics
- fibaro
- tags:
@ -36,10 +36,9 @@ Those values could be true (1) or false (0) for switches, percentage for dimmers
- value (float)
- value2 (float, when available from device)
## Example Output
### Example Output:
```
```shell
fibaro,deviceId=9,host=vm1,name=Fenêtre\ haute,room=Cuisine,section=Cuisine,type=com.fibaro.FGRM222 energy=2.04,power=0.7,value=99,value2=99 1529996807000000000
fibaro,deviceId=10,host=vm1,name=Escaliers,room=Dégagement,section=Pièces\ communes,type=com.fibaro.binarySwitch value=0 1529996807000000000
fibaro,deviceId=13,host=vm1,name=Porte\ fenêtre,room=Salon,section=Pièces\ communes,type=com.fibaro.FGRM222 energy=4.33,power=0.7,value=99,value2=99 1529996807000000000

View File

@ -6,7 +6,7 @@ the selected [input data format][].
**Note:** If you wish to parse only newly appended lines, use the [tail][] input
plugin instead.
### Configuration:
## Configuration
```toml
[[inputs.file]]
@ -20,10 +20,10 @@ plugin instead.
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
## Name a tag containing the name of the file the data was parsed from. Leave empty
## to disable. Be cautious when file name variation is high; this can increase the cardinality
## significantly. Read more about cardinality here:
## to disable. Be cautious when file name variation is high; this can increase the cardinality
## significantly. Read more about cardinality here:
## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
# file_tag = ""
```

View File

@ -2,7 +2,7 @@
Reports the number and total size of files in specified directories.
### Configuration:
## Configuration
```toml
[[inputs.filecount]]
@ -42,7 +42,7 @@ Reports the number and total size of files in specified directories.
mtime = "0s"
```
### Metrics
## Metrics
- filecount
- tags:
@ -51,9 +51,9 @@ Reports the number and total size of files in specified directories.
- count (integer)
- size_bytes (integer)
### Example Output:
## Example Output
```
```shell
filecount,directory=/var/cache/apt count=7i,size_bytes=7438336i 1530034445000000000
filecount,directory=/tmp count=17i,size_bytes=28934786i 1530034445000000000
```
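For reference, the two fields correspond roughly to a recursive walk like this Python sketch (illustrative only; the plugin is written in Go and additionally applies the name, size, and mtime filters from the configuration):

```python
import os

def filecount(directory):
    """Count regular files under a directory and sum their sizes."""
    count = size_bytes = 0
    for root, _dirs, files in os.walk(directory):
        for name in files:
            try:
                size_bytes += os.path.getsize(os.path.join(root, name))
                count += 1
            except OSError:
                continue  # file disappeared between listing and stat
    return {"count": count, "size_bytes": size_bytes}
```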

View File

@ -2,7 +2,7 @@
The filestat plugin gathers metrics about file existence, size, and other stats.
### Configuration:
## Configuration
```toml
# Read stats about given file(s)
@ -16,22 +16,22 @@ The filestat plugin gathers metrics about file existence, size, and other stats.
md5 = false
```
### Measurements & Fields:
## Measurements & Fields
- filestat
- exists (int, 0 | 1)
- size_bytes (int, bytes)
- modification_time (int, unix time nanoseconds)
- md5 (optional, string)
- exists (int, 0 | 1)
- size_bytes (int, bytes)
- modification_time (int, unix time nanoseconds)
- md5 (optional, string)
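These fields map directly onto standard stat and hash calls; a hedged Python sketch of gathering them for one file (illustrative only; the plugin is implemented in Go):

```python
import hashlib
import os

def filestat(path, with_md5=False):
    """Gather the filestat fields for a single file."""
    if not os.path.isfile(path):
        return {"exists": 0}
    fields = {
        "exists": 1,
        "size_bytes": os.path.getsize(path),
        "modification_time": int(os.path.getmtime(path) * 1e9),  # unix ns
    }
    if with_md5:
        with open(path, "rb") as f:
            fields["md5"] = hashlib.md5(f.read()).hexdigest()
    return fields
```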
### Tags:
## Tags
- All measurements have the following tags:
- file (the path to the file, as specified in the config)
- file (the path to the file, as specified in the config)
### Example Output:
### Example
```
```shell
$ telegraf --config /etc/telegraf/telegraf.conf --input-filter filestat --test
* Plugin: filestat, Collection 1
> filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1507218518192154351

View File

@ -4,7 +4,7 @@ The fireboard plugin gathers the real time temperature data from fireboard
thermometers. In order to use this input plugin, you'll need to sign up to use
the [Fireboard REST API](https://docs.fireboard.io/reference/restapi.html).
### Configuration
## Configuration
```toml
[[inputs.fireboard]]
@ -16,23 +16,23 @@ the [Fireboard REST API](https://docs.fireboard.io/reference/restapi.html).
# http_timeout = 4
```
#### auth_token
### auth_token
In lieu of requiring a username and password, this plugin requires an
authentication token that you can generate using the [Fireboard REST
API](https://docs.fireboard.io/reference/restapi.html#Authentication).
#### url
### url
While there should be no reason to override the URL, the option is available
in case Fireboard changes their site, etc.
#### http_timeout
### http_timeout
If you need to increase the HTTP timeout, you can do so here. You can set this
value in seconds. The default value is four (4) seconds.
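Putting the three options together, the plugin's call amounts to a token-authenticated GET; a Python sketch using the stdlib (the default URL below is an assumption based on the Fireboard API docs, and the `Token` header scheme follows their authentication documentation):

```python
import urllib.request

def fireboard_request(auth_token, url="https://fireboard.io/api/v1/devices.json"):
    """Build the authenticated request; pass it to urlopen() with a timeout."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Token " + auth_token)
    return req

# Example (not executed here):
# urllib.request.urlopen(fireboard_request("your-token"), timeout=4)
```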
### Metrics
## Metrics
The Fireboard REST API docs have good examples of the data that is available;
currently this input only returns the real-time temperatures. Temperature
@ -47,12 +47,12 @@ values are included if they are less than a minute old.
- fields:
- temperature (float, unit)
### Example Output
## Example
This section shows example output in Line Protocol format. You can often use
`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
this information.
```
```shell
fireboard,channel=2,host=patas-mbp,scale=Farenheit,title=telegraf-FireBoard,uuid=b55e766c-b308-49b5-93a4-df89fe31efd0 temperature=78.2 1561690040000000000
```

View File

@ -7,7 +7,8 @@ You might need to adjust your fluentd configuration, in order to reduce series c
According to [fluentd documentation](https://docs.fluentd.org/configuration/config-file#common-plugin-parameter), you are able to add `@id` parameter for each plugin to avoid this behaviour and define custom `plugin_id`.
Example configuration with the `@id` parameter for the http plugin:
```
```text
<source>
@type http
@id http
@ -15,7 +16,7 @@ example configuration with `@id` parameter for http plugin:
</source>
```
### Configuration:
## Configuration
```toml
# Read metrics exposed by fluentd in_monitor plugin
@ -29,30 +30,30 @@ example configuration with `@id` parameter for http plugin:
## Define which plugins have to be excluded (based on "type" field - e.g. monitor_agent)
exclude = [
"monitor_agent",
"dummy",
"monitor_agent",
"dummy",
]
```
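The exclusion is applied per plugin `type` against the JSON payload served by fluentd's monitor agent. An illustrative Python filter over an assumed minimal payload (field and key names follow the metrics documented in this README; the plugin itself is Go):

```python
import json

# Minimal assumed shape of the monitor-agent JSON payload.
PAYLOAD = json.loads("""{"plugins": [
  {"plugin_id": "object:9f748c", "type": "dummy", "plugin_category": "input",
   "retry_count": 0, "buffer_queue_length": 0, "buffer_total_queued_size": 0},
  {"plugin_id": "object:aa11bb", "type": "s3", "plugin_category": "output",
   "retry_count": 1, "buffer_queue_length": 2, "buffer_total_queued_size": 4096}
]}""")

def gather(payload, exclude=("monitor_agent", "dummy")):
    """Drop excluded plugin types and keep the numeric fields."""
    fields = ("retry_count", "buffer_queue_length", "buffer_total_queued_size")
    return [{k: p[k] for k in fields}
            for p in payload["plugins"] if p.get("type") not in exclude]

print(gather(PAYLOAD))
```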
### Measurements & Fields:
## Measurements & Fields
Fields may vary depending on the plugin type
- fluentd
- retry_count (float, unit)
- buffer_queue_length (float, unit)
- buffer_total_queued_size (float, unit)
- retry_count (float, unit)
- buffer_queue_length (float, unit)
- buffer_total_queued_size (float, unit)
### Tags:
## Tags
- All measurements have the following tags:
- plugin_id (unique plugin id)
- plugin_type (type of the plugin e.g. s3)
- plugin_id (unique plugin id)
- plugin_type (type of the plugin e.g. s3)
- plugin_category (plugin category e.g. output)
### Example Output:
## Example Output
```
```shell
$ telegraf --config fluentd.conf --input-filter fluentd --test
* Plugin: inputs.fluentd, Collection 1
> fluentd,host=T440s,plugin_id=object:9f748c,plugin_category=input,plugin_type=dummy buffer_total_queued_size=0,buffer_queue_length=0,retry_count=0 1492006105000000000