docs(inputs): Add plugin metadata and update description for a* to f* (#16097)

Sven Rebhan 2024-10-31 22:15:21 +01:00 committed by GitHub
parent 42fe362af2
commit dcec9d1cea
60 changed files with 794 additions and 653 deletions


@ -20,7 +20,8 @@ To generate a file with specific inputs and outputs, you can use the
telegraf config --input-filter cpu:mem:net:swap --output-filter influxdb:kafka
```
[View the full list][flags] of Telegraf commands and flags or by running `telegraf --help`.
[View the full list][flags] of Telegraf commands and flags or by running
`telegraf --help`.
### Windows PowerShell v5 Encoding
@ -64,14 +65,18 @@ If using an environment variable with a single backslash, then enclose the
variable in single quotes which signifies a string literal (e.g.
`'C:\Program Files'`).
In addition to this, Telegraf also supports Shell parameter expansion for environment variables
which allows syntax such as:
In addition to this, Telegraf also supports Shell parameter expansion for
environment variables which allows syntax such as:
- `${VARIABLE:-default}` evaluates to default if VARIABLE is unset or empty in the environment.
- `${VARIABLE-default}` evaluates to default only if VARIABLE is unset in the environment.
Similarly, the following syntax allows you to specify mandatory variables:
- `${VARIABLE:?err}` exits with an error message containing err if VARIABLE is unset or empty in the environment.
- `${VARIABLE?err}` exits with an error message containing err if VARIABLE is unset in the environment.
- `${VARIABLE:-default}` evaluates to default if VARIABLE is unset or empty in
the environment.
- `${VARIABLE-default}` evaluates to default only if VARIABLE is unset in the
environment.
Similarly, the following syntax allows you to specify mandatory variables:
- `${VARIABLE:?err}` exits with an error message containing err if VARIABLE is
unset or empty in the environment.
- `${VARIABLE?err}` exits with an error message containing err if VARIABLE is
unset in the environment.
When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
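As an illustration of the expansion syntax above, a minimal sketch of an output
section could rely on it like this; the `INFLUX_URL` and `INFLUX_TOKEN`
variable names are invented for the example:

```toml
[[outputs.influxdb_v2]]
  ## Falls back to a local instance if INFLUX_URL is unset or empty.
  urls = ["${INFLUX_URL:-http://127.0.0.1:8086}"]
  ## Aborts startup with the given message if INFLUX_TOKEN is not set.
  token = "${INFLUX_TOKEN:?INFLUX_TOKEN must be set}"
```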


@ -1,7 +1,13 @@
# ActiveMQ Input Plugin
This plugin gather queues, topics & subscribers metrics using ActiveMQ Console
API.
This plugin gathers queue, topic and subscriber metrics using the Console API
of the [ActiveMQ][activemq] message broker daemon.
⭐ Telegraf v1.8.0
🏷️ messaging
💻 all
[activemq]: https://activemq.apache.org/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,19 +1,32 @@
# Aerospike Input Plugin
**DEPRECATED: As of version 1.30 the Aerospike plugin has been deprecated in
favor of the [prometheus plugin](../prometheus/README.md) with the
Aerospike Prometheus Exporter**
This plugin queries [Aerospike][aerospike] server(s) for node statistics and
statistics on all configured namespaces.
The aerospike plugin queries aerospike server(s) and get node statistics & stats
for all the configured namespaces.
> [!CAUTION]
> As of version 1.30 the Aerospike plugin has been deprecated in favor of the
> [prometheus plugin](/plugins/inputs/prometheus/README.md) and the officially
> supported [Aerospike Prometheus Exporter][prometheus_exporter]
For what the measurements mean, please consult the [Aerospike Metrics Reference
Docs](http://www.aerospike.com/docs/reference/metrics).
For details on what the measurements mean, please consult the
[Aerospike Metrics Reference Docs][ref_manual].
The metric names, to make it less complicated in querying, have replaced all `-`
with `_` as Aerospike metrics come in both forms (no idea why).
> [!NOTE]
> Metric names will have dashes (`-`) replaced with underscores (`_`) to make
> querying more consistent and easier.
All metrics are attempted to be cast to integers, then booleans, then strings.
All metrics are attempted to be cast to integers, then booleans, then strings
in order.
⭐ Telegraf v0.2.0
🚩 Telegraf v1.30.0
🔥 Telegraf v1.40.0
🏷️ server
💻 all
[aerospike]: https://www.aerospike.com
[prometheus_exporter]: https://aerospike.com/docs/monitorstack/configure/configure-exporter
[ref_manual]: https://www.aerospike.com/docs/reference/metrics
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,9 +1,15 @@
# Alibaba (Aliyun) CloudMonitor Service Statistics Input Plugin
# Alibaba Cloud Monitor Service (Aliyun) Input Plugin
Here and after we use `Aliyun` instead `Alibaba` as it is default naming
across web console and docs.
This plugin gathers statistics from the
[Alibaba / Aliyun cloud monitoring service][alibaba]. In the following we will
use `Aliyun` instead of `Alibaba` as it's the default naming across the web
console and docs.
This plugin will pull metric statistics from Aliyun CMS.
⭐ Telegraf v1.19.0
🏷️ cloud
💻 all
[alibaba]: https://www.alibabacloud.com
## Aliyun Authentication


@ -1,9 +1,18 @@
# AMD ROCm System Management Interface (SMI) Input Plugin
This plugin uses a query on the [`rocm-smi`][1] binary to pull GPU stats
including memory and GPU usage, temperatures and other.
This plugin gathers statistics including memory and GPU usage, temperatures
and more from [AMD ROCm platform][amd_rocm] GPUs.
[1]: https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools
> [!IMPORTANT]
> The [`rocm-smi` binary][binary] is required and needs to be installed on the
> system.
⭐ Telegraf v1.20.0
🏷️ hardware, system
💻 all
[amd_rocm]: https://rocm.docs.amd.com/
[binary]: https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,18 +1,22 @@
# AMQP Consumer Input Plugin
This plugin provides a consumer for use with AMQP 0-9-1, a prominent
implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
This plugin consumes messages from an Advanced Message Queuing Protocol v0.9.1
broker. A prominent implementation of this protocol is [RabbitMQ][rabbitmq].
Metrics are read from a topic exchange using the configured queue and
binding_key.
Message payload should be formatted in one of the
[Telegraf Data Formats](../../../docs/DATA_FORMATS_INPUT.md).
Metrics are read from a topic exchange using the configured queue and binding
key. The message payloads must be formatted in one of the supported
[data formats][data_formats].
For an introduction check the [AMQP concepts page][amqp_concepts] and the
[RabbitMQ getting started guide][rabbitmq_getting_started].
⭐ Telegraf v1.3.0
🏷️ messaging
💻 all
[amqp_concepts]: https://www.rabbitmq.com/tutorials/amqp-concepts.html
[data_formats]: /docs/DATA_FORMATS_INPUT.md
[rabbitmq]: https://www.rabbitmq.com
[rabbitmq_getting_started]: https://www.rabbitmq.com/getstarted.html
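A minimal configuration sketch for reading from a RabbitMQ topic exchange could
look as follows; the broker URL and names are placeholders, and the option
names should be checked against the sample configuration further below:

```toml
[[inputs.amqp_consumer]]
  ## Broker to consume from (placeholder URL).
  brokers = ["amqp://localhost:5672/influxdb"]
  ## Topic exchange, queue and binding key to read metrics from.
  exchange = "telegraf"
  queue = "telegraf"
  binding_key = "#"
  ## Payloads must use one of the supported data formats.
  data_format = "influx"
```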
## Service Input <!-- @/docs/includes/service_input.md -->


@ -1,15 +1,20 @@
# Apache Input Plugin
The Apache plugin collects server performance information using the
[`mod_status`](https://httpd.apache.org/docs/2.4/mod/mod_status.html) module of
the [Apache HTTP Server](https://httpd.apache.org/).
This plugin collects performance information from [Apache HTTP Servers][apache]
using the [`mod_status` module][mod_status_module]. Typically, this module is
configured to expose a page at the `/server-status?auto` endpoint of the
server.
Typically, the `mod_status` module is configured to expose a page at the
`/server-status?auto` location of the Apache server. The
[ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus)
option must be enabled in order to collect all available fields. For
information about how to configure your server reference the [module
documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).
The [ExtendedStatus option][extended_status] must be enabled in order to collect
all available fields. For information about configuring your server check
the [module documentation][mod_status_module].
⭐ Telegraf v1.8.0
🏷️ server, web
💻 all
[apache]: https://httpd.apache.org
[extended_status]: https://httpd.apache.org/docs/current/mod/core.html#extendedstatus
[mod_status_module]: https://httpd.apache.org/docs/current/mod/mod_status.html
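As a sketch of how the endpoint above is typically consumed, the plugin can be
pointed at the status page; the host name is a placeholder:

```toml
[[inputs.apache]]
  ## Status page exposed by the mod_status module.
  urls = ["http://localhost/server-status?auto"]
```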
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,14 @@
# APCUPSD Input Plugin
# APC UPSD Input Plugin
This plugin reads data from an apcupsd daemon over its NIS network protocol.
This plugin gathers data from one or more [apcupsd daemons][apcupsd_daemon] over
the NIS network protocol. To query a server, the daemon must be running and be
accessible.
## Requirements
⭐ Telegraf v1.12.0
🏷️ hardware, server
💻 all
apcupsd should be installed and it's daemon should be running.
[apcupsd_daemon]: https://sourceforge.net/projects/apcupsd/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,15 @@
# Aurora Input Plugin
# Apache Aurora Input Plugin
The Aurora Input Plugin gathers metrics from [Apache
Aurora](https://aurora.apache.org/) schedulers.
This plugin gathers metrics from [Apache Aurora][aurora] schedulers. For
monitoring recommendations check the [Monitoring your Aurora cluster][monitoring]
article.
For monitoring recommendations reference [Monitoring your Aurora
cluster](https://aurora.apache.org/documentation/latest/operations/monitoring/)
⭐ Telegraf v1.7.0
🏷️ applications, server
💻 all
[aurora]: https://aurora.apache.org
[monitoring]: https://aurora.apache.org/documentation/latest/operations/monitoring
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,48 +1,44 @@
# Azure Monitor Input Plugin
The `azure_monitor` plugin, gathers metrics of each Azure
resource using Azure Monitor API. Uses **Logz.io
azure-monitor-metrics-receiver** package -
an SDK wrapper for Azure Monitor SDK.
This plugin gathers metrics of Azure resources using the
[Azure Monitor][azure_monitor] API. The plugin requires a `client_id`,
`client_secret` and `tenant_id` for authentication via access token. The
`subscription_id` is required for accessing Azure resources.
## Azure Credential
Check the [supported metrics page][supported_metrics] for available resource
types and their metrics.
This plugin uses `client_id`, `client_secret` and `tenant_id`
for authentication (access token), and `subscription_id`
is for accessing Azure resources.
> [!IMPORTANT]
> The Azure API has a read limit of 12,000 requests per hour. Please make sure
> you don't exceed this limit with the total number of metrics you are
> requesting in the configured interval.
⭐ Telegraf v1.25.0
🏷️ cloud
💻 all
[azure_monitor]: https://docs.microsoft.com/en-us/azure/azure-monitor
[supported_metrics]: https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported
## Property Locations
`subscription_id` can be found under **Overview**->**Essentials** in
the Azure portal for your application/service.
The `subscription_id` can be found under `Overview > Essentials` in the Azure
portal for your application or service.
`client_id` and `client_secret` can be obtained by registering an
The `client_id` and `client_secret` can be obtained by registering an
application under Azure Active Directory.
`tenant_id` can be found under **Azure Active Directory**->**Properties**.
The `tenant_id` can be found under `Azure Active Directory > Properties`.
resource target `resource_id` can be found under
**Overview**->**Essentials**->**JSON View** (link) in the Azure
portal for your application/service.
The resource target `resource_id` can be found under
`Overview > Essentials > JSON View` in the Azure portal for your
application or service.
`cloud_option` defines the optional value for the API endpoints in case you
The `cloud_option` defines the optional value for the API endpoints in case you
are using the solution to get the metrics from the Azure Sovereign Cloud
shipment e.g. AzureChina, AzureGovernment or AzurePublic.
The default value is AzurePublic
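Putting the properties above together, a hedged configuration sketch could look
like the following; all IDs are placeholders and resource selection via
resource targets is omitted for brevity:

```toml
[[inputs.azure_monitor]]
  ## Authentication and scope properties described above (placeholders).
  subscription_id = "<<SUBSCRIPTION_ID>>"
  client_id = "<<CLIENT_ID>>"
  client_secret = "<<CLIENT_SECRET>>"
  tenant_id = "<<TENANT_ID>>"
  ## Optional endpoint selection for sovereign clouds, e.g. "AzureChina",
  ## "AzureGovernment" or "AzurePublic" (the default).
  # cloud_option = "AzurePublic"
```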
## More Information
To see a table of resource types and their metrics, please use this link:
`https://docs.microsoft.com/en-us/azure/azure-monitor/
essentials/metrics-supported`
## Rate Limits
Azure API read limit is 12000 requests per hour.
Please make sure the total number of metrics you are requesting is proportional
to your time interval.
## Usage
Use `resource_targets` to collect metrics from specific resources using


@ -1,6 +1,13 @@
# Azure Storage Queue Input Plugin
# Azure Queue Storage Input Plugin
This plugin gathers sizes of Azure Storage Queues.
This plugin gathers queue sizes from the [Azure Queue Storage][azure_queues]
service, which stores large numbers of messages.
⭐ Telegraf v1.13.0
🏷️ cloud
💻 all
[azure_queues]: https://learn.microsoft.com/en-us/azure/storage/queues
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,55 +1,13 @@
# bcache Input Plugin
# Bcache Input Plugin
Get bcache stat from stats_total directory and dirty_data file.
This plugin gathers statistics for the [block layer cache][bcache]
from the `stats_total` directory and `dirty_data` file.
## Metrics
⭐ Telegraf v0.2.0
🏷️ system
💻 linux
Meta:
- tags: `backing_dev=dev bcache_dev=dev`
Measurement names:
- dirty_data
- bypassed
- cache_bypass_hits
- cache_bypass_misses
- cache_hit_ratio
- cache_hits
- cache_miss_collisions
- cache_misses
- cache_readaheads
## Description
```text
dirty_data
Amount of dirty data for this backing device in the cache. Continuously
updated unlike the cache set's version, but may be slightly off.
bypassed
Amount of IO (both reads and writes) that has bypassed the cache
cache_bypass_hits
cache_bypass_misses
Hits and misses for IO that is intended to skip the cache are still counted,
but broken out here.
cache_hits
cache_misses
cache_hit_ratio
Hits and misses are counted per individual IO as bcache sees them; a
partial hit is counted as a miss.
cache_miss_collisions
Counts instances where data was going to be inserted into the cache from a
cache miss, but raced with a write and data was already present (usually 0
since the synchronization for cache misses was rewritten)
cache_readaheads
Count of times readahead occurred.
```
[bcache]: https://docs.kernel.org/admin-guide/bcache.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -76,6 +34,30 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
bcacheDevs = ["bcache0"]
```
## Metrics
Tags:
- `backing_dev` device backed by the cache
- `bcache_dev` device used for caching
Fields:
- `dirty_data`: Amount of dirty data for this backing device in the cache.
Continuously updated unlike the cache set's version, but may be slightly off
- `bypassed`: Amount of IO (both reads and writes) that has bypassed the cache
- `cache_bypass_hits`: Hits for IO that is intended to skip the cache
- `cache_bypass_misses`: Misses for IO that is intended to skip the cache
- `cache_hits`: Hits per individual IO as seen by bcache; a partial hit is
counted as a miss.
- `cache_misses`: Misses per individual IO as seen by bcache; a partial hit is
counted as a miss.
- `cache_hit_ratio`: Hit to miss ratio
- `cache_miss_collisions`: Instances where data was going to be inserted into
cache from a miss, but raced with a write and data was already present
(usually zero since the synchronization for cache misses was rewritten)
- `cache_readaheads`: Count of times readahead occurred.
## Example Output
```text


@ -1,7 +1,14 @@
# Beanstalkd Input Plugin
The `beanstalkd` plugin collects server stats as well as tube stats (reported by
`stats` and `stats-tube` commands respectively).
This plugin collects server statistics as well as tube statistics from a
[Beanstalkd work queue][beanstalkd] as reported by the `stats` and `stats-tube`
server commands.
⭐ Telegraf v1.8.0
🏷️ messaging
💻 all
[beanstalkd]: https://beanstalkd.github.io/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -27,9 +34,10 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Metrics
Please see the [Beanstalk Protocol
doc](https://raw.githubusercontent.com/kr/beanstalkd/master/doc/protocol.txt)
for detailed explanation of `stats` and `stats-tube` commands output.
Please see the [Beanstalk protocol doc][protocol] for a detailed explanation of
`stats` and `stats-tube` server commands output.
[protocol]: https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt
`beanstalkd_overview` statistical information about the system as a whole


@ -1,7 +1,13 @@
# Beat Input Plugin
The Beat plugin will collect metrics from the given Beat instances. It is
known to work with Filebeat and Kafkabeat.
This plugin will collect metrics from [Beats][beats] instances. It is known
to work with Filebeat and Kafkabeat.
⭐ Telegraf v1.18.0
🏷️ applications
💻 all
[beats]: https://www.elastic.co/beats
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,20 +1,26 @@
# BIND 9 Nameserver Statistics Input Plugin
# BIND 9 Nameserver Input Plugin
This plugin decodes the JSON or XML statistics provided by BIND 9 nameservers.
This plugin collects metrics from [BIND 9 nameservers][bind] using the XML or
JSON endpoint.
## XML Statistics Channel
For _XML_, version 2 statistics (BIND 9.6 to 9.9) and version 3 statistics
(BIND 9.9+) are supported. Version 3 statistics are the default and only XML
format in BIND 9.10+.
Version 2 statistics (BIND 9.6 - 9.9) and version 3 statistics (BIND 9.9+) are
supported. Note that for BIND 9.9 to support version 3 statistics, it must be
built with the `--enable-newstats` compile flag, and it must be specifically
requested via the correct URL. Version 3 statistics are the default (and only)
XML format in BIND 9.10+.
> [!NOTE]
> For BIND 9.9 to support version 3 statistics, it must be built with the
> `--enable-newstats` compile flag, and the statistics must be specifically
> requested via the correct URL.
## JSON Statistics Channel
JSON statistics schema version 1 (BIND 9.10+) is supported. As of writing, some
For _JSON_, version 1 statistics (BIND 9.10+) are supported. As of writing, some
distros still do not enable support for JSON statistics in their BIND packages.
⭐ Telegraf v1.11.0
🏷️ server
💻 all
[bind]: https://www.isc.org/bind
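A hedged configuration sketch selecting one of the two statistics channels
could look like this; the port and paths are the commonly used defaults and may
differ on your server:

```toml
[[inputs.bind]]
  ## XML v3 statistics channel (default XML format in BIND 9.10+).
  urls = ["http://localhost:8053/xml/v3"]
  ## Alternatively, the JSON v1 statistics channel (BIND 9.10+).
  # urls = ["http://localhost:8053/json/v1"]
```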
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,8 +1,11 @@
# Bond Input Plugin
The Bond input plugin collects network bond interface status for both the
network bond interface as well as slave interfaces.
The plugin collects these metrics from `/proc/net/bonding/*` files.
This plugin collects metrics for both the network bond interface as well as its
slave interfaces using `/proc/net/bonding/*` files.
⭐ Telegraf v1.5.0
🏷️ system
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -39,50 +42,31 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Metrics
- bond
- active_slave (for active-backup mode)
- status
- tags:
- `bond`: name of the bond
- fields:
- `active_slave`: currently active slave interface for active-backup mode
- `status`: status of the interface (0: down , 1: up)
- bond_slave
- failures
- status
- count
- actor_churned (for LACP bonds)
- partner_churned (for LACP bonds)
- total_churned (for LACP bonds)
- tags:
- `bond`: name of the bond
- `interface`: name of the network interface
- fields:
- `failures`: amount of failures for bond's slave interface
- `status`: status of the interface (0: down , 1: up)
- `count`: number of slaves attached to bond
- `actor_churned (for LACP bonds)`: count for local end of LACP bond flapped
- `partner_churned (for LACP bonds)`: count for remote end of LACP bond flapped
- `total_churned (for LACP bonds)`: full count of all churn events
- bond_sys
- slave_count
- ad_port_count
## Description
- active_slave
- Currently active slave interface for active-backup mode.
- status
- Status of bond interface or bonds's slave interface (down = 0, up = 1).
- failures
- Amount of failures for bond's slave interface.
- count
- Number of slaves attached to bond
- actor_churned
- number of times local end of LACP bond flapped
- partner_churned
- number of times remote end of LACP bond flapped
- total_churned
- full count of all churn events
## Tags
- bond
- bond
- bond_slave
- bond
- interface
- bond_sys
- bond
- mode
- tags:
- `bond`: name of the bond
- `mode`: name of the bonding mode
- fields:
- `slave_count`: number of slaves
- `ad_port_count`: number of ports
## Example Output


@ -1,10 +1,15 @@
# Burrow Kafka Consumer Lag Checking Input Plugin
# Burrow Input Plugin
Collect Kafka topic, consumer and partition status via
[Burrow](https://github.com/linkedin/Burrow) HTTP
[API](https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint).
This plugin collects Kafka topic, consumer and partition status from
[Burrow - Kafka Consumer Lag Checking][burrow] via its [HTTP API][api].
Burrow v1.x versions are supported.
Supported Burrow version: `1.x`
⭐ Telegraf v1.7.0
🏷️ messaging
💻 all
[burrow]: https://github.com/linkedin/Burrow
[api]: https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,61 +1,16 @@
# Ceph Storage Input Plugin
Collects performance metrics from the MON and OSD nodes in a Ceph storage
cluster.
This plugin collects performance metrics from MON and OSD nodes in a
[Ceph storage cluster][ceph]. Support for Telegraf has been introduced in the
v13.x Mimic release where data is sent to a socket (see
[their documentation][docs]).
Ceph has introduced a Telegraf and Influx plugin in the 13.x Mimic release.
The Telegraf module sends to a Telegraf configured with a socket_listener.
[Learn more in their docs](https://docs.ceph.com/en/latest/mgr/telegraf/)
⭐ Telegraf v0.13.1
🏷️ system
💻 all
## Admin Socket Stats
This gatherer works by scanning the configured SocketDir for OSD, MON, MDS
and RGW socket files. When it finds a MON socket, it runs
```shell
ceph --admin-daemon $file perfcounters_dump
```
For OSDs it runs
```shell
ceph --admin-daemon $file perf dump
```
The resulting JSON is parsed and grouped into collections, based on
top-level key. Top-level keys are used as collection tags, and all
sub-keys are flattened. For example:
```json
{
"paxos": {
"refresh": 9363435,
"refresh_latency": {
"avgcount": 9363435,
"sum": 5378.794002000
}
}
}
```
Would be parsed into the following metrics, all of which would be tagged
with `collection=paxos`:
- refresh = 9363435
- refresh_latency.avgcount: 9363435
- refresh_latency.sum: 5378.794002000
## Cluster Stats
This gatherer works by invoking ceph commands against the cluster thus only
requires the ceph client, valid ceph configuration and an access key to
function (the ceph_config and ceph_user configuration variables work in
conjunction to specify these prerequisites). It may be run on any server you
wish which has access to the cluster. The currently supported commands are:
- ceph status
- ceph df
- ceph osd pool stats
[ceph]: https://ceph.com
[docs]: https://docs.ceph.com/en/latest/mgr/telegraf
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -114,6 +69,56 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
gather_cluster_stats = false
```
## Admin Socket Stats
This gatherer works by scanning the configured SocketDir for OSD, MON, MDS
and RGW socket files. When it finds a MON socket, it runs
```shell
ceph --admin-daemon $file perfcounters_dump
```
For OSDs it runs
```shell
ceph --admin-daemon $file perf dump
```
The resulting JSON is parsed and grouped into collections, based on
top-level key. Top-level keys are used as collection tags, and all
sub-keys are flattened. For example:
```json
{
"paxos": {
"refresh": 9363435,
"refresh_latency": {
"avgcount": 9363435,
"sum": 5378.794002000
}
}
}
```
Would be parsed into the following metrics, all of which would be tagged
with `collection=paxos`:
- refresh = 9363435
- refresh_latency.avgcount: 9363435
- refresh_latency.sum: 5378.794002000
## Cluster Stats
This gatherer works by invoking ceph commands against the cluster thus only
requires the ceph client, valid ceph configuration and an access key to
function (the ceph_config and ceph_user configuration variables work in
conjunction to specify these prerequisites). It may be run on any server you
wish which has access to the cluster. The currently supported commands are:
- ceph status
- ceph df
- ceph osd pool stats
## Metrics
### Admin Socket


@ -1,42 +1,43 @@
# CGroup Input Plugin
# Control Group Input Plugin
This input plugin will capture specific statistics per cgroup.
This plugin gathers statistics per [control group (cgroup)][cgroup].
Consider restricting paths to the set of cgroups you really
want to monitor if you have a large number of cgroups, to avoid
any cardinality issues.
> [!NOTE]
> Consider restricting paths to the set of cgroups you are interested in if you
> have a large number of cgroups, to avoid cardinality issues.
Following file formats are supported:
* Single value
The plugin supports the _single value format_ in the form
```text
VAL\n
```
* New line separated values
the _new line separated values format_ in the form
```text
VAL0\n
VAL1\n
```
* Space separated values
the _space separated values format_ in the form
```text
VAL0 VAL1 ...\n
```
* Space separated keys and value, separated by new line
and the _space separated keys and value, separated by new line format_ in the
form
```text
KEY0 ... VAL0\n
KEY1 ... VAL1\n
```
## Metrics
⭐ Telegraf v1.0.0
🏷️ system
💻 linux
All measurements have the `path` tag.
[cgroup]: https://docs.kernel.org/admin-guide/cgroup-v2.html
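For example, the formats above can be collected with a configuration along
these lines, restricting `paths` and `files` to the cgroups of interest; the
paths shown are examples only:

```toml
[[inputs.cgroup]]
  ## Example cgroup v1 CPU hierarchy; adjust to the cgroups you care about.
  paths = [
    "/sys/fs/cgroup/cpu",    # root cgroup
    "/sys/fs/cgroup/cpu/*",  # all container cgroups
  ]
  files = ["cpuacct.usage", "cpu.cfs_period_us", "cpu.cfs_quota_us"]
```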
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -67,22 +68,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
# files = ["memory.*usage*", "memory.limit_in_bytes"]
```
## Example Configurations
## Metrics
```toml
# [[inputs.cgroup]]
# paths = [
# "/sys/fs/cgroup/cpu", # root cgroup
# "/sys/fs/cgroup/cpu/*", # all container cgroups
# "/sys/fs/cgroup/cpu/*/*", # all children cgroups under each container cgroup
# ]
# files = ["cpuacct.usage", "cpu.cfs_period_us", "cpu.cfs_quota_us"]
# [[inputs.cgroup]]
# paths = [
# "/sys/fs/cgroup/unified/*", # root cgroup
# ]
# files = ["*"]
```
All measurements have the `path` tag.
## Example Output


@ -1,9 +1,14 @@
# chrony Input Plugin
This plugin queries metrics from a chrony NTP server. For details on the
meaning of the gathered fields please check the [chronyc manual][]
This plugin queries metrics from a [chrony NTP server][chrony]. For details on
the meaning of the gathered fields please check the [chronyc manual][manual].
[chronyc manual]: https://chrony-project.org/doc/4.4/chronyc.html
⭐ Telegraf v0.13.1
🏷️ system
💻 all
[chrony]: https://chrony-project.org
[manual]: https://chrony-project.org/doc/4.4/chronyc.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,17 +1,21 @@
# Cisco Model-Driven Telemetry (MDT) Input Plugin
Cisco model-driven telemetry (MDT) is an input plugin that consumes telemetry
data from Cisco IOS XR, IOS XE and NX-OS platforms. It supports TCP & GRPC
dialout transports. RPC-based transport can utilize TLS for authentication and
encryption. Telemetry data is expected to be GPB-KV (self-describing-gpb)
encoded.
This plugin consumes [Cisco model-driven telemetry (MDT)][cisco_mdt] data from
Cisco IOS XR, IOS XE and NX-OS platforms via TCP or GRPC. GRPC-based transport
can utilize TLS for authentication and encryption. Telemetry data is expected to
be GPB-KV (self-describing-gpb) encoded.
The GRPC dialout transport is supported on various IOS XR (64-bit) 6.1.x and
later, IOS XE 16.10 and later, as well as NX-OS 7.x and later platforms.
The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and
later, IOS XE 16.10 and later, as well as NX-OS 7.x and later platforms. The
TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and
later.
⭐ Telegraf v1.11.0
🏷️ applications
💻 all
[cisco_mdt]: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9300-series-switches/model-driven-telemetry-wp.html
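A minimal sketch of a listener for the gRPC dialout transport might look like
this; verify the option names against the sample configuration below:

```toml
[[inputs.cisco_telemetry_mdt]]
  ## Dialout transport expected from the devices: "tcp" or "grpc".
  transport = "grpc"
  ## Address and port to listen on for incoming telemetry sessions.
  service_address = ":57000"
```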
## Service Input <!-- @/docs/includes/service_input.md -->
This plugin is a service input. Normal plugins gather metrics determined by the


@ -1,11 +1,15 @@
# ClickHouse Input Plugin
This plugin gathers the statistic data from
[ClickHouse](https://github.com/ClickHouse/ClickHouse) server.
User's on Clickhouse Cloud will not see the Zookeeper metrics as they may not
This plugin gathers statistics from a [ClickHouse server][clickhouse].
Users on Clickhouse Cloud will not see the Zookeeper metrics as they may not
have permissions to query those tables.
⭐ Telegraf v1.14.0
🏷️ server
💻 all
[clickhouse]: https://github.com/ClickHouse/ClickHouse
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,7 +1,14 @@
# Google Cloud PubSub Input Plugin
The GCP PubSub plugin ingests metrics from [Google Cloud PubSub][pubsub]
and creates metrics using one of the supported [input data formats][].
This plugin consumes messages from the [Google Cloud PubSub][pubsub] service
and creates metrics using one of the supported [data formats][data_formats].
⭐ Telegraf v1.10.0
🏷️ cloud, messaging
💻 all
[pubsub]: https://cloud.google.com/pubsub
[data_formats]: /docs/DATA_FORMATS_INPUT.md
## Service Input <!-- @/docs/includes/service_input.md -->
@ -123,9 +130,7 @@ Each plugin agent can listen to one subscription at a time, so you will
need to run multiple instances of the plugin to pull messages from multiple
subscriptions/topics.
[pubsub]: https://cloud.google.com/pubsub
[pubsub create sub]: https://cloud.google.com/pubsub/docs/admin#create_a_pull_subscription
[input data formats]: /docs/DATA_FORMATS_INPUT.md
## Metrics


@ -1,19 +1,24 @@
# Google Cloud PubSub Push Input Plugin
The Google Cloud PubSub Push listener is a service input plugin that listens
for messages sent via an HTTP POST from [Google Cloud PubSub][pubsub].
The plugin expects messages in Google's Pub/Sub JSON Format ONLY. The intent
of the plugin is to allow Telegraf to serve as an endpoint of the
Google Pub/Sub 'Push' service. Google's PubSub service will **only** send
over HTTPS/TLS so this plugin must be behind a valid proxy or must be
configured to use TLS.
This plugin listens for messages sent via an HTTP POST from
[Google Cloud PubSub][pubsub] and expects messages in Google's Pub/Sub
_JSON format_. The plugin allows Telegraf to serve as an endpoint of the
Pub/Sub push service.
Enable TLS by specifying the file names of a service TLS certificate and key.
Google's PubSub service will __only__ send over HTTPS/TLS so this plugin must be
behind a valid proxy or must be configured to use TLS by setting the `tls_cert`
and `tls_key` accordingly.
Enable mutually authenticated TLS and authorize client connections by signing
certificate authority by including a list of allowed CA certificate file names
in `tls_allowed_cacerts`.
⭐ Telegraf v1.10.0
🏷️ cloud, messaging
💻 all
[pubsub]: https://cloud.google.com/pubsub
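To illustrate the TLS options mentioned above, a hedged listener sketch could
look like the following; `service_address` is assumed to be the listener option
and the certificate paths are placeholders:

```toml
[[inputs.cloud_pubsub_push]]
  ## Address and port for the HTTPS listener (assumed option name).
  service_address = ":8080"
  ## Serve TLS directly, as Pub/Sub push only delivers over HTTPS/TLS.
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Optionally require mutually authenticated TLS from clients.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
```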
## Service Input <!-- @/docs/includes/service_input.md -->
This plugin is a service input. Normal plugins gather metrics determined by the
@ -93,8 +98,6 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
This plugin assumes you have already created a PUSH subscription for a given
PubSub topic.
[pubsub]: https://cloud.google.com/pubsub
## Metrics
## Example Output


@ -1,6 +1,12 @@
# Amazon CloudWatch Statistics Input Plugin
This plugin will pull Metric Statistics from Amazon CloudWatch.
This plugin will gather metric statistics from [Amazon CloudWatch][cloudwatch].
⭐ Telegraf v0.12.1
🏷️ cloud
💻 all
[cloudwatch]: https://aws.amazon.com/cloudwatch
## Amazon Authentication


@ -1,11 +1,20 @@
# CloudWatch Metric Streams Input Plugin
# Amazon CloudWatch Metric Streams Input Plugin
The CloudWatch Metric Streams plugin is a service input plugin that listens
for metrics sent via HTTP and performs the required processing for
[Metric Streams from AWS](#troubleshooting-documentation).
This plugin listens for metrics sent via HTTP by
[Cloudwatch metric streams][metric_streams] implementing the required
[response specifications][response_specs].
For cost, see the Metric Streams example in
[CloudWatch pricing](#troubleshooting-documentation).
> [!IMPORTANT]
> Using this plugin can incur costs, see the _Metric Streams example_ in
> [CloudWatch pricing][pricing].
⭐ Telegraf v1.24.0
🏷️ cloud
💻 all
[pricing]: https://aws.amazon.com/cloudwatch/pricing
[metric_streams]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html
[response_specs]: https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html
## Service Input <!-- @/docs/includes/service_input.md -->
@ -64,6 +73,32 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
# tls_key = "/etc/telegraf/key.pem"
```
## Troubleshooting
The plugin has its own internal metrics for troubleshooting:
* Requests Received
* The number of requests received by the listener.
* Writes Served
* The number of writes served by the listener.
* Bad Requests
* The number of bad requests, separated by the error code as a tag.
* Request Time
* The duration of the request measured in ns.
* Age Max
* The maximum age of a metric in this interval. This is useful for offsetting
any lag or latency measurements in a metrics pipeline that measures based
on the timestamp.
* Age Min
* The minimum age of a metric in this interval.
Specific errors will be logged and an error will be returned to AWS.
For additional help check the [Firehose Troubleshooting][firehose_troubleshoot]
page.
[firehose_troubleshoot]: https://docs.aws.amazon.com/firehose/latest/dev/http_troubleshooting.html
## Metrics
Metrics sent by AWS are Base64 encoded blocks of JSON data.
@ -132,34 +167,3 @@ aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc
```text
aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 maximum=10.011666666666667,minimum=10.011666666666667,sum=10.011666666666667,samplecount=1 1651679580000
```
## Troubleshooting
The plugin has its own internal metrics for troubleshooting:
* Requests Received
* The number of requests received by the listener.
* Writes Served
* The number of writes served by the listener.
* Bad Requests
* The number of bad requests, separated by the error code as a tag.
* Request Time
* The duration of the request measured in ns.
* Age Max
* The maximum age of a metric in this interval. This is useful for offsetting
any lag or latency measurements in a metrics pipeline that measures based
on the timestamp.
* Age Min
* The minimum age of a metric in this interval.
Specific errors will be logged and an error will be returned to AWS.
### Troubleshooting Documentation
Additional troubleshooting for a Metric Stream can be found
in AWS's documentation:
* [CloudWatch Metric Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html)
* [AWS HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html)
* [Firehose Troubleshooting](https://docs.aws.amazon.com/firehose/latest/dev/http_troubleshooting.html)
* [CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/)


@ -1,40 +1,25 @@
# Conntrack Input Plugin
Collects stats from Netfilter's conntrack-tools.
# Netfilter Conntrack Input Plugin
This plugin collects metrics from [Netfilter's conntrack tools][conntrack].
There are two collection mechanisms for this plugin:
## /proc/net/stat/nf_conntrack
When a user specifies the `collect` config option with valid options, then the
plugin will loop through the files in `/proc/net/stat/nf_conntrack` to find
CPU specific values.
## Specific files and dirs
The second mechanism is for the user to specify a set of directories and files
to search through
At runtime, conntrack exposes many of those connection statistics within
`/proc/sys/net`. Depending on your kernel version, these files can be found in
either `/proc/sys/net/ipv4/netfilter` or `/proc/sys/net/netfilter` and will be
prefixed with either `ip` or `nf`. This plugin reads the files specified
in its configuration and publishes each one as a field, with the prefix
normalized to ip_.
conntrack exposes many of those connection statistics within `/proc/sys/net`.
Depending on your kernel version, these files can be found in either
`/proc/sys/net/ipv4/netfilter` or `/proc/sys/net/netfilter` and will be
prefixed with either `ip_` or `nf_`. This plugin reads the files specified
in its configuration and publishes each one as a field, with the prefix
normalized to `ip_`.
1. Extracting information from `/proc/net/stat/nf_conntrack` files if the
`collect` option is set accordingly for finding CPU specific values.
2. Using specific files and directories by specifying the `dirs` option. At
runtime, conntrack exposes many of those connection statistics within
`/proc/sys/net`. Depending on your kernel version, these files can be found
in either `/proc/sys/net/ipv4/netfilter` or `/proc/sys/net/netfilter` and
will be prefixed with either `ip` or `nf`.
In order to simplify configuration in a heterogeneous environment, a superset
of directory and filenames can be specified. Any locations that does nt exist
are ignored.
of directories and filenames can be specified. Any locations that do not exist
are ignored.
For more information on conntrack-tools, see the
[Netfilter Documentation](http://conntrack-tools.netfilter.org/).
⭐ Telegraf v1.0.0
🏷️ system
💻 linux
[conntrack]: https://conntrack-tools.netfilter.org/
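Reflecting the two mechanisms above, a hedged configuration sketch might
combine per-CPU statistics with explicit file locations; option names follow
the prose above but should be checked against the sample configuration:

```toml
[[inputs.conntrack]]
  ## Per-CPU statistics from /proc/net/stat/nf_conntrack.
  collect = ["all", "percpu"]
  ## Superset of possible locations; entries that do not exist are ignored.
  dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
  ## Counter files to read from those directories.
  files = ["ip_conntrack_count", "ip_conntrack_max"]
```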
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -94,9 +79,10 @@ With `collect = ["all"]`:
- `delete`: The number of entries which were removed
- `delete_list`: The number of entries which were put to dying list
- `insert`: The number of entries inserted into the list
- `insert_failed`: The number of insertion attempted but failed (same entry exists)
- `insert_failed`: The number of insertions attempted but failed (duplicate entry)
- `drop`: The number of packets dropped due to conntrack failure
- `early_drop`: The number of dropped entries to make room for new ones, if maxsize reached
- `early_drop`: The number of dropped entries to make room for new ones, if
`maxsize` is reached
- `icmp_error`: Subset of invalid. Packets that can't be tracked due to error
- `expect_new`: Entries added after an expectation was already present
- `expect_create`: Expectations added


@ -1,13 +1,17 @@
# Consul Input Plugin
# Hashicorp Consul Input Plugin
This plugin will collect statistics about all health checks registered in the
Consul. It uses [Consul API][1] to query the data. It will not report the
[telemetry][2] but Consul can report those stats already using StatsD protocol
if needed.
This plugin will collect statistics about all health checks registered in
[Consul][consul] using the [Consul API][api]. The plugin will not report any
[telemetry metrics][telemetry] but Consul can report those statistics using
the StatsD protocol if needed.
[1]: https://www.consul.io/docs/agent/http/health.html#health_state
⭐ Telegraf v1.0.0
🏷️ server
💻 all
[2]: https://www.consul.io/docs/agent/telemetry.html
[api]: https://www.consul.io/docs/agent/http/health.html#health_state
[telemetry]: https://www.consul.io/docs/agent/telemetry.html
[consul]: https://www.consul.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,13 @@
# Hashicorp Consul Agent Metrics Input Plugin
# Hashicorp Consul Agent Input Plugin
This plugin grabs metrics from a Consul agent. Telegraf may be present in every
node and connect to the agent locally. In this case should be something like
`http://127.0.0.1:8500`.
This plugin collects metrics from a [Consul agent][agent]. Telegraf may be
present on every node and connect to the agent locally. Tested on Consul v1.10.
> Tested on Consul 1.10.4 .
⭐ Telegraf v1.22.0
🏷️ server
💻 all
[agent]: https://developer.hashicorp.com/consul/commands/agent
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -41,9 +44,7 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Metrics
Consul collects various metrics. For every details, please have a look at Consul
following documentation:
- [https://www.consul.io/api/agent#view-metrics](https://www.consul.io/api/agent#view-metrics)
Consul collects various metrics. For full details, please have a look at
[Consul's documentation](https://www.consul.io/api/agent#view-metrics).
## Example Output


@ -1,8 +1,14 @@
# Couchbase Input Plugin
Couchbase is a distributed NoSQL database. This plugin gets metrics for each
Couchbase node, as well as detailed metrics for each bucket, for a given
couchbase server.
This plugin collects metrics from [Couchbase][couchbase], a distributed NoSQL
database. Metrics are collected for each node, as well as detailed metrics for
each bucket, for a given couchbase server.
⭐ Telegraf v0.12.0
🏷️ server
💻 all
[couchbase]: https://www.couchbase.com/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,14 @@
# CouchDB Input Plugin
# Apache CouchDB Input Plugin
The CouchDB plugin gathers metrics of CouchDB using [_stats] endpoint.
This plugin gathers metrics from [Apache CouchDB][couchdb] instances using the
[stats][stats] endpoint.
⭐ Telegraf v0.10.3
🏷️ server
💻 all
[couchdb]: https://couchdb.apache.org/
[stats]: http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -87,5 +95,3 @@ couchdb,server=http://couchdb22:5984/_node/_local/_stats couchdb_auth_cache_hits
```text
couchdb,server=http://couchdb16:5984/_stats couchdb_request_time_sum=96,httpd_status_codes_200_sum=37,httpd_status_codes_200_min=0,httpd_requests_mean=0.005,httpd_requests_min=0,couchdb_request_time_stddev=3.833,couchdb_request_time_min=1,httpd_request_methods_get_stddev=0.073,httpd_request_methods_get_min=0,httpd_status_codes_200_mean=0.005,httpd_status_codes_200_max=1,httpd_requests_sum=37,couchdb_request_time_current=96,httpd_request_methods_get_sum=37,httpd_request_methods_get_mean=0.005,httpd_request_methods_get_max=1,httpd_status_codes_200_stddev=0.073,couchdb_request_time_mean=2.595,couchdb_request_time_max=25,httpd_request_methods_get_current=37,httpd_status_codes_200_current=37,httpd_requests_current=37,httpd_requests_stddev=0.073,httpd_requests_max=1 1536707179000000000
```
[_stats]: http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats


@ -1,19 +1,10 @@
# CPU Input Plugin
The `cpu` plugin gather metrics on the system CPUs.
This plugin gathers metrics on the system's CPUs.
## macOS Support
The [gopsutil][1] library, which is used to collect CPU data, does not support
gathering CPU metrics without CGO on macOS. The user will see a "not
implemented" message in this case. Builds provided by InfluxData do not build
with CGO.
Users can use the builds provided by [Homebrew][2], which build with CGO, to
produce CPU metrics.
[1]: https://github.com/shirou/gopsutil/blob/master/cpu/cpu_darwin_nocgo.go
[2]: https://formulae.brew.sh/formula/telegraf
⭐ Telegraf v0.1.5
🏷️ system
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,13 @@
# Counter-Strike: Global Offensive (CSGO) Input Plugin
The `csgo` plugin gather metrics from Counter-Strike: Global Offensive servers.
This plugin gathers metrics from [Counter-Strike: Global Offensive][csgo]
servers.
⭐ Telegraf v1.18.0
🏷️ server
💻 all
[csgo]: https://www.counter-strike.net/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,12 +1,16 @@
# ctrlX Data Layer Input Plugin
# Bosch Rexroth ctrlX Data Layer Input Plugin
The `ctrlx_datalayer` plugin gathers data from the ctrlX Data Layer,
a communication middleware running on
[ctrlX CORE devices](https://ctrlx-core.com) from
[Bosch Rexroth](https://boschrexroth.com). The platform is used for
professional automation applications like industrial automation, building
automation, robotics, IoT Gateways or as classical PLC. For more
information, see [ctrlX AUTOMATION](https://ctrlx-automation.com).
This plugin gathers data from the [ctrlX Data Layer][ctrlx], a communication
middleware running on Bosch Rexroth's [ctrlX CORE devices][core_devs]. The
platform is used for professional automation applications like industrial
automation, building automation, robotics, IoT Gateways or as classical PLC.
⭐ Telegraf v1.27.0
🏷️ iot, messaging
💻 all
[ctrlx]: https://ctrlx-automation.com
[core_devs]: https://ctrlx-core.com
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,23 +1,19 @@
# DC/OS Input Plugin
# Mesosphere Distributed Cloud OS Input Plugin
This input plugin gathers metrics from a DC/OS cluster's [metrics
component](https://docs.mesosphere.com/1.10/metrics/).
This input plugin gathers metrics from a [Distributed Cloud OS][dcos] cluster's
[metrics component][metrics].
## Series Cardinality Warning
> [!WARNING]
> Depending on the workload of your DC/OS cluster, this plugin can quickly
> create a high number of series which, when unchecked, can cause high load on
> your database!
Depending on the work load of your DC/OS cluster, this plugin can quickly
create a high number of series which, when unchecked, can cause high load on
your database.
⭐ Telegraf v1.5.0
🏷️ containers
💻 all
- Use the
[measurement filtering](https://docs.influxdata.com/telegraf/latest/administration/configuration/#measurement-filtering)
options to exclude unneeded tags.
- Write to a database with an appropriate
[retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
- Consider using the
[Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
- Monitor your databases
[series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
[dcos]: https://dcos.io/
[metrics]: https://docs.mesosphere.com/1.10/metrics/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -126,6 +122,17 @@ the cluster. For more information on this technique reference
[2]: https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add
### Series Cardinality Mitigation
- Use [measurement filtering](/docs/CONFIGURATION.md#metric-filtering) to exclude
unnecessary tags (see the sketch after this list).
- Write to a database with an appropriate
[retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
- Consider using the
[Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
- Monitor your databases'
[series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
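As a hedged example of the filtering suggestion above, Telegraf's standard
metric-filtering modifiers can drop high-cardinality tags at the plugin level;
the cluster URL and tag names are illustrative only:

```toml
[[inputs.dcos]]
  ## Cluster URL (placeholder).
  cluster_url = "https://dcos-master-1"
  ## Drop tags that needlessly inflate series cardinality (example names).
  tagexclude = ["task_id", "executor_id"]
```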
## Metrics
Please consult the [Metrics Reference][3] for details about field


@ -1,17 +1,19 @@
# Directory Monitor Input Plugin
This plugin monitors a single directory (traversing sub-directories),
and takes in each file placed in the directory. The plugin will gather all
files in the directory at the configured interval, and parse the ones that
haven't been picked up yet.
This plugin monitors a single directory (traversing sub-directories), and
processes each file placed in the directory. The plugin will gather all files in
the directory at the configured interval, and parse the ones that haven't been
picked up yet.
This plugin is intended to read files that are moved or copied to the monitored
directory, and thus files should also not be used by another process or else
they may fail to be gathered. Please be advised that this plugin pulls files
directly after they've been in the directory for the length of the configurable
`directory_duration_threshold`, and thus files should not be written 'live' to
the monitored directory. If you absolutely must write files directly, they must
be guaranteed to finish writing before the `directory_duration_threshold`.
> [!NOTE]
> Files should not be used by another process or the plugin may fail.
> Furthermore, files should not be written _live_ to the monitored directory.
> If you absolutely must write files directly, they must be guaranteed to finish
> writing before `directory_duration_threshold`.
⭐ Telegraf v1.18.0
🏷️ system
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,17 @@
# Disk Input Plugin
The disk input plugin gathers metrics about disk usage.
This plugin gathers metrics about disk usage.
Note that `used_percent` is calculated by doing `used / (used + free)`, _not_
`used / total`, which is how the unix `df` command does it. See
[wikipedia - df](https://en.wikipedia.org/wiki/Df_(Unix)) for more details.
> [!NOTE]
> The `used_percent` field is calculated by `used / (used + free)` and _not_
> `used / total` as the unix `df` command does it. See [wikipedia - df][wiki_df]
> for more details.
⭐ Telegraf v0.1.1
🏷️ system
💻 all
[wiki_df]: https://en.wikipedia.org/wiki/Df_(Unix)
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,10 @@
# DiskIO Input Plugin
The diskio input plugin gathers metrics about disk traffic and timing.
This plugin gathers metrics about disk traffic and timing.
⭐ Telegraf v0.10.0
🏷️ system
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,8 +1,14 @@
# Disque Input Plugin
[Disque](https://github.com/antirez/disque) is an ongoing experiment to build a
This plugin gathers data from a [Disque][disque] instance, an experimental
distributed, in-memory, message broker.
⭐ Telegraf v0.10.0
🏷️ messaging
💻 all
[disque]: https://github.com/antirez/disque
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,13 +1,18 @@
# DMCache Input Plugin
# Device Mapper Cache Input Plugin
This plugin provides a native collection of dmsetup-based statistics for
dm-cache.
[dm-cache][dmcache].
This plugin requires sudo, that is why you should setup and be sure that the
telegraf is able to execute sudo without a password.
> [!NOTE]
> This plugin requires super-user permissions! Please make sure Telegraf is
> able to run `sudo /sbin/dmsetup status --target cache` without requiring a
> password.
`sudo /sbin/dmsetup status --target cache` is the full command that telegraf
will run for debugging purposes.
⭐ Telegraf v1.3.0
🏷️ system
💻 linux
[dmcache]: https://docs.kernel.org/admin-guide/device-mapper/cache.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,11 @@
# DNS Query Input Plugin
The DNS plugin gathers dns query times in milliseconds - like
[Dig](https://en.wikipedia.org/wiki/Dig_\(command\))
This plugin gathers information about DNS queries such as response time and
result codes.
⭐ Telegraf v1.4.0
🏷️ system, network
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,14 +1,17 @@
# Docker Input Plugin
The docker plugin uses the Docker Engine API to gather metrics on running
This plugin uses the [Docker Engine API][api] to gather metrics on running
docker containers.
The docker plugin uses the [Official Docker Client][1] to gather stats from the
[Engine API][2].
> [!NOTE]
> Please make sure Telegraf has sufficient permissions to access the configured
> endpoint!
[1]: https://github.com/moby/moby/tree/master/client
⭐ Telegraf v0.1.9
🏷️ containers
💻 all
[2]: https://docs.docker.com/engine/api/v1.24/
[api]: https://docs.docker.com/engine/api
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,16 +1,18 @@
# Docker Log Input Plugin
The docker log plugin uses the Docker Engine API to get logs on running
This plugin uses the [Docker Engine API][api] to gather logs from running
docker containers.
The docker plugin uses the [Official Docker Client][] to gather logs from the
[Engine API][].
> [!NOTE]
> This plugin works only for containers with the `local` or `json-file` or
> `journald` logging driver. Please make sure Telegraf has sufficient
> permissions to access the configured endpoint!
**Note:** This plugin works only for containers with the `local` or
`json-file` or `journald` logging driver.
⭐ Telegraf v1.12.0
🏷️ containers, logging
💻 all
[Official Docker Client]: https://github.com/moby/moby/tree/master/client
[Engine API]: https://docs.docker.com/engine/api/v1.24/
[api]: https://docs.docker.com/engine/api
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,15 @@
# Dovecot Input Plugin
The dovecot plugin uses the Dovecot [v2.1 stats protocol][stats old] to gather
metrics on configured domains.
This plugin uses the Dovecot [v2.1 stats protocol][stats] to gather
metrics on configured domains of [Dovecot][dovecot] servers. You should still
be able to use this protocol on newer versions of Dovecot.
When using Dovecot v2.3 you are still able to use this protocol by following
the [upgrading steps][upgrading].
⭐ Telegraf v0.10.3
🏷️ server
💻 all
[dovecot]: https://www.dovecot.org/
[stats]: https://doc.dovecot.org/configuration_manual/stats/old_statistics/#old-statistics
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -77,6 +82,3 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
```text
dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
```
[stats old]: http://wiki2.dovecot.org/Statistics/Old
[upgrading]: https://wiki2.dovecot.org/Upgrading/2.3#Statistics_Redesign


@ -1,59 +1,30 @@
# Data Plane Development Kit (DPDK) Input Plugin
The `dpdk` plugin collects metrics exposed by applications built with [Data
Plane Development Kit](https://www.dpdk.org/) which is an extensive set of open
This plugin collects metrics exposed by applications built with the
[Data Plane Development Kit][dpdk] which is an extensive set of open
source libraries designed for accelerating packet processing workloads.
DPDK provides APIs that enable exposing various statistics from the devices used
by DPDK applications and enable exposing KPI metrics directly from
applications. Device statistics include e.g. common statistics available across
NICs, like: received and sent packets, received and sent bytes etc. In addition
to this generic statistics, an extended statistics API is available that allows
providing more detailed, driver-specific metrics that are not available as
generic statistics.
> [!NOTE]
> Since DPDK will most likely run with root privileges, the telemetry socket
> exposed by DPDK will also require root access. Please adjust permissions
> accordingly!
[DPDK Release 20.05](https://doc.dpdk.org/guides/rel_notes/release_20_05.html)
introduced updated telemetry interface that enables DPDK libraries and
applications to provide their telemetry. This is referred to as `v2` version of
this socket-based telemetry interface. This release enabled e.g. reading
driver-specific extended stats (`/ethdev/xstats`) via this new interface.
Refer to the [Telemetry User Guide][user_guide] for details and examples on how
to use DPDK in your application.
[DPDK Release 20.11](https://doc.dpdk.org/guides/rel_notes/release_20_11.html)
introduced reading via `v2` interface common statistics (`/ethdev/stats`) in
addition to existing (`/ethdev/xstats`).
> [!IMPORTANT]
> This plugin uses the `v2` interface to read telemetry data from applications
> and requires DPDK version `v20.05` or higher. Some metrics might require later
> versions.
> The recommended version, especially in conjunction with the `in_memory`
> option is `DPDK 21.11.2` or higher.
[DPDK Release 21.11](https://doc.dpdk.org/guides/rel_notes/release_21_11.html)
introduced reading via `v2` interface additional ethernet device information
(`/ethdev/info`).
This version also adds support for exposing telemetry from multiple
`--in-memory` instances of DPDK via dedicated sockets.
The plugin supports reading from those sockets when the `in_memory`
option is set.
⭐ Telegraf v1.19.0
🏷️ applications, networking
💻 linux
Example usage of the `v2` telemetry interface can be found in the [Telemetry
User Guide](https://doc.dpdk.org/guides/howto/telemetry.html). A variety of
[DPDK Sample Applications](https://doc.dpdk.org/guides/sample_app_ug/index.html)
is also available for users to discover and test the capabilities of DPDK
libraries and to explore the exposed metrics.
> **DPDK Version Info:** This plugin uses this `v2` interface to read telemetry
> data from applications built with `DPDK version >= 20.05`. The default
> configuration includes reading common statistics from `/ethdev/stats`, which
> is available from `DPDK version >= 20.11`. When using
> `DPDK 20.05 <= version < DPDK 20.11` it is recommended to disable querying
> `/ethdev/stats` by setting the corresponding `exclude_commands` configuration
> option.
>
> **NOTE:** Since DPDK will most likely run with root privileges, the socket
> telemetry interface exposed by DPDK will also require root access. This means
> that either access permissions have to be adjusted for the socket telemetry
> interface to allow Telegraf to access it, or Telegraf should run with root
> privileges.
>
> **NOTE:** There are known issues with exposing telemetry from multiple
> `--in-memory` instances while using `DPDK 21.11.1`. The recommended version
> to use in conjunction with the `in_memory` plugin option is `DPDK 21.11.2`
> or higher.
[dpdk]: https://www.dpdk.org
[user_guide]: https://doc.dpdk.org/guides/howto/telemetry.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -135,7 +106,8 @@ for additional usage information.
This configuration allows getting metrics for all devices reported via the
`/ethdev/list` command (see the configuration sketch after the list):
* `/ethdev/info` - device information: name, MAC address, buffers size, etc. (since `DPDK 21.11`)
* `/ethdev/info` - device information: name, MAC address, buffer sizes, etc
  (since `DPDK 21.11`)
* `/ethdev/stats` - basic device statistics (since `DPDK 20.11`)
* `/ethdev/xstats` - extended device statistics
* `/ethdev/link_status` - up/down link status
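For orientation, a configuration sketch along these lines would query the
device commands listed above; the `socket_path`, `device_types` and
`additional_commands` option names and the default socket location are
assumptions here, so check the configuration section for the authoritative
reference:
```toml
[[inputs.dpdk]]
  ## Path to the DPDK telemetry socket (assumed default location).
  socket_path = "/var/run/dpdk/rte/dpdk_telemetry.v2"
  ## Query all devices reported via /ethdev/list.
  device_types = ["ethdev"]
  ## Extra, e.g. application-specific, telemetry commands to query.
  additional_commands = []
```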
@ -190,8 +162,8 @@ configuration, with higher timeout.
### Example: Getting application-specific metrics
This configuration allows reading custom metrics exposed by
applications. Example telemetry command obtained from [L3 Forwarding with Power
Management Sample Application][sample-app].
applications. Example telemetry command obtained from
[L3 Forwarding with Power Management Sample Application][sample].
```toml
[[inputs.dpdk]]
@ -215,7 +187,7 @@ Providing invalid commands will prevent the plugin from starting. Additional
commands allow duplicates, but they will be removed during execution, so each
command will be executed only once during each metric gathering interval.
[sample-app]: https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html
[sample]: https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html
### Example: Getting metrics from multiple DPDK instances on same host

View File

@ -1,18 +1,25 @@
# Amazon ECS Input Plugin
# Amazon Elastic Container Service Input Plugin
Amazon ECS (Fargate compatible) input plugin which uses the Amazon ECS metadata
and stats [v2][task-metadata-endpoint-v2] or [v3][task-metadata-endpoint-v3] API
endpoints to gather stats on running containers in a Task.
This plugin gathers statistics on running containers in a Task from the
[Amazon Elastic Container Service][ecs] using the [Amazon ECS metadata][metadata]
and the [v2][v2_endpoint] or [v3][v3_endpoint] statistics API endpoints.
The telegraf container must be run in the same Task as the workload it is
inspecting.
This is similar to (and reuses a few pieces of) the [Docker][docker-input] input
plugin, with some ECS-specific modifications for AWS metadata and stats formats.
> [!IMPORTANT]
> The telegraf container must be run in the same Task as the workload it is
> inspecting.
The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.
⭐ Telegraf v1.11.0
🏷️ cloud
💻 all
[ecs]: https://aws.amazon.com/ecs/
[metadata]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html
[v2_endpoint]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html
[v3_endpoint]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html
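As a rough sketch, a minimal configuration could look as follows; the option
names shown here (`endpoint_url`, `ecs_label_include`, `timeout`) are
assumptions and the commented values are placeholders, so consult the
configuration section for the actual reference:
```toml
[[inputs.ecs]]
  ## Leave the endpoint empty to auto-detect it from the ECS metadata
  ## environment variables inside the Task (assumed behaviour).
  # endpoint_url = ""
  ## Optional filtering by ECS labels (assumed option name).
  # ecs_label_include = ["com.amazonaws.ecs.*"]
  # timeout = "5s"
```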
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support
@ -246,7 +253,3 @@ ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ec
ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_serviced_recursive_async=0i,io_serviced_recursive_read=40i,io_serviced_recursive_sync=40i,io_serviced_recursive_write=0i,io_serviced_recursive_total=40i,io_service_bytes_recursive_read=3162112i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_sync=3162112i,io_service_bytes_recursive_total=3162112i 1542642001000000000
ecs_container_meta,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a limit_mem=0,type="CNI_PAUSE",container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",docker_name="ecs-nginx-2-internalecspause",limit_cpu=0,known_status="RESOURCES_PROVISIONED",image="amazon/amazon-ecs-pause:0.1.0",image_id="",desired_status="RESOURCES_PROVISIONED" 1542642001000000000
```
[docker-input]: /plugins/inputs/docker/README.md
[task-metadata-endpoint-v2]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html
[task-metadata-endpoint-v3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html

View File

@ -1,31 +1,26 @@
# Elasticsearch Input Plugin
The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
[Node Stats][1] and optionally [Cluster-Health][2] metrics.
This plugin queries endpoints of an [Elasticsearch][elastic] instance to obtain
[node statistics][node_stats] and optionally [cluster-health][cluster_health]
metrics.
Additionally, the plugin is able to query [cluster][cluster_stats],
[indices and shard][indices_stats] statistics for the master node.
In addition, the following optional queries are only made by the master node:
[Cluster Stats][3], [Indices Stats][4], [Shard Stats][5]
> [!NOTE]
> Specific statistics information can change between Elasticsearch versions. In
> general, this plugin attempts to stay as version-generic as possible by
> tagging high-level categories only and creating unique field names of
> whatever statistics names are provided at the mid-low level.
Specific Elasticsearch endpoints that are queried:
⭐ Telegraf v0.1.5
🏷️ server
💻 all
- Node: either /_nodes/stats or /_nodes/_local/stats depending on the 'local'
  configuration setting
- Cluster Health: /_cluster/health?level=indices
- Cluster Stats: /_cluster/stats
- Indices Stats: /_all/_stats
- Shard Stats: /_all/_stats?level=shards
Note that specific statistics information can change between Elasticsearch
versions. In general, this plugin attempts to stay as version-generic as
possible by tagging high-level categories only and using a generic json parser
to make unique field names of whatever statistics names are provided at the
mid-low level.
[1]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
[3]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html
[4]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
[5]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
[elastic]: https://www.elastic.co/
[node_stats]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
[cluster_health]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
[cluster_stats]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html
[indices_stats]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
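As a quick orientation, a minimal configuration sketch could look like this;
the server URL is a placeholder and the cluster-level queries are shown
disabled since they are only answered by the master node:
```toml
[[inputs.elasticsearch]]
  ## Elasticsearch nodes to query for node statistics.
  servers = ["http://localhost:9200"]
  ## Only gather statistics of the node answering the request.
  local = true
  ## Optionally gather cluster-wide information (master node only).
  cluster_health = false
  cluster_stats = false
```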
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,19 +1,20 @@
# Elasticsearch Query Input Plugin
This [elasticsearch](https://www.elastic.co/) query plugin queries endpoints
to obtain metrics from data stored in an Elasticsearch cluster.
This plugin allows querying an [Elasticsearch][elastic] instance to obtain
metrics from data stored in the cluster. The plugin supports counting the
number of hits for a search query, calculating statistics for numeric fields,
filtered by a query and aggregated per tag, and counting the number of terms
for a particular field.
The following is supported:
> [!IMPORTANT]
> This plugin supports Elasticsearch 5.x and 6.x but is known to break on 7.x
> or higher.
- return number of hits for a search query
- calculate the avg/max/min/sum for a numeric field, filtered by a query,
aggregated per tag
- count number of terms for a particular field
⭐ Telegraf v1.20.0
🏷️ datastore
💻 all
## Elasticsearch Support
This plugin is tested against Elasticsearch 5.x and 6.x releases.
Currently it is known to break on 7.x or greater versions.
[elastic]: https://www.elastic.co/
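For illustration only, a query definition might be sketched roughly as below;
the field names inside the `aggregation` block (index, date field, metric
field, tag) are assumptions and placeholders, so refer to the configuration
section for the real option set:
```toml
[[inputs.elasticsearch_query]]
  urls = ["http://localhost:9200"]
  ## One aggregation block per query (names and values are placeholders).
  [[inputs.elasticsearch_query.aggregation]]
    measurement_name = "nginx_latency"
    index = "nginx-*"
    date_field = "@timestamp"
    query_period = "1m"
    filter_query = "*"
    metric_fields = ["response_time"]
    metric_function = "avg"
    tags = ["host.keyword"]
```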
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,7 +1,11 @@
# Ethtool Input Plugin
The ethtool input plugin pulls ethernet device stats. Fields pulled will depend
on the network device and driver.
This plugin collects ethernet device statistics. The available information
strongly depends on the network device and driver.
⭐ Telegraf v1.13.0
🏷️ system, networking
💻 linux
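A minimal configuration sketch is shown below; by default all interfaces are
collected, and the include/exclude option names used here are assumptions:
```toml
[[inputs.ethtool]]
  ## Restrict or exclude interfaces (assumed option names; placeholders shown).
  # interface_include = ["eth0"]
  # interface_exclude = ["docker0"]
```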
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,6 +1,14 @@
# Event Hub Consumer Input Plugin
# Azure Event Hub Consumer Input Plugin
This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.
This plugin allows consuming messages from [Azure Event Hubs][eventhub] and
[Azure IoT Hub][iothub] instances.
⭐ Telegraf v1.14.0
🏷️ iot, messaging
💻 all
[eventhub]: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about
[iothub]: https://azure.microsoft.com/en-us/products/iot-hub
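As a rough sketch, a minimal configuration could look like the following; the
connection string is a placeholder for the Event Hub (or IoT Hub built-in
endpoint) credentials and the option names are assumed:
```toml
[[inputs.eventhub_consumer]]
  ## Connection string of the Event Hub or IoT Hub built-in endpoint
  ## (placeholder values).
  connection_string = "Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=listen;SharedAccessKey=secret;EntityPath=hubname"
  ## Format of the received messages.
  data_format = "json"
```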
## IoT Hub Setup

View File

@ -6,6 +6,12 @@ additional information can be found.
Telegraf minimum version: Telegraf x.x Plugin minimum tested version: x.x
⭐ Telegraf v1.0.0 <!-- introduction version -->
🚩 Telegraf v1.10.0 <!-- deprecation version if any -->
🔥 Telegraf v1.20.0 <!-- removal version if any -->
🏷️ your labels
💻 your OS support
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support

View File

@ -1,11 +1,15 @@
# Exec Input Plugin
The `exec` plugin executes all the `commands` in parallel on every interval and
parses metrics from their output in any one of the accepted
[Input Data Formats](../../../docs/DATA_FORMATS_INPUT.md).
This plugin executes the given `commands` on every interval and parses metrics
from their output in any one of the supported [data formats][data_formats].
This plugin can be used to poll for custom metrics from any source.
⭐ Telegraf v0.1.5
🏷️ system
💻 all
[data_formats]: /docs/DATA_FORMATS_INPUT.md
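As a quick orientation, a minimal configuration sketch could look like this;
the script path is a placeholder for your own command:
```toml
[[inputs.exec]]
  ## Commands to run on every interval (placeholder path).
  commands = ["/usr/local/bin/my_metrics.sh"]
  ## Abort the command if it does not finish in time.
  timeout = "5s"
  ## Data format of the command output.
  data_format = "influx"
```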
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support

View File

@ -1,25 +1,22 @@
# Execd Input Plugin
The `execd` plugin runs an external program as a long-running daemon. The
program must output metrics in any one of the accepted [Input Data Formats][]
on the process's STDOUT and is expected to stay running. If you'd instead like
the process to collect metrics and then exit, check out the [inputs.exec][]
plugin.
This plugin runs the given external program as a long-running daemon and
collects metrics from the process's `stdout` in one of the supported
[data formats][data_formats]. The program is expected to stay running and to
output data when receiving the configured `signal`.
The `signal` can be configured to send a signal to the running daemon on each
collection interval. This is used when you want Telegraf to notify the plugin
that it's time to run collection. STDIN is recommended, which writes a new line
to the process's STDIN.
The `stderr` output of the process will be relayed to Telegraf's logging
facilities and will be logged as _error_ by default. However, you can log to
other levels by prefixing your message with `E!` for error, `W!` for warning,
`I!` for info, `D!` for debugging and `T!` for trace levels followed by a space
and the actual message. For example, outputting `I! A log message` will create
an `info` log line in your Telegraf logging output.
STDERR from the process will be relayed to Telegraf's logging facilities. By
default, all messages on `stderr` will be logged as errors. However, you can
log to other levels by prefixing your message with `E!` for error, `W!` for
warning, `I!` for info, `D!` for debugging and `T!` for trace levels followed by
a space and the actual message. For example, outputting `I! A log message` will
create an `info` log line in your Telegraf logging output.
⭐ Telegraf v1.14.0
🏷️ system
💻 all
[Input Data Formats]: ../../../docs/DATA_FORMATS_INPUT.md
[inputs.exec]: ../exec/README.md
[data_formats]: /docs/DATA_FORMATS_INPUT.md
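A minimal configuration sketch is shown below; the daemon path is a placeholder
and the option names (`command`, `signal`, `restart_delay`, `data_format`) are
assumed to match the current plugin options:
```toml
[[inputs.execd]]
  ## Program to start as a long-running daemon (placeholder path).
  command = ["/usr/local/bin/my_daemon", "--foreground"]
  ## Notify the daemon on each collection interval, e.g. via STDIN.
  signal = "STDIN"
  ## Delay before restarting the daemon if it exits.
  restart_delay = "10s"
  ## Data format of the daemon output.
  data_format = "influx"
```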
## Service Input <!-- @/docs/includes/service_input.md -->

View File

@ -1,13 +1,18 @@
# Fail2ban Input Plugin
The fail2ban plugin gathers the count of failed and banned IP addresses using
[fail2ban](https://www.fail2ban.org).
This plugin gathers the count of failed and banned IP addresses using
[fail2ban][fail2ban] by running the `fail2ban-client` command.
This plugin runs the `fail2ban-client` command which generally requires root
access. Acquiring the required permissions can be done using several methods:
> [!NOTE]
> The `fail2ban-client` requires root access, so please make sure to either
> allow Telegraf to run that command using `sudo` without a password or to
> run telegraf as root (not recommended).
- [Use sudo](#using-sudo) to run fail2ban-client.
- Run telegraf as root. (not recommended)
⭐ Telegraf v1.4.0
🏷️ networking, system
💻 all
[fail2ban]: https://www.fail2ban.org
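A minimal configuration sketch, assuming the plugin keeps its `use_sudo`
option for running `fail2ban-client` via a passwordless sudoers entry:
```toml
[[inputs.fail2ban]]
  ## Run fail2ban-client with sudo instead of running Telegraf as root.
  use_sudo = true
```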
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,11 +1,15 @@
# Fibaro Input Plugin
The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather
values of connected devices. Those values could be true (1) or false (0) for
switches, percentage for dimmers, temperature, etc.
This plugin gathers data from devices connected to a [Fibaro][fibaro]
controller. Those values could be true (1) or false (0) for switches, percentage
for dimmers, temperature, etc. Both _Home Center 2_ and _Home Center 3_ devices
are supported.
By default, this plugin supports HC2 devices. To support HC3 devices, please
use the device type config option.
⭐ Telegraf v1.7.0
🏷️ iot
💻 all
[fibaro]: https://www.fibaro.com
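For orientation, a minimal configuration sketch could look as follows; the URL
and credentials are placeholders and the `device_type` option name for
selecting HC2 vs. HC3 is an assumption:
```toml
[[inputs.fibaro]]
  ## Controller API URL and credentials (placeholders).
  url = "http://hc2.local/api/"
  username = "telegraf"
  password = "secret"
  ## Controller generation; option name assumed (e.g. "HC2" or "HC3").
  # device_type = "HC2"
```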
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,10 +1,19 @@
# File Input Plugin
The file plugin parses the **complete** contents of a file **every interval**
using the selected [input data format][].
This plugin reads the __complete__ contents of the configured files on
__every__ interval. The file content is split line-wise and parsed according to
one of the supported [data formats][data_formats].
**Note:** If you wish to parse only newly appended lines, use the [tail][]
input plugin instead.
> [!TIP]
> If you wish to only process newly appended lines, use the [tail][tail] input
> plugin instead.
⭐ Telegraf v1.8.0
🏷️ system
💻 all
[data_formats]: /docs/DATA_FORMATS_INPUT.md
[tail]: /plugins/inputs/tail
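As a quick orientation, a minimal configuration sketch could look like this;
the file glob and data format are placeholders:
```toml
[[inputs.file]]
  ## Files to parse completely on each interval; globs are supported.
  files = ["/var/lib/metrics/*.json"]
  ## Data format of the file contents.
  data_format = "json"
```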
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -57,7 +66,4 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
The format of metrics produced by this plugin depends on the content and data
format of the file.
[input data format]: /docs/DATA_FORMATS_INPUT.md
[tail]: /plugins/inputs/tail
## Example Output

View File

@ -1,6 +1,10 @@
# Filecount Input Plugin
Reports the number and total size of files in specified directories.
This plugin reports the number and total size of files in specified directories.
⭐ Telegraf v1.8.0
🏷️ system
💻 all
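A minimal configuration sketch is shown below; the directories are placeholders
and the option names (`directories`, `name`, `recursive`) are assumed to match
the current plugin options:
```toml
[[inputs.filecount]]
  ## Directories to monitor (placeholders).
  directories = ["/var/log", "/var/cache/apt/archives"]
  ## Only count files matching this name pattern.
  name = "*"
  ## Descend into subdirectories.
  recursive = true
```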
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,6 +1,11 @@
# Filestat Input Plugin
# File statistics Input Plugin
The filestat plugin gathers metrics about file existence, size, and other stats.
This plugin gathers metrics about file existence, size, and other file
statistics.
⭐ Telegraf v0.13.0
🏷️ system
💻 all
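A minimal configuration sketch is shown below; the file glob is a placeholder:
```toml
[[inputs.filestat]]
  ## Files to check; globs are supported.
  files = ["/var/log/**.log"]
  ## Additionally compute the md5 sum of each file (can be expensive).
  md5 = false
```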
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,8 +1,18 @@
# Fireboard Input Plugin
The fireboard plugin gathers real-time temperature data from fireboard
thermometers. In order to use this input plugin, you'll need to sign up to use
the [Fireboard REST API](https://docs.fireboard.io/reference/restapi.html).
This plugin gathers real-time temperature data from [fireboard][fireboard]
thermometers.
> [!NOTE]
> You will need to sign up for the [Fireboard REST API][api] in order to use
> this plugin.
⭐ Telegraf v1.12.0
🏷️ iot
💻 all
[fireboard]: https://www.fireboard.com
[api]: https://docs.fireboard.io/reference/restapi.html
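As a rough sketch, a minimal configuration could look like the following; the
token is a placeholder and the commented defaults for URL and timeout are
assumptions:
```toml
[[inputs.fireboard]]
  ## Authentication token obtained from your Fireboard REST API account
  ## (placeholder value).
  auth_token = "your-auth-token"
  ## Optional overrides (assumed defaults).
  # url = "https://fireboard.io/api/v1/devices.json"
  # http_timeout = "4s"
```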
## Global configuration options <!-- @/docs/includes/plugin_config.md -->

View File

@ -1,27 +1,23 @@
# Fluentd Input Plugin
The fluentd plugin gathers metrics from the plugin endpoint provided by the
[in_monitor plugin][1]. This plugin understands data provided by the
/api/plugin.json resource (/api/config.json is not covered).
This plugin gathers internal metrics of a [fluentd][fluentd] instance provided
by fluentd's [monitor agent plugin][monitor_agent]. Only data provided by the
`/api/plugin.json` resource is covered; `/api/config.json` is not.
You might need to adjust your fluentd configuration in order to reduce series
cardinality in case your fluentd restarts frequently. Every time fluentd
starts, the `plugin_id` value is given a new random value. According to the
[fluentd documentation][2], you can add the `@id` parameter to each plugin to
avoid this behaviour and define a custom `plugin_id`.
> [!IMPORTANT]
> This plugin might produce high-cardinality series as the `plugin_id` value is
> random after each restart of fluentd. If your fluentd restarts frequently,
> you might need to adjust your fluentd configuration and add the `@id`
> parameter to each plugin in order to reduce series cardinality.
> See [fluentd's documentation][docs] for details.
Example configuration with the `@id` parameter for the http plugin:
⭐ Telegraf v1.4.0
🏷️ server
💻 all
```text
<source>
@type http
@id http
port 8888
</source>
```
[1]: https://docs.fluentd.org/input/monitor_agent
[2]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter
[fluentd]: https://www.fluentd.org/
[monitor_agent]: https://docs.fluentd.org/input/monitor_agent
[docs]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter
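On the Telegraf side, a minimal configuration sketch could look like this; the
endpoint URL is a placeholder (adjust the host, port and path to your
monitor_agent setup) and the `exclude` option name is assumed:
```toml
[[inputs.fluentd]]
  ## URL of fluentd's monitor_agent endpoint (placeholder; adjust to your setup).
  endpoint = "http://localhost:24220/api/plugins.json"
  ## Plugin types to skip when gathering metrics (assumed option name).
  # exclude = ["monitor_agent", "dummy"]
```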
## Global configuration options <!-- @/docs/includes/plugin_config.md -->