docs(outputs): Add plugin metadata and update description (#16061)

This commit is contained in:
Sven Rebhan 2024-10-28 18:00:22 +01:00 committed by GitHub
parent 43c503e734
commit 61902ba15a
66 changed files with 607 additions and 312 deletions


@ -9,10 +9,11 @@ binding_key.
Message payload should be formatted in one of the
[Telegraf Data Formats](../../../docs/DATA_FORMATS_INPUT.md).
For an introduction to AMQP see:
For an introduction check the [AMQP concepts page][amqp_concepts] and the
[RabbitMQ getting started guide][rabbitmq_getting_started].
- [amqp - concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)
[amqp_concepts]: https://www.rabbitmq.com/tutorials/amqp-concepts.html
[rabbitmq_getting_started]: https://www.rabbitmq.com/getstarted.html
## Service Input <!-- @/docs/includes/service_input.md -->


@ -1,15 +1,17 @@
# HTTP Listener v2 Input Plugin
HTTP Listener v2 is a service input plugin that listens for metrics sent via
HTTP. Metrics may be sent in any supported [data format][data_format]. For
metrics in [InfluxDB Line Protocol][line_protocol] it's recommended to use the
[`influxdb_listener`][influxdb_listener] or
[`influxdb_v2_listener`][influxdb_v2_listener] instead.
The HTTP Listener v2 is a service input plugin that listens for metrics sent
via HTTP. Metrics may be sent in any supported [data-format][data_format].
**Note:** The plugin previously known as `http_listener` has been renamed
`influxdb_listener`. If you would like Telegraf to act as a proxy/relay for
InfluxDB it is recommended to use [`influxdb_listener`][influxdb_listener] or
[`influxdb_v2_listener`][influxdb_v2_listener].
> [!NOTE]
> If you would like Telegraf to act as a proxy/relay for InfluxDB v1 or
> InfluxDB v2 it is recommended to use the
> [`influxdb_listener`][influxdb_listener] or
> [`influxdb_v2_listener`][influxdb_v2_listener] plugin instead.
⭐ Telegraf v1.30.0
🏷️ servers, web
💻 all
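A minimal configuration sketch for this listener could look as follows; the address, path and data format shown are illustrative values, not requirements:

```toml
# Hypothetical minimal setup for the HTTP Listener v2 input
[[inputs.http_listener_v2]]
  ## Address and port to listen on
  service_address = ":8080"
  ## URL paths to accept metrics on
  paths = ["/telegraf"]
  ## Data format of the request payload
  data_format = "influx"
```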
## Service Input <!-- @/docs/includes/service_input.md -->
@ -138,5 +140,4 @@ curl -i -XGET 'http://localhost:8080/telegraf?host=server01&value=0.42'
[data_format]: /docs/DATA_FORMATS_INPUT.md
[influxdb_listener]: /plugins/inputs/influxdb_listener/README.md
[line_protocol]: https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/
[influxdb_v2_listener]: /plugins/inputs/influxdb_v2_listener/README.md


@ -1,13 +1,19 @@
# Amon Output Plugin
This plugin writes to [Amon](https://www.amon.cx) and requires a `serverkey`
and `amoninstance` URL which can be obtained
[here](https://www.amon.cx/docs/monitoring/) for the account.
This plugin writes metrics to [Amon monitoring platform][amon]. It requires a
`serverkey` and `amoninstance` URL which can be obtained [here][amon_monitoring]
for your account.
If the point value being sent cannot be converted to a float64, the metric is
skipped.
> [!IMPORTANT]
> If point values being sent cannot be converted to a `float64`, the metric is
> skipped.
Metrics are grouped by converting any `_` characters to `.` in the Point Name.
⭐ Telegraf v0.2.1
🏷️ databases
💻 all
[amon]: https://www.amon.cx
[amon_monitoring]: https://www.amon.cx/docs/monitoring/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -32,3 +38,7 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Connection timeout.
# timeout = "5s"
```
## Conversions
Metrics are grouped by converting any `_` characters to `.` in the point name.
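As an illustrative example of that grouping (the metric name is made up):

```text
cpu_usage_idle  ->  cpu.usage.idle
```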


@ -1,14 +1,21 @@
# AMQP Output Plugin
This plugin writes to an AMQP 0-9-1 exchange, a prominent implementation of this
protocol being [RabbitMQ](https://www.rabbitmq.com/).
This plugin writes to an Advanced Message Queuing Protocol v0.9.1 broker.
A prominent implementation of this protocol is [RabbitMQ][rabbitmq].
This plugin does not bind the exchange to a queue.
> [!NOTE]
> This plugin does not bind the AMQP exchange to a queue.
For an introduction to AMQP see:
For an introduction check the [AMQP concepts page][amqp_concepts] and the
[RabbitMQ getting started guide][rabbitmq_getting_started].
- [amqp: concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)
⭐ Telegraf v0.1.9
🏷️ messaging
💻 all
[amqp_concepts]: https://www.rabbitmq.com/tutorials/amqp-concepts.html
[rabbitmq]: https://www.rabbitmq.com
[rabbitmq_getting_started]: https://www.rabbitmq.com/getstarted.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,13 @@
# Application Insights Output Plugin
# Azure Application Insights Output Plugin
This plugin writes telegraf metrics to [Azure Application
Insights](https://azure.microsoft.com/en-us/services/application-insights/).
This plugin writes metrics to the [Azure Application Insights][insights]
service.
⭐ Telegraf v1.7.0
🏷️ applications, cloud
💻 all
[insights]: https://azure.microsoft.com/en-us/services/application-insights/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,11 +1,15 @@
# Azure Data Explorer Output Plugin
Azure Data Explorer is a distributed, columnar store, purpose built for any type
of logs, metrics and time series data.
This plugin writes metrics to the [Azure Data Explorer][data_explorer],
[Azure Synapse Data Explorer][synapse], and
[Real time analytics in Fabric][fabric] services.
This plugin writes data collected by any of the Telegraf input plugins to
[Azure Data Explorer][data_explorer], [Azure Synapse Data Explorer][synapse],
and [Real time analytics in Fabric][fabric].
Azure Data Explorer is a distributed, columnar store, purpose built for any
type of logs, metrics and time series data.
⭐ Telegraf v1.20.0
🏷️ cloud, datastore
💻 all
[data_explorer]: https://docs.microsoft.com/en-us/azure/data-explorer
[synapse]: https://docs.microsoft.com/en-us/azure/synapse-analytics/data-explorer/data-explorer-overview


@ -1,12 +1,13 @@
# Azure Monitor Output Plugin
**The Azure Monitor custom metrics service is currently in preview and not
available in a subset of Azure regions.**
This plugin writes metrics to [Azure Monitor][azure_monitor] which has
a metric resolution of one minute. To accommodate this in Telegraf, the
plugin will automatically aggregate metrics into one minute buckets and send
them to the service on every flush interval.
This plugin will send custom metrics to Azure Monitor. Azure Monitor has a
metric resolution of one minute. To handle this in Telegraf, the Azure Monitor
output plugin will automatically aggregate metrics into one minute buckets,
which are then sent to Azure Monitor on every flush interval.
> [!IMPORTANT]
> The Azure Monitor custom metrics service is currently in preview and might
> not be available in all Azure regions.
The metrics from each input plugin will be written to a separate Azure Monitor
namespace, prefixed with `Telegraf/` by default. The field name for each metric
@ -14,12 +15,19 @@ is written as the Azure Monitor metric name. All field values are written as a
summarized set that includes: min, max, sum, count. Tags are written as a
dimension on each Azure Monitor metric.
Note that Azure Monitor won't accept metrics that are too far in the past
or future. Keep this in mind when configuring your output buffer limits or other
variables, such as flush intervals, or when using input sources that could cause
metrics to be out of this allowed range.
Currently, the timestamp should not be older than 30 minutes or more than
4 minutes in the future at the time when it is sent to Azure Monitor service.
> [!NOTE]
> Azure Monitor won't accept metrics that are too far in the past or future.
> Keep this in mind when configuring your output buffer limits or other
> variables, such as flush intervals, or when using input sources that could
> cause metrics to be out of this allowed range.
> Currently, the timestamp should not be older than 30 minutes or more than
> 4 minutes in the future at the time when it is sent to Azure Monitor service.
⭐ Telegraf v1.8.0
🏷️ cloud, datastore
💻 all
[azure_monitor]: https://learn.microsoft.com/en-us/azure/azure-monitor
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -146,8 +154,9 @@ configurations:
[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
**Note:** As shown above, the last option (#4) is the preferred way to
authenticate when running Telegraf on Azure VMs.
> [!NOTE]
> As shown above, the last option (#4) is the preferred way to authenticate
> when running Telegraf on Azure VMs.
## Dimensions


@ -1,12 +1,20 @@
# Google BigQuery Output Plugin
This plugin writes to the [Google Cloud
BigQuery](https://cloud.google.com/bigquery) and requires
[authentication](https://cloud.google.com/bigquery/docs/authentication) with
Google Cloud using either a service account or user credentials.
This plugin writes metrics to the [Google Cloud BigQuery][big_query] service
and requires [authentication][authentication] with Google Cloud using either a
service account or user credentials.
Be aware that this plugin accesses APIs that are
[chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.
> [!IMPORTANT]
> Be aware that this plugin accesses APIs that are [chargeable][pricing] and
> might incur costs.
[authentication]: https://cloud.google.com/bigquery/docs/authentication
[big_query]: https://cloud.google.com/bigquery
[pricing]: https://cloud.google.com/bigquery/pricing
⭐ Telegraf v1.18.0
🏷️ cloud, datastore
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,8 +1,12 @@
# Clarify Output Plugin
This plugin writes to [Clarify][clarify]. To use this plugin you will
This plugin writes metrics to [Clarify][clarify]. To use this plugin you will
need to obtain a set of [credentials][credentials].
⭐ Telegraf v1.27.0
🏷️ cloud, datastore
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,7 +1,11 @@
# Google Cloud PubSub Output Plugin
The GCP PubSub plugin publishes metrics to a [Google Cloud PubSub][pubsub] topic
as one of the supported [output data formats][].
This plugin publishes metrics to a [Google Cloud PubSub][pubsub] topic in one
of the supported [data formats][data_formats].
⭐ Telegraf v1.10.0
🏷️ cloud, messaging
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -75,4 +79,4 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
```
[pubsub]: https://cloud.google.com/pubsub
[output data formats]: /docs/DATA_FORMATS_OUTPUT.md
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md


@ -1,6 +1,12 @@
# Amazon CloudWatch Output Plugin
This plugin will send metrics to Amazon CloudWatch.
This plugin writes metrics to the [Amazon CloudWatch][cloudwatch] service.
⭐ Telegraf v0.10.1
🏷️ cloud
💻 all
[cloudwatch]: https://aws.amazon.com/cloudwatch
## Amazon Authentication


@ -1,6 +1,12 @@
# Amazon CloudWatch Logs Output Plugin
This plugin will send logs to Amazon CloudWatch.
This plugin writes log-metrics to the [Amazon CloudWatch][cloudwatch] service.
⭐ Telegraf v1.19.0
🏷️ cloud, logging
💻 all
[cloudwatch]: https://aws.amazon.com/cloudwatch
## Amazon Authentication


@ -1,7 +1,14 @@
# CrateDB Output Plugin
This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL
protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
This plugin writes metrics to [CrateDB][cratedb] via its
[PostgreSQL protocol][psql_protocol].
⭐ Telegraf v1.5.0
🏷️ cloud, datastore
💻 all
[cratedb]: https://crate.io/
[psql_protocol]: https://crate.io/docs/crate/reference/protocols/postgres.html
## Table Schema


@ -1,8 +1,13 @@
# Datadog Output Plugin
This plugin writes to the [Datadog Metrics API][metrics] and requires an
`apikey` which can be obtained [here][apikey] for the account. This plugin
supports the v1 API.
This plugin writes metrics to the [Datadog Metrics API][metrics] and requires an
`apikey` which can be obtained [here][apikey] for the account.
> [!NOTE]
> This plugin supports the v1 API.
⭐ Telegraf v0.1.6
🏷️ applications, cloud, datastore
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,11 @@
# discard Output Plugin
# Discard Output Plugin
This output plugin simply drops all metrics that are sent to it. It is only
meant to be used for testing purposes.
This plugin discards all metrics written to it and is meant for testing
purposes.
⭐ Telegraf v1.2.0
🏷️ testing
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,20 +1,27 @@
# Dynatrace Output Plugin
This plugin sends Telegraf metrics to [Dynatrace](https://www.dynatrace.com) via
the [Dynatrace Metrics API V2][api-v2]. It may be run alongside the Dynatrace
This plugin writes metrics to [Dynatrace][dynatrace] via the
[Dynatrace Metrics API V2][api-v2]. It may be run alongside the Dynatrace
OneAgent for automatic authentication or it may be run standalone on a host
without a OneAgent by specifying a URL and API Token. More information on the
plugin can be found in the [Dynatrace documentation][docs]. All metrics are
reported as gauges, unless they are specified to be delta counters using the
`additional_counters` or `additional_counters_patterns` config option
(see below).
See the [Dynatrace Metrics ingestion protocol documentation][proto-docs]
for details on the types defined there.
without OneAgent by specifying a URL and API Token.
More information on the plugin can be found in the
[Dynatrace documentation][docs].
> [!NOTE]
> All metrics are reported as gauges, unless they are specified to be delta
> counters using the `additional_counters` or `additional_counters_patterns`
> config option (see below).
> See the [Dynatrace Metrics ingestion protocol documentation][proto-docs]
> for details on the types defined there.
⭐ Telegraf v1.16.0
🏷️ cloud, datastore
💻 all
[api-v2]: https://docs.dynatrace.com/docs/shortlink/api-metrics-v2
[docs]: https://docs.dynatrace.com/docs/shortlink/telegraf
[dynatrace]: https://www.dynatrace.com
[proto-docs]: https://docs.dynatrace.com/docs/shortlink/metric-ingestion-protocol
## Requirements


@ -1,9 +1,15 @@
# Elasticsearch Output Plugin
This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using
Elastic (<http://olivere.github.io/elastic/>).
This plugin writes metrics to [Elasticsearch][elasticsearch] via HTTP using the
[Elastic client library][client_lib]. The plugin supports Elasticsearch
releases from v5.x up to v7.x.
It supports Elasticsearch releases from 5.x up to 7.x.
⭐ Telegraf v0.1.5
🏷️ datastore, logging
💻 all
[elasticsearch]: https://www.elastic.co
[client_lib]: http://olivere.github.io/elastic/
## Elasticsearch indexes and templates


@ -1,16 +1,20 @@
# Exec Output Plugin
# Executable Output Plugin
This plugin sends telegraf metrics to an external application over stdin.
This plugin writes metrics to an external application via `stdin`. The command
will be executed on each write creating a new process. Metrics are passed in
one of the supported [data formats][data_formats].
The command should be defined similar to docker's `exec` form:
The executable and the individual parameters must be defined as a list.
All outputs of the executable to `stderr` will be logged in the Telegraf log.
```text
["executable", "param1", "param2"]
```
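A configuration sketch using such a list might look like this; the executable path and flag are placeholders:

```toml
[[outputs.exec]]
  ## Executable and its parameters, given as a list (placeholder values)
  command = ["/usr/local/bin/my-output", "--verbose"]
  ## Data format used for the metrics written to stdin
  data_format = "influx"
```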
> [!TIP]
> For better performance, consider the `execd` plugin, which runs continuously.
On non-zero exit stderr will be logged at error level.
⭐ Telegraf v1.12.0
🏷️ system
💻 all
For better performance, consider execd, which runs continuously.
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,9 +1,19 @@
# Execd Output Plugin
# Executable Daemon Output Plugin
The `execd` plugin runs an external program as a daemon.
This plugin writes metrics to an external daemon program via `stdin`. The
command will be executed once and metrics will be passed to it on every write
in one of the supported [data formats][data_formats].
The executable and the individual parameters must be defined as a list.
All outputs of the executable to `stderr` will be logged in the Telegraf log.
Telegraf minimum version: Telegraf 1.15.0
⭐ Telegraf v1.15.0
🏷️ system
💻 all
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
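A hedged configuration sketch; the daemon path and flag are placeholders:

```toml
[[outputs.execd]]
  ## Long-running program started once; metrics are written to its stdin
  command = ["/usr/local/bin/my-daemon", "--listen"]
  ## Data format used for the metrics passed on each write
  data_format = "influx"
```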
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,6 +1,13 @@
# File Output Plugin
This plugin writes telegraf metrics to files
This plugin writes metrics to one or more local files in one of the supported
[data formats][data_formats].
⭐ Telegraf v0.10.3
🏷️ system
💻 all
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
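A minimal configuration sketch; the file path is a placeholder and `stdout` is the special console target:

```toml
[[outputs.file]]
  ## Files to write to; "stdout" writes to the console
  files = ["stdout", "/tmp/metrics.out"]
  ## Data format used when writing metrics
  data_format = "influx"
```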
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,12 +1,15 @@
# Graphite Output Plugin
This plugin writes to [Graphite][1] via raw TCP.
This plugin writes metrics to [Graphite][graphite] via TCP. For details on the
translation between Telegraf Metrics and Graphite output see the
[Graphite data format][serializer].
For details on the translation between Telegraf Metrics and Graphite output,
see the [Graphite Data Format][2].
⭐ Telegraf v0.10.1
🏷️ datastore
💻 all
[1]: http://graphite.readthedocs.org/en/latest/index.html
[2]: ../../../docs/DATA_FORMATS_OUTPUT.md
[graphite]: http://graphite.readthedocs.org/en/latest/index.html
[serializer]: /plugins/serializers/graphite/README.md
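As a rough illustration of that translation (assuming the default `host.tags.measurement.field` template; the metric and timestamp are invented), a metric in line protocol such as

```text
cpu,host=server01 usage_idle=98.2 1609459200000000000
```

could be serialized to something like

```text
server01.cpu.usage_idle 98.2 1609459200
```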
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,8 +1,14 @@
# Graylog Output Plugin
This plugin writes to a Graylog instance using the "[GELF][]" format.
This plugin writes metrics to a [Graylog][graylog] instance using the
[GELF data format][gelf].
[GELF]: https://docs.graylog.org/en/3.1/pages/gelf.html#gelf-payload-specification
⭐ Telegraf v1.0.0
🏷️ datastore, logging
💻 all
[gelf]: https://docs.graylog.org/en/3.1/pages/gelf.html#gelf-payload-specification
[graylog]: https://graylog.org/
## GELF Fields


@ -1,9 +1,15 @@
# GroundWork Output Plugin
This plugin writes to a [GroundWork Monitor][1] instance. Plugin only supports
GW8+
This plugin writes metrics to a [GroundWork Monitor][groundwork] instance.
[1]: https://www.gwos.com/product/groundwork-monitor/
> [!IMPORTANT]
> Plugin only supports GroundWork v8 or later.
⭐ Telegraf v1.21.0
🏷️ applications, messaging
💻 all
[groundwork]: https://www.gwos.com/product/groundwork-monitor/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,12 +1,16 @@
# Health Output Plugin
The health plugin provides an HTTP health check resource that can be configured
to return a failure status code based on the value of a metric.
This plugin provides an HTTP health check endpoint that can be configured to
return failure status codes based on the value of a metric.
When the plugin is healthy it will return a 200 response; when unhealthy it
will return a 503 response. The default state is healthy; one or more checks
must fail for the resource to enter the failed state.
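A configuration sketch of such a check; the field name and threshold are illustrative:

```toml
[[outputs.health]]
  ## Address for the health endpoint to listen on
  service_address = "http://:8080"

  ## Mark the output unhealthy unless the (hypothetical) field stays below
  ## the given threshold
  [[outputs.health.compares]]
    field = "buffer_size"
    lt = 5000.0
```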
⭐ Telegraf v1.11.0
🏷️ applications
💻 all
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,8 +1,14 @@
# HTTP Output Plugin
This plugin sends metrics in an HTTP message encoded using one of the output data
formats. For data_formats that support batching, metrics are sent in batch
format by default.
This plugin writes metrics to an HTTP endpoint using one of the supported
[data formats][data_formats]. For data formats supporting batching, metrics are
sent in batches by default.
⭐ Telegraf v1.7.0
🏷️ applications
💻 all
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
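A minimal configuration sketch; the URL is a placeholder:

```toml
[[outputs.http]]
  ## Endpoint to send metrics to (placeholder)
  url = "http://127.0.0.1:8080/telegraf"
  ## HTTP method used for the request
  method = "POST"
  ## Data format used to encode the metrics
  data_format = "json"
```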
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,13 @@
# InfluxDB v1.x Output Plugin
The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP
service.
This plugin writes metrics to an [InfluxDB v1.x][influxdb_v1] instance via
the HTTP or UDP protocol.
⭐ Telegraf v0.1.1
🏷️ datastore
💻 all
[influxdb_v1]: https://docs.influxdata.com/influxdb/v1
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -120,6 +126,4 @@ define additional `[[outputs.influxdb]]` section with new `urls`.
Reference the [influx serializer][] for details about metric production.
[InfluxDB v1.x]: https://github.com/influxdata/influxdb
[influx serializer]: /plugins/serializers/influx/README.md#Metrics


@ -1,6 +1,12 @@
# InfluxDB v2.x Output Plugin
The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service.
This plugin writes metrics to an [InfluxDB v2.x][influxdb_v2] instance via HTTP.
⭐ Telegraf v1.8.0
🏷️ datastore
💻 all
[influxdb_v2]: https://docs.influxdata.com/influxdb/v2
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -101,5 +107,4 @@ to use them.
Reference the [influx serializer][] for details about metric production.
[InfluxDB v2.x]: https://github.com/influxdata/influxdb
[influx serializer]: /plugins/serializers/influx/README.md#Metrics


@ -1,13 +1,19 @@
# Instrumental Output Plugin
This plugin writes to the [Instrumental Collector
API](https://instrumentalapp.com/docs/tcp-collector) and requires a
Project-specific API token.
This plugin writes metrics to the [Instrumental Collector API][instrumental]
and requires a project-specific API token.
Instrumental accepts stats in a format very close to Graphite, with the only
difference being that the type of stat (gauge, increment) is the first token,
separated from the metric itself by whitespace. The `increment` type is only
used if the metric comes in as a counter through `[[input.statsd]]`.
used if the metric comes in as a counter via the [statsd input plugin][statsd].
⭐ Telegraf v0.13.1
🏷️ applications
💻 all
[instrumental]: https://instrumentalapp.com/docs/tcp-collector
[statsd]: /plugins/inputs/statsd/README.md
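For illustration, stats in the format described above might look like the following lines; the metric names, values, and timestamps are invented:

```text
gauge cpu.usage_idle 98.2 1609459200
increment requests.count 3 1609459200
```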
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,20 +1,13 @@
# IoTDB Output Plugin
# Apache IoTDB Output Plugin
This output plugin saves Telegraf metrics to an Apache IoTDB backend,
supporting session connection and data insertion.
This plugin writes metrics to an [Apache IoTDB][iotdb] instance, a database
for the Internet of Things, supporting session connection and data insertion.
## Apache IoTDB
⭐ Telegraf v1.24.0
🏷️ datastore
💻 all
Apache IoTDB (Database for Internet of Things) is an IoT native database with
high performance for data management and analysis, deployable on the edge and
the cloud. Due to its light-weight architecture, high performance and rich
feature set together with its deep integration with Apache Hadoop, Spark and
Flink, Apache IoTDB can meet the requirements of massive data storage,
high-speed data ingestion and complex data analysis in the IoT industrial
fields.
For more details consult the [Apache IoTDB website](https://iotdb.apache.org)
or the [Apache IoTDB GitHub page](https://github.com/apache/iotdb).
[iotdb]: https://iotdb.apache.org
## Getting started
@ -149,11 +142,10 @@ to use them.
## for iotdb 1.x.x and above -> https://iotdb.apache.org/UserGuide/V1.3.x/User-Manual/Syntax-Rule.html#identifier
##
## Available values are:
## - "1.0", "1.1", "1.2", "1.3" -- enclose in `` the world having forbidden character
## such as @ $ # : [ ] { } ( ) space
## - "0.13" -- enclose in `` the world having forbidden character
## - "1.0", "1.1", "1.2", "1.3" -- use backticks to enclose tags with forbidden characters
## such as @$#:[]{}() and space
## - "0.13" -- use backticks to enclose tags with forbidden characters
## such as space
##
## Keep this section commented if you don't want to sanitize the path
# sanitize_tag = "1.3"
```


@ -51,10 +51,9 @@
## for iotdb 1.x.x and above -> https://iotdb.apache.org/UserGuide/V1.3.x/User-Manual/Syntax-Rule.html#identifier
##
## Available values are:
## - "1.0", "1.1", "1.2", "1.3" -- enclose in `` the world having forbidden character
## such as @ $ # : [ ] { } ( ) space
## - "0.13" -- enclose in `` the world having forbidden character
## - "1.0", "1.1", "1.2", "1.3" -- use backticks to enclose tags with forbidden characters
## such as @$#:[]{}() and space
## - "0.13" -- use backticks to enclose tags with forbidden characters
## such as space
##
## Keep this section commented if you don't want to sanitize the path
# sanitize_tag = "1.3"


@ -1,7 +1,12 @@
# Kafka Output Plugin
This plugin writes to a [Kafka
Broker](http://kafka.apache.org/07/quickstart.html) acting a Kafka Producer.
This plugin writes metrics to a [Kafka Broker][kafka], acting as a Kafka producer.
⭐ Telegraf v0.1.7
🏷️ messaging
💻 all
[kafka]: http://kafka.apache.org
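A minimal configuration sketch; the broker address and topic are placeholders:

```toml
[[outputs.kafka]]
  ## Kafka brokers to produce to (placeholder address)
  brokers = ["localhost:9092"]
  ## Topic the metrics are published to
  topic = "telegraf"
```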
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,15 +1,17 @@
# Amazon Kinesis Output Plugin
This is an experimental plugin that is still in the early stages of
development. It will batch up all of the Points in one Put request to
Kinesis. This should save the number of API requests by a considerable level.
This plugin writes metrics to an [Amazon Kinesis][kinesis] endpoint. It will
batch all Points in one request to reduce the number of API requests.
## About Kinesis
Please consult [Amazon's official documentation][docs] for more details on the
Kinesis architecture and concepts.
This is not the place to document all of the various Kinesis terms; however,
it may be useful for users to review Amazon's official documentation, which is
available
[here](http://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html).
⭐ Telegraf v0.2.5
🏷️ cloud, messaging
💻 all
[kinesis]: https://aws.amazon.com/kinesis
[docs]: http://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html
## Amazon Authentication


@ -1,18 +1,22 @@
# Librato Output Plugin
This plugin writes to the [Librato Metrics API][metrics-api] and requires an
`api_user` and `api_token` which can be obtained [here][tokens] for the account.
This plugin writes metrics to the [Librato][librato] service. It requires an
`api_user` and `api_token` which can be obtained [here][tokens] for your
account.
The `source_tag` option in the Configuration file is used to send contextual
information from Point Tags to the API.
information from Point Tags to the API. Besides this, the plugin currently
does not send any additional associated Point Tags.
If the point value being sent cannot be converted to a float64, the metric is
skipped.
> [!IMPORTANT]
> If the point value being sent cannot be converted to a `float64`, the metric
> is skipped.
Currently, the plugin does not send any associated Point Tags.
[metrics-api]: http://dev.librato.com/v1/metrics#metrics
⭐ Telegraf v0.2.0
🏷️ cloud, datastore
💻 all
[librato]: https://www.librato.com/
[tokens]: https://metrics.librato.com/account/api_tokens
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,12 @@
# Logz.io Output Plugin
This plugin sends metrics to Logz.io over HTTPs.
This plugin writes metrics to the [Logz.io][logzio] service via HTTP.
⭐ Telegraf v1.17.0
🏷️ cloud, datastore
💻 all
[logzio]: https://logz.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,11 +1,17 @@
# Loki Output Plugin
# Grafana Loki Output Plugin
This plugin sends logs to Loki, using metric name and tags as labels, log line
will contain all fields in `key="value"` format which is easily parsable with
`logfmt` parser in Loki.
This plugin writes logs to a [Grafana Loki][loki] instance, using the metric
name and tags as labels. The log line will contain all fields in
`key="value"` format, easily parsable with the `logfmt` parser in Loki.
Logs within each stream are sorted by timestamp before being sent to Loki.
⭐ Telegraf v1.18.0
🏷️ logging
💻 all
[loki]: https://grafana.com/loki
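As a sketch of the resulting log line, a metric with two fields might be rendered in that `key="value"` form as follows; the field names and values are invented:

```text
usage_idle="98.2" usage_user="1.1"
```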
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,8 +1,16 @@
# MongoDB Output Plugin
This plugin sends metrics to MongoDB and automatically creates the collections
as time series collections when they don't already exist. **Please note:**
Requires MongoDB 5.0+ for Time Series Collections
This plugin writes metrics to [MongoDB][mongodb] automatically creating
collections as time series collections if they don't exist.
> [!NOTE]
> This plugin requires MongoDB v5 or later for time series collections.
⭐ Telegraf v1.21.0
🏷️ datastore
💻 all
[mongodb]: https://www.mongodb.com
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,16 +1,20 @@
# MQTT Producer Output Plugin
This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt
Producer. It supports MQTT protocols `3.1.1` and `5`.
This plugin writes metrics to an [MQTT broker][mqtt] acting as an MQTT producer.
The plugin supports the MQTT protocols `3.1.1` and `5`.
## Mosquitto v2.0.12+ and `identifier rejected`
> [!NOTE]
> In v2.0.12+ of the mosquitto MQTT server, there is a [bug][mosquitto_bug]
> requiring the `keep_alive` value to be set non-zero in Telegraf. Otherwise,
> the server will return with `identifier rejected`.
> As a reference `eclipse/paho.golang` sets the `keep_alive` to 30.
In v2.0.12+ of the mosquitto MQTT server, there is a
[bug](https://github.com/eclipse/mosquitto/issues/2117) which requires the
`keep_alive` value to be set non-zero in your telegraf configuration. If not
set, the server will return with `identifier rejected`.
⭐ Telegraf v0.2.0
🏷️ messaging
💻 all
As a reference `eclipse/paho.golang` sets the `keep_alive` to 30.
[mqtt]: https://mqtt.org/
[mosquitto_bug]: https://github.com/eclipse/mosquitto/issues/2117
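Given the mosquitto behavior noted above, a configuration sketch with a non-zero `keep_alive` could look like this; the broker address is a placeholder:

```toml
[[outputs.mqtt]]
  ## MQTT broker to connect to (placeholder)
  servers = ["tcp://localhost:1883"]
  ## Non-zero keep-alive to avoid `identifier rejected` on mosquitto v2.0.12+
  keep_alive = 30
```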
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,14 @@
# NATS Output Plugin
This plugin writes to a (list of) specified NATS instance(s).
This plugin writes metrics to subjects of a set of [NATS][nats] instances in
one of the supported [data formats][data_formats].
⭐ Telegraf v1.1.0
🏷️ messaging
💻 all
[nats]: https://nats.io
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,12 @@
# Nebius Cloud Monitoring Output Plugin
This plugin will send custom metrics to
[Nebius Cloud Monitoring](https://nebius.com/il/services/monitoring).
This plugin writes metrics to the [Nebius Cloud Monitoring][nebius] service.
⭐ Telegraf v1.27.0
🏷️ cloud, datastore
💻 all
[nebius]: https://nebius.com/il/services/monitoring
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,16 @@
# New Relic Output Plugin
This plugin writes to New Relic Insights using the [Metrics API][].
This plugin writes metrics to [New Relic Insights][newrelic] using the
[Metrics API][metrics_api]. To use this plugin you have to obtain an
[Insights API Key][insights_api_key].
To use this plugin you must first obtain an [Insights API Key][].
⭐ Telegraf v1.15.0
🏷️ applications
💻 all
Telegraf minimum version: Telegraf 1.15.0
[newrelic]: https://newrelic.com
[metrics_api]: https://docs.newrelic.com/docs/data-ingest-apis/get-data-new-relic/metric-api/introduction-metric-api
[insights_api_key]: https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#user-api-key
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -41,7 +47,3 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
# If not set use values from the standard
# metric_url = "https://metric-api.newrelic.com/metric/v1"
```
[Metrics API]: https://docs.newrelic.com/docs/data-ingest-apis/get-data-new-relic/metric-api/introduction-metric-api
[Insights API Key]: https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#user-api-key


@ -1,7 +1,14 @@
# NSQ Output Plugin
This plugin writes to a specified NSQD instance, usually local to the
producer. It requires a `server` name and a `topic` name.
This plugin writes metrics to the given topic of an [NSQ][nsq] instance as a
producer in one of the supported [data formats][data_formats].
⭐ Telegraf v0.2.1
🏷️ messaging
💻 all
[nsq]: https://nsq.io
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
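A minimal configuration sketch with the required `server` and `topic`
settings; the values shown are placeholders:

```toml
[[outputs.nsq]]
  ## Placeholder nsqd address and topic
  server = "localhost:4150"
  topic = "telegraf"
  ## Any supported output data format
  data_format = "influx"
```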
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,17 @@
# OpenSearch Output Plugin
This plugin writes to [OpenSearch](https://opensearch.org/) via HTTP
This plugin writes metrics to an [OpenSearch][opensearch] instance via HTTP.
It supports OpenSearch releases v1 and v2, but future compatibility with 1.x is
not guaranteed; development will instead focus on 2.x support.
It supports OpenSearch releases from 1 and 2. Future comparability with 1.x is
not guaranteed and instead will focus on 2.x support. Consider using the
existing Elasticsearch plugin for 1.x.
> [!TIP]
> Consider using the existing Elasticsearch plugin for 1.x.
⭐ Telegraf v1.29.0
🏷️ datastore, logging
💻 all
[opensearch]: https://opensearch.org/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,13 @@
# OpenTelemetry Output Plugin
This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers
and agents via gRPC.
This plugin writes metrics to [OpenTelemetry][opentelemetry] servers and agents
via gRPC.
⭐ Telegraf v1.20.0
🏷️ logging, messaging
💻 all
[opentelemetry]: https://opentelemetry.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,14 +1,13 @@
# OpenTSDB Output Plugin
This plugin writes to an OpenTSDB instance using either the "telnet" or Http
mode.
This plugin writes metrics to an [OpenTSDB][opentsdb] instance using either
the telnet or HTTP mode. Using the HTTP API is recommended since OpenTSDB 2.0.
Using the Http API is the recommended way of writing metrics since OpenTSDB 2.0
To use Http mode, set useHttp to true in config. You can also control how many
metrics is sent in each http request by setting batchSize in config.
⭐ Telegraf v0.1.9
🏷️ datastore
💻 all
See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for
details.
[opentsdb]: http://opentsdb.net/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,14 +1,21 @@
# Parquet Output Plugin
This plugin writes metrics to parquet files. By default, the parquet
output groups metrics by metric name and write those metrics all to the same
file. If a metric schema does not match then metrics are dropped.
This plugin writes metrics to [parquet][parquet] files. By default, metrics are
grouped by metric name and all written to the same file.
To lean more about Parquet check out the [Parquet docs][] as well as a blog
post on [Querying Parquet][].
> [!IMPORTANT]
> If a metric schema does not match the schema in the file it will be dropped.
[Parquet docs]: https://parquet.apache.org/docs/
[Querying Parquet]: https://www.influxdata.com/blog/querying-parquet-millisecond-latency/
To learn more about the parquet format, check out the [parquet docs][docs] as
well as a blog post on [querying parquet][querying].
⭐ Telegraf v1.32.0
🏷️ datastore
💻 all
[parquet]: https://parquet.apache.org
[docs]: https://parquet.apache.org/docs/
[querying]: https://www.influxdata.com/blog/querying-parquet-millisecond-latency/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,13 @@
# PostgreSQL Output Plugin
This output plugin writes metrics to PostgreSQL (or compatible database).
The plugin manages the schema, automatically updating missing columns.
This plugin writes metrics to a [PostgreSQL][postgresql] (or compatible) server
managing the schema and automatically updating missing columns.
⭐ Telegraf v1.24.0
🏷️ datastore
💻 all
[postgresql]: https://www.postgresql.org/
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,14 @@
# Prometheus Output Plugin
This plugin starts a [Prometheus](https://prometheus.io/) Client, it exposes all
metrics on `/metrics` (default) to be polled by a Prometheus server.
This plugin starts a [Prometheus][prometheus] client and exposes the written
metrics on a `/metrics` endpoint by default. This endpoint can then be polled
by a Prometheus server.
⭐ Telegraf v0.2.1
🏷️ applications
💻 all
[prometheus]: https://prometheus.io
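A minimal configuration sketch; the listen address is a placeholder and option
names should be checked against the plugin's sample configuration:

```toml
[[outputs.prometheus_client]]
  ## Placeholder listen address; metrics are served on /metrics by default
  listen = ":9273"
```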
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,12 @@
# RedisTimeSeries Producer Output Plugin
# Redis Time Series Output Plugin
The RedisTimeSeries output plugin writes metrics to the RedisTimeSeries server.
This plugin writes metrics to a [Redis time-series][redis] server.
⭐ Telegraf v1.0.0
🏷️ datastore
💻 all
[redis]: https://redis.io/timeseries
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,13 +1,18 @@
# Remote File Output Plugin
This plugin writes telegraf metrics to files in remote locations using the
[rclone library](https://rclone.org). Currently the following backends are
supported:
This plugin writes metrics to files in a remote location using the
[rclone library][rclone]. Currently the following backends are supported:
- `local`: [Local filesystem](https://rclone.org/local/)
- `s3`: [Amazon S3 storage providers](https://rclone.org/s3/)
- `sftp`: [Secure File Transfer Protocol](https://rclone.org/sftp/)
⭐ Telegraf v1.32.0
🏷️ datastore
💻 all
[rclone]: https://rclone.org
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support


@ -1,6 +1,12 @@
# Riemann Output Plugin
This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.
This plugin writes metrics to the [Riemann][riemann] service via TCP or UDP.
⭐ Telegraf v1.3.0
🏷️ datastore
💻 all
[riemann]: http://riemann.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,12 @@
# Sensu Go Output Plugin
This plugin writes metrics events to [Sensu Go](https://sensu.io) via its
HTTP events API.
This plugin writes metrics to [Sensu Go][sensu] via its HTTP events API.
⭐ Telegraf v1.18.0
🏷️ applications
💻 all
[sensu]: https://sensu.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,10 @@
# SignalFx Output Plugin
The SignalFx output plugin sends metrics to [SignalFx][docs].
This plugin writes metrics to [SignalFx][docs].
⭐ Telegraf v1.18.0
🏷️ applications
💻 all
[docs]: https://docs.signalfx.com/en/latest/


@ -1,10 +1,13 @@
# Socket Writer Output Plugin
The socket writer plugin can write to a UDP, TCP, or unix socket.
This plugin writes metrics to a network service, e.g. via UDP or TCP, in one of
the supported [data formats][data_formats].
It can output data in any of the [supported output formats][formats].
⭐ Telegraf v1.3.0
🏷️ applications, networking
💻 all
[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
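A minimal configuration sketch; the address is a placeholder:

```toml
[[outputs.socket_writer]]
  ## Placeholder address; udp://, tcp:// and unix:// style schemes apply
  address = "tcp://127.0.0.1:8094"
  ## Any supported output data format
  data_format = "influx"
```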
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,18 +1,21 @@
# SQL Output Plugin
The SQL output plugin saves Telegraf metric data to an SQL database.
This plugin writes metrics to a supported SQL database using a simple,
hard-coded database schema. There is a table for each metric type with the
table name corresponding to the metric name. There is a column per field
and a column per tag with an optional column for the metric timestamp.
The plugin uses a simple, hard-coded database schema. There is a table for each
metric type and the table name is the metric name. There is a column per field
and a column per tag. There is an optional column for the metric timestamp.
A row is written for every input metric. This means multiple metrics are never
A row is written for every metric. This means multiple metrics are never
merged into a single row, even if they have the same metric name, tags, and
timestamp.
The plugin uses Golang's generic "database/sql" interface and third party
drivers. See the driver-specific section below for a list of supported drivers
and details. Additional drivers may be added in future Telegraf releases.
drivers. See the driver-specific section for a list of supported drivers
and details.
⭐ Telegraf v1.19.0
🏷️ datastore
💻 all
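A minimal, hypothetical configuration sketch; the driver name and DSN are
placeholders, see the driver-specific section for supported values and
connection-string formats:

```toml
[[outputs.sql]]
  ## Hypothetical driver and connection string
  driver = "sqlite"
  data_source_name = "file:/tmp/telegraf.db"
```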
## Getting started


@ -1,25 +1,23 @@
# Stackdriver Google Cloud Monitoring Output Plugin
# Google Cloud Monitoring Output Plugin
This plugin writes to the [Google Cloud Monitoring API][stackdriver] (formerly
Stackdriver) and requires [authentication][] with Google Cloud using either a
service account or user credentials
This plugin writes metrics to a `project` in
[Google Cloud Monitoring][stackdriver] (formerly called Stackdriver).
[Authentication][authentication] with Google Cloud is required using either a
service account or user credentials.
This plugin accesses APIs which are [chargeable][pricing]; you might incur
costs.
> [!IMPORTANT]
> This plugin accesses APIs which are [chargeable][pricing] and might incur
> costs.
Requires `project` to specify where Stackdriver metrics will be delivered to.
By default, Metrics are grouped by the `namespace` variable and metric key -
By default, metrics are grouped by the `namespace` variable and metric key,
eg: `custom.googleapis.com/telegraf/system/load5`. However, this is not the
best practice. Setting `metric_name_format = "official"` will produce a more
easily queried format of: `metric_type_prefix/[namespace_]name_key/kind`. If
the global namespace is not set, it is omitted as well.
[Resource type](https://cloud.google.com/monitoring/api/resources) is configured
by the `resource_type` variable (default `global`).
Additional resource labels can be configured by `resource_labels`. By default
the required `project_id` label is always set to the `project` variable.
⭐ Telegraf v1.9.0
🏷️ cloud, datastore
💻 all
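A hedged configuration sketch illustrating the naming options above; the
project is a placeholder:

```toml
[[outputs.stackdriver]]
  ## Placeholder GCP project receiving the metrics
  project = "my-gcp-project"
  namespace = "telegraf"
  ## Produces names like metric_type_prefix/[namespace_]name_key/kind
  metric_name_format = "official"
  ## Default resource type
  resource_type = "global"
```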
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -112,9 +110,6 @@ of the stackdriver API. This cache is not GCed: if you remove a large number of
counters from the input side, you may wish to restart telegraf to clear it.
[basicstats]: /plugins/aggregators/basicstats/README.md
[stackdriver]: https://cloud.google.com/monitoring/api/v3/
[authentication]: https://cloud.google.com/docs/authentication/getting-started
[pricing]: https://cloud.google.com/stackdriver/pricing#google-clouds-operations-suite-pricing


@ -1,9 +1,17 @@
# STOMP Producer Output Plugin
# ActiveMQ STOMP Output Plugin
This plugin writes to a [Active MQ Broker](http://activemq.apache.org/)
for STOMP <http://stomp.github.io>.
This plugin writes metrics to an [ActiveMQ broker][activemq] via [STOMP][stomp]
and also supports [Amazon MQ][amazonmq] brokers. Metrics can be written in one
of the supported [data formats][data_formats].
It also support Amazon MQ <https://aws.amazon.com/amazon-mq/>
⭐ Telegraf v1.24.0
🏷️ messaging
💻 all
[activemq]: http://activemq.apache.org/
[stomp]: https://stomp.github.io
[amazonmq]: https://aws.amazon.com/amazon-mq
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,18 +1,17 @@
# Sumo Logic Output Plugin
This plugin sends metrics to [Sumo Logic HTTP Source][http-source] in HTTP
messages, encoded using one of the output data formats.
This plugin writes metrics to a [Sumo Logic HTTP Source][sumologic] using one
of the following data formats:
Telegraf minimum version: Telegraf 1.16.0
- `graphite` for Content-Type of `application/vnd.sumologic.graphite`
- `carbon2` for Content-Type of `application/vnd.sumologic.carbon2`
- `prometheus` for Content-Type of `application/vnd.sumologic.prometheus`
Currently metrics can be sent using one of the following data formats, supported
by Sumologic HTTP Source:
⭐ Telegraf v1.16.0
🏷️ logging
💻 all
* `graphite` - for Content-Type of `application/vnd.sumologic.graphite`
* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2`
* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus`
[http-source]: https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source
[sumologic]: https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source
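A minimal configuration sketch; the collector URL is a placeholder obtained
from your hosted HTTP source:

```toml
[[outputs.sumologic]]
  ## Placeholder collector endpoint of your HTTP source
  url = "https://<your-collector-endpoint>"
  ## One of: graphite, carbon2, prometheus
  data_format = "carbon2"
```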
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,18 +1,24 @@
# Syslog Output Plugin
The syslog output plugin sends syslog messages transmitted over
[UDP](https://tools.ietf.org/html/rfc5426) or
[TCP](https://tools.ietf.org/html/rfc6587) or
[TLS](https://tools.ietf.org/html/rfc5425), with or without the octet counting
framing.
This plugin writes metrics as syslog messages via UDP in
[RFC5426 format][rfc5426], via TCP in [RFC6587 format][rfc6587], or via TLS in
[RFC5425 format][rfc5425], with or without the octet counting framing.
Syslog messages are formatted according to [RFC
5424](https://tools.ietf.org/html/rfc5424). Per this RFC there are limitations
to the field sizes when sending messages. See the [Syslog Message Format][]
section of the RFC. Sending messages beyond these sizes may get dropped by a
strict receiver silently.
> [!IMPORTANT]
> Syslog messages are formatted according to [RFC5424][rfc5424], which limits
> field sizes as described in the [syslog message format][msgformat] section
> of the RFC. Messages exceeding these limits may be silently dropped by a
> strict receiver.
[Syslog Message Format]: https://datatracker.ietf.org/doc/html/rfc5424#section-6
⭐ Telegraf v1.11.0
🏷️ logging
💻 all
[rfc5426]: https://tools.ietf.org/html/rfc5426
[rfc6587]: https://tools.ietf.org/html/rfc6587
[rfc5425]: https://tools.ietf.org/html/rfc5425
[rfc5424]: https://tools.ietf.org/html/rfc5424
[msgformat]: https://datatracker.ietf.org/doc/html/rfc5424#section-6
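A minimal configuration sketch; the address is a placeholder and the scheme
selects the transport per the RFCs above:

```toml
[[outputs.syslog]]
  ## Placeholder receiver address; udp://, tcp:// or tls:// schemes apply
  address = "tcp://127.0.0.1:6514"
```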
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,6 +1,12 @@
# Timestream Output Plugin
# Amazon Timestream Output Plugin
The Timestream output plugin writes metrics to the [Amazon Timestream] service.
This plugin writes metrics to the [Amazon Timestream][timestream] service.
⭐ Telegraf v1.16.0
🏷️ cloud, datastore
💻 all
[timestream]: https://aws.amazon.com/timestream
## Authentication
@ -13,11 +19,18 @@ API endpoint. In the following order the plugin will attempt to authenticate.
credentials are evaluated from subsequent rules). The `endpoint_url` attribute
is used only for Timestream service. When fetching credentials, STS global
endpoint will be used.
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. [Assumed credentials via STS][sts_credentials] if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules). The `endpoint_url` attribute is used only for Timestream service. When fetching credentials, STS global endpoint will be used.
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables]
1. [Shared Credentials]
1. [EC2 Instance Profile]
1. [Environment Variables][env_vars]
1. [Shared Credentials][shared_credentials]
1. [EC2 Instance Profile][ec2_profile]
[sts_credentials]: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials/stscreds
[env_vars]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[shared_credentials]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[ec2_profile]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -279,11 +292,3 @@ and store each field in a separate table row. In that case:
actual value of that property.
`<measure_name_for_multi_measure_records>` represents the actual value of
that property.
### References
- [Amazon Timestream](https://aws.amazon.com/timestream/)
- [Assumed credentials via STS](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials/stscreds)
- [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
- [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
- [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)


@ -1,6 +1,12 @@
# Warp10 Output Plugin
The `warp10` output plugin writes metrics to [Warp 10][].
This plugin writes metrics to the [Warp 10][warp10] service.
⭐ Telegraf v1.14.0
🏷️ cloud, datastore
💻 all
[warp10]: https://www.warp10.io
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
@ -64,5 +70,4 @@ string types directly. Unsigned integer fields will be capped to the largest
Timestamps are sent in microsecond precision.
[Warp 10]: https://www.warp10.io
[Geo Time Series]: https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/02_GTS_input_format


@ -1,7 +1,13 @@
# Wavefront Output Plugin
This plugin writes to a [Wavefront](https://www.wavefront.com) instance or a
Wavefront Proxy instance over HTTP or HTTPS.
This plugin writes metrics to a [Wavefront][wavefront] instance or a Wavefront
Proxy instance over HTTP or HTTPS.
⭐ Telegraf v1.5.0
🏷️ applications, cloud
💻 all
[wavefront]: https://www.wavefront.com
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,10 +1,13 @@
# Websocket Output Plugin
This plugin can write to a WebSocket endpoint.
This plugin writes metrics to a WebSocket endpoint in one of the supported
[data formats][data_formats].
It can output data in any of the [supported output formats][formats].
⭐ Telegraf v1.19.0
🏷️ applications, web
💻 all
[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
[data_formats]: /docs/DATA_FORMATS_OUTPUT.md
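A minimal configuration sketch; the endpoint URL is a placeholder:

```toml
[[outputs.websocket]]
  ## Placeholder endpoint (ws:// or wss://)
  url = "ws://127.0.0.1:3000/telegraf"
  ## Any supported output data format
  data_format = "json"
```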
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,7 +1,12 @@
# Yandex Cloud Monitoring Output Plugin
This plugin will send custom metrics to [Yandex Cloud
Monitoring](https://cloud.yandex.com/services/monitoring).
This plugin writes metrics to the [Yandex Cloud Monitoring][yandex] service.
⭐ Telegraf v1.17.0
🏷️ cloud
💻 all
[yandex]: https://cloud.yandex.com/services/monitoring
## Global configuration options <!-- @/docs/includes/plugin_config.md -->


@ -1,21 +1,16 @@
# Zabbix Output Plugin
This plugin send metrics to [Zabbix](https://www.zabbix.com/) via
[traps][traps].
This plugin writes metrics to [Zabbix][zabbix] via [traps][traps]. It has been
tested with versions v3.0, v4.0 and v6.0 but should work with newer versions
of Zabbix as long as the protocol doesn't change.
It has been tested with versions
[3.0](https://www.zabbix.com/documentation/3.0/en/manual/appendix/items/trapper)
,
[4.0](https://www.zabbix.com/documentation/4.0/en/manual/appendix/items/trapper)
and
[6.0](https://www.zabbix.com/documentation/6.0/en/manual/appendix/items/trapper)
.
⭐ Telegraf v1.30.0
🏷️ datastore
💻 all
[zabbix]: https://www.zabbix.com/
[traps]: https://www.zabbix.com/documentation/current/en/manual/appendix/items/trapper
It should work with newer versions as long as Zabbix does not change the
protocol.
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
In addition to the plugin-specific configuration settings, plugins support