chore: Fix readme linter errors for output plugins (#10951)

This commit is contained in:
reimda 2022-04-21 09:45:47 -06:00 committed by GitHub
parent dc95d22272
commit 6ba3b1e91e
44 changed files with 716 additions and 407 deletions

View File

@ -1,10 +1,11 @@
# Amon Output Plugin
This plugin writes to [Amon](https://www.amon.cx)
and requires a `serverkey` and `amoninstance` URL which can be obtained [here](https://www.amon.cx/docs/monitoring/)
for the account.
This plugin writes to [Amon](https://www.amon.cx) and requires a `serverkey`
and `amoninstance` URL which can be obtained
[here](https://www.amon.cx/docs/monitoring/) for the account.
If the point value being sent cannot be converted to a float64, the metric is skipped.
If the point value being sent cannot be converted to a float64, the metric is
skipped.
Metrics are grouped by converting any `_` characters to `.` in the Point Name.
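As a quick sketch of how this might be wired up (option names follow the plugin's sample configuration; the key and URL are placeholders):
```toml
[[outputs.amon]]
  ## Amon server key for the account (placeholder value)
  server_key = "my-server-key"
  ## Amon instance URL for the account (placeholder value)
  amon_instance = "https://youramoninstance"
  ## Connection timeout
  # timeout = "5s"
```
With this in place, a point named `system_load1` would, per the rule above, be grouped as `system.load1`.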

View File

@ -1,6 +1,7 @@
# AMQP Output Plugin
This plugin writes to an AMQP 0-9-1 Exchange, a prominent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
This plugin writes to an AMQP 0-9-1 Exchange, a prominent implementation of this
protocol being [RabbitMQ](https://www.rabbitmq.com/).
This plugin does not bind the exchange to a queue.
@ -111,11 +112,11 @@ For an introduction to AMQP see:
### Routing
If `routing_tag` is set, and the tag is defined on the metric, the value of
the tag is used as the routing key. Otherwise the value of `routing_key` is
used directly. If both are unset the empty string is used.
If `routing_tag` is set, and the tag is defined on the metric, the value of the
tag is used as the routing key. Otherwise the value of `routing_key` is used
directly. If both are unset the empty string is used.
Exchange types that do not use a routing key, `direct` and `header`, always
use the empty string as the routing key.
Exchange types that do not use a routing key, `direct` and `header`, always use
the empty string as the routing key.
Metrics are published in batches based on the final routing key.
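For illustration, the routing behaviour described above could be configured like this (a sketch based on the plugin's sample configuration; broker, exchange and tag values are placeholders):
```toml
[[outputs.amqp]]
  brokers = ["amqp://localhost:5672/influxdb"]
  exchange = "telegraf"
  ## If this tag is present on a metric, its value is used as the routing key
  routing_tag = "host"
  ## Static routing key used when the routing tag is unset or missing
  # routing_key = ""
```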

View File

@ -1,6 +1,7 @@
# Application Insights Output Plugin
This plugin writes telegraf metrics to [Azure Application Insights](https://azure.microsoft.com/en-us/services/application-insights/).
This plugin writes telegraf metrics to [Azure Application
Insights](https://azure.microsoft.com/en-us/services/application-insights/).
## Configuration
@ -39,7 +40,8 @@ on the measurement name and field.
foo,host=a first=42,second=43 1525293034000000000
```
In the special case of a single field named `value`, a single telemetry record is created, named using only the measurement name.
In the special case of a single field named `value`, a single telemetry record
is created, named using only the measurement name.
**Example:** Create a telemetry record `bar`:

View File

@ -1,12 +1,17 @@
# Azure Data Explorer output plugin
# Azure Data Explorer Output Plugin
This plugin writes data collected by any of the Telegraf input plugins to [Azure Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/).
Azure Data Explorer is a distributed, columnar store, purpose built for any type of logs, metrics and time series data.
This plugin writes data collected by any of the Telegraf input plugins to [Azure
Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/).
Azure Data Explorer is a distributed, columnar store, purpose built for any type
of logs, metrics and time series data.
## Prerequisites
- [Create Azure Data Explorer cluster and database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
- VM/compute or container to host Telegraf - it could be hosted locally where an app/service to be monitored is deployed or remotely on a dedicated monitoring compute/container.
- [Create Azure Data Explorer cluster and
database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
- VM/compute or container to host Telegraf - it could be hosted locally where an
app/service to be monitored is deployed or remotely on a dedicated monitoring
compute/container.
## Configuration
@ -40,21 +45,40 @@ Azure Data Explorer is a distributed, columnar store, purpose built for any type
## Metrics Grouping
Metrics can be grouped in two ways to be sent to Azure Data Explorer. To specify which metric grouping type the plugin should use, the respective value should be given to the `metrics_grouping_type` in the config file. If no value is given to `metrics_grouping_type`, by default, the metrics will be grouped using `TablePerMetric`.
Metrics can be grouped in two ways to be sent to Azure Data Explorer. To specify
which metric grouping type the plugin should use, the respective value should be
given to the `metrics_grouping_type` in the config file. If no value is given to
`metrics_grouping_type`, by default, the metrics will be grouped using
`TablePerMetric`.
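A minimal sketch of the relevant settings (option names as in the plugin's sample configuration; endpoint and database values are placeholders):
```toml
[[outputs.azure_data_explorer]]
  endpoint_url = "https://mycluster.westeurope.kusto.windows.net"
  database = "mydb"
  ## Default grouping: one table per metric name
  metrics_grouping_type = "TablePerMetric"
  ## Only used with the SingleTable grouping described below
  # table_name = "telegraf"
```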
### TablePerMetric
The plugin will group the metrics by the metric name, and will send each group of metrics to an Azure Data Explorer table. If the table doesn't exist, the plugin will create it; if the table exists, the plugin will try to merge the Telegraf metric schema into the existing table. For more information about the merge process, check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
The plugin will group the metrics by the metric name, and will send each group
of metrics to an Azure Data Explorer table. If the table doesn't exist, the
plugin will create it; if the table exists, the plugin will try to merge the
Telegraf metric schema into the existing table. For more information about the
merge process, check the [`.create-merge` documentation][create-merge].
The table name will match the `name` property of the metric; this means that the name of the metric should comply with the Azure Data Explorer table naming constraints in case you plan to add a prefix to the metric name.
The table name will match the `name` property of the metric; this means that the
name of the metric should comply with the Azure Data Explorer table naming
constraints in case you plan to add a prefix to the metric name.
[create-merge]: https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command
### SingleTable
The plugin will send all the metrics received to a single Azure Data Explorer table. The name of the table must be supplied via `table_name` in the config file. If the table doesn't exist, the plugin will create it; if the table exists, the plugin will try to merge the Telegraf metric schema into the existing table. For more information about the merge process, check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
The plugin will send all the metrics received to a single Azure Data Explorer
table. The name of the table must be supplied via `table_name` in the config
file. If the table doesn't exist, the plugin will create it; if the table
exists, the plugin will try to merge the Telegraf metric schema into the
existing table. For more information about the merge process, check the
[`.create-merge` documentation][create-merge].
## Tables Schema
The schema of the Azure Data Explorer table will match the structure of the Telegraf `Metric` object. The corresponding Azure Data Explorer command generated by the plugin would be like the following:
The schema of the Azure Data Explorer table will match the structure of the
Telegraf `Metric` object. The corresponding Azure Data Explorer command
generated by the plugin would be like the following:
```text
.create-merge table ['table-name'] (['fields']:dynamic, ['name']:string, ['tags']:dynamic, ['timestamp']:datetime)
@ -66,38 +90,51 @@ The corresponding table mapping would be like the following:
.create-or-alter table ['table-name'] ingestion json mapping 'table-name_mapping' '[{"column":"fields", "Properties":{"Path":"$[\'fields\']"}},{"column":"name", "Properties":{"Path":"$[\'name\']"}},{"column":"tags", "Properties":{"Path":"$[\'tags\']"}},{"column":"timestamp", "Properties":{"Path":"$[\'timestamp\']"}}]'
```
**Note**: This plugin will automatically create Azure Data Explorer tables and corresponding table mapping as per the above-mentioned commands.
**Note**: This plugin will automatically create Azure Data Explorer tables and
corresponding table mapping as per the above-mentioned commands.
## Authentication
### Supported Authentication Methods
This plugin provides several types of authentication. The plugin will check the existence of several specific environment variables, and consequently will choose the right method.
This plugin provides several types of authentication. The plugin will check the
existence of several specific environment variables, and consequently will
choose the right method.
These methods are:
1. AAD Application Tokens (Service Principals with secrets or certificates).
For guidance on how to create and register an App in Azure Active Directory check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application), and for more information on the Service Principals check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals).
For guidance on how to create and register an App in Azure Active Directory
check [this article][register], and for more information on the Service
Principals check [this article][principal].
2. AAD User Tokens
- Allows Telegraf to authenticate like a user. This method is mainly used for development purposes only.
- Allows Telegraf to authenticate like a user. This method is mainly used
for development purposes only.
3. Managed Service Identity (MSI) token
- If you are running Telegraf from an Azure VM or infrastructure, then this is the preferred authentication method.
- If you are running Telegraf from an Azure VM or infrastructure, then this is
the preferred authentication method.
[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
[register]: https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application
Whichever method is used, the designated Principal needs to be assigned the `Database User` role on the Database level in Azure Data Explorer. This role will
allow the plugin to create the required tables and ingest data into them.
If `create_tables=false` then the designated principal only needs the `Database Ingestor` role.
[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals
Whichever method is used, the designated Principal needs to be assigned the
`Database User` role on the Database level in Azure Data Explorer. This role
will allow the plugin to create the required tables and ingest data into them.
If `create_tables=false` then the designated principal only needs the
`Database Ingestor` role.
### Configurations of the chosen Authentication Method
The plugin will authenticate using the first available of the
following configurations; **it's important to understand that the assessment, and consequently the choice of authentication method, happens in the order listed below**:
The plugin will authenticate using the first available of the following
configurations; **it's important to understand that the assessment, and
consequently the choice of authentication method, happens in the order listed
below**:
1. **Client Credentials**: Azure AD Application ID and Secret.
@ -125,14 +162,16 @@ following configurations, **it's important to understand that the assessment, an
4. **Azure Managed Service Identity**: Delegate credential management to the
platform. Requires that code is running in Azure, e.g. on a VM. All
configuration is handled by Azure. See [Azure Managed Service Identity][msi]
for more details. Only available when using the [Azure Resource Manager][arm].
for more details. Only available when using the [Azure Resource
Manager][arm].
[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
## Querying data collected in Azure Data Explorer
Examples of data transformations and queries that would be useful to gain insights:
Examples of data transformations and queries that would be useful to gain
insights:
### Using SQL input plugin
@ -143,9 +182,12 @@ name | tags | timestamp | fields
sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149}
sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464}
Since the collected metrics object is of a complex type, "fields" and "tags" are stored as dynamic data types. There are multiple ways to query this data:
Since the collected metrics object is of a complex type, "fields" and "tags" are
stored as dynamic data types. There are multiple ways to query this data:
1. Query JSON attributes directly: Azure Data Explorer provides the ability to query JSON data in raw format without parsing it, so JSON attributes can be queried directly in the following way:
1. Query JSON attributes directly: Azure Data Explorer provides the ability to
query JSON data in raw format without parsing it, so JSON attributes can be
queried directly in the following way:
```text
Tablename
@ -157,9 +199,14 @@ Since collected metrics object is of complex type so "fields" and "tags" are sto
| distinct tostring(tags.database_name)
```
**Note** - This approach could have a performance impact in the case of large volumes of data; use the approach mentioned below for such cases.
**Note** - This approach could have a performance impact in the case of large
volumes of data; use the approach mentioned below for such cases.
1. Use [Update policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy): Transform dynamic data type columns using an update policy. This is the recommended, performant way of querying over large volumes of data compared to querying directly over JSON attributes:
1. Use [Update
policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy):
Transform dynamic data type columns using an update policy. This is the
recommended, performant way of querying over large volumes of data compared
to querying directly over JSON attributes:
```json
// Function to transform data
@ -186,9 +233,15 @@ name | tags | timestamp | fields
syslog|{"appname":"azsecmond","facility":"user","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:36:44Z|{"facility_code":1,"message":" 2021/09/20 14:36:44.890110 Failed to connect to mdsd: dial unix /var/run/mdsd/default_djson.socket: connect: no such file or directory","procid":"2184","severity_code":6,"timestamp":"1632148604890477000","version":1}
syslog|{"appname":"CRON","facility":"authpriv","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:37:01Z|{"facility_code":10,"message":" pam_unix(cron:session): session opened for user root by (uid=0)","procid":"26446","severity_code":6,"timestamp":"1632148621120781000","version":1}
There are multiple ways to flatten dynamic columns using the 'extend' or 'bag_unpack' operator. You can use either of these ways in the above-mentioned update policy function, 'Transform_TargetTableName()'.
There are multiple ways to flatten dynamic columns using the 'extend' or
'bag_unpack' operator. You can use either of these ways in the above-mentioned
update policy function, 'Transform_TargetTableName()'.
- Use the [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator) operator - This is the recommended approach compared to 'bag_unpack' as it is faster and more robust. Even if the schema changes, it will not break queries or dashboards.
- Use the
[extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator)
operator - This is the recommended approach compared to 'bag_unpack' as it is
faster and more robust. Even if the schema changes, it will not break queries or
dashboards.
```text
Tablename
@ -198,7 +251,10 @@ There are multiple ways to flatten dynamic columns using 'extend' or 'bag_unpack
| project-away fields, tags
```
- Use the [bag_unpack plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin) to unpack the dynamic type columns automatically. This method could lead to issues if the source schema changes, as it dynamically expands columns.
- Use the [bag_unpack
plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin)
to unpack the dynamic type columns automatically. This method could lead to
issues if the source schema changes, as it dynamically expands columns.
```text
Tablename

View File

@ -1,4 +1,4 @@
# Azure Monitor
# Azure Monitor Output Plugin
__The Azure Monitor custom metrics service is currently in preview and not
available in a subset of Azure regions.__
@ -9,10 +9,10 @@ output plugin will automatically aggregate metrics into one minute buckets,
which are then sent to Azure Monitor on every flush interval.
The metrics from each input plugin will be written to a separate Azure Monitor
namespace, prefixed with `Telegraf/` by default. The field name for each
metric is written as the Azure Monitor metric name. All field values are
written as a summarized set that includes: min, max, sum, count. Tags are
written as a dimension on each Azure Monitor metric.
namespace, prefixed with `Telegraf/` by default. The field name for each metric
is written as the Azure Monitor metric name. All field values are written as a
summarized set that includes: min, max, sum, count. Tags are written as a
dimension on each Azure Monitor metric.
## Configuration
@ -50,22 +50,24 @@ written as a dimension on each Azure Monitor metric.
## Setup
1. [Register the `microsoft.insights` resource provider in your Azure subscription][resource provider].
1. If using Managed Service Identities to authenticate an Azure VM,
[enable system-assigned managed identity][enable msi].
1. Use a region that supports Azure Monitor Custom Metrics.
For regions with Custom Metrics support, an endpoint will be available with
the format `https://<region>.monitoring.azure.com`.
1. [Register the `microsoft.insights` resource provider in your Azure
subscription][resource provider].
1. If using Managed Service Identities to authenticate an Azure VM, [enable
system-assigned managed identity][enable msi].
1. Use a region that supports Azure Monitor Custom Metrics. For regions with
Custom Metrics support, an endpoint will be available with the format
`https://<region>.monitoring.azure.com`.
[resource provider]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services
[enable msi]: https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/qs-configure-portal-windows-vm
### Region and Resource ID
The plugin will attempt to discover the region and resource ID using the Azure
VM Instance Metadata service. If Telegraf is not running on a virtual machine
or the VM Instance Metadata service is not available, the following variables
are required for the output to function.
VM Instance Metadata service. If Telegraf is not running on a virtual machine or
the VM Instance Metadata service is not available, the following variables are
required for the output to function.
* region
* resource_id
@ -76,7 +78,9 @@ This plugin uses one of several different types of authentication methods. The
preferred authentication methods are different from the *order* in which each
authentication is checked. Here are the preferred authentication methods:
1. Managed Service Identity (MSI) token: This is the preferred authentication method. Telegraf will automatically authenticate using this method when running on Azure VMs.
1. Managed Service Identity (MSI) token: This is the preferred authentication
method. Telegraf will automatically authenticate using this method when
running on Azure VMs.
2. AAD Application Tokens (Service Principals)
* Primarily useful if Telegraf is writing metrics for other resources.
@ -92,10 +96,11 @@ authentication is checked. Here are the preferred authentication methods:
[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
The plugin will authenticate using the first available of the
following configurations:
The plugin will authenticate using the first available of the following
configurations:
1. **Client Credentials**: Azure AD Application ID and Secret. Set the following environment variables:
1. **Client Credentials**: Azure AD Application ID and Secret. Set the following
environment variables:
* `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
* `AZURE_CLIENT_ID`: Specifies the app client ID to use.
@ -119,7 +124,8 @@ following configurations:
1. **Azure Managed Service Identity**: Delegate credential management to the
platform. Requires that code is running in Azure, e.g. on a VM. All
configuration is handled by Azure. See [Azure Managed Service Identity][msi]
for more details. Only available when using the [Azure Resource Manager][arm].
for more details. Only available when using the [Azure Resource
Manager][arm].
[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
@ -140,5 +146,7 @@ dimension limit.
To convert only a subset of string-typed fields as dimensions, enable
`strings_as_dimensions` and use the [`fieldpass` or `fielddrop`
processors](https://docs.influxdata.com/telegraf/v1.7/administration/configuration/#processor-configuration)
to limit the string-typed fields that are sent to the plugin.
processors][conf-processor] to limit the string-typed fields that are sent to
the plugin.
[conf-processor]: https://docs.influxdata.com/telegraf/v1.7/administration/configuration/#processor-configuration
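For example, a sketch along these lines (field names are placeholders) enables string dimensions while limiting which fields reach the output:
```toml
[[outputs.azure_monitor]]
  ## Send string-typed fields as dimensions on each Azure Monitor metric
  strings_as_dimensions = true
  ## Restrict which fields are sent to this output (placeholder field names)
  fieldpass = ["value", "status"]
```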

View File

@ -1,9 +1,12 @@
# Google BigQuery Output Plugin
This plugin writes to [Google Cloud BigQuery](https://cloud.google.com/bigquery) and requires [authentication](https://cloud.google.com/bigquery/docs/authentication)
with Google Cloud using either a service account or user credentials.
This plugin writes to [Google Cloud
BigQuery](https://cloud.google.com/bigquery) and requires
[authentication](https://cloud.google.com/bigquery/docs/authentication) with
Google Cloud using either a service account or user credentials.
Be aware that this plugin accesses APIs that are [chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.
Be aware that this plugin accesses APIs that are
[chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.
## Configuration
@ -28,23 +31,30 @@ Be aware that this plugin accesses APIs that are [chargeable](https://cloud.goog
Requires `project` to specify where BigQuery entries will be persisted.
Requires `dataset` to specify under which BigQuery dataset the corresponding metrics tables reside.
Requires `dataset` to specify under which BigQuery dataset the corresponding
metrics tables reside.
Each metric should have a corresponding table in BigQuery.
The schema of the table on BigQuery:
Each metric should have a corresponding table in BigQuery. The schema of the
table on BigQuery:
* Should contain the field `timestamp` which is the timestamp of the telegraf metric
* Should contain the metric's tags with the same name and the column type should be set to string.
* Should contain the metric's fields with the same name and the column type should match the field type.
* Should contain the field `timestamp` which is the timestamp of the telegraf
metric
* Should contain the metric's tags with the same name and the column type should
be set to string.
* Should contain the metric's fields with the same name and the column type
should match the field type.
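As an illustration of the schema rules above (the measurement, tag and field names below are made up), a metric such as:
```text
disk_usage,host=server01 used_percent=23.5,total=1048576i 1632148604000000000
```
would be expected to land in a table named `disk_usage` that has a `timestamp` column, a string column `host`, and columns `used_percent` and `total` whose types match the field types.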
## Restrictions
Avoid hyphens in BigQuery table names; the underlying SDK cannot handle streaming inserts to tables with hyphens.
Avoid hyphens in BigQuery table names; the underlying SDK cannot handle streaming
inserts to tables with hyphens.
In cases of metrics with hyphens, please use the [Rename Processor Plugin](https://github.com/influxdata/telegraf/tree/master/plugins/processors/rename).
In cases of metrics with hyphens, please use the [Rename Processor
Plugin][rename].
For metrics with hyphens, hyphens are by default replaced with underscores (_).
This can be altered using the `replace_hyphen_to` configuration property.
For metrics with hyphens, hyphens are by default replaced with
underscores (_). This can be altered using the `replace_hyphen_to`
configuration property.
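A minimal configuration sketch (project and dataset values are placeholders; option names follow the plugin's sample configuration):
```toml
[[outputs.bigquery]]
  ## GCP project and BigQuery dataset to write into (placeholder values)
  project = "my-gcp-project"
  dataset = "telegraf"
  ## Character used to replace hyphens in metric names
  # replace_hyphen_to = "_"
```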
Available data type options are:
@ -53,9 +63,13 @@ Available data type options are:
* string
* boolean
All field naming restrictions that apply to BigQuery should apply to the measurements to be imported.
All field naming restrictions that apply to BigQuery should apply to the
measurements to be imported.
Tables on BigQuery should be created beforehand; they are not created during persistence.
Tables on BigQuery should be created beforehand; they are not created during
persistence.
Pay attention to the column `timestamp` since it is reserved upfront and cannot change.
If partitioning is required make sure it is applied beforehand.
Pay attention to the column `timestamp` since it is reserved upfront and cannot
change. If partitioning is required make sure it is applied beforehand.
[rename]: ../../processors/rename/README.md

View File

@ -5,9 +5,6 @@ as one of the supported [output data formats][].
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage cloud_pubsub`.
```toml
# Publish Telegraf metrics to a Google Cloud PubSub topic
[[outputs.cloud_pubsub]]

View File

@ -1,25 +1,32 @@
# Amazon CloudWatch Output for Telegraf
# Amazon CloudWatch Output Plugin
This plugin will send metrics to Amazon CloudWatch.
## Amazon Authentication
This plugin uses a credential chain for authentication with the CloudWatch
API endpoint. In the following order the plugin will attempt to authenticate.
This plugin uses a credential chain for authentication with the CloudWatch API
endpoint. In the following order the plugin will attempt to authenticate.
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]
If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If
left empty, the current timestamp will be used.
If you are using credentials from a web identity provider, you can specify the
session name using `role_session_name`. If left empty, the current timestamp
will be used.
The IAM user needs only the `cloudwatch:PutMetricData` permission.
[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
## Configuration
```toml
@ -67,16 +74,16 @@ The IAM user needs only the `cloudwatch:PutMetricData` permission.
# high_resolution_metrics = false
```
For this output plugin to function correctly the following variables
must be configured.
For this output plugin to function correctly the following variables must be
configured.
* region
* namespace
### region
The region is the Amazon region that you wish to connect to.
Examples include but are not limited to:
The region is the Amazon region that you wish to connect to. Examples include
but are not limited to:
* us-west-1
* us-west-2
@ -91,13 +98,16 @@ The namespace used for AWS CloudWatch metrics.
### write_statistics
If you have a large number of metrics, you should consider sending statistic
values instead of raw metrics, which could not only improve performance but
also save AWS API cost. If this flag is enabled, this plugin will parse the required
[CloudWatch statistic fields](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet)
(count, min, max, and sum) and send them to CloudWatch. You can use the `basicstats`
aggregator to calculate those fields. If not all statistic fields are available,
all fields will still be sent as raw metrics.
If you have a large number of metrics, you should consider sending statistic
values instead of raw metrics, which could not only improve performance but also
save AWS API cost. If this flag is enabled, this plugin will parse the required
[CloudWatch statistic fields][1] (count, min, max, and sum) and send them to
CloudWatch. You can use the `basicstats` aggregator to calculate those fields. If
not all statistic fields are available, all fields will still be sent as raw
metrics.
[1]: https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet
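A sketch of how this could be wired together (region and namespace are placeholders; the aggregator settings mirror Telegraf's `basicstats` plugin):
```toml
[[aggregators.basicstats]]
  ## Produce the count, min, max and sum fields that CloudWatch statistic sets need
  period = "30s"
  drop_original = true
  stats = ["count", "min", "max", "sum"]

[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"
  ## Send the aggregated statistic set instead of raw values
  write_statistics = true
```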
### high_resolution_metrics
Enable high resolution metrics (1 second precision) instead of standard ones (60 seconds precision).
Enable high resolution metrics (1 second precision) instead of standard ones
(60 seconds precision).

View File

@ -1,4 +1,4 @@
# Amazon CloudWatch Logs Output for Telegraf
# Amazon CloudWatch Logs Output Plugin
This plugin will send logs to Amazon CloudWatch.
@ -7,20 +7,29 @@ This plugin will send logs to Amazon CloudWatch.
This plugin uses a credential chain for authentication with the CloudWatch Logs
API endpoint. In the following order the plugin will attempt to authenticate.
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]
The IAM user needs the following permissions (see this [reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) for more):
The IAM user needs the following permissions (see this [reference][4] for more):
- `logs:DescribeLogGroups` - required to check if the configured log group exists
- `logs:DescribeLogStreams` - required to view all log streams associated with a log group.
- `logs:DescribeLogStreams` - required to view all log streams associated with a
log group.
- `logs:CreateLogStream` - required to create a new log stream in a log group.
- `logs:PutLogEvents` - required to upload a batch of log events into log stream.
- `logs:PutLogEvents` - required to upload a batch of log events into log
stream.
[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
[4]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html
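The permissions listed above could be granted with an IAM policy along these lines (a sketch only; scope the `Resource` ARN to your own log groups rather than the wildcard shown here):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```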
## Configuration

View File

@ -1,6 +1,7 @@
# CrateDB Output Plugin for Telegraf
# CrateDB Output Plugin
This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL
protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
## Table Schema

View File

@ -27,8 +27,8 @@ This plugin writes to the [Datadog Metrics API][metrics] and requires an
## Metrics
Datadog metric names are formed by joining the Telegraf metric name and the field
key with a `.` character.
Datadog metric names are formed by joining the Telegraf metric name and the
field key with a `.` character.
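For example, a metric named `system` with a field `load1` would be reported to Datadog as `system.load1`.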
Field values are converted to floating point numbers. Strings and floats that
cannot be sent over JSON, namely NaN and Inf, are ignored.

View File

@ -1,29 +1,54 @@
# Dynatrace Output Plugin
This plugin sends Telegraf metrics to [Dynatrace](https://www.dynatrace.com) via the [Dynatrace Metrics API V2](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/). It may be run alongside the Dynatrace OneAgent for automatic authentication or it may be run standalone on a host without a OneAgent by specifying a URL and API Token.
More information on the plugin can be found in the [Dynatrace documentation](https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/telegraf/).
All metrics are reported as gauges, unless they are specified to be delta counters using the `additional_counters` config option (see below).
See the [Dynatrace Metrics ingestion protocol documentation](https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/metric-ingestion-protocol) for details on the types defined there.
This plugin sends Telegraf metrics to [Dynatrace](https://www.dynatrace.com) via
the [Dynatrace Metrics API V2][api-v2]. It may be run alongside the Dynatrace
OneAgent for automatic authentication or it may be run standalone on a host
without a OneAgent by specifying a URL and API Token. More information on the
plugin can be found in the [Dynatrace documentation][docs]. All metrics are
reported as gauges, unless they are specified to be delta counters using the
`additional_counters` config option (see below). See the [Dynatrace Metrics
ingestion protocol documentation][proto-docs] for details on the types defined
there.
[api-v2]: https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/
[docs]: https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/telegraf/
[proto-docs]: https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/metric-ingestion-protocol
## Requirements
You will either need a Dynatrace OneAgent (version 1.201 or higher) installed on the same host as Telegraf; or a Dynatrace environment with version 1.202 or higher.
You will either need a Dynatrace OneAgent (version 1.201 or higher) installed on
the same host as Telegraf; or a Dynatrace environment with version 1.202 or
higher.
- Telegraf minimum version: Telegraf 1.16
## Getting Started
Setting up Telegraf is explained in the [Telegraf Documentation](https://docs.influxdata.com/telegraf/latest/introduction/getting-started/).
The Dynatrace exporter may be enabled by adding an `[[outputs.dynatrace]]` section to your `telegraf.conf` config file.
All configurations are optional, but if a `url` other than the OneAgent metric ingestion endpoint is specified then an `api_token` is required.
To see all available options, see [Configuration](#configuration) below.
Setting up Telegraf is explained in the [Telegraf
Documentation][getting-started].
The Dynatrace exporter may be enabled by adding an `[[outputs.dynatrace]]`
section to your `telegraf.conf` config file. All configurations are optional,
but if a `url` other than the OneAgent metric ingestion endpoint is specified
then an `api_token` is required. To see all available options, see
[Configuration](#configuration) below.
[getting-started]: https://docs.influxdata.com/telegraf/latest/introduction/getting-started/
### Running alongside Dynatrace OneAgent (preferred)
If you run the Telegraf agent on a host or VM that is monitored by the Dynatrace OneAgent then you only need to enable the plugin, but need no further configuration. The Dynatrace Telegraf output plugin will send all metrics to the OneAgent which will use its secure and load balanced connection to send the metrics to your Dynatrace SaaS or Managed environment.
Depending on your environment, you might have to enable metrics ingestion on the OneAgent first as described in the [Dynatrace documentation](https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/telegraf/).
If you run the Telegraf agent on a host or VM that is monitored by the Dynatrace
OneAgent then you only need to enable the plugin, but need no further
configuration. The Dynatrace Telegraf output plugin will send all metrics to the
OneAgent which will use its secure and load balanced connection to send the
metrics to your Dynatrace SaaS or Managed environment. Depending on your
environment, you might have to enable metrics ingestion on the OneAgent first as
described in the [Dynatrace documentation][docs].
Note: The name and identifier of the host running Telegraf will be added as a dimension to every metric. If this is undesirable, then the output plugin may be used in standalone mode using the directions below.
Note: The name and identifier of the host running Telegraf will be added as a
dimension to every metric. If this is undesirable, then the output plugin may be
used in standalone mode using the directions below.
```toml
[[outputs.dynatrace]]
@ -32,15 +57,23 @@ Note: The name and identifier of the host running Telegraf will be added as a di
### Running standalone
If you run the Telegraf agent on a host or VM without a OneAgent you will need to configure the environment API endpoint to send the metrics to and an API token for security.
If you run the Telegraf agent on a host or VM without a OneAgent you will need
to configure the environment API endpoint to send the metrics to and an API
token for security.
You will also need to configure an API token for secure access. Find out how to create a token in the [Dynatrace documentation](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/) or simply navigate to **Settings > Integration > Dynatrace API** in your Dynatrace environment and create a new token with the
'Ingest metrics' (`metrics.ingest`) scope enabled. It is recommended to limit the token scope to only this permission.
You will also need to configure an API token for secure access. Find out how to
create a token in the [Dynatrace documentation][api-auth] or simply navigate to
**Settings > Integration > Dynatrace API** in your Dynatrace environment and
create a new token with the 'Ingest metrics' (`metrics.ingest`) scope
enabled. It is recommended to limit the token scope to only this permission.
The endpoint for the Dynatrace Metrics API v2 is
- on Dynatrace Managed: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
- on Dynatrace SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`
- on Dynatrace Managed:
`https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
- on Dynatrace SaaS:
`https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`
```toml
[[outputs.dynatrace]]
@ -53,7 +86,10 @@ The endpoint for the Dynatrace Metrics API v2 is
api_token = "your API token here" // hard-coded for illustration only, should be read from environment
```
You can learn more about how to use the Dynatrace API [here](https://www.dynatrace.com/support/help/dynatrace-api/).
You can learn more about how to use the Dynatrace API
[here](https://www.dynatrace.com/support/help/dynatrace-api/).
[api-auth]: https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/
## Configuration
@ -102,17 +138,25 @@ You can learn more about how to use the Dynatrace API [here](https://www.dynatra
*default*: Local OneAgent endpoint
Set your Dynatrace environment URL (e.g.: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`, see the [Dynatrace documentation](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/) for details) if you do not use a OneAgent or wish to export metrics directly to a Dynatrace metrics v2 endpoint. If a URL is set to anything other than the local OneAgent endpoint, then an API token is required.
Set your Dynatrace environment URL (e.g.:
`https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`, see
the [Dynatrace documentation][post-ingest] for details) if you do not use a
OneAgent or wish to export metrics directly to a Dynatrace metrics v2
endpoint. If a URL is set to anything other than the local OneAgent endpoint,
then an API token is required.
```toml
url = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
```
[post-ingest]: https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/
### `api_token`
*required*: `false` unless `url` is specified
API token is required if a URL other than the OneAgent endpoint is specified and it should be restricted to the 'Ingest metrics' scope.
API token is required if a URL other than the OneAgent endpoint is specified and
it should be restricted to the 'Ingest metrics' scope.
```toml
api_token = "your API token here"
@ -122,7 +166,8 @@ api_token = "your API token here"
*required*: `false`
Optional prefix to be prepended to all metric names (will be separated with a `.`).
Optional prefix to be prepended to all metric names (will be separated with a
`.`).
```toml
prefix = "telegraf"
@ -132,7 +177,8 @@ prefix = "telegraf"
*required*: `false`
Setting this option to true skips TLS verification for testing or when using self-signed certificates.
Setting this option to true skips TLS verification for testing or when using
self-signed certificates.
```toml
insecure_skip_verify = false
@ -142,7 +188,8 @@ insecure_skip_verify = false
*required*: `false`
If you want a metric to be treated and reported as a delta counter, add its name to this list.
If you want a metric to be treated and reported as a delta counter, add its name
to this list.
```toml
additional_counters = [ ]

View File

@ -1,6 +1,7 @@
# Elasticsearch Output Plugin
This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using Elastic (<http://olivere.github.io/elastic/>).
This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using
Elastic (<http://olivere.github.io/elastic/>).
It supports Elasticsearch releases from 5.x up to 7.x.
@ -8,19 +9,28 @@ It supports Elasticsearch releases from 5.x up to 7.x.
### Indexes per time-frame
This plugin can manage indexes per time-frame, as commonly done in other tools with Elasticsearch.
This plugin can manage indexes per time-frame, as commonly done in other tools
with Elasticsearch.
The timestamp of the metric collected will be used to decide the index destination.
The timestamp of the metric collected will be used to decide the index
destination.
For more information about this usage on Elasticsearch, check [the docs](https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe).
For more information about this usage on Elasticsearch, check [the
docs][1].
[1]: https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe
### Template management
Index templates are used in Elasticsearch to define settings and mappings for the indexes and how the fields should be analyzed.
For more information on how this works, see [the docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html).
Index templates are used in Elasticsearch to define settings and mappings for
the indexes and how the fields should be analyzed. For more information on how
this works, see [the docs][2].
This plugin can create a working template for use with telegraf metrics. It uses the Elasticsearch dynamic templates feature to set proper types for the tags and metrics fields.
If the specified template already exists, the plugin will not overwrite it unless you configure it to do so. Thus you can customize this template after its creation if necessary.
This plugin can create a working template for use with telegraf metrics. It uses
the Elasticsearch dynamic templates feature to set proper types for the tags and
metrics fields. If the specified template already exists, the plugin will not
overwrite it unless you configure it to do so. Thus you can customize this
template after its creation if necessary.
Example of an index template created by telegraf on Elasticsearch 5.x:
@ -98,6 +108,8 @@ Example of an index template created by telegraf on Elasticsearch 5.x:
```
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
### Example events
This plugin will format the events in the following way:
@ -152,10 +164,10 @@ the actual underlying Elasticsearch version is v7.1. This breaks Telegraf and
other Elasticsearch clients that need to know what major version they are
interfacing with.
Amazon has created a [compatibility mode](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade)
to allow existing Elasticsearch clients to properly work when the version needs
to be checked. To enable compatibility mode users need to set the
`override_main_response_version` to `true`.
Amazon has created a [compatibility mode][3] to allow existing Elasticsearch
clients to properly work when the version needs to be checked. To enable
compatibility mode users need to set the `override_main_response_version` to
`true`.
On existing clusters run:
@ -181,6 +193,8 @@ POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain
}
```
[3]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade
## Configuration
```toml
@ -267,17 +281,18 @@ POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain
### Permissions
If you are using authentication within your Elasticsearch cluster, you need
to create an account and create a role with at least the manage role in the
Cluster Privileges category. Otherwise, your account will not be able to
connect to your Elasticsearch cluster and send logs to your cluster. After
that, you need to add "create_index" and "write" permissions to your specific
index pattern.
If you are using authentication within your Elasticsearch cluster, you need to
create an account and create a role with at least the manage role in the Cluster
Privileges category. Otherwise, your account will not be able to connect to your
Elasticsearch cluster and send logs to your cluster. After that, you need to
add "create_index" and "write" permissions to your specific index pattern.
### Required parameters
* `urls`: A list containing the full HTTP URL of one or more nodes from your Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers below to create indexes per time frame.
* `urls`: A list containing the full HTTP URL of one or more nodes from your
Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers
below to create indexes per time frame.
``` %Y - year (2017)
%y - last two digits of year (00..99)
@ -287,30 +302,65 @@ index pattern.
%V - week of the year (ISO week) (01..53)
```
Additionally, you can specify dynamic index names by using tags with the notation ```{{tag_name}}```. This will store the metrics with different tag values in different indices. If the tag does not exist in a particular metric, the `default_tag_value` will be used instead.
Additionally, you can specify dynamic index names by using tags with the
notation ```{{tag_name}}```. This will store the metrics with different tag
values in different indices. If the tag does not exist in a particular metric,
the `default_tag_value` will be used instead.
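For instance (the index pattern and tag are just examples):
```toml
[[outputs.elasticsearch]]
  urls = ["http://localhost:9200"]
  ## One index per value of the "host" tag and per day; metrics without the
  ## tag fall back to default_tag_value
  index_name = "telegraf-{{host}}-%Y.%m.%d"
  default_tag_value = "none"
```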
### Optional parameters
* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
* `enable_sniffer`: Set to true to ask Elasticsearch for a list of all cluster nodes, thus it is not necessary to list all nodes in the urls config option.
* `health_check_interval`: Set the interval to check if the nodes are available, in seconds. Setting to 0 will disable the health check (not recommended in production).
* `username`: The username for HTTP basic authentication details (eg. when using Shield).
* `password`: The password for HTTP basic authentication details (eg. when using Shield).
* `manage_template`: Set to true if you want telegraf to manage its index template. If enabled it will create a recommended index template for telegraf indexes.
* `enable_sniffer`: Set to true to ask Elasticsearch for a list of all cluster
nodes, thus it is not necessary to list all nodes in the urls config option.
* `health_check_interval`: Set the interval to check if the nodes are available,
in seconds. Setting to 0 will disable the health check (not recommended in
production).
* `username`: The username for HTTP basic authentication details (eg. when using
Shield).
* `password`: The password for HTTP basic authentication details (eg. when using
Shield).
* `manage_template`: Set to true if you want telegraf to manage its index
template. If enabled it will create a recommended index template for telegraf
indexes.
* `template_name`: The template name used for telegraf indexes.
* `overwrite_template`: Set to true if you want telegraf to overwrite an existing template.
* `force_document_id`: Set to true to compute a unique hash as sha256(concat(timestamp,measurement,series-hash)); this enables resending or updating data without duplicated documents in ES.
* `float_handling`: Specifies how to handle `NaN` and infinite field values. `"none"` (default) will do nothing, `"drop"` will drop the field and `replace` will replace the field value by the number in `float_replacement_value`
* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and `inf`s if `float_handling` is set to `replace`. Negative `inf` will be replaced by the negative value in this number to respect the sign of the field's original value.
* `use_pipeline`: If set, the set value will be used as the pipeline to call when sending events to elasticsearch. Additionally, you can specify dynamic pipeline names by using tags with the notation ```{{tag_name}}```. If the tag does not exist in a particular metric, the `default_pipeline` will be used instead.
* `default_pipeline`: If the tag used for dynamic pipeline names does not exist in a particular metric, this value will be used instead.
* `overwrite_template`: Set to true if you want telegraf to overwrite an
existing template.
* `force_document_id`: Set to true to compute a unique hash as
sha256(concat(timestamp,measurement,series-hash)); this enables resending or
updating data without duplicated documents in ES.
* `float_handling`: Specifies how to handle `NaN` and infinite field
values. `"none"` (default) will do nothing, `"drop"` will drop the field and
`replace` will replace the field value by the number in
`float_replacement_value`
* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and
`inf`s if `float_handling` is set to `replace`. Negative `inf` will be
replaced by the negative value in this number to respect the sign of the
field's original value.
* `use_pipeline`: If set, the set value will be used as the pipeline to call
when sending events to elasticsearch. Additionally, you can specify dynamic
pipeline names by using tags with the notation ```{{tag_name}}```. If the tag
does not exist in a particular metric, the `default_pipeline` will be used
instead.
* `default_pipeline`: If the tag used for dynamic pipeline names does not exist
in a particular metric, this value will be used instead.
## Known issues
Integer values collected that are bigger than 2^63 and smaller than 1e21 (or in this exact same window of their negative counterparts) are encoded by golang JSON encoder in decimal format and that is not fully supported by Elasticsearch dynamic field mapping. This causes the metrics with such values to be dropped in case a field mapping has not been created yet on the telegraf index. If that's the case you will see an exception on Elasticsearch side like this:
Integer values collected that are bigger than 2^63 and smaller than 1e21 (or in
this exact same window of their negative counterparts) are encoded by golang
JSON encoder in decimal format and that is not fully supported by Elasticsearch
dynamic field mapping. This causes the metrics with such values to be dropped in
case a field mapping has not been created yet on the telegraf index. If that's
the case you will see an exception on Elasticsearch side like this:
```{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}```
```json
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}
```
The correct field mapping will be created on the telegraf index as soon as a supported JSON value is received by Elasticsearch, and subsequent insertions will work because the field mapping will already exist.
The correct field mapping will be created on the telegraf index as soon as a
supported JSON value is received by Elasticsearch, and subsequent insertions
will work because the field mapping will already exist.
This issue is caused by the way Elasticsearch tries to detect integer fields, and by how golang encodes numbers in JSON. There is no clear workaround for this at the moment.
This issue is caused by the way Elasticsearch tries to detect integer fields,
and by how golang encodes numbers in JSON. There is no clear workaround for this
at the moment.

View File

@ -1,10 +1,17 @@
# Azure Event Hubs output plugin
# Azure Event Hubs Output Plugin
This plugin for [Azure Event Hubs](https://azure.microsoft.com/en-gb/services/event-hubs/) will send metrics to a single Event Hub within an Event Hubs namespace. Metrics are sent as message batches, each message payload containing one metric object. The messages do not specify a partition key, and will thus be automatically load-balanced (round-robin) across all the Event Hub partitions.
This plugin for [Azure Event
Hubs](https://azure.microsoft.com/en-gb/services/event-hubs/) will send metrics
to a single Event Hub within an Event Hubs namespace. Metrics are sent as
message batches, each message payload containing one metric object. The messages
do not specify a partition key, and will thus be automatically load-balanced
(round-robin) across all the Event Hub partitions.
## Metrics
The plugin uses the Telegraf serializers to format the metric data sent in the message payloads. You can select any of the supported output formats, although JSON is probably the easiest to integrate with downstream components.
The plugin uses the Telegraf serializers to format the metric data sent in the
message payloads. You can select any of the supported output formats, although
JSON is probably the easiest to integrate with downstream components.
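For illustration only, a JSON-serialized setup could look roughly like this
(the `connection_string` and `hub_name` option names are assumptions; the
sample configuration below is authoritative):

```toml
[[outputs.event_hubs]]
  ## Assumed option names for the namespace connection and target hub
  connection_string = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>"
  hub_name = "telegraf"
  ## Any supported serializer works; JSON is usually the easiest downstream
  data_format = "json"
```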
## Configuration
@ -27,4 +27,4 @@ Telegraf minimum version: Telegraf 1.15.0
see [examples][]
[examples]: https://github.com/influxdata/telegraf/blob/master/plugins/outputs/execd/examples/
[examples]: examples/
@ -1,10 +1,13 @@
# Graphite Output Plugin
This plugin writes to [Graphite](http://graphite.readthedocs.org/en/latest/index.html)
via raw TCP.
This plugin writes to [Graphite][1] via raw TCP.
For details on the translation between Telegraf Metrics and Graphite output,
see the [Graphite Data Format](../../../docs/DATA_FORMATS_OUTPUT.md)
see the [Graphite Data Format][2].
[1]: http://graphite.readthedocs.org/en/latest/index.html
[2]: ../../../docs/DATA_FORMATS_OUTPUT.md
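A minimal sketch of a typical setup and the rough shape of the resulting
Graphite lines (server address, prefix, and template are illustrative values,
not defaults):

```toml
[[outputs.graphite]]
  servers = ["localhost:2003"]
  prefix = "telegraf"
  template = "host.tags.measurement.field"
  ## With this template a metric such as
  ##   cpu,host=server01,cpu=cpu0 usage_idle=99.5
  ## would be written roughly as
  ##   telegraf.server01.cpu0.cpu.usage_idle 99.5 <unix_timestamp>
```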
## Configuration
@ -8,16 +8,16 @@ This plugin writes to a Graylog instance using the "[GELF][]" format.
The [GELF spec][] defines a number of specific fields in a GELF payload.
These fields may have specific requirements set by the spec and users of the
Graylog plugin need to follow these requirements or metrics may be rejected
due to invalid data.
Graylog plugin need to follow these requirements or metrics may be rejected due
to invalid data.
For example, the timestamp field defined in the GELF spec, is required to be
a UNIX timestamp. This output plugin will not modify or check the timestamp
field if one is present and send it as-is to Graylog. If the field is absent
then Telegraf will set the timestamp to the current time.
For example, the timestamp field defined in the GELF spec is required to be a
UNIX timestamp. This output plugin will not modify or check the timestamp field
if one is present and will send it as-is to Graylog. If the field is absent,
Telegraf will set the timestamp to the current time.
Any field not defined by the spec will have an underscore (e.g. `_`) prefixed
to the field name.
Any field not defined by the spec will have an underscore (`_`) prefixed to the
field name.
[GELF spec]: https://docs.graylog.org/docs/gelf#gelf-payload-specification
@ -50,5 +50,6 @@ to the field name.
# insecure_skip_verify = false
```
Server endpoint may be specified without UDP or TCP scheme (eg. "127.0.0.1:12201").
In such case, UDP protocol is assumed. TLS config is ignored for UDP endpoints.
The server endpoint may be specified without a UDP or TCP scheme
(e.g. "127.0.0.1:12201"). In that case, the UDP protocol is assumed. TLS config
is ignored for UDP endpoints.
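A small sketch of the endpoint forms described above (the `servers` option name
is assumed to match the sample configuration; addresses are placeholders):

```toml
[[outputs.graylog]]
  ## A bare host:port is treated as UDP; TLS settings only apply to TCP endpoints
  servers = ["127.0.0.1:12201", "tcp://graylog.example.com:12201"]
```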
@ -1,6 +1,7 @@
# GroundWork Output Plugin
This plugin writes to a [GroundWork Monitor][1] instance. Plugin only supports GW8+
This plugin writes to a [GroundWork Monitor][1] instance. The plugin only
supports GW8+.
[1]: https://www.gwos.com/product/groundwork-monitor/
@ -34,15 +35,24 @@ This plugin writes to a [GroundWork Monitor][1] instance. Plugin only supports G
## List of tags used by the plugin
* group - to define the name of the group you want to monitor, can be changed with config.
* host - to define the name of the host you want to monitor, can be changed with config.
* service - to define the name of the service you want to monitor.
* status - to define the status of the service. Supported statuses: "SERVICE_OK", "SERVICE_WARNING", "SERVICE_UNSCHEDULED_CRITICAL", "SERVICE_PENDING", "SERVICE_SCHEDULED_CRITICAL", "SERVICE_UNKNOWN".
* message - to provide any message you want.
* unitType - to use in monitoring contexts(subset of The Unified Code for Units of Measure standard). Supported types: "1", "%cpu", "KB", "GB", "MB".
* warning - to define warning threshold value.
* group - to define the name of the group you want to monitor, can be changed
with config.
* host - to define the name of the host you want to monitor, can be changed with
config.
* service - to define the name of the service you want to monitor.
* status - to define the status of the service. Supported statuses:
"SERVICE_OK", "SERVICE_WARNING", "SERVICE_UNSCHEDULED_CRITICAL",
"SERVICE_PENDING", "SERVICE_SCHEDULED_CRITICAL", "SERVICE_UNKNOWN".
* message - to provide any message you want.
* unitType - to use in monitoring contexts (a subset of The Unified Code for
Units of Measure standard). Supported types: "1", "%cpu", "KB", "GB", "MB".
* warning - to define the warning threshold value.
* critical - to define the critical threshold value.
## NOTE
The current version of GroundWork Monitor does not support metrics whose values are strings. Such metrics will be skipped and will not be added to the final payload. You can find more context in this pull request: [#10255]( https://github.com/influxdata/telegraf/pull/10255)
The current version of GroundWork Monitor does not support metrics whose values
are strings. Such metrics will be skipped and will not be added to the final
payload. You can find more context in this pull request: [#10255][].
[#10255]: https://github.com/influxdata/telegraf/pull/10255
@ -1,8 +1,8 @@
# HTTP Output Plugin
This plugin sends metrics in a HTTP message encoded using one of the output
data formats. For data_formats that support batching, metrics are sent in
batch format by default.
This plugin sends metrics in an HTTP message encoded using one of the output
data formats. For data_formats that support batching, metrics are sent in batch
format by default.
## Configuration
@ -97,4 +97,11 @@ batch format by default.
### Optional Cookie Authentication Settings
The optional Cookie Authentication Settings will retrieve a cookie from the given authorization endpoint, and use it in subsequent API requests. This is useful for services that do not provide OAuth or Basic Auth authentication, e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve an authorization cookie. The Cookie Auth Renewal interval will renew the authorization by retrieving a new cookie at the given interval.
The optional Cookie Authentication Settings will retrieve a cookie from the
given authorization endpoint, and use it in subsequent API requests. This is
useful for services that do not provide OAuth or Basic Auth authentication,
e.g. the [Tesla Powerwall API][powerwall], which uses a Cookie Auth Body to
retrieve an authorization cookie. The Cookie Auth Renewal interval will renew
the authorization by retrieving a new cookie at the given interval.
[powerwall]: https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network
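A hedged sketch of such a setup (the `cookie_auth_*` option names are recalled
from the sample configuration above and should be verified there; URL, body,
and interval are placeholders):

```toml
[[outputs.http]]
  url = "https://service.example.com/ingest"
  data_format = "json"
  ## Assumed cookie authentication options: log in once, then renew periodically
  cookie_auth_url = "https://service.example.com/login"
  cookie_auth_method = "POST"
  cookie_auth_body = '{"username": "telegraf", "password": "changeme"}'
  cookie_auth_renewal = "12h"
```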
@ -1,6 +1,7 @@
# InfluxDB v1.x Output Plugin
The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP service.
The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP
service.
## Configuration
@ -89,4 +90,5 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP ser
Reference the [influx serializer][] for details about metric production.
[InfluxDB v1.x]: https://github.com/influxdata/influxdb
[influx serializer]: /plugins/serializers/influx/README.md#Metrics
@ -1,11 +1,13 @@
# Instrumental Output Plugin
This plugin writes to the [Instrumental Collector API](https://instrumentalapp.com/docs/tcp-collector)
and requires a Project-specific API token.
This plugin writes to the [Instrumental Collector
API](https://instrumentalapp.com/docs/tcp-collector) and requires a
Project-specific API token.
Instrumental accepts stats in a format very close to Graphite, with the only difference being that
the type of stat (gauge, increment) is the first token, separated from the metric itself
by whitespace. The `increment` type is only used if the metric comes in as a counter through `[[input.statsd]]`.
Instrumental accepts stats in a format very close to Graphite, with the only
difference being that the type of stat (gauge, increment) is the first token,
separated from the metric itself by whitespace. The `increment` type is only
used if the metric comes in as a counter through `[[input.statsd]]`.
## Configuration
@ -1,6 +1,7 @@
# Kafka Output Plugin
This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.html) acting a Kafka Producer.
This plugin writes to a [Kafka
Broker](http://kafka.apache.org/07/quickstart.html) acting as a Kafka Producer.
## Configuration
@ -163,8 +164,8 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.htm
This option controls the number of retries before a failure notification is
displayed for each message when no acknowledgement is received from the
broker. When the setting is greater than `0`, message latency can be reduced,
duplicate messages can occur in cases of transient errors, and broker loads
can increase during downtime.
duplicate messages can occur in cases of transient errors, and broker loads can
increase during downtime.
The option is similar to the
[retries](https://kafka.apache.org/documentation/#producerconfigs) Producer
@ -1,29 +1,38 @@
# Amazon Kinesis Output for Telegraf
# Amazon Kinesis Output Plugin
This is an experimental plugin that is still in the early stages of development. It will batch up all of the Points
in one Put request to Kinesis. This should save the number of API requests by a considerable level.
This is an experimental plugin that is still in the early stages of
development. It will batch up all of the Points in one Put request to
Kinesis. This should reduce the number of API requests considerably.
## About Kinesis
This is not the place to document all of the various Kinesis terms; however, it
maybe useful for users to review Amazons official documentation which is available
may be useful for users to review Amazon's official documentation, which is
available
[here](http://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html).
## Amazon Authentication
This plugin uses a credential chain for Authentication with the Kinesis API endpoint. In the following order the plugin
will attempt to authenticate.
This plugin uses a credential chain for authentication with the Kinesis API
endpoint. The plugin will attempt to authenticate in the following order:
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]
If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If
left empty, the current timestamp will be used.
If you are using credentials from a web identity provider, you can specify the
session name using `role_session_name`. If left empty, the current timestamp
will be used.
[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
## Configuration
@ -93,14 +102,16 @@ left empty, the current timestamp will be used.
debug = false
```
For this output plugin to function correctly the following variables must be configured.
For this output plugin to function correctly the following variables must be
configured.
* region
* streamname
### region
The region is the Amazon region that you wish to connect to. Examples include but are not limited to
The region is the Amazon region that you wish to connect to. Examples include
but are not limited to
* us-west-1
* us-west-2
@ -110,39 +121,45 @@ The region is the Amazon region that you wish to connect to. Examples include bu
### streamname
The streamname is used by the plugin to ensure that data is sent to the correct Kinesis stream. It is important to
note that the stream *MUST* be pre-configured for this plugin to function correctly. If the stream does not exist the
plugin will result in telegraf exiting with an exit code of 1.
The streamname is used by the plugin to ensure that data is sent to the correct
Kinesis stream. It is important to note that the stream *MUST* be pre-configured
for this plugin to function correctly. If the stream does not exist, telegraf
will exit with an exit code of 1.
### partitionkey [DEPRECATED]
This is used to group data within a stream. Currently this plugin only supports a single partitionkey.
Manually configuring different hosts, or groups of hosts with manually selected partitionkeys might be a workable
solution to scale out.
This is used to group data within a stream. Currently this plugin only supports
a single partitionkey. Manually configuring different hosts, or groups of hosts
with manually selected partitionkeys might be a workable solution to scale out.
### use_random_partitionkey [DEPRECATED]
When true a random UUID will be generated and used as the partitionkey when sending data to Kinesis. This allows data to evenly spread across multiple shards in the stream. Due to using a random partitionKey there can be no guarantee of ordering when consuming the data off the shards.
If true then the partitionkey option will be ignored.
When true, a random UUID will be generated and used as the partitionkey when
sending data to Kinesis. This allows data to spread evenly across multiple
shards in the stream. Because a random partitionKey is used, there can be no
guarantee of ordering when consuming the data off the shards. If true, the
partitionkey option will be ignored.
### partition
This is used to group data within a stream. Currently four methods are supported: random, static, tag or measurement
This is used to group data within a stream. Currently four methods are
supported: random, static, tag, or measurement.
#### random
This will generate a UUIDv4 for each metric to spread them across shards.
Any guarantee of ordering is lost with this method
This will generate a UUIDv4 for each metric to spread them across shards. Any
guarantee of ordering is lost with this method
#### static
This uses a static string as a partitionkey.
All metrics will be mapped to the same shard which may limit throughput.
This uses a static string as a partitionkey. All metrics will be mapped to the
same shard which may limit throughput.
#### tag
This will take the value of the specified tag from each metric as the partitionKey.
If the tag is not found the `default` value will be used or `telegraf` if unspecified
This will take the value of the specified tag from each metric as the
partitionKey. If the tag is not found, the `default` value will be used, or
`telegraf` if unspecified.
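For example, partitioning on the `host` tag with an explicit fallback could be
configured like this (a sketch based on the options described in this section):

```toml
[[outputs.kinesis]]
  region = "us-west-2"
  streamname = "telegraf"
  ## Use the "host" tag as the partition key; fall back to "telegraf" if absent
  [outputs.kinesis.partition]
    method = "tag"
    key = "host"
    default = "telegraf"
```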
#### measurement
@ -150,12 +167,14 @@ This will use the measurement's name as the partitionKey.
### format
The format configuration value has been designated to allow people to change the format of the Point as written to
Kinesis. Right now there are two supported formats string and custom.
The format configuration value allows people to change the format of the Point
as written to Kinesis. Right now there are two supported formats: string and
custom.
#### string
String is defined using the default Point.String() value and translated to []byte for the Kinesis stream.
String is defined using the default Point.String() value and translated to
[]byte for the Kinesis stream.
#### custom
@ -1,16 +1,20 @@
# Librato Output Plugin
This plugin writes to the [Librato Metrics API](http://dev.librato.com/v1/metrics#metrics)
and requires an `api_user` and `api_token` which can be obtained [here](https://metrics.librato.com/account/api_tokens)
for the account.
This plugin writes to the [Librato Metrics API][metrics-api] and requires an
`api_user` and `api_token` which can be obtained [here][tokens] for the account.
The `source_tag` option in the Configuration file is used to send contextual information from
Point Tags to the API.
The `source_tag` option in the Configuration file is used to send contextual
information from Point Tags to the API.
If the point value being sent cannot be converted to a float64, the metric is skipped.
If the point value being sent cannot be converted to a float64, the metric is
skipped.
Currently, the plugin does not send any associated Point Tags.
[metrics-api]: http://dev.librato.com/v1/metrics#metrics
[tokens]: https://metrics.librato.com/account/api_tokens
## Configuration
```toml
@ -1,7 +1,8 @@
# Loki Output Plugin
This plugin sends logs to Loki, using metric name and tags as labels,
log line will content all fields in `key="value"` format which is easily parsable with `logfmt` parser in Loki.
This plugin sends logs to Loki, using the metric name and tags as labels. The
log line will contain all fields in `key="value"` format, which is easily
parsed with the `logfmt` parser in Loki.
Logs within each stream are sorted by timestamp before being sent to Loki.
@ -1,7 +1,8 @@
# MongoDB Output Plugin
This plugin sends metrics to MongoDB and automatically creates the collections as time series collections when they don't already exist.
**Please note:** Requires MongoDB 5.0+ for Time Series Collections
This plugin sends metrics to MongoDB and automatically creates the collections
as time series collections when they don't already exist. **Please note:**
Requires MongoDB 5.0+ for Time Series Collections
## Configuration
@ -1,6 +1,7 @@
# MQTT Producer Output Plugin
This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt Producer.
This plugin writes to an [MQTT Broker](http://mqtt.org/) acting as an MQTT
producer.
## Mosquitto v2.0.12+ and `identifier rejected`
@ -1,4 +1,4 @@
# New Relic output plugin
# New Relic Output Plugin
This plugin writes to New Relic Insights using the [Metrics API][].
@ -34,4 +34,5 @@ Telegraf minimum version: Telegraf 1.15.0
```
[Metrics API]: https://docs.newrelic.com/docs/data-ingest-apis/get-data-new-relic/metric-api/introduction-metric-api
[Insights API Key]: https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#user-api-key
@ -1,7 +1,7 @@
# NSQ Output Plugin
This plugin writes to a specified NSQD instance, usually local to the producer. It requires
a `server` name and a `topic` name.
This plugin writes to a specified NSQD instance, usually local to the
producer. It requires a `server` name and a `topic` name.
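A minimal sketch of the two required settings (values are placeholders; option
names are assumed to match the sample configuration below):

```toml
[[outputs.nsq]]
  server = "localhost:4150"
  topic = "telegraf"
  data_format = "influx"
```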
## Configuration
@ -1,6 +1,7 @@
# OpenTelemetry Output Plugin
This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers and agents via gRPC.
This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers
and agents via gRPC.
## Configuration
@ -42,16 +43,16 @@ This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers a
### Schema
The InfluxDB->OpenTelemetry conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md)
and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/influx2otel)
are hosted on [GitHub](https://github.com/influxdata/influxdb-observability).
The InfluxDB->OpenTelemetry conversion [schema][] and [implementation][] are
hosted on [GitHub][repo].
For metrics, two input schemata exist.
Line protocol with measurement name `prometheus` is assumed to have a schema
matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metric_version = 2`.
Line protocol with other measurement names is assumed to have schema
matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metric_version = 1`.
If both schema assumptions fail, then the line protocol data is interpreted as:
For metrics, two input schemata exist. Line protocol with measurement name
`prometheus` is assumed to have a schema matching [Prometheus input
plugin](../../inputs/prometheus/README.md) when `metric_version = 2`. Line
protocol with other measurement names is assumed to have schema matching
[Prometheus input plugin](../../inputs/prometheus/README.md) when
`metric_version = 1`. If both schema assumptions fail, then the line protocol
data is interpreted as:
- Metric type = gauge (or counter, if indicated by the input plugin)
- Metric name = `[measurement]_[field key]`
@ -59,3 +60,9 @@ If both schema assumptions fail, then the line protocol data is interpreted as:
- Metric labels = line protocol tags
Also see the [OpenTelemetry input plugin](../../inputs/opentelemetry/README.md).
[schema]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md
[implementation]: https://github.com/influxdata/influxdb-observability/tree/main/influx2otel
[repo]: https://github.com/influxdata/influxdb-observability
@ -1,12 +1,14 @@
# OpenTSDB Output Plugin
This plugin writes to an OpenTSDB instance using either the "telnet" or Http mode.
This plugin writes to an OpenTSDB instance using either the "telnet" or HTTP
mode.
Using the HTTP API is the recommended way of writing metrics since OpenTSDB 2.0.
To use HTTP mode, set useHttp to true in the config. You can also control how
many metrics are sent in each HTTP request by setting batchSize in the config.
See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for details.
See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for
details.
## Configuration
@ -47,8 +49,8 @@ The expected input from OpenTSDB is specified in the following way:
put <metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
```
The telegraf output plugin adds an optional prefix to the metric keys so
that a subamount can be selected.
The telegraf output plugin adds an optional prefix to the metric keys so that a
subset of metrics can be selected.
```text
put <[prefix.]metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
@ -1,7 +1,7 @@
# Prometheus Output Plugin
This plugin starts a [Prometheus](https://prometheus.io/) Client, it exposes
all metrics on `/metrics` (default) to be polled by a Prometheus server.
This plugin starts a [Prometheus](https://prometheus.io/) client, which exposes
all metrics on `/metrics` (default) to be polled by a Prometheus server.
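A minimal sketch (the `listen` option name and port are assumptions; see the
sample configuration below), with metrics then scraped from
`http://<host>:9273/metrics`:

```toml
[[outputs.prometheus_client]]
  ## Address the embedded client listens on
  listen = ":9273"
```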
## Configuration
@ -55,6 +55,7 @@ all metrics on `/metrics` (default) to be polled by a Prometheus server.
## Metrics
Prometheus metrics are produced in the same manner as the [prometheus serializer][].
Prometheus metrics are produced in the same manner as the [prometheus
serializer][].
[prometheus serializer]: /plugins/serializers/prometheus/README.md#Metrics
@ -45,11 +45,16 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.
### Optional parameters
* `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long that an event is considered valid for in Riemann.
* `separator`: Separator to use between measurement and field name in Riemann service name.
* `measurement_as_attribute`: Set measurement name as a Riemann attribute, instead of prepending it to the Riemann service name.
* `string_as_state`: Send string metrics as Riemann event states. If this is not enabled then all string metrics will be ignored.
* `tag_keys`: A list of tag keys whose values get sent as Riemann tags. If empty, all Telegraf tag values will be sent as tags.
* `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long an
event is considered valid in Riemann.
* `separator`: Separator to use between measurement and field name in Riemann
service name.
* `measurement_as_attribute`: Set measurement name as a Riemann attribute,
instead of prepending it to the Riemann service name.
* `string_as_state`: Send string metrics as Riemann event states. If this is not
enabled then all string metrics will be ignored.
* `tag_keys`: A list of tag keys whose values get sent as Riemann tags. If
empty, all Telegraf tag values will be sent as tags.
* `tags`: Additional Riemann tags that will be sent.
* `description_text`: Description text for Riemann event.
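A sketch combining several of the optional parameters above (the connection
`url` option name is an assumption; values are placeholders):

```toml
[[outputs.riemann]]
  url = "tcp://localhost:5555"
  ttl = 30.0
  separator = "/"
  measurement_as_attribute = true
  string_as_state = true
  tag_keys = ["host", "dc"]
  tags = ["telegraf"]
  description_text = "metrics from telegraf"
```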
@ -63,7 +68,8 @@ Riemann event emitted by Telegraf with default configuration:
:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021}
```
Telegraf emitting the same Riemann event with `measurement_as_attribute` set to `true`:
Telegraf emitting the same Riemann event with `measurement_as_attribute` set to
`true`:
```text
#riemann.codec.Event{ ...
@ -80,7 +86,8 @@ Telegraf emitting the same Riemann event with additional Riemann tags defined:
:tags ["telegraf" "postgres_cluster"]}
```
Telegraf emitting a Riemann event with a status text and `string_as_state` set to `true`, and a `description_text` defined:
Telegraf emitting a Riemann event with a status text and `string_as_state` set
to `true`, and a `description_text` defined:
```text
#riemann.codec.Event{
@ -1,6 +1,9 @@
# Riemann Legacy
# Riemann Legacy Output Plugin
This is a deprecated plugin
This is a deprecated plugin. Please use the [Riemann Output Plugin][new]
instead.
[new]: ../riemann/README.md
## Configuration
@ -1,6 +1,8 @@
# SignalFx Output Plugin
The SignalFx output plugin sends metrics to [SignalFx](https://docs.signalfx.com/en/latest/).
The SignalFx output plugin sends metrics to [SignalFx][docs].
[docs]: https://docs.signalfx.com/en/latest/
## Configuration
@ -1,8 +1,10 @@
# socket_writer Plugin
# Socket Writer Output Plugin
The socket_writer plugin can write to a UDP, TCP, or unix socket.
The socket writer plugin can write to a UDP, TCP, or unix socket.
It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
It can output data in any of the [supported output formats][formats].
[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
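A small sketch (the `address` and `data_format` option names are assumed to
match the sample configuration below; values are placeholders):

```toml
[[outputs.socket_writer]]
  ## udp://, tcp://, or unix:// style addresses work the same way
  address = "tcp://127.0.0.1:8094"
  data_format = "influx"
```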
## Configuration
@ -2,66 +2,61 @@
The SQL output plugin saves Telegraf metric data to an SQL database.
The plugin uses a simple, hard-coded database schema. There is a table
for each metric type and the table name is the metric name. There is a
column per field and a column per tag. There is an optional column for
the metric timestamp.
The plugin uses a simple, hard-coded database schema. There is a table for each
metric type and the table name is the metric name. There is a column per field
and a column per tag. There is an optional column for the metric timestamp.
A row is written for every input metric. This means multiple metrics
are never merged into a single row, even if they have the same metric
name, tags, and timestamp.
A row is written for every input metric. This means multiple metrics are never
merged into a single row, even if they have the same metric name, tags, and
timestamp.
The plugin uses Golang's generic "database/sql" interface and third
party drivers. See the driver-specific section below for a list of
supported drivers and details. Additional drivers may be added in
future Telegraf releases.
The plugin uses Golang's generic "database/sql" interface and third party
drivers. See the driver-specific section below for a list of supported drivers
and details. Additional drivers may be added in future Telegraf releases.
## Getting started
To use the plugin, set the driver setting to the driver name
appropriate for your database. Then set the data source name
(DSN). The format of the DSN varies by driver but often includes a
username, password, the database instance to use, and the hostname of
the database server. The user account must have privileges to insert
rows and create tables.
To use the plugin, set the driver setting to the driver name appropriate for
your database. Then set the data source name (DSN). The format of the DSN varies
by driver but often includes a username, password, the database instance to use,
and the hostname of the database server. The user account must have privileges
to insert rows and create tables.
## Generated SQL
The plugin generates simple ANSI/ISO SQL that is likely to work on any
DBMS. It doesn't use language features that are specific to a
particular DBMS. If you want to use a feature that is specific to a
particular DBMS, you may be able to set it up manually outside of this
plugin or through the init_sql setting.
The plugin generates simple ANSI/ISO SQL that is likely to work on any DBMS. It
doesn't use language features that are specific to a particular DBMS. If you
want to use a feature that is specific to a particular DBMS, you may be able to
set it up manually outside of this plugin or through the init_sql setting.
The insert statements generated by the plugin use placeholder
parameters. Most database drivers use question marks as placeholders
but postgres uses indexed dollar signs. The plugin chooses which
placeholder style to use depending on the driver selected.
The insert statements generated by the plugin use placeholder parameters. Most
database drivers use question marks as placeholders but postgres uses indexed
dollar signs. The plugin chooses which placeholder style to use depending on the
driver selected.
## Advanced options
When the plugin first connects it runs SQL from the init_sql setting,
allowing you to perform custom initialization for the connection.
When the plugin first connects it runs SQL from the init_sql setting, allowing
you to perform custom initialization for the connection.
Before inserting a row, the plugin checks whether the table exists. If
it doesn't exist, the plugin creates the table. The existence check
and the table creation statements can be changed through template
settings. The template settings allows you to have the plugin create
customized tables or skip table creation entirely by setting the check
template to any query that executes without error, such as "select 1".
Before inserting a row, the plugin checks whether the table exists. If it
doesn't exist, the plugin creates the table. The existence check and the table
creation statements can be changed through template settings. The template
settings allow you to have the plugin create customized tables or skip table
creation entirely by setting the check template to any query that executes
without error, such as "select 1".
The name of the timestamp column is "timestamp" but it can be changed
with the timestamp\_column setting. The timestamp column can be
completely disabled by setting it to "".
The name of the timestamp column is "timestamp" but it can be changed with the
timestamp\_column setting. The timestamp column can be completely disabled by
setting it to "".
By changing the table creation template, it's possible with some
databases to save a row insertion timestamp. You can add an additional
column with a default value to the template, like "CREATE TABLE
{TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT\_TIMESTAMP,
{COLUMNS})".
By changing the table creation template, it's possible with some databases to
save a row insertion timestamp. You can add an additional column with a default
value to the template, like "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP
DEFAULT CURRENT\_TIMESTAMP, {COLUMNS})".
The mapping of metric types to sql column types can be customized
through the convert settings.
The mapping of metric types to sql column types can be customized through the
convert settings.
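A hedged sketch pulling these options together (option names follow the prose
above and, as far as recalled, the sample configuration; the statements
themselves are placeholders illustrating the `{TABLE}`/`{COLUMNS}`
placeholders):

```toml
[[outputs.sql]]
  driver = "pgx"
  data_source_name = "postgres://telegraf:changeme@localhost/metrics"
  ## Run once when the connection is first established (placeholder statement)
  init_sql = "SET search_path TO public"
  ## Rename the timestamp column, or set to "" to disable it
  timestamp_column = "timestamp"
  ## Custom creation template adding an insertion-time column
  table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"
  ## Skip table creation entirely by making the existence check always succeed
  # table_exists_template = "SELECT 1"
```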
## Configuration
@ -125,18 +120,20 @@ through the convert settings.
### go-sql-driver/mysql
MySQL default quoting differs from standard ANSI/ISO SQL quoting. You
must use MySQL's ANSI\_QUOTES mode with this plugin. You can enable
this mode by using the setting `init_sql = "SET
sql_mode='ANSI_QUOTES';"` or through a command-line option when
running MySQL. See MySQL's docs for [details on
ANSI\_QUOTES](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes)
and [how to set the SQL
mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting).
MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use
MySQL's ANSI\_QUOTES mode with this plugin. You can enable this mode by using
the setting `init_sql = "SET sql_mode='ANSI_QUOTES';"` or through a command-line
option when running MySQL. See MySQL's docs for [details on
ANSI\_QUOTES][mysql-quotes] and [how to set the SQL mode][mysql-mode].
You can use a DSN of the format
"username:password@tcp(host:port)/dbname". See the [driver
docs](https://github.com/go-sql-driver/mysql) for details.
You can use a DSN of the format "username:password@tcp(host:port)/dbname". See
the [driver docs][mysql-driver] for details.
[mysql-quotes]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes
[mysql-mode]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting
[mysql-driver]: https://github.com/go-sql-driver/mysql
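Putting the above together, a MySQL setup might look roughly like this (the
`data_source_name` key is assumed to match the sample configuration;
credentials are placeholders):

```toml
[[outputs.sql]]
  driver = "mysql"
  data_source_name = "username:password@tcp(db.example.com:3306)/telegraf"
  ## Required so the ANSI-quoted SQL generated by the plugin is accepted
  init_sql = "SET sql_mode='ANSI_QUOTES';"
```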
### jackc/pgx
@ -146,10 +143,9 @@ docs](https://github.com/jackc/pgx) for more details.
### modernc.org/sqlite
This driver is not available on all operating systems and
architectures. It is only included in Linux builds on amd64, 386,
arm64, arm, and Darwin on amd64. It is not available for Windows,
FreeBSD, and other Linux and Darwin platforms.
This driver is not available on all operating systems and architectures. It is
only included in Linux builds on amd64, 386, arm64, arm, and Darwin on amd64. It
is not available for Windows, FreeBSD, and other Linux and Darwin platforms.
The DSN is a filename or url with scheme "file:". See the [driver
docs](https://modernc.org/sqlite) for details.
@ -168,14 +164,15 @@ Use this metric type to SQL type conversion:
bool = "UInt8"
```
See [ClickHouse data types](https://clickhouse.com/docs/en/sql-reference/data-types/) for more info.
See [ClickHouse data
types](https://clickhouse.com/docs/en/sql-reference/data-types/) for more info.
### denisenkom/go-mssqldb
Telegraf doesn't have unit tests for go-mssqldb so it should be
treated as experimental.
Telegraf doesn't have unit tests for go-mssqldb so it should be treated as
experimental.
### snowflakedb/gosnowflake
Telegraf doesn't have unit tests for gosnowflake so it should be
treated as experimental.
Telegraf doesn't have unit tests for gosnowflake so it should be treated as
experimental.
@ -9,11 +9,14 @@ costs.
Requires `project` to specify where Stackdriver metrics will be delivered to.
Metrics are grouped by the `namespace` variable and metric key - eg: `custom.googleapis.com/telegraf/system/load5`
Metrics are grouped by the `namespace` variable and metric key, e.g.
`custom.googleapis.com/telegraf/system/load5`.
[Resource type](https://cloud.google.com/monitoring/api/resources) is configured by the `resource_type` variable (default `global`).
[Resource type](https://cloud.google.com/monitoring/api/resources) is configured
by the `resource_type` variable (default `global`).
Additional resource labels can be configured by `resource_labels`. By default the required `project_id` label is always set to the `project` variable.
Additional resource labels can be configured by `resource_labels`. By default
the required `project_id` label is always set to the `project` variable.
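As a sketch of the options just mentioned (project and label values are
placeholders; the nested table form for `resource_labels` is an assumption to
be checked against the sample configuration below):

```toml
[[outputs.stackdriver]]
  project = "my-gcp-project"
  namespace = "telegraf"
  resource_type = "global"
  ## Extra resource labels; project_id is always set from "project"
  [outputs.stackdriver.resource_labels]
    location = "us-central1"
```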
## Configuration
@ -38,29 +41,31 @@ Additional resource labels can be configured by `resource_labels`. By default th
## Restrictions
Stackdriver does not support string values in custom metrics, any string
fields will not be written.
Stackdriver does not support string values in custom metrics; any string fields
will not be written.
The Stackdriver API does not allow writing points which are out of order,
older than 24 hours, or more with resolution greater than than one per point
minute. Since Telegraf writes the newest points first and moves backwards
through the metric buffer, it may not be possible to write historical data
after an interruption.
The Stackdriver API does not allow writing points which are out of order, older
than 24 hours, or at a resolution greater than one point per minute. Since
Telegraf writes the newest points first and moves backwards through the metric
buffer, it may not be possible to write historical data after an interruption.
Points collected with greater than 1 minute precision may need to be
aggregated before then can be written. Consider using the [basicstats][]
aggregator to do this.
Points collected with greater than 1 minute precision may need to be aggregated
before they can be written. Consider using the [basicstats][] aggregator to do
this.
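For instance, a hedged sketch of pre-aggregating to one-minute resolution with
the [basicstats][] aggregator (the list of stats is illustrative):

```toml
[[aggregators.basicstats]]
  period = "1m"
  drop_original = true
  stats = ["mean", "min", "max", "count"]
```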
Histogram / distribution and delta metrics are not yet supported. These will
be dropped silently unless debugging is on.
Histogram / distribution and delta metrics are not yet supported. These will be
dropped silently unless debugging is on.
Note that the plugin keeps an in-memory cache of the start times and last
observed values of all COUNTER metrics in order to comply with the
requirements of the stackdriver API. This cache is not GCed: if you remove
a large number of counters from the input side, you may wish to restart
telegraf to clear it.
observed values of all COUNTER metrics in order to comply with the requirements
of the stackdriver API. This cache is not GCed: if you remove a large number of
counters from the input side, you may wish to restart telegraf to clear it.
[basicstats]: /plugins/aggregators/basicstats/README.md
[stackdriver]: https://cloud.google.com/monitoring/api/v3/
[authentication]: https://cloud.google.com/docs/authentication/getting-started
[pricing]: https://cloud.google.com/stackdriver/pricing#google-clouds-operations-suite-pricing
@ -1,7 +1,7 @@
# Sumo Logic Output Plugin
This plugin sends metrics to [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source)
in HTTP messages, encoded using one of the output data formats.
This plugin sends metrics to [Sumo Logic HTTP Source][http-source] in HTTP
messages, encoded using one of the output data formats.
Telegraf minimum version: Telegraf 1.16.0
@ -12,6 +12,8 @@ by Sumologic HTTP Source:
* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2`
* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus`
[http-source]: https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source
## Configuration
```toml
@ -3,10 +3,11 @@
The syslog output plugin sends syslog messages transmitted over
[UDP](https://tools.ietf.org/html/rfc5426) or
[TCP](https://tools.ietf.org/html/rfc6587) or
[TLS](https://tools.ietf.org/html/rfc5425), with or without the octet counting framing.
[TLS](https://tools.ietf.org/html/rfc5425), with or without the octet counting
framing.
Syslog messages are formatted according to
[RFC 5424](https://tools.ietf.org/html/rfc5424).
Syslog messages are formatted according to [RFC
5424](https://tools.ietf.org/html/rfc5424).
## Configuration
@ -94,7 +95,8 @@ Syslog messages are formatted according to
The output plugin expects syslog metrics tags and fields to match up with the
ones created in the [syslog input][].
The following table shows the metric tags, field and defaults used to format syslog messages.
The following table shows the metric tags, fields, and defaults used to format
syslog messages.
| Syslog field | Metric Tag | Metric Field | Default value |
| --- | --- | --- | --- |
@ -126,28 +126,39 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service.
### Batching
Timestream WriteInputRequest.CommonAttributes are used to efficiently write data to Timestream.
Timestream WriteInputRequest.CommonAttributes are used to efficiently write data
to Timestream.
### Multithreading
Single thread is used to write the data to Timestream, following general plugin design pattern.
A single thread is used to write the data to Timestream, following the general
plugin design pattern.
### Errors
In case of an attempt to write an unsupported by Timestream Telegraf Field type, the field is dropped and error is emitted to the logs.
If an attempt is made to write a Telegraf field type that is not supported by
Timestream, the field is dropped and an error is emitted to the logs.
In case of receiving ThrottlingException or InternalServerException from Timestream, the errors are returned to Telegraf, in which case Telegraf will keep the metrics in buffer and retry writing those metrics on the next flush.
In case of receiving ThrottlingException or InternalServerException from
Timestream, the errors are returned to Telegraf, in which case Telegraf will
keep the metrics in buffer and retry writing those metrics on the next flush.
In case of receiving ResourceNotFoundException:
- If `create_table_if_not_exists` configuration is set to `true`, the plugin will try to create appropriate table and write the records again, if the table creation was successful.
- If `create_table_if_not_exists` configuration is set to `false`, the records are dropped, and an error is emitted to the logs.
- If `create_table_if_not_exists` configuration is set to `true`, the plugin
will try to create the appropriate table and, if the table creation was
successful, write the records again.
- If `create_table_if_not_exists` configuration is set to `false`, the records
are dropped, and an error is emitted to the logs.
In case of receiving any other AWS error from Timestream, the records are dropped, and an error is emitted to the logs, as retrying such requests isn't likely to succeed.
In case of receiving any other AWS error from Timestream, the records are
dropped, and an error is emitted to the logs, as retrying such requests isn't
likely to succeed.
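A minimal sketch of the table-creation behaviour described above (only
`create_table_if_not_exists` comes from this section; `region` and
`database_name` are assumed option names included to make the example
self-contained):

```toml
[[outputs.timestream]]
  region = "eu-west-1"
  database_name = "telegraf"
  ## Create missing tables instead of dropping the records
  create_table_if_not_exists = true
```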
### Logging
Turn on debug flag in the Telegraf to turn on detailed logging (including records being written to Timestream).
Turn on the debug flag in Telegraf to enable detailed logging (including
records being written to Timestream).
### Testing
@ -1,6 +1,7 @@
# Wavefront Output Plugin
This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefront data format over TCP.
This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in
Wavefront data format over TCP.
## Configuration
@ -60,23 +61,28 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro
### Convert Path & Metric Separator
If the `convert_path` option is true any `_` in metric and field names will be converted to the `metric_separator` value.
By default, to ease metrics browsing in the Wavefront UI, the `convert_path` option is true, and `metric_separator` is `.` (dot).
Default integrations within Wavefront expect these values to be set to their defaults, however if converting from another platform
it may be desirable to change these defaults.
If the `convert_path` option is true, any `_` in metric and field names will be
converted to the `metric_separator` value. By default, to ease metrics browsing
in the Wavefront UI, the `convert_path` option is true and `metric_separator`
is `.` (dot). Default integrations within Wavefront expect these values to be
set to their defaults; however, if converting from another platform it may be
desirable to change them.
### Use Regex
Most illegal characters in the metric name are automatically converted to `-`.
The `use_regex` setting can be used to ensure all illegal characters are properly handled, but can lead to performance degradation.
The `use_regex` setting can be used to ensure all illegal characters are
properly handled, but can lead to performance degradation.
### Source Override
Often when collecting metrics from another system, you want to use the target system as the source, not the one running Telegraf.
Many Telegraf plugins will identify the target source with a tag. The tag name can vary for different plugins. The `source_override`
option will use the value specified in any of the listed tags if found. The tag names are checked in the same order as listed,
and if found, the other tags will not be checked. If no tags specified are found, the default host tag will be used to identify the
source of the metric.
Often when collecting metrics from another system, you want to use the target
system as the source, not the one running Telegraf. Many Telegraf plugins will
identify the target source with a tag. The tag name can vary for different
plugins. The `source_override` option will use the value specified in any of the
listed tags if found. The tag names are checked in the same order as listed, and
once one is found, the remaining tags will not be checked. If none of the
specified tags are found, the default host tag will be used to identify the
source of the metric.
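A sketch combining these settings (keys follow the names used in this section
and omit the proxy connection options; confirm the exact keys against the
sample configuration above):

```toml
[[outputs.wavefront]]
  ## Replace "_" with "." in metric and field names (shown with their defaults)
  convert_path = true
  metric_separator = "."
  ## Slower, but handles every illegal character
  use_regex = false
  ## Prefer these tags, in order, as the metric source
  source_override = ["hostname", "agent_host", "node_host"]
```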
### Wavefront Data format
@ -86,9 +92,11 @@ The expected input for Wavefront is specified in the following way:
<metric> <value> [<timestamp>] <source|host>=<sourceTagValue> [tagk1=tagv1 ...tagkN=tagvN]
```
More information about the Wavefront data format is available [here](https://community.wavefront.com/docs/DOC-1031)
More information about the Wavefront data format is available
[here](https://community.wavefront.com/docs/DOC-1031).
### Allowed values for metrics
Wavefront allows `integers` and `floats` as input values. By default it also maps `bool` values to numeric, false -> 0.0,
true -> 1.0. To map `strings` use the [enum](../../processors/enum) processor plugin.
Wavefront allows `integers` and `floats` as input values. By default it also
maps `bool` values to numeric values (false -> 0.0, true -> 1.0). To map
`strings`, use the [enum](../../processors/enum) processor plugin.
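For example, string states can be mapped to numbers before they reach this
output, roughly like this (field name and mappings are placeholders):

```toml
[[processors.enum]]
  [[processors.enum.mapping]]
    ## Convert a string status field into a numeric value Wavefront accepts
    field = "status"
    [processors.enum.mapping.value_mappings]
      ok = 1
      warning = 2
      critical = 3
```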
@ -2,7 +2,9 @@
This plugin can write to a WebSocket endpoint.
It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
It can output data in any of the [supported output formats][formats].
[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
## Configuration
@ -1,6 +1,7 @@
# Yandex Cloud Monitoring
# Yandex Cloud Monitoring Output Plugin
This plugin will send custom metrics to [Yandex Cloud Monitoring](https://cloud.yandex.com/services/monitoring).
This plugin will send custom metrics to [Yandex Cloud
Monitoring](https://cloud.yandex.com/services/monitoring).
## Configuration
@ -21,6 +22,7 @@ This plugin will send custom metrics to [Yandex Cloud Monitoring](https://cloud.
This plugin currently supports only YC.Compute metadata-based authentication.
When plugin is working inside a YC.Compute instance it will take IAM token and Folder ID from instance metadata.
When the plugin is running inside a YC.Compute instance, it will take the IAM
token and Folder ID from the instance metadata.
Other authentication methods will be added later.