chore: Fix readme linter errors for output plugins (#10951)
This commit is contained in:
parent dc95d22272
commit 6ba3b1e91e

@ -1,10 +1,11 @@
# Amon Output Plugin

This plugin writes to [Amon](https://www.amon.cx) and requires a `serverkey`
and `amoninstance` URL which can be obtained
[here](https://www.amon.cx/docs/monitoring/) for the account.

If the point value being sent cannot be converted to a float64, the metric is
skipped.

Metrics are grouped by converting any `_` characters to `.` in the Point Name.
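A minimal sketch of the options described above (the exact key names are
assumptions for illustration, not confirmed by this diff):

```toml
[[outputs.amon]]
  ## placeholder values; obtain both from your Amon account
  server_key = "my-server-key"
  amon_instance = "https://youramoninstance"
```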
@ -1,6 +1,7 @@
# AMQP Output Plugin

This plugin writes to an AMQP 0-9-1 Exchange, a prominent implementation of this
protocol being [RabbitMQ](https://www.rabbitmq.com/).

This plugin does not bind the exchange to a queue.
@ -111,11 +112,11 @@ For an introduction to AMQP see:
### Routing

If `routing_tag` is set, and the tag is defined on the metric, the value of the
tag is used as the routing key. Otherwise the value of `routing_key` is used
directly. If both are unset the empty string is used.

Exchange types that do not use a routing key, `direct` and `header`, always use
the empty string as the routing key.

Metrics are published in batches based on the final routing key.
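A hedged configuration sketch of the routing behavior described above (broker
address and exchange names are illustrative placeholders):

```toml
[[outputs.amqp]]
  ## illustrative broker URL
  brokers = ["amqp://localhost:5672/influxdb"]
  exchange = "telegraf"
  ## if a metric carries this tag, its value becomes the routing key
  routing_tag = "host"
  ## fallback routing key used when the tag is absent
  routing_key = "telegraf"
```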
@ -1,6 +1,7 @@
# Application Insights Output Plugin

This plugin writes telegraf metrics to [Azure Application
Insights](https://azure.microsoft.com/en-us/services/application-insights/).

## Configuration
@ -39,7 +40,8 @@ on the measurement name and field.
foo,host=a first=42,second=43 1525293034000000000
```

In the special case of a single field named `value`, a single telemetry record
is created, named using only the measurement name.

**Example:** Create a telemetry record `bar`:
@ -1,12 +1,17 @@
# Azure Data Explorer Output Plugin

This plugin writes data collected by any of the Telegraf input plugins to [Azure
Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/).
Azure Data Explorer is a distributed, columnar store, purpose built for any type
of logs, metrics and time series data.

## Pre-requisites

- [Create Azure Data Explorer cluster and
  database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
- VM/compute or container to host Telegraf - it could be hosted locally where an
  app/service to be monitored is deployed or remotely on a dedicated monitoring
  compute/container.

## Configuration
@ -40,21 +45,40 @@ Azure Data Explorer is a distributed, columnar store, purpose built for any type
## Metrics Grouping

Metrics can be grouped in two ways to be sent to Azure Data Explorer. To specify
which metric grouping type the plugin should use, the respective value should be
given to the `metrics_grouping_type` in the config file. If no value is given to
`metrics_grouping_type`, by default, the metrics will be grouped using
`TablePerMetric`.
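A minimal sketch of the two grouping modes (endpoint and database values are
illustrative assumptions):

```toml
[[outputs.azure_data_explorer]]
  endpoint_url = "https://mycluster.mylocation.kusto.windows.net"
  database = "mydb"
  ## "TablePerMetric" (default) or "SingleTable"
  metrics_grouping_type = "SingleTable"
  ## only consulted when metrics_grouping_type is "SingleTable"
  table_name = "telegraf"
```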
### TablePerMetric

The plugin will group the metrics by the metric name, and will send each group
of metrics to an Azure Data Explorer table. If the table doesn't exist, the
plugin will create it; if the table exists, the plugin will try to merge the
Telegraf metric schema to the existing table. For more information about the
merge process check the [`.create-merge` documentation][create-merge].

The table name will match the `name` property of the metric; this means that the
name of the metric should comply with the Azure Data Explorer table naming
constraints in case you plan to add a prefix to the metric name.

[create-merge]: https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command

### SingleTable

The plugin will send all the metrics received to a single Azure Data Explorer
table. The name of the table must be supplied via `table_name` in the config
file. If the table doesn't exist, the plugin will create it; if the table
exists, the plugin will try to merge the Telegraf metric schema to the
existing table. For more information about the merge process check the
[`.create-merge` documentation][create-merge].

## Tables Schema

The schema of the Azure Data Explorer table will match the structure of the
Telegraf `Metric` object. The corresponding Azure Data Explorer command
generated by the plugin would be like the following:

```text
.create-merge table ['table-name'] (['fields']:dynamic, ['name']:string, ['tags']:dynamic, ['timestamp']:datetime)
@ -66,38 +90,51 @@ The corresponding table mapping would be like the following:
.create-or-alter table ['table-name'] ingestion json mapping 'table-name_mapping' '[{"column":"fields", "Properties":{"Path":"$[\'fields\']"}},{"column":"name", "Properties":{"Path":"$[\'name\']"}},{"column":"tags", "Properties":{"Path":"$[\'tags\']"}},{"column":"timestamp", "Properties":{"Path":"$[\'timestamp\']"}}]'
```

**Note**: This plugin will automatically create Azure Data Explorer tables and
corresponding table mapping as per the above mentioned commands.

## Authentication

### Supported Authentication Methods

This plugin provides several types of authentication. The plugin will check the
existence of several specific environment variables, and consequently will
choose the right method.

These methods are:

1. AAD Application Tokens (Service Principals with secrets or certificates).

   For guidance on how to create and register an App in Azure Active Directory
   check [this article][register], and for more information on the Service
   Principals check [this article][principal].

2. AAD User Tokens

   - Allows Telegraf to authenticate like a user. This method is mainly used
     for development purposes only.

3. Managed Service Identity (MSI) token

   - If you are running Telegraf from an Azure VM or infrastructure, then this
     is the preferred authentication method.

[register]: https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application
[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals

Whichever method is used, the designated Principal needs to be assigned the
`Database User` role on the Database level in the Azure Data Explorer. This
role will allow the plugin to create the required tables and ingest data into
them. If `create_tables=false` then the designated principal only needs the
`Database Ingestor` role at least.

### Configurations of the chosen Authentication Method

The plugin will authenticate using the first available of the following
configurations, **it's important to understand that the assessment, and
consequently choosing the authentication method, will happen in order as
below**:

1. **Client Credentials**: Azure AD Application ID and Secret.
@ -125,14 +162,16 @@ following configurations, **it's important to understand that the assessment, an
4. **Azure Managed Service Identity**: Delegate credential management to the
   platform. Requires that code is running in Azure, e.g. on a VM. All
   configuration is handled by Azure. See [Azure Managed Service Identity][msi]
   for more details. Only available when using the [Azure Resource
   Manager][arm].

[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview

## Querying data collected in Azure Data Explorer

Examples of data transformations and queries that would be useful to gain
insights:

### Using SQL input plugin
@ -143,9 +182,12 @@ name | tags | timestamp | fields
sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149}
sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464}

Since the collected metrics object is of a complex type, "fields" and "tags" are
stored as dynamic data type. There are multiple ways to query this data:

1. Query JSON attributes directly: Azure Data Explorer provides the ability to
   query JSON data in raw format without parsing it, so JSON attributes can be
   queried directly in the following way:

```text
Tablename
@ -157,9 +199,14 @@ Since collected metrics object is of complex type so "fields" and "tags" are sto
| distinct tostring(tags.database_name)
```

**Note** - This approach could have a performance impact in case of large
volumes of data; use the below mentioned approach for such cases.

1. Use [Update
   policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy):
   Transform dynamic data type columns using update policy. This is the
   recommended performant way for querying over large volumes of data compared
   to querying directly over JSON attributes:

```json
// Function to transform data
@ -186,9 +233,15 @@ name | tags | timestamp | fields
syslog|{"appname":"azsecmond","facility":"user","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:36:44Z|{"facility_code":1,"message":" 2021/09/20 14:36:44.890110 Failed to connect to mdsd: dial unix /var/run/mdsd/default_djson.socket: connect: no such file or directory","procid":"2184","severity_code":6,"timestamp":"1632148604890477000","version":1}
syslog|{"appname":"CRON","facility":"authpriv","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:37:01Z|{"facility_code":10,"message":" pam_unix(cron:session): session opened for user root by (uid=0)","procid":"26446","severity_code":6,"timestamp":"1632148621120781000","version":1}

There are multiple ways to flatten dynamic columns using the 'extend' or
'bag_unpack' operator. You can use either of these ways in the above mentioned
update policy function - 'Transform_TargetTableName()'

- Use
  [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator)
  operator - This is the recommended approach compared to 'bag_unpack' as it is
  faster and robust. Even if the schema changes, it will not break queries or
  dashboards.

```text
Tablename
@ -198,7 +251,10 @@ There are multiple ways to flatten dynamic columns using 'extend' or 'bag_unpack
| project-away fields, tags
```

- Use [bag_unpack
  plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin)
  to unpack the dynamic type columns automatically. This method could lead to
  issues if the source schema changes, as it is dynamically expanding columns.

```text
Tablename
@ -1,4 +1,4 @@
# Azure Monitor Output Plugin

__The Azure Monitor custom metrics service is currently in preview and not
available in a subset of Azure regions.__
@ -9,10 +9,10 @@ output plugin will automatically aggregates metrics into one minute buckets,
which are then sent to Azure Monitor on every flush interval.

The metrics from each input plugin will be written to a separate Azure Monitor
namespace, prefixed with `Telegraf/` by default. The field name for each metric
is written as the Azure Monitor metric name. All field values are written as a
summarized set that includes: min, max, sum, count. Tags are written as a
dimension on each Azure Monitor metric.

## Configuration
@ -50,22 +50,24 @@ written as a dimension on each Azure Monitor metric.
## Setup

1. [Register the `microsoft.insights` resource provider in your Azure
   subscription][resource provider].
1. If using Managed Service Identities to authenticate an Azure VM, [enable
   system-assigned managed identity][enable msi].
1. Use a region that supports Azure Monitor Custom Metrics. For regions with
   Custom Metrics support, an endpoint will be available with the format
   `https://<region>.monitoring.azure.com`.

[resource provider]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services
[enable msi]: https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/qs-configure-portal-windows-vm

### Region and Resource ID

The plugin will attempt to discover the region and resource ID using the Azure
VM Instance Metadata service. If Telegraf is not running on a virtual machine or
the VM Instance Metadata service is not available, the following variables are
required for the output to function, as sketched below.

* region
* resource_id
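A hedged example of setting both manually (values are placeholders, not real
resources):

```toml
[[outputs.azure_monitor]]
  ## discovered automatically when running on an Azure VM
  region = "useast"
  resource_id = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>"
```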
@ -76,7 +78,9 @@ This plugin uses one of several different types of authenticate methods. The
preferred authentication methods are different from the *order* in which each
authentication is checked. Here are the preferred authentication methods:

1. Managed Service Identity (MSI) token: This is the preferred authentication
   method. Telegraf will automatically authenticate using this method when
   running on Azure VMs.
2. AAD Application Tokens (Service Principals)

   * Primarily useful if Telegraf is writing metrics for other resources.
@ -92,10 +96,11 @@ authentication is checked. Here are the preferred authentication methods:
[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects

The plugin will authenticate using the first available of the following
configurations:

1. **Client Credentials**: Azure AD Application ID and Secret. Set the following
   environment variables:

   * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
   * `AZURE_CLIENT_ID`: Specifies the app client ID to use.
@ -119,7 +124,8 @@ following configurations:
1. **Azure Managed Service Identity**: Delegate credential management to the
   platform. Requires that code is running in Azure, e.g. on a VM. All
   configuration is handled by Azure. See [Azure Managed Service Identity][msi]
   for more details. Only available when using the [Azure Resource
   Manager][arm].

[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
@ -140,5 +146,7 @@ dimension limit.
To convert only a subset of string-typed fields as dimensions, enable
`strings_as_dimensions` and use the [`fieldpass` or `fielddrop`
processors][conf-processor] to limit the string-typed fields that are sent to
the plugin.

[conf-processor]: https://docs.influxdata.com/telegraf/v1.7/administration/configuration/#processor-configuration
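A sketch of this combination (field names are hypothetical; `fieldpass` here is
assumed to be Telegraf's standard metric filtering applied to this output):

```toml
[[outputs.azure_monitor]]
  ## send string fields as dimensions, but only the listed ones
  strings_as_dimensions = true
  fieldpass = ["status", "location"]
```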
@ -1,9 +1,12 @@
# Google BigQuery Output Plugin

This plugin writes to [Google Cloud
BigQuery](https://cloud.google.com/bigquery) and requires
[authentication](https://cloud.google.com/bigquery/docs/authentication) with
Google Cloud using either a service account or user credentials.

Be aware that this plugin accesses APIs that are
[chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.

## Configuration
@ -28,23 +31,30 @@ Be aware that this plugin accesses APIs that are [chargeable](https://cloud.goog
Requires `project` to specify where BigQuery entries will be persisted.

Requires `dataset` to specify under which BigQuery dataset the corresponding
metrics tables reside.

Each metric should have a corresponding table in BigQuery. The schema of the
table on BigQuery:

* Should contain the field `timestamp` which is the timestamp of a Telegraf
  metric
* Should contain the metric's tags with the same name and the column type should
  be set to string.
* Should contain the metric's fields with the same name and the column type
  should match the field type.

## Restrictions

Avoid hyphens on BigQuery tables; the underlying SDK cannot handle streaming
inserts to tables with hyphens.

In cases of metrics with hyphens please use the [Rename Processor
Plugin][rename].

For a metric with hyphens, by default the hyphens will be replaced with
underscores (`_`). This can be altered using the `replace_hyphen_to`
configuration property.
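A hedged configuration sketch tying these options together (project and dataset
values are placeholders):

```toml
[[outputs.bigquery]]
  project = "my-gcp-project"
  dataset = "telegraf"
  ## replacement character for hyphens in metric names
  replace_hyphen_to = "_"
```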

Available data type options are:
@ -53,9 +63,13 @@ Available data type options are:
* string
* boolean

All field naming restrictions that apply to BigQuery should apply to the
measurements to be imported.

Tables on BigQuery should be created beforehand; they are not created during
persistence.

Pay attention to the column `timestamp` since it is reserved upfront and cannot
change. If partitioning is required make sure it is applied beforehand.

[rename]: ../../processors/rename/README.md
@ -5,9 +5,6 @@ as one of the supported [output data formats][].
## Configuration

```toml
# Publish Telegraf metrics to a Google Cloud PubSub topic
[[outputs.cloud_pubsub]]
@ -1,25 +1,32 @@
# Amazon CloudWatch Output Plugin

This plugin will send metrics to Amazon CloudWatch.

## Amazon Authentication

This plugin uses a credential chain for Authentication with the CloudWatch API
endpoint. In the following order the plugin will attempt to authenticate.

1. Web identity provider credentials via STS if `role_arn` and
   `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
   credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]

If you are using credentials from a web identity provider, you can specify the
session name using `role_session_name`. If left empty, the current timestamp
will be used.

The IAM user needs only the `cloudwatch:PutMetricData` permission.

[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
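A hedged sketch of explicit credentials (step 3 of the chain above; key values
are placeholders and would normally come from the environment):

```toml
[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"
  ## explicit static credentials; omit to fall through the credential chain
  access_key = "AKIA..."
  secret_key = "..."
```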

## Configuration

```toml
@ -67,16 +74,16 @@ The IAM user needs only the `cloudwatch:PutMetricData` permission.
# high_resolution_metrics = false
```

For this output plugin to function correctly the following variables must be
configured.

* region
* namespace

### region

The region is the Amazon region that you wish to connect to. Examples include
but are not limited to:

* us-west-1
* us-west-2
@ -91,13 +98,16 @@ The namespace used for AWS CloudWatch metrics.
### write_statistics

If you have a large amount of metrics, you should consider sending statistic
values instead of raw metrics, which could not only improve performance but also
save AWS API cost. If this flag is enabled, this plugin would parse the required
[CloudWatch statistic fields][1] (count, min, max, and sum) and send them to
CloudWatch. You can use the `basicstats` aggregator to calculate those fields.
If not all statistic fields are available, all fields would still be sent as raw
metrics.

[1]: https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet
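A sketch of the pairing described above (the aggregator period is an
illustrative choice):

```toml
## compute the statistic fields write_statistics expects
[[aggregators.basicstats]]
  period = "30s"
  stats = ["count", "min", "max", "sum"]

[[outputs.cloudwatch]]
  region = "us-west-2"
  namespace = "InfluxData/Telegraf"
  ## send the parsed statistic set instead of raw metrics
  write_statistics = true
```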
### high_resolution_metrics

Enable high resolution metrics (1 second precision) instead of standard ones (60
seconds precision).
@ -1,4 +1,4 @@
# Amazon CloudWatch Logs Output Plugin

This plugin will send logs to Amazon CloudWatch.
@ -7,20 +7,29 @@ This plugin will send logs to Amazon CloudWatch.
This plugin uses a credential chain for Authentication with the CloudWatch Logs
API endpoint. In the following order the plugin will attempt to authenticate.

1. Web identity provider credentials via STS if `role_arn` and
   `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
   credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]

The IAM user needs the following permissions (see this [reference][4] for more):

- `logs:DescribeLogGroups` - required to check if the configured log group
  exists
- `logs:DescribeLogStreams` - required to view all log streams associated with a
  log group.
- `logs:CreateLogStream` - required to create a new log stream in a log group.
- `logs:PutLogEvents` - required to upload a batch of log events into a log
  stream.

[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables
[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
[4]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html
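A minimal sketch, assuming the group/stream option names below (they are
illustrative, not confirmed by this diff):

```toml
[[outputs.cloudwatch_logs]]
  region = "us-east-1"
  ## the log group must already exist or be creatable with the permissions above
  log_group = "telegraf"
  log_stream = "my-log-stream"
```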

## Configuration
@ -1,6 +1,7 @@
# CrateDB Output Plugin

This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL
protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
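A hedged connection sketch (the URL is a placeholder; CrateDB speaks the
PostgreSQL wire protocol, so a standard postgres connection string is assumed):

```toml
[[outputs.cratedb]]
  url = "postgres://user:password@localhost/schema?sslmode=disable"
  table = "metrics"
  ## create the metrics table if it does not exist
  table_create = true
```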

## Table Schema
@ -27,8 +27,8 @@ This plugin writes to the [Datadog Metrics API][metrics] and requires an
## Metrics

Datadog metric names are formed by joining the Telegraf metric name and the
field key with a `.` character.

Field values are converted to floating point numbers. Strings and floats that
cannot be sent over JSON, namely NaN and Inf, are ignored.
@ -1,29 +1,54 @@
# Dynatrace Output Plugin

This plugin sends Telegraf metrics to [Dynatrace](https://www.dynatrace.com) via
the [Dynatrace Metrics API V2][api-v2]. It may be run alongside the Dynatrace
OneAgent for automatic authentication or it may be run standalone on a host
without a OneAgent by specifying a URL and API Token. More information on the
plugin can be found in the [Dynatrace documentation][docs]. All metrics are
reported as gauges, unless they are specified to be delta counters using the
`additional_counters` config option (see below). See the [Dynatrace Metrics
ingestion protocol documentation][proto-docs] for details on the types defined
there.

[api-v2]: https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/
[docs]: https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/ingestion-methods/telegraf/
[proto-docs]: https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/metric-ingestion-protocol

## Requirements

You will either need a Dynatrace OneAgent (version 1.201 or higher) installed on
the same host as Telegraf, or a Dynatrace environment with version 1.202 or
higher.

- Telegraf minimum version: Telegraf 1.16

## Getting Started

Setting up Telegraf is explained in the [Telegraf
Documentation][getting-started]. The Dynatrace exporter may be enabled by adding
an `[[outputs.dynatrace]]` section to your `telegraf.conf` config file. All
configurations are optional, but if a `url` other than the OneAgent metric
ingestion endpoint is specified then an `api_token` is required. To see all
available options, see [Configuration](#configuration) below.

[getting-started]: https://docs.influxdata.com/telegraf/latest/introduction/getting-started/

### Running alongside Dynatrace OneAgent (preferred)

If you run the Telegraf agent on a host or VM that is monitored by the Dynatrace
OneAgent then you only need to enable the plugin, but need no further
configuration. The Dynatrace Telegraf output plugin will send all metrics to the
OneAgent, which will use its secure and load balanced connection to send the
metrics to your Dynatrace SaaS or Managed environment. Depending on your
environment, you might have to enable metrics ingestion on the OneAgent first as
described in the [Dynatrace documentation][docs].

Note: The name and identifier of the host running Telegraf will be added as a
dimension to every metric. If this is undesirable, then the output plugin may be
used in standalone mode using the directions below.

```toml
[[outputs.dynatrace]]
@ -32,15 +57,23 @@ Note: The name and identifier of the host running Telegraf will be added as a di
### Running standalone

If you run the Telegraf agent on a host or VM without a OneAgent you will need
to configure the environment API endpoint to send the metrics to and an API
token for security.

You will also need to configure an API token for secure access. Find out how to
create a token in the [Dynatrace documentation][api-auth] or simply navigate to
**Settings > Integration > Dynatrace API** in your Dynatrace environment and
create a new token with the 'Ingest metrics' (`metrics.ingest`) scope enabled.
It is recommended to limit the token scope to only this permission.

The endpoint for the Dynatrace Metrics API v2 is

- on Dynatrace Managed:
  `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
- on Dynatrace SaaS:
  `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`

```toml
[[outputs.dynatrace]]
@ -53,7 +86,10 @@ The endpoint for the Dynatrace Metrics API v2 is
api_token = "your API token here" # hard-coded for illustration only, should be read from environment
```

You can learn more about how to use the Dynatrace API
[here](https://www.dynatrace.com/support/help/dynatrace-api/).

[api-auth]: https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/

## Configuration
@ -102,17 +138,25 @@ You can learn more about how to use the Dynatrace API [here](https://www.dynatra
*default*: Local OneAgent endpoint

Set your Dynatrace environment URL (e.g.:
`https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`, see
the [Dynatrace documentation][post-ingest] for details) if you do not use a
OneAgent or wish to export metrics directly to a Dynatrace metrics v2 endpoint.
If a URL is set to anything other than the local OneAgent endpoint, then an API
token is required.

```toml
url = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
```

[post-ingest]: https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/post-ingest-metrics/
### `api_token`

*required*: `false` unless `url` is specified

API token is required if a URL other than the OneAgent endpoint is specified and
it should be restricted to the 'Ingest metrics' scope.

```toml
api_token = "your API token here"
@ -122,7 +166,8 @@ api_token = "your API token here"
|
||||||
|
|
||||||
*required*: `false`

Optional prefix to be prepended to all metric names (will be separated with a
`.`).

```toml
prefix = "telegraf"
```

@@ -132,7 +177,8 @@ prefix = "telegraf"
*required*: `false`

Setting this option to true skips TLS verification for testing or when using
self-signed certificates.

```toml
insecure_skip_verify = false
```

@@ -142,7 +188,8 @@ insecure_skip_verify = false
*required*: `false`

If you want a metric to be treated and reported as a delta counter, add its
name to this list.

```toml
additional_counters = [ ]
```
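For example, to report two cumulative metrics as delta counters (a sketch; the
metric names below are illustrative, not defaults):

```toml
# illustrative metric names, not plugin defaults
additional_counters = ["cpu.time.total", "network.bytes.sent"]
```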
@@ -1,6 +1,7 @@

# Elasticsearch Output Plugin

This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using
Elastic (<http://olivere.github.io/elastic/>).

It supports Elasticsearch releases from 5.x up to 7.x.
@@ -8,19 +9,28 @@ It supports Elasticsearch releases from 5.x up to 7.x.

### Indexes per time-frame

This plugin can manage indexes per time-frame, as commonly done in other tools
with Elasticsearch.

The timestamp of the metric collected will be used to decide the index
destination.

For more information about this usage on Elasticsearch, check
[the docs][1].

[1]: https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe
### Template management

Index templates are used in Elasticsearch to define settings and mappings for
the indexes and how the fields should be analyzed. For more information on how
this works, see [the docs][2].

This plugin can create a working template for use with telegraf metrics. It
uses the Elasticsearch dynamic templates feature to set proper types for the
tags and metrics fields. If the template specified already exists, it will not
be overwritten unless you configure this plugin to do so. Thus you can
customize this template after its creation if necessary.

Example of an index template created by telegraf on Elasticsearch 5.x:

@@ -98,6 +108,8 @@ Example of an index template created by telegraf on Elasticsearch 5.x:

```

[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
### Example events

This plugin will format the events in the following way:

@@ -152,10 +164,10 @@ the actual underlying Elasticsearch version is v7.1. This breaks Telegraf and

other Elasticsearch clients that need to know what major version they are
interfacing with.

Amazon has created a [compatibility mode][3] to allow existing Elasticsearch
clients to properly work when the version needs to be checked. To enable
compatibility mode users need to set the `override_main_response_version` to
`true`.

On existing clusters run:

@@ -181,6 +193,8 @@ POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain

}
```

[3]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade
## Configuration

```toml
```

@@ -267,17 +281,18 @@

### Permissions

If you are using authentication within your Elasticsearch cluster, you need to
create an account and a role with at least the manage role in the Cluster
Privileges category. Otherwise, your account will not be able to connect to
your Elasticsearch cluster and send logs to it. After that, you need to add
"create_index" and "write" permission to your specific index pattern.
### Required parameters

* `urls`: A list containing the full HTTP URL of one or more nodes from your
  Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers
  below to create indexes per time frame.

```text
%Y - year (2017)
%y - last two digits of year (00..99)
```

@@ -287,30 +302,65 @@ index pattern.

```text
%V - week of the year (ISO week) (01..53)
```

Additionally, you can specify dynamic index names by using tags with the
notation `{{tag_name}}`. This will store the metrics with different tag values
in different indices. If the tag does not exist in a particular metric, the
`default_tag_value` will be used instead.
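As an illustrative sketch (the tag name and index layout are made up), a
dynamic index name can combine a tag with the date specifiers above:

```toml
# one index per host and per day; "host" is an assumed tag name
index_name = "telegraf-{{host}}-%Y.%m.%d"
default_tag_value = "unknown"
```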
### Optional parameters

* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
* `enable_sniffer`: Set to true to ask Elasticsearch a list of all cluster
  nodes, thus it is not necessary to list all nodes in the urls config option.
* `health_check_interval`: Set the interval to check if the nodes are
  available, in seconds. Setting to 0 will disable the health check (not
  recommended in production).
* `username`: The username for HTTP basic authentication details (eg. when
  using Shield).
* `password`: The password for HTTP basic authentication details (eg. when
  using Shield).
* `manage_template`: Set to true if you want telegraf to manage its index
  template. If enabled it will create a recommended index template for
  telegraf indexes.
* `template_name`: The template name used for telegraf indexes.
* `overwrite_template`: Set to true if you want telegraf to overwrite an
  existing template.
* `force_document_id`: Set to true to compute a unique hash as
  sha256(concat(timestamp,measurement,series-hash)); this enables resending
  or updating data without duplicated documents in Elasticsearch.
* `float_handling`: Specifies how to handle `NaN` and infinite field values.
  `"none"` (default) will do nothing, `"drop"` will drop the field and
  `"replace"` will replace the field value by the number in
  `float_replacement_value`.
* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and
  `inf`s if `float_handling` is set to `replace`. Negative `inf` will be
  replaced by the negative value in this number to respect the sign of the
  field's original value.
* `use_pipeline`: If set, the set value will be used as the pipeline to call
  when sending events to elasticsearch. Additionally, you can specify dynamic
  pipeline names by using tags with the notation `{{tag_name}}`. If the tag
  does not exist in a particular metric, the `default_pipeline` will be used
  instead.
* `default_pipeline`: If using dynamic pipeline names and the tag does not
  exist in a particular metric, this value will be used instead.
## Known issues

Integer values collected that are bigger than 2^63 and smaller than 1e21 (or
in this exact same window of their negative counterparts) are encoded by the
golang JSON encoder in decimal format, and that is not fully supported by
Elasticsearch dynamic field mapping. This causes the metrics with such values
to be dropped in case a field mapping has not been created yet on the telegraf
index. If that's the case you will see an exception on the Elasticsearch side
like this:

```json
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}
```

The correct field mapping will be created on the telegraf index as soon as a
supported JSON value is received by Elasticsearch, and subsequent insertions
will work because the field mapping will already exist.

This issue is caused by the way Elasticsearch tries to detect integer fields,
and by how golang encodes numbers in JSON. There is no clear workaround for
this at the moment.
@@ -1,10 +1,17 @@

# Azure Event Hubs Output Plugin

This plugin for [Azure Event
Hubs](https://azure.microsoft.com/en-gb/services/event-hubs/) will send
metrics to a single Event Hub within an Event Hubs namespace. Metrics are sent
as message batches, each message payload containing one metric object. The
messages do not specify a partition key, and will thus be automatically
load-balanced (round-robin) across all the Event Hub partitions.

## Metrics

The plugin uses the Telegraf serializers to format the metric data sent in the
message payloads. You can select any of the supported output formats, although
JSON is probably the easiest to integrate with downstream components.
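A minimal configuration sketch; the option names are assumptions to be checked
against the configuration section below, and the connection string is a
placeholder:

```toml
[[outputs.event_hubs]]
  ## connection string to the Event Hub (placeholder value)
  connection_string = "Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=key;SharedAccessKey=secret;EntityPath=hubName"
  ## serializer used for the message payloads, as discussed above
  data_format = "json"
```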
## Configuration

@@ -27,4 +27,4 @@ Telegraf minimum version: Telegraf 1.15.0

see [examples][]

[examples]: examples/
@@ -1,10 +1,13 @@

# Graphite Output Plugin

This plugin writes to [Graphite][1] via raw TCP.

For details on the translation between Telegraf Metrics and Graphite output,
see the [Graphite Data Format][2].

[1]: http://graphite.readthedocs.org/en/latest/index.html

[2]: ../../../docs/DATA_FORMATS_OUTPUT.md

## Configuration
@@ -8,16 +8,16 @@ This plugin writes to a Graylog instance using the "[GELF][]" format.

The [GELF spec][] defines a number of specific fields in a GELF payload. These
fields may have specific requirements set by the spec, and users of the
Graylog plugin need to follow these requirements or metrics may be rejected
due to invalid data.

For example, the timestamp field defined in the GELF spec is required to be a
UNIX timestamp. This output plugin will not modify or check the timestamp
field if one is present, and will send it as-is to Graylog. If the field is
absent then Telegraf will set the timestamp to the current time.

Any field not defined by the spec will have an underscore (e.g. `_`) prefixed
to the field name.

[GELF spec]: https://docs.graylog.org/docs/gelf#gelf-payload-specification
@@ -50,5 +50,6 @@ to the field name.

```toml
# insecure_skip_verify = false
```

The server endpoint may be specified without a UDP or TCP scheme
(eg. "127.0.0.1:12201"). In that case, the UDP protocol is assumed. TLS config
is ignored for UDP endpoints.
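A sketch of the `servers` option illustrating both forms; a bare `host:port`
entry is treated as UDP, while an explicit `tcp://` scheme selects TCP (the
hostname is a placeholder):

```toml
# first entry has no scheme, so UDP is assumed; second entry uses TCP
servers = ["127.0.0.1:12201", "tcp://graylog.example.com:12201"]
```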
@@ -1,6 +1,7 @@

# GroundWork Output Plugin

This plugin writes to a [GroundWork Monitor][1] instance. The plugin only
supports GW8+.

[1]: https://www.gwos.com/product/groundwork-monitor/

@@ -34,15 +35,24 @@ This plugin writes to a [GroundWork Monitor][1] instance. Plugin only supports G

## List of tags used by the plugin

* group - to define the name of the group you want to monitor, can be changed
  with config.
* host - to define the name of the host you want to monitor, can be changed
  with config.
* service - to define the name of the service you want to monitor.
* status - to define the status of the service. Supported statuses:
  "SERVICE_OK", "SERVICE_WARNING", "SERVICE_UNSCHEDULED_CRITICAL",
  "SERVICE_PENDING", "SERVICE_SCHEDULED_CRITICAL", "SERVICE_UNKNOWN".
* message - to provide any message you want.
* unitType - to use in monitoring contexts (a subset of the Unified Code for
  Units of Measure standard). Supported types: "1", "%cpu", "KB", "GB", "MB".
* warning - to define the warning threshold value.
* critical - to define the critical threshold value.

## NOTE

The current version of GroundWork Monitor does not support metrics whose
values are strings. Such metrics will be skipped and will not be added to the
final payload. You can find more context in this pull request: [#10255][].

[#10255]: https://github.com/influxdata/telegraf/pull/10255
@@ -1,8 +1,8 @@

# HTTP Output Plugin

This plugin sends metrics in an HTTP message encoded using one of the output
data formats. For data_formats that support batching, metrics are sent in
batch format by default.

## Configuration

@@ -97,4 +97,11 @@ batch format by default.

### Optional Cookie Authentication Settings

The optional Cookie Authentication Settings will retrieve a cookie from the
given authorization endpoint, and use it in subsequent API requests. This is
useful for services that do not provide OAuth or Basic Auth authentication,
e.g. the [Tesla Powerwall API][powerwall], which uses a Cookie Auth Body to
retrieve an authorization cookie. The Cookie Auth Renewal interval will renew
the authorization by retrieving a new cookie at the given interval.
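A sketch of what such a configuration might look like; the option names below
are assumptions derived from the setting names mentioned above, and all values
are placeholders:

```toml
# option names assumed from the descriptions above; verify against the
# full configuration block
cookie_auth_url = "https://powerwall.example.com/login/Basic"
cookie_auth_method = "POST"
cookie_auth_body = '{"username": "", "password": "secret", "force_sm_off": true}'
## renew the cookie at this interval
cookie_auth_renewal = "5m"
```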
[powerwall]: https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network
@@ -1,6 +1,7 @@

# InfluxDB v1.x Output Plugin

The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP
service.

## Configuration

@@ -89,4 +90,5 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP ser

Reference the [influx serializer][] for details about metric production.

[InfluxDB v1.x]: https://github.com/influxdata/influxdb

[influx serializer]: /plugins/serializers/influx/README.md#Metrics
@@ -1,11 +1,13 @@

# Instrumental Output Plugin

This plugin writes to the [Instrumental Collector
API](https://instrumentalapp.com/docs/tcp-collector) and requires a
Project-specific API token.

Instrumental accepts stats in a format very close to Graphite, with the only
difference being that the type of stat (gauge, increment) is the first token,
separated from the metric itself by whitespace. The `increment` type is only
used if the metric comes in as a counter through `[[inputs.statsd]]`.
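For illustration, a gauge and an increment might be written to the collector
like this (a sketch of the format described above; the metric names, values,
and timestamps are made up):

```text
gauge host.cpu.usage 87.5 1475605021
increment host.requests.count 12 1475605021
```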
## Configuration
@@ -1,6 +1,7 @@

# Kafka Output Plugin

This plugin writes to a [Kafka
Broker](http://kafka.apache.org/07/quickstart.html) acting as a Kafka
Producer.

## Configuration

@@ -163,8 +164,8 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.htm

This option controls the number of retries before a failure notification is
displayed for each message when no acknowledgement is received from the
broker. When the setting is greater than `0`, message latency can be reduced,
duplicate messages can occur in cases of transient errors, and broker loads
can increase during downtime.

The option is similar to the
[retries](https://kafka.apache.org/documentation/#producerconfigs) Producer
@@ -1,29 +1,38 @@

# Amazon Kinesis Output Plugin

This is an experimental plugin that is still in the early stages of
development. It will batch up all of the Points in one Put request to Kinesis.
This should reduce the number of API requests by a considerable level.

## About Kinesis

This is not the place to document all of the various Kinesis terms; however,
it may be useful for users to review Amazon's official documentation, which is
available [here](http://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html).

## Amazon Authentication

This plugin uses a credential chain for Authentication with the Kinesis API
endpoint. The plugin will attempt to authenticate in the following order.

1. Web identity provider credentials via STS if `role_arn` and
   `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
   credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
1. [Shared Credentials][2]
1. [EC2 Instance Profile][3]

If you are using credentials from a web identity provider, you can specify the
session name using `role_session_name`. If left empty, the current timestamp
will be used.

[1]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables

[2]: https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file

[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
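A configuration sketch using explicit credentials (rule 3 in the chain above);
the key values are placeholders, and omitting them falls through to the shared
profile, environment variable, and instance profile rules:

```toml
[[outputs.kinesis]]
  region = "us-west-2"
  streamname = "telegraf"
  ## explicit credentials (placeholder values)
  access_key = "AKIAEXAMPLEKEY"
  secret_key = "exampleSecretKey"
```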
## Configuration

@@ -93,14 +102,16 @@ left empty, the current timestamp will be used.

```toml
debug = false
```

For this output plugin to function correctly the following variables must be
configured.

* region
* streamname

### region

The region is the Amazon region that you wish to connect to. Examples include
but are not limited to:

* us-west-1
* us-west-2

@@ -110,39 +121,45 @@ The region is the Amazon region that you wish to connect to. Examples include bu

### streamname

The streamname is used by the plugin to ensure that data is sent to the
correct Kinesis stream. It is important to note that the stream *MUST* be
pre-configured for this plugin to function correctly. If the stream does not
exist, telegraf will exit with an exit code of 1.

### partitionkey [DEPRECATED]

This is used to group data within a stream. Currently this plugin only
supports a single partitionkey. Manually configuring different hosts, or
groups of hosts, with manually selected partitionkeys might be a workable
solution to scale out.

### use_random_partitionkey [DEPRECATED]

When true, a random UUID will be generated and used as the partitionkey when
sending data to Kinesis. This allows data to be spread evenly across multiple
shards in the stream. Because the partitionkey is random, there can be no
guarantee of ordering when consuming the data off the shards. If true, the
partitionkey option will be ignored.

### partition

This is used to group data within a stream. Currently four methods are
supported: random, static, tag or measurement. A configuration sketch follows
the method descriptions below.

#### random

This will generate a UUIDv4 for each metric to spread them across shards. Any
guarantee of ordering is lost with this method.

#### static

This uses a static string as a partitionkey. All metrics will be mapped to the
same shard, which may limit throughput.

#### tag

This will take the value of the specified tag from each metric as the
partitionKey. If the tag is not found, the `default` value will be used, or
`telegraf` if unspecified.
#### measurement

@@ -150,12 +167,14 @@ This will use the measurement's name as the partitionKey.
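A sketch of the tag method, assuming the partition settings live in an
`[outputs.kinesis.partition]` sub-table (the tag name is illustrative):

```toml
[outputs.kinesis.partition]
  method = "tag"
  key = "host"
  ## used when the "host" tag is missing from a metric
  default = "telegraf"
```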
### format

The format configuration value has been designated to allow people to change
the format of the Point as written to Kinesis. Right now there are two
supported formats: string and custom.

#### string

String is defined using the default Point.String() value and translated to
[]byte for the Kinesis stream.

#### custom
@@ -1,16 +1,20 @@

# Librato Output Plugin

This plugin writes to the [Librato Metrics API][metrics-api] and requires an
`api_user` and `api_token` which can be obtained [here][tokens] for the
account.

The `source_tag` option in the Configuration file is used to send contextual
information from Point Tags to the API.

If the point value being sent cannot be converted to a float64, the metric is
skipped.

Currently, the plugin does not send any associated Point Tags.

[metrics-api]: http://dev.librato.com/v1/metrics#metrics

[tokens]: https://metrics.librato.com/account/api_tokens

## Configuration

```toml
@@ -1,7 +1,8 @@

# Loki Output Plugin

This plugin sends logs to Loki, using the metric name and tags as labels. The
log line will contain all fields in `key="value"` format, which is easily
parsed with the `logfmt` parser in Loki.

Logs within each stream are sorted by timestamp before being sent to Loki.
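For example, a metric such as the following (hypothetical values):

```text
cpu,host=server01 usage_idle=87.5,usage_user=5.2 1475605021000000000
```

would be sent as one log line in a stream labelled by the metric name and the
`host` tag, with the fields rendered roughly as:

```text
usage_idle="87.5" usage_user="5.2"
```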
@@ -1,7 +1,8 @@

# MongoDB Output Plugin

This plugin sends metrics to MongoDB and automatically creates the collections
as time series collections when they don't already exist. **Please note:**
this requires MongoDB 5.0+ for Time Series Collections.
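Creating a time series collection corresponds roughly to the following MongoDB
shell call (a sketch, not the plugin's literal code; the `timeField` and
`metaField` names are assumptions):

```text
db.createCollection("cpu", { timeseries: { timeField: "timestamp", metaField: "tags" } })
```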
## Configuration
@@ -1,6 +1,7 @@

# MQTT Producer Output Plugin

This plugin writes to an [MQTT Broker](http://mqtt.org/) acting as an MQTT
producer.

## Mosquitto v2.0.12+ and `identifier rejected`
@@ -1,4 +1,4 @@

# New Relic Output Plugin

This plugin writes to New Relic Insights using the [Metrics API][].

@@ -34,4 +34,5 @@ Telegraf minimum version: Telegraf 1.15.0

```

[Metrics API]: https://docs.newrelic.com/docs/data-ingest-apis/get-data-new-relic/metric-api/introduction-metric-api

[Insights API Key]: https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#user-api-key
@@ -1,7 +1,7 @@

# NSQ Output Plugin

This plugin writes to a specified NSQD instance, usually local to the
producer. It requires a `server` name and a `topic` name.

## Configuration
@@ -1,6 +1,7 @@

# OpenTelemetry Output Plugin

This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers
and agents via gRPC.

## Configuration

@@ -42,16 +43,16 @@ This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers a

### Schema

The InfluxDB->OpenTelemetry conversion [schema][] and [implementation][] are
hosted on [GitHub][repo].

For metrics, two input schemata exist. Line protocol with measurement name
`prometheus` is assumed to have a schema matching the [Prometheus input
plugin](../../inputs/prometheus/README.md) when `metric_version = 2`. Line
protocol with other measurement names is assumed to have a schema matching the
[Prometheus input plugin](../../inputs/prometheus/README.md) when
`metric_version = 1`. If both schema assumptions fail, then the line protocol
data is interpreted as follows (a worked example follows the list):

- Metric type = gauge (or counter, if indicated by the input plugin)
- Metric name = `[measurement]_[field key]`

@@ -59,3 +60,9 @@ If both schema assumptions fail, then the line protocol data is interpreted as:

- Metric labels = line protocol tags
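As a worked example of this fallback (hypothetical values), the line

```text
disk,host=server01,path=/ used_percent=73.2 1475605021000000000
```

is interpreted as a gauge named `disk_used_percent` with labels
`host=server01` and `path=/`.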
Also see the [OpenTelemetry input plugin](../../inputs/opentelemetry/README.md).

[schema]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md

[implementation]: https://github.com/influxdata/influxdb-observability/tree/main/influx2otel

[repo]: https://github.com/influxdata/influxdb-observability
@@ -1,12 +1,14 @@

# OpenTSDB Output Plugin

This plugin writes to an OpenTSDB instance using either the "telnet" or HTTP
mode.

Using the HTTP API is the recommended way of writing metrics since OpenTSDB
2.0. To use HTTP mode, set useHttp to true in config. You can also control how
many metrics are sent in each HTTP request by setting batchSize in config.

See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for
details.

## Configuration

@@ -47,8 +49,8 @@ The expected input from OpenTSDB is specified in the following way:

```text
put <metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
```

The telegraf output plugin adds an optional prefix to the metric keys so that
a subset can be selected.

```text
put <[prefix.]metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
```
@@ -1,7 +1,7 @@

# Prometheus Output Plugin

This plugin starts a [Prometheus](https://prometheus.io/) Client; it exposes
all metrics on `/metrics` (default) to be polled by a Prometheus server.

## Configuration

@@ -55,6 +55,7 @@ all metrics on `/metrics` (default) to be polled by a Prometheus server.

## Metrics

Prometheus metrics are produced in the same manner as the
[prometheus serializer][].

[prometheus serializer]: /plugins/serializers/prometheus/README.md#Metrics
@@ -45,11 +45,16 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.

### Optional parameters

* `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long
  an event is considered valid for in Riemann.
* `separator`: Separator to use between measurement and field name in the
  Riemann service name.
* `measurement_as_attribute`: Set measurement name as a Riemann attribute,
  instead of prepending it to the Riemann service name.
* `string_as_state`: Send string metrics as Riemann event states. If this is
  not enabled then all string metrics will be ignored.
* `tag_keys`: A list of tag keys whose values get sent as Riemann tags. If
  empty, all Telegraf tag values will be sent as tags.
* `tags`: Additional Riemann tags that will be sent.
* `description_text`: Description text for Riemann event.
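A sketch combining several of these options (all values are illustrative):

```toml
ttl = 30.0
separator = "/"
measurement_as_attribute = false
string_as_state = true
tag_keys = ["host", "cluster"]
tags = ["telegraf"]
description_text = "metrics from telegraf"
```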
@@ -63,7 +68,8 @@ Riemann event emitted by Telegraf with default configuration:

```text
:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021}
```

Telegraf emitting the same Riemann event with `measurement_as_attribute` set
to `true`:

```text
#riemann.codec.Event{ ...
```

@@ -80,7 +86,8 @@ Telegraf emitting the same Riemann event with additional Riemann tags defined:

```text
:tags ["telegraf" "postgres_cluster"]}
```

Telegraf emitting a Riemann event with a status text and `string_as_state` set
to `true`, and a `description_text` defined:

```text
#riemann.codec.Event{
@@ -1,6 +1,9 @@

# Riemann Legacy Output Plugin

This is a deprecated plugin. Please use the [Riemann Output Plugin][new]
instead.

[new]: ../riemann/README.md

## Configuration
@@ -1,6 +1,8 @@

# SignalFx Output Plugin

The SignalFx output plugin sends metrics to [SignalFx][docs].

[docs]: https://docs.signalfx.com/en/latest/

## Configuration
||||||
# socket_writer Plugin
|
# Socket Writer Output Plugin
|
||||||
|
|
||||||
The socket_writer plugin can write to a UDP, TCP, or unix socket.
|
The socket writer plugin can write to a UDP, TCP, or unix socket.
|
||||||
|
|
||||||
It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
|
It can output data in any of the [supported output formats][formats].
|
||||||
|
|
||||||
|
[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
|
||||||
|
|
||||||
## Configuration
|
## Configuration
|
||||||
|
|
||||||
|
|
|
||||||
|
|
@ -2,66 +2,61 @@
|
||||||
|
|
||||||
The SQL output plugin saves Telegraf metric data to an SQL database.
|
The SQL output plugin saves Telegraf metric data to an SQL database.
|
||||||
|
|
||||||
The plugin uses a simple, hard-coded database schema. There is a table
|
The plugin uses a simple, hard-coded database schema. There is a table for each
|
||||||
for each metric type and the table name is the metric name. There is a
|
metric type and the table name is the metric name. There is a column per field
|
||||||
column per field and a column per tag. There is an optional column for
|
and a column per tag. There is an optional column for the metric timestamp.
|
||||||
the metric timestamp.
|
|
||||||
|
|
||||||
A row is written for every input metric. This means multiple metrics
|
A row is written for every input metric. This means multiple metrics are never
|
||||||
are never merged into a single row, even if they have the same metric
|
merged into a single row, even if they have the same metric name, tags, and
|
||||||
name, tags, and timestamp.
|
timestamp.
|
||||||
|
|
||||||
The plugin uses Golang's generic "database/sql" interface and third
|
The plugin uses Golang's generic "database/sql" interface and third party
|
||||||
party drivers. See the driver-specific section below for a list of
|
drivers. See the driver-specific section below for a list of supported drivers
|
||||||
supported drivers and details. Additional drivers may be added in
|
and details. Additional drivers may be added in future Telegraf releases.
|
||||||
future Telegraf releases.
|
|
||||||
|
|
||||||
## Getting started
|
## Getting started
|
||||||
|
|
||||||
To use the plugin, set the driver setting to the driver name
|
To use the plugin, set the driver setting to the driver name appropriate for
|
||||||
appropriate for your database. Then set the data source name
|
your database. Then set the data source name (DSN). The format of the DSN varies
|
||||||
(DSN). The format of the DSN varies by driver but often includes a
|
by driver but often includes a username, password, the database instance to use,
|
||||||
username, password, the database instance to use, and the hostname of
|
and the hostname of the database server. The user account must have privileges
|
||||||
the database server. The user account must have privileges to insert
|
to insert rows and create tables.
|
||||||
rows and create tables.
|
|
||||||
|
|
||||||
## Generated SQL
|
## Generated SQL
|
||||||
|
|
||||||
The plugin generates simple ANSI/ISO SQL that is likely to work on any
|
The plugin generates simple ANSI/ISO SQL that is likely to work on any DBMS. It
|
||||||
DBMS. It doesn't use language features that are specific to a
|
doesn't use language features that are specific to a particular DBMS. If you
|
||||||
particular DBMS. If you want to use a feature that is specific to a
|
want to use a feature that is specific to a particular DBMS, you may be able to
|
||||||
particular DBMS, you may be able to set it up manually outside of this
|
set it up manually outside of this plugin or through the init_sql setting.
|
||||||
plugin or through the init_sql setting.
|
|
||||||
|
|
||||||
The insert statements generated by the plugin use placeholder
|
The insert statements generated by the plugin use placeholder parameters. Most
|
||||||
parameters. Most database drivers use question marks as placeholders
|
database drivers use question marks as placeholders but postgres uses indexed
|
||||||
but postgres uses indexed dollar signs. The plugin chooses which
|
dollar signs. The plugin chooses which placeholder style to use depending on the
|
||||||
placeholder style to use depending on the driver selected.
|
driver selected.
|
||||||
|
|
||||||
## Advanced options

When the plugin first connects it runs SQL from the `init_sql` setting,
allowing you to perform custom initialization for the connection.
Before inserting a row, the plugin checks whether the table exists. If it
doesn't exist, the plugin creates the table. The existence check and the table
creation statements can be changed through template settings. The template
settings allow you to have the plugin create customized tables or skip table
creation entirely by setting the check template to any query that executes
without error, such as "select 1".
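For example, to skip table creation entirely, the existence check can be
pointed at a query that always succeeds. A sketch, assuming the check template
setting is named `table_existence_template`:

```toml
[[outputs.sql]]
  driver = "pgx"
  data_source_name = "postgres://telegraf:password@localhost:5432/metrics"
  ## Always report the table as existing, so no CREATE TABLE is ever issued
  table_existence_template = "SELECT 1"
```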
The name of the timestamp column is "timestamp" but it can be changed with the
`timestamp_column` setting. The timestamp column can be completely disabled by
setting it to "".
By changing the table creation template, it's possible with some databases to
save a row insertion timestamp. You can add an additional column with a
default value to the template, like "CREATE TABLE
{TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})".
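Wired into the configuration, such a template might look like the sketch
below, assuming the creation template setting is named `table_template`:

```toml
[[outputs.sql]]
  driver = "pgx"
  data_source_name = "postgres://telegraf:password@localhost:5432/metrics"
  ## Record when each row was inserted, alongside the metric columns
  table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"
```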
The mapping of metric types to SQL column types can be customized through the
convert settings.
## Configuration

@ -125,18 +120,20 @@ through the convert settings.
### go-sql-driver/mysql

MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use
MySQL's ANSI\_QUOTES mode with this plugin. You can enable this mode by using
the setting `init_sql = "SET sql_mode='ANSI_QUOTES';"` or through a
command-line option when running MySQL. See MySQL's docs for [details on
ANSI\_QUOTES][mysql-quotes] and [how to set the SQL mode][mysql-mode].

You can use a DSN of the format "username:password@tcp(host:port)/dbname". See
the [driver docs][mysql-driver] for details.

[mysql-quotes]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes

[mysql-mode]: https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting

[mysql-driver]: https://github.com/go-sql-driver/mysql
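Putting this together, a MySQL configuration sketch (credentials and database
name are illustrative):

```toml
[[outputs.sql]]
  driver = "mysql"
  data_source_name = "telegraf:password@tcp(localhost:3306)/metrics"
  ## ANSI quoting is required by this plugin
  init_sql = "SET sql_mode='ANSI_QUOTES';"
```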
### jackc/pgx

@ -146,10 +143,9 @@ docs](https://github.com/jackc/pgx) for more details.
### modernc.org/sqlite

This driver is not available on all operating systems and architectures. It is
only included in Linux builds on amd64, 386, arm64, arm, and Darwin on amd64.
It is not available for Windows, FreeBSD, and other Linux and Darwin
platforms.

The DSN is a filename or URL with scheme "file:". See the [driver
docs](https://modernc.org/sqlite) for details.
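For example (the file path is illustrative):

```toml
[[outputs.sql]]
  driver = "sqlite"
  ## A filename, or a URL with the "file:" scheme
  data_source_name = "file:/var/lib/telegraf/metrics.db"
```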
@ -168,14 +164,15 @@ Use this metric type to SQL type conversion:

bool = "UInt8"
```

See [ClickHouse data
types](https://clickhouse.com/docs/en/sql-reference/data-types/) for more
info.
### denisenkom/go-mssqldb

Telegraf doesn't have unit tests for go-mssqldb so it should be treated as
experimental.

### snowflakedb/gosnowflake

Telegraf doesn't have unit tests for gosnowflake so it should be treated as
experimental.
@ -9,11 +9,14 @@ costs.

Requires `project` to specify where Stackdriver metrics will be delivered to.

Metrics are grouped by the `namespace` variable and metric key, e.g.
`custom.googleapis.com/telegraf/system/load5`.

[Resource type](https://cloud.google.com/monitoring/api/resources) is
configured by the `resource_type` variable (default `global`).

Additional resource labels can be configured by `resource_labels`. By default
the required `project_id` label is always set to the `project` variable.
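A sketch tying these options together; the project and label values are
illustrative:

```toml
[[outputs.stackdriver]]
  ## GCP project the metrics are delivered to
  project = "my-gcp-project"
  ## Metrics are grouped under custom.googleapis.com/<namespace>/...
  namespace = "telegraf"
  ## Monitored resource type; defaults to "global"
  resource_type = "global"
  ## Additional resource labels; project_id is set from "project" by default
  [outputs.stackdriver.resource_labels]
    location = "us-east1"
```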
## Configuration

@ -38,29 +41,31 @@ Additional resource labels can be configured by `resource_labels`. By default th
## Restrictions

Stackdriver does not support string values in custom metrics; any string
fields will not be written.
The Stackdriver API does not allow writing points which are out of order,
older than 24 hours, or with a resolution greater than one point per minute.
Since Telegraf writes the newest points first and moves backwards through the
metric buffer, it may not be possible to write historical data after an
interruption.
Points collected with greater than 1 minute precision may need to be
aggregated before they can be written. Consider using the [basicstats][]
aggregator to do this.
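For example, a sketch that averages points into one-minute buckets before they
reach this output, assuming the standard aggregator options:

```toml
[[aggregators.basicstats]]
  ## Aggregate over one-minute windows so Stackdriver accepts the points
  period = "1m"
  ## Emit only the aggregate, not the raw points
  drop_original = true
  stats = ["mean"]
```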
Histogram / distribution and delta metrics are not yet supported. These will
be dropped silently unless debugging is on.
Note that the plugin keeps an in-memory cache of the start times and last
observed values of all COUNTER metrics in order to comply with the
requirements of the Stackdriver API. This cache is not GCed: if you remove a
large number of counters from the input side, you may wish to restart Telegraf
to clear it.
[basicstats]: /plugins/aggregators/basicstats/README.md

[stackdriver]: https://cloud.google.com/monitoring/api/v3/

[authentication]: https://cloud.google.com/docs/authentication/getting-started

[pricing]: https://cloud.google.com/stackdriver/pricing#google-clouds-operations-suite-pricing
@ -1,7 +1,7 @@

# Sumo Logic Output Plugin

This plugin sends metrics to [Sumo Logic HTTP Source][http-source] in HTTP
messages, encoded using one of the output data formats.

Telegraf minimum version: Telegraf 1.16.0
@ -12,6 +12,8 @@ by Sumologic HTTP Source:

* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2`
* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus`

[http-source]: https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source
## Configuration

```toml
@ -3,10 +3,11 @@

The syslog output plugin sends syslog messages transmitted over
[UDP](https://tools.ietf.org/html/rfc5426) or
[TCP](https://tools.ietf.org/html/rfc6587) or
[TLS](https://tools.ietf.org/html/rfc5425), with or without the octet counting
framing.

Syslog messages are formatted according to
[RFC 5424](https://tools.ietf.org/html/rfc5424).
## Configuration

@ -94,7 +95,8 @@ Syslog messages are formatted according to
The output plugin expects syslog metrics tags and fields to match up with the
ones created in the [syslog input][].

The following table shows the metric tags, fields, and defaults used to format
syslog messages.

| Syslog field | Metric Tag | Metric Field | Default value |
| --- | --- | --- | --- |
@ -126,28 +126,39 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service.

### Batching

Timestream WriteInputRequest.CommonAttributes are used to efficiently write
data to Timestream.
### Multithreading

A single thread is used to write the data to Timestream, following the general
plugin design pattern.
### Errors

If Telegraf attempts to write a field type that is unsupported by Timestream,
the field is dropped and an error is emitted to the logs.
In case of receiving ThrottlingException or InternalServerException from
Timestream, the errors are returned to Telegraf, in which case Telegraf will
keep the metrics in the buffer and retry writing those metrics on the next
flush.
In case of receiving ResourceNotFoundException:

- If `create_table_if_not_exists` configuration is set to `true`, the plugin
  will try to create the appropriate table and write the records again, if the
  table creation was successful (see the sketch after this list).
- If `create_table_if_not_exists` configuration is set to `false`, the records
  are dropped, and an error is emitted to the logs.
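A minimal sketch enabling automatic table creation; the database name is
illustrative, and connection settings such as the AWS region are omitted:

```toml
[[outputs.timestream]]
  ## Timestream database to write to
  database_name = "telegraf"
  ## Create the target table on ResourceNotFoundException and retry the write
  create_table_if_not_exists = true
```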
In case of receiving any other AWS error from Timestream, the records are
dropped, and an error is emitted to the logs, as retrying such requests isn't
likely to succeed.
### Logging

Turn on the debug flag in Telegraf to turn on detailed logging (including
records being written to Timestream).
### Testing
@ -1,6 +1,7 @@

# Wavefront Output Plugin

This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in
Wavefront data format over TCP.

## Configuration

@ -60,23 +61,28 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro
### Convert Path & Metric Separator

If the `convert_path` option is true, any `_` in metric and field names will
be converted to the `metric_separator` value. By default, to ease metrics
browsing in the Wavefront UI, the `convert_path` option is true and
`metric_separator` is `.` (dot). Default integrations within Wavefront expect
these values to be set to their defaults; however, if converting from another
platform it may be desirable to change these defaults.
### Use Regex

Most illegal characters in the metric name are automatically converted to `-`.
The `use_regex` setting can be used to ensure all illegal characters are
properly handled, but can lead to performance degradation.
### Source Override

Often when collecting metrics from another system, you want to use the target
system as the source, not the one running Telegraf. Many Telegraf plugins will
identify the target source with a tag. The tag name can vary for different
plugins. The `source_override` option will use the value specified in any of
the listed tags if found. The tag names are checked in the same order as
listed, and if found, the other tags will not be checked. If none of the
specified tags are found, the default host tag will be used to identify the
source of the metric.
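For example, a sketch that prefers a `hostname` tag, then an `snmp_host` tag,
as the Wavefront source; the tag names are illustrative:

```toml
[[outputs.wavefront]]
  ## Checked in order; the first tag present supplies the source
  source_override = ["hostname", "snmp_host"]
```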
### Wavefront Data format

@ -86,9 +92,11 @@ The expected input for Wavefront is specified in the following way:
<metric> <value> [<timestamp>] <source|host>=<sourceTagValue> [tagk1=tagv1 ...tagkN=tagvN]
```

More information about the Wavefront data format is available
[here](https://community.wavefront.com/docs/DOC-1031).
### Allowed values for metrics

Wavefront allows `integers` and `floats` as input values. By default it also
maps `bool` values to numeric, false -> 0.0, true -> 1.0. To map `strings` use
the [enum](../../processors/enum) processor plugin.
@ -2,7 +2,9 @@

This plugin can write to a WebSocket endpoint.

It can output data in any of the [supported output formats][formats].

[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
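For instance, a sketch sending metrics in InfluxDB line protocol to a
WebSocket endpoint; the URL is illustrative:

```toml
[[outputs.websocket]]
  ## WebSocket URL to connect to
  url = "ws://127.0.0.1:3000/telegraf"
  ## Any of the supported output data formats
  data_format = "influx"
```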
## Configuration

@ -1,6 +1,7 @@
# Yandex Cloud Monitoring Output Plugin

This plugin will send custom metrics to [Yandex Cloud
Monitoring](https://cloud.yandex.com/services/monitoring).

## Configuration

@ -21,6 +22,7 @@ This plugin will send custom metrics to [Yandex Cloud Monitoring](https://cloud.
This plugin currently supports only YC.Compute metadata-based authentication.

When the plugin is running inside a YC.Compute instance it will take the IAM
token and Folder ID from the instance metadata.

Other authentication methods will be added later.