From 0d8d118319d4de8223803b621a3427a420cfc4d6 Mon Sep 17 00:00:00 2001 From: Joshua Powers Date: Wed, 24 Nov 2021 11:47:33 -0700 Subject: [PATCH] chore: clean up all markdown lint errors in output plugins (#10159) --- plugins/outputs/amon/README.md | 2 +- plugins/outputs/amqp/README.md | 10 +- .../outputs/application_insights/README.md | 15 +- plugins/outputs/azure_data_explorer/README.md | 168 +++++++++--------- plugins/outputs/azure_monitor/README.md | 55 +++--- plugins/outputs/bigquery/README.md | 13 +- plugins/outputs/cloud_pubsub/README.md | 9 +- plugins/outputs/cloudwatch/README.md | 31 ++-- plugins/outputs/cloudwatch_logs/README.md | 45 ++--- plugins/outputs/cratedb/README.md | 1 - plugins/outputs/datadog/README.md | 5 +- plugins/outputs/discard/README.md | 2 +- plugins/outputs/dynatrace/README.md | 8 +- plugins/outputs/elasticsearch/README.md | 18 +- plugins/outputs/exec/README.md | 6 +- plugins/outputs/execd/README.md | 4 +- plugins/outputs/file/README.md | 2 +- plugins/outputs/graphite/README.md | 2 +- plugins/outputs/graylog/README.md | 2 +- plugins/outputs/health/README.md | 7 +- plugins/outputs/http/README.md | 4 +- plugins/outputs/influxdb/README.md | 5 +- plugins/outputs/influxdb_v2/README.md | 6 +- plugins/outputs/instrumental/README.md | 2 +- plugins/outputs/kafka/README.md | 7 +- plugins/outputs/kinesis/README.md | 17 +- plugins/outputs/librato/README.md | 2 +- plugins/outputs/logzio/README.md | 8 +- plugins/outputs/loki/README.md | 4 +- plugins/outputs/mongodb/README.md | 12 +- plugins/outputs/mqtt/README.md | 15 +- plugins/outputs/newrelic/README.md | 7 +- plugins/outputs/nsq/README.md | 2 +- plugins/outputs/opentelemetry/README.md | 7 +- plugins/outputs/opentsdb/README.md | 48 +++-- plugins/outputs/prometheus_client/README.md | 4 +- plugins/outputs/riemann/README.md | 20 ++- plugins/outputs/sensu/README.md | 46 ++--- plugins/outputs/signalfx/README.md | 3 +- plugins/outputs/sql/README.md | 2 +- plugins/outputs/stackdriver/README.md | 4 +- plugins/outputs/sumologic/README.md | 12 +- plugins/outputs/syslog/README.md | 5 +- plugins/outputs/timestream/README.md | 25 +-- plugins/outputs/warp10/README.md | 4 +- plugins/outputs/wavefront/README.md | 41 ++--- plugins/outputs/websocket/README.md | 2 +- .../outputs/yandex_cloud_monitoring/README.md | 5 +- 48 files changed, 375 insertions(+), 349 deletions(-) diff --git a/plugins/outputs/amon/README.md b/plugins/outputs/amon/README.md index 3860e4371..57ecf2e18 100644 --- a/plugins/outputs/amon/README.md +++ b/plugins/outputs/amon/README.md @@ -6,4 +6,4 @@ for the account. If the point value being sent cannot be converted to a float64, the metric is skipped. -Metrics are grouped by converting any `_` characters to `.` in the Point Name. \ No newline at end of file +Metrics are grouped by converting any `_` characters to `.` in the Point Name. diff --git a/plugins/outputs/amqp/README.md b/plugins/outputs/amqp/README.md index 04715f8e3..1c164a10e 100644 --- a/plugins/outputs/amqp/README.md +++ b/plugins/outputs/amqp/README.md @@ -5,10 +5,12 @@ This plugin writes to a AMQP 0-9-1 Exchange, a prominent implementation of this This plugin does not bind the exchange to a queue. 
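Because the plugin only publishes to the exchange, a queue still has to be declared and bound to that exchange before anything can consume the metrics. As a rough sketch only (the queue name and routing key below are placeholders, `telegraf` is the plugin's default exchange, and your tooling may differ), this could be done with the `rabbitmqadmin` utility that ships with the RabbitMQ management plugin:

```text
# declare a queue and bind it to the exchange the plugin publishes to
rabbitmqadmin declare queue name=telegraf-metrics durable=true
rabbitmqadmin declare binding source=telegraf destination=telegraf-metrics routing_key=telegraf
```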
For an introduction to AMQP see: -- https://www.rabbitmq.com/tutorials/amqp-concepts.html -- https://www.rabbitmq.com/getstarted.html -### Configuration: +- [amqp: concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html) +- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html) + +## Configuration + ```toml # Publishes metrics to an AMQP broker [[outputs.amqp]] @@ -107,7 +109,7 @@ For an introduction to AMQP see: # data_format = "influx" ``` -#### Routing +### Routing If `routing_tag` is set, and the tag is defined on the metric, the value of the tag is used as the routing key. Otherwise the value of `routing_key` is diff --git a/plugins/outputs/application_insights/README.md b/plugins/outputs/application_insights/README.md index b23f1affe..4beeb1ec8 100644 --- a/plugins/outputs/application_insights/README.md +++ b/plugins/outputs/application_insights/README.md @@ -2,12 +2,13 @@ This plugin writes telegraf metrics to [Azure Application Insights](https://azure.microsoft.com/en-us/services/application-insights/). -### Configuration: +## Configuration + ```toml [[outputs.application_insights]] ## Instrumentation key of the Application Insights resource. instrumentation_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx" - + ## Regions that require endpoint modification https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-endpoints # endpoint_url = "https://dc.services.visualstudio.com/v2/track" @@ -26,21 +27,21 @@ This plugin writes telegraf metrics to [Azure Application Insights](https://azur # "ai.cloud.roleInstance" = "kubernetes_pod_name" ``` - -### Metric Encoding: +## Metric Encoding For each field an Application Insights Telemetry record is created named based on the measurement name and field. - **Example:** Create the telemetry records `foo_first` and `foo_second`: -``` + +```text foo,host=a first=42,second=43 1525293034000000000 ``` In the special case of a single field named `value`, a single telemetry record is created named using only the measurement name **Example:** Create a telemetry record `bar`: -``` + +```text bar,host=a value=42 1525293034000000000 ``` diff --git a/plugins/outputs/azure_data_explorer/README.md b/plugins/outputs/azure_data_explorer/README.md index db2aba469..96193f7fc 100644 --- a/plugins/outputs/azure_data_explorer/README.md +++ b/plugins/outputs/azure_data_explorer/README.md @@ -1,14 +1,14 @@ # Azure Data Explorer output plugin -This plugin writes data collected by any of the Telegraf input plugins to [Azure Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/). +This plugin writes data collected by any of the Telegraf input plugins to [Azure Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/). Azure Data Explorer is a distributed, columnar store, purpose built for any type of logs, metrics and time series data. -## Pre-requisites: +## Pre-requisites + - [Create Azure Data Explorer cluster and database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal) - VM/compute or container to host Telegraf - it could be hosted locally where an app/service to be monitored is deployed or remotely on a dedicated monitoring compute/container. - -## Configuration: +## Configuration ```toml [[outputs.azure_data_explorer]] @@ -23,16 +23,16 @@ Azure Data Explorer is a distributed, columnar store, purpose built for any type ## Timeout for Azure Data Explorer operations # timeout = "20s" - - ## Type of metrics grouping used when pushing to Azure Data Explorer. 
- ## Default is "TablePerMetric" for one table per different metric. + + ## Type of metrics grouping used when pushing to Azure Data Explorer. + ## Default is "TablePerMetric" for one table per different metric. ## For more information, please check the plugin README. # metrics_grouping_type = "TablePerMetric" - + ## Name of the single table to store all the metrics (Only needed if metrics_grouping_type is "SingleTable"). # table_name = "" - ## Creates tables and relevant mapping if set to true(default). + ## Creates tables and relevant mapping if set to true(default). ## Skips table and mapping creation if set to false, this is useful for running Telegraf with the lowest possible permissions i.e. table ingestor role. # create_tables = true ``` @@ -47,60 +47,59 @@ The plugin will group the metrics by the metric name, and will send each group o The table name will match the `name` property of the metric, this means that the name of the metric should comply with the Azure Data Explorer table naming constraints in case you plan to add a prefix to the metric name. - ### SingleTable -The plugin will send all the metrics received to a single Azure Data Explorer table. The name of the table must be supplied via `table_name` in the config file. If the table doesn't exist the plugin will create the table, if the table exists then the plugin will try to merge the Telegraf metric schema to the existing table. For more information about the merge process check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command). - +The plugin will send all the metrics received to a single Azure Data Explorer table. The name of the table must be supplied via `table_name` in the config file. If the table doesn't exist the plugin will create the table, if the table exists then the plugin will try to merge the Telegraf metric schema to the existing table. For more information about the merge process check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command). ## Tables Schema The schema of the Azure Data Explorer table will match the structure of the Telegraf `Metric` object. The corresponding Azure Data Explorer command generated by the plugin would be like the following: -``` + +```text .create-merge table ['table-name'] (['fields']:dynamic, ['name']:string, ['tags']:dynamic, ['timestamp']:datetime) ``` The corresponding table mapping would be like the following: -``` + +```text .create-or-alter table ['table-name'] ingestion json mapping 'table-name_mapping' '[{"column":"fields", "Properties":{"Path":"$[\'fields\']"}},{"column":"name", "Properties":{"Path":"$[\'name\']"}},{"column":"tags", "Properties":{"Path":"$[\'tags\']"}},{"column":"timestamp", "Properties":{"Path":"$[\'timestamp\']"}}]' ``` -**Note**: This plugin will automatically create Azure Data Explorer tables and corresponding table mapping as per the above mentioned commands. +**Note**: This plugin will automatically create Azure Data Explorer tables and corresponding table mapping as per the above mentioned commands. ## Authentiation ### Supported Authentication Methods -This plugin provides several types of authentication. The plugin will check the existence of several specific environment variables, and consequently will choose the right method. + +This plugin provides several types of authentication. 
The plugin will check the existence of several specific environment variables, and consequently will choose the right method. These methods are: - 1. AAD Application Tokens (Service Principals with secrets or certificates). For guidance on how to create and register an App in Azure Active Directory check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application), and for more information on the Service Principals check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals). +2. AAD User Tokens -2. AAD User Tokens - - Allows Telegraf to authenticate like a user. This method is mainly used - for development purposes only. + - Allows Telegraf to authenticate like a user. This method is mainly used for development purposes only. 3. Managed Service Identity (MSI) token - - If you are running Telegraf from Azure VM or infrastructure, then this is the prefered authentication method. + + - If you are running Telegraf from Azure VM or infrastructure, then this is the prefered authentication method. [principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects -Whichever method, the designated Principal needs to be assigned the `Database User` role on the Database level in the Azure Data Explorer. This role will -allow the plugin to create the required tables and ingest data into it. +Whichever method, the designated Principal needs to be assigned the `Database User` role on the Database level in the Azure Data Explorer. This role will +allow the plugin to create the required tables and ingest data into it. If `create_tables=false` then the designated principal only needs the `Database Ingestor` role at least. - -### Configurations of the chosen Authentication Method +### Configurations of the chosen Authentication Method The plugin will authenticate using the first available of the following configurations, **it's important to understand that the assessment, and consequently choosing the authentication method, will happen in order as below**: 1. **Client Credentials**: Azure AD Application ID and Secret. - + Set the following environment variables: - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. @@ -130,73 +129,78 @@ following configurations, **it's important to understand that the assessment, an [msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview [arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview - ## Querying data collected in Azure Data Explorer + Examples of data transformations and queries that would be useful to gain insights - -1. 
**Data collected using SQL input plugin** - - Sample SQL metrics data - - name | tags | timestamp | fields - -----|------|-----------|------- - sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149} - sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464} +### Using SQL input plugin +Sample SQL metrics data - - Since collected metrics object is of complex type so "fields" and "tags" are stored as dynamic data type, multiple ways to query this data- +name | tags | timestamp | fields +-----|------|-----------|------- +sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149} +sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464} - - **Query JSON attributes directly**: Azure Data Explorer provides an ability to query JSON data in raw format without parsing it, so JSON attributes can be queried directly in following way - - ``` - Tablename - | where name == "sqlserver_azure_db_resource_stats" and todouble(fields.avg_cpu_percent) > 7 - ``` - ``` - Tablename - | distinct tostring(tags.database_name) - ``` - **Note** - This approach could have performance impact in case of large volumes of data, use belwo mentioned approach for such cases. +Since collected metrics object is of complex type so "fields" and "tags" are stored as dynamic data type, multiple ways to query this data- - - **Use [Update policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy)**: Transform dynamic data type columns using update policy. This is the recommended performant way for querying over large volumes of data compared to querying directly over JSON attributes. +1. 
Query JSON attributes directly: Azure Data Explorer provides an ability to query JSON data in raw format without parsing it, so JSON attributes can be queried directly in following way: - ``` - // Function to transform data - .create-or-alter function Transform_TargetTableName() { - SourceTableName - | mv-apply fields on (extend key = tostring(bag_keys(fields)[0])) - | project fieldname=key, value=todouble(fields[key]), name, tags, timestamp - } + ```text + Tablename + | where name == "sqlserver_azure_db_resource_stats" and todouble(fields.avg_cpu_percent) > 7 + ``` - // Create destination table with above query's results schema (if it doesn't exist already) - .set-or-append TargetTableName <| Transform_TargetTableName() | limit 0 + ```text + Tablename + | distinct tostring(tags.database_name) + ``` - // Apply update policy on destination table - .alter table TargetTableName policy update - @'[{"IsEnabled": true, "Source": "SourceTableName", "Query": "Transform_TargetTableName()", "IsTransactional": true, "PropagateIngestionProperties": false}]' - ``` + **Note** - This approach could have performance impact in case of large volumes of data, use belwo mentioned approach for such cases. -2. **Data collected using syslog input plugin** - - Sample syslog data - +1. Use [Update policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy)**: Transform dynamic data type columns using update policy. This is the recommended performant way for querying over large volumes of data compared to querying directly over JSON attributes: - name | tags | timestamp | fields - -----|------|-----------|------- - syslog|{"appname":"azsecmond","facility":"user","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:36:44Z|{"facility_code":1,"message":" 2021/09/20 14:36:44.890110 Failed to connect to mdsd: dial unix /var/run/mdsd/default_djson.socket: connect: no such file or directory","procid":"2184","severity_code":6,"timestamp":"1632148604890477000","version":1} - syslog|{"appname":"CRON","facility":"authpriv","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:37:01Z|{"facility_code":10,"message":" pam_unix(cron:session): session opened for user root by (uid=0)","procid":"26446","severity_code":6,"timestamp":"1632148621120781000","version":1} - - There are multiple ways to flatten dynamic columns using 'extend' or 'bag_unpack' operator. You can use either of these ways in above mentioned update policy function - 'Transform_TargetTableName()' + ```json + // Function to transform data + .create-or-alter function Transform_TargetTableName() { + SourceTableName + | mv-apply fields on (extend key = tostring(bag_keys(fields)[0])) + | project fieldname=key, value=todouble(fields[key]), name, tags, timestamp + } - - Use [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator) operator - This is the recommended approach compared to 'bag_unpack' as it is faster and robust. Even if schema changes, it will not break queries or dashboards. 
- ``` - Tablenmae - | extend facility_code=toint(fields.facility_code), message=tostring(fields.message), procid= tolong(fields.procid), severity_code=toint(fields.severity_code), - SysLogTimestamp=unixtime_nanoseconds_todatetime(tolong(fields.timestamp)), version= todouble(fields.version), - appname= tostring(tags.appname), facility= tostring(tags.facility),host= tostring(tags.host), hostname=tostring(tags.hostname), severity=tostring(tags.severity) - | project-away fields, tags - ``` - - Use [bag_unpack plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin) to unpack the dynamic type columns automatically. This method could lead to issues if source schema changes as its dynamically expanding columns. - ``` - Tablename - | evaluate bag_unpack(tags, columnsConflict='replace_source') - | evaluate bag_unpack(fields, columnsConflict='replace_source') - ``` + // Create destination table with above query's results schema (if it doesn't exist already) + .set-or-append TargetTableName <| Transform_TargetTableName() | limit 0 + // Apply update policy on destination table + .alter table TargetTableName policy update + @'[{"IsEnabled": true, "Source": "SourceTableName", "Query": "Transform_TargetTableName()", "IsTransactional": true, "PropagateIngestionProperties": false}]' + ``` + +### Using syslog input plugin + +Sample syslog data - + +name | tags | timestamp | fields +-----|------|-----------|------- +syslog|{"appname":"azsecmond","facility":"user","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:36:44Z|{"facility_code":1,"message":" 2021/09/20 14:36:44.890110 Failed to connect to mdsd: dial unix /var/run/mdsd/default_djson.socket: connect: no such file or directory","procid":"2184","severity_code":6,"timestamp":"1632148604890477000","version":1} +syslog|{"appname":"CRON","facility":"authpriv","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:37:01Z|{"facility_code":10,"message":" pam_unix(cron:session): session opened for user root by (uid=0)","procid":"26446","severity_code":6,"timestamp":"1632148621120781000","version":1} + +There are multiple ways to flatten dynamic columns using 'extend' or 'bag_unpack' operator. You can use either of these ways in above mentioned update policy function - 'Transform_TargetTableName()' + +- Use [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator) operator - This is the recommended approach compared to 'bag_unpack' as it is faster and robust. Even if schema changes, it will not break queries or dashboards. + + ```text + Tablenmae + | extend facility_code=toint(fields.facility_code), message=tostring(fields.message), procid= tolong(fields.procid), severity_code=toint(fields.severity_code), + SysLogTimestamp=unixtime_nanoseconds_todatetime(tolong(fields.timestamp)), version= todouble(fields.version), + appname= tostring(tags.appname), facility= tostring(tags.facility),host= tostring(tags.host), hostname=tostring(tags.hostname), severity=tostring(tags.severity) + | project-away fields, tags + ``` + +- Use [bag_unpack plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin) to unpack the dynamic type columns automatically. This method could lead to issues if source schema changes as its dynamically expanding columns. 
+ + ```text + Tablename + | evaluate bag_unpack(tags, columnsConflict='replace_source') + | evaluate bag_unpack(fields, columnsConflict='replace_source') + ``` diff --git a/plugins/outputs/azure_monitor/README.md b/plugins/outputs/azure_monitor/README.md index 9d835c1eb..8f7bbb9cb 100644 --- a/plugins/outputs/azure_monitor/README.md +++ b/plugins/outputs/azure_monitor/README.md @@ -14,7 +14,7 @@ metric is written as the Azure Monitor metric name. All field values are written as a summarized set that includes: min, max, sum, count. Tags are written as a dimension on each Azure Monitor metric. -### Configuration: +## Configuration ```toml [[outputs.azure_monitor]] @@ -47,12 +47,12 @@ written as a dimension on each Azure Monitor metric. # endpoint_url = "https://monitoring.core.usgovcloudapi.net" ``` -### Setup +## Setup 1. [Register the `microsoft.insights` resource provider in your Azure subscription][resource provider]. -2. If using Managed Service Identities to authenticate an Azure VM, +1. If using Managed Service Identities to authenticate an Azure VM, [enable system-assigned managed identity][enable msi]. -2. Use a region that supports Azure Monitor Custom Metrics, +1. Use a region that supports Azure Monitor Custom Metrics, For regions with Custom Metrics support, an endpoint will be available with the format `https://.monitoring.azure.com`. @@ -75,17 +75,18 @@ This plugin uses one of several different types of authenticate methods. The preferred authentication methods are different from the *order* in which each authentication is checked. Here are the preferred authentication methods: -1. Managed Service Identity (MSI) token - - This is the preferred authentication method. Telegraf will automatically - authenticate using this method when running on Azure VMs. +1. Managed Service Identity (MSI) token: This is the preferred authentication method. Telegraf will automatically authenticate using this method when running on Azure VMs. 2. AAD Application Tokens (Service Principals) - - Primarily useful if Telegraf is writing metrics for other resources. + + * Primarily useful if Telegraf is writing metrics for other resources. [More information][principal]. - - A Service Principal or User Principal needs to be assigned the `Monitoring + * A Service Principal or User Principal needs to be assigned the `Monitoring Metrics Publisher` role on the resource(s) metrics will be emitted against. + 3. AAD User Tokens (User Principals) - - Allows Telegraf to authenticate like a user. It is best to use this method + + * Allows Telegraf to authenticate like a user. It is best to use this method for development. [principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects @@ -93,30 +94,28 @@ authentication is checked. Here are the preferred authentication methods: The plugin will authenticate using the first available of the following configurations: -1. **Client Credentials**: Azure AD Application ID and Secret. +1. **Client Credentials**: Azure AD Application ID and Secret. Set the following environment variables: - Set the following environment variables: + * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. + * `AZURE_CLIENT_ID`: Specifies the app client ID to use. + * `AZURE_CLIENT_SECRET`: Specifies the app secret to use. - - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. - - `AZURE_CLIENT_ID`: Specifies the app client ID to use. - - `AZURE_CLIENT_SECRET`: Specifies the app secret to use. +1. 
**Client Certificate**: Azure AD Application ID and X.509 Certificate. -2. **Client Certificate**: Azure AD Application ID and X.509 Certificate. + * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. + * `AZURE_CLIENT_ID`: Specifies the app client ID to use. + * `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use. + * `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use. - - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. - - `AZURE_CLIENT_ID`: Specifies the app client ID to use. - - `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use. - - `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use. - -3. **Resource Owner Password**: Azure AD User and Password. This grant type is +1. **Resource Owner Password**: Azure AD User and Password. This grant type is *not recommended*, use device login instead if you need interactive login. - - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. - - `AZURE_CLIENT_ID`: Specifies the app client ID to use. - - `AZURE_USERNAME`: Specifies the username to use. - - `AZURE_PASSWORD`: Specifies the password to use. + * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate. + * `AZURE_CLIENT_ID`: Specifies the app client ID to use. + * `AZURE_USERNAME`: Specifies the username to use. + * `AZURE_PASSWORD`: Specifies the password to use. -4. **Azure Managed Service Identity**: Delegate credential management to the +1. **Azure Managed Service Identity**: Delegate credential management to the platform. Requires that code is running in Azure, e.g. on a VM. All configuration is handled by Azure. See [Azure Managed Service Identity][msi] for more details. Only available when using the [Azure Resource Manager][arm]. @@ -127,7 +126,7 @@ following configurations: **Note: As shown above, the last option (#4) is the preferred way to authenticate when running Telegraf on Azure VMs. -### Dimensions +## Dimensions Azure Monitor only accepts values with a numeric type. The plugin will drop fields with a string type by default. The plugin can set all string type fields diff --git a/plugins/outputs/bigquery/README.md b/plugins/outputs/bigquery/README.md index 9515711d5..8ca265cc0 100644 --- a/plugins/outputs/bigquery/README.md +++ b/plugins/outputs/bigquery/README.md @@ -1,11 +1,11 @@ # Google BigQuery Output Plugin -This plugin writes to the [Google Cloud BigQuery](https://cloud.google.com/bigquery) and requires [authentication](https://cloud.google.com/bigquery/docs/authentication) +This plugin writes to the [Google Cloud BigQuery](https://cloud.google.com/bigquery) and requires [authentication](https://cloud.google.com/bigquery/docs/authentication) with Google Cloud using either a service account or user credentials. Be aware that this plugin accesses APIs that are [chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs. -### Configuration +## Configuration ```toml [[outputs.bigquery]] @@ -21,17 +21,19 @@ Be aware that this plugin accesses APIs that are [chargeable](https://cloud.goog ## Character to replace hyphens on Metric name # replace_hyphen_to = "_" ``` + Requires `project` to specify where BigQuery entries will be persisted. Requires `dataset` to specify under which BigQuery dataset the corresponding metrics tables reside. -Each metric should have a corresponding table to BigQuery. +Each metric should have a corresponding table to BigQuery. 
The schema of the table on BigQuery: + * Should contain the field `timestamp` which is the timestamp of a telegraph metrics * Should contain the metric's tags with the same name and the column type should be set to string. * Should contain the metric's fields with the same name and the column type should match the field type. -### Restrictions +## Restrictions Avoid hyphens on BigQuery tables, underlying SDK cannot handle streaming inserts to Table with hyphens. @@ -41,6 +43,7 @@ In case of a metric with hyphen by default hyphens shall be replaced with unders This can be altered using the `replace_hyphen_to` configuration property. Available data type options are: + * integer * float or long * string @@ -50,5 +53,5 @@ All field naming restrictions that apply to BigQuery should apply to the measure Tables on BigQuery should be created beforehand and they are not created during persistence -Pay attention to the column `timestamp` since it is reserved upfront and cannot change. +Pay attention to the column `timestamp` since it is reserved upfront and cannot change. If partitioning is required make sure it is applied beforehand. diff --git a/plugins/outputs/cloud_pubsub/README.md b/plugins/outputs/cloud_pubsub/README.md index d3d5e2fe3..6274e1dac 100644 --- a/plugins/outputs/cloud_pubsub/README.md +++ b/plugins/outputs/cloud_pubsub/README.md @@ -3,8 +3,7 @@ The GCP PubSub plugin publishes metrics to a [Google Cloud PubSub][pubsub] topic as one of the supported [output data formats][]. - -### Configuration +## Configuration This section contains the default TOML to configure the plugin. You can generate it using `telegraf --usage cloud_pubsub`. @@ -24,9 +23,9 @@ generate it using `telegraf --usage cloud_pubsub`. ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md data_format = "influx" - ## Optional. Filepath for GCP credentials JSON file to authorize calls to - ## PubSub APIs. If not set explicitly, Telegraf will attempt to use - ## Application Default Credentials, which is preferred. + ## Optional. Filepath for GCP credentials JSON file to authorize calls to + ## PubSub APIs. If not set explicitly, Telegraf will attempt to use + ## Application Default Credentials, which is preferred. # credentials_file = "path/to/my/creds.json" ## Optional. If true, will send all metrics per write in one PubSub message. diff --git a/plugins/outputs/cloudwatch/README.md b/plugins/outputs/cloudwatch/README.md index 56436c3c5..ff62726de 100644 --- a/plugins/outputs/cloudwatch/README.md +++ b/plugins/outputs/cloudwatch/README.md @@ -1,4 +1,4 @@ -## Amazon CloudWatch Output for Telegraf +# Amazon CloudWatch Output for Telegraf This plugin will send metrics to Amazon CloudWatch. @@ -6,13 +6,14 @@ This plugin will send metrics to Amazon CloudWatch. This plugin uses a credential chain for Authentication with the CloudWatch API endpoint. In the following order the plugin will attempt to authenticate. + 1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified -2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) -3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes -4. Shared profile from `profile` attribute -5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) -6. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) -7. 
[EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) +1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) +1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes +1. Shared profile from `profile` attribute +1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) +1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) +1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If left empty, the current timestamp will be used. @@ -31,6 +32,7 @@ must be configured. The region is the Amazon region that you wish to connect to. Examples include but are not limited to: + * us-west-1 * us-west-2 * us-east-1 @@ -43,13 +45,14 @@ The namespace used for AWS CloudWatch metrics. ### write_statistics -If you have a large amount of metrics, you should consider to send statistic -values instead of raw metrics which could not only improve performance but -also save AWS API cost. If enable this flag, this plugin would parse the required -[CloudWatch statistic fields](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet) -(count, min, max, and sum) and send them to CloudWatch. You could use `basicstats` -aggregator to calculate those fields. If not all statistic fields are available, +If you have a large amount of metrics, you should consider to send statistic +values instead of raw metrics which could not only improve performance but +also save AWS API cost. If enable this flag, this plugin would parse the required +[CloudWatch statistic fields](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet) +(count, min, max, and sum) and send them to CloudWatch. You could use `basicstats` +aggregator to calculate those fields. If not all statistic fields are available, all fields would still be sent as raw metrics. ### high_resolution_metrics -Enable high resolution metrics (1 second precision) instead of standard ones (60 seconds precision) \ No newline at end of file + +Enable high resolution metrics (1 second precision) instead of standard ones (60 seconds precision) diff --git a/plugins/outputs/cloudwatch_logs/README.md b/plugins/outputs/cloudwatch_logs/README.md index ab745d877..9898f9e84 100644 --- a/plugins/outputs/cloudwatch_logs/README.md +++ b/plugins/outputs/cloudwatch_logs/README.md @@ -1,4 +1,4 @@ -## Amazon CloudWatch Logs Output for Telegraf +# Amazon CloudWatch Logs Output for Telegraf This plugin will send logs to Amazon CloudWatch. @@ -6,21 +6,24 @@ This plugin will send logs to Amazon CloudWatch. This plugin uses a credential chain for Authentication with the CloudWatch Logs API endpoint. In the following order the plugin will attempt to authenticate. -1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified -2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) -3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes -4. Shared profile from `profile` attribute -5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) -6. 
[Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) -7. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) -The IAM user needs the following permissions ( https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html): -- `logs:DescribeLogGroups` - required for check if configured log group exist -- `logs:DescribeLogStreams` - required to view all log streams associated with a log group. -- `logs:CreateLogStream` - required to create a new log stream in a log group.) -- `logs:PutLogEvents` - required to upload a batch of log events into log stream. +1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified +1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) +1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes +1. Shared profile from `profile` attribute +1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) +1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) +1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) + +The IAM user needs the following permissions (see this [reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) for more): + +- `logs:DescribeLogGroups` - required for check if configured log group exist +- `logs:DescribeLogStreams` - required to view all log streams associated with a log group. +- `logs:CreateLogStream` - required to create a new log stream in a log group.) +- `logs:PutLogEvents` - required to upload a batch of log events into log stream. ## Config + ```toml [[outputs.cloudwatch_logs]] ## The region is the Amazon region that you wish to connect to. @@ -50,7 +53,7 @@ The IAM user needs the following permissions ( https://docs.aws.amazon.com/Amazo #role_session_name = "" #profile = "" #shared_credential_file = "" - + ## Endpoint to make request against, the correct endpoint is automatically ## determined and this option should only be set if you wish to override the ## default. @@ -59,24 +62,24 @@ The IAM user needs the following permissions ( https://docs.aws.amazon.com/Amazo ## Cloud watch log group. Must be created in AWS cloudwatch logs upfront! ## For example, you can specify the name of the k8s cluster here to group logs from all cluster in oine place - log_group = "my-group-name" - + log_group = "my-group-name" + ## Log stream in log group ## Either log group name or reference to metric attribute, from which it can be parsed: ## tag: or field:. If log stream is not exist, it will be created. - ## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream) + ## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream) ## you need to put in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855) log_stream = "tag:location" - + ## Source of log data - metric name ## specify the name of the metric, from which the log data should be retrieved. 
## I.e., if you are using docker_log plugin to stream logs from container, then ## specify log_data_metric_name = "docker_log" log_data_metric_name = "docker_log" - + ## Specify from which metric attribute the log data should be retrieved: ## tag: or field:. ## I.e., if you are using docker_log plugin to stream logs from container, then - ## specify log_data_source = "field:message" + ## specify log_data_source = "field:message" log_data_source = "field:message" -``` \ No newline at end of file +``` diff --git a/plugins/outputs/cratedb/README.md b/plugins/outputs/cratedb/README.md index 11214092d..63a9ba4f9 100644 --- a/plugins/outputs/cratedb/README.md +++ b/plugins/outputs/cratedb/README.md @@ -6,7 +6,6 @@ This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL protocol] The plugin requires a table with the following schema. - ```sql CREATE TABLE my_metrics ( "hash_id" LONG INDEX OFF, diff --git a/plugins/outputs/datadog/README.md b/plugins/outputs/datadog/README.md index f9dd3fb0e..dc709449b 100644 --- a/plugins/outputs/datadog/README.md +++ b/plugins/outputs/datadog/README.md @@ -3,8 +3,7 @@ This plugin writes to the [Datadog Metrics API][metrics] and requires an `apikey` which can be obtained [here][apikey] for the account. - -### Configuration +## Configuration ```toml [[outputs.datadog]] @@ -21,7 +20,7 @@ This plugin writes to the [Datadog Metrics API][metrics] and requires an # http_proxy_url = "http://localhost:8888" ``` -### Metrics +## Metrics Datadog metric names are formed by joining the Telegraf metric name and the field key with a `.` character. diff --git a/plugins/outputs/discard/README.md b/plugins/outputs/discard/README.md index e1c70b742..c86d389fa 100644 --- a/plugins/outputs/discard/README.md +++ b/plugins/outputs/discard/README.md @@ -3,7 +3,7 @@ This output plugin simply drops all metrics that are sent to it. It is only meant to be used for testing purposes. -### Configuration: +## Configuration ```toml # Send metrics to nowhere at all diff --git a/plugins/outputs/dynatrace/README.md b/plugins/outputs/dynatrace/README.md index f25b87089..2776fa23e 100644 --- a/plugins/outputs/dynatrace/README.md +++ b/plugins/outputs/dynatrace/README.md @@ -34,13 +34,13 @@ Note: The name and identifier of the host running Telegraf will be added as a di If you run the Telegraf agent on a host or VM without a OneAgent you will need to configure the environment API endpoint to send the metrics to and an API token for security. -You will also need to configure an API token for secure access. Find out how to create a token in the [Dynatrace documentation](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/) or simply navigate to **Settings > Integration > Dynatrace API** in your Dynatrace environment and create a token with Dynatrace API and create a new token with +You will also need to configure an API token for secure access. Find out how to create a token in the [Dynatrace documentation](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/) or simply navigate to **Settings > Integration > Dynatrace API** in your Dynatrace environment and create a token with Dynatrace API and create a new token with 'Ingest metrics' (`metrics.ingest`) scope enabled. It is recommended to limit Token scope to only this permission. 
-The endpoint for the Dynatrace Metrics API v2 is +The endpoint for the Dynatrace Metrics API v2 is -* on Dynatrace Managed: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest` -* on Dynatrace SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest` +- on Dynatrace Managed: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest` +- on Dynatrace SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest` ```toml [[outputs.dynatrace]] diff --git a/plugins/outputs/elasticsearch/README.md b/plugins/outputs/elasticsearch/README.md index 2616ff1a6..41001ee89 100644 --- a/plugins/outputs/elasticsearch/README.md +++ b/plugins/outputs/elasticsearch/README.md @@ -4,7 +4,7 @@ This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using Ela It supports Elasticsearch releases from 5.x up to 7.x. -### Elasticsearch indexes and templates +## Elasticsearch indexes and templates ### Indexes per time-frame @@ -12,12 +12,12 @@ This plugin can manage indexes per time-frame, as commonly done in other tools w The timestamp of the metric collected will be used to decide the index destination. -For more information about this usage on Elasticsearch, check https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe +For more information about this usage on Elasticsearch, check [the docs](https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe). ### Template management Index templates are used in Elasticsearch to define settings and mappings for the indexes and how the fields should be analyzed. -For more information on how this works, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html +For more information on how this works, see [the docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html). This plugin can create a working template for use with telegraf metrics. It uses Elasticsearch dynamic templates feature to set proper types for the tags and metrics fields. If the template specified already exists, it will not overwrite unless you configure this plugin to do so. Thus you can customize this template after its creation if necessary. @@ -98,7 +98,7 @@ Example of an index template created by telegraf on Elasticsearch 5.x: ``` -### Example events: +### Example events This plugin will format the events in the following way: @@ -144,7 +144,7 @@ This plugin will format the events in the following way: } ``` -### Configuration +## Configuration ```toml [[outputs.elasticsearch]] @@ -201,7 +201,7 @@ This plugin will format the events in the following way: force_document_id = false ``` -#### Permissions +### Permissions If you are using authentication within your Elasticsearch cluster, you need to create a account and create a role with at least the manage role in the @@ -210,7 +210,7 @@ connect to your Elasticsearch cluster and send logs to your cluster. After that, you need to add "create_indice" and "write" permission to your specific index pattern. -#### Required parameters: +### Required parameters * `urls`: A list containing the full HTTP URL of one or more nodes from your Elasticsearch instance. * `index_name`: The target index for metrics. You can use the date specifiers below to create indexes per time frame. @@ -225,7 +225,7 @@ index pattern. Additionally, you can specify dynamic index names by using tags with the notation ```{{tag_name}}```. 
This will store the metrics with different tag values in different indices. If the tag does not exist in a particular metric, the `default_tag_value` will be used instead. -#### Optional parameters: +### Optional parameters * `timeout`: Elasticsearch client timeout, defaults to "5s" if not set. * `enable_sniffer`: Set to true to ask Elasticsearch a list of all cluster nodes, thus it is not necessary to list all nodes in the urls config option. @@ -237,7 +237,7 @@ Additionally, you can specify dynamic index names by using tags with the notatio * `overwrite_template`: Set to true if you want telegraf to overwrite an existing template. * `force_document_id`: Set to true will compute a unique hash from as sha256(concat(timestamp,measurement,series-hash)),enables resend or update data withoud ES duplicated documents. -### Known issues +## Known issues Integer values collected that are bigger than 2^63 and smaller than 1e21 (or in this exact same window of their negative counterparts) are encoded by golang JSON encoder in decimal format and that is not fully supported by Elasticsearch dynamic field mapping. This causes the metrics with such values to be dropped in case a field mapping has not been created yet on the telegraf index. If that's the case you will see an exception on Elasticsearch side like this: diff --git a/plugins/outputs/exec/README.md b/plugins/outputs/exec/README.md index 7e19b9a84..60b4ac385 100644 --- a/plugins/outputs/exec/README.md +++ b/plugins/outputs/exec/README.md @@ -4,13 +4,15 @@ This plugin sends telegraf metrics to an external application over stdin. The command should be defined similar to docker's `exec` form: - ["executable", "param1", "param2"] +```text +["executable", "param1", "param2"] +``` On non-zero exit stderr will be logged at error level. For better performance, consider execd, which runs continuously. -### Configuration +## Configuration ```toml [[outputs.exec]] diff --git a/plugins/outputs/execd/README.md b/plugins/outputs/execd/README.md index 8569c1033..5b2124625 100644 --- a/plugins/outputs/execd/README.md +++ b/plugins/outputs/execd/README.md @@ -4,7 +4,7 @@ The `execd` plugin runs an external program as a daemon. Telegraf minimum version: Telegraf 1.15.0 -### Configuration: +## Configuration ```toml [[outputs.execd]] @@ -22,7 +22,7 @@ Telegraf minimum version: Telegraf 1.15.0 data_format = "influx" ``` -### Example +## Example see [examples][] diff --git a/plugins/outputs/file/README.md b/plugins/outputs/file/README.md index 45d0ac155..2e6a12d97 100644 --- a/plugins/outputs/file/README.md +++ b/plugins/outputs/file/README.md @@ -2,7 +2,7 @@ This plugin writes telegraf metrics to files -### Configuration +## Configuration ```toml [[outputs.file]] diff --git a/plugins/outputs/graphite/README.md b/plugins/outputs/graphite/README.md index b6b36cfca..ddd85278f 100644 --- a/plugins/outputs/graphite/README.md +++ b/plugins/outputs/graphite/README.md @@ -6,7 +6,7 @@ via raw TCP. For details on the translation between Telegraf Metrics and Graphite output, see the [Graphite Data Format](../../../docs/DATA_FORMATS_OUTPUT.md) -### Configuration: +## Configuration ```toml # Configuration for Graphite server to send metrics to diff --git a/plugins/outputs/graylog/README.md b/plugins/outputs/graylog/README.md index 96e290b09..d59602148 100644 --- a/plugins/outputs/graylog/README.md +++ b/plugins/outputs/graylog/README.md @@ -4,7 +4,7 @@ This plugin writes to a Graylog instance using the "[GELF][]" format. 
[GELF]: https://docs.graylog.org/en/3.1/pages/gelf.html#gelf-payload-specification -### Configuration: +## Configuration ```toml [[outputs.graylog]] diff --git a/plugins/outputs/health/README.md b/plugins/outputs/health/README.md index 0a56d5192..a88417f63 100644 --- a/plugins/outputs/health/README.md +++ b/plugins/outputs/health/README.md @@ -7,7 +7,8 @@ When the plugin is healthy it will return a 200 response; when unhealthy it will return a 503 response. The default state is healthy, one or more checks must fail in order for the resource to enter the failed state. -### Configuration +## Configuration + ```toml [[outputs.health]] ## Address and port to listen on. @@ -48,7 +49,7 @@ must fail in order for the resource to enter the failed state. ## field = "buffer_size" ``` -#### compares +### compares The `compares` check is used to assert basic mathematical relationships. Use it by choosing a field key and one or more comparisons that must hold true. If @@ -56,7 +57,7 @@ the field is not found on a metric no comparison will be made. Comparisons must be hold true on all metrics for the check to pass. -#### contains +### contains The `contains` check can be used to require a field key to exist on at least one metric. diff --git a/plugins/outputs/http/README.md b/plugins/outputs/http/README.md index 909779262..99206a8bb 100644 --- a/plugins/outputs/http/README.md +++ b/plugins/outputs/http/README.md @@ -4,7 +4,7 @@ This plugin sends metrics in a HTTP message encoded using one of the output data formats. For data_formats that support batching, metrics are sent in batch format by default. -### Configuration: +## Configuration ```toml # A plugin that can transmit metrics over HTTP @@ -70,6 +70,6 @@ batch format by default. # idle_conn_timeout = 0 ``` -### Optional Cookie Authentication Settings: +### Optional Cookie Authentication Settings The optional Cookie Authentication Settings will retrieve a cookie from the given authorization endpoint, and use it in subsequent API requests. This is useful for services that do not provide OAuth or Basic Auth authentication, e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve an authorization cookie. The Cookie Auth Renewal interval will renew the authorization by retrieving a new cookie at the given interval. diff --git a/plugins/outputs/influxdb/README.md b/plugins/outputs/influxdb/README.md index 36fde827e..6624adfae 100644 --- a/plugins/outputs/influxdb/README.md +++ b/plugins/outputs/influxdb/README.md @@ -2,7 +2,7 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP service. -### Configuration: +## Configuration ```toml # Configuration for sending metrics to InfluxDB @@ -84,7 +84,8 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP ser # influx_uint_support = false ``` -### Metrics +## Metrics + Reference the [influx serializer][] for details about metric production. [InfluxDB v1.x]: https://github.com/influxdata/influxdb diff --git a/plugins/outputs/influxdb_v2/README.md b/plugins/outputs/influxdb_v2/README.md index b176fffcd..3b9ddf682 100644 --- a/plugins/outputs/influxdb_v2/README.md +++ b/plugins/outputs/influxdb_v2/README.md @@ -2,7 +2,7 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service. 
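For a quick start, a minimal sketch with only the required settings might look like the following; the URL, token, organization, and bucket values are placeholders for your own deployment, and the full option reference follows below.

```toml
[[outputs.influxdb_v2]]
  ## Placeholder values - replace with your own instance details
  urls = ["http://127.0.0.1:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "telegraf"
```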
-### Configuration: +## Configuration ```toml # Configuration for sending metrics to InfluxDB 2.0 @@ -58,8 +58,8 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service. # insecure_skip_verify = false ``` -### Metrics - +## Metrics + Reference the [influx serializer][] for details about metric production. [InfluxDB v2.x]: https://github.com/influxdata/influxdb diff --git a/plugins/outputs/instrumental/README.md b/plugins/outputs/instrumental/README.md index f8b48fd1e..65113aecc 100644 --- a/plugins/outputs/instrumental/README.md +++ b/plugins/outputs/instrumental/README.md @@ -7,7 +7,7 @@ Instrumental accepts stats in a format very close to Graphite, with the only dif the type of stat (gauge, increment) is the first token, separated from the metric itself by whitespace. The `increment` type is only used if the metric comes in as a counter through `[[input.statsd]]`. -## Configuration: +## Configuration ```toml [[outputs.instrumental]] diff --git a/plugins/outputs/kafka/README.md b/plugins/outputs/kafka/README.md index 54108d8be..5f3c2f5ea 100644 --- a/plugins/outputs/kafka/README.md +++ b/plugins/outputs/kafka/README.md @@ -2,7 +2,8 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.html) acting a Kafka Producer. -### Configuration: +## Configuration + ```toml [[outputs.kafka]] ## URLs of kafka brokers @@ -80,7 +81,7 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.htm ## 3 : LZ4 ## 4 : ZSTD # compression_codec = 0 - + ## Idempotent Writes ## If enabled, exactly one copy of each message is written. # idempotent_writes = false @@ -146,7 +147,7 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.htm # data_format = "influx" ``` -#### `max_retry` +### `max_retry` This option controls the number of retries before a failure notification is displayed for each message when no acknowledgement is received from the diff --git a/plugins/outputs/kinesis/README.md b/plugins/outputs/kinesis/README.md index 2d909090b..b5f9422f8 100644 --- a/plugins/outputs/kinesis/README.md +++ b/plugins/outputs/kinesis/README.md @@ -1,4 +1,4 @@ -## Amazon Kinesis Output for Telegraf +# Amazon Kinesis Output for Telegraf This is an experimental plugin that is still in the early stages of development. It will batch up all of the Points in one Put request to Kinesis. This should save the number of API requests by a considerable level. @@ -13,18 +13,18 @@ maybe useful for users to review Amazons official documentation which is availab This plugin uses a credential chain for Authentication with the Kinesis API endpoint. In the following order the plugin will attempt to authenticate. + 1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified -2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) -3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes -4. Shared profile from `profile` attribute -5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) -6. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) -7. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) +1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules) +1. 
Explicit credentials from `access_key`, `secret_key`, and `token` attributes +1. Shared profile from `profile` attribute +1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables) +1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file) +1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If left empty, the current timestamp will be used. - ## Config For this output plugin to function correctly the following variables must be configured. @@ -35,6 +35,7 @@ For this output plugin to function correctly the following variables must be con ### region The region is the Amazon region that you wish to connect to. Examples include but are not limited to + * us-west-1 * us-west-2 * us-east-1 diff --git a/plugins/outputs/librato/README.md b/plugins/outputs/librato/README.md index 731b9dbd2..685c36432 100644 --- a/plugins/outputs/librato/README.md +++ b/plugins/outputs/librato/README.md @@ -9,4 +9,4 @@ Point Tags to the API. If the point value being sent cannot be converted to a float64, the metric is skipped. -Currently, the plugin does not send any associated Point Tags. \ No newline at end of file +Currently, the plugin does not send any associated Point Tags. diff --git a/plugins/outputs/logzio/README.md b/plugins/outputs/logzio/README.md index 5cf61233e..fd1912e26 100644 --- a/plugins/outputs/logzio/README.md +++ b/plugins/outputs/logzio/README.md @@ -2,7 +2,7 @@ This plugin sends metrics to Logz.io over HTTPs. -### Configuration: +## Configuration ```toml # A plugin that can send metrics over HTTPs to Logz.io @@ -30,14 +30,14 @@ This plugin sends metrics to Logz.io over HTTPs. # url = "https://listener.logz.io:8071" ``` -### Required parameters: +### Required parameters * `token`: Your Logz.io token, which can be found under "settings" in your account. -### Optional parameters: +### Optional parameters * `check_disk_space`: Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue. * `disk_threshold`: If the queue_dir space crosses this threshold (in % of disk usage), the plugin will start dropping logs. * `drain_duration`: Time to sleep between sending attempts. * `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk in this location. -* `url`: Logz.io listener URL. \ No newline at end of file +* `url`: Logz.io listener URL. diff --git a/plugins/outputs/loki/README.md b/plugins/outputs/loki/README.md index 6c7eb91c8..400ab71a9 100644 --- a/plugins/outputs/loki/README.md +++ b/plugins/outputs/loki/README.md @@ -1,11 +1,11 @@ # Loki Output Plugin -This plugin sends logs to Loki, using metric name and tags as labels, +This plugin sends logs to Loki, using metric name and tags as labels, log line will content all fields in `key="value"` format which is easily parsable with `logfmt` parser in Loki. Logs within each stream are sorted by timestamp before being sent to Loki. 
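As a rough illustration of the behaviour described above (the metric, tag, and field values here are made up), a metric such as

```text
cpu,host=server01 usage_idle=90.5,usage_user=4.2 1628967830000000000
```

would be pushed to a stream labeled by the metric name and its tags (here `cpu` and `host=server01`), with the fields rendered as a single logfmt-style log line:

```text
usage_idle="90.5" usage_user="4.2"
```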
-### Configuration: +## Configuration ```toml # A plugin that can transmit logs to Loki diff --git a/plugins/outputs/mongodb/README.md b/plugins/outputs/mongodb/README.md index 0f9ca9973..05a0cf7f1 100644 --- a/plugins/outputs/mongodb/README.md +++ b/plugins/outputs/mongodb/README.md @@ -3,7 +3,7 @@ This plugin sends metrics to MongoDB and automatically creates the collections as time series collections when they don't already exist. **Please note:** Requires MongoDB 5.0+ for Time Series Collections -### Configuration: +## Configuration ```toml # A plugin that can transmit logs to mongodb @@ -33,11 +33,11 @@ This plugin sends metrics to MongoDB and automatically creates the collections a # database to store measurements and time series collections # database = "telegraf" - # granularity can be seconds, minutes, or hours. - # configuring this value will be based on your input collection frequency. + # granularity can be seconds, minutes, or hours. + # configuring this value will be based on your input collection frequency. # see https://docs.mongodb.com/manual/core/timeseries-collections/#create-a-time-series-collection - # granularity = "seconds" + # granularity = "seconds" # optionally set a TTL to automatically expire documents from the measurement collections. - # ttl = "360h" -``` \ No newline at end of file + # ttl = "360h" +``` diff --git a/plugins/outputs/mqtt/README.md b/plugins/outputs/mqtt/README.md index f82d7597c..64d8c16b3 100644 --- a/plugins/outputs/mqtt/README.md +++ b/plugins/outputs/mqtt/README.md @@ -40,8 +40,8 @@ This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt ## When true, messages will have RETAIN flag set. # retain = false - ## Defines the maximum length of time that the broker and client may not communicate. - ## Defaults to 0 which turns the feature off. For version v2.0.12 mosquitto there is a + ## Defines the maximum length of time that the broker and client may not communicate. + ## Defaults to 0 which turns the feature off. For version v2.0.12 mosquitto there is a ## [bug](https://github.com/eclipse/mosquitto/issues/2117) which requires keep_alive to be set. ## As a reference eclipse/paho.mqtt.golang v1.3.0 defaults to 30. # keep_alive = 0 @@ -50,13 +50,14 @@ This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt # data_format = "influx" ``` -### Required parameters: +## Required parameters * `servers`: List of strings, this is for speaking to a cluster of `mqtt` brokers. On each flush interval, Telegraf will randomly choose one of the urls to write to. Each URL should just include host and port e.g. -> `["{host}:{port}","{host2}:{port2}"]` -* `topic_prefix`: The `mqtt` topic prefix to publish to. MQTT outputs send metrics to this topic format "///" ( ex: prefix/web01.example.com/mem) -* `qos`: The `mqtt` QoS policy for sending messages. See https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q029090_.htm for details. +* `topic_prefix`: The `mqtt` topic prefix to publish to. MQTT outputs send metrics to this topic format `///` ( ex: `prefix/web01.example.com/mem`) +* `qos`: The `mqtt` QoS policy for sending messages. See [these docs](https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q029090_.htm) for details. + +## Optional parameters -### Optional parameters: * `username`: The username to connect MQTT server. * `password`: The password to connect MQTT server. * `client_id`: The unique client id to connect MQTT server. 
If this parameter is not set then a random ID is generated. @@ -68,4 +69,4 @@ This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt * `batch`: When true, metrics will be sent in one MQTT message per flush. Otherwise, metrics are written one metric per MQTT message. * `retain`: Set `retain` flag when publishing * `data_format`: [About Telegraf data formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md) -* `keep_alive`: Defines the maximum length of time that the broker and client may not communicate with each other. Defaults to 0 which deactivates this feature. +* `keep_alive`: Defines the maximum length of time that the broker and client may not communicate with each other. Defaults to 0 which deactivates this feature. diff --git a/plugins/outputs/newrelic/README.md b/plugins/outputs/newrelic/README.md index e15bedb4b..54e34e825 100644 --- a/plugins/outputs/newrelic/README.md +++ b/plugins/outputs/newrelic/README.md @@ -6,7 +6,8 @@ To use this plugin you must first obtain an [Insights API Key][]. Telegraf minimum version: Telegraf 1.15.0 -### Configuration +## Configuration + ```toml [[outputs.newrelic]] ## New Relic Insights API key @@ -17,13 +18,13 @@ Telegraf minimum version: Telegraf 1.15.0 ## Timeout for writes to the New Relic API. # timeout = "15s" - + ## HTTP Proxy override. If unset use values from the standard ## proxy environment variables to determine proxy, if any. # http_proxy = "http://corporate.proxy:3128" ## Metric URL override to enable geographic location endpoints. - # If not set use values from the standard + # If not set use values from the standard # metric_url = "https://metric-api.newrelic.com/metric/v1" ``` diff --git a/plugins/outputs/nsq/README.md b/plugins/outputs/nsq/README.md index 61b4dad98..bf5958d32 100644 --- a/plugins/outputs/nsq/README.md +++ b/plugins/outputs/nsq/README.md @@ -1,4 +1,4 @@ # NSQ Output Plugin This plugin writes to a specified NSQD instance, usually local to the producer. It requires -a `server` name and a `topic` name. \ No newline at end of file +a `server` name and a `topic` name. diff --git a/plugins/outputs/opentelemetry/README.md b/plugins/outputs/opentelemetry/README.md index e6b4ebdfc..135540190 100644 --- a/plugins/outputs/opentelemetry/README.md +++ b/plugins/outputs/opentelemetry/README.md @@ -2,7 +2,7 @@ This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers and agents via gRPC. -### Configuration +## Configuration ```toml [[outputs.opentelemetry]] @@ -39,11 +39,11 @@ This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers a # key1 = "value1" ``` -#### Schema +### Schema The InfluxDB->OpenTelemetry conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md) and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/influx2otel) -are hosted at https://github.com/influxdata/influxdb-observability . +are hosted on [GitHub](https://github.com/influxdata/influxdb-observability). For metrics, two input schemata exist. Line protocol with measurement name `prometheus` is assumed to have a schema @@ -51,6 +51,7 @@ matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metr Line protocol with other measurement names is assumed to have schema matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metric_version = 1`. 
If both schema assumptions fail, then the line protocol data is interpreted as: + - Metric type = gauge (or counter, if indicated by the input plugin) - Metric name = `[measurement]_[field key]` - Metric value = line protocol field value, cast to float diff --git a/plugins/outputs/opentsdb/README.md b/plugins/outputs/opentsdb/README.md index f737d48ae..b89c6c8a5 100644 --- a/plugins/outputs/opentsdb/README.md +++ b/plugins/outputs/opentsdb/README.md @@ -6,26 +6,26 @@ Using the Http API is the recommended way of writing metrics since OpenTSDB 2.0 To use Http mode, set useHttp to true in config. You can also control how many metrics is sent in each http request by setting batchSize in config. -See http://opentsdb.net/docs/build/html/api_http/put.html for details. +See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for details. ## Transfer "Protocol" in the telnet mode The expected input from OpenTSDB is specified in the following way: -``` +```text put ``` The telegraf output plugin adds an optional prefix to the metric keys so that a subamount can be selected. -``` +```text put <[prefix.]metric> ``` ### Example -``` +```text put nine.telegraf.system_load1 1441910356 0.430000 dc=homeoffice host=irimame scope=green put nine.telegraf.system_load5 1441910356 0.580000 dc=homeoffice host=irimame scope=green put nine.telegraf.system_load15 1441910356 0.730000 dc=homeoffice host=irimame scope=green @@ -44,8 +44,6 @@ put nine.telegraf.ping_average_response_ms 1441910366 24.006000 dc=homeoffice ho ... ``` -## - The OpenTSDB telnet interface can be simulated with this reader: ```go @@ -53,28 +51,28 @@ The OpenTSDB telnet interface can be simulated with this reader: package main import ( - "io" - "log" - "net" - "os" + "io" + "log" + "net" + "os" ) func main() { - l, err := net.Listen("tcp", "localhost:4242") - if err != nil { - log.Fatal(err) - } - defer l.Close() - for { - conn, err := l.Accept() - if err != nil { - log.Fatal(err) - } - go func(c net.Conn) { - defer c.Close() - io.Copy(os.Stdout, c) - }(conn) - } + l, err := net.Listen("tcp", "localhost:4242") + if err != nil { + log.Fatal(err) + } + defer l.Close() + for { + conn, err := l.Accept() + if err != nil { + log.Fatal(err) + } + go func(c net.Conn) { + defer c.Close() + io.Copy(os.Stdout, c) + }(conn) + } } ``` diff --git a/plugins/outputs/prometheus_client/README.md b/plugins/outputs/prometheus_client/README.md index 844cf3f2d..085fc4649 100644 --- a/plugins/outputs/prometheus_client/README.md +++ b/plugins/outputs/prometheus_client/README.md @@ -3,7 +3,7 @@ This plugin starts a [Prometheus](https://prometheus.io/) Client, it exposes all metrics on `/metrics` (default) to be polled by a Prometheus server. -### Configuration +## Configuration ```toml [[outputs.prometheus_client]] @@ -52,7 +52,7 @@ all metrics on `/metrics` (default) to be polled by a Prometheus server. # export_timestamp = false ``` -### Metrics +## Metrics Prometheus metrics are produced in the same manner as the [prometheus serializer][]. diff --git a/plugins/outputs/riemann/README.md b/plugins/outputs/riemann/README.md index 82615728c..d50dc9f9b 100644 --- a/plugins/outputs/riemann/README.md +++ b/plugins/outputs/riemann/README.md @@ -2,7 +2,7 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP. -### Configuration: +## Configuration ```toml # Configuration for Riemann to send metrics to @@ -39,11 +39,11 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP. 
# timeout = "5s" ``` -### Required parameters: +### Required parameters * `url`: The full TCP or UDP URL of the Riemann server to send events to. -### Optional parameters: +### Optional parameters * `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long that an event is considered valid for in Riemann. * `separator`: Separator to use between measurement and field name in Riemann service name. @@ -53,24 +53,27 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP. * `tags`: Additional Riemann tags that will be sent. * `description_text`: Description text for Riemann event. -### Example Events: +## Example Events Riemann event emitted by Telegraf with default configuration: -``` + +```text #riemann.codec.Event{ :host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0, :service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021} ``` Telegraf emitting the same Riemann event with `measurement_as_attribute` set to `true`: -``` + +```text #riemann.codec.Event{ ... :measurement "disk", :service "used_percent", :metric 73.16736001949994, ... :time 1475605021} ``` Telegraf emitting the same Riemann event with additional Riemann tags defined: -``` + +```text #riemann.codec.Event{ :host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0, :service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021, @@ -78,7 +81,8 @@ Telegraf emitting the same Riemann event with additional Riemann tags defined: ``` Telegraf emitting a Riemann event with a status text and `string_as_state` set to `true`, and a `description_text` defined: -``` + +```text #riemann.codec.Event{ :host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state "Running", :ttl 30.0, :description "PostgreSQL master node is up and running", diff --git a/plugins/outputs/sensu/README.md b/plugins/outputs/sensu/README.md index f21159c64..3d6c7d53d 100644 --- a/plugins/outputs/sensu/README.md +++ b/plugins/outputs/sensu/README.md @@ -1,14 +1,14 @@ # Sensu Go Output Plugin -This plugin writes metrics events to [Sensu Go](https://sensu.io) via its +This plugin writes metrics events to [Sensu Go](https://sensu.io) via its HTTP events API. -### Configuration: +## Configuration ```toml [[outputs.sensu]] - ## BACKEND API URL is the Sensu Backend API root URL to send metrics to - ## (protocol, host, and port only). The output plugin will automatically + ## BACKEND API URL is the Sensu Backend API root URL to send metrics to + ## (protocol, host, and port only). The output plugin will automatically ## append the corresponding backend API path ## /api/core/v2/namespaces/:entity_namespace/events/:entity_name/:check_name). ## @@ -16,34 +16,34 @@ HTTP events API. ## https://docs.sensu.io/sensu-go/latest/api/events/ ## ## AGENT API URL is the Sensu Agent API root URL to send metrics to - ## (protocol, host, and port only). The output plugin will automatically + ## (protocol, host, and port only). The output plugin will automatically ## append the correspeonding agent API path (/events). ## ## Agent API Events API reference: ## https://docs.sensu.io/sensu-go/latest/api/events/ - ## - ## NOTE: if backend_api_url and agent_api_url and api_key are set, the output - ## plugin will use backend_api_url. 
If backend_api_url and agent_api_url are - ## not provided, the output plugin will default to use an agent_api_url of + ## + ## NOTE: if backend_api_url and agent_api_url and api_key are set, the output + ## plugin will use backend_api_url. If backend_api_url and agent_api_url are + ## not provided, the output plugin will default to use an agent_api_url of ## http://127.0.0.1:3031 - ## + ## # backend_api_url = "http://127.0.0.1:8080" # agent_api_url = "http://127.0.0.1:3031" - ## API KEY is the Sensu Backend API token - ## Generate a new API token via: - ## + ## API KEY is the Sensu Backend API token + ## Generate a new API token via: + ## ## $ sensuctl cluster-role create telegraf --verb create --resource events,entities ## $ sensuctl cluster-role-binding create telegraf --cluster-role telegraf --group telegraf - ## $ sensuctl user create telegraf --group telegraf --password REDACTED + ## $ sensuctl user create telegraf --group telegraf --password REDACTED ## $ sensuctl api-key grant telegraf ## - ## For more information on Sensu RBAC profiles & API tokens, please visit: + ## For more information on Sensu RBAC profiles & API tokens, please visit: ## - https://docs.sensu.io/sensu-go/latest/reference/rbac/ - ## - https://docs.sensu.io/sensu-go/latest/reference/apikeys/ - ## + ## - https://docs.sensu.io/sensu-go/latest/reference/apikeys/ + ## # api_key = "${SENSU_API_KEY}" - + ## Optional TLS Config # tls_ca = "/etc/telegraf/ca.pem" # tls_cert = "/etc/telegraf/cert.pem" @@ -58,7 +58,7 @@ HTTP events API. ## compress body or "identity" to apply no encoding. # content_encoding = "identity" - ## Sensu Event details + ## Sensu Event details ## ## Below are the event details to be sent to Sensu. The main portions of the ## event are the check, entity, and metrics specifications. For more information @@ -67,7 +67,7 @@ HTTP events API. ## - Checks - https://docs.sensu.io/sensu-go/latest/reference/checks ## - Entities - https://docs.sensu.io/sensu-go/latest/reference/entities ## - Metrics - https://docs.sensu.io/sensu-go/latest/reference/events#metrics - ## + ## ## Check specification ## The check name is the name to give the Sensu check associated with the event ## created. This maps to check.metatadata.name in the event. @@ -78,10 +78,10 @@ HTTP events API. ## Configure the entity name and namespace, if necessary. This will be part of ## the entity.metadata in the event. ## - ## NOTE: if the output plugin is configured to send events to a - ## backend_api_url and entity_name is not set, the value returned by + ## NOTE: if the output plugin is configured to send events to a + ## backend_api_url and entity_name is not set, the value returned by ## os.Hostname() will be used; if the output plugin is configured to send - ## events to an agent_api_url, entity_name and entity_namespace are not used. + ## events to an agent_api_url, entity_name and entity_namespace are not used. # [outputs.sensu.entity] # name = "server-01" # namespace = "default" diff --git a/plugins/outputs/signalfx/README.md b/plugins/outputs/signalfx/README.md index 00b39cf30..09b7f41db 100644 --- a/plugins/outputs/signalfx/README.md +++ b/plugins/outputs/signalfx/README.md @@ -2,7 +2,8 @@ The SignalFx output plugin sends metrics to [SignalFx](https://docs.signalfx.com/en/latest/). 
-### Configuration +## Configuration + ```toml [[outputs.signalfx]] ## SignalFx Org Access Token diff --git a/plugins/outputs/sql/README.md b/plugins/outputs/sql/README.md index 77b89762a..7f8f5da72 100644 --- a/plugins/outputs/sql/README.md +++ b/plugins/outputs/sql/README.md @@ -65,7 +65,7 @@ through the convert settings. ## Configuration -``` +```toml # Save metrics to an SQL Database [[outputs.sql]] ## Database driver diff --git a/plugins/outputs/stackdriver/README.md b/plugins/outputs/stackdriver/README.md index 27ef3a09f..a3c4f8295 100644 --- a/plugins/outputs/stackdriver/README.md +++ b/plugins/outputs/stackdriver/README.md @@ -15,7 +15,7 @@ Metrics are grouped by the `namespace` variable and metric key - eg: `custom.goo Additional resource labels can be configured by `resource_labels`. By default the required `project_id` label is always set to the `project` variable. -### Configuration +## Configuration ```toml [[outputs.stackdriver]] @@ -35,7 +35,7 @@ Additional resource labels can be configured by `resource_labels`. By default th # location = "eu-north0" ``` -### Restrictions +## Restrictions Stackdriver does not support string values in custom metrics, any string fields will not be written. diff --git a/plugins/outputs/sumologic/README.md b/plugins/outputs/sumologic/README.md index 20fb75799..4dcc1c7e8 100644 --- a/plugins/outputs/sumologic/README.md +++ b/plugins/outputs/sumologic/README.md @@ -8,11 +8,11 @@ Telegraf minimum version: Telegraf 1.16.0 Currently metrics can be sent using one of the following data formats, supported by Sumologic HTTP Source: - * `graphite` - for Content-Type of `application/vnd.sumologic.graphite` - * `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2` - * `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus` +* `graphite` - for Content-Type of `application/vnd.sumologic.graphite` +* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2` +* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus` -### Configuration: +## Configuration ```toml # A plugin that can send metrics to Sumo Logic HTTP metric collector. @@ -23,7 +23,7 @@ by Sumologic HTTP Source: ## Data format to be used for sending metrics. ## This will set the "Content-Type" header accordingly. - ## Currently supported formats: + ## Currently supported formats: ## * graphite - for Content-Type of application/vnd.sumologic.graphite ## * carbon2 - for Content-Type of application/vnd.sumologic.carbon2 ## * prometheus - for Content-Type of application/vnd.sumologic.prometheus @@ -38,7 +38,7 @@ by Sumologic HTTP Source: ## Timeout used for HTTP request # timeout = "5s" - + ## Max HTTP request body size in bytes before compression (if applied). ## By default 1MB is recommended. ## NOTE: diff --git a/plugins/outputs/syslog/README.md b/plugins/outputs/syslog/README.md index cb9bc8965..7b2c480f3 100644 --- a/plugins/outputs/syslog/README.md +++ b/plugins/outputs/syslog/README.md @@ -8,7 +8,7 @@ The syslog output plugin sends syslog messages transmitted over Syslog messages are formatted according to [RFC 5424](https://tools.ietf.org/html/rfc5424). -### Configuration +## Configuration ```toml [[outputs.syslog]] @@ -88,7 +88,8 @@ Syslog messages are formatted according to # default_appname = "Telegraf" ``` -### Metric mapping +## Metric mapping + The output plugin expects syslog metrics tags and fields to match up with the ones created in the [syslog input][]. 
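For example, a matching syslog metric in line protocol form might look like the record below; the tag and field names are assumptions based on the schema documented for the syslog input plugin, so treat them as illustrative rather than authoritative:

```text
syslog,appname=myapp,facility=daemon,hostname=web01,severity=err version=1i,severity_code=3i,facility_code=3i,procid="2112",message="something went wrong" 1618159235000000000
```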
diff --git a/plugins/outputs/timestream/README.md b/plugins/outputs/timestream/README.md index dc063a068..6761ad4da 100644 --- a/plugins/outputs/timestream/README.md +++ b/plugins/outputs/timestream/README.md @@ -2,14 +2,14 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service. -### Configuration +## Configuration ```toml # Configuration for sending metrics to Amazon Timestream. [[outputs.timestream]] ## Amazon Region region = "us-east-1" - + ## Amazon Credentials ## Credentials are loaded in the following order ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified @@ -27,7 +27,7 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service. #role_session_name = "" #profile = "" #shared_credential_file = "" - + ## Endpoint to make request against, the correct endpoint is automatically ## determined and this option should only be set if you wish to override the ## default. @@ -40,7 +40,7 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service. ## Specifies if the plugin should describe the Timestream database upon starting ## to validate if it has access necessary permissions, connection, etc., as a safety check. - ## If the describe operation fails, the plugin will not start + ## If the describe operation fails, the plugin will not start ## and therefore the Telegraf agent will not start. describe_database_on_start = false @@ -49,17 +49,17 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service. ## For example, consider the following data in line protocol format: ## weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200 ## airquality,location=us-west no2=5,pm25=16 1465839830100400200 - ## where weather and airquality are the measurement names, location and season are tags, + ## where weather and airquality are the measurement names, location and season are tags, ## and temperature, humidity, no2, pm25 are fields. ## In multi-table mode: ## - first line will be ingested to table named weather ## - second line will be ingested to table named airquality ## - the tags will be represented as dimensions ## - first table (weather) will have two records: - ## one with measurement name equals to temperature, + ## one with measurement name equals to temperature, ## another with measurement name equals to humidity ## - second table (airquality) will have two records: - ## one with measurement name equals to no2, + ## one with measurement name equals to no2, ## another with measurement name equals to pm25 ## - the Timestream tables from the example will look like this: ## TABLE "weather": @@ -93,7 +93,7 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service. ## Specifies the Timestream table where the metrics will be uploaded. # single_table_name = "yourTableNameHere" - ## Only valid and required for mapping_mode = "single-table" + ## Only valid and required for mapping_mode = "single-table" ## Describes what will be the Timestream dimension name for the Telegraf ## measurement name. # single_table_dimension_name_for_telegraf_measurement_name = "namespace" @@ -135,8 +135,9 @@ In case of an attempt to write an unsupported by Timestream Telegraf Field type, In case of receiving ThrottlingException or InternalServerException from Timestream, the errors are returned to Telegraf, in which case Telegraf will keep the metrics in buffer and retry writing those metrics on the next flush. 
In case of receiving ResourceNotFoundException: - - If `create_table_if_not_exists` configuration is set to `true`, the plugin will try to create appropriate table and write the records again, if the table creation was successful. - - If `create_table_if_not_exists` configuration is set to `false`, the records are dropped, and an error is emitted to the logs. + +- If `create_table_if_not_exists` configuration is set to `true`, the plugin will try to create appropriate table and write the records again, if the table creation was successful. +- If `create_table_if_not_exists` configuration is set to `false`, the records are dropped, and an error is emitted to the logs. In case of receiving any other AWS error from Timestream, the records are dropped, and an error is emitted to the logs, as retrying such requests isn't likely to succeed. @@ -148,8 +149,8 @@ Turn on debug flag in the Telegraf to turn on detailed logging (including record Execute unit tests with: -``` +```shell go test -v ./plugins/outputs/timestream/... ``` -[Amazon Timestream]: https://aws.amazon.com/timestream/ \ No newline at end of file +[Amazon Timestream]: https://aws.amazon.com/timestream/ diff --git a/plugins/outputs/warp10/README.md b/plugins/outputs/warp10/README.md index 07e6cd25b..4ffc2ce73 100644 --- a/plugins/outputs/warp10/README.md +++ b/plugins/outputs/warp10/README.md @@ -2,7 +2,7 @@ The `warp10` output plugin writes metrics to [Warp 10][]. -### Configuration +## Configuration ```toml [[outputs.warp10]] @@ -32,7 +32,7 @@ The `warp10` output plugin writes metrics to [Warp 10][]. # insecure_skip_verify = false ``` -### Output Format +## Output Format Metrics are converted and sent using the [Geo Time Series][] (GTS) input format. diff --git a/plugins/outputs/wavefront/README.md b/plugins/outputs/wavefront/README.md index 8439295bb..6ccd6e35e 100644 --- a/plugins/outputs/wavefront/README.md +++ b/plugins/outputs/wavefront/README.md @@ -2,8 +2,7 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefront data format over TCP. - -### Configuration: +## Configuration ```toml ## Url for Wavefront Direct Ingestion or using HTTP with Wavefront Proxy @@ -11,8 +10,8 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro url = "https://metrics.wavefront.com" ## Authentication Token for Wavefront. Only required if using Direct Ingestion - #token = "DUMMY_TOKEN" - + #token = "DUMMY_TOKEN" + ## DNS name of the wavefront proxy server. Do not use if url is specified #host = "wavefront.example.com" @@ -35,7 +34,7 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro ## Use Strict rules to sanitize metric and tag names from invalid characters ## When enabled forward slash (/) and comma (,) will be accepted #use_strict = false - + ## Use Regex to sanitize metric and tag names from invalid characters ## Regex is more thorough, but significantly slower. default is false #use_regex = false @@ -46,46 +45,48 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro ## whether to convert boolean values to numeric values, with false -> 0.0 and true -> 1.0. default is true #convert_bool = true - ## Truncate metric tags to a total of 254 characters for the tag name value. Wavefront will reject any + ## Truncate metric tags to a total of 254 characters for the tag name value. Wavefront will reject any ## data point exceeding this limit if not truncated. Defaults to 'false' to provide backwards compatibility. 
#truncate_tags = false ## Flush the internal buffers after each batch. This effectively bypasses the background sending of metrics - ## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending + ## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending ## of metrics will block for a longer time, but this will be handled gracefully by the internal buffering in ## Telegraf. #immediate_flush = true ``` - ### Convert Path & Metric Separator -If the `convert_path` option is true any `_` in metric and field names will be converted to the `metric_separator` value. -By default, to ease metrics browsing in the Wavefront UI, the `convert_path` option is true, and `metric_separator` is `.` (dot). + +If the `convert_path` option is true any `_` in metric and field names will be converted to the `metric_separator` value. +By default, to ease metrics browsing in the Wavefront UI, the `convert_path` option is true, and `metric_separator` is `.` (dot). Default integrations within Wavefront expect these values to be set to their defaults, however if converting from another platform it may be desirable to change these defaults. - ### Use Regex -Most illegal characters in the metric name are automatically converted to `-`. + +Most illegal characters in the metric name are automatically converted to `-`. The `use_regex` setting can be used to ensure all illegal characters are properly handled, but can lead to performance degradation. - ### Source Override -Often when collecting metrics from another system, you want to use the target system as the source, not the one running Telegraf. + +Often when collecting metrics from another system, you want to use the target system as the source, not the one running Telegraf. Many Telegraf plugins will identify the target source with a tag. The tag name can vary for different plugins. The `source_override` -option will use the value specified in any of the listed tags if found. The tag names are checked in the same order as listed, -and if found, the other tags will not be checked. If no tags specified are found, the default host tag will be used to identify the +option will use the value specified in any of the listed tags if found. The tag names are checked in the same order as listed, +and if found, the other tags will not be checked. If no tags specified are found, the default host tag will be used to identify the source of the metric. - ### Wavefront Data format + The expected input for Wavefront is specified in the following way: -``` + +```text [] = [tagk1=tagv1 ...tagkN=tagvN] ``` + More information about the Wavefront data format is available [here](https://community.wavefront.com/docs/DOC-1031) - ### Allowed values for metrics -Wavefront allows `integers` and `floats` as input values. By default it also maps `bool` values to numeric, false -> 0.0, + +Wavefront allows `integers` and `floats` as input values. By default it also maps `bool` values to numeric, false -> 0.0, true -> 1.0. To map `strings` use the [enum](../../processors/enum) processor plugin. diff --git a/plugins/outputs/websocket/README.md b/plugins/outputs/websocket/README.md index 577c10e6b..51d329317 100644 --- a/plugins/outputs/websocket/README.md +++ b/plugins/outputs/websocket/README.md @@ -4,7 +4,7 @@ This plugin can write to a WebSocket endpoint. It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md). 
-### Configuration: +## Configuration ```toml # A plugin that can transmit metrics over WebSocket. diff --git a/plugins/outputs/yandex_cloud_monitoring/README.md b/plugins/outputs/yandex_cloud_monitoring/README.md index 3bace22b4..412a57e4e 100644 --- a/plugins/outputs/yandex_cloud_monitoring/README.md +++ b/plugins/outputs/yandex_cloud_monitoring/README.md @@ -1,9 +1,8 @@ # Yandex Cloud Monitoring -This plugin will send custom metrics to Yandex Cloud Monitoring. -https://cloud.yandex.com/services/monitoring +This plugin will send custom metrics to [Yandex Cloud Monitoring](https://cloud.yandex.com/services/monitoring). -### Configuration: +## Configuration ```toml [[outputs.yandex_cloud_monitoring]]