chore(outputs): Fix line-length in READMEs (#16079)

Co-authored-by: Dane Strandboge <136023093+DStrand1@users.noreply.github.com>
Sven Rebhan 2024-10-25 16:46:51 +02:00 committed by GitHub
parent 22b153ac65
commit 13d053f917
GPG Key ID: B5690EEEBB952194
13 changed files with 249 additions and 211 deletions


@@ -1,6 +1,9 @@
# ![tiger](assets/TelegrafTigerSmall.png "tiger") Telegraf
[![GoDoc](https://img.shields.io/badge/doc-reference-00ADD8.svg?logo=go)](https://godoc.org/github.com/influxdata/telegraf)
[![Docker pulls](https://img.shields.io/docker/pulls/library/telegraf.svg)](https://hub.docker.com/_/telegraf/)
[![Go Report Card](https://goreportcard.com/badge/github.com/influxdata/telegraf)](https://goreportcard.com/report/github.com/influxdata/telegraf)
[![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
Telegraf is an agent for collecting, processing, aggregating, and writing
metrics, logs, and other arbitrary data.
@@ -76,13 +79,14 @@ Also, join us on our [Community Slack](https://influxdata.com/slack) or
[Community Forums](https://community.influxdata.com/) if you have questions or
comments for our engineering teams.
If you are completely new to Telegraf and InfluxDB, you can also enroll for free
at [InfluxDB university](https://www.influxdata.com/university/) to take courses
to learn more.
## Support
[![Slack](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack)](https://www.influxdata.com/slack)
[![Forums](https://img.shields.io/badge/discourse-join_forums-blue.svg?logo=discourse)](https://community.influxdata.com/)
Please use the [Community Slack](https://influxdata.com/slack) or
[Community Forums](https://community.influxdata.com/) if you have questions or


@@ -46,7 +46,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -75,15 +76,17 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"
## If you have a large number of metrics, you should consider sending
## statistic values instead of raw metrics, which could not only improve
## performance but also save AWS API cost. If you enable this flag, the plugin
## will parse the required CloudWatch statistic fields (count, min, max, and
## sum) and send them to CloudWatch. You can use the basicstats aggregator to
## calculate those fields. If not all statistic fields are available, all
## fields will still be sent as raw metrics.
# write_statistics = false
## Enable high resolution metrics of 1 second (if not enabled, the standard
## resolution of 60 seconds is used)
# high_resolution_metrics = false
```
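To illustrate the comment above, a minimal sketch of pairing the basicstats aggregator with `write_statistics` might look like the following; the aggregation period, region, and namespace are illustrative placeholders, not values taken from this commit:

```toml
# Pre-aggregate count/min/max/sum so CloudWatch receives statistic sets
# instead of raw datapoints (illustrative values only).
[[aggregators.basicstats]]
  period = "30s"
  drop_original = true
  stats = ["count", "min", "max", "sum"]

[[outputs.cloudwatch]]
  region = "us-east-1"                  # placeholder region
  namespace = "InfluxData/Telegraf"
  write_statistics = true
```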


@@ -5,7 +5,8 @@
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -34,13 +35,15 @@
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"
## If you have a large number of metrics, you should consider sending
## statistic values instead of raw metrics, which could not only improve
## performance but also save AWS API cost. If you enable this flag, the plugin
## will parse the required CloudWatch statistic fields (count, min, max, and
## sum) and send them to CloudWatch. You can use the basicstats aggregator to
## calculate those fields. If not all statistic fields are available, all
## fields will still be sent as raw metrics.
# write_statistics = false
## Enable high resolution metrics of 1 second (if not enabled, the standard
## resolution of 60 seconds is used)
# high_resolution_metrics = false


@@ -7,9 +7,12 @@ This plugin will send logs to Amazon CloudWatch.
This plugin uses a credential chain for authentication with the CloudWatch Logs
API endpoint. In the following order the plugin will attempt to authenticate.
1. Web identity provider credentials via STS if `role_arn` and
   `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source
   credentials are evaluated from subsequent rules). The `endpoint_url`
   attribute is used only for the CloudWatch Logs service. When fetching
   credentials, the STS global endpoint will be used.
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
@@ -56,7 +59,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -74,30 +78,31 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default, e.g. endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"
## Log stream in log group
## Either the log group name or a reference to a metric attribute from which
## it can be parsed, tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream
## does not exist, it will be created. Since AWS does not automatically delete
## log streams with expired log entries (i.e. empty log streams), you need to
## put appropriate house-keeping in place (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"
## Source of log data - metric name
## Specify the name of the metric from which the log data should be
## retrieved. E.g., if you are using the docker_log plugin to stream logs from
## a container, then specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"
## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## E.g., if you are using the docker_log plugin to stream logs from a
## container, then specify log_data_source = "field:message"
log_data_source = "field:message"
```
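As a hedged illustration of how the `log_stream`, `log_data_metric_name`, and `log_data_source` options above fit together, the following sketch pairs the docker_log input with this output; the region, group name, and the assumption that docker_log emits a `container_name` tag and a `message` field are illustrative, not taken from this commit:

```toml
# Stream container logs collected by the docker_log input into CloudWatch Logs
# (illustrative values; adjust region, group, and stream source to your setup).
[[inputs.docker_log]]

[[outputs.cloudwatch_logs]]
  region = "us-east-1"                  # placeholder region
  log_group = "my-group-name"           # must already exist in CloudWatch Logs
  log_stream = "tag:container_name"     # assumes docker_log sets this tag
  log_data_metric_name = "docker_log"
  log_data_source = "field:message"     # assumes the log text is in this field
```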


@@ -12,7 +12,8 @@
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -30,29 +31,30 @@
## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default, e.g. endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"
## Log stream in log group
## Either the log group name or a reference to a metric attribute from which
## it can be parsed, tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream
## does not exist, it will be created. Since AWS does not automatically delete
## log streams with expired log entries (i.e. empty log streams), you need to
## put appropriate house-keeping in place (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"
## Source of log data - metric name
## Specify the name of the metric from which the log data should be
## retrieved. E.g., if you are using the docker_log plugin to stream logs from
## a container, then specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"
## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## E.g., if you are using the docker_log plugin to stream logs from a
## container, then specify log_data_source = "field:message"
log_data_source = "field:message"


@@ -41,12 +41,16 @@ to use them.
### Required parameters
Your Logz.io `token`, which can be found under "settings" in your account, is
required.
### Optional parameters
- `check_disk_space`: Set to true if the Logz.io sender should check the disk
  space before adding metrics to the disk queue.
- `disk_threshold`: If the queue_dir space crosses this threshold
  (in % of disk usage), the plugin will start dropping logs.
- `drain_duration`: Time to sleep between sending attempts.
- `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk
  in this location.
- `url`: Logz.io listener URL.
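Taken together, a hedged sketch of a configuration using only the parameters listed above could look like this; the token, listener URL, and buffer settings are illustrative placeholders:

```toml
# Illustrative Logz.io output configuration using the parameters documented above.
[[outputs.logzio]]
  token = "YOUR_LOGZIO_TOKEN"               # required, from your account settings
  url = "https://listener.logz.io:8071"     # placeholder listener URL
  drain_duration = "3s"                     # time to sleep between send attempts
  queue_dir = "/tmp/logzio-buffer"          # disk path for unsent metrics
  check_disk_space = true                   # check disk space before queueing
  disk_threshold = 98                       # drop logs above this disk usage (%)
```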


@@ -75,9 +75,12 @@ the `[output.opentelemetry.coralogix]` section.
There, you can find the required settings to interact with the server.
- The `private_key` is your Private Key, which you can find in
  `Settings > Send Your Data`.
- The `application` is your application name, which will be added to your
  metric attributes.
- The `subsystem` is your subsystem, which will be added to your metric
  attributes.
More information can be found on the
[Getting Started page](https://coralogix.com/docs/guide-first-steps-coralogix/).
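For orientation, a hedged sketch of the Coralogix-specific section described above might look like the following; the endpoint and all values are placeholders rather than settings taken from this commit:

```toml
# Illustrative Coralogix setup for the OpenTelemetry output.
[[outputs.opentelemetry]]
  service_address = "ingress.coralogix.com:443"   # placeholder endpoint

  [outputs.opentelemetry.coralogix]
    private_key = "YOUR_PRIVATE_KEY"    # from Settings > Send Your Data
    application = "my-application"      # added to your metric attributes
    subsystem = "my-subsystem"          # added to your metric attributes
```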
@@ -103,7 +106,5 @@ data is interpreted as:
Also see the [OpenTelemetry input plugin](../../inputs/opentelemetry/README.md).
[schema]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md
[implementation]: https://github.com/influxdata/influxdb-observability/tree/main/influx2otel
[repo]: https://github.com/influxdata/influxdb-observability


@@ -53,7 +53,7 @@ to use them.
## Non-standard parameters:
## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
## pool_min_conns (default: 0) - Minimum size of connection pool.
## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
# connection = ""
@@ -91,8 +91,9 @@ to use them.
# ]
## Templated statements to execute when adding columns to a table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped. Points containing fields for which there is no
## column will have the field omitted.
# add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
@@ -103,25 +104,26 @@ to use them.
# ]
## Templated statements to execute when adding columns to a tag table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped.
# tag_table_add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
## The postgres data type to use for storing unsigned 64-bit integer values
## (Postgres does not have a native unsigned 64-bit integer type).
## The value can be one of:
## numeric - Uses the PostgreSQL "numeric" data type.
## uint8 - Requires pguint extension (https://github.com/petere/pguint)
# uint64_type = "numeric"
## When using pool_max_conns > 1, and a temporary error occurs, the query is
## retried with an incremental backoff. This controls the maximum backoff duration.
# retry_max_backoff = "15s"
## Approximate number of tag IDs to store in in-memory cache (when using
## tags_as_foreign_keys). This is an optimization to skip inserting known
## tag IDs. Each entry consumes approximately 34 bytes of memory.
# tag_cache_size = 100000
## Enable & set the log level for the Postgres driver.


@@ -14,7 +14,7 @@
## Non-standard parameters:
## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
## pool_min_conns (default: 0) - Minimum size of connection pool.
## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
# connection = ""
@@ -52,8 +52,9 @@
# ]
## Templated statements to execute when adding columns to a table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped. Points containing fields for which there is no
## column will have the field omitted.
# add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
@@ -64,25 +65,26 @@
# ]
## Templated statements to execute when adding columns to a tag table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped.
# tag_table_add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
## The postgres data type to use for storing unsigned 64-bit integer values
## (Postgres does not have a native unsigned 64-bit integer type).
## The value can be one of:
## numeric - Uses the PostgreSQL "numeric" data type.
## uint8 - Requires pguint extension (https://github.com/petere/pguint)
# uint64_type = "numeric"
## When using pool_max_conns > 1, and a temporary error occurs, the query is
## retried with an incremental backoff. This controls the maximum backoff duration.
# retry_max_backoff = "15s"
## Approximate number of tag IDs to store in in-memory cache (when using
## tags_as_foreign_keys). This is an optimization to skip inserting known
## tag IDs. Each entry consumes approximately 34 bytes of memory.
# tag_cache_size = 100000
## Enable & set the log level for the Postgres driver.


@@ -7,8 +7,12 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service.
This plugin uses a credential chain for authentication with the Timestream
API endpoint. In the following order the plugin will attempt to authenticate.
1. Web identity provider credentials via STS if `role_arn` and
   `web_identity_token_file` are specified
1. [Assumed credentials via STS] if `role_arn` attribute is specified (source
   credentials are evaluated from subsequent rules). The `endpoint_url`
   attribute is used only for the Timestream service. When fetching
   credentials, the STS global endpoint will be used.
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables]
@@ -34,7 +38,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Amazon Credentials
## Credentials are loaded in the following order:
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -60,20 +65,20 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## The database must exist prior to starting Telegraf.
database_name = "yourDatabaseNameHere"
## Specifies if the plugin should describe the Timestream database upon
## starting to validate if it has access, necessary permissions, connection,
## etc., as a safety check. If the describe operation fails, the plugin will
## not start and therefore the Telegraf agent will not start.
describe_database_on_start = false
## Specifies how the data is organized in Timestream.
## Valid values are: single-table, multi-table.
## When mapping_mode is set to single-table, all of the data is stored in a
## single table. When mapping_mode is set to multi-table, the data is
## organized and stored in multiple tables. The default is multi-table.
mapping_mode = "multi-table"
## Specifies if the plugin should create the table, if it doesn't exist.
create_table_if_not_exists = true
## Specifies the Timestream table magnetic store retention period in days.
@@ -88,25 +93,25 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Specifies how the data is written into Timestream.
## Valid values are: true, false
## When use_multi_measure_records is set to true, all of the tags and fields
## are stored as a single row in a Timestream table.
## When use_multi_measure_records is set to false, Timestream stores each field
## in a separate table row, thereby storing the tags multiple times (once for
## each field). The recommended setting is true. The default is false.
use_multi_measure_records = "false"
## Specifies the measure_name to use when sending multi-measure records.
## NOTE: This property is valid when use_multi_measure_records=true and
## mapping_mode=multi-table
measure_name_for_multi_measure_records = "telegraf_measure"
## Specifies the name of the table to write data into
## NOTE: This property is valid when mapping_mode=single-table.
# single_table_name = ""
## Specifies the name of the dimension when all of the data is being stored in
## a single table and the measurement name is transformed into the dimension
## value (see Mapping data from Influx to Timestream for details)
## NOTE: This property is valid when mapping_mode=single-table.
# single_table_dimension_name_for_telegraf_measurement_name = "namespace"
@@ -118,9 +123,6 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Specify the maximum number of parallel go routines to ingest/write data
## If not specified, defaults to 1 go routine
max_write_go_routines = 25
```
### Unsigned Integers
@@ -185,8 +187,10 @@ data is written by default as follows:
For example, consider the following data in line protocol format:
```text
weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
airquality,location=us-west no2=5,pm25=16 1465839830100400200
```
where:
`weather` and `airquality` are the measurement names,
@@ -197,8 +201,10 @@ When you choose to create a separate table for each measurement and store
multiple fields in a single table row, the data will be written into
Timestream as:
1. The plugin will create 2 tables, namely, weather and airquality
   (mapping_mode=multi-table).
2. The tables may contain multiple fields in a single table row
   (use_multi_measure_records=true).
3. The table weather will contain the following columns and data:
| time | location | season | measure_name | temperature | humidity |


@@ -5,7 +5,8 @@
## Amazon Credentials
## Credentials are loaded in the following order:
## 1) Web identity provider credentials via STS if role_arn and
##    web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -31,20 +32,20 @@
## The database must exist prior to starting Telegraf.
database_name = "yourDatabaseNameHere"
## Specifies if the plugin should describe the Timestream database upon
## starting to validate if it has access, necessary permissions, connection,
## etc., as a safety check. If the describe operation fails, the plugin will
## not start and therefore the Telegraf agent will not start.
describe_database_on_start = false
## Specifies how the data is organized in Timestream.
## Valid values are: single-table, multi-table.
## When mapping_mode is set to single-table, all of the data is stored in a
## single table. When mapping_mode is set to multi-table, the data is
## organized and stored in multiple tables. The default is multi-table.
mapping_mode = "multi-table"
## Specifies if the plugin should create the table, if it doesn't exist.
create_table_if_not_exists = true
## Specifies the Timestream table magnetic store retention period in days.
@@ -59,25 +60,25 @@
## Specifies how the data is written into Timestream.
## Valid values are: true, false
## When use_multi_measure_records is set to true, all of the tags and fields
## are stored as a single row in a Timestream table.
## When use_multi_measure_records is set to false, Timestream stores each field
## in a separate table row, thereby storing the tags multiple times (once for
## each field). The recommended setting is true. The default is false.
use_multi_measure_records = "false"
## Specifies the measure_name to use when sending multi-measure records.
## NOTE: This property is valid when use_multi_measure_records=true and
## mapping_mode=multi-table
measure_name_for_multi_measure_records = "telegraf_measure"
## Specifies the name of the table to write data into
## NOTE: This property is valid when mapping_mode=single-table.
# single_table_name = ""
## Specifies the name of the dimension when all of the data is being stored in
## a single table and the measurement name is transformed into the dimension
## value (see Mapping data from Influx to Timestream for details)
## NOTE: This property is valid when mapping_mode=single-table.
# single_table_dimension_name_for_telegraf_measurement_name = "namespace"
@@ -89,6 +90,3 @@
## Specify the maximum number of parallel go routines to ingest/write data
## If not specified, defaults to 1 go routine
max_write_go_routines = 25


@@ -24,24 +24,26 @@ to use them.
```toml @sample.conf
[[outputs.wavefront]]
## URL for Wavefront API or Wavefront proxy instance
## Direct Ingestion via Wavefront API requires authentication. See below.
url = "https://metrics.wavefront.com"
## Maximum number of metrics to send per HTTP request. This value should be
## higher than the `metric_batch_size`. Values higher than 40,000 are not
## recommended.
# http_maximum_batch_size = 10000
## Prefix for metrics keys
# prefix = "my.specific.prefix."
## Use "value" for name of simple fields
# simple_fields = false
## Character to use between metric and field name
# metric_separator = "."
## Convert metric name paths to use metricSeparator character
## When true will convert all _ (underscore) characters in final metric name.
# convert_paths = true
## Use Strict rules to sanitize metric and tag names from invalid characters
@@ -49,26 +51,29 @@ to use them.
# use_strict = false
## Use Regex to sanitize metric and tag names from invalid characters
## Regex is more thorough, but significantly slower.
# use_regex = false
## Tags to use as the source name for Wavefront ("host" if none is found)
# source_override = ["hostname", "address", "agent_host", "node_host"]
## Convert boolean values to numeric values, with false -> 0.0 and true -> 1.0
# convert_bool = true
## Truncate metric tags to a total of 254 characters for the tag name value
## Wavefront will reject any data point exceeding this limit if not truncated
## Defaults to 'false' to provide backwards compatibility.
# truncate_tags = false
## Flush the internal buffers after each batch. This effectively bypasses the
## background sending of metrics normally done by the Wavefront SDK. This can
## be used if you are experiencing buffer overruns. The sending of metrics
## will block for a longer time, but this will be handled gracefully by
## internal buffering in Telegraf.
# immediate_flush = true
## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and
## dropped metrics
# send_internal_metrics = true
## Optional TLS Config
@@ -89,39 +94,38 @@ to use them.
## HTTP Timeout
# timeout="10s"
## MaxIdleConns controls the maximum number of idle (keep-alive) connections
## across all hosts. Zero means unlimited.
# max_idle_conn = 0
## MaxIdleConnsPerHost, if non-zero, controls the maximum idle (keep-alive)
## connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
# max_idle_conn_per_host = 2
## Idle (keep-alive) connection timeout
# idle_conn_timeout = 0
## Authentication for Direct Ingestion.
## Direct Ingestion requires one of: `token`, `auth_csp_api_token`, or
## `auth_csp_client_credentials`. See https://docs.wavefront.com/csp_getting_started.html
## to learn more about using CSP credentials with Wavefront.
## Not required if using a Wavefront proxy.
## Wavefront API Token Authentication, ignored if using a Wavefront proxy
## 1. Click the gear icon at the top right in the Wavefront UI.
## 2. Click your account name (usually your email)
## 3. Click *API access*.
# token = "YOUR_TOKEN"
## Base URL used for authentication, ignored if using a Wavefront proxy or a
## Wavefront API token.
# auth_csp_base_url=https://console.cloud.vmware.com
## CSP API Token Authentication, ignored if using a Wavefront proxy
# auth_csp_api_token=CSP_API_TOKEN_HERE
## CSP Client Credentials Authentication Information, ignored if using a
## Wavefront proxy.
## See also: https://docs.wavefront.com/csp_getting_started.html#whats-a-server-to-server-app
# [outputs.wavefront.auth_csp_client_credentials]
# app_id=CSP_APP_ID_HERE


@@ -1,22 +1,24 @@
[[outputs.wavefront]]
## URL for Wavefront API or Wavefront proxy instance
## Direct Ingestion via Wavefront API requires authentication. See below.
url = "https://metrics.wavefront.com"
## Maximum number of metrics to send per HTTP request. This value should be
## higher than the `metric_batch_size`. Values higher than 40,000 are not
## recommended.
# http_maximum_batch_size = 10000
## Prefix for metrics keys
# prefix = "my.specific.prefix."
## Use "value" for name of simple fields
# simple_fields = false
## Character to use between metric and field name
# metric_separator = "."
## Convert metric name paths to use metricSeparator character
## When true will convert all _ (underscore) characters in final metric name.
# convert_paths = true
## Use Strict rules to sanitize metric and tag names from invalid characters
@@ -24,26 +26,29 @@
# use_strict = false
## Use Regex to sanitize metric and tag names from invalid characters
## Regex is more thorough, but significantly slower.
# use_regex = false
## Tags to use as the source name for Wavefront ("host" if none is found)
# source_override = ["hostname", "address", "agent_host", "node_host"]
## Convert boolean values to numeric values, with false -> 0.0 and true -> 1.0
# convert_bool = true
## Truncate metric tags to a total of 254 characters for the tag name value
## Wavefront will reject any data point exceeding this limit if not truncated
## Defaults to 'false' to provide backwards compatibility.
# truncate_tags = false
## Flush the internal buffers after each batch. This effectively bypasses the
## background sending of metrics normally done by the Wavefront SDK. This can
## be used if you are experiencing buffer overruns. The sending of metrics
## will block for a longer time, but this will be handled gracefully by
## internal buffering in Telegraf.
# immediate_flush = true
## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and
## dropped metrics
# send_internal_metrics = true
## Optional TLS Config
@@ -64,39 +69,38 @@
## HTTP Timeout
# timeout="10s"
## MaxIdleConns controls the maximum number of idle (keep-alive) connections
## across all hosts. Zero means unlimited.
# max_idle_conn = 0
## MaxIdleConnsPerHost, if non-zero, controls the maximum idle (keep-alive)
## connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
# max_idle_conn_per_host = 2
## Idle (keep-alive) connection timeout
# idle_conn_timeout = 0
## Authentication for Direct Ingestion.
## Direct Ingestion requires one of: `token`, `auth_csp_api_token`, or
## `auth_csp_client_credentials`. See https://docs.wavefront.com/csp_getting_started.html
## to learn more about using CSP credentials with Wavefront.
## Not required if using a Wavefront proxy.
## Wavefront API Token Authentication, ignored if using a Wavefront proxy
## 1. Click the gear icon at the top right in the Wavefront UI.
## 2. Click your account name (usually your email)
## 3. Click *API access*.
# token = "YOUR_TOKEN"
## Base URL used for authentication, ignored if using a Wavefront proxy or a
## Wavefront API token.
# auth_csp_base_url=https://console.cloud.vmware.com
## CSP API Token Authentication, ignored if using a Wavefront proxy
# auth_csp_api_token=CSP_API_TOKEN_HERE
## CSP Client Credentials Authentication Information, ignored if using a
## Wavefront proxy.
## See also: https://docs.wavefront.com/csp_getting_started.html#whats-a-server-to-server-app
# [outputs.wavefront.auth_csp_client_credentials]
# app_id=CSP_APP_ID_HERE