chore(outputs): Fix line-length in READMEs (#16079)

Co-authored-by: Dane Strandboge <136023093+DStrand1@users.noreply.github.com>

parent 22b153ac65
commit 13d053f917

README.md
@@ -1,6 +1,9 @@
 # Telegraf
 
-[](https://godoc.org/github.com/influxdata/telegraf) [](https://hub.docker.com/_/telegraf/) [](https://goreportcard.com/report/github.com/influxdata/telegraf) [](https://circleci.com/gh/influxdata/telegraf)
+[](https://godoc.org/github.com/influxdata/telegraf)
+[](https://hub.docker.com/_/telegraf/)
+[](https://goreportcard.com/report/github.com/influxdata/telegraf)
+[](https://circleci.com/gh/influxdata/telegraf)
 
 Telegraf is an agent for collecting, processing, aggregating, and writing
 metrics, logs, and other arbitrary data.
@@ -76,13 +79,14 @@ Also, join us on our [Community Slack](https://influxdata.com/slack) or
 [Community Forums](https://community.influxdata.com/) if you have questions or
 comments for our engineering teams.
 
-If you are completely new to Telegraf and InfluxDB, you can also enroll for free at
-[InfluxDB university](https://www.influxdata.com/university/) to take courses to
-learn more.
+If you are completely new to Telegraf and InfluxDB, you can also enroll for free
+at [InfluxDB university](https://www.influxdata.com/university/) to take courses
+to learn more.
 
 ## ℹ️ Support
 
-[](https://www.influxdata.com/slack) [](https://community.influxdata.com/)
+[](https://www.influxdata.com/slack)
+[](https://community.influxdata.com/)
 
 Please use the [Community Slack](https://influxdata.com/slack) or
 [Community Forums](https://community.influxdata.com/) if you have questions or
@@ -46,7 +46,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -75,15 +76,17 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 ## Namespace for the CloudWatch MetricDatums
 namespace = "InfluxData/Telegraf"
 
-## If you have a large amount of metrics, you should consider to send statistic
-## values instead of raw metrics which could not only improve performance but
-## also save AWS API cost. If enable this flag, this plugin would parse the required
-## CloudWatch statistic fields (count, min, max, and sum) and send them to CloudWatch.
-## You could use basicstats aggregator to calculate those fields. If not all statistic
-## fields are available, all fields would still be sent as raw metrics.
+## If you have a large amount of metrics, you should consider sending
+## statistic values instead of raw metrics, which can not only improve
+## performance but also save AWS API cost. If you enable this flag, the plugin
+## parses the required CloudWatch statistic fields (count, min, max, and
+## sum) and sends them to CloudWatch. You can use the basicstats aggregator to
+## calculate those fields. If not all statistic fields are available, all
+## fields will still be sent as raw metrics.
 # write_statistics = false
 
-## Enable high resolution metrics of 1 second (if not enabled, standard resolution are of 60 seconds precision)
+## Enable high resolution metrics of 1 second (if not enabled, the standard
+## resolution of 60 seconds is used)
 # high_resolution_metrics = false
 ```
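The statistics comments in this hunk lean on the basicstats aggregator. As a minimal sketch of that pairing (the period and region below are illustrative values, not part of this diff), the aggregator can pre-compute the four statistic fields the plugin sends:

```toml
# Sketch only: illustrative pairing of basicstats with write_statistics.
[[aggregators.basicstats]]
  period = "30s"          # aggregation window (illustrative)
  drop_original = true    # forward only the aggregated values
  stats = ["count", "min", "max", "sum"]

[[outputs.cloudwatch]]
  region = "us-east-1"    # illustrative region
  namespace = "InfluxData/Telegraf"
  write_statistics = true
```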
@@ -5,7 +5,8 @@
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -34,13 +35,15 @@
 ## Namespace for the CloudWatch MetricDatums
 namespace = "InfluxData/Telegraf"
 
-## If you have a large amount of metrics, you should consider to send statistic
-## values instead of raw metrics which could not only improve performance but
-## also save AWS API cost. If enable this flag, this plugin would parse the required
-## CloudWatch statistic fields (count, min, max, and sum) and send them to CloudWatch.
-## You could use basicstats aggregator to calculate those fields. If not all statistic
-## fields are available, all fields would still be sent as raw metrics.
+## If you have a large amount of metrics, you should consider sending
+## statistic values instead of raw metrics, which can not only improve
+## performance but also save AWS API cost. If you enable this flag, the plugin
+## parses the required CloudWatch statistic fields (count, min, max, and
+## sum) and sends them to CloudWatch. You can use the basicstats aggregator to
+## calculate those fields. If not all statistic fields are available, all
+## fields will still be sent as raw metrics.
 # write_statistics = false
 
-## Enable high resolution metrics of 1 second (if not enabled, standard resolution are of 60 seconds precision)
+## Enable high resolution metrics of 1 second (if not enabled, the standard
+## resolution of 60 seconds is used)
 # high_resolution_metrics = false
@@ -7,9 +7,12 @@ This plugin will send logs to Amazon CloudWatch.
 This plugin uses a credential chain for Authentication with the CloudWatch Logs
 API endpoint. In the following order the plugin will attempt to authenticate.
 
-1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
-1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules).
-   The `endpoint_url` attribute is used only for Cloudwatch Logs service. When fetching credentials, STS global endpoint will be used.
+1. Web identity provider credentials via STS if `role_arn` and
+   `web_identity_token_file` are specified
+1. Assumed credentials via STS if `role_arn` attribute is specified (source
+   credentials are evaluated from subsequent rules). The `endpoint_url`
+   attribute is used only for Cloudwatch Logs service. When fetching
+   credentials, STS global endpoint will be used.
 1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
 1. Shared profile from `profile` attribute
 1. [Environment Variables][1]
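As a hedged sketch of the first rule in this chain, web-identity authentication pairs the two attributes named above; the role ARN and token file path below are placeholders, not values from this diff:

```toml
# Placeholder ARN and token path; rule 1 of the credential chain.
[[outputs.cloudwatch_logs]]
  region = "us-east-1"
  log_group = "my-group-name"
  log_stream = "tag:location"
  log_data_metric_name = "docker_log"
  log_data_source = "field:message"
  role_arn = "arn:aws:iam::123456789012:role/telegraf-writer"
  web_identity_token_file = "/var/run/secrets/tokens/telegraf-token"
```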
@@ -56,7 +59,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -74,30 +78,31 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 
 ## Endpoint to make request against, the correct endpoint is automatically
 ## determined and this option should only be set if you wish to override the
-## default.
-## ex: endpoint_url = "http://localhost:8000"
+## default, e.g. endpoint_url = "http://localhost:8000"
 # endpoint_url = ""
 
 ## Cloud watch log group. Must be created in AWS cloudwatch logs upfront!
-## For example, you can specify the name of the k8s cluster here to group logs from all cluster in oine place
+## For example, you can specify the name of the k8s cluster here to group logs
+## from all clusters in one place
 log_group = "my-group-name"
 
 ## Log stream in log group
-## Either log group name or reference to metric attribute, from which it can be parsed:
-## tag:<TAG_NAME> or field:<FIELD_NAME>. If log stream is not exist, it will be created.
-## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream)
-## you need to put in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
+## Either log group name or reference to metric attribute, from which it can
+## be parsed, tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream does not
+## exist, it will be created. Since AWS does not automatically delete log
+## streams with expired log entries (i.e. empty log streams), you need to put
+## in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
 log_stream = "tag:location"
 
 ## Source of log data - metric name
-## specify the name of the metric, from which the log data should be retrieved.
-## I.e., if you are using docker_log plugin to stream logs from container, then
-## specify log_data_metric_name = "docker_log"
+## specify the name of the metric, from which the log data should be
+## retrieved. E.g., if you are using the docker_log plugin to stream logs from
+## a container, then specify log_data_metric_name = "docker_log"
 log_data_metric_name = "docker_log"
 
 ## Specify from which metric attribute the log data should be retrieved:
 ## tag:<TAG_NAME> or field:<FIELD_NAME>.
-## I.e., if you are using docker_log plugin to stream logs from container, then
-## specify log_data_source = "field:message"
+## E.g., if you are using the docker_log plugin to stream logs from a
+## container, then specify log_data_source = "field:message"
 log_data_source = "field:message"
 ```
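Following the docker_log example that runs through these comments, a minimal end-to-end sketch might look like the following; the Docker endpoint and region are assumptions, not part of this diff:

```toml
# Sketch of the docker_log pairing described in the comments above.
[[inputs.docker_log]]
  endpoint = "unix:///var/run/docker.sock"  # assumed local Docker socket

[[outputs.cloudwatch_logs]]
  region = "us-east-1"                      # illustrative region
  log_group = "my-group-name"
  log_stream = "tag:location"
  log_data_metric_name = "docker_log"       # metric produced by docker_log
  log_data_source = "field:message"         # log text lives in the message field
```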
@@ -12,7 +12,8 @@
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -30,29 +31,30 @@
 
 ## Endpoint to make request against, the correct endpoint is automatically
 ## determined and this option should only be set if you wish to override the
-## default.
-## ex: endpoint_url = "http://localhost:8000"
+## default, e.g. endpoint_url = "http://localhost:8000"
 # endpoint_url = ""
 
 ## Cloud watch log group. Must be created in AWS cloudwatch logs upfront!
-## For example, you can specify the name of the k8s cluster here to group logs from all cluster in oine place
+## For example, you can specify the name of the k8s cluster here to group logs
+## from all clusters in one place
 log_group = "my-group-name"
 
 ## Log stream in log group
-## Either log group name or reference to metric attribute, from which it can be parsed:
-## tag:<TAG_NAME> or field:<FIELD_NAME>. If log stream is not exist, it will be created.
-## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream)
-## you need to put in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
+## Either log group name or reference to metric attribute, from which it can
+## be parsed, tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream does not
+## exist, it will be created. Since AWS does not automatically delete log
+## streams with expired log entries (i.e. empty log streams), you need to put
+## in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
 log_stream = "tag:location"
 
 ## Source of log data - metric name
-## specify the name of the metric, from which the log data should be retrieved.
-## I.e., if you are using docker_log plugin to stream logs from container, then
-## specify log_data_metric_name = "docker_log"
+## specify the name of the metric, from which the log data should be
+## retrieved. E.g., if you are using the docker_log plugin to stream logs from
+## a container, then specify log_data_metric_name = "docker_log"
 log_data_metric_name = "docker_log"
 
 ## Specify from which metric attribute the log data should be retrieved:
 ## tag:<TAG_NAME> or field:<FIELD_NAME>.
-## I.e., if you are using docker_log plugin to stream logs from container, then
-## specify log_data_source = "field:message"
+## E.g., if you are using the docker_log plugin to stream logs from a
+## container, then specify log_data_source = "field:message"
 log_data_source = "field:message"
@@ -41,12 +41,16 @@ to use them.
 
 ### Required parameters
 
-* `token`: Your Logz.io token, which can be found under "settings" in your account.
+Your Logz.io `token`, which can be found under "settings" in your account, is
+required.
 
 ### Optional parameters
 
-* `check_disk_space`: Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
-* `disk_threshold`: If the queue_dir space crosses this threshold (in % of disk usage), the plugin will start dropping logs.
-* `drain_duration`: Time to sleep between sending attempts.
-* `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk in this location.
-* `url`: Logz.io listener URL.
+- `check_disk_space`: Set to true if Logz.io sender checks the disk space before
+  adding metrics to the disk queue.
+- `disk_threshold`: If the queue_dir space crosses this threshold
+  (in % of disk usage), the plugin will start dropping logs.
+- `drain_duration`: Time to sleep between sending attempts.
+- `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk
+  in this location.
+- `url`: Logz.io listener URL.
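Put together, a hedged sketch of how those parameters map onto the plugin configuration; the token, URL, and values below are placeholders:

```toml
# Placeholder values; the token comes from your Logz.io account settings.
[[outputs.logzio]]
  token = "your-logz.io-token"              # required
  url = "https://listener.logz.io:8071"     # optional listener URL
  check_disk_space = true                   # guard the disk queue
  disk_threshold = 98                       # % of disk usage before dropping
  drain_duration = "3s"                     # sleep between sending attempts
  queue_dir = "/tmp/logzio-buffer"          # where unsent metrics are kept
```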
@@ -75,9 +75,12 @@ the `[output.opentelemetry.coralogix]` section.
 
 There, you can find the required setting to interact with the server.
 
-- The `private_key` is your Private Key, which you can find in Settings > Send Your Data.
-- The `application`, is your application name, which will be added to your metric attributes.
-- The `subsystem`, is your subsystem, which will be added to your metric attributes.
+- The `private_key` is your Private Key, which you can find in
+  `Settings > Send Your Data`.
+- The `application` is your application name, which will be added to your
+  `metric attributes`.
+- The `subsystem` is your subsystem, which will be added to your metric
+  attributes.
 
 More information in the
 [Getting Started page](https://coralogix.com/docs/guide-first-steps-coralogix/).
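A hedged sketch of the section those three settings live in; the endpoint and values are placeholders, not part of this diff:

```toml
# Placeholder values; private_key comes from Settings > Send Your Data.
[[outputs.opentelemetry]]
  service_address = "ingress.coralogix.com:443"  # assumed Coralogix endpoint

  [outputs.opentelemetry.coralogix]
    private_key = "your-private-key"
    application = "your-application-name"   # added to metric attributes
    subsystem = "your-subsystem-name"       # added to metric attributes
```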
@@ -103,7 +106,5 @@ data is interpreted as:
 Also see the [OpenTelemetry input plugin](../../inputs/opentelemetry/README.md).
 
 [schema]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md
-
 [implementation]: https://github.com/influxdata/influxdb-observability/tree/main/influx2otel
-
 [repo]: https://github.com/influxdata/influxdb-observability
@@ -53,7 +53,7 @@ to use them.
 ## Non-standard parameters:
 ## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
 ## pool_min_conns (default: 0) - Minimum size of connection pool.
-## pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
+## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
 ## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
 ## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
 # connection = ""
@@ -91,8 +91,9 @@ to use them.
 # ]
 
 ## Templated statements to execute when adding columns to a table.
-## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
-## containing fields for which there is no column will have the field omitted.
+## Set to an empty list to disable. Points containing tags for which there is
+## no column will be skipped. Points containing fields for which there is no
+## column will have the field omitted.
 # add_column_templates = [
 # '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
 # ]
@@ -103,25 +104,26 @@ to use them.
 # ]
 
 ## Templated statements to execute when adding columns to a tag table.
-## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
+## Set to an empty list to disable. Points containing tags for which there is
+## no column will be skipped.
 # tag_table_add_column_templates = [
 # '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
 # ]
 
-## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
-## unsigned 64-bit integer type).
+## The postgres data type to use for storing unsigned 64-bit integer values
+## (Postgres does not have a native unsigned 64-bit integer type).
 ## The value can be one of:
 ## numeric - Uses the PostgreSQL "numeric" data type.
 ## uint8 - Requires pguint extension (https://github.com/petere/pguint)
 # uint64_type = "numeric"
 
-## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
-## controls the maximum backoff duration.
+## When using pool_max_conns > 1, and a temporary error occurs, the query is
+## retried with an incremental backoff. This controls the maximum duration.
 # retry_max_backoff = "15s"
 
-## Approximate number of tag IDs to store in in-memory cache (when using tags_as_foreign_keys).
-## This is an optimization to skip inserting known tag IDs.
-## Each entry consumes approximately 34 bytes of memory.
+## Approximate number of tag IDs to store in in-memory cache (when using
+## tags_as_foreign_keys). This is an optimization to skip inserting known
+## tag IDs. Each entry consumes approximately 34 bytes of memory.
 # tag_cache_size = 100000
 
 ## Enable & set the log level for the Postgres driver.
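The pool_* parameters above appear to travel inside the pgx connection string rather than as separate TOML keys; a minimal sketch under that assumption, with placeholder host, user, and sizes:

```toml
# Sketch only; connection details are placeholders.
[[outputs.postgresql]]
  connection = "host=localhost user=postgres dbname=telegraf pool_max_conns=4"
  uint64_type = "numeric"        # portable default described above
  tags_as_foreign_keys = true    # enables the tag ID cache
  tag_cache_size = 100000
```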
@@ -14,7 +14,7 @@
 ## Non-standard parameters:
 ## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
 ## pool_min_conns (default: 0) - Minimum size of connection pool.
-## pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
+## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
 ## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
 ## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
 # connection = ""
@@ -52,8 +52,9 @@
 # ]
 
 ## Templated statements to execute when adding columns to a table.
-## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
-## containing fields for which there is no column will have the field omitted.
+## Set to an empty list to disable. Points containing tags for which there is
+## no column will be skipped. Points containing fields for which there is no
+## column will have the field omitted.
 # add_column_templates = [
 # '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
 # ]
@@ -64,25 +65,26 @@
 # ]
 
 ## Templated statements to execute when adding columns to a tag table.
-## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
+## Set to an empty list to disable. Points containing tags for which there is
+## no column will be skipped.
 # tag_table_add_column_templates = [
 # '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
 # ]
 
-## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
-## unsigned 64-bit integer type).
+## The postgres data type to use for storing unsigned 64-bit integer values
+## (Postgres does not have a native unsigned 64-bit integer type).
 ## The value can be one of:
 ## numeric - Uses the PostgreSQL "numeric" data type.
 ## uint8 - Requires pguint extension (https://github.com/petere/pguint)
 # uint64_type = "numeric"
 
-## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
-## controls the maximum backoff duration.
+## When using pool_max_conns > 1, and a temporary error occurs, the query is
+## retried with an incremental backoff. This controls the maximum duration.
 # retry_max_backoff = "15s"
 
-## Approximate number of tag IDs to store in in-memory cache (when using tags_as_foreign_keys).
-## This is an optimization to skip inserting known tag IDs.
-## Each entry consumes approximately 34 bytes of memory.
+## Approximate number of tag IDs to store in in-memory cache (when using
+## tags_as_foreign_keys). This is an optimization to skip inserting known
+## tag IDs. Each entry consumes approximately 34 bytes of memory.
 # tag_cache_size = 100000
 
 ## Enable & set the log level for the Postgres driver.
@@ -7,8 +7,12 @@ The Timestream output plugin writes metrics to the [Amazon Timestream] service.
 This plugin uses a credential chain for Authentication with Timestream
 API endpoint. In the following order the plugin will attempt to authenticate.
 
-1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
-1. [Assumed credentials via STS] if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules). The `endpoint_url` attribute is used only for Timestream service. When fetching credentials, STS global endpoint will be used.
+1. Web identity provider credentials via STS if `role_arn` and
+   `web_identity_token_file` are specified
+1. [Assumed credentials via STS] if `role_arn` attribute is specified (source
+   credentials are evaluated from subsequent rules). The `endpoint_url` attribute
+   is used only for Timestream service. When fetching credentials, STS global
+   endpoint will be used.
 1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
 1. Shared profile from `profile` attribute
 1. [Environment Variables]
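For the second rule, a hedged sketch of assumed-role authentication; the ARN is a placeholder, and source credentials for the assume-role call come from the later rules in the chain:

```toml
# Placeholder ARN; rule 2 of the credential chain (assumed role via STS).
[[outputs.timestream]]
  region = "us-east-1"
  database_name = "yourDatabaseNameHere"
  role_arn = "arn:aws:iam::123456789012:role/timestream-writer"
```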
@@ -34,7 +38,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order:
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -60,20 +65,20 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 ## The database must exist prior to starting Telegraf.
 database_name = "yourDatabaseNameHere"
 
-## Specifies if the plugin should describe the Timestream database upon starting
-## to validate if it has access necessary permissions, connection, etc., as a safety check.
-## If the describe operation fails, the plugin will not start
-## and therefore the Telegraf agent will not start.
+## Specifies if the plugin should describe the Timestream database upon
+## starting to validate if it has access, necessary permissions, connection,
+## etc., as a safety check. If the describe operation fails, the plugin will
+## not start and therefore the Telegraf agent will not start.
 describe_database_on_start = false
 
 ## Specifies how the data is organized in Timestream.
 ## Valid values are: single-table, multi-table.
-## When mapping_mode is set to single-table, all of the data is stored in a single table.
-## When mapping_mode is set to multi-table, the data is organized and stored in multiple tables.
-## The default is multi-table.
+## When mapping_mode is set to single-table, all of the data is stored in a
+## single table. When mapping_mode is set to multi-table, the data is
+## organized and stored in multiple tables. The default is multi-table.
 mapping_mode = "multi-table"
 
-## Specifies if the plugin should create the table, if the table does not exist.
+## Specifies if the plugin should create the table, if it doesn't exist.
 create_table_if_not_exists = true
 
 ## Specifies the Timestream table magnetic store retention period in days.
@@ -88,25 +93,25 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 
 ## Specifies how the data is written into Timestream.
 ## Valid values are: true, false
-## When use_multi_measure_records is set to true, all of the tags and fields are stored
-## as a single row in a Timestream table.
-## When use_multi_measure_record is set to false, Timestream stores each field in a
-## separate table row, thereby storing the tags multiple times (once for each field).
-## The recommended setting is true.
-## The default is false.
+## When use_multi_measure_records is set to true, all of the tags and fields
+## are stored as a single row in a Timestream table.
+## When use_multi_measure_records is set to false, Timestream stores each field
+## in a separate table row, thereby storing the tags multiple times (once for
+## each field). The recommended setting is true. The default is false.
 use_multi_measure_records = "false"
 
 ## Specifies the measure_name to use when sending multi-measure records.
-## NOTE: This property is valid when use_multi_measure_records=true and mapping_mode=multi-table
+## NOTE: This property is valid when use_multi_measure_records=true and
+## mapping_mode=multi-table
 measure_name_for_multi_measure_records = "telegraf_measure"
 
 ## Specifies the name of the table to write data into
 ## NOTE: This property is valid when mapping_mode=single-table.
 # single_table_name = ""
 
-## Specifies the name of dimension when all of the data is being stored in a single table
-## and the measurement name is transformed into the dimension value
-## (see Mapping data from Influx to Timestream for details)
+## Specifies the name of dimension when all of the data is being stored in a
+## single table and the measurement name is transformed into the dimension
+## value (see Mapping data from Influx to Timestream for details)
 ## NOTE: This property is valid when mapping_mode=single-table.
 # single_table_dimension_name_for_telegraf_measurement_name = "namespace"
@@ -118,9 +123,6 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
 ## Specify the maximum number of parallel go routines to ingest/write data
 ## If not specified, defaulted to 1 go routines
 max_write_go_routines = 25
-
-## Please see README.md to know how line protocol data is mapped to Timestream
-##
 ```
 
 ### Unsigned Integers
@@ -178,15 +180,17 @@ go test -v ./plugins/outputs/timestream/...
 When writing data from Influx to Timestream,
 data is written by default as follows:
 
 1. The timestamp is written as the time field.
 2. Tags are written as dimensions.
 3. Fields are written as measures.
 4. Measurements are written as table names.
 
 For example, consider the following data in line protocol format:
 
-> weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
-> airquality,location=us-west no2=5,pm25=16 1465839830100400200
+```text
+weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
+airquality,location=us-west no2=5,pm25=16 1465839830100400200
+```
 
 where:
 `weather` and `airquality` are the measurement names,
@@ -197,23 +201,25 @@ When you choose to create a separate table for each measurement and store
 multiple fields in a single table row, the data will be written into
 Timestream as:
 
-1. The plugin will create 2 tables, namely, weather and airquality (mapping_mode=multi-table).
-2. The tables may contain multiple fields in a single table row (use_multi_measure_records=true).
+1. The plugin will create 2 tables, namely, weather and airquality
+   (mapping_mode=multi-table).
+2. The tables may contain multiple fields in a single table row
+   (use_multi_measure_records=true).
 3. The table weather will contain the following columns and data:
 
 | time | location | season | measure_name | temperature | humidity |
 | :--- | :--- | :--- | :--- | :--- | :--- |
 | 2016-06-13 17:43:50 | us-midwest | summer | `<measure_name_for_multi_measure_records>` | 82 | 71 |
 
 4. The table airquality will contain the following columns and data:
 
 | time | location | measure_name | no2 | pm25 |
 | :--- | :--- | :--- | :--- | :--- |
 | 2016-06-13 17:43:50 | us-west | `<measure_name_for_multi_measure_records>` | 5 | 16 |
 
 NOTE:
 `<measure_name_for_multi_measure_records>` represents the actual
 value of that property.
 
 You can also choose to create a separate table per measurement and store
 each field in a separate row per table. In that case:
@@ -5,7 +5,8 @@
 
 ## Amazon Credentials
 ## Credentials are loaded in the following order:
-## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+## 1) Web identity provider credentials via STS if role_arn and
+## web_identity_token_file are specified
 ## 2) Assumed credentials via STS if role_arn is specified
 ## 3) explicit credentials from 'access_key' and 'secret_key'
 ## 4) shared profile from 'profile'
@@ -31,20 +32,20 @@
 ## The database must exist prior to starting Telegraf.
 database_name = "yourDatabaseNameHere"
 
-## Specifies if the plugin should describe the Timestream database upon starting
-## to validate if it has access necessary permissions, connection, etc., as a safety check.
-## If the describe operation fails, the plugin will not start
-## and therefore the Telegraf agent will not start.
+## Specifies if the plugin should describe the Timestream database upon
+## starting to validate if it has access, necessary permissions, connection,
+## etc., as a safety check. If the describe operation fails, the plugin will
+## not start and therefore the Telegraf agent will not start.
 describe_database_on_start = false
 
 ## Specifies how the data is organized in Timestream.
 ## Valid values are: single-table, multi-table.
-## When mapping_mode is set to single-table, all of the data is stored in a single table.
-## When mapping_mode is set to multi-table, the data is organized and stored in multiple tables.
-## The default is multi-table.
+## When mapping_mode is set to single-table, all of the data is stored in a
+## single table. When mapping_mode is set to multi-table, the data is
+## organized and stored in multiple tables. The default is multi-table.
 mapping_mode = "multi-table"
 
-## Specifies if the plugin should create the table, if the table does not exist.
+## Specifies if the plugin should create the table, if it doesn't exist.
 create_table_if_not_exists = true
 
 ## Specifies the Timestream table magnetic store retention period in days.
@@ -59,25 +60,25 @@
 
 ## Specifies how the data is written into Timestream.
 ## Valid values are: true, false
-## When use_multi_measure_records is set to true, all of the tags and fields are stored
-## as a single row in a Timestream table.
-## When use_multi_measure_record is set to false, Timestream stores each field in a
-## separate table row, thereby storing the tags multiple times (once for each field).
-## The recommended setting is true.
-## The default is false.
+## When use_multi_measure_records is set to true, all of the tags and fields
+## are stored as a single row in a Timestream table.
+## When use_multi_measure_records is set to false, Timestream stores each field
+## in a separate table row, thereby storing the tags multiple times (once for
+## each field). The recommended setting is true. The default is false.
 use_multi_measure_records = "false"
 
 ## Specifies the measure_name to use when sending multi-measure records.
-## NOTE: This property is valid when use_multi_measure_records=true and mapping_mode=multi-table
+## NOTE: This property is valid when use_multi_measure_records=true and
+## mapping_mode=multi-table
 measure_name_for_multi_measure_records = "telegraf_measure"
 
 ## Specifies the name of the table to write data into
 ## NOTE: This property is valid when mapping_mode=single-table.
 # single_table_name = ""
 
-## Specifies the name of dimension when all of the data is being stored in a single table
-## and the measurement name is transformed into the dimension value
-## (see Mapping data from Influx to Timestream for details)
+## Specifies the name of dimension when all of the data is being stored in a
+## single table and the measurement name is transformed into the dimension
+## value (see Mapping data from Influx to Timestream for details)
 ## NOTE: This property is valid when mapping_mode=single-table.
 # single_table_dimension_name_for_telegraf_measurement_name = "namespace"
@@ -89,6 +90,3 @@
 ## Specify the maximum number of parallel go routines to ingest/write data
 ## If not specified, defaulted to 1 go routines
 max_write_go_routines = 25
-
-## Please see README.md to know how line protocol data is mapped to Timestream
-##
@@ -24,24 +24,26 @@ to use them.
 
 ```toml @sample.conf
 [[outputs.wavefront]]
-## Url for Wavefront API or Wavefront proxy instance.
+## URL for Wavefront API or Wavefront proxy instance
 ## Direct Ingestion via Wavefront API requires authentication. See below.
 url = "https://metrics.wavefront.com"
 
-## Maximum number of metrics to send per HTTP request. This value should be higher than the `metric_batch_size`. Default is 10,000. Values higher than 40,000 are not recommended.
+## Maximum number of metrics to send per HTTP request. This value should be
+## higher than the `metric_batch_size`. Values higher than 40,000 are not
+## recommended.
 # http_maximum_batch_size = 10000
 
-## prefix for metrics keys
+## Prefix for metrics keys
 # prefix = "my.specific.prefix."
 
-## whether to use "value" for name of simple fields. default is false
+## Use "value" for name of simple fields
 # simple_fields = false
 
-## character to use between metric and field name. default is . (dot)
+## Character to use between metric and field name
 # metric_separator = "."
 
 ## Convert metric name paths to use metricSeparator character
-## When true will convert all _ (underscore) characters in final metric name. default is true
+## When true will convert all _ (underscore) characters in final metric name.
 # convert_paths = true
 
 ## Use Strict rules to sanitize metric and tag names from invalid characters
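Since http_maximum_batch_size should sit above the agent's metric_batch_size, a hedged sketch of the two together; the values are illustrative, not part of this diff:

```toml
# Illustrative values: agent batches of 1000 stay well below the
# 10000-metric ceiling for a single Wavefront HTTP request.
[agent]
  metric_batch_size = 1000

[[outputs.wavefront]]
  url = "https://metrics.wavefront.com"
  http_maximum_batch_size = 10000
```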
@@ -49,26 +51,29 @@ to use them.
 # use_strict = false
 
 ## Use Regex to sanitize metric and tag names from invalid characters
-## Regex is more thorough, but significantly slower. default is false
+## Regex is more thorough, but significantly slower.
 # use_regex = false
 
-## point tags to use as the source name for Wavefront (if none found, host will be used)
+## Tags to use as the source name for Wavefront ("host" if none is found)
 # source_override = ["hostname", "address", "agent_host", "node_host"]
 
-## whether to convert boolean values to numeric values, with false -> 0.0 and true -> 1.0. default is true
+## Convert boolean values to numeric values, with false -> 0.0 and true -> 1.0
 # convert_bool = true
 
-## Truncate metric tags to a total of 254 characters for the tag name value. Wavefront will reject any
-## data point exceeding this limit if not truncated. Defaults to 'false' to provide backwards compatibility.
+## Truncate metric tags to a total of 254 characters for the tag name value.
+## Wavefront will reject any data point exceeding this limit if not truncated.
+## Defaults to 'false' to provide backwards compatibility.
 # truncate_tags = false
 
-## Flush the internal buffers after each batch. This effectively bypasses the background sending of metrics
-## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending
-## of metrics will block for a longer time, but this will be handled gracefully by the internal buffering in
-## Telegraf.
+## Flush the internal buffers after each batch. This effectively bypasses the
+## background sending of metrics normally done by the Wavefront SDK. This can
+## be used if you are experiencing buffer overruns. The sending of metrics
+## will block for a longer time, but this will be handled gracefully by
+## internal buffering in Telegraf.
 # immediate_flush = true
 
-## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and dropped metrics. default is true.
+## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and
+## dropped metrics
 # send_internal_metrics = true
 
 ## Optional TLS Config
@@ -89,39 +94,38 @@ to use them.
 ## HTTP Timeout
 # timeout="10s"
 
-## MaxIdleConns controls the maximum number of idle (keep-alive)
-## connections across all hosts. Zero means no limit.
+## MaxIdleConns controls the maximum number of idle (keep-alive) connections
+## across all hosts. Zero means unlimited.
 # max_idle_conn = 0
 
-## MaxIdleConnsPerHost, if non-zero, controls the maximum idle
-## (keep-alive) connections to keep per-host. If zero,
-## DefaultMaxIdleConnsPerHost is used(2).
+## MaxIdleConnsPerHost, if non-zero, controls the maximum idle (keep-alive)
+## connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
 # max_idle_conn_per_host = 2
 
-## Idle (keep-alive) connection timeout.
-## Maximum amount of time before idle connection is closed.
-## Zero means no limit.
+## Idle (keep-alive) connection timeout
 # idle_conn_timeout = 0
 
 ## Authentication for Direct Ingestion.
-## Direct Ingestion requires one of: `token`,`auth_csp_api_token`, or `auth_csp_client_credentials`
-## See https://docs.wavefront.com/csp_getting_started.html to learn more about using CSP credentials with Wavefront.
+## Direct Ingestion requires one of: `token`, `auth_csp_api_token`, or
+## `auth_csp_client_credentials` (see https://docs.wavefront.com/csp_getting_started.html)
+## to learn more about using CSP credentials with Wavefront.
 ## Not required if using a Wavefront proxy.
 
-## Wavefront API Token Authentication. Ignored if using a Wavefront proxy.
+## Wavefront API Token Authentication, ignored if using a Wavefront proxy
 ## 1. Click the gear icon at the top right in the Wavefront UI.
 ## 2. Click your account name (usually your email)
 ## 3. Click *API access*.
 # token = "YOUR_TOKEN"
 
-## Optional. defaults to "https://console.cloud.vmware.com/"
-## Ignored if using a Wavefront proxy or a Wavefront API token.
+## Base URL used for authentication, ignored if using a Wavefront proxy or a
+## Wavefront API token.
 # auth_csp_base_url=https://console.cloud.vmware.com
 
-## CSP API Token Authentication for Wavefront. Ignored if using a Wavefront proxy.
+## CSP API Token Authentication, ignored if using a Wavefront proxy
 # auth_csp_api_token=CSP_API_TOKEN_HERE
 
-## CSP Client Credentials Authentication Information for Wavefront. Ignored if using a Wavefront proxy.
+## CSP Client Credentials Authentication Information, ignored if using a
+## Wavefront proxy.
 ## See also: https://docs.wavefront.com/csp_getting_started.html#whats-a-server-to-server-app
 # [outputs.wavefront.auth_csp_client_credentials]
 # app_id=CSP_APP_ID_HERE
@@ -1,22 +1,24 @@
 [[outputs.wavefront]]
-## Url for Wavefront API or Wavefront proxy instance.
+## URL for Wavefront API or Wavefront proxy instance
 ## Direct Ingestion via Wavefront API requires authentication. See below.
 url = "https://metrics.wavefront.com"
 
-## Maximum number of metrics to send per HTTP request. This value should be higher than the `metric_batch_size`. Default is 10,000. Values higher than 40,000 are not recommended.
+## Maximum number of metrics to send per HTTP request. This value should be
+## higher than the `metric_batch_size`. Values higher than 40,000 are not
+## recommended.
 # http_maximum_batch_size = 10000
 
-## prefix for metrics keys
+## Prefix for metrics keys
 # prefix = "my.specific.prefix."
 
-## whether to use "value" for name of simple fields. default is false
+## Use "value" for name of simple fields
 # simple_fields = false
 
-## character to use between metric and field name. default is . (dot)
+## Character to use between metric and field name
 # metric_separator = "."
 
 ## Convert metric name paths to use metricSeparator character
-## When true will convert all _ (underscore) characters in final metric name. default is true
+## When true will convert all _ (underscore) characters in final metric name.
 # convert_paths = true
 
 ## Use Strict rules to sanitize metric and tag names from invalid characters
@@ -24,26 +26,29 @@
 # use_strict = false
 
 ## Use Regex to sanitize metric and tag names from invalid characters
-## Regex is more thorough, but significantly slower. default is false
+## Regex is more thorough, but significantly slower.
 # use_regex = false
 
-## point tags to use as the source name for Wavefront (if none found, host will be used)
+## Tags to use as the source name for Wavefront ("host" if none is found)
 # source_override = ["hostname", "address", "agent_host", "node_host"]
 
-## whether to convert boolean values to numeric values, with false -> 0.0 and true -> 1.0. default is true
+## Convert boolean values to numeric values, with false -> 0.0 and true -> 1.0
 # convert_bool = true
 
-## Truncate metric tags to a total of 254 characters for the tag name value. Wavefront will reject any
-## data point exceeding this limit if not truncated. Defaults to 'false' to provide backwards compatibility.
+## Truncate metric tags to a total of 254 characters for the tag name value.
+## Wavefront will reject any data point exceeding this limit if not truncated.
+## Defaults to 'false' to provide backwards compatibility.
 # truncate_tags = false
 
-## Flush the internal buffers after each batch. This effectively bypasses the background sending of metrics
-## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending
-## of metrics will block for a longer time, but this will be handled gracefully by the internal buffering in
-## Telegraf.
+## Flush the internal buffers after each batch. This effectively bypasses the
+## background sending of metrics normally done by the Wavefront SDK. This can
+## be used if you are experiencing buffer overruns. The sending of metrics
+## will block for a longer time, but this will be handled gracefully by
+## internal buffering in Telegraf.
 # immediate_flush = true
 
-## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and dropped metrics. default is true.
+## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and
+## dropped metrics
 # send_internal_metrics = true
 
 ## Optional TLS Config
@@ -64,39 +69,38 @@
 ## HTTP Timeout
 # timeout="10s"
 
-## MaxIdleConns controls the maximum number of idle (keep-alive)
-## connections across all hosts. Zero means no limit.
+## MaxIdleConns controls the maximum number of idle (keep-alive) connections
+## across all hosts. Zero means unlimited.
 # max_idle_conn = 0
 
-## MaxIdleConnsPerHost, if non-zero, controls the maximum idle
-## (keep-alive) connections to keep per-host. If zero,
-## DefaultMaxIdleConnsPerHost is used(2).
+## MaxIdleConnsPerHost, if non-zero, controls the maximum idle (keep-alive)
+## connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
 # max_idle_conn_per_host = 2
 
-## Idle (keep-alive) connection timeout.
-## Maximum amount of time before idle connection is closed.
-## Zero means no limit.
+## Idle (keep-alive) connection timeout
 # idle_conn_timeout = 0
 
 ## Authentication for Direct Ingestion.
-## Direct Ingestion requires one of: `token`,`auth_csp_api_token`, or `auth_csp_client_credentials`
-## See https://docs.wavefront.com/csp_getting_started.html to learn more about using CSP credentials with Wavefront.
+## Direct Ingestion requires one of: `token`, `auth_csp_api_token`, or
+## `auth_csp_client_credentials` (see https://docs.wavefront.com/csp_getting_started.html)
+## to learn more about using CSP credentials with Wavefront.
 ## Not required if using a Wavefront proxy.
 
-## Wavefront API Token Authentication. Ignored if using a Wavefront proxy.
+## Wavefront API Token Authentication, ignored if using a Wavefront proxy
 ## 1. Click the gear icon at the top right in the Wavefront UI.
 ## 2. Click your account name (usually your email)
 ## 3. Click *API access*.
 # token = "YOUR_TOKEN"
 
-## Optional. defaults to "https://console.cloud.vmware.com/"
-## Ignored if using a Wavefront proxy or a Wavefront API token.
+## Base URL used for authentication, ignored if using a Wavefront proxy or a
+## Wavefront API token.
 # auth_csp_base_url=https://console.cloud.vmware.com
 
-## CSP API Token Authentication for Wavefront. Ignored if using a Wavefront proxy.
+## CSP API Token Authentication, ignored if using a Wavefront proxy
 # auth_csp_api_token=CSP_API_TOKEN_HERE
 
-## CSP Client Credentials Authentication Information for Wavefront. Ignored if using a Wavefront proxy.
+## CSP Client Credentials Authentication Information, ignored if using a
+## Wavefront proxy.
 ## See also: https://docs.wavefront.com/csp_getting_started.html#whats-a-server-to-server-app
 # [outputs.wavefront.auth_csp_client_credentials]
 # app_id=CSP_APP_ID_HERE