chore: clean up all markdown lint errors in output plugins (#10159)
This commit is contained in: parent c172df21a4, commit 0d8d118319
@ -5,10 +5,12 @@ This plugin writes to an AMQP 0-9-1 Exchange, a prominent implementation of this
This plugin does not bind the exchange to a queue.

For an introduction to AMQP see:

- https://www.rabbitmq.com/tutorials/amqp-concepts.html
- https://www.rabbitmq.com/getstarted.html

### Configuration:

- [amqp: concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)

## Configuration

```toml
# Publishes metrics to an AMQP broker
[[outputs.amqp]]
@ -107,7 +109,7 @@ For an introduction to AMQP see:
# data_format = "influx"
```

#### Routing

### Routing

If `routing_tag` is set, and the tag is defined on the metric, the value of
the tag is used as the routing key. Otherwise the value of `routing_key` is
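A minimal sketch of how these two options might be combined (both option names appear in the text above; the values are illustrative):

```toml
[[outputs.amqp]]
  ## Static fallback routing key, used when the routing tag is unset or missing
  # routing_key = "telegraf"
  ## Metric tag whose value, when present, overrides routing_key
  # routing_tag = "host"
```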
@ -2,7 +2,8 @@
This plugin writes telegraf metrics to [Azure Application Insights](https://azure.microsoft.com/en-us/services/application-insights/).

### Configuration:

## Configuration

```toml
[[outputs.application_insights]]
## Instrumentation key of the Application Insights resource.
@ -26,21 +27,21 @@ This plugin writes telegraf metrics to [Azure Application Insights](https://azur
# "ai.cloud.roleInstance" = "kubernetes_pod_name"
```

### Metric Encoding:

## Metric Encoding

For each field an Application Insights Telemetry record is created, named based
on the measurement name and field.

**Example:** Create the telemetry records `foo_first` and `foo_second`:
```

```text
foo,host=a first=42,second=43 1525293034000000000
```

In the special case of a single field named `value`, a single telemetry record is created named using only the measurement name.

**Example:** Create a telemetry record `bar`:
```

```text
bar,host=a value=42 1525293034000000000
```
@ -3,12 +3,12 @@
This plugin writes data collected by any of the Telegraf input plugins to [Azure Data Explorer](https://azure.microsoft.com/en-au/services/data-explorer/).
Azure Data Explorer is a distributed, columnar store, purpose built for any type of logs, metrics and time series data.

## Pre-requisites:

## Pre-requisites

- [Create Azure Data Explorer cluster and database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
- VM/compute or container to host Telegraf - it could be hosted locally where an app/service to be monitored is deployed or remotely on a dedicated monitoring compute/container.

## Configuration:

## Configuration

```toml
[[outputs.azure_data_explorer]]
@ -47,21 +47,21 @@ The plugin will group the metrics by the metric name, and will send each group o
The table name will match the `name` property of the metric; this means that the name of the metric should comply with the Azure Data Explorer table naming constraints in case you plan to add a prefix to the metric name.

### SingleTable

The plugin will send all the metrics received to a single Azure Data Explorer table. The name of the table must be supplied via `table_name` in the config file. If the table doesn't exist the plugin will create the table; if the table exists, then the plugin will try to merge the Telegraf metric schema into the existing table. For more information about the merge process check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
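A short sketch of what this might look like in the plugin config (`table_name` is from the text above; the grouping option name `metrics_grouping_type` is an assumption):

```toml
[[outputs.azure_data_explorer]]
  ## Send every metric to one table ("SingleTable" grouping; option name assumed)
  # metrics_grouping_type = "SingleTable"
  ## Target table, required in single-table mode
  # table_name = "telegraf_metrics"
```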
## Tables Schema

The schema of the Azure Data Explorer table will match the structure of the Telegraf `Metric` object. The corresponding Azure Data Explorer command generated by the plugin would be like the following:
```

```text
.create-merge table ['table-name'] (['fields']:dynamic, ['name']:string, ['tags']:dynamic, ['timestamp']:datetime)
```

The corresponding table mapping would be like the following:
```

```text
.create-or-alter table ['table-name'] ingestion json mapping 'table-name_mapping' '[{"column":"fields", "Properties":{"Path":"$[\'fields\']"}},{"column":"name", "Properties":{"Path":"$[\'name\']"}},{"column":"tags", "Properties":{"Path":"$[\'tags\']"}},{"column":"timestamp", "Properties":{"Path":"$[\'timestamp\']"}}]'
```
@ -70,21 +70,21 @@ The corresponding table mapping would be like the following:
## Authentication

### Supported Authentication Methods

This plugin provides several types of authentication. The plugin will check the existence of several specific environment variables, and consequently will choose the right method.

These methods are:

1. AAD Application Tokens (Service Principals with secrets or certificates).

For guidance on how to create and register an App in Azure Active Directory check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application), and for more information on the Service Principals check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals).

2. AAD User Tokens
- Allows Telegraf to authenticate like a user. This method is mainly used
for development purposes only.

- Allows Telegraf to authenticate like a user. This method is mainly used for development purposes only.

3. Managed Service Identity (MSI) token

- If you are running Telegraf from Azure VM or infrastructure, then this is the preferred authentication method.

[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
@ -93,7 +93,6 @@ Whichever method, the designated Principal needs to be assigned the `Database Us
allow the plugin to create the required tables and ingest data into it.
If `create_tables=false` then the designated principal only needs at least the `Database Ingestor` role.

### Configurations of the chosen Authentication Method

The plugin will authenticate using the first available of the
@ -130,10 +129,11 @@ following configurations, **it's important to understand that the assessment, an
[msi]: https://docs.microsoft.com/en-us/azure/active-directory/msi-overview
[arm]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview

## Querying data collected in Azure Data Explorer

Examples of data transformations and queries that would be useful to gain insights -
1. **Data collected using SQL input plugin**

### Using SQL input plugin

Sample SQL metrics data -
@ -142,23 +142,25 @@ Examples of data transformations and queries that would be useful to gain insigh
sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149}
sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464}

Since the collected metrics object is of a complex type, "fields" and "tags" are stored as dynamic data types. There are multiple ways to query this data:

- **Query JSON attributes directly**: Azure Data Explorer provides an ability to query JSON data in raw format without parsing it, so JSON attributes can be queried directly in the following way -
```

1. Query JSON attributes directly: Azure Data Explorer provides an ability to query JSON data in raw format without parsing it, so JSON attributes can be queried directly in the following way:

```text
Tablename
| where name == "sqlserver_azure_db_resource_stats" and todouble(fields.avg_cpu_percent) > 7
```

```

```text
Tablename
| distinct tostring(tags.database_name)
```

**Note** - This approach could have a performance impact in case of large volumes of data; use the below mentioned approach for such cases.

- **Use [Update policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy)**: Transform dynamic data type columns using update policy. This is the recommended performant way for querying over large volumes of data compared to querying directly over JSON attributes.
1. Use [Update policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy): Transform dynamic data type columns using update policy. This is the recommended performant way for querying over large volumes of data compared to querying directly over JSON attributes:

```

```json
// Function to transform data
.create-or-alter function Transform_TargetTableName() {
SourceTableName
@ -174,7 +176,7 @@ Examples of data transformations and queries that would be useful to gain insigh
@'[{"IsEnabled": true, "Source": "SourceTableName", "Query": "Transform_TargetTableName()", "IsTransactional": true, "PropagateIngestionProperties": false}]'
```

2. **Data collected using syslog input plugin**
### Using syslog input plugin

Sample syslog data -
@ -186,17 +188,19 @@ Examples of data transformations and queries that would be useful to gain insigh
There are multiple ways to flatten dynamic columns using the 'extend' or 'bag_unpack' operator. You can use either of these ways in the above mentioned update policy function - 'Transform_TargetTableName()'

- Use the [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator) operator - This is the recommended approach compared to 'bag_unpack' as it is faster and more robust. Even if the schema changes, it will not break queries or dashboards.
```

```text
Tablename
| extend facility_code=toint(fields.facility_code), message=tostring(fields.message), procid= tolong(fields.procid), severity_code=toint(fields.severity_code),
SysLogTimestamp=unixtime_nanoseconds_todatetime(tolong(fields.timestamp)), version= todouble(fields.version),
appname= tostring(tags.appname), facility= tostring(tags.facility),host= tostring(tags.host), hostname=tostring(tags.hostname), severity=tostring(tags.severity)
| project-away fields, tags
```

- Use the [bag_unpack plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin) to unpack the dynamic type columns automatically. This method could lead to issues if the source schema changes, as it dynamically expands columns.
```

```text
Tablename
| evaluate bag_unpack(tags, columnsConflict='replace_source')
| evaluate bag_unpack(fields, columnsConflict='replace_source')
```
@ -14,7 +14,7 @@ metric is written as the Azure Monitor metric name. All field values are
written as a summarized set that includes: min, max, sum, count. Tags are
written as a dimension on each Azure Monitor metric.

### Configuration:

## Configuration

```toml
[[outputs.azure_monitor]]
@ -47,12 +47,12 @@ written as a dimension on each Azure Monitor metric.
# endpoint_url = "https://monitoring.core.usgovcloudapi.net"
```

### Setup

## Setup

1. [Register the `microsoft.insights` resource provider in your Azure subscription][resource provider].
2. If using Managed Service Identities to authenticate an Azure VM,
1. If using Managed Service Identities to authenticate an Azure VM,
[enable system-assigned managed identity][enable msi].
2. Use a region that supports Azure Monitor Custom Metrics,
1. Use a region that supports Azure Monitor Custom Metrics,
For regions with Custom Metrics support, an endpoint will be available with
the format `https://<region>.monitoring.azure.com`.
@ -75,17 +75,18 @@ This plugin uses one of several different types of authenticate methods. The
preferred authentication methods are different from the *order* in which each
authentication is checked. Here are the preferred authentication methods:

1. Managed Service Identity (MSI) token
- This is the preferred authentication method. Telegraf will automatically
authenticate using this method when running on Azure VMs.
1. Managed Service Identity (MSI) token: This is the preferred authentication method. Telegraf will automatically authenticate using this method when running on Azure VMs.
2. AAD Application Tokens (Service Principals)
- Primarily useful if Telegraf is writing metrics for other resources.

* Primarily useful if Telegraf is writing metrics for other resources.
[More information][principal].
- A Service Principal or User Principal needs to be assigned the `Monitoring
* A Service Principal or User Principal needs to be assigned the `Monitoring
Metrics Publisher` role on the resource(s) metrics will be emitted
against.
3. AAD User Tokens (User Principals)
- Allows Telegraf to authenticate like a user. It is best to use this method

* Allows Telegraf to authenticate like a user. It is best to use this method
for development.

[principal]: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects
@ -93,30 +94,28 @@ authentication is checked. Here are the preferred authentication methods:
The plugin will authenticate using the first available of the
following configurations:

1. **Client Credentials**: Azure AD Application ID and Secret.
1. **Client Credentials**: Azure AD Application ID and Secret. Set the following environment variables:

Set the following environment variables:
* `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
* `AZURE_CLIENT_ID`: Specifies the app client ID to use.
* `AZURE_CLIENT_SECRET`: Specifies the app secret to use.

- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
- `AZURE_CLIENT_SECRET`: Specifies the app secret to use.
1. **Client Certificate**: Azure AD Application ID and X.509 Certificate.

2. **Client Certificate**: Azure AD Application ID and X.509 Certificate.
* `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
* `AZURE_CLIENT_ID`: Specifies the app client ID to use.
* `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use.
* `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use.

- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
- `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use.
- `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use.

3. **Resource Owner Password**: Azure AD User and Password. This grant type is
1. **Resource Owner Password**: Azure AD User and Password. This grant type is
*not recommended*, use device login instead if you need interactive login.

- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
- `AZURE_USERNAME`: Specifies the username to use.
- `AZURE_PASSWORD`: Specifies the password to use.
* `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
* `AZURE_CLIENT_ID`: Specifies the app client ID to use.
* `AZURE_USERNAME`: Specifies the username to use.
* `AZURE_PASSWORD`: Specifies the password to use.

4. **Azure Managed Service Identity**: Delegate credential management to the
1. **Azure Managed Service Identity**: Delegate credential management to the
platform. Requires that code is running in Azure, e.g. on a VM. All
configuration is handled by Azure. See [Azure Managed Service Identity][msi]
for more details. Only available when using the [Azure Resource Manager][arm].
@ -127,7 +126,7 @@ following configurations:
**Note**: As shown above, the last option (#4) is the preferred way to
authenticate when running Telegraf on Azure VMs.

### Dimensions

## Dimensions

Azure Monitor only accepts values with a numeric type. The plugin will drop
fields with a string type by default. The plugin can set all string type fields
@ -5,7 +5,7 @@ with Google Cloud using either a service account or user credentials.
Be aware that this plugin accesses APIs that are [chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.

### Configuration

## Configuration

```toml
[[outputs.bigquery]]
@ -21,17 +21,19 @@ Be aware that this plugin accesses APIs that are [chargeable](https://cloud.goog
## Character to replace hyphens on Metric name
# replace_hyphen_to = "_"
```

Requires `project` to specify where BigQuery entries will be persisted.

Requires `dataset` to specify under which BigQuery dataset the corresponding metrics tables reside.

Each metric should have a corresponding table in BigQuery.
The schema of the table on BigQuery:

* Should contain the field `timestamp` which is the timestamp of a telegraf metric
* Should contain the metric's tags with the same name and the column type should be set to string.
* Should contain the metric's fields with the same name and the column type should match the field type.

### Restrictions

## Restrictions

Avoid hyphens on BigQuery tables; the underlying SDK cannot handle streaming inserts to tables with hyphens.
@ -41,6 +43,7 @@ In case of a metric with hyphen by default hyphens shall be replaced with unders
This can be altered using the `replace_hyphen_to` configuration property.

Available data type options are:

* integer
* float or long
* string
@ -3,8 +3,7 @@
The GCP PubSub plugin publishes metrics to a [Google Cloud PubSub][pubsub] topic
as one of the supported [output data formats][].

### Configuration

## Configuration

This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage cloud_pubsub`.
@ -1,4 +1,4 @@
## Amazon CloudWatch Output for Telegraf
# Amazon CloudWatch Output for Telegraf

This plugin will send metrics to Amazon CloudWatch.
@ -6,13 +6,14 @@ This plugin will send metrics to Amazon CloudWatch.
This plugin uses a credential chain for Authentication with the CloudWatch
API endpoint. In the following order the plugin will attempt to authenticate.

1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
4. Shared profile from `profile` attribute
5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
6. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
7. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If
left empty, the current timestamp will be used.
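A minimal sketch of how these credential options might appear in the plugin config (all option names are from the chain above; values are placeholders):

```toml
[[outputs.cloudwatch]]
  ## Web identity / assumed-role credentials via STS
  # role_arn = "arn:aws:iam::123456789012:role/telegraf"
  # web_identity_token_file = "/path/to/token"
  # role_session_name = "telegraf-session"
  ## Or explicit static credentials
  # access_key = "AKIA..."
  # secret_key = "..."
  ## Or a shared credentials profile
  # profile = "default"
```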
@ -31,6 +32,7 @@ must be configured.
The region is the Amazon region that you wish to connect to.
Examples include but are not limited to:

* us-west-1
* us-west-2
* us-east-1
@ -52,4 +54,5 @@ aggregator to calculate those fields. If not all statistic fields are available,
all fields would still be sent as raw metrics.

### high_resolution_metrics

Enable high resolution metrics (1 second precision) instead of standard ones (60 seconds precision).
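A sketch of enabling this option (option name from the heading above):

```toml
[[outputs.cloudwatch]]
  ## Send metrics at 1-second resolution instead of the 60-second default
  # high_resolution_metrics = true
```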
@ -1,4 +1,4 @@
## Amazon CloudWatch Logs Output for Telegraf
# Amazon CloudWatch Logs Output for Telegraf

This plugin will send logs to Amazon CloudWatch.
@ -6,21 +6,24 @@ This plugin will send logs to Amazon CloudWatch.
This plugin uses a credential chain for Authentication with the CloudWatch Logs
API endpoint. In the following order the plugin will attempt to authenticate.
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
4. Shared profile from `profile` attribute
5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
6. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
7. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

The IAM user needs the following permissions ( https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html):
1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

The IAM user needs the following permissions (see this [reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) for more):

- `logs:DescribeLogGroups` - required to check if the configured log group exists.
- `logs:DescribeLogStreams` - required to view all log streams associated with a log group.
- `logs:CreateLogStream` - required to create a new log stream in a log group.
- `logs:PutLogEvents` - required to upload a batch of log events into a log stream.

## Config

```toml
[[outputs.cloudwatch_logs]]
## The region is the Amazon region that you wish to connect to.
@ -6,7 +6,6 @@ This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL protocol]
The plugin requires a table with the following schema.

```sql
CREATE TABLE my_metrics (
"hash_id" LONG INDEX OFF,
@ -3,8 +3,7 @@
This plugin writes to the [Datadog Metrics API][metrics] and requires an
`apikey` which can be obtained [here][apikey] for the account.

### Configuration

## Configuration

```toml
[[outputs.datadog]]
@ -21,7 +20,7 @@ This plugin writes to the [Datadog Metrics API][metrics] and requires an
# http_proxy_url = "http://localhost:8888"
```

### Metrics

## Metrics

Datadog metric names are formed by joining the Telegraf metric name and the field
key with a `.` character.
@ -3,7 +3,7 @@
This output plugin simply drops all metrics that are sent to it. It is only
meant to be used for testing purposes.

### Configuration:

## Configuration

```toml
# Send metrics to nowhere at all
@ -39,8 +39,8 @@ You will also need to configure an API token for secure access. Find out how to
The endpoint for the Dynatrace Metrics API v2 is

* on Dynatrace Managed: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
* on Dynatrace SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`
- on Dynatrace Managed: `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
- on Dynatrace SaaS: `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`

```toml
[[outputs.dynatrace]]
@ -4,7 +4,7 @@ This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using Ela
It supports Elasticsearch releases from 5.x up to 7.x.

### Elasticsearch indexes and templates

## Elasticsearch indexes and templates

### Indexes per time-frame
@ -12,12 +12,12 @@ This plugin can manage indexes per time-frame, as commonly done in other tools w
The timestamp of the metric collected will be used to decide the index destination.

For more information about this usage on Elasticsearch, check https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe
For more information about this usage on Elasticsearch, check [the docs](https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe).

### Template management

Index templates are used in Elasticsearch to define settings and mappings for the indexes and how the fields should be analyzed.
For more information on how this works, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
For more information on how this works, see [the docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html).

This plugin can create a working template for use with telegraf metrics. It uses the Elasticsearch dynamic templates feature to set proper types for the tags and metrics fields.
If the template specified already exists, it will not be overwritten unless you configure this plugin to do so. Thus you can customize this template after its creation if necessary.
@ -98,7 +98,7 @@ Example of an index template created by telegraf on Elasticsearch 5.x:
```

### Example events:
### Example events

This plugin will format the events in the following way:
@ -144,7 +144,7 @@ This plugin will format the events in the following way:
}
```

### Configuration

## Configuration

```toml
[[outputs.elasticsearch]]
@ -201,7 +201,7 @@ This plugin will format the events in the following way:
force_document_id = false
```

#### Permissions

### Permissions

If you are using authentication within your Elasticsearch cluster, you need
to create an account and create a role with at least the manage role in the
@ -210,7 +210,7 @@ connect to your Elasticsearch cluster and send logs to your cluster. After
that, you need to add "create_indice" and "write" permission to your specific
index pattern.

#### Required parameters:

### Required parameters

* `urls`: A list containing the full HTTP URL of one or more nodes from your Elasticsearch instance.
* `index_name`: The target index for metrics. You can use the date specifiers below to create indexes per time frame.
@ -225,7 +225,7 @@ index pattern.
Additionally, you can specify dynamic index names by using tags with the notation ```{{tag_name}}```. This will store the metrics with different tag values in different indices. If the tag does not exist in a particular metric, the `default_tag_value` will be used instead.
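A short sketch of what a time-framed, tag-based index name might look like (`index_name` and `default_tag_value` are from the text above; the pattern itself is illustrative):

```toml
[[outputs.elasticsearch]]
  ## One index per host tag and day; %Y.%m.%d are date specifiers
  # index_name = "telegraf-{{host}}-%Y.%m.%d"
  ## Used when a metric lacks the tag referenced above
  # default_tag_value = "none"
```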
#### Optional parameters:

### Optional parameters

* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
* `enable_sniffer`: Set to true to ask Elasticsearch a list of all cluster nodes, thus it is not necessary to list all nodes in the urls config option.
@ -237,7 +237,7 @@ Additionally, you can specify dynamic index names by using tags with the notatio
* `overwrite_template`: Set to true if you want telegraf to overwrite an existing template.
* `force_document_id`: Set to true to compute a unique hash as sha256(concat(timestamp,measurement,series-hash)); this enables resending or updating data without duplicated documents in Elasticsearch.

### Known issues

## Known issues

Integer values collected that are bigger than 2^63 and smaller than 1e21 (or in this exact same window of their negative counterparts) are encoded by the golang JSON encoder in decimal format and that is not fully supported by Elasticsearch dynamic field mapping. This causes the metrics with such values to be dropped in case a field mapping has not been created yet on the telegraf index. If that's the case you will see an exception on the Elasticsearch side like this:
@ -4,13 +4,15 @@ This plugin sends telegraf metrics to an external application over stdin.
The command should be defined similar to docker's `exec` form:

```text
["executable", "param1", "param2"]
```

On non-zero exit stderr will be logged at error level.

For better performance, consider execd, which runs continuously.

### Configuration

## Configuration

```toml
[[outputs.exec]]
@ -4,7 +4,7 @@ The `execd` plugin runs an external program as a daemon.
Telegraf minimum version: Telegraf 1.15.0

### Configuration:

## Configuration

```toml
[[outputs.execd]]
@ -22,7 +22,7 @@ Telegraf minimum version: Telegraf 1.15.0
data_format = "influx"
```

### Example

## Example

see [examples][]
@ -2,7 +2,7 @@
This plugin writes telegraf metrics to files.

### Configuration

## Configuration

```toml
[[outputs.file]]
@ -6,7 +6,7 @@ via raw TCP.
For details on the translation between Telegraf Metrics and Graphite output,
see the [Graphite Data Format](../../../docs/DATA_FORMATS_OUTPUT.md)

### Configuration:

## Configuration

```toml
# Configuration for Graphite server to send metrics to
@ -4,7 +4,7 @@ This plugin writes to a Graylog instance using the "[GELF][]" format.
[GELF]: https://docs.graylog.org/en/3.1/pages/gelf.html#gelf-payload-specification

### Configuration:

## Configuration

```toml
[[outputs.graylog]]
@ -7,7 +7,8 @@ When the plugin is healthy it will return a 200 response; when unhealthy it
will return a 503 response. The default state is healthy, one or more checks
must fail in order for the resource to enter the failed state.

### Configuration

## Configuration

```toml
[[outputs.health]]
## Address and port to listen on.
@ -48,7 +49,7 @@ must fail in order for the resource to enter the failed state.
## field = "buffer_size"
```

#### compares

### compares

The `compares` check is used to assert basic mathematical relationships. Use
it by choosing a field key and one or more comparisons that must hold true. If
@ -56,7 +57,7 @@ the field is not found on a metric no comparison will be made.
Comparisons must hold true on all metrics for the check to pass.

#### contains

### contains

The `contains` check can be used to require a field key to exist on at least
one metric.
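A minimal sketch of how the two checks might be declared, reusing the `field = "buffer_size"` example from the config above (the `lt` comparison key and the threshold are assumptions):

```toml
[[outputs.health]]
  ## Fail the health check unless buffer_size stays below the bound
  [[outputs.health.compares]]
    field = "buffer_size"
    # lt = 5000.0

  ## Fail the health check unless some metric carries this field
  [[outputs.health.contains]]
    field = "buffer_size"
```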
@ -4,7 +4,7 @@ This plugin sends metrics in a HTTP message encoded using one of the output
data formats. For data_formats that support batching, metrics are sent in
batch format by default.

### Configuration:

## Configuration

```toml
# A plugin that can transmit metrics over HTTP
@ -70,6 +70,6 @@ batch format by default.
# idle_conn_timeout = 0
```

### Optional Cookie Authentication Settings:
### Optional Cookie Authentication Settings

The optional Cookie Authentication Settings will retrieve a cookie from the given authorization endpoint, and use it in subsequent API requests. This is useful for services that do not provide OAuth or Basic Auth authentication, e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve an authorization cookie. The Cookie Auth Renewal interval will renew the authorization by retrieving a new cookie at the given interval.
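A short sketch of how these settings might look in the config (the exact option names, e.g. `cookie_auth_url`, are assumptions derived from the setting names above; the endpoint and body are placeholders):

```toml
[[outputs.http]]
  ## Endpoint that issues the authorization cookie
  # cookie_auth_url = "https://localhost/authMe"
  # cookie_auth_method = "POST"
  ## Cookie Auth Body sent to retrieve the cookie
  # cookie_auth_body = '{"username": "user", "password": "secret"}'
  ## Cookie Auth Renewal interval
  # cookie_auth_renewal = "5m"
```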
@ -2,7 +2,7 @@
The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP service.

### Configuration:

## Configuration

```toml
# Configuration for sending metrics to InfluxDB
@ -84,7 +84,8 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP ser
# influx_uint_support = false
```

### Metrics

## Metrics

Reference the [influx serializer][] for details about metric production.

[InfluxDB v1.x]: https://github.com/influxdata/influxdb
@ -2,7 +2,7 @@
The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service.

### Configuration:

## Configuration

```toml
# Configuration for sending metrics to InfluxDB 2.0
@ -58,8 +58,8 @@ The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service.
# insecure_skip_verify = false
```

### Metrics

## Metrics

Reference the [influx serializer][] for details about metric production.

[InfluxDB v2.x]: https://github.com/influxdata/influxdb
@ -7,7 +7,7 @@ Instrumental accepts stats in a format very close to Graphite, with the only dif
the type of stat (gauge, increment) is the first token, separated from the metric itself
by whitespace. The `increment` type is only used if the metric comes in as a counter through `[[input.statsd]]`.

## Configuration:

## Configuration

```toml
[[outputs.instrumental]]
@ -2,7 +2,8 @@
This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.html) acting as a Kafka Producer.

### Configuration:

## Configuration

```toml
[[outputs.kafka]]
## URLs of kafka brokers
@ -146,7 +147,7 @@ This plugin writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.htm
# data_format = "influx"
```

#### `max_retry`

### `max_retry`

This option controls the number of retries before a failure notification is
displayed for each message when no acknowledgement is received from the
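A sketch of setting this option (the value shown is illustrative):

```toml
[[outputs.kafka]]
  ## Retries before a failure notification is displayed for a message
  # max_retry = 3
```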
@ -1,4 +1,4 @@
## Amazon Kinesis Output for Telegraf
# Amazon Kinesis Output for Telegraf

This is an experimental plugin that is still in the early stages of development. It will batch up all of the Points
in one Put request to Kinesis. This should reduce the number of API requests considerably.
@ -13,18 +13,18 @@ maybe useful for users to review Amazons official documentation which is availab
This plugin uses a credential chain for Authentication with the Kinesis API endpoint. In the following order the plugin
will attempt to authenticate.

1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
2. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
3. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
4. Shared profile from `profile` attribute
5. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
6. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
7. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

If you are using credentials from a web identity provider, you can specify the session name using `role_session_name`. If
left empty, the current timestamp will be used.

## Config

For this output plugin to function correctly the following variables must be configured.
@ -35,6 +35,7 @@ For this output plugin to function correctly the following variables must be con
### region

The region is the Amazon region that you wish to connect to. Examples include but are not limited to

* us-west-1
* us-west-2
* us-east-1
@ -2,7 +2,7 @@
This plugin sends metrics to Logz.io over HTTPs.

### Configuration:

## Configuration

```toml
# A plugin that can send metrics over HTTPs to Logz.io
@ -30,11 +30,11 @@ This plugin sends metrics to Logz.io over HTTPs.
# url = "https://listener.logz.io:8071"
```

### Required parameters:

### Required parameters

* `token`: Your Logz.io token, which can be found under "settings" in your account.

### Optional parameters:

### Optional parameters

* `check_disk_space`: Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
* `disk_threshold`: If the queue_dir space crosses this threshold (in % of disk usage), the plugin will start dropping logs.
@ -5,7 +5,7 @@ log line will contain all fields in `key="value"` format which is easily parsable
Logs within each stream are sorted by timestamp before being sent to Loki.

### Configuration:

## Configuration

```toml
# A plugin that can transmit logs to Loki
@ -3,7 +3,7 @@
This plugin sends metrics to MongoDB and automatically creates the collections as time series collections when they don't already exist.
**Please note:** Requires MongoDB 5.0+ for Time Series Collections

### Configuration:

## Configuration

```toml
# A plugin that can transmit logs to mongodb
@ -50,13 +50,14 @@ This plugin writes to a [MQTT Broker](http://http://mqtt.org/) acting as a mqtt
# data_format = "influx"
```

### Required parameters:

## Required parameters

* `servers`: List of strings, this is for speaking to a cluster of `mqtt` brokers. On each flush interval, Telegraf will randomly choose one of the urls to write to. Each URL should just include host and port e.g. -> `["{host}:{port}","{host2}:{port2}"]`
* `topic_prefix`: The `mqtt` topic prefix to publish to. MQTT outputs send metrics to this topic format "<topic_prefix>/<hostname>/<pluginname>/" ( ex: prefix/web01.example.com/mem)
* `qos`: The `mqtt` QoS policy for sending messages. See https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q029090_.htm for details.
* `topic_prefix`: The `mqtt` topic prefix to publish to. MQTT outputs send metrics to this topic format `<topic_prefix>/<hostname>/<pluginname>/` ( ex: `prefix/web01.example.com/mem`)
* `qos`: The `mqtt` QoS policy for sending messages. See [these docs](https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q029090_.htm) for details.

## Optional parameters

### Optional parameters:
* `username`: The username to connect MQTT server.
* `password`: The password to connect MQTT server.
* `client_id`: The unique client id to connect MQTT server. If this parameter is not set then a random ID is generated.
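A minimal sketch combining the required parameters above (the broker address is a placeholder):

```toml
[[outputs.mqtt]]
  ## Cluster of MQTT brokers to randomly choose from at each flush
  servers = ["localhost:1883"]
  ## Metrics are published under <topic_prefix>/<hostname>/<pluginname>/
  topic_prefix = "telegraf"
  ## QoS policy: 0, 1, or 2
  qos = 2
```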
@ -6,7 +6,8 @@ To use this plugin you must first obtain an [Insights API Key][].
Telegraf minimum version: Telegraf 1.15.0

### Configuration

## Configuration

```toml
[[outputs.newrelic]]
## New Relic Insights API key
@ -2,7 +2,7 @@
This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers and agents via gRPC.

### Configuration

## Configuration

```toml
[[outputs.opentelemetry]]
@ -39,11 +39,11 @@ This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers a
# key1 = "value1"
```

#### Schema

### Schema

The InfluxDB->OpenTelemetry conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md)
and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/influx2otel)
are hosted at https://github.com/influxdata/influxdb-observability .
are hosted on [GitHub](https://github.com/influxdata/influxdb-observability).

For metrics, two input schemata exist.
Line protocol with measurement name `prometheus` is assumed to have a schema
@ -51,6 +51,7 @@ matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metr
Line protocol with other measurement names is assumed to have schema
matching [Prometheus input plugin](../../inputs/prometheus/README.md) when `metric_version = 1`.
If both schema assumptions fail, then the line protocol data is interpreted as:

- Metric type = gauge (or counter, if indicated by the input plugin)
- Metric name = `[measurement]_[field key]`
- Metric value = line protocol field value, cast to float
|
To use Http mode, set useHttp to true in config. You can also control how many
metrics are sent in each http request by setting batchSize in config.

See http://opentsdb.net/docs/build/html/api_http/put.html for details.
See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for details.

## Transfer "Protocol" in the telnet mode

The expected input from OpenTSDB is specified in the following way:

```

```text
put <metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
```

The telegraf output plugin adds an optional prefix to the metric keys so
that a subset can be selected.

```

```text
put <[prefix.]metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
```

### Example

```

```text
put nine.telegraf.system_load1 1441910356 0.430000 dc=homeoffice host=irimame scope=green
put nine.telegraf.system_load5 1441910356 0.580000 dc=homeoffice host=irimame scope=green
put nine.telegraf.system_load15 1441910356 0.730000 dc=homeoffice host=irimame scope=green
@ -44,8 +44,6 @@ put nine.telegraf.ping_average_response_ms 1441910366 24.006000 dc=homeoffice ho
...
```

##

The OpenTSDB telnet interface can be simulated with this reader:

```go
@ -3,7 +3,7 @@
This plugin starts a [Prometheus](https://prometheus.io/) Client, it exposes
all metrics on `/metrics` (default) to be polled by a Prometheus server.

### Configuration

## Configuration

```toml
[[outputs.prometheus_client]]
@ -52,7 +52,7 @@ all metrics on `/metrics` (default) to be polled by a Prometheus server.
# export_timestamp = false
```

### Metrics

## Metrics

Prometheus metrics are produced in the same manner as the [prometheus serializer][].
@ -2,7 +2,7 @@
This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.

### Configuration:

## Configuration

```toml
# Configuration for Riemann to send metrics to
@ -39,11 +39,11 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.
# timeout = "5s"
```

### Required parameters:

### Required parameters

* `url`: The full TCP or UDP URL of the Riemann server to send events to.

### Optional parameters:

### Optional parameters

* `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long an event is considered valid for in Riemann.
* `separator`: Separator to use between measurement and field name in Riemann service name.
@ -53,24 +53,27 @@ This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.
* `tags`: Additional Riemann tags that will be sent.
* `description_text`: Description text for Riemann event.

### Example Events:

## Example Events

Riemann event emitted by Telegraf with default configuration:
```

```text
#riemann.codec.Event{
:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0,
:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021}
```

Telegraf emitting the same Riemann event with `measurement_as_attribute` set to `true`:
```

```text
#riemann.codec.Event{ ...
:measurement "disk", :service "used_percent", :metric 73.16736001949994,
... :time 1475605021}
```

Telegraf emitting the same Riemann event with additional Riemann tags defined:
```

```text
#riemann.codec.Event{
:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0,
:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021,
@ -78,7 +81,8 @@ Telegraf emitting the same Riemann event with additional Riemann tags defined:
```

Telegraf emitting a Riemann event with a status text and `string_as_state` set to `true`, and a `description_text` defined:
```

```text
#riemann.codec.Event{
:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state "Running", :ttl 30.0,
:description "PostgreSQL master node is up and running",
@ -3,7 +3,7 @@
This plugin writes metrics events to [Sensu Go](https://sensu.io) via its
HTTP events API.

### Configuration:

## Configuration

```toml
[[outputs.sensu]]
@ -2,7 +2,8 @@
The SignalFx output plugin sends metrics to [SignalFx](https://docs.signalfx.com/en/latest/).

### Configuration

## Configuration

```toml
[[outputs.signalfx]]
## SignalFx Org Access Token
@ -65,7 +65,7 @@ through the convert settings.
## Configuration

```
```toml
# Save metrics to an SQL Database
[[outputs.sql]]
## Database driver
@ -15,7 +15,7 @@ Metrics are grouped by the `namespace` variable and metric key - eg: `custom.goo
Additional resource labels can be configured by `resource_labels`. By default the required `project_id` label is always set to the `project` variable.

### Configuration

## Configuration

```toml
[[outputs.stackdriver]]
@ -35,7 +35,7 @@ Additional resource labels can be configured by `resource_labels`. By default th
# location = "eu-north0"
```

### Restrictions

## Restrictions

Stackdriver does not support string values in custom metrics; any string
fields will not be written.
|
* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2`
* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus`

### Configuration:

## Configuration

```toml
# A plugin that can send metrics to Sumo Logic HTTP metric collector.
@ -8,7 +8,7 @@ The syslog output plugin sends syslog messages transmitted over
Syslog messages are formatted according to
[RFC 5424](https://tools.ietf.org/html/rfc5424).

### Configuration

## Configuration

```toml
[[outputs.syslog]]
@ -88,7 +88,8 @@ Syslog messages are formatted according to
# default_appname = "Telegraf"
```

### Metric mapping

## Metric mapping

The output plugin expects syslog metrics tags and fields to match up with the
ones created in the [syslog input][].
|
@ -2,7 +2,7 @@
|
|||
|
||||
The Timestream output plugin writes metrics to the [Amazon Timestream] service.
|
||||
|
||||
### Configuration
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
# Configuration for sending metrics to Amazon Timestream.
|
||||
|
|
@ -135,6 +135,7 @@ In case of an attempt to write an unsupported by Timestream Telegraf Field type,
In case of receiving ThrottlingException or InternalServerException from Timestream, the errors are returned to Telegraf, in which case Telegraf will keep the metrics in buffer and retry writing those metrics on the next flush.

In case of receiving ResourceNotFoundException:

- If the `create_table_if_not_exists` configuration is set to `true`, the plugin will try to create the appropriate table and write the records again, if the table creation was successful.
- If the `create_table_if_not_exists` configuration is set to `false`, the records are dropped, and an error is emitted to the logs.
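A sketch of the relevant option (name from the text above):

```toml
[[outputs.timestream]]
  ## Create the missing table and retry the write on ResourceNotFoundException
  # create_table_if_not_exists = true
```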
@ -148,7 +149,7 @@ Turn on debug flag in the Telegraf to turn on detailed logging (including record
Execute unit tests with:

```
```shell
go test -v ./plugins/outputs/timestream/...
```
@ -2,7 +2,7 @@
The `warp10` output plugin writes metrics to [Warp 10][].

### Configuration

## Configuration

```toml
[[outputs.warp10]]
@ -32,7 +32,7 @@ The `warp10` output plugin writes metrics to [Warp 10][].
# insecure_skip_verify = false
```

### Output Format

## Output Format

Metrics are converted and sent using the [Geo Time Series][] (GTS) input format.
@ -2,8 +2,7 @@
This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefront data format over TCP.

### Configuration:

## Configuration

```toml
## Url for Wavefront Direct Ingestion or using HTTP with Wavefront Proxy
@ -57,35 +56,37 @@ This plugin writes to a [Wavefront](https://www.wavefront.com) proxy, in Wavefro
#immediate_flush = true
```

### Convert Path & Metric Separator

If the `convert_path` option is true, any `_` in metric and field names will be converted to the `metric_separator` value.
By default, to ease metrics browsing in the Wavefront UI, the `convert_path` option is true, and `metric_separator` is `.` (dot).
Default integrations within Wavefront expect these values to be set to their defaults, however if converting from another platform
it may be desirable to change these defaults.

### Use Regex

Most illegal characters in the metric name are automatically converted to `-`.
The `use_regex` setting can be used to ensure all illegal characters are properly handled, but can lead to performance degradation.

### Source Override

Often when collecting metrics from another system, you want to use the target system as the source, not the one running Telegraf.
Many Telegraf plugins will identify the target source with a tag. The tag name can vary for different plugins. The `source_override`
option will use the value specified in any of the listed tags if found. The tag names are checked in the same order as listed,
and if found, the other tags will not be checked. If no tags specified are found, the default host tag will be used to identify the
source of the metric.
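A combined sketch of the settings described above (`convert_path`, `metric_separator`, `use_regex`, and `source_override` are from the text; the tag names in the list are illustrative):

```toml
[[outputs.wavefront]]
  ## Convert "_" in metric and field names to the separator below
  # convert_path = true
  # metric_separator = "."
  ## Slower but exhaustive illegal-character handling
  # use_regex = false
  ## Tags checked, in order, for the metric source
  # source_override = ["hostname", "address", "agent_host", "node_host"]
```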
### Wavefront Data format

The expected input for Wavefront is specified in the following way:
```

```text
<metric> <value> [<timestamp>] <source|host>=<sourceTagValue> [tagk1=tagv1 ...tagkN=tagvN]
```

More information about the Wavefront data format is available [here](https://community.wavefront.com/docs/DOC-1031).

### Allowed values for metrics

Wavefront allows `integers` and `floats` as input values. By default it also maps `bool` values to numeric, false -> 0.0,
true -> 1.0. To map `strings` use the [enum](../../processors/enum) processor plugin.
@ -4,7 +4,7 @@ This plugin can write to a WebSocket endpoint.
It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).

### Configuration:

## Configuration

```toml
# A plugin that can transmit metrics over WebSocket.
@ -1,9 +1,8 @@
# Yandex Cloud Monitoring

This plugin will send custom metrics to Yandex Cloud Monitoring.
https://cloud.yandex.com/services/monitoring
This plugin will send custom metrics to [Yandex Cloud Monitoring](https://cloud.yandex.com/services/monitoring).

### Configuration:

## Configuration

```toml
[[outputs.yandex_cloud_monitoring]]