# Kafka Consumer Input Plugin
The [Kafka][kafka] consumer plugin reads from Kafka
and creates metrics using one of the supported [input data formats][].
For old Kafka versions (< 0.8), please use the [kafka_consumer_legacy][] input plugin
and the old Zookeeper connection method.
### Configuration
```toml
[[inputs.kafka_consumer]]
  ## Kafka brokers.
  brokers = ["localhost:9092"]

  ## Topics to consume.
  topics = ["telegraf"]

  ## If set, this tag will be added to all metrics with the topic as the value.
  # topic_tag = ""

  ## Optional client ID.
  # client_id = "Telegraf"

  ## Set the minimal supported Kafka version. Setting this enables the use of new
  ## Kafka features and APIs. Must be 0.10.2.0 or greater.
  ##   ex: version = "1.1.0"
  # version = ""

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## SASL authentication credentials. These settings should typically be used
  ## with TLS encryption enabled using the "enable_tls" option.
  # sasl_username = "kafka"
  # sasl_password = "secret"

  ## Optional SASL mechanism:
  ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
  ## (defaults to PLAIN)
  # sasl_mechanism = ""

  ## used if sasl_mechanism is GSSAPI (experimental)
  # sasl_gssapi_service_name = ""
  ## One of: KRB5_USER_AUTH or KRB5_KEYTAB_AUTH
  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
  # sasl_gssapi_kerberos_config_path = "/"
  # sasl_gssapi_realm = "realm"
  # sasl_gssapi_key_tab_path = ""
  # sasl_gssapi_disable_pafxfast = false

  ## used if sasl_mechanism is OAUTHBEARER (experimental)
  # sasl_access_token = ""

  ## SASL protocol version. When connecting to Azure EventHub set to 0.
  # sasl_version = 1

  ## Name of the consumer group.
  # consumer_group = "telegraf_metrics_consumers"

  ## Initial offset position; one of "oldest" or "newest".
  # offset = "oldest"

  ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
  # balance_strategy = "range"

  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
  ## larger messages are dropped
  max_message_len = 1000000

  ## Maximum messages to read from the broker that have not been written by an
  ## output. For best throughput set based on the number of metrics within
  ## each message and the size of the output's metric_batch_size.
  ##
  ## For example, if each message from the queue contains 10 metrics and the
  ## output metric_batch_size is 1000, setting this to 100 will ensure that a
  ## full batch is collected and the write is triggered immediately without
  ## waiting until the next flush_interval.
  # max_undelivered_messages = 1000

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```
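As a quick way to exercise the configuration above, the sketch below builds a metric in InfluxDB line protocol (matching `data_format = "influx"`) and shows how it could be published to the `telegraf` topic. This is a minimal illustration, assuming the third-party `kafka-python` package and a broker on `localhost:9092`; the `to_line_protocol` helper is hypothetical and not part of Telegraf.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render a metric as an InfluxDB line-protocol string.

    Illustrative helper only; it does not handle escaping of special
    characters or integer field suffixes.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"


line = to_line_protocol("cpu", {"host": "server01"},
                        {"usage_idle": 87.3}, 1556813561098000000)
print(line)  # cpu,host=server01 usage_idle=87.3 1556813561098000000

# Publishing requires a running broker; uncomment to send:
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers=["localhost:9092"])
# producer.send("telegraf", line.encode("utf-8"))
# producer.flush()
```

Any message produced this way would then be parsed by the plugin into a `cpu` metric with a `host` tag and a `usage_idle` field.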
[kafka]: https://kafka.apache.org
[kafka_consumer_legacy]: /plugins/inputs/kafka_consumer_legacy/README.md
[input data formats]: /docs/DATA_FORMATS_INPUT.md