Update changelog

(cherry picked from commit 18ef80c365ea8b5ca1a232ea88ab50799561715f)
MyaLongmire 2021-12-01 13:21:31 -07:00
parent df6bf48f8d
commit bc11e20405
2 changed files with 411 additions and 62 deletions

CHANGELOG.md

@@ -2,6 +2,72 @@
# Change Log
## v1.21.0-rc0 [2021-12-01]
### Release Notes
Thank you to @zak-pawel for lots of linter fixes!
### Bugfixes
- [#10112](https://github.com/influxdata/telegraf/pull/10112) `inputs.cloudwatch` fix cloudwatch metrics collection
- [#10178](https://github.com/influxdata/telegraf/pull/10178) `outputs.all` fix register bigquery to output plugins
- [#10165](https://github.com/influxdata/telegraf/pull/10165) `inputs.sysstat` fix sysstat to use unique temp file vs hard-coded
- [#10046](https://github.com/influxdata/telegraf/pull/10046) fix update nats-server to support openbsd
- [#10091](https://github.com/influxdata/telegraf/pull/10091) `inputs.prometheus` fix check error before defer in prometheus k8s
- [#10101](https://github.com/influxdata/telegraf/pull/10101) `inputs.win_perf_counters` fix add setting to win_perf_counters input to ignore localization
- [#10136](https://github.com/influxdata/telegraf/pull/10136) `inputs.snmp_trap` fix remove snmptranslate from readme and fix default path
- [#10116](https://github.com/influxdata/telegraf/pull/10116) `inputs.statsd` fix input plugin statsd parse error
- [#10131](https://github.com/influxdata/telegraf/pull/10131) fix skip knxlistener when writing the sample config
- [#10119](https://github.com/influxdata/telegraf/pull/10119) `inputs.cpu` update shirou/gopsutil to v3
- [#10074](https://github.com/influxdata/telegraf/pull/10074) `outputs.graylog` fix failing test due to port already in use
- [#9865](https://github.com/influxdata/telegraf/pull/9865) `inputs.directory_monitor` fix directory monitor input plugin when data format is CSV and csv_skip_rows>0 and csv_header_row_count>=1
- [#9862](https://github.com/influxdata/telegraf/pull/9862) `outputs.graylog` fix graylog plugin TLS support and message format
- [#9908](https://github.com/influxdata/telegraf/pull/9908) `parsers.json_v2` fix remove dead code
- [#9881](https://github.com/influxdata/telegraf/pull/9881) `outputs.graylog` fix mute graylog UDP/TCP tests by marking them as integration
- [#9751](https://github.com/influxdata/telegraf/pull/9751) bump google.golang.org/grpc from 1.39.1 to 1.40.0
### Features
- [#10200](https://github.com/influxdata/telegraf/pull/10200) `aggregators.deprecations.go` Implement deprecation infrastructure
- [#9518](https://github.com/influxdata/telegraf/pull/9518) `inputs.snmp` snmp to use gosmi
- [#10130](https://github.com/influxdata/telegraf/pull/10130) `outputs.influxdb_v2` add retry to 413 errors with InfluxDB output
- [#10144](https://github.com/influxdata/telegraf/pull/10144) `inputs.win_services` add exclude filter
- [#9995](https://github.com/influxdata/telegraf/pull/9995) `inputs.mqtt_consumer` enable extracting tag values from MQTT topics
- [#9419](https://github.com/influxdata/telegraf/pull/9419) `aggregators.all` add support of aggregator as Starlark script
- [#9561](https://github.com/influxdata/telegraf/pull/9561) `processors.regex` extend regexp processor to allow renaming of measurements, tags and fields
- [#8184](https://github.com/influxdata/telegraf/pull/8184) `outputs.http` add use_batch_format for HTTP output plugin
- [#9988](https://github.com/influxdata/telegraf/pull/9988) `inputs.kafka_consumer` add max_processing_time config to Kafka Consumer input
- [#9841](https://github.com/influxdata/telegraf/pull/9841) `inputs.sqlserver` add additional metrics to support elastic pool (sqlserver plugin)
- [#9910](https://github.com/influxdata/telegraf/pull/9910) `common.tls` filter client certificates by DNS names
- [#9942](https://github.com/influxdata/telegraf/pull/9942) `outputs.azure_data_explorer` add option to skip table creation in azure data explorer output
- [#9984](https://github.com/influxdata/telegraf/pull/9984) `processors.ifname` add more details to log messages
- [#9833](https://github.com/influxdata/telegraf/pull/9833) `common.kafka` add metadata full to config
- [#9876](https://github.com/influxdata/telegraf/pull/9876) update etc/telegraf.conf and etc/telegraf_windows.conf
- [#9256](https://github.com/influxdata/telegraf/pull/9256) `inputs.modbus` modbus connection settings (serial)
- [#9860](https://github.com/influxdata/telegraf/pull/9860) `inputs.directory_monitor` adds the ability to create and name a tag containing the filename using the directory monitor input plugin
- [#9740](https://github.com/influxdata/telegraf/pull/9740) `inputs.prometheus` add ignore_timestamp option
- [#9513](https://github.com/influxdata/telegraf/pull/9513) `processors.starlark` starlark processor example for processing sparkplug_b messages
- [#9449](https://github.com/influxdata/telegraf/pull/9449) `parsers.json_v2` support defining field/tag tables within an object table
- [#9827](https://github.com/influxdata/telegraf/pull/9827) `inputs.elasticsearch_query` add debug query output to elasticsearch_query
- [#9241](https://github.com/influxdata/telegraf/pull/9241) `inputs.snmp` telegraf to merge tables with different indexes
- [#9013](https://github.com/influxdata/telegraf/pull/9013) `inputs.opcua` allow user to select the source for the metric timestamp
- [#9706](https://github.com/influxdata/telegraf/pull/9706) `inputs.puppetagent` add measurements from puppet 5
- [#9644](https://github.com/influxdata/telegraf/pull/9644) `outputs.graylog` add graylog plugin TCP support
- [#8229](https://github.com/influxdata/telegraf/pull/8229) `outputs.azure_data_explorer` add json_timestamp_layout option
### New Input Plugins
- [#9724](https://github.com/influxdata/telegraf/pull/9724) `inputs.all` feat: add intel_pmu plugin
- [#9771](https://github.com/influxdata/telegraf/pull/9771) `inputs.all` feat: add Linux Volume Manager input plugin
- [#9236](https://github.com/influxdata/telegraf/pull/9236) `inputs.all` feat: Openstack input plugin
### New Output Plugins
- [#9891](https://github.com/influxdata/telegraf/pull/9891) `outputs.all` feat: add new groundwork output plugin
- [#9923](https://github.com/influxdata/telegraf/pull/9923) `common.tls` feat: add mongodb output plugin
- [#9346](https://github.com/influxdata/telegraf/pull/9346) `outputs.all` feat: Azure Event Hubs output plugin
## v1.20.4 [2021-11-17]
### Release Notes

etc/telegraf.conf

@@ -317,7 +317,7 @@
# # Sends metrics to Azure Data Explorer
# [[outputs.azure_data_explorer]]
# ## Azure Data Explorer cluster endpoint
# ## ex: endpoint_url = "https://clustername.australiasoutheast.kusto.windows.net"
# endpoint_url = ""
#
@@ -337,6 +337,9 @@
# ## Name of the single table to store all the metrics (Only needed if metrics_grouping_type is "SingleTable").
# # table_name = ""
#
# ## Creates tables and relevant mapping if set to true (default).
# ## Skips table and mapping creation if set to false; this is useful for running Telegraf with the lowest possible permissions, i.e. the table ingestor role.
# # create_tables = true
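#
# ## Example (illustrative, not part of the generated sample): when Telegraf's
# ## credentials only hold the table ingestor role, pre-create the tables and
# ## mappings yourself and set create_tables = false.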
# # Send aggregate metrics to Azure Monitor
@@ -370,6 +373,24 @@
# # endpoint_url = "https://monitoring.core.usgovcloudapi.net"
# # Configuration for Google Cloud BigQuery to send entries
# [[outputs.bigquery]]
# ## Credentials File
# credentials_file = "/path/to/service/account/key.json"
#
# ## Google Cloud Platform Project
# project = "my-gcp-project"
#
# ## The namespace for the metric descriptor
# dataset = "telegraf"
#
# ## Timeout for BigQuery operations.
# # timeout = "5s"
#
# ## Character to replace hyphens on Metric name
# # replace_hyphen_to = "_"
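#
# ## Example (illustrative): metrics are written to tables named after the metric,
# ## and BigQuery identifiers cannot contain hyphens, so with the default setting a
# ## metric named "cpu-total" would be stored as "cpu_total".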
# # Publish Telegraf metrics to a Google Cloud PubSub topic
# [[outputs.cloud_pubsub]]
# ## Required. Name of Google Cloud Platform (GCP) Project that owns
@@ -657,6 +678,22 @@
# force_document_id = false
# # Configuration for Event Hubs output plugin
# [[outputs.event_hubs]]
# ## The full connection string to the Event Hub (required)
# ## The shared access key must have "Send" permissions on the target Event Hub.
# connection_string = "Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=superSecret1234=;EntityPath=hubName"
#
# ## Client timeout (defaults to 30s)
# # timeout = "30s"
#
# ## Data format to output.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
# data_format = "json"
# # Send metrics to command as input over stdin
# [[outputs.exec]]
# ## Command to ingest metrics via stdin.
@@ -765,11 +802,19 @@
# ## Endpoints for your graylog instances.
# servers = ["udp://127.0.0.1:12201"]
#
# ## Connection timeout.
# # timeout = "5s"
#
# ## The field to use as the GELF short_message, if unset the static string
# ## "telegraf" will be used.
# ## example: short_message_field = "message"
# # short_message_field = ""
#
# ## According to the GELF payload specification, additional field names must be prefixed
# ## with an underscore. Previous versions did not prefix custom field names with an underscore.
# ## Set to true for backward compatibility.
# # name_field_no_prefix = false
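#
# ## Example (illustrative): with the default (false), a metric field "status" is
# ## sent as the GELF additional field "_status"; setting this option to true sends
# ## it unprefixed as "status", matching the old behaviour.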
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
@@ -778,6 +823,28 @@
# # insecure_skip_verify = false
# # Send telegraf metrics to GroundWork Monitor
# [[outputs.groundwork]]
# ## URL of your groundwork instance.
# url = "https://groundwork.example.com"
#
# ## Agent uuid for GroundWork API Server.
# agent_id = ""
#
# ## Username and password to access GroundWork API.
# username = ""
# password = ""
#
# ## Default display name for the host with services (metrics).
# # default_host = "telegraf"
#
# ## Default service state.
# # default_service_state = "SERVICE_OK"
#
# ## The name of the tag that contains the hostname.
# # resource_tag = "host"
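#
# ## Example (illustrative): a metric tagged host="web01" is reported against the
# ## GroundWork host "web01"; metrics without that tag fall back to default_host.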
# # Configurable HTTP health check resource based on metrics
# [[outputs.health]]
# ## Address and port to listen on.
@@ -861,6 +928,11 @@
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
# # data_format = "influx"
#
# ## Use batch serialization format (default) instead of line-based format.
# ## Batch format is more efficient and should be used unless line-based
# ## format is really needed.
# # use_batch_format = true
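#
# ## Example (illustrative): with data_format = "influx" and the default (true),
# ## all metrics in a flush are serialized into a single request body; setting this
# ## to false sends one HTTP request per metric.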
#
# ## HTTP Content-Encoding for write request body, can be set to "gzip" to
# ## compress body or "identity" to apply no encoding.
# # content_encoding = "identity"
@@ -1224,6 +1296,42 @@
# # tls_key = "/etc/telegraf/key.pem"
# # Sends metrics to MongoDB
# [[outputs.mongodb]]
# # connection string examples for mongodb
# dsn = "mongodb://localhost:27017"
# # dsn = "mongodb://mongod1:27017,mongod2:27017,mongod3:27017/admin&replicaSet=myReplSet&w=1"
#
# # overrides serverSelectionTimeoutMS in dsn if set
# # timeout = "30s"
#
# # default authentication, optional
# # authentication = "NONE"
#
# # for SCRAM-SHA-256 authentication
# # authentication = "SCRAM"
# # username = "root"
# # password = "***"
#
# # for x509 certificate authentication
# # authentication = "X509"
# # tls_ca = "ca.pem"
# # tls_key = "client.pem"
# # # tls_key_pwd = "changeme" # required for encrypted tls_key
# # insecure_skip_verify = false
#
# # database to store measurements and time series collections
# # database = "telegraf"
#
# # granularity can be seconds, minutes, or hours.
# # configure this value based on your input collection frequency.
# # see https://docs.mongodb.com/manual/core/timeseries-collections/#create-a-time-series-collection
# # granularity = "seconds"
#
# # optionally set a TTL to automatically expire documents from the measurement collections.
# # ttl = "360h"
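#
# ## Example (illustrative): for inputs collected every 10s, granularity = "seconds"
# ## is the natural fit, and ttl = "360h" would expire documents after 15 days.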
# # Configuration for MQTT server to send metrics to
# [[outputs.mqtt]]
# servers = ["localhost:1883"] # required.
@@ -2420,7 +2528,7 @@
# [[processors.printer]]
# # Transforms tag and field values as well as measurement, tag and field names with regex pattern
# [[processors.regex]]
# ## Tag and field conversions defined in separate sub-tables
# # [[processors.regex.tags]]
@@ -2450,6 +2558,38 @@
# # pattern = ".*category=(\\w+).*"
# # replacement = "${1}"
# # result_key = "search_category"
#
# ## Rename metric fields
# # [[processors.regex.field_rename]]
# # ## Regular expression to match on a field name
# # pattern = "^search_(\\w+)d$"
# # ## Matches of the pattern will be replaced with this string. Use ${1}
# # ## notation to use the text of the first submatch.
# # replacement = "${1}"
# # ## If the new field name already exists, you can either "overwrite" the
# # ## existing one with the value of the renamed field OR you can "keep"
# # ## both the existing and source field.
# # # result_key = "keep"
#
# ## Rename metric tags
# # [[processors.regex.tag_rename]]
# # ## Regular expression to match on a tag name
# # pattern = "^search_(\\w+)d$"
# # ## Matches of the pattern will be replaced with this string. Use ${1}
# # ## notation to use the text of the first submatch.
# # replacement = "${1}"
# # ## If the new tag name already exists, you can either "overwrite" the
# # ## existing one with the value of the renamed tag OR you can "keep"
# # ## both the existing and source tag.
# # # result_key = "keep"
#
# ## Rename metrics
# # [[processors.regex.metric_rename]]
# # ## Regular expression to match on a metric name
# # pattern = "^search_(\\w+)d$"
# # ## Matches of the pattern will be replaced with this string. Use ${1}
# # ## notation to use the text of the first submatch.
# # replacement = "${1}"
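#
# ## Example (illustrative): with the pattern and replacement shown above, a
# ## measurement named "search_loadd" would be renamed to "load"; names that do not
# ## match the pattern pass through unchanged.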
# # Rename measurements, tags, and fields that pass through this filter.
@@ -2832,6 +2972,37 @@
# # compression = 100.0
# # Aggregate metrics using a Starlark script
# [[aggregators.starlark]]
# ## The Starlark source can be set as a string in this configuration file, or
# ## by referencing a file containing the script. Only one source or script
# ## should be set at once.
# ##
# ## Source of the Starlark script.
# source = '''
# state = {}
#
# def add(metric):
# state["last"] = metric
#
# def push():
# return state.get("last")
#
# def reset():
# state.clear()
# '''
#
# ## File containing a Starlark script.
# # script = "/usr/local/bin/myscript.star"
#
# ## The constants of the Starlark script.
# # [aggregators.starlark.constants]
# # max_size = 10
# # threshold = 0.75
# # default_name = "Julia"
# # debug_mode = true
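#
# ## Example (illustrative): constants are exposed to the script as global
# ## variables, so the source above could reference max_size or threshold directly,
# ## e.g. 'if len(state) > max_size: state.clear()'.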
# # Count the occurrence of values in fields.
# [[aggregators.valuecounter]]
# ## General Aggregator Arguments:
@@ -4821,6 +4992,12 @@
# # ]
# # Read metrics about LVM physical volumes, volume groups, logical volumes.
# [[inputs.lvm]]
# ## Use sudo to run LVM commands
# use_sudo = false
# # Gathers metrics from the /3.0/reports MailChimp API
# [[inputs.mailchimp]]
# ## MailChimp API key
@@ -4968,6 +5145,11 @@
# # data_bits = 8
# # parity = "N"
# # stop_bits = 1
# # transmission_mode = "RTU"
#
# ## Trace the connection to the modbus device as debug messages
# ## Note: You have to enable telegraf's debug mode to see those messages!
# # debug_connection = false
#
# ## For Modbus over TCP you can choose between "TCP", "RTUoverTCP" and "ASCIIoverTCP"
# ## default behaviour is "TCP" if the controller is TCP
@@ -5021,6 +5203,15 @@
# { name = "tank_ph", byte_order = "AB", data_type = "INT16", scale=1.0, address = [1]},
# { name = "pump1_speed", byte_order = "ABCD", data_type = "INT32", scale=1.0, address = [3,4]},
# ]
#
# ## Enable workarounds required by some devices to work correctly
# # [inputs.modbus.workarounds]
# ## Pause between read requests sent to the device. This might be necessary for (slow) serial devices.
# # pause_between_requests = "0ms"
# ## Close the connection after every gather cycle. Usually the plugin closes the connection after a certain
# ## idle timeout; however, if you query a device with limited simultaneous connectivity (e.g. serial devices)
# ## from multiple instances, you might want to stay connected only during the gather cycle and disconnect afterwards.
# # close_connection_after_gather = false
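#
# ## Example (illustrative): for a slow serial device shared between two Telegraf
# ## instances, pause_between_requests = "50ms" together with
# ## close_connection_after_gather = true keeps the line free between cycles.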
# # Read metrics from one or many MongoDB servers
@@ -5517,6 +5708,12 @@
# ## Password. Required for auth_method = "UserName"
# # password = ""
# #
# ## Option to select the metric timestamp to use. Valid options are:
# ## "gather" -- uses the time of receiving the data in telegraf
# ## "server" -- uses the timestamp provided by the server
# ## "source" -- uses the timestamp provided by the source
# # timestamp = "gather"
# #
# ## Node ID configuration
# ## name - field name to use in the output
# ## namespace - OPC UA namespace of the node (integer value 0 thru 3)
@@ -5604,6 +5801,59 @@
# timeout = 1000
# # Collects performance metrics from OpenStack services
# [[inputs.openstack]]
# ## The recommended interval to poll is '30m'
#
# ## The identity endpoint to authenticate against and get the service catalog from.
# authentication_endpoint = "https://my.openstack.cloud:5000"
#
# ## The domain to authenticate against when using a V3 identity endpoint.
# # domain = "default"
#
# ## The project to authenticate as.
# # project = "admin"
#
# ## User authentication credentials. Must have admin rights.
# username = "admin"
# password = "password"
#
# ## Available services are:
# ## "agents", "aggregates", "flavors", "hypervisors", "networks", "nova_services",
# ## "ports", "projects", "servers", "services", "stacks", "storage_pools", "subnets", "volumes"
# # enabled_services = ["services", "projects", "hypervisors", "flavors", "networks", "volumes"]
#
# ## Collect Server Diagnostics
# # server_diagnotics = false
#
# ## Output secrets such as adminPass (for server) and UserID (for volume).
# # output_secrets = false
#
# ## Amount of time allowed to complete the HTTP(s) request.
# # timeout = "5s"
#
# ## HTTP Proxy support
# # http_proxy_url = ""
#
# ## Optional TLS Config
# # tls_ca = "/path/to/cafile"
# # tls_cert = "/path/to/certfile"
# # tls_key = "/path/to/keyfile"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Options for tags received from Openstack
# # tag_prefix = "openstack_tag_"
# # tag_value = "true"
#
# ## Timestamp format for timestamp data received from Openstack.
# ## If false, the format is unix nanoseconds.
# # human_readable_timestamps = false
#
# ## Measure Openstack call duration
# # measure_openstack_requests = false
# # Read current weather and forecasts data from openweathermap.org
# [[inputs.openweathermap]]
# ## OpenWeatherMap API key.
@@ -6116,6 +6366,9 @@
# ## SNMP version; can be 1, 2, or 3.
# # version = 2
#
# ## Path to MIB files
# # path = ["/usr/share/snmp/mibs"]
#
# ## Agent host tag; the tag used to reference the source host
# # agent_host_tag = "agent_host"
#
@@ -6634,30 +6887,6 @@
###############################################################################
# # Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
# [[inputs.KNXListener]]
# ## Type of KNX-IP interface.
# ## Can be either "tunnel" or "router".
# # service_type = "tunnel"
#
# ## Address of the KNX-IP interface.
# service_address = "localhost:3671"
#
# ## Measurement definition(s)
# # [[inputs.knx_listener.measurement]]
# # ## Name of the measurement
# # name = "temperature"
# # ## Datapoint-Type (DPT) of the KNX messages
# # dpt = "9.001"
# # ## List of Group-Addresses (GAs) assigned to the measurement
# # addresses = ["5/5/1"]
#
# # [[inputs.knx_listener.measurement]]
# # name = "illumination"
# # dpt = "9.004"
# # addresses = ["5/5/3"]
# # Pull Metric Statistics from Aliyun CMS
# [[inputs.aliyuncms]]
# ## Aliyun Credentials
@@ -7500,6 +7729,55 @@
# # token = "some-long-shared-secret-token"
# # Intel Performance Monitoring Unit plugin exposes Intel PMU metrics available through Linux Perf subsystem
# [[inputs.intel_pmu]]
# ## List of filesystem locations of JSON files that contain PMU event definitions.
# event_definitions = ["/var/cache/pmu/GenuineIntel-6-55-4-core.json", "/var/cache/pmu/GenuineIntel-6-55-4-uncore.json"]
#
# ## List of core events measurement entities. There can be more than one core_events section.
# [[inputs.intel_pmu.core_events]]
# ## List of events to be counted. Event names shall match names from event_definitions files.
# ## A single entry can contain the name of the event (case insensitive) augmented with config options and perf modifiers.
# ## If absent, all core events from the provided event_definitions are counted, skipping unresolvable ones.
# events = ["INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD_ANY:config1=0x4043200000000k"]
#
# ## Limits the counting of events to core numbers specified.
# ## If absent, events are counted on all cores.
# ## Single "0", multiple "0,1,2" and range "0-2" notation is supported for each array element.
# ## example: cores = ["0,2", "4", "12-16"]
# cores = ["0"]
#
# ## Indicator that the plugin shall attempt to run core_events.events as a single perf group.
# ## If absent or set to false, each event is counted individually. Defaults to false.
# ## This limits the number of events that can be measured to a maximum of available hardware counters per core.
# ## Could vary depending on type of event, use of fixed counters.
# # perf_group = false
#
# ## Optionally set a custom tag value that will be added to every measurement within this events group.
# ## Can be applied to any group of events, unrelated to perf_group setting.
# # events_tag = ""
#
# ## List of uncore event measurement entities. There can be more than one uncore_events section.
# [[inputs.intel_pmu.uncore_events]]
# ## List of events to be counted. Event names shall match names from event_definitions files.
# ## A single entry can contain the name of the event (case insensitive) augmented with config options and perf modifiers.
# ## If absent, all uncore events from the provided event_definitions are counted, skipping unresolvable ones.
# events = ["UNC_CHA_CLOCKTICKS", "UNC_CHA_TOR_OCCUPANCY.IA_MISS"]
#
# ## Limits the counting of events to specified sockets.
# ## If absent, events are counted on all sockets.
# ## Single "0", multiple "0,1" and range "0-1" notation is supported for each array element.
# ## example: sockets = ["0-2"]
# sockets = ["0"]
#
# ## Indicator that the plugin shall provide an aggregated value for multiple units of the same type distributed in an uncore.
# ## If absent or set to false, events for each unit are exposed as separate metrics. Defaults to false.
# # aggregate_uncore_units = false
#
# ## Optionally set a custom tag value that will be added to every measurement within this events group.
# # events_tag = ""
# # Intel Resource Director Technology plugin
# [[inputs.intel_rdt]]
# ## Optionally set sampling interval to Nx100ms.
@@ -7667,6 +7945,15 @@
# ## waiting until the next flush_interval.
# # max_undelivered_messages = 1000
#
# ## Maximum amount of time the consumer should take to process messages. If
# ## the debug log prints messages from sarama about 'abandoning subscription
# ## to [topic] because consuming was taking too long', increase this value to
# ## longer than the time taken by the output plugin(s).
# ##
# ## Note that the effective timeout could be between 'max_processing_time' and
# ## '2 * max_processing_time'.
# # max_processing_time = "100ms"
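#
# ## Example (illustrative): if your output plugins can take up to two seconds to
# ## flush a batch, max_processing_time = "2s" avoids sarama abandoning the
# ## subscription mid-write.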
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -7864,18 +8151,20 @@
# ## servers = ["ssl://localhost:1883"]
# ## servers = ["ws://localhost:1883"]
# servers = ["tcp://127.0.0.1:1883"]
#
# ## Topics that will be subscribed to.
# topics = [
# "telegraf/host01/cpu",
# "telegraf/+/mem",
# "sensors/#",
# ]
#
# ## The message topic will be stored in a tag specified by this value. If set
# ## to the empty string no topic tag will be created.
# # topic_tag = "topic"
#
# ## QoS policy for messages
# ## 0 = at most once
# ## 1 = at least once
@@ -7884,10 +8173,8 @@
# ## When using a QoS of 1 or 2, you should enable persistent_session to allow
# ## resuming unacknowledged messages.
# # qos = 0
#
# ## Connection timeout for initial connection in seconds
# # connection_timeout = "30s"
#
# ## Maximum messages to read from the broker that have not been written by an
# ## output. For best throughput set based on the number of metrics within
# ## each message and the size of the output's metric_batch_size.
@@ -7897,33 +8184,37 @@
# ## full batch is collected and the write is triggered immediately without
# ## waiting until the next flush_interval.
# # max_undelivered_messages = 1000
#
# ## Persistent session disables clearing of the client session on connection.
# ## In order for this option to work you must also set client_id to identify
# ## the client. To receive messages that arrived while the client is offline,
# ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
# ## publishing.
# # persistent_session = false
#
# ## If unset, a random client ID will be generated.
# # client_id = ""
#
# ## Username and password to connect to the MQTT server.
# # username = "telegraf"
# # password = "metricsmetricsmetricsmetrics"
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
# ## Enable extracting tag values from MQTT topics
# ## _ denotes an ignored entry in the topic path
# ## [[inputs.mqtt_consumer.topic_parsing]]
# ## topic = ""
# ## measurement = ""
# ## tags = ""
# ## fields = ""
# ## [inputs.mqtt_consumer.topic_parsing.types]
# ##
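#
# ## Example (illustrative): for the topic "telegraf/host01/cpu" subscribed above, the
# ## following would store "host01" in a "host" tag and use "cpu" as the measurement name:
# ## [[inputs.mqtt_consumer.topic_parsing]]
# ##   topic = "telegraf/+/+"
# ##   measurement = "_/_/measurement"
# ##   tags = "_/host/_"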
# # Read metrics from NATS subject(s)
@@ -8487,42 +8778,34 @@
#
# ## "database_type" enables a specific set of queries depending on the database type. If specified, it replaces azuredb = true/false and query_version = 2
# ## In the config file, the sql server plugin section should be repeated each with a set of servers for a specific database_type.
# ## Possible values for database_type are - "AzureSQLDB" or "AzureSQLManagedInstance" or "SQLServer"
# ## Possible values for database_type are - "SQLServer" or "AzureSQLDB" or "AzureSQLManagedInstance" or "AzureSQLPool"
#
# database_type = "SQLServer"
#
# ## A list of queries to include. If not specified, all the below listed queries are used.
# include_query = []
#
# ## A list of queries to explicitly ignore.
# exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]
#
# ## Queries enabled by default for database_type = "SQLServer" are -
# ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
# ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu, SQLServerAvailabilityReplicaStates, SQLServerDatabaseReplicaStates
#
# ## Queries enabled by default for database_type = "AzureSQLDB" are -
# ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
# ## AzureSQLDBOsWaitstats, AzureSQLDBMemoryClerks, AzureSQLDBPerformanceCounters, AzureSQLDBRequests, AzureSQLDBSchedulers
#
# # database_type = "AzureSQLDB"
#
# ## A list of queries to include. If not specified, all the above listed queries are used.
# # include_query = []
#
# ## A list of queries to explicitly ignore.
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
# ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
# ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers
#
# # database_type = "AzureSQLManagedInstance"
# ## Queries enabled by default for database_type = "AzureSQLPool" are -
# ## AzureSQLPoolResourceStats, AzureSQLPoolResourceGovernance, AzureSQLPoolDatabaseIO, AzureSQLPoolWaitStats,
# ## AzureSQLPoolMemoryClerks, AzureSQLPoolPerformanceCounters, AzureSQLPoolSchedulers
#
# # include_query = []
#
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "SQLServer" are -
# ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
# ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu
#
# database_type = "SQLServer"
#
# include_query = []
#
# ## SQLServerAvailabilityReplicaStates and SQLServerDatabaseReplicaStates are optional queries and hence excluded here as default
# exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]
#
# ## Following are old config settings, you may use them only if you are using the earlier flavor of queries, however it is recommended to use
# ## Following are old config settings
# ## You may use them only if you are using the earlier flavor of queries, however it is recommended to use
# ## the new mechanism of identifying the database_type there by use it's corresponding queries
#
# ## Optional parameter, setting this to 2 will use a new version