Update changelog
(cherry picked from commit f88373afa465bd7e4e4cd9030115238582166b80)
This commit is contained in: parent 34151c47a6, commit 7d3b7fc2f9

CHANGELOG.md: 57 changed lines

@@ -1,3 +1,59 @@
## v1.17.0-rc0 [2020-12-10]

#### Release Notes

- Starlark plugins can now store state between runs using a global state variable

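For illustration only, a minimal sketch of a Starlark processor using the new shared `state` global from #8447 (the field names are hypothetical):

```toml
[[processors.starlark]]
  source = '''
# "state" is a global dict that Telegraf preserves between runs of apply()
state = {}

def apply(metric):
    # remember the previous value of a hypothetical "value" field
    last = state.get("last_value")
    state["last_value"] = metric.fields.get("value")
    if last != None:
        metric.fields["previous_value"] = last
    return metric
'''
```
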
#### Bugfixes

- [#8505](https://github.com/influxdata/telegraf/pull/8505) `inputs.vsphere` Fixed misspelled check for datacenter
- [#8499](https://github.com/influxdata/telegraf/pull/8499) `processors.execd` Adding support for new lines in influx line protocol fields.
- [#8254](https://github.com/influxdata/telegraf/pull/8254) `serializers.carbon2` Fix carbon2 tests
- [#8498](https://github.com/influxdata/telegraf/pull/8498) `inputs.http_response` fixed network test
- [#8276](https://github.com/influxdata/telegraf/pull/8276) `parsers.grok` Update grok package to support field names containing '-' and '.'
- [#8414](https://github.com/influxdata/telegraf/pull/8414) `inputs.bcache` Fix tests for Windows - part 1

#### Features

- [#8038](https://github.com/influxdata/telegraf/pull/8038) `inputs.jenkins` feat: add build number field to jenkins_job measurement
- [#7345](https://github.com/influxdata/telegraf/pull/7345) `inputs.ping` Add percentiles to the ping plugin
- [#8369](https://github.com/influxdata/telegraf/pull/8369) `inputs.sqlserver` Added tags for monitoring readable secondaries for Azure SQL MI
- [#8379](https://github.com/influxdata/telegraf/pull/8379) `inputs.sqlserver` SQL Server HA/DR Availability Group queries
- [#8520](https://github.com/influxdata/telegraf/pull/8520) Add initialization example to mock-plugin.
- [#8426](https://github.com/influxdata/telegraf/pull/8426) `inputs.snmp` Add support to convert snmp hex strings to integers
- [#8509](https://github.com/influxdata/telegraf/pull/8509) `inputs.statsd` Add configurable Max TTL duration for statsd input plugin entries
- [#8508](https://github.com/influxdata/telegraf/pull/8508) `inputs.bind` Add configurable timeout to bind input plugin http call
- [#8368](https://github.com/influxdata/telegraf/pull/8368) `inputs.sqlserver` Added is_primary_replica for monitoring readable secondaries for Azure SQL DB
- [#8462](https://github.com/influxdata/telegraf/pull/8462) `inputs.sqlserver` sqlAzureMIRequests - remove duplicate column [session_db_name]
- [#8464](https://github.com/influxdata/telegraf/pull/8464) `inputs.sqlserver` Add column measurement_db_type to output of all queries if not empty
- [#8389](https://github.com/influxdata/telegraf/pull/8389) `inputs.opcua` Add node groups to opcua input plugin
- [#8432](https://github.com/influxdata/telegraf/pull/8432) add support for linux/ppc64le
- [#8474](https://github.com/influxdata/telegraf/pull/8474) `inputs.modbus` Add FLOAT64-IEEE support to inputs.modbus (#8361) (by @Nemecsek)
- [#8447](https://github.com/influxdata/telegraf/pull/8447) `processors.starlark` Add the shared state to the global scope to get previous data
- [#8383](https://github.com/influxdata/telegraf/pull/8383) `inputs.zfs` Add dataset metrics to zfs input
- [#8429](https://github.com/influxdata/telegraf/pull/8429) `outputs.nats` Added "name" parameter to NATS output plugin
- [#8477](https://github.com/influxdata/telegraf/pull/8477) `inputs.http` proxy support for http input
- [#8466](https://github.com/influxdata/telegraf/pull/8466) `inputs.snmp` Translate snmp field values
- [#8435](https://github.com/influxdata/telegraf/pull/8435) `common.kafka` Enable kafka zstd compression and idempotent writes
- [#8056](https://github.com/influxdata/telegraf/pull/8056) `inputs.monit` Add response_time to monit plugin
- [#8446](https://github.com/influxdata/telegraf/pull/8446) update to go 1.15.5
- [#8428](https://github.com/influxdata/telegraf/pull/8428) `aggregators.basicstats` Add rate and interval to the basicstats aggregator plugin

#### New Parser Plugins

- [#7778](https://github.com/influxdata/telegraf/pull/7778) `parsers.prometheus` Add a parser plugin for prometheus

#### New Input Plugins

- [#8163](https://github.com/influxdata/telegraf/pull/8163) `inputs.riemann` Support Riemann-Protobuf Listener

#### New Output Plugins

- [#8296](https://github.com/influxdata/telegraf/pull/8296) `outputs.yandex_cloud_monitoring` #8295 Initial Yandex.Cloud monitoring
- [#8202](https://github.com/influxdata/telegraf/pull/8202) `outputs.all` A new Logz.io output plugin


## v1.16.3 [2020-12-01]

#### Bugfixes

@@ -18,6 +74,7 @@
- [#8404](https://github.com/influxdata/telegraf/pull/8404) `outputs.wavefront` Wavefront output should distinguish between retryable and non-retryable errors
- [#8401](https://github.com/influxdata/telegraf/pull/8401) `processors.starlark` Allow catching errors that occur in the apply function


## v1.16.2 [2020-11-13]

#### Bugfixes


@@ -1 +1 @@
1.16.3
1.17.0

@@ -171,7 +171,7 @@
## HTTP Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "identity"
# content_encoding = "gzip"

## When true, Telegraf will output unsigned integers as unsigned values,
## i.e.: "42u". You will need a version of InfluxDB supporting unsigned
@@ -883,14 +883,19 @@
# ## routing_key = "telegraf"
# # routing_key = ""
#
# ## CompressionCodec represents the various compression codecs recognized by
# ## Compression codec represents the various compression codecs recognized by
# ## Kafka in messages.
# ##  0 : No compression
# ##  1 : Gzip compression
# ##  2 : Snappy compression
# ##  3 : LZ4 compression
# ##  0 : None
# ##  1 : Gzip
# ##  2 : Snappy
# ##  3 : LZ4
# ##  4 : ZSTD
# # compression_codec = 0
#
# ## Idempotent Writes
# ## If enabled, exactly one copy of each message is written.
# # idempotent_writes = false
#
# ## RequiredAcks is used in Produce Requests to tell the broker how many
# ## replica acknowledgements it must see before responding
# ##  0 : the producer never waits for an acknowledgement from the broker.
@@ -916,7 +921,6 @@
# # max_message_bytes = 1000000
#
# ## Optional TLS Config
# # enable_tls = true
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
@@ -927,6 +931,23 @@
# # sasl_username = "kafka"
# # sasl_password = "secret"
#
# ## Optional SASL:
# ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
# ## (defaults to PLAIN)
# # sasl_mechanism = ""
#
# ## used if sasl_mechanism is GSSAPI (experimental)
# # sasl_gssapi_service_name = ""
# # ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
# # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
# # sasl_gssapi_kerberos_config_path = "/"
# # sasl_gssapi_realm = "realm"
# # sasl_gssapi_key_tab_path = ""
# # sasl_gssapi_disable_pafxfast = false
#
# ## used if sasl_mechanism is OAUTHBEARER (experimental)
# # sasl_access_token = ""
#
# ## SASL protocol version. When connecting to Azure EventHub set to 0.
# # sasl_version = 1
#
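As a usage sketch, an `[[outputs.kafka]]` block opting into the zstd compression and idempotent writes enabled by #8435 might look like this (broker address and topic are placeholders):

```toml
[[outputs.kafka]]
  brokers = ["localhost:9092"]   # placeholder broker
  topic = "telegraf"             # placeholder topic
  ## 4 selects the newly supported ZSTD codec
  compression_codec = 4
  ## write exactly one copy of each message
  idempotent_writes = true
```
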
@@ -1023,6 +1044,23 @@
#


# # Send aggregate metrics to Logz.io
# [[outputs.logzio]]
#   ## Connection timeout, defaults to "5s" if not set.
#   timeout = "5s"
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#
#   ## Logz.io account token
#   token = "your logz.io token" # required
#
#   ## Use your listener URL for your Logz.io account region.
#   # url = "https://listener.logz.io:8071"


# # Configuration for MQTT server to send metrics to
# [[outputs.mqtt]]
#   servers = ["localhost:1883"] # required.
@@ -1075,6 +1113,9 @@
#   ## URLs of NATS servers
#   servers = ["nats://localhost:4222"]
#
#   ## Optional client name
#   # name = ""
#
#   ## Optional credentials
#   # username = ""
#   # password = ""
@@ -1435,6 +1476,118 @@
#   # default_appname = "Telegraf"


# # Configuration for Amazon Timestream output.
# [[outputs.timestream]]
#   ## Amazon Region
#   region = "us-east-1"
#
#   ## Amazon Credentials
#   ## Credentials are loaded in the following order:
#   ## 1) Assumed credentials via STS if role_arn is specified
#   ## 2) Explicit credentials from 'access_key' and 'secret_key'
#   ## 3) Shared profile from 'profile'
#   ## 4) Environment variables
#   ## 5) Shared credentials file
#   ## 6) EC2 Instance Profile
#   #access_key = ""
#   #secret_key = ""
#   #token = ""
#   #role_arn = ""
#   #profile = ""
#   #shared_credential_file = ""
#
#   ## Endpoint to make request against, the correct endpoint is automatically
#   ## determined and this option should only be set if you wish to override the
#   ## default.
#   ## ex: endpoint_url = "http://localhost:8000"
#   # endpoint_url = ""
#
#   ## Timestream database where the metrics will be inserted.
#   ## The database must exist prior to starting Telegraf.
#   database_name = "yourDatabaseNameHere"
#
#   ## Specifies if the plugin should describe the Timestream database upon starting
#   ## to validate if it has access necessary permissions, connection, etc., as a safety check.
#   ## If the describe operation fails, the plugin will not start
#   ## and therefore the Telegraf agent will not start.
#   describe_database_on_start = false
#
#   ## The mapping mode specifies how Telegraf records are represented in Timestream.
#   ## Valid values are: single-table, multi-table.
#   ## For example, consider the following data in line protocol format:
#   ## weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
#   ## airquality,location=us-west no2=5,pm25=16 1465839830100400200
#   ## where weather and airquality are the measurement names, location and season are tags,
#   ## and temperature, humidity, no2, pm25 are fields.
#   ## In multi-table mode:
#   ##  - first line will be ingested to table named weather
#   ##  - second line will be ingested to table named airquality
#   ##  - the tags will be represented as dimensions
#   ##  - first table (weather) will have two records:
#   ##      one with measurement name equals to temperature,
#   ##      another with measurement name equals to humidity
#   ##  - second table (airquality) will have two records:
#   ##      one with measurement name equals to no2,
#   ##      another with measurement name equals to pm25
#   ##  - the Timestream tables from the example will look like this:
#   ##      TABLE "weather":
#   ##        time | location | season | measure_name | measure_value::bigint
#   ##        2016-06-13 17:43:50 | us-midwest | summer | temperature | 82
#   ##        2016-06-13 17:43:50 | us-midwest | summer | humidity | 71
#   ##      TABLE "airquality":
#   ##        time | location | measure_name | measure_value::bigint
#   ##        2016-06-13 17:43:50 | us-west | no2 | 5
#   ##        2016-06-13 17:43:50 | us-west | pm25 | 16
#   ## In single-table mode:
#   ##  - the data will be ingested to a single table, which name will be valueOf(single_table_name)
#   ##  - measurement name will stored in dimension named valueOf(single_table_dimension_name_for_telegraf_measurement_name)
#   ##  - location and season will be represented as dimensions
#   ##  - temperature, humidity, no2, pm25 will be represented as measurement name
#   ##  - the Timestream table from the example will look like this:
#   ##      Assuming:
#   ##        - single_table_name = "my_readings"
#   ##        - single_table_dimension_name_for_telegraf_measurement_name = "namespace"
#   ##      TABLE "my_readings":
#   ##        time | location | season | namespace | measure_name | measure_value::bigint
#   ##        2016-06-13 17:43:50 | us-midwest | summer | weather | temperature | 82
#   ##        2016-06-13 17:43:50 | us-midwest | summer | weather | humidity | 71
#   ##        2016-06-13 17:43:50 | us-west | NULL | airquality | no2 | 5
#   ##        2016-06-13 17:43:50 | us-west | NULL | airquality | pm25 | 16
#   ## In most cases, using multi-table mapping mode is recommended.
#   ## However, you can consider using single-table in situations when you have thousands of measurement names.
#   mapping_mode = "multi-table"
#
#   ## Only valid and required for mapping_mode = "single-table"
#   ## Specifies the Timestream table where the metrics will be uploaded.
#   # single_table_name = "yourTableNameHere"
#
#   ## Only valid and required for mapping_mode = "single-table"
#   ## Describes what will be the Timestream dimension name for the Telegraf
#   ## measurement name.
#   # single_table_dimension_name_for_telegraf_measurement_name = "namespace"
#
#   ## Specifies if the plugin should create the table, if the table do not exist.
#   ## The plugin writes the data without prior checking if the table exists.
#   ## When the table does not exist, the error returned from Timestream will cause
#   ## the plugin to create the table, if this parameter is set to true.
#   create_table_if_not_exists = true
#
#   ## Only valid and required if create_table_if_not_exists = true
#   ## Specifies the Timestream table magnetic store retention period in days.
#   ## Check Timestream documentation for more details.
#   create_table_magnetic_store_retention_period_in_days = 365
#
#   ## Only valid and required if create_table_if_not_exists = true
#   ## Specifies the Timestream table memory store retention period in hours.
#   ## Check Timestream documentation for more details.
#   create_table_memory_store_retention_period_in_hours = 24
#
#   ## Only valid and optional if create_table_if_not_exists = true
#   ## Specifies the Timestream table tags.
#   ## Check Timestream documentation for more details
#   # create_table_tags = { "foo" = "bar", "environment" = "dev"}


# # Write metrics to Warp 10
# [[outputs.warp10]]
#   # Prefix to add to the measurement.
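A condensed sketch of the new Timestream output using only the required settings above (region and database name are placeholders; the database must already exist):

```toml
[[outputs.timestream]]
  region = "us-east-1"           # placeholder region
  database_name = "telegraf"     # placeholder; must exist in Timestream
  mapping_mode = "multi-table"   # one table per measurement
  create_table_if_not_exists = true
  create_table_magnetic_store_retention_period_in_days = 365
  create_table_memory_store_retention_period_in_hours = 24
```
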
@@ -1509,6 +1662,12 @@
#   ## data point exceeding this limit if not truncated. Defaults to 'false' to provide backwards compatibility.
#   #truncate_tags = false
#
#   ## Flush the internal buffers after each batch. This effectively bypasses the background sending of metrics
#   ## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending
#   ## of metrics will block for a longer time, but this will be handled gracefully by the internal buffering in
#   ## Telegraf.
#   #immediate_flush = true
#
#   ## Define a mapping, namespaced by metric prefix, from string values to numeric values
#   ##   deprecated in 1.9; use the enum processor plugin
#   #[[outputs.wavefront.string_to_number.elasticsearch]]
@@ -1517,6 +1676,18 @@
#     # red = 0.0


# # Send aggregated metrics to Yandex.Cloud Monitoring
# [[outputs.yandex_cloud_monitoring]]
#   ## Timeout for HTTP writes.
#   # timeout = "20s"
#
#   ## Yandex.Cloud monitoring API endpoint. Normally should not be changed
#   # endpoint_url = "https://monitoring.api.cloud.yandex.net/monitoring/v2/data/write"
#
#   ## All user metrics should be sent with "custom" service specified. Normally should not be changed
#   # service = "custom"


###############################################################################
#                            PROCESSOR PLUGINS                                #
###############################################################################
@@ -1779,17 +1950,25 @@
#       value_key = "value"


# # Given a tag of a TCP or UDP port number, add a tag of the service name looked up in the system services file
# # Given a tag/field of a TCP or UDP port number, add a tag/field of the service name looked up in the system services file
# [[processors.port_name]]
# [[processors.port_name]]
#   ## Name of tag holding the port number
#   # tag = "port"
#   ## Or name of the field holding the port number
#   # field = "port"
#
#   ## Name of output tag where service name will be added
#   ## Name of output tag or field (depending on the source) where service name will be added
#   # dest = "service"
#
#   ## Default tcp or udp
#   # default_protocol = "tcp"
#
#   ## Tag containing the protocol (tcp or udp, case-insensitive)
#   # protocol_tag = "proto"
#
#   ## Field containing the protocol (tcp or udp, case-insensitive)
#   # protocol_field = "proto"


# # Print all metrics that pass through this filter.
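A minimal sketch of the extended `processors.port_name` configuration above, using the new field-based lookup (field names are illustrative):

```toml
[[processors.port_name]]
  ## look the port number up in a field instead of a tag (new in this release)
  field = "port"
  ## write the looked-up service name to this tag/field
  dest = "service"
  ## read the protocol from a field when one is present
  protocol_field = "proto"
  default_protocol = "tcp"
```
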
@@ -2362,7 +2541,7 @@
#   ## If not specified, then default is:
#   bcachePath = "/sys/fs/bcache"
#
#   ## By default, telegraf gather stats for all bcache devices
#   ## By default, Telegraf gather stats for all bcache devices
#   ## Setting devices will restrict the stats to the specified
#   ## bcache devices.
#   bcacheDevs = ["bcache0"]
@@ -2385,6 +2564,9 @@
#   # urls = ["http://localhost:8053/xml/v3"]
#   # gather_memory_contexts = false
#   # gather_views = false
#
#   ## Timeout for http requests made by bind nameserver
#   # timeout = "4s"


# # Collect bond interface status, slaves statuses and failures count
@@ -3188,6 +3370,9 @@
#   ## compress body or "identity" to apply no encoding.
#   # content_encoding = "identity"
#
#   ## HTTP Proxy support
#   # http_proxy_url = ""
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
@@ -3256,6 +3441,12 @@
#   # response_string_match = "ok"
#   # response_string_match = "\".*_status\".?:.?\"up\""
#
#   ## Expected response status code.
#   ## The status code of the response is compared to this value. If they match, the field
#   ## "response_status_code_match" will be 1, otherwise it will be 0. If the
#   ## expected status code is 0, the check is disabled and the field won't be added.
#   # response_status_code = 0
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
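A short sketch of the new expected-status-code check (URL is a placeholder):

```toml
[[inputs.http_response]]
  urls = ["https://example.org"]   # placeholder URL
  ## sets the field response_status_code_match to 1 when the response is a 200
  response_status_code = 200
```
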
@@ -3985,7 +4176,8 @@
#  ##        |---BA, DCBA   - Little Endian
#  ##        |---BADC       - Mid-Big Endian
#  ##        |---CDAB       - Mid-Little Endian
#  ## data_type  - INT16, UINT16, INT32, UINT32, INT64, UINT64, FLOAT32-IEEE (the IEEE 754 binary representation)
#  ## data_type  - INT16, UINT16, INT32, UINT32, INT64, UINT64,
#  ##              FLOAT32-IEEE, FLOAT64-IEEE (the IEEE 754 binary representation)
#  ##              FLOAT32, FIXED, UFIXED (fixed-point representation on input)
#  ## scale      - the final numeric variable representation
#  ## address    - variable address
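For instance, a holding-register entry using the new FLOAT64-IEEE type from #8474 might look like this (device name, controller address, byte order, and register addresses are hypothetical; a 64-bit float spans four 16-bit registers):

```toml
[[inputs.modbus]]
  name = "plc"                          # hypothetical device name
  slave_id = 1
  timeout = "1s"
  controller = "tcp://localhost:502"    # placeholder controller address
  holding_registers = [
    ## FLOAT64-IEEE reads the four registers as one IEEE 754 double
    { name = "pressure", byte_order = "ABCD", data_type = "FLOAT64-IEEE", scale = 1.0, address = [0, 1, 2, 3] },
  ]
```
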
@@ -4414,9 +4606,8 @@

# # Retrieve data from OPCUA devices
# [[inputs.opcua]]
# [[inputs.opcua]]
#   ## Device name
#   # name = "localhost"
#   ## Metric name
#   # name = "opcua"
#   #
#   ## OPC UA Endpoint URL
#   # endpoint = "opc.tcp://localhost:4840"
@@ -4453,18 +4644,41 @@
#   # password = ""
#   #
#   ## Node ID configuration
#   ## name              - the variable name
#   ## namespace         - integer value 0 thru 3
#   ## identifier_type   - s=string, i=numeric, g=guid, b=opaque
#   ## identifier        - tag as shown in opcua browser
#   ## data_type         - boolean, byte, short, int, uint, uint16, int16,
#   ##                     uint32, int32, float, double, string, datetime, number
#   ## name            - field name to use in the output
#   ## namespace       - OPC UA namespace of the node (integer value 0 thru 3)
#   ## identifier_type - OPC UA ID type (s=string, i=numeric, g=guid, b=opaque)
#   ## identifier      - OPC UA ID (tag as shown in opcua browser)
#   ## Example:
#   ## {name="ProductUri", namespace="0", identifier_type="i", identifier="2262", data_type="string", description="http://open62541.org"}
#   nodes = [
#     {name="", namespace="", identifier_type="", identifier="", data_type="", description=""},
#     {name="", namespace="", identifier_type="", identifier="", data_type="", description=""},
#   ]
#   ## {name="ProductUri", namespace="0", identifier_type="i", identifier="2262"}
#   # nodes = [
#   #   {name="", namespace="", identifier_type="", identifier=""},
#   #   {name="", namespace="", identifier_type="", identifier=""},
#   #]
#   #
#   ## Node Group
#   ## Sets defaults for OPC UA namespace and ID type so they aren't required in
#   ## every node. A group can also have a metric name that overrides the main
#   ## plugin metric name.
#   ##
#   ## Multiple node groups are allowed
#   #[[inputs.opcua.group]]
#   ## Group Metric name. Overrides the top level name. If unset, the
#   ## top level name is used.
#   # name =
#   #
#   ## Group default namespace. If a node in the group doesn't set its
#   ## namespace, this is used.
#   # namespace =
#   #
#   ## Group default identifier type. If a node in the group doesn't set its
#   ## namespace, this is used.
#   # identifier_type =
#   #
#   ## Node ID Configuration. Array of nodes with the same settings as above.
#   # nodes = [
#   #   {name="", namespace="", identifier_type="", identifier=""},
#   #   {name="", namespace="", identifier_type="", identifier=""},
#   #]


# # OpenLDAP cn=Monitor plugin
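A condensed sketch of the new node-group syntax from #8389 (endpoint, group name, namespace, and identifiers are placeholders):

```toml
[[inputs.opcua]]
  name = "opcua"
  endpoint = "opc.tcp://localhost:4840"   # placeholder endpoint

  ## group-level defaults apply to every node listed below
  [[inputs.opcua.group]]
    name = "machine_room"                 # hypothetical group metric name
    namespace = "3"
    identifier_type = "i"
    nodes = [
      {name = "temperature", identifier = "1001"},
      {name = "humidity", identifier = "1002"},
    ]
```
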
@@ -4638,6 +4852,9 @@
#   ## option of the ping command.
#   # interface = ""
#
#   ## Percentiles to calculate. This only works with the native method.
#   # percentiles = [50, 95, 99]
#
#   ## Specify the ping executable binary.
#   # binary = "ping"
#
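A small usage sketch of the new percentiles option from #7345 (host is a placeholder); note it only works with the native method:

```toml
[[inputs.ping]]
  urls = ["example.org"]        # placeholder host
  method = "native"             # percentiles require the native method
  percentiles = [50, 95, 99]
```
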
@@ -4724,6 +4941,8 @@
#   ## API connection configuration. The API token was introduced in Proxmox v6.2. Required permissions for user and token: PVEAuditor role on /.
#   base_url = "https://localhost:8006/api2/json"
#   api_token = "USER@REALM!TOKENID=UUID"
#   ## Node name, defaults to OS hostname
#   # node_name = ""
#
#   ## Optional TLS Config
#   # tls_ca = "/etc/telegraf/ca.pem"
@@ -5135,53 +5354,53 @@
#   servers = [
#     "Server=192.168.1.10;Port=1433;User Id=<user>;Password=<pw>;app name=telegraf;log=1;",
#   ]

#
#   ## "database_type" enables a specific set of queries depending on the database type. If specified, it replaces azuredb = true/false and query_version = 2
#   ## In the config file, the sql server plugin section should be repeated each with a set of servers for a specific database_type.
#   ## Possible values for database_type are - "AzureSQLDB" or "AzureSQLManagedInstance" or "SQLServer"

#   ## Queries enabled by default for database_type = "AzureSQLDB" are -
#   ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
#
#   ## Queries enabled by default for database_type = "AzureSQLDB" are -
#   ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
#   ## AzureSQLDBOsWaitstats, AzureSQLDBMemoryClerks, AzureSQLDBPerformanceCounters, AzureSQLDBRequests, AzureSQLDBSchedulers

#
#   # database_type = "AzureSQLDB"

#
#   ## A list of queries to include. If not specified, all the above listed queries are used.
#   # include_query = []

#
#   ## A list of queries to explicitly ignore.
#   # exclude_query = []

#   ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
#   ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
#
#   ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
#   ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
#   ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers

#
#   # database_type = "AzureSQLManagedInstance"

#
#   # include_query = []

#
#   # exclude_query = []

#   ## Queries enabled by default for database_type = "SQLServer" are -
#   ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
#
#   ## Queries enabled by default for database_type = "SQLServer" are -
#   ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
#   ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu

#
#   database_type = "SQLServer"

#
#   include_query = []

#
#   ## SQLServerAvailabilityReplicaStates and SQLServerDatabaseReplicaStates are optional queries and hence excluded here as default
#   exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]

#   ## Following are old config settings, you may use them only if you are using the earlier flavor of queries, however it is recommended to use
#
#   ## Following are old config settings, you may use them only if you are using the earlier flavor of queries, however it is recommended to use
#   ## the new mechanism of identifying the database_type there by use it's corresponding queries

#
#   ## Optional parameter, setting this to 2 will use a new version
#   ## of the collection queries that break compatibility with the original
#   ## dashboards.
#   ## Version 2 - is compatible from SQL Server 2012 and later versions and also for SQL Azure DB
#   # query_version = 2

#
#   ## If you are using AzureDB, setting this to true will gather resource utilization metrics
#   # azuredb = false

@@ -5499,7 +5718,7 @@
#   # tls_key = "/etc/telegraf/key.pem"


# # Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, and pools
# # Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, pools and datasets
# [[inputs.zfs]]
#   ## ZFS kstat path. Ignored on FreeBSD
#   ## If not specified, then default is:
@@ -5513,6 +5732,8 @@
#   #   "dmu_tx", "fm", "vdev_mirror_stats", "zfetchstats", "zil"]
#   ## By default, don't gather zpool stats
#   # poolMetrics = false
#   ## By default, don't gather zdataset stats
#   # datasetMetrics = false


# # Reads 'mntr' stats from one or many zookeeper servers
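Picking up the new dataset metrics from #8383 should only require the new flag:

```toml
[[inputs.zfs]]
  ## gather per-dataset statistics (new in this release)
  datasetMetrics = true
```
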
@@ -6040,7 +6261,7 @@
#   username = "cisco"
#   password = "cisco"
#
#   ## gNMI encoding requested (one of: "proto", "json", "json_ietf")
#   ## gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
#   # encoding = "proto"
#
#   ## redial in case of failures after
@@ -6243,6 +6464,32 @@
#   # token = "some-long-shared-secret-token"


# # Intel Resource Director Technology plugin
# [[inputs.intel_rdt]]
#   ## Optionally set sampling interval to Nx100ms.
#   ## This value is propagated to pqos tool. Interval format is defined by pqos itself.
#   ## If not provided or provided 0, will be set to 10 = 10x100ms = 1s.
#   # sampling_interval = "10"
#
#   ## Optionally specify the path to pqos executable.
#   ## If not provided, auto discovery will be performed.
#   # pqos_path = "/usr/local/bin/pqos"
#
#   ## Optionally specify if IPC and LLC_Misses metrics shouldn't be propagated.
#   ## If not provided, default value is false.
#   # shortened_metrics = false
#
#   ## Specify the list of groups of CPU core(s) to be provided as pqos input.
#   ## Mandatory if processes aren't set and forbidden if processes are specified.
#   ## e.g. ["0-3", "4,5,6"] or ["1-3,4"]
#   # cores = ["0-3"]
#
#   ## Specify the list of processes for which Metrics will be collected.
#   ## Mandatory if cores aren't set and forbidden if cores are specified.
#   ## e.g. ["qemu", "pmd"]
#   # processes = ["process"]


# # Read JTI OpenConfig Telemetry from listed sensors
# [[inputs.jti_openconfig_telemetry]]
#   ## List of device addresses to collect telemetry from
@@ -6314,7 +6561,6 @@
#   # version = ""
#
#   ## Optional TLS Config
#   # enable_tls = true
#   # tls_ca = "/etc/telegraf/ca.pem"
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
@@ -6322,16 +6568,42 @@
#   # insecure_skip_verify = false
#
#   ## SASL authentication credentials.  These settings should typically be used
#   ## with TLS encryption enabled using the "enable_tls" option.
#   ## with TLS encryption enabled
#   # sasl_username = "kafka"
#   # sasl_password = "secret"
#
#   ## Optional SASL:
#   ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
#   ## (defaults to PLAIN)
#   # sasl_mechanism = ""
#
#   ## used if sasl_mechanism is GSSAPI (experimental)
#   # sasl_gssapi_service_name = ""
#   # ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
#   # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
#   # sasl_gssapi_kerberos_config_path = "/"
#   # sasl_gssapi_realm = "realm"
#   # sasl_gssapi_key_tab_path = ""
#   # sasl_gssapi_disable_pafxfast = false
#
#   ## used if sasl_mechanism is OAUTHBEARER (experimental)
#   # sasl_access_token = ""
#
#   ## SASL protocol version.  When connecting to Azure EventHub set to 0.
#   # sasl_version = 1
#
#   ## Name of the consumer group.
#   # consumer_group = "telegraf_metrics_consumers"
#
#   ## Compression codec represents the various compression codecs recognized by
#   ## Kafka in messages.
#   ##  0 : None
#   ##  1 : Gzip
#   ##  2 : Snappy
#   ##  3 : LZ4
#   ##  4 : ZSTD
#   # compression_codec = 0
#
#   ## Initial offset position; one of "oldest" or "newest".
#   # offset = "oldest"
#
@@ -6831,6 +7103,35 @@
#   # insecure_skip_verify = false


# # Riemann protobuff listener.
# [[inputs.riemann_listener]]
#   ## URL to listen on.
#   ## Default is "tcp://:5555"
#   # service_address = "tcp://:8094"
#   # service_address = "tcp://127.0.0.1:http"
#   # service_address = "tcp4://:8094"
#   # service_address = "tcp6://:8094"
#   # service_address = "tcp6://[2001:db8::1]:8094"
#
#   ## Maximum number of concurrent connections.
#   ## 0 (default) is unlimited.
#   # max_connections = 1024
#   ## Read timeout.
#   ## 0 (default) is unlimited.
#   # read_timeout = "30s"
#   ## Optional TLS configuration.
#   # tls_cert = "/etc/telegraf/cert.pem"
#   # tls_key = "/etc/telegraf/key.pem"
#   ## Enables client authentication if set.
#   # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
#   ## Maximum socket buffer size (in bytes when no unit specified).
#   # read_buffer_size = "64KiB"
#   ## Period between keep alive probes.
#   ## 0 disables keep alive probes.
#   ## Defaults to the OS configuration.
#   # keep_alive_period = "5m"


# # SFlow V5 Protocol Listener
# [[inputs.sflow]]
#   ## Address to listen for sFlow packets.
@@ -6993,6 +7294,9 @@
#   ## calculation of percentiles. Raising this limit increases the accuracy
#   ## of percentiles but also increases the memory usage and cpu time.
#   percentile_limit = 1000
#
#   ## Max duration (TTL) for each metric to stay cached/reported without being updated.
#   #max_ttl = "1000h"


# # Suricata stats plugin
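Finally, a sketch of the new statsd cache TTL from #8509 (listen address is the plugin default):

```toml
[[inputs.statsd]]
  service_address = ":8125"
  ## expire cached metrics that have not been updated within this TTL
  max_ttl = "1000h"
```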