Update changelog
(cherry picked from commit 81f186baa81f9b09f8221a7a02a482466459f95d)
This commit is contained in:
parent 0fd0ae0953
commit 6f956411a2

CHANGELOG.md (71 additions)
@@ -1,3 +1,74 @@
## v1.19.0-rc0 [2021-06-04]

#### Release Notes

- Many linter fixes - thanks @zak-pawel and all!

#### Bugfixes

- [#9182](https://github.com/influxdata/telegraf/pull/9182) Update pgx to v4
- [#9275](https://github.com/influxdata/telegraf/pull/9275) Fix reading config files starting with http:
- [#9196](https://github.com/influxdata/telegraf/pull/9196) `serializers.prometheusremotewrite` Update dependency and remove tags with empty values
- [#9051](https://github.com/influxdata/telegraf/pull/9051) `outputs.kafka` Don't prevent telegraf from starting when there's a connection error
- [#8795](https://github.com/influxdata/telegraf/pull/8795) `parsers.prometheusremotewrite` Update prometheus dependency to v2.21.0

#### Features

- [#8987](https://github.com/influxdata/telegraf/pull/8987) Config file environment variable can be a URL
- [#9297](https://github.com/influxdata/telegraf/pull/9297) `outputs.datadog` Add HTTP proxy to datadog output
- [#9087](https://github.com/influxdata/telegraf/pull/9087) Add named timestamp formats
- [#9276](https://github.com/influxdata/telegraf/pull/9276) `inputs.vsphere` Add config option for the historical interval duration
- [#9274](https://github.com/influxdata/telegraf/pull/9274) `inputs.ping` Add an option to specify packet size
- [#9007](https://github.com/influxdata/telegraf/pull/9007) Allow multiple "--config" and "--config-directory" flags
- [#9249](https://github.com/influxdata/telegraf/pull/9249) `outputs.graphite` Allow more characters in graphite tags
- [#8351](https://github.com/influxdata/telegraf/pull/8351) `inputs.sqlserver` Added login_name
- [#9223](https://github.com/influxdata/telegraf/pull/9223) `inputs.dovecot` Add support for unix domain sockets
- [#9118](https://github.com/influxdata/telegraf/pull/9118) `processors.strings` Add UTF-8 sanitizer
- [#9156](https://github.com/influxdata/telegraf/pull/9156) `inputs.aliyuncms` Add config option list of regions to query
- [#9138](https://github.com/influxdata/telegraf/pull/9138) `common.http` Add OAuth2 to HTTP input
- [#8822](https://github.com/influxdata/telegraf/pull/8822) `inputs.sqlserver` Enable Azure Active Directory (AAD) authentication support
- [#9136](https://github.com/influxdata/telegraf/pull/9136) `inputs.cloudwatch` Add wildcard support in dimensions configuration
- [#5517](https://github.com/influxdata/telegraf/pull/5517) `inputs.mysql` Gather all mysql channels
- [#8911](https://github.com/influxdata/telegraf/pull/8911) `processors.enum` Support float64
- [#9105](https://github.com/influxdata/telegraf/pull/9105) `processors.starlark` Support nanosecond resolution timestamp
- [#9080](https://github.com/influxdata/telegraf/pull/9080) `inputs.logstash` Add support for version 7 queue stats
- [#9074](https://github.com/influxdata/telegraf/pull/9074) `parsers.prometheusremotewrite` Add starlark script for renaming metrics
- [#9032](https://github.com/influxdata/telegraf/pull/9032) `inputs.couchbase` Add ~200 more Couchbase metrics via Buckets endpoint
- [#8596](https://github.com/influxdata/telegraf/pull/8596) `inputs.sqlserver` Add service and save connection pools
- [#9042](https://github.com/influxdata/telegraf/pull/9042) `processors.starlark` Add math module
- [#6952](https://github.com/influxdata/telegraf/pull/6952) `inputs.x509_cert` Wildcard support for cert filenames
- [#9004](https://github.com/influxdata/telegraf/pull/9004) `processors.starlark` Add time module
- [#8891](https://github.com/influxdata/telegraf/pull/8891) `inputs.kinesis_consumer` Add content_encoding option with gzip and zlib support
- [#8996](https://github.com/influxdata/telegraf/pull/8996) `processors.starlark` Add an example showing how to obtain IOPS from diskio input
- [#8966](https://github.com/influxdata/telegraf/pull/8966) `inputs.http_listener_v2` Add support for snappy compression
- [#8661](https://github.com/influxdata/telegraf/pull/8661) `inputs.cisco_telemetry_mdt` Add support for events and class based query
- [#8861](https://github.com/influxdata/telegraf/pull/8861) `inputs.mongodb` Optionally collect top stats
- [#8979](https://github.com/influxdata/telegraf/pull/8979) `parsers.value` Add custom field name config option
- [#8544](https://github.com/influxdata/telegraf/pull/8544) `inputs.sqlserver` Add an optional health metric

#### New Input Plugins

- [OpenTelemetry](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/opentelemetry) - contributed by @jacobmarble
- [Intel Data Plane Development Kit (DPDK)](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dpdk) - contributed by @p-zak
- [KNX](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/knx_listener) - contributed by @DocLambda

#### New Output Plugins

- [Websocket](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/websocket) - contributed by @FZambia
- [SQL](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/sql) - contributed by @illuusio
- [AWS Cloudwatch logs](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/cloudwatch_logs) - contributed by @i-prudnikov

#### New Parser Plugins

- [Prometheus Remote Write](https://github.com/influxdata/telegraf/tree/master/plugins/parsers/prometheusremotewrite) - contributed by @helenosheaa

#### New External Plugins

- [ldap_org and ds389](https://github.com/falon/CSI-telegraf-plugins) - contributed by @falon
- [x509_crl](https://github.com/jcgonnard/telegraf-input-x590crl) - contributed by @jcgonnard
- [dnsmasq](https://github.com/machinly/dnsmasq-telegraf-plugin) - contributed by @machinly
- [Big Blue Button](https://github.com/SLedunois/bigbluebutton-telegraf-plugin) - contributed by @SLedunois

## v1.18.3 [2021-05-20]

#### Release Notes
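One entry in the changelog above, "Add named timestamp formats" (#9087), refers to accepting a well-known format name in place of a raw layout string. A minimal Python sketch of the concept follows; the name-to-layout table and the `parse_timestamp` helper are illustrative only, not Telegraf's actual (Go) implementation:

```python
from datetime import datetime, timezone

# Illustrative mapping of named formats to strptime layouts.
NAMED_FORMATS = {
    "rfc3339": "%Y-%m-%dT%H:%M:%S%z",
}

def parse_timestamp(value: str, fmt: str) -> datetime:
    """Parse `value` using a named format, 'unix' epoch seconds, or a raw layout."""
    if fmt == "unix":
        return datetime.fromtimestamp(float(value), tz=timezone.utc)
    # Fall back to treating `fmt` as a literal strptime layout.
    layout = NAMED_FORMATS.get(fmt, fmt)
    return datetime.strptime(value, layout)

# Both inputs denote the same instant, expressed via different named formats.
print(parse_timestamp("1622764800", "unix"))
print(parse_timestamp("2021-06-04T00:00:00+0000", "rfc3339"))
```

The point of the feature is that a user writes `"unix"` or `"rfc3339"` in the config instead of memorizing a layout string.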
@@ -90,8 +90,8 @@
## If set to -1, no archives are removed.
# logfile_rotation_max_archives = 5

- ## Pick a timezone to use when logging or type 'local' for local time. Example: 'America/Chicago'.
- ## See https://socketloop.com/tutorials/golang-display-list-of-timezones-with-gmt for timezone formatting options.
+ ## Pick a timezone to use when logging or type 'local' for local time.
+ ## Example: America/Chicago
# log_with_timezone = ""

## Override default hostname, if empty use os.Hostname()
@@ -99,7 +99,6 @@
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false


###############################################################################
#                            OUTPUT PLUGINS                                   #
###############################################################################
@@ -347,20 +346,6 @@
# # endpoint_url = "https://monitoring.core.usgovcloudapi.net"


# [[outputs.bigquery]]
# ## GCP Project
# project = "erudite-bloom-151019"
#
# ## The BigQuery dataset
# dataset = "telegraf"
#
# ## Timeout for BigQuery operations.
# # timeout = "5s"
#
# ## Character to replace hyphens on Metric name
# # replace_hyphen_to = "_"


# # Publish Telegraf metrics to a Google Cloud PubSub topic
# [[outputs.cloud_pubsub]]
# ## Required. Name of Google Cloud Platform (GCP) Project that owns
@@ -453,6 +438,63 @@
# # high_resolution_metrics = false


# # Configuration for AWS CloudWatchLogs output.
# [[outputs.cloudwatch_logs]]
# ## The region is the Amazon region that you wish to connect to.
# ## Examples include but are not limited to:
# ## - us-west-1
# ## - us-west-2
# ## - us-east-1
# ## - ap-southeast-1
# ## - ap-southeast-2
# ## ...
# region = "us-east-1"
#
# ## Amazon Credentials
# ## Credentials are loaded in the following order
# ## 1) Assumed credentials via STS if role_arn is specified
# ## 2) explicit credentials from 'access_key' and 'secret_key'
# ## 3) shared profile from 'profile'
# ## 4) environment variables
# ## 5) shared credentials file
# ## 6) EC2 Instance Profile
# #access_key = ""
# #secret_key = ""
# #token = ""
# #role_arn = ""
# #profile = ""
# #shared_credential_file = ""
#
# ## Endpoint to make request against, the correct endpoint is automatically
# ## determined and this option should only be set if you wish to override the
# ## default.
# ## ex: endpoint_url = "http://localhost:8000"
# # endpoint_url = ""
#
# ## Cloud watch log group. Must be created in AWS cloudwatch logs upfront!
# ## For example, you can specify the name of the k8s cluster here to group logs from all clusters in one place
# log_group = "my-group-name"
#
# ## Log stream in log group
# ## Either log group name or reference to metric attribute, from which it can be parsed:
# ## tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream does not exist, it will be created.
# ## Since AWS does not automatically delete log streams with expired log entries (i.e. empty log streams),
# ## you need to put appropriate house-keeping in place (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
# log_stream = "tag:location"
#
# ## Source of log data - metric name
# ## specify the name of the metric, from which the log data should be retrieved.
# ## I.e., if you are using docker_log plugin to stream logs from container, then
# ## specify log_data_metric_name = "docker_log"
# log_data_metric_name = "docker_log"
#
# ## Specify from which metric attribute the log data should be retrieved:
# ## tag:<TAG_NAME> or field:<FIELD_NAME>.
# ## I.e., if you are using docker_log plugin to stream logs from container, then
# ## specify log_data_source = "field:message"
# log_data_source = "field:message"


# # Configuration for CrateDB to send metrics to.
# [[outputs.cratedb]]
# # A github.com/jackc/pgx/v4 connection string.
@@ -476,6 +518,9 @@
#
# ## Write URL override; useful for debugging.
# # url = "https://app.datadoghq.com/api/v1/series"
#
# ## Set http_proxy (telegraf uses the system wide proxy settings if it isn't set)
# # http_proxy_url = "http://localhost:8888"


# # Send metrics to nowhere at all
@@ -650,6 +695,11 @@
# ## Enable Graphite tags support
# # graphite_tag_support = false
#
# ## Define how metric names and tags are sanitized; options are "strict", or "compatible"
# ## strict - Default method, and backwards compatible with previous versions of Telegraf
# ## compatible - More relaxed sanitizing when using tags, and compatible with the graphite spec
# # graphite_tag_sanitize_mode = "strict"
#
# ## Character for separating metric name and field for Graphite tags
# # graphite_separator = "."
#
@@ -1496,6 +1546,46 @@
# # data_format = "influx"


# # Send metrics to SQL Database
# [[outputs.sql]]
# ## Database driver
# ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
# ## sqlite (SQLite3), snowflake (snowflake.com)
# # driver = ""
#
# ## Data source name
# ## The format of the data source name is different for each database driver.
# ## See the plugin readme for details.
# # data_source_name = ""
#
# ## Timestamp column name
# # timestamp_column = "timestamp"
#
# ## Table creation template
# ## Available template variables:
# ## {TABLE} - table name as a quoted identifier
# ## {TABLELITERAL} - table name as a quoted string literal
# ## {COLUMNS} - column definitions (list of quoted identifiers and types)
# # table_template = "CREATE TABLE {TABLE}({COLUMNS})"
#
# ## Table existence check template
# ## Available template variables:
# ## {TABLE} - table name as a quoted identifier
# # table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
#
# ## Initialization SQL
# # init_sql = ""
#
# ## Metric type to SQL type conversion
# #[outputs.sql.convert]
# # integer = "INT"
# # real = "DOUBLE"
# # text = "TEXT"
# # timestamp = "TIMESTAMP"
# # defaultvalue = "TEXT"
# # unsigned = "UNSIGNED"


# # Configuration for Google Cloud Stackdriver to send metrics to
# [[outputs.stackdriver]]
# ## GCP Project
@@ -1845,6 +1935,37 @@
# # red = 0.0


# # Generic WebSocket output writer.
# [[outputs.websocket]]
# ## URL is the address to send metrics to. Make sure ws or wss scheme is used.
# url = "ws://127.0.0.1:8080/telegraf"
#
# ## Timeouts (make sure read_timeout is larger than server ping interval or set to zero).
# # connect_timeout = "30s"
# # write_timeout = "30s"
# # read_timeout = "30s"
#
# ## Optionally turn on using text data frames (binary by default).
# # use_text_frames = false
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Data format to output.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
# # data_format = "influx"
#
# ## Additional HTTP Upgrade headers
# # [outputs.websocket.headers]
# # Authorization = "Bearer <TOKEN>"


# # Send aggregated metrics to Yandex.Cloud Monitoring
# [[outputs.yandex_cloud_monitoring]]
# ## Timeout for HTTP writes.
@@ -2367,6 +2488,12 @@
# ## Decode a base64 encoded utf-8 string
# # [[processors.strings.base64decode]]
# # field = "message"
#
# ## Sanitize a string to ensure it is a valid utf-8 string
# ## Each run of invalid UTF-8 byte sequences is replaced by the replacement string, which may be empty
# # [[processors.strings.valid_utf8]]
# # field = "message"
# # replacement = ""


# # Restricts the number of tags that can pass through this filter and chooses which tags to preserve when over the limit.
@@ -2745,7 +2872,7 @@
# # disable_query_namespaces = true # default false
# # namespaces = ["namespace1", "namespace2"]
#
- # # Enable set level telmetry
+ # # Enable set level telemetry
# # query_sets = true # default: false
# # Add namespace set combinations to limit sets executed on
# # Leave blank to do all sets
@@ -2758,6 +2885,8 @@
# # by default, aerospike produces a 100 bucket histogram
# # this is not great for most graphing tools, this will allow
# # the ability to squash this to a smaller number of buckets
# # To have a balanced histogram, the number of buckets chosen
# # should divide evenly into 100.
# # num_histogram_buckets = 100 # default: 10
@@ -2978,7 +3107,14 @@
# ## suffix used to identify socket files
# socket_suffix = "asok"
#
- # ## Ceph user to authenticate as
+ # ## Ceph user to authenticate as, ceph will search for the corresponding keyring
+ # ## e.g. client.admin.keyring in /etc/ceph, or the explicit path defined in the
+ # ## client section of ceph.conf for example:
+ # ##
+ # ##     [client.telegraf]
+ # ##         keyring = /etc/ceph/client.telegraf.keyring
+ # ##
+ # ## Consult the ceph documentation for more detail on keyring generation.
# ceph_user = "client.admin"
#
# ## Ceph configuration to use to locate the cluster
@@ -2987,7 +3123,8 @@
# ## Whether to gather statistics via the admin socket
# gather_admin_socket_stats = true
#
- # ## Whether to gather statistics via ceph commands
+ # ## Whether to gather statistics via ceph commands, requires ceph_user and ceph_config
+ # ## to be specified
# gather_cluster_stats = false
@@ -3099,6 +3236,7 @@
# #
# # ## Dimension filters for Metric. All dimensions defined for the metric names
# # ## must be specified in order to retrieve the metric statistics.
# # ## 'value' has wildcard / 'glob' matching support such as 'p-*'.
# # [[inputs.cloudwatch.metrics.dimensions]]
# # name = "LoadBalancerName"
# # value = "p-example"
@@ -3158,7 +3296,7 @@
# # tag_delimiter = ":"


- # # Read metrics from one or many couchbase clusters
+ # # Read per-node and per-bucket metrics from Couchbase
# [[inputs.couchbase]]
# ## specify servers via a url matching:
# ## [protocol://][:password]@address[:port]
@@ -3170,6 +3308,9 @@
# ## If no protocol is specified, HTTP is used.
# ## If no port is specified, 8091 is used.
# servers = ["http://localhost:8091"]
#
# ## Filter bucket fields to include only here.
# # bucket_stats_included = ["quota_percent_used", "ops_per_sec", "disk_fetches", "item_count", "disk_used", "data_used", "mem_used"]


# # Read CouchDB Stats from one or more servers
@@ -3364,6 +3505,40 @@
# filters = [""]


# # Reads metrics from DPDK applications using v2 telemetry interface.
# [[inputs.dpdk]]
# ## Path to DPDK telemetry socket. This shall point to v2 version of DPDK telemetry interface.
# # socket_path = "/var/run/dpdk/rte/dpdk_telemetry.v2"
#
# ## Duration that defines how long the connected socket client will wait for a response before terminating connection.
# ## This includes both writing to and reading from socket. Since it's local socket access
# ## to a fast packet processing application, the timeout should be sufficient for most users.
# ## Setting the value to 0 disables the timeout (not recommended)
# # socket_access_timeout = "200ms"
#
# ## Enables telemetry data collection for selected device types.
# ## Adding "ethdev" enables collection of telemetry from DPDK NICs (stats, xstats, link_status).
# ## Adding "rawdev" enables collection of telemetry from DPDK Raw Devices (xstats).
# # device_types = ["ethdev"]
#
# ## List of custom, application-specific telemetry commands to query
# ## The list of available commands depends on the application deployed. Applications can register their own commands
# ## via telemetry library API http://doc.dpdk.org/guides/prog_guide/telemetry_lib.html#registering-commands
# ## e.g. for the L3 Forwarding with Power Management Sample Application this could be:
# ## additional_commands = ["/l3fwd-power/stats"]
# # additional_commands = []
#
# ## Allows turning off collecting data for individual "ethdev" commands.
# ## Remove "/ethdev/link_status" from list to start getting link status metrics.
# [inputs.dpdk.ethdev]
# exclude_commands = ["/ethdev/link_status"]
#
# ## When running multiple instances of the plugin it's recommended to add a unique tag to each instance to identify
# ## metrics exposed by an instance of DPDK application. This is useful when multiple DPDK apps run on a single host.
# ## [inputs.dpdk.tags]
# ## dpdk_instance = "my-fwd-app"


# # Read metrics about docker containers from Fargate/ECS v2, v3 meta endpoints.
# [[inputs.ecs]]
# ## ECS metadata url.
@@ -3750,6 +3925,12 @@
# ## HTTP Proxy support
# # http_proxy_url = ""
#
# ## OAuth2 Client Credentials Grant
# # client_id = "clientid"
# # client_secret = "secret"
# # token_url = "https://identityprovider/oauth2/v1/token"
# # scopes = ["urn:opc:idm:__myscopes__"]
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
@@ -4620,6 +4801,10 @@
# ## When true, collect per collection stats
# # gather_col_stats = false
#
# ## When true, collect usage statistics for each collection
# ## (insert, update, queries, remove, getmore, commands etc...).
# # gather_top_stat = false
#
# ## List of dbs for which collection stats are collected
# ## If empty, all dbs are included
# # col_stats_dbs = ["local"]
@@ -4723,6 +4908,12 @@
# ## gather metrics from SHOW SLAVE STATUS command output
# # gather_slave_status = false
#
# ## gather metrics from all channels from SHOW SLAVE STATUS command output
# # gather_all_slave_channels = false
#
# ## use MariaDB dialect for all channels SHOW SLAVE STATUS
# # mariadb_dialect = false
#
# ## gather metrics from SHOW BINARY LOGS command output
# # gather_binary_logs = false
#
@@ -5304,6 +5495,10 @@
#
# ## Use only IPv6 addresses when resolving a hostname.
# # ipv6 = false
#
# ## Number of data bytes to be sent. Corresponds to the "-s"
# ## option of the ping command. This only works with the native method.
# # size = 56


# # Measure postfix queue statistics
@@ -5817,74 +6012,6 @@
# # password = "pa$$word"


# # Read metrics from Microsoft SQL Server
# [[inputs.sqlserver]]
# ## Specify instances to monitor with a list of connection strings.
# ## All connection parameters are optional.
# ## By default, the host is localhost, listening on default port, TCP 1433.
# ## for Windows, the user is the currently running AD user (SSO).
# ## See https://github.com/denisenkom/go-mssqldb for detailed connection
# ## parameters, in particular, tls connections can be created like so:
# ## "encrypt=true;certificate=<cert>;hostNameInCertificate=<SqlServer host fqdn>"
# servers = [
#   "Server=192.168.1.10;Port=1433;User Id=<user>;Password=<pw>;app name=telegraf;log=1;",
# ]
#
# ## "database_type" enables a specific set of queries depending on the database type. If specified, it replaces azuredb = true/false and query_version = 2
# ## In the config file, the sql server plugin section should be repeated each with a set of servers for a specific database_type.
# ## Possible values for database_type are - "AzureSQLDB" or "AzureSQLManagedInstance" or "SQLServer"
#
# ## Queries enabled by default for database_type = "AzureSQLDB" are -
# ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
# ## AzureSQLDBOsWaitstats, AzureSQLDBMemoryClerks, AzureSQLDBPerformanceCounters, AzureSQLDBRequests, AzureSQLDBSchedulers
#
# # database_type = "AzureSQLDB"
#
# ## A list of queries to include. If not specified, all the above listed queries are used.
# # include_query = []
#
# ## A list of queries to explicitly ignore.
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
# ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
# ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers
#
# # database_type = "AzureSQLManagedInstance"
#
# # include_query = []
#
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "SQLServer" are -
# ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
# ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu
#
# database_type = "SQLServer"
#
# include_query = []
#
# ## SQLServerAvailabilityReplicaStates and SQLServerDatabaseReplicaStates are optional queries and hence excluded here as default
# exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]
#
# ## The following are old config settings; use them only if you are using the earlier flavor of queries. It is recommended to use
# ## the new mechanism of identifying the database_type and thereby its corresponding queries instead.
#
# ## Optional parameter, setting this to 2 will use a new version
# ## of the collection queries that break compatibility with the original
# ## dashboards.
# ## Version 2 - is compatible from SQL Server 2012 and later versions and also for SQL Azure DB
# # query_version = 2
#
# ## If you are using AzureDB, setting this to true will gather resource utilization metrics
# # azuredb = false

# ## Toggling this to true will emit an additional metric called "sqlserver_telegraf_health".
# ## This metric tracks the count of attempted queries and successful queries for each SQL instance specified in "servers".
# ## The purpose of this metric is to assist with identifying and diagnosing any connectivity or query issues.
# ## This setting/metric is optional and is disabled by default.
# # health_metric = false

# # Gather timeseries from Google Cloud Platform v3 monitoring API
# [[inputs.stackdriver]]
# ## GCP Project
@@ -6183,7 +6310,9 @@
# # Reads metrics from a SSL certificate
# [[inputs.x509_cert]]
# ## List certificate sources
- # sources = ["/etc/ssl/certs/ssl-cert-snakeoil.pem", "tcp://example.org:443"]
+ # ## Prefix your entry with 'file://' if you intend to use relative paths
+ # sources = ["/etc/ssl/certs/ssl-cert-snakeoil.pem", "tcp://example.org:443",
+ #            "/etc/mycerts/*.mydomain.org.pem", "file:///path/to/*.pem"]
#
# ## Timeout for SSL connection
# # timeout = "5s"
@ -6242,30 +6371,130 @@
|
|||
###############################################################################
|
||||
|
||||
|
||||
# # Intel Resource Director Technology plugin
|
||||
# [[inputs.IntelRDT]]
|
||||
# ## Optionally set sampling interval to Nx100ms.
|
||||
# ## This value is propagated to pqos tool. Interval format is defined by pqos itself.
|
||||
# ## If not provided or provided 0, will be set to 10 = 10x100ms = 1s.
|
||||
# # sampling_interval = "10"
|
||||
#
|
||||
# ## Optionally specify the path to pqos executable.
|
||||
# ## If not provided, auto discovery will be performed.
|
||||
# # pqos_path = "/usr/local/bin/pqos"
|
||||
# # Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
|
||||
# [[inputs.KNXListener]]
|
||||
# ## Type of KNX-IP interface.
|
||||
# ## Can be either "tunnel" or "router".
|
||||
# # service_type = "tunnel"
|
||||
#
|
||||
# ## Optionally specify if IPC and LLC_Misses metrics shouldn't be propagated.
|
||||
# ## If not provided, default value is false.
|
||||
# # shortened_metrics = false
|
||||
#
|
||||
# ## Specify the list of groups of CPU core(s) to be provided as pqos input.
|
||||
# ## Mandatory if processes aren't set and forbidden if processes are specified.
|
||||
# ## e.g. ["0-3", "4,5,6"] or ["1-3,4"]
|
||||
# # cores = ["0-3"]
|
||||
#
|
||||
# ## Specify the list of processes for which Metrics will be collected.
|
||||
# ## Mandatory if cores aren't set and forbidden if cores are specified.
|
||||
# ## e.g. ["qemu", "pmd"]
|
||||
# # processes = ["process"]
|
||||
# ## Address of the KNX-IP interface.
|
||||
# service_address = "localhost:3671"
|
||||
#
|
||||
# ## Measurement definition(s)
|
||||
# # [[inputs.KNXListener.measurement]]
|
||||
# # ## Name of the measurement
|
||||
# # name = "temperature"
|
||||
# # ## Datapoint-Type (DPT) of the KNX messages
|
||||
# # dpt = "9.001"
|
||||
# # ## List of Group-Addresses (GAs) assigned to the measurement
|
||||
# # addresses = ["5/5/1"]
|
||||
#
|
||||
# # [[inputs.KNXListener.measurement]]
|
||||
# # name = "illumination"
|
||||
# # dpt = "9.004"
|
||||
# # addresses = ["5/5/3"]
|
||||
|
||||
|
||||
# # Pull Metric Statistics from Aliyun CMS
|
||||
# [[inputs.aliyuncms]]
|
||||
# ## Aliyun Credentials
|
||||
# ## Credentials are loaded in the following order
|
||||
# ## 1) Ram RoleArn credential
|
# ## 2) AccessKey STS token credential
# ## 3) AccessKey credential
# ## 4) Ecs Ram Role credential
# ## 5) RSA keypair credential
# ## 6) Environment variables credential
# ## 7) Instance metadata credential
#
# # access_key_id = ""
# # access_key_secret = ""
# # access_key_sts_token = ""
# # role_arn = ""
# # role_session_name = ""
# # private_key = ""
# # public_key_id = ""
# # role_name = ""
#
# ## Specify the Alibaba Cloud regions to be queried for metrics and object discovery.
# ## If not set, all supported regions (see below) are covered. This can put a significant load
# ## on the API, so the recommendation is to limit the list as much as possible.
# ## Allowed values: https://www.alibabacloud.com/help/zh/doc-detail/40654.htm
# ## Default supported regions are:
# ## 21 items: cn-qingdao,cn-beijing,cn-zhangjiakou,cn-huhehaote,cn-hangzhou,cn-shanghai,cn-shenzhen,
# ## cn-heyuan,cn-chengdu,cn-hongkong,ap-southeast-1,ap-southeast-2,ap-southeast-3,ap-southeast-5,
# ## ap-south-1,ap-northeast-1,us-west-1,us-east-1,eu-central-1,eu-west-1,me-east-1
# ##
# ## From a discovery perspective this sets the scope for object discovery; the discovered info can be used to enrich
# ## the metrics with object attributes/tags. Discovery is not supported for all projects (if not supported, this
# ## is reported at startup - for example, for the 'acs_cdn' project:
# ## 'E! [inputs.aliyuncms] Discovery tool is not activated: no discovery support for project "acs_cdn"')
# ## Currently, discovery is supported for the following projects:
# ## - acs_ecs_dashboard
# ## - acs_rds_dashboard
# ## - acs_slb_dashboard
# ## - acs_vpc_eip
# regions = ["cn-hongkong"]
#
# # The minimum period for AliyunCMS metrics is 1 minute (60s). However, not all
# # metrics are available at the 1 minute period. Some are collected at
# # 3 minute, 5 minute, or larger intervals.
# # See: https://help.aliyun.com/document_detail/51936.html?spm=a2c4g.11186623.2.18.2bc1750eeOw1Pv
# # Note that if a period is configured that is smaller than the minimum for a
# # particular metric, that metric will not be returned by the Aliyun OpenAPI
# # and will not be collected by Telegraf.
# #
# ## Requested AliyunCMS aggregation Period (required - must be a multiple of 60s)
# period = "5m"
#
# ## Collection Delay (required - must account for metrics availability via AliyunCMS API)
# delay = "1m"
#
# ## Recommended: use a metric 'interval' that is a multiple of 'period' to avoid
# ## gaps or overlap in pulled data
# interval = "5m"
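The alignment recommended above (an interval that is a whole multiple of the period) can be sketched as a small check; `parse_duration` is a hypothetical helper, not part of Telegraf:

```python
# Minimal sketch: verify the collection interval is a whole multiple of the
# aggregation period, so pulled windows neither overlap nor leave gaps.
def parse_duration(d: str) -> int:
    """Parse a duration like '5m' or '60s' into seconds (illustrative only)."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(d[:-1]) * units[d[-1]]

period = parse_duration("5m")
interval = parse_duration("5m")
print(interval % period == 0)  # True: windows align exactly
```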
#
# ## Metric Statistic Project (required)
# project = "acs_slb_dashboard"
#
# ## Maximum requests per second, default value is 200
# ratelimit = 200
#
# ## How often the discovery API call is executed (default 1m)
# #discovery_interval = "1m"
#
# ## Metrics to Pull (Required)
# [[inputs.aliyuncms.metrics]]
# ## Metrics names to be requested,
# ## described here (per project): https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
# names = ["InstanceActiveConnection", "InstanceNewConnection"]
#
# ## Dimension filters for the metric (optional).
# ## These allow you to get an additional metric dimension. If a dimension is not specified it may be returned, or
# ## the data may be aggregated - it depends on the particular metric; you can find details here: https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
# ##
# ## Note that by default the dimension filter includes the list of discovered objects in scope (if discovery is enabled).
# ## Values specified here are added to the list of discovered objects.
# ## You can specify either a single dimension:
# #dimensions = '{"instanceId": "p-example"}'
#
# ## Or several dimensions at once:
# #dimensions = '[{"instanceId": "p-example"},{"instanceId": "q-example"}]'
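Both notations above are plain JSON strings; a minimal sketch of how a consumer might normalize the two forms into one list (field names taken from the examples, the helper is hypothetical):

```python
import json

def normalize_dimensions(raw: str) -> list:
    """Accept either a single JSON object or a JSON array of objects."""
    parsed = json.loads(raw)
    return parsed if isinstance(parsed, list) else [parsed]

single = normalize_dimensions('{"instanceId": "p-example"}')
several = normalize_dimensions('[{"instanceId": "p-example"},{"instanceId": "q-example"}]')
print(len(single), len(several))  # 1 2
```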
#
# ## Enrichment tags, can be added from discovery (if supported)
# ## Notation is <measurement_tag_name>:<JMES query path (https://jmespath.org/tutorial.html)>
# ## To figure out which fields are available, consult the Describe<ObjectType> API per project.
# ## For example, for SLB: https://api.aliyun.com/#/?product=Slb&version=2014-05-15&api=DescribeLoadBalancers&params={}&tab=MOCK&lang=GO
# #tag_query_path = [
# #    "address:Address",
# #    "name:LoadBalancerName",
# #    "cluster_owner:Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]"
# # ]
# ## The following tags are added by default: regionId (if discovery is enabled), userId, instanceId.
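To illustrate what the last JMES query above computes, here is a plain-Python equivalent run on a hypothetical DescribeLoadBalancers-style object (field values are made up; only the shape follows the API):

```python
# Hypothetical API response fragment; only the structure matters here.
lb = {
    "Address": "10.1.2.3",
    "LoadBalancerName": "web-lb",
    "Tags": {"Tag": [
        {"TagKey": "env", "TagValue": "prod"},
        {"TagKey": "cs.cluster.name", "TagValue": "k8s-main"},
    ]},
}

# Equivalent of "Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]":
# filter tags by key, project TagValue, then take the first match (or None).
matches = [t["TagValue"] for t in lb["Tags"]["Tag"] if t["TagKey"] == "cs.cluster.name"]
cluster_owner = matches[0] if matches else None
print(cluster_owner)  # k8s-main
```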
#
# ## Allow metrics without discovery data, if discovery is enabled. If set to true, then metrics without discovery
# ## data are emitted; otherwise they are dropped. This can be of help when debugging dimension filters, or partial coverage
# ## of the discovery scope vs the monitoring scope.
# #allow_dps_without_discovery = false


# # AMQP consumer plugin
@@ -6393,12 +6622,16 @@
# ## Define aliases to map telemetry encoding paths to simple measurement names
# [inputs.cisco_telemetry_mdt.aliases]
# ifstats = "ietf-interfaces:interfaces-state/interface/statistics"
# ## Define property transformation; please refer to the README and https://pubhub.devnetcloud.com/media/dme-docs-9-3-3/docs/appendix/ for model details.
# [inputs.cisco_telemetry_mdt.dmes]
# ModTs = "ignore"
# CreateTs = "ignore"


# # Read metrics from one or many ClickHouse servers
# [[inputs.clickhouse]]
# ## Username for authorization on ClickHouse server
-# ## example: username = "default""
+# ## example: username = "default"
# username = "default"
#
# ## Password for authorization on ClickHouse server
@@ -6674,8 +6907,6 @@
# ## This requires one of the following sets of environment variables to be set:
# ##
# ## 1) Expected Environment Variables:
# ## - "EVENTHUB_NAMESPACE"
# ## - "EVENTHUB_NAME"
# ## - "EVENTHUB_CONNECTION_STRING"
# ##
# ## 2) Expected Environment Variables:
@@ -6684,8 +6915,17 @@
# ## - "EVENTHUB_KEY_NAME"
# ## - "EVENTHUB_KEY_VALUE"
#
# ## 3) Expected Environment Variables:
# ## - "EVENTHUB_NAMESPACE"
# ## - "EVENTHUB_NAME"
# ## - "AZURE_TENANT_ID"
# ## - "AZURE_CLIENT_ID"
# ## - "AZURE_CLIENT_SECRET"
#
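For example, variable set 1 could be provided like so; the variable names come from the list above, the values are placeholders:

```shell
# Placeholder values; only the variable names are taken from the list above.
export EVENTHUB_NAMESPACE="example-ns"
export EVENTHUB_NAME="example-hub"
export EVENTHUB_CONNECTION_STRING="Endpoint=sb://example-ns.servicebus.windows.net/;EntityPath=example-hub"
```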
# ## Uncommenting the option below will create an Event Hub client based solely on the connection string.
# ## This can either be the associated environment variable or hard coded directly.
# ## If this option is uncommented, environment variables will be ignored.
# ## The connection string should contain the EventHubName (EntityPath)
# # connection_string = ""
#
# ## Set persistence directory to a valid folder to use a file persister instead of an in-memory persister
@@ -7227,6 +7467,15 @@
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
#
# ##
# ## The content encoding of the data from Kinesis.
# ## If you are processing a CloudWatch Logs Kinesis stream, set this to "gzip",
# ## as AWS compresses CloudWatch log data before it is sent to Kinesis. (AWS
# ## also base64 encodes the gzipped bytes before pushing them to the stream; the base64
# ## decoding is done automatically by the Go SDK as data is read from Kinesis.)
# ##
# # content_encoding = "identity"
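A sketch of the encoding chain described above: the producer side gzips the payload, the transport layer base64-encodes it, the SDK undoes the base64, and the gzip step is what `content_encoding = "gzip"` handles. The payload here is hypothetical:

```python
import base64
import gzip
import json

# Hypothetical CloudWatch Logs payload; only the round trip is illustrative.
payload = json.dumps({"logGroup": "app", "logEvents": []}).encode()

on_stream = base64.b64encode(gzip.compress(payload))  # what sits in the stream
from_sdk = base64.b64decode(on_stream)                # the SDK decodes base64 for you
restored = gzip.decompress(from_sdk)                  # the gzip step the consumer performs
print(restored == payload)  # True
```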
#
# ## Optional
# ## Configuration for a dynamodb checkpoint
# [inputs.kinesis_consumer.checkpoint_dynamodb]
@@ -7448,6 +7697,20 @@
# data_format = "influx"


# # Receive OpenTelemetry traces, metrics, and logs over gRPC
# [[inputs.opentelemetry]]
# ## Override the OpenTelemetry gRPC service address:port
# # service_address = "0.0.0.0:4317"
#
# ## Override the default request timeout
# # timeout = "5s"
#
# ## Select a schema for metrics: prometheus-v1 or prometheus-v2
# ## For more information about the alternatives, read the Prometheus input
# ## plugin notes.
# # metrics_schema = "prometheus-v1"


# # Read metrics from one or many pgbouncer servers
# [[inputs.pgbouncer]]
# ## specify address via a url matching:
@@ -7588,7 +7851,7 @@
# # metric_version = 1
#
# ## URL tag name (tag containing the scraped URL; optional, default is "url")
-# # url_tag = "scrapeUrl"
+# # url_tag = "url"
#
# ## An array of Kubernetes services to scrape metrics from.
# # kubernetes_services = ["http://my-service-dns.my-namespace:9100/metrics"]
@@ -7783,6 +8046,69 @@
# # content_encoding = "identity"


# # Read metrics from Microsoft SQL Server
# [[inputs.sqlserver]]
# ## Specify instances to monitor with a list of connection strings.
# ## All connection parameters are optional.
# ## By default, the host is localhost, listening on default port, TCP 1433.
# ## For Windows, the user is the currently running AD user (SSO).
# ## See https://github.com/denisenkom/go-mssqldb for detailed connection
# ## parameters; in particular, TLS connections can be created like so:
# ## "encrypt=true;certificate=<cert>;hostNameInCertificate=<SqlServer host fqdn>"
# servers = [
#  "Server=192.168.1.10;Port=1433;User Id=<user>;Password=<pw>;app name=telegraf;log=1;",
# ]
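A minimal sketch of how such a semicolon-delimited connection string breaks into key/value pairs; the parameter names come from the sample above, the credentials are placeholders:

```python
# Parameter names follow the sample above; values are placeholders.
conn = "Server=192.168.1.10;Port=1433;User Id=user;Password=pw;app name=telegraf;log=1"

# Split on ';' into parts, then each part on the first '=' into key/value.
params = dict(part.split("=", 1) for part in conn.split(";") if part)
print(params["Server"], params["Port"])  # 192.168.1.10 1433
```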
#
# ## "database_type" enables a specific set of queries depending on the database type. If specified, it replaces azuredb = true/false and query_version = 2.
# ## In the config file, the sqlserver plugin section should be repeated, each with a set of servers for a specific database_type.
# ## Possible values for database_type are "AzureSQLDB", "AzureSQLManagedInstance", or "SQLServer".
#
# ## Queries enabled by default for database_type = "AzureSQLDB" are -
# ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
# ## AzureSQLDBOsWaitstats, AzureSQLDBMemoryClerks, AzureSQLDBPerformanceCounters, AzureSQLDBRequests, AzureSQLDBSchedulers
#
# # database_type = "AzureSQLDB"
#
# ## A list of queries to include. If not specified, all the above listed queries are used.
# # include_query = []
#
# ## A list of queries to explicitly ignore.
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
# ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
# ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers
#
# # database_type = "AzureSQLManagedInstance"
#
# # include_query = []
#
# # exclude_query = []
#
# ## Queries enabled by default for database_type = "SQLServer" are -
# ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
# ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu
#
# database_type = "SQLServer"
#
# include_query = []
#
# ## SQLServerAvailabilityReplicaStates and SQLServerDatabaseReplicaStates are optional queries and hence excluded here by default
# exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]
#
# ## The following are legacy config settings. Use them only if you are using the earlier flavor of queries; however, it is
# ## recommended to identify the database_type and use its corresponding queries instead.
#
# ## Optional parameter, setting this to 2 will use a new version
# ## of the collection queries that break compatibility with the original
# ## dashboards.
# ## Version 2 is compatible with SQL Server 2012 and later versions, and also with SQL Azure DB
# # query_version = 2
#
# ## If you are using AzureDB, setting this to true will gather resource utilization metrics
# # azuredb = false


# # Statsd UDP/TCP Server
# [[inputs.statsd]]
# ## Protocol, must be "tcp", "udp", "udp4" or "udp6" (default=udp)
@@ -8178,6 +8504,10 @@
# # ssl_key = "/path/to/keyfile"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## The Historical Interval value must match EXACTLY the interval in the daily
# ## "Interval Duration" found on the vCenter server under Configure > General > Statistics > Statistic intervals
# # historical_interval = "5m"


# # A Webhooks Event collector