fix: Restore sample configurations broken during initial migration (#11276)

Sven Rebhan 2022-06-22 21:33:58 +02:00 committed by GitHub
parent 48fa1990ee
commit a049175e58
17 changed files with 300 additions and 106 deletions


@ -3,6 +3,61 @@
The Derivative Aggregator Plugin estimates the derivative for all fields of the
aggregated metrics.
## Configuration
```toml @sample.conf
# Calculates a derivative for every field.
[[aggregators.derivative]]
## The period in which to flush the aggregator.
period = "30s"
##
## Suffix to append for the resulting derivative field.
# suffix = "_rate"
##
## Field to use for the quotient when computing the derivative.
## When using a field as the derivation parameter the name of that field will
## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
## By default the timestamps of the metrics are used and the suffix is omitted.
# variable = ""
##
## Maximum number of roll-overs in case only one measurement is found during a period.
# max_roll_over = 10
```
This aggregator will estimate a derivative for each field that is contained in
both the first and the last metric of the aggregation interval.
Without further configuration the derivative will be calculated with respect to
the time difference between these two measurements in seconds.
The following formula is applied for every field:
```text
derivative = (value_last - value_first) / (time_last - time_first)
```
The resulting derivative will be named `<fieldname>_rate` if no `suffix` is
configured.
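For example (illustrative values only), a field `requests` that grows from 10
to 70 across a 30 s aggregation interval yields:
```text
requests_rate = (70 - 10) / (30 - 0) = 2
```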
To calculate a derivative for every field, use:
```toml
[[aggregators.derivative]]
## Specific Derivative Aggregator Arguments:
## Configure a custom derivation variable. Timestamp is used if none is given.
# variable = ""
## Suffix to add to the field name for the derivative name.
# suffix = "_rate"
## Roll-Over last measurement to first measurement of next period
# max_roll_over = 10
## General Aggregator Arguments:
## calculate derivative every 30 seconds
period = "30s"
```
## Time Derivatives
In its default configuration it determines the first and last measurement of
@ -11,9 +66,7 @@ calculated. This time difference is then used to divide the difference of each
field using the following formula:
```text
field_last - field_first
derivative = --------------------------
time_difference
derivative = (value_last - value_first) / (time_last - time_first)
```
For each field the derivative is emitted with a naming pattern
@ -26,9 +79,7 @@ variable in the denominator. This variable is assumed to be a monotonically
increasing value. In this case the following formula is used:
```text
field_last - field_first
derivative = --------------------------------
variable_last - variable_first
derivative = (value_last - value_first) / (variable_last - variable_first)
```
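As an illustrative sketch (the `uptime` field here is a hypothetical,
monotonically increasing counter, not a value taken from this README), such a
variable is configured like this:
```toml
[[aggregators.derivative]]
  period = "30s"
  ## hypothetical field used as the derivation variable
  variable = "uptime"
```
The resulting fields would then be named e.g. `fieldname_by_uptime`.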
**Make sure the specified variable is not filtered and exists in the metrics
@ -150,28 +201,6 @@ greater than 0 may be important if you need to detect changes between periods,
e.g. when you have very few measurements in a period or quasi-constant metrics
with only occasional changes.
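A minimal sketch of such a configuration (values illustrative):
```toml
[[aggregators.derivative]]
  period = "30s"
  ## carry the last measurement over into up to 10 following periods,
  ## so sparse series still produce a derivative
  max_roll_over = 10
```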
## Configuration
```toml @sample.conf
# Calculates a derivative for every field.
[[aggregators.derivative]]
## Specific Derivative Aggregator Arguments:
## Configure a custom derivation variable. Timestamp is used if none is given.
# variable = ""
## Suffix to add to the field name for the derivative name.
# suffix = "_rate"
## Roll-Over last measurement to first measurement of next period
# max_roll_over = 10
## General Aggregator Arguments:
## calculate derivative every 30 seconds
period = "30s"
```
### Tags
No tags are applied by this aggregator.


@ -1,17 +1,16 @@
# Calculates a derivative for every field.
[[aggregators.derivative]]
## Specific Derivative Aggregator Arguments:
## Configure a custom derivation variable. Timestamp is used if none is given.
# variable = ""
## Suffix to add to the field name for the derivative name.
# suffix = "_rate"
## Roll-Over last measurement to first measurement of next period
# max_roll_over = 10
## General Aggregator Arguments:
## calculate derivative every 30 seconds
## The period in which to flush the aggregator.
period = "30s"
##
## Suffix to append for the resulting derivative field.
# suffix = "_rate"
##
## Field to use for the quotient when computing the derivative.
## When using a field as the derivation parameter the name of that field will
## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
## By default the timestamps of the metrics are used and the suffix is omitted.
# variable = ""
##
## Maximum number of roll-overs in case only one measurement is found during a period.
# max_roll_over = 10


@ -32,11 +32,6 @@ querying table metrics with a wildcard for the keyspace or table name.
```toml @sample.conf
# Read Cassandra metrics through Jolokia
[[inputs.cassandra]]
## DEPRECATED: The cassandra plugin has been deprecated. Please use the
## jolokia2 plugin instead.
##
## see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2
context = "/jolokia/read"
## List of cassandra servers exposing jolokia read service
servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]


@ -1,10 +1,5 @@
# Read Cassandra metrics through Jolokia
[[inputs.cassandra]]
## DEPRECATED: The cassandra plugin has been deprecated. Please use the
## jolokia2 plugin instead.
##
## see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2
context = "/jolokia/read"
## List of cassandra servers exposing jolokia read service
servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]


@ -12,7 +12,6 @@ instances of telegraf can read from the same topic in parallel.
## Configuration
```toml @sample.conf
## DEPRECATED: The 'kafka_consumer_legacy' plugin is deprecated in version 1.4.0, use 'inputs.kafka_consumer' instead, NOTE: 'kafka_consumer' only supports Kafka v0.8+.
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
## topic(s) to consume


@ -1,4 +1,3 @@
## DEPRECATED: The 'kafka_consumer_legacy' plugin is deprecated in version 1.4.0, use 'inputs.kafka_consumer' instead, NOTE: 'kafka_consumer' only supports Kafka v0.8+.
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer_legacy]]
## topic(s) to consume


@ -49,7 +49,6 @@ Migration Example:
## Configuration
```toml @sample.conf
## DEPRECATED: The 'logparser' plugin is deprecated in version 1.15.0, use 'inputs.tail' with 'grok' data format instead.
# Read metrics off Arista LANZ, via socket
[[inputs.logparser]]
## Log files to parse.


@ -1,4 +1,3 @@
## DEPRECATED: The 'logparser' plugin is deprecated in version 1.15.0, use 'inputs.tail' with 'grok' data format instead.
# Read metrics off Arista LANZ, via socket
[[inputs.logparser]]
## Log files to parse.


@ -7,7 +7,7 @@ clients that use the riemann-protobuf format.
```toml @sample.conf
# Riemann protobuf listener
[[inputs.rimann_listener]]
[[inputs.riemann_listener]]
## URL to listen on
## Default is "tcp://:5555"
# service_address = "tcp://:8094"


@ -1,5 +1,5 @@
# Riemann protobuf listener
[[inputs.rimann_listener]]
[[inputs.riemann_listener]]
## URL to listen on
## Default is "tcp://:5555"
# service_address = "tcp://:8094"


@ -7,7 +7,6 @@ The SNMP input plugin gathers metrics from SNMP agents
## Configuration
```toml @sample.conf
# DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
[[inputs.snmp_legacy]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:


@ -1,4 +1,3 @@
# DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
[[inputs.snmp_legacy]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:


@ -19,10 +19,10 @@ Compatibility information is available from the govmomi project
## Configuration
```toml
# Read metrics from one or many vCenters
```toml @sample.conf
# Read metrics from one or many vCenters
[[inputs.vsphere]]
## List of vCenter URLs to be monitored. These three lines must be uncommented
## List of vCenter URLs to be monitored. These three lines must be uncommented
## and edited for the plugin to work.
vcenters = [ "https://vcenter.local/sdk" ]
username = "user@corp.local"
@ -144,7 +144,7 @@ Compatibility information is available from the govmomi project
# datastore_metric_exclude = [] ## Nothing excluded by default
# datastore_instances = false ## false by default
## Datastores
## Datastores
# datastore_include = [ "/*/datastore/**"] # Inventory path to datastores to collect (by default all are collected)
# datastore_exclude = [] # Inventory paths to exclude
# datastore_metric_include = [] ## if omitted or empty, all metrics are collected
@ -188,12 +188,6 @@ Compatibility information is available from the govmomi project
## preserve the full precision when averaging takes place.
# use_int_samples = true
## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
## some versions (6.7, 7.0 and possibly more), certain metrics, such as cluster metrics, may be reported
## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
## it too much may cause performance issues.
# metric_lookback = 3
## Custom attributes from vCenter can be very useful for queries in order to slice the
## metrics along different dimensions and for forming ad-hoc relationships. They are disabled
## by default, since they can add a considerable amount of tags to the resulting metrics. To
@ -205,19 +199,29 @@ Compatibility information is available from the govmomi project
# custom_attribute_include = []
# custom_attribute_exclude = ["*"]
## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
## some versions (6.7, 7.0 and possibly more), certain metrics, such as cluster metrics, may be reported
## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
## it too much may cause performance issues.
# metric_lookback = 3
## Optional SSL Config
# ssl_ca = "/path/to/cafile"
# ssl_cert = "/path/to/certfile"
# ssl_key = "/path/to/keyfile"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## The Historical Interval value must match EXACTLY the interval in the daily
# "Interval Duration" found on the VCenter server under Configure > General > Statistics > Statistic intervals
# historical_interval = "5m"
```
NOTE: To disable collection of a specific resource type, simply exclude all of
its metrics using the corresponding XX_metric_exclude option. For example, to
disable collection of VMs, add this:
```toml @sample.conf
```toml
vm_metric_exclude = [ "*" ]
```
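The same pattern works for the other resource types; for instance, a sketch
that disables cluster metrics via the `cluster_metric_exclude` option shown in
the sample above:
```toml
cluster_metric_exclude = [ "*" ]
```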


@ -1 +1,195 @@
vm_metric_exclude = [ "*" ]
# Read metrics from one or many vCenters
[[inputs.vsphere]]
## List of vCenter URLs to be monitored. These three lines must be uncommented
## and edited for the plugin to work.
vcenters = [ "https://vcenter.local/sdk" ]
username = "user@corp.local"
password = "secret"
## VMs
## Typical VM metrics (if omitted or empty, all metrics are collected)
# vm_include = [ "/*/vm/**"] # Inventory path to VMs to collect (by default all are collected)
# vm_exclude = [] # Inventory paths to exclude
vm_metric_include = [
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.run.summation",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.wait.summation",
"mem.active.average",
"mem.granted.average",
"mem.latency.average",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.usage.average",
"power.power.average",
"virtualDisk.numberReadAveraged.average",
"virtualDisk.numberWriteAveraged.average",
"virtualDisk.read.average",
"virtualDisk.readOIO.latest",
"virtualDisk.throughput.usage.average",
"virtualDisk.totalReadLatency.average",
"virtualDisk.totalWriteLatency.average",
"virtualDisk.write.average",
"virtualDisk.writeOIO.latest",
"sys.uptime.latest",
]
# vm_metric_exclude = [] ## Nothing is excluded by default
# vm_instances = true ## true by default
## Hosts
## Typical host metrics (if omitted or empty, all metrics are collected)
# host_include = [ "/*/host/**"] # Inventory path to hosts to collect (by default all are collected)
# host_exclude = [] # Inventory paths to exclude
host_metric_include = [
"cpu.coreUtilization.average",
"cpu.costop.summation",
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.swapwait.summation",
"cpu.usage.average",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.utilization.average",
"cpu.wait.summation",
"disk.deviceReadLatency.average",
"disk.deviceWriteLatency.average",
"disk.kernelReadLatency.average",
"disk.kernelWriteLatency.average",
"disk.numberReadAveraged.average",
"disk.numberWriteAveraged.average",
"disk.read.average",
"disk.totalReadLatency.average",
"disk.totalWriteLatency.average",
"disk.write.average",
"mem.active.average",
"mem.latency.average",
"mem.state.latest",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.totalCapacity.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.errorsRx.summation",
"net.errorsTx.summation",
"net.usage.average",
"power.power.average",
"storageAdapter.numberReadAveraged.average",
"storageAdapter.numberWriteAveraged.average",
"storageAdapter.read.average",
"storageAdapter.write.average",
"sys.uptime.latest",
]
## Collect IP addresses? Valid values are "ipv4" and "ipv6"
# ip_addresses = ["ipv6", "ipv4" ]
# host_metric_exclude = [] ## Nothing excluded by default
# host_instances = true ## true by default
## Clusters
# cluster_include = [ "/*/host/**"] # Inventory path to clusters to collect (by default all are collected)
# cluster_exclude = [] # Inventory paths to exclude
# cluster_metric_include = [] ## if omitted or empty, all metrics are collected
# cluster_metric_exclude = [] ## Nothing excluded by default
# cluster_instances = false ## false by default
## Resource Pools
# datastore_include = [ "/*/host/**"] # Inventory path to datastores to collect (by default all are collected)
# datastore_exclude = [] # Inventory paths to exclude
# datastore_metric_include = [] ## if omitted or empty, all metrics are collected
# datastore_metric_exclude = [] ## Nothing excluded by default
# datastore_instances = false ## false by default
## Datastores
# datastore_include = [ "/*/datastore/**"] # Inventory path to datastores to collect (by default all are collected)
# datastore_exclude = [] # Inventory paths to exclude
# datastore_metric_include = [] ## if omitted or empty, all metrics are collected
# datastore_metric_exclude = [] ## Nothing excluded by default
# datastore_instances = false ## false by default
## Datacenters
# datacenter_include = [ "/*/host/**"] # Inventory path to datacenters to collect (by default all are collected)
# datacenter_exclude = [] # Inventory paths to exclude
datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.
# datacenter_instances = false ## false by default
## Plugin Settings
## separator character to use for measurement and field names (default: "_")
# separator = "_"
## number of objects to retrieve per query for realtime resources (vms and hosts)
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
# max_query_objects = 256
## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
# max_query_metrics = 256
## number of go routines to use for collection and discovery of objects and metrics
# collect_concurrency = 1
# discover_concurrency = 1
## the interval before (re)discovering objects subject to metrics collection (default: 300s)
# object_discovery_interval = "300s"
## timeout applies to any of the api request made to vcenter
# timeout = "60s"
## When set to true, all samples are sent as integers. This makes the output
## data types backwards compatible with Telegraf 1.9 or lower. Normally all
## samples from vCenter, with the exception of percentages, are integer
## values, but under some conditions, some averaging takes place internally in
## the plugin. Setting this flag to "false" will send values as floats to
## preserve the full precision when averaging takes place.
# use_int_samples = true
## Custom attributes from vCenter can be very useful for queries in order to slice the
## metrics along different dimensions and for forming ad-hoc relationships. They are disabled
## by default, since they can add a considerable amount of tags to the resulting metrics. To
## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
## to select the attributes you want to include.
# custom_attribute_include = []
# custom_attribute_exclude = ["*"]
## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
## some versions (6.7, 7.0 and possibly more), certain metrics, such as cluster metrics, may be reported
## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
## it too much may cause performance issues.
# metric_lookback = 3
## Optional SSL Config
# ssl_ca = "/path/to/cafile"
# ssl_cert = "/path/to/certfile"
# ssl_key = "/path/to/keyfile"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## The Historical Interval value must match EXACTLY the interval in the daily
# "Interval Duration" found on the VCenter server under Configure > General > Statistics > Statistic intervals
# historical_interval = "5m"


@ -7,24 +7,16 @@ This plugin sends metrics to Logz.io over HTTPS.
```toml @sample.conf
# A plugin that can send metrics over HTTPS to Logz.io
[[outputs.logzio]]
## Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
# check_disk_space = true
## Connection timeout, defaults to "5s" if not set.
# timeout = "5s"
## The percent of used file system space at which the sender will stop queueing.
## When we will reach that percentage, the file system in which the queue is stored will drop
## all new logs until the percentage of used space drops below that threshold.
# disk_threshold = 98
## How often Logz.io sender should drain the queue.
## Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# drain_duration = "3s"
## Where Logz.io sender should store the queue
## queue_dir = Sprintf("%s%s%s%s%d", os.TempDir(), string(os.PathSeparator),
## "logzio-buffer", string(os.PathSeparator), time.Now().UnixNano())
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Logz.io account token
token = "your Logz.io token" # required
token = "your logz.io token" # required
## Use your listener URL for your Logz.io account region.
# url = "https://listener.logz.io:8071"


@ -1,23 +1,15 @@
# A plugin that can send metrics over HTTPS to Logz.io
[[outputs.logzio]]
## Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
# check_disk_space = true
## Connection timeout, defaults to "5s" if not set.
# timeout = "5s"
## The percent of used file system space at which the sender will stop queueing.
## When we will reach that percentage, the file system in which the queue is stored will drop
## all new logs until the percentage of used space drops below that threshold.
# disk_threshold = 98
## How often Logz.io sender should drain the queue.
## Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# drain_duration = "3s"
## Where Logz.io sender should store the queue
## queue_dir = Sprintf("%s%s%s%s%d", os.TempDir(), string(os.PathSeparator),
## "logzio-buffer", string(os.PathSeparator), time.Now().UnixNano())
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Logz.io account token
token = "your Logz.io token" # required
token = "your logz.io token" # required
## Use your listener URL for your Logz.io account region.
# url = "https://listener.logz.io:8071"


@ -57,7 +57,7 @@ func insertInclude(buf *bytes.Buffer, include string) error {
}
func insertIncludes(buf *bytes.Buffer, b includeBlock) error {
// Insert all includes in the order they occured
// Insert all includes in the order they occurred
for _, include := range b.Includes {
if err := insertInclude(buf, include); err != nil {
return err
@ -127,7 +127,7 @@ func main() {
block.Includes = append(block.Includes, string(inc[1]))
}
// Extract the block boarders
// Extract the block borders
block.extractBlockBorders(codeNode)
blocksToReplace = append(blocksToReplace, block)
}
@ -153,13 +153,13 @@ func main() {
log.Fatal(err)
}
}
// Copy the remainings of the original file...
// Copy the remainder of the original file...
if _, err := output.Write(readme[offset:]); err != nil {
log.Fatalf("Writing remaining content failed: %v", err)
}
// Write output with same permission as input
file, err := os.OpenFile(inputFilename, os.O_CREATE|os.O_WRONLY, perm)
file, err := os.OpenFile(inputFilename, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, perm)
if err != nil {
log.Fatalf("Opening output file failed: %v", err)
}
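The `os.O_TRUNC` flag added above matters: without it, `os.OpenFile` with
`O_CREATE|O_WRONLY` keeps the existing file contents, so rewriting a README
with shorter content leaves stale bytes from the old version at the end. A
minimal, self-contained sketch of the failure mode (hypothetical file name):

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Seed a file with ten bytes of "old" content.
	if err := os.WriteFile("demo.txt", []byte("0123456789"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Re-open WITHOUT os.O_TRUNC and write a shorter payload.
	f, err := os.OpenFile("demo.txt", os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := f.WriteString("abc"); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// The tail of the old content survives: prints "abc3456789".
	data, err := os.ReadFile("demo.txt")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%q", data)
}
```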