chore: Fix readme linter errors for input plugins M-Z (#11274)

This commit is contained in:
reimda 2022-06-08 15:22:56 -06:00 committed by GitHub
parent 21607ead9c
commit f7aab29381
91 changed files with 1294 additions and 716 deletions


@ -1,12 +1,11 @@
# Mailchimp Input Plugin
Pulls campaign reports from the [Mailchimp API](https://developer.mailchimp.com/).
Pulls campaign reports from the [Mailchimp API][1].
[1]: https://developer.mailchimp.com/
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage mailchimp`.
```toml @sample.conf
# Gathers metrics from the /3.0/reports MailChimp API
[[inputs.mailchimp]]


@ -1,6 +1,7 @@
# MarkLogic Input Plugin
The MarkLogic Telegraf plugin gathers health status metrics from one or more hosts.
The MarkLogic Telegraf plugin gathers health status metrics from one or more
hosts.
## Configuration


@ -15,11 +15,12 @@ This plugin gathers statistics data from a Mcrouter server.
# timeout = "5s"
```
## Measurements & Fields
## Metrics
The fields from this plugin are gathered in the *mcrouter* measurement.
Description of gathered fields can be found [here](https://github.com/facebook/mcrouter/wiki/Stats-list).
Description of gathered fields can be found
[here](https://github.com/facebook/mcrouter/wiki/Stats-list).
Fields:


@ -1,12 +1,17 @@
# mdstat Input Plugin
The mdstat plugin gathers statistics about any Linux MD RAID arrays configured on the host
by reading /proc/mdstat. For a full list of available fields see the
/proc/mdstat section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
For a better idea of what each field represents, see the
[mdstat man page](https://raid.wiki.kernel.org/index.php/Mdstat).
The mdstat plugin gathers statistics about any Linux MD RAID arrays configured
on the host by reading /proc/mdstat. For a full list of available fields see the
/proc/mdstat section of the [proc man page][man-proc]. For a better idea of
what each field represents, see the [mdstat man page][man-mdstat].
Stat collection based on Prometheus' mdstat collection library at <https://github.com/prometheus/procfs/blob/master/mdstat.go>
Stat collection based on Prometheus' [mdstat collection library][prom-lib].
[man-proc]: http://man7.org/linux/man-pages/man5/proc.5.html
[man-mdstat]: https://raid.wiki.kernel.org/index.php/Mdstat
[prom-lib]: https://github.com/prometheus/procfs/blob/master/mdstat.go
## Configuration
@ -18,7 +23,7 @@ Stat collection based on Prometheus' mdstat collection library at <https://githu
# file_name = "/proc/mdstat"
```
## Measurements & Fields
## Metrics
- mdstat
- BlocksSynced (if the array is rebuilding/checking, this is the count of blocks that have been scanned)


@ -22,7 +22,7 @@ This plugin gathers statistics data from a Memcached server.
# insecure_skip_verify = true
```
## Measurements & Fields
## Metrics
The fields from this plugin are gathered in the *memcached* measurement.
@ -76,7 +76,9 @@ Fields:
* touch_misses - Number of items that have been touched and not found
* uptime - Number of secs since the server started
Description of gathered fields taken from [here](https://github.com/memcached/memcached/blob/master/doc/protocol.txt).
Description of gathered fields taken from [memcached protocol docs][protocol].
[protocol]: https://github.com/memcached/memcached/blob/master/doc/protocol.txt
## Tags
@ -85,7 +87,9 @@ Description of gathered fields taken from [here](https://github.com/memcached/me
## Sample Queries
You can use the following query to get the average get hit and miss ratio, as well as the total average size of cached items, number of cached items and average connection counts per server.
You can use the following query to get the average get hit and miss ratio, as
well as the total average size of cached items, number of cached items and
average connection counts per server.
```sql
SELECT mean(get_hits) / mean(cmd_get) as get_ratio, mean(get_misses) / mean(cmd_get) as get_misses_ratio, mean(bytes), mean(curr_items), mean(curr_connections) FROM memcached WHERE time > now() - 1h GROUP BY server


@ -1,7 +1,9 @@
# Mesos Input Plugin
This input plugin gathers metrics from Mesos.
For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
This input plugin gathers metrics from Mesos. For more information, please
check the [Mesos Observability Metrics][1] page.
[1]: http://mesos.apache.org/documentation/latest/monitoring/
## Configuration
@ -50,10 +52,12 @@ For more information, please check the [Mesos Observability Metrics](http://meso
# insecure_skip_verify = false
```
By default this plugin is not configured to gather metrics from Mesos. Since a Mesos cluster can be deployed in numerous ways it does not provide any default
values. The user needs to specify the master/slave nodes this plugin will gather metrics from.
By default this plugin is not configured to gather metrics from Mesos. Since a
Mesos cluster can be deployed in numerous ways, it does not provide any default
values. The user needs to specify the master/slave nodes this plugin will gather
metrics from.
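For example, a minimal sketch naming one master and one slave (the URLs are
placeholders for your own nodes) might look like:

```toml
[[inputs.mesos]]
  ## placeholder URLs; point these at your own master/slave nodes
  masters = ["http://localhost:5050"]
  slaves = ["http://localhost:5051"]
```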
## Measurements & Fields
## Metrics
Mesos master metric groups


@ -1,4 +1,4 @@
# Mock Data
# Mock Data Input Plugin
The mock input plugin generates random data based on a selection of different
algorithms. For example, it can produce random data between a set of values,
@ -9,13 +9,6 @@ required to mock their situation.
## Configuration
The mock plugin only requires that:
1) Metric name is set
2) One of the below data field algorithms is defined
Below is a sample config to generate one of each of the four types:
```toml @sample.conf
# Generate metrics for test and demonstration purposes
[[inputs.mock]]
@ -49,6 +42,11 @@ Below is a sample config to generate one of each of the four types:
## volatility = 0.2
```
The mock plugin only requires that:
1) Metric name is set
2) One of the data field algorithms is defined
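As a minimal sketch satisfying both requirements (section and option names
assumed from the plugin's sample configuration; the `random` algorithm is one
of those described below):

```toml
[[inputs.mock]]
  ## 1) the metric name
  metric_name = "mock"

  ## 2) one data field algorithm, here a random value between 1.0 and 6.0
  [[inputs.mock.random]]
    name = "rand"
    min = 1.0
    max = 6.0
```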
## Available Algorithms
The available algorithms for generating mock data include:


@ -206,13 +206,22 @@ Registers via Modbus TCP or Modbus RTU/ASCII.
## Notes
You can debug Modbus connection issues by enabling `debug_connection`. To see those debug messages, Telegraf has to be started with debugging enabled (i.e. with the `--debug` option). Please be aware that connection tracing will produce a lot of messages and should __NOT__ be used in production environments.
You can debug Modbus connection issues by enabling `debug_connection`. To see
those debug messages, Telegraf has to be started with debugging enabled
(i.e. with the `--debug` option). Please be aware that connection tracing will
produce a lot of messages and should __NOT__ be used in production environments.
Please use `pause_between_requests` with care. Ensure the total gather time, including the pause(s), does not exceed the configured collection interval. Note that pauses add up if multiple requests are sent!
Please use `pause_between_requests` with care. Ensure the total gather time,
including the pause(s), does not exceed the configured collection interval. Note
that pauses add up if multiple requests are sent!
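As a minimal excerpt sketch (assuming a TCP-attached device; the `controller`
value is a placeholder):

```toml
[[inputs.modbus]]
  name = "device"
  controller = "tcp://localhost:502"
  ## trace the modbus connection; requires starting Telegraf with --debug,
  ## do NOT leave enabled in production
  debug_connection = true
```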
## Configuration styles
The modbus plugin supports multiple configuration styles that can be set using the `configuration_type` setting. The different styles are described below. Please note that styles cannot be mixed, i.e. only the settings belonging to the configured `configuration_type` are used for constructing _modbus_ requests and creation of metrics.
The modbus plugin supports multiple configuration styles that can be set using
the `configuration_type` setting. The different styles are described
below. Please note that styles cannot be mixed, i.e. only the settings belonging
to the configured `configuration_type` are used for constructing _modbus_
requests and creation of metrics.
Directly jump to the styles:
@ -223,68 +232,83 @@ Directly jump to the styles:
### `register` configuration style
This is the original style used by this plugin. It allows a per-register configuration for a single slave-device.
#### Metrics
Metrics are custom and configured using the `discrete_inputs`, `coils`,
`holding_register` and `input_registers` options.
This is the original style used by this plugin. It allows a per-register
configuration for a single slave-device.
#### Usage of `data_type`
The field `data_type` defines the representation of the data value on input from the modbus registers.
The input values are then converted from the given `data_type` to a type that is appropriate when
sending the value to the output plugin. These output types are usually one of string,
integer or floating-point-number. The size of the output type is assumed to be large enough
for all supported input types. The mapping from the input type to the output type is fixed
and cannot be configured.
The field `data_type` defines the representation of the data value on input from
the modbus registers. The input values are then converted from the given
`data_type` to a type that is appropriate when sending the value to the output
plugin. These output types are usually one of string, integer or
floating-point-number. The size of the output type is assumed to be large enough
for all supported input types. The mapping from the input type to the output
type is fixed and cannot be configured.
##### Integers: `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64`, `UINT64`
These types are used for integer input values. Select the one that matches your modbus data source.
These types are used for integer input values. Select the one that matches your
modbus data source.
##### Floating Point: `FLOAT32-IEEE`, `FLOAT64-IEEE`
Use these types if your modbus registers contain a value that is encoded in this format. These types
always include the sign, therefore no variant exists.
Use these types if your modbus registers contain a value that is encoded in this
format. These types always include the sign, therefore no unsigned variant
exists.
##### Fixed Point: `FIXED`, `UFIXED` (`FLOAT32`)
These types are handled as an integer type on input, but are converted to floating point representation
for further processing (e.g. scaling). Use one of these types when the input value is a decimal fixed point
representation of a non-integer value.
These types are handled as an integer type on input, but are converted to
floating point representation for further processing (e.g. scaling). Use one of
these types when the input value is a decimal fixed point representation of a
non-integer value.
Select the type `UFIXED` when the input type is declared to hold unsigned integer values, which cannot
be negative. The documentation of your modbus device should indicate this by a term like
'uint16 containing fixed-point representation with N decimal places'.
Select the type `UFIXED` when the input type is declared to hold unsigned
integer values, which cannot be negative. The documentation of your modbus
device should indicate this by a term like 'uint16 containing fixed-point
representation with N decimal places'.
Select the type `FIXED` when the input type is declared to hold signed integer values. Your documentation
of the modbus device should indicate this with a term like 'int32 containing fixed-point representation
with N decimal places'.
Select the type `FIXED` when the input type is declared to hold signed integer
values. Your documentation of the modbus device should indicate this with a term
like 'int32 containing fixed-point representation with N decimal places'.
(FLOAT32 is deprecated and should not be used. UFIXED provides the same conversion from unsigned values).
(FLOAT32 is deprecated and should not be used. UFIXED provides the same
conversion from unsigned values).
---
### `request` configuration style
This style can be used to specify the modbus requests directly. It enables specifying multiple `[[inputs.modbus.request]]` sections including multiple slave-devices. This way, _modbus_ gateway devices can be queried. Please note that _requests_ might be split for non-consecutive addresses. If you want to avoid this behavior please add _fields_ with the `omit` flag set filling the gaps between addresses.
This style can be used to specify the modbus requests directly. It enables
specifying multiple `[[inputs.modbus.request]]` sections including multiple
slave-devices. This way, _modbus_ gateway devices can be queried. Please note
that _requests_ might be split for non-consecutive addresses. If you want to
avoid this behavior please add _fields_ with the `omit` flag set filling the
gaps between addresses.
#### Slave device
You can use the `slave_id` setting to specify the ID of the slave device to query. It should be specified for each request, otherwise it defaults to zero. Please note, only one `slave_id` can be specified per request.
You can use the `slave_id` setting to specify the ID of the slave device to
query. It should be specified for each request, otherwise it defaults to
zero. Please note, only one `slave_id` can be specified per request.
#### Byte order of the register
The `byte_order` setting specifies the byte and word-order of the registers. It can be set to `ABCD` for _big endian (Motorola)_ or `DCBA` for _little endian (Intel)_ format as well as `BADC` and `CDAB` for _big endian_ or _little endian_ with _byte swap_.
The `byte_order` setting specifies the byte and word-order of the registers. It
can be set to `ABCD` for _big endian (Motorola)_ or `DCBA` for _little endian
(Intel)_ format as well as `BADC` and `CDAB` for _big endian_ or _little endian_
with _byte swap_.
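For example, if a device returns the two raw register words `0x1234` and
`0x5678` (hypothetical values), a `UINT32` decodes as:

```text
ABCD -> 0x12345678
BADC -> 0x34127856
CDAB -> 0x56781234
DCBA -> 0x78563412
```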
#### Register type
The `register` setting specifies the modbus register-set to query and can be set to `coil`, `discrete`, `holding` or `input`.
The `register` setting specifies the modbus register-set to query and can be set
to `coil`, `discrete`, `holding` or `input`.
#### Per-request measurement setting
You can specify the name of the measurement for the following field definitions using the `measurement` setting. If the setting is omitted `modbus` is used. Furthermore, the measurement value can be overridden by each field individually.
You can specify the name of the measurement for the following field definitions
using the `measurement` setting. If the setting is omitted `modbus` is
used. Furthermore, the measurement value can be overridden by each field
individually.
#### Field definitions
@ -292,78 +316,148 @@ Each `request` can contain a list of fields to collect from the modbus device.
##### address
A field is identified by an `address` that reflects the modbus register address. You can usually find the address values for the different datapoints in the datasheet of your modbus device. This is a mandatory setting.
A field is identified by an `address` that reflects the modbus register
address. You can usually find the address values for the different datapoints in
the datasheet of your modbus device. This is a mandatory setting.
For _coil_ and _discrete input_ registers this setting specifies the __bit__ containing the value of the field.
For _coil_ and _discrete input_ registers this setting specifies the __bit__
containing the value of the field.
##### name
Using the `name` setting you can specify the field-name in the metric as output by the plugin. This setting is ignored if the field's `omit` is set to `true` and can be omitted in this case.
Using the `name` setting you can specify the field-name in the metric as output
by the plugin. This setting is ignored if the field's `omit` is set to `true`
and can be omitted in this case.
__Please note:__ There cannot be multiple fields with the same `name` in one metric identified by `measurement`, `slave_id` and `register`.
__Please note:__ There cannot be multiple fields with the same `name` in one
metric identified by `measurement`, `slave_id` and `register`.
##### register datatype
The `register` setting specifies the datatype of the modbus register and can be set to `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64` or `UINT64` for integer types or `FLOAT32` and `FLOAT64` for IEEE 754 binary representations of floating point values. Usually the datatype of the register is listed in the datasheet of your modbus device in relation to the `address` described above.
The `register` setting specifies the datatype of the modbus register and can be
set to `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64` or `UINT64` for integer
types or `FLOAT32` and `FLOAT64` for IEEE 754 binary representations of floating
point values. Usually the datatype of the register is listed in the datasheet of
your modbus device in relation to the `address` described above.
This setting is ignored if the field's `omit` is set to `true` or if the `register` type is a bit-type (`coil` or `discrete`) and can be omitted in these cases.
This setting is ignored if the field's `omit` is set to `true` or if the
`register` type is a bit-type (`coil` or `discrete`) and can be omitted in
these cases.
##### scaling
You can use the `scale` setting to scale the register values, e.g. if the register contains a fixed-point value in `UINT32` format with two decimal places. To convert the read register value to the actual value you can set `scale=0.01`. The scale is used as a factor e.g. `field_value * scale`.
You can use the `scale` setting to scale the register values, e.g. if the
register contains a fixed-point value in `UINT32` format with two decimal
places. To convert the read register value to the actual value you can set
`scale=0.01`. The scale is used as a factor, e.g. `field_value * scale`.
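As an excerpt sketch (hypothetical register; the field datatype key is written
as `type` here, following the plugin's shipped sample configuration):

```toml
fields = [
  ## raw register value 2037 * scale 0.01 -> reported field value 20.37
  { address = 0, name = "temperature", type = "INT16", scale = 0.01 },
]
```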
This setting is ignored if the field's `omit` is set to `true` or if the `register` type is a bit-type (`coil` or `discrete`) and can be omitted in these cases.
This setting is ignored if the field's `omit` is set to `true` or if the
`register` type is a bit-type (`coil` or `discrete`) and can be omitted in these
cases.
__Please note:__ The resulting field-type will be set to `FLOAT64` if no output format is specified.
__Please note:__ The resulting field-type will be set to `FLOAT64` if no output
format is specified.
##### output datatype
Using the `output` setting you can explicitly specify the output field-datatype. The `output` type can be `INT64`, `UINT64` or `FLOAT64`. If not set explicitly, the output type is guessed as follows: If `scale` is set to a non-zero value, the output type is `FLOAT64`. Otherwise, the output type corresponds to the register datatype _class_, i.e. `INT*` will result in `INT64`, `UINT*` in `UINT64` and `FLOAT*` in `FLOAT64`.
Using the `output` setting you can explicitly specify the output
field-datatype. The `output` type can be `INT64`, `UINT64` or `FLOAT64`. If not
set explicitly, the output type is guessed as follows: If `scale` is set to a
non-zero value, the output type is `FLOAT64`. Otherwise, the output type
corresponds to the register datatype _class_, i.e. `INT*` will result in
`INT64`, `UINT*` in `UINT64` and `FLOAT*` in `FLOAT64`.
This setting is ignored if the field's `omit` is set to `true` or if the `register` type is a bit-type (`coil` or `discrete`) and can be omitted in these cases. For `coil` and `discrete` registers the field-value is output as zero or one in `UINT16` format.
This setting is ignored if the field's `omit` is set to `true` or if the
`register` type is a bit-type (`coil` or `discrete`) and can be omitted in these
cases. For `coil` and `discrete` registers the field-value is output as zero or
one in `UINT16` format.
#### per-field measurement setting
The `measurement` setting can be used to override the measurement name on a per-field basis. This might be useful if you want to split the fields in one request to multiple measurements. If not specified, the value specified in the [`request` section](#per-request-measurement-setting) or, if also omitted, `modbus` is used.
The `measurement` setting can be used to override the measurement name on a
per-field basis. This might be useful if you want to split the fields in one
request to multiple measurements. If not specified, the value specified in the
[`request` section](#per-request-measurement-setting) or, if also omitted,
`modbus` is used.
This setting is ignored if the field's `omit` is set to `true` and can be omitted in this case.
This setting is ignored if the field's `omit` is set to `true` and can be
omitted in this case.
#### omitting a field
When specifying `omit=true`, the corresponding field will be ignored when collecting the metric but is taken into account when constructing the modbus requests. This way, you can fill "holes" in the addresses to construct consecutive address ranges resulting in a single request. Using a single modbus request can be beneficial as the values are all collected at the same point in time.
When specifying `omit=true`, the corresponding field will be ignored when
collecting the metric but is taken into account when constructing the modbus
requests. This way, you can fill "holes" in the addresses to construct
consecutive address ranges resulting in a single request. Using a single modbus
request can be beneficial as the values are all collected at the same point in
time.
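Putting the field settings above together, a hedged sketch of a single coil
request that uses `omit` to bridge an address gap (addresses and names are
hypothetical) could look like:

```toml
[[inputs.modbus.request]]
  slave_id = 1
  byte_order = "ABCD"
  register = "coil"
  fields = [
    { address = 0, name = "motor1_run" },
    { address = 1, name = "jog", measurement = "motor" },
    ## fill the hole at address 2 so addresses 0-3 form one request
    { address = 2, omit = true },
    { address = 3, name = "motor1_overheating" },
  ]
```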
#### Tags definitions
Each `request` can be accompanied by tags valid for this request.
__Please note:__ These tags take precedence over predefined tags such as `name`, `type` or `slave_id`.
__Please note:__ These tags take precedence over predefined tags such as `name`,
`type` or `slave_id`.
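For example (tag names and values are purely illustrative):

```toml
[inputs.modbus.request.tags]
  machine = "impresser"
  location = "main building"
```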
---
## Metrics
Metrics are custom and configured using the `discrete_inputs`, `coils`,
`holding_register` and `input_registers` options.
## Troubleshooting
### Strange data
Modbus documentation is often a mess. People confuse memory-address (starts at one) and register address (starts at zero) or are unsure about the word-order used. Furthermore, there are some non-standard implementations that also swap the bytes within the register word (16-bit).
Modbus documentation is often a mess. People confuse memory-address (starts at
one) and register address (starts at zero) or are unsure about the word-order
used. Furthermore, there are some non-standard implementations that also swap
the bytes within the register word (16-bit).
If you get an error or don't get the expected values from your device, you can try the following steps (assuming a 32-bit value).
If you get an error or don't get the expected values from your device, you can
try the following steps (assuming a 32-bit value).
If you are using a serial device and get a `permission denied` error, check the permissions of your serial device and change them accordingly.
If you are using a serial device and get a `permission denied` error, check the
permissions of your serial device and change them accordingly.
In case you get an `exception '2' (illegal data address)` error you might try to offset your `address` entries by minus one as it is very likely that there is confusion between memory and register addresses.
In case you get an `exception '2' (illegal data address)` error you might try to
offset your `address` entries by minus one as it is very likely that there is
confusion between memory and register addresses.
If you see strange values, the `byte_order` might be wrong. You can either probe all combinations (`ABCD`, `CDAB`, `BADC` or `DCBA`) or set `byte_order="ABCD" data_type="UINT32"` and use the resulting value(s) in an online converter like [this](https://www.scadacore.com/tools/programming-calculators/online-hex-converter/). This especially makes sense if you don't want to mess with the device, deal with 64-bit values and/or don't know the `data_type` of your register (e.g. fixed-point values vs. IEEE floating point).
If you see strange values, the `byte_order` might be wrong. You can either probe
all combinations (`ABCD`, `CDAB`, `BADC` or `DCBA`) or set `byte_order="ABCD"
data_type="UINT32"` and use the resulting value(s) in an online converter like
[this][online-converter]. This especially makes sense if you don't want to mess
with the device, deal with 64-bit values and/or don't know the `data_type` of
your register (e.g. fixed-point values vs. IEEE floating point).
If your data still looks corrupted, please post your configuration, error message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of the telegraf support channels (forum, slack or as an issue).
If nothing helps, please post your configuration, error message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of the telegraf support channels (forum, slack or as an issue).
If your data still looks corrupted or if nothing helps, please post your
configuration, error message and/or the output of `byte_order="ABCD"
data_type="UINT32"` to one of the telegraf support channels (forum, slack or as
an issue).
[online-converter]: https://www.scadacore.com/tools/programming-calculators/online-hex-converter/
### Workarounds
Some Modbus devices need special read characteristics when reading data and will fail otherwise. For example, some serial devices need a pause between register read requests. Others might only support a limited number of simultaneously connected devices, like serial devices or some ModbusTCP devices. In case you need to access those devices in parallel you might want to disconnect immediately after the plugin finishes reading.
Some Modbus devices need special read characteristics when reading data and will
fail otherwise. For example, some serial devices need a pause between register
read requests. Others might only support a limited number of simultaneously
connected devices, like serial devices or some ModbusTCP devices. In case you
need to access those devices in parallel you might want to disconnect
immediately after the plugin finishes reading.
To enable this plugin to also handle those "special" devices, there is the `workarounds` configuration option. In case your documentation states certain read requirements or you get read timeouts or other read errors, you might want to try one or more workaround options.
If you find that other/more workarounds are required for your device, please let us know.
To enable this plugin to also handle those "special" devices, there is the
`workarounds` configuration option. In case your documentation states certain
read requirements or you get read timeouts or other read errors, you might want
to try one or more workaround options. If you find that other/more workarounds
are required for your device, please let us know.
In case your device needs a workaround that is not yet implemented, please open an issue or submit a pull-request.
In case your device needs a workaround that is not yet implemented, please open
an issue or submit a pull-request.
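As a hedged sketch (option names taken from the plugin's sample configuration;
values are hypothetical):

```toml
[inputs.modbus.workarounds]
  ## give slow serial devices time between register read requests
  pause_between_requests = "50ms"
  ## free the connection for other clients after each gather cycle
  close_connection_after_gather = true
```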
## Example Output


@ -64,7 +64,7 @@ Some permission related errors are logged at debug level, you can check these
messages by setting `debug = true` in the agent section of the configuration or
by running Telegraf with the `--debug` argument.
### Metrics
## Metrics
- mongodb
- tags:
@ -300,7 +300,7 @@ by running Telegraf with the `--debug` argument.
- commands_time (integer)
- commands_count (integer)
### Example Output
## Example Output
```shell
mongodb,hostname=127.0.0.1:27017 active_reads=1i,active_writes=0i,aggregate_command_failed=0i,aggregate_command_total=0i,assert_msg=0i,assert_regular=0i,assert_rollovers=0i,assert_user=0i,assert_warning=0i,available_reads=127i,available_writes=128i,commands=65i,commands_per_sec=4i,connections_available=51199i,connections_current=1i,connections_total_created=5i,count_command_failed=0i,count_command_total=7i,cursor_no_timeout=0i,cursor_no_timeout_count=0i,cursor_pinned=0i,cursor_pinned_count=0i,cursor_timed_out=0i,cursor_timed_out_count=0i,cursor_total=0i,cursor_total_count=0i,delete_command_failed=0i,delete_command_total=1i,deletes=1i,deletes_per_sec=0i,distinct_command_failed=0i,distinct_command_total=0i,document_deleted=0i,document_inserted=0i,document_returned=0i,document_updated=0i,find_and_modify_command_failed=0i,find_and_modify_command_total=0i,find_command_failed=0i,find_command_total=1i,flushes=52i,flushes_per_sec=0i,flushes_total_time_ns=364000000i,get_more_command_failed=0i,get_more_command_total=0i,getmores=0i,getmores_per_sec=0i,insert_command_failed=0i,insert_command_total=0i,inserts=0i,inserts_per_sec=0i,jumbo_chunks=0i,latency_commands=5740i,latency_commands_count=46i,latency_reads=348i,latency_reads_count=7i,latency_writes=0i,latency_writes_count=0i,net_in_bytes=296i,net_in_bytes_count=4262i,net_out_bytes=29322i,net_out_bytes_count=242103i,open_connections=1i,operation_scan_and_order=0i,operation_write_conflicts=0i,page_faults=1i,percent_cache_dirty=0,percent_cache_used=0,queries=1i,queries_per_sec=0i,queued_reads=0i,queued_writes=0i,resident_megabytes=33i,storage_freelist_search_bucket_exhausted=0i,storage_freelist_search_requests=0i,storage_freelist_search_scanned=0i,tcmalloc_central_cache_free_bytes=0i,tcmalloc_current_allocated_bytes=0i,tcmalloc_current_total_thread_cache_bytes=0i,tcmalloc_heap_size=0i,tcmalloc_max_total_thread_cache_bytes=0i,tcmalloc_pageheap_commit_count=0i,tcmalloc_pageheap_committed_bytes=0i,tcmalloc_pageheap_decommit_count=0i,tcmalloc_pageheap_free_bytes=0i,tcmalloc_pageheap_reserve_count=0i,tcmalloc_pageheap_scavenge_count=0i,tcmalloc_pageheap_total_commit_bytes=0i,tcmalloc_pageheap_total_decommit_bytes=0i,tcmalloc_pageheap_total_reserve_bytes=0i,tcmalloc_pageheap_unmapped_bytes=0i,tcmalloc_spinlock_total_delay_ns=0i,tcmalloc_thread_cache_free_bytes=0i,tcmalloc_total_free_bytes=0i,tcmalloc_transfer_cache_free_bytes=0i,total_available=0i,total_created=0i,total_docs_scanned=0i,total_in_use=0i,total_keys_scanned=0i,total_refreshing=0i,total_tickets_reads=128i,total_tickets_writes=128i,ttl_deletes=0i,ttl_deletes_per_sec=0i,ttl_passes=51i,ttl_passes_per_sec=0i,update_command_failed=0i,update_command_total=0i,updates=0i,updates_per_sec=0i,uptime_ns=6135152000000i,version="4.0.19",vsize_megabytes=5088i,wt_connection_files_currently_open=13i,wt_data_handles_currently_active=18i,wtcache_app_threads_page_read_count=99i,wtcache_app_threads_page_read_time=44528i,wtcache_app_threads_page_write_count=19i,wtcache_bytes_read_into=3248195i,wtcache_bytes_written_from=170612i,wtcache_current_bytes=3648788i,wtcache_internal_pages_evicted=0i,wtcache_max_bytes_configured=8053063680i,wtcache_modified_pages_evicted=0i,wtcache_pages_evicted_by_app_thread=0i,wtcache_pages_queued_for_eviction=0i,wtcache_pages_read_into=234i,wtcache_pages_requested_from=18235i,wtcache_server_evicting_pages=0i,wtcache_tracked_dirty_bytes=0i,wtcache_unmodified_pages_evicted=0i,wtcache_worker_thread_evictingpages=0i 1595691605000000000


@ -89,11 +89,11 @@ and creates metrics using one of the supported [input data formats][].
## About Topic Parsing
The MQTT topic as a whole is stored as a tag, but this can be far too coarse
to be easily used when utilizing the data further down the line. This
change allows tag values to be extracted from the MQTT topic letting you
store the information provided in the topic in a meaningful way. An `_` denotes an
ignored entry in the topic path. Please see the following example.
The MQTT topic as a whole is stored as a tag, but this can be far too coarse to
be easily used when utilizing the data further down the line. This change allows
tag values to be extracted from the MQTT topic letting you store the information
provided in the topic in a meaningful way. An `_` denotes an ignored entry in
the topic path. Please see the following example.
## Example Configuration for topic parsing
@ -127,13 +127,13 @@ ignored entry in the topic path. Please see the following example.
test = "int"
```
Result:
## Example Output
```shell
cpu,host=pop-os,tag=telegraf,topic=telegraf/one/cpu/23 value=45,test=23i 1637014942460689291
```
### Metrics
## Metrics
- All measurements are tagged with the incoming topic, i.e.
`topic=telegraf/host01/cpu`


@ -52,20 +52,23 @@ Data format used to parse the file contents:
## Example Output
This example shows a BME280 connected to a Raspberry Pi, using the sample config.
This example shows a BME280 connected to a Raspberry Pi, using the sample
config.
```sh
multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 1547202076000000000
```
To reproduce this, connect a BME280 to the board's GPIO pins and register the BME280 device driver
To reproduce this, connect a BME280 to the board's GPIO pins and register the
BME280 device driver:
```sh
cd /sys/bus/i2c/devices/i2c-1
echo bme280 0x76 > new_device
```
The kernel driver provides the following files in `/sys/bus/i2c/devices/1-0076/iio:device0`:
The kernel driver provides the following files in
`/sys/bus/i2c/devices/1-0076/iio:device0`:
* `in_humidityrelative_input`: `48900`
* `in_pressure_input`: `101.343285156`


@ -1,7 +1,9 @@
# NATS Input Plugin
The [NATS](http://www.nats.io/about/) monitoring plugin gathers metrics from
the NATS [monitoring http server](https://www.nats.io/documentation/server/gnatsd-monitoring/).
The [NATS](http://www.nats.io/about/) monitoring plugin gathers metrics from the
NATS [monitoring http server][1].
[1]: https://www.nats.io/documentation/server/gnatsd-monitoring/
## Configuration


@ -1,10 +1,12 @@
# Neptune Apex Input Plugin
The Neptune Apex controller family allows an aquarium hobbyist to monitor and control
their tanks based on various probes. The data is taken directly from the `/cgi-bin/status.xml` at the interval specified
in the telegraf.conf configuration file.
The Neptune Apex controller family allows an aquarium hobbyist to monitor and
control their tanks based on various probes. The data is taken directly from the
`/cgi-bin/status.xml` at the interval specified in the telegraf.conf
configuration file.
The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects real-time data from the Apex's status.xml page.
The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects
real-time data from the Apex's status.xml page.
## Configuration
@ -27,13 +29,16 @@ The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects real-t
## Metrics
The Neptune Apex controller family allows an aquarium hobbyist to monitor and control
their tanks based on various probes. The data is taken directly from the /cgi-bin/status.xml at the interval specified
in the telegraf.conf configuration file.
The Neptune Apex controller family allows an aquarium hobbyist to monitor and
control their tanks based on various probes. The data is taken directly from the
/cgi-bin/status.xml at the interval specified in the telegraf.conf configuration
file.
No manipulation is done on any of the fields to ensure future changes to the status.xml do not introduce conversion bugs
to this plugin. When reasonable and predictable, some tags are derived to make graphing easier without front-end
programming. These tags are clearly marked in the list below and should be considered a convenience rather than authoritative.
No manipulation is done on any of the fields to ensure future changes to the
status.xml do not introduce conversion bugs to this plugin. When reasonable and
predictable, some tags are derived to make graphing easier without front-end
programming. These tags are clearly marked in the list below and should be
considered a convenience rather than authoritative.
- neptune_apex (All metrics have this measurement name)
- tags:
@ -78,7 +83,8 @@ SELECT mean("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time
### sendRequest failure
This indicates a problem communicating with the local Apex controller. If on Mac/Linux, try curl:
This indicates a problem communicating with the local Apex controller. If on
Mac/Linux, try curl:
```sh
curl apex.local/cgi-bin/status.xml
@ -88,12 +94,14 @@ to isolate the problem.
### parseXML errors
Ensure the XML being returned is valid. If you get valid XML back, open a bug request.
Ensure the XML being returned is valid. If you get valid XML back, open a bug
request.
### Missing fields/data
The neptune_apex plugin is strict on its input to prevent any conversion errors. If you have fields in the status.xml
output that are not converted to a metric, open a feature request and paste your whole status.xml
The neptune_apex plugin is strict on its input to prevent any conversion
errors. If you have fields in the status.xml output that are not converted to a
metric, open a feature request and paste your whole status.xml.
## Example Output
@ -144,10 +152,11 @@ neptune_apex,hardware=1.0,host=ubuntu,name=Volt_4,software=5.04_7A18,source=apex
## Contributing
This plugin is used for mission-critical aquatic life support. A bug could very well result in the death of animals.
Neptune does not publish a schema file and as such, we have made this plugin very strict on input with no provisions for
automatically adding fields. We are also careful to not add default values when none are presented to prevent automation
errors.
This plugin is used for mission-critical aquatic life support. A bug could very
well result in the death of animals. Neptune does not publish a schema file and
as such, we have made this plugin very strict on input with no provisions for
automatically adding fields. We are also careful to not add default values when
none are presented to prevent automation errors.
When writing unit tests, use actual Apex output to run tests. It's acceptable to abridge the number of repeated fields
but never inner fields or parameters.
When writing unit tests, use actual Apex output to run tests. It's acceptable to
abridge the number of repeated fields but never inner fields or parameters.


@ -1,6 +1,7 @@
# Net Input Plugin
This plugin gathers metrics about network interface and protocol usage (Linux only).
This plugin gathers metrics about network interface and protocol usage (Linux
only).
## Configuration
@ -21,7 +22,7 @@ This plugin gathers metrics about network interface and protocol usage (Linux on
##
```
## Measurements & Fields
## Metrics
The fields from this plugin are gathered in the _net_ measurement.
@ -36,11 +37,20 @@ Fields (all platforms):
* drop_in - The total number of received packets dropped by the interface
* drop_out - The total number of transmitted packets dropped by the interface
Different platforms gather the data above with different mechanisms. Telegraf uses the [gopsutil](https://github.com/shirou/gopsutil) package, which under Linux reads the /proc/net/dev file.
Under freebsd/openbsd and darwin the plugin uses netstat.
Different platforms gather the data above with different mechanisms. Telegraf
uses the [gopsutil](https://github.com/shirou/gopsutil) package, which under
Linux reads the /proc/net/dev file. Under freebsd/openbsd and darwin the plugin
uses netstat.
Additionally, for the time being _only under Linux_, the plugin gathers system wide stats for different network protocols using /proc/net/snmp (tcp, udp, icmp, etc.).
Explanation of the different metrics exposed by snmp is out of the scope of this document. The best way to find information would be tracing the constants in the Linux kernel source [here](https://elixir.bootlin.com/linux/latest/source/net/ipv4/proc.c) and their usage. If /proc/net/snmp cannot be read for some reason, telegraf ignores the error silently.
Additionally, for the time being _only under Linux_, the plugin gathers system
wide stats for different network protocols using /proc/net/snmp (tcp, udp, icmp,
etc.). Explanation of the different metrics exposed by snmp is out of the scope
of this document. The best way to find information would be tracing the
constants in the [Linux kernel source][source] and their usage. If
/proc/net/snmp cannot be read for some reason, telegraf ignores the error
silently.
[source]: https://elixir.bootlin.com/linux/latest/source/net/ipv4/proc.c
## Tags
@ -51,7 +61,12 @@ Under Linux the system wide protocol metrics have the interface=all tag.
## Sample Queries
You can use the following query to get the upload/download traffic rate per second for all interfaces in the last hour. The query uses the [derivative function](https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative) which calculates the rate of change between subsequent field values.
You can use the following query to get the upload/download traffic rate per
second for all interfaces in the last hour. The query uses the [derivative
function][deriv] which calculates the rate of change between subsequent field
values.
[deriv]: https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative
```sql
SELECT derivative(first(bytes_recv), 1s) as "download bytes/sec", derivative(first(bytes_sent), 1s) as "upload bytes/sec" FROM net WHERE time > now() - 1h AND interface != 'all' GROUP BY time(10s), interface fill(0);


@ -1,6 +1,7 @@
# Netstat Input Plugin
This plugin collects TCP connection states and UDP socket counts by using `lsof`.
This plugin collects TCP connection states and UDP socket counts by using
`lsof`.
## Configuration
@ -10,7 +11,7 @@ This plugin collects TCP connections state and UDP socket counts by using `lsof`
# no configuration
```
## Measurements
## Metrics
Supported TCP connection states are as follows.


@ -1,9 +1,15 @@
# NFS Client Input Plugin
The NFS Client input plugin collects data from /proc/self/mountstats. By default, only a limited number of general system-level metrics are collected, including basic read/write counts.
If `fullstat` is set, a great deal of additional metrics are collected, detailed below.
The NFS Client input plugin collects data from /proc/self/mountstats. By
default, only a limited number of general system-level metrics are collected,
including basic read/write counts. If `fullstat` is set, a great deal of
additional metrics are collected, detailed below.
__NOTE__ Many of the metrics, even if tagged with a mount point, are really _per-server_. Thus, if you mount these two shares: `nfs01:/vol/foo/bar` and `nfs01:/vol/foo/baz`, there will be two near identical entries in /proc/self/mountstats. This is a limitation of the metrics exposed by the kernel, not the telegraf plugin.
__NOTE__ Many of the metrics, even if tagged with a mount point, are really
_per-server_. Thus, if you mount these two shares: `nfs01:/vol/foo/bar` and
`nfs01:/vol/foo/baz`, there will be two near identical entries in
/proc/self/mountstats. This is a limitation of the metrics exposed by the
kernel, not the telegraf plugin.
## Configuration
@ -45,7 +51,9 @@ __NOTE__ Many of the metrics, even if tagged with a mount point, are really _per
- __include_operations__ list(string): List of specific NFS operations to track. See /proc/self/mountstats (the "per-op statistics" section) for complete lists of valid options for NFSv3 and NFSv4. The default is to gather all metrics, but this is almost certainly _not_ what you want (there are 22 operations for NFSv3, and well over 50 for NFSv4). A suggested 'minimal' list of operations to collect for basic usage: `['READ','WRITE','ACCESS','GETATTR','READDIR','LOOKUP']`
- __exclude_operations__ list(string): Gather all metrics, except those listed. Excludes take precedence over includes.
_N.B._ the `include_mounts` and `exclude_mounts` arguments are both applied to the local mount location (e.g. /mnt/NFS), not the server export (e.g. nfsserver:/vol/NFS). Go regexp patterns can be used in either.
_N.B._ the `include_mounts` and `exclude_mounts` arguments are both applied to
the local mount location (e.g. /mnt/NFS), not the server export
(e.g. nfsserver:/vol/NFS). Go regexp patterns can be used in either.
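A short sketch tying these options together (the path and operation list are
illustrative only):

```toml
[[inputs.nfsclient]]
  ## only collect mounts under /mnt/NFS, matched against the local mount point
  include_mounts = ["/mnt/NFS"]
  fullstat = true
  include_operations = ["READ", "WRITE", "ACCESS", "GETATTR"]
```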
### References
@ -79,13 +87,16 @@ In addition enabling `fullstat` will make many more metrics available.
## Additional metrics
When `fullstat` is true, additional measurements are collected. Tags are the same as above.
When `fullstat` is true, additional measurements are collected. Tags are the
same as above.
### NFS Operations
Most descriptions come from Reference [[3](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex)] and `nfs_iostat.h`. Field order and names are the same as in `/proc/self/mountstats` and the Kernel source.
Most descriptions come from [Reference][ref] and `nfs_iostat.h`. Field order
and names are the same as in `/proc/self/mountstats` and the Kernel source.
Please refer to `/proc/self/mountstats` for a list of supported NFS operations, as it changes occasionally.
Please refer to `/proc/self/mountstats` for a list of supported NFS operations,
as it changes occasionally.
- nfs_bytes
- fields:
@ -156,6 +167,8 @@ Please refer to `/proc/self/mountstats` for a list of supported NFS operations,
- total_time (int, milliseconds): Cumulative time a request waited in the queue before sending.
- errors (int, count): Total number operations that complete with tk_status < 0 (usually errors). This is a new field, present in kernel >=5.3, mountstats version 1.1
[ref]: https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex
## Example Output
For basic metrics showing server-wise read and write data.
@ -166,9 +179,11 @@ nfsstat,mountpoint=/NFS,operation=WRITE,serverexport=1.2.3.4:/storage/NFS bytes=
```
For `fullstat=true` metrics, which includes additional measurements for `nfs_bytes`, `nfs_events`, and `nfs_xprt_tcp` (and `nfs_xprt_udp` if present).
Additionally, per-OP metrics are collected, with examples for READ, LOOKUP, and NULL shown.
Please refer to `/proc/self/mountstats` for a list of supported NFS operations, as it changes periodically.
For `fullstat=true` metrics, which includes additional measurements for
`nfs_bytes`, `nfs_events`, and `nfs_xprt_tcp` (and `nfs_xprt_udp` if present).
Additionally, per-OP metrics are collected, with examples for READ, LOOKUP, and
NULL shown. Please refer to `/proc/self/mountstats` for a list of supported NFS
operations, as it changes periodically.
```shell
nfs_bytes,mountpoint=/home,serverexport=nfs01:/vol/home directreadbytes=0i,directwritebytes=0i,normalreadbytes=42648757667i,normalwritebytes=0i,readpages=10404603i,serverreadbytes=42617098139i,serverwritebytes=0i,writepages=0i 1608787697000000000


@ -1,5 +1,11 @@
# Nginx Input Plugin
This plugin gathers basic status from the open source web server Nginx. Nginx
Plus is a commercial version. For more information about the differences between
Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation][diff-doc].
[diff-doc]: https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/
## Configuration
```toml @sample.conf


@ -1,9 +1,15 @@
# Nginx Plus Input Plugin
Nginx Plus is a commercial version of the open source web server Nginx. To use this plugin you will need a license. For more information about the differences between Nginx (F/OSS) and Nginx Plus, [click here](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
Nginx Plus is a commercial version of the open source web server Nginx. To use
this plugin you will need a license. For more information about the differences
between Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation][diff-doc].
Structures for Nginx Plus have been built based on history of
[status module documentation](http://nginx.org/en/docs/http/ngx_http_status_module.html)
Structures for Nginx Plus have been built based on the history of the [status
module documentation][status-mod].
[diff-doc]: https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/
[status-mod]: http://nginx.org/en/docs/http/ngx_http_status_module.html
## Configuration
@ -24,7 +30,7 @@ Structures for Nginx Plus have been built based on history of
# insecure_skip_verify = false
```
## Measurements & Fields
## Metrics
- nginx_plus_processes
- respawned
@ -69,7 +75,7 @@ Structures for Nginx Plus have been built based on history of
- fails
- downtime
## Tags
### Tags
- nginx_plus_processes, nginx_plus_connections, nginx_plus_ssl, nginx_plus_requests
- server


@ -1,6 +1,10 @@
# Nginx Plus API Input Plugin
Nginx Plus is a commercial version of the open source web server Nginx. To use this plugin you will need a license. For more information about the differences between Nginx (F/OSS) and Nginx Plus, [click here](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
Nginx Plus is a commercial version of the open source web server Nginx. To use
this plugin you will need a license. For more information about the differences
between Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation][diff-doc].
[diff-doc]: https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/
## Configuration


@ -1,12 +1,16 @@
# Nginx Upstream Check Input Plugin
Read the status output of the nginx_upstream_check (<https://github.com/yaoweibin/nginx_upstream_check_module>).
This module can periodically check the servers in the Nginx's upstream with configured request and interval to determine
if the server is still available. If checks fail, the server is marked as "down" and will not receive any requests
until the check passes and the server is marked as "up" again.
Read the status output of the [nginx_upstream_check module][1]. This module can
periodically check the servers in Nginx's upstream with a configured request and
interval to determine if the server is still available. If checks fail, the
server is marked as "down" and will not receive any requests until the check
passes and the server is marked as "up" again.
The status page displays the current status of all upstreams and servers as well as number of the failed and successful
checks. This information can be exported in JSON format and parsed by this input.
The status page displays the current status of all upstreams and servers as
well as the number of failed and successful checks. This information can be
exported in JSON format and parsed by this input.
[1]: https://github.com/yaoweibin/nginx_upstream_check_module
## Configuration
@ -41,7 +45,7 @@ checks. This information can be exported in JSON format and parsed by this input
# insecure_skip_verify = false
```
## Measurements & Fields
## Metrics
- Measurement
- fall (The number of failed server check attempts, counter)
@ -49,11 +53,13 @@ checks. This information can be exported in JSON format and parsed by this input
- status (The reporter server status as a string)
- status_code (The server status code. 1 - up, 2 - down, 0 - other)
The "status_code" field most likely will be the most useful one because it allows you to determine the current
state of every server and, possibly, add some monitoring to watch over it. InfluxDB can use string values and the
"status" field can be used instead, but for most other monitoring solutions the integer code will be appropriate.
The "status_code" field most likely will be the most useful one because it
allows you to determine the current state of every server and, possibly, add
some monitoring to watch over it. InfluxDB can use string values and the
"status" field can be used instead, but for most other monitoring solutions the
integer code will be appropriate.
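For instance, a query along these lines (assuming the measurement is stored as
`nginx_upstream_check`) could drive a simple up/down alert:

```sql
SELECT last(status_code) FROM nginx_upstream_check WHERE time > now() - 5m GROUP BY name
```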
## Tags
### Tags
- All measurements have the following tags:
- name (The hostname or IP of the upstream server)


@ -1,7 +1,11 @@
# Nginx Virtual Host Traffic (VTS) Input Plugin
This plugin gathers Nginx status using external virtual host traffic status module - <https://github.com/vozlt/nginx-module-vts>. This is an Nginx module that provides access to virtual host status information. It contains the current status such as servers, upstreams, caches. This is similar to the live activity monitoring of Nginx plus.
For module configuration details please see its [documentation](https://github.com/vozlt/nginx-module-vts#synopsis).
This plugin gathers Nginx status using the external virtual host traffic status
module <https://github.com/vozlt/nginx-module-vts>. This is an Nginx module that
provides access to virtual host status information. It contains the current
status such as servers, upstreams, caches. This is similar to the live activity
monitoring of Nginx Plus. For module configuration details please see its
[documentation](https://github.com/vozlt/nginx-module-vts#synopsis).
## Configuration
@ -22,7 +26,7 @@ For module configuration details please see its [documentation](https://github.c
# insecure_skip_verify = false
```
## Measurements & Fields
## Metrics
- nginx_vts_connections
- active
@ -80,7 +84,7 @@ For module configuration details please see its [documentation](https://github.c
- hit
- scarce
## Tags
### Tags
- nginx_vts_connections
- source


@ -1,6 +1,8 @@
# Hashicorp Nomad Input Plugin
The Nomad plugin must grab metrics from every Nomad agent of the cluster. Telegraf may be present on every node and connect to the agent locally. In this case the URL should be something like `http://127.0.0.1:4646`.
The Nomad plugin must grab metrics from every Nomad agent of the
cluster. Telegraf may be present on every node and connect to the agent
locally. In this case the URL should be something like `http://127.0.0.1:4646`.
> Tested on Nomad 1.1.6
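A minimal per-node configuration might therefore look like:

```toml
[[inputs.nomad]]
  ## URL of the local Nomad agent
  url = "http://127.0.0.1:4646"
```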
@ -23,7 +25,8 @@ The Nomad plugin must grab metrics from every Nomad agent of the cluster. Telegr
## Metrics
Both Nomad servers and agents collect various metrics. For more details, please have a look at the following Nomad documentation:
Both Nomad servers and agents collect various metrics. For more details, please
have a look at the following Nomad documentation:
- [https://www.nomadproject.io/docs/operations/metrics](https://www.nomadproject.io/docs/operations/metrics)
- [https://www.nomadproject.io/docs/operations/telemetry](https://www.nomadproject.io/docs/operations/telemetry)


@ -1,5 +1,10 @@
# NSQ Input Plugin
This plugin gathers metrics from [NSQ](https://nsq.io/).
See the [NSQD API docs](https://nsq.io/components/nsqd.html) for endpoints that
the plugin can read.
## Configuration
```toml @sample.conf


@ -1,11 +1,25 @@
# Nstat Input Plugin
Plugin collects network metrics from `/proc/net/netstat`, `/proc/net/snmp` and `/proc/net/snmp6` files
Plugin collects network metrics from `/proc/net/netstat`, `/proc/net/snmp` and
`/proc/net/snmp6` files
## Configuration
The plugin first tries to read file paths from the config values;
if they are empty, it reads from env variables.
```toml @sample.conf
# Collect kernel snmp counters and network interface statistics
[[inputs.nstat]]
## file paths for proc files. If empty default paths will be used:
## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
## These can also be overridden with env variables, see README.
proc_net_netstat = "/proc/net/netstat"
proc_net_snmp = "/proc/net/snmp"
proc_net_snmp6 = "/proc/net/snmp6"
## dump metrics with 0 values too
dump_zeros = true
```
The plugin first tries to read file paths from the config values; if they are
empty, it reads from environment variables.
* `PROC_NET_NETSTAT`
* `PROC_NET_SNMP`
@ -21,30 +35,17 @@ Then appends default file paths:
* `/net/snmp`
* `/net/snmp6`
So if nothing is given, no paths in config and in env vars, the plugin takes the default paths.
So if nothing is given, neither paths in the config nor env vars, the plugin
takes the default paths:
* `/proc/net/netstat`
* `/proc/net/snmp`
* `/proc/net/snmp6`
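For example, the paths could equally be supplied via the environment (the
`PROC_NET_SNMP6` variable name is assumed by analogy with the two listed above):

```sh
export PROC_NET_NETSTAT=/proc/net/netstat
export PROC_NET_SNMP=/proc/net/snmp
export PROC_NET_SNMP6=/proc/net/snmp6
```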
The sample config file
In case the `proc_net_snmp6` path doesn't exist (e.g. IPv6 is not enabled) no
error is raised.
```toml @sample.conf
# Collect kernel snmp counters and network interface statistics
[[inputs.nstat]]
## file paths for proc files. If empty default paths will be used:
## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
## These can also be overridden with env variables, see README.
proc_net_netstat = "/proc/net/netstat"
proc_net_snmp = "/proc/net/snmp"
proc_net_snmp6 = "/proc/net/snmp6"
## dump metrics with 0 values too
dump_zeros = true
```
In case that `proc_net_snmp6` path doesn't exist (e.g. IPv6 is not enabled) no error would be raised.
## Measurements & Fields
## Metrics
* nstat
* Icmp6InCsumErrors
@ -345,7 +346,7 @@ In case that `proc_net_snmp6` path doesn't exist (e.g. IPv6 is not enabled) no e
* UdpRcvbufErrors
* UdpSndbufErrors
## Tags
### Tags
* All measurements have the following tags
* host (host of the system)


@ -33,7 +33,7 @@ server (RMS of difference of multiple time samples, milliseconds);
dns_lookup = true
```
## Measurements & Fields
## Metrics
- ntpq
- delay (float, milliseconds)
@ -43,7 +43,7 @@ server (RMS of difference of multiple time samples, milliseconds);
- reach (int)
- when (int, seconds)
## Tags
### Tags
- All measurements have the following tags:
- refid

View File

@ -1,6 +1,8 @@
# Nvidia System Management Interface (SMI) Input Plugin
This plugin uses a query on the [`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface) binary to pull GPU stats including memory and GPU usage, temp and other.
This plugin uses a query on the
[`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface)
binary to pull GPU stats including memory and GPU usage, temperature and other
metrics.
## Configuration
@ -22,10 +24,12 @@ On Linux, `nvidia-smi` is generally located at `/usr/bin/nvidia-smi`
### Windows
On Windows, `nvidia-smi` is generally located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`
On Windows 10, you may also find this located here `C:\Windows\System32\nvidia-smi.exe`
On Windows, `nvidia-smi` is generally located at `C:\Program Files\NVIDIA
Corporation\NVSMI\nvidia-smi.exe`. On Windows 10, you may also find it at
`C:\Windows\System32\nvidia-smi.exe`.
You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe`
You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program
Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe`
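For example, a sketch of pointing the plugin at the Windows binary (assuming
the plugin's `bin_path` option):

```toml
[[inputs.nvidia_smi]]
  ## escaped Windows path; adjust to your installation
  bin_path = "C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe"
```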
## Metrics
@ -64,7 +68,8 @@ You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program
## Sample Query
The below query could be used to alert on the average temperature of the your GPUs over the last minute
The below query could be used to alert on the average temperature of your GPUs
over the last minute:
```sql
SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP BY time(1m), "index", "name", "host"
@ -98,7 +103,11 @@ nvidia_smi,compute_mode=Default,host=8218cf,index=2,name=GeForce\ GTX\ 1080,psta
## Limitations
Note that there seems to be an issue with getting current memory clock values when the memory is overclocked.
This may or may not apply to everyone but it's confirmed to be an issue on an EVGA 2080 Ti.
Note that there seems to be an issue with getting current memory clock values
when the memory is overclocked. This may or may not apply to everyone but it's
confirmed to be an issue on an EVGA 2080 Ti.
**NOTE:** For use with docker either generate your own custom docker image based on nvidia/cuda which also installs a telegraf package or use [volume mount binding](https://docs.docker.com/storage/bind-mounts/) to inject the required binary into the docker container.
**NOTE:** For use with Docker, either generate your own custom Docker image
based on nvidia/cuda which also installs a Telegraf package, or use [volume
mount binding](https://docs.docker.com/storage/bind-mounts/) to inject the
required binary into the container.

View File

@ -152,7 +152,7 @@ This example group configuration has two groups with two nodes each:
]
```
It produces metrics like these:
## Example Output
```text
group1_metric_name,group1_tag=val1,id=ns\=3;i\=1001,node1_tag=val2 name=0,Quality="OK (0x0)" 1606893246000000000

View File

@ -4,8 +4,6 @@ This plugin gathers metrics from OpenLDAP's cn=Monitor backend.
## Configuration
To use this plugin you must enable the [slapd monitoring](https://www.openldap.org/devel/admin/monitoringslapd.html) backend.
```toml @sample.conf
# OpenLDAP cn=Monitor plugin
[[inputs.openldap]]
@ -32,17 +30,25 @@ To use this plugin you must enable the [slapd monitoring](https://www.openldap.o
reverse_metric_names = true
```
## Measurements & Fields
To use this plugin you must enable the [slapd
monitoring](https://www.openldap.org/devel/admin/monitoringslapd.html) backend.
All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and **monitorOpCompleted** attributes are gathered based on this LDAP query:
## Metrics
All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and
**monitorOpCompleted** attributes are gathered based on this LDAP query:
```sh
(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))
```
Metric names are based on their entry DN with the cn=Monitor base removed. If `reverse_metric_names` is not set, metrics are based on their DN. If `reverse_metric_names` is set to `true`, the names are reversed. This is recommended as it allows the names to sort more naturally.
Metric names are based on their entry DN with the cn=Monitor base removed. If
`reverse_metric_names` is not set, metrics are based on their DN. If
`reverse_metric_names` is set to `true`, the names are reversed. This is
recommended as it allows the names to sort more naturally.
Metrics for the **monitorOp*** attributes have **_initiated** and **_completed** added to the base name as appropriate.
Metrics for the **monitorOp*** attributes have **_initiated** and **_completed**
added to the base name as appropriate.
An OpenLDAP 2.4 server will provide these metrics:
@ -85,7 +91,7 @@ An OpenLDAP 2.4 server will provide these metrics:
- waiters_read
- waiters_write
## Tags
### Tags
- server= # value from config
- port= # value from config

View File

@ -1,6 +1,7 @@
# OpenSMTPD Input Plugin
This plugin gathers stats from [OpenSMTPD - a FREE implementation of the server-side SMTP protocol](https://www.opensmtpd.org/)
This plugin gathers stats from [OpenSMTPD - a FREE implementation of the
server-side SMTP protocol](https://www.opensmtpd.org/)
## Configuration
@ -17,10 +18,10 @@ This plugin gathers stats from [OpenSMTPD - a FREE implementation of the server-
#timeout = "1s"
```
## Measurements & Fields
## Metrics
This is the full list of stats provided by smtpctl and potentially collected by telegram
depending of your smtpctl configuration.
This is the full list of stats provided by smtpctl and potentially collected by
telegraf, depending on your smtpctl configuration.
- smtpctl
bounce_envelope
@ -62,8 +63,10 @@ depending of your smtpctl configuration.
## Permissions
It's important to note that this plugin references smtpctl, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set facls, or use sudo.
It's important to note that this plugin references smtpctl, which may require
additional permissions to execute successfully. Depending on the user/group
permissions of the telegraf user executing this plugin, you may need to alter
the group membership, set facls, or use sudo.
**Group membership (Recommended)**:

View File

@ -1,4 +1,3 @@
# OpenStack Input Plugin
Collects the metrics from following services of OpenStack:
@ -22,13 +21,21 @@ At present this plugin requires the following APIs:
### Recommendations
Due to the large number of unique tags that this plugin generates, in order to keep the cardinality down it is **highly recommended** to use [modifiers](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#modifiers) like `tagexclude` to discard unwanted tags.
Due to the large number of unique tags that this plugin generates, in order to
keep the cardinality down it is **highly recommended** to use
[modifiers](../../../docs/CONFIGURATION.md#modifiers) like `tagexclude` to
discard unwanted tags.
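As a sketch, such filtering could look like this (the tag names are
illustrative; list the ones you actually want to drop):

```toml
[[inputs.openstack]]
  ## illustrative high-cardinality tags to discard
  tagexclude = ["tenant_id", "user_id"]
```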
For deployments with only a small number of VMs and hosts, a small polling interval (e.g. seconds-minutes) is acceptable. For larger deployments, polling a large number of systems will impact performance. Use the `interval` option to change how often the plugin is run:
For deployments with only a small number of VMs and hosts, a small polling
interval (e.g. seconds-minutes) is acceptable. For larger deployments, polling a
large number of systems will impact performance. Use the `interval` option to
change how often the plugin is run:
`interval`: How often a metric is gathered. Setting this value at the plugin level overrides the global agent interval setting.
`interval`: How often a metric is gathered. Setting this value at the plugin
level overrides the global agent interval setting.
Also, consider polling OpenStack services at different intervals depending on your requirements. This will help with load and cardinality as well.
Also, consider polling OpenStack services at different intervals depending on
your requirements. This will help with load and cardinality as well.
```toml
[[inputs.openstack]]
@ -104,7 +111,7 @@ Also, consider polling OpenStack services at different intervals depending on yo
# measure_openstack_requests = false
```
### Measurements, Tags & Fields
## Metrics
* openstack_aggregate
* name
@ -343,7 +350,7 @@ Also, consider polling OpenStack services at different intervals depending on yo
* total_attachments [integer]
* updated_at [string]
### Example Output
## Example Output
```text
> openstack_neutron_agent,agent_host=vim2,agent_type=DHCP\ agent,availability_zone=nova,binary=neutron-dhcp-agent,host=telegraf_host,topic=dhcp_agent admin_state_up=true,alive=true,created_at="2021-01-07T03:40:53Z",heartbeat_timestamp="2021-10-14T07:46:40Z",id="17e1e446-d7da-4656-9e32-67d3690a306f",resources_synced=false,started_at="2021-07-02T21:47:42Z" 1634197616000000000

View File

@ -1,6 +1,7 @@
# OpenTelemetry Input Plugin
This plugin receives traces, metrics and logs from [OpenTelemetry](https://opentelemetry.io) clients and agents via gRPC.
This plugin receives traces, metrics and logs from
[OpenTelemetry](https://opentelemetry.io) clients and agents via gRPC.
## Configuration
@ -33,22 +34,26 @@ This plugin receives traces, metrics and logs from [OpenTelemetry](https://opent
### Schema
The OpenTelemetry->InfluxDB conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md)
and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/otel2influx)
are hosted at <https://github.com/influxdata/influxdb-observability> .
The OpenTelemetry->InfluxDB conversion [schema][1] and [implementation][2] are
hosted at <https://github.com/influxdata/influxdb-observability>.
Spans are stored in measurement `spans`.
Logs are stored in measurement `logs`.
For metrics, two output schemata exist.
Metrics received with `metrics_schema=prometheus-v1` are assigned measurement from the OTel field `Metric.name`.
Metrics received with `metrics_schema=prometheus-v2` are stored in measurement `prometheus`.
For metrics, two output schemata exist. Metrics received with
`metrics_schema=prometheus-v1` are assigned measurement from the OTel field
`Metric.name`. Metrics received with `metrics_schema=prometheus-v2` are stored
in measurement `prometheus`.
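For instance, selecting the second schema could look like this sketch:

```toml
[[inputs.opentelemetry]]
  ## store all received metrics in the single `prometheus` measurement
  metrics_schema = "prometheus-v2"
```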
Also see the OpenTelemetry output plugin for Telegraf.
### Example Output
[1]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md
#### Tracing Spans
[2]: https://github.com/influxdata/influxdb-observability/tree/main/otel2influx
## Example Output
### Tracing Spans
```text
spans end_time_unix_nano="2021-02-19 20:50:25.6893952 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="d5270e78d85f570f",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="4c28227be6a010e1",status_code="STATUS_CODE_OK",trace_id="7d4854815225332c9834e6dbf85b9380" 1613767825689169000
@ -58,7 +63,9 @@ spans end_time_unix_nano="2021-02-19 20:50:25.6895667 +0000 UTC",instrumentation
spans end_time_unix_nano="2021-02-19 20:50:25.6896741 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="6a8e6a0edcc1c966",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="d68f7f3b41eb8075",status_code="STATUS_CODE_OK",trace_id="651dadde186b7834c52b13a28fc27bea" 1613767825689480300
```
### Metrics - `prometheus-v1`
## Metrics
### `prometheus-v1`
```shell
cpu_temp,foo=bar gauge=87.332
@ -68,7 +75,7 @@ http_request_duration_seconds 0.05=24054,0.1=33444,0.2=100392,0.5=129389,1=13398
rpc_duration_seconds 0.01=3102,0.05=3272,0.5=4773,0.9=9001,0.99=76656,sum=1.7560473e+07,count=2693
```
### Metrics - `prometheus-v2`
### `prometheus-v2`
```shell
prometheus,foo=bar cpu_temp=87.332

View File

@ -1,6 +1,7 @@
# Passenger Input Plugin
Gather [Phusion Passenger](https://www.phusionpassenger.com/) metrics using the `passenger-status` command line utility.
Gather [Phusion Passenger](https://www.phusionpassenger.com/) metrics using the
`passenger-status` command line utility.
## Series Cardinality Warning
@ -38,7 +39,8 @@ manage your series cardinality:
### Permissions
Telegraf must have permission to execute the `passenger-status` command. On most systems, Telegraf runs as the `telegraf` user.
Telegraf must have permission to execute the `passenger-status` command. On
most systems, Telegraf runs as the `telegraf` user.
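If elevated rights are needed, one option is a narrow sudoers rule; a
hypothetical sketch (the binary path varies by installation):

```sh
# hypothetical sudoers entry; adjust the path to your passenger-status binary
telegraf ALL=(ALL) NOPASSWD:/usr/bin/passenger-status
```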
## Metrics

View File

@ -1,8 +1,13 @@
# PF Input Plugin
The pf plugin gathers information from the FreeBSD/OpenBSD pf firewall. Currently it can retrieve information about the state table: the number of current entries in the table, and counters for the number of searches, inserts, and removals to the table.
The pf plugin gathers information from the FreeBSD/OpenBSD pf
firewall. Currently it can retrieve information about the state table: the
number of current entries in the table, and counters for the number of searches,
inserts, and removals to the table.
The pf plugin retrieves this information by invoking the `pfstat` command. The `pfstat` command requires read access to the device file `/dev/pf`. You have several options to permit telegraf to run `pfctl`:
The pf plugin retrieves this information by invoking the `pfctl` command. The
`pfctl` command requires read access to the device file `/dev/pf`. You have
several options to permit telegraf to run `pfctl`:
* Run telegraf as root. This is strongly discouraged.
* Change the ownership and permissions for /dev/pf such that the user telegraf runs as can read the /dev/pf device file. This is probably not that good of an idea either.
@ -29,7 +34,7 @@ telegraf ALL=(root) NOPASSWD: /sbin/pfctl -s info
use_sudo = false
```
## Measurements & Fields
## Metrics
* pf
* entries (integer, count)

View File

@ -27,16 +27,21 @@ More information about the meaning of these metrics can be found in the
Specify address via a postgresql connection string:
`host=/run/postgresql port=6432 user=telegraf database=pgbouncer`
```text
host=/run/postgresql port=6432 user=telegraf database=pgbouncer
```
Or via a URL matching:
`postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]`
```text
postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]
```
All connection parameters are optional.
Without the dbname parameter, the driver will default to a database with the same name as the user.
This dbname is just for instantiating a connection with the server and doesn't restrict the databases we are trying to grab metrics for.
Without the dbname parameter, the driver will default to a database with the
same name as the user. This dbname is just for instantiating a connection with
the server and doesn't restrict the databases we are trying to grab metrics for.
## Metrics

View File

@ -1,6 +1,7 @@
# Ping Input Plugin
Sends a ping message by executing the system ping command and reports the results.
Sends a ping message by executing the system ping command and reports the
results.
This plugin has two main methods of operation: `exec` and `native`. The
recommended method is `native`, which has greater system compatibility and
@ -110,8 +111,9 @@ systemctl restart telegraf
### Linux Permissions
When using `method = "native"`, Telegraf will attempt to use privileged raw
ICMP sockets. On most systems, doing so requires `CAP_NET_RAW` capabilities or for Telegraf to be run as root.
When using `method = "native"`, Telegraf will attempt to use privileged raw ICMP
sockets. On most systems, doing so requires `CAP_NET_RAW` capabilities or for
Telegraf to be run as root.
With systemd:
@ -142,7 +144,8 @@ setting capabilities.
### Other OS Permissions
When using `method = "native"`, you will need permissions similar to the executable ping program for your OS.
When using `method = "native"`, you will need permissions similar to the
executable ping program for your OS.
## Metrics
@ -166,7 +169,8 @@ When using `method = "native"`, you will need permissions similar to the executa
### reply_received vs packets_received
On Windows systems with `method = "exec"`, the "Destination net unreachable" reply will increment `packets_received` but not `reply_received`*.
On Windows systems with `method = "exec"`, the "Destination net unreachable"
reply will increment `packets_received` but not `reply_received`*.
### ttl

View File

@ -1,6 +1,9 @@
# PostgreSQL Input Plugin
This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built in _pg_stat_database_ and pg_stat_bgwriter views. The metrics recorded depend on your version of postgres. See table:
This postgresql plugin provides metrics for your postgres database. It
currently works with postgres versions 8.1+. It uses data from the built-in
_pg_stat_database_ and _pg_stat_bgwriter_ views. The metrics recorded depend on
your version of postgres. See table:
```sh
pg version 9.2+ 9.1 8.3-9.0 8.1-8.2 7.4-8.0(unsupported)
@ -28,7 +31,10 @@ stats_reset* x x
_* value ignored and therefore not recorded._
More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)
More information about the meaning of these metrics can be found in the
[PostgreSQL Documentation][1].
[1]: http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW
## Configuration
@ -74,21 +80,34 @@ More information about the meaning of these metrics can be found in the [Postgre
Specify address via a postgresql connection string:
`host=localhost port=5432 user=telegraf database=telegraf`
```text
host=localhost port=5432 user=telegraf database=telegraf
```
Or via a URL matching:
`postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]`
```text
postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]
```
All connection parameters are optional. Without the dbname parameter, the driver will default to a database with the same name as the user. This dbname is just for instantiating a connection with the server and doesn't restrict the databases we are trying to grab metrics for.
All connection parameters are optional. Without the dbname parameter, the driver
will default to a database with the same name as the user. This dbname is just
for instantiating a connection with the server and doesn't restrict the
databases we are trying to grab metrics for.
A list of databases to explicitly ignore. If not specified, metrics for all databases are gathered. Do NOT use with the 'databases' option.
A list of databases to explicitly ignore. If not specified, metrics for all
databases are gathered. Do NOT use with the 'databases' option.
`ignored_databases = ["postgres", "template0", "template1"]`
```text
ignored_databases = ["postgres", "template0", "template1"]
```
A list of databases to pull metrics about. If not specified, metrics for all databases are gathered. Do NOT use with the 'ignored_databases' option.
A list of databases to pull metrics about. If not specified, metrics for all
databases are gathered. Do NOT use with the 'ignored_databases' option.
`databases = ["app_production", "testing"]`
```text
databases = ["app_production", "testing"]
```
### TLS Configuration

View File

@ -71,7 +71,14 @@ The example below has two queries are specified, with the following parameters:
```
The system can be easily extended using homemade metrics collection tools or
using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html), [pg_proctab](https://github.com/markwkm/pg_proctab) or [powa](http://dalibo.github.io/powa/))
using postgresql extensions ([pg_stat_statements][1], [pg_proctab][2] or
[powa][3]).
[1]: http://www.postgresql.org/docs/current/static/pgstatstatements.html
[2]: https://github.com/markwkm/pg_proctab
[3]: http://dalibo.github.io/powa/
## Sample Queries

View File

@ -16,8 +16,9 @@ it requires access to execute `ps`.
# no configuration
```
Another possible configuration is to define an alternative path for resolving the /proc location.
Using the environment variable `HOST_PROC` the plugin will retrieve process information from the specified location.
Another possible configuration is to define an alternative path for resolving
the /proc location. Using the environment variable `HOST_PROC`, the plugin will
retrieve process information from the specified location.
`docker run -v /proc:/rootfs/proc:ro -e HOST_PROC=/rootfs/proc`

View File

@ -1,7 +1,7 @@
# Procstat Input Plugin
The procstat plugin can be used to monitor the system resource usage of one or more processes.
The procstat_lookup metric displays the query information,
The procstat plugin can be used to monitor the system resource usage of one or
more processes. The procstat_lookup metric displays the query information,
specifically the number of PIDs returned on a search.
Processes can be selected for monitoring using one of several methods:

View File

@ -98,7 +98,10 @@ in Prometheus format.
# insecure_skip_verify = false
```
`urls` can contain a unix socket as well. If a different path is required (default is `/metrics` for both http[s] and unix) for a unix socket, add `path` as a query parameter as follows: `unix:///var/run/prometheus.sock?path=/custom/metrics`
`urls` can contain a unix socket as well. If a different path is required
(default is `/metrics` for both http[s] and unix) for a unix socket, add `path`
as a query parameter as follows:
`unix:///var/run/prometheus.sock?path=/custom/metrics`
### Metric Format Configuration
@ -131,28 +134,37 @@ option in both to ensure metrics are round-tripped without modification.
### Kubernetes Service Discovery
URLs listed in the `kubernetes_services` parameter will be expanded
by looking up all A records assigned to the hostname as described in
[Kubernetes DNS service discovery](https://kubernetes.io/docs/concepts/services-networking/service/#dns).
URLs listed in the `kubernetes_services` parameter will be expanded by looking
up all A records assigned to the hostname as described in [Kubernetes DNS
service discovery][serv-disc].
This method can be used to locate all
[Kubernetes headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
This method can be used to locate all [Kubernetes headless services][headless].
[serv-disc]: https://kubernetes.io/docs/concepts/services-networking/service/#dns
[headless]: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
### Kubernetes Scraping
Enabling this option will allow the plugin to scrape for prometheus annotation on Kubernetes
pods. Currently, you can run this plugin in your kubernetes cluster, or we use the kubeconfig
file to determine where to monitor.
Currently the following annotation are supported:
Enabling this option will allow the plugin to scrape for Prometheus annotations
on Kubernetes pods. The plugin can run inside your Kubernetes cluster, or it can
use a kubeconfig file to determine which cluster to monitor. Currently the
following annotations are supported:
* `prometheus.io/scrape` Enable scraping for this pod.
* `prometheus.io/scheme` If the metrics endpoint is secured then you will need to set this to `https` & most likely set the tls config. (default 'http')
* `prometheus.io/path` Override the path for the metrics endpoint on the service. (default '/metrics')
* `prometheus.io/port` Used to override the port. (default 9102)
Using the `monitor_kubernetes_pods_namespace` option allows you to limit which pods you are scraping.
Using the `monitor_kubernetes_pods_namespace` option allows you to limit which
pods you are scraping.
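A sketch limiting discovery to a single namespace (the namespace name is
hypothetical):

```toml
[[inputs.prometheus]]
  monitor_kubernetes_pods = true
  ## hypothetical namespace; only pods in it will be scraped
  monitor_kubernetes_pods_namespace = "monitoring"
```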
Using `pod_scrape_scope = "node"` allows more scalable scraping for pods which will scrape pods only in the node that telegraf is running. It will fetch the pod list locally from the node's kubelet. This will require running Telegraf in every node of the cluster. Note that either `node_ip` must be specified in the config or the environment variable `NODE_IP` must be set to the host IP. ThisThe latter can be done in the yaml of the pod running telegraf:
Using `pod_scrape_scope = "node"` allows more scalable scraping for pods which
will scrape pods only in the node that telegraf is running. It will fetch the
pod list locally from the node's kubelet. This will require running Telegraf in
every node of the cluster. Note that either `node_ip` must be specified in the
config or the environment variable `NODE_IP` must be set to the host IP. ThisThe
latter can be done in the yaml of the pod running telegraf:
```sh
env:
@ -162,11 +174,15 @@ env:
fieldPath: status.hostIP
```
If using node level scrape scope, `pod_scrape_interval` specifies how often (in seconds) the pod list for scraping should updated. If not specified, the default is 60 seconds.
If using node level scrape scope, `pod_scrape_interval` specifies how often (in
seconds) the pod list for scraping should be updated. If not specified, the
default is 60 seconds.
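Putting these options together, a node-scoped sketch might look like this (the
values are illustrative):

```toml
[[inputs.prometheus]]
  monitor_kubernetes_pods = true
  pod_scrape_scope = "node"
  ## illustrative refresh interval in seconds; defaults to 60 when unset
  pod_scrape_interval = 30
  ## required for node scope unless the NODE_IP env variable is set
  node_ip = "10.0.0.5"
```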
The pod running telegraf will need to have the proper rbac configuration in order to be allowed to call the k8s api to discover and watch pods in the cluster.
A typical configuration will create a service account, a cluster role with the appropriate rules and a cluster role binding to tie the cluster role to the service account.
Example of configuration for cluster level discovery:
The pod running telegraf will need to have the proper rbac configuration in
order to be allowed to call the k8s api to discover and watch pods in the
cluster. A typical configuration will create a service account, a cluster role
with the appropriate rules and a cluster role binding to tie the cluster role to
the service account. Example of configuration for cluster level discovery:
```yaml
---
@ -206,10 +222,11 @@ metadata:
### Consul Service Discovery
Enabling this option and configuring consul `agent` url will allow the plugin to query
consul catalog for available services. Using `query_interval` the plugin will periodically
query the consul catalog for services with `name` and `tag` and refresh the list of scraped urls.
It can use the information from the catalog to build the scraped url and additional tags from a template.
Enabling this option and configuring consul `agent` url will allow the plugin to
query consul catalog for available services. Using `query_interval` the plugin
will periodically query the consul catalog for services with `name` and `tag`
and refresh the list of scraped urls. It can use the information from the
catalog to build the scraped url and additional tags from a template.
Multiple consul queries can be configured, each for a different service.
The following example fields can be used in url or tag templates:

View File

@ -1,6 +1,7 @@
# Proxmox Input Plugin
The proxmox plugin gathers metrics about containers and VMs using the Proxmox API.
The proxmox plugin gathers metrics about containers and VMs using the Proxmox
API.
Telegraf minimum version: Telegraf 1.16.0
@ -28,11 +29,11 @@ Telegraf minimum version: Telegraf 1.16.0
### Permissions
The plugin will need to have access to the Proxmox API. An API token
must be provided with the corresponding user being assigned at least the PVEAuditor
role on /.
The plugin will need to have access to the Proxmox API. An API token must be
provided with the corresponding user being assigned at least the PVEAuditor role
on /.
## Measurements & Fields
## Metrics
- proxmox
- status
@ -51,7 +52,7 @@ role on /.
- disk_free
- disk_used_percentage
## Tags
### Tags
- node_fqdn - FQDN of the node telegraf is running on
- vm_name - Name of the VM/container

View File

@ -1,10 +1,8 @@
# PuppetAgent Input Plugin
## Description
The puppetagent plugin collects variables outputted from the 'last_run_summary.yaml' file
usually located in `/var/lib/puppet/state/`
[PuppetAgent Runs](https://puppet.com/blog/puppet-monitoring-how-to-monitor-success-or-failure-of-puppet-runs/).
The puppetagent plugin collects variables output from the
'last_run_summary.yaml' file, usually located in `/var/lib/puppet/state/`. See
[PuppetAgent Runs][1] for background.
```sh
cat /var/lib/puppet/state/last_run_summary.yaml
@ -77,6 +75,8 @@ jcross@pit-devops-02 ~ >sudo ./telegraf_linux_amd64 --input-filter puppetagent -
> [] puppetagent_version_puppet value=3.7.5
```
[1]: https://puppet.com/blog/puppet-monitoring-how-to-monitor-success-or-failure-of-puppet-runs/
## Configuration
```toml @sample.conf

View File

@ -2,7 +2,8 @@
Reads metrics from RabbitMQ servers via the [Management Plugin][management].
For additional details reference the [RabbitMQ Management HTTP Stats][management-reference].
For additional details reference the [RabbitMQ Management HTTP
Stats][management-reference].
[management]: https://www.rabbitmq.com/management.html
[management-reference]: https://raw.githack.com/rabbitmq/rabbitmq-management/rabbitmq_v3_6_9/priv/www/api/index.html
@ -221,7 +222,9 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
## Sample Queries
Message rates for the entire node can be calculated from total message counts. For instance, to get the rate of messages published per minute, use this query:
Message rates for the entire node can be calculated from total message
counts. For instance, to get the rate of messages published per minute, use this
query:
```sql
SELECT NON_NEGATIVE_DERIVATIVE(LAST("messages_published"), 1m) AS messages_published_rate FROM rabbitmq_overview WHERE time > now() - 10m GROUP BY time(1m)

View File

@ -1,7 +1,8 @@
# Raindrops Input Plugin
The [raindrops](http://raindrops.bogomips.org/) plugin reads from
specified raindops [middleware](http://raindrops.bogomips.org/Raindrops/Middleware.html) URI and adds stats to InfluxDB.
The [raindrops](http://raindrops.bogomips.org/) plugin reads from the specified
Raindrops [middleware](http://raindrops.bogomips.org/Raindrops/Middleware.html)
URI and adds stats to InfluxDB.
## Configuration
@ -12,7 +13,7 @@ specified raindops [middleware](http://raindrops.bogomips.org/Raindrops/Middlewa
urls = ["http://localhost:8080/_raindrops"]
```
## Measurements & Fields
## Metrics
- raindrops
- calling (integer, count)
@ -21,7 +22,7 @@ specified raindops [middleware](http://raindrops.bogomips.org/Raindrops/Middlewa
- active (integer, bytes)
- queued (integer, bytes)
## Tags
### Tags
- Raindops calling/writing of all the workers:
- server

View File

@ -1,8 +1,10 @@
# RAS Daemon Input Plugin
This plugin is only available on Linux (only for `386`, `amd64`, `arm` and `arm64` architectures).
This plugin is only available on Linux (only for `386`, `amd64`, `arm` and
`arm64` architectures).
The `RAS` plugin gathers and counts errors provided by [RASDaemon](https://github.com/mchehab/rasdaemon).
The `RAS` plugin gathers and counts errors provided by
[RASDaemon](https://github.com/mchehab/rasdaemon).
## Configuration
@ -14,7 +16,8 @@ The `RAS` plugin gathers and counts errors provided by [RASDaemon](https://githu
# db_path = ""
```
In addition `RASDaemon` runs, by default, with `--enable-sqlite3` flag. In case of problems with SQLite3 database please verify this is still a default option.
In addition, `RASDaemon` runs by default with the `--enable-sqlite3` flag. In
case of problems with the SQLite3 database, please verify this is still the
default option.
## Metrics
@ -40,7 +43,8 @@ In addition `RASDaemon` runs, by default, with `--enable-sqlite3` flag. In case
- microcode_rom_parity_errors
- unclassified_mce_errors
Please note that `processor_base_errors` is aggregate counter measuring the following MCE events:
Please note that `processor_base_errors` is an aggregate counter measuring the
following MCE events:
- internal_timer_errors
- smm_handler_code_access_violation_errors
@ -52,7 +56,8 @@ Please note that `processor_base_errors` is aggregate counter measuring the foll
## Permissions
This plugin requires access to SQLite3 database from `RASDaemon`. Please make sure that user has required permissions to this database.
This plugin requires access to the SQLite3 database from `RASDaemon`. Please
make sure that the user has the required permissions to this database.
## Example Output

View File

@ -6,8 +6,6 @@ Requires RavenDB Server 5.2+.
## Configuration
The following is an example config for RavenDB. **Note:** The client certificate used should have `Operator` permissions on the cluster.
```toml @sample.conf
# Reads metrics from RavenDB servers via the Monitoring Endpoints
[[inputs.ravendb]]
@ -47,6 +45,9 @@ The following is an example config for RavenDB. **Note:** The client certificate
# collection_stats_dbs = []
```
**Note:** The client certificate used should have `Operator` permissions on the
cluster.
## Metrics
- ravendb_server
@ -205,7 +206,7 @@ The following is an example config for RavenDB. **Note:** The client certificate
- tombstones_size_in_bytes
- total_size_in_bytes
## Example output
## Example Output
```text
> ravendb_server,cluster_id=07aecc42-9194-4181-999c-1c42450692c9,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 backup_current_number_of_running_backups=0i,backup_max_number_of_concurrent_backups=4i,certificate_server_certificate_expiration_left_in_sec=-1,cluster_current_term=2i,cluster_index=10i,cluster_node_state=4i,config_server_urls="http://127.0.0.1:8080",cpu_assigned_processor_count=8i,cpu_machine_usage=19.09944089456869,cpu_process_usage=0.16977205323024872,cpu_processor_count=8i,cpu_thread_pool_available_completion_port_threads=1000i,cpu_thread_pool_available_worker_threads=32763i,databases_loaded_count=1i,databases_total_count=1i,disk_remaining_storage_space_percentage=18i,disk_system_store_total_data_file_size_in_mb=35184372088832i,disk_system_store_used_data_file_size_in_mb=31379031064576i,disk_total_free_space_in_mb=42931i,license_expiration_left_in_sec=24079222.8772186,license_max_cores=256i,license_type="Enterprise",license_utilized_cpu_cores=8i,memory_allocated_in_mb=205i,memory_installed_in_mb=16384i,memory_low_memory_severity=0i,memory_physical_in_mb=16250i,memory_total_dirty_in_mb=0i,memory_total_swap_size_in_mb=0i,memory_total_swap_usage_in_mb=0i,memory_working_set_swap_usage_in_mb=0i,network_concurrent_requests_count=1i,network_last_request_time_in_sec=0.0058717,network_requests_per_sec=0.09916543455308825,network_tcp_active_connections=128i,network_total_requests=10i,server_full_version="5.2.0-custom-52",server_process_id=31044i,server_version="5.2",uptime_in_sec=56i 1613027977000000000

View File

@ -1,6 +1,9 @@
# Redfish Input Plugin
The `redfish` plugin gathers metrics and status information about CPU temperature, fanspeed, Powersupply, voltage, hostname and Location details (datacenter, placement, rack and room) of hardware servers for which [DMTF's Redfish](https://redfish.dmtf.org/) is enabled.
The `redfish` plugin gathers metrics and status information about CPU
temperature, fan speed, power supply, voltage, hostname and location details
(datacenter, placement, rack and room) of hardware servers for which [DMTF's
Redfish](https://redfish.dmtf.org/) is enabled.
Telegraf minimum version: Telegraf 1.15.0

View File

@ -1,5 +1,7 @@
# Redis Input Plugin
The Redis input plugin gathers metrics from one or many Redis servers.
## Configuration
```toml @sample.conf
@ -37,12 +39,14 @@
# insecure_skip_verify = true
```
## Measurements & Fields
## Metrics
The plugin gathers the results of the [INFO](https://redis.io/commands/info) redis command.
There are two separate measurements: _redis_ and _redis\_keyspace_, the latter is used for gathering database related statistics.
The plugin gathers the results of the [INFO](https://redis.io/commands/info)
redis command. There are two separate measurements: _redis_ and
_redis\_keyspace_, the latter is used for gathering database related statistics.
Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate) and the elapsed time since the last rdb save (rdb\_last\_save\_time\_elapsed).
Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate)
and the elapsed time since the last rdb save (rdb\_last\_save\_time\_elapsed).
- redis
- keyspace_hitrate(float, number)
@ -148,7 +152,7 @@ Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate) a
- lag(int, number)
- offset(int, number)
## Tags
### Tags
- All measurements have the following tags:
- port

View File

@ -36,7 +36,8 @@ The plugin gathers the results of these commands and measurements:
* `sentinel replicas` - `redis_replicas`
* `info all` - `redis_sentinel`
The `has_quorum` field in `redis_sentinel_masters` is from calling the command `sentinels ckquorum`.
The `has_quorum` field in `redis_sentinel_masters` is from calling the command
`sentinel ckquorum`.
There are 5 remote network requests made for each server listed in the config.
@ -172,7 +173,8 @@ There are 5 remote network requests made for each server listed in the config.
## Example Output
An example of 2 Redis Sentinel instances monitoring a single master and replica. It produces:
An example of 2 Redis Sentinel instances monitoring a single master and
replica. It produces:
### redis_sentinel_masters

View File

@ -4,9 +4,6 @@ Collect metrics from [RethinkDB](https://www.rethinkdb.com/).
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage rethinkdb`.
```toml @sample.conf
# Read metrics from one or many RethinkDB servers
[[inputs.rethinkdb]]

View File

@ -11,7 +11,7 @@ The Riak plugin gathers metrics from one or more riak instances.
servers = ["http://localhost:8098"]
```
## Measurements & Fields
## Metrics
Riak provides one measurement named "riak", with the following fields:
@ -61,9 +61,10 @@ Riak provides one measurement named "riak", with the following fields:
- read_repairs
- read_repairs_total
Measurements of time (such as node_get_fsm_time_mean) are measured in nanoseconds.
Measurements of time (such as node_get_fsm_time_mean) are measured in
nanoseconds.
## Tags
### Tags
All measurements have the following tags:

View File

@ -5,8 +5,6 @@ client that use riemann clients using riemann-protobuff format.
## Configuration
This is a sample configuration for the plugin.
```toml @sample.conf
# Riemann protobuff listener
[[inputs.riemann_listener]]
@ -37,7 +35,9 @@ This is a sample configuration for the plugin.
# keep_alive_period = "5m"
```
Just like Riemann the default port is 5555. This can be configured, refer configuration above.
Just like Riemann, the default port is 5555. This can be configured; refer to
the configuration above.
Riemann `Service` is mapped as `measurement`. `metric` and `TTL` are converted into field values.
As Riemann tags as simply an array, they are converted into the `influx_line` format key-value, where both key and value are the tags.
Riemann `Service` is mapped as `measurement`. `metric` and `TTL` are converted
into field values. As Riemann tags are simply an array, they are converted into
the `influx_line` format key-value, where both key and value are the tags.

View File

@ -1,7 +1,10 @@
# Salesforce Input Plugin
The Salesforce plugin gathers metrics about the limits in your Salesforce organization and the remaining usage.
It fetches its data from the [limits endpoint](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_limits.htm) of Salesforce's REST API.
The Salesforce plugin gathers metrics about the limits in your Salesforce
organization and the remaining usage. It fetches its data from the [limits
endpoint][limits] of Salesforce's REST API.
[limits]: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_limits.htm
## Configuration
@ -26,7 +29,7 @@ It fetches its data from the [limits endpoint](https://developer.salesforce.com/
# version = "39.0"
```
## Measurements & Fields
## Metrics
Salesforce provides one measurement named "salesforce".
Each entry is converted to snake\_case and 2 fields are created.
@ -39,7 +42,7 @@ Each entry is converted to snake\_case and 2 fields are created.
- \<key\>_remaining (int)
- (...)
## Tags
### Tags
- All measurements have the following tags:
- host

View File

@ -1,9 +1,10 @@
# LM Sensors Input Plugin
Collect [lm-sensors](https://en.wikipedia.org/wiki/Lm_sensors) metrics - requires the lm-sensors
package installed.
Collect [lm-sensors](https://en.wikipedia.org/wiki/Lm_sensors) metrics -
requires the lm-sensors package installed.
This plugin collects sensor metrics with the `sensors` executable from the lm-sensor package.
This plugin collects sensor metrics with the `sensors` executable from the
lm-sensor package.
## Configuration
@ -18,11 +19,11 @@ This plugin collects sensor metrics with the `sensors` executable from the lm-se
# timeout = "5s"
```
## Measurements & Fields
## Metrics
Fields are created dynamically depending on the sensors. All fields are float.
## Tags
### Tags
- All measurements have the following tags:
- chip

View File

@ -3,13 +3,16 @@
This plugin collects details on how much memory each entry in Slab cache is
consuming. For example, it collects the consumption of `kmalloc-1024` and
`xfs_inode`. Since this information is obtained by parsing `/proc/slabinfo`
file, only Linux is supported. The specification of `/proc/slabinfo` has
not changed since [Linux v2.6.12 (April 2005)](https://github.com/torvalds/linux/blob/1da177e4/mm/slab.c#L2848-L2861),
so it can be regarded as sufficiently stable. The memory usage is
equivalent to the `CACHE_SIZE` column of `slabtop` command.
If the HOST_PROC environment variable is set, Telegraf will use its value instead of `/proc`
file, only Linux is supported. The specification of `/proc/slabinfo` has not
changed since [Linux v2.6.12 (April 2005)][slab-c], so it can be regarded as
sufficiently stable. The memory usage is equivalent to the `CACHE_SIZE` column
of the `slabtop` command. If the HOST_PROC environment variable is set, Telegraf
will use its value instead of `/proc`.
**Note: `/proc/slabinfo` is usually restricted to read as root user. Make sure telegraf can execute `sudo` without password.**
**Note: `/proc/slabinfo` is usually only readable by the root user. Make sure
telegraf can execute `sudo` without a password.**
[slab-c]: https://github.com/torvalds/linux/blob/1da177e4/mm/slab.c#L2848-L2861
## Configuration
@ -22,10 +25,13 @@ If the HOST_PROC environment variable is set, Telegraf will use its value instea
## Sudo configuration
Since the slabinfo file is only readable by root, the plugin runs `sudo /bin/cat` to read the file.
Since the slabinfo file is only readable by root, the plugin runs `sudo
/bin/cat` to read the file.
Sudo can be configured to allow telegraf to run just the command needed to read the slabinfo file. For example, if telegraf is running as the user 'telegraf' and HOST_PROC is not used, add this to the sudoers file:
`telegraf ALL = (root) NOPASSWD: /bin/cat /proc/slabinfo`
Sudo can be configured to allow telegraf to run just the command needed to read
the slabinfo file. For example, if telegraf is running as the user 'telegraf'
and HOST_PROC is not used, add this to the sudoers file: `telegraf ALL = (root)
NOPASSWD: /bin/cat /proc/slabinfo`
## Metrics

View File

@ -1,11 +1,19 @@
# S.M.A.R.T. Input Plugin
Get metrics using the command line utility `smartctl` for S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) storage devices. SMART is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs) that detects and reports on various indicators of drive reliability, with the intent of enabling the anticipation of hardware failures.
See smartmontools (<https://www.smartmontools.org/>).
Get metrics using the command line utility `smartctl` for
S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) storage
devices. SMART is a monitoring system included in computer hard disk drives
(HDDs) and solid-state drives (SSDs) that detects and reports on various
indicators of drive reliability, with the intent of enabling the anticipation of
hardware failures. See smartmontools (<https://www.smartmontools.org/>).
SMART information is separated between different measurements: `smart_device` is used for general information, while `smart_attribute` stores the detailed attribute information if `attributes = true` is enabled in the plugin configuration.
SMART information is separated between different measurements: `smart_device` is
used for general information, while `smart_attribute` stores the detailed
attribute information if `attributes = true` is enabled in the plugin
configuration.
If no devices are specified, the plugin will scan for SMART devices via the following command:
If no devices are specified, the plugin will scan for SMART devices via the
following command:
```sh
smartctl --scan
@ -17,9 +25,9 @@ Metrics will be reported from the following `smartctl` command:
smartctl --info --attributes --health -n <nocheck> --format=brief <device>
```
This plugin supports _smartmontools_ version 5.41 and above, but v. 5.41 and v. 5.42
might require setting `nocheck`, see the comment in the sample configuration.
Also, NVMe capabilities were introduced in version 6.5.
This plugin supports _smartmontools_ version 5.41 and above, but v. 5.41 and
v. 5.42 might require setting `nocheck`, see the comment in the sample
configuration. Also, NVMe capabilities were introduced in version 6.5.
To enable SMART on a storage device run:
@ -29,20 +37,23 @@ smartctl -s on <device>
## NVMe vendor specific attributes
For NVMe disk type, plugin can use command line utility `nvme-cli`. It has a feature
to easy access a vendor specific attributes.
This plugin supports nmve-cli version 1.5 and above (<https://github.com/linux-nvme/nvme-cli>).
In case of `nvme-cli` absence NVMe vendor specific metrics will not be obtained.
For NVMe disks, the plugin can use the command line utility `nvme-cli`, which
has a feature for easy access to vendor specific attributes. This plugin
supports nvme-cli version 1.5 and above (<https://github.com/linux-nvme/nvme-cli>).
If `nvme-cli` is absent, NVMe vendor specific metrics will not be obtained.
Vendor specific SMART metrics for NVMe disks may be reported from the following `nvme` command:
Vendor specific SMART metrics for NVMe disks may be reported from the following
`nvme` command:
```sh
nvme <vendor> smart-log-add <device>
```
Note that vendor plugins for `nvme-cli` could require different naming convention and report format.
Note that vendor plugins for `nvme-cli` could require a different naming
convention and report format.
To see installed plugin extensions, depended on the nvme-cli version, look at the bottom of:
To see installed plugin extensions, depending on the nvme-cli version, look at
the bottom of:
```sh
nvme help
@ -54,7 +65,8 @@ To gather disk vendor id (vid) `id-ctrl` could be used:
nvme id-ctrl <device>
```
Association between a vid and company can be found there: <https://pcisig.com/membership/member-companies>.
The association between a vid and a company can be found here:
<https://pcisig.com/membership/member-companies>.
Whether a device is NVMe or non-NVMe is determined using:
@ -124,8 +136,10 @@ smartctl --scan -d nvme
## Permissions
It's important to note that this plugin references smartctl and nvme-cli, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to use sudo.
It's important to note that this plugin references smartctl and nvme-cli, which
may require additional permissions to execute successfully. Depending on the
user/group permissions of the telegraf user executing this plugin, you may need
to use sudo.
You will need the following in your telegraf config:
@ -149,8 +163,9 @@ telegraf ALL=(ALL) NOPASSWD: NVME
Defaults!NVME !logfile, !syslog, !pam_session
```
To run smartctl or nvme with `sudo` wrapper script can be created. `path_smartctl` or
`path_nvme` in the configuration should be set to execute this script.
To run smartctl or nvme with `sudo`, a wrapper script can be created.
`path_smartctl` or `path_nvme` in the configuration should be set to execute
this script.
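A hypothetical wrapper for `smartctl` could look like this (the binary path is
an assumption; an analogous script works for `nvme`):

```sh
#!/bin/sh
# hypothetical wrapper; point path_smartctl at this file
exec sudo -n /usr/sbin/smartctl "$@"
```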
## Metrics
@ -202,9 +217,9 @@ The interpretation of the tag `flags` is:
### Exit Status
The `exit_status` field captures the exit status of the used cli utilities command which
is defined by a bitmask. For the interpretation of the bitmask see the man page for
smartctl or nvme-cli.
The `exit_status` field captures the exit status of the cli utility used, which
is defined by a bitmask. For the interpretation of the bitmask see the man page
for smartctl or nvme-cli.
## Device Names
@ -216,15 +231,17 @@ devices can be referenced by the WWN in the following location:
## Troubleshooting
If you expect to see more SMART metrics than this plugin shows, be sure to use a proper version
of smartctl or nvme-cli utility which has the functionality to gather desired data. Also, check
your device capability because not every SMART metrics are mandatory.
For example the number of temperature sensors depends on the device specification.
If you expect to see more SMART metrics than this plugin shows, be sure to use
a proper version of the smartctl or nvme-cli utility which has the functionality
to gather the desired data. Also, check your device capability because not all
SMART metrics are mandatory. For example, the number of temperature sensors
depends on the device specification.
If this plugin is not working as expected for your SMART enabled device,
please run these commands and include the output in a bug report:
For non NVMe devices (from smartctl version >= 7.0 this will also return NVMe devices by default):
For non NVMe devices (from smartctl version >= 7.0 this will also return NVMe
devices by default):
```sh
smartctl --scan
@ -250,9 +267,11 @@ and replace vendor and device to match your case:
nvme VENDOR smart-log-add DEVICE
```
If you have specified devices array in configuration file, and Telegraf only shows data from one device, you should
change the plugin configuration to sequentially gather disk attributes instead of collecting it in separate threads
(goroutines). To do this find in plugin configuration read_method and change it to sequential:
If you have specified a devices array in the configuration file and Telegraf
only shows data from one device, you should change the plugin configuration to
gather disk attributes sequentially instead of collecting them in separate
threads (goroutines). To do this, find `read_method` in the plugin configuration
and change it to sequential:
```toml
## Optionally call smartctl and nvme-cli with a specific concurrency policy.
@ -264,7 +283,7 @@ change the plugin configuration to sequentially gather disk attributes instead o
read_method = "sequential"
```
## Example SMART Plugin Outputs
## Example Output
```shell
smart_device,enabled=Enabled,host=mbpro.local,device=rdisk0,model=APPLE\ SSD\ SM0512F,serial_no=S1K5NYCD964433,wwn=5002538655584d30,capacity=500277790720 udma_crc_errors=0i,exit_status=0i,health_ok=true,read_error_rate=0i,temp_c=40i 1502536854000000000

View File

@ -220,9 +220,9 @@ One [metric][] is created for each row of the SNMP table.
#### Two Table Join
Snmp plugin can join two snmp tables that have different indexes. For this to work one table
should have translation field that return index of second table as value. Examples
of such fields are:
Snmp plugin can join two snmp tables that have different indexes. For this to
work, one table should have a translation field that returns the index of the
second table as its value. Examples of such fields are:
* Cisco portTable with translation field: `CISCO-STACK-MIB::portIfIndex`,
whose value is the IfIndex from ifTable
@ -231,11 +231,13 @@ which value is IfIndex from ifTable
* Cisco cpeExtPsePortTable with translation field: `CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex`,
whose value is the index from entPhysicalTable
Such field can be used to translate index to secondary table with `secondary_index_table = true`
and all fields from secondary table (with index pointed from translation field), should have added option
`secondary_index_use = true`. Telegraf cannot duplicate entries during join so translation
must be 1-to-1 (not 1-to-many). To add fields from secondary table with index that is not present
in translation table (outer join), there is a second option for translation index `secondary_outer_join = true`.
Such a field can be used to translate the index to the secondary table with
`secondary_index_table = true`, and all fields from the secondary table (with
the index pointed to by the translation field) should have the option
`secondary_index_use = true` added. Telegraf cannot duplicate entries during a
join, so the translation must be 1-to-1 (not 1-to-many). To add fields from the
secondary table with an index that is not present in the translation table
(outer join), there is a second option for the translation index:
`secondary_outer_join = true`.
##### Example configuration for table joins
@ -255,7 +257,8 @@ name = "EntPhyIndex"
oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex"
```
Partial result (removed agent_host and host columns from all following outputs in this section):
Partial result (removed agent_host and host columns from all following outputs
in this section):
```text
> ciscoPower,index=1.2 EntPhyIndex=1002i,PortPwrConsumption=6643i 1621460628000000000
@ -263,7 +266,8 @@ Partial result (removed agent_host and host columns from all following outputs i
> ciscoPower,index=1.5 EntPhyIndex=1005i,PortPwrConsumption=8358i 1621460628000000000
```
Note here that EntPhyIndex column carries index from ENTITY-MIB table, config for it:
Note here that the EntPhyIndex column carries the index from the ENTITY-MIB
table; the config for it:
```toml
[[inputs.snmp.table]]
@ -283,9 +287,9 @@ Partial result:
> entityTable,index=1005 EntPhysicalName="GigabitEthernet1/5" 1621460809000000000
```
Now, lets attempt to join these results into one table. EntPhyIndex matches index
from second table, and lets convert EntPhysicalName into tag, so second table will
only provide tags into result. Configuration:
Now, let's attempt to join these results into one table. EntPhyIndex matches
the index from the second table, and let's convert EntPhysicalName into a tag,
so the second table will only provide tags to the result. Configuration:
```toml
[[inputs.snmp.table]]

View File

@ -1,15 +1,11 @@
# SNMP Legacy Input Plugin
## Deprecated in version 1.0. Use [SNMP input plugin][]
**Deprecated in version 1.0. Use [SNMP input plugin][]**
The SNMP input plugin gathers metrics from SNMP agents.
## Configuration
In this example, the plugin will gather value of OIDS:
- `.1.3.6.1.2.1.2.2.1.4.1`
```toml @sample.conf
# DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
[[inputs.snmp_legacy]]
@ -104,6 +100,10 @@ In this example, the plugin will gather value of OIDS:
sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
```
In the previous example, the plugin will gather the value of these OIDs:
- `.1.3.6.1.2.1.2.2.1.4.1`
### Simple example
In this example, Telegraf gathers value of OIDS:
@ -580,7 +580,7 @@ Note: the plugin will add instance name as tag *instance*
- In **inputs.snmp.subtable** section, you can put a name from `snmptranslate_file`
as `oid` attribute instead of a valid OID
## Measurements & Fields
## Metrics
With the last example (Table with both mapping and subtable example):
@ -591,7 +591,7 @@ With the last example (Table with both mapping and subtable example):
- ifHCInOctets
- ifHCInOctets
## Tags
### Tags
With the last example (Table with both mapping and subtable example):

View File

@ -76,7 +76,7 @@ setcap cap_net_bind_service=+ep /usr/bin/telegraf
On Mac OS, listening on privileged ports is unrestricted on versions
10.14 and later.
### Metrics
## Metrics
- snmp_trap
- tags:
@ -93,7 +93,7 @@ On Mac OS, listening on privileged ports is unrestricted on versions
the trap variable names after MIB lookup. Field values are trap
variable values.
### Example Output
## Example Output
```shell
snmp_trap,mib=SNMPv2-MIB,name=coldStart,oid=.1.3.6.1.6.3.1.1.5.1,source=192.168.122.102,version=2c,community=public snmpTrapEnterprise.0="linux",sysUpTimeInstance=1i 1574109187723429814

View File

@ -3,13 +3,11 @@
The Socket Listener is a service input plugin that listens for messages from
streaming (tcp, unix) or datagram (udp, unixgram) protocols.
The plugin expects messages in the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
The plugin expects messages in the [Telegraf Input Data
Formats](../../../docs/DATA_FORMATS_INPUT.md).
## Configuration
This is a sample configuration for the plugin.
```toml @sample.conf
# Generic socket listener capable of handling multiple socket types.
[[inputs.socket_listener]]
@ -74,18 +72,18 @@ This is a sample configuration for the plugin.
## A Note on UDP OS Buffer Sizes
The `read_buffer_size` config option can be used to adjust the size of the socket
buffer, but this number is limited by OS settings. On Linux, `read_buffer_size`
will default to `rmem_default` and will be capped by `rmem_max`. On BSD systems,
`read_buffer_size` is capped by `maxsockbuf`, and there is no OS default
setting.
The `read_buffer_size` config option can be used to adjust the size of the
socket buffer, but this number is limited by OS settings. On Linux,
`read_buffer_size` will default to `rmem_default` and will be capped by
`rmem_max`. On BSD systems, `read_buffer_size` is capped by `maxsockbuf`, and
there is no OS default setting.
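For example, a minimal sketch of requesting a larger buffer (the listening
address is illustrative):

```toml
[[inputs.socket_listener]]
  service_address = "udp://:8094"
  ## ask for an 8MB socket buffer; on Linux the effective size is
  ## capped by net.core.rmem_max
  read_buffer_size = "8MiB"
```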
Instructions on how to adjust these OS settings are available below.
Some OSes (most notably, Linux) place very restrictive limits on the performance
of UDP protocols. It is _highly_ recommended that you increase these OS limits to
at least 8MB before trying to run large amounts of UDP traffic to your instance.
8MB is just a recommendation, and can be adjusted higher.
of UDP protocols. It is _highly_ recommended that you increase these OS limits
to at least 8MB before trying to run large amounts of UDP traffic to your
instance. 8MB is just a recommendation, and can be adjusted higher.
### Linux
@ -117,9 +115,8 @@ sysctl -w net.core.rmem_default=8388608
On BSD/Darwin systems you need to add about a 15% padding to the kernel limit
socket buffer. Meaning if you want an 8MB buffer (8388608 bytes) you need to set
the kernel limit to `8388608*1.15 = 9646900`. This is not documented anywhere but
happens
[in the kernel here.](https://github.com/freebsd/freebsd/blob/master/sys/kern/uipc_sockbuf.c#L63-L64)
the kernel limit to `8388608*1.15 = 9646900`. This is not documented anywhere
but can be seen [in the kernel source code][1].
Check the current UDP/IP buffer limit by typing the following command:
@ -140,3 +137,5 @@ To update the values immediately, type the following command as root:
```sh
sysctl -w kern.ipc.maxsockbuf=9646900
```
[1]: https://github.com/freebsd/freebsd/blob/master/sys/kern/uipc_sockbuf.c#L63-L64

View File

@ -1,10 +1,13 @@
# SocketStat plugin
# SocketStat Input Plugin
The socketstat plugin gathers indicators from established connections, using iproute2's `ss` command.
The socketstat plugin gathers indicators from established connections, using
iproute2's `ss` command.
The `ss` command does not require specific privileges.
**WARNING: The output format will produce series with very high cardinality.** You should either store those by an engine which doesn't suffer from it, use a short retention policy or do appropriate filtering.
**WARNING: The output format will produce series with very high cardinality.**
You should either store those by an engine which doesn't suffer from it, use a
short retention policy or do appropriate filtering.
## Configuration

View File

@ -1,12 +1,16 @@
# Solr Input Plugin
The [solr](http://lucene.apache.org/solr/) plugin collects stats via the
[MBean Request Handler](https://cwiki.apache.org/confluence/display/solr/MBean+Request+Handler)
The [solr](http://lucene.apache.org/solr/) plugin collects stats via the [MBean
Request Handler][1].
More about [performance statistics](https://cwiki.apache.org/confluence/display/solr/Performance+Statistics+Reference)
More about [performance statistics][2].
Tested from 3.5 to 7.*
[1]: https://cwiki.apache.org/confluence/display/solr/MBean+Request+Handler
[2]: https://cwiki.apache.org/confluence/display/solr/Performance+Statistics+Reference
## Configuration
```toml @sample.conf

View File

@ -1,15 +1,13 @@
# SQL Input Plugin
This plugin reads metrics from performing SQL queries against a SQL server. Different server
types are supported and their settings might differ (especially the connection parameters).
Please check the list of [supported SQL drivers](../../../docs/SQL_DRIVERS_INPUT.md) for the
`driver` name and options for the data-source-name (`dsn`) options.
This plugin reads metrics by performing SQL queries against a SQL
server. Different server types are supported and their settings might differ
(especially the connection parameters). Please check the list of [supported SQL
drivers](../../../docs/SQL_DRIVERS_INPUT.md) for the `driver` name and the
options for the data-source-name (`dsn`).
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.
```toml @sample.conf
# Read metrics from SQL queries
[[inputs.sql]]
@ -94,28 +92,34 @@ generate it using `telegraf --usage <plugin-name>`.
### Driver
The `driver` and `dsn` options specify how to connect to the database. As especially the `dsn` format and
values vary with the `driver` refer to the list of [supported SQL drivers](../../../docs/SQL_DRIVERS_INPUT.md) for possible values and more details.
The `driver` and `dsn` options specify how to connect to the database. As the
`dsn` format and values vary with the `driver`, refer to the list of
[supported SQL drivers](../../../docs/SQL_DRIVERS_INPUT.md) for possible
values and more details.
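As a hedged sketch for a MySQL/MariaDB server (host, credentials and database
name are placeholders):

```toml
[[inputs.sql]]
  ## driver-specific DSN; see the supported-drivers list for your server
  driver = "mysql"
  dsn = "username:password@tcp(localhost:3306)/dbname"
```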
### Connection limits
With these options you can limit the number of connections kept open by this plugin. Details about the exact
workings can be found in the [golang sql documentation](https://golang.org/pkg/database/sql/#DB.SetConnMaxIdleTime).
With these options you can limit the number of connections kept open by this
plugin. Details about the exact workings can be found in the [golang sql
documentation](https://golang.org/pkg/database/sql/#DB.SetConnMaxIdleTime).
### Query sections
Multiple `query` sections can be specified for this plugin. Each specified query will first be prepared on the server
and then executed in every interval using the column mappings specified. Please note that `tag` and `field` columns
are not exclusive, i.e. a column can be added to both. When using both `include` and `exclude` lists, the `exclude`
list takes precedence over the `include` list. I.e. given you specify `foo` in both lists, `foo` will _never_ pass
the filter. In case any the columns specified in `measurement_col` or `time_col` are _not_ returned by the query,
the plugin falls-back to the documented defaults. Fields or tags specified in the includes of the options but missing
in the returned query are silently ignored.
Multiple `query` sections can be specified for this plugin. Each specified
query will first be prepared on the server and then executed in every interval
using the column mappings specified. Please note that `tag` and `field` columns
are not exclusive, i.e. a column can be added to both. When using both
`include` and `exclude` lists, the `exclude` list takes precedence over the
`include` list, i.e. if you specify `foo` in both lists, `foo` will _never_
pass the filter. In case any of the columns specified in `measurement_col` or
`time_col` are _not_ returned by the query, the plugin falls back to the
documented defaults. Fields or tags specified in the includes of the options
but missing in the returned query are silently ignored.
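A minimal sketch of the precedence rule (table and column names are
hypothetical):

```toml
  [[inputs.sql.query]]
    query = "SELECT region, name, value FROM metrics"
    ## 'name' appears in both lists; exclude takes precedence,
    ## so it never passes the filter
    tag_cols_include = ["region", "name"]
    tag_cols_exclude = ["name"]
```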
## Types
This plugin relies on the driver to do the type conversion. For the different properties of the metric the following
types are accepted.
This plugin relies on the driver to do the type conversion. For the different
properties of the metric the following types are accepted.
### Measurement
@ -123,26 +127,31 @@ Only columns of type `string` are accepted.
### Time
For the metric time columns of type `time` are accepted directly. For numeric columns, `time_format` should be set
to any of `unix`, `unix_ms`, `unix_ns` or `unix_us` accordingly. By default the a timestamp in `unix` format is
expected. For string columns, please specify the `time_format` accordingly.
See the [golang time documentation](https://golang.org/pkg/time/#Time.Format) for details.
For the metric time, columns of type `time` are accepted directly. For numeric
columns, `time_format` should be set to any of `unix`, `unix_ms`, `unix_ns` or
`unix_us` accordingly. By default, a timestamp in `unix` format is
expected. For string columns, please specify the `time_format` accordingly. See
the [golang time documentation](https://golang.org/pkg/time/#Time.Format) for
details.
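For example, a sketch for a numeric column holding milliseconds since the Unix
epoch (table and column names are hypothetical):

```toml
  [[inputs.sql.query]]
    query = "SELECT ts, load FROM system_load"
    time_col = "ts"
    ## 'ts' holds milliseconds, not seconds
    time_format = "unix_ms"
```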
### Tags
For tags columns with textual values (`string` and `bytes`), signed and unsigned integers (8, 16, 32 and 64 bit),
floating-point (32 and 64 bit), `boolean` and `time` values are accepted. Those values will be converted to string.
For tags, columns with textual values (`string` and `bytes`), signed and
unsigned integers (8, 16, 32 and 64 bit), floating-point (32 and 64 bit),
`boolean` and `time` values are accepted. Those values will be converted to
string.
### Fields
For fields columns with textual values (`string` and `bytes`), signed and unsigned integers (8, 16, 32 and 64 bit),
floating-point (32 and 64 bit), `boolean` and `time` values are accepted. Here `bytes` will be converted to `string`,
signed and unsigned integer values will be converted to `int64` or `uint64` respectively. Floating-point values are converted to `float64` and `time` is converted to a nanosecond timestamp of type `int64`.
For fields, columns with textual values (`string` and `bytes`), signed and
unsigned integers (8, 16, 32 and 64 bit), floating-point (32 and 64 bit),
`boolean` and `time` values are accepted. Here `bytes` will be converted to
`string`, signed and unsigned integer values will be converted to `int64` or
`uint64` respectively. Floating-point values are converted to `float64` and
`time` is converted to a nanosecond timestamp of type `int64`.
## Example Output
Using the [MariaDB sample database](https://www.mariadbtutorial.com/getting-started/mariadb-sample-database) and the
configuration
Using the [MariaDB sample database][maria-sample] and the configuration
```toml
[[inputs.sql]]
@ -165,3 +174,5 @@ nation,host=Hugin,name=Jean guest_id=3i 1611332164000000000
nation,host=Hugin,name=Storm guest_id=4i 1611332164000000000
nation,host=Hugin,name=Beast guest_id=5i 1611332164000000000
```
[maria-sample]: https://www.mariadbtutorial.com/getting-started/mariadb-sample-database

View File

@ -1,5 +1,7 @@
# StatsD Input Plugin
The StatsD input plugin gathers metrics from a Statsd server.
## Configuration
```toml @sample.conf
@ -91,7 +93,8 @@ The statsd plugin is a special type of plugin which runs a backgrounded statsd
listener service while telegraf is running.
The format of the statsd messages was based on the format described in the
original [etsy statsd](https://github.com/etsy/statsd/blob/master/docs/metric_types.md)
original [etsy
statsd](https://github.com/etsy/statsd/blob/master/docs/metric_types.md)
implementation. In short, the telegraf statsd listener will accept:
- Gauges
@ -144,9 +147,11 @@ users.current,service=payroll,region=us-west:32|g
```
current.users,service=payroll,server=host01:west=10,east=10,central=2,south=10|g
``` -->
```
## Measurements
-->
## Metrics
Meta:
@ -222,8 +227,8 @@ measurements and tags.
The plugin supports specifying templates for transforming statsd buckets into
InfluxDB measurement names and tags. The templates have a _measurement_ keyword,
which can be used to specify parts of the bucket that are to be used in the
measurement name. Other words in the template are used as tag names. For example,
the following template:
measurement name. Other words in the template are used as tag names. For
example, the following template:
```toml
templates = [

View File

@ -100,7 +100,8 @@ All fields for Suricata stats are numeric.
- tcp_synack
- ...
Some fields of the Suricata alerts are strings, for example the signatures. See <https://suricata.readthedocs.io/en/suricata-6.0.0/output/eve/eve-json-format.html?highlight=priority#event-type-alert> for more information.
Some fields of the Suricata alerts are strings, for example the signatures. See
the Suricata [event docs][1] for more information.
- suricata_alert
- fields:
@ -114,6 +115,8 @@ Some fields of the Suricata alerts are strings, for example the signatures. See
- target_port
- ...
[1]: https://suricata.readthedocs.io/en/suricata-6.0.0/output/eve/eve-json-format.html?highlight=priority#event-type-alert
### Suricata configuration
Suricata needs to deliver the 'stats' event type to a given unix socket for
@ -132,9 +135,10 @@ output in the Suricata configuration file:
### FreeBSD tuning
Under FreeBSD it is necessary to increase the localhost buffer space to at least 16384, default is 8192
otherwise messages from Suricata are truncated as they exceed the default available buffer space,
consequently no statistics are processed by the plugin.
Under FreeBSD it is necessary to increase the localhost buffer space to at
least 16384 (the default is 8192); otherwise messages from Suricata are
truncated as they exceed the default available buffer space, and consequently
no statistics are processed by the plugin.
```text
sysctl -w net.local.stream.recvspace=16384

View File

@ -2,7 +2,8 @@
The swap plugin collects system swap metrics.
For more information on what swap memory is, read [All about Linux swap space](https://www.linux.com/news/all-about-linux-swap-space).
For more information on what swap memory is, read [All about Linux swap
space](https://www.linux.com/news/all-about-linux-swap-space).
## Configuration

View File

@ -1,18 +1,19 @@
# Synproxy Input Plugin
The synproxy plugin gathers the synproxy counters. Synproxy is a Linux netfilter module used for SYN attack mitigation.
The use of synproxy is documented in `man iptables-extensions` under the SYNPROXY section.
The synproxy plugin gathers the synproxy counters. Synproxy is a Linux netfilter
module used for SYN attack mitigation. The use of synproxy is documented in
`man iptables-extensions` under the SYNPROXY section.
## Configuration
The synproxy plugin does not need any configuration
```toml @sample.conf
# Get synproxy counter statistics from procfs
[[inputs.synproxy]]
# no configuration
```
The synproxy plugin does not need any configuration
## Metrics
The following synproxy counters are gathered
@ -28,7 +29,8 @@ The following synproxy counters are gathered
## Sample Queries
Get the number of packets per 5 minutes for the measurement in the last hour from InfluxDB:
Get the number of packets per 5 minutes for the measurement in the last hour
from InfluxDB:
```sql
SELECT difference(last("cookie_invalid")) AS "cookie_invalid", difference(last("cookie_retrans")) AS "cookie_retrans", difference(last("cookie_valid")) AS "cookie_valid", difference(last("entries")) AS "entries", difference(last("syn_received")) AS "syn_received", difference(last("conn_reopened")) AS "conn_reopened" FROM synproxy WHERE time > NOW() - 1h GROUP BY time(5m) FILL(null);

View File

@ -1,10 +1,10 @@
# Syslog Input Plugin
The syslog plugin listens for syslog messages transmitted over
a Unix Domain socket,
[UDP](https://tools.ietf.org/html/rfc5426),
The syslog plugin listens for syslog messages transmitted over a Unix Domain
socket, [UDP](https://tools.ietf.org/html/rfc5426),
[TCP](https://tools.ietf.org/html/rfc6587), or
[TLS](https://tools.ietf.org/html/rfc5425); with or without the octet counting framing.
[TLS](https://tools.ietf.org/html/rfc5425); with or without the octet counting
framing.
Syslog messages should be formatted according to
[RFC 5424](https://tools.ietf.org/html/rfc5424).
@ -70,10 +70,17 @@ Syslog messages should be formatted according to
### Message transport
The `framing` option only applies to streams. It governs the way we expect to receive messages within the stream.
Namely, with the [`"octet counting"`](https://tools.ietf.org/html/rfc5425#section-4.3) technique (default) or with the [`"non-transparent"`](https://tools.ietf.org/html/rfc6587#section-3.4.2) framing.
The `framing` option only applies to streams. It governs the way we expect to
receive messages within the stream. Namely, with the [`"octet counting"`][1]
technique (default) or with the [`"non-transparent"`][2] framing.
The `trailer` option only applies when `framing` option is `"non-transparent"`. It must have one of the following values: `"LF"` (default), or `"NUL"`.
The `trailer` option only applies when `framing` option is
`"non-transparent"`. It must have one of the following values: `"LF"` (default),
or `"NUL"`.
[1]: https://tools.ietf.org/html/rfc5425#section-4.3
[2]: https://tools.ietf.org/html/rfc6587#section-3.4.2
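A minimal sketch combining both options (the listening address is
illustrative):

```toml
[[inputs.syslog]]
  server = "tcp://:6514"
  ## framing applies to stream transports only
  framing = "non-transparent"
  ## trailer is only used with non-transparent framing
  trailer = "LF"
```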
### Best effort
@ -84,7 +91,7 @@ messages. If unset only full messages will be collected.
### Rsyslog Integration
Rsyslog can be configured to forward logging messages to Telegraf by configuring
[remote logging](https://www.rsyslog.com/doc/v8-stable/configuration/actions.html#remote-machine).
[remote logging][3].
Most systems are set up with a configuration split between `/etc/rsyslog.conf`
and the files in the `/etc/rsyslog.d/` directory; it is recommended to add the
@ -117,7 +124,11 @@ action(type="omfwd" Protocol="tcp" TCP_Framing="octet-counted" Target="127.0.0.1
#action(type="omfwd" Protocol="udp" Target="127.0.0.1" Port="6514" Template="RSYSLOG_SyslogProtocol23Format")
```
To complete TLS setup please refer to [rsyslog docs](https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html).
To complete TLS setup please refer to [rsyslog docs][4].
[3]: https://www.rsyslog.com/doc/v8-stable/configuration/actions.html#remote-machine
[4]: https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html
## Metrics
@ -140,7 +151,8 @@ To complete TLS setup please refer to [rsyslog docs](https://www.rsyslog.com/doc
### Structured Data
Structured data produces field keys by combining the `SD_ID` with the `PARAM_NAME` combined using the `sdparam_separator` as in the following example:
Structured data produces field keys by combining the `SD_ID` with the
`PARAM_NAME` using the `sdparam_separator`, as in the following example:
```shell
170 <165>1 2018-10-01:14:15.000Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] An application event log entry...
@ -164,7 +176,8 @@ echo "<13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc -u 127.0.0.
### RFC3164
RFC3164 encoded messages are supported for UDP only, but not all vendors output valid RFC3164 messages by default
RFC3164 encoded messages are supported for UDP only, but not all vendors output
valid RFC3164 messages by default
- E.g. Cisco IOS

View File

@ -1,10 +1,10 @@
# sysstat Input Plugin
Collect [sysstat](https://github.com/sysstat/sysstat) metrics - requires the sysstat
package installed.
Collect [sysstat](https://github.com/sysstat/sysstat) metrics - requires the
sysstat package installed.
This plugin collects system metrics with the sysstat collector utility `sadc` and parses
the created binary data file with the `sadf` utility.
This plugin collects system metrics with the sysstat collector utility `sadc`
and parses the created binary data file with the `sadf` utility.
## Configuration
@ -63,7 +63,7 @@ the created binary data file with the `sadf` utility.
# vg = "rootvg"
```
## Measurements & Fields
## Metrics
### If group=true
@ -117,7 +117,7 @@ And much more, depending on the options you configure.
And much more, depending on the options you configure.
## Tags
### Tags
- All measurements have the following tags:
- device

View File

@ -15,8 +15,9 @@ Number of CPUs is obtained from the /proc/cpuinfo file.
### Permissions
The `n_users` field requires read access to `/var/run/utmp`, and may require
the `telegraf` user to be added to the `utmp` group on some systems. If this file does not exist `n_users` will be skipped.
The `n_users` field requires read access to `/var/run/utmp`, and may require the
`telegraf` user to be added to the `utmp` group on some systems. If this file
does not exist `n_users` will be skipped.
## Metrics

View File

@ -1,13 +1,14 @@
# systemd Units Input Plugin
The systemd_units plugin gathers systemd unit status on Linux. It relies on
`systemctl list-units [PATTERN] --all --plain --type=service` to collect data on service status.
`systemctl list-units [PATTERN] --all --plain --type=service` to collect data on
service status.
The results are tagged with the unit name and provide enumerated fields for
the load, active and sub states, indicating the unit health.
This plugin is related to the [win_services module](/plugins/inputs/win_services/), which
fulfills the same purpose on windows.
This plugin is related to the [win_services module](../win_services/README.md),
which fulfills the same purpose on windows.
In addition to services, this plugin can gather other unit types as well;
see `systemctl list-units --all --type help` for possible options.
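For example, a hedged sketch collecting timer units instead of the default
services:

```toml
[[inputs.systemd_units]]
  ## collect timer units instead of the default "service"
  unittype = "timer"
```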
@ -48,7 +49,7 @@ see `systemctl list-units --all --type help` for possible options.
### Load
enumeration of [unit_load_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L87)
enumeration of [unit_load_state_table][1]
| Value | Meaning | Description |
| ----- | ------- | ----------- |
@ -60,9 +61,11 @@ enumeration of [unit_load_state_table](https://github.com/systemd/systemd/blob/c
| 5 | merged | unit is ~ |
| 6 | masked | unit is ~ |
[1]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L87
### Active
enumeration of [unit_active_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L99)
enumeration of [unit_active_state_table][2]
| Value | Meaning | Description |
| ----- | ------- | ----------- |
@ -73,11 +76,12 @@ enumeration of [unit_active_state_table](https://github.com/systemd/systemd/blob
| 4 | activating | unit is ~ |
| 5 | deactivating | unit is ~ |
[2]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L99
### Sub
enumeration of sub states, see various [unittype_state_tables](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L163);
duplicates were removed, tables are hex aligned to keep some space for future
values
enumeration of sub states, see various [unittype_state_tables][3]; duplicates
were removed, tables are hex aligned to keep some space for future values
| Value | Meaning | Description |
| ----- | ------- | ----------- |
@ -135,6 +139,8 @@ values
| 0x00a0 | elapsed | unit is ~ |
| | | |
[3]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L163
## Example Output
```shell

View File

@ -16,8 +16,8 @@ the `from_beginning` option is set).
See <http://man7.org/linux/man-pages/man1/tail.1.html> for more details.
The plugin expects messages in one of the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
The plugin expects messages in one of the [Telegraf Input Data
Formats](../../../docs/DATA_FORMATS_INPUT.md).
## Configuration

View File

@ -1,7 +1,7 @@
# TCP Listener Input Plugin
> DEPRECATED: As of version 1.3 the TCP listener plugin has been deprecated in favor of the
> [socket_listener plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)
**DEPRECATED: As of version 1.3 the TCP listener plugin has been deprecated in
favor of the [socket_listener plugin](../socket_listener/README.md)**
## Configuration

View File

@ -1,9 +1,13 @@
# Teamspeak 3 Input Plugin
This plugin uses the Teamspeak 3 ServerQuery interface of the Teamspeak server to collect statistics of one or more
virtual servers. If you are querying an external Teamspeak server, make sure to add the host which is running Telegraf
to query_ip_allowlist.txt in the Teamspeak Server directory. For information about how to configure the server take a look
the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/TeamSpeak%203%20Server%20Query%20Manual.pdf)
This plugin uses the Teamspeak 3 ServerQuery interface of the Teamspeak server
to collect statistics of one or more virtual servers. If you are querying an
external Teamspeak server, make sure to add the host which is running Telegraf
to query_ip_allowlist.txt in the Teamspeak Server directory. For information
about how to configure the server, take a look at the [Teamspeak 3 ServerQuery
Manual][1].
[1]: http://media.teamspeak.com/ts3_literature/TeamSpeak%203%20Server%20Query%20Manual.pdf
## Configuration
@ -22,7 +26,7 @@ the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/T
# virtual_servers = [1]
```
## Measurements
## Metrics
- teamspeak
- uptime
@ -35,13 +39,13 @@ the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/T
- bytes_received_total
- query_clients_online
## Tags
### Tags
- The following tags are used:
- virtual_server
- name
## Example output
## Example Output
```shell
teamspeak,virtual_server=1,name=LeopoldsServer,host=vm01 bytes_received_total=29638202639i,uptime=13567846i,total_ping=26.89,total_packet_loss=0,packets_sent_total=415821252i,packets_received_total=237069900i,bytes_sent_total=55309568252i,clients_online=11i,query_clients_online=1i 1507406561000000000

View File

@ -1,8 +1,12 @@
# Tomcat Input Plugin
The Tomcat plugin collects statistics available from the tomcat manager status page from the `http://<host>/manager/status/all?XML=true URL.` (`XML=true` will return only xml data).
The Tomcat plugin collects statistics available from the tomcat manager status
page at the `http://<host>/manager/status/all?XML=true` URL (`XML=true` will
return only XML data).
See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-howto.html#Server_Status) for details of these statistics.
See the [Tomcat documentation][1] for details of these statistics.
[1]: https://tomcat.apache.org/tomcat-9.0-doc/manager-howto.html#Server_Status
## Configuration
@ -27,7 +31,7 @@ See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-
# insecure_skip_verify = false
```
## Measurements & Fields
## Metrics
- tomcat_jvm_memory
- free
@ -54,7 +58,7 @@ See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-
- bytes_received
- bytes_sent
## Tags
### Tags
- tomcat_jvm_memorypool has the following tags:
- name

View File

@ -1,6 +1,7 @@
# Twemproxy Input Plugin
The `twemproxy` plugin gathers statistics from [Twemproxy](https://github.com/twitter/twemproxy) servers.
The `twemproxy` plugin gathers statistics from
[Twemproxy](https://github.com/twitter/twemproxy) servers.
## Configuration

View File

@ -1,7 +1,7 @@
# UDP Listener Input Plugin
> DEPRECATED: As of version 1.3 the UDP listener plugin has been deprecated in favor of the
> [socket_listener plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)
**DEPRECATED: As of version 1.3 the UDP listener plugin has been deprecated in
favor of the [socket_listener plugin](../socket_listener/README.md)**
## Configuration

View File

@ -34,8 +34,10 @@ a validating, recursive, and caching DNS resolver.
### Permissions
It's important to note that this plugin references unbound-control, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set facls, or use sudo.
It's important to note that this plugin references unbound-control, which may
require additional permissions to execute successfully. Depending on the
user/group permissions of the telegraf user executing this plugin, you may need
to alter the group membership, set facls, or use sudo.
**Group membership (Recommended)**:
@ -71,10 +73,11 @@ Please use the solution you see as most appropriate.
## Metrics
This is the full list of stats provided by unbound-control and potentially collected
depending of your unbound configuration. Histogram related statistics will never be collected,
extended statistics can also be imported ("extended-statistics: yes" in unbound configuration).
In the output, the dots in the unbound-control stat name are replaced by underscores(see
This is the full list of stats provided by unbound-control and potentially
collected depending on your unbound configuration. Histogram-related
statistics will never be collected; extended statistics can also be imported
("extended-statistics: yes" in the unbound configuration). In the output, the
dots in the unbound-control stat name are replaced by underscores (see
<https://www.unbound.net/documentation/unbound-control.html> for details).
Shown metrics are with `thread_as_tag` enabled.

View File

@ -1,6 +1,7 @@
# uWSGI Input Plugin
The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.html).
The uWSGI input plugin gathers metrics about uWSGI using its [Stats
Server](https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.html).
## Configuration

View File

@ -45,10 +45,13 @@ This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/)
# timeout = "1s"
```
### Measurements & Fields (metric_version=1)
## Metrics
This is the full list of stats provided by varnish. Stats will be grouped by their capitalized prefix (eg MAIN, MEMPOOL,
etc). In the output, the prefix will be used as a tag, and removed from field names.
### metric_version=1
This is the full list of stats provided by varnish. Stats will be grouped by
their capitalized prefix (e.g. MAIN, MEMPOOL, etc.). In the output, the prefix
will be used as a tag, and removed from field names.
- varnish
- MAIN.uptime (uint64, count, Child process uptime)
@ -347,8 +350,8 @@ etc). In the output, the prefix will be used as a tag, and removed from field na
### Tags
As indicated above, the prefix of a varnish stat will be used as it's 'section' tag. So section tag may have one of the
following values:
As indicated above, the prefix of a varnish stat will be used as its 'section'
tag, so the section tag may have one of the following values:
- section:
- MAIN
@ -358,18 +361,19 @@ following values:
- VBE
- LCK
## Measurements & Fields (metric_version=2)
### metric_version=2
When `metric_version=2` is enabled, the plugin runs `varnishstat -j` command and parses the JSON output into metrics.
When `metric_version=2` is enabled, the plugin runs the `varnishstat -j`
command and parses the JSON output into metrics.
Plugin uses `varnishadm vcl.list -j` commandline to find the active VCL. Metrics that are related to the nonactive VCL
are excluded from monitoring.
The plugin uses the `varnishadm vcl.list -j` command line to find the active
VCL. Metrics that are related to the non-active VCL are excluded from
monitoring.
### Requirements
## Requirements
- Varnish 6.0.2+ is required (older versions do not support JSON output from CLI tools)
#### Examples
## Examples
Varnish counter:
@ -387,14 +391,17 @@ Varnish counter:
Influx metric:
`varnish,section=MAIN cache_hit=51i 1462765437090957980`
### Advanced customizations using regexps
## Advanced customizations using regexps
Finding the VCL in a varnish measurement and parsing into tags can be adjusted by using GO regular expressions.
Finding the VCL in a varnish measurement and parsing it into tags can be
adjusted by using Go regular expressions.
Regexps use a special named group `(?P<_vcl>[\w\-]*)(\.)` to extract VCL name. `(?P<_field>[\w\-.+]*)\.val` regexp group
extracts the field name. All other named regexp groups like `(?P<my_tag>[\w\-.+]*)` are tags.
Regexps use a special named group `(?P<_vcl>[\w\-]*)(\.)` to extract VCL
name. `(?P<_field>[\w\-.+]*)\.val` regexp group extracts the field name. All
other named regexp groups like `(?P<my_tag>[\w\-.+]*)` are tags.
_Tip: It is useful to verify regexps using online tools like <https://regoio.herokuapp.com/>._
_Tip: It is useful to verify regexps using online tools like
<https://regoio.herokuapp.com/>._
By default, the plugin has a built-in list of regexps for the following VMODs:
@ -416,8 +423,9 @@ By default, the plugin has a builtin list of regexps for following VMODs:
- regexp `([\w\-]*)\.(?P<_field>[\w\-.]*)`
- `MSE_STORE.store-1-1.g_aio_running_bytes_write` -> `varnish,section=MSE_STORE store-1-1.g_aio_running_bytes_write=5i`
The default regexps list can be extended in the telegraf config. The following example shows a config with a custom
regexp for parsing of `accounting` VMOD metrics in `ACCG.<namespace>.<key>.<stat_name>` format. The namespace value will
The default regexps list can be extended in the telegraf config. The following
example shows a config with a custom regexp for parsing `accounting` VMOD
metrics in `ACCG.<namespace>.<key>.<stat_name>` format. The namespace value will
be used as a tag.
```toml
@ -425,15 +433,17 @@ be used as a tag.
regexps = ['^ACCG.(?P<namespace>[\w-]*).(?P<_field>[\w-.]*)']
```
### Custom arguments
## Custom arguments
You can change the default binary location and custom arguments for `varnishstat` and `varnishadm` command output. This
is useful when running varnish in docker or executing using varnish by SSH on a different machine.
You can change the default binary location and custom arguments for the
`varnishstat` and `varnishadm` command output. This is useful when running
varnish in Docker or executing varnish via SSH on a different machine.
It's important to note that `instance_name` parameter is not take into account when using custom `binary_args` or
`adm_binary_args`. You have to add `"-n", "/instance_name"` manually into configuration.
It's important to note that the `instance_name` parameter is not taken into
account when using custom `binary_args` or `adm_binary_args`. You have to add
`"-n", "/instance_name"` manually to the configuration.
#### Example for SSH
### Example for SSH
```toml
[[inputs.varnish]]
@ -445,7 +455,7 @@ It's important to note that `instance_name` parameter is not take into account w
stats = ["*"]
```
#### Example for Docker
### Example for Docker
```toml
[[inputs.varnish]]
@ -457,12 +467,14 @@ It's important to note that `instance_name` parameter is not take into account w
stats = ["*"]
```
### Permissions
## Permissions
It's important to note that this plugin references `varnishstat` and `varnishadm`, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set facls, or use sudo.
It's important to note that this plugin references `varnishstat` and
`varnishadm`, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this
plugin, you may need to alter the group membership, set facls, or use sudo.
#### Group membership (Recommended)
### Group membership (Recommended)
```bash
$ groups telegraf
@ -474,7 +486,7 @@ $ groups telegraf
telegraf : telegraf varnish
```
#### Extended filesystem ACL's
### Extended filesystem ACL's
```bash
$ getfacl /var/lib/varnish/<hostname>/_.vsm
@ -518,7 +530,9 @@ Defaults!VARNISHSTAT !logfile, !syslog, !pam_session
Please use the solution you see as most appropriate.
### Example Output
## Example Output
### metric_version = 1
```bash
telegraf --config etc/telegraf.conf --input-filter varnish --test
@ -526,7 +540,7 @@ Please use the solution you see as most appropriate.
> varnish,host=rpercy-VirtualBox,section=MAIN cache_hit=0i,cache_miss=0i,uptime=8416i 1462765437090957980
```
### Output (when metric_version = 2)
### metric_version = 2
```bash
telegraf --config etc/telegraf.conf --input-filter varnish --test

View File

@ -1,6 +1,8 @@
# Hashicorp Vault Input Plugin
The Vault plugin could grab metrics from every Vault agent of the cluster. Telegraf may be present in every node and connect to the agent locally. In this case should be something like `http://127.0.0.1:8200`.
The Vault plugin can collect metrics from every Vault agent in the
cluster. Telegraf may be present on every node and connect to the agent
locally; in this case the URL should be something like `http://127.0.0.1:8200`.
> Tested on vault 1.8.5
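A minimal connection sketch (the token is a placeholder):

```toml
[[inputs.vault]]
  ## local agent address as described above
  url = "http://127.0.0.1:8200"
  ## placeholder; use a token with read access to the metrics endpoint
  token = "s.EXAMPLETOKEN"
```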
@ -30,7 +32,8 @@ The Vault plugin could grab metrics from every Vault agent of the cluster. Teleg
## Metrics
For a more deep understanding of Vault monitoring, please have a look at the following Vault documentation:
For a deeper understanding of Vault monitoring, please have a look at the
following Vault documentation:
- [https://www.vaultproject.io/docs/internals/telemetry](https://www.vaultproject.io/docs/internals/telemetry)
- [https://learn.hashicorp.com/tutorials/vault/monitor-telemetry-audit-splunk?in=vault/monitoring](https://learn.hashicorp.com/tutorials/vault/monitor-telemetry-audit-splunk?in=vault/monitoring)

View File

@ -1,6 +1,7 @@
# VMware vSphere Input Plugin
The VMware vSphere plugin uses the vSphere API to gather metrics from multiple vCenter servers.
The VMware vSphere plugin uses the vSphere API to gather metrics from multiple
vCenter servers.
* Clusters
* Hosts
@ -10,19 +11,14 @@ The VMware vSphere plugin uses the vSphere API to gather metrics from multiple v
## Supported versions of vSphere
This plugin supports vSphere version 6.5, 6.7 and 7.0. It may work with versions 5.1, 5.5 and 6.0, but neither are officially supported.
This plugin supports vSphere versions 6.5, 6.7 and 7.0. It may work with
versions 5.1, 5.5 and 6.0, but these are not officially supported.
Compatibility information was found [here](https://github.com/vmware/govmomi/tree/v0.26.0#compatibility)
Compatibility information is available from the govmomi project
[here](https://github.com/vmware/govmomi/tree/v0.26.0#compatibility)
## Configuration
NOTE: To disable collection of a specific resource type, simply exclude all metrics using the XX_metric_exclude.
For example, to disable collection of VMs, add this:
```toml @sample.conf
vm_metric_exclude = [ "*" ]
```
```toml
# Read metrics from one or many vCenters
[[inputs.vsphere]]
@ -217,12 +213,28 @@ vm_metric_exclude = [ "*" ]
# insecure_skip_verify = false
```
NOTE: To disable collection of a specific resource type, simply exclude all
metrics using the XX_metric_exclude. For example, to disable collection of VMs,
add this:
```toml @sample.conf
vm_metric_exclude = [ "*" ]
```
### Objects and Metrics Per Query
By default, in vCenter's configuration a limit is set to the number of entities that are included in a performance chart query. Default settings for vCenter 6.5 and above is 256. Prior versions of vCenter have this set to 64.
A vCenter administrator can change this setting, see this [VMware KB article](https://kb.vmware.com/s/article/2107096) for more information.
By default, vCenter's configuration sets a limit on the number of entities
that are included in a performance chart query. The default setting for
vCenter 6.5 and above is 256; prior versions of vCenter have this set to 64. A
vCenter administrator can change this setting; see this [VMware KB
article](https://kb.vmware.com/s/article/2107096) for more information.
Any modification should be reflected in this plugin by modifying the parameter `max_query_objects`
Any modification should be reflected in this plugin by modifying the
`max_query_objects` parameter:
```toml
## number of objects to retrieve per query for realtime resources (vms and hosts)
@ -232,11 +244,13 @@ Any modification should be reflected in this plugin by modifying the parameter `
### Collection and Discovery concurrency
On large vCenter setups it may be prudent to have multiple concurrent go routines collect performance metrics
in order to avoid potential errors for time elapsed during a collection cycle. This should never be greater than 8,
though the default of 1 (no concurrency) should be sufficient for most configurations.
On large vCenter setups it may be prudent to have multiple concurrent go
routines collect performance metrics in order to avoid potential errors for time
elapsed during a collection cycle. This should never be greater than 8, though
the default of 1 (no concurrency) should be sufficient for most configurations.
For setting up concurrency, modify `collect_concurrency` and `discover_concurrency` parameters.
For setting up concurrency, modify `collect_concurrency` and
`discover_concurrency` parameters.
```toml
## number of go routines to use for collection and discovery of objects and metrics
@ -246,8 +260,9 @@ For setting up concurrency, modify `collect_concurrency` and `discover_concurren
### Inventory Paths
Resources to be monitored can be selected using Inventory Paths. This treats the vSphere inventory as a tree structure similar
to a file system. A vSphere inventory has a structure similar to this:
Resources to be monitored can be selected using Inventory Paths. This treats the
vSphere inventory as a tree structure similar to a file system. A vSphere
inventory has a structure similar to this:
```bash
<root>
@ -279,36 +294,66 @@ to a file system. A vSphere inventory has a structure similar to this:
#### Using Inventory Paths
Using familiar UNIX-style paths, one could select e.g. VM2 with the path ```/DC0/vm/VM2```.
Using familiar UNIX-style paths, one could select e.g. VM2 with the path
`/DC0/vm/VM2`.
Often, we want to select a group of resource, such as all the VMs in a folder. We could use the path ```/DC0/vm/Folder1/*``` for that.
Often, we want to select a group of resources, such as all the VMs in a
folder. We could use the path `/DC0/vm/Folder1/*` for that.
Another possibility is to select objects using a partial name, such as ```/DC0/vm/Folder1/hadoop*``` yielding all vms in Folder1 with a name starting with "hadoop".
Another possibility is to select objects using a partial name, such as
`/DC0/vm/Folder1/hadoop*`, yielding all VMs in Folder1 with a name starting
with "hadoop".
Finally, due to the arbitrary nesting of the folder structure, we need a "recursive wildcard" for traversing multiple folders. We use the "**" symbol for that. If we want to look for a VM with a name starting with "hadoop" in any folder, we could use the following path: ```/DC0/vm/**/hadoop*```
Finally, due to the arbitrary nesting of the folder structure, we need a
"recursive wildcard" for traversing multiple folders. We use the "**" symbol for
that. If we want to look for a VM with a name starting with "hadoop" in any
folder, we could use the following path: `/DC0/vm/**/hadoop*`
#### Multiple paths to VMs
As we can see from the example tree above, VMs appear both in its on folder under the datacenter, as well as under the hosts. This is useful when you like to select VMs on a specific host. For example, ```/DC0/host/Cluster1/Host1/hadoop*``` selects all VMs with a name starting with "hadoop" that are running on Host1.
As we can see from the example tree above, VMs appear both in their own folder
under the datacenter, as well as under the hosts. This is useful when you want
to select VMs on a specific host. For example,
`/DC0/host/Cluster1/Host1/hadoop*` selects all VMs with a name starting with
"hadoop" that are running on Host1.
We can extend this to looking at a cluster level: ```/DC0/host/Cluster1/*/hadoop*```. This selects any VM matching "hadoop*" on any host in Cluster1.
We can extend this to looking at a cluster level:
`/DC0/host/Cluster1/*/hadoop*`. This selects any VM matching "hadoop*" on any
host in Cluster1.
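A hedged sketch of how such paths could be used with the `vm_include` selector
(the paths are taken from the example tree above):

```toml
[[inputs.vsphere]]
  ## all VMs in Folder1 plus, via the recursive wildcard, any VM
  ## whose name starts with "hadoop" in any folder
  vm_include = [ "/DC0/vm/Folder1/*", "/DC0/vm/**/hadoop*" ]
```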
## Performance Considerations
### Realtime vs. historical metrics
vCenter keeps two different kinds of metrics, known as realtime and historical metrics.
vCenter keeps two different kinds of metrics, known as realtime and historical
metrics.
* Realtime metrics: Available at a 20 second granularity. These metrics are stored in memory and are very fast and cheap to query. Our tests have shown that a complete set of realtime metrics for 7000 virtual machines can be obtained in less than 20 seconds. Realtime metrics are only available on **ESXi hosts** and **virtual machine** resources. Realtime metrics are only stored for 1 hour in vCenter.
* Historical metrics: Available at a (default) 5 minute, 30 minutes, 2 hours and 24 hours rollup levels. The vSphere Telegraf plugin only uses the most granular rollup which defaults to 5 minutes but can be changed in vCenter to other interval durations. These metrics are stored in the vCenter database and can be expensive and slow to query. Historical metrics are the only type of metrics available for **clusters**, **datastores**, **resource pools** and **datacenters**.
For more information, refer to the vSphere documentation here: <https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.wssdk.pg.doc_50%2FPG_Ch16_Performance.18.2.html>
For more information, refer to the vSphere [documentation][vsphere-16].
This distinction has an impact on how Telegraf collects metrics. A single instance of an input plugin can have one and only one collection interval, which means that you typically set the collection interval based on the most frequently collected metric. Let's assume you set the collection interval to 1 minute. All realtime metrics will be collected every minute. Since the historical metrics are only available on a 5 minute interval, the vSphere Telegraf plugin automatically skips four out of five collection cycles for these metrics. This works fine in many cases. Problems arise when the collection of historical metrics takes longer than the collection interval. This will cause error messages similar to this to appear in the Telegraf logs:
This distinction has an impact on how Telegraf collects metrics. A single
instance of an input plugin can have one and only one collection interval, which
means that you typically set the collection interval based on the most
frequently collected metric. Let's assume you set the collection interval to 1
minute. All realtime metrics will be collected every minute. Since the
historical metrics are only available on a 5 minute interval, the vSphere
Telegraf plugin automatically skips four out of five collection cycles for these
metrics. This works fine in many cases. Problems arise when the collection of
historical metrics takes longer than the collection interval. This will cause
error messages similar to this to appear in the Telegraf logs:
```2019-01-16T13:41:10Z W! [agent] input "inputs.vsphere" did not complete within its interval```
```text
2019-01-16T13:41:10Z W! [agent] input "inputs.vsphere" did not complete within its interval
```
This will disrupt the metric collection and can result in missed samples. The best practice workaround is to specify two instances of the vSphere plugin, one for the realtime metrics with a short collection interval and one for the historical metrics with a longer interval. You can use the ```*_metric_exclude``` to turn off the resources you don't want to collect metrics for in each instance. For example:
This will disrupt the metric collection and can result in missed samples. The
best practice workaround is to specify two instances of the vSphere plugin, one
for the realtime metrics with a short collection interval and one for the
historical metrics with a longer interval. You can use the `*_metric_exclude`
options to turn off the resources you don't want to collect metrics for in
each instance. For example:
```toml
## Realtime instance
@ -348,41 +393,68 @@ This will disrupt the metric collection and can result in missed samples. The be
collect_concurrency = 3
```
[vsphere-16]: https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.wssdk.pg.doc_50%2FPG_Ch16_Performance.18.2.html
### Configuring max_query_metrics setting
The ```max_query_metrics``` determines the maximum number of metrics to attempt to retrieve in one call to vCenter. Generally speaking, a higher number means faster and more efficient queries. However, the number of allowed metrics in a query is typically limited in vCenter by the ```config.vpxd.stats.maxQueryMetrics``` setting in vCenter. The value defaults to 64 on vSphere 5.5 and older and 256 on newver versions of vCenter. The vSphere plugin always checks this setting and will automatically reduce the number if the limit configured in vCenter is lower than max_query_metrics in the plugin. This will result in a log message similar to this:
The `max_query_metrics` determines the maximum number of metrics to attempt to
retrieve in one call to vCenter. Generally speaking, a higher number means
faster and more efficient queries. However, the number of allowed metrics in a
query is typically limited in vCenter by the `config.vpxd.stats.maxQueryMetrics`
setting in vCenter. The value defaults to 64 on vSphere 5.5 and older and 256 on
newer versions of vCenter. The vSphere plugin always checks this setting and
will automatically reduce the number if the limit configured in vCenter is lower
than max_query_metrics in the plugin. This will result in a log message similar
to this:
```2019-01-21T03:24:18Z W! [input.vsphere] Configured max_query_metrics is 256, but server limits it to 64. Reducing.```
```text
2019-01-21T03:24:18Z W! [input.vsphere] Configured max_query_metrics is 256, but server limits it to 64. Reducing.
```
You may ask a vCenter administrator to increase this limit to help boost performance.
You may ask a vCenter administrator to increase this limit to help boost
performance.
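For example, a sketch matching a vCenter whose server-side limit is 64:

```toml
[[inputs.vsphere]]
  ## keep at or below vCenter's config.vpxd.stats.maxQueryMetrics
  max_query_metrics = 64
```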
### Cluster metrics and the max_query_metrics setting
Cluster metrics are handled a bit differently by vCenter. They are aggregated from ESXi and virtual machine metrics and may not be available when you query their most recent values. When this happens, vCenter will attempt to perform that aggregation on the fly. Unfortunately, all the subqueries needed internally in vCenter to perform this aggregation will count towards ```config.vpxd.stats.maxQueryMetrics```. This means that even a very small query may result in an error message similar to this:
Cluster metrics are handled a bit differently by vCenter. They are aggregated
from ESXi and virtual machine metrics and may not be available when you query
their most recent values. When this happens, vCenter will attempt to perform
that aggregation on the fly. Unfortunately, all the subqueries needed internally
in vCenter to perform this aggregation will count towards
`config.vpxd.stats.maxQueryMetrics`. This means that even a very small query may
result in an error message similar to this:
```2018-11-02T13:37:11Z E! Error in plugin [inputs.vsphere]: ServerFaultCode: This operation is restricted by the administrator - 'vpxd.stats.maxQueryMetrics'. Contact your system administrator```
```text
2018-11-02T13:37:11Z E! Error in plugin [inputs.vsphere]: ServerFaultCode: This operation is restricted by the administrator - 'vpxd.stats.maxQueryMetrics'. Contact your system administrator
```
There are two ways of addressing this:
* Ask your vCenter administrator to set ```config.vpxd.stats.maxQueryMetrics``` to a number that's higher than the total number of virtual machines managed by a vCenter instance.
* Ask your vCenter administrator to set `config.vpxd.stats.maxQueryMetrics` to a number that's higher than the total number of virtual machines managed by a vCenter instance.
* Exclude the cluster metrics and use either the basicstats aggregator to calculate sums and averages per cluster or use queries in the visualization tool to obtain the same result.
### Concurrency settings
The vSphere plugin allows you to specify two concurrency settings:
* ```collect_concurrency```: The maximum number of simultaneous queries for performance metrics allowed per resource type.
* ```discover_concurrency```: The maximum number of simultaneous queries for resource discovery allowed.
* `collect_concurrency`: The maximum number of simultaneous queries for performance metrics allowed per resource type.
* `discover_concurrency`: The maximum number of simultaneous queries for resource discovery allowed.
While a higher level of concurrency typically has a positive impact on performance, increasing these numbers too much can cause performance issues at the vCenter server. A rule of thumb is to set these parameters to the number of virtual machines divided by 1500 and rounded up to the nearest integer.
While a higher level of concurrency typically has a positive impact on
performance, increasing these numbers too much can cause performance issues at
the vCenter server. A rule of thumb is to set these parameters to the number of
virtual machines divided by 1500 and rounded up to the nearest integer.
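For example, a hedged sketch for an environment with roughly 4000 virtual
machines (4000 / 1500, rounded up):

```toml
[[inputs.vsphere]]
  collect_concurrency = 3
  discover_concurrency = 3
```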
### Configuring historical_interval setting
When the vSphere plugin queries vCenter for historical statistics it queries for statistics that exist at a specific interval. The default historical interval duration is 5 minutes but if this interval has been changed then you must override the default query interval in the vSphere plugin.
When the vSphere plugin queries vCenter for historical statistics it queries for
statistics that exist at a specific interval. The default historical interval
duration is 5 minutes but if this interval has been changed then you must
override the default query interval in the vSphere plugin.
* ```historical_interval```: The interval of the most granular statistics configured in vSphere represented in seconds.
* `historical_interval`: The interval of the most granular statistics configured in vSphere represented in seconds.
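For example, a sketch for a vCenter whose most granular rollup was changed to
10 minutes:

```toml
[[inputs.vsphere]]
  ## must match the most granular rollup configured in vCenter
  historical_interval = "10m"
```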
## Measurements & Fields
## Metrics
* Cluster Stats
* Cluster services: CPU, memory, failover
@ -421,9 +493,10 @@ When the vSphere plugin queries vCenter for historical statistics it queries for
* Datastore stats:
* Disk: Capacity, provisioned, used
For a detailed list of commonly available metrics, please refer to [METRICS.md](METRICS.md)
For a detailed list of commonly available metrics, please refer to
[METRICS.md](METRICS.md)
## Tags
### Tags
* all metrics
* vcenter (vcenter url)
@ -455,7 +528,7 @@ For a detailed list of commonly available metrics, please refer to [METRICS.md](
* virtualDisk stats for VM
* disk (name of virtual disk)
## Sample output
## Example Output
```shell
vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 run_summation=2608i,ready_summation=129i,usage_average=5.01,used_summation=2134i,demand_average=326i 1535660299000000000

View File

@ -1,12 +1,14 @@
# Webhooks Input Plugin
This is a Telegraf service plugin that start an http server and register multiple webhook listeners.
This is a Telegraf service plugin that starts an HTTP server and registers
multiple webhook listeners.
```sh
telegraf config -input-filter webhooks -output-filter influxdb > config.conf.new
```
Change the config file to point to the InfluxDB server you are using and adjust the settings to match your environment. Once that is complete:
Change the config file to point to the InfluxDB server you are using and adjust
the settings to match your environment. Once that is complete:
```sh
cp config.conf.new /etc/telegraf/telegraf.conf

View File

@ -1,14 +1,17 @@
# Windows Eventlog Input Plugin
Telegraf's win_eventlog input plugin gathers metrics from the Windows Event Log.
## Collect Windows Event Log messages
Supports Windows Vista and higher.
Telegraf should have Administrator permissions to subscribe for some of the Windows Events Channels, like System Log.
Telegraf should have Administrator permissions to subscribe to some of the
Windows Event Channels, like the System log.
Telegraf minimum version: Telegraf 1.16.0
### Configuration
## Configuration
```toml @sample.conf
# Input plugin to collect Windows Event Log messages
@ -92,7 +95,8 @@ Telegraf minimum version: Telegraf 1.16.0
### Filtering
There are three types of filtering: **Event Log** name, **XPath Query** and **XML Query**.
There are three types of filtering: **Event Log** name, **XPath Query** and
**XML Query**.
**Event Log** name filtering is simple:
@ -101,26 +105,39 @@ There are three types of filtering: **Event Log** name, **XPath Query** and **XM
xpath_query = '''
```
For **XPath Query** filtering set the `xpath_query` value, and `eventlog_name` will be ignored:
For **XPath Query** filtering set the `xpath_query` value, and `eventlog_name`
will be ignored:
```toml
eventlog_name = ""
xpath_query = "Event/System[EventID=999]"
```
**XML Query** is the most flexible: you can Select or Suppress any values, and give ranges for other values. XML query is the recommended form, because it is most flexible. You can create or debug XML Query by creating Custom View in Windows Event Viewer and then copying resulting XML in config file.
**XML Query** is the most flexible: you can Select or Suppress any values, and
give ranges for other values. XML query is the recommended form because it is
the most flexible. You can create or debug an XML Query by creating a Custom
View in Windows Event Viewer and then copying the resulting XML into the config
file.
XML Query documentation:
<https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events>
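For illustration, a sketch of such a query as it might be pasted from a Custom
View dialog (the query body below is an assumption, not taken from the plugin
docs; the XML goes into the same `xpath_query` option):

```toml
eventlog_name = ""
xpath_query = '''
<QueryList>
  <Query Id="0" Path="System">
    <Select Path="System">*[System[(Level=1 or Level=2)]]</Select>
    <Suppress Path="System">*[System[EventID=999]]</Suppress>
  </Query>
</QueryList>
'''
```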
### Metrics
## Metrics
You can send any field, *System*, *Computed* or *XML* as tag field. List of those fields is in the `event_tags` config array. Globbing is supported in this array, i.e. `Level*` for all fields beginning with `Level`, or `L?vel` for all fields where the name is `Level`, `L3vel`, `L@vel` and so on. Tag fields are converted to strings automatically.
You can send any field, *System*, *Computed* or *XML*, as a tag field. The list
of those fields is in the `event_tags` config array. Globbing is supported in
this array, e.g. `Level*` for all fields beginning with `Level`, or `L?vel` for
all fields where the name is `Level`, `L3vel`, `L@vel` and so on. Tag fields are
converted to strings automatically.
By default, all other fields are sent, but you can limit that either by listing it in `event_fields` config array with globbing, or by adding some field name masks in the `exclude_fields` config array.
By default, all other fields are sent, but you can limit that either by listing
them in the `event_fields` config array with globbing, or by adding field name
masks to the `exclude_fields` config array.
You can limit sending fields with empty values by adding masks of names of such fields in the `exclude_empty` config array. Value considered empty, if the System field of type `int` or `uint32` is equal to zero, or if any field of type `string` is an empty string.
You can limit sending fields with empty values by adding masks of the names of
such fields to the `exclude_empty` config array. A value is considered empty if
a System field of type `int` or `uint32` is equal to zero, or if any field of
type `string` is an empty string.
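A sketch combining these arrays (the field names and masks are illustrative):

```toml
  event_tags = ["Source", "Level*"]      # sent as tags; globbing supported
  event_fields = ["*"]                   # send every remaining field...
  exclude_fields = ["Binary", "Data*"]   # ...except these name masks
  exclude_empty = ["Task", "Opcode"]     # drop these when their value is empty
```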
List of System fields:
@ -149,25 +166,42 @@ List of System fields:
### Computed fields
Fields `Level`, `Opcode` and `Task` are converted to text and saved as computed `*Text` fields.
Fields `Level`, `Opcode` and `Task` are converted to text and saved as computed
`*Text` fields.
`Keywords` field is converted from hex uint64 value by the `_EvtFormatMessage` WINAPI function. There can be more than one value, in that case they will be comma-separated. If keywords can't be converted (bad device driver or forwarded from another computer with unknown Event Channel), hex uint64 is saved as is.
The `Keywords` field is converted from a hex uint64 value by the
`_EvtFormatMessage` WINAPI function. There can be more than one value; in that
case they will be comma-separated. If keywords can't be converted (bad device
driver or forwarded from another computer with an unknown Event Channel), the
hex uint64 is saved as is.
`ProcessName` field is found by looking up ProcessID. Can be empty if telegraf doesn't have enough permissions.
The `ProcessName` field is found by looking up the ProcessID. It can be empty
if Telegraf doesn't have enough permissions.
`Username` field is found by looking up SID from UserID.
`Message` field is rendered from the event data, and can be several kilobytes of text with line breaks. For most events the first line of this text is more then enough, and additional info is more useful to be parsed as XML fields. So, for brevity, plugin takes only the first line. You can set `only_first_line_of_message` parameter to `false` to take full message text.
The `Message` field is rendered from the event data, and can be several
kilobytes of text with line breaks. For most events the first line of this text
is more than enough, and additional info is more useful when parsed as XML
fields. So, for brevity, the plugin takes only the first line. You can set the
`only_first_line_of_message` parameter to `false` to take the full message text.
`TimeCreated` field is a string in RFC3339Nano format. By default Telegraf parses it as an event timestamp. If there is a field parse error or `timestamp_from_event` configration parameter is set to `false`, then event timestamp will be set to the exact time when Telegraf has parsed this event, so it will be rounded to the nearest minute.
The `TimeCreated` field is a string in RFC3339Nano format. By default Telegraf
parses it as the event timestamp. If there is a field parse error or the
`timestamp_from_event` configuration parameter is set to `false`, then the event
timestamp will be set to the exact time when Telegraf has parsed this event, so
it will be rounded to the nearest minute.
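Both behaviours can be toggled in the plugin config, e.g. this sketch:

```toml
  ## Keep the full multi-line Message instead of only its first line:
  # only_first_line_of_message = false
  ## Use Telegraf's parse time instead of the event's TimeCreated:
  # timestamp_from_event = false
```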
### Additional Fields
The content of **Event Data** and **User Data** XML Nodes can be added as additional fields, and is added by default. You can disable that by setting `process_userdata` or `process_eventdata` parameters to `false`.
The content of **Event Data** and **User Data** XML Nodes can be added as
additional fields, and is added by default. You can disable that by setting
`process_userdata` or `process_eventdata` parameters to `false`.
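For instance (a sketch; both options default to `true`):

```toml
  # process_userdata = false
  # process_eventdata = false
```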
For the fields from additional XML Nodes the `Name` attribute is taken as the name, and inner text is the value. Type of those fields is always string.
For the fields from additional XML Nodes the `Name` attribute is taken as the
name, and inner text is the value. Type of those fields is always string.
Name of the field is formed from XML Path by adding _ inbetween levels. For example, if UserData XML looks like this:
The name of the field is formed from the XML Path by adding `_` between levels.
For example, if the UserData XML looks like this:
```xml
<UserData>
@ -191,19 +225,27 @@ CbsPackageChangeState_ErrorCode = "0x0"
CbsPackageChangeState_Client = "UpdateAgentLCU"
```
If there are more than one field with the same name, all those fields are given suffix with number: `_1`, `_2` and so on.
If there is more than one field with the same name, those fields are given a
numeric suffix: `_1`, `_2` and so on.
### Localization
## Localization
Human readable Event Description is in the Message field. But it is better to be skipped in favour of the Event XML values, because they are more machine-readable.
The human-readable Event Description is in the Message field, but it is better
to skip it in favour of the Event XML values, because they are more
machine-readable.
Keywords, LevelText, TaskText, OpcodeText and Message are saved with the current Windows locale by default. You can override this, for example, to English locale by setting `locale` config parameter to `1033`. Unfortunately, **Event Data** and **User Data** XML Nodes are in default Windows locale only.
Keywords, LevelText, TaskText, OpcodeText and Message are saved with the current
Windows locale by default. You can override this, for example, to English locale
by setting `locale` config parameter to `1033`. Unfortunately, **Event Data**
and **User Data** XML Nodes are in default Windows locale only.
Locale should be present on the computer. English locale is usually available on all localized versions of modern Windows. List of locales:
Locale should be present on the computer. English locale is usually available on
all localized versions of modern Windows. A list of all locales is available
from Microsoft's [Open Specifications][1].
<https://docs.microsoft.com/en-us/openspecs/office_standards/ms-oe376/6c085406-a698-4e12-9d4d-c3b0ee3dbc4a>
[1]: https://docs.microsoft.com/en-us/openspecs/office_standards/ms-oe376/6c085406-a698-4e12-9d4d-c3b0ee3dbc4a
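A sketch forcing English rendering of the text fields:

```toml
  # locale = 1033   # en-US; the locale must be installed on the host
```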
### Example Output
## Example Output
Some values are changed for anonymity.

View File

@ -1,15 +1,17 @@
# Windows Performance Counters Input Plugin
This document presents the input plugin to read Performance Counters on Windows operating systems.
This document presents the input plugin to read Performance Counters on Windows
operating systems.
The configuration is parsed and then tested for validity, such as
whether the Object, Instance and Counter exist on Telegraf startup.
Counter paths are refreshed periodically, see the [CountersRefreshInterval](#countersrefreshinterval)
configuration parameter for more info.
Counter paths are refreshed periodically, see the
[CountersRefreshInterval](#countersrefreshinterval) configuration parameter for
more info.
In case of query for all instances `["*"]`, the plugin does not return the instance `_Total`
by default. See [IncludeTotal](#includetotal) for more info.
When querying for all instances `["*"]`, the plugin does not return the
instance `_Total` by default. See [IncludeTotal](#includetotal) for more info.
## Basics
@ -73,12 +75,14 @@ Example:
#### CountersRefreshInterval
Configured counters are matched against available counters at the interval
specified by the `CountersRefreshInterval` parameter. The default value is `1m` (1 minute).
specified by the `CountersRefreshInterval` parameter. The default value is `1m`
(1 minute).
If wildcards are used in instance or counter names, they are expanded at this point, if the `UseWildcardsExpansion` param is set to `true`.
If wildcards are used in instance or counter names, they are expanded at this
point, if the `UseWildcardsExpansion` param is set to `true`.
Setting the `CountersRefreshInterval` too low (order of seconds) can cause Telegraf to create
a high CPU load.
Setting the `CountersRefreshInterval` too low (order of seconds) can cause
Telegraf to create a high CPU load.
Set it to `0s` to disable periodic refreshing.
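For example, a sketch of a slower refresh combined with wildcard expansion:

```toml
[[inputs.win_perf_counters]]
  CountersRefreshInterval = "15m"   # slower refresh to reduce CPU load
  UseWildcardsExpansion = true      # wildcards re-expanded at each refresh
```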
@ -87,11 +91,15 @@ Example:
#### PreVistaSupport
(Deprecated in 1.7; Necessary features on Windows Vista and newer are checked dynamically)
(Deprecated in 1.7; Necessary features on Windows Vista and newer are checked
dynamically)
Bool, if set to `true`, the plugin will use the localized PerfCounter interface that has been present since before Vista for backwards compatibility.
Bool, if set to `true`, the plugin will use the localized PerfCounter interface
that has been present since before Vista for backwards compatibility.
It is recommended NOT to use this on OSes starting with Vista and newer because it requires more configuration to use this than the newer interface present since Vista.
It is recommended NOT to use this on OSes starting with Vista and newer because
it requires more configuration to use this than the newer interface present
since Vista.
Example for Windows Server 2003, this would be set to true:
`PreVistaSupport=true`
@ -107,9 +115,11 @@ Example:
#### IgnoredErrors
IgnoredErrors accepts a list of PDH error codes which are defined in pdh.go, if this error is encountered it will be ignored.
For example, you can provide "PDH_NO_DATA" to ignore performance counters with no instances, but by default no errors are ignored.
You can find the list of possible errors here: [PDH errors](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/win_perf_counters/pdh.go)
IgnoredErrors accepts a list of PDH error codes which are defined in pdh.go, if
this error is encountered it will be ignored. For example, you can provide
"PDH_NO_DATA" to ignore performance counters with no instances, but by default
no errors are ignored. You can find the list of possible errors here: [PDH
errors](pdh.go).
Example:
`IgnoredErrors=["PDH_NO_DATA"]`
@ -125,13 +135,15 @@ A new configuration entry consists of the TOML header starting with,
This must follow before other plugin configurations,
beneath the main win_perf_counters entry, `[[inputs.win_perf_counters]]`.
Following this are 3 required key/value pairs and three optional parameters and their usage.
Following this are three required key/value pairs and three optional parameters
and their usage.
#### ObjectName
(Required)
ObjectName is the Object to query for, like Processor, DirectoryServices, LogicalDisk or similar.
ObjectName is the Object to query for, like Processor, DirectoryServices,
LogicalDisk or similar.
Example: `ObjectName = "LogicalDisk"`
@ -139,18 +151,18 @@ Example: `ObjectName = "LogicalDisk"`
(Required)
The instances key (this is an array) declares the instances of a counter you would like returned,
it can be one or more values.
The instances key (this is an array) declares the instances of a counter you
would like returned; it can be one or more values.
Example: `Instances = ["C:","D:","E:"]`
This will return only for the instances
C:, D: and E: where relevant. To get all instances of a Counter, use `["*"]` only.
By default any results containing `_Total` are stripped,
unless this is specified as the wanted instance.
This will return only for the instances C:, D: and E: where relevant. To get all
instances of a Counter, use `["*"]` only. By default any results containing
`_Total` are stripped, unless this is specified as the wanted instance.
Alternatively see the option `IncludeTotal` below.
It is also possible to set partial wildcards, eg. `["chrome*"]`, if the `UseWildcardsExpansion` param is set to `true`
It is also possible to set partial wildcards, e.g. `["chrome*"]`, if the
`UseWildcardsExpansion` param is set to `true`.
Some Objects do not have instances to select from at all.
Here only one option is valid if you want data back,
@ -173,11 +185,10 @@ is set to `true`.
(Optional)
This key is optional. If it is not set it will be `win_perf_counters`.
In InfluxDB this is the key underneath which the returned data is stored.
So for ordering your data in a good manner,
this is a good key to set with a value when you want your IIS and Disk results stored
separately from Processor results.
This key is optional. If it is not set it will be `win_perf_counters`. In
InfluxDB this is the key underneath which the returned data is stored. To
organize your data well, this is a good key to set when you want your IIS and
Disk results stored separately from Processor results.
Example: `Measurement = "win_disk"`
@ -185,11 +196,16 @@ Example: `Measurement = "win_disk"`
(Optional)
This key is optional. It is a simple bool.
If set to `true`, counter values will be provided in the raw, integer, form. This is in contrast with the default behavior, where values are returned in a formatted, displayable, form
as seen in the Windows Performance Monitor.
A field representing raw counter value has the `_Raw` suffix. Raw values should be further used in a calculation, e.g. `100-(non_negative_derivative("Percent_Processor_Time_Raw",1s)/100000`
Note: Time based counters (i.e. _% Processor Time_) are reported in hundredths of nanoseconds.
This key is optional. It is a simple bool. If set to `true`, counter values
will be provided in the raw, integer, form. This is in contrast with the default
behavior, where values are returned in a formatted, displayable, form
as seen in the Windows Performance Monitor.
A field representing a raw counter value has the `_Raw` suffix. Raw values
should be further used in a calculation,
e.g. `100-(non_negative_derivative("Percent_Processor_Time_Raw",1s)/100000)`
Note: Time based counters (i.e. _% Processor Time_) are reported in hundredths
of nanoseconds.
Example: `UseRawValues = true`
@ -218,10 +234,10 @@ asked for that do not match. Useful when debugging new configurations.
(Internal)
This key should not be used. It is for testing purposes only.
It is a simple bool. If it is not set to true or included this is treated as false.
If this is set to true, the plugin will abort and end prematurely
if any of the combinations of ObjectName/Instances/Counters are invalid.
This key should not be used. It is for testing purposes only. It is a simple
bool. If it is not set to true or not included, it is treated as false. If this
is set to true, the plugin will abort and end prematurely if any of the
combinations of ObjectName/Instances/Counters are invalid.
## Configuration
@ -591,9 +607,9 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
## Troubleshooting
If you are getting an error about an invalid counter, use the `typeperf` command to check the counter path
on the command line.
E.g. `typeperf "Process(chrome*)\% Processor Time"`
If you are getting an error about an invalid counter, use the `typeperf` command
to check the counter path on the command line. E.g. `typeperf
"Process(chrome*)\% Processor Time"`
If no metrics are emitted even with the default config, you may need to repair
your performance counters.

View File

@ -2,7 +2,8 @@
Reports information about Windows service status.
Monitoring some services may require running Telegraf with administrator privileges.
Monitoring some services may require running Telegraf with administrator
privileges.
## Configuration
@ -18,7 +19,7 @@ Monitoring some services may require running Telegraf with administrator privile
excluded_service_names = ['WinRM'] # optional, list of service names to exclude
```
### Measurements & Fields
## Metrics
- win_services
- state : integer
@ -48,7 +49,7 @@ The `startup_mode` field can have the following values:
- service_name
- display_name
### Example Output
## Example Output
```shell
win_services,host=WIN2008R2H401,display_name=Server,service_name=LanmanServer state=4i,startup_mode=2i 1500040669000000000
@ -57,9 +58,10 @@ win_services,display_name=Remote\ Desktop\ Services,service_name=TermService,hos
### TICK Scripts
A sample TICK script for a notification about a not running service.
It sends a notification whenever any service changes its state to be not _running_ and when it changes that state back to _running_.
The notification is sent via an HTTP POST call.
A sample TICK script for a notification about a service that is not running. It
sends a notification whenever any service changes its state away from _running_
and when it changes that state back to _running_. The notification is sent via
an HTTP POST call.
```shell
stream

View File

@ -1,6 +1,7 @@
# Wireless Input Plugin
The wireless plugin gathers metrics about wireless link quality by reading the `/proc/net/wireless` file. This plugin currently supports linux only.
The wireless plugin gathers metrics about wireless link quality by reading the
`/proc/net/wireless` file. This plugin currently supports Linux only.
## Configuration

View File

@ -3,7 +3,8 @@
This plugin provides information about X509 certificate accessible via local
file or network connection.
When using a UDP address as a certificate source, the server must support [DTLS](https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security).
When using a UDP address as a certificate source, the server must support
[DTLS](https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security).
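For example, a sketch of a UDP source entry (host and port are placeholders):

```toml
[[inputs.x509_cert]]
  sources = ["udp://dtls.example.com:4433"]   # placeholder host and port
```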
## Configuration
@ -57,7 +58,7 @@ When using a UDP address as a certificate source, the server must support [DTLS]
- startdate (int, seconds)
- enddate (int, seconds)
## Example output
## Example Output
```shell
x509_cert,common_name=ubuntu,source=/etc/ssl/certs/ssl-cert-snakeoil.pem,verification=valid age=7693222i,enddate=1871249033i,expiry=307666777i,startdate=1555889033i,verification_code=0i 1563582256000000000

View File

@ -1,6 +1,9 @@
# XtremIO Input Plugin
The `xtremio` plugin gathers metrics from a Dell EMC XtremIO Storage Array's V3 Rest API. Documentation can be found [here](https://dl.dell.com/content/docu96624_xtremio-storage-array-x1-and-x2-cluster-types-with-xms-6-3-0-to-6-3-3-and-xios-4-0-15-to-4-0-31-and-6-0-0-to-6-3-3-restful-api-3-x-guide.pdf?language=en_us)
The `xtremio` plugin gathers metrics from a Dell EMC XtremIO Storage Array's V3
REST API. Documentation can be found [here][1].
[1]: https://dl.dell.com/content/docu96624_xtremio-storage-array-x1-and-x2-cluster-types-with-xms-6-3-0-to-6-3-3-and-xios-4-0-15-to-4-0-31-and-6-0-0-to-6-3-3-restful-api-3-x-guide.pdf?language=en_us
## Configuration

View File

@ -30,7 +30,7 @@ from `sysctl`, 'zfs' and `zpool` on FreeBSD.
# datasetMetrics = false
```
### Measurements & Fields
## Metrics
By default this plugin collects metrics about ZFS internals, pools and datasets.
These metrics are either counters or measure sizes
@ -46,7 +46,7 @@ each dataset.
- zfs
With fields listed below.
#### ARC Stats (FreeBSD and Linux)
### ARC Stats (FreeBSD and Linux)
- arcstats_allocated (FreeBSD only)
- arcstats_anon_evict_data (Linux only)
@ -166,7 +166,7 @@ each dataset.
- arcstats_size
- arcstats_sync_wait_for_async (FreeBSD only)
#### Zfetch Stats (FreeBSD and Linux)
### Zfetch Stats (FreeBSD and Linux)
- zfetchstats_bogus_streams (Linux only)
- zfetchstats_colinear_hits (Linux only)
@ -181,13 +181,13 @@ each dataset.
- zfetchstats_stride_hits (Linux only)
- zfetchstats_stride_misses (Linux only)
#### Vdev Cache Stats (FreeBSD)
### Vdev Cache Stats (FreeBSD)
- vdev_cache_stats_delegations
- vdev_cache_stats_hits
- vdev_cache_stats_misses
#### Pool Metrics (optional)
### Pool Metrics (optional)
On Linux (reference: kstat accumulated time and queue length statistics):
@ -225,7 +225,7 @@ On FreeBSD:
- size (integer, bytes)
- fragmentation (integer, percent)
#### Dataset Metrics (optional, only on FreeBSD)
### Dataset Metrics (optional, only on FreeBSD)
- zfs_dataset
- avail (integer, bytes)
@ -247,7 +247,7 @@ On FreeBSD:
- Dataset metrics (`zfs_dataset`) will have the following tag:
- dataset - with the name of the dataset which the metrics are for.
### Example Output
## Example Output
```shell
$ ./telegraf --config telegraf.conf --input-filter zfs --test
@ -257,77 +257,98 @@ $ ./telegraf --config telegraf.conf --input-filter zfs --test
> zfs,pools=zroot arcstats_allocated=4167764i,arcstats_anon_evictable_data=0i,arcstats_anon_evictable_metadata=0i,arcstats_anon_size=16896i,arcstats_arc_meta_limit=10485760i,arcstats_arc_meta_max=115269568i,arcstats_arc_meta_min=8388608i,arcstats_arc_meta_used=51977456i,arcstats_c=16777216i,arcstats_c_max=41943040i,arcstats_c_min=16777216i,arcstats_data_size=0i,arcstats_deleted=1699340i,arcstats_demand_data_hits=14836131i,arcstats_demand_data_misses=2842945i,arcstats_demand_hit_predictive_prefetch=0i,arcstats_demand_metadata_hits=1655006i,arcstats_demand_metadata_misses=830074i,arcstats_duplicate_buffers=0i,arcstats_duplicate_buffers_size=0i,arcstats_duplicate_reads=123i,arcstats_evict_l2_cached=0i,arcstats_evict_l2_eligible=332172623872i,arcstats_evict_l2_ineligible=6168576i,arcstats_evict_l2_skip=0i,arcstats_evict_not_enough=12189444i,arcstats_evict_skip=195190764i,arcstats_hash_chain_max=2i,arcstats_hash_chains=10i,arcstats_hash_collisions=43134i,arcstats_hash_elements=2268i,arcstats_hash_elements_max=6136i,arcstats_hdr_size=565632i,arcstats_hits=16515778i,arcstats_l2_abort_lowmem=0i,arcstats_l2_asize=0i,arcstats_l2_cdata_free_on_write=0i,arcstats_l2_cksum_bad=0i,arcstats_l2_compress_failures=0i,arcstats_l2_compress_successes=0i,arcstats_l2_compress_zeros=0i,arcstats_l2_evict_l1cached=0i,arcstats_l2_evict_lock_retry=0i,arcstats_l2_evict_reading=0i,arcstats_l2_feeds=0i,arcstats_l2_free_on_write=0i,arcstats_l2_hdr_size=0i,arcstats_l2_hits=0i,arcstats_l2_io_error=0i,arcstats_l2_misses=0i,arcstats_l2_read_bytes=0i,arcstats_l2_rw_clash=0i,arcstats_l2_size=0i,arcstats_l2_write_buffer_bytes_scanned=0i,arcstats_l2_write_buffer_iter=0i,arcstats_l2_write_buffer_list_iter=0i,arcstats_l2_write_buffer_list_null_iter=0i,arcstats_l2_write_bytes=0i,arcstats_l2_write_full=0i,arcstats_l2_write_in_l2=0i,arcstats_l2_write_io_in_progress=0i,arcstats_l2_write_not_cacheable=380i,arcstats_l2_write_passed_headroom=0i,arcstats_l2_write_pios=0i,arcstats_l2_write_spa_mismatch=0i,arcstats_l2_write_trylock_fail=0i,arcstats_l2_writes_done=0i,arcstats_l2_writes_error=0i,arcstats_l2_writes_lock_retry=0i,arcstats_l2_writes_sent=0i,arcstats_memory_throttle_count=0i,arcstats_metadata_size=17014784i,arcstats_mfu_evictable_data=0i,arcstats_mfu_evictable_metadata=16384i,arcstats_mfu_ghost_evictable_data=5723648i,arcstats_mfu_ghost_evictable_metadata=10709504i,arcstats_mfu_ghost_hits=1315619i,arcstats_mfu_ghost_size=16433152i,arcstats_mfu_hits=7646611i,arcstats_mfu_size=305152i,arcstats_misses=3676993i,arcstats_mru_evictable_data=0i,arcstats_mru_evictable_metadata=0i,arcstats_mru_ghost_evictable_data=0i,arcstats_mru_ghost_evictable_metadata=80896i,arcstats_mru_ghost_hits=324250i,arcstats_mru_ghost_size=80896i,arcstats_mru_hits=8844526i,arcstats_mru_size=16693248i,arcstats_mutex_miss=354023i,arcstats_other_size=34397040i,arcstats_p=4172800i,arcstats_prefetch_data_hits=0i,arcstats_prefetch_data_misses=0i,arcstats_prefetch_metadata_hits=24641i,arcstats_prefetch_metadata_misses=3974i,arcstats_size=51977456i,arcstats_sync_wait_for_async=0i,vdev_cache_stats_delegations=779i,vdev_cache_stats_hits=323123i,vdev_cache_stats_misses=59929i,zfetchstats_hits=0i,zfetchstats_max_streams=0i,zfetchstats_misses=0i 1464473103634124908
```
### Description
## Description
A short description for some of the metrics.
#### ARC Stats
### ARC Stats
`arcstats_hits` Total amount of cache hits in the arc.
`arcstats_misses` Total amount of cache misses in the arc.
`arcstats_demand_data_hits` Amount of cache hits for demand data, this is what matters (is good) for your application/share.
`arcstats_demand_data_hits` Amount of cache hits for demand data, this is what
matters (is good) for your application/share.
`arcstats_demand_data_misses` Amount of cache misses for demand data, this is what matters (is bad) for your application/share.
`arcstats_demand_data_misses` Amount of cache misses for demand data, this is
what matters (is bad) for your application/share.
`arcstats_demand_metadata_hits` Amount of cache hits for demand metadata, this matters (is good) for getting filesystem data (ls,find,…)
`arcstats_demand_metadata_hits` Amount of cache hits for demand metadata, this
matters (is good) for getting filesystem data (ls,find,…)
`arcstats_demand_metadata_misses` Amount of cache misses for demand metadata, this matters (is bad) for getting filesystem data (ls,find,…)
`arcstats_demand_metadata_misses` Amount of cache misses for demand metadata,
this matters (is bad) for getting filesystem data (ls,find,…)
`arcstats_prefetch_data_hits` The zfs prefetcher tried to prefetch something, but it was already cached (boring)
`arcstats_prefetch_data_hits` The zfs prefetcher tried to prefetch something,
but it was already cached (boring)
`arcstats_prefetch_data_misses` The zfs prefetcher prefetched something which was not in the cache (good job, could become a demand hit in the future)
`arcstats_prefetch_data_misses` The zfs prefetcher prefetched something which
was not in the cache (good job, could become a demand hit in the future)
`arcstats_prefetch_metadata_hits` Same as above, but for metadata
`arcstats_prefetch_metadata_misses` Same as above, but for metadata
`arcstats_mru_hits` Cache hit in the “most recently used cache”, we move this to the mfu cache.
`arcstats_mru_hits` Cache hit in the “most recently used cache”: we move this to
the mfu cache.
`arcstats_mru_ghost_hits` Cache hit in the “most recently used ghost list” we had this item in the cache, but evicted it, maybe we should increase the mru cache size.
`arcstats_mru_ghost_hits` Cache hit in the “most recently used ghost list”: we
had this item in the cache but evicted it; maybe we should increase the mru
cache size.
`arcstats_mfu_hits` Cache hit in the “most frequently used cache” we move this to the beginning of the mfu cache.
`arcstats_mfu_hits` Cache hit in the “most frequently used cache”: we move this
to the beginning of the mfu cache.
`arcstats_mfu_ghost_hits` Cache hit in the “most frequently used ghost list” we had this item in the cache, but evicted it, maybe we should increase the mfu cache size.
`arcstats_mfu_ghost_hits` Cache hit in the “most frequently used ghost list”: we
had this item in the cache but evicted it; maybe we should increase the mfu
cache size.
`arcstats_allocated` New data is written to the cache.
`arcstats_deleted` Old data is evicted (deleted) from the cache.
`arcstats_evict_l2_cached` We evicted something from the arc, but its still cached in the l2 if we need it.
`arcstats_evict_l2_cached` We evicted something from the arc, but it's still
cached in the l2 if we need it.
`arcstats_evict_l2_eligible` We evicted something from the arc, and its not in the l2 this is sad. (maybe we hadnt had enough time to store it there)
`arcstats_evict_l2_eligible` We evicted something from the arc, and it's not in
the l2; this is sad (maybe we hadn't had enough time to store it there).
`arcstats_evict_l2_ineligible` We evicted something which cannot be stored in the l2.
Reasons could be:
`arcstats_evict_l2_ineligible` We evicted something which cannot be stored in
the l2. Reasons could be:
- We have multiple pools, we evicted something from a pool without an l2 device.
- The zfs property secondary cache.
`arcstats_c` Arc target size, this is the size the system thinks the arc should have.
`arcstats_c` Arc target size, this is the size the system thinks the arc should
have.
`arcstats_size` Total size of the arc.
`arcstats_l2_hits` Hits to the L2 cache. (It was not in the arc, but in the l2 cache)
`arcstats_l2_hits` Hits to the L2 cache. (It was not in the arc, but in the l2
cache)
`arcstats_l2_misses` Miss to the L2 cache. (It was not in the arc, and not in the l2 cache)
`arcstats_l2_misses` Miss to the L2 cache. (It was not in the arc, and not in
the l2 cache)
`arcstats_l2_size` Size of the l2 cache.
`arcstats_l2_hdr_size` Size of the metadata in the arc (ram) used to manage (lookup if something is in the l2) the l2 cache.
`arcstats_l2_hdr_size` Size of the metadata in the arc (ram) used to manage
(lookup if something is in the l2) the l2 cache.
#### Zfetch Stats
### Zfetch Stats
`zfetchstats_hits` Counts the number of cache hits, to items which are in the cache because of the prefetcher.
`zfetchstats_hits` Counts the number of cache hits to items which are in the
cache because of the prefetcher.
`zfetchstats_misses` Counts the number of prefetch cache misses.
`zfetchstats_colinear_hits` Counts the number of cache hits, to items which are in the cache because of the prefetcher (prefetched linear reads)
`zfetchstats_colinear_hits` Counts the number of cache hits to items which are
in the cache because of the prefetcher (prefetched linear reads)
`zfetchstats_stride_hits` Counts the number of cache hits, to items which are in the cache because of the prefetcher (prefetched stride reads)
`zfetchstats_stride_hits` Counts the number of cache hits to items which are in
the cache because of the prefetcher (prefetched stride reads)
#### Vdev Cache Stats (FreeBSD only)
### Vdev Cache Stats (FreeBSD only)
note: the vdev cache is deprecated in some ZFS implementations
@ -335,7 +356,7 @@ note: the vdev cache is deprecated in some ZFS implementations
`vdev_cache_stats_misses` Misses to the vdev (device level) cache.
#### ABD Stats (Linux Only)
### ABD Stats (Linux Only)
ABD is a linear/scatter dual typed buffer for ARC
@ -347,19 +368,22 @@ ABD is a linear/scatter dual typed buffer for ARC
`abdstats_scatter_data_size` amount of data stored in all scatter ABDs
#### DMU Stats (Linux Only)
### DMU Stats (Linux Only)
`dmu_tx_dirty_throttle` counts when writes are throttled due to the amount of dirty data growing too large
`dmu_tx_dirty_throttle` counts when writes are throttled due to the amount of
dirty data growing too large
`dmu_tx_memory_reclaim` counts when memory is low and throttling activity
`dmu_tx_memory_reserve` counts when memory footprint of the txg exceeds the ARC size
`dmu_tx_memory_reserve` counts when memory footprint of the txg exceeds the ARC
size
#### Fault Management Ereport errors (Linux Only)
### Fault Management Ereport errors (Linux Only)
`fm_erpt-dropped` counts when an error report cannot be created (eg available memory is too low)
`fm_erpt-dropped` counts when an error report cannot be created (e.g. available
memory is too low)
#### ZIL (Linux Only)
### ZIL (Linux Only)
note: ZIL measurements are system-wide, neither per-pool nor per-dataset

View File

@ -1,9 +1,11 @@
# Zipkin Input Plugin
This plugin implements the Zipkin http server to gather trace and timing data needed to troubleshoot latency problems in microservice architectures.
This plugin implements the Zipkin HTTP server to gather trace and timing data
needed to troubleshoot latency problems in microservice architectures.
__Please Note:__ This plugin is experimental; Its data schema may be subject to change
based on its main usage cases and the evolution of the OpenTracing standard.
__Please Note:__ This plugin is experimental; its data schema may be subject to
change based on its main use cases and the evolution of the OpenTracing
standard.
## Configuration
@ -14,8 +16,9 @@ based on its main usage cases and the evolution of the OpenTracing standard.
# port = 9411 # Port on which Telegraf listens
```
The plugin accepts spans in `JSON` or `thrift` if the `Content-Type` is `application/json` or `application/x-thrift`, respectively.
If `Content-Type` is not set, then the plugin assumes it is `JSON` format.
The plugin accepts spans in `JSON` or `thrift` if the `Content-Type` is
`application/json` or `application/x-thrift`, respectively. If `Content-Type`
is not set, then the plugin assumes it is `JSON` format.
## Tracing
@ -38,6 +41,10 @@ Traces are built by collecting all Spans that share a traceId.
- __CR (client receive):__ end of span, client receives response from server
RPC is considered complete with this annotation
## Metrics
- __"duration_ns":__ The time in nanoseconds between the end and beginning of a span.
### Tags
- __"id":__ The 64 bit ID of the span.
@ -58,10 +65,6 @@ Traces are built by collecting all Spans that share a traceId.
- __"endpoint_host":__ Listening port concat with IPV4, if port is not present it will not be concatenated
- __"annotation_key":__ label describing the annotation
## Fields
- __"duration_ns":__ The time in nanoseconds between the end and beginning of a span.
## Sample Queries
__Get All Span Names for Service__ `my_web_server`
@ -90,7 +93,10 @@ SELECT max("duration_ns") FROM "zipkin" WHERE "service_name" = 'my_service' AND
### Recommended InfluxDB setup
This test will create high cardinality data so we recommend using the [tsi influxDB engine](https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/).
This test will create high cardinality data so we recommend using the [tsi
influxDB engine][1].
[1]: https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/
#### How To Set Up InfluxDB For Work With Zipkin