chore: clean up all markdown lint errors for input plugins i through m (#10173)

Mya 2021-11-24 12:18:53 -07:00 committed by GitHub
parent de6c2f74d6
commit 84562877cc
GPG Key ID: 4AEE18F83AFDEB23
47 changed files with 1171 additions and 1029 deletions


@ -6,7 +6,7 @@ The icinga2 plugin uses the icinga2 remote API to gather status on running
services and hosts. You can read Icinga2's documentation for their remote API
[here](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api)
### Configuration:
## Configuration
```toml
# Description
@ -32,24 +32,24 @@ services and hosts. You can read Icinga2's documentation for their remote API
# insecure_skip_verify = true
```
### Measurements & Fields:
## Measurements & Fields
- All measurements have the following fields:
- name (string)
- state_code (int)
- name (string)
- state_code (int)
### Tags:
## Tags
- All measurements have the following tags:
- check_command - The short name of the check command
- display_name - The name of the service or host
- state - The state: UP/DOWN for hosts, OK/WARNING/CRITICAL/UNKNOWN for services
- source - The icinga2 host
- port - The icinga2 port
- scheme - The icinga2 protocol (http/https)
- server - The server the check_command is running for
- check_command - The short name of the check command
- display_name - The name of the service or host
- state - The state: UP/DOWN for hosts, OK/WARNING/CRITICAL/UNKNOWN for services
- source - The icinga2 host
- port - The icinga2 port
- scheme - The icinga2 protocol (http/https)
- server - The server the check_command is running for
### Sample Queries:
## Sample Queries
```sql
SELECT * FROM "icinga2_services" WHERE state_code = 0 AND time > now() - 24h // Service with OK status
@ -58,9 +58,9 @@ SELECT * FROM "icinga2_services" WHERE state_code = 2 AND time > now() - 24h //
SELECT * FROM "icinga2_services" WHERE state_code = 3 AND time > now() - 24h // Service with UNKNOWN status
```
### Example Output:
## Example Output
```
```text
$ ./telegraf -config telegraf.conf -input-filter icinga2 -test
icinga2_hosts,display_name=router-fr.eqx.fr,check_command=hostalive-custom,host=test-vm,source=localhost,port=5665,scheme=https,state=ok name="router-fr.eqx.fr",state=0 1492021603000000000
```
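The `state_code` values used in the sample queries above follow the standard Icinga2 service states. A minimal shell sketch of the mapping (the `state_code` helper is illustrative, not part of the plugin):

```shell
# Map an Icinga2 service state name to the numeric state_code stored by the
# plugin: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN (illustrative helper).
state_code() {
  case "$1" in
    OK)       echo 0 ;;
    WARNING)  echo 1 ;;
    CRITICAL) echo 2 ;;
    UNKNOWN)  echo 3 ;;
    *)        echo -1 ;;
  esac
}

state_code CRITICAL   # prints 2
```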


@ -6,14 +6,14 @@ system. These are the counters that can be found in
**Supported Platforms**: Linux
### Configuration
## Configuration
```toml
[[inputs.infiniband]]
# no configuration
```
### Metrics
## Metrics
Actual metrics depend on the InfiniBand devices; the plugin uses a simple
mapping from counter -> counter value.
@ -49,10 +49,8 @@ mapping from counter -> counter value.
- unicast_xmit_packets (integer)
- VL15_dropped (integer)
## Example Output
### Example Output
```
```shell
infiniband,device=mlx5_0,port=1 VL15_dropped=0i,excessive_buffer_overrun_errors=0i,link_downed=0i,link_error_recovery=0i,local_link_integrity_errors=0i,multicast_rcv_packets=0i,multicast_xmit_packets=0i,port_rcv_constraint_errors=0i,port_rcv_data=237159415345822i,port_rcv_errors=0i,port_rcv_packets=801977655075i,port_rcv_remote_physical_errors=0i,port_rcv_switch_relay_errors=0i,port_xmit_constraint_errors=0i,port_xmit_data=238334949937759i,port_xmit_discards=0i,port_xmit_packets=803162651391i,port_xmit_wait=4294967295i,symbol_error=0i,unicast_rcv_packets=801977655075i,unicast_xmit_packets=803162651391i 1573125558000000000
```
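The counter -> value mapping can be pictured with a small shell sketch. The counter-file layout below is simulated in a temporary directory (the real per-port sysfs paths depend on the system and are not read here):

```shell
# Simulate a counters directory and emit one integer field per counter file,
# the way the plugin maps counter name -> counter value.
dir=$(mktemp -d)
echo 0  > "$dir/link_downed"
echo 42 > "$dir/port_xmit_wait"

fields=""
for f in "$dir"/*; do
  fields="$fields$(basename "$f")=$(cat "$f")i,"
done
echo "infiniband ${fields%,}"   # infiniband link_downed=0i,port_xmit_wait=42i
```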


@ -1,13 +1,13 @@
# InfluxDB Input Plugin
The InfluxDB plugin will collect metrics on the given InfluxDB servers. Read our
[documentation](https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/)
for detailed information about `influxdb` metrics.
The InfluxDB plugin will collect metrics on the given InfluxDB servers. Read our
[documentation](https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/)
for detailed information about `influxdb` metrics.
This plugin can also gather metrics from endpoints that expose
InfluxDB-formatted endpoints. See below for more information.
### Configuration:
## Configuration
```toml
# Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints
@ -37,7 +37,7 @@ InfluxDB-formatted endpoints. See below for more information.
timeout = "5s"
```
### Measurements & Fields
## Measurements & Fields
**Note:** The measurements and fields included in this plugin are dynamically built from the InfluxDB source, and may vary between versions:
@ -80,7 +80,7 @@ InfluxDB-formatted endpoints. See below for more information.
- **influxdb_hh_processor** _(Enterprise Only)_: Statistics stored for a single queue (shard).
- **bytesRead**: Size, in bytes, of points read from the hinted handoff queue and sent to its destination data node.
- **bytesWritten**: Total number of bytes written to the hinted handoff queue.
- **queueBytes**: Total number of bytes remaining in the hinted handoff queue.
- **queueBytes**: Total number of bytes remaining in the hinted handoff queue.
- **queueDepth**: Total number of segments in the hinted handoff queue. The HH queue is a sequence of 10MB “segment” files.
- **writeBlocked**: Number of writes blocked because the number of concurrent HH requests exceeds the limit.
- **writeDropped**: Number of writes dropped from the HH queue because the write appeared to be corrupted.
@ -125,7 +125,7 @@ InfluxDB-formatted endpoints. See below for more information.
- **HeapInuse**: Number of bytes in in-use spans.
- **HeapObjects**: Number of allocated heap objects.
- **HeapReleased**: Number of bytes of physical memory returned to the OS.
- **HeapSys**: Number of bytes of heap memory obtained from the OS.
- **HeapSys**: Number of bytes of heap memory obtained from the OS.
- **LastGC**: Time the last garbage collection finished.
- **Lookups**: Number of pointer lookups performed by the runtime.
- **MCacheInuse**: Number of bytes of allocated mcache structures.
@ -258,9 +258,9 @@ InfluxDB-formatted endpoints. See below for more information.
- **writePartial** _(Enterprise only)_: Total number of batches written to at least one node, but did not meet the requested consistency level.
- **writeTimeout**: Total number of write requests that failed to complete within the default write timeout duration.
### Example Output:
## Example Output
```
```sh
telegraf --config ~/ws/telegraf.conf --input-filter influxdb --test
* Plugin: influxdb, Collection 1
> influxdb_database,database=_internal,host=tyrion,url=http://localhost:8086/debug/vars numMeasurements=10,numSeries=29 1463590500247354636
@ -292,7 +292,7 @@ telegraf --config ~/ws/telegraf.conf --input-filter influxdb --test
> influxdb_shard,host=tyrion n_shards=4i 1463590500247354636
```
### InfluxDB-formatted endpoints
## InfluxDB-formatted endpoints
The influxdb plugin can collect InfluxDB-formatted data from JSON endpoints,
whether or not they are associated with an InfluxDB database.


@ -18,7 +18,7 @@ receive a 200 OK response with message body `{"results":[]}` but they are not
relayed. The output configuration of the Telegraf instance which ultimately
submits data to InfluxDB determines the destination database.
### Configuration:
## Configuration
```toml
[[inputs.influxdb_listener]]
@ -64,14 +64,15 @@ submits data to InfluxDB determines the destination database.
# basic_password = "barfoo"
```
### Metrics:
## Metrics
Metrics are created from InfluxDB Line Protocol in the request body.
### Troubleshooting:
## Troubleshooting
**Example Query:**
```
```sh
curl -i -XPOST 'http://localhost:8186/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
```
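The request body accepted by this listener is plain InfluxDB Line Protocol. A quick illustration of its three space-separated parts (measurement+tags, fields, timestamp), using `awk` purely for demonstration:

```shell
# Split one line-protocol record into its three parts (illustrative only).
lp='cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
parsed=$(echo "$lp" | awk '{
  split($1, m, ",")   # measurement is the first comma-separated token
  printf "measurement=%s fields=%s timestamp=%s", m[1], $2, $3
}')
echo "$parsed"
```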


@ -11,7 +11,7 @@ defer to the output plugins configuration.
Telegraf minimum version: Telegraf 1.16.0
### Configuration:
## Configuration
```toml
[[inputs.influxdb_v2_listener]]
@ -42,14 +42,15 @@ Telegraf minimum version: Telegraf 1.16.0
# token = "some-long-shared-secret-token"
```
### Metrics:
## Metrics
Metrics are created from InfluxDB Line Protocol in the request body.
### Troubleshooting:
## Troubleshooting
**Example Query:**
```
```sh
curl -i -XPOST 'http://localhost:8186/api/v2/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
```


@ -1,11 +1,13 @@
# Intel PowerStat Input Plugin
This input plugin monitors power statistics on Intel-based platforms and assumes the presence of a Linux-based OS.
Its main use cases are power saving and workload migration. Telemetry frameworks allow users to monitor critical platform-level metrics.
A key source of platform telemetry is the power domain, which is beneficial for MANO/Monitoring&Analytics systems
taking preventive/corrective actions based on platform busyness, CPU temperature, actual CPU utilization, and power statistics.
This input plugin monitors power statistics on Intel-based platforms and assumes the presence of a Linux-based OS.
Its main use cases are power saving and workload migration. Telemetry frameworks allow users to monitor critical platform-level metrics.
A key source of platform telemetry is the power domain, which is beneficial for MANO/Monitoring&Analytics systems
taking preventive/corrective actions based on platform busyness, CPU temperature, actual CPU utilization, and power statistics.
## Configuration
### Configuration:
```toml
# Intel PowerStat plugin enables monitoring of platform metrics (power, TDP) and per-CPU metrics like temperature, power and utilization.
[[inputs.intel_powerstat]]
@ -17,52 +19,65 @@ to take preventive/corrective actions based on platform busyness, CPU temperatur
## "cpu_frequency", "cpu_busy_frequency", "cpu_temperature", "cpu_c1_state_residency", "cpu_c6_state_residency", "cpu_busy_cycles"
# cpu_metrics = []
```
### Example: Configuration with no per-CPU telemetry
## Example: Configuration with no per-CPU telemetry
This configuration allows getting global metrics (processor package specific), no per-CPU metrics are collected:
```toml
[[inputs.intel_powerstat]]
cpu_metrics = []
```
### Example: Configuration with no per-CPU telemetry - equivalent case
## Example: Configuration with no per-CPU telemetry - equivalent case
This configuration allows getting global metrics (processor package specific), no per-CPU metrics are collected:
```toml
[[inputs.intel_powerstat]]
```
### Example: Configuration for CPU Temperature and Frequency only
## Example: Configuration for CPU Temperature and Frequency only
This configuration allows getting global metrics plus subset of per-CPU metrics (CPU Temperature and Current Frequency):
```toml
[[inputs.intel_powerstat]]
cpu_metrics = ["cpu_frequency", "cpu_temperature"]
```
### Example: Configuration with all available metrics
## Example: Configuration with all available metrics
This configuration allows getting global metrics and all per-CPU metrics:
```toml
[[inputs.intel_powerstat]]
cpu_metrics = ["cpu_frequency", "cpu_busy_frequency", "cpu_temperature", "cpu_c1_state_residency", "cpu_c6_state_residency", "cpu_busy_cycles"]
```
### SW Dependencies:
## SW Dependencies
The plugin is based on Linux kernel modules that expose specific metrics over `sysfs` or `devfs` interfaces.
The following dependencies are expected by the plugin:
- _intel-rapl_ module which exposes Intel Runtime Power Limiting metrics over `sysfs` (`/sys/devices/virtual/powercap/intel-rapl`),
- _msr_ kernel module that provides access to processor model specific registers over `devfs` (`/dev/cpu/cpu%d/msr`),
- _cpufreq_ kernel module - which exposes per-CPU Frequency over `sysfs` (`/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq`).
- _cpufreq_ kernel module - which exposes per-CPU Frequency over `sysfs` (`/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq`).
Minimum kernel version required is 3.13 to satisfy all requirements.
Please make sure that kernel modules are loaded and running. You might have to manually enable them by using `modprobe`.
Exact commands to be executed are:
```
```sh
sudo modprobe cpufreq-stats
sudo modprobe msr
sudo modprobe intel_rapl
```
**Telegraf with Intel PowerStat plugin enabled may require root access to read model specific registers (MSRs)**
**Telegraf with Intel PowerStat plugin enabled may require root access to read model specific registers (MSRs)**
to retrieve data for calculation of most critical per-CPU specific metrics:
- `cpu_busy_frequency_mhz`
- `cpu_temperature_celsius`
- `cpu_c1_state_residency_percent`
@ -71,23 +86,25 @@ to retrieve data for calculation of most critical per-CPU specific metrics:
To expose other Intel PowerStat metrics root access may or may not be required (depending on OS type or configuration).
### HW Dependencies:
Specific metrics require certain processor features to be present, otherwise the Intel PowerStat plugin won't be able to
read them. On a Linux-kernel-based OS, supported processor features can be detected by reading the `/proc/cpuinfo` file.
## HW Dependencies
Specific metrics require certain processor features to be present, otherwise the Intel PowerStat plugin won't be able to
read them. On a Linux-kernel-based OS, supported processor features can be detected by reading the `/proc/cpuinfo` file.
The plugin assumes crucial properties are the same for all CPU cores in the system.
The following processor properties are examined in more detail in this section:
processor _cpu family_, _model_ and _flags_.
The following processor properties are required by the plugin:
- Processor _cpu family_ must be Intel (0x6) - since data used by the plugin assumes Intel specific
- Processor _cpu family_ must be Intel (0x6) - since data used by the plugin assumes Intel specific
model specific registers for all features
- The following processor flags shall be present:
- "_msr_" shall be present for plugin to read platform data from processor model specific registers and collect
the following metrics: _powerstat_core.cpu_temperature_, _powerstat_core.cpu_busy_frequency_,
- "_msr_" shall be present for plugin to read platform data from processor model specific registers and collect
the following metrics: _powerstat_core.cpu_temperature_, _powerstat_core.cpu_busy_frequency_,
_powerstat_core.cpu_busy_cycles_, _powerstat_core.cpu_c1_state_residency_, _powerstat_core._cpu_c6_state_residency_
- "_aperfmperf_" shall be present to collect the following metrics: _powerstat_core.cpu_busy_frequency_,
- "_aperfmperf_" shall be present to collect the following metrics: _powerstat_core.cpu_busy_frequency_,
_powerstat_core.cpu_busy_cycles_, _powerstat_core.cpu_c1_state_residency_
- "_dts_" shall be present to collect _powerstat_core.cpu_temperature_
- Processor _Model number_ must be one of the following values for plugin to read _powerstat_core.cpu_c1_state_residency_
- "_dts_" shall be present to collect _powerstat_core.cpu_temperature_
- Processor _Model number_ must be one of the following values for plugin to read _powerstat_core.cpu_c1_state_residency_
and _powerstat_core.cpu_c6_state_residency_ metrics:
| Model number | Processor name |
@ -95,12 +112,12 @@ and _powerstat_core.cpu_c6_state_residency_ metrics:
| 0x37 | Intel Atom® Bay Trail |
| 0x4D | Intel Atom® Avaton |
| 0x5C | Intel Atom® Apollo Lake |
| 0x5F | Intel Atom® Denverton |
| 0x5F | Intel Atom® Denverton |
| 0x7A | Intel Atom® Goldmont |
| 0x4C | Intel Atom® Airmont |
| 0x86 | Intel Atom® Jacobsville |
| 0x96 | Intel Atom® Elkhart Lake |
| 0x9C | Intel Atom® Jasper Lake |
| 0x96 | Intel Atom® Elkhart Lake |
| 0x9C | Intel Atom® Jasper Lake |
| 0x1A | Intel Nehalem-EP |
| 0x1E | Intel Nehalem |
| 0x1F | Intel Nehalem-G |
@ -138,27 +155,32 @@ and _powerstat_core.cpu_c6_state_residency_ metrics:
| 0x8F | Intel Sapphire Rapids X |
| 0x8C | Intel TigerLake-L |
| 0x8D | Intel TigerLake |
### Metrics
## Metrics
All metrics collected by the Intel PowerStat plugin are collected at fixed intervals.
Metrics that report processor C-state residency or power are calculated over elapsed intervals.
When measurement starts, the plugin skips the first iteration of any metric that is based on a delta from a previous value.
**The following measurements are supported by Intel PowerStat plugin:**
- powerstat_core
- The following Tags are returned by plugin with powerstat_core measurements:
- The following Tags are returned by plugin with powerstat_core measurements:
```text
| Tag | Description |
|-----|-------------|
| `package_id` | ID of platform package/socket |
| `core_id` | ID of physical processor core |
| `core_id` | ID of physical processor core |
| `cpu_id` | ID of logical processor core |
Measurement powerstat_core metrics are collected per-CPU (cpu_id is the key)
Measurement powerstat_core metrics are collected per-CPU (cpu_id is the key)
while core_id and package_id tags are additional topology information.
```
- Available metrics for powerstat_core measurement
- Available metrics for powerstat_core measurement
```text
| Metric name (field) | Description | Units |
|-----|-------------|-----|
| `cpu_frequency_mhz` | Current operational frequency of CPU Core | MHz |
@ -167,31 +189,33 @@ When starting to measure metrics, plugin skips first iteration of metrics if the
| `cpu_c1_state_residency_percent` | Percentage of time that CPU Core spent in C1 Core residency state | % |
| `cpu_c6_state_residency_percent` | Percentage of time that CPU Core spent in C6 Core residency state | % |
| `cpu_busy_cycles_percent` | CPU Core Busy cycles as a ratio of Cycles spent in C0 state residency to all cycles executed by CPU Core | % |
```
- powerstat_package
- The following Tags are returned by plugin with powerstat_package measurements:
- The following Tags are returned by plugin with powerstat_package measurements:
```text
| Tag | Description |
|-----|-------------|
| `package_id` | ID of platform package/socket |
Measurement powerstat_package metrics are collected per processor package - _package_id_ tag indicates which
Measurement powerstat_package metrics are collected per processor package - _package_id_ tag indicates which
package metric refers to.
```
- Available metrics for powerstat_package measurement
- Available metrics for powerstat_package measurement
```text
| Metric name (field) | Description | Units |
|-----|-------------|-----|
| `thermal_design_power_watts` | Maximum Thermal Design Power (TDP) available for processor package | Watts |
| `thermal_design_power_watts` | Maximum Thermal Design Power (TDP) available for processor package | Watts |
| `current_power_consumption_watts` | Current power consumption of processor package | Watts |
| `current_dram_power_consumption_watts` | Current power consumption of processor package DRAM subsystem | Watts |
```
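As a rough illustration of why the _aperfmperf_ flag listed under HW Dependencies matters: busy frequency is conventionally derived from the APERF/MPERF MSR deltas over an interval as `busy_mhz = base_mhz * dAPERF / dMPERF`. The numbers below are made-up sample deltas; this is a conceptual sketch, not the plugin's actual code:

```shell
# Made-up sample deltas over one collection interval (illustrative only).
base_mhz=2400
d_aperf=1800000   # assumed APERF delta
d_mperf=2000000   # assumed MPERF delta

busy=$(awk -v b="$base_mhz" -v a="$d_aperf" -v m="$d_mperf" \
  'BEGIN { printf "cpu_busy_frequency_mhz=%.0f", b * a / m }')
echo "$busy"   # cpu_busy_frequency_mhz=2160
```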
### Example Output
### Example Output:
```
```shell
powerstat_package,host=ubuntu,package_id=0 thermal_design_power_watts=160 1606494744000000000
powerstat_package,host=ubuntu,package_id=0 current_power_consumption_watts=35 1606494744000000000
powerstat_package,host=ubuntu,package_id=0 current_dram_power_consumption_watts=13.94 1606494744000000000


@ -1,22 +1,26 @@
# Intel RDT Input Plugin
The `intel_rdt` plugin collects information provided by monitoring features of
the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the hardware framework to monitor
and control the utilization of shared resources (ex: last level cache, memory bandwidth).
### About Intel RDT
The `intel_rdt` plugin collects information provided by monitoring features of
the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the hardware framework to monitor
and control the utilization of shared resources (ex: last level cache, memory bandwidth).
## About Intel RDT
Intel's Resource Director Technology (RDT) framework consists of:
- Cache Monitoring Technology (CMT)
- Cache Monitoring Technology (CMT)
- Memory Bandwidth Monitoring (MBM)
- Cache Allocation Technology (CAT)
- Code and Data Prioritization (CDP)
- Cache Allocation Technology (CAT)
- Code and Data Prioritization (CDP)
As multithreaded and multicore platform architectures emerge, the last level cache and
memory bandwidth are key resources to manage for running workloads in single-threaded,
multithreaded, or complex virtual machine environments. Intel introduces CMT, MBM, CAT
and CDP to manage these workloads across shared resources.
As multithreaded and multicore platform architectures emerge, the last level cache and
memory bandwidth are key resources to manage for running workloads in single-threaded,
multithreaded, or complex virtual machine environments. Intel introduces CMT, MBM, CAT
and CDP to manage these workloads across shared resources.
### Prerequisites - PQoS Tool
To gather Intel RDT metrics, the `intel_rdt` plugin uses the _pqos_ cli tool, which is a
## Prerequisites - PQoS Tool
To gather Intel RDT metrics, the `intel_rdt` plugin uses the _pqos_ cli tool, which is a
part of [Intel(R) RDT Software Package](https://github.com/intel/intel-cmt-cat).
Before using this plugin, make sure _pqos_ is properly installed and configured, since the plugin
runs _pqos_ in `OS Interface` mode. This plugin supports _pqos_ version 4.0.0 and above.
@ -24,7 +28,7 @@ Note: pqos tool needs root privileges to work properly.
Metrics will be constantly reported from the following `pqos` commands within the given interval:
#### If telegraf does not run as the root user
### If telegraf does not run as the root user
The `pqos` binary needs to run as root. If telegraf is running as a non-root user, you may enable sudo
to allow `pqos` to run correctly.
@ -40,40 +44,46 @@ Alternately, you may enable sudo to allow `pqos` to run correctly, as follows:
Add the following to your sudoers file (assumes telegraf runs as a user named `telegraf`):
```
```sh
telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
```
If you wish to use sudo, you must also add `use_sudo = true` to the Telegraf
configuration (see below).
#### In case of cores monitoring:
```
### In case of cores monitoring
```sh
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
```
where `CORES` is the group of cores provided in the config. The user can provide multiple groups.
#### In case of process monitoring:
```
### In case of process monitoring
```sh
pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
```
where `PIDS` is the group of process IDs whose names match a process name provided in the config.
The user can provide multiple process names, which results in multiple process groups.
In both cases `INTERVAL` is equal to the `sampling_interval` from the config.
Because PID associations within the system can change at any moment, the Intel RDT plugin provides
Because PID associations within the system can change at any moment, the Intel RDT plugin provides
functionality to check on every interval whether the monitored processes have changed their PID associations.
If a change is detected, the plugin restarts the _pqos_ tool with new arguments. If a process name provided by the user
does not match any running process, it is omitted and the plugin keeps
checking for process availability.
### Useful links
Pqos installation process: https://github.com/intel/intel-cmt-cat/blob/master/INSTALL
Enabling OS interface: https://github.com/intel/intel-cmt-cat/wiki, https://github.com/intel/intel-cmt-cat/wiki/resctrl
More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html
## Useful links
Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>
## Configuration
### Configuration
```toml
# Read Intel RDT metrics
[[inputs.intel_rdt]]
@ -81,7 +91,7 @@ More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-t
## This value is propagated to pqos tool. Interval format is defined by pqos itself.
## If not provided or provided 0, will be set to 10 = 10x100ms = 1s.
# sampling_interval = "10"
## Optionally specify the path to pqos executable.
## If not provided, auto discovery will be performed.
# pqos_path = "/usr/local/bin/pqos"
@ -105,7 +115,8 @@ More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-t
# use_sudo = false
```
### Exposed metrics
## Exposed metrics
| Name | Full name | Description |
|---------------|-----------------------------------------------|-------------|
| MBL | Memory Bandwidth on Local NUMA Node | Memory bandwidth utilization by the relevant CPU core/process on the local NUMA memory channel |
@ -117,7 +128,8 @@ More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-t
*optional
### Troubleshooting
## Troubleshooting
Pointing to non-existent cores causes _pqos_ to throw an error, and the plugin will not work properly.
Make sure the provided core numbers exist on the target system.
@ -126,13 +138,16 @@ Do not use any other _pqos_ instance that is monitoring the same cores or PIDs w
It is not possible to monitor same cores or PIDs on different groups.
PIDs associated with a given process can be checked manually with the `pidof` command. E.g.:
```
```sh
pidof PROCESS
```
where `PROCESS` is the process name.
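The per-interval PID-refresh behaviour described above can be sketched as a simple comparison of PID sets. The values below are made-up; the real plugin does this internally and rebuilds the `--mon-pid` argument:

```shell
# Illustrative PID-set comparison (values are made up).
prev_pids="1234 5678"
curr_pids="1234 9999"   # e.g. one process restarted and got a new PID

if [ "$curr_pids" != "$prev_pids" ]; then
  echo "PID set changed: restarting pqos with --mon-pid=all:[$curr_pids]"
fi
```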
### Example Output
```
## Example Output
```shell
> rdt_metric,cores=12\,19,host=r2-compute-20,name=IPC,process=top value=0 1598962030000000000
> rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC_Misses,process=top value=0 1598962030000000000
> rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC,process=top value=0 1598962030000000000


@ -5,7 +5,7 @@ The `internal` plugin collects metrics about the telegraf agent itself.
Note that some metrics are aggregates across all instances of one type of
plugin.
### Configuration:
## Configuration
```toml
# Collect statistics about itself
@ -14,71 +14,69 @@ plugin.
# collect_memstats = true
```
### Measurements & Fields:
## Measurements & Fields
memstats are taken from the Go runtime: https://golang.org/pkg/runtime/#MemStats
memstats are taken from the Go runtime: <https://golang.org/pkg/runtime/#MemStats>
- internal_memstats
- alloc_bytes
- frees
- heap_alloc_bytes
- heap_idle_bytes
- heap_in_use_bytes
- heap_objects_bytes
- heap_released_bytes
- heap_sys_bytes
- mallocs
- num_gc
- pointer_lookups
- sys_bytes
- total_alloc_bytes
- alloc_bytes
- frees
- heap_alloc_bytes
- heap_idle_bytes
- heap_in_use_bytes
- heap_objects_bytes
- heap_released_bytes
- heap_sys_bytes
- mallocs
- num_gc
- pointer_lookups
- sys_bytes
- total_alloc_bytes
agent stats collect aggregate stats on all telegraf plugins.
- internal_agent
- gather_errors
- metrics_dropped
- metrics_gathered
- metrics_written
- gather_errors
- metrics_dropped
- metrics_gathered
- metrics_written
internal_gather stats collect aggregate stats on all input plugins
that are of the same input type. They are tagged with `input=<plugin_name>`
`version=<telegraf_version>` and `go_version=<go_build_version>`.
- internal_gather
- gather_time_ns
- metrics_gathered
- gather_time_ns
- metrics_gathered
internal_write stats collect aggregate stats on all output plugins
that are of the same output type. They are tagged with `output=<plugin_name>`
and `version=<telegraf_version>`.
- internal_write
- buffer_limit
- buffer_size
- metrics_added
- metrics_written
- metrics_dropped
- metrics_filtered
- write_time_ns
- buffer_limit
- buffer_size
- metrics_added
- metrics_written
- metrics_dropped
- metrics_filtered
- write_time_ns
internal_<plugin_name> are metrics which are defined on a per-plugin basis, and
usually contain tags which differentiate each instance of a particular type of
plugin and `version=<telegraf_version>`.
- internal_<plugin_name>
- individual plugin-specific fields, such as request counts.
- individual plugin-specific fields, such as request counts.
### Tags:
## Tags
All measurements for specific plugins are tagged with information relevant
to each particular plugin and with `version=<telegraf_version>`.
## Example Output
### Example Output:
```
```shell
internal_memstats,host=tyrion alloc_bytes=4457408i,sys_bytes=10590456i,pointer_lookups=7i,mallocs=17642i,frees=7473i,heap_sys_bytes=6848512i,heap_idle_bytes=1368064i,heap_in_use_bytes=5480448i,heap_released_bytes=0i,total_alloc_bytes=6875560i,heap_alloc_bytes=4457408i,heap_objects_bytes=10169i,num_gc=2i 1480682800000000000
internal_agent,host=tyrion,go_version=1.12.7,version=1.99.0 metrics_written=18i,metrics_dropped=0i,metrics_gathered=19i,gather_errors=0i 1480682800000000000
internal_write,output=file,host=tyrion,version=1.99.0 buffer_limit=10000i,write_time_ns=636609i,metrics_added=18i,metrics_written=18i,buffer_size=0i 1480682800000000000


@ -16,7 +16,6 @@ The `Internet Speed Monitor` collects data about the internet speed on the syste
It collects latency, download speed, and upload speed.
| Name | filed name | type | Unit |
| -------------- | ---------- | ------- | ---- |
| Download Speed | download | float64 | Mbps |
@ -27,4 +26,4 @@ It collects latency, download speed and upload speed
```sh
internet_speed,host=Sanyam-Ubuntu download=41.791,latency=28.518,upload=59.798 1631031183000000000
```
```


@ -2,7 +2,8 @@
The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/proc/softirqs`.
### Configuration
## Configuration
```toml
[[inputs.interrupts]]
## When set to true, cpu metrics are tagged with the cpu. Otherwise cpu is
@ -18,7 +19,7 @@ The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/p
# irq = [ "NET_RX", "TASKLET" ]
```
### Metrics
## Metrics
There are two styles depending on the value of `cpu_as_tag`.
@ -64,10 +65,11 @@ With `cpu_as_tag = true`:
- fields:
- count (int, number of interrupts)
### Example Output
## Example Output
With `cpu_as_tag = false`:
```
```shell
interrupts,irq=0,type=IO-APIC,device=2-edge\ timer,cpu=cpu0 count=23i 1489346531000000000
interrupts,irq=1,type=IO-APIC,device=1-edge\ i8042,cpu=cpu0 count=9i 1489346531000000000
interrupts,irq=30,type=PCI-MSI,device=65537-edge\ virtio1-input.0,cpu=cpu1 count=1i 1489346531000000000
@ -75,7 +77,8 @@ soft_interrupts,irq=NET_RX,cpu=cpu0 count=280879i 1489346531000000000
```
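With `cpu_as_tag = false` the per-CPU columns of a `/proc/interrupts` row are summed into one count per interrupt. An illustrative parse of a sample row (the row below mimics the kernel's format with two CPU columns):

```shell
# Sum the per-CPU columns of one /proc/interrupts-style row (illustrative).
row=' 0:         23          0   IO-APIC   2-edge      timer'
metric=$(echo "$row" | awk '{
  irq = substr($1, 1, length($1) - 1)   # strip the trailing ":"
  printf "interrupts,irq=%s count=%di", irq, $2 + $3
}')
echo "$metric"   # interrupts,irq=0 count=23i
```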
With `cpu_as_tag = true`:
```
```shell
interrupts,cpu=cpu6,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
interrupts,cpu=cpu7,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
soft_interrupts,cpu=cpu0,irq=HI count=246441i 1543539773000000000


@ -5,26 +5,29 @@ Get bare metal metrics using the command line utility
If no servers are specified, the plugin will query the local machine sensor stats via the following command:
```
```sh
ipmitool sdr
```
or with the version 2 schema:
```
```sh
ipmitool sdr elist
```
When one or more servers are specified, the plugin will use the following command to collect remote host sensor stats:
```
```sh
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
```
Any of the following parameters will be added to the aforementioned query if they're configured:
```
```sh
-y hex_key -L privilege
```
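For a sense of how a sensor row becomes a metric: the plugin lowercases the sensor name and replaces spaces with underscores. The row below mimics `ipmitool sdr` output (format assumed here for illustration):

```shell
# Turn one ipmitool-sdr-style row into a metric line (illustrative only).
row='Fan 1            | 43.12 percent     | ok'
metric=$(echo "$row" | awk -F'|' '{
  name = $1
  gsub(/^ +| +$/, "", name)   # trim padding
  gsub(/ /, "_", name)        # spaces -> underscores
  split($2, v, " ")           # value and unit
  printf "ipmi_sensor,name=%s,unit=%s value=%s", tolower(name), v[2], v[1]
}')
echo "$metric"   # ipmi_sensor,name=fan_1,unit=percent value=43.12
```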
### Configuration
## Configuration
```toml
# Read metrics from the bare metal servers via IPMI
@ -72,9 +75,10 @@ Any of the following parameters will be added to the aformentioned query if they
# cache_path = ""
```
### Measurements
## Measurements
Version 1 schema:
- ipmi_sensor:
- tags:
- name
@ -86,6 +90,7 @@ Version 1 schema:
- value (float)
Version 2 schema:
- ipmi_sensor:
- tags:
- name
@ -98,17 +103,19 @@ Version 2 schema:
- fields:
- value (float)
#### Permissions
### Permissions
When gathering from the local system, Telegraf will need permission to access
the ipmi device node. When using udev, you can create the device node and
grant `rw` permissions to the `telegraf` user by adding the following rule to
`/etc/udev/rules.d/52-telegraf-ipmi.rules`:
```
```sh
KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
```
Alternatively, it is possible to use sudo. You will need the following in your telegraf config:
```toml
[[inputs.ipmi_sensor]]
use_sudo = true
@ -124,11 +131,13 @@ telegraf ALL=(root) NOPASSWD: IPMITOOL
Defaults!IPMITOOL !logfile, !syslog, !pam_session
```
### Example Output
## Example Output
### Version 1 Schema
#### Version 1 Schema
When retrieving stats from a remote server:
```
```shell
ipmi_sensor,server=10.20.2.203,name=uid_light value=0,status=1i 1517125513000000000
ipmi_sensor,server=10.20.2.203,name=sys._health_led status=1i,value=0 1517125513000000000
ipmi_sensor,server=10.20.2.203,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
@ -137,9 +146,9 @@ ipmi_sensor,server=10.20.2.203,name=power_supplies value=0,status=1i 15171255130
ipmi_sensor,server=10.20.2.203,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
```
When retrieving stats from the local machine (no server specified):
```
```shell
ipmi_sensor,name=uid_light value=0,status=1i 1517125513000000000
ipmi_sensor,name=sys._health_led status=1i,value=0 1517125513000000000
ipmi_sensor,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
@ -151,7 +160,8 @@ ipmi_sensor,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
#### Version 2 Schema
When retrieving stats from the local machine (no server specified):
```
```shell
ipmi_sensor,name=uid_light,entity_id=23.1,status_code=ok,status_desc=ok value=0 1517125474000000000
ipmi_sensor,name=sys._health_led,entity_id=23.2,status_code=ok,status_desc=ok value=0 1517125474000000000
ipmi_sensor,entity_id=10.1,name=power_supply_1,status_code=ok,status_desc=presence_detected,unit=watts value=110 1517125474000000000


@ -5,33 +5,37 @@ It uses the output of the command "ipset save".
Ipsets created without the "counters" option are ignored.
Results are tagged with:
- ipset name
- ipset entry
There are 3 ways to grant telegraf the right to run ipset:
* Run as root (strongly discouraged)
* Use sudo
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW capabilities.
### Using systemd capabilities
- Run as root (strongly discouraged)
- Use sudo
- Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW capabilities.
## Using systemd capabilities
You may run `systemctl edit telegraf.service` and add the following:
```
```text
[Service]
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
```
### Using sudo
## Using sudo
You will need the following in your telegraf config:
```toml
[[inputs.ipset]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following line:
@ -40,7 +44,7 @@ telegraf ALL=(root) NOPASSWD: IPSETSAVE
Defaults!IPSETSAVE !logfile, !syslog, !pam_session
```
### Configuration
## Configuration
```toml
[[inputs.ipset]]
@ -56,15 +60,15 @@ Defaults!IPSETSAVE !logfile, !syslog, !pam_session
```
### Example Output
## Example Output
```
```sh
$ sudo ipset save
create myset hash:net family inet hashsize 1024 maxelem 65536 counters comment
add myset 10.69.152.1 packets 8 bytes 672 comment "machine A"
```
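For illustration, the counters on an `add` line like the one above can be pulled apart as follows. This is a Python sketch only; the plugin's real parsing may differ.

```python
def parse_ipset_add(line):
    """Extract set name, entry, and counters from one `ipset save` add line."""
    parts = line.split()
    if parts[0] != "add":
        raise ValueError("not an add line")
    tags = {"set": parts[1], "rule": parts[2]}
    fields = {}
    # remaining tokens come in key/value pairs: packets N bytes M comment "..."
    for key, value in zip(parts[3::2], parts[4::2]):
        if key in ("packets", "bytes"):
            fields[key + "_total"] = int(value)
    return tags, fields

tags, fields = parse_ipset_add(
    'add myset 10.69.152.1 packets 8 bytes 672 comment "machine A"'
)
```

Sets created without the `counters` option have no `packets`/`bytes` pairs, which is why such ipsets are skipped by the plugin.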
```
```sh
$ telegraf --config telegraf.conf --input-filter ipset --test --debug
* Plugin: inputs.ipset, Collection 1
> ipset,rule=10.69.152.1,host=trashme,set=myset bytes_total=672i,packets_total=8i 1507615028000000000


@ -14,11 +14,11 @@ The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You ha
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is the simplest and recommended option.
* Configure sudo to grant telegraf the right to run iptables. This is the most restrictive option, but requires sudo setup.
### Using systemd capabilities
## Using systemd capabilities
You may run `systemctl edit telegraf.service` and add the following:
```
```shell
[Service]
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
@ -26,9 +26,10 @@ AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
Since telegraf will fork a process to run iptables, `AmbientCapabilities` is required to transmit the capabilities bounding set to the forked process.
### Using sudo
## Using sudo
You will need the following in your telegraf config:
```toml
[[inputs.iptables]]
use_sudo = true
@ -44,11 +45,11 @@ telegraf ALL=(root) NOPASSWD: IPTABLESSHOW
Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session
```
### Using IPtables lock feature
## Using IPtables lock feature
Defining multiple instances of this plugin in telegraf.conf can lead to concurrent iptables access, resulting in "ERROR in input [inputs.iptables]: exit status 4" messages in telegraf.log and missing metrics. Setting `use_lock = true` in the plugin configuration will run iptables with the `-w` switch, allowing a lock to be used to prevent this error.
### Configuration:
## Configuration
```toml
# use sudo to run iptables
@ -63,25 +64,24 @@ Defining multiple instances of this plugin in telegraf.conf can lead to concurre
chains = [ "INPUT" ]
```
### Measurements & Fields:
## Measurements & Fields
* iptables
* pkts (integer, count)
* bytes (integer, bytes)
- iptables
- pkts (integer, count)
- bytes (integer, bytes)
## Tags
### Tags:
- All measurements have the following tags:
- table
- chain
- ruleid
* All measurements have the following tags:
* table
* chain
* ruleid
The `ruleid` is the comment associated to the rule.
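For illustration, the counters and the comment-based `ruleid` can be read off a rule line with something like the following Python sketch. The column positions and the assumption of numeric counters (as produced with the `-x` flag) are mine, not the plugin's code.

```python
import re

def parse_iptables_rule(line):
    """Extract pkts, bytes, and the /* comment */ ruleid from one rule line
    of `iptables -nvxL` output. Assumes numeric counters (-x flag)."""
    parts = line.split()
    pkts, nbytes = int(parts[0]), int(parts[1])
    match = re.search(r"/\*\s*(.*?)\s*\*/", line)
    ruleid = match.group(1) if match else None
    return {"pkts": pkts, "bytes": nbytes}, ruleid

fields, ruleid = parse_iptables_rule(
    "  42  2048 ACCEPT  tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp dpt:80 /* httpd */"
)
```

Rules without a comment yield no `ruleid`, which is consistent with the plugin only reporting commented rules.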
### Example Output:
## Example Output
```
```text
$ iptables -nvL INPUT
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
@ -89,7 +89,7 @@ pkts bytes target prot opt in out source destination
42 2048 ACCEPT tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp dpt:80 /* httpd */
```
```
```shell
$ ./telegraf --config telegraf.conf --input-filter iptables --test
iptables,table=filter,chain=INPUT,ruleid=ssh pkts=100i,bytes=1024i 1453831884664956455
iptables,table=filter,chain=INPUT,ruleid=httpd pkts=42i,bytes=2048i 1453831884664956455


@ -5,14 +5,14 @@ metrics about ipvs virtual and real servers.
**Supported Platforms:** Linux
### Configuration
## Configuration
```toml
[[inputs.ipvs]]
# no configuration
```
#### Permissions
### Permissions
Assuming you installed the telegraf package via one of the published packages,
the process will be running as the `telegraf` user. However, in order for this
@ -20,7 +20,7 @@ plugin to communicate over netlink sockets it needs the telegraf process to be
running as `root` (or some user with `CAP_NET_ADMIN` and `CAP_NET_RAW`). Be sure
to ensure these permissions before running telegraf with this plugin included.
### Metrics
## Metrics
Server will contain tags identifying how it was configured, using one of
`address` + `port` + `protocol` *OR* `fwmark`. This is how one would normally
@ -66,17 +66,19 @@ configure a virtual server using `ipvsadm`.
- pps_out
- cps
### Example Output
## Example Output
Virtual server is configured using `proto+addr+port` and backed by 2 real servers:
```
```shell
ipvs_virtual_server,address=172.18.64.234,address_family=inet,netmask=32,port=9000,protocol=tcp,sched=rr bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i,cps=0i,connections=0i,pkts_in=0i,pkts_out=0i 1541019340000000000
ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pkts_in=0i,bytes_out=0i,pps_out=0i,connections=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,cps=0i 1541019340000000000
ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000
```
Virtual server is configured using `fwmark` and backed by 2 real servers:
```
```shell
ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000
ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_fwmark=47 inactive_connections=0i,pkts_out=0i,bytes_out=0i,pps_in=0i,cps=0i,active_connections=0i,pkts_in=0i,bytes_in=0i,pps_out=0i,connections=0i 1541019340000000000
ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_fwmark=47 cps=0i,active_connections=0i,inactive_connections=0i,connections=0i,pkts_in=0i,bytes_out=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,pps_out=0i 1541019340000000000


@ -4,7 +4,7 @@ The jenkins plugin gathers information about the nodes and jobs running in a jen
This plugin does not require a plugin on Jenkins; it uses the Jenkins API to retrieve all the information needed.
### Configuration:
## Configuration
```toml
[[inputs.jenkins]]
@ -55,7 +55,7 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
# max_connections = 5
```
### Metrics:
## Metrics
- jenkins
- tags:
@ -65,7 +65,7 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
- busy_executors
- total_executors
+ jenkins_node
- jenkins_node
- tags:
- arch
- disk_path
@ -96,23 +96,22 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
- number
- result_code (0 = SUCCESS, 1 = FAILURE, 2 = NOT_BUILD, 3 = UNSTABLE, 4 = ABORTED)
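The `result_code` mapping above can be expressed as a small lookup table. This is a convenience sketch for consumers of the metric, not code taken from the plugin's source.

```python
# result -> result_code mapping from the field list above.
RESULT_CODES = {
    "SUCCESS": 0,
    "FAILURE": 1,
    "NOT_BUILD": 2,
    "UNSTABLE": 3,
    "ABORTED": 4,
}

def result_code(result: str) -> int:
    """Translate a Jenkins build result string to its numeric code."""
    return RESULT_CODES[result]
```

This is handy when, say, alerting on `result_code > 0` and needing to map the number back to a human-readable result.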
### Sample Queries:
## Sample Queries
```
```sql
SELECT mean("memory_available") AS "mean_memory_available", mean("memory_total") AS "mean_memory_total", mean("temp_available") AS "mean_temp_available" FROM "jenkins_node" WHERE time > now() - 15m GROUP BY time(:interval:) FILL(null)
```
```
```sql
SELECT mean("duration") AS "mean_duration" FROM "jenkins_job" WHERE time > now() - 24h GROUP BY time(:interval:) FILL(null)
```
### Example Output:
## Example Output
```
```shell
$ ./telegraf --config telegraf.conf --input-filter jenkins --test
jenkins,host=myhost,port=80,source=my-jenkins-instance busy_executors=4i,total_executors=8i 1580418261000000000
jenkins_node,arch=Linux\ (amd64),disk_path=/var/jenkins_home,temp_path=/tmp,host=myhost,node_name=master,source=my-jenkins-instance,port=8080 swap_total=4294963200,memory_available=586711040,memory_total=6089498624,status=online,response_time=1000i,disk_available=152392036352,temp_available=152392036352,swap_available=3503263744,num_executors=2i 1516031535000000000
jenkins_job,host=myhost,name=JOB1,parents=apps/br1,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2831i,result_code=0i 1516026630000000000
jenkins_job,host=myhost,name=JOB2,parents=apps/br2,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2285i,result_code=0i 1516027230000000000
```


@ -1,8 +1,8 @@
# Jolokia Input Plugin
### Deprecated in version 1.5: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin.
## Deprecated in version 1.5: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin
#### Configuration
### Configuration
```toml
# Read JMX metrics through Jolokia
@ -66,8 +66,9 @@
The Jolokia plugin collects JVM metrics exposed as MBean attributes through the
Jolokia REST endpoint. All metrics are collected for each server configured.
See: https://jolokia.org/
See: <https://jolokia.org/>
## Measurements
# Measurements:
The Jolokia plugin produces one measurement for each configured metric,
adding the server's `jolokia_name`, `jolokia_host` and `jolokia_port` as tags.


@ -2,9 +2,9 @@
The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP protocol](https://jolokia.org/reference/html/protocol.html).
### Configuration:
## Configuration
#### Jolokia Agent Configuration
### Jolokia Agent Configuration
The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints.
@ -34,7 +34,7 @@ Optionally, specify TLS options for communicating with agents:
paths = ["Uptime"]
```
#### Jolokia Proxy Configuration
### Jolokia Proxy Configuration
The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_ by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html) REST endpoint.
@ -79,7 +79,7 @@ Optionally, specify TLS options for communicating with proxies:
paths = ["Uptime"]
```
#### Jolokia Metric Configuration
### Jolokia Metric Configuration
Each `metric` declaration generates a Jolokia request to fetch telemetry from a JMX MBean.
@ -103,7 +103,7 @@ Use `paths` to refine which fields to collect.
The preceding `jvm_memory` `metric` declaration produces the following output:
```
```text
jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
```
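The dotted field names above come from flattening Jolokia's nested JSON response. The idea can be sketched like this (illustrative Python, not the plugin's Go implementation):

```python
def flatten(value, prefix=""):
    """Flatten a nested Jolokia response value into dotted field names,
    e.g. {"HeapMemoryUsage": {"init": 1}} -> {"HeapMemoryUsage.init": 1}."""
    fields = {}
    for key, val in value.items():
        name = f"{prefix}{key}"
        if isinstance(val, dict):
            fields.update(flatten(val, name + "."))
        else:
            fields[name] = val
    return fields

fields = flatten({"HeapMemoryUsage": {"init": 4294967296, "max": 4294967296}})
```

The `paths` option then simply restricts which of these flattened names are kept.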
@ -119,7 +119,7 @@ Use `*` wildcards against `mbean` property-key values to create distinct series
Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and `name` is used as a tag, the preceding `jvm_garbage_collector` `metric` declaration produces two metrics.
```
```shell
jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000
jvm_garbage_collector,name=G1\ Young\ Generation CollectionTime=32,CollectionCount=2 1503762520000000000
```
@ -137,7 +137,7 @@ Use `tag_prefix` along with `tag_keys` to add detail to tag names.
The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each with a distinct `pool_name` tag.
```
```text
jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
jvm_memory_pool,pool_name=Code\ Cache PeakUsage.init=2555904,PeakUsage.committed=6291456,Usage.committed=6291456,PeakUsage.used=6202752,PeakUsage.max=251658240,Usage.used=6210368,Usage.max=251658240,Usage.init=2555904 1503764025000000000
jvm_memory_pool,pool_name=G1\ Eden\ Space CollectionUsage.max=-1,PeakUsage.committed=56623104,PeakUsage.init=56623104,PeakUsage.used=53477376,Usage.max=-1,Usage.committed=49283072,Usage.used=19922944,CollectionUsage.committed=49283072,CollectionUsage.init=56623104,CollectionUsage.used=0,PeakUsage.max=-1,Usage.init=56623104 1503764025000000000
@ -158,7 +158,7 @@ Use substitutions to create fields and field prefixes with MBean property-keys c
The preceding `kafka_topic` `metric` declaration produces a metric per Kafka topic. The `name` MBean property-key is used as a field prefix to aid in gathering fields together into the single metric.
```
```text
kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="messages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
```
@ -170,7 +170,7 @@ Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configuration
| `default_field_prefix` | _None_ | A string to prepend to the field names produced by all `metric` declarations. |
| `default_tag_prefix` | _None_ | A string to prepend to the tag names produced by all `metric` declarations. |
### Example Configurations:
## Example Configurations
- [ActiveMQ](/plugins/inputs/jolokia2/examples/activemq.conf)
- [BitBucket](/plugins/inputs/jolokia2/examples/bitbucket.conf)


@ -3,7 +3,7 @@
This plugin reads Juniper Networks implementation of OpenConfig telemetry data from listed sensors using Junos Telemetry Interface. Refer to
[openconfig.net](http://openconfig.net/) for more details about OpenConfig and [Junos Telemetry Interface (JTI)](https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html).
### Configuration:
## Configuration
```toml
# Subscribe and receive OpenConfig Telemetry data using JTI
@ -57,7 +57,7 @@ This plugin reads Juniper Networks implementation of OpenConfig telemetry data f
str_as_tags = false
```
### Tags:
## Tags
- All measurements are tagged appropriately using the identifier information
in incoming data


@ -6,7 +6,7 @@ and creates metrics using one of the supported [input data formats][].
For old Kafka versions (< 0.8), please use the [kafka_consumer_legacy][] input plugin
and use the old zookeeper connection method.
### Configuration
## Configuration
```toml
[[inputs.kafka_consumer]]


@ -1,6 +1,6 @@
# Kafka Consumer Legacy Input Plugin
### Deprecated in version 1.4. Please use [Kafka Consumer input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer).
## Deprecated in version 1.4. Please use [Kafka Consumer input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the


@ -2,7 +2,7 @@
The Kapacitor plugin collects metrics from the given Kapacitor instances.
### Configuration:
## Configuration
```toml
[[inputs.kapacitor]]
@ -23,276 +23,334 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.
# insecure_skip_verify = false
```
### Measurements and fields
## Measurements and fields
- [kapacitor](#kapacitor)
- [num_enabled_tasks](#num_enabled_tasks) _(integer)_
- [num_subscriptions](#num_subscriptions) _(integer)_
- [num_tasks](#num_tasks) _(integer)_
- [num_enabled_tasks](#num_enabled_tasks) _(integer)_
- [num_subscriptions](#num_subscriptions) _(integer)_
- [num_tasks](#num_tasks) _(integer)_
- [kapacitor_alert](#kapacitor_alert)
- [notification_dropped](#notification_dropped) _(integer)_
- [primary-handle-count](#primary-handle-count) _(integer)_
- [secondary-handle-count](#secondary-handle-count) _(integer)_
- [notification_dropped](#notification_dropped) _(integer)_
- [primary-handle-count](#primary-handle-count) _(integer)_
- [secondary-handle-count](#secondary-handle-count) _(integer)_
- (Kapacitor Enterprise only) [kapacitor_cluster](#kapacitor_cluster) _(integer)_
- [dropped_member_events](#dropped_member_events) _(integer)_
- [dropped_user_events](#dropped_user_events) _(integer)_
- [query_handler_errors](#query_handler_errors) _(integer)_
- [dropped_member_events](#dropped_member_events) _(integer)_
- [dropped_user_events](#dropped_user_events) _(integer)_
- [query_handler_errors](#query_handler_errors) _(integer)_
- [kapacitor_edges](#kapacitor_edges)
- [collected](#collected) _(integer)_
- [emitted](#emitted) _(integer)_
- [collected](#collected) _(integer)_
- [emitted](#emitted) _(integer)_
- [kapacitor_ingress](#kapacitor_ingress)
- [points_received](#points_received) _(integer)_
- [points_received](#points_received) _(integer)_
- [kapacitor_load](#kapacitor_load)
- [errors](#errors) _(integer)_
- [errors](#errors) _(integer)_
- [kapacitor_memstats](#kapacitor_memstats)
- [alloc_bytes](#alloc_bytes) _(integer)_
- [buck_hash_sys_bytes](#buck_hash_sys_bytes) _(integer)_
- [frees](#frees) _(integer)_
- [gc_sys_bytes](#gc_sys_bytes) _(integer)_
- [gc_cpu_fraction](#gc_cpu_fraction) _(float)_
- [heap_alloc_bytes](#heap_alloc_bytes) _(integer)_
- [heap_idle_bytes](#heap_idle_bytes) _(integer)_
- [heap_in_use_bytes](#heap_in_use_bytes) _(integer)_
- [heap_objects](#heap_objects) _(integer)_
- [heap_released_bytes](#heap_released_bytes) _(integer)_
- [heap_sys_bytes](#heap_sys_bytes) _(integer)_
- [last_gc_ns](#last_gc_ns) _(integer)_
- [lookups](#lookups) _(integer)_
- [mallocs](#mallocs) _(integer)_
- [mcache_in_use_bytes](#mcache_in_use_bytes) _(integer)_
- [mcache_sys_bytes](#mcache_sys_bytes) _(integer)_
- [mspan_in_use_bytes](#mspan_in_use_bytes) _(integer)_
- [mspan_sys_bytes](#mspan_sys_bytes) _(integer)_
- [next_gc_ns](#next_gc_ns) _(integer)_
- [num_gc](#num_gc) _(integer)_
- [other_sys_bytes](#other_sys_bytes) _(integer)_
- [pause_total_ns](#pause_total_ns) _(integer)_
- [stack_in_use_bytes](#stack_in_use_bytes) _(integer)_
- [stack_sys_bytes](#stack_sys_bytes) _(integer)_
- [sys_bytes](#sys_bytes) _(integer)_
- [total_alloc_bytes](#total_alloc_bytes) _(integer)_
- [alloc_bytes](#alloc_bytes) _(integer)_
- [buck_hash_sys_bytes](#buck_hash_sys_bytes) _(integer)_
- [frees](#frees) _(integer)_
- [gc_sys_bytes](#gc_sys_bytes) _(integer)_
- [gc_cpu_fraction](#gc_cpu_fraction) _(float)_
- [heap_alloc_bytes](#heap_alloc_bytes) _(integer)_
- [heap_idle_bytes](#heap_idle_bytes) _(integer)_
- [heap_in_use_bytes](#heap_in_use_bytes) _(integer)_
- [heap_objects](#heap_objects) _(integer)_
- [heap_released_bytes](#heap_released_bytes) _(integer)_
- [heap_sys_bytes](#heap_sys_bytes) _(integer)_
- [last_gc_ns](#last_gc_ns) _(integer)_
- [lookups](#lookups) _(integer)_
- [mallocs](#mallocs) _(integer)_
- [mcache_in_use_bytes](#mcache_in_use_bytes) _(integer)_
- [mcache_sys_bytes](#mcache_sys_bytes) _(integer)_
- [mspan_in_use_bytes](#mspan_in_use_bytes) _(integer)_
- [mspan_sys_bytes](#mspan_sys_bytes) _(integer)_
- [next_gc_ns](#next_gc_ns) _(integer)_
- [num_gc](#num_gc) _(integer)_
- [other_sys_bytes](#other_sys_bytes) _(integer)_
- [pause_total_ns](#pause_total_ns) _(integer)_
- [stack_in_use_bytes](#stack_in_use_bytes) _(integer)_
- [stack_sys_bytes](#stack_sys_bytes) _(integer)_
- [sys_bytes](#sys_bytes) _(integer)_
- [total_alloc_bytes](#total_alloc_bytes) _(integer)_
- [kapacitor_nodes](#kapacitor_nodes)
- [alerts_inhibited](#alerts_inhibited) _(integer)_
- [alerts_triggered](#alerts_triggered) _(integer)_
- [avg_exec_time_ns](#avg_exec_time_ns) _(integer)_
- [crits_triggered](#crits_triggered) _(integer)_
- [errors](#errors) _(integer)_
- [infos_triggered](#infos_triggered) _(integer)_
- [oks_triggered](#oks_triggered) _(integer)_
- [points_written](#points_written) _(integer)_
- [warns_triggered](#warns_triggered) _(integer)_
- [write_errors](#write_errors) _(integer)_
- [alerts_inhibited](#alerts_inhibited) _(integer)_
- [alerts_triggered](#alerts_triggered) _(integer)_
- [avg_exec_time_ns](#avg_exec_time_ns) _(integer)_
- [crits_triggered](#crits_triggered) _(integer)_
- [errors](#errors) _(integer)_
- [infos_triggered](#infos_triggered) _(integer)_
- [oks_triggered](#oks_triggered) _(integer)_
- [points_written](#points_written) _(integer)_
- [warns_triggered](#warns_triggered) _(integer)_
- [write_errors](#write_errors) _(integer)_
- [kapacitor_topics](#kapacitor_topics)
- [collected](#collected) _(integer)_
- [collected](#collected) _(integer)_
---
### kapacitor
## kapacitor
The `kapacitor` measurement stores fields with information related to
[Kapacitor tasks](https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks)
and [subscriptions](https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/).
#### num_enabled_tasks
### num_enabled_tasks
The number of enabled Kapacitor tasks.
#### num_subscriptions
### num_subscriptions
The number of Kapacitor/InfluxDB subscriptions.
#### num_tasks
### num_tasks
The total number of Kapacitor tasks.
---
### kapacitor_alert
## kapacitor_alert
The `kapacitor_alert` measurement stores fields with information related to
[Kapacitor alerts](https://docs.influxdata.com/kapacitor/v1.5/working/alerts/).
#### notification-dropped
### notification-dropped
The number of internal notifications dropped because they arrive too late from another Kapacitor node.
If this count is increasing, Kapacitor Enterprise nodes aren't able to communicate fast enough
to keep up with the volume of alerts.
#### primary-handle-count
### primary-handle-count
The number of times this node handled an alert as the primary. This count should increase under normal conditions.
#### secondary-handle-count
### secondary-handle-count
The number of times this node handled an alert as the secondary. An increase in this counter indicates that the primary is failing to handle alerts in a timely manner.
---
### kapacitor_cluster
## kapacitor_cluster
The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications) with one another. Specifically, these metrics track the gossip communication between the Kapacitor nodes.
#### dropped_member_events
### dropped_member_events
The number of gossip member events that were dropped.
#### dropped_user_events
### dropped_user_events
The number of gossip user events that were dropped.
---
### kapacitor_edges
## kapacitor_edges
The `kapacitor_edges` measurement stores fields with information related to
[edges](https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines)
in Kapacitor TICKscripts.
#### collected
### collected
The number of messages collected by TICKscript edges.
#### emitted
### emitted
The number of messages emitted by TICKscript edges.
---
### kapacitor_ingress
## kapacitor_ingress
The `kapacitor_ingress` measurement stores fields with information related to data
coming into Kapacitor.
#### points_received
### points_received
The number of points received by Kapacitor.
---
### kapacitor_load
## kapacitor_load
The `kapacitor_load` measurement stores fields with information related to the
[Kapacitor Load Directory service](https://docs.influxdata.com/kapacitor/latest/guides/load_directory/).
#### errors
### errors
The number of errors reported from the load directory service.
---
### kapacitor_memstats
## kapacitor_memstats
The `kapacitor_memstats` measurement stores fields related to Kapacitor memory usage.
#### alloc_bytes
### alloc_bytes
The number of bytes of memory allocated by Kapacitor that are still in use.
#### buck_hash_sys_bytes
### buck_hash_sys_bytes
The number of bytes of memory used by the profiling bucket hash table.
#### frees
### frees
The number of heap objects freed.
#### gc_sys_bytes
### gc_sys_bytes
The number of bytes of memory used for garbage collection system metadata.
#### gc_cpu_fraction
### gc_cpu_fraction
The fraction of Kapacitor's available CPU time used by garbage collection since
Kapacitor started.
#### heap_alloc_bytes
### heap_alloc_bytes
The number of reachable and unreachable heap objects garbage collection has
not freed.
#### heap_idle_bytes
### heap_idle_bytes
The number of heap bytes waiting to be used.
#### heap_in_use_bytes
### heap_in_use_bytes
The number of heap bytes in use.
#### heap_objects
### heap_objects
The number of allocated objects.
#### heap_released_bytes
### heap_released_bytes
The number of heap bytes released to the operating system.
#### heap_sys_bytes
### heap_sys_bytes
The number of heap bytes obtained from `system`.
#### last_gc_ns
### last_gc_ns
The nanosecond epoch time of the last garbage collection.
#### lookups
### lookups
The total number of pointer lookups.
#### mallocs
### mallocs
The total number of mallocs.
#### mcache_in_use_bytes
### mcache_in_use_bytes
The number of bytes in use by mcache structures.
#### mcache_sys_bytes
### mcache_sys_bytes
The number of bytes used for mcache structures obtained from `system`.
#### mspan_in_use_bytes
### mspan_in_use_bytes
The number of bytes in use by mspan structures.
#### mspan_sys_bytes
### mspan_sys_bytes
The number of bytes used for mspan structures obtained from `system`.
#### next_gc_ns
### next_gc_ns
The nanosecond epoch time of the next garbage collection.
#### num_gc
### num_gc
The number of completed garbage collection cycles.
#### other_sys_bytes
### other_sys_bytes
The number of bytes used for other system allocations.
#### pause_total_ns
### pause_total_ns
The total number of nanoseconds spent in garbage collection "stop-the-world"
pauses since Kapacitor started.
#### stack_in_use_bytes
### stack_in_use_bytes
The number of bytes in use by the stack allocator.
#### stack_sys_bytes
### stack_sys_bytes
The number of bytes obtained from `system` for stack allocator.
#### sys_bytes
### sys_bytes
The number of bytes of memory obtained from `system`.
#### total_alloc_bytes
### total_alloc_bytes
The total number of bytes allocated, even if freed.
---
### kapacitor_nodes
## kapacitor_nodes
The `kapacitor_nodes` measurement stores fields related to events that occur in
[TICKscript nodes](https://docs.influxdata.com/kapacitor/latest/nodes/).
#### alerts_inhibited
### alerts_inhibited
The total number of alerts inhibited by TICKscripts.
#### alerts_triggered
### alerts_triggered
The total number of alerts triggered by TICKscripts.
#### avg_exec_time_ns
### avg_exec_time_ns
The average execution time of TICKscripts in nanoseconds.
#### crits_triggered
### crits_triggered
The number of critical (`crit`) alerts triggered by TICKscripts.
#### errors
### errors (from TICKscripts)
The number of errors caused by TICKscripts.
#### infos_triggered
### infos_triggered
The number of info (`info`) alerts triggered by TICKscripts.
#### oks_triggered
### oks_triggered
The number of ok (`ok`) alerts triggered by TICKscripts.
#### points_written
The number of points written to InfluxDB or back to Kapacitor.
#### warns_triggered
The number of warning (`warn`) alerts triggered by TICKscripts.
#### working_cardinality
The total number of unique series processed.
#### write_errors
The number of errors that occurred when writing to InfluxDB or other write endpoints.
---
### kapacitor_topics
The `kapacitor_topics` measurement stores fields related to
[Kapacitor topics](https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/).
#### collected
The `kapacitor_topics` measurement stores fields related to
[Kapacitor topics](https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/).
### collected (kapacitor_topics)
The number of events collected by Kapacitor topics.
---
@ -303,7 +361,7 @@ these values.
## Example Output
```shell
$ telegraf --config /etc/telegraf.conf --input-filter kapacitor --test
* Plugin: inputs.kapacitor, Collection 1
> kapacitor_memstats,host=hostname.local,kap_version=1.1.0~rc2,url=http://localhost:9092/kapacitor/v1/debug/vars alloc_bytes=6974808i,buck_hash_sys_bytes=1452609i,frees=207281i,gc_sys_bytes=802816i,gc_cpu_fraction=0.00004693548939673313,heap_alloc_bytes=6974808i,heap_idle_bytes=6742016i,heap_in_use_bytes=9183232i,heap_objects=23216i,heap_released_bytes=0i,heap_sys_bytes=15925248i,last_gc_ns=1478791460012676997i,lookups=88i,mallocs=230497i,mcache_in_use_bytes=9600i,mcache_sys_bytes=16384i,mspan_in_use_bytes=98560i,mspan_sys_bytes=131072i,next_gc_ns=11467528i,num_gc=8i,other_sys_bytes=2236087i,pause_total_ns=2994110i,stack_in_use_bytes=1900544i,stack_sys_bytes=1900544i,sys_bytes=22464760i,total_alloc_bytes=35023600i 1478791462000000000
@ -9,7 +9,7 @@ not covered by other plugins as well as the value of `/proc/sys/kernel/random/en
The metrics are documented in `man proc` under the `/proc/stat` section.
The `entropy_avail` metric is documented in `man 4 random` under the `/proc/sys/kernel/random/entropy_avail` section.
```text
/proc/sys/kernel/random/entropy_avail
@ -39,7 +39,7 @@ processes 86031
Number of forks since boot.
```
## Configuration
```toml
# Get kernel statistics from /proc/stat
@ -47,24 +47,24 @@ Number of forks since boot.
# no configuration
```
## Measurements & Fields
- kernel
- boot_time (integer, seconds since epoch, `btime`)
- context_switches (integer, `ctxt`)
- disk_pages_in (integer, `page (0)`)
- disk_pages_out (integer, `page (1)`)
- interrupts (integer, `intr`)
- processes_forked (integer, `processes`)
- entropy_avail (integer, `entropy_available`)
## Tags
None
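All of these fields are cumulative counters, so rates are usually more useful than raw values. A sketch in InfluxQL, using two of the fields listed above:

```sql
-- Context-switch and fork rates per second over the last hour
SELECT non_negative_derivative("context_switches", 1s) AS "ctxt_per_sec",
       non_negative_derivative("processes_forked", 1s) AS "forks_per_sec"
FROM "kernel" WHERE time > now() - 1h
```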
## Example Output
```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter kernel --test
* Plugin: kernel, Collection 1
> kernel entropy_available=2469i,boot_time=1457505775i,context_switches=2626618i,disk_pages_in=5741i,disk_pages_out=1808i,interrupts=1472736i,processes_forked=10673i 1457613402960879816
@ -1,13 +1,12 @@
# Kernel VMStat Input Plugin
The kernel_vmstat plugin gathers virtual memory statistics
by reading /proc/vmstat. For a full list of available fields see the
/proc/vmstat section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
For a better idea of what each field represents, see the
[vmstat man page](http://linux.die.net/man/8/vmstat).
```text
/proc/vmstat
kernel/system statistics. Common entries include (from http://www.linuxinsight.com/proc_vmstat.html):
@ -109,7 +108,7 @@ pgrotated 3781
nr_bounce 0
```
## Configuration
```toml
# Get kernel statistics from /proc/vmstat
@ -117,108 +116,108 @@ nr_bounce 0
# no configuration
```
## Measurements & Fields
- kernel_vmstat
- nr_free_pages (integer, `nr_free_pages`)
- nr_inactive_anon (integer, `nr_inactive_anon`)
- nr_active_anon (integer, `nr_active_anon`)
- nr_inactive_file (integer, `nr_inactive_file`)
- nr_active_file (integer, `nr_active_file`)
- nr_unevictable (integer, `nr_unevictable`)
- nr_mlock (integer, `nr_mlock`)
- nr_anon_pages (integer, `nr_anon_pages`)
- nr_mapped (integer, `nr_mapped`)
- nr_file_pages (integer, `nr_file_pages`)
- nr_dirty (integer, `nr_dirty`)
- nr_writeback (integer, `nr_writeback`)
- nr_slab_reclaimable (integer, `nr_slab_reclaimable`)
- nr_slab_unreclaimable (integer, `nr_slab_unreclaimable`)
- nr_page_table_pages (integer, `nr_page_table_pages`)
- nr_kernel_stack (integer, `nr_kernel_stack`)
- nr_unstable (integer, `nr_unstable`)
- nr_bounce (integer, `nr_bounce`)
- nr_vmscan_write (integer, `nr_vmscan_write`)
- nr_writeback_temp (integer, `nr_writeback_temp`)
- nr_isolated_anon (integer, `nr_isolated_anon`)
- nr_isolated_file (integer, `nr_isolated_file`)
- nr_shmem (integer, `nr_shmem`)
- numa_hit (integer, `numa_hit`)
- numa_miss (integer, `numa_miss`)
- numa_foreign (integer, `numa_foreign`)
- numa_interleave (integer, `numa_interleave`)
- numa_local (integer, `numa_local`)
- numa_other (integer, `numa_other`)
- nr_anon_transparent_hugepages (integer, `nr_anon_transparent_hugepages`)
- pgpgin (integer, `pgpgin`)
- pgpgout (integer, `pgpgout`)
- pswpin (integer, `pswpin`)
- pswpout (integer, `pswpout`)
- pgalloc_dma (integer, `pgalloc_dma`)
- pgalloc_dma32 (integer, `pgalloc_dma32`)
- pgalloc_normal (integer, `pgalloc_normal`)
- pgalloc_movable (integer, `pgalloc_movable`)
- pgfree (integer, `pgfree`)
- pgactivate (integer, `pgactivate`)
- pgdeactivate (integer, `pgdeactivate`)
- pgfault (integer, `pgfault`)
- pgmajfault (integer, `pgmajfault`)
- pgrefill_dma (integer, `pgrefill_dma`)
- pgrefill_dma32 (integer, `pgrefill_dma32`)
- pgrefill_normal (integer, `pgrefill_normal`)
- pgrefill_movable (integer, `pgrefill_movable`)
- pgsteal_dma (integer, `pgsteal_dma`)
- pgsteal_dma32 (integer, `pgsteal_dma32`)
- pgsteal_normal (integer, `pgsteal_normal`)
- pgsteal_movable (integer, `pgsteal_movable`)
- pgscan_kswapd_dma (integer, `pgscan_kswapd_dma`)
- pgscan_kswapd_dma32 (integer, `pgscan_kswapd_dma32`)
- pgscan_kswapd_normal (integer, `pgscan_kswapd_normal`)
- pgscan_kswapd_movable (integer, `pgscan_kswapd_movable`)
- pgscan_direct_dma (integer, `pgscan_direct_dma`)
- pgscan_direct_dma32 (integer, `pgscan_direct_dma32`)
- pgscan_direct_normal (integer, `pgscan_direct_normal`)
- pgscan_direct_movable (integer, `pgscan_direct_movable`)
- zone_reclaim_failed (integer, `zone_reclaim_failed`)
- pginodesteal (integer, `pginodesteal`)
- slabs_scanned (integer, `slabs_scanned`)
- kswapd_steal (integer, `kswapd_steal`)
- kswapd_inodesteal (integer, `kswapd_inodesteal`)
- kswapd_low_wmark_hit_quickly (integer, `kswapd_low_wmark_hit_quickly`)
- kswapd_high_wmark_hit_quickly (integer, `kswapd_high_wmark_hit_quickly`)
- kswapd_skip_congestion_wait (integer, `kswapd_skip_congestion_wait`)
- pageoutrun (integer, `pageoutrun`)
- allocstall (integer, `allocstall`)
- pgrotated (integer, `pgrotated`)
- compact_blocks_moved (integer, `compact_blocks_moved`)
- compact_pages_moved (integer, `compact_pages_moved`)
- compact_pagemigrate_failed (integer, `compact_pagemigrate_failed`)
- compact_stall (integer, `compact_stall`)
- compact_fail (integer, `compact_fail`)
- compact_success (integer, `compact_success`)
- htlb_buddy_alloc_success (integer, `htlb_buddy_alloc_success`)
- htlb_buddy_alloc_fail (integer, `htlb_buddy_alloc_fail`)
- unevictable_pgs_culled (integer, `unevictable_pgs_culled`)
- unevictable_pgs_scanned (integer, `unevictable_pgs_scanned`)
- unevictable_pgs_rescued (integer, `unevictable_pgs_rescued`)
- unevictable_pgs_mlocked (integer, `unevictable_pgs_mlocked`)
- unevictable_pgs_munlocked (integer, `unevictable_pgs_munlocked`)
- unevictable_pgs_cleared (integer, `unevictable_pgs_cleared`)
- unevictable_pgs_stranded (integer, `unevictable_pgs_stranded`)
- unevictable_pgs_mlockfreed (integer, `unevictable_pgs_mlockfreed`)
- thp_fault_alloc (integer, `thp_fault_alloc`)
- thp_fault_fallback (integer, `thp_fault_fallback`)
- thp_collapse_alloc (integer, `thp_collapse_alloc`)
- thp_collapse_alloc_failed (integer, `thp_collapse_alloc_failed`)
- thp_split (integer, `thp_split`)
## Tags
None
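As with the kernel plugin, these fields are cumulative counters, so derivative queries are the typical way to use them. A sketch in InfluxQL:

```sql
-- Page-fault and major-fault rates per second over the last hour
SELECT non_negative_derivative("pgfault", 1s) AS "pgfaults_per_sec",
       non_negative_derivative("pgmajfault", 1s) AS "major_faults_per_sec"
FROM "kernel_vmstat" WHERE time > now() - 1h
```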
## Example Output
```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter kernel_vmstat --test
* Plugin: kernel_vmstat, Collection 1
> kernel_vmstat allocstall=81496i,compact_blocks_moved=238196i,compact_fail=135220i,compact_pagemigrate_failed=0i,compact_pages_moved=6370588i,compact_stall=142092i,compact_success=6872i,htlb_buddy_alloc_fail=0i,htlb_buddy_alloc_success=0i,kswapd_high_wmark_hit_quickly=25439i,kswapd_inodesteal=29770874i,kswapd_low_wmark_hit_quickly=8756i,kswapd_skip_congestion_wait=0i,kswapd_steal=291534428i,nr_active_anon=2515657i,nr_active_file=2244914i,nr_anon_pages=1358675i,nr_anon_transparent_hugepages=2034i,nr_bounce=0i,nr_dirty=5690i,nr_file_pages=5153546i,nr_free_pages=78730i,nr_inactive_anon=426259i,nr_inactive_file=2366791i,nr_isolated_anon=0i,nr_isolated_file=0i,nr_kernel_stack=579i,nr_mapped=558821i,nr_mlock=0i,nr_page_table_pages=11115i,nr_shmem=541689i,nr_slab_reclaimable=459806i,nr_slab_unreclaimable=47859i,nr_unevictable=0i,nr_unstable=0i,nr_vmscan_write=6206i,nr_writeback=0i,nr_writeback_temp=0i,numa_foreign=0i,numa_hit=5113399878i,numa_interleave=35793i,numa_local=5113399878i,numa_miss=0i,numa_other=0i,pageoutrun=505006i,pgactivate=375664931i,pgalloc_dma=0i,pgalloc_dma32=122480220i,pgalloc_movable=0i,pgalloc_normal=5233176719i,pgdeactivate=122735906i,pgfault=8699921410i,pgfree=5359765021i,pginodesteal=9188431i,pgmajfault=122210i,pgpgin=219717626i,pgpgout=3495885510i,pgrefill_dma=0i,pgrefill_dma32=1180010i,pgrefill_movable=0i,pgrefill_normal=119866676i,pgrotated=60620i,pgscan_direct_dma=0i,pgscan_direct_dma32=12256i,pgscan_direct_movable=0i,pgscan_direct_normal=31501600i,pgscan_kswapd_dma=0i,pgscan_kswapd_dma32=4480608i,pgscan_kswapd_movable=0i,pgscan_kswapd_normal=287857984i,pgsteal_dma=0i,pgsteal_dma32=4466436i,pgsteal_movable=0i,pgsteal_normal=318463755i,pswpin=2092i,pswpout=6206i,slabs_scanned=93775616i,thp_collapse_alloc=24857i,thp_collapse_alloc_failed=102214i,thp_fault_alloc=346219i,thp_fault_fallback=895453i,thp_split=9817i,unevictable_pgs_cleared=0i,unevictable_pgs_culled=1531i,unevictable_pgs_mlocked=6988i,unevictable_pgs_mlockfreed=0i,unevictable_pgs_munl
ocked=6988i,unevictable_pgs_rescued=5426i,unevictable_pgs_scanned=0i,unevictable_pgs_stranded=0i,zone_reclaim_failed=0i 1459455200071462843
@ -7,7 +7,7 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
[Kibana]: https://www.elastic.co/
## Configuration
```toml
[[inputs.kibana]]
@ -29,7 +29,7 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
# insecure_skip_verify = false
```
## Metrics
- kibana
- tags:
@ -48,9 +48,9 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
- concurrent_connections (integer)
- requests_per_sec (float)
## Example Output
```shell
kibana,host=myhost,name=my-kibana,source=localhost:5601,status=green,version=6.5.4 concurrent_connections=8i,heap_max_bytes=447778816i,heap_total_bytes=447778816i,heap_used_bytes=380603352i,requests_per_sec=1,response_time_avg_ms=57.6,response_time_max_ms=220i,status_code=1i,uptime_ms=6717489805i 1534864502000000000
```
@ -58,8 +58,8 @@ kibana,host=myhost,name=my-kibana,source=localhost:5601,status=green,version=6.5
Requires the following tools:
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)
From the root of this project execute the following script: `./plugins/inputs/kibana/test_environment/run_test_env.sh`
@ -67,4 +67,4 @@ This will build the latest Telegraf and then start up Kibana and Elasticsearch,
Then you can attach to the telegraf container to inspect the file `/tmp/metrics.out` to see if the status is being reported.
The Visual Studio Code [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension provides an easy user interface to attach to the running container.
@ -3,8 +3,7 @@
The [Kinesis][kinesis] consumer plugin reads from a Kinesis data stream
and creates metrics using one of the supported [input data formats][].
## Configuration
```toml
[[inputs.kinesis_consumer]]
@ -74,29 +73,28 @@ and creates metrics using one of the supported [input data formats][].
table_name = "default"
```
### Required AWS IAM permissions
Kinesis:
- DescribeStream
- GetRecords
- GetShardIterator
DynamoDB:
- GetItem
- PutItem
### DynamoDB Checkpoint
The DynamoDB checkpoint stores the last processed record in a DynamoDB table. To leverage
this functionality, create a table with the following string type keys:
```shell
Partition key: namespace
Sort key: shard_id
```
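One way to create such a table ahead of time is with the AWS CLI. A sketch, assuming the `table_name = "default"` from the configuration above and on-demand billing:

```shell
aws dynamodb create-table \
  --table-name default \
  --attribute-definitions \
      AttributeName=namespace,AttributeType=S \
      AttributeName=shard_id,AttributeType=S \
  --key-schema \
      AttributeName=namespace,KeyType=HASH \
      AttributeName=shard_id,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```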
[kinesis]: https://aws.amazon.com/kinesis/
[input data formats]: /docs/DATA_FORMATS_INPUT.md
@ -3,9 +3,9 @@
The KNX input plugin listens for messages on the KNX home-automation bus.
This plugin connects to the KNX bus via a KNX-IP interface.
Information about supported KNX message datapoint types can be found at the
underlying "knx-go" project site (<https://github.com/vapourismo/knx-go>).
## Configuration
This is a sample config for the plugin.
@ -34,7 +34,7 @@ This is a sample config for the plugin.
# addresses = ["5/5/3"]
```
### Measurement configurations
Each measurement contains only one datapoint-type (DPT) and assigns a list of
addresses to this measurement. You can, for example group all temperature sensor
@ -43,23 +43,24 @@ messages of one datapoint-type to multiple measurements.
**NOTE: You should not assign a group-address (GA) to multiple measurements!**
## Metrics
Received KNX data is stored in the named measurement as configured above using
the "value" field. In addition to the value, the following tags are added
to the datapoint:
- "groupaddress": KNX group-address corresponding to the value
- "unit": unit of the value
- "source": KNX physical address sending the value
To find out about the datatype of the datapoint please check your KNX project,
the KNX-specification or the "knx-go" project for the corresponding DPT.
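Values land in whatever measurement name the configuration assigns, so queries target those names directly. A sketch in InfluxQL, assuming a `temperature` measurement like the one in the example output:

```sql
-- Mean temperature per KNX group address, 10-minute buckets, last hour
SELECT mean("value") FROM "temperature"
WHERE time > now() - 1h GROUP BY time(10m), "groupaddress"
```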
## Example Output
This section shows example output in Line Protocol format.
```shell
illumination,groupaddress=5/5/4,host=Hugin,source=1.1.12,unit=lux value=17.889999389648438 1582132674999013274
temperature,groupaddress=5/5/1,host=Hugin,source=1.1.8,unit=°C value=17.799999237060547 1582132663427587361
windowopen,groupaddress=1/0/1,host=Hugin,source=1.1.3 value=true 1582132630425581320
@ -19,7 +19,7 @@ the major cloud providers; this is roughly 4 release / 2 years.
**This plugin supports Kubernetes 1.11 and later.**
## Series Cardinality Warning
This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
@ -31,7 +31,7 @@ avoid cardinality issues:
- Monitor your databases [series cardinality][].
- Consult the [InfluxDB documentation][influx-docs] for the most up-to-date techniques.
## Configuration
```toml
[[inputs.kube_inventory]]
@ -81,7 +81,7 @@ avoid cardinality issues:
# fielddrop = ["terminated_reason"]
```
## Kubernetes Permissions
If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to list "persistentvolumes" and "nodes". You will then need to make an [aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.
@ -150,7 +150,7 @@ tls_cert = "/run/telegraf-kubernetes-cert"
tls_key = "/run/telegraf-kubernetes-key"
```
## Metrics
- kubernetes_daemonset
- tags:
@ -167,7 +167,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- number_unavailable
- updated_number_scheduled
- kubernetes_deployment
- tags:
- deployment_name
- namespace
@ -192,7 +192,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- ready
- port
- kubernetes_ingress
- tags:
- ingress_name
- namespace
@ -220,7 +220,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- allocatable_memory_bytes
- allocatable_pods
- kubernetes_persistentvolume
- tags:
- pv_name
- phase
@ -238,7 +238,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- fields:
- phase_type (int, [see below](#pvc-phase_type))
- kubernetes_pod_container
- tags:
- container_name
- namespace
@ -274,7 +274,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- port
- target_port
- kubernetes_statefulset
- tags:
- statefulset_name
- namespace
@ -289,7 +289,7 @@ tls_key = "/run/telegraf-kubernetes-key"
- spec_replicas
- observed_generation
### pv `phase_type`
The persistentvolume "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
@ -302,7 +302,7 @@ The persistentvolume "phase" is saved in the `phase` tag with a correlated numer
| available | 4 |
| unknown | 5 |
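The `phase` tag plus the numeric `phase_type` field make it easy to watch for volumes that leave the bound state. A sketch in InfluxQL; the time window and grouping are assumptions:

```sql
-- Latest phase of each persistent volume that is not currently bound
SELECT last("phase_type") FROM "kubernetes_persistentvolume"
WHERE time > now() - 1h AND "phase" != 'bound'
GROUP BY "pv_name"
```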
### pvc `phase_type`
The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
@ -313,9 +313,9 @@ The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
| pending | 2 |
| unknown | 3 |
## Example Output
```shell
kubernetes_configmap,configmap_name=envoy-config,namespace=default,resource_version=56593031 created=1544103867000000000i 1547597616000000000
kubernetes_daemonset,daemonset_name=telegraf,selector_select1=s1,namespace=logging number_unavailable=0i,desired_number_scheduled=11i,number_available=11i,number_misscheduled=8i,number_ready=11i,updated_number_scheduled=11i,created=1527758699000000000i,generation=16i,current_number_scheduled=11i 1547597616000000000
kubernetes_deployment,deployment_name=deployd,selector_select1=s1,namespace=default replicas_unavailable=0i,created=1544103082000000000i,replicas_available=1i 1547597616000000000
@ -8,8 +8,8 @@ should configure this plugin to talk to its locally running kubelet.
To find the IP address of the host you are running on, you can issue a command like the following:
```sh
curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
```
In this case we used the downward API to pass in `$POD_NAMESPACE`; `$HOSTNAME` is the hostname of the pod, which is set by the Kubernetes API.
@ -20,7 +20,7 @@ the major cloud providers; this is roughly 4 release / 2 years.
**This plugin supports Kubernetes 1.11 and later.**
## Series Cardinality Warning
This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
@ -32,7 +32,7 @@ avoid cardinality issues:
- Monitor your databases [series cardinality][].
- Consult the [InfluxDB documentation][influx-docs] for the most up-to-date techniques.
## Configuration
```toml
[[inputs.kubernetes]]
@ -62,7 +62,7 @@ avoid cardinality issues:
# insecure_skip_verify = false
```
## DaemonSet
For recommendations on running Telegraf as a DaemonSet see [Monitoring Kubernetes
Architecture][k8s-telegraf] or view the Helm charts:
@ -72,7 +72,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
- [Chronograf][]
- [Kapacitor][]
## Metrics
- kubernetes_node
- tags:
@ -97,7 +97,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
- runtime_image_fs_capacity_bytes
- runtime_image_fs_used_bytes
- kubernetes_pod_container
- tags:
- container_name
- namespace
@ -129,7 +129,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
- capacity_bytes
- used_bytes
- kubernetes_pod_network
- tags:
- namespace
- node_name
@ -140,9 +140,9 @@ Architecture][k8s-telegraf] or view the Helm charts:
- tx_bytes
- tx_errors
## Example Output
```shell
kubernetes_node
kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_available_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
kubernetes_pod_network,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr rx_bytes=120671099i,rx_errors=0i,tx_bytes=102451983i,tx_errors=0i 1476477530000000000
@ -5,18 +5,18 @@ This plugin provides a consumer for use with Arista Networks Latency Analyzer
Metrics are read from a stream of data via TCP through port 50001 on the
switch's management IP. The data is in Protocol Buffers format. For more information on Arista LANZ
- <https://www.arista.com/en/um-eos/eos-latency-analyzer-lanz>
This plugin uses Arista's SDK.
- <https://github.com/aristanetworks/goarista>
## Configuration
You will need to configure LANZ and enable streaming LANZ data.
- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>
```toml
[[inputs.lanz]]
@ -26,9 +26,9 @@ You will need to configure LANZ and enable streaming LANZ data.
]
```
## Metrics
For more details on the metrics see <https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>
- lanz_congestion_record:
- tags:
@ -47,7 +47,7 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
- tx_latency (integer)
- q_drop_count (integer)
- lanz_global_buffer_usage_record
- tags:
- entry_type
- source
@ -57,31 +57,31 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
- buffer_size (integer)
- duration (integer)
## Sample Queries
Get the max tx_latency for the last hour for all interfaces on all switches.
```sql
SELECT max("tx_latency") AS "max_tx_latency" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```
Get the max queue_size for the last hour for all interfaces on all switches.
```sql
SELECT max("queue_size") AS "max_queue_size" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```
Get the max buffer_size for over the last hour for all switches.
```sql
SELECT max("buffer_size") AS "max_buffer_size" FROM "global_buffer_usage_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname"
```
## Example output
```shell
lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=505i,duration=0i 1583341058300643815
lanz_congestion_record,entry_type=2,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 time_of_max_qlen=0i,tx_latency=564480i,q_drop_count=0i,timestamp=158334105824919i,queue_size=225i 1583341058300636045
lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=589i,duration=0i 1583341058300457464
lanz_congestion_record,entry_type=1,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 q_drop_count=0i,timestamp=158334105824919i,queue_size=232i,time_of_max_qlen=0i,tx_latency=584640i 1583341058300450302
```
@ -2,7 +2,7 @@
The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using SNMP. See [LeoFS Documentation / System Administration / System Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
## Configuration
```toml
# Sample Config:
@ -11,57 +11,60 @@ The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
servers = ["127.0.0.1:4010"]
```
## Measurements & Fields
### Statistics specific to the internals of LeoManager
#### Erlang VM of LeoManager
- 1 min Statistics
- num_of_processes
- total_memory_usage
- system_memory_usage
- processes_memory_usage
- ets_memory_usage
- used_allocated_memory
- allocated_memory
- 5 min Statistics
- num_of_processes_5min
- total_memory_usage_5min
- system_memory_usage_5min
- processes_memory_usage_5min
- ets_memory_usage_5min
- used_allocated_memory_5min
- allocated_memory_5min
### Statistics specific to the internals of LeoStorage
#### Erlang VM of LeoStorage
- 1 min Statistics
- num_of_processes
- total_memory_usage
- system_memory_usage
- processes_memory_usage
- ets_memory_usage
- used_allocated_memory
- allocated_memory
- 5 min Statistics
- num_of_processes_5min
- total_memory_usage_5min
- system_memory_usage_5min
- processes_memory_usage_5min
- ets_memory_usage_5min
- used_allocated_memory_5min
- allocated_memory_5min
#### Total Number of Requests for LeoStorage
- 1 min Statistics
- num_of_writes
- num_of_reads
- num_of_deletes
- 5 min Statistics
- num_of_writes_5min
- num_of_reads_5min
- num_of_deletes_5min
#### Total Number of Objects and Total Size of Objects
@ -103,35 +106,36 @@ Note: The following items are available since LeoFS v1.4.0:
Note: All items are available since LeoFS v1.4.0.
### Statistics specific to the internals of LeoGateway
#### Erlang VM of LeoGateway
- 1 min Statistics
  - num_of_processes
  - total_memory_usage
  - system_memory_usage
  - processes_memory_usage
  - ets_memory_usage
  - used_allocated_memory
  - allocated_memory
- 5 min Statistics
  - num_of_processes_5min
  - total_memory_usage_5min
  - system_memory_usage_5min
  - processes_memory_usage_5min
  - ets_memory_usage_5min
  - used_allocated_memory_5min
  - allocated_memory_5min
#### Total Number of Requests for LeoGateway
- 1 min Statistics
  - num_of_writes
  - num_of_reads
  - num_of_deletes
- 5 min Statistics
  - num_of_writes_5min
  - num_of_reads_5min
  - num_of_deletes_5min
#### Object Cache
@@ -140,15 +144,13 @@ Note: The all items are available since LeoFS v1.4.0.
- total_of_files
- total_cached_size
### Tags
All measurements have the following tags:
- node
### Example output
#### LeoManager
@@ -221,7 +223,7 @@ $ ./telegraf --config ./plugins/inputs/leofs/leo_storage.conf --input-filter leo
#### LeoGateway
```shell
$ ./telegraf --config ./plugins/inputs/leofs/leo_gateway.conf --input-filter leofs --test
> leofs, host=gateway_0, node=gateway_0@127.0.0.1
allocated_memory=87941120,

@@ -1,9 +1,9 @@
# Linux Sysctl FS Input Plugin
The linux_sysctl_fs input provides Linux system level file metrics. The documentation on these fields can be found at <https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.
Example output:
```shell
> linux_sysctl_fs,host=foo dentry-want-pages=0i,file-max=44222i,aio-max-nr=65536i,inode-preshrink-nr=0i,dentry-nr=64340i,dentry-unused-nr=55274i,file-nr=1568i,aio-nr=0i,inode-nr=35952i,inode-free-nr=12957i,dentry-age-limit=45i 1490982022000000000
```
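Each field above is read from the file of the same name under `/proc/sys/fs`. As an illustration (not the plugin's actual code), a minimal Python sketch of parsing the three-column `file-nr` file, whose first and last columns correspond to the `file-nr` and `file-max` values; `file-unused` here is just a label for the middle column, not a plugin field:

```python
def parse_file_nr(raw: str) -> dict:
    """Split /proc/sys/fs/file-nr into its three counters:
    allocated handles, allocated-but-unused handles, and the maximum."""
    allocated, unused, maximum = (int(x) for x in raw.split())
    return {"file-nr": allocated, "file-unused": unused, "file-max": maximum}

# Hypothetical file contents; on a real host: open("/proc/sys/fs/file-nr").read()
print(parse_file_nr("1568\t0\t44222"))  # {'file-nr': 1568, 'file-unused': 0, 'file-max': 44222}
```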

@@ -1,6 +1,6 @@
# Logparser Input Plugin
## Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the [`grok` data format][grok parser]
The `logparser` plugin streams and parses the given logfiles. Currently it
can parse "grok" patterns from logfiles, which also supports
@@ -8,12 +8,14 @@ regex patterns.
The `tail` plugin now provides all the functionality of the `logparser` plugin.
Most options can be translated directly to the `tail` plugin:
- For options in the `[inputs.logparser.grok]` section, the equivalent option
  in the `tail` input has a `grok_` prefix.
- The grok `measurement` option can be replaced using the standard plugin
`name_override` option.
Migration Example:
```diff
- [[inputs.logparser]]
- files = ["/var/log/apache/access.log"]
@@ -38,7 +40,7 @@ Migration Example:
+ data_format = "grok"
```
## Configuration
```toml
[[inputs.logparser]]
@@ -90,15 +92,14 @@ Migration Example:
# timezone = "Canada/Eastern"
```
## Grok Parser
Reference the [grok parser][] documentation to set up the grok section of the
configuration.
### Additional Resources
- <https://www.influxdata.com/telegraf-correlate-log-metrics-data-performance-bottlenecks/>
[tail]: /plugins/inputs/tail/README.md
[grok parser]: /plugins/parsers/grok/README.md

@@ -5,7 +5,7 @@ This plugin reads metrics exposed by
Logstash 5 and later is supported.
## Configuration
```toml
[[inputs.logstash]]
@@ -40,7 +40,7 @@ Logstash 5 and later is supported.
# "X-Special-Header" = "Special-Value"
```
## Metrics
Additional plugin stats may be collected (because logstash doesn't consistently expose all stats)
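Telegraf builds field names such as `mem_heap_used_in_bytes` by flattening the nested JSON that Logstash returns from its `/_node/stats` endpoint. A rough sketch of that flattening, with a trimmed, hypothetical slice of the `jvm` section:

```python
def flatten(prefix: str, obj: dict, out: dict = None) -> dict:
    """Flatten nested JSON stats into underscore-joined field names."""
    if out is None:
        out = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            flatten(name, value, out)
        else:
            out[name] = value
    return out

# Trimmed shape of the /_node/stats "jvm" section (values are hypothetical):
jvm = {"mem": {"heap_used_in_bytes": 207216328, "heap_used_percent": 19},
       "threads": {"count": 20}}
print(flatten("", jvm))  # keys: mem_heap_used_in_bytes, mem_heap_used_percent, threads_count
```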
@@ -80,7 +80,7 @@ Additional plugin stats may be collected (because logstash doesn't consistently
- gc_collectors_young_collection_count
- uptime_in_millis
- logstash_process
- tags:
- node_id
- node_name
@@ -112,7 +112,7 @@ Additional plugin stats may be collected (because logstash doesn't consistently
- filtered
- out
- logstash_plugins
- tags:
- node_id
- node_name
@@ -148,9 +148,9 @@ Additional plugin stats may be collected (because logstash doesn't consistently
- page_capacity_in_bytes
- queue_size_in_bytes
## Example Output
```shell
logstash_jvm,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt gc_collectors_old_collection_count=2,gc_collectors_old_collection_time_in_millis=100,gc_collectors_young_collection_count=26,gc_collectors_young_collection_time_in_millis=1028,mem_heap_committed_in_bytes=1056309248,mem_heap_max_in_bytes=1056309248,mem_heap_used_in_bytes=207216328,mem_heap_used_percent=19,mem_non_heap_committed_in_bytes=160878592,mem_non_heap_used_in_bytes=140838184,mem_pools_old_committed_in_bytes=899284992,mem_pools_old_max_in_bytes=899284992,mem_pools_old_peak_max_in_bytes=899284992,mem_pools_old_peak_used_in_bytes=189468088,mem_pools_old_used_in_bytes=189468088,mem_pools_survivor_committed_in_bytes=17432576,mem_pools_survivor_max_in_bytes=17432576,mem_pools_survivor_peak_max_in_bytes=17432576,mem_pools_survivor_peak_used_in_bytes=17432576,mem_pools_survivor_used_in_bytes=12572640,mem_pools_young_committed_in_bytes=139591680,mem_pools_young_max_in_bytes=139591680,mem_pools_young_peak_max_in_bytes=139591680,mem_pools_young_peak_used_in_bytes=139591680,mem_pools_young_used_in_bytes=5175600,threads_count=20,threads_peak_count=24,uptime_in_millis=739089 1566425244000000000
logstash_process,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt cpu_load_average_15m=0.03,cpu_load_average_1m=0.01,cpu_load_average_5m=0.04,cpu_percent=0,cpu_total_in_millis=83230,max_file_descriptors=16384,mem_total_virtual_in_bytes=3689132032,open_file_descriptors=118,peak_open_file_descriptors=118 1566425244000000000
logstash_events,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,source=debian-stretch-logstash6.virt duration_in_millis=0,filtered=0,in=0,out=0,queue_push_duration_in_millis=0 1566425244000000000

@@ -5,7 +5,7 @@ many requirements of leadership class HPC simulation environments.
This plugin monitors the Lustre file system using its entries in the proc filesystem.
## Configuration
```toml
# Read metrics from local Lustre service on OST, MDS
@@ -24,7 +24,7 @@ This plugin monitors the Lustre file system using its entries in the proc filesy
# ]
```
## Metrics
From `/proc/fs/lustre/obdfilter/*/stats` and `/proc/fs/lustre/osd-ldiskfs/*/stats`:
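Entries in these stats files share a common layout: a counter name, a sample count, a unit in brackets, and (for byte counters) min/max/sum columns. A sketch parser for that layout; the exact column set varies by Lustre version, so treat this as an approximation rather than the plugin's implementation:

```python
import re

# One line per counter: name, sample count, unit, then optional min/max/sum.
STAT_RE = re.compile(
    r"^(?P<name>\S+)\s+(?P<samples>\d+) samples \[(?P<unit>\w+)\]"
    r"(?:\s+(?P<min>\d+)\s+(?P<max>\d+)\s+(?P<sum>\d+))?")

def parse_stat_line(line: str):
    """Parse one Lustre stats line into (counter name, fields dict)."""
    m = STAT_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized stats line: {line!r}")
    fields = {"samples": int(m["samples"]), "unit": m["unit"]}
    for key in ("min", "max", "sum"):
        if m[key] is not None:
            fields[key] = int(m[key])
    return m["name"], fields

print(parse_stat_line("write_bytes 7423 samples [bytes] 8820 53048 310206488"))
```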
@@ -113,17 +113,16 @@ From `/proc/fs/lustre/mdt/*/job_stats`:
- jobstats_sync
- jobstats_unlink
## Troubleshooting
Check for the default or custom procfiles in the proc filesystem, and reference
the [Lustre Monitoring and Statistics Guide][guide]. This plugin does not
report all information from these files, only a limited set of items
corresponding to the above metric fields.
## Example Output
```shell
lustre2,host=oss2,jobid=42990218,name=wrk-OST0041 jobstats_ost_setattr=0i,jobstats_ost_sync=0i,jobstats_punch=0i,jobstats_read_bytes=4096i,jobstats_read_calls=1i,jobstats_read_max_size=4096i,jobstats_read_min_size=4096i,jobstats_write_bytes=310206488i,jobstats_write_calls=7423i,jobstats_write_max_size=53048i,jobstats_write_min_size=8820i 1556525847000000000
lustre2,host=mds1,jobid=42992017,name=wrk-MDT0000 jobstats_close=31798i,jobstats_crossdir_rename=0i,jobstats_getattr=34146i,jobstats_getxattr=15i,jobstats_link=0i,jobstats_mkdir=658i,jobstats_mknod=0i,jobstats_open=31797i,jobstats_rename=0i,jobstats_rmdir=0i,jobstats_samedir_rename=0i,jobstats_setattr=1788i,jobstats_setxattr=0i,jobstats_statfs=0i,jobstats_sync=0i,jobstats_unlink=0i 1556525828000000000

@@ -3,7 +3,7 @@
The Logical Volume Management (LVM) input plugin collects information about
physical volumes, volume groups, and logical volumes.
## Configuration
The `lvm` commands require elevated permissions. If the user has configured
sudo with the ability to run these commands, then set `use_sudo` to true.
@@ -15,7 +15,7 @@ sudo with the ability to run these commands, then set the `use_sudo` to true.
use_sudo = false
```
### Using sudo
If your account does not already have the ability to run commands
with passwordless sudo, then updates to the sudoers file are required. Below
@@ -31,7 +31,7 @@ Cmnd_Alias LVM = /usr/sbin/pvs *, /usr/sbin/vgs *, /usr/sbin/lvs *
Defaults!LVM !logfile, !syslog, !pam_session
```
## Metrics
Metrics are broken out by physical volume (pv), volume group (vg), and logical
volume (lv):
@@ -64,14 +64,16 @@ volume (lv):
- data_percent
- meta_percent
## Example Output
The following example shows a system with the root partition on an LVM group
as well as with a Docker thin-provisioned LVM group on a second drive:
```shell
> lvm_physical_vol,path=/dev/sda2,vol_group=vgroot free=0i,size=249510756352i,used=249510756352i,used_percent=100 1631823026000000000
> lvm_physical_vol,path=/dev/sdb,vol_group=docker free=3858759680i,size=128316342272i,used=124457582592i,used_percent=96.99277612525741 1631823026000000000
> lvm_vol_group,name=vgroot free=0i,logical_volume_count=1i,physical_volume_count=1i,size=249510756352i,snapshot_count=0i,used_percent=100 1631823026000000000
> lvm_vol_group,name=docker free=3858759680i,logical_volume_count=1i,physical_volume_count=1i,size=128316342272i,snapshot_count=0i,used_percent=96.99277612525741 1631823026000000000
> lvm_logical_vol,name=lvroot,vol_group=vgroot data_percent=0,metadata_percent=0,size=249510756352i 1631823026000000000
> lvm_logical_vol,name=thinpool,vol_group=docker data_percent=0.36000001430511475,metadata_percent=1.3300000429153442,size=121899057152i 1631823026000000000
```
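The `used_percent` values above can be reproduced from the raw `pvs` columns. A sketch that consumes `pvs --reportformat json --units b` output; the report structure and column names are assumed from lvm2's JSON report format, and the sample values are lifted from the example output:

```python
import json

def pv_fields(report_json: str):
    """Extract per-PV metrics from `pvs --reportformat json --units b` output."""
    doc = json.loads(report_json)
    for pv in doc["report"][0]["pv"]:
        size = int(pv["pv_size"].rstrip("B"))
        free = int(pv["pv_free"].rstrip("B"))
        used = size - free
        yield {"path": pv["pv_name"], "size": size, "free": free,
               "used": used, "used_percent": 100.0 * used / size}

# Trimmed sample report; on a real host this comes from running pvs via sudo.
sample = '''{"report": [{"pv": [
    {"pv_name": "/dev/sdb", "pv_size": "128316342272B", "pv_free": "3858759680B"}]}]}'''
for fields in pv_fields(sample):
    print(fields)
```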

@@ -2,7 +2,7 @@
Pulls campaign reports from the [Mailchimp API](https://developer.mailchimp.com/).
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage mailchimp`.
@@ -21,7 +21,7 @@ generate it using `telegraf --usage mailchimp`.
# campaign_id = ""
```
## Metrics
- mailchimp
- tags:

@@ -2,7 +2,7 @@
The MarkLogic Telegraf plugin gathers health status metrics from one or more hosts.
## Configuration
```toml
[[inputs.marklogic]]
@@ -24,7 +24,7 @@ The MarkLogic Telegraf plugin gathers health status metrics from one or more hos
# insecure_skip_verify = false
```
## Metrics
- marklogic
- tags:
@@ -56,9 +56,9 @@ The MarkLogic Telegraf plugin gathers health status metrics from one or more hos
- http_server_receive_bytes
- http_server_send_bytes
## Example Output
```shell
$> marklogic,host=localhost,id=2592913110757471141,source=ml1.local total_cpu_stat_iowait=0.0125649003311992,memory_process_swap_size=0i,host_size=380i,data_dir_space=28216i,query_read_load=0i,ncpus=1i,log_device_space=28216i,query_read_bytes=13947332i,merge_write_load=0i,http_server_receive_bytes=225893i,online=true,ncores=4i,total_cpu_stat_user=0.150778993964195,total_cpu_stat_system=0.598927974700928,total_cpu_stat_idle=99.2210006713867,memory_system_total=3947i,memory_system_free=2669i,memory_size=4096i,total_rate=14.7697010040283,http_server_send_bytes=0i,memory_process_size=903i,memory_process_rss=486i,merge_read_load=0i,total_load=0.00502600101754069 1566373000000000000
```

@@ -2,7 +2,7 @@
This plugin gathers statistics from a Mcrouter server.
## Configuration
```toml
# Read metrics from one or many mcrouter servers.
@@ -15,7 +15,7 @@ This plugin gathers statistics data from a Mcrouter server.
# timeout = "5s"
```
## Measurements & Fields
The fields from this plugin are gathered in the *mcrouter* measurement.
@@ -88,16 +88,14 @@ Fields:
* cmd_delete_out_all
* cmd_lease_set_out_all
## Tags
* Mcrouter measurements have the following tags:
  * server (the host name from which metrics are gathered)
## Example Output
```shell
$ ./telegraf --config telegraf.conf --input-filter mcrouter --test
mcrouter,server=localhost:11211 uptime=166,num_servers=1,num_servers_new=1,num_servers_up=0,num_servers_down=0,num_servers_closed=0,num_clients=1,num_suspect_servers=0,destination_batches_sum=0,destination_requests_sum=0,outstanding_route_get_reqs_queued=0,outstanding_route_update_reqs_queued=0,outstanding_route_get_avg_queue_size=0,outstanding_route_update_avg_queue_size=0,outstanding_route_get_avg_wait_time_sec=0,outstanding_route_update_avg_wait_time_sec=0,retrans_closed_connections=0,destination_pending_reqs=0,destination_inflight_reqs=0,destination_batch_size=0,asynclog_requests=0,proxy_reqs_processing=1,proxy_reqs_waiting=0,client_queue_notify_period=0,rusage_system=0.040966,rusage_user=0.020483,ps_num_minor_faults=2490,ps_num_major_faults=11,ps_user_time_sec=0.02,ps_system_time_sec=0.04,ps_vsize=697741312,ps_rss=10563584,fibers_allocated=0,fibers_pool_size=0,fibers_stack_high_watermark=0,successful_client_connections=18,duration_us=0,destination_max_pending_reqs=0,destination_max_inflight_reqs=0,retrans_per_kbyte_max=0,cmd_get_count=0,cmd_delete_out=0,cmd_lease_get=0,cmd_set=0,cmd_get_out_all=0,cmd_get_out=0,cmd_lease_set_count=0,cmd_other_out_all=0,cmd_lease_get_out=0,cmd_set_count=0,cmd_lease_set_out=0,cmd_delete_count=0,cmd_other=0,cmd_delete=0,cmd_get=0,cmd_lease_set=0,cmd_set_out=0,cmd_lease_get_count=0,cmd_other_out=0,cmd_lease_get_out_all=0,cmd_set_out_all=0,cmd_other_count=0,cmd_delete_out_all=0,cmd_lease_set_out_all=0 1453831884664956455
```
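Mcrouter answers the memcached-style ASCII `stats` command, and each reply line has the shape `STAT <name> <value>`, terminated by `END`. A minimal sketch of turning such a reply into the fields above; the TCP transport is omitted, and the numeric-type detection here is a simplification:

```python
def parse_stats(reply: str) -> dict:
    """Parse memcached-protocol `stats` reply lines into typed fields."""
    fields = {}
    for line in reply.splitlines():
        if not line.startswith("STAT "):
            continue  # skip the terminating END line
        _, name, value = line.split(None, 2)
        try:
            fields[name] = float(value) if "." in value else int(value)
        except ValueError:
            fields[name] = value  # leave non-numeric stats as strings
    return fields

reply = "STAT uptime 166\r\nSTAT rusage_system 0.040966\r\nEND\r\n"
print(parse_stats(reply))  # {'uptime': 166, 'rusage_system': 0.040966}
```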

@@ -1,15 +1,14 @@
# mdstat Input Plugin
The mdstat plugin gathers statistics about any Linux MD RAID arrays configured on the host
by reading /proc/mdstat. For a full list of available fields see the
/proc/mdstat section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
For a better idea of what each field represents, see the
[mdstat man page](https://raid.wiki.kernel.org/index.php/Mdstat).
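For illustration, a rough sketch of pulling a few of the fields and tags below out of one `/proc/mdstat` stanza; the layout is assumed from the man pages above, and real stanzas carry more variants (rebuild progress, bitmap lines) than this handles:

```python
import re

def parse_md_block(block: str) -> dict:
    """Parse one array stanza of /proc/mdstat (simplified layout)."""
    header, status = block.strip().splitlines()[:2]
    name, rest = header.split(" : ", 1)
    words = rest.split()             # e.g. ["active", "raid1", "sdn1[1]", "sdm1[0]"]
    devices = [re.sub(r"\[\d+\].*", "", w) for w in words[2:]]
    total, active = (int(x) for x in re.search(r"\[(\d+)/(\d+)\]", status).groups())
    return {"Name": name, "ActivityState": words[0],
            "Devices": ",".join(sorted(devices)),
            "BlocksTotal": int(status.split()[0]),
            "DisksTotal": total, "DisksActive": active}

print(parse_md_block(
    "md1 : active raid1 sdn1[1] sdm1[0]\n"
    "      231299072 blocks super 1.2 [2/2] [UU]"))
```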
Stat collection based on Prometheus' mdstat collection library at <https://github.com/prometheus/procfs/blob/master/mdstat.go>
## Configuration
```toml
# Get kernel statistics from /proc/mdstat
@@ -19,7 +18,7 @@ Stat collection based on Prometheus' mdstat collection library at https://github
# file_name = "/proc/mdstat"
```
## Measurements & Fields
- mdstat
- BlocksSynced (if the array is rebuilding/checking, this is the count of blocks that have been scanned)
@@ -32,16 +31,16 @@ Stat collection based on Prometheus' mdstat collection library at https://github
- DisksSpare (the current count of "spare" disks in the array)
- DisksTotal (total count of disks in the array)
## Tags
- mdstat
- ActivityState (`active` or `inactive`)
- Devices (comma separated list of devices that make up the array)
- Name (name of the array)
## Example Output
```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter mdstat --test
* Plugin: mdstat, Collection 1
> mdstat,ActivityState=active,Devices=sdm1\,sdn1,Name=md1 BlocksSynced=231299072i,BlocksSyncedFinishTime=0,BlocksSyncedPct=0,BlocksSyncedSpeed=0,BlocksTotal=231299072i,DisksActive=2i,DisksFailed=0i,DisksSpare=0i,DisksTotal=2i,DisksDown=0i 1617814276000000000

@@ -5,14 +5,15 @@ The mem plugin collects system memory metrics.
For a more complete explanation of the difference between *used* and
*actual_used* RAM, see [Linux ate my ram](http://www.linuxatemyram.com/).
## Configuration
```toml
# Read metrics about memory usage
[[inputs.mem]]
# no configuration
```
## Metrics
Available fields are dependent on platform.
@@ -55,7 +56,8 @@ Available fields are dependent on platform.
- write_back (integer, Linux)
- write_back_tmp (integer, Linux)
## Example Output
```shell
mem active=9299595264i,available=16818249728i,available_percent=80.41654254645131,buffered=2383761408i,cached=13316689920i,commit_limit=14751920128i,committed_as=11781156864i,dirty=122880i,free=1877688320i,high_free=0i,high_total=0i,huge_page_size=2097152i,huge_pages_free=0i,huge_pages_total=0i,inactive=7549939712i,low_free=0i,low_total=0i,mapped=416763904i,page_tables=19787776i,shared=670679040i,slab=2081071104i,sreclaimable=1923395584i,sunreclaim=157675520i,swap_cached=1302528i,swap_free=4286128128i,swap_total=4294963200i,total=20913917952i,used=3335778304i,used_percent=15.95004011996231,vmalloc_chunk=0i,vmalloc_total=35184372087808i,vmalloc_used=0i,wired=0i,write_back=0i,write_back_tmp=0i 1574712869000000000
```
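The percent fields in the example are simple ratios of the raw counters. On Linux, `used` is derived as total − free − buffered − cached (the gopsutil convention this plugin relies on); the sketch below reproduces the example's numbers from its own counters:

```python
def mem_fields(m: dict) -> dict:
    """Derive used / percent fields from raw byte counters (Linux convention)."""
    used = m["total"] - m["free"] - m["buffered"] - m["cached"]
    return {
        "used": used,
        "used_percent": 100.0 * used / m["total"],
        "available_percent": 100.0 * m["available"] / m["total"],
    }

# Counters lifted from the example output above:
sample = {"total": 20913917952, "free": 1877688320, "buffered": 2383761408,
          "cached": 13316689920, "available": 16818249728}
print(mem_fields(sample))  # used=3335778304, used_percent~15.95, available_percent~80.42
```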

@@ -2,7 +2,7 @@
This plugin gathers statistics from a Memcached server.
## Configuration
```toml
# Read metrics from one or many memcached servers.
@@ -14,7 +14,7 @@ This plugin gathers statistics data from a Memcached server.
# unix_sockets = ["/var/run/memcached.sock"]
```
## Measurements & Fields
The fields from this plugin are gathered in the *memcached* measurement.
@@ -63,22 +63,22 @@ Fields:
Description of gathered fields taken from [here](https://github.com/memcached/memcached/blob/master/doc/protocol.txt).
## Tags
* Memcached measurements have the following tags:
  * server (the host name from which metrics are gathered)
## Sample Queries
The following query returns the average get hit and miss ratios, the average size and number of cached items, and the average connection count per server.
```sql
SELECT mean(get_hits) / mean(cmd_get) as get_ratio, mean(get_misses) / mean(cmd_get) as get_misses_ratio, mean(bytes), mean(curr_items), mean(curr_connections) FROM memcached WHERE time > now() - 1h GROUP BY server
```
## Example Output
```shell
$ ./telegraf --config telegraf.conf --input-filter memcached --test
memcached,server=localhost:11211 get_hits=1,get_misses=2,evictions=0,limit_maxbytes=0,bytes=10,uptime=3600,curr_items=2,total_items=2,curr_connections=1,total_connections=2,connection_structures=1,cmd_get=2,cmd_set=1,delete_hits=0,delete_misses=0,incr_hits=0,incr_misses=0,decr_hits=0,decr_misses=0,cas_hits=0,cas_misses=0,bytes_read=10,bytes_written=10,threads=1,conn_yields=0 1453831884664956455
```

@@ -3,7 +3,7 @@
This input plugin gathers metrics from Mesos.
For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
## Configuration
```toml
# Telegraf plugin for gathering metrics from N Mesos masters
@@ -53,280 +53,282 @@ For more information, please check the [Mesos Observability Metrics](http://meso
By default this plugin is not configured to gather metrics from Mesos. Since a Mesos cluster can be deployed in numerous ways, it does not provide any default
values. The user needs to specify the master/slave nodes this plugin will gather metrics from.
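Each node serves its metrics as one flat JSON object from its `/metrics/snapshot` endpoint (e.g. `http://<master>:5050/metrics/snapshot`). The groups listed below are curated subsets of those keys; as a simplified sketch (the plugin's real grouping is list-based, and the values here are hypothetical), selecting one group by key prefix looks like:

```python
# Flat JSON as returned by GET /metrics/snapshot (hypothetical values):
snapshot = {
    "master/cpus_total": 4.0,
    "master/cpus_used": 1.5,
    "master/uptime_secs": 8124.7,
    "system/load_1min": 0.12,
    "registrar/queued_operations": 0.0,
}

def select_group(snapshot: dict, prefix: str) -> dict:
    """Keep only metrics whose key starts with the given group prefix."""
    return {k: v for k, v in snapshot.items() if k.startswith(prefix + "/")}

print(select_group(snapshot, "system"))  # {'system/load_1min': 0.12}
```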
## Measurements & Fields
Mesos master metric groups
- resources
  - master/cpus_percent
  - master/cpus_used
  - master/cpus_total
  - master/cpus_revocable_percent
  - master/cpus_revocable_total
  - master/cpus_revocable_used
  - master/disk_percent
  - master/disk_used
  - master/disk_total
  - master/disk_revocable_percent
  - master/disk_revocable_total
  - master/disk_revocable_used
  - master/gpus_percent
  - master/gpus_used
  - master/gpus_total
  - master/gpus_revocable_percent
  - master/gpus_revocable_total
  - master/gpus_revocable_used
  - master/mem_percent
  - master/mem_used
  - master/mem_total
  - master/mem_revocable_percent
  - master/mem_revocable_total
  - master/mem_revocable_used
- master
  - master/elected
  - master/uptime_secs
- system
  - system/cpus_total
  - system/load_15min
  - system/load_5min
  - system/load_1min
  - system/mem_free_bytes
  - system/mem_total_bytes
- slaves
  - master/slave_registrations
  - master/slave_removals
  - master/slave_reregistrations
  - master/slave_shutdowns_scheduled
  - master/slave_shutdowns_canceled
  - master/slave_shutdowns_completed
  - master/slaves_active
  - master/slaves_connected
  - master/slaves_disconnected
  - master/slaves_inactive
  - master/slave_unreachable_canceled
  - master/slave_unreachable_completed
  - master/slave_unreachable_scheduled
  - master/slaves_unreachable
- frameworks
  - master/frameworks_active
  - master/frameworks_connected
  - master/frameworks_disconnected
  - master/frameworks_inactive
  - master/outstanding_offers
- framework offers
  - master/frameworks/subscribed
  - master/frameworks/calls_total
  - master/frameworks/calls
  - master/frameworks/events_total
  - master/frameworks/events
  - master/frameworks/operations_total
  - master/frameworks/operations
  - master/frameworks/tasks/active
  - master/frameworks/tasks/terminal
  - master/frameworks/offers/sent
  - master/frameworks/offers/accepted
  - master/frameworks/offers/declined
  - master/frameworks/offers/rescinded
  - master/frameworks/roles/suppressed
- tasks
  - master/tasks_error
  - master/tasks_failed
  - master/tasks_finished
  - master/tasks_killed
  - master/tasks_lost
  - master/tasks_running
  - master/tasks_staging
  - master/tasks_starting
  - master/tasks_dropped
  - master/tasks_gone
  - master/tasks_gone_by_operator
  - master/tasks_killing
  - master/tasks_unreachable
- messages
  - master/invalid_executor_to_framework_messages
  - master/invalid_framework_to_executor_messages
  - master/invalid_status_update_acknowledgements
  - master/invalid_status_updates
  - master/dropped_messages
  - master/messages_authenticate
  - master/messages_deactivate_framework
  - master/messages_decline_offers
  - master/messages_executor_to_framework
  - master/messages_exited_executor
  - master/messages_framework_to_executor
  - master/messages_kill_task
  - master/messages_launch_tasks
  - master/messages_reconcile_tasks
  - master/messages_register_framework
  - master/messages_register_slave
  - master/messages_reregister_framework
  - master/messages_reregister_slave
  - master/messages_resource_request
  - master/messages_revive_offers
  - master/messages_status_update
  - master/messages_status_update_acknowledgement
  - master/messages_unregister_framework
  - master/messages_unregister_slave
  - master/messages_update_slave
  - master/recovery_slave_removals
  - master/slave_removals/reason_registered
  - master/slave_removals/reason_unhealthy
  - master/slave_removals/reason_unregistered
  - master/valid_framework_to_executor_messages
  - master/valid_status_update_acknowledgements
  - master/valid_status_updates
  - master/task_lost/source_master/reason_invalid_offers
  - master/task_lost/source_master/reason_slave_removed
  - master/task_lost/source_slave/reason_executor_terminated
  - master/valid_executor_to_framework_messages
  - master/invalid_operation_status_update_acknowledgements
  - master/messages_operation_status_update_acknowledgement
  - master/messages_reconcile_operations
  - master/messages_suppress_offers
  - master/valid_operation_status_update_acknowledgements
- evqueue
  - master/event_queue_dispatches
  - master/event_queue_http_requests
  - master/event_queue_messages
  - master/operator_event_stream_subscribers
- registrar
  - registrar/state_fetch_ms
  - registrar/state_store_ms
  - registrar/state_store_ms/max
  - registrar/state_store_ms/min
  - registrar/state_store_ms/p50
  - registrar/state_store_ms/p90
  - registrar/state_store_ms/p95
  - registrar/state_store_ms/p99
  - registrar/state_store_ms/p999
  - registrar/state_store_ms/p9999
  - registrar/state_store_ms/count
  - registrar/log/ensemble_size
  - registrar/log/recovered
  - registrar/queued_operations
  - registrar/registry_size_bytes
- allocator
  - allocator/allocation_run_ms
  - allocator/allocation_run_ms/count
  - allocator/allocation_run_ms/max
  - allocator/allocation_run_ms/min
  - allocator/allocation_run_ms/p50
  - allocator/allocation_run_ms/p90
  - allocator/allocation_run_ms/p95
  - allocator/allocation_run_ms/p99
  - allocator/allocation_run_ms/p999
  - allocator/allocation_run_ms/p9999
  - allocator/allocation_runs
  - allocator/allocation_run_latency_ms
  - allocator/allocation_run_latency_ms/count
  - allocator/allocation_run_latency_ms/max
  - allocator/allocation_run_latency_ms/min
  - allocator/allocation_run_latency_ms/p50
  - allocator/allocation_run_latency_ms/p90
  - allocator/allocation_run_latency_ms/p95
  - allocator/allocation_run_latency_ms/p99
  - allocator/allocation_run_latency_ms/p999
  - allocator/allocation_run_latency_ms/p9999
  - allocator/roles/shares/dominant
  - allocator/event_queue_dispatches
  - allocator/offer_filters/roles/active
  - allocator/quota/roles/resources/offered_or_allocated
  - allocator/quota/roles/resources/guarantee
  - allocator/resources/cpus/offered_or_allocated
  - allocator/resources/cpus/total
  - allocator/resources/disk/offered_or_allocated
  - allocator/resources/disk/total
  - allocator/resources/mem/offered_or_allocated
  - allocator/resources/mem/total
Mesos slave metric groups
- resources
- slave/cpus_percent
- slave/cpus_used
- slave/cpus_total
- slave/cpus_revocable_percent
- slave/cpus_revocable_total
- slave/cpus_revocable_used
- slave/disk_percent
- slave/disk_used
- slave/disk_total
- slave/disk_revocable_percent
- slave/disk_revocable_total
- slave/disk_revocable_used
- slave/gpus_percent
- slave/gpus_used
- slave/gpus_total
- slave/gpus_revocable_percent
- slave/gpus_revocable_total
- slave/gpus_revocable_used
- slave/mem_percent
- slave/mem_used
- slave/mem_total
- slave/mem_revocable_percent
- slave/mem_revocable_total
- slave/mem_revocable_used
- agent
- slave/registered
- slave/uptime_secs
- system
- system/cpus_total
- system/load_15min
- system/load_5min
- system/load_1min
- system/mem_free_bytes
- system/mem_total_bytes
- executors
- containerizer/mesos/container_destroy_errors
- slave/container_launch_errors
- slave/executors_preempted
- slave/frameworks_active
- slave/executor_directory_max_allowed_age_secs
- slave/executors_registering
- slave/executors_running
- slave/executors_terminated
- slave/executors_terminating
- slave/recovery_errors
- tasks
- slave/tasks_failed
- slave/tasks_finished
- slave/tasks_killed
- slave/tasks_lost
- slave/tasks_running
- slave/tasks_staging
- slave/tasks_starting
- messages
- slave/invalid_framework_messages
- slave/invalid_status_updates
- slave/valid_framework_messages
- slave/valid_status_updates
## Tags
- All master/slave measurements have the following tags:
- server (network location of server: `host:port`)
- url (URL origin of server: `scheme://host:port`)
- role (master/slave)
- All master measurements have the extra tags:
- state (leader/follower)
## Example Output
```shell
$ telegraf --config ~/mesos.conf --input-filter mesos --test
* Plugin: mesos, Collection 1
mesos,role=master,state=leader,host=172.17.8.102,server=172.17.8.101
master/mem_revocable_used=0,master/mem_total=1002,
master/mem_used=0,master/messages_authenticate=0,
master/messages_deactivate_framework=0 ...
```
This plugin is known to support Minecraft Java Edition versions 1.11 - 1.14.
When using a version of Minecraft earlier than 1.13, be aware that the values
for some criteria have changed and may need to be modified.
## Server Setup
Enable [RCON][] on the Minecraft server, add this to your server configuration
in the [server.properties][] file:
from the server console, or over an RCON connection.
When getting started, pick an easy-to-test objective. This command will add an
objective that counts the number of times a player has jumped:
```sh
/scoreboard objectives add jumps minecraft.custom:minecraft.jump
```
Once a player has triggered the event they will be added to the scoreboard;
you can then list all players with recorded scores:
```sh
/scoreboard players list
```
View the current scores with a command, substituting your player name:
```sh
/scoreboard players list Etho
```
## Configuration
```toml
[[inputs.minecraft]]
password = ""
```
## Metrics
- minecraft
- tags:
- fields:
- `<objective_name>` (integer, count)
## Sample Queries
Get the number of jumps per player in the last hour:
```sql
SELECT SPREAD("jumps") FROM "minecraft" WHERE time > now() - 1h GROUP BY "player"
```
## Example Output
```shell
minecraft,player=notch,source=127.0.0.1,port=25575 jumps=178i 1498261397000000000
minecraft,player=dinnerbone,source=127.0.0.1,port=25575 deaths=1i,jumps=1999i,cow_kills=1i 1498261397000000000
minecraft,player=jeb,source=127.0.0.1,port=25575 d_pickaxe=1i,damage_dealt=80i,d_sword=2i,hunger=20i,health=20i,kills=1i,level=33i,jumps=264i,armor=15i 1498261397000000000
The Modbus plugin collects Discrete Inputs, Coils, Input Registers and Holding
Registers via Modbus TCP or Modbus RTU/ASCII.
## Configuration
```toml
[[inputs.modbus]]
# close_connection_after_gather = false
```
## Notes
You can debug Modbus connection issues by enabling `debug_connection`. To see these debug messages, Telegraf has to be started with debugging enabled (i.e. with the `--debug` option). Please be aware that connection tracing will produce a lot of messages and should **NOT** be used in production environments.
Please use `pause_between_requests` with care. Especially make sure that the total gather time, including the pause(s), does not exceed the configured collection interval. Note that pauses add up if multiple requests are sent!
## Metrics
Metrics are custom and configured using the `discrete_inputs`, `coils`,
`holding_register` and `input_registers` options.
## Usage of `data_type`
The field `data_type` defines the representation of the data value on input from the modbus registers.
The input values are then converted from the given `data_type` to a type that is appropriate when
integer or floating-point number. The size of the output type is assumed to be large enough
for all supported input types. The mapping from the input type to the output type is fixed
and cannot be configured.
### Integers: `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64`, `UINT64`
These types are used for integer input values. Select the one that matches your modbus data source.
### Floating Point: `FLOAT32-IEEE`, `FLOAT64-IEEE`
Use these types if your modbus registers contain a value that is encoded in this format. These types
always include the sign, so no unsigned variant exists.
### Fixed Point: `FIXED`, `UFIXED` (`FLOAT32`)
These types are handled as an integer type on input, but are converted to floating point representation
for further processing (e.g. scaling). Use one of these types when the input value is a decimal fixed point
with N decimal places'.
(FLOAT32 is deprecated and should not be used any more. UFIXED provides the same conversion
from unsigned values).
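The fixed-point decoding described above can be sketched in Python (a minimal illustration, not the plugin's actual code; the register width and sample values are made up for the example):

```python
def decode_ufixed(raw: int, decimal_places: int) -> float:
    """Interpret an unsigned register value as a decimal fixed-point
    number with the given number of decimal places (e.g. 1234 -> 12.34)."""
    return raw / 10 ** decimal_places


def decode_fixed(raw: int, bits: int, decimal_places: int) -> float:
    """Like decode_ufixed, but first interpret the raw value as a signed
    two's-complement integer of the given register width."""
    if raw >= 1 << (bits - 1):  # sign bit set -> negative value
        raw -= 1 << bits
    return raw / 10 ** decimal_places


print(decode_ufixed(1234, 2))       # 12.34
print(decode_fixed(0xFFFF, 16, 1))  # -0.1 (0xFFFF is -1 as INT16)
```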
## Troubleshooting
#### Strange data
Modbus documentation is often a mess. People confuse memory address (starts at one) and register address (starts at zero), or are unclear about the word order used. Furthermore, there are some non-standard implementations that also
swap the bytes within the register word (16-bit).
In case you see strange values, the `byte_order` might be off.
If your data still looks corrupted, please post your configuration, error message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of the telegraf support channels (forum, slack or as issue).
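To make the word-order issue concrete, here is a small Python sketch showing how the same two registers decode to very different `UINT32` values under big-endian (`ABCD`) versus word-swapped (`CDAB`) order (only the word swap is shown; byte-swapped variants such as `BADC`/`DCBA` are omitted):

```python
import struct

def decode_uint32(registers, byte_order="ABCD"):
    """Decode two 16-bit Modbus registers into one UINT32 value.
    "ABCD" = registers in big-endian word order, "CDAB" = word-swapped."""
    if byte_order == "CDAB":
        registers = registers[::-1]  # swap the two 16-bit words
    raw = struct.pack(">HH", *registers)
    return struct.unpack(">I", raw)[0]

regs = [0x0001, 0x0000]  # two registers as read from the device
print(decode_uint32(regs, "ABCD"))  # 65536
print(decode_uint32(regs, "CDAB"))  # 1
```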
### Workarounds
Some Modbus devices need special read characteristics when reading data and will fail otherwise. For example, there are certain serial devices that need a certain pause between register read requests. Others might only allow a limited number of simultaneous connections, like serial devices or some ModbusTCP devices. In case you need to access those devices in parallel you might want to disconnect immediately after the plugin has finished reading.
To allow this plugin to also handle those "special" devices, there are the `workarounds` configuration options. In case your documentation states certain read requirements, or you get read timeouts or other read errors, you might want to try one or more workaround options.
If you find that other/more workarounds are required for your device, please let us know.
In case your device needs a workaround that is not yet implemented, please open an issue or submit a pull-request.
## Example Output
```sh
$ ./telegraf -config telegraf.conf -input-filter modbus -test
Minimum Version of Monit tested with is 5.16.
[monit]: https://mmonit.com/
[httpd]: https://mmonit.com/monit/documentation/monit.html#TCP-PORT
## Configuration
```toml
[[inputs.monit]]
# insecure_skip_verify = false
```
## Metrics
- monit_filesystem
- tags:
- inode_usage
- inode_total
- monit_directory
- tags:
- address
- version
- size
- permissions
- monit_process
- tags:
- address
- version
- protocol
- type
- monit_system
- tags:
- address
- version
- status_code
- monitoring_status_code
- monitoring_mode_code
- permissions
- monit_program
- tags:
- address
- version
- monitoring_status_code
- monitoring_mode_code
- monit_program
- tags:
- address
- version
- monitoring_status_code
- monitoring_mode_code
## Example Output
```shell
monit_file,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog_pid,source=xyzzy.local,status=running,version=5.20.0 mode=644i,monitoring_mode_code=0i,monitoring_status_code=1i,pending_action_code=0i,size=3i,status_code=0i 1579735047000000000
monit_process,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog,source=xyzzy.local,status=running,version=5.20.0 children=0i,cpu_percent=0,cpu_percent_total=0,mem_kb=3148i,mem_kb_total=3148i,mem_percent=0.2,mem_percent_total=0.2,monitoring_mode_code=0i,monitoring_status_code=1i,parent_pid=1i,pending_action_code=0i,pid=318i,status_code=0i,threads=4i 1579735047000000000
monit_program,monitoring_mode=active,monitoring_status=initializing,pending_action=none,platform_name=Linux,service=echo,source=xyzzy.local,status=running,version=5.20.0 monitoring_mode_code=0i,monitoring_status_code=2i,pending_action_code=0i,program_started=0i,program_status=0i,status_code=0i 1579735047000000000
useful creating custom metrics from the `/sys` or `/proc` filesystems.
> Note: If you wish to parse metrics from a single file formatted in one of the supported
> [input data formats][], you should use the [file][] input plugin instead.
## Configuration
```toml
[[inputs.multifile]]
## Base directory where telegraf will look for files.
```
Each file table can contain the following options:
* `file`:
Path of the file to be parsed, relative to the `base_dir`.
* `dest`:
Name of the field/tag key, defaults to `$(basename file)`.
* `conversion`:
Data format used to parse the file contents:
* `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Effectively just moves the decimal left X places. For example a value of `123` with `float(2)` will result in `1.23`.
* `float`: Converts the value into a float with no adjustment. Same as `float(0)`.
* `int`: Converts the value into an integer.
* `string`, `""`: No conversion.
* `bool`: Converts the value into a boolean.
* `tag`: File content is used as a tag.
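As an illustration, the conversions above could be expressed in Python roughly as follows (a hypothetical helper, not the plugin's implementation; the exact set of truthy strings accepted by `bool` is an assumption):

```python
def convert(value: str, conversion: str = ""):
    """Apply a multifile-style conversion to raw file contents."""
    value = value.strip()
    if conversion.startswith("float(") and conversion.endswith(")"):
        places = int(conversion[6:-1])      # e.g. "float(2)" -> 2
        return float(value) / 10 ** places  # move the decimal left
    if conversion == "float":
        return float(value)
    if conversion == "int":
        return int(value)
    if conversion == "bool":
        return value.lower() in ("1", "true", "yes")
    return value                            # string/"": no conversion

print(convert("123", "float(2)"))  # 1.23
```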
### Example Output
This example shows a BME280 connected to a Raspberry Pi, using the sample config.
```sh
multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 1547202076000000000
```
To reproduce this, connect a BME280 to the board's GPIO pins and register the BME280 device driver:
```sh
cd /sys/bus/i2c/devices/i2c-1
echo bme280 0x76 > new_device
```
The kernel driver provides the following files in `/sys/bus/i2c/devices/1-0076/iio:device0`:
* `in_humidityrelative_input`: `48900`
* `in_pressure_input`: `101.343285156`
* `in_temp_input`: `20400`
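Reading one of these sysfs attributes and shifting the decimal point, the way the sample config's `float(X)` conversions would, can be sketched in Python (the helper name is invented for illustration):

```python
from pathlib import Path

def read_scaled(base: Path, name: str, decimal_places: int = 0) -> float:
    """Read one sysfs attribute and move the decimal point left,
    mirroring the float(X) conversions in the sample configuration."""
    raw = (base / name).read_text().strip()
    return float(raw) / 10 ** decimal_places

# On the Raspberry Pi from the example above:
# device = Path("/sys/bus/i2c/devices/1-0076/iio:device0")
# read_scaled(device, "in_temp_input", 3)  # 20400 -> 20.4
```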
This plugin gathers the statistic data from MySQL server
* File events statistics
* Table schema statistics
## Configuration
```toml
[[inputs.mysql]]
# insecure_skip_verify = false
```
### Metric Version
When `metric_version = 2`, a variety of field type issues are corrected as well
as naming inconsistencies. If you have existing data on the original version
InfluxDB due to the change of types. For this reason, you should keep the
If preserving your old data is not required you may wish to drop conflicting
measurements:
```sql
DROP SERIES from mysql
DROP SERIES from mysql_variables
```

Otherwise, migration can be performed using the following steps:
1. Duplicate your `mysql` plugin configuration and add a `name_suffix` and
   `metric_version = 2`; this will result in collection using both the old and new
   style concurrently:
```toml
[[inputs.mysql]]
servers = ["tcp(127.0.0.1:3306)/"]
```
2. Upgrade all affected Telegraf clients to version >=1.6.
New measurements will be created with the `name_suffix`, for example:
* `mysql_v2`
* `mysql_variables_v2`
3. Update charts, alerts, and other supporting code to the new format.
4. You can now remove the old `mysql` plugin configuration and remove old
historical data to the default name. Do this only after retiring the old
measurement name.
1. Use the technique described above to write to multiple locations:
```toml
[[inputs.mysql]]
servers = ["tcp(127.0.0.1:3306)/"]
servers = ["tcp(127.0.0.1:3306)/"]
```
2. Create a TICKScript to copy the historical data:
```sql
dbrp "telegraf"."autogen"
batch
.retentionPolicy('autogen')
.measurement('mysql')
```
3. Define a task for your script:
```sh
kapacitor define copy-measurement -tick copy-measurement.task
```
4. Run the task over the data you would like to migrate:
```sh
kapacitor replay-live batch -start 2018-03-30T20:00:00Z -stop 2018-04-01T12:00:00Z -rec-time -task copy-measurement
```
5. Verify copied data and repeat for other measurements.
## Metrics
* Global statuses - all numeric and boolean values of `SHOW GLOBAL STATUS`
* Global variables - all numeric and boolean values of `SHOW GLOBAL VARIABLES`
* Slave status - metrics from `SHOW SLAVE STATUS` the metrics are gathered when
then everything works differently, this metric does not work with multi-source
replication, unless you set `gather_all_slave_channels = true`. For MariaDB,
`mariadb_dialect = true` should be set to address the field names and commands
differences.
* slave_[column name]
* Binary logs - all metrics including size and count of all binary files.
  This needs to be turned on in the configuration.
* binary_size_bytes(int, number)
* binary_files_count(int, number)
* Process list - connection metrics from processlist for each user. It has the following fields:
* connections(int, number)
* User Statistics - connection metrics from user statistics for each user. It has the following fields:
* access_denied
* binlog_bytes_written
* busy_time
* bytes_received
* bytes_sent
* commit_transactions
* concurrent_connections
* connected_time
* cpu_time
* denied_connections
* empty_queries
* hostlost_connections
* other_commands
* rollback_transactions
* rows_fetched
* rows_updated
* select_commands
* server
* table_rows_read
* total_connections
* total_ssl_connections
* update_commands
* user
* Perf Table IO waits - total count and time of I/O wait events for each table
  and process. It has the following fields:
* table_io_waits_total_fetch(float, number)
* table_io_waits_total_insert(float, number)
* table_io_waits_total_update(float, number)
* table_io_waits_total_delete(float, number)
* table_io_waits_seconds_total_fetch(float, milliseconds)
* table_io_waits_seconds_total_insert(float, milliseconds)
* table_io_waits_seconds_total_update(float, milliseconds)
* table_io_waits_seconds_total_delete(float, milliseconds)
* Perf index IO waits - total count and time of I/O wait events for each index
  and process. It has the following fields:
* index_io_waits_total_fetch(float, number)
* index_io_waits_seconds_total_fetch(float, milliseconds)
* index_io_waits_total_insert(float, number)
* index_io_waits_total_update(float, number)
* index_io_waits_total_delete(float, number)
* index_io_waits_seconds_total_insert(float, milliseconds)
* index_io_waits_seconds_total_update(float, milliseconds)
* index_io_waits_seconds_total_delete(float, milliseconds)
* Info schema autoincrement statuses - autoincrement fields and max values
  for them. It has the following fields:
* auto_increment_column(int, number)
* auto_increment_column_max(int, number)
* InnoDB metrics - all metrics of information_schema.INNODB_METRICS with a status "enabled"
* Perf table lock waits - gathers total number and time for SQL and external
  lock wait events for each table and operation. It has the following fields.
  The unit of each field varies by the tags.
* read_normal(float, number/milliseconds)
* read_with_shared_locks(float, number/milliseconds)
* read_high_priority(float, number/milliseconds)
* read_no_insert(float, number/milliseconds)
* write_normal(float, number/milliseconds)
* write_allow_write(float, number/milliseconds)
* write_concurrent_insert(float, number/milliseconds)
* write_low_priority(float, number/milliseconds)
* read(float, number/milliseconds)
* write(float, number/milliseconds)
* Perf events waits - gathers total time and number of event waits
* events_waits_total(float, number)
* events_waits_seconds_total(float, milliseconds)
* Perf file events statuses - gathers file events statuses
* file_events_total(float,number)
* file_events_seconds_total(float, milliseconds)
* file_events_bytes_total(float, bytes)
* Perf events statements - gathers attributes of each event
* events_statements_total(float, number)
* events_statements_seconds_total(float, milliseconds)
* events_statements_errors_total(float, number)
* events_statements_warnings_total(float, number)
* events_statements_rows_affected_total(float, number)
* events_statements_rows_sent_total(float, number)
* events_statements_rows_examined_total(float, number)
* events_statements_tmp_tables_total(float, number)
* events_statements_tmp_disk_tables_total(float, number)
* events_statements_sort_merge_passes_totals(float, number)
* events_statements_sort_rows_total(float, number)
* events_statements_no_index_used_total(float, number)
* Table schema - gathers statistics of each schema. It has the following measurements:
* info_schema_table_rows(float, number)
* info_schema_table_size_data_length(float, number)
* info_schema_table_size_index_length(float, number)
* info_schema_table_size_data_free(float, number)
* info_schema_table_version(float, number)
## Tags
* All measurements have the following tags:
* server (the host name from which the metrics are gathered)
* The Process list measurement has the following tags:
* user (username for whom the metrics are gathered)
* The User Statistics measurement has the following tags:
* user (username for whom the metrics are gathered)
* The Perf table IO waits measurement has the following tags:
* schema
* name (object name for event or process)
* Perf index IO waits has the following tags:
* schema
* name
* index
* Info schema autoincrement statuses has the following tags:
* schema
* table
* column
* Perf table lock waits has the following tags:
* schema
* table
* sql_lock_waits_total(fields including this tag have numeric unit)
* external_lock_waits_total(fields including this tag have numeric unit)
* sql_lock_waits_seconds_total(fields including this tag have millisecond unit)
* external_lock_waits_seconds_total(fields including this tag have millisecond unit)
* Perf events statements has the following tags:
* event_name
* Perf file events statuses has the following tags:
* event_name
* mode
* Perf file events statements has the following tags:
* schema
* digest
* digest_text
* Table schema has the following tags:
* schema
* table
* component
* type
* engine
* row_format
* create_options