chore: clean up all markdown lint errors for input plugins i through m (#10173)

Mya authored 2021-11-24 12:18:53 -07:00, committed by GitHub
parent de6c2f74d6
commit 84562877cc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
47 changed files with 1171 additions and 1029 deletions


@@ -6,7 +6,7 @@ The icinga2 plugin uses the icinga2 remote API to gather status on running
 services and hosts. You can read Icinga2's documentation for their remote API
 [here](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api)
-### Configuration:
+## Configuration
 ```toml
 # Description
@@ -32,13 +32,13 @@ services and hosts. You can read Icinga2's documentation for their remote API
 # insecure_skip_verify = true
 ```
-### Measurements & Fields:
+## Measurements & Fields
 - All measurements have the following fields:
 - name (string)
 - state_code (int)
-### Tags:
+## Tags
 - All measurements have the following tags:
 - check_command - The short name of the check command
@@ -49,7 +49,7 @@ services and hosts. You can read Icinga2's documentation for their remote API
 - scheme - The icinga2 protocol (http/https)
 - server - The server the check_command is running for
-### Sample Queries:
+## Sample Queries
 ```sql
 SELECT * FROM "icinga2_services" WHERE state_code = 0 AND time > now() - 24h // Service with OK status
@@ -58,9 +58,9 @@ SELECT * FROM "icinga2_services" WHERE state_code = 2 AND time > now() - 24h //
 SELECT * FROM "icinga2_services" WHERE state_code = 3 AND time > now() - 24h // Service with UNKNOWN status
 ```
-### Example Output:
+## Example Output
-```
+```text
 $ ./telegraf -config telegraf.conf -input-filter icinga2 -test
 icinga2_hosts,display_name=router-fr.eqx.fr,check_command=hostalive-custom,host=test-vm,source=localhost,port=5665,scheme=https,state=ok name="router-fr.eqx.fr",state=0 1492021603000000000
 ```
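The `state_code` values in the sample queries above follow the conventional Icinga2/Nagios service states. A minimal illustrative mapping (not part of the plugin, added here only for reference):

```python
# Conventional Icinga2/Nagios service state codes, matching the values
# used in the sample queries above (0=OK ... 3=UNKNOWN).
STATE_NAMES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}


def describe(state_code: int) -> str:
    """Return the conventional name for a service state code."""
    return STATE_NAMES.get(state_code, "UNDEFINED")
```

For example, `describe(0)` corresponds to the "Service with OK status" query above.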


@@ -6,14 +6,14 @@ system. These are the counters that can be found in
 **Supported Platforms**: Linux
-### Configuration
+## Configuration
 ```toml
 [[inputs.infiniband]]
 # no configuration
 ```
-### Metrics
+## Metrics
 Actual metrics depend on the InfiniBand devices, the plugin uses a simple
 mapping from counter -> counter value.
@@ -49,10 +49,8 @@ mapping from counter -> counter value.
 - unicast_xmit_packets (integer)
 - VL15_dropped (integer)
-### Example Output
-```
+## Example Output
+```shell
 infiniband,device=mlx5_0,port=1 VL15_dropped=0i,excessive_buffer_overrun_errors=0i,link_downed=0i,link_error_recovery=0i,local_link_integrity_errors=0i,multicast_rcv_packets=0i,multicast_xmit_packets=0i,port_rcv_constraint_errors=0i,port_rcv_data=237159415345822i,port_rcv_errors=0i,port_rcv_packets=801977655075i,port_rcv_remote_physical_errors=0i,port_rcv_switch_relay_errors=0i,port_xmit_constraint_errors=0i,port_xmit_data=238334949937759i,port_xmit_discards=0i,port_xmit_packets=803162651391i,port_xmit_wait=4294967295i,symbol_error=0i,unicast_rcv_packets=801977655075i,unicast_xmit_packets=803162651391i 1573125558000000000
 ```
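The counter -> counter value mapping described above amounts to reading one file per counter from a sysfs-style directory (on Linux, typically `/sys/class/infiniband/<device>/ports/<port>/counters`). A rough sketch of that approach, with the directory layout assumed rather than taken from the plugin's code:

```python
import os


def read_counters(counters_dir: str) -> dict:
    """Read every counter file in an InfiniBand-style counters directory.

    Each file name becomes the field name and its integer content the value,
    mirroring the simple counter -> counter value mapping described above.
    """
    counters = {}
    for name in os.listdir(counters_dir):
        path = os.path.join(counters_dir, name)
        with open(path) as f:
            counters[name] = int(f.read().strip())
    return counters
```

This is a sketch only; the real plugin also handles device/port discovery and error cases.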


@@ -7,7 +7,7 @@ for detailed information about `influxdb` metrics.
 This plugin can also gather metrics from endpoints that expose
 InfluxDB-formatted endpoints. See below for more information.
-### Configuration:
+## Configuration
 ```toml
 # Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints
@@ -37,7 +37,7 @@ InfluxDB-formatted endpoints. See below for more information.
 timeout = "5s"
 ```
-### Measurements & Fields
+## Measurements & Fields
 **Note:** The measurements and fields included in this plugin are dynamically built from the InfluxDB source, and may vary between versions:
@@ -258,9 +258,9 @@ InfluxDB-formatted endpoints. See below for more information.
 - **writePartial** _(Enterprise only)_: Total number of batches written to at least one node, but did not meet the requested consistency level.
 - **writeTimeout**: Total number of write requests that failed to complete within the default write timeout duration.
-### Example Output:
+## Example Output
-```
+```sh
 telegraf --config ~/ws/telegraf.conf --input-filter influxdb --test
 * Plugin: influxdb, Collection 1
 > influxdb_database,database=_internal,host=tyrion,url=http://localhost:8086/debug/vars numMeasurements=10,numSeries=29 1463590500247354636
@@ -292,7 +292,7 @@ telegraf --config ~/ws/telegraf.conf --input-filter influxdb --test
 > influxdb_shard,host=tyrion n_shards=4i 1463590500247354636
 ```
-### InfluxDB-formatted endpoints
+## InfluxDB-formatted endpoints
 The influxdb plugin can collect InfluxDB-formatted data from JSON endpoints,
 whether associated with an Influx database or not.


@@ -18,7 +18,7 @@ receive a 200 OK response with message body `{"results":[]}` but they are not
 relayed. The output configuration of the Telegraf instance which ultimately
 submits data to InfluxDB determines the destination database.
-### Configuration:
+## Configuration
 ```toml
 [[inputs.influxdb_listener]]
@@ -64,14 +64,15 @@ submits data to InfluxDB determines the destination database.
 # basic_password = "barfoo"
 ```
-### Metrics:
+## Metrics
 Metrics are created from InfluxDB Line Protocol in the request body.
-### Troubleshooting:
+## Troubleshooting
 **Example Query:**
-```
+```sh
 curl -i -XPOST 'http://localhost:8186/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
 ```
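The listener turns each line-protocol line in the request body into a metric (measurement, tags, fields, optional timestamp). A simplified parser sketch for the well-formed, unescaped case only; this is not Telegraf's actual parser:

```python
def parse_line(line: str):
    """Parse a simple InfluxDB line-protocol line.

    Handles the unescaped case only: measurement[,tag=v...] field=v[,...] [ts].
    All field values are treated as floats for simplicity.
    """
    head, fields_part, *rest = line.split(" ")
    measurement, *tag_pairs = head.split(",")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)
    fields = {}
    for pair in fields_part.split(","):
        key, value = pair.split("=", 1)
        fields[key] = float(value)
    timestamp = int(rest[0]) if rest else None
    return measurement, tags, fields, timestamp


# The example query above sends exactly one such line:
m, t, f, ts = parse_line(
    "cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000"
)
```

The real protocol also supports escaping, string/integer/boolean field types, and more, which this sketch deliberately omits.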


@@ -11,7 +11,7 @@ defer to the output plugins configuration.
 Telegraf minimum version: Telegraf 1.16.0
-### Configuration:
+## Configuration
 ```toml
 [[inputs.influxdb_v2_listener]]
@@ -42,14 +42,15 @@ Telegraf minimum version: Telegraf 1.16.0
 # token = "some-long-shared-secret-token"
 ```
-### Metrics:
+## Metrics
 Metrics are created from InfluxDB Line Protocol in the request body.
-### Troubleshooting:
+## Troubleshooting
 **Example Query:**
-```
+```sh
 curl -i -XPOST 'http://localhost:8186/api/v2/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
 ```


@@ -1,11 +1,13 @@
 # Intel PowerStat Input Plugin
 This input plugin monitors power statistics on Intel-based platforms and assumes presence of Linux based OS.
 Main use cases are power saving and workload migration. Telemetry frameworks allow users to monitor critical platform level metrics.
 Key source of platform telemetry is power domain that is beneficial for MANO/Monitoring&Analytics systems
 to take preventive/corrective actions based on platform busyness, CPU temperature, actual CPU utilization and power statistics.
-### Configuration:
+## Configuration
 ```toml
 # Intel PowerStat plugin enables monitoring of platform metrics (power, TDP) and per-CPU metrics like temperature, power and utilization.
 [[inputs.intel_powerstat]]
@@ -17,36 +19,47 @@ to take preventive/corrective actions based on platform busyness, CPU temperatur
 ## "cpu_frequency", "cpu_busy_frequency", "cpu_temperature", "cpu_c1_state_residency", "cpu_c6_state_residency", "cpu_busy_cycles"
 # cpu_metrics = []
 ```
-### Example: Configuration with no per-CPU telemetry
+## Example: Configuration with no per-CPU telemetry
 This configuration allows getting global metrics (processor package specific), no per-CPU metrics are collected:
 ```toml
 [[inputs.intel_powerstat]]
 cpu_metrics = []
 ```
-### Example: Configuration with no per-CPU telemetry - equivalent case
+## Example: Configuration with no per-CPU telemetry - equivalent case
 This configuration allows getting global metrics (processor package specific), no per-CPU metrics are collected:
 ```toml
 [[inputs.intel_powerstat]]
 ```
-### Example: Configuration for CPU Temperature and Frequency only
+## Example: Configuration for CPU Temperature and Frequency only
 This configuration allows getting global metrics plus subset of per-CPU metrics (CPU Temperature and Current Frequency):
 ```toml
 [[inputs.intel_powerstat]]
 cpu_metrics = ["cpu_frequency", "cpu_temperature"]
 ```
-### Example: Configuration with all available metrics
+## Example: Configuration with all available metrics
 This configuration allows getting global metrics and all per-CPU metrics:
 ```toml
 [[inputs.intel_powerstat]]
 cpu_metrics = ["cpu_frequency", "cpu_busy_frequency", "cpu_temperature", "cpu_c1_state_residency", "cpu_c6_state_residency", "cpu_busy_cycles"]
 ```
-### SW Dependencies:
+## SW Dependencies
 Plugin is based on Linux Kernel modules that expose specific metrics over `sysfs` or `devfs` interfaces.
 The following dependencies are expected by plugin:
 - _intel-rapl_ module which exposes Intel Runtime Power Limiting metrics over `sysfs` (`/sys/devices/virtual/powercap/intel-rapl`),
 - _msr_ kernel module that provides access to processor model specific registers over `devfs` (`/dev/cpu/cpu%d/msr`),
 - _cpufreq_ kernel module - which exposes per-CPU Frequency over `sysfs` (`/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq`).
@@ -55,7 +68,8 @@ Minimum kernel version required is 3.13 to satisfy all requirements.
 Please make sure that kernel modules are loaded and running. You might have to manually enable them by using `modprobe`.
 Exact commands to be executed are:
-```
+```sh
 sudo modprobe cpufreq-stats
 sudo modprobe msr
 sudo modprobe intel_rapl
@@ -63,6 +77,7 @@ sudo modprobe intel_rapl
 **Telegraf with Intel PowerStat plugin enabled may require root access to read model specific registers (MSRs)**
 to retrieve data for calculation of most critical per-CPU specific metrics:
+- `cpu_busy_frequency_mhz`
 - `cpu_temperature_celsius`
 - `cpu_c1_state_residency_percent`
@@ -71,13 +86,15 @@ to retrieve data for calculation of most critical per-CPU specific metrics:
 To expose other Intel PowerStat metrics root access may or may not be required (depending on OS type or configuration).
-### HW Dependencies:
+## HW Dependencies
 Specific metrics require certain processor features to be present, otherwise Intel PowerStat plugin won't be able to
 read them. When using Linux Kernel based OS, user can detect supported processor features reading `/proc/cpuinfo` file.
 Plugin assumes crucial properties are the same for all CPU cores in the system.
 The following processor properties are examined in more detail in this section:
 processor _cpu family_, _model_ and _flags_.
 The following processor properties are required by the plugin:
 - Processor _cpu family_ must be Intel (0x6) - since data used by the plugin assumes Intel specific
 model specific registers for all features
 - The following processor flags shall be present:
@@ -139,16 +156,19 @@ and _powerstat_core.cpu_c6_state_residency_ metrics:
 | 0x8C | Intel TigerLake-L |
 | 0x8D | Intel TigerLake |
-### Metrics
+## Metrics
 All metrics collected by Intel PowerStat plugin are collected in fixed intervals.
 Metrics that reports processor C-state residency or power are calculated over elapsed intervals.
 When starting to measure metrics, plugin skips first iteration of metrics if they are based on deltas with previous value.
 **The following measurements are supported by Intel PowerStat plugin:**
 - powerstat_core
 - The following Tags are returned by plugin with powerstat_core measurements:
+```text
 | Tag | Description |
 |-----|-------------|
 | `package_id` | ID of platform package/socket |
@@ -156,9 +176,11 @@ When starting to measure metrics, plugin skips first iteration of metrics if the
 | `cpu_id` | ID of logical processor core |
 Measurement powerstat_core metrics are collected per-CPU (cpu_id is the key)
 while core_id and package_id tags are additional topology information.
+```
 - Available metrics for powerstat_core measurement
+```text
 | Metric name (field) | Description | Units |
 |-----|-------------|-----|
 | `cpu_frequency_mhz` | Current operational frequency of CPU Core | MHz |
@@ -167,31 +189,33 @@ When starting to measure metrics, plugin skips first iteration of metrics if the
 | `cpu_c1_state_residency_percent` | Percentage of time that CPU Core spent in C1 Core residency state | % |
 | `cpu_c6_state_residency_percent` | Percentage of time that CPU Core spent in C6 Core residency state | % |
 | `cpu_busy_cycles_percent` | CPU Core Busy cycles as a ratio of Cycles spent in C0 state residency to all cycles executed by CPU Core | % |
+```
 - powerstat_package
 - The following Tags are returned by plugin with powerstat_package measurements:
+```text
 | Tag | Description |
 |-----|-------------|
 | `package_id` | ID of platform package/socket |
-Measurement powerstat_package metrics are collected per processor package - _package_id_ tag indicates which
+Measurement powerstat_package metrics are collected per processor package -_package_id_ tag indicates which
 package metric refers to.
+```
 - Available metrics for powerstat_package measurement
+```text
 | Metric name (field) | Description | Units |
 |-----|-------------|-----|
 | `thermal_design_power_watts` | Maximum Thermal Design Power (TDP) available for processor package | Watts |
 | `current_power_consumption_watts` | Current power consumption of processor package | Watts |
 | `current_dram_power_consumption_watts` | Current power consumption of processor package DRAM subsystem | Watts |
+```
-### Example Output:
-```
+## Example Output
+```shell
 powerstat_package,host=ubuntu,package_id=0 thermal_design_power_watts=160 1606494744000000000
 powerstat_package,host=ubuntu,package_id=0 current_power_consumption_watts=35 1606494744000000000
 powerstat_package,host=ubuntu,package_id=0 current_dram_power_consumption_watts=13.94 1606494744000000000
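The C-state residency metrics above are computed from deltas of free-running counters between two consecutive reads, which is why the plugin skips its first iteration for delta-based metrics. A hypothetical sketch of that calculation (names illustrative, not taken from the plugin's code):

```python
def residency_percent(prev_state: int, curr_state: int,
                      prev_tsc: int, curr_tsc: int) -> float:
    """Percentage of elapsed TSC cycles spent in a given C-state.

    Both inputs are free-running counters, so only the delta between two
    consecutive reads is meaningful - a single sample yields nothing, which
    is why the first iteration of delta-based metrics is skipped.
    """
    elapsed = curr_tsc - prev_tsc
    if elapsed <= 0:
        return 0.0
    return 100.0 * (curr_state - prev_state) / elapsed


# Two consecutive counter reads: 600 of 1000 elapsed cycles in the C-state.
pct = residency_percent(1_000, 1_600, 10_000, 11_000)
```

The same delta-over-interval pattern applies to power: energy-counter delta divided by elapsed time gives watts.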


@@ -1,10 +1,13 @@
 # Intel RDT Input Plugin
 The `intel_rdt` plugin collects information provided by monitoring features of
 the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the hardware framework to monitor
 and control the utilization of shared resources (ex: last level cache, memory bandwidth).
-### About Intel RDT
+## About Intel RDT
 Intel's Resource Director Technology (RDT) framework consists of:
 - Cache Monitoring Technology (CMT)
 - Memory Bandwidth Monitoring (MBM)
 - Cache Allocation Technology (CAT)
@@ -15,7 +18,8 @@ memory bandwidth are key resources to manage for running workloads in single-thr
 multithreaded, or complex virtual machine environments. Intel introduces CMT, MBM, CAT
 and CDP to manage these workloads across shared resources.
-### Prerequsities - PQoS Tool
+## Prerequisites - PQoS Tool
 To gather Intel RDT metrics, the `intel_rdt` plugin uses _pqos_ cli tool which is a
 part of [Intel(R) RDT Software Package](https://github.com/intel/intel-cmt-cat).
 Before using this plugin please be sure _pqos_ is properly installed and configured regarding that the plugin
@@ -24,7 +28,7 @@ Note: pqos tool needs root privileges to work properly.
 Metrics will be constantly reported from the following `pqos` commands within the given interval:
-#### If telegraf does not run as the root user
+### If telegraf does not run as the root user
 The `pqos` binary needs to run as root. If telegraf is running as a non-root user, you may enable sudo
 to allow `pqos` to run correctly.
@@ -40,23 +44,27 @@ Alternately, you may enable sudo to allow `pqos` to run correctly, as follows:
 Add the following to your sudoers file (assumes telegraf runs as a user named `telegraf`):
-```
+```sh
 telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
 ```
 If you wish to use sudo, you must also add `use_sudo = true` to the Telegraf
 configuration (see below).
-#### In case of cores monitoring:
+### In case of cores monitoring
-```
+```sh
 pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
 ```
 where `CORES` is equal to group of cores provided in config. User can provide many groups.
-#### In case of process monitoring:
+### In case of process monitoring
-```
+```sh
 pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
 ```
 where `PIDS` is group of processes IDs which name are equal to provided process name in a config.
 User can provide many process names which lead to create many processes groups.
@@ -68,12 +76,14 @@ If some change is reported, plugin will restart _pqos_ tool with new arguments.
 process name is not equal to any of available processes, will be omitted and plugin will constantly
 check for process availability.
-### Useful links
+## Useful links
-Pqos installation process: https://github.com/intel/intel-cmt-cat/blob/master/INSTALL
-Enabling OS interface: https://github.com/intel/intel-cmt-cat/wiki, https://github.com/intel/intel-cmt-cat/wiki/resctrl
-More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html
+Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
+Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
+More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>
-### Configuration
+## Configuration
 ```toml
 # Read Intel RDT metrics
 [[inputs.intel_rdt]]
@@ -105,7 +115,8 @@ More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-t
 # use_sudo = false
 ```
-### Exposed metrics
+## Exposed metrics
 | Name | Full name | Description |
 |---------------|-----------------------------------------------|-------------|
 | MBL | Memory Bandwidth on Local NUMA Node | Memory bandwidth utilization by the relevant CPU core/process on the local NUMA memory channel |
@@ -117,7 +128,8 @@ More about Intel RDT: https://www.intel.com/content/www/us/en/architecture-and-t
 *optional
-### Troubleshooting
+## Troubleshooting
 Pointing to non-existing cores will lead to throwing an error by _pqos_ and the plugin will not work properly.
 Be sure to check provided core number exists within desired system.
@@ -126,13 +138,16 @@ Do not use any other _pqos_ instance that is monitoring the same cores or PIDs w
 It is not possible to monitor same cores or PIDs on different groups.
 PIDs associated for the given process could be manually checked by `pidof` command. E.g:
-```
+```sh
 pidof PROCESS
 ```
 where `PROCESS` is process name.
-### Example Output
-```
+## Example Output
+```shell
 > rdt_metric,cores=12\,19,host=r2-compute-20,name=IPC,process=top value=0 1598962030000000000
 > rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC_Misses,process=top value=0 1598962030000000000
 > rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC,process=top value=0 1598962030000000000
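The plugin consumes `pqos` CSV output and emits one `rdt_metric` line per metric name, as in the example output above. A rough sketch of that row-to-metrics split, using a hypothetical header layout (the real column set depends on the pqos version and options):

```python
import csv
import io


def rows_to_metrics(csv_text: str) -> list:
    """Split each CSV row of a pqos-style report into one record per metric.

    The "Time" column is dropped and "Core" becomes a tag, so every remaining
    column yields one record - similar to one rdt_metric line per name above.
    The header names here are illustrative, not the exact pqos columns.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    metrics = []
    for row in reader:
        cores = row.pop("Core")
        row.pop("Time", None)
        for name, value in row.items():
            metrics.append({"cores": cores, "name": name, "value": float(value)})
    return metrics


sample = 'Time,Core,IPC,LLC_Misses\n2020-08-12 13:34:36,"12,19",0.5,1000\n'
metrics = rows_to_metrics(sample)
```

Note how the quoted `"12,19"` core group survives as a single tag value, matching the escaped `cores=12\,19` tag in the output above.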


@@ -5,7 +5,7 @@ The `internal` plugin collects metrics about the telegraf agent itself.
 Note that some metrics are aggregates across all instances of one type of
 plugin.
-### Configuration:
+## Configuration
 ```toml
 # Collect statistics about itself
@@ -14,9 +14,9 @@ plugin.
 # collect_memstats = true
 ```
-### Measurements & Fields:
+## Measurements & Fields
-memstats are taken from the Go runtime: https://golang.org/pkg/runtime/#MemStats
+memstats are taken from the Go runtime: <https://golang.org/pkg/runtime/#MemStats>
 - internal_memstats
 - alloc_bytes
@@ -53,7 +53,6 @@ internal_write stats collect aggregate stats on all output plugins
 that are of the same input type. They are tagged with `output=<plugin_name>`
 and `version=<telegraf_version>`.
 - internal_write
 - buffer_limit
 - buffer_size
@@ -70,15 +69,14 @@ plugin and `version=<telegraf_version>`.
 - internal_<plugin_name>
 - individual plugin-specific fields, such as requests counts.
-### Tags:
+## Tags
 All measurements for specific plugins are tagged with information relevant
 to each particular plugin and with `version=<telegraf_version>`.
-### Example Output:
-```
+## Example Output
+```shell
 internal_memstats,host=tyrion alloc_bytes=4457408i,sys_bytes=10590456i,pointer_lookups=7i,mallocs=17642i,frees=7473i,heap_sys_bytes=6848512i,heap_idle_bytes=1368064i,heap_in_use_bytes=5480448i,heap_released_bytes=0i,total_alloc_bytes=6875560i,heap_alloc_bytes=4457408i,heap_objects_bytes=10169i,num_gc=2i 1480682800000000000
 internal_agent,host=tyrion,go_version=1.12.7,version=1.99.0 metrics_written=18i,metrics_dropped=0i,metrics_gathered=19i,gather_errors=0i 1480682800000000000
 internal_write,output=file,host=tyrion,version=1.99.0 buffer_limit=10000i,write_time_ns=636609i,metrics_added=18i,metrics_written=18i,buffer_size=0i 1480682800000000000


@ -16,7 +16,6 @@ The `Internet Speed Monitor` collects data about the internet speed on the syste
It collects latency, download speed, and upload speed. It collects latency, download speed, and upload speed.
| Name | field name | type | Unit | | Name | field name | type | Unit |
| -------------- | ---------- | ------- | ---- | | -------------- | ---------- | ------- | ---- |
| Download Speed | download | float64 | Mbps | | Download Speed | download | float64 | Mbps |


@ -2,7 +2,8 @@
The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/proc/softirqs`. The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/proc/softirqs`.
### Configuration ## Configuration
```toml ```toml
[[inputs.interrupts]] [[inputs.interrupts]]
## When set to true, cpu metrics are tagged with the cpu. Otherwise cpu is ## When set to true, cpu metrics are tagged with the cpu. Otherwise cpu is
@ -18,7 +19,7 @@ The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/p
# irq = [ "NET_RX", "TASKLET" ] # irq = [ "NET_RX", "TASKLET" ]
``` ```
### Metrics ## Metrics
There are two styles depending on the value of `cpu_as_tag`. There are two styles depending on the value of `cpu_as_tag`.
@ -64,10 +65,11 @@ With `cpu_as_tag = true`:
- fields: - fields:
- count (int, number of interrupts) - count (int, number of interrupts)
### Example Output ## Example Output
With `cpu_as_tag = false`: With `cpu_as_tag = false`:
```
```shell
interrupts,irq=0,type=IO-APIC,device=2-edge\ timer,cpu=cpu0 count=23i 1489346531000000000 interrupts,irq=0,type=IO-APIC,device=2-edge\ timer,cpu=cpu0 count=23i 1489346531000000000
interrupts,irq=1,type=IO-APIC,device=1-edge\ i8042,cpu=cpu0 count=9i 1489346531000000000 interrupts,irq=1,type=IO-APIC,device=1-edge\ i8042,cpu=cpu0 count=9i 1489346531000000000
interrupts,irq=30,type=PCI-MSI,device=65537-edge\ virtio1-input.0,cpu=cpu1 count=1i 1489346531000000000 interrupts,irq=30,type=PCI-MSI,device=65537-edge\ virtio1-input.0,cpu=cpu1 count=1i 1489346531000000000
@ -75,7 +77,8 @@ soft_interrupts,irq=NET_RX,cpu=cpu0 count=280879i 1489346531000000000
``` ```
With `cpu_as_tag = true`: With `cpu_as_tag = true`:
```
```shell
interrupts,cpu=cpu6,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000 interrupts,cpu=cpu6,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
interrupts,cpu=cpu7,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000 interrupts,cpu=cpu7,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
soft_interrupts,cpu=cpu0,irq=HI count=246441i 1543539773000000000 soft_interrupts,cpu=cpu0,irq=HI count=246441i 1543539773000000000


@ -5,26 +5,29 @@ Get bare metal metrics using the command line utility
If no servers are specified, the plugin will query the local machine sensor stats via the following command: If no servers are specified, the plugin will query the local machine sensor stats via the following command:
``` ```sh
ipmitool sdr ipmitool sdr
``` ```
or with the version 2 schema: or with the version 2 schema:
```
```sh
ipmitool sdr elist ipmitool sdr elist
``` ```
When one or more servers are specified, the plugin will use the following command to collect remote host sensor stats: When one or more servers are specified, the plugin will use the following command to collect remote host sensor stats:
``` ```sh
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
``` ```
Any of the following parameters will be added to the aforementioned query if they're configured: Any of the following parameters will be added to the aforementioned query if they're configured:
```
```sh
-y hex_key -L privilege -y hex_key -L privilege
``` ```
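Putting the pieces together, the remote query takes the following shape when both optional parameters are configured (all connection values below are hypothetical placeholders, not defaults):

```shell
# Hypothetical placeholder values; substitute your BMC's details.
SERVER=192.0.2.10 USERID=admin PASSW0RD=secret HEX_KEY=0x20 PRIVILEGE=USER
cmd="ipmitool -I lan -H $SERVER -U $USERID -P $PASSW0RD -y $HEX_KEY -L $PRIVILEGE sdr"
echo "$cmd"
```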
### Configuration ## Configuration
```toml ```toml
# Read metrics from the bare metal servers via IPMI # Read metrics from the bare metal servers via IPMI
@ -72,9 +75,10 @@ Any of the following parameters will be added to the aformentioned query if they
# cache_path = "" # cache_path = ""
``` ```
### Measurements ## Measurements
Version 1 schema: Version 1 schema:
- ipmi_sensor: - ipmi_sensor:
- tags: - tags:
- name - name
@ -86,6 +90,7 @@ Version 1 schema:
- value (float) - value (float)
Version 2 schema: Version 2 schema:
- ipmi_sensor: - ipmi_sensor:
- tags: - tags:
- name - name
@ -98,17 +103,19 @@ Version 2 schema:
- fields: - fields:
- value (float) - value (float)
#### Permissions ### Permissions
When gathering from the local system, Telegraf will need permission to access the When gathering from the local system, Telegraf will need permission to access the
ipmi device node. When using udev you can create the device node giving ipmi device node. When using udev you can create the device node giving
`rw` permissions to the `telegraf` user by adding the following rule to `rw` permissions to the `telegraf` user by adding the following rule to
`/etc/udev/rules.d/52-telegraf-ipmi.rules`: `/etc/udev/rules.d/52-telegraf-ipmi.rules`:
``` ```sh
KERNEL=="ipmi*", MODE="660", GROUP="telegraf" KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
``` ```
Alternatively, it is possible to use sudo. You will need the following in your telegraf config: Alternatively, it is possible to use sudo. You will need the following in your telegraf config:
```toml ```toml
[[inputs.ipmi_sensor]] [[inputs.ipmi_sensor]]
use_sudo = true use_sudo = true
@ -124,11 +131,13 @@ telegraf ALL=(root) NOPASSWD: IPMITOOL
Defaults!IPMITOOL !logfile, !syslog, !pam_session Defaults!IPMITOOL !logfile, !syslog, !pam_session
``` ```
### Example Output ## Example Output
### Version 1 Schema
#### Version 1 Schema
When retrieving stats from a remote server: When retrieving stats from a remote server:
```
```shell
ipmi_sensor,server=10.20.2.203,name=uid_light value=0,status=1i 1517125513000000000 ipmi_sensor,server=10.20.2.203,name=uid_light value=0,status=1i 1517125513000000000
ipmi_sensor,server=10.20.2.203,name=sys._health_led status=1i,value=0 1517125513000000000 ipmi_sensor,server=10.20.2.203,name=sys._health_led status=1i,value=0 1517125513000000000
ipmi_sensor,server=10.20.2.203,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000 ipmi_sensor,server=10.20.2.203,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
@ -137,9 +146,9 @@ ipmi_sensor,server=10.20.2.203,name=power_supplies value=0,status=1i 15171255130
ipmi_sensor,server=10.20.2.203,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000 ipmi_sensor,server=10.20.2.203,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
``` ```
When retrieving stats from the local machine (no server specified): When retrieving stats from the local machine (no server specified):
```
```shell
ipmi_sensor,name=uid_light value=0,status=1i 1517125513000000000 ipmi_sensor,name=uid_light value=0,status=1i 1517125513000000000
ipmi_sensor,name=sys._health_led status=1i,value=0 1517125513000000000 ipmi_sensor,name=sys._health_led status=1i,value=0 1517125513000000000
ipmi_sensor,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000 ipmi_sensor,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
@ -151,7 +160,8 @@ ipmi_sensor,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
#### Version 2 Schema #### Version 2 Schema
When retrieving stats from the local machine (no server specified): When retrieving stats from the local machine (no server specified):
```
```shell
ipmi_sensor,name=uid_light,entity_id=23.1,status_code=ok,status_desc=ok value=0 1517125474000000000 ipmi_sensor,name=uid_light,entity_id=23.1,status_code=ok,status_desc=ok value=0 1517125474000000000
ipmi_sensor,name=sys._health_led,entity_id=23.2,status_code=ok,status_desc=ok value=0 1517125474000000000 ipmi_sensor,name=sys._health_led,entity_id=23.2,status_code=ok,status_desc=ok value=0 1517125474000000000
ipmi_sensor,entity_id=10.1,name=power_supply_1,status_code=ok,status_desc=presence_detected,unit=watts value=110 1517125474000000000 ipmi_sensor,entity_id=10.1,name=power_supply_1,status_code=ok,status_desc=presence_detected,unit=watts value=110 1517125474000000000


@ -5,33 +5,37 @@ It uses the output of the command "ipset save".
Ipsets created without the "counters" option are ignored. Ipsets created without the "counters" option are ignored.
Results are tagged with: Results are tagged with:
- ipset name - ipset name
- ipset entry - ipset entry
There are 3 ways to grant telegraf the right to run ipset: There are 3 ways to grant telegraf the right to run ipset:
* Run as root (strongly discouraged)
* Use sudo
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW capabilities.
### Using systemd capabilities - Run as root (strongly discouraged)
- Use sudo
- Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW capabilities.
## Using systemd capabilities
You may run `systemctl edit telegraf.service` and add the following: You may run `systemctl edit telegraf.service` and add the following:
``` ```text
[Service] [Service]
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
``` ```
### Using sudo ## Using sudo
You will need the following in your telegraf config: You will need the following in your telegraf config:
```toml ```toml
[[inputs.ipset]] [[inputs.ipset]]
use_sudo = true use_sudo = true
``` ```
You will also need to update your sudoers file: You will also need to update your sudoers file:
```bash ```bash
$ visudo $ visudo
# Add the following line: # Add the following line:
@ -40,7 +44,7 @@ telegraf ALL=(root) NOPASSWD: IPSETSAVE
Defaults!IPSETSAVE !logfile, !syslog, !pam_session Defaults!IPSETSAVE !logfile, !syslog, !pam_session
``` ```
### Configuration ## Configuration
```toml ```toml
[[inputs.ipset]] [[inputs.ipset]]
@ -56,15 +60,15 @@ Defaults!IPSETSAVE !logfile, !syslog, !pam_session
``` ```
### Example Output ## Example Output
``` ```sh
$ sudo ipset save $ sudo ipset save
create myset hash:net family inet hashsize 1024 maxelem 65536 counters comment create myset hash:net family inet hashsize 1024 maxelem 65536 counters comment
add myset 10.69.152.1 packets 8 bytes 672 comment "machine A" add myset 10.69.152.1 packets 8 bytes 672 comment "machine A"
``` ```
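To illustrate how the counters on each entry become fields, here is a minimal shell sketch (not the plugin's actual Go code) that pulls the `packets` and `bytes` counters out of the sample line above:

```shell
# Extract the packets/bytes counters from one `ipset save` entry.
line='add myset 10.69.152.1 packets 8 bytes 672 comment "machine A"'
counters=$(echo "$line" | awk '{for(i=1;i<NF;i++){if($i=="packets")p=$(i+1);if($i=="bytes")b=$(i+1)}}END{print "packets="p" bytes="b}')
echo "$counters"   # → packets=8 bytes=672
```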
``` ```sh
$ telegraf --config telegraf.conf --input-filter ipset --test --debug $ telegraf --config telegraf.conf --input-filter ipset --test --debug
* Plugin: inputs.ipset, Collection 1 * Plugin: inputs.ipset, Collection 1
> ipset,rule=10.69.152.1,host=trashme,set=myset bytes_total=8i,packets_total=672i 1507615028000000000 > ipset,rule=10.69.152.1,host=trashme,set=myset bytes_total=8i,packets_total=672i 1507615028000000000


@ -14,11 +14,11 @@ The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You ha
* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is the simplest and recommended option. * Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is the simplest and recommended option.
* Configure sudo to grant telegraf the right to run iptables. This is the most restrictive option, but requires sudo setup. * Configure sudo to grant telegraf the right to run iptables. This is the most restrictive option, but requires sudo setup.
### Using systemd capabilities ## Using systemd capabilities
You may run `systemctl edit telegraf.service` and add the following: You may run `systemctl edit telegraf.service` and add the following:
``` ```shell
[Service] [Service]
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
@ -26,9 +26,10 @@ AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
Since telegraf will fork a process to run iptables, `AmbientCapabilities` is required to transmit the capabilities bounding set to the forked process. Since telegraf will fork a process to run iptables, `AmbientCapabilities` is required to transmit the capabilities bounding set to the forked process.
### Using sudo ## Using sudo
You will need the following in your telegraf config: You will need the following in your telegraf config:
```toml ```toml
[[inputs.iptables]] [[inputs.iptables]]
use_sudo = true use_sudo = true
@ -44,11 +45,11 @@ telegraf ALL=(root) NOPASSWD: IPTABLESSHOW
Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session
``` ```
### Using IPtables lock feature ## Using IPtables lock feature
Defining multiple instances of this plugin in telegraf.conf can lead to concurrent IPtables access, resulting in "ERROR in input [inputs.iptables]: exit status 4" messages in telegraf.log and missing metrics. Setting 'use_lock = true' in the plugin configuration will run IPtables with the '-w' switch, using a lock to prevent this error. Defining multiple instances of this plugin in telegraf.conf can lead to concurrent IPtables access, resulting in "ERROR in input [inputs.iptables]: exit status 4" messages in telegraf.log and missing metrics. Setting 'use_lock = true' in the plugin configuration will run IPtables with the '-w' switch, using a lock to prevent this error.
### Configuration: ## Configuration
```toml ```toml
# use sudo to run iptables # use sudo to run iptables
@ -63,25 +64,24 @@ Defining multiple instances of this plugin in telegraf.conf can lead to concurre
chains = [ "INPUT" ] chains = [ "INPUT" ]
``` ```
### Measurements & Fields: ## Measurements & Fields
* iptables
* pkts (integer, count)
* bytes (integer, bytes)
- iptables ## Tags
- pkts (integer, count)
- bytes (integer, bytes)
### Tags: * All measurements have the following tags:
* table
- All measurements have the following tags: * chain
- table * ruleid
- chain
- ruleid
The `ruleid` is the comment associated with the rule. The `ruleid` is the comment associated with the rule.
### Example Output: ## Example Output
``` ```text
$ iptables -nvL INPUT $ iptables -nvL INPUT
Chain INPUT (policy DROP 0 packets, 0 bytes) Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination pkts bytes target prot opt in out source destination
@ -89,7 +89,7 @@ pkts bytes target prot opt in out source destination
42 2048 ACCEPT tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp dpt:80 /* httpd */ 42 2048 ACCEPT tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp dpt:80 /* httpd */
``` ```
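As a rough illustration (not the plugin's actual parsing code), the counters and comment of the `httpd` rule above can be pulled apart like this:

```shell
# Extract pkts, bytes, and the rule comment (ruleid) from one counter line.
line='   42  2048 ACCEPT     tcp  --  *      *       192.168.0.0/24       0.0.0.0/0            tcp dpt:80 /* httpd */'
pkts=$(echo "$line" | awk '{print $1}')
bytes=$(echo "$line" | awk '{print $2}')
ruleid=$(echo "$line" | sed -n 's|.*/\* \(.*\) \*/.*|\1|p')
echo "iptables,chain=INPUT,ruleid=$ruleid pkts=${pkts}i,bytes=${bytes}i"
```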
``` ```shell
$ ./telegraf --config telegraf.conf --input-filter iptables --test $ ./telegraf --config telegraf.conf --input-filter iptables --test
iptables,table=filter,chain=INPUT,ruleid=ssh pkts=100i,bytes=1024i 1453831884664956455 iptables,table=filter,chain=INPUT,ruleid=ssh pkts=100i,bytes=1024i 1453831884664956455
iptables,table=filter,chain=INPUT,ruleid=httpd pkts=42i,bytes=2048i 1453831884664956455 iptables,table=filter,chain=INPUT,ruleid=httpd pkts=42i,bytes=2048i 1453831884664956455


@ -5,14 +5,14 @@ metrics about ipvs virtual and real servers.
**Supported Platforms:** Linux **Supported Platforms:** Linux
### Configuration ## Configuration
```toml ```toml
[[inputs.ipvs]] [[inputs.ipvs]]
# no configuration # no configuration
``` ```
#### Permissions ### Permissions
Assuming you installed the telegraf package via one of the published packages, Assuming you installed the telegraf package via one of the published packages,
the process will be running as the `telegraf` user. However, in order for this the process will be running as the `telegraf` user. However, in order for this
@ -20,7 +20,7 @@ plugin to communicate over netlink sockets it needs the telegraf process to be
running as `root` (or some user with `CAP_NET_ADMIN` and `CAP_NET_RAW`). Make sure running as `root` (or some user with `CAP_NET_ADMIN` and `CAP_NET_RAW`). Make sure
these permissions are in place before running telegraf with this plugin included. these permissions are in place before running telegraf with this plugin included.
### Metrics ## Metrics
Server will contain tags identifying how it was configured, using one of Server will contain tags identifying how it was configured, using one of
`address` + `port` + `protocol` *OR* `fwmark`. This is how one would normally `address` + `port` + `protocol` *OR* `fwmark`. This is how one would normally
@ -66,17 +66,19 @@ configure a virtual server using `ipvsadm`.
- pps_out - pps_out
- cps - cps
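For reference, hypothetical `ipvsadm` invocations that would produce the two tagging schemes (addresses and fwmark values below are placeholders):

```shell
# address + port + protocol scheme (tags: address, port, protocol)
cmd_tcp="ipvsadm -A -t 172.18.64.234:9000 -s rr"
# fwmark scheme (tag: fwmark)
cmd_fwm="ipvsadm -A -f 47 -s rr"
printf '%s\n%s\n' "$cmd_tcp" "$cmd_fwm"
```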
### Example Output ## Example Output
Virtual server is configured using `proto+addr+port` and backed by 2 real servers: Virtual server is configured using `proto+addr+port` and backed by 2 real servers:
```
```shell
ipvs_virtual_server,address=172.18.64.234,address_family=inet,netmask=32,port=9000,protocol=tcp,sched=rr bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i,cps=0i,connections=0i,pkts_in=0i,pkts_out=0i 1541019340000000000 ipvs_virtual_server,address=172.18.64.234,address_family=inet,netmask=32,port=9000,protocol=tcp,sched=rr bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i,cps=0i,connections=0i,pkts_in=0i,pkts_out=0i 1541019340000000000
ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pkts_in=0i,bytes_out=0i,pps_out=0i,connections=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,cps=0i 1541019340000000000 ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pkts_in=0i,bytes_out=0i,pps_out=0i,connections=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,cps=0i 1541019340000000000
ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000 ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000
``` ```
Virtual server is configured using `fwmark` and backed by 2 real servers: Virtual server is configured using `fwmark` and backed by 2 real servers:
```
```shell
ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000 ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000
ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_fwmark=47 inactive_connections=0i,pkts_out=0i,bytes_out=0i,pps_in=0i,cps=0i,active_connections=0i,pkts_in=0i,bytes_in=0i,pps_out=0i,connections=0i 1541019340000000000 ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_fwmark=47 inactive_connections=0i,pkts_out=0i,bytes_out=0i,pps_in=0i,cps=0i,active_connections=0i,pkts_in=0i,bytes_in=0i,pps_out=0i,connections=0i 1541019340000000000
ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_fwmark=47 cps=0i,active_connections=0i,inactive_connections=0i,connections=0i,pkts_in=0i,bytes_out=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,pps_out=0i 1541019340000000000 ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_fwmark=47 cps=0i,active_connections=0i,inactive_connections=0i,connections=0i,pkts_in=0i,bytes_out=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,pps_out=0i 1541019340000000000


@ -4,7 +4,7 @@ The jenkins plugin gathers information about the nodes and jobs running in a jen
This plugin does not require any plugin on the Jenkins server; it makes use of the Jenkins API to retrieve all the information needed. This plugin does not require any plugin on the Jenkins server; it makes use of the Jenkins API to retrieve all the information needed.
### Configuration: ## Configuration
```toml ```toml
[[inputs.jenkins]] [[inputs.jenkins]]
@ -55,7 +55,7 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
# max_connections = 5 # max_connections = 5
``` ```
### Metrics: ## Metrics
- jenkins - jenkins
- tags: - tags:
@ -65,7 +65,7 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
- busy_executors - busy_executors
- total_executors - total_executors
+ jenkins_node - jenkins_node
- tags: - tags:
- arch - arch
- disk_path - disk_path
@ -96,23 +96,22 @@ This plugin does not require a plugin on jenkins and it makes use of Jenkins API
- number - number
- result_code (0 = SUCCESS, 1 = FAILURE, 2 = NOT_BUILD, 3 = UNSTABLE, 4 = ABORTED) - result_code (0 = SUCCESS, 1 = FAILURE, 2 = NOT_BUILD, 3 = UNSTABLE, 4 = ABORTED)
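The `result` string → `result_code` mapping above can be sketched as follows (a plain-shell illustration, not the plugin's Go code):

```shell
# Map a Jenkins build result string to the plugin's result_code value.
result=SUCCESS   # hypothetical build result
case "$result" in
  SUCCESS)   code=0 ;;
  FAILURE)   code=1 ;;
  NOT_BUILD) code=2 ;;
  UNSTABLE)  code=3 ;;
  ABORTED)   code=4 ;;
esac
echo "result=$result result_code=${code}i"
```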
### Sample Queries: ## Sample Queries
``` ```sql
SELECT mean("memory_available") AS "mean_memory_available", mean("memory_total") AS "mean_memory_total", mean("temp_available") AS "mean_temp_available" FROM "jenkins_node" WHERE time > now() - 15m GROUP BY time(:interval:) FILL(null) SELECT mean("memory_available") AS "mean_memory_available", mean("memory_total") AS "mean_memory_total", mean("temp_available") AS "mean_temp_available" FROM "jenkins_node" WHERE time > now() - 15m GROUP BY time(:interval:) FILL(null)
``` ```
``` ```sql
SELECT mean("duration") AS "mean_duration" FROM "jenkins_job" WHERE time > now() - 24h GROUP BY time(:interval:) FILL(null) SELECT mean("duration") AS "mean_duration" FROM "jenkins_job" WHERE time > now() - 24h GROUP BY time(:interval:) FILL(null)
``` ```
### Example Output: ## Example Output
``` ```shell
$ ./telegraf --config telegraf.conf --input-filter jenkins --test $ ./telegraf --config telegraf.conf --input-filter jenkins --test
jenkins,host=myhost,port=80,source=my-jenkins-instance busy_executors=4i,total_executors=8i 1580418261000000000 jenkins,host=myhost,port=80,source=my-jenkins-instance busy_executors=4i,total_executors=8i 1580418261000000000
jenkins_node,arch=Linux\ (amd64),disk_path=/var/jenkins_home,temp_path=/tmp,host=myhost,node_name=master,source=my-jenkins-instance,port=8080 swap_total=4294963200,memory_available=586711040,memory_total=6089498624,status=online,response_time=1000i,disk_available=152392036352,temp_available=152392036352,swap_available=3503263744,num_executors=2i 1516031535000000000 jenkins_node,arch=Linux\ (amd64),disk_path=/var/jenkins_home,temp_path=/tmp,host=myhost,node_name=master,source=my-jenkins-instance,port=8080 swap_total=4294963200,memory_available=586711040,memory_total=6089498624,status=online,response_time=1000i,disk_available=152392036352,temp_available=152392036352,swap_available=3503263744,num_executors=2i 1516031535000000000
jenkins_job,host=myhost,name=JOB1,parents=apps/br1,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2831i,result_code=0i 1516026630000000000 jenkins_job,host=myhost,name=JOB1,parents=apps/br1,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2831i,result_code=0i 1516026630000000000
jenkins_job,host=myhost,name=JOB2,parents=apps/br2,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2285i,result_code=0i 1516027230000000000 jenkins_job,host=myhost,name=JOB2,parents=apps/br2,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2285i,result_code=0i 1516027230000000000
``` ```


@ -1,8 +1,8 @@
# Jolokia Input Plugin # Jolokia Input Plugin
### Deprecated in version 1.5: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin. ## Deprecated in version 1.5: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin
#### Configuration ### Configuration
```toml ```toml
# Read JMX metrics through Jolokia # Read JMX metrics through Jolokia
@ -66,8 +66,9 @@
The Jolokia plugin collects JVM metrics exposed as MBean attributes through the The Jolokia plugin collects JVM metrics exposed as MBean attributes through the
Jolokia REST endpoint. All metrics are collected for each configured server. Jolokia REST endpoint. All metrics are collected for each configured server.
See: https://jolokia.org/ See: <https://jolokia.org/>
## Measurements
# Measurements:
The Jolokia plugin produces one measurement for each configured metric, The Jolokia plugin produces one measurement for each configured metric,
adding the server's `jolokia_name`, `jolokia_host`, and `jolokia_port` as tags. adding the server's `jolokia_name`, `jolokia_host`, and `jolokia_port` as tags.


@ -2,9 +2,9 @@
The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP protocol](https://jolokia.org/reference/html/protocol.html). The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP protocol](https://jolokia.org/reference/html/protocol.html).
### Configuration: ## Configuration
#### Jolokia Agent Configuration ### Jolokia Agent Configuration
The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints. The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints.
@ -34,7 +34,7 @@ Optionally, specify TLS options for communicating with agents:
paths = ["Uptime"] paths = ["Uptime"]
``` ```
#### Jolokia Proxy Configuration ### Jolokia Proxy Configuration
The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_ by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html) REST endpoint. The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_ by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html) REST endpoint.
@ -79,7 +79,7 @@ Optionally, specify TLS options for communicating with proxies:
paths = ["Uptime"] paths = ["Uptime"]
``` ```
#### Jolokia Metric Configuration ### Jolokia Metric Configuration
Each `metric` declaration generates a Jolokia request to fetch telemetry from a JMX MBean. Each `metric` declaration generates a Jolokia request to fetch telemetry from a JMX MBean.
@ -103,7 +103,7 @@ Use `paths` to refine which fields to collect.
The preceding `jvm_memory` `metric` declaration produces the following output: The preceding `jvm_memory` `metric` declaration produces the following output:
``` ```text
jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000 jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
``` ```
@ -119,7 +119,7 @@ Use `*` wildcards against `mbean` property-key values to create distinct series
Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and `name` is used as a tag, the preceding `jvm_garbage_collector` `metric` declaration produces two metrics. Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and `name` is used as a tag, the preceding `jvm_garbage_collector` `metric` declaration produces two metrics.
``` ```shell
jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000 jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000
jvm_garbage_collector,name=G1\ Young\ Generation CollectionTime=32,CollectionCount=2 1503762520000000000 jvm_garbage_collector,name=G1\ Young\ Generation CollectionTime=32,CollectionCount=2 1503762520000000000
``` ```
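The expansion can be sketched roughly as follows (a hypothetical illustration of the shape only): each matched `name` value becomes its own series, with spaces escaped in the tag value:

```shell
# Each value matched by name=* becomes a distinct series, tagged with name.
for name in "G1 Old Generation" "G1 Young Generation"; do
  escaped=$(echo "$name" | sed 's/ /\\ /g')
  echo "jvm_garbage_collector,name=$escaped"
done
```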
@ -137,7 +137,7 @@ Use `tag_prefix` along with `tag_keys` to add detail to tag names.
The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each with a distinct `pool_name` tag. The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each with a distinct `pool_name` tag.
```text
jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
jvm_memory_pool,pool_name=Code\ Cache PeakUsage.init=2555904,PeakUsage.committed=6291456,Usage.committed=6291456,PeakUsage.used=6202752,PeakUsage.max=251658240,Usage.used=6210368,Usage.max=251658240,Usage.init=2555904 1503764025000000000
jvm_memory_pool,pool_name=G1\ Eden\ Space CollectionUsage.max=-1,PeakUsage.committed=56623104,PeakUsage.init=56623104,PeakUsage.used=53477376,Usage.max=-1,Usage.committed=49283072,Usage.used=19922944,CollectionUsage.committed=49283072,CollectionUsage.init=56623104,CollectionUsage.used=0,PeakUsage.max=-1,Usage.init=56623104 1503764025000000000
@@ -158,7 +158,7 @@ Use substitutions to create fields and field prefixes with MBean property-keys c
The preceding `kafka_topic` `metric` declaration produces a metric per Kafka topic. The `name` MBean property-key is used as a field prefix to aid in gathering fields together into the single metric.
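A declaration of that shape might look like the following (a sketch; the `mbean` pattern is an assumption based on standard Kafka broker metrics):

```toml
[[inputs.jolokia2_agent.metric]]
  name         = "kafka_topic"
  mbean        = "kafka.server:name=*,topic=*,type=BrokerTopicMetrics"
  field_prefix = "$1."
  tag_keys     = ["topic"]
```

The `$1` substitution refers to the first wildcard (`name`), so each field is prefixed with the MBean's `name` property-key, as described above.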
```text
kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="messages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
```

@@ -170,7 +170,7 @@ Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configuration
| `default_field_prefix` | _None_ | A string to prepend to the field names produced by all `metric` declarations. |
| `default_tag_prefix` | _None_ | A string to prepend to the tag names produced by all `metric` declarations. |

## Example Configurations

- [ActiveMQ](/plugins/inputs/jolokia2/examples/activemq.conf)
- [BitBucket](/plugins/inputs/jolokia2/examples/bitbucket.conf)
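Each example file is a ready-made set of `metric` declarations; wiring one up only requires pointing the plugin at a Jolokia endpoint, roughly like this (a sketch with a placeholder URL and metric):

```toml
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:8080/jolokia"]

  [[inputs.jolokia2_agent.metric]]
    name  = "java_runtime"
    mbean = "java.lang:type=Runtime"
    paths = ["Uptime"]
```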
@@ -3,7 +3,7 @@
This plugin reads Juniper Networks implementation of OpenConfig telemetry data from listed sensors using Junos Telemetry Interface. Refer to
[openconfig.net](http://openconfig.net/) for more details about OpenConfig and [Junos Telemetry Interface (JTI)](https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html).

## Configuration

```toml
# Subscribe and receive OpenConfig Telemetry data using JTI
@ -57,7 +57,7 @@ This plugin reads Juniper Networks implementation of OpenConfig telemetry data f
str_as_tags = false str_as_tags = false
``` ```
### Tags: ## Tags
- All measurements are tagged appropriately using the identifier information - All measurements are tagged appropriately using the identifier information
in incoming data in incoming data
@@ -6,7 +6,7 @@ and creates metrics using one of the supported [input data formats][].
For old Kafka versions (< 0.8), please use the [kafka_consumer_legacy][] input plugin
and use the old zookeeper connection method.

## Configuration

```toml
[[inputs.kafka_consumer]]
@@ -1,6 +1,6 @@
# Kafka Consumer Legacy Input Plugin

## Deprecated in version 1.4. Please use [Kafka Consumer input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)

The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the
@@ -2,7 +2,7 @@
The Kapacitor plugin collects metrics from the given Kapacitor instances.

## Configuration

```toml
[[inputs.kapacitor]]
@@ -23,7 +23,7 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.
# insecure_skip_verify = false
```

## Measurements and fields

- [kapacitor](#kapacitor)
  - [num_enabled_tasks](#num_enabled_tasks) _(integer)_
@@ -85,214 +85,272 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.
- [kapacitor_topics](#kapacitor_topics)
  - [collected](#collected) _(integer)_

---
## kapacitor

The `kapacitor` measurement stores fields with information related to
[Kapacitor tasks](https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks)
and [subscriptions](https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/).

### num_enabled_tasks

The number of enabled Kapacitor tasks.

### num_subscriptions

The number of Kapacitor/InfluxDB subscriptions.

### num_tasks

The total number of Kapacitor tasks.
---

## kapacitor_alert

The `kapacitor_alert` measurement stores fields with information related to
[Kapacitor alerts](https://docs.influxdata.com/kapacitor/v1.5/working/alerts/).

### notification-dropped

The number of internal notifications dropped because they arrive too late from another Kapacitor node.
If this count is increasing, Kapacitor Enterprise nodes aren't able to communicate fast enough
to keep up with the volume of alerts.

### primary-handle-count

The number of times this node handled an alert as the primary. This count should increase under normal conditions.

### secondary-handle-count

The number of times this node handled an alert as the secondary. An increase in this counter indicates that the primary is failing to handle alerts in a timely manner.
---

## kapacitor_cluster

The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications) with one another. Specifically, these metrics track the gossip communication between the Kapacitor nodes.

### dropped_member_events

The number of gossip member events that were dropped.

### dropped_user_events

The number of gossip user events that were dropped.
---

## kapacitor_edges

The `kapacitor_edges` measurement stores fields with information related to
[edges](https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines)
in Kapacitor TICKscripts.

### collected

The number of messages collected by TICKscript edges.

### emitted

The number of messages emitted by TICKscript edges.
---

## kapacitor_ingress

The `kapacitor_ingress` measurement stores fields with information related to data
coming into Kapacitor.

### points_received

The number of points received by Kapacitor.
---

## kapacitor_load

The `kapacitor_load` measurement stores fields with information related to the
[Kapacitor Load Directory service](https://docs.influxdata.com/kapacitor/latest/guides/load_directory/).

### errors

The number of errors reported from the load directory service.
---

## kapacitor_memstats

The `kapacitor_memstats` measurement stores fields related to Kapacitor memory usage.

### alloc_bytes

The number of bytes of memory allocated by Kapacitor that are still in use.

### buck_hash_sys_bytes

The number of bytes of memory used by the profiling bucket hash table.

### frees

The number of heap objects freed.

### gc_sys_bytes

The number of bytes of memory used for garbage collection system metadata.

### gc_cpu_fraction

The fraction of Kapacitor's available CPU time used by garbage collection since
Kapacitor started.

### heap_alloc_bytes

The number of reachable and unreachable heap objects garbage collection has
not freed.

### heap_idle_bytes

The number of heap bytes waiting to be used.

### heap_in_use_bytes

The number of heap bytes in use.

### heap_objects

The number of allocated objects.

### heap_released_bytes

The number of heap bytes released to the operating system.

### heap_sys_bytes

The number of heap bytes obtained from `system`.

### last_gc_ns

The nanosecond epoch time of the last garbage collection.

### lookups

The total number of pointer lookups.

### mallocs

The total number of mallocs.

### mcache_in_use_bytes

The number of bytes in use by mcache structures.

### mcache_sys_bytes

The number of bytes used for mcache structures obtained from `system`.

### mspan_in_use_bytes

The number of bytes in use by mspan structures.

### mspan_sys_bytes

The number of bytes used for mspan structures obtained from `system`.

### next_gc_ns

The nanosecond epoch time of the next garbage collection.

### num_gc

The number of completed garbage collection cycles.

### other_sys_bytes

The number of bytes used for other system allocations.

### pause_total_ns

The total number of nanoseconds spent in garbage collection "stop-the-world"
pauses since Kapacitor started.

### stack_in_use_bytes

The number of bytes in use by the stack allocator.

### stack_sys_bytes

The number of bytes obtained from `system` for the stack allocator.

### sys_bytes

The number of bytes of memory obtained from `system`.

### total_alloc_bytes

The total number of bytes allocated, even if freed.
---

## kapacitor_nodes

The `kapacitor_nodes` measurement stores fields related to events that occur in
[TICKscript nodes](https://docs.influxdata.com/kapacitor/latest/nodes/).

### alerts_inhibited

The total number of alerts inhibited by TICKscripts.

### alerts_triggered

The total number of alerts triggered by TICKscripts.

### avg_exec_time_ns

The average execution time of TICKscripts in nanoseconds.

### crits_triggered

The number of critical (`crit`) alerts triggered by TICKscripts.
### errors (from TICKscripts)

The number of errors caused by TICKscripts.
### infos_triggered

The number of info (`info`) alerts triggered by TICKscripts.

### oks_triggered

The number of ok (`ok`) alerts triggered by TICKscripts.

### points_written

The number of points written to InfluxDB or back to Kapacitor.

### warns_triggered

The number of warning (`warn`) alerts triggered by TICKscripts.

### working_cardinality

The total number of unique series processed.

### write_errors

The number of errors that occurred when writing to InfluxDB or other write endpoints.
---

## kapacitor_topics
The `kapacitor_topics` measurement stores fields related to
[Kapacitor topics](https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/).

### collected (kapacitor_topics)

The number of events collected by Kapacitor topics.

---
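Once collected, the fields above can be explored with ordinary InfluxQL. An illustrative query (the `host` tag assumes Telegraf's default tagging) charting the alert rate per node:

```sql
SELECT non_negative_derivative("alerts_triggered", 1m) AS "alerts_per_minute"
FROM "kapacitor_nodes"
WHERE time > now() - 1h
GROUP BY "host"
```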
@@ -303,7 +361,7 @@ these values.

## Example Output

```shell
$ telegraf --config /etc/telegraf.conf --input-filter kapacitor --test
* Plugin: inputs.kapacitor, Collection 1
> kapacitor_memstats,host=hostname.local,kap_version=1.1.0~rc2,url=http://localhost:9092/kapacitor/v1/debug/vars alloc_bytes=6974808i,buck_hash_sys_bytes=1452609i,frees=207281i,gc_sys_bytes=802816i,gc_cpu_fraction=0.00004693548939673313,heap_alloc_bytes=6974808i,heap_idle_bytes=6742016i,heap_in_use_bytes=9183232i,heap_objects=23216i,heap_released_bytes=0i,heap_sys_bytes=15925248i,last_gc_ns=1478791460012676997i,lookups=88i,mallocs=230497i,mcache_in_use_bytes=9600i,mcache_sys_bytes=16384i,mspan_in_use_bytes=98560i,mspan_sys_bytes=131072i,next_gc_ns=11467528i,num_gc=8i,other_sys_bytes=2236087i,pause_total_ns=2994110i,stack_in_use_bytes=1900544i,stack_sys_bytes=1900544i,sys_bytes=22464760i,total_alloc_bytes=35023600i 1478791462000000000
@@ -9,7 +9,7 @@ not covered by other plugins as well as the value of `/proc/sys/kernel/random/en
The metrics are documented in `man proc` under the `/proc/stat` section.
The entropy metric is documented in `man 4 random`.

```text
/proc/sys/kernel/random/entropy_avail
@@ -39,7 +39,7 @@ processes 86031
Number of forks since boot.
```

## Configuration

```toml
# Get kernel statistics from /proc/stat
@@ -47,7 +47,7 @@ Number of forks since boot.
# no configuration
```

## Measurements & Fields

- kernel
  - boot_time (integer, seconds since epoch, `btime`)
@@ -58,13 +58,13 @@ Number of forks since boot.
  - processes_forked (integer, `processes`)
  - entropy_avail (integer, `entropy_available`)

## Tags

None

## Example Output

```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter kernel --test
* Plugin: kernel, Collection 1
> kernel entropy_available=2469i,boot_time=1457505775i,context_switches=2626618i,disk_pages_in=5741i,disk_pages_out=1808i,interrupts=1472736i,processes_forked=10673i 1457613402960879816
@@ -6,8 +6,7 @@ by reading /proc/vmstat. For a full list of available fields see the
For a better idea of what each field represents, see the
[vmstat man page](http://linux.die.net/man/8/vmstat).

```text
/proc/vmstat
kernel/system statistics. Common entries include (from http://www.linuxinsight.com/proc_vmstat.html):
@@ -109,7 +108,7 @@ pgrotated 3781
nr_bounce 0
```

## Configuration

```toml
# Get kernel statistics from /proc/vmstat
@@ -117,7 +116,7 @@ nr_bounce 0
# no configuration
```

## Measurements & Fields

- kernel_vmstat
  - nr_free_pages (integer, `nr_free_pages`)
@@ -212,13 +211,13 @@ nr_bounce 0
  - thp_collapse_alloc_failed (integer, `thp_collapse_alloc_failed`)
  - thp_split (integer, `thp_split`)

## Tags

None
## Example Output

```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter kernel_vmstat --test
* Plugin: kernel_vmstat, Collection 1
> kernel_vmstat allocstall=81496i,compact_blocks_moved=238196i,compact_fail=135220i,compact_pagemigrate_failed=0i,compact_pages_moved=6370588i,compact_stall=142092i,compact_success=6872i,htlb_buddy_alloc_fail=0i,htlb_buddy_alloc_success=0i,kswapd_high_wmark_hit_quickly=25439i,kswapd_inodesteal=29770874i,kswapd_low_wmark_hit_quickly=8756i,kswapd_skip_congestion_wait=0i,kswapd_steal=291534428i,nr_active_anon=2515657i,nr_active_file=2244914i,nr_anon_pages=1358675i,nr_anon_transparent_hugepages=2034i,nr_bounce=0i,nr_dirty=5690i,nr_file_pages=5153546i,nr_free_pages=78730i,nr_inactive_anon=426259i,nr_inactive_file=2366791i,nr_isolated_anon=0i,nr_isolated_file=0i,nr_kernel_stack=579i,nr_mapped=558821i,nr_mlock=0i,nr_page_table_pages=11115i,nr_shmem=541689i,nr_slab_reclaimable=459806i,nr_slab_unreclaimable=47859i,nr_unevictable=0i,nr_unstable=0i,nr_vmscan_write=6206i,nr_writeback=0i,nr_writeback_temp=0i,numa_foreign=0i,numa_hit=5113399878i,numa_interleave=35793i,numa_local=5113399878i,numa_miss=0i,numa_other=0i,pageoutrun=505006i,pgactivate=375664931i,pgalloc_dma=0i,pgalloc_dma32=122480220i,pgalloc_movable=0i,pgalloc_normal=5233176719i,pgdeactivate=122735906i,pgfault=8699921410i,pgfree=5359765021i,pginodesteal=9188431i,pgmajfault=122210i,pgpgin=219717626i,pgpgout=3495885510i,pgrefill_dma=0i,pgrefill_dma32=1180010i,pgrefill_movable=0i,pgrefill_normal=119866676i,pgrotated=60620i,pgscan_direct_dma=0i,pgscan_direct_dma32=12256i,pgscan_direct_movable=0i,pgscan_direct_normal=31501600i,pgscan_kswapd_dma=0i,pgscan_kswapd_dma32=4480608i,pgscan_kswapd_movable=0i,pgscan_kswapd_normal=287857984i,pgsteal_dma=0i,pgsteal_dma32=4466436i,pgsteal_movable=0i,pgsteal_normal=318463755i,pswpin=2092i,pswpout=6206i,slabs_scanned=93775616i,thp_collapse_alloc=24857i,thp_collapse_alloc_failed=102214i,thp_fault_alloc=346219i,thp_fault_fallback=895453i,thp_split=9817i,unevictable_pgs_cleared=0i,unevictable_pgs_culled=1531i,unevictable_pgs_mlocked=6988i,unevictable_pgs_mlockfreed=0i,unevictable_pgs_munlocked=6988i,unevictable_pgs_rescued=5426i,unevictable_pgs_scanned=0i,unevictable_pgs_stranded=0i,zone_reclaim_failed=0i 1459455200071462843
@@ -7,7 +7,7 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
[Kibana]: https://www.elastic.co/

## Configuration

```toml
[[inputs.kibana]]
@@ -29,7 +29,7 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
# insecure_skip_verify = false
```

## Metrics

- kibana
  - tags:
@@ -48,9 +48,9 @@ The `kibana` plugin queries the [Kibana][] API to obtain the service status.
  - concurrent_connections (integer)
  - requests_per_sec (float)

## Example Output

```shell
kibana,host=myhost,name=my-kibana,source=localhost:5601,status=green,version=6.5.4 concurrent_connections=8i,heap_max_bytes=447778816i,heap_total_bytes=447778816i,heap_used_bytes=380603352i,requests_per_sec=1,response_time_avg_ms=57.6,response_time_max_ms=220i,status_code=1i,uptime_ms=6717489805i 1534864502000000000
```
@ -58,8 +58,8 @@ kibana,host=myhost,name=my-kibana,source=localhost:5601,status=green,version=6.5
Requires the following tools: Requires the following tools:
* [Docker](https://docs.docker.com/get-docker/) - [Docker](https://docs.docker.com/get-docker/)
* [Docker Compose](https://docs.docker.com/compose/install/) - [Docker Compose](https://docs.docker.com/compose/install/)
From the root of this project execute the following script: `./plugins/inputs/kibana/test_environment/run_test_env.sh` From the root of this project execute the following script: `./plugins/inputs/kibana/test_environment/run_test_env.sh`


@@ -3,8 +3,7 @@

The [Kinesis][kinesis] consumer plugin reads from a Kinesis data stream
and creates metrics using one of the supported [input data formats][].

## Configuration

```toml
[[inputs.kinesis_consumer]]
@@ -74,29 +73,28 @@ and creates metrics using one of the supported [input data formats][].
  table_name = "default"
```

### Required AWS IAM permissions

Kinesis:

- DescribeStream
- GetRecords
- GetShardIterator

DynamoDB:

- GetItem
- PutItem

### DynamoDB Checkpoint

The DynamoDB checkpoint stores the last processed record in a DynamoDB table. To
leverage this functionality, create a table with the following string type keys:

```shell
Partition key: namespace
Sort key: shard_id
```
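For reference, such a checkpoint table could be created with the AWS CLI along these lines (a sketch; the table name `default` matches the sample configuration above, and the on-demand billing mode is an assumption):

```shell
# Create the checkpoint table with the two string keys the plugin expects.
aws dynamodb create-table \
  --table-name default \
  --attribute-definitions \
    AttributeName=namespace,AttributeType=S \
    AttributeName=shard_id,AttributeType=S \
  --key-schema \
    AttributeName=namespace,KeyType=HASH \
    AttributeName=shard_id,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```

Running this requires valid AWS credentials and the `dynamodb:CreateTable` permission in addition to the permissions listed above.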
[kinesis]: https://aws.amazon.com/kinesis/
[input data formats]: /docs/DATA_FORMATS_INPUT.md


@@ -3,9 +3,9 @@

The KNX input plugin listens for messages on the KNX home-automation bus.
This plugin connects to the KNX bus via a KNX-IP interface.
Information about supported KNX message datapoint types can be found at the
underlying "knx-go" project site (<https://github.com/vapourismo/knx-go>).

## Configuration

This is a sample config for the plugin.
@@ -34,7 +34,7 @@ This is a sample config for the plugin.
  # addresses = ["5/5/3"]
```

### Measurement configurations

Each measurement contains only one datapoint-type (DPT) and assigns a list of
addresses to this measurement. You can, for example, group all temperature sensor
@@ -43,23 +43,24 @@ messages of one datapoint-type to multiple measurements.

**NOTE: You should not assign a group-address (GA) to multiple measurements!**

## Metrics

Received KNX data is stored in the named measurement as configured above using
the "value" field. In addition to the value, the following tags are added
to the datapoint:

- "groupaddress": KNX group-address corresponding to the value
- "unit": unit of the value
- "source": KNX physical address sending the value

To find out about the datatype of the datapoint please check your KNX project,
the KNX-specification or the "knx-go" project for the corresponding DPT.

## Example Output

This section shows example output in Line Protocol format.

```shell
illumination,groupaddress=5/5/4,host=Hugin,source=1.1.12,unit=lux value=17.889999389648438 1582132674999013274
temperature,groupaddress=5/5/1,host=Hugin,source=1.1.8,unit=°C value=17.799999237060547 1582132663427587361
windowopen,groupaddress=1/0/1,host=Hugin,source=1.1.3 value=true 1582132630425581320
```


@@ -19,7 +19,7 @@ the major cloud providers; this is roughly 4 release / 2 years.

**This plugin supports Kubernetes 1.11 and later.**

## Series Cardinality Warning

This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
@@ -31,7 +31,7 @@ avoid cardinality issues:

- Monitor your databases [series cardinality][].
- Consult the [InfluxDB documentation][influx-docs] for the most up-to-date techniques.

## Configuration

```toml
[[inputs.kube_inventory]]
@@ -81,7 +81,7 @@ avoid cardinality issues:
  # fielddrop = ["terminated_reason"]
```

## Kubernetes Permissions

If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to list "persistentvolumes" and "nodes". You will then need to make an [aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.
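As a hypothetical, non-aggregated alternative, the same permissions can be granted imperatively with `kubectl` (the role, binding, namespace, and service-account names below are placeholders, not part of this plugin's documented setup):

```shell
# ClusterRole allowing "list" on the two resources the plugin reads.
kubectl create clusterrole telegraf-stats-viewer \
  --verb=list \
  --resource=persistentvolumes,nodes

# Bind it to the service account Telegraf runs under.
kubectl create clusterrolebinding telegraf-stats-viewer \
  --clusterrole=telegraf-stats-viewer \
  --serviceaccount=default:telegraf
```

Note that the aggregated-ClusterRole approach described above requires label-based aggregation rules, which must be expressed in YAML rather than via `kubectl create`.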
@@ -150,7 +150,7 @@ tls_cert = "/run/telegraf-kubernetes-cert"
tls_key = "/run/telegraf-kubernetes-key"
```

## Metrics

- kubernetes_daemonset
  - tags:
@@ -167,7 +167,7 @@ tls_key = "/run/telegraf-kubernetes-key"
    - number_unavailable
    - updated_number_scheduled

- kubernetes_deployment
  - tags:
    - deployment_name
    - namespace
@@ -192,7 +192,7 @@ tls_key = "/run/telegraf-kubernetes-key"
    - ready
    - port

- kubernetes_ingress
  - tags:
    - ingress_name
    - namespace
@@ -220,7 +220,7 @@ tls_key = "/run/telegraf-kubernetes-key"
    - allocatable_memory_bytes
    - allocatable_pods

- kubernetes_persistentvolume
  - tags:
    - pv_name
    - phase
@@ -238,7 +238,7 @@ tls_key = "/run/telegraf-kubernetes-key"
  - fields:
    - phase_type (int, [see below](#pvc-phase_type))

- kubernetes_pod_container
  - tags:
    - container_name
    - namespace
@@ -274,7 +274,7 @@ tls_key = "/run/telegraf-kubernetes-key"
    - port
    - target_port

- kubernetes_statefulset
  - tags:
    - statefulset_name
    - namespace
@@ -289,7 +289,7 @@ tls_key = "/run/telegraf-kubernetes-key"
    - spec_replicas
    - observed_generation

### pv `phase_type`

The persistentvolume "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
@@ -302,7 +302,7 @@ The persistentvolume "phase" is saved in the `phase` tag with a correlated numer
| available | 4 |
| unknown | 5 |
### pvc `phase_type`

The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
@@ -313,9 +313,9 @@ The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
| pending | 2 |
| unknown | 3 |

## Example Output

```shell
kubernetes_configmap,configmap_name=envoy-config,namespace=default,resource_version=56593031 created=1544103867000000000i 1547597616000000000
kubernetes_daemonset,daemonset_name=telegraf,selector_select1=s1,namespace=logging number_unavailable=0i,desired_number_scheduled=11i,number_available=11i,number_misscheduled=8i,number_ready=11i,updated_number_scheduled=11i,created=1527758699000000000i,generation=16i,current_number_scheduled=11i 1547597616000000000
kubernetes_deployment,deployment_name=deployd,selector_select1=s1,namespace=default replicas_unavailable=0i,created=1544103082000000000i,replicas_available=1i 1547597616000000000


@@ -8,8 +8,8 @@ should configure this plugin to talk to its locally running kubelet.

To find the ip address of the host you are running on you can issue a command like the following:

```sh
curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
```

In this case we used the downward API to pass in `$POD_NAMESPACE`; `$HOSTNAME` is the hostname of the pod, which is set by the kubernetes API.
@@ -20,7 +20,7 @@ the major cloud providers; this is roughly 4 release / 2 years.

**This plugin supports Kubernetes 1.11 and later.**

## Series Cardinality Warning

This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
@@ -32,7 +32,7 @@ avoid cardinality issues:

- Monitor your databases [series cardinality][].
- Consult the [InfluxDB documentation][influx-docs] for the most up-to-date techniques.

## Configuration

```toml
[[inputs.kubernetes]]
@@ -62,7 +62,7 @@ avoid cardinality issues:
  # insecure_skip_verify = false
```

## DaemonSet

For recommendations on running Telegraf as a DaemonSet see [Monitoring Kubernetes
Architecture][k8s-telegraf] or view the Helm charts:
@@ -72,7 +72,7 @@ Architecture][k8s-telegraf] or view the Helm charts:

- [Chronograf][]
- [Kapacitor][]

## Metrics

- kubernetes_node
  - tags:
@@ -97,7 +97,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
    - runtime_image_fs_capacity_bytes
    - runtime_image_fs_used_bytes

- kubernetes_pod_container
  - tags:
    - container_name
    - namespace
@@ -129,7 +129,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
    - capacity_bytes
    - used_bytes

- kubernetes_pod_network
  - tags:
    - namespace
    - node_name
@@ -140,9 +140,9 @@ Architecture][k8s-telegraf] or view the Helm charts:
    - tx_bytes
    - tx_errors

## Example Output

```shell
kubernetes_node
kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_available_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
kubernetes_pod_network,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr rx_bytes=120671099i,rx_errors=0i,tx_bytes=102451983i,tx_errors=0i 1476477530000000000


@@ -5,18 +5,18 @@ This plugin provides a consumer for use with Arista Networks Latency Analyzer

Metrics are read from a stream of data via TCP through port 50001 on the
switch's management IP. The data is in Protobuffers format. For more information on Arista LANZ

- <https://www.arista.com/en/um-eos/eos-latency-analyzer-lanz>

This plugin uses Arista's SDK.

- <https://github.com/aristanetworks/goarista>

## Configuration

You will need to configure LANZ and enable streaming LANZ data.

- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>

```toml
[[inputs.lanz]]
@@ -26,9 +26,9 @@ You will need to configure LANZ and enable streaming LANZ data.
  ]
```

## Metrics

For more details on the metrics see <https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>

- lanz_congestion_record:
  - tags:
@@ -47,7 +47,7 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
    - tx_latency (integer)
    - q_drop_count (integer)

- lanz_global_buffer_usage_record
  - tags:
    - entry_type
    - source
@@ -57,31 +57,31 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
    - buffer_size (integer)
    - duration (integer)

## Sample Queries

Get the max tx_latency for the last hour for all interfaces on all switches.

```sql
SELECT max("tx_latency") AS "max_tx_latency" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```

Get the max queue_size for the last hour for all interfaces on all switches.

```sql
SELECT max("queue_size") AS "max_queue_size" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```

Get the max buffer_size over the last hour for all switches.

```sql
SELECT max("buffer_size") AS "max_buffer_size" FROM "global_buffer_usage_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname"
```

## Example output

```shell
lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=505i,duration=0i 1583341058300643815
lanz_congestion_record,entry_type=2,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 time_of_max_qlen=0i,tx_latency=564480i,q_drop_count=0i,timestamp=158334105824919i,queue_size=225i 1583341058300636045
lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=589i,duration=0i 1583341058300457464
lanz_congestion_record,entry_type=1,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 q_drop_count=0i,timestamp=158334105824919i,queue_size=232i,time_of_max_qlen=0i,tx_latency=584640i 1583341058300450302
```


@@ -2,7 +2,7 @@

The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using SNMP. See [LeoFS Documentation / System Administration / System Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).

## Configuration

```toml
# Sample Config:
@@ -11,9 +11,11 @@ The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
  servers = ["127.0.0.1:4010"]
```

## Measurements & Fields

### Statistics specific to the internals of LeoManager

#### Erlang VM of LeoManager

- 1 min Statistics
  - num_of_processes
@@ -33,7 +35,8 @@ The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
  - allocated_memory_5min

### Statistics specific to the internals of LeoStorage

#### Erlang VM of LeoStorage

- 1 min Statistics
  - num_of_processes
@@ -52,7 +55,7 @@ The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
  - used_allocated_memory_5min
  - allocated_memory_5min

#### Total Number of Requests for LeoStorage

- 1 min Statistics
  - num_of_writes
@@ -103,7 +106,8 @@ Note: The following items are available since LeoFS v1.4.0:

Note: All items are available since LeoFS v1.4.0.

### Statistics specific to the internals of LeoGateway

#### Erlang VM of LeoGateway

- 1 min Statistics
  - num_of_processes
@@ -122,7 +126,7 @@ Note: The all items are available since LeoFS v1.4.0.
  - used_allocated_memory_5min
  - allocated_memory_5min

#### Total Number of Requests for LeoGateway

- 1 min Statistics
  - num_of_writes
@@ -140,15 +144,13 @@ Note: The all items are available since LeoFS v1.4.0.
  - total_of_files
  - total_cached_size

### Tags

All measurements have the following tags:

- node

### Example output

#### LeoManager

@@ -221,7 +223,7 @@ $ ./telegraf --config ./plugins/inputs/leofs/leo_storage.conf --input-filter leo

#### LeoGateway

```shell
$ ./telegraf --config ./plugins/inputs/leofs/leo_gateway.conf --input-filter leofs --test
> leofs, host=gateway_0, node=gateway_0@127.0.0.1
allocated_memory=87941120,


@@ -1,9 +1,9 @@

# Linux Sysctl FS Input Plugin

The linux_sysctl_fs input provides Linux system level file metrics. The documentation on these fields can be found at <https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.

Example output:

```shell
> linux_sysctl_fs,host=foo dentry-want-pages=0i,file-max=44222i,aio-max-nr=65536i,inode-preshrink-nr=0i,dentry-nr=64340i,dentry-unused-nr=55274i,file-nr=1568i,aio-nr=0i,inode-nr=35952i,inode-free-nr=12957i,dentry-age-limit=45i 1490982022000000000
```
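These values come from the `fs` entries under `/proc/sys`; as a quick sanity check (a sketch; each field name maps to a file of the same name) you can read the same counters directly:

```shell
# Maximum number of file handles the kernel will allocate (field: file-max)
cat /proc/sys/fs/file-max
# Allocated, unused, and maximum file handles (field: file-nr)
cat /proc/sys/fs/file-nr
```
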


@@ -1,6 +1,6 @@

# Logparser Input Plugin

## Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the [`grok` data format][grok parser]

The `logparser` plugin streams and parses the given logfiles. Currently it
has the capability of parsing "grok" patterns from logfiles, which also supports
@@ -8,12 +8,14 @@ regex patterns.

The `tail` plugin now provides all the functionality of the `logparser` plugin.
Most options can be translated directly to the `tail` plugin:

- For options in the `[inputs.logparser.grok]` section, the equivalent option
  will have the `grok_` prefix when using them in the `tail` input.
- The grok `measurement` option can be replaced using the standard plugin
  `name_override` option.

Migration Example:

```diff
- [[inputs.logparser]]
-   files = ["/var/log/apache/access.log"]
@@ -38,7 +40,7 @@ Migration Example:
+   data_format = "grok"
```

## Configuration

```toml
[[inputs.logparser]]
@@ -90,15 +92,14 @@ Migration Example:
  # timezone = "Canada/Eastern"
```

## Grok Parser

Reference the [grok parser][] documentation to set up the grok section of the
configuration.

## Additional Resources

- <https://www.influxdata.com/telegraf-correlate-log-metrics-data-performance-bottlenecks/>

[tail]: /plugins/inputs/tail/README.md
[grok parser]: /plugins/parsers/grok/README.md


@@ -5,7 +5,7 @@ This plugin reads metrics exposed by

Logstash 5 and later is supported.

## Configuration

```toml
[[inputs.logstash]]
@@ -40,7 +40,7 @@ Logstash 5 and later is supported.
  # "X-Special-Header" = "Special-Value"
```

## Metrics

Additional plugin stats may be collected (because logstash doesn't consistently expose all stats)
@@ -80,7 +80,7 @@ Additional plugin stats may be collected (because logstash doesn't consistently
    - gc_collectors_young_collection_count
    - uptime_in_millis

- logstash_process
  - tags:
    - node_id
    - node_name
@@ -112,7 +112,7 @@ Additional plugin stats may be collected (because logstash doesn't consistently
    - filtered
    - out

- logstash_plugins
  - tags:
    - node_id
    - node_name
@@ -148,9 +148,9 @@ Additional plugin stats may be collected (because logstash doesn't consistently
    - page_capacity_in_bytes
    - queue_size_in_bytes

## Example Output

```shell
logstash_jvm,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt gc_collectors_old_collection_count=2,gc_collectors_old_collection_time_in_millis=100,gc_collectors_young_collection_count=26,gc_collectors_young_collection_time_in_millis=1028,mem_heap_committed_in_bytes=1056309248,mem_heap_max_in_bytes=1056309248,mem_heap_used_in_bytes=207216328,mem_heap_used_percent=19,mem_non_heap_committed_in_bytes=160878592,mem_non_heap_used_in_bytes=140838184,mem_pools_old_committed_in_bytes=899284992,mem_pools_old_max_in_bytes=899284992,mem_pools_old_peak_max_in_bytes=899284992,mem_pools_old_peak_used_in_bytes=189468088,mem_pools_old_used_in_bytes=189468088,mem_pools_survivor_committed_in_bytes=17432576,mem_pools_survivor_max_in_bytes=17432576,mem_pools_survivor_peak_max_in_bytes=17432576,mem_pools_survivor_peak_used_in_bytes=17432576,mem_pools_survivor_used_in_bytes=12572640,mem_pools_young_committed_in_bytes=139591680,mem_pools_young_max_in_bytes=139591680,mem_pools_young_peak_max_in_bytes=139591680,mem_pools_young_peak_used_in_bytes=139591680,mem_pools_young_used_in_bytes=5175600,threads_count=20,threads_peak_count=24,uptime_in_millis=739089 1566425244000000000
logstash_process,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt cpu_load_average_15m=0.03,cpu_load_average_1m=0.01,cpu_load_average_5m=0.04,cpu_percent=0,cpu_total_in_millis=83230,max_file_descriptors=16384,mem_total_virtual_in_bytes=3689132032,open_file_descriptors=118,peak_open_file_descriptors=118 1566425244000000000
logstash_events,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,source=debian-stretch-logstash6.virt duration_in_millis=0,filtered=0,in=0,out=0,queue_push_duration_in_millis=0 1566425244000000000


@ -5,7 +5,7 @@ many requirements of leadership class HPC simulation environments.
This plugin monitors the Lustre file system using its entries in the proc filesystem.

## Configuration

```toml
# Read metrics from local Lustre service on OST, MDS
@ -24,7 +24,7 @@ This plugin monitors the Lustre file system using its entries in the proc filesy
# ]
```

## Metrics

From `/proc/fs/lustre/obdfilter/*/stats` and `/proc/fs/lustre/osd-ldiskfs/*/stats`:
@ -113,17 +113,16 @@ From `/proc/fs/lustre/mdt/*/job_stats`:
- jobstats_sync
- jobstats_unlink

## Troubleshooting

Check for the default or custom procfiles in the proc filesystem, and reference
the [Lustre Monitoring and Statistics Guide][guide]. This plugin does not
report all information from these files, only a limited set of items
corresponding to the above metric fields.

## Example Output

```shell
lustre2,host=oss2,jobid=42990218,name=wrk-OST0041 jobstats_ost_setattr=0i,jobstats_ost_sync=0i,jobstats_punch=0i,jobstats_read_bytes=4096i,jobstats_read_calls=1i,jobstats_read_max_size=4096i,jobstats_read_min_size=4096i,jobstats_write_bytes=310206488i,jobstats_write_calls=7423i,jobstats_write_max_size=53048i,jobstats_write_min_size=8820i 1556525847000000000
lustre2,host=mds1,jobid=42992017,name=wrk-MDT0000 jobstats_close=31798i,jobstats_crossdir_rename=0i,jobstats_getattr=34146i,jobstats_getxattr=15i,jobstats_link=0i,jobstats_mkdir=658i,jobstats_mknod=0i,jobstats_open=31797i,jobstats_rename=0i,jobstats_rmdir=0i,jobstats_samedir_rename=0i,jobstats_setattr=1788i,jobstats_setxattr=0i,jobstats_statfs=0i,jobstats_sync=0i,jobstats_unlink=0i 1556525828000000000


@ -3,7 +3,7 @@
The Logical Volume Management (LVM) input plugin collects information about
physical volumes, volume groups, and logical volumes.

## Configuration

The `lvm` command requires elevated permissions. If the user has configured
sudo with the ability to run these commands, then set `use_sudo` to true.
@ -15,7 +15,7 @@ sudo with the ability to run these commands, then set the `use_sudo` to true.
use_sudo = false
```

### Using sudo

If your account does not already have the ability to run commands
with passwordless sudo, then updates to the sudoers file are required. Below
@ -31,7 +31,7 @@ Cmnd_Alias LVM = /usr/sbin/pvs *, /usr/sbin/vgs *, /usr/sbin/lvs *
Defaults!LVM !logfile, !syslog, !pam_session
```

## Metrics

Metrics are broken out by physical volume (pv), volume group (vg), and logical
volume (lv):
@ -64,14 +64,16 @@ volume (lv):
- data_percent
- meta_percent

## Example Output

The following example shows a system with the root partition on an LVM group
as well as with a Docker thin-provisioned LVM group on a second drive:

```shell
> lvm_physical_vol,path=/dev/sda2,vol_group=vgroot free=0i,size=249510756352i,used=249510756352i,used_percent=100 1631823026000000000
> lvm_physical_vol,path=/dev/sdb,vol_group=docker free=3858759680i,size=128316342272i,used=124457582592i,used_percent=96.99277612525741 1631823026000000000
> lvm_vol_group,name=vgroot free=0i,logical_volume_count=1i,physical_volume_count=1i,size=249510756352i,snapshot_count=0i,used_percent=100 1631823026000000000
> lvm_vol_group,name=docker free=3858759680i,logical_volume_count=1i,physical_volume_count=1i,size=128316342272i,snapshot_count=0i,used_percent=96.99277612525741 1631823026000000000
> lvm_logical_vol,name=lvroot,vol_group=vgroot data_percent=0,metadata_percent=0,size=249510756352i 1631823026000000000
> lvm_logical_vol,name=thinpool,vol_group=docker data_percent=0.36000001430511475,metadata_percent=1.3300000429153442,size=121899057152i 1631823026000000000
```
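As a quick sanity check on the sample above (a sketch, not part of the plugin), `used_percent` for a physical volume is simply used over size expressed as a percentage:

```python
# Values taken from the lvm_physical_vol sample line for /dev/sdb.
used = 124457582592
size = 128316342272

used_percent = used / size * 100
print(round(used_percent, 2))  # 96.99
```

This matches the `used_percent=96.99277612525741` reported in the sample output, and `free + used` adds up to `size`.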


@ -2,7 +2,7 @@
Pulls campaign reports from the [Mailchimp API](https://developer.mailchimp.com/).

## Configuration

This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage mailchimp`.
@ -21,7 +21,7 @@ generate it using `telegraf --usage mailchimp`.
# campaign_id = ""
```

## Metrics

- mailchimp
- tags:


@ -2,7 +2,7 @@
The MarkLogic Telegraf plugin gathers health status metrics from one or more hosts.

## Configuration

```toml
[[inputs.marklogic]]
@ -24,7 +24,7 @@ The MarkLogic Telegraf plugin gathers health status metrics from one or more hos
# insecure_skip_verify = false
```

## Metrics

- marklogic
- tags:
@ -56,9 +56,9 @@ The MarkLogic Telegraf plugin gathers health status metrics from one or more hos
- http_server_receive_bytes
- http_server_send_bytes

## Example Output

```shell
$> marklogic,host=localhost,id=2592913110757471141,source=ml1.local total_cpu_stat_iowait=0.0125649003311992,memory_process_swap_size=0i,host_size=380i,data_dir_space=28216i,query_read_load=0i,ncpus=1i,log_device_space=28216i,query_read_bytes=13947332i,merge_write_load=0i,http_server_receive_bytes=225893i,online=true,ncores=4i,total_cpu_stat_user=0.150778993964195,total_cpu_stat_system=0.598927974700928,total_cpu_stat_idle=99.2210006713867,memory_system_total=3947i,memory_system_free=2669i,memory_size=4096i,total_rate=14.7697010040283,http_server_send_bytes=0i,memory_process_size=903i,memory_process_rss=486i,merge_read_load=0i,total_load=0.00502600101754069 1566373000000000000
```


@ -2,7 +2,7 @@
This plugin gathers statistics data from a Mcrouter server.

## Configuration

```toml
# Read metrics from one or many mcrouter servers.
@ -15,7 +15,7 @@ This plugin gathers statistics data from a Mcrouter server.
# timeout = "5s"
```

## Measurements & Fields

The fields from this plugin are gathered in the *mcrouter* measurement.
@ -88,16 +88,14 @@ Fields:
* cmd_delete_out_all
* cmd_lease_set_out_all

## Tags

* Mcrouter measurements have the following tags:
* server (the host name from which metrics are gathered)

## Example Output

```shell
$ ./telegraf --config telegraf.conf --input-filter mcrouter --test
mcrouter,server=localhost:11211 uptime=166,num_servers=1,num_servers_new=1,num_servers_up=0,num_servers_down=0,num_servers_closed=0,num_clients=1,num_suspect_servers=0,destination_batches_sum=0,destination_requests_sum=0,outstanding_route_get_reqs_queued=0,outstanding_route_update_reqs_queued=0,outstanding_route_get_avg_queue_size=0,outstanding_route_update_avg_queue_size=0,outstanding_route_get_avg_wait_time_sec=0,outstanding_route_update_avg_wait_time_sec=0,retrans_closed_connections=0,destination_pending_reqs=0,destination_inflight_reqs=0,destination_batch_size=0,asynclog_requests=0,proxy_reqs_processing=1,proxy_reqs_waiting=0,client_queue_notify_period=0,rusage_system=0.040966,rusage_user=0.020483,ps_num_minor_faults=2490,ps_num_major_faults=11,ps_user_time_sec=0.02,ps_system_time_sec=0.04,ps_vsize=697741312,ps_rss=10563584,fibers_allocated=0,fibers_pool_size=0,fibers_stack_high_watermark=0,successful_client_connections=18,duration_us=0,destination_max_pending_reqs=0,destination_max_inflight_reqs=0,retrans_per_kbyte_max=0,cmd_get_count=0,cmd_delete_out=0,cmd_lease_get=0,cmd_set=0,cmd_get_out_all=0,cmd_get_out=0,cmd_lease_set_count=0,cmd_other_out_all=0,cmd_lease_get_out=0,cmd_set_count=0,cmd_lease_set_out=0,cmd_delete_count=0,cmd_other=0,cmd_delete=0,cmd_get=0,cmd_lease_set=0,cmd_set_out=0,cmd_lease_get_count=0,cmd_other_out=0,cmd_lease_get_out_all=0,cmd_set_out_all=0,cmd_other_count=0,cmd_delete_out_all=0,cmd_lease_set_out_all=0 1453831884664956455
```


@ -6,10 +6,9 @@ by reading /proc/mdstat. For a full list of available fields see the
For a better idea of what each field represents, see the
[mdstat man page](https://raid.wiki.kernel.org/index.php/Mdstat).

Stat collection is based on Prometheus' mdstat collection library at <https://github.com/prometheus/procfs/blob/master/mdstat.go>

## Configuration

```toml
# Get kernel statistics from /proc/mdstat
@ -19,7 +18,7 @@ Stat collection based on Prometheus' mdstat collection library at https://github
# file_name = "/proc/mdstat"
```

## Measurements & Fields

- mdstat
- BlocksSynced (if the array is rebuilding/checking, this is the count of blocks that have been scanned)
@ -32,16 +31,16 @@ Stat collection based on Prometheus' mdstat collection library at https://github
- DisksSpare (the current count of "spare" disks in the array)
- DisksTotal (total count of disks in the array)

## Tags

- mdstat
- ActivityState (`active` or `inactive`)
- Devices (comma separated list of devices that make up the array)
- Name (name of the array)

## Example Output

```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter mdstat --test
* Plugin: mdstat, Collection 1
> mdstat,ActivityState=active,Devices=sdm1\,sdn1,Name=md1 BlocksSynced=231299072i,BlocksSyncedFinishTime=0,BlocksSyncedPct=0,BlocksSyncedSpeed=0,BlocksTotal=231299072i,DisksActive=2i,DisksFailed=0i,DisksSpare=0i,DisksTotal=2i,DisksDown=0i 1617814276000000000


@ -5,14 +5,15 @@ The mem plugin collects system memory metrics.
For a more complete explanation of the difference between *used* and
*actual_used* RAM, see [Linux ate my ram](http://www.linuxatemyram.com/).

## Configuration

```toml
# Read metrics about memory usage
[[inputs.mem]]
# no configuration
```

## Metrics

Available fields are dependent on platform.
@ -55,7 +56,8 @@ Available fields are dependent on platform.
- write_back (integer, Linux)
- write_back_tmp (integer, Linux)

## Example Output

```shell
mem active=9299595264i,available=16818249728i,available_percent=80.41654254645131,buffered=2383761408i,cached=13316689920i,commit_limit=14751920128i,committed_as=11781156864i,dirty=122880i,free=1877688320i,high_free=0i,high_total=0i,huge_page_size=2097152i,huge_pages_free=0i,huge_pages_total=0i,inactive=7549939712i,low_free=0i,low_total=0i,mapped=416763904i,page_tables=19787776i,shared=670679040i,slab=2081071104i,sreclaimable=1923395584i,sunreclaim=157675520i,swap_cached=1302528i,swap_free=4286128128i,swap_total=4294963200i,total=20913917952i,used=3335778304i,used_percent=15.95004011996231,vmalloc_chunk=0i,vmalloc_total=35184372087808i,vmalloc_used=0i,wired=0i,write_back=0i,write_back_tmp=0i 1574712869000000000
```


@ -2,7 +2,7 @@
This plugin gathers statistics data from a Memcached server.

## Configuration

```toml
# Read metrics from one or many memcached servers.
@ -14,7 +14,7 @@ This plugin gathers statistics data from a Memcached server.
# unix_sockets = ["/var/run/memcached.sock"]
```

## Measurements & Fields

The fields from this plugin are gathered in the *memcached* measurement.
@ -63,22 +63,22 @@ Fields:
Description of gathered fields taken from [here](https://github.com/memcached/memcached/blob/master/doc/protocol.txt).

## Tags

* Memcached measurements have the following tags:
* server (the host name from which metrics are gathered)

## Sample Queries

You can use the following query to get the average get hit and miss ratio, as well as the total average size of cached items, number of cached items and average connection counts per server.

```sql
SELECT mean(get_hits) / mean(cmd_get) as get_ratio, mean(get_misses) / mean(cmd_get) as get_misses_ratio, mean(bytes), mean(curr_items), mean(curr_connections) FROM memcached WHERE time > now() - 1h GROUP BY server
```

## Example Output

```shell
$ ./telegraf --config telegraf.conf --input-filter memcached --test
memcached,server=localhost:11211 get_hits=1,get_misses=2,evictions=0,limit_maxbytes=0,bytes=10,uptime=3600,curr_items=2,total_items=2,curr_connections=1,total_connections=2,connection_structures=1,cmd_get=2,cmd_set=1,delete_hits=0,delete_misses=0,incr_hits=0,incr_misses=0,decr_hits=0,decr_misses=0,cas_hits=0,cas_misses=0,bytes_read=10,bytes_written=10,threads=1,conn_yields=0 1453831884664956455
```


@ -3,7 +3,7 @@
This input plugin gathers metrics from Mesos.
For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.

## Configuration

```toml
# Telegraf plugin for gathering metrics from N Mesos masters
@ -53,7 +53,7 @@ For more information, please check the [Mesos Observability Metrics](http://meso
By default this plugin is not configured to gather metrics from Mesos. Since a Mesos cluster can be deployed in numerous ways, it does not provide any default
values. You need to specify the master/slave nodes this plugin will gather metrics from.

## Measurements & Fields

Mesos master metric groups
@ -250,6 +250,7 @@ Mesos master metric groups
- allocator/resources/mem/total

Mesos slave metric groups

- resources
- slave/cpus_percent
- slave/cpus_used
@ -315,7 +316,7 @@ Mesos slave metric groups
- slave/valid_framework_messages
- slave/valid_status_updates

## Tags

- All master/slave measurements have the following tags:
- server (network location of server: `host:port`)
@ -325,8 +326,9 @@ Mesos slave metric groups
- All master measurements have the extra tags:
- state (leader/follower)

## Example Output

```shell
$ telegraf --config ~/mesos.conf --input-filter mesos --test
* Plugin: mesos, Collection 1
mesos,role=master,state=leader,host=172.17.8.102,server=172.17.8.101
@ -347,4 +349,3 @@ master/mem_revocable_used=0,master/mem_total=1002,
master/mem_used=0,master/messages_authenticate=0,
master/messages_deactivate_framework=0 ...
```


@ -7,7 +7,7 @@ This plugin is known to support Minecraft Java Edition versions 1.11 - 1.14.
When using a version of Minecraft earlier than 1.13, be aware that the values
for some criteria have changed and may need to be modified.

## Server Setup

Enable [RCON][] on the Minecraft server by adding this to your server configuration
in the [server.properties][] file:
@ -24,22 +24,25 @@ from the server console, or over an RCON connection.
When getting started, pick an easy-to-test objective. This command will add an
objective that counts the number of times a player has jumped:

```sh
/scoreboard objectives add jumps minecraft.custom:minecraft.jump
```

Once a player has triggered the event they will be added to the scoreboard;
you can then list all players with recorded scores:

```sh
/scoreboard players list
```
View the current scores with a command, substituting your player name:

```sh
/scoreboard players list Etho
```

## Configuration

```toml
[[inputs.minecraft]]
@ -53,7 +56,7 @@ View the current scores with a command, substituting your player name:
password = ""
```

## Metrics

- minecraft
- tags:
@ -64,15 +67,17 @@ View the current scores with a command, substituting your player name:
- fields:
- `<objective_name>` (integer, count)

## Sample Queries

Get the number of jumps per player in the last hour:

```sql
SELECT SPREAD("jumps") FROM "minecraft" WHERE time > now() - 1h GROUP BY "player"
```

## Example Output

```shell
minecraft,player=notch,source=127.0.0.1,port=25575 jumps=178i 1498261397000000000
minecraft,player=dinnerbone,source=127.0.0.1,port=25575 deaths=1i,jumps=1999i,cow_kills=1i 1498261397000000000
minecraft,player=jeb,source=127.0.0.1,port=25575 d_pickaxe=1i,damage_dealt=80i,d_sword=2i,hunger=20i,health=20i,kills=1i,level=33i,jumps=264i,armor=15i 1498261397000000000


@ -3,7 +3,7 @@
The Modbus plugin collects Discrete Inputs, Coils, Input Registers and Holding
Registers via Modbus TCP or Modbus RTU/ASCII.

## Configuration

```toml
[[inputs.modbus]]
@ -103,17 +103,18 @@ Registers via Modbus TCP or Modbus RTU/ASCII.
# close_connection_after_gather = false
```

## Notes

You can debug Modbus connection issues by enabling `debug_connection`. To see those debug messages, Telegraf has to be started with debugging enabled (i.e. with the `--debug` option). Please be aware that connection tracing will produce a lot of messages and should **NOT** be used in production environments.

Please use `pause_between_requests` with care. Especially make sure that the total gather time, including the pause(s), does not exceed the configured collection interval. Note that pauses add up if multiple requests are sent!
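The arithmetic behind that warning is simple; as a sketch (the helper below is illustrative, not a plugin function):

```python
def total_gather_time_ms(n_requests, request_ms, pause_ms):
    """Pauses are inserted between consecutive requests, so they add up."""
    return n_requests * request_ms + (n_requests - 1) * pause_ms

# 10 requests of 50 ms each with a 200 ms pause between them:
print(total_gather_time_ms(10, 50, 200))  # 2300 -- must stay below the collection interval
```

With a default 10 s collection interval this is fine, but a few hundred registers split across many requests can easily exceed it.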
## Metrics

Metrics are custom and configured using the `discrete_inputs`, `coils`,
`holding_register` and `input_registers` options.

## Usage of `data_type`

The field `data_type` defines the representation of the data value on input from the modbus registers.
The input values are then converted from the given `data_type` to a type that is appropriate when
@ -122,16 +123,16 @@ integer or floating-point-number. The size of the output type is assumed to be l
for all supported input types. The mapping from the input type to the output type is fixed for all supported input types. The mapping from the input type to the output type is fixed
and cannot be configured. and cannot be configured.
#### Integers: `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64`, `UINT64` ### Integers: `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64`, `UINT64`
These types are used for integer input values. Select the one that matches your modbus data source. These types are used for integer input values. Select the one that matches your modbus data source.
#### Floating Point: `FLOAT32-IEEE`, `FLOAT64-IEEE` ### Floating Point: `FLOAT32-IEEE`, `FLOAT64-IEEE`
Use these types if your modbus registers contain a value that is encoded in this format. These types Use these types if your modbus registers contain a value that is encoded in this format. These types
always include the sign, so no unsigned variant exists. always include the sign, so no unsigned variant exists.
#### Fixed Point: `FIXED`, `UFIXED` (`FLOAT32`) ### Fixed Point: `FIXED`, `UFIXED` (`FLOAT32`)
These types are handled as an integer type on input, but are converted to floating point representation These types are handled as an integer type on input, but are converted to floating point representation
for further processing (e.g. scaling). Use one of these types when the input value is a decimal fixed point for further processing (e.g. scaling). Use one of these types when the input value is a decimal fixed point
@ -148,9 +149,10 @@ with N decimal places'.
(FLOAT32 is deprecated and should not be used any more. UFIXED provides the same conversion (FLOAT32 is deprecated and should not be used any more. UFIXED provides the same conversion
from unsigned values). from unsigned values).
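The fixed-point conversion can be sketched as follows (illustrative Python only, not the plugin's implementation; in Telegraf the effective scaling is configured through the plugin options):

```python
def fixed_to_float(raw, decimal_places, signed=True, bits=16):
    """Interpret a raw register value as a decimal fixed-point number
    with the given number of decimal places (sketch only).
    FIXED treats the input as signed (two's complement),
    UFIXED treats it as unsigned."""
    if signed and raw >= 1 << (bits - 1):
        raw -= 1 << bits  # two's-complement sign extension
    return raw / 10 ** decimal_places

print(fixed_to_float(12345, 2))                 # 123.45
print(fixed_to_float(0xFFFF, 1))                # -0.1  (FIXED)
print(fixed_to_float(0xFFFF, 1, signed=False))  # 6553.5 (UFIXED)
```

The last two lines show why choosing FIXED vs UFIXED matters: the same register content `0xFFFF` decodes to very different values.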
### Troubleshooting ## Troubleshooting
### Strange data
#### Strange data
Modbus documentation is often a mess. People confuse memory addresses (which start at one) with register addresses (which start at zero), or are unclear about the word order used. Furthermore, there are some non-standard implementations that also Modbus documentation is often a mess. People confuse memory addresses (which start at one) with register addresses (which start at zero), or are unclear about the word order used. Furthermore, there are some non-standard implementations that also
swap the bytes within the register word (16-bit). swap the bytes within the register word (16-bit).
@ -164,7 +166,8 @@ In case you see strange values, the `byte_order` might be off. You can either pr
If your data still looks corrupted, please post your configuration, error message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of the telegraf support channels (forum, slack or as issue). If your data still looks corrupted, please post your configuration, error message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of the telegraf support channels (forum, slack or as issue).
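To illustrate what the `byte_order` letters mean (a sketch, not the plugin's implementation): a 32-bit value spans two 16-bit registers, and the four bytes can be rearranged before decoding. The common orders are involutions, so the same mapping recovers the value regardless of which side you consider "swapped".

```python
import struct

def decode_uint32(registers, byte_order="ABCD"):
    """Decode two 16-bit Modbus registers into a UINT32.
    The letters name the four bytes from most to least significant:
    "ABCD" is plain big-endian, "CDAB" swaps the two 16-bit words,
    "BADC" swaps the bytes inside each word. Sketch only."""
    raw = struct.pack(">HH", *registers)  # registers as a big-endian byte stream
    pos = {"A": 0, "B": 1, "C": 2, "D": 3}
    reordered = bytes(raw[pos[c]] for c in byte_order)
    return struct.unpack(">I", reordered)[0]

regs = (0x1234, 0x5678)
print(hex(decode_uint32(regs, "ABCD")))  # 0x12345678
print(hex(decode_uint32(regs, "CDAB")))  # 0x56781234
print(hex(decode_uint32(regs, "BADC")))  # 0x34127856
print(hex(decode_uint32(regs, "DCBA")))  # 0x78563412
```

If a known test value decodes correctly under one of these orders, that order is the one to configure.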
#### Workarounds ### Workarounds
Some Modbus devices need special read characteristics when reading data and will fail otherwise. For example, there are certain serial devices that need a certain pause between register read requests. Others might only allow a limited number of simultaneously connected clients, like serial devices or some ModbusTCP devices. In case you need to access those devices in parallel you might want to disconnect immediately after the plugin has finished reading. Some Modbus devices need special read characteristics when reading data and will fail otherwise. For example, there are certain serial devices that need a certain pause between register read requests. Others might only allow a limited number of simultaneously connected clients, like serial devices or some ModbusTCP devices. In case you need to access those devices in parallel you might want to disconnect immediately after the plugin has finished reading.
To allow this plugin to also handle those "special" devices, there is the `workarounds` configuration section. In case your documentation states certain read requirements, or you get read timeouts or other read errors, you might want to try one or more workaround options. To allow this plugin to also handle those "special" devices, there is the `workarounds` configuration section. In case your documentation states certain read requirements, or you get read timeouts or other read errors, you might want to try one or more workaround options.
@ -172,7 +175,7 @@ If you find that other/more workarounds are required for your device, please let
In case your device needs a workaround that is not yet implemented, please open an issue or submit a pull request. In case your device needs a workaround that is not yet implemented, please open an issue or submit a pull request.
### Example Output ## Example Output
```sh ```sh
$ ./telegraf -config telegraf.conf -input-filter modbus -test $ ./telegraf -config telegraf.conf -input-filter modbus -test
@ -12,7 +12,7 @@ Minimum Version of Monit tested with is 5.16.
[monit]: https://mmonit.com/ [monit]: https://mmonit.com/
[httpd]: https://mmonit.com/monit/documentation/monit.html#TCP-PORT [httpd]: https://mmonit.com/monit/documentation/monit.html#TCP-PORT
### Configuration ## Configuration
```toml ```toml
[[inputs.monit]] [[inputs.monit]]
@ -34,7 +34,7 @@ Minimum Version of Monit tested with is 5.16.
# insecure_skip_verify = false # insecure_skip_verify = false
``` ```
### Metrics ## Metrics
- monit_filesystem - monit_filesystem
- tags: - tags:
@ -57,7 +57,7 @@ Minimum Version of Monit tested with is 5.16.
- inode_usage - inode_usage
- inode_total - inode_total
+ monit_directory - monit_directory
- tags: - tags:
- address - address
- version - version
@ -88,7 +88,7 @@ Minimum Version of Monit tested with is 5.16.
- size - size
- permissions - permissions
+ monit_process - monit_process
- tags: - tags:
- address - address
- version - version
@ -132,7 +132,7 @@ Minimum Version of Monit tested with is 5.16.
- protocol - protocol
- type - type
+ monit_system - monit_system
- tags: - tags:
- address - address
- version - version
@ -171,7 +171,7 @@ Minimum Version of Monit tested with is 5.16.
- monitoring_mode_code - monitoring_mode_code
- permissions - permissions
+ monit_program - monit_program
- tags: - tags:
- address - address
- version - version
@ -199,7 +199,7 @@ Minimum Version of Monit tested with is 5.16.
- monitoring_status_code - monitoring_status_code
- monitoring_mode_code - monitoring_mode_code
+ monit_program - monit_program
- tags: - tags:
- address - address
- version - version
@ -227,8 +227,9 @@ Minimum Version of Monit tested with is 5.16.
- monitoring_status_code - monitoring_status_code
- monitoring_mode_code - monitoring_mode_code
### Example Output ## Example Output
```
```shell
monit_file,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog_pid,source=xyzzy.local,status=running,version=5.20.0 mode=644i,monitoring_mode_code=0i,monitoring_status_code=1i,pending_action_code=0i,size=3i,status_code=0i 1579735047000000000 monit_file,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog_pid,source=xyzzy.local,status=running,version=5.20.0 mode=644i,monitoring_mode_code=0i,monitoring_status_code=1i,pending_action_code=0i,size=3i,status_code=0i 1579735047000000000
monit_process,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog,source=xyzzy.local,status=running,version=5.20.0 children=0i,cpu_percent=0,cpu_percent_total=0,mem_kb=3148i,mem_kb_total=3148i,mem_percent=0.2,mem_percent_total=0.2,monitoring_mode_code=0i,monitoring_status_code=1i,parent_pid=1i,pending_action_code=0i,pid=318i,status_code=0i,threads=4i 1579735047000000000 monit_process,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog,source=xyzzy.local,status=running,version=5.20.0 children=0i,cpu_percent=0,cpu_percent_total=0,mem_kb=3148i,mem_kb_total=3148i,mem_percent=0.2,mem_percent_total=0.2,monitoring_mode_code=0i,monitoring_status_code=1i,parent_pid=1i,pending_action_code=0i,pid=318i,status_code=0i,threads=4i 1579735047000000000
monit_program,monitoring_mode=active,monitoring_status=initializing,pending_action=none,platform_name=Linux,service=echo,source=xyzzy.local,status=running,version=5.20.0 monitoring_mode_code=0i,monitoring_status_code=2i,pending_action_code=0i,program_started=0i,program_status=0i,status_code=0i 1579735047000000000 monit_program,monitoring_mode=active,monitoring_status=initializing,pending_action=none,platform_name=Linux,service=echo,source=xyzzy.local,status=running,version=5.20.0 monitoring_mode_code=0i,monitoring_status_code=2i,pending_action_code=0i,program_started=0i,program_status=0i,status_code=0i 1579735047000000000
@ -7,7 +7,8 @@ useful creating custom metrics from the `/sys` or `/proc` filesystems.
> Note: If you wish to parse metrics from a single file formatted in one of the supported > Note: If you wish to parse metrics from a single file formatted in one of the supported
> [input data formats][], you should use the [file][] input plugin instead. > [input data formats][], you should use the [file][] input plugin instead.
### Configuration ## Configuration
```toml ```toml
[[inputs.multifile]] [[inputs.multifile]]
## Base directory where telegraf will look for files. ## Base directory where telegraf will look for files.
@ -34,6 +35,7 @@ useful creating custom metrics from the `/sys` or `/proc` filesystems.
``` ```
Each file table can contain the following options: Each file table can contain the following options:
* `file`: * `file`:
Path of the file to be parsed, relative to the `base_dir`. Path of the file to be parsed, relative to the `base_dir`.
* `dest`: * `dest`:
@ -47,19 +49,23 @@ Data format used to parse the file contents:
* `bool`: Converts the value into a boolean. * `bool`: Converts the value into a boolean.
* `tag`: File content is used as a tag. * `tag`: File content is used as a tag.
### Example Output ## Example Output
This example shows a BME280 connected to a Raspberry Pi, using the sample config. This example shows a BME280 connected to a Raspberry Pi, using the sample config.
```
```sh
multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 1547202076000000000 multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 1547202076000000000
``` ```
To reproduce this, connect a BME280 to the board's GPIO pins and register the BME280 device driver To reproduce this, connect a BME280 to the board's GPIO pins and register the BME280 device driver
```
```sh
cd /sys/bus/i2c/devices/i2c-1 cd /sys/bus/i2c/devices/i2c-1
echo bme280 0x76 > new_device echo bme280 0x76 > new_device
``` ```
The kernel driver provides the following files in `/sys/bus/i2c/devices/1-0076/iio:device0`: The kernel driver provides the following files in `/sys/bus/i2c/devices/1-0076/iio:device0`:
* `in_humidityrelative_input`: `48900` * `in_humidityrelative_input`: `48900`
* `in_pressure_input`: `101.343285156` * `in_pressure_input`: `101.343285156`
* `in_temp_input`: `20400` * `in_temp_input`: `20400`
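The sample output above follows from these sysfs values. As a sketch (assuming a `float(X)` conversion that shifts the value by X decimal places — the conversion options are only partially visible in this excerpt, so that semantics is an assumption here):

```python
# Reproduce the sample line-protocol fields from the sysfs file contents.
# Assumption: float(X) parses the file content and divides by 10**X;
# plain float parses the content as-is.

def convert_float(raw, decimals=0):
    """Parse a file's content and shift it by `decimals` decimal places."""
    return float(raw) / 10 ** decimals

fields = {
    "humidityrelative": convert_float("48900", 3),  # 48.9 %RH
    "pressure": convert_float("101.343285156"),     # already in final units
    "temperature": convert_float("20400", 3),       # 20.4 degC
}
print(fields)
```

This matches the `multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 ...` sample shown earlier.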
@ -18,7 +18,7 @@ This plugin gathers the statistic data from MySQL server
* File events statistics * File events statistics
* Table schema statistics * Table schema statistics
### Configuration ## Configuration
```toml ```toml
[[inputs.mysql]] [[inputs.mysql]]
@ -122,7 +122,7 @@ This plugin gathers the statistic data from MySQL server
# insecure_skip_verify = false # insecure_skip_verify = false
``` ```
#### Metric Version ### Metric Version
When `metric_version = 2`, a variety of field type issues are corrected as well When `metric_version = 2`, a variety of field type issues are corrected as well
as naming inconsistencies. If you have existing data on the original version as naming inconsistencies. If you have existing data on the original version
@ -132,6 +132,7 @@ InfluxDB due to the change of types. For this reason, you should keep the
If preserving your old data is not required you may wish to drop conflicting If preserving your old data is not required you may wish to drop conflicting
measurements: measurements:
```sql ```sql
DROP SERIES from mysql DROP SERIES from mysql
DROP SERIES from mysql_variables DROP SERIES from mysql_variables
@ -143,6 +144,7 @@ Otherwise, migration can be performed using the following steps:
1. Duplicate your `mysql` plugin configuration and add a `name_suffix` and 1. Duplicate your `mysql` plugin configuration and add a `name_suffix` and
`metric_version = 2`; this will result in collection using both the old and new `metric_version = 2`; this will result in collection using both the old and new
style concurrently: style concurrently:
```toml ```toml
[[inputs.mysql]] [[inputs.mysql]]
servers = ["tcp(127.0.0.1:3306)/"] servers = ["tcp(127.0.0.1:3306)/"]
@ -157,8 +159,8 @@ style concurrently:
2. Upgrade all affected Telegraf clients to version >=1.6. 2. Upgrade all affected Telegraf clients to version >=1.6.
New measurements will be created with the `name_suffix`, for example: New measurements will be created with the `name_suffix`, for example:
- `mysql_v2` * `mysql_v2`
- `mysql_variables_v2` * `mysql_variables_v2`
3. Update charts, alerts, and other supporting code to the new format. 3. Update charts, alerts, and other supporting code to the new format.
4. You can now remove the old `mysql` plugin configuration and remove old 4. You can now remove the old `mysql` plugin configuration and remove old
@ -169,6 +171,7 @@ historical data to the default name. Do this only after retiring the old
measurement name. measurement name.
1. Use the technique described above to write to multiple locations: 1. Use the technique described above to write to multiple locations:
```toml ```toml
[[inputs.mysql]] [[inputs.mysql]]
servers = ["tcp(127.0.0.1:3306)/"] servers = ["tcp(127.0.0.1:3306)/"]
@ -180,8 +183,10 @@ measurement name.
servers = ["tcp(127.0.0.1:3306)/"] servers = ["tcp(127.0.0.1:3306)/"]
``` ```
2. Create a TICKScript to copy the historical data: 2. Create a TICKScript to copy the historical data:
```
```sql
dbrp "telegraf"."autogen" dbrp "telegraf"."autogen"
batch batch
@ -195,17 +200,23 @@ measurement name.
.retentionPolicy('autogen') .retentionPolicy('autogen')
.measurement('mysql') .measurement('mysql')
``` ```
3. Define a task for your script: 3. Define a task for your script:
```sh ```sh
kapacitor define copy-measurement -tick copy-measurement.task kapacitor define copy-measurement -tick copy-measurement.task
``` ```
4. Run the task over the data you would like to migrate: 4. Run the task over the data you would like to migrate:
```sh ```sh
kapacitor replay-live batch -start 2018-03-30T20:00:00Z -stop 2018-04-01T12:00:00Z -rec-time -task copy-measurement kapacitor replay-live batch -start 2018-03-30T20:00:00Z -stop 2018-04-01T12:00:00Z -rec-time -task copy-measurement
``` ```
5. Verify copied data and repeat for other measurements. 5. Verify copied data and repeat for other measurements.
### Metrics: ## Metrics
* Global statuses - all numeric and boolean values of `SHOW GLOBAL STATUS` * Global statuses - all numeric and boolean values of `SHOW GLOBAL STATUS`
* Global variables - all numeric and boolean values of `SHOW GLOBAL VARIABLES` * Global variables - all numeric and boolean values of `SHOW GLOBAL VARIABLES`
* Slave status - metrics from `SHOW SLAVE STATUS`; the metrics are gathered when * Slave status - metrics from `SHOW SLAVE STATUS`; the metrics are gathered when
@ -214,7 +225,7 @@ then everything works differently, this metric does not work with multi-source
replication, unless you set `gather_all_slave_channels = true`. For MariaDB, replication, unless you set `gather_all_slave_channels = true`. For MariaDB,
`mariadb_dialect = true` should be set to address the field names and commands `mariadb_dialect = true` should be set to address the field names and commands
differences. differences.
* slave_[column name]() * slave_[column name]
* Binary logs - all metrics including size and count of all binary files. * Binary logs - all metrics including size and count of all binary files.
Requires to be turned on in configuration. Requires to be turned on in configuration.
* binary_size_bytes(int, number) * binary_size_bytes(int, number)
@ -311,6 +322,7 @@ The unit of fields varies by the tags.
* info_schema_table_version(float, number) * info_schema_table_version(float, number)
## Tags ## Tags
* All measurements have the following tags * All measurements have the following tags
* server (the host name from which the metrics are gathered) * server (the host name from which the metrics are gathered)
* Process list measurement has the following tags * Process list measurement has the following tags