chore: clean up all errors for markdown lint input plugins s through v (#10167)

This commit is contained in:
Mya 2021-11-24 11:50:13 -07:00 committed by GitHub
parent d4582dca70
commit 837465fcd5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 930 additions and 855 deletions

View File

@ -3,7 +3,7 @@
The Salesforce plugin gathers metrics about the limits in your Salesforce organization and the remaining usage.
It fetches its data from the [limits endpoint](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_limits.htm) of Salesforce's REST API.
### Configuration:
## Configuration
```toml
# Gather Metrics about Salesforce limits and remaining usage
@ -19,7 +19,7 @@ It fetches its data from the [limits endpoint](https://developer.salesforce.com/
# version = "39.0"
```
### Measurements & Fields:
## Measurements & Fields
Salesforce provides one measurement named "salesforce".
Each entry is converted to snake\_case and two fields are created.
@ -28,20 +28,19 @@ Each entry is converted to snake\_case and 2 fields are created.
- \<key\>_remaining represents the usage remaining before hitting the limit threshold
- salesforce
- \<key\>_max (int)
- \<key\>_remaining (int)
- (...)
- \<key\>_max (int)
- \<key\>_remaining (int)
- (...)
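For example, the `DailyApiRequests` limit from the REST endpoint maps to the two fields visible in the example output below:

```text
DailyApiRequests  ->  daily_api_requests_max, daily_api_requests_remaining
```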
### Tags:
## Tags
- All measurements have the following tags:
- host
- organization_id (18 char organisation ID)
- host
- organization_id (18 char organisation ID)
## Example Output
### Example Output:
```
```sh
$ ./telegraf --config telegraf.conf --input-filter salesforce --test
salesforce,organization_id=XXXXXXXXXXXXXXXXXX,host=xxxxx.salesforce.com daily_workflow_emails_max=546000i,hourly_time_based_workflow_max=50i,daily_async_apex_executions_remaining=250000i,daily_durable_streaming_api_events_remaining=1000000i,streaming_api_concurrent_clients_remaining=2000i,daily_bulk_api_requests_remaining=10000i,hourly_sync_report_runs_remaining=500i,daily_api_requests_max=5000000i,data_storage_mb_remaining=1073i,file_storage_mb_remaining=1069i,daily_generic_streaming_api_events_remaining=10000i,hourly_async_report_runs_remaining=1200i,hourly_time_based_workflow_remaining=50i,daily_streaming_api_events_remaining=1000000i,single_email_max=5000i,hourly_dashboard_refreshes_remaining=200i,streaming_api_concurrent_clients_max=2000i,daily_durable_generic_streaming_api_events_remaining=1000000i,daily_api_requests_remaining=4999998i,hourly_dashboard_results_max=5000i,hourly_async_report_runs_max=1200i,daily_durable_generic_streaming_api_events_max=1000000i,hourly_dashboard_results_remaining=5000i,concurrent_sync_report_runs_max=20i,durable_streaming_api_concurrent_clients_remaining=2000i,daily_workflow_emails_remaining=546000i,hourly_dashboard_refreshes_max=200i,daily_streaming_api_events_max=1000000i,hourly_sync_report_runs_max=500i,hourly_o_data_callout_max=10000i,mass_email_max=5000i,mass_email_remaining=5000i,single_email_remaining=5000i,hourly_dashboard_statuses_max=999999999i,concurrent_async_get_report_instances_max=200i,daily_durable_streaming_api_events_max=1000000i,daily_generic_streaming_api_events_max=10000i,hourly_o_data_callout_remaining=10000i,concurrent_sync_report_runs_remaining=20i,daily_bulk_api_requests_max=10000i,data_storage_mb_max=1073i,hourly_dashboard_statuses_remaining=999999999i,concurrent_async_get_report_instances_remaining=200i,daily_async_apex_executions_max=250000i,durable_streaming_api_concurrent_clients_max=2000i,file_storage_mb_max=1073i 1501565661000000000

View File

@ -5,7 +5,8 @@ package installed.
This plugin collects sensor metrics with the `sensors` executable from the lm-sensors package.
### Configuration:
## Configuration
```toml
# Monitor sensors, requires lm-sensors package
[[inputs.sensors]]
@ -17,19 +18,21 @@ This plugin collects sensor metrics with the `sensors` executable from the lm-se
# timeout = "5s"
```
### Measurements & Fields:
## Measurements & Fields
Fields are created dynamically depending on the sensors. All fields are float.
### Tags:
## Tags
- All measurements have the following tags:
- chip
- feature
- chip
- feature
### Example Output:
## Example Output
#### Default
```
### Default
```shell
$ telegraf --config telegraf.conf --input-filter sensors --test
* Plugin: sensors, Collection 1
> sensors,chip=power_meter-acpi-0,feature=power1 power_average=0,power_average_interval=300 1466751326000000000
@ -39,8 +42,9 @@ $ telegraf --config telegraf.conf --input-filter sensors --test
> sensors,chip=k10temp-pci-00db,feature=temp1 temp_crit=70,temp_crit_hyst=65,temp_input=29.5,temp_max=70 1466751326000000000
```
#### With remove_numbers=false
```
### With remove_numbers=false
```shell
* Plugin: sensors, Collection 1
> sensors,chip=power_meter-acpi-0,feature=power1 power1_average=0,power1_average_interval=300 1466753424000000000
> sensors,chip=k10temp-pci-00c3,feature=temp1 temp1_crit=70,temp1_crit_hyst=65,temp1_input=29.125,temp1_max=70 1466753424000000000

View File

@ -6,7 +6,7 @@ accordance with the specification from [sflow.org](https://sflow.org/).
Currently only Flow Samples of Ethernet / IPv4 & IPv4 TCP & UDP headers are
turned into metrics. Counters and other header samples are ignored.
#### Series Cardinality Warning
## Series Cardinality Warning
This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
@ -18,7 +18,7 @@ avoid cardinality issues:
- Monitor your databases [series cardinality][].
- Consult the [InfluxDB documentation][influx-docs] for the most up-to-date techniques.
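As one hedged illustration, Telegraf's built-in metric filtering can drop the highest-cardinality tags at the plugin level (the tag names below are taken from the example output at the end of this README; adjust them to your data):

```toml
[[inputs.sflow]]
  service_address = "udp://:6343"
  ## drop per-connection port tags that inflate series cardinality
  tagexclude = ["src_port", "dst_port"]
```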
### Configuration
## Configuration
```toml
[[inputs.sflow]]
@ -33,7 +33,7 @@ avoid cardinality issues:
# read_buffer_size = ""
```
### Metrics
## Metrics
- sflow
- tags:
@ -81,34 +81,36 @@ avoid cardinality issues:
- ip_flags (integer, ip_ver field of IPv4 structures)
- tcp_flags (integer, TCP flags of TCP IP header (IPv4 or IPv6))
### Troubleshooting
## Troubleshooting
The [sflowtool][] utility can be used to print sFlow packets, and its output
compared against the metrics produced by Telegraf.
```
```sh
sflowtool -p 6343
```
If opening an issue, in addition to the output of sflowtool it will also be
helpful to collect a packet capture. Adjust the interface, host and port as
needed:
```
$ sudo tcpdump -s 0 -i eth0 -w telegraf-sflow.pcap host 127.0.0.1 and port 6343
```sh
sudo tcpdump -s 0 -i eth0 -w telegraf-sflow.pcap host 127.0.0.1 and port 6343
```
[sflowtool]: https://github.com/sflow/sflowtool
### Example Output
```
## Example Output
```shell
sflow,agent_address=0.0.0.0,dst_ip=10.0.0.2,dst_mac=ff:ff:ff:ff:ff:ff,dst_port=40042,ether_type=IPv4,header_protocol=ETHERNET-ISO88023,input_ifindex=6,ip_dscp=27,ip_ecn=0,output_ifindex=1073741823,source_id_index=3,source_id_type=0,src_ip=10.0.0.1,src_mac=ff:ff:ff:ff:ff:ff,src_port=443 bytes=1570i,drops=0i,frame_length=157i,header_length=128i,ip_flags=2i,ip_fragment_offset=0i,ip_total_length=139i,ip_ttl=42i,sampling_rate=10i,tcp_header_length=0i,tcp_urgent_pointer=0i,tcp_window_size=14i 1584473704793580447
```
### Reference Documentation
## Reference Documentation
This sflow implementation was built from the reference document
This sflow implementation was built from the reference document
[sflow.org/sflow_version_5.txt](https://sflow.org/sflow_version_5.txt)
[metric filtering]: https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering
[retention policy]: https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/
[tsi]: https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/

View File

@ -1,19 +1,19 @@
# S.M.A.R.T. Input Plugin
Get metrics using the command line utility `smartctl` for S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) storage devices. SMART is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs) that detects and reports on various indicators of drive reliability, with the intent of enabling the anticipation of hardware failures.
See smartmontools (https://www.smartmontools.org/).
See smartmontools (<https://www.smartmontools.org/>).
SMART information is separated between different measurements: `smart_device` is used for general information, while `smart_attribute` stores the detailed attribute information if `attributes = true` is enabled in the plugin configuration.
If no devices are specified, the plugin will scan for SMART devices via the following command:
```
```sh
smartctl --scan
```
Metrics will be reported from the following `smartctl` command:
```
```sh
smartctl --info --attributes --health -n <nocheck> --format=brief <device>
```
@ -23,41 +23,48 @@ Also, NVMe capabilities were introduced in version 6.5.
To enable SMART on a storage device run:
```
```sh
smartctl -s on <device>
```
## NVMe vendor specific attributes
For NVMe disk type, plugin can use command line utility `nvme-cli`. It has a feature
For the NVMe disk type, the plugin can use the command line utility `nvme-cli`. It has a feature
that provides easy access to vendor specific attributes.
This plugin supports nmve-cli version 1.5 and above (https://github.com/linux-nvme/nvme-cli).
This plugin supports nvme-cli version 1.5 and above (<https://github.com/linux-nvme/nvme-cli>).
If `nvme-cli` is absent, NVMe vendor specific metrics will not be obtained.
Vendor specific SMART metrics for NVMe disks may be reported from the following `nvme` command:
```
```sh
nvme <vendor> smart-log-add <device>
```
Note that vendor plugins for `nvme-cli` could require a different naming convention and report format.
To see the installed plugin extensions, depending on the nvme-cli version, look at the bottom of:
```
```sh
nvme help
```
To gather the disk vendor id (vid), `id-ctrl` can be used:
```
```sh
nvme id-ctrl <device>
```
Association between a vid and company can be found there: https://pcisig.com/membership/member-companies.
The association between a vid and a company can be found here: <https://pcisig.com/membership/member-companies>.
Whether a device is NVMe or non-NVMe is determined using:
```
```sh
smartctl --scan
```
and:
```
```sh
smartctl --scan -d nvme
```
@ -113,12 +120,14 @@ It's important to note that this plugin references smartctl and nvme-cli, which
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to use sudo.
You will need the following in your telegraf config:
```toml
[[inputs.smart]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# For smartctl add the following lines:
@ -131,6 +140,7 @@ Cmnd_Alias NVME = /path/to/nvme
telegraf ALL=(ALL) NOPASSWD: NVME
Defaults!NVME !logfile, !syslog, !pam_session
```
To run smartctl or nvme with `sudo`, a wrapper script can be created. `path_smartctl` or
`path_nvme` in the configuration should be set to execute this script.
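For example, a minimal sketch of such a wrapper (the script path and the smartctl location are assumptions; adjust them to your system):

```bash
#!/bin/bash
# Hypothetical wrapper, e.g. saved as /usr/local/bin/smartctl_wrapper and
# referenced from the config via path_smartctl = "/usr/local/bin/smartctl_wrapper"
exec sudo /usr/sbin/smartctl "$@"
```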
@ -171,57 +181,70 @@ To run smartctl or nvme with `sudo` wrapper script can be created. `path_smartct
- value
- worst
#### Flags
### Flags
The interpretation of the tag `flags` is:
- `K` auto-keep
- `C` event count
- `R` error rate
- `S` speed/performance
- `O` updated online
- `P` prefailure warning
#### Exit Status
- `K` auto-keep
- `C` event count
- `R` error rate
- `S` speed/performance
- `O` updated online
- `P` prefailure warning
### Exit Status
The `exit_status` field captures the exit status of the cli utility used, which
is defined by a bitmask. For the interpretation of the bitmask see the man page for
smartctl or nvme-cli.
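For example, the bitmask can be inspected directly from a shell (`/dev/sda` is a placeholder; the bit meanings are documented in the smartctl man page):

```sh
smartctl --health /dev/sda
echo $?   # e.g. 8 means bit 3 is set: the device reports a failing health status
```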
## Device Names
Device names, e.g., `/dev/sda`, are *not persistent*, and may be
subject to change across reboots or system changes. Instead, you can use the
*World Wide Name* (WWN) or serial number to identify devices. On Linux, block
devices can be referenced by the WWN in the following location:
`/dev/disk/by-id/`.
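For example, the stable WWN-based names can be listed and one of them used in the plugin's `devices` option (the WWN shown is illustrative, borrowed from the example output below):

```sh
ls -l /dev/disk/by-id/ | grep wwn-
# then, in the plugin configuration:
#   devices = ["/dev/disk/by-id/wwn-0x5002538655584d30"]
```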
## Troubleshooting
If you expect to see more SMART metrics than this plugin shows, be sure to use a proper version
of the smartctl or nvme-cli utility, one which has the functionality to gather the desired data. Also, check
your device capability because not every SMART metrics are mandatory.
your device capability because not all SMART metrics are mandatory.
For example, the number of temperature sensors depends on the device specification.
If this plugin is not working as expected for your SMART enabled device,
please run these commands and include the output in a bug report:
For non NVMe devices (from smartctl version >= 7.0 this will also return NVMe devices by default):
```
```sh
smartctl --scan
```
For NVMe devices:
```
```sh
smartctl --scan -d nvme
```
Run the following command, replacing your configuration setting for NOCHECK and
the DEVICE (the name of the device can be taken from the previous command):
```
```sh
smartctl --info --health --attributes --tolerance=verypermissive --nocheck NOCHECK --format=brief -d DEVICE
```
If you are trying to gather vendor specific metrics, please provide this command,
replacing VENDOR and DEVICE to match your case:
```
```sh
nvme VENDOR smart-log-add DEVICE
```
## Example SMART Plugin Outputs
```
```shell
smart_device,enabled=Enabled,host=mbpro.local,device=rdisk0,model=APPLE\ SSD\ SM0512F,serial_no=S1K5NYCD964433,wwn=5002538655584d30,capacity=500277790720 udma_crc_errors=0i,exit_status=0i,health_ok=true,read_error_rate=0i,temp_c=40i 1502536854000000000
smart_attribute,capacity=500277790720,device=rdisk0,enabled=Enabled,fail=-,flags=-O-RC-,host=mbpro.local,id=199,model=APPLE\ SSD\ SM0512F,name=UDMA_CRC_Error_Count,serial_no=S1K5NYCD964433,wwn=5002538655584d30 exit_status=0i,raw_value=0i,threshold=0i,value=200i,worst=200i 1502536854000000000
smart_attribute,capacity=500277790720,device=rdisk0,enabled=Enabled,fail=-,flags=-O---K,host=mbpro.local,id=199,model=APPLE\ SSD\ SM0512F,name=Unknown_SSD_Attribute,serial_no=S1K5NYCD964433,wwn=5002538655584d30 exit_status=0i,raw_value=0i,threshold=0i,value=100i,worst=100i 1502536854000000000

View File

@ -4,7 +4,7 @@ The `snmp` input plugin uses polling to gather metrics from SNMP agents.
Support for gathering individual OIDs as well as complete SNMP tables is
included.
### Prerequisites
## Prerequisites
This plugin uses the `snmptable` and `snmptranslate` programs from the
[net-snmp][] project. These tools will need to be installed into the `PATH` in
@ -18,7 +18,8 @@ location of these files can be configured in the `snmp.conf` or via the
`MIBDIRS` environment variable. See [`man 1 snmpcmd`][man snmpcmd] for more
information.
### Configuration
## Configuration
```toml
[[inputs.snmp]]
## Agent addresses to retrieve values from.
@ -91,13 +92,13 @@ information.
is_tag = true
```
#### Configure SNMP Requests
### Configure SNMP Requests
This plugin provides two methods for configuring the SNMP requests: `fields`
and `tables`. Use the `field` option to gather single ad-hoc variables.
To collect SNMP tables, use the `table` option.
##### Field
#### Field
Use a `field` to collect a variable by OID. Requests specified with this
option operate similarly to the `snmpget` utility.
@ -138,7 +139,7 @@ option operate similar to the `snmpget` utility.
# conversion = ""
```
##### Table
#### Table
Use a `table` to configure the collection of an SNMP table. SNMP requests
formed with this option operate similarly to the `snmptable` command.
@ -201,7 +202,7 @@ One [metric][] is created for each row of the SNMP table.
## Specifies if the value of the given field should be snmptranslated
## by default no field values are translated
# translate = true
## Secondary index table allows merging data from two tables with
## different indexes; this field will be used to join them. There can
## be only one secondary index table.
@ -220,27 +221,30 @@ One [metric][] is created for each row of the SNMP table.
# secondary_outer_join = false
```
##### Two Table Join
#### Two Table Join
The SNMP plugin can join two SNMP tables that have different indexes. For this to work, one table
should have a translation field that returns the index of the second table as its value. Examples
of such fields are:
* Cisco portTable with translation field: `CISCO-STACK-MIB::portIfIndex`,
* Cisco portTable with translation field: `CISCO-STACK-MIB::portIfIndex`,
whose value is the IfIndex from ifTable
* Adva entityFacilityTable with translation field: `ADVA-FSPR7-MIB::entityFacilityOneIndex`,
* Adva entityFacilityTable with translation field: `ADVA-FSPR7-MIB::entityFacilityOneIndex`,
whose value is the IfIndex from ifTable
* Cisco cpeExtPsePortTable with translation field: `CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex`,
* Cisco cpeExtPsePortTable with translation field: `CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex`,
whose value is the index from entPhysicalTable
Such a field can be used to translate the index to the secondary table with `secondary_index_table = true`,
and all fields from the secondary table (with the index pointed to by the translation field) should have the
option `secondary_index_use = true` added. Telegraf cannot duplicate entries during a join, so the translation
must be 1-to-1 (not 1-to-many). To add fields from the secondary table with an index that is not present
in the translation table (outer join), there is a second option for the translation index, `secondary_outer_join = true`.
###### Example configuration for table joins
##### Example configuration for table joins
CISCO-POWER-ETHERNET-EXT-MIB table before join:
```
```toml
[[inputs.snmp.table]]
name = "ciscoPower"
index_as_tag = true
@ -255,14 +259,16 @@ oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex"
```
Partial result (the agent_host and host columns are removed from all following outputs in this section):
```
```shell
> ciscoPower,index=1.2 EntPhyIndex=1002i,PortPwrConsumption=6643i 1621460628000000000
> ciscoPower,index=1.6 EntPhyIndex=1006i,PortPwrConsumption=10287i 1621460628000000000
> ciscoPower,index=1.5 EntPhyIndex=1005i,PortPwrConsumption=8358i 1621460628000000000
```
Note here that the EntPhyIndex column carries the index from the ENTITY-MIB table; the config for it:
```
```toml
[[inputs.snmp.table]]
name = "entityTable"
index_as_tag = true
@ -271,8 +277,10 @@ index_as_tag = true
name = "EntPhysicalName"
oid = "ENTITY-MIB::entPhysicalName"
```
Partial result:
```
```text
> entityTable,index=1006 EntPhysicalName="GigabitEthernet1/6" 1621460809000000000
> entityTable,index=1002 EntPhysicalName="GigabitEthernet1/2" 1621460809000000000
> entityTable,index=1005 EntPhysicalName="GigabitEthernet1/5" 1621460809000000000
@ -282,7 +290,7 @@ Now, lets attempt to join these results into one table. EntPhyIndex matches inde
from the second table, and let's convert EntPhysicalName into a tag, so the second table will
only provide tags to the result. Configuration:
```
```toml
[[inputs.snmp.table]]
name = "ciscoPowerEntity"
index_as_tag = true
@ -304,40 +312,45 @@ is_tag = true
```
Result:
```
```shell
> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/2,index=1.2 EntPhyIndex=1002i,PortPwrConsumption=6643i 1621461148000000000
> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/6,index=1.6 EntPhyIndex=1006i,PortPwrConsumption=10287i 1621461148000000000
> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/5,index=1.5 EntPhyIndex=1005i,PortPwrConsumption=8358i 1621461148000000000
```
### Troubleshooting
## Troubleshooting
Check that a numeric field can be translated to a textual field:
```
```sh
$ snmptranslate .1.3.6.1.2.1.1.3.0
DISMAN-EVENT-MIB::sysUpTimeInstance
```
Request a top-level field:
```
$ snmpget -v2c -c public 127.0.0.1 sysUpTime.0
```sh
snmpget -v2c -c public 127.0.0.1 sysUpTime.0
```
Request a table:
```
$ snmptable -v2c -c public 127.0.0.1 ifTable
```sh
snmptable -v2c -c public 127.0.0.1 ifTable
```
To collect a packet capture, run this command in the background while running
Telegraf or one of the above commands. Adjust the interface, host and port as
needed:
```
$ sudo tcpdump -s 0 -i eth0 -w telegraf-snmp.pcap host 127.0.0.1 and port 161
```sh
sudo tcpdump -s 0 -i eth0 -w telegraf-snmp.pcap host 127.0.0.1 and port 161
```
### Example Output
## Example Output
```
```shell
snmp,agent_host=127.0.0.1,source=loaner uptime=11331974i 1575509815000000000
interface,agent_host=127.0.0.1,ifDescr=wlan0,ifIndex=3,source=example.org ifAdminStatus=1i,ifInDiscards=0i,ifInErrors=0i,ifInNUcastPkts=0i,ifInOctets=3436617431i,ifInUcastPkts=2717778i,ifInUnknownProtos=0i,ifLastChange=0i,ifMtu=1500i,ifOperStatus=1i,ifOutDiscards=0i,ifOutErrors=0i,ifOutNUcastPkts=0i,ifOutOctets=581368041i,ifOutQLen=0i,ifOutUcastPkts=1354338i,ifPhysAddress="c8:5b:76:c9:e6:8c",ifSpecific=".0.0",ifSpeed=0i,ifType=6i 1575509815000000000
interface,agent_host=127.0.0.1,ifDescr=eth0,ifIndex=2,source=example.org ifAdminStatus=1i,ifInDiscards=0i,ifInErrors=0i,ifInNUcastPkts=21i,ifInOctets=3852386380i,ifInUcastPkts=3634004i,ifInUnknownProtos=0i,ifLastChange=9088763i,ifMtu=1500i,ifOperStatus=1i,ifOutDiscards=0i,ifOutErrors=0i,ifOutNUcastPkts=0i,ifOutOctets=434865441i,ifOutQLen=0i,ifOutUcastPkts=2110394i,ifPhysAddress="c8:5b:76:c9:e6:8c",ifSpecific=".0.0",ifSpeed=1000000000i,ifType=6i 1575509815000000000

View File

@ -1,17 +1,16 @@
# SNMP Legacy Input Plugin
### Deprecated in version 1.0. Use [SNMP input plugin][].
## Deprecated in version 1.0. Use [SNMP input plugin][]
The SNMP input plugin gathers metrics from SNMP agents.
### Configuration:
## Configuration
#### Very simple example
### Very simple example
In this example, the plugin will gather the value of these OIDs:
- `.1.3.6.1.2.1.2.2.1.4.1`
- `.1.3.6.1.2.1.2.2.1.4.1`
```toml
# Very Simple Example
@ -28,36 +27,34 @@ In this example, the plugin will gather value of OIDS:
get_oids = [".1.3.6.1.2.1.2.2.1.4.1"]
```
#### Simple example
### Simple example
In this example, Telegraf gathers the value of these OIDs:
- named **ifnumber**
- named **interface_speed**
- named **ifnumber**
- named **interface_speed**
With the **inputs.snmp.get** section, the plugin gets the OID number:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
As you can see, *ifSpeed* is not a valid OID. In order to get
the valid OID, the plugin uses `snmptranslate_file` to match the OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
The plugin will also append `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
In this example, the plugin will gather the value of these OIDs:
- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
```toml
# Simple example
[[inputs.snmp]]
@ -88,36 +85,35 @@ In this example, the plugin will gather value of OIDS:
```
#### Simple bulk example
### Simple bulk example
In this example, Telegraf gathers the value of these OIDs:
- named **ifnumber**
- named **interface_speed**
- named **if_out_octets**
- named **ifnumber**
- named **interface_speed**
- named **if_out_octets**
With the **inputs.snmp.get** section, the plugin gets the OID number:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
With the **inputs.snmp.bulk** section, the plugin gets the OID number:
- **if_out_octets** => *ifOutOctets*
- **if_out_octets** => *ifOutOctets*
As you can see, *ifSpeed* and *ifOutOctets* are not valid OIDs.
In order to get the valid OID, the plugin uses `snmptranslate_file`
to match the OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **if_out_octets** => *ifOutOctets* => `.1.3.6.1.2.1.2.2.1.16`
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **if_out_octets** => *ifOutOctets* => `.1.3.6.1.2.1.2.2.1.16`
Also, the plugin will append `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
Since **if_out_octets** is a bulk request, the plugin will gather all
OIDs in the table.
@ -140,7 +136,6 @@ In this example, the plugin will gather value of OIDS:
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`
```toml
# Simple bulk example
[[inputs.snmp]]
@ -174,8 +169,7 @@ In this example, the plugin will gather value of OIDS:
oid = "ifOutOctets"
```
#### Table example
### Table example
In this example, we removed the collect attribute from the host section,
but you can still use it in combination with the following part.
@ -185,11 +179,11 @@ other configuration
Telegraf gathers the value of OIDs of the table:
- named **iftable1**
- named **iftable1**
With the **inputs.snmp.table** section, the plugin gets the OID number:
- **iftable1** => `.1.3.6.1.2.1.31.1.1.1`
- **iftable1** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable1** is a table, the plugin will gather all
OIDs in the table and in the subtables
@ -239,8 +233,7 @@ OIDS in the table and in the subtables
oid = ".1.3.6.1.2.1.31.1.1.1"
```
#### Table with subtable example
### Table with subtable example
In this example, we removed the collect attribute from the host section,
but you can still use it in combination with the following part.
@ -250,12 +243,12 @@ other configuration
Telegraf gathers the value of OIDs of the table:
- named **iftable2**
- named **iftable2**
With the **inputs.snmp.table** section *AND* the **sub_tables** attribute,
the plugin will get OIDs from subtables:
- **iftable2** => `.1.3.6.1.2.1.2.2.1.13`
- **iftable2** => `.1.3.6.1.2.1.2.2.1.13`
Since **iftable2** is a table, the plugin will gather all
OIDs in subtables:
@ -266,7 +259,6 @@ OIDS in subtables:
- `.1.3.6.1.2.1.2.2.1.13.4`
- `.1.3.6.1.2.1.2.2.1.13....`
```toml
# Table with subtable example
[[inputs.snmp]]
@ -295,19 +287,18 @@ OIDS in subtables:
# oid attribute is useless
```
#### Table with mapping example
### Table with mapping example
In this example, we removed the collect attribute from the host section,
but you can still use it in combination with the following part.
Telegraf gathers the value of OIDs of the table:
- named **iftable3**
- named **iftable3**
With the **inputs.snmp.table** section, the plugin gets the OID number:
- **iftable3** => `.1.3.6.1.2.1.31.1.1.1`
- **iftable3** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable3** is a table, the plugin will gather all
OIDs in the table and in the subtables
@ -334,11 +325,12 @@ will be gathered; As you see, there is an other attribute, `mapping_table`.
`include_instances` and `mapping_table` permit building a hash table
to filter only the OIDs you want.
Let's say we have the following data on the SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:
@ -399,20 +391,19 @@ Note: the plugin will add instance name as tag *instance*
# if empty, get all subtables
```
#### Table with both mapping and subtable example
### Table with both mapping and subtable example
In this example, we removed the collect attribute from the host section,
but you can still use it in combination with the following part.
Telegraf gathers the value of OIDs of the table:
- named **iftable4**
- named **iftable4**
With the **inputs.snmp.table** section *AND* the **sub_tables** attribute,
the plugin will get OIDs from subtables:
- **iftable4** => `.1.3.6.1.2.1.31.1.1.1`
- **iftable4** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable4** is a table, the plugin will gather all
OIDs in the table and in the subtables
@ -433,11 +424,12 @@ will be gathered; As you see, there is an other attribute, `mapping_table`.
`include_instances` and `mapping_table` permit building a hash table
to filter only the OIDs you want.
Let's say we have the following data on the SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:
@ -459,8 +451,6 @@ the following OIDS:
Note: the plugin will add the instance name as the tag *instance*
```toml
# Table with both mapping and subtable example
[[inputs.snmp]]
@ -505,7 +495,7 @@ Note: the plugin will add instance name as tag *instance*
unit = "octets"
```
#### Configuration notes
### Configuration notes
- In the **inputs.snmp.table** section, the `oid` attribute is useless if
the `sub_tables` attribute is defined
@ -513,38 +503,38 @@ Note: the plugin will add instance name as tag *instance*
- In the **inputs.snmp.subtable** section, you can put a name from `snmptranslate_file`
as the `oid` attribute instead of a valid OID
### Measurements & Fields:
## Measurements & Fields
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
- ifHCOutOctets
- ifHCOutOctets
- ifInDiscards
- ifInDiscards
- ifInDiscards
- ifHCInOctets
- ifHCInOctets
- ifHCInOctets
### Tags:
## Tags
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
- host
- instance
- unit
- host
- instance
- unit
- ifInDiscards
- host
- instance
- host
- instance
- ifHCInOctets
- host
- instance
- unit
- host
- instance
- unit
### Example Output:
## Example Output
With the last example (Table with both mapping and subtable example):
```
```shell
ifHCOutOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCOutOctets=10565628i 1456878706044462901
ifInDiscards,host=127.0.0.1,instance=enp5s0 ifInDiscards=0i 1456878706044510264
ifHCInOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCInOctets=76351777i 1456878706044531312

View File

@ -6,7 +6,7 @@ streaming (tcp, unix) or datagram (udp, unixgram) protocols.
The plugin expects messages in the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
### Configuration:
## Configuration
This is a sample configuration for the plugin.
@ -92,7 +92,7 @@ at least 8MB before trying to run large amounts of UDP traffic to your instance.
Check the current UDP/IP receive buffer limit & default by typing the following
commands:
```
```sh
sysctl net.core.rmem_max
sysctl net.core.rmem_default
```
@ -100,7 +100,7 @@ sysctl net.core.rmem_default
If the values are less than 8388608 bytes you should add the following lines to
the /etc/sysctl.conf file:
```
```text
net.core.rmem_max=8388608
net.core.rmem_default=8388608
```
@ -108,7 +108,7 @@ net.core.rmem_default=8388608
Changes to /etc/sysctl.conf do not take effect until reboot.
To update the values immediately, type the following commands as root:
```
```sh
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.rmem_default=8388608
```
@ -123,20 +123,20 @@ happens
Check the current UDP/IP buffer limit by typing the following command:
```
```sh
sysctl kern.ipc.maxsockbuf
```
If the value is less than 9646900 bytes you should add the following lines
to the /etc/sysctl.conf file (create it if necessary):
```
```text
kern.ipc.maxsockbuf=9646900
```
Changes to /etc/sysctl.conf do not take effect until reboot.
To update the values immediately, type the following command as root:
```
```sh
sysctl -w kern.ipc.maxsockbuf=9646900
```

View File

@ -7,7 +7,7 @@ More about [performance statistics](https://cwiki.apache.org/confluence/display/
Tested from 3.5 to 7.*
### Configuration:
## Configuration
```toml
[[inputs.solr]]
@ -22,9 +22,9 @@ Tested from 3.5 to 7.*
# password = "pa$$word"
```
### Example output of gathered metrics:
## Example output of gathered metrics
```
```shell
➜ ~ telegraf -config telegraf.conf -input-filter solr -test
* Plugin: solr, Collection 1
> solr_core,core=main,handler=searcher,host=testhost deleted_docs=17616645i,max_docs=261848363i,num_docs=244231718i 1478214949000000000

View File

@ -5,7 +5,7 @@ types are supported and their settings might differ (especially the connection p
Please check the list of [supported SQL drivers](../../../docs/SQL_DRIVERS_INPUT.md) for the
`driver` name and the data-source-name (`dsn`) options.
### Configuration
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.
@ -73,13 +73,13 @@ generate it using `telegraf --usage <plugin-name>`.
## Column names containing fields (explicit types)
## Convert the given columns to the corresponding type. Explicit type conversions take precedence over
## the automatic (driver-based) conversion below.
## NOTE: Columns should not be specified for multiple types or the resulting type is undefined.
## the automatic (driver-based) conversion below.
## NOTE: Columns should not be specified for multiple types or the resulting type is undefined.
# field_columns_float = []
# field_columns_int = []
# field_columns_uint = []
# field_columns_bool = []
# field_columns_string = []
# field_columns_uint = []
# field_columns_bool = []
# field_columns_string = []
## Column names containing fields (automatic types)
## An empty include list is equivalent to '[*]' and all returned columns will be accepted. An empty
@ -89,16 +89,20 @@ generate it using `telegraf --usage <plugin-name>`.
# field_columns_exclude = []
```
### Options
#### Driver
## Options
### Driver
The `driver` and `dsn` options specify how to connect to the database. As the `dsn` format and
values vary with the `driver`, refer to the list of [supported SQL drivers](../../../docs/SQL_DRIVERS_INPUT.md) for possible values and more details.
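As a minimal sketch for a MySQL/MariaDB connection (host, credentials and database name are placeholders; the DSN layout follows the go-sql-driver convention):

```toml
[[inputs.sql]]
  driver = "mysql"
  dsn = "username:password@tcp(localhost:3306)/dbname"
```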
#### Connection limits
### Connection limits
With these options you can limit the number of connections kept open by this plugin. Details about the exact
workings can be found in the [golang sql documentation](https://golang.org/pkg/database/sql/#DB.SetConnMaxIdleTime).
#### Query sections
### Query sections
Multiple `query` sections can be specified for this plugin. Each specified query will first be prepared on the server
and then executed in every interval using the column mappings specified. Please note that `tag` and `field` columns
are not exclusive, i.e. a column can be added to both. When using both `include` and `exclude` lists, the `exclude`
@ -107,31 +111,38 @@ the filter. In case any the columns specified in `measurement_col` or `time_col`
the plugin falls back to the documented defaults. Fields or tags specified in the include lists of the options but missing
in the returned query are silently ignored.
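As an illustration of that precedence rule, consider this sketch (the query and column names are hypothetical; the include option mirrors the exclude options shown in the sample configuration above):

```toml
[[inputs.sql.query]]
  query = "SELECT * FROM guests"
  ## "score" is accepted by the include list but dropped because exclude wins
  field_columns_include = ["guest_id", "score"]
  field_columns_exclude = ["score"]
```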
### Types
## Types
This plugin relies on the driver to do the type conversion. For the different properties of the metric the following
types are accepted.
#### Measurement
### Measurement
Only columns of type `string` are accepted.
#### Time
### Time
For the metric time, columns of type `time` are accepted directly. For numeric columns, `time_format` should be set
to any of `unix`, `unix_ms`, `unix_ns` or `unix_us` accordingly. By default a timestamp in `unix` format is
expected. For string columns, please specify the `time_format` accordingly.
See the [golang time documentation](https://golang.org/pkg/time/#Time.Format) for details.
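For example, a numeric column holding millisecond timestamps could be mapped like this sketch (the table and column names are hypothetical):

```toml
[[inputs.sql.query]]
  query = "SELECT name, value, created_at FROM events"
  time_col = "created_at"
  time_format = "unix_ms"
```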
#### Tags
### Tags
For tags columns with textual values (`string` and `bytes`), signed and unsigned integers (8, 16, 32 and 64 bit),
floating-point (32 and 64 bit), `boolean` and `time` values are accepted. Those values will be converted to string.
#### Fields
### Fields
For fields columns with textual values (`string` and `bytes`), signed and unsigned integers (8, 16, 32 and 64 bit),
floating-point (32 and 64 bit), `boolean` and `time` values are accepted. Here `bytes` will be converted to `string`,
signed and unsigned integer values will be converted to `int64` or `uint64` respectively. Floating-point values are converted to `float64` and `time` is converted to a nanosecond timestamp of type `int64`.
### Example Output
## Example Output
Using the [MariaDB sample database](https://www.mariadbtutorial.com/getting-started/mariadb-sample-database) and the
configuration
```toml
[[inputs.sql]]
driver = "mysql"
@ -145,7 +156,8 @@ configuration
```
Telegraf will output the following metrics
```
```shell
nation,host=Hugin,name=John guest_id=1i 1611332164000000000
nation,host=Hugin,name=Jane guest_id=2i 1611332164000000000
nation,host=Hugin,name=Jean guest_id=3i 1611332164000000000

View File

@ -3,7 +3,7 @@
The `sqlserver` plugin provides metrics for your SQL Server instance. Recorded metrics are
lightweight and use Dynamic Management Views supplied by SQL Server.
### The SQL Server plugin supports the following editions/versions of SQL Server
## The SQL Server plugin supports the following editions/versions of SQL Server
- SQL Server
- 2012 or newer (Plugin support aligned with the [official Microsoft SQL Server support](https://docs.microsoft.com/en-us/sql/sql-server/end-of-support/sql-server-end-of-life-overview?view=sql-server-ver15#lifecycle-dates))
@ -12,7 +12,7 @@ lightweight and use Dynamic Management Views supplied by SQL Server.
- Azure SQL Managed Instance
- Azure SQL Elastic Pool
### Additional Setup
## Additional Setup
You have to create a login on every SQL Server instance or Azure SQL Managed instance you want to monitor, with the following script:
@ -57,7 +57,7 @@ GO
CREATE USER [telegraf] FOR LOGIN telegraf;
```
### Configuration
## Configuration
```toml
[agent]
@ -203,7 +203,7 @@ CREATE USER [telegraf] FOR LOGIN telegraf;
## - PerformanceMetrics
```
### Support for Azure Active Directory (AAD) authentication using [Managed Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
## Support for Azure Active Directory (AAD) authentication using [Managed Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
Azure SQL Database supports 2 main methods of authentication: [SQL authentication and AAD authentication](https://docs.microsoft.com/en-us/azure/azure-sql/database/security-overview#authentication). The recommended practice is to [use AAD authentication when possible](https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-overview).
@ -211,7 +211,7 @@ AAD is a more modern authentication protocol, allows for easier credential/role
To enable support for AAD authentication, we leverage the existing AAD authentication support in the [SQL Server driver for Go](https://github.com/denisenkom/go-mssqldb#azure-active-directory-authentication---preview)
#### How to use AAD Auth with MSI
### How to use AAD Auth with MSI
- Configure "system-assigned managed identity" for Azure resources on the Monitoring VM (the VM that'd connect to the SQL server/database) [using the Azure portal](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
- On the database being monitored, create/update a USER with the name of the Monitoring VM as the principal using the below script. This might require allow-listing the client machine's IP address (from where the below SQL script is being run) on the SQL Server resource.
@ -239,13 +239,13 @@ EXECUTE ('GRANT VIEW DATABASE STATE TO [<Monitoring_VM_Name>]')
- Please note AAD based auth is currently only supported for Azure SQL Database and Azure SQL Managed Instance (but not for SQL Server), as described [here](https://docs.microsoft.com/en-us/azure/azure-sql/database/security-overview#authentication).
### Metrics
## Metrics
To provide backwards compatibility, this plugin supports two versions of metrics queries.
**Note**: Version 2 queries are not backwards compatible with the old queries. Any dashboards or queries based on the old query format will not work with the new format. The version 2 queries only report raw metrics, no math has been done to calculate deltas. To graph this data you must calculate deltas in your dashboarding software.
#### Version 1 (query_version=1): This is Deprecated in 1.6, all future development will be under configuration option database_type
### Version 1 (query_version=1): This is Deprecated in 1.6, all future development will be under configuration option database_type
The original metrics queries provide:
@ -265,7 +265,7 @@ If you are using the original queries all stats have the following tags:
- `servername`: hostname:instance
- `type`: type of stats to easily filter measurements
#### Version 2 (query_version=2): Being deprecated, All future development will be under configuration option database_type
### Version 2 (query_version=2): Being deprecated, All future development will be under configuration option database_type
The new (version 2) metrics provide:
@ -299,7 +299,7 @@ The new (version 2) metrics provide:
- Resource governance stats from `sys.dm_user_db_resource_governance`
- Stats from `sys.dm_db_resource_stats`
#### database_type = "AzureSQLDB"
### database_type = "AzureSQLDB"
These are metrics for Azure SQL Database (single database) and are very similar to version 2, but split out for maintenance reasons, better testability, and differences in DMVs:
@ -313,7 +313,7 @@ These are metrics for Azure SQL Database (single database) and are very similar
- *AzureSQLDBRequests*: Requests which are blocked or have a wait type from `sys.dm_exec_sessions` and `sys.dm_exec_requests`
- *AzureSQLDBSchedulers*: This captures `sys.dm_os_schedulers` snapshots.
#### database_type = "AzureSQLManagedInstance"
### database_type = "AzureSQLManagedInstance"
These are metrics for Azure SQL Managed Instance and are very similar to version 2, but split out for maintenance reasons, better testability, and differences in DMVs:
@ -326,7 +326,7 @@ These are metrics for Azure SQL Managed instance, are very similar to version 2
- *AzureSQLMIRequests*: Requests which are blocked or have a wait type from `sys.dm_exec_sessions` and `sys.dm_exec_requests`
- *AzureSQLMISchedulers*: This captures `sys.dm_os_schedulers` snapshots.
#### database_type = "AzureSQLPool"
### database_type = "AzureSQLPool"
These are metrics for Azure SQL used to monitor resource usage at the Elastic Pool level. These metrics require additional permissions to be collected; please check the Additional Setup section in this documentation.
@ -338,7 +338,7 @@ These are metrics for Azure SQL to monitor resources usage at Elastic Pool level
- *AzureSQLPoolPerformanceCounters*: A selected list of performance counters from `sys.dm_os_performance_counters`. Note: Performance counters where the cntr_type column value is 537003264 are already returned with a percentage format between 0 and 100. For other counters, please check [sys.dm_os_performance_counters](https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-performance-counters-transact-sql?view=azuresqldb-current) documentation.
- *AzureSQLPoolSchedulers*: This captures `sys.dm_os_schedulers` snapshots.
#### database_type = "SQLServer"
### database_type = "SQLServer"
- *SQLServerDatabaseIO*: IO stats from `sys.dm_io_virtual_file_stats`
- *SQLServerMemoryClerks*: Memory clerk breakdown from `sys.dm_os_memory_clerks`, most clerks have been given a friendly name.
@ -359,7 +359,7 @@ These are metrics for Azure SQL to monitor resources usage at Elastic Pool level
- SQLServerAvailabilityReplicaStates: Collects availability replica state information from `sys.dm_hadr_availability_replica_states` for a High Availability / Disaster Recovery (HADR) setup
- SQLServerDatabaseReplicaStates: Collects database replica state information from `sys.dm_hadr_database_replica_states` for a High Availability / Disaster Recovery (HADR) setup
#### Output Measures
### Output Measures
The guiding principle is that all data collected from the same primary DMV ends up in the same measurement irrespective of database_type.
@ -412,7 +412,7 @@ Version 2 queries have the following tags:
- `sql_instance`: Physical host and instance name (hostname:instance)
- `database_name`: For Azure SQLDB, database_name denotes the name of the Azure SQL Database as server name is a logical construct.
#### Health Metric
### Health Metric
All collection versions (version 1, version 2, and database_type) support an optional plugin health metric called `sqlserver_telegraf_health`. This metric tracks if connections to SQL Server are succeeding or failing. Users can leverage this metric to detect if their SQL Server monitoring is not working as intended.

View File

@ -6,7 +6,7 @@ Query data from Google Cloud Monitoring (formerly Stackdriver) using the
This plugin accesses APIs which are [chargeable][pricing]; you might incur
costs.
### Configuration
## Configuration
```toml
[[inputs.stackdriver]]
@ -58,9 +58,9 @@ costs.
## For a list of aligner strings see:
## https://cloud.google.com/monitoring/api/ref_v3/rpc/google.monitoring.v3#aligner
# distribution_aggregation_aligners = [
# "ALIGN_PERCENTILE_99",
# "ALIGN_PERCENTILE_95",
# "ALIGN_PERCENTILE_50",
# "ALIGN_PERCENTILE_99",
# "ALIGN_PERCENTILE_95",
# "ALIGN_PERCENTILE_50",
# ]
## Filters can be added to reduce the number of time series matched. All
@ -84,23 +84,24 @@ costs.
## Metric labels refine the time series selection with the following expression:
## metric.labels.<key> = <value>
# [[inputs.stackdriver.filter.metric_labels]]
# key = "device_name"
# value = 'one_of("sda", "sdb")'
# key = "device_name"
# value = 'one_of("sda", "sdb")'
```
#### Authentication
### Authentication
It is recommended to use a service account to authenticate with the
Stackdriver Monitoring API. [Getting Started with Authentication][auth].
### Metrics
## Metrics
Metrics are created using one of three patterns depending on whether the value type
is a scalar value, raw distribution buckets, or aligned bucket values.
In all cases, the Stackdriver metric type is split on the last component into
the measurement and field:
```
```sh
compute.googleapis.com/instance/disk/read_bytes_count
└────────── measurement ─────────┘ └── field ───┘
```
@ -114,7 +115,6 @@ compute.googleapis.com/instance/disk/read_bytes_count
- fields:
- field
**Distributions:**
Distributions are represented by a set of fields along with the bucket values
@ -132,7 +132,7 @@ represents the total number of items less than the `lt` tag.
- field_range_min
- field_range_max
+ measurement
- measurement
- tags:
- resource_labels
- metric_labels
@ -149,14 +149,16 @@ represents the total number of items less than the `lt` tag.
- fields:
- field_alignment_function
### Troubleshooting
## Troubleshooting
When Telegraf is run with `--debug`, detailed information about the performed
queries will be logged.
### Example Output
```
## Example Output
```shell
```
[stackdriver]: https://cloud.google.com/monitoring/api/v3/
[auth]: https://cloud.google.com/docs/authentication/getting-started
[pricing]: https://cloud.google.com/stackdriver/pricing#stackdriver_monitoring_services

View File

@ -1,6 +1,6 @@
# StatsD Input Plugin
### Configuration
## Configuration
```toml
# Statsd Server
@ -77,7 +77,7 @@
# max_ttl = "10h"
```
### Description
## Description
The statsd plugin is a special type of plugin which runs a background statsd
listener service while telegraf is running.
@ -87,49 +87,48 @@ original [etsy statsd](https://github.com/etsy/statsd/blob/master/docs/metric_ty
implementation. In short, the telegraf statsd listener will accept:
- Gauges
- `users.current.den001.myapp:32|g` <- standard
- `users.current.den001.myapp:+10|g` <- additive
- `users.current.den001.myapp:-10|g`
- `users.current.den001.myapp:32|g` <- standard
- `users.current.den001.myapp:+10|g` <- additive
- `users.current.den001.myapp:-10|g`
- Counters
- `deploys.test.myservice:1|c` <- increments by 1
- `deploys.test.myservice:101|c` <- increments by 101
- `deploys.test.myservice:1|c|@0.1` <- with sample rate, increments by 10
- `deploys.test.myservice:1|c` <- increments by 1
- `deploys.test.myservice:101|c` <- increments by 101
- `deploys.test.myservice:1|c|@0.1` <- with sample rate, increments by 10
- Sets
- `users.unique:101|s`
- `users.unique:101|s`
- `users.unique:102|s` <- would result in a count of 2 for `users.unique`
- `users.unique:101|s`
- `users.unique:101|s`
- `users.unique:102|s` <- would result in a count of 2 for `users.unique`
- Timings & Histograms
- `load.time:320|ms`
- `load.time.nanoseconds:1|h`
- `load.time:200|ms|@0.1` <- sampled 1/10 of the time
- `load.time:320|ms`
- `load.time.nanoseconds:1|h`
- `load.time:200|ms|@0.1` <- sampled 1/10 of the time
- Distributions
- `load.time:320|d`
- `load.time.nanoseconds:1|d`
- `load.time:200|d|@0.1` <- sampled 1/10 of the time
- `load.time:320|d`
- `load.time.nanoseconds:1|d`
- `load.time:200|d|@0.1` <- sampled 1/10 of the time
It is possible to omit repetitive names and merge individual stats into a
single line by separating them with additional colons:
- `users.current.den001.myapp:32|g:+10|g:-10|g`
- `deploys.test.myservice:1|c:101|c:1|c|@0.1`
- `users.unique:101|s:101|s:102|s`
- `load.time:320|ms:200|ms|@0.1`
- `users.current.den001.myapp:32|g:+10|g:-10|g`
- `deploys.test.myservice:1|c:101|c:1|c|@0.1`
- `users.unique:101|s:101|s:102|s`
- `load.time:320|ms:200|ms|@0.1`
This also allows for mixed types in a single line:
- `foo:1|c:200|ms`
- `foo:1|c:200|ms`
The string `foo:1|c:200|ms` is internally split into two individual metrics
`foo:1|c` and `foo:200|ms` which are added to the aggregator separately.
### Influx Statsd
## Influx Statsd
In order to take advantage of InfluxDB's tagging system, we have made a couple
additions to the standard statsd protocol. First, you can specify
tags in a manner similar to the line-protocol, like this:
```
```shell
users.current,service=payroll,region=us-west:32|g
```
@ -139,9 +138,10 @@ users.current,service=payroll,region=us-west:32|g
current.users,service=payroll,server=host01:west=10,east=10,central=2,south=10|g
``` -->
### Measurements:
## Measurements
Meta:
- tags: `metric_type=<gauge|set|counter|timing|histogram>`
Outputted measurements will depend entirely on the measurements that the user
@ -149,42 +149,42 @@ sends, but here is a brief rundown of what you can expect to find from each
metric type:
- Gauges
- Gauges are a constant data type. They are not subject to averaging, and they
- Gauges are a constant data type. They are not subject to averaging, and they
don't change unless you change them. That is, once you set a gauge value, it
will be a flat line on the graph until you change it again.
- Counters
- Counters are the most basic type. They are treated as a count of a type of
- Counters are the most basic type. They are treated as a count of a type of
event. They will continually increase unless you set `delete_counters=true`.
- Sets
- Sets count the number of unique values passed to a key. For example, you
- Sets count the number of unique values passed to a key. For example, you
could count the number of users accessing your system using `users:<user_id>|s`.
No matter how many times the same user_id is sent, the count will only increase
by 1.
- Timings & Histograms
- Timers are meant to track how long something took. They are an invaluable
- Timers are meant to track how long something took. They are an invaluable
tool for tracking application performance.
- The following aggregate measurements are made for timers:
- `statsd_<name>_lower`: The lower bound is the lowest value statsd saw
- The following aggregate measurements are made for timers:
- `statsd_<name>_lower`: The lower bound is the lowest value statsd saw
for that stat during that interval.
- `statsd_<name>_upper`: The upper bound is the highest value statsd saw
- `statsd_<name>_upper`: The upper bound is the highest value statsd saw
for that stat during that interval.
- `statsd_<name>_mean`: The mean is the average of all values statsd saw
- `statsd_<name>_mean`: The mean is the average of all values statsd saw
for that stat during that interval.
- `statsd_<name>_stddev`: The stddev is the sample standard deviation
- `statsd_<name>_stddev`: The stddev is the sample standard deviation
of all values statsd saw for that stat during that interval.
- `statsd_<name>_sum`: The sum is the sample sum of all values statsd saw
- `statsd_<name>_sum`: The sum is the sample sum of all values statsd saw
for that stat during that interval.
- `statsd_<name>_count`: The count is the number of timings statsd saw
- `statsd_<name>_count`: The count is the number of timings statsd saw
for that stat during that interval. It is not averaged.
- `statsd_<name>_percentile_<P>` The `Pth` percentile is a value x such
- `statsd_<name>_percentile_<P>` The `Pth` percentile is a value x such
that `P%` of all the values statsd saw for that stat during that time
period are below x. The most common value that people use for `P` is
`90`; this is a great number to try to optimize (see the worked example after this list).
- Distributions
- The Distribution metric represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. A Distribution can be used to instrument logical objects, like services, independently from the underlying hosts.
- Unlike the Histogram metric type, which aggregates on the Agent during a given time interval, a Distribution metric sends all the raw data during a time interval.
- The Distribution metric represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. A Distribution can be used to instrument logical objects, like services, independently from the underlying hosts.
- Unlike the Histogram metric type, which aggregates on the Agent during a given time interval, a Distribution metric sends all the raw data during a time interval.
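As a rough worked example of the percentile field (bucket name and values are illustrative):

```text
# ten timings received for load.time in one interval (ms):
100 200 300 400 500 600 700 800 900 1000
# 90% of the samples are at or below 900, so:
statsd_load_time_percentile_90 = 900
```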
### Plugin arguments
## Plugin arguments
- **protocol** string: Protocol used in listener - tcp or udp options
- **max_tcp_connections** []int: Maximum number of concurrent TCP connections
@ -204,12 +204,12 @@ per-measurement in the calculation of percentiles. Raising this limit increases
the accuracy of percentiles but also increases the memory usage and cpu time.
- **templates** []string: Templates for transforming statsd buckets into influx
measurements and tags.
- **parse_data_dog_tags** boolean: Enable parsing of tags in DataDog's dogstatsd format (http://docs.datadoghq.com/guides/dogstatsd/)
- **datadog_extensions** boolean: Enable parsing of DataDog's extensions to dogstatsd format (http://docs.datadoghq.com/guides/dogstatsd/)
- **datadog_distributions** boolean: Enable parsing of the Distribution metric in DataDog's dogstatsd format (https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition)
- **parse_data_dog_tags** boolean: Enable parsing of tags in DataDog's dogstatsd format (<http://docs.datadoghq.com/guides/dogstatsd/>)
- **datadog_extensions** boolean: Enable parsing of DataDog's extensions to dogstatsd format (<http://docs.datadoghq.com/guides/dogstatsd/>)
- **datadog_distributions** boolean: Enable parsing of the Distribution metric in DataDog's dogstatsd format (<https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition>)
- **max_ttl** config.Duration: Max duration (TTL) for each metric to stay cached/reported without being updated.
### Statsd bucket -> InfluxDB line-protocol Templates
## Statsd bucket -> InfluxDB line-protocol Templates
The plugin supports specifying templates for transforming statsd buckets into
InfluxDB measurement names and tags. The templates have a _measurement_ keyword,
@ -217,7 +217,7 @@ which can be used to specify parts of the bucket that are to be used in the
measurement name. Other words in the template are used as tag names. For example,
the following template:
```
```toml
templates = [
"measurement.measurement.region"
]
@ -225,7 +225,7 @@ templates = [
would result in the following transformation:
```
```shell
cpu.load.us-west:100|g
=> cpu_load,region=us-west 100
```
@ -233,7 +233,7 @@ cpu.load.us-west:100|g
Users can also filter the template to use based on the name of the bucket,
using glob matching, like so:
```
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
@ -242,7 +242,7 @@ templates = [
which would result in the following transformation:
```
```shell
cpu.load.us-west:100|g
=> cpu_load,region=us-west 100

View File

@ -6,7 +6,7 @@ and much more. It provides a socket for the Suricata log output to write JSON
stats output to, and processes the incoming data to fit Telegraf's format.
It can also report for triggered Suricata IDS/IPS alerts.
### Configuration
## Configuration
```toml
[[inputs.suricata]]
@ -23,14 +23,15 @@ It can also report for triggered Suricata IDS/IPS alerts.
alerts = false
```
### Metrics
## Metrics
Fields in the 'suricata' measurement follow the JSON format used by Suricata's
stats output.
See http://suricata.readthedocs.io/en/latest/performance/statistics.html for
See <http://suricata.readthedocs.io/en/latest/performance/statistics.html> for
more information.
All fields for Suricata stats are numeric.
- suricata
- tags:
- thread: `Global` for global statistics (if enabled), thread IDs (e.g. `W#03-enp0s31f6`) for thread-specific statistics
@ -98,7 +99,7 @@ All fields for Suricata stats are numeric.
- tcp_synack
- ...
Some fields of the Suricata alerts are strings, for example the signatures. See https://suricata.readthedocs.io/en/suricata-6.0.0/output/eve/eve-json-format.html?highlight=priority#event-type-alert for more information.
Some fields of the Suricata alerts are strings, for example the signatures. See <https://suricata.readthedocs.io/en/suricata-6.0.0/output/eve/eve-json-format.html?highlight=priority#event-type-alert> for more information.
- suricata_alert
- fields:
@ -112,7 +113,7 @@ Some fields of the Suricata alerts are strings, for example the signatures. See
- target_port
- ...
#### Suricata configuration
### Suricata configuration
Suricata needs to deliver the 'stats' event type to a given unix socket for
this plugin to pick up. This can be done, for example, by creating an additional
@ -128,11 +129,10 @@ output in the Suricata configuration file:
threads: yes
```
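For context, a complete eve-log output stanza delivering stats to a unix socket might look like the following (a minimal sketch; the socket path is an assumption and must match the `source` configured for the plugin):

```yaml
# illustrative addition to suricata.yaml
outputs:
  - eve-log:
      enabled: yes
      filetype: unix_stream
      filename: /var/run/suricata-stats.sock
      types:
        - stats:
            threads: yes
```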
#### FreeBSD tuning
### FreeBSD tuning
Under FreeBSD it is necessary to increase the localhost buffer space to at least 16384 (the default is 8192);
otherwise, messages from Suricata are truncated as they exceed the available buffer space and,
consequently, no statistics are processed by the plugin.
```text
@ -140,8 +140,7 @@ sysctl -w net.local.stream.recvspace=16384
sysctl -w net.local.stream.sendspace=16384
```
### Example Output
## Example Output
```text
suricata,host=myhost,thread=FM#01 flow_mgr_rows_empty=0,flow_mgr_rows_checked=65536,flow_mgr_closed_pruned=0,flow_emerg_mode_over=0,flow_mgr_flows_timeout_inuse=0,flow_mgr_rows_skipped=65535,flow_mgr_bypassed_pruned=0,flow_mgr_flows_removed=0,flow_mgr_est_pruned=0,flow_mgr_flows_notimeout=1,flow_mgr_flows_checked=1,flow_mgr_rows_busy=0,flow_spare=10000,flow_mgr_rows_maxlen=1,flow_mgr_new_pruned=0,flow_emerg_mode_entered=0,flow_tcp_reuse=0,flow_mgr_flows_timeout=0 1568368562545197545

View File

@ -4,7 +4,7 @@ The swap plugin collects system swap metrics.
For more information on what swap memory is, read [All about Linux swap space](https://www.linux.com/news/all-about-linux-swap-space).
### Configuration:
## Configuration
```toml
# Read metrics about swap memory usage
@ -12,7 +12,7 @@ For more information on what swap memory is, read [All about Linux swap space](h
# no configuration
```
### Metrics:
## Metrics
- swap
- fields:
@ -23,8 +23,8 @@ For more information on what swap memory is, read [All about Linux swap space](h
- in (int, bytes): data swapped in since last boot calculated from page number
- out (int, bytes): data swapped out since last boot calculated from page number
### Example Output:
## Example Output
```
```shell
swap total=20855394304i,used_percent=45.43883523785713,used=9476448256i,free=1715331072i 1511894782000000000
```

View File

@ -1,10 +1,9 @@
# Synproxy Input Plugin
The synproxy plugin gathers the synproxy counters. Synproxy is a Linux netfilter module used for SYN attack mitigation.
The use of synproxy is documented in `man iptables-extensions` under the SYNPROXY section.
### Configuration
## Configuration
The synproxy plugin does not need any configuration
@ -13,7 +12,7 @@ The synproxy plugin does not need any configuration
# no configuration
```
### Metrics
## Metrics
The following synproxy counters are gathered
@ -26,24 +25,26 @@ The following synproxy counters are gathered
- syn_received (uint32, packets, counter) - SYN received
- conn_reopened (uint32, packets, counter) - Connections reopened
### Sample Queries
## Sample Queries
Get the number of packets per 5 minutes for the measurement in the last hour from InfluxDB:
```sql
SELECT difference(last("cookie_invalid")) AS "cookie_invalid", difference(last("cookie_retrans")) AS "cookie_retrans", difference(last("cookie_valid")) AS "cookie_valid", difference(last("entries")) AS "entries", difference(last("syn_received")) AS "syn_received", difference(last("conn_reopened")) AS "conn_reopened" FROM synproxy WHERE time > NOW() - 1h GROUP BY time(5m) FILL(null);
```
### Troubleshooting
## Troubleshooting
Execute the following CLI command in Linux to test the synproxy counters:
```sh
cat /proc/net/stat/synproxy
```
### Example Output
## Example Output
This section shows example output in Line Protocol format.
```
```shell
synproxy,host=Filter-GW01,rack=filter-node1 conn_reopened=0i,cookie_invalid=235i,cookie_retrans=0i,cookie_valid=8814i,entries=0i,syn_received=8742i 1549550634000000000
```

View File

@ -9,7 +9,7 @@ a Unix Domain socket,
Syslog messages should be formatted according to
[RFC 5424](https://tools.ietf.org/html/rfc5424).
### Configuration
## Configuration
```toml
[[inputs.syslog]]
@ -68,20 +68,20 @@ Syslog messages should be formatted according to
# sdparam_separator = "_"
```
#### Message transport
### Message transport
The `framing` option only applies to streams. It governs the way we expect to receive messages within the stream.
Namely, with the [`"octet counting"`](https://tools.ietf.org/html/rfc5425#section-4.3) technique (default) or with the [`"non-transparent"`](https://tools.ietf.org/html/rfc6587#section-3.4.2) framing.
The `trailer` option only applies when the `framing` option is `"non-transparent"`. It must have one of the following values: `"LF"` (default), or `"NUL"`.
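Tying the two options together, a stream listener might be configured like this (a sketch; the address is illustrative):

```toml
[[inputs.syslog]]
  ## a TCP stream transport, so framing applies
  server = "tcp://:6514"
  ## "octet-counting" (default) or "non-transparent"
  # framing = "octet-counting"
  ## only used with non-transparent framing: "LF" (default) or "NUL"
  # trailer = "LF"
```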
#### Best effort
### Best effort
The [`best_effort`](https://github.com/influxdata/go-syslog#best-effort-mode)
option instructs the parser to extract partial but valid info from syslog
messages. If unset only full messages will be collected.
#### Rsyslog Integration
### Rsyslog Integration
Rsyslog can be configured to forward logging messages to Telegraf by configuring
[remote logging](https://www.rsyslog.com/doc/v8-stable/configuration/actions.html#remote-machine).
@ -93,7 +93,8 @@ config file.
Add the following lines to `/etc/rsyslog.d/50-telegraf.conf` making
adjustments to the target address as needed:
```
```shell
$ActionQueueType LinkedList # use asynchronous processing
$ActionQueueFileName srvrfwd # set file name, also enables disk mode
$ActionResumeRetryCount -1 # infinite retries on insert failure
@ -107,7 +108,8 @@ $ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
```
You can alternately use `advanced` format (aka RainerScript):
```
```bash
# forward over tcp with octet framing according to RFC 5425
action(type="omfwd" Protocol="tcp" TCP_Framing="octet-counted" Target="127.0.0.1" Port="6514" Template="RSYSLOG_SyslogProtocol23Format")
@ -117,7 +119,7 @@ action(type="omfwd" Protocol="tcp" TCP_Framing="octet-counted" Target="127.0.0.1
To complete TLS setup please refer to [rsyslog docs](https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html).
### Metrics
## Metrics
- syslog
- tags
@ -136,17 +138,19 @@ To complete TLS setup please refer to [rsyslog docs](https://www.rsyslog.com/doc
- *Structured Data* (string)
- timestamp: the time the messages was received
#### Structured Data
### Structured Data
Structured data produces field keys by combining the `SD_ID` with the `PARAM_NAME`, joined by the `sdparam_separator`, as in the following example:
```
```shell
170 <165>1 2018-10-01:14:15.000Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] An application event log entry...
```
```
```shell
syslog,appname=evntslog,facility=local4,hostname=mymachine.example.com,severity=notice exampleSDID@32473_eventID="1011",exampleSDID@32473_eventSource="Application",exampleSDID@32473_iut="3",facility_code=20i,message="An application event log entry...",msgid="ID47",severity_code=5i,timestamp=1065910455003000000i,version=1i 1538421339749472344
```
### Troubleshooting
## Troubleshooting
You can send debugging messages directly to the input plugin using netcat:
@ -158,14 +162,16 @@ echo "57 <13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc 127.0.0.
echo "<13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc -u 127.0.0.1 6514
```
#### RFC3164
### RFC3164
RFC3164-encoded messages are supported for UDP only, but not all vendors output valid RFC3164 messages by default (for example, Cisco IOS).
If you see the following error, it is due to a message encoded in this format:
```
```shell
E! Error in plugin [inputs.syslog]: expecting a version value in the range 1-999 [col 5]
```
You can use rsyslog to translate RFC3164 syslog messages into RFC5424 format.
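A minimal sketch of that translation, reusing the `omfwd` action shown earlier (the UDP input on port 514 is an assumption for your rsyslog setup):

```bash
# accept RFC3164 over UDP 514 and re-emit as RFC5424 to Telegraf
module(load="imudp")
input(type="imudp" port="514")
action(type="omfwd" Protocol="tcp" TCP_Framing="octet-counted"
       Target="127.0.0.1" Port="6514" Template="RSYSLOG_SyslogProtocol23Format")
```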

View File

@ -6,7 +6,7 @@ package installed.
This plugin collects system metrics with the sysstat collector utility `sadc` and parses
the created binary data file with the `sadf` utility.
### Configuration:
## Configuration
```toml
# Sysstat metrics collector
@ -38,22 +38,22 @@ the created binary data file with the `sadf` utility.
##
## Run 'sar -h' or 'man sar' to find out the supported options for your sysstat version.
[inputs.sysstat.options]
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
-r = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
# -H = "hugepages" # only available for newer linux distributions
# "-I ALL" = "interrupts" # requires INT activity
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
-r = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
# -H = "hugepages" # only available for newer linux distributions
# "-I ALL" = "interrupts" # requires INT activity
## Device tags can be used to add additional tags for devices. For example the configuration below
## adds a tag vg with value rootvg for all metrics with sda devices.
@ -61,94 +61,100 @@ the created binary data file with the `sadf` utility.
# vg = "rootvg"
```
### Measurements & Fields:
#### If group=true
## Measurements & Fields
### If group=true
- cpu
- pct_idle (float)
- pct_iowait (float)
- pct_nice (float)
- pct_steal (float)
- pct_system (float)
- pct_user (float)
- disk
- avgqu-sz (float)
- avgrq-sz (float)
- await (float)
- pct_util (float)
- rd_sec_pers (float)
- svctm (float)
- tps (float)
And much more, depending on the options you configure.
#### If group=false
### If group=false
- cpu_pct_idle
- value (float)
- cpu_pct_iowait
- value (float)
- cpu_pct_nice
- value (float)
- cpu_pct_steal
- value (float)
- cpu_pct_system
- value (float)
- cpu_pct_user
- value (float)
- disk_avgqu-sz
- value (float)
- disk_avgrq-sz
- value (float)
- disk_await
- value (float)
- disk_pct_util
- value (float)
- disk_rd_sec_per_s
- value (float)
- disk_svctm
- value (float)
- disk_tps
- value (float)
And much more, depending on the options you configure.
### Tags:
## Tags
- All measurements have the following tags:
- device
And more if you define some `device_tags`.
### Example Output:
## Example Output
With the configuration below:
```toml
[[inputs.sysstat]]
sadc_path = "/usr/lib/sa/sadc" # required
activities = ["DISK", "SNMP", "INT"]
group = true
[inputs.sysstat.options]
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
-H = "hugepages"
"-I ALL" = "interrupts" # requires INT activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
"-r ALL" = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
-H = "hugepages"
"-I ALL" = "interrupts" # requires INT activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
"-r ALL" = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
[[inputs.sysstat.device_tags.sda]]
vg = "rootvg"
```
you get the following output:
```
```shell
$ telegraf --config telegraf.conf --input-filter sysstat --test
* Plugin: sysstat, Collection 1
> cpu_util,device=all pct_idle=98.85,pct_iowait=0,pct_nice=0.38,pct_steal=0,pct_system=0.64,pct_user=0.13 1459255626657883725
@ -189,34 +195,36 @@ $ telegraf --config telegraf.conf --input-filter sysstat --test
```
If you change the group value to false like below:
```toml
[[inputs.sysstat]]
sadc_path = "/usr/lib/sa/sadc" # required
activities = ["DISK", "SNMP", "INT"]
group = false
[inputs.sysstat.options]
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
-H = "hugepages"
"-I ALL" = "interrupts" # requires INT activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
"-r ALL" = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
-C = "cpu"
-B = "paging"
-b = "io"
-d = "disk" # requires DISK activity
-H = "hugepages"
"-I ALL" = "interrupts" # requires INT activity
"-n ALL" = "network"
"-P ALL" = "per_cpu"
-q = "queue"
-R = "mem"
"-r ALL" = "mem_util"
-S = "swap_util"
-u = "cpu_util"
-v = "inode"
-W = "swap"
-w = "task"
[[inputs.sysstat.device_tags.sda]]
vg = "rootvg"
```
you get the following output:
```
```shell
$ telegraf -config telegraf.conf -input-filter sysstat -test
* Plugin: sysstat, Collection 1
> io_tps value=0.5 1459255780126025822

View File

@ -5,33 +5,34 @@ and number of users logged in. It is similar to the unix `uptime` command.
Number of CPUs is obtained from the /proc/cpuinfo file.
### Configuration:
## Configuration
```toml
# Read metrics about system load & uptime
[[inputs.system]]
# no configuration
```
#### Permissions:
### Permissions
The `n_users` field requires read access to `/var/run/utmp`, and may require
the `telegraf` user to be added to the `utmp` group on some systems. If this file does not exist, `n_users` will be skipped.
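One common way to grant that access (assuming an `utmp` group owns the file on your distribution):

```shell
# add the telegraf user to the utmp group, then restart telegraf
sudo usermod -a -G utmp telegraf
```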
### Metrics:
## Metrics
- system
- fields:
- load1 (float)
- load15 (float)
- load5 (float)
- n_users (integer)
- n_cpus (integer)
- uptime (integer, seconds)
- uptime_format (string, deprecated in 1.10, use `uptime` field)
### Example Output:
## Example Output
```
```shell
system,host=tyrion load1=3.72,load5=2.4,load15=2.1,n_users=3i,n_cpus=4i 1483964144000000000
system,host=tyrion uptime=1249632i 1483964144000000000
system,host=tyrion uptime_format="14 days, 11:07" 1483964144000000000

View File

@ -12,7 +12,8 @@ fulfills the same purpose on windows.
In addition to services, this plugin can gather other unit types as well,
see `systemctl list-units --all --type help` for possible options.
### Configuration
## Configuration
```toml
[[inputs.systemd_units]]
## Set timeout for systemctl execution
@ -31,7 +32,8 @@ see `systemctl list-units --all --type help` for possible options.
## pattern = "a*"
```
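To preview the units the plugin will see, you can run the underlying command yourself (a sketch; flags per the systemctl documentation):

```shell
# list all service units in plain, script-friendly form
systemctl list-units --all --plain --no-legend --type=service
```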
### Metrics
## Metrics
- systemd_units:
- tags:
- name (string, unit name)
@ -43,7 +45,7 @@ see `systemctl list-units --all --type help` for possible options.
- active_code (int, see below)
- sub_code (int, see below)
#### Load
### Load
enumeration of [unit_load_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L87)
@ -57,7 +59,7 @@ enumeration of [unit_load_state_table](https://github.com/systemd/systemd/blob/c
| 5 | merged | unit is ~ |
| 6 | masked | unit is ~ |
#### Active
### Active
enumeration of [unit_active_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L99)
@ -70,7 +72,7 @@ enumeration of [unit_active_state_table](https://github.com/systemd/systemd/blob
| 4 | activating | unit is ~ |
| 5 | deactivating | unit is ~ |
#### Sub
### Sub
enumeration of sub states, see various [unittype_state_tables](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L163);
duplicates were removed, tables are hex aligned to keep some space for future
@ -132,9 +134,9 @@ values
| 0x00a0 | elapsed | unit is ~ |
| | | |
### Example Output
## Example Output
```
```shell
systemd_units,host=host1.example.com,name=dbus.service,load=loaded,active=active,sub=running load_code=0i,active_code=0i,sub_code=0i 1533730725000000000
systemd_units,host=host1.example.com,name=networking.service,load=loaded,active=failed,sub=failed load_code=0i,active_code=3i,sub_code=12i 1533730725000000000
systemd_units,host=host1.example.com,name=ssh.service,load=loaded,active=active,sub=running load_code=0i,active_code=0i,sub_code=0i 1533730725000000000

View File

@ -4,7 +4,7 @@ The tail plugin "tails" a logfile and parses each log message.
By default, the tail plugin acts like the following unix tail command:
```
```shell
tail -F --lines=0 myfile.log
```
@ -14,12 +14,12 @@ inaccessible files.
- `--lines=0` means that it will start at the end of the file (unless
the `from_beginning` option is set).
see http://man7.org/linux/man-pages/man1/tail.1.html for more details.
see <http://man7.org/linux/man-pages/man1/tail.1.html> for more details.
The plugin expects messages in one of the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
### Configuration
## Configuration
```toml
[[inputs.tail]]
@ -85,7 +85,7 @@ The plugin expects messages in one of the
#timeout = 5s
```
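For instance, a minimal working setup might look like this (a sketch; the file path is illustrative):

```toml
[[inputs.tail]]
  ## files to tail, watched with tail -F semantics
  files = ["/var/log/myapp/metrics.log"]
  ## start at the end of the file, like --lines=0
  from_beginning = false
  ## parse each line as InfluxDB line protocol
  data_format = "influx"
```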
### Metrics
## Metrics
Metrics are produced according to the `data_format` option. Additionally a
tag labeled `path` is added to the metric containing the filename being tailed.

View File

@ -2,10 +2,10 @@
This plugin uses the Teamspeak 3 ServerQuery interface of the Teamspeak server to collect statistics of one or more
virtual servers. If you are querying an external Teamspeak server, make sure to add the host which is running Telegraf
to query_ip_whitelist.txt in the Teamspeak Server directory. For information about how to configure the server, take a look at
the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/TeamSpeak%203%20Server%20Query%20Manual.pdf)
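For example, whitelisting the Telegraf host could look like this (a sketch; the install path and IP address are placeholders):

```shell
# append the Telegraf host's IP to the ServerQuery whitelist
echo "192.168.1.10" >> /opt/teamspeak/query_ip_whitelist.txt
```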
### Configuration:
## Configuration
```toml
# Reads metrics from a Teamspeak 3 Server via ServerQuery
@ -20,27 +20,27 @@ the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/T
# virtual_servers = [1]
```
### Measurements:
## Measurements
- teamspeak
- uptime
- clients_online
- total_ping
- total_packet_loss
- packets_sent_total
- packets_received_total
- bytes_sent_total
- bytes_received_total
- query_clients_online
### Tags:
## Tags
- The following tags are used:
- virtual_server
- name
### Example output:
## Example output
```
```shell
teamspeak,virtual_server=1,name=LeopoldsServer,host=vm01 bytes_received_total=29638202639i,uptime=13567846i,total_ping=26.89,total_packet_loss=0,packets_sent_total=415821252i,packets_received_total=237069900i,bytes_sent_total=55309568252i,clients_online=11i,query_clients_online=1i 1507406561000000000
```

View File

@ -5,14 +5,14 @@ meant to be multi platform and uses platform specific collection methods.
Currently supports Linux and Windows.
### Configuration
## Configuration
```toml
[[inputs.temp]]
# no configuration
```
### Metrics
## Metrics
- temp
- tags:
@ -20,18 +20,18 @@ Currently supports Linux and Windows.
- fields:
- temp (float, celsius)
### Troubleshooting
## Troubleshooting
On **Windows**, the plugin uses a WMI call that can be replicated with the
following command:
```
```shell
wmic /namespace:\\root\wmi PATH MSAcpi_ThermalZoneTemperature
```
### Example Output
## Example Output
```
```shell
temp,sensor=coretemp_physicalid0_crit temp=100 1531298763000000000
temp,sensor=coretemp_physicalid0_critalarm temp=0 1531298763000000000
temp,sensor=coretemp_physicalid0_input temp=100 1531298763000000000

View File

@ -4,7 +4,7 @@ The tengine plugin gathers metrics from the
[Tengine Web Server](http://tengine.taobao.org/) via the
[reqstat](http://tengine.taobao.org/document/http_reqstat.html) module.
### Configuration:
## Configuration
```toml
# Read Tengine's basic status information (ngx_http_reqstat_module)
@ -23,7 +23,7 @@ The tengine plugin gathers metrics from the
# insecure_skip_verify = false
```
### Metrics:
## Metrics
- Measurement
- tags:
@ -60,9 +60,9 @@ The tengine plugin gathers metrics from the
- http_other_detail_status (integer, total number of requests of other status codes*http_ups_4xx total number of requests of upstream 4xx)
- http_ups_5xx (integer, total number of requests of upstream 5xx)
### Example Output:
## Example Output
```
```shell
tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=localhost bytes_in=9129i,bytes_out=56334i,conn_total=14i,http_200=90i,http_206=0i,http_2xx=90i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=0i,http_416=0i,http_499=0i,http_4xx=0i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=90i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=28.79.190.35.bc.googleusercontent.com bytes_in=1500i,bytes_out=3009i,conn_total=4i,http_200=1i,http_206=0i,http_2xx=1i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=1i,http_416=0i,http_499=0i,http_4xx=3i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=4i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=www.google.com bytes_in=372i,bytes_out=786i,conn_total=1i,http_200=1i,http_206=0i,http_2xx=1i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=0i,http_416=0i,http_499=0i,http_4xx=0i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=1i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000

View File

@ -4,7 +4,7 @@ The Tomcat plugin collects statistics available from the tomcat manager status p
See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-howto.html#Server_Status) for details of these statistics.
### Configuration:
## Configuration
```toml
# Gather metrics from the Tomcat server status page.
@ -27,7 +27,7 @@ See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-
# insecure_skip_verify = false
```
### Measurements & Fields:
## Measurements & Fields
- tomcat_jvm_memory
- free
@ -54,7 +54,7 @@ See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-
- bytes_received
- bytes_sent
### Tags:
## Tags
- tomcat_jvm_memorypool has the following tags:
- name
@ -62,9 +62,9 @@ See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-
- tomcat_connector
- name
### Example Output:
## Example Output
```
```shell
tomcat_jvm_memory,host=N8-MBP free=20014352i,max=127729664i,total=41459712i 1474663361000000000
tomcat_jvm_memorypool,host=N8-MBP,name=Eden\ Space,type=Heap\ memory committed=11534336i,init=2228224i,max=35258368i,used=1941200i 1474663361000000000
tomcat_jvm_memorypool,host=N8-MBP,name=Survivor\ Space,type=Heap\ memory committed=1376256i,init=262144i,max=4390912i,used=1376248i 1474663361000000000

View File

@ -2,7 +2,7 @@
The `trig` plugin is for demonstration purposes and inserts sine and cosine waves.
### Configuration
## Configuration
```toml
# Inserts sine and cosine waves for demonstration purposes
@ -11,17 +11,16 @@ The `trig` plugin is for demonstration purposes and inserts sine and cosine
amplitude = 10.0
```
### Metrics
## Metrics
- trig
- fields:
- cosine (float)
- sine (float)
## Example Output
### Example Output
```
```shell
trig,host=MBP15-SWANG.local cosine=10,sine=0 1632338680000000000
trig,host=MBP15-SWANG.local sine=5.877852522924732,cosine=8.090169943749473 1632338690000000000
trig,host=MBP15-SWANG.local sine=9.510565162951535,cosine=3.0901699437494745 1632338700000000000

View File

@ -2,8 +2,7 @@
The `twemproxy` plugin gathers statistics from [Twemproxy](https://github.com/twitter/twemproxy) servers.
### Configuration
## Configuration
```toml
# Read Twemproxy stats data
@ -13,4 +12,3 @@ The `twemproxy` plugin gathers statistics from [Twemproxy](https://github.com/tw
## Monitor pool name
pools = ["redis_pool", "mc_pool"]
```

View File

@ -3,7 +3,7 @@
This plugin gathers stats from [Unbound](https://www.unbound.net/) -
a validating, recursive, and caching DNS resolver.
### Configuration:
## Configuration
```toml
# A plugin to collect stats from the Unbound DNS resolver
@ -32,12 +32,13 @@ a validating, recursive, and caching DNS resolver.
thread_as_tag = false
```
#### Permissions:
### Permissions
It's important to note that this plugin references unbound-control, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set facls, or use sudo.
**Group membership (Recommended)**:
```bash
$ groups telegraf
telegraf : telegraf
@ -50,12 +51,14 @@ telegraf : telegraf unbound
**Sudo privileges**:
If you use this method, you will need the following in your telegraf config:
```toml
[[inputs.unbound]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following line:
@ -66,13 +69,13 @@ Defaults!UNBOUNDCTL !logfile, !syslog, !pam_session
Please use the solution you see as most appropriate.
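For reference, the sudoers entries typically look like the following (a sketch: the `UNBOUNDCTL` alias name comes from the fragment above, while the binary path is an assumption to adjust for your system):

```bash
# illustrative /etc/sudoers fragment (path to unbound-control assumed)
Cmnd_Alias UNBOUNDCTL = /usr/sbin/unbound-control
telegraf ALL=(ALL) NOPASSWD: UNBOUNDCTL
Defaults!UNBOUNDCTL !logfile, !syslog, !pam_session
```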
### Metrics:
## Metrics
This is the full list of stats provided by unbound-control and potentially collected,
depending on your unbound configuration. Histogram-related statistics will never be collected;
extended statistics can also be imported ("extended-statistics: yes" in the unbound configuration).
In the output, the dots in the unbound-control stat names are replaced by underscores (see
https://www.unbound.net/documentation/unbound-control.html for details).
<https://www.unbound.net/documentation/unbound-control.html> for details).
Shown metrics are with `thread_as_tag` enabled.
@ -147,8 +150,9 @@ Shown metrics are with `thread_as_tag` enabled.
- recursion_time_avg
- recursion_time_median
### Example Output:
```
## Example Output
```shell
unbound,host=localhost total_requestlist_avg=0,total_requestlist_exceeded=0,total_requestlist_overwritten=0,total_requestlist_current_user=0,total_recursion_time_avg=0.029186,total_tcpusage=0,total_num_queries=51,total_num_queries_ip_ratelimited=0,total_num_recursivereplies=6,total_requestlist_max=0,time_now=1522804978.784814,time_elapsed=310.435217,total_num_cachemiss=6,total_num_zero_ttl=0,time_up=310.435217,total_num_cachehits=45,total_num_prefetch=0,total_requestlist_current_all=0,total_recursion_time_median=0.016384 1522804979000000000
unbound_threads,host=localhost,thread=0 num_queries_ip_ratelimited=0,requestlist_current_user=0,recursion_time_avg=0.029186,num_prefetch=0,requestlist_overwritten=0,requestlist_exceeded=0,requestlist_current_all=0,tcpusage=0,num_cachehits=37,num_cachemiss=6,num_recursivereplies=6,requestlist_avg=0,num_queries=43,num_zero_ttl=0,requestlist_max=0,recursion_time_median=0.032768 1522804979000000000
unbound_threads,host=localhost,thread=1 num_zero_ttl=0,recursion_time_avg=0,num_queries_ip_ratelimited=0,num_cachehits=8,num_prefetch=0,requestlist_exceeded=0,recursion_time_median=0,tcpusage=0,num_cachemiss=0,num_recursivereplies=0,requestlist_max=0,requestlist_overwritten=0,requestlist_current_user=0,num_queries=8,requestlist_avg=0,requestlist_current_all=0 1522804979000000000

View File

@ -2,7 +2,7 @@
The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.html).
### Configuration
## Configuration
```toml
[[inputs.uwsgi]]
@ -17,23 +17,22 @@ The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](http
# timeout = "5s"
```
## Metrics
### Metrics:
- uwsgi_overview
- tags:
- source
- uid
- gid
- version
- fields:
- listen_queue
- listen_queue_errors
- signal_queue
- load
- pid
+ uwsgi_workers
- uwsgi_workers
- tags:
- worker_id
- source
@ -66,7 +65,7 @@ The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](http
- startup_time
- exceptions
+ uwsgi_cores
- uwsgi_cores
- tags:
- core_id
- worker_id
@ -78,15 +77,13 @@ The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](http
- offloaded_requests
- write_errors
- read_errors
- in_request
## Example Output
### Example Output:
```
```shell
uwsgi_overview,gid=0,uid=0,source=172.17.0.2,version=2.0.18 listen_queue=0i,listen_queue_errors=0i,load=0i,pid=1i,signal_queue=0i 1564441407000000000
uwsgi_workers,source=172.17.0.2,worker_id=1 accepting=1i,avg_rt=0i,delta_request=0i,exceptions=0i,harakiri_count=0i,last_spawn=1564441202i,pid=6i,requests=0i,respawn_count=1i,rss=0i,running_time=0i,signal_queue=0i,signals=0i,status="idle",tx=0i,vsz=0i 1564441407000000000
uwsgi_apps,app_id=0,worker_id=1,source=172.17.0.2 exceptions=0i,modifier1=0i,requests=0i,startup_time=0i 1564441407000000000
uwsgi_cores,core_id=0,worker_id=1,source=172.17.0.2 in_request=0i,offloaded_requests=0i,read_errors=0i,requests=0i,routed_requests=0i,static_requests=0i,write_errors=0i 1564441407000000000
```

View File

@ -2,7 +2,7 @@
This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/)
### Configuration:
## Configuration
```toml
[[inputs.varnish]]
@ -26,311 +26,311 @@ This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/)
# timeout = "1s"
```
### Measurements & Fields:
## Measurements & Fields
This is the full list of stats provided by varnish. Stats will be grouped by their capitalized prefix (e.g. MAIN,
MEMPOOL, etc). In the output, the prefix will be used as a tag, and removed from field names.
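For instance, a MAIN counter would come out roughly like this (an illustrative line; the `section` tag name is an assumption about the plugin's output):

```shell
varnish,host=cache01,section=MAIN cache_hit=51i,uptime=123i 1462765437090957980
```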
- varnish
- MAIN.uptime (uint64, count, Child process uptime)
- MAIN.sess_conn (uint64, count, Sessions accepted)
- MAIN.sess_drop (uint64, count, Sessions dropped)
- MAIN.sess_fail (uint64, count, Session accept failures)
- MAIN.sess_pipe_overflow (uint64, count, Session pipe overflow)
- MAIN.client_req_400 (uint64, count, Client requests received,)
- MAIN.client_req_411 (uint64, count, Client requests received,)
- MAIN.client_req_413 (uint64, count, Client requests received,)
- MAIN.client_req_417 (uint64, count, Client requests received,)
- MAIN.client_req (uint64, count, Good client requests)
- MAIN.cache_hit (uint64, count, Cache hits)
- MAIN.cache_hitpass (uint64, count, Cache hits for)
- MAIN.cache_miss (uint64, count, Cache misses)
- MAIN.backend_conn (uint64, count, Backend conn. success)
- MAIN.backend_unhealthy (uint64, count, Backend conn. not)
- MAIN.backend_busy (uint64, count, Backend conn. too)
- MAIN.backend_fail (uint64, count, Backend conn. failures)
- MAIN.backend_reuse (uint64, count, Backend conn. reuses)
- MAIN.backend_toolate (uint64, count, Backend conn. was)
- MAIN.backend_recycle (uint64, count, Backend conn. recycles)
- MAIN.backend_retry (uint64, count, Backend conn. retry)
- MAIN.fetch_head (uint64, count, Fetch no body)
- MAIN.fetch_length (uint64, count, Fetch with Length)
- MAIN.fetch_chunked (uint64, count, Fetch chunked)
- MAIN.fetch_eof (uint64, count, Fetch EOF)
- MAIN.fetch_bad (uint64, count, Fetch bad T- E)
- MAIN.fetch_close (uint64, count, Fetch wanted close)
- MAIN.fetch_oldhttp (uint64, count, Fetch pre HTTP/1.1)
- MAIN.fetch_zero (uint64, count, Fetch zero len)
- MAIN.fetch_1xx (uint64, count, Fetch no body)
- MAIN.fetch_204 (uint64, count, Fetch no body)
- MAIN.fetch_304 (uint64, count, Fetch no body)
- MAIN.fetch_failed (uint64, count, Fetch failed (all)
- MAIN.fetch_no_thread (uint64, count, Fetch failed (no)
- MAIN.pools (uint64, count, Number of thread)
- MAIN.threads (uint64, count, Total number of)
- MAIN.threads_limited (uint64, count, Threads hit max)
- MAIN.threads_created (uint64, count, Threads created)
- MAIN.threads_destroyed (uint64, count, Threads destroyed)
- MAIN.threads_failed (uint64, count, Thread creation failed)
- MAIN.thread_queue_len (uint64, count, Length of session)
- MAIN.busy_sleep (uint64, count, Number of requests)
- MAIN.busy_wakeup (uint64, count, Number of requests)
- MAIN.sess_queued (uint64, count, Sessions queued for)
- MAIN.sess_dropped (uint64, count, Sessions dropped for)
- MAIN.n_object (uint64, count, object structs made)
- MAIN.n_vampireobject (uint64, count, unresurrected objects)
- MAIN.n_objectcore (uint64, count, objectcore structs made)
- MAIN.n_objecthead (uint64, count, objecthead structs made)
- MAIN.n_waitinglist (uint64, count, waitinglist structs made)
- MAIN.n_backend (uint64, count, Number of backends)
- MAIN.n_expired (uint64, count, Number of expired)
- MAIN.n_lru_nuked (uint64, count, Number of LRU)
- MAIN.n_lru_moved (uint64, count, Number of LRU)
- MAIN.losthdr (uint64, count, HTTP header overflows)
- MAIN.s_sess (uint64, count, Total sessions seen)
- MAIN.s_req (uint64, count, Total requests seen)
- MAIN.s_pipe (uint64, count, Total pipe sessions)
- MAIN.s_pass (uint64, count, Total pass- ed requests)
- MAIN.s_fetch (uint64, count, Total backend fetches)
- MAIN.s_synth (uint64, count, Total synthetic responses)
- MAIN.s_req_hdrbytes (uint64, count, Request header bytes)
- MAIN.s_req_bodybytes (uint64, count, Request body bytes)
- MAIN.s_resp_hdrbytes (uint64, count, Response header bytes)
- MAIN.s_resp_bodybytes (uint64, count, Response body bytes)
- MAIN.s_pipe_hdrbytes (uint64, count, Pipe request header)
- MAIN.s_pipe_in (uint64, count, Piped bytes from)
- MAIN.s_pipe_out (uint64, count, Piped bytes to)
- MAIN.sess_closed (uint64, count, Session Closed)
- MAIN.sess_pipeline (uint64, count, Session Pipeline)
- MAIN.sess_readahead (uint64, count, Session Read Ahead)
- MAIN.sess_herd (uint64, count, Session herd)
- MAIN.shm_records (uint64, count, SHM records)
- MAIN.shm_writes (uint64, count, SHM writes)
- MAIN.shm_flushes (uint64, count, SHM flushes due)
- MAIN.shm_cont (uint64, count, SHM MTX contention)
- MAIN.shm_cycles (uint64, count, SHM cycles through)
- MAIN.sms_nreq (uint64, count, SMS allocator requests)
- MAIN.sms_nobj (uint64, count, SMS outstanding allocations)
- MAIN.sms_nbytes (uint64, count, SMS outstanding bytes)
- MAIN.sms_balloc (uint64, count, SMS bytes allocated)
- MAIN.sms_bfree (uint64, count, SMS bytes freed)
- MAIN.backend_req (uint64, count, Backend requests made)
- MAIN.n_vcl (uint64, count, Number of loaded)
- MAIN.n_vcl_avail (uint64, count, Number of VCLs)
- MAIN.n_vcl_discard (uint64, count, Number of discarded)
- MAIN.bans (uint64, count, Count of bans)
- MAIN.bans_completed (uint64, count, Number of bans)
- MAIN.bans_obj (uint64, count, Number of bans)
- MAIN.bans_req (uint64, count, Number of bans)
- MAIN.bans_added (uint64, count, Bans added)
- MAIN.bans_deleted (uint64, count, Bans deleted)
- MAIN.bans_tested (uint64, count, Bans tested against)
- MAIN.bans_obj_killed (uint64, count, Objects killed by)
- MAIN.bans_lurker_tested (uint64, count, Bans tested against)
- MAIN.bans_tests_tested (uint64, count, Ban tests tested)
- MAIN.bans_lurker_tests_tested (uint64, count, Ban tests tested)
- MAIN.bans_lurker_obj_killed (uint64, count, Objects killed by)
- MAIN.bans_dups (uint64, count, Bans superseded by)
- MAIN.bans_lurker_contention (uint64, count, Lurker gave way)
- MAIN.bans_persisted_bytes (uint64, count, Bytes used by)
- MAIN.bans_persisted_fragmentation (uint64, count, Extra bytes in)
- MAIN.n_purges (uint64, count, Number of purge)
- MAIN.n_obj_purged (uint64, count, Number of purged)
- MAIN.exp_mailed (uint64, count, Number of objects)
- MAIN.exp_received (uint64, count, Number of objects)
- MAIN.hcb_nolock (uint64, count, HCB Lookups without)
- MAIN.hcb_lock (uint64, count, HCB Lookups with)
- MAIN.hcb_insert (uint64, count, HCB Inserts)
- MAIN.esi_errors (uint64, count, ESI parse errors)
- MAIN.esi_warnings (uint64, count, ESI parse warnings)
- MAIN.vmods (uint64, count, Loaded VMODs)
- MAIN.n_gzip (uint64, count, Gzip operations)
- MAIN.n_gunzip (uint64, count, Gunzip operations)
- MAIN.vsm_free (uint64, count, Free VSM space)
- MAIN.vsm_used (uint64, count, Used VSM space)
- MAIN.vsm_cooling (uint64, count, Cooling VSM space)
- MAIN.vsm_overflow (uint64, count, Overflow VSM space)
- MAIN.vsm_overflowed (uint64, count, Overflowed VSM space)
- MGT.uptime (uint64, count, Management process uptime)
- MGT.child_start (uint64, count, Child process started)
- MGT.child_exit (uint64, count, Child process normal)
- MGT.child_stop (uint64, count, Child process unexpected)
- MGT.child_died (uint64, count, Child process died)
- MGT.child_dump (uint64, count, Child process core)
- MGT.child_panic (uint64, count, Child process panic)
- MEMPOOL.vbc.live (uint64, count, In use)
- MEMPOOL.vbc.pool (uint64, count, In Pool)
- MEMPOOL.vbc.sz_wanted (uint64, count, Size requested)
- MEMPOOL.vbc.sz_needed (uint64, count, Size allocated)
- MEMPOOL.vbc.allocs (uint64, count, Allocations )
- MEMPOOL.vbc.frees (uint64, count, Frees )
- MEMPOOL.vbc.recycle (uint64, count, Recycled from pool)
- MEMPOOL.vbc.timeout (uint64, count, Timed out from)
- MEMPOOL.vbc.toosmall (uint64, count, Too small to)
- MEMPOOL.vbc.surplus (uint64, count, Too many for)
- MEMPOOL.vbc.randry (uint64, count, Pool ran dry)
- MEMPOOL.busyobj.live (uint64, count, In use)
- MEMPOOL.busyobj.pool (uint64, count, In Pool)
- MEMPOOL.busyobj.sz_wanted (uint64, count, Size requested)
- MEMPOOL.busyobj.sz_needed (uint64, count, Size allocated)
- MEMPOOL.busyobj.allocs (uint64, count, Allocations )
- MEMPOOL.busyobj.frees (uint64, count, Frees )
- MEMPOOL.busyobj.recycle (uint64, count, Recycled from pool)
- MEMPOOL.busyobj.timeout (uint64, count, Timed out from)
- MEMPOOL.busyobj.toosmall (uint64, count, Too small to)
- MEMPOOL.busyobj.surplus (uint64, count, Too many for)
- MEMPOOL.busyobj.randry (uint64, count, Pool ran dry)
- MEMPOOL.req0.live (uint64, count, In use)
- MEMPOOL.req0.pool (uint64, count, In Pool)
- MEMPOOL.req0.sz_wanted (uint64, count, Size requested)
- MEMPOOL.req0.sz_needed (uint64, count, Size allocated)
- MEMPOOL.req0.allocs (uint64, count, Allocations )
- MEMPOOL.req0.frees (uint64, count, Frees )
- MEMPOOL.req0.recycle (uint64, count, Recycled from pool)
- MEMPOOL.req0.timeout (uint64, count, Timed out from)
- MEMPOOL.req0.toosmall (uint64, count, Too small to)
- MEMPOOL.req0.surplus (uint64, count, Too many for)
- MEMPOOL.req0.randry (uint64, count, Pool ran dry)
- MEMPOOL.sess0.live (uint64, count, In use)
- MEMPOOL.sess0.pool (uint64, count, In Pool)
- MEMPOOL.sess0.sz_wanted (uint64, count, Size requested)
- MEMPOOL.sess0.sz_needed (uint64, count, Size allocated)
- MEMPOOL.sess0.allocs (uint64, count, Allocations )
- MEMPOOL.sess0.frees (uint64, count, Frees )
- MEMPOOL.sess0.recycle (uint64, count, Recycled from pool)
- MEMPOOL.sess0.timeout (uint64, count, Timed out from)
- MEMPOOL.sess0.toosmall (uint64, count, Too small to)
- MEMPOOL.sess0.surplus (uint64, count, Too many for)
- MEMPOOL.sess0.randry (uint64, count, Pool ran dry)
- MEMPOOL.req1.live (uint64, count, In use)
- MEMPOOL.req1.pool (uint64, count, In Pool)
- MEMPOOL.req1.sz_wanted (uint64, count, Size requested)
- MEMPOOL.req1.sz_needed (uint64, count, Size allocated)
- MEMPOOL.req1.allocs (uint64, count, Allocations )
- MEMPOOL.req1.frees (uint64, count, Frees )
- MEMPOOL.req1.recycle (uint64, count, Recycled from pool)
- MEMPOOL.req1.timeout (uint64, count, Timed out from)
- MEMPOOL.req1.toosmall (uint64, count, Too small to)
- MEMPOOL.req1.surplus (uint64, count, Too many for)
- MEMPOOL.req1.randry (uint64, count, Pool ran dry)
- MEMPOOL.sess1.live (uint64, count, In use)
- MEMPOOL.sess1.pool (uint64, count, In Pool)
- MEMPOOL.sess1.sz_wanted (uint64, count, Size requested)
- MEMPOOL.sess1.sz_needed (uint64, count, Size allocated)
- MEMPOOL.sess1.allocs (uint64, count, Allocations )
- MEMPOOL.sess1.frees (uint64, count, Frees )
- MEMPOOL.sess1.recycle (uint64, count, Recycled from pool)
- MEMPOOL.sess1.timeout (uint64, count, Timed out from)
- MEMPOOL.sess1.toosmall (uint64, count, Too small to)
- MEMPOOL.sess1.surplus (uint64, count, Too many for)
- MEMPOOL.sess1.randry (uint64, count, Pool ran dry)
- SMA.s0.c_req (uint64, count, Allocator requests)
- SMA.s0.c_fail (uint64, count, Allocator failures)
- SMA.s0.c_bytes (uint64, count, Bytes allocated)
- SMA.s0.c_freed (uint64, count, Bytes freed)
- SMA.s0.g_alloc (uint64, count, Allocations outstanding)
- SMA.s0.g_bytes (uint64, count, Bytes outstanding)
- SMA.s0.g_space (uint64, count, Bytes available)
- SMA.Transient.c_req (uint64, count, Allocator requests)
- SMA.Transient.c_fail (uint64, count, Allocator failures)
- SMA.Transient.c_bytes (uint64, count, Bytes allocated)
- SMA.Transient.c_freed (uint64, count, Bytes freed)
- SMA.Transient.g_alloc (uint64, count, Allocations outstanding)
- SMA.Transient.g_bytes (uint64, count, Bytes outstanding)
- SMA.Transient.g_space (uint64, count, Bytes available)
- VBE.default(127.0.0.1,,8080).vcls (uint64, count, VCL references)
- VBE.default(127.0.0.1,,8080).happy (uint64, count, Happy health probes)
- VBE.default(127.0.0.1,,8080).bereq_hdrbytes (uint64, count, Request header bytes)
- VBE.default(127.0.0.1,,8080).bereq_bodybytes (uint64, count, Request body bytes)
- VBE.default(127.0.0.1,,8080).beresp_hdrbytes (uint64, count, Response header bytes)
- VBE.default(127.0.0.1,,8080).beresp_bodybytes (uint64, count, Response body bytes)
- VBE.default(127.0.0.1,,8080).pipe_hdrbytes (uint64, count, Pipe request header)
- VBE.default(127.0.0.1,,8080).pipe_out (uint64, count, Piped bytes to)
- VBE.default(127.0.0.1,,8080).pipe_in (uint64, count, Piped bytes from)
- LCK.sms.creat (uint64, count, Created locks)
- LCK.sms.destroy (uint64, count, Destroyed locks)
- LCK.sms.locks (uint64, count, Lock Operations)
- LCK.smp.creat (uint64, count, Created locks)
- LCK.smp.destroy (uint64, count, Destroyed locks)
- LCK.smp.locks (uint64, count, Lock Operations)
- LCK.sma.creat (uint64, count, Created locks)
- LCK.sma.destroy (uint64, count, Destroyed locks)
- LCK.sma.locks (uint64, count, Lock Operations)
- LCK.smf.creat (uint64, count, Created locks)
- LCK.smf.destroy (uint64, count, Destroyed locks)
- LCK.smf.locks (uint64, count, Lock Operations)
- LCK.hsl.creat (uint64, count, Created locks)
- LCK.hsl.destroy (uint64, count, Destroyed locks)
- LCK.hsl.locks (uint64, count, Lock Operations)
- LCK.hcb.creat (uint64, count, Created locks)
- LCK.hcb.destroy (uint64, count, Destroyed locks)
- LCK.hcb.locks (uint64, count, Lock Operations)
- LCK.hcl.creat (uint64, count, Created locks)
- LCK.hcl.destroy (uint64, count, Destroyed locks)
- LCK.hcl.locks (uint64, count, Lock Operations)
- LCK.vcl.creat (uint64, count, Created locks)
- LCK.vcl.destroy (uint64, count, Destroyed locks)
- LCK.vcl.locks (uint64, count, Lock Operations)
- LCK.sessmem.creat (uint64, count, Created locks)
- LCK.sessmem.destroy (uint64, count, Destroyed locks)
- LCK.sessmem.locks (uint64, count, Lock Operations)
- LCK.sess.creat (uint64, count, Created locks)
- LCK.sess.destroy (uint64, count, Destroyed locks)
- LCK.sess.locks (uint64, count, Lock Operations)
- LCK.wstat.creat (uint64, count, Created locks)
- LCK.wstat.destroy (uint64, count, Destroyed locks)
- LCK.wstat.locks (uint64, count, Lock Operations)
- LCK.herder.creat (uint64, count, Created locks)
- LCK.herder.destroy (uint64, count, Destroyed locks)
- LCK.herder.locks (uint64, count, Lock Operations)
- LCK.wq.creat (uint64, count, Created locks)
- LCK.wq.destroy (uint64, count, Destroyed locks)
- LCK.wq.locks (uint64, count, Lock Operations)
- LCK.objhdr.creat (uint64, count, Created locks)
- LCK.objhdr.destroy (uint64, count, Destroyed locks)
- LCK.objhdr.locks (uint64, count, Lock Operations)
- LCK.exp.creat (uint64, count, Created locks)
- LCK.exp.destroy (uint64, count, Destroyed locks)
- LCK.exp.locks (uint64, count, Lock Operations)
- LCK.lru.creat (uint64, count, Created locks)
- LCK.lru.destroy (uint64, count, Destroyed locks)
- LCK.lru.locks (uint64, count, Lock Operations)
- LCK.cli.creat (uint64, count, Created locks)
- LCK.cli.destroy (uint64, count, Destroyed locks)
- LCK.cli.locks (uint64, count, Lock Operations)
- LCK.ban.creat (uint64, count, Created locks)
- LCK.ban.destroy (uint64, count, Destroyed locks)
- LCK.ban.locks (uint64, count, Lock Operations)
- LCK.vbp.creat (uint64, count, Created locks)
- LCK.vbp.destroy (uint64, count, Destroyed locks)
- LCK.vbp.locks (uint64, count, Lock Operations)
- LCK.backend.creat (uint64, count, Created locks)
- LCK.backend.destroy (uint64, count, Destroyed locks)
- LCK.backend.locks (uint64, count, Lock Operations)
- LCK.vcapace.creat (uint64, count, Created locks)
- LCK.vcapace.destroy (uint64, count, Destroyed locks)
- LCK.vcapace.locks (uint64, count, Lock Operations)
- LCK.nbusyobj.creat (uint64, count, Created locks)
- LCK.nbusyobj.destroy (uint64, count, Destroyed locks)
- LCK.nbusyobj.locks (uint64, count, Lock Operations)
- LCK.busyobj.creat (uint64, count, Created locks)
- LCK.busyobj.destroy (uint64, count, Destroyed locks)
- LCK.busyobj.locks (uint64, count, Lock Operations)
- LCK.mempool.creat (uint64, count, Created locks)
- LCK.mempool.destroy (uint64, count, Destroyed locks)
- LCK.mempool.locks (uint64, count, Lock Operations)
- LCK.vxid.creat (uint64, count, Created locks)
- LCK.vxid.destroy (uint64, count, Destroyed locks)
- LCK.vxid.locks (uint64, count, Lock Operations)
- LCK.pipestat.creat (uint64, count, Created locks)
- LCK.pipestat.destroy (uint64, count, Destroyed locks)
- LCK.pipestat.locks (uint64, count, Lock Operations)
- MAIN.uptime (uint64, count, Child process uptime)
- MAIN.sess_conn (uint64, count, Sessions accepted)
- MAIN.sess_drop (uint64, count, Sessions dropped)
- MAIN.sess_fail (uint64, count, Session accept failures)
- MAIN.sess_pipe_overflow (uint64, count, Session pipe overflow)
- MAIN.client_req_400 (uint64, count, Client requests received,)
- MAIN.client_req_411 (uint64, count, Client requests received,)
- MAIN.client_req_413 (uint64, count, Client requests received,)
- MAIN.client_req_417 (uint64, count, Client requests received,)
- MAIN.client_req (uint64, count, Good client requests)
- MAIN.cache_hit (uint64, count, Cache hits)
- MAIN.cache_hitpass (uint64, count, Cache hits for)
- MAIN.cache_miss (uint64, count, Cache misses)
- MAIN.backend_conn (uint64, count, Backend conn. success)
- MAIN.backend_unhealthy (uint64, count, Backend conn. not)
- MAIN.backend_busy (uint64, count, Backend conn. too)
- MAIN.backend_fail (uint64, count, Backend conn. failures)
- MAIN.backend_reuse (uint64, count, Backend conn. reuses)
- MAIN.backend_toolate (uint64, count, Backend conn. was)
- MAIN.backend_recycle (uint64, count, Backend conn. recycles)
- MAIN.backend_retry (uint64, count, Backend conn. retry)
- MAIN.fetch_head (uint64, count, Fetch no body)
- MAIN.fetch_length (uint64, count, Fetch with Length)
- MAIN.fetch_chunked (uint64, count, Fetch chunked)
- MAIN.fetch_eof (uint64, count, Fetch EOF)
- MAIN.fetch_bad (uint64, count, Fetch bad T- E)
- MAIN.fetch_close (uint64, count, Fetch wanted close)
- MAIN.fetch_oldhttp (uint64, count, Fetch pre HTTP/1.1)
- MAIN.fetch_zero (uint64, count, Fetch zero len)
- MAIN.fetch_1xx (uint64, count, Fetch no body)
- MAIN.fetch_204 (uint64, count, Fetch no body)
- MAIN.fetch_304 (uint64, count, Fetch no body)
- MAIN.fetch_failed (uint64, count, Fetch failed (all)
- MAIN.fetch_no_thread (uint64, count, Fetch failed (no)
- MAIN.pools (uint64, count, Number of thread)
- MAIN.threads (uint64, count, Total number of)
- MAIN.threads_limited (uint64, count, Threads hit max)
- MAIN.threads_created (uint64, count, Threads created)
- MAIN.threads_destroyed (uint64, count, Threads destroyed)
- MAIN.threads_failed (uint64, count, Thread creation failed)
- MAIN.thread_queue_len (uint64, count, Length of session)
- MAIN.busy_sleep (uint64, count, Number of requests)
- MAIN.busy_wakeup (uint64, count, Number of requests)
- MAIN.sess_queued (uint64, count, Sessions queued for)
- MAIN.sess_dropped (uint64, count, Sessions dropped for)
- MAIN.n_object (uint64, count, object structs made)
- MAIN.n_vampireobject (uint64, count, unresurrected objects)
- MAIN.n_objectcore (uint64, count, objectcore structs made)
- MAIN.n_objecthead (uint64, count, objecthead structs made)
- MAIN.n_waitinglist (uint64, count, waitinglist structs made)
- MAIN.n_backend (uint64, count, Number of backends)
- MAIN.n_expired (uint64, count, Number of expired)
- MAIN.n_lru_nuked (uint64, count, Number of LRU)
- MAIN.n_lru_moved (uint64, count, Number of LRU)
- MAIN.losthdr (uint64, count, HTTP header overflows)
- MAIN.s_sess (uint64, count, Total sessions seen)
- MAIN.s_req (uint64, count, Total requests seen)
- MAIN.s_pipe (uint64, count, Total pipe sessions)
- MAIN.s_pass (uint64, count, Total pass- ed requests)
- MAIN.s_fetch (uint64, count, Total backend fetches)
- MAIN.s_synth (uint64, count, Total synthetic responses)
- MAIN.s_req_hdrbytes (uint64, count, Request header bytes)
- MAIN.s_req_bodybytes (uint64, count, Request body bytes)
- MAIN.s_resp_hdrbytes (uint64, count, Response header bytes)
- MAIN.s_resp_bodybytes (uint64, count, Response body bytes)
- MAIN.s_pipe_hdrbytes (uint64, count, Pipe request header)
- MAIN.s_pipe_in (uint64, count, Piped bytes from)
- MAIN.s_pipe_out (uint64, count, Piped bytes to)
- MAIN.sess_closed (uint64, count, Session Closed)
- MAIN.sess_pipeline (uint64, count, Session Pipeline)
- MAIN.sess_readahead (uint64, count, Session Read Ahead)
- MAIN.sess_herd (uint64, count, Session herd)
- MAIN.shm_records (uint64, count, SHM records)
- MAIN.shm_writes (uint64, count, SHM writes)
- MAIN.shm_flushes (uint64, count, SHM flushes due)
- MAIN.shm_cont (uint64, count, SHM MTX contention)
- MAIN.shm_cycles (uint64, count, SHM cycles through)
- MAIN.sms_nreq (uint64, count, SMS allocator requests)
- MAIN.sms_nobj (uint64, count, SMS outstanding allocations)
- MAIN.sms_nbytes (uint64, count, SMS outstanding bytes)
- MAIN.sms_balloc (uint64, count, SMS bytes allocated)
- MAIN.sms_bfree (uint64, count, SMS bytes freed)
- MAIN.backend_req (uint64, count, Backend requests made)
- MAIN.n_vcl (uint64, count, Number of loaded)
- MAIN.n_vcl_avail (uint64, count, Number of VCLs)
- MAIN.n_vcl_discard (uint64, count, Number of discarded)
- MAIN.bans (uint64, count, Count of bans)
- MAIN.bans_completed (uint64, count, Number of bans)
- MAIN.bans_obj (uint64, count, Number of bans)
- MAIN.bans_req (uint64, count, Number of bans)
- MAIN.bans_added (uint64, count, Bans added)
- MAIN.bans_deleted (uint64, count, Bans deleted)
- MAIN.bans_tested (uint64, count, Bans tested against)
- MAIN.bans_obj_killed (uint64, count, Objects killed by)
- MAIN.bans_lurker_tested (uint64, count, Bans tested against)
- MAIN.bans_tests_tested (uint64, count, Ban tests tested)
- MAIN.bans_lurker_tests_tested (uint64, count, Ban tests tested)
- MAIN.bans_lurker_obj_killed (uint64, count, Objects killed by)
- MAIN.bans_dups (uint64, count, Bans superseded by)
- MAIN.bans_lurker_contention (uint64, count, Lurker gave way)
- MAIN.bans_persisted_bytes (uint64, count, Bytes used by)
- MAIN.bans_persisted_fragmentation (uint64, count, Extra bytes in)
- MAIN.n_purges (uint64, count, Number of purge)
- MAIN.n_obj_purged (uint64, count, Number of purged)
- MAIN.exp_mailed (uint64, count, Number of objects)
- MAIN.exp_received (uint64, count, Number of objects)
- MAIN.hcb_nolock (uint64, count, HCB Lookups without)
- MAIN.hcb_lock (uint64, count, HCB Lookups with)
- MAIN.hcb_insert (uint64, count, HCB Inserts)
- MAIN.esi_errors (uint64, count, ESI parse errors)
- MAIN.esi_warnings (uint64, count, ESI parse warnings)
- MAIN.vmods (uint64, count, Loaded VMODs)
- MAIN.n_gzip (uint64, count, Gzip operations)
- MAIN.n_gunzip (uint64, count, Gunzip operations)
- MAIN.vsm_free (uint64, count, Free VSM space)
- MAIN.vsm_used (uint64, count, Used VSM space)
- MAIN.vsm_cooling (uint64, count, Cooling VSM space)
- MAIN.vsm_overflow (uint64, count, Overflow VSM space)
- MAIN.vsm_overflowed (uint64, count, Overflowed VSM space)
- MGT.uptime (uint64, count, Management process uptime)
- MGT.child_start (uint64, count, Child process started)
- MGT.child_exit (uint64, count, Child process normal)
- MGT.child_stop (uint64, count, Child process unexpected)
- MGT.child_died (uint64, count, Child process died)
- MGT.child_dump (uint64, count, Child process core)
- MGT.child_panic (uint64, count, Child process panic)
- MEMPOOL.vbc.live (uint64, count, In use)
- MEMPOOL.vbc.pool (uint64, count, In Pool)
- MEMPOOL.vbc.sz_wanted (uint64, count, Size requested)
- MEMPOOL.vbc.sz_needed (uint64, count, Size allocated)
- MEMPOOL.vbc.allocs (uint64, count, Allocations )
- MEMPOOL.vbc.frees (uint64, count, Frees )
- MEMPOOL.vbc.recycle (uint64, count, Recycled from pool)
- MEMPOOL.vbc.timeout (uint64, count, Timed out from)
- MEMPOOL.vbc.toosmall (uint64, count, Too small to)
- MEMPOOL.vbc.surplus (uint64, count, Too many for)
- MEMPOOL.vbc.randry (uint64, count, Pool ran dry)
- MEMPOOL.busyobj.live (uint64, count, In use)
- MEMPOOL.busyobj.pool (uint64, count, In Pool)
- MEMPOOL.busyobj.sz_wanted (uint64, count, Size requested)
- MEMPOOL.busyobj.sz_needed (uint64, count, Size allocated)
- MEMPOOL.busyobj.allocs (uint64, count, Allocations )
- MEMPOOL.busyobj.frees (uint64, count, Frees )
- MEMPOOL.busyobj.recycle (uint64, count, Recycled from pool)
- MEMPOOL.busyobj.timeout (uint64, count, Timed out from)
- MEMPOOL.busyobj.toosmall (uint64, count, Too small to)
- MEMPOOL.busyobj.surplus (uint64, count, Too many for)
- MEMPOOL.busyobj.randry (uint64, count, Pool ran dry)
- MEMPOOL.req0.live (uint64, count, In use)
- MEMPOOL.req0.pool (uint64, count, In Pool)
- MEMPOOL.req0.sz_wanted (uint64, count, Size requested)
- MEMPOOL.req0.sz_needed (uint64, count, Size allocated)
- MEMPOOL.req0.allocs (uint64, count, Allocations )
- MEMPOOL.req0.frees (uint64, count, Frees )
- MEMPOOL.req0.recycle (uint64, count, Recycled from pool)
- MEMPOOL.req0.timeout (uint64, count, Timed out from)
- MEMPOOL.req0.toosmall (uint64, count, Too small to)
- MEMPOOL.req0.surplus (uint64, count, Too many for)
- MEMPOOL.req0.randry (uint64, count, Pool ran dry)
- MEMPOOL.sess0.live (uint64, count, In use)
- MEMPOOL.sess0.pool (uint64, count, In Pool)
- MEMPOOL.sess0.sz_wanted (uint64, count, Size requested)
- MEMPOOL.sess0.sz_needed (uint64, count, Size allocated)
- MEMPOOL.sess0.allocs (uint64, count, Allocations )
- MEMPOOL.sess0.frees (uint64, count, Frees )
- MEMPOOL.sess0.recycle (uint64, count, Recycled from pool)
- MEMPOOL.sess0.timeout (uint64, count, Timed out from)
- MEMPOOL.sess0.toosmall (uint64, count, Too small to)
- MEMPOOL.sess0.surplus (uint64, count, Too many for)
- MEMPOOL.sess0.randry (uint64, count, Pool ran dry)
- MEMPOOL.req1.live (uint64, count, In use)
- MEMPOOL.req1.pool (uint64, count, In Pool)
- MEMPOOL.req1.sz_wanted (uint64, count, Size requested)
- MEMPOOL.req1.sz_needed (uint64, count, Size allocated)
- MEMPOOL.req1.allocs (uint64, count, Allocations)
- MEMPOOL.req1.frees (uint64, count, Frees)
- MEMPOOL.req1.recycle (uint64, count, Recycled from pool)
- MEMPOOL.req1.timeout (uint64, count, Timed out from)
- MEMPOOL.req1.toosmall (uint64, count, Too small to)
- MEMPOOL.req1.surplus (uint64, count, Too many for)
- MEMPOOL.req1.randry (uint64, count, Pool ran dry)
- MEMPOOL.sess1.live (uint64, count, In use)
- MEMPOOL.sess1.pool (uint64, count, In Pool)
- MEMPOOL.sess1.sz_wanted (uint64, count, Size requested)
- MEMPOOL.sess1.sz_needed (uint64, count, Size allocated)
- MEMPOOL.sess1.allocs (uint64, count, Allocations)
- MEMPOOL.sess1.frees (uint64, count, Frees)
- MEMPOOL.sess1.recycle (uint64, count, Recycled from pool)
- MEMPOOL.sess1.timeout (uint64, count, Timed out from)
- MEMPOOL.sess1.toosmall (uint64, count, Too small to)
- MEMPOOL.sess1.surplus (uint64, count, Too many for)
- MEMPOOL.sess1.randry (uint64, count, Pool ran dry)
- SMA.s0.c_req (uint64, count, Allocator requests)
- SMA.s0.c_fail (uint64, count, Allocator failures)
- SMA.s0.c_bytes (uint64, count, Bytes allocated)
- SMA.s0.c_freed (uint64, count, Bytes freed)
- SMA.s0.g_alloc (uint64, count, Allocations outstanding)
- SMA.s0.g_bytes (uint64, count, Bytes outstanding)
- SMA.s0.g_space (uint64, count, Bytes available)
- SMA.Transient.c_req (uint64, count, Allocator requests)
- SMA.Transient.c_fail (uint64, count, Allocator failures)
- SMA.Transient.c_bytes (uint64, count, Bytes allocated)
- SMA.Transient.c_freed (uint64, count, Bytes freed)
- SMA.Transient.g_alloc (uint64, count, Allocations outstanding)
- SMA.Transient.g_bytes (uint64, count, Bytes outstanding)
- SMA.Transient.g_space (uint64, count, Bytes available)
- VBE.default(127.0.0.1,,8080).vcls (uint64, count, VCL references)
- VBE.default(127.0.0.1,,8080).happy (uint64, count, Happy health probes)
- VBE.default(127.0.0.1,,8080).bereq_hdrbytes (uint64, count, Request header bytes)
- VBE.default(127.0.0.1,,8080).bereq_bodybytes (uint64, count, Request body bytes)
- VBE.default(127.0.0.1,,8080).beresp_hdrbytes (uint64, count, Response header bytes)
- VBE.default(127.0.0.1,,8080).beresp_bodybytes (uint64, count, Response body bytes)
- VBE.default(127.0.0.1,,8080).pipe_hdrbytes (uint64, count, Pipe request header)
- VBE.default(127.0.0.1,,8080).pipe_out (uint64, count, Piped bytes to)
- VBE.default(127.0.0.1,,8080).pipe_in (uint64, count, Piped bytes from)
- LCK.sms.creat (uint64, count, Created locks)
- LCK.sms.destroy (uint64, count, Destroyed locks)
- LCK.sms.locks (uint64, count, Lock Operations)
- LCK.smp.creat (uint64, count, Created locks)
- LCK.smp.destroy (uint64, count, Destroyed locks)
- LCK.smp.locks (uint64, count, Lock Operations)
- LCK.sma.creat (uint64, count, Created locks)
- LCK.sma.destroy (uint64, count, Destroyed locks)
- LCK.sma.locks (uint64, count, Lock Operations)
- LCK.smf.creat (uint64, count, Created locks)
- LCK.smf.destroy (uint64, count, Destroyed locks)
- LCK.smf.locks (uint64, count, Lock Operations)
- LCK.hsl.creat (uint64, count, Created locks)
- LCK.hsl.destroy (uint64, count, Destroyed locks)
- LCK.hsl.locks (uint64, count, Lock Operations)
- LCK.hcb.creat (uint64, count, Created locks)
- LCK.hcb.destroy (uint64, count, Destroyed locks)
- LCK.hcb.locks (uint64, count, Lock Operations)
- LCK.hcl.creat (uint64, count, Created locks)
- LCK.hcl.destroy (uint64, count, Destroyed locks)
- LCK.hcl.locks (uint64, count, Lock Operations)
- LCK.vcl.creat (uint64, count, Created locks)
- LCK.vcl.destroy (uint64, count, Destroyed locks)
- LCK.vcl.locks (uint64, count, Lock Operations)
- LCK.sessmem.creat (uint64, count, Created locks)
- LCK.sessmem.destroy (uint64, count, Destroyed locks)
- LCK.sessmem.locks (uint64, count, Lock Operations)
- LCK.sess.creat (uint64, count, Created locks)
- LCK.sess.destroy (uint64, count, Destroyed locks)
- LCK.sess.locks (uint64, count, Lock Operations)
- LCK.wstat.creat (uint64, count, Created locks)
- LCK.wstat.destroy (uint64, count, Destroyed locks)
- LCK.wstat.locks (uint64, count, Lock Operations)
- LCK.herder.creat (uint64, count, Created locks)
- LCK.herder.destroy (uint64, count, Destroyed locks)
- LCK.herder.locks (uint64, count, Lock Operations)
- LCK.wq.creat (uint64, count, Created locks)
- LCK.wq.destroy (uint64, count, Destroyed locks)
- LCK.wq.locks (uint64, count, Lock Operations)
- LCK.objhdr.creat (uint64, count, Created locks)
- LCK.objhdr.destroy (uint64, count, Destroyed locks)
- LCK.objhdr.locks (uint64, count, Lock Operations)
- LCK.exp.creat (uint64, count, Created locks)
- LCK.exp.destroy (uint64, count, Destroyed locks)
- LCK.exp.locks (uint64, count, Lock Operations)
- LCK.lru.creat (uint64, count, Created locks)
- LCK.lru.destroy (uint64, count, Destroyed locks)
- LCK.lru.locks (uint64, count, Lock Operations)
- LCK.cli.creat (uint64, count, Created locks)
- LCK.cli.destroy (uint64, count, Destroyed locks)
- LCK.cli.locks (uint64, count, Lock Operations)
- LCK.ban.creat (uint64, count, Created locks)
- LCK.ban.destroy (uint64, count, Destroyed locks)
- LCK.ban.locks (uint64, count, Lock Operations)
- LCK.vbp.creat (uint64, count, Created locks)
- LCK.vbp.destroy (uint64, count, Destroyed locks)
- LCK.vbp.locks (uint64, count, Lock Operations)
- LCK.backend.creat (uint64, count, Created locks)
- LCK.backend.destroy (uint64, count, Destroyed locks)
- LCK.backend.locks (uint64, count, Lock Operations)
- LCK.vcapace.creat (uint64, count, Created locks)
- LCK.vcapace.destroy (uint64, count, Destroyed locks)
- LCK.vcapace.locks (uint64, count, Lock Operations)
- LCK.nbusyobj.creat (uint64, count, Created locks)
- LCK.nbusyobj.destroy (uint64, count, Destroyed locks)
- LCK.nbusyobj.locks (uint64, count, Lock Operations)
- LCK.busyobj.creat (uint64, count, Created locks)
- LCK.busyobj.destroy (uint64, count, Destroyed locks)
- LCK.busyobj.locks (uint64, count, Lock Operations)
- LCK.mempool.creat (uint64, count, Created locks)
- LCK.mempool.destroy (uint64, count, Destroyed locks)
- LCK.mempool.locks (uint64, count, Lock Operations)
- LCK.vxid.creat (uint64, count, Created locks)
- LCK.vxid.destroy (uint64, count, Destroyed locks)
- LCK.vxid.locks (uint64, count, Lock Operations)
- LCK.pipestat.creat (uint64, count, Created locks)
- LCK.pipestat.destroy (uint64, count, Destroyed locks)
- LCK.pipestat.locks (uint64, count, Lock Operations)
## Tags
As indicated above, the prefix of a varnish stat will be used as its 'section' tag, so the section tag may have one of the following values (a filtering example follows the list):
- section:
  - MAIN
  - MGT
  - MEMPOOL
  - SMA
  - VBE
  - LCK
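Since every metric carries the section tag, Telegraf's standard metric filtering can key off it. As a minimal sketch (assuming you only want the top-level MAIN counters), a `tagpass` filter on the plugin could look like this:

```toml
[[inputs.varnish]]
  ## Keep only metrics whose section tag is MAIN.
  [inputs.varnish.tagpass]
    section = ["MAIN"]
```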
## Permissions
It's important to note that this plugin references varnishstat, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set file ACLs, or use sudo.
**Group membership (Recommended)**:
```bash
$ groups telegraf
telegraf : telegraf
$ usermod -a -G varnish telegraf

$ groups telegraf
telegraf : telegraf varnish
```
**Extended filesystem ACLs**:
```bash
$ getfacl /var/lib/varnish/<hostname>/_.vsm
# file: var/lib/varnish/<hostname>/_.vsm
# owner: root
# group: root
user::rw-
user:telegraf:r--
group::r--
mask::r--
other::---
```
**Sudo privileges**:
If you use this method, you will need the following in your telegraf config:
```toml
[[inputs.varnish]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following line:
Cmnd_Alias VARNISHSTAT = /usr/bin/varnishstat
telegraf  ALL=(ALL) NOPASSWD: VARNISHSTAT
Defaults!VARNISHSTAT !logfile, !syslog, !pam_session
```
Please use the solution you see as most appropriate.
## Example Output
```shell
telegraf --config etc/telegraf.conf --input-filter varnish --test
* Plugin: varnish, Collection 1
> varnish,host=rpercy-VirtualBox,section=MAIN cache_hit=0i,cache_miss=0i,uptime=8416i 1462765437090957980
```
# Common vSphere Performance Metrics
The set of performance metrics in vSphere is open ended. Metrics may be added or removed in new releases
and the set of available metrics may vary depending on hardware, as well as what plugins and add-on products
are installed. Therefore, providing a definitive list of available metrics is difficult. The metrics listed
below are the most commonly available as of vSphere 6.5.
For a complete list of metrics available from vSphere and the units they measure in, please reference the [VMWare vCenter Converter API Reference](https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.PerformanceManager.html).
To list the exact set in your environment, please use the govc tool available [here](https://github.com/vmware/govmomi/tree/master/govc)
For example, to obtain the set of metrics for a VM, you may use the following command:
```shell
govc metric.ls vm/*
```
## Virtual Machine Metrics
```metrics
cpu.demandEntitlementRatio.latest
cpu.usage.average
cpu.ready.summation
virtualDisk.read.average
```
## Host System Metrics
```metrics
cpu.corecount.contention.average
cpu.usage.average
cpu.reservedCapacity.average
vmop.numXVMotion.latest
```
## Cluster Metrics
```metrics
cpu.corecount.contention.average
cpu.usage.average
cpu.reservedCapacity.average
vmop.numXVMotion.latest
```
## Datastore Metrics
```metrics
datastore.numberReadAveraged.average
datastore.throughput.contention.average
datastore.throughput.usage.average
```
NOTE: To disable collection of a specific resource type, simply exclude all of its metrics using the corresponding `XX_metric_exclude` option.
For example, to disable collection of VMs, add this:
```toml
vm_metric_exclude = [ "*" ]
```
A vCenter administrator can change this setting (consult the relevant VMware KB article for details).

Any modification should be reflected in this plugin by adjusting the `max_query_objects` parameter.
```toml
## number of objects to retrieve per query for realtime resources (vms and hosts)
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
# max_query_objects = 256
```

The default of 1 (no concurrency) should be sufficient for most configurations.
To set up concurrency, modify the `collect_concurrency` and `discover_concurrency` parameters.
```toml
## number of go routines to use for collection and discovery of objects and metrics
# collect_concurrency = 1
# discover_concurrency = 1
```
### Inventory Paths
Resources to be monitored can be selected using Inventory Paths. This treats the vSphere inventory as a tree structure similar
to a file system. A vSphere inventory has a structure similar to this:
```bash
<root>
+-DC0                       # Virtual datacenter
  +-datastore               # Datastore folder (created by system)
```
#### Using Inventory Paths
Using familiar UNIX-style paths, one could select e.g. VM2 with the path `/DC0/vm/VM2`.

Often, we want to select a group of resources, such as all the VMs in a folder. We could use the path `/DC0/vm/Folder1/*` for that.

Another possibility is to select objects using a partial name. Finally, due to the arbitrary nesting of the folder structure, we need a "recursive wildcard" for traversing multiple folders. We use the "**" symbol for that. If we want to look for a VM with a name starting with "hadoop" in any folder, we could use the following path: `/DC0/vm/**/hadoop*`.
#### Multiple paths to VMs
As we can see from the example tree above, VMs appear both in their own folder under the datacenter and under the hosts. This is useful when you want to select VMs on a specific host. For example, `/DC0/host/Cluster1/Host1/hadoop*` selects all VMs with a name starting with "hadoop" that are running on Host1.

We can extend this to the cluster level: `/DC0/host/Cluster1/*/hadoop*` selects any VM matching "hadoop*" on any host in Cluster1 (see the configuration sketch below).
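As a hedged sketch of how such a path plugs into the plugin configuration (the `vm_include` parameter assumes a plugin version with inventory-path filtering; the vCenter address and credentials are placeholders):

```toml
[[inputs.vsphere]]
  vcenters = ["https://vcenter.example.com/sdk"]  # placeholder vCenter endpoint
  username = "user@vsphere.local"                 # placeholder credentials
  password = "secret"

  ## Collect only VMs named hadoop* running on any host in Cluster1.
  vm_include = ["/DC0/host/Cluster1/*/hadoop*"]
```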
## Performance Considerations
### Realtime vs. historical metrics
vCenter keeps two different kinds of metrics, known as realtime and historical metrics.
* Realtime metrics: Available at a 20 second granularity. These metrics are stored in memory and are very fast and cheap to query. Our tests have shown that a complete set of realtime metrics for 7000 virtual machines can be obtained in less than 20 seconds. Realtime metrics are only available on **ESXi hosts** and **virtual machine** resources. Realtime metrics are only stored for 1 hour in vCenter.
* Historical metrics: Available at (default) 5-minute, 30-minute, 2-hour, and 24-hour rollup levels. The vSphere Telegraf plugin only uses the most granular rollup, which defaults to 5 minutes but can be changed in vCenter to other interval durations. These metrics are stored in the vCenter database and can be expensive and slow to query. Historical metrics are the only type of metrics available for **clusters**, **datastores** and **datacenters**.
For more information, refer to the vSphere documentation here: <https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.wssdk.pg.doc_50%2FPG_Ch16_Performance.18.2.html>
This distinction has an impact on how Telegraf collects metrics. A single instance of an input plugin can have one and only one collection interval, which means that you typically set the collection interval based on the most frequently collected metric. Let's assume you set the collection interval to 1 minute. All realtime metrics will be collected every minute. Since the historical metrics are only available on a 5 minute interval, the vSphere Telegraf plugin automatically skips four out of five collection cycles for these metrics. This works fine in many cases. Problems arise when the collection of historical metrics takes longer than the collection interval, which will cause error messages to appear in the Telegraf logs.
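One practical way to reconcile the two cadences is to run two instances of the plugin: a fast one restricted to realtime resources and a slow one restricted to historical resources. A hedged sketch, using the `XX_metric_exclude` pattern described above and placeholder credentials:

```toml
## Realtime resources (ESXi hosts and VMs), collected every minute.
[[inputs.vsphere]]
  interval = "60s"
  vcenters = ["https://vcenter.example.com/sdk"]  # placeholder
  username = "user@vsphere.local"
  password = "secret"
  ## Exclude the historical-only resource types from this instance.
  cluster_metric_exclude = ["*"]
  datastore_metric_exclude = ["*"]
  datacenter_metric_exclude = ["*"]

## Historical resources (clusters, datastores, datacenters), every 5 minutes.
[[inputs.vsphere]]
  interval = "300s"
  vcenters = ["https://vcenter.example.com/sdk"]
  username = "user@vsphere.local"
  password = "secret"
  ## Exclude the realtime resource types from this instance.
  host_metric_exclude = ["*"]
  vm_metric_exclude = ["*"]
```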
Cluster metrics are handled a bit differently by vCenter. They are aggregated from ESXi and virtual machine metrics, and a query for them can exceed vCenter's `vpxd.stats.maxQueryMetrics` limit, producing errors like:
```2018-11-02T13:37:11Z E! Error in plugin [inputs.vsphere]: ServerFaultCode: This operation is restricted by the administrator - 'vpxd.stats.maxQueryMetrics'. Contact your system administrator```
There are two ways of addressing this:
* Ask your vCenter administrator to set ```config.vpxd.stats.maxQueryMetrics``` to a number that's higher than the total number of virtual machines managed by a vCenter instance.
* Exclude the cluster metrics and use either the basicstats aggregator to calculate sums and averages per cluster or use queries in the visualization tool to obtain the same result (a sketch of the aggregator approach follows this list).
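A hedged sketch of the second option, dropping cluster metrics in the input and re-aggregating with the basicstats aggregator (the period and stats lists are illustrative):

```toml
[[inputs.vsphere]]
  vcenters = ["https://vcenter.example.com/sdk"]  # placeholder
  username = "user@vsphere.local"
  password = "secret"
  ## Skip the expensive cluster queries entirely.
  cluster_metric_exclude = ["*"]

## Aggregate the remaining metrics into per-tag-set means and sums.
[[aggregators.basicstats]]
  period = "5m"
  stats = ["mean", "sum"]
  drop_original = false
```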
### Concurrency settings
The vSphere plugin allows you to specify two concurrency settings:
* `collect_concurrency`: The maximum number of simultaneous queries for performance metrics allowed per resource type.
* `discover_concurrency`: The maximum number of simultaneous queries for resource discovery allowed.
While a higher level of concurrency typically has a positive impact on performance, raising these values too high can put unnecessary load on the vCenter server.
### Configuring historical_interval setting
When the vSphere plugin queries vCenter for historical statistics, it queries for statistics that exist at a specific interval. The default historical interval duration is 5 minutes, but if this interval has been changed then you must override the default query interval in the vSphere plugin.
* `historical_interval`: The interval of the most granular statistics configured in vSphere, represented in seconds.
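For example, if the most granular rollup in vCenter has been lowered from 5 minutes to 1 minute, the override would look like this minimal sketch:

```toml
[[inputs.vsphere]]
  ## Match the most granular historical rollup configured in vCenter.
  historical_interval = "60s"
```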
## Measurements & Fields
* Cluster Stats
* Cluster services: CPU, memory, failover
* CPU: total, usage
* Memory: consumed, total, vmmemctl
* VM operations: # changes, clone, create, deploy, destroy, power, reboot, reconfigure, register, reset, shutdown, standby, vmotion
* Host Stats:
* CPU: total, usage, cost, mhz
* Datastore: iops, latency, read/write bytes, # reads/writes
* Disk: commands, latency, kernel reads/writes, # reads/writes, queues
* Memory: total, usage, active, latency, swap, shared, vmmemctl
* Network: broadcast, bytes, dropped, errors, multicast, packets, usage
* Power: energy, usage, capacity
* Res CPU: active, max, running
* Storage Adapter: commands, latency, # reads/writes
* Storage Path: commands, latency, # reads/writes
* System Resources: cpu active, cpu max, cpu running, cpu usage, mem allocated, mem consumed, mem shared, swap
* System: uptime
* Flash Module: active VMDKs
* VM Stats:
* CPU: demand, usage, readiness, cost, mhz
* Datastore: latency, # reads/writes
* Disk: commands, latency, # reads/writes, provisioned, usage
* Memory: granted, usage, active, swap, vmmemctl
* Network: broadcast, bytes, dropped, multicast, packets, usage
* Power: energy, usage
* Res CPU: active, max, running
* System: operating system uptime, uptime
* Virtual Disk: seeks, # reads/writes, latency, load
* Datastore stats:
* Disk: Capacity, provisioned, used
For a detailed list of commonly available metrics, please refer to [METRICS.md](METRICS.md)
## Tags
* all metrics
* vcenter (vcenter url)
* all host metrics
* cluster (vcenter cluster)
* all vm metrics
* cluster (vcenter cluster)
* esxhost (name of ESXi host)
* guest (guest operating system id)
* cpu stats for Host and VM
* cpu (cpu core - not all CPU fields will have this tag)
* datastore stats for Host and VM
* datastore (id of datastore)
* disk stats for Host and VM
* disk (name of disk)
* disk.used.capacity for Datastore
* disk (type of disk)
* net stats for Host and VM
* interface (name of network interface)
* storageAdapter stats for Host
* adapter (name of storage adapter)
* storagePath stats for Host
* path (id of storage path)
* sys.resource* stats for Host
* resource (resource type)
* vflashModule stats for Host
* module (name of flash module)
* virtualDisk stats for VM
* disk (name of virtual disk)
## Sample output
```shell
vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 run_summation=2608i,ready_summation=129i,usage_average=5.01,used_summation=2134i,demand_average=326i 1535660299000000000
vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 bytesRx_average=321i,bytesTx_average=335i 1535660299000000000
vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 write_average=144i,read_average=4i 1535660299000000000