fix: markdown: resolve all markdown issues with a-c (#10169)

This commit is contained in:
Joshua Powers 2021-11-24 11:55:55 -07:00 committed by GitHub
parent 8e85a67ee1
commit 6fa29f2966
30 changed files with 384 additions and 358 deletions

View File

@ -2,7 +2,7 @@
This plugin gathers queues, topics & subscribers metrics using the ActiveMQ Console API.
### Configuration:
## Configuration
```toml
# Description
@ -33,7 +33,7 @@ This plugin gather queues, topics & subscribers metrics using ActiveMQ Console A
# insecure_skip_verify = false
```
### Metrics
## Metrics
Every effort was made to preserve the names based on the XML response from the ActiveMQ Console API.
@ -47,7 +47,7 @@ Every effort was made to preserve the names based on the XML response from the A
- consumer_count
- enqueue_count
- dequeue_count
+ activemq_topics
- activemq_topics
- tags:
- name
- source
@ -76,7 +76,7 @@ Every effort was made to preserve the names based on the XML response from the A
### Example Output
```
```shell
activemq_queues,name=sandra,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000
activemq_queues,name=Test,host=88284b2fe51b,source=localhost,port=8161 dequeue_count=0i,size=0i,consumer_count=0i,enqueue_count=0i 1492610703000000000
activemq_topics,name=ActiveMQ.Advisory.MasterBroker\ ,host=88284b2fe51b,source=localhost,port=8161 size=0i,consumer_count=0i,enqueue_count=1i,dequeue_count=0i 1492610703000000000

File diff suppressed because one or more lines are too long

View File

@ -1,12 +1,14 @@
# Alibaba (Aliyun) CloudMonitor Service Statistics Input Plugin
Hereafter we use `Aliyun` instead of `Alibaba`, as it is the default naming across the web console and docs.
This plugin will pull Metric Statistics from Aliyun CMS.
### Aliyun Authentication
## Aliyun Authentication
This plugin uses an [AccessKey](https://www.alibabacloud.com/help/doc-detail/53045.htm?spm=a2c63.p38356.b99.127.5cba21fdt5MJKr&parentId=28572) credential for Authentication with the Aliyun OpenAPI endpoint.
The plugin will attempt to authenticate in the following order (a minimal credential sketch follows the list):
1. Ram RoleARN credential if `access_key_id`, `access_key_secret`, `role_arn`, `role_session_name` is specified
2. AccessKey STS token credential if `access_key_id`, `access_key_secret`, `access_key_sts_token` is specified
3. AccessKey credential if `access_key_id`, `access_key_secret` is specified
@ -15,7 +17,7 @@ In the following order the plugin will attempt to authenticate.
6. Environment variables credential
7. Instance metadata credential
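For example, a minimal sketch of option 2 (AccessKey STS token) might look like this, using the `access_key_*` options from the sample configuration below; all values are placeholders:

```toml
[[inputs.aliyuncms]]
  ## AccessKey STS token credential (option 2): all three fields set
  access_key_id = "<access_key_id>"
  access_key_secret = "<access_key_secret>"
  access_key_sts_token = "<sts_token>"
```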
### Configuration:
## Configuration
```toml
## Aliyun Credentials
@ -27,7 +29,7 @@ In the following order the plugin will attempt to authenticate.
## 5) RSA keypair credential
## 6) Environment variables credential
## 7) Instance metadata credential
# access_key_id = ""
# access_key_secret = ""
# access_key_sts_token = ""
@ -38,7 +40,7 @@ In the following order the plugin will attempt to authenticate.
# role_name = ""
## Specify the ali cloud region list to be queried for metrics and objects discovery
## If not set, all supported regions (see below) would be covered; this can put significant load on the API, so the recommendation here
## If not set, all supported regions (see below) would be covered; this can put significant load on the API, so the recommendation here
## is to limit the list as much as possible. Allowed values: https://www.alibabacloud.com/help/zh/doc-detail/40654.htm
## Default supported regions are:
## 21 items: cn-qingdao,cn-beijing,cn-zhangjiakou,cn-huhehaote,cn-hangzhou,cn-shanghai,cn-shenzhen,
@ -46,14 +48,14 @@ In the following order the plugin will attempt to authenticate.
## ap-south-1,ap-northeast-1,us-west-1,us-east-1,eu-central-1,eu-west-1,me-east-1
##
## From the discovery perspective it sets the scope for object discovery; the discovered info can be used to enrich
## the metrics with object attributes/tags. Discovery is not supported for all projects (if not supported, then
## the metrics with object attributes/tags. Discovery is not supported for all projects (if not supported, then
## it will be reported on start - for example for the 'acs_cdn' project:
## 'E! [inputs.aliyuncms] Discovery tool is not activated: no discovery support for project "acs_cdn"' )
## Currently, discovery supported for the following projects:
## - acs_ecs_dashboard
## - acs_rds_dashboard
## - acs_slb_dashboard
## - acs_vpc_eip
## - acs_vpc_eip
regions = ["cn-hongkong"]
# The minimum period for AliyunCMS metrics is 1 minute (60s). However not all
@ -66,41 +68,41 @@ In the following order the plugin will attempt to authenticate.
#
## Requested AliyunCMS aggregation Period (required - must be a multiple of 60s)
period = "5m"
## Collection Delay (required - must account for metrics availability via AliyunCMS API)
delay = "1m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = "5m"
## Metric Statistic Project (required)
project = "acs_slb_dashboard"
## Maximum requests per second, default value is 200
ratelimit = 200
## How often the discovery API call executed (default 1m)
#discovery_interval = "1m"
## Metrics to Pull (Required)
[[inputs.aliyuncms.metrics]]
## Metrics names to be requested,
## Metrics names to be requested,
## described here (per project): https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
names = ["InstanceActiveConnection", "InstanceNewConnection"]
## Dimension filters for Metric (these are optional).
## This allows you to get an additional metric dimension. If a dimension is not specified it can be returned or
## the data can be aggregated - it depends on the particular metric; you can find details here: https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
##
## Note that by default the dimension filter includes the list of discovered objects in scope (if discovery is enabled).
## Values specified here would be added into the list of discovered objects.
## You can specify either single dimension:
## You can specify either single dimension:
#dimensions = '{"instanceId": "p-example"}'
## Or you can specify several dimensions at once:
#dimensions = '[{"instanceId": "p-example"},{"instanceId": "q-example"}]'
## Enrichment tags, can be added from discovery (if supported)
## Notation is <measurement_tag_name>:<JMES query path (https://jmespath.org/tutorial.html)>
## To figure out which fields are available, consult the Describe<ObjectType> API per project.
@ -111,14 +113,14 @@ In the following order the plugin will attempt to authenticate.
# "cluster_owner:Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]"
# ]
## The following tags are added by default: regionId (if discovery enabled), userId, instanceId.
## Allow metrics without discovery data, if discovery is enabled. If set to true, then metrics without discovery
## data would be emitted, otherwise dropped. This can be of help when debugging dimension filters, or partial coverage
## of discovery scope vs monitoring scope
## data would be emitted, otherwise dropped. This can be of help when debugging dimension filters, or partial coverage
## of discovery scope vs monitoring scope
#allow_dps_without_discovery = false
```
#### Requirements and Terminology
### Requirements and Terminology
Plugin Configuration utilizes [preset metric items references](https://www.alibabacloud.com/help/doc-detail/28619.htm?spm=a2c63.p38356.a3.2.389f233d0kPJn0)
@ -128,7 +130,7 @@ Plugin Configuration utilizes [preset metric items references](https://www.aliba
- `names` must be preset metric names
- `dimensions` must be preset dimension values
### Measurements & Fields:
## Measurements & Fields
Each Aliyun CMS Project monitored records a measurement with fields for each available Metric Statistic
Project and Metrics are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case)
@ -139,9 +141,9 @@ Project and Metrics are represented in [snake case](https://en.wikipedia.org/wik
- {metric}_maximum (metric Maximum value)
- {metric}_value (metric Value value)
### Example Output:
## Example Output
```
```shell
$ ./telegraf --config telegraf.conf --input-filter aliyuncms --test
> aliyuncms_acs_slb_dashboard,instanceId=p-example,regionId=cn-hangzhou,userId=1234567890 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875
```
```

View File

@ -2,7 +2,7 @@
This plugin uses a query on the [`rocm-smi`](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools) binary to pull GPU stats including memory and GPU usage, temperatures and others.
### Configuration
## Configuration
```toml
# Pulls statistics from AMD GPUs attached to the host
@ -14,7 +14,8 @@ This plugin uses a query on the [`rocm-smi`](https://github.com/RadeonOpenComput
# timeout = "5s"
```
### Metrics
## Metrics
- measurement: `amd_rocm_smi`
- tags
- `name` (entry name assigned by rocm-smi executable)
@ -36,21 +37,28 @@ This plugin uses a query on the [`rocm-smi`](https://github.com/RadeonOpenComput
- `clocks_current_memory` (integer, Mhz)
- `power_draw` (float, Watt)
### Troubleshooting
## Troubleshooting
Check the full output by running the `rocm-smi` binary manually.
Linux:
```sh
rocm-smi rocm-smi -o -l -m -M -g -c -t -u -i -f -p -P -s -S -v --showreplaycount --showpids --showdriverversion --showmemvendor --showfwinfo --showproductname --showserial --showuniqueid --showbus --showpendingpages --showpagesinfo --showretiredpages --showunreservablepages --showmemuse --showvoltage --showtopo --showtopoweight --showtopohops --showtopotype --showtoponuma --showmeminfo all --json
```
Please include the output of this command, together with the ROCm version, if opening a GitHub issue.
### Example Output
```
```shell
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=28,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572551000000000
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=30,temperature_sensor_memory=91,utilization_gpu=0i 1630572701000000000
amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572749000000000
```
### Limitations and notices
Please note that this plugin has been developed and tested on a limited number of versions and a small set of GPUs. Currently the latest ROCm version tested is 4.3.0.
Note that depending on the device and driver versions, the amount of information provided by `rocm-smi` can vary, so some fields may start/stop appearing in the metrics upon updates.
The `rocm-smi` JSON output is not perfectly homogeneous and may change in the future, hence parsing and unmarshaling can start failing upon updating ROCm.

View File

@ -7,8 +7,9 @@ Metrics are read from a topic exchange using the configured queue and binding_ke
Message payload should be formatted in one of the [Telegraf Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
For an introduction to AMQP see:
- https://www.rabbitmq.com/tutorials/amqp-concepts.html
- https://www.rabbitmq.com/getstarted.html
- [amqp - concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)
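As a rough sketch of how the configured queue and binding_key fit together (assuming the plugin's usual `brokers` and `data_format` options; values are placeholders):

```toml
[[inputs.amqp_consumer]]
  ## Broker to consume from (placeholder URL)
  brokers = ["amqp://localhost:5672/influxdb"]
  ## Queue bound to the topic exchange with this binding key
  queue = "telegraf"
  binding_key = "#"
  ## Parse payloads with one of the Telegraf data formats
  data_format = "influx"
```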
The following defaults are known to work with RabbitMQ:

View File

@ -4,7 +4,7 @@ The Apache plugin collects server performance information using the [`mod_status
Typically, the `mod_status` module is configured to expose a page at the `/server-status?auto` location of the Apache server. The [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) option must be enabled in order to collect all available fields. For information about how to configure your server reference the [module documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).
### Configuration:
## Configuration
```toml
# Read Apache status information (mod_status)
@ -29,7 +29,7 @@ Typically, the `mod_status` module is configured to expose a page at the `/serve
# insecure_skip_verify = false
```
### Measurements & Fields:
## Measurements & Fields
- apache
- BusyWorkers (float)
@ -71,14 +71,14 @@ The following fields are collected from the `Scoreboard`, and represent the numb
- scboard_starting (float)
- scboard_waiting (float)
### Tags:
## Tags
- All measurements have the following tags:
- port
- server
- port
- server
### Example Output:
## Example Output
```
```shell
apache,port=80,server=debian-stretch-apache BusyWorkers=1,BytesPerReq=0,BytesPerSec=0,CPUChildrenSystem=0,CPUChildrenUser=0,CPULoad=0.00995025,CPUSystem=0.01,CPUUser=0.01,ConnsAsyncClosing=0,ConnsAsyncKeepAlive=0,ConnsAsyncWriting=0,ConnsTotal=0,IdleWorkers=49,Load1=0.01,Load15=0,Load5=0,ParentServerConfigGeneration=3,ParentServerMPMGeneration=2,ReqPerSec=0.00497512,ServerUptimeSeconds=201,TotalAccesses=1,TotalkBytes=0,Uptime=201,scboard_closing=0,scboard_dnslookup=0,scboard_finishing=0,scboard_idle_cleanup=0,scboard_keepalive=0,scboard_logging=0,scboard_open=100,scboard_reading=0,scboard_sending=1,scboard_starting=0,scboard_waiting=49 1502489900000000000
```

View File

@ -2,11 +2,11 @@
This plugin reads data from an apcupsd daemon over its NIS network protocol.
### Requirements
## Requirements
apcupsd should be installed and its daemon should be running.
### Configuration
## Configuration
```toml
[[inputs.apcupsd]]
@ -18,7 +18,7 @@ apcupsd should be installed and it's daemon should be running.
timeout = "5s"
```
### Metrics
## Metrics
- apcupsd
- tags:
@ -43,11 +43,9 @@ apcupsd should be installed and it's daemon should be running.
- nominal_power
- firmware
## Example output
### Example output
```
```shell
apcupsd,serial=AS1231515,status=ONLINE,ups_name=name1 time_on_battery=0,load_percent=9.7,time_left_minutes=98,output_voltage=230.4,internal_temp=32.4,battery_voltage=27.4,input_frequency=50.2,input_voltage=230.4,battery_charge_percent=100,status_flags=8i 1490035922000000000
```

File diff suppressed because one or more lines are too long

View File

@ -2,7 +2,7 @@
This plugin gathers sizes of Azure Storage Queues.
### Configuration:
## Configuration
```toml
# Description
@ -12,12 +12,13 @@ This plugin gathers sizes of Azure Storage Queues.
## Required Azure Storage Account access key
account_key = "storageaccountaccesskey"
## Set to false to disable peeking age of oldest message (executes faster)
# peek_oldest_message_age = true
```
### Metrics
## Metrics
- azure_storage_queues
- tags:
- queue
@ -26,10 +27,10 @@ This plugin gathers sizes of Azure Storage Queues.
- size (integer, count)
- oldest_message_age_ns (integer, nanoseconds) Age of message at the head of the queue.
Requires `peek_oldest_message_age` to be configured to `true`.
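For example, enabling that field might look like the following sketch, reusing the options from the configuration above (the plugin section name and `account_name` option are assumed here; account values are placeholders):

```toml
[[inputs.azure_storage_queue]]
  account_name = "mystorageaccount"
  account_key = "storageaccountaccesskey"
  ## Enable the oldest_message_age_ns field (executes slower)
  peek_oldest_message_age = true
```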
### Example Output
```
## Example Output
```shell
azure_storage_queues,queue=myqueue,account=mystorageaccount oldest_message_age=799714900i,size=7i 1565970503000000000
azure_storage_queues,queue=myemptyqueue,account=mystorageaccount size=0i 1565970502000000000
```
```

View File

@ -2,7 +2,7 @@
Get bcache stat from stats_total directory and dirty_data file.
# Measurements
## Measurements
Meta:
@ -20,9 +20,9 @@ Measurement names:
- cache_misses
- cache_readaheads
### Description
## Description
```
```text
dirty_data
Amount of dirty data for this backing device in the cache. Continuously
updated unlike the cache set's version, but may be slightly off.
@ -51,7 +51,7 @@ cache_readaheads
Count of times readahead occurred.
```
# Example output
## Example
Using this configuration:
@ -69,13 +69,13 @@ Using this configuration:
When run with:
```
```shell
./telegraf --config telegraf.conf --input-filter bcache --test
```
It produces:
```
```shell
* Plugin: bcache, Collection 1
> [backing_dev="md10" bcache_dev="bcache0"] bcache_dirty_data value=11639194
> [backing_dev="md10" bcache_dev="bcache0"] bcache_bypassed value=5167704440832

View File

@ -2,7 +2,7 @@
The `beanstalkd` plugin collects server stats as well as tube stats (reported by `stats` and `stats-tube` commands respectively).
### Configuration:
## Configuration
```toml
[[inputs.beanstalkd]]
@ -14,11 +14,12 @@ The `beanstalkd` plugin collects server stats as well as tube stats (reported by
tubes = ["notifications"]
```
### Metrics:
## Metrics
Please see the [Beanstalk Protocol doc](https://raw.githubusercontent.com/kr/beanstalkd/master/doc/protocol.txt) for a detailed explanation of the `stats` and `stats-tube` command output.
`beanstalkd_overview` statistical information about the system as a whole
- fields
- cmd_delete
- cmd_pause_tube
@ -38,6 +39,7 @@ Please see the [Beanstalk Protocol doc](https://raw.githubusercontent.com/kr/bea
- server (address taken from config)
`beanstalkd_tube` statistical information about the specified tube
- fields
- binlog_current_index
- binlog_max_size
@ -90,8 +92,9 @@ Please see the [Beanstalk Protocol doc](https://raw.githubusercontent.com/kr/bea
- server (address taken from config)
- version
### Example Output:
```
## Example
```shell
beanstalkd_overview,host=server.local,hostname=a2ab22ed12e0,id=232485800aa11b24,server=localhost:11300,version=1.10 cmd_stats_tube=29482i,current_jobs_delayed=0i,current_jobs_urgent=6i,cmd_kick=0i,cmd_stats=7378i,cmd_stats_job=0i,current_waiting=0i,max_job_size=65535i,pid=6i,cmd_bury=0i,cmd_reserve_with_timeout=0i,cmd_touch=0i,current_connections=1i,current_jobs_ready=6i,current_producers=0i,cmd_delete=0i,cmd_list_tubes=7369i,cmd_peek_ready=0i,cmd_put=6i,cmd_use=3i,cmd_watch=0i,current_jobs_reserved=0i,rusage_stime=6.07,cmd_list_tubes_watched=0i,cmd_pause_tube=0i,total_jobs=6i,binlog_records_migrated=0i,cmd_list_tube_used=0i,cmd_peek_delayed=0i,cmd_release=0i,current_jobs_buried=0i,job_timeouts=0i,binlog_current_index=0i,binlog_max_size=10485760i,total_connections=7378i,cmd_peek_buried=0i,cmd_reserve=0i,current_tubes=4i,binlog_records_written=0i,cmd_peek=0i,rusage_utime=1.13,uptime=7099i,binlog_oldest_index=0i,current_workers=0i,cmd_ignore=0i 1528801650000000000
beanstalkd_tube,host=server.local,name=notifications,server=localhost:11300 pause_time_left=0i,current_jobs_buried=0i,current_jobs_delayed=0i,current_jobs_reserved=0i,current_using=0i,current_waiting=0i,pause=0i,total_jobs=3i,cmd_delete=0i,cmd_pause_tube=0i,current_jobs_ready=3i,current_jobs_urgent=3i,current_watching=0i 1528801650000000000

View File

@ -1,7 +1,10 @@
# Beat Input Plugin
The Beat plugin will collect metrics from the given Beat instances. It is
known to work with Filebeat and Kafkabeat.
### Configuration:
## Configuration
```toml
## An URL from which to read Beat-formatted JSON
## Default is "http://127.0.0.1:5066".
@ -35,9 +38,11 @@ known to work with Filebeat and Kafkabeat.
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
```
### Measurements & Fields
## Measurements & Fields
- **beat**
* Fields:
- Fields:
- cpu_system_ticks
- cpu_system_time_ms
- cpu_total_ticks
@ -50,7 +55,7 @@ known to work with Filebeat and Kafkabeat.
- memstats_memory_alloc
- memstats_memory_total
- memstats_rss
* Tags:
- Tags:
- beat_beat
- beat_host
- beat_id
@ -58,7 +63,7 @@ known to work with Filebeat and Kafkabeat.
- beat_version
- **beat_filebeat**
* Fields:
- Fields:
- events_active
- events_added
- events_done
@ -69,7 +74,7 @@ known to work with Filebeat and Kafkabeat.
- harvester_started
- input_log_files_renamed
- input_log_files_truncated
* Tags:
- Tags:
- beat_beat
- beat_host
- beat_id
@ -77,7 +82,7 @@ known to work with Filebeat and Kafkabeat.
- beat_version
- **beat_libbeat**
* Fields:
- Fields:
- config_module_running
- config_module_starts
- config_module_stops
@ -105,7 +110,7 @@ known to work with Filebeat and Kafkabeat.
- pipeline_events_retry
- pipeline_events_total
- pipeline_queue_acked
* Tags:
- Tags:
- beat_beat
- beat_host
- beat_id
@ -113,7 +118,7 @@ known to work with Filebeat and Kafkabeat.
- beat_version
- **beat_system**
* Field:
- Field:
- cpu_cores
- load_1
- load_15
@ -121,15 +126,16 @@ known to work with Filebeat and Kafkabeat.
- load_norm_1
- load_norm_15
- load_norm_5
* Tags:
- Tags:
- beat_beat
- beat_host
- beat_id
- beat_name
- beat_version
### Example Output:
```
## Example
```shell
$ telegraf --input-filter beat --test
> beat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6

View File

@ -2,19 +2,19 @@
This plugin decodes the JSON or XML statistics provided by BIND 9 nameservers.
### XML Statistics Channel
## XML Statistics Channel
Version 2 statistics (BIND 9.6 - 9.9) and version 3 statistics (BIND 9.9+) are supported. Note that
for BIND 9.9 to support version 3 statistics, it must be built with the `--enable-newstats` compile
flag, and it must be specifically requested via the correct URL. Version 3 statistics are the
default (and only) XML format in BIND 9.10+.
### JSON Statistics Channel
## JSON Statistics Channel
JSON statistics schema version 1 (BIND 9.10+) is supported. As of writing, some distros still do
not enable support for JSON statistics in their BIND packages.
### Configuration:
## Configuration
- **urls** []string: List of BIND statistics channel URLs to collect from. Do not include a
trailing slash in the URL. Default is "http://localhost:8053/xml/v3".
@ -27,15 +27,16 @@ version and configured statistics channel.
| BIND Version | Statistics Format | Example URL |
| ------------ | ----------------- | ----------------------------- |
| 9.6 - 9.8 | XML v2 | http://localhost:8053 |
| 9.9 | XML v2 | http://localhost:8053/xml/v2 |
| 9.9+ | XML v3 | http://localhost:8053/xml/v3 |
| 9.10+ | JSON v1 | http://localhost:8053/json/v1 |
| 9.6 - 9.8 | XML v2 | `http://localhost:8053` |
| 9.9 | XML v2 | `http://localhost:8053/xml/v2` |
| 9.9+ | XML v3 | `http://localhost:8053/xml/v3` |
| 9.10+ | JSON v1 | `http://localhost:8053/json/v1` |
#### Configuration of BIND Daemon
### Configuration of BIND Daemon
Add the following to your named.conf if running Telegraf on the same host as the BIND daemon:
```
```json
statistics-channels {
inet 127.0.0.1 port 8053;
};
@ -46,7 +47,7 @@ configure the BIND daemon to listen on that address. Note that you should secure
channel with an ACL if it is publicly reachable. Consult the BIND Administrator Reference Manual
for more information.
### Measurements & Fields:
## Measurements & Fields
- bind_counter
- name=value (multiple)
@ -60,7 +61,7 @@ for more information.
- total
- in_use
### Tags:
## Tags
- All measurements
- url
@ -73,7 +74,7 @@ for more information.
- id
- name
### Sample Queries:
## Sample Queries
These are some useful queries (for dashboards or other uses) to run against data from this
plugin:
@ -84,7 +85,7 @@ WHERE "url" = 'localhost:8053' AND "type" = 'qtype' AND time > now() - 1h \
GROUP BY time(5m), "type"
```
```
```text
name: bind_counter
tags: type=qtype
time non_negative_derivative_A non_negative_derivative_PTR
@ -104,11 +105,11 @@ time non_negative_derivative_A non_negative_derivative_PTR
1553865600000000000 280.6666666667443 1807.9071428570896
```
### Example Output
## Example Output
Here is an example output of this plugin:
```
```shell
bind_memory,host=LAP,port=8053,source=localhost,url=localhost:8053 block_size=12058624i,context_size=4575056i,in_use=4113717i,lost=0i,total_use=16663252i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=opcode,url=localhost:8053 IQUERY=0i,NOTIFY=0i,QUERY=9i,STATUS=0i,UPDATE=0i 1554276619000000000
bind_counter,host=LAP,port=8053,source=localhost,type=rcode,url=localhost:8053 17=0i,18=0i,19=0i,20=0i,21=0i,22=0i,BADCOOKIE=0i,BADVERS=0i,FORMERR=0i,NOERROR=7i,NOTAUTH=0i,NOTIMP=0i,NOTZONE=0i,NXDOMAIN=0i,NXRRSET=0i,REFUSED=0i,RESERVED11=0i,RESERVED12=0i,RESERVED13=0i,RESERVED14=0i,RESERVED15=0i,SERVFAIL=2i,YXDOMAIN=0i,YXRRSET=0i 1554276619000000000

View File

@ -4,7 +4,7 @@ The Bond input plugin collects network bond interface status for both the
network bond interface as well as slave interfaces.
The plugin collects these metrics from `/proc/net/bonding/*` files.
### Configuration:
## Configuration
```toml
[[inputs.bond]]
@ -18,7 +18,7 @@ The plugin collects these metrics from `/proc/net/bonding/*` files.
# bond_interfaces = ["bond0"]
```
### Measurements & Fields:
## Measurements & Fields
- bond
- active_slave (for active-backup mode)
@ -29,9 +29,9 @@ The plugin collects these metrics from `/proc/net/bonding/*` files.
- status
- count
### Description:
## Description
```
```shell
active_slave
Currently active slave interface for active-backup mode.
@ -45,7 +45,7 @@ count
Number of slaves attached to bond
```
### Tags:
## Tags
- bond
- bond
@ -54,11 +54,11 @@ count
- bond
- interface
### Example output:
## Example output
Configuration:
```
```toml
[[inputs.bond]]
## Sets 'proc' directory path
## If not specified, then default is /proc
@ -72,13 +72,13 @@ Configuration:
Run:
```
```shell
telegraf --config telegraf.conf --input-filter bond --test
```
Output:
```
```shell
* Plugin: inputs.bond, Collection 1
> bond,bond=bond1,host=local active_slave="eth0",status=1i 1509704525000000000
> bond_slave,bond=bond1,interface=eth0,host=local status=1i,failures=0i 1509704525000000000

View File

@ -5,7 +5,7 @@ via [Burrow](https://github.com/linkedin/Burrow) HTTP [API](https://github.com/l
Supported Burrow version: `1.x`
### Configuration
## Configuration
```toml
[[inputs.burrow]]
@ -50,7 +50,7 @@ Supported Burrow version: `1.x`
# insecure_skip_verify = false
```
### Group/Partition Status mappings
## Group/Partition Status mappings
* `OK` = 1
* `NOT_FOUND` = 2
@ -61,42 +61,41 @@ Supported Burrow version: `1.x`
> unknown value will be mapped to 0
### Fields
## Fields
* `burrow_group` (one event per each consumer group)
- status (string, see Partition Status mappings)
- status_code (int, `1..6`, see Partition status mappings)
- partition_count (int, `number of partitions`)
- offset (int64, `total offset of all partitions`)
- total_lag (int64, `totallag`)
- lag (int64, `maxlag.current_lag || 0`)
- timestamp (int64, `end.timestamp`)
* status (string, see Partition Status mappings)
* status_code (int, `1..6`, see Partition status mappings)
* partition_count (int, `number of partitions`)
* offset (int64, `total offset of all partitions`)
* total_lag (int64, `totallag`)
* lag (int64, `maxlag.current_lag || 0`)
* timestamp (int64, `end.timestamp`)
* `burrow_partition` (one event per each topic partition)
- status (string, see Partition Status mappings)
- status_code (int, `1..6`, see Partition status mappings)
- lag (int64, `current_lag || 0`)
- offset (int64, `end.timestamp`)
- timestamp (int64, `end.timestamp`)
* status (string, see Partition Status mappings)
* status_code (int, `1..6`, see Partition status mappings)
* lag (int64, `current_lag || 0`)
* offset (int64, `end.timestamp`)
* timestamp (int64, `end.timestamp`)
* `burrow_topic` (one event per topic offset)
- offset (int64)
* offset (int64)
### Tags
## Tags
* `burrow_group`
- cluster (string)
- group (string)
* cluster (string)
* group (string)
* `burrow_partition`
- cluster (string)
- group (string)
- topic (string)
- partition (int)
- owner (string)
* cluster (string)
* group (string)
* topic (string)
* partition (int)
* owner (string)
* `burrow_topic`
- cluster (string)
- topic (string)
- partition (int)
* cluster (string)
* topic (string)
* partition (int)

View File

@ -1,19 +1,21 @@
# Cassandra Input Plugin
### **Deprecated in version 1.7**: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin with the [cassandra.conf](/plugins/inputs/jolokia2/examples/cassandra.conf) example configuration.
**Deprecated in version 1.7**: Please use the [jolokia2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) plugin with the [cassandra.conf](/plugins/inputs/jolokia2/examples/cassandra.conf) example configuration.
## Plugin arguments
#### Plugin arguments:
- **context** string: Context root used for jolokia url
- **servers** []string: List of servers with the format "<user:passwd@><host>:port"
- **servers** []string: List of servers with the format `<user:passwd@><host>:port`
- **metrics** []string: List of JMX paths that identify MBean attributes (see the sketch below)
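A minimal sketch tying the three arguments together (placeholder values; see the full example configuration further below):

```toml
[[inputs.cassandra]]
  ## Context root of the Jolokia agent
  context = "/jolokia/read"
  ## <user:passwd@><host>:port entries
  servers = ["myuser:mypassword@10.10.10.1:8778", ":8778"]
  ## JMX paths of the MBean attributes to read
  metrics = ["/java.lang:type=Memory/HeapMemoryUsage"]
```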
#### Description
## Description
The Cassandra plugin collects Cassandra 3 / JVM metrics exposed as MBean attributes through the Jolokia REST endpoint. All metrics are collected for each server configured.
See: https://jolokia.org/ and [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
See: [https://jolokia.org/](https://jolokia.org/) and [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
## Measurements
# Measurements:
The Cassandra plugin produces one or more measurements for each configured metric, adding the server's name as the `host` tag. More than one measurement is generated when querying table metrics with a wildcard for the keyspace or table name.
Given a configuration like:
@ -43,30 +45,30 @@ Given a configuration like:
The collected metrics will be:
```
```shell
javaMemory,host=myHost,mname=HeapMemoryUsage HeapMemoryUsage_committed=1040187392,HeapMemoryUsage_init=1050673152,HeapMemoryUsage_max=1040187392,HeapMemoryUsage_used=368155000 1459551767230567084
```
# Useful Metrics:
## Useful Metrics
Here is a list of metrics that might be useful for monitoring your Cassandra cluster. This was put together from multiple sources on the web.
- [How to monitor Cassandra performance metrics](https://www.datadoghq.com/blog/how-to-monitor-cassandra-performance-metrics)
- [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
#### measurement = javaGarbageCollector
### measurement = javaGarbageCollector
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionTime
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionCount
- /java.lang:type=GarbageCollector,name=ParNew/CollectionTime
- /java.lang:type=GarbageCollector,name=ParNew/CollectionCount
#### measurement = javaMemory
### measurement = javaMemory
- /java.lang:type=Memory/HeapMemoryUsage
- /java.lang:type=Memory/NonHeapMemoryUsage
#### measurement = cassandraCache
### measurement = cassandraCache
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hits
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Requests
@ -79,11 +81,11 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Capacity
#### measurement = cassandraClient
### measurement = cassandraClient
- /org.apache.cassandra.metrics:type=Client,name=connectedNativeClients
#### measurement = cassandraClientRequest
### measurement = cassandraClientRequest
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=TotalLatency
@ -96,24 +98,25 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Failures
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Failures
#### measurement = cassandraCommitLog
### measurement = cassandraCommitLog
- /org.apache.cassandra.metrics:type=CommitLog,name=PendingTasks
- /org.apache.cassandra.metrics:type=CommitLog,name=TotalCommitLogSize
#### measurement = cassandraCompaction
### measurement = cassandraCompaction
- /org.apache.cassandra.metrics:type=Compaction,name=CompletedTasks
- /org.apache.cassandra.metrics:type=Compaction,name=PendingTasks
- /org.apache.cassandra.metrics:type=Compaction,name=TotalCompactionsCompleted
- /org.apache.cassandra.metrics:type=Compaction,name=BytesCompacted
#### measurement = cassandraStorage
### measurement = cassandraStorage
- /org.apache.cassandra.metrics:type=Storage,name=Load
- /org.apache.cassandra.metrics:type=Storage,name=Exceptions
#### measurement = cassandraTable
### measurement = cassandraTable
Using wildcards for "keyspace" and "scope" can create a lot of series as metrics will be reported for every table and keyspace including internal system tables. Specify a keyspace name and/or a table name to limit them.
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=LiveDiskSpaceUsed
@ -124,20 +127,17 @@ Using wildcards for "keyspace" and "scope" can create a lot of series as metrics
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadTotalLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteTotalLatency
#### measurement = cassandraThreadPools
### measurement = cassandraThreadPools
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=CompactionExecutor,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=AntiEntropyStage,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=CurrentlyBlockedTasks

View File

@ -4,7 +4,7 @@ Collects performance metrics from the MON and OSD nodes in a Ceph storage cluste
Ceph has introduced a Telegraf and Influx plugin in the 13.x Mimic release. The Telegraf module sends to a Telegraf configured with a socket_listener. [Learn more in their docs](https://docs.ceph.com/en/latest/mgr/telegraf/)
*Admin Socket Stats*
## Admin Socket Stats
This gatherer works by scanning the configured SocketDir for OSD, MON, MDS and RGW socket files. When it finds
a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs it runs **ceph --admin-daemon $file perf dump**
@ -26,23 +26,22 @@ used as collection tags, and all sub-keys are flattened. For example:
Would be parsed into the following metrics, all of which would be tagged with collection=paxos:
- refresh = 9363435
- refresh_latency.avgcount: 9363435
- refresh_latency.sum: 5378.794002000
- refresh = 9363435
- refresh_latency.avgcount: 9363435
- refresh_latency.sum: 5378.794002000
*Cluster Stats*
## Cluster Stats
This gatherer works by invoking ceph commands against the cluster, and thus only requires the ceph client, a valid
ceph configuration and an access key to function (the ceph_config and ceph_user configuration variables work
in conjunction to specify these prerequisites). It may be run on any server which has access to
the cluster. The currently supported commands are:
* ceph status
* ceph df
* ceph osd pool stats
- ceph status
- ceph df
- ceph osd pool stats
### Configuration:
## Configuration
```toml
# Collects performance metrics from the MON, OSD, MDS and RGW nodes in a Ceph storage cluster.
@ -89,9 +88,9 @@ the cluster. The currently supported commands are:
gather_cluster_stats = false
```
### Metrics:
## Metrics
*Admin Socket Stats*
### Admin Socket
All fields are collected under the **ceph** measurement and stored as float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.
@ -167,9 +166,9 @@ All admin measurements will have the following tags:
- throttle-objecter_ops
- throttle-rgw_async_rados_ops
*Cluster Stats*
## Cluster
+ ceph_health
- ceph_health
- fields:
- status
- overall_status
@ -184,7 +183,7 @@ All admin measurements will have the following tags:
- nearfull (bool)
- num_remapped_pgs (float)
+ ceph_pgmap
- ceph_pgmap
- fields:
- version (float)
- num_pgs (float)
@ -204,7 +203,7 @@ All admin measurements will have the following tags:
- fields:
- count (float)
+ ceph_usage
- ceph_usage
- fields:
- total_bytes (float)
- total_used_bytes (float)
@ -223,7 +222,7 @@ All admin measurements will have the following tags:
- percent_used (float)
- max_avail (float)
+ ceph_pool_stats
- ceph_pool_stats
- tags:
- name
- fields:
@ -236,12 +235,11 @@ All admin measurements will have the following tags:
- recovering_bytes_per_sec (float)
- recovering_keys_per_sec (float)
## Example
### Example Output:
Below is an example of cluster stats:
*Cluster Stats*
```
```shell
ceph_health,host=stefanmon1 overall_status="",status="HEALTH_WARN" 1587118504000000000
ceph_osdmap,host=stefanmon1 epoch=203,full=false,nearfull=false,num_in_osds=8,num_osds=9,num_remapped_pgs=0,num_up_osds=8 1587118504000000000
ceph_pgmap,host=stefanmon1 bytes_avail=849879302144,bytes_total=858959904768,bytes_used=9080602624,data_bytes=5055,num_pgs=504,read_bytes_sec=0,read_op_per_sec=0,version=0,write_bytes_sec=0,write_op_per_sec=0 1587118504000000000
@ -251,9 +249,9 @@ ceph_pool_usage,host=stefanmon1,name=cephfs_data bytes_used=0,kb_used=0,max_avai
ceph_pool_stats,host=stefanmon1,name=cephfs_data read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=0,write_op_per_sec=0 1587118506000000000
```
*Admin Socket Stats*
Below is an example of admin socket stats:
```
```shell
> ceph,collection=cct,host=stefanmon1,id=stefanmon1,type=monitor total_workers=0,unhealthy_workers=0 1587117563000000000
> ceph,collection=mempool,host=stefanmon1,id=stefanmon1,type=monitor bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=719152,buffer_anon_items=192,buffer_meta_bytes=352,buffer_meta_items=4,mds_co_bytes=0,mds_co_items=0,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=15872,osdmap_items=138,osdmap_mapping_bytes=63112,osdmap_mapping_items=7626,pgmap_bytes=38680,pgmap_items=477,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117563000000000
> ceph,collection=throttle-mon_client_bytes,host=stefanmon1,id=stefanmon1,type=monitor get=1041157,get_or_fail_fail=0,get_or_fail_success=1041157,get_started=0,get_sum=64928901,max=104857600,put=1041157,put_sum=64928901,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000

View File

@ -10,38 +10,35 @@ Following file formats are supported:
* Single value
```
```text
VAL\n
```
* New line separated values
```
```text
VAL0\n
VAL1\n
```
* Space separated values
```
```text
VAL0 VAL1 ...\n
```
* Space separated keys and value, separated by new line
```
```text
KEY0 ... VAL0\n
KEY1 ... VAL1\n
```
## Tags
### Tags:
All measurements have the `path` tag.
All measurements have the following tags:
- path
### Configuration:
## Configuration
```toml
# Read specific statistics per cgroup
@ -60,7 +57,7 @@ All measurements have the following tags:
# files = ["memory.*usage*", "memory.limit_in_bytes"]
```
### usage examples:
## Example
```toml
# [[inputs.cgroup]]

View File

@ -51,7 +51,7 @@ Dispersion is due to system clock resolution, statistical measurement variations
- Leap status - This is the leap status, which can be Normal, Insert second,
Delete second or Not synchronised.
### Configuration:
## Configuration
```toml
# Get standard chrony metrics, requires chronyc executable.
@ -60,34 +60,30 @@ Delete second or Not synchronised.
# dns_lookup = false
```
### Measurements & Fields:
## Measurements & Fields
- chrony
- system_time (float, seconds)
- last_offset (float, seconds)
- rms_offset (float, seconds)
- frequency (float, ppm)
- residual_freq (float, ppm)
- skew (float, ppm)
- root_delay (float, seconds)
- root_dispersion (float, seconds)
- update_interval (float, seconds)
- system_time (float, seconds)
- last_offset (float, seconds)
- rms_offset (float, seconds)
- frequency (float, ppm)
- residual_freq (float, ppm)
- skew (float, ppm)
- root_delay (float, seconds)
- root_dispersion (float, seconds)
- update_interval (float, seconds)
### Tags:
### Tags
- All measurements have the following tags:
- reference_id
- stratum
- leap_status
- reference_id
- stratum
- leap_status
### Example Output:
### Example Output
```
```shell
$ telegraf --config telegraf.conf --input-filter chrony --test
* Plugin: chrony, Collection 1
> chrony,leap_status=normal,reference_id=192.168.1.1,stratum=3 frequency=-35.657,system_time=0.000027073,last_offset=-0.000013616,residual_freq=-0,rms_offset=0.000027073,root_delay=0.000644,root_dispersion=0.003444,skew=0.001,update_interval=1031.2 1463750789687639161
```

View File

@ -9,8 +9,7 @@ The GRPC dialout transport is supported on various IOS XR (64-bit) 6.1.x and lat
The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and later.
### Configuration:
## Configuration
```toml
[[inputs.cisco_telemetry_mdt]]
@ -53,14 +52,16 @@ The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and l
# dnpath3 = '{"Name": "show processes memory physical","prop": [{"Key": "processname","Value": "string"}]}'
```
### Example Output:
```
## Example Output
```shell
ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet2,source=csr1kv,subscription=101 in-unicast-pkts=27i,in-multicast-pkts=0i,discontinuity-time="2019-05-23T07:40:23.000362+00:00",in-octets=5233i,in-errors=0i,out-multicast-pkts=0i,out-discards=0i,in-broadcast-pkts=0i,in-discards=0i,in-unknown-protos=0i,out-unicast-pkts=0i,out-broadcast-pkts=0i,out-octets=0i,out-errors=0i 1559150462624000000
ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet1,source=csr1kv,subscription=101 in-octets=3394770806i,in-broadcast-pkts=0i,in-multicast-pkts=0i,out-broadcast-pkts=0i,in-unknown-protos=0i,out-octets=350212i,in-unicast-pkts=9477273i,in-discards=0i,out-unicast-pkts=2726i,out-discards=0i,discontinuity-time="2019-05-23T07:40:23.000363+00:00",in-errors=30i,out-multicast-pkts=0i,out-errors=0i 1559150462624000000
```
### NX-OS Configuration Example:
```
### NX-OS Configuration Example
```text
Requirement DATA-SOURCE Configuration
-----------------------------------------
Environment DME path sys/ch query-condition query-target=subtree&target-subtree-class=eqptPsuSlot,eqptFtSlot,eqptSupCSlot,eqptPsu,eqptFt,eqptSensor,eqptLCSlot
@ -92,13 +93,11 @@ multicast igmp NXAPI show ip igmp snooping groups
multicast igmp NXAPI show ip igmp snooping groups detail
multicast igmp NXAPI show ip igmp snooping groups summary
multicast igmp NXAPI show ip igmp snooping mrouter
multicast igmp NXAPI show ip igmp snooping statistics
multicast igmp NXAPI show ip igmp snooping statistics
multicast pim NXAPI show ip pim interface vrf all
multicast pim NXAPI show ip pim neighbor vrf all
multicast pim NXAPI show ip pim route vrf all
multicast pim NXAPI show ip pim rp vrf all
multicast pim NXAPI show ip pim statistics vrf all
multicast pim NXAPI show ip pim vrf all
```

View File

@ -2,7 +2,8 @@
This plugin gathers statistics data from a [ClickHouse](https://github.com/ClickHouse/ClickHouse) server.
### Configuration
## Configuration
```toml
# Read metrics from one or many ClickHouse servers
[[inputs.clickhouse]]
@ -71,7 +72,7 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
# insecure_skip_verify = false
```
### Metrics
## Metrics
- clickhouse_events
- tags:
@ -81,7 +82,7 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- fields:
- all rows from [system.events][]
+ clickhouse_metrics
- clickhouse_metrics
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
@ -97,7 +98,7 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- fields:
- all rows from [system.asynchronous_metrics][]
+ clickhouse_tables
- clickhouse_tables
- tags:
- source (ClickHouse server hostname)
- table
@ -115,9 +116,9 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- cluster (Name of the cluster [optional])
- shard_num (Shard number in the cluster [optional])
- fields:
- root_nodes (count of nodes from [system.zookeeper][] where path=/)
- root_nodes (count of nodes from [system.zookeeper][] where path=/)
+ clickhouse_replication_queue
- clickhouse_replication_queue
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
@ -132,8 +133,8 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- shard_num (Shard number in the cluster [optional])
- fields:
- detached_parts (total detached parts for all tables and databases from [system.detached_parts][])
+ clickhouse_dictionaries
- clickhouse_dictionaries
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
@ -153,7 +154,7 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- failed - counter showing total failed mutations since the first clickhouse-server run
- completed - counter showing total successfully finished mutations since the first clickhouse-server run
+ clickhouse_disks
- clickhouse_disks
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
@ -161,8 +162,8 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- name (disk name in storage configuration)
- path (path to disk)
- fields:
- free_space_percent - 0-100, gauge showing the current percent of free disk space bytes relative to total disk space bytes
- keep_free_space_percent - 0-100, gauge showing the current percent of required keep-free disk bytes relative to total disk space bytes
- free_space_percent - 0-100, gauge showing the current percent of free disk space bytes relative to total disk space bytes
- keep_free_space_percent - 0-100, gauge showing the current percent of required keep-free disk bytes relative to total disk space bytes
- clickhouse_processes
- tags:
@ -170,8 +171,8 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- cluster (Name of the cluster [optional])
- shard_num (Shard number in the cluster [optional])
- fields:
- percentile_50 - float gauge showing the 50th percentile (quantile 0.5) for the `elapsed` field of running processes, see [system.processes][] for details
- percentile_90 - float gauge showing the 90th percentile (quantile 0.9) for the `elapsed` field of running processes, see [system.processes][] for details
- percentile_50 - float gauge showing the 50th percentile (quantile 0.5) for the `elapsed` field of running processes, see [system.processes][] for details
- percentile_90 - float gauge showing the 90th percentile (quantile 0.9) for the `elapsed` field of running processes, see [system.processes][] for details
- longest_running - float gauge showing the maximum value of the `elapsed` field of running processes, see [system.processes][] for details
- clickhouse_text_log
@ -179,13 +180,13 @@ This plugin gathers the statistic data from [ClickHouse](https://github.com/Clic
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
- shard_num (Shard number in the cluster [optional])
- level (message level; only messages with level Notice or lower are collected), see details on [system.text_log][]
- level (message level; only messages with level Notice or lower are collected), see details on [system.text_log][]
- fields:
- messages_last_10_min - gauge showing how many messages were collected
### Example Output
```
### Examples
```text
clickhouse_events,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 read_compressed_bytes=212i,arena_alloc_chunks=35i,function_execute=85i,merge_tree_data_writer_rows=3i,rw_lock_acquired_read_locks=421i,file_open=46i,io_buffer_alloc_bytes=86451985i,inserted_bytes=196i,regexp_created=3i,real_time_microseconds=116832i,query=23i,network_receive_elapsed_microseconds=268i,merge_tree_data_writer_compressed_bytes=1080i,arena_alloc_bytes=212992i,disk_write_elapsed_microseconds=556i,inserted_rows=3i,compressed_read_buffer_bytes=81i,read_buffer_from_file_descriptor_read_bytes=148i,write_buffer_from_file_descriptor_write=47i,merge_tree_data_writer_blocks=3i,soft_page_faults=896i,hard_page_faults=7i,select_query=21i,merge_tree_data_writer_uncompressed_bytes=196i,merge_tree_data_writer_blocks_already_sorted=3i,user_time_microseconds=40196i,compressed_read_buffer_blocks=5i,write_buffer_from_file_descriptor_write_bytes=3246i,io_buffer_allocs=296i,created_write_buffer_ordinary=12i,disk_read_elapsed_microseconds=59347044i,network_send_elapsed_microseconds=1538i,context_lock=1040i,insert_query=1i,system_time_microseconds=14582i,read_buffer_from_file_descriptor_read=3i 1569421000000000000
clickhouse_asynchronous_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 jemalloc.metadata_thp=0i,replicas_max_relative_delay=0i,jemalloc.mapped=1803177984i,jemalloc.allocated=1724839256i,jemalloc.background_thread.run_interval=0i,jemalloc.background_thread.num_threads=0i,uncompressed_cache_cells=0i,replicas_max_absolute_delay=0i,mark_cache_bytes=0i,compiled_expression_cache_count=0i,replicas_sum_queue_size=0i,number_of_tables=35i,replicas_max_merges_in_queue=0i,replicas_max_inserts_in_queue=0i,replicas_sum_merges_in_queue=0i,replicas_max_queue_size=0i,mark_cache_files=0i,jemalloc.background_thread.num_runs=0i,jemalloc.active=1726210048i,uptime=158i,jemalloc.retained=380481536i,replicas_sum_inserts_in_queue=0i,uncompressed_cache_bytes=0i,number_of_databases=2i,jemalloc.metadata=9207704i,max_part_count_for_partition=1i,jemalloc.resident=1742442496i 1569421000000000000
clickhouse_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 replicated_send=0i,write=0i,ephemeral_node=0i,zoo_keeper_request=0i,distributed_files_to_insert=0i,replicated_fetch=0i,background_schedule_pool_task=0i,interserver_connection=0i,leader_replica=0i,delayed_inserts=0i,global_thread_active=41i,merge=0i,readonly_replica=0i,memory_tracking_in_background_schedule_pool=0i,memory_tracking_for_merges=0i,zoo_keeper_session=0i,context_lock_wait=0i,storage_buffer_bytes=0i,background_pool_task=0i,send_external_tables=0i,zoo_keeper_watch=0i,part_mutation=0i,disk_space_reserved_for_merge=0i,distributed_send=0i,version_integer=19014003i,local_thread=0i,replicated_checks=0i,memory_tracking=0i,memory_tracking_in_background_processing_pool=0i,leader_election=0i,revision=54425i,open_file_for_read=0i,open_file_for_write=0i,storage_buffer_rows=0i,rw_lock_waiting_readers=0i,rw_lock_waiting_writers=0i,rw_lock_active_writers=0i,local_thread_active=0i,query_preempted=0i,tcp_connection=1i,http_connection=1i,read=2i,query_thread=0i,dict_cache_requests=0i,rw_lock_active_readers=1i,global_thread=43i,query=1i 1569421000000000000
@ -196,10 +197,10 @@ clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=default,hos
[system.events]: https://clickhouse.tech/docs/en/operations/system-tables/events/
[system.metrics]: https://clickhouse.tech/docs/en/operations/system-tables/metrics/
[system.asynchronous_metrics]: https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics/
[system.zookeeper]: https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/
[system.zookeeper]: https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/
[system.detached_parts]: https://clickhouse.tech/docs/en/operations/system-tables/detached_parts/
[system.dictionaries]: https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/
[system.mutations]: https://clickhouse.tech/docs/en/operations/system-tables/mutations/
[system.disks]: https://clickhouse.tech/docs/en/operations/system-tables/disks/
[system.processes]: https://clickhouse.tech/docs/en/operations/system-tables/processes/
[system.text_log]: https://clickhouse.tech/docs/en/operations/system-tables/text_log/
[system.dictionaries]: https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/
[system.mutations]: https://clickhouse.tech/docs/en/operations/system-tables/mutations/
[system.disks]: https://clickhouse.tech/docs/en/operations/system-tables/disks/
[system.processes]: https://clickhouse.tech/docs/en/operations/system-tables/processes/
[system.text_log]: https://clickhouse.tech/docs/en/operations/system-tables/text_log/

View File

@ -3,8 +3,7 @@
The GCP PubSub plugin ingests metrics from [Google Cloud PubSub][pubsub]
and creates metrics using one of the supported [input data formats][].
### Configuration
## Configuration
```toml
[[inputs.cloud_pubsub]]
@ -26,8 +25,8 @@ and creates metrics using one of the supported [input data formats][].
## Application Default Credentials, which is preferred.
# credentials_file = "path/to/my/creds.json"
## Optional. Number of seconds to wait before attempting to restart the
## PubSub subscription receiver after an unexpected error.
## Optional. Number of seconds to wait before attempting to restart the
## PubSub subscription receiver after an unexpected error.
## If the streaming pull for a PubSub Subscription fails (receiver),
## the agent attempts to restart receiving messages after this many seconds.
# retry_delay_seconds = 5
@ -76,7 +75,7 @@ and creates metrics using one of the supported [input data formats][].
## processed concurrently (use "max_outstanding_messages" instead).
# max_receiver_go_routines = 0
## Optional. If true, Telegraf will attempt to base64 decode the
## Optional. If true, Telegraf will attempt to base64 decode the
## PubSub message data before parsing. Many GCP services that
## output JSON to Google PubSub base64-encode the JSON payload.
# base64_data = false
@ -91,8 +90,6 @@ Each plugin agent can listen to one subscription at a time, so you will
need to run multiple instances of the plugin to pull messages from multiple
subscriptions/topics.
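As a sketch, two instances pulling from two subscriptions might look like this (the `project`/`subscription` option names are assumed from the sample configuration; values are placeholders):

```toml
[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "telegraf-subscription-a"
  data_format = "influx"

[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "telegraf-subscription-b"
  data_format = "influx"
```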
[pubsub]: https://cloud.google.com/pubsub
[pubsub create sub]: https://cloud.google.com/pubsub/docs/admin#create_a_pull_subscription
[input data formats]: /docs/DATA_FORMATS_INPUT.md

View File

@ -9,8 +9,7 @@ Enable TLS by specifying the file names of a service TLS certificate and key.
Enable mutually authenticated TLS, authorizing client connections by their signing certificate authority, by including a list of allowed CA certificate file names in `tls_allowed_cacerts`.
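A sketch of what that could look like, assuming this plugin's section name and Telegraf's common TLS server options (file paths are placeholders):

```toml
[[inputs.cloud_pubsub_push]]
  ## Service TLS certificate and key enable TLS
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  ## Clients must present a certificate signed by one of these CAs
  tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
```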
### Configuration:
## Configuration
This is a sample configuration for the plugin.

View File

@ -2,10 +2,11 @@
This plugin will pull Metric Statistics from Amazon CloudWatch.
### Amazon Authentication
## Amazon Authentication
This plugin uses a credential chain for authentication with the CloudWatch
API endpoint. The plugin will attempt to authenticate in the following order (a minimal sketch follows the list):
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
2. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
3. Shared profile from `profile` attribute
@@ -13,7 +14,7 @@ API endpoint. In the following order the plugin will attempt to authenticate.
5. [Shared Credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#shared-credentials-file)
6. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
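For instance, rules 1 and 2 above combine roughly as follows (a sketch only: the attribute names come from the list, `region` is assumed from the plugin's configuration, and all values are placeholders):

```toml
## a sketch of rules 1 and 2 above; all values are placeholders
[[inputs.cloudwatch]]
  region = "us-east-1"
  ## rule 2: explicit source credentials
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  ## rule 1: assume this role via STS using the credentials above
  role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch"
```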
### Configuration:
## Configuration
```toml
# Pull Metric Statistics from Amazon CloudWatch
@@ -112,7 +113,8 @@ API endpoint. In the following order the plugin will attempt to authenticate.
# name = "LoadBalancerName"
# value = "p-example"
```
#### Requirements and Terminology
## Requirements and Terminology
The plugin configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html) and access patterns to allow monitoring of any CloudWatch Metric.
@@ -127,7 +129,8 @@ to be retrieved. If specifying >1 dimension, then the metric must contain *all*
wildcard dimension is ignored.
Example:
```
```toml
[[inputs.cloudwatch]]
period = "1m"
interval = "5m"
@@ -146,13 +149,14 @@ Example:
```
If the following ELBs are available:
- name: `p-example`, availabilityZone: `us-east-1a`
- name: `p-example`, availabilityZone: `us-east-1b`
- name: `q-example`, availabilityZone: `us-east-1a`
- name: `q-example`, availabilityZone: `us-east-1b`
Then 2 metrics will be output:
- name: `p-example`, availabilityZone: `us-east-1a`
- name: `p-example`, availabilityZone: `us-east-1b`
@@ -161,11 +165,12 @@ would be exported containing the aggregate values of the ELB across availability
To maximize efficiency and savings, consider making fewer requests by increasing `interval` while keeping `period` at the duration at which you would like metrics to be reported. The above example will request metrics from CloudWatch every 5 minutes but will output five metrics timestamped one minute apart.
#### Restrictions and Limitations
## Restrictions and Limitations
- CloudWatch metrics are not available instantly via the CloudWatch API. You should adjust your collection `delay` to account for this lag in metrics availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html)
- CloudWatch API usage incurs cost - see [GetMetricData Pricing](https://aws.amazon.com/cloudwatch/pricing/)
### Measurements & Fields:
## Measurements & Fields
Each CloudWatch Namespace monitored records a measurement with fields for each available Metric Statistic.
Namespace and Metrics are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case) (a worked example follows the field list).
@@ -177,8 +182,8 @@ Namespace and Metrics are represented in [snake case](https://en.wikipedia.org/w
- {metric}_maximum (metric Maximum value)
- {metric}_sample_count (metric SampleCount value)
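As a worked example of this naming (see the example output at the end of this document), the `AWS/ELB` namespace with the metric `Latency` and the statistic `Average` is reported as:

```text
measurement: cloudwatch_aws_elb
field:       latency_average
```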
## Tags
### Tags:
Each measurement is tagged with the following identifiers to uniquely identify the associated metric.
Tag Dimension names are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case).
@@ -186,17 +191,19 @@ Tag Dimension names are represented in [snake case](https://en.wikipedia.org/wik
- region (CloudWatch Region)
- {dimension-name} (Cloudwatch Dimension value - one for each metric dimension)
### Troubleshooting:
## Troubleshooting
You can use the AWS CLI to get a list of available metrics and dimensions:
```
```shell
aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1
aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1 --metric-name CPUCreditBalance
```
If the expected metrics are not returned, you can try getting them manually
for a short period of time:
```
```shell
aws cloudwatch get-metric-data \
--start-time 2018-07-01T00:00:00Z \
--end-time 2018-07-01T00:15:00Z \
@@ -222,9 +229,9 @@ aws cloudwatch get-metric-data \
]'
```
### Example Output:
## Example
```
```shell
$ ./telegraf --config telegraf.conf --input-filter cloudwatch --test
> cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875,latency_sample_count=4029,latency_sum=19.382705211639404 1459542420000000000
```

View File

@@ -3,23 +3,22 @@
Collects stats from Netfilter's conntrack-tools.
The conntrack-tools provide a mechanism for tracking various aspects of
network connections as they are processed by netfilter. At runtime,
conntrack exposes many of those connection statistics within /proc/sys/net.
Depending on your kernel version, these files can be found in either
/proc/sys/net/ipv4/netfilter or /proc/sys/net/netfilter and will be
prefixed with either ip_ or nf_. This plugin reads the files specified
prefixed with either ip or nf. This plugin reads the files specified
in its configuration and publishes each one as a field, with the prefix
normalized to ip_.
In order to simplify configuration in a heterogeneous environment, a superset
of directory and filenames can be specified. Any locations that don't exist
will be ignored.
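For example, you can check which locations and counters a given host actually provides before settling on a configuration (illustrative commands; exact file names vary by kernel):

```shell
# list whichever conntrack counter files this kernel exposes
ls /proc/sys/net/netfilter/nf_conntrack_* \
   /proc/sys/net/ipv4/netfilter/ip_conntrack_* 2>/dev/null
# current number of tracked connections
cat /proc/sys/net/netfilter/nf_conntrack_count
```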
For more information on conntrack-tools, see the
[Netfilter Documentation](http://conntrack-tools.netfilter.org/).
### Configuration:
## Configuration
```toml
# Collects conntrack stats from the configured directories and files.
@@ -38,19 +37,19 @@ For more information on conntrack-tools, see the
dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
```
### Measurements & Fields:
## Measurements & Fields
- conntrack
- ip_conntrack_count (int, count): the number of entries in the conntrack table
- ip_conntrack_max (int, size): the max capacity of the conntrack table
### Tags:
## Tags
This input does not use tags.
### Example Output:
## Example Output
```
```shell
$ ./telegraf --config telegraf.conf --input-filter conntrack --test
conntrack,host=myhost ip_conntrack_count=2,ip_conntrack_max=262144 1461620427667995735
```

View File

@@ -6,7 +6,7 @@ to query the data. It will not report the
[telemetry](https://www.consul.io/docs/agent/telemetry.html) but Consul can
report those stats already using the StatsD protocol if needed.
### Configuration:
## Configuration
```toml
# Gather health check statuses from services registered in Consul
@@ -48,13 +48,15 @@ report those stats already using StatsD protocol if needed.
# tag_delimiter = ":"
```
### Metrics:
##### metric_version = 1:
## Metrics
### metric_version = 1
- consul_health_checks
- tags:
- node (node that check/service is registered on)
- service_name
- check_id
- fields:
- check_name
- service_id
@@ -63,27 +65,28 @@ report those stats already using StatsD protocol if needed.
- critical (integer)
- warning (integer)
##### metric_version = 2:
### metric_version = 2
- consul_health_checks
- tags:
- node (node that check/service is registered on)
- service_name
- check_id
- check_name
- service_id
- status
- fields:
- passing (integer)
- critical (integer)
- warning (integer)
`passing`, `critical`, and `warning` are integer representations of the health
check state. A value of `1` means the check was in that state at this sample.
`status` is a string representation of the same state.
## Example output
```
```shell
consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
```

View File

@@ -1,8 +1,9 @@
# Couchbase Input Plugin
Couchbase is a distributed NoSQL database.
This plugin gets metrics for each Couchbase node, as well as detailed metrics for each bucket, for a given Couchbase server.
## Configuration:
## Configuration
```toml
# Read per-node and per-bucket metrics from Couchbase
@@ -30,25 +31,29 @@ This plugin gets metrics for each Couchbase node, as well as detailed metrics fo
# insecure_skip_verify = false
```
## Measurements:
## Measurements
### couchbase_node
Tags:
- cluster: sanitized string from the `servers` configuration field (credentials stripped), e.g.: `http://user:password@couchbase-0.example.com:8091/endpoint` -> `http://couchbase-0.example.com:8091/endpoint`
- hostname: Couchbase's name for the node and port, e.g., `172.16.10.187:8091`
Fields:
- memory_free (unit: bytes, example: 23181365248.0)
- memory_total (unit: bytes, example: 64424656896.0)
### couchbase_bucket
Tags:
- cluster: whatever you called it in `servers` in the configuration, e.g.: `http://couchbase-0.example.com/`
- bucket: the name of the couchbase bucket, e.g., `blastro-df`
Default bucket fields:
- quota_percent_used (unit: percent, example: 68.85424936294555)
- ops_per_sec (unit: count, example: 5686.789686789687)
- disk_fetches (unit: count, example: 0.0)
@@ -58,7 +63,8 @@ Default bucket fields:
- mem_used (unit: bytes, example: 202156957464.0)
Additional fields that can be configured with the `bucket_stats_included` option (a configuration sketch follows this list):
- couch_total_disk_size
- couch_docs_fragmentation
- couch_views_fragmentation
- hit_ratio
@@ -274,10 +280,9 @@ Additional fields that can be configured with the `bucket_stats_included` option
- swap_total
- swap_used
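A sketch of requesting a subset of these optional stats (the stat names come from the list above; the server URL is a placeholder):

```toml
## a sketch only; the stat names come from the list above
[[inputs.couchbase]]
  servers = ["http://couchbase-0.example.com:8091"]
  bucket_stats_included = ["couch_total_disk_size", "hit_ratio", "swap_used"]
```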
## Example output
```
```shell
couchbase_node,cluster=http://localhost:8091/,hostname=172.17.0.2:8091 memory_free=7705575424,memory_total=16558182400 1547829754000000000
couchbase_bucket,bucket=beer-sample,cluster=http://localhost:8091/ quota_percent_used=27.09285736083984,ops_per_sec=0,disk_fetches=0,item_count=7303,disk_used=21662946,data_used=9325087,mem_used=28408920 1547829754000000000
```

View File

@@ -2,7 +2,7 @@
The CouchDB plugin gathers metrics of CouchDB using the [_stats] endpoint.
### Configuration
## Configuration
```toml
[[inputs.couchdb]]
@@ -15,7 +15,7 @@ The CouchDB plugin gathers metrics of CouchDB using [_stats] endpoint.
# basic_password = "p@ssw0rd"
```
### Measurements & Fields:
## Measurements & Fields
Statistics specific to the internals of CouchDB:
@@ -60,19 +60,21 @@ httpd statistics:
- httpd_bulk_requests
- httpd_view_reads
### Tags:
## Tags
- server (url of the couchdb _stats endpoint)
### Example output:
## Example
**Post Couchdb 2.0**
```
### Post Couchdb 2.0
```shell
couchdb,server=http://couchdb22:5984/_node/_local/_stats couchdb_auth_cache_hits_value=0,httpd_request_methods_delete_value=0,couchdb_auth_cache_misses_value=0,httpd_request_methods_get_value=42,httpd_status_codes_304_value=0,httpd_status_codes_400_value=0,httpd_request_methods_head_value=0,httpd_status_codes_201_value=0,couchdb_database_reads_value=0,httpd_request_methods_copy_value=0,couchdb_request_time_max=0,httpd_status_codes_200_value=42,httpd_status_codes_301_value=0,couchdb_open_os_files_value=2,httpd_request_methods_put_value=0,httpd_request_methods_post_value=0,httpd_status_codes_202_value=0,httpd_status_codes_403_value=0,httpd_status_codes_409_value=0,couchdb_database_writes_value=0,couchdb_request_time_min=0,httpd_status_codes_412_value=0,httpd_status_codes_500_value=0,httpd_status_codes_401_value=0,httpd_status_codes_404_value=0,httpd_status_codes_405_value=0,couchdb_open_databases_value=0 1536707179000000000
```
**Pre Couchdb 2.0**
```
### Pre Couchdb 2.0
```shell
couchdb,server=http://couchdb16:5984/_stats couchdb_request_time_sum=96,httpd_status_codes_200_sum=37,httpd_status_codes_200_min=0,httpd_requests_mean=0.005,httpd_requests_min=0,couchdb_request_time_stddev=3.833,couchdb_request_time_min=1,httpd_request_methods_get_stddev=0.073,httpd_request_methods_get_min=0,httpd_status_codes_200_mean=0.005,httpd_status_codes_200_max=1,httpd_requests_sum=37,couchdb_request_time_current=96,httpd_request_methods_get_sum=37,httpd_request_methods_get_mean=0.005,httpd_request_methods_get_max=1,httpd_status_codes_200_stddev=0.073,couchdb_request_time_mean=2.595,couchdb_request_time_max=25,httpd_request_methods_get_current=37,httpd_status_codes_200_current=37,httpd_requests_current=37,httpd_requests_stddev=0.073,httpd_requests_max=1 1536707179000000000
```

View File

@@ -2,7 +2,8 @@
The `cpu` plugin gathers metrics on the system CPUs.
#### Configuration
## Configuration
```toml
# Read metrics about cpu usage
[[inputs.cpu]]
@@ -16,7 +17,7 @@ The `cpu` plugin gather metrics on the system CPUs.
report_active = false
```
### Metrics
## Metrics
On Linux, consult `man proc` for details on the meanings of these values.
@@ -47,14 +48,14 @@ On Linux, consult `man proc` for details on the meanings of these values.
- usage_guest (float, percent)
- usage_guest_nice (float, percent)
### Troubleshooting
## Troubleshooting
On Linux systems the `/proc/stat` file is used to gather CPU times.
Percentages are based on the last 2 samples.
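The same delta arithmetic can be sketched from the shell (a rough illustration only: integer math on the aggregate `cpu` line of `/proc/stat`, sampled one second apart):

```shell
# idle and total jiffies from the aggregate cpu line, sampled twice
t1=$(awk '/^cpu /{print $5, $2+$3+$4+$5+$6+$7+$8+$9+$10+$11}' /proc/stat)
sleep 1
t2=$(awk '/^cpu /{print $5, $2+$3+$4+$5+$6+$7+$8+$9+$10+$11}' /proc/stat)
set -- $t1 $t2
# usage_idle is the idle delta over the total delta between the two samples
echo "usage_idle ~ $(( 100 * ($3 - $1) / ($4 - $2) ))%"
```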
### Example Output
## Example Output
```
```shell
cpu,cpu=cpu0,host=loaner time_active=202224.15999999992,time_guest=30250.35,time_guest_nice=0,time_idle=1527035.04,time_iowait=1352,time_irq=0,time_nice=169.28,time_softirq=6281.4,time_steal=0,time_system=40097.14,time_user=154324.34 1568760922000000000
cpu,cpu=cpu0,host=loaner usage_active=31.249999981810106,usage_guest=2.083333333080696,usage_guest_nice=0,usage_idle=68.7500000181899,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4.166666666161392,usage_user=25.000000002273737 1568760922000000000
cpu,cpu=cpu1,host=loaner time_active=201890.02000000002,time_guest=30508.41,time_guest_nice=0,time_idle=264641.18,time_iowait=210.44,time_irq=0,time_nice=181.75,time_softirq=4537.88,time_steal=0,time_system=39480.7,time_user=157479.25 1568760922000000000

View File

@@ -2,7 +2,8 @@
The `csgo` plugin gathers metrics from Counter-Strike: Global Offensive servers.
#### Configuration
## Configuration
```toml
# Fetch metrics from a CSGO SRCDS
[[inputs.csgo]]
@@ -16,7 +17,7 @@ The `csgo` plugin gather metrics from Counter-Strike: Global Offensive servers.
servers = []
```
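With servers configured, the list might look as follows (a sketch only; each entry is assumed to pair a server address with its RCON password, and the values are placeholders):

```toml
## a sketch; each entry is assumed to be ["ip:port", "rcon_password"]
[[inputs.csgo]]
  servers = [
    ["10.0.0.1:27015", "changeme"],
  ]
```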
### Metrics
## Metrics
The plugin retrieves the output of the `stats` command that is executed via rcon.