fix(readmes): adding code block annotations (#7963)

Russ Savage 2020-08-10 12:50:48 -07:00 committed by GitHub
parent 242714224b
commit 75e701c288
GPG Key ID: 4AEE18F83AFDEB23
36 changed files with 64 additions and 62 deletions
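The pattern repeated across all 36 files is a one-line change per code block: a bare opening fence gains a language hint (`toml`, `sql`, `sh`, `json`, or `go`) so GitHub can syntax-highlight the block. Schematically (an illustrative hunk; `[[inputs.example]]` is a made-up plugin name, not from the commit):

````diff
 ### Configuration:
-```
+```toml
 [[inputs.example]]
   urls = ["http://localhost"]
````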

View File

@@ -48,7 +48,7 @@ execd plugins:
 1. Configure Telegraf to call your new plugin binary. For an input, this would
    look something like:
-```
+```toml
 [[inputs.execd]]
 command = ["/path/to/rand", "-config", "/path/to/plugin.conf"]
 signal = "none"

View File

@@ -55,7 +55,7 @@ cache_readaheads
 Using this configuration:
-```
+```toml
 [bcache]
 # Bcache sets path
 # If not specified, then default is:

View File

@@ -77,7 +77,7 @@ for more information.
 These are some useful queries (to generate dashboards or other) to run against data from this
 plugin:
-```
+```sql
 SELECT non_negative_derivative(mean(/^A$|^PTR$/), 5m) FROM bind_counter \
 WHERE "url" = 'localhost:8053' AND "type" = 'qtype' AND time > now() - 1h \
 GROUP BY time(5m), "type"

View File

@@ -7,7 +7,7 @@ Supported Burrow version: `1.x`
 ### Configuration
-```
+```toml
 [[inputs.burrow]]
 ## Burrow API endpoints in format "schema://host:port".
 ## Default is "http://localhost:8000".

View File

@@ -12,7 +12,7 @@ a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs
 The resulting JSON is parsed and grouped into collections, based on top-level key. Top-level keys are
 used as collection tags, and all sub-keys are flattened. For example:
-```
+```json
 {
 "paxos": {
 "refresh": 9363435,
@@ -44,7 +44,7 @@ the cluster. The currently supported commands are:
 ### Configuration:
-```
+```toml
 # Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
 [[inputs.ceph]]
 ## This is the recommended interval to poll. Too frequent and you will lose

View File

@@ -2,7 +2,7 @@
 ## Configuration:
-```
+```toml
 # Read per-node and per-bucket metrics from Couchbase
 [[inputs.couchbase]]
 ## specify servers via a url matching:

View File

@@ -8,7 +8,7 @@ the [upgrading steps][upgrading].
 ### Configuration:
-```
+```toml
 # Read metrics about dovecot servers
 [[inputs.dovecot]]
 ## specify dovecot servers via an address:port list

View File

@@ -4,7 +4,7 @@ This input plugin checks HTTP/HTTPS connections.
 ### Configuration:
-```
+```toml
 # HTTP/HTTPS request given an address a method and a timeout
 [[inputs.http_response]]
 ## Deprecated in 1.12, use 'urls'

View File

@@ -51,7 +51,7 @@ services and hosts. You can read Icinga2's documentation for their remote API
 ### Sample Queries:
-```
+```sql
 SELECT * FROM "icinga2_services" WHERE state_code = 0 AND time > now() - 24h // Service with OK status
 SELECT * FROM "icinga2_services" WHERE state_code = 1 AND time > now() - 24h // Service with WARNING status
 SELECT * FROM "icinga2_services" WHERE state_code = 2 AND time > now() - 24h // Service with CRITICAL status

View File

@@ -62,17 +62,17 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
 ### Sample Queries
 Get the max tx_latency for the last hour for all interfaces on all switches.
-```
+```sql
 SELECT max("tx_latency") AS "max_tx_latency" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
 ```
 Get the max queue_size for the last hour for all interfaces on all switches.
-```
+```sql
 SELECT max("queue_size") AS "max_queue_size" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
 ```
 Get the max buffer_size over the last hour for all switches.
-```
+```sql
 SELECT max("buffer_size") AS "max_buffer_size" FROM "global_buffer_usage_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname"
 ```

View File

@@ -67,7 +67,7 @@ View the current scores with a command, substituting your player name:
 ### Sample Queries:
 Get the number of jumps per player in the last hour:
-```
+```sql
 SELECT SPREAD("jumps") FROM "minecraft" WHERE time > now() - 1h GROUP BY "player"
 ```

View File

@@ -129,7 +129,7 @@ from unsigned values).
 ### Example Output
-```
+```sh
 $ ./telegraf -config telegraf.conf -input-filter modbus -test
 modbus.InputRegisters,host=orangepizero Current=0,Energy=0,Frecuency=60,Power=0,PowerFactor=0,Voltage=123.9000015258789 1554079521000000000
 ```

View File

@@ -117,7 +117,7 @@ InfluxDB due to the change of types. For this reason, you should keep the
 If preserving your old data is not required you may wish to drop conflicting
 measurements:
-```
+```sql
 DROP SERIES from mysql
 DROP SERIES from mysql_variables
 DROP SERIES from mysql_innodb

View File

@@ -71,7 +71,7 @@ programming. These tags are clearly marked in the list below and should be consi
 Get the mean temperature over the last 6 hours:
-```
+```sql
 SELECT mean("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time >= now() - 6h GROUP BY time(20s)
 ```
@@ -79,7 +79,7 @@ SELECT mean("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time
 #### sendRequest failure
 This indicates a problem communicating with the local Apex controller. If on Mac/Linux, try curl:
-```
+```sh
 $ curl apex.local/cgi-bin/status.xml
 ```
 to isolate the problem.

View File

@@ -53,7 +53,7 @@ Under Linux the system wide protocol metrics have the interface=all tag.
 You can use the following query to get the upload/download traffic rate per second for all interfaces in the last hour. The query uses the [derivative function](https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative) which calculates the rate of change between subsequent field values.
-```
+```sql
 SELECT derivative(first(bytes_recv), 1s) as "download bytes/sec", derivative(first(bytes_sent), 1s) as "upload bytes/sec" FROM net WHERE time > now() - 1h AND interface != 'all' GROUP BY time(10s), interface fill(0);
 ```
@@ -70,4 +70,4 @@ net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packet
 $ ./telegraf --config telegraf.conf --input-filter net --test
 net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000
 net,interface=all,host=HOST ip_reasmfails=0i,icmp_insrcquenchs=0i,icmp_outtimestamps=0i,ip_inhdrerrors=0i,ip_inunknownprotos=0i,icmp_intimeexcds=10i,icmp_outaddrmasks=0i,icmp_indestunreachs=11005i,icmpmsg_outtype0=6i,tcp_retranssegs=14669i,udplite_outdatagrams=0i,ip_reasmtimeout=0i,ip_outnoroutes=2577i,ip_inaddrerrors=186i,icmp_outaddrmaskreps=0i,tcp_incsumerrors=0i,tcp_activeopens=55965i,ip_reasmoks=0i,icmp_inechos=6i,icmp_outdestunreachs=9417i,ip_reasmreqds=0i,icmp_outtimestampreps=0i,tcp_rtoalgorithm=1i,icmpmsg_intype3=11005i,icmpmsg_outtype69=129i,tcp_outsegs=2777459i,udplite_rcvbuferrors=0i,ip_fragoks=0i,icmp_inmsgs=13398i,icmp_outerrors=0i,tcp_outrsts=14951i,udplite_noports=0i,icmp_outmsgs=11517i,icmp_outechoreps=6i,icmpmsg_intype11=10i,icmp_inparmprobs=0i,ip_forwdatagrams=0i,icmp_inechoreps=1909i,icmp_outredirects=0i,icmp_intimestampreps=0i,icmpmsg_intype5=468i,tcp_rtomax=120000i,tcp_maxconn=-1i,ip_fragcreates=0i,ip_fragfails=0i,icmp_inredirects=468i,icmp_outtimeexcds=0i,icmp_outechos=1965i,icmp_inaddrmasks=0i,tcp_inerrs=389i,tcp_rtomin=200i,ip_defaultttl=64i,ip_outrequests=3366408i,ip_forwarding=2i,udp_incsumerrors=0i,udp_indatagrams=522136i,udplite_incsumerrors=0i,ip_outdiscards=871i,icmp_inerrors=958i,icmp_outsrcquenchs=0i,icmpmsg_intype0=1909i,tcp_insegs=3580226i,udp_outdatagrams=577265i,udp_rcvbuferrors=0i,udplite_sndbuferrors=0i,icmp_incsumerrors=0i,icmp_outparmprobs=0i,icmpmsg_outtype3=9417i,tcp_attemptfails=2652i,udplite_inerrors=0i,udplite_indatagrams=0i,ip_inreceives=4172969i,icmpmsg_outtype8=1965i,tcp_currestab=59i,udp_noports=5961i,ip_indelivers=4099279i,ip_indiscards=0i,tcp_estabresets=5818i,udp_sndbuferrors=3i,icmp_intimestamps=0i,icmpmsg_intype8=6i,udp_inerrors=0i,icmp_inaddrmaskreps=0i,tcp_passiveopens=452i 1492831540000000000
-``
+```

View File

@@ -2,7 +2,7 @@
 ### Configuration:
-```
+```toml
 # Read Nginx's basic status information (ngx_http_stub_status_module)
 [[inputs.nginx]]
 ## An array of Nginx stub_status URI to gather stats.
@@ -39,14 +39,14 @@
 ### Example Output:
 Using this configuration:
-```
+```toml
 [[inputs.nginx]]
 ## An array of Nginx stub_status URI to gather stats.
 urls = ["http://localhost/status"]
 ```
 When run with:
-```
+```sh
 ./telegraf --config telegraf.conf --input-filter nginx --test
 ```

View File

@@ -7,7 +7,7 @@ Structures for Nginx Plus have been built based on history of
 ### Configuration:
-```
+```toml
 # Read Nginx Plus' advanced status information
 [[inputs.nginx_plus]]
 ## An array of Nginx status URIs to gather stats.
@@ -81,14 +81,14 @@ Structures for Nginx Plus have been built based on history of
 ### Example Output:
 Using this configuration:
-```
+```toml
 [[inputs.nginx_plus]]
 ## An array of Nginx Plus status URIs to gather stats.
 urls = ["http://localhost/status"]
 ```
 When run with:
-```
+```sh
 ./telegraf -config telegraf.conf -input-filter nginx_plus -test
 ```

View File

@@ -4,7 +4,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
 ### Configuration:
-```
+```toml
 # Read Nginx Plus API advanced status information
 [[inputs.nginx_plus_api]]
 ## An array of Nginx API URIs to gather stats.
@@ -201,14 +201,14 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
 ### Example Output:
 Using this configuration:
-```
+```toml
 [[inputs.nginx_plus_api]]
 ## An array of Nginx Plus API URIs to gather stats.
 urls = ["http://localhost/api"]
 ```
 When run with:
-```
+```sh
 ./telegraf -config telegraf.conf -input-filter nginx_plus_api -test
 ```

View File

@@ -10,7 +10,7 @@ checks. This information can be exported in JSON format and parsed by this input
 ### Configuration:
-```
+```toml
 ## An URL where Nginx Upstream check module is enabled
 ## It should be set to return a JSON formatted response
 url = "http://127.0.0.1/status?format=json"
@@ -63,7 +63,7 @@ state of every server and, possible, add some monitoring to watch over it. Influ
 ### Example Output:
 When run with:
-```
+```sh
 ./telegraf --config telegraf.conf --input-filter nginx_upstream_check --test
 ```

View File

@@ -5,7 +5,7 @@ For module configuration details please see its [documentation](https://github.c
 ### Configuration:
-```
+```toml
 # Read nginx status information using nginx-module-vts module
 [[inputs.nginx_vts]]
 ## An array of Nginx status URIs to gather stats.
@@ -99,14 +99,14 @@ For module configuration details please see its [documentation](https://github.c
 ### Example Output:
 Using this configuration:
-```
+```toml
 [[inputs.nginx_vts]]
 ## An array of Nginx status URIs to gather stats.
 urls = ["http://localhost/status"]
 ```
 When run with:
-```
+```sh
 ./telegraf -config telegraf.conf -input-filter nginx_vts -test
 ```

View File

@@ -57,7 +57,7 @@ You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program
 The below query could be used to alert on the average temperature of your GPUs over the last minute
-```
+```sql
 SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP BY time(1m), "index", "name", "host"
 ```
@@ -66,7 +66,7 @@ SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP B
 Check the full output by running the `nvidia-smi` binary manually.
 Linux:
-```
+```sh
 sudo -u telegraf -- /usr/bin/nvidia-smi -q -x
 ```

View File

@@ -35,7 +35,9 @@ To use this plugin you must enable the [slapd monitoring](https://www.openldap.o
 All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and **monitorOpCompleted** attributes are gathered based on this LDAP query:
-```(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))```
+```
+(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))
+```
 Metric names are based on their entry DN with the cn=Monitor base removed. If `reverse_metric_names` is not set, metrics are based on their DN. If `reverse_metric_names` is set to `true`, the names are reversed. This is recommended as it allows the names to sort more naturally.
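The DN-to-metric-name mapping described above can be sketched roughly as follows (an illustration of the reversal idea only, not the plugin's actual implementation; the helper name, lowercasing, and underscore joining are assumptions):

```python
def metric_name(dn: str, reverse: bool = False) -> str:
    """Build a metric name from an LDAP entry DN (illustrative sketch).

    Drops the cn=Monitor base, strips the attribute prefixes, and
    optionally reverses the components so names sort by subtree.
    """
    parts = [p.split("=", 1)[1].lower().replace(" ", "_")
             for p in dn.split(",") if p.lower() != "cn=monitor"]
    if reverse:
        parts.reverse()
    return "_".join(parts)

# "cn=Current,cn=Connections,cn=Monitor" maps to "current_connections",
# or to "connections_current" with reverse=True, which groups all
# Connections metrics together when sorted.
```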

View File

@@ -57,7 +57,7 @@ host=localhost user=pgotest dbname=app_production sslmode=require sslkey=/etc/te
 ```
 ### Configuration example
-```
+```toml
 [[inputs.postgresql]]
 address = "postgres://telegraf@localhost/someDB"
 ignored_databases = ["template0", "template1"]

View File

@@ -11,7 +11,7 @@ The example below has two queries specified, with the following parameters:
 * The name of the measurement
 * A list of the columns to be defined as tags
-```
+```toml
 [[inputs.postgresql_extensible]]
 # specify address via a url matching:
 # postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...
@@ -76,7 +76,7 @@ using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs
 # Sample Queries :
 - telegraf.conf postgresql_extensible queries (assuming that you have configured
 your connection correctly)
-```
+```toml
 [[inputs.postgresql_extensible.query]]
 sqlquery="SELECT * FROM pg_stat_database"
 version=901

View File

@@ -4,7 +4,7 @@ The powerdns plugin gathers metrics about PowerDNS using unix socket.
 ### Configuration:
-```
+```toml
 # Description
 [[inputs.powerdns]]
 # An array of sockets to gather stats about.

View File

@@ -103,7 +103,7 @@ If you want to monitor Caddy, you need to use Caddy with its Prometheus plugin:
 * Restart Caddy
 * Configure Telegraf to fetch metrics on it:
-```
+```toml
 [[inputs.prometheus]]
 # ## An array of urls to scrape metrics from.
 urls = ["http://localhost:9180/metrics"]

View File

@@ -2,7 +2,7 @@
 ### Configuration:
-```
+```toml
 # Read Redis's basic status information
 [[inputs.redis]]
 ## specify servers via a url matching:
@@ -153,7 +153,7 @@ Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate) a
 ### Example Output:
 Using this configuration:
-```
+```toml
 [[inputs.redis]]
 ## specify servers via a url matching:
 ## [protocol://][:password]@address[:port]

View File

@@ -6,7 +6,7 @@ package installed.
 This plugin collects sensor metrics with the `sensors` executable from the lm-sensor package.
 ### Configuration:
-```
+```toml
 # Monitor sensors, requires lm-sensors package
 [[inputs.sensors]]
 ## Remove numbers from field names.

View File

@@ -9,7 +9,7 @@ Tested from 3.5 to 7.*
 ### Configuration:
-```
+```toml
 [[inputs.solr]]
 ## specify a list of one or more Solr servers
 servers = ["http://localhost:8983"]

View File

@@ -29,14 +29,14 @@ The following synproxy counters are gathered
 ### Sample Queries
 Get the number of packets per 5 minutes for the measurement in the last hour from InfluxDB:
-```
+```sql
 SELECT difference(last("cookie_invalid")) AS "cookie_invalid", difference(last("cookie_retrans")) AS "cookie_retrans", difference(last("cookie_valid")) AS "cookie_valid", difference(last("entries")) AS "entries", difference(last("syn_received")) AS "syn_received", difference(last("conn_reopened")) AS "conn_reopened" FROM synproxy WHERE time > NOW() - 1h GROUP BY time(5m) FILL(null);
 ```
 ### Troubleshooting
 Execute the following CLI command in Linux to test the synproxy counters:
-```
+```sh
 cat /proc/net/stat/synproxy
 ```

View File

@@ -7,7 +7,7 @@ the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/T
 ### Configuration:
-```
+```toml
 # Reads metrics from a Teamspeak 3 Server via ServerQuery
 [[inputs.teamspeak]]
 ## Server address for Teamspeak 3 ServerQuery

View File

@@ -19,7 +19,7 @@ For example, to disable collection of VMs, add this:
 vm_metric_exclude = [ "*" ]
 ```
-```
+```toml
 # Read metrics from one or many vCenters
 [[inputs.vsphere]]
 ## List of vCenter URLs to be monitored. These three lines must be uncommented
@@ -286,7 +286,7 @@ This distinction has an impact on how Telegraf collects metrics. A single instan
 This will disrupt the metric collection and can result in missed samples. The best practice workaround is to specify two instances of the vSphere plugin, one for the realtime metrics with a short collection interval and one for the historical metrics with a longer interval. You can use the ```*_metric_exclude``` to turn off the resources you don't want to collect metrics for in each instance. For example:
-```
+```toml
 ## Realtime instance
 [[inputs.vsphere]]
 interval = "60s"
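The two-instance workaround that hunk describes could look like this (a sketch only; the intervals, the vCenter URL, and the exact exclude lists are illustrative assumptions, not values prescribed by the README):

```toml
## Realtime instance: short interval, skip historical-only resources
[[inputs.vsphere]]
  interval = "60s"
  vcenters = ["https://vcenter.local/sdk"]
  datastore_metric_exclude = ["*"]
  cluster_metric_exclude = ["*"]
  datacenter_metric_exclude = ["*"]

## Historical instance: longer interval, skip realtime resources
[[inputs.vsphere]]
  interval = "300s"
  vcenters = ["https://vcenter.local/sdk"]
  host_metric_exclude = ["*"]
  vm_metric_exclude = ["*"]
```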

View File

@@ -3,7 +3,7 @@
 You should configure your Particle.io's Webhooks to point at the `webhooks` service. To do this go to [https://console.particle.io](https://console.particle.io/) and click `Integrations > New Integration > Webhook`. In the resulting page set `URL` to `http://<my_ip>:1619/particle`, and under `Advanced Settings` click on `JSON` and add:
-```
+```json
 {
 "measurement": "your_measurement_name"
 }

View File

@@ -173,7 +173,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ## Examples
 ### Generic Queries
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # Processor usage, alternative to native, reports on a per core.
@@ -217,7 +217,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### Active Directory Domain Controller
-```
+```toml
 [[inputs.win_perf_counters]]
 [inputs.win_perf_counters.tags]
 monitorgroup = "ActiveDirectory"
@@ -245,7 +245,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### DFS Namespace + Domain Controllers
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # AD, DFS N, Useful if the server hosts a DFS Namespace or is a Domain Controller
@@ -258,7 +258,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### DFS Replication + Domain Controllers
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # AD, DFS R, Useful if the server hosts a DFS Replication folder or is a Domain Controller
@@ -271,7 +271,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### DNS Server + Domain Controllers
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 ObjectName = "DNS"
@@ -282,7 +282,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### IIS / ASP.NET
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # HTTP Service request queues in the Kernel before being handed over to User Mode.
@@ -326,7 +326,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### Process
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # Process metrics, in this case for IIS only
@@ -338,7 +338,7 @@ if any of the combinations of ObjectName/Instances/Counters are invalid.
 ```
 ### .NET Monitoring
-```
+```toml
 [[inputs.win_perf_counters]]
 [[inputs.win_perf_counters.object]]
 # .NET CLR Exceptions, in this case for IIS only

View File

@@ -48,7 +48,7 @@ put nine.telegraf.ping_average_response_ms 1441910366 24.006000 dc=homeoffice ho
 The OpenTSDB telnet interface can be simulated with this reader:
-```
+```go
 // opentsdb_telnet_mode_mock.go
 package main

View File

@@ -100,7 +100,7 @@ columns and rows.
 ### Examples
 Config:
-```
+```toml
 [[inputs.file]]
 files = ["example"]
 data_format = "csv"
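The sweep above appears to have been made by hand, but the transformation itself is mechanical. A minimal sketch of a helper that could annotate bare opening fences in a README (a hypothetical script, not part of this commit):

```python
def annotate_fences(text: str, lang: str) -> str:
    """Add a language hint to bare opening code fences.

    Fences toggle an open/closed state while scanning line by line;
    only a bare opening ``` is annotated, closing fences and fences
    that already carry a language are left untouched.
    """
    out, in_block = [], False
    for line in text.splitlines():
        if line.strip().startswith("```"):
            if not in_block and line.strip() == "```":
                line = line.replace("```", "```" + lang, 1)
            in_block = not in_block
        out.append(line)
    return "\n".join(out)
```

In practice each block still needs a human-chosen language (this commit uses `toml` for configs, `sql` for queries, `sh` for shell transcripts, `json` and `go` elsewhere), so a helper like this would only cover the single-language case.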