chore: clean up all markdown lint error on input plugins n through r (#10168)

Mya 2021-11-24 11:50:01 -07:00 committed by GitHub
parent 0d8d118319
commit d4582dca70
50 changed files with 1228 additions and 1143 deletions


@ -3,7 +3,7 @@
The [NATS](http://www.nats.io/about/) monitoring plugin gathers metrics from
the NATS [monitoring http server](https://www.nats.io/documentation/server/gnatsd-monitoring/).
### Configuration
## Configuration
```toml
[[inputs.nats]]
@ -14,7 +14,7 @@ the NATS [monitoring http server](https://www.nats.io/documentation/server/gnats
# response_timeout = "5s"
```
### Metrics:
## Metrics
- nats
- tags
@ -35,8 +35,8 @@ the NATS [monitoring http server](https://www.nats.io/documentation/server/gnats
- out_msgs (integer, count)
- in_bytes (integer, bytes)
### Example Output:
## Example Output
```
```shell
nats,server=http://localhost:8222 uptime=117158348682i,mem=6647808i,subscriptions=0i,out_bytes=0i,connections=0i,in_msgs=0i,total_connections=0i,cores=2i,cpu=0,slow_consumers=0i,routes=0i,remotes=0i,out_msgs=0i,in_bytes=0i 1517015107000000000
```


@ -6,7 +6,7 @@ creates metrics using one of the supported [input data formats][].
A [Queue Group][queue group] is used when subscribing to subjects so multiple
instances of telegraf can read from a NATS cluster in parallel.
### Configuration:
## Configuration
```toml
[[inputs.nats_consumer]]


@ -6,8 +6,7 @@ in the telegraf.conf configuration file.
The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects real-time data from the Apex's status.xml page.
### Configuration
## Configuration
```toml
[[inputs.neptune_apex]]
@ -25,7 +24,7 @@ The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects real-t
```
### Metrics
## Metrics
The Neptune Apex controller family allows aquarium hobbyists to monitor and control
their tanks based on various probes. The data is taken directly from the /cgi-bin/status.xml at the interval specified
@ -62,38 +61,42 @@ programming. These tags are clearly marked in the list below and should be consi
- power_failed (int64, Unix epoch in ns) when the controller last lost power. Omitted if the apex reports it as "none"
- power_restored (int64, Unix epoch in ns) when the controller last powered on. Omitted if the apex reports it as "none"
- serial (string, serial number)
- time:
- The time used for the metric is parsed from the status.xml page. This helps when cross-referencing events with
- time:
- The time used for the metric is parsed from the status.xml page. This helps when cross-referencing events with
the local system of Apex Fusion. Since the Apex uses NTP, this should not matter in most scenarios.
### Sample Queries
## Sample Queries
Get the mean temperature, in 20-second intervals, for the last six hours:
```sql
SELECT mean("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time >= now() - 6h GROUP BY time(20s)
```
### Troubleshooting
## Troubleshooting
### sendRequest failure
#### sendRequest failure
This indicates a problem communicating with the local Apex controller. If on Mac/Linux, try curl:
```sh
$ curl apex.local/cgi-bin/status.xml
curl apex.local/cgi-bin/status.xml
```
to isolate the problem.
#### parseXML errors
### parseXML errors
Ensure the XML being returned is valid. If you get valid XML back, open a bug request.
#### Missing fields/data
### Missing fields/data
The neptune_apex plugin is strict on its input to prevent any conversion errors. If you have fields in the status.xml
output that are not converted to a metric, open a feature request and paste your whole status.xml.
### Example Output
## Example Output
```
```text
neptune_apex,hardware=1.0,host=ubuntu,software=5.04_7A18,source=apex,type=controller power_failed=1544814000000000000i,power_restored=1544833875000000000i,serial="AC5:12345" 1545978278000000000
neptune_apex,device_id=base_Var1,hardware=1.0,host=ubuntu,name=VarSpd1_I1,output_id=0,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF1" 1545978278000000000
neptune_apex,device_id=base_Var2,hardware=1.0,host=ubuntu,name=VarSpd2_I2,output_id=1,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF2" 1545978278000000000
@ -138,7 +141,7 @@ neptune_apex,hardware=1.0,host=ubuntu,name=Volt_4,software=5.04_7A18,source=apex
```
### Contributing
## Contributing
This plugin is used for mission-critical aquatic life support. A bug could very well result in the death of animals.
Neptune does not publish a schema file and as such, we have made this plugin very strict on input with no provisions for


@ -2,7 +2,7 @@
This plugin collects TCP connection state and UDP socket counts by using `lsof`.
### Configuration:
## Configuration
``` toml
# Collect TCP connections state and UDP socket counts
@ -10,7 +10,7 @@ This plugin collects TCP connections state and UDP socket counts by using `lsof`
# no configuration
```
# Measurements:
## Measurements
Supported TCP connection states are as follows.
@ -27,12 +27,14 @@ Supported TCP Connection states are follows.
- closing
- none
### TCP Connection State measurements:
## TCP Connection State measurements
Meta:
- units: counts
Measurement names:
- tcp_established
- tcp_syn_sent
- tcp_syn_recv
@ -48,10 +50,12 @@ Measurement names:
If there are no connections in a given state, that metric is not counted.
### UDP socket counts measurements:
## UDP socket counts measurements
Meta:
- units: counts
Measurement names:
- udp_socket


@ -2,7 +2,7 @@
This plugin gathers metrics about network interface and protocol usage (Linux only).
### Configuration:
## Configuration
```toml
# Gather metrics about network interfaces
@ -21,7 +21,7 @@ This plugin gathers metrics about network interface and protocol usage (Linux on
##
```
### Measurements & Fields:
## Measurements & Fields
The fields from this plugin are gathered in the _net_ measurement.
@ -42,14 +42,14 @@ Under freebsd/openbsd and darwin the plugin uses netstat.
Additionally, for the time being _only under Linux_, the plugin gathers system wide stats for different network protocols using /proc/net/snmp (tcp, udp, icmp, etc.).
Explanation of the different metrics exposed by snmp is out of the scope of this document. The best way to find information would be tracing the constants in the Linux kernel source [here](https://elixir.bootlin.com/linux/latest/source/net/ipv4/proc.c) and their usage. If /proc/net/snmp cannot be read for some reason, telegraf ignores the error silently.
### Tags:
## Tags
* Net measurements have the following tags:
- interface (the interface from which metrics are gathered)
* interface (the interface from which metrics are gathered)
Under Linux the system wide protocol metrics have the interface=all tag.
### Sample Queries:
## Sample Queries
You can use the following query to get the upload/download traffic rate per second for all interfaces in the last hour. The query uses the [derivative function](https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative) which calculates the rate of change between subsequent field values.
@ -57,15 +57,15 @@ You can use the following query to get the upload/download traffic rate per seco
SELECT derivative(first(bytes_recv), 1s) as "download bytes/sec", derivative(first(bytes_sent), 1s) as "upload bytes/sec" FROM net WHERE time > now() - 1h AND interface != 'all' GROUP BY time(10s), interface fill(0);
```
### Example Output:
## Example Output
```
```shell
# All platforms
$ ./telegraf --config telegraf.conf --input-filter net --test
net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000
```
```
```shell
# Linux
$ ./telegraf --config telegraf.conf --input-filter net --test
net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000


@ -3,7 +3,7 @@
The input plugin tests UDP/TCP connection response time and can optionally
verify text in the response.
### Configuration:
## Configuration
```toml
# Collect response time of a TCP or UDP connection
@ -33,7 +33,7 @@ verify text in the response.
# fielddrop = ["result_type", "string_found"]
```
### Metrics:
## Metrics
- net_response
- tags:
@ -47,9 +47,9 @@ verify text in the response.
- result_type (string) **DEPRECATED in 1.7; use result tag**
- string_found (boolean) **DEPRECATED in 1.4; use result tag**
### Example Output:
## Example Output
```
```shell
net_response,port=8086,protocol=tcp,result=success,server=localhost response_time=0.000092948,result_code=0i,result_type="success" 1525820185000000000
net_response,port=8080,protocol=tcp,result=connection_failed,server=localhost result_code=2i,result_type="connection_failed" 1525820088000000000
net_response,port=8080,protocol=udp,result=read_failed,server=localhost result_code=3i,result_type="read_failed",string_found=false 1525820088000000000


@ -5,7 +5,7 @@ If `fullstat` is set, a great deal of additional metrics are collected, detailed
**NOTE** Many of the metrics, even if tagged with a mount point, are really _per-server_. Thus, if you mount these two shares: `nfs01:/vol/foo/bar` and `nfs01:/vol/foo/baz`, there will be two near identical entries in /proc/self/mountstats. This is a limitation of the metrics exposed by the kernel, not the telegraf plugin.
### Configuration
## Configuration
```toml
[[inputs.nfsclient]]
@ -35,7 +35,9 @@ If `fullstat` is set, a great deal of additional metrics are collected, detailed
# include_operations = []
# exclude_operations = []
```
#### Configuration Options
### Configuration Options
- **fullstat** bool: Collect per-operation type metrics. Defaults to false.
- **include_mounts** list(string): gather metrics for only these mounts. Default is to watch all mounts.
- **exclude_mounts** list(string): gather metrics for all mounts, except those listed in this option. Excludes take precedence over includes.
@ -44,121 +46,119 @@ If `fullstat` is set, a great deal of additional metrics are collected, detailed
*N.B.* the `include_mounts` and `exclude_mounts` arguments are both applied to the local mount location (e.g. /mnt/NFS), not the server export (e.g. nfsserver:/vol/NFS). Go regexp patterns can be used in either.
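For illustration, a hedged sketch of how these options might be combined (the mount paths and patterns below are made up; the option names come from the list above):

```toml
[[inputs.nfsclient]]
  ## collect per-operation metrics in addition to the basic read/write stats
  fullstat = true
  ## only gather metrics for mounts whose local path matches this Go regexp
  include_mounts = ["^/mnt/"]
  ## ...but never for the scratch mount, since excludes take precedence over includes
  exclude_mounts = ["^/mnt/scratch"]
```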
#### References
### References
1. [nfsiostat](http://git.linux-nfs.org/?p=steved/nfs-utils.git;a=summary)
2. [net/sunrpc/stats.c - Linux source code](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sunrpc/stats.c)
3. [What is in /proc/self/mountstats for NFS mounts: an introduction](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex)
4. [The xprt: data for NFS mounts in /proc/self/mountstats](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsXprt)
## Metrics
### Metrics
#### Fields
### Fields
- nfsstat
- bytes (integer, bytes) - The total number of bytes exchanged doing this operation. This is bytes sent *and* received, including overhead *and* payload. (bytes = OP_bytes_sent + OP_bytes_recv. See nfs_ops below)
- ops (integer, count) - The number of operations of this type executed.
- retrans (integer, count) - The number of times an operation had to be retried (retrans = OP_trans - OP_ops. See nfs_ops below)
- exe (integer, milliseconds) - The number of milliseconds it took to process the operations.
- rtt (integer, milliseconds) - The round-trip time for operations.
- bytes (integer, bytes) - The total number of bytes exchanged doing this operation. This is bytes sent *and* received, including overhead *and* payload. (bytes = OP_bytes_sent + OP_bytes_recv. See nfs_ops below)
- ops (integer, count) - The number of operations of this type executed.
- retrans (integer, count) - The number of times an operation had to be retried (retrans = OP_trans - OP_ops. See nfs_ops below)
- exe (integer, milliseconds) - The number of milliseconds it took to process the operations.
- rtt (integer, milliseconds) - The round-trip time for operations.
In addition, enabling `fullstat` will make many more metrics available.
#### Tags
### Tags
- All measurements have the following tags:
- mountpoint - The local mountpoint, for instance: "/var/www"
- serverexport - The full server export, for instance: "nfsserver.example.org:/export"
- mountpoint - The local mountpoint, for instance: "/var/www"
- serverexport - The full server export, for instance: "nfsserver.example.org:/export"
- Measurements nfsstat and nfs_ops will also include:
- operation - the NFS operation in question. `READ` or `WRITE` for nfsstat, but potentially one of ~20 or ~50, depending on NFS version. A complete list of operations supported is visible in `/proc/self/mountstats`.
- operation - the NFS operation in question. `READ` or `WRITE` for nfsstat, but potentially one of ~20 or ~50, depending on NFS version. A complete list of operations supported is visible in `/proc/self/mountstats`.
### Additional metrics
## Additional metrics
When `fullstat` is true, additional measurements are collected. Tags are the same as above.
#### NFS Operations
### NFS Operations
Most descriptions come from Reference [[3](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex)] and `nfs_iostat.h`. Field order and names are the same as in `/proc/self/mountstats` and the Kernel source.
Please refer to `/proc/self/mountstats` for a list of supported NFS operations, as it changes occasionally.
- nfs_bytes
- fields:
- normalreadbytes (int, bytes): Bytes read from the server via `read()`
- normalwritebytes (int, bytes): Bytes written to the server via `write()`
- directreadbytes (int, bytes): Bytes read with O_DIRECT set
- directwritebytes (int, bytes): Bytes written with O_DIRECT set
- serverreadbytes (int, bytes): Bytes read via NFS READ (via `mmap()`)
- serverwritebytes (int, bytes): Bytes written via NFS WRITE (via `mmap()`)
- readpages (int, count): Number of pages read
- writepages (int, count): Number of pages written
- fields:
- normalreadbytes (int, bytes): Bytes read from the server via `read()`
- normalwritebytes (int, bytes): Bytes written to the server via `write()`
- directreadbytes (int, bytes): Bytes read with O_DIRECT set
- directwritebytes (int, bytes): Bytes written with O_DIRECT set
- serverreadbytes (int, bytes): Bytes read via NFS READ (via `mmap()`)
- serverwritebytes (int, bytes): Bytes written via NFS WRITE (via `mmap()`)
- readpages (int, count): Number of pages read
- writepages (int, count): Number of pages written
- nfs_events (Per-event metrics)
- fields:
- inoderevalidates (int, count): How many times cached inode attributes have to be re-validated from the server.
- dentryrevalidates (int, count): How many times cached dentry nodes have to be re-validated.
- datainvalidates (int, count): How many times an inode had its cached data thrown out.
- attrinvalidates (int, count): How many times an inode has had cached inode attributes invalidated.
- vfsopen (int, count): How many times files or directories have been `open()`'d.
- vfslookup (int, count): How many name lookups in directories there have been.
- vfsaccess (int, count): Number of calls to `access()`. (formerly called "vfspermission")
- vfsupdatepage (int, count): Count of updates (and potential writes) to pages.
- vfsreadpage (int, count): Number of pages read.
- vfsreadpages (int, count): Count of how many times a _group_ of pages was read (possibly via `mmap()`?).
- vfswritepage (int, count): Number of pages written.
- vfswritepages (int, count): Count of how many times a _group_ of pages was written (possibly via `mmap()`?)
- vfsgetdents (int, count): Count of directory entry reads with getdents(). These reads can be served from cache and don't necessarily imply actual NFS requests. (formerly called "vfsreaddir")
- vfssetattr (int, count): How many times we've set attributes on inodes.
- vfsflush (int, count): Count of times pending writes have been forcibly flushed to the server.
- vfsfsync (int, count): Count of calls to `fsync()` on directories and files.
- vfslock (int, count): Number of times a lock was attempted on a file (regardless of success or not).
- vfsrelease (int, count): Number of calls to `close()`.
- congestionwait (int, count): Believed unused by the Linux kernel, but it is part of the NFS spec.
- setattrtrunc (int, count): How many times files have had their size truncated.
- extendwrite (int, count): How many times a file has been grown because you're writing beyond the existing end of the file.
- sillyrenames (int, count): Number of times an in-use file was removed (thus creating a temporary ".nfsXXXXXX" file)
- shortreads (int, count): Number of times the NFS server returned less data than requested.
- shortwrites (int, count): Number of times NFS server reports it wrote less data than requested.
- delay (int, count): Occurrences of EJUKEBOX ("Jukebox Delay", probably unused)
- pnfsreads (int, count): Count of NFS v4.1+ pNFS reads.
- pnfswrites (int, count): Count of NFS v4.1+ pNFS writes.
- fields:
- inoderevalidates (int, count): How many times cached inode attributes have to be re-validated from the server.
- dentryrevalidates (int, count): How many times cached dentry nodes have to be re-validated.
- datainvalidates (int, count): How many times an inode had its cached data thrown out.
- attrinvalidates (int, count): How many times an inode has had cached inode attributes invalidated.
- vfsopen (int, count): How many times files or directories have been `open()`'d.
- vfslookup (int, count): How many name lookups in directories there have been.
- vfsaccess (int, count): Number of calls to `access()`. (formerly called "vfspermission")
- vfsupdatepage (int, count): Count of updates (and potential writes) to pages.
- vfsreadpage (int, count): Number of pages read.
- vfsreadpages (int, count): Count of how many times a _group_ of pages was read (possibly via `mmap()`?).
- vfswritepage (int, count): Number of pages written.
- vfswritepages (int, count): Count of how many times a _group_ of pages was written (possibly via `mmap()`?)
- vfsgetdents (int, count): Count of directory entry reads with getdents(). These reads can be served from cache and don't necessarily imply actual NFS requests. (formerly called "vfsreaddir")
- vfssetattr (int, count): How many times we've set attributes on inodes.
- vfsflush (int, count): Count of times pending writes have been forcibly flushed to the server.
- vfsfsync (int, count): Count of calls to `fsync()` on directories and files.
- vfslock (int, count): Number of times a lock was attempted on a file (regardless of success or not).
- vfsrelease (int, count): Number of calls to `close()`.
- congestionwait (int, count): Believed unused by the Linux kernel, but it is part of the NFS spec.
- setattrtrunc (int, count): How many times files have had their size truncated.
- extendwrite (int, count): How many times a file has been grown because you're writing beyond the existing end of the file.
- sillyrenames (int, count): Number of times an in-use file was removed (thus creating a temporary ".nfsXXXXXX" file)
- shortreads (int, count): Number of times the NFS server returned less data than requested.
- shortwrites (int, count): Number of times NFS server reports it wrote less data than requested.
- delay (int, count): Occurrences of EJUKEBOX ("Jukebox Delay", probably unused)
- pnfsreads (int, count): Count of NFS v4.1+ pNFS reads.
- pnfswrites (int, count): Count of NFS v4.1+ pNFS writes.
- nfs_xprt_tcp
- fields:
- bind_count (int, count): Number of _completely new_ mounts to this server (sometimes 0?)
- connect_count (int, count): How many times the client has connected to the server in question
- connect_time (int, jiffies): How long the NFS client has spent waiting for its connection(s) to the server to be established.
- idle_time (int, seconds): How long (in seconds) since the NFS mount saw any RPC traffic.
- rpcsends (int, count): How many RPC requests this mount has sent to the server.
- rpcreceives (int, count): How many RPC replies this mount has received from the server.
- badxids (int, count): Count of XIDs sent by the server that the client doesn't know about.
- inflightsends (int, count): Number of outstanding requests; always >1. (See reference #4 for comment on this field)
- backlogutil (int, count): Cumulative backlog count
- fields:
- bind_count (int, count): Number of _completely new_ mounts to this server (sometimes 0?)
- connect_count (int, count): How many times the client has connected to the server in question
- connect_time (int, jiffies): How long the NFS client has spent waiting for its connection(s) to the server to be established.
- idle_time (int, seconds): How long (in seconds) since the NFS mount saw any RPC traffic.
- rpcsends (int, count): How many RPC requests this mount has sent to the server.
- rpcreceives (int, count): How many RPC replies this mount has received from the server.
- badxids (int, count): Count of XIDs sent by the server that the client doesn't know about.
- inflightsends (int, count): Number of outstanding requests; always >1. (See reference #4 for comment on this field)
- backlogutil (int, count): Cumulative backlog count
- nfs_xprt_udp
- fields:
- [same as nfs_xprt_tcp, except for connect_count, connect_time, and idle_time]
- fields:
- [same as nfs_xprt_tcp, except for connect_count, connect_time, and idle_time]
- nfs_ops
- fields (In all cases, the `operations` tag is set to the uppercase name of the NFS operation, _e.g._ "READ", "FSINFO", _etc_. See /proc/self/mountstats for a full list):
- ops (int, count): Total operations of this type.
- trans (int, count): Total transmissions of this type, including retransmissions: `OP_trans - OP_ops = total_retransmissions` (lower is better).
- timeouts (int, count): Number of major timeouts.
- bytes_sent (int, count): Bytes sent, including headers (should be close to on-wire size).
- bytes_recv (int, count): Bytes received, including headers (should also be close to on-wire size).
- queue_time (int, milliseconds): Cumulative time a request waited in the queue before sending this OP type.
- response_time (int, milliseconds): Cumulative time waiting for a response for this OP type.
- total_time (int, milliseconds): Cumulative time a request waited in the queue before sending.
- errors (int, count): Total number of operations that complete with tk_status < 0 (usually errors). This is a new field, present in kernel >=5.3, mountstats version 1.1
- fields (In all cases, the `operations` tag is set to the uppercase name of the NFS operation, _e.g._ "READ", "FSINFO", _etc_. See /proc/self/mountstats for a full list):
- ops (int, count): Total operations of this type.
- trans (int, count): Total transmissions of this type, including retransmissions: `OP_trans - OP_ops = total_retransmissions` (lower is better).
- timeouts (int, count): Number of major timeouts.
- bytes_sent (int, count): Bytes sent, including headers (should be close to on-wire size).
- bytes_recv (int, count): Bytes received, including headers (should also be close to on-wire size).
- queue_time (int, milliseconds): Cumulative time a request waited in the queue before sending this OP type.
- response_time (int, milliseconds): Cumulative time waiting for a response for this OP type.
- total_time (int, milliseconds): Cumulative time a request waited in the queue before sending.
- errors (int, count): Total number of operations that complete with tk_status < 0 (usually errors). This is a new field, present in kernel >=5.3, mountstats version 1.1
## Example Output
### Example Output
For basic metrics showing server-wise read and write data.
```
```shell
nfsstat,mountpoint=/NFS,operation=READ,serverexport=1.2.3.4:/storage/NFS ops=600i,retrans=1i,bytes=1207i,rtt=606i,exe=607i 1612651512000000000
nfsstat,mountpoint=/NFS,operation=WRITE,serverexport=1.2.3.4:/storage/NFS bytes=1407i,rtt=706i,exe=707i,ops=700i,retrans=1i 1612651512000000000
@ -168,7 +168,7 @@ For `fullstat=true` metrics, which includes additional measurements for `nfs_byt
Additionally, per-OP metrics are collected, with examples for READ, LOOKUP, and NULL shown.
Please refer to `/proc/self/mountstats` for a list of supported NFS operations, as it changes periodically.
```
```shell
nfs_bytes,mountpoint=/home,serverexport=nfs01:/vol/home directreadbytes=0i,directwritebytes=0i,normalreadbytes=42648757667i,normalwritebytes=0i,readpages=10404603i,serverreadbytes=42617098139i,serverwritebytes=0i,writepages=0i 1608787697000000000
nfs_events,mountpoint=/home,serverexport=nfs01:/vol/home attrinvalidates=116i,congestionwait=0i,datainvalidates=65i,delay=0i,dentryrevalidates=5911243i,extendwrite=0i,inoderevalidates=200378i,pnfsreads=0i,pnfswrites=0i,setattrtrunc=0i,shortreads=0i,shortwrites=0i,sillyrenames=0i,vfsaccess=7203852i,vfsflush=117405i,vfsfsync=0i,vfsgetdents=3368i,vfslock=0i,vfslookup=740i,vfsopen=157281i,vfsreadpage=16i,vfsreadpages=86874i,vfsrelease=155526i,vfssetattr=0i,vfsupdatepage=0i,vfswritepage=0i,vfswritepages=215514i 1608787697000000000
nfs_xprt_tcp,mountpoint=/home,serverexport=nfs01:/vol/home backlogutil=0i,badxids=0i,bind_count=1i,connect_count=1i,connect_time=0i,idle_time=0i,inflightsends=15659826i,rpcreceives=2173896i,rpcsends=2173896i 1608787697000000000
@ -177,5 +177,3 @@ nfs_ops,mountpoint=/NFS,operation=NULL,serverexport=1.2.3.4:/storage/NFS trans=0
nfs_ops,mountpoint=/NFS,operation=READ,serverexport=1.2.3.4:/storage/NFS bytes=1207i,timeouts=602i,total_time=607i,exe=607i,trans=601i,bytes_sent=603i,bytes_recv=604i,queue_time=605i,ops=600i,retrans=1i,rtt=606i,response_time=606i 1612651512000000000
nfs_ops,mountpoint=/NFS,operation=WRITE,serverexport=1.2.3.4:/storage/NFS ops=700i,bytes=1407i,exe=707i,trans=701i,timeouts=702i,response_time=706i,total_time=707i,retrans=1i,rtt=706i,bytes_sent=703i,bytes_recv=704i,queue_time=705i 1612651512000000000
```


@ -1,6 +1,6 @@
# Nginx Input Plugin
### Configuration:
## Configuration
```toml
# Read Nginx's basic status information (ngx_http_stub_status_module)
@ -19,26 +19,27 @@
response_timeout = "5s"
```
### Measurements & Fields:
## Measurements & Fields
- Measurement
- accepts
- active
- handled
- reading
- requests
- waiting
- writing
- accepts
- active
- handled
- reading
- requests
- waiting
- writing
### Tags:
## Tags
- All measurements have the following tags:
- port
- server
- port
- server
### Example Output:
## Example Output
Using this configuration:
```toml
[[inputs.nginx]]
## An array of Nginx stub_status URI to gather stats.
@ -46,12 +47,14 @@ Using this configuration:
```
When run with:
```sh
./telegraf --config telegraf.conf --input-filter nginx --test
```
It produces:
```
```shell
* Plugin: nginx, Collection 1
> nginx,port=80,server=localhost accepts=605i,active=2i,handled=605i,reading=0i,requests=12132i,waiting=1i,writing=1i 1456690994701784331
```


@ -5,7 +5,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
Structures for Nginx Plus have been built based on the history of the
[status module documentation](http://nginx.org/en/docs/http/ngx_http_status_module.html)
### Configuration:
## Configuration
```toml
# Read Nginx Plus' advanced status information
@ -14,7 +14,7 @@ Structures for Nginx Plus have been built based on history of
urls = ["http://localhost/status"]
```
### Measurements & Fields:
## Measurements & Fields
- nginx_plus_processes
- respawned
@ -59,8 +59,7 @@ Structures for Nginx Plus have been built based on history of
- fails
- downtime
### Tags:
## Tags
- nginx_plus_processes, nginx_plus_connections, nginx_plus_ssl, nginx_plus_requests
- server
@ -78,9 +77,10 @@ Structures for Nginx Plus have been built based on history of
- port
- upstream_address
### Example Output:
## Example Output
Using this configuration:
```toml
[[inputs.nginx_plus]]
## An array of Nginx Plus status URIs to gather stats.
@ -88,12 +88,14 @@ Using this configuration:
```
When run with:
```sh
./telegraf -config telegraf.conf -input-filter nginx_plus -test
```
It produces:
```
```text
* Plugin: inputs.nginx_plus, Collection 1
> nginx_plus_processes,server=localhost,port=12021,host=word.local respawned=0i 1505782513000000000
> nginx_plus_connections,server=localhost,port=12021,host=word.local accepted=5535735212i,dropped=10140186i,active=9541i,idle=67540i 1505782513000000000


@ -2,7 +2,7 @@
Nginx Plus is a commercial version of the open source web server Nginx. To use this plugin you will need a license. For more information about the differences between Nginx (F/OSS) and Nginx Plus, [click here](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
### Configuration:
## Configuration
```toml
# Read Nginx Plus API advanced status information
@ -13,7 +13,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
# api_version = 3
```
### Migration from Nginx Plus (Status) input plugin
## Migration from Nginx Plus (Status) input plugin
| Nginx Plus | Nginx Plus API |
|---------------------------------|--------------------------------------|
@ -29,7 +29,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
| nginx_plus_stream_upstream_peer | nginx_plus_api_stream_upstream_peers |
| nginx.stream.zone | nginx_plus_api_stream_server_zones |
### Measurements by API version
## Measurements by API version
| Measurement | API version (api_version) |
|--------------------------------------|---------------------------|
@ -47,7 +47,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
| nginx_plus_api_http_location_zones | >= 5 |
| nginx_plus_api_resolver_zones | >= 5 |
### Measurements & Fields:
## Measurements & Fields
- nginx_plus_api_processes
- respawned
@ -171,7 +171,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
- timedout
- unknown
### Tags:
## Tags
- nginx_plus_api_processes, nginx_plus_api_connections, nginx_plus_api_ssl, nginx_plus_api_http_requests
- source
@ -198,9 +198,10 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
- source
- port
### Example Output:
## Example Output
Using this configuration:
```toml
[[inputs.nginx_plus_api]]
## An array of Nginx Plus API URIs to gather stats.
@ -208,12 +209,14 @@ Using this configuration:
```
When run with:
```sh
./telegraf -config telegraf.conf -input-filter nginx_plus_api -test
```
It produces:
```
```text
> nginx_plus_api_processes,port=80,source=demo.nginx.com respawned=0i 1570696321000000000
> nginx_plus_api_connections,port=80,source=demo.nginx.com accepted=68998606i,active=7i,dropped=0i,idle=57i 1570696322000000000
> nginx_plus_api_ssl,port=80,source=demo.nginx.com handshakes=9398978i,handshakes_failed=289353i,session_reuses=1004389i 1570696322000000000


@ -1,7 +1,7 @@
# Nginx Stream STS Input Plugin
This plugin gathers Nginx status using external virtual host traffic status
module - https://github.com/vozlt/nginx-module-sts. This is an Nginx module
module - <https://github.com/vozlt/nginx-module-sts>. This is an Nginx module
that provides access to stream host status information. It contains the current
status such as servers, upstreams, caches. This is similar to the live activity
monitoring of Nginx plus. For module configuration details please see its
@ -9,7 +9,7 @@ monitoring of Nginx plus. For module configuration details please see its
Telegraf minimum version: Telegraf 1.15.0
### Configuration
## Configuration
```toml
[[inputs.nginx_sts]]
@ -27,7 +27,7 @@ Telegraf minimum version: Telegraf 1.15.0
# insecure_skip_verify = false
```
### Metrics
## Metrics
- nginx_sts_connections
- tags:
@ -42,7 +42,7 @@ Telegraf minimum version: Telegraf 1.15.0
- handled
- requests
+ nginx_sts_server
- nginx_sts_server
- tags:
- source
- port
@ -77,7 +77,7 @@ Telegraf minimum version: Telegraf 1.15.0
- session_msec_counter
- session_msec
+ nginx_sts_upstream
- nginx_sts_upstream
- tags:
- source
- port
@ -106,9 +106,9 @@ Telegraf minimum version: Telegraf 1.15.0
- backup
- down
### Example Output:
## Example Output
```
```shell
nginx_sts_upstream,host=localhost,port=80,source=127.0.0.1,upstream=backend_cluster,upstream_address=1.2.3.4:8080 upstream_connect_msec_counter=0i,out_bytes=0i,down=false,connects=0i,session_msec=0i,upstream_session_msec=0i,upstream_session_msec_counter=0i,upstream_connect_msec=0i,upstream_firstbyte_msec_counter=0i,response_3xx_count=0i,session_msec_counter=0i,weight=1i,max_fails=1i,backup=false,upstream_firstbyte_msec=0i,in_bytes=0i,response_1xx_count=0i,response_2xx_count=0i,response_4xx_count=0i,response_5xx_count=0i,fail_timeout=10i 1584699180000000000
nginx_sts_upstream,host=localhost,port=80,source=127.0.0.1,upstream=backend_cluster,upstream_address=9.8.7.6:8080 upstream_firstbyte_msec_counter=0i,response_2xx_count=0i,down=false,upstream_session_msec_counter=0i,out_bytes=0i,response_5xx_count=0i,weight=1i,max_fails=1i,fail_timeout=10i,connects=0i,session_msec_counter=0i,upstream_session_msec=0i,in_bytes=0i,response_1xx_count=0i,response_3xx_count=0i,response_4xx_count=0i,session_msec=0i,upstream_connect_msec=0i,upstream_connect_msec_counter=0i,upstream_firstbyte_msec=0i,backup=false 1584699180000000000
nginx_sts_server,host=localhost,port=80,source=127.0.0.1,zone=* response_2xx_count=0i,response_4xx_count=0i,response_5xx_count=0i,session_msec_counter=0i,in_bytes=0i,out_bytes=0i,session_msec=0i,response_1xx_count=0i,response_3xx_count=0i,connects=0i 1584699180000000000


@ -1,6 +1,6 @@
# Nginx Upstream Check Input Plugin
Read the status output of the nginx_upstream_check (https://github.com/yaoweibin/nginx_upstream_check_module).
Read the status output of the nginx_upstream_check (<https://github.com/yaoweibin/nginx_upstream_check_module>).
This module can periodically check the servers in an Nginx upstream with a configured request and interval to determine
if a server is still available. If the checks fail, the server is marked as "down" and will not receive any requests
until the check passes and the server is marked as "up" again.
@ -8,7 +8,7 @@ until the check will pass and a server will be marked as "up" again.
The status page displays the current status of all upstreams and servers as well as the number of failed and successful
checks. This information can be exported in JSON format and parsed by this input.
### Configuration:
## Configuration
```toml
## An URL where Nginx Upstream check module is enabled
@ -39,36 +39,38 @@ checks. This information can be exported in JSON format and parsed by this input
# insecure_skip_verify = false
```
### Measurements & Fields:
## Measurements & Fields
- Measurement
- fall (The number of failed server check attempts, counter)
- rise (The number of successful server check attempts, counter)
- status (The reported server status as a string)
- status_code (The server status code. 1 - up, 2 - down, 0 - other)
- fall (The number of failed server check attempts, counter)
- rise (The number of successful server check attempts, counter)
- status (The reported server status as a string)
- status_code (The server status code. 1 - up, 2 - down, 0 - other)
The "status_code" field most likely will be the most useful one because it allows you to determine the current
state of every server and, possible, add some monitoring to watch over it. InfluxDB can use string values and the
"status" field can be used instead, but for most other monitoring solutions the integer code will be appropriate.
### Tags:
## Tags
- All measurements have the following tags:
- name (The hostname or IP of the upstream server)
- port (The alternative check port, 0 if the default one is used)
- type (The check type, http/tcp)
- upstream (The name of the upstream block in the Nginx configuration)
- url (The status url used by telegraf)
- name (The hostname or IP of the upstream server)
- port (The alternative check port, 0 if the default one is used)
- type (The check type, http/tcp)
- upstream (The name of the upstream block in the Nginx configuration)
- url (The status url used by telegraf)
### Example Output:
## Example Output
When run with:
```sh
./telegraf --config telegraf.conf --input-filter nginx_upstream_check --test
```
It produces:
```
```text
* Plugin: nginx_upstream_check, Collection 1
> nginx_upstream_check,host=node1,name=192.168.0.1:8080,port=0,type=http,upstream=my_backends,url=http://127.0.0.1:80/status?format\=json fall=0i,rise=100i,status="up",status_code=1i 1529088524000000000
> nginx_upstream_check,host=node2,name=192.168.0.2:8080,port=0,type=http,upstream=my_backends,url=http://127.0.0.1:80/status?format\=json fall=100i,rise=0i,status="down",status_code=2i 1529088524000000000


@ -1,9 +1,9 @@
# Nginx Virtual Host Traffic (VTS) Input Plugin
This plugin gathers Nginx status using external virtual host traffic status module - https://github.com/vozlt/nginx-module-vts. This is an Nginx module that provides access to virtual host status information. It contains the current status such as servers, upstreams, caches. This is similar to the live activity monitoring of Nginx plus.
This plugin gathers Nginx status using external virtual host traffic status module - <https://github.com/vozlt/nginx-module-vts>. This is an Nginx module that provides access to virtual host status information. It contains the current status such as servers, upstreams, caches. This is similar to the live activity monitoring of Nginx plus.
For module configuration details please see its [documentation](https://github.com/vozlt/nginx-module-vts#synopsis).
### Configuration:
## Configuration
```toml
# Read nginx status information using nginx-module-vts module
@ -12,7 +12,7 @@ For module configuration details please see its [documentation](https://github.c
urls = ["http://localhost/status"]
```
### Measurements & Fields:
## Measurements & Fields
- nginx_vts_connections
- active
@ -70,8 +70,7 @@ For module configuration details please see its [documentation](https://github.c
- hit
- scarce
### Tags:
## Tags
- nginx_vts_connections
- source
@ -95,10 +94,10 @@ For module configuration details please see its [documentation](https://github.c
- port
- zone
### Example Output:
## Example Output
Using this configuration:
```toml
[[inputs.nginx_vts]]
## An array of Nginx status URIs to gather stats.
@ -106,12 +105,14 @@ Using this configuration:
```
When run with:
```sh
./telegraf -config telegraf.conf -input-filter nginx_vts -test
```
It produces:
```
```shell
nginx_vts_connections,source=localhost,port=80,host=localhost waiting=30i,accepted=295333i,handled=295333i,requests=6833487i,active=33i,reading=0i,writing=3i 1518341521000000000
nginx_vts_server,zone=example.com,port=80,host=localhost,source=localhost cache_hit=158915i,in_bytes=1935528964i,out_bytes=6531366419i,response_2xx_count=809994i,response_4xx_count=16664i,cache_bypass=0i,cache_stale=0i,cache_revalidated=0i,requests=2187977i,response_1xx_count=0i,response_3xx_count=1360390i,cache_miss=2249i,cache_updating=0i,cache_scarce=0i,request_time=13i,response_5xx_count=929i,cache_expired=0i 1518341521000000000
nginx_vts_server,host=localhost,source=localhost,port=80,zone=* requests=6775284i,in_bytes=5003242389i,out_bytes=36858233827i,cache_expired=318881i,cache_updating=0i,request_time=51i,response_1xx_count=0i,response_2xx_count=4385916i,response_4xx_count=83680i,response_5xx_count=1186i,cache_bypass=0i,cache_revalidated=0i,cache_hit=1972222i,cache_scarce=0i,response_3xx_count=2304502i,cache_miss=408251i,cache_stale=0i 1518341521000000000


@ -4,7 +4,7 @@ This plugin gathers stats from
[NSD](https://www.nlnetlabs.nl/projects/nsd/about) - an authoritative DNS name
server.
### Configuration:
## Configuration
```toml
# A plugin to collect stats from the NSD DNS resolver
@ -26,7 +26,7 @@ server.
# timeout = "1s"
```
#### Permissions:
### Permissions
It's important to note that this plugin references nsd-control, which may
require additional permissions to execute successfully. Depending on the
@ -34,6 +34,7 @@ user/group permissions of the telegraf user executing this plugin, you may
need to alter the group membership, set facls, or use sudo.
**Group membership (Recommended)**:
```bash
$ groups telegraf
telegraf : telegraf
@ -46,12 +47,14 @@ telegraf : telegraf nsd
**Sudo privileges**:
If you use this method, you will need the following in your telegraf config:
```toml
[[inputs.nsd]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following line:
@ -62,11 +65,11 @@ Defaults!NSDCONTROLCTL !logfile, !syslog, !pam_session
Please use the solution you see as most appropriate.
### Metrics:
## Metrics
This is the full list of stats provided by nsd-control. In the output, the
dots in the nsd-control stat name are replaced by underscores (see
https://www.nlnetlabs.nl/documentation/nsd/nsd-control/ for details).
<https://www.nlnetlabs.nl/documentation/nsd/nsd-control/> for details).
- nsd
- fields:


@ -1,6 +1,6 @@
# NSQ Input Plugin
### Configuration:
## Configuration
```toml
# Description


@ -3,7 +3,7 @@
The [NSQ][nsq] consumer plugin reads from NSQD and creates metrics using one
of the supported [input data formats][].
### Configuration:
## Configuration
```toml
# Read metrics from NSQD topic(s)


@ -2,10 +2,11 @@
Plugin collects network metrics from `/proc/net/netstat`, `/proc/net/snmp` and `/proc/net/snmp6` files
### Configuration
## Configuration
The plugin first tries to read the file paths from the config values;
if they are empty, it reads them from the following env variables:
* `PROC_NET_NETSTAT`
* `PROC_NET_SNMP`
* `PROC_NET_SNMP6`
@ -15,331 +16,335 @@ then it tries to read the proc root from env - `PROC_ROOT`,
and sets `/proc` as the root path if `PROC_ROOT` is also empty.
Then it appends the default file paths:
* `/net/netstat`
* `/net/snmp`
* `/net/snmp6`
So if nothing is given (no paths in the config and no env vars), the plugin uses the default paths:
* `/proc/net/netstat`
* `/proc/net/snmp`
* `/proc/net/snmp6`
The sample config file
```toml
[[inputs.nstat]]
## file paths
## e.g: /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
# proc_net_netstat = ""
# proc_net_snmp = ""
# proc_net_snmp6 = ""
# proc_net_netstat = ""
# proc_net_snmp = ""
# proc_net_snmp6 = ""
## dump metrics with 0 values too
# dump_zeros = true
# dump_zeros = true
```
If the `proc_net_snmp6` path doesn't exist (e.g. IPv6 is not enabled), no error is raised.
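For example, a minimal sketch that sets the first two paths explicitly (the values shown are simply the usual defaults) while leaving the snmp6 path to the env-var/default fallback:

```toml
[[inputs.nstat]]
  ## explicit file paths; proc_net_snmp6 is deliberately left unset, so the
  ## plugin falls back to the PROC_NET_SNMP6 env variable or the default path
  proc_net_netstat = "/proc/net/netstat"
  proc_net_snmp = "/proc/net/snmp"
  ## also report metrics whose value is 0
  dump_zeros = true
```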
### Measurements & Fields
## Measurements & Fields
- nstat
- Icmp6InCsumErrors
- Icmp6InDestUnreachs
- Icmp6InEchoReplies
- Icmp6InEchos
- Icmp6InErrors
- Icmp6InGroupMembQueries
- Icmp6InGroupMembReductions
- Icmp6InGroupMembResponses
- Icmp6InMLDv2Reports
- Icmp6InMsgs
- Icmp6InNeighborAdvertisements
- Icmp6InNeighborSolicits
- Icmp6InParmProblems
- Icmp6InPktTooBigs
- Icmp6InRedirects
- Icmp6InRouterAdvertisements
- Icmp6InRouterSolicits
- Icmp6InTimeExcds
- Icmp6OutDestUnreachs
- Icmp6OutEchoReplies
- Icmp6OutEchos
- Icmp6OutErrors
- Icmp6OutGroupMembQueries
- Icmp6OutGroupMembReductions
- Icmp6OutGroupMembResponses
- Icmp6OutMLDv2Reports
- Icmp6OutMsgs
- Icmp6OutNeighborAdvertisements
- Icmp6OutNeighborSolicits
- Icmp6OutParmProblems
- Icmp6OutPktTooBigs
- Icmp6OutRedirects
- Icmp6OutRouterAdvertisements
- Icmp6OutRouterSolicits
- Icmp6OutTimeExcds
- Icmp6OutType133
- Icmp6OutType135
- Icmp6OutType143
- IcmpInAddrMaskReps
- IcmpInAddrMasks
- IcmpInCsumErrors
- IcmpInDestUnreachs
- IcmpInEchoReps
- IcmpInEchos
- IcmpInErrors
- IcmpInMsgs
- IcmpInParmProbs
- IcmpInRedirects
- IcmpInSrcQuenchs
- IcmpInTimeExcds
- IcmpInTimestampReps
- IcmpInTimestamps
- IcmpMsgInType3
- IcmpMsgOutType3
- IcmpOutAddrMaskReps
- IcmpOutAddrMasks
- IcmpOutDestUnreachs
- IcmpOutEchoReps
- IcmpOutEchos
- IcmpOutErrors
- IcmpOutMsgs
- IcmpOutParmProbs
- IcmpOutRedirects
- IcmpOutSrcQuenchs
- IcmpOutTimeExcds
- IcmpOutTimestampReps
- IcmpOutTimestamps
- Ip6FragCreates
- Ip6FragFails
- Ip6FragOKs
- Ip6InAddrErrors
- Ip6InBcastOctets
- Ip6InCEPkts
- Ip6InDelivers
- Ip6InDiscards
- Ip6InECT0Pkts
- Ip6InECT1Pkts
- Ip6InHdrErrors
- Ip6InMcastOctets
- Ip6InMcastPkts
- Ip6InNoECTPkts
- Ip6InNoRoutes
- Ip6InOctets
- Ip6InReceives
- Ip6InTooBigErrors
- Ip6InTruncatedPkts
- Ip6InUnknownProtos
- Ip6OutBcastOctets
- Ip6OutDiscards
- Ip6OutForwDatagrams
- Ip6OutMcastOctets
- Ip6OutMcastPkts
- Ip6OutNoRoutes
- Ip6OutOctets
- Ip6OutRequests
- Ip6ReasmFails
- Ip6ReasmOKs
- Ip6ReasmReqds
- Ip6ReasmTimeout
- IpDefaultTTL
- IpExtInBcastOctets
- IpExtInBcastPkts
- IpExtInCEPkts
- IpExtInCsumErrors
- IpExtInECT0Pkts
- IpExtInECT1Pkts
- IpExtInMcastOctets
- IpExtInMcastPkts
- IpExtInNoECTPkts
- IpExtInNoRoutes
- IpExtInOctets
- IpExtInTruncatedPkts
- IpExtOutBcastOctets
- IpExtOutBcastPkts
- IpExtOutMcastOctets
- IpExtOutMcastPkts
- IpExtOutOctets
- IpForwDatagrams
- IpForwarding
- IpFragCreates
- IpFragFails
- IpFragOKs
- IpInAddrErrors
- IpInDelivers
- IpInDiscards
- IpInHdrErrors
- IpInReceives
- IpInUnknownProtos
- IpOutDiscards
- IpOutNoRoutes
- IpOutRequests
- IpReasmFails
- IpReasmOKs
- IpReasmReqds
- IpReasmTimeout
- TcpActiveOpens
- TcpAttemptFails
- TcpCurrEstab
- TcpEstabResets
- TcpExtArpFilter
- TcpExtBusyPollRxPackets
- TcpExtDelayedACKLocked
- TcpExtDelayedACKLost
- TcpExtDelayedACKs
- TcpExtEmbryonicRsts
- TcpExtIPReversePathFilter
- TcpExtListenDrops
- TcpExtListenOverflows
- TcpExtLockDroppedIcmps
- TcpExtOfoPruned
- TcpExtOutOfWindowIcmps
- TcpExtPAWSActive
- TcpExtPAWSEstab
- TcpExtPAWSPassive
- TcpExtPruneCalled
- TcpExtRcvPruned
- TcpExtSyncookiesFailed
- TcpExtSyncookiesRecv
- TcpExtSyncookiesSent
- TcpExtTCPACKSkippedChallenge
- TcpExtTCPACKSkippedFinWait2
- TcpExtTCPACKSkippedPAWS
- TcpExtTCPACKSkippedSeq
- TcpExtTCPACKSkippedSynRecv
- TcpExtTCPACKSkippedTimeWait
- TcpExtTCPAbortFailed
- TcpExtTCPAbortOnClose
- TcpExtTCPAbortOnData
- TcpExtTCPAbortOnLinger
- TcpExtTCPAbortOnMemory
- TcpExtTCPAbortOnTimeout
- TcpExtTCPAutoCorking
- TcpExtTCPBacklogDrop
- TcpExtTCPChallengeACK
- TcpExtTCPDSACKIgnoredNoUndo
- TcpExtTCPDSACKIgnoredOld
- TcpExtTCPDSACKOfoRecv
- TcpExtTCPDSACKOfoSent
- TcpExtTCPDSACKOldSent
- TcpExtTCPDSACKRecv
- TcpExtTCPDSACKUndo
- TcpExtTCPDeferAcceptDrop
- TcpExtTCPDirectCopyFromBacklog
- TcpExtTCPDirectCopyFromPrequeue
- TcpExtTCPFACKReorder
- TcpExtTCPFastOpenActive
- TcpExtTCPFastOpenActiveFail
- TcpExtTCPFastOpenCookieReqd
- TcpExtTCPFastOpenListenOverflow
- TcpExtTCPFastOpenPassive
- TcpExtTCPFastOpenPassiveFail
- TcpExtTCPFastRetrans
- TcpExtTCPForwardRetrans
- TcpExtTCPFromZeroWindowAdv
- TcpExtTCPFullUndo
- TcpExtTCPHPAcks
- TcpExtTCPHPHits
- TcpExtTCPHPHitsToUser
- TcpExtTCPHystartDelayCwnd
- TcpExtTCPHystartDelayDetect
- TcpExtTCPHystartTrainCwnd
- TcpExtTCPHystartTrainDetect
- TcpExtTCPKeepAlive
- TcpExtTCPLossFailures
- TcpExtTCPLossProbeRecovery
- TcpExtTCPLossProbes
- TcpExtTCPLossUndo
- TcpExtTCPLostRetransmit
- TcpExtTCPMD5NotFound
- TcpExtTCPMD5Unexpected
- TcpExtTCPMTUPFail
- TcpExtTCPMTUPSuccess
- TcpExtTCPMemoryPressures
- TcpExtTCPMinTTLDrop
- TcpExtTCPOFODrop
- TcpExtTCPOFOMerge
- TcpExtTCPOFOQueue
- TcpExtTCPOrigDataSent
- TcpExtTCPPartialUndo
- TcpExtTCPPrequeueDropped
- TcpExtTCPPrequeued
- TcpExtTCPPureAcks
- TcpExtTCPRcvCoalesce
- TcpExtTCPRcvCollapsed
- TcpExtTCPRenoFailures
- TcpExtTCPRenoRecovery
- TcpExtTCPRenoRecoveryFail
- TcpExtTCPRenoReorder
- TcpExtTCPReqQFullDoCookies
- TcpExtTCPReqQFullDrop
- TcpExtTCPRetransFail
- TcpExtTCPSACKDiscard
- TcpExtTCPSACKReneging
- TcpExtTCPSACKReorder
- TcpExtTCPSYNChallenge
- TcpExtTCPSackFailures
- TcpExtTCPSackMerged
- TcpExtTCPSackRecovery
- TcpExtTCPSackRecoveryFail
- TcpExtTCPSackShiftFallback
- TcpExtTCPSackShifted
- TcpExtTCPSchedulerFailed
- TcpExtTCPSlowStartRetrans
- TcpExtTCPSpuriousRTOs
- TcpExtTCPSpuriousRtxHostQueues
- TcpExtTCPSynRetrans
- TcpExtTCPTSReorder
- TcpExtTCPTimeWaitOverflow
- TcpExtTCPTimeouts
- TcpExtTCPToZeroWindowAdv
- TcpExtTCPWantZeroWindowAdv
- TcpExtTCPWinProbe
- TcpExtTW
- TcpExtTWKilled
- TcpExtTWRecycled
- TcpInCsumErrors
- TcpInErrs
- TcpInSegs
- TcpMaxConn
- TcpOutRsts
- TcpOutSegs
- TcpPassiveOpens
- TcpRetransSegs
- TcpRtoAlgorithm
- TcpRtoMax
- TcpRtoMin
- Udp6IgnoredMulti
- Udp6InCsumErrors
- Udp6InDatagrams
- Udp6InErrors
- Udp6NoPorts
- Udp6OutDatagrams
- Udp6RcvbufErrors
- Udp6SndbufErrors
- UdpIgnoredMulti
- UdpInCsumErrors
- UdpInDatagrams
- UdpInErrors
- UdpLite6InCsumErrors
- UdpLite6InDatagrams
- UdpLite6InErrors
- UdpLite6NoPorts
- UdpLite6OutDatagrams
- UdpLite6RcvbufErrors
- UdpLite6SndbufErrors
- UdpLiteIgnoredMulti
- UdpLiteInCsumErrors
- UdpLiteInDatagrams
- UdpLiteInErrors
- UdpLiteNoPorts
- UdpLiteOutDatagrams
- UdpLiteRcvbufErrors
- UdpLiteSndbufErrors
- UdpNoPorts
- UdpOutDatagrams
- UdpRcvbufErrors
- UdpSndbufErrors
* nstat
* Icmp6InCsumErrors
* Icmp6InDestUnreachs
* Icmp6InEchoReplies
* Icmp6InEchos
* Icmp6InErrors
* Icmp6InGroupMembQueries
* Icmp6InGroupMembReductions
* Icmp6InGroupMembResponses
* Icmp6InMLDv2Reports
* Icmp6InMsgs
* Icmp6InNeighborAdvertisements
* Icmp6InNeighborSolicits
* Icmp6InParmProblems
* Icmp6InPktTooBigs
* Icmp6InRedirects
* Icmp6InRouterAdvertisements
* Icmp6InRouterSolicits
* Icmp6InTimeExcds
* Icmp6OutDestUnreachs
* Icmp6OutEchoReplies
* Icmp6OutEchos
* Icmp6OutErrors
* Icmp6OutGroupMembQueries
* Icmp6OutGroupMembReductions
* Icmp6OutGroupMembResponses
* Icmp6OutMLDv2Reports
* Icmp6OutMsgs
* Icmp6OutNeighborAdvertisements
* Icmp6OutNeighborSolicits
* Icmp6OutParmProblems
* Icmp6OutPktTooBigs
* Icmp6OutRedirects
* Icmp6OutRouterAdvertisements
* Icmp6OutRouterSolicits
* Icmp6OutTimeExcds
* Icmp6OutType133
* Icmp6OutType135
* Icmp6OutType143
* IcmpInAddrMaskReps
* IcmpInAddrMasks
* IcmpInCsumErrors
* IcmpInDestUnreachs
* IcmpInEchoReps
* IcmpInEchos
* IcmpInErrors
* IcmpInMsgs
* IcmpInParmProbs
* IcmpInRedirects
* IcmpInSrcQuenchs
* IcmpInTimeExcds
* IcmpInTimestampReps
* IcmpInTimestamps
* IcmpMsgInType3
* IcmpMsgOutType3
* IcmpOutAddrMaskReps
* IcmpOutAddrMasks
* IcmpOutDestUnreachs
* IcmpOutEchoReps
* IcmpOutEchos
* IcmpOutErrors
* IcmpOutMsgs
* IcmpOutParmProbs
* IcmpOutRedirects
* IcmpOutSrcQuenchs
* IcmpOutTimeExcds
* IcmpOutTimestampReps
* IcmpOutTimestamps
* Ip6FragCreates
* Ip6FragFails
* Ip6FragOKs
* Ip6InAddrErrors
* Ip6InBcastOctets
* Ip6InCEPkts
* Ip6InDelivers
* Ip6InDiscards
* Ip6InECT0Pkts
* Ip6InECT1Pkts
* Ip6InHdrErrors
* Ip6InMcastOctets
* Ip6InMcastPkts
* Ip6InNoECTPkts
* Ip6InNoRoutes
* Ip6InOctets
* Ip6InReceives
* Ip6InTooBigErrors
* Ip6InTruncatedPkts
* Ip6InUnknownProtos
* Ip6OutBcastOctets
* Ip6OutDiscards
* Ip6OutForwDatagrams
* Ip6OutMcastOctets
* Ip6OutMcastPkts
* Ip6OutNoRoutes
* Ip6OutOctets
* Ip6OutRequests
* Ip6ReasmFails
* Ip6ReasmOKs
* Ip6ReasmReqds
* Ip6ReasmTimeout
* IpDefaultTTL
* IpExtInBcastOctets
* IpExtInBcastPkts
* IpExtInCEPkts
* IpExtInCsumErrors
* IpExtInECT0Pkts
* IpExtInECT1Pkts
* IpExtInMcastOctets
* IpExtInMcastPkts
* IpExtInNoECTPkts
* IpExtInNoRoutes
* IpExtInOctets
* IpExtInTruncatedPkts
* IpExtOutBcastOctets
* IpExtOutBcastPkts
* IpExtOutMcastOctets
* IpExtOutMcastPkts
* IpExtOutOctets
* IpForwDatagrams
* IpForwarding
* IpFragCreates
* IpFragFails
* IpFragOKs
* IpInAddrErrors
* IpInDelivers
* IpInDiscards
* IpInHdrErrors
* IpInReceives
* IpInUnknownProtos
* IpOutDiscards
* IpOutNoRoutes
* IpOutRequests
* IpReasmFails
* IpReasmOKs
* IpReasmReqds
* IpReasmTimeout
* TcpActiveOpens
* TcpAttemptFails
* TcpCurrEstab
* TcpEstabResets
* TcpExtArpFilter
* TcpExtBusyPollRxPackets
* TcpExtDelayedACKLocked
* TcpExtDelayedACKLost
* TcpExtDelayedACKs
* TcpExtEmbryonicRsts
* TcpExtIPReversePathFilter
* TcpExtListenDrops
* TcpExtListenOverflows
* TcpExtLockDroppedIcmps
* TcpExtOfoPruned
* TcpExtOutOfWindowIcmps
* TcpExtPAWSActive
* TcpExtPAWSEstab
* TcpExtPAWSPassive
* TcpExtPruneCalled
* TcpExtRcvPruned
* TcpExtSyncookiesFailed
* TcpExtSyncookiesRecv
* TcpExtSyncookiesSent
* TcpExtTCPACKSkippedChallenge
* TcpExtTCPACKSkippedFinWait2
* TcpExtTCPACKSkippedPAWS
* TcpExtTCPACKSkippedSeq
* TcpExtTCPACKSkippedSynRecv
* TcpExtTCPACKSkippedTimeWait
* TcpExtTCPAbortFailed
* TcpExtTCPAbortOnClose
* TcpExtTCPAbortOnData
* TcpExtTCPAbortOnLinger
* TcpExtTCPAbortOnMemory
* TcpExtTCPAbortOnTimeout
* TcpExtTCPAutoCorking
* TcpExtTCPBacklogDrop
* TcpExtTCPChallengeACK
* TcpExtTCPDSACKIgnoredNoUndo
* TcpExtTCPDSACKIgnoredOld
* TcpExtTCPDSACKOfoRecv
* TcpExtTCPDSACKOfoSent
* TcpExtTCPDSACKOldSent
* TcpExtTCPDSACKRecv
* TcpExtTCPDSACKUndo
* TcpExtTCPDeferAcceptDrop
* TcpExtTCPDirectCopyFromBacklog
* TcpExtTCPDirectCopyFromPrequeue
* TcpExtTCPFACKReorder
* TcpExtTCPFastOpenActive
* TcpExtTCPFastOpenActiveFail
* TcpExtTCPFastOpenCookieReqd
* TcpExtTCPFastOpenListenOverflow
* TcpExtTCPFastOpenPassive
* TcpExtTCPFastOpenPassiveFail
* TcpExtTCPFastRetrans
* TcpExtTCPForwardRetrans
* TcpExtTCPFromZeroWindowAdv
* TcpExtTCPFullUndo
* TcpExtTCPHPAcks
* TcpExtTCPHPHits
* TcpExtTCPHPHitsToUser
* TcpExtTCPHystartDelayCwnd
* TcpExtTCPHystartDelayDetect
* TcpExtTCPHystartTrainCwnd
* TcpExtTCPHystartTrainDetect
* TcpExtTCPKeepAlive
* TcpExtTCPLossFailures
* TcpExtTCPLossProbeRecovery
* TcpExtTCPLossProbes
* TcpExtTCPLossUndo
* TcpExtTCPLostRetransmit
* TcpExtTCPMD5NotFound
* TcpExtTCPMD5Unexpected
* TcpExtTCPMTUPFail
* TcpExtTCPMTUPSuccess
* TcpExtTCPMemoryPressures
* TcpExtTCPMinTTLDrop
* TcpExtTCPOFODrop
* TcpExtTCPOFOMerge
* TcpExtTCPOFOQueue
* TcpExtTCPOrigDataSent
* TcpExtTCPPartialUndo
* TcpExtTCPPrequeueDropped
* TcpExtTCPPrequeued
* TcpExtTCPPureAcks
* TcpExtTCPRcvCoalesce
* TcpExtTCPRcvCollapsed
* TcpExtTCPRenoFailures
* TcpExtTCPRenoRecovery
* TcpExtTCPRenoRecoveryFail
* TcpExtTCPRenoReorder
* TcpExtTCPReqQFullDoCookies
* TcpExtTCPReqQFullDrop
* TcpExtTCPRetransFail
* TcpExtTCPSACKDiscard
* TcpExtTCPSACKReneging
* TcpExtTCPSACKReorder
* TcpExtTCPSYNChallenge
* TcpExtTCPSackFailures
* TcpExtTCPSackMerged
* TcpExtTCPSackRecovery
* TcpExtTCPSackRecoveryFail
* TcpExtTCPSackShiftFallback
* TcpExtTCPSackShifted
* TcpExtTCPSchedulerFailed
* TcpExtTCPSlowStartRetrans
* TcpExtTCPSpuriousRTOs
* TcpExtTCPSpuriousRtxHostQueues
* TcpExtTCPSynRetrans
* TcpExtTCPTSReorder
* TcpExtTCPTimeWaitOverflow
* TcpExtTCPTimeouts
* TcpExtTCPToZeroWindowAdv
* TcpExtTCPWantZeroWindowAdv
* TcpExtTCPWinProbe
* TcpExtTW
* TcpExtTWKilled
* TcpExtTWRecycled
* TcpInCsumErrors
* TcpInErrs
* TcpInSegs
* TcpMaxConn
* TcpOutRsts
* TcpOutSegs
* TcpPassiveOpens
* TcpRetransSegs
* TcpRtoAlgorithm
* TcpRtoMax
* TcpRtoMin
* Udp6IgnoredMulti
* Udp6InCsumErrors
* Udp6InDatagrams
* Udp6InErrors
* Udp6NoPorts
* Udp6OutDatagrams
* Udp6RcvbufErrors
* Udp6SndbufErrors
* UdpIgnoredMulti
* UdpInCsumErrors
* UdpInDatagrams
* UdpInErrors
* UdpLite6InCsumErrors
* UdpLite6InDatagrams
* UdpLite6InErrors
* UdpLite6NoPorts
* UdpLite6OutDatagrams
* UdpLite6RcvbufErrors
* UdpLite6SndbufErrors
* UdpLiteIgnoredMulti
* UdpLiteInCsumErrors
* UdpLiteInDatagrams
* UdpLiteInErrors
* UdpLiteNoPorts
* UdpLiteOutDatagrams
* UdpLiteRcvbufErrors
* UdpLiteSndbufErrors
* UdpNoPorts
* UdpOutDatagrams
* UdpRcvbufErrors
* UdpSndbufErrors
### Tags
- All measurements have the following tags
- host (host of the system)
- name (the type of the metric: snmp, snmp6 or netstat)
## Tags
* All measurements have the following tags
* host (host of the system)
* name (the type of the metric: snmp, snmp6 or netstat)


@ -24,7 +24,7 @@ the remote peer or server (RMS, milliseconds);
- jitter Mean deviation (jitter) in the time reported for that remote peer or
server (RMS of difference of multiple time samples, milliseconds);
### Configuration:
## Configuration
```toml
# Get standard NTP query metrics, requires ntpq executable
@ -33,27 +33,27 @@ server (RMS of difference of multiple time samples, milliseconds);
dns_lookup = true
```
### Measurements & Fields:
## Measurements & Fields
- ntpq
- delay (float, milliseconds)
- jitter (float, milliseconds)
- offset (float, milliseconds)
- poll (int, seconds)
- reach (int)
- when (int, seconds)
- delay (float, milliseconds)
- jitter (float, milliseconds)
- offset (float, milliseconds)
- poll (int, seconds)
- reach (int)
- when (int, seconds)
### Tags:
## Tags
- All measurements have the following tags:
- refid
- remote
- type
- stratum
- refid
- remote
- type
- stratum
### Example Output:
## Example Output
```
```shell
$ telegraf --config ~/ws/telegraf.conf --input-filter ntpq --test
* Plugin: ntpq, Collection 1
> ntpq,refid=.GPSs.,remote=*time.apple.com,stratum=1,type=u delay=91.797,jitter=3.735,offset=12.841,poll=64i,reach=377i,when=35i 1457960478909556134


@ -2,7 +2,7 @@
This plugin uses a query on the [`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface) binary to pull GPU stats including memory and GPU usage, temperature, and other metrics.
### Configuration
## Configuration
```toml
# Pulls statistics from nvidia GPUs attached to the host
@ -16,18 +16,19 @@ This plugin uses a query on the [`nvidia-smi`](https://developer.nvidia.com/nvid
# timeout = "5s"
```
#### Linux
### Linux
On Linux, `nvidia-smi` is generally located at `/usr/bin/nvidia-smi`
#### Windows
### Windows
On Windows, `nvidia-smi` is generally located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`
On Windows 10, you may also find this located here `C:\Windows\System32\nvidia-smi.exe`
You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe`
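For reference, a minimal `telegraf.conf` entry on Windows might then look like the sketch below; the path shown is the default install location and should be adjusted for your system:

```toml
[[inputs.nvidia_smi]]
  ## Windows path with escaped backslashes
  bin_path = "C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe"
```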
### Metrics
## Metrics
- measurement: `nvidia_smi`
- tags
- `name` (type of GPU e.g. `GeForce GTX 1070 Ti`)
@ -61,7 +62,7 @@ You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program
- `driver_version` (string)
- `cuda_version` (string)
### Sample Query
## Sample Query
The query below could be used to alert on the average temperature of your GPUs over the last minute
@ -69,30 +70,34 @@ The below query could be used to alert on the average temperature of the your GP
SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP BY time(1m), "index", "name", "host"
```
### Troubleshooting
## Troubleshooting
Check the full output by running the `nvidia-smi` binary manually.
Linux:
```sh
sudo -u telegraf -- /usr/bin/nvidia-smi -q -x
```
Windows:
```
```sh
"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -x
```
Please include the output of this command when opening a GitHub issue.
### Example Output
```
## Example Output
```text
nvidia_smi,compute_mode=Default,host=8218cf,index=0,name=GeForce\ GTX\ 1070,pstate=P2,uuid=GPU-823bc202-6279-6f2c-d729-868a30f14d96 fan_speed=100i,memory_free=7563i,memory_total=8112i,memory_used=549i,temperature_gpu=53i,utilization_gpu=100i,utilization_memory=90i 1523991122000000000
nvidia_smi,compute_mode=Default,host=8218cf,index=1,name=GeForce\ GTX\ 1080,pstate=P2,uuid=GPU-f9ba66fc-a7f5-94c5-da19-019ef2f9c665 fan_speed=100i,memory_free=7557i,memory_total=8114i,memory_used=557i,temperature_gpu=50i,utilization_gpu=100i,utilization_memory=85i 1523991122000000000
nvidia_smi,compute_mode=Default,host=8218cf,index=2,name=GeForce\ GTX\ 1080,pstate=P2,uuid=GPU-d4cfc28d-0481-8d07-b81a-ddfc63d74adf fan_speed=100i,memory_free=7557i,memory_total=8114i,memory_used=557i,temperature_gpu=58i,utilization_gpu=100i,utilization_memory=86i 1523991122000000000
```
### Limitations
## Limitations
Note that there seems to be an issue with getting current memory clock values when the memory is overclocked.
This may or may not apply to everyone but it's confirmed to be an issue on an EVGA 2080 Ti.

View File

@ -5,7 +5,7 @@ The `opcua` plugin retrieves data from OPC UA client devices.
Telegraf minimum version: Telegraf 1.16
Plugin minimum tested version: 1.16
### Configuration:
## Configuration
```toml
[[inputs.opcua]]
@ -91,23 +91,28 @@ Plugin minimum tested version: 1.16
#]
```
### Node Configuration
## Node Configuration
An OPC UA node ID may resemble: "n=3;s=Temperature". In this example:
- n=3 is indicating the `namespace` is 3
- s=Temperature is indicating that the `identifier_type` is a string and the `identifier` value is 'Temperature'
- This example temperature node has a value of 79.0
To gather data from this node enter the following line into the 'nodes' property above:
```
```shell
{field_name="temp", namespace="3", identifier_type="s", identifier="Temperature"},
```
This node configuration produces a metric like this:
```
```text
opcua,id=n\=3;s\=Temperature temp=79.0,quality="OK (0x0)" 1597820490000000000
```
### Group Configuration
## Group Configuration
Groups can set default values for the namespace, identifier type, and
tags settings. The default values apply to all the nodes in the
group. If a default is set, a node may omit the setting altogether.
@ -119,7 +124,8 @@ a tag with the same name is set in both places, the tag value from the
node is used.
This example group configuration has two groups with two nodes each:
```
```toml
[[inputs.opcua.group]]
name="group1_metric_name"
namespace="3"
@ -141,7 +147,8 @@ This example group configuration has two groups with two nodes each:
```
It produces metrics like these:
```
```text
group1_metric_name,group1_tag=val1,id=ns\=3;i\=1001,node1_tag=val2 name=0,Quality="OK (0x0)" 1606893246000000000
group1_metric_name,group1_tag=val1,id=ns\=3;i\=1002,node1_tag=val3 name=-1.389117,Quality="OK (0x0)" 1606893246000000000
group2_metric_name,group2_tag=val3,id=ns\=3;i\=1003,node2_tag=val4 Quality="OK (0x0)",saw=-1.6 1606893246000000000

View File

@ -2,7 +2,7 @@
This plugin gathers metrics from OpenLDAP's cn=Monitor backend.
### Configuration:
## Configuration
To use this plugin you must enable the [slapd monitoring](https://www.openldap.org/devel/admin/monitoringslapd.html) backend.
@ -31,11 +31,11 @@ To use this plugin you must enable the [slapd monitoring](https://www.openldap.o
reverse_metric_names = true
```
### Measurements & Fields:
## Measurements & Fields
All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and **monitorOpCompleted** attributes are gathered based on this LDAP query:
```
```sh
(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))
```
@ -46,52 +46,52 @@ Metrics for the **monitorOp*** attributes have **_initiated** and **_completed**
An OpenLDAP 2.4 server will provide these metrics:
- openldap
- connections_current
- connections_max_file_descriptors
- connections_total
- operations_abandon_completed
- operations_abandon_initiated
- operations_add_completed
- operations_add_initiated
- operations_bind_completed
- operations_bind_initiated
- operations_compare_completed
- operations_compare_initiated
- operations_delete_completed
- operations_delete_initiated
- operations_extended_completed
- operations_extended_initiated
- operations_modify_completed
- operations_modify_initiated
- operations_modrdn_completed
- operations_modrdn_initiated
- operations_search_completed
- operations_search_initiated
- operations_unbind_completed
- operations_unbind_initiated
- statistics_bytes
- statistics_entries
- statistics_pdu
- statistics_referrals
- threads_active
- threads_backload
- threads_max
- threads_max_pending
- threads_open
- threads_pending
- threads_starting
- time_uptime
- waiters_read
- waiters_write
- connections_current
- connections_max_file_descriptors
- connections_total
- operations_abandon_completed
- operations_abandon_initiated
- operations_add_completed
- operations_add_initiated
- operations_bind_completed
- operations_bind_initiated
- operations_compare_completed
- operations_compare_initiated
- operations_delete_completed
- operations_delete_initiated
- operations_extended_completed
- operations_extended_initiated
- operations_modify_completed
- operations_modify_initiated
- operations_modrdn_completed
- operations_modrdn_initiated
- operations_search_completed
- operations_search_initiated
- operations_unbind_completed
- operations_unbind_initiated
- statistics_bytes
- statistics_entries
- statistics_pdu
- statistics_referrals
- threads_active
- threads_backload
- threads_max
- threads_max_pending
- threads_open
- threads_pending
- threads_starting
- time_uptime
- waiters_read
- waiters_write
### Tags:
## Tags
- server= # value from config
- port= # value from config
### Example Output:
## Example Output
```
```shell
$ telegraf -config telegraf.conf -input-filter openldap -test --debug
* Plugin: inputs.openldap, Collection 1
> openldap,server=localhost,port=389,host=niska.ait.psu.edu operations_bind_initiated=10i,operations_unbind_initiated=6i,operations_modrdn_completed=0i,operations_delete_initiated=0i,operations_add_completed=2i,operations_delete_completed=0i,operations_abandon_completed=0i,statistics_entries=1516i,threads_open=2i,threads_active=1i,waiters_read=1i,operations_modify_completed=0i,operations_extended_initiated=4i,threads_pending=0i,operations_search_initiated=36i,operations_compare_initiated=0i,connections_max_file_descriptors=4096i,operations_modify_initiated=0i,operations_modrdn_initiated=0i,threads_max=16i,time_uptime=6017i,connections_total=1037i,connections_current=1i,operations_add_initiated=2i,statistics_bytes=162071i,operations_unbind_completed=6i,operations_abandon_initiated=0i,statistics_pdu=1566i,threads_max_pending=0i,threads_backload=1i,waiters_write=0i,operations_bind_completed=10i,operations_search_completed=35i,operations_compare_completed=0i,operations_extended_completed=4i,statistics_referrals=0i,threads_starting=0i 1516912070000000000

View File

@ -20,7 +20,7 @@ the remote peer or server (RMS, milliseconds);
- jitter Mean deviation (jitter) in the time reported for that remote peer or
server (RMS of difference of multiple time samples, milliseconds);
### Configuration
## Configuration
```toml
[[inputs.openntpd]]
@ -34,7 +34,7 @@ server (RMS of difference of multiple time samples, milliseconds);
# timeout = "5ms"
```
### Metrics
## Metrics
- ntpctl
- tags:
@ -49,7 +49,7 @@ server (RMS of difference of multiple time samples, milliseconds);
- wt (int)
- tl (int)
### Permissions
## Permissions
It's important to note that this plugin references ntpctl, which may require
additional permissions to execute successfully.
@ -57,6 +57,7 @@ Depending on the user/group permissions of the telegraf user executing this
plugin, you may need to alter the group membership, set facls, or use sudo.
**Group membership (Recommended)**:
```bash
$ groups telegraf
telegraf : telegraf
@ -69,12 +70,14 @@ telegraf : telegraf ntpd
**Sudo privileges**:
If you use this method, you will need the following in your telegraf config:
```toml
[[inputs.openntpd]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following lines:
@ -85,9 +88,9 @@ Defaults!NTPCTL !logfile, !syslog, !pam_session
Please use the solution you see as most appropriate.
### Example Output
## Example Output
```
```shell
openntpd,remote=194.57.169.1,stratum=2,host=localhost tl=10i,poll=1007i,
offset=2.295,jitter=3.896,delay=53.766,next=266i,wt=1i 1514454299000000000
```

View File

@ -2,7 +2,7 @@
This plugin gathers stats from [OpenSMTPD - a FREE implementation of the server-side SMTP protocol](https://www.opensmtpd.org/)
### Configuration:
## Configuration
```toml
[[inputs.opensmtpd]]
@ -16,7 +16,7 @@ This plugin gathers stats from [OpenSMTPD - a FREE implementation of the server-
#timeout = "1s"
```
### Measurements & Fields:
## Measurements & Fields
This is the full list of stats provided by smtpctl and potentially collected by telegraf,
depending on your smtpctl configuration.
@ -59,12 +59,13 @@ depending of your smtpctl configuration.
smtp_session_local
uptime
### Permissions:
## Permissions
It's important to note that this plugin references smtpctl, which may require additional permissions to execute successfully.
Depending on the user/group permissions of the telegraf user executing this plugin, you may need to alter the group membership, set facls, or use sudo.
**Group membership (Recommended)**:
```bash
$ groups telegraf
telegraf : telegraf
@ -77,12 +78,14 @@ telegraf : telegraf opensmtpd
**Sudo privileges**:
If you use this method, you will need the following in your telegraf config:
```toml
[[inputs.opensmtpd]]
use_sudo = true
```
You will also need to update your sudoers file:
```bash
$ visudo
# Add the following line:
@ -93,9 +96,9 @@ Defaults!SMTPCTL !logfile, !syslog, !pam_session
Please use the solution you see as most appropriate.
### Example Output:
## Example Output
```
```shell
telegraf --config etc/telegraf.conf --input-filter opensmtpd --test
* Plugin: inputs.opensmtpd, Collection 1
> opensmtpd,host=localhost scheduler_delivery_tempfail=822,mta_host=10,mta_task_running=4,queue_bounce=13017,scheduler_delivery_permfail=51022,mta_relay=7,queue_evpcache_size=2,scheduler_envelope_expired=26,bounce_message=0,mta_domain=7,queue_evpcache_update_hit=848,smtp_session_local=12294,bounce_envelope=0,queue_evpcache_load_hit=4389703,scheduler_ramqueue_update=0,mta_route=3,scheduler_delivery_ok=2149489,smtp_session_inet4=2131997,control_session=1,scheduler_envelope_incoming=0,uptime=10346728,scheduler_ramqueue_envelope=2,smtp_session=0,bounce_session=0,mta_envelope=2,mta_session=6,mta_task=2,scheduler_ramqueue_message=2,mta_connector=7,mta_source=1,scheduler_envelope=2,scheduler_envelope_inflight=2 1510220300000000000

View File

@ -19,9 +19,10 @@ At present this plugin requires the following APIs:
* orchestration v1
## Configuration and Recommendations
### Recommendations
Due to the large number of unique tags that this plugin generates, in order to keep the cardinality down it is **highly recommended** to use [modifiers](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#modifiers) like `tagexclude` to discard unwanted tags.
Due to the large number of unique tags that this plugin generates, in order to keep the cardinality down it is **highly recommended** to use [modifiers](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#modifiers) like `tagexclude` to discard unwanted tags.
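As a rough sketch of that recommendation (the tag names chosen here are only examples, taken from the measurement list further down), excluding tags before they reach the outputs might look like:

```toml
[[inputs.openstack]]
  ## Drop high-cardinality tags from every metric this plugin emits
  tagexclude = ["tenant_id", "user_id"]
```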
For deployments with only a small number of VMs and hosts, a small polling interval (e.g. seconds-minutes) is acceptable. For larger deployments, polling a large number of systems will impact performance. Use the `interval` option to change how often the plugin is run:
@ -29,7 +30,7 @@ For deployments with only a small number of VMs and hosts, a small polling inter
Also, consider polling OpenStack services at different intervals depending on your requirements. This will help with load and cardinality as well.
```
```toml
[[inputs.openstack]]
interval = 5m
....
@ -47,10 +48,9 @@ Also, consider polling OpenStack services at different intervals depending on yo
....
```
### Configuration
```
```toml
## The recommended interval to poll is '30m'
## The identity endpoint to authenticate against and get the service catalog from.
@ -105,245 +105,245 @@ Also, consider polling OpenStack services at different intervals depending on yo
### Measurements, Tags & Fields
* openstack_aggregate
* name
* aggregate_host [string]
* aggregate_hosts [integer]
* created_at [string]
* deleted [boolean]
* deleted_at [string]
* id [integer]
* updated_at [string]
* name
* aggregate_host [string]
* aggregate_hosts [integer]
* created_at [string]
* deleted [boolean]
* deleted_at [string]
* id [integer]
* updated_at [string]
* openstack_flavor
* is_public
* name
* disk [integer]
* ephemeral [integer]
* id [string]
* ram [integer]
* rxtx_factor [float]
* swap [integer]
* vcpus [integer]
* is_public
* name
* disk [integer]
* ephemeral [integer]
* id [string]
* ram [integer]
* rxtx_factor [float]
* swap [integer]
* vcpus [integer]
* openstack_hypervisor
* cpu_arch
* cpu_feature_tsc
* cpu_feature_tsc-deadline
* cpu_feature_tsc_adjust
* cpu_feature_tsx-ctrl
* cpu_feature_vme
* cpu_feature_vmx
* cpu_feature_x2apic
* cpu_feature_xgetbv1
* cpu_feature_xsave
* cpu_model
* cpu_vendor
* hypervisor_hostname
* hypervisor_type
* hypervisor_version
* service_host
* service_id
* state
* status
* cpu_topology_cores [integer]
* cpu_topology_sockets [integer]
* cpu_topology_threads [integer]
* current_workload [integer]
* disk_available_least [integer]
* free_disk_gb [integer]
* free_ram_mb [integer]
* host_ip [string]
* id [string]
* local_gb [integer]
* local_gb_used [integer]
* memory_mb [integer]
* memory_mb_used [integer]
* running_vms [integer]
* vcpus [integer]
* vcpus_used [integer]
* cpu_arch
* cpu_feature_tsc
* cpu_feature_tsc-deadline
* cpu_feature_tsc_adjust
* cpu_feature_tsx-ctrl
* cpu_feature_vme
* cpu_feature_vmx
* cpu_feature_x2apic
* cpu_feature_xgetbv1
* cpu_feature_xsave
* cpu_model
* cpu_vendor
* hypervisor_hostname
* hypervisor_type
* hypervisor_version
* service_host
* service_id
* state
* status
* cpu_topology_cores [integer]
* cpu_topology_sockets [integer]
* cpu_topology_threads [integer]
* current_workload [integer]
* disk_available_least [integer]
* free_disk_gb [integer]
* free_ram_mb [integer]
* host_ip [string]
* id [string]
* local_gb [integer]
* local_gb_used [integer]
* memory_mb [integer]
* memory_mb_used [integer]
* running_vms [integer]
* vcpus [integer]
* vcpus_used [integer]
* openstack_identity
* description
* domain_id
* name
* parent_id
* enabled boolean
* id string
* is_domain boolean
* projects integer
* description
* domain_id
* name
* parent_id
* enabled boolean
* id string
* is_domain boolean
* projects integer
* openstack_network
* name
* openstack_tags_xyz
* project_id
* status
* tenant_id
* admin_state_up [boolean]
* availability_zone_hints [string]
* created_at [string]
* id [string]
* shared [boolean]
* subnet_id [string]
* subnets [integer]
* updated_at [string]
* name
* openstack_tags_xyz
* project_id
* status
* tenant_id
* admin_state_up [boolean]
* availability_zone_hints [string]
* created_at [string]
* id [string]
* shared [boolean]
* subnet_id [string]
* subnets [integer]
* updated_at [string]
* openstack_newtron_agent
* agent_host
* agent_type
* availability_zone
* binary
* topic
* admin_state_up [boolean]
* alive [boolean]
* created_at [string]
* heartbeat_timestamp [string]
* id [string]
* resources_synced [boolean]
* started_at [string]
* agent_host
* agent_type
* availability_zone
* binary
* topic
* admin_state_up [boolean]
* alive [boolean]
* created_at [string]
* heartbeat_timestamp [string]
* id [string]
* resources_synced [boolean]
* started_at [string]
* openstack_nova_service
* host_machine
* name
* state
* status
* zone
* disabled_reason [string]
* forced_down [boolean]
* id [string]
* updated_at [string]
* host_machine
* name
* state
* status
* zone
* disabled_reason [string]
* forced_down [boolean]
* id [string]
* updated_at [string]
* openstack_port
* device_id
* device_owner
* name
* network_id
* project_id
* status
* tenant_id
* admin_state_up [boolean]
* allowed_address_pairs [integer]
* fixed_ips [integer]
* id [string]
* ip_address [string]
* mac_address [string]
* security_groups [string]
* subnet_id [string]
* device_id
* device_owner
* name
* network_id
* project_id
* status
* tenant_id
* admin_state_up [boolean]
* allowed_address_pairs [integer]
* fixed_ips [integer]
* id [string]
* ip_address [string]
* mac_address [string]
* security_groups [string]
* subnet_id [string]
* openstack_request_duration
* agents [integer]
* aggregates [integer]
* flavors [integer]
* hypervisors [integer]
* networks [integer]
* nova_services [integer]
* ports [integer]
* projects [integer]
* servers [integer]
* stacks [integer]
* storage_pools [integer]
* subnets [integer]
* volumes [integer]
* agents [integer]
* aggregates [integer]
* flavors [integer]
* hypervisors [integer]
* networks [integer]
* nova_services [integer]
* ports [integer]
* projects [integer]
* servers [integer]
* stacks [integer]
* storage_pools [integer]
* subnets [integer]
* volumes [integer]
* openstack_server
* flavor
* host_id
* host_name
* image
* key_name
* name
* project
* status
* tenant_id
* user_id
* accessIPv4 [string]
* accessIPv6 [string]
* addresses [integer]
* adminPass [string]
* created [string]
* disk_gb [integer]
* fault_code [integer]
* fault_created [string]
* fault_details [string]
* fault_message [string]
* id [string]
* progress [integer]
* ram_mb [integer]
* security_groups [integer]
* updated [string]
* vcpus [integer]
* volume_id [string]
* volumes_attached [integer]
* flavor
* host_id
* host_name
* image
* key_name
* name
* project
* status
* tenant_id
* user_id
* accessIPv4 [string]
* accessIPv6 [string]
* addresses [integer]
* adminPass [string]
* created [string]
* disk_gb [integer]
* fault_code [integer]
* fault_created [string]
* fault_details [string]
* fault_message [string]
* id [string]
* progress [integer]
* ram_mb [integer]
* security_groups [integer]
* updated [string]
* vcpus [integer]
* volume_id [string]
* volumes_attached [integer]
* openstack_server_diagnostics
* disk_name
* no_of_disks
* no_of_ports
* port_name
* server_id
* cpu0_time [float]
* cpu1_time [float]
* cpu2_time [float]
* cpu3_time [float]
* cpu4_time [float]
* cpu5_time [float]
* cpu6_time [float]
* cpu7_time [float]
* disk_errors [float]
* disk_read [float]
* disk_read_req [float]
* disk_write [float]
* disk_write_req [float]
* memory [float]
* memory-actual [float]
* memory-rss [float]
* memory-swap_in [float]
* port_rx [float]
* port_rx_drop [float]
* port_rx_errors [float]
* port_rx_packets [float]
* port_tx [float]
* port_tx_drop [float]
* port_tx_errors [float]
* port_tx_packets [float]
* disk_name
* no_of_disks
* no_of_ports
* port_name
* server_id
* cpu0_time [float]
* cpu1_time [float]
* cpu2_time [float]
* cpu3_time [float]
* cpu4_time [float]
* cpu5_time [float]
* cpu6_time [float]
* cpu7_time [float]
* disk_errors [float]
* disk_read [float]
* disk_read_req [float]
* disk_write [float]
* disk_write_req [float]
* memory [float]
* memory-actual [float]
* memory-rss [float]
* memory-swap_in [float]
* port_rx [float]
* port_rx_drop [float]
* port_rx_errors [float]
* port_rx_packets [float]
* port_tx [float]
* port_tx_drop [float]
* port_tx_errors [float]
* port_tx_packets [float]
* openstack_service
* name
* service_enabled [boolean]
* service_id [string]
* name
* service_enabled [boolean]
* service_id [string]
* openstack_storage_pool
* driver_version
* name
* storage_protocol
* vendor_name
* volume_backend_name
* free_capacity_gb [float]
* total_capacity_gb [float]
* driver_version
* name
* storage_protocol
* vendor_name
* volume_backend_name
* free_capacity_gb [float]
* total_capacity_gb [float]
* openstack_subnet
* cidr
* gateway_ip
* ip_version
* name
* network_id
* openstack_tags_subnet_type_PRV
* project_id
* tenant_id
* allocation_pools [string]
* dhcp_enabled [boolean]
* dns_nameservers [string]
* id [string]
* cidr
* gateway_ip
* ip_version
* name
* network_id
* openstack_tags_subnet_type_PRV
* project_id
* tenant_id
* allocation_pools [string]
* dhcp_enabled [boolean]
* dns_nameservers [string]
* id [string]
* openstack_volume
* attachment_attachment_id
* attachment_device
* attachment_host_name
* availability_zone
* bootable
* description
* name
* status
* user_id
* volume_type
* attachment_attached_at [string]
* attachment_server_id [string]
* created_at [string]
* encrypted [boolean]
* id [string]
* multiattach [boolean]
* size [integer]
* total_attachments [integer]
* updated_at [string]
* attachment_attachment_id
* attachment_device
* attachment_host_name
* availability_zone
* bootable
* description
* name
* status
* user_id
* volume_type
* attachment_attached_at [string]
* attachment_server_id [string]
* created_at [string]
* encrypted [boolean]
* id [string]
* multiattach [boolean]
* size [integer]
* total_attachments [integer]
* updated_at [string]
### Example Output
```
```text
> openstack_newtron_agent,agent_host=vim2,agent_type=DHCP\ agent,availability_zone=nova,binary=neutron-dhcp-agent,host=telegraf_host,topic=dhcp_agent admin_state_up=true,alive=true,created_at="2021-01-07T03:40:53Z",heartbeat_timestamp="2021-10-14T07:46:40Z",id="17e1e446-d7da-4656-9e32-67d3690a306f",resources_synced=false,started_at="2021-07-02T21:47:42Z" 1634197616000000000
> openstack_aggregate,host=telegraf_host,name=non-dpdk aggregate_host="vim3",aggregate_hosts=2i,created_at="2021-02-01T18:28:00Z",deleted=false,deleted_at="0001-01-01T00:00:00Z",id=3i,updated_at="0001-01-01T00:00:00Z" 1634197617000000000
> openstack_flavor,host=telegraf_host,is_public=true,name=hwflavor disk=20i,ephemeral=0i,id="f89785c0-6b9f-47f5-a02e-f0fcbb223163",ram=8192i,rxtx_factor=1,swap=0i,vcpus=8i 1634197617000000000

View File

@ -2,7 +2,7 @@
This plugin receives traces, metrics and logs from [OpenTelemetry](https://opentelemetry.io) clients and agents via gRPC.
### Configuration
## Configuration
```toml
[[inputs.opentelemetry]]
@ -30,11 +30,11 @@ This plugin receives traces, metrics and logs from [OpenTelemetry](https://opent
# tls_key = "/etc/telegraf/key.pem"
```
#### Schema
### Schema
The OpenTelemetry->InfluxDB conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md)
and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/otel2influx)
are hosted at https://github.com/influxdata/influxdb-observability .
are hosted at <https://github.com/influxdata/influxdb-observability> .
Spans are stored in measurement `spans`.
Logs are stored in measurement `logs`.
@ -48,7 +48,8 @@ Also see the OpenTelemetry output plugin for Telegraf.
### Example Output
#### Tracing Spans
```
```text
spans end_time_unix_nano="2021-02-19 20:50:25.6893952 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="d5270e78d85f570f",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="4c28227be6a010e1",status_code="STATUS_CODE_OK",trace_id="7d4854815225332c9834e6dbf85b9380" 1613767825689169000
spans end_time_unix_nano="2021-02-19 20:50:25.6893952 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="lets-go",net.peer.ip="1.2.3.4",peer.service="tracegen-server",service.name="tracegen",span.kind="client",span_id="d5270e78d85f570f",status_code="STATUS_CODE_OK",trace_id="7d4854815225332c9834e6dbf85b9380" 1613767825689135000
spans end_time_unix_nano="2021-02-19 20:50:25.6895667 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="b57e98af78c3399b",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="a0643a156d7f9f7f",status_code="STATUS_CODE_OK",trace_id="fd6b8bb5965e726c94978c644962cdc8" 1613767825689388000
@ -57,7 +58,8 @@ spans end_time_unix_nano="2021-02-19 20:50:25.6896741 +0000 UTC",instrumentation
```
### Metrics - `prometheus-v1`
```
```shell
cpu_temp,foo=bar gauge=87.332
http_requests_total,method=post,code=200 counter=1027
http_requests_total,method=post,code=400 counter=3
@ -66,7 +68,8 @@ rpc_duration_seconds 0.01=3102,0.05=3272,0.5=4773,0.9=9001,0.99=76656,sum=1.7560
```
### Metrics - `prometheus-v2`
```
```shell
prometheus,foo=bar cpu_temp=87.332
prometheus,method=post,code=200 http_requests_total=1027
prometheus,method=post,code=400 http_requests_total=3
@ -85,7 +88,8 @@ prometheus rpc_duration_seconds_count=1.7560473e+07,rpc_duration_s
```
### Logs
```
```text
logs fluent.tag="fluent.info",pid=18i,ppid=9i,worker=0i 1613769568895331700
logs fluent.tag="fluent.debug",instance=1720i,queue_size=0i,stage_size=0i 1613769568895697200
logs fluent.tag="fluent.info",worker=0i 1613769568896515100

View File

@ -6,11 +6,11 @@ To use this plugin you will need an [api key][] (app_id).
City identifiers can be found in the [city list][]. Alternately you
can [search][] by name; the `city_id` can be found as the last digits
of the URL: https://openweathermap.org/city/2643743. Language
of the URL: <https://openweathermap.org/city/2643743>. Language
identifiers can be found in the [lang list][]. Documentation for
condition ID, icon, and main is at [weather conditions][].
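As an illustration, the city ID from the URL above would be dropped into the plugin configuration roughly like this (the `app_id` value is a placeholder, not a working key):

```toml
[[inputs.openweathermap]]
  ## Placeholder API key; replace with your own app_id
  app_id = "xxxxxxxxxxxxxxxxxxxx"
  ## 2643743 is the city ID taken from the example URL above
  city_id = ["2643743"]
  lang = "en"
```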
### Configuration
## Configuration
```toml
[[inputs.openweathermap]]
@ -44,7 +44,7 @@ condition ID, icon, and main is at [weather conditions][].
interval = "10m"
```
### Metrics
## Metrics
- weather
- tags:
@ -66,10 +66,9 @@ condition ID, icon, and main is at [weather conditions][].
- condition_description (string, localized long description)
- condition_icon
## Example Output
### Example Output
```
```shell
> weather,city=San\ Francisco,city_id=5391959,condition_id=800,condition_main=Clear,country=US,forecast=* cloudiness=1i,condition_description="clear sky",condition_icon="01d",humidity=35i,pressure=1012,rain=0,sunrise=1570630329000000000i,sunset=1570671689000000000i,temperature=21.52,visibility=16093i,wind_degrees=280,wind_speed=5.7 1570659256000000000
> weather,city=San\ Francisco,city_id=5391959,condition_id=800,condition_main=Clear,country=US,forecast=3h cloudiness=0i,condition_description="clear sky",condition_icon="01n",humidity=41i,pressure=1010,rain=0,temperature=22.34,wind_degrees=249.393,wind_speed=2.085 1570665600000000000
> weather,city=San\ Francisco,city_id=5391959,condition_id=800,condition_main=Clear,country=US,forecast=6h cloudiness=0i,condition_description="clear sky",condition_icon="01n",humidity=50i,pressure=1012,rain=0,temperature=17.09,wind_degrees=310.754,wind_speed=3.009 1570676400000000000

View File

@ -2,7 +2,7 @@
Gather [Phusion Passenger](https://www.phusionpassenger.com/) metrics using the `passenger-status` command line utility.
**Series Cardinality Warning**
## Series Cardinality Warning
Depending on your environment, the `passenger_process` measurement of this
plugin can quickly create a high number of series which, when unchecked, can
@ -20,7 +20,7 @@ manage your series cardinality:
- Monitor your databases
[series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
### Configuration
## Configuration
```toml
# Read metrics of passenger using passenger-status
@ -36,11 +36,11 @@ manage your series cardinality:
command = "passenger-status -v --show=xml"
```
#### Permissions:
### Permissions
Telegraf must have permission to execute the `passenger-status` command. On most systems, Telegraf runs as the `telegraf` user.
### Metrics:
## Metrics
- passenger
- tags:
@ -95,8 +95,9 @@ Telegraf must have permission to execute the `passenger-status` command. On mos
- real_memory
- vmsize
### Example Output:
```
## Example Output
```shell
passenger,passenger_version=5.0.17 capacity_used=23i,get_wait_list_size=0i,max=23i,process_count=23i 1452984112799414257
passenger_supergroup,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i 1452984112799496977
passenger_group,app_root=/var/app/current,app_type=rack,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i,processes_being_spawned=0i 1452984112799527021

View File

@ -7,9 +7,9 @@ The pf plugin retrieves this information by invoking the `pfstat` command. The `
* Run telegraf as root. This is strongly discouraged.
* Change the ownership and permissions for /dev/pf such that the user telegraf runs as can read the /dev/pf device file. This is probably not that good of an idea either.
* Configure sudo to allow telegraf to run `pfctl` as root. This is the most restrictive option, but requires sudo setup.
* Add "telegraf" to the "proxy" group as /dev/pf is owned by root:proxy.
* Add "telegraf" to the "proxy" group as /dev/pf is owned by root:proxy.
### Using sudo
## Using sudo
You may edit your sudo configuration with the following:
@ -17,40 +17,39 @@ You may edit your sudo configuration with the following:
telegraf ALL=(root) NOPASSWD: /sbin/pfctl -s info
```
### Configuration:
## Configuration
```toml
# use sudo to run pfctl
use_sudo = false
```
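If you go with the sudo approach instead, the plugin side of that setup is only one extra line; a minimal sketch, assuming the sudoers rule shown earlier is in place:

```toml
[[inputs.pf]]
  ## Run pfctl via sudo (requires the sudoers entry above)
  use_sudo = true
```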
### Measurements & Fields:
## Measurements & Fields
* pf
* entries (integer, count)
* searches (integer, count)
* inserts (integer, count)
* removals (integer, count)
* match (integer, count)
* bad-offset (integer, count)
* fragment (integer, count)
* short (integer, count)
* normalize (integer, count)
* memory (integer, count)
* bad-timestamp (integer, count)
* congestion (integer, count)
* ip-option (integer, count)
* proto-cksum (integer, count)
* state-mismatch (integer, count)
* state-insert (integer, count)
* state-limit (integer, count)
* src-limit (integer, count)
* synproxy (integer, count)
- pf
- entries (integer, count)
- searches (integer, count)
- inserts (integer, count)
- removals (integer, count)
- match (integer, count)
- bad-offset (integer, count)
- fragment (integer, count)
- short (integer, count)
- normalize (integer, count)
- memory (integer, count)
- bad-timestamp (integer, count)
- congestion (integer, count)
- ip-option (integer, count)
- proto-cksum (integer, count)
- state-mismatch (integer, count)
- state-insert (integer, count)
- state-limit (integer, count)
- src-limit (integer, count)
- synproxy (integer, count)
## Example Output
### Example Output:
```
```text
> pfctl -s info
Status: Enabled for 0 days 00:26:05 Debug: Urgent
@ -77,7 +76,7 @@ Counters
synproxy 0 0.0/s
```
```
```shell
> ./telegraf --config telegraf.conf --input-filter pf --test
* Plugin: inputs.pf, Collection 1
> pf,host=columbia entries=3i,searches=2668i,inserts=12i,removals=9i 1510941775000000000

View File

@ -7,7 +7,7 @@ More information about the meaning of these metrics can be found in the
- PgBouncer minimum tested version: 1.5
### Configuration example
## Configuration example
```toml
[[inputs.pgbouncer]]
@ -22,7 +22,7 @@ More information about the meaning of these metrics can be found in the
address = "host=localhost user=pgbouncer sslmode=disable"
```
#### `address`
### `address`
Specify address via a postgresql connection string:
@ -37,7 +37,7 @@ All connection parameters are optional.
Without the dbname parameter, the driver will default to a database with the same name as the user.
This dbname is just for instantiating a connection with the server and doesn't restrict the databases we are trying to grab metrics for.
### Metrics
## Metrics
- pgbouncer
- tags:
@ -57,7 +57,7 @@ This dbname is just for instantiating a connection with the server and doesn't r
- total_xact_count
- total_xact_time
+ pgbouncer_pools
- pgbouncer_pools
- tags:
- db
- pool_mode
@ -74,9 +74,9 @@ This dbname is just for instantiating a connection with the server and doesn't r
- sv_tested
- sv_used
### Example Output
## Example Output
```
```shell
pgbouncer,db=pgbouncer,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ avg_query_count=0i,avg_query_time=0i,avg_wait_time=0i,avg_xact_count=0i,avg_xact_time=0i,total_query_count=26i,total_query_time=0i,total_received=0i,total_sent=0i,total_wait_time=0i,total_xact_count=26i,total_xact_time=0i 1581569936000000000
pgbouncer_pools,db=pgbouncer,pool_mode=statement,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ ,user=pgbouncer cl_active=1i,cl_waiting=0i,maxwait=0i,maxwait_us=0i,sv_active=0i,sv_idle=0i,sv_login=0i,sv_tested=0i,sv_used=0i 1581569936000000000
```

View File

@ -2,7 +2,7 @@
Get phpfpm stats using either HTTP status page or fpm socket.
### Configuration:
## Configuration
```toml
# Read metrics of phpfpm, via HTTP status page or socket
@ -44,7 +44,7 @@ Get phpfpm stats using either HTTP status page or fpm socket.
When using `unixsocket`, you have to ensure that telegraf runs on the same
host and that the socket path is accessible to the telegraf user.
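A minimal socket-based setup might look like the sketch below; the socket path is illustrative and varies by distribution:

```toml
[[inputs.phpfpm]]
  ## Connect via the local FPM socket instead of the HTTP status page
  urls = ["/var/run/php-fpm.sock"]
```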
### Metrics:
## Metrics
- phpfpm
- tags:
@ -62,9 +62,9 @@ host, and socket path is accessible to telegraf user.
- max_children_reached
- slow_requests
# Example Output
## Example Output
```
```shell
phpfpm,pool=www accepted_conn=13i,active_processes=2i,idle_processes=1i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083331187
phpfpm,pool=www2 accepted_conn=12i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691422
phpfpm,pool=www3 accepted_conn=11i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691658

View File

@ -13,7 +13,8 @@ ping packets.
Most ping command implementations are supported, one notable exception being
that there is currently no support for GNU Inetutils ping. You may instead use
the iputils-ping implementation:
```
```sh
apt-get install iputils-ping
```
@ -21,7 +22,7 @@ When using `method = "native"` a ping is sent and the results are reported in
native Go by the Telegraf process, eliminating the need to execute the system
`ping` command.
### Configuration:
## Configuration
```toml
[[inputs.ping]]
@ -76,7 +77,7 @@ native Go by the Telegraf process, eliminating the need to execute the system
# size = 56
```
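As a stripped-down example of the native method (the hosts listed are placeholders):

```toml
[[inputs.ping]]
  ## Hosts to ping; replace with your own targets
  urls = ["example.org", "10.0.0.1"]
  ## Send pings from the Telegraf process instead of exec'ing the system ping
  method = "native"
```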
#### File Limit
### File Limit
Since this plugin runs the ping command, it may need to open multiple files per
host. The number of files used is lessened with the `native` option but still
@ -88,42 +89,49 @@ use the "drop-in directory", usually located at
`/etc/systemd/system/telegraf.service.d`.
You can create or edit a drop-in file in the correct location using:
```sh
$ systemctl edit telegraf
systemctl edit telegraf
```
Increase the number of open files:
```ini
[Service]
LimitNOFILE=8192
```
Restart Telegraf:
```sh
$ systemctl restart telegraf
systemctl restart telegraf
```
#### Linux Permissions
### Linux Permissions
When using `method = "native"`, Telegraf will attempt to use privileged raw
ICMP sockets. On most systems, doing so requires `CAP_NET_RAW` capabilities or for Telegraf to be run as root.
With systemd:
```sh
$ systemctl edit telegraf
systemctl edit telegraf
```
```ini
[Service]
CapabilityBoundingSet=CAP_NET_RAW
AmbientCapabilities=CAP_NET_RAW
```
```sh
$ systemctl restart telegraf
systemctl restart telegraf
```
Without systemd:
```sh
$ setcap cap_net_raw=eip /usr/bin/telegraf
setcap cap_net_raw=eip /usr/bin/telegraf
```
Reference [`man 7 capabilities`][man 7 capabilities] for more information about
@ -131,11 +139,11 @@ setting capabilities.
[man 7 capabilities]: http://man7.org/linux/man-pages/man7/capabilities.7.html
#### Other OS Permissions
### Other OS Permissions
When using `method = "native"`, you will need permissions similar to the executable ping program for your OS.
When using `method = "native"`, you will need permissions similar to the executable ping program for your OS.
### Metrics
## Metrics
- ping
- tags:
@ -155,19 +163,18 @@ When using `method = "native"`, you will need permissions similar to the executa
- percent_reply_loss (float, Windows with method = "exec" only)
- result_code (int, success = 0, no such host = 1, ping error = 2)
##### reply_received vs packets_received
### reply_received vs packets_received
On Windows systems with `method = "exec"`, the "Destination net unreachable" reply will increment `packets_received` but not `reply_received`*.
##### ttl
### ttl
There is currently no support for TTL on windows with `"native"`; track
progress at https://github.com/golang/go/issues/7175 and
https://github.com/golang/go/issues/7174
progress at <https://github.com/golang/go/issues/7175> and
<https://github.com/golang/go/issues/7174>
## Example Output
### Example Output
```
```shell
ping,url=example.org average_response_ms=23.066,ttl=63,maximum_response_ms=24.64,minimum_response_ms=22.451,packets_received=5i,packets_transmitted=5i,percent_packet_loss=0,result_code=0i,standard_deviation_ms=0.809 1535747258000000000
```

View File

@ -3,11 +3,11 @@
The postfix plugin reports metrics on the postfix queues.
For each of the active, hold, incoming, maildrop, and deferred queues
(http://www.postfix.org/QSHAPE_README.html#queues), it will report the queue
(<http://www.postfix.org/QSHAPE_README.html#queues>), it will report the queue
length (number of items), size (bytes used by items), and age (age of oldest
item in seconds).
### Configuration
## Configuration
```toml
[[inputs.postfix]]
@ -16,7 +16,7 @@ item in seconds).
# queue_directory = "/var/spool/postfix"
```
#### Permissions
### Permissions
Telegraf will need read access to the files in the queue directory. You may
need to alter the permissions of these directories to provide access to the
@ -26,20 +26,22 @@ This can be setup either using standard unix permissions or with Posix ACLs,
you will only need to use one method:
Unix permissions:
```sh
$ sudo chgrp -R telegraf /var/spool/postfix/{active,hold,incoming,deferred}
$ sudo chmod -R g+rXs /var/spool/postfix/{active,hold,incoming,deferred}
$ sudo usermod -a -G postdrop telegraf
$ sudo chmod g+r /var/spool/postfix/maildrop
sudo chgrp -R telegraf /var/spool/postfix/{active,hold,incoming,deferred}
sudo chmod -R g+rXs /var/spool/postfix/{active,hold,incoming,deferred}
sudo usermod -a -G postdrop telegraf
sudo chmod g+r /var/spool/postfix/maildrop
```
Posix ACL:
```sh
$ sudo setfacl -Rm g:telegraf:rX /var/spool/postfix/
$ sudo setfacl -dm g:telegraf:rX /var/spool/postfix/
sudo setfacl -Rm g:telegraf:rX /var/spool/postfix/
sudo setfacl -dm g:telegraf:rX /var/spool/postfix/
```
### Metrics
## Metrics
- postfix_queue
- tags:
@ -49,10 +51,9 @@ $ sudo setfacl -dm g:telegraf:rX /var/spool/postfix/
- size (integer, bytes)
- age (integer, seconds)
## Example Output
### Example Output
```
```shell
postfix_queue,queue=active length=3,size=12345,age=9
postfix_queue,queue=hold length=0,size=0,age=0
postfix_queue,queue=maildrop length=1,size=2000,age=2

View File

@ -1,7 +1,8 @@
# PostgreSQL Input Plugin
This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built-in _pg_stat_database_ and _pg_stat_bgwriter_ views. The metrics recorded depend on your version of postgres. See table:
```
```sh
pg version 9.2+ 9.1 8.3-9.0 8.1-8.2 7.4-8.0(unsupported)
--- --- --- ------- ------- -------
datid x x x x
@ -27,10 +28,10 @@ stats_reset* x x
_* value ignored and therefore not recorded._
More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)
## Configuration
Specify address via a postgresql connection string:
`host=localhost port=5432 user=telegraf database=telegraf`
@ -52,11 +53,13 @@ A list of databases to pull metrics about. If not specified, metrics for all dat
### TLS Configuration
Add the `sslkey`, `sslcert` and `sslrootcert` options to your DSN:
```
```shell
host=localhost user=pgotest dbname=app_production sslmode=require sslkey=/etc/telegraf/key.pem sslcert=/etc/telegraf/cert.pem sslrootcert=/etc/telegraf/ca.pem
```
### Configuration example
```toml
[[inputs.postgresql]]
address = "postgres://telegraf@localhost/someDB"

View File

@ -78,9 +78,11 @@ The example below has two queries are specified, with the following parameters:
The system can be easily extended using homemade metrics collection tools or
using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html), [pg_proctab](https://github.com/markwkm/pg_proctab) or [powa](http://dalibo.github.io/powa/))
# Sample Queries :
- telegraf.conf postgresql_extensible queries (assuming that you have configured
## Sample Queries
* telegraf.conf postgresql_extensible queries (assuming that you have configured
your connection correctly)
```toml
[[inputs.postgresql_extensible.query]]
sqlquery="SELECT * FROM pg_stat_database"
@ -132,27 +134,33 @@ using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs
tagvalue="type,enabled"
```
# Postgresql Side
## Postgresql Side
postgresql.conf :
```
```sql
shared_preload_libraries = 'pg_stat_statements,pg_stat_kcache'
```
Please follow the requirements to setup those extensions.
In the database (can be a specific monitoring db)
```
```sql
create extension pg_stat_statements;
create extension pg_stat_kcache;
create extension pg_proctab;
```
(assuming that the extension is installed on the OS Layer)
- pg_stat_kcache is available on the postgresql.org yum repo
- pg_proctab is available at : https://github.com/markwkm/pg_proctab
* pg_stat_kcache is available on the postgresql.org yum repo
* pg_proctab is available at : <https://github.com/markwkm/pg_proctab>
## Views
* Blocking sessions
## Views
- Blocking sessions
```sql
CREATE OR REPLACE VIEW public.blocking_procs AS
SELECT a.datname AS db,
@ -176,7 +184,9 @@ CREATE OR REPLACE VIEW public.blocking_procs AS
WHERE kl.granted AND NOT bl.granted
ORDER BY a.query_start;
```
- Sessions Statistics
* Sessions Statistics
```sql
CREATE OR REPLACE VIEW public.sessions AS
WITH proctab AS (

View File

@ -2,7 +2,7 @@
The powerdns plugin gathers metrics about PowerDNS using unix socket.
### Configuration:
## Configuration
```toml
# Description
@ -14,17 +14,18 @@ The powerdns plugin gathers metrics about PowerDNS using unix socket.
unix_sockets = ["/var/run/pdns.controlsocket"]
```
#### Permissions
### Permissions
Telegraf will need read access to the powerdns control socket.
On many systems this can be accomplished by adding the `telegraf` user to the
`pdns` group:
```
```sh
usermod telegraf -a -G pdns
```
### Measurements & Fields:
## Measurements & Fields
- powerdns
- corrupt-packets
@ -66,13 +67,13 @@ usermod telegraf -a -G pdns
- uptime
- user-msec
### Tags:
## Tags
- tags: `server=socket`
### Example Output:
## Example Output
```
```sh
$ ./telegraf --config telegraf.conf --input-filter powerdns --test
> powerdns,server=/var/run/pdns.controlsocket corrupt-packets=0i,deferred-cache-inserts=0i,deferred-cache-lookup=0i,dnsupdate-answers=0i,dnsupdate-changes=0i,dnsupdate-queries=0i,dnsupdate-refused=0i,key-cache-size=0i,latency=26i,meta-cache-size=0i,packetcache-hit=0i,packetcache-miss=1i,packetcache-size=0i,qsize-q=0i,query-cache-hit=0i,query-cache-miss=6i,rd-queries=1i,recursing-answers=0i,recursing-questions=0i,recursion-unanswered=0i,security-status=3i,servfail-packets=0i,signature-cache-size=0i,signatures=0i,sys-msec=4349i,tcp-answers=0i,tcp-queries=0i,timedout-packets=0i,udp-answers=1i,udp-answers-bytes=50i,udp-do-queries=0i,udp-queries=0i,udp4-answers=1i,udp4-queries=1i,udp6-answers=0i,udp6-queries=0i,uptime=166738i,user-msec=3036i 1454078624932715706
```

View File

@ -3,7 +3,7 @@
The `powerdns_recursor` plugin gathers metrics about PowerDNS Recursor using
the unix controlsocket.
### Configuration
## Configuration
```toml
[[inputs.powerdns_recursor]]
@ -17,7 +17,7 @@ the unix controlsocket.
# socket_mode = "0666"
```
#### Permissions
### Permissions
Telegraf will need read/write access to the control socket and to the
`socket_dir`. PowerDNS will need to be able to write to the `socket_dir`.
@ -27,25 +27,28 @@ adapted for other systems.
First change permissions on the controlsocket in the PowerDNS recursor
configuration, usually in `/etc/powerdns/recursor.conf`:
```
```sh
socket-mode = 660
```
Then place the `telegraf` user into the `pdns` group:
```
```sh
usermod telegraf -a -G pdns
```
Since `telegraf` cannot write to the default `/var/run` socket directory,
create a subdirectory and adjust permissions for this directory so that both
users can access it.
```sh
$ mkdir /var/run/pdns
$ chown root:pdns /var/run/pdns
$ chmod 770 /var/run/pdns
mkdir /var/run/pdns
chown root:pdns /var/run/pdns
chmod 770 /var/run/pdns
```
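With that directory in place, point the plugin at it; a small sketch, assuming the default controlsocket name:

```toml
[[inputs.powerdns_recursor]]
  ## Directory created above; both telegraf and pdns need access to it
  socket_dir = "/var/run/pdns"
  ## Mode of the socket telegraf creates for replies
  # socket_mode = "0666"
```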
### Metrics
## Metrics
- powerdns_recursor
- tags:
@ -156,8 +159,8 @@ $ chmod 770 /var/run/pdns
- x-ourtime4-8
- x-ourtime8-16
### Example Output
## Example Output
```
```shell
powerdns_recursor,server=/var/run/pdns_recursor.controlsocket all-outqueries=3631810i,answers-slow=36863i,answers0-1=179612i,answers1-10=1223305i,answers10-100=1252199i,answers100-1000=408357i,auth-zone-queries=4i,auth4-answers-slow=44758i,auth4-answers0-1=59721i,auth4-answers1-10=1766787i,auth4-answers10-100=1329638i,auth4-answers100-1000=430372i,auth6-answers-slow=0i,auth6-answers0-1=0i,auth6-answers1-10=0i,auth6-answers10-100=0i,auth6-answers100-1000=0i,cache-entries=296689i,cache-hits=150654i,cache-misses=2949682i,case-mismatches=0i,chain-resends=420004i,client-parse-errors=0i,concurrent-queries=0i,dlg-only-drops=0i,dnssec-queries=152970i,dnssec-result-bogus=0i,dnssec-result-indeterminate=0i,dnssec-result-insecure=0i,dnssec-result-nta=0i,dnssec-result-secure=47i,dnssec-validations=47i,dont-outqueries=62i,ecs-queries=0i,ecs-responses=0i,edns-ping-matches=0i,edns-ping-mismatches=0i,failed-host-entries=21i,fd-usage=32i,ignored-packets=0i,ipv6-outqueries=0i,ipv6-questions=0i,malloc-bytes=0i,max-cache-entries=1000000i,max-mthread-stack=33747i,max-packetcache-entries=500000i,negcache-entries=100019i,no-packet-error=0i,noedns-outqueries=73341i,noerror-answers=25453808i,noping-outqueries=0i,nsset-invalidations=2398i,nsspeeds-entries=3966i,nxdomain-answers=3341302i,outgoing-timeouts=44384i,outgoing4-timeouts=44384i,outgoing6-timeouts=0i,over-capacity-drops=0i,packetcache-entries=78258i,packetcache-hits=25999027i,packetcache-misses=3100179i,policy-drops=0i,policy-result-custom=0i,policy-result-drop=0i,policy-result-noaction=3100336i,policy-result-nodata=0i,policy-result-nxdomain=0i,policy-result-truncate=0i,qa-latency=6553i,query-pipe-full-drops=0i,questions=29099363i,real-memory-usage=280494080i,resource-limits=0i,security-status=1i,server-parse-errors=0i,servfail-answers=304253i,spoof-prevents=0i,sys-msec=1312600i,tcp-client-overflow=0i,tcp-clients=0i,tcp-outqueries=116i,tcp-questions=133i,throttle-entries=21i,throttled-out=13296i,throttled-outqueries=13296i,too-old-drops=2i,udp-in-errors=4i,udp-noport-errors=2918i,udp-recvbuf-errors=0i,udp-sndbuf-errors=0i,unauthorized-tcp=0i,unauthorized-udp=0i,unexpected-packets=0i,unreachables=1708i,uptime=167482i,user-msec=1282640i,x-our-latency=19i,x-ourtime-slow=642i,x-ourtime0-1=3095566i,x-ourtime1-2=3401i,x-ourtime16-32=201i,x-ourtime2-4=304i,x-ourtime4-8=198i,x-ourtime8-16=24i 1533903879000000000
```

View File

@ -8,7 +8,7 @@ it requires access to execute `ps`.
**Supported Platforms**: Linux, FreeBSD, Darwin
### Configuration
## Configuration
```toml
# Get the number of processes and group them by status
@ -21,7 +21,7 @@ Using the environment variable `HOST_PROC` the plugin will retrieve process info
`docker run -v /proc:/rootfs/proc:ro -e HOST_PROC=/rootfs/proc`
### Metrics
## Metrics
- processes
- fields:
@ -38,13 +38,13 @@ Using the environment variable `HOST_PROC` the plugin will retrieve process info
- parked (linux only)
- total_threads (linux only)
### Process State Mappings
## Process State Mappings
Different OSes use slightly different state codes for their processes; these
state codes are documented in `man ps`. The following is a mapping of what the major
OS state codes correspond to in telegraf metrics:
```
```sh
Linux FreeBSD Darwin meaning
R R R running
S S S sleeping
@ -56,8 +56,8 @@ Linux FreeBSD Darwin meaning
W W none paging (linux kernel < 2.6 only), wait (freebsd)
```
### Example Output
## Example Output
```
```shell
processes blocked=8i,running=1i,sleeping=265i,stopped=0i,total=274i,zombie=0i,dead=0i,paging=0i,total_threads=687i 1457478636980905042
```

View File

@ -5,6 +5,7 @@ The procstat_lookup metric displays the query information,
specifically the number of PIDs returned on a search.
Processes can be selected for monitoring using one of several methods:
- pidfile
- exe
- pattern
@ -13,7 +14,7 @@ Processes can be selected for monitoring using one of several methods:
- cgroup
- win_service
### Configuration:
## Configuration
```toml
# Monitor process cpu and memory usage
@ -63,12 +64,12 @@ Processes can be selected for monitoring using one of several methods:
# pid_finder = "pgrep"
```
#### Windows support
### Windows support
Preliminary support for Windows has been added; however, you may prefer using
the `win_perf_counters` input plugin as a more mature alternative.
### Metrics:
## Metrics
- procstat
- tags:
@ -161,9 +162,9 @@ the `win_perf_counters` input plugin as a more mature alternative.
*NOTE: Resource limit > 2147483647 will be reported as 2147483647.*
### Example Output:
## Example Output
```
```shell
procstat_lookup,host=prash-laptop,pattern=influxd,pid_finder=pgrep,result=success pid_count=1i,running=1i,result_code=0i 1582089700000000000
procstat,host=prash-laptop,pattern=influxd,process_name=influxd,user=root involuntary_context_switches=151496i,child_minor_faults=1061i,child_major_faults=8i,cpu_time_user=2564.81,cpu_time_idle=0,cpu_time_irq=0,cpu_time_guest=0,pid=32025i,major_faults=8609i,created_at=1580107536000000000i,voluntary_context_switches=1058996i,cpu_time_system=616.98,cpu_time_steal=0,cpu_time_guest_nice=0,memory_swap=0i,memory_locked=0i,memory_usage=1.7797634601593018,num_threads=18i,cpu_time_nice=0,cpu_time_iowait=0,cpu_time_soft_irq=0,memory_rss=148643840i,memory_vms=1435688960i,memory_data=0i,memory_stack=0i,minor_faults=1856550i 1582089700000000000
```

View File

@ -3,7 +3,7 @@
The prometheus input plugin gathers metrics from HTTP servers exposing metrics
in Prometheus format.
### Configuration:
## Configuration
```toml
# Read metrics from one or many prometheus clients
@ -49,7 +49,7 @@ in Prometheus format.
## Only for node scrape scope: node IP of the node that telegraf is running on.
## Either this config or the environment variable NODE_IP must be set.
# node_ip = "10.180.1.1"
## Only for node scrape scope: interval in seconds for how often to get updated pod list for scraping.
## Default is 60 seconds.
# pod_scrape_interval = 60
@ -100,7 +100,7 @@ in Prometheus format.
`urls` can contain a unix socket as well. If a different path is required (default is `/metrics` for both http[s] and unix) for a unix socket, add `path` as a query parameter as follows: `unix:///var/run/prometheus.sock?path=/custom/metrics`
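Put into the configuration, that same example would look roughly like this (socket path and custom metrics path are the illustrative ones from the sentence above):

```toml
[[inputs.prometheus]]
  ## Scrape a local unix socket; "path" overrides the default /metrics
  urls = ["unix:///var/run/prometheus.sock?path=/custom/metrics"]
```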
#### Kubernetes Service Discovery
### Kubernetes Service Discovery
URLs listed in the `kubernetes_services` parameter will be expanded
by looking up all A records assigned to the hostname as described in
@ -109,7 +109,7 @@ by looking up all A records assigned to the hostname as described in
This method can be used to locate all
[Kubernetes headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
#### Kubernetes scraping
### Kubernetes scraping
Enabling this option will allow the plugin to scrape for prometheus annotations on Kubernetes
pods. Currently, you can run this plugin in your kubernetes cluster, or we use the kubeconfig
@ -124,7 +124,8 @@ Currently the following annotation are supported:
Using the `monitor_kubernetes_pods_namespace` option allows you to limit which pods you are scraping.
Using `pod_scrape_scope = "node"` allows more scalable scraping of pods, as each Telegraf instance will scrape only the pods on the node it is running on. It will fetch the pod list locally from the node's kubelet. This will require running Telegraf on every node of the cluster. Note that either `node_ip` must be specified in the config or the environment variable `NODE_IP` must be set to the host IP. The latter can be done in the yaml of the pod running telegraf:
```
```sh
env:
- name: NODE_IP
valueFrom:
@ -134,7 +135,7 @@ env:
If using node level scrape scope, `pod_scrape_interval` specifies how often (in seconds) the pod list for scraping should be updated. If not specified, the default is 60 seconds.
#### Consul Service Discovery
### Consul Service Discovery
Enabling this option and configuring consul `agent` url will allow the plugin to query
consul catalog for available services. Using `query_interval` the plugin will periodically
@ -143,6 +144,7 @@ It can use the information from the catalog to build the scraped url and additio
Multiple consul queries can be configured, each for a different service.
The following example fields can be used in url or tag templates:
* Node
* Address
* NodeMeta
@ -152,15 +154,15 @@ The following example fields can be used in url or tag templates:
* ServiceMeta
For a full list of available fields and their types, see struct CatalogService in
https://github.com/hashicorp/consul/blob/master/api/catalog.go
<https://github.com/hashicorp/consul/blob/master/api/catalog.go>
#### Bearer Token
### Bearer Token
If set, the file specified by the `bearer_token` parameter will be read on
each interval and its contents will be appended to the Bearer string in the
Authorization header.
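A small sketch of how that might be configured (the token path is a common Kubernetes service-account location, used here only as an example):

```toml
[[inputs.prometheus]]
  urls = ["http://example.org:9273/metrics"]
  ## Re-read on every interval and sent as "Authorization: Bearer <contents>"
  bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
```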
### Usage for Caddy HTTP server
## Usage for Caddy HTTP server
Steps to monitor Caddy with Telegraf's Prometheus input plugin:
@ -178,7 +180,7 @@ Steps to monitor Caddy with Telegraf's Prometheus input plugin:
> This is the default URL where Caddy will send data.
> For more details, please read the [Caddy Prometheus documentation](https://github.com/miekg/caddy-prometheus/blob/master/README.md).
### Metrics:
## Metrics
Measurement names are based on the Metric Family and tags are created for each
label. The value is added to a field named based on the metric type.
@ -187,10 +189,11 @@ All metrics receive the `url` tag indicating the related URL specified in the
Telegraf configuration. If using Kubernetes service discovery the `address`
tag is also added indicating the discovered ip address.
### Example Output:
## Example Output
**Source**
```
### Source
```shell
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 7.4545e-05
@ -211,8 +214,9 @@ cpu_usage_user{cpu="cpu2"} 2.0161290322588776
cpu_usage_user{cpu="cpu3"} 1.5045135406226022
```
**Output**
```
### Output
```shell
go_gc_duration_seconds,url=http://example.org:9273/metrics 1=0.001336611,count=14,sum=0.004527551,0=0.000057965,0.25=0.000083812,0.5=0.000286537,0.75=0.000365303 1505776733000000000
go_goroutines,url=http://example.org:9273/metrics gauge=21 1505776695000000000
cpu_usage_user,cpu=cpu0,url=http://example.org:9273/metrics gauge=1.513622603430151 1505776751000000000
@ -221,8 +225,9 @@ cpu_usage_user,cpu=cpu2,url=http://example.org:9273/metrics gauge=2.119071644805
cpu_usage_user,cpu=cpu3,url=http://example.org:9273/metrics gauge=1.5228426395944945 1505776751000000000
```
**Output (when metric_version = 2)**
```
### Output (when metric_version = 2)
```shell
prometheus,quantile=1,url=http://example.org:9273/metrics go_gc_duration_seconds=0.005574303 1556075100000000000
prometheus,quantile=0.75,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0001046 1556075100000000000
prometheus,quantile=0.5,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0000719 1556075100000000000

View File

@ -4,7 +4,7 @@ The proxmox plugin gathers metrics about containers and VMs using the Proxmox AP
Telegraf minimum version: Telegraf 1.16.0
### Configuration:
## Configuration
```toml
[[inputs.proxmox]]
@ -25,13 +25,13 @@ Telegraf minimum version: Telegraf 1.16.0
response_timeout = "5s"
```
#### Permissions
### Permissions
The plugin needs access to the Proxmox API. An API token
must be provided, and the corresponding user must be assigned at least the PVEAuditor
role on `/`.
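As an illustration of wiring such a token into the plugin (the user, token id and secret below are placeholders; verify the option names against the sample configuration above):

```toml
[[inputs.proxmox]]
  base_url = "https://localhost:8006/api2/json"
  ## token of a user that holds at least the PVEAuditor role on /
  api_token = "telegraf@pve!monitoring=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```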
### Measurements & Fields:
## Measurements & Fields
- proxmox
- status
@ -50,16 +50,16 @@ role on /.
- disk_free
- disk_used_percentage
### Tags:
## Tags
- node_fqdn - FQDN of the node telegraf is running on
- vm_name - Name of the VM/container
- vm_fqdn - FQDN of the VM/container
- vm_type - Type of the VM/container (lxc, qemu)
- node_fqdn - FQDN of the node telegraf is running on
- vm_name - Name of the VM/container
- vm_fqdn - FQDN of the VM/container
- vm_type - Type of the VM/container (lxc, qemu)
### Example Output:
## Example Output
```
```text
$ ./telegraf --config telegraf.conf --input-filter proxmox --test
> proxmox,host=pxnode,node_fqdn=pxnode.example.com,vm_fqdn=vm1.example.com,vm_name=vm1,vm_type=lxc cpuload=0.147998116735236,disk_free=4461129728i,disk_total=5217320960i,disk_used=756191232i,disk_used_percentage=14,mem_free=1046827008i,mem_total=1073741824i,mem_used=26914816i,mem_used_percentage=2,status="running",swap_free=536698880i,swap_total=536870912i,swap_used=172032i,swap_used_percentage=0,uptime=1643793i 1595457277000000000
> ...

View File

@ -1,12 +1,12 @@
# PuppetAgent Input Plugin
#### Description
## Description
The puppetagent plugin collects variables reported in the 'last_run_summary.yaml' file,
usually located in `/var/lib/puppet/state/`
[PuppetAgent Runs](https://puppet.com/blog/puppet-monitoring-how-to-monitor-success-or-failure-of-puppet-runs/).
```
```sh
cat /var/lib/puppet/state/last_run_summary.yaml
---
@ -45,7 +45,7 @@ cat /var/lib/puppet/state/last_run_summary.yaml
puppet: "3.7.5"
```
```
```sh
jcross@pit-devops-02 ~ >sudo ./telegraf_linux_amd64 --input-filter puppetagent --config tele.conf --test
* Plugin: puppetagent, Collection 1
> [] puppetagent_events_failure value=0
@ -77,65 +77,72 @@ jcross@pit-devops-02 ~ >sudo ./telegraf_linux_amd64 --input-filter puppetagent -
> [] puppetagent_version_puppet value=3.7.5
```
## Measurements:
#### PuppetAgent int64 measurements:
## Measurements
### PuppetAgent int64 measurements
Meta:
- units: int64
- tags: ``
Measurement names:
- puppetagent_changes_total
- puppetagent_events_failure
- puppetagent_events_total
- puppetagent_events_success
- puppetagent_resources_changed
- puppetagent_resources_corrective_change
- puppetagent_resources_failed
- puppetagent_resources_failedtorestart
- puppetagent_resources_outofsync
- puppetagent_resources_restarted
- puppetagent_resources_scheduled
- puppetagent_resources_skipped
- puppetagent_resources_total
- puppetagent_time_service
- puppetagent_time_lastrun
- puppetagent_version_config
#### PuppetAgent float64 measurements:
- puppetagent_changes_total
- puppetagent_events_failure
- puppetagent_events_total
- puppetagent_events_success
- puppetagent_resources_changed
- puppetagent_resources_corrective_change
- puppetagent_resources_failed
- puppetagent_resources_failedtorestart
- puppetagent_resources_outofsync
- puppetagent_resources_restarted
- puppetagent_resources_scheduled
- puppetagent_resources_skipped
- puppetagent_resources_total
- puppetagent_time_service
- puppetagent_time_lastrun
- puppetagent_version_config
### PuppetAgent float64 measurements
Meta:
- units: float64
- tags: ``
Measurement names:
- puppetagent_time_anchor
- puppetagent_time_catalogapplication
- puppetagent_time_configretrieval
- puppetagent_time_convertcatalog
- puppetagent_time_cron
- puppetagent_time_exec
- puppetagent_time_factgeneration
- puppetagent_time_file
- puppetagent_time_filebucket
- puppetagent_time_group
- puppetagent_time_lastrun
- puppetagent_time_noderetrieval
- puppetagent_time_notify
- puppetagent_time_package
- puppetagent_time_pluginsync
- puppetagent_time_schedule
- puppetagent_time_sshauthorizedkey
- puppetagent_time_total
- puppetagent_time_transactionevaluation
- puppetagent_time_user
- puppetagent_version_config
#### PuppetAgent string measurements:
- puppetagent_time_anchor
- puppetagent_time_catalogapplication
- puppetagent_time_configretrieval
- puppetagent_time_convertcatalog
- puppetagent_time_cron
- puppetagent_time_exec
- puppetagent_time_factgeneration
- puppetagent_time_file
- puppetagent_time_filebucket
- puppetagent_time_group
- puppetagent_time_lastrun
- puppetagent_time_noderetrieval
- puppetagent_time_notify
- puppetagent_time_package
- puppetagent_time_pluginsync
- puppetagent_time_schedule
- puppetagent_time_sshauthorizedkey
- puppetagent_time_total
- puppetagent_time_transactionevaluation
- puppetagent_time_user
- puppetagent_version_config
### PuppetAgent string measurements
Meta:
- units: string
- tags: ``
Measurement names:
- puppetagent_version_puppet
- puppetagent_version_puppet

View File

@ -7,7 +7,7 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
[management]: https://www.rabbitmq.com/management.html
[management-reference]: https://raw.githack.com/rabbitmq/rabbitmq-management/rabbitmq_v3_6_9/priv/www/api/index.html
### Configuration
## Configuration
```toml
[[inputs.rabbitmq]]
@ -66,7 +66,7 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
# federation_upstream_exclude = []
```
### Metrics
## Metrics
- rabbitmq_overview
- tags:
@ -90,7 +90,7 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
- return_unroutable (int, number of unroutable messages)
- return_unroutable_rate (float, number of unroutable messages per second)
+ rabbitmq_node
- rabbitmq_node
- tags:
- url
- node
@ -182,7 +182,7 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
- slave_nodes (int, count)
- synchronised_slave_nodes (int, count)
+ rabbitmq_exchange
- rabbitmq_exchange
- tags:
- url
- exchange
@ -217,17 +217,17 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
- messages_publish (int, count)
- messages_return_unroutable (int, count)
### Sample Queries
## Sample Queries
Message rates for the entire node can be calculated from total message counts. For instance, to get the rate of messages published per minute, use this query:
```
```sql
SELECT NON_NEGATIVE_DERIVATIVE(LAST("messages_published"), 1m) AS messages_published_rate FROM rabbitmq_overview WHERE time > now() - 10m GROUP BY time(1m)
```
### Example Output
## Example Output
```
```text
rabbitmq_queue,url=http://amqp.example.org:15672,queue=telegraf,vhost=influxdb,node=rabbit@amqp.example.org,durable=true,auto_delete=false,host=amqp.example.org messages_deliver_get=0i,messages_publish=329i,messages_publish_rate=0.2,messages_redeliver_rate=0,message_bytes_ready=0i,message_bytes_unacked=0i,messages_deliver=329i,messages_unack=0i,consumers=1i,idle_since="",messages=0i,messages_deliver_rate=0.2,messages_deliver_get_rate=0.2,messages_redeliver=0i,memory=43032i,message_bytes_ram=0i,messages_ack=329i,messages_ready=0i,messages_ack_rate=0.2,consumer_utilisation=1,message_bytes=0i,message_bytes_persist=0i 1493684035000000000
rabbitmq_overview,url=http://amqp.example.org:15672,host=amqp.example.org channels=2i,consumers=1i,exchanges=17i,messages_acked=329i,messages=0i,messages_ready=0i,messages_unacked=0i,connections=2i,queues=1i,messages_delivered=329i,messages_published=329i,clustering_listeners=2i,amqp_listeners=1i 1493684035000000000
rabbitmq_node,url=http://amqp.example.org:15672,node=rabbit@amqp.example.org,host=amqp.example.org fd_total=1024i,fd_used=32i,mem_limit=8363329126i,sockets_total=829i,disk_free=8175935488i,disk_free_limit=50000000i,mem_used=58771080i,proc_total=1048576i,proc_used=267i,run_queue=0i,sockets_used=2i,running=1i 149368403500000000

View File

@ -3,7 +3,7 @@
The [raindrops](http://raindrops.bogomips.org/) plugin reads from
specified raindops [middleware](http://raindrops.bogomips.org/Raindrops/Middleware.html) URI and adds stats to InfluxDB.
### Configuration:
## Configuration
```toml
# Read raindrops stats
@ -11,31 +11,31 @@ specified raindops [middleware](http://raindrops.bogomips.org/Raindrops/Middlewa
urls = ["http://localhost:8080/_raindrops"]
```
### Measurements & Fields:
## Measurements & Fields
- raindrops
- calling (integer, count)
- writing (integer, count)
- calling (integer, count)
- writing (integer, count)
- raindrops_listen
- active (integer, bytes)
- queued (integer, bytes)
- active (integer, bytes)
- queued (integer, bytes)
### Tags:
## Tags
- Raindops calling/writing of all the workers:
- server
- port
- server
- port
- raindrops_listen (ip:port):
- ip
- port
- ip
- port
- raindrops_listen (Unix Socket):
- socket
- socket
### Example Output:
## Example Output
```
```shell
$ ./telegraf --config telegraf.conf --input-filter raindrops --test
* Plugin: raindrops, Collection 1
> raindrops,port=8080,server=localhost calling=0i,writing=0i 1455479896806238204

View File

@ -4,7 +4,7 @@ This plugin is only available on Linux (only for `386`, `amd64`, `arm` and `arm6
The `RAS` plugin gathers and counts errors provided by [RASDaemon](https://github.com/mchehab/rasdaemon).
### Configuration
## Configuration
```toml
[[inputs.ras]]
@ -15,7 +15,7 @@ The `RAS` plugin gathers and counts errors provided by [RASDaemon](https://githu
In addition, `RASDaemon` runs by default with the `--enable-sqlite3` flag. In case of problems with the SQLite3 database, please verify this is still a default option.
### Metrics
## Metrics
- ras
- tags:
@ -40,6 +40,7 @@ In addition `RASDaemon` runs, by default, with `--enable-sqlite3` flag. In case
- unclassified_mce_errors
Please note that `processor_base_errors` is an aggregate counter measuring the following MCE events:
- internal_timer_errors
- smm_handler_code_access_violation_errors
- internal_parity_errors
@ -48,13 +49,13 @@ Please note that `processor_base_errors` is aggregate counter measuring the foll
- microcode_rom_parity_errors
- unclassified_mce_errors
### Permissions
## Permissions
This plugin requires access to the SQLite3 database from `RASDaemon`. Please make sure that the user has the required permissions to access this database.
### Example Output
## Example Output
```
```shell
ras,host=ubuntu,socket_id=0 external_mce_base_errors=1i,frc_errors=1i,instruction_tlb_errors=5i,internal_parity_errors=1i,internal_timer_errors=1i,l0_and_l1_cache_errors=7i,memory_read_corrected_errors=25i,memory_read_uncorrectable_errors=0i,memory_write_corrected_errors=5i,memory_write_uncorrectable_errors=0i,microcode_rom_parity_errors=1i,processor_base_errors=7i,processor_bus_errors=1i,smm_handler_code_access_violation_errors=1i,unclassified_mce_base_errors=1i 1598867393000000000
ras,host=ubuntu level_2_cache_errors=0i,upi_errors=0i 1598867393000000000
```

View File

@ -4,7 +4,7 @@ Reads metrics from RavenDB servers via monitoring endpoints APIs.
Requires RavenDB Server 5.2+.
### Configuration
## Configuration
The following is an example config for RavenDB. **Note:** The client certificate used should have `Operator` permissions on the cluster.
@ -43,7 +43,7 @@ The following is an example config for RavenDB. **Note:** The client certificate
# collection_stats_dbs = []
```
### Metrics
## Metrics
- ravendb_server
- tags:
@ -57,7 +57,7 @@ The following is an example config for RavenDB. **Note:** The client certificate
- certificate_server_certificate_expiration_left_in_sec (optional)
- certificate_well_known_admin_certificates (optional, separated by ';')
- cluster_current_term
- cluster_index
- cluster_index
- cluster_node_state
- 0 -> Passive
- 1 -> Candidate
@ -147,7 +147,7 @@ The following is an example config for RavenDB. **Note:** The client certificate
- uptime_in_sec
- ravendb_indexes
- tags:
- tags:
- database_name
- index_name
- node_tag
@ -201,16 +201,16 @@ The following is an example config for RavenDB. **Note:** The client certificate
- tombstones_size_in_bytes
- total_size_in_bytes
### Example output
## Example output
```
```text
> ravendb_server,cluster_id=07aecc42-9194-4181-999c-1c42450692c9,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 backup_current_number_of_running_backups=0i,backup_max_number_of_concurrent_backups=4i,certificate_server_certificate_expiration_left_in_sec=-1,cluster_current_term=2i,cluster_index=10i,cluster_node_state=4i,config_server_urls="http://127.0.0.1:8080",cpu_assigned_processor_count=8i,cpu_machine_usage=19.09944089456869,cpu_process_usage=0.16977205323024872,cpu_processor_count=8i,cpu_thread_pool_available_completion_port_threads=1000i,cpu_thread_pool_available_worker_threads=32763i,databases_loaded_count=1i,databases_total_count=1i,disk_remaining_storage_space_percentage=18i,disk_system_store_total_data_file_size_in_mb=35184372088832i,disk_system_store_used_data_file_size_in_mb=31379031064576i,disk_total_free_space_in_mb=42931i,license_expiration_left_in_sec=24079222.8772186,license_max_cores=256i,license_type="Enterprise",license_utilized_cpu_cores=8i,memory_allocated_in_mb=205i,memory_installed_in_mb=16384i,memory_low_memory_severity=0i,memory_physical_in_mb=16250i,memory_total_dirty_in_mb=0i,memory_total_swap_size_in_mb=0i,memory_total_swap_usage_in_mb=0i,memory_working_set_swap_usage_in_mb=0i,network_concurrent_requests_count=1i,network_last_request_time_in_sec=0.0058717,network_requests_per_sec=0.09916543455308825,network_tcp_active_connections=128i,network_total_requests=10i,server_full_version="5.2.0-custom-52",server_process_id=31044i,server_version="5.2",uptime_in_sec=56i 1613027977000000000
> ravendb_databases,database_id=ced0edba-8f80-48b8-8e81-c3d2c6748ec3,database_name=db1,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 counts_alerts=0i,counts_attachments=17i,counts_documents=1059i,counts_performance_hints=0i,counts_rehabs=0i,counts_replication_factor=1i,counts_revisions=5475i,counts_unique_attachments=17i,indexes_auto_count=0i,indexes_count=7i,indexes_disabled_count=0i,indexes_errored_count=0i,indexes_errors_count=0i,indexes_idle_count=0i,indexes_stale_count=0i,indexes_static_count=7i,statistics_doc_puts_per_sec=0,statistics_map_index_indexes_per_sec=0,statistics_map_reduce_index_mapped_per_sec=0,statistics_map_reduce_index_reduced_per_sec=0,statistics_request_average_duration_in_ms=0,statistics_requests_count=0i,statistics_requests_per_sec=0,storage_documents_allocated_data_file_in_mb=140737488355328i,storage_documents_used_data_file_in_mb=74741020884992i,storage_indexes_allocated_data_file_in_mb=175921860444160i,storage_indexes_used_data_file_in_mb=120722940755968i,storage_total_allocated_storage_file_in_mb=325455441821696i,storage_total_free_space_in_mb=42931i,uptime_in_sec=54 1613027977000000000
> ravendb_indexes,database_name=db1,host=DESKTOP-2OISR6D,index_name=Orders/Totals,node_tag=A,url=http://localhost:8080 errors=0i,is_invalid=false,lock_mode="Unlock",mapped_per_sec=0,priority="Normal",reduced_per_sec=0,state="Normal",status="Running",time_since_last_indexing_in_sec=45.4256655,time_since_last_query_in_sec=45.4304202,type="Map" 1613027977000000000
> ravendb_collections,collection_name=@hilo,database_name=db1,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 documents_count=8i,documents_size_in_bytes=122880i,revisions_size_in_bytes=0i,tombstones_size_in_bytes=122880i,total_size_in_bytes=245760i 1613027977000000000
```
### Contributors
## Contributors
- Marcin Lewandowski (https://github.com/ml054/)
- Casey Barton (https://github.com/bartoncasey)
- Marcin Lewandowski (<https://github.com/ml054/>)
- Casey Barton (<https://github.com/bartoncasey>)

View File

@ -4,7 +4,7 @@ The `redfish` plugin gathers metrics and status information about CPU temperatur
Telegraf minimum version: Telegraf 1.15.0
### Configuration
## Configuration
```toml
[[inputs.redfish]]
@ -29,7 +29,7 @@ Telegraf minimum version: Telegraf 1.15.0
# insecure_skip_verify = false
```
### Metrics
## Metrics
- redfish_thermal_temperatures
- tags:
@ -50,8 +50,7 @@ Telegraf minimum version: Telegraf 1.15.0
- lower_threshold_critical
- lower_threshold_fatal
+ redfish_thermal_fans
- redfish_thermal_fans
- tags:
- source
- member_id
@ -70,7 +69,6 @@ Telegraf minimum version: Telegraf 1.15.0
- lower_threshold_critical
- lower_threshold_fatal
- redfish_power_powersupplies
- tags:
- source
@ -90,7 +88,6 @@ Telegraf minimum version: Telegraf 1.15.0
- power_input_watts
- power_output_watts
- redfish_power_voltages (available only if voltage data is found)
- tags:
- source
@ -110,10 +107,9 @@ Telegraf minimum version: Telegraf 1.15.0
- lower_threshold_critical
- lower_threshold_fatal
## Example Output
### Example Output
```
```text
redfish_thermal_temperatures,source=test-hostname,name=CPU1,address=http://190.0.0.1,member_id="0"datacenter="Tampa",health="OK",rack="12",room="tbc",row="3",state="Enabled" reading_celsius=41,upper_threshold_critical=59,upper_threshold_fatal=64 1582114112000000000
redfish_thermal_temperatures,source=test-hostname,name=CPU2,address=http://190.0.0.1,member_id="1"datacenter="Tampa",health="OK",rack="12",room="tbc",row="3",state="Enabled" reading_celsius=51,upper_threshold_critical=59,upper_threshold_fatal=64 1582114112000000000
redfish_thermal_temperatures,source=test-hostname,name=SystemBoardInlet,address=http://190.0.0.1,member_id="2"datacenter="Tampa",health="OK",rack="12",room="tbc",row="3",state="Enabled" reading_celsius=23,upper_threshold_critical=59,upper_threshold_fatal=64 1582114112000000000

View File

@ -1,6 +1,6 @@
# Redis Input Plugin
### Configuration:
## Configuration
```toml
# Read Redis's basic status information
@ -37,7 +37,7 @@
# insecure_skip_verify = true
```
### Measurements & Fields:
## Measurements & Fields
The plugin gathers the results of the [INFO](https://redis.io/commands/info) redis command.
There are two separate measurements: _redis_ and _redis\_keyspace_; the latter is used for gathering database-related statistics.
@ -45,97 +45,97 @@ There are two separate measurements: _redis_ and _redis\_keyspace_, the latter i
Additionally, the plugin calculates the hit/miss ratio (keyspace\_hitrate) and the elapsed time since the last rdb save (rdb\_last\_save\_time\_elapsed).
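Since the hit rate is pre-computed as a field (listed below), it can be queried directly; for instance, an InfluxQL sketch like the following charts it over the last hour:

```sql
SELECT mean("keyspace_hitrate") FROM "redis" WHERE time > now() - 1h GROUP BY time(1m)
```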
- redis
- keyspace_hitrate(float, number)
- rdb_last_save_time_elapsed(int, seconds)
- keyspace_hitrate(float, number)
- rdb_last_save_time_elapsed(int, seconds)
**Server**
- uptime(int, seconds)
- lru_clock(int, number)
- redis_version(string)
- uptime(int, seconds)
- lru_clock(int, number)
- redis_version(string)
**Clients**
- clients(int, number)
- client_longest_output_list(int, number)
- client_biggest_input_buf(int, number)
- blocked_clients(int, number)
- clients(int, number)
- client_longest_output_list(int, number)
- client_biggest_input_buf(int, number)
- blocked_clients(int, number)
**Memory**
- used_memory(int, bytes)
- used_memory_rss(int, bytes)
- used_memory_peak(int, bytes)
- total_system_memory(int, bytes)
- used_memory_lua(int, bytes)
- maxmemory(int, bytes)
- maxmemory_policy(string)
- mem_fragmentation_ratio(float, number)
- used_memory(int, bytes)
- used_memory_rss(int, bytes)
- used_memory_peak(int, bytes)
- total_system_memory(int, bytes)
- used_memory_lua(int, bytes)
- maxmemory(int, bytes)
- maxmemory_policy(string)
- mem_fragmentation_ratio(float, number)
**Persistence**
- loading(int,flag)
- rdb_changes_since_last_save(int, number)
- rdb_bgsave_in_progress(int, flag)
- rdb_last_save_time(int, seconds)
- rdb_last_bgsave_status(string)
- rdb_last_bgsave_time_sec(int, seconds)
- rdb_current_bgsave_time_sec(int, seconds)
- aof_enabled(int, flag)
- aof_rewrite_in_progress(int, flag)
- aof_rewrite_scheduled(int, flag)
- aof_last_rewrite_time_sec(int, seconds)
- aof_current_rewrite_time_sec(int, seconds)
- aof_last_bgrewrite_status(string)
- aof_last_write_status(string)
- loading(int,flag)
- rdb_changes_since_last_save(int, number)
- rdb_bgsave_in_progress(int, flag)
- rdb_last_save_time(int, seconds)
- rdb_last_bgsave_status(string)
- rdb_last_bgsave_time_sec(int, seconds)
- rdb_current_bgsave_time_sec(int, seconds)
- aof_enabled(int, flag)
- aof_rewrite_in_progress(int, flag)
- aof_rewrite_scheduled(int, flag)
- aof_last_rewrite_time_sec(int, seconds)
- aof_current_rewrite_time_sec(int, seconds)
- aof_last_bgrewrite_status(string)
- aof_last_write_status(string)
**Stats**
- total_connections_received(int, number)
- total_commands_processed(int, number)
- instantaneous_ops_per_sec(int, number)
- total_net_input_bytes(int, bytes)
- total_net_output_bytes(int, bytes)
- instantaneous_input_kbps(float, KB/sec)
- instantaneous_output_kbps(float, KB/sec)
- rejected_connections(int, number)
- sync_full(int, number)
- sync_partial_ok(int, number)
- sync_partial_err(int, number)
- expired_keys(int, number)
- evicted_keys(int, number)
- keyspace_hits(int, number)
- keyspace_misses(int, number)
- pubsub_channels(int, number)
- pubsub_patterns(int, number)
- latest_fork_usec(int, microseconds)
- migrate_cached_sockets(int, number)
- total_connections_received(int, number)
- total_commands_processed(int, number)
- instantaneous_ops_per_sec(int, number)
- total_net_input_bytes(int, bytes)
- total_net_output_bytes(int, bytes)
- instantaneous_input_kbps(float, KB/sec)
- instantaneous_output_kbps(float, KB/sec)
- rejected_connections(int, number)
- sync_full(int, number)
- sync_partial_ok(int, number)
- sync_partial_err(int, number)
- expired_keys(int, number)
- evicted_keys(int, number)
- keyspace_hits(int, number)
- keyspace_misses(int, number)
- pubsub_channels(int, number)
- pubsub_patterns(int, number)
- latest_fork_usec(int, microseconds)
- migrate_cached_sockets(int, number)
**Replication**
- connected_slaves(int, number)
- master_link_down_since_seconds(int, number)
- master_link_status(string)
- master_repl_offset(int, number)
- second_repl_offset(int, number)
- repl_backlog_active(int, number)
- repl_backlog_size(int, bytes)
- repl_backlog_first_byte_offset(int, number)
- repl_backlog_histlen(int, bytes)
- connected_slaves(int, number)
- master_link_down_since_seconds(int, number)
- master_link_status(string)
- master_repl_offset(int, number)
- second_repl_offset(int, number)
- repl_backlog_active(int, number)
- repl_backlog_size(int, bytes)
- repl_backlog_first_byte_offset(int, number)
- repl_backlog_histlen(int, bytes)
**CPU**
- used_cpu_sys(float, number)
- used_cpu_user(float, number)
- used_cpu_sys_children(float, number)
- used_cpu_user_children(float, number)
- used_cpu_sys(float, number)
- used_cpu_user(float, number)
- used_cpu_sys_children(float, number)
- used_cpu_user_children(float, number)
**Cluster**
- cluster_enabled(int, flag)
- cluster_enabled(int, flag)
- redis_keyspace
- keys(int, number)
- expires(int, number)
- avg_ttl(int, number)
- keys(int, number)
- expires(int, number)
- avg_ttl(int, number)
- redis_cmdstat
Every Redis command that has been used will have 3 new fields:
- calls(int, number)
- usec(int, microseconds)
- usec_per_call(float, microseconds)
- calls(int, number)
- usec(int, microseconds)
- usec_per_call(float, microseconds)
- redis_replication
- tags:
@ -148,22 +148,23 @@ Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate) a
- lag(int, number)
- offset(int, number)
### Tags:
## Tags
- All measurements have the following tags:
- port
- server
- replication_role
- port
- server
- replication_role
- The redis_keyspace measurement has an additional database tag:
- database
- database
- The redis_cmdstat measurement has an additional tag:
- command
- command
### Example Output:
## Example Output
Using this configuration:
```toml
[[inputs.redis]]
## specify servers via a url matching:
@ -178,22 +179,26 @@ Using this configuration:
```
When run with:
```
```sh
./telegraf --config telegraf.conf --input-filter redis --test
```
It produces:
```
```shell
* Plugin: redis, Collection 1
> redis,server=localhost,port=6379,replication_role=master,host=host keyspace_hitrate=1,clients=2i,blocked_clients=0i,instantaneous_input_kbps=0,sync_full=0i,pubsub_channels=0i,pubsub_patterns=0i,total_net_output_bytes=6659253i,used_memory=842448i,total_system_memory=8351916032i,aof_current_rewrite_time_sec=-1i,rdb_changes_since_last_save=0i,sync_partial_err=0i,latest_fork_usec=508i,instantaneous_output_kbps=0,expired_keys=0i,used_memory_peak=843416i,aof_rewrite_in_progress=0i,aof_last_bgrewrite_status="ok",migrate_cached_sockets=0i,connected_slaves=0i,maxmemory_policy="noeviction",aof_rewrite_scheduled=0i,total_net_input_bytes=3125i,used_memory_rss=9564160i,repl_backlog_histlen=0i,rdb_last_bgsave_status="ok",aof_last_rewrite_time_sec=-1i,keyspace_misses=0i,client_biggest_input_buf=5i,used_cpu_user=1.33,maxmemory=0i,rdb_current_bgsave_time_sec=-1i,total_commands_processed=271i,repl_backlog_size=1048576i,used_cpu_sys=3,uptime=2822i,lru_clock=16706281i,used_memory_lua=37888i,rejected_connections=0i,sync_partial_ok=0i,evicted_keys=0i,rdb_last_save_time_elapsed=1922i,rdb_last_save_time=1493099368i,instantaneous_ops_per_sec=0i,used_cpu_user_children=0,client_longest_output_list=0i,master_repl_offset=0i,repl_backlog_active=0i,keyspace_hits=2i,used_cpu_sys_children=0,cluster_enabled=0i,rdb_last_bgsave_time_sec=0i,aof_last_write_status="ok",total_connections_received=263i,aof_enabled=0i,repl_backlog_first_byte_offset=0i,mem_fragmentation_ratio=11.35,loading=0i,rdb_bgsave_in_progress=0i 1493101290000000000
```
redis_keyspace:
```
```shell
> redis_keyspace,database=db1,host=host,server=localhost,port=6379,replication_role=master keys=1i,expires=0i,avg_ttl=0i 1493101350000000000
```
redis_command:
```
```shell
> redis_cmdstat,command=publish,host=host,port=6379,replication_role=master,server=localhost calls=68113i,usec=325146i,usec_per_call=4.77 1559227136000000000
```

View File

@ -2,7 +2,7 @@
Collect metrics from [RethinkDB](https://www.rethinkdb.com/).
### Configuration
## Configuration
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage rethinkdb`.
@ -25,7 +25,7 @@ generate it using `telegraf --usage rethinkdb`.
# servers = ["rethinkdb://username:auth_key@127.0.0.1:28015"]
```
### Metrics
## Metrics
- rethinkdb
- tags:
@ -44,7 +44,7 @@ generate it using `telegraf --usage rethinkdb`.
- disk_usage_metadata_bytes (integer, bytes)
- disk_usage_preallocated_bytes (integer, bytes)
+ rethinkdb_engine
- rethinkdb_engine
- tags:
- type
- ns

View File

@ -2,7 +2,7 @@
The Riak plugin gathers metrics from one or more riak instances.
### Configuration:
## Configuration
```toml
# Description
@ -11,7 +11,7 @@ The Riak plugin gathers metrics from one or more riak instances.
servers = ["http://localhost:8098"]
```
### Measurements & Fields:
## Measurements & Fields
Riak provides one measurement named "riak", with the following fields:
@ -63,16 +63,16 @@ Riak provides one measurement named "riak", with the following fields:
Time measurements (such as node_get_fsm_time_mean) are reported in nanoseconds.
### Tags:
## Tags
All measurements have the following tags:
- server (the host:port of the given server address, ex. `127.0.0.1:8087`)
- nodename (the internal node name received, ex. `riak@127.0.0.1`)
### Example Output:
## Example Output
```
```shell
$ ./telegraf --config telegraf.conf --input-filter riak --test
> riak,nodename=riak@127.0.0.1,server=localhost:8098 cpu_avg1=31i,cpu_avg15=69i,cpu_avg5=51i,memory_code=11563738i,memory_ets=5925872i,memory_processes=30236069i,memory_system=93074971i,memory_total=123311040i,node_get_fsm_objsize_100=0i,node_get_fsm_objsize_95=0i,node_get_fsm_objsize_99=0i,node_get_fsm_objsize_mean=0i,node_get_fsm_objsize_median=0i,node_get_fsm_siblings_100=0i,node_get_fsm_siblings_95=0i,node_get_fsm_siblings_99=0i,node_get_fsm_siblings_mean=0i,node_get_fsm_siblings_median=0i,node_get_fsm_time_100=0i,node_get_fsm_time_95=0i,node_get_fsm_time_99=0i,node_get_fsm_time_mean=0i,node_get_fsm_time_median=0i,node_gets=0i,node_gets_total=19i,node_put_fsm_time_100=0i,node_put_fsm_time_95=0i,node_put_fsm_time_99=0i,node_put_fsm_time_mean=0i,node_put_fsm_time_median=0i,node_puts=0i,node_puts_total=0i,pbc_active=0i,pbc_connects=0i,pbc_connects_total=20i,vnode_gets=0i,vnode_gets_total=57i,vnode_index_reads=0i,vnode_index_reads_total=0i,vnode_index_writes=0i,vnode_index_writes_total=0i,vnode_puts=0i,vnode_puts_total=0i,read_repair=0i,read_repairs_total=0i 1455913392622482332
```

View File

@ -3,8 +3,7 @@
The Riemann Listener is a simple input plugin that listens for messages from
Riemann clients using the riemann-protobuf format.
### Configuration:
## Configuration
This is a sample configuration for the plugin.
@ -36,6 +35,7 @@ This is a sample configuration for the plugin.
## Defaults to the OS configuration.
# keep_alive_period = "5m"
```
Just like Riemann, the default port is 5555. This can be configured; refer to the configuration above.
Riemann `Service` is mapped as `measurement`. `metric` and `TTL` are converted into field values.