* Fix the issue
* Remove test I was using for replication
* Restore accidentally removed test.
* Add lock only where it is necessary.
* eliminate unnecessary space
* [outputs.signalfx] Add output plugin for SignalFX
This output plugin converts `telegraf.Metric`s into SignalFx
`datapoint`s and then transmits them to the ingest servers using the
SignalFx Go client library.
As of this commit, the client lib is allowed to pick sane defaults
and none of its fields are overridable via telegraf config. This
can be changed in the future if needed.
The unit tests only cover the conversion of `telegraf.Metric`s into
`datapoint` structs. All code that executes after that is assumed to be
tested in the SignalFx client library itself (and not worth writing
end-to-end tests for).
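As a rough, illustrative sketch of that conversion step (not the plugin's actual code; the `dataPoint` struct below is a simplified stand-in for the client library's type, and the `"<name>.<field>"` naming scheme is an assumption):
```
package main

import (
	"fmt"
	"time"
)

// dataPoint is a simplified stand-in for the SignalFx client library's
// datapoint type (assumption: the real library's fields differ).
type dataPoint struct {
	Metric     string
	Dimensions map[string]string
	Value      float64
	Timestamp  time.Time
}

// metric mirrors the parts of telegraf.Metric the conversion needs:
// a name, tags, numeric fields, and a timestamp.
type metric struct {
	name   string
	tags   map[string]string
	fields map[string]interface{}
	ts     time.Time
}

// toDataPoints emits one datapoint per numeric field, using the metric
// name plus the field name as the SignalFx metric and tags as dimensions.
func toDataPoints(m metric) []dataPoint {
	var dps []dataPoint
	for field, raw := range m.fields {
		var v float64
		switch x := raw.(type) {
		case float64:
			v = x
		case int64:
			v = float64(x)
		default:
			continue // non-numeric fields are skipped in this sketch
		}
		dps = append(dps, dataPoint{
			Metric:     m.name + "." + field,
			Dimensions: m.tags,
			Value:      v,
			Timestamp:  m.ts,
		})
	}
	return dps
}

func main() {
	m := metric{
		name:   "cpu",
		tags:   map[string]string{"host": "example"},
		fields: map[string]interface{}{"usage_idle": 99.5},
		ts:     time.Now(),
	}
	fmt.Printf("%+v\n", toDataPoints(m))
}
```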
Further enhancements:
- Custom ingest urls
- Better batching
- More extensive tests
- Support for events, sent by whitelist only
Co-authored-by: Ben Keith <benkeith@splunk.com>
Co-authored-by: Akshay <akshay.moghe@gmail.com>
Co-authored-by: Jay Camp <jcamp@splunk.com>
* Revive fixes regarding the following set of rules:
[rule.if-return]
[rule.increment-decrement]
[rule.var-declaration]
[rule.package-comments]
[rule.receiver-naming]
[rule.unexported-return]
* Replace exclamation mark with caret
* Update README and use table driven tests
* Use ReplaceAll instead
* Use the doublestar package to glob file paths instead
* Add license
* Fix order of dependencies
* Doc improvement; maybe better than string replace?
* Forgot to remove nil from test
* Use regex instead of library
* Revert unnecessary change
* Go back to using the library
Replace the string twice to handle an edge case
* Add support for datadog distributions in statsd
* Parse metric distribution correctly
* Add tests to check distributions are parsed correctly
* Update Statsd plugin Readme with details about Distributions metric
* Refactor metric distribution initialization code
* Update distribution metric interface to replace fields with value
* Refactor statsd distribution metric test code
* Fix go formatting errors
* Add tests to parse only when DataDog Distributions config is enabled
* Add config to enable parsing DataDog Statsd Distributions
* Document use of datadog_distributions config in Readme
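As a hedged sketch of the parsing these commits describe (not the plugin's actual code; the `distribution` type and function name are hypothetical), a DogStatsD distribution line such as `request.latency:12.5|d` is only accepted when the `datadog_distributions` option is enabled:
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// distribution is a hypothetical holder for a parsed DataDog distribution
// sample (the real plugin uses its own cached-metric types).
type distribution struct {
	name  string
	value float64
}

// parseDistribution handles a line of the form "name:value|d" (possibly
// followed by more |-separated sections) and only returns a distribution
// when the datadog_distributions setting is on.
func parseDistribution(line string, datadogDistributions bool) (*distribution, error) {
	parts := strings.Split(line, "|")
	if len(parts) < 2 || parts[1] != "d" {
		return nil, fmt.Errorf("not a distribution: %q", line)
	}
	if !datadogDistributions {
		// Without the config flag the "d" metric type is ignored.
		return nil, nil
	}
	nameValue := strings.SplitN(parts[0], ":", 2)
	if len(nameValue) != 2 {
		return nil, fmt.Errorf("malformed bucket: %q", line)
	}
	value, err := strconv.ParseFloat(nameValue[1], 64)
	if err != nil {
		return nil, err
	}
	return &distribution{name: nameValue[0], value: value}, nil
}

func main() {
	d, err := parseDistribution("request.latency:12.5|d", true)
	fmt.Println(d, err)
}
```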
* improve mntr regex to match user-specific keys.
* Update plugins/inputs/zookeeper/zookeeper.go
Co-authored-by: Sven Rebhan <36194019+srebhan@users.noreply.github.com>
Co-authored-by: guoxu <guoxu@chinatelecom.cn>
* Use go-ping for "native" execution in Ping plugin
* Check for IPv6 and deadline outside the goroutine
* Ensure DNS failure is handled
* Move interval and timeout calculation to init
Removed the DNS failure check; handling that is the third-party library's responsibility
* Rename timeout to avoid conflict
* Move native ping to interface
Update tests
* Check for zero length
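For illustration only, a minimal native-ping sketch assuming the `github.com/go-ping/ping` API (the plugin's real code wires these values from its config and init, and handles more cases):
```
package main

import (
	"fmt"
	"time"

	"github.com/go-ping/ping"
)

func main() {
	// NewPinger resolves the target; a resolution failure surfaces here,
	// which is why the plugin no longer pre-checks DNS itself.
	pinger, err := ping.NewPinger("example.org")
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}

	// In the real plugin these come from the config and are computed in
	// init; hard-coded here for the sketch.
	pinger.Count = 3
	pinger.Interval = time.Second
	pinger.Timeout = 5 * time.Second
	pinger.SetPrivileged(true) // raw ICMP sockets usually need privileges

	pinger.Run() // blocks until the run completes or Timeout elapses

	stats := pinger.Statistics()
	if stats.PacketsRecv == 0 {
		fmt.Println("no replies received")
		return
	}
	fmt.Printf("loss=%.1f%% avg=%v\n", stats.PacketLoss, stats.AvgRtt)
}
```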
* GNMI plugin should not strip the first character of field keys when no 'alias path' exists.
* fix test method name
* fix test file formatting
* fix test file formatting
* Remove my unnecessary failing test
This plugin is known to work with Kafkabeat and Filebeat, and will
likely work with other Beat instances that have a similar HTTP API.
It is based on work done by @dmitryilyin.
Co-authored-by: Dmitry Ilyin <idv1985@gmail.com>
The previous implementation of SeriesGrouper required breaking a metric object apart into its constituents, converting tags and keys into unoptimized maps, only to have it put them back together into another metric object. This resulted in a significant performance overhead. This overhead was further compounded when the number of fields was large.
This change adds a new AddMetric method to SeriesGrouper which preserves the metric object and removes the back-and-forth conversion.
Additionally, the method used for calculating the metric's hash was switched to maphash, which is optimized for this case.
----
Benchmarks
Before:
BenchmarkMergeOne-16 106012 11790 ns/op
BenchmarkMergeTwo-16 48529 24819 ns/op
BenchmarkGroupID-16 780018 1608 ns/op
After:
BenchmarkMergeOne-16 907093 1173 ns/op
BenchmarkMergeTwo-16 508321 2168 ns/op
BenchmarkGroupID-16 11217788 99.4 ns/op
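A rough sketch of the idea behind the new hashing (simplified; the real SeriesGrouper works on `telegraf.Metric` and its already-sorted tag list directly), using `hash/maphash` to compute a series ID in one pass without building intermediate maps:
```
package main

import (
	"fmt"
	"hash/maphash"
	"sort"
	"strconv"
	"time"
)

// tag is a key/value pair; hashing sorted tags directly avoids converting
// them into a map and back.
type tag struct{ key, value string }

var seed = maphash.MakeSeed()

// groupID hashes the series identity (name, sorted tags, timestamp) in a
// single pass, roughly how the optimized grouper derives its key.
func groupID(name string, tags []tag, tm time.Time) uint64 {
	sort.Slice(tags, func(i, j int) bool { return tags[i].key < tags[j].key })

	var h maphash.Hash
	h.SetSeed(seed)
	h.WriteString(name)
	h.WriteByte(0)
	for _, t := range tags {
		h.WriteString(t.key)
		h.WriteByte('=')
		h.WriteString(t.value)
		h.WriteByte(0)
	}
	h.WriteString(strconv.FormatInt(tm.UnixNano(), 10))
	return h.Sum64()
}

func main() {
	id := groupID("cpu", []tag{{"host", "example"}}, time.Unix(0, 0))
	fmt.Println(id)
}
```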
* tls_config: Allow specifying SNI hostnames
Add a new configuration field `tls_server_name` that allows specifying
the server name that'll be sent in the ClientHello when telegraf makes
a request to TLS servers. This allows checking against load balancers
responding to specific hostnames that otherwise wouldn't resolve to
their addresses.
Add the setting to the documentation of common TLS options, as well as
to the http_response plugin.
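Conceptually, the new option just flows into `tls.Config.ServerName`; a minimal sketch with hypothetical struct and field names:
```
package main

import (
	"crypto/tls"
	"fmt"
)

// clientConfig mirrors only the relevant part of the common TLS settings;
// the tag matches the new tls_server_name option.
type clientConfig struct {
	ServerName string `toml:"tls_server_name"`
}

// tlsConfig builds a *tls.Config, passing the configured name through to
// ServerName so it is used for SNI and certificate verification.
func (c *clientConfig) tlsConfig() *tls.Config {
	cfg := &tls.Config{}
	if c.ServerName != "" {
		cfg.ServerName = c.ServerName
	}
	return cfg
}

func main() {
	c := clientConfig{ServerName: "backend.example.com"}
	fmt.Println(c.tlsConfig().ServerName)
}
```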
Fixes #7598.
* Adjust the x509_cert to allow usage of tls_server_name
This plugin has been using ServerName previously and will have to
deal with the new setting, too: extract the server-name selection into
a method and add a test to ensure we choose the right value (and error
under the right circumstances). Also document that the two settings
are mutually exclusive.
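A sketch of the extracted selection logic described above (names are hypothetical, not the plugin's actual signatures):
```
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// serverName picks the value to verify the certificate against: the
// plugin-level server_name, the common tls_server_name, or the URL's host,
// and errors out when both explicit settings are given at once.
func serverName(pluginServerName, tlsServerName string, u *url.URL) (string, error) {
	if pluginServerName != "" && tlsServerName != "" {
		return "", errors.New("server_name and tls_server_name are mutually exclusive")
	}
	if pluginServerName != "" {
		return pluginServerName, nil
	}
	if tlsServerName != "" {
		return tlsServerName, nil
	}
	return u.Hostname(), nil
}

func main() {
	u, _ := url.Parse("https://example.org:443")
	name, err := serverName("", "", u)
	fmt.Println(name, err)
}
```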
* Improve documentation on what we try to accomplish in the nil return
Also get rid of the TODO, as I am fairly certain this behavior is the
correct one.
* Remove unused struct field in tests
Squashed commits:
[c4e2bee2] Closes #8530: Extended the internal snmp wrapper to support AES192, AES192C, AES256, and AES256C. Updated the example configuration with the new privProtocols. Added the warning that those protocols are only supported if you have the appropriate tooling on your system. Added test to ensure all 4 new privProtocols could be selected and properly encrypt the priv password.
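Illustrative only: a sketch of mapping the new priv_protocol strings onto privacy-protocol constants, assuming the gosnmp AES192/AES256 (and C variant) constants; the wrapper's real code differs:
```
package main

import (
	"fmt"
	"strings"

	"github.com/gosnmp/gosnmp"
)

// privProtocol maps a configuration string onto gosnmp's privacy protocol
// constants, including the newly supported AES192/AES256 variants.
func privProtocol(name string) (gosnmp.SnmpV3PrivProtocol, error) {
	switch strings.ToLower(name) {
	case "":
		return gosnmp.NoPriv, nil
	case "des":
		return gosnmp.DES, nil
	case "aes":
		return gosnmp.AES, nil
	case "aes192":
		return gosnmp.AES192, nil
	case "aes192c":
		return gosnmp.AES192C, nil
	case "aes256":
		return gosnmp.AES256, nil
	case "aes256c":
		return gosnmp.AES256C, nil
	default:
		return gosnmp.NoPriv, fmt.Errorf("unsupported privacy protocol %q", name)
	}
}

func main() {
	p, err := privProtocol("AES256C")
	fmt.Println(p, err)
}
```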
* [http_listener_v2] Stop() succeeds even if the plugin fails to start
In cases where the http_listener_v2 plugin config is invalid, when the agent attempts to clean up by stopping all the inputs, the Stop method panics because it calls listener.Stop() when no listener has been set. This also masks the error message returned from the Start method.
```
> telegraf --test
2020-10-27T12:21:45Z I! Starting Telegraf 1.16.0
2020-10-27T12:21:45Z I! Using config file: /etc/telegraf/telegraf.conf
...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1245130]
goroutine 45 [running]:
github.com/influxdata/telegraf/plugins/inputs/http_listener_v2.(*HTTPListenerV2).Stop(0xc00043e000)
/go/src/github.com/influxdata/telegraf/plugins/inputs/http_listener_v2/http_listener_v2.go:178 +0x30
github.com/influxdata/telegraf/agent.stopServiceInputs(0xc00045e480, 0x5, 0x8)
/go/src/github.com/influxdata/telegraf/agent/agent.go:445 +0x82
github.com/influxdata/telegraf/agent.(*Agent).testRunInputs(0xc000288080, 0x32be8c0, 0xc0000f1f00, 0x0, 0xc00000f480, 0x0, 0x0)
/go/src/github.com/influxdata/telegraf/agent/agent.go:434 +0x1b7
github.com/influxdata/telegraf/agent.(*Agent).test.func4(0xc000057b70, 0xc000288080, 0x32be8c0, 0xc0000f1f00, 0x0, 0xc00000f480)
/go/src/github.com/influxdata/telegraf/agent/agent.go:977 +0x8b
created by github.com/influxdata/telegraf/agent.(*Agent).test
/go/src/github.com/influxdata/telegraf/agent/agent.go:975 +0x352
```
This fixes the issue by checking whether the listener has been set before calling listener.Stop().
```
> ./telegraf --config test.conf --test
2020-10-27T12:43:25Z I! Starting Telegraf
2020-10-27T12:43:25Z E! [agent] Starting input inputs.http_listener_v2: listen tcp: address address_without_port: missing port in address
```
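The essence of the fix is a nil guard in Stop; a trimmed-down sketch (field names simplified, not the plugin's full code):
```
package main

import "net"

// HTTPListenerV2 is trimmed to the one field relevant to the fix; the real
// plugin carries many more.
type HTTPListenerV2 struct {
	listener net.Listener
}

// Stop used to touch the listener unconditionally and panicked when Start
// had failed before assigning it; the guard below is the essence of the fix.
func (h *HTTPListenerV2) Stop() {
	if h.listener != nil {
		h.listener.Close()
	}
}

func main() {
	var h HTTPListenerV2
	h.Stop() // safe even though Start never ran
}
```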
* retry CI
Add configurable number of 'most recent' date-stamped indices to gather in the Elasticsearch input plugin, and allow wildcards to account for date-suffixed index names. Configuring '3' for num_most_recent_indices will only gather the 3 latest indices, based on the date or number they end with. Finding the date or number is dependent on the targeted indices being configured with wildcards at the end of their 'base' names.
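A simplified sketch of the selection logic (hypothetical helper, not the plugin's actual implementation), relying on date or number suffixes sorting lexically:
```
package main

import (
	"fmt"
	"sort"
	"strings"
)

// mostRecentIndices keeps only the last n indices matching the wildcarded
// base name, relying on date/number suffixes sorting lexically
// (e.g. logs-2020.10.25 < logs-2020.10.27).
func mostRecentIndices(configured string, existing []string, n int) []string {
	base := strings.TrimSuffix(configured, "*")

	var matched []string
	for _, name := range existing {
		if strings.HasPrefix(name, base) {
			matched = append(matched, name)
		}
	}
	sort.Strings(matched)

	if n <= 0 || n >= len(matched) {
		return matched
	}
	return matched[len(matched)-n:]
}

func main() {
	existing := []string{"logs-2020.10.25", "logs-2020.10.26", "logs-2020.10.27", "metrics-1"}
	fmt.Println(mostRecentIndices("logs-*", existing, 2))
}
```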
Looks like ear/959 has already been resolved, but this additional information for the errors still seems useful.
I just rebased the change and am merging it.
* Allow glob patterns in config
* Update README
* Move creating filter to init
* Need to explicitly call init
Co-authored-by: Bas <3441183+BattleBas@users.noreply.github.com>
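A minimal sketch of the pattern these commits describe, assuming telegraf's `filter` helper package; the plugin struct and field names are hypothetical:
```
package main

import (
	"fmt"
	"log"

	"github.com/influxdata/telegraf/filter"
)

// plugin shows the shape the commits describe: the glob filter is built
// once in Init from the configured patterns, then reused afterwards.
type plugin struct {
	Patterns []string `toml:"patterns"`

	filter filter.Filter
}

// Init compiles the glob patterns; the agent calls it before Start/Gather,
// and tests have to call it explicitly as well.
func (p *plugin) Init() error {
	f, err := filter.Compile(p.Patterns)
	if err != nil {
		return fmt.Errorf("compiling patterns: %w", err)
	}
	p.filter = f
	return nil
}

func main() {
	p := &plugin{Patterns: []string{"cpu*", "mem"}}
	if err := p.Init(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(p.filter.Match("cpu_usage"), p.filter.Match("disk"))
}
```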