
StatsdMeterRegistry doesn't renew resolved ip address on DNS change #1252

Open
denzelby opened this issue Feb 27, 2019 · 15 comments
Labels
enhancement (A general enhancement) · help wanted (An issue that a contributor can help us with) · registry: statsd (A StatsD Registry related issue)

Comments

@denzelby

denzelby commented Feb 27, 2019

Hello everyone.

I've run into a problem where StatsdMeterRegistry isn't aware of DNS changes after it has been start()ed.
As a result, metrics aren't delivered to Telegraf and there is no warning of any kind in the logs.

How to reproduce:

  1. Create a record in the /etc/hosts file, e.g.: 127.0.0.1 telegraf.kube-system.svc.cluster.local.
  2. Start the application with StatsdMeterRegistry enabled, using the host described above.
  3. Change the record in /etc/hosts to another IP (127.0.0.2, for example).

Actual result:
Using Wireshark/tcpdump you can see that all metrics are still being sent to 127.0.0.1, even after step 3.

Expected result:
StatsdMeterRegistry respects the DNS name change.

Can be reproduced on a real network/DNS server as well. The security manager is disabled.
Micrometer version: 1.1.3; reproducible on 1.0.7 as well.
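For reference, here is a minimal standalone sketch of the setup that reproduces this. The class name, counter name and publishing loop are illustrative assumptions; only the hostname matches the /etc/hosts entry above, and UDP port 8125 is the StatsD default.

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Counter;
import io.micrometer.statsd.StatsdConfig;
import io.micrometer.statsd.StatsdMeterRegistry;

public class DnsReproduction {
    public static void main(String[] args) throws InterruptedException {
        StatsdConfig config = new StatsdConfig() {
            @Override
            public String get(String key) {
                return null; // fall back to defaults for everything not overridden
            }

            @Override
            public String host() {
                return "telegraf.kube-system.svc.cluster.local"; // name from /etc/hosts
            }
        };

        StatsdMeterRegistry registry = new StatsdMeterRegistry(config, Clock.SYSTEM);
        Counter counter = registry.counter("dns.repro.counter");

        // Keep publishing while the /etc/hosts entry is changed underneath the JVM.
        // A capture on UDP port 8125 keeps showing the originally resolved address.
        while (true) {
            counter.increment();
            Thread.sleep(1000);
        }
    }
}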

@mweirauch
Contributor

Could this actually be related to the default JVM DNS caching behaviour? It's quite conservative (never expire) at times: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
Usually the host operating system's network stack does quite a good job of managing the DNS cache.
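If it is the JVM cache, the current setting can be checked and overridden programmatically before the first lookup; a quick sketch (the 60-second TTL is just an example value):

import java.security.Security;

public class DnsTtlCheck {
    public static void main(String[] args) {
        // null means the default is in effect: 30 seconds without a security
        // manager, cache-forever with one.
        System.out.println("networkaddress.cache.ttl = "
                + Security.getProperty("networkaddress.cache.ttl"));

        // Must be set before the first hostname resolution to take effect.
        Security.setProperty("networkaddress.cache.ttl", "60");
    }
}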

@denzelby
Author

denzelby commented Feb 27, 2019

@mweirauch no. As I mentioned earlier, the security manager is disabled, and according to the docs:

# default value is forever (FOREVER). For security reasons, this
# caching is made forever when a security manager is set. When a security
# manager is not set, the default behavior in this implementation
# is to cache for 30 seconds.
#
#networkaddress.cache.ttl=-1

At the OS level everything is set to defaults. Tested on desktop Ubuntu 18.04 LTS with the Oracle JVM and on debian-stretch in a Docker container (openjdk:8-jdk-stretch).
The JVM seems to be above suspicion, since other hosts (e.g. connections to Postgres) are resolved correctly.

@shakuzen shakuzen added the registry: statsd A StatsD Registry related issue label Feb 28, 2019
@shakuzen
Member

I looked into this a bit. reactor-netty's UdpClient, which we use to send StatsD lines over UDP, only resolves the hostname on initial connection creation. It seems we would have to detect the change in resolution and then swap out the UdpClient for a new one which would resolve to the new IP address. Thus, this is not as simple as just changing a configuration option on the UdpClient unfortunately.
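To make that concrete, here is a rough sketch of what a detect-and-swap workaround could look like at the application level. It assumes that recycling the registry via stop()/start() would make the client resolve the hostname again (which, per the comments below, currently has its own problems), and the class name and polling interval are just illustrative:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import io.micrometer.statsd.StatsdMeterRegistry;

class StatsdDnsWatcher {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicReference<InetAddress> lastResolved = new AtomicReference<>();

    void watch(StatsdMeterRegistry registry, String statsdHost) {
        scheduler.scheduleWithFixedDelay(() -> {
            try {
                InetAddress current = InetAddress.getByName(statsdHost);
                InetAddress previous = lastResolved.getAndSet(current);
                if (previous != null && !previous.equals(current)) {
                    // Address changed: recycle the registry so the underlying
                    // UdpClient reconnects and resolves the hostname again.
                    registry.stop();
                    registry.start();
                }
            } catch (UnknownHostException e) {
                // Resolution failed; keep the previous address and retry later.
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}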

@shakuzen shakuzen added the enhancement A general enhancement label Mar 27, 2019
@shakuzen shakuzen added this to the 1.x milestone Mar 27, 2019
@shakuzen shakuzen added the help wanted An issue that a contributor can help us with label Mar 27, 2019
@izeye izeye changed the title StatsdMeterRegistry doesn't renew resolved ip adress on DNS change StatsdMeterRegistry doesn't renew resolved ip address on DNS change Apr 10, 2019
@mootpt

mootpt commented May 29, 2019

@shakuzen Knowing very little about this issue, would setting the connection duration to the host's TTL be an option?

@uchandroth

reactor-netty's UdpClient, which we use to send StatsD lines over UDP, only resolves the hostname on initial connection creation. It seems we would have to detect the change in resolution and then swap out the UdpClient for a new one which would resolve to the new IP address.

---> Any idea why, after StatsdMeterRegistry.stop() and start(), the UDP client is not able to send metrics?

@shakuzen
Member

Any idea why, after StatsdMeterRegistry.stop() and start(), the UDP client is not able to send metrics?

That probably won't work right now because of the issue described in #1251 (comment). I'll reopen #1676 since it isn't strictly about a change in address.

@arohatgi

@shakuzen Is this issue still open? Is it actively being worked on?

@shakuzen
Member

Is this issue still open? Is it actively being worked on?

It is still open. It is not actively being worked on. If you have a workable solution, feel free to propose it here or in a pull request. I've marked it as help wanted because we do not have a solution at the moment. As long as we are using the Reactor Netty client(s) to send metric lines to StatsD, it may be best if Reactor Netty supported this scenario more directly.

@arohatgi

@shakuzen I am more than happy to help. Based on a code review, isn't Micrometer using shaded Reactor code within Micrometer? Does the shaded Reactor have any customization for Micrometer that would make it harder to externalize the Reactor dependency? Which is the right place to start: the shaded Reactor inside Micrometer, or the Reactor project itself?

@shakuzen
Member

Micrometer shades reactor and reactor-netty so Micrometer is not affected by the version a user may wish to use in their own code, or vice versa. There are no changes to the shaded code other than changing the package to a micrometer package. So any changes in reactor or reactor-netty code should be made upstream in those projects.
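As a small illustration of what that relocation means in practice (the exact relocated package name below is an assumption; check the jar you are actually using):

// Upstream class that users depend on directly in their own code:
//   reactor.netty.udp.UdpClient
// Micrometer's shaded copy, relocated to a Micrometer package, e.g.:
//   io.micrometer.shaded.reactor.netty.udp.UdpClient
// Only the package prefix differs; the behaviour is identical, so a fix has to
// land upstream and then be picked up by a Micrometer release that re-shades it.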

@jkschneider
Contributor

I think this is fixed by #1843.

@jkschneider jkschneider added the duplicate A duplicate of another issue label May 7, 2020
@jkschneider jkschneider removed this from the 1.x milestone May 7, 2020
@shakuzen shakuzen reopened this Aug 5, 2020
@shakuzen shakuzen removed the duplicate A duplicate of another issue label Aug 5, 2020
@marcingrzejszczak
Contributor

As Jon mentioned, isn't this already fixed?

@raymondchen625

As Jon mentioned, isn't this already fixed?

#1843 was released in 1.4.0. I was still able to reproduce it with 1.9.7. See my comment in #3563 (comment).

@marcingrzejszczak
Contributor

So we can close this in favour of #3563 (comment)?

@raymondchen625

So we can close this in favour of #3563 (comment)?

Maybe the opposite: close #3563 and keep this open.

@shakuzen shakuzen removed the waiting for feedback We need additional information before we can continue label Dec 21, 2023