nsq_exporter not collect any data #18

Open
madmanidze opened this issue Jul 5, 2017 · 14 comments

@madmanidze

Hello,

When we try to run nsq_exporter with the following docker-compose file:
version: '2'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd -broadcast-address=nsqlookupd
    restart: always
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --data-path=/data --broadcast-address=nsqd
    restart: always
    volumes:
      - nsq_data:/data
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    restart: always
    ports:
      - "4171:4171"
  nsq_exporter:
    image: lovoo/nsq_exporter:latest
    restart: always
    command:
      - '-nsqd.addr=http://nsqd:4151/stats'
      - '-collect=stats.topics,stats.channels,stats.clients'
    ports:
      - "9117:9117"

volumes:
  nsq_data: {}

it does not collect any data:
go_memstats_alloc_bytes 1.610192e+06
go_memstats_alloc_bytes_total 2.4429984e+07
go_memstats_buck_hash_sys_bytes 1.444735e+06
go_memstats_frees_total 30021
go_memstats_gc_sys_bytes 475136
go_memstats_heap_alloc_bytes 1.610192e+06
go_memstats_heap_idle_bytes 5.808128e+06
go_memstats_heap_inuse_bytes 2.121728e+06
go_memstats_heap_objects 5940
go_memstats_heap_released_bytes 0
go_memstats_heap_sys_bytes 7.929856e+06
go_memstats_last_gc_time_seconds 1.4992581719082916e+09
go_memstats_lookups_total 287
go_memstats_mallocs_total 35961
go_memstats_mcache_inuse_bytes 2400
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 24000
go_memstats_mspan_sys_bytes 32768
go_memstats_next_gc_bytes 4.194304e+06
go_memstats_other_sys_bytes 789881
go_memstats_stack_inuse_bytes 458752
go_memstats_stack_sys_bytes 458752
go_memstats_sys_bytes 1.1147512e+07
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 1768.584
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 5565.156
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 15772.45
http_request_duration_microseconds_sum{handler="prometheus"} 71702.471
http_request_duration_microseconds_count{handler="prometheus"} 26
http_request_size_bytes{handler="prometheus",quantile="0.5"} 276
http_request_size_bytes{handler="prometheus",quantile="0.9"} 276
http_request_size_bytes{handler="prometheus",quantile="0.99"} 276
http_request_size_bytes_sum{handler="prometheus"} 6963
http_request_size_bytes_count{handler="prometheus"} 26
http_requests_total{code="200",handler="prometheus",method="get"} 26
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1447
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1467
http_response_size_bytes{handler="prometheus",quantile="0.99"} 7046
http_response_size_bytes_sum{handler="prometheus"} 42655
http_response_size_bytes_count{handler="prometheus"} 26
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.5"} 0.0007699250000000001
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.9"} 0.001025803
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.99"} 0.014166406000000001
nsq_exporter_scrape_duration_seconds_sum{result="success"} 0.03366974
nsq_exporter_scrape_duration_seconds_count{result="success"} 26
process_cpu_seconds_total 0.09
process_max_fds 65536
process_open_fds 8
process_resident_memory_bytes 1.2742656e+07
process_start_time_seconds 1.49925804813e+09
process_virtual_memory_bytes 1.8702336e+07

How can we fix that?

Thank you.
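
A sanity check against the compose file above (a sketch; it assumes the nsqio/nsq image ships busybox wget and that the commands are run from the compose project directory) is to hit nsqd's stats endpoint directly, both from inside the compose network and via the published host port:

# From inside the compose network, using the same URL the exporter was given
docker-compose exec nsqd wget -qO- http://nsqd:4151/stats

# From the host, via the "4151:4151" port mapping
curl -sS http://localhost:4151/stats

If /stats lists no topics or channels yet, the exporter has nothing NSQ-specific to report.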

@jwitko

jwitko commented Aug 16, 2017

I am having the exact same issue, running the latest version from the master branch on a VM in AWS.

/opt/prometheus/nsq_exporter/nsq_exporter -nsqd.addr http://$HOSTNAME:4151/stats -web.listen $HOSTNAME:9117 -collect=stats.topics,stats.channels

The service runs and listens; I can access the metrics page on port 9117, but there are no NSQ metrics at all:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 8.864100000000001e-05
go_gc_duration_seconds{quantile="0.25"} 0.00013701500000000002
go_gc_duration_seconds{quantile="0.5"} 0.000213531
go_gc_duration_seconds{quantile="0.75"} 0.00024645200000000003
go_gc_duration_seconds{quantile="1"} 0.00024645200000000003
go_gc_duration_seconds_sum 0.000685639
go_gc_duration_seconds_count 4
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.849104e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.6696704e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.444128e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 11685
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 210944
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.849104e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 1.55648e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.308992e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 7122
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 5.865472e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5028918345735433e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 123
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 18807
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 18240
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 32768
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 792536
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 425984
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 425984
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 8.788216e+06
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 1833.474
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 2953.216
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 3024.047
http_request_duration_microseconds_sum{handler="prometheus"} 20197.015000000003
http_request_duration_microseconds_count{handler="prometheus"} 10
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 282
http_request_size_bytes{handler="prometheus",quantile="0.9"} 282
http_request_size_bytes{handler="prometheus",quantile="0.99"} 408
http_request_size_bytes_sum{handler="prometheus"} 2946
http_request_size_bytes_count{handler="prometheus"} 10
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 10
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1397
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1435
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1437
http_response_size_bytes_sum{handler="prometheus"} 13875
http_response_size_bytes_count{handler="prometheus"} 10
# HELP nsq_exporter_scrape_duration_seconds Duration of a scrape job of the NSQ exporter
# TYPE nsq_exporter_scrape_duration_seconds summary
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.5"} 0.000595661
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.9"} 0.000993397
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.99"} 0.0010485750000000002
nsq_exporter_scrape_duration_seconds_sum{result="success"} 0.006856681
nsq_exporter_scrape_duration_seconds_count{result="success"} 10
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.02
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 8
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.2443648e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.50289172489e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.98873088e+08
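
As a cross-check (a sketch, reusing the $HOSTNAME variable from the command line above), the URL passed to -nsqd.addr should itself return stats listing topics and channels:

# Confirm the address the exporter scrapes actually serves topic/channel stats
curl -sS "http://$HOSTNAME:4151/stats"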

@tecbot
Contributor

tecbot commented Aug 16, 2017

The metrics are lazy; can you confirm that you have traffic on the specified NSQ nodes?
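
For example, a minimal way to create such traffic (a sketch, assuming nsqd's HTTP port 4151 is reachable from your shell; the topic name is arbitrary) is to publish one test message through nsqd's /pub endpoint:

# Publishing a message creates the topic, so /stats (and the exporter) have something to report
curl -d 'hello world' 'http://127.0.0.1:4151/pub?topic=test'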

@jwitko

jwitko commented Aug 16, 2017

@tecbot Sorry for the delayed response, and thank you for responding so quickly! A tcpdump on the NSQ server immediately shows traffic from the Prometheus server on port 9117, and traffic flows in both directions.

A pastebin to view more easily:
https://pastebin.com/raw/hHH8Cp5i

A paste of the tcpdump for when the pastebin eventually expires:

[root@us-east-1c-prod-nsq13686 ~]# tcpdump port 9117
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
12:02:29.568514 IP us-east-1b-prod-prometheus13464.44826 > us-east-1c-prod-nsq13686.9117: Flags [S], seq 3150991817, win 26883, options [mss 8961,sackOK,TS val 1199733759 ecr 0,nop,wscale 7], length 0
12:02:29.568562 IP us-east-1c-prod-nsq13686.9117 > us-east-1b-prod-prometheus13464.44826: Flags [S.], seq 844009685, ack 3150991818, win 26847, options [mss 8961,sackOK,TS val 81155698 ecr 1199733759,nop,wscale 7], length 0
12:02:29.569137 IP us-east-1b-prod-prometheus13464.44826 > us-east-1c-prod-nsq13686.9117: Flags [.], ack 1, win 211, options [nop,nop,TS val 1199733760 ecr 81155698], length 0
12:02:29.569249 IP us-east-1b-prod-prometheus13464.44826 > us-east-1c-prod-nsq13686.9117: Flags [P.], seq 1:317, ack 1, win 211, options [nop,nop,TS val 1199733760 ecr 81155698], length 316
12:02:29.569261 IP us-east-1c-prod-nsq13686.9117 > us-east-1b-prod-prometheus13464.44826: Flags [.], ack 317, win 219, options [nop,nop,TS val 81155699 ecr 1199733760], length 0
12:02:29.571873 IP us-east-1c-prod-nsq13686.9117 > us-east-1b-prod-prometheus13464.44826: Flags [P.], seq 1:1710, ack 317, win 219, options [nop,nop,TS val 81155701 ecr 1199733760], length 1709
12:02:29.571898 IP us-east-1c-prod-nsq13686.9117 > us-east-1b-prod-prometheus13464.44826: Flags [F.], seq 1710, ack 317, win 219, options [nop,nop,TS val 81155701 ecr 1199733760], length 0
12:02:29.572542 IP us-east-1b-prod-prometheus13464.44826 > us-east-1c-prod-nsq13686.9117: Flags [.], ack 1710, win 237, options [nop,nop,TS val 1199733763 ecr 81155701], length 0
12:02:29.572599 IP us-east-1b-prod-prometheus13464.44826 > us-east-1c-prod-nsq13686.9117: Flags [F.], seq 317, ack 1711, win 237, options [nop,nop,TS val 1199733763 ecr 81155701], length 0
12:02:29.572611 IP us-east-1c-prod-nsq13686.9117 > us-east-1b-prod-prometheus13464.44826: Flags [.], ack 318, win 219, options [nop,nop,TS val 81155702 ecr 1199733763], length 0
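
Note that port 9117 only shows Prometheus scraping the exporter itself; to see actual NSQ producer/consumer traffic, the capture would need to target nsqd's own ports (a sketch, assuming nsqd's default ports 4150 and 4151):

# NSQ client traffic uses 4150 (TCP protocol) and 4151 (HTTP API) by default
tcpdump -i eth0 'port 4150 or port 4151'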

@jwitko

jwitko commented Aug 21, 2017

@tecbot I still have not been able to get the proper NSQ metrics from the exporter. Any ideas on what's going wrong? I can view the exported metrics on the web UI of the server where the exporter runs, and I can see the metrics that do get exported in Prometheus. It just appears that the NSQ metrics aren't being found by the exporter, or something to that effect.

@madmanidze
Author

e7ab1d4

@jwitko

jwitko commented Aug 21, 2017

Thanks @madmanidze.

@josh-janrain

Running the latest release with default settings on a host that does have some data queued up, the exporter doesn't seem to be reporting any metrics for me.

Here are the stats:

ubuntu@ip-10-11-28-99:~$ curl -sS localhost:4151/stats
nsqd v1.0.0-compat (built w/go1.8)
start_time 2018-02-06T17:50:38Z
uptime 2h2m22.021306138s

Health: OK

   [legacy         ] depth: 174   be-depth: 0     msgs: 174      e2e%:

   [siem           ] depth: 172   be-depth: 0     msgs: 172      e2e%:
ubuntu@ip-10-11-28-99:~$

Hitting /metrics:

ubuntu@ip-10-11-28-99:~$ curl -sS localhost:9117/metrics | grep -i nsq
# HELP nsq_exporter_scrape_duration_seconds Duration of a scrape job of the NSQ exporter
# TYPE nsq_exporter_scrape_duration_seconds summary
nsq_exporter_scrape_duration_seconds{result="error",quantile="0.5"} NaN
nsq_exporter_scrape_duration_seconds{result="error",quantile="0.9"} NaN
nsq_exporter_scrape_duration_seconds{result="error",quantile="0.99"} NaN
nsq_exporter_scrape_duration_seconds_sum{result="error"} 0.043141542999999984
nsq_exporter_scrape_duration_seconds_count{result="error"} 12
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.5"} 0.000407443
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.9"} 0.000846205
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.99"} 0.0016417060000000002
nsq_exporter_scrape_duration_seconds_sum{result="success"} 1.6192882170000016
nsq_exporter_scrape_duration_seconds_count{result="success"} 1003
ubuntu@ip-10-11-28-99:~$

Is this expected? The exporter has been running for a while, consistently reporting scrape metrics but no nsqd stat data.
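
One thing worth watching (a sketch, assuming the exporter listens on the default localhost:9117) is whether the result="error" scrape counter keeps increasing, which would suggest the scrape of nsqd is still failing:

# Re-check the success/error scrape counters every few seconds
watch -n 5 'curl -sS localhost:9117/metrics | grep nsq_exporter_scrape_duration_seconds_count'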

@josh-janrain

Ah, it works with the fix in e7ab1d4.

@philicious

Yep, the issue can be closed; it's working for me as well.

@lvluqi

lvluqi commented May 17, 2018

Hello,

go_gc_duration_seconds{quantile="0"} 2.0617e-05
go_gc_duration_seconds{quantile="0.25"} 3.6594e-05
go_gc_duration_seconds{quantile="0.5"} 4.7335e-05
go_gc_duration_seconds{quantile="0.75"} 8.563e-05
go_gc_duration_seconds{quantile="1"} 0.00020846
go_gc_duration_seconds_sum 0.004386392
go_gc_duration_seconds_count 68
go_goroutines 12
go_memstats_alloc_bytes 3.599632e+06
go_memstats_alloc_bytes_total 1.92648536e+08
go_memstats_buck_hash_sys_bytes 1.452681e+06
go_memstats_frees_total 299646
go_memstats_gc_sys_bytes 405504
go_memstats_heap_alloc_bytes 3.599632e+06
go_memstats_heap_idle_bytes 925696
go_memstats_heap_inuse_bytes 4.481024e+06
go_memstats_heap_objects 8760
go_memstats_heap_released_bytes 0
go_memstats_heap_sys_bytes 5.40672e+06
go_memstats_last_gc_time_seconds 1.5265397295485654e+09
go_memstats_lookups_total 1682
go_memstats_mallocs_total 308406
go_memstats_mcache_inuse_bytes 13888
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 36480
go_memstats_mspan_sys_bytes 49152
go_memstats_next_gc_bytes 4.194304e+06
go_memstats_other_sys_bytes 1.818223e+06
go_memstats_stack_inuse_bytes 884736
go_memstats_stack_sys_bytes 884736
go_memstats_sys_bytes 1.00334e+07
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 1454
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 2758.666
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 3866.575
http_request_duration_microseconds_sum{handler="prometheus"} 358994.75200000004
http_request_duration_microseconds_count{handler="prometheus"} 207
http_request_size_bytes{handler="prometheus",quantile="0.5"} 520
http_request_size_bytes{handler="prometheus",quantile="0.9"} 520
http_request_size_bytes{handler="prometheus",quantile="0.99"} 520
http_request_size_bytes_sum{handler="prometheus"} 100620
http_request_size_bytes_count{handler="prometheus"} 207
http_requests_total{code="200",handler="prometheus",method="get"} 207
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1518
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1527
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1530
http_response_size_bytes_sum{handler="prometheus"} 313377
http_response_size_bytes_count{handler="prometheus"} 207
metrics_exporter_scrape_duration_seconds{result="success",quantile="0.5"} 0.00077764
metrics_exporter_scrape_duration_seconds{result="success",quantile="0.9"} 0.001119944
metrics_exporter_scrape_duration_seconds{result="success",quantile="0.99"} 0.001303021
metrics_exporter_scrape_duration_seconds_sum{result="success"} 0.17201756999999993
metrics_exporter_scrape_duration_seconds_count{result="success"} 207
process_cpu_seconds_total 0.6
process_max_fds 1024
process_open_fds 10
process_resident_memory_bytes 1.3713408e+07
process_start_time_seconds 1.52653947061e+09
process_virtual_memory_bytes 3.74669312e+08

There is no NSQ data (topics and channels).
Can you help me?

@tjsampson

Same issue here. The service runs fine, but it's not exporting any of the actual NSQ data, except for the following three metrics:

nsq_exporter_scrape_duration_seconds
nsq_exporter_scrape_duration_seconds_count
nsq_exporter_scrape_duration_seconds_sum

There isn't any data on topics, channels, or clients, and I am explicitly setting the collectors (stats.topics,stats.channels,stats.clients).

We are using the latest release: 2.0.2

@tjsampson

Whoops, I should have checked the pull requests. @jhutchins already has a PR for this: #25

@absispat

Ah, it works with the fix in e7ab1d4.

This really works. I don't know why the current master doesn't include it; can e7ab1d4 please be merged into master?

@miaoxg

miaoxg commented Aug 29, 2022

Hello,

I use the latest version, but when I curl the API twice I get two different values: one is zero, the other is the correct value.

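A sketch of the reproduction described above (assuming the default listen address localhost:9117): scrape the exporter twice in quick succession and compare the NSQ lines.

# The first scrape sometimes returns zeros, the second the expected values
curl -sS localhost:9117/metrics | grep -i nsq
curl -sS localhost:9117/metrics | grep -i nsq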
