nsq_exporter not collect any data #18
I am having the exact same issue. Running the latest version for the master branch on a VM within AWS.
The service runs and listens, I can access the metrics page on port 9117, but there are no NSQ metrics at all:
The metrics are lazy; can you confirm that you have traffic on the specified NSQ nodes?
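For example, publishing a single test message through nsqd's HTTP API is enough to create a topic with traffic (a rough sketch, assuming nsqd's HTTP port 4151 is reachable on localhost; the topic name test is arbitrary):

curl -d 'hello' 'http://localhost:4151/pub?topic=test'

After that the exporter should have topic stats to report on the next scrape.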
@tecbot Sorry for the delayed response and thank you for responding so quickly! A tcpdump on the NSQ server immediately shows traffic from the prometheus server on port 9117 and traffic flows in both directions. A pastebin to view more easily: A paste of the tcpdump for when the pastebin eventually expires:
@tecbot still have not been able to get the proper NSQ metrics from the exporter. Any ideas on what's going wrong? I can view the exported metrics on the web UI for the server where the exporter runs, and I can see the metrics that do get exported in Prometheus. It would just appear that the metrics for NSQ aren't being found by the exporter, or something to that effect.
Thanks @madmanidze.
Running the latest release with default settings on a host that does have some data queued up, the exporter doesn't seem to be reporting any metrics for me. Here are the stats:
Hitting
Is this expected? The exporter's been running for a while and consistently reporting scrape metrics, but no nsqd stat data.
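If it helps anyone narrow this down: comparing nsqd's own stats endpoint with the exporter output is a quick check (localhost and the default ports 4151/9117 are assumptions here, adjust to your setup):

curl 'http://localhost:4151/stats?format=json'
curl -s 'http://localhost:9117/metrics' | grep '^nsq_'

If the first call lists topics but the second only shows the nsq_exporter_* metrics, the problem is on the exporter side rather than in nsqd.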
Ahh, it works with the fix in e7ab1d4
Yep, the issue can be closed; it's working for me as well.
Hello, I'm only getting metrics like go_gc_duration_seconds{quantile="0"} 2.0617e-05, not NSQ data (topics and channels).
Same issue here. The service runs fine, but it's not exporting any of the actual NSQ data, except for the following three metrics: nsq_exporter_scrape_duration_seconds. There isn't any data on topics, channels, or clients, and I am explicitly setting the collectors (stats.topics, stats.channels, stats.clients). We are using the latest release: 2.0.2
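For reference, the invocation is roughly the following (the nsqd host and port are placeholders for our actual address):

nsq_exporter -nsqd.addr=http://localhost:4151/stats -collect=stats.topics,stats.channels,stats.clients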
Whoops, I should have checked the pull requests. @jhutchins already has a PR for this: #25
Hello,
When we try running nsq_exporter with:
version: '2'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd -broadcast-address=nsqlookupd
    restart: always
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --data-path=/data --broadcast-address=nsqd
    restart: always
    volumes:
      - nsq_data:/data
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    restart: always
    ports:
      - "4171:4171"
  nsq_exporter:
    image: lovoo/nsq_exporter:latest
    restart: always
    command:
      - '-nsqd.addr=http://nsqd:4151/stats'
      - '-collect=stats.topics,stats.channels,stats.clients'
    ports:
      - "9117:9117"
volumes:
  nsq_data: {}
it does not collect any NSQ data:
go_memstats_alloc_bytes 1.610192e+06
go_memstats_alloc_bytes_total 2.4429984e+07
go_memstats_buck_hash_sys_bytes 1.444735e+06
go_memstats_frees_total 30021
go_memstats_gc_sys_bytes 475136
go_memstats_heap_alloc_bytes 1.610192e+06
go_memstats_heap_idle_bytes 5.808128e+06
go_memstats_heap_inuse_bytes 2.121728e+06
go_memstats_heap_objects 5940
go_memstats_heap_released_bytes 0
go_memstats_heap_sys_bytes 7.929856e+06
go_memstats_last_gc_time_seconds 1.4992581719082916e+09
go_memstats_lookups_total 287
go_memstats_mallocs_total 35961
go_memstats_mcache_inuse_bytes 2400
go_memstats_mcache_sys_bytes 16384
go_memstats_mspan_inuse_bytes 24000
go_memstats_mspan_sys_bytes 32768
go_memstats_next_gc_bytes 4.194304e+06
go_memstats_other_sys_bytes 789881
go_memstats_stack_inuse_bytes 458752
go_memstats_stack_sys_bytes 458752
go_memstats_sys_bytes 1.1147512e+07
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 1768.584
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 5565.156
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 15772.45
http_request_duration_microseconds_sum{handler="prometheus"} 71702.471
http_request_duration_microseconds_count{handler="prometheus"} 26
http_request_size_bytes{handler="prometheus",quantile="0.5"} 276
http_request_size_bytes{handler="prometheus",quantile="0.9"} 276
http_request_size_bytes{handler="prometheus",quantile="0.99"} 276
http_request_size_bytes_sum{handler="prometheus"} 6963
http_request_size_bytes_count{handler="prometheus"} 26
http_requests_total{code="200",handler="prometheus",method="get"} 26
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1447
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1467
http_response_size_bytes{handler="prometheus",quantile="0.99"} 7046
http_response_size_bytes_sum{handler="prometheus"} 42655
http_response_size_bytes_count{handler="prometheus"} 26
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.5"} 0.0007699250000000001
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.9"} 0.001025803
nsq_exporter_scrape_duration_seconds{result="success",quantile="0.99"} 0.014166406000000001
nsq_exporter_scrape_duration_seconds_sum{result="success"} 0.03366974
nsq_exporter_scrape_duration_seconds_count{result="success"} 26
process_cpu_seconds_total 0.09
process_max_fds 65536
process_open_fds 8
process_resident_memory_bytes 1.2742656e+07
process_start_time_seconds 1.49925804813e+09
process_virtual_memory_bytes 1.8702336e+07
How can we fix that?
Thank you.
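Two things that helped earlier in this thread, in case they apply here as well: make sure you are running a build that includes the fixes referenced above (e7ab1d4 and PR #25), and remember that the metrics are lazy, so nsqd needs to have seen some traffic first. With the compose file above, traffic can be generated through the host-published ports (the topic name test is arbitrary):

curl -d 'hello' 'http://localhost:4151/pub?topic=test'
curl -s 'http://localhost:9117/metrics' | grep '^nsq_'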