
[bug]: 0.11.1, Too many requests on https listener behind reverse proxy #1078

Closed
zeylos opened this issue Jan 9, 2025 · 9 comments
Labels
bug Something isn't working

Comments


zeylos commented Jan 9, 2025

What happened?

Hi,

I ran into an issue with my Stalwart cluster while upgrading to 0.11.1. I have 3 servers with HAProxy in front; HAProxy has multiple backends, each pointing to a Stalwart listener, and every backend checks the health of its Stalwart node with this kind of config:

backend stalwart_smtps
    option httpchk
    http-check connect ssl port 443
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    default-server init-addr last,libc send-proxy-v2 ssl verify none check

Since version 0.11.1 my load balancer gets rate limited; I see these logs on all 3 nodes:

2025-01-09T17:38:27Z WARN Too many requests (limit.too-many-requests) listenerId = "https", localPort = 443, remoteIp = 10.50.<redacted>, remotePort = 17482

I have allowed the IP addresses of my LAN with this config:
[screenshot: allowed IP addresses configured in the webadmin]
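
For reference, a minimal sketch of what such an allow list could look like in Stalwart's TOML configuration, assuming a server.allowed-ip setting (this is normally managed from the webadmin Security section, and the exact key name may differ between versions):

    # Hypothetical allow list; assumes the server.allowed-ip setting exists
    # under this name in your release. Adjust the networks to your LAN.
    server.allowed-ip = ["10.50.0.0/16", "127.0.0.1"]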

But it looks like the rate limiter in version 0.11 does not take this into account, which looks like a bug to me. If that's not the case, can you point me to the right config?

Thanks!

How can we reproduce the problem?

Set up Stalwart 0.11.1 with https, smtp, smtps and imaps listeners, then set up HAProxy with these backends:

backend stalwart
    option forwarded
    option httpchk
    http-check connect ssl alpn h2 port 443
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    default-server ssl verify none check
    server stalwart1 stalwart1:443

backend stalwart_smtp
    option httpchk
    http-check connect ssl port 443
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    default-server init-addr last,libc send-proxy-v2 verify none check
    server stalwart1 stalwart1:25

backend stalwart_smtps
    option httpchk
    http-check connect ssl port 443
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    default-server init-addr last,libc send-proxy-v2 ssl verify none check
    server stalwart1 stalwart1:465

backend stalwart_imaps
    option httpchk
    http-check connect ssl port 443
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    default-server init-addr last,libc send-proxy-v2 ssl verify none check
    server stalwart1 stalwart1:993

Observe that the HAProxy health checks stop working, which disables the backends. You can also observe multiple log lines like this on the Stalwart nodes:

2025-01-09T17:38:27Z WARN Too many requests (limit.too-many-requests) listenerId = "https", localPort = 443, remoteIp = 10.50.<redacted>, remotePort = 17482

Version

v0.11.x

What database are you using?

PostgreSQL

What blob storage are you using?

PostgreSQL

Where is your directory located?

Internal

What operating system are you using?

Docker

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
mdecimus (Member) commented:

Looks like Stalwart is not receiving (or you have not enabled) the proxy protocol headers. For this reason the IP address of your reverse proxy is being rate limited.
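
For the listeners that do sit behind send-proxy-v2, the usual fix is to tell Stalwart which source networks are trusted to send PROXY protocol headers. A minimal sketch, assuming the server.proxy.trusted-networks setting (double-check the key name against the documentation for your release):

    # Source networks trusted to send PROXY protocol headers
    # (example network; use the address range of your HAProxy nodes)
    server.proxy.trusted-networks = ["10.50.0.0/16"]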


zeylos commented Jan 10, 2025

I have enabled the proxy protocol for the smtp, smtps & imaps ports. The backends for these are configured with send-proxy-v2.

The https port is not using the proxy protocol (neither on the listener nor on the haproxy backend) but the forwarded header.

The "too many requests" we're seeing comes from the HAProxy health checks, which are supposed to originate from the load balancer's IP; it's not an error.

Do you mean that since v0.11.1 I have to configure the proxy protocol on every listener anyway, or else my load balancer will get rate limited?

mdecimus (Member) commented:

You need to either enable the proxy protocol on the HTTP ports or configure Stalwart to use the Forwarded-For HTTP headers.
You'll know Stalwart is using the right remote IP address when your proxy's internal IP stops appearing in the logs.
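
For the second option, the relevant knob appears to be server.http.use-x-forwarded, which makes Stalwart take the client address from the forwarded headers instead of the socket peer (treat the key name as an assumption and verify it for your version):

    # Trust the Forwarded / X-Forwarded-For header set by the reverse proxy.
    # Only enable this if the listener is reachable exclusively through the proxy.
    server.http.use-x-forwarded = true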


zeylos commented Jan 10, 2025

As I said, I use the forwarded header, and the fact that my proxy's internal IP appears in the logs for these lines is normal: these are the HAProxy health checks. They come from the load balancer itself, so they're expected to show the internal IP.

Can I disable the rate limiter on the https listener?

mdecimus (Member) commented:


zeylos commented Jan 10, 2025

Thanks, it wasn't clear to me that JMAP rate limiting also applied to the https listener.
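
For reference, the HTTP/JMAP rate limits involved here are configurable. Earlier releases documented the keys as jmap.rate-limit.account and jmap.rate-limit.anonymous; they may have moved in 0.11, so treat the sketch below as illustrative rather than exact:

    # Illustrative rate-limit settings (key names from older releases; verify for 0.11)
    jmap.rate-limit.account = "1000/1m"      # requests per authenticated account
    jmap.rate-limit.anonymous = "100/1m"     # requests per unauthenticated remote IP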

  1. In the future, could we be able to disable rate limiting for some allowed IPs, like we have for auto-bans?
  2. Also, could we have a dedicated rate limiter per listener?

mdecimus (Member) commented:

  1. Done.
  2. Rate limiters are by service rather than by listener. What would be the use case for this?


zeylos commented Jan 11, 2025

  1. Great news! Thanks.
  2. Typically internal vs external use: I could configure a specific listener for my internal IPs to let them spam smtp / https while rate limiting an "external" listener on another port. We could also rate limit the "jmap listener" without rate limiting the "webadmin listener" (I only use the https listener for webadmin for now and I don't really need rate limiting on that one). This may be my understanding of the product that's biased; it feels more logical to me to rate limit on a port. I'm a network guy, so this may just be me, and I'll let you choose what you feel is better.

Thanks again for your time and good job with the product. I'm happy to have subscribed!

mdecimus (Member) commented:

> Typically internal vs external use: I could configure a specific listener for my internal IPs to let them spam smtp / https while rate limiting an "external" listener on another port. We could also rate limit the "jmap listener" without rate limiting the "webadmin listener" (I only use the https listener for webadmin for now and I don't really need rate limiting on that one). This may be my understanding of the product that's biased; it feels more logical to me to rate limit on a port. I'm a network guy, so this may just be me, and I'll let you choose what you feel is better.

The reason rate limiting is done per service type is to prevent bad actors from distributing an attack across ports. Hopefully, now that rate limits are not enforced on trusted IPs, it won't cause you trouble anymore.
Also, slightly related to this: you can keep your webadmin listener secure by enabling HTTP Access Control.

> Thanks again for your time and good job with the product. I'm happy to have subscribed!

Thanks!
