DB Connection Allocations #243

Open
dlinch opened this issue Dec 18, 2024 · 1 comment

dlinch commented Dec 18, 2024

I've been having a fun conversation over on the SolidQueue repo about how many DB connections I can expect to be taken up from SolidQueue, and they pointed me back here to help ensure a full picture of my situation.

We are on Heroku, and when they provision a Postgres DB instance they limit the number of connections available. To keep our costs down, we have a single DB instance serving both our Solid Cache and Solid Queue implementations. We seem to be consistently running out of DB connections, causing some pretty gnarly exceptions and downtime.

My first question is: I'm looking to understand how many DB connections each Ruby thread of my web server will use, where my Puma configuration is WORKERS * THREADS. For a real-life example, we had a production application with 8 dynos, a web concurrency of 8, and threads set to 5. So 40 threads per dyno, or 320 threads total across the 8 dynos.

The maintainer on SolidQueue thought SolidCache would hold two open DB connections per thread, meaning we were trying to open up 640 connections to our DB, despite our DB only provisioning us 400. The maximum Postgres plan allotment available on Heroku is only 500, for what it is worth.

Can you help me understand how many DB connections I can expect each web thread to take up?

Secondly, I was trying to understand whether it would make sense to log a warning when the connection to the DB fails, but fall back to the cache's block rather than erroring out. I want some form of alerting that the actual cache isn't working, but when there's a very reasonable default to fall back on, it would be really nice to simply serve the yielded block instead of tanking the response with an exception.

So to summarize:

  • I'm trying to understand how greedy the cache is with DB connections for every thread
  • I'm asking if it makes sense not to throw an exception when the cache can't be reached, but instead to add some kind of hook for custom alerting and simply fall back to the yielded block.
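The fallback I have in mind could be sketched like this (hypothetical, not Solid Cache's actual API: `safe_fetch`, `BrokenCache`, and the rescued error class are stand-ins for illustration; in a real app the errors would be ActiveRecord connection errors):

```ruby
require "logger"

LOGGER = Logger.new($stdout)

# Wrap any cache responding to #fetch(key, &block): on a connection-style
# error, log a warning (the alerting hook) and serve the block's value
# instead of raising.
def safe_fetch(cache, key, error_classes: [IOError], &block)
  cache.fetch(key, &block)
rescue *error_classes => e
  LOGGER.warn("cache unavailable (#{e.class}): falling back to block")
  block.call
end

# Stand-in cache whose backing store is always down, to exercise the
# fallback path.
class BrokenCache
  def fetch(_key)
    raise IOError, "connection refused"
  end
end

value = safe_fetch(BrokenCache.new, "user:1") { 42 }
puts value # => 42, despite the cache being unreachable
```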
djmb (Collaborator) commented Dec 23, 2024

Hi @dlinch,

Each thread will only use one connection, but there is also a background thread that does cache invalidation, so the number of connections per process depends on the number of application threads you are running per process.

Since you have 5 threads that means you'll need 5 connections for the threads + 1 for the background task for each worker.

So the equation is:

(threads + 1) * workers * dynos
(5 + 1) * 8 * 8 = 384 connections
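Plugging your numbers into that formula (plain arithmetic, just restating the calculation above):

```ruby
# (threads + 1) connections per Puma worker process: one per application
# thread plus one for the cache-invalidation background thread.
def solid_cache_connections(threads:, workers:, dynos:)
  (threads + 1) * workers * dynos
end

total = solid_cache_connections(threads: 5, workers: 8, dynos: 8)
puts total # => 384
```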

One thing you could do is reduce the number of threads - the default in Rails was changed to 3 because that generally yields better performance, though you'd need to test whether that was the case for your app.
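For reference, dropping to 3 threads would look like this in a standard puma.rb (a sketch assuming Heroku's usual env-var-driven Puma setup, not your actual config), and would bring the estimate down to (3 + 1) * 8 * 8 = 256 connections:

```ruby
# config/puma.rb (sketch)
workers Integer(ENV.fetch("WEB_CONCURRENCY", 8))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 3))
threads threads_count, threads_count

preload_app!
```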

What error do you get when you run out of connections?

I'd be nervous about swallowing connection errors in general. And if Solid Cache and your app use the same database, I would expect you are just as likely to get connection errors from the app itself.
