This repository was archived by the owner on May 4, 2019. It is now read-only.
_posts/2015-11-12-bunny-threads.markdown
### Sharing the Pool for Fun and Profit
With each thread sharing the same instance and memory space of the same worker, our caching of the Bunny connection and channel will not do. The documentation on Bunny provides more insight: the connection can be shared across threads, but channels cannot.
So the docs say that we can share the connection, but channels are a definite no-go. Since our event-publishing code was wrapped in a utility gem used by many different applications, we wanted to devise a solution that was backwards compatible. Because the requested channel can be used for both publishing and subscribing, an ephemeral-channel implementation was not feasible; we didn't want to accidentally close a channel that was in use for subscribing. The solution we came up with was to simply pool our opened channels. Luckily there's a great `connection_pool` gem that does exactly that and lets us abstract it out cleanly:
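A minimal sketch of that pooling pattern, with Ruby's core `Queue` standing in for the `connection_pool` gem so the mechanics are visible without a broker (the class and method names here are illustrative, not from the original gem):

```ruby
# Sketch only: a thread-safe channel pool built on Ruby's core Queue.
# `channel_factory` stands in for Bunny's `connection.create_channel`.
class ChannelPool
  def initialize(size, &channel_factory)
    @channels = Queue.new
    size.times { @channels << channel_factory.call }
  end

  # Check a channel out for the duration of the block, then return it,
  # so no two threads ever touch the same channel at the same time.
  def with_channel
    channel = @channels.pop # blocks when every channel is checked out
    yield channel
  ensure
    @channels << channel if channel
  end
end

# With the real gems it would look something like (requires a running
# RabbitMQ broker; this usage is an assumption, not code from the post):
#   conn = Bunny.new.tap(&:start)                  # one shared connection
#   pool = ChannelPool.new(5) { conn.create_channel }
#   pool.with_channel do |ch|
#     ch.default_exchange.publish("hello", routing_key: "events")
#   end
```

The key property is the same one the `connection_pool` gem provides: a channel is exclusively held by one thread between checkout and check-in.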
The result of deploying that change? All `ConnectionClosedError` and `Unexpected…`
When writing Ruby, we sometimes take advantage of the single-threaded nature of the environment and forget some of the pitfalls of thread safety. When using servers such as Puma, which let us take advantage of threads to maximize performance, we found an issue with our Bunny implementation. The issue was identified as a documented inability for Bunny channels to be shared across threads, and we developed a solution to address it. The main takeaway here: read the documentation, regardless of how dry it is!
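For reference, the threaded Puma setup that surfaces this class of bug looks something like the following `config/puma.rb` (the values are illustrative, not from the post):

```ruby
# config/puma.rb — illustrative values.
# Each worker process runs up to 5 threads, so any object memoized at
# the process level (like a cached Bunny channel) is shared by all 5.
workers 2        # forked worker processes
threads 1, 5     # min, max threads per worker
preload_app!     # load the app before forking workers
```

Anything initialized once per process under this configuration must be safe to touch from every one of those threads, which is exactly the assumption the cached channel violated.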