Commit 00a44a8

Deprecate some legacy features
1 parent 6d3449c commit 00a44a8

3 files changed: +78, -114 lines

packages/pg-pool/README.md

Lines changed: 45 additions & 63 deletions
@@ -1,9 +1,11 @@
 # pg-pool
+
 [![Build Status](https://travis-ci.org/brianc/node-pg-pool.svg?branch=master)](https://travis-ci.org/brianc/node-pg-pool)

 A connection pool for node-postgres

 ## install
+
 ```sh
 npm i pg-pool pg
 ```
@@ -48,25 +50,26 @@ const pgNativePool = new Pool({ Client: PgNativeClient })
 ```

 ##### Note:
+
 The Pool constructor does not support passing a Database URL as the parameter. To use pg-pool on heroku, for example, you need to parse the URL into a config object. Here is an example of how to parse a Database URL.

 ```js
-const Pool = require('pg-pool');
+const Pool = require('pg-pool')
 const url = require('url')

-const params = url.parse(process.env.DATABASE_URL);
-const auth = params.auth.split(':');
+const params = url.parse(process.env.DATABASE_URL)
+const auth = params.auth.split(':')

 const config = {
   user: auth[0],
   password: auth[1],
   host: params.hostname,
   port: params.port,
   database: params.pathname.split('/')[1],
-  ssl: true
-};
+  ssl: true,
+}

-const pool = new Pool(config);
+const pool = new Pool(config)

 /*
  Transforms, 'postgres://DBuser:secret@DBHost:#####/myDB', into
@@ -79,23 +82,25 @@ const pool = new Pool(config);
   ssl: true
 }
 */
-```
+```

 ### acquire clients with a promise

 pg-pool supports a fully promise-based api for acquiring clients

 ```js
 const pool = new Pool()
-pool.connect().then(client => {
-  client.query('select $1::text as name', ['pg-pool']).then(res => {
-    client.release()
-    console.log('hello from', res.rows[0].name)
-  })
-  .catch(e => {
-    client.release()
-    console.error('query error', e.message, e.stack)
-  })
+pool.connect().then((client) => {
+  client
+    .query('select $1::text as name', ['pg-pool'])
+    .then((res) => {
+      client.release()
+      console.log('hello from', res.rows[0].name)
+    })
+    .catch((e) => {
+      client.release()
+      console.error('query error', e.message, e.stack)
+    })
 })
 ```

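An editorial aside on the connection-string hunk above: the `url.parse` API it relies on is itself a legacy API in current Node versions. A minimal sketch of the same parsing using the global WHATWG `URL` class instead — the connection string below is an invented example, and `ssl: true` is carried over from the README's config:

```js
// Sketch: parse a Database URL with the WHATWG URL class rather than the
// legacy url.parse API. The connection string here is an invented example.
const dbUrl = new URL('postgres://DBuser:secret@DBHost:5432/myDB')

const config = {
  user: dbUrl.username,
  password: dbUrl.password,
  host: dbUrl.hostname,
  port: Number(dbUrl.port),
  database: dbUrl.pathname.split('/')[1], // pathname is '/myDB'
  ssl: true,
}
```

Unlike `url.parse`, the `URL` class exposes `username` and `password` directly, so no `auth.split(':')` step is needed.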
@@ -105,7 +110,7 @@ this ends up looking much nicer if you're using [co](https://github.com/tj/co) o

 ```js
 // with async/await
-(async () => {
+;(async () => {
   const pool = new Pool()
   const client = await pool.connect()
   try {
@@ -114,18 +119,18 @@ this ends up looking much nicer if you're using [co](https://github.com/tj/co) o
   } finally {
     client.release()
   }
-})().catch(e => console.error(e.message, e.stack))
+})().catch((e) => console.error(e.message, e.stack))

 // with co
-co(function * () {
+co(function* () {
   const client = yield pool.connect()
   try {
     const result = yield client.query('select $1::text as name', ['brianc'])
     console.log('hello from', result.rows[0])
   } finally {
     client.release()
   }
-}).catch(e => console.error(e.message, e.stack))
+}).catch((e) => console.error(e.message, e.stack))
 ```

 ### your new favorite helper method
@@ -148,14 +153,14 @@ pool.query('SELECT $1::text as name', ['brianc'], function (err, res) {
 })
 ```

-__pro tip:__ unless you need to run a transaction (which requires a single client for multiple queries) or you
+**pro tip:** unless you need to run a transaction (which requires a single client for multiple queries) or you
 have some other edge case like [streaming rows](https://github.com/brianc/node-pg-query-stream) or using a [cursor](https://github.com/brianc/node-pg-cursor)
-you should almost always just use `pool.query`. Its easy, it does the right thing :tm:, and wont ever forget to return
+you should almost always just use `pool.query`. Its easy, it does the right thing :tm:, and wont ever forget to return
 clients back to the pool after the query is done.

 ### drop-in backwards compatible

-pg-pool still and will always support the traditional callback api for acquiring a client. This is the exact API node-postgres has shipped with for years:
+pg-pool still and will always support the traditional callback api for acquiring a client. This is the exact API node-postgres has shipped with for years:

 ```js
 const pool = new Pool()
@@ -175,7 +180,7 @@ pool.connect((err, client, done) => {
 ### shut it down

 When you are finished with the pool if all the clients are idle the pool will close them after `config.idleTimeoutMillis` and your app
-will shutdown gracefully. If you don't want to wait for the timeout you can end the pool as follows:
+will shutdown gracefully. If you don't want to wait for the timeout you can end the pool as follows:

 ```js
 const pool = new Pool()
@@ -187,7 +192,7 @@ await pool.end()

 ### a note on instances

-The pool should be a __long-lived object__ in your application. Generally you'll want to instantiate one pool when your app starts up and use the same instance of the pool throughout the lifetime of your application. If you are frequently creating a new pool within your code you likely don't have your pool initialization code in the correct place. Example:
+The pool should be a **long-lived object** in your application. Generally you'll want to instantiate one pool when your app starts up and use the same instance of the pool throughout the lifetime of your application. If you are frequently creating a new pool within your code you likely don't have your pool initialization code in the correct place. Example:

 ```js
 // assume this is a file in your program at ./your-app/lib/db.js
@@ -215,11 +220,11 @@ module.exports.connect = () => {

 ### events

-Every instance of a `Pool` is an event emitter. These instances emit the following events:
+Every instance of a `Pool` is an event emitter. These instances emit the following events:

 #### error

-Emitted whenever an idle client in the pool encounters an error. This is common when your PostgreSQL server shuts down, reboots, or a network partition otherwise causes it to become unavailable while your pool has connected clients.
+Emitted whenever an idle client in the pool encounters an error. This is common when your PostgreSQL server shuts down, reboots, or a network partition otherwise causes it to become unavailable while your pool has connected clients.

 Example:

@@ -229,15 +234,15 @@ const pool = new Pool()

 // attach an error handler to the pool for when a connected, idle client
 // receives an error by being disconnected, etc
-pool.on('error', function(error, client) {
+pool.on('error', function (error, client) {
   // handle this in the same way you would treat process.on('uncaughtException')
   // it is supplied the error as well as the idle client which received the error
 })
 ```

 #### connect

-Fired whenever the pool creates a __new__ `pg.Client` instance and successfully connects it to the backend.
+Fired whenever the pool creates a **new** `pg.Client` instance and successfully connects it to the backend.

 Example:


@@ -247,20 +252,19 @@ const pool = new Pool()

 const count = 0

-pool.on('connect', client => {
+pool.on('connect', (client) => {
   client.count = count++
 })

 pool
   .connect()
-  .then(client => {
+  .then((client) => {
     return client
       .query('SELECT $1::int AS "clientCount"', [client.count])
-      .then(res => console.log(res.rows[0].clientCount)) // outputs 0
+      .then((res) => console.log(res.rows[0].clientCount)) // outputs 0
       .then(() => client)
   })
-  .then(client => client.release())
-
+  .then((client) => client.release())
 ```

 #### acquire
@@ -293,12 +297,11 @@ setTimeout(function () {
   console.log('connect count:', connectCount) // output: connect count: 10
   console.log('acquire count:', acquireCount) // output: acquire count: 200
 }, 100)
-
 ```

 ### environment variables

-pg-pool & node-postgres support some of the same environment variables as `psql` supports. The most common are:
+pg-pool & node-postgres support some of the same environment variables as `psql` supports. The most common are:

 ```
 PGDATABASE=my_db
@@ -308,40 +311,19 @@ PGPORT=5432
 PGSSLMODE=require
 ```

-Usually I will export these into my local environment via a `.env` file with environment settings or export them in `~/.bash_profile` or something similar. This way I get configurability which works with both the postgres suite of tools (`psql`, `pg_dump`, `pg_restore`) and node, I can vary the environment variables locally and in production, and it supports the concept of a [12-factor app](http://12factor.net/) out of the box.
-
-## bring your own promise
-
-In versions of node `<=0.12.x` there is no native promise implementation available globally. You can polyfill the promise globally like this:
-
-```js
-// first run `npm install promise-polyfill --save
-if (typeof Promise == 'undefined') {
-  global.Promise = require('promise-polyfill')
-}
-```
-
-You can use any other promise implementation you'd like. The pool also allows you to configure the promise implementation on a per-pool level:
-
-```js
-const bluebirdPool = new Pool({
-  Promise: require('bluebird')
-})
-```
-
-__please note:__ in node `<=0.12.x` the pool will throw if you do not provide a promise constructor in one of the two ways mentioned above. In node `>=4.0.0` the pool will use the native promise implementation by default; however, the two methods above still allow you to "bring your own."
+Usually I will export these into my local environment via a `.env` file with environment settings or export them in `~/.bash_profile` or something similar. This way I get configurability which works with both the postgres suite of tools (`psql`, `pg_dump`, `pg_restore`) and node, I can vary the environment variables locally and in production, and it supports the concept of a [12-factor app](http://12factor.net/) out of the box.

 ## maxUses and read-replica autoscaling (e.g. AWS Aurora)

 The maxUses config option can help an application instance rebalance load against a replica set that has been auto-scaled after the connection pool is already full of healthy connections.

-The mechanism here is that a connection is considered "expended" after it has been acquired and released `maxUses` number of times. Depending on the load on your system, this means there will be an approximate time in which any given connection will live, thus creating a window for rebalancing.
+The mechanism here is that a connection is considered "expended" after it has been acquired and released `maxUses` number of times. Depending on the load on your system, this means there will be an approximate time in which any given connection will live, thus creating a window for rebalancing.

-Imagine a scenario where you have 10 app instances providing an API running against a replica cluster of 3 that are accessed via a round-robin DNS entry. Each instance runs a connection pool size of 20. With an ambient load of 50 requests per second, the connection pool will likely fill up in a few minutes with healthy connections.
+Imagine a scenario where you have 10 app instances providing an API running against a replica cluster of 3 that are accessed via a round-robin DNS entry. Each instance runs a connection pool size of 20. With an ambient load of 50 requests per second, the connection pool will likely fill up in a few minutes with healthy connections.

-If you have weekly bursts of traffic which peak at 1,000 requests per second, you might want to grow your replicas to 10 during this period. Without setting `maxUses`, the new replicas will not be adopted by the app servers without an intervention -- namely, restarting each in turn in order to build up new connection pools that are balanced against all the replicas. Adding additional app server instances will help to some extent because they will adopt all the replicas in an even way, but the initial app servers will continue to focus additional load on the original replicas.
+If you have weekly bursts of traffic which peak at 1,000 requests per second, you might want to grow your replicas to 10 during this period. Without setting `maxUses`, the new replicas will not be adopted by the app servers without an intervention -- namely, restarting each in turn in order to build up new connection pools that are balanced against all the replicas. Adding additional app server instances will help to some extent because they will adopt all the replicas in an even way, but the initial app servers will continue to focus additional load on the original replicas.

-This is where the `maxUses` configuration option comes into play. Setting `maxUses` to 7500 will ensure that over a period of 30 minutes or so the new replicas will be adopted as the pre-existing connections are closed and replaced with new ones, thus creating a window for eventual balance.
+This is where the `maxUses` configuration option comes into play. Setting `maxUses` to 7500 will ensure that over a period of 30 minutes or so the new replicas will be adopted as the pre-existing connections are closed and replaced with new ones, thus creating a window for eventual balance.

 You'll want to test based on your own scenarios, but one way to make a first guess at `maxUses` is to identify an acceptable window for rebalancing and then solve for the value:

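The back-of-envelope estimate described above can be checked numerically. A sketch using the scenario from this README (10 app instances, pool size 20, a 1,000 request/second burst, and a roughly 30-minute rebalance window — the variable names here are invented for illustration):

```js
// Back-of-envelope estimate for maxUses, using the scenario in the text:
// requests are spread across app instances, then across each pool's clients,
// so each connection serves roughly rps / instances / poolSize uses per second.
const requestsPerSecond = 1000 // peak burst load
const appInstances = 10
const poolSize = 20
const rebalanceWindowSeconds = 30 * 60 // acceptable rebalance window: ~30 minutes

const usesPerConnectionPerSecond = requestsPerSecond / appInstances / poolSize // 5
const maxUses = usesPerConnectionPerSecond * rebalanceWindowSeconds

console.log(maxUses) // 9000 — a first guess in the same ballpark as the 7500 used in the text
```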
@@ -362,7 +344,7 @@ To run tests clone the repo, `npm i` in the working dir, and then run `npm test`

 ## contributions

-I love contributions. Please make sure they have tests, and submit a PR. If you're not sure if the issue is worth it or will be accepted it never hurts to open an issue to begin the conversation. If you're interested in keeping up with node-postgres releated stuff, you can follow me on twitter at [@briancarlson](https://twitter.com/briancarlson) - I generally announce any noteworthy updates there.
+I love contributions. Please make sure they have tests, and submit a PR. If you're not sure if the issue is worth it or will be accepted it never hurts to open an issue to begin the conversation. If you're interested in keeping up with node-postgres releated stuff, you can follow me on twitter at [@briancarlson](https://twitter.com/briancarlson) - I generally announce any noteworthy updates there.

 ## license

packages/pg-pool/test/bring-your-own-promise.js

Lines changed: 0 additions & 42 deletions
This file was deleted.
