Feat: v1.1 #1

Open · wants to merge 6 commits into `main`

3 changes: 2 additions & 1 deletion .env
@@ -1,3 +1,4 @@
LAST_PPTR_VERSION=19.3.0
LAST_PPTR_VERSION=21.0.3
TEST_URL=https://www.example.com/
VERSIONS_PATH=./versions
PUPPETEER_DISABLE_HEADLESS_WARNING=true
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,5 +1,7 @@
versions
results*.html
reports/results*.html
cache

# Logs
logs
186 changes: 176 additions & 10 deletions README.md
@@ -1,4 +1,6 @@
# Puppeteer Benchmark
<!-- markdownlint-disable no-trailing-punctuation ul-style link-fragments no-inline-html -->

# Puppeteer Benchmark Tool

This project is a CLI to write, test, and benchmark versions of puppeteer (and their respective Chrome binaries) for workloads that you might be interested in. By default, it comes with three basic test-cases:

@@ -8,22 +10,52 @@ This project is a CLI to write, test, and benchmark versions of puppeteer (and t

Tests are simple async functions that make use of the `perf_hooks` library to capture events you're interested in. Feel free to fork and add your own!
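
To give a feel for this, here is a minimal, hypothetical sketch of a custom case. The module shape and the arguments the runner passes (assumed here to be a puppeteer `page` and the case URL) are illustrative assumptions only; see the existing files in `test-cases/` for the real interface.

```js
// Hypothetical custom test case -- the exact module shape the runner expects
// may differ, so treat the signature below as an assumption.
const { performance } = require('perf_hooks');

// Assumed arguments: a puppeteer Page instance and the URL under test.
module.exports = async function generatePdfCase(page, url) {
  const start = performance.now(); // mark the start of the measured work
  await page.goto(url, { waitUntil: 'networkidle0' });
  await page.pdf({ format: 'A4' }); // the workload being benchmarked
  return performance.now() - start; // elapsed time in milliseconds
};
```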

## Basic CLI usage
- [Puppeteer Benchmark Tool](#puppeteer-benchmark-tool)
* [Setting Up](#setting-up)
* [Basic CLI usage](#basic-cli-usage)
* [To install all versions](#to-install-all-versions)
* [To test all versions](#to-test-all-versions)
* [CLI options](#cli-options)
+ [run \<case\>](#run-case)
- [Arguments](#arguments)
* [-r, --retries-number \<number\>](#-r---retries-number-number)
* [--puppeteer-versions \<string...\>](#--puppeteer-versions-string)
* [--case-url \<url\>](#--case-url-url)
* [--out \<filePath\>](#--out-filepath)
* [--temp-dir \<tempDir\>](#--temp-dir-tempdir)
* [--generate-report](#--generate-report)
* [--highlight-html](#--highlight-html)
* [--report-dir \<reportDir\>](#--report-dir-reportdir)
* [--silent](#--silent)
+ [prepare \<versions...\>](#prepare-versions)
+ [prepare-suite](#prepare-suite)
+ [suite](#suite)
* [Why?](#why-)
* [Future Features](#future-features)
* [I want to Help!](#i-want-to-help-)

## Setting Up

```sh
# Setup
$ npm i
$ npm link
# Install
$ npm i # add -g flag to install globally
$ npm run build

# Set up puppeteer versions
$ npx pptr-benchmark prepare 13 15 latest
# or install all versions
$ npx pptr-benchmark prepare-suite
```

# Prepare (download and install) the versions you care about
$ puppeteer-test prepare 13 15 latest
## Basic CLI usage

```sh
# Run the tests and output the results to a JSON file
$ puppeteer-test run test-cases/generate-pdf.js -r 2 --puppeteer-versions 13 15 latest --out results.json
$ npx pptr-benchmark run test-cases/generate-pdf.js -r 2 --puppeteer-versions 13 15 latest --out results.json

$ puppeteer-test run test-cases/paint-events.js -r 5 \
$ npx pptr-benchmark run all -r 5 \
--puppeteer-versions 13 15 latest \
--case-url "http://example.com" --case-opts '{"selector": "p"}' \
--case-url "http://example.com" \
--out results.json
```

@@ -39,6 +71,135 @@ npm run prepare-suite
npm run suite
```

## CLI options

### run \<case\>

Runs a specific benchmark test. `<case>` must be one of the following cases: `all` | `generate-pdf.js` | `make-screenshot.js` | `paint-events.js`

```sh
# Will run PDF tests
npx pptr-benchmark run test-cases/generate-pdf.js
```

#### Arguments

##### -r, --retries-number \<number\>

Number of test executions. Defaults to `5`.

```sh
# Will average and aggregate results based on 2 runs
npx pptr-benchmark run all -r 2
```

##### --puppeteer-versions \<string...\>

Tests a list of puppeteer versions, separated by commas or spaces. Defaults to `latest`.

```sh
# Will run all tests on pptr v13, v15 and latest
npx pptr-benchmark run all --puppeteer-versions 13 15 latest
```
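
Since the option also accepts comma-separated values, the following should be equivalent (assuming the comma form is parsed the same way as the space-separated form above):

```sh
# Same versions, passed as a comma-separated list
npx pptr-benchmark run all --puppeteer-versions 13,15,latest
```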

##### --case-url \<url\>

URL that the test case will navigate to. Defaults to `http://example.com/`.

```sh
# Will run all tests navigating to https://www.browserless.io/
npx pptr-benchmark run all --case-url https://www.browserless.io/
```

##### --out \<filePath\>

Writes the JSON results to a file.

```sh
# Will run all tests and write the results to ./file.json
npx pptr-benchmark run all --out ./file.json
```

##### --temp-dir \<tempDir\>

Writes the test PDFs and screenshots to the given directory.

```sh
# Will run all tests and save the PDFs files to ./cache
npx pptr-benchmark run all --temp-dir ./cache
```

##### --generate-report

Exports the results as an HTML report.

```sh
# Will run PDF tests on pptr v16, v18 and latest and generate a report
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report
```

And generates the following report:

![Basic HTML Report](./docs/html.jpg)

##### --highlight-html

Highlights min and max values in HTML report.

```sh
# Will run all tests on pptr v16, v18 and latest and generate a highlighted report
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report --highlight-html
```

##### --report-dir \<reportDir\>

Writes the final HTML report to the given directory.

```sh
# Will run all tests on pptr v16, v18 and latest and generate a highlighted report in ./cache
npx pptr-benchmark run all --puppeteer-versions 16 18 latest --generate-report --highlight-html --report-dir ./cache
```

And generates the following highlighted report:

![Highlighted HTML Report](./docs/html-highlight.jpg)

##### --silent

Turns off console output.

```sh
# Will only print the results table
npx pptr-benchmark run all --silent
```

### prepare \<versions...\>

Downloads and installs the given puppeteer versions.

```sh
# Will download pptr v13, v15 and latest
npx pptr-benchmark prepare 13 15 latest
```

### prepare-suite

Downloads and installs all major puppeteer versions.

```sh
# Will download all major versions from v1 to latest
npx pptr-benchmark prepare-suite
```

### suite

Runs all tests with default arguments on all available puppeteer versions. This is the easiest and most complete benchmark.

```sh
# Will run every test case on all available puppeteer versions
npx pptr-benchmark suite
```
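
Putting the pieces together, a typical end-to-end run (using only the commands and flags documented above) might look like this:

```sh
# Install every major puppeteer version, then benchmark all test cases
# and write a highlighted HTML report to ./reports
npx pptr-benchmark prepare-suite
npx pptr-benchmark run all -r 5 \
  --puppeteer-versions 16 18 latest \
  --generate-report --highlight-html --report-dir ./reports
```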

## Why?

You can read more about why we did this in our blog. The TL;DR is that we (browserless) heard a lot from our users about performance changes from version-to-version of Chrome, and wanted a way to programmatically see if a version change would introduce new latencies.
@@ -50,4 +211,9 @@ This CLI was born from that curiosity, and we wanted to open-source it to the co
Eventually we'll track the results of this suite into a static webpage that you can check on. This will hopefully give you a good sense of what to expect when upgrading. We'll work on adding newer tests as time goes on, but found enough value out of these initial few that we wanted to see what the community thought!

## I want to Help!

First, thanks! Please submit a PR and we'll follow up with you. If you have a bigger feature or want to do something drastic, please submit an issue describing what you want to do first, so you can avoid doing all that work unnecessarily.

---

<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>
Binary file added docs/html-highlight.jpg
Binary file added docs/html.jpg
65 changes: 59 additions & 6 deletions package-lock.json

Some generated files are not rendered by default.

18 changes: 11 additions & 7 deletions package.json
@@ -1,16 +1,17 @@
{
"name": "puppeteer-test-tool",
"version": "1.0.0",
"description": "",
"name": "puppeteer-benchmark",
"version": "1.1.0",
"description": "A Node utility for comparing performance among puppeteer versions",
"bin": {
"puppeteer-test": "dotenv/config ./build/cli.js"
"pptr-benchmark": "dotenv/config ./build/cli.js"
},
"main": "build",
"main": "./build/cli.js",
"scripts": {
"build": "tsc",
"test": "echo \"Error: no test specified\" && exit 1",
"cli": "node ./build/cli.js",
"make-puppeteer-placeholder": "node ./build/helpers/make-puppeteer-placeholder.js",
"postinstall": "npm run build && npm run make-puppeteer-placeholder",
"prettier": "prettier --config .prettierrc --write --log-level error \"{src,test-cases}/**/*.{js,ts}\"",
"prepare": "npx simple-git-hooks",
"prepare-suite": "npm run build && npm link && node ./build/cli.js prepare-suite",
"suite": "npm run build && npm link && node ./build/cli.js suite",
@@ -23,11 +24,14 @@
"dotenv": "^16.0.3",
"lodash.difference": "^4.5.0",
"lodash.range": "^3.2.0",
"lodash.startcase": "^4.4.0",
"mathjs": "^11.3.3",
"ora": "^5.4.1"
"ora": "^5.4.1",
"pidusage": "^3.0.2"
},
"devDependencies": {
"lint-staged": "^13.0.3",
"prettier": "^3.0.2",
"rewiremock": "^3.14.4",
"rome": "^10.0.1",
"simple-git-hooks": "^2.8.1",