iso-bench
iso-bench is a small benchmark library focused on avoiding optimization/deoptimization pollution between tests by isolating them.
I've always used benchmark.js for my benchmarks, but I noticed that changing the test order also changed the performance outcome. The tests were somehow polluting each other. V8 optimizations/deoptimizations, maybe? I decided to take advantage of forking to run each test in a completely separate process, with its own V8 instance, memory and so on, to avoid present and future optimization/deoptimization pollution.
All single-threaded benchmark libraries, like benny or benchmark.js, suffer from this problem, so you may have had this pollution in your tests without ever noticing, simply concluding that one test was faster than the other. This happened to me, and by the time I noticed the problem it was too late and I had to refactor some PacoPack code.
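The core idea can be sketched in a few lines. This is not iso-bench's actual implementation, just a minimal illustration of the approach: fork one child process per test so each test gets a fresh V8 instance, and report the timing back to the parent (the TEST_NAME variable, the test bodies and the cycle count are made up for the example):
import { fork } from "child_process";

const tests: Record<string, () => void> = {
    method: () => { Buffer.from([1]).readUint8(0); },
    direct: () => { Buffer.from([1])[0]; }
};

if (process.env.TEST_NAME) {
    // Child process: run only the selected test, in a fresh V8 instance
    const test = tests[process.env.TEST_NAME];
    const start = process.hrtime.bigint();
    for (let i = 0; i < 1000000; i++) { test(); }
    process.send!({ test: process.env.TEST_NAME, ns: String(process.hrtime.bigint() - start) });
    process.exit(0);
} else {
    // Parent process: spawn one isolated child per test, sequentially
    (async () => {
        for (const name of Object.keys(tests)) {
            await new Promise<void>(resolve => {
                const child = fork(__filename, { env: { ...process.env, TEST_NAME: name } });
                child.on("message", msg => console.log(msg));
                child.on("exit", () => resolve());
            });
        }
    })();
}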
Running this test on benchmark.js will return different outcomes. Note how method and method_again run exactly the same code:
const Benchmark = require("benchmark");
const functions = {
    method: function(buf:Buffer) {
        return buf.readUint8(0);
    },
    direct: function(buf:Buffer) {
        return buf[0];
    },
    method_again: function(buf:Buffer) {
        return buf.readUint8(0);
    }
};
const buffers = new Array(1000).fill(0).map(() => {
    const buf = Buffer.allocUnsafe(1);
    buf[0] = Math.floor(Math.random() * 0xFF);
    return buf;
});
const suite = new Benchmark.Suite();
for (const [type, fn] of Object.entries(functions)) {
    suite.add(`${type}`, () => {
        for (let i = 0; i < buffers.length; i++) {
            fn(buffers[i]);
        }
    });
}
suite.on("cycle", event => {
    console.log(String(event.target));
}).run({
    async: true
});
Which yields the following results:
method x 314,830 ops/sec
direct x 300,522 ops/sec
method_again x 187,985 ops/sec // SLOWER THAN "method"?? IT'S THE SAME CODE!!
And if I run the direct test first, it is even worse:
direct x 1,601,246 ops/sec // 5 TIMES FASTER THAN BEFORE??
method x 183,015 ops/sec // This test already got deoptimized
method_again x 183,956 ops/sec
On iso-bench this is not possible, as every test runs in a completely separate process. No matter the order, the outcome will be equally stable. This is the very same test on iso-bench:
import { IsoBench } from "..";
const bench = new IsoBench();
const functions = {
    method: function(buf:Buffer) {
        return buf.readUint8(0);
    },
    direct: function(buf:Buffer) {
        return buf[0];
    },
    method_again: function(buf:Buffer) {
        return buf.readUint8(0);
    }
};
const buffers = new Array(1000).fill(0).map(() => {
    const buf = Buffer.allocUnsafe(1);
    buf[0] = Math.floor(Math.random() * 0xFF);
    return buf;
});
for (const [type, fn] of Object.entries(functions)) {
    bench.add(`${type}`, () => {
        for (let i = 0; i < buffers.length; i++) {
            fn(buffers[i]);
        }
    });
}
bench.consoleLog().run();
Which yields these results with zero pollution:
method - 1.714.953 op/s.
direct - 1.712.045 op/s.
method_again - 1.699.022 op/s.
npm install iso-bench
Example code:
import { IsoBench } from "iso-bench";
const bench = new IsoBench("My bench");
bench.add("indexOf", () => {
"thisisastring".indexOf("a") > -1;
})
.add("RegExp", () => {
/a/.test("thisisastring");
})
.consoleLog()
.run();
new IsoBench(name?:string, options?:IsoBenchOptions);
Creates a new IsoBench instance to benchmark your code.
- name: The name of this IsoBench instance. Defaults to "IsoBench".
- options: Object:
  - parallel: The amount of parallel tests to run. Defaults to 1.
  - time: The minimum time (in milliseconds) to invest in each test. The library will automatically increase the amount of cycles until at least this many milliseconds pass per sample. Note that the setup callback is called once per cycle set, so its return value is reused on each cycle; if the setup data is consumable you must use customCycles instead. Defaults to 100.
  - customCycles: If you run your own cycles (for, while, iterator, anything), you can provide the amount of cycles you are running. The library will divide the resulting time by customCycles to calculate the amount of operations per second for each sample. Make sure that a proper amount of time is spent on each iteration (50-100 ms recommended). This allows consumable setups, as the library won't run multiple cycles over them (see the time option). Defaults to null.
  - spawns: Amount of processes to spawn per test. They will be spawned sequentially for the same test, never in parallel. Defaults to 10.
  - samplesPerSpawn: Amount of samples to collect in each spawned process. Defaults to 5.
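For example (the option values here are arbitrary):
import { IsoBench } from "iso-bench";
const bench = new IsoBench("My bench", {
    parallel: 2,        // run up to 2 tests at the same time
    time: 200,          // invest at least 200 ms per sample
    spawns: 5,          // spawn 5 processes per test
    samplesPerSpawn: 10 // collect 10 samples per process
});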
bench.add<T>(name:string, test:(setupReturn:T)=>void, setup?:()=>T, testOptions?:TestOptions):this;
Adds a new test. Returns the IsoBench instance, so you can chain new tests easily.
- name: The name of this test.
- test: The test function to run.
- setup: Optional. A setup function to run before the test. If you are concerned about pollution between tests when preparing data that only one test needs, you can use the setup callback to return that data; it will be provided to the test callback as its first argument. The other tests will not run this setup callback in their isolated processes.
- testOptions: The same options as IsoBenchOptions, except parallel. They apply to this specific test, merged with the general IsoBenchOptions you've passed.
Example:
bench.add("object.result", (obj) => {
// Test callback receiving the obj from the setup callback
object.result = object.result + 1;
}, () => {
// Setup callback
let objResult = 0;
return Object.defineProperties({}, {
result: {
get: () => objResult,
set: (res) => objResult = res
}
});
});
bench.add("for of generator", (obj) => {
let res = 0;
for(const value of iterable) {
res = res ^ value;
}
}, () => {
// Create a consumable setup element
let count = 1000000;
function* createIterable() {
for (let i = 0; i < count; i++) {
yield i;
}
}
return createIterable();
}, {
// The library first cycle will consume it so next cycles will return invalid results,
// so we tell the library that we have a custom cycle system with the amount of cycles
customCycles: 1000000
});
bench.endGroup(name:string):this;
Groups the tests added up to this point. Results will only be compared between tests in the same group. Example:
import { IsoBench } from "iso-bench";
const bench = new IsoBench("My bench");
bench.add("indexOf", () => {
"thisisastring".indexOf("a") > -1;
})
.add("RegExp", () => {
/a/.test("thisisastring");
})
.endGroup("small string") // First group
.add("indexOf", () => {
"thisisastring thisisastring".indexOf("a") > -1;
})
.add("RegExp", () => {
/a/.test("thisisastring thisisastring");
})
.endGroup("big string") // Second group
.consoleLog()
.run();
bench.consoleLog():this;
Adds a built-in Processor that outputs the results to the console. Returns the IsoBench instance.
bench.streamLog(streamCallback:() => Stream.Writable):this;
Adds a built-in Processor that outputs the results to a writable stream, like a file or a socket. The writable stream should be returned from the callback function, so that it is only created in the main process. Returns the IsoBench instance.
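For example, to log the results to a file (using a standard fs write stream; the file name is arbitrary):
import * as fs from "fs";
bench.streamLog(() => fs.createWriteStream("results.log")).run();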
bench.addProcessor(processorCallback:() => Processor):this;
Adds a custom Processor, which must implement the Processor interface. The callback should return the Processor instance, so that it is only created in the main process. Returns the IsoBench instance.
bench.run():Promise<void>;
Starts running the tests. Returns a Promise that resolves when all the tests are completed.
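For example, you can await the returned Promise to know when every test has finished:
(async () => {
    await bench.consoleLog().run();
    console.log("All benchmarks completed");
})();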
Processors will receive the benchmark events to process them. They must implement the Processor interface:
export interface Processor {
    initialize?(bench:IsoBench, tests:Test[]):void;
    start?(test:Test):void;
    sample?(test:Test, sample:Sample):void;
    end?(test:Test):void;
    completed?(tests:Test[]):void;
}
Processor methods:
initialize(bench:IsoBench, tests:Test[]):void;
Will be called when the benchmark starts. Receives the IsoBench instance and an array of the tests that are going to be run, for initialization purposes. Optional.
start(test:Test):void;
Will be called when a Test starts to run. Optional.
sample(test:Test, sample:Sample):void;
Will be called when a new Sample is added to a Test. Optional.
end(test:Test):void;
Will be called when a Test has collected enough samples and can calculate the final result, or when a test fails (check for the test.error property). Optional.
completed(tests:Test[]):void;
Will be called when the benchmark is completed. Receives an array with all the tests (including the errored ones). Optional.
Custom Processor example:
import { Processor, Test } from "iso-bench";
class MyProcessor implements Processor {
    end(test:Test) {
        console.log(test);
    }
    completed(tests:Test[]) {
        console.log(tests);
    }
}
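You would then register it with addProcessor, keeping the instantiation inside the callback so it only happens in the main process:
bench.addProcessor(() => new MyProcessor()).run();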