# How to write and run benchmarks in Node.js core

## Table of contents

* [Prerequisites](#prerequisites)
  * [HTTP benchmark requirements](#http-benchmark-requirements)
  * [HTTPS benchmark requirements](#https-benchmark-requirements)
  * [HTTP/2 benchmark requirements](#http2-benchmark-requirements)
  * [Benchmark analysis requirements](#benchmark-analysis-requirements)
* [Running benchmarks](#running-benchmarks)
  * [Setting CPU Frequency scaling governor to "performance"](#setting-cpu-frequency-scaling-governor-to-performance)
  * [Running individual benchmarks](#running-individual-benchmarks)
  * [Running all benchmarks](#running-all-benchmarks)
    * [Specifying CPU Cores for Benchmarks with run.js](#specifying-cpu-cores-for-benchmarks-with-runjs)
    * [Filtering benchmarks](#filtering-benchmarks)
    * [Grouping benchmarks](#grouping-benchmarks)
  * [Comparing Node.js versions](#comparing-nodejs-versions)
  * [Comparing parameters](#comparing-parameters)
  * [Running benchmarks on the CI](#running-benchmarks-on-the-ci)
* [Creating a benchmark](#creating-a-benchmark)
  * [Basics of a benchmark](#basics-of-a-benchmark)
  * [Creating an HTTP benchmark](#creating-an-http-benchmark)

## Prerequisites

Basic Unix tools are required for some benchmarks.
[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
which need to be included in the global Windows `PATH`.

### HTTP benchmark requirements

Most of the HTTP benchmarks require a benchmarker to be installed. This can be
either [`wrk`][wrk] or [`autocannon`][autocannon].

`Autocannon` is a Node.js script that can be installed using
`npm install -g autocannon`. It will use the Node.js executable that is in the
path. In order to compare two HTTP benchmark runs, make sure that the
Node.js version in the path is not altered.

`wrk` may be available through one of the available package managers. If not,
it can be easily built [from source][wrk] via `make`.

By default, `wrk` will be used as the benchmarker. If it is not available,
`autocannon` will be used in its place. When creating an HTTP benchmark, the
benchmarker to be used should be specified by providing it as an argument:

`node benchmark/run.js --set benchmarker=autocannon http`

`node benchmark/http/simple.js benchmarker=autocannon`

### HTTPS benchmark requirements

To run the `https` benchmarks, one of `autocannon` or `wrk` benchmarkers must
be used.

`node benchmark/https/simple.js benchmarker=autocannon`

### HTTP/2 benchmark requirements

To run the `http2` benchmarks, the `h2load` benchmarker must be used. The
`h2load` tool is a component of the `nghttp2` project and may be installed
from [nghttp2.org][] or built from source.

`node benchmark/http2/simple.js benchmarker=h2load`

### Benchmark analysis requirements
|
2016-09-26 17:03:21 +08:00
|
|
|
|
2021-08-21 22:50:08 +08:00
|
|
|
To analyze the results statistically, you can use either the
|
|
|
|
[node-benchmark-compare][] tool or the R script `benchmark/compare.R`.
|
|
|
|
|
|
|
|
[node-benchmark-compare][] is a Node.js script that can be installed with
|
|
|
|
`npm install -g node-benchmark-compare`.
|
|
|
|
|
|
|
|
To draw comparison plots when analyzing the results, `R` must be installed.
|
|
|
|
Use one of the available package managers or download it from
|
|
|
|
<https://www.r-project.org/>.
|
2016-02-21 20:14:39 +08:00
|
|
|
|
|
|
|
The R packages `ggplot2` and `plyr` are also used and can be installed using
|
|
|
|
the R REPL.
|
|
|
|
|
2020-05-25 00:37:21 +08:00
|
|
|
```console
|
2016-02-21 20:14:39 +08:00
|
|
|
$ R
|
|
|
|
install.packages("ggplot2")
|
|
|
|
install.packages("plyr")
|
|
|
|
```

If a message states that a CRAN mirror must be selected first, specify a mirror
with the `repo` parameter.

```r
install.packages("ggplot2", repo="http://cran.us.r-project.org")
```

Of course, use an appropriate mirror based on location.
A list of mirrors is [located here](https://cran.r-project.org/mirrors.html).

## Running benchmarks

### Setting CPU Frequency scaling governor to "performance"

It is recommended to set the CPU frequency to `performance` before running
benchmarks. This increases the likelihood of each benchmark achieving peak
performance according to the hardware. Therefore, run:

```console
$ ./benchmark/cpu.sh fast
```
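
On Linux, the `performance` mode is exposed through the `cpufreq` sysfs
interface. As a minimal sketch of what such a governor switch amounts to
(assuming a typical `cpufreq` setup; this is an illustration, not a substitute
for the repository script):

```bash
# Switch every core's frequency scaling governor to "performance".
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance | sudo tee "$gov" > /dev/null
done
```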

### Running individual benchmarks

This can be useful for debugging a benchmark or doing a quick performance
measure. But it does not provide the statistical information to make any
conclusions about the performance.

Individual benchmarks can be executed by simply executing the benchmark script
with node.

```console
$ node benchmark/buffers/buffer-tostring.js

buffers/buffer-tostring.js n=10000000 len=0 arg=true: 62710590.393305704
buffers/buffer-tostring.js n=10000000 len=1 arg=true: 9178624.591787899
buffers/buffer-tostring.js n=10000000 len=64 arg=true: 7658962.8891432695
buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 4136904.4060201733
buffers/buffer-tostring.js n=10000000 len=0 arg=false: 22974354.231509723
buffers/buffer-tostring.js n=10000000 len=1 arg=false: 11485945.656765845
buffers/buffer-tostring.js n=10000000 len=64 arg=false: 8718280.70650129
buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 4103857.0726124765
```

Each line represents a single benchmark with parameters specified as
`${variable}=${value}`. Each configuration combination is executed in a separate
process. This ensures that benchmark results aren't affected by the execution
order due to V8 optimizations. **The last number is the rate of operations
measured in ops/sec (higher is better).**

Furthermore, a subset of the configurations can be specified by setting them in
the process arguments:

```console
$ node benchmark/buffers/buffer-tostring.js len=1024

buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
```
|
2016-02-21 20:14:39 +08:00
|
|
|
|
|
|
|
### Running all benchmarks
|
|
|
|
|
|
|
|
Similar to running individual benchmarks, a group of benchmarks can be executed
|
2017-02-08 01:10:09 +08:00
|
|
|
by using the `run.js` tool. To see how to use this script,
|
|
|
|
run `node benchmark/run.js`. Again this does not provide the statistical
|
2016-02-21 20:14:39 +08:00
|
|
|
information to make any conclusions.
|
|
|
|
|
2016-07-14 18:46:01 +08:00
|
|
|
```console
|
2019-01-06 17:02:23 +08:00
|
|
|
$ node benchmark/run.js assert
|
2016-02-21 20:14:39 +08:00
|
|
|
|
2019-01-06 17:02:23 +08:00
|
|
|
assert/deepequal-buffer.js
|
|
|
|
assert/deepequal-buffer.js method="deepEqual" strict=0 len=100 n=20000: 773,200.4995493788
|
|
|
|
assert/deepequal-buffer.js method="notDeepEqual" strict=0 len=100 n=20000: 964,411.712953848
|
2014-05-23 11:57:31 +08:00
|
|
|
...
|
|
|
|
|
2019-01-06 17:02:23 +08:00
|
|
|
assert/deepequal-map.js
|
|
|
|
assert/deepequal-map.js method="deepEqual_primitiveOnly" strict=0 len=500 n=500: 20,445.06368453332
|
|
|
|
assert/deepequal-map.js method="deepEqual_objectOnly" strict=0 len=500 n=500: 1,393.3481642240833
|
2016-02-21 20:14:39 +08:00
|
|
|
...
|
2014-05-23 11:57:31 +08:00
|
|
|
|
2019-01-06 17:02:23 +08:00
|
|
|
assert/deepequal-object.js
|
|
|
|
assert/deepequal-object.js method="deepEqual" strict=0 size=100 n=5000: 1,053.1950937538475
|
|
|
|
assert/deepequal-object.js method="notDeepEqual" strict=0 size=100 n=5000: 9,734.193251965213
|
2014-05-23 11:57:31 +08:00
|
|
|
...
|
|
|
|
```

It is possible to execute more groups by adding extra process arguments.

```bash
node benchmark/run.js assert async_hooks
```

It's also possible to execute the benchmark more than once using the
`--runs` flag.

```bash
node benchmark/run.js --runs 10 assert async_hooks
```

This command will run the benchmark files in `benchmark/assert` and `benchmark/async_hooks`
10 times each.

#### Specifying CPU Cores for Benchmarks with run.js

When using `run.js` to execute a group of benchmarks, you can specify on which
CPU cores the benchmarks should execute by using the `--set CPUSET=value`
option. This controls the CPU core affinity for the benchmark process,
potentially reducing interference from other processes and allowing for
performance testing under specific hardware configurations.

The `CPUSET` option utilizes the `taskset` command's format for setting CPU
affinity, where `value` can be a single core number or a range of cores.

Examples:

* `node benchmark/run.js --set CPUSET=0`: runs benchmarks on CPU core 0.
* `node benchmark/run.js --set CPUSET=0-2`: specifies that benchmarks should
  run on CPU cores 0 to 2.

Note: This option is only applicable when using `run.js`.
Ensure the `taskset` command is available on your system
and the specified `CPUSET` format matches its requirements.
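
Conceptually, `--set CPUSET=0-2` constrains the benchmark processes the same
way a manual `taskset` invocation would; a rough, hypothetical equivalent is:

```bash
# Pin the benchmark runner (and the child processes it spawns) to cores 0-2.
taskset -c 0-2 node benchmark/run.js assert
```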

#### Filtering benchmarks

`benchmark/run.js` and `benchmark/compare.js` have `--filter pattern` and
`--exclude pattern` options, which can be used to run a subset of benchmarks or
to exclude specific benchmarks from the execution, respectively.

```console
$ node benchmark/run.js --filter "deepequal-b" assert

assert/deepequal-buffer.js
assert/deepequal-buffer.js method="deepEqual" strict=0 len=100 n=20000: 773,200.4995493788
assert/deepequal-buffer.js method="notDeepEqual" strict=0 len=100 n=20000: 964,411.712953848

$ node benchmark/run.js --exclude "deepequal-b" assert

assert/deepequal-map.js
assert/deepequal-map.js method="deepEqual_primitiveOnly" strict=0 len=500 n=500: 20,445.06368453332
assert/deepequal-map.js method="deepEqual_objectOnly" strict=0 len=500 n=500: 1,393.3481642240833
...

assert/deepequal-object.js
assert/deepequal-object.js method="deepEqual" strict=0 size=100 n=5000: 1,053.1950937538475
assert/deepequal-object.js method="notDeepEqual" strict=0 size=100 n=5000: 9,734.193251965213
...
```

`--filter` and `--exclude` can be repeated to provide multiple patterns.

```console
$ node benchmark/run.js --filter "deepequal-b" --filter "deepequal-m" assert

assert/deepequal-buffer.js
assert/deepequal-buffer.js method="deepEqual" strict=0 len=100 n=20000: 773,200.4995493788
assert/deepequal-buffer.js method="notDeepEqual" strict=0 len=100 n=20000: 964,411.712953848

assert/deepequal-map.js
assert/deepequal-map.js method="deepEqual_primitiveOnly" strict=0 len=500 n=500: 20,445.06368453332
assert/deepequal-map.js method="deepEqual_objectOnly" strict=0 len=500 n=500: 1,393.3481642240833

$ node benchmark/run.js --exclude "deepequal-b" --exclude "deepequal-m" assert

assert/deepequal-object.js
assert/deepequal-object.js method="deepEqual" strict=0 size=100 n=5000: 1,053.1950937538475
assert/deepequal-object.js method="notDeepEqual" strict=0 size=100 n=5000: 9,734.193251965213
...

assert/deepequal-prims-and-objs-big-array-set.js
assert/deepequal-prims-and-objs-big-array-set.js method="deepEqual_Array" strict=0 len=20000 n=25 primitive="string": 865.2977195251661
assert/deepequal-prims-and-objs-big-array-set.js method="notDeepEqual_Array" strict=0 len=20000 n=25 primitive="string": 827.8297281403861
assert/deepequal-prims-and-objs-big-array-set.js method="deepEqual_Set" strict=0 len=20000 n=25 primitive="string": 28,826.618268696366
...
```

If `--filter` and `--exclude` are used together, `--filter` is applied first,
and `--exclude` is applied on the result of `--filter`:

```console
$ node benchmark/run.js --filter "bench-" process

process/bench-env.js
process/bench-env.js operation="get" n=1000000: 2,356,946.0770617095
process/bench-env.js operation="set" n=1000000: 1,295,176.3266261867
process/bench-env.js operation="enumerate" n=1000000: 24,592.32231990992
process/bench-env.js operation="query" n=1000000: 3,625,787.2150573144
process/bench-env.js operation="delete" n=1000000: 1,521,131.5742806569

process/bench-hrtime.js
process/bench-hrtime.js type="raw" n=1000000: 13,178,002.113936031
process/bench-hrtime.js type="diff" n=1000000: 11,585,435.712423025
process/bench-hrtime.js type="bigint" n=1000000: 13,342,884.703919787

$ node benchmark/run.js --filter "bench-" --exclude "hrtime" process

process/bench-env.js
process/bench-env.js operation="get" n=1000000: 2,356,946.0770617095
process/bench-env.js operation="set" n=1000000: 1,295,176.3266261867
process/bench-env.js operation="enumerate" n=1000000: 24,592.32231990992
process/bench-env.js operation="query" n=1000000: 3,625,787.2150573144
process/bench-env.js operation="delete" n=1000000: 1,521,131.5742806569
```

#### Grouping benchmarks

Benchmarks can also have groups, giving the developer greater flexibility in
differentiating between test cases and also helping reduce the time to run the
combination of benchmark parameters.

By default, all groups are executed when running the benchmark.
However, it is possible to specify individual groups by setting the
`NODE_RUN_BENCHMARK_GROUPS` environment variable when running `compare.js`:

```bash
NODE_RUN_BENCHMARK_GROUPS=fewHeaders,manyHeaders node benchmark/http/headers.js
```

### Comparing Node.js versions

To compare the effect of a new Node.js version use the `compare.js` tool. This
will run each benchmark multiple times, making it possible to calculate
statistics on the performance measures. To see how to use this script,
run `node benchmark/compare.js`.

As an example of how to check for a possible performance improvement, the
[#5134](https://github.com/nodejs/node/pull/5134) pull request will be used.
This pull request _claims_ to improve the performance of the
`node:string_decoder` module.

First build two versions of Node.js, one from the `main` branch (here called
`./node-main`) and another with the pull request applied (here called
`./node-pr-5134`).

To run multiple compiled versions in parallel you need to copy the output of the
build: `cp ./out/Release/node ./node-main`. Check out the following example:

```bash
git checkout main
./configure && make -j4
cp ./out/Release/node ./node-main

git checkout pr-5134
./configure && make -j4
cp ./out/Release/node ./node-pr-5134
```

The `compare.js` tool will then produce a csv file with the benchmark results.

```bash
node benchmark/compare.js --old ./node-main --new ./node-pr-5134 string_decoder > compare-pr-5134.csv
```

_Tips: there are some useful options in `benchmark/compare.js`. For example,
if you want to compare the benchmark of a single script instead of a whole
module, you can use the `--filter` option:_

```console
  --new      ./new-node-binary  new node binary (required)
  --old      ./old-node-binary  old node binary (required)
  --runs     30                 number of samples
  --filter   pattern            string to filter benchmark scripts
  --exclude  pattern            excludes scripts matching <pattern> (can be
                                repeated)
  --set      variable=value     set benchmark variable (can be repeated)
  --no-progress                 don't show benchmark progress indicator

  Examples:
    --set CPUSET=0    Runs benchmarks on CPU core 0.
    --set CPUSET=0-2  Specifies that benchmarks should run on CPU cores 0 to 2.

  Note: The CPUSET format should match the specifications of the 'taskset' command.
```

For analyzing the benchmark results, use [node-benchmark-compare][] or the R
scripts:

* `benchmark/compare.R`
* `benchmark/bar.R`

```console
$ node-benchmark-compare compare-pr-5134.csv # or cat compare-pr-5134.csv | Rscript benchmark/compare.R

confidence improvement accuracy (*) (**) (***)
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=128 encoding='ascii' *** -3.76 % ±1.36% ±1.82% ±2.40%
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=128 encoding='utf8' ** -0.81 % ±0.53% ±0.71% ±0.93%
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=32 encoding='ascii' *** -2.70 % ±0.83% ±1.11% ±1.45%
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=32 encoding='base64-ascii' *** -1.57 % ±0.83% ±1.11% ±1.46%
...
```

In the output, _improvement_ is the relative improvement of the new version;
hopefully, this is positive. _confidence_ tells if there is enough
statistical evidence to validate the _improvement_. If there is enough evidence,
then there will be at least one star (`*`); more stars are better. **However,
if there are no stars, then don't make any conclusions based on the
_improvement_.** Sometimes this is fine; for example, if no improvements are
expected, then there shouldn't be any stars.

**A word of caution:** Statistics is not a foolproof tool. If a benchmark shows
a statistically significant difference, there is a 5% risk that this
difference doesn't actually exist. For a single benchmark this is not an
issue. But when considering 20 benchmarks it's normal that one of them
will show significance when it shouldn't. A possible solution is to instead
consider at least two stars (`**`) as the threshold; in that case, the risk
is 1%. If three stars (`***`) are considered, the risk is 0.1%. However, this
may require more runs to obtain (can be set with `--runs`).

_For the statistically minded, the script performs an [independent/unpaired
2-group t-test][t-test], with the null hypothesis that the performance is the
same for both versions. The confidence field will show a star if the p-value
is less than `0.05`._
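
_For reference, the statistic behind that test (the unequal-variance, "Welch"
form that the linked article describes) is:_

```latex
t = \frac{\bar{x}_{\mathrm{new}} - \bar{x}_{\mathrm{old}}}
         {\sqrt{s_{\mathrm{new}}^2 / n_{\mathrm{new}} + s_{\mathrm{old}}^2 / n_{\mathrm{old}}}}
```

_where the means, sample variances, and run counts are computed from the
measured ops/sec of each binary._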

The `compare.R` tool can additionally produce a box plot by using the
`--plot filename` option. In this case there are 48 different benchmark
combinations, and there may be a need to filter the csv file. This can be done
while benchmarking using the `--set` parameter (e.g. `--set encoding=ascii`) or
by filtering results afterwards using tools such as `sed` or `grep`. In the
`sed` case be sure to keep the first line since that contains the header
information.

```console
$ cat compare-pr-5134.csv | sed '1p;/encoding='"'"ascii"'"'/!d' | Rscript benchmark/compare.R --plot compare-plot.png

confidence improvement accuracy (*) (**) (***)
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=128 encoding='ascii' *** -3.76 % ±1.36% ±1.82% ±2.40%
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=32 encoding='ascii' *** -2.70 % ±0.83% ±1.11% ±1.45%
string_decoder/string-decoder.js n=2500000 chunkLen=16 inLen=4096 encoding='ascii' *** -4.06 % ±0.31% ±0.41% ±0.54%
string_decoder/string-decoder.js n=2500000 chunkLen=256 inLen=1024 encoding='ascii' *** -1.42 % ±0.58% ±0.77% ±1.01%
...
```

![compare tool boxplot](doc_img/compare-boxplot.png)

### Comparing parameters

It can be useful to compare the performance for different parameters, for
example to analyze the time complexity.

To do this use the `scatter.js` tool, which will run a benchmark multiple times
and generate a csv with the results. To see how to use this script,
run `node benchmark/scatter.js`.

```bash
node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
```

After generating the csv, a comparison table can be created using the
`scatter.R` tool. Even more usefully, it creates an actual scatter plot when
using the `--plot filename` option.

```console
$ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunkLen --category encoding --plot scatter-plot.png --log

aggregating variable: inLen

chunkLen     encoding      rate confidence.interval
      16        ascii 1515855.1           334492.68
      16 base64-ascii  403527.2            89677.70
      16  base64-utf8  322352.8            70792.93
      16      utf16le 1714567.5           388439.81
      16         utf8 1100181.6           254141.32
      64        ascii 3550402.0           661277.65
      64 base64-ascii 1093660.3           229976.34
      64  base64-utf8  997804.8           227238.04
      64      utf16le 3372234.0           647274.88
      64         utf8 1731941.2           360854.04
     256        ascii 5033793.9           723354.30
     256 base64-ascii 1447962.1           236625.96
     256  base64-utf8 1357269.2           231045.70
     256      utf16le 4039581.5           655483.16
     256         utf8 1828672.9           360311.55
    1024        ascii 5677592.7           624771.56
    1024 base64-ascii 1494171.7           227302.34
    1024  base64-utf8 1399218.9           224584.79
    1024      utf16le 4157452.0           630416.28
    1024         utf8 1824266.6           359628.52
```

Because the scatter plot can only show two variables (in this case _chunkLen_
and _encoding_) the rest is aggregated. Sometimes aggregating is a problem; this
can be solved by filtering. This can be done while benchmarking using the
`--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
afterwards using tools such as `sed` or `grep`. In the `sed` case be
sure to keep the first line since that contains the header information.

```console
$ cat scatter.csv | sed -E '1p;/([^,]+, ){3}128,/!d' | Rscript benchmark/scatter.R --xaxis chunkLen --category encoding --plot scatter-plot.png --log

chunkLen     encoding      rate confidence.interval
      16        ascii 1302078.5            71692.27
      16 base64-ascii  338669.1            15159.54
      16  base64-utf8  281904.2            20326.75
      16      utf16le 1381515.5            58533.61
      16         utf8  831183.2            33631.01
      64        ascii 4363402.8           224030.00
      64 base64-ascii 1036825.9            48644.72
      64  base64-utf8  780059.3            60994.98
      64      utf16le 3900749.5           158366.84
      64         utf8 1723710.6            80665.65
     256        ascii 8472896.1           511822.51
     256 base64-ascii 2215884.6           104347.53
     256  base64-utf8 1996230.3           131778.47
     256      utf16le 5824147.6           234550.82
     256         utf8 2019428.8           100913.36
    1024        ascii 8340189.4           598855.08
    1024 base64-ascii 2201316.2           111777.68
    1024  base64-utf8 2002272.9           128843.11
    1024      utf16le 5789281.7           240642.77
    1024         utf8 2025551.2            81770.69
```

![scatter plot](doc_img/scatter-plot.png)

### Running benchmarks on the CI

To see the performance impact of a pull request by running benchmarks on
the CI, check out [How to: Running core benchmarks on Node.js CI][benchmark-ci].

## Creating a benchmark

### Basics of a benchmark

All benchmarks use the `require('../common.js')` module. This contains the
`createBenchmark(main, configs[, options])` method which will set up the
benchmark.

The arguments of `createBenchmark` are:

* `main` {Function} The benchmark function,
  where the code running operations and controlling timers should go
* `configs` {Object} The benchmark parameters. `createBenchmark` will run all
  possible combinations of these parameters, unless specified otherwise.
  Each configuration is a property with an array of possible values.
  The configuration values can only be strings or numbers.
* `options` {Object} The benchmark options. Supported options:

  * `flags` {Array} Contains node-specific command line flags to pass to
    the child process.

  * `byGroups` {Boolean} option for processing `configs` by groups:

    ```js
    const bench = common.createBenchmark(main, {
      groupA: {
        source: ['array'],
        len: [10, 2048],
        n: [50],
      },
      groupB: {
        source: ['buffer', 'string'],
        len: [2048],
        n: [50, 2048],
      },
    }, { byGroups: true });
    ```

  * `combinationFilter` {Function} Has a single parameter which is an object
    containing a combination of benchmark parameters. It should return `true`
    or `false` to indicate whether the combination should be included or not.
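
For example, a minimal sketch of a `combinationFilter` (with made-up
configuration names and a made-up rule) could look like this:

```js
'use strict';
const common = require('../common.js');

const configs = {
  n: [128, 1024],
  len: [256, 2048],
};

const options = {
  // Hypothetical rule: skip combinations where `len` exceeds `n`.
  combinationFilter: (config) => config.len <= config.n,
};

const bench = common.createBenchmark(main, configs, options);

function main(conf) {
  bench.start();
  // Perform conf.n operations here.
  bench.end(conf.n);
}
```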

`createBenchmark` returns a `bench` object, which is used for timing
the runtime of the benchmark. Run `bench.start()` after the initialization
and `bench.end(n)` when the benchmark is done. `n` is the number of operations
performed in the benchmark.

The benchmark script will be run twice:

The first pass will configure the benchmark with the combination of
parameters specified in `configs`, and WILL NOT run the `main` function.
In this pass, no flags except the ones directly passed via commands
when running the benchmarks will be used.

In the second pass, the `main` function will be run, and the process
will be launched with:

* The flags passed into `createBenchmark` (the third argument)
* The flags in the command passed when the benchmark was run

Beware that any code outside the `main` function will be run twice
in different processes. This could be troublesome if the code
outside the `main` function has side effects. In general, prefer putting
the code inside the `main` function if it's more than just declaration.

```js
'use strict';
const common = require('../common.js');
const { SlowBuffer } = require('node:buffer');

const configs = {
  // Number of operations, specified here so they show up in the report.
  // Most benchmarks just use one value for all runs.
  n: [1024],
  type: ['fast', 'slow'], // Custom configurations
  size: [16, 128, 1024], // Custom configurations
};

const options = {
  // Add --expose-internals in order to require internal modules in main
  flags: ['--zero-fill-buffers'],
};

// `main` and `configs` are required, `options` is optional.
const bench = common.createBenchmark(main, configs, options);

// Any code outside main will be run twice,
// in different processes, with different command line arguments.

function main(conf) {
  // Only flags that have been passed to createBenchmark
  // earlier when main is run will be in effect.
  // In order to benchmark the internal modules, require them here. For example:
  // const URL = require('internal/url').URL

  // Start the timer
  bench.start();

  // Do operations here
  const BufferConstructor = conf.type === 'fast' ? Buffer : SlowBuffer;

  for (let i = 0; i < conf.n; i++) {
    new BufferConstructor(conf.size);
  }

  // End the timer, pass in the number of operations
  bench.end(conf.n);
}
```

### Creating an HTTP benchmark

The `bench` object returned by `createBenchmark` implements the
`http(options, callback)` method. It can be used to run an external tool to
benchmark HTTP servers.

```js
'use strict';

const common = require('../common.js');

const bench = common.createBenchmark(main, {
  kb: [64, 128, 256, 1024],
  connections: [100, 500],
  duration: 5,
});

function main(conf) {
  const http = require('node:http');
  const len = conf.kb * 1024;
  const chunk = Buffer.alloc(len, 'x');
  const server = http.createServer((req, res) => {
    res.end(chunk);
  });

  server.listen(common.PORT, () => {
    bench.http({
      connections: conf.connections,
    }, () => {
      server.close();
    });
  });
}
```

Supported options keys are:

* `port` - defaults to `common.PORT`
* `path` - defaults to `/`
* `connections` - number of concurrent connections to use, defaults to 100
* `duration` - duration of the benchmark in seconds, defaults to 10
* `benchmarker` - benchmarker to use, defaults to the first available http
  benchmarker
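
For illustration, a hypothetical call that spells out every documented key
explicitly (reusing the `server` and `conf` from the example above) might be:

```js
bench.http({
  port: common.PORT,         // where the example server is listening
  path: '/',
  connections: conf.connections,
  duration: conf.duration,   // seconds
  benchmarker: 'autocannon', // force one benchmarker instead of the first available
}, () => {
  server.close();
});
```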

[autocannon]: https://github.com/mcollina/autocannon
[benchmark-ci]: https://github.com/nodejs/benchmarking/blob/HEAD/docs/core_benchmarks.md
[git-for-windows]: https://git-scm.com/download/win
[nghttp2.org]: https://nghttp2.org
[node-benchmark-compare]: https://github.com/targos/node-benchmark-compare
[t-test]: https://en.wikipedia.org/wiki/Student%27s_t-test#Equal_or_unequal_sample_sizes%2C_unequal_variances_%28sX1_%3E_2sX2_or_sX2_%3E_2sX1%29
[wrk]: https://github.com/wg/wrk