# pnpm Benchmarks
Compares `pnpm install` performance between the current branch and `main`.
## Prerequisites
- `hyperfine` — install via `brew install hyperfine`
- The current branch must be compiled (`pnpm run compile`)
- If providing a pre-existing `main` checkout path, it must also be compiled
## Usage
```sh
pnpm run compile
./benchmarks/bench.sh
```
If a git worktree with `main` already exists, the script finds and uses it automatically. Otherwise it creates one at `../.pnpm-bench-main` (a sibling of the repo), installs dependencies, and compiles.
You can also point the script at a specific checkout of `main`:
```sh
./benchmarks/bench.sh /path/to/main
```
## Scenarios
| # | Name | Lockfile | Store + Cache | Description |
|---|---|---|---|---|
| 1 | Headless | ✔ frozen | warm | Repeat install with warm store |
| 2 | Re-resolution | ✔ + add dep | warm | Add a new dependency to an existing lockfile |
| 3 | Full resolution | ✗ | warm | Resolve everything from scratch with warm store and cache |
| 4 | Headless cold | ✔ frozen | cold | Typical CI install — fetch all packages with lockfile |
| 5 | Cold install | ✗ | cold | True cold start — nothing cached |
All scenarios use `--ignore-scripts` and isolated store/cache directories per variant.
## Output
Results are printed to the terminal and saved as:
- `results.md` — consolidated markdown table
- `<scenario>-main.json` / `<scenario>-branch.json` — raw hyperfine data
All files are written to a temp directory printed at the end of the run.
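The JSON files follow hyperfine's `--export-json` schema, so they can be post-processed with standard tools. A minimal sketch, assuming `jq` is installed (the file name below is illustrative):

```sh
# Print the mean wall-clock time (in seconds) recorded for one scenario.
# hyperfine stores one entry per benchmarked command in the "results" array.
jq -r '.results[0].mean' headless-main.json
```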
## Configuration
Edit the variables at the top of `bench.sh`:

- `WARMUP` — number of warmup runs before timing (default: 1)
- `RUNS` — number of timed runs per benchmark (default: 10)