fix: adapt audit client to npmjs /advisories/bulk endpoint (#11268)

The legacy `/-/npm/v1/security/audits{,/quick}` endpoints have been retired by npmjs.org. This PR rewires the audit client to the replacement `/-/npm/v1/security/advisories/bulk` endpoint.

The new endpoint is not a drop-in rename — the request and response contracts are both different:

- **Request**: a flat `{ pkgName: [versions] }` map. `lockfileToAuditRequest` walks the lockfile once and builds the POST body directly; there is no more nested `AuditTree`.
- **Response**: only `id`, `url`, `title`, `severity`, `vulnerable_versions`, and `cwe` per advisory. Everything else the old endpoint returned is computed locally:
  - `findings[].paths` are walked from the lockfile (skipped entirely when the response is empty; the second walk intentionally avoids `@pnpm/lockfile.walker`'s global dedup so alternate install chains to the same shared dep aren't dropped).
  - `metadata.vulnerabilities` counts advisories per severity.
  - `metadata.dependencies` / `devDependencies` / `optionalDependencies` / `totalDependencies` come from a classified lockfile walk; the classifier respects `--prod`/`--dev` include flags when deciding whether a subgraph is reachable non-optionally.
  - `patched_versions` is inferred from the vulnerable range for common `<X.Y.Z` / `<=X.Y.Z` shapes so `audit --fix` can still produce usable overrides; left `undefined` when inference fails.
  - `github_advisory_id` is parsed from the advisory URL and canonicalized to the github.com form (uppercase `GHSA-` prefix, lowercase suffix).
  - `info` severity is now supported end-to-end (severity type, `--audit-level`, filters, colors).
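Put together, the wire contract described above looks roughly like this sketch (TypeScript; `BulkAdvisory` mirrors the field list in this PR, while the sample package names, versions, and advisory values are made up):

```typescript
// POST body for /-/npm/v1/security/advisories/bulk:
// a flat map of package name -> installed versions.
const request: Record<string, string[]> = {
  lodash: ['4.17.19', '4.17.21'],
  minimist: ['1.2.5'],
}

// The only fields the bulk endpoint returns per advisory; everything else
// (findings, metadata counts, patched_versions, github_advisory_id) is
// derived locally from the lockfile and these fields.
interface BulkAdvisory {
  id: number
  url?: string
  title?: string
  severity: 'info' | 'low' | 'moderate' | 'high' | 'critical'
  vulnerable_versions: string
  cwe?: string | string[]
}

// The response is keyed by package name, one advisory array per package.
type BulkAdvisoriesResponse = Record<string, BulkAdvisory[]>

// Illustrative response (id and range are fabricated for the example).
const response: BulkAdvisoriesResponse = {
  lodash: [{ id: 1, severity: 'high', vulnerable_versions: '<4.17.21' }],
}
```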

## Breaking changes (v11)

- Private registries that do not implement `/advisories/bulk` now fail with `AuditEndpointNotExistsError`.
- CVE-based filtering is replaced with GHSA-based filtering, since the bulk endpoint does not return CVE identifiers:
  - `auditConfig.ignoreCves` → `auditConfig.ignoreGhsas` (the old key is no longer recognized).
  - `pnpm audit --ignore <id>` and `--ignore-unfixable` now read and write GHSAs.
  - Migration: replace each `CVE-YYYY-NNNNN` in `auditConfig.ignoreCves` with the matching `GHSA-xxxx-xxxx-xxxx` (visible in the `More info` column of `pnpm audit` output) under `auditConfig.ignoreGhsas`.
- `--ignore-unfixable` now only targets advisories whose patched range couldn't be inferred — the only "no fix available" signal the bulk endpoint provides.
- `AuditReport` and `AuditAdvisory` are trimmed to just the fields the audit client actually populates:
  - `AuditReport`: `advisories` + `metadata` only (`actions` and `muted` removed).
  - `AuditAdvisory`: `findings`, `id`, `title`, `module_name`, `vulnerable_versions`, `patched_versions?`, `severity`, `cwe`, `github_advisory_id`, `url`. Dropped: `cves`, `created`, `updated`, `deleted`, `access`, `overview`, `recommendation`, `references`, `found_by`, `reported_by`, `metadata`.
  - `AuditAction`, `AuditResolution`, `AuditActionRecommendation` removed (no consumers).
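As an illustration of the migration, a `package.json` that previously ignored a CVE might change as follows. Both identifiers below are placeholders, and this assumes audit settings live under the `pnpm` key of `package.json`:

```json
{
  "pnpm": {
    "auditConfig": {
      "ignoreCves": ["CVE-2024-12345"]
    }
  }
}
```

becomes

```json
{
  "pnpm": {
    "auditConfig": {
      "ignoreGhsas": ["GHSA-xxxx-xxxx-xxxx"]
    }
  }
}
```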

## Hardening

- Response body validated: non-object / malformed JSON / non-array package buckets all surface as `ERR_PNPM_AUDIT_BAD_RESPONSE` with a body excerpt. Advisory `id` must be a finite number and `severity` must be a known value before being indexed.
- Name-keyed records use `Object.create(null)` so a hostile/unusual package name can't trigger prototype pollution.
- GHSA ids canonicalized on both read and write so casing drift between config and registry doesn't mask ignores.
- `findings[].paths` are deduped and capped per (name, version) to keep pathologically shared graphs from blowing up memory.
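A minimal sketch of why the null-prototype maps matter (illustrative only, not the PR's code):

```typescript
// A plain object inherits from Object.prototype, so assigning to the key
// "__proto__" goes through the prototype setter instead of creating an own
// property: the entry silently vanishes from Object.keys().
const unsafe: Record<string, string[]> = {}
unsafe['__proto__'] = ['1.0.0']

// A null-prototype object has no such setter; "__proto__" is an ordinary key.
const safe: Record<string, string[]> = Object.create(null)
safe['__proto__'] = ['1.0.0']

console.log(Object.keys(unsafe)) // []
console.log(Object.keys(safe)) // ["__proto__"]
```

So if a lockfile or registry response ever carried a package literally named `__proto__`, the null-prototype record stores it as data instead of mutating the object's prototype chain.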

## Internals

- `AuditTree` / `AuditNode` / `lockfileToAuditTree` removed. `lockfileToAuditIndex.ts` exports `lockfileToAuditRequest` (flat POST body + counts) and `buildAuditPathIndex` (only invoked when the response has advisories).
- `AuditAdvisory.findings` is now `AuditFinding[]` (was an unintended 1-tuple).
- Top-level test fixtures regenerated from real `registry.npmjs.org` responses; synthetic `update-*` fixtures converted in place to bulk shape.

---------

Co-authored-by: John van Leeuwen <john.van.leeuwen@priva.com>
Co-authored-by: Zoltan Kochan <z@kochan.io>

Authored by John van Leeuwen on 2026-04-16 01:07:48 +02:00, committed by GitHub.
Parent b738043ab1, commit ff28085997. 42 changed files with 10408 additions and 6780 deletions.


@@ -0,0 +1,29 @@
---
"@pnpm/deps.compliance.audit": major
"@pnpm/deps.compliance.commands": major
"@pnpm/types": major
"pnpm": major
---
`pnpm audit` now calls npm's `/-/npm/v1/security/advisories/bulk` endpoint. The legacy `/-/npm/v1/security/audits{,/quick}` endpoints have been retired by the registry, so the legacy request/response contract is no longer supported.
The new endpoint returns only `id`, `url`, `title`, `severity`, `vulnerable_versions`, and `cwe` per advisory. Everything else is computed locally:
- `findings[].paths` are computed by walking the lockfile and matching `vulnerable_versions` via semver.
- `metadata.vulnerabilities` counts advisories per severity.
- `metadata.dependencies`, `devDependencies`, `optionalDependencies`, and `totalDependencies` are computed from the lockfile.
- `patched_versions` is inferred from `vulnerable_versions` for the common `<X.Y.Z` / `<=X.Y.Z` patterns so `pnpm audit --fix` still produces usable overrides. When inference fails, it is left undefined and `pnpm audit --ignore-unfixable` treats those advisories as having no known fix.
- `github_advisory_id` is parsed from each advisory's `url`.
- `info` severity advisories are now supported across `--audit-level`, filters, and output.
### Shape changes to `AuditReport` / `AuditAdvisory`
Fields the bulk endpoint doesn't return have been removed from both types (major bump). `AuditReport` now contains only `advisories` and `metadata`. `AuditAdvisory` contains only `findings`, `id`, `title`, `module_name`, `vulnerable_versions`, `patched_versions`, `severity`, `cwe`, `github_advisory_id`, and `url`. Consumers that relied on `actions`, `muted`, `cves`, `created`, `updated`, `deleted`, `access`, `overview`, `recommendation`, `references`, `found_by`, `reported_by`, or `metadata` on advisories need to update.
The bulk endpoint does not return CVE identifiers. CVE-based filtering has been replaced with GitHub advisory ID (GHSA) filtering:
- `auditConfig.ignoreCves` → `auditConfig.ignoreGhsas` (the previous key is no longer recognized)
- `pnpm audit --ignore <id>` / `pnpm audit --ignore-unfixable` now read and write GHSAs instead of CVEs
- GHSAs are derived from each advisory's `url` (`https://github.com/advisories/GHSA-xxxx-xxxx-xxxx`)
To migrate: replace each `CVE-YYYY-NNNNN` entry in your `auditConfig.ignoreCves` with the corresponding `GHSA-xxxx-xxxx-xxxx` value (visible in the `More info` column of `pnpm audit` output) and move it under `auditConfig.ignoreGhsas`.
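The patched-range inference mentioned above can be sketched as follows. This is a simplified illustration handling only the `<X.Y.Z` form, not the PR's exact implementation (which also covers `<=X.Y.Z` via `semver.inc`):

```typescript
// Infer a patched range from a vulnerable range whose last comparator is
// a strict upper bound like `<X.Y.Z`. Returns undefined when no such
// upper bound is found; callers must not confuse that with "no fix".
function inferPatchedFromLt (vulnerableRange: string): string | undefined {
  const match = vulnerableRange.trim().match(/(?:^|\s)<\s*(\d+\.\d+\.\d+)\s*$/)
  return match ? `>=${match[1]}` : undefined
}

// inferPatchedFromLt('>=0.8.1 <0.28.0') returns '>=0.28.0'
// inferPatchedFromLt('*') returns undefined
```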


@@ -255,7 +255,7 @@ export interface Config extends OptionsFromRootManifest {
trustPolicy?: TrustPolicy
trustPolicyExclude?: string[]
trustPolicyIgnoreAfter?: number
auditLevel?: 'low' | 'moderate' | 'high' | 'critical'
auditLevel?: 'info' | 'low' | 'moderate' | 'high' | 'critical'
packageConfigs?: ProjectConfigSet
}


@@ -98,6 +98,7 @@ export interface PackageVulnerability {
}
export type VulnerabilitySeverity =
| 'info'
| 'low'
| 'moderate'
| 'high'


@@ -165,7 +165,6 @@ export type ConfigDependencies = Record<string, VersionWithIntegrity | {
export type ConfigDependencySpecifiers = Record<string, string>
export interface AuditConfig {
ignoreCves?: string[]
ignoreGhsas?: string[]
}


@@ -32,6 +32,7 @@
".test": "cross-env NODE_OPTIONS=\"$NODE_OPTIONS --experimental-vm-modules --disable-warning=ExperimentalWarning --disable-warning=DEP0169\" jest"
},
"dependencies": {
"@pnpm/deps.path": "workspace:*",
"@pnpm/error": "workspace:*",
"@pnpm/fetching.types": "workspace:*",
"@pnpm/lockfile.detect-dep-types": "workspace:*",
@@ -41,8 +42,7 @@
"@pnpm/lockfile.walker": "workspace:*",
"@pnpm/network.fetch": "workspace:*",
"@pnpm/types": "workspace:*",
"@pnpm/workspace.project-manifest-reader": "workspace:*",
"ramda": "catalog:"
"semver": "catalog:"
},
"peerDependencies": {
"@pnpm/logger": "catalog:"
@@ -53,7 +53,7 @@
"@pnpm/logger": "workspace:*",
"@pnpm/test-fixtures": "workspace:*",
"@pnpm/testing.mock-agent": "workspace:*",
"@types/ramda": "catalog:"
"@types/semver": "catalog:"
},
"engines": {
"node": ">=22.13"


@@ -1,14 +1,40 @@
import { PnpmError } from '@pnpm/error'
import type { GetAuthHeader } from '@pnpm/fetching.types'
import { detectDepTypes } from '@pnpm/lockfile.detect-dep-types'
import type { EnvLockfile, LockfileObject } from '@pnpm/lockfile.types'
import { type DispatcherOptions, fetchWithDispatcher, type RetryTimeoutOptions } from '@pnpm/network.fetch'
import type { DependenciesField } from '@pnpm/types'
import semver from 'semver'
import { lockfileToAuditTree } from './lockfileToAuditTree.js'
import type { AuditReport } from './types.js'
import {
type AuditIndexRequest,
type AuditPathIndex,
buildAuditPathIndex,
collectOptionalOnlyDepPaths,
lockfileToAuditRequest,
type PathInfo,
} from './lockfileToAuditIndex.js'
import type { AuditAdvisory, AuditFinding, AuditLevelString, AuditReport, AuditVulnerabilityCounts } from './types.js'
export type { AuditIndexRequest, AuditPathIndex, PathInfo } from './lockfileToAuditIndex.js'
export { buildAuditPathIndex, lockfileToAuditRequest } from './lockfileToAuditIndex.js'
export * from './types.js'
// The shape of a single advisory as returned by npm's /advisories/bulk
// endpoint. The two AuditAdvisory fields not populated directly from this
// are derived from it: github_advisory_id from `url` and patched_versions
// from `vulnerable_versions`. findings are built from the lockfile walk.
interface BulkAdvisory {
id: number
url?: string
title?: string
severity: AuditLevelString
vulnerable_versions: string
cwe?: string | string[]
}
type BulkAdvisoriesResponse = Record<string, BulkAdvisory[]>
export async function audit (
lockfile: LockfileObject,
getAuthHeader: GetAuthHeader,
@@ -16,48 +42,183 @@ export async function audit (
dispatcherOptions?: DispatcherOptions
envLockfile?: EnvLockfile | null
include?: { [dependenciesField in DependenciesField]: boolean }
lockfileDir: string
registry: string
retry?: RetryTimeoutOptions
timeout?: number
virtualStoreDirMaxLength: number
}
): Promise<AuditReport> {
const auditTree = await lockfileToAuditTree(lockfile, { envLockfile: opts.envLockfile, include: opts.include, lockfileDir: opts.lockfileDir })
const depTypes = detectDepTypes(lockfile)
const optionalOnly = collectOptionalOnlyDepPaths(lockfile, opts.include)
const auditRequest = lockfileToAuditRequest(lockfile, { envLockfile: opts.envLockfile, include: opts.include, depTypes, optionalOnly })
const registry = opts.registry.endsWith('/') ? opts.registry : `${opts.registry}/`
const auditUrl = `${registry}-/npm/v1/security/audits`
const quickAuditUrl = `${registry}-/npm/v1/security/audits/quick`
const auditUrl = `${registry}-/npm/v1/security/advisories/bulk`
const authHeaderValue = getAuthHeader(registry)
const requestBody = JSON.stringify(auditTree)
const requestHeaders = {
'Content-Type': 'application/json',
...getAuthHeaders(authHeaderValue),
}
const requestOptions = {
const res = await fetchWithDispatcher(auditUrl, {
dispatcherOptions: opts.dispatcherOptions ?? {},
body: requestBody,
body: JSON.stringify(auditRequest.request),
headers: requestHeaders,
method: 'POST',
retry: opts.retry,
timeout: opts.timeout,
}
})
const quickRes = await fetchWithDispatcher(quickAuditUrl, requestOptions)
if (quickRes.status === 200) {
return (quickRes.json() as Promise<AuditReport>)
}
const res = await fetchWithDispatcher(auditUrl, requestOptions)
if (res.status === 200) {
return (res.json() as Promise<AuditReport>)
const rawBody = await res.text()
let body: unknown
try {
body = JSON.parse(rawBody)
} catch (err: unknown) {
const reason = err instanceof Error ? err.message : String(err)
throw new PnpmError('AUDIT_BAD_RESPONSE', `The audit endpoint (at ${auditUrl}) returned invalid JSON: ${reason}. Response body: ${rawBody.slice(0, 500)}`)
}
if (!isBulkResponseShape(body)) {
throw new PnpmError('AUDIT_BAD_RESPONSE', `The audit endpoint (at ${auditUrl}) returned an unexpected body. Expected an object keyed by package name; got: ${JSON.stringify(body)?.slice(0, 500) ?? String(body)}`)
}
const vulnerableNames = new Set(Object.keys(body))
let auditPathIndex: AuditPathIndex = {}
if (vulnerableNames.size > 0) {
auditPathIndex = buildAuditPathIndex(lockfile, vulnerableNames, { envLockfile: opts.envLockfile, include: opts.include, depTypes, optionalOnly })
}
return bulkResponseToAuditReport(body, auditRequest, auditPathIndex)
}
if (quickRes.status === 404 && res.status === 404) {
throw new AuditEndpointNotExistsError(quickAuditUrl)
if (res.status === 404) {
throw new AuditEndpointNotExistsError(auditUrl)
}
throw new PnpmError('AUDIT_BAD_RESPONSE', `The audit endpoint (at ${quickAuditUrl}) responded with ${quickRes.status}: ${await quickRes.text()}. Fallback endpoint (at ${auditUrl}) responded with ${res.status}: ${await res.text()}`)
throw new PnpmError('AUDIT_BAD_RESPONSE', `The audit endpoint (at ${auditUrl}) responded with ${res.status}: ${await res.text()}`)
}
function bulkResponseToAuditReport (bulk: BulkAdvisoriesResponse, auditRequest: AuditIndexRequest, auditPathIndex: AuditPathIndex): AuditReport {
// Null-prototype map — the id comes from the registry and could be anything.
const advisories: Record<string, AuditAdvisory> = Object.create(null)
const vulnerabilities: AuditVulnerabilityCounts = { info: 0, low: 0, moderate: 0, high: 0, critical: 0 }
for (const [moduleName, packageAdvisories] of Object.entries(bulk)) {
const byVersion = auditPathIndex[moduleName]
for (const adv of packageAdvisories) {
// Guard against registry-supplied values that could corrupt the report:
// only accept finite numeric ids and severities from the known set.
if (typeof adv.id !== 'number' || !Number.isFinite(adv.id)) continue
if (!isKnownSeverity(adv.severity)) continue
const findings = buildFindings(adv, byVersion)
// If no installed version is vulnerable, skip the advisory entirely so
// we don't report false positives for packages the lockfile doesn't use.
if (findings.length === 0) continue
advisories[String(adv.id)] = normalizeAdvisory(adv, moduleName, findings)
// npm's audit report counts one vulnerability per advisory in the metadata summary
// when using the bulk endpoint format pnpm expects.
vulnerabilities[adv.severity] += 1
}
}
return {
advisories,
metadata: {
vulnerabilities,
dependencies: auditRequest.dependencies,
devDependencies: auditRequest.devDependencies,
optionalDependencies: auditRequest.optionalDependencies,
totalDependencies: auditRequest.totalDependencies,
},
}
}
function buildFindings (adv: BulkAdvisory, byVersion: Map<string, PathInfo> | undefined): AuditFinding[] {
if (byVersion == null) return []
const findings: AuditFinding[] = []
for (const [version, info] of byVersion) {
if (satisfiesSafe(version, adv.vulnerable_versions)) {
findings.push({
version,
paths: info.paths,
dev: info.dev,
optional: info.optional,
bundled: false,
})
}
}
return findings
}
const KNOWN_SEVERITIES: ReadonlySet<AuditLevelString> = new Set(['info', 'low', 'moderate', 'high', 'critical'])
function isKnownSeverity (severity: unknown): severity is AuditLevelString {
return typeof severity === 'string' && KNOWN_SEVERITIES.has(severity as AuditLevelString)
}
function isBulkResponseShape (body: unknown): body is BulkAdvisoriesResponse {
if (typeof body !== 'object' || body === null || Array.isArray(body)) return false
// Every value must be an array of advisory objects; a null or scalar value
// would crash `for (const adv of packageAdvisories)` downstream.
return Object.values(body).every((packageAdvisories) =>
Array.isArray(packageAdvisories) && packageAdvisories.every((advisory) =>
typeof advisory === 'object' && advisory !== null && !Array.isArray(advisory) &&
typeof (advisory as { vulnerable_versions?: unknown }).vulnerable_versions === 'string'
)
)
}
function satisfiesSafe (version: string, range: string): boolean {
try {
return semver.satisfies(version, range, { includePrerelease: true, loose: true })
} catch {
return false
}
}
function normalizeAdvisory (adv: BulkAdvisory, moduleName: string, findings: AuditFinding[]): AuditAdvisory {
const cwe = Array.isArray(adv.cwe) ? adv.cwe.join(', ') : adv.cwe
return {
findings,
id: adv.id,
title: adv.title ?? '',
module_name: moduleName,
vulnerable_versions: adv.vulnerable_versions,
patched_versions: inferPatchedVersions(adv.vulnerable_versions),
severity: adv.severity,
cwe: cwe ?? '',
github_advisory_id: deriveGithubAdvisoryId(adv.url),
url: adv.url ?? '',
}
}
function inferPatchedVersions (vulnerableRange: string): string | undefined {
// Matches `<X.Y.Z` or `<= X.Y.Z` (with optional whitespace after the operator)
// at the end of the range, optionally preceded by other comparators like
// `>=0.8.1 <0.28.0`. Returns undefined if the range doesn't have a
// recognizable upper bound — callers must not confuse that with "no fix".
const trimmed = vulnerableRange.trim()
const ltMatch = trimmed.match(/(?:^|\s)<\s*(\d+\.\d+\.\d[\w\-.+]*)\s*$/)
if (ltMatch) return `>=${ltMatch[1]}`
const lteMatch = trimmed.match(/(?:^|\s)<=\s*(\d+\.\d+\.\d[\w\-.+]*)\s*$/)
if (lteMatch) {
const next = semver.inc(lteMatch[1], 'patch')
if (next) return `>=${next}`
}
return undefined
}
function deriveGithubAdvisoryId (url: string | undefined): string {
if (!url) return ''
const match = url.match(/\/(GHSA-[\w-]+)/i)
return match ? normalizeGhsaId(match[1]) : ''
}
// GHSA identifiers are canonically written with an uppercase `GHSA-` prefix
// and a lowercase hexadecimal-style suffix (e.g. `GHSA-cph5-m8f7-6c5x`).
// Normalize both halves so ignore-list comparisons don't depend on how the
// user (or the advisory url) happens to case the id.
export function normalizeGhsaId (ghsaId: string): string {
const trimmed = ghsaId.trim()
const dash = trimmed.indexOf('-')
if (dash < 0) return trimmed.toUpperCase()
return trimmed.slice(0, dash).toUpperCase() + trimmed.slice(dash).toLowerCase()
}
interface AuthHeaders {
@@ -74,7 +235,7 @@ function getAuthHeaders (authHeaderValue: string | undefined): AuthHeaders {
export class AuditEndpointNotExistsError extends PnpmError {
constructor (endpoint: string) {
const message = `The audit endpoint (at ${endpoint}) is doesn't exist.`
const message = `The audit endpoint (at ${endpoint}) doesn't exist.`
super(
'AUDIT_ENDPOINT_NOT_EXISTS',
message,


@@ -0,0 +1,346 @@
import * as dp from '@pnpm/deps.path'
import { DepType, type DepTypes, detectDepTypes } from '@pnpm/lockfile.detect-dep-types'
import { convertToLockfileObject } from '@pnpm/lockfile.fs'
import type { EnvLockfile, LockfileObject, ResolvedDependencies } from '@pnpm/lockfile.types'
import { nameVerFromPkgSnapshot } from '@pnpm/lockfile.utils'
import { lockfileWalkerGroupImporterSteps, type LockfileWalkerStep } from '@pnpm/lockfile.walker'
import type { DependenciesField, DepPath, ProjectId } from '@pnpm/types'
export interface PathInfo {
paths: string[]
dev: boolean
optional: boolean
}
// Versions installed per package name, keyed by version.
export type AuditPathIndex = Record<string, Map<string, PathInfo>>
export interface AuditIndexRequest {
// Flat map suitable as the POST body for `/advisories/bulk`.
request: Record<string, string[]>
totalDependencies: number
// Production dependencies: neither dev-only nor optional-only. Kept as a
// distinct counter because devOnly and optionalOnly aren't mutually
// exclusive — a (name, version) can be both — so `total - dev - optional`
// would double-subtract those entries.
dependencies: number
devDependencies: number
optionalDependencies: number
}
export interface AuditIndexOptions {
envLockfile?: EnvLockfile | null
include?: { [dependenciesField in DependenciesField]: boolean }
// Pre-computed dep types. Callers that also call buildAuditPathIndex on the
// same lockfile can share this to avoid walking the lockfile twice.
depTypes?: DepTypes
// Pre-computed optional-only depPaths for the main lockfile. Shared between
// lockfileToAuditRequest and buildAuditPathIndex when both are called.
optionalOnly?: Set<DepPath>
}
export function lockfileToAuditRequest (
lockfile: LockfileObject,
opts: AuditIndexOptions
): AuditIndexRequest {
const importerIds = Object.keys(lockfile.importers) as ProjectId[]
const importerWalkers = lockfileWalkerGroupImporterSteps(lockfile, importerIds, { include: opts.include })
const depTypes = opts.depTypes ?? detectDepTypes(lockfile)
const optionalOnly = opts.optionalOnly ?? collectOptionalOnlyDepPaths(lockfile, opts.include)
// Use null-prototype objects for records keyed by package names so a
// hostile or unusual package name (e.g. "__proto__") cannot pollute the
// prototype or overwrite inherited properties.
const request: Record<string, string[]> = Object.create(null)
// Per (name, version) classification. Counted as dev/optional only while
// every observed occurrence is dev-only / optional-only; once a non-dev or
// non-optional occurrence is seen, the flag is cleared and the counter
// decremented.
const versionStatesByName: Record<string, Map<string, { devOnly: boolean, optionalOnly: boolean }>> = Object.create(null)
let totalDependencies = 0
let dependencies = 0
let devDependencies = 0
let optionalDependencies = 0
const registerOccurrence = (o: { name: string, version: string, devOnly: boolean, optionalOnly: boolean }): void => {
let versionStates = versionStatesByName[o.name]
if (!versionStates) {
versionStates = new Map()
versionStatesByName[o.name] = versionStates
request[o.name] = []
}
const state = versionStates.get(o.version)
if (!state) {
versionStates.set(o.version, { devOnly: o.devOnly, optionalOnly: o.optionalOnly })
request[o.name].push(o.version)
totalDependencies++
if (o.devOnly) devDependencies++
if (o.optionalOnly) optionalDependencies++
if (!o.devOnly && !o.optionalOnly) dependencies++
return
}
const wasProduction = !state.devOnly && !state.optionalOnly
if (state.devOnly && !o.devOnly) {
state.devOnly = false
devDependencies--
}
if (state.optionalOnly && !o.optionalOnly) {
state.optionalOnly = false
optionalDependencies--
}
if (!wasProduction && !state.devOnly && !state.optionalOnly) {
dependencies++
}
}
// Build a visitor for one lockfile graph. The walker already de-duplicates
// by depPath internally, so we don't need a second visited set here.
const makeVisitor = (graphDepTypes: DepTypes, graphOptionalOnly: Set<DepPath>) => {
const visit = (step: LockfileWalkerStep): void => {
for (const { depPath, pkgSnapshot, next } of step.dependencies) {
const { name, version } = nameVerFromPkgSnapshot(depPath, pkgSnapshot)
if (version) {
registerOccurrence({
name,
version,
devOnly: graphDepTypes[depPath] === DepType.DevOnly,
optionalOnly: graphOptionalOnly.has(depPath),
})
}
visit(next())
}
}
return visit
}
const visitMain = makeVisitor(depTypes, optionalOnly)
for (const importerWalker of importerWalkers) {
visitMain(importerWalker.step)
}
if (opts.envLockfile) {
const envLockfileObject = envLockfileToLockfileObject(opts.envLockfile)
const envDepTypes = detectDepTypes(envLockfileObject)
const envOptionalOnly = collectOptionalOnlyDepPaths(envLockfileObject, opts.include)
const visitEnv = makeVisitor(envDepTypes, envOptionalOnly)
for (const { step } of lockfileWalkerGroupImporterSteps(envLockfileObject, Object.keys(envLockfileObject.importers) as ProjectId[], { include: opts.include })) {
visitEnv(step)
}
}
return { request, totalDependencies, dependencies, devDependencies, optionalDependencies }
}
export function buildAuditPathIndex (
lockfile: LockfileObject,
vulnerableNames: Set<string>,
opts: AuditIndexOptions
): AuditPathIndex {
// Null-prototype record keyed by package name to avoid prototype pollution
// from registry-supplied or lockfile-supplied names.
const paths: AuditPathIndex = Object.create(null)
const depTypes = opts.depTypes ?? detectDepTypes(lockfile)
const optionalOnly = opts.optionalOnly ?? collectOptionalOnlyDepPaths(lockfile, opts.include)
walkForPaths({
lockfile,
vulnerableNames,
paths,
depTypes,
optionalOnly,
include: opts.include,
importerSegmentOf: (importerId) => importerId.replace(/\//g, '__'),
})
if (opts.envLockfile) {
const envLockfileObject = envLockfileToLockfileObject(opts.envLockfile)
walkForPaths({
lockfile: envLockfileObject,
vulnerableNames,
paths,
depTypes: detectDepTypes(envLockfileObject),
optionalOnly: collectOptionalOnlyDepPaths(envLockfileObject, opts.include),
include: opts.include,
importerSegmentOf: (importerId) => importerId,
})
}
return paths
}
// Traverse the lockfile graph without the global depPath de-duplication that
// `@pnpm/lockfile.walker` applies. `findings[].paths` is supposed to list every
// distinct install path to a vulnerable package, so a shared transitive
// dependency (e.g. lodash reached via many parents) must contribute one path
// per parent chain, not just the first one the walker encounters. A per-trail
// visited set prevents cycles without suppressing distinct paths.
interface WalkForPathsCtx {
lockfile: LockfileObject
vulnerableNames: Set<string>
paths: AuditPathIndex
depTypes: DepTypes
optionalOnly: Set<DepPath>
include?: AuditIndexOptions['include']
importerSegmentOf: (importerId: string) => string
}
function walkForPaths (ctx: WalkForPathsCtx): void {
const { lockfile, vulnerableNames, paths, depTypes, optionalOnly, include, importerSegmentOf } = ctx
const includeDeps = include?.dependencies !== false
const includeDevDeps = include?.devDependencies !== false
const includeOptDeps = include?.optionalDependencies !== false
const packages = lockfile.packages ?? {}
// Reused across every root to avoid per-node Set cloning. visit adds the
// current depPath before recursing and removes it on the way back, so the
// set always reflects the current trail.
const inTrail = new Set<DepPath>()
const visit = (edge: { name: string, depPath: DepPath }, trail: string[]): void => {
if (inTrail.has(edge.depPath)) return
const pkgSnapshot = packages[edge.depPath]
if (pkgSnapshot == null) return
const { name, version } = nameVerFromPkgSnapshot(edge.depPath, pkgSnapshot)
const resolvedName = name ?? edge.name
const fullPath = [...trail, resolvedName]
if (version && vulnerableNames.has(resolvedName)) {
recordPath(paths, resolvedName, version, fullPath.join('>'),
depTypes[edge.depPath] === DepType.DevOnly,
optionalOnly.has(edge.depPath))
}
inTrail.add(edge.depPath)
try {
for (const child of resolvedDepsToNamedDepPaths(pkgSnapshot.dependencies ?? {})) {
visit(child, fullPath)
}
if (includeOptDeps) {
for (const child of resolvedDepsToNamedDepPaths(pkgSnapshot.optionalDependencies ?? {})) {
visit(child, fullPath)
}
}
} finally {
inTrail.delete(edge.depPath)
}
}
for (const [importerId, importer] of Object.entries(lockfile.importers)) {
const trail = [importerSegmentOf(importerId)]
const roots: Array<{ name: string, depPath: DepPath }> = []
if (includeDeps) roots.push(...resolvedDepsToNamedDepPaths(importer.dependencies ?? {}))
if (includeDevDeps) roots.push(...resolvedDepsToNamedDepPaths(importer.devDependencies ?? {}))
if (includeOptDeps) roots.push(...resolvedDepsToNamedDepPaths(importer.optionalDependencies ?? {}))
for (const root of roots) {
visit(root, trail)
}
}
}
// Per-(name, version) cap on recorded paths. The CLI only ever displays the
// first few and follows with a "run pnpm why" hint, so keeping tens of
// thousands of equivalent chains is wasted memory/CPU for projects with
// heavy sharing (e.g. diamond dependencies deep in the graph).
const MAX_PATHS_PER_FINDING = 100
function recordPath (paths: AuditPathIndex, name: string, version: string, joined: string, isDev: boolean, isOptional: boolean): void {
let byVersion = paths[name]
if (!byVersion) {
byVersion = new Map()
paths[name] = byVersion
}
const info = byVersion.get(version)
if (!info) {
byVersion.set(version, { paths: [joined], dev: isDev, optional: isOptional })
return
}
if (!isDev) info.dev = false
if (!isOptional) info.optional = false
if (info.paths.length >= MAX_PATHS_PER_FINDING) return
// Dedupe — the same joined trail can be produced when a package appears in
// both `dependencies` and `optionalDependencies` of the same parent, or via
// equivalent peer-suffix variants.
if (info.paths.includes(joined)) return
info.paths.push(joined)
}
function resolvedDepsToNamedDepPaths (deps: ResolvedDependencies): Array<{ name: string, depPath: DepPath }> {
const result: Array<{ name: string, depPath: DepPath }> = []
for (const [alias, ref] of Object.entries(deps)) {
const depPath = dp.refToRelative(ref, alias)
if (depPath != null) result.push({ name: alias, depPath })
}
return result
}
// Returns the set of depPaths that are reachable only through optional edges
// (i.e. they would be absent from the install set if optionalDependencies were
// not included). Matches the AuditMetadata.optionalDependencies semantic.
//
// Implemented as (reachableWithOptional \ reachableWithoutOptional) so that
// optionalDependencies nested inside a required chain are also accounted for,
// not just the ones declared directly on importer.optionalDependencies.
//
// Root selection honours the caller's `include` flags, so running
// `pnpm audit --prod` doesn't let dev-only subgraphs flip a package out of
// "optional-only" classification.
export function collectOptionalOnlyDepPaths (
lockfile: LockfileObject,
include?: AuditIndexOptions['include']
): Set<DepPath> {
const includeDeps = include?.dependencies !== false
const includeDevDeps = include?.devDependencies !== false
const includeOptDeps = include?.optionalDependencies !== false
const withoutOptional = new Set<DepPath>()
const withOptional = new Set<DepPath>()
for (const importer of Object.values(lockfile.importers)) {
const nonOptionalRoots = [
...(includeDeps ? resolvedDepsToDepPaths(importer.dependencies ?? {}) : []),
...(includeDevDeps ? resolvedDepsToDepPaths(importer.devDependencies ?? {}) : []),
]
const allRoots = [
...nonOptionalRoots,
...(includeOptDeps ? resolvedDepsToDepPaths(importer.optionalDependencies ?? {}) : []),
]
walkReachable(lockfile, nonOptionalRoots, withoutOptional, false)
walkReachable(lockfile, allRoots, withOptional, includeOptDeps)
}
const result = new Set<DepPath>()
for (const depPath of withOptional) {
if (!withoutOptional.has(depPath)) result.add(depPath)
}
return result
}
function walkReachable (lockfile: LockfileObject, depPaths: DepPath[], seen: Set<DepPath>, includeOptionalEdges: boolean): void {
const packages = lockfile.packages ?? {}
for (const depPath of depPaths) {
if (seen.has(depPath)) continue
seen.add(depPath)
const snapshot = packages[depPath]
if (!snapshot) continue
walkReachable(lockfile, resolvedDepsToDepPaths(snapshot.dependencies ?? {}), seen, includeOptionalEdges)
if (includeOptionalEdges) {
walkReachable(lockfile, resolvedDepsToDepPaths(snapshot.optionalDependencies ?? {}), seen, includeOptionalEdges)
}
}
}
function resolvedDepsToDepPaths (deps: ResolvedDependencies): DepPath[] {
return Object.entries(deps)
.map(([alias, ref]) => dp.refToRelative(ref, alias))
.filter((depPath): depPath is DepPath => depPath !== null)
}
function envLockfileToLockfileObject (envLockfile: EnvLockfile): LockfileObject {
const envImporter = envLockfile.importers['.']
const importers: Record<string, { dependencies?: Record<string, { specifier: string, version: string }> }> = {}
if (Object.keys(envImporter.configDependencies).length > 0) {
importers['configDependencies'] = { dependencies: envImporter.configDependencies }
}
if (envImporter.packageManagerDependencies) {
importers['packageManagerDependencies'] = { dependencies: envImporter.packageManagerDependencies }
}
return convertToLockfileObject({
lockfileVersion: envLockfile.lockfileVersion,
importers,
packages: envLockfile.packages,
snapshots: envLockfile.snapshots,
})
}

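The optional-only classification above boils down to a set difference between two reachability walks: one that never follows optional edges, one that does. A self-contained sketch of that idea (the `Graph` shape here is a hypothetical simplification, not the real lockfile types):

```typescript
// Hypothetical minimal graph: edges split into required and optional.
type Graph = Record<string, { deps?: string[], optDeps?: string[] }>

function walk (graph: Graph, roots: string[], seen: Set<string>, followOptional: boolean): void {
  for (const id of roots) {
    if (seen.has(id)) continue
    seen.add(id)
    const node = graph[id]
    if (!node) continue
    walk(graph, node.deps ?? [], seen, followOptional)
    if (followOptional) walk(graph, node.optDeps ?? [], seen, followOptional)
  }
}

// Optional-only = reachable when optional edges/roots are in scope,
// minus what is already reachable without them.
function optionalOnly (graph: Graph, requiredRoots: string[], optionalRoots: string[]): Set<string> {
  const withoutOptional = new Set<string>()
  const withOptional = new Set<string>()
  walk(graph, requiredRoots, withoutOptional, false)
  walk(graph, [...requiredRoots, ...optionalRoots], withOptional, true)
  const result = new Set<string>()
  for (const id of withOptional) {
    if (!withoutOptional.has(id)) result.add(id)
  }
  return result
}
```

Note that a package reachable through both a required and an optional chain lands in `withoutOptional` and is therefore not flagged, which is exactly what the `shared-pkg` test below asserts.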
View File

@@ -1,125 +0,0 @@
import path from 'node:path'
import { DepType, type DepTypes, detectDepTypes } from '@pnpm/lockfile.detect-dep-types'
import { convertToLockfileObject } from '@pnpm/lockfile.fs'
import type { EnvLockfile, LockfileObject, TarballResolution } from '@pnpm/lockfile.types'
import { nameVerFromPkgSnapshot } from '@pnpm/lockfile.utils'
import { lockfileWalkerGroupImporterSteps, type LockfileWalkerStep } from '@pnpm/lockfile.walker'
import type { DependenciesField, ProjectId } from '@pnpm/types'
import { safeReadProjectManifestOnly } from '@pnpm/workspace.project-manifest-reader'
import { map as mapValues } from 'ramda'
export interface AuditNode {
version?: string
integrity?: string
requires?: Record<string, string>
dependencies?: { [name: string]: AuditNode }
dev: boolean
}
export interface AuditTree extends AuditNode {
name?: string
install: string[]
remove: string[]
metadata: unknown
}
export async function lockfileToAuditTree (
lockfile: LockfileObject,
opts: {
envLockfile?: EnvLockfile | null
include?: { [dependenciesField in DependenciesField]: boolean }
lockfileDir: string
}
): Promise<AuditTree> {
const importerWalkers = lockfileWalkerGroupImporterSteps(lockfile, Object.keys(lockfile.importers) as ProjectId[], { include: opts?.include })
const dependencies: Record<string, AuditNode> = {}
const depTypes = detectDepTypes(lockfile)
await Promise.all(
importerWalkers.map(async (importerWalker) => {
const importerDeps = lockfileToAuditNode(depTypes, importerWalker.step)
// For some reason the registry responds with 500 if the keys in dependencies have slashes
// see issue: https://github.com/pnpm/pnpm/issues/2848
const depName = importerWalker.importerId.replace(/\//g, '__')
const manifest = await safeReadProjectManifestOnly(path.join(opts.lockfileDir, importerWalker.importerId))
dependencies[depName] = {
dependencies: importerDeps,
dev: false,
requires: toRequires(importerDeps),
version: manifest?.version ?? '0.0.0',
}
})
)
if (opts.envLockfile) {
const envLockfileObject = envLockfileToLockfileObject(opts.envLockfile)
const envDepTypes = detectDepTypes(envLockfileObject)
for (const { importerId, step } of lockfileWalkerGroupImporterSteps(envLockfileObject, Object.keys(envLockfileObject.importers) as ProjectId[], { include: opts.include })) {
const deps = lockfileToAuditNode(envDepTypes, step)
if (Object.keys(deps).length > 0) {
dependencies[importerId] = wrapDepsGroup(deps)
}
}
}
const auditTree: AuditTree = {
name: undefined,
version: undefined,
dependencies,
dev: false,
install: [],
integrity: undefined,
metadata: {},
remove: [],
requires: toRequires(dependencies),
}
return auditTree
}
function lockfileToAuditNode (depTypes: DepTypes, step: LockfileWalkerStep): Record<string, AuditNode> {
const dependencies: Record<string, AuditNode> = {}
for (const { depPath, pkgSnapshot, next } of step.dependencies) {
const { name, version } = nameVerFromPkgSnapshot(depPath, pkgSnapshot)
const subdeps = lockfileToAuditNode(depTypes, next())
const dep: AuditNode = {
dev: depTypes[depPath] === DepType.DevOnly,
integrity: (pkgSnapshot.resolution as TarballResolution).integrity,
version,
}
if (Object.keys(subdeps).length > 0) {
dep.dependencies = subdeps
dep.requires = toRequires(subdeps)
}
dependencies[name] = dep
}
return dependencies
}
function toRequires (auditNodesByDepName: Record<string, AuditNode>): Record<string, string> {
return mapValues((auditNode) => auditNode.version!, auditNodesByDepName)
}
function wrapDepsGroup (deps: Record<string, AuditNode>): AuditNode {
return {
dependencies: deps,
dev: false,
requires: toRequires(deps),
version: '0.0.0',
}
}
function envLockfileToLockfileObject (envLockfile: EnvLockfile): LockfileObject {
const envImporter = envLockfile.importers['.']
const importers: Record<string, { dependencies?: Record<string, { specifier: string, version: string }> }> = {}
if (Object.keys(envImporter.configDependencies).length > 0) {
importers['configDependencies'] = { dependencies: envImporter.configDependencies }
}
if (envImporter.packageManagerDependencies) {
importers['packageManagerDependencies'] = { dependencies: envImporter.packageManagerDependencies }
}
return convertToLockfileObject({
lockfileVersion: envLockfile.lockfileVersion,
importers,
packages: envLockfile.packages,
snapshots: envLockfile.snapshots,
})
}

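The PR description says `patched_versions` is inferred from the vulnerable range for the common `<X.Y.Z` / `<=X.Y.Z` shapes, and left `undefined` when inference fails. A rough sketch of that inference (the helper name and exact regex are assumptions, not the shipped code):

```typescript
// Hypothetical re-implementation; the real helper lives in the audit package.
function inferPatchedVersions (vulnerableVersions: string): string | undefined {
  const match = /^<(=?)\s*(\d+\.\d+\.\d+)$/.exec(vulnerableVersions.trim())
  if (match == null) return undefined // compound or wildcard range: no inference
  // "<X.Y.Z" → ">=X.Y.Z"; "<=X.Y.Z" → ">X.Y.Z"
  return match[1] === '=' ? `>${match[2]}` : `>=${match[2]}`
}
```

Returning `undefined` for anything it cannot prove keeps `audit --fix` from emitting a wrong override, at the cost of skipping compound ranges like `>=1.0.0 <2.0.0`.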
View File

@@ -7,69 +7,37 @@ export interface AuditVulnerabilityCounts {
}
export interface IgnoredAuditVulnerabilityCounts {
info: number
low: number
moderate: number
high: number
critical: number
}
export interface AuditResolution {
id: number
path: string
export type AuditLevelString = 'info' | 'low' | 'moderate' | 'high' | 'critical'
export type AuditLevelNumber = 0 | 1 | 2 | 3 | 4
export interface AuditFinding {
version: string
paths: string[]
dev: boolean
optional: boolean
bundled: boolean
}
export interface AuditAction {
action: string
module: string
target: string
isMajor: boolean
resolves: AuditResolution[]
}
export type AuditLevelString = 'low' | 'moderate' | 'high' | 'critical'
export type AuditLevelNumber = 0 | 1 | 2 | 3
export interface AuditAdvisory {
findings: [
{
version: string
paths: string[]
dev: boolean
optional: boolean
bundled: boolean
}
]
findings: AuditFinding[]
id: number
created: string
updated: string
deleted?: boolean
title: string
found_by: {
name: string
}
reported_by: {
name: string
}
module_name: string
cves: string[]
vulnerable_versions: string
patched_versions: string
overview: string
recommendation: string
references: string
access: string
// Inferred from vulnerable_versions. Undefined when inference fails —
// `audit --fix` and `--ignore-unfixable` treat that as "no fix available".
patched_versions?: string
severity: AuditLevelString
cwe: string
github_advisory_id: string
metadata: {
module_type: string
exploitability: number
affected_components: string
}
url: string
}
@@ -82,14 +50,6 @@ export interface AuditMetadata {
}
export interface AuditReport {
actions: AuditAction[]
advisories: { [id: string]: AuditAdvisory }
muted: unknown[]
metadata: AuditMetadata
}
export interface AuditActionRecommendation {
cmd: string
isBreaking: boolean
action: AuditAction
}

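Since the bulk endpoint omits `github_advisory_id`, it is recovered from the advisory URL and canonicalized to the github.com form (uppercase `GHSA-` prefix, lowercase suffix). A hedged sketch of that parsing; the exported `normalizeGhsaId` may differ in signature and edge-case handling:

```typescript
// Hypothetical parser: accept any case in the URL, emit the canonical form.
function ghsaIdFromAdvisoryUrl (url: string): string | undefined {
  const match = /\/(ghsa-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4})\/?$/i.exec(url)
  if (match == null) return undefined
  const id = match[1].toLowerCase()
  // Canonical github.com form: uppercase "GHSA-" prefix, lowercase suffix.
  return `GHSA-${id.slice('ghsa-'.length)}`
}
```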
View File

@@ -1,4 +0,0 @@
{
"name": "pkg",
"version": "1.0.0"
}

View File

@@ -1,3 +0,0 @@
{
"name": "pkg"
}

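The tests that follow exercise `lockfileToAuditRequest`, which flattens the lockfile into the bulk endpoint's `{ pkgName: [versions] }` body. The core flattening step can be sketched as follows (the input shape here is a simplification of the lockfile walk, not the real API):

```typescript
// Hypothetical input: one entry per resolved package occurrence.
function toBulkRequest (pkgs: Array<{ name: string, version: string }>): Record<string, string[]> {
  const request: Record<string, string[]> = {}
  for (const { name, version } of pkgs) {
    // Group versions under the package name, deduplicating repeats.
    const versions = (request[name] ??= [])
    if (!versions.includes(version)) versions.push(version)
  }
  return request
}
```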
View File

@@ -1,168 +1,180 @@
import { LOCKFILE_VERSION } from '@pnpm/constants'
import { audit } from '@pnpm/deps.compliance.audit'
import { audit, buildAuditPathIndex, lockfileToAuditRequest } from '@pnpm/deps.compliance.audit'
import type { PnpmError } from '@pnpm/error'
import { fixtures } from '@pnpm/test-fixtures'
import { getMockAgent, setupMockAgent, teardownMockAgent } from '@pnpm/testing.mock-agent'
import type { DepPath, ProjectId } from '@pnpm/types'
import { lockfileToAuditTree } from '../lib/lockfileToAuditTree.js'
const f = fixtures(import.meta.dirname)
describe('audit', () => {
test('lockfileToAuditTree()', async () => {
expect(await lockfileToAuditTree({
test('lockfileToAuditRequest() flattens dependencies', () => {
const result = lockfileToAuditRequest({
importers: {
['.' as ProjectId]: {
dependencies: {
foo: '1.0.0',
},
specifiers: {
foo: '^1.0.0',
},
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['bar@1.0.0' as DepPath]: {
resolution: {
integrity: 'bar-integrity',
},
},
['bar@1.0.0' as DepPath]: { resolution: { integrity: 'bar-integrity' } },
['foo@1.0.0' as DepPath]: {
dependencies: {
bar: '1.0.0',
},
resolution: {
integrity: 'foo-integrity',
},
dependencies: { bar: '1.0.0' },
resolution: { integrity: 'foo-integrity' },
},
},
}, { lockfileDir: f.find('one-project') })).toEqual({
name: undefined,
version: undefined,
}, {})
dependencies: {
'.': {
dependencies: {
foo: {
dependencies: {
bar: {
dev: false,
integrity: 'bar-integrity',
version: '1.0.0',
},
},
dev: false,
integrity: 'foo-integrity',
requires: {
bar: '1.0.0',
},
version: '1.0.0',
},
},
dev: false,
requires: {
foo: '1.0.0',
},
version: '1.0.0',
expect(result.request).toEqual({ foo: ['1.0.0'], bar: ['1.0.0'] })
expect(result.totalDependencies).toBe(2)
expect(result.devDependencies).toBe(0)
})
test('buildAuditPathIndex() records install paths for vulnerable packages', () => {
const lockfile = {
importers: {
['.' as ProjectId]: {
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['bar@1.0.0' as DepPath]: { resolution: { integrity: 'bar-integrity' } },
['foo@1.0.0' as DepPath]: {
dependencies: { bar: '1.0.0' },
resolution: { integrity: 'foo-integrity' },
},
},
}
const result = buildAuditPathIndex(lockfile, new Set(['bar']), {})
expect(result['bar']!.get('1.0.0')).toEqual({ paths: ['.>foo>bar'], dev: false, optional: false })
expect(result['foo']).toBeUndefined()
})
test('buildAuditPathIndex() records every distinct install path for shared deps', () => {
// lodash is reachable via two different parent chains. The lockfile walker
// globally dedupes by depPath, so using it directly would record only the
// first-seen chain. buildAuditPathIndex must produce one path per chain.
const lockfile = {
importers: {
['.' as ProjectId]: {
dependencies: { a: '1.0.0', b: '1.0.0' },
specifiers: { a: '^1.0.0', b: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['a@1.0.0' as DepPath]: {
dependencies: { lodash: '4.0.0' },
resolution: { integrity: 'a-integrity' },
},
['b@1.0.0' as DepPath]: {
dependencies: { lodash: '4.0.0' },
resolution: { integrity: 'b-integrity' },
},
['lodash@4.0.0' as DepPath]: { resolution: { integrity: 'lodash-integrity' } },
},
}
const result = buildAuditPathIndex(lockfile, new Set(['lodash']), {})
const info = result['lodash']!.get('4.0.0')!
expect(info.paths).toHaveLength(2)
expect(info.paths).toEqual(expect.arrayContaining(['.>a>lodash', '.>b>lodash']))
})
test('buildAuditPathIndex() classifies as optional when the only non-optional path runs through an excluded devDependency', () => {
// shared-pkg is reachable two ways: via a devDependency chain (excluded
// when include.devDependencies === false) and via an optionalDependency
// root. With dev excluded, the only remaining path runs through the
// optional edge, so the finding should be flagged as optional.
const lockfile = {
importers: {
['.' as ProjectId]: {
devDependencies: { 'dev-root': '1.0.0' },
optionalDependencies: { 'opt-root': '1.0.0' },
specifiers: { 'dev-root': '^1.0.0', 'opt-root': '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['dev-root@1.0.0' as DepPath]: {
dependencies: { 'shared-pkg': '1.0.0' },
resolution: { integrity: 'dev-root-integrity' },
},
['opt-root@1.0.0' as DepPath]: {
dependencies: { 'shared-pkg': '1.0.0' },
resolution: { integrity: 'opt-root-integrity' },
},
['shared-pkg@1.0.0' as DepPath]: { resolution: { integrity: 'shared-pkg-integrity' } },
},
}
const withDev = buildAuditPathIndex(lockfile, new Set(['shared-pkg']), {
include: { dependencies: true, devDependencies: true, optionalDependencies: true },
})
// When the dev chain is in scope the dep is reachable via a non-optional
// path too, so it is NOT optional-only.
expect(withDev['shared-pkg']!.get('1.0.0')!.optional).toBe(false)
const prodOnly = buildAuditPathIndex(lockfile, new Set(['shared-pkg']), {
include: { dependencies: true, devDependencies: false, optionalDependencies: true },
})
// With devDependencies excluded the only remaining way to reach shared-pkg
// is through opt-root, so the dep becomes optional-only.
expect(prodOnly['shared-pkg']!.get('1.0.0')!.optional).toBe(true)
})
test('buildAuditPathIndex() flags findings reached only through optional edges', () => {
const lockfile = {
importers: {
['.' as ProjectId]: {
optionalDependencies: { native: '1.0.0' },
specifiers: { native: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['native@1.0.0' as DepPath]: { resolution: { integrity: 'native-integrity' } },
},
}
const result = buildAuditPathIndex(lockfile, new Set(['native']), {})
expect(result['native']!.get('1.0.0')).toEqual({
paths: ['.>native'],
dev: false,
install: [],
integrity: undefined,
metadata: {},
remove: [],
requires: { '.': '1.0.0' },
optional: true,
})
})
test('lockfileToAuditTree() without specified version should use default version 0.0.0', async () => {
expect(await lockfileToAuditTree({
test('buildAuditPathIndex() replaces slashes in workspace importer ids', () => {
const lockfile = {
importers: {
['.' as ProjectId]: {
dependencies: {
foo: '1.0.0',
},
specifiers: {
foo: '^1.0.0',
},
['packages/foo' as ProjectId]: {
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['bar@1.0.0' as DepPath]: {
resolution: {
integrity: 'bar-integrity',
},
},
['foo@1.0.0' as DepPath]: {
dependencies: {
bar: '1.0.0',
},
resolution: {
integrity: 'foo-integrity',
},
},
['foo@1.0.0' as DepPath]: { resolution: { integrity: 'foo-integrity' } },
},
}, { lockfileDir: f.find('project-without-version') })).toEqual({
name: undefined,
version: undefined,
}
const result = buildAuditPathIndex(lockfile, new Set(['foo']), {})
dependencies: {
'.': {
dependencies: {
foo: {
dependencies: {
bar: {
dev: false,
integrity: 'bar-integrity',
version: '1.0.0',
},
},
dev: false,
integrity: 'foo-integrity',
requires: {
bar: '1.0.0',
},
version: '1.0.0',
},
},
dev: false,
requires: {
foo: '1.0.0',
},
version: '0.0.0',
},
},
dev: false,
install: [],
integrity: undefined,
metadata: {},
remove: [],
requires: { '.': '0.0.0' },
})
expect(result['foo']!.get('1.0.0')!.paths).toEqual(['packages__foo>foo'])
})
test('lockfileToAuditTree() includes env lockfile configDependencies and packageManagerDependencies as separate groups', async () => {
const result = await lockfileToAuditTree({
test('lockfileToAuditRequest() includes env lockfile configDependencies and packageManagerDependencies', () => {
const result = lockfileToAuditRequest({
importers: {
['.' as ProjectId]: {
dependencies: {
foo: '1.0.0',
},
specifiers: {
foo: '^1.0.0',
},
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['foo@1.0.0' as DepPath]: {
resolution: {
integrity: 'foo-integrity',
},
},
['foo@1.0.0' as DepPath]: { resolution: { integrity: 'foo-integrity' } },
},
}, {
envLockfile: {
@@ -170,186 +182,53 @@ describe('audit', () => {
importers: {
'.': {
configDependencies: {
'my-config': {
specifier: '2.0.0',
version: '2.0.0',
},
'my-config': { specifier: '2.0.0', version: '2.0.0' },
},
packageManagerDependencies: {
pnpm: {
specifier: '9.0.0',
version: '9.0.0',
},
pnpm: { specifier: '9.0.0', version: '9.0.0' },
},
},
},
packages: {
'my-config@2.0.0': {
resolution: { integrity: 'my-config-integrity' },
},
'config-util@1.0.0': {
resolution: { integrity: 'config-util-integrity' },
},
'pnpm@9.0.0': {
resolution: { integrity: 'pnpm-integrity' },
},
'my-config@2.0.0': { resolution: { integrity: 'my-config-integrity' } },
'config-util@1.0.0': { resolution: { integrity: 'config-util-integrity' } },
'pnpm@9.0.0': { resolution: { integrity: 'pnpm-integrity' } },
},
snapshots: {
'my-config@2.0.0': {
dependencies: {
'config-util': '1.0.0',
},
},
'my-config@2.0.0': { dependencies: { 'config-util': '1.0.0' } },
'config-util@1.0.0': {},
'pnpm@9.0.0': {},
},
},
lockfileDir: f.find('one-project'),
})
expect(result.dependencies).toHaveProperty('configDependencies')
expect(result.dependencies).toHaveProperty('packageManagerDependencies')
expect(result.dependencies!['configDependencies']).toEqual({
dev: false,
version: '0.0.0',
dependencies: {
'my-config': {
dev: false,
integrity: 'my-config-integrity',
version: '2.0.0',
dependencies: {
'config-util': {
dev: false,
integrity: 'config-util-integrity',
version: '1.0.0',
},
},
requires: {
'config-util': '1.0.0',
},
},
},
requires: {
'my-config': '2.0.0',
},
})
expect(result.dependencies!['packageManagerDependencies']).toEqual({
dev: false,
version: '0.0.0',
dependencies: {
pnpm: {
dev: false,
integrity: 'pnpm-integrity',
version: '9.0.0',
},
},
requires: {
pnpm: '9.0.0',
},
})
expect(result.request['foo']).toEqual(['1.0.0'])
expect(result.request['my-config']).toEqual(['2.0.0'])
expect(result.request['config-util']).toEqual(['1.0.0'])
expect(result.request['pnpm']).toEqual(['9.0.0'])
})
test('lockfileToAuditTree() with env lockfile with only configDependencies omits packageManagerDependencies group', async () => {
const result = await lockfileToAuditTree({
test('lockfileToAuditRequest() accepts a null envLockfile', () => {
const result = lockfileToAuditRequest({
importers: {
['.' as ProjectId]: {
specifiers: {},
},
},
lockfileVersion: LOCKFILE_VERSION,
}, {
envLockfile: {
lockfileVersion: LOCKFILE_VERSION,
importers: {
'.': {
configDependencies: {
'my-hook': {
specifier: '1.0.0',
version: '1.0.0',
},
},
},
},
packages: {
'my-hook@1.0.0': {
resolution: { integrity: 'my-hook-integrity' },
},
},
snapshots: {
'my-hook@1.0.0': {},
},
},
lockfileDir: f.find('one-project'),
})
expect(result.dependencies).toHaveProperty('configDependencies')
expect(result.dependencies).not.toHaveProperty('packageManagerDependencies')
})
test('lockfileToAuditTree() with env lockfile with empty configDependencies and no packageManagerDependencies adds no groups', async () => {
const result = await lockfileToAuditTree({
importers: {
['.' as ProjectId]: {
specifiers: {},
},
},
lockfileVersion: LOCKFILE_VERSION,
}, {
envLockfile: {
lockfileVersion: LOCKFILE_VERSION,
importers: {
'.': {
configDependencies: {},
},
},
packages: {},
snapshots: {},
},
lockfileDir: f.find('one-project'),
})
expect(result.dependencies).not.toHaveProperty('configDependencies')
expect(result.dependencies).not.toHaveProperty('packageManagerDependencies')
})
test('lockfileToAuditTree() with null envLockfile adds no groups', async () => {
const result = await lockfileToAuditTree({
importers: {
['.' as ProjectId]: {
dependencies: {
foo: '1.0.0',
},
specifiers: {
foo: '^1.0.0',
},
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['foo@1.0.0' as DepPath]: {
resolution: {
integrity: 'foo-integrity',
},
},
['foo@1.0.0' as DepPath]: { resolution: { integrity: 'foo-integrity' } },
},
}, {
envLockfile: null,
lockfileDir: f.find('one-project'),
})
}, { envLockfile: null })
expect(result.dependencies).not.toHaveProperty('configDependencies')
expect(result.dependencies).not.toHaveProperty('packageManagerDependencies')
expect(result.dependencies!['.']).toBeDefined()
expect(result.request).toEqual({ foo: ['1.0.0'] })
})
test('lockfileToAuditTree() env lockfile includes optionalDependencies from snapshots', async () => {
const result = await lockfileToAuditTree({
test('lockfileToAuditRequest() includes optionalDependencies from env snapshots', () => {
const result = lockfileToAuditRequest({
importers: {
['.' as ProjectId]: {
specifiers: {},
},
['.' as ProjectId]: { specifiers: {} },
},
lockfileVersion: LOCKFILE_VERSION,
}, {
@@ -358,66 +237,34 @@ describe('audit', () => {
importers: {
'.': {
configDependencies: {
'my-tool': {
specifier: '1.0.0',
version: '1.0.0',
},
'my-tool': { specifier: '1.0.0', version: '1.0.0' },
},
},
},
packages: {
'my-tool@1.0.0': {
resolution: { integrity: 'my-tool-integrity' },
},
'required-dep@1.0.0': {
resolution: { integrity: 'required-dep-integrity' },
},
'optional-dep@2.0.0': {
resolution: { integrity: 'optional-dep-integrity' },
},
'my-tool@1.0.0': { resolution: { integrity: 'my-tool-integrity' } },
'required-dep@1.0.0': { resolution: { integrity: 'required-dep-integrity' } },
'optional-dep@2.0.0': { resolution: { integrity: 'optional-dep-integrity' } },
},
snapshots: {
'my-tool@1.0.0': {
dependencies: {
'required-dep': '1.0.0',
},
optionalDependencies: {
'optional-dep': '2.0.0',
},
dependencies: { 'required-dep': '1.0.0' },
optionalDependencies: { 'optional-dep': '2.0.0' },
},
'required-dep@1.0.0': {},
'optional-dep@2.0.0': {},
},
},
lockfileDir: f.find('one-project'),
})
const myTool = result.dependencies!['configDependencies']?.dependencies!['my-tool']
expect(myTool).toBeDefined()
expect(myTool.dependencies).toHaveProperty('required-dep')
expect(myTool.dependencies).toHaveProperty('optional-dep')
expect(myTool.dependencies!['required-dep']).toEqual({
dev: false,
integrity: 'required-dep-integrity',
version: '1.0.0',
})
expect(myTool.dependencies!['optional-dep']).toEqual({
dev: false,
integrity: 'optional-dep-integrity',
version: '2.0.0',
})
expect(myTool.requires).toEqual({
'required-dep': '1.0.0',
'optional-dep': '2.0.0',
})
expect(result.request['required-dep']).toEqual(['1.0.0'])
expect(result.request['optional-dep']).toEqual(['2.0.0'])
})
test('lockfileToAuditTree() env lockfile does not include unreachable packages', async () => {
const result = await lockfileToAuditTree({
test('lockfileToAuditRequest() does not include env packages unreachable from importers', () => {
const result = lockfileToAuditRequest({
importers: {
['.' as ProjectId]: {
specifiers: {},
},
['.' as ProjectId]: { specifiers: {} },
},
lockfileVersion: LOCKFILE_VERSION,
}, {
@@ -426,35 +273,23 @@ describe('audit', () => {
importers: {
'.': {
configDependencies: {
'my-config': {
specifier: '1.0.0',
version: '1.0.0',
},
'my-config': { specifier: '1.0.0', version: '1.0.0' },
},
},
},
packages: {
'my-config@1.0.0': {
resolution: { integrity: 'my-config-integrity' },
},
'orphan-pkg@3.0.0': {
resolution: { integrity: 'orphan-integrity' },
},
'my-config@1.0.0': { resolution: { integrity: 'my-config-integrity' } },
'orphan-pkg@3.0.0': { resolution: { integrity: 'orphan-integrity' } },
},
snapshots: {
'my-config@1.0.0': {},
'orphan-pkg@3.0.0': {},
},
},
lockfileDir: f.find('one-project'),
})
const configDeps = result.dependencies!['configDependencies']
expect(configDeps.dependencies).toHaveProperty('my-config')
expect(configDeps.dependencies).not.toHaveProperty('orphan-pkg')
// Also verify it doesn't appear anywhere in the top-level dependencies
expect(result.dependencies).not.toHaveProperty('orphan-pkg')
expect(result.request).toHaveProperty('my-config')
expect(result.request).not.toHaveProperty('orphan-pkg')
})
test('an error is thrown if the audit endpoint responds with a non-OK code', async () => {
@@ -462,11 +297,8 @@ describe('audit', () => {
const getAuthHeader = () => undefined
await setupMockAgent()
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(500, { message: 'Something bad happened' })
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/audits', method: 'POST' })
.reply(500, { message: 'Fallback failed too' })
try {
let err!: PnpmError
@@ -477,12 +309,10 @@ describe('audit', () => {
},
getAuthHeader,
{
lockfileDir: f.find('one-project'),
registry,
retry: {
retries: 0,
},
virtualStoreDirMaxLength: 120,
})
} catch (_err: any) { // eslint-disable-line
err = _err
@@ -490,71 +320,61 @@ describe('audit', () => {
expect(err).toBeDefined()
expect(err.code).toBe('ERR_PNPM_AUDIT_BAD_RESPONSE')
expect(err.message).toBe('The audit endpoint (at http://registry.registry/-/npm/v1/security/audits/quick) responded with 500: {"message":"Something bad happened"}. Fallback endpoint (at http://registry.registry/-/npm/v1/security/audits) responded with 500: {"message":"Fallback failed too"}')
expect(err.message).toBe('The audit endpoint (at http://registry.registry/-/npm/v1/security/advisories/bulk) responded with 500: {"message":"Something bad happened"}')
} finally {
await teardownMockAgent()
}
})
test('falls back to /audits if /audits/quick fails', async () => {
test('throws AUDIT_BAD_RESPONSE if the registry body is not valid JSON', async () => {
const registry = 'http://registry.registry/'
const getAuthHeader = () => undefined
await setupMockAgent()
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.reply(500, { message: 'Something bad happened' })
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/audits', method: 'POST' })
.reply(200, {
actions: [],
advisories: {},
metadata: {
dependencies: 0,
devDependencies: 0,
optionalDependencies: 0,
totalDependencies: 0,
vulnerabilities: {
critical: 0,
high: 0,
info: 0,
low: 0,
moderate: 0,
},
},
muted: [],
})
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, 'not json <html>')
try {
expect(await audit({
importers: {},
lockfileVersion: LOCKFILE_VERSION,
},
getAuthHeader,
{
lockfileDir: f.find('one-project'),
registry,
retry: {
retries: 0,
},
virtualStoreDirMaxLength: 120,
})).toEqual({
actions: [],
advisories: {},
metadata: {
dependencies: 0,
devDependencies: 0,
optionalDependencies: 0,
totalDependencies: 0,
vulnerabilities: {
critical: 0,
high: 0,
info: 0,
low: 0,
moderate: 0,
},
},
muted: [],
})
let err!: PnpmError
try {
await audit(
{ importers: {}, lockfileVersion: LOCKFILE_VERSION },
getAuthHeader,
{ registry, retry: { retries: 0 } }
)
} catch (_err: any) { // eslint-disable-line
err = _err
}
expect(err).toBeDefined()
expect(err.code).toBe('ERR_PNPM_AUDIT_BAD_RESPONSE')
expect(err.message).toMatch(/invalid JSON/)
} finally {
await teardownMockAgent()
}
})
test('throws AUDIT_BAD_RESPONSE if the registry returns a non-object body', async () => {
const registry = 'http://registry.registry/'
const getAuthHeader = () => undefined
await setupMockAgent()
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, [])
try {
let err!: PnpmError
try {
await audit(
{ importers: {}, lockfileVersion: LOCKFILE_VERSION },
getAuthHeader,
{ registry, retry: { retries: 0 } }
)
} catch (_err: any) { // eslint-disable-line
err = _err
}
expect(err).toBeDefined()
expect(err.code).toBe('ERR_PNPM_AUDIT_BAD_RESPONSE')
expect(err.message).toMatch(/unexpected body/)
} finally {
await teardownMockAgent()
}
@@ -567,17 +387,17 @@ describe('audit', () => {
// intercept will only match if the authorization header is present and correct
getMockAgent().get('http://registry.registry')
.intercept({
path: '/-/npm/v1/security/audits/quick',
path: '/-/npm/v1/security/advisories/bulk',
method: 'POST',
headers: { authorization: 'Bearer test-token' },
})
.reply(200, { actions: [], advisories: {}, metadata: { dependencies: 0, devDependencies: 0, optionalDependencies: 0, totalDependencies: 0, vulnerabilities: { critical: 0, high: 0, info: 0, low: 0, moderate: 0 } }, muted: [] })
.reply(200, {})
try {
const result = await audit(
{ importers: {}, lockfileVersion: LOCKFILE_VERSION },
getAuthHeader,
{ lockfileDir: f.find('one-project'), registry, retry: { retries: 0 }, virtualStoreDirMaxLength: 120 }
{ registry, retry: { retries: 0 } }
)
expect(result.advisories).toEqual({})
} finally {
@@ -585,27 +405,125 @@ describe('audit', () => {
}
})
test('computes findings paths and severity counts locally when the bulk response omits findings', async () => {
const registry = 'http://registry.registry/'
const getAuthHeader = () => undefined
await setupMockAgent()
// Bare bulk response — no `findings`, no `patched_versions`, no `cves`,
// no `module_name`. Exactly what registry.npmjs.org returns today.
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, {
bar: [
{
id: 42,
url: 'https://github.com/advisories/GHSA-xxxx-yyyy-zzzz',
title: 'bar is bad',
severity: 'high',
vulnerable_versions: '<2.0.0',
},
],
})
try {
const result = await audit(
{
importers: {
['.' as ProjectId]: {
dependencies: { foo: '1.0.0' },
specifiers: { foo: '^1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['bar@1.0.0' as DepPath]: { resolution: { integrity: 'bar-integrity' } },
['foo@1.0.0' as DepPath]: {
dependencies: { bar: '1.0.0' },
resolution: { integrity: 'foo-integrity' },
},
},
},
getAuthHeader,
{ registry, retry: { retries: 0 } }
)
const advisory = result.advisories['42']
expect(advisory).toBeDefined()
expect(advisory.module_name).toBe('bar')
expect(advisory.github_advisory_id).toBe('GHSA-xxxx-yyyy-zzzz')
expect(advisory.patched_versions).toBe('>=2.0.0')
expect(advisory.findings).toHaveLength(1)
expect(advisory.findings[0].version).toBe('1.0.0')
expect(advisory.findings[0].paths).toEqual(['.>foo>bar'])
expect(result.metadata.vulnerabilities.high).toBe(1)
expect(result.metadata.totalDependencies).toBe(2)
} finally {
await teardownMockAgent()
}
})
test('does not send authorization header when getAuthHeader returns undefined', async () => {
const registry = 'http://registry.registry/'
const getAuthHeader = () => undefined
await setupMockAgent()
let capturedHeaders: Record<string, string> = {}
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, (opts) => {
capturedHeaders = opts.headers as Record<string, string>
return { actions: [], advisories: {}, metadata: { dependencies: 0, devDependencies: 0, optionalDependencies: 0, totalDependencies: 0, vulnerabilities: { critical: 0, high: 0, info: 0, low: 0, moderate: 0 } }, muted: [] }
return {}
})
try {
await audit(
{ importers: {}, lockfileVersion: LOCKFILE_VERSION },
getAuthHeader,
{ lockfileDir: f.find('one-project'), registry, retry: { retries: 0 }, virtualStoreDirMaxLength: 120 }
{ registry, retry: { retries: 0 } }
)
expect(capturedHeaders).not.toHaveProperty('authorization')
} finally {
await teardownMockAgent()
}
})
test('handles info severity in bulk response', async () => {
const registry = 'http://registry.registry/'
const getAuthHeader = () => undefined
await setupMockAgent()
getMockAgent().get('http://registry.registry')
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, {
info_pkg: [
{
id: 100,
url: 'https://github.com/advisories/GHSA-info-info-info',
title: 'just some info',
severity: 'info',
vulnerable_versions: '*',
},
],
})
try {
const result = await audit(
{
importers: {
['.' as ProjectId]: {
dependencies: { info_pkg: '1.0.0' },
specifiers: { info_pkg: '1.0.0' },
},
},
lockfileVersion: LOCKFILE_VERSION,
packages: {
['info_pkg@1.0.0' as DepPath]: { resolution: { integrity: 'info-integrity' } },
},
},
getAuthHeader,
{ registry, retry: { retries: 0 } }
)
expect(result.metadata.vulnerabilities.info).toBe(1)
expect(result.advisories['100'].severity).toBe('info')
} finally {
await teardownMockAgent()
}
})
})

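The `metadata.vulnerabilities` assertions in the tests above rely on a local per-severity tally of advisories, since the bulk endpoint returns no metadata of its own. Roughly:

```typescript
type Severity = 'info' | 'low' | 'moderate' | 'high' | 'critical'

// Count advisories per severity, starting every bucket at zero so the
// report always carries all five keys (including the new "info" level).
function countVulnerabilities (advisories: Array<{ severity: Severity }>): Record<Severity, number> {
  const counts: Record<Severity, number> = { info: 0, low: 0, moderate: 0, high: 0, critical: 0 }
  for (const advisory of advisories) {
    counts[advisory.severity] += 1
  }
  return counts
}
```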
View File

@@ -49,7 +49,7 @@
"path": "../../../testing/mock-agent"
},
{
"path": "../../../workspace/project-manifest-reader"
"path": "../../path"
}
]
}

View File

@@ -1,7 +1,7 @@
import { docsUrl, TABLE_OPTIONS } from '@pnpm/cli.utils'
import { type Config, type ConfigContext, types as allTypes, type UniversalOptions } from '@pnpm/config.reader'
import { WANTED_LOCKFILE } from '@pnpm/constants'
import { audit, type AuditAdvisory, type AuditLevelNumber, type AuditLevelString, type AuditReport, type AuditVulnerabilityCounts, type IgnoredAuditVulnerabilityCounts } from '@pnpm/deps.compliance.audit'
import { audit, type AuditAdvisory, type AuditLevelNumber, type AuditLevelString, type AuditReport, type AuditVulnerabilityCounts, type IgnoredAuditVulnerabilityCounts, normalizeGhsaId } from '@pnpm/deps.compliance.audit'
import { PnpmError } from '@pnpm/error'
import { type InstallCommandOptions, update } from '@pnpm/installing.commands'
import { readEnvLockfile, readWantedLockfile } from '@pnpm/lockfile.fs'
@@ -9,7 +9,7 @@ import { createGetAuthHeaderByURI } from '@pnpm/network.auth-header'
import type { Registries } from '@pnpm/types'
import { table } from '@zkochan/table'
import chalk, { type ChalkInstance } from 'chalk'
import { difference, pick, pickBy } from 'ramda'
import { pick, pickBy } from 'ramda'
import { renderHelp } from 'render-help'
import { fix } from './fix.js'
@@ -17,13 +17,15 @@ import { fixWithUpdate, type FixWithUpdateResult } from './fixWithUpdate.js'
import { ignore } from './ignore.js'
const AUDIT_LEVEL_NUMBER = {
low: 0,
moderate: 1,
high: 2,
critical: 3,
info: 0,
low: 1,
moderate: 2,
high: 3,
critical: 4,
} satisfies Record<AuditLevelString, AuditLevelNumber>
const AUDIT_COLOR = {
info: chalk.dim,
low: chalk.bold,
moderate: chalk.bold.yellow,
high: chalk.bold.red,
@@ -53,7 +55,7 @@ export function rcOptionsTypes (): Record<string, unknown> {
'production',
'registry',
], allTypes),
'audit-level': ['low', 'moderate', 'high', 'critical'],
'audit-level': ['info', 'low', 'moderate', 'high', 'critical'],
// For fix, use String instead of a list of allowed string values.
// Otherwise, an unexpected value will get coerced to true because of the Boolean type.
fix: [String, Boolean],
@@ -99,7 +101,7 @@ export function help (): string {
name: '--json',
},
{
description: 'Only print advisories with severity greater than or equal to one of the following: low|moderate|high|critical. Default: low',
description: 'Only print advisories with severity greater than or equal to one of the following: info|low|moderate|high|critical. Default: low',
name: '--audit-level <severity>',
},
{
@@ -121,11 +123,11 @@ export function help (): string {
name: '--ignore-registry-errors',
},
{
description: 'Ignore a vulnerability by CVE',
description: 'Ignore a vulnerability by its GitHub advisory ID (e.g. GHSA-xxxx-xxxx-xxxx)',
name: '--ignore <vulnerability>',
},
{
description: 'Ignore all CVEs with no resolution',
description: 'Ignore all vulnerabilities for which no fix exists',
name: '--ignore-unfixable',
},
],
@@ -205,7 +207,6 @@ export async function handler (opts: AuditOptions): Promise<{ exitCode: number,
},
envLockfile,
include,
lockfileDir,
registry: opts.registries.default,
retry: {
factor: opts.fetchRetryFactor,
@@ -214,7 +215,6 @@ export async function handler (opts: AuditOptions): Promise<{ exitCode: number,
retries: opts.fetchRetries,
},
timeout: opts.fetchTimeout,
virtualStoreDirMaxLength: opts.virtualStoreDirMaxLength,
})
} catch (err: any) { // eslint-disable-line
if (opts.ignoreRegistryErrors) {
@@ -293,6 +293,7 @@ ${newIgnores.join('\n')}`,
}
const vulnerabilities = auditReport.metadata.vulnerabilities
const ignoredVulnerabilities: IgnoredAuditVulnerabilityCounts = {
info: 0,
low: 0,
moderate: 0,
high: 0,
@@ -301,19 +302,12 @@ ${newIgnores.join('\n')}`,
const totalVulnerabilityCount = Object.values(vulnerabilities)
.reduce((sum: number, vulnerabilitiesCount: number) => sum + vulnerabilitiesCount, 0)
const ignoreGhsas = opts.auditConfig?.ignoreGhsas
if (ignoreGhsas) {
if (ignoreGhsas?.length) {
// Compare GHSA ids after normalizing so stored entries with varying
// casing still match the canonical form on the advisory.
const ignoreSet = new Set(ignoreGhsas.map(normalizeGhsaId))
auditReport.advisories = pickBy(({ github_advisory_id: githubAdvisoryId, severity }) => {
if (!ignoreGhsas.includes(githubAdvisoryId)) {
return true
}
ignoredVulnerabilities[severity as AuditLevelString] += 1
return false
}, auditReport.advisories)
}
const ignoreCves = opts.auditConfig?.ignoreCves
if (ignoreCves) {
auditReport.advisories = pickBy(({ cves, severity }) => {
if (cves.length === 0 || difference(cves, ignoreCves).length > 0) {
if (!ignoreSet.has(normalizeGhsaId(githubAdvisoryId))) {
return true
}
ignoredVulnerabilities[severity as AuditLevelString] += 1
@@ -339,7 +333,7 @@ ${newIgnores.join('\n')}`,
[AUDIT_COLOR[advisory.severity](advisory.severity), chalk.bold(advisory.title)],
['Package', advisory.module_name],
['Vulnerable versions', advisory.vulnerable_versions],
['Patched versions', advisory.patched_versions],
['Patched versions', advisory.patched_versions ?? '(unknown)'],
[
'Paths',
(paths.length > MAX_PATHS_COUNT

@@ -1,6 +1,5 @@
import { writeSettings } from '@pnpm/config.writer'
import type { AuditAdvisory, AuditReport } from '@pnpm/deps.compliance.audit'
import { difference } from 'ramda'
import { type AuditAdvisory, type AuditReport, normalizeGhsaId } from '@pnpm/deps.compliance.audit'
import semver from 'semver'
import type { AuditOptions } from './audit.js'
@@ -11,7 +10,7 @@ export interface FixResult {
}
export async function fix (auditReport: AuditReport, opts: AuditOptions): Promise<FixResult> {
const fixableAdvisories = getFixableAdvisories(Object.values(auditReport.advisories), opts.auditConfig?.ignoreCves)
const fixableAdvisories = getFixableAdvisories(Object.values(auditReport.advisories), opts.auditConfig?.ignoreGhsas)
const vulnOverrides = createOverrides(fixableAdvisories)
if (Object.values(vulnOverrides).length === 0) return { vulnOverrides, addedAgeExcludes: [] }
const addedAgeExcludes = opts.minimumReleaseAge ? createMinimumReleaseAgeExcludes(fixableAdvisories) : []
@@ -25,29 +24,33 @@ export async function fix (auditReport: AuditReport, opts: AuditOptions): Promis
return { vulnOverrides, addedAgeExcludes }
}
function getFixableAdvisories (advisories: AuditAdvisory[], ignoreCves?: string[]): AuditAdvisory[] {
if (ignoreCves) {
advisories = advisories.filter(({ cves }) => difference(cves, ignoreCves).length > 0)
function getFixableAdvisories (advisories: AuditAdvisory[], ignoreGhsas?: string[]): AuditAdvisory[] {
if (ignoreGhsas) {
// Normalize on both sides so ignore entries match regardless of casing.
const ignored = new Set(ignoreGhsas.map(normalizeGhsaId))
advisories = advisories.filter(({ github_advisory_id: ghsaId }) => !ghsaId || !ignored.has(normalizeGhsaId(ghsaId)))
}
return advisories
.filter(({ vulnerable_versions: vulnerableVersions, patched_versions: patchedVersions }) => vulnerableVersions !== '>=0.0.0' && patchedVersions !== '<0.0.0')
// Only advisories with a known patched range can produce an override.
// patched_versions is undefined when the range couldn't be inferred from
// vulnerable_versions — no override is possible in that case.
return advisories.filter(({ patched_versions: patchedVersions }) => patchedVersions != null)
}
function createOverrides (advisories: AuditAdvisory[]): Record<string, string> {
return Object.fromEntries(
advisories.map((advisory) => [
`${advisory.module_name}@${advisory.vulnerable_versions}`,
advisory.patched_versions,
])
)
const entries: Array<[string, string]> = []
for (const advisory of advisories) {
if (!advisory.patched_versions) continue
entries.push([`${advisory.module_name}@${advisory.vulnerable_versions}`, advisory.patched_versions])
}
return Object.fromEntries(entries)
}
export function createMinimumReleaseAgeExcludes (advisories: AuditAdvisory[]): string[] {
const excludes = new Set<string>()
for (const advisory of advisories) {
if (advisory.patched_versions === '<0.0.0') continue
if (advisory.vulnerable_versions === '>=0.0.0' || advisory.vulnerable_versions === '*') continue
const minVersion = semver.minVersion(advisory.patched_versions)
const patchedVersions = advisory.patched_versions
if (!patchedVersions) continue
const minVersion = semver.minVersion(patchedVersions)
if (minVersion) {
excludes.add(`${advisory.module_name}@${minVersion.version}`)
}

@@ -1,5 +1,6 @@
import { writeSettings } from '@pnpm/config.writer'
import type { AuditAdvisory, AuditReport } from '@pnpm/deps.compliance.audit'
import { type AuditAdvisory, type AuditReport, normalizeGhsaId } from '@pnpm/deps.compliance.audit'
import { PnpmError } from '@pnpm/error'
import type { AuditConfig, ProjectManifest } from '@pnpm/types'
import { difference } from 'ramda'
@@ -15,32 +16,46 @@ export interface IgnoreVulnerabilitiesOptions {
}
export async function ignore (opts: IgnoreVulnerabilitiesOptions): Promise<string[]> {
const currentCves = opts?.auditConfig?.ignoreCves ?? []
const currentUniqueCves = new Set(currentCves)
const advisoryWthNoResolutions = filterAdvisoriesWithNoResolutions(Object.values(opts.auditReport.advisories))
// GHSA IDs are canonically uppercase; normalize on read/write so a stored
// "ghsa-..." or uppercase user input both match the derived id at filter
// time.
const currentGhsas = (opts?.auditConfig?.ignoreGhsas ?? []).map(normalizeGhsaId)
const currentUniqueGhsas = new Set(currentGhsas)
const advisoriesWithNoResolutions = filterAdvisoriesWithNoResolutions(Object.values(opts.auditReport.advisories))
if (opts.ignoreUnfixable) {
Object.values(advisoryWthNoResolutions).forEach((advisory: AuditAdvisory) => {
advisory.cves.forEach((cve) => currentUniqueCves.add(cve))
})
} else {
opts.ignore?.forEach((cve) => currentUniqueCves.add(cve))
for (const advisory of advisoriesWithNoResolutions) {
if (!advisory.github_advisory_id) {
throw new PnpmError(
'AUDIT_MISSING_GHSA',
`Cannot ignore advisory ${advisory.id} (${advisory.module_name}): the registry did not provide a GHSA id or a resolvable url.`
)
}
currentUniqueGhsas.add(normalizeGhsaId(advisory.github_advisory_id))
}
} else if (opts.ignore) {
for (const ghsa of opts.ignore) {
currentUniqueGhsas.add(normalizeGhsaId(ghsa))
}
}
const newIgnoreCves = currentUniqueCves.size > 0 ? Array.from(currentUniqueCves) : undefined
const diffCve = difference(newIgnoreCves ?? [], currentCves)
const newIgnoreGhsas = currentUniqueGhsas.size > 0 ? Array.from(currentUniqueGhsas) : undefined
const diffGhsas = difference(newIgnoreGhsas ?? [], currentGhsas)
await writeSettings({
...opts,
updatedSettings: {
auditConfig: {
...opts.auditConfig,
ignoreCves: newIgnoreCves,
ignoreGhsas: newIgnoreGhsas,
},
},
})
return [...diffCve]
return [...diffGhsas]
}
function filterAdvisoriesWithNoResolutions (advisories: AuditAdvisory[]) {
return advisories.filter(({ patched_versions: patchedVersions }) => patchedVersions === '<0.0.0')
// Advisories for which no override can be produced — patched_versions is
// undefined when pnpm couldn't infer a patched range from vulnerable_versions.
// That is the only "no fix available" signal the bulk endpoint provides.
function filterAdvisoriesWithNoResolutions (advisories: AuditAdvisory[]): AuditAdvisory[] {
return advisories.filter(({ patched_versions: patchedVersions }) => patchedVersions == null)
}
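`normalizeGhsaId` itself is not shown in this diff; a standalone sketch of the canonicalization rule the filters above rely on (uppercase `GHSA-` prefix, lowercase suffix), under the assumption that unrecognized inputs pass through unchanged — the shipped helper in `@pnpm/deps.compliance.audit` may handle edge cases differently:

```typescript
// Illustrative stand-in for normalizeGhsaId, not the shipped implementation.
function normalizeGhsaId (id: string): string {
  const match = /^ghsa-(.+)$/i.exec(id.trim())
  if (match == null) return id // not a GHSA id — leave untouched
  return `GHSA-${match[1].toLowerCase()}`
}
```

With this rule, a stored `ghsa-CPH5-M8F7-6C5X` and a derived `GHSA-cph5-m8f7-6c5x` compare equal once both sides are normalized, which is why the ignore paths normalize on both read and write.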

(File diff suppressed because it is too large.)

@@ -22,7 +22,7 @@ test('overrides are added for vulnerable dependencies', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -55,7 +55,7 @@ test('no overrides are added if no vulnerabilities are found', async () => {
const tmp = f.prepare('fixture')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.NO_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -70,22 +70,20 @@ test('no overrides are added if no vulnerabilities are found', async () => {
expect(output).toBe('No fixes were made')
})
test('CVEs found in the allow list are not added as overrides', async () => {
test('GHSAs in the ignore list are not added as overrides', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { exitCode, output } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
auditLevel: 'moderate',
auditConfig: {
ignoreCves: [
'CVE-2019-10742',
'CVE-2020-28168',
'CVE-2021-3749',
'CVE-2020-7598',
ignoreGhsas: [
// Denial of Service in axios (<=0.18.0)
'GHSA-42xw-2xvc-qx8m',
],
},
dir: tmp,
@@ -97,7 +95,4 @@ test('CVEs found in the allow list are not added as overrides', async () => {
const manifest = readYamlFileSync<{ overrides?: Record<string, string> }>(path.join(tmp, 'pnpm-workspace.yaml'))
expect(manifest.overrides?.['axios@<=0.18.0']).toBeFalsy()
expect(manifest.overrides?.['axios@<0.21.1']).toBeFalsy()
expect(manifest.overrides?.['minimist@<0.2.1']).toBeFalsy()
expect(manifest.overrides?.['url-parse@<1.5.6']).toBeTruthy()
})

@@ -1,4 +1,3 @@
import { readFile } from 'node:fs/promises'
import { join } from 'node:path'
import { audit } from '@pnpm/deps.compliance.commands'
@@ -10,6 +9,7 @@ import type { DepPath } from '@pnpm/types'
import { readProjectManifest } from '@pnpm/workspace.project-manifest-reader'
import { filterProjectsFromDir } from '@pnpm/workspace.projects-filter'
import chalk from 'chalk'
import { loadJsonFile } from 'load-json-file'
import { readYamlFileSync } from 'read-yaml-file'
import { MOCK_REGISTRY, MOCK_REGISTRY_OPTS } from './utils/options.js'
@@ -42,11 +42,11 @@ describe('audit fix with update', () => {
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -108,11 +108,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalDepPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedDepPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -169,11 +169,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'depth-2-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'depth-2-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -220,11 +220,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'depth-3-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'depth-3-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -274,11 +274,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages).toBeDefined()
expect(originalLockfile!.packages![pkgId]).toBeDefined()
const mockResponse = await readFile(join(tmp, 'responses', 'unfixable-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'unfixable-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -338,11 +338,11 @@ The remaining vulnerabilities are:
expect(originalLockfile!.packages![expectedPkgId1]).toBeUndefined()
expect(originalLockfile!.packages![expectedPkgId2]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'form-data-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'form-data-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const { exitCode, output } = await audit.handler({
@@ -404,11 +404,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const {
@@ -478,11 +478,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'depth-2-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'depth-2-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const {
@@ -559,11 +559,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalDepPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedDepPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const {
@@ -650,11 +650,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const {
@@ -738,11 +738,11 @@ The fixed vulnerabilities are:
expect(originalLockfile!.packages![originalPkgId]).toBeDefined()
expect(originalLockfile!.packages![expectedPkgId]).toBeUndefined()
const mockResponse = await readFile(join(tmp, 'responses', 'top-level-vulnerability.json'), 'utf-8')
const mockResponse = await loadJsonFile<Record<string, unknown[]>>(join(tmp, 'responses', 'top-level-vulnerability.json'))
expect(mockResponse).toBeTruthy()
getMockAgent().get(MOCK_REGISTRY)
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, mockResponse)
const {

@@ -22,3 +22,9 @@ packages:
axios@0.18.0:
resolution: {integrity: sha512-1qjL8847bdp87/g7G5nCW12s5J0D1Xv45Z6M4Z5Tsp8sTuXj5w8e0HIiln4Wj12v2H2tX4f6j62/j/k7u0/g==}
snapshots:
is-positive@1.0.0: {}
axios@0.18.0: {}

@@ -1,36 +1,6 @@
{
"actions": [
"form-data": [
{
"action": "update",
"resolves": [
{
"id": 1109538,
"path": ".>axios>form-data",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "form-data",
"target": "4.0.5",
"depth": 3
},
{
"action": "review",
"module": "form-data",
"resolves": [
{
"id": 1109539,
"path": ".>form-data",
"dev": false,
"optional": false,
"bundled": false
}
]
}
],
"advisories": {
"1109538": {
"findings": [
{
"version": "4.0.0",
@@ -69,7 +39,7 @@
],
"url": "https://github.com/advisories/GHSA-fjxv-7rqg-78g4"
},
"1109539": {
{
"findings": [
{
"version": "3.0.1",
@@ -108,19 +78,5 @@
],
"url": "https://github.com/advisories/GHSA-fjxv-7rqg-78g4"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 0,
"critical": 2
},
"dependencies": 11,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 11
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/dep-of-pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": ".>@pnpm.e2e/pkg-with-1-dep>@pnpm.e2e/dep-of-pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/dep-of-pkg-with-1-dep",
"target": "100.0.0",
"depth": 3
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/dep-of-pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": ".>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
}
]
}

@@ -1,7 +1,6 @@
{
"actions": [],
"advisories": {
"123456": {
"@pnpm.e2e/pkg-with-1-dep": [
{
"findings": [
{
"version": "100.0.0",
@@ -32,19 +31,5 @@
"overview": "Overview: unfixable vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/dep-of-pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": ".>@pnpm.e2e/parent-of-pkg-with-1-dep>@pnpm.e2e/pkg-with-1-dep>@pnpm.e2e/dep-of-pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/dep-of-pkg-with-1-dep",
"target": "100.0.0",
"depth": 4
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/dep-of-pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 3,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 3
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": ".>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": "packages__sub-pkg>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": "packages__sub-pkg>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/dep-of-pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": "packages__sub-pkg>@pnpm.e2e/pkg-with-1-dep>@pnpm.e2e/dep-of-pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/dep-of-pkg-with-1-dep",
"target": "100.0.0",
"depth": 3
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/dep-of-pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": "packages__sub-pkg>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
}
]
}

@@ -1,23 +1,6 @@
{
"actions": [
"@pnpm.e2e/pkg-with-1-dep": [
{
"action": "update",
"resolves": [
{
"id": 123456,
"path": "packages__sub-pkg>@pnpm.e2e/pkg-with-1-dep",
"dev": false,
"optional": false,
"bundled": false
}
],
"module": "@pnpm.e2e/pkg-with-1-dep",
"target": "100.0.0",
"depth": 2
}
],
"advisories": {
"123456": {
"findings": [
{
"version": "100.0.0",
@@ -48,19 +31,5 @@
"overview": "Overview: mock vulnerability in @pnpm.e2e/pkg-with-1-dep",
"url": "https://example.com"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 0,
"high": 1,
"critical": 0
},
"dependencies": 2,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 2
}
]
}

@@ -18,12 +18,36 @@ afterEach(async () => {
await teardownMockAgent()
})
// Advisories whose patched range can't be inferred from vulnerable_versions
// (`>=0.0.0` / `*` cover the entire version space). With no inferable fix,
// these surface as "no resolution" for --ignore-unfixable.
const UNFIXABLE_RESPONSE = {
axios: [
{
id: 90000001,
url: 'https://github.com/advisories/GHSA-unfixable-test-0001',
title: 'unfixable axios advisory used for tests',
severity: 'high',
vulnerable_versions: '>=0.0.0',
cwe: [] as string[],
},
{
id: 90000002,
url: 'https://github.com/advisories/GHSA-unfixable-test-0002',
title: 'another unfixable axios advisory used for tests',
severity: 'moderate',
vulnerable_versions: '*',
cwe: [] as string[],
},
],
}
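// The inference the comment above refers to can be sketched as follows — a
// hypothetical stand-in for the real client logic, covering only the
// `<X.Y.Z` / `<=X.Y.Z` shapes named in the PR description:
//
//   "<X.Y.Z"  → patched at ">=X.Y.Z"
//   "<=X.Y.Z" → patched at ">X.Y.Z"
//   anything else (including ">=0.0.0" and "*") → undefined, which marks the
//   advisory unfixable downstream.

```typescript
// Illustrative sketch, not the shipped implementation.
function inferPatchedVersions (vulnerableVersions: string): string | undefined {
  const m = /^(<=?)\s*(\d+\.\d+\.\d+)$/.exec(vulnerableVersions.trim())
  if (m == null) return undefined
  return m[1] === '<' ? `>=${m[2]}` : `>${m[2]}`
}
```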
test('ignores are added for vulnerable dependencies with no resolutions', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, UNFIXABLE_RESPONSE)
const { exitCode, output } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
@@ -38,16 +62,16 @@ test('ignores are added for vulnerable dependencies with no resolutions', async
expect(output).toContain('2 new vulnerabilities were ignored')
const manifest = readYamlFileSync<any>(path.join(tmp, 'pnpm-workspace.yaml')) // eslint-disable-line
const cveList = manifest.auditConfig?.ignoreCves
expect(cveList?.length).toBe(2)
expect(cveList).toStrictEqual(expect.arrayContaining(['CVE-2017-16115', 'CVE-2017-16024']))
const ghsaList = manifest.auditConfig?.ignoreGhsas
expect(ghsaList?.length).toBe(2)
expect(ghsaList).toStrictEqual(expect.arrayContaining(['GHSA-unfixable-test-0001', 'GHSA-unfixable-test-0002']))
})
test('the specified vulnerabilities are ignored', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -56,21 +80,23 @@ test('the specified vulnerabilities are ignored', async () => {
dir: tmp,
rootProjectManifestDir: tmp,
fix: false,
ignore: ['CVE-2017-16115'],
ignore: ['GHSA-cph5-m8f7-6c5x'],
})
expect(exitCode).toBe(0)
expect(output).toContain('1 new vulnerabilities were ignored')
const manifest = readYamlFileSync<any>(path.join(tmp, 'pnpm-workspace.yaml')) // eslint-disable-line
expect(manifest.auditConfig?.ignoreCves).toStrictEqual(['CVE-2017-16115'])
// Stored canonicalized (GHSA prefix upper, suffix lower) regardless of the
// user-supplied casing.
expect(manifest.auditConfig?.ignoreGhsas).toStrictEqual(['GHSA-cph5-m8f7-6c5x'])
})
test('no ignores are added if no vulnerabilities are found', async () => {
const tmp = f.prepare('fixture')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.NO_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -86,24 +112,22 @@ test('no ignores are added if no vulnerabilities are found', async () => {
expect(output).toBe('No new vulnerabilities were ignored')
})
- test('ignored CVEs are not duplicated', async () => {
+ test('ignored GHSAs are not duplicated', async () => {
const tmp = f.prepare('has-vulnerabilities')
- const existingCves = [
- 'CVE-2019-10742',
- 'CVE-2020-7598',
- 'CVE-2017-16115',
- 'CVE-2017-16024',
+ const existingGhsas = [
+ 'GHSA-unfixable-test-0001',
+ 'GHSA-unfixable-test-0002',
]
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
- .reply(200, responses.ALL_VULN_RESP)
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
+ .reply(200, UNFIXABLE_RESPONSE)
const { exitCode, output } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
auditLevel: 'moderate',
auditConfig: {
- ignoreCves: existingCves,
+ ignoreGhsas: existingGhsas,
},
dir: tmp,
rootProjectManifestDir: tmp,
@@ -114,5 +138,5 @@ test('ignored CVEs are not duplicated', async () => {
expect(output).toBe('No new vulnerabilities were ignored')
const manifest = readYamlFileSync<any>(path.join(tmp, 'pnpm-workspace.yaml')) // eslint-disable-line
- expect(manifest.auditConfig?.ignoreCves).toStrictEqual(expect.arrayContaining(existingCves))
+ expect(manifest.auditConfig?.ignoreGhsas).toStrictEqual(expect.arrayContaining(existingGhsas))
})
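The canonical form the tests above assert (uppercase `GHSA-` prefix, lowercase suffix, regardless of user-supplied casing) can be sketched as a small normalizer. This is an illustrative helper, not the audit package's actual implementation; the name `canonicalizeGhsaId` is an assumption:

```typescript
// Canonicalize a GHSA identifier before storing it in auditConfig.ignoreGhsas:
// uppercase the "GHSA-" prefix, lowercase the rest. Returns undefined for
// strings that do not look like a GHSA id at all.
function canonicalizeGhsaId (id: string): string | undefined {
  const match = /^ghsa-([a-z0-9][a-z0-9-]*)$/i.exec(id.trim())
  if (match == null) return undefined
  return `GHSA-${match[1].toLowerCase()}`
}
```

With such a normalizer, `ghsa-CPH5-M8F7-6C5X` and `GHSA-cph5-m8f7-6c5x` collapse to the same stored entry, which is what makes the de-duplication test above work.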


@@ -29,7 +29,7 @@ describe('plugin-commands-audit', () => {
})
test('audit', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { output, exitCode } = await audit.handler({
@@ -43,7 +43,7 @@ describe('plugin-commands-audit', () => {
test('audit --dev', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.DEV_VULN_ONLY_RESP)
const { output, exitCode } = await audit.handler({
@@ -60,7 +60,7 @@ describe('plugin-commands-audit', () => {
test('audit --audit-level', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { output, exitCode } = await audit.handler({
@@ -76,7 +76,7 @@ describe('plugin-commands-audit', () => {
test('audit: no vulnerabilities', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.NO_VULN_RESP)
const { output, exitCode } = await audit.handler({
@@ -91,7 +91,7 @@ describe('plugin-commands-audit', () => {
test('audit --json', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { output, exitCode } = await audit.handler({
@@ -106,27 +106,39 @@ describe('plugin-commands-audit', () => {
expect(exitCode).toBe(1)
})
- test.skip('audit does not exit with code 1 if the found vulnerabilities are having lower severity then what we asked for', async () => {
+ test('audit exits 0 when every found vulnerability is below --audit-level', async () => {
+ // Only a single moderate advisory against axios. With --audit-level=high
+ // the table is empty (so exitCode is 0), but the summary still reports
+ // the moderate vulnerability so the user knows it exists.
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
- .reply(200, responses.DEV_VULN_ONLY_RESP)
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
+ .reply(200, {
+ axios: [
+ {
+ id: 99000001,
+ url: 'https://github.com/advisories/GHSA-below-level-test-0001',
+ title: 'moderate axios advisory for audit-level test',
+ severity: 'moderate',
+ vulnerable_versions: '<=0.99.0',
+ cwe: [] as string[],
+ },
+ ],
+ })
const { output, exitCode } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
auditLevel: 'high',
dir: hasVulnerabilitiesDir,
rootProjectManifestDir: hasVulnerabilitiesDir,
dev: true,
})
expect(exitCode).toBe(0)
- expect(stripAnsi(output)).toBe(`1 vulnerabilities found
- Severity: 1 moderate`)
+ expect(stripAnsi(output)).toBe('1 vulnerabilities found\nSeverity: 1 moderate')
})
test('audit --json respects audit-level', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.DEV_VULN_ONLY_RESP)
const { exitCode, output } = await audit.handler({
@@ -138,14 +150,19 @@ describe('plugin-commands-audit', () => {
dev: true,
})
- expect(exitCode).toBe(0)
+ expect(exitCode).toBe(1)
const parsed = JSON.parse(output)
- expect(Object.keys(parsed.advisories)).toHaveLength(0)
+ // DEV_VULN_ONLY_RESP has 2 critical advisories — only those should be
+ // included at audit-level=critical.
+ expect(Object.keys(parsed.advisories)).toHaveLength(2)
+ for (const advisory of Object.values(parsed.advisories) as Array<{ severity: string }>) {
+ expect(advisory.severity).toBe('critical')
+ }
})
test('audit --json filters advisories by audit-level', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.DEV_VULN_ONLY_RESP)
const { exitCode, output } = await audit.handler({
@@ -159,21 +176,17 @@ describe('plugin-commands-audit', () => {
expect(exitCode).toBe(1)
const parsed = JSON.parse(output)
- // DEV_VULN_ONLY_RESP has 4 high and 2 moderate advisories
- // With audit-level=high, only the 4 high advisories should be included
- expect(Object.keys(parsed.advisories)).toHaveLength(4)
+ // At audit-level=high, only high/critical advisories should remain.
for (const advisory of Object.values(parsed.advisories) as Array<{ severity: string }>) {
- expect(advisory.severity).toBe('high')
+ expect(['high', 'critical']).toContain(advisory.severity)
}
+ expect(Object.keys(parsed.advisories).length).toBeGreaterThan(0)
})
test('audit does not exit with code 1 if the registry responds with a non-200 response and ignoreRegistryErrors is used', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(500, { message: 'Something bad happened' })
- getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits', method: 'POST' })
- .reply(500, { message: 'Fallback failed too' })
const { output, exitCode } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
dir: hasVulnerabilitiesDir,
@@ -185,13 +198,13 @@ describe('plugin-commands-audit', () => {
})
expect(exitCode).toBe(0)
- expect(stripAnsi(output)).toBe(`The audit endpoint (at ${AUDIT_REGISTRY}-/npm/v1/security/audits/quick) responded with 500: {"message":"Something bad happened"}. Fallback endpoint (at ${AUDIT_REGISTRY}-/npm/v1/security/audits) responded with 500: {"message":"Fallback failed too"}`)
+ expect(stripAnsi(output)).toBe(`The audit endpoint (at ${AUDIT_REGISTRY}-/npm/v1/security/advisories/bulk) responded with 500: {"message":"Something bad happened"}`)
})
test('audit sends authToken', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({
- path: '/-/npm/v1/security/audits/quick',
+ path: '/-/npm/v1/security/advisories/bulk',
method: 'POST',
headers: { authorization: 'Bearer 123' },
})
@@ -212,10 +225,7 @@ describe('plugin-commands-audit', () => {
test('audit endpoint does not exist', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
- .reply(404, {})
- getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(404, {})
await expect(audit.handler({
@@ -229,38 +239,11 @@ describe('plugin-commands-audit', () => {
})).rejects.toThrow(AuditEndpointNotExistsError)
})
- test('audit: CVEs in ignoreCves do not show up', async () => {
+ test('audit: advisories in ignoreGhsas do not show up', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
- .reply(200, responses.ALL_VULN_RESP)
- const { exitCode, output } = await audit.handler({
- ...AUDIT_REGISTRY_OPTS,
- auditLevel: 'moderate',
- dir: tmp,
- rootProjectManifestDir: tmp,
- rootProjectManifest: {},
- auditConfig: {
- ignoreCves: [
- 'CVE-2019-10742',
- 'CVE-2020-28168',
- 'CVE-2021-3749',
- 'CVE-2020-7598',
- ],
- },
- })
- expect(exitCode).toBe(1)
- expect(stripAnsi(output)).toMatchSnapshot()
- })
- test('audit: CVEs in ignoreGhsas do not show up', async () => {
- const tmp = f.prepare('has-vulnerabilities')
- getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -283,11 +266,11 @@ describe('plugin-commands-audit', () => {
expect(stripAnsi(output)).toMatchSnapshot()
})
- test('audit: CVEs in ignoreCves do not show up when JSON output is used', async () => {
+ test('audit: advisories in ignoreGhsas do not show up when JSON output is used', async () => {
const tmp = f.prepare('has-vulnerabilities')
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { exitCode, output } = await audit.handler({
@@ -298,11 +281,11 @@ describe('plugin-commands-audit', () => {
json: true,
rootProjectManifest: {},
auditConfig: {
- ignoreCves: [
- 'CVE-2019-10742',
- 'CVE-2020-28168',
- 'CVE-2021-3749',
- 'CVE-2020-7598',
+ ignoreGhsas: [
+ 'GHSA-42xw-2xvc-qx8m',
+ 'GHSA-4w2v-q235-vp99',
+ 'GHSA-cph5-m8f7-6c5x',
+ 'GHSA-vh95-rmgr-6w4m',
],
},
})
@@ -310,4 +293,37 @@ describe('plugin-commands-audit', () => {
expect(exitCode).toBe(1)
expect(stripAnsi(output)).toMatchSnapshot()
})
test('audit --audit-level info', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.INFO_VULN_RESP)
const { output, exitCode } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
auditLevel: 'info',
dir: hasVulnerabilitiesDir,
rootProjectManifestDir: hasVulnerabilitiesDir,
})
expect(exitCode).toBe(1)
expect(stripAnsi(output)).toContain('just some info')
expect(stripAnsi(output)).toContain('info')
})
test('audit defaults to low level and ignores info', async () => {
getMockAgent().get(AUDIT_REGISTRY.replace(/\/$/, ''))
.intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.INFO_VULN_RESP)
const { output, exitCode } = await audit.handler({
...AUDIT_REGISTRY_OPTS,
dir: hasVulnerabilitiesDir,
rootProjectManifestDir: hasVulnerabilitiesDir,
})
expect(exitCode).toBe(0)
expect(stripAnsi(output)).toBe(`1 vulnerabilities found
Severity: 1 info`)
})
})
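The `--audit-level` behavior exercised above (exit 0 when everything found sits below the threshold, `info` as the new lowest level, `low` as the default that ignores `info`) boils down to an ordered severity comparison. A minimal sketch under assumed names; the real filter lives inside the audit package:

```typescript
type Severity = 'info' | 'low' | 'moderate' | 'high' | 'critical'

// 'info' is now part of the scale end-to-end; 'low' is the default threshold,
// which is why a lone info advisory yields exit code 0 without --audit-level.
const SEVERITY_LEVEL: Record<Severity, number> = {
  info: 0,
  low: 1,
  moderate: 2,
  high: 3,
  critical: 4,
}

interface Advisory {
  id: number
  severity: Severity
}

// Keep only advisories at or above the requested audit level.
function filterByAuditLevel (advisories: Advisory[], auditLevel: Severity = 'low'): Advisory[] {
  return advisories.filter((advisory) => SEVERITY_LEVEL[advisory.severity] >= SEVERITY_LEVEL[auditLevel])
}
```

The exit code then follows from the filtered set: an empty table exits 0 even though the summary still reports the lower-severity findings.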


@@ -25,7 +25,7 @@ test('overrides with references (via $) are preserved during audit --fix', async
const tmp = f.prepare('preserve-reference-overrides')
getMockAgent().get(registries.default.replace(/\/$/, ''))
- .intercept({ path: '/-/npm/v1/security/audits/quick', method: 'POST' })
+ .intercept({ path: '/-/npm/v1/security/advisories/bulk', method: 'POST' })
.reply(200, responses.ALL_VULN_RESP)
const { manifest: initialManifest } = await readProjectManifest(tmp)


File diff suppressed because it is too large


@@ -1,278 +1,211 @@
{
"actions": [
"axios": [
{
"action": "review",
"module": "axios",
"resolves": [
{
"id": 1005018,
"path": ".>axios",
"dev": false,
"optional": false,
"bundled": false
},
{
"id": 1005506,
"path": ".>axios",
"dev": false,
"optional": false,
"bundled": false
},
{
"id": 1006349,
"path": ".>axios",
"dev": false,
"optional": false,
"bundled": false
}
]
"id": 1102326,
"url": "https://github.com/advisories/GHSA-cph5-m8f7-6c5x",
"title": "axios Inefficient Regular Expression Complexity vulnerability",
"severity": "high",
"vulnerable_versions": "<0.21.2",
"cwe": [
"CWE-400",
"CWE-1333"
],
"cvss": {
"score": 7.5,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"
}
},
{
"action": "review",
"module": "sync-exec",
"resolves": [
{
"id": 1005902,
"path": ".>sync-exec",
"dev": false,
"bundled": false,
"optional": false
}
]
"id": 1111034,
"url": "https://github.com/advisories/GHSA-jr5f-v2jv-69x6",
"title": "axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL",
"severity": "high",
"vulnerable_versions": "<0.30.0",
"cwe": [
"CWE-918"
],
"cvss": {
"score": 0,
"vectorString": null
}
},
{
"action": "review",
"module": "follow-redirects",
"resolves": [
{
"id": 1006865,
"path": ".>axios>follow-redirects",
"dev": false,
"optional": false,
"bundled": false
},
{
"id": 1007026,
"path": ".>axios>follow-redirects",
"dev": false,
"optional": false,
"bundled": false
}
]
"id": 1113274,
"url": "https://github.com/advisories/GHSA-43fc-jf86-j433",
"title": "Axios is Vulnerable to Denial of Service via __proto__ Key in mergeConfig",
"severity": "high",
"vulnerable_versions": "<=0.30.2",
"cwe": [
"CWE-754"
],
"cvss": {
"score": 7.5,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"
}
},
{
"id": 1116365,
"url": "https://github.com/advisories/GHSA-3p68-rc4w-qgx5",
"title": "Axios has a NO_PROXY Hostname Normalization Bypass Leads to SSRF",
"severity": "critical",
"vulnerable_versions": "<1.15.0",
"cwe": [
"CWE-441",
"CWE-918"
],
"cvss": {
"score": 0,
"vectorString": null
}
},
{
"id": 1116605,
"url": "https://github.com/advisories/GHSA-fvcv-3m26-pcqx",
"title": "Axios has Unrestricted Cloud Metadata Exfiltration via Header Injection Chain",
"severity": "critical",
"vulnerable_versions": "<0.31.0",
"cwe": [
"CWE-113",
"CWE-444",
"CWE-918"
],
"cvss": {
"score": 10,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H"
}
},
{
"id": 1090049,
"url": "https://github.com/advisories/GHSA-4w2v-q235-vp99",
"title": "Axios vulnerable to Server-Side Request Forgery",
"severity": "moderate",
"vulnerable_versions": "<0.21.1",
"cwe": [
"CWE-918"
],
"cvss": {
"score": 5.9,
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N"
}
},
{
"id": 1097679,
"url": "https://github.com/advisories/GHSA-wf5p-g6vw-rhxx",
"title": "Axios Cross-Site Request Forgery Vulnerability",
"severity": "moderate",
"vulnerable_versions": ">=0.8.1 <0.28.0",
"cwe": [
"CWE-352"
],
"cvss": {
"score": 6.5,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N"
}
},
{
"id": 1091722,
"url": "https://github.com/advisories/GHSA-42xw-2xvc-qx8m",
"title": "Denial of Service in axios",
"severity": "high",
"vulnerable_versions": "<=0.18.0",
"cwe": [
"CWE-20",
"CWE-755"
],
"cvss": {
"score": 7.5,
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"
}
}
],
"advisories": {
"1005018": {
"findings": [
{
"version": "0.15.3",
"paths": [
".>axios"
]
}
],
"metadata": null,
"vulnerable_versions": "<=0.21.1",
"module_name": "axios",
"severity": "high",
"github_advisory_id": "GHSA-cph5-m8f7-6c5x",
"cves": [
"CVE-2021-3749"
],
"access": "public",
"patched_versions": ">=0.21.2",
"updated": "2021-09-08T16:46:47.000Z",
"recommendation": "Upgrade to version 0.21.2 or later",
"cwe": "CWE-697",
"found_by": null,
"deleted": null,
"id": 1005018,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2021-3749\n- https://github.com/axios/axios/commit/5b457116e31db0e88fede6c428e969e87f290929\n- https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31\n- https://www.npmjs.com/package/axios\n- https://lists.apache.org/thread.html/r075d464dce95cd13c03ff9384658edcccd5ab2983b82bfc72b62bb10@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r216f0fd0a3833856d6a6a1fada488cadba45f447d87010024328ccf2@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r3ae6d2654f92c5851bdb73b35e96b0e4e3da39f28ac7a1b15ae3aab8@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r4bf1b32983f50be00f9752214c1b53738b621be1c2b0dbd68c7f2391@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r7324ecc35b8027a51cb6ed629490fcd3b2d7cf01c424746ed5744bf1@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r74d0b359408fff31f87445261f0ee13bdfcac7d66f6b8e846face321@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/ra15d63c54dc6474b29f72ae4324bcb03038758545b3ab800845de7a1@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/rc263bfc5b53afcb7e849605478d73f5556eb0c00d1f912084e407289@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/rfa094029c959da0f7c8cd7dc9c4e59d21b03457bf0cedf6c93e1bb0a@%3Cdev.druid.apache.org%3E\n- https://lists.apache.org/thread.html/rfc5c478053ff808671aef170f3d9fc9d05cc1fab8fb64431edc66103@%3Ccommits.druid.apache.org%3E\n- https://github.com/advisories/GHSA-cph5-m8f7-6c5x",
"created": "2021-11-18T16:00:48.489Z",
"reported_by": null,
"title": "Incorrect Comparison in axios",
"npm_advisory_id": null,
"overview": "axios is vulnerable to Inefficient Regular Expression Complexity",
"url": "https://github.com/advisories/GHSA-cph5-m8f7-6c5x"
},
"1005506": {
"findings": [
{
"version": "0.15.3",
"paths": [
".>axios"
]
}
],
"metadata": null,
"vulnerable_versions": "<0.21.1",
"module_name": "axios",
"severity": "high",
"github_advisory_id": "GHSA-4w2v-q235-vp99",
"cves": [
"CVE-2020-28168"
],
"access": "public",
"patched_versions": ">=0.21.1",
"updated": "2021-01-04T20:58:17.000Z",
"recommendation": "Upgrade to version 0.21.1 or later",
"cwe": "CWE-918",
"found_by": null,
"deleted": null,
"id": 1005506,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2020-28168\n- https://github.com/axios/axios/issues/3369\n- https://github.com/axios/axios/commit/c7329fefc890050edd51e40e469a154d0117fc55\n- https://snyk.io/vuln/SNYK-JS-AXIOS-1038255\n- https://www.npmjs.com/package/axios\n- https://www.npmjs.com/advisories/1594\n- https://lists.apache.org/thread.html/r954d80fd18e9dafef6e813963eb7e08c228151c2b6268ecd63b35d1f@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/r25d53acd06f29244b8a103781b0339c5e7efee9099a4d52f0c230e4a@%3Ccommits.druid.apache.org%3E\n- https://lists.apache.org/thread.html/rdfd2901b8b697a3f6e2c9c6ecc688fd90d7f881937affb5144d61d6e@%3Ccommits.druid.apache.org%3E\n- https://github.com/advisories/GHSA-4w2v-q235-vp99",
"created": "2021-11-18T16:00:48.546Z",
"reported_by": null,
"title": "Server-Side Request Forgery in Axios",
"npm_advisory_id": null,
"overview": "Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.",
"url": "https://github.com/advisories/GHSA-4w2v-q235-vp99"
},
"1005902": {
"findings": [
{
"version": "0.6.2",
"paths": [
".>sync-exec"
]
}
],
"metadata": null,
"vulnerable_versions": "<=0.6.2",
"module_name": "sync-exec",
"severity": "moderate",
"github_advisory_id": "GHSA-38h8-x697-gh8q",
"cves": [
"CVE-2017-16024"
],
"access": "public",
"patched_versions": "<0.0.0",
"updated": "2020-08-31T18:18:48.000Z",
"recommendation": "None",
"cwe": "CWE-377",
"found_by": null,
"deleted": null,
"id": 1005902,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2017-16024\n- https://github.com/gvarsanyi/sync-exec/issues/17\n- https://cwe.mitre.org/data/definitions/377.html\n- https://github.com/advisories/GHSA-38h8-x697-gh8q\n- https://www.npmjs.com/advisories/310\n- https://nodesecurity.io/advisories/310\n- https://www.owasp.org/index.php/Insecure_Temporary_File",
"created": "2021-11-18T16:00:48.581Z",
"reported_by": null,
"title": "Tmp files readable by other users in sync-exec",
"npm_advisory_id": null,
"overview": "Affected versions of `sync-exec` use files located in `/tmp/` to buffer command results before returning values. As `/tmp/` is almost always set with world readable permissions, this may allow low privilege users on the system to read the results of commands run via `sync-exec` under a higher privilege user.\n\n\n## Recommendation\n\nThere is currently no direct patch for `sync-exec`, as the `child_process.execSync` function provided in Node.js v0.12.0 and later provides the same functionality natively. \n\nThe best mitigation currently is to update to Node.js v0.12.0 or later, and migrate all uses of `sync-exec` to `child_process.execSync()`.",
"url": "https://github.com/advisories/GHSA-38h8-x697-gh8q"
},
"1006349": {
"findings": [
{
"version": "0.15.3",
"paths": [
".>axios"
]
}
],
"metadata": null,
"vulnerable_versions": "<=0.18.0",
"module_name": "axios",
"severity": "high",
"github_advisory_id": "GHSA-42xw-2xvc-qx8m",
"cves": [
"CVE-2019-10742"
],
"access": "public",
"patched_versions": ">=0.18.1",
"updated": "2019-06-05T16:22:11.000Z",
"recommendation": "Upgrade to version 0.18.1 or later",
"cwe": "CWE-20",
"found_by": null,
"deleted": null,
"id": 1006349,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2019-10742\n- https://app.snyk.io/vuln/SNYK-JS-AXIOS-174505\n- https://github.com/axios/axios/issues/1098\n- https://github.com/axios/axios/pull/1485\n- https://snyk.io/vuln/SNYK-JS-AXIOS-174505\n- https://www.npmjs.com/advisories/880\n- https://github.com/advisories/GHSA-42xw-2xvc-qx8m",
"created": "2021-11-18T16:00:48.617Z",
"reported_by": null,
"title": "Denial of Service in axios",
"npm_advisory_id": null,
"overview": "Versions of `axios` prior to 0.18.1 are vulnerable to Denial of Service. If a request exceeds the `maxContentLength` property, the package prints an error but does not stop the request. This may cause high CPU usage and lead to Denial of Service.\n\n\n## Recommendation\n\nUpgrade to 0.18.1 or later.",
"url": "https://github.com/advisories/GHSA-42xw-2xvc-qx8m"
},
"1006865": {
"findings": [
{
"version": "1.0.0",
"paths": [
".>axios>follow-redirects"
]
}
],
"metadata": null,
"vulnerable_versions": "<1.14.7",
"module_name": "follow-redirects",
"severity": "high",
"github_advisory_id": "GHSA-74fj-2j2h-c42q",
"cves": [
"CVE-2022-0155"
],
"access": "public",
"patched_versions": ">=1.14.7",
"updated": "2022-01-11T18:41:09.000Z",
"recommendation": "Upgrade to version 1.14.7 or later",
"cwe": "CWE-359",
"found_by": null,
"deleted": null,
"id": 1006865,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2022-0155\n- https://github.com/follow-redirects/follow-redirects/commit/8b347cbcef7c7b72a6e9be20f5710c17d6163c22\n- https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406\n- https://github.com/advisories/GHSA-74fj-2j2h-c42q",
"created": "2022-01-12T23:00:43.967Z",
"reported_by": null,
"follow-redirects": [
{
"id": 1102323,
"url": "https://github.com/advisories/GHSA-74fj-2j2h-c42q",
"title": "Exposure of sensitive information in follow-redirects",
"npm_advisory_id": null,
"overview": "follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor",
"url": "https://github.com/advisories/GHSA-74fj-2j2h-c42q"
},
"1007026": {
"findings": [
{
"version": "1.0.0",
"paths": [
".>axios>follow-redirects"
]
}
"severity": "high",
"vulnerable_versions": "<1.14.7",
"cwe": [
"CWE-359"
],
"metadata": null,
"vulnerable_versions": "<1.14.8",
"module_name": "follow-redirects",
"cvss": {
"score": 8,
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H"
}
},
{
"id": 1109569,
"url": "https://github.com/advisories/GHSA-jchw-25xp-jwwc",
"title": "Follow Redirects improperly handles URLs in the url.parse() function",
"severity": "moderate",
"github_advisory_id": "GHSA-pw2r-vq6v-hr8c",
"cves": [
"CVE-2022-0536"
"vulnerable_versions": "<1.15.4",
"cwe": [
"CWE-20",
"CWE-601"
],
"access": "public",
"patched_versions": ">=1.14.8",
"updated": "2022-02-11T21:18:03.000Z",
"recommendation": "Upgrade to version 1.14.8 or later",
"cwe": "CWE-200",
"found_by": null,
"deleted": null,
"id": 1007026,
"references": "- https://nvd.nist.gov/vuln/detail/CVE-2022-0536\n- https://github.com/follow-redirects/follow-redirects/commit/62e546a99c07c3ee5e4e0718c84a6ca127c5c445\n- https://huntr.dev/bounties/7cf2bf90-52da-4d59-8028-a73b132de0db\n- https://github.com/advisories/GHSA-pw2r-vq6v-hr8c",
"created": "2022-02-14T23:00:43.878Z",
"reported_by": null,
"title": "Exposure of Sensitive Information to an Unauthorized Actor in follow-redirects",
"npm_advisory_id": null,
"overview": "Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.",
"url": "https://github.com/advisories/GHSA-pw2r-vq6v-hr8c"
}
},
"muted": [],
"metadata": {
"vulnerabilities": {
"info": 0,
"low": 0,
"moderate": 2,
"high": 4,
"critical": 0
"cvss": {
"score": 6.1,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N"
}
},
"dependencies": 6,
"devDependencies": 0,
"optionalDependencies": 0,
"totalDependencies": 6
}
{
"id": 1096856,
"url": "https://github.com/advisories/GHSA-cxjh-pqwp-8mfp",
"title": "follow-redirects' Proxy-Authorization header kept across hosts",
"severity": "moderate",
"vulnerable_versions": "<=1.15.5",
"cwe": [
"CWE-200"
],
"cvss": {
"score": 6.5,
"vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N"
}
},
{
"id": 1116560,
"url": "https://github.com/advisories/GHSA-r4q5-vmmm-2653",
"title": "follow-redirects leaks Custom Authentication Headers to Cross-Domain Redirect Targets",
"severity": "moderate",
"vulnerable_versions": "<=1.15.11",
"cwe": [
"CWE-200"
],
"cvss": {
"score": 0,
"vectorString": null
}
},
{
"id": 1092623,
"url": "https://github.com/advisories/GHSA-pw2r-vq6v-hr8c",
"title": "Exposure of Sensitive Information to an Unauthorized Actor in follow-redirects",
"severity": "moderate",
"vulnerable_versions": "<1.14.8",
"cwe": [
"CWE-200",
"CWE-212"
],
"cvss": {
"score": 5.9,
"vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N"
}
}
],
"sync-exec": [
{
"id": 1093475,
"url": "https://github.com/advisories/GHSA-38h8-x697-gh8q",
"title": "Tmp files readable by other users in sync-exec",
"severity": "moderate",
"vulnerable_versions": "<=0.6.2",
"cwe": [
"CWE-377"
],
"cvss": {
"score": 6.5,
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N"
}
}
]
}
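The bulk fixtures above carry only `vulnerable_versions`; the legacy format's `patched_versions` field is no longer served, so it has to be inferred locally for `audit --fix` overrides. A sketch of that inference for the common `<X.Y.Z` / `<=X.Y.Z` shapes (the helper name and its exact coverage are assumptions; the point is that `undefined` is returned whenever inference fails):

```typescript
// Infer a patched semver range from a vulnerable range.
//   "<0.21.2"  -> ">=0.21.2"
//   "<=0.30.2" -> ">0.30.2"
// Compound or exotic ranges (e.g. ">=0.8.1 <0.28.0") are not inferable this
// way, so undefined is returned and no override is produced for them.
function inferPatchedVersions (vulnerableVersions: string): string | undefined {
  const range = vulnerableVersions.trim()
  const lt = /^<(\d+\.\d+\.\d+)$/.exec(range)
  if (lt != null) return `>=${lt[1]}`
  const lte = /^<=(\d+\.\d+\.\d+)$/.exec(range)
  if (lte != null) return `>${lte[1]}`
  return undefined
}
```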


@@ -8,3 +8,5 @@ export const DEV_VULN_ONLY_RESP = loadJsonFileSync<any>(path.join(import.meta.di
export const ALL_VULN_RESP = loadJsonFileSync<any>(path.join(import.meta.dirname, 'all-vulnerabilities-response.json'))
// eslint-disable-next-line
export const NO_VULN_RESP = loadJsonFileSync<any>(path.join(import.meta.dirname, 'no-vulnerabilities-response.json'))
+ // eslint-disable-next-line
+ export const INFO_VULN_RESP = loadJsonFileSync<any>(path.join(import.meta.dirname, 'info-vulnerability-response.json'))


@@ -0,0 +1,11 @@
{
"axios": [
{
"id": 100,
"url": "https://github.com/advisories/GHSA-info-info-info",
"title": "just some info",
"severity": "info",
"vulnerable_versions": "*"
}
]
}


@@ -1,18 +1 @@
- {
- "actions": [],
- "advisories": {},
- "muted": [],
- "metadata": {
- "vulnerabilities": {
- "info": 0,
- "low": 0,
- "moderate": 0,
- "high": 0,
- "critical": 0
- },
- "dependencies": 4,
- "devDependencies": 0,
- "optionalDependencies": 0,
- "totalDependencies": 4
- }
- }
+ {}


@@ -1,30 +1,37 @@
import fs from 'node:fs'
import path from 'node:path'
- import { audit } from '@pnpm/deps.compliance.audit'
+ import { lockfileToAuditRequest } from '@pnpm/deps.compliance.audit'
import { readWantedLockfile } from '@pnpm/lockfile.fs'
import { fixtures } from '@pnpm/test-fixtures'
const f = fixtures(import.meta.dirname)
+ const REGISTRY = 'https://registry.npmjs.org'
async function writeResponse (lockfileDir: string, filename: string, opts: {
production?: boolean
dev?: boolean
optional?: boolean
- }) {
+ }): Promise<void> {
const lockfile = await readWantedLockfile(lockfileDir, { ignoreIncompatible: true })
+ if (!lockfile) throw new Error(`no lockfile at ${lockfileDir}`)
const include = {
dependencies: opts.production !== false,
devDependencies: opts.dev !== false,
optionalDependencies: opts.optional !== false,
}
- // @ts-expect-error
- const auditReport = await audit(lockfile!, {
- dispatcherOptions: {},
- include,
- registry: 'https://registry.npmjs.org/',
+ const auditRequest = lockfileToAuditRequest(lockfile, { include })
+ const res = await fetch(`${REGISTRY}/-/npm/v1/security/advisories/bulk`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify(auditRequest.request),
})
- fs.writeFileSync(path.join(import.meta.dirname, filename), JSON.stringify(auditReport, null, 2))
+ if (!res.ok) {
+ throw new Error(`bulk audit endpoint responded with ${res.status}: ${await res.text()}`)
+ }
+ const bulkResponse = await res.json()
+ fs.writeFileSync(path.join(import.meta.dirname, filename), JSON.stringify(bulkResponse, null, 2))
}
; (async () => {
@@ -34,4 +41,7 @@ async function writeResponse (lockfileDir: string, filename: string, opts: {
})
await writeResponse(f.find('has-vulnerabilities'), 'all-vulnerabilities-response.json', {})
await writeResponse(f.find('has-outdated-deps'), 'no-vulnerabilities-response.json', {})
- })()
+ })().catch((err: unknown) => {
+ console.error(err)
+ process.exitCode = 1
+ })
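The body the script POSTs is the flat `{ pkgName: [versions] }` map the bulk endpoint expects. A minimal sketch of building that map from a list of package snapshots (the `buildBulkRequest` helper and its input shape are illustrative assumptions; the real `lockfileToAuditRequest` walks the lockfile directly):

```typescript
// Build the /-/npm/v1/security/advisories/bulk request body:
// a flat map from package name to the list of distinct installed versions.
function buildBulkRequest (packages: Array<{ name: string, version: string }>): Record<string, string[]> {
  const request: Record<string, string[]> = {}
  for (const { name, version } of packages) {
    request[name] ??= []
    if (!request[name].includes(version)) {
      request[name].push(version)
    }
  }
  return request
}
```

Note that the same package appearing through several install chains contributes each distinct version only once; the per-path findings are reconstructed later from the lockfile walk, not from this request.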

pnpm-lock.yaml generated

@@ -2710,6 +2710,9 @@ importers:
deps/compliance/audit:
dependencies:
+ '@pnpm/deps.path':
+ specifier: workspace:*
+ version: link:../../path
'@pnpm/error':
specifier: workspace:*
version: link:../../../core/error
@@ -2737,12 +2740,9 @@ importers:
'@pnpm/types':
specifier: workspace:*
version: link:../../../core/types
- '@pnpm/workspace.project-manifest-reader':
- specifier: workspace:*
- version: link:../../../workspace/project-manifest-reader
- ramda:
+ semver:
specifier: 'catalog:'
- version: '@pnpm/ramda@0.28.1'
+ version: 7.7.4
devDependencies:
'@pnpm/constants':
specifier: workspace:*
@@ -2759,9 +2759,9 @@ importers:
'@pnpm/testing.mock-agent':
specifier: workspace:*
version: link:../../../testing/mock-agent
- '@types/ramda':
+ '@types/semver':
specifier: 'catalog:'
- version: 0.31.1
+ version: 7.7.1
deps/compliance/commands:
dependencies:


@@ -60,8 +60,6 @@ allowBuilds:
unrs-resolver: true
auditConfig:
- ignoreCves:
- - CVE-2025-56200
+ ignoreGhsas:
+ - GHSA-2g4f-4pwh-qvx6
+ - GHSA-76c9-3jph-rj3q