Bug Description
When calling `.match(url)` on a cache, a `TypeError` can be thrown stating that the body has already been consumed.
Reproducible By
I have been trying to reproduce this for a good while now; the snippet below has consistently triggered the error for our team. The reproduction takes a hint from the original posted here, but this version consistently fails after a few seconds (or less).
Changing `ALLOCATION_SIZE_BETWEEN_USE` to a larger value makes the issue occur sooner.
```js
import { caches, Response } from "undici";

const url = "https://example.com/";
const ALLOCATION_SIZE_BETWEEN_USE = 30000;

const cache = await caches.open("cache");
await cache.put(
  url,
  Response.json({
    initiatedAt: new Date().toISOString(),
    random: Math.random(),
    answer: 42,
  }),
);

for (let i = 0; i < 100; i += 1) {
  {
    void Array.from({ length: ALLOCATION_SIZE_BETWEEN_USE }, () => ({
      value: Math.random(),
    }));
    // Without the long wait, no problem occurs
    // 1-2ms timeout does not cause problem
    // >3ms timeout does cause problem inconsistently
    // >4ms timeout does cause problem consistently
    // queueMicrotask does not cause problem
    await new Promise((resolve) => setTimeout(resolve, 4));
  }
  const match = await cache.match(url);
  const result = await match.json();
  console.log(result);
}
```

In local environments where we want to use in-memory cache storage, this makes any non-trivial use unusable whenever other work is running alongside.
In tests it had always worked for small cases, but as we used this more and had other resources being allocated in memory, we hit the error faster, which matches the observation above that increasing the allocation size makes the issue appear sooner.
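Until this is fixed, one userland mitigation we have considered is to not retain `Response` objects at all: drain the body once at `put()` time and mint a fresh `Response` on every `match()`, so there is no long-lived stream for the finalizer to cancel. This is an illustrative sketch only (`ByteCache` is a hypothetical name, not undici API), using Node's global `Response`:

```javascript
// Hypothetical workaround sketch: store raw body bytes instead of Response
// objects, so no long-lived stream exists to be garbage collected.
class ByteCache {
  #store = new Map();

  async put(url, response) {
    // Drain the body once, up front, into a Buffer we fully own.
    this.#store.set(url, Buffer.from(await response.arrayBuffer()));
  }

  match(url) {
    const bytes = this.#store.get(url);
    // A new Response per match means a brand-new, unconsumed stream.
    return bytes ? new Response(bytes) : undefined;
  }
}

// Usage mirroring the reproduction above.
const cache = new ByteCache();
await cache.put(
  "https://example.com/",
  new Response(JSON.stringify({ answer: 42 }), {
    headers: { "content-type": "application/json" },
  }),
);
const result = await cache.match("https://example.com/").json();
console.log(result.answer); // 42
```

This loses the header/vary semantics of the real Cache API, so it is only a stopgap for simple keyed lookups.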
Expected Behavior
The code should run to completion without throwing (it does in a browser).
Logs & Screenshots
```
{
  initiatedAt: '2025-12-23T01:39:11.929Z',
  random: 0.1796295151908207,
  answer: 42
}
/project/node_modules/undici/lib/web/webidl/index.js:34
    return new TypeError(`${message.header}: ${message.message}`)
           ^

TypeError: Response.clone: Body has already been consumed.
    at webidl.errors.exception (/project/node_modules/undici/lib/web/webidl/index.js:34:10)
    at Response.clone (/project/node_modules/undici/lib/web/fetch/response.js:236:27)
    at #internalMatchAll (/project/node_modules/undici/lib/web/cache/cache.js:799:40)
    at Cache.match (/project/node_modules/undici/lib/web/cache/cache.js:54:37)
    at file:///project/test.js:29:29

Node.js v24.12.0
```
Environment
Additional context
This may be related to #4150
Additional Debugging
While debugging I could see that the stream does get garbage collected in my project before the match call, e.g. if I add this log:
```js
const streamRegistry = new FinalizationRegistry((weakRef) => {
  const stream = weakRef.deref()
  if (stream && !stream.locked && !isDisturbed(stream) && !isErrored(stream)) {
    console.log("COLLECTION TIME");
    stream.cancel('Response object has been garbage collected').catch(noop)
  }
})
```

I see the error for the first time right after:
```
COLLECTION TIME
TypeError: Response.clone: Body has already been consumed.
```
I found that when cloning, if the body is unusable but a source is available (e.g. a string or Buffer), the stream can be reconstructed on demand, and the clone then works as expected:
```js
// 1. If this is unusable, then throw a TypeError.
if (bodyUnusable(this.#state)) {
  if (typeof this.#state.body.source === "string" || util.isBuffer(this.#state.body.source)) {
    const buffer = typeof this.#state.body.source === "string"
      ? Buffer.from(this.#state.body.source, "utf-8")
      : this.#state.body.source;
    this.#state = {
      ...this.#state,
      body: {
        ...this.#state.body,
        stream: ReadableStream.from([buffer]),
      }
    }
  } else {
    throw webidl.errors.exception({
      header: 'Response.clone',
      message: 'Body has already been consumed.'
    })
  }
}

// 2. Let clonedResponse be the result of cloning this’s response.
const clonedResponse = cloneResponse(this.#state)

// Note: re-register because of the new stream.
if (this.#state.body?.stream) {
  streamRegistry.register(this, new WeakRef(this.#state.body.stream))
}
```

I don't believe this is a proper fix, but it is a hint that we should be able to start a new stream here, or keep the original stream from being finalised while the Response is held in a cache or elsewhere.
There are many reasons why a Response object may be kept around; not being able to use `.clone()` after some time has passed is definitely unexpected in userland, in my opinion.
For a cached Response, which seems the most likely kind to stay around for a long while, the `body.source` value is always a buffer if there is a body to read. I see this is also in line with the spec steps for `cache.put()`:
Lines 360 to 364 in 59fd5f4:

```js
const bytes = await bodyReadPromise.promise

if (clonedResponse.body != null) {
  clonedResponse.body.source = bytes
}
```
I'd assume that if a `.source` is available, a closed stream should never be an actual concern: when a clone is requested, the body should be brought back to life and made usable again, especially if the stream was only finalised rather than consumed externally.
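To illustrate why the retained source bytes should be sufficient (using only Node's global `Response`, not undici internals, so this is a sketch of the idea rather than the proposed patch): once a body has been read its stream is disturbed, but a fresh `Response` built over the same bytes carries a brand-new, fully usable stream, which is all a revived clone would need.

```javascript
// The retained source bytes are enough to mint a fresh, readable body.
const source = Buffer.from(JSON.stringify({ answer: 42 }), "utf-8");

// Simulate a consumed body: reading it once disturbs the stream.
const consumed = new Response(source);
await consumed.arrayBuffer();
console.log(consumed.bodyUsed); // true

// "Reviving": a new Response over the same source bytes is fully usable.
const revived = new Response(source);
const body = await revived.json();
console.log(body.answer); // 42
```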