Fix double-free in S3 static methods when path is passed as string#28592
Conversation
Updated 9:37 PM PT - Mar 26th, 2026
❌ @robobun, your commit 9d46dbd has 2 failures in
🧪 To try this PR locally: `bunx bun-pr 28592` — that installs a local version of the PR; run it with `bun-28592 --bun`
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the … setting. Use the following commands to manage reviews:
Walkthrough: Unconditionally deinitialize parsed PathLike/path resources across S3-related Zig codepaths (including constructInternal), fix …

Changes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bun.js/webcore/S3File.zig`:
- Around line 86-87: The constructor currently transfers path ownership only
after constructS3FileInternalStore returns, so if
constructS3FileWithS3Credentials does further fallible work after
Blob.Store.initS3() (e.g., getTruthyComptime("type"), file_type.toSlice()) under
errdefer store.deinit(), a thrown error causes a double-free because the outer
errdefer still sees .path; fix by performing the ownership handoff inside the
constructor (move setting of path_or_blob to .{ .blob = blob } into
constructS3FileInternalStore so the moved path is hidden from outer errdefer) or
alternatively move all fallible operations (getTruthyComptime,
file_type.toSlice, etc.) to before calling Blob.Store.initS3() so no fallible
work runs after initS3; update the same pattern at the other occurrences (lines
referenced around constructS3FileInternalStore,
constructS3FileWithS3Credentials, and the errdefer store.deinit usage).
In `@test/js/bun/s3-presign-double-free.test.ts`:
- Around line 8-16: The test's calls array currently uses Date() inputs which
rely on external S3 behavior to trigger the post-initS3() ownership-unwind path;
replace those Date() arguments with a deterministic local throwing object (an
object whose type getter throws) so each call to Bun.S3Client.presign / unlink /
size / exists / stat / write will synchronously throw inside the post-initS3()
path and reliably hit the constructS3FileInternalStore() ownership bug; update
the calls array entries that pass Date() to instead pass this throwing-type
object so the test deterministically reproduces the double-free/unwind scenario.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7a910e43-869e-405f-83e7-65d0c0bc3f44
📒 Files selected for processing (2)
- src/bun.js/webcore/S3File.zig
- test/js/bun/s3-presign-double-free.test.ts
```js
const calls = [
  () => Bun.S3Client.presign(Date()),
  () => Bun.S3Client.presign(Date(), Date()),
  () => Bun.S3Client.unlink(Date()),
  () => Bun.S3Client.size(Date()),
  () => Bun.S3Client.exists(Date()),
  () => Bun.S3Client.stat(Date()),
  () => Bun.S3Client.write(Date(), "data"),
];
```
Use a deterministic local throw in the post-initS3() path.
These inputs only reach the risky unwind if the call happens to fail after constructS3FileInternalStore() has already taken ownership, which currently depends on ambient S3 config or network behavior. A throwing type getter gives you a local, deterministic failure in that path and will reliably catch this ownership bug.
Example repro shape
```diff
+const throwingOptions = {
+  get type() {
+    throw new Error("boom");
+  },
+};
 const calls = [
-  () => Bun.S3Client.presign(Date()),
-  () => Bun.S3Client.presign(Date(), Date()),
-  () => Bun.S3Client.unlink(Date()),
-  () => Bun.S3Client.size(Date()),
-  () => Bun.S3Client.exists(Date()),
-  () => Bun.S3Client.stat(Date()),
-  () => Bun.S3Client.write(Date(), "data"),
+  () => Bun.S3Client.presign(Date(), throwingOptions),
+  () => Bun.S3Client.unlink(Date(), throwingOptions),
+  () => Bun.S3Client.size(Date(), throwingOptions),
+  () => Bun.S3Client.exists(Date(), throwingOptions),
+  () => Bun.S3Client.stat(Date(), throwingOptions),
+  () => Bun.S3Client.write(Date(), "data", throwingOptions),
 ];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/js/bun/s3-presign-double-free.test.ts` around lines 8 - 16, The test's
calls array currently uses Date() inputs which rely on external S3 behavior to
trigger the post-initS3() ownership-unwind path; replace those Date() arguments
with a deterministic local throwing object (an object whose type getter throws)
so each call to Bun.S3Client.presign / unlink / size / exists / stat / write
will synchronously throw inside the post-initS3() path and reliably hit the
constructS3FileInternalStore() ownership bug; update the calls array entries
that pass Date() to instead pass this throwing-type object so the test
deterministically reproduces the double-free/unwind scenario.
♻️ Duplicate comments (1)
test/js/bun/s3-presign-double-free.test.ts (1)
Lines 11–19: ⚠️ Potential issue | 🟠 Major — Make the regression path deterministic after ownership transfer.

Lines 12–18 currently rely on `Date()` inputs and ambient behavior, so this can pass without reliably entering the post-`constructS3FileInternalStore()` unwind path that previously double-freed memory. Use a local throwing options getter and pass it to each static method to force a deterministic local failure in that path.

Suggested deterministic repro shape:

```diff
 async function main() {
+  const throwingOptions = {
+    get type() {
+      throw new Error("boom");
+    },
+  };
   const calls = [
-    () => Bun.S3Client.presign(Date()),
-    () => Bun.S3Client.presign(Date(), Date()),
-    () => Bun.S3Client.unlink(Date()),
-    () => Bun.S3Client.size(Date()),
-    () => Bun.S3Client.exists(Date()),
-    () => Bun.S3Client.stat(Date()),
-    () => Bun.S3Client.write(Date(), "data"),
+    () => Bun.S3Client.presign(Date(), throwingOptions),
+    () => Bun.S3Client.unlink(Date(), throwingOptions),
+    () => Bun.S3Client.size(Date(), throwingOptions),
+    () => Bun.S3Client.exists(Date(), throwingOptions),
+    () => Bun.S3Client.stat(Date(), throwingOptions),
+    () => Bun.S3Client.write(Date(), "data", throwingOptions),
   ];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/js/bun/s3-presign-double-free.test.ts` around lines 11 - 19, Replace the non-deterministic Date() inputs used in the calls array for Bun.S3Client.presign, presign (2-arg), unlink, size, exists, stat, and write with a deterministic failing "options" supplier to force the post-constructS3FileInternalStore() unwind path; create a local function that always throws (e.g., getOptionsThatThrows()) and pass its result or the function (as appropriate for the overloads) into each static method invocation so each call deterministically triggers the same local failure and exercises the regression path reliably.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@test/js/bun/s3-presign-double-free.test.ts`:
- Around line 11-19: Replace the non-deterministic Date() inputs used in the
calls array for Bun.S3Client.presign, presign (2-arg), unlink, size, exists,
stat, and write with a deterministic failing "options" supplier to force the
post-constructS3FileInternalStore() unwind path; create a local function that
always throws (e.g., getOptionsThatThrows()) and pass its result or the function
(as appropriate for the overloads) into each static method invocation so each
call deterministically triggers the same local failure and exercises the
regression path reliably.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 238927f0-c92f-4854-9df2-d02d4c5b464d
📒 Files selected for processing (1)
test/js/bun/s3-presign-double-free.test.ts
The fix is targeted and the logic appears sound, but this is a memory-safety fix in production Zig code — worth a human look to confirm the ownership transfer semantics are correct in all error paths.
Extended reasoning...
Overview
This PR modifies src/bun.js/webcore/S3File.zig to fix a use-after-poison (double-free) in 6 S3Client static methods (presign, unlink, write, size, exists, stat). The fix adds path_or_blob = .{ .blob = blob }; after constructS3FileInternalStore succeeds, preventing the errdefer guard from freeing path memory that has already been transferred to the store. A regression test is also added.
Security risks
No security-sensitive surfaces (auth, crypto, permissions) are introduced. The fix removes a memory-safety bug, which is a net positive for security.
Level of scrutiny
High. This is a memory management fix in Zig, where ownership semantics are manual and subtle. The fix itself is mechanical (same one-liner repeated across 6 functions), but the correctness depends on understanding the ownership contract between PathLike, Store.initS3, blob.deinit(), and the errdefer guard. The described root cause (toThreadSafe being a no-op for encoded_slice) is plausible but not trivially verifiable without tracing through the allocator chain.
Other factors
The two inline bug reports are nits (missing expect(stderr).toBe("") assertion, test placed in wrong directory). Neither blocks the fix. The PR description is thorough and the code comments explain intent clearly. Given the memory-safety nature of the change and the low cost of a human review pass on a 6-line fix, deferring is the appropriate call.
```ts
  env: bunEnv,
  stderr: "pipe",
});

const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);

expect(stdout).toBe("OK\n");
expect(exitCode).toBe(0);
```
🟡 The test pipes and collects stderr but never asserts expect(stderr).toBe(""), violating the explicit pattern in test/CLAUDE.md. This means sanitizer warnings that do not abort the process (ASAN/UBSAN output with exit code 0) would be silently ignored. Add expect(stderr).toBe("") before the existing stdout/exitCode assertions.
Extended reasoning...
What the bug is: The test at test/js/bun/s3-presign-double-free.test.ts (lines 29-36) explicitly pipes stderr with stderr: "pipe" and collects it via proc.stderr.text(), but the collected value is never asserted upon. The assertions only check stdout and exitCode, leaving stderr unverified.
The specific code path: Lines 29-36 collect [stdout, stderr, exitCode] in a Promise.all. The variable stderr is bound but unused in any assertion. expect(stdout).toBe("OK\n") and expect(exitCode).toBe(0) are present, but no expect(stderr).toBe("") follows.
Why existing code does not prevent it: The test correctly validates the crash scenario (a double-free would cause a non-zero exit or no "OK" output). However, sanitizer tools like AddressSanitizer can emit diagnostic output to stderr and still let the process exit with code 0. Without asserting on stderr, such diagnostics would be silently swallowed.
CLAUDE.md violation: test/CLAUDE.md lines 36-51 (single-file template) and lines 60-96 (multi-file template) both explicitly show expect(stderr).toBe("") as part of the standard assertion pattern for spawned Bun processes. One refutation claimed CLAUDE.md prohibits this check at line 99 -- this is factually wrong. Line 99 of CLAUDE.md actually reads: "Generally, await using or using is a good idea to ensure proper resource cleanup." The refutation fabricated a non-existent rule.
On the S3-errors-to-stderr objection: A refutation argued that failed S3 calls (no credentials) would produce stderr output. In practice, S3 errors in Bun propagate as JS exceptions, which are entirely swallowed by try { await call(); } catch(e) {}. No JS exception output reaches stderr; only native crashes or sanitizer warnings would. The subprocess stderr should be empty under normal conditions, making expect(stderr).toBe("") a sound assertion.
Step-by-step proof: (1) A hypothetical regression introduces a use-after-poison that only triggers ASAN warning mode (not abort). (2) The subprocess prints an ASAN report to stderr but still exits 0 and prints "OK" to stdout. (3) Both existing assertions pass. (4) The ASAN warning is never detected and CI appears green. Adding expect(stderr).toBe("") would catch this case.
How to fix: Add expect(stderr).toBe(""); after the Promise.all, before the existing stdout/exitCode assertions, matching the CLAUDE.md convention shown in its templates.
Force-pushed from 0c75e21 to 375d788.
```zig
var path = pathlike;
// this actually protects/refs the pathlike
path.toThreadSafe();
// For encoded_slice, toThreadSafe is a no-op, so the store would alias
// the caller's memory. Clone it so the store owns an independent copy.
if (path == .encoded_slice and path.encoded_slice.isAllocated()) {
    path = .{ .encoded_slice = try path.encoded_slice.toOwned(allocator) };
}

const store = Blob.Store.new(.{
    .data = .{
```
🟣 Pre-existing bug: in Store.S3.path(), the trailing-slash stripping branch path_name = path_name[0..path_name.len] is a no-op — it produces the identical slice instead of removing the last character. The symmetric backslash branch at the next line correctly uses path_name.len - 1. As a result, S3 keys ending in / are silently passed verbatim to all S3 operations (presign, stat, exists, unlink, write, delete), potentially causing 404s since S3 treats prefix/ and prefix as distinct keys. Fix: change to path_name[0..path_name.len - 1].
Extended reasoning...
What the bug is and how it manifests
In Store.S3.path() (Store.zig, the path method of the S3 struct), the branch intended to strip a trailing / from the S3 path contains a logic error:
```zig
if (strings.endsWith(path_name, "/")) {
    path_name = path_name[0..path_name.len]; // BUG: no-op
} else if (strings.endsWith(path_name, "\\")) {
    path_name = path_name[0 .. path_name.len - 1]; // Correct
}
```

`path_name[0..path_name.len]` creates a slice from index 0 to the full length — identical to the original. No character is stripped. The comment above the block says "normalize start and ending", and the symmetric backslash branch correctly uses `path_name.len - 1`, confirming the intent was to strip one character.
The specific code path that triggers it
Any S3 URL that resolves (after bun.URL.parse().s3Path()) to a string ending in / will hit this branch and return the path unchanged. For example, s3://bucket/prefix/ would yield prefix/ instead of prefix as the key used in all downstream S3 API calls.
Why existing code does not prevent it
There is no other sanitization of the trailing slash downstream. The path() method is called directly before constructing S3 API requests (presign, stat, exists, unlink, write, delete), so whatever path() returns is used verbatim as the S3 object key.
Impact
S3 treats prefix/ and prefix as completely different object keys. Operations on prefix/ when the intended key is prefix will silently 404 or operate on an unexpected key. This affects all six public S3 APIs: presign, stat, exists, size, unlink, and write.
Step-by-step proof
1. User calls `Bun.S3Client.stat("s3://bucket/prefix/")` (trailing slash).
2. `Store.S3.path()` calls `bun.URL.parse("s3://bucket/prefix/").s3Path()` — returns `"prefix/"` (or `"/prefix/"` before the leading-slash strip).
3. The `strings.endsWith(path_name, "/")` check is true.
4. `path_name = path_name[0..path_name.len]` — no-op; `path_name` is still `"prefix/"`.
5. The leading-slash strip does not apply.
6. `"prefix/"` is returned and used as the S3 key.
7. The S3 HEAD request targets `prefix/`, not `prefix` — silently wrong.
How to fix
Change path_name[0..path_name.len] to path_name[0..path_name.len - 1] to match the intent and the symmetric backslash branch. This is a pre-existing bug not introduced by this PR; the PR does not modify S3.path(). However, the PR extensively modifies the S3 path handling infrastructure in the same file, making this a natural opportunity to flag it.
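The no-op can be demonstrated with ordinary string slicing — JavaScript's half-open `slice` bounds mirror the Zig slice expression (an illustration, not Bun's code):

```javascript
const key = "prefix/";

// Mirrors the buggy `path_name[0..path_name.len]`: slicing up to the full
// length returns the string unchanged, so the trailing slash survives.
const buggy = key.slice(0, key.length);
console.log(buggy); // "prefix/"

// Mirrors the intended `path_name[0..path_name.len - 1]`.
const fixed = key.slice(0, key.length - 1);
console.log(fixed); // "prefix"
```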
Force-pushed from 9538183 to 7fde217.
```js
() => Bun.S3Client.presign(Date()),
() => Bun.S3Client.presign(Date(), Date()),
() => Bun.S3Client.unlink(Date()),
() => Bun.S3Client.size(Date()),
() => Bun.S3Client.exists(Date()),
() => Bun.S3Client.stat(Date()),
() => Bun.S3Client.write(Date(), "data"),
```
🟡 The regression test uses Date() (without new) as the options argument, which returns a string — not a JS object. Because opts.isObject() returns false for a string, the getTruthyComptime("type") branch inside constructS3FileWithS3Credentials is never reached, so the test only exercises the no-credentials failure path (Path 1) and not the throwing-type-getter failure path (Path 2) described in the PR. Adding a case like () => Bun.S3Client.presign(Date(), { get type() { throw new Error("boom"); } }) would deterministically trigger the post-initS3() code path the PR claims to fix.
Extended reasoning...
What the bug is and how it manifests
In JavaScript, Date() called without new returns a plain string (e.g., "Thu Mar 26 2026 ...") rather than a Date object. The test passes this string as the options argument to every call in the calls array (e.g., Bun.S3Client.presign(Date(), Date())). Inside constructS3FileWithS3Credentials, the options object is only interrogated when opts.isObject() is true. Since a string is not an object, opts.isObject() returns false and the entire getTruthyComptime("type") block — which is the code that could exercise the post-initS3() failure path — is skipped unconditionally.
The specific code path that triggers Path 2
The PR description identifies two double-free paths. Path 2 occurs when constructS3FileWithS3Credentials itself fails after Blob.Store.initS3() has already aliased the path: (1) initS3 runs and the store holds a pointer to the path buffer; (2) errdefer store.deinit() is armed; (3) opts.getTruthyComptime(globalObject, "type") invokes a throwing getter; (4) the store errdefer fires and frees the path buffer; (5) error.JSError propagates to the caller before path_or_blob = .{ .blob = blob } is reached; (6) the outer errdefer still sees path_or_blob == .path and calls path_or_blob.path.deinit() on already-freed memory.
Why the current test does not prevent it
All entries in the calls array pass either no options or a Date() string as options. A string fails opts.isObject(), so getTruthyComptime is never invoked. The test therefore only exercises the path where the constructor succeeds but getPresignUrlFrom / the subsequent operation fails due to missing S3 credentials. This is Path 1, not Path 2.
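Both halves of this argument are easy to verify locally: `Date()` without `new` yields a string (so an `isObject()`-style gate fails), while a throwing accessor fails the instant any options reader touches it — plain JavaScript, independent of Bun:

```javascript
// Date() without `new` returns a date string, not a Date object.
const opts = Date();
console.log(typeof opts); // "string"

// So an isObject()-style check skips the options-reading branch entirely.
console.log(typeof opts === "object" && opts !== null); // false

// A throwing `type` getter, by contrast, fails deterministically the moment
// a property read happens — no credentials or network needed.
const throwingOptions = { get type() { throw new Error("boom"); } };
let threw = false;
try {
  void throwingOptions.type;
} catch (e) {
  threw = e.message === "boom";
}
console.log(threw); // true
```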
Addressing the refutation
One verifier argued that Store.zig was not modified in commit 9538183, so Path 2 was never actually fixed — and therefore not testing it is consistent. This is a fair observation about implementation completeness. However, it cuts both ways: if Path 2 remains broken, a test that attempts to trigger it (with { get type() { throw new Error("boom"); } }) would either crash under ASAN (revealing the unfixed bug) or pass (revealing the fix was effective through some other mechanism). Either outcome is more informative than silently skipping the path. The PR description explicitly mentions Path 2 as something that "existed" and the coderabbitai reviewer correctly identified the exact missing test case.
Impact
The coverage gap means the described Path 2 scenario remains unverified by CI. Under an ASAN build the double-free would manifest as a use-after-poison crash if Path 2 is genuinely unfixed. The missing test would catch any regression reintroducing the bug in either path.
How to fix
Add a throwingOptions object to the test and include it as the options argument in the calls array:
```js
const throwingOptions = { get type() { throw new Error("boom"); } };
// then add:
() => Bun.S3Client.presign(Date(), throwingOptions),
() => Bun.S3Client.unlink(Date(), throwingOptions),
// etc.
```

This deterministically exercises the post-initS3() failure path on every run without requiring real S3 credentials or network access.
```zig
}
const options = args.nextEat();
var blob = try constructS3FileInternalStore(globalThis, path.path, options);
path_or_blob = .{ .blob = blob }; // path ownership transferred to store
defer blob.deinit();
return try getPresignUrlFrom(&blob, globalThis, options);
},
```
🔴 The double-free fix was applied to S3File.zig static methods but not to the structurally identical S3Client.zig instance methods (presign, exists, size, stat, write, unlink). In those methods, errdefer path.deinit() remains armed after constructS3FileWithS3CredentialsAndOptions succeeds and the store aliases the original path buffer — if any downstream call (e.g. getPresignUrlFrom with a throwing method getter) throws, both defer blob.detach() and errdefer path.deinit() fire on the same memory, causing use-after-poison. Apply the same path_or_blob = .{ .blob = blob } neutralization pattern to the corresponding .path => branches in all six S3Client.zig instance methods.
Extended reasoning...
What the bug is and how it manifests
The PR fixes a double-free in S3File.zig static methods (e.g. presign, unlink, write, size, exists, stat) by adding path_or_blob = .{ .blob = blob } after constructS3FileInternalStore succeeds. This changes the union tag so the outer errdefer (which only fires when path_or_blob == .path) is disabled once the store takes ownership. However, the structurally identical pattern in S3Client.zig instance methods was not updated — errdefer path.deinit() remains always armed after blob construction succeeds.
The specific code path that triggers it
In each S3Client.zig instance method (e.g. presign):
1. `PathLike.fromJS(...)` allocates an `encoded_slice` (pointer A).
2. `errdefer path.deinit()` is registered — no disabling mechanism exists.
3. `constructS3FileWithS3CredentialsAndOptions(...)` succeeds; inside, `Blob.Store.initS3` stores `pathlike` directly (no clone for the non-allocated case), aliasing pointer A into the store.
4. `defer blob.detach()` is registered with `store.ref_count = 1`.
5. A downstream call (e.g. `getPresignUrlFrom` with a `method` getter that throws) raises a JS error.
Why existing code does not prevent it
The path_or_blob = .{ .blob = blob } assignment that disables the errdefer in S3File.zig is absent from S3Client.zig. There is no other mechanism to communicate to the outer errdefer that ownership has already been transferred. The S3File.zig fix is purely call-site level — it works because it resets the union tag after the successful constructor return — but S3Client.zig was never updated with the equivalent reset.
Impact
All six S3Client instance methods are affected: presign, exists, size, stat, write, unlink. The crash is deterministic and user-triggerable from JavaScript without any special permissions or credentials.
How to fix
In each S3Client.zig instance method, add the ownership-transfer line immediately after constructS3FileWithS3CredentialsAndOptions succeeds, neutralizing the errdefer path.deinit() the same way the static methods do it:
```zig
var blob = try constructS3FileWithS3CredentialsAndOptions(...);
path_neutralized = blob; // or equivalent tag-flip to disable errdefer
defer blob.detach();
```

Step-by-step proof (concrete trigger for presign)
```js
const client = new Bun.S3Client({ bucket: "test" });
client.presign("mykey", { get method() { throw new Error("boom"); } });
```

1. `PathLike.fromJS` allocates `encoded_slice` for `"mykey"` → pointer A.
2. `errdefer path.deinit()` armed.
3. `constructS3FileWithS3CredentialsAndOptions` succeeds; store aliases pointer A (`pathlike.encoded_slice` = pointer A).
4. `defer blob.detach()` registered; `store.ref_count = 1`.
5. `getPresignUrlFrom` calls `options.getTruthyComptime(globalObject, "method")` → invokes the getter → throws `Error("boom")`.
6. `defer blob.detach()` fires → `store.deref()` → ref 1→0 → `store.deinit()` → `s3.deinit()` → `pathlike.deinit()` → frees pointer A.
7. `errdefer path.deinit()` fires (because an error occurred) → `path.deinit()` → frees pointer A again → use-after-poison.
Force-pushed from 3c23dcc to 1e28007.
constructS3FileInternalStore passes the PathLike by value to Store.initS3, which calls toThreadSafe() on a local copy. For the encoded_slice variant, toThreadSafe is a no-op, so both the store's copy and the caller's original share the same underlying allocated memory. If a subsequent operation fails, defer blob.deinit() frees the store (and the path data), then the errdefer also tries to free the same path via path_or_blob.path.deinit(), causing a use-after-poison. Fix by setting path_or_blob to the blob variant after constructS3FileInternalStore succeeds, so the errdefer check (path_or_blob == .path) becomes false and the path is only freed once through blob.deinit().
Force-pushed from 1e28007 to 9d46dbd.
What does this PR do?
Fix use-after-poison (double-free) crash in S3Client static methods (`presign`, `unlink`, `write`, `size`, `exists`, `stat`) when called with a string path argument.

Root Cause

`Store.initS3` passes the `PathLike` through `toThreadSafe()`, but for the `encoded_slice` variant this is a no-op, leaving the store's copy aliased with the caller's original memory. Two double-free paths existed:

1. If a subsequent operation after `constructS3FileInternalStore` failed, both `defer blob.deinit()` and the outer `errdefer` freed the same memory.
2. If `constructS3FileWithS3Credentials` itself failed after `initS3` (e.g. a throwing `type` getter in options), the store's `errdefer` and the caller's `errdefer` both freed the aliased path.

Fix

- Clone the `encoded_slice` in `initS3`/`initS3WithReferencedCredentials` so the store always owns an independent copy of the path data.
- Changed `errdefer` to `defer` for path cleanup in S3File functions, since the path and store copies are now independent and both need freeing.

How did you verify your code works?

Added a regression test: `test/js/bun/s3/s3-presign-double-free.test.ts`

Fingerprint: `Address: use-after-poison: bun-debug+0x90076b8`