feat: add deploy v1 runtime, schema, and examples (#88)
📝 Walkthrough

Adds persona-kit types/schema/parsers/triggers, three example deployable personas with handlers and docs, a deploy bundler with dev/sandbox run modes, a runtime package with filesystem-backed integration clients, tests, and packaging/tsconfig updates.

Changes (deployable persona infrastructure):
- Persona types & exports
- Schema generation
- Parsing & linting
- Persona-kit package updates
- Examples
- Deploy package
- Run modes
- Runtime & clients
- Root docs & build
- Workload router
Estimated code review effort 🎯 4 (Complex) | ⏱️ ~60 minutes
```ts
const pullRequest = event.pull_request as { number?: number } | undefined;
const issue = event.issue as { number?: number } | undefined;
const owner = typeof repository?.owner === 'string'
  ? repository.owner
  : repository?.owner?.login ?? repository?.full_name?.split('/')[0];
const repo = repository?.name ?? repository?.full_name?.split('/')[1];
const number = pullRequest?.number ?? issue?.number ?? Number(event.number);
```
🔴 githubTarget() cannot extract PR number from check_run.completed events
The handleFailedCheck handler dispatches check_run.completed events through githubTarget() (examples/review-agent/agent.ts:52), which tries to extract the PR number via pullRequest?.number ?? issue?.number ?? Number(event.number) at line 17. GitHub's check_run webhook payload does not have top-level pull_request or issue fields — the associated PRs live under event.check_run.pull_requests[]. Since all three fallbacks evaluate to undefined/NaN, Number.isFinite(NaN) is false, and the function always throws "GitHub event is missing owner, repo, or number". The check_run handler can never comment on the failed PR.
Expected vs actual check_run payload shape
GitHub check_run events have this structure:
```json
{
  "check_run": {
    "pull_requests": [{ "number": 42 }],
    "conclusion": "failure"
  },
  "repository": { "owner": { "login": "acme" }, "name": "app" }
}
```

But githubTarget looks for `event.pull_request` and `event.issue` (both absent), then falls through to `Number(event.number)`, which is NaN.
Suggested fix:

```diff
 const pullRequest = event.pull_request as { number?: number } | undefined;
 const issue = event.issue as { number?: number } | undefined;
+const checkRunPrs = (event.check_run as { pull_requests?: Array<{ number?: number }> } | undefined)?.pull_requests;
 const owner = typeof repository?.owner === 'string'
   ? repository.owner
   : repository?.owner?.login ?? repository?.full_name?.split('/')[0];
 const repo = repository?.name ?? repository?.full_name?.split('/')[1];
-const number = pullRequest?.number ?? issue?.number ?? Number(event.number);
+const number = pullRequest?.number ?? issue?.number ?? checkRunPrs?.[0]?.number ?? Number(event.number);
```
```ts
'comment.create',
'comment.update',
'comment.remove',
```
🟡 README documents comment.created as a Linear trigger, but KNOWN_TRIGGERS only has comment.create
The README at line 100 lists comment.created as a typical Linear trigger, but KNOWN_TRIGGERS.linear in packages/persona-kit/src/triggers.ts:18 only contains comment.create (the older webhook action format). Jira and Notion both include comment.created in their known triggers, while Linear only has comment.create, comment.update, comment.remove. A persona following the README and using comment.created for Linear would get an unknown_trigger lint warning from lintTriggers().
Suggested fix:

```diff
 'comment.create',
 'comment.update',
 'comment.remove',
+'comment.created',
+'comment.updated',
+'comment.deleted',
```
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/workload-router/routing-profiles/schema.json (1)
13-44: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Schema is out of sync with default profile intent keys.

With `additionalProperties: false`, this schema rejects `intents.persona-improvement` and `intents.relay-orchestrator`, which are currently present in `routing-profiles/default.json`, making the default profile invalid.

Proposed schema sync:

```diff
-  "required": ["implement-frontend", "review", "architecture-plan", "requirements-analysis", "research", "implementation", "debugging", "security-review", "documentation", "verification", "test-strategy", "tdd-enforcement", "flake-investigation", "opencode-workflow-correctness", "npm-provenance", "cloud-sandbox-infra", "sage-slack-egress-migration", "sage-proactive-rewire", "cloud-slack-proxy-guard", "sage-cloud-e2e-conduction", "capability-discovery", "npm-package-compat", "posthog", "persona-authoring", "agent-relay-workflow", "slop-audit", "api-contract-review", "local-stack-orchestration", "e2e-validation", "write-integration-tests"],
+  "required": ["implement-frontend", "review", "architecture-plan", "requirements-analysis", "research", "implementation", "debugging", "security-review", "documentation", "verification", "test-strategy", "tdd-enforcement", "flake-investigation", "opencode-workflow-correctness", "npm-provenance", "cloud-sandbox-infra", "sage-slack-egress-migration", "sage-proactive-rewire", "cloud-slack-proxy-guard", "sage-cloud-e2e-conduction", "capability-discovery", "npm-package-compat", "posthog", "persona-authoring", "persona-improvement", "agent-relay-workflow", "slop-audit", "api-contract-review", "local-stack-orchestration", "e2e-validation", "write-integration-tests", "relay-orchestrator"],
   ...
   "posthog": { "$ref": "#/definitions/rule" },
   "persona-authoring": { "$ref": "#/definitions/rule" },
+  "persona-improvement": { "$ref": "#/definitions/rule" },
   "agent-relay-workflow": { "$ref": "#/definitions/rule" },
   "slop-audit": { "$ref": "#/definitions/rule" },
   "api-contract-review": { "$ref": "#/definitions/rule" },
   "local-stack-orchestration": { "$ref": "#/definitions/rule" },
   "e2e-validation": { "$ref": "#/definitions/rule" },
-  "write-integration-tests": { "$ref": "#/definitions/rule" }
+  "write-integration-tests": { "$ref": "#/definitions/rule" },
+  "relay-orchestrator": { "$ref": "#/definitions/rule" }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/workload-router/routing-profiles/schema.json` around lines 13 - 44, The schema currently has "additionalProperties": false and a required/properties list that omits the intent keys present in the default profile; add "persona-improvement" and "relay-orchestrator" to the "required" array and add corresponding entries under "properties" (e.g., "persona-improvement": { "$ref": "#/definitions/rule" } and "relay-orchestrator": { "$ref": "#/definitions/rule" }) so the schema accepts the default profile while keeping additionalProperties:false.
🧹 Nitpick comments (5)
packages/runtime/src/clients/linear.ts (1)
94-103: 💤 Low value

Consider extracting the issue type for clarity.

While the `Awaited<ReturnType<LinearClient['getIssue']>>` pattern works correctly, extracting the issue shape to a standalone type (e.g., `LinearIssue`) would improve maintainability and make the type usage more explicit.

📦 Example refactor:

```diff
+interface LinearIssue {
+  id: string;
+  identifier: string;
+  title: string;
+  description: string | null;
+  url: string;
+  state: { name: string } | null;
+}
+
 export interface LinearClient {
   createIssue(args: { teamId: string; title: string; description?: string; assigneeId?: string; labelIds?: string[]; projectId?: string; stateId?: string; }): Promise<{ id: string; identifier: string; url: string }>;
   updateIssue(
     issueId: string,
     args: { title?: string; description?: string; assigneeId?: string; stateId?: string }
   ): Promise<{ id: string; identifier: string; url: string }>;
   comment(issueId: string, body: string): Promise<{ id: string; url: string }>;
-  getIssue(issueId: string): Promise<{
-    id: string;
-    identifier: string;
-    title: string;
-    description: string | null;
-    url: string;
-    state: { name: string } | null;
-  }>;
+  getIssue(issueId: string): Promise<LinearIssue>;
 }
```

Then update the implementation:

```diff
 async getIssue(issueId) {
-  const data = await graphql<{ issue: Awaited<ReturnType<LinearClient['getIssue']>> }>(
+  const data = await graphql<{ issue: LinearIssue }>(
     'getIssue',
     `query Issue($id: String!) { issue(id: $id) { id identifier title description url state { name } } }`,
     { id: issueId }
   );
   return data.issue;
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/runtime/src/clients/linear.ts` around lines 94 - 103, Extract the issue shape into a named type (e.g., LinearIssue) and use it instead of the inline Awaited<ReturnType<LinearClient['getIssue']>> expression: declare a LinearIssue type that represents the resolved issue shape returned by LinearClient.getIssue, update the generic on graphql in getIssue to graphql<{ issue: LinearIssue }>, and update the getIssue function's return type to return LinearIssue (or Promise<LinearIssue> as appropriate); reference the LinearClient type when deriving the new alias so it stays tied to the source shape.

packages/runtime/src/clients/github.test.ts (1)
41-49: ⚡ Quick win

Assert exact fetch call count to catch hidden regressions.

Both tests should assert `fetchMock.calls.length` so retries/extra token calls don’t silently slip through while current assertions still pass.

💡 Proposed fix:

```diff
 assert.deepEqual(issue, { number: 42, url: 'https://github.com/acme/app/issues/42' });
+assert.equal(fetchMock.calls.length, 2);
 assert.equal(fetchMock.calls[0].url, 'https://relay.test/api/v1/connections/conn_1/token');

 await assert.rejects(
   () => client.createIssue({ owner: 'acme', repo: 'app', title: 'Bug', body: 'Details' }),
@@
   }
 );
+assert.equal(fetchMock.calls.length, 2);
```

Also applies to: 63-72
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/runtime/src/clients/github.test.ts` around lines 41 - 49, The tests assert individual fetch calls but omit asserting the total number of fetchMock.calls, which can hide extra retries; update the two test cases that reference fetchMock and the created issue (the assertions comparing fetchMock.calls[0].url, fetchMock.calls[1].url, authorization header and body for the created issue variable) to also assert fetchMock.calls.length equals the expected call count (e.g., 2) immediately before the existing per-call assertions so any extra token or retry calls fail the test.

packages/persona-kit/src/triggers.test.ts (1)
26-33: ⚡ Quick win

Test scope currently doesn’t match the “tier-1 provider” contract.

The test name says tier-1 providers, but the loop validates every provider in `KNOWN_TRIGGERS`. This can create false failures if non-tier1 providers are added later.

Suggested adjustment:

```diff
 test('KNOWN_TRIGGERS includes at least eight trigger names for every tier-1 provider', () => {
-  for (const [provider, triggers] of Object.entries(KNOWN_TRIGGERS)) {
+  const tier1Providers = ['github', 'linear', 'slack', 'notion', 'jira'] as const;
+  for (const provider of tier1Providers) {
+    const triggers = KNOWN_TRIGGERS[provider] ?? [];
     assert.ok(
       triggers.length >= 8,
       `${provider} should expose at least eight known trigger names`
     );
   }
 });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/persona-kit/src/triggers.test.ts` around lines 26 - 33, The test iterates over all entries in KNOWN_TRIGGERS but the assertion is meant only for tier-1 providers; update the test to only validate providers listed in your tier-1 provider constant (e.g., TIER_1_PROVIDERS or TIER1_PROVIDERS) by filtering Object.entries(KNOWN_TRIGGERS) to providers that appear in that tier-1 list before asserting triggers.length >= 8, keeping the test name and assertion unchanged but limiting the loop to the tier-1 set.

packages/deploy/src/bundle.test.ts (1)
66-71: ⚡ Quick win

Consider verifying the runtime version dynamically.

The test hardcodes the expected runtime version as `'2.1.4'`. If the runtime package version is updated, this test will fail. Consider reading the version from the runtime's package.json or the bundle module's implementation to keep the test resilient to version bumps.

💡 Suggested improvement:

```diff
+import { readFile } from 'node:fs/promises';
+
 test('stageBundle writes persona, runner, package metadata, and esbuild bundle', async () => {
   await withTmpDir(async (dir) => {
     const personaPath = resolve('src/__fixtures__/simple-agent/persona.json');
     const outDir = join(dir, 'build');
     const result = await stageBundle({ personaPath, persona: basePersona({ onEvent: './agent.ts' }), outDir } as BundleInput);
     // ... other assertions ...
+    const runtimePkg = JSON.parse(
+      await readFile(resolve('../../runtime/package.json'), 'utf8')
+    );
     assert.equal(
       JSON.parse(await readFile(result.packageJsonPath, 'utf8')).dependencies[
         '@agentworkforce/runtime'
       ],
-      '2.1.4'
+      runtimePkg.version
     );
   });
 });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/deploy/src/bundle.test.ts` around lines 66 - 71, The test currently asserts a hardcoded runtime version ('2.1.4') using result.packageJsonPath; change it to load the actual `@agentworkforce/runtime` package.json version dynamically (e.g. resolve and read/require '@agentworkforce/runtime/package.json' or use require.resolve to locate it) and compare that runtime version to the value in JSON.parse(await readFile(result.packageJsonPath, 'utf8')).dependencies['@agentworkforce/runtime'] so the assertion uses the real package version instead of a literal.

examples/weekly-digest/agent.ts (1)
72-82: ⚡ Quick win

Add timeout and error handling for external API calls.

The fetch calls to the Brave Search API lack timeouts and comprehensive error handling. Network failures, timeouts, or rate limits could cause the handler to hang or crash unexpectedly.

🔄 Suggested improvements:

```diff
 for (const topic of topics) {
   const params = new URLSearchParams({ q: topic, count: '10' });
-  const response = await fetch(`https://api.search.brave.com/res/v1/web/search?${params}`, {
-    headers: { Accept: 'application/json', 'X-Subscription-Token': apiKey }
-  });
-  if (!response.ok) {
-    throw new Error(`Brave Search failed for "${topic}" with HTTP ${response.status}`);
-  }
-  const body = (await response.json()) as { web?: { results?: BraveResult[] } };
-  results.push(...(body.web?.results ?? []));
+  try {
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30000);
+    const response = await fetch(`https://api.search.brave.com/res/v1/web/search?${params}`, {
+      headers: { Accept: 'application/json', 'X-Subscription-Token': apiKey },
+      signal: controller.signal
+    });
+    clearTimeout(timeout);
+    if (!response.ok) {
+      throw new Error(`Brave Search failed for "${topic}" with HTTP ${response.status}`);
+    }
+    const body = (await response.json()) as { web?: { results?: BraveResult[] } };
+    results.push(...(body.web?.results ?? []));
+  } catch (error) {
+    console.warn(`Failed to fetch results for topic "${topic}":`, error);
+    // Continue with other topics rather than failing completely
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/weekly-digest/agent.ts` around lines 72 - 82, The Brave Search loop that builds URLSearchParams and calls fetch for each topic lacks timeouts and robust error handling; update the logic around the fetch call (the block using URLSearchParams, fetch, response, and pushing into results) to use an AbortController with a reasonable timeout, wrap the await fetch and await response.json() in try/catch to handle network errors and JSON parse errors, and when response.ok is false include response.status/text and any rate-limit headers in the thrown error or a logged message; ensure the AbortController is cleaned up per request and consider a simple retry/backoff for transient 5xx/network errors before pushing BraveResult items into results.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/linear-shipper/agent.ts`:
- Around line 26-28: The git clone call interpolates untrusted repoUrl and repo
into a shell string which allows shell injection; change the call site that uses
ctx.sandbox.exec to pass the command and arguments separately (or otherwise
properly escape/sanitize) instead of embedding variables into a single shell
string so that repoUrl and repo are treated as arguments, e.g., update the usage
of ctx.sandbox.exec for the clone operation (where repoDir, repoUrl, repo are
used) to an API form that accepts an argv array or to a safe spawn variant to
prevent execution of injected shell metacharacters.
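The argv-based call the prompt describes can be sketched as follows. `buildCloneArgv` is a hypothetical helper (the real `ctx.sandbox.exec` signature is not shown in this review), but the principle is the same one `child_process.execFile` relies on: each untrusted value travels as a single argv element, never through a shell.

```typescript
// Hypothetical helper: build an argv array for `git clone` so untrusted
// values are passed as discrete arguments instead of being interpolated
// into a shell string.
function buildCloneArgv(repoUrl: string, repoDir: string): string[] {
  // '--' tells git to stop option parsing, so a hostile URL cannot
  // masquerade as a flag.
  return ['clone', '--depth', '1', '--', repoUrl, repoDir];
}

// Shell metacharacters stay inert: the `;` below remains part of one argument.
const argv = buildCloneArgv('https://example.com/app.git; rm -rf ~', '/tmp/app');
console.log(argv);
```

An exec API that accepts `(command, argv)` pairs (for example `execFile('git', argv)`) then runs git with exactly these arguments.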
In `@examples/review-agent/agent.ts`:
- Around line 17-19: The current extraction of number (const number =
pullRequest?.number ?? issue?.number ?? Number(event.number)) misses check_run
events and causes an early throw in the triage flow; update the extraction to
also check event.check_run?.check_suite?.pull_requests?.[0]?.number (and similar
nested paths) and only throw if owner/repo are missing but allow number to be
undefined so the CI-failure/comment path can proceed; apply the same fallback
logic wherever number is derived (lines around the second instance mentioned)
and ensure code paths that post CI-failure comments handle a missing number by
using the pull_request number from check_suite or by gracefully skipping the
throw and continuing triage.
In `@examples/review-agent/persona.json`:
- Line 13: The current event trigger { "on":
"pull_request_review_comment.created" } fires for every review comment; change
it so the persona only runs when the comment body contains an `@mention` (i.e.,
scope to mentions). Update the "on" configuration in
examples/review-agent/persona.json (the entry currently set to
"pull_request_review_comment.created") to either add an event filter for comment
body mentions or move the mention check into the handler’s conditional (e.g.,
only proceed if comment.body includes the bot’s `@handle`), ensuring the persona
behavior matches the description (“responds to `@mentions` in comments”).
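The mention gate can live at the top of the handler; a minimal sketch, assuming a hypothetical `@review-agent` handle and the usual `comment.body` field on the webhook payload:

```typescript
// Hypothetical bot handle; the persona's real handle is not shown in this review.
const BOT_HANDLE = '@review-agent';

// Gate the handler so it only proceeds when the review comment explicitly
// mentions the bot, matching the persona's documented behavior.
function shouldRespond(event: { comment?: { body?: string } }): boolean {
  const body = event.comment?.body ?? '';
  return body.includes(BOT_HANDLE);
}

console.log(shouldRespond({ comment: { body: 'nit: rename this' } }));              // false
console.log(shouldRespond({ comment: { body: '@review-agent please recheck' } }));  // true
```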
In `@examples/weekly-digest/agent.ts`:
- Around line 31-48: clusterByHost currently calls new URL(result.url) without
validation which can throw on malformed URLs; wrap the URL parsing in a
try/catch (or pre-validate) so malformed result.url entries are skipped instead
of crashing, and only add to the seen set and clusters after successful parsing;
update references inside clusterByHost (result.url, new URL(...), host, seen,
clusters) accordingly so invalid URLs are ignored and do not mark their raw
string as seen.
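The guarded parsing can be sketched like this, with an assumed `BraveResult` shape; a malformed `url` is skipped rather than crashing the handler, and is never marked as seen:

```typescript
interface BraveResult { url: string; title?: string }

// Group results by hostname, ignoring entries whose URL fails to parse.
function clusterByHost(results: BraveResult[]): Map<string, BraveResult[]> {
  const clusters = new Map<string, BraveResult[]>();
  const seen = new Set<string>();
  for (const result of results) {
    let host: string;
    try {
      host = new URL(result.url).host;
    } catch {
      continue; // malformed URL: skip it and do not mark it as seen
    }
    if (seen.has(result.url)) continue;
    seen.add(result.url); // only after a successful parse
    const bucket = clusters.get(host) ?? [];
    bucket.push(result);
    clusters.set(host, bucket);
  }
  return clusters;
}
```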
In `@packages/deploy/src/bundle.ts`:
- Around line 46-50: Prevent destructive outDir values by validating
input.outDir before removing it: compute personaDir = dirname(input.personaPath)
and entryPath = resolve(personaDir, persona.onEvent) (already used), then reject
if resolve(input.outDir) equals personaDir or is an ancestor of personaDir or of
dirname(entryPath) (i.e., if path.relative(input.outDir, personaDir) does not
start with '..' or is empty). If detected, throw or assert with a clear message
and skip the rm/mkdir steps; keep the existing rm(input.outDir, ...) and
mkdir(...) only after this validation.
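A standalone sketch of that guard with `path.relative` (the function name and error wording are illustrative):

```typescript
import { dirname, isAbsolute, relative, resolve } from 'node:path';

// Reject outDir values that equal or contain the persona directory, so a
// misconfigured bundle can never rm -rf the sources it is built from.
function assertSafeOutDir(personaPath: string, outDir: string): void {
  const personaDir = resolve(dirname(personaPath));
  const out = resolve(outDir);
  const rel = relative(out, personaDir);
  // rel === '' means outDir IS the persona dir; a rel that does not climb
  // out of outDir ('..' prefix) means outDir is an ancestor of it.
  if (rel === '' || (!rel.startsWith('..') && !isAbsolute(rel))) {
    throw new Error(`outDir ${out} would delete persona sources at ${personaDir}`);
  }
}
```

Only after this check passes would the existing `rm`/`mkdir` steps run.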
- Around line 26-31: The template string runnerTemplate uses deprecated JSON
import syntax "assert { type: 'json' }"; update it to use the modern "with {
type: 'json' }" in the import of persona (and any other JSON imports), e.g.
change "import persona from './persona.json' assert { type: 'json' }" to use
"with { type: 'json' }"; apply the same change in the corresponding test
(referencing runnerTemplate and the persona import in
packages/deploy/src/bundle.test.ts) so imports for persona.json and any JSON
modules loaded alongside agent.bundle.mjs use the new with { type: 'json' }
form.
In `@packages/deploy/src/modes/dev.ts`:
- Around line 33-41: The child.error handler currently just rejects and skips
cleanup; update the error handler registered on child.once('error', ...) to
perform the same teardown as the exit handler: set settled = true, call
flushStdout.end() and flushStderr.end(), call process.off('SIGINT', onSigint),
and then reject with the error (instead of just rejecting silently). Ensure the
logic mirrors the child.once('exit', ...) cleanup so both paths release
listeners and flush streams consistently.
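A minimal sketch of the symmetric teardown; `cleanup` stands in for the `flushStdout.end()`, `flushStderr.end()`, and `process.off('SIGINT', onSigint)` calls named above:

```typescript
import { spawn } from 'node:child_process';

// Both the 'exit' and 'error' paths run the same teardown exactly once.
function runChild(command: string, args: string[]): Promise<number> {
  return new Promise((resolvePromise, rejectPromise) => {
    const child = spawn(command, args);
    let settled = false;
    const cleanup = () => {
      // flushStdout.end(); flushStderr.end(); process.off('SIGINT', onSigint);
    };
    child.once('exit', (code) => {
      if (settled) return;
      settled = true;
      cleanup();
      resolvePromise(code ?? 1);
    });
    child.once('error', (error) => {
      if (settled) return;
      settled = true;
      cleanup(); // same teardown as the exit path, then surface the error
      rejectPromise(error);
    });
  });
}
```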
In `@packages/deploy/src/modes/sandbox.ts`:
- Around line 78-80: After creating the sandbox with daytona.create (assigned to
sandbox) you must ensure it is cleaned up if uploadBundle(input.bundle, sandbox)
fails; wrap the uploadBundle call in a try/catch (or try/finally) and call the
sandbox deletion method (e.g., sandbox.delete() or daytona.delete(sandbox)) in
the failure path, awaiting the deletion, and then rethrow the original error so
the caller still sees the upload failure; reference the daytona.create call and
the uploadBundle invocation so you add the cleanup immediately after creation.
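Reduced to its shape, the fix is a try/catch around the upload that deletes the sandbox and rethrows; the types below are stand-ins, not the real Daytona API:

```typescript
interface Sandbox { deleted: boolean; delete(): Promise<void> }

// Create a sandbox, upload into it, and guarantee cleanup when upload fails.
async function provision(
  create: () => Promise<Sandbox>,
  upload: (sandbox: Sandbox) => Promise<void>
): Promise<Sandbox> {
  const sandbox = await create();
  try {
    await upload(sandbox);
    return sandbox;
  } catch (error) {
    // Upload failed: tear the sandbox down, then surface the original error.
    await sandbox.delete();
    throw error;
  }
}
```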
In `@packages/persona-kit/schemas/persona.schema.json`:
- Around line 82-133: The schema allows a persona with "cloud": true but missing
"onEvent", which causes stageBundle() to fail at runtime; update
persona.schema.json to add a conditional (if/then) that when the "cloud"
property is true (properties.cloud.const: true) then require "onEvent"
(then.required: ["onEvent"]) so validation matches runtime; apply the same
conditional pattern to the other deployable persona block referenced (lines
around the alternate block at 159-166) so both schema locations enforce that a
cloud-deployable persona must include onEvent.
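The conditional the prompt describes looks roughly like this in JSON Schema (`if`/`then`, draft-07 and later); the inner `required: ["cloud"]` keeps personas that omit `cloud` entirely from being forced to declare `onEvent`:

```json
{
  "if": {
    "properties": { "cloud": { "const": true } },
    "required": ["cloud"]
  },
  "then": {
    "required": ["onEvent"]
  }
}
```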
In `@packages/persona-kit/src/parse.test.ts`:
- Around line 57-88: The test "parsePersonaSpec accepts deploy-v1 optional
fields" sets useSubscription: true but never asserts it; add an assertion to
verify the parsed output includes this flag by adding something like
assert.equal(spec.useSubscription, true) (e.g., after the existing
assert.equal(spec.cloud, true) check) so parsePersonaSpec correctly preserves
the useSubscription field.
In `@packages/persona-kit/src/parse.ts`:
- Around line 746-748: The onEvent check must enforce a safe relative path
instead of just non-empty string: when onEvent is provided, reject absolute
paths (use path.isAbsolute), reject any segments that traverse up (e.g., include
".." after path.normalize or path.posix.normalize), and resolve the candidate
against the persona JSON directory then verify the resolved path is inside that
directory (compare normalized/resolved prefixes) before accepting; update the
validation around persona[${expectedIntent}].onEvent in parse.ts to perform
these checks and throw the same Error message if the path is unsafe.
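A sketch of those checks with `node:path` (the error messages are illustrative; real code should reuse the existing message):

```typescript
import { isAbsolute, relative, resolve, sep } from 'node:path';

// Validate that a persona's onEvent path is relative and resolves inside
// the persona's directory, rejecting absolute paths and '..' traversal.
function validateOnEvent(personaDir: string, onEvent: string): string {
  if (isAbsolute(onEvent)) {
    throw new Error('onEvent must be a relative path');
  }
  const base = resolve(personaDir);
  const resolved = resolve(base, onEvent);
  const rel = relative(base, resolved);
  // Anything that escapes the persona dir normalizes to a '..' prefix;
  // an empty rel means onEvent resolved to the directory itself.
  if (rel === '' || rel === '..' || rel.startsWith('..' + sep)) {
    throw new Error('onEvent must resolve inside the persona directory');
  }
  return resolved;
}
```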
- Around line 459-466: The parsers accept arrays because isObject() returns true
for arrays; update parseIntegrations, parseSandbox, parseMemory, and parseTraits
to explicitly reject arrays by adding an Array.isArray(value) check (in the same
spot where isObject(value) is validated) and throw the same contextual error
(e.g., `${context} must be an object if provided`) when the value is an array so
arrays cannot be treated as objects and passed through to Object.entries().
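The shared guard can look like this; `parseSection` is a stand-in for the four parsers named above:

```typescript
// Arrays satisfy `typeof value === 'object'`, so a typeof-based isObject()
// lets `[]` slip through to Object.entries(). The explicit Array.isArray
// check below is what the comment asks each parser to add.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  return typeof value === 'object' && value !== null && !Array.isArray(value);
}

function parseSection(value: unknown, context: string): Record<string, unknown> {
  if (value === undefined) return {};
  if (!isPlainObject(value)) {
    throw new Error(`${context} must be an object if provided`);
  }
  return value;
}
```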
In `@packages/runtime/src/clients/github.ts`:
- Around line 93-96: The current upsertIssue flow only requests the first 100
open issues (request call in upsertIssue using 'upsertIssue.find' and
repoPath(...)/issues?state=open&per_page=100) which can miss matches and cause
duplicates; change it to paginate the issues request: repeatedly call
request('upsertIssue.find',
`${repoPath(args.owner,args.repo)}/issues?state=open&per_page=100&page=${page}`)
(or use Link headers if available) accumulating or checking titles per page
until you either find a matching title (then update via the existing update path
and stop) or no more pages (then create). Ensure the loop respects rate/exit
conditions, stops when a page returns fewer than per_page results or when Link
header indicates no next page, and reuse the same request/update/create logic in
upsertIssue.
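The paging loop reduces to the following sketch; `fetchPage` stands in for the runtime's `request()` call, and a real implementation could also honor the `Link` header instead of counting results:

```typescript
interface Issue { number: number; title: string }

// Walk open-issue pages until a title matches or a short page signals the end.
async function findOpenIssueByTitle(
  fetchPage: (page: number) => Promise<Issue[]>,
  title: string,
  perPage = 100
): Promise<Issue | undefined> {
  for (let page = 1; ; page += 1) {
    const issues = await fetchPage(page);
    const match = issues.find((issue) => issue.title === title);
    if (match) return match;
    // Fewer than perPage results means no next page: stop so the caller
    // can fall through to the create path.
    if (issues.length < perPage) return undefined;
  }
}
```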
In `@packages/runtime/src/clients/request.ts`:
- Around line 51-54: The fetch calls that resolve tokens and call providers (the
fetch using tokenUrl and the provider-request fetch around lines 82-91) can hang
indefinitely; add a bounded timeout by using an AbortController and a timer to
abort the request after a configurable duration (e.g., REQUEST_TIMEOUT_MS with a
sensible default like 10_000 ms). In the token resolution path (where tokenUrl
is fetched) create an AbortController, pass controller.signal to fetch(tokenUrl,
...), start a setTimeout that calls controller.abort() after the timeout, and
clear the timeout on success/failure; do the same for the provider request
fetch. Make the timeout configurable via an exported or env-driven constant so
callers can adjust it, and ensure errors from aborted requests are handled and
surfaced consistently in the existing error/log paths.
In `@README.md`:
- Around line 30-31: The README uses mixed run-mode flag syntax (`--mode dev` vs
the table's `--dev`/`--sandbox`/`--cloud`); pick the canonical short-form flags
(e.g., `--dev`, `--sandbox`, `--cloud`) and replace all occurrences of the long
form `--mode <mode>` (notably the example with `BRAVE_API_KEY=... workforce
deploy ./examples/weekly-digest/persona.json --mode dev` and the occurrences
around lines 86-89) so examples and the run-mode table use the same
`--dev`/`--sandbox`/`--cloud` syntax throughout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: ee8df1fc-c57f-4b3f-a8dc-9e7cfd20467d
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (59)
- README.md
- examples/linear-shipper/README.md
- examples/linear-shipper/agent.ts
- examples/linear-shipper/persona.json
- examples/review-agent/README.md
- examples/review-agent/agent.ts
- examples/review-agent/persona.json
- examples/tsconfig.json
- examples/weekly-digest/README.md
- examples/weekly-digest/agent.ts
- examples/weekly-digest/persona.json
- package.json
- packages/deploy/package.json
- packages/deploy/src/__fixtures__/simple-agent/agent.ts
- packages/deploy/src/__fixtures__/simple-agent/persona.json
- packages/deploy/src/bundle.test.ts
- packages/deploy/src/bundle.ts
- packages/deploy/src/index.ts
- packages/deploy/src/modes/dev.test.ts
- packages/deploy/src/modes/dev.ts
- packages/deploy/src/modes/sandbox.test.ts
- packages/deploy/src/modes/sandbox.ts
- packages/deploy/src/types.ts
- packages/deploy/tsconfig.json
- packages/persona-kit/package.json
- packages/persona-kit/schemas/persona.schema.json
- packages/persona-kit/scripts/emit-schema.mjs
- packages/persona-kit/scripts/emit-schema.test.ts
- packages/persona-kit/src/__fixtures__/personas/cron-only.json
- packages/persona-kit/src/__fixtures__/personas/full.json
- packages/persona-kit/src/__fixtures__/personas/invalid-unknown-trigger.json
- packages/persona-kit/src/__fixtures__/personas/minimal.json
- packages/persona-kit/src/constants.ts
- packages/persona-kit/src/index.ts
- packages/persona-kit/src/parse.test.ts
- packages/persona-kit/src/parse.ts
- packages/persona-kit/src/triggers.test.ts
- packages/persona-kit/src/triggers.ts
- packages/persona-kit/src/types.ts
- packages/runtime/package.json
- packages/runtime/src/clients/github.test.ts
- packages/runtime/src/clients/github.ts
- packages/runtime/src/clients/index.ts
- packages/runtime/src/clients/jira.test.ts
- packages/runtime/src/clients/jira.ts
- packages/runtime/src/clients/linear.test.ts
- packages/runtime/src/clients/linear.ts
- packages/runtime/src/clients/notion.test.ts
- packages/runtime/src/clients/notion.ts
- packages/runtime/src/clients/request.ts
- packages/runtime/src/clients/slack.test.ts
- packages/runtime/src/clients/slack.ts
- packages/runtime/src/errors.ts
- packages/runtime/src/index.ts
- packages/runtime/src/raw.ts
- packages/runtime/src/runner.ts
- packages/runtime/tsconfig.json
- packages/workload-router/routing-profiles/default.json
- packages/workload-router/routing-profiles/schema.json
Force-pushed from 0270910 to 5667fa5 (Compare)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/persona-kit/package.json (1)
8-13: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Schema is published but not actually importable via package subpath.

Line 16 includes `schemas` in the `files` array, but lines 8-13 only export `"."`. With an `exports` map present, Node.js module resolution strictly enforces it—consumers cannot import `@agentworkforce/persona-kit/schemas/persona.schema.json` unless that subpath is explicitly exported.

Proposed fix

```diff
   "exports": {
     ".": {
       "types": "./dist/index.d.ts",
       "default": "./dist/index.js"
-    }
+    },
+    "./schemas/persona.schema.json": "./schemas/persona.schema.json"
   },
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/persona-kit/package.json` around lines 8 - 13, The package.json currently defines an "exports" map that only exposes ".", blocking consumers from importing the published JSON schema subpath; update the "exports" field to explicitly expose the schemas subpath(s) (for example add an export entry like "./schemas/*" or specific entries such as "./schemas/persona.schema.json") that point to the shipped schema files (as packaged under "files"), so imports like `@agentworkforce/persona-kit/schemas/persona.schema.json` resolve correctly; modify the "exports" object in package.json to include these schema subpath mappings while keeping the existing "." entry.
♻️ Duplicate comments (3)
examples/linear-shipper/agent.ts (1)
26-29: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Harden the `git clone` invocation against command injection.

Line 28 builds a shell command with untrusted values (`REPO_URL`, `GITHUB_REPO`), enabling shell injection.

Proposed hardening

```diff
+function shellQuote(value: string): string {
+  return `'${value.replace(/'/g, `'\\''`)}'`;
+}
+
 export default handler(async (ctx, event) => {
 @@
   const repo = inputDefault(ctx, 'GITHUB_REPO');
+  if (!/^[A-Za-z0-9._-]+$/.test(repo)) {
+    throw new Error('GITHUB_REPO contains invalid characters');
+  }
   const repoDir = `${ctx.sandbox.cwd}/${repo}`;
-  await ctx.sandbox.exec(`git clone ${repoUrl} ${repoDir}`);
+  await ctx.sandbox.exec(`git clone ${shellQuote(repoUrl)} ${shellQuote(repoDir)}`);
```

```bash
#!/bin/bash
# Verify shell interpolation in sandbox exec call sites
rg -n "sandbox\.exec\(\`" examples/linear-shipper/agent.ts
rg -n "git clone .*\$\{" examples/linear-shipper/agent.ts
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/linear-shipper/agent.ts` around lines 26 - 29, The git clone command builds a shell string with untrusted values (repoUrl, repo) and should be hardened to avoid shell injection; change the call site that uses ctx.sandbox.exec(`git clone ${repoUrl} ${repoDir}`) to use a safe exec variant or pass arguments separately (e.g., an array/args option) so repoUrl and repoDir are not interpolated into a shell, validate/whitelist repoUrl and repo before use, and ensure repoDir is constructed using path.join instead of string concat (look for repoDir and ctx.sandbox.exec in this file to update).

packages/runtime/src/clients/request.ts (1)

117-125: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add a bounded timeout to provider proxy requests.

Line 117 performs an unbounded `fetch`; a stalled upstream can hang the runtime path indefinitely.

Proposed patch

```diff
+const REQUEST_TIMEOUT_MS = Number(process.env.WORKFORCE_REQUEST_TIMEOUT_MS ?? 15_000);
+
+async function fetchWithTimeout(url: string, init: RequestInit, timeoutMs = REQUEST_TIMEOUT_MS): Promise<Response> {
+  const controller = new AbortController();
+  const timer = setTimeout(() => controller.abort(), timeoutMs);
+  try {
+    return await fetch(url, { ...init, signal: controller.signal });
+  } finally {
+    clearTimeout(timer);
+  }
+}
+
 export async function providerRequest<TResult = unknown, TBody = unknown>(
   options: ProviderRequestOptions<TBody>
 ): Promise<TResult> {
 @@
-  const response = await fetch(proxyUrl, {
+  const response = await fetchWithTimeout(proxyUrl, {
     method: 'POST',
     headers: {
```

```bash
#!/bin/bash
# Verify unbounded fetch usage in this client request path
rg -n "fetch\(" packages/runtime/src/clients/request.ts
rg -n "AbortController|signal|REQUEST_TIMEOUT|timeout" packages/runtime/src/clients/request.ts
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/runtime/src/clients/request.ts` around lines 117 - 125, The fetch call that posts to proxyUrl currently has no timeout and can hang; modify the request logic in this module to use an AbortController with a bounded timeout (e.g., REQUEST_TIMEOUT ms constant or a sensible default like 30_000ms), pass controller.signal into fetch options, and clear the timeout when the request completes or errors; specifically update the code that calls fetch(proxyUrl, {... body: JSON.stringify(buildProxyBody(options)) }) to create an AbortController, start a setTimeout that calls controller.abort(), include signal in the fetch options, and ensure the timeout is cleared on success/failure so requests cannot hang indefinitely.

README.md (1)

30-31: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Normalize run-mode flag syntax across docs.

Line 30 still uses `--mode dev` while Lines 86-88 document `--dev|--sandbox|--cloud`. Please pick one canonical form and use it everywhere in README.

Also applies to: 86-88
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@README.md` around lines 30 - 31, Normalize the run-mode flag syntax across README by choosing one canonical form (either the single flag with value syntax `--mode dev` or the separate boolean-style flags `--dev|--sandbox|--cloud`) and updating all occurrences to that form; specifically replace the `--mode dev` example on the line with `BRAVE_API_KEY=... workforce deploy ./examples/weekly-digest/persona.json --mode dev` to match the chosen canonical syntax and also update the examples described as `--dev|--sandbox|--cloud` so every usage of the run-mode flag (`--mode`, `--dev`, `--sandbox`, `--cloud`) is consistent throughout the README.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/deploy/src/modes/sandbox.ts`:
- Around line 141-145: The current emitChunk usage splits each arbitrary chunk
independently causing broken log lines; change the logic so you maintain a
per-stream buffer (e.g., keyed by stream name or cmdId + stream) and update
emitChunk to accept a stream identifier, append incoming chunk to that buffer,
split on '\n', call opts.onLog for each complete line, and retain the final
partial segment in the buffer; when the stream-end callback is invoked (the
second callback passed to processApi.getSessionCommandLogs) flush any remaining
buffered partial line via opts.onLog. Apply the same buffered handling for both
processApi.getSessionCommandLogs calls referenced around emitChunk and the
similar block at lines 180–186.
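The buffered splitting described above can be sketched in isolation. This is a hypothetical helper, not code from this PR: only the idea of keying a buffer per stream and flushing the trailing partial line comes from the review.

```typescript
type LogSink = (line: string) => void;

// Per-stream line buffer: emit only complete lines, keep the trailing
// partial segment until more data (or a final flush) arrives.
function createLineBuffer(onLine: LogSink) {
  const buffers = new Map<string, string>();
  return {
    push(stream: string, chunk: string): void {
      const pending = (buffers.get(stream) ?? '') + chunk;
      const parts = pending.split('\n');
      buffers.set(stream, parts.pop() ?? '');
      for (const line of parts) onLine(line);
    },
    flush(stream: string): void {
      const rest = buffers.get(stream);
      if (rest) onLine(rest);
      buffers.set(stream, '');
    },
  };
}
```

On stream end (the second callback passed to `getSessionCommandLogs` in the review), call `flush` so a final unterminated line is not dropped.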
In `@packages/persona-kit/src/triggers.ts`:
- Around line 103-105: Replace prototype-chain 'in' checks with an own-property
check when validating providers: change the boolean test in isKnownProvider to
use Object.hasOwn(...) (or Object.prototype.hasOwnProperty.call(...)) against
KNOWN_TRIGGERS so only own keys are accepted, and make the same replacement
wherever KNOWN_TRIGGERS[provider] is later used (the check at the site
referenced around line 124) to ensure you won’t read inherited properties when
retrieving KNOWN_TRIGGERS[provider].
---
Outside diff comments:
In `@packages/persona-kit/package.json`:
- Around line 8-13: The package.json currently defines an "exports" map that
only exposes ".", blocking consumers from importing the published JSON schema
subpath; update the "exports" field to explicitly expose the schemas subpath(s)
(for example add an export entry like "./schemas/*" or specific entries such as
"./schemas/persona.schema.json") that point to the shipped schema files (as
packaged under "files"), so imports like
`@agentworkforce/persona-kit/schemas/persona.schema.json` resolve correctly;
modify the "exports" object in package.json to include these schema subpath
mappings while keeping the existing "." entry.
---
Duplicate comments:
In `@examples/linear-shipper/agent.ts`:
- Around line 26-29: The git clone command builds a shell string with untrusted
values (repoUrl, repo) and should be hardened to avoid shell injection; change
the call site that uses ctx.sandbox.exec(`git clone ${repoUrl} ${repoDir}`) to
use a safe exec variant or pass arguments separately (e.g., an array/args
option) so repoUrl and repoDir are not interpolated into a shell,
validate/whitelist repoUrl and repo before use, and ensure repoDir is
constructed using path.join instead of string concat (look for repoDir and
ctx.sandbox.exec in this file to update).
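The `shellQuote` helper proposed in that comment relies on a standard POSIX idiom: wrap the value in single quotes and rewrite each embedded quote as `'\''` (close the quotes, emit an escaped quote, reopen). Sketched standalone below; this illustrates the idiom rather than reproducing the PR's code.

```typescript
function shellQuote(value: string): string {
  // Close the quoted region, emit an escaped literal quote, reopen it.
  return `'${value.replace(/'/g, `'\\''`)}'`;
}

console.log(shellQuote('repo; rm -rf ~')); // → 'repo; rm -rf ~'
console.log(shellQuote("it's"));           // → 'it'\''s'
```

Quoting alone does not validate content, which is why the review also suggests an allowlist check on `GITHUB_REPO` before building paths.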
In `@packages/runtime/src/clients/request.ts`:
- Around line 117-125: The fetch call that posts to proxyUrl currently has no
timeout and can hang; modify the request logic in this module to use an
AbortController with a bounded timeout (e.g., REQUEST_TIMEOUT ms constant or a
sensible default like 30_000ms), pass controller.signal into fetch options, and
clear the timeout when the request completes or errors; specifically update the
code that calls fetch(proxyUrl, {... body:
JSON.stringify(buildProxyBody(options)) }) to create an AbortController, start a
setTimeout that calls controller.abort(), include signal in the fetch options,
and ensure the timeout is cleared on success/failure so requests cannot hang
indefinitely.
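The timeout shape the prompt asks for (create an `AbortController`, arm a timer that aborts it, pass the signal to `fetch`, and always clear the timer) can be sketched standalone; the 15-second default here is illustrative.

```typescript
const REQUEST_TIMEOUT_MS = 15_000; // illustrative default

async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
  timeoutMs = REQUEST_TIMEOUT_MS
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always disarm, on success and on failure
  }
}
```

A stalled upstream now rejects with an `AbortError` after `timeoutMs` instead of hanging the caller.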
In `@README.md`:
- Around line 30-31: Normalize the run-mode flag syntax across README by
choosing one canonical form (either the single flag with value syntax `--mode
dev` or the separate boolean-style flags `--dev|--sandbox|--cloud`) and updating
all occurrences to that form; specifically replace the `--mode dev` example on
the line with `BRAVE_API_KEY=... workforce deploy
./examples/weekly-digest/persona.json --mode dev` to match the chosen canonical
syntax and also update the examples described as `--dev|--sandbox|--cloud` so
every usage of the run-mode flag (`--mode`, `--dev`, `--sandbox`, `--cloud`) is
consistent throughout the README.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: 4d207ee0-8857-4af2-99e2-1fc4ae4a6281
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (59)

- README.md
- examples/linear-shipper/README.md
- examples/linear-shipper/agent.ts
- examples/linear-shipper/persona.json
- examples/review-agent/README.md
- examples/review-agent/agent.ts
- examples/review-agent/persona.json
- examples/tsconfig.json
- examples/weekly-digest/README.md
- examples/weekly-digest/agent.ts
- examples/weekly-digest/persona.json
- package.json
- packages/deploy/package.json
- packages/deploy/src/__fixtures__/simple-agent/agent.ts
- packages/deploy/src/__fixtures__/simple-agent/persona.json
- packages/deploy/src/bundle.test.ts
- packages/deploy/src/bundle.ts
- packages/deploy/src/index.ts
- packages/deploy/src/modes/dev.test.ts
- packages/deploy/src/modes/dev.ts
- packages/deploy/src/modes/sandbox.test.ts
- packages/deploy/src/modes/sandbox.ts
- packages/deploy/src/types.ts
- packages/deploy/tsconfig.json
- packages/persona-kit/package.json
- packages/persona-kit/schemas/persona.schema.json
- packages/persona-kit/scripts/emit-schema.mjs
- packages/persona-kit/scripts/emit-schema.test.ts
- packages/persona-kit/src/__fixtures__/personas/cron-only.json
- packages/persona-kit/src/__fixtures__/personas/full.json
- packages/persona-kit/src/__fixtures__/personas/invalid-unknown-trigger.json
- packages/persona-kit/src/__fixtures__/personas/minimal.json
- packages/persona-kit/src/constants.ts
- packages/persona-kit/src/index.ts
- packages/persona-kit/src/parse.test.ts
- packages/persona-kit/src/parse.ts
- packages/persona-kit/src/triggers.test.ts
- packages/persona-kit/src/triggers.ts
- packages/persona-kit/src/types.ts
- packages/runtime/package.json
- packages/runtime/src/clients/github.test.ts
- packages/runtime/src/clients/github.ts
- packages/runtime/src/clients/index.ts
- packages/runtime/src/clients/jira.test.ts
- packages/runtime/src/clients/jira.ts
- packages/runtime/src/clients/linear.test.ts
- packages/runtime/src/clients/linear.ts
- packages/runtime/src/clients/notion.test.ts
- packages/runtime/src/clients/notion.ts
- packages/runtime/src/clients/request.ts
- packages/runtime/src/clients/slack.test.ts
- packages/runtime/src/clients/slack.ts
- packages/runtime/src/errors.ts
- packages/runtime/src/index.ts
- packages/runtime/src/raw.ts
- packages/runtime/src/runner.ts
- packages/runtime/tsconfig.json
- packages/workload-router/routing-profiles/default.json
- packages/workload-router/routing-profiles/schema.json
✅ Files skipped from review due to trivial changes (20)
- examples/tsconfig.json
- examples/linear-shipper/README.md
- packages/deploy/src/types.ts
- packages/runtime/src/clients/index.ts
- examples/review-agent/README.md
- packages/persona-kit/src/__fixtures__/personas/invalid-unknown-trigger.json
- package.json
- packages/deploy/src/index.ts
- packages/deploy/src/__fixtures__/simple-agent/persona.json
- packages/runtime/src/clients/jira.test.ts
- packages/runtime/src/raw.ts
- packages/persona-kit/src/__fixtures__/personas/cron-only.json
- packages/runtime/tsconfig.json
- examples/weekly-digest/README.md
- packages/persona-kit/src/__fixtures__/personas/minimal.json
- packages/runtime/package.json
- examples/linear-shipper/persona.json
- packages/deploy/package.json
- packages/persona-kit/src/parse.test.ts
- packages/runtime/src/runner.ts
🚧 Files skipped from review as they are similar to previous changes (29)
- packages/workload-router/routing-profiles/default.json
- packages/persona-kit/src/constants.ts
- packages/deploy/src/modes/dev.test.ts
- examples/weekly-digest/persona.json
- examples/review-agent/persona.json
- packages/persona-kit/src/triggers.test.ts
- packages/deploy/src/bundle.test.ts
- packages/persona-kit/scripts/emit-schema.mjs
- packages/runtime/src/clients/linear.test.ts
- packages/runtime/src/index.ts
- packages/runtime/src/clients/notion.ts
- packages/runtime/src/clients/slack.test.ts
- packages/runtime/src/clients/notion.test.ts
- packages/runtime/src/clients/jira.ts
- packages/persona-kit/src/types.ts
- packages/runtime/src/errors.ts
- packages/runtime/src/clients/github.ts
- packages/deploy/src/__fixtures__/simple-agent/agent.ts
- examples/review-agent/agent.ts
- packages/workload-router/routing-profiles/schema.json
- packages/persona-kit/scripts/emit-schema.test.ts
- packages/persona-kit/src/index.ts
- examples/weekly-digest/agent.ts
- packages/persona-kit/schemas/persona.schema.json
- packages/persona-kit/src/parse.ts
- packages/persona-kit/src/__fixtures__/personas/full.json
- packages/deploy/src/modes/sandbox.test.ts
- packages/deploy/src/modes/dev.ts
- packages/deploy/src/bundle.ts
Actionable comments posted: 3
♻️ Duplicate comments (1)
packages/runtime/src/clients/github.ts (1)
107-123: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`upsertIssue` can create duplicate drafts when matched issues have no `number`.

Line 107 requires a truthy `issue.value.number` to treat a title as existing. But drafts from this same client only persist `title`/`body`/`labels`, so repeated upserts with the same `matchTitle` can keep creating new drafts instead of updating.

Suggested minimal correction

```diff
- const existing = issues.find((issue) => issue.value.title === args.matchTitle && issue.value.number);
+ const existing = issues.find((issue) => issue.value.title === args.matchTitle);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/runtime/src/clients/github.ts` around lines 107 - 123, The upsertIssue logic only treats a match as existing when existing.value.number is truthy, so drafts (which lack number) keep getting recreated; update the check in upsertIssue to treat any found existing (the variable issues and existing) as an update candidate regardless of number, write the draft update to the same storage key (use encodeSegment(existing.value.number ?? existing.value.title) or similar unique identifier) via writeJsonFile (same callsite), and return { number: existing.value.number ?? undefined, url: existing.value.html_url ?? existing.value.url ?? '', created: false } so drafts are updated instead of creating new issues; only call this.createIssue(args) when no existing match is found.
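Stripped of the file I/O, the suggested upsert reduces to a find-or-create keyed on title alone. The `DraftIssue` shape and `upsertByTitle` name below are hypothetical, shown against a plain in-memory list:

```typescript
interface DraftIssue {
  number?: number;
  title: string;
  url?: string;
  body?: string;
}

function upsertByTitle(
  drafts: DraftIssue[],
  matchTitle: string,
  next: DraftIssue
): { created: boolean; issue: DraftIssue } {
  // Match on title alone: drafts written locally may not have a number yet.
  const existing = drafts.find((d) => d.title === matchTitle);
  if (existing) {
    Object.assign(existing, next, { title: existing.title });
    return { created: false, issue: existing };
  }
  drafts.push(next);
  return { created: true, issue: next };
}
```

Repeated calls with the same `matchTitle` now update the one draft instead of appending new ones.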
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/runtime/src/clients/jira.ts`:
- Around line 25-62: The Jira write methods (createIssue, comment, transition)
accept cloudId but never persist it, so include cloudId in the payload passed to
writeJsonFile: in createIssue add cloudId: args.cloudId to the body object, in
comment and transition include cloudId: target.cloudId alongside body/transition
objects; keep using writeJsonFile(opts, 'jira', ...) and preserve
draftFile/encodeSegment usage, and ensure you access cloudId safely (e.g.,
optional chaining or type checks) so the artifact contains the tenant/site
context.
In `@packages/runtime/src/clients/request.ts`:
- Line 92: The current listJsonFiles call to readdir(absoluteDir) swallows all
errors and returns [] which hides real I/O/permission problems; change the error
handling around readdir in listJsonFiles so that you only treat a
missing-directory error (err.code === 'ENOENT') as a benign case that returns
[], and rethrow any other errors (preserving the original error) so permission
and other I/O failures surface; locate the readdir call in listJsonFiles and
update its .catch/try-catch accordingly.
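The intended error handling isolates to a few lines. The `.json` filtering below is an assumption about the real `listJsonFiles`; the ENOENT-vs-rethrow split is the point:

```typescript
import { readdir } from 'node:fs/promises';

// Treat only a missing directory as "no entries yet"; surface every other
// I/O failure (EACCES, ENOTDIR, ...) to the caller.
async function listJsonFiles(absoluteDir: string): Promise<string[]> {
  try {
    const entries = await readdir(absoluteDir);
    return entries.filter((name) => name.endsWith('.json')).sort();
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'ENOENT') return [];
    throw err;
  }
}
```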
In `@packages/runtime/src/clients/slack.ts`:
- Around line 17-20: In parseThreadRef, validate the object-form SlackThreadRef
before returning it: when typeof threadTs !== 'string' ensure threadTs.channel
and threadTs.ts both exist and are non-empty strings, and if either is
missing/empty throw a clear error (e.g., TypeError with context) to avoid
invalid write paths; keep existing behavior for string inputs
(parsing/normalizing them) but reject bad object refs early.
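A guarded parser along those lines might look like this; the signature is inferred from the review's description, not copied from the PR:

```typescript
interface SlackThreadRef {
  channel: string;
  ts: string;
}

function parseThreadRef(threadTs: string | SlackThreadRef, channel?: string): SlackThreadRef {
  if (typeof threadTs === 'string') {
    // String form: the channel must come from elsewhere.
    if (!channel) throw new TypeError('channel is required when threadTs is a bare timestamp');
    return { channel, ts: threadTs };
  }
  // Object form: reject malformed refs before any write path is built.
  const { channel: ch, ts } = threadTs;
  if (typeof ch !== 'string' || ch === '' || typeof ts !== 'string' || ts === '') {
    throw new TypeError(`invalid Slack thread ref: ${JSON.stringify(threadTs)}`);
  }
  return threadTs;
}
```

Failing fast here keeps an empty `channel` or `ts` from silently producing an invalid storage path downstream.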
---
Duplicate comments:
In `@packages/runtime/src/clients/github.ts`:
- Around line 107-123: The upsertIssue logic only treats a match as existing
when existing.value.number is truthy, so drafts (which lack number) keep getting
recreated; update the check in upsertIssue to treat any found existing (the
variable issues and existing) as an update candidate regardless of number, write
the draft update to the same storage key (use
encodeSegment(existing.value.number ?? existing.value.title) or similar unique
identifier) via writeJsonFile (same callsite), and return { number:
existing.value.number ?? undefined, url: existing.value.html_url ??
existing.value.url ?? '', created: false } so drafts are updated instead of
creating new issues; only call this.createIssue(args) when no existing match is
found.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: ee6f9ce8-67c7-4b03-b5a1-f9a45814c414
📒 Files selected for processing (11)
- packages/runtime/src/clients/github.test.ts
- packages/runtime/src/clients/github.ts
- packages/runtime/src/clients/jira.test.ts
- packages/runtime/src/clients/jira.ts
- packages/runtime/src/clients/linear.test.ts
- packages/runtime/src/clients/linear.ts
- packages/runtime/src/clients/notion.test.ts
- packages/runtime/src/clients/notion.ts
- packages/runtime/src/clients/request.ts
- packages/runtime/src/clients/slack.test.ts
- packages/runtime/src/clients/slack.ts
Closing in favor of #90 (now merged) plus a focused follow-up chain:
Thanks for the breadth here — the linear/slack/notion/jira clients and the JSON schema work are all preserved in the follow-up chain.
Summary
Review fixes included
`/api/v1/proxy/:provider`; Slack uses workspace/team identity for cloud-side Nango resolution and does not send `connectionId` or provider config keys

Tests
Notes
Spec: docs/plans/deploy-v1-codex-spec.md