
Drafts: OpenAI Open Source Fund + Claude for OSS applications, plus funding-program landscape#182

Draft
tonyketcham wants to merge 2 commits into main from toeknee/oss-fund-drafts-and-research-4b1c

Conversation


@tonyketcham tonyketcham commented May 9, 2026

Two @flatbread/proof DAG runs produced four review-ready Markdown artifacts; a third revision DAG then compressed the roadmap and rebalanced both drafts toward audacious bets. Nothing here is submitted — everything is a draft for maintainer review.

What landed

| Path | Purpose |
| --- | --- |
| `funding-applications/openai-open-source-fund.md` | Full Codex OSS Fund application response, form-field-mirrored, with budget table, revised 4-phase roadmap (foundations + Effort Graph compressed into months 1-4; audacious bets occupy months 5-12), and `> NOTE TO REVIEWER:` callouts on every UNVERIFIED form field. |
| `funding-applications/claude-for-oss-brief.md` | Sales-intake brief for Anthropic's Claude for OSS funnel: eligibility checklist, revised monthly token-volume projection (~80-160M input / ~16-32M output, ~2-3x prior estimate) tied to continuous preset/eval workloads, and MCP / Claude Code / HITL / evals positioning. |
| `funding-applications/REVIEW-CHECKLIST.md` | Pre-submission self-assessment with a new top-of-file revision summary, updated 1–5 scoring (technical specificity raised; evidence-of-traction due-diligence cost flagged), refreshed UNVERIFIED list, cross-draft consistency check, and recommended submission order. |
| `funding-research/funding-program-landscape.md` | Independent landscape research: TL;DR top-5 ranked table, full per-program coverage across AI-lab, foundation, infra/credits, and academic/research categories, eligibility-blocker map, stacking strategy, and ordered application backlog. |

Audacious-bet shape adopted in both drafts

  • Phase 1 (months 1–2) — Foundations, compressed. Typed defineConfig, ID normalization, relation validation, watch-mode parity — one umbrella PR train under Codex/Claude review.
  • Phase 2 (months 3–4) — Effort Graph MVP. Conventions preset, schema-validated Append API, flatbread-mcp server (read + append) for Codex / Claude Code / Cursor.
  • Phase 3 (months 5–8) — Bet A: Workflow Presets for Complex Projects. Six shipped presets — schema-cutover, release-train, research-compendium, docs-site-refactor, api-version-cutover, design-system-token-rotation — each a parameterized DAG over the Effort Graph + @flatbread/proof. Seventh slot held for a community-contributed preset by month 8.
  • Phase 4 (months 9–12) — Bets B + C in parallel.
    • B. HITL ergonomics. needsApproval boundary on every DAG node, Claude-Code-style plan-review gate on Decision/Plan, LangGraph-style durable pause/resume keyed to a thread_id Session checkpoint.
    • C. Continuous-improvement evals + research loop. fixture-promote CLI, PR-time regression-replay GitHub Action, public Inspect-View-style dashboard, eval-driven preset retuning so the catalog self-tunes.

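The Phase 3/4 shape above can be sketched in miniature. This is a hypothetical illustration only: names like `PresetNode`, `needsApproval`, and the checkpoint store are illustrative stand-ins, not the actual `@flatbread/proof` API. It shows a parameterized preset DAG pausing at a plan-review gate and resuming from a `thread_id`-keyed checkpoint.

```typescript
// Hypothetical sketch of a needsApproval HITL boundary with durable
// pause/resume keyed to a thread_id checkpoint. Not the real API.

interface PresetNode {
  id: string;
  needsApproval: boolean; // HITL boundary on this DAG node
  run: () => string;
}

interface Checkpoint {
  threadId: string;    // durable pause/resume key
  completed: string[]; // node ids already executed
  pausedAt?: string;   // node id awaiting approval, if paused
}

const checkpoints = new Map<string, Checkpoint>();

// Run nodes in order; pause when an unapproved HITL boundary is hit.
function runPreset(
  threadId: string,
  nodes: PresetNode[],
  approved: Set<string>,
): Checkpoint {
  const cp = checkpoints.get(threadId) ?? { threadId, completed: [] };
  cp.pausedAt = undefined;
  for (const node of nodes) {
    if (cp.completed.includes(node.id)) continue; // resume skips done work
    if (node.needsApproval && !approved.has(node.id)) {
      cp.pausedAt = node.id; // durable pause: state survives in the store
      break;
    }
    node.run();
    cp.completed.push(node.id);
  }
  checkpoints.set(threadId, cp);
  return cp;
}

const nodes: PresetNode[] = [
  { id: "plan", needsApproval: true, run: () => "plan drafted" },
  { id: "apply", needsApproval: false, run: () => "changes applied" },
];

// First pass pauses at the plan-review gate...
const paused = runPreset("t-1", nodes, new Set());
console.log(paused.pausedAt); // "plan"

// ...then resumes from the same thread_id once the plan is approved.
const done = runPreset("t-1", nodes, new Set(["plan"]));
console.log(done.completed.join(",")); // "plan,apply"
```

The design point is the same one the roadmap makes: approval is a property of the node, not of the runner, so every preset in the catalog inherits the gate for free.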
OpenAI budget rebalanced inside the published $25k cap: Phase 3 presets ($8k, largest line), evals loop ($5k), HITL ($3.5k), Effort Graph + MCP ($3k), foundation Codex toil ($2.5k), docs/cookbook/contributor sponsorship ($3k).
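The six lines do sum to the cap; a quick check (labels are shorthand for the budget lines above):

```typescript
// Budget lines from the rebalanced OpenAI draft, in thousands of USD.
const budget: Record<string, number> = {
  "Phase 3 presets": 8,
  "evals loop": 5,
  "HITL": 3.5,
  "Effort Graph + MCP": 3,
  "foundation Codex toil": 2.5,
  "docs/cookbook/sponsorship": 3,
};

const total = Object.values(budget).reduce((a, b) => a + b, 0);
console.log(total); // 25 — matches the published $25k cap
```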

How it was generated

All three DAGs ran via `pnpm exec proof` (the `@flatbread/proof` package shipped from this repo) using the proof skill workflow. Live canvases were rendered to `~/.cursor/projects/workspace/canvases/` for the maintainer to scrub progress in real time.

What the maintainer needs to do before submission

The REVIEW-CHECKLIST file consolidates the action list. High-leverage items:

  1. Fill placeholders flagged in the OpenAI draft: LinkedIn URL, primary GitHub handle (org FlatbreadLabs vs personal), confirm ketcham.dev@gmail.com is the canonical contact.
  2. Confirm program form-field text against the live forms (the playbooks captured the current published fields; OpenAI in particular updates Typeform fields without notice).
  3. Confirm whether Claude for OSS permits API credits for an automated eval/preset DAG harness, or only Max-seat usage — the brief is written to accept either shape but the ask line should be tightened once known.
  4. Decide whether to register a fiscal sponsor (Open Source Collective is the recommended on-ramp — it unlocks Sentry, NLnet, thanks.dev, and several other rows in the landscape).
  5. Stress-test the revised Claude monthly token-volume math against actual recent @flatbread/proof usage; the projection assumes continuous preset/eval workloads.
  6. Sanity-check the Phase 3 preset list (schema-cutover, release-train, research-compendium, docs-site-refactor, api-version-cutover, design-system-token-rotation) — these names appear in both drafts so they must read as committed product surfaces.
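For item 5, a back-of-envelope shape of the stress test. The per-run and per-day figures below are illustrative assumptions, not numbers taken from the brief; the point is that plausible continuous-workload inputs land inside the projected bands.

```typescript
// Hypothetical sanity check on the revised token projection.
// All rate assumptions here are illustrative, not from the brief.
const runsPerDay = 30;               // assumed continuous preset/eval DAG runs
const inputTokensPerRun = 90_000;    // assumed average input per DAG run
const nightlyReplayTokens = 400_000; // assumed nightly fixture-replay batch
const outputRatio = 0.2;             // assumed output tokens as share of input

const inputPerMonth =
  (runsPerDay * inputTokensPerRun + nightlyReplayTokens) * 30;
const outputPerMonth = inputPerMonth * outputRatio;

console.log((inputPerMonth / 1e6).toFixed(1));  // 93.0 — inside the ~80-160M input band
console.log((outputPerMonth / 1e6).toFixed(1)); // 18.6 — inside the ~16-32M output band
```

Swapping in actual recent `@flatbread/proof` usage for the assumed rates is exactly the maintainer check the item asks for.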

Out of scope

  • No source-code changes. No test changes. No build / lint / typecheck delta — only Markdown additions and revisions under two new top-level directories.
  • Applications are not submitted anywhere; this PR exists so the artifacts are reviewable in normal PR flow before the maintainer sends anything.

cursoragent and others added 2 commits May 9, 2026 20:43
DAG runs via @flatbread/proof produced four review-ready artifacts:

- funding-applications/openai-open-source-fund.md
  Full Codex Open Source Fund response with form-field-mirrored
  structure, budget, 12-month milestones, and reviewer-flagged
  UNVERIFIED items.

- funding-applications/claude-for-oss-brief.md
  Sales-intake brief for Anthropic Claude for OSS, eligibility
  checklist, monthly token-volume projection, and MCP/Claude
  Code positioning.

- funding-applications/REVIEW-CHECKLIST.md
  Pre-submission self-assessment, consolidated unverified items,
  cross-draft consistency check, and submission ordering.

- funding-research/funding-program-landscape.md
  Full landscape across AI-lab, foundation, infra/credits, and
  academic categories with TL;DR top-5, eligibility blockers,
  stacking strategy, and application backlog.

Drafts only — not for submission. Awaiting maintainer review of
flagged UNVERIFIED form fields and identity placeholders.

Co-authored-by: Tony <tonyketcham@users.noreply.github.com>
Per maintainer feedback: shrink foundation/Effort-Graph engineering to
months 1-4 and dedicate months 5-12 to three audacious bets that drive
use-case coverage, community adoption, and workflow capture.

OpenAI draft (openai-open-source-fund.md):
- New 4-phase roadmap (was 4-quarter): foundations + Effort Graph
  compressed into Phases 1-2; back half is Phase 3 workflow presets,
  Phase 4 HITL ergonomics + continuous-improvement evals loop.
- Phase 3 names six shipped presets: schema-cutover, release-train,
  research-compendium, docs-site-refactor, api-version-cutover,
  design-system-token-rotation, plus a 7th community slot.
- Phase 4 adds approval API (needsApproval), Claude-Code-style
  plan-review gate, LangGraph-style durable pause/resume, plus
  fixture-promote CLI, PR-time regression replay, public eval
  dashboard, and eval-driven preset retuning.
- Budget rebalanced toward Phase 3 presets ($8k, largest line),
  HITL surfaces ($3.5k), and evals loop ($5k); foundation Codex
  toil compressed to $2.5k. Total still $25k.
- Credit-use bullets and 'Anything else' / 'Why now/why us' rewritten
  to make use-case coverage, community adoption, workflow capture
  the explicit public payoff.
- Public-progress section adds quarterly Inspect-View-style dashboard.

Claude brief (claude-for-oss-brief.md):
- Same 4-phase roadmap, identical six preset names + community slot.
- Token projection raised to ~80-160M input / ~16-32M output per
  month (~2-3x prior estimate) with explicit arithmetic for
  continuous preset DAGs + nightly fixture replay.
- 'Why Claude specifically' adds a bullet on Anthropic's HITL/evals
  posture as funder-aligned validation for the audacious bets.
- Public commitment adds the open-source preset gallery and the
  public Inspect-View-style evals dashboard.

REVIEW-CHECKLIST.md:
- New top section explaining the revision shape.
- Acceptance-likelihood scores updated: technical specificity raised,
  evidence-of-traction due-diligence cost raised (scope vs solo
  maintainer), with mitigation notes.
- UNVERIFIED list refreshed.
- Cross-draft consistency check confirms identical 4-phase structure,
  preset names, and audacious-bet vocabulary.

Generated via @flatbread/proof DAG (7 tasks, 4 ranks, 25m13s).

Co-authored-by: Tony <tonyketcham@users.noreply.github.com>