
Commit 1ee6707

MrFlounder and claude authored
feat(promptfoo): add crab pf serve Slack polling daemon (#37)
## Why

Team members, including non-technical people, need a way to run the promptfoo target discovery agent without setting up a local dev environment. A Slack-based interface lets anyone DM the Crab bot with a target specification and get back a ready-to-use `promptfooconfig.yaml`.

## What

Adds `crab pf serve`, a local Slack polling daemon that:

- **Polls for DMs** to the Crab bot, filtering by a `pf:` trigger prefix
- **Runs the discovery agent** locally using the operator's own API keys
- **Posts results** (generated config files) back to the Slack thread
- **Handles file attachments**: users can attach API specs, curl commands, etc.

### Files changed

| File | Change |
|------|--------|
| `plugins/promptfoo/src/slack.ts` | **New**: Slack Web API helpers using native `fetch()`, zero new dependencies. Token resolution, user lookup, DM channel management, message posting, reactions, file download/upload (3-step flow). |
| `plugins/promptfoo/src/serve.ts` | **New**: Daemon loop with a `--setup` flow (prompts for Slack username and provider preferences), the poll loop, and job directory management. Defaults to `openai:gpt-5` with `reasoning: low`. |
| `plugins/promptfoo/src/cli.ts` | **Modified**: Routes the `serve` subcommand, adds `--reasoning` flag support, updates help text. |
| `plugins/promptfoo/src/agent/providers.ts` | **Modified**: Adds `reasoningEffort` support for GPT-5/o-series models (`max_completion_tokens` instead of `max_tokens`; skips `temperature` for reasoning models). |
| `plugins/promptfoo/package-lock.json` | **Modified**: Minor peer dependency annotation changes. |

## How to Test

1. Set the `CRAB_SLACK_BOT_TOKEN` env var (or add `slack.bot_token` to `~/.crabcode/config.yaml`)
2. Set the `OPENAI_API_KEY` env var
3. Run `crab pf serve --setup` and follow the prompts to configure your Slack username
4. Run `crab pf serve -v` to start the daemon in verbose mode
5. DM the Crab bot in Slack with `pf: My API is at http://localhost:8080/chat`
6. Verify the bot acknowledges with a reaction, runs the agent, and posts back a `promptfooconfig.yaml`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
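The `pf:` trigger filtering that the poll loop applies to incoming DMs can be sketched as a pure function. Note this is an illustrative assumption: `extractPfSpec` and the minimal `SlackMessage` shape below are not actual exports of `serve.ts` or `slack.ts`.

```typescript
// Illustrative sketch only; the real serve.ts poll loop may filter differently.
interface SlackMessage {
  user: string;
  text: string;
  ts: string; // Slack message timestamp, also usable as a thread key
}

// Return the target specification when a DM carries the `pf:` trigger
// prefix, or null so the poll loop can skip unrelated messages.
function extractPfSpec(msg: SlackMessage): string | null {
  const trimmed = msg.text.trim();
  if (!trimmed.toLowerCase().startsWith("pf:")) return null;
  return trimmed.slice("pf:".length).trim();
}

const spec = extractPfSpec({
  user: "U123",
  text: "pf: My API is at http://localhost:8080/chat",
  ts: "1700000000.000100",
});
console.log(spec); // "My API is at http://localhost:8080/chat"
```

Filtering on a short prefix like this keeps the bot from reacting to ordinary conversation in the DM channel.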
1 parent 916d46f commit 1ee6707

5 files changed: +875 / -5 lines

plugins/promptfoo/package-lock.json

Lines changed: 3 additions & 0 deletions

plugins/promptfoo/src/agent/providers.ts

Lines changed: 6 additions & 5 deletions
```diff
@@ -56,6 +56,7 @@ export class OpenAIProvider implements LLMProvider {
   }
 
   async chat(options: ChatOptions): Promise<ChatResponse> {
+    const isReasoning = this.model.startsWith('gpt-5') || this.model.startsWith('o1') || this.model.startsWith('o3');
     const response = await fetch(`${this.baseUrl}/chat/completions`, {
       method: 'POST',
       headers: {
@@ -66,13 +67,13 @@ export class OpenAIProvider implements LLMProvider {
         model: this.model,
         messages: options.messages.map((m) => this.toOpenAIMessage(m)),
         tools: options.tools,
-        ...(this.model.startsWith('gpt-5') || this.model.startsWith('o1') || this.model.startsWith('o3')
+        ...(isReasoning
           ? { max_completion_tokens: options.maxTokens || 4096 }
           : { max_tokens: options.maxTokens || 4096 }),
-        ...(this.model.startsWith('gpt-5') || this.model.startsWith('o1') || this.model.startsWith('o3')
-          ? {}
-          : { temperature: options.temperature ?? 0.7 }),
-        ...(options.reasoningEffort || this.reasoningEffort ? { reasoning_effort: options.reasoningEffort || this.reasoningEffort } : {}),
+        ...(isReasoning ? {} : { temperature: options.temperature ?? 0.7 }),
+        ...(options.reasoningEffort || this.reasoningEffort
+          ? { reasoning_effort: options.reasoningEffort || this.reasoningEffort }
+          : {}),
       }),
     });
```
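The request-body logic in this hunk can be exercised in isolation. The `buildBody` helper below is a hedged sketch that mirrors the diff's conditional spreads; it is not an actual export of `providers.ts`.

```typescript
// Hedged sketch mirroring the request-body logic in the hunk above;
// buildBody is an illustrative helper, not an export of providers.ts.
type ReasoningEffort = 'low' | 'medium' | 'high';

function buildBody(
  model: string,
  maxTokens?: number,
  temperature?: number,
  reasoningEffort?: ReasoningEffort,
): Record<string, unknown> {
  // Same prefix check the diff hoists into `isReasoning`.
  const isReasoning =
    model.startsWith('gpt-5') || model.startsWith('o1') || model.startsWith('o3');
  return {
    model,
    // Reasoning models take max_completion_tokens and reject temperature;
    // other models keep the classic max_tokens + temperature pair.
    ...(isReasoning
      ? { max_completion_tokens: maxTokens || 4096 }
      : { max_tokens: maxTokens || 4096, temperature: temperature ?? 0.7 }),
    ...(reasoningEffort ? { reasoning_effort: reasoningEffort } : {}),
  };
}

console.log(buildBody('gpt-5', undefined, undefined, 'low'));
// → { model: 'gpt-5', max_completion_tokens: 4096, reasoning_effort: 'low' }
```

Hoisting the repeated model-prefix test into a single `isReasoning` flag is the main readability win of the diff: the three conditional spreads now all branch on one named condition.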

plugins/promptfoo/src/cli.ts

Lines changed: 12 additions & 0 deletions
```diff
@@ -15,11 +15,18 @@ import * as path from 'node:path';
 import { parseArtifact, detectFormat } from './parsers/index.js';
 import { runDiscoveryAgent } from './agent/loop.js';
 import { createProvider } from './agent/providers.js';
+import { runServe } from './serve.js';
 
 const args = process.argv.slice(2);
 
 async function main() {
   try {
+    // Handle 'serve' subcommand
+    if (args[0] === 'serve') {
+      await runServe(args.slice(1));
+      return;
+    }
+
     // Parse arguments
     const filePath = getArg('--file') || getArg('-f');
     const urlArg = getArg('--url');
@@ -160,6 +167,11 @@ Examples:
   # Using Anthropic
   crab pf --file target.txt --provider anthropic:claude-sonnet-4-20250514
 
+Subcommands:
+  serve           Run Slack polling daemon
+  serve --setup   Configure Slack username
+  serve --help    Show serve help
+
 Output:
   The agent will create:
   - promptfooconfig.yaml (promptfoo configuration)
```
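The hunk calls a `getArg` helper whose definition is not shown. A minimal standalone sketch of such a flag parser might look like the following; note that the real `cli.ts` version closes over the module-level `args` (it takes only the flag name), so this signature is an assumption.

```typescript
// Hypothetical standalone sketch of a getArg-style flag helper; the real
// cli.ts version closes over the module-level `args` and may differ.
function getArg(argv: string[], flag: string): string | undefined {
  const i = argv.indexOf(flag);
  if (i !== -1 && i + 1 < argv.length) return argv[i + 1];
  // Also accept the `--flag=value` spelling.
  const withEquals = argv.find((a) => a.startsWith(`${flag}=`));
  return withEquals ? withEquals.slice(flag.length + 1) : undefined;
}

console.log(getArg(['--provider', 'openai:gpt-5', '--reasoning', 'low'], '--reasoning')); // "low"
console.log(getArg(['--file=target.txt'], '--file')); // "target.txt"
```

Routing `serve` before any flag parsing, as the diff does, keeps the subcommand's own flags (`--setup`, `-v`) from colliding with the main command's argument handling.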
