feat(storage): Add SQLite storage backend with JSONL sync #1619
dhabedank wants to merge 20 commits into eyaltoledano:next from
Conversation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Updates displayHelp() to reflect the actual command implementations:

Tag Management:
- Changed from old commands (add-tag, use-tag, delete-tag, rename-tag, copy-tag) to the unified 'tags' subcommand structure (tags add, tags use, tags remove, tags rename, tags copy)
- Added --ready option for tags list

List command:
- Added -s/--status and -t/--tag filter options
- Added -f/--format, --json, -c/--compact output format options
- Added -w/--watch for watch mode
- Added --ready, --blocking, --all-tags filter options

Fixes eyaltoledano#1588
Design for adding SQLite as a configurable storage backend with JSONL sync for git-friendly persistence. Key features:
- SQLite as local working database
- JSONL as git-synced export format (line-based diffs)
- Self-healing: rebuild SQLite from JSONL on startup
- Legacy shim for scripts/modules compatibility

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements configurable SQLite storage for TaskMaster:
- SQLite as local working database for fast queries
- JSONL as git-synced export format (line-based diffs)
- Self-healing: rebuild SQLite from JSONL on startup

New modules:
- sqlite-storage/database.ts: Database connection with WAL mode
- sqlite-storage/schema.ts: Full SQL schema definitions
- sqlite-storage/migrations.ts: Schema migration runner
- sqlite-storage/queries.ts: SQL query helpers
- sqlite-storage/jsonl-sync.ts: JSONL read/write/sync
- sqlite-storage/sqlite-storage.ts: IStorage implementation
- storage-migration.ts: Migration tooling between backends

Updates:
- StorageFactory: Added 'sqlite' case
- StorageType: Added 'sqlite' option
- RuntimeStorageConfig: Added dbPath, walMode options

Using libsql (better-sqlite3 compatible) for wider Node.js support.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add `task-master storage` CLI command with status/migrate/rebuild actions
- Add `storage_config` MCP tool for storage backend management
- Add StorageMigration.getStatus() and rebuildSqlite() methods
- Add comprehensive tests (27 SqliteStorage + 21 JsonlSync tests)
- Fix SqliteStorage lazy initialization (create DB after mkdir)
- Update type definitions to support 'sqlite' storage type in UI components

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
🦋 Changeset detected. Latest commit: b765b0a. The changes in this PR will be included in the next version bump.
📝 Walkthrough

Adds a SQLite storage backend (DB wrapper, schema, queries, migrations, JSONL sync), migration service, CLI storage command and MCP tool, docs, tests, and integrates sqlite into the factory, exports, types, init scripts, and .gitignore.
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor User
participant CLI as CLI Storage<br/>Command
participant Core as StorageMigration<br/>Service
participant File as FileStorage<br/>Adapter
participant SQLite as SqliteStorage<br/>Adapter
participant DB as SQLite<br/>Database
participant JSONL as JsonlSync
participant Config as Config<br/>(config.json)
User->>CLI: storage switch --to sqlite
CLI->>Core: migrateToSqlite(options)
rect rgba(100, 150, 200, 0.5)
Note over Core,File: Read & validate source
Core->>File: loadTasks(all tags)
File-->>Core: tasks[]
end
rect rgba(150, 150, 100, 0.5)
Note over Core,SQLite: Initialize SQLite
Core->>SQLite: initialize()
SQLite->>DB: create schema
DB-->>SQLite: schema ready
end
rect rgba(100, 200, 100, 0.5)
Note over Core,JSONL: Migrate data
Core->>SQLite: saveTasks(per tag)
SQLite->>DB: insert tasks
SQLite->>JSONL: write JSONL
JSONL-->>SQLite: synced
end
rect rgba(200, 100, 100, 0.5)
Note over Core,Config: Update config & backup
Core->>Config: updateStorageType('sqlite')
Core->>Core: createBackup()
end
Core-->>CLI: MigrationResult
CLI-->>User: Migration complete
```

```mermaid
sequenceDiagram
participant App as Application
participant Factory as StorageFactory
participant Config as Configuration
participant SQLite as SqliteStorage
participant DB as SqliteDatabase
participant JSONL as JsonlSync
participant FS as File System
App->>Factory: create(projectPath, config)
Factory->>Config: resolve storage type
alt storage type = 'sqlite'
Factory->>SQLite: new SqliteStorage(projectPath)
SQLite->>DB: new SqliteDatabase(dbPath)
end
SQLite->>DB: initialize()
rect rgba(150, 150, 200, 0.5)
Note over DB,FS: Database setup
DB->>FS: open/create db file
DB->>DB: apply pragmas (WAL, FK)
DB->>DB: create schema tables
end
rect rgba(200, 150, 100, 0.5)
Note over DB,FS: Self-healing from JSONL
SQLite->>FS: check JSONL exists
alt JSONL exists, DB missing
SQLite->>JSONL: readAll()
JSONL-->>SQLite: JsonlTask[]
SQLite->>DB: transaction: insert tasks
end
end
SQLite-->>App: ready
rect rgba(100, 200, 100, 0.5)
Note over App,JSONL: Runtime operations
App->>SQLite: saveTasks(tasks)
SQLite->>DB: transaction: insert/update
SQLite->>JSONL: writeTask() per task
JSONL->>FS: append/update JSONL
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Cursor Bugbot has reviewed your changes and found 9 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/tm-core/package.json (1)
22-33: ⚠️ Potential issue | 🟠 Major: Add required test scripts (`test:e2e`, `test:ci`).

The required script entries are missing, which breaks the repo's testing workflow contract. Based on learnings, package.json scripts must include 'test', 'test:watch', 'test:coverage', 'test:unit', 'test:integration', 'test:e2e', and 'test:ci' commands for testing framework integration.
apps/cli/src/ui/components/header.component.ts (1)
33-83: ⚠️ Potential issue | 🟡 Minor: Missing display branch for SQLite storage type.

`HeaderOptions.storageType` now accepts `'sqlite'`, but `displayHeader` has no explicit handling for it. When `storageType === 'sqlite'`:

- The Line 37 condition (`storageType === 'api' && briefInfo`) won't match
- The Line 64 condition (`else if (tag)`) only works if a tag is provided

This could result in no header output for SQLite storage without a tag.
🛠️ Suggested fix: Add sqlite branch
```diff
 	} else if (tag) {
 		// File storage: Show tag information
+	} else if (storageType === 'sqlite') {
+		// SQLite storage: Show database info
+		console.log(`🗄️ storage: ${chalk.cyan('SQLite')}`);
+		if (filePath) {
+			const absolutePath = filePath.startsWith('/')
+				? filePath
+				: `${process.cwd()}/${filePath}`;
+			console.log(`Listing tasks from: ${chalk.dim(absolutePath)}`);
+		}
+	} else if (tag) {
+		// File storage: Show tag information
```

packages/tm-core/src/modules/tasks/tasks-domain.ts (2)
280-288: ⚠️ Potential issue | 🟠 Major: SQLite storage must use `setActiveTag` in the `switchTag` method.

The method incorrectly treats SQLite as API storage. SQLite is local storage (like file storage) that uses JSONL files for git-friendly persistence. It should call `setActiveTag()` instead of `briefsDomain.switchBrief()`.

Change the condition to explicitly check for API storage:

- If `storageType === 'api'`: call `briefsDomain.switchBrief(tagName)`
- If `storageType === 'file' || 'sqlite'`: call `setActiveTag(tagName)`

The current comment is also incomplete; it should mention SQLite's behavior alongside file storage.
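A minimal sketch of that branching, with names as assumptions (not the PR's actual method shape):

```ts
// Sketch only; the dependency names are placeholders for the real domain objects.
type StorageType = 'file' | 'sqlite' | 'api' | 'auto';

async function switchTag(
	storageType: StorageType,
	tagName: string,
	deps: {
		switchBrief: (tag: string) => Promise<void>; // API-backed tag switch
		setActiveTag: (tag: string) => Promise<void>; // local state update
	}
): Promise<void> {
	if (storageType === 'api') {
		// API storage: tags map to remote briefs
		await deps.switchBrief(tagName);
		return;
	}
	// 'file' and 'sqlite' are both local storage; only update local state
	await deps.setActiveTag(tagName);
}
```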
461-477: ⚠️ Potential issue | 🟡 Minor: Clarify documentation: SQLite is intentionally supported for task file generation.
Both the JSDoc note ("Only applicable for file storage") and the inline comment ("Only file storage supports task file generation") are misleading. Based on the architecture, SQLite is an intentionally supported local storage type alongside file storage. The code correctly blocks only `api` storage (cloud-based, remote), allowing both `file` and `sqlite` (local, file-based) to generate task files. Update the JSDoc and inline comment to say "local storage types" instead of "file storage," and change the error message from "local file storage" to "local storage" for accuracy.
🤖 Fix all issues with AI agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts`:
- Around line 198-208: The getSize() method's queryOne generic is incorrect: it
expects page_count and page_size but the SQL selects a single aliased column
"size", causing the cast to any; update the queryOne call in getSize() to use
the correct return type (e.g. { size: number }) and return result.size (or 0
when result is falsy) instead of casting to any; keep the ':memory:' check and
preserve existing behavior for missing results.
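A hedged sketch of that change; the `queryOne` signature and the PRAGMA-based size query are assumptions derived from the review text, not the PR's exact code:

```ts
// Sketch: getSize() with a correct result type instead of a cast to any.
function getSize(
	queryOne: <T>(sql: string) => T | undefined,
	dbPath: string
): number {
	if (dbPath === ':memory:') {
		return 0;
	}
	const result = queryOne<{ size: number }>(
		'SELECT page_count * page_size AS size FROM pragma_page_count(), pragma_page_size()'
	);
	return result?.size ?? 0;
}
```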
- Around line 236-242: The backup(destPath: string) method currently
interpolates destPath into SQL (this.db.exec(`VACUUM INTO '${destPath}'`))
creating an SQL injection risk; update this method to avoid string interpolation
by either using a safe library/API for SQLite backups (e.g., a dedicated
backup/backupToFile method on the DB client) or by validating/normalizing and
escaping the path before use: enforce a strict path whitelist (use path.resolve,
ensure no special characters or quotes, and restrict to allowed directories) or
properly escape single quotes if parameterization isn't available; replace the
direct this.db.exec call in backup with the safe alternative and add validation
to reject or sanitize dangerous destPath values.
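One possible shape for that validation; `allowedBaseDir` is a hypothetical whitelist root and the PR may pick a different boundary:

```ts
import path from 'node:path';

// Sketch: only allow VACUUM INTO targets inside an approved directory, and escape
// single quotes since the destination cannot be bound as a SQL parameter.
function buildBackupSql(destPath: string, allowedBaseDir: string): string {
	const resolved = path.resolve(destPath);
	const base = path.resolve(allowedBaseDir);
	if (resolved !== base && !resolved.startsWith(base + path.sep)) {
		throw new Error(`Refusing to back up outside ${base}: ${resolved}`);
	}
	const escaped = resolved.replace(/'/g, "''"); // SQLite doubles single quotes
	return `VACUUM INTO '${escaped}'`;
}
```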
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts`:
- Around line 131-175: The read-modify-write in writeTask (and similarly
writeTasks, deleteTask, softDeleteTask) is not atomic and can corrupt the JSONL
file under concurrent calls; fix by serializing writes and performing atomic
writes: acquire a process-level write lock (e.g., via proper-lockfile or an
internal mutex) around readLines/modify/writeLines, and change writeLines to
write to a temporary file (e.g., `${this.jsonlPath}.tmp`) then fs.rename to
replace jsonlPath atomically; ensure locks are released on errors and reference
the methods writeTask, writeTasks, deleteTask, softDeleteTask, writeLines,
readLines and the jsonlPath field so reviewers can locate the changes.
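A sketch of the temp-file-plus-rename part only; the locking layer (e.g. proper-lockfile or an internal mutex) is assumed to wrap the read-modify-write around this:

```ts
import fs from 'node:fs/promises';

// Sketch: write serialized JSONL lines to a temp file, then atomically replace the target.
async function writeLinesAtomic(jsonlPath: string, lines: string[]): Promise<void> {
	const tmpPath = `${jsonlPath}.tmp`;
	const content = lines.length > 0 ? lines.join('\n') + '\n' : '';
	await fs.writeFile(tmpPath, content, 'utf-8');
	// rename() replaces the destination atomically on the same filesystem
	await fs.rename(tmpPath, jsonlPath);
}
```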
- Around line 395-417: The code closes the readline interface twice (once inside
the loop when a matching task is found and again in the finally block). Remove
the inner rl.close() call inside the loop and simply return the found task so
the finally block handles closing rl; target the loop that parses lines (uses
variables rl, taskId, and task as JsonlTask) and delete the rl.close() that
appears right before returning the task.
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.ts`:
- Around line 86-156: Wrap the multi-statement work in applyMigration and
revertMigration in an explicit transaction: start a transaction (BEGIN), run the
migration.up / migration.down statements, then insert/delete the schema_version
row with the prepared stmt, and COMMIT; on any error ROLLBACK and rethrow to
avoid partial application (use the existing this.db.exec and stmt.run calls
inside the transaction). In migrateTo, validate targetVersion is within [0,
CURRENT_SCHEMA_VERSION] (use getCurrentVersion and the constant
CURRENT_SCHEMA_VERSION), throw a clear error if out of bounds before selecting
migrations, and keep the existing logic for applying (filter by version >
current && <= target) and reverting (filter then reverse). Ensure
revertMigration still checks migration.down before running the transaction.
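A minimal sketch of that transaction wrapper, assuming a better-sqlite3/libsql-style synchronous `exec`/`prepare` API as used elsewhere in this adapter:

```ts
// Sketch: apply one migration atomically; ROLLBACK on any failure and rethrow.
interface Migration {
	version: number;
	up: string;
	down?: string;
}

function applyMigration(
	db: { exec(sql: string): void; prepare(sql: string): { run(...args: unknown[]): void } },
	migration: Migration
): void {
	db.exec('BEGIN');
	try {
		db.exec(migration.up);
		db.prepare('INSERT INTO schema_version (version) VALUES (?)').run(migration.version);
		db.exec('COMMIT');
	} catch (error) {
		db.exec('ROLLBACK');
		throw error;
	}
}
```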
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts`:
- Around line 702-719: The subtaskRowToSubtask function returns
acceptanceCriteria as a single-element array containing the newline-joined
string (row.acceptance_criteria) instead of restoring the original list shape;
change the mapping so that if row.acceptance_criteria is truthy you split it on
newlines (e.g., row.acceptance_criteria.split(/\r?\n/)), trim entries and filter
out empty strings, otherwise set acceptanceCriteria to undefined. Ensure you
update the acceptanceCriteria assignment in subtaskRowToSubtask to use
row.acceptance_criteria, produce an array of strings, and preserve undefined
when there are no criteria.
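The round-trip restore could look roughly like this; the column name comes from the review and the helper itself is hypothetical:

```ts
// Sketch: turn the newline-joined column back into a trimmed, non-empty string[].
function restoreAcceptanceCriteria(raw: string | null | undefined): string[] | undefined {
	if (!raw) {
		return undefined;
	}
	const items = raw
		.split(/\r?\n/)
		.map((line) => line.trim())
		.filter((line) => line.length > 0);
	return items.length > 0 ? items : undefined;
}
```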
- Around line 235-257: The delete-then-insert sequence in setTaskDependencies
(and similarly in setTaskLabels and setSubtaskDependencies) must be executed
atomically so a failed insert cannot leave you without previous rows; wrap the
delete and subsequent inserts in a single DB transaction (use the sqlite
driver's transaction API or explicit BEGIN/COMMIT/ROLLBACK) and move the
deleteStmt.run(...) and the loop of insertStmt.run(...) inside the transaction
body so any error rolls back the delete; apply the same transactional pattern to
the functions setTaskLabels and setSubtaskDependencies to ensure all three
operate atomically.
- Around line 839-943: Wrap the entire body of saveCompleteTask in a DB
transaction: call db.exec('BEGIN TRANSACTION') before any
inserts/updates/deletes, and on success call db.exec('COMMIT'); if any error
occurs catch it, call db.exec('ROLLBACK'), and rethrow the error so callers see
the failure. Implement this using try/catch/finally around the existing logic in
saveCompleteTask (referencing saveCompleteTask, getTask, insertTask, updateTask,
setTaskDependencies, setTaskLabels, setTaskMetadata, getSubtasks, deleteSubtask,
updateSubtask, insertSubtask, setSubtaskDependencies) to ensure all six affected
tables are atomic; follow the same BEGIN/COMMIT/ROLLBACK pattern used elsewhere
(database.ts) and avoid leaving partial state on exceptions.
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 128-159: rebuildFromJsonl() currently hardcodes every task's tag
to 'master', losing original tag data; update the JsonlTask type (in
jsonl-sync.ts) to include an optional _tag?: string, ensure whatever writes
JSONL (toJsonlTask) includes the task's tag into _tag, then change
rebuildFromJsonl() to read const tag = jsonlTask._tag || 'master' when grouping
and passing that tag into saveCompleteTask(this.getDb().getDb(), task, tag) so
original tags are preserved.
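Sketched grouping logic for the rebuild, assuming `JsonlTask` gains the optional `_tag` field described above:

```ts
// Sketch: group JSONL records by their original tag, defaulting to 'master' for old files.
interface JsonlTaskLike {
	_tag?: string;
	[key: string]: unknown;
}

function groupByTag(jsonlTasks: JsonlTaskLike[]): Map<string, JsonlTaskLike[]> {
	const byTag = new Map<string, JsonlTaskLike[]>();
	for (const jsonlTask of jsonlTasks) {
		const tag = jsonlTask._tag || 'master';
		const bucket = byTag.get(tag) ?? [];
		bucket.push(jsonlTask);
		byTag.set(tag, bucket);
	}
	// Each bucket would then be converted and passed to saveCompleteTask(db, task, tag).
	return byTag;
}
```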
- Around line 663-667: The computed lastModified currently ignores jsonlStats;
update the logic around jsonlSync.getStats() so lastModified uses the JSONL
file's actual modification time when available (e.g., use jsonlStats.mtime or
jsonlStats.mtimeMs converted to an ISO string) and only falls back to new
Date().toISOString() when jsonlStats or its time field is missing; adjust the
code that sets the lastModified variable accordingly (refer to
jsonlSync.getStats() and the lastModified assignment).
In `@packages/tm-core/src/modules/storage/index.ts`:
- Around line 11-18: Remove the three internal-only exported types from the
public export list: delete MigrationOptions, ValidationResult, and StorageStatus
from the export block that currently re-exports StorageMigration and
MigrationResult; keep StorageMigration and MigrationResult exported so apps/cli
and apps/mcp still consume them. Update the export statement that references
'./services/storage-migration.js' to only export StorageMigration and type
MigrationResult.
In `@packages/tm-core/src/modules/storage/services/storage-migration.ts`:
- Around line 706-750: The roundTripTest function can leak resources if
this.validate(...) throws because source.close() and target.close() are only
called after validate; wrap the lifecycle so both storages are always closed:
create source and target as now, call await source.initialize() / await
target.initialize() inside try, then move validation and copying into a try
block and put source.close() and target.close() into a finally block (only
calling close if the instance exists/was initialized) so any exception from
validate or other operations still triggers cleanup; keep the existing catch
that returns the ValidationResult with the error message.
- Around line 634-644: The subtask counts are asymmetric because
sourceSubtaskCount is incremented for every entry in sourceMap while
targetSubtaskCount is only incremented when a matching targetTask exists; fix by
ensuring target subtasks are counted independently of source iteration—either
move the target subtask counting out of the for-loop and iterate targetMap to
sum targetTask.subtasks lengths into targetSubtaskCount, or, inside the current
loop, always lookup targetTask for the id and if present add its subtasks but
also separately compute a full pass over targetMap to guarantee all target
subtasks are included (update variables: sourceSubtaskCount, targetSubtaskCount;
use sourceMap, targetMap, sourceTask, targetTask, id).
🧹 Nitpick comments (13)
docs/plans/2025-02-05-sqlite-storage-backend-design.md (3)
9-29: Add language specifier to fenced code block.

The ASCII architecture diagram should have a language specifier for consistent markdown rendering. Use `text` or `plaintext`.

📝 Suggested fix
````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────┐
````
197-206: Add language specifier for directory structure block.

📝 Suggested fix
````diff
-```
+```text
 .taskmaster/
````
210-227: Add language specifier for implementation structure block.

📝 Suggested fix
````diff
-```
+```text
 packages/tm-core/src/modules/storage/
````

apps/mcp/src/tools/storage/storage-config.tool.ts (2)
156-162: Consider passing migration options for consistency with CLI.

The CLI's `executeSwitch` passes explicit options (`createBackup: true`, `validateAfterMigration: true`, `updateConfig: true`, `updateGitignore: true`), but this MCP tool calls `migrateToSqlite()` and `migrateToFile()` without options. This may result in different default behavior between CLI and MCP interfaces.

♻️ Suggested fix to pass explicit options
```diff
 if (type === 'sqlite') {
-  result = await migration.migrateToSqlite();
+  result = await migration.migrateToSqlite({
+    createBackup: true,
+    validateAfterMigration: true,
+    updateConfig: true,
+    updateGitignore: true
+  });
 } else {
-  result = await migration.migrateToFile();
+  result = await migration.migrateToFile({
+    createBackup: true,
+    validateAfterMigration: true,
+    updateConfig: true
+  });
 }
```
103-108: The `as any` cast masks a potential type mismatch.

The cast `as any` on the storage config update could hide type errors if the config structure changes.

♻️ Suggested improvement
Consider importing or defining the proper storage config type:
```diff
-await tmCore.config.updateConfig({
-  storage: { type: type as StorageType } as any
-});
+await tmCore.config.updateConfig({
+  storage: { type: type as StorageType }
+});
```

If TypeScript complains, the `updateConfig` method signature may need adjustment in tm-core rather than using `as any` here.

packages/tm-core/src/modules/storage/services/storage-factory.ts (1)
216-227: Consider auto-detecting SQLite when artifacts exist.
`detectOptimalStorage` only returns `'sqlite'` when explicitly configured. Consider also detecting SQLite when `tasks.db` exists but config says `'auto'`, to improve self-healing behavior for users who clone a repo with SQLite artifacts.

apps/cli/src/commands/storage.command.ts (2)
119-159: Inconsistent output methods: mix of `console.log` and `display*` utilities.

The method uses both raw `console.log` (Lines 124-125, 133-142) and the imported display utilities. For consistency, prefer using the display utilities throughout or document when raw console output is intentional.
493-523: Use `readJSON`/`writeJSON` utilities for config file operations.

This method uses raw `fs.promises.readFile` and `fs.promises.writeFile` for JSON operations. As per coding guidelines, use the centralized JSON utilities to ensure consistent error handling and validation.

♻️ Suggested approach
Consider importing and using `readJSON`/`writeJSON` from the utilities module, or delegating config updates to `StorageMigration.updateConfig()`, which already handles this.

```diff
-  private async updateConfigType(newType: 'file' | 'sqlite'): Promise<void> {
-    const taskmasterDir = path.dirname(this.configPath);
-
-    // Ensure directory exists
-    await fs.promises.mkdir(taskmasterDir, { recursive: true });
-
-    let config: TaskMasterConfig = {};
-
-    // Read existing config if it exists
-    if (fs.existsSync(this.configPath)) {
-      try {
-        config = JSON.parse(await fs.promises.readFile(this.configPath, 'utf-8'));
-      } catch {
-        // Start fresh if config is corrupted
-        config = {};
-      }
-    }
-
-    // Update storage type
-    config.storage = {
-      ...config.storage,
-      type: newType
-    };
-
-    // Write updated config
-    await fs.promises.writeFile(
-      this.configPath,
-      JSON.stringify(config, null, 2),
-      'utf-8'
-    );
-  }
+  private async updateConfigType(newType: 'file' | 'sqlite'): Promise<void> {
+    const migration = new StorageMigration(this.projectPath);
+    await migration.updateConfig(newType);
+  }
```

packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts (1)
67-93: Consider using the `db.transaction()` wrapper for consistency.

The `initialize()` method manually manages `BEGIN TRANSACTION`/`COMMIT`/`ROLLBACK` via `exec()`, while Lines 116-118 provide a `transaction()` wrapper. Using the wrapper would be more consistent.

packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (1)
112-117: Use the central logger instead of `console.error`.

As per coding guidelines, use the central log function instead of direct `console.error` calls.

♻️ Proposed fix
```diff
+import { getLogger } from '../../../../common/logger/index.js';
+
+const logger = getLogger('JsonlSync');
+
 // In readAll():
-      } catch (error) {
-        // Log parse errors but continue reading
-        console.error(
-          `[JsonlSync] Failed to parse line ${lineNumber}: ${error instanceof Error ? error.message : 'Unknown error'}`
-        );
-      }
+      } catch (error) {
+        // Log parse errors but continue reading
+        logger.warn(
+          `Failed to parse line ${lineNumber}: ${error instanceof Error ? error.message : 'Unknown error'}`
+        );
+      }
```

packages/tm-core/src/modules/storage/services/storage-migration.ts (3)
177-179: Consider logging suppressed errors for debugging.

The empty `catch` block silently swallows any errors during tag extraction. While falling back to `['master']` is reasonable, logging the error (even at debug level) would help diagnose issues when tag retrieval fails unexpectedly.

💡 Suggested improvement
```diff
-    } catch {
+    } catch (error) {
+      // Log at debug level for troubleshooting
+      // Consider: log.debug('Failed to extract tags', { error });
       tags = ['master'];
     }
```
231-245: Consider adding a backup option before deleting the existing database.

When rebuilding, the existing SQLite database is deleted without creating a backup. If the JSONL file is corrupted or incomplete, this could result in data loss. The `MigrationOptions` interface already supports `createBackup`, but `rebuildSqlite` doesn't accept options.

♻️ Suggested enhancement
```diff
-  async rebuildSqlite(): Promise<MigrationResult> {
+  async rebuildSqlite(options: MigrationOptions = {}): Promise<MigrationResult> {
     const startTime = Date.now();
     const errors: string[] = [];
     const warnings: string[] = [];
     let taskCount = 0;
     let subtaskCount = 0;
     let tagsCount = 0;
+    const { createBackup = true } = options;

     try {
       const jsonlPath = path.join(this.tasksDir, 'tasks.jsonl');
       const sqliteDbPath = path.join(this.tasksDir, 'tasks.db');

       // ... existing JSONL check ...

       // Delete existing database if it exists
       if (fs.existsSync(sqliteDbPath)) {
+        if (createBackup) {
+          await this.createBackup(sqliteDbPath);
+        }
         await fs.promises.unlink(sqliteDbPath);
         warnings.push('Deleted existing SQLite database');
       }
```
582-590: Consider validating the storage type value.

The type assertion `(config.storage?.type as StorageType)` could pass through invalid values if the config file contains unexpected data.

💡 Suggested improvement
```diff
-    return (config.storage?.type as StorageType) || 'auto';
+    const type = config.storage?.type;
+    if (type === 'file' || type === 'sqlite' || type === 'api' || type === 'auto') {
+      return type;
+    }
+    return 'auto';
   } catch {
     return 'auto';
   }
```
```ts
export function saveCompleteTask(db: Database.Database, task: Task, tag: string): void {
	const insertData = taskToInsertData(task, tag);

	// Insert or update the task
	const existing = getTask(db, task.id, tag);
	if (existing) {
		updateTask(db, task.id, tag, {
			title: task.title,
			description: task.description,
			status: task.status,
			priority: task.priority,
			details: task.details,
			test_strategy: task.testStrategy,
			effort: task.effort,
			actual_effort: task.actualEffort,
			complexity: task.complexity,
			assignee: task.assignee,
			expansion_prompt: task.expansionPrompt,
			complexity_reasoning: task.complexityReasoning,
			implementation_approach: task.implementationApproach,
			recommended_subtasks: task.recommendedSubtasks
		});
	} else {
		insertTask(db, insertData);
	}

	// Set dependencies
	setTaskDependencies(db, task.id, tag, task.dependencies);

	// Set labels
	if (task.tags) {
		setTaskLabels(db, task.id, tag, task.tags);
	}

	// Set metadata if present
	const hasMetadata =
		task.relevantFiles ||
		task.codebasePatterns ||
		task.existingInfrastructure ||
		task.scopeBoundaries ||
		task.technicalConstraints ||
		task.acceptanceCriteria ||
		task.skills ||
		task.category ||
		task.metadata;

	if (hasMetadata) {
		setTaskMetadata(
			db,
			task.id,
			tag,
			{
				relevantFiles: task.relevantFiles,
				codebasePatterns: task.codebasePatterns,
				existingInfrastructure: task.existingInfrastructure,
				scopeBoundaries: task.scopeBoundaries,
				technicalConstraints: task.technicalConstraints,
				acceptanceCriteria: task.acceptanceCriteria,
				skills: task.skills,
				category: task.category
			},
			task.metadata
		);
	}

	// Handle subtasks
	const existingSubtasks = getSubtasks(db, task.id, tag);
	const existingSubtaskIds = new Set(existingSubtasks.map(s => s.id));
	const newSubtaskIds = new Set(
		task.subtasks.map(s => (typeof s.id === 'number' ? s.id : parseInt(String(s.id), 10)))
	);

	// Delete removed subtasks
	for (const existingSub of existingSubtasks) {
		if (!newSubtaskIds.has(existingSub.id)) {
			deleteSubtask(db, existingSub.id, task.id, tag);
		}
	}

	// Insert or update subtasks
	for (const subtask of task.subtasks) {
		const subtaskId = typeof subtask.id === 'number' ? subtask.id : parseInt(String(subtask.id), 10);
		const insertData = subtaskToInsertData(subtask);

		if (existingSubtaskIds.has(subtaskId)) {
			updateSubtask(db, subtaskId, task.id, tag, {
				title: subtask.title,
				description: subtask.description,
				status: subtask.status,
				priority: subtask.priority,
				details: subtask.details,
				test_strategy: subtask.testStrategy,
				acceptance_criteria: subtask.acceptanceCriteria?.join('\n'),
				assignee: subtask.assignee
			});
		} else {
			insertSubtask(db, insertData, tag);
		}

		// Set subtask dependencies
		const deps = subtask.dependencies?.map(d =>
			typeof d === 'number' ? d : parseInt(String(d), 10)
		) ?? [];
		setSubtaskDependencies(db, task.id, subtaskId, tag, deps);
	}
```
🧩 Analysis chain
🏁 Script executed:
```sh
cd packages/tm-core && find . -type f -name "*.ts" | grep -E "(sqlite|storage|queries)" | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 1243
🏁 Script executed:
```sh
cd packages/tm-core && head -100 src/modules/storage/adapters/sqlite-storage/queries.ts
```

Repository: eyaltoledano/claude-task-master
Length of output: 2578
🏁 Script executed:
```sh
cd packages/tm-core && rg -A 10 "set\*|setTask|setSubtask" src/modules/storage/adapters/sqlite-storage/queries.ts | head -80
```

Repository: eyaltoledano/claude-task-master
Length of output: 2135
🏁 Script executed:
```sh
cd packages/tm-core && rg -B 5 -A 15 "BEGIN|COMMIT|ROLLBACK" src/modules/storage/adapters/sqlite-storage/
```

Repository: eyaltoledano/claude-task-master
Length of output: 3033
🏁 Script executed:
```sh
cd packages/tm-core && grep -r "Database\|sqlite" src/modules/storage/adapters/sqlite-storage/queries.ts | head -10
```

Repository: eyaltoledano/claude-task-master
Length of output: 570
🏁 Script executed:
```sh
cd packages/tm-core && rg -B 5 -A 5 "BEGIN|COMMIT|ROLLBACK" src/modules/storage/adapters/sqlite-storage/queries.ts
```

Repository: eyaltoledano/claude-task-master
Length of output: 57
🌐 Web query:
libsql BEGIN vs BEGIN TRANSACTION transaction syntax
💡 Result:
In libSQL (and SQLite), TRANSACTION is optional—so these are equivalent:
- `BEGIN;`
- `BEGIN TRANSACTION;`
SQLite’s grammar is BEGIN [DEFERRED | IMMEDIATE | EXCLUSIVE] [TRANSACTION [name]], and the default is DEFERRED when you don’t specify a mode. [1]
So these are also valid (and differ by locking/when the write lock is acquired):
- `BEGIN DEFERRED;` (same default behavior as `BEGIN;`) [1]
- `BEGIN IMMEDIATE;` (starts a write transaction immediately) [1]
- `BEGIN EXCLUSIVE;` [1]
libSQL additionally supports a read-only transaction form (extension):
- `BEGIN TRANSACTION READONLY;` [2]
And in @libsql/client, their transaction “modes” map to these statements:
"write"→BEGIN IMMEDIATE"deferred"→BEGIN DEFERRED"read"→BEGIN TRANSACTION READONLY[2]
Sources: [1] SQLite transaction syntax/docs, [2] @libsql/client TransactionMode docs.
Wrap saveCompleteTask in a transaction to prevent partial saves.
This function modifies six tables (tasks, dependencies, labels, metadata, subtasks, subtask_dependencies) without atomicity. If any operation fails after the initial insert, the database is left inconsistent (e.g., task saved but dependencies missing). Use db.exec('BEGIN TRANSACTION') and db.exec('COMMIT'/'ROLLBACK') as established in database.ts initialization.
🔒 Suggested fix (transaction wrapper)
```diff
 export function saveCompleteTask(db: Database.Database, task: Task, tag: string): void {
+  db.exec('BEGIN TRANSACTION');
+  try {
     const insertData = taskToInsertData(task, tag);
     // ... rest of function
     setSubtaskDependencies(db, task.id, subtaskId, tag, deps);
   }
+  db.exec('COMMIT');
+  } catch (error) {
+    db.exec('ROLLBACK');
+    throw error;
+  }
 }
```

🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts`
around lines 839 - 943, Wrap the entire body of saveCompleteTask in a DB
transaction: call db.exec('BEGIN TRANSACTION') before any
inserts/updates/deletes, and on success call db.exec('COMMIT'); if any error
occurs catch it, call db.exec('ROLLBACK'), and rethrow the error so callers see
the failure. Implement this using try/catch/finally around the existing logic in
saveCompleteTask (referencing saveCompleteTask, getTask, insertTask, updateTask,
setTaskDependencies, setTaskLabels, setTaskMetadata, getSubtasks, deleteSubtask,
updateSubtask, insertSubtask, setSubtaskDependencies) to ensure all six affected
tables are atomic; follow the same BEGIN/COMMIT/ROLLBACK pattern used elsewhere
(database.ts) and avoid leaving partial state on exceptions.
```ts
		// Get last modified from database or JSONL
		const jsonlStats = this.jsonlSync.getStats();
		const lastModified = jsonlStats
			? new Date().toISOString()
			: new Date().toISOString();
```
lastModified calculation doesn't use jsonlStats.
The `jsonlStats` variable is retrieved but not used meaningfully; both branches return `new Date().toISOString()`. The intent seems to be to use the file's actual modification time.
🔧 Proposed fix
```diff
-    // Get last modified from database or JSONL
-    const jsonlStats = this.jsonlSync.getStats();
-    const lastModified = jsonlStats
-      ? new Date().toISOString()
-      : new Date().toISOString();
+    // Get last modified from JSONL file stats
+    let lastModified = new Date().toISOString();
+    try {
+      const stat = await fs.promises.stat(this.jsonlPath);
+      lastModified = stat.mtime.toISOString();
+    } catch {
+      // Use current time if file doesn't exist
+    }
```
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`
around lines 663 - 667, The computed lastModified currently ignores jsonlStats;
update the logic around jsonlSync.getStats() so lastModified uses the JSONL
file's actual modification time when available (e.g., use jsonlStats.mtime or
jsonlStats.mtimeMs converted to an ISO string) and only falls back to new
Date().toISOString() when jsonlStats or its time field is missing; adjust the
code that sets the lastModified variable accordingly (refer to
jsonlSync.getStats() and the lastModified assignment).
High severity:
- Fix JSONL sync to reflect deletions (use exportAll for full overwrite)
- Fix rebuild from JSONL to preserve tag context (add _tag field)

Medium severity:
- Fix SQL injection in backup method (escape single quotes)
- Fix subtask acceptance_criteria round-trip (split on newline when reading)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix double rl.close() in jsonl-sync getTask (use try/finally)
- Add modifiedTime to JsonlStats and use in lastModified calculation
- Fix getSize() type annotation mismatch
- Fix resource leak in roundTripTest with try/finally pattern

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The tool was exported from @tm/mcp but not actually registered in the MCP server's tool registry. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add Storage Management section to command-reference.md
- Add Storage Configuration section to configuration.md
- Document CLI commands: status, switch, migrate, rebuild
- Document storage types: file (default) and sqlite

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit fixes several issues discovered when testing SQLite migration
with the actual project data:
1. Type consistency for task IDs:
- Ensure task IDs are stored as strings in taskToInsertData
- Convert parent_id to string in subtaskToInsertData
- Convert task.id to String() when calling setTaskDependencies
- Convert dependency IDs to strings with .map(String)
2. Deduplication of subtasks:
- Handle duplicate subtask IDs by keeping only the last occurrence
- This fixes UNIQUE constraint violations for malformed data
3. Two-pass dependency handling:
- Add SaveCompleteTaskOptions with skipTaskDependencies and
skipSubtaskDependencies flags
- saveTasks now does two passes: first inserts all tasks, then
sets dependencies (to avoid FK violations on forward references)
- rebuildFromJsonl also uses two-pass approach
4. Data format handling:
- Handle acceptanceCriteria as string or array
- Provide defaults for required fields (status, priority)
- Parse dotted dependency notation (e.g., "21.4" -> 4)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fix two issues with SQLite storage:

1. Fix "Failed to initialize TmCore" error when SQLite storage is configured via StorageFactory. The root cause was that when passing config to SqliteDatabase, if config contained `dbPath: undefined`, it would override the valid computed dbPath when spread in the constructor. Fixed by extracting dbPath from config before passing.

2. Fix migration validation failures:
   - Handle duplicate subtask IDs in source (count unique IDs)
   - Only validate dependencies that reference existing tasks
   - These are data quality issues in source, not migration failures

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Ignore tasks.db, tasks.db-shm, and tasks.db-wal files. The tasks.jsonl file should be tracked for git-friendly sync. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This file is the git-friendly sync file for SQLite storage. It contains all tasks in JSONL format for version control. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Auto-fixed formatting and import organization issues. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts`:
- Around line 151-170: ID comparisons use strict equality and can fail when IDs
are number vs string; change comparisons to normalize both sides to the same
type (e.g., String(existingTask.id) === String(task.id)) so numeric IDs and
string IDs match. Update the comparison inside the JSONL read loop in
jsonl-sync.ts (the block that parses existingTask and checks existingTask.id ===
task.id) and apply the same normalization pattern to the other ID checks
referenced in this file (writeTasks, deleteTask, softDeleteTask) so all ID
equality checks use String(id) on both operands.
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts`:
- Around line 664-694: Wrap every JSON.parse usage on metadata fields in a safe
parse that catches errors and falls back to a sensible default; implement a
local helper (e.g., safeParseJson<T>(value: string | undefined, defaultValue:
T)) and replace direct calls in the block handling metadata (the code that reads
metadata.relevant_files, codebase_patterns, existing_infrastructure,
scope_boundaries, technical_constraints, acceptance_criteria, skills,
user_metadata) so any invalid JSON returns the default (empty array/object or
undefined) and does not throw—update assignments like task.relevantFiles,
task.codebasePatterns, task.existingInfrastructure, task.scopeBoundaries,
task.technicalConstraints, task.acceptanceCriteria, task.skills, and
task.metadata to use the helper.
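A small sketch of the helper this prompt describes; the generic name comes from the prompt and the defaults are illustrative:

```ts
// Sketch: tolerate corrupt JSON in metadata columns instead of throwing.
function safeParseJson<T>(value: string | null | undefined, defaultValue: T): T {
	if (!value) {
		return defaultValue;
	}
	try {
		return JSON.parse(value) as T;
	} catch {
		return defaultValue;
	}
}

// Illustrative usage: task.relevantFiles = safeParseJson<string[]>(metadata.relevant_files, []);
```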
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 283-290: The current return spreads a Subtask into a Task and uses
an unsafe cast (return { ...subtask, id: taskId, subtasks: [], dependencies:
subtask.dependencies || [] } as Task), which can omit required Task fields like
testStrategy, priority, and details; instead either (A) explicitly map and
populate all required Task properties when converting a Subtask (e.g., create a
helper convertSubtaskToTask(subtask, taskId) that sets id, subtasks,
dependencies and provides sensible defaults or pulls missing fields from context
for testStrategy, priority, details), or (B) change the return type to reflect a
Subtask-like shape (e.g., Subtask | Partial<Task>) so callers don’t assume a
full Task; update the code that references this return to use the correct type
or mapping helper.
- Around line 879-905: syncToJsonl currently ignores its _tag argument and
always calls syncAllToJsonl causing costly full exports; change syncToJsonl to
load and export only the provided tag by calling
loadAllTasks(this.getDb().getDb(), _tag) and then call a targeted export on the
JSONL adapter (e.g., jsonlSync.exportTag or jsonlSync.exportWithTags for a
single tag) instead of jsonlSync.exportAllWithTags; additionally, for
single-task mutations (updateTask, updateTaskStatus, deleteTask) replace the
full-sync call with incremental operations on jsonlSync (e.g.,
jsonlSync.writeTask(task, tag) or jsonlSync.deleteTask(taskId, tag)) so only the
affected tag or task is written/deleted rather than rewriting the entire file.
In `@packages/tm-core/src/modules/storage/services/storage-migration.ts`:
- Around line 166-170: Several methods (getStatus, rebuildSqlite,
migrateToSqlite, migrateToFile) open SqliteStorage/FileStorage and only call
.close() on the happy path which leaks resources on exceptions; change each
method to declare the storage variable (e.g., let sqliteStorage: SqliteStorage |
undefined) before the try, instantiate and await .initialize() inside the try,
and move the .close() call into a finally block that checks for a non-undefined
instance and awaits .close(); ensure any caught errors are rethrown or
propagated after closing so you don't swallow exceptions.
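The close-in-finally shape being asked for, sketched against an assumed adapter interface rather than the PR's exact classes:

```ts
// Sketch: always close the storage adapter, even when the wrapped work throws.
interface Closeable {
	initialize(): Promise<void>;
	close(): Promise<void>;
}

async function withStorage<S extends Closeable, T>(
	createStorage: () => S,
	work: (storage: S) => Promise<T>
): Promise<T> {
	let storage: S | undefined;
	try {
		storage = createStorage();
		await storage.initialize();
		return await work(storage);
	} finally {
		if (storage) {
			await storage.close();
		}
	}
}
```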
- Around line 855-862: The check mistakenly uses patterns.some(...) which
returns true if any single pattern exists and causes early return, skipping
additions when others are missing; change the condition to use
patterns.every(pattern => content.includes(pattern) && !pattern.startsWith('#'))
so we only return early when all patterns (and not commented ones) are present;
update the variable (hasPatterns) logic in the storage-migration.ts block where
patterns, content.includes, and pattern.startsWith are used to ensure missing
-shm/-wal entries get added.
- Around line 670-671: The code currently calls sourceTask.dependencies?.sort()
and targetTask.dependencies?.sort(), which mutates the original arrays; change
these to non-mutating sorts by copying before sorting (e.g., use Array.from(...)
or spread: [...(sourceTask.dependencies || [])].sort().join(',') and similarly
for targetTask.dependencies) so sourceDeps and targetDeps are derived from
sorted copies and the original task objects remain unmodified when validate()
returns; update both the sourceDeps and targetDeps assignments accordingly.
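For the dependency comparison, a copy-then-sort keeps `validate()` side-effect free; a tiny sketch with a hypothetical helper name:

```ts
// Sketch: derive a sorted comparison key from a copy so the original array stays untouched.
function dependencyKey(dependencies: ReadonlyArray<string | number> | undefined): string {
	return [...(dependencies ?? [])].map(String).sort().join(',');
}

// e.g. const sourceDeps = dependencyKey(sourceTask.dependencies);
//      const targetDeps = dependencyKey(targetTask.dependencies);
```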
🧹 Nitpick comments (9)
packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (2)
116-121: Use the central log function instead of `console.error`.

Per the coding guidelines, direct `console.log`/`console.error` calls should be avoided outside the logging utility. Replace with the project's central `log` function.

As per coding guidelines: "Do not add direct console.log calls outside the logging utility - use the central log function instead."
376-396: `getStats` synchronously reads the entire file to count lines.

`readFileSync` on Line 383 loads the full JSONL file into memory and blocks the event loop. For a stats method that returns a line count, consider using a streaming approach or caching the count.

packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts (3)
827-843: `loadAllTasks` issues 5+ queries per task (N+1 pattern).

For each task, this executes separate queries for dependencies, labels, subtasks, metadata, and then per-subtask dependency queries. With 100 tasks averaging 5 subtasks each, that's ~700 queries. Consider batch-fetching dependencies/labels/subtasks per tag and joining in memory.
986-1007: Subtask dependency parsing logic (dotted notation, type coercion) is duplicated.

This same parsing block appears in `saveCompleteTask` here and is repeated in `sqlite-storage.ts` (`saveTasks`, `rebuildFromJsonl`). Extract a shared helper to reduce maintenance burden and divergence risk.

♻️ Example helper extraction
```diff
+/**
+ * Parse subtask dependencies from mixed formats (number, string, dotted notation)
+ * and filter to only valid subtask IDs within the given set.
+ */
+export function parseSubtaskDependencies(
+  rawDeps: Array<string | number>,
+  validSubtaskIds: Set<number>
+): number[] {
+  const deps: number[] = [];
+  for (const d of rawDeps) {
+    let depId: number;
+    if (typeof d === 'number') {
+      depId = d;
+    } else {
+      const str = String(d);
+      if (str.includes('.')) {
+        const parts = str.split('.');
+        depId = parseInt(parts[parts.length - 1], 10);
+      } else {
+        depId = parseInt(str, 10);
+      }
+    }
+    if (!isNaN(depId) && validSubtaskIds.has(depId)) {
+      deps.push(depId);
+    }
+  }
+  return deps;
+}
```
567-607: `setTagMetadata` has a TOCTOU race between the existence check and insert.

`getTagMetadata` (Line 572) and the subsequent `INSERT` (Line 596) are not atomic. Two concurrent calls both seeing "not exists" would cause a UNIQUE constraint failure on the second insert. Low risk in practice since tag ops are infrequent, but worth noting.

♻️ Use INSERT OR REPLACE or ON CONFLICT to make it idempotent
```diff
-  const existing = getTagMetadata(db, tag);
-  if (existing) {
-    // ... UPDATE
-  } else {
-    // ... INSERT
-  }
+  const stmt = db.prepare(`
+    INSERT INTO tag_metadata (tag, description, project_name, version, created_at, updated_at)
+    VALUES (?, ?, ?, ?, datetime('now'), datetime('now'))
+    ON CONFLICT(tag) DO UPDATE SET
+      description = COALESCE(excluded.description, tag_metadata.description),
+      project_name = COALESCE(excluded.project_name, tag_metadata.project_name),
+      version = COALESCE(excluded.version, tag_metadata.version),
+      updated_at = datetime('now')
+  `);
+  stmt.run(
+    tag,
+    metadata.description ?? null,
+    metadata.project_name ?? null,
+    metadata.version ?? '1.0.0'
+  );
```

packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (4)
775-821: `getTagsWithStats` loads all tasks (with full relations) for every tag just to count subtasks.

Line 786 calls `loadAllTasks`, which issues N+1 queries per task. For a stats endpoint, this is very expensive. A direct SQL count query on the `subtasks` table would be much cheaper.

♻️ Direct SQL query for subtask counts
```ts
// In queries.ts — add a function like:
export function getSubtaskCounts(
  db: Database.Database,
  tag: string
): { total: number; byStatus: Record<string, number> } {
  const totalStmt = db.prepare('SELECT COUNT(*) as count FROM subtasks WHERE tag = ?');
  const total = (totalStmt.get(tag) as { count: number }).count;

  const byStatusStmt = db.prepare(
    'SELECT status, COUNT(*) as count FROM subtasks WHERE tag = ? GROUP BY status'
  );
  const rows = byStatusStmt.all(tag) as Array<{ status: string; count: number }>;
  const byStatus: Record<string, number> = {};
  for (const row of rows) {
    byStatus[row.status] = row.count;
  }

  return { total, byStatus };
}
```
166-204: Subtask dependency parsing logic is duplicated from `queries.ts`.

The dotted-notation parsing and validation block (Lines 183–200) is nearly identical to the one in `saveCompleteTask` in `queries.ts` (Lines 986–1007) and also in `saveTasks` (Lines 331–354). Extracting a shared `parseSubtaskDependencies` helper (as suggested in the `queries.ts` review) would eliminate all three copies.
910-920: `getActiveTagFromState` reads `state.json` with raw `fs.readFile` + `JSON.parse`.

Per coding guidelines, the `readJSON` and `writeJSON` utilities should be used for all JSON file operations.

As per coding guidelines: "Use readJSON and writeJSON utilities for all JSON file operations instead of raw fs.readFileSync or fs.writeFileSync."
316-324: Redundant condition when setting task dependencies in `saveTasks`.

Line 321: `if (validDeps.length > 0 || (task.dependencies && task.dependencies.length > 0))`. Since Line 303 already deleted all tasks for the tag and Line 308 re-inserted with `skipTaskDependencies: true`, there are no pre-existing dependencies to clear. The second disjunct only triggers `setTaskDependencies(…, [])` when all original deps were invalid, which is a no-op on a fresh insert.

♻️ Simplify to only set when there are valid deps
```diff
-  if (validDeps.length > 0 || (task.dependencies && task.dependencies.length > 0)) {
-    // Only call if there are deps to set or clear
-    setTaskDependencies(this.getDb().getDb(), String(task.id), resolvedTag, validDeps);
-  }
+  if (validDeps.length > 0) {
+    setTaskDependencies(this.getDb().getDb(), String(task.id), resolvedTag, validDeps);
+  }
```
- Auto-fix formatting issues across tm-core module
- Fix package.json indentation (CodeRabbit issue eyaltoledano#13)
- Add .taskmaster/tasks/*.backup-*.json to gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Critical fixes:
- database.ts: Add path validation to prevent SQL injection in backup()
- jsonl-sync.ts: Use atomic writes (temp file + rename) to prevent corruption

High priority fixes:
- storage-migration.ts: Add finally blocks to close storage connections in getStatus, migrateToSqlite, migrateToFile to prevent resource leaks
- tasks-domain.ts: Treat SQLite as local storage (like file) in switchTag, not as API storage

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@packages/tm-core/package.json`:
- Line 37: Add the external dependency "libsql" to the root package.json
dependencies so tsdown bundling and production installs can resolve it;
specifically, add the entry "libsql": "^0.5.0" to the dependencies object
(matching the version already added in packages/tm-core/package.json) and run an
install or lockfile update to persist the change.
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts`:
- Around line 948-951: The code passes task.dependencies directly to
setTaskDependencies (called in the block gated by options?.skipTaskDependencies)
but setTaskDependencies expects string[]; normalize values first to avoid
non-string types (e.g., numbers) getting inserted. Update the call site so that
task.dependencies is transformed to strings (e.g., task.dependencies.map(String)
or equivalent filtering/mapping) before passing to setTaskDependencies; ensure
you preserve empty/null checks if task.dependencies may be undefined.
- Around line 657-661: getAllTags currently only selects tags present in tasks,
missing tags that exist only in tag_metadata; update the SQL in getAllTags
(function getAllTags) to return the union of tags from tag_metadata and tasks
(e.g. SELECT tag FROM tag_metadata UNION SELECT DISTINCT tag FROM tasks) and
ensure you apply DISTINCT/ORDER BY so the returned string[] includes tags that
have no tasks as well as those that do.
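A sketch of that union query; the `prepare`/`all` calls mirror the better-sqlite3/libsql style used in this adapter and should be treated as illustrative:

```ts
// Sketch: return tags from tag_metadata as well as tags that only appear on tasks.
function getAllTags(db: { prepare(sql: string): { all(): unknown[] } }): string[] {
	const rows = db
		.prepare(
			`SELECT tag FROM tag_metadata
			 UNION
			 SELECT DISTINCT tag FROM tasks
			 ORDER BY tag`
		)
		.all() as Array<{ tag: string }>;
	return rows.map((row) => row.tag);
}
```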
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 600-621: The parent status auto-adjustment in the block using
getSubtasks(...) and isDoneLike(...) can leave parentNewStatus undefined when
all subtasks are 'pending', so the parent remains 'in-progress'; ensure you set
a fallback status (e.g., 'pending') when neither allDone, anyInProgress, nor
anyDone are true by assigning parentNewStatus = 'pending' before calling
updateTaskQuery(this.getDb().getDb(), parentId, tag, { status: parentNewStatus
}); so the parent is correctly reverted when subtasks become pending again.
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/types.ts`:
- Around line 283-302: The parseComplexity function incorrectly treats empty or
whitespace-only strings as numeric zero because Number('') === 0; update
parseComplexity to first guard against empty/whitespace values (e.g., if value
=== null || value.trim() === '' return undefined) before attempting numeric
parsing, then continue with Number parsing and enum check; reference the
parseComplexity function and TaskComplexity type when making this change.
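A guarded version could look like this; the `TaskComplexity` values below are placeholders, since the real union lives in types.ts:

```ts
// Sketch: treat empty/whitespace input as "no complexity" instead of Number('') === 0.
type TaskComplexity = 'simple' | 'moderate' | 'complex'; // placeholder values

function parseComplexity(value: string | null | undefined): TaskComplexity | number | undefined {
	if (value === null || value === undefined || value.trim() === '') {
		return undefined;
	}
	const numeric = Number(value);
	if (!Number.isNaN(numeric)) {
		return numeric;
	}
	const known: TaskComplexity[] = ['simple', 'moderate', 'complex'];
	return known.includes(value as TaskComplexity) ? (value as TaskComplexity) : undefined;
}
```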
🧹 Nitpick comments (13)
packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts (1)
26-41: Explicit `undefined` values in `options` will shadow defaults. The spread
{ dbPath, ...DEFAULT_SQLITE_CONFIG, ...options }allows a caller who passes{ walMode: undefined }to overwrite the defaulttruewithundefined, breaking theRequired<SqliteStorageConfig>invariant at runtime.Consider filtering out
undefinedvalues, or use an explicit merge:🛡️ Suggested defensive merge
constructor( dbPath: string, options?: Partial<Omit<SqliteStorageConfig, 'dbPath'>> ) { + // Filter out undefined values so defaults aren't shadowed + const defined = options + ? Object.fromEntries(Object.entries(options).filter(([, v]) => v !== undefined)) + : {}; this.config = { dbPath, ...DEFAULT_SQLITE_CONFIG, - ...options + ...defined };packages/tm-core/src/modules/storage/adapters/sqlite-storage/types.ts (1)
254-278: Type guard arrays duplicate the canonical type definitions — consider deriving them.
isValidTaskStatusandisValidTaskPriorityhard-code the same string literals asTaskStatusandTaskPriority. If a new status/priority is added to the union type but not here, the guard silently rejects valid values. This is acceptable for now, but consider using a sharedconstarray as the single source of truth.packages/tm-core/src/modules/storage/services/storage-migration.ts (1)
576-590: Unsafe cast ofconfig.storage?.typetoStorageType.If
config.jsoncontains an unexpected value like"postgres", theas StorageTypecast on line 586 will silently pass it through. Consider validating the value against known types before returning it.🛡️ Proposed fix
async getCurrentStorageType(): Promise<StorageType> { try { if (!fs.existsSync(this.configPath)) { return 'auto'; } const config: TaskMasterConfig = JSON.parse( await fs.promises.readFile(this.configPath, 'utf-8') ); - return (config.storage?.type as StorageType) || 'auto'; + const type = config.storage?.type; + if (type === 'file' || type === 'sqlite' || type === 'api' || type === 'auto') { + return type; + } + return 'auto'; } catch { return 'auto'; } }packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.spec.ts (1)
39-50: UnnecessaryasynconbeforeEach/afterEach— only synchronous calls inside.Both hooks use synchronous
fsSync.mkdtempSyncandfsSync.rmSync. Theasynckeyword is not needed and adds a minor deviation from the project guideline. As per coding guidelines, "NEVER use async/await in test functions unless testing actual asynchronous operations."♻️ Suggested fix
- beforeEach(async () => { + beforeEach(() => { // Create a temp directory for each test tempDir = fsSync.mkdtempSync(path.join(os.tmpdir(), 'jsonl-sync-test-')); jsonlPath = path.join(tempDir, 'tasks.jsonl'); sync = new JsonlSync(jsonlPath); }); - afterEach(async () => { + afterEach(() => { if (tempDir && fsSync.existsSync(tempDir)) { fsSync.rmSync(tempDir, { recursive: true, force: true }); } });packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (3)
116-121: Replaceconsole.errorwith the central logging utility.The coding guidelines require using the central
logfunction instead of directconsole.log/console.errorcalls. This applies to parse-error logging inreadAll.As per coding guidelines: "Do not add direct console.log calls outside the logging utility - use the central log function instead."
86-124: Consider wrapping the streaming reader intry/finallyto ensure cleanup.If an unexpected error propagates during
for await, thefileStreamandrlare not explicitly closed. While the async iterator protocol typically handles this, adding afinallyblock (as done ingetTask) would be more defensive.
379-393:getStatsuses synchronous I/O, blocking the event loop.
fs.statSyncandfs.readFileSyncare used here while the rest of the class is async. For large JSONL files, reading the full content just to count lines blocks the thread unnecessarily. Consider usingfs.promises.stat/fs.promises.readFile, or streaming line counting, and making this methodasync.packages/tm-core/src/modules/storage/adapters/sqlite-storage/index.ts (2)
76-123: Consider reducing the public API surface of this barrel.This index re-exports nearly every internal function (individual query helpers like
insertTask,updateTask,getSubtaskDependencies, row-level types, schema table constants, etc.). Consumers of the storage module should typically interact through theSqliteStorageadapter, not call low-level query functions directly.Exposing all internals increases the maintenance burden — any internal refactor becomes a breaking change for downstream consumers.
Based on learnings: "the team prefers a minimal, need-based export strategy in index files rather than exposing all internal utilities. Exports should only be added when functions are actually consumed by other packages in the monorepo."
70-74: `default as SqliteStorageDefault` is redundant alongside the named `SqliteStorage` export.
Both point to the same class. Unless there's a specific consumer that requires a default import, the alias just adds surface area.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (3)
135-234: Dependency save logic is duplicated between `rebuildFromJsonl` and `saveTasks`.
The two-pass approach (save tasks without deps, then set deps) with subtask dependency parsing is nearly identical in both methods (Lines 158-233 vs Lines 331-413). Consider extracting a shared helper to reduce duplication.
457-467: Spread of `updates` can silently replace the subtasks array.
If `updates` contains a `subtasks` field, `{...existingTask, ...updates}` replaces the entire subtask array, which then gets persisted via `saveCompleteTask`. This may be intentional, but callers passing partial updates could accidentally wipe subtasks. Consider documenting this behavior or guarding against unintentional subtask replacement.
846-858: Loading all complete tasks just to count subtasks is unnecessarily expensive.
`loadAllTasks` triggers N+1 queries (deps, labels, subtasks, metadata per task) for every tag, but only subtask counts are needed. A direct SQL query would be far more efficient:
SELECT status, COUNT(*) as count FROM subtasks WHERE tag = ? GROUP BY status
packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts (1)
886-902: `loadAllTasks` exhibits an N+1 query pattern.
Since this feeds
syncAllToJsonl(called on every mutation for every tag), the overhead compounds. Consider batch-loading dependencies, labels, subtasks, and metadata with single queries per relation type, then assembling in memory.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts`:
- Around line 245-264: The backup(destPath: string) method currently validates
only destPath against validPathPattern but executes VACUUM INTO with
normalizedPath, leaving a validation gap; update backup to re-run the whitelist
check (validPathPattern) against normalizedPath and reject it if it fails, and
add a containment check that ensures normalizedPath is inside an allowed base
directory (e.g., compare
path.resolve(normalizedPath).startsWith(path.resolve(allowedBaseDir))) to
prevent path traversal; ensure the method throws an Error with a clear message
when either check fails before calling this.db.exec(`VACUUM INTO
'${normalizedPath}'`).
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts`:
- Around line 86-124: The JSONL read loop that builds tasks from this.jsonlPath
should be wrapped in a try/finally so resources are always cleaned up: open the
fileStream and readline (rl) as now, but put the for-await loop and parsing
logic in a try block and in finally call rl.close() and
fileStream.close()/destroy() to avoid leaking descriptors (mirror pattern used
in getTask); also replace the direct console.error call in the catch with the
central logger (e.g., this.logger.error or the project's logger) so parse errors
are logged via the standard logging utility instead of console.error, keeping
the same error message and lineNumber context.
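For the `backup()` path check suggested above for database.ts, a containment test along these lines would cover both conditions (the whitelist regex and `allowedBaseDir` are placeholders for whatever the adapter actually uses):

```ts
import path from 'node:path';

// Validate the *normalized* destination and keep it inside an allowed base directory.
function assertSafeBackupPath(destPath: string, allowedBaseDir: string): string {
  const validPathPattern = /^[\w\-./\\: ]+$/; // illustrative whitelist
  const normalizedPath = path.resolve(destPath);

  if (!validPathPattern.test(normalizedPath)) {
    throw new Error(`Invalid backup path: ${destPath}`);
  }

  const relative = path.relative(path.resolve(allowedBaseDir), normalizedPath);
  if (relative.startsWith('..') || path.isAbsolute(relative)) {
    throw new Error(`Backup path must stay inside ${allowedBaseDir}`);
  }
  return normalizedPath;
}
```

The returned value is then the only path interpolated into `VACUUM INTO`.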
🧹 Nitpick comments (3)
packages/tm-core/src/modules/tasks/tasks-domain.ts (1)
377-380: JSDoc still references only "file storage" in several places after sqlite addition.
The `switchTag` comment (line 277) was updated to mention "file/sqlite", but the tag management methods (`createTag`, `deleteTag`, `renameTag`, `copyTag`, `getTagsWithStats`), `watch`, and `generateTaskFiles` still reference only "file storage." Since sqlite is now a first-class local storage type, these comments should be updated for consistency — e.g., "For local storage (file/sqlite): …".
packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (1)
364-384: `getStats` reads the entire file synchronously.
This method uses `fs.readFileSync` to count lines, which will block the event loop. For a diagnostics method called infrequently, this is tolerable — but consider making it async (or using a streaming line count) if this is expected to run on large JSONL files during normal operation.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts (1)
178-189: WAL checkpoint on close is a good durability practice.
Consider guarding public methods against use-after-close (e.g., checking `this.initialized` or a `closed` flag) to produce a clear error message instead of an opaque libsql error. Low priority since callers are expected to manage lifecycle.
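A small sketch of such a guard (field and method names here are illustrative, not the adapter's actual API):

```ts
class GuardedDatabase {
  private closed = false;

  close(): void {
    // ... checkpoint WAL and close the underlying handle, then:
    this.closed = true;
  }

  private ensureOpen(): void {
    if (this.closed) {
      throw new Error('SqliteDatabase is closed; call open() or create a new instance');
    }
  }

  exec(sql: string): void {
    this.ensureOpen();
    // ... forward to the underlying libsql handle
  }
}
```

Every public method would call `ensureOpen()` first, turning use-after-close into an immediate, descriptive error.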
packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts
packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts
Storage optimization: - Add tag-scoped JSONL sync (writeTaskWithTag, deleteTaskWithTag, syncTagTasks) - Replace full exports with incremental updates for single-task mutations Bug fixes: - Fix subtask counting asymmetry in validation (count independently) - Fix getAllTags to include empty tags via UNION with tag_metadata - Add parent status cascade to 'pending' when all subtasks are pending - Fix parseComplexity empty string handling (already done, verified) - Fix ID normalization in JSONL operations (already done, verified) Security: - Add backup path containment validation (must be in db directory) - Validate normalized path against regex, not just raw input Resource management: - Fix resource leaks in rebuildSqlite with finally block - Add try/finally to JSONL readAll for stream cleanup - Replace console.error with central logger Code quality: - Remove unused exports (MigrationOptions, ValidationResult, StorageStatus) - Normalize task.dependencies to strings before DB insert - Add convertSubtaskToTask helper with safe defaults Testing: - Add tests for tag-aware JSONL operations - Add tests for backup path validation - Add tests for migration transaction safety Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/storage.command.ts`:
- Around line 501-508: SqliteStorage is initialized but storage.close() is only
called on the success path, risking a resource leak if initialize() or
getStats() throws; wrap creation/usage of the storage variable in a try/finally
(create const storage = new SqliteStorage(this.projectPath); await
storage.initialize(); try { const stats = await storage.getStats(); } finally {
await storage.close(); }) so that storage.close() always runs; reference
SqliteStorage, initialize, getStats, close, and the storage variable when making
the change.
In `@package.json`:
- Line 111: Move the "libsql" entry from dependencies into optionalDependencies
in package.json so platforms that can't build native bindings won't fail
installs; keep the version string ("^0.5.0") unchanged, remove the original
"libsql" line under "dependencies" and add the same entry under
"optionalDependencies", then regenerate your lockfile (npm install / yarn
install) so the lockfile reflects the change.
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.spec.ts`:
- Around line 76-82: The test currently never runs because it guards on
ALL_MIGRATIONS[0].down.length > 0 (MIGRATION_1.down is []); update the test in
migrations.spec.ts to remove that conditional and assert the expected behavior
of runner.migrateTo(0) for a migration with an empty down array — e.g., replace
the if-block with a direct expect(() =>
runner.migrateTo(0)).toThrow(/rollback|down|cannot migrate to 0/i) (or the
specific error message your migrateTo throws) so the test actually verifies that
attempting to migrate to version 0 fails when MIGRATION_1.down is empty.
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts`:
- Around line 1143-1153: The copyTasksToTag function must perform the entire
copy inside a single DB transaction to avoid partial copies; modify
copyTasksToTag to begin a transaction (e.g., exec('BEGIN') or use the DB's
transaction wrapper), call loadAllTasks(db, sourceTag) then loop calling
saveCompleteTask(db, task, targetTag) for each task, commit the transaction on
success, and rollback+rethrow on any error so the targetTag is not left in a
partial state; reference copyTasksToTag, saveCompleteTask, and loadAllTasks when
locating the change.
- Around line 849-876: subtaskToInsertData currently uses
parseInt(String(subtask.id), 10) which can yield NaN for non-numeric ids; update
the id handling in subtaskToInsertData so it (1) if typeof subtask.id ===
'number' use it, (2) else if it's a string and matches a numeric pattern (e.g.
/^\d+$/) parseInt it, and (3) otherwise preserve the original string id (or
throw/return an explicit error if your DB schema requires numeric ids) instead
of inserting NaN; locate the id assignment in subtaskToInsertData and make this
guarded conversion (referenced symbol: subtaskToInsertData and the returned id
field), keeping behavior consistent with saveCompleteTask.
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 154-253: rebuildFromJsonl duplicates the two-pass
dependency-resolution logic found in saveTasks; extract the shared logic into a
helper (e.g., saveTasksWithDependencies or persistTasksTwoPass) that accepts the
DB handle and tasksByTag map and performs the two passes using saveCompleteTask,
setTaskDependencies and setSubtaskDependencies (including the same ID
normalization and filtering rules), then call that helper from both
rebuildFromJsonl and saveTasks to eliminate the duplicated block and centralize
dependency normalization and persistence.
- Around line 154-175: In rebuildFromJsonl(), when iterating jsonlTask from
jsonlSync.readAll(), skip any task where the destructured _deleted === true so
soft-deleted entries are not imported; specifically, inside the loop in
rebuildFromJsonl() after const { _v, _ts, _deleted, _tag, ...task } = jsonlTask,
add a guard if (_deleted) continue; before determining tag and adding to
tasksByTag to prevent resurrecting deleted tasks.
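To make the transactional copyTasksToTag fix suggested above concrete, here is a rough sketch (the `exec`-based transaction control and the helper signatures are assumptions; the adapter's own `transaction()` wrapper could be used instead):

```ts
interface TaskRecord {
  id: string | number;
}

// Stubs standing in for the real query helpers named in the review.
declare function loadAllTasks(db: unknown, tag: string): TaskRecord[];
declare function saveCompleteTask(db: unknown, task: TaskRecord, tag: string): void;

function copyTasksToTagAtomically(
  db: { exec(sql: string): void },
  sourceTag: string,
  targetTag: string
): void {
  db.exec('BEGIN');
  try {
    for (const task of loadAllTasks(db, sourceTag)) {
      saveCompleteTask(db, task, targetTag);
    }
    db.exec('COMMIT');
  } catch (error) {
    // Roll back so the target tag is never left partially populated
    db.exec('ROLLBACK');
    throw error;
  }
}
```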
🧹 Nitpick comments (14)
packages/tm-core/src/modules/storage/services/storage-migration.spec.ts (1)
24-35: Cleanup may fail for nested directories or leftover WAL files.
`fs.rmdirSync` only removes empty directories and the loop only handles direct children. If any test creates nested structures, cleanup silently fails. The `afterEach` in `jsonl-sync.spec.ts` uses `fs.rmSync(tempDir, { recursive: true, force: true })`, which is more robust and consistent.
♻️ Suggested fix
afterEach(() => {
-   // Clean up temp directory
    try {
-     const files = fs.readdirSync(tempDir);
-     for (const file of files) {
-       fs.unlinkSync(path.join(tempDir, file));
-     }
-     fs.rmdirSync(tempDir);
+     fs.rmSync(tempDir, { recursive: true, force: true });
    } catch {
      // Ignore cleanup errors
    }
  });

packages/tm-core/src/modules/storage/adapters/sqlite-storage/coderabbit-fixes.spec.ts (1)
299-312: `db.close()` in `afterEach` could throw and skip file cleanup.
If `db.close()` throws (e.g., if a test already closed it or the DB is corrupted), the file cleanup on lines 302-311 is skipped. Wrap `db.close()` in its own try/catch or move it inside the existing one.
♻️ Suggested fix
afterEach(() => {
+   try {
      db.close();
+   } catch {
+     // Ignore close errors
+   }
    // Clean up test files
    const dir = path.dirname(testDbPath);
    try {

packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (2)
505-525: `getStats` reads the entire file synchronously while the rest of the class is async.
`fs.readFileSync` on line 512 loads the entire JSONL file into memory just to count lines. For large files, this is wasteful. Consider using a streaming line counter or at minimum switching to `fs.promises.readFile` for consistency with the async API surface.
That said, this is only called for status reporting, so the impact is low.
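A streaming line count would keep memory flat even for large exports. A minimal sketch (helper name is illustrative):

```ts
import fs from 'node:fs';
import readline from 'node:readline';

// Count non-empty JSONL lines without reading the whole file into memory.
async function countJsonlLines(jsonlPath: string): Promise<number> {
  const stream = fs.createReadStream(jsonlPath, { encoding: 'utf-8' });
  const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
  let count = 0;
  try {
    for await (const line of rl) {
      if (line.trim().length > 0) count++;
    }
  } finally {
    rl.close();
    stream.destroy();
  }
  return count;
}
```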
369-409: Asymmetric tag defaulting could cause mismatches.
Line 392 defaults a missing `_tag` to `'master'`: `(existingTask._tag || 'master') === tag`. This means tasks written without a tag (via `writeTask` or `exportAll`) will only be matched when `tag === 'master'`. If a caller passes a different default or an empty string, the match silently fails.
This same pattern is used consistently in `deleteTaskWithTag` (line 434) and `syncTagTasks` (line 475), so it's at least internally consistent. Just ensure callers are aware that tagless entries are treated as `'master'`.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/types.ts (1)
254-278: Type guard arrays are hardcoded and could drift from the canonical `TaskStatus`/`TaskPriority` types.
If the union types in `common/types/index.ts` are updated, these guards won't catch the change at compile time. Consider deriving the valid values from the source types if possible, or add a comment noting the coupling.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/queries.ts (2)
932-948: `loadAllTasks` has an N+1 query pattern that will degrade with scale.
For each task, five additional queries are executed (dependencies, labels, subtasks, metadata, and per-subtask dependencies). With 100 tasks averaging 3 subtasks each, that's ~800 queries. Consider batching the subsidiary queries (e.g., load all dependencies for a tag in one query, then distribute in memory).
97-167: Dynamic SQL via string concatenation is safe here but repetitive.
The field-by-field `if (updates.X !== undefined)` pattern in both `updateTask` and `updateSubtask` is verbose. Consider a helper that builds SET clauses from an object, reducing boilerplate. Not a blocker — correctness is fine.
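A sketch of what such a helper could look like (names are illustrative; column names come from a fixed map, so nothing user-controlled reaches the SQL text):

```ts
// Build "col1 = ?, col2 = ?" plus the matching parameter list from a partial update object.
function buildSetClause(
  updates: Record<string, unknown>,
  columns: Record<string, string> // update field -> column name
): { sql: string; values: unknown[] } {
  const fragments: string[] = [];
  const values: unknown[] = [];
  for (const [field, column] of Object.entries(columns)) {
    if (updates[field] !== undefined) {
      fragments.push(`${column} = ?`);
      values.push(updates[field]);
    }
  }
  return { sql: fragments.join(', '), values };
}
```

`updateTask` and `updateSubtask` would then share one code path instead of repeating the per-field checks.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.spec.ts (1)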
214-221: `history[0]` access without a guard — fragile if migration setup changes.
`history.length` is asserted `>= 1`, but `history[0]` is accessed unconditionally on the next lines. If a future refactor changes initialization behavior, this test would throw a confusing `TypeError` instead of a clear assertion failure. Consider using `expect(history[0]).toBeDefined()` before the property checks or accessing via `history.at(0)`.
apps/mcp/src/tools/storage/storage-config.tool.ts (1)
105-108: `as any` cast hides potential type mismatch with config API.
The cast `{ type: type as StorageType } as any` bypasses the compiler check on the shape of the storage config object. If the `updateConfig` method's expected shape changes, this will silently break at runtime.
Consider defining the proper type or using a more targeted cast. If the config interface doesn't yet support partial storage updates, that may be worth addressing upstream.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.ts (1)
239-249: `MIGRATION_1.down` is an empty array — semantically means "no rollback" but differs from `undefined`.
The `revertMigration` method treats both `!migration.down` and `migration.down.length === 0` as "does not support rollback". Having `down: []` here is functionally identical to omitting it, but it's misleading since it suggests rollback was considered but intentionally left empty vs. not applicable. The inline comment clarifies intent, so this is purely a nit.
Nit: omit `down` to signal "no rollback" more clearly
const MIGRATION_1: Migration = {
    version: 1,
    description: SCHEMA_VERSION_DESCRIPTIONS[1],
    up: [
      // Initial schema is created in database.ts, this just records the version
-   ],
-   down: [
-     // Cannot revert initial schema - would drop all data
    ]
+   // down intentionally omitted — cannot revert initial schema
  };

apps/cli/src/commands/storage.command.ts (3)
136-162: Direct `console.log` calls violate the central logging guideline.
The `executeStatus` method uses `console.log` for output formatting (Lines 136, 137, 147, 150, 155, 162). As per coding guidelines, the central log function or the UI display utilities (`displayInfo`, `displaySuccess`, etc.) should be used instead. Line 406 in `executeMigrate` also has a bare `console.log()`.
536-568: `updateConfigType` duplicates `StorageMigration.updateConfig` logic.
This method reimplements the same JSON read-merge-write pattern for `config.json` that already exists as a private method in `StorageMigration`. If the config format changes, both locations need updating.
Consider exposing the config update as a public method on `StorageMigration` or a shared utility, rather than duplicating it here. As per coding guidelines, CLI should be a thin layer calling tm-core methods.
55-58: `projectPath` is hardcoded to `process.cwd()` in the constructor.
This makes the command difficult to test and prevents passing a custom project root via CLI options. Consider accepting `projectPath` as a constructor parameter or resolving it from the command's options/arguments, consistent with how other commands handle `--project-root`.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (1)
859-896: `getTagsWithStats` loads all tasks for every tag to count subtasks.
`loadAllTasks` (Line 870) performs a full task load with subtask hydration for every tag, just to sum subtask counts and bucket by status. For a project with many tasks/tags, this is expensive. A targeted SQL query (e.g., `SELECT COUNT(*), status FROM subtasks WHERE tag = ? GROUP BY status`) would be far more efficient.
// Create SqliteStorage to trigger rebuild
// The SqliteStorage automatically rebuilds from JSONL when database doesn't exist
const storage = new SqliteStorage(this.projectPath);
await storage.initialize();

// Get stats
const stats = await storage.getStats();
await storage.close();
Resource leak if initialize() or getStats() throws.
SqliteStorage is created and initialized, but close() is only called on the success path. If initialize() succeeds but getStats() throws, the database connection leaks.
Proposed fix: wrap in try/finally
const storage = new SqliteStorage(this.projectPath);
- await storage.initialize();
-
- // Get stats
- const stats = await storage.getStats();
- await storage.close();
+ try {
+ await storage.initialize();
+ // Get stats
+ const stats = await storage.getStats();
+
+ spinner.succeed(
+ `Database rebuilt: ${stats.totalTasks} tasks in ${stats.totalTags} tag(s)`
+ );
+ displaySuccess('SQLite database successfully rebuilt from JSONL file');
+
+ this.lastResult = {
+ success: true,
+ action: 'rebuild',
+ currentType: 'sqlite',
+ message: `Rebuilt database with ${stats.totalTasks} tasks`
+ };
+ } finally {
+ await storage.close();
+ }🤖 Prompt for AI Agents
In `@apps/cli/src/commands/storage.command.ts` around lines 501 - 508,
SqliteStorage is initialized but storage.close() is only called on the success
path, risking a resource leak if initialize() or getStats() throws; wrap
creation/usage of the storage variable in a try/finally (create const storage =
new SqliteStorage(this.projectPath); await storage.initialize(); try { const
stats = await storage.getStats(); } finally { await storage.close(); }) so that
storage.close() always runs; reference SqliteStorage, initialize, getStats,
close, and the storage variable when making the change.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.spec.ts
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts
…bit feedback Init flow improvements: - Add storage backend prompt (JSON vs SQLite) after selecting Solo mode - Add --storage-backend CLI option for non-interactive init - Add Storage Management section to help output - Update git tasks prompt text based on storage type selected CodeRabbit fixes: - Fix resource leak in storage.command.ts rebuild (try/finally) - Fix resource leak in init.js SQLite initialization (try/finally) - Move libsql to optionalDependencies for platform compatibility - Fix migrations test that never executed (MIGRATION_1.down is empty) - Add NaN guard to subtaskToInsertData for non-numeric IDs - Wrap copyTasksToTag in transaction to prevent partial copies - Extract persistTasksWithDependencies helper to eliminate duplication - Filter _deleted tasks during JSONL rebuild to prevent resurrection Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/storage.command.ts`:
- Around line 540-571: The updateConfigType method currently uses raw fs +
JSON.parse/JSON.stringify; replace that with the repo's shared JSON helpers:
call the readJSON helper to load existing config (fallback to {} if it returns
null/throws) and call writeJSON to persist the updated TaskMasterConfig, while
keeping the existing directory creation (taskmasterDir via this.configPath) and
the merge logic that sets config.storage.type = newType; also ensure you add the
appropriate import for readJSON and writeJSON and preserve error handling
consistent with other usages of these helpers.
In `@packages/tm-core/package.json`:
- Around line 44-46: database.ts currently statically imports libsql (e.g.,
"import Database from 'libsql'") which breaks startup when libsql is absent
despite being listed as an optionalDependency; change that to a dynamic import
(or wrap require in try/catch) so the module is only loaded when needed and
failure is handled gracefully: replace the top-level static import with an async
dynamic import inside the function that initializes or opens the SQLite store
(e.g., the init/open function that references Database), catch the import error,
return a clear fallback or rethrow a descriptive error for callers so the app
can degrade when libsql is unavailable.
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 206-234: The initialize() method sets this.initialized = true
before awaiting the async rebuildFromJsonl(), which leaves the instance
half-initialized if rebuildFromJsonl() throws; change initialize() to only mark
initialized after successful rebuild (or reset it on error) by wrapping the
rebuildFromJsonl() call in a try/catch: call this.db.initialize(), then if
(!dbExists && jsonlExists) await this.rebuildFromJsonl() inside try, set
this.initialized = true after that, and in the catch reset this.initialized =
false (and close() the db if needed) before rethrowing so getDb()/initialize()
retries work as expected. Ensure you reference the same symbols: initialize(),
rebuildFromJsonl(), this.initialized, getDb(), close() when making changes.
- Around line 385-400: appendTasks currently calls saveCompleteTask in a loop
which triggers immediate dependency inserts and can violate FK constraints for
inter-dependent tasks in the same batch; refactor appendTasks to use the
existing two-pass helper persistTasksWithDependencies (or replicate its pattern)
so tasks are first saved with skipTaskDependencies=true and then dependencies
are set in a second pass, and then call setTagMetadata and syncToJsonl as
before; locate appendTasks, saveCompleteTask, persistTasksWithDependencies and
the skipTaskDependencies/setTaskDependencies flags to implement the two-pass
save and dependency resolution.
In `@scripts/init.js`:
- Around line 980-998: The config is being written to 'sqlite' before the DB is
actually created; move the updateStorageConfig call so it runs only after
SqliteStorage.initialize() succeeds (i.e., after await storage.initialize() and
before logging success), ensuring updateStorageConfig(configPath,
selectedStorage, authCredentials, storageBackendType) is invoked only on
successful initialization; keep the existing try/catch/finally (including
storage.close()) and if you prefer to keep the original placement instead,
revert the config in the catch by calling updateStorageConfig to restore the
previous type (e.g., 'file') so the config never claims sqlite when
initialization failed.
In `@scripts/modules/commands.js`:
- Around line 3564-3568: Add validation for the --storage-backend option:
normalize the provided value to lowercase and only allow "file" or "sqlite"; if
the value is unsupported, call processLogger.error (or console.error) with a
clear message and process.exit(1). Implement this check immediately after
parsing CLI options (the parsed value usually referenced as
options.storageBackend or argv.storageBackend) and before any storage
initialization code (e.g., the initStorage/initLocalStorage or storage backend
switch that consumes the flag) so misconfiguration is rejected early.
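For the optional-dependency handling suggested above (package.json / database.ts), the lazy load could look roughly like this. It is a sketch only, not the adapter's actual code:

```ts
// Load libsql on first use so installs without the optional dependency still work
// until the SQLite backend is actually selected.
async function loadLibsql(): Promise<unknown> {
  try {
    const mod = await import('libsql');
    return (mod as { default?: unknown }).default ?? mod;
  } catch (error) {
    throw new Error(
      'SQLite storage requires the optional "libsql" dependency; install it or keep using file storage.',
      { cause: error }
    );
  }
}
```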
🧹 Nitpick comments (3)
scripts/init.js (1)
906-917: Consider using an options object for `createProjectStructure`.
This function now accepts 9 positional parameters. Positional args are error-prone at this count — it's easy to swap `authCredentials` and `storageBackendType`, for example. A single `options` object (or at least grouping the last few params) would improve readability and reduce mistakes at call sites.
This doesn't need to block the PR, but is worth cleaning up in a follow-up.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (2)
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (2)
759-770: `copyTag` doesn't validate that the target tag is new or that the source tag exists.
`createTag` (line 683) and `renameTag` (line 728) both check for existing target tags, but `copyTag` does not. Copying to an existing tag could cause unique-constraint violations on `(task_id, tag)`, and copying from a non-existent source silently creates an empty tag with metadata.
Proposed validation
async copyTag(sourceTag: string, targetTag: string): Promise<void> {
+   const existingTags = getAllTags(this.getDb().getDb());
+   if (!existingTags.includes(sourceTag)) {
+     throw new Error(`Source tag ${sourceTag} does not exist`);
+   }
+   if (existingTags.includes(targetTag)) {
+     throw new Error(`Target tag ${targetTag} already exists`);
+   }
+
    this.getDb().transaction(() => {
817-848: `getTagsWithStats` loads every task (with subtasks) per tag just to count subtask statuses.
`loadAllTasks` (line 822) performs N complete-task loads per tag, including subtask and dependency joins, only to count subtask statuses. If a SQL-level aggregate is available (or can be added to `queries.ts`), this could be a single query per tag instead.
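A single aggregate per tag would avoid that hydration entirely. A sketch using the same query shape quoted in the earlier comment (exact table and column names are whatever the schema defines):

```ts
interface StatusCountRow {
  status: string;
  count: number;
}

// Roll up subtask counts by status for one tag in a single query.
function countSubtasksByStatus(
  db: { prepare(sql: string): { all(...params: unknown[]): unknown[] } },
  tag: string
): Record<string, number> {
  const rows = db
    .prepare('SELECT status, COUNT(*) as count FROM subtasks WHERE tag = ? GROUP BY status')
    .all(tag) as StatusCountRow[];
  return Object.fromEntries(rows.map((r) => [r.status, r.count]));
}
```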
// Check if migration is possible
const canMigrate = await migration.canMigrate(targetType);
if (!canMigrate.canMigrate && shouldMigrate) {
  spinner.warn(canMigrate.reason || 'Cannot migrate');
  displayWarning(canMigrate.reason || 'Migration not possible');
}

spinner.text = `Switching to ${this.formatStorageType(targetType)} storage...`;

let migrationResult: MigrationResult | undefined;

if (shouldMigrate && canMigrate.canMigrate) {
  spinner.text = 'Migrating data...';

  if (targetType === 'sqlite') {
    migrationResult = await migration.migrateToSqlite({
      createBackup: shouldBackup,
      validateAfterMigration: true,
      updateConfig: true,
      updateGitignore: true
    });
  } else {
    migrationResult = await migration.migrateToFile({
      createBackup: shouldBackup,
      validateAfterMigration: true,
      updateConfig: true
    });
  }

  if (migrationResult.success) {
    spinner.succeed(
      `Switched to ${this.formatStorageType(targetType)} storage ` +
      `(migrated ${migrationResult.taskCount} tasks, ${migrationResult.subtaskCount} subtasks)`
    );

    if (migrationResult.warnings.length > 0) {
      for (const warning of migrationResult.warnings) {
        displayWarning(warning);
      }
    }
  } else {
    spinner.fail('Migration failed');
    for (const error of migrationResult.errors) {
      displayError(error);
    }
    this.lastResult = {
      success: false,
      action: 'switch',
      message: 'Migration failed',
      migrationResult
    };
    process.exit(1);
    return;
  }
} else {
  // Just update config without migration
  await this.updateConfigType(targetType);
  spinner.succeed(
    `Switched to ${this.formatStorageType(targetType)} storage (config only, no data migrated)`
  );
}
Don’t switch config when migration is impossible.
With default --migrate, canMigrate=false only warns and then proceeds to update config, which can leave users pointing at an empty backend. If migration isn’t possible, fail unless the user explicitly passed --no-migrate.
Suggested fix
const canMigrate = await migration.canMigrate(targetType);
- if (!canMigrate.canMigrate && shouldMigrate) {
- spinner.warn(canMigrate.reason || 'Cannot migrate');
- displayWarning(canMigrate.reason || 'Migration not possible');
- }
+ if (!canMigrate.canMigrate && shouldMigrate) {
+ spinner.fail(canMigrate.reason || 'Cannot migrate');
+ displayError(canMigrate.reason || 'Migration not possible');
+ this.lastResult = {
+ success: false,
+ action: 'switch',
+ message: canMigrate.reason || 'Migration not possible'
+ };
+ process.exit(1);
+ return;
private async updateConfigType(newType: 'file' | 'sqlite'): Promise<void> {
  const taskmasterDir = path.dirname(this.configPath);

  // Ensure directory exists
  await fs.promises.mkdir(taskmasterDir, { recursive: true });

  let config: TaskMasterConfig = {};

  // Read existing config if it exists
  if (fs.existsSync(this.configPath)) {
    try {
      config = JSON.parse(
        await fs.promises.readFile(this.configPath, 'utf-8')
      );
    } catch {
      // Start fresh if config is corrupted
      config = {};
    }
  }

  // Update storage type
  config.storage = {
    ...config.storage,
    type: newType
  };

  // Write updated config
  await fs.promises.writeFile(
    this.configPath,
    JSON.stringify(config, null, 2),
    'utf-8'
  );
Use shared JSON helpers for config read/write.
This uses raw fs + JSON.parse/JSON.stringify; the repo standard is to use shared JSON helpers for consistent validation/error handling.
As per coding guidelines: “Use readJSON and writeJSON utilities for all JSON file operations instead of raw fs.readFileSync or fs.writeFileSync.”
🤖 Prompt for AI Agents
In `@apps/cli/src/commands/storage.command.ts` around lines 540 - 571, The
updateConfigType method currently uses raw fs + JSON.parse/JSON.stringify;
replace that with the repo's shared JSON helpers: call the readJSON helper to
load existing config (fallback to {} if it returns null/throws) and call
writeJSON to persist the updated TaskMasterConfig, while keeping the existing
directory creation (taskmasterDir via this.configPath) and the merge logic that
sets config.storage.type = newType; also ensure you add the appropriate import
for readJSON and writeJSON and preserve error handling consistent with other
usages of these helpers.
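Assuming the shared helpers behave like typical read/write-JSON utilities (the import path and exact `readJSON`/`writeJSON` signatures below are guesses, not the repo's actual API), `updateConfigType` could shrink to roughly:

```ts
import fs from 'node:fs';
import path from 'node:path';
// Hypothetical import path for the shared helpers referenced in the guideline.
import { readJSON, writeJSON } from '../utils/json-utils.js';

async function updateConfigType(configPath: string, newType: 'file' | 'sqlite'): Promise<void> {
  await fs.promises.mkdir(path.dirname(configPath), { recursive: true });

  let config: Record<string, any>;
  try {
    config = readJSON(configPath) ?? {};
  } catch {
    config = {}; // start fresh if the config is missing or corrupted
  }

  config.storage = { ...config.storage, type: newType };
  writeJSON(configPath, config);
}
```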
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts
.option(
  '--storage-backend <type>',
  'Storage backend type for local storage: file (JSON) or sqlite',
  'file'
)
Validate --storage-backend values before init.
Right now any string is accepted and forwarded, which can lead to confusing failures or misconfiguration. Add a small guard to normalize and reject unsupported values early.
Suggested fix
.action(async (cmdOptions) => {
+ const validBackends = ['file', 'sqlite'];
+ if (cmdOptions.storageBackend) {
+ const normalized = String(cmdOptions.storageBackend).toLowerCase();
+ if (!validBackends.includes(normalized)) {
+ console.error(
+ chalk.red(
+ `Error: Invalid --storage-backend "${cmdOptions.storageBackend}". ` +
+ `Valid values: ${validBackends.join(', ')}`
+ )
+ );
+ process.exit(1);
+ }
+ cmdOptions.storageBackend = normalized;
+ }
  // cmdOptions contains parsed arguments
As per coding guidelines: "Validate required parameters early with clear error messages and use process.exit(1) for validation failures."
🤖 Prompt for AI Agents
In `@scripts/modules/commands.js` around lines 3564 - 3568, Add validation for the
--storage-backend option: normalize the provided value to lowercase and only
allow "file" or "sqlite"; if the value is unsupported, call processLogger.error
(or console.error) with a clear message and process.exit(1). Implement this
check immediately after parsing CLI options (the parsed value usually referenced
as options.storageBackend or argv.storageBackend) and before any storage
initialization code (e.g., the initStorage/initLocalStorage or storage backend
switch that consumes the flag) so misconfiguration is rejected early.
CodeRabbit review fixes (second batch): - database.ts: Use dynamic import for libsql to support optional dependency - Add async open() method that loads libsql at runtime - Add ensureOpen() guard method for all database operations - Update all methods to require open() before use - sqlite-storage.ts: Fix initialization order issues - Call db.open() before db.initialize() - Move initialized flag to after rebuildFromJsonl() succeeds - Fix appendTasks() to use two-pass dependency pattern - Use db directly in rebuildFromJsonl() during initialization - init.js: Fix config write order for SQLite - Only write sqlite to config after successful initialization - Fall back to JSON storage if SQLite init fails - commands.js: Add validation for --storage-backend option - Reject invalid values with helpful error message - Fix unused import warning in migrations.spec.ts - Remove phantom type exports from tm-core index.ts - Update tests to call open() before initialize() Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Hey @dhabedank what prompted you to make this PR? I think we're quite happy with the performance of tasks.json, we haven't really even fully implemented .jsonl, so I'm curious how you came to the conclusion to create a .sql file. Was it performance issues? I'm trying to grasp truly what problem you're trying to solve.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts`:
- Around line 347-363: Both createDatabase and createInMemoryDatabase open the
connection (via new SqliteDatabase(...); await db.open()) but call
db.initialize() without handling failures, leaking the open connection; wrap the
db.initialize() call in a try/catch in both functions, and on catch await
db.close() before rethrowing the error so the SqliteDatabase is always closed if
initialization fails (referencing createDatabase, createInMemoryDatabase,
db.open(), db.initialize(), and db.close()).
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`:
- Around line 91-148: persistTasksWithDependencies currently filters
dependencies only against tagTasks (new batch) which drops deps that point to
existing DB tasks when called from appendTasks; fix by adding an optional
parameter (e.g., knownTaskIds: Map<string, Set<string>>) to
persistTasksWithDependencies and use the union of tagTasks IDs and
knownTaskIds.get(tag) when computing validDeps before calling
setTaskDependencies; update callers (notably appendTasks) to call loadAllTasks
to collect existing IDs for the tag and pass that map into
persistTasksWithDependencies, leaving behavior unchanged when the optional map
is omitted.
In `@scripts/init.js`:
- Around line 377-378: Validate the CLI storage backend value before writing to
config: instead of accepting any options.storageBackend into storageBackendType,
check it against the allowed set {'file','sqlite'} (use a small helper or inline
validation in the same block that defines storageBackendType) and if the value
is invalid throw or exit with a clear error message; ensure the validated value
(not the raw options.storageBackend) is what gets written to config.json and
update any uses of storageBackendType downstream to rely on the validated value.
🧹 Nitpick comments (8)
scripts/init.js (1)
715-720: JSDoc is missing the newstorageBackendTypeparameter.The
@paramblock doesn't document the fourth parameter added to the signature.Proposed fix
/**
 * Update config.json with storage configuration
 * `@param` {string} configPath - Path to config.json file
 * `@param` {string} selectedStorage - Storage type ('cloud' or 'local')
 * `@param` {object|null} authCredentials - Auth credentials if cloud storage selected
+ * `@param` {'file'|'sqlite'} [storageBackendType='file'] - Local storage backend type
 */

packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts (2)
113-120:busyTimeoutpragma uses string interpolation — safe only if the type invariant holds.
db.pragma(\busy_timeout = ${this.config.busyTimeout}`)is safe becausebusyTimeoutis typed asnumberthroughRequired. However, sincedefinedOptionsis built viaObject.fromEntrieswhich loses type safety (returns{ [k: string]: unknown }), a non-number could theoretically sneak in ifSqliteStorageConfig` evolves or is cast loosely upstream.Consider adding a numeric guard or using a prepared approach, though the current risk is low given the strongly-typed config interface.
127-155: ManualBEGIN/COMMIT/ROLLBACKinstead of the class's owntransaction()method.
initialize()uses rawdb.exec('BEGIN TRANSACTION')/db.exec('COMMIT')/db.exec('ROLLBACK')while the class exposes atransaction()helper on Line 178 that wraps libsql's built-in transaction support. Using the helper would be more consistent and less error-prone (e.g., no risk of forgettingROLLBACKon a new code path).That said, the current code works correctly since
MigrationRunner.applyMigrationusesSAVEPOINTinternally, which is compatible with the outer explicit transaction. This is a minor consistency nit.packages/tm-core/src/modules/storage/adapters/sqlite-storage/migrations.spec.ts (1)
37-52: Considerfs.rmSyncwithrecursive+forcefor more robust cleanup.
fs.rmdirSync(tempDir)only works on empty directories and is deprecated in newer Node.js versions. If a test fails mid-way and leaves files behind, cleanup will silently fail.♻️ Proposed fix
- try {
-   fs.rmdirSync(tempDir);
- } catch {
-   // Ignore
- }
+ try {
+   fs.rmSync(tempDir, { recursive: true, force: true });
+ } catch {
+   // Ignore
+ }

packages/tm-core/src/modules/storage/adapters/sqlite-storage/jsonl-sync.ts (1)
505-525:getStats()reads the entire file into memory to count lines.
fs.readFileSyncon Line 512 loads the full file content just to count non-empty lines. For a large JSONL file this is wasteful sincestat.sizeis already available without reading content. This could use streaming or simply count newlines via a buffered read.That said, this is a metadata/diagnostic method unlikely to be on a hot path, so the impact is low.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts (2)
340-369:loadTaskonly handles single-level subtask IDs.
taskId.split('.')on Line 345 destructures into[parentId, subtaskId], discarding any additional segments. For an ID like"1.2.3", this yieldsparentId="1"andsubtaskId="2", silently ignoring"3". The same pattern appears inupdateSubtaskStatus(Line 556).If deeply nested subtask IDs are not supported, consider validating and rejecting IDs with more than one dot to fail explicitly rather than silently returning the wrong subtask.
🛡️ Proposed validation
if (taskId.includes('.')) { - const [parentId, subtaskId] = taskId.split('.'); + const parts = taskId.split('.'); + if (parts.length !== 2) { + throw new Error(`Invalid task ID format: "${taskId}". Only single-level subtask IDs (e.g., "1.2") are supported.`); + } + const [parentId, subtaskId] = parts;
827-871:getTagsWithStatsloads all complete tasks for every tag to count subtasks.Line 838 calls
loadAllTasks(which fully resolves dependencies, subtasks, labels, etc.) for each tag, just to count subtasks. This is O(tags × tasks) with full hydration. If thequeries.tslayer exposed a lightweight subtask count query, this could be significantly cheaper.This is a diagnostic/stats endpoint so the performance impact is limited, but worth noting for projects with many tags and tasks.
packages/tm-core/src/modules/storage/adapters/sqlite-storage/coderabbit-fixes.spec.ts (1)
149-157: Minor: cleanup uses deprecatedrmdirSync.Same pattern as noted in
migrations.spec.ts—fs.rmdirSyncis deprecated and only works for empty directories. Considerfs.rmSync(tempDir, { recursive: true, force: true })for robustness.
export async function createDatabase(dbPath: string): Promise<SqliteDatabase> {
  const db = new SqliteDatabase(dbPath);
  await db.open();
  db.initialize();
  return db;
}

/**
 * Create an in-memory SQLite database (useful for testing)
 * @returns Initialized in-memory SqliteDatabase instance
 */
export async function createInMemoryDatabase(): Promise<SqliteDatabase> {
  const db = new SqliteDatabase(':memory:');
  await db.open();
  db.initialize();
  return db;
}
Factory functions don't call close() on failure after open() succeeds.
If db.initialize() throws, the connection opened by db.open() is leaked. Both createDatabase and createInMemoryDatabase should wrap initialize() in a try/catch that closes the connection on failure.
🔧 Proposed fix
export async function createDatabase(dbPath: string): Promise<SqliteDatabase> {
const db = new SqliteDatabase(dbPath);
await db.open();
- db.initialize();
+ try {
+ db.initialize();
+ } catch (error) {
+ db.close();
+ throw error;
+ }
return db;
}
export async function createInMemoryDatabase(): Promise<SqliteDatabase> {
const db = new SqliteDatabase(':memory:');
await db.open();
- db.initialize();
+ try {
+ db.initialize();
+ } catch (error) {
+ db.close();
+ throw error;
+ }
return db;
}🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/storage/adapters/sqlite-storage/database.ts`
around lines 347 - 363, Both createDatabase and createInMemoryDatabase open the
connection (via new SqliteDatabase(...); await db.open()) but call
db.initialize() without handling failures, leaking the open connection; wrap the
db.initialize() call in a try/catch in both functions, and on catch await
db.close() before rethrowing the error so the SqliteDatabase is always closed if
initialization fails (referencing createDatabase, createInMemoryDatabase,
db.open(), db.initialize(), and db.close()).
function persistTasksWithDependencies(
  db: import('libsql').Database,
  tasksByTag: Map<string, Task[]>
): void {
  // First pass: Save all tasks without dependencies
  for (const [tag, tagTasks] of tasksByTag) {
    for (const task of tagTasks) {
      saveCompleteTask(db, task, tag, {
        skipTaskDependencies: true,
        skipSubtaskDependencies: true
      });
    }
  }

  // Second pass: Set all dependencies
  for (const [tag, tagTasks] of tasksByTag) {
    for (const task of tagTasks) {
      // Set task dependencies (filter to only valid ones that exist in this tag)
      const validDeps = (task.dependencies || [])
        .filter((depId) => tagTasks.some((t) => String(t.id) === String(depId)))
        .map((d) => String(d));

      if (
        validDeps.length > 0 ||
        (task.dependencies && task.dependencies.length > 0)
      ) {
        setTaskDependencies(db, String(task.id), tag, validDeps);
      }

      // Set subtask dependencies (within the same task)
      const subtasks = task.subtasks || [];
      const subtaskIds = new Set(
        subtasks.map((s) =>
          typeof s.id === 'number' ? s.id : parseInt(String(s.id), 10)
        )
      );

      for (const subtask of subtasks) {
        const subtaskId =
          typeof subtask.id === 'number'
            ? subtask.id
            : parseInt(String(subtask.id), 10);

        const deps: number[] = [];
        for (const d of subtask.dependencies || []) {
          const depId = normalizeSubtaskDepId(d);
          if (!isNaN(depId) && subtaskIds.has(depId)) {
            deps.push(depId);
          }
        }

        if (deps.length > 0) {
          setSubtaskDependencies(db, String(task.id), subtaskId, tag, deps);
        }
      }
    }
  }
}
persistTasksWithDependencies filters deps against only the provided batch — breaks appendTasks.
Line 110 filters valid dependencies to only those whose IDs exist in tagTasks:
.filter((depId) => tagTasks.some((t) => String(t.id) === String(depId)))
This is correct for saveTasks and rebuildFromJsonl where tagTasks contains all tasks for the tag. However, appendTasks passes only the new tasks. Dependencies from new tasks pointing to pre-existing tasks in the database are silently dropped, and the condition on Lines 113–115 will call setTaskDependencies(db, taskId, tag, []) — setting empty deps instead of the intended ones.
🔧 Proposed fix — accept an optional set of known task IDs
function persistTasksWithDependencies(
db: import('libsql').Database,
- tasksByTag: Map<string, Task[]>
+ tasksByTag: Map<string, Task[]>,
+ existingTaskIdsByTag?: Map<string, Set<string>>
): void {
// First pass: Save all tasks without dependencies
for (const [tag, tagTasks] of tasksByTag) {
for (const task of tagTasks) {
saveCompleteTask(db, task, tag, {
skipTaskDependencies: true,
skipSubtaskDependencies: true
});
}
}
// Second pass: Set all dependencies
for (const [tag, tagTasks] of tasksByTag) {
+ const existingIds = existingTaskIdsByTag?.get(tag);
for (const task of tagTasks) {
- // Set task dependencies (filter to only valid ones that exist in this tag)
const validDeps = (task.dependencies || [])
- .filter((depId) => tagTasks.some((t) => String(t.id) === String(depId)))
+ .filter((depId) =>
+ tagTasks.some((t) => String(t.id) === String(depId)) ||
+ existingIds?.has(String(depId))
+ )
.map((d) => String(d));Then in appendTasks, gather existing task IDs before persisting:
const existingTasks = loadAllTasks(this.getDb().getDb(), resolvedTag);
const existingIds = new Map<string, Set<string>>();
existingIds.set(resolvedTag, new Set(existingTasks.map(t => String(t.id))));
persistTasksWithDependencies(this.getDb().getDb(), tasksByTag, existingIds);🤖 Prompt for AI Agents
In
`@packages/tm-core/src/modules/storage/adapters/sqlite-storage/sqlite-storage.ts`
around lines 91 - 148, persistTasksWithDependencies currently filters
dependencies only against tagTasks (new batch) which drops deps that point to
existing DB tasks when called from appendTasks; fix by adding an optional
parameter (e.g., knownTaskIds: Map<string, Set<string>>) to
persistTasksWithDependencies and use the union of tagTasks IDs and
knownTaskIds.get(tag) when computing validDeps before calling
setTaskDependencies; update callers (notably appendTasks) to call loadAllTasks
to collect existing IDs for the tag and pass that map into
persistTasksWithDependencies, leaving behavior unchanged when the optional map
is omitted.
// Default to file (JSON) storage backend unless explicitly specified
const storageBackendType = options.storageBackend || 'file';
🧩 Analysis chain
🏁 Script executed:
# Search for CLI command definition with --storage-backend
rg -n --type=js 'storage-backend' --glob '!node_modules/**'
Repository: eyaltoledano/claude-task-master
Length of output: 138
🏁 Script executed:
# Look at the context around line 377-390 in scripts/init.js
sed -n '370,395p' scripts/init.js | cat -n
Repository: eyaltoledano/claude-task-master
Length of output: 1035
🏁 Script executed:
# Check line 982 in scripts/init.js where storageBackendType is checked
sed -n '975,990p' scripts/init.js | cat -n
Repository: eyaltoledano/claude-task-master
Length of output: 800
🏁 Script executed:
# Find updateStorageConfig function
rg -n 'updateStorageConfig' --type=js --glob '!node_modules/**' -B2 -A10
Repository: eyaltoledano/claude-task-master
Length of output: 1501
🏁 Script executed:
# Get more context around the --storage-backend definition in commands.js
sed -n '3560,3575p' scripts/modules/commands.js | cat -n
Repository: eyaltoledano/claude-task-master
Length of output: 824
🏁 Script executed:
# Search for any validation around storageBackend in the entire commands.js file
rg -n --type=js -B5 -A5 'storage-backend' scripts/modules/commands.js
Repository: eyaltoledano/claude-task-master
Length of output: 540
🏁 Script executed:
# Check updateStorageConfig to see what it does with the value
sed -n '720,760p' scripts/init.js | cat -n
Repository: eyaltoledano/claude-task-master
Length of output: 1823
Add validation for --storage-backend value.
The CLI accepts any arbitrary string (no .choices() validation), and init.js writes it directly to config.json without checking if it's 'file' or 'sqlite'. If a user passes an invalid value like 'sqllite' (typo) or 'postgres', it gets stored in config and may cause downstream issues.
Validate in init.js:
Proposed fix
- const storageBackendType = options.storageBackend || 'file';
+ const validBackends = ['file', 'sqlite'];
+ const storageBackendType = validBackends.includes(options.storageBackend)
+ ? options.storageBackend
+ : 'file';📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Default to file (JSON) storage backend unless explicitly specified
const storageBackendType = options.storageBackend || 'file';

// Default to file (JSON) storage backend unless explicitly specified
const validBackends = ['file', 'sqlite'];
const storageBackendType = validBackends.includes(options.storageBackend)
  ? options.storageBackend
  : 'file';
🤖 Prompt for AI Agents
In `@scripts/init.js` around lines 377 - 378, Validate the CLI storage backend
value before writing to config: instead of accepting any options.storageBackend
into storageBackendType, check it against the allowed set {'file','sqlite'} (use
a small helper or inline validation in the same block that defines
storageBackendType) and if the value is invalid throw or exit with a clear error
message; ensure the validated value (not the raw options.storageBackend) is what
gets written to config.json and update any uses of storageBackendType downstream
to rely on the validated value.
Hey @Crunchyman-ralph! Good question - happy to explain where my head was at. So on some past projects I've ended up with a few hundred tasks + subtasks. When you're doing heavy LLM-driven development where the agent is constantly updating task details as it works - especially across context windows where things get compacted - I was thinking about the efficiency of targeted queries vs loading/parsing the full JSON structure. Like when an agent just needs to update subtask 47.3, being able to hit that specific row in a database feels more surgical than reading/modifying/rewriting the whole file. At scale with lots of task updates, indexed lookups add up. SQLite also gives you transactions, schema enforcement, and integrity checks - which feels like more robust protection for a constantly-updating task database.

This was inspired by Beads which uses SQLite+JSONL. But I actually like Taskmaster way more, especially how you handle PRD parsing and the whole complexity analysis flow. Beads is pretty bare-bones in comparison. So I thought hey, why not bring that storage pattern here?

That said - I totally get if this is solving a problem you're not actually having. If tasks.json is working great at your scale, no need to complicate things. Happy to close this out or just keep it on ice for later if it ever becomes relevant. Just wanted to contribute something I thought might be useful. 🙂
Very well fought, I'll look into the PR and will investigate, thank you for your detailed message, really appreciate it!
Summary
Adds SQLite storage backend using a hybrid approach: SQLite for fast local operations, JSONL for git-friendly persistence.
Why This Matters
Version-controlled tasks - The JSONL file can be committed to git. Tasks travel with your code. See task history in git log. Branch-specific task states. Team members get task context when they pull.
Offline-first - No network dependency. Works anywhere.
Queryable data - SQLite enables filtering and aggregation that JSON files can't do efficiently. Foundation for future reporting features (tasks by status, completion rates, time-in-status metrics).
Scales better - JSON storage reads/writes the entire file on every operation. SQLite handles thousands of tasks with indexed queries.
More durable - WAL mode for concurrent access, transactions for atomicity, foreign keys for referential integrity, built-in integrity checks.
Self-healing - Delete the .db file and it rebuilds automatically from JSONL on next access. No manual recovery needed.
What's Included
- `SqliteStorage` adapter implementing `IStorage` interface
- `task-master storage status|switch|migrate|rebuild`
- `storage_config` for AI agent access
- `task-master init` (choose JSON or SQLite)
- `--storage-backend` CLI option for non-interactive init
- `task-master --help`
CodeRabbit Fixes
- Moved `libsql` to optionalDependencies for platform compatibility
- Wrapped `copyTasksToTag` in a transaction (prevents partial copies)
- Added a NaN guard in `subtaskToInsertData` for non-numeric IDs
- Filter `_deleted` tasks during JSONL rebuild
- Extracted the `persistTasksWithDependencies` helper (DRY)
Files Changed
- packages/tm-core/src/modules/storage/adapters/sqlite-storage/ - Core adapter
- packages/tm-core/src/modules/storage/services/storage-migration.ts - Migration utilities
- apps/cli/src/commands/storage.command.ts - CLI command
- apps/mcp/src/tools/storage/ - MCP tool
- mcp-server/src/tools/tool-registry.js - Tool registration
- scripts/init.js - Storage backend selection in init flow
- scripts/modules/commands.js - CLI option
- scripts/modules/ui.js - Help output
- docs/command-reference.md - Storage Management section
- docs/configuration.md - Storage Configuration section
Test plan
storage status, storage switch, init)
🤖 Generated with Claude Code
Summary by CodeRabbit
Note
Low Risk
This PR only adds release notes and gitignore rules; it does not change runtime code paths, so behavioral risk is minimal.
Overview
Adds a Changesets entry documenting a set of follow-up fixes/improvements for the SQLite + JSONL storage backend (tag-scoped JSONL sync, validation/counting fixes, dependency normalization, status cascade behavior, security check for backup paths, and migration resource cleanup), along with added test coverage.
Updates `.gitignore` to exclude the local SQLite database files (tasks.db*) and TaskMaster backup artifacts from version control.
Written by Cursor Bugbot for commit b765b0a. This will update automatically on new commits. Configure here.