Releases: kriasoft/syncguard
v2.5.3
What's Changed
Bug Fixes
- `formatFence`: Reject invalid inputs (NaN, Infinity, floats, negative fractions) before `BigInt()` conversion — previously `-0.1` silently became `"000000000000000"` and `NaN` leaked a `RangeError`
- `makeStorageKey`: Validate `backendLimitBytes` (must be a positive integer) and `reserveBytes` (must be a non-negative integer) to prevent backend limit bypass via misconfiguration
Internal
- Add `FENCE_FORMAT_MAX` constant to distinguish the format limit (10¹⁵ − 1) from the operational limit (`FENCE_THRESHOLDS.MAX` = 9×10¹⁴)
Migration
No breaking changes. All invalid inputs that previously caused undefined behavior now throw LockError("InvalidArgument").
Full Changelog: v2.5.2...v2.5.3
v2.5.2
Bug Fixes
- Fix race condition where `dispose()` returned `null` when called while `release()` was awaiting the backend
- Handle synchronous throws from `backend.release()` to prevent state getting stuck at `"disposing"`
- Add runtime validation to `acquireHandle()` for misconfigured backends/mocks
Changes
common/disposable.ts
- Track a `pendingRelease` promise so `dispose()` can wait for in-flight `release()` operations
- Wrap the `backend.release()` call to handle synchronous throws
- Validate decorated result methods in `acquireHandle()`
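A minimal sketch of the pattern behind these changes, assuming a generic backend with a `release({ lockId })` method; the real `common/disposable.ts` internals may differ:

```ts
// Sketch only: track a single pendingRelease promise so dispose() and
// release() share the same in-flight operation.
function createDisposable(
  backend: { release(args: { lockId: string }): Promise<unknown> },
  lockId: string,
) {
  let pendingRelease: Promise<unknown> | null = null;

  const release = () => {
    if (!pendingRelease) {
      // Wrapping the call captures synchronous throws from backend.release()
      // in the promise instead of leaving state stuck at "disposing".
      pendingRelease = Promise.resolve().then(() => backend.release({ lockId }));
    }
    return pendingRelease;
  };

  return {
    release,
    // dispose() now awaits any in-flight release() instead of returning early.
    [Symbol.asyncDispose]: async () => {
      await release();
    },
  };
}
```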
What's Changed
- chore(deps): update dependencies by @koistya in #22
- ci: add Codecov integration by @koistya in #23
- test: reorganize test structure with fixtures and contracts by @koistya in #24
- docs: move specs and ADRs under docs/ by @koistya in #25
- fix: dispose() race condition when release() is in-flight by @koistya in #26
Full Changelog: v2.5.1...v2.5.2
v2.5.1
What's new
Better disposal timeout handling – The timeout logic for lock disposal has been refactored to be more reliable and resilient, especially with Firestore which can have slower gRPC calls. The implementation now uses Promise.race to ensure disposal never blocks indefinitely, while still observing the outcome of background operations.
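A minimal sketch of that Promise.race pattern; the helper name and callback shape here are illustrative, not the actual internals:

```ts
// Sketch: bound disposal latency with Promise.race, but keep observing the
// in-flight release so its eventual failure is still reported.
async function disposeWithTimeout(
  releasePromise: Promise<unknown>,
  disposeTimeoutMs: number,
  onReleaseError?: (err: unknown) => void,
): Promise<void> {
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, disposeTimeoutMs));

  // Disposal resolves after the release or the timeout, whichever comes first...
  await Promise.race([releasePromise, timeout]);

  // ...while a late failure is routed to onReleaseError instead of becoming
  // an unhandled rejection.
  releasePromise.catch((err) => onReleaseError?.(err));
}
```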
Enhanced documentation – Added comprehensive documentation for disposal timeout behavior, including important caveats about how different backends handle timeouts. Firestore users should be aware that AbortSignal cannot interrupt in-flight gRPC calls, so timeouts signal cancellation intent but may not stop the RPC.
Improved Firestore reliability – Fixed edge cases in the Firestore emulator integration and improved CI robustness to catch timeout-related issues earlier.
Better setup docs – Enhanced README with detailed setup, configuration, and troubleshooting guidance for all three backends (Redis, PostgreSQL, Firestore).
What's Changed
- Dependency updates by @dependabot[bot] in #15
- Firestore emulator test improvements by @koistya in #16
- Improved disposal timeout handling by @koistya in #17
- Enhanced README with setup and troubleshooting by @koistya in #18
New Contributors
- @dependabot[bot] made their first contribution in #15
Full Changelog: v2.5.0...v2.5.1
v2.5.0
What's New
SyncGuard now supports modern await using syntax for automatic lock cleanup. Lock handles implement Symbol.asyncDispose, ensuring cleanup on all code paths including early returns and exceptions.
Modern API (Node.js ≥20)
```ts
{
  await using lock = await backend.acquire({ key, ttlMs: 30000 });
  if (lock.ok) {
    await doWork(lock.fence);
    // Lock automatically released - no try/finally needed
  }
}
```
Key Features
- RAII-style cleanup: Locks automatically released when scope exits
- Idempotent disposal: At-most-once semantics prevent double-release
- Smart error handling: Disposal errors route to the `onReleaseError` callback (never throw)
- Optional timeouts: Configure `disposeTimeoutMs` for unreliable networks
- Development-friendly: Logs disposal errors to console in dev mode
- Backwards compatible: Legacy try/finally pattern still supported
Production Error Observability
```ts
const backend = createRedisBackend(redis, {
  onReleaseError: (err, ctx) => {
    logger.error('Disposal failed', { err, ...ctx });
    metrics.increment('syncguard.disposal.error');
  },
  disposeTimeoutMs: 5000 // Optional: abort disposal after 5s
});
```
Backend Support
All backends updated with AsyncDisposable support:
- Redis
- PostgreSQL
- Firestore
Migration
No breaking changes. Existing code continues to work unchanged. Upgrade to Node.js 20+ to use await using syntax.
Documentation
- README and all guides updated with `await using` examples
- New ADRs: ADR-015 (Async RAII), ADR-016 (Opt-In Disposal Timeout)
- Comprehensive JSDoc with error handling best practices
Testing
- New integration tests: `test/integration/disposable.test.ts`
- New unit tests: `test/unit/disposable.test.ts`
- Full coverage across all backends
Credits
Thanks to @alii for proposing the await using API!
Changelog
Full Changelog: v2.4.0...v2.5.0
v2.4.0
Simplified PostgreSQL backend initialization by separating schema setup from lock creation.
What Changed
- New `setupSchema()` function: Explicit one-time schema initialization
- Synchronous `createLock()`: No longer async, cleaner API
- Better separation of concerns: Schema setup decoupled from backend creation
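A sketch of the new initialization flow; `setupSchema(sql)` is the assumed call shape, so check the package docs for the exact signature:

```ts
import { createLock, setupSchema } from "syncguard/postgres";
import postgres from "postgres";

const sql = postgres("postgresql://localhost:5432/myapp");

await setupSchema(sql);       // explicit, one-time schema initialization
const lock = createLock(sql); // now synchronous - no await needed
```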
Benefits
- Clearer initialization flow
- Synchronous lock creation (no await needed)
- Explicit control over schema setup timing
- Better testability and composability
Full Changelog: v2.3.1...v2.4.0
v2.3.1
Fixed
- Added missing `createLock()` convenience function to the `syncguard/postgres` module for feature parity with Redis and Firestore backends
Changed
- Streamlined public API by removing `createAutoLock` from main package exports (still available in `syncguard/common` for custom backend implementations)
- All built-in backends (Redis, PostgreSQL, Firestore) now use `lock()` directly as the primary API
Migration
No breaking changes. If you were using `createAutoLock` from the main package, switch to `lock()` instead (recommended), or import it from `syncguard/common` if needed for custom backend implementations.
Changelog
Full Changelog: v2.3.0...v2.3.1
v2.3.0
🎉 What's New
PostgreSQL Backend Support
SyncGuard now supports PostgreSQL as a distributed lock backend, joining Redis and Firestore! This brings the power of transaction-based locking to teams already using PostgreSQL, eliminating the need for additional infrastructure.
Why PostgreSQL Backend?
- Zero Additional Infrastructure: If you're already using PostgreSQL, you can now implement distributed locking without deploying Redis or Firestore
- ACID Guarantees: Built on PostgreSQL's robust transaction system with automatic rollback
- Server Time Authority: Uses PostgreSQL server time (`NOW()`) for consistent lock timing across all clients, eliminating clock skew issues
- Transaction-Based Atomicity: All lock operations use `sql.begin()` transactions with row-level locking (`FOR UPDATE`) for bulletproof concurrency control
- Automatic Table Creation: Tables and indexes are created automatically on first use (configurable)
Quick Start
Installation
```bash
npm install syncguard postgres
```
Basic Usage
```ts
import { createLock } from "syncguard/postgres";
import postgres from "postgres";

const sql = postgres("postgresql://localhost:5432/myapp");
const lock = createLock(sql);

// Prevent duplicate payment processing
await lock(
  async () => {
    const payment = await getPayment(paymentId);
    if (payment.status === "pending") {
      await processPayment(payment);
      await updatePaymentStatus(paymentId, "completed");
    }
  },
  { key: `payment:${paymentId}`, ttlMs: 60000 }
);
```
Manual Lock Control
```ts
import { createPostgresBackend } from "syncguard/postgres";

const backend = createPostgresBackend(sql);

// Acquire lock
const result = await backend.acquire({
  key: "batch:daily-report",
  ttlMs: 300000, // 5 minutes
});

if (result.ok) {
  try {
    const { lockId, fence } = result; // Fencing token for stale lock protection
    await generateDailyReport(fence);

    // Extend lock for long-running tasks
    const extended = await backend.extend({ lockId, ttlMs: 300000 });
    if (!extended.ok) {
      throw new Error("Failed to extend lock");
    }

    await sendReportEmail();
  } finally {
    await backend.release({ lockId: result.lockId });
  }
} else {
  console.log("Resource is locked by another process");
}
```
Configuration Options
```ts
const lock = createLock(sql, {
  tableName: "app_locks",       // Default: "syncguard_locks"
  fenceTableName: "app_fences", // Default: "syncguard_fence_counters"
  autoCreateTables: true,       // Default: true
  cleanupInIsLocked: false,     // Default: false (read-only)
});
```
🗄️ Database Schema
The PostgreSQL backend uses two tables:
Lock Table (syncguard_locks)
Stores active lock records with automatic cleanup based on TTL expiration.
```sql
CREATE TABLE syncguard_locks (
  key TEXT PRIMARY KEY,
  lock_id TEXT NOT NULL,
  expires_at_ms BIGINT NOT NULL,
  acquired_at_ms BIGINT NOT NULL,
  fence TEXT NOT NULL,
  user_key TEXT NOT NULL
);

-- Required indexes
CREATE UNIQUE INDEX idx_syncguard_locks_lock_id ON syncguard_locks(lock_id);
CREATE INDEX idx_syncguard_locks_expires ON syncguard_locks(expires_at_ms);
```
Fence Counter Table (syncguard_fence_counters)
Stores monotonic fence counters that persist indefinitely to guarantee fencing token uniqueness.
```sql
CREATE TABLE syncguard_fence_counters (
  fence_key TEXT PRIMARY KEY,
  fence BIGINT NOT NULL DEFAULT 0,
  key_debug TEXT
);
```
Note: The schema file is included in the package at postgres/schema.sql for manual setup or migration tools.
✨ Key Features
Transaction-Based Atomicity
All PostgreSQL lock operations use transactions with:
- Row-Level Locking: `FOR UPDATE` clauses prevent TOCTOU (time-of-check-time-of-use) races
- Automatic Rollback: Connection errors automatically roll back partial changes
- ACID Guarantees: Full transactional safety for all lock operations
Absent-Row Race Protection
The PostgreSQL backend uses a three-step pattern to prevent duplicate fence tokens when counters don't exist:
- Advisory Lock: `pg_advisory_xact_lock()` serializes concurrent acquires per key
- Idempotent Initialization: `INSERT ... ON CONFLICT DO NOTHING` ensures the row exists
- Atomic Increment: `UPDATE ... RETURNING` with an implicit row lock
This guarantees monotonic fence tokens even under high concurrency.
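A hedged sketch of that three-step pattern using the `sql` client from the Quick Start above; `fenceKey` is a placeholder and the backend's actual queries may differ:

```ts
const fence = await sql.begin(async (tx) => {
  // 1. Advisory lock: serialize concurrent acquires for this key
  //    (hashtext maps the text key to an integer lock id)
  await tx`SELECT pg_advisory_xact_lock(hashtext(${fenceKey}))`;

  // 2. Idempotent initialization: make sure the counter row exists
  await tx`INSERT INTO syncguard_fence_counters (fence_key)
           VALUES (${fenceKey}) ON CONFLICT DO NOTHING`;

  // 3. Atomic increment: bump the counter under its implicit row lock
  const [row] = await tx`UPDATE syncguard_fence_counters
                         SET fence = fence + 1
                         WHERE fence_key = ${fenceKey}
                         RETURNING fence`;
  return row.fence; // BIGINT comes back as a string by default
});
```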
Server Time Authority
- Uses PostgreSQL's `NOW()` function for authoritative time
- All clients see consistent lock state regardless of local time
- Captured inside transactions for timestamp consistency
Fencing Token Support
- Monotonic 15-digit fence tokens (e.g., "000000000000042")
- Stored as `BIGINT` in the database, converted to zero-padded strings at the API boundary
- Lexicographic ordering for easy comparison
- Persistent across lock cleanup and PostgreSQL restarts
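For illustration, this is how a `BIGINT` counter value maps to the 15-digit API format and why plain string comparison preserves numeric order:

```ts
const fence = 42n;                                 // value stored in the fence BIGINT column
const token = fence.toString().padStart(15, "0");  // "000000000000042"

// Zero-padding makes lexicographic comparison match numeric comparison:
console.log("000000000000042" < "000000000000100"); // true
```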
📊 Performance Characteristics
- Primary Key Access: O(1) lock acquisition and status checks
- Indexed Lookups: Fast release/extend operations via the `lock_id` index
- Transaction Overhead: ~2-5ms per operation (depending on configuration)
- Expected Throughput: 500-2000 ops/sec with connection pooling
- Competitive with Redis: Local PostgreSQL instances achieve sub-millisecond latency
🔒 Security & Safety
Explicit Ownership Verification (ADR-003)
All release/extend operations include explicit ownership verification:
```ts
if (data?.lock_id !== lockId) {
  return { ok: false };
}
```
This defense-in-depth approach guards against edge cases even with row-level locks.
Configuration Validation
The backend validates configuration at initialization:
- Ensures fence table differs from lock table (prevents accidental counter deletion)
- Validates table names as safe SQL identifiers
- Throws `LockError("InvalidArgument")` on invalid configuration
AbortSignal Support
Manual cancellation checks at strategic points:
- Before transaction work
- After read operations
- Before write operations
Provides responsive cancellation without excessive overhead.
🆚 Backend Comparison
| Feature | Redis | PostgreSQL | Firestore |
|---|---|---|---|
| Infrastructure | Separate service | Existing database | Google Cloud |
| Time Authority | Server time | Server time | Client time |
| Transaction Model | Lua scripts | SQL transactions | Document transactions |
| Setup Complexity | Medium | Low | Medium |
| Fencing Tokens | ✅ Always | ✅ Always | ✅ Always |
| Best For | High performance | Zero overhead | Serverless apps |
🔧 Migration Guide
From Redis to PostgreSQL
```ts
// Before (Redis)
import { createLock } from "syncguard/redis";
import Redis from "ioredis";

const redis = new Redis();
const lock = createLock(redis);

// After (PostgreSQL)
import { createLock } from "syncguard/postgres";
import postgres from "postgres";

const sql = postgres("postgresql://localhost:5432/myapp");
const lock = createLock(sql);

// Usage remains identical!
await lock(
  async () => { /* critical section */ },
  { key: "resource:123", ttlMs: 30000 }
);
```
Configuration Mapping
```ts
// Redis configuration
const lock = createLock(redis, {
  keyPrefix: "myapp",
});

// PostgreSQL equivalent
const lock = createLock(sql, {
  tableName: "myapp_locks",
  fenceTableName: "myapp_fences",
});
```
📚 Documentation
- Full Documentation: https://kriasoft.com/syncguard/
- PostgreSQL Backend Spec: specs/postgres-backend.md
- Interface Spec: specs/interface.md
- Schema Reference: postgres/schema.sql
🐛 Bug Fixes
- Firestore: Fixed CI/CD test flakiness in timeout tests by adjusting timing parameters for variable emulator latency
🙏 Acknowledgments
Thank you to the community for requesting PostgreSQL support and providing valuable feedback during development!
📦 Installation
```bash
# Redis backend
npm install syncguard ioredis

# PostgreSQL backend (NEW!)
npm install syncguard postgres

# Firestore backend
npm install syncguard @google-cloud/firestore
```
🔗 Links
- Package: https://npmjs.com/package/syncguard
- Repository: https://github.com/kriasoft/syncguard
- Issues: https://github.com/kriasoft/syncguard/issues
- Discord: https://discord.gg/EnbEa7Gsxg
Full Changelog: v2.2.0...v2.3.0
v2.2.0
Overview
Version 2.2.0 enhances operational robustness and observability while maintaining 100% backward compatibility. This release introduces defensive improvements for edge cases, richer telemetry for debugging, and refined error handling – all without breaking changes.
Highlights
🛡️ Duplicate Detection
Added defensive duplicate lockId detection to Firestore operations:
- Automatic detection and cleanup of expired duplicates during `extend` and `release`
- Fail-safe abort if multiple live locks are detected (prevents ambiguous state)
- Diagnostic logging for duplicate detection in `lookup` operations
- Transparent to users – no API changes required
📊 Enhanced Telemetry
Richer observability for lock operations:
- Telemetry events now include a `reason` field for release/extend failures (`"expired" | "not-found"`)
- Better operational insights for debugging lock contention
- Zero-cost abstraction – no performance impact when telemetry is disabled
🔧 Better Error Handling
Refined error categorization for more precise retry logic:
- Separated network timeout errors from transient availability errors
- Prevents incorrect retry behavior (timeouts vs. availability issues)
- Enhanced error messages with better context across both backends
What's Changed
Full Changelog: v2.1.0...v2.2.0
v2.1.0
A major release introducing critical improvements to distributed lock reliability, standardizing key generation algorithms, enhancing fence token safety, and adding AbortSignal support for operation cancellation.
🎯 Highlights
- Unified Storage Key Generation: Canonical `makeStorageKey()` function ensures consistent key derivation across all backends
- Enhanced Fence Token Safety: Reduced-precision format (15 digits) provides full Lua 53-bit precision compatibility
- AbortSignal Support: Graceful operation cancellation with new `LockError("Aborted")` error code
- Comprehensive Test Coverage: 7 new test suites covering edge cases, cross-backend consistency, and overflow scenarios
- Improved Documentation: Restructured specs and expanded API documentation with examples
⚠️ Breaking Changes
Fence Token Format
- Changed: 19-digit → 15-digit zero-padded format
- Impact: Existing deployments with fence tokens > 10^15 may require migration (extremely rare)
- Capacity: Provides 10^15 operations ≈ 31.7 years at 1M locks/sec
- Safety: Full Lua 53-bit precision compatibility prevents silent overflows
Storage Key Algorithm
- Changed: Hash truncation now uses byte-based (not character-based) measurements
- Changed: Switched from 24-char hex to 22-char base64url encoding (30% space savings)
- Impact: Existing truncated keys will regenerate different hashes on next acquire
- Note: No action needed for most users - keys regenerate transparently
🔑 Storage Key Generation
Two-Step Fence Pattern
- New: Backends derive fence keys from base storage keys to ensure 1:1 mapping
- New: Reserve byte allocations per backend (Redis: 26, Firestore: 0) prevent derived key overflows
Algorithm Improvements
- Fixed: Byte-accurate UTF-8 length calculations replace character-based measurements
- Enhanced: Base64url hashing provides 128-bit collision resistance with 30% space savings
- Standardized: Mandatory `makeStorageKey()` usage across all backend operations
📏 Fence Token Enhancements
Overflow Protection
- New: `FENCE_THRESHOLDS` constants (max: 9e14, warn: 9e13)
- New: Mandatory warning logs when approaching thresholds
- New: `formatFence()` range checks prevent silent overflows
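A hedged sketch of the overflow guard; the threshold values come from these notes, while the function shape and messages are illustrative assumptions:

```ts
const FENCE_THRESHOLDS = { max: 9e14, warn: 9e13 } as const;

function checkFenceRange(fence: number): void {
  if (fence > FENCE_THRESHOLDS.max) {
    // Refuse to emit a token instead of overflowing silently
    throw new Error(`fence exceeds operational maximum: ${fence}`);
  }
  if (fence > FENCE_THRESHOLDS.warn) {
    // Warn once the counter approaches the ceiling
    console.warn(`syncguard: fence counter nearing maximum (${fence} > ${FENCE_THRESHOLDS.warn})`);
  }
}
```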
Precision Safety
- Fixed: 15-digit format ensures compatibility with Lua 53-bit integer precision
- Validated: Cross-backend consistency tests verify Redis/Firestore fence behavior parity
🚫 AbortSignal Support
New Error Code
```ts
// Cancel lock operations via AbortSignal
const controller = new AbortController();

const result = await lock(
  async () => {
    // Work with lock
  },
  { key: "resource-key", signal: controller.signal }
);

// Calling controller.abort() while the operation is in flight
// makes it throw LockError("Aborted")
controller.abort();
```
Integration Points
- Updated: `lock()` function respects AbortSignal
- Updated: Backend acquire/extend/release operations check for cancellation
- New: `checkAborted()` helper for consistent signal handling
📚 Documentation
Restructured Specifications
- Renamed: `specs/redis.md` → `specs/redis-backend.md`
- Renamed: `specs/firestore.md` → `specs/firestore-backend.md`
- Expanded: ADR-004 (fence format) with new specifications
- Expanded: ADR-006 (key truncation) with algorithm details
Enhanced API Documentation
- Improved: Comprehensive API documentation (`docs/api.md`)
- Added: Examples for all public interfaces
- Fixed: Cross-references and navigation
- Added: Social media preview image (`docs/public/og-image.webp`)
Configuration Updates
- Updated: `CLAUDE.md` with new file structure
- Updated: `CONTRIBUTING.md` with backend contribution checklist
- Fixed: VitePress sitemap hostname
🧪 Testing
New Test Suites
- `abort-signal.test.ts`: AbortSignal cancellation scenarios (347 lines)
- `cross-backend-consistency.test.ts`: Redis/Firestore behavior parity (558 lines)
- `fence-overflow.test.ts`: Overflow threshold enforcement (284 lines)
- `hash-truncation-verification.test.ts`: Storage key collision analysis (542 lines)
- `lockid-format-validation.test.ts`: Lock ID format correctness (322 lines)
- `redis-truncation-correctness.test.ts`: Redis key truncation edge cases (346 lines)
- `time-tolerance-enforcement.test.ts`: Time tolerance validation (256 lines)
Enhanced Coverage
- Extended: Redis integration tests with 229+ new lines
- Extended: Firestore integration tests with 333+ new lines
- All tests passing: Unit, integration, performance, type checking, and build
🔧 Backend Implementations
Redis Backend
- Updated: Lua scripts use 15-digit fence format
- Added: Fence overflow checks in acquire/extend operations
- Integrated: Two-step key derivation (base key → fence key)
- Enhanced: Error handling with centralized mapping
Firestore Backend
- Aligned: Acquire/release/extend with standardized key generation
- Added: Fence validation in all operations
- Integrated: Reserve byte calculations (0 bytes for Firestore)
- Enhanced: Transaction handling with improved error context
Shared Utilities
- New: `logFenceWarning()` for consistent monitoring across backends
- Improved: Error messages include context (key, lockId) in all operations
📦 Migration Guide
For Most Users
These changes are transparent and require no action:
- Storage keys regenerate automatically on next acquire
- Fence tokens remain well below thresholds (< 10^13 for typical workloads)
For Edge Cases
If your deployment has:
- Fence tokens > 10^15: Contact maintainers (extremely unlikely scenario)
- Custom key generation: Review `common/crypto.ts` for the new algorithm
🔍 Testing Verification
All test suites pass successfully:
```bash
✓ bun run test:unit         # Unit tests with mocked dependencies
✓ bun run test:integration  # Redis + Firestore integration tests
✓ bun run test:performance  # Performance benchmarks
✓ bun run typecheck         # TypeScript strict mode validation
✓ bun run build             # Production build
```
📊 Statistics
- Files Changed: 59 files
- Lines Added: 6,604
- Lines Removed: 2,174
- Net Change: +4,430 lines
- Test Coverage: 7 new test suites, 2,655+ new test lines
🙏 Acknowledgments
This release addresses:
- Key truncation inconsistencies across backends
- Lua precision safety concerns with fence counters
- Requested AbortSignal support for graceful cancellation
- Documentation discoverability improvements
📖 Resources
- Documentation: https://kriasoft.com/syncguard/
- Repository: https://github.com/kriasoft/syncguard
- Changelog: See commit `eeff991`
- Issues: Report at https://github.com/kriasoft/syncguard/issues
What's Changed
- docs: add VitePress documentation site by @koistya in #5
- refactor: standardize key generation, fence format by @koistya in #7
Full Changelog: v1.0.0...v2.1.0
v1.0.0
🎯 Major API Redesign
This release represents a comprehensive overhaul of SyncGuard with significant breaking changes, enhanced type safety, and improved functionality.
Breaking Changes
1. New Result-Based Error Handling
- Replaced exception-heavy error handling with discriminated union types (`LockResult<T>`)
- Backend operations now return structured results instead of throwing
- Example migration:
```ts
// Before
const lock = createLock(backend);
await lock(workFn, { key: "resource:123" });

// After
const backend = createRedisBackend(redis);
const result = await backend.acquire({ key: "resource:123" });
if (result.ok) {
  // Use result.lockId and result.fence
}
```
2. Fencing Token Support
- All operations now support fencing tokens for stale lock protection
- Compile-time type safety: `result.fence` is guaranteed when `supportsFencing: true`
- Requires schema updates for existing deployments (new `fence_counters` collection/keys)
3. Enhanced Lookup Operations
- Renamed `get-lock-info.ts` → `lookup.ts` in both backends
- New unified lookup interface supporting both key-based and lockId-based queries
- Added `owns()` and `getByKey()` helper functions
4. Restructured Common Module
Split monolithic common/backend.ts into focused modules:
- `types.ts` - Core interfaces and types
- `constants.ts` - Configuration constants
- `errors.ts` - LockError class
- `validation.ts` - Key & lockId validation
- `crypto.ts` - Cryptographic functions
- `helpers.ts` - Utility functions
- `auto-lock.ts` - Auto-managed locks
- `config.ts` - Configuration helpers
- `telemetry.ts` - Observability decorators
✨ New Features
Fencing Tokens
- Protect against stale lock scenarios in distributed systems
- Lexicographic string comparison (`fence > prevFence`)
- 19-digit zero-padded format for cross-language compatibility
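A hedged sketch of how a downstream consumer can use fence tokens; the `FencedResource` shape is hypothetical, only the lexicographic `fence > prevFence` rule comes from these notes:

```ts
interface FencedResource {
  lastFence: string;
  save(data: unknown): Promise<void>;
}

async function writeWithFence(resource: FencedResource, fence: string, data: unknown) {
  // Zero-padded tokens make plain string comparison respect numeric order
  if (fence <= resource.lastFence) {
    throw new Error(`Stale fence token rejected: ${fence} <= ${resource.lastFence}`);
  }
  resource.lastFence = fence;
  await resource.save(data);
}
```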
Enhanced Type Safety
- Parameterized result types by backend capabilities
- Direct `result.fence` access without runtime assertions
- Improved error codes with structured reasons
Lookup Operations
- O(1) key-based lookup: `getByKey(backend, key)`
- Ownership verification: `owns(backend, lockId)`
- Debug access: `lookupDebug()` for raw lock data
Opt-In Telemetry
- Zero-cost abstraction when disabled
- Decorator pattern: `withTelemetry(backend, options)`
- Async event callbacks that never block operations
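A hedged sketch of the decorator in use, assuming a `backend` created as in the examples above; only `withTelemetry(backend, options)` comes from these notes, while the import path, option name, and event shape are assumptions:

```ts
import { withTelemetry } from "syncguard/common"; // import path is an assumption

const observedBackend = withTelemetry(backend, {
  onEvent: async (event: unknown) => {
    // Callbacks fire asynchronously and never block lock operations
    console.log("syncguard event", event);
  },
});
```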
🏗️ Architecture Improvements
ADR Documentation
Added 9 Architectural Decision Records covering:
- Explicit ownership re-verification (ADR-003)
- Lexicographic fence comparison (ADR-004-R2)
- Unified time tolerance (ADR-005)
- Mandatory uniform key truncation (ADR-006)
- Opt-in telemetry (ADR-007)
- Compile-time fencing contract (ADR-008)
- Retry logic in helpers only (ADR-009)
Backend Operations
- Atomicity improvements in both Redis (Lua scripts) and Firestore (transactions)
- Centralized error mapping for consistent error handling
- Script caching optimization for Redis operations
- Enhanced test coverage with integration and performance tests
📚 Documentation
New Specifications
- `specs/interface.md` - Complete LockBackend API contracts & usage patterns
- Updated `specs/firestore.md` - Enhanced Firestore implementation requirements
- Updated `specs/redis.md` - Enhanced Redis implementation requirements
- `specs/adrs.md` - Architectural decision records
Updated Examples
- Comprehensive usage examples for both Redis and Firestore
- Manual lock control patterns
- Ownership checking examples
- Rate limiting and job processing patterns
🧪 Testing Infrastructure
New Test Suites
- Unit tests: `bun run test:unit` (fast, mocked dependencies)
- Integration tests: `bun run test:integration` (Redis + Firestore emulator)
- Performance tests: `bun run test:performance` (benchmarks)
CI/CD Enhancements
- Added Firestore emulator to CI pipeline
- Redis service containers for integration testing
- Separate workflows for unit and integration tests
📦 Installation & Migration
New Dependencies
```bash
# Firestore backend
npm install syncguard @google-cloud/firestore

# Redis backend
npm install syncguard ioredis
```
Migration Guide
For Auto-Managed Locks:
```ts
// Still works - minimal changes required
import { createLock } from "syncguard/redis";

const lock = createLock(redis);
await lock(workFn, { key: "resource:123" });
```
For Manual Lock Control:
```ts
// Before
const result = await lock.acquire({ key: "batch:daily" });
if (result.success) {
  try {
    await work();
  } finally {
    await lock.release(result.lockId);
  }
}

// After
const backend = createRedisBackend(redis);
const result = await backend.acquire({ key: "batch:daily" });
if (result.ok) {
  try {
    const { lockId, fence } = result; // Now with fencing!
    await work(fence);
  } finally {
    await backend.release({ lockId });
  }
}
```
Error Handling:
```ts
// result.ok replaced result.success
// result.reason provides structured error information
if (!result.ok) {
  console.log(`Lock failed: ${result.reason}`);
}
```
⚠️ Required Schema Updates
Firestore
Add new collection for fence counters (default: fence_counters):
```ts
const lock = createLock(db, {
  collection: "locks",
  fenceCollection: "fence_counters" // New!
});
```
Redis
Fence counters use existing key-value store with configurable prefix:
```ts
const lock = createLock(redis, {
  keyPrefix: "myapp" // Default: "syncguard"
});
```
🔍 What's Next
This release establishes a solid foundation for v1.0. Future work will focus on:
- Additional backend implementations (PostgreSQL, DynamoDB)
- Performance optimizations
- Enhanced observability features
- Production deployment guides
📊 Statistics
- 64 files changed
- 9,097 additions, 1,735 deletions
- 9 new ADRs documenting architectural decisions
- 3 new test suites (unit, integration, performance)
- 1,293 lines of new interface documentation
🙏 Acknowledgments
This release represents a complete rethink of the distributed locking API, prioritizing:
- Type safety and compile-time guarantees
- Predictable cross-backend behavior
- Production-grade fencing token support
- Clear architectural documentation
Special thanks to early adopters who provided feedback that shaped this redesign.