@ruvector/edge-net

Collective AI Computing Network - Share, Contribute, Compute Together

A distributed computing platform that enables collective resource sharing for AI workloads. Contributors share idle compute resources, earning participation units (rUv) that can be used to access the network's collective AI computing power.

┌─────────────────────────────────────────────────────────────────────────────┐
│              EDGE-NET: COLLECTIVE AI COMPUTING NETWORK                      │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   ┌─────────────┐       ┌─────────────┐       ┌─────────────┐              │
│   │  Your       │       │  Collective │       │  AI Tasks   │              │
│   │  Browser    │◄─────►│  Network    │◄─────►│  Completed  │              │
│   │  (Idle CPU) │  P2P  │  (1000s)    │       │  for You    │              │
│   └─────────────┘       └─────────────┘       └─────────────┘              │
│         │                     │                     │                       │
│         ▼                     ▼                     ▼                       │
│   ┌─────────────┐       ┌─────────────┐       ┌─────────────┐              │
│   │  Contribute │       │  Earn rUv   │       │  Use rUv    │              │
│   │  Compute    │  ───► │  Units      │  ───► │  for AI     │              │
│   │  When Idle  │       │  (Credits)  │       │  Workloads  │              │
│   └─────────────┘       └─────────────┘       └─────────────┘              │
│                                                                             │
│   Vector Search │ Embeddings │ Semantic Match │ Encryption │ Compression   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘


What is Edge-Net?

Edge-net creates a collective computing network where participants share idle browser resources to power distributed AI workloads. Think of it as a cooperative where:

  1. You Contribute - Share unused CPU cycles when browsing
  2. You Earn - Accumulate rUv (Resource Utility Vouchers) based on contribution
  3. You Use - Spend rUv to run AI tasks across the collective network
  4. Network Grows - More participants = more collective computing power
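
The cycle above can be sketched as a simple credit ledger. The `RuvLedger` class below is illustrative only (it is not part of the `@ruvector/edge-net` API); it just shows how earning, the phase multiplier, and spending interact:

```javascript
// Hypothetical sketch of the contribute -> earn -> spend cycle.
// RuvLedger is illustrative only; it is not part of @ruvector/edge-net.
class RuvLedger {
  constructor(multiplier = 1) {
    this.earned = 0;
    this.spent = 0;
    this.multiplier = multiplier; // e.g. 10x during the Genesis phase
  }
  // Credit rUv for a completed task, scaled by the phase multiplier
  credit(baseUnits) {
    this.earned += baseUnits * this.multiplier;
  }
  // Spend rUv on an AI task; rejects overdrafts
  debit(units) {
    if (units > this.balance()) throw new Error('insufficient rUv');
    this.spent += units;
  }
  balance() {
    return this.earned - this.spent;
  }
}

// An early adopter completes two tasks, then runs a vector search
const ledger = new RuvLedger(10);
ledger.credit(1);  // task 1: 1 base unit x 10 = 10 rUv
ledger.credit(1);  // task 2
ledger.debit(5);   // spend 5 rUv on a search
console.log(ledger.balance());  // → 15
```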

Why Collective AI Computing?

| Traditional AI Computing | Collective Edge-Net |
|---|---|
| Expensive GPU servers | Free idle browser CPUs |
| Centralized data centers | Distributed global network |
| Pay-per-use pricing | Contribution-based access |
| Single point of failure | Resilient P2P mesh |
| Limited by your hardware | Scale with the collective |

Core Principles

| Principle | Description |
|---|---|
| Collectibility | Resources are pooled and shared fairly |
| Contribution | Earn by giving, spend by using |
| Self-Sustaining | Network operates without central control |
| Privacy-First | Pi-Key cryptographic identity system |
| Adaptive | Q-learning security protects the collective |

Key Features

Collective Resource Sharing

| Feature | Benefit |
|---|---|
| Idle CPU Utilization | Use resources that would otherwise be wasted |
| Browser-Based | No installation, runs in any modern browser |
| Adjustable Contribution | Control how much you share (10-50% CPU) |
| Battery Aware | Automatically reduces on battery power |
| Fair Distribution | Work routed based on capability matching |

AI Computing Capabilities

Edge-net provides a complete AI stack that runs entirely in your browser. Each component is designed to be lightweight, fast, and work without a central server.

┌─────────────────────────────────────────────────────────────────────────────┐
│                        AI INTELLIGENCE STACK                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                    MicroLoRA Adapter Pool (from ruvLLM)              │   │
│  │  • LRU-managed pool (16 slots) • Rank 1-16 adaptation                │   │
│  │  • <50µs rank-1 forward        • 2,236+ ops/sec with batch 32        │   │
│  │  • 4-bit/8-bit quantization    • P2P shareable adapters              │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                    SONA - Self-Optimizing Neural Architecture         │   │
│  │  • Instant Loop: Per-request MicroLoRA adaptation                    │   │
│  │  • Background Loop: Hourly K-means consolidation                     │   │
│  │  • Deep Loop: Weekly EWC++ consolidation (catastrophic forgetting)   │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│  ┌──────────────────────┐  ┌──────────────────────┐  ┌─────────────────┐  │
│  │   HNSW Vector Index  │  │  Federated Learning  │  │ ReasoningBank   │  │
│  │   • 150x faster      │  │  • TopK Sparsify 90% │  │ • Trajectories  │  │
│  │   • O(log N) search  │  │  • Byzantine tolerant│  │ • Pattern learn │  │
│  │   • Incremental P2P  │  │  • Diff privacy      │  │ • 87x energy    │  │
│  └──────────────────────┘  └──────────────────────┘  └─────────────────┘  │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Core AI Tasks

| Task Type | Use Case | How It Works |
|---|---|---|
| Vector Search | Find similar items | HNSW index with 150x speedup |
| Embeddings | Text understanding | Generate semantic vectors |
| Semantic Match | Intent detection | Classify meaning |
| LoRA Inference | Task adaptation | MicroLoRA <100µs forward |
| Pattern Learning | Self-optimization | ReasoningBank trajectories |

MicroLoRA Adapter System

What it does: Lets the network specialize for different tasks without retraining the whole model. Think of it like having 16 expert "hats" the AI can quickly swap between - one for searching, one for encryption, one for routing, etc.

Ported from ruvLLM with enhancements for distributed compute:

| Feature | Specification | Performance |
|---|---|---|
| Rank Support | 1-16 | Rank-1: <50µs, Rank-2: <100µs |
| Pool Size | 16 concurrent adapters | LRU eviction policy |
| Quantization | 4-bit, 8-bit | 75% memory reduction |
| Batch Size | 32 (optimal) | 2,236+ ops/sec |
| Task Types | VectorSearch, Embedding, Inference, Crypto, Routing | Auto-routing |

Why it matters: Traditional AI models are "one size fits all." MicroLoRA lets each node become a specialist for specific tasks in under 100 microseconds - faster than a blink.
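
The adapter math is small enough to sketch. Assuming the standard LoRA formulation y = base(x) + (α/r)·B(A·x), a rank-1 forward pass reduces to one dot product and one scaled vector add (illustrative only, not the edge-net implementation):

```javascript
// Sketch of a rank-1 LoRA forward pass: y = base(x) + (alpha/r) * B(A x).
// Uses an identity base for simplicity; illustrative, not edge-net's code.
function loraForwardRank1(x, a, b, alpha = 1) {
  // a: dim -> 1 down-projection, b: 1 -> dim up-projection
  let h = 0;
  for (let i = 0; i < x.length; i++) h += a[i] * x[i]; // A·x is a scalar at rank 1
  const scale = alpha;                                 // alpha / r with r = 1
  return x.map((xi, i) => xi + scale * b[i] * h);      // base (identity) + B·h
}

const x = [1, 2, 3];
const a = [0.5, 0, 0];  // adapter attends to the first feature
const b = [0, 0, 1];    // and writes the adaptation into the last feature
console.log(loraForwardRank1(x, a, b));  // → [1, 2, 3.5]
```

Because the hot path is a single dot product per token, swapping which adapter (a, b) pair is active is what makes sub-100µs task specialization plausible.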


SONA: Self-Optimizing Neural Architecture

What it does: The network teaches itself to get better over time using three learning speeds - instant reactions, hourly improvements, and long-term memory. Like how your brain handles reflexes, daily learning, and permanent memories differently.

Three-temporal-loop continuous learning system:

| Loop | Interval | Mechanism | Purpose |
|---|---|---|---|
| Instant | Per-request | MicroLoRA rank-2 | Immediate adaptation |
| Background | Hourly | K-means clustering | Pattern consolidation |
| Deep | Weekly | EWC++ (λ=2000) | Prevent catastrophic forgetting |

Why it matters: Most AI systems forget old knowledge when learning new things ("catastrophic forgetting"). SONA's three-loop design lets the network learn continuously without losing what it already knows.
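
The Deep loop's EWC regularizer can be sketched directly from the table: each weight pays a quadratic penalty for drifting from its previously learned value, scaled by that weight's Fisher importance and λ=2000. The code below is an illustrative sketch of that penalty term, not edge-net's implementation:

```javascript
// Sketch of the EWC penalty: (lambda/2) * F_i * (w_i - w*_i)^2 per weight,
// where F_i is the Fisher importance of weight i for earlier tasks.
// lambda = 2000 comes from the table above; the rest is illustrative.
function ewcPenalty(weights, oldWeights, fisher, lambda = 2000) {
  let penalty = 0;
  for (let i = 0; i < weights.length; i++) {
    const d = weights[i] - oldWeights[i];
    penalty += (lambda / 2) * fisher[i] * d * d;
  }
  return penalty;
}

// Moving an important weight (F = 1.0) is heavily penalized;
// moving an unimportant one (F = 0.001) is nearly free.
const oldW = [1.0, 1.0];
console.log(ewcPenalty([1.1, 1.0], oldW, [1.0, 0.001]));  // ≈ 10
console.log(ewcPenalty([1.0, 1.1], oldW, [1.0, 0.001]));  // ≈ 0.01
```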


HNSW Vector Index

What it does: Finds similar items incredibly fast by organizing data like a multi-level highway system. Instead of checking every item (like walking door-to-door), it takes smart shortcuts to find what you need 150x faster.

| Parameter | Default | Description |
|---|---|---|
| M | 32 | Max connections per node |
| M_max_0 | 64 | Max connections at layer 0 |
| ef_construction | 200 | Build-time beam width |
| ef_search | 64 | Search-time beam width |
| Performance | 150x | Speedup vs linear scan |

Why it matters: When searching millions of vectors, naive search takes seconds. HNSW takes milliseconds - essential for real-time AI responses.
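
The idea behind those shortcuts can be sketched as greedy descent on a neighbor graph. Real HNSW runs this across multiple layers with an ef-wide beam; this single-layer, beam-width-1 version is illustrative only:

```javascript
// Sketch of the greedy descent HNSW performs on each layer: from an entry
// point, repeatedly hop to the neighbor closest to the query until no
// neighbor improves. Illustrative only (no layers, no beam).
function greedySearch(graph, vectors, query, entry) {
  const dist = (a, b) => a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);
  let current = entry;
  let best = dist(vectors[current], query);
  for (;;) {
    let improved = false;
    for (const n of graph[current]) {       // scan neighbors only, never the whole set
      const d = dist(vectors[n], query);
      if (d < best) { best = d; current = n; improved = true; }
    }
    if (!improved) return current;          // local minimum = approximate nearest
  }
}

// Tiny 4-node graph laid out in a line: 0 - 1 - 2 - 3
const vectors = [[0], [1], [2], [3]];
const graph = { 0: [1], 1: [0, 2], 2: [1, 3], 3: [2] };
console.log(greedySearch(graph, vectors, [2.9], 0));  // → 3
```

Each hop roughly halves the remaining distance in a well-built graph, which is where the O(log N) search cost comes from.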


Federated Learning

What it does: Nodes teach each other without sharing their private data. Each node trains on its own data, then shares only the "lessons learned" (gradients) - like students sharing study notes instead of copying each other's homework.

P2P gradient gossip without central coordinator:

| Feature | Mechanism | Benefit |
|---|---|---|
| TopK Sparsification | 90% compression | Only share the most important updates |
| Rep-Weighted FedAvg | Reputation scoring | Trusted nodes have more influence |
| Byzantine Tolerance | Outlier detection, clipping | Ignore malicious or broken nodes |
| Differential Privacy | Noise injection | Mathematically guaranteed privacy |
| Gossip Protocol | Eventually consistent | Works even if some nodes go offline |

Why it matters: Traditional AI training requires sending all your data to a central server. Federated learning keeps your data local while still benefiting from collective intelligence.
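
TopK sparsification, the step behind the 90% compression figure, is easy to sketch: keep only the k largest-magnitude gradient entries and gossip those as (index, value) pairs. Illustrative only; the real protocol also adds error feedback and differential-privacy noise:

```javascript
// Sketch of TopK gradient sparsification: keep the k largest-magnitude
// entries (about 10% for 90% compression) as (index, value) pairs.
function topKSparsify(gradient, ratio = 0.1) {
  const k = Math.max(1, Math.round(gradient.length * ratio));
  return gradient
    .map((value, index) => ({ index, value }))
    .sort((a, b) => Math.abs(b.value) - Math.abs(a.value))
    .slice(0, k);  // only these pairs cross the network
}

const grad = [0.01, -0.9, 0.02, 0.5, -0.03, 0.04, 0.001, -0.6, 0.02, 0.03];
console.log(topKSparsify(grad, 0.2));
// keeps the two largest-magnitude updates: indices 1 and 7
```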


ReasoningBank & Learning Intelligence

What it does: The network's "memory system" that remembers what worked and what didn't. Like keeping a journal of successful strategies that any node can learn from.

| Component | What It Does | Why It's Fast |
|---|---|---|
| ReasoningBank | Stores successful task patterns | Semantic search for quick recall |
| Pattern Extractor | Groups similar experiences together | K-means finds common patterns |
| Multi-Head Attention | Decides which node handles each task | Parallel evaluation of options |
| Spike-Driven Attention | Ultra-low-power decision making | 87x more energy efficient |

Why it matters: Without memory, the network would repeat the same mistakes. ReasoningBank lets nodes learn from each other's successes and failures across the entire collective.

Pi-Key Identity System

Ultra-compact cryptographic identity using mathematical constants:

| Key Type | Size | Purpose |
|---|---|---|
| π (Pi-Key) | 40 bytes | Your permanent identity |
| e (Session) | 34 bytes | Temporary encrypted sessions |
| φ (Genesis) | 21 bytes | Network origin markers |

Self-Optimizing Network

  • Automatic Task Routing - Work goes to best-suited nodes
  • Topology Optimization - Network self-organizes for efficiency
  • Q-Learning Security - Learns to defend against threats
  • Economic Balance - Self-sustaining resource economy

Quick Start

1. Add to Your Website

<script type="module">
  import init, { EdgeNetNode, EdgeNetConfig } from '@ruvector/edge-net';

  async function joinCollective() {
    await init();

    // Join the collective with your site ID
    const node = new EdgeNetConfig('my-website')
      .cpuLimit(0.3)          // Contribute 30% CPU when idle
      .memoryLimit(256 * 1024 * 1024)  // 256MB max
      .respectBattery(true)   // Reduce on battery
      .build();

    // Start contributing to the collective
    node.start();

    // Monitor your participation
    setInterval(() => {
      console.log(`Contributed: ${node.ruvBalance()} rUv`);
      console.log(`Tasks completed: ${node.getStats().tasks_completed}`);
    }, 10000);
  }

  joinCollective();
</script>

2. Use the Collective's AI Power

// Submit an AI task to the collective
const result = await node.submitTask('vector_search', {
  query: embeddings,
  k: 10,
  index: 'shared-knowledge-base'
}, 5);  // Spend up to 5 rUv

console.log('Similar items:', result);

3. Monitor Your Contribution

// Check your standing in the collective
const stats = node.getStats();
console.log(`
  rUv Earned: ${stats.ruv_earned}
  rUv Spent: ${stats.ruv_spent}
  Net Balance: ${stats.ruv_earned - stats.ruv_spent}
  Tasks Completed: ${stats.tasks_completed}
  Reputation: ${(stats.reputation * 100).toFixed(1)}%
`);

How It Works

The Contribution Cycle

┌─────────────────────────────────────────────────────────────────────────────┐
│                        CONTRIBUTION CYCLE                                    │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   1. CONTRIBUTE          2. EARN              3. USE                        │
│   ┌─────────────┐       ┌─────────────┐       ┌─────────────┐              │
│   │   Browser   │       │    rUv      │       │  AI Tasks   │              │
│   │   detects   │  ───► │   credited  │  ───► │  submitted  │              │
│   │   idle time │       │   to you    │       │  to network │              │
│   └─────────────┘       └─────────────┘       └─────────────┘              │
│         │                     │                     │                       │
│         ▼                     ▼                     ▼                       │
│   ┌─────────────┐       ┌─────────────┐       ┌─────────────┐              │
│   │  Process    │       │  10x boost  │       │  Results    │              │
│   │  incoming   │       │  for early  │       │  returned   │              │
│   │  tasks      │       │  adopters   │       │  to you     │              │
│   └─────────────┘       └─────────────┘       └─────────────┘              │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Network Growth Phases

The collective grows through natural phases:

| Phase | Size | Your Benefit |
|---|---|---|
| Genesis | 0-10K nodes | 10x rUv multiplier (early adopter bonus) |
| Growth | 10K-50K | Multiplier decreases, network strengthens |
| Maturation | 50K-100K | Stable economy, high reliability |
| Independence | 100K+ | Self-sustaining, maximum collective power |
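
A hypothetical sketch of how the multiplier might track these phases. Only the Genesis 10x bonus is stated above, so the linear Growth-phase decay below is purely an illustrative assumption, not the network's actual schedule:

```javascript
// Hypothetical phase-based rUv multiplier. The 10x Genesis bonus is from
// the table; the linear decay during Growth is an illustrative assumption.
function phaseMultiplier(nodeCount) {
  if (nodeCount < 10_000) return 10;         // Genesis: early adopter bonus
  if (nodeCount < 50_000) {                  // Growth: assumed decay 10x -> 1x
    const t = (nodeCount - 10_000) / 40_000;
    return 10 - 9 * t;
  }
  return 1;                                  // Maturation and beyond
}

console.log(phaseMultiplier(5_000));    // → 10
console.log(phaseMultiplier(30_000));   // → 5.5
console.log(phaseMultiplier(120_000));  // → 1
```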

Fair Resource Allocation

// The network automatically optimizes task distribution
const health = JSON.parse(node.getEconomicHealth());

console.log(`
  Resource Velocity: ${health.velocity}      // How fast resources circulate
  Utilization: ${health.utilization}         // Network capacity used
  Growth Rate: ${health.growth}              // Network expansion
  Stability: ${health.stability}             // Economic equilibrium
`);

AI Computing Tasks

Vector Search (Distributed Similarity)

Find similar items across the collective's distributed index:

// Search for similar documents
const similar = await node.submitTask('vector_search', {
  query: [0.1, 0.2, 0.3, ...],  // Your query vector
  k: 10,                         // Top 10 results
  index: 'shared-docs'           // Distributed index name
}, 3);  // Max 3 rUv

// Results from across the network
similar.forEach(item => {
  console.log(`Score: ${item.score}, ID: ${item.id}`);
});

Embedding Generation

Generate semantic embeddings using collective compute:

// Generate embeddings for text
const embeddings = await node.submitTask('embedding', {
  text: 'Your text to embed',
  model: 'sentence-transformer'
}, 2);

console.log('Embedding vector:', embeddings);

Semantic Matching

Classify intent or meaning:

// Classify text intent
const intent = await node.submitTask('semantic_match', {
  text: 'I want to cancel my subscription',
  categories: ['billing', 'support', 'sales', 'general']
}, 1);

console.log('Detected intent:', intent.category);

Secure Operations

Encrypt data across the network:

// Distributed encryption
const encrypted = await node.submitTask('encryption', {
  data: sensitiveData,
  operation: 'encrypt',
  key_id: 'my-shared-key'
}, 2);

Pi-Key Identity System

Your identity in the collective uses mathematical constants for key sizes:

Key Types

┌─────────────────────────────────────────────────────────────────────────────┐
│                        PI-KEY IDENTITY SYSTEM                               │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   π Pi-Key (Identity)     e Euler-Key (Session)    φ Phi-Key (Genesis)     │
│   ┌─────────────────┐     ┌───────────────┐        ┌───────────────┐       │
│   │   314 bits      │     │   271 bits    │        │   161 bits    │       │
│   │   = 40 bytes    │     │   = 34 bytes  │        │   = 21 bytes  │       │
│   │                 │     │               │        │               │       │
│   │   Your unique   │     │   Temporary   │        │   Origin      │       │
│   │   identity      │     │   sessions    │        │   markers     │       │
│   │   (permanent)   │     │   (encrypted) │        │   (network)   │       │
│   └─────────────────┘     └───────────────┘        └───────────────┘       │
│                                                                             │
│   Ed25519 Signing         AES-256-GCM              SHA-256 Derived         │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
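
The bit counts in the diagram (taken from digits of π, e, and φ) round up to the byte sizes as follows:

```javascript
// Key sizes come from mathematical constants: 314 bits (π), 271 bits (e),
// and 161 bits (φ), each rounded up to whole bytes.
const bytesFor = (bits) => Math.ceil(bits / 8);

console.log(bytesFor(314));  // → 40  (π Pi-Key, permanent identity)
console.log(bytesFor(271));  // → 34  (e Euler-Key, sessions)
console.log(bytesFor(161));  // → 21  (φ Phi-Key, genesis markers)
```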

Using Pi-Keys

import { PiKey, SessionKey, GenesisKey } from '@ruvector/edge-net';

// Create your permanent identity
const identity = new PiKey();
console.log(`Your ID: ${identity.getShortId()}`);  // π:a1b2c3d4...

// Sign data
const signature = identity.sign(data);
const valid = identity.verify(data, signature, identity.getPublicKey());

// Create encrypted backup
const backup = identity.createEncryptedBackup('my-password');

// Create temporary session
const session = SessionKey.create(identity, 3600);  // 1 hour
const encrypted = session.encrypt(sensitiveData);
const decrypted = session.decrypt(encrypted);

Security Architecture

Edge-net implements production-grade cryptographic security:

Cryptographic Primitives

| Component | Algorithm | Purpose |
|---|---|---|
| Key Derivation | Argon2id (64MB, 3 iterations) | Memory-hard password hashing |
| Signing | Ed25519 | Digital signatures (128-bit security) |
| Encryption | AES-256-GCM | Authenticated encryption |
| Hashing | SHA-256 | Content hashing and verification |

Identity Protection

// Password-protected key export with Argon2id + AES-256-GCM
let encrypted = identity.export_secret_key("strong_password")?;

// Secure memory cleanup (zeroize)
// All sensitive key material is automatically zeroed after use

Authority Verification

All resolution events require cryptographic proof:

// Ed25519 signature verification for authority decisions
let signature = ScopedAuthority::sign_resolution(&resolution, &context, &signing_key);
// Signature verified against registered authority public keys

Attack Resistance

The RAC (RuVector Adversarial Coherence) protocol defends against:

| Attack | Defense |
|---|---|
| Sybil | Stake-weighted voting, witness path diversity |
| Eclipse | Context isolation, Merkle divergence detection |
| Byzantine | 1/3 threshold, escalation tracking |
| Replay | Timestamp validation, duplicate detection |
| Double-spend | Conflict detection, quarantine system |

Self-Optimization

The network continuously improves itself:

Automatic Task Routing

// Get optimal peers for your tasks
const peers = node.getOptimalPeers(5);

// Network learns from every interaction
node.recordTaskRouting('vector_search', 'peer-123', 45, true);

Fitness-Based Evolution

// High-performing nodes can replicate their config
if (node.shouldReplicate()) {
  const optimalConfig = node.getRecommendedConfig();
  // New nodes inherit successful configurations
}

// Track your contribution
const fitness = node.getNetworkFitness();  // 0.0 - 1.0

Q-Learning Security

The collective learns to defend itself:

// Run security audit
const audit = JSON.parse(node.runSecurityAudit());
console.log(`Security Score: ${audit.security_score}/10`);

// Defends against:
// - DDoS attacks
// - Sybil attacks
// - Byzantine behavior
// - Eclipse attacks
// - Replay attacks

Tutorials

Tutorial 1: Join the Collective

import init, { EdgeNetConfig } from '@ruvector/edge-net';

async function joinCollective() {
  await init();

  // Configure your contribution
  const node = new EdgeNetConfig('my-site')
    .cpuLimit(0.25)           // 25% CPU when idle
    .memoryLimit(128 * 1024 * 1024)  // 128MB
    .minIdleTime(5000)        // Wait 5s of idle
    .respectBattery(true)     // Reduce on battery
    .build();

  // Join the network
  node.start();

  // Check your status
  console.log('Joined collective!');
  console.log(`Node ID: ${node.nodeId()}`);
  console.log(`Multiplier: ${node.getMultiplier()}x`);

  return node;
}

Tutorial 2: Contribute and Earn

async function contributeAndEarn(node) {
  // Process tasks from the collective
  let tasksCompleted = 0;

  while (true) {
    // Check if we should work
    if (node.isIdle()) {
      // Process a task from the network
      const processed = await node.processNextTask();

      if (processed) {
        tasksCompleted++;
        const stats = node.getStats();
        console.log(`Completed ${tasksCompleted} tasks, earned ${stats.ruv_earned} rUv`);
      }
    }

    await new Promise(r => setTimeout(r, 1000));
  }
}

Tutorial 3: Use Collective AI Power

async function useCollectiveAI(node) {
  // Check your balance
  const balance = node.ruvBalance();
  console.log(`Available: ${balance} rUv`);

  // Submit AI tasks
  const tasks = [
    { type: 'vector_search', cost: 3 },
    { type: 'embedding', cost: 2 },
    { type: 'semantic_match', cost: 1 }
  ];

  for (const task of tasks) {
    if (node.ruvBalance() >= task.cost) {  // re-check: the balance drops after each spend
      console.log(`Running ${task.type}...`);
      const result = await node.submitTask(
        task.type,
        { data: 'sample' },
        task.cost
      );
      console.log(`Result: ${JSON.stringify(result)}`);
    }
  }
}

Tutorial 4: Monitor Network Health

async function monitorHealth(node) {
  setInterval(() => {
    // Your contribution
    const stats = node.getStats();
    console.log(`
      === Your Contribution ===
      Earned: ${stats.ruv_earned} rUv
      Spent: ${stats.ruv_spent} rUv
      Tasks: ${stats.tasks_completed}
      Reputation: ${(stats.reputation * 100).toFixed(1)}%
    `);

    // Network health
    const health = JSON.parse(node.getEconomicHealth());
    console.log(`
      === Network Health ===
      Velocity: ${health.velocity.toFixed(2)}
      Utilization: ${(health.utilization * 100).toFixed(1)}%
      Stability: ${health.stability.toFixed(2)}
    `);

    // Check sustainability
    const sustainable = node.isSelfSustaining(10000, 50000);
    console.log(`Self-sustaining: ${sustainable}`);

  }, 30000);
}

API Reference

Core Methods

| Method | Description | Returns |
|---|---|---|
| `new EdgeNetNode(siteId)` | Join the collective | `EdgeNetNode` |
| `start()` | Begin contributing | `void` |
| `pause()` / `resume()` | Control contribution | `void` |
| `ruvBalance()` | Check your credits | `u64` |
| `submitTask(type, payload, maxCost)` | Use collective compute | `Promise<Result>` |
| `processNextTask()` | Process work for others | `Promise<bool>` |

Identity Methods

| Method | Description | Returns |
|---|---|---|
| `new PiKey()` | Generate identity | `PiKey` |
| `getIdentity()` | Get 40-byte identity | `Vec<u8>` |
| `sign(data)` | Sign data | `Vec<u8>` |
| `verify(data, sig, pubkey)` | Verify signature | `bool` |
| `createEncryptedBackup(password)` | Backup identity | `Vec<u8>` |

Network Methods

| Method | Description | Returns |
|---|---|---|
| `getNetworkFitness()` | Your contribution score | `f32` |
| `getOptimalPeers(count)` | Best nodes for tasks | `Vec<String>` |
| `getEconomicHealth()` | Network health metrics | `String` (JSON) |
| `isSelfSustaining(nodes, tasks)` | Check sustainability | `bool` |

Development

Build

cd examples/edge-net
wasm-pack build --target web --out-dir pkg

Test

cargo test

Run Simulation

cd sim
npm install
npm run simulate

Exotic AI Capabilities

Edge-net can be enhanced with exotic AI WASM capabilities for advanced P2P coordination, self-learning, and distributed reasoning. Enable these features by building with the appropriate feature flags.

Available Feature Flags

| Feature | Description | Dependencies |
|---|---|---|
| `exotic` | Time Crystal, NAO, Morphogenetic Networks | `ruvector-exotic-wasm` |
| `learning-enhanced` | MicroLoRA, BTSP, HDC, WTA, Global Workspace | `ruvector-learning-wasm`, `ruvector-nervous-system-wasm` |
| `economy-enhanced` | Enhanced CRDT credits | `ruvector-economy-wasm` |
| `exotic-full` | All exotic capabilities | All above |

Time Crystal (P2P Synchronization)

Robust distributed coordination using discrete time crystal dynamics:

// Enable time crystal with 10 oscillators
node.enableTimeCrystal(10);

// Check synchronization level (0.0 - 1.0)
const sync = node.getTimeCrystalSync();
console.log(`P2P sync: ${(sync * 100).toFixed(1)}%`);

// Check if crystal is stable
if (node.isTimeCrystalStable()) {
  console.log('Network is synchronized!');
}

NAO (Neural Autonomous Organization)

Decentralized governance with stake-weighted quadratic voting:

// Enable NAO with 70% quorum requirement
node.enableNAO(0.7);

// Add peer nodes as members
node.addNAOMember('peer-123', 100);
node.addNAOMember('peer-456', 50);

// Propose and vote on network actions
const propId = node.proposeNAOAction('Increase task capacity');
node.voteNAOProposal(propId, 0.9);  // Vote with 90% weight

// Execute if quorum reached
if (node.executeNAOProposal(propId)) {
  console.log('Proposal executed!');
}

MicroLoRA (Per-Node Self-Learning)

Ultra-fast LoRA adaptation with <100µs latency:

// Enable MicroLoRA with rank-2 adaptation
node.enableMicroLoRA(2);

// Adapt weights based on task feedback
const gradient = new Float32Array(128);
node.adaptMicroLoRA('vector_search', gradient);

// Apply adaptation to inputs
const input = new Float32Array(128);
const adapted = node.applyMicroLoRA('vector_search', input);

HDC (Hyperdimensional Computing)

10,000-bit binary hypervectors for distributed reasoning:

// Enable HDC memory
node.enableHDC();

// Store patterns for semantic operations
node.storeHDCPattern('concept_a');
node.storeHDCPattern('concept_b');

WTA (Winner-Take-All)

Instant decisions with <1us latency:

// Enable WTA with 1000 neurons
node.enableWTA(1000);

BTSP (One-Shot Learning)

Immediate pattern association without iterative training:

// Enable BTSP with 128-dim inputs
node.enableBTSP(128);

// One-shot associate a pattern
const pattern = new Float32Array(128);
node.oneShotAssociate(pattern, 1.0);

Morphogenetic Network

Self-organizing network topology through cellular differentiation:

// Enable 100x100 morphogenetic grid
node.enableMorphogenetic(100);

// Network grows automatically
console.log(`Cells: ${node.getMorphogeneticCellCount()}`);

Stepping All Capabilities

In your main loop, step all capabilities forward:

function gameLoop(dt) {
  // Step exotic capabilities
  node.stepCapabilities(dt);

  // Process tasks
  node.processNextTask();
}

setInterval(() => gameLoop(0.016), 16);  // 60 FPS

Building with Exotic Features

# Build with exotic capabilities
wasm-pack build --target web --release --out-dir pkg -- --features exotic

# Build with learning-enhanced capabilities
wasm-pack build --target web --release --out-dir pkg -- --features learning-enhanced

# Build with all exotic capabilities
wasm-pack build --target web --release --out-dir pkg -- --features exotic-full

Core Architecture & Capabilities

Edge-net is a production-grade distributed AI computing platform with ~36,500 lines of Rust code and 177 passing tests.

Unified Attention Architecture

Four attention mechanisms that answer critical questions for distributed AI:

┌─────────────────────────────────────────────────────────────────────────────┐
│                    UNIFIED ATTENTION ARCHITECTURE                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐            │
│  │ Neural Attention│  │  DAG Attention  │  │ Graph Attention │            │
│  │                 │  │                 │  │                 │            │
│  │ "What words     │  │ "What steps     │  │ "What relations │            │
│  │  matter?"       │  │  matter?"       │  │  matter?"       │            │
│  │                 │  │                 │  │                 │            │
│  │ • Multi-head    │  │ • Topo-sort     │  │ • GAT-style     │            │
│  │ • Q/K/V project │  │ • Critical path │  │ • Edge features │            │
│  │ • Softmax focus │  │ • Parallelism   │  │ • Message pass  │            │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘            │
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────┐          │
│  │                   State Space Model (SSM)                    │          │
│  │                                                              │          │
│  │     "What history still matters?" - O(n) Mamba-style         │          │
│  │                                                              │          │
│  │  • Selective gating: What to remember vs forget              │          │
│  │  • O(n) complexity: Efficient long-sequence processing       │          │
│  │  • Temporal dynamics: dt, A, B, C, D state transitions       │          │
│  └─────────────────────────────────────────────────────────────┘          │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

| Attention Type | Question Answered | Use Case |
|---|---|---|
| Neural | What words matter? | Semantic focus, importance weighting |
| DAG | What steps matter? | Task scheduling, critical path analysis |
| Graph | What relationships matter? | Network topology, peer connections |
| State Space | What history matters? | Long-term memory, temporal patterns |
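
The Neural row's core operation, scaled dot-product attention, can be sketched for a single head: softmax(q·K / √d) produces the focus weights, which then blend the value vectors. Illustrative only, not the edge-net implementation:

```javascript
// Single-head scaled dot-product attention: softmax(q.K / sqrt(d)) over values.
function attend(q, keys, values) {
  const d = q.length;
  const scores = keys.map(k => q.reduce((s, qi, i) => s + qi * k[i], 0) / Math.sqrt(d));
  const max = Math.max(...scores);            // subtract max for numerical stability
  const exps = scores.map(s => Math.exp(s - max));
  const z = exps.reduce((a, b) => a + b, 0);
  const weights = exps.map(e => e / z);       // softmax focus: "what matters?"
  return values[0].map((_, j) =>
    weights.reduce((s, w, i) => s + w * values[i][j], 0));  // weighted value blend
}

const q = [1, 0];
const keys = [[10, 0], [0, 10]];  // key 0 matches the query strongly
const values = [[1, 0], [0, 1]];
const out = attend(q, keys, values);
console.log(out[0] > 0.99);  // → true: nearly all attention lands on value 0
```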

AI Intelligence Layer

┌─────────────────────────────────────────────────────────────────────────────┐
│                       AI Intelligence Layer                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐             │
│  │  HNSW Index     │  │  AdapterPool    │  │  Federated      │             │
│  │  (memory.rs)    │  │   (lora.rs)     │  │ (federated.rs)  │             │
│  │                 │  │                 │  │                 │             │
│  │ • 150x speedup  │  │ • LRU eviction  │  │ • TopK Sparse   │             │
│  │ • O(log N)      │  │ • 16 slots      │  │ • Byzantine tol │             │
│  │ • Cosine dist   │  │ • Task routing  │  │ • Rep-weighted  │             │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘             │
│                                                                             │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐             │
│  │  DAG Attention  │  │  LoraAdapter    │  │ GradientGossip  │             │
│  │                 │  │                 │  │                 │             │
│  │ • Critical path │  │ • Rank 1-16     │  │ • Error feedback│             │
│  │ • Topo sort     │  │ • SIMD forward  │  │ • Diff privacy  │             │
│  │ • Parallelism   │  │ • 4/8-bit quant │  │ • Gossipsub     │             │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘             │
└─────────────────────────────────────────────────────────────────────────────┘

Swarm Intelligence

| Component | Capability | Description |
|---|---|---|
| Entropy Consensus | Belief convergence | Shannon entropy-based decision making |
| Collective Memory | Pattern sharing | Hippocampal-inspired consolidation and replay |
| Stigmergy | Pheromone trails | Ant colony optimization for task routing |
| Consensus Coordinator | Multi-topic | Parallel consensus on multiple decisions |
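
The Entropy Consensus row can be sketched as follows: treat each node's belief as a probability distribution over options and declare convergence when the Shannon entropy of the pooled belief falls below a threshold. Illustrative only; the pooling rule and threshold here are assumptions:

```javascript
// Sketch of Shannon entropy-based consensus. Convergence = the averaged
// belief distribution has low entropy (the collective has "made up its mind").
function shannonEntropy(p) {
  return -p.reduce((s, pi) => (pi > 0 ? s + pi * Math.log2(pi) : s), 0);
}

function hasConverged(beliefs, threshold = 0.5) {
  // Average the nodes' belief distributions, then measure entropy in bits
  const n = beliefs.length;
  const pooled = beliefs[0].map((_, j) =>
    beliefs.reduce((s, b) => s + b[j], 0) / n);
  return shannonEntropy(pooled) < threshold;
}

const undecided = [[0.5, 0.5], [0.6, 0.4]];    // pooled entropy near 1 bit
const decided = [[0.97, 0.03], [0.99, 0.01]];  // pooled entropy near 0
console.log(hasConverged(undecided));  // → false
console.log(hasConverged(decided));    // → true
```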

Compute Acceleration

┌─────────────────────────────────────────────────────────────────────────────┐
│                      COMPUTE ACCELERATION STACK                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                      WebGPU Compute Backend                          │   │
│  │                                                                      │   │
│  │  • wgpu-based GPU acceleration (10+ TFLOPS target)                   │   │
│  │  • Matrix multiplication pipeline (tiled, cache-friendly)            │   │
│  │  • Attention pipeline (Flash Attention algorithm)                    │   │
│  │  • LoRA forward pipeline (<1ms inference)                            │   │
│  │  • Staging buffer pool (16MB, zero-copy transfers)                   │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                      WebWorker Pool                                  │   │
│  │                                                                      │   │
│  │  +------------------+                                                │   │
│  │  |   Main Thread    |                                                │   │
│  │  |  (Coordinator)   |                                                │   │
│  │  +--------+---------+                                                │   │
│  │           |                                                          │   │
│  │     +-----+-----+-----+-----+                                        │   │
│  │     |     |     |     |     |                                        │   │
│  │  +--v-+ +-v--+ +--v-+ +--v-+ +--v-+                                  │   │
│  │  | W1 | | W2 | | W3 | | W4 | | Wn |  (up to 16 workers)             │   │
│  │  +----+ +----+ +----+ +----+ +----+                                  │   │
│  │     |     |     |     |     |                                        │   │
│  │     +-----+-----+-----+-----+                                        │   │
│  │           |                                                          │   │
│  │     SharedArrayBuffer (when available, zero-copy)                    │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│  ┌────────────────────────┐  ┌────────────────────────┐                   │
│  │   WASM SIMD (simd128)  │  │   WebGL Compute        │                   │
│  │   • f32x4 vectorized   │  │   • Shader fallback    │                   │
│  │   • 4x parallel ops    │  │   • Universal support  │                   │
│  │   • All modern browsers│  │   • Fragment matmul    │                   │
│  └────────────────────────┘  └────────────────────────┘                   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

| Backend | Availability | Performance | Operations |
|---------|--------------|-------------|------------|
| WebGPU | Chrome 113+, Firefox 120+ | 10+ TFLOPS | Matmul, Attention, LoRA |
| WebWorker Pool | All browsers | 4-16x CPU cores | Parallel matmul, dot product |
| WASM SIMD | All modern browsers | 4x vectorized | Cosine distance, softmax |
| WebGL | Universal fallback | Shader compute | Matrix operations |
| CPU | Always available | Loop-unrolled | All operations |

#### WebGPU Pipelines

| Pipeline | Purpose | Performance Target |
|----------|---------|--------------------|
| Matmul | Matrix multiplication (tiled) | 10+ TFLOPS |
| Attention | Flash attention (memory efficient) | 2ms for 4K context |
| LoRA | Low-rank adapter forward pass | <1ms inference |

#### WebWorker Operations

| Operation | Description | Parallelization |
|-----------|-------------|-----------------|
| MatmulPartial | Row-blocked matrix multiply | Rows split across workers |
| DotProductPartial | Partial vector dot products | Segments split across workers |
| VectorOp | Element-wise ops (add, mul, relu, sigmoid) | Ranges split across workers |
| Reduce | Sum, max, min, mean reductions | Hierarchical aggregation |

#### Work Stealing

Workers that finish early can steal tasks from busy workers' queues:

- LIFO for local tasks (cache locality)
- FIFO for stolen tasks (load balancing)

### Economics & Reputation

| Feature | Mechanism | Purpose |
|---------|-----------|---------|
| AMM | Automated Market Maker | Dynamic rUv pricing |
| Reputation | Stake-weighted scoring | Trust computation |
| Slashing | Byzantine penalties | Bad actor deterrence |
| Rewards | Contribution tracking | Fair distribution |

### Network Learning

| Component | Learning Type | Application |
|-----------|---------------|-------------|
| RAC | Adversarial Coherence | Conflict resolution |
| ReasoningBank | Trajectory learning | Strategy optimization |
| Q-Learning | Reinforcement | Security adaptation |
| Federated | Distributed training | Model improvement |

### Self-Learning Hooks & MCP Integration

Edge-net integrates with Claude Code's hooks system for continuous self-learning.

#### Learning Scenarios Module

```rust
use ruvector_edge_net::learning_scenarios::{
    NeuralAttention, DagAttention, GraphAttention, StateSpaceAttention,
    AttentionOrchestrator, ErrorLearningTracker, SequenceTracker,
    get_ruvector_tools, generate_settings_json,
};

// Create a unified attention orchestrator
let orchestrator = AttentionOrchestrator::new(
    NeuralAttention::new(128, 4),        // 128 dims, 4 heads
    DagAttention::new(),
    GraphAttention::new(64, 4),          // 64 dims, 4 heads
    StateSpaceAttention::new(256, 0.95), // 256 dims, 0.95 decay
);

// Get a comprehensive attention analysis
let analysis = orchestrator.analyze(tokens, &dag, &graph, &history);
```

#### Error Pattern Learning

```rust
let mut tracker = ErrorLearningTracker::new();

// Record errors for learning
tracker.record_error(ErrorPattern::TypeMismatch, "expected String", "lib.rs", 42);

// Get AI-suggested fixes
let fixes = tracker.get_suggestions("type mismatch");
// ["Use .to_string()", "Use String::from()", ...]
```

#### MCP Tool Categories

| Category | Tools | Purpose |
|----------|-------|---------|
| VectorDb | `vector_search`, `vector_store`, `vector_query` | Semantic similarity |
| Learning | `learn_pattern`, `train_model`, `get_suggestions` | Pattern recognition |
| Memory | `remember`, `recall`, `forget` | Vector memory |
| Swarm | `spawn_agent`, `coordinate`, `route_task` | Multi-agent coordination |
| Telemetry | `track_event`, `get_stats`, `export_metrics` | Usage analytics |
| AgentRouting | `suggest_agent`, `record_outcome`, `get_routing_table` | Agent selection |

#### RuVector CLI Commands

```bash
# Session management
ruvector hooks session-start    # Start learning session
ruvector hooks session-end      # Save patterns

# Intelligence
ruvector hooks stats            # Show learning stats
ruvector hooks route <task>     # Get agent suggestion
ruvector hooks suggest-context  # Context suggestions

# Memory
ruvector hooks remember <content> -t <type>  # Store memory
ruvector hooks recall <query>                # Semantic search
```

#### Claude Code Hook Events

| Event | Trigger | Action |
|-------|---------|--------|
| PreToolUse | Before Edit/Bash | Agent routing, risk analysis |
| PostToolUse | After Edit/Bash | Q-learning update, pattern recording |
| SessionStart | Conversation begins | Load intelligence |
| Stop | Conversation ends | Save learning data |
| UserPromptSubmit | User message | Context suggestions |
| PreCompact | Before compaction | Preserve context |
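
Claude Code reads hook wiring from a `settings.json` file, where each event maps to command hooks (optionally filtered by a tool matcher). The fragment below is an illustrative example of that shape, pairing events with the CLI commands listed above; `generate_settings_json` presumably emits something similar, but the exact output is an assumption.

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "ruvector hooks session-start" }] }
    ],
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "ruvector hooks suggest-context" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "ruvector hooks session-end" }] }
    ]
  }
}
```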

### Research Foundation

Edge-net is built on research in:

- **Distributed Computing** - P2P resource sharing
- **Collective Intelligence** - Emergent optimization
- **Game Theory** - Incentive-compatible mechanisms
- **Adaptive Security** - Q-learning threat response
- **Time Crystals** - Floquet engineering for coordination
- **Neuromorphic Computing** - BTSP, HDC, WTA mechanisms
- **Decentralized Governance** - Neural Autonomous Organizations

### Disclaimer

Edge-net is a research platform for collective computing. The rUv units are:

- Resource participation metrics, not currency
- Used to balance contribution and consumption
- Not redeemable for money or goods outside the network

### Links

### License

MIT License