@scarmuega scarmuega commented Feb 12, 2026

Summary by CodeRabbit

  • New Features

    • Added AVVM reclamation tracking at Shelley-to-Allegra era transitions
    • Enhanced MIR processing with protocol-aware accumulation behavior
    • Improved reward accounting with separate treasury and reserve allocations
    • Added comprehensive epoch pot validation testing framework
  • Improvements

    • Refined stability window calculations for better accuracy
    • Enhanced pool reward account resolution using scheduled snapshots
    • Expanded test infrastructure with ledger harness and ground-truth comparisons
    • Added seed data and bootstrap configuration options


coderabbitai bot commented Feb 12, 2026

📝 Walkthrough

This PR implements comprehensive MIR (Move Instantaneous Rewards) processing for epoch boundaries, adds AVVM reclamation at the Shelley→Allegra transition, refactors reward accounting to track treasury and reserves allocations separately, introduces a test harness for Cardano ledger simulation, and establishes xtask infrastructure for ground-truth testing and seed management. The rupd-snapshot-dump feature is removed.

Changes

Cohort / File(s) Summary
MIR Delta System
crates/cardano/src/rupd/deltas.rs, crates/cardano/src/roll/accounts.rs, crates/cardano/src/model.rs
New EnqueueMir and DequeueMir delta types for pending MIR state management; replaces AssignMirRewards with enqueue-based approach; adds PendingMirState entity with credential and treasury/reserve tracking.
Reward & Pot Accounting
crates/cardano/src/pots.rs, crates/cardano/src/rewards/mod.rs, crates/cardano/src/ewrap/rewards.rs, crates/cardano/src/ewrap/wrapup.rs
Splits unspendable rewards into unspendable_to_treasury and unspendable_to_reserves; adds treasury_mirs field; implements parallel delegator reward processing; introduces AppliedReward tracking; updates RewardMap with separate treasury/reserves counters and drain behavior.
AVVM & Boundary Initialization
crates/cardano/src/estart/loading.rs, crates/cardano/src/estart/mod.rs, crates/cardano/src/estart/reset.rs
Adds avvm_reclamation computation and field to WorkContext at Shelley→Allegra boundary; extends PotDelta with new MIR and AVVM fields; updates boundary initialization and logging.
Epoch Transition & Rolling Statistics
crates/cardano/src/roll/epochs.rs, crates/cardano/src/roll/mod.rs, crates/cardano/src/roll/accounts.rs, crates/cardano/src/lib.rs, crates/cardano/src/utils.rs
Adds non_overlay_blocks_minted tracking; treasury MIR support in epoch stats; changes stability_window to use randomness_stability_window; threads epoch_start through DeltaBuilder; updates MIR handling for reserves and treasury sources with protocol-version-based overwrite logic.
Work Unit Accessors
crates/cardano/src/ewrap/work_unit.rs, crates/cardano/src/estart/work_unit.rs, crates/cardano/src/rupd/work_unit.rs
Adds public accessor methods (boundary, work, rewards, ended_state) to expose internal work unit state.
Test Harness & Comparison Utilities
crates/testing/src/harness/cardano.rs, crates/testing/src/harness/mod.rs, crates/testing/src/lib.rs, tests/epoch_pots/compare.rs, tests/epoch_pots/main.rs, tests/cardano/harness/compare.rs
Introduces self-contained LedgerHarness for Cardano domain simulation with block processing callbacks; adds comprehensive epoch pot test suite with CSV comparison, fixture extraction, and ground-truth validation for multiple networks.
Xtask Configuration & Ground Truth
xtask/src/config.rs, xtask/src/bootstrap.rs, xtask/src/test_instance/create.rs, xtask/src/ground_truth/generate.rs, xtask/src/ground_truth/delegation.rs, xtask/src/ground_truth/stake.rs, xtask/src/ground_truth/rewards.rs, xtask/src/util.rs
Adds SeedConfig for per-network seed data management; implements pagination for delegation/stake/rewards fetching; introduces subject_epoch and selective dataset flags for ground-truth generation; adds seed copying and directory recursion utilities.
EWRAP & Loading
crates/cardano/src/ewrap/loading.rs, crates/cardano/src/ewrap/mod.rs, crates/cardano/src/ewrap/commit.rs
Adds process_pending_mirs for MIR application to registered accounts; expands BoundaryWork with MIR-related state fields (applied_rewards, effective MIRs, invalid MIRs); implements PendingMirState deletion after rewards processing.
RUPD & Model Updates
crates/cardano/src/rupd/mod.rs, crates/cardano/src/rupd/loading.rs, crates/cardano/src/model.rs
Adds blocks_made_total and performance_epoch_pool_blocks fields; updates exports with DequeueMir/EnqueueMir; removes dump module and feature gating; extends pool reward account resolution with next snapshot fallback; adds is_registered_at query method to AccountState.
Configuration & Minor Updates
Cargo.toml, crates/cardano/Cargo.toml, crates/cardano/src/rewards/formulas.rs, crates/cardano/src/roll/batch.rs, crates/cardano/src/roll/pools.rs, crates/cardano/src/roll/proposals.rs, crates/cardano/src/roll/txs.rs, crates/cardano/src/hacks.rs, src/bin/dolos/data/dump_logs.rs
Adds test target and dev-dependencies (dolos-cardano, bech32, pallas, tracing); removes rupd-snapshot-dump feature; adds pointer mappings and hard fork proposals; updates dump_logs unspendable field name; minor BlockVisitor signature updates with epoch_start parameter.

Sequence Diagram(s)

sequenceDiagram
    actor Test
    participant LedgerHarness
    participant CardanoLogic
    participant Roll/RUPD/EWRAP/ESTART
    participant StateStore
    participant Domain

    Test->>LedgerHarness: run(chunk_size, callback)
    activate LedgerHarness
    LedgerHarness->>Domain: bootstrap
    LedgerHarness->>StateStore: open fjall store
    
    loop for each block chunk
        LedgerHarness->>CardanoLogic: load()/compute()/commit()
        activate CardanoLogic
        
        opt Block processing
            CardanoLogic->>Roll/RUPD/EWRAP/ESTART: process deltas
            activate Roll/RUPD/EWRAP/ESTART
            
            Note over Roll/RUPD/EWRAP/ESTART: Roll: MIR enqueue via EnqueueMir
            Note over Roll/RUPD/EWRAP/ESTART: RUPD: compute rewards with parallel delegators
            Note over Roll/RUPD/EWRAP/ESTART: EWRAP: process_pending_mirs, apply rewards, split unspendable
            Note over Roll/RUPD/EWRAP/ESTART: ESTART: compute_avvm_reclamation at boundary
            
            Roll/RUPD/EWRAP/ESTART->>StateStore: apply deltas and commit
            deactivate Roll/RUPD/EWRAP/ESTART
        end
        
        deactivate CardanoLogic
        LedgerHarness->>Test: callback(domain, work_unit)
    end
    
    deactivate LedgerHarness
sequenceDiagram
    actor Harness as EpochPots Test
    participant LedgerHarness
    participant RUPD as RUPD Phase
    participant EWRAP as EWRAP Phase
    participant ESTART as ESTART Phase
    participant CSV as CSV Dump
    participant Compare as CSV Compare
    
    Harness->>LedgerHarness: run for subject_epoch
    activate LedgerHarness
    
    Note over LedgerHarness: Load seed state and genesis
    
    loop Process blocks
        LedgerHarness->>RUPD: compute rewards
        activate RUPD
        Note over RUPD: Parallel delegator rewards<br/>Track applied_rewards, effective_mirs
        RUPD-->>LedgerHarness: RewardMap with split treasury/reserves
        deactivate RUPD
        
        LedgerHarness->>EWRAP: apply rewards
        activate EWRAP
        Note over EWRAP: process_pending_mirs<br/>Split unspendable_to_treasury/reserves<br/>Track applied_rewards
        EWRAP-->>LedgerHarness: BoundaryWork
        deactivate EWRAP
        
        alt Boundary epoch
            LedgerHarness->>ESTART: compute AVVM & finalize
            activate ESTART
            Note over ESTART: compute_avvm_reclamation<br/>Build final EndStats/PotDelta
            ESTART-->>LedgerHarness: completed epoch
            deactivate ESTART
            
            LedgerHarness->>CSV: dump delegation/stake/rewards/epochs/pparams
        end
    end
    
    deactivate LedgerHarness
    
    Harness->>Compare: compare_csvs_with_ignore
    activate Compare
    Note over Compare: Key-based diff against ground-truth<br/>Report adds/deletes/modifies per category
    Compare-->>Harness: mismatch count
    deactivate Compare
    
    alt Mismatches found
        Harness->>Harness: panic with aggregated failures
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~80 minutes

Poem

🐰 A rabbit hops through epochs grand,
With pots and MIRs, carefully planned,
From AVVM's claim to rewards split clean,
Treasury and reserves—a harmonious scene,
Now test harnesses leap to validate the math,
thump-thump down the verification path!

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.12%, which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check | ❓ Inconclusive | The PR title is vague and does not clearly convey the scope or primary changes. Terms like 'edge-cases' are non-descriptive, and it does not indicate what 'mainnet-specific' changes entail or which historical periods are affected. | Consider a more specific title that describes the key changes, such as 'Add MIR processing, AVVM reclamation, and overlay block tracking' or similar, to help reviewers quickly understand the changeset scope.
✅ Passed checks (1 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
xtask/src/bootstrap.rs (1)

74-79: ⚠️ Potential issue | 🟡 Minor

Existing data protection is weakened when a seed path is configured.

The bail condition now skips when seed_path.is_some(), meaning existing data in data_dir can be silently overwritten by the seed copy (line 123) without --force. Was this intentional?

If the intent is to always allow seed-based initialization to proceed, consider at least logging a warning when overwriting, or checking skip_seed in this guard:

-    if data_dir.exists() && dir_has_entries(&data_dir)? && !args.force && seed_path.is_none() {
+    if data_dir.exists() && dir_has_entries(&data_dir)? && !args.force && (seed_path.is_none() || args.skip_seed) {
crates/cardano/src/estart/reset.rs (1)

226-231: ⚠️ Potential issue | 🟡 Minor

dbg!() macros should not be in production code.

dbg!() writes to stderr and is typically flagged by cargo clippy (clippy::dbg_macro). Replace with structured tracing::error! or similar, especially since this is a consistency-check failure path that you'd want in production logs.

♻️ Suggested diff
     if !pots.is_consistent(epoch.initial_pots.max_supply()) {
-        dbg!(end);
-        dbg!(&epoch.initial_pots);
-        dbg!(&pots);
-        dbg!(delta);
+        tracing::error!(
+            ?end,
+            initial_pots = ?epoch.initial_pots,
+            ?pots,
+            ?delta,
+            "pot consistency check FAILED"
+        );
     }

As per coding guidelines, **/*.rs: "Run cargo clippy --workspace --all-targets --all-features and resolve all clippy warnings before committing changes."

🤖 Fix all issues with AI agents
In `@crates/cardano/src/ewrap/wrapup.rs`:
- Around line 172-173: The expressions use plain addition for effective +
to_treasury which can overflow before the call to
incentives.available_rewards.saturating_sub; change the logic in wrapup.rs to
compute the sum using saturating_add (e.g., let total =
effective.saturating_add(to_treasury)) and then use total for both consumed and
the argument to incentives.available_rewards.saturating_sub to avoid
intermediate overflow.
- Around line 157-164: The current assert! in wrapup.rs that compares effective
and applied_rewards_sum (assert!(effective == applied_rewards_sum, ... using
ctx.ending_state().number, effective, applied_rewards_sum)) will panic in
production; replace it with a non-panicking check: either convert to
debug_assert! for debug-only checks, or emit a tracing::error! with the same
diagnostic string and handle the mismatch as a soft failure (e.g., log the epoch
number and the diff and continue or return a Result::Err from the enclosing
function) so the node does not crash on mainnet edge cases. Ensure the
replacement references the same symbols (effective, applied_rewards_sum,
ctx.ending_state().number) so logs carry the same context.

In `@crates/cardano/src/hacks.rs`:
- Around line 80-83: The match arm that handles unknown pointer tuples (slot,
tx_idx, cert_idx) currently calls warn!(...) and then unconditionally panics via
panic!(), which will crash production; change this so the code logs a
descriptive warning including the tuple and does NOT panic in release builds:
replace the unconditional panic!() with a debug-only assertion (e.g.
debug_assert!(false, "...") or a conditional panic behind
cfg!(debug_assertions)) and return None (or an appropriate error) in the
non-debug path; keep the warn! call but enhance its message to include
slot/tx_idx/cert_idx for easier debugging.

In `@crates/cardano/src/rupd/deltas.rs`:
- Around line 150-182: EnqueueMir::undo currently can't restore a fully
overwritten entity because apply replaces the entity when overwrite is true;
change EnqueueMir to capture the prior state (e.g. add a field like prev:
Option<PendingMirState>) when apply runs with overwrite==true (store the
existing Option<PendingMirState> before overwriting), and then update undo to:
if overwrite==true restore *entity = self.prev.clone(), otherwise keep the
existing saturating_sub logic; reference the EnqueueMir::apply and
EnqueueMir::undo methods, the overwrite flag, PendingMirState, and mirror the
DequeueMir.prev approach for how prev is stored and restored.

In `@crates/testing/src/harness/cardano.rs`:
- Around line 293-308: copy_dir_recursive currently treats symlinks as regular
files because it never branches on is_symlink(); update
copy_dir_recursive to call entry.path().symlink_metadata() (or
DirEntry::symlink_metadata) and check file_type.is_symlink(), then recreate the
symlink at dst instead of copying the target: use std::os::unix::fs::symlink or
the Windows equivalents (std::os::windows::fs::symlink_file / symlink_dir)
depending on target type, and for non-symlink dirs/files keep the existing
recursion and std::fs::copy behavior; ensure errors from symlink creation are
propagated as std::io::Result so callers of copy_dir_recursive see failures.

In `@tests/epoch_pots/compare.rs`:
- Around line 115-120: The printed `matched` value is wrong because total_rows
only comes from file1 while `total` includes DiffByteRecord::Add from file2;
modify the logic to compute `matched` directly by counting DiffByteRecord::Equal
records while iterating diffs (or otherwise track add counts from file2) instead
of using total_rows.saturating_sub(total + ignored); update the code that
computes `matched` (referencing matched, total_rows, total, ignored and
DiffByteRecord::Add/Equal) so the informational output reflects actual equal
rows.
🧹 Nitpick comments (24)
crates/cardano/src/roll/batch.rs (1)

253-255: Excellent documentation clarifying critical ordering semantics.

The added comments clearly document that delta application order is intentional and matches the reference Haskell ledger implementation. This is valuable context for maintainers, especially given the PR's focus on mainnet-specific edge-cases.

As per coding guidelines, please ensure cargo test --workspace --all-features passes to verify that the documented ordering behavior is covered by tests.

xtask/src/ground_truth/stake.rs (1)

54-56: Nit: prefer a safer comparison to avoid the as i64 cast.

count as i64 is safe here since count <= PAGE_SIZE, but using a uniform type avoids a potential clippy::cast_possible_wrap pedantic lint and is more idiomatic.

♻️ Suggested change
-        if (count as i64) < PAGE_SIZE {
+        if count < PAGE_SIZE as usize {
             break;
         }
xtask/src/ground_truth/generate.rs (2)

100-103: saturating_sub(2) silently clamps to 0 for subject_epoch < 2.

If someone passes --subject-epoch 0 or 1, performance_epoch will silently be 0 rather than signaling an error. Since this only applies to a developer xtask and Shelley rewards don't begin until well after epoch 2, this is unlikely to bite in practice—but a guard or early bail-out would make intent clearer.

💡 Optional: add an early check
-    let performance_epoch = args.subject_epoch.saturating_sub(2);
+    anyhow::ensure!(
+        args.subject_epoch >= 2,
+        "subject_epoch must be >= 2 (got {}); performance_epoch would be meaningless",
+        args.subject_epoch
+    );
+    let performance_epoch = args.subject_epoch - 2;

143-149: Rewards CSV filename omits the epoch, unlike delegation and stake.

delegation-{performance_epoch}.csv and stake-{performance_epoch}.csv embed the epoch to avoid collisions when the same output_dir is reused, but rewards.csv does not. If this is intentional (e.g., rewards are always overwritten), no action needed—but if it should parallel the others, consider rewards-{performance_epoch}.csv.
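
If the parallel naming is preferred, a one-line sketch (assuming output_dir and performance_epoch are the values already in scope in this function):

// hypothetical: embed the epoch like the delegation/stake dumps do
let rewards_path = output_dir.join(format!("rewards-{performance_epoch}.csv"));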

crates/cardano/src/hacks.rs (1)

37-78: Consider grouping mainnet pointer mappings with inline comments for traceability.

The preview and preprod sections have implicit grouping. The new mainnet block mixes Some(...) and None entries without annotation of where each pointer was observed (block height, tx hash, etc.). If these ever need debugging or updating, having a brief provenance note (similar to the proposal comments) would help. This is a minor maintainability nit for a growing list.

xtask/src/ground_truth/delegation.rs (1)

51-56: amount as u64 silently wraps negative values.

i64 as u64 performs a bitwise reinterpretation on negative values. While epoch_stake.amount should always be non-negative, a defensive try_from or .max(0) guard would be safer.

Suggested defensive cast
-            let amount: i64 = row.get(2);
-
-            let entry = pools
-                .entry(pool_bech32)
-                .or_insert_with(|| (pool_hash, 0));
-            entry.1 += amount as u64;
+            let amount: i64 = row.get(2);
+            let amount: u64 = u64::try_from(amount)
+                .expect("epoch_stake.amount should be non-negative");
+
+            let entry = pools
+                .entry(pool_bech32)
+                .or_insert_with(|| (pool_hash, 0));
+            entry.1 += amount;
tests/epoch_pots/compare.rs (1)

51-55: Redundant & on record — Clippy will flag this.

record is already &DiffByteRecord from the slice iteration, so ignore(&record) passes &&DiffByteRecord. The extra & is unnecessary.

🔧 Proposed fix
-        if ignore(&record) {
+        if ignore(record) {

As per coding guidelines, **/*.rs: "Run cargo clippy --workspace --all-targets --all-features and resolve all clippy warnings before committing changes."

crates/cardano/src/roll/epochs.rs (1)

328-348: Treasury MIR path logs at info level per credential — consider debug.

The tracing::info! on lines 339-345 fires for every credential in a treasury MIR certificate. The equivalent reserves path (lines 314-326) only logs at warn for negative amounts. In production, info-level per-credential logging could be noisy. Consider downgrading to debug to match the reserves path's approach.

🔇 Proposed fix
-                        tracing::info!(
+                        tracing::debug!(
                             source = "treasury",
                             credential = ?cred,
                             amount = amount,
                             amount_u64 = amount_u64,
                             "processing treasury MIR"
                         );
crates/cardano/src/rewards/mod.rs (3)

454-456: Chunk size of 25 may be too small for Rayon.

With DELEGATOR_CHUNK_SIZE = 25, pools with tens of thousands of delegators will spawn many very small tasks. Rayon's work-stealing handles this, but the per-task overhead could outweigh benefits. A value of 100–500 is more typical for compute-bound work of this granularity.
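
For example (the number below is purely illustrative; the right value should come from benchmarking real mainnet epochs):

// larger chunks amortize Rayon's per-task scheduling overhead
const DELEGATOR_CHUNK_SIZE: usize = 256;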


590-607: Consider flat_map over map + flatten for clarity.

Rayon's flat_map is more idiomatic than map(...).flatten() for this pattern.

♻️ Proposed fix
         let delegator_rewards: Vec<_> = delegators
             .par_chunks(DELEGATOR_CHUNK_SIZE)
-            .map(|chunk| {
+            .flat_map(|chunk| {
                 compute_delegator_chunk(
                     ctx,
                     pool,
                     total_pool_reward,
                     pool_stake,
                     circulating_supply,
                     pool_params.cost,
                     pool_margin.clone(),
                     &owners,
                     babbage_or_later,
                     chunk,
                 )
             })
-            .flatten()
             .collect();

351-362: drain_all unconditionally routes unspendable to treasury, but should support pre-Babbage behavior.

drain_all routes all unspendable rewards to applied_unspendable_to_treasury without a way to direct them to reserves. The companion method drain_unspendable(to_reserves: bool) demonstrates the correct pattern: pre-Babbage should return unspendable to reserves, post-Babbage to treasury.

Currently drain_all is only called in a single test and not used in production reward handling logic (which correctly uses drain_unspendable(pre_babbage)). However, the public method with hard-coded post-Babbage behavior creates maintenance risk. Add a to_reserves parameter to match drain_unspendable, or document clearly that drain_all is exclusively for post-Babbage/testing scenarios.
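
A minimal illustrative sketch of the parameterized variant; the struct below is a stand-in rather than the real RewardMap, and its counter names only loosely mirror the fields discussed in this PR:

struct UnspendablePots {
    pending_unspendable: u64,
    applied_unspendable_to_treasury: u64,
    applied_unspendable_to_reserves: u64,
}

impl UnspendablePots {
    /// Mirror drain_unspendable(to_reserves): route the pending unspendable
    /// amount by protocol era instead of hard-coding the treasury destination.
    fn drain_all(&mut self, to_reserves: bool) {
        let amount = std::mem::take(&mut self.pending_unspendable);
        if to_reserves {
            // pre-Babbage: unspendable rewards return to reserves
            self.applied_unspendable_to_reserves += amount;
        } else {
            // Babbage and later: unspendable rewards go to the treasury
            self.applied_unspendable_to_treasury += amount;
        }
    }
}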

tests/epoch_pots/main.rs (2)

42-73: Unused macro parameter $fixture_mod.

The $fixture_mod identifier is captured by the macro but never referenced in the expansion body. This adds noise to each invocation site (lines 717–739) without purpose.

Suggested fix
 macro_rules! epoch_test {
-    ($test_name:ident, $fixture_mod:ident, $network:literal, $epoch:literal, $snapshot:literal) => {
+    ($test_name:ident, $network:literal, $epoch:literal, $snapshot:literal) => {
         #[test]
         fn $test_name() {

And update all invocations, e.g.:

-epoch_test!(test_mainnet_242, fixtures_mainnet_242, "mainnet", 242, 240);
+epoch_test!(test_mainnet_242, "mainnet", 242, 240);

254-260: Precision loss in rational_to_f64 for large numerators/denominators.

Casting i64/u64 to f64 silently loses precision for values > 2^53. For protocol parameter rationals (typically small fractions), this is fine in practice, but it's worth noting as a known limitation.
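
For reference, a tiny standalone check of where the cutoff sits:

// Integers above 2^53 are not exactly representable as f64, so adjacent
// values collapse to the same float.
fn main() {
    let a: u64 = (1u64 << 53) + 1;
    let b: u64 = 1u64 << 53;
    assert_eq!(a as f64, b as f64); // both round to 9007199254740992.0
    println!("{a} and {b} map to the same f64: {}", a as f64);
}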

xtask/src/util.rs (1)

18-44: Duplicate copy_dir_recursive exists in crates/testing/src/harness/cardano.rs.

Both implementations appear identical. Consider extracting this utility into a shared crate (or a thin utility crate) to avoid maintaining two copies. That said, since xtask and the testing crate have distinct dependency graphs, keeping them separate may be intentional.

crates/cardano/src/lib.rs (1)

362-365: Consider renaming Cache::stability_window to reflect the semantic change.

The field still says stability_window but now holds the randomness stability window (4k/f) rather than the general stability window (3k/f). This could mislead future readers who expect the 3k/f value.

💡 Suggested rename
 pub(crate) struct Cache {
     pub eras: ChainSummary,
-    pub stability_window: u64,
+    pub randomness_stability_window: u64,
 }

Then update the two usage sites (self.cache.stability_window) accordingly.

crates/cardano/src/roll/pools.rs (1)

157-164: Extra blank line at 162–163.

Minor formatting nit — there are two consecutive blank lines before the closing brace.

Diff
     fn apply(&mut self, entity: &mut Option<PoolState>) {
         if let Some(entity) = entity {
             entity.blocks_minted_total += self.count;
             let live = entity.snapshot.unwrap_live_mut();
             live.blocks_minted += self.count;
-
-
         }
     }
crates/cardano/src/estart/loading.rs (2)

57-92: AVVM reclamation logic looks correct.

The approach of querying remaining genesis AVVM UTxOs and summing their values matches the expected behavior at the Shelley→Allegra boundary.

One minor observation: tracing::warn! at line 85 will fire on every normal chain sync at this boundary. Since AVVM reclamation is expected (not an anomaly), info may be a more appropriate level.

💡 Optional level change
-        tracing::warn!(
+        tracing::info!(
             remaining_count = remaining.len(),
             total_avvm = total,
             "AVVM reclamation at Shelley→Allegra boundary"
         );

76-83: Silent fallback on decode failure may mask data issues.

Line 81's .unwrap_or(0) silently ignores UTxOs that fail to decode, which could undercount reclamation. For genesis AVVM UTxOs this is unlikely, but a debug-level log on the error path would aid future troubleshooting.
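
A hedged sketch of the fallback with a log on the error path; the actual decode call in loading.rs is not shown, the helper just accepts any Result with a displayable error:

// Instead of `.unwrap_or(0)`, log the failure at debug level before defaulting.
fn lovelace_or_zero<E: std::fmt::Display>(decoded: Result<u64, E>) -> u64 {
    match decoded {
        Ok(coin) => coin,
        Err(err) => {
            tracing::debug!(%err, "failed to decode AVVM UTxO value; counting it as 0");
            0
        }
    }
}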

crates/cardano/src/ewrap/rewards.rs (2)

118-119: Redundant total_value() calls — reuse reward_total.

reward_total is computed on line 118 but then reward.total_value() is called again on lines 138 and 162. Since the reward object is not mutated between these calls, reuse reward_total to avoid redundancy.

♻️ Suggested diff
         if !account.is_registered() {
-            let total = reward.total_value();
-
             // Accounts that were registered at RUPD startStep time but deregistered
             // before EWRAP. Their rewards go to treasury per the Haskell ledger's
             // applyRUpdFiltered (frTotalUnregistered → casTreasury).
@@ -148,19 +146,17 @@
             warn!(
                 account=%id,
                 credential=?account.credential,
-                amount=total,
+                amount=reward_total,
                 was_spendable=%was_spendable,
                 "reward not applied (unregistered account) -> treasury"
             );
 
-            ctx.rewards.return_reward_to_treasury(total);
+            ctx.rewards.return_reward_to_treasury(reward_total);
             return Ok(());
         }
 
         self.change(AssignRewards {
             account: id.clone(),
-            reward: reward.total_value(),
+            reward: reward_total,
         });

Also applies to: 137-162


196-216: Double iteration over iter_pending() can be collapsed into a single pass.

Lines 200–207 iterate iter_pending() twice with complementary filters. A single fold would halve the work and reduce code duplication.

♻️ Suggested diff
-        if pending_before_drain > 0 {
-            let pending_spendable: u64 = ctx.rewards.iter_pending()
-                .filter(|(_, r)| r.is_spendable())
-                .map(|(_, r)| r.total_value())
-                .sum();
-            let pending_unspendable: u64 = ctx.rewards.iter_pending()
-                .filter(|(_, r)| !r.is_spendable())
-                .map(|(_, r)| r.total_value())
-                .sum();
+        if pending_before_drain > 0 {
+            let (pending_spendable, pending_unspendable) = ctx.rewards.iter_pending()
+                .fold((0u64, 0u64), |(sp, unsp), (_, r)| {
+                    if r.is_spendable() {
+                        (sp + r.total_value(), unsp)
+                    } else {
+                        (sp, unsp + r.total_value())
+                    }
+                });
crates/cardano/src/estart/reset.rs (1)

181-213: Consider downgrading this log from warn to info or debug.

This tracing::warn! fires on every single epoch transition and is not indicating an abnormal condition — it's operational telemetry. Using warn level here will create noise in production monitoring. A debug or info level would be more appropriate for regular epoch accounting logs.

crates/cardano/src/ewrap/loading.rs (1)

355-358: Long lines may trigger rustfmt / clippy warnings.

Lines 356–357 exceed typical line-length limits. Consider breaking them up for readability and to avoid formatting tool conflicts.

♻️ Suggested diff
-        let spendable_total: u64 = pending.values().filter(|r| r.is_spendable()).map(|r| r.total_value()).sum();
-        let unspendable_total: u64 = pending.values().filter(|r| !r.is_spendable()).map(|r| r.total_value()).sum();
+        let spendable_total: u64 = pending
+            .values()
+            .filter(|r| r.is_spendable())
+            .map(|r| r.total_value())
+            .sum();
+        let unspendable_total: u64 = pending
+            .values()
+            .filter(|r| !r.is_spendable())
+            .map(|r| r.total_value())
+            .sum();

As per coding guidelines, **/*.rs: "Ensure the project builds without warnings by running cargo build --workspace --all-targets --all-features."

crates/cardano/src/rupd/loading.rs (1)

130-134: _rupd_slot parameter is accepted but unused.

The _rupd_slot parameter is computed by the caller (lines 316–319) but never used in StakeSnapshot::load. If pre-Babbage filtering is planned for a follow-up, consider adding a TODO comment to clarify intent. Otherwise, the caller-side computation on lines 316–317 is dead code.

crates/cardano/src/pots.rs (1)

327-327: Unnecessary .clone() on tau.

tau is an owned Ratio and is not used after this line. The .clone() is redundant.

Suggested fix
-    let treasury_tax = floor_int!(tau.clone() * ratio!(reward_pot), u64);
+    let treasury_tax = floor_int!(tau * ratio!(reward_pot), u64);

Comment on lines +157 to +164
assert!(
effective == applied_rewards_sum,
"EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({}), diff = {}",
ctx.ending_state().number,
effective,
applied_rewards_sum,
effective as i64 - applied_rewards_sum as i64
);

⚠️ Potential issue | 🔴 Critical

assert! in production code will crash the node on mismatch.

If any mainnet edge case causes effective != applied_rewards_sum, this panics the node. Consider replacing with tracing::error! and a soft failure, or at minimum use debug_assert! so it only fires in debug/test builds — consistent with the debug_assert! pattern used in estart/reset.rs line 233 for pot consistency.

♻️ Suggested diff
-    assert!(
-        effective == applied_rewards_sum,
-        "EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({}), diff = {}",
-        ctx.ending_state().number,
-        effective,
-        applied_rewards_sum,
-        effective as i64 - applied_rewards_sum as i64
-    );
+    if effective != applied_rewards_sum {
+        tracing::error!(
+            epoch = ctx.ending_state().number,
+            %effective,
+            %applied_rewards_sum,
+            diff = %(effective as i64 - applied_rewards_sum as i64),
+            "EWRAP: effective_rewards != applied_rewards_sum"
+        );
+    }
+    debug_assert_eq!(effective, applied_rewards_sum,
+        "EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({})",
+        ctx.ending_state().number, effective, applied_rewards_sum,
+    );
🤖 Prompt for AI Agents
In `@crates/cardano/src/ewrap/wrapup.rs` around lines 157 - 164, The current
assert! in wrapup.rs that compares effective and applied_rewards_sum
(assert!(effective == applied_rewards_sum, ... using ctx.ending_state().number,
effective, applied_rewards_sum)) will panic in production; replace it with a
non-panicking check: either convert to debug_assert! for debug-only checks, or
emit a tracing::error! with the same diagnostic string and handle the mismatch
as a soft failure (e.g., log the epoch number and the diff and continue or
return a Result::Err from the enclosing function) so the node does not crash on
mainnet edge cases. Ensure the replacement references the same symbols
(effective, applied_rewards_sum, ctx.ending_state().number) so logs carry the
same context.

Comment on lines +172 to +173
consumed = %(effective + to_treasury),
returned = %(incentives.available_rewards.saturating_sub(effective + to_treasury)),

⚠️ Potential issue | 🟡 Minor

Potential u64 overflow in effective + to_treasury before saturating_sub.

Line 173 computes effective + to_treasury inside saturating_sub. If both values are large, the addition overflows (wraps) before the saturating subtraction is applied. Use saturating_add for safety.

♻️ Suggested diff
-        consumed = %(effective + to_treasury),
-        returned = %(incentives.available_rewards.saturating_sub(effective + to_treasury)),
+        consumed = %(effective.saturating_add(to_treasury)),
+        returned = %(incentives.available_rewards.saturating_sub(effective.saturating_add(to_treasury))),
🤖 Prompt for AI Agents
In `@crates/cardano/src/ewrap/wrapup.rs` around lines 172 - 173, The expressions
use plain addition for effective + to_treasury which can overflow before the
call to incentives.available_rewards.saturating_sub; change the logic in
wrapup.rs to compute the sum using saturating_add (e.g., let total =
effective.saturating_add(to_treasury)) and then use total for both consumed and
the argument to incentives.available_rewards.saturating_sub to avoid
intermediate overflow.

Comment on lines +80 to 83
(slot, tx_idx, cert_idx) => {
warn!(slot, tx_idx, cert_idx, "missing pointer mapping");
panic!()
}

⚠️ Potential issue | 🟠 Major

Unconditional panic!() for unknown pointers will crash the node in production.

The previous code reportedly had the panic commented out. Enabling it is useful for catching missing mappings during development, but in a deployed node any unmapped pointer (e.g., from a maliciously crafted transaction in a pre-deprecation era) would cause a hard crash.

Consider using a panic! only in debug/test builds, or at least providing a descriptive message. In release builds, returning None with the warning may be safer—or use debug_assert! if you only want this to fire during development.

💡 Suggested approach
             (slot, tx_idx, cert_idx) => {
                 warn!(slot, tx_idx, cert_idx, "missing pointer mapping");
-                panic!()
+                // Fail hard in dev/test so missing mappings are caught immediately.
+                // In release builds, fall back to None to avoid crashing the node.
+                if cfg!(debug_assertions) {
+                    panic!("unmapped pointer ({slot}, {tx_idx}, {cert_idx})");
+                }
+                None
             }
🤖 Prompt for AI Agents
In `@crates/cardano/src/hacks.rs` around lines 80 - 83, The match arm that handles
unknown pointer tuples (slot, tx_idx, cert_idx) currently calls warn!(...) and
then unconditionally panics via panic!(), which will crash production; change
this so the code logs a descriptive warning including the tuple and does NOT
panic in release builds: replace the unconditional panic!() with a debug-only
assertion (e.g. debug_assert!(false, "...") or a conditional panic behind
cfg!(debug_assertions)) and return None (or an appropriate error) in the
non-debug path; keep the warn! call but enhance its message to include
slot/tx_idx/cert_idx for easier debugging.

Comment on lines +150 to +182
fn apply(&mut self, entity: &mut Option<Self::Entity>) {
// Behavior depends on overwrite flag (determined by protocol version):
// - Pre-Alonzo (overwrite=true): Later MIRs overwrite earlier ones (Map.union semantics)
// - Alonzo+ (overwrite=false): MIRs accumulate (Map.unionWith (<>) semantics)
if self.overwrite {
// Pre-Alonzo: overwrite with new values
*entity = Some(PendingMirState {
credential: self.credential.clone(),
from_reserves: self.from_reserves,
from_treasury: self.from_treasury,
});
} else if let Some(existing) = entity.as_mut() {
// Alonzo+: accumulate
existing.from_reserves += self.from_reserves;
existing.from_treasury += self.from_treasury;
} else {
*entity = Some(PendingMirState {
credential: self.credential.clone(),
from_reserves: self.from_reserves,
from_treasury: self.from_treasury,
});
}
}

fn undo(&self, entity: &mut Option<Self::Entity>) {
if let Some(existing) = entity.as_mut() {
existing.from_reserves = existing.from_reserves.saturating_sub(self.from_reserves);
existing.from_treasury = existing.from_treasury.saturating_sub(self.from_treasury);
if existing.from_reserves == 0 && existing.from_treasury == 0 {
*entity = None;
}
}
}

⚠️ Potential issue | 🔴 Critical

EnqueueMir::undo does not correctly restore overwritten state.

When overwrite=true (pre-Alonzo), apply replaces the entire entity. However, undo uses saturating_sub on the current values, which does not restore the previous entity state that was overwritten.

Example: If previous state was {reserves: 100, treasury: 0} and this delta overwrites with {reserves: 50, treasury: 0}, then undo computes 50 - 50 = 0 and clears the entity — but the correct rollback should restore {reserves: 100, treasury: 0}.

Store the previous state in a field (similar to DequeueMir.prev) and restore it on undo when overwrite=true.

🐛 Proposed fix
 pub struct EnqueueMir {
     pub credential: StakeCredential,
     pub from_reserves: u64,
     pub from_treasury: u64,
     pub overwrite: bool,
+    /// Previous state stored for rollback (used when overwrite=true)
+    prev: Option<PendingMirState>,
 }
 
 impl EnqueueMir {
     pub fn new(
         credential: StakeCredential,
         from_reserves: u64,
         from_treasury: u64,
         overwrite: bool,
     ) -> Self {
         Self {
             credential,
             from_reserves,
             from_treasury,
             overwrite,
+            prev: None,
         }
     }
 // ...
     fn apply(&mut self, entity: &mut Option<Self::Entity>) {
+        self.prev = entity.clone();
         if self.overwrite {
             *entity = Some(PendingMirState {
 // ... (rest unchanged)
 
     fn undo(&self, entity: &mut Option<Self::Entity>) {
-        if let Some(existing) = entity.as_mut() {
-            existing.from_reserves = existing.from_reserves.saturating_sub(self.from_reserves);
-            existing.from_treasury = existing.from_treasury.saturating_sub(self.from_treasury);
-            if existing.from_reserves == 0 && existing.from_treasury == 0 {
-                *entity = None;
-            }
-        }
+        *entity = self.prev.clone();
     }
🤖 Prompt for AI Agents
In `@crates/cardano/src/rupd/deltas.rs` around lines 150 - 182, EnqueueMir::undo
currently can't restore a fully overwritten entity because apply replaces the
entity when overwrite is true; change EnqueueMir to capture the prior state
(e.g. add a field like prev: Option<PendingMirState>) when apply runs with
overwrite==true (store the existing Option<PendingMirState> before overwriting),
and then update undo to: if overwrite==true restore *entity = self.prev.clone(),
otherwise keep the existing saturating_sub logic; reference the
EnqueueMir::apply and EnqueueMir::undo methods, the overwrite flag,
PendingMirState, and mirror the DequeueMir.prev approach for how prev is stored
and restored.

Comment on lines +293 to +308
pub fn copy_dir_recursive(src: &Path, dst: &Path) -> std::io::Result<()> {
std::fs::create_dir_all(dst)?;

for entry in std::fs::read_dir(src)? {
let entry = entry?;
let file_type = entry.file_type()?;
let dest_path = dst.join(entry.file_name());

if file_type.is_dir() {
copy_dir_recursive(&entry.path(), &dest_path)?;
} else {
std::fs::copy(entry.path(), &dest_path)?;
}
}

Ok(())

⚠️ Potential issue | 🟡 Minor

copy_dir_recursive doesn't handle symlinks.

If a symlink is encountered, file_type.is_dir() returns false for directory symlinks (it checks the symlink itself, not the target), so it would attempt std::fs::copy which follows symlinks and copies the target file. For directory symlinks, this silently skips the contents. This is probably fine for test fixtures but worth noting.
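
A hedged, Unix-only sketch of a symlink-aware variant (the Windows branch would need std::os::windows::fs::symlink_file / symlink_dir):

use std::path::Path;

pub fn copy_dir_recursive(src: &Path, dst: &Path) -> std::io::Result<()> {
    std::fs::create_dir_all(dst)?;

    for entry in std::fs::read_dir(src)? {
        let entry = entry?;
        // DirEntry::file_type() reports the link itself, so is_symlink() is
        // enough to branch on before falling through to dir/file handling.
        let file_type = entry.file_type()?;
        let dest_path = dst.join(entry.file_name());

        if file_type.is_symlink() {
            // Recreate the link rather than copying (or skipping) its target.
            let target = std::fs::read_link(entry.path())?;
            std::os::unix::fs::symlink(target, &dest_path)?;
        } else if file_type.is_dir() {
            copy_dir_recursive(&entry.path(), &dest_path)?;
        } else {
            std::fs::copy(entry.path(), &dest_path)?;
        }
    }

    Ok(())
}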

🤖 Prompt for AI Agents
In `@crates/testing/src/harness/cardano.rs` around lines 293 - 308,
copy_dir_recursive currently treats symlinks as regular files because it never
branches on is_symlink(); update copy_dir_recursive to call
entry.path().symlink_metadata() (or DirEntry::symlink_metadata) and check
file_type.is_symlink(), then recreate the symlink at dst instead of copying the
target: use std::os::unix::fs::symlink or the Windows equivalents
(std::os::windows::fs::symlink_file / symlink_dir) depending on target type, and
for non-symlink dirs/files keep the existing recursion and std::fs::copy
behavior; ensure errors from symlink creation are propagated as std::io::Result
so callers of copy_dir_recursive see failures.

Comment on lines +115 to +120
}

let matched = total_rows.saturating_sub(total + ignored);
eprintln!(" {} rows matched, {} differences", matched, total);

Ok(total)

⚠️ Potential issue | 🟡 Minor

matched count may be misleading for additions from file2.

total_rows is counted from file1 only (line 16-21), but total includes DiffByteRecord::Add entries from file2 that aren't in file1. This means matched (line 117) could undercount or even saturate to 0. Since this is a test utility for informational output, it's minor but worth noting.
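
One hedged way to keep the informational count honest without restructuring the loop: count additions separately (a new adds_from_file2 counter, incremented where the loop already handles DiffByteRecord::Add) and fold them out:

// Sketch of the corrected arithmetic; all inputs are counts the compare loop
// already has or can track, so no extra pass over the diffs is needed.
fn matched_rows(total_rows: usize, total_diffs: usize, adds_from_file2: usize, ignored: usize) -> usize {
    // Additions exist only in file2 and never consume a file1 row.
    let diffs_touching_file1 = total_diffs.saturating_sub(adds_from_file2);
    total_rows.saturating_sub(diffs_touching_file1 + ignored)
}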

🤖 Prompt for AI Agents
In `@tests/epoch_pots/compare.rs` around lines 115 - 120, The printed `matched`
value is wrong because total_rows only comes from file1 while `total` includes
DiffByteRecord::Add from file2; modify the logic to compute `matched` directly
by counting DiffByteRecord::Equal records while iterating diffs (or otherwise
track add counts from file2) instead of using total_rows.saturating_sub(total +
ignored); update the code that computes `matched` (referencing matched,
total_rows, total, ignored and DiffByteRecord::Add/Equal) so the informational
output reflects actual equal rows.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@crates/cardano/src/hacks.rs`:
- Around line 59-63: Add a brief inline comment above the
StakeCredential::AddrKeyhash entry for the credential
"1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395" (the mapping for
tuple (18446744073709551615, 1221092, 2)) explaining that this credential also
appears for (20095460, 2, 0), that the slot value 18446744073709551615 is
u64::MAX and used as a sentinel in mainnet edge-case mappings, and reference the
repo context/commit "fix/mainnet-200-plus" so future readers understand the
duplicate-pointer provenance.

In `@crates/cardano/src/model.rs`:
- Around line 603-622: The is_registered_at method's (Some(reg), Some(dereg))
arm incorrectly treats any presence of a deregistration as ending the
registration period; update the logic in is_registered_at so the re-registration
case is handled: change the final match arm (Some(reg), Some(dereg)) to return
true when reg <= slot AND (dereg > slot OR dereg <= reg) (i.e., registered at
slot if it was re-registered after the last deregistration or if deregistration
happens after the target slot), keeping the rest of the match arms unchanged.

In `@xtask/src/ground_truth/delegation.rs`:
- Around line 51-56: The code currently casts amount: i64 to u64 with amount as
u64 which will wrap negative values; change the logic in the block that builds
pools for pool_bech32/pool_hash so you validate or convert safely (e.g., use
u64::try_from(amount) or amount.try_into()) and handle the Err case instead of
allowing a silent wrap — either return or log an error, skip the row, or assert
non-negative depending on desired behavior; update the entry update (entry.1 +=
...) to use the safely converted u64 value.
🧹 Nitpick comments (8)
xtask/src/ground_truth/delegation.rs (1)

16-16: PAGE_SIZE is duplicated across delegation.rs, rewards.rs, and stake.rs.

Consider extracting this constant to the parent ground_truth module so all three files share a single definition.
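
For example, a single definition in the parent module (the concrete value below is an assumption; keep whatever the three files currently use), with delegation.rs, rewards.rs, and stake.rs switching to super::PAGE_SIZE:

// xtask/src/ground_truth/mod.rs
pub(crate) const PAGE_SIZE: i64 = 1000; // assumed value; match the existing per-file definitions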

crates/cardano/src/model.rs (1)

1812-1815: Minor: total_value() could use saturating_add for defensive overflow protection.

Both from_reserves and from_treasury are u64 Lovelace values. While overflow is unlikely in practice, other arithmetic in this codebase uses add!/sub! macros for safe arithmetic.

Proposed fix
     pub fn total_value(&self) -> u64 {
-        self.from_reserves + self.from_treasury
+        self.from_reserves.saturating_add(self.from_treasury)
     }
crates/cardano/src/roll/pools.rs (1)

157-164: Nit: extra blank line.

Lines 162–163 have an extra blank line. Minor cosmetic issue.

Proposed fix
     fn apply(&mut self, entity: &mut Option<PoolState>) {
         if let Some(entity) = entity {
             entity.blocks_minted_total += self.count;
             let live = entity.snapshot.unwrap_live_mut();
             live.blocks_minted += self.count;
-
-
         }
     }
crates/cardano/src/ewrap/rewards.rs (2)

126-135: Consider downgrading from info to debug level.

This logs at info level for every account that has both registered_at and deregistered_at set. On mainnet with many re-registrations, this could produce significant log volume. The re-registration pattern is not inherently anomalous.


118-119: Minor: reward.total_value() is computed three times.

reward_total (line 118) is only used for logging. The same value is recomputed as total (line 138) and again via reward.total_value() (line 162). Consider reusing reward_total to avoid redundant iteration.

Proposed fix
         if !account.is_registered() {
-            let total = reward.total_value();
             warn!(
                 account=%id,
                 credential=?account.credential,
-                amount=total,
+                amount=reward_total,
                 was_spendable=%was_spendable,
                 "reward not applied (unregistered account) -> treasury"
             );

-            ctx.rewards.return_reward_to_treasury(total);
+            ctx.rewards.return_reward_to_treasury(reward_total);
             return Ok(());
         }

         self.change(AssignRewards {
             account: id.clone(),
-            reward: reward.total_value(),
+            reward: reward_total,
         });

Also applies to: 138-138, 162-162

crates/cardano/src/ewrap/loading.rs (1)

372-386: Minor: long line could be split for readability.

Line 373 (let spendable_total: u64 = ...) is quite long. Consider splitting for readability, though this is purely cosmetic.

crates/cardano/src/roll/epochs.rs (2)

329-349: Consider downgrading the per-credential treasury MIR log from info to debug.

Line 340 logs at info level for each individual treasury MIR credential. While treasury MIRs are infrequent on mainnet, this is more verbose than the reserves path which has no per-credential info log. The summary stats tracked in RollingStats.treasury_mirs already capture the totals.

Proposed fix
-                        tracing::info!(
+                        tracing::debug!(
                             source = "treasury",
                             credential = ?cred,
                             amount = amount,
                             amount_u64 = amount_u64,
                             "processing treasury MIR"
                         );

314-351: Minor: inconsistent variable naming between reserves and treasury paths.

The reserves path uses amount directly after max(0) as u64 (line 325), while the treasury path introduces amount_u64 (line 339). Consider using the same naming for consistency.

Proposed fix
-                        let amount_u64 = amount.max(0) as u64;
-                        tracing::debug!(
-                            source = "treasury",
-                            credential = ?cred,
-                            amount = amount,
-                            amount_u64 = amount_u64,
-                            "processing treasury MIR"
-                        );
-                        self.stats_delta.as_mut().unwrap().treasury_mirs += amount_u64;
+                        let amount = amount.max(0) as u64;
+                        tracing::debug!(
+                            source = "treasury",
+                            credential = ?cred,
+                            amount,
+                            "processing treasury MIR"
+                        );
+                        self.stats_delta.as_mut().unwrap().treasury_mirs += amount;

Comment on lines +59 to +63
(18446744073709551615, 1221092, 2) => Some(StakeCredential::AddrKeyhash(
"1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395"
.parse()
.unwrap(),
)),

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, read the hacks.rs file to see the lines in question
cd crates/cardano/src && cat -n hacks.rs | sed -n '35,85p'

Repository: txpipe/dolos

Length of output: 2411


🏁 Script executed:

# Find BlockSlot type definition
rg -n 'type BlockSlot|struct BlockSlot' --type rust

Repository: txpipe/dolos

Length of output: 107


🏁 Script executed:

# Find Pointer struct and slot() method
ast-grep --pattern 'struct Pointer { $$$ }'

Repository: txpipe/dolos

Length of output: 38


🏁 Script executed:

# Search for slot method on Pointer
rg -n 'fn slot\(\&self\)' --type rust -A 2

Repository: txpipe/dolos

Length of output: 754


🏁 Script executed:

# Read the context around the pointer match statement to find any explanatory comments
cd crates/cardano/src && cat -n hacks.rs | sed -n '1,40p'

Repository: txpipe/dolos

Length of output: 1871


🏁 Script executed:

# Check if there are any comments or documentation around line 59
cd crates/cardano/src && cat -n hacks.rs | sed -n '40,90p'

Repository: txpipe/dolos

Length of output: 2274


🏁 Script executed:

# Check git log for context about these additions
cd crates/cardano/src && git log --oneline -20 hacks.rs 2>/dev/null || echo "Git history not available"

Repository: txpipe/dolos

Length of output: 136


🏁 Script executed:

# Look for any PR or issue references in comments in the file
cd crates/cardano/src && grep -n "TODO\|FIXME\|XXX\|HACK\|NOTE" hacks.rs | head -20

Repository: txpipe/dolos

Length of output: 38


Add a comment documenting the duplicate credential mapping and u64::MAX edge case.

The credential 1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395 maps to both (20095460, 2, 0) on line 43 and (18446744073709551615, 1221092, 2) on line 59. The slot value 18446744073709551615 is u64::MAX. Since these are confirmed mainnet edge-case mappings (evident from the file's purpose and the git commit "fix/mainnet-200-plus"), a brief comment explaining the provenance would help future readers understand why the same credential resolves from two different pointer tuples, especially one with a sentinel-like slot value.
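
A possible wording for that note, placed directly above the match arm:

// NOTE: this credential also resolves from pointer (20095460, 2, 0) above.
// The slot component here is u64::MAX (18446744073709551615), used as a
// sentinel for a mainnet edge-case mapping (see branch fix/mainnet-200-plus).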

🤖 Prompt for AI Agents
In `@crates/cardano/src/hacks.rs` around lines 59 - 63, Add a brief inline comment
above the StakeCredential::AddrKeyhash entry for the credential
"1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395" (the mapping for
tuple (18446744073709551615, 1221092, 2)) explaining that this credential also
appears for (20095460, 2, 0), that the slot value 18446744073709551615 is
u64::MAX and used as a sentinel in mainnet edge-case mappings, and reference the
repo context/commit "fix/mainnet-200-plus" so future readers understand the
duplicate-pointer provenance.

Comment on lines +603 to +622
/// Check if the account was registered at a specific slot.
/// Used for RUPD-time filtering where we need registration status at the RUPD slot,
/// not the current chain tip.
///
/// Note: When an account deregisters, `registered_at` is cleared to `None`.
/// So we handle the case where `deregistered_at` is set but `registered_at` is not:
/// if the deregistration slot is after the target slot, the account was registered.
pub fn is_registered_at(&self, slot: u64) -> bool {
match (self.registered_at, self.deregistered_at) {
// Never registered and never deregistered
(None, None) => false,
// Deregistered but registered_at was cleared - account was registered
// until the deregistration slot. Check if target slot is before deregistration.
(None, Some(dereg)) => dereg > slot,
// Currently registered, never deregistered - check if registered by target slot
(Some(reg), None) => reg <= slot,
// Both set (re-registration case) - check if registered and not yet deregistered
(Some(reg), Some(dereg)) => reg <= slot && dereg > slot,
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find all call sites of is_registered_at
rg -n 'is_registered_at' --type=rust

Repository: txpipe/dolos

Length of output: 143


🏁 Script executed:

# Look for tests related to is_registered_at
rg -n 'is_registered_at|AccountState' --type=rust | head -50

Repository: txpipe/dolos

Length of output: 4919


🏁 Script executed:

# Check the AccountState struct definition and understand registered_at/deregistered_at handling
rg -n 'struct AccountState' -A 30 --type=rust

Repository: txpipe/dolos

Length of output: 1662


🏁 Script executed:

# More thorough search for is_registered_at including comments, tests, etc
rg 'is_registered_at' -B 2 -A 5

Repository: txpipe/dolos

Length of output: 838


🏁 Script executed:

# Look for test files that might test AccountState methods
fd '.*test.*\.rs$' -x rg -l 'AccountState' {}

Repository: txpipe/dolos

Length of output: 38


🏁 Script executed:

# Check the is_registered method (without _at) to understand the pattern
rg -n 'fn is_registered\(' -A 8 --type=rust

Repository: txpipe/dolos

Length of output: 648


🏁 Script executed:

# Look at how registered_at and deregistered_at are modified in accounts.rs
rg -n 'registered_at|deregistered_at' crates/cardano/src/roll/accounts.rs -B 2 -A 2

Repository: txpipe/dolos

Length of output: 1930


🏁 Script executed:

# Check if is_registered_at is pub and when it was added
git log --oneline -p --all -- '**/model.rs' | grep -A 10 'is_registered_at' | head -20

Repository: txpipe/dolos

Length of output: 142


🏁 Script executed:

# Check if there's a scenario where both registered_at and deregistered_at can be Some
# by looking at the full StakeDeregistration::apply method
rg -n 'impl.*StakeDeregistration' -A 50 crates/cardano/src/roll/accounts.rs | head -80

Repository: txpipe/dolos

Length of output: 2326


🏁 Script executed:

# Double-check: look for any usage of is_registered in the codebase to see the pattern
rg 'is_registered\(\)' -B 2 -A 2 --type=rust

Repository: txpipe/dolos

Length of output: 3723


🏁 Script executed:

# Final check: see if is_registered_at appears in any documentation or comments
rg 'RUPD|rupd.*registration' -i --type=rust | head -20

Repository: txpipe/dolos

Length of output: 1913


🏁 Script executed:

# Check the rupd module to see if is_registered_at should be used there
rg -n 'is_registered' crates/cardano/src/rupd/ -B 3 -A 3

Repository: txpipe/dolos

Length of output: 502


🏁 Script executed:

# Check if there's any TODO or FIXME related to is_registered_at or registration timing
rg 'TODO|FIXME' crates/cardano/src/model.rs | grep -i 'regist'

Repository: txpipe/dolos

Length of output: 38


🏁 Script executed:

# Look at the comment above is_registered_at more carefully
rg -B 10 'pub fn is_registered_at' crates/cardano/src/model.rs | head -15

Repository: txpipe/dolos

Length of output: 583


is_registered_at logic error with re-registration case.

When an account is re-registered (e.g., deregistered at slot 50, re-registered at slot 100), the state becomes registered_at = Some(100), deregistered_at = Some(50). The current implementation evaluates `100 <= 110 && 50 > 110` to false, even though the account is registered at slot 110.

The (Some(reg), Some(dereg)) arm fails to account for re-registration where reg > dereg (consistent with is_registered() logic which treats start >= end as registered).

Proposed fix
     pub fn is_registered_at(&self, slot: u64) -> bool {
         match (self.registered_at, self.deregistered_at) {
             // Never registered and never deregistered
             (None, None) => false,
             // Deregistered but registered_at was cleared - account was registered
             // until the deregistration slot. Check if target slot is before deregistration.
             (None, Some(dereg)) => dereg > slot,
             // Currently registered, never deregistered - check if registered by target slot
             (Some(reg), None) => reg <= slot,
-            // Both set (re-registration case) - check if registered and not yet deregistered
-            (Some(reg), Some(dereg)) => reg <= slot && dereg > slot,
+            // Both set - handle re-registration where reg > dereg means currently registered
+            (Some(reg), Some(dereg)) => {
+                if reg >= dereg {
+                    // Re-registered after deregistration: registered from reg onwards
+                    reg <= slot
+                } else {
+                    // Registered then deregistered: registered in [reg, dereg)
+                    reg <= slot && dereg > slot
+                }
+            }
         }
     }
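
A hedged test sketch for the scenario described above; it assumes AccountState can be constructed with only these two fields populated (e.g. via Default), so adjust the construction to whatever the real struct requires:

#[test]
fn re_registered_account_counts_as_registered() {
    let account = AccountState {
        registered_at: Some(100),
        deregistered_at: Some(50),
        ..Default::default()
    };
    // Re-registered at 100 after deregistering at 50: still registered at 110...
    assert!(account.is_registered_at(110));
    // ...but not during the gap between deregistration and re-registration.
    assert!(!account.is_registered_at(75));
}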
🤖 Prompt for AI Agents
In `@crates/cardano/src/model.rs` around lines 603 - 622, The is_registered_at
method's (Some(reg), Some(dereg)) arm incorrectly treats any presence of a
deregistration as ending the registration period; update the logic in
is_registered_at so the re-registration case is handled: change the final match
arm (Some(reg), Some(dereg)) to return true when reg <= slot AND (dereg > slot
OR dereg <= reg) (i.e., registered at slot if it was re-registered after the
last deregistration or if deregistration happens after the target slot), keeping
the rest of the match arms unchanged.

Comment on lines +51 to +56
let amount: i64 = row.get(2);

let entry = pools
.entry(pool_bech32)
.or_insert_with(|| (pool_hash, 0));
entry.1 += amount as u64;

⚠️ Potential issue | 🟡 Minor

Unchecked i64u64 cast may silently wrap negative values.

If amount is ever negative (e.g., due to data anomalies in DBSync), amount as u64 wraps to a very large number, silently corrupting the aggregated total. Use try_from or at minimum assert non-negative.

Proposed fix
-            let amount: i64 = row.get(2);
-
-            let entry = pools
-                .entry(pool_bech32)
-                .or_insert_with(|| (pool_hash, 0));
-            entry.1 += amount as u64;
+            let amount: i64 = row.get(2);
+            let amount: u64 = u64::try_from(amount).with_context(|| {
+                format!("negative delegation amount for pool {pool_bech32}: {amount}")
+            })?;
+
+            let entry = pools
+                .entry(pool_bech32)
+                .or_insert_with(|| (pool_hash, 0));
+            entry.1 += amount;
🤖 Prompt for AI Agents
In `@xtask/src/ground_truth/delegation.rs` around lines 51 - 56, The code
currently casts amount: i64 to u64 with amount as u64 which will wrap negative
values; change the logic in the block that builds pools for
pool_bech32/pool_hash so you validate or convert safely (e.g., use
u64::try_from(amount) or amount.try_into()) and handle the Err case instead of
allowing a silent wrap — either return or log an error, skip the row, or assert
non-negative depending on desired behavior; update the entry update (entry.1 +=
...) to use the safely converted u64 value.

@scarmuega scarmuega merged commit e1a660f into main Feb 12, 2026
9 of 10 checks passed
@scarmuega scarmuega deleted the fix/mainnet-200-plus branch February 12, 2026 14:06