fix(cardano): apply mainnet-specific historical edge-cases #895
Conversation
…and preserved blocks)
…r debugging instructions
📝 Walkthrough

This PR implements comprehensive MIR (Move Instantaneous Rewards) processing for epoch boundaries, adds AVVM reclamation at the Shelley→Allegra transition, refactors reward accounting to track treasury and reserves allocations separately, introduces a test harness for Cardano ledger simulation, and establishes xtask infrastructure for ground-truth testing and seed management. The rupd-snapshot-dump feature is removed.
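A recurring theme in these changes is routing unspendable rewards to either the treasury or the reserves depending on the era. As a rough, hedged sketch (the type and function names below are invented for illustration, not the PR's actual API): pre-Babbage the unspendable portion returns to reserves, post-Babbage it goes to the treasury.

```rust
/// Illustrative only — not the PR's actual types or logic, just the routing rule
/// described in this review: pre-Babbage unspendable rewards return to reserves,
/// post-Babbage they go to the treasury.
struct PotDelta {
    to_treasury: u64,
    to_reserves: u64,
}

fn route_unspendable(unspendable: u64, pre_babbage: bool) -> PotDelta {
    if pre_babbage {
        PotDelta { to_treasury: 0, to_reserves: unspendable }
    } else {
        PotDelta { to_treasury: unspendable, to_reserves: 0 }
    }
}
```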
Sequence Diagram(s)

sequenceDiagram
actor Test
participant LedgerHarness
participant CardanoLogic
participant Roll/RUPD/EWRAP/ESTART
participant StateStore
participant Domain
Test->>LedgerHarness: run(chunk_size, callback)
activate LedgerHarness
LedgerHarness->>Domain: bootstrap
LedgerHarness->>StateStore: open fjall store
loop for each block chunk
LedgerHarness->>CardanoLogic: load()/compute()/commit()
activate CardanoLogic
opt Block processing
CardanoLogic->>Roll/RUPD/EWRAP/ESTART: process deltas
activate Roll/RUPD/EWRAP/ESTART
Note over Roll/RUPD/EWRAP/ESTART: Roll: MIR enqueue via EnqueueMir
Note over Roll/RUPD/EWRAP/ESTART: RUPD: compute rewards with parallel delegators
Note over Roll/RUPD/EWRAP/ESTART: EWRAP: process_pending_mirs, apply rewards, split unspendable
Note over Roll/RUPD/EWRAP/ESTART: ESTART: compute_avvm_reclamation at boundary
Roll/RUPD/EWRAP/ESTART->>StateStore: apply deltas and commit
deactivate Roll/RUPD/EWRAP/ESTART
end
deactivate CardanoLogic
LedgerHarness->>Test: callback(domain, work_unit)
end
deactivate LedgerHarness
sequenceDiagram
actor Harness as EpochPots Test
participant LedgerHarness
participant RUPD as RUPD Phase
participant EWRAP as EWRAP Phase
participant ESTART as ESTART Phase
participant CSV as CSV Dump
participant Compare as CSV Compare
Harness->>LedgerHarness: run for subject_epoch
activate LedgerHarness
Note over LedgerHarness: Load seed state and genesis
loop Process blocks
LedgerHarness->>RUPD: compute rewards
activate RUPD
Note over RUPD: Parallel delegator rewards<br/>Track applied_rewards, effective_mirs
RUPD-->>LedgerHarness: RewardMap with split treasury/reserves
deactivate RUPD
LedgerHarness->>EWRAP: apply rewards
activate EWRAP
Note over EWRAP: process_pending_mirs<br/>Split unspendable_to_treasury/reserves<br/>Track applied_rewards
EWRAP-->>LedgerHarness: BoundaryWork
deactivate EWRAP
alt Boundary epoch
LedgerHarness->>ESTART: compute AVVM & finalize
activate ESTART
Note over ESTART: compute_avvm_reclamation<br/>Build final EndStats/PotDelta
ESTART-->>LedgerHarness: completed epoch
deactivate ESTART
LedgerHarness->>CSV: dump delegation/stake/rewards/epochs/pparams
end
end
deactivate LedgerHarness
Harness->>Compare: compare_csvs_with_ignore
activate Compare
Note over Compare: Key-based diff against ground-truth<br/>Report adds/deletes/modifies per category
Compare-->>Harness: mismatch count
deactivate Compare
alt Mismatches found
Harness->>Harness: panic with aggregated failures
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~80 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
xtask/src/bootstrap.rs (1)
74-79: ⚠️ Potential issue | 🟡 Minor

Existing data protection is weakened when a seed path is configured.

The bail condition now skips when `seed_path.is_some()`, meaning existing data in `data_dir` can be silently overwritten by the seed copy (line 123) without `--force`. Was this intentional?

If the intent is to always allow seed-based initialization to proceed, consider at least logging a warning when overwriting, or checking `skip_seed` in this guard:

- if data_dir.exists() && dir_has_entries(&data_dir)? && !args.force && seed_path.is_none() {
+ if data_dir.exists() && dir_has_entries(&data_dir)? && !args.force && (seed_path.is_none() || args.skip_seed) {

crates/cardano/src/estart/reset.rs (1)
226-231: ⚠️ Potential issue | 🟡 Minor

`dbg!()` macros should not be in production code.

`dbg!()` writes to stderr and is typically flagged by `cargo clippy` (`clippy::dbg_macro`). Replace with structured `tracing::error!` or similar, especially since this is a consistency-check failure path that you'd want in production logs.

♻️ Suggested diff

  if !pots.is_consistent(epoch.initial_pots.max_supply()) {
-     dbg!(end);
-     dbg!(&epoch.initial_pots);
-     dbg!(&pots);
-     dbg!(delta);
+     tracing::error!(
+         ?end,
+         initial_pots = ?epoch.initial_pots,
+         ?pots,
+         ?delta,
+         "pot consistency check FAILED"
+     );
  }

As per coding guidelines, **/*.rs: "Run `cargo clippy --workspace --all-targets --all-features` and resolve all clippy warnings before committing changes."
🤖 Fix all issues with AI agents
In `@crates/cardano/src/ewrap/wrapup.rs`:
- Around line 172-173: The expressions use plain addition for effective +
to_treasury which can overflow before the call to
incentives.available_rewards.saturating_sub; change the logic in wrapup.rs to
compute the sum using saturating_add (e.g., let total =
effective.saturating_add(to_treasury)) and then use total for both consumed and
the argument to incentives.available_rewards.saturating_sub to avoid
intermediate overflow.
- Around line 157-164: The current assert! in wrapup.rs that compares effective
and applied_rewards_sum (assert!(effective == applied_rewards_sum, ... using
ctx.ending_state().number, effective, applied_rewards_sum)) will panic in
production; replace it with a non-panicking check: either convert to
debug_assert! for debug-only checks, or emit a tracing::error! with the same
diagnostic string and handle the mismatch as a soft failure (e.g., log the epoch
number and the diff and continue or return a Result::Err from the enclosing
function) so the node does not crash on mainnet edge cases. Ensure the
replacement references the same symbols (effective, applied_rewards_sum,
ctx.ending_state().number) so logs carry the same context.
In `@crates/cardano/src/hacks.rs`:
- Around line 80-83: The match arm that handles unknown pointer tuples (slot,
tx_idx, cert_idx) currently calls warn!(...) and then unconditionally panics via
panic!(), which will crash production; change this so the code logs a
descriptive warning including the tuple and does NOT panic in release builds:
replace the unconditional panic!() with a debug-only assertion (e.g.
debug_assert!(false, "...") or a conditional panic behind
cfg!(debug_assertions)) and return None (or an appropriate error) in the
non-debug path; keep the warn! call but enhance its message to include
slot/tx_idx/cert_idx for easier debugging.
In `@crates/cardano/src/rupd/deltas.rs`:
- Around line 150-182: EnqueueMir::undo currently can't restore a fully
overwritten entity because apply replaces the entity when overwrite is true;
change EnqueueMir to capture the prior state (e.g. add a field like prev:
Option<PendingMirState>) when apply runs with overwrite==true (store the
existing Option<PendingMirState> before overwriting), and then update undo to:
if overwrite==true restore *entity = self.prev.clone(), otherwise keep the
existing saturating_sub logic; reference the EnqueueMir::apply and
EnqueueMir::undo methods, the overwrite flag, PendingMirState, and mirror the
DequeueMir.prev approach for how prev is stored and restored.
In `@crates/testing/src/harness/cardano.rs`:
- Around line 293-308: copy_dir_recursive currently treats symlinks as regular
files because it uses file_type() which follows symlinks; update
copy_dir_recursive to call entry.path().symlink_metadata() (or
DirEntry::symlink_metadata) and check file_type.is_symlink(), then recreate the
symlink at dst instead of copying the target: use std::os::unix::fs::symlink or
the Windows equivalents (std::os::windows::fs::symlink_file / symlink_dir)
depending on target type, and for non-symlink dirs/files keep the existing
recursion and std::fs::copy behavior; ensure errors from symlink creation are
propagated as std::io::Result so callers of copy_dir_recursive see failures.
In `@tests/epoch_pots/compare.rs`:
- Around line 115-120: The printed `matched` value is wrong because total_rows
only comes from file1 while `total` includes DiffByteRecord::Add from file2;
modify the logic to compute `matched` directly by counting DiffByteRecord::Equal
records while iterating diffs (or otherwise track add counts from file2) instead
of using total_rows.saturating_sub(total + ignored); update the code that
computes `matched` (referencing matched, total_rows, total, ignored and
DiffByteRecord::Add/Equal) so the informational output reflects actual equal
rows.
🧹 Nitpick comments (24)
crates/cardano/src/roll/batch.rs (1)
253-255: Excellent documentation clarifying critical ordering semantics.

The added comments clearly document that delta application order is intentional and matches the reference Haskell ledger implementation. This is valuable context for maintainers, especially given the PR's focus on mainnet-specific edge-cases.

As per coding guidelines, please ensure `cargo test --workspace --all-features` passes to verify that the documented ordering behavior is covered by tests.

xtask/src/ground_truth/stake.rs (1)
54-56: Nit: prefer a safer comparison to avoid the `as i64` cast.

`count as i64` is safe here since `count` ≤ `PAGE_SIZE`, but using a uniform type avoids a potential `clippy::cast_possible_wrap` pedantic lint and is more idiomatic.

♻️ Suggested change

- if (count as i64) < PAGE_SIZE {
+ if count < PAGE_SIZE as usize {
      break;
  }

xtask/src/ground_truth/generate.rs (2)
100-103: `saturating_sub(2)` silently clamps to 0 for `subject_epoch` < 2.

If someone passes `--subject-epoch 0` or `1`, `performance_epoch` will silently be `0` rather than signaling an error. Since this only applies to a developer xtask and Shelley rewards don't begin until well after epoch 2, this is unlikely to bite in practice—but a guard or early bail-out would make intent clearer.

💡 Optional: add an early check

- let performance_epoch = args.subject_epoch.saturating_sub(2);
+ anyhow::ensure!(
+     args.subject_epoch >= 2,
+     "subject_epoch must be >= 2 (got {}); performance_epoch would be meaningless",
+     args.subject_epoch
+ );
+ let performance_epoch = args.subject_epoch - 2;
143-149: Rewards CSV filename omits the epoch, unlike delegation and stake.

`delegation-{performance_epoch}.csv` and `stake-{performance_epoch}.csv` embed the epoch to avoid collisions when the same `output_dir` is reused, but `rewards.csv` does not. If this is intentional (e.g., rewards are always overwritten), no action needed—but if it should parallel the others, consider `rewards-{performance_epoch}.csv`.

crates/cardano/src/hacks.rs (1)
37-78: Consider grouping mainnet pointer mappings with inline comments for traceability.

The preview and preprod sections have implicit grouping. The new mainnet block mixes `Some(...)` and `None` entries without annotation of where each pointer was observed (block height, tx hash, etc.). If these ever need debugging or updating, having a brief provenance note (similar to the proposal comments) would help. This is a minor maintainability nit for a growing list.

xtask/src/ground_truth/delegation.rs (1)
51-56: `amount as u64` silently wraps negative values.

`i64 as u64` performs a bitwise reinterpretation on negative values. While `epoch_stake.amount` should always be non-negative, a defensive `try_from` or `.max(0)` guard would be safer.

Suggested defensive cast

- let amount: i64 = row.get(2);
-
- let entry = pools
-     .entry(pool_bech32)
-     .or_insert_with(|| (pool_hash, 0));
- entry.1 += amount as u64;
+ let amount: i64 = row.get(2);
+ let amount: u64 = u64::try_from(amount)
+     .expect("epoch_stake.amount should be non-negative");
+
+ let entry = pools
+     .entry(pool_bech32)
+     .or_insert_with(|| (pool_hash, 0));
+ entry.1 += amount;

tests/epoch_pots/compare.rs (1)
51-55: Redundant `&` on `record` — Clippy will flag this.

`record` is already `&DiffByteRecord` from the slice iteration, so `ignore(&record)` passes `&&DiffByteRecord`. The extra `&` is unnecessary.

🔧 Proposed fix

- if ignore(&record) {
+ if ignore(record) {

As per coding guidelines, **/*.rs: "Run `cargo clippy --workspace --all-targets --all-features` and resolve all clippy warnings before committing changes."

crates/cardano/src/roll/epochs.rs (1)
328-348: Treasury MIR path logs at `info` level per credential — consider `debug`.

The `tracing::info!` on lines 339-345 fires for every credential in a treasury MIR certificate. The equivalent reserves path (lines 314-326) only logs at `warn` for negative amounts. In production, `info`-level per-credential logging could be noisy. Consider downgrading to `debug` to match the reserves path's approach.

🔇 Proposed fix

- tracing::info!(
+ tracing::debug!(
      source = "treasury",
      credential = ?cred,
      amount = amount,
      amount_u64 = amount_u64,
      "processing treasury MIR"
  );

crates/cardano/src/rewards/mod.rs (3)
454-456: Chunk size of 25 may be too small for Rayon.

With `DELEGATOR_CHUNK_SIZE = 25`, pools with tens of thousands of delegators will spawn many very small tasks. Rayon's work-stealing handles this, but the per-task overhead could outweigh benefits. A value of 100–500 is more typical for compute-bound work of this granularity.
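As a point of reference, here is a minimal, hedged sketch of the chunking pattern in question — the constant value and helper names are assumptions for illustration, not the PR's code; the idea is simply that larger chunks give Rayon fewer, larger tasks to schedule:

```rust
use rayon::prelude::*;

// Illustrative value only — the PR itself uses DELEGATOR_CHUNK_SIZE = 25.
const DELEGATOR_CHUNK_SIZE: usize = 250;

// Hypothetical stand-in for the per-delegator reward computation.
fn reward_for(delegator_stake: u64) -> u64 {
    delegator_stake / 1_000
}

// One Rayon task per chunk: the chunk size trades scheduling overhead
// against load-balancing granularity.
fn total_rewards(delegator_stakes: &[u64]) -> u64 {
    delegator_stakes
        .par_chunks(DELEGATOR_CHUNK_SIZE)
        .map(|chunk| chunk.iter().copied().map(reward_for).sum::<u64>())
        .sum()
}
```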
590-607: Consider `flat_map` over `map` + `flatten` for clarity.

Rayon's `flat_map` is more idiomatic than `map(...).flatten()` for this pattern.

♻️ Proposed fix

  let delegator_rewards: Vec<_> = delegators
      .par_chunks(DELEGATOR_CHUNK_SIZE)
-     .map(|chunk| {
+     .flat_map(|chunk| {
          compute_delegator_chunk(
              ctx,
              pool,
              total_pool_reward,
              pool_stake,
              circulating_supply,
              pool_params.cost,
              pool_margin.clone(),
              &owners,
              babbage_or_later,
              chunk,
          )
      })
-     .flatten()
      .collect();
351-362: `drain_all` unconditionally routes unspendable to treasury, but should support pre-Babbage behavior.

`drain_all` routes all unspendable rewards to `applied_unspendable_to_treasury` without a way to direct them to reserves. The companion method `drain_unspendable(to_reserves: bool)` demonstrates the correct pattern: pre-Babbage should return unspendable to reserves, post-Babbage to treasury.

Currently `drain_all` is only called in a single test and not used in production reward handling logic (which correctly uses `drain_unspendable(pre_babbage)`). However, the public method with hard-coded post-Babbage behavior creates maintenance risk. Add a `to_reserves` parameter to match `drain_unspendable`, or document clearly that `drain_all` is exclusively for post-Babbage/testing scenarios.

tests/epoch_pots/main.rs (2)
42-73: Unused macro parameter `$fixture_mod`.

The `$fixture_mod` identifier is captured by the macro but never referenced in the expansion body. This adds noise to each invocation site (lines 717–739) without purpose.

Suggested fix

  macro_rules! epoch_test {
-     ($test_name:ident, $fixture_mod:ident, $network:literal, $epoch:literal, $snapshot:literal) => {
+     ($test_name:ident, $network:literal, $epoch:literal, $snapshot:literal) => {
          #[test]
          fn $test_name() {

And update all invocations, e.g.:

-epoch_test!(test_mainnet_242, fixtures_mainnet_242, "mainnet", 242, 240);
+epoch_test!(test_mainnet_242, "mainnet", 242, 240);
254-260: Precision loss in `rational_to_f64` for large numerators/denominators.

Casting `i64`/`u64` to `f64` silently loses precision for values > 2^53. For protocol parameter rationals (typically small fractions), this is fine in practice, but it's worth noting as a known limitation.

xtask/src/util.rs (1)
18-44: Duplicate `copy_dir_recursive` exists in `crates/testing/src/harness/cardano.rs`.

Both implementations appear identical. Consider extracting this utility into a shared crate (or a thin utility crate) to avoid maintaining two copies. That said, since `xtask` and the testing crate have distinct dependency graphs, keeping them separate may be intentional.

crates/cardano/src/lib.rs (1)
362-365: Consider renaming `Cache::stability_window` to reflect the semantic change.

The field still says `stability_window` but now holds the randomness stability window (4k/f) rather than the general stability window (3k/f). This could mislead future readers who expect the 3k/f value.

💡 Suggested rename

  pub(crate) struct Cache {
      pub eras: ChainSummary,
-     pub stability_window: u64,
+     pub randomness_stability_window: u64,
  }

Then update the two usage sites (`self.cache.stability_window`) accordingly.

crates/cardano/src/roll/pools.rs (1)
157-164: Extra blank line at 162–163.

Minor formatting nit — there are two consecutive blank lines before the closing brace.

Diff

  fn apply(&mut self, entity: &mut Option<PoolState>) {
      if let Some(entity) = entity {
          entity.blocks_minted_total += self.count;
          let live = entity.snapshot.unwrap_live_mut();
          live.blocks_minted += self.count;
-
-
      }
  }

crates/cardano/src/estart/loading.rs (2)
57-92: AVVM reclamation logic looks correct.

The approach of querying remaining genesis AVVM UTxOs and summing their values matches the expected behavior at the Shelley→Allegra boundary.

One minor observation: `tracing::warn!` at line 85 will fire on every normal chain sync at this boundary. Since AVVM reclamation is expected (not an anomaly), `info` may be a more appropriate level.

💡 Optional level change

- tracing::warn!(
+ tracing::info!(
      remaining_count = remaining.len(),
      total_avvm = total,
      "AVVM reclamation at Shelley→Allegra boundary"
  );
76-83: Silent fallback on decode failure may mask data issues.

Line 81's `.unwrap_or(0)` silently ignores UTxOs that fail to decode, which could undercount reclamation. For genesis AVVM UTxOs this is unlikely, but a debug-level log on the error path would aid future troubleshooting.

crates/cardano/src/ewrap/rewards.rs (2)
118-119: Redundant `total_value()` calls — reuse `reward_total`.

`reward_total` is computed on line 118 but then `reward.total_value()` is called again on lines 138 and 162. Since the reward object is not mutated between these calls, reuse `reward_total` to avoid redundancy.

♻️ Suggested diff

  if !account.is_registered() {
-     let total = reward.total_value();
-
      // Accounts that were registered at RUPD startStep time but deregistered
      // before EWRAP. Their rewards go to treasury per the Haskell ledger's
      // applyRUpdFiltered (frTotalUnregistered → casTreasury).
@@ -148,19 +146,17 @@
      warn!(
          account=%id,
          credential=?account.credential,
-         amount=total,
+         amount=reward_total,
          was_spendable=%was_spendable,
          "reward not applied (unregistered account) -> treasury"
      );
-     ctx.rewards.return_reward_to_treasury(total);
+     ctx.rewards.return_reward_to_treasury(reward_total);
      return Ok(());
  }

  self.change(AssignRewards {
      account: id.clone(),
-     reward: reward.total_value(),
+     reward: reward_total,
  });

Also applies to: 137-162
196-216: Double iteration over `iter_pending()` can be collapsed into a single pass.

Lines 200–207 iterate `iter_pending()` twice with complementary filters. A single fold would halve the work and reduce code duplication.

♻️ Suggested diff

- if pending_before_drain > 0 {
-     let pending_spendable: u64 = ctx.rewards.iter_pending()
-         .filter(|(_, r)| r.is_spendable())
-         .map(|(_, r)| r.total_value())
-         .sum();
-     let pending_unspendable: u64 = ctx.rewards.iter_pending()
-         .filter(|(_, r)| !r.is_spendable())
-         .map(|(_, r)| r.total_value())
-         .sum();
+ if pending_before_drain > 0 {
+     let (pending_spendable, pending_unspendable) = ctx.rewards.iter_pending()
+         .fold((0u64, 0u64), |(sp, unsp), (_, r)| {
+             if r.is_spendable() {
+                 (sp + r.total_value(), unsp)
+             } else {
+                 (sp, unsp + r.total_value())
+             }
+         });

crates/cardano/src/estart/reset.rs (1)
181-213: Consider downgrading this log from `warn` to `info` or `debug`.

This `tracing::warn!` fires on every single epoch transition and is not indicating an abnormal condition — it's operational telemetry. Using `warn` level here will create noise in production monitoring. A `debug` or `info` level would be more appropriate for regular epoch accounting logs.

crates/cardano/src/ewrap/loading.rs (1)
355-358: Long lines may trigger `rustfmt` / clippy warnings.

Lines 356–357 exceed typical line-length limits. Consider breaking them up for readability and to avoid formatting tool conflicts.

♻️ Suggested diff

- let spendable_total: u64 = pending.values().filter(|r| r.is_spendable()).map(|r| r.total_value()).sum();
- let unspendable_total: u64 = pending.values().filter(|r| !r.is_spendable()).map(|r| r.total_value()).sum();
+ let spendable_total: u64 = pending
+     .values()
+     .filter(|r| r.is_spendable())
+     .map(|r| r.total_value())
+     .sum();
+ let unspendable_total: u64 = pending
+     .values()
+     .filter(|r| !r.is_spendable())
+     .map(|r| r.total_value())
+     .sum();

As per coding guidelines, **/*.rs: "Ensure the project builds without warnings by running `cargo build --workspace --all-targets --all-features`."

crates/cardano/src/rupd/loading.rs (1)
130-134: `_rupd_slot` parameter is accepted but unused.

The `_rupd_slot` parameter is computed by the caller (lines 316–319) but never used in `StakeSnapshot::load`. If pre-Babbage filtering is planned for a follow-up, consider adding a TODO comment to clarify intent. Otherwise, the caller-side computation on lines 316–317 is dead code.

crates/cardano/src/pots.rs (1)
327-327: Unnecessary `.clone()` on `tau`.

`tau` is an owned `Ratio` and is not used after this line. The `.clone()` is redundant.

Suggested fix

- let treasury_tax = floor_int!(tau.clone() * ratio!(reward_pot), u64);
+ let treasury_tax = floor_int!(tau * ratio!(reward_pot), u64);
assert!(
    effective == applied_rewards_sum,
    "EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({}), diff = {}",
    ctx.ending_state().number,
    effective,
    applied_rewards_sum,
    effective as i64 - applied_rewards_sum as i64
);
assert! in production code will crash the node on mismatch.
If any mainnet edge case causes effective != applied_rewards_sum, this panics the node. Consider replacing with tracing::error! and a soft failure, or at minimum use debug_assert! so it only fires in debug/test builds — consistent with the debug_assert! pattern used in estart/reset.rs line 233 for pot consistency.
♻️ Suggested diff
- assert!(
- effective == applied_rewards_sum,
- "EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({}), diff = {}",
- ctx.ending_state().number,
- effective,
- applied_rewards_sum,
- effective as i64 - applied_rewards_sum as i64
- );
+ if effective != applied_rewards_sum {
+ tracing::error!(
+ epoch = ctx.ending_state().number,
+ %effective,
+ %applied_rewards_sum,
+ diff = %(effective as i64 - applied_rewards_sum as i64),
+ "EWRAP: effective_rewards != applied_rewards_sum"
+ );
+ }
+ debug_assert_eq!(effective, applied_rewards_sum,
+ "EWRAP epoch {}: effective_rewards ({}) != applied_rewards_sum ({})",
+ ctx.ending_state().number, effective, applied_rewards_sum,
+ );

🤖 Prompt for AI Agents
In `@crates/cardano/src/ewrap/wrapup.rs` around lines 157 - 164, The current
assert! in wrapup.rs that compares effective and applied_rewards_sum
(assert!(effective == applied_rewards_sum, ... using ctx.ending_state().number,
effective, applied_rewards_sum)) will panic in production; replace it with a
non-panicking check: either convert to debug_assert! for debug-only checks, or
emit a tracing::error! with the same diagnostic string and handle the mismatch
as a soft failure (e.g., log the epoch number and the diff and continue or
return a Result::Err from the enclosing function) so the node does not crash on
mainnet edge cases. Ensure the replacement references the same symbols
(effective, applied_rewards_sum, ctx.ending_state().number) so logs carry the
same context.
consumed = %(effective + to_treasury),
returned = %(incentives.available_rewards.saturating_sub(effective + to_treasury)),
Potential u64 overflow in effective + to_treasury before saturating_sub.
Line 173 computes effective + to_treasury inside saturating_sub. If both values are large, the addition overflows (wraps) before the saturating subtraction is applied. Use saturating_add for safety.
♻️ Suggested diff
- consumed = %(effective + to_treasury),
- returned = %(incentives.available_rewards.saturating_sub(effective + to_treasury)),
+ consumed = %(effective.saturating_add(to_treasury)),
+ returned = %(incentives.available_rewards.saturating_sub(effective.saturating_add(to_treasury))),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| consumed = %(effective + to_treasury), | |
| returned = %(incentives.available_rewards.saturating_sub(effective + to_treasury)), | |
| consumed = %(effective.saturating_add(to_treasury)), | |
| returned = %(incentives.available_rewards.saturating_sub(effective.saturating_add(to_treasury))), |
🤖 Prompt for AI Agents
In `@crates/cardano/src/ewrap/wrapup.rs` around lines 172 - 173, The expressions
use plain addition for effective + to_treasury which can overflow before the
call to incentives.available_rewards.saturating_sub; change the logic in
wrapup.rs to compute the sum using saturating_add (e.g., let total =
effective.saturating_add(to_treasury)) and then use total for both consumed and
the argument to incentives.available_rewards.saturating_sub to avoid
intermediate overflow.
(slot, tx_idx, cert_idx) => {
    warn!(slot, tx_idx, cert_idx, "missing pointer mapping");
    panic!()
}
Unconditional panic!() for unknown pointers will crash the node in production.
The previous code reportedly had the panic commented out. Enabling it is useful for catching missing mappings during development, but in a deployed node any unmapped pointer (e.g., from a maliciously crafted transaction in a pre-deprecation era) would cause a hard crash.
Consider using a panic! only in debug/test builds, or at least providing a descriptive message. In release builds, returning None with the warning may be safer—or use debug_assert! if you only want this to fire during development.
💡 Suggested approach
(slot, tx_idx, cert_idx) => {
warn!(slot, tx_idx, cert_idx, "missing pointer mapping");
- panic!()
+ // Fail hard in dev/test so missing mappings are caught immediately.
+ // In release builds, fall back to None to avoid crashing the node.
+ if cfg!(debug_assertions) {
+ panic!("unmapped pointer ({slot}, {tx_idx}, {cert_idx})");
+ }
+ None
}
🤖 Prompt for AI Agents
In `@crates/cardano/src/hacks.rs` around lines 80 - 83, The match arm that handles
unknown pointer tuples (slot, tx_idx, cert_idx) currently calls warn!(...) and
then unconditionally panics via panic!(), which will crash production; change
this so the code logs a descriptive warning including the tuple and does NOT
panic in release builds: replace the unconditional panic!() with a debug-only
assertion (e.g. debug_assert!(false, "...") or a conditional panic behind
cfg!(debug_assertions)) and return None (or an appropriate error) in the
non-debug path; keep the warn! call but enhance its message to include
slot/tx_idx/cert_idx for easier debugging.
fn apply(&mut self, entity: &mut Option<Self::Entity>) {
    // Behavior depends on overwrite flag (determined by protocol version):
    // - Pre-Alonzo (overwrite=true): Later MIRs overwrite earlier ones (Map.union semantics)
    // - Alonzo+ (overwrite=false): MIRs accumulate (Map.unionWith (<>) semantics)
    if self.overwrite {
        // Pre-Alonzo: overwrite with new values
        *entity = Some(PendingMirState {
            credential: self.credential.clone(),
            from_reserves: self.from_reserves,
            from_treasury: self.from_treasury,
        });
    } else if let Some(existing) = entity.as_mut() {
        // Alonzo+: accumulate
        existing.from_reserves += self.from_reserves;
        existing.from_treasury += self.from_treasury;
    } else {
        *entity = Some(PendingMirState {
            credential: self.credential.clone(),
            from_reserves: self.from_reserves,
            from_treasury: self.from_treasury,
        });
    }
}

fn undo(&self, entity: &mut Option<Self::Entity>) {
    if let Some(existing) = entity.as_mut() {
        existing.from_reserves = existing.from_reserves.saturating_sub(self.from_reserves);
        existing.from_treasury = existing.from_treasury.saturating_sub(self.from_treasury);
        if existing.from_reserves == 0 && existing.from_treasury == 0 {
            *entity = None;
        }
    }
}
EnqueueMir::undo does not correctly restore overwritten state.
When overwrite=true (pre-Alonzo), apply replaces the entire entity. However, undo uses saturating_sub on the current values, which does not restore the previous entity state that was overwritten.
Example: If previous state was {reserves: 100, treasury: 0} and this delta overwrites with {reserves: 50, treasury: 0}, then undo computes 50 - 50 = 0 and clears the entity — but the correct rollback should restore {reserves: 100, treasury: 0}.
Store the previous state in a field (similar to DequeueMir.prev) and restore it on undo when overwrite=true.
🐛 Proposed fix
pub struct EnqueueMir {
pub credential: StakeCredential,
pub from_reserves: u64,
pub from_treasury: u64,
pub overwrite: bool,
+ /// Previous state stored for rollback (used when overwrite=true)
+ prev: Option<PendingMirState>,
}
impl EnqueueMir {
pub fn new(
credential: StakeCredential,
from_reserves: u64,
from_treasury: u64,
overwrite: bool,
) -> Self {
Self {
credential,
from_reserves,
from_treasury,
overwrite,
+ prev: None,
}
}
// ...
fn apply(&mut self, entity: &mut Option<Self::Entity>) {
+ self.prev = entity.clone();
if self.overwrite {
*entity = Some(PendingMirState {
// ... (rest unchanged)
fn undo(&self, entity: &mut Option<Self::Entity>) {
- if let Some(existing) = entity.as_mut() {
- existing.from_reserves = existing.from_reserves.saturating_sub(self.from_reserves);
- existing.from_treasury = existing.from_treasury.saturating_sub(self.from_treasury);
- if existing.from_reserves == 0 && existing.from_treasury == 0 {
- *entity = None;
- }
- }
+ *entity = self.prev.clone();
  }

🤖 Prompt for AI Agents
In `@crates/cardano/src/rupd/deltas.rs` around lines 150 - 182, EnqueueMir::undo
currently can't restore a fully overwritten entity because apply replaces the
entity when overwrite is true; change EnqueueMir to capture the prior state
(e.g. add a field like prev: Option<PendingMirState>) when apply runs with
overwrite==true (store the existing Option<PendingMirState> before overwriting),
and then update undo to: if overwrite==true restore *entity = self.prev.clone(),
otherwise keep the existing saturating_sub logic; reference the
EnqueueMir::apply and EnqueueMir::undo methods, the overwrite flag,
PendingMirState, and mirror the DequeueMir.prev approach for how prev is stored
and restored.
pub fn copy_dir_recursive(src: &Path, dst: &Path) -> std::io::Result<()> {
    std::fs::create_dir_all(dst)?;

    for entry in std::fs::read_dir(src)? {
        let entry = entry?;
        let file_type = entry.file_type()?;
        let dest_path = dst.join(entry.file_name());

        if file_type.is_dir() {
            copy_dir_recursive(&entry.path(), &dest_path)?;
        } else {
            std::fs::copy(entry.path(), &dest_path)?;
        }
    }

    Ok(())
}
copy_dir_recursive doesn't handle symlinks.
If a symlink is encountered, file_type.is_dir() returns false for directory symlinks (it checks the symlink itself, not the target), so it would attempt std::fs::copy which follows symlinks and copies the target file. For directory symlinks, this silently skips the contents. This is probably fine for test fixtures but worth noting.
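For illustration, a symlink-preserving variant could look like the following sketch (Unix-only; the function name and structure are assumptions for this example, not the crate's implementation):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch only: preserve symlinks instead of copying their targets (Unix-specific).
fn copy_dir_preserving_symlinks(src: &Path, dst: &Path) -> io::Result<()> {
    fs::create_dir_all(dst)?;

    for entry in fs::read_dir(src)? {
        let entry = entry?;
        let dest_path = dst.join(entry.file_name());
        // symlink_metadata never follows symlinks, so link entries are detected reliably.
        let meta = fs::symlink_metadata(entry.path())?;

        if meta.file_type().is_symlink() {
            // Recreate the link itself rather than copying what it points to.
            let target = fs::read_link(entry.path())?;
            std::os::unix::fs::symlink(target, &dest_path)?;
        } else if meta.is_dir() {
            copy_dir_preserving_symlinks(&entry.path(), &dest_path)?;
        } else {
            fs::copy(entry.path(), &dest_path)?;
        }
    }

    Ok(())
}
```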
🤖 Prompt for AI Agents
In `@crates/testing/src/harness/cardano.rs` around lines 293 - 308,
copy_dir_recursive currently treats symlinks as regular files because it uses
file_type() which follows symlinks; update copy_dir_recursive to call
entry.path().symlink_metadata() (or DirEntry::symlink_metadata) and check
file_type.is_symlink(), then recreate the symlink at dst instead of copying the
target: use std::os::unix::fs::symlink or the Windows equivalents
(std::os::windows::fs::symlink_file / symlink_dir) depending on target type, and
for non-symlink dirs/files keep the existing recursion and std::fs::copy
behavior; ensure errors from symlink creation are propagated as std::io::Result
so callers of copy_dir_recursive see failures.
}

let matched = total_rows.saturating_sub(total + ignored);
eprintln!(" {} rows matched, {} differences", matched, total);

Ok(total)
matched count may be misleading for additions from file2.
total_rows is counted from file1 only (line 16-21), but total includes DiffByteRecord::Add entries from file2 that aren't in file1. This means matched (line 117) could undercount or even saturate to 0. Since this is a test utility for informational output, it's minor but worth noting.
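A sketch of the direct-count approach, using a hypothetical DiffRecord enum rather than the real csv-diff types:

```rust
// Hypothetical stand-in for the diff record type; variant names mirror the review's wording.
enum DiffRecord {
    Equal,
    Add,
    Delete,
    Modify,
}

// Count matches by inspecting the diff stream itself instead of
// computing total_rows - (differences + ignored).
fn count_matched(diffs: &[DiffRecord]) -> usize {
    diffs
        .iter()
        .filter(|d| matches!(d, DiffRecord::Equal))
        .count()
}
```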
🤖 Prompt for AI Agents
In `@tests/epoch_pots/compare.rs` around lines 115 - 120, The printed `matched`
value is wrong because total_rows only comes from file1 while `total` includes
DiffByteRecord::Add from file2; modify the logic to compute `matched` directly
by counting DiffByteRecord::Equal records while iterating diffs (or otherwise
track add counts from file2) instead of using total_rows.saturating_sub(total +
ignored); update the code that computes `matched` (referencing matched,
total_rows, total, ignored and DiffByteRecord::Add/Equal) so the informational
output reflects actual equal rows.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@crates/cardano/src/hacks.rs`:
- Around line 59-63: Add a brief inline comment above the
StakeCredential::AddrKeyhash entry for the credential
"1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395" (the mapping for
tuple (18446744073709551615, 1221092, 2)) explaining that this credential also
appears for (20095460, 2, 0), that the slot value 18446744073709551615 is
u64::MAX and used as a sentinel in mainnet edge-case mappings, and reference the
repo context/commit "fix/mainnet-200-plus" so future readers understand the
duplicate-pointer provenance.
In `@crates/cardano/src/model.rs`:
- Around line 603-622: The is_registered_at method's (Some(reg), Some(dereg))
arm incorrectly treats any presence of a deregistration as ending the
registration period; update the logic in is_registered_at so the re-registration
case is handled: change the final match arm (Some(reg), Some(dereg)) to return
true when reg <= slot AND (dereg > slot OR dereg <= reg) (i.e., registered at
slot if it was re-registered after the last deregistration or if deregistration
happens after the target slot), keeping the rest of the match arms unchanged.
In `@xtask/src/ground_truth/delegation.rs`:
- Around line 51-56: The code currently casts amount: i64 to u64 with amount as
u64 which will wrap negative values; change the logic in the block that builds
pools for pool_bech32/pool_hash so you validate or convert safely (e.g., use
u64::try_from(amount) or amount.try_into()) and handle the Err case instead of
allowing a silent wrap — either return or log an error, skip the row, or assert
non-negative depending on desired behavior; update the entry update (entry.1 +=
...) to use the safely converted u64 value.
🧹 Nitpick comments (8)
xtask/src/ground_truth/delegation.rs (1)
16-16: `PAGE_SIZE` is duplicated across `delegation.rs`, `rewards.rs`, and `stake.rs`.

Consider extracting this constant to the parent `ground_truth` module so all three files share a single definition.

crates/cardano/src/model.rs (1)
1812-1815: Minor: `total_value()` could use `saturating_add` for defensive overflow protection.

Both `from_reserves` and `from_treasury` are `u64` Lovelace values. While overflow is unlikely in practice, other arithmetic in this codebase uses `add!`/`sub!` macros for safe arithmetic.

Proposed fix

  pub fn total_value(&self) -> u64 {
-     self.from_reserves + self.from_treasury
+     self.from_reserves.saturating_add(self.from_treasury)
  }

crates/cardano/src/roll/pools.rs (1)
157-164: Nit: extra blank line.

Lines 162–163 have an extra blank line. Minor cosmetic issue.

Proposed fix

  fn apply(&mut self, entity: &mut Option<PoolState>) {
      if let Some(entity) = entity {
          entity.blocks_minted_total += self.count;
          let live = entity.snapshot.unwrap_live_mut();
          live.blocks_minted += self.count;
-
-
      }
  }

crates/cardano/src/ewrap/rewards.rs (2)
126-135: Consider downgrading from `info` to `debug` level.

This logs at `info` level for every account that has both `registered_at` and `deregistered_at` set. On mainnet with many re-registrations, this could produce significant log volume. The re-registration pattern is not inherently anomalous.
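A hedged sketch of what the downgraded log might look like, using an illustrative stand-in for the account type rather than the crate's real AccountState:

```rust
use tracing::debug;

// Sketch only: field names and this struct are stand-ins, not the crate's types.
struct AccountState {
    registered_at: Option<u64>,
    deregistered_at: Option<u64>,
}

fn log_reregistration(id: &str, account: &AccountState) {
    if account.registered_at.is_some() && account.deregistered_at.is_some() {
        debug!(
            account = %id,
            registered_at = ?account.registered_at,
            deregistered_at = ?account.deregistered_at,
            "account re-registered after an earlier deregistration"
        );
    }
}
```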
118-119: Minor: `reward.total_value()` is computed three times.

`reward_total` (line 118) is only used for logging. The same value is recomputed as `total` (line 138) and again via `reward.total_value()` (line 162). Consider reusing `reward_total` to avoid redundant iteration.

Proposed fix

  if !account.is_registered() {
-     let total = reward.total_value();
      warn!(
          account=%id,
          credential=?account.credential,
-         amount=total,
+         amount=reward_total,
          was_spendable=%was_spendable,
          "reward not applied (unregistered account) -> treasury"
      );
-     ctx.rewards.return_reward_to_treasury(total);
+     ctx.rewards.return_reward_to_treasury(reward_total);
      return Ok(());
  }

  self.change(AssignRewards {
      account: id.clone(),
-     reward: reward.total_value(),
+     reward: reward_total,
  });

Also applies to: 138-138, 162-162
crates/cardano/src/ewrap/loading.rs (1)
372-386: Minor: long line could be split for readability.

Line 373 (`let spendable_total: u64 = ...`) is quite long. Consider splitting for readability, though this is purely cosmetic.

crates/cardano/src/roll/epochs.rs (2)
329-349: Consider downgrading the per-credential treasury MIR log from `info` to `debug`.

Line 340 logs at `info` level for each individual treasury MIR credential. While treasury MIRs are infrequent on mainnet, this is more verbose than the reserves path which has no per-credential info log. The summary stats tracked in `RollingStats.treasury_mirs` already capture the totals.

Proposed fix

- tracing::info!(
+ tracing::debug!(
      source = "treasury",
      credential = ?cred,
      amount = amount,
      amount_u64 = amount_u64,
      "processing treasury MIR"
  );
314-351: Minor: inconsistent variable naming between reserves and treasury paths.

The reserves path uses `amount` directly after `max(0) as u64` (line 325), while the treasury path introduces `amount_u64` (line 339). Consider using the same naming for consistency.

Proposed fix

- let amount_u64 = amount.max(0) as u64;
- tracing::debug!(
-     source = "treasury",
-     credential = ?cred,
-     amount = amount,
-     amount_u64 = amount_u64,
-     "processing treasury MIR"
- );
- self.stats_delta.as_mut().unwrap().treasury_mirs += amount_u64;
+ let amount = amount.max(0) as u64;
+ tracing::debug!(
+     source = "treasury",
+     credential = ?cred,
+     amount,
+     "processing treasury MIR"
+ );
+ self.stats_delta.as_mut().unwrap().treasury_mirs += amount;
(18446744073709551615, 1221092, 2) => Some(StakeCredential::AddrKeyhash(
    "1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395"
        .parse()
        .unwrap(),
)),
🧩 Analysis chain
🏁 Script executed:

# First, read the hacks.rs file to see the lines in question
cd crates/cardano/src && cat -n hacks.rs | sed -n '35,85p'

Length of output: 2411

🏁 Script executed:

# Find BlockSlot type definition
rg -n 'type BlockSlot|struct BlockSlot' --type rust

Length of output: 107

🏁 Script executed:

# Find Pointer struct and slot() method
ast-grep --pattern 'struct Pointer { $$$ }'

Length of output: 38

🏁 Script executed:

# Search for slot method on Pointer
rg -n 'fn slot\(\&self\)' --type rust -A 2

Length of output: 754

🏁 Script executed:

# Read the context around the pointer match statement to find any explanatory comments
cd crates/cardano/src && cat -n hacks.rs | sed -n '1,40p'

Length of output: 1871

🏁 Script executed:

# Check if there are any comments or documentation around line 59
cd crates/cardano/src && cat -n hacks.rs | sed -n '40,90p'

Length of output: 2274

🏁 Script executed:

# Check git log for context about these additions
cd crates/cardano/src && git log --oneline -20 hacks.rs 2>/dev/null || echo "Git history not available"

Length of output: 136

🏁 Script executed:

# Look for any PR or issue references in comments in the file
cd crates/cardano/src && grep -n "TODO\|FIXME\|XXX\|HACK\|NOTE" hacks.rs | head -20

Length of output: 38
Add a comment documenting the duplicate credential mapping and u64::MAX edge case.
The credential 1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395 maps to both (20095460, 2, 0) on line 43 and (18446744073709551615, 1221092, 2) on line 59. The slot value 18446744073709551615 is u64::MAX. Since these are confirmed mainnet edge-case mappings (evident from the file's purpose and the git commit "fix/mainnet-200-plus"), a brief comment explaining the provenance would help future readers understand why the same credential resolves from two different pointer tuples, especially one with a sentinel-like slot value.
🤖 Prompt for AI Agents
In `@crates/cardano/src/hacks.rs` around lines 59 - 63, Add a brief inline comment
above the StakeCredential::AddrKeyhash entry for the credential
"1332d859dd71f5b1089052a049690d81f7367eac9fafaef80b4da395" (the mapping for
tuple (18446744073709551615, 1221092, 2)) explaining that this credential also
appears for (20095460, 2, 0), that the slot value 18446744073709551615 is
u64::MAX and used as a sentinel in mainnet edge-case mappings, and reference the
repo context/commit "fix/mainnet-200-plus" so future readers understand the
duplicate-pointer provenance.
/// Check if the account was registered at a specific slot.
/// Used for RUPD-time filtering where we need registration status at the RUPD slot,
/// not the current chain tip.
///
/// Note: When an account deregisters, `registered_at` is cleared to `None`.
/// So we handle the case where `deregistered_at` is set but `registered_at` is not:
/// if the deregistration slot is after the target slot, the account was registered.
pub fn is_registered_at(&self, slot: u64) -> bool {
    match (self.registered_at, self.deregistered_at) {
        // Never registered and never deregistered
        (None, None) => false,
        // Deregistered but registered_at was cleared - account was registered
        // until the deregistration slot. Check if target slot is before deregistration.
        (None, Some(dereg)) => dereg > slot,
        // Currently registered, never deregistered - check if registered by target slot
        (Some(reg), None) => reg <= slot,
        // Both set (re-registration case) - check if registered and not yet deregistered
        (Some(reg), Some(dereg)) => reg <= slot && dereg > slot,
    }
}
🧩 Analysis chain
🏁 Script executed:

# Find all call sites of is_registered_at
rg -n 'is_registered_at' --type=rust

Length of output: 143

🏁 Script executed:

# Look for tests related to is_registered_at
rg -n 'is_registered_at|AccountState' --type=rust | head -50

Length of output: 4919

🏁 Script executed:

# Check the AccountState struct definition and understand registered_at/deregistered_at handling
rg -n 'struct AccountState' -A 30 --type=rust

Length of output: 1662

🏁 Script executed:

# More thorough search for is_registered_at including comments, tests, etc
rg 'is_registered_at' -B 2 -A 5

Length of output: 838

🏁 Script executed:

# Look for test files that might test AccountState methods
fd '.*test.*\.rs$' -x rg -l 'AccountState' {}

Length of output: 38

🏁 Script executed:

# Check the is_registered method (without _at) to understand the pattern
rg -n 'fn is_registered\(' -A 8 --type=rust

Length of output: 648

🏁 Script executed:

# Look at how registered_at and deregistered_at are modified in accounts.rs
rg -n 'registered_at|deregistered_at' crates/cardano/src/roll/accounts.rs -B 2 -A 2

Length of output: 1930

🏁 Script executed:

# Check if is_registered_at is pub and when it was added
git log --oneline -p --all -- '**/model.rs' | grep -A 10 'is_registered_at' | head -20

Length of output: 142

🏁 Script executed:

# Check if there's a scenario where both registered_at and deregistered_at can be Some
# by looking at the full StakeDeregistration::apply method
rg -n 'impl.*StakeDeregistration' -A 50 crates/cardano/src/roll/accounts.rs | head -80

Length of output: 2326

🏁 Script executed:

# Double-check: look for any usage of is_registered in the codebase to see the pattern
rg 'is_registered\(\)' -B 2 -A 2 --type=rust

Length of output: 3723

🏁 Script executed:

# Final check: see if is_registered_at appears in any documentation or comments
rg 'RUPD|rupd.*registration' -i --type=rust | head -20

Length of output: 1913

🏁 Script executed:

# Check the rupd module to see if is_registered_at should be used there
rg -n 'is_registered' crates/cardano/src/rupd/ -B 3 -A 3

Length of output: 502

🏁 Script executed:

# Check if there's any TODO or FIXME related to is_registered_at or registration timing
rg 'TODO|FIXME' crates/cardano/src/model.rs | grep -i 'regist'

Length of output: 38

🏁 Script executed:

# Look at the comment above is_registered_at more carefully
rg -B 10 'pub fn is_registered_at' crates/cardano/src/model.rs | head -15

Length of output: 583
is_registered_at logic error with re-registration case.
When an account is re-registered (e.g., deregistered at slot 50, re-registered at slot 100), the state becomes registered_at = Some(100), deregistered_at = Some(50). The current implementation evaluates 100 <= 110 && 50 > 110 → false, even though the account is registered at slot 110.
The (Some(reg), Some(dereg)) arm fails to account for re-registration where reg > dereg (consistent with is_registered() logic which treats start >= end as registered).
Proposed fix
pub fn is_registered_at(&self, slot: u64) -> bool {
match (self.registered_at, self.deregistered_at) {
// Never registered and never deregistered
(None, None) => false,
// Deregistered but registered_at was cleared - account was registered
// until the deregistration slot. Check if target slot is before deregistration.
(None, Some(dereg)) => dereg > slot,
// Currently registered, never deregistered - check if registered by target slot
(Some(reg), None) => reg <= slot,
- // Both set (re-registration case) - check if registered and not yet deregistered
- (Some(reg), Some(dereg)) => reg <= slot && dereg > slot,
+ // Both set - handle re-registration where reg > dereg means currently registered
+ (Some(reg), Some(dereg)) => {
+ if reg >= dereg {
+ // Re-registered after deregistration: registered from reg onwards
+ reg <= slot
+ } else {
+ // Registered then deregistered: registered in [reg, dereg)
+ reg <= slot && dereg > slot
+ }
+ }
}
}

🤖 Prompt for AI Agents
In `@crates/cardano/src/model.rs` around lines 603 - 622, The is_registered_at
method's (Some(reg), Some(dereg)) arm incorrectly treats any presence of a
deregistration as ending the registration period; update the logic in
is_registered_at so the re-registration case is handled: change the final match
arm (Some(reg), Some(dereg)) to return true when reg <= slot AND (dereg > slot
OR dereg <= reg) (i.e., registered at slot if it was re-registered after the
last deregistration or if deregistration happens after the target slot), keeping
the rest of the match arms unchanged.
let amount: i64 = row.get(2);

let entry = pools
    .entry(pool_bech32)
    .or_insert_with(|| (pool_hash, 0));
entry.1 += amount as u64;
Unchecked i64 → u64 cast may silently wrap negative values.
If amount is ever negative (e.g., due to data anomalies in DBSync), amount as u64 wraps to a very large number, silently corrupting the aggregated total. Use try_from or at minimum assert non-negative.
Proposed fix
- let amount: i64 = row.get(2);
-
- let entry = pools
- .entry(pool_bech32)
- .or_insert_with(|| (pool_hash, 0));
- entry.1 += amount as u64;
+ let amount: i64 = row.get(2);
+ let amount: u64 = u64::try_from(amount).with_context(|| {
+ format!("negative delegation amount for pool {pool_bech32}: {amount}")
+ })?;
+
+ let entry = pools
+ .entry(pool_bech32)
+ .or_insert_with(|| (pool_hash, 0));
+ entry.1 += amount;

🤖 Prompt for AI Agents
In `@xtask/src/ground_truth/delegation.rs` around lines 51 - 56, The code
currently casts amount: i64 to u64 with amount as u64 which will wrap negative
values; change the logic in the block that builds pools for
pool_bech32/pool_hash so you validate or convert safely (e.g., use
u64::try_from(amount) or amount.try_into()) and handle the Err case instead of
allowing a silent wrap — either return or log an error, skip the row, or assert
non-negative depending on desired behavior; update the entry update (entry.1 +=
...) to use the safely converted u64 value.