chore(deps): quarterly batch dependency upgrade 2026-Q2 #995

Patel-Raj11 merged 10 commits into main
Conversation
**Walkthrough**

The PR migrates the project's integration-test infrastructure from `rippled` to `xrpld`.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/skills/batch-deps-upgrade/README.md:
- Line 9: The README prerequisite wording still references starting a "rippled"
container but the repo/workflow uses "xrpld"; update the wording by replacing
the "rippled" mention with "xrpld" so the prereq accurately reflects the
integration node used (search for the literal string "rippled" in
.claude/skills/batch-deps-upgrade/README.md and change it to "xrpld" and adjust
any surrounding sentence if needed to keep grammar consistent).
- Around line 15-17: Add a language identifier to the fenced code block that
currently contains "/batch-deps-upgrade": change the opening fence from ``` to
```bash so the block becomes a bash-snippet; update the fence around the snippet
that includes the literal "/batch-deps-upgrade" to suppress markdownlint MD040.
In @.claude/skills/batch-deps-upgrade/SKILL.md:
- Around line 11-18: The step currently hard-codes the repo in the CLI call "gh
pr list --repo XRPLF/xrpl-py"; change this to derive the repo dynamically (or
make it a parameter) instead of embedding "XRPLF/xrpl-py": replace the literal
"--repo XRPLF/xrpl-py" in the Step 1 command with a derived value obtained from
the current checkout/auth context (e.g., use "gh repo view --json nameWithOwner"
or read git remote origin and fall back to an input variable/ENV like REPO), and
update any documentation text that shows the example command to show the
dynamic/parameterized usage so forks/renames and other repo contexts work
correctly.
- Around line 24-27: The current instruction in SKILL.md to always update direct
deps in pyproject.toml to caret (^<new_version>) can widen version ranges;
change the behavior described under "Direct deps" so that when applying a
Dependabot bump you preserve the existing constraint operator/shape (e.g., if
the existing constraint is == or exact pin, keep it exact; if it uses ~=, >=,
<=, ^, or no operator, retain that operator) or explicitly write an exact
version when the bump intends a precise pin, instead of unconditionally
replacing it with ^<new_version>; update the paragraph that begins "Direct deps
(listed in `pyproject.toml`...)" to instruct preserving the original
operator/shape or using exact pins per Dependabot intent and keep the note to
still run `poetry update <pkg>` afterwards.
- Around line 64-110: The integration step conflicts with the parallel
validation strategy because all Python versions share a single xrpld container
(xrpld-service) and ports 5005/6006; update the workflow so that
test_integration and its coverage check (poetry run poe test_integration; poetry
run coverage report --fail-under=70) are run serially after the per-version unit
tests, or change the container approach to provide per-version isolation (e.g.,
start a separate xrpld container per version with unique names/ports) and adjust
the docker run / docker rm -f xrpld-service logic accordingly to avoid
port/state contention.
- Around line 22-24: Replace the current "poetry show <pkg>" conflict check in
Step 2.2 with a resolver-based check: run "poetry update <pkg> --dry-run" to
simulate the upgrade and catch resolver failures, fall back to running "poetry
lock" (or "poetry update") to detect SolverProblemError if dry-run isn't
available, and use "poetry debug resolve <pkg>" and "poetry check --lock" for
diagnostic output; update the SKILL.md Step 2.2 text to instruct using these
resolver commands (and capturing SolverProblemError/failure output) instead of
"poetry show <pkg>" so true unsatisfiable constraints are detected before
attempting upgrades.
In @.github/workflows/integration_test.yml:
- Line 79: The teardown currently uses a chained command "docker logs
xrpld-service && docker stop xrpld-service" which prevents docker stop from
running if docker logs fails; change the step so log retrieval and container
stop/run independently (e.g., run the logs command with a non-failing fallback
like "docker logs xrpld-service || true" or separate the two actions with a
command separator so "docker stop xrpld-service" always executes) — update the
workflow step replacing the existing "docker logs xrpld-service && docker stop
xrpld-service" string accordingly to ensure cleanup always runs.
- Line 5: Replace the floating image tag used in the XRPLD_DOCKER_IMAGE
environment variable (currently set to rippleci/xrpld:develop) with a pinned
stable release tag (for example rippleci/xrpld:2.3.0) so CI runs are
deterministic; update the XRPLD_DOCKER_IMAGE value in the workflow to the chosen
fixed version and commit the change.
In `@CONTRIBUTING.md`:
- Around line 92-100: Replace the fenced triple-backtick shell code block
containing the docker run example with an indented code block using leading
four-space indentation for each line so it matches the repository's markdown
style and satisfies markdownlint MD046; locate the fenced block that starts with
```bash and the docker run lines (the docker run command, --detach, --publish
5005:5005, --publish 6006:6006, --volume "$PWD/.ci-config/:/etc/opt/xrpld/",
--name xrpld-service, rippleci/xrpld:develop --standalone) and convert each line
to be prefixed with four spaces instead of fenced backticks.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c5da5e95-a822-455d-8958-257c6d3f3cd7
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- .ci-config/xrpld.cfg
- .claude/skills/batch-deps-upgrade/README.md
- .claude/skills/batch-deps-upgrade/SKILL.md
- .github/workflows/integration_test.yml
- .gitignore
- CONTRIBUTING.md
- pyproject.toml
- tests/integration/sugar/test_transaction.py
- tests/integration/transactions/test_lending_protocol.py
> - `pyenv` installed with Python versions matching the CI matrix (see `.github/workflows/unit_test.yml`)
> - `poetry` installed (the project uses Poetry for dependency management)
> - Docker daemon running — the skill starts a rippled container for integration tests
Align prerequisite wording with the current integration node (xrpld).
Line 9 still says the skill starts a rippled container, while this PR and workflow use xrpld. This can send users down the wrong setup path.
Suggested patch:

```diff
-- Docker daemon running — the skill starts a rippled container for integration tests
+- Docker daemon running — the skill starts an xrpld container for integration tests
```
> ```
> /batch-deps-upgrade
> ```
Add a language identifier to the fenced command block.
Line 15 starts a fenced block without language, triggering markdownlint MD040.
Suggested patch:

````diff
-```
+```bash
 /batch-deps-upgrade
````
🧰 Tools — 🪛 markdownlint-cli2 (0.22.1): [warning] 15-15: Fenced code blocks should have a language specified (MD040, fenced-code-language)
> Run: `gh pr list --repo XRPLF/xrpl-py --label dependencies --state open --limit 500 --json number,title,headRefName,body,url`
>
> Parse each PR to extract package names and versions. Dependabot PRs come in two formats:
>
> - **Single-package PRs**: title is `bump <pkg> from <old> to <new>` — parse from title
> - **Grouped PRs**: title is `bump <pkg1> and <pkg2>` with no versions — parse from PR body, which contains a structured list of package updates with version ranges
>
> If any PR can't be parsed from either title or body, flag it for manual review. Build a table of all proposed upgrades. Report the table to the user before proceeding.
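A minimal sketch of the title parsing described above (the helper name is hypothetical; grouped titles intentionally yield no match and fall through to body parsing):

```bash
#!/usr/bin/env bash
# Hypothetical helper: parse a single-package Dependabot PR title of the form
# "Bump <pkg> from <old> to <new>". Prints "<pkg> <old> <new>" on a match,
# nothing otherwise (grouped titles fall through to body parsing).
parse_dependabot_title() {
  local title="$1"
  echo "$title" | sed -nE 's/^[Bb]ump ([^ ]+) from ([^ ]+) to ([^ ]+)$/\1 \2 \3/p'
}

parse_dependabot_title "Bump cryptography from 42.0.5 to 43.0.1"
# prints: cryptography 42.0.5 43.0.1
```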
Fix hard-coded repo in PR discovery.
Step 1 hard-codes --repo XRPLF/xrpl-py. That makes the skill brittle for forks/renames and even for running the skill in different repo contexts.
🛠️ Proposed change:

```diff
-Run: gh pr list --repo XRPLF/xrpl-py --label dependencies --state open --limit 500 --json number,title,headRefName,body,url
+Run: gh pr list --label dependencies --state open --limit 500 --json number,title,headRefName,body,url
```

If you need an explicit repo, prefer deriving it from the current checkout/auth context rather than embedding it in the doc.
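One way to derive the repo dynamically, sketched under the assumption that `git` and an authenticated `gh` are available (the `derive_repo` helper is illustrative, not part of the skill):

```bash
# Illustrative helper: recover "owner/repo" from a git remote URL so the
# command works in forks and renamed repos.
derive_repo() {
  local url="$1"  # e.g. git@github.com:XRPLF/xrpl-py.git or an https URL
  echo "$url" | sed -E 's#^(git@[^:]+:|https?://[^/]+/)##; s#\.git$##'
}

# Usage inside the skill (requires an authenticated gh session):
#   REPO="${REPO:-$(derive_repo "$(git config --get remote.origin.url)")}"
#   gh pr list --repo "$REPO" --label dependencies --state open --limit 500 \
#     --json number,title,headRefName,body,url
```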
> 1. Create a branch from main: `deps/batch-deps-upgrade-YYYY-QN` (use current year and quarter)
> 2. Check for **dependency conflicts** before upgrading. For each proposed upgrade, review `pyproject.toml` constraints and run `poetry show <pkg>` to check if any other dependency pins a version range that would block the upgrade. Mark conflicts as Skipped (dependency conflict: <details>) and do not attempt them.
> 3. For each remaining dependency, apply the upgrade:
🧩 Analysis chain

🏁 Script executed (checking which flags `poetry update`, `poetry lock`, and `poetry debug` support; Poetry was not available in the review sandbox, so the findings below rest on the web queries):

```bash
#!/bin/bash
set -euo pipefail
if ! command -v poetry >/dev/null 2>&1; then
  echo "Poetry not found. Install Poetry locally to verify supported flags/commands."
  exit 0
fi
echo "== poetry version =="
poetry --version || true
echo "== poetry update --help (look for dry-run or no-install flags) =="
poetry update --help | sed -n '1,200p' || true
echo "== poetry lock --help (look for check/dry-run flags) =="
poetry lock --help | sed -n '1,200p' || true
echo "== poetry debug --help (look for resolve) =="
poetry debug --help | sed -n '1,200p' || true
```

Also executed: `cat -n .claude/skills/batch-deps-upgrade/SKILL.md` and a check for a pinned Poetry version in `pyproject.toml`/`poetry.lock` headers.

🌐 Web query: Poetry dependency conflict detection best practices (resolver, dry-run)

💡 Result:

1. Use `poetry check` to validate `pyproject.toml` consistency with `poetry.lock` and detect basic issues.
2. Run `poetry lock` or `poetry update` to trigger the dependency resolver; it raises `SolverProblemError` with conflict details if no solution exists.
3. For diagnosis without changes: `poetry debug resolve <pkg>` simulates resolution for a package and shows chosen versions (`--tree`, `--install`); `poetry show --tree` views the full dependency tree post-resolution; `poetry show --why <pkg>` shows requirements and why a package is installed; `poetry show --outdated` checks for updatable packages.
4. Dry-run testing: `poetry install --dry-run`, `poetry add --dry-run`, and `poetry update --dry-run` preview operations without executing (older versions might modify the lock; use recent Poetry); `poetry lock --check` verifies lockfile validity without changes.
5. Resolution strategies: relax constraints in `pyproject.toml` (prefer `^`, `~`, or ranges over `==`); update packages with `poetry update <pkg>`; use dependency groups and overrides for transitive conflicts; run `poetry lock -vvv` for a verbose solver trace.

Run these before commits/CI to catch conflicts early. Official docs emphasize the lockfile for reproducibility; the resolver fails explicitly on incompatibilities.

Citations:

- 1: https://python-poetry.org/docs/cli/
- 2: https://python-poetry.org/docs/main/cli
- 3: https://www.fixdevs.com/blog/poetry-dependency-conflict/
- 4: https://www.fixwikihub.com/articles/fix-python-poetry-lock-conflict
- 5: https://fig.io/manual/poetry/debug/resolve
- 6: https://python-poetry.org/docs/main/faq

🌐 Web query: `poetry update --dry-run`, `poetry lock` check, `poetry debug resolve`

💡 Result:

1. `poetry update --dry-run`: simulates updating dependencies to the latest compatible versions under the `pyproject.toml` constraints. It shows what would be installed/updated/removed without modifying `poetry.lock` (fixed in later versions after early bugs) and enables verbose output automatically. Use it to preview updates before committing.
2. `poetry check --lock` (the legacy `poetry lock --check` is deprecated): validates that `poetry.lock` exists and is consistent with `pyproject.toml`. Exits non-zero if inconsistent or missing, which makes it ideal for CI.
3. `poetry debug resolve <pkg>`: debugs dependency resolution for a specific package, showing the resolution process, the dependency tree (`--tree`), what would be installed (`--install`), or extras/Python constraints. Helps diagnose why versions are selected or conflicts occur.

Together: preview updates (`update --dry-run`), verify lock consistency (`check --lock`), and debug specific resolutions (`debug resolve`). Run these in CI before `poetry install` or `poetry lock --no-update` to catch issues early.

Citations:

- 1: https://python-poetry.org/docs/cli/
- 2: https://python-poetry.org/docs/main/cli
- 3: `poetry update --dry-run` modifies poetry.lock (python-poetry/poetry#3766)
- 4: Replace `poetry lock --check` with `poetry check --lock` (python-poetry/poetry#6756)
- 5: Moving `poetry lock --check` to `poetry check --lock` (python-poetry/poetry#8015)
- 6: https://fig.io/manual/poetry/debug/resolve
Replace `poetry show` with actual resolver-based conflict detection.

Step 2.2 currently instructs using `poetry show <pkg>` to check for conflicts, but `poetry show` only displays package metadata and dependency trees—it does not invoke the resolver and cannot detect unsatisfiable constraint combinations. This risks missing conflicts that would fail during lock generation.

Instead, trigger the resolver explicitly to catch all conflicts:

- **Best approach**: `poetry update <pkg> --dry-run` simulates the upgrade without changes, failing immediately if the resolver cannot satisfy constraints.
- **Fallback**: run `poetry lock` (or `poetry update`) directly, catching failures via `SolverProblemError` before committing.
- **For diagnosis**: `poetry debug resolve <pkg>` shows resolution details; `poetry check --lock` validates lock consistency.

Replace step 2.2 with a resolver-based check before attempting any upgrades.
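A minimal sketch of such a resolver-based check (the `check_upgrade` wrapper is hypothetical; it assumes a recent Poetry where `--dry-run` does not modify the lockfile):

```bash
# Hypothetical wrapper around the resolver-based check: a non-zero exit from
# `poetry update <pkg> --dry-run` signals an unsatisfiable constraint.
check_upgrade() {
  local pkg="$1"
  if poetry update "$pkg" --dry-run >/dev/null 2>&1; then
    echo "ok: $pkg"
  else
    echo "conflict: $pkg (run 'poetry debug resolve $pkg' for details)" >&2
    return 1
  fi
}
```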
> 3. For each remaining dependency, apply the upgrade:
>    - **Direct deps** (listed in `pyproject.toml` under `[tool.poetry.dependencies]` or `[tool.poetry.group.dev.dependencies]`): update the version constraint in `pyproject.toml` to the new version using caret (`^<new_version>`), then run `poetry update <pkg>`. Always update `pyproject.toml` for direct deps — even if the current constraint already allows the new version — so the pinned minimum stays current.
>    - **Transitive deps** (not in `pyproject.toml`): run `poetry update <pkg>` to update within the existing constraint range
> 4. After all upgrades are applied, run `poetry lock` to regenerate `poetry.lock`. **Do NOT delete `poetry.lock` and regenerate from scratch**.
Don’t always rewrite direct-dep constraints to ^<new_version> (risk of widening range).
The skill always sets direct deps to caret (^<new_version>). That can widen the allowed range beyond what Dependabot intended (and can cause Poetry to resolve to higher versions than the specific bump target).
Consider preserving the existing constraint “shape” (e.g., if the current direct dependency is an exact pin, keep it exact; otherwise preserve operator), or explicitly set an exact version constraint when applying the Dependabot bump.
🛠️ Proposed change:

```diff
-  - **Direct deps** (listed in `pyproject.toml` under `[tool.poetry.dependencies]` or `[tool.poetry.group.dev.dependencies]`): update the version constraint in `pyproject.toml` to the new version using caret (`^<new_version>`), then run `poetry update <pkg>`. Always update `pyproject.toml` for direct deps — even if the current constraint already allows the new version — so the pinned minimum stays current.
+  - **Direct deps** (listed in `pyproject.toml` under `[tool.poetry.dependencies]` or `[tool.poetry.group.dev.dependencies]`):
+    update the constraint while preserving the existing constraint "operator/intent" (e.g., keep exact pins exact; keep caret/tilde if that's what the project uses), then run `poetry update <pkg>`.
+    Always update `pyproject.toml` for direct deps — even if the current constraint already allows the new version — so the pinned minimum stays current.
```

This avoids unintentional range expansion during resolution.
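The "preserve the operator" rule can be made concrete with a small helper (hypothetical, shown only to illustrate the intent):

```bash
# Hypothetical helper: bump a version while keeping the existing constraint's
# operator, so "^1.2.0" becomes "^1.3.1" but an exact pin "==1.2.0" stays exact.
bump_preserving_shape() {
  local old="$1" new_version="$2"
  # Everything before the first digit is the operator prefix ("", "^", "==", ...).
  local op="${old%%[0-9]*}"
  echo "${op}${new_version}"
}
```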
> Run validation **in parallel across all Python versions** from the unit test matrix to speed things up. For each Python version, create a separate working directory (e.g. using `git worktree` or by spawning parallel agents) so that each version's virtual environment does not interfere with the others.
>
> For each Python version, run the following in order:
>
> 1. **Lint and type-check** (only on the single lint Python version from the `lint-and-type-check` job):
>
>    ```bash
>    poetry run poe lint
>    poetry run mypy --strict --implicit-reexport xrpl
>    ```
>
> 2. **Unit tests**:
>
>    ```bash
>    poetry run poe test_unit
>    poetry run coverage report --fail-under=85
>    ```
>
> 3. **Integration tests** (requires a single shared xrpld Docker container — start it once before running integration tests for any Python version):
>    - Pre-run cleanup: `docker rm -f xrpld-service 2>/dev/null || true`
>    - Start the container:
>
>      ```bash
>      docker run \
>        --detach \
>        --publish 5005:5005 \
>        --publish 6006:6006 \
>        --volume "$PWD/.ci-config/:/etc/opt/xrpld/" \
>        --name xrpld-service \
>        rippleci/xrpld:develop --standalone
>      ```
>
>    - Wait for port 6006 with a bounded timeout:
>
>      ```bash
>      SECONDS=0
>      until nc -z localhost 6006 || [ $SECONDS -gt 120 ]; do sleep 2; done
>      if ! nc -z localhost 6006; then
>        echo "Error: xrpld did not start within 120s"
>        docker logs xrpld-service
>        exit 1
>      fi
>      ```
>
>    - Run for each Python version:
>
>      ```bash
>      poetry run poe test_integration
>      poetry run coverage report --fail-under=70
>      ```
>
>    - Stop container after all versions complete: `docker logs xrpld-service && docker stop xrpld-service`
Avoid running integration tests in parallel when using a single shared xrpld container.
Step 3 instructs “Run validation in parallel across all Python versions”, but integration tests also require a single shared Docker container started once (same ports 5005/6006). Parallel runs will compete for shared state and ports, causing flakiness or false negatives.
🛠️ Proposed change:

```diff
-Run validation **in parallel across all Python versions** from the unit test matrix to speed things up. For each Python version, create a separate working directory (e.g. using `git worktree` or by spawning parallel agents) so that each version's virtual environment does not interfere with the others.
+Run **lint + unit** validation in parallel across all Python versions.
+Run **integration tests sequentially** (not parallel) when using a single shared `xrpld-service` container, or else start one container per Python version with isolated ports/container names.
```

Either serialize the integration step for all versions, or explicitly change the container strategy to per-version isolation.
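If per-version isolation is chosen, one possible sketch (the port-spacing scheme, index convention, and container naming are assumptions, not existing project conventions):

```bash
# Assumed scheme: give each Python version its own container name and a
# disjoint host-port pair, spaced by 10 so the pairs never collide.
xrpld_ports_for() {
  local idx="$1"  # position of the Python version in the test matrix (0-based)
  echo "$((5005 + 10 * idx)) $((6006 + 10 * idx))"
}

# Usage per version (requires Docker; image/tag as in the existing step):
#   read -r rpc_port ws_port <<<"$(xrpld_ports_for "$idx")"
#   docker run --detach \
#     --publish "${rpc_port}:5005" --publish "${ws_port}:6006" \
#     --volume "$PWD/.ci-config/:/etc/opt/xrpld/" \
#     --name "xrpld-service-py${py}" \
#     rippleci/xrpld:develop --standalone
```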
> ```yaml
> env:
>   POETRY_VERSION: 2.1.1
>   RIPPLED_DOCKER_IMAGE: rippleci/rippled:develop
>   XRPLD_DOCKER_IMAGE: rippleci/xrpld:develop
> ```
🧩 Analysis chain
🌐 Web query:
What are the current stable (non-develop) tags for the Docker image rippleci/xrpld, and which tag is recommended for CI pinning?
💡 Result:
No Docker image named rippleci/xrpld exists on Docker Hub or in public XRPL repositories. Searches across Docker Hub, GitHub (including XRPLF organizations), and related XRPL projects returned no matches for this exact image name or its tags. The closest matches are: - rippleci/rippled: Official CI image for rippled with stable tags like major versions (e.g., 2.2.0), master, and develop (not recommended for production). - xrpllabsofficial/xrpld: Community rippled Docker image with stable version tags: latest, 3.1.2, 3.1.1, 3.1.0, 3.0.0, 2.6.2, 2.6.1, 2.5.1, 2.6.0, 2.5.0 (latest pushed 21 days ago). XRPLF/CI repo provides build environments for rippled CI but no rippleci/xrpld images. A PR in XRPLF/rippled pins CI Docker tags to XRPLF/CI versions for reproducibility. rippleci/xrpld may be private, deprecated, or a misremembered name (possibly confusing rippleci/rippled or xrpllabsofficial/xrpld). For CI pinning, use specific version tags like xrpllabsofficial/xrpld:3.1.2 (latest stable) or rippleci/rippled:2.2 from analogous images, avoiding latest or develop.
Citations:
- 1: https://hub.docker.com/r/xrpllabsofficial/xrpld
- 2: https://hub.docker.com/r/rippleci/rippled
- 3: https://hub.docker.com/r/honeycluster/xrpld
- 4: https://hub.docker.com/r/xrpllabsofficial/xrpld/tags
- 5: https://github.com/flare-foundation/verifier-xrp-indexer/blob/main/Dockerfile.xrpld-ci
- 6: https://hub.docker.com/r/rippleci/rippled/tags
- 7: https://github.com/XRPLF/ci
- 8: Pin all CI Docker tags rippled#5813
🏁 Script executed:
# First, let's read the integration_test.yml file to see the actual content
cat -n .github/workflows/integration_test.yml | head -20Repository: XRPLF/xrpl-py
Length of output: 598
🏁 Script executed:
# Search for all Docker image references in the repository
rg 'rippleci|xrpld|docker.*image' .github/workflows/ -A 2 -B 2Repository: XRPLF/xrpl-py
Length of output: 1051
🏁 Script executed:
# Check if there are any other references to rippleci/xrpld in the repo
rg 'rippleci/xrpld' --type yaml --type ymlRepository: XRPLF/xrpl-py
Length of output: 85
🏁 Script executed:
# Check if there's any documentation about this image in the current repo
rg -i 'rippleci|xrpld.*image' --type md --type txtRepository: XRPLF/xrpl-py
Length of output: 195
🏁 Script executed:
# Look for any setup or configuration documentation
find . -type f \( -name "*.md" -o -name "*.rst" -o -name "*.txt" \) -exec grep -l "docker\|container" {} \; | head -5Repository: XRPLF/xrpl-py
Length of output: 166
🏁 Script executed:
# Check if there are environment setup docs
cat README.md 2>/dev/null | head -50Repository: XRPLF/xrpl-py
Length of output: 2313
🏁 Script executed:
# Read the CONTRIBUTING.md file to see full context and any guidance on stable versions
cat CONTRIBUTING.md | grep -A 10 -B 10 "rippleci\|xrpld\|docker"
Length of output: 1129
Pin the integration-test container to a stable image tag.
Line 5 uses rippleci/xrpld:develop, which can make CI nondeterministic across runs. Prefer a fixed stable release tag (e.g., a version number like 2.3.0) instead of the develop branch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/integration_test.yml at line 5, Replace the floating image
tag used in the XRPLD_DOCKER_IMAGE environment variable (currently set to
rippleci/xrpld:develop) with a pinned stable release tag (for example
rippleci/xrpld:2.3.0) so CI runs are deterministic; update the
XRPLD_DOCKER_IMAGE value in the workflow to the chosen fixed version and commit
the change.
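As a sketch of what the fix could look like (the 2.3.0 tag is an assumption; use whichever stable release the team validates against):

```yaml
env:
  # Pinned to a specific release for deterministic CI runs.
  # "2.3.0" is a placeholder; substitute the stable tag you validate against.
  XRPLD_DOCKER_IMAGE: rippleci/xrpld:2.3.0
```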
      - name: Stop docker container
        if: always()
-       run: docker stop rippled-service
+       run: docker logs xrpld-service && docker stop xrpld-service
Make teardown resilient even when logs retrieval fails.
At line 79, docker logs xrpld-service && docker stop xrpld-service skips docker stop if the logs command fails. Use independent commands so cleanup still runs.
Suggested patch
-       run: docker logs xrpld-service && docker stop xrpld-service
+       run: |
+         docker logs xrpld-service || true
+         docker stop xrpld-service || true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/integration_test.yml at line 79, The teardown currently
uses a chained command "docker logs xrpld-service && docker stop xrpld-service"
which prevents docker stop from running if docker logs fails; change the step so
log retrieval and container stop/run independently (e.g., run the logs command
with a non-failing fallback like "docker logs xrpld-service || true" or separate
the two actions with a command separator so "docker stop xrpld-service" always
executes) — update the workflow step replacing the existing "docker logs
xrpld-service && docker stop xrpld-service" string accordingly to ensure cleanup
always runs.
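The difference can be sketched with plain shell, using false as a stand-in for a failing docker logs call:

```shell
#!/bin/sh
# 'false' stands in for a failing 'docker logs' command.

# Chained form: everything after '&&' is skipped when the first command fails,
# so the cleanup step never runs.
false && echo "cleanup ran (chained)"

# Independent form: '|| true' swallows the failure and the next line still runs.
false || true
echo "cleanup ran (independent)"
```

Only the second message prints, which mirrors why the workflow's docker stop should not be gated on docker logs succeeding.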
  ```bash
- docker run -dit -p 5005:5005 -p 6006:6006 --volume $PWD/.ci-config/:/etc/opt/ripple/ --entrypoint bash rippleci/rippled:develop -c 'mkdir -p /var/lib/rippled/db/ && rippled -a'
+ docker run \
+     --detach \
+     --publish 5005:5005 \
+     --publish 6006:6006 \
+     --volume "$PWD/.ci-config/:/etc/opt/xrpld/" \
+     --name xrpld-service \
+     rippleci/xrpld:develop --standalone
  ```
Match the repository’s markdown code block style in this section.
Line 92 uses fenced code blocks, but markdownlint reports MD046 (expects indented style). Please convert this block to the configured style to avoid lint noise/failures.
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 92-92: Code block style
Expected: indented; Actual: fenced
(MD046, code-block-style)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CONTRIBUTING.md` around lines 92 - 100, Replace the fenced triple-backtick
shell code block containing the docker run example with an indented code block
using leading four-space indentation for each line so it matches the
repository's markdown style and satisfies markdownlint MD046; locate the fenced
block that starts with ```bash and the docker run lines (the docker run command,
--detach, --publish 5005:5005, --publish 6006:6006, --volume
"$PWD/.ci-config/:/etc/opt/xrpld/", --name xrpld-service, rippleci/xrpld:develop
--standalone) and convert each line to be prefixed with four spaces instead of
fenced backticks.
  Deprecated = "^1.3.1"
- types-Deprecated = "^1.2.9"
+ types-Deprecated = "^1.3.1.20260130"
Why do we need either of these two packages, Deprecated or types-Deprecated? These can be safely removed from xrpl-py.
We do not have any deprecated decorators in the codebase.
This is a good point. This can be removed in a cleanup PR.
  base58 = "^2.1.0"
  ECPy = "^1.2.5"
- typing-extensions = "^4.13.2"
+ typing-extensions = "^4.15.0"
We need this since Self is only available in typing from Python 3.11 onwards. https://docs.python.org/3.11/library/typing.html#typing.Self
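As an illustrative sketch of why the backport matters (the RequestBuilder class is hypothetical, not from xrpl-py):

```python
import sys

# Self was added to the stdlib typing module in Python 3.11;
# typing-extensions backports it for older interpreters.
if sys.version_info >= (3, 11):
    from typing import Self
else:
    from typing_extensions import Self


class RequestBuilder:
    """Hypothetical fluent builder whose chainable methods return Self."""

    def with_url(self, url: str) -> Self:
        self.url = url
        return self  # annotated as Self, so subclasses keep their own type
```

Annotating the return type as Self (rather than the literal class name) lets a subclass chain these methods without the inferred type narrowing back to the base class.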
High Level Overview of Change
Batch dependency upgrade for Q2 2026.
Context of Change
This PR batches open Dependabot PRs to reduce merge noise. A full matrix validation across all CI Python versions was run after all upgrades.
Type of Change
This is a maintenance upgrade of dependencies and does not fit any of the standard categories. No library code behavior changes unless noted below.
Did you update CHANGELOG.md?
Test Plan
Superseded Dependabot PRs
Major version upgrade notes
packaging 25.0 → 26.2 (release notes): No code changes were required because xrpl-py does not import packaging directly — it is only a transitive dependency of black and sphinx, pinned as a dev dep.
websockets 13.1 → 15.0.1 (v14 release notes): No code changes were required. All 140 integration tests (which exercise real WebSocket connections to xrpld) pass.
isort 5.13.2 → 6.1.0 (release notes): v6's only breaking change is dropping Python 3.8 support. One minor code change was required: isort 6 changed its import grouping heuristics, causing a lint failure in tests/integration/sugar/test_transaction.py — fixed by running isort auto-fix (import reordering only, no behavioural change).
Closing instructions
After merging, close the following superseded PRs: #941, #939, #938, #936, #935, #934, #933, #932, #931, #930, #928, #926
No PRs were skipped — all remain eligible for closing.