Signed-off-by: noeyy-mino <174223378+noeyy-mino@users.noreply.github.com>
📝 Walkthrough

This PR refactors the deployment testing infrastructure to execute backend deployments in isolated subprocesses, renames internal methods to clarify the separation between public entry points and backend-specific implementations, and expands the test suite with additional model deployments and backend configurations.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test as Test Process
    participant Main as ModelDeployer.run()
    participant Sub as Child Process
    participant Impl as _deploy_<backend>_impl()
    participant Backend as Backend (TRT-LLM/vLLM/SGLang)
    Test->>Main: run(backend, model_id, ...)
    Main->>Sub: _run_deploy_via_subprocess(backend, ...)
    Note over Sub: Python snippet created<br/>with backend handler
    activate Sub
    Sub->>Impl: _run_<backend>_deploy()
    Impl->>Backend: Initialize & deploy model
    Backend-->>Impl: Model deployed / logs
    Impl-->>Sub: stdout/stderr captured
    Sub-->>Main: Process completes
    deactivate Sub
    Main-->>Test: Result reported to pytest
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
🧹 Nitpick comments (1)

tests/_test_utils/deploy_utils.py (1)

Lines 86-90: `pytest.fail()` will not work correctly in subprocess context.

The subprocess runs as a standalone Python process without pytest's test session. Calling `pytest.fail()` in this context will raise an exception that terminates the subprocess with a non-zero exit code, but the actual failure message may not propagate cleanly to the parent test. The `traceback.print_exc()` before `pytest.fail()` does help, but consider simply raising `SystemExit(1)` or a custom exception instead, since the parent already checks `result.check_returncode()`.

♻️ Proposed simplification

```diff
 except Exception:
     import traceback
     traceback.print_exc()
-    pytest.fail(traceback.format_exc())
+    sys.exit(1)
```

Also applies to: 113-117, 140-144
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed. In
tests/_test_utils/deploy_utils.py around lines 86-90, replace the
subprocess-local calls to pytest.fail() inside the generic except Exception
handlers with a subprocess-friendly exit (e.g., raise SystemExit(1) or re-raise
the original exception) so the parent process can detect the non-zero exit code
and preserve the traceback; update each except block shown (the one around the
traceback.print_exc() and the similar handlers at the other mentioned
locations) to print the traceback and then raise SystemExit(1) (or re-raise)
instead of calling pytest.fail().
```
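The suggested fix can be demonstrated end to end with a toy child process: the child prints its traceback and exits non-zero instead of calling `pytest.fail()`, and the parent observes the failure through the return code. The `RuntimeError` here is a stand-in for whatever the real deploy body raises.

```python
import subprocess
import sys

# Child-side code: traceback + SystemExit(1) instead of pytest.fail().
child_code = """
import sys, traceback
try:
    raise RuntimeError("deploy failed")  # stand-in for the deploy body
except Exception:
    traceback.print_exc()
    sys.exit(1)  # subprocess-friendly replacement for pytest.fail(...)
"""

result = subprocess.run(
    [sys.executable, "-c", child_code], capture_output=True, text=True
)
# The parent sees the non-zero exit code and the full traceback on stderr.
assert result.returncode == 1
assert "RuntimeError: deploy failed" in result.stderr
```

Because the parent already calls `result.check_returncode()`, this is enough to fail the enclosing test while keeping the child's traceback in the captured stderr.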
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/_test_utils/deploy_utils.py`:
- Around lines 163-167: The generated f-string that inlines `{model_id!r}`, `{attn_backend!r}`, and `{base_model!r}` into executable Python code is a potential code-injection risk; instead, stop emitting executable code with repr interpolation in deploy_utils and pass parameters safely (for example, serialize the args to JSON or write them to a temp file and have `_run_{backend}_deploy` read them, or invoke the helper with explicit CLI args parsed by argparse). Update the code that builds the string (the block that constructs `code = f"""... _run_{backend}_deploy(...)"""`) to produce a call that reads a JSON payload or command-line flags, and modify `_run_{backend}_deploy` (and any wrapper that executes it) to accept and parse that serialized input rather than relying on repr interpolation. Ensure `model_id`, `attn_backend`, `base_model` (and other inputs like `eagle3_one_model`, `tensor_parallel_size`, `mini_sm`) are passed as safe serialized values and not embedded directly into generated Python source.
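The JSON-payload variant suggested above might look like the following sketch. The helper name `build_safe_deploy_invocation` is hypothetical, and the child snippet's body is a `print` stand-in for the real deploy call; the point is only that parameters cross the process boundary as serialized data, not as interpolated source.

```python
import json
import subprocess
import sys


def build_safe_deploy_invocation(backend: str, **params) -> list:
    # Parameters are serialized to JSON and handed over via argv, so no
    # repr() output is ever spliced into executable source. Only the
    # backend name (drawn from a fixed test parametrization) appears in
    # the generated snippet.
    payload = json.dumps(params)
    code = (
        "import json, sys\n"
        "params = json.loads(sys.argv[1])\n"
        f"print('would call _run_{backend}_deploy with', params)\n"
    )
    return [sys.executable, "-c", code, payload]


cmd = build_safe_deploy_invocation(
    "vllm", model_id="org/example-model", tensor_parallel_size=1
)
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)
```

A value like `model_id="x'); import os; os.system(...)  #"` would simply arrive as an inert string inside the JSON dict, which is the property the review is asking for.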
In `@tests/examples/llm_ptq/test_deploy.py`:
- Around lines 578-579: Fix the typo in the docstring that reads "speculative models shoule be loaded by local path" by changing "shoule" to "should" in the triple-quoted string beginning with "Skip test if MODELOPT_LOCAL_EAGLE_MODEL is set but model doesn't exist locally." so the docstring reads "speculative models should be loaded by local path".
---
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #938      +/-   ##
==========================================
- Coverage   73.09%   72.03%    -1.06%
==========================================
  Files         205      207        +2
  Lines       22301    22718      +417
==========================================
+ Hits        16300    16365       +65
- Misses       6001     6353      +352
```

☔ View full report in Codecov by Sentry.
What does this PR do?
Type of change: new tests
Overview:
Add new deployment tests for newly added checkpoints on HF
Usage

```shell
pytest test_deploy.py --run-release
```
Testing

None
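pytest does not provide a `--run-release` flag out of the box, so a flag like the one in the usage line above has to be registered in a `conftest.py`. The following is a typical sketch of that wiring; the flag name comes from the usage above, while the `release` marker name and the skip logic are assumptions about how such a suite might be organized, not the repository's actual conftest.

```python
import pytest


def pytest_addoption(parser):
    # Register the opt-in flag used on the command line.
    parser.addoption(
        "--run-release",
        action="store_true",
        default=False,
        help="run release-only deployment tests",
    )


def pytest_collection_modifyitems(config, items):
    # Without the flag, skip anything marked as a release test.
    if config.getoption("--run-release"):
        return
    skip_release = pytest.mark.skip(reason="needs --run-release")
    for item in items:
        if "release" in item.keywords:
            item.add_marker(skip_release)
```

With this pattern, release-gated deployment tests are collected but skipped in normal CI runs and only execute when the flag is passed explicitly.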
Before your PR is "Ready for review"
Additional Information
N/A