refactor: build wheels and conda packages using Python limited API#1027
gforsyth wants to merge 6 commits into rapidsai:main
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
/ok to test
Force-pushed from b57160d to cb12326
/ok to test

/ok to test

/ok to test
jakirkham left a comment:
Thanks Gil! 🙏
Had a few suggestions based on other PRs making similar changes (like pinning Python correctly in tests).
Also had some questions based on the errors we are seeing in CI and what else this PR might need to address them.
  rapids-logger "Downloading artifacts from previous jobs"
  CPP_CHANNEL=$(rapids-download-conda-from-github cpp)
- PYTHON_CHANNEL=$(rapids-download-conda-from-github python)
+ PYTHON_CHANNEL=$(rapids-download-from-github "$(rapids-package-name conda_python cucim --stable --cuda "$RAPIDS_CUDA_VERSION")")
Think we may not be installing everything we need here. Seeing the following missing module errors in CI:
FAILED tests/unit/clara/test_batch_decoding.py::TestBatchDecoding::test_batch_read_multiple_locations[testimg_tiff_stripe_4096x4096_256_jpeg] - ModuleNotFoundError: No module named 'cucim.clara._cucim'
Are we...?
- Not uploading a package we need to the cache?
- Missing downloading a package from the cache?
- Not installing a third-party dependency we need?
Thanks, John!
I think all the errors come down to the package not getting built as an abi3 wheel for whatever reason -- the missing module errors are from trying to load (say) a cp312 version of an object when there's only a cp311 one available.
Going to poke at the builds locally and see where I'm missing a switch.
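(Aside, not part of the thread: the "cp312 object vs cp311 object" failure mode above follows from how CPython looks up compiled extensions. The interpreter only searches its own version-specific suffixes plus the stable-ABI suffix, so a non-abi3 build from another Python version is simply invisible. A minimal sketch, assuming a CPython-on-Linux environment:)

```python
import importlib.machinery

# Suffixes this interpreter accepts for compiled extension modules.
# On CPython/Linux this looks like ['.cpython-312-x86_64-linux-gnu.so',
# '.abi3.so', '.so'] -- a '.cpython-311-*.so' file built for a different
# interpreter version is never matched, so the import machinery reports
# ModuleNotFoundError, exactly as seen in the CI failure above.
suffixes = importlib.machinery.EXTENSION_SUFFIXES
print(suffixes)

# A limited-API (abi3) build instead installs under the version-stable
# '.abi3.so' suffix, so one artifact loads on every supported CPython.
```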
  # echo to expand wildcard before adding `[extra]` requires for pip
- rapids-pip-retry install "$(echo ${PYTHON_WHEELHOUSE}/cucim*.whl)[test]"
+ rapids-pip-retry install "$(echo "${PYTHON_WHEELHOUSE}"/cucim*.whl)[test]"
With the wheel case we see a different error on CI:
Looking in indexes: https://pip-cache.local.gha-runners.nvidia.com/simple, https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
ERROR: cucim_cu13-26.4.0a19-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl is not a supported wheel on this platform.
Are we...?
- Installing the right wheel with this step?
- Generating the name correctly?
We're pulling down the right artifact, but the contents are incorrect (because we're generating them incorrectly).
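(Aside, not part of the thread: the "not a supported wheel on this platform" error can be read straight off the filename, since wheel names encode interpreter and ABI tags. `wheel_tags` below is a hypothetical helper for illustration; a `cp311-cp311` ABI tag pins the wheel to CPython 3.11 exactly, while a limited-API build would carry an `abi3` ABI tag that later interpreters also accept.)

```python
def wheel_tags(filename: str) -> tuple[str, str, str]:
    """Split a wheel filename into (python_tag, abi_tag, platform_tag).

    Wheel filenames follow:
    {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    """
    stem = filename.removesuffix(".whl")
    python_tag, abi_tag, platform_tag = stem.split("-")[-3:]
    return python_tag, abi_tag, platform_tag

name = ("cucim_cu13-26.4.0a19-cp311-cp311-"
        "manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl")
py_tag, abi_tag, _ = wheel_tags(name)
print(py_tag, abi_tag)  # the cp311 ABI tag is why pip on 3.12 rejects it
```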
  # repair wheels and write to the location that artifact-uploading code expects to find them
  python -m auditwheel repair -w "${RAPIDS_WHEEL_BLD_OUTPUT_DIR}" dist/*
It is possible this step needs to be changed based on how the wheels are now being built.
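(Aside, not part of the thread: one way to sanity-check what the repair step produced is to read the `Tag:` entries from a wheel's `.dist-info/WHEEL` metadata and confirm they say `abi3`. A sketch; `wheel_tag_values` is a hypothetical helper, demonstrated on an in-memory stand-in rather than a real repaired wheel:)

```python
import io
import zipfile

def wheel_tag_values(path_or_file) -> list[str]:
    """Return the Tag: values recorded in a wheel's .dist-info/WHEEL file."""
    with zipfile.ZipFile(path_or_file) as zf:
        meta = next(n for n in zf.namelist() if n.endswith(".dist-info/WHEEL"))
        lines = zf.read(meta).decode().splitlines()
    return [line.split(": ", 1)[1] for line in lines if line.startswith("Tag:")]

# Demo on a tiny in-memory stand-in for a repaired wheel:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "demo-1.0.dist-info/WHEEL",
        "Wheel-Version: 1.0\nRoot-Is-Purelib: false\n"
        "Tag: cp311-abi3-manylinux_2_28_aarch64\n",
    )
print(wheel_tag_values(buf))  # an 'abi3' ABI tag confirms a limited-API build
```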
Force-pushed from 84c531f to 1142e7f
Force-pushed from 1142e7f to d8e8fc3
/ok to test
Co-authored-by: jakirkham <jakirkham@gmail.com>
Ahh, right,
Moving cucim over to using the limited API as part of rapidsai/build-planning#42

Ops-Bot-Merge-Barrier: true