Merged
59 commits
5d5873a
Merge pull request #213 from esa/Release
gomezzz Nov 25, 2024
01b8aec
format repo and update black version
gomezzz Jul 27, 2025
a4a65ea
cleaning up workflow
gomezzz Jul 27, 2025
69fc67b
gitignoring claude things
gomezzz Jul 27, 2025
e317fc0
Update test instructions
gomezzz Jul 27, 2025
b25df6e
moving tests outside the module
gomezzz Jul 27, 2025
5aa530d
fixing vegas test
gomezzz Jul 27, 2025
dc4c4ef
Adding docs for parametrizing integration domain
gomezzz Jul 27, 2025
770170f
reworking benchmarks
gomezzz Aug 2, 2025
8924c39
putting sensible values , weak scaling still needs work
gomezzz Aug 2, 2025
e8825c3
plots in readme
gomezzz Aug 2, 2025
221028c
switching max line length to 100
gomezzz Aug 3, 2025
1db77d6
fix CI
gomezzz Aug 3, 2025
4c6795c
Merge branch 'structural_improvements' into benchmark-0.4.1
gomezzz Aug 3, 2025
e815f85
Merge branch 'structural_improvements' into docs-for-parametric-integ…
gomezzz Aug 3, 2025
e5c734d
Updating plots and scaling
gomezzz Aug 3, 2025
a402ee5
flaking
gomezzz Aug 3, 2025
6c659ff
adding a release flag to disable logging on releases
gomezzz Aug 3, 2025
8c53799
Merge branch 'docs-for-parametric-integration' into benchmark-0.4.1
gomezzz Aug 3, 2025
093f2fe
Merge branch 'benchmark-0.4.1' into improve-logging-behaviour
gomezzz Aug 3, 2025
a28a15d
formatting
gomezzz Aug 3, 2025
d8b8182
improving GPU selection and docs for it
gomezzz Aug 3, 2025
42f1c3b
increasing err margins in gauss
gomezzz Aug 3, 2025
c6a50e5
fixing setup.py
gomezzz Aug 3, 2025
ef294c5
Merge branch 'benchmark-0.4.1' into improve-logging-behaviour
gomezzz Aug 3, 2025
5d866c7
Merge branch 'improve-logging-behaviour' into improve-gpu-selection
gomezzz Aug 3, 2025
a1e163a
switching to pyproject.toml
gomezzz Aug 3, 2025
009c6d4
fixes
gomezzz Aug 3, 2025
63d697f
formatting
gomezzz Aug 3, 2025
aa29844
CI/CD docs
gomezzz Aug 3, 2025
bcd55cb
additional test function
gomezzz Aug 3, 2025
b53c33a
more comprehensive tests for the rng
gomezzz Aug 3, 2025
f0e1899
tests for deployment script and improved deployment script
gomezzz Aug 3, 2025
22f4e47
flaking and formatting
gomezzz Aug 3, 2025
6fc4394
Better error handling, ignoring known warnings in CI
gomezzz Aug 3, 2025
2ebf4f7
adding analytic groundtruths
gomezzz Aug 3, 2025
7f0e572
Merge pull request #218 from esa/structural_improvements
gomezzz Aug 3, 2025
223906d
WIP improving benchmarks
gomezzz Aug 3, 2025
9e93163
Merge pull request #219 from esa/docs-for-parametric-integration
gomezzz Aug 3, 2025
676b8a1
fixing torch using gpu
gomezzz Aug 3, 2025
9ec5332
formatting
gomezzz Aug 3, 2025
56cd081
updated plots
gomezzz Aug 3, 2025
48eaf9d
flaking
gomezzz Aug 3, 2025
6c6e0b1
formatting
gomezzz Aug 3, 2025
9f59126
Merge branch 'develop' into fixing-main-dev-merge-conflict
gomezzz Aug 3, 2025
b95669e
Merge pull request #220 from esa/benchmark-0.4.1
gomezzz Aug 3, 2025
b60150e
Merge pull request #221 from esa/improve-logging-behaviour
gomezzz Aug 3, 2025
6be589e
Merge pull request #222 from esa/improve-gpu-selection
gomezzz Aug 3, 2025
bdc6650
Merge pull request #223 from esa/switching-to-pyproject
gomezzz Aug 3, 2025
47ba09d
Merge pull request #224 from esa/more-test-functions
gomezzz Aug 3, 2025
a056c0f
Merge pull request #226 from esa/calculate_results_api_improvement
gomezzz Aug 3, 2025
e73f8a9
Merge branch 'develop' into fixing-main-dev-merge-conflict
gomezzz Aug 3, 2025
5152f5d
Merge pull request #227 from esa/fixing-main-dev-merge-conflict
gomezzz Aug 3, 2025
b5a8e8e
Merge pull request #228 from esa/develop
gomezzz Aug 3, 2025
e3e1fc2
cleaning up
gomezzz Aug 3, 2025
5a252f3
adding docs on doc building
gomezzz Aug 3, 2025
c4f6c75
Bump version number
gomezzz Aug 3, 2025
886f4e5
Update release process
gomezzz Aug 3, 2025
59ed9d3
clean up logging situation
gomezzz Aug 3, 2025
6 changes: 6 additions & 0 deletions .flake8
@@ -1,4 +1,10 @@
[flake8]
exclude =
.git,
__pycache__,
build,
dist,
my_notebooks
extend-ignore =
# Allow whitespace before ':' because in some cases this whitespace
# avoids confusing the operator precedence,
5 changes: 3 additions & 2 deletions .github/ISSUE_TEMPLATE/release.md
@@ -20,10 +20,11 @@ _to be written during release process_
- [ ] Create PR to merge from current develop into release branch
- [ ] Write Changelog in PR and request review
- [ ] Review the PR (if OK - merge, but DO NOT delete the branch)
- [ ] Minimize packages in requirements.txt and conda-forge submission. Update packages in setup.py
- [ ] On `Release`, minimize packages in requirements.txt and conda-forge submission. Update packages in pyproject.toml
- [ ] Check unit tests -> Check all tests pass on CPU and [GPU (e.g. on colab)](https://colab.research.google.com/drive/1lFpdtY5zV7VpW88aazedA3n4khedHDQP?usp=sharing#scrollTo=IbU2vypPQ-Ej) and that there are tests for all important features
- [ ] Check documentation -> Check presence of documentation for all features by locally building the docs on the release
- [ ] Change version number in setup.py and docs (under conf.py)
- [ ] Change version number in pyproject.toml and docs (under conf.py) and in `__init__.py`
- [ ] In `__init__.py`, set `TORCHQUAD_DISABLE_LOGGING` to `True`
- [ ] Trigger the Upload Python Package to testpypi GitHub Action (https://github.com/esa/torchquad/actions/workflows/deploy_to_test_pypi.yml) on the release branch (need to be logged in)
- [ ] Test the build on testpypi (with `pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple torchquad`)
- [ ] Finalize release on the release branch
13 changes: 4 additions & 9 deletions .github/workflows/autoblack.yml
@@ -1,9 +1,4 @@
# GitHub Action that uses Black to reformat the Python code in an incoming pull request.
# If all Python code in the pull request is compliant with Black then this Action does nothing.
# Otherwise, Black is run and its changes are committed back to the incoming pull request.
# https://github.com/cclauss/autoblack

name: autoblack
name: check_formatting
on: [pull_request]
jobs:
build:
@@ -15,6 +10,6 @@ jobs:
with:
python-version: 3.11
- name: Install Black
run: pip install black==24.4.2
- name: Run black --check .
run: black --check .
run: pip install black==25.1.0
- name: Run black --check --line-length 100 .
run: black --check --line-length 100 .
5 changes: 3 additions & 2 deletions .github/workflows/deploy_to_pypi.yml
@@ -14,12 +14,13 @@ jobs:
python-version: "3.10"
- name: Install dependencies
run: |
pip install setuptools wheel twine
pip install build twine
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Build and publish to PyPI
env:
TWINE_USERNAME: "__token__"
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
TORCHQUAD_RELEASE_BUILD: "True"
run: |
python setup.py sdist bdist_wheel
python -m build
twine upload dist/*
5 changes: 3 additions & 2 deletions .github/workflows/deploy_to_test_pypi.yml
@@ -17,12 +17,13 @@ jobs:
python-version: "3.10"
- name: Install dependencies
run: |
pip install setuptools wheel twine
pip install build twine
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Build and publish to Test PyPI
env:
TWINE_USERNAME: "__token__"
TWINE_PASSWORD: ${{ secrets.TEST_PYPI_TOKEN }}
TORCHQUAD_RELEASE_BUILD: "True"
run: |
python setup.py sdist bdist_wheel
python -m build
twine upload -r testpypi dist/*
9 changes: 5 additions & 4 deletions .github/workflows/run_tests.yml
@@ -54,22 +54,23 @@ jobs:
shell: bash -l {0}
run: |
micromamba activate torchquad
cd torchquad/tests/
pip install -e .
cd tests/
pip install pytest
pip install pytest-error-for-skips
pip install pytest-cov
pytest -ra --error-for-skips --junitxml=pytest.xml --cov-report=term-missing:skip-covered --cov=../../torchquad . | tee pytest-coverage.txt
pytest -ra --error-for-skips --junitxml=pytest.xml --cov-report=term-missing:skip-covered --cov=../torchquad . | tee pytest-coverage.txt
- name: pytest coverage comment
uses: MishaKav/pytest-coverage-comment@main
if: github.event_name == 'pull_request'
continue-on-error: true
with:
pytest-coverage-path: ./torchquad/tests/pytest-coverage.txt
pytest-coverage-path: ./tests/pytest-coverage.txt
title: Coverage Report
badge-title: Overall Coverage
hide-badge: false
hide-report: false
create-new-comment: false
hide-comment: false
report-only-changed-files: false
junitxml-path: ./torchquad/tests/pytest.xml
junitxml-path: ./tests/pytest.xml
2 changes: 2 additions & 0 deletions .gitignore
@@ -132,3 +132,5 @@ my_notebooks
.vscode
pytest-coverage.txt
pytest.xml
CLAUDE.md
.claude/
110 changes: 99 additions & 11 deletions README.md
@@ -148,9 +148,10 @@ import torchquad
torchquad._deployment_test()
```

After cloning the repository, developers can check the functionality of `torchquad` by running the following command in the `torchquad/tests` directory:
After cloning the repository, developers can check the functionality of `torchquad` by running:

```sh
pip install -e .
pytest
```

@@ -192,8 +193,48 @@ integral_value = mc.integrate(
backend="torch",
)
```
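The fragment above comes from a longer README example. For readers new to the method, here is a minimal self-contained NumPy sketch of the plain Monte Carlo estimator that torchquad accelerates (the helper name and the test integrand are illustrative, not part of torchquad's API):

```python
import numpy as np

def monte_carlo_integrate(fn, domain, n=100_000, seed=0):
    """Plain Monte Carlo estimate of the integral of fn over a box domain.

    domain is a list of (low, high) pairs, one per dimension; fn takes an
    (n, dim) array of points and returns n function values.
    """
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, _ in domain])
    highs = np.array([hi for _, hi in domain])
    # Sample n points uniformly in the box and average fn over them
    points = rng.uniform(lows, highs, size=(n, len(domain)))
    volume = np.prod(highs - lows)
    return volume * fn(points).mean()

# Example: integrate x^2 over [0, 2] (exact value 8/3)
estimate = monte_carlo_integrate(lambda p: p[:, 0] ** 2, [(0.0, 2.0)])
```

torchquad's `MonteCarlo.integrate` follows the same idea but evaluates the sampled points as one batched tensor operation on the chosen backend, which is where the GPU speedups discussed below come from.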
To change the logger verbosity, set the `TORCHQUAD_LOG_LEVEL` environment
variable; for example `export TORCHQUAD_LOG_LEVEL=DEBUG`.
## Logging Configuration

By default, torchquad disables its internal logging when installed from PyPI to avoid interfering with other loggers in your application (controlled by the `TORCHQUAD_DISABLE_LOGGING` flag in `__init__.py`). To enable and configure logging:

1. **Set the log level**: Use the `TORCHQUAD_LOG_LEVEL` environment variable:
```bash
export TORCHQUAD_LOG_LEVEL=DEBUG # For detailed debugging
export TORCHQUAD_LOG_LEVEL=INFO # For general information
export TORCHQUAD_LOG_LEVEL=WARNING # For warnings only (default when enabled)
```

2. **Enable logging programmatically**:
```python
import torchquad
torchquad.set_log_level("DEBUG") # This will enable and configure logging
```
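A script can also set the environment variable programmatically instead of via the shell; a small sketch (this assumes the variable is read at import time, which is not verified here — the shell export above is the documented route):

```python
import os

# Must run before `import torchquad`, in case the variable is only
# read once at import time (assumption, see lead-in above).
os.environ["TORCHQUAD_LOG_LEVEL"] = "DEBUG"

# import torchquad  # would now start with DEBUG-level logging
```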

## Multi-GPU Usage

torchquad supports multi-GPU systems through standard PyTorch practices. The recommended approach is to use the `CUDA_VISIBLE_DEVICES` environment variable to control GPU selection:

```bash
# Use specific GPU
export CUDA_VISIBLE_DEVICES=0 # Use GPU 0
python your_script.py

export CUDA_VISIBLE_DEVICES=1 # Use GPU 1
python your_script.py

# Use multiple GPUs with separate processes
export CUDA_VISIBLE_DEVICES=0 && python integration_script.py &
export CUDA_VISIBLE_DEVICES=1 && python integration_script.py &
```

For parallel processing across multiple GPUs, we recommend spawning separate processes rather than trying to coordinate multiple GPUs within a single process. This approach:

- Provides clean separation between GPU processes
- Avoids complex device management
- Follows PyTorch best practices
- Enables easy load balancing and error handling

For detailed examples and advanced multi-GPU patterns, see the [Multi-GPU Usage section](https://torchquad.readthedocs.io/en/main/tutorial.html#multi-gpu-usage) in our documentation.
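The shell pattern above can also be wrapped in a small Python launcher. A sketch under stated assumptions — the helper names and `integration_script.py` are hypothetical; only the `CUDA_VISIBLE_DEVICES` mechanism comes from the text above:

```python
import os
import subprocess

def gpu_env(gpu_id):
    """Environment for a worker pinned to one GPU: the child process
    sees only that device, and sees it as cuda:0."""
    return dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))

def launch_per_gpu(script, gpu_ids):
    """Spawn one independent worker process per GPU."""
    return [subprocess.Popen(["python", script], env=gpu_env(g)) for g in gpu_ids]

# Example (commented out because integration_script.py is hypothetical):
# workers = launch_per_gpu("integration_script.py", [0, 1])
# for w in workers:
#     w.wait()
```

Because each child sees exactly one device, the integration code itself needs no device-selection logic, which is the clean separation the list above describes.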

You can find all available integrators [here](https://torchquad.readthedocs.io/en/main/integration_methods.html).

@@ -206,13 +247,60 @@ See the [open issues](https://github.com/esa/torchquad/issues) for a list of pro
<!-- PERFORMANCE -->
## Performance

Using GPUs torchquad scales particularly well with integration methods that offer easy parallelization. For example, below you see error and runtime results for integrating the function `f(x,y,z) = sin(x * (y+1)²) * (z+1)` on a consumer-grade desktop PC.
Using GPUs, torchquad scales particularly well with integration methods that offer easy parallelization. The benchmarks below demonstrate performance across challenging functions from 1D to 15D, comparing torchquad's GPU-accelerated methods against scipy's CPU implementations.

<!-- TODO Update plot links -->
### Convergence Analysis
![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_convergence.png?raw=true)
*Convergence comparison across challenging test functions from 1D to 15D. GPU-accelerated torchquad methods demonstrate strong performance, particularly for high-dimensional integration where scipy's nquad becomes computationally infeasible. Beyond 1D, torchquad significantly outperforms scipy in efficiency.*

### Runtime vs Error Efficiency
![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_runtime_vs_error.png?raw=true)
*Runtime-error trade-offs across dimensions. Lower-left positions indicate better performance. While scipy's traditional methods are competitive for simple 1D problems, torchquad's GPU acceleration provides orders of magnitude better performance for multi-dimensional integration, achieving both faster computation and lower errors.*

### Scaling Performance
![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_scaling_analysis.png?raw=true)
*Scaling investigation across problem sizes and dimensions of the different methods in torchquad.*

### Vectorized Integration Speedup
![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_vectorized_speedup.png?raw=true)
*Strong performance gains when evaluating multiple integrands simultaneously. The vectorized approach achieves speedups of up to 200x compared to sequential evaluation, making torchquad ideal for parameter sweeps, uncertainty quantification, and machine learning applications requiring batch integration.*
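The mechanism behind this speedup can be sketched outside torchquad with plain NumPy: a family of parametrized integrands is evaluated as one batched array operation instead of one at a time (the midpoint-rule grid and the integrand family `sin(a*x)` below are illustrative, not torchquad's benchmark functions):

```python
import numpy as np

# Shared evaluation grid on [0, 1] (midpoint rule, for illustration)
n = 10_000
x = (np.arange(n) + 0.5) / n
params = np.linspace(1.0, 5.0, 200)  # one integrand sin(a*x) per parameter a

# Sequential: one integral per parameter, 200 separate evaluations
seq = np.array([np.sin(a * x).mean() for a in params])

# Vectorized: evaluate the whole integrand family in one (200, n) array op
vec = np.sin(params[:, None] * x[None, :]).mean(axis=1)
```

Both computations give the same integrals, but the vectorized form issues a single large kernel, which is exactly the shape of work that GPUs execute efficiently.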

### Framework Comparison
![](https://github.com/esa/torchquad/blob/benchmark-0.4.1/resources/torchquad_framework_comparison.png?raw=true)
*Cross-framework performance comparison for 1D integration using Monte Carlo and Simpson methods. Demonstrates torchquad's consistent API across PyTorch, TensorFlow, JAX, and NumPy backends, with GPU acceleration providing significant performance advantages for large numbers of function evaluations. All frameworks achieve similar accuracy while showcasing the computational benefits of GPU acceleration for parallel integration methods.*

### Running Benchmarks

To reproduce these benchmarks or test performance on your hardware:

```bash
# Run all benchmarks (convergence, framework comparison, scaling, vectorized)
python benchmarking/modular_benchmark.py --dimensions 1,3,7,15

# Run specific benchmark types
python benchmarking/modular_benchmark.py --convergence-only --dimensions 1,3,7,15
python benchmarking/modular_benchmark.py --scaling-only
python benchmarking/modular_benchmark.py --framework-only

# Generate all plots from results
python benchmarking/plot_results.py

# Configure benchmark parameters
# Edit benchmarking/benchmarking_cfg.toml to adjust:
# - Evaluation point ranges
# - Framework backends to test
# - Timeout limits
# - Method selection
# - scipy integration tolerances
```

![](https://github.com/esa/torchquad/blob/main/resources/torchquad_runtime.png?raw=true)
*Runtime results of the integration. Note the far superior scaling on the GPU (solid line) in comparison to the CPU (dashed and dotted) for both methods.*
**New Features:**
- **Analytic Reference Values**: Uses SymPy for exact analytic solutions where possible, providing highly accurate reference values for error calculations
- **Enhanced Test Functions**: Analytically tractable but numerically challenging functions that better demonstrate convergence behavior
- **Framework Comparison**: Cross-backend performance benchmarking across PyTorch, TensorFlow, JAX, and NumPy with GPU/CPU device comparisons

![](https://github.com/esa/torchquad/blob/main/resources/torchquad_convergence.png?raw=true)
*Convergence results of the integration. Note that Simpson quickly reaches floating point precision. Monte Carlo is not competitive here given the low dimensionality of the problem.*
**Hardware:** RTX 4060 Ti 16GB, i5-13400F, Precision: float32

<!-- CONTRIBUTING -->
## Contributing
@@ -245,12 +333,12 @@ Please note that PRs should be created from and into the `develop` branch. For e
3. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
4. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
5. Push to the Branch (`git push origin feature/AmazingFeature`)
6. Open a Pull Request on the `develop` branch, *not* `main` (NB: We autoformat every PR with black. Our GitHub actions may create additional commits on your PR for that reason.)
6. Open a Pull Request on the `develop` branch, *not* `main`

and we will have a look at your contribution as soon as we can.

Furthermore, please make sure that your PR passes all automated tests. Review will only happen after that.
Only PRs created on the `develop` branch with all tests passing will be considered. The only exception to this rule is if you want to update the documentation in relation to the current release on conda / pip. In that case you may ask to merge directly into `main`.
Furthermore, please make sure that your PR passes all automated tests; you can ping `@gomezzz` to run the CI. Review will only happen after that.
Only PRs created on the `develop` branch with all tests passing will be considered. The only exception to this rule is if you want to update the documentation in relation to the current release on conda / pip. In that case you may open a PR directly into `main`.

<!-- LICENSE -->
## License