First off, thanks for taking the time to contribute! 🎉
This document provides guidelines for contributing to the VisDrone Toolkit. Following them shows respect for the time of the developers who maintain this project.
- Code of Conduct
- How Can I Contribute?
- Development Setup
- Pull Request Process
- Style Guidelines
- Testing Guidelines
- Commit Message Guidelines
## Code of Conduct

This project adheres to the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to kumaar324@gmail.com.
## How Can I Contribute?

### Reporting Bugs

Before creating bug reports, please check existing issues to avoid duplicates. When creating a bug report, include:
- Clear title and description
- Steps to reproduce the issue
- Expected behavior vs actual behavior
- Environment details (OS, Python version, PyTorch version, GPU)
- Error messages and stack traces
- Screenshots if applicable
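To make those environment details easy to copy into a report, a small helper script can print them. This is an illustrative sketch, not a script shipped with the toolkit; the PyTorch fields are gathered only when `torch` is installed:

```python
import platform


def collect_env() -> dict:
    """Gather basic environment details for a bug report."""
    info = {
        "os": platform.platform(),
        "python": platform.python_version(),
    }
    # PyTorch/CUDA details are optional: not every reporter has torch installed.
    try:
        import torch

        info["pytorch"] = torch.__version__
        info["cuda"] = torch.version.cuda or "cpu-only"
        info["gpu"] = (
            torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none"
        )
    except ImportError:
        info["pytorch"] = "not installed"
    return info


if __name__ == "__main__":
    for key, value in collect_env().items():
        print(f"{key}: {value}")
```

Paste the output directly into the "Environment" section of the template below.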
Bug Report Template:
**Describe the bug**
A clear description of what the bug is.
**To Reproduce**
Steps to reproduce:
1. Run command '...'
2. With configuration '...'
3. See error
**Expected behavior**
What you expected to happen.
**Environment:**
- OS: [e.g. Ubuntu 22.04]
- Python version: [e.g. 3.10]
- PyTorch version: [e.g. 2.0.1]
- CUDA version: [e.g. 11.8]
- GPU: [e.g. RTX 3090]
**Additional context**
Any other relevant information.

### Suggesting Enhancements

Enhancement suggestions are tracked as GitHub issues. When creating an enhancement suggestion:
- Use a clear title that describes the enhancement
- Provide detailed description of the proposed enhancement
- Explain why this would be useful to most users
- List similar features in other projects if applicable
### Pull Requests

We actively welcome your pull requests:
- Fork the repo and create your branch from `main`
- Add tests if you've added code
- Update documentation if needed
- Ensure tests pass
- Make sure your code follows the style guidelines
- Issue the pull request
## Development Setup

```bash
# Fork the repository on GitHub, then:
git clone https://github.com/dronefreak/VisDrone-dataset-python-toolkit.git
cd VisDrone-dataset-python-toolkit
```

Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate    # Linux/Mac
# venv\Scripts\activate     # Windows
```

Install the development dependencies:

```bash
# Using make
make install-dev

# Or manually
pip install -e ".[dev]"
```

Set up the pre-commit hooks:

```bash
pre-commit install
```

This will automatically run linters before each commit.
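After installing, you can sanity-check that the editable install worked by confirming the package is importable. A minimal sketch (not part of the toolkit itself):

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None


if __name__ == "__main__":
    # After `pip install -e ".[dev]"`, the toolkit should be importable.
    print("visdrone_toolkit installed:", is_installed("visdrone_toolkit"))
```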
## Pull Request Process

1. **Create a branch**:

   ```bash
   git checkout -b feature/amazing-feature
   ```

2. **Update tests**: Add or update tests for your changes

3. **Run tests**: Ensure all tests pass

   ```bash
   make test  # or pytest tests/ -v
   ```

4. **Check code style**: Format and lint your code

   ```bash
   make format
   make lint
   ```

5. **Update documentation**: If you changed APIs or added features

6. **Update CHANGELOG.md**: Add an entry under the "Unreleased" section

7. **Push to your fork**:

   ```bash
   git push origin feature/amazing-feature
   ```

8. **Open a Pull Request** on GitHub with:
   - A clear title describing the change
   - A description of what changed and why
   - Links to related issues
   - Screenshots/demos if applicable

9. **Address review feedback**:
   - Respond to comments
   - Make requested changes
   - Push updates to the same branch
Before submitting, confirm:

- Tests pass locally
- Code follows style guidelines
- Documentation updated
- CHANGELOG.md updated
- Commit messages follow guidelines
- No merge conflicts with main
## Style Guidelines

We use Black for formatting and isort for import sorting.

```bash
# Auto-format code
black visdrone_toolkit scripts tests
isort visdrone_toolkit scripts tests

# Or use make
make format
```
- **Line length**: Maximum 100 characters

- **Imports**: Organized with isort

  ```python
  # Standard library
  import os
  from pathlib import Path

  # Third-party
  import numpy as np
  import torch

  # Local
  from visdrone_toolkit import VisDroneDataset
  ```

- **Type hints**: Use type hints for function signatures

  ```python
  def process_image(image: np.ndarray, size: int = 640) -> torch.Tensor:
      """Process image to tensor."""
      pass
  ```

- **Docstrings**: Use Google style

  ```python
  def my_function(param1: str, param2: int) -> bool:
      """Short description.

      Longer description if needed.

      Args:
          param1: Description of param1
          param2: Description of param2

      Returns:
          Description of return value

      Raises:
          ValueError: When something is wrong
      """
      pass
  ```

- **Naming conventions**:
  - Classes: `PascalCase`
  - Functions/variables: `snake_case`
  - Constants: `UPPER_CASE`
  - Private methods: `_leading_underscore`
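The naming conventions can be expressed as simple regular expressions. The checker below is a toy illustration of the rules, not a tool the toolkit actually ships:

```python
import re

# One pattern per naming convention described above.
NAMING_RULES = {
    "class": re.compile(r"^[A-Z][A-Za-z0-9]*$"),          # PascalCase
    "function": re.compile(r"^[a-z][a-z0-9_]*$"),          # snake_case
    "constant": re.compile(r"^[A-Z][A-Z0-9_]*$"),          # UPPER_CASE
    "private_method": re.compile(r"^_[a-z][a-z0-9_]*$"),   # _leading_underscore
}


def follows_convention(name: str, kind: str) -> bool:
    """Check a name against the project's naming convention for its kind."""
    return bool(NAMING_RULES[kind].match(name))
```

For example, `follows_convention("VisDroneDataset", "class")` is true, while `follows_convention("visdrone_dataset", "class")` is not.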
We use flake8 and mypy for linting:

```bash
# Check code
flake8 visdrone_toolkit scripts tests
mypy visdrone_toolkit scripts

# Or use make
make lint
```
## Testing Guidelines

- **Location**: Put tests in the `tests/` directory

- **Naming**: Test files should start with `test_`

- **Structure**: Organize tests in classes

  ```python
  class TestMyFeature:
      """Tests for my feature."""

      def test_basic_functionality(self):
          """Test basic case."""
          assert my_function() == expected_result

      def test_edge_case(self):
          """Test edge case."""
          with pytest.raises(ValueError):
              my_function(invalid_input)
  ```

- **Fixtures**: Use pytest fixtures from `conftest.py`

  ```python
  def test_with_dataset(self, mock_visdrone_dataset):
      """Test using fixture."""
      dataset = VisDroneDataset(
          image_dir=str(mock_visdrone_dataset['image_dir']),
          annotation_dir=str(mock_visdrone_dataset['annotation_dir']),
      )
      assert len(dataset) > 0
  ```
```bash
# All tests
pytest tests/

# Specific file
pytest tests/test_dataset.py

# With coverage
pytest tests/ --cov=visdrone_toolkit --cov-report=html

# Using make
make test
```

- Aim for >80% coverage
- All new features must include tests
- Bug fixes should include regression tests
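As a concrete illustration of a regression test, here is a sketch in the class-based style above. `parse_annotation_line` is a hypothetical helper invented for this example, not a real toolkit function; the empty-line case stands in for a previously fixed crash:

```python
import pytest


def parse_annotation_line(line: str) -> list:
    """Hypothetical parser: VisDrone annotation lines are comma-separated ints."""
    if not line.strip():
        raise ValueError("empty annotation line")
    return [int(field) for field in line.strip().split(",")[:8]]


class TestParseAnnotationLine:
    """Regression tests pinning down previously fixed behavior."""

    def test_parses_standard_line(self):
        fields = parse_annotation_line("684,8,273,116,0,0,0,0")
        assert fields == [684, 8, 273, 116, 0, 0, 0, 0]

    def test_empty_line_raises(self):
        # Regression test: an empty annotation file used to cause a crash.
        with pytest.raises(ValueError):
            parse_annotation_line("")
```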
## Commit Message Guidelines

We follow Conventional Commits.

```
<type>(<scope>): <subject>

<body>

<footer>
```

Types:

- feat: New feature
- fix: Bug fix
- docs: Documentation changes
- style: Code style changes (formatting, etc.)
- refactor: Code refactoring
- test: Adding or updating tests
- chore: Maintenance tasks
Examples:

```
feat(dataset): add support for video sequences

Add VideoSequenceDataset class to handle video frames with
temporal information. This enables training on video tasks.

Closes #123
```

```
fix(converter): handle empty annotation files

Previously crashed when annotation file was empty.
Now returns empty annotations gracefully.

Fixes #456
```

```
docs(readme): update installation instructions

Add section on CUDA version compatibility and
troubleshooting common installation issues.
```
Subject line rules:

- Use imperative mood ("add" not "added" or "adds")
- Don't capitalize first letter
- No period at the end
- Maximum 50 characters
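The type and subject rules above can be condensed into a small validator. This is a toy sketch, not an official hook shipped with the project:

```python
import re

# Conventional Commits first line: type(scope): subject
# Subject must start lowercase and be at most 50 characters.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"
    r"(\([a-z0-9_-]+\))?: [a-z].{0,49}$"
)


def valid_subject(subject: str) -> bool:
    """Check the first line of a commit message against the rules above."""
    if subject.endswith("."):  # no period at the end
        return False
    return bool(COMMIT_RE.match(subject))
```

For instance, `valid_subject("feat(dataset): add support for video sequences")` passes, while `valid_subject("feat: Add feature")` fails on the capitalized subject.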
Use Google style docstrings:

```python
def train_model(
    model: nn.Module,
    dataloader: DataLoader,
    epochs: int = 10,
    device: str = "cuda",
) -> Dict[str, List[float]]:
    """Train object detection model.

    This function handles the complete training loop including
    forward pass, loss computation, and backpropagation.

    Args:
        model: PyTorch model to train
        dataloader: Training data loader
        epochs: Number of training epochs (default: 10)
        device: Device to use for training (default: "cuda")

    Returns:
        Dictionary containing training metrics with keys:
            - 'train_loss': List of loss values per epoch
            - 'val_loss': List of validation loss values

    Raises:
        ValueError: If epochs < 1
        RuntimeError: If CUDA requested but not available

    Example:
        >>> model = get_model("fasterrcnn_resnet50")
        >>> metrics = train_model(model, train_loader, epochs=50)
        >>> print(f"Final loss: {metrics['train_loss'][-1]}")
    """
    pass
```

When adding features:
- Update main README.md
- Add examples if applicable
- Update relevant documentation files
Project structure:

```
VisDrone-dataset-python-toolkit/
├── visdrone_toolkit/          # Core package
│   ├── __init__.py
│   ├── dataset.py             # Dataset classes
│   ├── utils.py               # Utilities
│   ├── visualization.py       # Plotting
│   └── converters/            # Format converters
├── scripts/                   # CLI tools
│   ├── train.py
│   ├── inference.py
│   ├── webcam_demo.py
│   ├── evaluate.py
│   └── convert_annotations.py
├── tests/                     # Unit tests
├── configs/                   # Training configs
├── docs/                      # Documentation
├── examples/                  # Examples
└── .github/                   # CI/CD workflows
```

- Documentation: Check README.md and other docs
- Issues: Search existing GitHub issues
- Discussions: Use GitHub Discussions for questions
- Contact: Email maintainers for sensitive matters
Contributors will be:
- Listed in CONTRIBUTORS.md
- Mentioned in release notes
- Credited in relevant documentation
By contributing, you agree that your contributions will be licensed under the Apache License 2.0.
Don't hesitate to ask! We're here to help:
- Open an issue with the "question" label
- Start a discussion on GitHub Discussions
- Contact maintainers directly
Thank you for contributing to VisDrone Toolkit! 🚀