This page covers the most common things that can go wrong when running or using Dropgate. If you get stuck, turning on debug logs for a minute usually makes the cause obvious.
- Can you reach `GET /api/info` on your server? (It should return JSON.)
- Is the feature you want actually enabled?
  - Hosted uploads: `ENABLE_UPLOAD=true`
  - Direct transfer (P2P): `ENABLE_P2P=true`
  - Web UI: `ENABLE_WEB_UI=true`
- Hosted uploads:
  - If you’re using the Web UI in a browser, make sure you’re on HTTPS (localhost is the usual exception).
  - If you’re behind a reverse proxy, make sure it allows request bodies large enough for upload chunks (often called something like “max body size”).
- Ensure your network/firewall allows traffic on the server port (default `52443`, or the value set by `SERVER_PORT`).
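The first check above can be run from a terminal; a minimal sketch, assuming the default port — the hostname is a placeholder for your own deployment:

```shell
# Placeholder host/port -- substitute your own deployment.
curl -s https://dropgate.example.com:52443/api/info
```

A JSON response confirms the server is reachable on that port.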
Set `LOG_LEVEL=DEBUG` on the server, reproduce the issue once, then set it back.

- `LOG_LEVEL=DEBUG` → detailed transfer flow
- `LOG_LEVEL=INFO` → normal operation
- `LOG_LEVEL=NONE` → no logs at all
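If you start the server from a shell, the log level can be raised for a single run without editing your env file; a sketch — the `dropgate` command here is a placeholder for however you launch the server:

```shell
# One-off debug run; the binary name is a placeholder.
LOG_LEVEL=DEBUG ./dropgate
```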
Uploads are disabled / 404 on upload routes
- Make sure `ENABLE_UPLOAD=true`.
"File exceeds limit … MB" / "Total bundle size exceeds limit" / "Chunk too large" / 413
- Increase `UPLOAD_MAX_FILE_SIZE_MB`.
- For multi-file uploads, the default behaviour (`UPLOAD_BUNDLE_SIZE_MODE=total`) enforces the size limit against the combined size of all files. Set it to `per-file` to check each file individually instead.
- If you're behind NGINX/Caddy/etc., also check your proxy's upload/body size limit.
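For NGINX specifically, the relevant directive is `client_max_body_size`; a sketch, assuming the default 5MB chunk size plus headroom (adjust to match your own `UPLOAD_CHUNK_SIZE_BYTES`):

```nginx
# Allow request bodies slightly larger than one upload chunk (5 MiB + overhead).
client_max_body_size 6m;
```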
“Server out of capacity” / 507
- Increase `UPLOAD_MAX_STORAGE_GB` (or set `0` for unlimited), and/or free disk space.
"Integrity check failed" / "Upload incomplete"
- Often proxy buffering/timeouts, unstable networks, or middleware touching the request body.
- Enable `LOG_LEVEL=DEBUG`, retry once, and check where it fails (init vs chunk vs complete).
Tuning chunk size
- The upload chunk size is controlled by `UPLOAD_CHUNK_SIZE_BYTES` (default `5242880` / 5MB, minimum `65536` / 64KB).
- If you're behind a reverse proxy with a body size limit, make sure the proxy allows at least `UPLOAD_CHUNK_SIZE_BYTES + 1024` bytes per request (the extra 1024 bytes account for encryption overhead and request framing).
- Lowering the chunk size can help on unstable connections (smaller chunks mean less data to re-upload on failure), but it increases the number of HTTP requests per file and adds per-chunk overhead (hashing, encryption IV/tag).
- The 64KB minimum prevents extreme fragmentation — values below this would generate millions of chunks for moderately sized files and cause significant per-chunk overhead.
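To see how chunk size trades off against chunk count, the arithmetic above can be checked in a shell — the file size here is an arbitrary example:

```shell
# chunks = ceil(file_size / chunk_size)
file_size=$(( 10 * 1024 * 1024 * 1024 ))      # 10 GiB example file
chunk=5242880                                  # default 5 MiB
echo $(( (file_size + chunk - 1) / chunk ))    # 2048 chunks
chunk=65536                                    # the 64 KiB minimum
echo $(( (file_size + chunk - 1) / chunk ))    # 163840 chunks
```

Even at the allowed minimum, a 10 GiB file already produces over 160,000 chunks, which illustrates why going lower is disallowed.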
"Too many chunks" error
- The server limits files to 100,000 chunks maximum (about 500GB at the default 5MB chunk size).
- Solution: Increase `UPLOAD_CHUNK_SIZE_BYTES` to reduce the number of chunks. For very large files, consider using a 10MB or 20MB chunk size.
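A minimal env sketch for the larger chunk size (20 MiB shown; remember to also raise any reverse-proxy body-size limit accordingly):

```shell
# 20 MiB chunks keep very large files well under the 100,000-chunk limit.
UPLOAD_CHUNK_SIZE_BYTES=20971520
```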
"Too many files" error (bundles)
- Bundles are limited to 1,000 files maximum for security and performance reasons.
- Solution: Split your files into multiple separate uploads, or use client-side ZIP compression before uploading.
- Some browser features (especially WebRTC used for P2P) require a secure context.
- If you see missing buttons or “blocked” errors in the Web UI, run the server behind HTTPS.
- P2P generally requires HTTPS (localhost is the usual exception).
- If peers can’t connect or get stuck “connecting”:
  - Try a different network (a mobile hotspot is a quick test).
  - Confirm `ENABLE_P2P=true`.
  - Try changing `P2P_STUN_SERVERS` to a different STUN provider.
  - Some networks/NATs need a TURN server to relay traffic (currently not supported).
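A sketch of switching STUN providers via the env var — the Google endpoint is a widely used public STUN server, shown as an example; check your config reference for the exact list format `P2P_STUN_SERVERS` expects:

```shell
# Example public STUN server; the expected list format is an assumption here.
P2P_STUN_SERVERS=stun:stun.l.google.com:19302
```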
If clients see “Too many requests”:
- Increase `RATE_LIMIT_MAX_REQUESTS`, or adjust `RATE_LIMIT_WINDOW_MS`.
- Or disable rate limiting by setting both to `0`.
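For example, to allow 300 requests per 60-second window (the values are illustrative, not recommendations):

```shell
# 300 requests per 60,000 ms window; set both to 0 to disable rate limiting.
RATE_LIMIT_MAX_REQUESTS=300
RATE_LIMIT_WINDOW_MS=60000
```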
When asking for help, include:
- Your `GET /api/info` output
- A short snippet of server logs around the error (ideally with `LOG_LEVEL=DEBUG`)
- Whether you’re using a reverse proxy/tunnel (NGINX/Caddy/Cloudflare Tunnel/Tailscale)