JugglucoNG Hub is a standalone mirror-first sync appliance for Juggluco and JugglucoNG.
It keeps compatibility with the current mirror protocol while adding:
- a web admin UI
- household management
- calibration/profile ingestion
- packaged deployment with prebuilt images
- optional private-CA HTTPS without DNS or port 80/443 assumptions
The stack has three services:
- mirror: native Juggluco mirror sidecar, default public TCP port 8795
- hub: admin/API service, internal port 8787
- proxy: bundled Caddy reverse proxy with private CA, default public HTTPS port 8443
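The three services map to a compose layout roughly like the sketch below. This is a hedged illustration, not the project's actual docker-compose.yml; only the image names (from the release notes) and the port numbers are taken from this document.

```yaml
services:
  mirror:
    image: jugglucong-mirror:local
    ports:
      - "8795:8795"        # public mirror protocol
  hub:
    image: jugglucong-hub:local
    expose:
      - "8787"             # internal admin/API, reached only via the proxy
  proxy:
    image: caddy:2-alpine
    ports:
      - "8443:8443"        # public HTTPS admin UI
    depends_on:
      - hub
```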
Default public surfaces:

- mirror protocol: 8795/tcp
- admin UI: https://HOST:8443/

Internal only:

- mirror web feed: 17580
- hub HTTP: 8787
The stack does not assume ports 80/443. All public ports are configurable.
- existing Juggluco/JugglucoNG mirror senders can target this server directly
- follower/source QR generation works through the hub UI
- mirrored calibration profiles are ingested
- households can be created, renamed, and deleted in the admin UI
- packaged releases can be installed without building on the target host
- remote deploy over SSH is supported
```shell
./scripts/one-click-compose.sh "Home CGM"
```

Optional fixed serial:

```shell
./scripts/one-click-compose.sh "Home CGM" "AIDEX-123"
```

That command:

- ensures a container runtime is available locally
- generates a private CA and server certificate under config/tls/
- bootstraps one managed household and one admin token
- writes .env.local
- starts the full stack
Then:

- point the source phone mirror sender to HOST:8795
- open the admin UI at https://HOST:8443/
- trust the generated CA if your browser/device requires it
- use config/bootstrap-admin-token.json for first admin login
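If you want the token on the command line rather than opening the file, a small helper like this works. It assumes the file is JSON with a top-level "token" field; that field name is a guess, so adjust it to match the actual file contents.

```shell
# Assumption: bootstrap-admin-token.json is JSON with a top-level
# "token" field; inspect the file if the real schema differs.
print_admin_token() {
  python3 -c 'import json,sys; print(json.load(open(sys.argv[1])).get("token",""))' "$1"
}

# Usage:
# print_admin_token config/bootstrap-admin-token.json
```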
Use the wrapper instead of raw Docker commands:
```shell
./scripts/hubctl.sh status
./scripts/hubctl.sh restart
./scripts/hubctl.sh logs
./scripts/hubctl.sh logs hub
./scripts/hubctl.sh logs mirror
./scripts/hubctl.sh stop
./scripts/hubctl.sh update
./scripts/hubctl.sh reset
```

The release build creates a tarball with the source tree, scripts, and preloaded container images.
Standard docker-image release:

```shell
./scripts/build-release.sh
```

Lighter binary-artifact release that skips the native mirror build on the target host:

```shell
./scripts/build-release.sh --remote-binaries user@server
```

Output:

- dist/JugglucoNG-Hub-release-src.tar.gz
- dist/JugglucoNG-Hub-release-bin.tar.gz
If you run both commands, dist/ keeps both archives side by side.
Binary-artifact notes:
- ships the mirror-native pieces that must match the build toolchain:
  - runtime/mirror/juggluco
  - runtime/mirror/lib/libjuice.so.1.6.2
  - runtime/mirror/lib/libstdc++.so.6
  - runtime/mirror/lib/libgcc_s.so.1
- does not bundle caddy
- for direct host-side mirror testing, use ./scripts/run-mirror-bin.sh instead of invoking runtime/mirror/juggluco directly
Included images:
- jugglucong-hub:local
- jugglucong-mirror:local
- caddy:2-alpine
The target host does not need to build those images again.
For a copy-paste Ubuntu VPS walkthrough, see INSTALL-SERVER.md.
On the target host, unpack the release and run:
```shell
./scripts/install-release.sh \
  --household "Home CGM" \
  --public-host 203.0.113.10 \
  --https-port 8443 \
  --mirror-port 8795
```

Optional flags:

- --serial SERIAL
- --mirror-host HOST
- --tls-alt-names "dns:example.com,ip:203.0.113.10"
- --skip-load if images are already present
What it does:
- loads bundled images when the release includes them
- otherwise uses the packaged mirror binary runtime and builds only the light container wrapper on the host
- generates private CA + server certificate
- writes runtime config
- boots the appliance
Maintenance helpers:
- ./scripts/update-release.sh re-runs the installer for the current bundle
- ./scripts/uninstall-release.sh removes the stack and data volumes
- ./scripts/uninstall-release.sh --purge-config also removes local .env.local and bootstrap token files
From your local machine:
```shell
./scripts/deploy-remote.sh \
  --target user@server \
  --key ~/.ssh/id_ed25519 \
  --household "Home CGM" \
  --public-host 203.0.113.10 \
  --https-port 8443 \
  --mirror-port 8795
```

Optional flags:

- --remote-dir /opt/jugglucong-hub
- --serial SERIAL
- --mirror-host HOST
- --tls-alt-names "dns:example.com,ip:203.0.113.10"
- --bundle /path/to/JugglucoNG-Hub-release-src.tar.gz
- --bundle /path/to/JugglucoNG-Hub-release-bin.tar.gz
- --bootstrap-docker / --no-bootstrap-docker
- --sudo auto|always|never
What it does:
- builds the release bundle locally if needed
- uploads it with scp
- optionally bootstraps Docker on the remote host
- installs into the remote directory
- starts the appliance there
Default TLS mode is private-ca.
That means:
- no DNS is required
- no public ACME/Let’s Encrypt flow is required
- no assumption that ports 80/443 are free
- browsers and devices may need to trust the generated CA certificate manually
Generated files:
- config/tls/ca.crt
- config/tls/ca.key
- config/tls/server.crt
- config/tls/server.key
- config/tls/issued-for.txt
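For intuition about what that material is, the private-CA bootstrap can be approximated with plain openssl. This is a hedged sketch of the general technique, not the installer's actual commands; the real script's subjects, key types, lifetimes, and SAN handling may differ.

```shell
# Hedged sketch of a private CA plus a CA-signed server cert.
mkdir -p config/tls

# 1. Private CA (self-signed)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout config/tls/ca.key -out config/tls/ca.crt \
  -days 3650 -nodes -subj "/CN=JugglucoNG Hub Private CA"

# 2. Server key + CSR for the public host
openssl req -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout config/tls/server.key -out config/tls/server.csr \
  -nodes -subj "/CN=203.0.113.10"

# 3. CA-signed server certificate with an IP SAN
printf 'subjectAltName=IP:203.0.113.10\n' > config/tls/san.ext
openssl x509 -req -in config/tls/server.csr \
  -CA config/tls/ca.crt -CAkey config/tls/ca.key -CAcreateserial \
  -days 825 -extfile config/tls/san.ext -out config/tls/server.crt
```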
The admin UI exposes:
- secure admin URL
- CA fingerprint
- server fingerprint
- CA download at /ca.crt
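To check a downloaded ca.crt against the fingerprint shown in the UI, something like this helper works; the assumption that the UI displays a SHA-256 fingerprint is mine, so switch the digest flag if the UI shows a different algorithm.

```shell
# Print a PEM certificate's SHA-256 fingerprint for comparison with
# the value displayed in the admin UI (SHA-256 is an assumption).
cert_fingerprint() {
  openssl x509 -in "$1" -noout -fingerprint -sha256
}

# Usage:
# cert_fingerprint config/tls/ca.crt
```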
Runtime data is stored in Docker named volumes by default:
- jng_hub_data
- jng_mirror_data
Local repo paths are no longer used for live application state.
Local config stays under:

- config/
- .env.local
Back up:
- config/
- Docker volume jng_hub_data
- Docker volume jng_mirror_data
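The backup list above can be scripted along these lines. This is a sketch under assumptions: the volume names come from this README, the BACKUP_VOLUMES switch is illustrative rather than a real hub feature, and the docker export uses the common throwaway-container pattern rather than any bundled tooling.

```shell
# Hedged backup sketch; run from the install directory.
backup_hub() {
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p backups
  # Local config (CA, certs, bootstrap token)
  tar czf "backups/config-$stamp.tar.gz" config/
  # .env.local sits next to config/, when present
  if [ -f .env.local ]; then
    tar czf "backups/env-$stamp.tar.gz" .env.local
  fi
  # Named volumes, exported through a throwaway container
  if [ "${BACKUP_VOLUMES:-0}" = 1 ] && command -v docker >/dev/null 2>&1; then
    for vol in jng_hub_data jng_mirror_data; do
      docker run --rm -v "$vol":/data -v "$PWD/backups":/backup alpine \
        tar czf "/backup/$vol-$stamp.tar.gz" -C /data .
    done
  fi
}

# Usage, including the docker volumes:
# BACKUP_VOLUMES=1 backup_hub
```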
Current admin UI supports:
- bootstrap admin login
- mirror target/profile display
- source/follower QR generation
- household list
- household create
- household rename
- household delete
- overview, recent events, audit trail
- CA download and TLS fingerprints
Source phone:
- point the current mirror sender to HOST:MIRROR_PORT
- use the generated source QR if preferred
Follower phone:
- use the generated follower QR
- no follower IP entry is required in the admin UI
The mirror sidecar remains the compatibility transport. The hub sits on top of it.
Copy .env.example to .env.local only if you want to edit values manually.
Important variables:
- JNG_PORT
- JNG_MIRROR_PORT
- JNG_MIRROR_PUBLIC_HOST
- JNG_MIRROR_PUBLIC_PORT
- JNG_PUBLIC_TLS_MODE
- JNG_PUBLIC_HOST
- JNG_PUBLIC_HTTPS_PORT
- JNG_PUBLIC_TLS_ALT_NAMES
- JNG_PUBLIC_CA_CERT
- JNG_PUBLIC_SERVER_CERT
- JNG_PUBLIC_SERVER_KEY
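A hedged example of how those variables might look in .env.local. The port numbers, TLS mode, and tls-alt-names format are taken from this document; the certificate paths and exact value semantics are assumptions, so treat .env.example as authoritative.

```shell
# Public admin UI
JNG_PUBLIC_TLS_MODE=private-ca
JNG_PUBLIC_HOST=203.0.113.10
JNG_PUBLIC_HTTPS_PORT=8443
JNG_PUBLIC_TLS_ALT_NAMES="dns:example.com,ip:203.0.113.10"

# Mirror transport
JNG_MIRROR_PORT=8795
JNG_MIRROR_PUBLIC_HOST=203.0.113.10
JNG_MIRROR_PUBLIC_PORT=8795

# Hub internal HTTP
JNG_PORT=8787

# TLS material (paths assumed to point at config/tls/)
JNG_PUBLIC_CA_CERT=config/tls/ca.crt
JNG_PUBLIC_SERVER_CERT=config/tls/server.crt
JNG_PUBLIC_SERVER_KEY=config/tls/server.key
```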
Mirror ingest bridge variables are bootstrapped automatically.
On a real Linux VPS, the runtime footprint is modest.
Rough practical target:
- personal/family use: 1 vCPU / 1 GB RAM
- small shared host: 2 vCPU / 2 GB RAM
The large memory number seen on macOS during local development is mostly Colima/VM overhead, not the hub process itself.
- the current mirror-first ingestion still depends on the sidecar web feed plus calibration files
- direct richer NG sync is not the primary path yet
- retention modes such as relay-only or recent-only are not implemented yet
- full public multi-tenant SaaS hardening is still future work
- runtime state is intentionally gitignored
- build/release artifacts live under dist/
- the release path is the supported deployment path when you do not want on-site builds
