FontGuard


FontGuard is a robust font watermarking framework that embeds watermark bits by manipulating font style representations rather than relying only on pixel-space perturbations, and decodes them with a contrastively trained decoder for stronger robustness to distortions.

Model Overview


✨ Highlights

  • Style-space watermarking with a font generator prior for better visual quality.
  • Contrastive decoder training for stable bit recovery.
  • Noise-aware curriculum that improves robustness under real-world distortions.
  • Demo assets included for 1-bit SimSun watermarking and multi-scenario evaluation.

Training Visualization


📦 Repository Layout

FontGuard/
├── main.py               # training entry
├── cfg.py                # training configuration
├── ds.py                 # dataloader (font + random background)
├── model/                # encoder/decoder/discriminator + noise layers
├── fig/                  # figures used in docs
└── demo/
    ├── test.py           # demo evaluation entry
    ├── demo_cfg.py       # demo config template
    └── README.md         # demo data details

🚀 Quick Start

1) Environment

Install dependencies in your Python environment:

pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

2) Prepare data and pretrained files

Set the root directory in cfg.py, then place required files under that root:

  • font images (font_dir, default: root/SimSun)
  • mean style feature (base_sty_path)
  • pretrained decoder checkpoint (pretrain_dec_ckpt)
  • background images (bg_dir, default: root/val2017)

Pretrained resources:

Recommended exp_data layout (matching cfg.py defaults):

exp_data/
├── SimSun/                      # training font images (ImageFolder style)
│   └── <font-subdir>/
│       ├── 0000.png
│       └── ...
├── val2017/                     # background images (e.g., COCO val2017)
├── base_sty_feat_CH.pth         # extracted mean style feature (Chinese font)
├── clip_cls_CH.pt               # pretrained decoder checkpoint (Chinese font)
├── font_model_CH.ckpt           # pretrained font recognition model (Chinese font)
├── base_sty_feat_ENG.pth        # extracted mean style feature (English font)
├── clip_cls_ENG.pt              # pretrained decoder checkpoint (English font)
└── font_model_ENG.ckpt          # pretrained font recognition model (English font)
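A quick sanity check of the layout above can be sketched as follows. This helper is illustrative (it is not part of the repository); the directory and file names mirror the Chinese-font entries in the tree above, so adjust the lists if your cfg.py points elsewhere:

```python
from pathlib import Path

# Names copied from the exp_data tree above (Chinese-font set).
REQUIRED_DIRS = ["SimSun", "val2017"]
REQUIRED_FILES = [
    "base_sty_feat_CH.pth",
    "clip_cls_CH.pt",
    "font_model_CH.ckpt",
]

def check_exp_data(root: str) -> list:
    """Return a list of entries missing under the exp_data root."""
    base = Path(root)
    missing = [d for d in REQUIRED_DIRS if not (base / d).is_dir()]
    missing += [f for f in REQUIRED_FILES if not (base / f).is_file()]
    return missing
```

Running `check_exp_data("exp_data")` before training makes missing assets fail fast instead of surfacing mid-run.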

3) Organize font images correctly

ds.py uses torchvision.datasets.ImageFolder, so images must be inside at least one subfolder:

SimSun/
└── <font-subdir>/
    ├── 0000.png
    ├── 0001.png
    └── ...

Expected image size is 80×80 (configured by font_img_size in cfg.py).
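If your glyph images sit directly under SimSun/ with no subfolder, ImageFolder will find no classes. A small stdlib sketch to move flat images into a single class subfolder (the helper and the subfolder name "0" are illustrative choices, not part of the repository):

```python
import shutil
from pathlib import Path

def ensure_class_subdir(font_dir: str, class_name: str = "0") -> int:
    """Move images sitting directly in font_dir into font_dir/class_name so
    torchvision.datasets.ImageFolder can read them. Returns the count moved."""
    root = Path(font_dir)
    sub = root / class_name
    sub.mkdir(exist_ok=True)
    moved = 0
    for img in root.glob("*.png"):
        shutil.move(str(img), str(sub / img.name))
        moved += 1
    return moved
```

After this, `ImageFolder(font_dir)` sees one class containing all glyph images, which is all the dataloader needs.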

4) Train

python main.py

Training outputs are written to exp_dir (auto-created in cfg.py), including checkpoints and visualization images.
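To resume from or evaluate the newest checkpoint in exp_dir, a minimal lookup can be sketched like this (the `*.ckpt` pattern is an assumption; match it to whatever extension your run actually writes):

```python
from pathlib import Path

def latest_checkpoint(exp_dir: str, pattern: str = "*.ckpt"):
    """Return the most recently modified checkpoint in exp_dir, or None."""
    ckpts = sorted(Path(exp_dir).glob(pattern), key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None
```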


⚙️ Key Configuration (cfg.py)

  • msg_bit: watermark bit length (default 1, so the decoder predicts among msg_n = 2**msg_bit = 2 message classes)
  • font_dir, bg_dir: font/background data directories
  • font_model_ckpt, base_sty_path, pretrain_dec_ckpt: required model assets
  • epochs, bs, enc_lr, dec_lr, disc_lr: training schedule and optimization
  • init_epoch, start_noise_epoch, full_noise_epoch: curriculum stages
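The relationships among these options can be sketched as follows. The msg_n derivation follows from the README (1 bit gives 2 classes); the linear noise ramp is an assumption about how the curriculum stages are typically used, and the epoch defaults here are placeholders, not the values in cfg.py:

```python
from dataclasses import dataclass

@dataclass
class Cfg:
    msg_bit: int = 1              # README default: 1 watermark bit
    start_noise_epoch: int = 10   # placeholder value, not the repo default
    full_noise_epoch: int = 30    # placeholder value, not the repo default

    @property
    def msg_n(self) -> int:
        # k watermark bits -> 2**k message classes (1 bit -> 2 classes)
        return 2 ** self.msg_bit

def noise_strength(epoch: int, cfg: Cfg) -> float:
    """Assumed linear curriculum: no noise before start_noise_epoch,
    full noise at/after full_noise_epoch, linear ramp in between."""
    if epoch < cfg.start_noise_epoch:
        return 0.0
    if epoch >= cfg.full_noise_epoch:
        return 1.0
    span = cfg.full_noise_epoch - cfg.start_noise_epoch
    return (epoch - cfg.start_noise_epoch) / span
```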

main.py sets CUDA_VISIBLE_DEVICES internally. Adjust it if needed for your machine.
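If you would rather control GPU visibility yourself, the same effect can be achieved before any torch import (the GPU index "0" here is just an example):

```python
import os

# Must run before torch is imported; the process will then only see GPU 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```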


🧪 Demo Evaluation

The demo folder includes evaluation code for released 1-bit watermarked SimSun assets across multiple scenarios.

  1. Download demo package (see demo/README.md).
  2. Configure paths in demo/demo_cfg.py.
  3. Ensure demo/test.py imports the same config module name (cfg).
  4. Run:
cd demo
python test.py

The script prints per-scenario decoding accuracy.
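The reported metric can be reproduced with a few lines. These function names are illustrative, not the ones used in demo/test.py:

```python
def bit_accuracy(pred_bits, true_bits):
    """Fraction of correctly decoded watermark bits."""
    assert len(pred_bits) == len(true_bits) and true_bits
    correct = sum(p == t for p, t in zip(pred_bits, true_bits))
    return correct / len(true_bits)

def per_scenario_accuracy(results):
    """results: {scenario: (pred_bits, true_bits)} -> {scenario: accuracy}."""
    return {name: bit_accuracy(p, t) for name, (p, t) in results.items()}
```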


📚 Citation

If this project helps your research, please cite:

@article{wong2025fontguard,
  title={FontGuard: A Robust Font Watermarking Approach Leveraging Deep Font Knowledge},
  author={Wong, Kahim and Zhou, Jicheng and Li, Kemou and Si, Yain-Whar and Wu, Xiaowei and Zhou, Jiantao},
  journal={IEEE Transactions on Multimedia},
  year={2025}
}

🙌 Acknowledgment

The model/ directory integrates reusable third-party modules (e.g., DGFont, differentiable JPEG, PCGrad) into the FontGuard training pipeline.
