I study how to make AI systems (LLMs, VLMs, and agents) reliable and aligned, with a focus on reward modeling, evaluation, and agentic visual reasoning.
Research Directions
- Multimodal reward modeling and preference alignment
- Post-training and evaluation for LLMs / LVLMs
- Agentic visual reasoning with verifiable evidence and tool use
- Vision-to-code, visual generation, and fine-grained visual equivalence evaluation
Contact
- Blog: chrisding.me / bblog.031105.xyz
- Email: sy.ding@smail.nju.edu.cn / syding1105@163.com
- Google Scholar: Shengyuan Ding
- Kee: kee.so/chrisding
Feel free to reach out if you'd like to chat or collaborate.