Research
My research spans Human-AI Collaboration, Educational AI, and Computer Vision. Below are my key projects.
Human-AI Collaborative Systems for Group Ideation
APEX Lab, HKUST(GZ) · July 2025 – Oct. 2025
Supervisor: Prof. Mingming Fan
Contributed to the development of GraftMind, a novel AI-mediated collaboration system that lets users ideate in independent workspaces while an AI mediator proactively shares collective knowledge across the team. Designed and implemented a Shared Context Mechanism that uses semantic graph structures and cognitive-state inference algorithms to provide three types of proactive assistance.
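The core idea of the Shared Context Mechanism can be illustrated with a minimal sketch: ideas from separate workspaces live in one shared semantic structure, and the mediator surfaces teammates' semantically related ideas. All names, the toy embeddings, and the similarity threshold below are hypothetical simplifications, not GraftMind's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    author: str
    text: str
    embedding: tuple  # placeholder semantic vector

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def share_related(new_idea, shared_graph, threshold=0.7):
    """Return other members' ideas semantically close to a new idea,
    simulating the mediator proactively bridging workspaces."""
    return [
        idea for idea in shared_graph
        if idea.author != new_idea.author
        and cosine(idea.embedding, new_idea.embedding) >= threshold
    ]
```

In the real system, the proactive step would be triggered by inferred cognitive state rather than by every new idea.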
Outcome: One second-author paper accepted at CHI 2026 (CCF-A).
Multi-Agent Framework for Open-Ended Answer Grading
SCNU · Feb. 2025 – May 2025
Supervisor: Prof. Huan Yang
Developed CogMAS, a cognitively grounded multi-agent framework that improves scoring consistency and interpretability for open-ended student responses. Designed Bloom's Taxonomy-based Teacher Agents enhanced by RAG and optimised via DPO, alongside a dual-verification mechanism in which an Evaluation Agent detects and corrects scoring bias.
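The dual-verification idea can be sketched as a simple loop: teacher agents propose scores, an evaluation step flags scores that diverge too far from the panel consensus, and flagged scores are revised. The tolerance, the median-based consensus, and the re-scoring rule below are illustrative stand-ins, not CogMAS's actual agents.

```python
def grade_with_verification(teacher_scores, tolerance=1.0, max_rounds=2):
    """Hypothetical dual-verification loop: flag scores far from the
    panel median and revise them (here, by pulling toward the median,
    standing in for an agent's re-score)."""
    scores = list(teacher_scores)
    for _ in range(max_rounds):
        median = sorted(scores)[len(scores) // 2]
        flagged = [i for i, s in enumerate(scores) if abs(s - median) > tolerance]
        if not flagged:
            break  # panel is consistent; accept the scores
        for i in flagged:
            scores[i] = (scores[i] + median) / 2
    return sum(scores) / len(scores)
```

The point of the structure is that an outlier score is never averaged in blindly; it is first checked against, and reconciled with, the rest of the panel.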
Outcome: One first-author paper accepted at IEEE SMC 2025 (CCF-C).
Multi-Expert Fusion Framework for Human Activity Recognition
University of Aberdeen · Oct. 2024 – Apr. 2025
Supervisor: Prof. Edward Chauch
Designed and implemented a multi-branch fusion framework integrating RTMPose, RAM (Swin-Large), and VideoMAEv2 for complementary pose, object, and motion evidence extraction under a data-scarce 20-shot protocol. Developed a GPT-based LLM arbitrator performing zero-shot fusion of multi-modal evidence via structured reasoning chains, achieving 98.3% mean class accuracy on the Kaggle HAR Video Dataset.
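The fusion step can be pictured as assembling the three branches' outputs into a structured prompt for the arbitrator. The prompt wording and field names below are illustrative; the actual reasoning-chain format used in the dissertation is not reproduced here.

```python
def build_arbitrator_prompt(pose_summary, object_tags, motion_prediction, labels):
    """Assemble multi-modal evidence (pose, objects, motion) into a
    structured prompt for a zero-shot LLM arbitrator. Format is a
    hypothetical example."""
    lines = [
        "You are an activity-recognition arbitrator.",
        f"Candidate labels: {', '.join(labels)}",
        f"Pose evidence (keypoint summary): {pose_summary}",
        f"Object evidence (detected tags): {', '.join(object_tags)}",
        f"Motion evidence (clip-level prediction): {motion_prediction}",
        "Reason step by step over the evidence, then answer with one label.",
    ]
    return "\n".join(lines)
```

Because the arbitrator only consumes textual evidence summaries, each expert branch (RTMPose, RAM, VideoMAEv2) can be swapped or retrained independently, which is what makes the 20-shot setting workable.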
Outcome: BSc dissertation, University of Aberdeen (2025).
Automated Scoring of Subjective Questions Based on LLMs
SCNU · Jan. 2024 – Dec. 2024
Supervisor: Prof. Huan Yang
Developed a domain-specific intelligent tutoring system integrating LLMs and knowledge tracing for automated scoring of subjective questions. Fine-tuned LLMs using P-Tuning and RLHF, and integrated LSTM, BERT, and KTPromptCast models to improve alignment with the scoring task.
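One way to picture combining the two signals is a weighted blend of the LLM's rubric score with a knowledge-tracing mastery estimate. The weighting scheme, parameter names, and score scale below are purely illustrative, not the system's actual scoring rule.

```python
def final_score(llm_score, kt_mastery, weight=0.8, max_score=10.0):
    """Hypothetical fusion of an LLM rubric score (0..max_score) with a
    knowledge-tracing mastery estimate (0..1), clamped to the scale."""
    blended = weight * llm_score + (1 - weight) * kt_mastery * max_score
    return round(min(max(blended, 0.0), max_score), 2)
```

The knowledge-tracing term acts as a prior on the student's ability, nudging borderline LLM scores toward the student's demonstrated mastery level.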
Outcome: One first-author paper accepted at EDCS 2025 (EI Indexed).