Digital health systems
Linking clinical text, wearables, patient-reported outcomes, and multimodal signals into usable evidence systems.
I build integrative frameworks that connect text, images, behavioural data, scientific literature, and health records into usable public knowledge.
The work spans digital health, natural language processing, scientometrics, and computational social science, with an emphasis on methods that travel across disciplines without losing rigour.
Using NLP, semantic analysis, and knowledge graphs to trace how research and public discourse evolve.
Studying social platforms, behaviour, and visual culture through reproducible, public-facing analysis workflows.
The research portfolio is organised as a sequence of clear modules rather than a dense wall of topics. Each module combines substantive questions, technical infrastructure, and evidence design.
Method stack
Text, image, sensor, and metadata signals treated as one integrated surface.
This means multimodal harmonisation, reproducible pipelines, and interfaces that communicate findings clearly to collaborators outside computing.
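A minimal sketch of what multimodal harmonisation can mean in practice: aligning sparse, heterogeneous daily signals onto one common time grid instead of silently dropping days where a source is missing. All records, field names, and the `harmonise` helper here are hypothetical toy examples, not a real schema from the projects described.

```python
from datetime import date, timedelta

# Hypothetical toy records; names and values are illustrative only.
wearable = {            # daily step counts from a wearable
    date(2024, 1, 1): 8200,
    date(2024, 1, 2): 4100,
    date(2024, 1, 4): 9700,
}
pro = {                 # sparse patient-reported pain scores (0-10)
    date(2024, 1, 2): 6,
    date(2024, 1, 4): 2,
}

def harmonise(start, end, *sources):
    """Align heterogeneous daily signals onto one shared date grid.
    Missing observations become None rather than being dropped, so
    downstream models see the gaps explicitly."""
    grid, d = [], start
    while d <= end:
        grid.append((d, tuple(src.get(d) for src in sources)))
        d += timedelta(days=1)
    return grid

aligned = harmonise(date(2024, 1, 1), date(2024, 1, 4), wearable, pro)
for day, (steps, pain) in aligned:
    print(day.isoformat(), steps, pain)
```

The design choice worth noting is that gaps are preserved as explicit `None` values: for clinical data, whether a signal is absent is itself information.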
Developing methods for clinical text, social media, and scientific corpora that remain interpretable across domains.
Connecting records, wearables, patient-reported outcomes, and multi-omics evidence into actionable models of care.
Tracing how public narratives, visual culture, and behaviour emerge through networked platforms and multimodal data.
Using citation graphs, topic dynamics, and large-scale metadata to identify interdisciplinary frontiers in research.
Current work ranges from data integration systems to applied multimodal research platforms. Each project is framed as a product-like module with collaborators, funding context, and a direct route into the work.
This study examines the dynamics of Chinese fashion aesthetics through social media analysis using multimodal data, and explores how fashion trends are represented, communicated, and diffused online.
An intelligent platform that integrates multimodal data to predict chronic disease flare-ups and recommend personalised behavioural changes; it is designed to evolve into a digital twin powered by federated learning for privacy-preserving, adaptive healthcare.
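To make the prediction idea concrete, here is a deliberately minimal logistic risk score over daily multimodal features. The feature names and hand-set weights are assumptions for illustration only; an actual system would learn such parameters, for example via federated training across sites.

```python
import math

# Hand-set illustrative weights, NOT learned parameters.
WEIGHTS = {"resting_hr": 0.04, "sleep_hours": -0.30, "pain_score": 0.25}
BIAS = -2.0

def flare_risk(features):
    """Toy logistic model: map daily features to a flare-up probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

low = flare_risk({"resting_hr": 60, "sleep_hours": 8, "pain_score": 1})
high = flare_risk({"resting_hr": 90, "sleep_hours": 4, "pain_score": 7})
print(round(low, 3), round(high, 3))
```

Even a sketch this small shows the shape of the problem: harmonised daily features in, a calibrated probability out, with the behavioural recommendation layer sitting on top.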
An assistive algorithm for visually impaired users that combines multi-camera vision to enhance spatial perception and object awareness, integrates vLLM for contextual understanding, and accepts optional LiDAR input for enhanced depth sensing, with the aim of delivering robust, real-time assistance that improves user independence and environmental interaction.
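The sensor-fusion part of that design can be sketched in a few lines: take the nearest-obstacle estimate across cameras, and let a LiDAR reading refine it when available. The `nearest_obstacle` function and its inputs are hypothetical, assuming each sensor already reports a distance in metres.

```python
def nearest_obstacle(camera_depths_m, lidar_m=None):
    """Fuse nearest-obstacle distances (metres) from several cameras,
    preferring the LiDAR reading when present, since camera-derived
    depth tends to be noisier than active ranging."""
    estimate = min(camera_depths_m)
    if lidar_m is not None:
        estimate = min(estimate, lidar_m)
    return estimate

# Two cameras see obstacles at 2.4 m and 1.8 m; LiDAR refines to 1.6 m.
print(nearest_obstacle([2.4, 1.8]))       # camera-only estimate
print(nearest_obstacle([2.4, 1.8], 1.6))  # with optional LiDAR
```

A real pipeline would of course fuse full depth maps and track objects over time; the point of the sketch is only the graceful-degradation design, where LiDAR is optional rather than required.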