I am Feng Yu, a PhD student in Computer Science at the University of Exeter, supervised by Prof. Jia Hu and Prof. Geyong Min. My current research focuses on LLM post-training, continual learning and adaptation, and efficient AI. I am particularly interested in how foundation models can be adapted efficiently while retaining capability under evolving tasks and distributions, and more recently in the continual adaptation of agents under changing interfaces and workflows. My research is supported by the China Scholarship Council and University of Exeter Scholarships.
Current interests
Post-training and efficient adaptation for large language models
Continual learning, retention, and robustness under evolving tasks and distributions
Evaluation and continual adaptation of agents under changing interfaces and workflows
Broader background
My broader background includes federated learning, continual learning, efficient AI, privacy-aware learning systems, and earlier work on deep reinforcement learning, blockchain-enabled learning systems, and engineering-oriented automation.
I am open to discussing research ideas and academic collaborations. Feel free to reach out by email.
News
May 18, 2025
Our research paper FedTaLoRA has been released!
Nov 20, 2024
The first comprehensive survey on FCL for Edge-AI has been released!
Sep 1, 2023
Started my PhD at Exeter!
Selected publications
2025
Blockwise Hadamard high-Rank Adaptation for Parameter-Efficient LLM Fine-Tuning
2024
Federated Continual Learning for Edge-AI: A Comprehensive Survey
Zi Wang, Fei Wu, Feng Yu, Yurui Zhou, Jia Hu, Geyong Min
arXiv preprint arXiv:2411.13740, 2024. https://arxiv.org/abs/2411.13740