Feng Yu

I am Feng Yu, a PhD student in Computer Science at the University of Exeter, supervised by Prof. Jia Hu and Prof. Geyong Min. My current research focuses on LLM post-training, continual learning and adaptation, and efficient AI. I am particularly interested in how foundation models can be adapted efficiently while retaining their capabilities under evolving tasks and distributions, and, more recently, in the continual adaptation of agents under changing interfaces and workflows. My research is supported by the China Scholarship Council and University of Exeter scholarships.

Current interests

  • Post-training and efficient adaptation for large language models
  • Continual learning, retention, and robustness under evolving tasks and distributions
  • Evaluation and continual adaptation of agents under changing interfaces and workflows

Broader background

My broader background spans federated learning, continual learning, efficient AI, and privacy-aware learning systems, along with earlier work on deep reinforcement learning, blockchain-enabled learning systems, and engineering-oriented automation.

I am open to discussing research ideas and academic collaborations. Feel free to reach out by email.

News

May 18, 2025 Our research paper FedTaLoRA has been released!
Nov 20, 2024 The first comprehensive survey on FCL for Edge-AI has been released!
Sep 1, 2023 Started my PhD at Exeter!

Selected publications

2025

  1. Blockwise Hadamard High-Rank Adaptation for Parameter-Efficient LLM Fine-Tuning
    Feng Yu, Jia Hu, and Geyong Min
    arXiv preprint arXiv:2509.21637, 2025
  2. Efficient Federated Class-Incremental Learning of Pre-Trained Models via Task-Agnostic Low-Rank Residual Adaptation
    Feng Yu, Jia Hu, and Geyong Min
    arXiv preprint arXiv:2505.12318, 2025

2024

  1. Federated Continual Learning for Edge-AI: A Comprehensive Survey
    Zi Wang, Fei Wu, Feng Yu, Yurui Zhou, Jia Hu, and Geyong Min
    arXiv preprint arXiv:2411.13740, 2024