About
I’m an AI4SE researcher building trustworthy, efficient, and sustainable software using AI. I currently work as a postdoctoral KTP Associate with both the University of Leeds and TurinTech AI, focusing on compiler- and LLM-based code optimisation. At the University of Leeds, I am a member of the Intelligent Systems Software Lab (ISSL) and the Distributed Systems and Services (DSS) research group, supervised by Prof Jie Xu and Prof Zheng Wang. At TurinTech AI, I’m a member of the Data Science team led by Dr Fan Wu and Dr Paul Brookes.
I completed my PhD in December 2024 in the Department of Computer Science at Loughborough University, supervised by Dr Tao Chen in the IDEAS (Intelligent Dependability Engineering for Adaptive Software) Laboratory. My PhD thesis received the SPEC Kaivalya Dixit Distinguished Dissertation Award 2024, a prominent award in computer benchmarking, performance evaluation, and experimental system analysis.
Research Focus
I work across a range of AI-powered techniques for software performance engineering, from foundational machine learning models to cutting-edge GenAI systems.
- Doctoral research — Software configuration performance engineering
- Developed ML/DL approaches that learn from high-dimensional configuration options to predict and optimise performance without exhaustive benchmarking, addressing critical challenges such as feature sparsity, rugged performance spaces, and cross-environment drift (versions/hardware/workloads); a toy sketch follows this list.
- Why it matters: This enables earlier performance issue detection, software adaptability and autoscaling, and faster product evolution with far fewer measurements.
- Current research — GenAI for industrial code performance optimisation
- Lead work on search-based multi-LLM optimisation and meta-prompting for robust code scoring and optimisation, combined with ensembling and compiler techniques; implemented in commercial platforms in collaboration with TurinTech AI and evaluated on real production workloads.
- Why it matters: Our methods deliver verifiable speedups and cost reductions on production codebases while making GenAI systems more reliable and auditable in practice.
- Ongoing interests — AI-driven performance engineering, AI4SE, SE4AI
- LLM performance modeling (hybrid models + online adaptive tuning), performance‑aware GenAI systems (dynamic prompt engineering + configuration tuning), trustworthy GenAI (RLHF + uncertainty verification), and industry standards/tooling (benchmarks, profiling + static analysis validation, CI/CD integration).
- Why it matters: These directions make GenAI systems predictable and safe in real-world workloads, enabling reproducible evaluation, faster industrial adoption, and lower compute and carbon footprints.
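The flavour of the configuration performance work above, in a nutshell: given a modest set of measured configurations, train a surrogate model that predicts the performance of configurations that were never benchmarked. The snippet below is a minimal, purely illustrative sketch using a generic random-forest regressor on synthetic data; the option space, measurements, and model are hypothetical placeholders, not the actual approaches from my papers.

```python
# Minimal illustration (not the models from my papers): learn a mapping from
# configuration options to performance from a few measured samples, then
# predict the performance of unseen configurations without benchmarking them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical configuration space: each row is one configuration
# (e.g. cache size, thread count, compression level, ...); values are synthetic.
n_samples, n_options = 200, 8
X = rng.integers(0, 10, size=(n_samples, n_options)).astype(float)

# Synthetic "measured" performance (e.g. latency in ms) with a sparse interaction,
# standing in for real benchmark results.
y = 50 + 3 * X[:, 0] - 2 * X[:, 2] + 0.5 * X[:, 0] * X[:, 4] + rng.normal(0, 1, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Surrogate model: predicts performance of held-out (un-benchmarked) configurations.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on held-out configurations: {mean_absolute_error(y_test, pred):.2f} ms")
```

In practice, the hard part is exactly where such an off-the-shelf regressor struggles: sparse options, rugged performance spaces, and drift across hardware, workloads, and versions, which is what the approaches above are designed to handle.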
If you’re interested in collaboration, please feel free to reach out!
News
July/2025: I am honored to be selected as a Shadow Program Committee Member for the IEEE/ACM International Conference on Software Engineering (ICSE 2026).
June/2025: Our paper ‘Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning’ has been accepted by the IEEE/ACM International Conference on Software Engineering (ICSE 2026) as a research paper in the first round, with an acceptance rate of 9.29% (60/646).
June/2025: Our paper ‘Learning Software Bug Reports: A Systematic Literature Review’ has been accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) as a journal paper.
January/2025: I am honored to be awarded the SPEC Kaivalya Dixit Distinguished Dissertation Award 2024, a prominent award in computer benchmarking, performance evaluation, and experimental system analysis. Grateful to @spec_perf for recognizing our contributions to performance engineering! Thank you @tao_chen_ideas and @PooyanJamshidi for your unwavering support!
November/2024: Our paper ‘Accuracy Can Lie: On the Impact of Surrogate Model in Configuration Tuning’ has been accepted by the IEEE Transactions on Software Engineering (TSE) as a journal paper.
October/2024: Our paper ‘Dividable Configuration Performance Learning’ has been accepted by the IEEE Transactions on Software Engineering (TSE) as a journal paper.
September/2024: I was selected as a Junior Program Committee member for the ACM/IEEE International Conference on Mining Software Repositories (MSR 2025).
August/2024: Our paper ‘Deep Configuration Performance Learning: A Systematic Survey and Taxonomy’ has been accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) as a survey paper.
July/2024: The team I led received the SSBSE’24 Challenge Track award for the paper ‘GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation’, thanks and congratulations to all the authors!
May/2024: Our paper ‘GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation’ has been accepted by the Symposium on Search-Based Software Engineering (SSBSE 2024) as a challenge track paper.
January/2024: Our paper ‘Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning’ has been accepted by the ACM International Conference on the Foundations of Software Engineering (FSE 2024) as a research paper with an acceptance rate of 11.6% (56/483).
May/2023: Our paper ‘Predicting Software Performance with Divide-and-Learn’ has been accepted by the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2023) as a research paper with two strong accepts and no revision requested, and an acceptance rate of 12.7% (60/473).
May/2022: Our paper ‘Does Configuration Encoding Matter in Learning Software Performance? An Empirical Study on Encoding Schemes’ has been accepted by the 19th International Conference on Mining Software Repositories (MSR 2022) as a technical paper, with an acceptance rate of 34% (45/138).
Selected Publications
- ICSE CCF-A CORE-A* Z. Xiang, J. Gong, and T. Chen, Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning, The IEEE/ACM International Conference on Software Engineering (ICSE), 2026, 13 pages.
- TOSEM CCF-A JCR-Q1 G. Long, J. Gong, H. Fang, and T. Chen, Learning Software Bug Reports: A Systematic Literature Review, The ACM Transactions on Software Engineering and Methodology (TOSEM), 2025, 47 pages.
- TSE CCF-A JCR-Q1 P. Chen, J. Gong, and T. Chen, Accuracy Can Lie: On the Impact of Surrogate Model in Configuration Tuning, The IEEE Transactions on Software Engineering (TSE), 2024, 33 pages.
- TSE CCF-A JCR-Q1 J. Gong, T. Chen, and R. Bahsoon, Dividable Configuration Performance Learning, The IEEE Transactions on Software Engineering (TSE), 2024, 29 pages.
- TOSEM CCF-A JCR-Q1 J. Gong and T. Chen, Deep Configuration Performance Learning: A Systematic Survey and Taxonomy, The ACM Transactions on Software Engineering and Methodology (TOSEM), 2024, 62 pages.
- SSBSE 2024 Challenge Winner CORE-B J. Gong, S. Li, G. d'Aloisio, Z. Ding, Y. Ye, W. Langdon, and F. Sarro, GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation, The Symposium on Search-Based Software Engineering Challenge Track (SSBSE 2024), 6 pages.
- FSE 2024 CCF-A CORE-A* J. Gong and T. Chen, Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning, The ACM International Conference on the Foundations of Software Engineering (FSE 2024), 24 pages.
- ESEC/FSE 2023 CCF-A CORE-A* J. Gong and T. Chen, Predicting Software Performance with Divide-and-Learn, The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2023), 13 pages.
- MSR 2022 CCF-C CORE-A J. Gong and T. Chen, Does Configuration Encoding Matter in Learning Software Performance? An Empirical Study on Encoding Schemes, The International Conference on Mining Software Repositories (MSR 2022), 13 pages.
Further Background
I received first-class BSc degrees from both the Information and Computing Science programme at Xi’an Jiaotong-Liverpool University (2014-16) and the Computer Science course at the University of Liverpool (2016-18).