About

I’m an AI4SE researcher building trustworthy, efficient, and sustainable software using AI.

I currently work as a Research Associate in AI for Software Engineering at King’s College London, contributing to the ITEA4 GENIUS project—a multinational collaboration leveraging GenAI and LLMs to enhance the software development life cycle. I am a member of the Software Systems (SSY) group in the Department of Informatics, supervised by Dr Jie M. Zhang, Dr Gunel Jahangirova, and Prof Mohammad Reza Mousavi. My work focuses on developing quality assurance methods for LLM-based software engineering, ensuring the functionality, quality, and architectural soundness of both human- and AI-generated software systems.

Previously, from June 2024 to November 2025, I worked as a KTP Associate with both the University of Leeds and TurinTech AI, focusing on compiler- and LLM-based code optimisation. We successfully completed the two-year KTP plan in just one and a half years. At the University of Leeds, I was a member of the Intelligent Systems Software Lab (ISSL) and the Distributed Systems and Services (DSS) research group, supervised by Prof Jie Xu and Prof Zheng Wang. At TurinTech AI, I was a member of the Data Science team led by Dr Fan Wu and Dr Paul Brookes.

I completed my PhD in Dec 2024 in the Department of Computer Science at Loughborough University, supervised by Dr Tao Chen in the IDEAS Laboratory (Intelligent Dependability Engineering for Adaptive Software Laboratory). My PhD thesis received the SPEC Kaivalya Dixit Distinguished Dissertation Award 2024, a prominent award in computer benchmarking, performance evaluation, and experimental system analysis.

Research Interests

Software Configuration Performance Engineering
Data-driven ML/DL approaches that learn high‑dimensional configuration spaces to predict and optimise performance without exhaustive benchmarking, tackling challenges such as feature sparsity, rugged performance landscapes, and cross‑environment drift (versions, hardware, workloads).
Why it matters: Enables earlier performance issue detection, software adaptability and autoscaling, and faster product evolution with far fewer measurements.
Trustworthy AI-assisted Software Development
Quality assurance and optimisation methods for LLM‑based software engineering — focusing on how we evaluate, compare, and improve AI-assisted coding workflows under realistic constraints (correctness, robustness, cost, sustainability), using SBSE‑style strategies to orchestrate LLMs and make results more reliable in practice.
Why it matters: Transforms unverified, ad-hoc LLM-assisted coding into a reproducible engineering process, reducing computing resources and carbon footprint.
GenAI for Code Performance Optimisation
Search-based multi‑LLM optimisation and meta‑prompting for robust code scoring and optimisation, combined with ensembling and compiler techniques; implemented in commercial platforms via TurinTech AI and evaluated on real production workloads.
Why it matters: Delivers verifiable speedups and cost reductions on production codebases while making GenAI systems more reliable and auditable in practice.
General AI4SE & SE4AI
LLM performance modelling (hybrid models + online adaptive tuning), performance‑aware GenAI systems (dynamic prompt engineering + configuration tuning), trustworthy GenAI (RLHF + uncertainty verification), and industry standards and tooling (benchmarks, profiling, static analysis, CI/CD integration).
Why it matters: Makes GenAI systems predictable and safe in real-world workloads, enabling reproducible evaluation, faster industrial adoption, and lower compute and carbon footprints.

If you’re interested in collaboration, please feel free to reach out!

Selected Publications

MSR'26 CCF-C CORE-A J. Gong, G. Pinna, Y. Bian, and J. M. Zhang, Analyzing Message-Code Inconsistency in AI Coding Agent-Authored Pull Requests, The ACM/IEEE International Conference on Mining Software Repositories Mining Challenge Track (MSR 2026), 2026.
MSR'26 CCF-C CORE-A G. Pinna, J. Gong, D. Williams, and F. Sarro, Comparing AI Coding Agents: A Task-Stratified Analysis of Pull Request Acceptance, The ACM/IEEE International Conference on Mining Software Repositories Mining Challenge Track (MSR 2026), 2026.
ICSE'26 CCF-A CORE-A* Z. Xiang, J. Gong, and T. Chen, Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning, The IEEE/ACM International Conference on Software Engineering (ICSE), 2026, 13 pages.
ASE'25 CCF-A CORE-A* J. Gong, R. Giavrimis, P. Brookes, V. Voskanyan, F. Wu, M. Ashiga, M. Truscott, M. Basios, L. Kanthan, J. Xu, and Z. Wang, Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective, The IEEE/ACM International Conference on Automated Software Engineering (ASE), 2025, 12 pages.
SSBSE'25 Challenge Track CORE-B J. Gong, Y. Bian, L. de la Cal, G. Pinna, A. Uteem, D. Williams, M. Zamorano, K. Even-Mendoza, W. B. Langdon, H. Menendez, and F. Sarro, GA4GC: Greener Agent for Greener Code via Multi-Objective Configuration Optimization, The Symposium on Search-Based Software Engineering Challenge Track (SSBSE 2025), 2025.
TOSEM'25 CCF-A JCR-Q1 G. Long, J. Gong, H. Fang, and T. Chen, Learning Software Bug Reports: A Systematic Literature Review, The ACM Transactions on Software Engineering and Methodology (TOSEM), 2025, 47 pages.
TSE'24 CCF-A JCR-Q1 P. Chen, J. Gong, and T. Chen, Accuracy Can Lie: On the Impact of Surrogate Model in Configuration Tuning, The IEEE Transactions on Software Engineering (TSE), 2024, 33 pages.
TSE'24 CCF-A JCR-Q1 J. Gong, T. Chen, and R. Bahsoon, Dividable Configuration Performance Learning, The IEEE Transactions on Software Engineering (TSE), 2024, 29 pages.
TOSEM'24 CCF-A JCR-Q1 J. Gong and T. Chen, Deep Configuration Performance Learning: A Systematic Survey and Taxonomy, The ACM Transactions on Software Engineering and Methodology (TOSEM), 2024, 62 pages.
SSBSE'24 Challenge Winner CORE-B J. Gong, S. Li, G. d'Aloisio, Z. Ding, Y. Ye, W. Langdon, and F. Sarro, GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation, The Symposium on Search-Based Software Engineering Challenge Track (SSBSE 2024), 6 pages.
FSE'24 CCF-A CORE-A* J. Gong and T. Chen, Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning, The ACM International Conference on the Foundations of Software Engineering (FSE 2024), 24 pages.
FSE'23 CCF-A CORE-A* J. Gong and T. Chen, Predicting Software Performance with Divide-and-Learn, The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2023), 13 pages.
MSR'22 CCF-C CORE-A J. Gong and T. Chen, Does Configuration Encoding Matter in Learning Software Performance? An Empirical Study on Encoding Schemes, The International Conference on Mining Software Repositories (MSR 2022), 13 pages.

Further Background

I received first-class BSc degrees from both the Information and Computing Science programme at Xi’an Jiaotong-Liverpool University (2014-16) and the Computer Science programme at the University of Liverpool (2016-18).