About
I’m currently a KTP Associate (a special kind of postdoc) jointly with the University of Leeds and TurinTech AI, working on cutting-edge compiler- and LLM-based techniques for code optimization. At the University of Leeds, I am a member of the DSS research group, supervised by Prof Jie Xu and Prof Zheng Wang. At TurinTech AI, I’m a member of the Data Science team, led by Dr Fan Wu and Dr Paul Brookes.
I completed my PhD in May 2024 in the Department of Computer Science at Loughborough University. I was very honored to be a member of the IDEAS Laboratory and to be supervised by Dr Tao Chen during my doctoral studies.
My key research interests are in deep learning for software performance prediction. I’m passionate about this field because it has the potential to revolutionize the way we develop software.
On this website, you can find more information about my research interests and projects. Feel free to contact me if you have any questions or would like to collaborate.
News
August/2024: Our paper ‘Deep Configuration Performance Learning: A Systematic Survey and Taxonomy’ has been accepted by the ACM Transactions on Software Engineering and Methodology (TOSEM) as a survey paper.
July/2024: Our paper ‘GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation’ won the SSBSE’24 Challenge Track. Thanks and congratulations to all the authors!
May/2024: Our paper ‘GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation’ has been accepted by the Symposium on Search-Based Software Engineering (SSBSE 2024) as a challenge track paper.
January/2024: Our paper ‘Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning’ has been accepted by the ACM International Conference on the Foundations of Software Engineering (FSE 2024) as a research paper, with an acceptance rate of 11.6% (56 out of 483).
May/2023: Our paper ‘Predicting Software Performance with Divide-and-Learn’ has been accepted by the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2023) as a research paper, with two strong accepts and no revision requested; the acceptance rate was 12.7% (60 out of 473).
May/2022: Our paper ‘Does Configuration Encoding Matter in Learning Software Performance? An Empirical Study on Encoding Schemes’ has been accepted by the 19th International Conference on Mining Software Repositories (MSR 2022) as a technical paper, with an acceptance rate of 34% (45 out of 138).
Publications
- TOSEM CCF-A J. Gong and T. Chen, Deep Configuration Performance Learning: A Systematic Survey and Taxonomy, ACM Transactions on Software Engineering and Methodology (TOSEM).
- SSBSE 2024 Challenge Winner CORE-B J. Gong, S. Li, G. d'Aloisio, Z. Ding, Y. Ye, W. Langdon, and F. Sarro, GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation, Symposium on Search-Based Software Engineering, Challenge Track (SSBSE 2024).
- FSE 2024 CCF-A CORE-A* J. Gong and T. Chen, Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning, ACM International Conference on the Foundations of Software Engineering (FSE 2024).
- ESEC/FSE 2023 CCF-A CORE-A* J. Gong and T. Chen, Predicting Software Performance with Divide-and-Learn, ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2023).
- MSR 2022 CCF-C CORE-A J. Gong and T. Chen, Does Configuration Encoding Matter in Learning Software Performance? An Empirical Study on Encoding Schemes, International Conference on Mining Software Repositories (MSR 2022).
Research Interests
My doctoral research focused on applying machine learning, particularly deep learning, to model software configurations and performance. This involves capturing the complex relationship between a software system’s configurable options and its performance metrics (such as latency and execution time), so that the performance of any given configuration can be predicted accurately.
In this way, software engineers and users can tune a system to meet their design and runtime requirements without spending much time testing it. Because a trained model predicts performance far more cheaply than a real measurement, it also avoids much of the cost of benchmarking configurations directly.
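To make this concrete, below is a minimal sketch of the idea in Python (a toy illustration, not the method from any of my papers): a standard off-the-shelf regressor is fitted on a few measured configurations, with hypothetical option names and made-up latency values, and then predicts the performance of a configuration that was never benchmarked.

```python
# Minimal sketch of configuration performance prediction.
# All option names and measurements here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row is one configuration: [cache_size_mb, thread_count, compression_on].
X = np.array([
    [64,  1, 0],
    [64,  4, 1],
    [128, 2, 0],
    [256, 8, 1],
    [512, 4, 0],
    [512, 8, 1],
])
# Measured latency (ms) for each configuration above.
y = np.array([210.0, 150.0, 180.0, 95.0, 120.0, 80.0])

# Fit a regression model on the measured configurations.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict the latency of an unseen configuration, avoiding the cost
# of actually benchmarking it.
unseen = np.array([[256, 4, 0]])
print(f"Predicted latency: {model.predict(unseen)[0]:.1f} ms")
```

In practice the configuration space is far larger and the option-performance relationship far less regular, which is exactly where deep learning models become useful.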
In my postdoc research, I’m focusing on applying large language models for software code optimization.
Further Background
I received first-class BSc degrees from both the Information and Computing Science programme at Xi’an Jiaotong-Liverpool University (2014-16) and the Computer Science programme at the University of Liverpool (2016-18).