Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning

Published in FSE, 2024

In this paper, we target configuration performance learning under multiple environments. We do so by designing SeMPL—a meta-learning framework that learns a common understanding from configurations measured in distinct (meta) environments and generalizes it to the unforeseen target environment. What makes it unique is that, unlike common meta-learning frameworks (e.g., MAML and MetaSGD) that train on the meta environments in parallel, we train on them sequentially, one at a time. The training order naturally allows the contributions of the meta environments to be discriminated in the resulting meta-model, which fits better with the characteristics of configuration data, whose performance is known to differ dramatically across environments. Experiments on nine subject systems demonstrate that SeMPL performs considerably better on 89% of the systems, with up to 99% accuracy improvement.
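
To illustrate the core idea, the following is a minimal, hypothetical sketch of sequential meta-training followed by few-shot adaptation; it is not SeMPL's implementation. It assumes synthetic environments and a plain linear model trained by gradient descent, and all names (`make_env`, `sgd_steps`, the chosen environment order) are illustrative only.

```python
# A minimal sketch of sequential meta-training, NOT the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def make_env(n=200, d=8, shift=0.0):
    """Synthetic environment: same configuration space, shifted performance relation."""
    X = rng.random((n, d))
    w_true = rng.normal(size=d) + shift       # environment-specific relation
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def sgd_steps(w, X, y, lr=0.05, epochs=50):
    """Gradient descent on squared error, starting from the given weights."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Meta environments are trained one at a time, so later environments refine
# (and implicitly outweigh) earlier ones in the shared weights -- the ordering
# is what distinguishes this from parallel meta-learning such as MAML/MetaSGD.
meta_envs = [make_env(shift=s) for s in (0.0, 0.3, 0.6)]   # an assumed order
w = np.zeros(8)
for X, y in meta_envs:
    w = sgd_steps(w, X, y)

# Few-shot adaptation to the unseen target environment.
X_t, y_t = make_env(shift=0.9)
w_target = sgd_steps(w.copy(), X_t[:20], y_t[:20], epochs=20)

mse = np.mean((X_t[20:] @ w_target - y_t[20:]) ** 2)
print(f"target-environment MSE after adaptation: {mse:.3f}")
```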

The source code, datasets, raw results, and supplementary materials can be found in our GitHub repository.

The full paper can be downloaded here.