SPLC 2022 Challenge

A Benchmark for Active Learning of Variability-Intensive Systems

Shaghayegh Tavassoli, University of Tehran, Tehran, Iran
Carlos Diego Nascimento Damasceno, Radboud University, Nijmegen, Netherlands
Mohammad Reza Mousavi, King's College London, London, United Kingdom
Ramtin Khosravi, University of Tehran, Tehran, Iran
Behavioral models are key enablers for behavioral analysis of Software Product Lines (SPLs), including testing and model checking. Active model learning comes to the rescue when family behavioral models are non-existent or outdated. A key challenge in active model learning is to detect commonalities and variability efficiently and to combine them into concise family models. Benchmarks and their associated metrics will play a key role in shaping the research agenda in this promising field and provide an effective means for comparing forthcoming techniques and identifying their relative strengths and weaknesses. In this challenge, we seek benchmarks for evaluating the efficiency (e.g., learning time and memory footprint) and effectiveness (e.g., conciseness and accuracy of family models) of active model learning methods in the software product line context. Each benchmark set must contain the structural and behavioral variability models of at least one SPL, and each SPL must contain products that require more than one round of model learning with respect to the basic active learning algorithm L*. Alternatively, tools supporting the synthesis of artificial benchmark models are also welcome.
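To make the "rounds of model learning" criterion concrete, below is a minimal, self-contained sketch of Angluin's L* for DFAs, using Maler-Pnueli-style counterexample handling (adding all suffixes of a counterexample to the distinguishing set) and a toy exhaustive equivalence oracle. The target language (words over {a, b} with an even number of 'a's) and all identifiers are illustrative assumptions, not part of the challenge; a real benchmark product would replace the membership and equivalence oracles.

```python
from itertools import product

# Hypothetical target language, standing in for one product's behavior:
# words over {a, b} with an even number of 'a's.
ALPHABET = ["a", "b"]

def member(word):
    """Membership query: does the (hypothetical) target accept this word?"""
    return word.count("a") % 2 == 0

def row(prefix, E):
    """Observation-table row of a prefix over the suffix set E."""
    return tuple(member(prefix + e) for e in E)

def find_counterexample(accepts, max_len=5):
    """Toy equivalence oracle: exhaustively compare words up to max_len."""
    for n in range(max_len + 1):
        for w in map("".join, product(ALPHABET, repeat=n)):
            if accepts(w) != member(w):
                return w
    return None

def lstar():
    S, E = [""], [""]  # access prefixes and distinguishing suffixes
    rounds = 0
    while True:
        rounds += 1
        # Close the table: every one-letter extension of an S-row
        # must itself match some S-row.
        changed = True
        while changed:
            changed = False
            rows_S = {row(s, E) for s in S}
            for s in list(S):
                for a in ALPHABET:
                    if row(s + a, E) not in rows_S:
                        S.append(s + a)
                        rows_S.add(row(s + a, E))
                        changed = True
        # Build a hypothesis DFA: one state per distinct row.
        rep = {}
        for s in S:
            rep.setdefault(row(s, E), s)
        trans = {(r, a): row(rep[r] + a, E) for r in rep for a in ALPHABET}
        start = row("", E)
        accept = {r for r in rep if r[E.index("")]}

        def accepts(word):
            q = start
            for a in word:
                q = trans[(q, a)]
            return q in accept

        cex = find_counterexample(accepts)
        if cex is None:
            return rep, trans, start, accept, rounds
        # Refine: add every suffix of the counterexample to E.
        for i in range(len(cex)):
            if cex[i:] not in E:
                E.append(cex[i:])

rep, trans, start, accept, rounds = lstar()
print(len(rep), rounds)  # hypothesis states and learning rounds used
```

For this two-state target, L* terminates after a single round; a benchmark product satisfying the challenge criterion would instead force the equivalence oracle to return at least one counterexample, triggering further rounds of refinement.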

