The NLBSE’24 Tool Competition
Rafael Kallis, Giuseppe Colavito, Ali Al-Kaswan, Luca Pascarella, Oscar Chaparro, and Pooja Rani
Proceedings of the IEEE/ACM 3rd Intl. Workshop on Natural Language-Based Software Engineering (NLBSE'24)
Abstract: We report on the organization and results of the tool competition of the third International Workshop on Natural Language-based Software Engineering (NLBSE'24). As in prior editions, we organized the competition on automated issue report classification, with a focus on small repositories, and on automated code comment classification, with a larger dataset. In this edition of the tool competition, six teams submitted multiple classification models to automatically classify issue reports and code comments. The submitted models were fine-tuned and evaluated on benchmark datasets of 3 thousand issue reports and 82 thousand code comments, respectively. This paper reports the details of the competition, including the rules, the participating teams and their models, and the ranking of models based on their average classification performance across issue report and code comment types.