The NLBSE’22 Tool Competition

  Rafael Kallis, Oscar Chaparro, Andrea Di Sorbo, and Sebastiano Panichella

  Proceedings of the IEEE/ACM 1st Intl. Workshop on Natural Language-Based Software Engineering (NLBSE'22)

Abstract: We report on the organization and results of the first edition of the Tool Competition from the International Workshop on Natural Language-Based Software Engineering (NLBSE'22). This year, five teams submitted multiple classification models to automatically classify issue reports as bugs, enhancements, or questions. Most of the submitted models are based on BERT (Bidirectional Encoder Representations from Transformers) and were fine-tuned and evaluated on a benchmark dataset of 800k issue reports. The goal of the competition was to improve the classification performance of a baseline model based on fastText. This report provides details of the competition, including its rules, the teams and contestant models, and the ranking of models based on their average classification performance across the issue types.
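To make the competition task concrete, the sketch below shows the expected input/output shape of an issue classifier: issue text in, one of the three labels out. This is only an illustration; the actual baseline uses fastText and the submitted models fine-tune BERT, whereas this stand-in uses a TF-IDF plus logistic regression pipeline, and all training examples are invented for the sketch.

```python
# Illustrative sketch of the NLBSE'22 competition task: classify issue
# reports as "bug", "enhancement", or "question". NOTE: this is NOT the
# fastText baseline or a BERT submission; it is a minimal TF-IDF +
# logistic regression stand-in, and the tiny dataset below is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_titles = [
    "App crashes when opening settings",
    "NullPointerException on startup",
    "Add dark mode support",
    "Support export to CSV format",
    "How do I configure the proxy?",
    "Where is the config file located?",
]
train_labels = ["bug", "bug", "enhancement", "enhancement", "question", "question"]

# Fit a text-classification pipeline on the toy examples.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_titles, train_labels)

# Predict a label for a new, unseen issue title.
pred = clf.predict(["Crash when clicking the save button"])[0]
print(pred)
```

In the competition itself, models are trained and evaluated on the 800k-issue benchmark, and submissions are ranked by average classification performance across the three issue types.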