Development of an Automated Scoring Model Using SentenceTransformers for Discussion Forums in Online Learning Environments

Bachriah Fatwa Dhini, Abba Suganda Girsang

Abstract


Due to the scarcity of public datasets, research on automatic essay scoring for Indonesian has been limited and has yielded suboptimal accuracy. In general, the main goal of an essay scoring system is to reduce the time spent on assessment, which is usually performed manually through human judgment. This study uses discussion forums in online learning to generate scores automatically by comparing student responses with the lecturer's rubric. A SentenceTransformers pre-trained model that constructs high-quality sentence embeddings was proposed to capture the semantic similarity between the responses and the lecturer's rubric, and the effectiveness of monolingual and multilingual models was compared. This research aims to determine the models' effectiveness and to identify the most appropriate model for Automated Essay Scoring (AES) framed as a paired-sentence Natural Language Processing task. The distiluse-base-multilingual-cased-v1 model achieved the highest performance, with a Pearson correlation of 0.63 and a mean absolute error (MAE) of 0.70. This indicates that the overall prediction results improve on earlier regression-task research.
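As an illustration of the general approach described in the abstract (not the authors' exact pipeline), the sketch below uses the sentence-transformers library to embed a lecturer rubric and student forum responses with distiluse-base-multilingual-cased-v1, takes cosine similarity as a similarity-based score, and evaluates predictions against human scores with Pearson correlation and MAE. The sample Indonesian texts, the 0-5 scale, and the linear score mapping are assumptions made for illustration only.

```python
# Minimal sketch of similarity-based essay scoring with SentenceTransformers.
# Assumes hypothetical sample texts and a simple 0-5 score scaling.
# Requires: pip install sentence-transformers scipy scikit-learn
from sentence_transformers import SentenceTransformer, util
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error

# Multilingual model reported to perform best in the paper.
model = SentenceTransformer("distiluse-base-multilingual-cased-v1")

# Hypothetical lecturer rubric (answer key) and student forum responses.
rubric = ("Jelaskan perbedaan antara supervised dan unsupervised learning "
          "beserta contohnya.")
responses = [
    "Supervised learning menggunakan data berlabel, misalnya klasifikasi email "
    "spam, sedangkan unsupervised learning mengelompokkan data tanpa label "
    "seperti clustering pelanggan.",
    "Supervised learning memerlukan label, unsupervised tidak.",
    "Machine learning adalah bagian dari kecerdasan buatan.",
]
human_scores = [4.5, 3.0, 1.0]  # hypothetical lecturer scores on a 0-5 scale

# Encode the rubric and responses into dense sentence embeddings.
rubric_emb = model.encode(rubric, convert_to_tensor=True)
response_embs = model.encode(responses, convert_to_tensor=True)

# Cosine similarity between each response and the rubric.
similarities = util.cos_sim(response_embs, rubric_emb).squeeze(1)

# Map similarity to the 0-5 rubric scale (a simple linear scaling;
# the paper's regression setup may differ).
predicted_scores = (similarities * 5).tolist()

# Evaluate predictions against the human scores, as in the paper's metrics.
corr, _ = pearsonr(predicted_scores, human_scores)
mae = mean_absolute_error(human_scores, predicted_scores)
print(f"Pearson r = {corr:.2f}, MAE = {mae:.2f}")
```

A monolingual Indonesian model could be substituted by passing a different model name to SentenceTransformer, which is the comparison the study performs between monolingual and multilingual variants.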

To cite this article: B. F. Dhini and A. S. Girsang, “Development of an Automated Scoring Model Using SentenceTransformers for Discussion Forums in Online Learning Environments,” CIT. Journal of Computing and Information Technology, vol. 30, no. 2, pp. 85–99, 2022, doi: 10.20532/cit.2022.1005478.


Keywords


Automatic Essay Scoring, Discussion Forum, SentenceTransformers, Monolingual Model, Multilingual Model


This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
