Project Number: 2025C-787
Date: 2026/04/01
Research Title: Testing and Developing an open-source LLM-based automated writing assistant for the language classroom.
| | Researcher Affiliation (at the time) | Position | Name |
|---|---|---|---|
| (Principal Investigator) | Global Education Center | Associate Professor | ガイエド ジョン モリース |
- Summary of Research Results
Significant progress has been made in the development and testing of an open-source large language model (LLM)-based automated writing assistant designed for use in language education settings. The project aims to create an accessible, effective tool that can provide automated essay assessment and writing feedback to language learners, using open-source models rather than proprietary ones to ensure transparency, reproducibility, and cost-effectiveness.

A central focus of the work to date has been the systematic evaluation of multiple open-source LLMs to determine which architectures and model families are best suited to automated essay scoring and feedback generation. Models have been assessed on criteria including scoring accuracy, consistency, the quality of written feedback, and computational efficiency. This comparative testing has provided valuable insight into the strengths and limitations of current open-source models when applied to educational writing assessment tasks.

In addition to model selection, substantial effort has been devoted to exploring and testing techniques for improving model performance on essay assessment. LoRA (Low-Rank Adaptation) fine-tuning has been applied to adapt general-purpose models to the specific demands of evaluating student writing, allowing targeted improvements without the prohibitive cost of full model retraining. Experiments have also been conducted with open-source reasoning and thinking models, which offer the potential for more structured, step-by-step evaluation of essay quality. Other prompting and optimization strategies have been explored in parallel, contributing to a growing understanding of which approaches yield the most reliable and pedagogically useful results.

A public-facing web application has been developed and deployed at https://app.awade.gec.waseda.ac.jp, where users can access and interact with some of the tools and results produced through this research.
The platform serves both as a demonstration of the project's current capabilities and as a practical resource for educators and learners interested in automated writing feedback. Making the application publicly available at this stage supports the project's commitment to openness and allows for broader feedback from real users. Looking ahead, the project will continue to refine model performance through further fine-tuning and technique development, expand the range of writing tasks and languages supported, and incorporate user feedback from the deployed platform to guide future improvements. The results so far are encouraging and suggest that open-source LLMs, with appropriate adaptation, can serve as a viable foundation for automated writing assistance in the language classroom.
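The model comparisons described above hinge on measuring how closely a model's essay scores agree with human raters. A metric commonly used for this in automated essay scoring work is quadratic weighted kappa, which penalizes large disagreements on an ordinal score scale more than small ones. The report does not specify which metric the project uses, so the following is an illustrative sketch, assuming an integer score scale:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two raters on an ordinal score scale,
    corrected for chance agreement; 1.0 means perfect agreement."""
    assert len(rater_a) == len(rater_b)
    n = max_score - min_score + 1
    total = len(rater_a)
    # Observed counts of each (score_a, score_b) pair.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_score][b - min_score] += 1
    # Score histograms, used for the chance-agreement baseline.
    hist_a = Counter(a - min_score for a in rater_a)
    hist_b = Counter(b - min_score for b in rater_b)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)
            expected = hist_a[i] * hist_b[j] / total
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

# Hypothetical example: model scores vs. human scores on a 1-5 scale.
human = [1, 2, 3, 4, 5, 3, 2, 4]
model = [1, 2, 3, 4, 5, 3, 2, 4]
print(quadratic_weighted_kappa(human, model, 1, 5))  # perfect agreement -> 1.0
```

Reporting consistency alongside accuracy could then amount to computing the same statistic between repeated scoring runs of the same model.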
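The LoRA fine-tuning mentioned above works by freezing the pretrained weights and learning only a low-rank additive update, which is why it avoids the cost of full retraining. The project presumably uses a standard training library for this; the NumPy sketch below only illustrates the underlying math, with hypothetical dimensions:

```python
import numpy as np

# LoRA idea: instead of updating a frozen weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in). Only A
# and B are trained, shrinking the trainable parameter count from
# d_out * d_in to r * (d_out + d_in).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable, small random init
B = np.zeros((d_out, r))                     # trainable, zero init

def lora_forward(x):
    """Forward pass: frozen path plus scaled low-rank adapter path."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialised to zero, the adapted model matches the base model
# exactly, so fine-tuning starts from the pretrained behaviour.
assert np.allclose(lora_forward(x), W @ x)

full = d_out * d_in
lora = r * (d_in + d_out)
print(f"trainable params: {lora} vs. full fine-tuning: {full}")
```

For these illustrative dimensions the adapter trains roughly 3% of the parameters a full fine-tune would touch, which is the source of the cost savings the summary refers to.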
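The step-by-step evaluation that reasoning models enable is typically elicited through a rubric-structured prompt that asks for per-criterion reasoning before a final score. The rubric criteria and score range below are hypothetical, not the project's actual rubric; this is just a sketch of the prompting pattern:

```python
# Hypothetical rubric for illustration; a real deployment would use the
# course's own assessment criteria.
RUBRIC = {
    "task_achievement": "Does the essay address the prompt fully?",
    "coherence": "Are ideas logically organised and connected?",
    "lexical_resource": "Is vocabulary varied and used accurately?",
    "grammar": "Are grammatical structures accurate and varied?",
}

def build_scoring_prompt(essay: str, min_score: int = 1, max_score: int = 5) -> str:
    """Assemble a prompt that asks the model to reason criterion by
    criterion before committing to a final holistic score."""
    lines = [
        "You are an experienced language teacher scoring a student essay.",
        f"Score each criterion from {min_score} to {max_score}.",
        "For each criterion, first explain your reasoning, then give the score.",
        "",
        "Rubric:",
    ]
    for name, question in RUBRIC.items():
        lines.append(f"- {name}: {question}")
    lines += ["", "Essay:", essay.strip(), "",
              "Finish with a line 'FINAL SCORE: <n>'."]
    return "\n".join(lines)

print(build_scoring_prompt("My hometown is famous for its old castle..."))
```

Forcing the reasoning to precede the score, and fixing an easily parsed final line, makes the output both more auditable for teachers and easier to extract programmatically.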