Note: This project was done as part of the MSc Artificial Intelligence, under the supervision of Diederik Roijers and in collaboration with the Japanese language department of Leiden University. This work was presented at BNAIC 2018. [link to paper]
Kanji, the characters in the Japanese writing system, are notoriously complex and difficult to teach and learn, as mastering them requires, above all, a great deal of repetition, which can be tedious and frankly quite boring. Inspired by an insight from the field of gaming on maximising player engagement, we aimed to improve user engagement with a kanji e-tutoring system by using implicit feedback to gauge the experienced challenge level. The core idea is that we can personalise the experience by adapting our system to provide just the right level of difficulty for each individual user, thereby increasing user engagement and ultimately the time spent using the system.
Perhaps even more so than in other domains, kanji requires individual practice and repetition to master, more than a classroom approach can offer, which is why an e-tutoring system is a natural fit.
One way to do this is through what we call a competence model, which includes parameters for the difficulty of the individual skills to acquire (e.g. per character), as well as each student's current level with respect to each skill. A major limitation, however, is that this tends to be highly data-intensive, even in small domains. To combat this, we exploit the key observation that it is often not necessary to explicitly model competence in order to provide the appropriate challenge to the student. Instead, we can learn a mapping from implicit user feedback to the perceived challenge level.
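To make the contrast concrete, here is a minimal sketch of the second approach: rather than estimating a per-character competence parameter for every student, we map observable interaction signals directly to an estimated challenge level. The feature choice (correctness and response time) and the weights are hypothetical, purely for illustration:

```python
def estimated_challenge(correct: bool, response_time_s: float) -> float:
    """Toy mapping from implicit feedback to perceived challenge in [0, 1].

    Hypothetical rule: wrong answers and long response times both suggest
    the question felt hard. The weights here are made up for illustration,
    not taken from the actual system.
    """
    time_component = min(response_time_s / 10.0, 1.0)  # saturate at 10 s
    correctness_component = 0.0 if correct else 1.0
    return 0.5 * correctness_component + 0.5 * time_component

# A fast correct answer reads as low challenge; a slow wrong one as high.
print(estimated_challenge(True, 2.0))
print(estimated_challenge(False, 9.0))
```

Note that no per-student or per-character parameters need to be fitted here, which is exactly what makes this approach less data-hungry than a full competence model.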
Kanji e-tutoring system
Our e-tutoring system provides the user with a series of multiple-choice questions: a kanji is shown, and the user selects its meaning from a set of options.
Because of the way we set up the system and the nature of kanji themselves, we can roughly estimate how difficult a question will be for a student in relative terms. For example, when we present the kanji for trees and provide the options trees, forest, plants and rice field, we expect the question to be more difficult than if we presented car, woman and mountain as the alternative options.
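This intuition can be sketched with a toy heuristic: distractors from the same semantic field as the answer make a question harder. The relatedness sets below are hand-crafted for this example and not part of the real system:

```python
# Hypothetical semantic neighbours of each meaning (illustration only).
RELATED = {
    "trees": {"forest", "plants", "rice field", "woods"},
}

def question_difficulty(answer: str, distractors: list[str]) -> float:
    """Fraction of distractors semantically related to the correct answer."""
    related = RELATED.get(answer, set())
    return sum(d in related for d in distractors) / len(distractors)

hard = question_difficulty("trees", ["forest", "plants", "rice field"])
easy = question_difficulty("trees", ["car", "woman", "mountain"])
print(hard, easy)  # the first option set yields the higher difficulty score
```

A relative ordering like this is all the system needs to pick an easier or harder next question for a given user.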
Offline learning & online adaptation
Now we’ve defined our e-tutoring environment, but we want to make it adaptive, so that the difficulty of the questions adjusts to the individual user.
We trained two models: one solely on answer correctness, and one on correctness in combination with implicit user feedback. The model trained with implicit user feedback was more effective at determining the perceived challenge level than the one trained on correctness alone.
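A minimal sketch of why the two setups differ (the field names are hypothetical, not the system's actual features): the correctness-only model cannot distinguish a quick, confident correct answer from a slow, hesitant one, while the implicit-feedback model can.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    correct: bool
    response_time_s: float   # implicit feedback: how long the answer took
    answer_changes: int      # implicit feedback: hesitation before answering

def features_correctness_only(x: Interaction) -> list[float]:
    return [float(x.correct)]

def features_with_implicit(x: Interaction) -> list[float]:
    # Same correctness signal, plus the implicit-feedback channels.
    return [float(x.correct), x.response_time_s, float(x.answer_changes)]

# A correct but slow, hesitant answer: correctness alone says "easy",
# while the implicit signals suggest the question was actually challenging.
sample = Interaction(correct=True, response_time_s=7.5, answer_changes=2)
print(features_correctness_only(sample))  # [1.0]
print(features_with_implicit(sample))     # [1.0, 7.5, 2.0]
```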
So, of course, we wanted to know whether our system is actually preferred by students. We tested this by presenting users with our system and a baseline system in random order, for a set number of questions each. They then indicated which system they preferred, and why. Several users explicitly noted that the adaptive system better matched their skill level.
Initial results already confirmed that using implicit user feedback provides a better mapping to the perceived challenge level than looking solely at the correctness of the answers, and further testing indicated that users generally prefer a system based on this method. We can therefore conclude that challenge balancing based on implicit user feedback is a promising way to increase user engagement with e-tutoring systems.