Test your top ideas with your learner. Report on what you did, what you learned, and where you will go from here.
Team SAL: Soren, Alex, Lucas
Our learner is Achu. Achu struggles with verbalizing his thoughts spontaneously. While he has a good vocabulary, and no motor-skills problems that might impede his speech, he does not usually use words, with a few exceptions, often repeating what other people might have said. To assess what he is learning, teachers and instructors ask him binary-choice questions ("Is the answer to the question I am posing option A or option B?").
Our Idea: Our goal was to get Achu to verbalize more and more words, and we wanted these words to be generated by him (not merely repeated after another person). Our assumption is that, as he generates more and more words and sequences of words, he will both grow more comfortable doing so and find that his words have value and positive consequences.
Our Prototype: Our dream was to create an app in which Achu would watch a video that he enjoys and narrate it, while the app records his narration.
To prototype this, we used the Wizard of Oz technique. We showed Achu two videos that he really enjoys (a basketball-themed one and a Hot Wheels-themed one). We used those two videos to trigger his interest (drawing on the Four-Phase Model of Interest Development). We asked him to say what he saw. First, we modeled it for him, with Soren acting as the instructor and one of the other two of us acting as the learner and narrating the video. Second, we asked Achu to do it: as the videos were rolling, Soren, in the role of the instructor, would ask him "Achu, what is happening now?" accompanied by a light tap. In the third iteration, instructor-Soren asked nothing verbally and only lightly tapped Achu, as a prompt to tell us what was happening in the video. Finally, after each viewing, we played the recording back to him, celebrating with Achu the power of his words and letting him hear himself "narrate" the key moments.
What happened: We came in fully prepared for the possibility that this might not work, and that Achu might not react at all to either us or the videos. We were pleasantly surprised that during all of the iterations (both the verbal prompt and the light tap), Achu said words he had generated himself (not repeated) that were related to the video, such as "Score", "Dribble", "Car", "Train", and even the phrase "Fell down", all of which were accurate and fit the narration.
Our insights & debrief with Marina: We were very excited about the outcome, and we think this could become a very good app that prompts Achu to generate his own words and, in the future, through scaffolding, his own phrases and longer text.
Marina had mixed reactions because 1) in some sense this work is very similar to the work that logotherapists have done with Achu over time; and 2) it still requires prompting (i.e., it does not fully solve the problem of getting Achu to speak without prompts). We think, however, that given that in his daily instruction Achu only chooses an A or B option, this could be a complementary tool. It would supplement the ongoing work that the instructors, the logotherapist, and other caretakers are doing with Achu, and allow him to generate more of his own words than he does right now.
For the next time: We are debating whether to go with an improved version of this prototype (a higher-resolution version, or one that pushes him further, from single words toward sentences) or whether to try something else.