Category Archive: Tech 4 Learners

Tech 4 Learners – Week 8 – Assignment Revised Point of View

Assignment

Round 2!  What is your new and improved Point of View?

Response

Our point of view has stayed the same – after these weeks of work, we really want to continue working with Achu on the same learning goals.

HMW support Achu in generating more words and even sentences?

We have three new prototypes that we are excited to try with him this week:

  • First, we want to build on our previous video narration idea. However, we also want to incorporate the protégé effect – the idea that while Achu might not always find it natural to speak for himself, he might find it compelling to speak in order to help someone else.

For this idea, we are creating a character who will introduce themselves to Achu and explain that they absolutely need Achu’s help to describe what is on the screen below, because they cannot see. First, they will ask Achu to describe a picture, and then a video.

This is meant to be a scaffolded exercise, and, as in previous prototypes, we keep the idea of recording Achu and playing the recording back to him.

  • Based on Marina’s feedback, we want to build on Achu’s strengths, one of which is his ability to put together puzzles really well. We are designing a game that requires Achu to put together sentences, where each word is on a puzzle piece that only fits with the others in certain ways (a rough sketch of this mechanic follows the list below).
  • We want to test with Achu an existing cat app that is popular among children: you speak to the app, your voice is recorded, and the cat replays it as if it had said the words itself. Beyond the entertainment value, we want to see whether this could work as a warm-up exercise for Achu to generate more words. We also want to test our hypothesis that hearing his own words played back will give Achu a better sense of their value.
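To make the puzzle-sentence idea a bit more concrete, here is a minimal sketch of the mechanic in Python. Everything in it is an assumption for illustration – the words, the connector shapes and the function names are made up, not part of our actual prototype – but it shows the core rule we have in mind: each word piece has a left and a right connector, and a row of pieces is accepted only when every adjacent pair snaps together.

```python
# Hypothetical sketch of the sentence-puzzle mechanic: each word is a piece
# with a left and a right connector; two pieces only snap together when the
# right connector of one matches the left connector of the next.

from dataclasses import dataclass

@dataclass
class Piece:
    word: str
    left: str   # connector shape on the left edge
    right: str  # connector shape on the right edge

# Example pieces for the sentence "The car fell down" (shape names are arbitrary).
PIECES = [
    Piece("The", left="flat", right="round"),
    Piece("car", left="round", right="square"),
    Piece("fell", left="square", right="star"),
    Piece("down", left="star", right="flat"),
]

def snaps(a: Piece, b: Piece) -> bool:
    """Two pieces fit only if a's right connector matches b's left connector."""
    return a.right == b.left

def is_valid_sentence(pieces: list[Piece]) -> bool:
    """A row of pieces is accepted only if every adjacent pair snaps together."""
    return all(snaps(a, b) for a, b in zip(pieces, pieces[1:]))

if __name__ == "__main__":
    print(is_valid_sentence(PIECES))        # True: correct word order
    print(is_valid_sentence(PIECES[::-1]))  # False: reversed order does not fit
```

In a fuller version, several pieces could share connector shapes so that more than one grammatical sentence is possible.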

Tech 4 Learners – Week 8 – Reading Assignment


Reading: 

“Tangible Bits: Beyond Pixels”, Hiroshi Ishii, 2008 – MIT Media Lab

Response: 

Tangible interfaces can express information that regular screens cannot. TUIs act upon senses other than sight and hearing by providing physical feedback to relay information. This additional information about the subject matter can be provided by means of force, movement, size, heat or any other channel that engages more senses. Think of a screen that is able to transform its surface to feel like velvet and show in 3D the item you are designing. TUIs give you information you can touch.

The 3D nature of TUIs can potentially aid education with applications that enhance motor skills, sculpting, modeling, movement understanding and several other concepts that would be easier to learn through direct physical representations. As TUIs evolve to offer more resolution and greater fidelity to what they are actually modeling, they will serve to enhance virtual reality and immersive environments by providing stimuli to all of our senses in unison to represent information.

The application of TUIs that I find most interesting is “Tangible Telepresence”. As I understand it, this type is the one that actuates your senses the most compared to the other types. This modality attempts to bring some of the realism of what you are observing into your experience. It will attempt to convey the acceleration forces a race car driver is feeling while you watch a race. It will make you feel the vibration patterns created by sound waves colliding in a physics experiment. It will help you choose clothes based on how the cloth feels instead of relying solely on how it looks.

Notes

(Photo of handwritten notes.)

Tech 4 Learners – Week 7 – Class Notes

Some of the parents and kids from OMS came to watch our presentations. We were actually the first ones to go – Soren presented and I controlled the slides… we went a little over time, but it went well.

At the end of class, I stayed back with Alex, Karin and Marina to try to get to some kind of breakthrough, or at least decide on next steps for our project. Karin thoughtfully asked if there was anything Marina had never tried but would want to. She said Virtual Reality… that got the juices flowing…

How could we have A engage, practice, and develop his verbal communication skills within a virtual world? Would he verbalize in order to interact with the game? Could he drive a car with voice commands? Would he engage in the task of teaching an agent within this world – or in verbal play with a virtual character?

Back home I thought of Minecraft – would he engage in an activity that requires that level of focus? Could he interact with the game, using an existing world and voice commands?
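To make that thought experiment slightly more concrete, here is a rough sketch in Python of how a tiny spoken vocabulary could be mapped to in-game actions. The word list, the `recognize_word()` and `send_to_game()` stubs and the action names are all hypothetical – a real version would plug an actual speech recognizer and a Minecraft mod or bot API into those two stubs.

```python
# Hypothetical sketch: map a small spoken vocabulary to game actions.
# recognize_word() and send_to_game() are stand-ins for a real speech
# recognizer and a real Minecraft mod/bot interface.

COMMANDS = {
    "go": "move_forward",
    "stop": "stop_moving",
    "jump": "jump",
    "dig": "mine_block",
    "car": "spawn_minecart",   # reward verbalizing with something fun
}

def recognize_word() -> str:
    """Stub: in a real prototype this would return one recognized spoken word."""
    return input("Say a word: ").strip().lower()

def send_to_game(action: str) -> None:
    """Stub: in a real prototype this would trigger the action in the game."""
    print(f"[game] performing action: {action}")

def run_session() -> None:
    """Keep listening; every recognized word that maps to an action is rewarded
    immediately, so speaking has a visible, fun consequence."""
    while True:
        word = recognize_word()
        if word == "quit":
            break
        action = COMMANDS.get(word)
        if action:
            send_to_game(action)
        else:
            print(f"[game] heard '{word}' - no action for it yet, but it was recorded!")

if __name__ == "__main__":
    run_session()
```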

Some of the feedback we got:

(Photos of the written feedback we received.)

Tech 4 Learners – Week 7 – Prototype Presentation

Prototype presentation: 

Text for presentation: 

Learning goal: 

We want ‘A’ to learn the value of communicating with others.

Theory of learning:

Our tool intends to offer him practice in generating new words and then reward him with the playback experience. This will reinforce that his words have meaning, power and entertainment value. If he is interested in the material presented, he will engage in narrating it. Ideally, we want him to transfer these skills into real-life situations.

If the program:

  • Utilizes engaging content for ‘A’
  • Replays ‘A’s words so he and others see their value (and enjoy them!)
  • Initially offers prompts for ‘A’ to speak, then gradually reduces them (a rough sketch of this fading logic follows below)

Then ‘A’ will, over time…

  • Generate spontaneous and increasingly complex sentences.
  • Transfer those skills to real life situations.
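As a purely illustrative sketch of the third point above (gradually reducing prompts), the Python snippet below shows one possible fading rule: the probability of prompting starts high and drops a little every time ‘A’ speaks without a prompt, down to a floor. The constants and function names are assumptions for illustration, not a description of our actual tool.

```python
import random

# Illustrative prompt-fading logic: start by prompting almost every time,
# and back off whenever the learner speaks without a prompt.

PROMPT_START = 0.9   # initial probability of prompting
PROMPT_FLOOR = 0.1   # never drop below an occasional prompt
FADE_STEP = 0.1      # how much to fade after an unprompted utterance

def should_prompt(prompt_prob: float) -> bool:
    """Decide whether to give a prompt on this trial."""
    return random.random() < prompt_prob

def update_prompt_prob(prompt_prob: float, spoke_unprompted: bool) -> float:
    """Fade prompts when the learner speaks on his own; otherwise keep the level."""
    if spoke_unprompted:
        return max(PROMPT_FLOOR, prompt_prob - FADE_STEP)
    return prompt_prob

# Example: simulate ten trials where the learner speaks unprompted half the time.
if __name__ == "__main__":
    prob = PROMPT_START
    for trial in range(10):
        prompted = should_prompt(prob)
        spoke_unprompted = (not prompted) and random.random() < 0.5
        prob = update_prompt_prob(prob, spoke_unprompted)
        print(f"trial {trial}: prompt={prompted}, "
              f"unprompted_speech={spoke_unprompted}, next_prompt_prob={prob:.1f}")
```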

Questions:

  • How do we build on what we’ve learned to create a tool that encourages ‘A’ to express his thoughts with less prompting?
  • What additional mechanism or feature could support ‘A’ in transferring these skills to real-world situations?
  • We also made some videos, but there was no time to present them – we had 60 seconds!

Videos

I prepared a stop-motion video for presenting the prototype as well:

It is the final result of some thinking and prototyping:

Tech 4 Learners – Week 7 – Assignment Prototype Tests 1

Assignment

Test your top ideas with your learner.  Report on what you did, what you learned, and where you will go from here.

Response

Team SAL: Soren, Alex, Lucas 

Our learner is Achu. Achu struggles with verbalizing his thoughts spontaneously. While he has a good vocabulary and no motor-skill problems that might impede his speech, he rarely uses words on his own, with a few exceptions, and often repeats what other people have said. To assess what he is learning, teachers and instructors ask him binary-choice questions (is the answer to the question I am posing option A or option B?).

Our Idea: Our goal was to get Achu to verbalize more and more words – and we wanted these words to be generated by him (not merely repeated after another person). Our assumption is that, as he generates more and more words and sequences of words, he will both become more comfortable doing so and find that his words have value and positive consequences.

Our Prototype: Our dream was to create an app in which Achu would watch a video that he enjoys and narrate it, while we record his narration.

To prototype this, we used the Wizard of Oz technique. We showed Achu two videos that he really enjoys (a basketball-themed one and a Hot Wheels-themed one). We used those two videos to trigger his interest (using the 4-Phase Development of Interest framework). We asked him to say what he saw. First, we modeled it for him, with Soren acting as the instructor and one of the other two of us acting as the learner and narrating the video. Second, we asked Achu to do it – as various videos were rolling, Soren, in the role of the instructor, would ask him “Achu, what is happening now?”, accompanied by a light tap. In the third iteration, Soren only lightly tapped Achu as a prompt to tell us what was happening in the video. Finally, after each viewing, we played the recording back to him – celebrating with Achu the power of his words – and letting him hear himself “narrate” the key moments.

What happened: We came in fully prepared for the possibility that this might not work – that Achu might not react at all, to either us or the videos. We were pleasantly surprised that during all of the iterations (both the verbal prompt and the light tap), Achu said words that he had generated himself (not repeated) and that were related to the video, such as “Score”, “Dribble”, “Car”, “Train” and even the phrase “Fell down” – all accurate and linked to the narration.

Our insights & debrief with Marina: We were very excited about the outcome, and we think this could become a very good app that prompts Achu to generate his own words and, in the future, through scaffolding, his own phrases and longer text.

Marina had mixed reactions because 1) in some sense this work is very similar to what Achu’s speech therapists have done with him over time; 2) it still requires prompting (i.e. it does not eliminate the need for prompts). We think, however, that given that in his daily instruction Achu only chooses between an A or B option, this could be a complementary tool. It would supplement the ongoing work that the instructors, the speech therapist and other caretakers are doing with Achu and allow him to generate more of his own words than he does right now.

For next time: We are debating whether to go with an improved version of this prototype (a higher-resolution version, or potentially something that pushes him further to generate full sentences rather than single words) or whether to try something else.
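If we do build the higher-resolution version, the core loop of the envisioned app is simple: play an engaging clip, record whatever Achu says while it runs, then play his narration back. Below is a minimal sketch of that loop in Python using the sounddevice library; the clip-playing stub, the file name and the fixed 30-second duration are assumptions for illustration only, not our actual implementation.

```python
# Minimal sketch of the envisioned narrate-and-playback loop.
# Assumes the `sounddevice` library; video playback itself is stubbed out.

import sounddevice as sd

SAMPLE_RATE = 44100   # audio sample rate in Hz
CLIP_SECONDS = 30     # assumed clip length for this sketch

def play_clip(path: str) -> None:
    """Stub: in the real tool this would start the chosen video clip."""
    print(f"(playing video clip: {path})")

def narrate_and_playback(clip_path: str) -> None:
    """Record the learner's narration while the clip runs, then replay it."""
    play_clip(clip_path)
    narration = sd.rec(int(CLIP_SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=1)
    sd.wait()                      # block until the recording is finished
    print("Now let's hear your narration!")
    sd.play(narration, SAMPLE_RATE)
    sd.wait()

if __name__ == "__main__":
    narrate_and_playback("hot_wheels_clip.mp4")  # hypothetical clip file
```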

Tech 4 Learners – Week 6 – Reading Assignment

Assignment

“Teachable Agents and the Protégé Effect: Increasing the Effort Towards Learning” (Chase, Chin, Oppezzo, & Schwartz, 2009) is a research article reporting on two studies.  Please pay special attention to the two “methods” sections, and the “general discussion” at the end.  Post 3 paragraphs that describe in your own words what each of the two studies did, and what you think the important take-aways may be from this research.

Response

The research’s primary goal was to further investigate the use of teachable agents (TAs) to increase cognitive gains. Using software in which students can create concept maps of chains of causal relationships, students were given a passage on “Fever”. One group of students was prompted to teach the software’s TA and the other group to learn for themselves. During this process they were able to read the passage, build and adjust the concept maps, chat with other students online and play practice rounds of the game show. On the following day, they would participate in the game show, a game in which their TAs would play as either agents or avatars.

The first study focused more closely on how a “mere belief” manipulation would affect the performance of the TAs in the game show. The results showed that the students who taught the TAs outperformed the ones learning for themselves. What strikes me as most interesting is that I have had this experience personally as a teacher – the more I taught a subject, the more I learned about it. I eventually started studying new subject matter with the preset notion that I would have to teach it, or at least present it, the next day. This study confirmed my intuitive notion that you learn by teaching. Another aspect of the first study that struck me was that lower-achieving students obtained greater gains from the exercise, probably because they were stimulated to look at how they think and to move from a fixed mindset to a growth mindset. The process shows that intelligence is teachable and can advance.

The second study was done to explore the underlying mechanisms by which the protégé effect actually takes place. The sample was selected from a higher-achieving group of students, and the study was carried out in a very similar manner to the first. Apart from some minor differences in the mechanics of the software use and the gameplay session, the main difference lay in what data was being collected and observed – the internal thought processes of the students. The students were prompted to think out loud during the process in order to capture any underlying mechanism that would explain the positive effect of teaching the TA. The findings were that the students who taught the TA had less fear of failure (EPB), since the responsibility was shared with the TA, and at the same time they cared much more about improving the TA’s future performance than the control group did. This empathetic relationship, for me, is the key to better learning outcomes – the student must see a reason for learning, be it to teach someone else, be it to understand that you can actually improve someone’s “intelligence” – you are not stuck with “being dumb”.

What also struck a chord was that it seems we tend to care much more about pleasing others than about taking care of ourselves. The students cared about the TAs’ performance in the game; they felt sorry for them when they made a mistake and were compelled to teach them better, or more, in the future. It evokes a sense of responsibility that is apparently stronger than the one we feel towards ourselves. For me, it relates to the feeling that if we are in a car alone we are more likely to be reckless than if we have others in the car – we are responsible for their lives and therefore act more responsibly.

So I guess that learning by teaching can be summarized by the idea that “learning is a side effect of sustained engagement”. If you need to teach someone, you engage more deeply with the reading and with the preparation of the explanation/schema/mental model, and finally you care about the outcomes – you care more about whether your student learned than you would care about learning it only for yourself.

Tech 4 Learners – Week 6 – OMS Visit

Went to OMS today to test our prototype with “A”. Successful in a way, yet not groundbreaking. We were able to have him interact with our prototype, but we saw only a few instances of spontaneous word generation. We recorded the entire session, and from it we will be able to extract more information about the specific triggers for this word generation. Word repetition, and word generation when prompted, were not a problem.

After the testing session we interviewed Marina, and her feedback was that this kind of prompting to comment has been applied with him for the past several years with limited success. He has shown gains, yet we still have to think outside the box to find some way for him to generate words and transfer this ability to his daily activities – something practical that would promote a degree of independence for him.
