Prior to yesterday, I had a very rudimentary understanding of Google Glass and what it could do. What I saw last night opened my mind to what wearable technology can mean, from daily activities such as taking pictures to having “x-ray vision” and even improving human-to-human interaction.
Dr. Ned T. Sahin, Harvard neuroscientist and Fellow in Cognitive Neuroscience at the Institute for Neural Computation, talked about how Glass technology could support communication for children with autism. His company, Brain Power, has developed an app for Glass that identifies different types of human emotions through facial expressions. For example, if someone is smiling, excited, or otherwise showing happy emotions, the child wearing Google Glass sees a green frame around the person’s face. Conversely, if someone is showing negative emotions, the child sees a red frame. The frames are just one kind of cue and could be replaced with other identifiers that suit the individual child. More of Dr. Sahin’s work can be found here.
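The cue system described above is essentially a mapping from a detected expression to a visual indicator. Here is a minimal sketch of that idea in Python; this is purely illustrative and not Brain Power’s actual code, and the expression labels and function name are my own assumptions:

```python
# Hypothetical sketch (not Brain Power's actual implementation):
# map a detected facial expression to the color of the frame
# drawn around a person's face in the wearer's display.
POSITIVE = {"smiling", "excited", "happy"}
NEGATIVE = {"angry", "sad", "frustrated"}

def frame_color(expression: str) -> str:
    """Return a color cue for the wearer based on a detected expression."""
    if expression in POSITIVE:
        return "green"  # positive emotions -> green frame
    if expression in NEGATIVE:
        return "red"    # negative emotions -> red frame
    return "none"       # neutral or unrecognized: no frame

print(frame_color("smiling"))  # green
```

Because the mapping is just a lookup, the colors could be swapped for icons, sounds, or any other identifier that works better for a particular child, as Dr. Sahin noted.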
Media producer Han Vu and hardware hacker Zack Freedman showed us how you can look at a piece of art and see additional information (e.g., a description or artist bio) through Glass. This reminded me of Amazon’s X-Ray for movies. No more crowding in front of a painting to read its description or googling it on your phone. Han said applications like this could significantly enhance the experience for museum and art gallery patrons.
Artist and photographer Clayton Roulhac-Carr showed us photos and videos he created using Google Glass. Here is some of his work from Instagram.
A big thank you to GoogleGlass NYC and NYC Apps for hosting the Creative Experiences with Google Glass event. I can honestly say I walked out feeling inspired, and I look forward to seeing how Glass technology will impact our lives.