Sep-Dec 2018 (2020 Version)
Script Writer, 3D Architect
I delivered a user scenario video and a 3D earphone mockup to Google Daydream product designers. We checked in weekly to ideate on future emerging-technology possibilities. After the project, I built a team, working closely with a script writer, a product designer, and a 3D modeler to push the user story to a higher level. Process Deck
Creating seamless transitions between the real and digital worlds with emerging technology
The Google Daydream team was interested in creating seamless transitions between the real and digital worlds by uncovering new hybrid tools for creators in 2023. I decided to dive deeper into emerging technology, searching for and recording potential beyond current use cases.
Curious about your surroundings? Now you can talk to a smart assistant!
An AI earphone that tracks surrounding objects.
Look assists you as you explore the environment: it searches and answers your questions in real time.
Users can simply ask our friend "Look" for personalized opinions based on their preferences and behaviors. They no longer need to sort through complex data from long lists of search results.
By talking with Look, users can create their own search results based on what they see and hold. For example: I want to know what recipes I can make with the ingredients I have right now.
Personalization and real-time response are trending needs, but they are missing from today's search experiences
1. Mobile devices allow users to search and learn anywhere, anytime.
2. Phone cameras enable users to search based on what they see; users can track surrounding objects and look them up on the phone, but this is not widely used yet.
3. Voice speakers are limited by the technology's ability to understand and maintain context, so you cannot have dynamic exchanges that start, stop, and pick up again later.
Searching 2D images is not enough; you want to see, feel, and touch things in reality before making a decision.
"A researcher is like a detective: we search through the details of your everyday life. We want to meet users and be guided through the physical world instead of reading papers."
"As a product designer, I wish I could search 3D objects without just going online. I want to see and feel an object before designing."
Users struggle to get quick and meaningful responses
1. I want to search and compare offline and online shopping quickly.
2. I want to discover how to make new things by searching online.
3. I want to collect and share my search results more efficiently.
What if online search can be as natural as asking a close friend by your side?
1. I shop freely with LOOK navigating.
2. LOOK compares the two BBQ sauces in my hand.
3. LOOK recommends the left one by pointing at the bottle.
1. I want to make dinner by asking LOOK.
2. LOOK sorts out recipes based on my daily preferences.
3. I decide to cook scrambled sausages tonight, and LOOK guides me step by step.
1. I like my friend's clothes and wonder if she can share a link with me.
2. She allows LOOK to share her clothes with me.
3. I receive the information and save it to my phone.
Drawing on the interview insights, I decided to break the experience down into 3 user scenarios and use cases that contextualize searching: to buy, to make, and to share.
“Hey Look, which one should I buy?”
“Hey Look, I want to know what I can make from what I have.”
“Hey Look, I want to share my own stuff with others seamlessly.”
By simply scanning the objects in front of them, users can quickly grab online links based on what they see. They can save these links and re-access them later.
The Look app assists your search journey seamlessly, without making you pause to look things up online. Look curates your search results.
Look helps you compare your search results by asking the AI assistant in your Look earphone.
AI understands your questions and responds with sound and visual effects
Object Recognition + Voice Command
Object + Environment
Product form factors
Voice user interfaces allow users to interact flexibly with a system through voice commands and multi-modal gestures, not limited to buttons.
But they are still limited by the technology and by privacy issues around where information can be accessed.
It is like an RPG game in which your choices lead the story to its ending.