Google Look:
A voice assistant for searching wisely

Google Daydream Sponsored Class

Class:
Jenny Rodenhouse
& Leonard Wozniak

Duration:
14 Weeks

Role:
VUI & AR designer

Tools:
Unity, Sketch & Principle, After Effects,
Solidworks & Keyshot

How can Look AI personalize your search?

Creating seamless transitions between the real and digital worlds with emerging technology

The Google Daydream team was interested in future seamless transitions between the real and digital worlds, uncovering new hybrid tools for creators in 2023. I decided to dive deeper into emerging technology, exploring and recording its potential beyond current use cases.

Immersive
Discovery

Emerging AR/VR technology is missing natural sensory interactions with the environment

I played with and prototyped tools in multiple forms on the market, such as Oculus games, Unity VR, and AR games. I discovered that:
1. They focus on immersive digital environments and visual design, mostly with entertainment goals.
2. AR is not widely used, because of having to hold a phone, the high cost of devices, and technological limitations.
3. Searching for and collecting data from the real world is hard, especially in the public sphere.

Market
Research

Personalization and real-time results are trending needs, but are not seen in search experiences

1. Mobile devices allow users to search and learn anywhere, anytime.
2. Phone cameras enable users to search based on what they see and to track and look up objects around them, but this is not widely used yet.
3. Voice speakers are limited by the technology's ability to understand and maintain context, so you cannot have dynamic exchanges that start, stop, and pick up again later.

User
Interviews

Searching 2D images is not enough; seeing, feeling, and understanding the real context is essential for learning new things.

I conducted 3-4 expert interviews with researchers and designers. I learned that searching is not just about reading data and numbers, but also about extracting information from real-world context. Online users today have to spend time filtering out overloaded data, so what would be a good way to search for relevant results in real time?


"Researcher is like a detective, we search from details of your your everyday life. We want to meet the user to get guided through in physical world instead of reading papers."


"As a product designer, I wish I can search 3D objects without just going online. I want to see it and feel it before designing. "

Users struggle to get quick and meaningful responses

Continuing with 5 more interviews with users who rely on phone searches to learn, I identified shopping, discovering, and sharing through search as the most valuable use cases and user needs:

1. I want to search and compare offline and online shopping quickly.
2. I want to discover how to make new things by searching online.
3. I want to collect and share my search results more efficiently.

Identifying primary and secondary users

After the interviews, I decided to break it down into 3 different user scenarios and use cases that contextualize the searching to buy, to make, and to share.

Ideations

What if online search could be as natural as asking a close friend by your side?

Intuitive search,
via multi-modal interaction
and object recognition

Personalized inspiration,
based on environment recognition and user data.

Knowledge sharing, with multi-user detection and object + object recognition.

Iteration 1

Conversational Design of responses

Building on the interview insights, I designed the assistant's conversational responses around the three scenarios: searching to buy, to make, and to share.

Iteration 2

AR & Voice on Adobe XD Storyboard Testing

Feedback:
1. I only want its general, context-rich information before going into details.
2. The AI graph gives accurate knowledge; I want to ask for more by saying yes or nodding my head.

Feedback:
1. I need to know which one I am pointing at.
2. How do I know which one my AI recommended? Do I need visual feedback?

Feedback:
1. Love the step-by-step voice instructions for making food.
2. Is it recognized only by the earphone camera?

Iteration 3

I prototyped object recognition with Vuforia and tested it with friends in real contexts against different backgrounds; a minimal recognition-trigger sketch follows the prototype captions below.

Object Recognition

Object + Environment

Object + Object Comparison
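To make the recognition trigger concrete, here is a minimal Unity sketch assuming the Vuforia Engine Unity API (ObserverBehaviour and its OnTargetStatusChanged event, Vuforia 10+); the Speak helper is a hypothetical stand-in for the assistant's voice output, not part of Vuforia.

using UnityEngine;
using Vuforia;

// Attach to a Vuforia target. When the physical object is recognized
// (TRACKED), Look offers a spoken response about it.
public class LookObjectTrigger : MonoBehaviour
{
    ObserverBehaviour observer;

    void Awake()
    {
        observer = GetComponent<ObserverBehaviour>();
        observer.OnTargetStatusChanged += OnStatusChanged;
    }

    void OnDestroy()
    {
        if (observer != null)
            observer.OnTargetStatusChanged -= OnStatusChanged;
    }

    void OnStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        if (status.Status == Status.TRACKED)
            Speak("This looks like " + behaviour.TargetName + ". Want to know more?");
    }

    // Hypothetical stand-in for the assistant's text-to-speech output.
    void Speak(string line)
    {
        Debug.Log("Look says: " + line);
    }
}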

Service
Diagram

How does it work?

VUI design

An assistant identity built on trust and a friend's personality.

Considerate
“I know what you want.”

Relatable
“Hmm, I can help you and be your companion.”

Humorous
“Okay! I got your back, buddy!”

What if it were an NPC?
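As a purely illustrative sketch of how these three tones could shape Look's wording, the snippet below keeps tone-tagged response templates; the Tone enum, template strings, and Respond helper are assumptions, not the shipped design.

using System.Collections.Generic;

// Tone-tagged response templates keep Look's personality consistent
// across features; the caller picks a tone per situation.
enum Tone { Considerate, Relatable, Humorous }

static class LookPersona
{
    static readonly Dictionary<Tone, string> Templates = new Dictionary<Tone, string>
    {
        { Tone.Considerate, "I know what you want. Here is {0}." },
        { Tone.Relatable,   "Hmm, I can help you with {0}." },
        { Tone.Humorous,    "Okay! I got your back, buddy. {0} coming right up!" }
    };

    public static string Respond(Tone tone, string topic)
    {
        return string.Format(Templates[tone], topic);
    }
}

// Example: LookPersona.Respond(Tone.Humorous, "a recipe")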

App
design

The app tracks data from your searches

1. Smart grouping of bookmarks

Location: Home
Needs: “I want to search what I can make.”

Solution: Google Look collects your personal data to help you make choices smartly, curating results and acting as a guidebook (a minimal grouping sketch follows this list).

2. Notifications for comparing visually

Location: Store
Needs: “I want to compare products in order to buy quickly.”

Solution: Google Look enables users to compare products and make decisions quickly in the store, bridging the online and offline shopping experiences.
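As a sketch of what the smart grouping in use case 1 could mean in code, the snippet below groups saved bookmarks by the context they were captured in (location plus intent); the Bookmark shape and the grouping key are illustrative assumptions, not the real schema.

using System;
using System.Collections.Generic;
using System.Linq;

// A saved search result with the context it was captured in.
class Bookmark
{
    public string Title;
    public string Location;   // e.g. "Home", "Store"
    public string Intent;     // e.g. "make", "buy", "share"
    public DateTime SavedAt;
}

static class BookmarkGroups
{
    // Group bookmarks by capture context so the app can surface
    // e.g. "Home / make" as one curated folder, newest first.
    public static Dictionary<string, List<Bookmark>> Group(IEnumerable<Bookmark> items)
    {
        return items
            .GroupBy(b => b.Location + " / " + b.Intent)
            .ToDictionary(
                g => g.Key,
                g => g.OrderByDescending(b => b.SavedAt).ToList());
    }
}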

Reflection

ROI (net benefit vs. cost) of product development

Bounce rate:
The time it takes users to find the correct answer by talking to Look and switching to another object

Low bounce rate and high customer task achievement = success


Task types:
1. Shopping in a market with the Look app prototypes
2. Counting the successful shopping lists created within a set time
3. Voice-over tone testing by recording different scenes
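To show how the study's two signals could be scored per session, here is a minimal sketch; the Session fields and both thresholds are illustrative assumptions, not the study's actual parameters.

// One test session from the shopping task.
class Session
{
    public double SecondsToAnswer;  // "bounce rate": time to a correct answer
    public int ListsCompleted;      // successful shopping lists in the time box
    public int ListsAttempted;
}

static class StudyMetrics
{
    public static double TaskAchievement(Session s)
    {
        return s.ListsAttempted == 0
            ? 0.0
            : (double)s.ListsCompleted / s.ListsAttempted;
    }

    // "Low bounce rate and high customer task achievement = success".
    // Thresholds (30 s, 80%) are illustrative assumptions.
    public static bool IsSuccess(Session s)
    {
        return s.SecondsToAnswer <= 30.0 && TaskAchievement(s) >= 0.8;
    }
}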

