
Hello, I am Hee Seo [Hee-Suh] Chun, a communication and visual designer. I graduated from Carnegie Mellon University's School of Design undergraduate program in 2019. Most recently I crafted digital products at AKQA, and I am currently working at Huge Inc as a visual designer.

Projects

> GOOGLE (Huge)

> DELTA (AKQA)

> PROJECT MELO

> TYPEMOJI

> PAINT THE PAVEMENT

> A DREAM

> LENS

> GENEROUS FEEDBACK

> LUNAR GALA


Contact Me


Project Melo is a personalized voice assistant that learns about the user through active conversation. It evolves over time, catering to the user's personality, behavior, and taste.


Technique: User Experience, Research, Motion Graphics // Design Tools: Sketch, After Effects, Illustrator // Duration: 4 weeks // The Ask: Design an intervention that would enhance the user experience based on voice interaction // Collaborators: Bo Kim, Jeongmin Seo, Hae Wan Park




PROBLEM SPACE

Smart assistants, although technically usable, are not utilized to their full extent: users lack an emotional attachment to the assistant and the motivation to use it, and the assistant itself offers only limited personalization.

Many products are currently on the market, but they feel mechanical and unnatural, which discourages users from engaging with them.

Many companies have invested in creating a 'smart' assistant, but their products do not yet deliver satisfying experiences. No company has had great success in this market so far, which gives any one of them an opportunity to go a step further than its competitors.

Also, enhancing the user experience of a CUI (conversational user interface) and gathering extensive amounts of unique, personalized data about individual users can bridge the gap toward creating better AI, or toward a new platform and domain for interacting with a machine.


RESEARCH


01. DIARY STUDY
How do we use voice assistants?
What are we looking for? // In order to understand how current voice assistants are used, we individually used voice assistants (Google Assistant and Siri) for a week and logged our experience in a diary. We used this method because we were specifically interested in when we felt a need for, and found value in, using a voice assistant.

Moving forward // We discovered that our interactions with the voice assistant felt very mechanical and often just connected us to other apps. The assistant served as an alternative keyboard, not a conversation partner, and it lacked a sense of personality.
02. CONTEXTUAL INQUIRY
How do we start a conversation?
What are we looking for? // We conducted three rounds of conversations with strangers to investigate how people talk to each other when they meet for the first time. We held post-conversation interviews to learn when people felt comfortable or awkward, paying particular attention to the colloquial techniques we often overlook and that current voice assistants lack.

Moving forward // We discovered that a comfortable conversation with a new person requires a seamless transition when taking turns, a clear indicator that the conversation is ending, and the discovery of a common topic. Based on the interviews, we drafted a rough script for the initialization process of a more humane, conversational assistant and tested the script with six people.
03. ROLE PLAY
How should we converse with an assistant?
What are we looking for? // Using our drafted script, we conducted a role play over the phone, with one person acting as a smart assistant and the other as a user trying the smart assistant for the first time. We wanted to learn whether our initial scenario effectively conveyed the personalization process and to analyze user responses to the proposed assistant experience.

Moving forward // We learned the importance of setting expectations for the user before the onboarding process. We also discovered some drawbacks to linking the user's music taste with the assistant's voice and decided to add a visual character instead for the personalization process.



PROJECT MELO



Building a Positive First Impression


First impressions matter. We think being introduced to a personal assistant should feel as organic as getting to know a new person.

Part of our onboarding process includes a short introduction about the assistant. Then, the assistant will ask for your name just like people do when they meet someone for the first time.

The assistant's name and its look and feel are initialized through your own words. This step of personalization increases the user's attachment to the assistant.
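For readers who think in code, here is a minimal sketch of that onboarding flow: a short introduction, exchanging names, and seeding the look and feel from the user's own words. Every name in it (Assistant, onboard, style_words) is a hypothetical illustration, not the actual Melo prototype.

# A minimal sketch of the onboarding flow described above.
# All names are hypothetical, not taken from the Melo prototype.

from dataclasses import dataclass, field

@dataclass
class Assistant:
    name: str = ""                      # chosen by the user, not preset
    look_and_feel: dict = field(default_factory=dict)

def onboard(user_name: str, assistant_name: str, style_words: list[str]) -> Assistant:
    """Run the three onboarding steps: introduce, exchange names, personalize."""
    print("Hi! I'm your new assistant. What's your name?")
    print(f"Nice to meet you, {user_name}. What would you like to call me?")
    assistant = Assistant(name=assistant_name)
    # The look and feel is initialized through the user's words.
    assistant.look_and_feel["style_keywords"] = style_words
    print(f"From now on, I'm {assistant.name}!")
    return assistant

melo = onboard("Hee Seo", "Melo", ["warm", "playful"])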


Developing Your Assistant


Your assistant exists not only vocally but also visually. People use many nonverbal cues, such as facial expressions and gestures, to get their point across during conversations. Our visual character captures those conversational factors, making the interaction feel more natural and personal.

Over time, the assistant learns about you from how you talk in conversation. Your assistant adapts its look and manner, reflecting what it understands about you.
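As a loose illustration of this adaptation, the assistant could keep a profile that accumulates traits from each conversation and surface the dominant one in its look and manner. The trait names and the update rule below are assumptions for the sketch, not the project's actual model.

# Sketch: a profile the assistant updates after each conversation.
# Trait names and the update rule are illustrative assumptions.

from collections import Counter

class UserProfile:
    def __init__(self):
        self.trait_counts = Counter()   # e.g. "casual", "formal", "playful"

    def observe(self, conversation_traits: list[str]) -> None:
        """Record traits inferred from one conversation."""
        self.trait_counts.update(conversation_traits)

    def dominant_trait(self) -> str:
        """The trait the assistant's look and manner should reflect."""
        if not self.trait_counts:
            return "neutral"
        return self.trait_counts.most_common(1)[0][0]

profile = UserProfile()
profile.observe(["casual", "playful"])
profile.observe(["casual"])
print(profile.dominant_trait())  # -> "casual"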

Providing Personalized Suggestions


The more you share, the more your assistant can do for you. Just as people who know each other well can give better suggestions, the assistant uses its knowledge about you to give contextualized recommendations specifically catered to your situation.

For instance, your personal assistant can analyze your photo album or social media and suggest what to wear on a rainy day, instead of just giving you plain digits and droplet icons.
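A hypothetical sketch of that kind of contextual suggestion, assuming the assistant already holds a weather signal and a tagged wardrobe learned from your photos. The names and data are placeholders, not the project's implementation.

# Sketch: combining a weather signal with what the assistant has
# learned about the user's wardrobe. All data and names are
# illustrative assumptions.

def suggest_outfit(weather: str, wardrobe: list[dict]) -> str:
    """Return a contextual suggestion instead of raw weather digits."""
    if weather == "rainy":
        candidates = [item for item in wardrobe if item.get("waterproof")]
        if candidates:
            return f"It's rainy today. How about your {candidates[0]['name']}?"
        return "It's rainy today. You might want something waterproof."
    return "Looks clear today. Wear whatever you like!"

wardrobe = [
    {"name": "yellow raincoat", "waterproof": True},
    {"name": "linen shirt", "waterproof": False},
]
print(suggest_outfit("rainy", wardrobe))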

 

Empathetic and Smooth Conversations


Your personal assistant can catch cues about your condition and emotional state through the way you talk: nuance, tone, connotation. It estimates your current state by analyzing information collected through past conversations together with your voice in the moment.
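Conceptually, this can be pictured as blending two signals: a baseline built from past conversations and cues from the current utterance. The sketch below uses assumed names and an assumed weighting; it is an illustration, not the actual system.

# Sketch: estimating the user's current emotional state by blending
# a historical baseline with cues from the present utterance.
# The weighting and score scale are illustrative assumptions.

def estimate_mood(history_scores: list[float], current_voice_score: float,
                  recency_weight: float = 0.7) -> float:
    """Blend a past-conversation baseline (0 = low, 1 = upbeat) with
    the mood inferred from the current voice sample."""
    baseline = sum(history_scores) / len(history_scores) if history_scores else 0.5
    return recency_weight * current_voice_score + (1 - recency_weight) * baseline

# Past conversations suggest a fairly upbeat user; today's voice sounds flat.
print(estimate_mood([0.8, 0.7, 0.9], current_voice_score=0.2))  # ~0.38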

To compensate for the potential technical limitations of voice assistants, we added details that help the user have a comfortable conversation. You can manually edit the assistant-generated text if needed, and when you feel uncomfortable talking about certain issues, you can use the button in the bottom-right corner of the screen that indicates “I don't want to talk about it.”


USER TESTING


TESTING THE HIGH-FIDELITY PROTOTYPE

Using the Wizard of Oz technique with our high-fidelity prototype, we conducted three rounds of testing to evaluate the success of our design.

The goal of user testing the initialization process was to verify whether the solution increases interest and motivation for first-time users. To measure the outcome, we asked each participant to rate features from 1 to 5 after going through the entire initialization process using the think-aloud method.
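To make that measurement concrete, here is a small sketch of how such ratings could be aggregated per feature; the feature names and scores are placeholders, not the study's actual data.

# Sketch: averaging per-feature ratings (1-5) across participants.
# Feature names and scores are placeholders, not the study's data.

from statistics import mean

ratings = {
    "naming the assistant": [5, 4, 5],
    "visual character":     [4, 5, 4],
    "voice interaction":    [3, 4, 3],
}

for feature, scores in ratings.items():
    print(f"{feature}: {mean(scores):.1f} / 5")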

WHAT WORKED
“I loved how it has a face. There is something I can talk to now”

“Being able to name the assistant is definitely a personal touch”

“I started to get into the conversation because there was enough hint to understand that it was responsive”

“Calling the name several times… felt like bringing this thing into existence”

The personalization process effectively triggered users' attachment to the assistant. Interacting vocally and visually felt more conversational, and users began to engage more actively. Beyond their post-testing responses, this change in attitude was also evident in the participants' behavior during testing: they inserted conversation fillers and reactions such as “Yeah” or “Cool,” which are natural and common in everyday conversation but often absent from interactions with existing voice assistants.


WHAT COULD IMPROVE
“By the end I kind of forgot what the point of this process was. It was clear that I could personalize it but I wasn’t sure what it can do”

“At the end of the conversation I would start interacting more to explore the options it suggested”

What the assistant can do didn't come across as clearly. If we had another round, we would add more steps for the user to try out the assistant's different features after the personalization phase.

Next

© heeseochun 2022