Overview: EZ Chef is an Amazon Alexa voice skill that provides users with quick, easy recipe options and offers handy features to help in the kitchen. Core functions include searching and selecting recipes, saving and adjusting recipes, and instructing and guiding users through recipes.
Roles: Conversation (VUI/UX) Designer, UX Researcher
Company & Project: CareerFoundry UX Design Program: Voice Specialization Course
Timeline: March 2021 - May 2021
The aim of the project was to design a skill for Alexa devices that allows users to navigate and interact with a recipe primarily using voice commands, essentially designing a voice user interface (VUI). The main focus was to allow users to select easy-to-make dishes from several options and to follow step-by-step directions to prepare the meal.
I created two personas: a proto-persona for users and a system persona for the EZ Chef Alexa skill.
My proto-persona, Justin, represents a younger audience of working professionals who have little to no cooking experience but are in a position to start learning (for example, living by themselves for the first time).
Based on informal user interviews I conducted, I felt this was the right audience to focus on for my skill: people similar to Justin are most likely to already have an Alexa device in their home and to be more willing to try Alexa skills.
When creating my system persona, Dana, I wanted to make sure the person would be friendly, polite, excited, and helpful. I felt this would appeal to most, if not all, users and make them feel welcomed and encouraged to cook with my EZ Chef skill.
Following these preliminary persona definitions, I developed a collection of user stories to sketch out situations in which users would want to interact with the skill. These stories helped keep my design process empathetic and cognizant of our users' needs.
As a busy professional, I want to whip up something quick and easy before and after work.
As a new cook, I want help finding and choosing meal options, as I am not sure where to start or what to make.
As cooking requires working with my hands, I want to use my voice for assistance when I’m busy cutting or need to keep an eye on the food.
To further empathize with users, I created additional sample dialogues to imagine possible conversations between users and the system. These conversations elaborated on the situations presented in my user stories. While it may seem a bit funny to write both sides of a conversation by myself, sample dialogues are an important step in the design process: they start bringing the design vision to life by fleshing out the personality, tone, and manner of the interactions.
With these sample dialogues guiding the next step of my process, I created the actual interaction model. Drawing on the possible situations and conversations, I could better conceptualize and visualize what my VUI would be like, and then build and organize its logic and flows. Below is an overview of the intents (essentially the functions of an Alexa skill) in my EZ Chef voice skill.
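To make the intent structure a bit more concrete, here is a minimal sketch of how a few EZ Chef intents and their sample utterances might be declared in an Alexa interaction model, written as a Python dictionary that mirrors the JSON uploaded to the Alexa developer console. The intent names, utterances, and slot shown here are illustrative assumptions for this case study, not the exact set from my project.

```python
import json

# Sketch of an Alexa interaction model for EZ Chef, mirroring the JSON schema
# used in the Alexa developer console. Intent names and sample utterances are
# illustrative, not the final set from the project.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "ez chef",
            "intents": [
                {
                    # Custom intent: search for easy recipes by dish or ingredient
                    "name": "FindRecipeIntent",
                    "slots": [{"name": "dish", "type": "AMAZON.Food"}],
                    "samples": [
                        "find me an easy {dish} recipe",
                        "what can I make with {dish}",
                        "give me something quick for dinner",
                    ],
                },
                {
                    # Custom intent: move to the next step of the selected recipe
                    "name": "NextStepIntent",
                    "slots": [],
                    "samples": ["next step", "what do I do next", "I'm done with that"],
                },
                {
                    # Custom intent: save the current recipe for later
                    "name": "SaveRecipeIntent",
                    "slots": [],
                    "samples": ["save this recipe", "add this to my recipes"],
                },
                # Amazon's built-in intents cover common requests such as
                # asking for help, repeating a prompt, and stopping the skill.
                {"name": "AMAZON.HelpIntent", "samples": []},
                {"name": "AMAZON.RepeatIntent", "samples": []},
                {"name": "AMAZON.StopIntent", "samples": []},
            ],
        }
    }
}

# The model would be serialized to JSON before being added to the skill.
print(json.dumps(interaction_model, indent=2))
```

Custom intents like the recipe search and step-by-step guidance sit alongside Amazon's built-in intents, which handle the housekeeping requests (help, repeat, stop) that every skill needs.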
Before I could test out my design, I had to create the script. In the context of VUIs, the script is an organized database of system prompts and responses, along with sample user utterances, broken down into intents, or states, each of which is represented as a separate tab in a spreadsheet.
I had to develop the voice script prior to testing because it would serve as the prototype for my usability tests. See the screenshots below for some sections of my script.
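To illustrate the kind of structure each tab captures, here is a rough sketch of one hypothetical tab of the script as a small data record: the system prompt, reprompt, error response, and expected user utterances for a single intent. All of the wording below is invented for illustration and is not copied from my actual script.

```python
# Rough sketch of one "tab" of the voice script for a single intent.
# Prompts, reprompts, and error handling live alongside the utterances
# that should trigger the intent. Wording is invented for illustration.
find_recipe_script = {
    "intent": "FindRecipeIntent",
    "user_utterances": [
        "find me something easy to cook",
        "what can I make for dinner",
        "I want a quick pasta recipe",
    ],
    "system_prompt": (
        "Sure! I found three easy recipes: garlic butter pasta, "
        "sheet pan chicken, and veggie stir fry. Which one sounds good?"
    ),
    "reprompt": (
        "You can pick garlic butter pasta, sheet pan chicken, "
        "or veggie stir fry."
    ),
    "error_response": (
        "Sorry, I didn't catch that. You can say the name of a recipe, "
        "or ask me for different options."
    ),
}
```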
Test Details:
With the script ready, it was time to test my skill. Since I did not have a developer working with me on this project, the best testing method was a Wizard of Oz test: using the script, I acted as the system (the EZ Chef skill) while participants interacted with the skill through me.
I conducted five remote-moderated usability tests, acting as both the moderator and the wizard.
This allowed me to test my voice skill as a low-fidelity prototype while also giving me the freedom, as the system, to interpret possible intents that were not included in the initial version of my voice script but would be part of the end product.
Test Objectives:
This was my first time conducting a Wizard of Oz test, as well as my first usability test for a VUI. I found it more difficult than previous usability tests I've conducted for graphical user interfaces (GUIs), as I had to balance withholding key words that would prime my participants against giving them enough context so they weren't confused. However, it was definitely an enjoyable and enlightening experience! Some of my key findings are summarized below.
I also broke down, organized, and analyzed the data from my usability tests into a rainbow spreadsheet.
View the rainbow spreadsheet here for in-depth results and analysis.
The next step would be to implement changes based on this initial round of user research.
Then, I'd want to refine the interaction model through additional, repeated testing.
Afterwards, once users can successfully and joyfully use EZ Chef, the skill can be certified and published for Alexa devices. Of course, the design process is never ending, so after releasing the skill to the public, I would gather real usage data for insights and further improvements.
From this project, I learned about the similarities and differences between designing for voice and designing for screens. I particularly enjoyed how accessing features can be simpler with a VUI, since menu options are all available at the same level rather than buried in a hierarchy of information that users must locate and navigate visually. On the flip side, users were more likely to give up when they were unsure how to accomplish a task; with a GUI, users can keep tapping buttons until they figure out how to do what they want.
As the design world continues to grow and technology continues to change, I am excited to keep exploring voice user interface design, developing my skills as a UX designer, and creating innovative technologies and solutions.