Spark AR Scenes

-> Product Design Intern (Summer 2021)

-> Meta Reality Labs (World AR Experiences Team)


Spark AR Scenes is a back-camera-only standalone app that gives people the power to create and share World-facing AR videos and photos using 3D objects, AR effects, and 3D text.

Our app allows people with no experience making 3D effects to become amazing AR storytellers, supported by an ecosystem of Spark creators dedicated to making their content great.


3-month summer intern project

What is World AR?

World AR effects are back-camera-only. They are a key part of Meta Reality Labs' strategy, where our goal is to fill the world with unique and compelling world-anchored content.


We are seeing emerging World AR producers on Instagram. Every day, 500k people open a World AR effect in their back camera, and every month, 15M people open a World AR effect at least once.

People problem

While ~81% of people are somewhat familiar with AR, they have a hazy understanding of AR as a concept and lack interest in using World AR.

Discover & Define

As a first step, I conducted a competitive analysis of 5 apps (TikTok, Snapchat, Ikea Place, LeoAR, and ARVid) with another UX Research intern. I noticed that there was no consistent pattern for guiding users through their first World AR creation.

Therefore, the challenge was to establish an overarching framework for how we can approach the NUX in Scenes and flesh out principles to guide the design process.


  • There's no consistent onboarding pattern across competitors
    • TikTok: Post-action label, no upfront guide
    • Ikea Place: Discrete walkthrough experience for first-time users
    • ARVid: Animated guide when interacting with a World AR object
  • Onboarding recommendations
    • Require only mandatory steps; allow people to skip optional tasks
    • Use push notifications for re-engagement
-> View competitor audit deck

Project Goal

Define the Scenes onboarding experience to help users learn the concept of World AR and how to interact with World AR content.

NUX Framework

Informed by past research and audits of FoA and external competitors, I led a brainstorming session on FigJam with the team and XFN partners (PM, UXR). Together we mapped out 7 main buckets to focus on for the onboarding experience and placed them at different points along the first-time user journey.

For the MVP, we prioritized 3 focus areas: instruction, permission priming, and feature promotion.


  • Instruction
    • Prioritize how to interact with World AR effects
    • Provide timely feedback for users (SLAM)
  • Permission priming
    • Ask for critical permissions (camera, camera roll) upfront
    • Tie optional permissions with user action (landmark effect -> location permission)
  • Feature Promotion
    • Emphasize fun & unique experience that Scenes will offer upfront
-> View NUX framework deck

Main Project

In the current experience, people are brought to a blank camera screen after learning about our value propositions, accepting mandatory permissions, and completing SLAM (Simultaneous Localization and Mapping). People mentioned feeling somewhat lost and confused, so what is the real frustration behind that, and how can we help them with their first creation experience?

After spending a few weeks exploring those main buckets and defining the project, I decided to focus on Instruction as my core intern project. I wanted to help people with their first World AR creation in Scenes. Here's the part of the onboarding journey that I worked on: after the orientation screens and accepting mandatory permissions, users are brought to our home screen. What will their first reaction be? What information do they need to proceed?

We distilled this down to three main pieces of information we want people to take away from their first experience with Scenes: understand the Scenes UI, be able to interact with World AR content, and be aware of what's creatively possible. After multiple iterations and critique sessions with the team, I narrowed down to two main hypotheses that we later brought to UXR interview sessions.

(The main project is password gated; please reach out for the full deck 🥰)