Experiential Design / Task 3

10/6/2025 - 6/7/2025 (Week 8 - Week 11)
Wee Jun Jie / 0375271
Experiential Design / Bachelor of Design (Hons) in Creative Media
Task 3: Project MVP Prototype


All class notes have been stored here: My Notes

 INSTRUCTIONS 

<iframe src="https://drive.google.com/file/d/146wFmRPQznny8T_HCXWHRGU81WBFRA8B/preview" width="640" height="480" allow="autoplay"></iframe>



 Week 8 - Task 3 Overview 
For Task 3, I am working on the MVP prototype of my AR learning app using Unity + Vuforia. The focus was on interactive alphabet recognition, where each image target triggers a 3D model, sound, and a small quiz. The app is designed to be friendly for toddlers aged 1–3 years old.

Features Implemented
✦ .  ⁺   . ✦ .  ⁺   . ✦ .  ⁺   . ✦ .  ⁺   . ✦ .  ⁺   . ✦ .   ⁺   . ✦ .   ⁺   . ✦ .   ⁺   . ✦ .   ⁺   . ✦ .  ⁺  . ✦

🟢 AR Scene (Marker-based with Image Target)
  • Recognizes letters A–E using Vuforia Image Targets
  • Each letter displays 3 different 3D models
  • Users can tap a swap button to cycle between models
  • Each model has an associated voice sound (e.g. apple, ant, ambulance)
  • Sound plays only after the image target is detected (a small sketch follows this list)
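
A minimal sketch of that detection guard, assuming the Image Target's DefaultObserverEventHandler events are wired to these methods in the Inspector; the field and method names here are placeholders, not my exact script:

```csharp
using UnityEngine;

// Sketch: OnLetterFound/OnLetterLost are wired in the Inspector to the
// Image Target's DefaultObserverEventHandler events, and PlayLetterSound
// is wired to the sound button's OnClick.
public class LetterSoundTrigger : MonoBehaviour
{
    [SerializeField] private AudioSource letterVoice; // e.g. the "apple" clip
    private bool targetVisible;

    public void OnLetterFound() => targetVisible = true;

    public void OnLetterLost()
    {
        targetVisible = false;
        letterVoice.Stop();   // cut audio when tracking is lost
    }

    public void PlayLetterSound()
    {
        if (targetVisible)    // only play after the image is detected
            letterVoice.Play();
    }
}
```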


🟢 Model Interaction - Simple touch-based interaction that is natural and intuitive for young children (a rough gesture sketch follows the list).

  1. Drag to move
  2. Swipe left/right to rotate
  3. Pinch to scale
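
A rough sketch of the rotate and pinch gestures, assuming Unity's legacy Input touch API on a script attached to the active model; drag-to-move follows the same delta idea:

```csharp
using UnityEngine;

// Gesture sketch, assuming the legacy Input touch API and that this
// script sits on the currently active 3D model.
public class ModelGestures : MonoBehaviour
{
    [SerializeField] private float rotateSpeed = 0.2f;
    [SerializeField] private float scaleSpeed = 0.001f;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Moved)
                // swipe left/right rotates the model around its Y axis
                transform.Rotate(0f, -t.deltaPosition.x * rotateSpeed, 0f);
        }
        else if (Input.touchCount == 2)
        {
            // pinch: compare finger distance this frame vs last frame
            Touch a = Input.GetTouch(0);
            Touch b = Input.GetTouch(1);
            float prev = Vector2.Distance(a.position - a.deltaPosition,
                                          b.position - b.deltaPosition);
            float curr = Vector2.Distance(a.position, b.position);
            transform.localScale += Vector3.one * (curr - prev) * scaleSpeed;
        }
    }
}
```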

🟢 Loading Scene

  • Added animated loading screen using Unity animation
  • Smooth transition from loading to AR scene using SceneManager.LoadScene() 
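
The transition itself is simple; a minimal sketch, assuming a placeholder scene name "ARScene":

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: show the loading animation for a moment, then load the AR
// scene. "ARScene" is a placeholder for the real scene name.
public class LoadingScreen : MonoBehaviour
{
    [SerializeField] private float minimumShowTime = 2f;

    IEnumerator Start()
    {
        yield return new WaitForSeconds(minimumShowTime);
        SceneManager.LoadScene("ARScene");
    }
}
```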


🟢 Quiz Scene (Game Scene)

  • Built a multiple-choice quiz with toddler-friendly UI
  • Model sprite (2D) acts as sound trigger — child taps to hear the question
  • Quiz presents 4 word choices per question
  • Feedback text: "Bingo!" or "Oops!" depending on correctness
  • If wrong, feedback appears briefly, then resets for retry
  • If correct, feedback appears and proceeds to next question
  • Final “Yeah! You’re done!” message appears when complete

🟢 Quiz Manager Logic
  • Supports loading questions from a QuestionData list (a simplified sketch follows this list)
  • Dynamically updates answer labels per question
  • Resets button states and text feedback each round
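
A simplified sketch of that logic, with hypothetical field names; my actual GameManager also handles sounds and timing:

```csharp
using TMPro;
using UnityEngine;
using UnityEngine.UI;

// Sketch of the quiz data and flow; names are assumptions.
[System.Serializable]
public class QuestionData
{
    public AudioClip questionVoice;          // tapped sprite plays this
    public string[] answers = new string[4];
    public int correctIndex;
}

public class QuizManager : MonoBehaviour
{
    [SerializeField] private QuestionData[] questions;
    [SerializeField] private Button[] answerButtons;   // 4 buttons
    [SerializeField] private TMP_Text feedbackText;
    private int current;

    void Start() => ShowQuestion();

    void ShowQuestion()
    {
        feedbackText.text = "";
        for (int i = 0; i < answerButtons.Length; i++)
        {
            answerButtons[i].interactable = true;      // reset button state
            answerButtons[i].GetComponentInChildren<TMP_Text>().text =
                questions[current].answers[i];         // update labels
        }
    }

    public void OnAnswerSelected(int index)
    {
        if (index == questions[current].correctIndex)
        {
            feedbackText.text = "Bingo!";
            current++;                                 // the real app waits
            if (current < questions.Length) ShowQuestion();  // briefly first
            else feedbackText.text = "Yeah! You're done!";
        }
        else
        {
            feedbackText.text = "Oops!";               // retry allowed
        }
    }
}
```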

🟢 Sound Integration

  • Each model plays a unique sound when tapped
  • Quiz uses voice-based question sounds
  • Correct/wrong answer sound effects added for feedback
  • BGM (background music) added and plays across all scenes
  1. Uses DontDestroyOnLoad to persist
  2. Public method for adjusting BGM volume or muting

🟢 Touch Feedback Improvement

  • Buttons bounce slightly on tap (a small bounce sketch follows the list)
  • Buttons turn green (correct) or red (wrong)
  • Feedback resets after 1 second on a wrong answer, allowing a retry
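
A minimal bounce sketch, assuming a coroutine on the tapped object; the TextBounce script listed under Tools works in this spirit:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: briefly scale the tapped object up, then settle it back.
public class TapBounce : MonoBehaviour
{
    public void Bounce() => StartCoroutine(BounceRoutine());

    IEnumerator BounceRoutine()
    {
        Vector3 original = transform.localScale;
        transform.localScale = original * 1.15f;   // pop up slightly
        yield return new WaitForSeconds(0.1f);
        transform.localScale = original;           // settle back
    }
}
```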


Challenges Found & Solutions


Fig 1.1 GLB model imported with no materials - JPEG #18June25

1. GLB model imported with no materials:

S:   When I imported GLB or FBX models, they appeared grey or white. I found that the texture maps (e.g., base color, normal, roughness) were included but not auto-assigned in Unity. I manually created a new material, mapped the textures (like base color to Albedo), and applied the material to the mesh in the Inspector.

Fig 1.2 Models not appearing when image scan - JPEG #20June25

2. Models not appearing on image scan
S:   Sometimes the model doesn't appear even after scanning the image; this often happens when testing on PC in the Unity Editor. To troubleshoot, I built the project to my phone and tested there instead.



Fig 1.3 Checking audio source play after trigger - JPEG #20June25

3. Audio played before model was detected.
S:   When testing, the sound button played even if the model was not visible. I fixed this by checking whether the currently tracked target or model was active before allowing sound playback; this prevents accidental taps (sketched below).
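
Sketched as a guard, with currentModel and audioSource standing in for the real references:

```csharp
using UnityEngine;

// Sketch of the guard; currentModel and audioSource are assumed
// references set by the AR tracking logic.
public class GuardedSoundButton : MonoBehaviour
{
    [SerializeField] private GameObject currentModel;
    [SerializeField] private AudioSource audioSource;

    public void OnSoundButtonPressed()
    {
        // only play while the tracked model is actually active
        if (currentModel != null && currentModel.activeInHierarchy)
            audioSource.Play();
    }
}
```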


Fig 1.4 Text prompt not showing - JPEG #22June25

4. "FinalText" text prompt not showing
S:   The message was correctly called in code but stayed invisible. I later found that the TextMeshPro object was unchecked in the Unity Hierarchy, so SetActive(true) in code had no effect. Once I left it checked and controlled only the .text value through script, it worked perfectly (see the sketch below).
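
The working pattern, sketched; finalText is an assumed Inspector reference:

```csharp
using TMPro;
using UnityEngine;

// Sketch of the fix: keep the TextMeshPro object enabled in the
// Hierarchy and toggle only its string, not the GameObject.
public class FinalTextController : MonoBehaviour
{
    [SerializeField] private TMP_Text finalText;

    public void ShowDone() => finalText.text = "Yeah! You're done!";
    public void Clear()    => finalText.text = "";   // hidden but still active
}
```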


Fig 1.5 Canvas button - JPEG #22June25

5. Tap-to-swap logic clashed with canvas buttons
S:  Originally, I tried to use screen taps to change models, but this conflicted with Unity’s canvas buttons for sound and quiz. I changed the approach to use dedicated UI buttons for model swap and sound, separating interaction layers and making the experience more accessible for toddlers.
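
A minimal sketch of the button-driven swap, assuming the three models per letter are assigned in the Inspector; names are placeholders:

```csharp
using UnityEngine;

// Sketch of cycling models from a dedicated UI button, so the swap
// never fights with other canvas buttons for raw screen taps.
public class ModelSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject[] models;   // 3 models per letter
    private int index;

    // wired to the swap button's OnClick
    public void SwapModel()
    {
        models[index].SetActive(false);
        index = (index + 1) % models.Length;        // wrap around
        models[index].SetActive(true);
    }
}
```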


Fig 1.6 Canvas button - JPEG #27June25

6. Feedback didn't reset on wrong answer
S:   When the user tapped a wrong answer, the button turned red and stayed that way permanently. I implemented a coroutine that waits 1 second, resets the feedback text and button color, and allows the user to try again rather than jumping straight to the next question.
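
The coroutine, sketched with assumed references:

```csharp
using System.Collections;
using TMPro;
using UnityEngine;
using UnityEngine.UI;

// Sketch of the retry flow: wait one second, then restore the button
// color and feedback text so the child can try again.
public class WrongAnswerReset : MonoBehaviour
{
    [SerializeField] private TMP_Text feedbackText;

    public void OnWrongAnswer(Button button)
    {
        button.GetComponent<Image>().color = Color.red;
        feedbackText.text = "Oops!";
        StartCoroutine(ResetAfterDelay(button));
    }

    IEnumerator ResetAfterDelay(Button button)
    {
        yield return new WaitForSeconds(1f);             // brief pause
        button.GetComponent<Image>().color = Color.white; // restore color
        feedbackText.text = "";                           // allow retry
    }
}
```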
 

Fig 1.7 Scripting Method - JPEG #27June25

7. Background music restarted every scene
S:   Each scene had its own Audio Source, which restarted the music. I created a BGMManager with DontDestroyOnLoad() so the music continues across scenes, and added a singleton check to prevent duplicates.
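
A sketch of that singleton pattern; the field names are assumptions:

```csharp
using UnityEngine;

// Sketch: one BGMManager survives scene loads; later duplicates
// destroy themselves so the music never restarts or doubles up.
public class BGMManager : MonoBehaviour
{
    public static BGMManager Instance { get; private set; }
    [SerializeField] private AudioSource bgm;

    void Awake()
    {
        if (Instance != null)            // a copy already persisted
        {
            Destroy(gameObject);         // prevent duplicate music
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject);   // persist across scenes
    }

    public void SetVolume(float volume) => bgm.volume = volume;
    public void Mute(bool muted)        => bgm.mute = muted;
}
```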


Fig 1.8 Apply model mesh collider - JPEG #27June25

8. Model colliders interfered with touches
S:   Some models had mesh colliders that didn't align well with the visible object. I either replaced the mesh collider with a simple box collider or manually added one sized to wrap the visible area. This improved tap detection.


Tools and Assets Used
  • Unity (2021+)
  • Vuforia Engine
  • GLB/FBX 3D models (created in Blender)
  • TextMeshPro
  • Custom scripts: GameManager, ModelSwitcher, BGMManager, TextBounce and others...
  • Self-recorded audio clips


Progress Presentation




 REFLECTION 
Experience
Developing this MVP was a hands-on and iterative journey. Initially, I underestimated how much detail would be required to make a child-friendly AR app work smoothly. Building each feature, from AR detection to model interaction, sound playback, and quiz logic, involved trial and error. I spent a significant amount of time fixing small but impactful issues, such as how a button behaves after a wrong answer or why a model doesn't load properly. Despite the challenges, I gradually built a working system that felt cohesive and playful. The learning curve for Unity and Vuforia was steep at first, especially dealing with component references, material assignments, and cross-scene persistence, but working through each step helped deepen my understanding of scene logic and user flow.

Observations 
One key observation was how easily features can conflict in an interactive environment: for example, model taps interfering with UI buttons, or sounds triggering without context. I also noticed that toddlers (my target audience) require an extremely simplified interaction model: large buttons, clear instructions, minimal visual clutter. That led me to adjust many of my early interaction designs to be more intuitive, like using UI buttons for model swapping instead of general screen taps. I also observed that timing and feedback are crucial: even a 1-second delay or color change helps reinforce learning and confidence in the user. Finally, testing on mobile versus in the Unity Editor revealed differences in scale, touch sensitivity, and timing, which I accounted for as I refined the MVP.

Findings
Throughout the process, I found that modular scripting and reusability were critical. For example, having a GameManager handle quiz flow, a ModelSwitcher for AR logic, and a persistent BGMManager made my project cleaner and easier to expand. I also discovered that user feedback, both visual (colors, animations) and auditory (sound effects, voice), significantly improves engagement, especially for children. Another important finding was that every scene needs to communicate clearly with the user. Even something as simple as bouncing text or a color change after clicking gives a sense of success or learning. Finally, I learned the value of debugging with patience: many of the issues I faced (missing textures, inactive objects, sound bugs) were solved by carefully checking each component's status and the flow of the scene.