
18.11.2024 - 02.12.2024 / Week 9 - Week 11
Emily Goh Jin Yee / 0357722 / Bachelor of Design (Honours) in Creative Media
Experiential Design / MMD60204 / Section 01
Task 3: Project MVP Prototype
TABLE OF CONTENTS
1. LECTURES
2. INSTRUCTIONS
3. TASKS
4. FEEDBACK
5. REFLECTION
LECTURES
All lectures were completed in Task 1.
TASKS
TASK 3: PROJECT MVP PROTOTYPE
In the prototyping phase, our goal is to emphasize the main features of the AR application. At this stage, it's not required to include every asset or the full application flow. The focus should be on presenting the core functionalities that best represent the app's primary purpose and the user experience it aims to deliver.
HIGHLIGHTS OF PREVIOUS TASK:
FINAL PROPOSAL DOCUMENT
Fig T2.8 Final Proposal Document
*misspelled 'Solway' font, please ignore
PROCESS
Based on the feedback suggesting a story where the animation changes as users turn the carton, I've made some updates to my proposal to incorporate this dynamic element.
To begin the task, I started by creating a frame-by-frame animation in Procreate. This animation is triggered when the image target is detected. In the current version, Oatie appears and greets users in a short sequence. While it’s still a work in progress and doesn’t yet include sound, this initial animation sets the tone for the interactive experience. As I continue refining it, I plan to add more frames and transitions so that the animation evolves as the user interacts with the carton, creating a more engaging and immersive experience.
Fig T3.1 Animation Sketch on Procreate
When the image target (front of the carton) is detected, the video appears:
Fig T3.2 Animation Outcome for the welcome page
UNITY PROCESS
Setting up Unity took a long time because I had forgotten how to do it. After rewatching the recordings, and with guidance from a classmate, I was able to complete the setup successfully.
1. After creating a new project in Unity, import the Vuforia package into Unity's 'Resources'.
2. Add the license key.
3. Import the package into Assets.
4. Window > Package Manager
5. In Build Settings, select the desired platform and click 'Add Open Scenes'.
6. Back in Vuforia's Target Manager, generate a new database and give it a name.
7. In the database, add the image target.
To kick off the project, I focused on the landing page, which plays a vital role in the AR app's user experience. I began by creating a scene named "Landing Page" and set up a canvas within it. On this canvas, I added the brand logo to establish the app's identity, along with a button that directs users to the scanning page. This simple yet functional design aims to provide an easy entry point into the app, ensuring users can quickly navigate to the next step in their interactive journey.
Fig T3.3 Landing Page Process
Fig T3.4 Button Colour
In the Inspector panel, under the button's On Click event, I selected the 'MySceneManager.cs' script by dragging the Canvas into the object field below 'Runtime Only', then typed in the name of the scene users will be directed to after clicking the button (a sketch of the script follows below).
Fig T3.5 inspector panel-on click
(button that directs users to the scanning page)
Fig T3.6 MySceneManager Script
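Since Fig T3.6 shows the script only as a screenshot, here is a minimal sketch of what a scene-loading script like this typically looks like. SceneManager.LoadScene is the standard Unity API; the method name LoadScene and its string parameter are illustrative assumptions rather than a transcription of my exact script.

using UnityEngine;
using UnityEngine.SceneManagement;

// Attached to the Canvas; the button's On Click event calls LoadScene,
// with the target scene name typed into the Inspector field.
public class MySceneManager : MonoBehaviour
{
    public void LoadScene(string sceneName)
    {
        // The scene must be listed under 'Add Open Scenes' in Build Settings,
        // otherwise this call fails at runtime.
        SceneManager.LoadScene(sceneName);
    }
}

On the button itself, the On Click entry then points at the Canvas and calls MySceneManager.LoadScene with the scanning page's scene name.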
Scanning Page Scene:
Hierarchy Panel > Vuforia Engine > AR Camera
After adding the AR Camera, delete the Main Camera.
Hierarchy Panel > Vuforia Engine > Image Target
Inspector Panel > Type: From Database > select the database and image target
Fig T3.8 Settings
Hierarchy Panel > Video Player
- Drag the video from Files into the Video Clip field under the Video Player section.
- Drag the AR Camera into the Camera field.
- Aspect Ratio: Fit Vertically / Stretch
- Add a new script, 'PlayVideoOnDetection' (sketched below, after Fig T3.11).
Fig T3.9 Video Player process
Fig T3.10 Video Player Inspector
Creating the script turned out to be a challenging and time-consuming process, requiring more than ten attempts. The TDS ChatGPT frequently provided outdated or incorrect scripts, which made troubleshooting particularly frustrating. I had to spend significant time navigating through Unity to identify and resolve the various problems and errors in the script.
Eventually, I managed to get a version of the script that was error-free. However, when testing the AR camera, the video started playing automatically without requiring the image target to be scanned. To address this, I disabled the 'Play on Awake' setting, but the video still kept playing on its own. After further trial and error, I turned to ChatGPT again for assistance and finally obtained a working script that resolved the issue. Here is the finalized script I used:

Fig T3.11 PlayVideoOnDetection Script
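Since Fig T3.11 is a screenshot, here is a minimal sketch of the kind of script that finally worked, assuming the current Vuforia 10 ObserverBehaviour API (the older DefaultTrackableEventHandler-style scripts ChatGPT kept suggesting are deprecated). The public videoPlayer field is assigned in the Inspector; everything outside the Unity and Vuforia APIs is illustrative.

using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Attached to the Image Target, alongside its ObserverBehaviour.
public class PlayVideoOnDetection : MonoBehaviour
{
    public VideoPlayer videoPlayer; // assigned in the Inspector

    ObserverBehaviour observer;

    void Awake()
    {
        // Guard against the auto-play problem: make sure the video
        // cannot start before the target is actually detected.
        videoPlayer.playOnAwake = false;
        videoPlayer.Stop();

        observer = GetComponent<ObserverBehaviour>();
        if (observer != null)
            observer.OnTargetStatusChanged += OnTargetStatusChanged;
    }

    void OnDestroy()
    {
        if (observer != null)
            observer.OnTargetStatusChanged -= OnTargetStatusChanged;
    }

    void OnTargetStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        // Play only while the image target is tracked; stop when it is lost.
        if (status.Status == Status.TRACKED || status.Status == Status.EXTENDED_TRACKED)
            videoPlayer.Play();
        else
            videoPlayer.Stop();
    }
}

Forcing playOnAwake off in Awake, on top of unchecking 'Play on Awake' in the Inspector, is what keeps the video from starting before the carton is scanned.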
Next, I attempted to navigate from the landing page’s button to the scanning page, but initially it wasn’t working. After some investigation, I realized the issue was that the MySceneManager script had not been assigned in the Inspector panel of the image target. Once I added the script in the correct location in the Inspector, the navigation functioned properly and the transition between the pages worked as expected!
OUTCOME OF THE CURRENT PROGRESS
Fig T3.12 Final Outcome of project for Task 3
Throughout the process, I encountered numerous challenges, which resulted in the prototype having fewer features and interactive elements than initially planned. However, this experience has helped shape a clearer direction for the final project. While the current prototype may seem quite basic, it serves as a foundational step, and I am committed to refining and expanding it in the final version to fully realize the project’s potential.
FEEDBACK
- Can't just have buttons and information
- Use the front and back of the carton
- When the user flips to a different side, the mascot moves in
- Use video form, or animate in Unity
- Create a story for the product, or select a product with a story
- Cut down the information
REFLECTION
Experience
The process of developing the prototype was both challenging and enlightening. I encountered several issues, such as troubleshooting scripts that required multiple attempts and addressing unexpected problems in Unity, like the video playing automatically without scanning the image target. These difficulties made the process time-consuming, especially as I had to navigate Unity to locate errors and refine my work. Despite these obstacles, I managed to implement some basic functionality, such as transitioning from the landing page to the scanning page after resolving script assignment issues. While the outcome is simpler than I initially planned, it has provided a solid foundation for further development.
Observation
Through this project, I noticed that even seemingly minor details, such as assigning scripts in Unity’s inspector panel, can make a significant difference in functionality. Proper testing at each step is crucial, as overlooking default settings like "Play on Awake" can cause unexpected behaviors. I also observed that creating a smooth user experience in AR involves balancing technical functionality with ease of use, as even small errors can disrupt the flow of interaction. Furthermore, relying solely on external tools like ChatGPT isn’t always effective, highlighting the importance of independent problem-solving and exploring Unity’s features firsthand.
Findings
This project reinforced the importance of iterative development and resilience when dealing with technical setbacks. I found that focusing on small, functional steps—rather than trying to achieve perfection right away—was key to making progress. Additionally, I learned that thorough planning is essential, as even a basic prototype requires significant time and effort to address creative and technical challenges. While the current version lacks advanced features, it lays a solid foundation for future improvements. Moving forward, I aim to refine the interactive elements and ensure the final AR experience aligns with both technical requirements and user expectations.