Week 11: Theremin Composition Reproduction

The Theremin is a very weird instrument. It doesn’t have keys, or frets, or markers showing how to play it. You simply.. move your hands in the air. But the basic mechanics are just distances. The vertical antenna controls the pitch, and the horizontal antenna controls the volume. Move your hand away from the vertical antenna and you get a lower pitch; move your hand closer to it and you get a higher pitch. Lower your hand toward the horizontal antenna to reduce the volume, and raise your hand to raise the volume.
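The distance-to-sound mapping can be sketched as a toy model. This is my own illustration with assumed ranges, not a description of a real theremin’s response (which is nonlinear and far more sensitive):

```python
# Toy model of the theremin mapping: closer to the pitch antenna -> higher
# frequency; closer to the volume antenna (hand lowered) -> quieter.
# The frequency range and 60 cm working distance are assumptions.
def theremin_output(pitch_dist_cm, volume_dist_cm,
                    min_freq=65.0, max_freq=2100.0, max_dist=60.0):
    """Return (frequency in Hz, volume 0..1) for the two hand distances."""
    # Clamp distances to the working range, then scale linearly
    p = max(0.0, min(pitch_dist_cm, max_dist)) / max_dist
    freq = max_freq - p * (max_freq - min_freq)
    v = max(0.0, min(volume_dist_cm, max_dist)) / max_dist
    return freq, v
```

Even this simplified model shows why reproduction is hard: every centimetre of hand position maps to a different note.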

Although the basic premise seems simple, what makes it really unique is that it is a very sensitive device. Even with your wrist in the same position, if your fingers are in a different position the Theremin will emit a different sound. So to play existing sheet music, you have to replicate the positions of your hand as well as your fingers. It would be easy if there were standardized finger positions, as on a guitar, but because it’s so free-form, you have to reproduce the exact position to get the same note.

I think for a musical composition to be accurately played on a Theremin, the system at least needs to feed 3D points as instructions to the user.

The concepts that I have come up with are:

3D projectors/AR goggles
The 3D projector/AR goggles would display a virtual glove for the user to follow, along with cues for the next movement, giving the user instructions at precisely the right time to reproduce the composition.

Like a real-life osu! game.

Pros:

  • Users just need to follow the hand
  • Very flexible in deployment

Cons:

  • No physical feedback for the user

Robot arms and fingers
A robot arm and hand that the user wears to guide how they move their hands, like a coach physically moving your hand.

Pros:

  • Guaranteed to be very accurate
  • Physical feedback on user input

Cons:

  • Very involved to wear
  • Complicated to build

High-definition tactile feedback gloves
A glove with a ring-shaped array of vibration motors around the wrist and every finger. It vibrates in the direction the hands need to move.

Pros:

  • Physical feedback
  • Less involved than a robot arm

Cons:

  • Needs an external sensor to track the glove’s position in 3D space
  • The vibration motors must be very responsive for the user to reproduce the composition accurately
Pugh Matrix

| Criteria                | 3D Projectors / AR Goggles (baseline) | Robot Arms | Tactile Gloves |
| 3D information accuracy | 0                                     |            |                |
| Next instruction clues  | 0                                     |            |                |
| Physical feedback       | 0                                     | +          | +              |
| Involvement in using    | 0                                     | 0          |                |
| Total                   | 0                                     | –2         | –1             |

This Pugh matrix suggests that the 3D projectors/AR goggles are the best solution to pursue for now. But this might not tell the whole story. This concept could be combined with the others to provide a complete experience, physical feedback plus cues for future movements, although the amount of hardware the user has to wear would increase a lot.

Week 10: Experience Prototyping

Observe a restaurant/cafe/eating establishment (you pick which one)
Pizza on Toowong

Describe that establishment (style of dining, layout, ordering/service methods)
It’s a cafe-style restaurant with plastic menus available, but you can also read the menu on the wall. When you come up to the cashier, a crew member takes your order and asks for payment. You pay the bill, then wait at one of the available tables for your food. There’s a drinks cabinet for soft drinks, but you need to bring the drink to the counter and pay again. A crew member delivers the food to your table once it’s ready.

What is the existing observable experience for different stakeholders
(customers, wait staff, chef etc)?


Customers:

  • Come up to the restaurant
  • Look at the available options, trying to gauge whether the small or medium pizza is enough
  • Order the pizza
  • Sit down and look at the drinks cabinet, having forgotten to buy drinks
  • Grab a drink and pay for it
  • Wait for the food…
  • Enjoy the food
  • Leave

Wait staff:

  • Prepare for customer order while watching the reception area
    * if there’s a new customer, need to greet and take order
  • Keep watch on food from the kitchen
    * If all the orders for a table are ready, bring the food to the table
  • Keep watch on the tables
    * If diners are leaving, watch for empty tables and pick up the utensils and plates from the table
    * If diners are asking for your attention, acknowledge ASAP, come to them and tend to their needs

Chef/Kitchen Staff:

  • Watch for the order coming in
  • Cook them
  • Deliver to Wait staff

What external/internal factors impact on their experience?

External factors:

  • It’s an open, cafe-style restaurant, so if it’s raining outside the customers might get wet
  • It’s by the side of the road, so road noise might be an issue during peak hours

Internal factors:

  • Current capacity
  • queue times

What are the pain points?


Customers:

  • Having to guess what a dish looks like from the menu. Some people just don’t know ingredient names, but would remember a dish if they’ve seen or eaten it before
  • Knowing what size serves how many people

Wait staff

  • Waiting for a table to leave after they finish their food

What aspects of the existing experience could be enhanced/augmented/
supported with technology?

  • Digital menu
    The menu could be a display built into the bar table, covering the whole table. Pizzas could be selected and shown at their real size on the screen, so diners can see and gauge how much pizza they want. And every time they pick a topping they can see how it looks, so they remember their favorite toppings when they come back.
  • Timer pizza
    Sometimes you just have to subtly remind patrons if they are holding up other customers. The pizza plates would have an RGB LED array around the perimeter. When there is a customer waiting for a table, the wait staff activates it, and the plate lights up with a soft glow that gently suggests a timer. The timer counts down for 20 minutes, enough time for the customer to still take their time without other customers being ignored.
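The timer-plate idea boils down to mapping the remaining time onto the LED ring. A minimal sketch of that mapping (my own illustration; the 24-LED count is an assumption):

```python
# Map remaining minutes of the 20-minute countdown to how many of the
# plate's perimeter LEDs stay lit; the ring visibly empties over time.
TIMER_MINUTES = 20
NUM_LEDS = 24  # assumed number of LEDs around the plate

def lit_leds(minutes_remaining):
    """Number of LEDs still glowing for a given remaining time."""
    minutes_remaining = max(0, min(minutes_remaining, TIMER_MINUTES))
    return round(minutes_remaining / TIMER_MINUTES * NUM_LEDS)
```

A soft-glow animation on those lit LEDs would keep the reminder gentle rather than alarming.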

How would introducing technology in to this context change the experience?

Menu: Changing experience in ordering. From imagining what they would look like to seeing what they look like in real scale

Timer: People might be aware that the restaurant could remind them if they are holding up a table for other customers.

What experience scenarios might you test with the technology?

Menu: Ordering a pizza from memory, at the right size, instead of ordering too little or too much without assistance from the wait staff.

Timer: Deploying the timer plate directly to customers and watching their reaction when the timer is turned on. Do they realize what it means? How do they react?

Interactive Prototype 3 BTS

This post describes the process of creating my third interactive prototype.

The navigation mode of NaviBar is unusual, to say the least. I am borrowing a concept from commuting: turn signals. A turn signal is a way for one driver to communicate with another, not necessarily an instruction but more of a report. By flashing the side we want to go to, we instruct the user to turn towards their destination. This prototype needs to test whether users understand the navigation-mode animation on the NaviBar.

The navigation mode consists of 3 signals to be displayed:

  • Go straight ahead
  • Turn left
  • Turn right

The prototype still takes the form of the previous one: an app running on a phone taped to a cap. What’s changing now is that the app must be remotely controllable instead of sensor-driven, as I do not have an accurate interior map of the testing room. I would rather control the signals myself so that they correspond to the actual environment during testing.

The light bar must blink for each signal to replicate the animation in the video prototype. I want the similarity to a car’s turn signal: when the car is turning right, the right side of the car blinks. Another animation I want to try is the newer ‘Audi’ blink, where the turn signal builds up progressively, pointing in the direction the car is moving.

Testing both kinds of signal gives feedback on which is better. Even though they are similar in use, the previous testing showed that judging where the centre of your field of view is is hard, and the ordinary turn signal relies on that sense of centre.
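Both blink styles can be described as sequences of frames for the light bar. A sketch in Python (the actual prototype animates Android views; the 9-light bar matches the earlier prototype, while the cycle counts here are assumptions):

```python
# Frames for the two blink styles on a 9-light bar.
# A frame is a list of 0/1 light states, left to right.
NUM_LIGHTS = 9
HALF = NUM_LIGHTS // 2  # lights on one side of centre

def ordinary_blink(direction, cycles=3):
    """All lights on one side flash on and off together."""
    if direction == "left":
        on = [1] * HALF + [0] * (NUM_LIGHTS - HALF)
    else:
        on = [0] * (NUM_LIGHTS - HALF) + [1] * HALF
    off = [0] * NUM_LIGHTS
    frames = []
    for _ in range(cycles):
        frames += [on, off]
    return frames

def audi_blink(direction, cycles=3):
    """Lights fill progressively outward from the centre, then reset."""
    centre = NUM_LIGHTS // 2
    frames = []
    for _ in range(cycles):
        for step in range(1, HALF + 1):
            frame = [0] * NUM_LIGHTS
            for i in range(step):
                if direction == "right":
                    frame[centre + 1 + i] = 1
                else:
                    frame[centre - 1 - i] = 1
            frames.append(frame)
        frames.append([0] * NUM_LIGHTS)
    return frames
```

The progressive fill is what makes the Audi style readable in peripheral vision: the motion itself points in the turn direction, so the user does not need to judge where the centre is.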

While the app layout is pretty much the same, the insides are very different. This prototype only needs to display an animation when it is triggered by a remote controller. I considered using a web remote controller, but it is too complicated, and I would need another device to carry around while I follow the user. I decided to use a Bluetooth game controller: Android supports them, and I have a PS4 controller that I can use.

Using this controller in a bare app like this prototype is pretty easy. The Android system already processes controller input into keycodes that are passed to the program. I just set up my animations using cases on the keypress. One thing to note: I do not recommend relying on the d-pad or directional keycodes, as they are not as responsive as the other buttons/triggers.
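The case-based dispatch could look like this sketch (Python for brevity; the real prototype handles this in the Activity’s key handling. The keycode values match Android’s KeyEvent constants for game-pad buttons, but which button triggers which animation is my assumption):

```python
# Stand-ins for Android's KeyEvent game-controller constants.
KEYCODE_BUTTON_A = 96   # KeyEvent.KEYCODE_BUTTON_A
KEYCODE_BUTTON_B = 97   # KeyEvent.KEYCODE_BUTTON_B
KEYCODE_BUTTON_X = 99   # KeyEvent.KEYCODE_BUTTON_X

def on_key_down(keycode):
    """Map a controller keycode to the animation to play (illustrative mapping)."""
    actions = {
        KEYCODE_BUTTON_X: "go_straight",
        KEYCODE_BUTTON_A: "blink_left",
        KEYCODE_BUTTON_B: "blink_right",
    }
    return actions.get(keycode, "ignore")
```

Unmapped keycodes fall through to "ignore", so stray controller input never triggers a signal mid-test.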

To handle the blinking I used schedule, and finally got the hang of it, although I still have to drive each animation manually by turning each light on and off. After I succeeded in building the regular blink, creating the ‘Audi’ blink was not so hard.

Because the physical form is the same, I just made sure the controller was connected and the phone was running the app, then built the mount the same way I did last time.

The prototype fulfils all my initial targets for this iteration. The remote control simplifies the build but still achieves the goal of simulating navigation that corresponds to the real environment. And this method of ‘Wizard of Oz’ prototyping is something very new and very exciting for me. I did not think something like this would be so easy to build.

The ordinary blink did not fare well in testing. Judging the centre point is still a problem, and I should have turned on fewer lights on the corresponding side instead of half of them. But the ‘Audi’ blink performed much better in testing anyway.

Week 9: Living with the acrylic block

I’ve lived with the acrylic block for the past 4 weeks. Most of the time it is in my jeans pocket. When it is out, I usually just fiddle with it while thinking about what it could represent. So here are some of my ideas.

  • Orientation-based status toggling
    • Connected to phone and Slack apps, for example, with a preset status for each orientation of the block (so the block lives on a table, not in a pocket)
    • Just orient it to display a different color and trigger the action
  • Activity badges as non-conscious body signaling
    • Based on the block’s accelerometer
      • It can change colors to indicate the level of physical activity undertaken by the wearer over the past few hours, signaling how tired the wearer is without them having to tell everyone they had an exhausting day
      • An indicator that turns on if the wearer is lacking sleep (because sometimes I don’t get enough sleep, and I’m cranky, but I don’t realize I haven’t slept enough)
    • Based on digital activity, such as the number of Instagram likes over the past 3 hours or the number of upvotes received. Maybe a color tied to a hashtag to raise awareness
[image: rgb cube]
  • Random Color Fiddler/Exploring Colors
    • use the live rotational info (3 axis) to display directly corresponding rgb/hsl colors
    • use a coin flip mode to hash to a random color, for inspiration
[image: rgb combination]
  • Advanced Coin Flip/Dice
    • Based on the x, y, and z rotation while in the air, hashed to a predefined number such as 2, 6, or 8, displayed using a color. A digital dice using a true source of randomness.
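The color-exploring idea above, mapping the three rotation axes directly to a color, could be sketched like this (my own illustration; the linear degrees-to-channel scaling is an assumption):

```python
# Map the block's roll/pitch/yaw (in degrees) directly to an RGB color,
# so rotating the block sweeps through the color space.
def rotation_to_rgb(roll, pitch, yaw):
    """Scale each axis (0-360 degrees) to one color channel (0-255)."""
    def channel(angle):
        return int((angle % 360) / 360 * 255)
    return (channel(roll), channel(pitch), channel(yaw))
```

The dice idea is the same mapping followed by a reduction step: hash the resulting triple down to one of the die faces.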

Interactive Prototype 2 BTS

This post describes the process of creating my second interactive prototype.

The physical interaction of NaviBar is the heading interaction: the information on the light bar changes with the heading of the user. This concept is familiar from games, but I want to test whether users interacting with this kind of system for the first time can understand the physical interaction. By testing the physical interaction, I am also researching whether users find a device like this useful.

The prototype has to be driven by changes in the user’s heading, recorded from their head, and be responsive enough to be understood by the user. The NaviBar itself must be visible to the user without obstructing their field of view. This will be a functional prototype rather than a form prototype, to evaluate the physical interaction.

The prototype has to implement these functionalities:

  • display the lightbar
  • random target selection to give the same experience to each user
  • switch between compass mode and friends mode (1 target or 2 targets on display)

There are several options available to implement this prototype. The most immersive and interesting would be using VR to test this interaction. A VR prototype would let me control the environment the user is in and what they are seeing. But creating a VR prototype is very resource-expensive, and I don’t have access to a VR headset/system. A more realistic option would be a prototype with an RGB strip controlled by an Arduino microcontroller. But for the physical interaction to work I would have to wait for a magnetic heading sensor to ship, and I would have to program the RGB colors and blinking manually.

A phone app is a viable alternative as long as it is presented not as a phone app but as a physical prototype. There are several existing ideas on how to attach a phone to a hat, and I felt I could replicate that using duct tape, although at this point I did not know how I would do it. A smartphone has all the needed components: the screen, the controller, and the heading sensor.

I considered Unity for the prototype, because it is possible to build on Unity’s AR platform to get the heading and display the light bar, but I did not find an example project that implements a compass in Unity, and I am not confident enough in JavaScript or C# to implement it myself.

I decided to use Android Studio because I found an example compass project to base my project on. Another advantage of picking Android Studio is the guarantee that the app will run on my phone for testing, with all native functions and sensors available to me. I have no experience in Java, as with Unity, but I have been interested in Android development for some time and this is a simple project to jump into. The existing compass tutorial also helped me move faster instead of starting from scratch.

The tutorial builds a compass app with a rotating compass image that points to the user’s heading. The important thing I kept from the tutorial’s files is the code that reads the magnetic heading and converts it into degrees for display. I then modified the display to show 9 squares representing the bar. Android Studio does not have a tool for creating shapes, so I used buttons and changed their colors to represent the light bar.

Converting the heading into blocks is not that hard. First check whether the target heading is within the user’s field of view, then divide the resulting angle by the number of pieces. The selected block then lights up.
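A sketch of that conversion (in Python rather than the prototype’s Java; the 90-degree field of view covered by the bar is my assumption):

```python
# Convert a target heading into a light-bar block index, given the user's
# current heading. The bar has 9 blocks spanning an assumed 90-degree FOV.
NUM_BLOCKS = 9
FOV = 90.0  # assumed angular width covered by the bar, in degrees

def target_block(user_heading, target_heading):
    """Return the block index (0..8) to light, or None if out of view."""
    # Signed angle from user heading to target, normalised to -180..180
    diff = (target_heading - user_heading + 180) % 360 - 180
    if abs(diff) > FOV / 2:
        return None  # out of bounds: the bar would blink instead
    # Map -FOV/2..+FOV/2 onto block indices 0..NUM_BLOCKS-1
    index = int((diff + FOV / 2) / FOV * NUM_BLOCKS)
    return min(index, NUM_BLOCKS - 1)
```

The modulo normalisation also handles wrap-around at north, e.g. a user facing 350° with a target at 10° correctly gets a block right of centre.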

Due to my inexperience with Android Studio, I did not manage to correct the heading when the app is used in landscape, so all my headings are actually offset by +90 in use. But in this prototype, absolute heading accuracy is not important, as long as the target and the user’s heading are measured the same way. What is important is the sensor’s precision in detecting minute changes in the user’s heading, and its responsiveness in reflecting those changes quickly.

The video prototype shows the light bar blinking when the target is out of bounds. But again, due to technical difficulties, I did not manage to make the buttons blink in this prototype.

Armed with a hat, my phone, and duct tape, I attempted to recreate the hat phone holder. Inadvertently, I realized the phone can just sit flat on top of the brim while still displaying the information to the user.

The code for the prototype can be seen here.

This prototype meets the targets I set at the start, with minimal need to buy specific things, and making it pushed me to learn a new language. Improvements for this prototype would be a brighter screen and overcoming the technical difficulties to enable more interactions, such as blinking, or actually setting targets based on location instead of just heading.

Interactive Prototype 1 BTS

For the first interactive prototype, I want to test the interaction of the companion app. The companion app is an essential part of using NaviBar, as it is the interface for the user to set up and configure their NaviBar to suit their uses. All modes will eventually have a configuration interface in the companion app, but for now only the compass mode and friends mode are implemented in this prototype; navigation mode is not.

This prototype will have these screens:

  • Home screen
    • Select which mode to pick
  • Compass screen
    • Pick heading to set as target
    • Shortcuts to North, East, South, and West
  • Friends mode
    • Pick friends to track
    • Show what colors for each friend

I considered these tools to build my prototype, with my personal pros/cons for each tool.


Unity

Pros:

  • The course default platform, with mentor help available
  • Regarded as easy to use
  • Rich library of assets from the Unity Store

Cons:

  • I have no experience with C# or JavaScript
    • I don’t want to sink in the time to learn it, because I have not seen a way to do further testing with Unity

Android Studio / React Native (native mobile development)

Pros:

  • Uses native code, creating a native experience
  • The full array of native functions and sensors is guaranteed to be available

Cons:

  • I have no experience with Java or JavaScript
  • I decided on the form of the prototype too late, so there was no time to learn it (although in retrospect there would have been enough time)

InVision / Marvel

Pros:

  • Widely used prototyping tools for evaluating interactive screens
  • Very easy to link screens to one another

Cons:

  • Screen assets need to be made externally in Illustrator/Photoshop
    • An extra step there is no time for

But in the end I decided to use MockingBot because:

  • No need to make assets externally, elements are available inside the tool
  • It’s possible to directly deploy the application to phone as an app

(By this point I knew I should have used a tool that requires writing code, but I decided to prioritize having a prototype ready by the testing session, which was 24 hours away.)

[screenshots of app]

The main home screen is a column of buttons for users to navigate to the correct screen for their mode. I want to make sure users pick a mode to start up their NaviBar, as in real use. There is a navigation bar at the bottom of the screen, but that would give precedence to one mode over the others if there were no home screen to start from.

The compass screen consists of a dropdown and button shortcuts. My initial plan was a rotating compass image that the user slides, a button to set the new heading as a target, and buttons to pick one of the cardinal headings (NESW). But due to technical limitations of the tool, I decided to offer only a limited selection of headings in the dropdown, to avoid cluttering the list, and arranged the buttons to imitate their positions on a compass.

The friends screen is a list of friends’ names with checkboxes to choose whether or not to track each friend. If a friend is selected for tracking, a box appears beside their name indicating the color that friend is represented by on the NaviBar.

Interactivity on all screens is limited by MockingBot’s area-to-screen workflow, so I implemented each possibility on the friends screen manually. Due to insufficient time, I only implemented the friends that the test tasks users with selecting, instead of all of them.

After the test and the Statement of Delivery were done, I felt I had cut a lot of corners on this prototype. The interaction is not physical, as it still happens on a touchscreen, and the interaction logic is also not natural (the friends screen). I should have implemented the prototype in a tool with programming capabilities, such as React Native or Android Studio, to increase its realism and interactivity.

Video Prototype BTS

This post describes the process of creating my video prototype for NaviBar.

The NaviBar is a device that assists the user in navigation by providing information on a heads-up display in the form of a light bar. Because the product’s interaction and scope are very simple, and the experience of it is very visual, I need viewers to see the NaviBar in use from the user’s POV. POV matters because heads-up displays sit in the user’s peripheral vision. Another important consideration is to show the NaviBar operating in a “real world” environment, where the viewer can see it responding to the user’s actions and surroundings.

Google’s Glass POV demo video shows what the user will actually see, without obstructing the user’s field of view. This is the kind of video I want to produce. It is also easier to produce because the product itself is not shown, and the interface is added in post-production using animation. Using animation for the interface makes it easy to adjust and improve, and easier to show the interface responding to the environment.

The video needs to show the three modes of NaviBar:

  • The compass mode:
    Where the light bar displays a static mark to indicate the target heading, in this case the default north heading, to simulate a compass
  • Friends mode:
    Where the light bar displays multiple targets to indicate friends’ positions
  • Navigation mode:
    Where the light bar displays navigation directions by blinking or turning on parts of the bar

The script can be found here.

Although the Google Glass demo video uses real footage plus animation, I decided to make the whole video animated, because I do not know how to build a believable model of the NaviBar to film, and I did not have sufficient equipment to record POV video.

Some tools I considered for making this are:

Blender

Pros:

* Very flexible for building the form of the prototype and the environment
* I have a bit of experience using Blender
* Scripting is possible, to animate and simulate the NaviBar

Cons:

* Steep learning curve
* Manual modelling and animation

Source Filmmaker

Pros:

* Vast libraries of characters and items from GMod, CS, and Half-Life
* Actually suited to making videos

Cons:

* I have no experience with it
* Lack of scripting
* Inflexibility in editing the models to include the NaviBar



Garry’s Mod (GMod)

Pros:

* Easy to interact with objects
* User POV by default
* Models can be modified easily using this plugin
* Scripting is possible, to simulate the NaviBar

Cons:

* No native recording, but POV can be captured with screen recording software

If you do not know what GMod, or Garry’s Mod, is: it is a game built on the Half-Life 2 engine in which the player can do.. anything. It’s a sandbox game, so using it to create environments and scenarios makes sense. The environment I’m using is a blank map, to avoid distracting the viewer, with models added to mark certain headings and to stand in as friends.

The NaviBar model is just a miniaturised row of grey bricks attached to the brim of a cap to simulate the light bar. I decided to animate the NaviBar in After Effects, because I have experience animating with it and decided it would be faster to keyframe the animation manually than to learn scripting inside GMod.

I recorded the script, then recorded the screen while matching the camera to what the script says, basically just rotating the head around and moving the model. The exact movements are not really important, because the interface is added in post-production. What matters is that the visuals and the narration match up.

After recording the screen, I imported the clip and the audio into After Effects and set up the scene. The cap and the NaviBar bob with player movement, so I needed to track the light windows to match the movement (though not perfectly). Then I keyframed the lights manually to correspond to the modes and the environment.

The video, produced in a short time, satisfied my requirements: showing potential users the function and the experience of the NaviBar. All the modes are shown, with apt explanations of each.

After building Interactive Prototypes 2 and 3, a POV video recorded with a functional prototype of that level would also make a nice video prototype to show the experience of using the NaviBar.

Week 13: Interactive Prototype III

This is the same Statement of Delivery that is submitted for grading on Blackboard. The journal entry of making this prototype is available here.

The purpose of this prototype is to test the navigation display of the NaviBar. The navigation mode provides turn-by-turn navigation, with directions displayed on the light bar. I intend to test whether the display sequence I’ve designed is intuitive and usable. The prototype must give directions that correspond to real-world situations. Elements carried over from the previous prototype are the physical form of taping a phone to a hat and the basic layout of the prototype application.

The Form
The form of this prototype is a hat with a phone mounted on the brim to simulate what the NaviBar would look like to a user. The phone runs an Android app that simulates NaviBar functionality with a row of “lights” visible to the user. The app is controlled with a PS4 controller to show the various light displays that represent navigation instructions. There are 5 light displays to be tested:
– The front direction
– The ordinary blink for the left and right directions
– The Audi blink for the left and right directions
A POV video of this prototype is also available here.

[Photo of the display]
[Screenshot of the app]

Testing Approach
The agenda for this test is to evaluate whether the light display is easily understood by the user for navigation purposes. Users are asked to wear the hat and follow the instructions from the hat. No further instructions are given other than as needed (for example, if the user needs to turn around because there is a lack of space). I follow the user from behind, sending instructions to the app through my controller that correspond to the real-world space being tested, in this case the testing lab. I display the ordinary blinking lights for the first run, then ask the user to turn around and repeat the test with the Audi blinking lights. After all light displays have been tested, users are asked a few questions about their experience using the NaviBar.

Objectives: follow the instructions on the hat.

Questions:
– Do you understand what the blink represents?
– Do you think the timing is OK for navigation? Would you want an earlier sign?
– Which blink do you prefer?
– Is the light disturbing your field of view?
– Would you use this to navigate yourself in the city? While cycling? While driving?
– What was outside your expectations during use?


Results

Users understand the purpose and meaning of the blinking lights once they understand the context of the prototype, which is navigation. Users prefer to get the direction a very short time before they need to make the turn, or alternatively want to know the distance to the next turn while navigating.

The ordinary blinking lights are not understood clearly. From my observation, users often mistook the ordinary left or right blink for an instruction to go straight ahead. Users explained that they could not really tell which side of the NaviBar was blinking, because the NaviBar itself is out of visual focus during use, forcing them to pay a lot of attention to it.

The Audi blinking lights are understood clearly. Users feel they do not have to spend much attention figuring out the direction of the turn from the light display.

The NaviBar did not interfere with the users’ field of view, but if not configured correctly it feels too distracting, because they need to pay a lot of attention to it to understand the directions.

Most users would use the NaviBar concept in some form to navigate themselves on foot (if it were integrated into a frame of glasses/sunglasses, for example) and on a bicycle. Most users would still prefer having a map display on their phone or voice directions for navigating while driving a car.

Other changes that were suggested by users are:
– Having another way to indicate straight ahead (either a constant light or another color)
– Having another interaction for giving directions (vibration motors around the perimeter of the hat)
– Having a set purpose for the blink (some users were confused about why the lights blink)
– Making the instruction stay on after the blink (some users missed the initial light display and did not understand the direction, only that one had been given)
– Different colors for different directions
– Displaying the distance to the next turn/direction

Concepts for changes:
– More testing is needed to find the right way to indicate straight ahead. Some navigation systems use no direction to mean straight ahead, while displaying the distance to the next turn.
– The blinking needs more testing to find a frequency and duration usable to the user. A light bar can be information-dense, but users would need to understand the configuration first (trained users).
– A distance display could use a building-up display, similar to the Audi blink.
– Colors for directions are interesting, but currently there are no conventions about which color means which direction.

Unconsidered changes:
– In the final product, the display would continuously show the direction until the turn is taken (detected using the magnetic heading and GPS), so there is no need to keep the direction displayed without the blink.
– The light bar is used for purposes other than navigation, so changing the interaction to vibration motors is not in the plans right now.


Week 12: What is a Prototype?

It has been 12 weeks, and the course and its experiment with prototyping are done. My definition of prototypes and my perception of prototyping have changed.

My Week 1 work reads:

A prototype is anything that is used to evaluate a product’s viability, usability, or operational capability. It can take many forms, from a drawing on a napkin to a high-fidelity mockup with animation. For physical objects, it can be anything from a cardboard form prototype to a highly detailed 3D-printed case. At a minimum you’ll need paper and a pen, but it can be pretty much anything that can represent something else (another object or process) in the testing. Prototypes are made to evaluate an idea or process before committing the full effort to develop it into a fully featured product. Prototyping means a lot in my degree: although the material to create a product is basically free (code), iterating and testing still always costs less time than completely developing a product only for it to not fit any market or demographic.

I think this is still true, but what has changed is the variety and the specific purposes of the prototypes that I know of. Prototypes with different specifications reveal different information about the product and the user.

The prototyping process can now also extend into development: while prototyping ideas, I am already thinking about the viability of the technology, or about how to develop the technology that the user needs.

I have created prototypes before: one on a whiteboard for a web app's user interface, and another to evaluate how people react to additional stimuli during group discussion. They enabled me to evaluate ideas and iterate on the concepts toward a clear solution.

Throughout this semester I created a number of prototypes to test the viability of the NaviBar concept, mainly users' understanding of the product interaction. The concept needed only slight adjustments, and the user response was great: most users could use the NaviBar prototypes without difficulty, which shows the intuitiveness of the concept.

One interesting point that came out of the group discussion is that users are far more forgiving of low-fidelity mockups that do not work than of high-fidelity mockups that do not work. They expect more from a high-fidelity mockup, including functionality.

This turned out to be very true. High-fidelity prototypes feel as if they should also work in areas that we have not implemented. But again, different prototypes reveal different information from the user and suggest different adjustments to the product.

I think this course is a great experience: you actually develop a product on your own, test it on your own, and go through the reflective thinking that happens in the blogging and the prototype testing deliveries.

Week 10: Interactive Prototype II

This is the same Statement of Delivery that is submitted for grading on Blackboard. The journal entry of making this prototype is available here.


The purpose of this prototype is to test the physical interaction of NaviBar, which uses the magnetic heading to drive the light bar, and to test the intuitiveness of the light bar display. It is important to have a precise, responsive, functional heading sensor that corresponds with the user's head movement. The simulated light bar on the prototype must correspond with the user's input. The application interface that was tested in the previous round of testing is not needed in this test.


The Form

The form of this prototype is a hat with a phone mounted at the brim to simulate what NaviBar would look like to a user. The phone runs an Android app that simulates NaviBar functionality with a row of “lights” that is seen by the user. A point-of-view video of the prototype can be seen here/below. The app has preset and random targets to be tested with the user.

Physical Form of Prototype


Screenshot of the application running on the phone

Point-of-view of a user, with the NaviBar indicating the objective heading in red.


The app has two modes: compass mode, with a single target heading, and friends mode, with two targets, each in a different color. The colors do not blink as they did in the video prototype due to time constraints.
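
To make the behavior concrete, here is a minimal sketch of how a target heading could be mapped onto one of the blocks on the simulated light bar. The block count, the field of view, and the clamping of out-of-view targets to the edge blocks are assumptions for illustration, not values from the actual app:

```java
// Sketch: map a target heading onto a block index of the light bar.
// BLOCK_COUNT and FOV_DEGREES are assumed values, not from the actual app.
public class LightBarMapper {
    static final int BLOCK_COUNT = 9;      // assumed number of "lights"
    static final double FOV_DEGREES = 60;  // assumed horizontal span of the bar

    /** Signed difference target - heading, wrapped to (-180, 180]. */
    static double relativeBearing(double headingDeg, double targetDeg) {
        double d = (targetDeg - headingDeg) % 360.0;
        if (d > 180) d -= 360;
        if (d <= -180) d += 360;
        return d;
    }

    /** Block index 0..BLOCK_COUNT-1; out-of-view targets clamp to the edges. */
    static int blockFor(double headingDeg, double targetDeg) {
        double rel = relativeBearing(headingDeg, targetDeg);
        double half = FOV_DEGREES / 2.0;
        if (rel <= -half) return 0;               // leftmost block
        if (rel >= half) return BLOCK_COUNT - 1;  // rightmost block
        double t = (rel + half) / FOV_DEGREES;    // 0..1 across the bar
        return Math.min(BLOCK_COUNT - 1, (int) (t * BLOCK_COUNT));
    }
}
```

With this mapping, facing the target lights the middle block, and a target behind the user clamps to the left or right edge block, matching the "edge blocks mean out of view" behavior users reported understanding.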


Testing Approach

Users would be asked to wear the hat and find the objective by orienting themselves to the light. No instructions would be given other than the objective; this tests whether the interaction is intuitive enough for new users to understand. Users would test both modes, one after the other.



Orient themselves to red

Orient themselves to green

Orient themselves to blue

“Orient” in this case means putting the light in the middle of the bar, so that the user directly faces the objective heading.



Do you understand what the colored bars represent?

Is the light disturbing your field of view?

Is the experience responsive enough?

Was anything unexpected during use?



From testing, all users understood that the way to interact with the NaviBar was to rotate their head to change their heading. All users seemed to have difficulty adjusting to having the lightbar in their view; most needed to be directed not to focus their sight on the NaviBar while testing. After this direction, most users felt that the NaviBar did not disturb their field of view while still being usable.

The middle indicator in this prototype is not clear enough: users tried to orient to their own center instead of the NaviBar's center, which might be different. My own observation during testing shows that although the center indicator is objectively in the middle of the hat, because it sits outside the visual focus its perceived position shifts according to the user's dominant eye, in my case shifting to my left.

Users complained about the 'jumpiness' of the blocks when they change, especially when trying to precisely move the indicator to the middle. Sometimes a block flickers when the heading sits on the boundary between two blocks, changing rapidly from one block to the other. Most users understood that the light layout corresponds to their heading, and that the leftmost and rightmost blocks indicate an objective outside the field of view. One user noted that when the blue and green blocks are side by side in friends mode, the colors seem to mix.


From this testing, there are several changes to be made in the next prototype. The first is to increase the 'block count', i.e. the resolution of the heading data displayed to the user, to reduce jumpiness and increase perceived responsiveness. The second is to make the middle indicator more prominent and visible, and to solve the dominant-eye problem that shifts the perceived center for the user. The third is to find a better color scheme for increased visibility and a reduced chance of 'mixing' colors. The next prototype should also test the navigation mode, to see whether users can grasp that information intuitively.
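
One common way to address the boundary flicker described above is hysteresis: the indicator only switches blocks once the heading moves a small margin past the boundary. Below is a sketch under assumed values; the deadband size, block width, and stateful design are illustrative, not from the actual app (low-pass filtering the heading would be another option):

```java
// Sketch: hysteresis to stop the indicator flickering at block boundaries.
// BLOCK_WIDTH and DEADBAND are assumed values, not from the actual app.
public class BlockHysteresis {
    static final double BLOCK_WIDTH = 60.0 / 9;  // degrees per block (assumed)
    static final double DEADBAND = 1.5;          // degrees past the edge before switching

    private int current = -1;                    // last block shown, -1 = none yet

    /** relFromLeftEdge: bearing in degrees measured from the bar's left edge. */
    int update(double relFromLeftEdge) {
        int raw = (int) (relFromLeftEdge / BLOCK_WIDTH);
        if (current < 0) { current = raw; return current; }
        // Only leave the current block once we are DEADBAND degrees past its edge.
        double left = current * BLOCK_WIDTH - DEADBAND;
        double right = (current + 1) * BLOCK_WIDTH + DEADBAND;
        if (relFromLeftEdge < left || relFromLeftEdge >= right) current = raw;
        return current;
    }
}
```

The effect is that small sensor jitter right on a boundary no longer toggles the block, while deliberate head movement still switches it promptly.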