on the redesign indicated that participants conducted a thorough search of the left navigation before making their (incorrect) selection (see Figure 2.6). They looked at the target link (Practice Resources) at least twice, but most of them eventually selected the Legislative & Regulatory link located right below it. This finding indicated that position statements were more strongly associated with regulations and policies than with resources, and should be moved to the Legislative & Regulatory section.
FIGURE 2.6 Heatmaps illustrating the distribution of fixations aggregated across all participants attempting the “find ASCO’s position statements” task (left: original design; right: proposed redesign). The correct targets for this task are marked.
Eye Tracking Explained Differences in Efficiency
A few tasks resulted in perfect or close-to-perfect accuracy in both designs. However, participants took significantly longer to click on the correct links when using one design than when using the other. For example, when looking for a list of upcoming conferences, participants selected the intended target faster and used fewer fixations when interacting with the redesign than when interacting with the original design. We focused on the distribution of fixations during the task to understand what caused the inefficiency in the original design.
Fixations on the original design were scattered, covering multiple areas in addition to the left navigation, which contained the target: the Meetings & Education link (see Figure 2.7). Most fixations in the proposed redesign, on the other hand, were concentrated in the upper portion of the left-side navigation, where all key links were located. This increase in efficiency likely occurred because the redesign combined the redundant navigational areas, thus reducing the number of elements competing for attention.
FIGURE 2.7 Fixation count heatmaps of participants attempting the “find a list of upcoming conferences” task (left: original design; right: proposed redesign). The boxed links are the correct targets.
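For readers curious how such a visualization comes together, here is a minimal sketch of aggregating fixations into a fixation count heatmap. It assumes fixation data are available as (x, y) screen coordinates pooled across participants; the bin size, smoothing amount, and function name are illustrative choices, not the tooling used in this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_count_heatmap(fixations, screen_size=(1280, 1024),
                           bin_px=20, blur_bins=2.0):
    """Pool (x, y) fixation points from all participants into a smoothed count grid."""
    width, height = screen_size
    xs = np.array([f[0] for f in fixations], dtype=float)
    ys = np.array([f[1] for f in fixations], dtype=float)
    # 2D histogram: how many fixations landed in each bin of the screen
    counts, _, _ = np.histogram2d(
        ys, xs,
        bins=[height // bin_px, width // bin_px],
        range=[[0, height], [0, width]],
    )
    # Light Gaussian smoothing turns isolated fixations into readable "hot spots"
    return gaussian_filter(counts, sigma=blur_bins)

# Example: a handful of pooled fixations on one design
heatmap = fixation_count_heatmap([(640, 200), (655, 210), (120, 600)])
```

Rendering the resulting grid with a color scale, with the warmest colors assigned to the highest counts, produces heatmaps of the kind shown in Figures 2.6 and 2.7.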
Another interesting finding was that participants tended to look at the target link (Meetings & Education) more than once prior to its selection in the original design. In the proposed redesign, however, everyone selected the target link (Meetings) the first time they looked at it. This suggested that “Meetings” was easier to recognize and associate with conferences on its own rather than when used in combination with “Education.”
Case Study: Car Charger Packaging
Why Eye Tracking?
A product in a store often has only a few seconds to tell its story before the customer moves on to the next product. Not only must the product package immediately attract the consumer’s attention, but it also has to quickly convey what is inside. This sounds rather obvious, yet a lot of packaging fails on one or both counts.
At the beginning of a usability study of a new mobile device accessory, participants were exposed to its intended packaging for several seconds. When asked what was in the package, most had no idea. Only upon closer inspection of the box were the participants able to deduce that it housed a universal car charger, a device used for charging multiple gadgets in a car. Because these chargers would end up on a store shelf with several other chargers and phone accessories, it was unrealistic to expect customers to spend extra time with the package to determine what was inside when so many other choices were available.
Based on this finding, the designers wanted to make the product name more visible. But the name was already fairly large, and it was hard to imagine that anyone would miss it. To get to the bottom of the issue, we conducted a small follow-up eye tracking study.
How Eye Tracking Contributed to the Research
The eye movement data indicated that the product name received a great deal of attention. The recorded gaze patterns showed that participants not only noticed the text, but also appeared to have read it. But how could they have read the product name and not known what was in the package? The name of the product was “Smart Charge Mobile: A Universal Way to Charge Your Devices,” which apparently did not convey the fact that it was a car charger.
The package also displayed a car icon with the description “power multiple devices while in your car,” but this information was missed because another package element, a set of three large red icons depicting a cell phone, a digital camera, and a PDA, monopolized the rest of the participants’ attention, at least at first.
In this study, eye tracking revealed that participants were unable to determine what the product was based on the most prominent information on the package. This came as a surprise to the stakeholders, who had assumed that it was obvious the product was a car charger and that this did not need to be called out explicitly. That is why the design placed far more emphasis on the fact that the charger was universal.
Two solutions for the package redesign were recommended: incorporating the word “car” into the name of the product and changing the icon design to shift the visual weight from the three red icons to the car icon and the text next to it.
Quantitative Insight: Measuring Differences
Quantitative insight generated by eye tracking is most useful in summative studies that evaluate products or interfaces relative to one another or to benchmarks. You can compare alternative versions of the same interface, compare the interface of interest with those created by competitors, or even compare elements within a single interface (for example, different ad types) along either performance-related or attraction-related dimensions. How are these comparisons actionable? They inform decisions such as which design version should be selected or whether the product is ready for launch.
Sometimes, you may be asked to conduct quantitative eye tracking studies with only one interface. As there are no absolute standards for eye tracking measures in the UX field, the data obtained from one design carry little meaning. If participants made an average of 10 fixations to find the Buy button on a Web page, there is no way of classifying their performance as efficient or inefficient. Similarly, if 65% of participants looked at a package on a store shelf, this could be good or bad news for the stakeholders. This is no different from time on task and other quantitative usability measures—with nothing to compare the data to, you cannot interpret them and make them actionable. Only if two or more interfaces or packages were tested could you say which one made participants more efficient or drew more attention.
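As a concrete illustration of why comparison is what makes the numbers actionable, here is a minimal sketch that contrasts per-participant fixation counts for two designs. The data and the choice of Welch’s t-test are purely hypothetical and are not drawn from the studies described in this chapter.

```python
from scipy import stats

# Hypothetical fixation counts needed to find the target link, one value per participant
original = [14, 18, 11, 21, 16, 19, 13, 17]
redesign = [7, 9, 6, 10, 8, 7, 9, 6]

# A single mean (about 16 fixations for the original) is uninterpretable on its own;
# it becomes meaningful only relative to the other design.
t_stat, p_value = stats.ttest_ind(original, redesign, equal_var=False)
print(f"original mean = {sum(original) / len(original):.1f} fixations")
print(f"redesign mean = {sum(redesign) / len(redesign):.1f} fixations")
print(f"Welch's t-test p-value = {p_value:.4f}")
```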
The eye tracking metrics most relevant to UX are described in Chapter 7, “Eye Tracking Measures,” while Chapter 13, “Quantitative Data Analysis,” explains how to analyze them. But before we delve into all the details associated with quantitative analysis, let’s look at the two types of differences eye tracking can measure, along with examples of each.
Measuring Performance-Related Differences
Eye tracking measures allow you to make comparisons between stimuli along performance-related dimensions, such as search efficiency, ease of information processing, and cognitive workload. While it is true that you can also use measures such as time on task or task completion rate to identify performance-related differences between interfaces, eye tracking data can help detect differences that are more subtle and more difficult to observe in a lab environment using more conventional methods.
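To make one of these dimensions concrete, below is a minimal sketch of a common search-efficiency measure, time to first fixation on a target area of interest (AOI), computed from timestamped fixations. The record format and AOI coordinates are assumptions for illustration only and do not reflect any particular eye tracker’s output.

```python
def time_to_first_fixation(fixations, aoi):
    """Return the onset time (ms) of the first fixation inside the target AOI, or None.

    fixations: list of dicts with 'x', 'y', and 'start_ms' keys (assumed format)
    aoi: (left, top, right, bottom) bounding box in screen pixels
    """
    left, top, right, bottom = aoi
    for fix in sorted(fixations, key=lambda f: f["start_ms"]):
        if left <= fix["x"] <= right and top <= fix["y"] <= bottom:
            return fix["start_ms"]  # smaller values mean the target was found sooner
    return None  # the target was never fixated during the task

# Hypothetical task data: the target link's bounding box and three fixations
target_aoi = (40, 300, 220, 330)
fixations = [
    {"x": 600, "y": 120, "start_ms": 250},
    {"x": 130, "y": 315, "start_ms": 900},   # lands on the target
    {"x": 140, "y": 318, "start_ms": 1400},
]
print(time_to_first_fixation(fixations, target_aoi))  # prints 900
```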
If they are so subtle and almost invisible, why are these differences