Shortly before we released the Windows 8 Consumer Preview in February, we blogged about our work to make Windows 8 more accessible to people with disabilities. This included our work on Narrator to enable customers who are blind to use Windows 8 on touch screens. This work has continued to evolve in the Release Preview, and will also improve as we move toward the final release of Windows 8. This post details some of the work we have done to improve Narrator when using a touch-enabled PC. This post was authored by Doug Kirschner on our Accessibility team. –Steven
First off, we would like to thank all the people who have given us feedback; there has been a lot of positive reaction—people are excited that Windows 8 touch screens will include basic screen reading support by default. We've gotten a tremendous amount of constructive feedback on things we could do to make Narrator work better on touch screens and easier to use on the web. We’ve listened. Your suggestions, combined with findings from usability testing with visually impaired users here at Microsoft, have resulted in some important changes that we think you'll really like.
Listening to the accessibility community
When the Developer Preview build was released, we took the opportunity to reach out and gather feedback on Narrator from as many people who require visual assistance tools as we could. To start with, we worked with the community of folks inside Microsoft (we are fortunate to have a significant and organized community that is engaged in the accessibility of all Microsoft products) to install Windows 8 and send us their impressions, and we held internal accessibility events where people could come and try it out in person. We also held usability studies where we invited people to Microsoft’s campus to experience Narrator on a touch screen and walk through common tasks to see where we could improve. Millions of you downloaded the Developer and Consumer Previews, and many of you tried out Narrator and sent us some great feedback. We followed up with a number of people who contacted us via @BuildWindows8. Lastly, we attended the CSUN conference for Technology and Persons with Disabilities, where we were lucky to have the chance to sit down with people one-on-one as they tried out the Windows 8 Consumer Preview for the first time on touch screens.
There were a couple of key scenarios we wanted to validate. In particular, we wanted to make sure touch users could get up and running using Narrator on a new PC, right out of the box. That includes finding and installing accessible apps from the Store, and accomplishing basic everyday tasks like sending email, reading webpages, and listening to music. The excitement around the work we'd done so far was overwhelming and gratifying, but it was clear that we still had more work to do to make touch Narrator even better.
Thanks to all of your constructive feedback, we identified key areas that we've improved for the Release Preview:
- Responsiveness: We heard that Narrator on touch screens didn’t feel responsive enough.
- Gestures: Some people had difficulty with Narrator gestures, particularly some of the more complicated multi-finger gestures.
- App exploration: Finding particular elements on the screen (e.g. finding tiles on the Start screen) could be hard for people not already familiar with the particular app or UI.
- Web navigation: The commands available in the Consumer Preview were not extensive enough for some webpages.
We worked heavily on each of these areas for the Release Preview, and we're still working in some areas for the final release of Windows 8. We wanted to share with you some of the improvements you can already experience in the Release Preview today.
Making Narrator feel more responsive to touch
Some people we heard from felt that Narrator touch was not very responsive. We heard various versions of this feedback (that Narrator was slow, that it sometimes didn’t respond, or that people just felt disconnected or disoriented), but the root cause was the same: when you touch the screen, you expect a timely response. We found two common scenarios where this problem occurred:
- Single-finger exploration: When people had to find an item on the screen by dragging a finger around, we observed that they would often skip right over the item they were searching for, as they moved their fingers too quickly, generally before Narrator had a chance to start reading the item.
- Gesture response: Some people were confused as to whether their gesture had succeeded, and would attempt to repeat the gesture several times, even though the first attempt was already successful. The problem was that there was a delay between the time Narrator recognized the gesture, and when it provided the speech response. Sometimes it was also unclear from the response whether Narrator had done what the user wanted, or was just reading something similar but unrelated.
In each case, the blue visual highlight rectangle that moves to whatever Narrator is currently reading was quick to jump to the appropriate item, indicating that Narrator had registered the user’s movement and was responding appropriately. The problem was in the speech itself. The text-to-speech (TTS) synthesis is fast, but even at high speeds, it takes a while for the system to read the response back, and listeners need additional time to process the language and understand what they are hearing. To complicate matters, the speech response time varied widely depending on context, which made it hard to discern whether Narrator had recognized the intended gesture. These minor delays added up: people would skip over items altogether, or repeat successful gestures, thinking that their first attempt had failed.
For users with full vision, even if an action takes a few more milliseconds to complete, visual feedback such as highlighting a button or animating a flyout indicates immediately that the system is responding. These cues are not only aesthetically pleasing, but also functionally important for understanding how your touches influence the system in real time.
As we dug into some of the feedback around responsiveness, we realized that Narrator could make more effective use of audio cues. In the Release Preview, we have started to add audible cues; each gesture now has an associated sound that plays when the gesture is performed. These cues were designed to be quick, short and easily distinguishable, allowing you to instantly recognize whether your gesture is successful and if your action has been taken. Here are some examples:
- Moving to the next item plays a “tick.”
- Activating plays a “click.”
- Scrolling plays a sliding sound.
- Selecting plays a “thud.”
- Narrator errors play a “bloop” sound that is easily distinguishable from the system error "ding."
- Exploring the screen with a single finger plays a tick for each new item you touch, so you know if you passed over an item too quickly to hear what it was.
We had a lot of fun designing and implementing these sounds!
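The cue-before-speech idea described above can be sketched in a few lines. This is purely illustrative Python, not Narrator's actual code; the gesture names, cue names, and callback signatures are all assumptions. The point it demonstrates is the ordering: the short, distinctive sound plays the instant a gesture is recognized, before the slower text-to-speech response begins.

```python
# Illustrative sketch only (not Narrator's implementation): map each
# recognized gesture to a short audio cue, played immediately on
# recognition so the user gets instant confirmation, with the slower
# text-to-speech response following afterward.

GESTURE_CUES = {
    "move_next_item": "tick",
    "activate": "click",
    "scroll": "slide",
    "select": "thud",
    "error": "bloop",  # distinct from the system error "ding"
}

def respond_to_gesture(gesture, speak, play_sound):
    """Play the instant audio cue first, then start speech feedback."""
    cue = GESTURE_CUES.get(gesture)
    if cue is not None:
        play_sound(cue)            # instant: confirms the gesture landed
    speak(f"Performed {gesture}")  # slower: TTS response follows
```

Because the cue is decoupled from speech, the user knows the gesture succeeded even before a word is spoken, which addresses the repeated-gesture problem described earlier.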
Making interactions easier
The next step was to tune Narrator's touch interaction model. Some people told us they found it difficult to use multi-finger gestures. In particular, we saw people struggle with the two-finger swipe for next and previous item, and even more so with the four-finger swipe to scroll. We also observed people accidentally triggering the commands lists (available item commands, search window, etc.), which consequently caused them to lose their context in an app.
In response, we've made it easier to interact with touch Narrator. The system is now more forgiving, with a simpler gesture model that is easier to remember. Single-finger taps and flicks now carry out a majority of the common tasks in Narrator. The revised interaction model is easier to perform, and it groups gestures more logically, so that command lists and windows don’t pop up when you’re trying to perform an unrelated gesture.
The table below outlines the new interaction model:
| Gesture | Action |
| --- | --- |
| Tap or drag | Read item under finger |
| Hold with one finger and tap anywhere with a second | Do primary action |
| Hold with one finger and double-tap with a second | Do secondary action |
| Flick left or right | Move to previous/next item |
| Flick up or down | Change move increment |
| Hold with one finger and 2-finger-tap with additional fingers | Start dragging or extra key options |
| 3-finger tap | Show/hide Narrator settings window |
| 3-finger swipe up | Read current window |
| 3-finger swipe down | Read from current location in text |
| 3-finger swipe left or right | Tab forward and backward |
| 4-finger tap | Show commands for current item |
| 4-finger double tap | Toggle search mode |
| 4-finger triple tap | Show Narrator commands list |
| 4-finger swipe up or down | Enable/disable semantic zoom (semantic zoom provides a high-level view of large blocks of content) |

Improving Narrator’s exploration model
As we collected feedback from people who were using the Developer Preview, we reviewed the exploration model in Narrator. One of the things we heard clearly was that people wanted an easy way to find all of the controls on the screen like buttons, labels, text fields, list items, etc. without having to manually touch around the whole screen. One user who was blind gave the analogy that when he enters a hotel room, his first task is always to walk around the room and locate the door, dresser, beds, and bathroom in order to understand the layout of the room before doing anything else. Similarly, when exploring a new app, users want to know what's on the screen before deciding what to do next.
One of the ways we made all elements on the screen accessible in the Developer Preview was to use horizontal swipe gestures to move between items in a container, and vertical swipe gestures to move into and out of containers. This was a powerful model (you could find every accessible item on the screen), and it was a true representation of how graphical UI is constructed. However, it wasn't intuitive: having to navigate into and out of containers made it difficult to discover all of the interesting elements on the screen.
Changing our default cursor mode
In response to the feedback, we made some changes to the way navigation works by default in Release Preview. The navigation gestures, which are now all single-finger flicks left and right, move you through all of the items on the screen. You no longer need to know how the UI is constructed in order to navigate it; all you need to do is flick to get to the next and previous items, and Narrator presents you with a linear ordering of the important items on the screen.
This allows you to learn about all of the interesting items in an app in an easy step-by-step manner, and interact with any item as you go. If you just want to hear all of the items in an app without flicking each time, you can swipe up with three fingers and Narrator will read through all of them in order, without stopping.
(Note: This is the new default mode of navigation, which allows you to explore apps by flicking left and right to find all of the interesting items. If you prefer the old way of moving through the multiple layers of UI manually, you can change the Narrator cursor movement mode to “Advanced” in the Narrator settings).
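The difference between the two navigation styles can be pictured with a small sketch. This is a hypothetical Python model (the element names and screen layout are made up, and Narrator's real implementation is not this code): the new default mode simply presents a depth-first, pre-order flattening of the UI's container tree, so flicking right visits every item in a linear order without the user ever entering or leaving a container.

```python
# Hypothetical model of the two navigation styles. A UI is a tree of
# containers and items; "Advanced" mode navigates one container level
# at a time, while the new default flattens the whole tree into one
# linear reading order (a depth-first, pre-order traversal).

class Element:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def linear_order(root):
    """Default mode: flicking right visits every item, depth-first."""
    order = []
    def visit(element):
        order.append(element.name)      # read this item...
        for child in element.children:  # ...then everything inside it
            visit(child)
    visit(root)
    return order

# Example screen, loosely modeled on a Start-screen layout
# (names are invented for illustration).
start = Element("Start", [
    Element("Mail group", [Element("Mail tile"), Element("Calendar tile")]),
    Element("Media group", [Element("Music tile")]),
])
```

Calling `linear_order(start)` yields every tile and group in a single sequence, which is why, in the default mode, you no longer need to know how the UI is constructed in order to navigate it.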
Improving web navigation
In Windows 8, Narrator has made reading the web much easier. It has various features that are optimized for web reading, such as the “start reading” command, which reads out continuous sections of webpages without stopping, and search mode, which provides a list of various types of controls on a page. After we released the Developer and Consumer Preview builds, we heard from users that although these features were helpful, they did not enable them to accomplish some common tasks on the web, such as quickly scanning news headlines, doing a quick search, or checking stock quotes.
So we revisited this feature, and as we dug further and gained a better understanding of these scenarios, we found ways to improve them in the Release Preview. For news reading in particular, we heard people saying they wanted to jump to various points in the page (e.g. headings, links), and then subsequently to be able to read line-by-line and even letter-by-letter. Many users wanted Narrator to provide these commands for them to navigate the web with more precision.
In response, we added the concept of views to Narrator’s navigation commands. The new views are available in default navigation mode whenever you are on a webpage or other accessible text area, such as in the Mail app. The default Item view moves through the items on the page, and works the same way as item navigation throughout the system. But for accessible text areas such as webpages or Mail, Narrator now supports seven additional views, including navigation by headings, links, lines, and characters.
You can easily change the view by flicking up or down, and then flick left or right to move through the items in that view. These commands are also available with a keyboard by using Caps Lock + Arrow keys.
With the new views, web reading is more powerful in the Release Preview. The views work with other Narrator reading commands as well. For example, if you find an interesting news headline and want to hear more, you can swipe down with three fingers and Narrator will start reading all of the page content until you tell it to stop.
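One way to picture the view model is as two axes: flicking up or down cycles the reading granularity (the current view), and flicking left or right then moves by one unit of that granularity. The sketch below is a hypothetical Python model; the exact set of view names is an assumption beyond the headings, links, lines, and characters mentioned above, and this is not Narrator's actual code.

```python
# Hypothetical model of Narrator's view-based navigation: vertical
# flicks change the granularity (view); horizontal flicks move by one
# unit of the current granularity. View names are assumed for
# illustration, not Narrator's exact list.

VIEWS = ["items", "headings", "links", "tables",
         "paragraphs", "lines", "words", "characters"]

class NarratorCursor:
    def __init__(self):
        self.view = 0      # index into VIEWS; starts in the Item view
        self.position = 0  # position within the current view

    def flick_down(self):
        """Cycle to the next view (wraps around)."""
        self.view = (self.view + 1) % len(VIEWS)
        return VIEWS[self.view]

    def flick_up(self):
        """Cycle to the previous view (wraps around)."""
        self.view = (self.view - 1) % len(VIEWS)
        return VIEWS[self.view]

    def flick_right(self):
        """Move to the next unit of the current view."""
        self.position += 1
        return self.position
```

So, for example, flicking down once switches from items to headings, and each flick right then jumps heading by heading, which is exactly the "scan the news headlines quickly" scenario described above.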
Finishing the job
These examples represent some of the major work we’ve done in response to feedback from people who tried Narrator touch in the Developer Preview and Consumer Preview. We’ve made many more improvements based on your feedback—including reading out touch hints that teach you how to activate items, improving the Narrator settings UI to be easier to use with touch, and adding a new setting that makes it easier to type on the touch keyboard. While we believe Narrator is feature complete at this point, we’re still fixing bugs and fine-tuning it before Windows 8 is complete.
It’s been fantastic and humbling to hear from so many of you who have had the chance to try out Narrator. We’ve thoroughly enjoyed working one-on-one with users through our usability studies, at the CSUN conference, and within the Microsoft community. Thanks to all of the great constructive feedback we’ve received, we’ve made these important changes to Narrator for the Release Preview to make it a much better feature.
While we work towards shipping this product soon, we’d love for you to download and install the Release Preview for yourself, and try out Narrator.
Note: The touch features described in this blog require touch screens supporting at least four contact points. Windows 8 certified touch hardware will universally meet this requirement, but some current Windows 7 hardware may not (see this post for more info). If you do not have a touch screen supporting four contact points, you can still run Narrator using the keyboard.