New interaction patterns
In my previous post I mentioned I was looking for usability best practices for Single Page Interfaces. This was in response to an article by Jesse James Garrett that explained the technical concepts quite well, but not the usability aspects. His colleague Peter Merholz wrote a nice article about Google Maps in which he identified the core issue, I think: "how do you provide people with cues, so they know what to do, but enable new, more powerful means?"
This is probably one of the main questions that needs to be answered before you can create a successful Single Page Interface application. There are lots of new possibilities, but how does the user know about them? I'm just starting a list of unexpected behaviors, but please add to it:
- Right-click and double-click
- Keyboard shortcuts
- Modal windows
- Using decks instead of pages, enabling a more parallel workflow
- Changes in one part of the screen trigger changes in another part of the screen
So which techniques are useful for explaining these new features to users? Again, just some ideas:
- A correct mouse pointer can hint that you can drag (which Google Maps doesn't do, and why not?)
- A non-modal window can provide help (e.g. press F1 during a PowerPoint presentation to see all useful keyboard shortcuts: unfortunately, this one is a modal window)
- Animations and effects: a good example is the cloud animation when you drag items out of the shopping cart in the Panic shop, or the Yellow Fade technique to highlight which area of the screen has changed.
- Although a parallel workflow might offer many benefits, a step-wise input wizard might help first-time users
- And, last but not least, it might be good to stick to well-known keyboard shortcuts, such as + and - to zoom in and out (as Google Maps does)
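To make that last idea concrete, here is a minimal sketch of binding well-known zoom shortcuts. The `map` object with its `zoomIn`/`zoomOut` methods is a hypothetical stand-in, not any real API; the point is that keeping the key-to-action mapping in a plain object makes it easy to reuse in a help overlay, so the shortcuts can also be discovered, not just typed:

```javascript
// Hypothetical map object; any widget with these two methods would do.
// Keys map to actions in one place, so a help screen can list them too.
const shortcuts = {
  "+": (map) => map.zoomIn(),
  "=": (map) => map.zoomIn(),  // "+" without Shift on many keyboard layouts
  "-": (map) => map.zoomOut(),
};

function handleKey(key, map) {
  const action = shortcuts[key];
  if (action) action(map);     // silently ignore keys we don't handle
}

// In the browser you would wire this to the document:
// document.addEventListener("keydown", (e) => handleKey(e.key, map));
```

Accepting `=` alongside `+` is a small courtesy: on most keyboards the user would otherwise have to hold Shift to zoom in but not to zoom out.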
As Jesse James Garrett mentioned: "[we can] begin to imagine a wider, richer range of possibilities".