The keyboard and mouse as input devices

I just ran across a couple of interesting articles about keyboard vs mouse input:
http://hci-matters.com/blog/?p=8 and the followup:
http://hci-matters.com/blog/?p=9

The idea is that we should be using the keyboard more to be productive, but only for tasks that can be characterized as discrete, such as managing documents, launching applications, and working with text. I definitely agree with the premise. We have long had keyboard shortcuts; part of the problem is associating each shortcut with its action: there are so many that memorizing them all is impossible. As an aside, F-keys are the absolute worst way to implement shortcuts, as there is no mnemonic to help with the association and no consistency save the F1 key for help (why not just label it “help”?).

The author has some good thinking in his second article about a way of improving the feedback mechanism for shortcuts. I think it’s a good start, but perhaps a bit heavy in its information presentation. Many programs have a huge number of commands, which would simply be overwhelming, even with the author’s proposed search-based trimming. A great Mac utility called Quicksilver shows a more compact way of achieving this: hitting Ctrl-Space brings up a search field, and typing whittles down the list of available commands while using less screen real estate. Integrating this kind of functionality more tightly into the operating system would be a natural next step. One problem with this particular program, though, is the merging of commands with applications. Typing “qui” could mean either “quit” or “Quicksilver”, one being a command, the other an application. I think it’s probably best to keep those separate.
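To make the whittling concrete, here is a minimal sketch of that kind of incremental filtering. The command list and the subsequence-matching rule are my own illustrative assumptions, not Quicksilver’s actual matching algorithm:

```python
def matches(query, command):
    """True if the query's characters appear, in order, within the command."""
    it = iter(command.lower())
    # 'ch in it' consumes the iterator up to the first match,
    # so characters must be found in sequence.
    return all(ch in it for ch in query.lower())

# Hypothetical command list, just for illustration.
COMMANDS = ["Quit", "Quicksilver Preferences", "Open File", "Save", "Save As"]

def filter_commands(query, commands=COMMANDS):
    """Narrow the visible list after each keystroke."""
    return [c for c in commands if matches(query, c)]
```

Typing “qui” would leave both “Quit” and “Quicksilver Preferences” on screen, which is exactly the command-vs-application ambiguity described above.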

One thought I have relates to an example given in the book “The Design of Everyday Things” where one shortwave radio with lots of buttons on the front was found to be generally easier to use than a radio with similar functionality condensed into fewer buttons. The problem with having fewer buttons is that most of the functionality is hidden behind modes, which make it very difficult to “save” actions to muscle memory through simple memorization of where the button is. Similarly, our keyboards are inherently modal, with “shift”, “ctrl”, “alt/option”, and “Windows/Apple” buttons each representing different modes of operation. Doing something like replacing our F-keys with buttons that have distinctly labeled functions such as “open”, “close”, “save”, and whatnot would allow us to commit many of those common actions to muscle memory rather than having to memorize key combinations which may not fit our own mnemonic associations.

One more thought: approaches such as those used by Quicksilver and the Windows XP Start Menu claim to be adaptive. The problem with this approach is that they are not predictable: trying to invoke the same command or program that you normally use, but on someone else’s machine, will produce different results, or at least require a different set of movements to invoke the same action. Moving or hiding items based on usage is a really bad approach; something more subtle, such as slightly changing the appearance of the most frequently used commands, might work better.
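A rough sketch of what that subtler approach might look like: the menu order stays fixed, usage counts are tracked separately, and only the styling of the most-used entries changes. The `Menu` class and the asterisk marker are hypothetical stand-ins for whatever visual treatment a real UI would apply:

```python
from collections import Counter

class Menu:
    """Fixed-order menu that emphasizes frequent commands without moving them."""

    def __init__(self, items):
        self.items = list(items)   # display order never changes
        self.usage = Counter()     # invocation counts, kept separate

    def invoke(self, item):
        self.usage[item] += 1

    def render(self, top_n=2):
        # Emphasize the top_n most-used items in place; '*' stands in
        # for a subtle visual change (weight, tint, etc.).
        frequent = {item for item, _ in self.usage.most_common(top_n)}
        return [f"*{item}*" if item in frequent else item for item in self.items]
```

Because items never move, the same muscle-memory path works on anyone’s machine; the emphasis is only a reading aid.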

Anyways, there is lots to consider in how to best use the keyboard. It would be fun to come up with a shortcut feedback system that works well.

2 thoughts on “The keyboard and mouse as input devices”

  1. The radio example is evidence against your hypothesis that having all the controls would be overwhelming, and there are other examples too. See The Humane Interface. The advantage over Quicksilver’s UI is, like any menu, exploration.

    And you missed the second part of Clay’s suggested UI — typing commands like “save as” after tapping Alt, rather than Control+Shift+S. The advantage is that it is mnemonic, precise, and more consistent across applications.

  2. Thanks for your reply. I don’t think the radio example is evidence against what I described as having potentially too many items. First, there is the difference in quantity of menu items, second is the translation from a tactile interface to a screen interface.

    1) With the radio, its physical size imposes a fixed limit on how many items are accessible. There is a trade-off between having more items with fewer modes and having an overwhelming number of buttons. There comes a point where the number of buttons makes it hard to find things, so while more buttons may make better use of muscle memory and reduce modal errors, there is a reasonable limit beyond which the sheer number of buttons becomes silly. At that point, you need to decide which features are really important. Modes are a symptom of trying to fit too many features into a device.

    2) A tactile interface has more opportunity for differentiating controls than does a screen menu. In addition to grouping, colour, and labels, physical controls can also take advantage of button shape, texture, and haptic feedback. Control positioning on a physical device also provides clues as to its function. An on-screen menu can potentially have a couple of hundred items representing a diverse and unrelated set of functions. Searching and mnemonics certainly help, but cannot match a physical device, which can be designed to be operated by touch alone.

    I did see the search feature, and that’s what reminded me of Quicksilver. It’s a great feature and I was not criticizing it. The criticism comes more from having too many items crammed on screen. What happens when an application has hundreds of menu items? That’s where Quicksilver is cleaner: it doesn’t attempt to show everything, just what matches what you typed. Still, I think that’s a hack around a larger problem: applications that have so many features, many of which are unnecessary or could be more cleanly grouped with other things. A fundamental flaw in our interfaces is the need to sell the next version of any given software package, with each release adding more and more features that constantly expand the UI.
