Published January 3, 2022
Hello y'all. I've been thinking about automation; specifically, about intuitive interaction design's role in coddling people.
My lab is called the Future Interfaces Group. We abuse hardware and software to make predictive interfaces and new interactions between users and technology. Every project needs a user study to verify usability or reliability: does the user like the interaction? How does the device compare to others like it? We think about these things to make devices that hopefully feel a bit magical, like they can tell what you're going to do before you do it.
I've been thinking about the usefulness of such research, especially since it doesn't feel like anyone's life is particularly impacted by the work we put out. While it's nice that the products we use are smooth and not frustrating, I sometimes feel that nobody would die or hate their lives if we took it all away. Do these things really build up into useful parts of someone's life? Or is it always just cruft: extra features and bloatware that nobody asked for?
I'm getting off topic. Recently, I read a Tumblr blog called Crap Futures (I'm gonna abbreviate it as CF) which shares many of my thoughts about the apparent shittiness of the future we're creating. A world of IoT devices that go down every week due to ransomware attacks or server outages is not my idea of a good tech future.
They wrote one particular post called "Scratch an itch: A taxonomy of automation" that really got me thinking about the degrees of automation in our lives. I'm going to summarize it here, but you should definitely give it a read too to see where I'm getting these ideas from.
I see automation's two actors as the human and the device/tech/automator, and I'm going to explain CF's taxonomy in terms of two actions: who does the sensing, and who does the acting. I liked CF's concrete example of scratching an itch, so I'll stick with that as my running example.
Level 1: The human feels the itch and reaches over to scratch it.
Level 2: The human senses the itch and uses a device to scratch it. This can be a stick or an ItchScratcher 3000.
Level 3: The device senses our itch (from imagery or something, use your imagination), then scratches it for us.
Level 4: The device anticipates the itch, perhaps because it occurs on a regular basis or a red spot shows up before it actually itches. The desire to scratch is circumvented entirely, through early anti-itch cream or pre-scratching.
Level 5: The device anticipates and even pre-supposes an itch. It can do this to sell repairs of itself, or to sell its own usefulness. Complete loss of desire control.
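If it helps, here's the whole taxonomy as data, organized by who senses and who acts. This is a toy encoding of my own, not anything from CF's post:

```python
from dataclasses import dataclass
from enum import Enum

class Actor(Enum):
    HUMAN = "human"
    DEVICE = "device"

@dataclass
class Level:
    n: int
    senses: Actor                # who notices the itch
    acts: Actor                  # who does the scratching
    anticipates: bool = False    # acts before the itch is even felt
    invents_need: bool = False   # manufactures the itch to justify itself

TAXONOMY = [
    Level(1, Actor.HUMAN,  Actor.HUMAN),
    Level(2, Actor.HUMAN,  Actor.DEVICE),
    Level(3, Actor.DEVICE, Actor.DEVICE),
    Level(4, Actor.DEVICE, Actor.DEVICE, anticipates=True),
    Level(5, Actor.DEVICE, Actor.DEVICE, anticipates=True, invents_need=True),
]
```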
I've mentioned this taxonomy to a few friends, and each one came up with examples of devices we already use at each level. As we'll see, nearly everything sits at levels 1 and 2.
At level 1, we do all the work that we've always done by hand. Scratching, massaging, feeding ourselves.
Level 2 contains all the "dumb" tools: the shovel, the spatula, the TV remote. We sense a boring channel coming on and click the remote to change it. Most of our technology sits at level 2, including our phones and the Roomba.
Level 3 is where things drop off: it contains almost nothing, because level 3 means giving up some autonomy, and most devices stop short of that. The closest examples still require user confirmation, but I'd say that's pretty close to just letting the device act; we just don't trust them enough.
Our email client detects dates and offers to put them on a calendar, but it knows the limits of its own accuracy and doesn't create events on its own (more on this pattern in the sketch after these examples).
Auto-sharing wifi passwords and detecting lost devices are both features on this level.
GitHub Copilot also falls on this level, but it isn't quite good enough to be trusted.
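To make that confirmation gate concrete, here's a minimal sketch of the email-dates example. Everything here (the regex, `suggest_events`, `create_calendar_event`) is hypothetical, not any real client's API:

```python
import re
from datetime import datetime

# Toy level-3-adjacent flow: the software senses (spots a date in an email)
# but asks before acting (creating the calendar event).
DATE_PATTERN = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")

def suggest_events(email_body: str) -> list[datetime]:
    """Sensing: extract candidate event dates from an email."""
    return [datetime.strptime(m, "%Y-%m-%d") for m in DATE_PATTERN.findall(email_body)]

def create_calendar_event(date: datetime) -> None:
    """Acting: stand-in for a real calendar API."""
    print(f"Created event on {date:%Y-%m-%d}")

def maybe_create_events(email_body: str) -> None:
    for date in suggest_events(email_body):
        # This prompt is exactly what keeps the feature short of full level 3:
        # remove it, and the device both senses and acts on its own.
        if input(f"Add an event on {date:%B %d, %Y}? [y/N] ").strip().lower() == "y":
            create_calendar_event(date)
```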
And nothing falls beyond that. It's kind of sad that so few things live at level 3, even given all our crazy machine learning advances in the past twenty years. Our devices stay tools: useful when we use them, and not otherwise.
One thing that came up in my discussions is whether anything sits past level 4. I think our friends and family fall past level 3, since they're always looking out for us and can anticipate what we want pretty well (just think of your recent Christmas gifts!). And people besides them can sit anywhere on the spectrum: we can do a task ourselves (us sensing, us doing), tell someone what to do (us sensing, them doing), or tell someone what high-level thing to work on (them sensing, them doing). Ideally they sit as high as possible on the levels.
I think we should be pushing robots as far up the automation hierarchy as possible; each level saves exponentially more time. The question is whether the task being automated is worth doing yourself, and if it is, why automate it? I'll close with a quote from The Little Prince, the prince's reply to a merchant selling pills that quench thirst and save fifty-three minutes a week: "If I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water."
A question for later: why do we value a human at level 3 much more than a robot at level 3? (I think it's because we know humans have opportunity cost, but we don't really think about computers having the same.) What if we reported a robot's operating cost whenever it did a task for us?
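Purely as a thought experiment, here's a throwaway sketch of what that reporting could look like. The dollar rate is made up, and `change_channel` is just a stand-in task:

```python
import functools
import time

DOLLARS_PER_SECOND = 0.0001  # made-up operating rate, just for illustration

def report_cost(task):
    """Wrap an automated task so it announces a rough operating cost."""
    @functools.wraps(task)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = task(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{task.__name__}: {elapsed:.2f}s, ~${elapsed * DOLLARS_PER_SECOND:.6f}")
        return result
    return wrapper

@report_cost
def change_channel(channel: int) -> None:
    time.sleep(0.1)  # pretend the robot is doing the work
```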
Until then, cya around!