Will AI Make Us Prisoners of Our Bad Habits?
Imagine getting in your car on Monday morning. You place your phone on the charging pad below the dashboard. CarPlay comes up with Maps already open, showing your usual route to work. The traffic looks good. You start the engine, and the latest episode of your favourite podcast starts playing.
At the office, your phone suggests posting a good morning message to your team’s Slack channel. It reminds everyone of the big planning meeting after lunch. Looks good. You hit send.
You start your day by approving several pre-generated email responses. Only two needed edits. Your assistant knows you well. Maybe a little too well.
We’ve had coding copilots, writing assistants and chatbots for a while now. As these tools improve, we’ll see similar tech being integrated into the operating systems and user interfaces we use every day.
Humane has shipped one of the first products to venture into predictive, AI-powered interfaces. Although the AI Pin may not be there yet, many of its ideas are solid. If iterated upon, or perhaps deployed on a device with a screen, they could work really well.
I can’t wait to see what the next-gen UIs look like. I’m also terrified because the whole thing could go very wrong.
The Computer as an Extension of the Brain
Your smartphone goes everywhere with you. It helps you reach people and get things done. It gives you instant access to vast amounts of knowledge and content.
Lots of people think of their smartphone as an extension of their brain. We even have a word for the fear of being without a working phone: nomophobia.
The analogy extends to computers as well. Smartphones are computers, after all.
The interface you use to communicate with your device (the screen, keyboard, voice commands, etc.) is important. The easier it is for you to get the device to do what you want, the better it can extend your brain.
Powerful hardware and efficient algorithms matter. But without a good interface, it’s just a box. Your computer may sort arrays at the speed of light. You can put it on a shelf and watch it do its thing. But you’d probably have more use for an old Pentium machine that runs Solitaire. Interfaces make computers useful.
What would make the most efficient user interface possible? A wire that plugs directly into your brain. The device would read your thoughts and upload the responses directly into your mind. No friction.
While that’d be rad, two-way brain-to-computer interfaces are still pretty far out. However, we have something that can approximate them. Does it matter if your device can’t read your mind when it looks like it can?
Predictive User Interfaces
LLMs and other generative AIs came out recently, but companies have been using machine learning to predict user behaviour for years. When ad targeting works, it can be so good that it’s creepy. We’ve all mentioned something to a friend or spouse in conversation only to see an ad about it on Instagram later. Maybe Meta is recording your every word, or maybe their ad-targeting engine knows you better than you realise.
The feed algorithms are predictive user interfaces of sorts, too. As the TikTok algorithm demonstrates, they can be very powerful, keeping users scrolling for hours on end.
The idea is simple: collect as much information as possible about the user and how they interact with the content. Then, feed it to a model that will predict the best next step to achieve a goal.
The AI Pin from Humane seems to work along similar principles. The Pin stores your every interaction, which becomes the context for the model when it responds to you.
These techniques will come into traditional operating systems as well. Microsoft is getting ready for it. The experience I outlined at the beginning of this post may be closer than you think.
I’m excited about predictive interfaces. I’ve used a coding copilot for a while now. It saves me a lot of repetitive typing. When it works, it feels like magic.
If OS-level predictive interfaces are anything like coding copilots, they will help us get rid of the bulk of repetitive work. Think organising files, managing email or tracking expenses.
That said, there will be challenges. The interests of individual users aren’t always aligned with the interests of the business.
Privacy? What About It?
Let’s tackle the obvious issue first. Your assistant will have to know a lot about you. This might range from reading your email to logging everything you do for future reference.
Over time, there’ll be enough data to effectively simulate your behaviour when using a computer. Perhaps there isn’t a way to do it now. But new techniques to mine old data are being developed all the time. Meta and Google may have enough to make a chatbot that impersonates you already.
What if there’s a data breach? What if the government wants a backdoor? A responsible company may have a change of heart, or go out of business and be sold for parts.
The productivity gains from AI-powered interfaces could be incredible. But so could the losses in case the data falls into the wrong hands.
Apple is all over privacy these days, but I wouldn’t expect that sentiment to last long if it stops being good for business. Remember that Google’s original motto was: ‘Don’t be evil.’
All that data being piped to a handful of corporations creates a massive vendor lock-in. To switch providers, you’d have to abandon the trusty assistant you’ve spent years training and start over.
Creatures of Habit
We’ve seen how algorithmic feeds can shape our behaviours and even our perception of the world. They can be irresistible. So far, you could always uninstall the apps and be done with them.
What if the AI operates at the OS level, though? Even an assistant acting in good faith may not serve our best interests. After all, we often don’t act in our best interest ourselves.
Let’s say you procrastinate on your dissertation by going to Reddit. After a while, you’ll have trained your AI assistant to suggest Reddit whenever you open your word processor. Quitting Reddit is hard as it is (trust me). With an AI assistant cheerfully enabling your self-defeating behaviour, it may be impossible.
You’ll have to be even more intentional about how you use your devices to prevent trapping yourself in your unproductive habits. That assumes that you’re aware of these things in the first place.
Let’s bring out the old tinfoil hat for a moment. The ability to influence people’s behaviour en masse is valuable. You can sell a lot of stuff or perhaps even sway an election. Gaming the algorithms on social media or serving people ‘relevant ads’ is the traditional way.
What if you could shape people’s behaviour directly by tweaking their AI assistants? Maybe their morning news brief shouldn’t include an article you disagree with? Or perhaps they don’t use your app nearly as much as you’d like. The AI assistants should really do something about it…
Final Thoughts
Predictive user interfaces are coming. Depending on how effectively you can manage your screen time, the future may be a lot more or a lot less productive.
Whoever ends up running these AI assistants for people will wield a lot of power. They’ll be processing vast amounts of our sensitive data. They’ll be able to shape our behaviour in subtle and perhaps not-so-subtle ways. On top of that, the cost of switching providers will be high.
Does it have to be that way, though? If we figure out how to fine-tune and run these models locally, the data wouldn’t have to go through the cloud. If the models are open, we should be able to swap them out, letting people choose their bias. And perhaps we can develop an open data format that will let users take their data with them when switching providers.
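Such an open export format could be as simple as structured JSON. Everything below is hypothetical — the field names are invented to illustrate the idea that preferences and interaction history could be serialised and moved between providers:

```python
import json

# A hypothetical, open export format for assistant data. No such standard
# exists yet; these fields are made up for the sake of the sketch.
assistant_data = {
    "format_version": "0.1",
    "preferences": {"podcast_app": "Overcast", "news_brief": "morning"},
    "interaction_log": [
        {"timestamp": "2024-05-06T08:02:00Z", "action": "open_maps"},
        {"timestamp": "2024-05-06T08:03:00Z", "action": "start_podcast"},
    ],
}

# Export on the old provider...
export = json.dumps(assistant_data, indent=2)

# ...and a new provider could parse it and resume where the old one left off.
restored = json.loads(export)
print(restored["format_version"])  # prints "0.1"
```

The hard part wouldn’t be the serialisation, of course, but getting providers to agree on a schema — and to implement the export button at all.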
This time, we might be able to do it right from the beginning.