2023 is an odd year for information technology. Artificially intelligent software systems, especially large language models (LLMs), have reached a level of maturity where they stand poised to disrupt nearly every industry. The great irony of the present moment in the tech sector is that while artificial intelligence has been advancing by quantum leaps, the ecosystem of software and web products that has become a staple of our lives over the past two decades has been undergoing a marked decline in usability.
Signs of decay, bitrot, and outright malevolence are everywhere. Google no longer reliably returns helpful search results. Facebook has ceased to be a vibrant social hub where I can keep up to date and communicate with friends; instead it has turned into an adbook that pushes me to join groups I have no connection to and shows me insipid videos I have no interest in. Obtrusive cookie 'consent' requests appear on virtually every website. Ads are omnipresent. Many commercial websites engage in 'deceptive design' – intentionally manipulative tactics (like making the 'exit' button hard to see) designed to nudge users toward paying more money, spending more time on the site, clicking on an ad, or consenting to tracking cookies. And of course most of our software tracks us anyway, whether we consent or not.
The problem is not limited to websites. Windows, in addition to pushing the use of Microsoft's browser (Edge) and search engine (Bing) at every possible opportunity, places 'fun facts' (fun for whom, I wonder?) on users' lock screens. Windows' settings and filesystem have become more unpleasant to navigate with each new version of the operating system. Settings that I could once change through the normal settings manager I can now modify only by editing the registry. And what is User/APPDATA, and why does stuff keep ending up there?
Small companies can be forgiven for imperfections in their software products, but the tech titans have no excuse. Microsoft, Google, Facebook, et al. are household names valued in the hundreds of billions or trillions of dollars. And we know their products can be better, because they used to be better and have grown progressively less usable over time. Sure, they've added a few improvements over the years, but the overall user experience has gotten worse.
A variety of explanations have been offered for this decline in user experience across software platforms. In the context of internet search, some of the blame undoubtedly lies with the SEO (search engine optimization) industry, which tries to game Google's search algorithm and artificially boost the rankings of certain websites. But some of the blame also lies with Google, which has been pushing ads in its search results ever more aggressively.
In 2018 Google dropped its original 'Don't be evil' motto. Perhaps Google's descent into disrepair is only coincidentally correlated with its slogan-shedding, but one can't help but wonder whether Google, and with it the rest of the tech world, realized that they had to be at least a little bit evil if they were going to maximize their profits.
Google's behavior is emblematic of a general trend in software products, described by this excellent Wired article. In short, when a tech company first releases its product, its main focus is growing the consumer base. At this stage, companies have an incentive to focus on product quality and user experience. But as the pool of potential new users is exhausted, the company shifts into a more profit-oriented mindset, trying to extract as much money as possible from the product. In the case of Google and other advertising-based companies like Facebook, this means promoting ads at the expense of content.
While there is a certain 'rationality' to these anti-user practices, in the sense that it is reasonable for companies to do what maximizes their bottom line, they are, from the user's perspective, quite malevolent. It would not be an exaggeration to say that the entire English-speaking world, if not humanity writ large, has become dependent on the software products of the big tech companies. Which means that basically the entire world is experiencing at least a marginal decline in quality of life and productivity due to the decline of these software products.
Enter the new generation of AI. Large language models that can ace physics exams. Image generation software that can produce gorgeous, professional-quality works of art in a matter of seconds. These innovations will undoubtedly - indeed, already have - drastically change the world in new and unexpected ways. But the companies behind these incredible technologies are the same old names - Meta/Facebook, Google, Microsoft, and a few new entrants like OpenAI.
AI offers great promise for driving technological innovation and allowing ordinary users to complete tasks in seconds that would previously have taken days or more. But with this excitement comes fear. Many, including AI experts, are concerned about the risks associated with AI. Some are particularly worried about the 'alignment problem': what prevents an AI from going rogue, or from misinterpreting human instructions, with disastrous consequences? These questions are presently being vigorously debated in the courts of public opinion and within government bodies.
My personal view, without getting into too much detail, is that most of the doomsday predictions are overblown. The more likely scenario, in my view, is that the same thing will happen to AI as happened to the rest of tech. What starts out as an amazing new product will become increasingly annoying to use as AI companies try to turn a profit while third parties (e.g. SEOs) try to game the AIs. We'll see pay-to-play LLM search engines, where the algorithm promotes content from paying advertisers. It will be even more difficult to distinguish promoted content from non-promoted content than in a regular search engine because of the conversational nature of LLM responses. We've already begun to see LLMs being used for SEO applications, pushing reams of low-quality 'content' onto the internet, and this will likely create a self-reinforcing feedback loop that further degrades the quality of LLM output.
Machine learning algorithms also have the inherent property that they are designed around an explicit objective function: they are supposed to maximize some metric, such as 'user engagement'. Contrast this with pre-ML approaches, where software engineers (or designers and development teams) made conscious artistic decisions about how the software should behave. In my experience, 'designed by humans, for humans' tends to produce more human-friendly products than 'optimized by AI to maximize user engagement'.
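The contrast can be made concrete with a toy sketch. Everything here is hypothetical – the posts, the `predicted_watch_minutes` metric, and the ranking rules are invented for illustration – but it shows how a feed sorted purely by a predicted-engagement score surfaces different content than one governed by a deliberate human-chosen rule:

```python
# Toy illustration (hypothetical data and metric): engagement-optimized
# ranking vs. a deliberate human-designed rule.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_minutes: float  # the metric an ML ranker might maximize
    from_friend: bool               # a signal a human designer might prioritize

posts = [
    Post("Rage-bait video", predicted_watch_minutes=9.5, from_friend=False),
    Post("Friend's vacation photos", predicted_watch_minutes=1.2, from_friend=True),
    Post("Sponsored group suggestion", predicted_watch_minutes=6.0, from_friend=False),
]

# 'Optimized to maximize engagement': sort purely by the model's metric.
by_engagement = sorted(posts, key=lambda p: -p.predicted_watch_minutes)

# 'Designed by humans, for humans': an explicit rule -- friends' posts first,
# everything else in its original order (Python's sort is stable).
by_design = sorted(posts, key=lambda p: not p.from_friend)

print(by_engagement[0].title)  # the rage-bait video tops the feed
print(by_design[0].title)      # the friend's photos top the feed
```

The point of the sketch is that neither ranking is 'wrong' by its own lights; the difference lies entirely in whose objective the sort key encodes.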
So what do we do about this? How do we prevent AI from further polluting the ecosystem of consumer software products? There are those who are keen on regulatory solutions to the problems created by tech, and I do have some sympathy for this attitude. I would like nothing better than to see the iron fist of the state brought to bear against companies that engage in dark and malevolent design practices.
Unfortunately, with legislative solutions, the devil is in the details. For the most part, we have not seen that Western governments are capable of effectively regulating the tech sector. The EU's attempts to curb tracking cookies have led to the ubiquitous, rage-inducing cookie-consent notices that appear on nearly every website. The US's attempts to regulate social media have resulted mainly in political censorship and partisan bickering. Perhaps issues like deceptive design are sufficiently nonpartisan to avoid political partiality, but my faith in the United States government to pass well-thought-out regulation is quite low, and the likelihood that any attempted legislation will result in regulatory capture or negative unintended consequences is quite high.
An alternative to legislative solutions is a market-based solution, in which new tech companies with more prosocial design practices arise to compete with the tech giants. And this process can be expedited by the recent advances in AI. Perhaps the arena in which LLMs have demonstrated the clearest and most tangible benefits is software development and writing code. Many programmers (myself included) report massive productivity increases when using LLMs to write, tweak, and debug their code.
The new AI-assisted world of software development means that the time, money, and effort required to go from an idea to a fully formed software product has been substantially reduced. In principle, this makes it much easier for startups to go 'from zero to one' in producing user-friendly software that competes with the extant market hegemons.
Competition alone, however, will not suffice to cure our ailing software ecosystem. If new market entrants are serious about human-friendly software, they need to understand where the tech giants went wrong. Part of this may simply be a matter of committing to an ethos of non-malevolent design as an industry standard. Just as there are standards of ethics in medicine, law, and other professional disciplines, one can envision similar standards in software development. However, the 'democratized' nature of software development, which is less credential-obsessed than medicine or law, probably precludes the emergence of truly universal industry standards.
Instead, ethically minded software startups should think carefully about how their business models might produce anti-social incentives. Many software products rely on the ad sales model, which has the benefit of letting users use the software for free. This benefit, however, comes with the long-term downsides we've already seen – aggressive ad-pushing, user tracking, and so on. The more traditional business model of 'buyer gives currency to seller in exchange for product' is much better at aligning software companies' incentives with user preferences, without the ad men as middlemen.
Unfortunately, many users have become accustomed to the free software model, which makes it difficult for companies to pivot to direct sales. The freemium model, which offers a basic version to non-paying customers and premium features to paying customers, is probably the best way to go here (I have to give kudos to OpenAI for deciding on a freemium model for GPT early on). But even within the freemium model, there can be bad incentives. Dating sites, for example, often use a freemium model, but a subscription-based one. If we assume most users are looking for a long-term partner, the user would prefer to be relieved of the need for the dating site as quickly as possible, while the site is incentivized to keep users subscribed for as long as possible.
So there is no one-size-fits-all solution to fixing our ailing software infrastructure. Our best hope is that ethically minded firms will start to spring up and set hard lines against malevolent practices and business models. With the aid of AI coding tools, the good guys can see to it that products which have fallen afoul of the 'don't be evil' maxim will burn in the fires of creative destruction.
Godspeed.