Commentary: Algorithms may seem innocuous, but the U.K.’s competition authority sees ways they can be used to hurt consumers.
Sometimes we like to pretend that artificial intelligence and algorithms are neutral: that some machine is robotically, objectively observing the world and serving up recommendations. This is, of course, false. Algorithms embody the biases of their creators, and those biases can be hard to spot, particularly in the consumer context.
The U.K.’s Competition and Markets Authority recently published a report on how algorithms can hurt consumers and weaken competition. A few salient points are worth calling out.
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
Algorithms and consumer harm
As consumers, when we search for things online, we may assume that the vendor has our best interests at heart; that they want to help us get what we want. Too often, however, the vendor is using algorithms to get us to buy what they want us to buy.
The U.K.’s Competition and Markets Authority, for example, found that hotel booking sites were returning results that weren’t governed by the best rates or options for the consumer, but rather by commission amounts hotels were paying the sites in question.
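The mechanism is easy to sketch. The toy Python below (a hypothetical illustration, not any booking site's actual code; the hotel names and commission figures are invented) shows how the same result set sorts very differently depending on whether the ranking key is the consumer's rate or the site's commission:

```python
# Hypothetical illustration of commission-driven ranking. The data and
# field names are invented for this sketch.
hotels = [
    {"name": "A", "nightly_rate": 80, "commission_pct": 10},
    {"name": "B", "nightly_rate": 120, "commission_pct": 25},
    {"name": "C", "nightly_rate": 95, "commission_pct": 15},
]

# What the consumer might expect: cheapest rate first.
by_rate = sorted(hotels, key=lambda h: h["nightly_rate"])

# What a commission-driven ranking produces: highest payout first.
by_commission = sorted(hotels, key=lambda h: -h["commission_pct"])

print([h["name"] for h in by_rate])        # ['A', 'C', 'B']
print([h["name"] for h in by_commission])  # ['B', 'C', 'A']
```

Both orderings look like a neutral "results page" to the user; only the hidden sort key differs.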
“Consumers are often unaware that they are being targeted by an algorithm, and this makes it harder for them to challenge practices or be aware when harm might happen.” Again, we’re prone to believe that our objective inquiries yield objective results. But, as illustrated in a separate example, sometimes “algorithms [are used] to set prices that are personalised to a particular person based on their behaviour or characteristics.” For example, the government found that insurance firms could gauge which consumers were most likely to renew their home and/or auto insurance policies, and raise prices for those most likely to renew.
The better the customer, the more they’d pay. Presumably, customers more likely to churn were offered lower prices. In this algorithmic sleight of hand, insurance firms were treating their best customers…worst.
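The loyalty penalty can be captured in a few lines. This is a hypothetical sketch, not any insurer's actual model; the 20% markup ceiling is an invented assumption. The quote rises with a predicted probability of renewal, so the most loyal customers pay the most:

```python
# Hypothetical loyalty-penalty pricing sketch. The markup formula and
# figures are invented for illustration.
def renewal_quote(base_premium: float, renewal_probability: float) -> float:
    """Return a renewal quote that rises with predicted customer loyalty."""
    if not 0.0 <= renewal_probability <= 1.0:
        raise ValueError("renewal_probability must be in [0, 1]")
    # Up to a 20% markup for the customers least likely to shop around.
    markup = 0.20 * renewal_probability
    return round(base_premium * (1.0 + markup), 2)

# A likely-to-churn customer is quoted less than a loyal one:
print(renewal_quote(500.0, 0.1))  # 510.0
print(renewal_quote(500.0, 0.9))  # 590.0
```

Nothing in the quote itself reveals that loyalty, rather than risk, drove the price.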
Algorithms and competitive harm
Some anti-competitive behavior is easy to notice. When Microsoft was found to have illegally bundled Internet Explorer with Windows, for example, consumers may not have thought of the behavior as anti-competitive, but they could easily see that their choice of default browsers was limited.
But algorithms are different and sneakier.
For example, in 2017 the European Commission found that Google was displaying its Shopping results more prominently than competitive services. But the details of how this was done are opaque to the average consumer, as the U.K.’s report noted: “It was subjecting rival comparison shopping services to its generic search algorithms, which demoted these rivals in the search result rankings and made them much less visible to consumers, but exempted its own comparison shopping service from the same algorithms.”
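To make the asymmetry concrete, here is a toy sketch (hypothetical code with generic service names, not the actual ranking system at issue) of a demotion rule that applies to rival comparison-shopping services but exempts the platform's own:

```python
# Hypothetical sketch of asymmetric demotion. Names, fields and the 0.3
# demotion factor are invented for illustration.
def ranking_score(result: dict, own_service: str) -> float:
    """Score a result, demoting rival comparison-shopping services only."""
    score = result["relevance"]
    if result["type"] == "comparison_shopping" and result["name"] != own_service:
        score *= 0.3  # demotion applied to rivals, not to the platform itself
    return score

results = [
    {"name": "OwnShopping", "type": "comparison_shopping", "relevance": 0.8},
    {"name": "RivalShopping", "type": "comparison_shopping", "relevance": 0.9},
]
ranked = sorted(results, key=lambda r: -ranking_score(r, "OwnShopping"))
print([r["name"] for r in ranked])  # ['OwnShopping', 'RivalShopping']
```

The rival is the more relevant result, yet the exemption flips the order, and the consumer sees only a tidy results page.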
I’m not intending to call out particular companies, but rather the quiet, behind-the-scenes work done by algorithms. Consumers can’t see these algorithms. They don’t know why they’re seeing certain results or offers. All of the “magic” of the algorithm is hidden from them.
Just as pernicious, the U.K.’s Competition and Markets Authority found that algorithms can be used to skirt laws against collusion between firms:
They can be used to automatically detect and respond to price deviations by competitors, which could make explicit collusion between firms more stable, as there is less incentive for those involved to cheat or defect from the cartel.
Firms can also use the same algorithmic system to set prices, for example by using the same third-party software, through which they could exchange information.
There are also concerns that algorithms can learn to collude tacitly, without firms explicitly communicating with each other.
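The first of those points can be illustrated with a few lines of Python. This is a deliberately simplified, hypothetical repricing bot, not real software: if every firm runs a rule that instantly matches any observed price cut, a defector gains no lasting advantage, which is exactly what stabilizes a cartel.

```python
# Hypothetical sketch of an automated price-matching rule that removes
# the incentive to defect from a cartel. All figures are invented.
def repriced(my_price: float, competitor_prices: list[float]) -> float:
    """Match the lowest observed competitor price; never price above it."""
    return min(my_price, min(competitor_prices))

# Three firms at 100; one defects to 90. Within one repricing cycle,
# every bot has matched, so the defector gains no price advantage.
prices = [100.0, 100.0, 90.0]
prices = [repriced(p, prices) for p in prices]
print(prices)  # [90.0, 90.0, 90.0]
```

No phone call, no email, no explicit agreement: the coordination lives entirely in the reaction rule.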
Algorithms and the data science behind them can significantly improve the lives of consumers. But we need to figure out ways to make that data science less of a “black box” and more understandable to the consumers it affects. As we do, consumers will be better positioned to make informed choices about how algorithms guide their purchasing decisions.
Disclosure: I work for AWS, but the views expressed herein are my own.