The Human-Machine Game
We script algorithms, and in turn, they script us, and in turn, we script them, and in turn...
Ever since large-scale recommender systems emerged on the internet, it has been common wisdom that we should fulfill users’ preferences as much as we can. The idea is this: people engage most with content they want to see, so engagement metrics are a good proxy for their true preferences. Facebook, Yelp, Amazon, and Netflix all try to serve us content that best matches our previously revealed engagement.
This core idea has proliferated everywhere. Pretty much every automated system we interact with on the internet today that has even a modicum of ‘intelligence’ (read: some way to process large-scale data and make decisions based on it) tries to learn preferences through engagement.
This is typically paired with a learning or classification scheme that buckets users according to how similar they are to each other. So, if you watch Netflix shows very similar to those another user watches, both of you should receive similar recommendations.
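The bucketing idea can be made concrete with a minimal nearest-neighbor sketch. This is not any particular company's algorithm — the data and the `recommend` helper are invented for illustration, and real systems use far richer models — but it shows the core logic: find the user most similar to you, then suggest what they liked and you haven't seen.

```python
import numpy as np

# Hypothetical toy data: rows are users, columns are shows,
# entries are watch/rating counts. All numbers are made up.
ratings = np.array([
    [5, 3, 0, 1],   # user A
    [4, 3, 0, 1],   # user B (tastes very similar to A)
    [0, 0, 5, 4],   # user C (different tastes)
], dtype=float)

def cosine_similarity(u, v):
    """Cosine of the angle between two users' rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(user_idx, ratings):
    """Recommend the unwatched item best liked by the most similar other user."""
    others = [j for j in range(len(ratings)) if j != user_idx]
    sims = [cosine_similarity(ratings[user_idx], ratings[j]) for j in others]
    neighbor = others[int(np.argmax(sims))]
    unwatched = np.where(ratings[user_idx] == 0)[0]
    return int(unwatched[np.argmax(ratings[neighbor][unwatched])])
```

Because users A and B have nearly identical histories, each is the other's nearest neighbor, and both get steered toward the same remaining content — exactly the "similar users, similar recommendations" behavior described above.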
Over the last few years, however, the assumption that human preferences are static has come into serious question. Let’s take a benign example. Suppose you watch a Netflix cooking show (say Chef’s Table). Before watching it, you had no idea what umami was. Upon learning about this new flavor, your food preferences shift slightly toward savory items, away from what you enjoyed earlier.
But this example generalizes. Advertisers who introduce users to new products shift the marginal economics of purchasing decisions. In 2021 it has become common to see a song from the 1970s come roaring back onto the charts thanks to TikTok’s recommendation engine — just look at the TikTok user who sent the Fleetwood Mac hit Dreams back up the charts this year by singing it while riding a skateboard and drinking cranberry juice. Generalized broadly, this pattern has great consequences.
This example highlights something that is becoming self-evident to more of us: algorithms that interact with us do not leave us unchanged. And because these algorithms are trained on human data, our shifting preferences implicitly change the models we interact with. The two processes run jointly, each dynamically affecting the other, in a feedback loop.
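A minimal simulation makes the loop tangible. The drift model here is an assumption invented for illustration (the `DRIFT` and `LEARN` rates are arbitrary): the user's true preference moves toward whatever they consume, and the model's estimate moves toward whatever gets clicked. Even so, the qualitative behavior is the point — a mild initial lean gets amplified on both sides of the loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two content categories; the user starts with only a mild lean toward category 0.
user_pref = np.array([0.6, 0.4])   # the user's true (hidden) preferences
model_est = np.array([0.5, 0.5])   # the algorithm's estimate of the user

DRIFT = 0.1   # assumed rate at which consumption shifts true preferences
LEARN = 0.3   # assumed rate at which clicks update the model's estimate

for _ in range(50):
    item = int(np.argmax(model_est))          # recommend what seems most engaging
    clicked = rng.random() < user_pref[item]  # engagement depends on true preference
    if clicked:
        # Consumption nudges the user's true preference toward the item...
        user_pref[item] += DRIFT * (1 - user_pref[item])
        user_pref /= user_pref.sum()
        # ...and the click nudges the model toward recommending it again.
        model_est[item] += LEARN * (1 - model_est[item])
        model_est /= model_est.sum()

print(user_pref)  # the mild 60/40 lean has been amplified well past 70/30
```

Neither side is "wrong" in isolation — the model is faithfully tracking clicks, and the user is genuinely enjoying the content — yet their joint dynamics produce a preference the user did not start with.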
So, not only does media script human beings; humans also script the media that scripts us.
You might have noticed that this process is recursive and self-referential. That is no accident. There is a game being played on the internet, one in which algorithms and humans simultaneously update their actions and preferences, respectively, in response to each other.
Understanding this process will be the key to developing machines that do what we want them to do in equilibrium with humans. That is: accounting for users’ preference shifts, what is the best thing to do? This question is materially different from simply maximizing engagement.
Game theory is deeply at work here. Understanding how users will respond to particular content requires a simulator or predictor that can reason about the second-, third-, and higher-order consequences of one’s actions. I believe that building these predictors is a key challenge in building truly intelligent machines.
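Here is a sketch of why such a simulator changes the answer. Everything in this toy world is an assumption: item 0 is "clickbait" with high immediate appeal that erodes the user's overall engagement level, item 1 is "quality" content that builds it, and the `BASE` and `EFFECT` numbers are invented. A myopic engagement maximizer and a planner that rolls out the preference-shift model through the same simulator reach different decisions.

```python
# Toy two-item world (all numbers invented for illustration):
# item 0 = "clickbait": high immediate appeal, erodes engagement over time;
# item 1 = "quality": lower immediate appeal, builds engagement over time.
BASE = [0.9, 0.5]    # per-item base appeal (assumption)
EFFECT = [0.7, 1.1]  # multiplicative effect on the user's engagement level (assumption)

def step(level, item):
    """Simulate one interaction: (engagement now, user's shifted state)."""
    return BASE[item] * level, level * EFFECT[item]

def myopic(level):
    """Maximize immediate engagement only."""
    return max(range(2), key=lambda i: BASE[i] * level)

def value(level, depth):
    """Best total engagement achievable in `depth` remaining steps."""
    if depth == 0:
        return 0.0
    return max(e + value(nxt, depth - 1)
               for e, nxt in (step(level, i) for i in range(2)))

def lookahead(level, depth):
    """Pick the item maximizing total engagement over `depth` steps,
    using the simulator to account for how choices shift the user."""
    scores = [e + value(nxt, depth - 1)
              for e, nxt in (step(level, i) for i in range(2))]
    return scores.index(max(scores))

print(myopic(1.0), lookahead(1.0, depth=3))  # prints "0 1"
```

The myopic policy grabs the clickbait every time; the planner, reasoning about second- and third-order consequences, invests in the user's future engagement instead. Nothing about the numbers is special — the point is that the optimal action in equilibrium with a shifting human is not the engagement-maximizing one.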
Humans are able to plan at these levels of abstraction and can throttle or accentuate their behavior in response to feedback from their environment. The problems above are doubly complicated when multiple humans are competing with each other and multiple algorithms are competing for mindshare. The fact that machine learning algorithms induce multi-agent dynamics between humans is not well appreciated.
The networked complexity of this game exponentially increases as the number of links (machine-human interactions) grows.
But understanding what is happening here is crucial to the world that will come.
We also need to keep an eye out for the weird ways in which the powers that be put their thumbs on the scale: https://knowyourmeme.com/memes/events/ethnically-ambiguous-ai-prompt-injection