Around Christmas, Raptor recommended Kevin Slavin’s TED talk “How algorithms shape our world” to me:
In this eye-opening and thought-provoking presentation, Slavin points out that algorithms operating on a global scale sometimes produce unintuitive, unexpected and undesirable results, while at the same time having the power to reshape landscapes, architecture and culture. He argues that we have now created a “co-evolutionary force” that, in a way, we have to accept as “nature”.
Slavin mentions “Epagogix”, a company that uses algorithms to help executives make decisions. One of Epagogix’ algorithms predicts the profitability of a movie before it is made, based on its script, thus influencing which stories make it into production and which go in the dumpster. “If these algorithms, like the algorithms on Wall Street, just crashed one day and went awry, how would we know?”, asks Slavin. Well, let’s see. Why don’t we look at the movies coming out in the near future as exemplified in iTunes movie trailers (http://trailers.apple.com/trailers/) – the selection of which, undoubtedly, is itself determined by an algorithm. The upcoming season is seeing two stories based on classic Greek tales (“Immortals”, “Wrath of the Titans”) and three reimagined fairy tales, including two competing adaptations of Snow White (“Mirror Mirror”, “Snow White and the Huntsman”). Coincidence? Maybe, maybe not.

Let us assume for a second that some algorithm did indeed preferentially green-light movies based on fairy tales and myths. Is that a bad thing? After all, our interests and tastes, both as individuals and as a society, are never created in a vacuum. They are shaped and directed by other individuals: mentors, friends, family and the collective that we call the media. So how is it different when our preferences start being guided by algorithms? In my mind – and I am sure that the simplicity of my following description does not do justice to the sophistication of many of these algorithms (which is part of the problem) – an algorithm that strives to optimize for profitability mines all data available to it for the single factor that explains most of the variance in (customer) response. It PCAs the hell out of things, finds the eigenvector corresponding to the largest eigenvalue and just runs with it.
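To make that PCA quip concrete, here is a minimal sketch – my own toy example with made-up data, not any studio’s actual model. Given a hypothetical matrix of customer responses, it finds the eigenvector of the covariance matrix with the largest eigenvalue, i.e. the single direction that explains the most variance:

```python
import numpy as np

# Hypothetical customer-response matrix: rows = customers, columns = features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 0] *= 3.0  # artificially make one feature dominate the variance

Xc = X - X.mean(axis=0)                  # center the data
cov = np.cov(Xc, rowvar=False)           # covariance matrix of the features
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices
top = eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue

# The dominant principal component loads almost entirely on the inflated
# feature, and "just running with it" means optimizing along this axis alone.
print(np.argmax(np.abs(top)))  # -> 0
```

Everything the algorithm “knows” is then collapsed onto that one axis – all other structure in the data is treated as noise.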
It sends you straight down a tangent just because you have shown some initial willingness to go there, no matter whether the ultimate destination of that tangent is good for you or not. Let’s say you have watched two or three movies about the Third Reich on Netflix. Great, here is one more for you, and a couple more about the Holocaust, and then it does not take long for Riefenstahl’s “Triumph of the Will” to pop up in your list of recommendations. Surely you would enjoy that one, too, right?! Well, your mother, even if she originally recommended Schindler’s List to you because it is a good movie on an important topic, would probably become concerned if she noticed that all you ended up watching afterwards were Holocaust movies. She would probably suggest you see a romantic comedy for a change. But Netflix does the polar opposite. The more you watch a certain genre, the more the matching algorithm is reinforced in its belief that that is what you need. The more the whole Netflix community watches certain genres, the more likely those will be available to watch instantly. There is no force that strives for a “healthy” balance, restoring the status quo or finding your personal center. In my opinion, the biggest danger in such algorithms lies in the fact that they create runaway open loops – subject to interruption only by a human’s act of will. In the case of Netflix this would be the viewer’s deliberate decision to ignore all current suggestions and to actively find something completely different to watch. But humans are notoriously bad at pulling off acts of will. Is it not so much easier to just trust in what is suggested and to go with the flow? It is just one mouse-click, just 500 ms away, after all.
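The runaway dynamic is easy to simulate. Below is a deliberately crude sketch – a Pólya-urn-style toy of my own, in no way Netflix’s real matching algorithm: every watched title makes its genre more likely to be recommended next, and with no balancing force the distribution drifts away from an even split.

```python
import random

random.seed(42)  # reproducible toy run

genres = ["drama", "comedy", "documentary"]
weights = {g: 1.0 for g in genres}  # start with no preference at all

for _ in range(200):
    # Recommend in proportion to past watches: pure positive feedback,
    # with a viewer who always "goes with the flow" and clicks play.
    pick = random.choices(genres, weights=[weights[g] for g in genres])[0]
    weights[pick] += 1.0  # watching reinforces the same genre

total = sum(weights.values())
dominant_share = max(weights.values()) / total
print({g: round(w / total, 2) for g, w in weights.items()})
```

Nothing in the loop ever pulls the distribution back toward balance; interrupting it takes exactly the deliberate act of will described above.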
Beyond the examples given by Slavin, where else do algorithms profoundly influence crucial aspects of our lives? Every time you request a credit card or a loan, a sophisticated algorithm mines all the data it can gather about your behavior and spits out a verdict: no, or yes and here are the terms. So ultimately, algorithms decide if you can afford this house, car or vacation, or if you can send your kids to a good college. They shape your dreams. The situation becomes even more disconcerting when you look at life and health insurance. Algorithms decide, based on a plethora of data about your lifestyle and pre-existing conditions, if and on what terms you will receive payment for medically necessary procedures. Thus, algorithms determine the physical wellbeing and livelihood of millions of people on a daily basis. And they do so without restraint, remorse or regret. The only reason why humans – as opposed to animals – are capable of cruelty is that, thanks to our well-developed prefrontal cortex, we learned to decouple our rationality from the primate morality that millions of years of evolution have ingrained in our much older deep brain structures. And now we have created these global entities that are pure ratio, taken out of context and devoid of any intrinsic ethical constraints.
Slavin’s notion of algorithms as a co-evolutionary force got me thinking about one more topic: online dating. On sites like eHarmony and OkCupid you provide a snapshot of your personality. In return, sophisticated algorithms pre-select and bring to your attention certain phenotypes of your gender of interest. Given the increasing popularity of online dating, what is the logical conclusion of this train of thought? Phenotypes, at least in part, are determined by genetics. Skewing the chances of meeting, mating and procreating with certain phenotypes is a form of breeding. It is a way for algorithms to slowly rewrite our collective DNA! In a world in which people’s chances of procreation are no longer necessarily determined by Darwinian fitness, algorithms are becoming the new natural selection. But what do they select for? It is not hard to conceive of online dating algorithms selecting for people who, in turn, show a higher inclination to embrace online dating and to develop better algorithms – speaking of co-evolution. This last idea may be a stretch, but as illustrated by the cinematic example above, algorithms select for more dichotomy. They seem to act like contrast enhancers that may lead to a critical phenomenon in the physical sense, to a divide that can be hard to bridge, a phase transition in society.
The risk lies in the propensity of algorithms to clash with each other and to run away without checks and balances. I do not think that we as a society should forgo or abolish algorithmic optimization, but to make algorithms sustainable we have to find a way to balance them out. We have to implement symmetries that lead to quantities conserved over time. Just as an algorithm solving a differential equation of motion knows about symmetries in space and time and checks at every time step for energy and momentum conservation, any algorithm that has social impact needs to check for the conservation of culture, of values, of the environment, of phenotypes. But it is still up to us to define the quantities worthy of conservation.
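The physics analogy can be made literal. Here is a minimal sketch, assuming a simple harmonic oscillator: a leapfrog (velocity Verlet) integrator that, at every time step, verifies that total energy has not drifted from its initial value – the kind of per-step conservation check the paragraph above asks social algorithms to perform on culture and values.

```python
# Integrate x'' = -(k/m) * x with the leapfrog scheme and check, at every
# step, that total energy stays within a tolerance of its initial value.
def leapfrog(x, v, dt, steps, k=1.0, m=1.0):
    e0 = 0.5 * m * v**2 + 0.5 * k * x**2  # initial total energy
    for _ in range(steps):
        v += 0.5 * dt * (-k * x / m)  # half kick
        x += dt * v                   # drift
        v += 0.5 * dt * (-k * x / m)  # half kick
        e = 0.5 * m * v**2 + 0.5 * k * x**2
        # The conservation check: abort the moment the invariant drifts.
        assert abs(e - e0) < 1e-3 * e0, "energy conservation violated"
    return x, v

# One unit mass on a unit spring, released from x=1: energy should stay 0.5.
x, v = leapfrog(1.0, 0.0, dt=0.001, steps=10_000)
print(x**2 + v**2)  # twice the energy; stays close to 1.0
```

A non-symplectic scheme (plain forward Euler, say) would trip the assertion as energy steadily grows – which is exactly the point: the invariant check catches the runaway, rather than trusting the update rule to behave.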