How algorithms shape us

Around Christmas, Raptor recommended Kevin Slavin’s TED talk “How algorithms shape our world” to me:


In this eye-opening and thought-provoking presentation, Slavin points out that algorithms operating on a global scale sometimes produce unintuitive, unexpected and undesirable results, while at the same time having the power to reshape landscapes, architecture and culture. He argues that we have now created this “co-evolutionary force” that, in a way, we have to accept as “nature”.

Slavin mentions “Epagogix”, a company that uses algorithms to help executives make decisions. One of Epagogix’s algorithms predicts the profitability of a movie before it is made, based on its script, thus influencing which stories make it into production and which go in the dumpster. “If these algorithms, like the algorithms on Wall Street, just crashed one day and went awry, how would we know?”, asks Slavin. Well, let’s see. Why don’t we look at the movies coming out in the near future, as showcased in iTunes movie trailers (http://trailers.apple.com/trailers/) – a selection which, undoubtedly, is itself determined by an algorithm. The upcoming season is seeing two stories based on classic Greek tales (“Immortals”, “Wrath of the Titans”) and three reimagined fairy tales, including two competing adaptations of Snow White (“Mirror Mirror”, “Snow White and the Huntsman”). Coincidence? Maybe, maybe not.

Let us assume for a second that some algorithm did indeed preferentially green-light movies based on fairy tales and myths – is that a bad thing? After all, our interests and tastes, both as individuals and as a society, are never created in a vacuum. They are shaped and directed by other individuals: mentors, friends, family and the collective that we call the media. So how is it different when our preferences start being guided by algorithms? In my mind – and I am sure that the simplicity of my description does not do justice to the sophistication of many of these algorithms (which is part of the problem) – an algorithm that strives to optimize for profitability mines all data available to it for the single factor that explains most of the variance in (customer) response. It PCAs the hell out of things, finds the eigenvector corresponding to the largest eigenvalue and just runs with it.
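
That caricature can be sketched in a few lines of Python. To be clear, everything below is invented for illustration – the “customer response” data, the dominant direction baked into it – and no real studio algorithm is this simple; this is just the textbook “largest eigenvalue” move made explicit:

```python
import numpy as np

# Rows = customers, columns = invented response features (entirely made up).
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 5))
responses[:, 0] += 3 * responses[:, 1]  # bake in one dominant direction

# Classic PCA: center, covariance, eigendecomposition.
centered = responses - responses.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh returns ascending order

# The single factor that explains most of the variance: the last eigenpair.
top_vector = eigenvectors[:, -1]
explained = eigenvalues[-1] / eigenvalues.sum()

# "Just run with it": collapse every customer onto that one axis and rank.
scores = centered @ top_vector

print(f"top component explains {explained:.0%} of the variance")
```

Everything orthogonal to that one axis – every other dimension of taste – is simply discarded.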
Such an algorithm sends you straight off on a tangent just because you have shown some initial willingness to go there, no matter whether the ultimate destination of that tangent is good for you or not. Let’s say you have watched two or three movies about the Third Reich on Netflix. Great, here is one more for you, and a couple more about the Holocaust, and then it does not take long for Riefenstahl’s “Triumph of the Will” to pop up in your list of recommendations. Surely you would enjoy that one, too, right?! Your mother, even if she originally recommended that you watch “Schindler’s List” because it is a good movie on an important topic, would probably become concerned if she noticed that all you ended up watching afterwards were Holocaust movies. She would probably suggest you see a romantic comedy for a change. But Netflix does the polar opposite. The more you watch a certain genre, the more the matching algorithm is reinforced in its belief that this is what you need. The more the whole Netflix community watches certain genres, the more likely those will be available to watch instantly. There is no force that strives for a “healthy” balance, restores the status quo or finds your personal center. In my opinion, the biggest danger in such algorithms lies in the fact that they create runaway open loops – subject to interruption only by a human act of will. In the case of Netflix, this would be the viewer’s deliberate decision to ignore all current suggestions and to actively find something completely different to watch. But humans are notoriously bad at pulling off acts of will. Is it not so much easier to just trust what is suggested and go with the flow? It is just one mouse click, just 500 ms away, after all.
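
The runaway open loop is easy to demonstrate in code. The genres and the two-line recommender below are made up – no claim is made about Netflix’s actual system – but the point survives the simplification: a pure “match what was watched most” rule, fed back through a viewer who passively accepts every suggestion, has exactly one destination:

```python
from collections import Counter

genres = ["history", "comedy", "drama", "sci-fi"]
watch_history = ["history", "history"]  # the two WWII documentaries that started it

def recommend(history):
    """Naive matching (a deliberate caricature): always suggest the genre the
    viewer has watched most. Pure reinforcement, no term pushing back toward balance."""
    counts = Counter(history)
    return max(genres, key=lambda g: counts[g])

# The viewer passively accepts every suggestion, closing the loop.
for _ in range(100):
    watch_history.append(recommend(watch_history))

share = watch_history.count("history") / len(watch_history)
print(f"'history' share after 100 passive clicks: {share:.0%}")  # collapses to 100%
```

Two initial clicks decide everything that follows; the loop never recovers on its own.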

Beyond the examples given by Slavin, where else do algorithms profoundly influence crucial aspects of our lives? Every time you request a credit card or a loan, a sophisticated algorithm mines all the data it can gather about your behavior and spits out a verdict: no, or yes, and here are the terms. So ultimately, algorithms decide whether you can afford this house, car or vacation, or whether you can send your kids to a good college. They shape your dreams. The situation becomes even more disconcerting when you look at life and health insurance. Algorithms decide, based on a plethora of data about your lifestyle and pre-existing conditions, whether and on what terms you will receive payment for medically necessary procedures. Thus, algorithms determine the physical wellbeing and livelihood of millions of people on a daily basis. And they do so without restraint, remorse or regret. The only reason why humans – as opposed to animals – are capable of cruelty is that, thanks to our well-developed prefrontal cortex, we have learned to decouple our rationality from the primate morality that has been ingrained in our much older deep brain structures by millions of years of evolution. And now we have created these global entities that are pure rationality, taken out of context and devoid of any intrinsic ethical constraints.

Slavin’s notion of algorithms as a co-evolutionary force got me thinking about one more topic: online dating. On sites like eHarmony and OkCupid you provide a snapshot of your personality. In return, sophisticated algorithms pre-select and bring to your attention certain phenotypes of your gender of interest. Given the increasing popularity of online dating, what is the logical conclusion of this train of thought? Phenotypes, at least in part, are determined by genetics. Skewing the chances of meeting, mating and procreating with certain phenotypes is a form of breeding. It is a way for algorithms to slowly rewrite our collective DNA! In a world in which people’s chances of procreation are no longer necessarily determined by Darwinian fitness, algorithms are becoming the new natural selection. But what do they select for? It is not hard to conceive of online dating algorithms selecting for people who in turn show a higher inclination to embrace online dating and to develop better algorithms – co-evolution indeed. This last idea may be a stretch, but as the cinematic example above illustrates, algorithms select for dichotomy. They act like contrast enhancers that may lead to a critical phenomenon in the physical sense, to a divide that can be hard to bridge, a phase transition in society.

The risk lies in the propensity of algorithms to clash with each other and to run away without checks and balances. I do not think that we as a society should forgo or abolish algorithmic optimization, but to make algorithms sustainable we have to find a way to balance them out. We have to implement symmetries that lead to quantities conserved over time. Just as an algorithm solving a differential equation of motion knows about symmetries in space and time and checks at every time step for energy and momentum conservation, any algorithm with social impact needs to check for the conservation of culture, of values, of the environment, of phenotypes. But it is still up to us to define the quantities worthy of conservation.
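
The physics half of that analogy can be made concrete with a toy (this is just the standard numerical-integration picture, not anything from the talk): a leapfrog scheme for a harmonic oscillator respects the structure of the dynamics, and at every time step we can check that the conserved quantity, the total energy, has barely drifted:

```python
def leapfrog(x, v, dt, steps):
    """Symplectic kick-drift-kick integration of a unit-mass harmonic
    oscillator (force = -x), recording total energy at every step."""
    energies = []
    for _ in range(steps):
        v += -x * (dt / 2)                      # half kick
        x += v * dt                             # drift
        v += -x * (dt / 2)                      # half kick
        energies.append(0.5 * v * v + 0.5 * x * x)  # E = kinetic + potential
    return energies

energies = leapfrog(x=1.0, v=0.0, dt=0.01, steps=10_000)

# The conservation check: over ~16 full oscillation periods, the energy
# error stays bounded instead of accumulating.
drift = max(abs(e - energies[0]) for e in energies)
print(f"max energy drift over 10,000 steps: {drift:.2e}")
```

A naive (non-symplectic) Euler step would fail this check, with the energy growing without bound – the numerical analogue of a runaway loop with no conserved quantity to hold it in place.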



5 thoughts on “How algorithms shape us”

  1. The phenomenon that predictive algorithms use existing data sets to bias outcomes so as to provide a user with what he/she “wants” (as defined in part by what’s in the data set about said user) is well known. In search, it’s sometimes called the “filter problem”.

    I really like the idea that dating sites, run amok, are essentially a breeding program à la Bene Tleilax. 🙂

    On a serious note, I think the idea that these algorithms can “enhance contrast” or, maybe, “drive modality” is a pretty powerful one. There is a long history of criticizing mass media for this effect as well; even in sci-fi, Neal Stephenson’s earlier novels (Snow Crash or The Diamond Age, for example) sort of predict a break-up of society along contrasted interest groups or tribes of sorts – everyone is strongly driven to a socioeconomic identity group and technology reinforces this. My main concern with all of this is that it would seem to inherently limit competence, limit liberty and perhaps encourage disenfranchisement.

    So, here’s my question: I don’t have much faith in “top-down” solutions to this kind of problem. I don’t think you can legislate or agree on any set of rules or definitions that prevent the driving of bimodality (or can you?). For one, enforcement would be impossible. For two, no truly free society could come up with a set of “values” they all agreed on (this is one of the tricks of successful polities, in my opinion: they pretend to have a shared set of core values, but do so only loosely – and varying in time – to fit the needs of the moment. Our “values” are very different from those of 1700s Massachusetts, or a 1900s New Yorker, even though we all subscribe to an “American” identity… but anyway…). So how do you design in dampers against driving bimodality? Could it be that in a healthy society there is always a set of elements ready to fight these forces when they get out of hand? (i.e. is there always a reservoir of ideas and personalities that, when triggered by societal need, go ape-shit and throw wrenches in the machine?)

    Posted by maharbiz | January 12, 2012, 5:01 am
  2. I think that a certain type of top-down regulation might actually be viable. To use economics as an example: we know that both the perfectly free market and a fully planned communist economy lead to suboptimal, inhumane conditions, such that most modern states have settled on some form of social market economy in which some aspects of freedom are sacrificed in favor of social security. A similar concept could be adopted in mass media (some things should be reported on precisely because they are rarely reported on), film making (let’s make a movie from a script precisely because its premise is rare), Netflix (suggest a movie specifically because it lies outside the user’s comfort genre), online dating (suggest a person the user would never search for on purpose), etc. Of course, any particular diversity-driving pick might cause an economic disadvantage for its “owner”. It will just have to be subsidized by the social collective, like other diversity-enhancing measures such as affirmative action. What we need to do is find the right amount of social intervention, just as in economics.
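
    The Netflix variant of this can be sketched as a periodic “subsidized” diversity pick layered on top of the matching loop. Everything here is a toy – the genres, the period-4 knob – and tuning that knob is exactly the “right amount of intervention” question:

```python
from collections import Counter

genres = ["history", "comedy", "drama", "sci-fi"]
watched = ["history", "history"]
DIVERSITY_PERIOD = 4  # every 4th pick is a subsidized diversity pick (invented knob)

def recommend(history, step):
    counts = Counter(history)  # Counter returns 0 for unseen genres
    if step % DIVERSITY_PERIOD == 0:
        # The subsidized pick: deliberately suggest the least-watched genre.
        return min(genres, key=lambda g: counts[g])
    # Otherwise, match taste as usual: the most-watched genre.
    return max(genres, key=lambda g: counts[g])

for step in range(400):
    watched.append(recommend(watched, step))

shares = {g: watched.count(g) / len(watched) for g in genres}
print(shares)
```

    The comfort genre still dominates (about three quarters of all views), but every other genre retains a stable foothold instead of collapsing to zero – the loop is damped rather than broken.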

    Posted by interfazed | January 12, 2012, 5:39 am
  3. I think top-down regulation would be harder than you think (but see the end of this comment) due to the nature of the processes at play (slow, hard to visualize). In my opinion, most “successful” government intervention in this country has occurred around easily defined and depicted events which cause a phase change in public opinion and the transient bubble of support needed to generate political will to implement change. For example, the late 20th century environmental movement, the mid-20th century civil rights movements, the early 20th century working conditions and labor movements were like this. So, we would need some observable catastrophe, misery, etc. that is encapsulatable in a visceral way before anything happens. To wit, take three examples which I think are pretty close to this issue that demonstrate the point. All three are examples of slow processes that effect societal change (and have positive feedback), are driven by the market, have (in my opinion) never demonstrated any large event which enables political will, and are poorly regulated. By the way, I am NOT arguing they SHOULD be regulated or that the examples below are bad or evil or whatever, just making the point.

    -Mass media regulation
    The government’s attempts at regulating mass media (which undoubtedly has a huge effect on a population’s opinions, behavior, selection criteria in choosing mates, etc.) have been pretty poor. Modern high-end advertising skews people’s opinions on almost every aspect of their daily lives, and it takes a rather concerted effort not to be affected (if that’s even possible). Outlets which report and interpret events (i.e. media and journalism) exhibit similar phenomena (e.g. mainstream media largely plays to a ‘base’ and does little to piss off said base or drive viewers/readers away; hence, the way information is reported is very skewed and colored). The net effect has been to generate bimodal (or multimodal) populations across competence, cultural divide, worldview, etc.

    – Pornography
    Porn is the monster, eight-headed elephant in the room in these discussions. Online porn is accessed by pretty much everyone in the developed world, is strongly (neurochemically) self-reinforcing, skews attitudes and behaviors between genders, is a massive industry and is largely unregulated (except in very extreme cases). Of course, the arguments in favor of easily accessible erotic content are easy to defend: sexual release is healthy, these materials demystify a historically taboo activity, adults should be free to make, share and consume any non-harmful information, etc. But we’re essentially incapable as a society of discussing this issue openly and honestly: even if we all agree (and we don’t) that porn is legally and morally OK at the individual level, how does massive, continuous consumption affect society in the long run?

    – Basic education
    The government has been pretty unsuccessful (in my opinion) in providing equal access to education across demographic groups (and not for want of trying). Again, the bimodal populations in terms of competence, belief systems, etc. are sometimes amazing, and it’s not like top-down approaches aren’t constantly attempted, but they seem to have, at best, a weak effect.

    Maybe the only solution is one you mentioned in your post: include a devil’s advocate component in any societally relevant algorithmic process. With some frequency and strength, provide counter-balancing results, information, etc. in a way that does not favor any one policy, just exposes people to more variety and choice. The problem is, as you said, that from a market perspective this is just an inefficiency. You’d need to come up with a way to pump money out of this idea if you want it to have a negative deltaG. 🙂


    Posted by maharbiz | January 12, 2012, 11:30 pm
  4. Hm, how to make this idea into a money pump? How about turning it into a form of gambling? (It worked in the financial markets:)) What if you could place bets on the popularity of a given algorithmic output with its target? If you correctly guess that a teenager enjoys a documentary on existentialism, or that a white supremacist on a dating site goes out with a black girl, you get tons of money because the odds are against you… But all the money you lose betting flows into sustaining the diversity-enabling market inefficiencies – just an idea:).

    Posted by interfazed | January 13, 2012, 1:28 am
