When do animals become spare parts? Part 1

©2012 Michel M. Maharbiz

The question that nags me is, put somewhat tongue-in-cheek: “Is it ethically permissible for a capable hobbyist at randomhackerspace.org to chop and maim a Goliath beetle, implant it with an embedded system and make it compete with other Goliath beetles at this spring’s high school robot competition?” I am not trying to create any idiotic hysteria about hackerspaces, by the way. I define a hacker or hobbyist from here on as any technologically savvy, entrepreneurial human with access to hardware, software and maybe wetware/bio design and fab equipment, and I leave aside any political/countercultural/controversial baggage. I think this question addresses an immediate reality. Consider Hirotaka’s cyborg beetle: all of the devices in that paper are off-the-shelf and amount to about a ~US$300 investment; the hardware on the bug itself might run ~US$10–30.

For the present purposes, I am not interested so much in the question of whether such cyborgs could be used in bad ways (i.e. the ethics of what you do with the constructs). There is a well-established literature on this that goes back a long way; for some recent discussion in the context of robots, see [e.g. P. Lin 2011, M. Coeckelbergh 2010]. I am interested not in what harm I can do with the modified organisms but in what harm is permissible to them [e.g. A. Taylor 1996] in some process of modification or conversion to construct.

I also do not think this musing applies very strongly to work where animals are modified or manipulated during the course of clear, vetted medical research. That is an ethical topic, an important one, about which much has been written and debated and to which I do not think, at present, I can contribute anything. The need for this distinction may be more practical than theoretical: I have a hunch that a vast gray area exists between training a dog to fetch and performing medical experiments on rats and monkeys (both of which are topics with well-developed ethical and moral discussions). If you throw accessible technology and a large group of competent hackers into this vast gray area, I think you get a weird world on the other side.

Lastly, this is a topic that has implications for (dare I say it) transhumanist thought [Verdoux 2011, Chalmers 2009]. I’ll get back to this below, but if ethical permissibility simply derives from an argument from utility for the most advanced moral agents and there is no minimum bar (in contrast to, say, the capabilities approach), then we’re going to have a hard time convincing our technological progeny not to rip our wings off to make interesting toys.

I will start with a series of observations.

Observation 1: As man-made computation and communication systems miniaturize, lower their power consumption and interface with biological constructs, the bar to modifying biological organisms will be very low

This basic dilemma is not new. There is a long history to man’s ever more successful attempts to introduce control into organisms, extending from pre-industrial machines for exerting mechanical and behavioral control over animals, through the rise of modern biochemical engineering (which is built largely on introducing synthetic or synthetically improved biological functions into organisms), up to today’s emerging revolutions in neural interfaces and robotics.

What is new is that this capability will soon be (if it isn’t already) available to a large, technically savvy group of hobbyists. It should be obvious that as the computation and communication circuits we build radically miniaturize (i.e. become so low power that 1 pJ is sufficient to bang out a bit of information over a wireless transceiver; become so small that 500 µm² of thinned CMOS can hold a reasonable sensor front-end and digital engine), the barrier to sticking these types of interfaces into organisms will get pretty low. Put another way, the rapid pace of computation and communication miniaturization is swiftly blurring the line between the technological base that created us and the technological base we’ve created.
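
To make the scale of these numbers concrete, here is a minimal back-of-envelope sketch. Only the ~1 pJ/bit figure comes from the discussion above; the battery capacity, voltage and data rate are invented placeholders.

```python
# Back-of-envelope: how little the radio energy budget constrains a bug-borne system.
# Only the ~1 pJ/bit figure comes from the text above; the battery and data-rate
# numbers are hypothetical placeholders chosen purely for illustration.

PJ_PER_BIT_J = 1e-12        # ~1 pJ to transmit one bit over a wireless transceiver

capacity_mAh = 1.0          # hypothetical tiny 1 mAh cell
voltage_V = 3.0             # hypothetical 3 V supply
energy_J = capacity_mAh * 1e-3 * 3600 * voltage_V   # mAh -> coulombs, then C * V = J

bits_total = energy_J / PJ_PER_BIT_J
data_rate_bps = 1_000       # hypothetical 1 kbit/s telemetry stream

lifetime_years = (bits_total / data_rate_bps) / (3600 * 24 * 365)

print(f"stored energy:       {energy_J:.1f} J")
print(f"bits at 1 pJ/bit:    {bits_total:.2e}")
print(f"radio-only lifetime: {lifetime_years:.0f} years at 1 kbit/s")
```

The exact numbers don’t matter (in practice leakage, sensing and actuation would dominate long before the radio does), but the arithmetic makes the point: at these energies, the communication link is no longer what keeps a hobbyist out.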

This specific issue has arisen before in different contexts (e.g. at the dawn of genetic engineering, during the latest synthetic biology hype, etc.). Possibly the most relevant recent discussions have arisen out of the DIY Bio movement (the latest incarnation of the “can I make designer biology by splicing genes in my garage?” trope). But since most of the organisms realistically accessible to modification are far down the computational complexity scale – prokaryotic cells, some constructs made from mammalian cells and, importantly, yeast and plants (with some obvious exceptions like the ethical discussions around making fluorescent bunnies and fish) – DIY Bio discussions tend to focus on what harm could be done with the modified organisms, rather than on intrinsic harm to the modified organisms.

A related literature [Stracey 2009, Zurr 2003, Mooney 2006] has recently questioned the value of bioart along the same lines I am pursuing here: is it ethical to create art from biological constructs? This is perhaps closest to my concern, as art is obviously subjective and thus does not easily lend itself to reductionist arguments based on utility (at least not in its purest definition… obviously works of art can be shown to have utility, etc.). The problem with most of this literature is that, frankly, it answered nothing for me. Several papers were instrumental in articulating some of the issues, but very few provided anything approaching an answer to a question such as “Is it ethically permissible for a hobbyist down at randomhackerspace to chop and maim a Goliath beetle, implant it with an embedded system and make it compete with other Goliath beetles at this spring’s high school robot competition?”


Observation 2: There is no meaning to the descriptor “living”; only computational complexity matters

Most humans make a distinction between living things and non-living things and apply different standards to each. Consider this thought experiment: I write an app on my iPhone that emits the recording “Help! Help! I am in pain! Stop, please stop!”, or perhaps simply begins to beep frantically, R2D2-style, when the IMU detects violent shaking or the temperature sensor detects that I have put it in the oven. Would anyone cry? Conversely, consider watching a single ant or a worm wiggle frantically while it dies on a hot plate. Most people react to the two cases very differently.

This distinction appears to derive partially from:

  1. historically inherited vitalism – the idea that living things are a distinct category of being due to their possession of a “spark of life” of some kind;
  2. the fact that the technology base that produced us generates constructs (us, beetles, trees) that are usually so aesthetically and functionally different from the constructs produced by the technology base we produced (i.e. chairs, buses, radios, turbines) that people intuitively bin them into separate categories;
  3. the fact that most people are not aware of any synthetic, intelligent constructs sophisticated enough to compare with the animals they run into in daily life; and
  4. the perception that the belief stated above would reduce our own intrinsic value (“we are the most important species in the world” or perhaps “I don’t feel special if I am also just an artifact of some process”).

Incidentally, I think this last issue is one of the fundamental sources of subtle cognitive dissonance that advanced 21st century societies have yet to resolve.

The first reason above is hard to sustain in the face of current knowledge, but because it relies on invoking something that can always be held outside the current, limited body of knowledge, it will never be settled. I will simply state categorically that, given the evidence, I believe there to be no extra-rational “spark of life”. This does not, by the way, eliminate the possibility that consciousness and sentience require some common computational or information-theoretic paradigm or kernel; it just claims that any such kernel is intelligible, categorizable and observable. (There is an added subtlety if one wonders whether we, individually, have sufficient computational complexity to perform these acts of self-description, but we’ll leave that aside for now. Chalmers has nice discussions on this and on his views; it is also well-trodden territory.)

The second reason is not fundamental, I think, and will increasingly be seen to be false, as evidenced by the convergence of synthetic biology, neuroscience, brain-machine interfaces and robotics. In other words, the perception of technology as looking like concrete, steel and silicon and of biology as looking like slime, gels and squishy, curvy things is just a transient.

I believe that the fundamental ruler which can be applied to all constructs is something akin to cognitive complexity. I want to say intelligence, but the term is loaded and in common use is reduced in scope to apply to only part of the total information processing that sentient agents carry out. I don’t think my observation is very new. A rock, a snail, my iPhone, Pieter Abbeel’s towel-folding robot, beetles, pigs bred for food, us and future putative transhumans are linked along some spectrum of complexity. The problem, of course, is what we mean by this complexity, whether it really has just one axis along which you describe it (no) and (most important for where I want to go with this) whether or not it ultimately makes sense to describe computational agents as completely independent. I believe a big part of the problem when trying to formulate ethics which include animals, for example, is the artificial desire inherent in almost all ethical theories to insist that agents have sharp computational boundaries.

[Aside: In my mind, this observation does not rob any of these constructs of their beauty, importance, intrinsic value, ability to feel pain or suffer, ability to love or be loved, feel despair or existential angst, etc. For that matter, none of these arguments disallows the existence of a sufficiently complex entity deserving of the title of God, and I have enjoyed several drug-induced college discussions arising out of friends’ gymnastic attempts to produce this entity from accepted “scientific” concepts.]

Observation 3: Most modern practical decisions about exerting control over animals are based on arguments from utility

The experimentation on, modification, manipulation and consumption of animals has been controversial for a long time and is a very complex issue. In practice, the exertion of control (even in very extreme cases where severe suffering and pain are inflicted) by humans over animals has been accepted by large swaths of the populations of modern countries when:

  • the perceived utility of that exertion is very high (e.g. animals as food; promises of medical advances for human diseases through animal testing; animals moved, killed or modified to maintain or modify the state of habitats, ecological niches; instruction)
  • the animals in question are not perceived to have sufficiently complex mental processes to worry about them (e.g. sea anemones, most insects, bacteria, yeast, etc.).

In the more popular formulations (e.g. Peter Singer’s), what this means, roughly, is that ethical agents (us) should give equal consideration to all affected interests (including those of animals) when deciding whether an action is right (ethically permissible). Among the most central interests for computational entities is the interest in reducing suffering; another is the interest in maximizing enjoyment, or perhaps fulfillment (taken, I suppose, to mean that the entity in question is maximizing an output of importance given the resources it has available). The first appears more fundamental.

There is plenty to argue about in terms of the implementation details and absurd scenarios one can construct with a framework like that above. I find this reading endlessly fascinating; see, for example, the classic utility monster, the nature of hedonism, whether the evaluator must be outside the scenario to properly weigh interests, etc.

However, in the case where most computational agents are of approximately equal complexity or resource allocation, this scheme works well. (As an aside: this notion of computational parity as a requirement for moral agency seems to go back at least to Locke and is a core requirement in classic and modern contractarian theories, e.g. John Rawls’s A Theory of Justice.) For agents with sufficiently close complexity to ours, these evaluations are functionally possible. I can, to some reasonable approximation, decide whether the significant suffering of a pig on an industrial farm warrants the pleasure I derive from a ham sandwich. I can begin to think about what suffering (both human and animal) might be permissible to cure significant human diseases. This is implicitly because the agents under consideration are sufficiently close, computationally, that we can reference their suffering to ours and make decisions.

The problem emerges in the gray area I am interested in. To be clear, utilitarianism does not fundamentally require any parity among entities, just that there be individual entities with interests. Let’s say dozens of hobbyists in every major city form “cyborg clubs” where they enjoy each other’s company, show off their latest creations hacked from pieces of insects, bacteria, fungi and coelenterates, and get together for competitions. Let’s further assume this has minimal engineering or scientific utility (let’s ignore arguments about the educational value of the activities). Is this right? [Taken to its extreme, this becomes a long-recognized problem in art: certain patterns of information and matter clearly give enjoyment (and thus maximize somebody’s interests), but how do we reconcile this with any suffering involved in their creation if the piece has no other purpose? But this is a digression.]

We are clearly maximizing the fulfillment of humans. How do we compare this to an insect’s right to not have its interests frustrated? How do we even determine whether being part of a cyborg construction does frustrate its interests? It will likely feel pain, but perhaps this is transient.

As I see it, the problem breaks down into two parts:

1) Given a large enough complexity gap, we have no way of determining the other computational agent’s interests;

2) More importantly, we have no way of comparing interests even if we could measure them. Why? Because from our perspective the frustrated interest of a beetle is much smaller than the fulfillment of a hobbyist, but from the beetle’s perspective there is no other interest, and thus its interest, by its internal reckoning, is very large. Its interest is computationally tractable to itself in a way that the ineffable hobbyist’s interest is not. Thus, there is no way to compare. Put differently, the more powerful computational system always wins in this scheme.
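
To make the structure of this asymmetry explicit, here is a deliberately crude toy sketch. Every number, and the single “capacity” axis itself, is invented purely to illustrate the argument (I argued above that complexity does not really collapse onto one axis):

```python
# Toy illustration of the comparison problem in Point 2. All numbers are invented;
# "capacity" stands in for some one-dimensional proxy of computational complexity.

agents = {
    # name:       (stake, capacity) - stake is the fraction of the agent's own
    #                                 interest space bound up in the outcome
    "hobbyist": (0.01, 1e9),   # a sliver of a very large interest space
    "beetle":   (1.00, 1e3),   # essentially its entire interest space is at stake
}

# The evaluator's ruler (and the evaluator is always the more powerful agent):
# weight each agent's stake by that agent's capacity.
external = {name: stake * cap for name, (stake, cap) in agents.items()}

# Each agent's internal ruler: its stake relative to everything it has,
# which is just the stake itself as defined above.
internal = {name: stake for name, (stake, _) in agents.items()}

print("external ruler:", external)   # {'hobbyist': 1e7, 'beetle': 1e3} -> hobbyist wins
print("internal ruler:", internal)   # {'hobbyist': 0.01, 'beetle': 1.0} -> beetle wins
```

Whichever agent gets to pick the ruler wins, and it is always the more powerful computational system doing the picking; nothing in the scheme forces the two rulers into agreement.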

This issue of asymmetry, by the way, is explored by many contemporary ethicists in the context of animal rights, rights for the disabled, rights issues between wealthy and poor people and nations, etc. In the last instance, while I certainly will not claim that asymmetries in computational ability of individual moral agents necessarily arise from asymmetries of wealth, there are certainly advantages to wealth when it comes to information access, crowd-sourcing, the ready supply of intelligence and analysis, etc. I intend to come back to this point later, oddly enough.

In summary, what I claim is implicit in this type of utilitarian min/maxing is that:

  1. there are plateaus of computational complexity that make one system ineffable to a lower system. Thus, in a sense, we are not ineffable to our closest ape relatives, or at least not sufficiently so that we cannot make ethical comparisons. But a human is ineffable to a rabbit which is ineffable to a worm, and so on. Here, we can introduce more technical definitions of these plateaus, which might include the jump to meta-cognition, the ability to produce simulations of the future with current data, the ability to form models of pain, etc. There may be many of these “modules” which allow jumps in computational ability.
  2. The agents are not computationally linked.

I think Point 1 is inescapable. My hunch is that there’s some real fun to be had with Point 2 when trying to understand the stronger agent’s responsibilities to the weaker. That’s the direction I’m off to.

[I should mention that a notable exception to the utilitarian metric is the body of work detailing a capabilities approach, credited to Amartya Sen and Martha Nussbaum. Here, the idea is that our ethical norms should enshrine a specific, minimum set of rights which are afforded to all sentient beings (including animals). This minimum set protects against the type of wrangling above; it protects the weak from the strong in at least a minimal way. While I personally find the idea inspiring as it applies to modern issues such as poverty and disability (i.e. there must be a mathematical floor to utilitarian tradeoffs), the introduction of such a mathematical, immovable lower bound immediately introduces obvious absurdities (e.g. even if it takes the wealth and lives of an entire civilization to attempt to raise the conditions of one entity to meet the minimum requirements, this is an ethically required action). In any event, I don’t think it provides a solution to the problem above, and so I will table it for now.]
