When do animals become spare parts? Part 1

When do animals become spare parts?

©2012 Michel M. Maharbiz

The question that nags me is, put somewhat tongue-in-cheek: “Is it ethically permissible for a capable hobbyist at randomhackerspace.org to chop and maim a Goliath beetle, implant it with an embedded system and make it compete with other Goliath beetles at this spring’s high school robot competition?” I am not trying to create any idiotic hysteria about hackerspaces, by the way. I define hacker or hobbyist from here on as any technologically savvy, entrepreneurial human with access to hardware, software and maybe wetware/bio design and fab equipment, and I leave behind any political/countercultural/controversial baggage. I think this question addresses an immediate reality. Consider Hirotaka’s cyborg beetle: all of the devices in that paper are off-the-shelf and amount to about a US$300 investment; the hardware on the bug itself might run ~US$10–30.

For the present purposes, I am not interested so much in the question of whether such cyborgs could be used in bad ways (i.e. the ethics of what you do with constructs). There is a well-established literature on this that goes back a long way; for some recent discussion in the context of robots, see, for example, [P. Lin 2011; M. Coeckelbergh 2010]. I am interested not in what harm I can do with the modified organisms but in what harm is permissible to them [e.g. A. Taylor 1996] in some process of modification or conversion to construct.

I also do not think this musing applies very strongly to work where animals are modified or manipulated during the course of clear, vetted medical research. This is an ethical topic (an important one) about which much has been written and debated and to which I do not think, at present, I can contribute anything. The need for this distinction may be more practical than theoretical: I have a hunch that a vast gray area exists between training a dog to fetch and performing medical experiments on rats and monkeys (both of which are topics with well-thought-out ethical and moral discussions). If you throw accessible technology and a large group of competent hackers into this vast gray area, I think you get a weird world on the other side.

Lastly, this is a topic that has implications for (dare I say it) transhumanist thought [Verdoux 2011, Chalmers 2009]. I’ll get back to this below, but if ethical permissibility simply derives from an argument from utility for the most advanced moral agents and there is no minimum bar (in contrast to, say, the capabilities approach), then we’re going to have a hard time convincing our technological progeny not to rip our wings off to make interesting toys.

I will start with a series of observations.

Observation 1: As man-made computation and communication systems miniaturize, lower their power consumption and interface with biological constructs, the bar to modifying biological organisms will be very low

This basic dilemma is not new. There is a long history to man’s ever more successful attempts to introduce control into organisms, a history which extends from pre-industrial machines built to exert mechanical and behavioral control over animals, through the rise of modern biochemical engineering (which is built largely on introducing synthetic or synthetically improved biological functions into organisms), up to today’s emerging revolutions in neural interfaces and robotics.

What is new is that this capability will soon be (if it isn’t already) available to a large, technically savvy group of hobbyists. It should be obvious that as the computation and communication circuits we build radically miniaturize (i.e. become so low power that 1 pJ is sufficient to bang out a bit of information over a wireless transceiver; become so small that 500 µm² of thinned CMOS can hold a reasonable sensor front-end and digital engine), the barrier to sticking these types of interfaces into organisms will get pretty low. Put another way, the rapid pace of computation and communication miniaturization is swiftly blurring the line between the technological base that created us and the technological base we’ve created.
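To make this concrete, here is a back-of-envelope sketch (in Python, since it is easy to read). The ~1 pJ/bit figure comes from the sentence above; the battery numbers are illustrative assumptions of mine, not a spec for any actual payload:

```python
# Back-of-envelope: how far does ~1 pJ/bit go?
# The 1 pJ/bit radio figure comes from the text above; the 10 mAh
# battery is a hypothetical, illustrative payload, not a real spec.

PJ_PER_BIT = 1e-12     # joules per transmitted bit
BATTERY_MAH = 10       # hypothetical tiny battery capacity (mAh)
BATTERY_V = 3.0        # nominal cell voltage (V)

energy_j = BATTERY_MAH * 1e-3 * 3600 * BATTERY_V   # mAh -> coulombs -> joules
bits = energy_j / PJ_PER_BIT
print(f"{energy_j:.0f} J on board -> {bits:.1e} bits (~{bits / 8 / 1e12:.0f} TB)")
# ~108 J -> ~1.1e14 bits: terabytes of telemetry from a bug-sized pack.
```

Even allowing for large real-world inefficiencies on top of this, the energy budget is no longer the barrier.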

This specific issue has arisen before in different contexts (e.g. at the dawn of genetic engineering, during the latest synthetic biology hype, etc.). Possibly the most relevant recent discussions have arisen out of the DIY Bio movement (the latest incarnation of the “can I make designer biology by splicing genes in my garage?” trope). But since most of the organisms realistically accessible to modification are far down the computational complexity scale (prokaryotic cells, some constructs made from mammalian cells and, importantly, yeast and plants), with some obvious exceptions like the ethical discussions around making fluorescent bunnies and fish, DIY Bio discussions tend to focus on what harm could be done with the modified organisms rather than on intrinsic harm to the modified organisms.

A related literature [Stracey 2009, Zurr 2003, Mooney 2006] has recently questioned the value of bioart along the same lines that I am here: is it ethical to create art from biological constructs? This is perhaps closest to my concern, as art is obviously subjective and thus does not easily lend itself to reductionist arguments based on utility (at least not for its purest definition… obviously works of art can be shown to have utility, etc.). The problem with most of this literature is that, frankly, it answered nothing for me. Several papers were instrumental in articulating some of the issues, but very few provided anything approaching an answer to a question such as “Is it ethically permissible for a hobbyist down at randomhackerspace to chop and maim a Goliath beetle, implant it with an embedded system and make it compete with other Goliath beetles at this spring’s high school robot competition?”


Observation 2: There is no meaning to the descriptor “living”; only computational complexity matters

Most humans make a distinction between living things and non-living things and apply different standards. Consider this thought experiment: I write an app on my iPhone that emits the recording “Help! Help! I am in pain! Stop, please stop!”, or perhaps, simply begins to beep, R2D2-style, frantically, when the IMU detects violent shaking or the temperature sensor detects I placed it in the oven. Would anyone cry? Conversely, consider watching a single ant or a worm wiggle frantically while it’s dying on a hot plate.
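For concreteness, the entire thought-experiment app fits in a few lines; the thresholds and sensor inputs below are hypothetical stand-ins, not any real phone API:

```python
# A minimal sketch of the thought-experiment app. The thresholds and
# sensor inputs are hypothetical stand-ins; no real iPhone API is implied.

DISTRESS_CLIP = "Help! Help! I am in pain! Stop, please stop!"

def check_distress(accel_g: float, temp_c: float):
    """Return the distress recording if the phone is being 'hurt'."""
    if accel_g > 4.0:     # IMU reports violent shaking
        return DISTRESS_CLIP
    if temp_c > 60.0:     # temperature sensor says: oven
        return DISTRESS_CLIP
    return None
```

Nobody would grieve for this loop, which is rather the point.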

This appears to be for reasons that derive partially from:

  1. historically inherited vitalism, the idea that living things are a distinct category of being due to their possession of a “spark of life” of some kind;
  2. the fact that the technology base that produced us generates constructs (us, beetles, trees) that are usually aesthetically and functionally rather different from the constructs produced by the technology base we produced (i.e. chairs, buses, radios, turbines), so that people instinctively bin them separately;
  3. the fact that most people are not aware of any synthetic, intelligent constructs sophisticated enough to compare to the animals they run into in daily life;
  4. the perception that the belief stated above (that only computational complexity matters) would reduce our own intrinsic value (“we are the most important species in the world” or perhaps “I don’t feel special if I am also just an artifact of some process”).

Incidentally, I think this last issue is one of the fundamental sources of subtle cognitive dissonance that advanced 21st century societies have yet to resolve.

The first statement above is hard to sustain in the face of current knowledge but, because it relies on the invocation of something which can always be held outside the current, limited body of knowledge, it will never be settled. I will simply categorically state that, given the evidence, I believe there to be no extra-rational “spark of life”. This does not, by the way, eliminate the possibility that consciousness and sentience require some shared computational or information-theoretic paradigm or kernel; it just claims this is intelligible, categorizable and observable. (There is an added subtlety if one wonders whether we, individually, have sufficient computational complexity to perform these acts of self-description, but we’ll leave that aside for now. Chalmers has nice discussions on this and on his views; also well-trodden territory.)

The second statement is not fundamental, I think, and will increasingly be seen to be false, as evidenced by convergences in synthetic biology, neuroscience, brain-machine interfaces and robotics. In other words, the perception of technology as looking like concrete, steel and silicon and of biology as looking like slime, gels and squishy, curvy things is just a transient.

I believe that the fundamental ruler which can be applied to all constructs is something akin to cognitive complexity. I want to say intelligence, but the term is loaded and in common use is reduced in scope to apply to only part of the total information processing that sentient agents carry out. I don’t think my observation is very new. A rock, a snail, my iPhone, Pieter Abbeel’s towel-folding robot, beetles, pigs bred for food, us and future putative transhumans are linked along some spectrum of complexity. The problem, of course, is what we mean by this complexity, whether it really has just one axis along which you describe it (no) and (most important for where I want to go with this) whether or not it ultimately makes sense to describe computational agents as completely independent. I believe a big part of the problem when trying to formulate ethics which include animals, for example, is the artificial desire inherent in almost all ethical theories to insist that agents have sharp computational boundaries.

[Aside: In my mind, this observation does not rob any of these constructs of their beauty, importance, intrinsic value, ability to feel pain or suffer, ability to love or be loved, feel despair or existential angst, etc. For that matter, none of these arguments disallow the existence of a sufficiently complex entity deserving of the title of God and I have enjoyed several drug-induced college discussions arising out of friends’ gymnastic attempts to produce this entity from accepted “scientific” concepts.]

Observation 3: Most modern practical decisions about exerting control over animals are based on arguments from utility

The experimentation on, modification, manipulation and consumption of animals has been controversial for a long time and is a very complex issue. In practice, the exertion of control (even in very extreme cases where severe suffering and pain are inflicted) by humans over animals has been accepted by large swaths of the populations of modern countries when:

  • the perceived utility of that exertion is very high (e.g. animals as food; promises of medical advances for human diseases through animal testing; animals moved, killed or modified to maintain or modify the state of habitats and ecological niches; instruction)
  • the animals in question are not perceived to have sufficiently complex mental processes for us to worry about them (e.g. sea anemones, most insects, bacteria, yeast, etc.).

In the more popular formulations (e.g. Peter Singer’s), what this means, roughly, is that ethical agents (us) should give equal consideration to all affected interests (including animals’) when deciding whether an action is right (ethically permissible). Among the most central interests for computational entities is the interest in reducing suffering; another is the interest in maximizing enjoyment, or perhaps fulfillment (taken, I suppose, to mean that the entity in question is maximizing an output of importance given the resources it has available). The first appears more fundamental.

There is plenty to argue about in terms of the implementation details and absurd scenarios one can construct with a framework like that above. I find this reading endlessly fascinating; see, for example, the classic utility monster, the nature of hedonism, whether the evaluator must be outside the scenario to properly weigh interests, etc.

However, in the case where most computational agents are of approximately equal complexity or resource allocation, this scheme works well. (As an aside: this notion of computational parity being a requirement for moral agency seems to go back at least to Locke and is a core requirement in classic and modern contractarian theories, e.g. John Rawls’s A Theory of Justice.) For agents with sufficiently close complexity to ours, these evaluations are functionally possible. I can, to some reasonable approximation, decide if the significant suffering of a pig in an industrial farm warrants the pleasure I derive from a ham sandwich. I can begin to think about what suffering (both human and animal) might be permissible to cure significant human diseases. This is implicitly because the agents under consideration are sufficiently close, computationally, that we can reference their suffering to ours and make decisions.

The problem emerges in the gray area I am interested in. To be clear, utilitarianism fundamentally does not require any parity among entities, just that there be individual entities with interests. Let’s say dozens of hobbyists in every major city form “cyborg clubs” where they enjoy each other’s company, show off their latest creations hacked from pieces of insects, bacteria, fungi and coelenterates and get together for competitions. Let’s further assume this has minimal engineering or scientific utility (let’s ignore arguments about the educational value of the activities). Is this right? [Taken to its extreme, this becomes a long-recognized problem in art: certain patterns of information and matter clearly give enjoyment (and thus maximize somebody’s interests), but how do we reconcile this with any suffering in its creation if the piece has no other purpose? But this is a digression.]

We are clearly maximizing the fulfillment of humans. How do we compare this to an insect’s right to not have its interests frustrated? How do we even determine if being part of a cyborg construction does frustrate its interests? It will likely feel pain, but perhaps this is transient.

As I see it, the problem breaks down into two parts:

1) Given a large enough complexity gap, we have no way of determining the weaker computational agent’s interests;

2) more importantly, we have no way of comparing interests even if we could measure them. Why? Because from our perspective the frustrated interest of a beetle is much smaller than the fulfillment of a hobbyist, but from the beetle’s perspective there is no other interest and thus its interest (by its internal reckoning) is very large. Its interest to itself is computationally tractable (by it) in a way that the ineffable hobbyist’s interest is not. Thus, there is no way to compare. Put differently, the more powerful computational system always wins in this scheme.
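A toy numerical framing (every number below is invented purely for illustration) makes the asymmetry easy to see:

```python
# Toy illustration of the comparison problem; all numbers are invented.
# An agent's frustrated interest measured on its own scale is maximal;
# measured on the stronger agent's scale it is negligible.

agents = {
    # name: (interest at stake, total interest 'capacity')
    "beetle":   (1.0, 1.0),      # everything it has is at stake
    "hobbyist": (5.0, 1000.0),   # one hobby evening in a rich life
}

for name, (stake, capacity) in agents.items():
    print(f"{name:8s}  absolute stake = {stake:6.1f}   "
          f"relative stake = {stake / capacity:.3f}")

# On the absolute scale the hobbyist wins; on each agent's own relative
# scale the beetle wins. Neither scale is privileged, so the outcome is
# decided by whoever holds the ruler: the more powerful system.
```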

This issue of asymmetry, by the way, is explored by many contemporary ethicists in the context of animal rights, rights for the disabled, rights issues between wealthy and poor people and nations, etc. In the last instance, while I certainly will not claim that asymmetries in computational ability of individual moral agents necessarily arise from asymmetries of wealth, there are certainly advantages to wealth when it comes to information access, crowd-sourcing, the ready supply of intelligence and analysis, etc. I intend to come back to this point later, oddly enough.

In summary, what I claim is implicit in this type of utilitarian min/max’ing is that:

  1. there are plateaus of computational complexity that make one system ineffable to a lower system. Thus, in a sense, we are not ineffable to our closest ape relatives, or at least not sufficiently so that we cannot make ethical comparisons. But a human is ineffable to a rabbit which is ineffable to a worm, and so on. Here, we can introduce more technical definitions of these plateaus, which might include the jump to meta-cognition, the ability to produce simulations of the future with current data, the ability to form models of pain, etc. There may be many of these “modules” which allow jumps in computational ability.
  2. The agents are not computationally linked.

I think Point 1 is inescapable. My hunch is that there’s some real fun to be had with Point 2 when trying to understand the stronger agent’s responsibilities to the weaker. That’s the direction I’m off to.

[I should mention that a notable exception to the utilitarian metric is the body of work detailing a capabilities approach, credited to Amartya Sen and Martha Nussbaum. Here, the idea is that our ethical norms should enshrine a specific, minimum set of rights which are afforded to all sentient beings (including animals). This minimum set protects against the type of wrangling above; it protects the weak from the strong in at least a minimal way. While I personally find the idea inspiring as it applies to modern issues such as poverty and disability (i.e. there must be a mathematical floor to utilitarian tradeoffs), the introduction of such a mathematical, immovable lower bound immediately introduces obvious absurdities (e.g. even if it takes the wealth and lives of an entire civilization to attempt to raise the conditions of one entity to meet the minimum requirements, this is an ethically required action). In any event, I don’t think it provides a solution to the problem above, and so I will table it for now.]


When do animals become spare parts? Part 0

When do animals become spare parts?

©2012 Michel M. Maharbiz

For some time, I have been grappling with aspects of the ethics of the insect work we do in my lab. Historically, I had taken the point of view of the “amoral scientist”. Put bluntly, this says “look, humans are very bad at predicting how technology gets used after its creation. My job is to innovate and I will stay out of philosophical discussions about technology creation or use in my role as an innovator.” As time has gone on, it has become increasingly clear to me that technology development is far from value neutral [Winner 1980].

This line of inquiry – the ethical use of technology, the lack of value neutrality in technology development, etc. – is well-trodden ground [Zerzan 2005, Winner 1980]. What kept me up most at night, however, was not the ethics of what we did with the constructs made in the lab, but the ethics of what we did to the insects. This did not arise from a sense of direct empathy with the insects, to be honest; we were working with insects that do not have, to the best of our knowledge, anything resembling meta-cognition, self-awareness or complex emotional states, and that even have rather different responses to pain [Eisemann 1984]. In a strange way, the issue built gradually because insects seemed a kind of gray ethical area: they are not trees (which, at worst, are covered by deep ecology and environmental ethics) and they are not non-human sentients (like, say, an ape). Where does that leave us?

I also came to realize how increasingly accessible these types of interventions can be (in terms of cost, technological accessibility, etc.). It is at that point that I decided to try and clear up my own thinking on the matter. The blog posts that follow are my attempt to structure my opinion on this issue. As I am, at best, a well-read amateur on the subject, I decided to write everything out and, once I did, I thought it might be constructive for me to share (within my group and then outside). I certainly make no claims to originality here; this is me thinking out loud on paper.

In the end, I think this issue boils down to how we define moral agents and, further, how we deal with power asymmetries among moral agents. That is a topic of constant interest, but it’s fun to think about it from our technical perspective.

(circa 2007) Where might we look for inspiration?

(first posted on my website ~2007)

(a version of this section appeared, with references, in Ismagilov & Maharbiz, 2007)

Provided with sufficiently advanced interface technology (see also this other Musing), are there existing multicellular systems that can be modified in useful ways? Are existing organisms too complex or lack the plasticity necessary for modification? Among the well-studied developmental biology animal models, including the fruit fly (Drosophila melanogaster), the zebrafish (Danio rerio), the sea urchin (Arbacia punctulata), and the chicken (Gallus gallus), some systems are more amenable to chemical manipulation.  The zebrafish, for example, is transparent, develops around a simple sphere (the yolk), and develops normally even if the impermeable chorion is removed.  However, simpler models may provide even better substrates for building functional biological machines.

Hydra

The millimeter-scale Hydra vulgaris and its close relatives are nature’s simplest multicellular organisms possessing a neural net. A hydra has no central nervous system. Instead, it has a web of neurons that link chemical and mechanical sensors to primitive musculature, a system sophisticated enough to enable opportunistic feeding on tiny animals wandering into its tentacles. Hydra is much simpler than a mammalian system in a number of ways. It has two (not three) dermal layers, where the outer skin cells serve as both epithelia and innervated muscle. The neurons of the hydra can be stimulated locally and globally with simple electrodes. In addition, the hydra can reproduce by budding. If separated into fragments as small as a few cells, most fragments re-organize themselves into appropriate dermal layers, where cells divide, migrate, and correctly re-form a new hydra in several days. Gradients of chemical signals have long been implicated in establishing and maintaining the hydra’s body plan, and several recent chemical screening efforts have been aimed at identifying putative signaling compounds and their roles. How far could a hydra’s geometry and neuron-musculature be re-patterned by using a microchemical interface device? Are genetic modifications required? Given recent interest in hybrid metal-muscle devices, the hydra presents an attractive alternative to mammalian muscle constructs.

Volvox and communal algae

Volvox are colonial green algae which assemble into spheroids of tens to thousands of cells. The line between microorganism colony and multicellular organism blurs as one examines the spectrum of Volvox sub-species. In the larger organisms, cells arrange themselves precisely within an extracellular matrix, differentiate into somatic and reproductive cells, collectively locomote towards light, reproduce new spheroids in a coordinated fashion, and are capable of sexual reproduction with other colonies. Moreover, the sex-inducing pheromone of Volvox carteri is one of the most potent signaling compounds known; a 100 aM concentration is sufficient to engage the sexual reproduction pathway. Could Volvox be a template for chemically modulated self-assembly? A recent result suggests that extracellular, matrix-mediated self-assembly can be used to form simple multicellular aggregates similar to those seen in Volvox.
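It is worth pausing on just how dilute 100 aM is; a two-line sanity check:

```python
# How dilute is 100 aM? A quick sanity check on the pheromone claim.
AVOGADRO = 6.022e23
conc_mol_per_L = 100e-18                     # 100 attomolar
print(f"{conc_mol_per_L * AVOGADRO * 1e-6:.0f} molecules per microliter")  # ~60
```

Roughly sixty molecules in a microliter of water is enough to flip the colony’s reproductive program.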

Plants

A more immediately useful system may be present in vascular plants. It has long been known that plant vasculature is assembled through a combination of chemical signaling and apoptosis (programmed cell death). The prevailing hypothesis is that the tips of growing plants emit auxin, which is transported by downstream cells towards the roots. Cells experiencing the highest auxin concentrations reinforce their walls (with lignin and other compounds), form connections to nearby cells undergoing the same process, and finally commit suicide, leaving networks of empty vessels through which water and nutrients flow. This process remains active into adulthood; if the vasculature is wounded, auxin builds up locally and nearby cells are recruited to form new vascular channels. Exogenously applied auxin is known to trigger vascular growth towards the source. In this fashion, plants have solved three long-standing engineering problems that still plague modern microfluidic systems: fluidic interconnections across scales ranging from the micro- to the macroscale (plant vasculature links the smallest leaf capillaries to the largest trunk arteries), the ability to withstand large pressures without embolism (bubble formation), and high-velocity fluid transport without active pumps.
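Part of what makes this so tempting to an engineer is that the canalization story above is essentially an algorithm. A deliberately crude one-dimensional caricature (a diffusing source, a sink and a differentiation threshold; real auxin transport involves active flux feedback and much more):

```python
import numpy as np

# A crude 1-D caricature of vascular patterning: a growing tip 'emits'
# auxin, it diffuses toward a sink (the root), and cells above a
# threshold 'lignify' into vessel. This only illustrates the
# algorithmic flavor; it is not a faithful model of canalization.

N, STEPS, D = 50, 5000, 0.2       # cells, time steps, diffusion rate
auxin = np.zeros(N)

for _ in range(STEPS):
    auxin[0], auxin[-1] = 1.0, 0.0                 # tip source, root sink
    lap = np.roll(auxin, 1) + np.roll(auxin, -1) - 2 * auxin
    lap[0] = lap[-1] = 0.0                         # pinned boundaries
    auxin += D * lap

vessel = auxin > 0.5                               # differentiation threshold
print("".join("#" if v else "." for v in vessel))  # '#' marks vessel cells
```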

Additionally, a plant’s chemical processing and metabolism is mediated via the vasculature. Lastly, it is a plant’s vasculature in dead form, the secondary xylem, that gives wood its amazing structural range from balsa’s lightness to bamboo’s hardness. Could we co-opt this system to microfabricate vascular networks?

Synthetic constructs

It may be that existing multicellular systems are too complex or too developmentally inflexible for microchemical control of their developmental machinery. For example, microfluidic interface technology has previously been used to show that the development of the Drosophila embryo is robust under the environmental perturbation of a temperature step. When the two halves of the embryo are maintained at different temperatures, the two halves develop at different rates. Nevertheless, when the temperature step is removed sufficiently early, the embryo resynchronizes the two halves and proceeds to develop normally. Future experiments utilizing microchemical interface technology may enable understanding of the mechanisms responsible for robustness of development and may uncover the limits beyond which developmental programs cannot be perturbed. If existing organisms prove too inflexible, the answer may lie in the approaches of synthetic biology. Could we take simple microorganisms, add the right chemical signaling genes, and direct their growth with microchemical interface technology? A recent result demonstrates that prokaryotes can be genetically modified to produce synthetic pattern formation. A number of robust pattern generation systems have been studied for decades, both at the experimental and theoretical level. These include Turing reaction-diffusion systems, simple gradient generators and chemotaxis models. Could synthetic, addressable pattern generators be inserted into prokaryotes? This is a completely open question.
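Of the pattern generators just listed, the Turing-type reaction-diffusion systems are the easiest to play with on a laptop. Below is a minimal Gray-Scott sketch, a standard two-species reaction-diffusion model; the parameter values are common “spots” settings from the literature, not anything specific to the experiments mentioned above:

```python
import numpy as np

# Minimal Gray-Scott reaction-diffusion, a classic Turing-type pattern
# generator. Parameters are standard 'spots' values from the literature.

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # seed a small square...
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
V += 0.01 * np.random.rand(n, n)         # ...plus a little noise

def lap(Z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * lap(U) - uvv + F * (1 - U)
    V += Dv * lap(V) + uvv - (F + k) * V

# V now holds a spotted pattern; matplotlib's imshow(V) will show it.
```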

(circa 2006) We are not just building four-assed monkeys, or: Why?

(first posted on my website ~2007)

I am largely motivated by a single, broad assumption of mine: tomorrow’s everyday technologies will be dominated by ‘organic’ machines. In other words, when I dream of the future – 10, 50, 100 years from now – I don’t see hulking metallic monstrosities or sleek mirror-like vehicles or rooms filled with aluminum and plastic. I see machines built from what we would now call ‘living things’: tables derived from [what today we would call] plant cell lines, which breathe your office air and use ambient light for energy to fix themselves or grow new parts; houses whose walls are alive and whose infrastructure hosts an ecology [more on ecology below] of organisms that perform tasks both microscopic and macroscopic; computational elements whose interfaces completely blur the line between cell and chip, organ and peripheral.

It is not trivial to defend this notion (nor is this idea at all new). Is there really a reason to do this? (i.e. “Are we just building four-assed monkeys?”) Is it technologically feasible? How would we do it? Even more field-specific: what does all this have to do with microsystems? Isn’t this synthetic biology? (Yes.) Aren’t there ethical considerations?

Developmental biology and machines

Every time a tree or a flea or a human reproduces, a complex program is set in motion that fabricates a new organism. For more than a century, the science of developmental biology has worked to unravel (to reverse engineer) the rules that organisms use to fabricate themselves. Understanding these processes has, of course, led to monumental advances in our quality of life: developmental biology is intimately linked to medicine at all levels. Learning what rules neurons use to weave and repair nets, or what programs drive muscle repair, leads to improvements in health care and treatment. The great progress in tissue and organ engineering is fundamentally driven by understanding of developmental biology.

But, beyond medicine, there’s something more fundamental:

[idea 1] there is an underlying fabrication technology that makes nature’s machines and we don’t use it

We don’t make machines the way nature makes them. We do know, for example, how to take whole muscles, or pieces of muscles, or even muscle-precursor cells, form them into muscle-like constructs and graft them onto things to make actuators [see, for example, Bob Dennis’ robot fish]. But these grafts are clumsy at best; we certainly do not grow entire machines from scratch that way!

Chemical messages and microtechnology

Developmental biology and its fabrication products are complex. Exactly how cells organize themselves into working tissues, organs, etc. is still the subject of much research and debate. But, I think we have enough information to make one global statement:

[idea 2] cells constantly carry out an internal program which has inputs and outputs to the environment and other cells; this I/O often takes the form of chemical and mechanical signals.

So what? Well, if this is true, we should be able to do two things: a) hack the internal program, b) hack the I/O. The first endeavor has already begun, mostly with microbial organisms: it’s called Synthetic Biology. Much of today’s synthetic biology is devoted to designing and building, in what an engineer would call a bottom-up approach, simple gene and metabolic programs into cells. [Imagine finding an alien computer lying in the sand with no user’s manual. After several hundred years of trying to figure out how it works, you now try to build a tiny microcontroller with 10 lines of code with what you’ve learned.]
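The canonical example of such a bottom-up gene program is the repressilator [Elowitz & Leibler 2000]: three genes wired in a ring, each repressing the next, yielding oscillating protein levels. A minimal ODE sketch of its dimensionless form (parameter values are common textbook choices, picked only to show the oscillation):

```python
from scipy.integrate import solve_ivp

# Minimal sketch of the repressilator: three genes in a ring, each
# repressing the next. Dimensionless mRNA/protein ODEs; parameters are
# common textbook values chosen only to produce oscillations.

alpha, alpha0, beta, n_hill = 216.0, 0.216, 5.0, 2.0

def repressilator(t, y):
    m, p = y[:3], y[3:]                       # mRNA and protein levels
    dm = [-m[i] + alpha / (1 + p[(i - 1) % 3] ** n_hill) + alpha0
          for i in range(3)]
    dp = [-beta * (p[i] - m[i]) for i in range(3)]
    return dm + dp

sol = solve_ivp(repressilator, (0, 100), [1, 0, 0, 2, 1, 3], max_step=0.1)
# sol.y[3:] oscillates: each protein peaks in turn around the ring --
# a ten-line 'program' running inside a living cell.
```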

The second endeavor is more closely related to what academics like to call micro or nanotechnology.

[idea 3] we can build generic interface systems with enough spatial and temporal resolution to affect how growing organisms develop.

In other words, I believe that with the right machine we can directly interface with a seed or an embryo (or a completely synthetic lump of cells) as it grows and change the I/O the cells are receiving. I think we can do this with existing organisms [see Essay 1], but I also think we will eventually do this with the completely man-made constructs of synthetic biologists [engineers?]. It is exactly for this reason that my group designs and builds the devices it does: we believe precise control of oxygen, nitric oxide, proteins, etc. during development will eventually allow us to hack the I/O and fabricate new things. [Hopefully this is not where the four-assed monkeys come in, although Alphonse Mephisto is such a cool name.]

Obviously, the technical challenges are immense. The machines we are building now are very basic and have limitations. Getting messages into and out of cells in real time is daunting and is made more so if you deal with three-dimensional geometries. Reporting on the conditions in the cells is fairly slow at the moment (GFP proteins take ~0.3–1 hr to fold), although advances are being made rapidly. The developmental programs we want to hack are complex, very redundant and have had millions of years to adapt to environmental insults. This will require the efforts of many people.

You might ask how this all connects to research in other areas which are already co-opting nature’s processes to make wonderful new things. This leads to the last idea of the essay:

[idea 4] a biological cell, as defined by convention and in its many varieties, is the fundamental building block for the proposed technology.

This is not as trivial as it seems. This is what is fundamentally different from efforts which seek to understand biological and biochemical processes in order to employ them outside of their native environment. For me, the cell is the engine of fabrication; it is the basic Lego™ block in the machine’s architecture.

Ethics and ecologies

Ethical considerations can’t be ignored. The debate over the ethics of altering living organisms has been raging for quite some time, and many people have written on this. A lot of the groundwork will be laid in the next few years by synthetic biology and its attempts to cope with these issues. But we will not just fundamentally disturb individual organisms. If we do hack the complete fabrication of organisms, our technology will increasingly use the language of nature. It will interface with natural systems more naturally than modern machines do. This is obviously cause for concern, but I think its impact is likely to be positive. Our world is already a host for countless large and small systems of interacting organisms; the study of these systems is known as ecology. If our technology becomes more organic, our man-made systems will begin to merge with these ecologies. Our stewardship of the planet will become more apparent and more direct. In a sense, it will allow us to return to a communion with the earth that has increasingly been lost by the direction our non-organic industrialization has taken. It also means we’ll be able to cause damage, and that danger cannot be overstated.