Saturday, November 21, 2009


Quantum Consciousness

Sir Roger Penrose, OM, FRS (born 8 August 1931) is an English mathematical physicist and Emeritus Rouse Ball Professor of Mathematics at the Mathematical Institute, University of Oxford and Emeritus Fellow of Wadham College. He has received a number of prizes and awards, including the 1988 Wolf Prize for physics which he shared with Stephen Hawking for their contribution to our understanding of the universe. He is renowned for his work in mathematical physics, in particular his contributions to general relativity and cosmology. He is also a recreational mathematician and philosopher.

Born in Colchester, Essex, England, Roger Penrose is a son of Lionel S. Penrose and Margaret Leathes. Penrose is the brother of mathematician Oliver Penrose and correspondence chess grandmaster Jonathan Penrose. Penrose was precocious as a child. He attended University College School. Penrose graduated with a first class degree in mathematics from University College London. In 1955, while still a student, Penrose reinvented the generalized matrix inverse (also known as the Moore-Penrose inverse; see Penrose, R. 'A Generalized Inverse for Matrices.' Proc. Cambridge Phil. Soc. 51, 406-413, 1955). Penrose earned his Ph.D. at Cambridge (St John's College) in 1958, writing a thesis on 'tensor methods in algebraic geometry' under algebraist and geometer John A. Todd. He devised and popularised the Penrose triangle in the 1950s, describing it as "impossibility in its purest form", and exchanged material with the artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it. In 1965 at Cambridge, Penrose proved that singularities (such as black holes) could be formed from the gravitational collapse of immense, dying stars (Ferguson, 1991: 66).
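The generalized inverse from Penrose's 1955 paper is characterized by four algebraic conditions. As a minimal sketch (the example matrix, its pseudoinverse, and the helper functions are illustrative choices, not from the source), the conditions can be checked directly in pure Python:

```python
# Check the four Moore-Penrose conditions for a hand-computed pseudoinverse
# of a non-square matrix that has no ordinary inverse.

def matmul(X, Y):
    """Naive matrix product of nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def close(X, Y, eps=1e-12):
    """Element-wise approximate equality."""
    return all(abs(a - b) < eps for r1, r2 in zip(X, Y) for a, b in zip(r1, r2))

A  = [[1.0], [1.0]]   # a 2x1 matrix (not invertible in the ordinary sense)
Ap = [[0.5, 0.5]]     # its Moore-Penrose pseudoinverse A+

# Penrose's four defining conditions:
c1 = close(matmul(matmul(A, Ap), A), A)              # A A+ A   = A
c2 = close(matmul(matmul(Ap, A), Ap), Ap)            # A+ A A+  = A+
c3 = close(transpose(matmul(A, Ap)), matmul(A, Ap))  # (A A+)^T = A A+
c4 = close(transpose(matmul(Ap, A)), matmul(Ap, A))  # (A+ A)^T = A+ A
```

All four conditions hold, which (by Penrose's result) makes Ap the unique pseudoinverse of A.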

In 1967, Penrose invented the twistor theory which maps geometric objects in Minkowski space into the 4-dimensional complex space with the metric signature (2,2). In 1969 he conjectured the cosmic censorship hypothesis. This proposes (rather informally) that the universe protects us from the inherent unpredictability of singularities (such as the one in the centre of a black hole) by hiding them from our view behind an event horizon. This form is now known as the "weak censorship hypothesis"; in 1979, Penrose formulated a stronger version called the "strong censorship hypothesis". Together with the BKL conjecture and issues of nonlinear stability, settling the censorship conjectures is one of the most important outstanding problems in general relativity. Also from 1979 dates Penrose's influential Weyl curvature hypothesis on the initial conditions of the observable part of the Universe and the origin of the second law of thermodynamics. Penrose wrote a paper on the Terrell rotation.

Roger Penrose is well known for his 1974 discovery of Penrose tilings, which are formed from two tiles that can only tile the plane nonperiodically, and are the first tilings to exhibit fivefold rotational symmetry. Penrose developed these ideas from the article Deux types fondamentaux de distribution statistique (1938; an English translation Two Basic Types of Statistical Distribution) of Czech geographer, demographer and statistician Jaromír Korcák. In 1984, such patterns were observed in the arrangement of atoms in quasicrystals. Another noteworthy contribution is his 1971 invention of spin networks, which later came to form the geometry of spacetime in loop quantum gravity. He was influential in popularizing what are commonly known as Penrose diagrams (causal diagrams). In 2004 Penrose released The Road to Reality: A Complete Guide to the Laws of the Universe, a 1,099-page book aimed at giving a comprehensive guide to the laws of physics. He has proposed a novel interpretation of quantum mechanics. Penrose is the Francis and Helen Pentz Distinguished (visiting) Professor of Physics and Mathematics at Pennsylvania State University.
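Why the two-tile Penrose tilings can never repeat periodically can be glimpsed from their substitution ("deflation") rules. A minimal sketch, assuming the commonly quoted counts for the P3 rhomb tiling (each thick rhomb yields 2 thick plus 1 thin; each thin rhomb yields 1 thick plus 1 thin):

```python
# Count thick and thin rhombi under repeated deflation of a P3 Penrose
# tiling.  The counts follow a Fibonacci-like recurrence, so the
# thick/thin ratio tends to the golden ratio - an irrational number,
# which is why no periodic arrangement of the two tiles is possible.

def deflate(thick, thin, steps):
    for _ in range(steps):
        thick, thin = 2 * thick + thin, thick + thin
    return thick, thin

thick, thin = deflate(1, 0, 10)
ratio = thick / thin   # approaches (1 + 5**0.5) / 2
```

Starting from a single thick rhomb, four deflations already give 34 thick and 21 thin tiles (consecutive Fibonacci numbers).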

Penrose is married to Vanessa Thomas, with whom he has one child. He has three sons from a previous marriage to American Joan Isabel Wedge (1959).

From Wikipedia

Quantum Consciousness

Polymath Roger Penrose takes on the ultimate mystery
John Horgan, Scientific American, November 1989

Roger Penrose is slight in figure and gentle in mien, and he is an oddly diffident chauffeur for a man who has just proposed how the entire universe - including the enigma of human consciousness - might work. Navigating from the airport outside Syracuse, N.Y., to the city's university, he brakes at nearly every crossroad, squinting at signs as if they bore alien runes. Which way is the right way? he wonders, apologizing to me for his indecision. He seems mired in mysteries. When we finally reach his office, Penrose finds a can labeled "Superstring" on a table. He chuckles. On the subject of superstrings - not the filaments of foam squirted from this novelty item but the unimaginably minuscule particles that some theorists think may underlie all matter - his mind is clear: he finds them too ungainly, inelegant. "It's just not the way I'd expect the answer to be," he observes in his mild British accent. When Penrose says "the answer," one envisions the words in capital letters. He confesses to agreeing with Plato that the truth is embodied in mathematics and exists "out there," independent of the physical world and even of human thought. Scientists do not invent the truth; they discover it.

A genuine discovery should do more than merely conform to the facts: it should feel right, it should be beautiful. In this sense, Penrose feels somewhat akin to Einstein, who judged the validity of propositions about the world by asking: Is that the way God would have done it? "Aesthetic qualities are important in science," Penrose remarks, "and necessary, I think, for great science."

I interviewed Penrose in September while he was visiting Syracuse University, on leave from his full-time post at the University of Oxford. At 58 he is one of the world's most eminent mathematicians and/or physicists (he cannot decide which category he prefers). He is a "master," says the distinguished physicist John A. Wheeler of Princeton University, at exploiting "the magnificent power of mathematics to reach into everything." An achievement in astrophysics first brought Penrose fame. In the 1960's he collaborated with Stephen W. Hawking of the University of Cambridge in showing that singularities - objects so crushed by their own weight that they become infinitely dense, beyond the ken of classical physics - are not only possible but inevitable under many circumstances. This work helped to push black holes from the outer limits of astrophysics to the center.

In the 1970's Penrose's lifelong passion for geometric puzzles yielded a bonus. He found that as few as two geometric shapes, put together in jigsaw-puzzle fashion, can cover a flat surface in patterns that never repeat themselves. "To a small extent I was thinking about how simple structures can force complicated arrangements," Penrose says, "but mainly I was doing it for fun." Called Penrose tiles, the shapes were initially considered a curiosity unrelated to natural phenomena. Then in 1984 a researcher at the National Bureau of Standards discovered a substance whose molecular structure resembles Penrose tiles. This novel form of solid matter, called quasicrystals, has become a major focus of materials research [see "Quasicrystals," by David R. Nelson; Scientific American, August 1986].

Quasicrystals, singularities and almost every other oddity Penrose has puzzled over figure into his current magnum opus, The Emperor's New Mind. The book's ostensible purpose is to refute the view held by some artificial-intelligence enthusiasts that computers will someday do all that human brains can do - and more.

The reader soon realizes, however, that Penrose's larger goal is to point the way to a grand synthesis of classical physics, quantum physics and even neuropsychology. He begins his argument by slighting computers' ability to mimic the thoughts of a mathematician. At first glance, computers might seem perfectly suited to this endeavor: after all, they were created to calculate. But Penrose points out that Alan M. Turing himself, the original champion of artificial intelligence, demonstrated that many mathematical problems are not susceptible to algorithmic analysis and resolution. The bounds of computability, Penrose says, are related to Gödel's theorem, which holds that any mathematical system always contains self-evident truths that cannot be formally proved by the system's initial axioms. The human mind can comprehend these truths, but a rule-bound computer cannot.

In what sense, then, is the mind unlike a computer? Penrose thinks the answer might have something to do with quantum physics. A system at the quantum level (a group of hydrogen atoms, for instance) does not have a single course of behavior, or state, but a number of different possible states that are somehow "superposed" on one another. When a physicist measures the system, however, all the superposed states collapse into a single state; only one of all the possibilities seems to have occurred. Penrose finds this apparent dependence of quantum physics on human observation - as well as its incompatibility with macroscopic events - profoundly unsatisfying. If the quantum view of reality is absolutely true, he suggests, we should see not a single cricket ball resting on a lawn but a blur of many balls on many lawns. He proposes that a force now conspicuously absent in quantum physics - namely gravity - may link the quantum realm to the classical, deterministic world we humans inhabit. That idea in itself is not new: many theorists - including those trying to weave reality out of superstrings - have sought a theory of quantum gravity.
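The superposition-and-measurement picture described above can be illustrated numerically. This is a toy sketch of the standard Born rule, not Penrose's proposal; the equal-amplitude two-state example is an assumption chosen for illustration:

```python
import random

# A two-state superposition a|0> + b|1>.  On measurement, outcome 0
# occurs with probability |a|^2 and outcome 1 with probability |b|^2
# (the Born rule); only one outcome is ever observed per measurement.
a = b = 2 ** -0.5            # equal superposition (illustrative choice)
p0, p1 = a * a, b * b        # probabilities from squared amplitudes

random.seed(0)               # fixed seed for reproducibility
trials = 10_000
zeros = sum(1 for _ in range(trials) if random.random() < p0)
freq = zeros / trials        # empirical frequency of outcome 0, near 0.5
```

The simulation only reproduces measurement statistics; what Penrose questions is the mechanism by which the superposition collapses to one outcome at all.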

But Penrose takes a new approach. He notes that as the various superposed states of a quantum-level system evolve over time, the distribution of matter and energy within them begins to diverge. At some level - intermediate between the quantum and classical realms - the differences between the superposed states become gravitationally significant; the states then collapse into the single state that physicists can measure. Seen this way, it is the gravitational influence of the measuring apparatus - and not the abstract presence of an observer - that causes the superposed states to collapse. Penrosian quantum gravity can also help account for what are known as non-local effects, in which events in one region affect events in another simultaneously. The famous Einstein-Podolsky-Rosen thought experiment first indicated how nonlocality could occur: if a decaying particle simultaneously emits two photons in opposite directions, then measuring the spin of one photon instantaneously "fixes" the spin of the other, even if it is light-years away. Penrose thinks quasicrystals may involve nonlocal effects as well. Ordinary crystals, he explains, grow serially, one atom at a time, but the complexity of quasicrystals suggests a more global phenomenon: each atom seems to sense what a number of other atoms are doing as they fall into place in concert. This process resembles that required for laying down Penrose tiles; the proper placement of one tile often depends on the positioning of other tiles that are several tiles removed.

What does all this have to do with consciousness? Penrose proposes that the physiological process underlying a given thought may initially involve a number of superposed quantum states, each of which performs a calculation of sorts. When the differences in the distribution of mass and energy between the states reach a gravitationally significant level, the states collapse into a single state, causing measurable and possibly nonlocal changes in the neural structure of the brain. This physical event correlates with a mental one: the comprehension of a mathematical theorem, say, or the decision not to tip a waiter.

The important thing to remember, Penrose says, is that this quantum process cannot be replicated by any computer now conceived. With apparently genuine humility, Penrose emphasizes that these ideas should not be called theories yet: he prefers the word "suggestions." But throughout his conversation and writings, he seems to imply that someday humans (not computers) will discover the ultimate answer - to everything. Does he really believe that? Penrose mulls the question over for a moment. "I guess I rather do," he says finally, "although perhaps that's being too pessimistic." Why pessimistic? Isn't that the hope of science? "Solving mysteries, or trying to solve them, is wonderful," he replies, "and if they were all solved that would be rather boring."

Monday, November 9, 2009

Natasha Vita-More

What is transhumanism? And, by extension, what is extropy?

Natasha Vita-More (born 1950 as Nance Clark, New York) is a media artist and theorist known for designing "Primo Posthuman." This future human prototype incorporates biotechnology, robotics, information technology, nanotechnology, and cognitive science and neuroscience for human enhancement and extreme life extension.

Vita-More is the Director of H Lab for scientific and artistic design collaborations. Vita-More is currently a Visiting Scholar at Twenty-First Century Medicine. She is a PhD candidate at the Planetary Collegium, University of Plymouth. Her thesis concerns human enhancement and extreme life extension. She holds a B.F.A., University of Memphis; filmmaker-in-residence, University of Colorado; M.Sc., University of Houston; M.Phil. University of Plymouth.


In 1982, Vita-More authored the Transhuman Statement; produced and hosted the cable TV show TransCentury Update on human futures, reaching over 100,000 viewers in Los Angeles and Telluride from 1985 to 1992; and founded Transhumanist Arts and Culture in 1993. She was the Chair of the “Vital Progress Summit” in 2004, establishing a precedent for proaction of human enhancement. She was the president of the Extropy Institute from 2002 to 2006. She currently advises non-profit organizations including the Center for Responsible Nanotechnology, Adaptive A.I., and the LifeBoat Foundation, is a Fellow of the Institute for Ethics and Emerging Technologies, and has been a consultant to IBM on the future of human performance.

This interview was conducted by Venessa Posavec 12/20/07

V: What is transhumanism? And, by extension, what is extropy?

NVM: Transhumanism reflects philosophies of life, such as extropy, that seek the continuation and acceleration of the evolution of intelligent life beyond the currently human biological form, and addresses human limitations by means of science and technology guided by, basically, ethical, life-promoting, positive and practical principles and values. Specifically, transhumanism is a set of ideas which represents a worldview to improve the current situation that we as humanity are facing, which includes short lifespan, limited cognitive abilities, limited sensory abilities, erratic emotions, and going to the larger sector of peoples – the problem with so many people suffering in the world from starvation, or lack of housing, or lack of, basically, getting any of the necessary fundamental needs met that very much affects transhumanism as looking at a world view. And, therein, we support critical thinking in the development of sciences and technologies to extend life, eradicate aging, solve problems of disease, and encourage and enhance intellectual, creative, physical and mental well-being. In this regard, it is essential to be aware of the possible dangers that lie ahead. That is why the proactionary principle is so vital in fighting the bias towards advancing the human condition. And there lies the crucial examination of potential dangers, that not only affect transhumanists, but the entire world. We look ardently at how technologies, including the NBIC technologies – nanoscience, bioscience, information science, and cognitive science – can possibly be used to help solve some of the problems in the world that address humans being stuck in a state of stasis. It’s about time we really knuckled down and started helping people around the world rather than just talking about it, so transhumanism does look at that, of course.

V: How do you address the argument that transhumanism is not an extension of humanism, but rather in direct opposition to what it means to be human?

NVM: Well, there are some issues with that. I’ve heard this, and thank you for bringing it up, because it is a very important problem to be looked at very carefully – not strategically – but carefully. Actually, transhumanism is not in opposition to what it means to be human, but in order to understand humanism and what it means to be human, we have to discuss first, what is a human and what does it mean to be human. For example, a father, a new father of a child, might have a different opinion than a politician. Or a literary scholar may have a different opinion than a religious fundamentalist. There’s no codified, definitive definition for what it means to be human, because the question, in large part, is subjective. Each human has a value system and a set of emotions and a set of experiences that determine for him or her what it means to be human. But in its simplest sense, what it means to be human is based on our biology. A human has a body, and that body is biological. As such, it has a set of chromosomes and genes, intelligence, and sensory and perceptual awareness. The single most complex issue is in accepting mortality of humans. And that’s where humanism comes in. Humanism accepts the mortality of being human in a biological state. The single most complex issue and aim for transhumanism is the emotive desire and the intellectual reason to extend life past the accepted human lifespan, which is, what, 121, 122 years, or somewhere around that particular timeframe. In order to achieve the aim of extending life, the human mind, body, and identity would have to become something other than strictly biological. It will have to incorporate technological methods to construct a regenerative existence for humans. So, that’s the crux of the matter. Humanists do not look at the next step of human evolution like the transhuman or posthuman or whatever we might become in the future. Humanism deals with the here and now pretty much.
Transhumanism, on the other hand, looks ahead, is planning, is more critically minded and progressive about dealing with those things. But I think the real issue lies with what it means to be human.

V: Do you think the transhumanist meme is spreading?

NVM: Oh yes, I do. I think so because the ideas which were so avant-garde in the 1980s and even in the 1990s are now headlining the world’s most popular literature. The ideas which were so visionary and radical early on now have become the fodder for political debate. And once ideas get into the arena of political arguments and academic arguments, philosophical arguments, ethical arguments, and even in the business sector, it means that there’s a maturation process that has occurred, and the ideas we talked about early on in the 80s and the early 90s are now issues to be considered practical, probable, and even preferred futures for many people. So I think the true sign of it is we’ve caused enough trouble where people have taken us seriously. We’re no longer just science fiction or avant-gardes, we’re people with views and vision that is alarming and frightening a heck of a lot of people, but at the same time, thank goodness, a lot of people are starting to go, “Wow. This is possible. We could do these things.” And the world sorely, desperately needs a bit of practical optimism and some problem solving.

V: What events or medical advances have you seen that support the ideal of being a transhuman?

NVM: Well, you know, it’s interesting that you ask this question, because I just finished a paper for inclusion in a book on forecasting what is probable within the next 25, 30 years, so I had to do a lot of research in looking at what is possible, not just fairy tale or Pollyanna looking at it, but really what’s going on. I looked at the issues that face people, and what people – humanity and society – are really concerned about. And that is, number one, dealing with specific diseases that cause problems for children, Tay Sachs, sickle cell anemia, other diseases that totally degenerate the body and the mind. And at the same time, looking at the enormous number of people who desperately need transplants – their organ has become diseased and they need a new organ, and what’s going on in that realm. As well as, the enormous number of people who suffer from paralysis, whether it’s full-body paralysis or semi-body paralysis. So, I looked at those particular areas, and it’s amazing to realize what’s going on. For example, xenotransplantation, using pig organs to transplant into humans, has been a really radical step in medicine, however that’s becoming, not passé, but becoming overlooked by the possibility of regenerating our own organs. So, in short, what we’re looking at in all these different areas of disease and paralysis and difficulty getting organs is the up and coming area of medicine, technology, and science – well, basically, biotechnology – which is looking at regenerative medicine. What is happening in biotechnology is regenerating the areas of the body that have become diseased so that those areas regenerate the cells and repair the cell damage and the organ. For example, if you have a diseased organ, you can clone your own organ, put it back in the body, or better yet, regenerate the cells that are diseased. This, in and of itself, will affect the entire body and mind.
So, you take the brain, and you have a neurological disease, and the regenerative process of the neurons in the brain will help return you to a state of better cognitive capability.

V: So, if the transhumanist philosophy was actualized, and we lived in this posthuman world, unaffected by aging or degeneration, what would that world look like?

NVM: Oh boy. I just love this question. What an exciting idea to think about. Well, this morning I was watching testa(?) on YouTube at a rave, and I was looking at the enormous light and color and fascination and sound and movement, and I thought, this is what the future could be. One big, giant rave. But seriously, I think that there’s a lot to be said about the rhythm of music, and the rhythm of people in a state of bliss or trance in expressing a communicative and mobile attitude of exchange and communication. So, in short, for me, if I could be an instrumental part of the design team, it would be stunningly pleasant, fluid, and interconnected. Where we could connect with each other 24/7, or just drop out at our whim and not communicate with anyone, where we would be able to teleport to any location at any time, where our ability to [inaudible] and boulders in the road.

V: I just read a post somewhere about the Fermi Paradox, and the reason that we haven’t made contact with any other intelligent civilization is because perhaps their world is exactly like that, and they’re just in this perfect utopian pleasure world, and they have no desire to explore out any further.

NVM: That’s an interesting thing that you bring that up. I find that the concept of perfection and perfect is such an oxymoron, because if one was to reach a state of perfection, that would be so disingenuous to our creative process and our cognitive process. Perfection is a state of stasis, and therefore it would no longer be a state of bliss, it would be a state of almost erosion and entropy, because perfection is a dead end. So, I like the idea of becoming, and that continuous becoming and exploring. So, in my vision of the posthuman future, it wouldn’t stop at a state of perfection, it would continually be achieving and looking and pursuing. And I’m kind of a nice person, so I would have to say that my world would be different than the Fermi Paradox world, it would be reaching out to help. Maybe that’s the woman in me, I don’t know.

V: What kind of an impact do you think that these different advances would have on our society – societal impacts – of these regenerative methods and technologies. What would that do to population, if we’re all healthy and living forever?

NVM: Ok, good question. I think that we have to multi-track, and it’s one thing that many of us don’t do, and I have to catch myself at it as well. If we multi-track, that would be looking at the different domains of expertise and knowledge simultaneously, and oftentimes one domain, like science, will exceed and maybe culture will lag behind, or maybe the arts will shoot far ahead in vision and potential, and perhaps economics or politics or education will lag behind. So, looking at it from a strategist point of view, and looking at a schema of events, not all domains of knowledge move forward at the same pace. So, one may lead, then the other may lead, in this massive complex adaptive system. If human beings live for longer periods of time – I try not to use “immortal” or “forever”, because I don’t know what the future holds, and I don’t know what forever means. But let’s just say extreme life extension or extended or super-longevity, if that were to occur, and regenerative medicine did help people with disease and keep us in a state of health and well-being, which would be absolutely lovely, that means that the population would not only balance out, but would probably grow, unless people, as the trend is now, had fewer and fewer children. But, having children is lovely, so let’s not take that out of the equation. What would happen for all practical purposes, would be that we develop habitats, environments off the planet, and we start building habitats on the moon, and near Earth orbit, and we start expanding out into our solar system. And this is really a practical thing to do, because society has always reached out to the next island or the next continent to expand and explore and develop. So, it’s part of our innate humanness, or what it means to be human, one of the characteristics or behavioral characteristics I would include would be the essential desire and almost need to expand beyond, to go to the next place, and build and explore and develop.
With the XPrize having done so well recently, and new advances in getting our rocket boosters off the planet, and developing new types of architecture for near Earth orbit and perhaps on the moon and Mars, I think it’s reasonable if not just common sensical that we would be building habitats off planet, and I’m sure they would eventually become very lovely and soothing and enjoyable. So, I think that would eradicate a problem of overpopulation. But, the interesting thing there is, this whole myth that the old should die and make way for the young, may suit people in their middle ages, but there’s a lot of old people who don’t really want to die. They want to make room for the young to be sure, for their grandchildren or their friends’ grandchildren. But, their life is very valuable too. My mother is in her 90s, and she is still a very valuable, lovely, generous, spontaneous person, and I think she’s enjoying life, so I wouldn’t want her to die just to make room for someone else. So, I think we have to have a whole paradigm shift in our reasoning about life and sustainability. Life is not something just to throw away because it gets wrinkled. Life is something to nurture. And here’s the paradox of getting old: we become wiser and more knowledgeable and more compassionate as we get older, and then we are expected to die. I would like to change that, very much.

V: A few questions about some of the institutes that you have contributed to. What is the Extropy Institute?

NVM: Extropy Institute is the founding organization of transhumanism. It was developed in the late 1980s, early 1990s, as a pioneering transhumanist organization known for its visionary foresight in the future. And it was the pioneering organization that put transhumanism on the map. It had conferences and high gloss magazines sold in bookstores, and brought the ideas of nanotechnology, artificial intelligence, cloning, extreme life extension, space exploration, biotechnology, all of these emerging technologies, it brought it out into the mainstream as much as possible. So, it has been a crucial pioneering catalyst for transhumanism.

V: What future plans are there for the institute?

NVM: Well, the institute closed down 2 years ago, but it’s not forgotten to be sure. We closed it down because we achieved our first and most finessed goal, which was to memetically engineer transhumanism through the codified philosophy of extropy. And once that was realized, we felt that it was time for the board and our advisors and all our members to go out on their own and build their own organizations, because that is part and parcel of the philosophy of extropy. Spread more memes, and sprout more fruit. That is what is occurring now. We are rebuilding an extropian network, or a network of extropy, currently, and will continue having summits, probably will have a summit in 2008. The extropy network bringing high minds to high places. I’m not sure what the goal will be. Our last summit was on countering the precautionary principle in the United States and in a few other countries, where President Bush’s bioethics committee was saying transhumanism is the most dangerous idea, and ardently fighting levels of progress to improve the human condition. And we thought that was a really terrible, disingenuous thing to do for humankind, especially Americans, since we’ve had so much trouble with our reputation and our behavior across the planet. So, we took that on. I think we did quite well. Our keynotes were Marvin Minsky and Ray Kurzweil, and Greg Fay and Max More, and myself and Anders Sandberg, and it was a great conference. It was a great summit. It was all virtual, it was the first virtual summit, and so we were very pleased with the success of that. So, I will be organizing one in 2008.

V: What is Transhumanist Arts & Culture?

NVM: Transhumanist Arts & Culture was developed in the early 1980s, when I had my cable television show in Los Angeles and in Telluride, Colorado, called TransCentury Update, based on the transhuman condition. It was basically to bring together creative people in the arts and the sciences, and technologies, to consider what the role of the artist and of art is today and in the future. Throughout history, artists, along with science and technology, have been a voice and vision of civilization. And artists as communicators, have an ability, a marvelous ability, to reach out to others and introduce insight and vision about society and culture. So I thought that artists and the arts, bringing that together based on transhumanism, could engender some passion and dreams and hopes for humanity, and express it through various mediums, like NET, and media art, robotics, artificial general intelligence, interactive media, animation, film, etc. Basically, it was just to get this group of creative people together artistically, whether they were pronounced artists or not.

V: What are some resources that could help people better understand the transhumanist philosophy?

NVM: I think the best resource, and I mean this in all objectivity, is still Extropy Institute, because it was the pioneering organization of transhumanism, and the website is still up, so it’s a great resource. That’s at I think Transhumanist Arts & Culture is a good site too, because it has a FAQ, and approaches transhumanism from a more creative, visionary perspective of arts and technology, and that’s Another site is Anders Sandberg’s website. He’s now in England, but it was a Swedish website, and I think it was the original encyclopedic website for transhumanism, so that’s . The World Transhumanist website is pretty good. It does have a bit of bias, because it tends to be overtly political and not inclusive, so I’m hoping that will be changed through the new executive director James Clemet and the new board. I am an honorary vice-chair of that organization, and I’m hoping that its future is going to be positive. But, basically just Googling ‘transhumanism’ is pretty good. I think some of the most ardent writing is done by Max More, who is the author of the philosophy of transhumanism. Some of his papers are excellent.

V: What is the Singularity?

NVM: In one sentence, the Singularity is a time when supercomputers become smarter than human intelligence.

V: Ok. And, what would that mean?

NVM: That would mean that supercomputers, through artificial general intelligence, are able to teach themselves and outsmart humans in all practical sense. The thinking processes of supercomputers will far exceed our human ability to solve problems and reason. The interesting thing therein with the Singularity and supercomputing power, is that the computers will teach themselves. So it will be this dynamo effect where the supercomputers get smarter and smarter and smarter. And, the smarter you are, the more knowledge you have. It might happen very quickly, it could happen more slowly, no one knows what the timeframe of the Singularity will be, but the assumption is that when it hits, it will hit hard, unless our human potential, our human cognitive ability, our human sensibility takes a look at this, and we say to ourselves, “Ok, we’re going to merge more with machines”. Because, if supercomputers became smarter than us, more intelligent than us, then it would not be very good for our future as a species. So, that’s the threat of the Singularity. If we don’t prepare for it, then we could be left behind. And I think this is not science fiction, it is something we need to very seriously consider. And I think transhumanism is one social movement that is considering this very deeply and seriously and with all force ahead, because it is possible that this could happen.

V: How do you think transhumanism is related to the Singularity? Or, do you think the Singularity is a necessity in order to achieve the advances that will give us the richer, healthier life that the transhumanist ideals cover?

NVM: A paradigm shift is necessary. The Singularity could be the paradigm shift that would shake up the world. But, it may not look like a Singularity to people within the environment of change. It could come in slow strides, or one big wave. Transhumanism is related to this because, whether it comes in slow strides or one big wave, we are aware of it, we are thinking about it, we're talking about it, we're writing about it, we're holding conferences on it, we're bringing it to the mainstream, so that everyone can understand that supercomputing power, bringing about supercomputing intelligence, could be something we'd want to be totally aware of, not dumbed down to. So, transhumanism's role here, one of its key roles, is to look at the issues, look at the possibilities, strategically plan for it, come up with scenarios, and help educate the public about the effects of high-end supercomputing power. With artificial intelligence, one might say, "Ok, let's look at this big soup or this smorgasbord: how does artificial intelligence relate to the Singularity?" Well, artificial intelligence is the intelligence of supercomputing. That's what it is, basically. Now, it could be top down, bottom up, neural networks; whatever the formula is, creating a vast, strong intelligence is the issue. If that's not human intelligence, what is it? For the history of our species and civilization, the human mind, the human capability, the cognitive ability, has been said to be smarter, more intelligent than all other species, and we've held that as our amulet, 'we have it, you don't' type of thing, which is a hierarchy of species, to be sure. But, one thing that is vital to human nature in most people's standards is that we are the most intelligent animal. Well, ok, what if we're not? How would we deal with that? How would we look at that? How would that affect our species, our culture, our humanity, if indeed the human being is no longer the most intelligent, the most capable, the best problem-solving species on the planet? So, that would reduce us to a position of being secondary to something else. And what if that something else is an artificial intelligence or supercomputing power? How would we deal with that? Probably not very well. So, what can we do now? What we can do now is be aware of it, understand it, make preparation for it, and integrate with it. So what we could do, in my estimation, is become the supercomputers. And that's what the posthuman is. And that's one aspect of looking at the future, that would be one scenario: that we merge with the machines, the supercomputing capability, and we become the future species, the evolution of human with this new animal, let's call it. It's not an animal, obviously, it's not biological, but its mechanism is based on regenerative processes. If we merge with that, and that's what the posthuman could be, then we might better situate ourselves.

V: What, in your words, is a futurist?

NVM: I think there are several different types of futurists. There's the normative futurist, there's the strategic futurist, there's the artistic futurist, there's the visionary futurist. So, if I put all these together, and sculpted a quintessential futurist, I think it would be someone who was able to look at the future with scrutinizing eyes and not get lost in Pollyanna reasoning, but to ardently consider how timeframes involve various domains, which, like I said, is multi-tracking: various domains where one could take the lead and another lag behind, but how this is a shifting type of environment. It's crucial for a futurist to be wide-eyed and bushy-tailed about the future, without allowing his or her desire to…. no, I'm gonna stop here, because I don't think that's right. I think I'm just gonna say, what is a futurist? A futurist is a person who considers the consequences of the future and does his or her best to help strategize and develop scenarios that would help educate others about the future.

V: What trends are you aware of that people should be looking at?

NVM: Regenerative medicine, how that might affect their life and the lives of their loved ones. A second trend that I think we all need to be aware of is that religious dogma, the pervasiveness of religious wars, will eventually become a moot point, so I think we need to start preparing, and start practicing in earnest a compassion and understanding and sense of diversity and acceptance of different people's religious views, because that is the trend of the future. This "I'm right, you're wrong" is passé, it's so 20th Century. I think that we all need to be practicing a new framework and language in accepting the differences and the different beliefs and gods that many people share. Another trend that I think is crucial for people to start paying attention to is equal to the religious dogma – political dogma. There is no 21st Century politics that actually is 21st Century. Most of the political platforms and behaviors are very 20th Century, they're based on 'I'm right, you're wrong', it's very talking heads, and two-dimensional. The future of politics is going to be very immersive, very connected. It's going to be an interactive, connected intelligence of determining individual rights, and self-ownership of rights. I think that preparing for that would be to start moving away from 'I'm a Democrat, you're a Republican', or 'I'm a Libertarian, you're a Socialist', or any of these languages and mindsets that keep us really in a state of dogma, and are so inappropriate today. Another trend that I think is crucial to pay attention to is the sense that space exploration is coming about, and to start looking at what it might be like to actually live in different habitats, in near Earth orbit, or on the moon. With that in mind, another trend that I think is crucial is that we are going to live longer. So, I think people need to start preparing their finances. I mean, it's never too late to put aside 10% of your income.
People think if they're over 50, and they didn't do that when they were 20, then what's the point now, because "I'm going to die in 20 years." Well, that's an old-world mindset. I think that if you start preparing anytime, that is just fine. So, this whole retirement vision, and ageist vision, I think is going to become wiped from our memories. So, the trend there is to think youthful, think vital, think healthy. As far as all the technology trends and scientific trends, I'm sure most of your guests have already gone over them. Artificial general intelligence will come about in 20-30 years, and that's going to be very exciting. Nano, molecular manufacturing, where each person has an MM machine on their desktop rather than a printer, to build molecular manufacturing projects just like you'd print out a color picture, will be a trend of the future. Nanotechnology is pretty much the buzzword these days. And therein nanomedicine. Going back to regenerative medicine, nanomedicine, giving credit to Robert Freitas, is taking nanorobots into the body and repairing cell damage that way, which is indeed part of the regenerative process. Other trends, I think, are how we look at ourselves as humans. I think for the first time in the history of our humanity, our civilization, we're going to realize that we may not be the end result. And this is going to cause an enormous paradigm shift for all of humanity. Similar to, perhaps, the paradigm shift that was caused when we realized that we were not at the center of the universe, or that the world was not flat, that the world was actually round and we wouldn't fall off an edge. So, some of the biggest trends leading up to these major shifts may seem not as exciting as some of the greatest technologies and sciences.
But, sometimes, just the self realization, that this little shift of perception can cause enormous reverberations that if we’re not prepared to think about, could really cause some backlashes in society and in the behavior of society.

V: So, 2008 is right around the corner. Do you have any specific predictions for this upcoming year?

NVM: Wikipedia falls short. It is unveiled that Wikipedia is run by a few people that dominate its information base. Wikipedia has to stand up and get a good attorney. I think that Wikipedia may find itself in a lot of trouble for manipulating knowledge, and presenting itself as a knowledgeable media source of information, where it's not such. I think that could be very news-worthy. I don't know, 2008, that's right here. So, what could happen in a year? What could happen in a year? I think one of the maybe obvious things that will happen in 2008, so it's not really a prediction, it's just an insight, is that we will develop a new species. It's already on the drawing board, and has been happening. I'm not saying I support it or I don't support it. But it looks like it's going to happen. May or may not be a good thing, I don't know. As far as investing in technologies, I think nanotechnology will be developing more and more patents. But as far as a real forecast of something terribly exciting, I have no idea. I'm not very good at making predictions, because I don't think it's smart for futurists to make predictions, because they only turn around and slap us in the ass afterwards. You always find that you predict something, and then it doesn't happen, and that hits the news. So, to protect myself from making a fool out of myself, I'm not going to make any hard and fast predictions. But, I think it would be absolutely fabulous if we actually figured out a way to have space tourism. I just think in 2008 that would be fabulous. I'm going to give you my hopeful prediction. This is my hopeful prediction: that the United States becomes loved by the world again. (laughs) That would be my hope, that something extraordinary happens, where the United States quits dictating to other cultures and peoples how to live and how to behave, and we just kind of take a step backwards and become a kinder, more intelligent nation again.

V: That’s a tall order.

NVM: I know. But, wouldn't that be lovely? Because, Americans are such generous people. We need to just clean up our act, for goodness' sake.

V: How about some predictions for the next 5 years, through 2012?

NVM: Through 2012 – I think we're going to be able to regenerate many organs of the body instead of having transplants, and that could be through cloning a cell from that organ, or having nanomedicine or genetic engineering regenerate the organs. I think that would be astounding, and beneficial to people all over the world, the hundreds and thousands and even millions of people who are suffering from disease in their organs. So, I think that is a major trend, I think it's totally possible, if not probable. I think reversing aging will take even greater strides within this timeframe, because plastic surgery and regenerative medicine there are really taking leaps and bounds. As far as transportation – I don't know if we're going to be able to fix some of the problems with pollution and transportation, it's just so vile. I really just don't like it. I've been hopeful in the past, and my dreams didn't come true. And it's not that I'm being pessimistic about it at all, I'm being practical. People want to have their cars. And they want their cars their way. And they want to drive faster, and bigger, and whatever. I'm gonna leave that one alone, because I would like cars to be put in prison. I think they're just vile killers. It would be lovely if there was a new methodology for transportation, like if we shared automobiles – picked one up and dropped one off at locations. I think that would eradicate a lot of the problems. I think one of the trends in this timeframe that would be near and dear to everyone is that science and technology and smart thinking develop ways to actually reverse global warming trends. This could be done through a number of engineering approaches, but I think nanotechnology will be crucial in that, as well as regenerative medicine. I think a lot of our promise for making trends or predictions for this timeframe would be based on the ability to get nanotechnology through and get it working and solving some problems.
Some of the downsides of that are legislation, and a lot of conservatives, technoconservatives basically, that would fight nanotechnology because of a fear of runaway technology or gray goo assemblers or whatever. But I think we have to really deal with the problems on the planet. I think this timeframe would be about looking at the planet and dealing with some of the problems, and that we all become more ardent environmentalists, but not crazy environmentalists like Greenpeace, or the 'greens', or any of these people that want to go back to villages without telephones. I think we need to be really smart about it. We're kind of on the cusp of things happening. It's hard to say things are going to happen in 5 or 10 years because you don't know what discontinuities will come about, so making any type of prediction within any given timeframe is difficult, unless you go out, say, 20 years, 30 years, 40 years. I think what happens is the least expected thing to happen. I'm not very good at this, I'm sorry.

V: Do you have any general predictions for the next 10 years, through 2017?

NVM: In ten years, I think we will have a very different voting system in politics, and a very different outlook on how to govern nations, how to govern people. I think that in 10 years, there may be a revitalizing of the United Nations, and there will be new types of world councils to deal with specific problems, so the United Nations will not hold a monopoly on how the world communicates. I think we need more rigorous organizations. I think in 10 years we will see a very bountiful shift in the way women allow themselves to be treated in some of the areas of the world where their rights are extremely limited, and their whole personhood, their bodies, are shamed. In 10 years, that will be sorely addressed, because it's been on the drawing table for so long, and it's reaching a point almost where it's at a vortex, something has to happen quickly. So I think in 10 years, because things don't happen as quickly as we like, that women in the Middle East, women in Africa, women in India and China, and lots of the areas around the world where their rights have been limited and they've been sequestered to suffer in very confined living sensibilities, I think that will shift, and it will be one of the most marvelous shifts in the world, because I think that women have been treated so devastatingly poorly. So, that's something I look forward to, and I think it will happen. I feel very confident. With communication, and especially the internet and getting word out and these small grassroots groups that are helping these women, I think it will reach a point where the self esteem and the pride and the sense of being will far exceed the rule imposed on them by their religious brutes.


Wednesday, November 4, 2009


Nanotechnology - the Present and Future

Kim Eric Drexler (born April 25, 1955 in Alameda, California) is an American engineer best known for popularizing the potential of molecular nanotechnology (MNT), which he began developing in the 1970s and 1980s. His 1991 doctoral thesis at MIT was revised and published as the book "Nanosystems: Molecular Machinery, Manufacturing, and Computation" (1992), which received the Association of American Publishers award for Best Computer Science Book of 1992. He also coined the term grey goo.


K. Eric Drexler was very strongly influenced by ideas on Limits to Growth in the early 1970s. His response in his first year at the Massachusetts Institute of Technology was to seek out someone who was working on extraterrestrial resources. He found Dr. Gerard K. O'Neill of Princeton University, a physicist famous for his work on particle accelerators and his landmark work on the concepts of space colonization. Drexler was involved in NASA summer studies in 1975 and 1976. Besides working summers for O'Neill building mass driver prototypes, he delivered papers at the first three Space Manufacturing conferences at Princeton. The 1977 and 1979 papers were co-authored with Keith Henson, and patents were issued on both subjects, vapor phase fabrication and space radiators.

During these studies, he fabricated metal films a few tens of nanometers thick on a wax support to demonstrate the potential of high-performance solar sails. He was also active in space politics, helping the L5 Society defeat the Moon Treaty in 1980.

During the late 1970s, he began to develop ideas about molecular nanotechnology (MNT). In 1979, Drexler encountered Richard Feynman's provocative 1959 talk There's Plenty of Room at the Bottom. The term nanotechnology was coined by the Tokyo Science University Professor Norio Taniguchi in 1974 to describe the precision manufacture of materials with nanometer tolerances, and was unknowingly appropriated by Drexler in his 1986 book Engines of Creation: The Coming Era of Nanotechnology to describe what later became known as molecular nanotechnology (MNT). In that book, he proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity. He also first published the term "grey goo" to describe what might happen if a hypothetical self-replicating molecular nanotechnology went out of control.

Drexler holds three degrees from MIT. He received his B.S. in Interdisciplinary Sciences in 1977 and his M.S. in Astro/Aerospace Engineering in 1979, with a Master's thesis titled "Design of a High Performance Solar Sail System." In 1991 he earned a Ph.D. under the auspices of the MIT Media Lab (formally, the Media Arts and Sciences Section, School of Architecture and Planning). His Ph.D. work was the first doctoral degree on the topic of molecular nanotechnology, and (after some editing) his thesis, "Molecular Machinery and Manufacturing with Applications to Computation," was published as "Nanosystems: Molecular Machinery, Manufacturing and Computation" (1992), which received the Association of American Publishers award for Best Computer Science Book of 1992.

Drexler and Christine Peterson, at that time husband and wife, founded the Foresight Institute in 1986 with the mission of "preparing for nanotechnology." Drexler and Peterson ended their 21-year marriage in 2002. Drexler is no longer a member of the Foresight Institute.

In August 2005 Drexler joined Nanorex, a molecular engineering software company based in Bloomfield Hills, Michigan, to serve as the company's Chief Technical Advisor. Nanorex's nanoENGINEER-1 software was reportedly able to simulate a hypothetical differential gear design in "a snap". According to Nanorex's web site, an open source molecular design program is currently slated for release in Fall 2007.

In 2006, Drexler married Rosa Wang, a former investment banker who works with Ashoka: Innovators for the Public on improving the social capital markets.


Nanotechnology - the Present and Future - An Interview with Dr K. Eric Drexler, Chairman of The Foresight Institute.

With the current major investment in nanotechnology R&D resulting in a myriad of practical discoveries and applications, nanotechnology is set to drive major breakthroughs in almost any sector of technology.

Combined with the suitability of computational tools to this emerging science, it came as no surprise that nanotechnology was a hot topic of discussion and debate at AccelrysWorld 2004.

Accelrys caught up with Dr K. Eric Drexler, founder of The Foresight Institute, and talked about the present and future goals of this enabling science.

The Foresight Institute is a non-profit organization, founded in 1986, that promotes communication among researchers, helps disseminate research results to a broader audience (the general public, policy makers, and venture capitalists), and facilitates the formation of companies, networks, and collaborations in the field.

Nanotechnology is a name that gets widely touted today. "The original definition is based on the development of molecular machine systems able to build a wide range of products inexpensively, with atomic precision," said Dr Drexler. "The term now covers a wide range of cutting edge areas of technology. As a result, new developments of great value are coming out that are under this label."

Dr Drexler believes that some of these breakthroughs will have revolutionary potential, with applications ranging from aircraft and antibiotics to integrated circuits. "Molecular manufacturing promises a comprehensive revolution in our ability to manipulate the structure of matter. It's about bringing digital control to the atomic level and doing so on a large scale at low cost. It's very difficult to overstate the significance of that to physical technology, economics, medicine, and military affairs."

As the majority of molecular modeling and simulation software operates at the nanoscale, these tools will play a major part in the development and application of this technology. "Developing molecular manufacturing systems involves the use of molecular machine systems, and molecular machine systems are well modeled by molecular mechanics," explained Dr Drexler. "To examine the chemical reactions that are at the heart of construction involves the use of packages that address molecular physics at the level of quantum theory, but most of the system, most of the complexity, most of the novelty, is at the level of hundreds to thousands to millions of atoms in structures that are well described by molecular mechanics approximations."

"The vital role of molecular modeling in this field is to enable engineering design, at the component and systems level, to set the objectives that then will guide the laboratory efforts at physical implementation."

So what are the practical applications of this science and what can we all look forward to in the future? There are a number of products on the market today that have been developed using nanotechnology, such as nanoparticles in sunscreens, and carbon nanotubes in strong materials.

Dr Drexler explained that an important goal is devices with improved properties. "The earliest major results are likely to be in the field of molecular sensors that use molecular machinery for their active elements in moving and sensing the structures involved." A good example would be a DNA reader. "I think a natural early goal for a molecular machine centered nanotechnology development programme would be a DNA reader that enables you to obtain, from a blood sample, a CD with your genome on it, after only a day of chip time. The chip will have molecular machines sitting on top of microelectronic circuitry, using kilobase-per-second read heads."
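As a rough sanity check on that "day of chip time" figure, one can estimate how many parallel kilobase-per-second read heads such a chip would need. This is a back-of-the-envelope sketch, not part of the interview; the genome size used below (~3×10^9 base pairs) is an assumption I have supplied.

```python
import math

GENOME_BASES = 3e9       # assumed human genome size, in base pairs (not from the interview)
READ_RATE = 1e3          # bases per second per read head (the "kilobase per second" figure)
SECONDS_PER_DAY = 86400

# Bases a single read head can process in one day of chip time
bases_per_head_per_day = READ_RATE * SECONDS_PER_DAY

# Parallel read heads needed to cover the whole genome once in a day
heads_needed = math.ceil(GENOME_BASES / bases_per_head_per_day)
print(heads_needed)  # → 35
```

On these assumptions, only a few dozen read heads working in parallel would cover the genome in a day, which fits Drexler's picture of many molecular machines sitting on top of microelectronic circuitry.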

Dr Drexler was philosophical about the longer-term future of nanotechnology, explaining that the goal is atomically precise fabrication: guiding the motion of molecules to produce precise rearrangements of atoms, typically transferring one to several atoms at a time. But there is caution to be heeded. "The belief that nanotechnology is about building by picking up and putting down single atoms is technically misleading," warned Drexler. "This has been a basis for some of the misunderstanding that has plagued the field." Dr Drexler believes that this has been part of the reason that many chemists have failed to examine the original research literature that would have shown this perception to be incorrect.

Putting any misconceptions aside, Dr Drexler believes that manufactured novel nanoscale machines will one day become reality. "These machines already exist in nature, and researchers have already begun to redesign protein molecules to have novel functions as enzymes." Dr Drexler thinks that a reasonably well-defined and attractive milestone en route to nanoscale machines would be a piece of molecular machinery comparable in size and complexity to a ribosome. "A machine that, like a ribosome, can use digital data to guide the atomically precise construction of polymeric materials with predictable and tailored properties, but designed and implemented faster and more easily."

Dr Drexler believes computational tools will play an important role in achieving these goals. "I think that every significant advance in new artificial molecular machine systems will be based on molecular modeling," believes Drexler. "People will not put into practice ideas without first testing them by simulation. The simulations can be used to refine the designs to within engineering margins of safety, eliminating resources wasted on 'non-starters'."

"This methodology, led by molecular simulation, will be at the heart of the engineering process that will lead us forward into this new world of technology," concluded Dr Drexler.


Tuesday, October 27, 2009


Are You Living In a Computer Simulation?

Nick Bostrom (born Niklas Boström in 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk and the Anthropic principle. He holds a PhD from the London School of Economics (2000). He is currently the director of The Future of Humanity Institute at Oxford University.

In addition to his writing for academic and popular press, Bostrom makes frequent media appearances in which he talks about transhumanism-related topics such as cloning, artificial intelligence, superintelligence, mind uploading, cryonics, nanotechnology, and the simulation argument.

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies. Bostrom currently serves as the Chair of both organizations. In 2005 he was appointed Director of the newly created Oxford Future of Humanity Institute. In 2009, Bostrom was the philosophy finalist for, and a potential first-ever recipient of, the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement.

Are You Living In a Computer Simulation?

Department of Philosophy, Oxford University


Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.
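Later in the full paper (beyond this excerpt), Bostrom makes the fraction argument sketched above quantitatively precise. The following is a reconstruction in his notation as published, not part of this excerpt: f_p is the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations such a civilization runs, and H̄ the average number of individuals who lived before it became posthuman.

```latex
% Fraction of all observers with human-type experiences that are simulated:
f_{\mathrm{sim}}
  = \frac{f_p \,\bar{N}\,\bar{H}}{f_p \,\bar{N}\,\bar{H} + \bar{H}}
  = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

If f_p·N̄ is large, f_sim is close to one, so almost all minds like ours would be simulated; this is the probabilistic core that the paper's later sections defend.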

Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.


A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).


At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away. Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.

Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. As we are still lacking a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second. Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits. However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.)

The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second. Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.
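The synapse-based estimate above can be reproduced with a back-of-the-envelope calculation. The figures used below are rough order-of-magnitude values commonly cited in the literature, not precise measurements:

```python
# Back-of-the-envelope reconstruction of the synapse-based estimate of
# the brain's processing power. All figures are rough order-of-magnitude
# values, not precise measurements.

num_synapses = 1e14        # ~10^14 synapses in the human brain (rough)
firing_rate_hz = 1e2       # ~100 Hz as an upper-end average firing frequency
ops_per_event = 1          # treat each synaptic firing as ~1 operation

brain_ops_per_sec = num_synapses * firing_rate_hz * ops_per_event
print(f"{brain_ops_per_sec:.0e}")  # ~1e+16, consistent with the 10^16-10^17 range
```

Lower average firing rates or extra operations per synaptic event shift the result within the quoted 10^16-10^17 band, which is why the text gives a range rather than a single figure.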

Memory seems to be no more stringent a constraint than processing power. Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
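The negligibility claim can be checked directly: sensory bandwidth sits many orders of magnitude below the brain-processing estimate. Equating one sensory bit-event with one operation is a simplifying assumption made here for the comparison:

```python
# Why simulating sensory input is a negligible extra cost: the sensory
# bandwidth is eight orders of magnitude below the brain-processing
# estimate. Equating a bit-event with an operation is a simplification.

sensory_bandwidth = 1e8    # ~10^8 bits per second (maximum human sensory input)
brain_ops_per_sec = 1e16   # synapse-based estimate for the whole brain

overhead_fraction = sensory_bandwidth / brain_ops_per_sec
print(overhead_fraction)   # 1e-08 of the cost of simulating the brain itself
```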

If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend only to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.

Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.

It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
· Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.
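The arithmetic behind the ~10^33-10^36 estimate, and the "one millionth of its power for one second" claim, can be sketched as follows. The population and lifespan figures are illustrative assumptions, not data from the text:

```python
# Rough derivation of the ~10^33-10^36 ops estimate for an
# ancestor-simulation, and a check of the claim that a planetary-mass
# computer (10^42 ops/sec) could run one using less than a millionth of
# its capacity for one second. Population and lifespan figures are
# illustrative assumptions.

humans_ever = 1e11              # ~100 billion humans have ever lived (rough)
seconds_per_life = 50 * 3.15e7  # ~50 years per life, in seconds
ops_per_brain_sec = 1e14        # lower per-brain estimate from the text

total_ops = humans_ever * seconds_per_life * ops_per_brain_sec
print(f"{total_ops:.1e}")       # ~1.6e+34, inside the 10^33-10^36 range

planet_computer_ops = 1e42      # ops/sec for a planetary-mass computer
print(total_ops / planet_computer_ops)  # well under 1e-6 of one second
```

Using the higher per-brain estimate of 10^17 ops/sec pushes the total toward the upper end of the quoted range, but the conclusion about the planetary computer is unaffected.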


The basic idea of this paper can be expressed roughly as follows: If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation?

We shall develop this idea into a rigorous argument. Let us introduce the following notation:

fp: Fraction of all human-level technological civilizations that survive to reach a posthuman stage
N: Average number of ancestor-simulations run by a posthuman civilization
H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage

The actual fraction of all observers with human-type experiences that live in simulations is then

fsim = (fp*N*H) / ((fp*N*H) + H)

Writing f1 for the fraction of posthuman civilizations that are interested in running ancestor-simulations (or that contain at least some individuals who are interested in that and have sufficient resources to run a significant number of such simulations), and N1 for the average number of ancestor-simulations run by such interested civilizations, we have

N = f1*N1

and thus:

fsim = (fp*f1*N1) / ((fp*f1*N1) + 1)     (*)

Because of the immense computing power of posthuman civilizations, N1 is extremely large, as we saw in the previous section. By inspecting (*) we can then see that at least one of the following three propositions must be true:

(1) fp ≈ 0
(2) f1 ≈ 0
(3) fsim ≈ 1
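The trichotomy implied by the paper's formula fsim = (fp*f1*N1) / ((fp*f1*N1) + 1) can be illustrated numerically: when N1 is astronomically large, either fsim is driven close to one, or the product fp*f1 must be vanishingly small. The sample values below are arbitrary and purely illustrative:

```python
# Numerical illustration of the trichotomy. With N1 extremely large,
# either fsim is close to 1, or fp*f1 must be vanishingly small.
# All sample values below are arbitrary illustrative choices.

def f_sim(fp, f1, N1):
    """Fraction of observers with human-type experiences in simulations."""
    x = fp * f1 * N1
    return x / (x + 1)

N1 = 1e12   # hugely many ancestor-simulations per interested civilization

# Even modest fp and f1 drive fsim toward 1 (proposition 3):
print(f_sim(fp=0.01, f1=0.01, N1=N1))    # ~0.99999999

# Keeping fsim small instead forces fp (or f1) toward 0 (propositions 1, 2):
print(f_sim(fp=1e-15, f1=0.01, N1=N1))   # ~1e-05
```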
We can take a further step and conclude that conditional on the truth of (3), one’s credence in the hypothesis that one is in a simulation should be close to unity. More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don’t have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:

Cr(SIM | fsim = x) = x     (#)
This step is sanctioned by a very weak indifference principle. Let us distinguish two cases. The first case, which is the easiest, is where all the minds in question are like your own in the sense that they are exactly qualitatively identical to yours: they have exactly the same information and the same experiences that you have. The second case is where the minds are “like” each other only in the loose sense of being the sort of minds that are typical of human creatures, but they are qualitatively distinct from one another and each has a distinct set of experiences. I maintain that even in the latter case, where the minds are qualitatively different, the simulation argument still works, provided that you have no information that bears on the question of which of the various minds are simulated and which are implemented biologically.

A detailed defense of a stronger principle, which implies the above stance for both cases as trivial special instances, has been given in the literature. Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by bringing to our attention an analogous situation of a more familiar kind. Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as “junk DNA”. Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic. Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S. And this is so quite irrespective of the fact that the people who have S have qualitatively different minds and experiences from the people who don’t have S. (They are different simply because all humans have different experiences from one another, not because of any known link between S and what kind of experiences one has.)

The same reasoning holds if S is not the property of having a certain genetic sequence but instead the property of being in a simulation, assuming only that we have no information that enables us to predict any differences between the experiences of simulated minds and those of the original biological minds.

It should be stressed that the bland indifference principle expressed by (#) prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true. In contrast to Laplacean and other more ambitious principles of indifference, it is therefore immune to Bertrand’s paradox and similar predicaments that tend to plague indifference principles of unrestricted scope.

Readers familiar with the Doomsday argument may worry that the bland principle of indifference invoked here is the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the implications of the latter incriminates or casts doubt on the validity of the former. This is not so. The Doomsday argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.

If betting odds provide some guidance to rational belief, it is also worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then, if people use the bland principle of indifference and consequently place their money on being in a simulation whenever they know that that’s where almost all people are, almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.
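The betting intuition can be checked with a toy Monte Carlo experiment. The population size and the simulated fraction below are arbitrary illustrative choices:

```python
import random

# Toy Monte Carlo for the betting argument: if a fraction x of all
# observers live in simulations and everyone follows the indifference
# principle by betting "I am simulated", almost everyone wins; betting
# the other way, almost everyone loses. Population size and x are
# arbitrary illustrative choices.

random.seed(0)
x = 0.999               # fraction of observers living in simulations
population = 100_000

is_simulated = [random.random() < x for _ in range(population)]

win_rate_bet_simulated = sum(is_simulated) / population
print(win_rate_bet_simulated)       # ~0.999: almost everyone wins
print(1 - win_rate_bet_simulated)   # ~0.001: the loss rate if betting the other way
```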

Further, one can consider a sequence of possible situations in which an increasing fraction of all people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As one approaches the limiting case in which everybody is in a simulation (from which one can deductively infer that one is in a simulation oneself), it is plausible to require that the credence one assigns to being in a simulation gradually approach the limiting case of complete certainty in a matching manner.


The possibility represented by proposition (1) is fairly straightforward. If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species at our level of development become posthuman, and it is hard to see any justification for thinking that our own species will be especially privileged or protected from future disasters. Conditional on (1), therefore, we must give a high credence to DOOM, the hypothesis that humankind will go extinct before reaching a posthuman level:

Cr(DOOM | fp ≈ 0) ≈ 1
One can imagine hypothetical situations where we have such evidence as would trump knowledge of fp. For example, if we discovered that we were about to be hit by a giant meteor, this might suggest that we had been exceptionally unlucky. We could then assign a credence to DOOM larger than our expectation of the fraction of human-level civilizations that fail to reach posthumanity. In the actual case, however, we seem to lack evidence for thinking that we are special in this regard, for better or worse.

Proposition (1) doesn’t by itself imply that we are likely to go extinct soon, only that we are unlikely to reach a posthuman stage. This possibility is compatible with us remaining at, or somewhat above, our current level of technological development for a long time before going extinct. Another way for (1) to be true is if it is likely that technological civilization will collapse. Primitive human societies might then remain on Earth indefinitely.

There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology. One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter – a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.

The second alternative in the simulation argument’s conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations. Furthermore, virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor-simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires.

What force could bring about such convergence? One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations. This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so. But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human-like desires and are free to act on them.

The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense “real”, it is not located at the fundamental level of reality.

It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.

Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage – the metaphysical status of this claim is somewhat obscure – there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.)

Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.

Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically. The fact that it has such a reason for moral behavior would of course add to everybody else’s reason for behaving morally, and so on, in a truly virtuous circle. One might get a kind of universal ethical imperative, which it would be in everybody’s self-interest to obey, as it were “from nowhere”.

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.
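The 100-billion figure follows from the estimated number of humans who have ever lived: each ancestor-simulation contains every one of those minds, while a me-simulation contains only one. A sketch, using the usual rough population estimate:

```python
# Why me-simulations would need to outnumber ancestor-simulations by a
# factor of ~100 billion: each ancestor-simulation contains every human
# who has ever lived, while a me-simulation contains a single mind.
# The population figure is the usual rough estimate.

minds_per_ancestor_sim = 1e11   # ~100 billion humans ever lived (rough)
minds_per_me_sim = 1

# For most simulated persons to be in me-simulations, the count of
# me-simulations must exceed the count of ancestor-simulations by:
required_ratio = minds_per_ancestor_sim / minds_per_me_sim
print(f"{required_ratio:.0e}")  # 1e+11, i.e. about 100 billion
```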

There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.

Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle – in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us “go crazy” or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above. We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.

If we learn more about posthuman motivations and resource constraints, maybe as a result of developing towards becoming posthumans ourselves, then the hypothesis that we are simulated will come to have a much richer set of empirical implications.


A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).

Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.