Wednesday, December 26, 2012

Uploaded e-crews for interstellar missions



December 12, 2012 by Giulio Prisco


The awesome 100 Year Starship (100YSS) initiative by DARPA and NASA proposes to send people to the stars by the year 2100 — a huge challenge that will require bold, visionary, out-of-the-box thinking.

There are major challenges. “Using current propulsion technology, travel to a nearby star (such as our closest star system, Alpha Centauri, at 4.37 light years from the Sun, which also has a planet with about the mass of the Earth orbiting it) would take close to 100,000 years,” according to Icarus Interstellar, which has teamed with the Dorothy Jemison Foundation for Excellence and the Foundation for Enterprise Development to manage the project.


Artwork: The bright star Alpha Centauri and its surroundings (credit: ESO)

“To make the trip on timescales of a human lifetime, the rocket needs to travel much faster than current probes, at least 5% the speed of light. … It’s actually physically impossible to do this using chemical rockets, since you’d need more fuel than exists in the known universe,” Icarus Interstellar points out.
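The arithmetic behind these figures is easy to check. Here is a quick sketch; the 17 km/s probe speed below is an assumption (roughly Voyager 1's), not a figure from the article:

```python
# Back-of-the-envelope check of the travel times quoted above.
C_KM_S = 299_792.458          # speed of light, km/s
DISTANCE_LY = 4.37            # distance to Alpha Centauri, light-years

def travel_time_years(speed_fraction_of_c: float) -> float:
    """Time to cover DISTANCE_LY at a constant fraction of c, in years."""
    return DISTANCE_LY / speed_fraction_of_c

# At 5% of c, the trip fits within a human lifetime:
fast = travel_time_years(0.05)               # ~87 years

# A Voyager-class probe at ~17 km/s is about 5.7e-5 of c:
slow = travel_time_years(17.0 / C_KM_S)      # tens of thousands of years

print(f"at 5% c: {fast:.0f} years; at 17 km/s: {slow:,.0f} years")
```

At 5% of c the trip takes about 87 years; at current probe speeds it takes on the order of 100,000 years, consistent with the Icarus figure.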

So the Icarus team has chosen a fusion-based propulsion design for Project Icarus, offering a million times more energy than chemical reactions. It would be evolved from their Daedalus design.

This propulsion technology is not yet well developed, and there are serious problems, such as the need for heavy neutron shields and risks of interstellar dust impacts, equivalent to small nuclear explosions on the craft’s skin, as the Icarus team states.

Although Einstein’s fundamental speed-of-light limit seems solid, ways to work around it were also proposed by physicists at the recent 100 Year Starship Symposium.

However, as a reality check, I will assume as a worst case that none of these exotic propulsion breakthroughs will be developed in this century.

 
Daedalus concept (credit: Adrian Mann)

That leaves us with an unmanned craft, but for that, as Icarus Interstellar points out, “one needs a large amount of system autonomy and redundancy. If the craft travels five light years from Earth, for example, it means that any message informing mission control of some kind of system error would take five years to reach the scientists, and another five years for a solution to be received.

“Ten years is really too long to wait, so the craft needs a highly capable artificial intelligence, so that it can figure out solutions to problems with a high degree of autonomy.”

If a technological Singularity happens, all bets are off. However, again as a worst case, I assume here that a Singularity does not happen, or that fully simulating an astronaut does not become possible. So human monitoring and control will still be needed.

The mind-uploading solution

The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on-board and the need to travel at extremely high speeds to ensure it’s done within a human lifetime.

One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars — their “software” — uploaded to advanced circuitry, augmented by AI subsystems in the starship’s processing system.

The basic idea of uploading is to “take a particular brain [of an astronaut, in this case], scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain,” as Oxford University’s Whole Brain Emulation Roadmap explains.

It’s also known as “whole brain emulation” and “substrate-independent minds” — the astronaut’s memories, thoughts, feelings, personality, and “self” would be copied to an alternative processing substrate — such as a digital, analog, or quantum computer.

An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.

Combined advances in neuroscience and computer science suggest that mind uploading technology could be developed in this century, as noted in a recent Special Issue on Mind Uploading of the International Journal of Machine Consciousness.

Uploading research is politically incorrect: it is tainted by association with transhumanists — those fringe lunatics of the Rapture of the Nerds — so it’s often difficult to justify and defend.

Creating a brain

But MIT neuroscientist Sebastian Seung has speculated that if models of brains become increasingly accurate, eventually there must be a simulation indistinguishable from the original.

The connectome (credit: NIH Human Connectome Project)

In Connectome: How the Brain’s Wiring Makes Us Who We Are, he explains how mapping the human “connectome” (the connections between our brain cells) might enable us to upload our brains into a computer.

In fact, “neuroscience is ready for a large-scale functional mapping of the entire neural circuits,” Harvard scientist George Church and other researchers conclude in a landmark 2012 Neuron paper.

I suggest that developing mind-uploading technology for software e-crews may make the 100YSS project practical, while delivering equally important spinoffs in neuroscience, computer science, and longevity, perhaps even including indefinite life extension.

The new brain can be much more resistant and long-lived than the old biological brain, and it can be housed in a similarly resistant and long-lived robotic body. Robots powered by human uploads can be rugged, resistant to the vacuum and the harsh space environment, easily rechargeable, and much smaller and lighter than wetware human bodies.

Eventually, human uploads augmented by AI subsystems can be implemented in the solid-state circuitry of the starship’s processing system.

Boredom and isolation will not be a problem for e-crew members, because the data processing system of a miniaturized starship will be able to accommodate hundreds and even thousands of human uploads.

Light sails

The huge reduction in weight resulting from uploading would allow for radical propulsion systems, such as “light sails” (aka “solar sails”) — spacecraft driven by light energy alone. The Planetary Society currently has a research project to develop light sails.


Light sail concept (credit: NASA)

The low mass of light sails — combined with the e-crew’s ability to withstand extreme acceleration — might allow for achieving a substantial fraction of the speed of light, so the time to go to the stars would be significantly reduced.
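As a rough illustration of why acceleration tolerance matters, here is the non-relativistic time needed to reach 10% of light speed at various constant accelerations (the acceleration values are my own illustrative assumptions, not from the article):

```python
# Time (non-relativistic approximation) to reach 10% of c
# at various constant accelerations, measured in multiples of g.
G = 9.81                 # m/s^2
C = 299_792_458.0        # speed of light, m/s
TARGET = 0.10 * C        # target speed: 10% of c

for gees in (1, 10, 1000):
    seconds = TARGET / (gees * G)
    days = seconds / 86_400
    print(f"{gees:5d} g -> {days:8.2f} days to reach 0.1 c")
```

At 1 g the boost phase takes about five weeks; at 1000 g, well beyond what any biological crew could survive, it takes under an hour.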

E-crewed interstellar missions have been described by science fiction writers. Greg Egan was one of the first, in Diaspora. In Charlie Stross’s Accelerando, the coke-can-sized starship Field Circus, propelled by a Jupiter-based laser and a light sail, visits a nearby star system with an e-crew of 63 uploaded persons who have a hell of a lot of fun on the way.

Here we are, sixty something human minds. We’ve been migrated — while still awake — right out of our own heads using an amazing combination of nanotechnology and electron spin resonance mapping, and we’re now running as software in an operating system designed to virtualize multiple physics models and provide a simulation of reality that doesn’t let us go mad from sensory deprivation!
 
And this whole package is about the size of a fingertip, crammed into a starship the size of your grandmother’s old Walkman, in orbit around a brown dwarf just over three light-years from home.

Of course, a light sail powered by lasers back home can only push a starship on a one-way trip, but the data from the uploaded astronauts would be beamed home via the Interplanetary Internet.

The “starwisp” concept proposed by Robert L. Forward is a variation of a light sail remotely driven by a microwave beam instead of visible light (but has known problems).

Sideloading

One problem with implementing mind uploading is that it’s plagued by metaphysical debates about the continuity of personal identity (“an upload is only a copy”), which are irrelevant here. Even if I thought that uploads would be only copies, I would be not only happy, but also grateful and honored if my upload copy could participate in the first interstellar mission.

But even coarse, preliminary uploading technology could be sufficient. “Sideloading,” proposed by science fiction writer Greg Egan in Zendegi, is the process of training a neural network to mimic a particular organic brain, using a rich set of non-invasive scans of the brain in action.

Egan describes a “Human Connectome Project,” completed in the late 2020s, that produces detailed connectome maps from brain scans of thousands of volunteers. The maps could be used to build an average human neural network, which could serve as a model of a generic human brain.

Then the model could be tweaked and fine-tuned to emulate a specific living person, using in-vivo brain scans and supervised training sessions in a VR environment. In Zendegi, the resulting personalized model passes the Turing Test and often behaves as a convincing emulation of the original.
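Conceptually, sideloading is a transfer-learning problem: start from a model fitted to averaged data, then fine-tune it on one individual's recorded responses. The toy sketch below illustrates that idea with a linear model; it makes no claim about real neural data, and all the numbers are invented for illustration:

```python
# Toy "sideload": fine-tune a generic (population-average) model
# on one individual's stimulus/response data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # stimuli (features)

w_generic = np.ones(5)                            # "average human" weights
w_person = np.array([1.3, 0.7, 1.1, 0.9, 1.4])    # one individual's weights
y_person = X @ w_person                           # that person's responses

# Start from the generic model; fine-tune by gradient descent
# on the individual's data (mean-squared-error loss).
w = w_generic.copy()
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y_person) / len(X)
    w -= lr * grad

loss_generic = np.mean((X @ w_generic - y_person) ** 2)
loss_tuned = np.mean((X @ w - y_person) ** 2)
print(f"generic-model error: {loss_generic:.3f}, fine-tuned: {loss_tuned:.2e}")
```

The fine-tuned model fits the individual far better than the generic average, which is the essence of Egan's scheme: the hard work of building a human-shaped model is done once, and personalization is a (comparatively) cheap refinement.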

Why not send AIs?

If strong AI is developed, perhaps smarter than humans, why should we bother to upload humans? One answer is that most of us will want human minds on our first journey to the stars.

However, I agree with Ray Kurzweil’s speculation that we will merge with technology, so many future persons will not be “pure” humans or pure AIs, but rather hybrids, blended so tightly that it will be impossible to tell which is which.

Ultimately, I think space will not be colonized by squishy, frail and short-lived flesh-and-blood humans. As Sir Arthur C. Clarke wrote in Childhood’s End, perhaps “the stars are not for Man” — that is, not for biological humans 1.0.

It will be up to our postbiological mind children, implemented as pure software based on human uploads and AI subsystems, to explore other stars and colonize the universe. Eventually, they will travel between the stars as radiation and light beams.

Giulio Prisco is transhumanism editor for KurzweilAI. He is a science writer, technology expert, futurist, and transhumanist.

References: Kurzweil Accelerating Intelligence

Sunday, December 23, 2012

Do we live in a computer simulation? How to test the idea



The energy surface of a massless, non-interacting Wilson fermion. The continuum dispersion relation is shown as the red surface. (Credit: Silas R. Beane et al.)

The concept that we could possibly be living in a computer simulation has been suggested by science writers and others, and was formalized in a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford.

With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But a University of Washington team has suggested tests that can be performed now, or in the near future, that could resolve the question.

Currently, supercomputers using a technique called lattice quantum chromodynamics (LQCD), and starting from the fundamental physical laws that govern the universe, can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.

Eventually though, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing LQCD calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms. “If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.
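The key idea is that a lattice changes the relationship between a particle's energy and momentum. As a simplified stand-in for the 4D Wilson-fermion calculation in the paper, here is the naive 1D massless lattice dispersion relation, which caps the energy at 1/a for lattice spacing a:

```python
# Toy illustration of a lattice "signature": on a lattice with
# spacing a, the naive 1D massless dispersion relation becomes
# E(p) = |sin(p a)| / a instead of the continuum E(p) = |p|,
# so energies are capped at 1/a.
import numpy as np

a = 1.0                                  # lattice spacing (arbitrary units)
p = np.linspace(1e-6, np.pi / a, 1000)   # momenta up to the lattice cutoff

E_cont = p                               # continuum: E = |p|
E_lat = np.abs(np.sin(p * a)) / a        # naive lattice form

print(f"max lattice energy: {E_lat.max():.4f} (cap 1/a = {1/a})")
# At low momentum the two agree to better than 1%:
p0 = 0.1 / a
print(f"relative deviation at p = 0.1/a: {abs(np.sin(p0*a)/a - p0)/p0:.2%}")
```

At low momentum lattice and continuum physics agree, but near the cutoff they diverge — which is why the proposed test looks at the very highest-energy cosmic rays.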

Savage and colleagues suggest that the signature could show up as a limitation in the energy of cosmic rays.

In a paper they have posted on arXiv, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.

“This is the first testable signature of such an idea,” Savage said.

If such a concept turned out to be reality, it would raise other possibilities as well. For example, co-author Zohreh Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.

“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?’” she said.

There are, of course, many caveats to this extrapolation. Foremost among them is the assumption that exponential growth of computers will continue into the future. Related to this is the possible existence of the technological Singularity, which could alter the curve in unpredictable ways.

And, of course, human extinction would terminate the exponential growth — or its simulation.

References: Kurzweil Accelerating Intelligence

Tuesday, December 11, 2012

How to build a million-qubit quantum computer


Hybrid dual-quantum dot/superconducting resonator device (credit: K. D. Petersson et al./Nature)

A team led by Princeton’s Associate Professor of Physics Jason Petta has developed a new method that could eventually allow engineers to build a working quantum computer consisting of millions of quantum bits (qubits).

Quantum computers take advantage of the strange behaviors of subatomic particles like electrons. By harnessing electrons as they spin, scientists could use the particles to form the basis for a new type of computing.

The problem, though, is that these incredibly tiny electrons are hard to control. So far, scientists have only been able to harness extremely small numbers of them.

“The whole game at this point in quantum computing is trying to build a larger system,” said Andrew Houck, an associate professor of electrical engineering at Princeton who is part of the research team.

A cage for trapping electrons

To transfer information, Petta’s team used a stream of microwave photons to analyze a pair of electrons trapped in a tiny cage called a quantum dot. The “spin state” of the electrons — information about how they are spinning — serves as the qubit, a basic unit of information.

The microwave stream allows the scientists to read that information. “We create a cavity with mirrors on both ends — but they don’t reflect visible light, they reflect microwave radiation,” Petta said. “Then we send microwaves in one end, and we look at the microwaves as they come out the other end. The microwaves are affected by the spin states of the electrons in the cavity, and we can read that change.”

In an ordinary sense, the distances involved are very small; the entire apparatus operates over a little more than a centimeter. But on the subatomic scale, they are vast. It is like coordinating the motion of a top spinning on the moon with another on the surface of the Earth.

“It’s the most amazing thing,” said Jake Taylor, a physicist at the National Institute of Standards and Technology, who worked on the project with the Princeton team. “You have a single electron almost completely changing the properties of an inch-long electrical system.”

One challenge facing scientists is that the spins of electrons, or any other quantum particles, are incredibly delicate. Any outside influences, whether a wisp of magnetism or glimpse of light, destabilizes the electrons’ spins and introduces errors.

Over the years, scientists have developed techniques to observe spin states without disturbing them. (This year’s Nobel Prize in physics honored two scientists, Serge Haroche and David Wineland, who first demonstrated the direct observation of quantum particles.) But analyzing small numbers of spins is not enough; millions will be required to make a real quantum processor.

Making quantum dots

To make the quantum dots, the team isolated a pair of electrons on a small section of material called a “semiconductor nanowire.” Basically, that means a wire that is so thin that it can hold electrons like soda bubbles in a straw. They then created small “cages” along the wire. The cages are set up so that electrons will settle into a particular cage depending on their energy level.

This is how the Princeton team reads the spin state: electrons of similar spin will repel, while those of different spins will attract. So the team manipulates the electrons to a certain energy level and then reads their position. If they are in the same cage, they are spinning differently; if they are in different cages, the spins are the same.

The second step is to place this quantum dot inside the microwave channel, allowing the team to transfer the information about the pair’s spin state — the qubit.

Petta said the next step is to increase the reliability of the setup for a single electron pair. After that, the team plans to add more quantum dots to create more qubits. Team members are cautiously optimistic. There appear to be no insurmountable problems at this point but, as with any system, increasing complexity could lead to unforeseen difficulties.

“The methods we are using here are scalable, and we would like to use them in a larger system,” Petta said. “But to make use of the scaling, it needs to work a little better. The first step is to make better mirrors for the microwave cavity.”

Support for the research was provided by the National Science Foundation, the Alfred P. Sloan Foundation, the Packard Foundation, the Army Research Office, and the Defense Advanced Research Projects Agency Quantum Entanglement Science and Technology Program.

References: Kurzweil Accelerating Intelligence

Friday, December 7, 2012

Waterloo researchers create ‘world’s largest functioning model of the brain’

Serial working memory task (from movie). (Credit: Chris Eliasmith et al./Science)

A team of researchers from the University of Waterloo has built what they claim is the world’s largest simulation of a functioning brain.

The purpose is to help scientists understand how the complex activity of the brain gives rise to the complex behavior exhibited by animals, including humans.

The model is called Spaun (Semantic Pointer Architecture Unified Network). It consists of 2.5 million simulated neurons. The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate.
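Spaun's building blocks are simple spiking neuron models; the Neural Engineering Framework behind it typically uses leaky integrate-and-fire (LIF) neurons. A minimal sketch of one such neuron (all parameter values here are illustrative assumptions):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane
# voltage leaks toward the input level; crossing threshold
# emits a spike and resets the voltage.
def lif_spikes(current, dt=0.001, tau=0.02,
               v_thresh=1.0, v_reset=0.0, steps=1000):
    """Simulate one LIF neuron with constant input; return spike count."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += (current - v) / tau * dt   # leaky integration (Euler step)
        if v >= v_thresh:               # threshold crossing -> spike
            spikes += 1
            v = v_reset                 # reset after spiking
    return spikes

print(lif_spikes(0.5))   # subthreshold input: no spikes
print(lif_spikes(2.0))   # suprathreshold input: regular spiking
```

Spaun wires millions of units like this into circuits that encode and transform representations — the "biological detail" is at this level, not at the level of individual molecules.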

Spaun uses this network of neurons to process visual images and control an arm that draws its answers to perceptual, cognitive, and motor tasks.


Spaun Anatomical architecture (credit: Chris Eliasmith et al./Science)
 
Information flow through Spaun during the WM task (credit: Chris Eliasmith et al./Science)
 
Spaun functional architecture. Thick black lines indicate communication between elements of the cortex; thin lines indicate communication between the action selection mechanism (basal ganglia) and the cortex. Boxes with rounded edges indicate that the action selection mechanism can use activity changes to manipulate the flow of information into a subsystem. The open-square end of the line connecting reward evaluation and action selection denotes that this connection modulates connection weights. (Credit: Chris Eliasmith et al./Science)
 
The claim appears to be misleading, since IBM Research – Almaden recently simulated 530 billion neurons and 100 trillion synapses on a supercomputer. But the Waterloo researchers note in a Science paper that “although impressive scaling has been achieved, no previous large-scale spiking neuron models have demonstrated how such simulations connect to a variety of specific observable behaviors.”

Human-like multitasking

“The model can perform a wide variety of behaviorally relevant functions. We show results on eight different tasks that are performed by the same model, without modification.

“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner — how the brain coordinates the flow of information between different areas to exhibit complex behavior,” said Professor Chris Eliasmith, Director of the Center for Theoretical Neuroscience at Waterloo, Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.

Unlike other large brain models, Spaun can perform several tasks. All inputs to the model are 28 by 28 images of handwritten or typed characters. All outputs are the movements of a physically modeled arm that has mass, length, inertia, etc.

Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect behavior, the researchers suggest.

“Spaun provides a distinct opportunity to test learning algorithms in a challenging but biologically plausible setting,” say the researchers in Science. “More generally, Spaun provides an opportunity to test any neural theory that may be affected by being embedded in a complex, dynamical context, reminiscent of a real neural system.”

“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”

In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.

Professor Eliasmith has written a book on the research, How To Build A Brain, which will be available this winter.

References: Kurzweil Accelerating Intelligence