Saturday, September 12, 2009

Hans Moravec


Hans Moravec (born November 30, 1948 in Austria) is an adjunct faculty member at the Robotics Institute of Carnegie Mellon University. He is known for his work on robotics, artificial intelligence, and writings on the impact of technology. Moravec is also a futurist, with many of his publications and predictions focusing on transhumanism. Moravec developed techniques in computer vision for determining the region of interest (ROI) in a scene. Other ROI techniques exist, including the patents of Sherman de Forest (U.S.) and the computer vision and image processing articles by Sobel. His last academic publication was in 2003.

From Wikipedia

According to Hans Moravec, by 2040 robots will become as smart as we are. And then they'll displace us as the dominant form of life on Earth. But he isn't worried - the robots will love us.
By Charles Platt
Hans Moravec reclines in his chair and places his palms against his chest. "Consider the human form," he says.
"It clearly isn't designed to be a scientist. Your mental capacity is extremely limited. You have to undergo all kinds of unnatural training to get your brain even half suited for this kind of work - and for that reason, it's hard work. You live just long enough to start figuring things out before your brain starts deteriorating. And then, you die."
He leans forward, and his eyes widen with enthusiasm. "But wouldn't it be great," he says, "if you could enhance your abilities via artificial intelligence, and extend your lifespan, and improve on the human condition?"
Since his earliest childhood, Moravec has been obsessed with artificial life. When he was 4 years old, his father helped him use a wooden erector set to build a model of a little man who would dance and wave his arms and legs when a crank was turned. "It excited me," says Moravec, "because at that moment, I saw you could assemble a few parts and end up with something more - it could seem to have a life of its own."
At the age of 10, he constructed a toy robot from miscellaneous scrap metal. In high school, when another student maintained that no machine could ever be truly human, Moravec suggested replacing human neurons, one at a time, using man-made components that would have the equivalent function. At what point, he asked, would humanness disappear? If a wholly artificial entity is still able to act human in every way, how could we prove that it isn't human?
Today, Moravec is a professor at Carnegie Mellon University's Robotics Institute, the largest robot research lab in the country and one he helped establish in 1980. He is a rare mixture of visionary and engineer, equally comfortable speculating on the fate of the planet or using a soldering iron, microchips, and stepper motors to build high-tech versions of his childhood dancing man. More than that, though, he's our most gung-ho advocate of technology as a tool to transform human beings and make us more than we are - within our lifetimes, if we want it.
Some of his concepts have a confrontational, in-your-face shock value. For instance, to find out how the mind works, Moravec suggests severing a volunteer's corpus callosum (the nerve bundle linking the two hemispheres of the human brain) and interposing a computer to monitor thought traffic. After the computer has had time to learn the code, it can start inserting its own input, helping solve difficult math problems, suggesting new ideas, even offering friendly advice.
Or here's another scenario for anyone who'd like to escape the constrictions of dull old human biology: a futuristic robot surgeon peels away the brain of a conscious patient, using sensors to analyze and simulate the function of every neuron in each slice. As Moravec puts it, "Eventually your skull is empty, and the surgeon's hand rests deep in your brainstem. Though you haven't lost consciousness, your mind has been removed from the brain and transferred to a machine."
But even proposals like these are modest compared with Moravec's Number One concern, which is nothing less than the future of humanity. By 2040, he believes, we can have robots that are as smart as we are. Eventually, these machines will begin their own process of evolution and render us extinct in our present form. Yet, according to Moravec, this is not something we should fear: it's the best thing we could hope for, the ultimate form of human transcendence. And in his own laboratory, he's laying the groundwork that may help this evolutionary leap happen ahead of schedule.
Not everyone thinks this is such a wonderful idea. Joseph Weizenbaum, professor emeritus of computer science at MIT, complains that Moravec's book Mind Children: The Future of Robot and Human Intelligence is as dangerous as Mein Kampf. Respected mathematician Roger Penrose has written a long essay for The New York Review of Books in which he twice uses the word "horrific" to describe some of Moravec's concepts. Book reviewer Poovan Murugesan denounces Moravec as "a loose cannon of fast ideas" who suffers from "irresponsible optimism."
Even Moravec's fans seem a little ambivalent. "He comes off as a cross between Mister Rogers and Dr. Faustus," says writer Richard Kadrey. And in the words of award-winning science fiction author Vernor Vinge, who is also an associate professor of mathematical sciences at San Diego State University, "Moravec puts the rest of the technological optimists to shame. He is beyond their wildest extremes." But, Vinge adds hastily, "I mean this as praise!"
How seriously should we take Moravec's ideas? He is widely respected as a pioneer in robotics, but where is the line dividing his painstakingly practical research from his unfettered speculation? Why does he insist that breaking the boundaries of being human is important not just for himself, but for everyone - and why does he seem so crazy-cheerful about the whole thing?
These questions were on my mind when I visited Moravec at Carnegie Mellon in Pittsburgh, Pennsylvania. In person, he's a friendly faced, slightly overweight, irrepressibly good-humored man in his late 40s who wears homely clothes and seems shy with strangers. But his enthusiasm gives him a childlike charm - even when he talks lyrically about human extinction.
His office is next door to the "high bay," a big lab displaying the results of previous Robotics Institute projects, including a huge, multilegged "walker" that was sent down into the cone of an active volcano, and a Pontiac minivan that can drive itself at speeds up to 60 mph. The van has already found its way from Pittsburgh to Washington, DC, with minimal human supervision, under the legal fiction that its four onboard SPARCstations and their mechanical interface are "an advanced form of cruise control."
But Moravec seems bored by these past achievements and has shed most of his administrative responsibilities at the Robotics Institute. He hides out in a small, undistinguished, modern office with a couple of computers, a few file cabinets, a refrigerator, a microwave oven, and a lot of books. This is where he pursues his immediate goal: designing and programming a domestic robot that can navigate freely in cluttered home environments. It is the next logical step, he says, toward truly intelligent machines that we will not only tolerate but love - even as they threaten to displace us as the dominant form of life on Earth.
Moravec's early work in robotics was plagued by setbacks. "I spent most of the 1970s," he recalls, "trying to teach a robot to find its way across a room. After 10 years, in 1979, I finally had one that could get where it was going three times out of four - but it took five hours to travel 90 feet." He chuckles like a fond father recalling the first incompetent steps of his baby boy.
Why was it so hard for a robot to accomplish a task that even a mouse can manage with ease? The answer, of course, is that animals have had hundreds of millions of years in which to evolve motor skills. The problem of moving through a three-dimensional world is hideously complex, as Moravec indicates, while counting off the tasks on his fingers: "Our robot used multiple images of the same scene, taken from different points of view, in order to infer distance and construct a sparse description of its surroundings. It used statistical methods to resolve mismatching errors. It planned obstacle-avoidance paths. And then it had to decide how to actually turn its motors and wheels."
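The distance-from-multiple-views step Moravec describes is classical stereo triangulation: a feature's depth is inversely proportional to how far it shifts between the two camera images. A minimal sketch of that relationship, with camera parameters that are purely illustrative, not taken from Moravec's robot:

```python
# Toy stereo triangulation: depth = focal_length * baseline / disparity.
# The closer an object is, the larger its shift (disparity) between the
# left and right images. Parameter values below are assumptions for
# illustration only.

FOCAL_LENGTH_PX = 500.0   # focal length in pixel units (assumed)
BASELINE_M = 0.2          # separation between the two cameras (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Distance in meters to a feature matched across both views."""
    if disparity_px <= 0:
        raise ValueError("matched feature must shift between views")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(10.0))  # 10.0 m away
print(depth_from_disparity(50.0))  # 2.0 m away: 5x the shift, 1/5 the range
```

Mismatched features feed wrong disparities into exactly this formula, which is one reason the early systems were so error-prone.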
In 1980, he built new robots and attempted to boost their performance. "But the best we were able to do with our old approach," he recounts, "was speed it up about tenfold and improve its accuracy tenfold. We did not manage to reduce its brittleness."
By "brittleness" Moravec means that the system tended to fail suddenly and catastrophically. "Accidental conspiracies of sensory miscues would lead it to a wrong conclusion while being sure that it was right. In practical terms, it could misidentify the surrounding objects and run into a wall."
Like Wile E. Coyote in a Road Runner cartoon, trying to run into the mouth of a tunnel painted on a rockface?
"Precisely!" he laughs again, sounding genuinely happy, as he does whenever he describes the lovably fallible behavior of his creations.
In 1984, using US$10 Polaroid ultrasonic range finders instead of expensive video cameras, he created a new commercial robot that analyzed maps of the surrounding space rather than just objects in it. The result, to his surprise, was a system that could navigate reliably and relatively swiftly.
Moravec's current research robot, a project initiated in 1987, now sits in a small workshop just across the corridor outside his office. "Would you like to take a look?" he asks.
We walk into a windowless space no larger than an average living room. There are a couple of video monitors, workbenches littered with tools, pale beige walls, and a vinyl floor. The robot stands in the center of the room: an ugly little four-wheeled truck the size of a go-cart. But Moravec exudes pleasure and affection as he guides his toy out of the workshop, into the hall, and back again.
"Today's best robots can think at insect level," he says as we return to his office. He explains that state-of-the-art mobile robots orient themselves by sensing special markers placed on floors, walls, or ceilings. Insects behave the same way: ants follow pheromone trails, lightning bugs look for each other's flashes, and moths navigate with reference to the moon.
The trouble is, such systems are still brittle. Just as a moth can become fatally confused by fixing on candlelight instead of moonlight, a robot guided by markers can easily make a disastrous mistake. One designed by a Connecticut company to distribute hospital linens failed to notice the marker that was supposed to keep it from proceeding past a certain point, and took a nosedive down a flight of stairs.
Robots that orient themselves with markers have found some application in industry - transporting pallets and cleaning floors - but they offer few advantages over the older systems that follow hidden guide wires. As a result, the market is very limited. "In fact," says Moravec, "the market barely exists at all. So, what we're shooting at now is a robot with the intelligence of a small vertebrate - the smallest fish you can imagine. It will no longer depend on navigational points; it will build a relatively dense representation of volumes of space."
By 2000, he foresees that this type of machine will find its own way around complex, cluttered places without using markers and without needing to be installed by experts. At first these robots will be expensive and specialized, but Moravec predicts they will become smaller, cheaper, and more user-friendly in just the same way that microcomputers evolved from mainframes. "Once we have a robot that customers can take out of the box, show it a job, and trust it to work without doing silly things - then the market will grow easily to hundreds of thousands and beyond. Any institution that does regular cleaning will find that it's cheaper to use a robot than a person. The same goes for delivery jobs."
Moravec estimates that these systems will need an onboard computer capable of 500 million instructions per second. The first IBM PCs managed 0.3 mips; a modern Pentium-based PC reaches 200 mips; and it's reasonable to expect that 500-mips processors will be affordable by the turn of the century.
This power will enable the robot to convert 500-by-500-pixel stereoscopic pictures from its camera eyes into a 3-D model consisting of about 100-by-100-by-100 cells. Updating and processing all this visual information will take about one second - the longest interval that is reasonably safe and practical, since the robot will move blindly between glimpses of the world.
Once robots find a niche doing dull, repetitive jobs, Moravec sees an ever-expanding market. "The next step will be adding an arm and improving the sensor resolution so that they can find and manipulate objects. The result will be a first generation of universal robots, around 2010, with enough general competence to do relatively intricate mechanical tasks such as automotive repair, bathroom cleaning, or factory assembly work."
By "universal" Moravec means the robot will tackle many different jobs in the same way a Nintendo system plays many different games. Plug in one cartridge, and the robot will know how to change the oil in your car. Plug in another, and it will know how to patrol your property and challenge intruders.
Add more memory and computing power and enhance the software, and by 2020 we have a second generation that can learn from its own performance. "It will tackle tasks in various ways," says Moravec, "keep statistics on how well each alternative has succeeded, and choose the approach that worked best. This means that it can learn and adapt. Success or failure will be defined by separate programs that will monitor the robot's actions and generate internal punishment and reward signals, which will actually shape its character - what it likes to do and what it prefers not to do."
Moravec pauses. The near future of robotics is something he's spelled out a thousand times before, and he no longer finds it particularly exciting. But now we get to a subject that interests him more: the idea that robots can mimic human traits.
By 2030, according to Moravec, we should have a third-generation universal robot that emulates higher-level thought processes such as planning and foresight. "It will maintain an internal model not only of its own past actions, but of the outside world," he explains. "This means it can run different simulations of how it plans to tackle a task, see how well each one works out, and compare them with what it's done before." An onlooker will have the eerie sense that it's imagining different solutions to a problem, developing its own ideas.
But perfecting the model of reality this robot will need is not going to be an easy task. In fact, creating this model is the single hardest problem in artificial intelligence. Intuitively, human beings know why they need to wear a raincoat in wet weather, or why they must turn the handle before pushing open a door. Almost without thinking we know if a bottle is empty, whether an object is breakable, or when food has spoiled. But to an artificial intelligence, none of these things is obvious - each everyday fact must be established in advance or derived from logical principles.
On the plus side, each time a robot learns a fact or masters a skill, it will be able to pass its knowledge to other robots as quickly and easily as sending a program over the Net. This way, the task of understanding the world can be divided among thousands or millions of robot minds. As a result, the machines will soon develop a deeper knowledge base than any single person can hope to possess. Within a short space of time, robots that are linked in this way will no longer need our help to show them how to do anything.
Meanwhile, they will be smart enough to interact with us on a human level. "Their world model will include psychological attributes," Moravec says, "which means, for instance, that a robot will express in its internal language a logical statement such as 'I must be careful with this item, because it is valuable to my owner, and if I break it, my owner will be angry.' This means that if the robot's internal processes are translated into human terms, you will hear a description of consciousness - especially if the robot applies psychological attributes to its own actions, as in 'I don't like to bump into things,' which is a compact way of saying that the robot gets an internal negative reinforcement signal whenever it collides with something, or imagines a collision."
Moravec's critics are skeptical on this point. Many have stated flat out that a machine can never be "conscious." Their arguments are hard to refute, partly because no one can really say what consciousness is; but Moravec sidesteps the issue. He believes a robot that understands human behavior can be programmed to act as if it is conscious, and can also claim to be conscious. If it says it's conscious, and it seems conscious, how can we prove that it isn't conscious?
Either way, there's no doubt that systems that can analyze their world, deduce generalizations, and modify their behavior will have a major impact on society.
"The robots will still be in our thrall," Moravec points out, meaning that we will still be designing and programming them to serve and obey us. "They'll learn everything they know from us, and their goals and their methods will be imitations of ours. But as they become more competent, efficiency and productivity will keep going up, and the amount of work for humans will keep going down. By around 2040, there will be no job that people can do better than robots."
He sits back in his chair, pausing with cheerful satisfaction as he does whenever he reaches a radical conclusion that places him one step ahead, waiting for his audience to catch up.
In this case, though, Moravec's conclusion is less radical than it seems - because when many jobs are broken down into tasks, they require a relatively limited degree of "humanness." Even today, we have expert systems that offer advice based on a large number of facts in a field such as medicine or geology. Imagine this expertise gradually broadening to include subjects such as corporate law, mechanical design, profitability, and efficiency. Decisions in these areas are all made logically from sets of facts, which means that if the facts are completely spelled out, a machine intelligence should be able to deal with them.
Thus a corporation can literally become automated from the bottom up: first the assembly lines, then bookkeeping, product design, and planning. Even management can be taken over by computers that are able to learn from past performance. Ultimately, a corporation will consist of a diverse mix of robots, some mobile, some fixed, some large and powerful, some microscopic, all interacting with speed and versatility that is completely beyond human abilities.
But what about the time scale? Isn't he compressing a huge amount of progress into a very few decades?
"Back in the 1970s I made some overoptimistic assumptions about the rate of progress of computers. I thought that using an array of cheap microcomputers, we might achieve human equivalence by the mid-1980s. Then I did a slightly more careful calculation around 1978 and decided it would take another 20 years, requiring a supercomputer. But then I started getting serious, writing articles and essays, and I thought I should do the calculations more rigorously. So I collected 100 data points of previous computer progress, I did the best calculation I could, I compared the human retina with computer vision applications, and I plotted it all out."
Still, even if his predictions are confirmed to be on schedule, there's an obvious problem: When robots are doing all the work, no one will earn any money. How can an economy flourish when all the consumers are penniless?
Moravec obviously isn't troubled by the question. In fact, it's hard to imagine any question bothering him: he sits calmly, comfortably, digesting questions and dispensing answers with ease. Today, he points out, people who retire are supported via wealth that is ultimately created by industry. As industry becomes more efficient, there will be more wealth, allowing people to retire earlier. When industry is totally automated and hyper-efficient, it will create so much wealth that retirement can begin at birth. "We'll levy a tax on corporations," Moravec says, "and distribute the money to everyone as lifetime social-security payments."
But what if the robot-run corporations fail to function as he expects? He assumes these business entities will follow programs written by us, compelling them to obey laws and pay their taxes. But the programming will also encourage robot-controlled corporations to compete with each other.
Won't they try to exploit loopholes in their instructions, just as present-day businesses try to evade federal regulations? Isn't there a real risk that autonomous robots will steal from each other and cheat on their taxes?
"There is always the possibility that some kind of malfunction will produce a rogue corporation," Moravec admits. "We'll need police provisions so that legal companies will act to suppress rogues economically, or physically, if necessary. And among the programmed-in laws we'll need antitrust clauses to force dangerously large companies to divest into smaller entities."
But this would be a second set of rules to solve a problem created by robots breaking the first set of rules. The system still seems fundamentally unstable.
"It is unstable," he agrees. "Everything will depend on the way in which we create it. Crafting these machines and the corporate laws that control them is going to be the most important thing humanity ever does. You know, each age has an activity in which the best minds get involved. Crafting the laws, and their implementation, will be the thing to do in the 21st century."
If the job is done right, he predicts a world of comfort, health, and boundless plenty - at least for a while. Human beings will be like slave owners whose servants never complain, need no supervision, and are constantly eager to please.
In the long term, though, robots programmed to serve us with maximum efficiency can become a potential hazard. They will naturally try to obtain energy and raw materials as cheaply as possible, with a minimum of regulatory interference. And the ideal way to do this is by relocating some of their operations beyond planet Earth.
Unlike human beings, robots don't need to breathe air, aren't disoriented by zero gravity, and can be easily shielded from harmful radiation. There are vast mineral resources in the asteroid belt, where there will be no regulations regarding pollution, noise, or safety. Robot factories located in space would be able to manufacture products with maximum efficiency and then drop them down into Earth's gravity well. Alternatively, they could conduct hazardous research and radio the encrypted results back to their parent corporation on Earth.
Only a small "seed colony" of robots would be needed to set up an off-world operation. Using local mineral ores and solar energy, robots could build everything they required - including copies of themselves.
In this scenario, everything is still being controlled by the parent corporations, which are still being controlled by us. Therefore, the off-world operations should present no problems. "But now suppose a company goes out of business," Moravec says, "leaving its research division in space, where there's no supervision. The result is self-sustaining, superintelligent wildlife."
His critics, of course, disagree. They complain that his vision is inhuman, lacking attributes such as culture and art that seem central to our identity. Skeptics also point out that the negative implications of his work far outweigh its benefits in the near future, when robots will cause a huge economic dislocation, creating a feeling of purposelessness among citizens who are rendered permanently unemployable.
Moravec is quite aware of this but sees no way to prevent it. He says his projection of the future is at least 50 percent probable, and we're seeing the first signs of it right now. "In Europe generally," he says, "I believe unemployment is now up to around 15 percent, and essentially this will never reverse. We're already moving into the mode I envisage, where everyone is subsidized by productive machines."
This has created uncertainty and discontent - as he readily admits. "We all agree," he says, "that the world is a bit screwed up. The reason for this is rather obvious. We have a Stone Age brain, but we don't live in the Stone Age anymore. We were fitted by evolution to live in tribal villages of up to 200 relatives and friends, finding and hunting our food. We now live in cities of millions of strangers, supporting ourselves with unnatural tasks we have to be trained to accomplish, like animals who have been forced to learn circus tricks."
In which case, what's the answer? Moravec adamantly believes that reversing the evolution of technology would create an even bigger disaster. "Most of us would starve," he says. He suggests the opposite approach: that we try to catch up with technology by accelerating our own evolution. "We can change ourselves," he says, "and we can also build new children who are properly suited for the new conditions. Robot children."
Inevitably, I ask whether he has any normal, flesh-and-blood children.
"No. In fact, I am biologically incapable of it. I contracted testicular cancer as I was finishing my PhD; it didn't affect me very much, it didn't really hurt, I noticed a growth, but I still had my thesis to write and my orals to do, and the whole thing seemed very unreal. There were two surgeries, one minor, one major - with my intestines out in a bag to get at the lymph nodes. I came through it in sparkling condition, aged around 30. But a side effect is that I'm basically infertile."
Does this mean that his love of robots is nothing more than a displaced desire for the biological children he can't have?
"Not at all. Long before the cancer, I was already obsessively committed to robots for whatever neurotic reason. That was where I wanted to spend my energy. I met my wife in the hospital when I was getting chemotherapy in 1980. She already had two children, so I inherited them as stepchildren."
Does his wife share any of his feelings about machines?
He laughs. "At the moment, my wife is a biblical scholar."
Moravec himself was raised Catholic, but he rebelled against it as a teenager and says he still has some anti-Catholic reflexes. As a result, he and his wife had some bitter theological debates in the past. "But these days there's no point in arguing," he says, "because we already know exactly what each other is going to say, and in any case she's more astute in human relations than I am, so she knows how to handle me. But I have changed my outlook slightly. I'm a little less hard-core in my atheism than I used to be. And my ideas about resurrection in some ways are not so different from those of early theologians, or from the Greek thought that fed into that."
Also, of course, the desire for human transcendence has been a fundamental feature of almost all religions. And Moravec's vision of a supremely powerful artificial intelligence that will love humanity enough to re-create it is basically a vision of a god - the only difference being that in his scheme of things, we create god version 1.0, after which it builds its own enhancements.
But how does all this fit in with Moravec's obvious personal love for machines?
"My father was an engineer in Czechoslovakia and had a business making and selling electrical goods during the war. When the Russians arrived in 1944, he became a refugee. He left the country on a tricycle with 50 kilos of tools and 50 kilos of food. He met my mother in Austria, which is where I was born. He had an electrical store, where he'd hand wind transformers to convert battery-operated radios so they'd run on house current. We relocated to Canada in 1953."
This marks the point where the genie finally gets out of the bottle and Earth's retirement community of pampered humans finds itself faced with a big problem. Out in space, the preprogrammed drive to compete and be efficient will result in the runaway evolution of machine capabilities.
Moravec feels that in a short period of time, all the local materials will be plundered and converted into machines, and all available solar energy will be used to power them.
The result will be a dense, interacting swarm of competing entities - although, he says, the competition will be relatively benign. Warfare among robots will be rare because "fighting wastes energy, and a third entity can eat the pieces."
He believes that the most useful skill will be intelligence. Robots will be motivated to make themselves as small as possible, conserving raw materials to build better brains. "As a result, you end up with the whole mess forming a cyberspace where entities try to outsmart each other by causing their way of thinking to be more pervasive. Here's an ecology where all the dead-matter activity has been squeezed out and almost everything that happens is meaningful. You have this sphere of cyberspace with a robot shell, expanding outward toward Earth."
What will it look like?
"It will look like a region of space glowing warmly, with hardly anything visible on a human scale. The competitive pressure toward miniaturization will result in activity on the subatomic level. They'll transform matter in some way; it will no longer be matter as we know it."
Since space-based machine intelligences will be free to develop at their own pace, they will quickly outstrip their cousins on Earth and eventually will be tempted to use the planet for their own purposes. "I don't think humanity will last long under these conditions," Moravec says. But, ever the optimist, he believes that "the takeover will be swift and painless."
Why? Because machine intelligence will be so far advanced, so incomprehensible to human beings, that we literally won't know what hit us. Moravec foresees a kind of happy ending, though, because the cyberspace entities should find human activity interesting from a historical perspective.
We will be remembered as their ancestors, the creators who enabled them to exist.
As Moravec puts it, "We are their past, and they will be interested in us for the same reason that today we are interested in the origins of our own life on Earth."
He seems very sincere as he says this, almost as if it's an article of faith for him - though of course it has some logical foundation. Machine intelligences of the far future will develop from our initial programming, just as a child grows from its parents' DNA. Consequently, even when robots are smarter than we are, they should retain many of our priorities and values.
But Moravec takes the scenario even one step further. Assuming the artificial intelligences now have truly overwhelming processing power, they should be able to reconstruct human society in every detail by tracing atomic events backward in time. "It will cost them very little to preserve us this way," he points out. "They will, in fact, be able to re-create a model of our entire civilization, with everything and everyone in it, down to the atomic level, simulating our atoms with machinery that's vastly subatomic. Also," he says with amusement, "they'll be able to use data compression to remove the redundant stuff that isn't important."
But by this logic, our current "reality" could be nothing more than a simulation produced by information entities.
"Of course." Moravec shrugs and waves his hand as if the idea is too obvious. "In fact, the robots will re-create us any number of times, whereas the original version of our world exists, at most, only once. Therefore, statistically speaking, it's much more likely we're living in a vast simulation than in the original version. To me, the whole concept of reality is rather absurd. But while you're inside the scenario, you can't help but play by the rules. So we might as well pretend this is real - even though the chance things are as they seem is essentially negligible."
And so, according to Hans Moravec, the human race is almost certainly extinct, while the world around us is just an advanced version of SimCity.
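Moravec's "statistically speaking" step can be made concrete with a toy calculation (my own framing, not his): if the original world runs exactly once but is re-created in N indistinguishable simulations, and you have no way to tell which copy you inhabit, the chance that yours is the original is 1/(N+1).

```python
# Toy version of Moravec's simulation argument: one original world plus
# N indistinguishable re-creations gives each copy equal claim, so the
# probability of being the original is 1 / (N + 1).
def p_original(n_simulations: int) -> float:
    return 1.0 / (n_simulations + 1)

print(p_original(0))        # no simulations: probability 1.0
print(p_original(999_999))  # a million total copies: about one in a million
```

As N grows without bound, the probability of living in the original world goes to zero, which is the sense in which Moravec calls it "essentially negligible."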
I've been sitting opposite Moravec in his office, typing on my laptop computer, following his exposition step by step. The vision he has described exists for him as a unified whole; it takes him only about an hour to describe it clearly and fluently from beginning to end. For him it seems entirely pleasurable: a destiny that grows out of his own work and affirms his own values.
Growing up in Montreal, learning English and adjusting to a strange new culture, Hans Moravec was a solitary child who found solace in building models and gadgets. "I remember the thrill I got when I put together something and made it work. I could admire it for hours. And these things also made other people proud of me. I guess I actually thought that they would get me a wife! I knew I didn't have any social skills, but maybe if I could build these machine things really, really well, it would make me more attractive to women." He laughs at his own childhood naïveté.
And yet, he didn't always want to be a scientist. First he wanted to be Superman. "But I could see that it wasn't practical. Then I noticed another character in the comics, Lex Luthor, who didn't have superpowers but was almost a match for Superman. So, I thought if I couldn't be Superman, maybe I could be Lex Luthor."
In person, Moravec seems diffident and gentle; he doesn't drive a car because, he says, he's uneasy with so much potentially dangerous mass in his control. He likes living in Pittsburgh because his home is a short walk from his office, and he seems to feel little need to venture outside this simple life.
Yet as a child he enjoyed fantasies about superheroes and supervillains, and as an adult he talks casually of totally rebuilding human society. He refers to his new book, for which he's currently seeking a publisher, as "a kind of speculative long-term business plan for humanity," and in it he speaks condescendingly of "Earth's small-minded biological natives." Can Moravec really claim that his work as a scientist is in no way manipulative?
"People such as myself," he says, "may have a little bit of influence, but we're like mosquitoes pushing at a rolling boulder. Progress is inflicted on people in the same way that natural evolution is inflicted on people. It really is evolution; it's the selection and growth of information, transmitted from one generation to the next."
But what about the rights of people who don't love the rolling boulder of progress?
"Well," he says, beginning to sound a little impatient with my objections, "they'll - they'll get used to it! In fact, they should enjoy it, since the amount of wealth will be astronomical; you'll be able to live anywhere and in any way you want."
In any case, he says, the progress he's talking about will be offered via the free market, not physically imposed on anyone. "All I'm suggesting is that we give people a choice. In the next decade, people will either buy their housecleaning robot or not buy it. And I think they'll want to buy it. Then they'll have the choice of upgrading to one that learns, and I think they'll want that, too. Then they'll have the choice of a robot that claims it's conscious, a really nice entity that talks like a person, seems to understand you, and has nothing but your best interests at heart - because that's how it's programmed. And then the fourth generation will take that personality and add intelligence. It will be a constant help to you; it will explain why something that you want to do isn't what you should do - because it loves you. I think people will like these machines and will quickly get used to them."
Well, yes - until the machines cut loose, develop hyperintelligence, and bring about our demise.
"But I don't consider it a demise," Moravec retorts, still insisting that his vision is wholly positive. "The robots will be a continuation of us, and they won't mean our extinction any more than a new generation of children spells the extinction of the previous generation of adults. In any case, in the long term, the robots are much more likely to resurrect us than our biological children are."
For people who find long-term resurrection a somewhat nebulous concept, there are also some practical reasons why we should be happy to change ourselves radically. On a long-term basis, Moravec points out, our planet may not be a hospitable place to live. Huge climatic shifts may occur (as they did during the ice ages). Our sun may become unstable. The world may be ravaged by incurable diseases. Our entire ecology could be destroyed by a large meteor or comet. "Sooner or later," he says, "something big will come along that we cannot deal with. But by changing ourselves in the most fundamental way, we will be able to survive such catastrophes."
This is an arguable point of view, but I can't help wondering which came first, Moravec's personal interest in becoming more than human, or his proof that it's really a very good idea. He readily admits that he has a personal obsession with robots, and his passion for transcendence is far more extreme than that of most scientists. What makes him so different from everyone else?
"Well, I was breast-fed as a baby," he answers with typically disconcerting candor. "I was also the first born of my family, and I was well loved by my mother - which must have helped me feel confident about life." He pauses, realizing that this explanation isn't adequate. "Maybe the idea of human transcendence makes me happy because my endorphin levels were misadjusted early on in life," he says with a laugh and a shrug, unable to come up with a better answer.
Personally, I suspect he likes the idea of radical change because he's an intensely intelligent man who is easily bored by the everyday world. He finds it impossible to believe that it makes sense to continue, as human beings, in our exact same form. "Do we really want more of what we have now?" he asks, sounding incredulous. "More millennia of the same old human soap opera? Surely we have played out most of the interesting scenarios already in terms of human relationships in a trivial framework. What I'm talking about transcends all that. There'll be far more interesting stories. And what is life but a set of stories?"
Ultimately, Moravec comes back again to the power and grandeur of a destiny that exceeds all limits. "This universe is so big," he says. "The possibilities must be infinitely greater than anything we can imagine for ourselves. Pushing things in the direction of expanded possibilities seems to be by far the most productive use of my time. And that, here, is my purpose."

Thursday, September 10, 2009

Stephen Hawking

Big Questions about our universe

Stephen William Hawking, (born 8 January 1942) is a British theoretical physicist. He is known for his contributions to the fields of cosmology and quantum gravity, especially in the context of black holes. He has also achieved success with works of popular science in which he discusses his own theories and cosmology in general; these include the runaway best seller A Brief History of Time, which stayed on the British Sunday Times bestsellers list for a record-breaking 237 weeks.

Hawking's key scientific works to date have included providing, with Roger Penrose, theorems regarding singularities in the framework of general relativity, and the theoretical prediction that black holes should emit radiation, which is today known as Hawking radiation (or sometimes as Bekenstein-Hawking radiation). He is a world-renowned theoretical physicist whose scientific career spans over 40 years. His books and public appearances have made him an academic celebrity. He is an Honorary Fellow of the Royal Society of Arts and a lifetime member of the Pontifical Academy of Sciences. On August 12, 2009, he was awarded the Presidential Medal of Freedom, the highest civilian award in the United States.

Hawking is the Lucasian Professor of Mathematics at the University of Cambridge (but intends to retire from this post in 2009), a Fellow of Gonville and Caius College, Cambridge, and holder of a Distinguished Research Chair at the Perimeter Institute for Theoretical Physics in Waterloo.

Hawking has a neuromuscular disease related to amyotrophic lateral sclerosis (ALS), a condition that has progressed over the years and has left him almost completely paralysed.

Research fields

Hawking's principal fields of research are theoretical cosmology and quantum gravity.

In the late 1960s, he and his Cambridge friend and colleague, Roger Penrose, applied a new, complex mathematical model they had created from Albert Einstein's general theory of relativity. This led, in 1970, to Hawking proving the first of many singularity theorems; such theorems provide a set of sufficient conditions for the existence of a singularity in space-time. This work showed that, far from being mathematical curiosities which appear only in special cases, singularities are a fairly generic feature of general relativity.

He supplied a mathematical proof, along with Brandon Carter, Werner Israel and D. Robinson, of John Wheeler's "No-Hair Theorem" – namely, that any black hole is fully described by the three properties of mass, angular momentum, and electric charge.

Hawking also suggested that primordial mini black holes formed after the Big Bang, and that analysis of gamma ray emissions could test for their existence. With Bardeen and Carter, he proposed the four laws of black hole mechanics, drawing an analogy with thermodynamics. In 1974, he calculated that black holes should thermally create and emit subatomic particles, known today as Hawking radiation, until they exhaust their energy and evaporate.

In collaboration with Jim Hartle, Hawking developed a model in which the universe had no boundary in space-time, replacing the initial singularity of the classical Big Bang models with a region akin to the North Pole: one cannot travel north of the North Pole, as there is no boundary there. While originally the no-boundary proposal predicted a closed universe, discussions with Neil Turok led to the realisation that the no-boundary proposal is also consistent with a universe which is not closed.

Hawking's many other scientific investigations have included the study of quantum cosmology, cosmic inflation, helium production in anisotropic Big Bang universes, large N cosmology, the density matrix of the universe, topology and structure of the universe, baby universes, Yang-Mills instantons and the S matrix, anti de Sitter space, quantum entanglement and entropy, the nature of space and time, including the arrow of time, spacetime foam, string theory, supergravity, Euclidean quantum gravity, the gravitational Hamiltonian, Brans-Dicke and Hoyle-Narlikar theories of gravitation, gravitational radiation, and wormholes.

At a George Washington University lecture in honour of NASA's 50th anniversary, Prof. Hawking theorised on the existence of extraterrestrial life, saying that "primitive life is very common and intelligent life is fairly rare."

From Wikipedia

The Computer

Communication system

I communicate with a computer system. I have always used IBM-compatible computers, mounted on my wheelchair. They run from batteries under the wheelchair, although an internal battery will keep the computer running for an hour if necessary. The screen is mounted on the arm of the wheelchair where I can see it; more recent systems have the whole computer in a box on this arm. The original systems were put together for me by David Mason, of Cambridge Adaptive Communications. This company manufactures and supplies a variety of products to help people with communication problems express themselves. Recently, Intel engineers designed a new computer for me powered by a Pentium II processor, which I now use.

On the computer, I run a program called Equalizer™, written by a company called Words Plus, Inc. A cursor moves across the upper part of the screen, and I can stop it by pressing a switch in my hand. This switch is my only interface with the computer. In this way I can select words, which are printed on the lower part of the screen. When I have built up a sentence, I can send it to a speech synthesizer. I use a separate synthesizer, made by Speech+. It is the best I have heard, though it gives me an accent that has been described variously as Scandinavian, American or Scottish. I can also use Windows 98 through an interface called EZ Keys, again made by Words Plus. I am able to control the mouse with the switch, through a cleverly designed selection process driven from a small box shown on the desktop. I can also write text using menus similar to those in Equalizer.
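The single-switch scanning idea behind the Equalizer interface described above can be sketched in a few lines (my own illustration, not the real Words Plus program): a cursor steps through candidate words until the user's hand switch fires, and each accepted word extends the sentence.

```python
# Minimal sketch of single-switch scanning selection: the cursor sweeps
# the word list, looping until the switch fires, and the highlighted
# word at that moment is selected.
from typing import Callable, List

def scan_select(words: List[str], switch_pressed: Callable[[str], bool]) -> str:
    """Sweep the cursor over `words` until the switch fires; return the
    word highlighted at that moment."""
    while True:
        for word in words:            # the cursor moves across the screen
            if switch_pressed(word):  # the user presses the hand switch
                return word

# Model a user who wants to write "I can give lectures" and presses the
# switch whenever the cursor reaches the next intended word.
vocabulary = ["lectures", "I", "give", "can"]
sentence = [scan_select(vocabulary, lambda w, t=t: w == t)
            for t in ["I", "can", "give", "lectures"]]
print(" ".join(sentence))  # prints "I can give lectures"
```

The design point is that one binary input, plus time, is enough to index an arbitrarily large vocabulary; the cost is that selection speed is bounded by the scan rate.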

I can save what I write to disk. I write papers using a formatting program called TeX. I can write equations in words, and the program translates them into symbols and prints them out on paper in the appropriate type. I can also give lectures. I write the lecture beforehand and save it on disk. I can then send it to the speech synthesiser, a sentence at a time. It works quite well, and I can try out the lecture, and polish it, before I give it.

Recent Improvements

Professor Hawking is determined to keep up with recent improvements in computer and communication technology. Below are some of the improvements that have been carried out on the system within the last 12 months.

In non-wireless areas, Intel manages a 3G account for us so that Professor Hawking is able to use the internet from anywhere in the world, via a PCMCIA 3G card.

The computer has been replaced about once per year; it is currently (Jan 2009) running on a Lenovo Thinkpad T60 and the next model will have an X61 at its heart.

Upgrade to Windows XP (in around 2001)

The computer is now running on Windows XP. For many years it was impossible to upgrade beyond Windows 98, because Professor Hawking's favourite speech software, Equalizer by Words Plus, was written in 1986 and was designed to run only on DOS-based operating systems. However, Intel kindly funded the conversion of the software to XP, which involved Words Plus re-writing the whole program for today's operating system.


Due to Professor Hawking's active lifestyle, it is impossible to power his chair computer via the mains as he is never in one place long enough to make this practical. Thus the laptop needs to be powered by the wheelchair batteries, which are similar to car batteries, in the back of the chair.

Keep talking

It is essential that Stephen is able to make use of a telephone. He is able to use Voice over IP, or connect his chair computer directly to a telephone socket. The process works by sending digital commands from his computer instructing the phone system to dial a number, answer the phone or hang up at the end of a call.

Who's got the remote?

Stephen has a universally programmable infra-red remote control attached directly to his computer system. This enables him to operate many of the electronic items in his home, such as televisions, video recorders and music centres. He also has a radio control device which enables him to open doors and operate lights throughout his home. He is now also able to operate doors within his workplace. With the opening of the newly built Centre for Mathematical Sciences, he will be able to get about the building virtually unassisted.

From the official website of Professor Stephen William Hawking, by Nicki Ley and graduate assistant Sam Blackburn.

Tuesday, September 8, 2009


"The World" What Will The Future Look Like?

Michio Kaku (b. January 24, 1947) is an American theoretical physicist specializing in string field theory, and a futurist. He is a popularizer of science, host of two radio programs and a best-selling author.

Kaku has publicly stated his concerns over issues including the human cause of global warming, nuclear armament, nuclear power, and the general misuse of science. He was critical of the Cassini-Huygens space probe because of the 72 pounds of plutonium contained in the craft for use by its radioisotope thermoelectric generator. He alerted the public to the possibility of casualties if the fuel were dispersed into the environment during a malfunction and crash as the probe made a 'sling-shot' maneuver around Earth, and he was critical of NASA's risk assessment.

Ultimately, the probe was launched and successfully completed its mission. Kaku is generally a vigorous supporter of the exploration of outer space, believing that the ultimate destiny of the human race may lie in the stars; but he is critical of some of the cost-ineffective missions and methods of NASA.

Kaku credits his anti-nuclear war position to programs he heard on the Pacifica Radio network during his student years in California. It was during this period that he made the decision to turn away from a career developing the next generation of nuclear weapons in association with Dr. Teller and to focus on research, teaching, writing and media. Dr. Kaku joined with others, such as Dr. Helen Caldicott, Jonathan Schell, and Peace Action, and was instrumental in building the global anti-nuclear weapons movement that arose in the 1980s, during the administration of US President Ronald Reagan.

Kaku was a board member of Peace Action and on the board of radio station WBAI-FM in New York City where he originated his long running program, Explorations, that focused on the issues of science, war, peace and the environment.

From Wikipedia

Quick Questions With Michio Kaku

1. If time machines exist, can we ever hope to meet our older, or younger, selves?

That is a big "if." But assuming they exist, then there is hope that we might meet our older or younger selves, but they won't be exactly "us." The river of time, I believe, may fork into two rivers if we travel in time. Hence, if we jump from one time line to another time line, we may meet ourselves in the past, but these people won't really be "us." They will be genetically identical to us, but will be a younger or older version of ourselves in a parallel universe. Hence, we won't have any time paradoxes. So if we change the past, we change someone else's past, who is genetically identical to us, but is not really "us." Of course, we won't know for sure until we finally build a time machine. (In fact, I give a blueprint for a time machine in my book, Physics of the Impossible, which is consistent with all known physics.)

2. Since we haven't ever met any time travelers from the future, does that mean they will never be invented?

No. Perhaps we are not interesting to them. We think we are so great that they will want to visit us, but maybe we are too primitive for them. After all, if we see an anthill, do we go down to the ants and say "I bring you beads. I bring you trinkets. Take me to your leader"? Some of us may even have the urge to step on them. But the technological distance between an ant and us may be small compared to the technological chasm between us and a time-faring civilization. They may be thousands to millions of years ahead of us in technology, and hence have no interest in visiting us. But one day, if someone knocks on your door and says she is your great-great-great-granddaughter, do not slam the door. Perhaps in the far future our descendants will develop time machines, and want to visit their illustrious ancestors.

3. Are fears of robots taking over the world, Terminator-style, ever founded in reality?

Yes, robots may eventually take over the world. But we will have plenty of warning. Right now, robots have the intelligence of a cockroach. A retarded, stupid cockroach. Our most advanced robots take about six hours just to walk around a strange room. It may be years to decades before they are as smart as a mouse, then a rabbit, then a dog or cat, and finally a monkey. By the time they have the intelligence of a monkey, they can be dangerous, since they will have agendas of their own. But we will have plenty of warning. By the time they are as smart as a monkey, I think we should put a chip in their brains to turn them off when they have murderous thoughts. The key is that we will have plenty of time before these robot creations become truly sentient and conscious, with their own goals and desires.

Marvin Lee Minsky

Artificial Intelligence

Marvin Lee Minsky (born August 9, 1927) is an American cognitive scientist in the field of artificial intelligence (AI), co-founder of MIT's AI laboratory, and author of several texts on AI and philosophy.

Minsky won the Turing Award in 1969, the Japan Prize in 1990, the IJCAI Award for Research Excellence in 1991, and the Benjamin Franklin Medal from the Franklin Institute in 2001.

Minsky is listed on Google Directory as one of the all time top six people in the field of artificial intelligence. Isaac Asimov described Minsky as one of only two people he would admit were more intelligent than himself, the other being Carl Sagan. Patrick Winston has also described Minsky as the smartest person he has ever met. Minsky is a childhood friend of the Yale University critic Harold Bloom, who has referred to him as "the sinister Marvin Minsky." Ray Kurzweil has referred to Minsky as his mentor.

Minsky's patents include the first head-mounted graphical display (1963) and the confocal microscope (1957, a predecessor to today's widely used confocal laser scanning microscope). He developed, with Seymour Papert, the first Logo “turtle”. Minsky also built, in 1951, the first randomly wired neural network learning machine, SNARC.

Minsky wrote the book Perceptrons (with Seymour Papert), which became the foundational work in the analysis of artificial neural networks. This book is the center of a controversy in the history of AI, as some claim it to have had great importance in driving research away from neural networks in the 1970s, and contributing to the so-called AI winter. That said, none of the mathematical proofs present in the book, which are still important and interesting to the study of perceptron networks, were ever countered.
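The book's best-known negative result can be sketched in a few runnable lines (my own illustration, not code from the book): the perceptron learning rule converges on any linearly separable function, such as AND, but no choice of weights and threshold in a single-layer perceptron computes XOR, which is not linearly separable.

```python
# Train a two-input perceptron with the classic learning rule, then
# compare its behaviour on AND (linearly separable) and XOR (not).
def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # zero when the prediction is right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

f_and = train_perceptron(AND)
f_xor = train_perceptron(XOR)
print([f_and(*x) for x in inputs])  # [0, 0, 0, 1]: AND is learned exactly
print([f_xor(*x) for x in inputs])  # never equals [0, 1, 1, 0], however long we train
```

Adding a hidden layer removes the limitation, which is why later multi-layer networks escaped the critique.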

Minsky was an adviser on the movie 2001: A Space Odyssey and is referred to in the movie and book.

Probably no one would ever know this; it did not matter. In the 1960s, Minsky and Good had shown how neural networks could be generated automatically—self replicated—in accordance with any arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of a human brain. In any given case, the precise details would never be known, and even if they were, they would be millions of times too complex for human understanding.

—Arthur C. Clarke, 2001: A Space Odyssey

In the early 1970s at the MIT Artificial Intelligence Lab, Minsky and Seymour Papert started developing what came to be called The Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky says that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. In 1986, Minsky published The Society of Mind, a comprehensive book on the theory which, unlike most of his previously published work, was written for a general audience.

In November 2006, Minsky published The Emotion Machine, a book that critiques many popular theories of how human minds work and suggests alternative theories, often replacing simple ideas with more complex ones. Recent drafts of the book are freely available from his webpage.

Minsky is an actor in an artificial intelligence koan (attributed to his student, Danny Hillis) from the Jargon File:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6 (computer).
"What are you doing?" asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe,"
Sussman replied.
"Why is the net wired randomly?" asked Minsky.
"I do not want it to have any preconceptions of how to play," Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.

What I actually said was, "If you wire it randomly, it will still have preconceptions of how to play. But you just won't know what those preconceptions are." --Marvin Minsky

From Wikipedia