|> Situated and Embodied Robots
|> Computer Vision
|> Active vs Photographic Vision
|> Fast, Cheap, and Out-of-Control
|> Ubiquitous Robotics
|> Symbiotic Home Lifeforms
|> Remote-Presence Robots
|> Emotions and Humanity
|> "New Stuff"
|> Modularized Functionality
|> Perceptual Systems
|> Mind Downloading
|> Strange Brews
<| Situated and Embodied Robots - (pp. 51-52) says Rodney: "a situated creature or robot is one that is embedded in the world, and which does not deal with abstract descriptions, but through its sensors with the here and now of the world, which directly influences the behavior of the creature". Furthermore, "an embodied creature or robot is one that has a physical body and experiences the world, at least in part, directly through the influence of the world on that body. A more specialized type of embodiment occurs when the full extent of that creature is contained within that body".
An airline reservation system is situated but not embodied, and an automated, pre-programmed spray painting machine is embodied but not situated. To Brooks, a major advantage of having embodied and situated robots is that sophisticated behaviors can be attained with far fewer computational resources devoted to the "representation" problem. In effect, the external world is its own best representation, and the robot does not necessarily have to create an internal model of it in order to engage in "intelligent" behavior.
<| Computer Vision - (pp. 75-91) Brooks has a great deal of experience in this area. Early on, major AI researchers pooh-poohed computer vision as simple I/O, but after 40 years of continuous and large-scale research, computers still cannot do most of the things that human and animal visual systems can. The "major" problems still defy solution. Computers are successful at recognizing faces from a small library, segmenting faces, tracking moving objects, determining rough 3-D structure over short distances, and translating geometric models into 3-dimensions. They are not yet good at recognizing details about faces (age, gender, etc.), recognizing rotated or disguised faces or what people are wearing, determining the physical properties of objects, or at recognizing general objects or discriminating them from the background - all of which humans are very good at.
This is true, even with the great advances in computer power, and Brooks says "it is clear we are missing something fundamental in the way vision in humans is organized, although almost no one will admit that". [BTW, this theme is a common thread in Brooks' appraisal of much of AI]. In the past, when computers were slower, people tried to devise clever algorithms, but today he says, "a lot of brute-force algorithms have become relatively successful". These are simple computations and convolutions done everywhere in an image without recourse to underlying models of how the images were generated in the physical world. In other words, today's robots by and large live in a "strange, disembodied, hallucinatory" and meaningless, computational world of 1's and 0's. To Brooks, solving this problem is vital to the ultimate success of machine intelligence, and this is why he has major research projects underway with the cyborgs Cog and Kismet.
Refs: MIT research.
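The "brute force" approach Brooks describes can be illustrated with a toy sketch (my own, not from the book): a single simple kernel convolved at every pixel, with no model of how the image arose in the physical world.

```python
# Toy illustration of "brute force" vision: the same simple
# convolution is applied at every pixel of the image, with no
# underlying model of the physical scene that produced it.

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution over plain nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector (Sobel kernel) run over a tiny synthetic
# image: dark on the left half, bright on the right half.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
image = [[0, 0, 10, 10]] * 4

edges = convolve2d(image, sobel_x)
print(edges[0])  # strong responses where the dark/bright boundary lies
```

The computation "knows" nothing about objects or lighting - it is exactly the kind of model-free, everywhere-in-the-image processing the passage describes.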
<| Active vs Photographic Vision - (pp. 81-84) says Brooks, "... rather than photographing a picture in their mind, .... people actively search for and store information relevant to some task". In the 50s and 60s, Alfred Yarbus studied how people scan pictures of faces and scenes. People's eyes invariably jumped back and forth between different parts of the images over and over. If asked specific questions about the images, the eyes would concentrate on the areas relevant to the questions - eg, on faces when asked about the ages of individuals, or on clothing when asked about historical periods and job functions.
In addition, Ballard and Hayhoe set up an experiment where subjects were to make a copy of a "pattern" of colored lego blocks by selecting blocks one at a time from a random pile of blocks nearby. To perform this task, they would typically use the following attention scheme: pattern, pile, pattern, copy, ... [repeat] ..., and not the scheme: pattern, pile, copy, pattern, pile, copy, ... [repeat] ... In another experiment, it was found that subjects would not even notice small changes that were made in the pattern pile from one interrogation to the next, when returning with block in hand from the supply pile. In other words, they were not working from a stored image in memory, but rather performed one step at a time with focus upon either block color or block position, but not both. "... their saccade strategy seemed to indicate that they were only remembering one thing for one block at a time". To reiterate, people appear to use active search rather than photographic storage in the solution to visual tasks, which Brooks points out is essentially opposite to how AI research has classically attacked the same problem area.
Refs: Yarbus, Ballard
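The fixation sequence Ballard and Hayhoe observed can be sketched as a toy simulation (hypothetical code, not from their paper), in which only one fact - color or position - is held in memory per glance at the pattern:

```python
# Sketch of the observed saccade strategy: pattern, pile, pattern, copy.
# Each glance at the pattern retrieves ONE fact (color OR position),
# rather than a complete stored image of the pattern.

def copy_pattern(pattern, pile):
    """pattern: list of (color, position); pile: list of colors."""
    copied, fixations = {}, []
    for i in range(len(pattern)):
        fixations.append("pattern")   # 1st glance: remember color only
        color = pattern[i][0]
        fixations.append("pile")      # find a block of that color
        pile.remove(color)
        fixations.append("pattern")   # 2nd glance: remember position only
        position = pattern[i][1]
        fixations.append("copy")      # place the block
        copied[position] = color
    return copied, fixations

pattern = [("red", (0, 0)), ("blue", (0, 1)), ("red", (1, 0))]
pile = ["blue", "red", "red", "blue"]
copied, fixations = copy_pattern(pattern, list(pile))
print(fixations[:4])  # ['pattern', 'pile', 'pattern', 'copy']
```

Note the two glances at the pattern per block - the signature of active, one-fact-at-a-time search rather than photographic storage.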
<| Fast, Cheap, and Out-of-Control - (pp. 55-62) In 1988, Brooks and Anita Flynn presented this idea as a way to reduce the size, cost, complexity, and time to produce robots to be sent on space missions. Rather than send one huge and heavy billion-dollar mobile computer to Mars or the Moon, why not send a dozen cheap, simple-minded (in terms of task assignment) and semi-autonomous devices at one-tenth the cost? Instead of one 1000 kg machine that moves at 1 cm/sec, send a bunch of 1 kg machines that move at 1 m/sec, and collectively send back reams of different data. If one is damaged on landing or falls in a hole, the rest will still complete the mission.
Of course, in 1997, NASA landed the small, semi-autonomous, 6-wheeled, rocker-bogie-chassised Sojourner/Pathfinder on Mars, under a program philosophy entitled "Faster, Better, Cheaper". The robot vehicle itself cost just $25M. It was one of NASA's most successful missions ever. Later missions using this approach have not been as fruitful, however.
<| Ubiquitous Robotics - (p. 113) says Rodney: "Robots are following the same path that computers took, but are lagging, generally speaking, by 20 or 25 years". The basic penetration route is research labs, industry, toys, workers, and finally today's killer apps - ie, e-mail, messaging, and WWW ubiquitous in homes. Further, "... if the early parallels between computers and robots hold up .... by the year 2020 robots will be pervasive in our lives".
<| Symbiotic Home Lifeforms - (pp. 115-126) The chief problem with building a robo-vac is the technique used to navigate the rooms and cover all the floor space. Of the 3 major possibilities, installation of triangulation beacons in each room to be cleaned is too inconvenient, 3-D vision systems are both too expensive and not advanced enough to be generally successful, and odometry is too inaccurate when used over different floor types with different texture and friction characteristics. Therefore, most companies have apparently now decided that use of random cleaning patterns is the most practical way to go. The robot may clean the same area several times, but eventually it will randomly cover all the floorspace.
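The random-coverage strategy can be sketched as a toy simulation (grid size and step budget are my own illustrative assumptions, not from the book): a robot wandering at random, with no beacons, vision, or odometry, eventually visits every cell of the floor.

```python
import random

# Toy sketch of random-coverage cleaning: the robot takes random
# steps on a grid, bouncing off walls. It may clean the same cell
# many times, but given enough steps it covers the whole floor,
# needing neither beacons, 3-D vision, nor accurate odometry.

def random_clean(width, height, steps, seed=0):
    """Return the fraction of floor cells visited after `steps` moves."""
    rng = random.Random(seed)       # seeded for a repeatable demo
    x, y = 0, 0
    cleaned = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:   # bounce off walls
            x, y = nx, ny
            cleaned.add((x, y))
    return len(cleaned) / (width * height)

coverage = random_clean(8, 8, 5000)
print(f"coverage after 5000 steps: {coverage:.0%}")
```

The inefficiency (revisiting cells) is exactly the trade the vendors accepted: wasted motion is cheap, while reliable localization is not.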
Brooks mentions a prototype robo-vac, Sozzie, that used a laser to measure the dirtiness of the dust being sucked, as an indication of how long to scour a particular area before moving to the next. It also used an IR beacon to find its way to its charging station - similar to Grey Walter's tortoises of 50 years ago. Because Sozzie could not get into every nook and cranny, Brooks also thought of the possibility of using an "ecology" of robot "pucksters", ie, an army of small vacs which would each clean a local area and deposit the dust centrally for a large device to collect. Ultimately, in October 2002, iRobot released the Roomba vac, a device roughly the size and shape of a bathroom scale.
Says Brooks: ".. this then is the immediate future of life in our homes. Small robots that we grab from a charging station, twist a knob, plop down on the floor while we walk away ... Dumb, simple robots ... that move about in our house with our initiation, but without our intervention. They will be new almost life-forms that coinhabit our homes". There will be others, too, for automatic cleaning of floors, windows, walls, picture frames, countertops, tabletops, and bathrooms. There will be automatic clothes washers and dishwashers. They will work on the same principle as thermostats - set and forget. They will get smaller and smaller and cheaper and cheaper, and some will be all but invisible to daily life. A symbiosis will develop between people and these artificial creatures.
<| Remote-Presence Robots - (pp. 131-147) says Rodney: "Here now is the killer app for the robots in the short term. Physical work can be done from any place in the world. The implications of this will be profound on the world's economy".
There are many applications for mobile devices which can report on, and interact with, local situations in real-time - security, general monitoring, handling dangerous situations, police actions, couch potatoes too lazy to walk to the refrigerator, etc. Crazy as it sounds, people will have remote robot presences at distant conferences, thereby saving millions in travel costs and time (shades of the Media Lab's talking heads). In addition, due to its aging population, Japan is especially interested in developing remotely-operated robots for home helper use. This would help aging individuals remain independent longer, and also allow fewer humans to provide aid for more people. Rodney's company iRobot has already produced robots with global telepresence over the internet, namely iRobot-LE and CoWorker.
<| Emotions and Humanity - (pp. 156-157) says Rodney: "... the amygdala and other parts of the limbic system ... receive inputs from many parts of the brain's perceptual subsystems, and at the same time innervate both the primitive motor sections of the brain and the more modern decision-making and reasoning centers of the brain ... emotions are both primitive in the sense that we carry around the emotional systems that evolution installed in our brains long before we had warm blood, and that they play intimate roles in all of the higher-level decisions that we tend to think of as rational and emotionless." This is a major reason why Brooks has been moving research into android-like robots that can interact with humans as other humans do.
<| "New Stuff" - (pp. 184-191) on what may be lacking in current robotic and Alife models:
1. We might just be getting a few parameters wrong in all of our systems.
2. We might be building all our systems in too simple environments, and once we cross a certain complexity threshold, everything will work out as expected.
3. We might simply be lacking enough computer power.
4. We might actually be missing something in our models of biology; there might indeed be some 'new stuff' that we need.
Rodney thinks the answer is really in #4 - "... we may not be seeing some fundamental mathematical description of what is going on in living systems ... it might turn out that, for all the different aspects of biology that we model, there is a different juice that is missing."
Commentary: Many new techniques and ideas have arrived, and in many cases people have tried to explain living systems using analogues of these ideas, but each is deficient in the end: telephone switching networks, computers, catastrophe theory, chaos theory, dynamical systems, Markov random fields, wavelets, GOFAI (good old-fashioned AI), single-cell recordings, mass potential recordings, holograms, neural nets, Alife, WWW, (generalization of physics) category theory.
<| Modularized Functionality - (pp. 192-193) a human called JT suffered a brain hemorrhage, and afterwards had a dysfunction where he could see colors and also use color words, but could no longer make associations between them.
Says Rodney: "... evolution builds a hodge-podge of capabilities that are adequate for the niche in which a creature survives. It is possible that with a few additional wiring changes in our 'normal' brains we would have newfound capabilities ... Just as some of our modules have capabilities that are not present in chimpanzees, a supersapiens might have modules and capabilities that are not even latently present in us."
Commentary: The brain is made up of many different modules with specific functions. Senses perceive and motor units act. New capabilities are added during evolution by adding new modules on top of the old, as well as modifying the old, and adding new connections between all. Likewise, robots should not have a single centralized brain (ie, program) that tries to do everything, but rather many small units doing their own thing, but coordinated by interacting links. To a large extent, this is the concept underlying Rodney's subsumption idea, and to some extent, Minsky's society of mind.
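This layered, decentralized control can be caricatured in a few lines (hypothetical layers of my own, not Brooks' actual subsumption code): each behavior is a small independent module, and a higher-priority layer overrides lower ones only when it has something to say.

```python
# Toy sketch of subsumption-style control: independent behavior
# modules, ordered by priority. A higher layer "subsumes" the ones
# below it by producing an action; otherwise it defers to them.

def avoid(sensors):
    """Higher layer: fires only when an obstacle is sensed."""
    if sensors.get("obstacle"):
        return "turn-left"
    return None                       # defer to lower layers

def wander(sensors):
    """Lowest layer: default behavior, always has an opinion."""
    return "forward"

LAYERS = [avoid, wander]              # highest priority first

def control(sensors):
    """No central brain: the first layer with an output wins."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle": False}))   # forward
print(control({"obstacle": True}))    # turn-left
```

Adding a capability means adding a new module on top - the existing layers are left untouched, mirroring how evolution overlays new structures on old ones.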
<| Perceptual Systems - (p. 190) says Rodney: "... for perceptual systems, say, there might be some organizing principle, some mathematical notion, that we need in order to understand how animal perception systems really work. Once we have discovered this juice, we will be able to build computer vision systems that are good at all the things they are currently not good at. These include separating objects from background, understanding facial expressions, discriminating the living from the nonliving, and general object recognition."
Commentary: Biology doesn't do all of this in one place, and different animals do this to different degrees. The quintessential example of this is the so-called "two visual systems" of higher vertebrates - the midbrain tectal-collicular systems respond to movements in the visual field, while the cerebral cortical systems are responsible for fine-scale recognition. These could be called: "vision for action" and "vision for perception". These systems have been overlaid during evolution. Lower vertebrates, like amphibians, have well-developed tectal structures, used for acquisition of moving prey (ie, food) and avoidance of large predators (eg, birds), but little or no ability to discern the subtleties of fine art, like the Mona Lisa. Some people further dissect the cortical system. Likewise, robots should not have a single centralized vision processor that tries to do everything, but rather different units doing their own thing, and possibly coordinated in a similar manner to general subsumption architecture.
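The tectal "vision for action" pathway can be caricatured as a cheap motion detector that runs independently of any recognition module (a hypothetical sketch of my own, not from the book):

```python
# Toy sketch of "vision for action": a fast module that reacts to
# change in the visual field - like the tectal-collicular system -
# without knowing or caring WHAT is moving. Fine-scale recognition
# would live in a separate, independent module.

def motion_detected(prev_frame, frame, threshold=5):
    """Frame differencing: respond to movement, not identity."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return diff > threshold

still = [0, 0, 10, 10]
moved = [0, 10, 10, 0]    # the bright region has shifted

print(motion_detected(still, still))  # False - nothing moved
print(motion_detected(still, moved))  # True - something moved
```

A frog-like controller could act on this signal alone (snap at it, or flee from it), while a slower "vision for perception" module - if present at all - works out what the thing actually is.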
<| Mind Downloading - (pp. 204-206) Hans Moravec, Ray Kurzweil, and others believe the human mind will be readable and downloadable into computers by 2020 to 2050. This is partly because they foresee a great "Singularity" in technological innovation occurring in the next few years, where techniques far beyond anything available today will come to pass. Brooks is not as optimistic, and says: "... it takes computational chauvinism to new heights", and ignores chemical aspects like neurotransmitters and hormones, and neglects the role of embodiment in human existence.
Commentary: The brain is so complex, one truly wonders how it would ever be possible to deal with the downloading problem, especially in the few split seconds between heart stoppage and brain death. The human brain has 100 billion neurons, roughly 1000 times that many synapses, and upwards of 100,000 miles of dendritic tree length. Capture the operation of that as the lights are going out [if at all]? Seriously, guys.
<| Strange Brews - (pp. 213-236) Today, we have pierced body parts, laser eye surgery, hearing aids, organ transplants, artificial limbs and hearts, cochlear implants, kidney dialysis, machines tentatively operated by EEG and EMG pickup, a professor in the UK with ID chip implants, and the human genome project. Tomorrow, we will have neural links to PDAs, brain implants, more sophisticated remote-presence, advanced artificial sense organs, bioengineered body parts [already artificial skin and pinnae exist], wholescale manipulation of DNA, and nanorobots.
Tom Knight and Ron Weiss of the MIT AI lab have been genetically engineering E. coli bacteria into tiny computing robots that can perform logical operations and possibly recompute their own transcription sequences. Down the road says Brooks, "... people might achieve similar control over the molecular processes of living cells in more subtle ways", such as having programmed cells within living organisms. These will certainly be in robots. "... robot technology will merge with biotechnology in the first half of this century". Molecular biology will give us "... the power to manipulate our own bodies in the way we currently manipulate the design of our machines". As a result of prosthetic enhancements in humans, and biotechnology applied to robots, "the distinction between us and robots is going to disappear". Curiouser and curiouser.
Commentary: Reading all this gives one the impression that Brooks foresees a slightly different future than other AI researchers. Many prophesy the rise and possible takeover of robots with human-level consciousness. Brooks [probably rightly] foresees that, compared to AI, biotechnology has been progressing by leaps and bounds, and it will likely have more impact on society in the next few years. How far and how fast all this goes will depend upon the rate of progress in several areas of technology, including nanotech, biotech, and computing - and to some extent upon whether the "Singularity", originally proposed by Vernor Vinge, occurs. "Holy Fire" by Bruce Sterling presents one scenario for biotech in the 21st century, involving wholesale recycling of worn-out body parts. Bill Joy provides a voice of dissent in "Why the Future Doesn't Need Us".