Virtual Exploration, Virtually Everywhere

05/14/2012 03:02 pm ET | Updated Jul 14, 2012
By Jim Bell, planetary scientist; professor, School of Earth and Space Exploration, Arizona State University

Earlier this month I had the pleasure of participating in a symposium at NASA's Goddard Space Flight Center called "Space Exploration via Telepresence: A New Paradigm for Human-Robotic Cooperation." It was a blast to interact with a bunch of scientists, technologists, roboticists, entrepreneurs, and enthusiasts working on various aspects of human and robotic space exploration. The biggest impact the symposium had on me, however, was the surprising realization of how little I personally knew about the profound ways that telerobotics and telepresence are used in the exploration of our own world.

For the purposes of the symposium, telerobotics was defined as the control of robots from a remote distance (more on that in a bit), while telepresence was defined as the process of projecting human senses (and even feelings) into a remote environment. Telepresence and teleoperation -- performing work of some kind from a distance -- are the primary sub-fields of telerobotics. These are enormous and growing areas of research and innovation in modern robotics, electronics, control systems, bioinformatics, and software design.

Although I never really thought about it this way before, I realized that I have been part of NASA telerobotics teams for many years. Anyone helping to operate robotic spacecraft on or around other planets is a teleoperator. We typically operate these robots with only limited telepresence, however, usually "sensing" the remote environment primarily visually, using cameras. We haven't really had much experience projecting our other senses into remote planetary environments yet. Though we often talk of "touching" or "smelling" or "hearing" remote environments using robotic arms or electrical/chemical sensors, we're not really doing that, not yet at least.

What surprised me in the symposium, however, is how much telerobotics and telepresence is being used today, right here on our own planet, enabling people to survey, explore, mine, and even kill from great distances. Like many people, I've heard about the many voyages of exploration by people in undersea vehicles like the Woods Hole Oceanographic Institution's ALVIN submersible in recent decades, and have been fascinated by the discovery of deep hydrothermal vents and their ecosystems, as well as by the haunting images of the Titanic and other famous shipwrecks by deep-sea explorer Robert Ballard and others. Most recently, famed director James Cameron became one of only three people to visit the bottom of the nearly 11-kilometer-deep Mariana Trench, in a "vertical torpedo" submersible called the Deepsea Challenger. These human-piloted missions are the most famous examples of deep-sea exploration, providing an immediate (or, for the rest of us, vicarious) experience of an environment that humans could not otherwise physically visit.

However, a fascinating array of many more truly telerobotic, semi- or fully-automated undersea robots are exploring Earth's oceans every day. Remotely Operated Vehicles (ROVs) like Woods Hole's JASON submersible allow humans to work remotely, collect samples, and project our presence into deep-sea environments at lower cost (and risk) than human-piloted vehicles. Fully autonomous underwater vehicles (AUVs), like those operated by the Monterey Bay Aquarium Research Institute, and others like the tiny AUVs called MSLEDs being developed by my colleague Alberto Behar and others at ASU and JPL for small polar lake exploration, enable even lower-cost, lower-risk, more frequent access to challenging underwater environments. They can, for example, conduct surveys and remote-sensing investigations 24 hours a day, 7 days a week, over much wider areas than ROVs or human-piloted submersibles can cover.

Robotics is taking off in aviation as well, with a dizzying array of Unmanned Aerial Vehicles (UAVs), or drones, now being used to let people remotely explore, survey, spy, and even attack. Among the most famous of the UAVs are the U.S. Air Force's Predator drones, many of which are being used today to conduct combat operations against targets in Afghanistan and Pakistan. Warfare has gone robotic; I can only imagine the telepresence sensations military operators must feel as they not only locate but actually kill enemies from halfway around the planet.

Even the ancient practice of mining is being revolutionized through robotics, with remote operators telerobotically controlling driverless trucks, rockbreakers, drills, and loaders. In South America, Europe, Australia, China, and the U.S., telerobotics is fueling a growing segment of the global mining economy built around the modern automated mine, where accidents still cost time and money but not necessarily human lives.

One big difference between telerobotics in the ocean, air, or underground and telerobotics in space, however, has to do with what engineers call latency -- the time it takes the remote vehicle or system to respond to commands, and for the results of those responses to be communicated back to the teleoperators. In ocean or mine exploration, for example, latency can be essentially zero if the vehicle is linked to its operators by radio or fiber optics. In planetary exploration, latency ranges from seconds for vehicles in Earth orbit or on/near the Moon, to minutes or tens of minutes for vehicles operating at Venus or Mars, to nearly 9 hours when the New Horizons mission reaches Pluto in 2015, to even longer for our most distant space vehicles like Voyager. This makes real-time operation of robotic vehicles difficult or impossible for deep-space destinations ("JPL to Spirit rover: Avoid that cliff!" ... "Spirit rover?" ... "Spirit rover???"), and has led to operations concepts built around time-stamped command lists radioed up to these deep-space vehicles, followed by daily to weekly response and reaction latencies.

Occasionally, terrestrial telerobotics latencies can also stretch to days or weeks; some AUVs, for example, surface only occasionally to radio their data and status back to remote controllers. Such long latencies make the more desirable, immersive experience of telepresence challenging, though as the symposium demonstrated, people are beginning to think about technologies and operational strategies that could work around some of those challenges.
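Those planetary latency figures follow directly from light-travel time: a command and its response each cross the distance to the vehicle at the speed of light. A minimal sketch in Python (the distances below are illustrative round numbers I've assumed, not mission data):

```python
# Round-trip command latency for a teleoperated vehicle, assuming the
# signal travels at the speed of light in vacuum. Illustrative only.
C_KM_PER_S = 299_792.458      # speed of light, km/s
AU_KM = 149_597_870.7         # one astronomical unit, km

def round_trip_latency_s(distance_km: float) -> float:
    """Seconds for a command to reach the vehicle and its reply to return."""
    return 2.0 * distance_km / C_KM_PER_S

# Assumed example distances (not mission ephemerides):
examples = {
    "Moon": 384_400,                      # km, mean Earth-Moon distance
    "Mars (closest approach)": 0.52 * AU_KM,
    "Pluto (~33 AU at the 2015 flyby)": 33 * AU_KM,
}

for body, d_km in examples.items():
    minutes = round_trip_latency_s(d_km) / 60
    print(f"{body}: {minutes:.1f} minutes round trip")
```

Running this gives a couple of seconds for the Moon, several minutes for Mars at its closest, and roughly nine hours for Pluto, matching the figures above.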

In the end, I was simply stunned to learn from fellow symposium participants how many of these ROVs, UAVs, and robotic mine vehicles people are using here on Earth these days, how much oceanographic, geologic, biologic, and even archaeologic science and exploration they are enabling, and how important robotics is becoming to global defense and business. The teams of people operating terrestrial robots for science, defense, or resource exploration are the same kinds of teleoperators, engaged in similar kinds of telerobotics work, as those of us exploring space with Martian rovers or planetary orbiters. In that sense, these global (and interplanetary) communities can learn much from each other. Indeed, a main goal of the symposium was to share tools, experiences, and lessons learned that would enable collaborations between these different stakeholders. I can't wait to learn more.

Jim Bell is an astronomer and planetary scientist, a professor in the School of Earth and Space Exploration at Arizona State University in Tempe, and the president of The Planetary Society, the world's largest public space advocacy organization. He is the lead scientist for the Pancam color stereo cameras on NASA's Mars rovers Spirit and Opportunity, a member of the science camera team on NASA's Curiosity rover, and the author of several space photography books, including "Postcards from Mars," "Mars 3-D," and "Moon 3-D."