
Yes, You Can Hack a Pacemaker (and Other Medical Devices Too)

XPRIZE   |   January 2, 2013    2:42 PM ET

By Tarun Wadhwa
Tarun Wadhwa is a Research Fellow with Singularity University studying how exponential technologies can be used to solve public policy problems.

On a recent episode of the Emmy award-winning show Homeland, the Vice President of the United States is assassinated by a group of terrorists who hack into the pacemaker controlling his heart. In an elaborate plot, they obtain the device's unique identification number. They are then able to take control remotely and administer large electrical shocks, bringing on a fatal heart attack.


Viewers were shocked - many questioned whether something like this was possible in real life. In short: yes (although the part about the attacker being halfway across the world is questionable). For years, researchers have been exposing enormous vulnerabilities in internet-connected implanted medical devices.

There are millions of people who rely on these brilliant technologies to stay alive. But as we put more electronic devices into our bodies, we must address the serious security challenges that come with them. We are familiar with the threat that cyber-crime poses to the computers around us - however, we have not yet prepared for the threat it may pose to the computers inside of us.

Implanted devices have been around for decades, but only in the last few years have they become virtually accessible. While they allow doctors to collect valuable data, many of these devices were distributed without any type of encryption or defensive mechanisms in place. And unlike a regular electronic device that can simply be loaded with new firmware, medical devices are embedded inside the body and require surgery for "full" updates. One of the greatest constraints on adding security features is the very limited amount of battery power available.

Thankfully, there have been no recorded cases of a death or injury resulting from a cyber attack on the body. All demonstrations so far have been conducted for research purposes only. But if somebody decides to use these methods for nefarious purposes, it may go undetected.

Marc Goodman, a global security expert and the track chair for Policy, Law and Ethics at Singularity University, explains just how difficult it is to detect these types of attacks. "Even if a case were to go to the coroner's office for review," he asks, "how many public medical examiners would be capable of conducting a complex computer forensics investigation?" Even more troubling, Goodman points out, "The evidence of medical device tampering might not even be located on the body, where the coroner is accustomed to finding it, but rather might be thousands of kilometers away, across an ocean on a foreign computer server."

Since knowledge of these vulnerabilities became public in 2008, we've seen rapid advances in the kinds of attacks researchers have successfully demonstrated.

The equipment needed to hack a transmitter used to cost tens of thousands of dollars; last year a researcher hacked his insulin pump using an Arduino module that cost less than $20. Barnaby Jack, a security researcher at McAfee, in April demonstrated a system that could scan for and compromise insulin pumps that communicate wirelessly.  With a push of a button on his laptop, he could have any pump within 300 feet dump its entire contents, without even needing to know the device identification numbers.  At a different conference, Jack showed how he'd reverse-engineered a pacemaker and could deliver an 830-volt shock to a person's device from 50 feet away - which he likened to an "anonymous assassination."

We've also seen some fascinating advances in the emerging field of medical device security. Researchers have created a "noise" shield that can block out certain attacks - but have, strangely, run into problems with telecommunication companies looking to protect their frequencies. There have been discussions of using ultrasound waves to determine the distance between a transmitter and a medical device in order to prevent far-away attacks. One team has developed biometric heartbeat sensors that allow devices within a body to communicate with each other, keeping out intruding devices and signals.
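
To make the distance-bounding idea concrete, here is a minimal sketch in Python of how an implant might estimate how far away a programmer is from the round-trip time of an ultrasound ping and refuse commands that arrive from too far away. This is my own illustration, not any vendor's protocol; the threshold, names, and sample timings are assumptions.

# Minimal sketch of ultrasound distance bounding for an implanted device.
# Illustrative only: the threshold and API are assumptions, not a real
# medical-device protocol.

SPEED_OF_SOUND_M_PER_S = 343.0   # in air at room temperature
MAX_ALLOWED_DISTANCE_M = 3.0     # reject programmers farther away than this

def estimated_distance(round_trip_seconds: float) -> float:
    """Distance implied by an ultrasound ping's round-trip time."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_seconds / 2.0

def accept_command(round_trip_seconds: float) -> bool:
    """Only accept reprogramming commands from nearby transmitters."""
    return estimated_distance(round_trip_seconds) <= MAX_ALLOWED_DISTANCE_M

if __name__ == "__main__":
    for rtt in (0.005, 0.12):  # 5 ms is roughly 0.9 m away; 120 ms is roughly 20 m away
        print(rtt, round(estimated_distance(rtt), 1), accept_command(rtt))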

But these developments pale in comparison to the enormous difficulty of protecting against "medical cybercrime," and the rest of the industry is falling badly behind.

In hospitals around the country there has been a dangerous rise of malware infections in computerized equipment.  Many of these systems are running very old versions of Windows that are susceptible to viruses from years ago.  Some manufacturers will not allow their equipment to be modified, even with security updates, partially due to regulatory restrictions.

A solution to this problem requires a rethinking of the legal protections, a loosening of equipment guidelines, and increased disclosure to patients.

Government regulators have studied this issue and recommended that the FDA take these concerns into account when approving devices.  This may be a helpful first step, but the government will not be able to keep up with the fast developments of cyber-crime.  As the digital and physical world continue to meld, we are going to need an aggressive system of testing and updating these systems.  The devices of yesterday were not created to protect against the threats of tomorrow.


Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

Machines Will Outsmart Humans. We Better Be Ready

XPRIZE   |   December 19, 2012    4:00 PM ET

By Federico Pistono
Federico Pistono is an alum of Singularity University's graduate studies program. His book Robots Will Steal Your Job, but that's OK: How to Survive the Economic Collapse and be Happy explores the impact of technological advances.

Today, large streams of data, coupled with statistical analysis and sophisticated algorithms, are rapidly gaining importance in almost every field of science, politics, journalism, and much more. What does this mean for the future of work?

As we saw a few weeks ago, the race for the 2012 US presidential election had a clear and undisputed winner: data. Big Data, statistics, and computer algorithms, to be precise. The war had two clear factions. On one side were experienced journalists who had worked many years in the field and who based their analysis partially on polling data; but hunches, gut feelings, instincts, and intuitions had the final word on the matter. On the other side was New York Times blogger Nate Silver, with no experience as a political analyst and hence little intuition, but with a huge bag full of Big Data, statistical models, mathematical formulas and computer algorithms. Journalists called Silver and his methods a "joke" and a "numbers racket", and accused him of "getting into silly land." What was the result? Most experienced political analysts failed miserably, while Nate Silver and his Data Science correctly predicted the results of the election, days in advance, with 100% accuracy, getting all 50 states right.
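
The gap between the two approaches is easy to appreciate with even a toy version of poll aggregation. The sketch below is a heavy simplification of the idea, not Silver's actual model: it averages a state's polls, treats polling error as roughly normal, and simulates many elections to turn a small polling lead into a win probability. The polls, the error size, and the function names are illustrative assumptions.

# Toy poll aggregation: a heavily simplified sketch, not Nate Silver's model.
import random

def win_probability(poll_margins, poll_error_sd=3.0, trials=10000):
    """Estimate P(candidate wins a state) from polls giving the margin in points."""
    avg_margin = sum(poll_margins) / len(poll_margins)
    wins = sum(1 for _ in range(trials)
               if random.gauss(avg_margin, poll_error_sd) > 0)
    return wins / trials

if __name__ == "__main__":
    # Three hypothetical polls showing 2-, 3-, and 1-point leads.
    print(round(win_probability([2.0, 3.0, 1.0]), 2))  # roughly 0.75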

This, for those who have been paying attention to the exponential trends of technology and IT in general, should come as no surprise.

It was once believed that computers could never beat the best human at chess, because computers perform very inefficient brute force attacks on problems, instead of relying on intuition and hierarchical structures like our brains do. Yet, the 1997 Deep Blue versus Garry Kasparov challenge saw the IBM machine beat the World Chess Champion, during what has been called "the most spectacular chess event in history".

History repeated itself in 2011, when IBM's Watson defeated Brad Rutter, the biggest all-time money winner (more than $3.4 million), and Ken Jennings, the record holder for the longest championship streak (74 wins), at the game of Jeopardy! Just before the match, the same arguments that were brought up in 1997 were presented against the machine, which was crunching some 200 million pages of text through sophisticated AI. Yet, the machine won again.

Recently, legendary linguist Noam Chomsky was interviewed on the development of Artificial Intelligence over the years. According to the MIT Professor, the heavy use of statistical methods and large corpora of data is unlikely to yield any significant scientific insights into the study of language, because you "can get a better and better approximation", "but you learn nothing about the language". That may be so. But Peter Norvig, Director of Research at Google, points out in his critical response that "grammaticality is not a categorical, deterministic judgment but rather an inherently probabilistic one. This becomes clear to anyone who spends time making observations of a corpus of actual sentences, but can remain unknown to those who think that the object of study is their own set of intuitions about grammaticality [...] it is observation, not intuition that is the dominant model for science."

It's difficult to say who is right. Only time will tell. But we know which approaches had the biggest commercial success: search engines, speech recognition, machine translation, word sense disambiguation, and other technologies.

Our intuitions and insights develop quite linearly with time, but the amount of data at our disposal and the computing power capable of interpreting that data are increasing exponentially. This has had a profound effect on the workforce, and it will have an even greater one in the future. Forbes already utilizes Narrative Science, an innovative technology company, to create rich narrative content from data. Google News aggregates millions of news stories and clusters them accurately in a matter of seconds, a task that no group of humans could ever dream of performing. Facebook's and Amazon's suggestion algorithms are far too complex to be matched by any man or woman with "good intuition". The list goes on and on.

It appears that whenever we believe that computers cannot outsmart humans at some task, we are proven wrong. How will this affect the labor force? What will happen to the economy in the future, in sight of these rapid changes ahead of us? The answer to these questions is not trivial, and probably nobody knows with certainty. It is my hope that we will soon start a conversation on this topic, which I think is of utmost importance, and that should be at the center of our public debate.

The future of the economy and society is very much uncertain. However, I think it will depend on us, on how we decide to use the prodigious technology that we are developing, and for what purpose. And to ensure that we take the right path, we must start a serious conversation on this issue, before it's too late.

Visit X PRIZE at xprize.org, and follow us on Facebook, Twitter and Google+.

This material published courtesy of Singularity University.

Advances in Robotics

XPRIZE   |   December 17, 2012    2:35 PM ET

By Nathan Wong
Part four of a five-part series about going back to the Moon, by Google Lunar X PRIZE technical consultant Nathan Wong.

In the past three articles we have talked about getting to the Moon, the challenges once you are there, and the benefits of going. In this article we will talk about some of the advances in robotics that are currently happening that may make future lunar and planetary exploration more successful. Robotics and artificial intelligence are very popular fields of research right now with many ideas on what will be the next big advancement. I will look at five of these upcoming fields: swarm robotics, human robotic interaction, biomimetics, telerobotics, and adaptive robotics.

Swarm Robotics

The basic idea behind swarm robotics is that multiple robots working together can accomplish more than the same robots working individually. The space industry is currently looking at using swarm robotics for satellites flying in formation to create a larger virtual aperture from the distances between satellites, rather than relying on a single satellite. The satellites in such a swarm must calculate and maintain a precise distance from all other satellites in the formation, similar to the video below showing multi-rotor vehicles in formation flight.
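
A hedged sketch of the underlying idea in Python: each robot nudges itself a fraction of the way toward its assigned offset from the swarm's centroid, which is the essence of simple formation keeping. Real spacecraft formation flying accounts for orbital dynamics, sensing error, and much more; the positions, offsets, and gain below are illustrative assumptions.

# Simple formation-keeping step: each robot moves a fraction of the way
# toward its assigned offset from the swarm's centroid. Illustrative only.

def centroid(positions):
    n = len(positions)
    return (sum(p[0] for p in positions) / n, sum(p[1] for p in positions) / n)

def formation_step(positions, offsets, gain=0.2):
    cx, cy = centroid(positions)
    new_positions = []
    for (x, y), (ox, oy) in zip(positions, offsets):
        tx, ty = cx + ox, cy + oy          # where this robot should be
        new_positions.append((x + gain * (tx - x), y + gain * (ty - y)))
    return new_positions

if __name__ == "__main__":
    positions = [(0.0, 0.0), (5.0, 1.0), (2.0, 6.0)]
    # Desired triangle; offsets sum to zero so the swarm centroid stays put.
    offsets = [(-5.0, -3.0), (5.0, -3.0), (0.0, 6.0)]
    for _ in range(50):
        positions = formation_step(positions, offsets)
    # Robots settle at their assigned offsets around the fixed centroid.
    print([(round(x, 1), round(y, 1)) for x, y in positions])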

Additionally, swarm robotics can be used for surface exploration. If you have only one robot and it fails, your mission is done. If you have multiple robots and one fails, you can still accomplish your objectives. An example of this can be seen in the next video, where a swarm of robots pulls a small child. A swarm of robots could move heavy rocks, cross gaps, or climb slopes that a single robot could not.

Human Robotic Interaction

Human robotic interaction is just what it sounds like: how humans and robots interact. The thought is that if you can leverage the advantages of both robots and humans together, you can accomplish more. A great example of this is Robonaut 2, a humanoid robot that was sent to the International Space Station (ISS). Currently Robonaut 2 is performing basic tests to see how it operates in microgravity, but one of the goals of the program is to use Robonaut 2 to perform tasks on the ISS. Human astronauts currently have to spend time cleaning the ISS, but one day Robonaut 2 might float around the station doing the cleaning, just like Rosie from the Jetsons, freeing the human crew to spend more time on research.


Biomimetics

Animals in nature have been evolving for millions of years to become suited to their environment. Biomimetics is a field that uses this adaptation as an inspiration for design. A very simple example of this is fins used for diving. The fins provide a larger surface area, just as the tail fins of fish help propel them through the water. This same principle can be applied to robotics. Boston Dynamics is well known for their "Big Dog" walking robot, but they also have a robot that is designed to run like a cheetah at up to 18 miles per hour.

Telerobotics

Telerobotics is the remote control of a robot. Currently, one of the most promising applications of telerobotics is in medical surgery. A world-class surgeon could operate on anyone, anywhere in the world, eliminating the need for the patient to travel to the doctor's location. Systems like this could also be applied to medicine in space, such as a telerobotic surgery station on the Moon. Telerobotics can also be used to put robots in areas that may be too dangerous for humans, where real-time human decision-making is still favored over autonomous robotics.

NASA currently has the Surface Telerobotics project, in which astronauts on the ISS will telecontrol robots on the Earth. NASA is also looking to expand this to the Earth-Moon L2 Lagrange point to explore the far side of the Moon. Looking even further out, this type of system could be used for near-real-time robotic exploration of Mars from a Martian moon, as the rough light-delay numbers below suggest.
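
A rough sense of why this matters comes from the one-way light times involved. The sketch below simply divides distance by the speed of light; the distances are approximate, and the Earth-Mars figure varies enormously with the planets' positions.

# Rough one-way signal delays (distance / speed of light). Distances are
# approximate; Earth-Mars varies hugely over time.

C_KM_PER_S = 299_792.458

distances_km = {
    "Earth to Moon": 384_400,
    "Earth to Mars (closest approach)": 54_600_000,
    "Earth to Mars (near maximum)": 401_000_000,
    "Phobos to Mars surface": 6_000,   # Phobos orbits roughly 6,000 km above Mars
}

for route, d in distances_km.items():
    print(f"{route}: {d / C_KM_PER_S:.2f} seconds one way")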

Adaptive Robotics

Adaptive robotics refers to robots that can adapt and deform to suit environmental requirements. An example of this is tensegrity robotics, in which robots are composed entirely of interlocking rods and cables. These rods and cables are arranged in such a way that any force is distributed across all of the members instead of just one. This can increase reliability and allows for a greater range of motion. The main benefit of tensegrity structures for space is that they can be lightweight and easily launched. An animation of how this tensegrity structure would move can be seen in the video.

The basic concept of transformable and adaptable robotics can also be applied to a more traditional structure. The next video shows a robot that can either walk, or roll depending on what the environment requires.

Conclusion

Advances in robotics are happening at a truly amazing rate. The capabilities that we have now compared to 10 years ago are staggering and the trend of increasing capabilities will extend into the future. Think about the entrants to the Google Lunar X PRIZE as the first generation of robotics, how will their designs and capabilities change as they send the second, third, ... generation of vehicles to the Moon? Maybe they will use some of the techniques talked about in this article and maybe they will use something entirely new and different. The next and last blog post in this series will talk about the sustainability of going back to the Moon.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This blog post is brought to you by Shell, our Exploration Prize Group sponsor.

Benefits of Going Back to the Moon

XPRIZE   |   December 13, 2012    2:34 PM ET

By Nathan Wong
Part three of a five-part series about going back to the Moon, by Google Lunar X PRIZE technical consultant Nathan Wong.

Getting to the Moon and having a spacecraft function properly once there are challenging tasks that 25 teams from around the world are trying to complete. But what does the Moon give us in return? What makes going back to the Moon worthwhile and exciting? Well, if you polled the Google Lunar X PRIZE teams and the lunar science community you would get many varied answers. I am going to just touch on a few of the important benefits that the Moon can provide for us: Science, Power, Water, Analogue Demonstration, and Launch Port Capability.

Science

The Moon can provide us with great science data. Many scientists are interested to see what condition lunar heritage items are in, such as the Apollo landers, to understand what effects the lunar environment has on items after a long exposure time. Two of the Google Lunar X PRIZE Bonus Prizes deal with visiting lunar heritage items. Learning more about these environmental effects will allow us to better design the next generation of vehicles or human habitats.

Through lunar geology we can also try to learn more about our own planet. Unlike Earth, whose surface changes through natural processes such as plate tectonics and erosion, the Moon's surface is a detailed history book of composition and impacts stretching back almost 4.5 billion years. The far side of the Moon can also provide a good platform for astronomy, as it is shielded from radio interference coming from the Earth and lies outside of the magnetosphere, which deflects cosmic ray particles.

Power

Two ways that the Moon can provide power are solar and nuclear. Studies have shown that the lunar surface material could be used to make many of the components used in solar panels. If large areas of the Moon are developed into solar farms then energy could potentially be beamed back to Earth, or used on the Moon for surface operations or spacecraft charging.

In addition to solar power, there is evidence that Helium-3 may be available in usable quantities for nuclear fusion. Helium-3 could be processed on the Moon and sent back to Earth or used to fuel future space missions.

Water

Water detection is another Bonus Prize for the Google Lunar X PRIZE. Finding large quantities of water on the Moon would be beneficial for future human missions. Water is heavy and expensive to send to space, so the ability to harvest water on the Moon rather than ship it from the Earth would allow for more science payload or smaller, more cost-efficient vehicles. Water could not only be used for life support but could also be made into fuel for future space missions by separating it into oxygen and hydrogen, the same propellants used by the Space Shuttle's main engines. NASA's Lunar Reconnaissance Orbiter (LRO) has already found signatures of hydrogen in impact craters, which could signify frozen water deposits.
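
As a back-of-the-envelope illustration of why lunar water is attractive as propellant feedstock, the sketch below splits a given mass of water into hydrogen and oxygen by molar mass (2 H2O -> 2 H2 + O2). The chemistry is textbook; the 1,000 kg input is just an example figure.

# Back-of-the-envelope: propellant masses from electrolysing lunar water.
# 2 H2O -> 2 H2 + O2, split by molar mass. The 1,000 kg input is an example.

M_H2O = 18.015   # g/mol
M_H2 = 2.016
M_O2 = 31.998

def propellant_from_water(water_kg):
    moles_water = water_kg * 1000.0 / M_H2O
    h2_kg = moles_water * M_H2 / 1000.0          # one H2 per H2O
    o2_kg = (moles_water / 2.0) * M_O2 / 1000.0  # one O2 per two H2O
    return h2_kg, o2_kg

if __name__ == "__main__":
    h2, o2 = propellant_from_water(1000.0)
    print(f"{h2:.0f} kg hydrogen, {o2:.0f} kg oxygen")  # about 112 kg H2, 888 kg O2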


Analogue Demonstration

Space missions are difficult, but the Moon can provide us with a test bed for future long duration surface missions, such as ones to Mars. The longest time we have spent on the surface of another body in the solar system is measured in days, but a mission to Mars would require approximately 18 months on the surface. Testing not only engineering, but also life science aspects of this mission on the Moon would increase the chances of success.

Launch Port

Looking farther down the road, the Moon is also a great location for launching rockets. The Moon has 1/6th the gravity of the Earth, which dramatically reduces the amount of fuel needed to reach other destinations in the solar system (propellant requirements grow exponentially with the velocity change a rocket must deliver). Fuel for this launch port could come from water, as previously discussed, but the lunar surface itself is made up of rocks that contain about 20% oxygen in the form of silicates. Many research groups around the world have techniques for separating the oxygen from these rocks and storing it for either life support or propulsion. If we are able to process large amounts of oxygen on the Moon, we would only need to bring one propellant to the Moon instead of two.
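
To see how much the Moon's shallower gravity well helps, here is a minimal sketch using the Tsiolkovsky rocket equation. The delta-v figures (roughly 9.4 km/s to reach low Earth orbit versus about 2.4 km/s to escape the Moon), the 450-second engine, and the 1,000 kg spacecraft are illustrative round numbers, not mission data.

# Tsiolkovsky rocket equation: propellant needed grows exponentially with
# delta-v. Delta-v values and engine Isp below are illustrative round numbers.
import math

G0 = 9.80665  # m/s^2

def propellant_mass(dry_mass_kg, delta_v_m_s, isp_s):
    """Propellant required to give dry_mass_kg the requested delta-v."""
    return dry_mass_kg * (math.exp(delta_v_m_s / (isp_s * G0)) - 1.0)

if __name__ == "__main__":
    dry = 1000.0  # kg of spacecraft
    isp = 450.0   # seconds, typical of hydrogen/oxygen engines
    for label, dv in (("Earth surface to low Earth orbit", 9400.0),
                      ("Moon surface to lunar escape", 2400.0)):
        print(f"{label}: ~{propellant_mass(dry, dv, isp):.0f} kg propellant")
    # Prints roughly 7,400 kg for the Earth case versus roughly 720 kg for the Moon.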

Conclusion

The Moon can provide many benefits for those willing to overcome the risks and challenges associated with a lunar mission. While some of the benefits are long term, there are still short-term benefits that make the Moon an attractive place for research and exploration, and the Google Lunar X PRIZE aims to make that research and exploration more commonplace. The next blog post in this series will talk about some of the advances in robotics and artificial intelligence that are allowing space missions to do more than ever before.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This blog post is brought to you by Shell, our Exploration Prize Group sponsor.

Two Technologies Propelling India Forward

XPRIZE   |   December 10, 2012    2:22 PM ET

By Iqbal Quadir
Iqbal Quadir is the founder and director of the Legatum Center for Development and Entrepreneurship at the Massachusetts Institute of Technology.

The well-known success of information technologies (IT) in India is actually two successes, with two sets of implications and potential. To understand these two phenomena, it may be useful to think through what Joseph Schumpeter and Adam Smith have said about economic progress. Let me explain.

About 100 years ago, Schumpeter argued that a special type of people, called entrepreneurs--who stray from the beaten path, break the mold and create something new--are the ones who produce economic growth. In Schumpeter's vision, the main role of entrepreneurs is not to invent technologies, provide capital or manage businesses. Rather, they seize new opportunities by combining economic forces in new ways--by producing new products, employing new technologies, pursuing new processes, or addressing new markets. In Schumpeter's language, entrepreneurs provide "the will and action" to forge new reality. Empowered by new IT, that is exactly what Indian entrepreneurs like Narayan Murthy, Nandan Nilekani, Azim Premji and Shiv Nadar began doing about 30 years ago, breaking through the Kafkaesque bureaucracy of the Indian License Raj. These entrepreneurs combined at least three economic forces: rapidly falling prices in computing technology; world-class programming talent in India at relatively low cost thanks to the country's then slow economic growth; and the rising costs of software production in high-income countries.

Their formula worked, spectacularly. Satellite connections in the mid-1990s and fiber optic connections by 2000 further propelled the digital entrepreneurs by allowing them to ship their exports without the impediments of poor physical infrastructures or overbearing bureaucracies. In short, entrepreneurs like Murthy, Nilekani, Premji, Nadar and others have successfully combined low-cost Indian skills with increasingly cheaper computers and communications to address a market eager to contain costs. Today, the Indian IT industry is valued at over $67 billion in 2000 constant US dollars, employing 3 million. The industry has given rise to ripple effects by increasing demand in housing, transport, insurance, entertainment and other industries, employing an additional 12 million and advancing Indian GDP by another $67 billion in 2000 constant US dollars.

There are larger effects of this success: India's IT infrastructure has improved with more than 25 million Internet connections providing access to 150 million; the younger generation has found meaningful role models in these exemplary entrepreneurs; and the industry has instilled in many Indians confidence and a can-do attitude. There is much to celebrate in this achievement that perhaps we can call the Schumpeterian effect of Indian IT.

Meanwhile, less than two decades ago in the mid-1990s, the relentless increase in processing power of microchips and corresponding price declines unleashed another form of IT, commonly known as mobile phones. Entrepreneurs like Sunil Mittal and industrialists like the Ambanis, among others, introduced this device with its first killer-app: voice communication. One beauty of this IT device is that it is fundamentally egalitarian. While the "regular" IT industry employed the successful graduates of elite universities, mobiles could be useful to millions in India who could not read or write, but had the same desire and need to advance as anyone else. While the regular IT industry catered to the global needs of cost-effective software and business processing, mobiles served the communication needs of average Indian citizens.

Mobiles provide a near universal means of advancement because better communication and coordination save time (which translates to saving labor), money, and opportunities. This leads to higher productivity and earnings, enabling people to purchase the service. People can spend pennies to make calls while advancing by dollars. The fact that millions of people subscribed to mobiles is evidence enough that they are economically empowering because low-income people cannot sustainably indulge in purchases that do not advance them economically.

The spread of mobiles is the modern-day proof of the "natural effort of every individual to better his own condition" that Adam Smith considered the cornerstone of a prosperous society. When Smith wrote The Wealth of Nations nearly 250 years ago to find ways to foster "universal opulence which extends itself to the lowest ranks of the people," he found this "natural effort" to be "so powerful...that it is alone...capable of carrying on the society to wealth and prosperity." Further, the low-cost and widespread means of communication embodied by mobiles furthers the process of specialization and exchange that Smith championed as the key means of increased productivity and efficient resource allocation.

In short, the "natural effort of every individual to better his own condition" found expression, affordably and universally, in a hand-held technology. People of all walks of life reached out for mobiles, even if some mobile businesses started by selling services to high-end customers. Mobile businesses soon found plentiful evidence for Smith's assertion that the "whole consumption of...those below the middling rank...is in every country much greater, not only in quantity, but in value, than that of the middling and of those above the middling rank." Though there were no mobile phones in India as late as 1995, there are 940 million today, equal to roughly 80 percent of the population. Although the mobile phenomenon also involved Schumpeter's entrepreneurs, let us call this second IT phenomenon the Smithian effect because it more readily contributes to the cause of "universal opulence."

Does the Smithian effect on Indian GDP compare to the $134 billion Schumpeterian effect of the regular IT industry? The answer is an emphatic "yes." When one adds up the small advancements made by a billion people, the resulting impact can be quite powerful. To roughly calculate, I build on a study of 120 countries by Christine Zhen-Wei Qiang of the World Bank that found that a 10 percent increase in mobile phone penetration correlates with a 0.8 percent average increase in GDP growth. I calculate, conservatively, that the increase of 0 to 80 percent mobile phone penetration in India over 15 years contributed on average an additional one percent annual economic growth from 1996, when GDP was $381 billion, to 2011, when GDP was $1,040 billion (both measured in 2000 constant US dollars). What would the Indian GDP have been in 2011, in 2000 constant US dollars, if it had grown one less percentage point each year? The answer is $903 billion: a $137 billion difference, in 2000 constant US dollars, in India's economy due to mobile phones. I consider this calculation conservative for at least two reasons. First, though the average penetration of all of India is 80 percent, higher-income pockets are likely to have higher penetration and thus a greater growth-boosting effect on larger incomes. Second, the assumption of one additional percent of growth over a 15-year period possibly overestimates the effect in the early years, but underestimates to a far greater degree in later years when the economy is larger, the penetration is higher and the network effect is greater.
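
The compounding arithmetic behind that estimate is easy to reproduce. The sketch below simply re-runs the author's own numbers (GDP of $381 billion in 1996 and $1,040 billion in 2011, both in 2000 constant US dollars) with growth one percentage point lower.

# Reproducing the article's back-of-the-envelope: what would 2011 Indian GDP
# have been if annual growth had been one percentage point lower for 15 years?
gdp_1996 = 381.0    # billions, 2000 constant US dollars
gdp_2011 = 1040.0
years = 15

actual_growth = (gdp_2011 / gdp_1996) ** (1 / years) - 1        # about 6.9% per year
counterfactual_gdp = gdp_1996 * (1 + actual_growth - 0.01) ** years

print(f"implied average growth: {actual_growth:.1%}")
print(f"2011 GDP with growth 1 point lower: ${counterfactual_gdp:.0f} billion")   # ~$903 billion
print(f"difference attributed to mobiles: ${gdp_2011 - counterfactual_gdp:.0f} billion")  # ~$137 billion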

In addition to their comparable effects on GDP, the two IT industries are also similar with regard to job creation and in terms of ripple effects in other industries. In fact, the mobile industry has also enabled Indian immigrant workers in high-income countries to better connect with their homes in India. Immigrants in high-income countries whose relatives have low incomes even by low-income country standards tend to send money home; they sent $53 billion to India in 2010.

While the aggregate GDP contributions of what I call the Schumpeterian and Smithian effects are comparable, and both have been important breakthroughs in the Indian economy, I believe the Smithian effect is more powerful for the country. First, it leverages India's greatest strength, namely, its own population. As communication facilitates more efficient collaboration within its population and greater specialization-and-exchange within its vast market, the Indian economy will move steadily towards greater optimization. While India's regular IT industry contributes to greater efficiency in the global economy, the mobile industry engenders greater efficiency within India itself. Second, the Smithian effect mitigates rising inequality in India, a serious issue: 48 billionaires in India own 10.9 percent of the GDP (in China, another country with significant inequality, 95 billionaires represent only 2.6 percent of GDP). Third, rising incomes in the poorest ranks spur greater innovations: entrepreneurs find markets with greater purchasing power, larger markets give greater economies of scale for producers, and production facilities search for labor saving innovations.

Mobile phones are in effect handheld and connected computers and hold great potential in this role, since businesses in promising areas such as payments and banking, healthcare, and entertainment require combining the services of multiple providers. Like computers, mobiles can connect multiple providers; store and process huge amounts of data in various forms (text, images, video, audio); and create and deliver complex services using intricate sets of logic. Businesses are being launched on the mobile platform to provide a wide range of services from medical advice and diagnosis, to pharmaceutical authentication, to payment and finance systems. Although NGOs and governments administer many of the emerging services, there are also myriad for-profit enterprises that, using mobiles as a platform, are meeting needs ordinarily considered appropriate for the state to provide. For instance, mDhil sells health tips to 18 to 25 year olds through text messages, conveying confidential information on issues from nutrition to various ailments for the tech-savvy age group for a monthly charge of 30 Rupees. A company called Beam is providing micro-payment services for customers without bank accounts or credit cards.

As mobiles gain greater computing power and smartphones further proliferate in India, the emergence of such services on the mobile platform, and their corresponding economic benefits, will accelerate. Expertise in developing apps and software for mobiles is gaining momentum, and, moreover, smartphones tend to loosen the hold of network operators on phones, allowing small entrepreneurs to create products on the mobile platform. A lack of other infrastructure strengthens "the natural effort of every individual to better his own condition," leaving Indian citizens ready to embrace these new ideas and services if they indeed advance people.

India, a country of many strengths, has yet another in the Smithian effect of mobile phones. The country can serve as an example for other low-income countries in South Asia, Africa and Latin America, where innovations that work in India are likely to work as well. And, just as individual economic advances have added up to rival the effect of the formidable regular IT industry, these minute advances in total can create world-class business opportunities. The innovators and entrepreneurs currently working for multinational companies may do well to turn their attention to the several billions of people who lack many fundamental services but hold powerful computers in their hands.


Visit X PRIZE at xprize.org, and follow us on Facebook, Twitter and Google+.

This material published courtesy of Singularity University.

Are We on the Edge of a Revolution in Medical Diagnostics?

XPRIZE   |   December 5, 2012    2:40 PM ET

By W. Tapani Ryhänen
Tapani Ryhänen heads Nokia Research Center's Sensor and Material Technologies Laboratory in Cambridge, Espoo, and Moscow.

Health care services are one of the key pillars of any modern society. Aging populations across the globe, emerging diseases, people living without access to proper medical services, growing economies with limited resources and serious environmental issues are our major challenges today and for the generations to come. Human creativity is needed to find innovative solutions to these global problems: radical changes in health care technologies, in the ways they are used, and in their value chains.

Personalised and more distributed health care services are emerging. An improving capability to measure and gather personal physiological information continuously and anywhere will dramatically improve diagnostics and remote patient care. Advances in micro- and nanotechnologies, biotechnologies and data analytics are jointly creating a basis for a completely new generation of intelligent sensors that can be used in personalised and remote health care services. In addition, these intelligent sensing devices can be connected to backend services with more computing and data storage capability and greater access to reliable, accurate diagnostic and medical services.

Human activity and behaviours, including how they change in response to changes in our physical environments, can be recognised from information collected by the motion sensors, touch sensors, optical sensors, microphones and cameras found in nearly all high-end smartphones. Technological innovation means that these sensing and analysis capabilities are becoming richer. This information can be gathered and aggregated over longer periods of time, making it possible to analyse human behaviour and to compare data from individual persons with the aggregated data of larger populations. This is a concrete, existing information asset that is only now beginning to be used in the development of new concepts for health- and wellness-related measurements and services. Before becoming overly enthusiastic, let's remember that the challenges around the security and privacy of this data still need to be solved.
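
As a tiny illustration of what "recognising activity from motion sensors" can mean in practice, here is a heavily simplified step counter over accelerometer magnitudes. Real activity-recognition pipelines use filtering, feature extraction, and trained models; the threshold and the sample readings below are arbitrary illustrative assumptions.

# Heavily simplified step detection from accelerometer samples.
# The 11 m/s^2 threshold and sample data are arbitrary illustrative values.
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def count_steps(samples, threshold=11.0):
    """Count upward crossings of the threshold in acceleration magnitude."""
    steps, above = 0, False
    for s in samples:
        m = magnitude(s)
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps

if __name__ == "__main__":
    walking = [(0.2, 0.1, 9.8), (0.5, 0.2, 12.1), (0.1, 0.0, 9.6),
               (0.4, 0.3, 12.4), (0.2, 0.1, 9.7)]
    print(count_steps(walking))  # 2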

A sensing solution does not have to mean only new, advanced sensor technologies; we think of it as also including the process of collecting data with these new technologies, analysing it and comparing it with other sources of data, and presenting the conclusions. By launching the Nokia Sensing X CHALLENGE we aim to stimulate the development of new components of, or even complete, sensing solutions that might, for example, consist of new biochemical or physical sensors, data analytics and intelligent algorithms embedded in sensing devices.

Secondly, we need to remember that we are interested in concepts relevant to anybody, not only to highly trained medical professionals. How do we make these devices easy to use, robust, wearable and reliable? How do consumers become motivated to use these technologies? In general, meaningful and easily understandable presentation of results and data is essential if consumers are to use sophisticated technologies. In health and wellness applications the information will aim to change and improve human behaviours. How this message is given to the user is vital, as we are all individuals and can react differently to the same information.

The Nokia Sensing X CHALLENGE competition guidelines explain our objectives. I am sure that concrete, innovative solutions to revolutionise the measurement and diagnostics of health, wellness and the human environment will emerge from this new Challenge.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

We May Not Have Flying Cars Yet, But Visioneers Are Inventing a New Future

XPRIZE   |   December 3, 2012    6:42 PM ET

By W. Patrick McCray
W. Patrick McCray is a professor of history at the University of California, Santa Barbara where he leads a research group at the school's Center for Nanotechnology in Society. He is also the author of "The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future."

Google the phrase "Peter Thiel innovation is dead" and you'll tune into a conversation launched when the venture capitalist claimed that technological innovation in America has stalled. Thiel isn't the only one with a dour assessment. George Mason University economist Tyler Cowen makes similar points in his book The Great Stagnation. In their analysis, it's not just the pace of innovation that is a problem but its very nature. Thiel's pithy assessment -- "We wanted flying cars; instead, we got 140 characters" -- reflects a wide-ranging dissatisfaction with how yesterday's techno-dreams seem to have fallen short.

Regardless of whether one agrees with such pronouncements, the attention they've received demands we take a fresh and closer look at the innovation ecosystem. Some denizens of the ecosystem are as easy to spot as colorful birds in a tropical forest -- university scientists, corporate engineers, CEOs, investors, and patent lawyers. However, another less observed species can also shape the evolution of innovation in unexpected and sometimes important ways.

If we look at the broader history of technology, we see rare individuals who have had a clear and strong vision of an expansive future created by technologies they studied, designed, and promoted. Pushing beyond hand-waving and podium speculations, their activities can produce actual things: prototypes, models, patents, and computer simulations. Just as importantly, these people also built communities and networks so they could connect their radical ideas for the technological future to interested citizens, writers, politicians, and business leaders. Think Nikola Tesla in the 1890s or Wernher von Braun in the 1930s or Doug Engelbart in the 1960s.

A neologism of "visionary" and "engineer," visioneer captures the hybrid nature of these technologists' activities. The visionary aspect is central -- these are people who aren't simply imagining a faster airplane or a new electronic gadget. They present a vision of society as a whole that could be altered, shaped, and improved by technologies they see as necessary and even inevitable. The engineering element is just as, if not more, critical. Visioneers base their imaginings on detailed engineering studies and technical designs. They also engage in another form of engineering as they build communities of supporters and patrons. At its core, visioneering entails developing a broad and comprehensive vision for how the future might be radically changed by technology, doing research to advance this vision, and promoting one's ideas to the public and policy makers in the hopes of generating attention and perhaps even realization.

Visioneers and the communities of researchers, futurists, and entrepreneurs they attracted have often existed at the blurry border between scientific fact, technological possibility, and optimistic speculation. Their design, imagining, and promotion form part of a longer chain of technological enthusiasm that has marked so much of America's history. They are important to the growth, diversification, and health of today's technological ecosystems.

Nonetheless, visioneers and their supporters are not immune to the lures of profit, celebrity, and sensationalism. And, as their ideas receive wider attention and publicity, they must work to defend the purity and original goals of their visions. In the 1970s, Princeton physicist Gerard O'Neill achieved international recognition by advocating settlements and factories located off-world. O'Neill's visioneering for the "humanization of space" might seem pure sci-fi today -- space colonies? really...? -- but when seen in the context of the immediate post-Apollo era when fears of overpopulation and resource shortages permeated public discussion and pop culture, O'Neill's ideas appear less far-fetched. However, when Timothy Leary (yes, that Leary) tried to put his own spin on O'Neill's visioneering, the Princeton scientist was obliged to draw distinctions between his own radical ideas, grounded as they were in physics and engineering, and the former LSD guru's spacey interpretation.

Today, we can see someone like Elon Musk as a "visioneer" -- someone who combines scientific or engineering prowess (in Musk's case, a degree in physics) with an expansive view of how new technologies could upend traditional economic models and shape the future. Musk's recent success with SpaceX is, in some ways, a realization of the vision O'Neill had circa 1975 for alternative paths to explore space and expand people's presence there.

Visioneers can play an increasingly important role in building the technological ecosystems of tomorrow. The Singularity University's programs are one way that this could be encouraged (although SU's educational approach to entrepreneurship is distinctly shaped by neoliberal economics). By combining broad views of the future with technical skills, experience, and research, visioneers take speculative ideas out of the hands of sci-fi writers and technological forecasters and put them on firmer ground. Although visioneers' ideas may sit outside the mainstream, their work secures a beachhead for exploratory notions. By inspiring (or provoking) people, visioneering reveals the future as a terrain made rough by politics and economics as well as people's hope and anxiety.

So -- is innovation dead? Coming back to Peter Thiel's catchphrase, we DO have flying cars. The first ones flew in the 1930s, in fact. But, using the much-lamented flying car as proxy for expectations of the future that didn't happen as planned, we see that achieving success demands more than just showing that something is technically possible. Visioneering helps capture the diverse set of activities required to push the frontiers of innovation.

We want and need people to come forward with big ideas. Visioneers can help define the outer edge of what's possible and, if nothing else, push other scientists and engineers to think about what the future and its technologies might be like. For visioneers, the past is merely a prototype, a provisional plan for what may become a magnificent and perhaps less limited future.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

Who Says Online Courseware Will Cause the Death of Universities?

XPRIZE   |   November 29, 2012    2:07 PM ET

By Tom Katsouleas
Tom Katsouleas is the Dean of Duke University's Pratt School of Engineering. He serves as Chair of the National Academy of Engineering's Advisory Committee on Engineering Grand Challenges for the 21st Century.

In a recent editorial, Ray Kurzweil, futurist and Singularity University Chancellor, compared the current university model to the bookstore model, suggesting that universities will be undermined by online education the way that digital books undermined Borders. Others have suggested that universities are headed the way of the newspaper. Still others have suggested that online teaching represents a new funding model for universities.

Yogi Berra said predictions are hard, "especially about the future." With that in mind, there are a couple of perspectives to take from the predictions in old issues of Popular Science or Scientific American: futurists always overestimate how soon a new technology will arrive, and researchers in the field always underestimate it. The bottom line, though, is that while online education poses a challenge for universities, it will ultimately improve them.

I'd like to offer a couple of metaphors for higher education today. One is to see the rise of massive open online courses (MOOCs) as akin to the advent of textbooks coupled with public libraries. In theory, this opened the totality of human knowledge to everyone. In reality, though, a lot of knowledge is stored in the minds of scholars pushing the edges of their fields. That means that at the PhD level, research universities play the roles of powering innovation and passing their knowledge on to the next generation. But those roles are subsidized by the undergraduate and Masters education that pays the salaries of the faculty.

It is at the Masters level that traditional universities will first feel the effect of MOOCs. In our visits to corporate partners like Apple and Cisco, it was clear that most top engineers and executives are using MOOCs for their lifelong learning in the way that some used to use corporate-sponsored Masters programs. Although universities provide individual and team project-based learning that is still difficult to replicate online, a Masters education can be taken anywhere.

What about undergraduate education? The undergraduate period is the time when one discovers one's place in the world and what it means to be human, and develops a sense of joy for the life of the mind. Online education will allow universities to do that even better: for one, it will provide a way for the best teachers to be recognized and promoted for something other than just research, a long-standing concern in the appointment, promotion, and tenure (APT) process at research universities. And by moving lecturing online, MOOCs allow in-person time to be more interactive, dynamic and valuable.

There's another benefit of online teaching for universities, one that my fellow dean and former AT&T Research leader Robert Calderbank calls the "rock star effect." For example, people buy an album for $9.99 but pay much more to see Madonna in concert. This is the business-model metaphor that most closely fits the future of higher ed: MOOCs, like CDs and downloads, will enable personal learning opportunities at low cost for a large market, while universities will provide an environment for a smaller audience of undergraduates to gain wisdom as well as knowledge, in person, from those who create the knowledge. This will allow teachers to become true educators. And as Duke Engineering Professor April Brown is quick to remind our faculty, the Latin root of the word educate is 'educe,' which means to draw out that which lies inside the student.

There is room for both MOOCs and this type of university experience. Students will continue to see the value of live interaction versus interaction through a screen, but not all of them will have the means or ability to pursue that path. We have to take advantage of what each can provide in order to bring the full value to the student.

Despite the challenges that MOOCs pose to universities, they offer great advantages as well. And the best universities will be able to capitalize on those advantages to provide the best value for their students - whether that value is delivered online or in person. Despite what some see as a threat to higher education, MOOCs will only help it get better.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

The Challenges of a Lunar Mission

XPRIZE   |   November 27, 2012    3:59 PM ET

By Nathan Wong
Part two of a five-part series about going back to the Moon, by Google Lunar X PRIZE technical consultant Nathan Wong.

We learned that getting to the Moon is not a simple process, and the challenges do not end once we reach the surface. One of the main challenges associated with the Google Lunar X PRIZE is surface mobility, but there are other challenges that must be understood and overcome in order to complete any mission successfully. Out of the many difficulties associated with space travel, and lunar missions in particular, we will look at five of the more important problems to solve: power, temperature, radiation, dust mitigation, and communications.

Power

Power is a critical subsystem on any space mission. Without power you will lose almost all of your capabilities; you will, however, still be able to act as a very expensive paperweight ... in space. There are no power plants on the Moon, so let's first take a look at how we can generate power.

The most widely used method of power generation on space missions is photovoltaics. This employs the same technology used by commercial solar panels here on Earth, and the power generated is proportional to the panel's surface area and depends on the angle of the incident light. So for vehicles close to the equator the panels can face straight up, since the sunlight is directly overhead, whereas at the poles the solar array should be perpendicular to the ground (this can be seen in the "Mohawk" of Astrobotic's Polaris rover).
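
A minimal sketch of that geometry: the power a flat panel produces falls off with the cosine of the angle between the sunlight and the panel's normal. The solar constant near Earth (about 1,361 W/m^2) is a real figure; the panel area and efficiency are illustrative assumptions.

# Solar panel output versus sun angle: power scales with cos(incidence angle).
# Solar constant is ~1361 W/m^2 near Earth; area and efficiency are examples.
import math

SOLAR_CONSTANT_W_M2 = 1361.0

def panel_power(area_m2, efficiency, incidence_deg):
    """Electrical power from a flat panel; incidence_deg is measured from the panel normal."""
    cos_theta = max(0.0, math.cos(math.radians(incidence_deg)))
    return SOLAR_CONSTANT_W_M2 * area_m2 * efficiency * cos_theta

if __name__ == "__main__":
    for angle in (0, 30, 60, 85):
        print(f"{angle:2d} deg: {panel_power(1.0, 0.28, angle):.0f} W")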


The other option would be to use radioisotope thermoelectric generators, or RTGs, such as the one on the Mars Science Laboratory. RTGs use the heat from radioactive decay and convert that thermal energy into electrical energy. RTGs are not dependent on sunlight, so they could power a vehicle through the lunar night, which lasts about 14 Earth days, but they weigh much more.

Thermal

Temperature on the Moon can range from about 100°C (212°F) during the lunar day to -150°C (-238°F) during the lunar night. These extremes pose a real problem for spacecraft components, whose rated operating ranges are typically closer to ordinary Earth temperatures.


Heat can transfer in three ways: conduction, convection, and radiation. On Earth, conduction and convection are the primary heat transfer methods, but due to the lack of an atmosphere in space the primary mode is radiation. The amount of radiative heat transfer depends on two material properties: absorptivity, the fraction of incoming radiation the spacecraft absorbs, and emissivity, which determines how effectively the spacecraft re-radiates heat back to the environment.

Since the spacecraft absorbs mostly in the visible spectrum and emits in the infrared, materials can be chosen with the absorptivity and emissivity appropriate for the desired equilibrium temperature. For example, white paint has low absorption in the visible spectrum and high emissivity in the infrared, which results in a cooler equilibrium temperature, since more heat is radiated to space than is absorbed from the sun. This material selection is an example of passive thermal control. Active thermal control, such as liquid cooling, can also be used to regulate spacecraft temperature, but it requires power.
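
A minimal sketch of that balance for a sunlit flat plate: absorbed solar power (absorptivity times solar flux times area) equals the power radiated away (emissivity times the Stefan-Boltzmann constant times T^4 times the radiating area), and solving for T gives the equilibrium temperature. The one-sided plate radiating from both faces and the sample absorptivity/emissivity values are simplifying assumptions.

# Equilibrium temperature of a flat plate in sunlight: absorbed solar power
# equals power re-radiated to space. Plate absorbs on one face and radiates
# from both; the sample absorptivity/emissivity values are illustrative.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1361.0       # W/m^2 near the Earth and Moon

def equilibrium_temp_c(absorptivity, emissivity):
    # alpha * S * A = eps * sigma * T^4 * 2A
    t_kelvin = (absorptivity * SOLAR_FLUX / (2.0 * emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

if __name__ == "__main__":
    print(f"white paint (a=0.20, e=0.90): {equilibrium_temp_c(0.20, 0.90):.0f} C")  # roughly -46 C
    print(f"black paint (a=0.95, e=0.90): {equilibrium_temp_c(0.95, 0.90):.0f} C")  # roughly +62 C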

Radiation

Radiation in the form of heat transfer is non-ionizing. We must also look at how particles from the sun can ionize substances and cause damage. On Earth we are lucky to have the magnetosphere, which deflects incoming high-energy particles from the sun; only a few of these particles make it to the surface of the Earth. The Moon normally orbits outside of the Earth's magnetosphere and has no magnetosphere of its own, so incoming particles from the sun continually bombard its surface. These particles can cause damage in a number of ways, but the greatest concern for a robotic mission is the loss or corruption of data. The incoming radiation can disrupt sensors and produce inaccurate information.

In order to protect against this damage we can use physical techniques that absorb or deflect the incoming particles, or logical techniques that use computer programs to determine whether information has been altered. In reality, a combination of the two methods is used for redundancy and effectiveness, as in the simple example below.
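
One of the simplest "logical" techniques is triple modular redundancy: keep three copies of a value and majority-vote each bit, so a single radiation-induced bit flip in one copy is outvoted by the other two. The sketch below is a bare-bones illustration of the idea, not flight software.

# Triple modular redundancy: majority-vote three copies of a value so a
# single radiation-induced bit flip is outvoted. Bare-bones illustration.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three integer copies."""
    return (a & b) | (a & c) | (b & c)

if __name__ == "__main__":
    stored = 0b10110010
    corrupted = stored ^ 0b00000100     # one copy suffers a single bit flip
    recovered = tmr_vote(stored, stored, corrupted)
    print(bin(recovered), recovered == stored)  # the flip is voted out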

Dust

The surface of the Moon itself provides one of the most difficult challenges to overcome. Since the Moon has no atmosphere, small meteorites have been hitting it for billions of years. All of these impacts have turned the surface of the Moon into a fine "dust" - well, more like tiny, sharp particles of pain in the butt. We have some first-hand experience with this dust from the Apollo missions, where suits were damaged by dust interaction. Robotic missions must avoid leaving open areas where the dust can get into joints and cause malfunctions.

To make matters worse, treading lightly on the lunar surface probably won't help. The high-energy solar wind particles have also been ionizing the dust on the surface of the Moon, leaving it electrostatically charged. Once an object reaches the lunar surface, it is almost a certainty that the dust will cling to it.

Communications

How do you send and receive information to and from the Moon? This is a big question for the Google Lunar X PRIZE. Teams must send back HD video, photographs, and data to win the prize money, but they don't have the convenience of an internet service provider the way we do when transferring information here on Earth.

Sending and receiving information from the Moon is more similar to communicating with an orbiting satellite than to communicating with Mars, mostly because of distance. Communication with Mars can take up to about 45 minutes round trip, but a round trip to the Moon only takes about three seconds. In order to communicate with the Earth, the vehicle must have some sort of antenna and a link to a ground station on Earth that receives the signals. The amount of data that can be transmitted depends on the transmission power and the size of the antenna. Although high data rates are preferred, a balance must be struck so that the overall mass and power requirements of the spacecraft are not too high.
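
A very rough feel for that trade-off comes from a simplified link budget: received power falls with the square of the distance and grows with transmit power and antenna gains, and more received power (for a given noise level) supports a higher data rate. The sketch below uses the standard free-space (Friis) path loss with made-up but plausible numbers; a real link budget also includes noise temperature, pointing loss, coding, and margin.

# Simplified free-space link: received power via the Friis equation.
# The 2 GHz frequency, 10 W transmitter, and antenna gains are made-up but
# plausible example values.
import math

C = 299_792_458.0  # m/s

def received_power_dbm(tx_power_w, tx_gain_dbi, rx_gain_dbi, freq_hz, distance_m):
    tx_dbm = 10 * math.log10(tx_power_w * 1000.0)
    fspl_db = 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db

if __name__ == "__main__":
    for label, d in (("low Earth orbit (~2,000 km slant range)", 2.0e6),
                     ("Moon (~384,400 km)", 3.844e8)):
        p = received_power_dbm(10.0, 12.0, 45.0, 2.0e9, d)
        print(f"{label}: {p:.0f} dBm received")
    # The lunar signal arrives roughly 46 dB (about 37,000 times) weaker.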

Conclusion

Reaching the Moon is only the first challenge in a long line of many that must be understood and overcome in order to complete the Google Lunar X PRIZE or any lunar mission. Although there is much glory in spaceflight, it is important to understand the troubles and difficulties that are faced. These challenges make things cost more money and take more time than originally intended. Some of these difficulties are common to every space mission and have reliable, cost-effective solutions. Others are unique to the Moon, where we haven't gone to the surface as a species in over 35 years, and will require true innovation to overcome. The next blog post in this series will talk about some of the advantages of going back to the Moon.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This blog post is brought to you by Shell, our Exploration Prize Group sponsor.

How Mexico Can Leapfrog India and Become America's Automaton Workshop

XPRIZE   |   November 26, 2012    3:30 PM ET

Vivek.jpgBy Vivek Wadhwa
Vice President of Academics and Innovation, Singularity University.

Mexico's I.T. offshoring industry ranks as #3 in Gartner's global rankings, lagging behind only India and the Philippines, according to analyst Frances Karamouzis. Its ambition is to take second place. Given that Mexico now claims to graduate 130,000 engineers every year--which is more than the U.S.--this may not be impossible. At an event hosted by Mexico's I.T. confederation, CANIETI, I was asked for advice on how the country can achieve this objective. The challenge, as it was described to me, is twofold: that Mexico's engineering education is not uniform in quality and that the curriculum is in any case incomplete, leaving graduates without necessary skills.

I told CANIETI that its challenge is the same as India once faced. India's education system has the same problems of inconsistency and incompleteness. To compensate for this weakness, Indian industry developed a surrogate education system. But even if Mexico learned from India and reeducated its graduates, it would face another big hurdle: that the market for I.T. outsourcing has plateaued.

That's because the I.T. departments that the outsourcing companies sell to are losing their power. With users having home access to iPads, social media, and downloadable apps--all of which are more sophisticated than what I.T. departments usually offer--user departments don't need I.T. as much as they used to. They are themselves choosing solutions from companies such as Salesforce.com, Google, and Microsoft--which use cloud computing to provide the infrastructure. As a result, the hundred-million-dollar outsourcing deals are fewer and further between. And the trend toward user control is accelerating.

My advice was that Mexico target another emerging market, one that is likely to be bigger than I.T. services and that it is in the catbird seat to own. It can leapfrog Indian I.T., which is busy defending its outsourcing turf and has become complacent because of its size.

The opportunity is to help America re-automate its manufacturing industry. I have written previously about Chinese manufacturing's having peaked and about why it is nearly certain that manufacturing will come back to the U.S. Advances in robotics, artificial intelligence, and 3D printing are going to savage China's labor-cost advantage.

Take the Baxter robot, which Rethink Robotics announced recently. It has two arms, a face that displays simulated emotion, and cameras and sensors that detect the motion of the human beings who work next to it. It can perform assembly tasks and move boxes--just as humans do. It will work 24 hours a day and not complain. It costs only $22,000. And it's just one of many advances to come.

Artificial Intelligence (A.I.) is making it possible to develop self-driving cars, voice-recognition systems such as Apple's Siri, and computer systems that can make human-like decisions. A.I. technologies are also finding their way into manufacturing and are powering robots such as Baxter.

A type of manufacturing called "additive manufacturing" is making it possible to cost-effectively "print" products. 3D printers can create physical mechanical devices, medical implants, jewelry, and even clothing. The cheapest 3D printers, which print rudimentary objects, currently sell for between $500 and $1000. Soon we will have printers for this price that can print toys and household goods. By the end of this decade, we will see 3D printers doing the small-scale production of previously labor-intensive crafts and goods. In the next decade we may be 3D-printing buildings and electronics.

These technologies are becoming available and cheap, but America's manufacturing plants aren't geared up to take advantage of them. This is what opens the opportunity for Mexico. It can set up automated factories across the border that manufacture at costs comparable to China's. Mexican services firms can master the new technologies and help American firms design new factory floors and program and install robots. This is a higher-margin business than the old I.T. services.

Rather than focusing on yesterday's markets and technologies, Mexico's firms can focus on tomorrow's advances and become America's automaton workshop. With Mexico's growing skilled workforce and its proximity to the U.S., this could be a big win for Mexico and for the U.S.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

Bleeding Competitiveness: America's New Immigrant Entrepreneurs -- Then and Now (Infographic)

XPRIZE   |   November 20, 2012    2:19 PM ET

Vivek.jpgBy Vivek Wadhwa
Vice President of Academics and Innovation, Singularity University.


A new Kauffman Foundation study finds that the number of high-tech, immigrant-founded startups--a critical source of fuel for the U.S. economy--has stagnated and is on the verge of decline. "America's New Immigrant Entrepreneurs: Then and Now," which evaluates the rate of immigrant entrepreneurship from 2006 to 2012, updates findings from a 2007 study that examined immigrant-founded companies between 1995 and 2005. The infographic below summarizes the findings of the study by Vivek Wadhwa, AnnaLee Saxenian, and F. Daniel Siciliano. Here is a link to download the full paper.

2012-11-20-ThenandNow.jpg

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

From the Earth to the Moon

XPRIZE   |   November 19, 2012    2:16 PM ET

2012-11-19-Nathan_Wong.jpgBy Nathan Wong
Part one of a five-part series about going back to the Moon, by Google Lunar X PRIZE technical consultant Nathan Wong.

"It's not the destination, it's the journey." Well, in the case of the Google Lunar X PRIZE it is both. Building systems that can operate on the Moon is a difficult task, but so is the task of getting a payload to the lunar surface safely. There have been 19 successful soft landings on the Moon, from Luna 9 on January 31, 1966 to Luna 24 on August 14, 1976, including the Apollo missions. The Apollo missions used the largest launch vehicle ever successfully launched, the Saturn V, to land its payload on the Moon. The Saturn V weighed 2.3 million kg (5 million lbs.) with a payload capacity of approximately 45,000 kg (100,000 lbs.) to lunar injection orbit. The teams competing in the Google Lunar X PRIZE won't need that large of a rocket to get their vehicle to the lunar surface, but the steps on how to get there are the similar.

We can break down the journey to the Moon into five distinct events that all need to happen successfully. Those events are the launch, trans lunar injection, lunar orbit insertion, lunar descent orbit, and landing.

Launch

This portion of the journey takes the spacecraft from the surface of the Earth to outer space. One option is to build your own launch vehicle, which Team ARCA is doing with the Super Haas, but many Google Lunar X PRIZE teams will be purchasing their launch service. Currently, Barcelona Moon has signed a launch contract with China Great Wall Industry Corporation for a Long March 2C, and Astrobotic has signed with SpaceX for a Falcon 9. Even launch services bought from a commercial provider are not without risk: industry-average launch failure rates are about 7%, and as high as 15% over the first 10 flights of any new vehicle.

2012-11-19-rockets_1.jpg
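To get a feel for what those statistics mean, here is a quick sketch that treats the quoted rates as simple fixed per-launch probabilities, which is of course a simplification of how real reliability statistics behave:

    # What the quoted launch-failure statistics imply, treating them as fixed
    # per-launch probabilities (a simplification).
    p_fail_mature = 0.07   # ~7% industry-average failure rate
    p_fail_new = 0.15      # up to ~15% within a vehicle's first 10 flights

    print(f"One launch on a mature vehicle succeeds: {1 - p_fail_mature:.0%}")   # 93%
    print(f"One launch on a new vehicle succeeds:    {1 - p_fail_new:.0%}")      # 85%

    # If ten teams each bought one mature-vehicle launch, the chance that at
    # least one of those ten launches fails:
    print(f"At least one failure in 10 launches: {1 - (1 - p_fail_mature) ** 10:.0%}")  # ~52%

In other words, any single team's launch will probably go fine, but across the whole competition a launch failure somewhere is a very real possibility.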

Trans Lunar Injection

Once the payload has left the Earth and made it to space, the spacecraft has shed about 95% of its prelaunch mass by burning fuel and separating stages. The odds are that the Moon is not in the perfect position at the time of launch for a direct transit, so the spacecraft is placed into a parking orbit (a temporary orbit around the Earth). As it circles the Earth, the spacecraft lines up with an energetically favorable position, fires its engines, and begins its transit toward the Moon. The transit to the Moon takes approximately three days.
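That 95% figure is essentially the Tsiolkovsky rocket equation at work. Here is a minimal sketch with illustrative numbers (a representative chemical-rocket specific impulse and a rough delta-v to low Earth orbit, not any particular vehicle's specs):

    import math

    # Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)
    # Illustrative numbers only, not any specific vehicle's specs.
    g0 = 9.81            # m/s^2, standard gravity
    isp = 350.0          # s, a representative chemical-rocket specific impulse (assumed)
    delta_v = 9_400.0    # m/s, rough delta-v to reach low Earth orbit including losses (assumed)

    mass_ratio = math.exp(delta_v / (isp * g0))   # m0 / mf
    propellant_fraction = 1 - 1 / mass_ratio
    print(f"Required mass ratio m0/mf: {mass_ratio:.1f}")                     # ~15.5
    print(f"Propellant fraction of liftoff mass: {propellant_fraction:.0%}")  # ~94%

Add the dry mass of the stages that get dropped along the way and you arrive at roughly the 95% quoted above.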

Lunar Orbit Insertion

Depending on the path taken from Earth orbit to the Moon, the spacecraft will either be going too fast and overshoot the Moon or be going too slow and impact it. Neither is an ideal scenario. To avoid both, the spacecraft fires its engines again to place itself in orbit around the Moon. Potential trouble does not end here: lunar communication infrastructure is not as developed as that around the Earth, so there may be times when you cannot communicate with your vehicle while it is over the far side of the Moon. By now the spacecraft has traveled about 384,400 km (238,855 mi). While in this orbit you can survey your potential landing site(s).
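For a sense of the speeds involved once you arrive, here is a minimal sketch using rounded constants and an assumed 100 km circular parking orbit:

    import math

    # Speed in a circular orbit: v = sqrt(mu / r). Rounded constants, illustration only.
    MU_MOON = 4.905e12        # m^3/s^2, lunar gravitational parameter
    R_MOON = 1_737_400.0      # m, mean lunar radius
    altitude = 100_000.0      # m, an assumed 100 km parking orbit

    r = R_MOON + altitude
    v_circular = math.sqrt(MU_MOON / r)
    period = 2 * math.pi * r / v_circular
    print(f"Circular orbit speed at 100 km: {v_circular:.0f} m/s")   # ~1,630 m/s
    print(f"Orbital period: {period / 60:.0f} minutes")              # ~118 minutes

A roughly two-hour orbit also explains the communication gaps: for a good chunk of every revolution the spacecraft is behind the Moon and out of sight of Earth.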

Lunar Descent Orbit

In order to reach the lunar surface, the spacecraft must be placed into an orbit that intersects the surface of the Moon. To do that, the rocket engines fire to slow the spacecraft down, and it begins to drift closer and closer to the lunar surface.

Landing

Taking your spacecraft and lunar vehicle down to the surface of the Moon is probably one of the most challenging parts of the Google Lunar X PRIZE. Even if your vehicle is ready to accomplish all of the mission tasks, a "hard landing" (a crash) on the lunar surface means you never get the chance to show off all the hard work put into the project. To land safely you need to reduce your speed to zero as you approach the lunar surface and find a safe landing spot from which you can complete the mission objectives.
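As a very rough feel for what "reduce your speed to zero" costs, here is a ballpark sketch with my own assumed numbers (a 100 km starting orbit and a 25% allowance for gravity and steering losses), not a real descent trajectory design:

    import math

    # Very rough landing delta-v estimate (assumed numbers, not a trajectory design).
    MU_MOON = 4.905e12                  # m^3/s^2, lunar gravitational parameter
    R_MOON = 1_737_400.0                # m, mean lunar radius
    orbit_radius = R_MOON + 100_000.0   # starting from an assumed 100 km orbit

    v_orbit = math.sqrt(MU_MOON / orbit_radius)   # orbital speed to cancel, ~1.6 km/s
    gravity_losses = 0.25 * v_orbit               # assumed ~25% penalty for gravity/steering losses

    print(f"Ballpark landing delta-v: {v_orbit + gravity_losses:.0f} m/s")   # ~2,000 m/s

Roughly two kilometers per second of braking, performed autonomously while searching for a safe spot, is why this phase keeps mission designers up at night.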

Conclusion

From launch to lunar landing, the teams will have minimal manual control over their spacecraft and vehicles. Most of the operations will be handled autonomously, but rest assured there is still a lot of practice behind those maneuvers. Most of the manual control will come once the vehicles are physically on the Moon's surface.

Hopefully this breaks down a bit of how teams in the Google Lunar X PRIZE will get to the Moon. Although transportation on the Moon gets a lot of attention, and rightfully so, I hope I could shed some light on the challenges and events needed just to get there. The next blog post in this series will talk about some of the technical challenges that must be overcome on the Moon.

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This blog post is brought to you by Shell, our Exploration Prize Group sponsor.

Mediocrity versus Mastery: The Case for Game-Based Learning

XPRIZE   |   November 14, 2012    9:26 PM ET

2012-11-15-Poonam_Sharma.jpgBy Poonam Sharma
Poonam Sharma is the Cofounder of Mango Learning, Inc.

Ask Serena Williams to describe every leap, twist and gesture on the court if what you really want is to see her stumble. Task World Chess Champion Viswanathan Anand with explaining every slide of a pawn or block of a bishop and he might lose sight of the board. Force a pilot to narrate every bank, altitude adjustment, and unplanned course correction through this week's Frankenstorm -- and prepare yourself for a pretty bumpy ride. While it may seem counterintuitive, the fact is that experts are simply people who know the basics of their discipline so cold (they have developed a level of Automaticity with basic functions) that consciously dwelling on those details actually becomes a distraction. And in the context of how best to educate our children, understanding this aspect of true mastery leads to only one answer to the most critical education question of the mobile revolution:

How best can we leverage the promise of mobile platforms to raise children who aim beyond mediocrity, and towards mastery in their learning? We can do it through adaptive, mobile, game-based learning.

Contrary to popular belief, experts don't know more about the basic steps than the rest of us - they just recall them more automatically. The Automaticity they've developed (the instant recall of the basic functions) frees up their conscious minds to focus on higher levels of strategy. And that freedom is a key part of what makes them really good at what they do. So asking someone who has mastered a discipline to consciously focus on the individual steps actually distracts them from the engine of their mastery, and throws them off their game (pun intended). Many athletes recognize this. Most coaches understand this. But despite great effort, the educational system as a whole has been unable to apply this truth on a broad scale to produce better results - until now.

Enter the mobile revolution. Digital learning for the K through 12 market has opened up a world of potential, vastly expanding our chances of truly democratizing access to quality instruction and learning opportunities worldwide. Now that tablets have proliferated widely both at home and in schools, and students really can learn any time, any place, so long as they are motivated, the next logical question becomes: How do we build content for mobile platforms that is personalized enough, engaging enough, adaptive enough, and rigorous enough to make kids the drivers of their own advancement towards Automaticity, and beyond?

Consider foundational Math, for example, which has been correlated with successful outcomes across a child's future educational career. "If you only teach it through rote memorization, textbooks and lectures, you would be surprised at how quickly even otherwise diligent students will lose interest," explains Bob Collins, former Chief Instructional Officer for the Los Angeles Unified School District. "But when you meet today's students where they live, in the digital world that has been theirs for as long as they can remember...and you give them different ways to learn, and chances to perceive success, that's when you start to see them get really excited about learning."

This is not about teaching students through games in order to entertain them, or to distract them from the other media vying for their daily attention. This is about training them to get excited about driving their own education. It's about delivering the sense of incremental achievement that we know inspires people not to quit, and not to want to see themselves as quitters. Because in the context of the digital age, when the answers literally are at everyone's fingertips, success becomes a function of persistence and strategy - not memorization. And teaching for Automaticity becomes more important than ever, when even instant recall is no longer enough to help our children get ahead.

While e-books and online lectures have their place, stopping here would be like deploying the very first televisions into every American home...and then doing little more to leverage the promise of the medium than aiming the cameras at existing radio hosts. We can do better than that.

We can deliver the instant feedback of Math games that allow a student to advance at their own pace and earn the satisfaction of unlocking higher reward levels. We can provide adaptive in-game pre-tests to gauge students' knowledge and calibrate questions specifically for them. We can deliver tutorials with multiple methods of solving the same question and arriving at the same answer, to drive home the point that it is alright to take a different path to success.
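As a toy illustration of the kind of adaptivity described above (this is not Mango Learning's actual algorithm, just a minimal sketch), a pre-test can be driven by a simple rule: step the difficulty up after a correct answer and down after a miss, so the questions settle near the edge of a student's current ability:

    # Toy adaptive-difficulty loop (illustrative only; not any product's real algorithm).
    def adapt_difficulty(responses, start_level=3, min_level=1, max_level=10):
        """Step difficulty up after a correct answer, down after a miss."""
        level = start_level
        history = []
        for correct in responses:
            history.append(level)
            level = min(max_level, level + 1) if correct else max(min_level, level - 1)
        return level, history

    # A student who misses the harder questions settles near their working level.
    final, path = adapt_difficulty([True, True, True, False, True, False, False])
    print(path, "->", final)   # [3, 4, 5, 6, 5, 6, 5] -> 4

Real adaptive systems use richer models of student knowledge, but the goal is the same: keep each learner challenged without leaving them stranded.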

Real life does not reward pure memorization, and elementary education shouldn't either. The promise of the mobile age for education is far more than a chance to teach our kids the same facts in a more exciting way - it's a chance to retrain our students to expect more from themselves. So if you're still on the fence about Game-based Math Learning, consider how immediately children take to games of any kind, and then recall how quick you were at their age to dismiss your own abilities after just one failed Math test. Now, what if Serena had done that?

Visit X PRIZE at xprize.org, follow us on Facebook, Twitter and Google+, and get our Newsletter to stay informed.

This material published courtesy of Singularity University.

How Galaxy Zoo Conquered Space

Peter Diamandis   |   November 14, 2012    2:50 PM ET

In this blog I'm going to show you how science can tap into the power of the crowd to analyze the mountains of data that our exponentially powered instruments are generating.

When I was a grad student at MIT, I had a chance to become friends with the Viking Mission's chief scientist, Dr. Gerald Soffen. Viking was the first Mars lander to look for signs of life on Mars. One of the shocking facts that Dr. Soffen shared with me was that in the decade after Viking 1 and 2 landed on Mars, NASA had the capacity to look at and analyze just 1 percent of the data from those two missions. One percent!

It hit me back then that what we needed was a way for the data to be made available to the thousands, or even millions, of amateur scientists who would "kill" to have access to that data, to analyze it and perhaps to contribute to the science.

Twenty-five years later, that's exactly what Galaxy Zoo did.

I spoke recently with Kevin Schawinski, an astrophysicist who co-founded Galaxy Zoo when he was a graduate student at Oxford.

One of his projects was identifying elliptical galaxies, football-shaped transitional galaxies that are a sort of missing link in understanding galaxy formation.

It used to be that, in astronomy, a small team of people could look at photos of a few thousand galaxies and classify and catalog them relatively easily. But now, with a new generation of robotic telescopes scanning the skies constantly and producing millions of images, that's become next to impossible.

Schawinski himself had spent a week classifying 50,000 galaxies. "We'd extracted an awful lot of interesting science from this," he told me. But when he wanted to dig deeper and classify the million galaxies for which he and his colleagues had images, he knew it was an impossible task for one person.

"We hit on the idea of putting the images on a website and finding people, perhaps two or three amateur astronomers who'd be willing to help us," he explained. "Doing one of those back-of-the-envelope calculations, we figured it would take five years for the million galaxies to be classified."

So he and his colleagues decided to go ahead. They assembled a website, Galaxy Zoo, that they opened to the public.

Surprise: "Within hours of the site going live, we were classifying every hour more galaxies than I'd done in a whole week," he said. "And then more people and more people signed up." By the time Galaxy Zoo was turned off with the completion of the project a year and a half later, it had attracted 250,000 registered users.

What's more, the results were astonishing. The original goal of Galaxy Zoo was to have every one of the million galaxies looked at just once. It ended up that every galaxy had been classified over 70 times, Schawinski said.

"What you're really tapping into is the wisdom of crowds," he told me. "You end up with 70 independent measures of each galaxy, and you can do real science with it."

And out of Galaxy Zoo has grown Zooniverse, which we'll talk about in future posts. Why was this project, and subsequent ones like it, such a success?

Schawinski and his colleagues asked the same question. They worked with social scientists, sent out questionnaires, and discovered that people's No. 1 motivation for participating in a project such as Galaxy Zoo was the desire to contribute to actual science. "They want to do something that's useful," explained Schawinski.

People want to contribute. "We'd hit an unmet need," Schawinski told me. "People wanted to do this."

That's a key to understanding the appeal of crowdsourcing: we want to feel that we contribute and that we make a difference.

In my next post, I'm going to talk about some of the things to be aware of when beginning a project like Galaxy Zoo.

NOTE: Over the next year, I'm embarking on a BOLD mission -- to speak to top CEOs and entrepreneurs to find out their secrets to success. My last book Abundance, which hit No. 1 on Amazon, No. 2 on the New York Times and was at the top of Bill Gates' personal reading list, shows us the technologies that empower us to create a world of Abundance over the next 20 to 30 years. BOLD, my next book, will provide you with tools you can use to make your dreams come true and help you solve the world's grand challenges to create a world of Abundance. I'm going to write this book and share it with you every week through a series of blog posts. If you enjoyed this post and would like your comments to make it into my book, head here to share your input and feedback. Top contributors will be credited within the book as a special "thank you," and all contributors will be recognized on the forthcoming BOLD book website. To ensure you never miss a post, sign up for my newsletter here.