Technology is risky business. At least, that's what some scientists fear: the proposed Centre for the Study of Existential Risk at the University of Cambridge will bring together researchers to brainstorm how we might prepare for future technology-related and human-induced dangers.
But what are these possible threats? Well, in part, it's too soon to tell -- that's precisely what the center hopes to study. Yet the center's co-founders have suggested we should pay more attention to the potential downsides of building sophisticated, artificially intelligent machines or of producing designer viruses. What if we build computers that are too smart for our own good, and they write their own code that wreaks havoc on our banking system or electrical grid? Or, what if a powerful genetically engineered virus is mistakenly let loose from a biotech lab and infects millions?
Dr. Martin Rees, astrophysics professor at Cambridge, addressed these "what ifs" in the video above; click the link below for a full transcript. And don't forget to sound off in the comments section at the bottom of the page.
See all Talk Nerdy to Me posts.