Interested in the potential for AI to change the planet? Then take a look at “Life 3.0: Being Human in the Age of Artificial Intelligence” by MIT physics professor Max Tegmark.

According to a review in The Guardian by Yuval Noah Harari, Life 3.0 offers a highly readable and wide-ranging look at the promises and perils of the AI revolution.

Life 3.0 does a good job of clarifying basic terms and key debates, and of dispelling common myths. While science fiction has caused many people to worry about evil robots, for instance, Tegmark rightly emphasises that the real problem is the unforeseen consequences of developing highly competent AI. Artificial intelligence need not be evil, nor encased in a robotic frame, in order to wreak havoc. In Tegmark’s words, “the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

In Life 3.0, Tegmark covers a lot of ground, reviewing the impact of AI on the job market, warfare and political systems before venturing into the realms of philosophy and theology.

As Harari writes, “the real problem of Tegmark’s book is that it soon bumps up against the limits of present-day political debates. The AI revolution turns many philosophical problems into practical political questions and forces us to engage in ‘philosophy with a deadline’ (as the philosopher Nick Bostrom called it). Philosophers have been arguing about consciousness and free will for thousands of years, without reaching a consensus. This mattered little in the age of Plato or Descartes, because in those days the only place you could create superintelligences was in your imagination. Yet in the 21st century, these debates are shifting from philosophy faculties to departments of engineering and computer science. And whereas philosophers are patient people, engineers are impatient, and hedge fund investors are more restless still. When Tesla engineers come to design a self-driving car, they cannot wait while philosophers argue about its ethics.

“Consequently, Tegmark soon leaves behind familiar debates about the job market, privacy and weapons of mass destruction, and ventures into realms that hitherto were associated with philosophy, theology and mythology rather than politics. This can hardly be avoided. For the creation of superintelligent AI is an event on a global or even cosmic rather than a national level. For 4bn years life on Earth evolved according to the laws of natural selection and organic chemistry. Now science is about to usher in the era of non-organic life evolving by intelligent design, and such life may well eventually leave Earth to spread throughout the galaxy. The choices we make today may have a profound impact on the trajectory of life for countless millennia and far beyond our own planet.”