IFIP President, Mike Hinchey, addressed industry leaders and academics on the subject of “AI and its Social Impact” earlier this month during a visit to the University of Technology Sydney (UTS).

Professor Hinchey, who is also President of the Irish Computer Society and Emeritus Director of Lero, the Irish Software Research Centre, was in Australia as part of a brief visit to speak at the annual ACS Reimagination event. 

Drawing on his extensive experience at Lero, as well as 20 years working with and consulting for NASA’s Software Engineering Laboratory at Goddard Space Flight Center, Mike discussed his role in developing large swarms of intelligent, autonomous robots to aid in space exploration and send data back to Earth. 

“My area of research was formal methods and autonomous systems,” he explained. “We were using the Autonomic Systems Specification Language (ASSL) to build autonomous robots capable of operating differently in different environments so that they could engage in self-protective behaviour when needed.” 

Professor Hinchey shared the ABCD of artificial intelligence, which he unpacked as Algorithms, Big Data, Connectivity and Domain. At the same time, he stressed the difference between AI and automation.

“Many of the systems we call AI might have elements that come from AI research, but just because something uses virtual reality doesn’t mean it’s AI – a lot of it is just systems crunching huge amounts of data to make meaningful connections,” he said.

Mike suggested changing the C from Connectivity to Consciousness, quoting luminary Roger Penrose, who said that if a system isn’t conscious, then it’s not AI. He also highlighted the need for an E, which stands for Explainability.

“We need to know how AI systems will make decisions, and we need to know this in advance. This is a big issue with self-driving cars. How can we make sure they won’t randomly run onto the footpath and kill pedestrians?” he asked. 

“If we allow a system to change itself, how do we know we will be safe? How much trust can we put in the system after it’s been allowed to change itself?” 

Mike suggested that true AI systems are much further away than many people think, but said the world will quickly feel the impacts of increased development in robotics and machine learning.

“Will robots take our jobs? Absolutely. But automation took our jobs as well … and it created new ones. For every job that robots take away, they’ll create two new ones,” he predicted. 

Professor Hinchey’s presentation was followed by a panel session, where he was joined by Distinguished Professor Biswajeet Pradhan; Distinguished Professor Jie Lu, Associate Dean (Research Excellence); and IFIP Vice President and ACS Past President, Anthony Wong.

Mr Wong, an ICT lawyer, warned that we need to be careful about blindly deploying AI without human supervision. 

“There will be a small percentage of situations where things can go wrong – we need that protection in those instances.”

Professor Hinchey agreed, saying that technology will be more accurate than humans, but is still not perfect and a long way from conscious. “Even Deep Learning doesn’t give us a machine that thinks.”

He also warned against the biases that humans naturally write into their algorithms and cautioned against using AI systems to try to predict the future. 

“In the US, AI systems are already being used to give sentences to criminals and to predict recidivism. This is madness,” he said. “These systems don’t know the future and they cannot predict who will reoffend. Using AI to predict the future is completely unethical.”

Mr Wong concurred: “AI gives you probability, not complete clarity. A human can only make one mistake at a time, sequentially. A computer can make decisions that have global implications with parallel processing. That’s why professionalism is important.”