Artificial Intelligence (AI) systems were the focus when IFIP President, Professor Mike Hinchey, presented at the ITU World Telecommunication Standardization Assembly (WTSA-16) in Tunisia.

Professor Hinchey, who has worked with AI systems for over 15 years in his capacity as a consultant to NASA’s Space Program, explored the question: “How do we trust AI systems?”

Held every four years, the ITU WTSA defines areas of study for ITU-T, the ITU Telecommunication Standardization Sector, which helps to develop and promote global standards for critical telecommunications infrastructure. This year’s event included a half-day session entitled, “ITU-T 60th Anniversary Talks on AI”, with Hinchey one of three keynote speakers.

Professor Hinchey, who is also Director of Lero, the Irish software research centre, Professor of Software Engineering at the University of Limerick and former Director of the NASA Software Engineering Laboratory, continues to consult for NASA’s Space Program. He applies AI in his work with swarm technology used in unmanned space exploration.

He said that in today’s rapidly developing era of driverless cars, AI-enhanced shopping sites such as Amazon, and algorithmic trading on financial markets, many important decisions are made without human involvement.

“The challenge is how to trust those decisions, particularly in a situation where machine learning means that the computer might make a completely different decision from one context to another. If we are going to empower machines to act on our behalf, then we must be clear about the constraints we want to enforce by specifying a range of behavioural rules we will accept and those we won’t,” he said.

While recognising the enormous investment being made in AI systems like driverless cars, Professor Hinchey said the jury is still out on whether these systems will ever be fully implemented.

“Technology like driverless cars only really works if everyone applies the rules consistently. Robots will, but humans might not. Humans sometimes bend the rules out of courtesy and will use eye contact to confirm their decision. However, in a context where both humans and robots are sharing the road, problems could arise because of the different ways in which they interpret the rules,” Professor Hinchey explained.

He also questioned where to draw the line on preservation of life. “Of course, a self-driving vehicle will seek to protect its occupants, but what happens if the choice is between saving the person in the car and saving several people on the street? How does a robot decide without the benefit of human judgement?”

Professor Hinchey said more research is needed to understand the nuances of AI systems as their influence in our lives continues to grow.

ITU WTSA-16 runs from 25 October to 3 November in Yasmine Hammamet, Tunisia. For more information, visit