New cyber software 'can test the limits of AI's knowledge'

The software is designed to help companies identify the limitations of AI systems and mitigate the risks of relying on them too heavily

New software promises to measure the depth of an AI system's knowledge. Reuters

A team of researchers has developed new software that they claim can assess the true level of knowledge possessed by artificial intelligence systems.

The software is designed to verify the accuracy and depth of an AI system's understanding of a specific subject, which is important for ensuring reliable performance in various industries, from health care to finance.

The team says it can also identify gaps in an AI system's knowledge and suggest areas for improvement.

The research could prove an important breakthrough in verification methods for AI-rich programs and decision-making, helping to make AI safer.

The researchers, in a paper published by the University of Surrey, have also defined a “program-epistemic” logic, which allows programs to specify their level of knowledge.

The system enables programs to reason about facts that will only become true after they and other processes finish running.

The innovation centres on new methods for automatically verifying epistemic properties of AI-centred programs, and on analysing concrete programs (over an arbitrary first-order domain) against richer requirements than was previously possible.
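The article does not reproduce the paper's formalism, but as a rough illustration in generic dynamic-epistemic notation (not necessarily the authors' exact syntax), a requirement of this kind might be written as:

% Illustrative sketch only, in standard dynamic-epistemic notation,
% not the Surrey paper's own syntax.
\[
  \alpha \;\Rightarrow\; [\,P\,]\; K_{A}\,\varphi
\]
% Read: from any initial state satisfying the precondition \alpha,
% once the program P has finished running, agent A knows that the
% first-order property \varphi holds.

Verification then amounts to checking assertions of this shape automatically against the program's code.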

The software developed by the team can verify how much information an AI system has gained from an organisation's digital database, and can identify whether the AI system is capable of exploiting flaws in software code.

The software is considered a significant step towards ensuring the safe and responsible deployment of generative AI models.

Dr Solofomampionona Fortunat Rajaona, the lead author of the paper, said the ability to verify what AI had learnt would give organisations the confidence to safely unleash the power of AI into secure settings.

He said: “In many applications, AI systems interact with each other or with humans, such as self-driving cars in a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which we have taken years to find a working solution for.”

The software can be used as part of a company's online security protocol to ensure that AI systems are not accessing sensitive data or exploiting software code flaws.

Prof Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, emphasised the importance of creating tools that can verify the performance of generative AI.

“This research is an important step towards maintaining the privacy and integrity of datasets used in training,” he said.

The paper also discusses the challenges of evaluating knowledge-centric properties in AI-based decision-making. It notes that logics of knowledge, or epistemic logics, have been well explored in computer science since Hintikka.

Jaakko Hintikka was a Finnish philosopher and logician known for his work on modal logic and game-theoretical semantics. He introduced the concept of possible worlds to modal logic and was the first to use game-theoretical semantics to analyse modal logic.

The researchers created new methods for analysing how computer programs think and reason. These methods help programs to understand facts not only after they perform an action, but also before they do it.
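As a toy illustration of that idea (and not the Surrey team's actual software), the Python sketch below models an agent's knowledge as the set of possible worlds consistent with what a small program lets it observe. It checks what the agent knows after the program runs, and what it is guaranteed to learn before the program is ever run. The miniature domain and all names are invented for illustration.

# Toy illustration only: a minimal possible-worlds knowledge check.
# This is NOT the Surrey team's tool; the domain and names are invented.

WORLDS = range(8)  # candidate values of a hidden variable x

def program(x):
    """The program under analysis: it reveals only the parity of x."""
    return x % 2  # the output the agent gets to observe

def knows_after(fact, observed_output):
    """After seeing the program's output, the agent knows `fact`
    iff `fact` holds in every world consistent with that output."""
    consistent = [x for x in WORLDS if program(x) == observed_output]
    return all(fact(x) for x in consistent)

def knows_whether_before(fact):
    """Before the program runs: whatever the true world turns out to be,
    will the agent end up knowing whether `fact` holds or not?"""
    return all(
        knows_after(fact, program(x)) or
        knows_after(lambda y: not fact(y), program(x))
        for x in WORLDS
    )

if __name__ == "__main__":
    is_even = lambda x: x % 2 == 0
    equals_4 = lambda x: x == 4

    print(knows_after(is_even, 0))         # True: 0, 2, 4 and 6 are all even
    print(knows_after(equals_4, 0))        # False: 0, 2, 4 and 6 all remain possible
    print(knows_whether_before(is_even))   # True: the output always reveals parity
    print(knows_whether_before(equals_4))  # False: the exact value is never revealed

Real program-epistemic verification reasons over arbitrary first-order domains rather than a handful of enumerated worlds, which is part of what makes automating it difficult.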

Tech leaders call for measured AI development and safety protocols

Martha Lane Fox, the British tech pioneer, has called for a more rational discussion surrounding the impact of AI and has warned against over-hyping it.

While acknowledging that frameworks around AI are necessary, she advocates for a more measured approach from companies in the development of AI technology.

Ms Lane Fox believes AI presents opportunities for society and businesses, but emphasises that it should be adopted in an ethical and sustainable way.

Elon Musk, the chief executive of Tesla and owner of Twitter, joined tech leaders in signing an open letter urging AI labs to temporarily pause the development of powerful AI systems for at least six months.

The letter expressed concern that AI technology with human-competitive intelligence could pose significant risks to society.

The letter proposes that the AI labs work on developing safety protocols overseen by an independent panel before training AI systems more powerful than GPT-4.

The letter also suggests the need for new regulators, oversight, public funding for AI safety research, and liability for AI-caused harm.

Updated: April 04, 2023, 9:50 AM