Chair: Martin Gill

Deborah Evans – Security & Defence Researcher, Southwest Research Services
Dr Michael Coole – Security Science Researcher, Edith Cowan University
Chris Cubbage – Director & Executive Editor, MySecurity Media

Key points

Michael Coole notes that there are different types of AI, referring to ‘narrow’ and ‘broad’ AI, and then ‘general’ AI, the best of which is equated with mirroring what humans can do. He warns that while some AI appears to be smart, it is often merely computational and therefore limited. AI achieves objectives by following rules, but rules focused on security may conflict with others, and therein rests one of many notes of caution about how AI is and can be used. His own research work for the ASIS Foundation found that the risks of using AI are overtaking humans’ ability to control them. You will hear him discuss risks relating to robotics and to the nuclear sphere: its use is wide and varied, but controlling the quality of the components used will be impossible, opening up all sorts of possibilities for those who act with malign intent.

Debbie Evans argues, in the context of the webinar title, that it is possible to see AI as both essential and an unwelcome Big Brother, and she outlines her thoughts on each. For Debbie, it is essential because other disciplines are developing apace and security could lose out; many technologies in and around security use AI to maximise benefits, so keeping abreast of developments is key. Moreover, security needs to keep up with threat actors and adversaries who adapt and have no inhibitions about using whatever is needed. Human rights violations, though, are a concern. Interestingly, Debbie argues that privacy is an illusion: humans are heavily invested in technologies, and so much out there uses private information for different purposes. She argues the need for a global consensus on governance, which will be difficult to achieve; meanwhile, humans need to take responsibility for agreeing on and promoting responsible use.

Chris Cubbage notes he is an optimistic pessimist! You will hear him discuss the language of AI, differentiating it from machine learning. For Chris, however desirable an international framework agreement may be, he does not believe it will be possible: countries are at different stages of development and have different needs, reflecting different levels of commitment to approaches to governance. Some areas are more sophisticated in their use of AI than others; cyber security is more developed, and in a different way you will hear him discuss Tesla’s robotic (driverless) car. At its best AI is very good, very fast and very powerful, but he warns that there is no such thing as responsible AI, and that any claims to such are ‘sales pitches’. He points to the bias inherent in datasets of all kinds; we are ‘stuck’ because the ‘horse has already bolted’. The truth is that humans are creating vulnerability in systems while becoming reliant on them at the same time.1

The ASIS Foundation study raises interesting and important issues which have often operated under the radar in discussions about AI. That AI increases the possibilities for more advanced and imaginative use cases is not in doubt. Our panel confirm that the challenges posed by AI are considerable. The problem is that there is a significant downside, and that still needs to be understood and managed. Security has a key role to play and needs to step up to the challenge, but is it ready?

Martin Gill
17th March 2022

1 Chris mentioned three interviews that add further insight:
Reform of Australia’s Electronic Surveillance Framework
Call Out to Stop ‘Deputising’ Tech Companies
Frontier, the First Exascale Supercomputer in the USA