Questions on AI Threats in Cybersecurity

As people occasionally do, I was asked to attend a conference that involved a lot of listening and not much speaking, which is generally what happens when your speaker list consists of more than thirty individuals on a Zoom call.

This Medium post is where you can read more about all of the types of Zoom participants we know so well….

How are threat actors currently using AI in attacks?


What are the attack capabilities being enhanced?

Hmmm… hard to tell, for all of the reasons noted under the first question.

Capabilities to me = supply chain. What parts of the supply chain are being enhanced to make attack capabilities more robust?


Will existing risk controls be sufficient?

Well, no. They barely keep us safe right now against traditional attacks.

I heard a lot of good pontification on this point… but to really get anywhere meaningful here, practitioners need to talk to one another. The same vulnerabilities that show up in conventionally coded systems are present in AI systems. A few examples: basic code security practices will matter just as much when putting together AI/ML systems, and the risks that come with copying and pasting code from GitHub or pulling in open-source libraries are the same in AI/ML systems as in any other coded system. This is especially compounded when your ML engineer or data scientist shrugs their shoulders when asked to explain how and why the code works the way it does.
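To make that concrete, here is a minimal sketch of one such basic practice: checking the integrity of a third-party model artifact before deserializing it. This assumes a Python pipeline pulling in a pretrained model file from outside your control; the path and expected hash below are placeholders, not anything from the conference.

```python
import hashlib
from pathlib import Path

# Hypothetical artifact pulled from an external registry or a copied repo.
MODEL_PATH = Path("models/pretrained_classifier.pkl")
# Expected SHA-256, published by whoever produced the artifact (assumed known).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model files don't blow up memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    # Refuse to deserialize anything that doesn't match what we expect;
    # pickle files execute arbitrary code on load, so trust has to come first.
    raise RuntimeError(f"Integrity check failed for {MODEL_PATH}")
```

Nothing exotic here, and that is the point: the same hygiene we already expect for any dependency applies to model files and notebooks too.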

What are cybersecurity requirements to protect current AI/ML assets?

Let’s talk about the time a man consulted Gmail’s autocomplete to make decisions for him rather than making them himself!
Gotcha, Google!
Speed up or slow down? Some questions worth asking:
  • Was the dataset curated properly? Unbiased? Properly divided into train, test and prod? (A minimal split-and-check sketch follows this list.)
  • Data bias? Is your team aware of any? Any weaknesses in the data from a features perspective?
  • Finally, has the team done due diligence to ensure every part of the ML pipeline is properly and easily visualized? Is model monitoring optimized so that every aspect of the system is quickly and easily understood?
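Here is the split-and-check sketch referenced above, assuming a tabular dataset in a pandas DataFrame with a binary `label` column; the file name, column name, and the 10% imbalance threshold are all placeholder assumptions, not prescriptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset with a binary "label" column.
df = pd.read_csv("data/training_set.csv")

# Stratified split so the label distribution is preserved in both partitions.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)

# Crude imbalance check: flag any class that makes up less than 10% of the
# training data so a human actually looks at it before training proceeds.
class_share = train_df["label"].value_counts(normalize=True)
for label, share in class_share.items():
    if share < 0.10:
        print(f"WARNING: class {label!r} is only {share:.1%} of training data")

# Log the split sizes and class shares somewhere the whole team can see them;
# the point is that the pipeline's state is visible, not buried in a notebook.
print(f"train={len(train_df)} test={len(test_df)}")
print(class_share.to_string())
```

A real pipeline would push those numbers into whatever monitoring the team already uses, but even printing them forces the questions above to get asked.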

What exists now and what capacity gaps do we need to address?

So… what exists now is basically all over the place. Each org is in a radically different place when it comes to ethical ML/AI practices, which mostly consist of absolutely nothing.

We are our own biggest threat here when it comes to gaps… and the majority of the threat comes from culture. A culture of not practicing security and ethics in our technology.

And now that I have rebuked everyone who is responsible for building, maintaining, selling and creating AI products, I will also acknowledge that there isn’t much in the way of technical literature or solutions around this space. Securing AI systems is probably somewhere around 5 years out in terms of building scalable solutions and having them be profitable on the market. The reasons behind this assessment mostly point to the still very nascent nature of even the most basic applied ML use cases. ML/AI-based technologies and solutions are just still very new. And in proper back-assward-ness, we only start considering securing code and product… well, after a SolarWinds-like incident, of course. We haven’t yet seen that kind of large-scale, high-impact attack on AI systems.

Which of these are research challenges the science and technology base can attack?

There are two distinct roles I see being played in this space by the usual culprits: academia/research and applied practitioners.
