Cybersecurity has been an intense area of focus for the development of Artificial Intelligence (AI) capabilities, seen as an opportunity to address skills shortages, the overwhelming volumes of data involved, and the ever-changing threat landscape. Earlier this year, Pulse worked with Blackberry Cylance to host the AI 360 Roundtable, where 40 chief information security officers (CISOs) and other cybersecurity leaders gathered at the London Film Museum's Bond in Motion exhibit to examine the impact machine learning and AI are having on their work. The case histories shared on the day, and covered in the latest Pulse Innovation Report, revealed projects that are delivering promising results and showed that ambitions for the technology are high. However, there was also a healthy dose of cynicism, and a few barriers preventing companies from investing significantly.
I believe these barriers, which reveal more about the state of cybersecurity management than they do about the advancement of the technology itself, became the most useful insights to emerge on the day. Delegates shared experiences of competitive forces and siloed ambitions draining the research and development budget; a lack of business-level sponsorship; and poor visibility into the complex environments in which AI could be useful. Ironically, given the very complexity and volumes of data that cyber leaders are grappling with today, those already deploying the technology argued that companies need to take a leap of faith and deploy AI simply to build a picture of the value it could deliver.
The key technical concern, AI's 'black box syndrome', whereby the technology produces outcomes but does not explain the processes or logic behind them, continues to undermine security leaders' trust in the ability of machines to make the right decisions. It was interesting, however, to see this concern fade as the discussions developed:
“It shouldn’t be too surprising that a room full of technology risk and security leaders, trained to question and anticipate problems, would express a reticence to trust AI. Our group was particularly concerned that AI does not offer an explanation of its logic; it just produces results and it can be difficult to just accept them,” writes Blackberry Cylance’s Anton Grashion in the event’s official report. “Those of us who work with AI hear this a lot: We also find that people are unclear about what they want to have explained, or why they want an explanation. … the need for explanation became less of a concern as the discussion turned to examining the specific challenges to which AI could be deployed. With very narrow AI applications – to stop malware, for example – the explanation of what is happening was accepted. In other areas, the outcomes seem to negate the need to understand how they were reached.”
Overall, few in the room denied the inevitability of AI advancing to support complex analysis, as case histories reveal insights that have not been possible with human analysis alone. The group also acknowledged that this demonstrates a very real and current risk in relying on human analysis by itself.
Insights shared suggest AI presents a genuine opportunity to cope with the relentless pace of change, the volumes of unstructured data, and the growing complexity of corporate environments. But it is a technology that works in a highly contextual way, reflecting and evolving with the environment in which it operates. Corporate cybersecurity and risk managers won't be able to simply sit back and wait for the technology to mature; they'll have to invest the time, budget and creative thinking needed to play a more active role in its development.