The challenge of securing AI as it constantly changes
Launching a global AI safety coalition was no small task. One of the many challenges for the CSA, Armstrong says, was laying out a clear and concise vision: “The key was making sure the roadmap for the initiative could be understood and then fully supported by a diverse group of stakeholders.”
Another big challenge was AI’s unprecedented pace of evolution. The CSA needed to develop frameworks and tools—such as its AI Controls Matrix (AICM), with 18 domains and 243 control objectives—that would be current yet also forward-looking. This required continuous input from CSA’s research working groups and its executive leadership council, which includes cybersecurity executives from Sallie Mae, Procter & Gamble, Microsoft, and Anthropic, to name a few.
Adding to the challenge was the need to adapt guidance for different industries so that recommendations work equally well for financial services, healthcare, manufacturing, and other sectors. CSA’s CSO Strategic Advisory Council is currently developing these industry-specific AI safety guidelines, says Armstrong.