Google has launched its Secure AI Framework (SAIF) to help improve the security of organisations using generative AI. While the tech giant’s release of its roadmap is to counter existential AI threats, for now it aims to prepare organisations to put in place fundamental cybersecurity practices to protect against realistic threats. SAIF’s six core elements expand organisations’ current security frameworks to include AI threats, integrate AI into their defence, promote uniformity in AI-related control frameworks, and constantly evaluate, inspect, and battle-test AI applications.
One emerging threat in generative AI applications such as ChatGPT, is “prompt injections”, a form of exploitation from an external source. Malicious command waits in a place of innocent text waiting for the AI to scan it, and then changes the nature of the command given to the AI, similar to hiding a sinister, mind-control spell in teleprompter text. Google hopes to help reduce these new types of threats and others including “stealing the model”, constructing prompts that extract confidential verbatim text that was used to train a model, and data poisoning.
Google hopes that this framework will aid organisations in taking care of the fundamentals in protecting their systems as they advance towards AI-generated risk. Currently, it seems as though Google wants organisations to focus on elementary cybersecurity protocols around AI, with a view of sustaining their resources against sophisticated threats in the future.
The release of the Secure AI Framework by Google looks to tackle possible threats arising from the increased adoption of generative AI, aimed at expanding existing security practices to include AI-related attacks. The National Institute of Standards and Technology (NIST) released a similar cybersecurity framework in 2014, which sets the gold standard for organisations to protect against cyberattacks.
While it’s uncertain whether Google’s SAIF will be adopted as a standard, Google has taken a step towards improving data protection by leading in AI security instead of playing catch-up with its AI rivals, such as OpenAI.