Taking a Closer Look at AI Governance and Cybersecurity
Philip Morris International’s Ray Ellis Shares Strategies for Secure AI Integration
As organizations increasingly integrate AI into their operations, robust cybersecurity governance frameworks are imperative. Ray Ellis, head of AI security at Philip Morris International, recommends capturing requirements for securing AI capabilities, protecting privacy, understanding legal implications and building an enterprise architecture that prevents shadow AI.
Ellis underscored the need for a top-down approach to ensure comprehensive understanding and implementation of security measures in AI initiatives.
“From a security point of view, we’re looking at where your data is and what is the classification of data?” he said. “Do you know who’s going to access the data? Do you know who is going to upload the data, what model you’re going to use, how it’s going to be used? Do you have a human in the loop? Are you checking for hallucinations?”
In this video interview with Information Security Media Group at Infosecurity Europe 2024, Ellis also discussed:
- Essential elements of AI governance in cybersecurity;
- Challenges and solutions in integrating AI securely;
- Future cybersecurity trends and implications for defenders.
Ellis is responsible for designing advanced security solutions tailored to AI systems. He has more than 20 years of leadership experience in global information security architecture, with expertise in network security and ICT.