Adobe Global Chief Privacy And Cybersecurity Legal Officer Nubiaa Shabaka On Trust, Data Security And AI


While cybersecurity and privacy are two different aspects of dealing with data, Nubiaa Shabaka says they fit well together. She’s a vice president at Adobe, serving as the company’s global chief privacy and cybersecurity legal officer. I talked with her about these two aspects of technology, the changes AI is bringing to the way business is done, and what the future holds.

This conversation has been edited for length, continuity and clarity. An excerpt was in Thursday’s Forbes CIO newsletter.

Tell me about your role at Adobe.

Shabaka: I am the chief privacy officer, and what that means is I lead privacy legal as well as the broader privacy function. Privacy legal does things like give advice, monitor and track the law, and help draft contracts, all kinds of legal things. I also lead privacy compliance, which helps with compliance and training, and with policies and standards. I also lead a component of privacy engineering and operations to make sure we have a coordinated effort among the many, many engineers in the various groups. If it’s a privacy-specific tool, or something we need to do from a structural engineering perspective, we have experts who can work to make sure we’re all coordinated, so we don’t have each group doing its own thing and wasting resources. I then also lead cyber legal, where we work as the main coverage attorneys for our chief security officer, who handles all things security. We advise on all of those cyber laws and policies, their training, incidents, and contract language on cybersecurity, whether it’s with our vendors or our customers. So it runs the gamut.

And in all of these areas, we help our government relations team [on] various public policy initiatives in cybersecurity and privacy. Anything that touches upon cybersecurity or privacy, whether it’s internal team infrastructure (our CIO office, IT or security) or any of the business units, our digital media or DX business: if it’s privacy and security, we are happy to be involved.

How do you see privacy and security fitting together? Why are they a good pair to have under one person?

I actually was so excited when I was interviewing with our chief trust officer a couple of years ago to understand their structure, because I’ve always had a really attached-at-the-hip relationship with the chief security officer. I’ve been a chief cyber legal officer and the head of privacy or chief privacy officer for many years. It’s so important that they be so connected, because you’re able to look over the horizon. You’re able to plan, to protect Adobe in the most streamlined and efficient manner, and to give the most trustworthy products to our customers. Being so integrated also means weekly meetings, daily chats, and being invited to each other’s team off-sites and things of that nature.

It’s really all in the construct of what trust is. You need security, you need privacy. And it all is just so interconnected. If you think about personal information, of course that’s privacy. And security is the assets, the infrastructure and potentially proprietary material and IP, but you cannot have privacy without security. Privacy is a huge part, of course, of security. They’re very, very interconnected, and folks who pull it together, who have that close relationship, I think deliver better value, not only internally but also to their customers, because of the trust that we’re able to build.

How does AI impact privacy and security?

My whole title is chief privacy and cybersecurity legal officer. I think about my role as those two things, plus two others. I also co-lead data governance with our chief information officer and our chief security officer, bringing in all of the applicable business units and risk folks to run data governance. I am also a core member of our AI governance, a cross-functional team that has folks from our IT department, our security department, my team, and other folks in legal and in strategy, coming together to govern AI. Privacy and security are just key [to] AI at Adobe.

How privacy and security are impacted by AI, ooh, the stories you can tell. Good folks and even bad actors are able to be so much more sophisticated, from both a proactive and a reactive standpoint. AI is here. We really need to incorporate it in a privacy- and security-conscious fashion, but when done correctly it will allow us to be more privacy-centric and more security-focused, to counterbalance the bad actors. Having security and privacy sit at the table in AI governance really keeps them top of mind when we roll out AI, to make sure rollouts have the appropriate data protection aspects. And any privacy-related tools we roll out always have personal privacy considerations in mind, to make sure we have the appropriate balancing act and impact assessment.

When done correctly, how can AI make these tools more privacy-centric?

Using AI and various privacy-enhancing technologies, you can build components where you do not always need a human to look at the data, a human who might then use it for purposes beyond the initially designed rationale. You can build code that allows very objective reviews, so long as you’re constantly checking to make sure you [do] not have a bias.

AI can go through a lot more data, but if you use synthetic data, and if you follow the principles of data minimization, using hashing and various other technologies that let the AI learn without actually having the sensitive data, you can get the same improvements in privacy and security without exposing the specific underlying elements.
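As a rough sketch of the data-minimization idea described here, and not Adobe’s actual tooling, one common pattern is pseudonymization: replacing direct identifiers with keyed (salted) hashes before data reaches an analytics or training pipeline. The key, field names and record below are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical illustration of data minimization: direct identifiers
# are replaced with keyed hashes, so downstream systems can still
# join and count records without ever seeing the raw email address.
SECRET_KEY = b"rotate-me-regularly"  # assumption: kept in a KMS in practice

def pseudonymize(value: str) -> str:
    """Return a stable, keyed hash of a sensitive field."""
    return hmac.new(SECRET_KEY, value.lower().encode(), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "clicks": 42}
minimized = {"user_id": pseudonymize(record["email"]), "clicks": record["clicks"]}
# The raw email never enters the downstream system.
```

A keyed hash (HMAC) rather than a bare hash matters here: identifiers like emails are low-entropy, so an unkeyed hash could be reversed with a dictionary attack.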

As you’re working to improve privacy, do you see issues related to AI on the horizon, not just at Adobe but in general, that could put privacy at risk?

I do think that if privacy and security are not done right, you certainly have an increased amount of data that could run into pitfalls. Talking about what Adobe does to ensure we do it correctly: ART is our acronym, for accountability, responsibility and transparency. Having that cross-functional team, and having privacy impact assessments and ethics impact assessments when it comes to AI, we do in fact have the appropriate governance that puts customers first in terms of how we incorporate AI into our products.

I agree if you don’t have the skill set, or the trust mindset, and you just want to rush to market and don’t have those principles in place, I think certainly companies could fall into trouble.

With data governance in general, what are some of the bigger issues in privacy you’re seeing right now?

As the world evolves, all of these unique disciplines are converging. When I have discussions with many of my chief privacy officer colleagues at other companies, it’s interesting how chief privacy officers are often handling AI because there’s no one else at their company to do it. We have specific folks within Adobe who do AI ethics, and then we manage it cross-functionally.

If you think about what’s going on in the EU, they’re actually proposing and passing laws that treat all data the same as personal information, where you need to have the transparency and you need to have the impact assessments. Or you have security-related laws proposed and passed that apply not just when you have a security incident involving personal information, but to any security incident: you need to provide the appropriate transparency. In data governance, what I am seeing is the convergence of what used to be the unique disciplines of privacy, security, AI and data governance, whereby it’s all one and it’s all together. They’re very similar principles. It is really fortuitous to be at Adobe, where we have already brought those things together, which is where I think other companies will be going.

What about security? What are some of the bigger issues there, and how has security changed?

Bad actors in security get more sophisticated every year, not because they themselves got some advanced degree, but [because of] their networks and “security as a service,” the concept that even bad actors can outsource toolkits and hacking. It used to be that cybercriminals posed a certain level of threat, and nation-states posed a certain level of threat, but when you have this new era of cybercriminals learning from nation-states, or with nation-states on their side, you will see advancement and sophistication in some of these areas.

You then combine that with AI, and [consider] just the massive number of attacks that could happen in very short timeframes. That is why, if bad actors are using AI in cybersecurity, the good actors certainly need to respond in a reactive manner, and think ahead in a proactive manner to defend. What I’m seeing in security is just a continual advancement of all of these attacks and their sophistication. When you think about AI, recent examples, like a company paying $25 million based on a deepfake of its CFO, are very concerning and should be concerning to all companies: you can be fooled not only by voice but by image, where someone high up in your company appears to be authorizing you to do something.

What is Adobe doing to fight back against people using your tools for this kind of action?

Adobe is a leader in many of these respects. When it comes to deepfakes, for example, there’s our Content Authenticity Initiative with Content Credentials. Many companies, in the tech industry and otherwise, participate, and folks can use our tooling to indicate the [content’s] credentials: whether it’s original, how it’s been modified, how it was created.

You work in areas in which there are a lot of policymakers paying attention and talking about regulations, but there aren’t really a lot of regulations there. Where do you see things going in terms of AI and privacy regulations in the next year? What is Adobe doing to help these questions get worked out? What are your priorities in these areas?

I would say that there are plenty of regulations that are hard and fast on a sectoral basis. The U.S. has a different mindset than Europe. Europe has [General Data Protection Regulation] and large omnibus rules. The U.S. certainly has sectoral rules that have been around for a very long time. If you think back to 1999, the Gramm-Leach-Bliley Act. I come from a financial services background, and so there are hard-and-fast rules that exist. Now we have omnibus state laws that have somewhat mimicked GDPR, and are different in many ways. We are up to 15 U.S. omnibus state laws, which are hard and fast rules.

The U.S. has been a leader on the security side, starting with the NIST Cybersecurity Framework, which is optional. Now they are proposing to implement certain items in the U.S. as hard-and-fast rules. But we certainly have cyber and personal information breach laws in every jurisdiction in the U.S., which is a mitigating factor, right? You don’t want to have to report breaches, so you put appropriate safeguards in place to counterbalance your reporting. We have the new SEC cyber-disclosure rules that came into effect in December of 2023, which are a hard-and-fast rule for disclosure. It is, in fact, a mechanism to have and support appropriate cybersecurity controls.

I do think, even with the rules we have in place, multinational companies can forget how many rules there are just in the U.S. Adobe is a multinational company, and we have to abide by a plethora of rules throughout the Americas, Asia and EMEA. We don’t believe in playing whack-a-mole, following rule by rule and law by law. We look over the horizon to have an appropriate privacy program and cyber program that we stand behind.

On the security side, we have our proprietary Common Controls framework that maps to the various laws around the world from a technical perspective on the privacy and security angle. Adobe’s vision is to set our high water mark, which typically will address any rule, any law that comes into play. That’s how we abide by having the appropriate privacy and security controls.

What I see coming is a continuing move toward a global standard. We are excited about the potential proposed federal privacy law here in the U.S., and encourage governments around the world to continue to work together to promote cross-border data transfers, innovation and technology, while of course balancing privacy and security. Adobe is happy to be part of those conversations, through trade associations and by directly meeting with regulators around the world, to help move in that direction, whereby we can have innovation and prosperity all together.

Where do you see AI technology going in the next year?

I’m excited about all of the announcements that Adobe has made in the AI space alone, not only with our proprietary Firefly, but also partnering with many of the other AI tools out there to provide our customers with the ability to use Adobe and our integrated platforms as a one-stop shop. There’s also the vision of integrating with other AIs, and how that can enhance Adobe’s systems as well. That’s what I see: more AIs coming together and being more interconnected.
