Why Appian CEO Matt Calkins Says AI Regulations Need To Protect Data
When the EU Parliament approved the Artificial Intelligence Act last month, many in the U.S. felt that, once again, Europe was setting the de facto policy that will govern a sensitive new area for global tech companies. Appian founder and CEO Matt Calkins said the move means “the U.S. is literally behind” in AI policy, but that there are ways it can move forward. He talked about some of them with me.
A shorter version of this interview appeared in Thursday’s Forbes CIO newsletter. This transcript has been edited for brevity, clarity and continuity.
Why do you think the U.S. is behind the international community when it comes to AI regulations?
Calkins: When we do comparisons like this, we’re talking about the U.S. against Europe. When it comes to technology regulation, Europe has been assertive. You can see that in the regulations they recently put into effect on tech giants, all of them non-European. They’ve also chosen to be more forward in the way they regulate AI.
It’s not that the U.S. has chosen a more relaxed posture with regard to legislation and regulation, which I would have been supportive of. I think the U.S. is literally behind when it comes to AI regulation. Furthermore, the proposals the U.S. has made to the UN do nothing to change us being behind as a nation on AI regulation. They’re non-binding. They’re just directional. They’re statements of intent, full of vague terms, universal goals and nearly nothing that would constitute actual regulation. The documentation of aspirations is what I would call it.
I would say the U.S.’s surprising level of inaction, and even lack of ambition, when it comes to AI regulation is best explained by the highly influential tech giants in the U.S., who have steered the conversation toward the wrong concerns, toward things that are not truly the problem, and have structured a way of thinking about AI that protects their vulnerable points.
If you were to design the strategy for the U.S., what do you think is needed in AI regulations?
Calkins: The most important realization that we could have as a country about AI is that artificial intelligence is a function of data. What I mean by that is that everything you get out of artificial intelligence, you have to first put in with data. The algorithm is literally nothing without the data that informs it. It is utterly dependent upon the volume and the quality of that information in order to give you strong, meaningful, valuable output.
The first thing we should do is acknowledge the primacy of information. The substance of AI is data, and data therefore deserves to be protected, recognized as valuable and given its fair due. The number one issue with regard to AI today, worldwide and for regulation, should be protecting data, and honoring those who create and own it. Don’t allow them to be robbed by artificial intelligence.
Nothing’s happened on this front. The White House [executive order] from October had nothing. Not even an afterthought. The closest it came to touching on this issue was saying that if an AI algorithm trains on data that is yours, the data must be anonymized. That’s it. Not respect it, not pay you for it, just anonymize it.
When talking about regulations for AI, so much of the focus so far has been on privacy, national security-related information and preventing disinformation, or at least letting people know when they are looking at something created by AI. Where do these rank in importance for regulation?
Secondary. It’s of tremendous importance to politicians, and so it gets mentioned a lot. Misinformation comes up partly because deciding what speech should now be illegal is becoming our national sport. That’s an intriguing topic for politicians to talk about. I also think politicians realize that their reputation depends on their exposure, on the images and video clips by which they are known to their voters, and their vulnerability to a deepfake is obvious. But if we’re going to be frank about this, you can make a deepfake without AI. With a photograph or an image, you can show Donald Trump being roughly arrested in New York, or the Pope wearing a Balenciaga jacket. You could do all this without AI.
How about misinformation? If you’re falsifying someone’s voice, that should be against the law. If you’re falsifying a photo of someone, that should be against the law. But remember, when we make laws, we make laws for law-abiding people. Troll farms in Russia don’t follow our laws, and neither will North Korean hackers. When we make a law that says something like, if you put out an image that was made by AI, it must be digitally signed as AI, that actually solves about 5% of the problem.
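To make the signing idea concrete, here is a minimal sketch of what “digitally signed as AI” could mean mechanically. It is illustrative only: real provenance standards such as C2PA embed signed metadata using asymmetric keys, while this stand-in uses a simple keyed hash, and the key and function names here are hypothetical, not part of any proposed regulation.

```python
import hashlib
import hmac

# Hypothetical publisher key; a real provenance scheme (e.g., C2PA)
# would use an asymmetric key pair and embedded metadata instead.
SIGNING_KEY = b"publisher-private-key"

def sign_as_ai(image_bytes: bytes) -> str:
    """Tag an image as AI-generated: a MAC over an 'AI' label plus the bytes."""
    return hmac.new(SIGNING_KEY, b"generated-by-AI:" + image_bytes,
                    hashlib.sha256).hexdigest()

def verify_ai_label(image_bytes: bytes, tag: str) -> bool:
    """Verification only succeeds if the publisher chose to sign in the first place."""
    return hmac.compare_digest(tag, sign_as_ai(image_bytes))
```

The limitation is visible in the code itself: the label exists only if the publisher calls the signing step, which is exactly the 5% point above. A troll farm simply never signs.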
What about AI surveillance, which is prohibited under the EU’s law?
I do believe these constitute an invasion of privacy, and there are real considerations here. On the other hand, I think that those entities most likely to do such a thing will not follow the law, and so we can’t rely on legislation alone to protect us. In some cases, we have to deny or prevent the existence of the data. If you don’t want people to be surveilled on a day-to-day basis as they walk around your country, you could make a law against using AI on the data that’s collected, or you could just take the cameras down. There are Western nations with a whole lot of cameras up, and so I think we’ve got to ask ourselves: Is that the society we want? Do we want a society in which there’s a tremendous amount of data available about people, and what their mood is, and where they went on any given day? It’s a privacy versus safety question, and it definitely puts a lot of power in the hands of the government, and I think that reasonable people will disagree as to whether it’s worth it.
There are some things where I think we should absolutely just try, as a civilization, not to have the data. Like intentionally making a virus more lethal to humans, and studying what DNA sequence could create the highest lethality. There will never be a law good enough to have that data and yet be sure that it’s never used by any bad actor anywhere in the world to create a lethal virus. I think we know how powerful data can be. We need to look in the mirror and say there’s some data we simply don’t want. If we look forward to a world where AI could make a virus, or AI could make a nanorobot that hurts people, or something like that, there are just certain areas where we should say: just don’t do that research.
There are many policies in the works that may drastically change in direction depending on the result of November’s election. Do you think AI policy is one of them?
I think that speech is becoming a political football, such that each side has speech to which it objects and feels empowered to use its political power to stifle it. This, in my opinion, is the wrong direction. We need to be protecting the right to express ourselves, and the right of those who have created content for our society to be the beneficiaries of that content. Fear is going to lead you to think that certain speech has to be diminished. Both sides, I think, have less regard for the right of the other side to speak than I’ve seen in my lifetime in this country.
What I would encourage is for us to have an approach to AI that isn’t about what you cannot say, because honestly that sounds like China’s approach to AI. Ours should be: AI is a tool. Take responsibility for what you do with it. AI is not an individual actor. It’s controlled by individuals, and so the organization that puts forward AI, that uses AI, is responsible for the output of that AI. I don’t see any political party championing this.
What do you think is needed to get the regulatory movement going in the right direction?
I know that there are people in Congress who regret how slowly they moved on social media. Though the U.S. has been extremely slow so far on AI, I don’t think it will necessarily remain so. Perhaps once Europe moves, or once more issues come up for us, or once we break out of the current gridlock. Maybe when we see a greater commercial impact, or when someone violates ownership rights in a really egregious way, which should happen any day now because the door’s wide open. That will probably scare legislators into taking action. We may well see an event in the next year that causes the U.S. to move from laggard to pioneer.
It could be one party or the other having control; a lack of checks and balances would lead to quick action. Or an AI breach, by which I mean something offensive done with AI. It could be a spoofing of a likeness that becomes very influential. It could be an intellectual property rip-off on a grand scale that people find obviously objectionable. Or it could be something exploitative done to a public personality, beyond what’s been done before, that offends people. I think that AI has plenty of capacity to offend us. It will offend us. And so we’re going to have some shocks coming up soon that I think will create public momentum for putting boundaries on the technology.
You lead a company that uses a lot of AI, but as we’ve been discussing, there are no regulations. How do you abide by the types of regulations that you would like to see on the technology?
I talk to a lot of CIOs, and CIOs almost universally do not wish to share their data in order to benefit from AI. Appian has focused on something that I’ve been calling private AI. Our quest has been to provide the magic of large language models, without divulging any information to an outside entity. No training whatsoever, and using specific AI techniques which depend upon a highly connected database on the enterprise side, owned by the corporation. If you have a good enough database, it’s possible to get great results out of AI by simply finding the pertinent information, and sending the pertinent information to a large language model along with the question.
You can join the crowd and just do a bulk upload, but our customers don’t want to do that. They want something surgical. They want the magic of modern AI without having to share their information, and it is possible. You just have to be very good at databases.
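As a rough illustration of the retrieval pattern Calkins describes, here is a minimal sketch: pull only the pertinent records from a database the company owns, then send just that context to a language model, with no bulk upload and no training on the data. All names here (search_enterprise_db, call_llm, answer_privately) are hypothetical placeholders, not Appian’s actual product or APIs, and the keyword search stands in for whatever relevance mechanism a real system would use.

```python
# Sketch of the "private AI" pattern: surgical retrieval from an in-house
# database, then a prompt containing only the pertinent lines.

def search_enterprise_db(question: str, records: list[str], top_k: int = 3) -> list[str]:
    """Naive keyword-overlap relevance search over in-house records."""
    terms = set(question.lower().split())
    scored = [(sum(t in rec.lower() for t in terms), rec) for rec in records]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # most relevant first
    return [rec for score, rec in scored[:top_k] if score > 0]

def call_llm(prompt: str) -> str:
    """Stub for a request to an external LLM; only the prompt ever leaves."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer_privately(question: str, records: list[str]) -> str:
    context = search_enterprise_db(question, records)    # surgical, not bulk
    prompt = ("Answer using only the context below.\n\n"
              + "\n".join(f"- {c}" for c in context)
              + f"\n\nQuestion: {question}")
    return call_llm(prompt)

if __name__ == "__main__":
    db = ["Invoice 1042 was paid on March 3.",
          "Contract renewal for Acme is due in June.",
          "The Q2 roadmap prioritizes the audit module."]
    print(answer_privately("When is the Acme contract renewal due?", db))
```

The design choice mirrors the interview: the database never leaves the enterprise, and only the handful of pertinent lines travel to the model alongside the question.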