Generative AI

Why US intelligence agencies are wary of generative AI – Firstpost


US intelligence and defence officials were experimenting with the technology years before OpenAI’s ChatGPT set off the current generative AI marketing frenzy.

The AI revolution is here and everyone is afraid of getting left behind. And we do mean everyone.

US intelligence agencies are among those looking to join the AI revolution as data becomes the ubiquitous currency and competitors seek any advantage.

But there are plenty of problems.

The technology, for one, is still in its infancy. Officials also know that it is often unreliable.

‘Needle in haystack’

Years before OpenAI’s ChatGPT set off the current generative AI marketing frenzy, US intelligence and defence officials were experimenting with the technology.

One contractor, Rhombus Power, used it to uncover fentanyl trafficking in China in 2019 at rates far exceeding human-only analysis.

Rhombus would later predict Russia’s full-scale invasion of Ukraine four months in advance with 80 per cent certainty.

CIA director William Burns recently wrote in Foreign Affairs that US intelligence requires “sophisticated artificial intelligence models that can digest mammoth amounts of open-source and clandestinely acquired information.”

But the agency’s inaugural chief technology officer, Nand Mulchandani, cautions that because generative AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of incredible insight but also bias-prone fibbers.

There are also security and privacy issues. Adversaries could steal and poison the models. They may contain sensitive personal data that agents aren't authorised to see.

Gen AI is mostly good as a virtual assistant, says Mulchandani, looking for “the needle in the needle stack.”

What it won’t ever do, officials insist, is replace human analysts.

While officials won’t say whether they are using generative AI for anything big on classified networks, thousands of analysts across the 18 US intelligence agencies now use a CIA-developed generative AI called Osiris.

It ingests unclassified and publicly or commercially available data — what’s known as open-source — and writes annotated summaries. It includes a chatbot so analysts can ask follow-up questions.

Osiris uses multiple commercial AI models. Mulchandani said the agency is not committing to any single model or tech vendor. “It’s still early days,” he said.

Experts believe predictive analysis, war-gaming and scenario brainstorming will be among generative AI’s most important uses for intel workers.

Machine learning

Even before generative AI, intel agencies were using machine learning and algorithms. One use case: Alerting analysts during off hours to potentially important developments. An analyst could instruct an AI to ring their phone no matter the hour.

It couldn't describe what happened (that would be classified) but could say, "you need to come in and look at this."

AI bigshots vying for US intelligence agency business include Microsoft, which announced on 7 May that it was offering OpenAI’s GPT-4 for top-secret networks, though the product is not yet accredited on classified networks.


A competitor, Primer AI, lists two intelligence agencies among its customers, documents posted online for recent military AI workshops show. One Primer product is designed to “detect emerging signals of breaking events” using AI-powered searches of more than 60,000 news and social media sources in 100 languages including Twitter, Telegram, Reddit and Discord.

Like Rhombus Power’s product, it helps analysts identify key people, organisations and locations and also uses computer vision. At a demo just days after the 7 October Hamas attack on Israel, Primer executives described how their technology separates fact from fiction in the flood of online information from West Asia.

White House worried

The most important near-term AI challenges for US intelligence officials are apt to be counteracting how adversaries use it: To pierce US defences, spread disinformation and attempt to undermine Washington’s ability to read their intent and capabilities.

The White House is also concerned that generative AI models adopted by US agencies could be infiltrated and poisoned.

Another worry: Ensuring the privacy of people whose personal data may be embedded in an AI model. Authorities say it is not currently possible to guarantee that’s all removed from an AI model.

That’s one reason the intelligence community is not in “move-fast-and-break-things” mode on generative AI, says John Beieler, the top AI official at the Office of the Director of National Intelligence.

Model integrity and security are a concern if government agencies end up using AIs to explore bio- and cyberweapons tech.

How AI gets adopted will vary widely by intelligence agency according to mission. The National Security Agency mostly intercepts communications. The National Geospatial-Intelligence Agency (NGA) is charged with seeing and understanding every inch of the planet.

Supercharging those missions with Gen AI is a priority — and much less complicated than, say, how the FBI might use the technology given its legal limitations on domestic surveillance.

The NGA issued in December a request for proposals for a completely new type of AI model that would use imagery it collects (from satellites and from ground-level sensors) to harvest precise geospatial intel with simple voice or text prompts. Gen AI applications also make a lot of sense for cyberconflict.

The human element

Generative AI won’t easily match wits with rival masters of deception.

Analysts work with "incomplete, ambiguous, often contradictory snippets of partial, unreliable information," notes Zachery Tyson Brown, a former defence intelligence officer. He believes intel agencies will invite disaster if they embrace generative AI too enthusiastically, swiftly or completely. The models don't reason. They merely predict. And their designers can't entirely explain how they work.

Linda Weissgold, a former CIA deputy director of analysis, doesn’t see AI replacing human analysts any time soon.

Quick decisions are often required based on incomplete data.

Intelligence “customers’’ – the most important being the president of the United States — want human insight and experience central to the decision options they’re offered, she says.

“I don’t think it will ever be acceptable to some president for the intelligence community to come in and say, ‘I don’t know, the black box just told me so.’”

With inputs from agencies
