CIA building ChatGPT clone to enhance data-mining

Langley said it plans to use the tool to sift through open-source intelligence and share it with other agencies

The CIA is building an AI chatbot to enhance the US intelligence community’s surveillance capabilities, the agency’s Open Source Enterprise division head, Randy Nixon, revealed to Bloomberg in an interview on Tuesday.  

The tool will supposedly be used to improve access to and analysis of so-called open-source intelligence, a term that has expanded to include large volumes of smartphone-generated location information and other sensitive consumer data purchased from private marketplaces that sell only to governments. All information the bot provides will cite its original source, Nixon told the outlet, including answers to agents’ follow-up questions.

“Our collection can just continue to grow and grow with no limitations other than how much things cost,” Nixon boasted, arguing the tool would fill a critical unmet need. “We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” he continued.

“We have to find the needles in the needle field.”

Nixon hinted the tool would “soon” be available to all 18 US intelligence agencies, though policymakers and the public will be left on the outside. He insisted the CIA closely follows US privacy laws, which prohibit the agency from surveilling Americans inside the country, although that has not previously stopped it from running a domestic bulk data collection program similar to the NSA’s for at least a decade and allegedly concealing it from Congress.  

Nixon did not say whether any of the existing AI chatbots – all of which are known to have problems distinguishing fact from fiction – will serve as the basis for the agency’s proprietary version or whether one is being built from scratch. Despite the current chatbots’ well-documented tendency to hallucinate, he suggested that investigations should quickly become driven by the tool, explaining that it would move agents into a system “where the machines are pushing you the right information” and summarizing volumes of data too large for humans to work with effectively – and too large for the bot’s output to be properly checked.

The CIA already works closely with most of the Big Tech players, including Google and Microsoft, both of which released their own ChatGPT-type AI bots earlier this year. While ChatGPT developer OpenAI claims to have an ethical code prohibiting participation in “high risk” government operations, the company has stonewalled inquiries into alleged government partnerships, raising questions about how seriously that code is taken.

Since last month, a Defense Department task force has been seeking a way to weaponize large language models such as ChatGPT without committing obvious privacy violations.