The US Space Force has temporarily banned its employees from using generative AI tools while it works to protect government data, according to reports.
Space Force members have been told that they are “not authorized” to use web-based generative AI tools – to create text, images and other media – unless specifically approved, according to an October 12 Bloomberg report citing a memo sent to the Guardian workforce on September 29.
Generative AI “will undoubtedly revolutionize our workforce and enhance Guardians’ ability to act quickly,” Lisa Costa, Space Force deputy chief of space operations for technology and innovation, reportedly said in the memo.
However, Costa cited concerns over current cybersecurity and data handling standards, explaining that the adoption of AI and large language models (LLMs) needs to be more “responsible.”
The United States Space Force is the space service branch of the U.S. Armed Forces, charged with protecting the interests of the United States and its allies in space.
The U.S. Space Force has temporarily banned the use of web-based generative artificial intelligence tools and the so-called large language models that power them, citing data security and other concerns, according to a memo seen by Bloomberg News. https://t.co/Rgy3q8SDCS
– Katrina Manson (@KatrinaManson) October 11, 2023
The Space Force’s decision has already affected at least 500 individuals who use a generative AI platform called Ask Sage, according to Bloomberg, citing comments from Nicolas Chaillan, a former chief software officer for the Air Force and the U.S. Space Force.
Chaillan reportedly criticized the Space Force’s decision. “Clearly, this is going to put us years behind China,” he reportedly wrote in a September email to Costa and other senior defense officials.

“It’s a very short-sighted decision,” Chaillan added.

Chaillan pointed out that the CIA and its departments have developed their own generative AI tools that meet data security standards.
Related: Data protection in AI chat: Does ChatGPT comply with GDPR standards?
Concerns that LLMs could leak private information to the public have spooked some governments in recent months.
Italy temporarily blocked the AI chatbot ChatGPT in March, citing suspected violations of data privacy rules, before reversing its decision about a month later.
Tech giants such as Apple, Amazon and Samsung have also banned or restricted employees from using ChatGPT-like AI tools at work.
Magazine: Musk’s alleged price gouging, Satoshi AI chatbot and more