
Democrats Press DOGE for Answers on AI Use

Democrats on the House Oversight Committee demanded information Wednesday morning from federal agency leaders about plans to install AI software across federal agencies as the government workforce continues to be cut.

The inquiry follows recent reporting by Wired and The Washington Post on efforts by Elon Musk's so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and to access sensitive data.

“The American people entrust the federal government with sensitive personal information related to their health, finances, and other biographical information on the understanding that it will not be disclosed or improperly used without their consent,” the letters read, “including through the use of unapproved and unaccountable third-party AI software.”

The requests, first obtained by Wired, were signed by Democratic Congressman Gerald Connolly of Virginia.

The core purpose of the requests is to press the agencies to certify that any potential use of AI is legal and that steps are being taken to safeguard Americans’ private data. The Democrats also want to know whether any use of AI would financially benefit Musk, who founded xAI and whose struggling electric-car company, Tesla, is working to pivot toward robotics and AI. Connolly said the Democrats further fear that Musk could be using his access to sensitive government data to “enhance” his own proprietary AI model, Grok.

In the requests, Connolly notes that federal agencies are “bound by multiple statutory requirements in their use of AI software,” pointing chiefly to the Federal Risk and Authorization Management Program, which works to standardize the government’s approach to cloud services and ensure that AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to “prepare and maintain an inventory of the artificial intelligence use cases of the agency” and to make agency inventories available to the public.

Documents obtained by Wired last week show that DOGE operatives have deployed a proprietary chatbot called GSAi to about 1,500 federal workers at the General Services Administration. The GSA oversees federal government properties and supplies information technology services to many agencies.

A memo obtained by Wired reporters shows that employees have been warned against feeding the software any controlled unclassified information. Other agencies, including the Treasury Department and the Department of Health and Human Services, are considering using a chatbot, though not necessarily GSAi, according to documents viewed by Wired.

Wired also reports that the U.S. Army is currently using software called CamoGPT to scan its records systems for references to diversity, equity, inclusion, and accessibility. An Army spokesperson confirmed the tool’s existence but declined to provide further information on how the Army plans to use it.

In the requests, Connolly writes that the Department of Education holds personally identifiable information on 43 million people tied to the federal student aid program. “Given the opaque and frenetic pace at which DOGE is operating, I am deeply concerned that the sensitive information of students, parents, spouses, family members, and all other borrowers is being handled by secretive members of the DOGE team for unclear purposes and with no safeguards to prevent disclosure or improper use,” he wrote. The Washington Post previously reported that DOGE had begun pulling sensitive federal data from the Department of Education’s record systems to analyze the agency’s spending.

Education Secretary Linda McMahon said Tuesday that she is planning to fire more than a thousand workers from the department, who will join the hundreds of others who accepted buyouts last month. The Department of Education has now lost nearly half of its workforce, a first step in McMahon’s plan to abolish the agency entirely.

“The use of AI to evaluate sensitive data is fraught with serious hazards,” Connolly warns. “The inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret AI recommendations, among other problems.”

He adds: “Without clear purpose behind the use of AI, guardrails to ensure appropriate handling of data, and adequate oversight and transparency, the application of AI is dangerous and potentially violates federal law.”
