Following Reports of Safety Concerns at OpenAI, Luján Joins Group Demanding Answers on Safety, Transparency, Whistleblower Protections

Washington, D.C. – Following reports of safety and security concerns raised by employees, U.S. Senator Ben Ray Luján (D-N.M.) joined U.S. Senators Brian Schatz (D-Hawai’i), Peter Welch (D-Vt.), Mark Warner (D-Va.), and Angus King (I-Maine) in calling on OpenAI to honor its commitment to AI safety research and protect current and former employees who raise AI safety concerns.

“We write to you regarding recent reports about OpenAI’s safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns,” the Senators wrote in their letter to OpenAI CEO Sam Altman. “Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”

In their letter, the senators highlight recent reporting that OpenAI whistleblowers and former employees have sounded alarms about the company’s focus on ‘shiny products’ over safety and societal impacts, its deployment of AI systems without adequate safety review, and its insufficient cybersecurity. According to these reports, the company failed to honor its public commitment to allocate 20 percent of compute resources to AI safety, reassigned its long-term AI safety team members following staff resignations, and required departing employees to sign life-long non-disparagement agreements under threat of clawing back already-earned compensation. OpenAI has branded itself as a safety-conscious and responsible research organization, and the senators ask OpenAI to clarify whether its commitments on AI safety remain in effect and request that the company reform non-disparagement agreement practices that could disincentivize whistleblowers.

The full text of the letter to OpenAI can be found below:

Dear Mr. Altman,

We write to you regarding recent reports about OpenAI’s safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns. We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.

Safe and secure AI is widely viewed as vital to the nation’s economic competitiveness and geopolitical standing in the twenty-first century. Moreover, OpenAI is now partnering with the U.S. government and national security and defense agencies to develop cybersecurity tools to protect our nation’s critical infrastructure. National and economic security are among the most important responsibilities of the United States Government, and insecure or otherwise vulnerable AI systems are not acceptable.

Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies. The voluntary commitments that you and other leading AI companies made with the White House last year were an important step towards building this trust.

We therefore request the following information by August 13, 2024:

  1. Does OpenAI plan to honor its previous public commitment to dedicate 20 percent of its computing resources to research on AI safety?
    1. If so, describe the steps that OpenAI has taken, is taking, or will take to dedicate 20 percent of its computing resources to research on AI safety.
    2. If not, what percentage of computing resources is OpenAI dedicating to AI safety research?
  2. Can you confirm that your company will not enforce permanent non-disparagement agreements for current and former employees?
  3. Can you further commit to removing any other provisions from employment agreements that could be used to penalize employees who publicly raise concerns about company practices, such as the ability to prevent employees from selling their equity in private “tender offer” events?
    1. If not, please explain why, and describe any internal protections in place to ensure that these provisions are not used to financially disincentivize whistleblowers.
  4. Does OpenAI have procedures in place for employees to raise concerns about cybersecurity and safety? How are those concerns addressed once they are raised?
    1. Have OpenAI employees raised concerns about the company’s cybersecurity practices?
  5. What security and cybersecurity protocols does OpenAI have in place, or plan to put in place, to prevent malicious actors or foreign adversaries from stealing an AI model, research, or intellectual property from OpenAI?
  6. The OpenAI Supplier Code of Conduct requires your suppliers to implement strict non-retaliation policies and provide whistleblower channels for reporting concerns without fear of reprisal. Does OpenAI itself follow these practices?
    1. If yes, describe OpenAI’s non-retaliation policies and whistleblower reporting channels, and to whom those channels report.
  7. Does OpenAI allow independent experts to test and assess the safety and security of OpenAI’s systems pre-release?
  8. Does the company currently plan to involve independent experts on safe and responsible AI development in its safety and security testing and evaluation processes, procedures, and techniques, and in its governance structure, such as in its safety and security committee?
  9. Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?
  10. What are OpenAI’s post-release monitoring practices? What patterns of misuse and safety risks have your teams observed after the deployment of your most recently released large language models? What scale must such risks reach for your monitoring practices to be highly likely to catch them? Please share your learnings from post-deployment measurements and the steps taken to incorporate them into improving your policies, systems, and model updates.
  11. Do you plan to make retrospective impact assessments of your already-deployed models available to the public?
  12. Please provide documentation on how OpenAI plans to meet its voluntary safety and security commitments to the Biden-Harris administration.

Thank you very much for your attention to these matters.

Sincerely,


###
