In the world of "not shocking news," the recent formation of OpenAI's Safety and Security Committee has been met with both praise and skepticism, as concerns about potential conflicts of interest and a lack of transparency come to light. The inclusion of Sam Altman, who also sits on the company's Board of Directors, has raised questions about the committee's ability to provide objective recommendations.

These concerns are not new. Former OpenAI board member Helen Toner recently shared her experiences on a podcast, revealing that part of what led to Altman's removal as CEO in November 2023 was the lack of communication and transparency surrounding the release of ChatGPT. According to Toner, the board found out about the launch on Twitter rather than being informed by Altman or the company beforehand.

This revelation has fueled speculation about the reasons behind Altman's actions. Some believe he may have been withholding information and conducting work behind closed doors, while others suspect an internal push to monetize GPT rather than continue the research-first approach OpenAI had initially committed to.

The formation of the Safety and Security Committee, with Altman on it, has only amplified these concerns. Critics argue that having Altman involved in both the input and output decisions around the company's generative models could further erode transparency and accountability, especially given the allegations about his past conduct.

As OpenAI trains its next frontier model, which it frames as a step toward AGI, the need for transparent, objective, and accountable oversight is more pressing than ever. That it has taken until this critical point for the public to learn about OpenAI's internal issues is itself a cause for concern, and some fear it may already be too late to course-correct.

To address these issues, experts recommend that OpenAI appoint a fully independent committee, free from potential conflicts of interest, to oversee the development and implementation of safety and security measures. The company must also commit to greater transparency and open communication with its board, stakeholders, and the public to rebuild trust and ensure responsible AI development.

As the AI industry continues to evolve at a rapid pace, it is crucial that companies like OpenAI prioritize transparency, accountability, and ethical considerations so that the development of powerful AI systems benefits humanity as a whole. We can't keep relying on a single company (Anthropic) to be the most responsible of them all, especially when it isn't the frontrunner.