AI Adventurers

Join our vibrant community of AI enthusiasts driven by curiosity and passion, exploring the latest advancements, applications, and innovations in artificial intelligence.

nlafakis
Top Contributor | Elite Partner

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

In the world of 'not shocking news,' the recent formation of OpenAI's Safety and Security Committee has been met with both praise and skepticism, as concerns about potential conflicts of interest and a lack of transparency come to light. The inclusion of Sam Altman, a member of the company's Board of Directors, on the committee has raised questions about its ability to provide objective recommendations.

These concerns are not new. Former OpenAI board member Helen Toner recently shared her experiences on a podcast, revealing that part of the problem that led to Altman's ouster in November stemmed from the lack of communication and transparency surrounding the release of ChatGPT. According to Toner, the board had to find out about the release on Twitter rather than being informed by Altman or the company beforehand.

This revelation has led to speculation about the reasons behind Altman's actions. Some believe that Altman may have been withholding information and conducting work behind closed doors, while others suspect there was an internal push to monetize GPT rather than continue the research-first approach that OpenAI had initially committed to.

The formation of the Safety and Security Committee, which includes Altman, has only amplified these concerns. Critics argue that having Altman involved in both the input and output decisions of the company's generative models could lead to a further lack of transparency and accountability, especially given the allegations about his past behavior. As OpenAI stands on the brink of developing AGI with its upcoming frontier model, the need for transparent, objective, and accountable oversight is more pressing than ever. The fact that it has taken until this critical point for the public to learn about the internal issues at OpenAI is a cause for concern, and some fear it may be too late to course-correct.
To address these issues, experts recommend that OpenAI consider appointing an entirely independent committee, free from any potential conflicts of interest, to oversee the development and implementation of safety and security measures. Additionally, the company must commit to increased transparency and open communication with its board, stakeholders, and the public to rebuild trust and ensure responsible AI development.

As the AI industry continues to evolve at a rapid pace, it is crucial that companies like OpenAI prioritize transparency, accountability, and ethical considerations to ensure that the development of powerful AI systems benefits humanity as a whole. We can't just keep relying on one company (Anthropic) to be the most responsible of them all, especially when it's not the frontrunner. --- This post was migrated from connect.com and was originally published at an earlier date.
0 Upvotes
5 Replies
nlafakis
Top Contributor | Elite Partner

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

Hands down!!! They should have been WAY more involved, but it's clear that they either didn't have the time or simply didn't want to spend it, and hoped Sam was doing the right thing... which he thinks he is... and he clearly isn't. --- This post was migrated from connect.com and was originally published at an earlier date.
0 Upvotes
JP6000
Member

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

Agreed @Nico, and yet I don't think anyone understood at the point of release what ChatGPT's impact was going to be. Which leads me to ask: how involved should a private-sector board be at the operational level? My experience has been that when a board gets heavily involved with an organization on an operational basis, the org's performance suffers. Staying in lanes and all that. Guidance and governance are all fair game. But a simple chat app that was originally intended as a research preview, when their model had already been out for some time? I can see why they didn't feel the need, at that time, to roll out the PR engine across internal comms, the board, the tech press, and a global audience. All that to say: it makes for a good soundbite *today* to say that, as a board member, you found out about ChatGPT on Twitter, but to me it makes sense how that happened, and it's not shocking. --- This post was migrated from connect.com and was originally published at an earlier date.
0 Upvotes
nlafakis
Top Contributor | Elite Partner

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

The model was available via API well before it was available on the web, but there is a major difference between a few thousand people having access to an API that hardly anyone else globally knows about, versus the entire world having access to the model through a web window, to the point where whole countries didn't have time to react and just started shutting down access to it. I'm fairly sure she was referring to the latter when she said they didn't know it had been released yet. I think it's safe to assume she meant released to the public. --- This post was migrated from connect.com and was originally published at an earlier date.
0 Upvotes
JP6000
Member

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

I think it's important to recognize there are usually three sides to a story. After listening to the clip, it sounds to me like miscommunication was at the heart of the issue. If we cast back to 2022, GPT-3 had been out for some time; I worked with the API around the summer of that year. Also, when ChatGPT was first released, no one expected it to become as popular as it did. In terms of safety, the GPT models are heavily guardrailed, and some have argued the guardrails have gone so far that answers are now so safe it's become unsafe. All that to say, @Victor Becerra, I don't know if the company needs to rebuild trust. I trust them about as much as any other tech firm, but I do think that true safety with AI starts with open source. --- This post was migrated from connect.com and was originally published at an earlier date.
0 Upvotes
Victor_Becerra
Community Manager

OpenAI's Safety and Security Committee Formation Shows A Massive Lack of Transparency

This is a crazy coincidence, Nico Lafakis! I was also just reading about Helen Toner discussing the real story behind Sam Altman's firing at OpenAI. https://x.com/bilawalsidhu/status/1795534345345618298 Renaud Delaquis Jason Nash Claire Bouvier Jeff Price What strategies could OpenAI implement to rebuild trust? --- This post was migrated from connect.com and was originally published at an earlier date.

0 Upvotes