What made OpenAI panic?

READ TIME - 1 min 17 seconds

Hey there, welcome to the Automated! 

Today marks the grand premiere of this incredible newsletter, and guess what? You can proudly boast about being an OG subscriber right from the get-go! 🎉

Get ready for a daily dose of AI awesomeness.

We'll serve up the hottest gossip and juiciest updates from the realm of artificial intelligence. Get ready to have your mind blown and your funny bone tickled.

Here’s what we’ve got for you today:

  • 🤯 Shocking Revelation: OpenAI's deleted discussion

  • ⚠️ EU calls for AI warning labels!

🤯 Shocking Revelation: OpenAI's Deleted Discussion

Last week, Sam Altman and 20 other developers sat down to discuss the future of OpenAI’s APIs and product plans.

But shortly after the post was released, it was deleted. 

Yes, you heard that right!

Apparently, OpenAI requested that it be taken down.

Unluckily for them, though, taking a post down isn't enough once it has gone online. 😥

Once something's out there, it's out there for good!

Now, the big question is – what was said that was so bad that OpenAI tried to sweep it under the rug?

To answer that, we have to dive into what Sam said in the initial post.

  • OpenAI is currently limited by GPU availability, which is delaying their short-term plans and has affected the reliability and speed of the API.

  • ChatGPT plugins are not expected to be released on the API soon, since plugins do not have a product-market fit (PMF) yet.

  • OpenAI will avoid competing with its customers. The vision for ChatGPT is to be a super smart assistant for work but there will be a lot of other GPT use cases that OpenAI won’t touch.

  • OpenAI is considering open-sourcing GPT-3 but has concerns about the capacity of individuals and companies to host and serve large language models (LLMs).

  • OpenAI’s internal data suggests the scaling laws for model performance continue to hold, and making models larger will continue to yield performance gains.

The best guess is that it has to do with this bit - "OpenAI will avoid competing with its customers”.

You can bet your bottom dollar that Microsoft had an issue with Sam saying they were not going to compete. 😡

Sam stated that quite a few developers were nervous about building with the OpenAI APIs when OpenAI might end up releasing products that are competitive with them.

But then he reassured them that OpenAI would not release more products beyond ChatGPT; instead, it would make the APIs better by being a customer of its own product.

If you don't understand why that statement is so powerful, let's paint a scenario to explain this:

Let's imagine a company launches "Psychologist AI" (which is essentially just GPT-4 prompted in the background), and this company suddenly experiences massive growth and starts generating mouth-watering revenue.

What's stopping OpenAI from just launching a "psychologist" version of ChatGPT and making it way cheaper for users?

Oh, and if you're thinking that's not possible, think again… Amazon did it back in the day.
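For the technically curious, that scenario is easy to make concrete: a "thin wrapper" product like this can be little more than a system prompt prepended to the user's messages before they're sent to a model API. Here's a minimal sketch (the "Psychologist AI" name, the prompt, and the `build_messages` helper are all invented for illustration):

```python
# Hypothetical sketch of a "thin wrapper" AI product: the entire app is a
# system prompt prepended to the user's chat history before the messages
# are sent to a model API. Names and prompts here are invented.

SYSTEM_PROMPT = (
    "You are Psychologist AI, a warm, supportive listener. "
    "Respond with empathy and gentle, practical suggestions."
)

def build_messages(history, user_message):
    """Wrap the conversation in the product's system prompt.

    `history` is a list of {"role": ..., "content": ...} dicts in the
    common chat-completions message format; the return value is what a
    wrapper would pass on to the underlying model API.
    """
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": user_message}]
    )

messages = build_messages([], "I've been feeling anxious about work lately.")
# The wrapper's only "secret sauce" is the system prompt at index 0 --
# which is exactly why the underlying model provider could replicate it.
```

The punchline: everything this hypothetical product adds sits in one string, so the model provider could ship the same experience themselves at a lower price.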

But hey, that's just speculation.

We’ll have to hear from the horse’s mouth (OpenAI) to know the actual reason, that’s assuming they ever give us any. 😒

⚠️ EU Calls for AI Warning Labels!

The EU is currently in a race to establish regulations for generative AI as it negotiates the AI Act, which is scheduled for a key vote in the European Parliament's plenary session next week.

But guess what?

Even if a final version of the AI Act is agreed upon by the end of the year, companies will probably not need to comply until 2026. 🗓️

Now, you might think, "Hey, what's gonna regulate AI in the meantime?"

There’s the “EU Voluntary Code” which has more than 40 companies as signatories, including TikTok Inc., Microsoft Corp., Meta Platforms Inc., etc.

In case you don’t know: The EU Voluntary Code sets out compliance measures under the EU’s content moderation rules, the Digital Services Act.

The code aims to prevent profiteering from disinformation and fake news, as well as curbing the spread of bots and fake accounts.

Interestingly, Twitter initially signed up to the code but later dropped out.

But here’s the catch with the EU Voluntary Code - it doesn’t include the risks of AI-generated content.

So now, the European Union wants tech companies to warn users about artificial intelligence-generated content that could lead to disinformation, as part of a voluntary code.

Vera Jourova, a European Commission vice president, has stated that while the new AI technologies can be a force for good, they have “dark sides” too.

She emphasized that the new AI technologies raise fresh challenges for the fight against disinformation.

Thus, companies that integrate generative AI into their services and are signatories to the EU voluntary code should now also “build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation.”

In other words, the EU is putting its trust in the tech companies' hands to do the right thing.

So, let's sit back, grab some popcorn, and see how the tech companies handle this responsibility.🍿

So, there you have it — the EU's grand plan to tackle the risks arising from AI-generated content.

Let's hope it’s enough while we eagerly await the AI Act. Fingers crossed! 🤞😉

That's all we've got for you today.

What'd you think of today's email?


DISCLAIMER: None of this is financial advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell assets or make financial decisions. Please be careful and do your own research.
