OpenAI CEO crushes our GPT-4 dreams
Plus, CNET reminds us not to fully trust AI-generated content...yet.
In today’s email:
OpenAI CEO brings us back to reality, crushes dreams.
Tech news site CNET learned not to fully trust AI-generated content.
Free Udemy course on creating your own ChatGPT bot extension.
😎 3 Cool Things
Grab this course on creating a ChatGPT extension while it’s still free.
Use this AI tool to quickly find keywords your audience is searching for.
Incredible art “Wizard’s Desk” created with Midjourney.
🤓 2 Big Stories About AI
1. OpenAI CEO says people are begging to be disappointed by GPT-4.
In a recent interview, OpenAI CEO Sam Altman discussed the hype surrounding GPT-4.
“The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed, and they will be.”
Usually, you love an honest and humble CEO. But in this case, I think I’d rather let my imagination run wild.
The 🥩 of it:
OpenAI CEO Sam Altman said that GPT-4, the company’s as yet unreleased language model, will come out when the company is confident that it can be done safely and responsibly.
GPT-3 was released in 2020; an improved version, GPT-3.5, was used to create the AI chatbot ChatGPT.
The launch of GPT-4 is highly anticipated, with people in the AI community and Silicon Valley making predictions about its capabilities.
One claim that Altman says is “complete bull💩” is that GPT-4 has 100 trillion parameters.
Altman also said that video-generating AI models are coming, but did not offer a timeline for their development.
Despite being one of the hottest, most talked-about tech companies on the internet, OpenAI is currently not making much money, Altman said, though it’s still in the early stages as a company.
When asked about AI in the classroom:
“We’re going to try and do some things in the short term. There may be ways we can help teachers be a little bit more likely to detect output of a GPT-like system, but a determined person will get around them, and I don’t think it’ll be something society can or should rely on long term. We’re just in a new world now. Generated text is something we all need to adapt to, and that’s fine. We adapted to calculators and changed what we tested in maths class, I imagine. This is a more extreme version of that, no doubt. But also the benefits of it are more extreme as well.”
At this point, it’s hard to guess when we’ll see public access to GPT-4. Sam and the team seem determined to work out the kinks in GPT-3.5 before unleashing the new version on the world. It doesn’t seem like the company was ready for ChatGPT’s rapid rise in popularity, and it’s possible the repercussions of handling that attention have added delays and uncertainty to this new release.
2. Whoops! CNET learned the hard way not to fully trust AI-generated content yet.
CNET was forced to make major corrections to an article about compound interest written by AI due to multiple inaccuracies. Turns out math is hard for robots, too.
The 🥩 of it:
CNET, a tech news and product reviews publication, was forced to make multiple corrections to an article written by AI due to inaccuracies in basic math throughout the piece.
The article, which explained compound interest, contained at least 5 errors, including incorrectly calculating how much a person would earn in a year if they deposited $10,000 into a savings account paying 3% annual interest.
The original article said the person would earn $10,300 instead of $300 ($10,300 is the account’s total balance after a year, not the interest earned), convincing me the math portion of AI is being trained on my high school homework.
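For the skeptics, the arithmetic the AI fumbled takes only a few lines of Python to check (a minimal sketch of the savings-account example, not CNET's actual calculation):

```python
principal = 10_000  # initial deposit
rate = 0.03         # 3% annual interest

# Interest earned after one year -- the number the article should have reported
interest = principal * rate
print(f"Interest earned: ${interest:,.2f}")  # Interest earned: $300.00

# Total balance after one year -- the figure the AI mistook for earnings
balance = principal * (1 + rate)
print(f"Total balance:   ${balance:,.2f}")   # Total balance:   $10,300.00
```

The confusion is between the interest *earned* ($300) and the account's *balance* ($10,300), which happens to be exactly the number the article printed.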
AI's rapid advancement has been met with both celebration and horror. Some executives have called the technology "magic," while others have warned that AI is still prone to getting basic facts wrong.
Despite some of AI's flaws, CNET began quietly publishing articles "assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff" in November. According to the article, CNET has published about 75 articles using the technology.
Despite the errors and the resulting backlash, CNET did not indicate that it will stop using AI to write articles altogether.
In a statement to Insider, CNET said they "are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process."