OpenAI is seeking funding at a $29 billion valuation, *and* tackling the plagiarism concerns it helped create.
3 cool things, 2 big stories, and 1 LOL from the world of AI.
😎 3 Cool Things
Curated list of over 500 AI tools.
A Chrome extension that summarizes any page you're on into a few sentences.
What to expect from AI in 2023.
🤖 2 Big Stories from AI
OpenAI reportedly looking at $29 billion valuation.
If you've spent even a minute online in the last two months, you've seen something about the viral AI chatbot ChatGPT and the AI image generator DALL-E. OpenAI, the company behind both programs, is reportedly raising funds at a valuation of $29 billion. The shocker? Its valuation was roughly half that (about $14 billion) just two years ago.
The 🥩 of it:
The Wall Street Journal is reporting that OpenAI has entered talks to sell existing shares to venture capital firms Thrive Capital and Founders Fund.
The valuation being discussed is $29 billion, roughly double what OpenAI was worth just two years ago.
This is reportedly being done through a tender offer.
Don't worry, I Googled that for you: A tender offer is a bid to purchase some or all of shareholders' stock in a corporation. Tender offers are typically made publicly and invite shareholders to sell their shares for a specified price and within a particular window of time.
So, OpenAI employees and other shareholders would be allowed to sell their current shares to the firms.
No deals have been struck yet, and some investors have doubts about OpenAI's sales potential, but I bet we can expect to see an update on this within the coming weeks.
OpenAI says it has an answer to the plagiarism problem it helped create.
As AI tools become more widely available, fears of plagiarism in essays and other coursework have become a focal point in our education systems. OpenAI, the company behind the viral AI chatbot that can write an entire essay from a single prompt, claims to have a solution.
After its viral success, OpenAI's ChatGPT has raised concerns that it could be used to cheat in coursework and plagiarize essays.
So far, ChatGPT's output hasn't triggered any conventional plagiarism detectors.
Since the text it produces hasn't been written before, assessors have been struggling to work out how to identify cheaters.
OpenAI is working on a way to counter "AIgiarism", or AI-assisted plagiarism: a system that essentially watermarks the bot's output to make it harder to pass off as human-generated.
The system would subtly alter the specific words chosen by ChatGPT in a way that is statistically predictable to anyone looking for signs of machine-generated text, but not noticeable to a reader (there's a rough sketch of the idea after this list).
A prototype of the watermarking system has been developed and seems to work well, according to OpenAI researcher Scott Aaronson.
"We actually have a working prototype of the watermarking scheme," Aaronson added. "It seems to work pretty well – empirically, a few hundred [words] seem to be enough to get a reasonable signal that, yes, this text came from GPT."
Besides the obvious worry about students not doing their own work, another big concern is that ChatGPT has a tendency to "hallucinate" facts that aren't strictly true.
The program could also be used for mass generation of propaganda or to impersonate someone's writing style.
"We want it to be much harder to take a GPT output and pass it off as if it came from a human," Aaronson said. "This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda – you know, spamming every blog with seemingly on-topic comments supporting Russia's invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone's writing style in order to incriminate them."
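If you're curious what "statistically predictable but invisible to readers" could even look like, here's a minimal, purely illustrative Python sketch. This is not OpenAI's actual scheme, and the names (SECRET_KEY, score, pick_word, watermark_strength) are made up for the example: a secret key scores candidate words, generation nudges its choices toward high-scoring words, and a detector who knows the key checks whether a text's average score is higher than chance.

```python
import hashlib

# Illustrative sketch only - NOT OpenAI's real watermark. A secret key plus the
# preceding words deterministically scores every candidate next word. Generation
# prefers high-scoring words; a detector with the key checks for a keyed bias.

SECRET_KEY = b"demo-key"  # assumption: known only to the generator and detector


def score(prev_words, candidate):
    """Pseudorandom score in [0, 1) keyed on the secret key and local context."""
    msg = SECRET_KEY + b"|" + " ".join(prev_words[-3:]).encode() + b"|" + candidate.encode()
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def pick_word(prev_words, candidates):
    """Toy generator step: among equally plausible candidates, pick the one the
    secret key scores highest. A human reader can't tell the difference."""
    return max(candidates, key=lambda w: score(prev_words, w))


def watermark_strength(text):
    """Toy detector: average score of each word given its context.
    Ordinary human text should average near 0.5; watermarked text drifts higher."""
    words = text.split()
    scores = [score(words[:i], words[i]) for i in range(1, len(words))]
    return sum(scores) / len(scores) if scores else 0.5
```

This also makes Aaronson's "a few hundred words seem to be enough" quote intuitive: the more words the detector averages over, the more clearly a keyed bias separates itself from the roughly 0.5 baseline you'd expect from ordinary human writing.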
🤣 1 LOL