Artists accuse Adobe of stealing from them to train their AI
Plus, Microsoft joins the text-to-speech AI race.
In today’s email:
Artists are accusing Adobe of stealing their design process to power its AI.
Microsoft unveils a text-to-speech AI that can mimic a voice from just 3 seconds of audio.
A chatbot for your internal documentation.
😎 3 Cool Things
Restore old or blurry photos with AI, for free.
Turn your company’s internal documentation into an AI chatbot.
Chat with AI versions of your favorite podcast hosts or YouTubers.
🤓 2 Big Stories About AI
Artists are accusing Adobe of stealing their design process to power its AI.
In order to train an AI, you need a lot of data. Some companies have made headlines lately for compiling user data to train their AI programs, and those users aren’t happy about it.
GitHub dealt with the blowback when it released Copilot, which turns user prompts into coding suggestions. Copilot’s underlying model, built with OpenAI, was trained on billions of lines of code hosted on GitHub, which resulted in a lawsuit from GitHub’s own customers. Then, as soon as AI avatar photos went viral, artists came out in protest because those programs are trained on their art, with zero compensation or benefit to the artist.
Now, artists are facing off against Adobe, the company behind the biggest design and editing software, out of fear that it is using their design processes to train its new AI.
The 🥩 of it:
A screenshot posted to Instagram by French comic book author Claire Wendling has gone viral with more than 2 million views.
In the screenshot, Wendling highlights Adobe’s updated privacy policy, in particular a clause allowing Photoshop and other Adobe products to track how artists work inside the apps.
The concern in the creative community is that this is yet another case of their work being stolen for another company’s profits.
Adobe pushed back on these concerns in a statement to Fast Company.
“When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features,” said the company spokesperson in a written statement to Fast Company. “We are currently reviewing our policy to better define Generative AI use cases.”
However, Adobe’s FAQ cites examples of how the company may use AI, such as auto-tagging photographs of recognizable subjects like dogs and cats. The AI can also suggest context-aware options: if it believes you’re designing a website, it might suggest relevant buttons to include.
With the policy’s language this blurry, it’s hard to know exactly how Adobe’s AI will use customer data in the future, and a lot of artists aren’t willing to wait and find out.
“For me, it’s astonishing that a paid service assumes it’s okay to violate users’ privacy at such a scale,” says Andrey Okonetchnikov, a front-end developer and UI and UX designer from Vienna, Austria, who uses Adobe products to sync photographs. “It’s troublesome because companies who offer to store data in the cloud assume that they own the data. It violates intellectual property and privacy of millions of people and it’s assumed to be ‘business as usual’. This must stop now.”
Any artist unwilling to let Adobe train its AI on their process can opt out inside the app settings.
What do you think? Total invasion of privacy for company gains, or reasonable use of data that comes with the territory of using software for your business?
Last week Apple announced its entry into the AI narrator game. Now Microsoft is unveiling its own text-to-speech AI.
There seems to be a lot of opportunity in text-to-speech. Otherwise, why would Apple, Meta, Google, and Microsoft all be racing to deliver the highest-quality text-to-speech AI? Last week we discussed Apple’s entry into the market with its AI audiobook narrator; now Microsoft is entering the race with a seemingly superior product.
The 🥩 of it:
Microsoft unveiled its latest text-to-speech generator, VALL-E, which can mimic a voice from just three seconds of audio.
According to this article, Microsoft’s AI sounds naturally human compared to other options.
Major tech companies such as Google, Meta, and Apple have all recently joined the text-to-speech AI race.
🤓🚨 Nerd Alert: While most other generators synthesize speech by manipulating waveforms directly, VALL-E generates discrete audio codec codes from text and acoustic prompts, then uses what it has learned about a voice to predict how that voice would sound speaking other phrases.
Translation: It works gooder.
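For the extra-nerdy: the “discrete audio codes” idea can be sketched in a few lines. This is a toy illustration with a random, hand-rolled codebook, not VALL-E’s learned neural codec — it just shows how a waveform becomes a short sequence of integer tokens that a language-model-style system can predict:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "codebook": 16 code vectors, each standing in for a short
# chunk (frame) of 4 audio samples. Real codecs learn these vectors.
codebook = rng.normal(size=(16, 4))

# A fake waveform of 64 samples, split into 16 frames of 4 samples.
waveform = np.sin(np.linspace(0, 20 * np.pi, 64))
frames = waveform.reshape(-1, 4)

# Quantize: each frame becomes the index of its nearest code vector.
# This integer sequence is the "discrete audio code" a model like
# VALL-E predicts, instead of raw waveform samples.
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
codes = dists.argmin(axis=1)  # shape (16,), values in 0..15

# Decode: look the codes back up to get an approximate waveform.
reconstructed = codebook[codes].reshape(-1)

print(codes[:8])             # a short token sequence
print(reconstructed.shape)   # same length as the input waveform
```

The payoff of this trick is that audio turns into token sequences, the same kind of data language models are already good at predicting.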
The research team claims an audio prompt as short as three seconds is enough for VALL-E to do its job.
VALL-E was trained on LibriLight, an audio library assembled by Meta that contains roughly 60,000 hours of English speech, all of it in the public domain.
VALL-E matches the three-second sample to the voice of one of the 7,000 speakers it was trained on, then synthesizes the requested text in a similar voice to produce a convincing mimic.
What bets on the future are these companies making that are pushing them all into this market?
🤣 1 LOL
It was a rough day with all the ChatGPT outages.