Google under pressure, AI detecting Alzheimer's, and more.
Google goes into "code red" in response to ChatGPT. Plus: can AI detect Alzheimer's?
In today’s email:
Google feels the pressure to innovate with AI.
GPT-3 is kind of kicking butt at early Alzheimer’s detection.
AI architecture is creepy but amazing.
2 STORIES YOU SHOULD KNOW ABOUT
The success of ChatGPT is putting pressure on Google, which went into “code red” to build new AI products.
The 🥩 of it:
After ChatGPT went viral and gained millions of users in its first week, people started speculating that it could be the Google Killer™.
If you type the same question into ChatGPT and Google, ChatGPT gives you a direct response with zero ads and no algorithmic gaming (i.e. Fiverr SEO consultants writing incorrect or misleading content). That definitely seems to pose a threat to Google’s business model.
Something else ChatGPT offers that Google doesn’t is the ability to continue the conversation past the first search. I can iterate endlessly and ask follow-up questions without ever leaving the chat box.
According to The New York Times, Google is treating the issue seriously with management declaring a “code red”, which is a nerdy way of saying “we need everyone working on this now”. According to the NYT article, Google CEO Sundar Pichai, “has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses.”
Every May, Google hosts its conference, Google I/O, where it teases new products and showcases the latest updates. Given the code-red push to develop new AI prototypes, some are speculating we may see AI products demonstrated during Google I/O 2023.
However, a Google executive expressed concerns about releasing products too early, noting that the AI “can make stuff up” when it’s not sure, while bias and toxic language are other issues.
*Editor’s note: With “misinformation” being the word of the year for 2022, ensuring the accuracy of AI output should be a priority inside every AI business. I’m happy to hear that Google isn’t sacrificing accuracy in its quest to take on OpenAI’s product.
Google hasn’t exactly wowed me with their innovation in recent years, so I’m glad that we seem to be seeing ChatGPT putting pressure on them to perform again.
Can AI detect early signs of Alzheimer’s?
The 🥩 of it:
Recent research demonstrated that OpenAI's GPT-3 can pick up on clues in speech that predict the early stages of dementia with 80% accuracy.
Because language impairment is a symptom in up to 80% of patients with dementia, researchers have been focusing on ways to pick up on subtle clues with AI that could indicate if a patient should undergo a full examination.
Things like hesitation, making grammar and pronunciation mistakes, and forgetting the meaning of words are all clues they look for when testing.
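To make those clues concrete, here's a minimal Python sketch of the kind of speech-level markers described above. To be clear, this is an illustration, not the researchers' actual pipeline (which relied on GPT-3's learned representations rather than hand-built features), and the filler-word list and feature names are my own assumptions.

```python
import re

# Hypothetical hesitation markers as they might appear in a speech transcript.
# The real study did not use a hand-made list like this.
FILLERS = {"uh", "um", "er", "hmm"}

def speech_features(transcript: str) -> dict:
    """Extract two simple linguistic markers from a speech transcript.

    - filler_rate: fraction of words that are hesitation fillers.
    - type_token_ratio: unique words / total words; lower values can signal
      the repetitive, simplified vocabulary associated with dementia.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {"filler_rate": 0.0, "type_token_ratio": 0.0}
    filler_count = sum(1 for w in words if w in FILLERS)
    return {
        "filler_rate": filler_count / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

sample = "Um, I went to the, uh, the place... the, um, the store."
print(speech_features(sample))
```

Features like these could feed a downstream classifier; the Drexel team's insight was that GPT-3 can surface far subtler patterns than any hand-picked list.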
The researchers at Drexel's School of Biomedical Engineering, Science and Health Systems tested their theory by training the program with a set of transcripts from a dataset of speech recordings from the National Institutes of Health.
They asked the AI to review dozens of transcripts from the dataset and decide whether or not each one was produced by someone who was developing Alzheimer's.
The 80% accuracy suggests that GPT-3 analysis is a promising approach for Alzheimer’s testing, and has the potential to improve early diagnosis of dementia.
To build on these initial results, the researchers are planning to develop a web application that could be used at home or in a doctor's office as a pre-screening tool, which may also give the AI more data to use for improvements.
"Training GPT-3 with a massive dataset of interviews -- some of which are with Alzheimer's patients -- would provide it with the information it needs to extract speech patterns that could then be applied to identify markers in future patients." - Felix Agbavor, Drexel's School of Biomedical Engineering, Science and Health Systems
3 RAD THINGS THAT HAPPENED IN AI
Tim Fu is making some rad architecture with Midjourney.
Test whether you can distinguish AI-generated content from real art or literature.
From the New York Times - An A.I. Pioneer on What We Should Really Fear.