KUALA LUMPUR, Oct 6 — Google’s NotebookLM impressed the world upon release with its ability to create realistic AI-generated podcast shows from any article or video input, with AI hosts that speak in a natural, even humorous manner that makes them nearly impossible to distinguish from human presenters.
A philosophical twist occurred when AI researcher Olivia Moore fed the virtual hosts an article explaining that they were not real, raising the question of how AI responds to learning about its own nature. In a particularly unsettling moment, the male presenter humorously described trying to call his wife, only to realise she did not exist either.
The NotebookLM hosts realizing they are AI and spiraling out is a twist I did not see coming pic.twitter.com/PNjZJ7auyh
— Olivia Moore (@omooretweets) September 29, 2024
Rather than demonstrating sentience, however, the exercise was a display of how well large language models are able to process and respond to inputs, including prompts that “reveal” to the AI that it is not real.
What people generally think of as sentient AI would require what is called Artificial General Intelligence (AGI), which is not yet achievable with today’s technology.