Amid ongoing discussions about the increasing impact of artificial intelligence on human life, a new and unusual model aims to expand the boundaries of language into the ocean. Google, in collaboration with the Georgia Institute of Technology and the Wild Dolphin Project – an organization that studies dolphins in the Bahamas – has announced progress in developing a large language model (LLM) designed to decode dolphin communication, and possibly even “speak” it.
The new model, named DolphinGemma, was trained on tens of thousands of hours of acoustic recordings of the Atlantic spotted dolphin (Stenella frontalis), a species known for communicating with clicks, whistles, and rapid sequences of sounds known as burst pulses. Dolphins use these sequences during play, courtship, and sometimes conflict.
“When I first heard the sounds generated by the model, I danced around the room,” said Dr. Thad Starner, an AI researcher at Google and the Georgia Institute of Technology. “For years, I tried to produce burst pulses using conventional software and couldn’t succeed. But the model produced them on its own, without a single line of code – purely from what it learned from the data.”
Unlike previous approaches that attempted to translate dolphin sounds into words, the new model doesn’t try to “translate” but rather to recognize and construct acoustic structures that mimic dolphin communication. The idea is for the system to learn to identify certain sound patterns and associate them with specific situations – such as play with a companion, an encounter with a new object, or a confrontation.
The team is also developing a technology called CHAT – a wearable audio system for divers that enables the real-time production of programmed “dolphin-like” sounds. Researchers play a new sound, generated by the AI, while pointing to a specific object – such as a toy or sea plant – and observe whether the dolphins repeat the sound or respond to it consistently.
“The goal is to see if dolphins can learn or recognize a new word,” explains Dr. Denise Herzing, founder of the project. “Instead of taking us 150 years to manually review all the recordings, the model can identify patterns within days.”
However, some experts caution that dolphins may not have actual “words” at all. Human language, they argue, has syntax and can combine a finite set of elements into an unbounded range of meanings. Dolphins may have signals, but signals alone do not necessarily amount to a true language.
Still, even if the model doesn’t succeed in translating or creating a “conversation,” the very fact that dolphins might recognize patterns or expect an acoustic response opens a window into understanding their cognitive abilities. According to the researchers, even consistent responses to new sounds can teach us about memory, learning, attention, and even basic communication.
Other projects around the world are also aiming to communicate with animals using artificial intelligence, including the Earth Species Project, which focuses on crows, and CETI, which is trying to decode whale communication. When asked whether we will ever be able to hold a “conversation” with a dolphin, the researchers answer cautiously: probably not. But the very attempt to understand may change the way we see our place in the animal world.