Most of the time when we talk about AI, we’re really talking about output: more code, content, or images generated on demand.
But one of the most disorienting ideas we heard at the SXSW Conference this week wasn’t about what AI can do for us. It was about what AI might finally let us hear.
In a keynote that felt more like fieldwork than a product demo, Aza Raskin, co‑founder of the Earth Species Project, walked through how researchers are using AI to decode the communication systems of other species: dogs, whales, birds, primates, and more.
Not as a party trick, but as a serious attempt to answer a strange question:
If intelligence is all around us, what would it take to actually listen?
1. Turning Barks and Rumbles into Data With Structure

Instead of showing yet another tool that turns prompts into text or images, Raskin walked through something stranger and much more technical: using AI to help decode the communication systems of other species.
This isn’t the sci‑fi fantasy where you press a button and your dog suddenly speaks English. Nobody is shipping a “talk to your golden retriever” app. What these teams are doing is closer to reverse‑engineering an unknown protocol.
Step 1: Collect enormous, messy datasets
The work starts in the least glamorous place: with an unusually large and varied pool of raw data.
Researchers record thousands of hours of audio (barks, songs, clicks, rumbles) across multiple species, including dogs, whales, birds, primates, and others. Each sound is paired with sensor data that describes what was happening at the time: where the animal was, when the call occurred, what the environment was like, and how the animal was moving.
On its own, this combination of sounds and signals looks like chaos. But that’s intentional. Before a model can find structure, it has to see the full complexity of real behavior: overlapping calls, background noise, and subtle shifts a human ear would miss. That raw, high‑dimensional dataset becomes the foundation for everything that follows.
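To make "sounds paired with sensor data" concrete, here is a minimal sketch of what one such record might look like as a data structure. The field names and the `ingest` helper are illustrative assumptions, not the actual research schema:

```python
from dataclasses import dataclass

@dataclass
class CallRecording:
    """One vocalization paired with the sensor context captured alongside it.
    All field names are hypothetical, for illustration only."""
    species: str
    audio_samples: list     # raw waveform chunk
    timestamp: float        # seconds since the session started
    location: tuple         # (latitude, longitude)
    movement: tuple         # accelerometer reading (x, y, z)
    ambient_noise_db: float

def ingest(raw_events):
    """Turn messy raw events into structured records, dropping any
    event that has no audio attached."""
    records = []
    for e in raw_events:
        if not e.get("audio"):
            continue  # sensor-only event: no call to analyze
        records.append(CallRecording(
            species=e.get("species", "unknown"),
            audio_samples=e["audio"],
            timestamp=e.get("t", 0.0),
            location=tuple(e.get("gps", (0.0, 0.0))),
            movement=tuple(e.get("accel", (0.0, 0.0, 0.0))),
            ambient_noise_db=e.get("noise_db", 0.0),
        ))
    return records
```

The point of the structure is exactly what the text describes: every sound travels with the signals that describe where, when, and how it happened.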
Step 2: Add labeled context
Once the raw recordings are in place, the next step is to give those sounds meaning by adding context.
For each sequence of calls, researchers annotate what was happening around it: which animals were nearby, whether there was food or a predator present, if a newborn had just arrived, and whether the group was moving, resting, playing, or under stress. They also capture what happened immediately afterward: did the animals approach, retreat, call back, or change direction?
By pairing each vocalization with this situational detail, the dataset shifts from “audio plus numbers” to “signals plus consequences.” The model is no longer learning from sound in isolation; it is learning from sound embedded in real scenarios.
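One way to picture the shift from "audio plus numbers" to "signals plus consequences" is aligning each call with the first behavior observed shortly after it. This is a hedged sketch, not the researchers' actual method; the 10‑second window and the labels are invented for illustration:

```python
def label_with_outcomes(calls, observations, window=10.0):
    """Attach to each call the first behavior observed within `window`
    seconds after it.

    calls:        list of (call_id, timestamp) pairs
    observations: list of (timestamp, behavior) pairs, any order
    """
    obs = sorted(observations)
    labeled = []
    for call_id, t in calls:
        outcome = next(
            (behavior for (ot, behavior) in obs if t < ot <= t + window),
            None,  # nothing observed in the window
        )
        labeled.append({"call": call_id, "t": t, "outcome": outcome})
    return labeled
```

A call followed by "group retreats" and a call followed by nothing are now different training examples, even if they sound similar.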
And this is exactly the point we come back to again and again when we talk about AI: context is what makes a model useful. Without it, you only get pattern‑matching on noise. With it, you can start to connect specific patterns to consistent outcomes, and that’s where real insight (for animals or for software systems) begins.
Step 3: Let models uncover a hidden “language”
While different research groups use different setups, a typical approach to this kind of problem looks something like this.
With sound and context in place, the models can finally begin to look for structure.
They search for patterns of calls that reliably show up before certain behaviors, sequences that tend to appear together, and recurring motifs that behave like reusable “words.” Under the hood, researchers use modern machine‑learning techniques to group similar calls into families and to map how those units unfold over time, then relate those patterns back to real‑world situations.
The result isn’t a neat dictionary, but it is a rough language map: clusters of calls that act like a vocabulary, informal rules for how they combine, and regularities in who “says” what, to whom, and when. In short, what once looked like noise starts to behave like a communication system you can actually reason about.
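A toy version of the "map how units unfold over time" step, assuming calls have already been grouped into unit labels (the clustering itself is elided here): count which units follow which, and which short sequences recur often enough to behave like reusable "phrases". Purely illustrative:

```python
from collections import Counter

def transition_counts(units):
    """How often each call unit follows another: raw material for the
    informal 'combination rules' described above."""
    return Counter(zip(units, units[1:]))

def recurring_motifs(units, n=3, min_count=2):
    """Recurring n-grams of units: candidate reusable 'phrases'."""
    grams = Counter(tuple(units[i:i + n]) for i in range(len(units) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}
```

On a sequence like `["A", "B", "C", "A", "B", "C", "D"]`, the motif `("A", "B", "C")` surfaces as a repeated unit while one-off trigrams fall away, which is the rough shape of how "noise" starts to behave like vocabulary.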
2. What this Teaches Us About Data and Systems
Once you see that process, it gets hard to keep your old mental models.
1) Language lives in patterns
We’re used to thinking about language as text on a page or words in a UI. This work pushes a different view:
- Language is whatever system a group uses to coordinate, signal, and share state.
- It might be sound, movement, timing, spacing, or a combination.
- The “meaning” lives in the pattern and the outcome, not in a human label.
That’s a big shift for anyone building products or models: your logs, traces, and usage analytics are not “just metrics.” They’re the behavioral language of your system and your users. You can either treat them as noise or treat them like whale song and look for structure.
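To make the "logs as language" framing concrete, here is a minimal sketch: collapse the variable parts of log lines (numbers, hex ids) so that lines with the same shape group together as one recurring "phrase" you can count. The template rules here are simplistic assumptions, not a production log-mining approach:

```python
import re
from collections import Counter

def log_template(line):
    """Collapse variable parts so log lines with the same shape
    group together as one recurring 'phrase'."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # hex ids first
    line = re.sub(r"\d+", "<NUM>", line)             # then plain numbers
    return line

def top_phrases(log_lines, k=3):
    """The k most common log 'phrases' -- structure hiding in noise."""
    return Counter(log_template(l) for l in log_lines).most_common(k)
```

Even this crude normalization turns thousands of "unique" lines into a short vocabulary of repeated utterances, which is the first move in treating telemetry like whale song.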
2) Data defines the environment you operate in
Traditional analytics often treats data as a byproduct: something produced by a system you already understand. In the animal‑AI work, data is the environment you enter to discover what’s going on:
- You don’t know the important units ahead of time.
- You don’t know which patterns matter yet.
- You’re willing to let the model show you where the structure is hiding.
That mindset is powerful when you apply it back to software. Instead of defining a handful of KPIs and staring at them, you can ask: What are the latent “phrases” in our product usage? Instead of hand‑labeling a few events as “success” or “drop‑off,” you can ask: What pathways consistently precede churn or expansion?
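A hedged sketch of that last question in code: compare event n-grams from sessions that ended in churn against those that ended in retention, and keep the ones that lean toward churn. The session structure and outcome labels are hypothetical, and the result is a list of candidates to investigate, not proof of causation:

```python
from collections import Counter

def ngram_counts(sessions, outcome, n=2):
    """Count event n-grams across sessions that ended in `outcome`.
    sessions: list of (event_list, outcome_label) pairs."""
    counts = Counter()
    for events, label in sessions:
        if label != outcome:
            continue
        for i in range(len(events) - n + 1):
            counts[tuple(events[i:i + n])] += 1
    return counts

def churn_signals(sessions, n=2):
    """Event n-grams seen more often before churn than before
    retention: candidate warning 'phrases', not proof."""
    churned = ngram_counts(sessions, "churn", n)
    retained = ngram_counts(sessions, "retain", n)
    return {g: c for g, c in churned.items() if c > retained.get(g, 0)}
```

This is the same move as the animal work: let the patterns that reliably precede an outcome announce themselves, instead of deciding up front which events matter.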
3) Systems are ongoing conversations
At some point, you stop seeing animals as “things that make noise,” and start seeing them as agents in a live conversation. The same thing can (and should) happen with your stack:
- Feature flags, retries, error patterns, and workarounds are “utterances.”
- Engineers, users, and services are all “speakers” in the system.
- The system is constantly “talking” through behavior long before a dashboard goes red.
Researchers working on interspecies communication treat animal communication as a living system to be decoded, not a static dataset to be summarized. Most engineering orgs aren’t there yet.
3. The Question Every AI‑Curious Engineer Should Be Asking

If AI can help us decode dog barks and whale songs, it raises an uncomfortable question: What excuse do we have left for not understanding our own customers and systems?
For AI‑curious engineers and leaders, this work should trigger at least three concrete questions.
a) Where are we treating “bark” as noise in our own products?
- Which logs and events do we collect but never truly analyze?
- Which behaviors do we dismiss as “edge cases” instead of “dialects”?
- Where are support tickets, GitHub issues, or Slack threads quietly repeating the same “phrase” we’ve never named?
If an animal‑language model can find structure in unlabelled noise, what could similar approaches reveal in onboarding flows, abandoned features, or “shadow workflows” teams invent around your tools?
b) Are we only using AI to talk, or also to listen?
Most teams still apply AI in narrow, output‑focused ways: generating copy, code, tests, or assets, sprinkling in autocomplete, or adding a chatbot layer on top of existing documentation.
Those are all output use cases. But what if you also asked:
- Where could AI agents quietly watch and map patterns in how our product is actually used?
- How could we use models to surface “phrases” of behavior that precede churn, upgrades, or incidents?
- Could we build internal tools that listen to our engineering workflows and proactively highlight friction?
If AI can decode non‑human communication, it can certainly help decode the “language” of your own product.
c) What would it look like to design for empathy, not just efficiency?
The most striking part of the SXSW conversation wasn’t the novelty. It was the stated goal:
“Open the aperture of our own empathy.”
For an engineering org, that could mean:
- Instrumenting systems not just for speed and uptime, but for understandability
- Designing tools that surface where users are confused, not just where they click
- Treating your internal platform like an ecosystem whose health you continually listen to, not just a set of services to keep “green”

In other words: using AI to become the kind of team that hears weak signals early and acts on them.
Nature Is Speaking. So Are Your Systems. Are You Listening?
The real promise of AI is to stretch our empathy, not just our productivity. Empathy here isn’t sentimental. It’s operational:
- Seeing structure where we used to see noise
- Hearing warnings where we used to see “random flakiness”
- Recognizing intent and frustration long before a churned contract or a 2 a.m. incident
Nature has been speaking this whole time. Our products, users, and engineering systems have been speaking this whole time, too.
AI might talk to animals before it replaces engineers. That might be the best outcome we could hope for. Because it leaves us with the question that matters:
If you treated your product data, customer behavior, and engineering workflows like a language to decode, what is your system already trying to tell you that you still can’t hear?