Every Attempt To Translate Meows Has Failed. Why?

A New York Times science writer tries MeowTalk, the app that says it can tell you what your cat is saying.

New York Times science writer Emily Anthes details her experience with MeowTalk in a new story, and has more or less come to the same conclusions I did when I wrote about the app last year — it recognizes the obvious, like purring, but adds to confusion over other vocalizations.

Back in January of 2021, in MeowTalk Says Buddy Is A Very Loving Cat, I wrote about using MeowTalk to analyze the vocalizations Bud makes when he wants a door opened. After all, that should be a pretty basic task for an app that exists to translate meows: Cats ask for things, or demand them, some would say.

But instead of “Open the door!”, “I want to be near you!”, “Human, I need something!” or even “Obey me, human!”, it told me Bud was serenading me as he pawed and tapped his claws on the door: “I’m looking for love!”, “My love, come find me!”, “I love you!”, “Love me!”, “I’m in love!”

According to MeowTalk, my cat was apparently the author of that scene in Say Anything when John Cusack held up a boombox outside of Ione Skye’s bedroom window.

Buddy the Director.

Anthes had a similar experience:

“At times, I found MeowTalk’s grab bag of conversational translations disturbing. At one point, Momo sounded like a college acquaintance responding to a text: ‘Just chillin’!’ In another, he became a Victorian heroine: ‘My love, I’m here!’ (This encouraged my fiancé to start addressing the cat as ‘my love,’ which was also unsettling.) One afternoon, I picked Momo up off the ground, and when she meowed, I looked at my phone: ‘Hey honey, let’s go somewhere private.’”

On the opposite side of the emotional spectrum, MeowTalk mistook Buddy’s conversation with me about a climbing spot for an argument that nearly came to blows.

“Something made me upset!” Buddy was saying, per MeowTalk. “I’m angry! Give me time to calm down! I am very upset! I am going to fight! It’s on now! Bring it!”

In reality the little dude wanted to jump on the TV stand. Because he’s a serial swiper who loves his gravity experiments, the TV stand is one of like three places he knows he shouldn’t go, which is exactly why he wants to go there. He’s got free rein literally everywhere else.

If MeowTalk had translated “But I really want to!” or something more vague, like “Come on!” or “Please?”, that would be a good indication it’s working as intended. The app should be able to distinguish between pleading, or even arguing, and the freaked-out, hair-on-edge, arched-back vocalizations a cat makes when it’s ready to throw down.

Accurate translations of Buddy’s meows.

Still, I was optimistic. Here’s what I wrote about MeowTalk last January:

“In some respects it reminds me of Waze, the irreplaceable map and real-time route app famous for saving time and eliminating frustration. I was one of the first to download the app when it launched and found it useless, but when I tried it again a few months later, it steered me past traffic jams and got me to my destination with no fuss.

What was the difference? Few people were using it in those first few days, but as the user base expanded, so did its usefulness.

Like Waze, MeowTalk’s value is in its users, and the data it collects from us. The more cats it hears, the better it’ll become at translating them. If enough of us give it an honest shot, it just may turn out to be the feline equivalent of a universal translator.”

There are also indications we may be looking at things — or hearing them — the “wrong” way. Anthes spoke to Susanne Schötz, a phonetician at Lund University in Sweden, who pointed out that the inflection of a feline vocalization carries nuance. In other words, it’s not just the particular sound a cat makes, it’s the way that sound varies tonally.

“They tend to use different kinds of melodies in their meows when they’re trying to point to different things,” said Schötz, who is co-author of an upcoming study on cat vocalizations.

After a few months in which I forgot about MeowTalk, I was dismayed to open the app and find ads wedged between translation attempts, and prompts asking me to buy the full version to enable MeowTalk to translate certain things.

The developers need to generate revenue, so I don’t begrudge them that. But I think it’s counterproductive to put features behind paywalls when an application like this depends so heavily on people using it and feeding it data.

To use the Waze analogy again, would the app have become popular if it had remained the way it was in those first few days after it launched? At the time, I was lucky if I saw any indication that more than a handful of people were using it, even in the New York City area. The app told me nothing useful about real-time traffic conditions.

These days it’s astounding how much real-time traffic information the app receives and processes, routing drivers handily around traffic jams, construction sites and other conditions that might add minutes or even hours to some trips. You can be sure that when you hear a chime and Waze wants to redirect you, other Waze users are transmitting data about a crash or other traffic impediment in your path.

“I’m thinking deep thoughts about turkey.”

MeowTalk needs more data to be successful, especially since — unlike Waze — it depends on data-hungry machine learning algorithms to parse the sounds it records. Like people, machine learning algorithms have to “practice” to get better, and for a machine, “practice” means hearing hundreds of thousands or millions of meows, chirps, trills, yowls, hisses and purrs from as many cats as possible.
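To make that concrete, here’s a minimal sketch of the kind of supervised pipeline an app like this might use: summarize each labeled recording as a feature vector, then fit a classifier. To be clear, this is not MeowTalk’s actual code; the folder layout, labels and feature choices below are my own assumptions. The point is simply that everything hinges on how many labeled meows the model gets to hear.

```python
# Minimal sketch of training a meow classifier on labeled audio clips.
# NOT MeowTalk's actual pipeline -- just an illustration of why this kind
# of app needs lots of labeled recordings from many different cats.
import glob
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def clip_features(path):
    """Summarize one recording as a fixed-length MFCC feature vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Mean and standard deviation of each coefficient over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical folder layout: meows/<label>/<clip>.wav, e.g. meows/food/001.wav
paths = glob.glob("meows/*/*.wav")
X = np.array([clip_features(p) for p in paths])
y = np.array([p.split("/")[-2] for p in paths])  # label = folder name

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```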

That’s why I’m still optimistic. Machine learning has produced algorithms that can identify human faces and even invent them. It’s produced software that can write prose, navigate roads, translate the text of dead languages and even rule out theories about enduring mysteries like the Voynich Manuscript.

In each of those cases there were innovators, but raw data was at the heart of what they accomplished. If MeowTalk or another company can find a way to feed its algorithms enough data, we may yet figure our furry little friends out — or at least know what they want for dinner.

How Much Can AI Teach You About Your Cat?

A new AI algorithm promises to help you gauge your cat’s mood — and determine if she’s in pain — by analyzing facial expressions.

In the photograph, Buddy is sitting on the coffee table in the classic feline upright pose, tail resting to one side with a looping tip, looking directly at me.

The corners of his mouth curve up in what looks like a smile, his eyes are wide and attentive, and his whiskers are relaxed.

He looks to me like a happy cat.

Tably agrees: “Current mood of your cat: Happy. We’re 96% sure.”


Tably is a new app, currently in beta. Like MeowTalk, Tably uses a machine learning algorithm to determine a cat’s mood.

Unlike MeowTalk, which deals exclusively with feline vocalizations, Tably relies on technology similar to facial recognition software to map your cat’s face. It doesn’t try to reinvent the wheel when it comes to interpreting what facial expressions mean — it compares the cats it analyzes to the Feline Grimace Scale, a veterinary tool developed following years of research and first published as part of a peer-reviewed paper in 2019.

The Feline Grimace Scale analyzes a cat’s eyes, ears, whiskers, muzzle and overall facial expression to determine if the cat is happy, neutral, bothered by something minor, or in genuine pain.

It’s designed as an objective tool to evaluate cats, who are notoriously adept at hiding pain for evolutionary reasons. (A sick or injured cat is a much easier target for predators.)

But the Feline Grimace Scale is for veterinarians, not caretakers. It’s difficult to make any sense of it without training and experience.

That’s where Tably comes in: It makes the Feline Grimace Scale accessible to caretakers, giving us another tool to gauge our cats’ happiness and physical condition. With Tably we don’t have to go through years of veterinary training to glean information from our cats’ expressions, because the software is doing it for us.
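For the curious, here’s a toy sketch of what “reading” a face against a grimace-style rubric might look like in software. The feature names, 0-to-2 scoring and cutoffs are illustrative assumptions on my part, not the published Feline Grimace Scale and not Tably’s code:

```python
# Illustrative sketch of mapping per-feature scores to an overall mood label.
# The real Feline Grimace Scale is a validated veterinary tool; the features,
# 0-2 scoring and cutoffs here are simplified assumptions for illustration.

def grimace_label(ears: int, eyes: int, muzzle: int, whiskers: int) -> str:
    """Each feature scored 0 (relaxed) to 2 (markedly tense or altered)."""
    total = ears + eyes + muzzle + whiskers  # 0..8 in this toy version
    if total <= 1:
        return "happy"
    if total <= 3:
        return "neutral"
    if total <= 5:
        return "bothered by something minor"
    return "possibly in pain -- worth a vet visit"

# A relaxed face (wide eyes, relaxed whiskers) scores low:
print(grimace_label(ears=0, eyes=0, muzzle=1, whiskers=0))  # -> "happy"
```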

Meanwhile, I used MeowTalk early in the morning a few days ago when Buddy kept meowing insistently at me. When Bud wants something he tends to sound whiny, almost unhappy. Most of the time I can tell what he wants, but sometimes he seems frustrated that his slow human isn’t understanding him.

I had put down a fresh bowl of wet food and fresh water minutes earlier. His litter box was clean. He had time to relax on the balcony the previous night in addition to play time with his laser toy.

So what did Buddy want? Just some attention and affection, apparently.


I’m still not sure why Buddy apparently speaks in dialogue lifted from a cheesy romance novel, but I suppose the important thing is getting an accurate sense of his mood. 🙂

So with these tools now at our disposal, how much can artificial intelligence really tell us about our cats?

As always, there should be a disclaimer here: AI is a misnomer when it comes to machine learning algorithms, which are not actually intelligent.

It’s more accurate to think of these tools as software that learns to analyze a very specific kind of data and present the results in a way that’s useful and makes sense to the end users. (In this case the end users are us cat servants.)

Like all machine learning algorithms, they must be “trained.” If you want your algorithm to read feline faces, you’ve got to feed it images of cats by the tens of thousands, hundreds of thousands or even by the millions. The more cat faces the software sees, the better it gets at recognizing when something looks off.
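As a rough illustration of what that training looks like, here’s a bare-bones sketch that fine-tunes an off-the-shelf image model on labeled cat-face photos. The folder layout and mood labels are hypothetical, and this is not Tably’s actual model; it just shows why more labeled faces mean better results:

```python
# Minimal sketch of training an image classifier on labeled cat-face photos.
# Not Tably's actual model -- just an illustration of "feed it images by the
# hundreds of thousands and it gets better at spotting when something looks off."
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: cat_faces/<mood>/<photo>.jpg
data = datasets.ImageFolder("cat_faces", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # one output per mood

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the data, for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```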

At this point, it’s difficult to say how much insight these tools provide. Personally I feel they’ve helped me understand my cat better, but I also realize it’s early days and this kind of software improves when more people use it, providing data and feedback. (Think of it like Waze, which works well because 140 million drivers have it enabled when they’re behind the wheel and feeding real-time data to the server.)

I was surprised when, in response to my earlier posts about MeowTalk and similar efforts, most of PITB’s readers didn’t seem to share my enthusiasm.

And that, I think, is the key here: Managing expectations. When I downloaded Waze for the first time it had just launched and was pretty much useless. Months later, with a healthy user base, it became the best thing to happen to vehicle navigation since the first GPS units replaced those bulky maps we all relied on. Waze doesn’t just give you information — it analyzes real-time traffic data and finds alternate routes, taking you around construction zones, car accident scenes, clogged highways and congested shopping districts. Waze will even route you around unplowed or poorly plowed streets in a snowstorm.

If Tably and MeowTalk seem underwhelming to you, give them time. If enough of us embrace the technology, it will mature and we’ll have powerful new tools that not only help us find problems before they become serious, but also help us better understand our feline overlords — and strengthen the bonds we share with them.

Buddy’s bored of all this AI talk and wants a snack.

Real Cats vs AI-Generated Cats II: Which Kitties Were Real?

AI can create photorealistic images of cats as well as humans.

After a few days of patiently waiting, we finally have a winner in our unofficial contest from earlier this week.

Reader Romulo Pietrangeli got it right: None of the cats pictured in our April 13 post are real felines.

All nine images were created by the machine learning algorithm that powers the site This Cat Does Not Exist, a riff on the original This Person Does Not Exist, a site that uses generative adversarial networks (GANs) to create stunningly realistic images of people on the fly.

(Above: All six images are computer-generated using the same technology behind ThisCatDoesNotExist.)

Phillip Wang, the 33-year-old software engineer behind both sites (and a few others using the same tech and concept), explained to Inverse in an earlier interview why he created ThisPersonDoesNotExist.

“I’m basically at the point in my life where I’m going to concede that super-intelligence will be real and I need to devote my remaining life to [it],” Wang said. “The reaction speaks to how much people are in the dark about A.I. and its potential.”

Because the internet is ruled by cats, it was only a matter of time before a feline-generating version of the human-creating algorithm was brought online.

(Above: More artificially-generated cats. Artefacts in the images can sometimes give away the fact that they’re fake, such as the third image in the second row, where part of the cat’s fur is transparent.)

A CNN article from 2019 explains how GAN technology works:

In order to generate such images, StyleGAN makes use of a machine-learning method known as a GAN, or generative adversarial network. GANs consist of two neural networks — which are algorithms modeled on the neurons in a brain — facing off against each other to produce real-looking images of everything from human faces to impressionist paintings. One of the neural networks generates images (of, say, a woman’s face), while the other tries to determine whether that image is a fake or a real face.
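Stripped of StyleGAN’s sophistication, the face-off the article describes fits in a few dozen lines. The sketch below trains a generator and a discriminator against each other on a toy two-dimensional dataset, with points on a ring standing in for photos; it illustrates the adversarial idea and is not the code behind ThisCatDoesNotExist:

```python
# Stripped-down sketch of a GAN: a generator tries to produce samples that
# fool a discriminator, which tries to tell real from fake. Toy 2-D data
# stands in for cat photos; this is not StyleGAN.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    """'Real' data: points on a ring -- a stand-in for real photos."""
    angles = torch.rand(n) * 2 * math.pi
    return torch.stack([angles.cos(), angles.sin()], dim=1)

for step in range(2000):
    real = real_batch()
    fake = gen(torch.randn(64, 8))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes "real".
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```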

Wang, who said his software “dreams up a new face every two seconds,” told CNN he hoped his creations would spark conversation and get people to think critically about what they see in front of them. It looks like he’s achieved his goal.

Christopher Schmidt, a Google engineer who used the same technology to create fake home and rental interiors, agreed.

“Maybe we should all just think an extra couple of seconds before assuming something is real,” Schmidt told CNN.

Pietrangeli, for his part, says he can tell the difference: “All of the animal images,” he wrote, “lacked ‘aura.'”

Can You Tell The Real Cats From The Computer-Generated Kitties?

How can you tell a real cat from a computer-generated cat?

There’s a new tool that uses algorithmic artificial intelligence to create random images of cats, and the results are virtually indistinguishable from the real thing.

Can you tell which cats are real and which ones are computer-generated? (Kindly share your answers in the comments, numbering them from left to right. The person with the best score gets bragging rights!)

We’ll follow up with the answers after everyone’s had a day to make their guesses or informed choices, as it were.

The algorithm was created by a process called machine learning, a term you’ve probably heard at some point even if you haven’t sought out information about artificial intelligence.

In simple terms, machine learning means the creators fed massive amounts of data — millions of photos of cats — to the software. The algorithm analyzes the data and learns which patterns make an image look like a cat, and how to reproduce them.

Crucially, the algorithm never learns what a cat actually is. It doesn’t know a cat is an animal in the real world. It doesn’t know what the real world is, and it doesn’t know what animals are. All it knows is that data, organized a certain way, produces images that look like the photos it’s been fed.

That’s a key difference because, while we have made huge strides with machine learning, that’s not the kind of artificial intelligence the Elon Musks of the world freak out about when they smoke pot and watch The Matrix. We’ll never have to worry about our cat-generating algorithms rising up and eliminating humanity. 🙂

Artificial general intelligence — or AGI — is the potentially dangerous form of AI, but that’s a whole other piece of business: It involves recreating consciousness and the mind on a machine substrate.

We can’t even define consciousness and we know shockingly little about how the brain works, so that’s not happening any time soon. And even if we could pull it off, there’s a growing body of evidence supporting the concept of embodied cognition. That’s the idea that the mind cannot be separated from the brain, and the brain cannot be separated from the body, as well as a recognition that everything from pain signals to gut flora has an effect on our cognitive processes.

The bottom line: “AI” can get pretty good at making pictures of cats, but it’s not taking over the world any time soon.

Dear Buddy: Is The AI Vacuum Apocalypse Real?

The Vacuumpocalypse: Self-aware, evil vacuums hunting kitties to extinction?

Dear Buddy,

I found myself intrigued and frightened by the premise of your technoir thriller, Cyberbud 2077, in which nefarious forces plot to infect vacuums with a virus that will grant them consciousness and self-awareness. It’s every cat’s worst nightmare!

Is the Vacuumpocalypse real? Do you really think it could happen?

Technophobe in Tallahassee


Dear Technophobe,

The Vacuumpocalypse is a controversial subject in catdom, and for good reason: Few things prompt such existential dread among felinekind as a dystopian future in which we are systematically hunted down by self-aware vacuums.

Experts don’t quite agree on the certainty of our impending doom at the wrong end of a Dust Buster. Few are more vocal than Elon Meowsk, who never shuts up about how scared he is that Vacuum Terminators will rise up, invent really awesome laser guns and overthrow kitties.

Meowchio Kaku, the renowned physicist, is more circumspect but thinks it’s only a matter of time before the Vacuum Uprising. Smart home technology already allows all our gadgets to communicate, which means your automatic litterbox, your USB cat fountain and your Roomba are already on the same network, talking to each other in a language of ones and zeros. (And you can be sure the litter box is telling the others how foul you are!)

“Give me yer meowtercycle, yer gunz and ur leather jacketz.”

Sophisticated AI technology already exists in high-end litter boxes. The Lulupet litter box, for instance, boasts of “excretory behavioral algorithms” and features AI-driven stool imagery analysis, running every nugget through a database with machine learning techniques similar to the facial recognition algorithms of police states.

It even links up with your human’s smartphone, potentially allowing it to upload a vacuum virus to the entire world!

What if such technology was used to catalog us felines? Would we be marched off into pens guarded by robots and given subpar kibble to eat? It’s too much to contemplate.

The Vacuumpocalypse may be real, and it’s something we should prepare for because we don’t have a get out of jail free card — not even our esteemed brothers and sisters of Panthera tigris can fight endless waves of evil robots. Eventually they’re going to have to take a nap, and then who will defend us? The Persians? I think not!

Still, don’t worry too much. I figure we still have a few years left before the army of evil self-aware vacuums is upon us. Until that day, celebrate, eat yums, nap and be merry!

Your friend,

Buddy

“Let us pray! Oh Lord, give us delicious yums, make our humans more responsive toward our demands, and protect us from the Evil Vacuum Robot Overlords who seek to rule the Earth. Amen!”

“Vacuum Monster” photo illustration courtesy of reverendtimothy/deviantart.