The Buddy Comics empire expands with two new titles and a new installment of The Adventures of Baby Bud.
We meet Mister Meowster, the most legendary feline investigator of his neighborhood, who’s called upon to use his Sherlockian skills in search of missing mice. In 11-Dimensional Hyperspace, SpaceCat tunnels to the next iteration of reality in her starship. Finally, Baby Buddy contends with a dark chapter from the past, when there was a shortage of the very stuff of life.
All covers created with natural language AI tools and Pixlr.
Mister Meowster is the greatest detective for at least three blocks.
SpaceCat tunnels through 11-dimensional hyperspace to reach the next stack in the braneworld! M-theory enthusiasts and cat lovers won’t be able to put this down!
Before the Great Turkey Shortage of 2021, there was the Great Turkey Shortage of 2015. In Chapter 6, we visit that grim chapter in Buddy’s life, when he was forced to eat chicken.
When Alan Turing, the father of artificial intelligence, posed the heady question “Can machines think?”, he inspired generations of computer scientists, philosophers, physicists and regular people to imagine the emergence of silicon-based consciousness, with humanity taking the godlike step of creating a new form of life.
And when science fiction writer Philip K. Dick wrote his seminal 1968 novel, “Do Androids Dream of Electric Sheep?” — the story that would eventually become Ridley Scott’s 1982 classic Blade Runner — he wondered what makes us human, and whether an artificial being could possess a soul.
It’s safe to say neither of those techno-prophets was thinking of fledgling AI algorithms, representing the first small steps toward true machine-substrate intelligence, announcing themselves and their usefulness to the world by helping us watch Felis catus take a shit.
And yet that’s what the inventors of the LuluPet litter box designed an AI to do, and it’s what software engineer and YouTuber Estefannie did for her cat, Teddy, who’s got a bit of a plastic-eating problem.
“The veterinarian couldn’t tell me how much plastic he ate, and it would cost me over $3,000 [to find out]. So I didn’t do it,” Estefannie explains in a new video. “Instead, the vet gave me the option of watching him go to the bathroom. If he poops and there’s no plastic in his intestines, then he won’t die, and he might actually love me back.”
Estefannie casually described how she wrote a Python script, set up a camera and motion sensor, and rigged it to take photos of Teddy doing his business. But, she explained, there was “a tiny problem”: Luna the Cat, aka her cat’s cat.
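The script-plus-sensor setup she describes boils down to a simple loop: wait for the motion sensor to fire, then trigger the camera. Here’s a minimal sketch of that idea with stand-in functions in place of real hardware — the sensor threshold, filenames, and function names are all invented for illustration, not Estefannie’s actual code:

```python
def motion_detected(reading):
    """Treat any sensor reading above a threshold as motion (invented cutoff)."""
    return reading > 0.5

def capture_photo(timestamp):
    """Stand-in for the camera: return a filename tagged with the time."""
    return f"litterbox_{timestamp}.jpg"

def monitor(sensor_readings):
    """Snap a photo whenever the (simulated) motion sensor fires."""
    photos = []
    for t, reading in enumerate(sensor_readings):
        if motion_detected(reading):
            photos.append(capture_photo(t))
    return photos

# Simulated sensor trace: motion at steps 1 and 3.
print(monitor([0.1, 0.9, 0.2, 0.8]))  # ['litterbox_1.jpg', 'litterbox_3.jpg']
```

In the real build the readings would come from a GPIO-connected motion sensor and the capture call would drive an actual camera, but the control flow is the same.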
“This is Luna, this is technically not my cat, this is Teddy-Bear’s cat, and she uses the same litter box as Teddy,” she explained.
For that, she’d need more than a script. She’d have to build a machine learning algorithm to gorge itself on data, cataloguing tens of thousands of photos of Teddy and Luna along with sensor data from the litter box itself, to learn to reliably determine which cat was using the loo.
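Stripped to its essentials, the which-cat problem is a two-class classifier: turn each photo into numbers, then label it with whichever cat’s training examples it most resembles. This toy nearest-centroid sketch uses made-up feature vectors (say, apparent size and ear height) instead of real image features — the numbers, features, and approach are illustrative, not the actual model in the video:

```python
def centroid(samples):
    """Average a list of feature vectors into one representative vector."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample, centroids):
    """Label a sample with the nearest cat's centroid."""
    return min(centroids, key=lambda label: distance_sq(sample, centroids[label]))

# Hypothetical training photos as (size, ear height) feature pairs.
teddy_photos = [[6.1, 2.0], [5.9, 2.2]]
luna_photos = [[3.9, 1.4], [4.1, 1.6]]

centroids = {"Teddy": centroid(teddy_photos), "Luna": centroid(luna_photos)}
print(classify([6.0, 2.1], centroids))  # Teddy
print(classify([4.0, 1.5], centroids))  # Luna
```

A real system would learn the features themselves from those tens of thousands of photos, but the core decision — “whose training data does this look most like?” — is the same.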
So Estefannie decided it was a good opportunity to “completely remodel” Teddy’s “bathroom,” including a compartment that would hide the bespoke system monitoring his bowel movements. The system includes sensors, cameras and lights to capture still images of Teddy dropping deuces in infrared, and a live thermal imaging feed of the little guy doing his business. (Teddy’s luxurious new bathroom turned out to be too dark for conventional cameras, thus the pivot to infrared.)
From there, Estefannie manually calculated how long Teddy’s number ones and twos took, and cross-referenced that information with photo timestamps to help determine the exact nature of Teddy’s calls of nature.
When all the data is collected, Estefannie’s custom scripts send it to an external server, which analyzes the images from each of Teddy’s bathroom visits and renders a verdict on what he’s doing in there.
Finally, Estefannie gets an alert on her smartphone when one of the cats steps into the litter box, allowing her the option of watching a live feed and, uh, logging all the particulars. The software determines if a number two was successful, and keeps detailed records so Teddy’s human servant can spot aberrations over time.
“So now I definitely know when Teddy-Bear is not pooping and needs to go to the hospital,” she said.
I am not making this up.
For her part, Estefannie says she’s not worried about a technological singularity scenario in which angry or insulted machines, newly conscious, exact revenge on humans who made them do unsavory tasks.
“Did I make an AI whose only purpose in life is to watch my cats poop?” Estefannie asked, barely keeping a straight face. “Mmmhmm. Will it come after me when the machines rise? No! Ewww!”
NEW YORK — Life is full of unpleasantness, like being able to see the bottom of your bowl. But what if someone told you he could fix that?
Enter Buddy the Cat’s SmartHuman Feeding System™, a device that harnesses the power of AI and cutting-edge hardware to make sure you never see the bottom of your bowl again.
SmartHuman was designed with weight sensors and an AI-enabled camera system to determine when the food in your bowl is getting low. If the on-board algorithms detect low levels of kibble, SmartHuman sends a text to your servant every 15 seconds until the device registers fresh kibble in the bowl.
And if the unthinkable should happen and you really are subjected to the horrific sight of the bottom of your bowl, SmartHuman’s built-in klaxon and emergency lights guarantee your human servants won’t have a second’s peace until they do what they’re supposed to and promptly refill your bowl. The system even requires the human to issue an apology before the sound and lights subside.
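The (entirely fictional) SmartHuman logic described above is really just a threshold check on the bowl’s weight sensor, repeated on a timer. Here’s a tongue-in-cheek sketch of that loop with simulated scale readings — the 50-gram threshold and the alert text are invented, in keeping with the product itself:

```python
def smarthuman_alerts(weight_readings, threshold_grams=50):
    """Simulate the (fictional) SmartHuman loop: one 'text the human' alert
    per reading while the bowl sits below the kibble threshold, silence once
    it's refilled. In the satire, readings arrive every 15 seconds."""
    alerts = []
    for grams in weight_readings:
        if grams < threshold_grams:
            alerts.append("TEXT: Refill the bowl. Now.")
        else:
            alerts.append("quiet")
    return alerts

# A bowl that runs low, then gets refilled by a suitably chastened human.
print(smarthuman_alerts([120, 48, 45, 130]))
```

The klaxon, emergency lights, and mandatory apology are left as an exercise for the sufficiently motivated cat.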
“I haven’t had to meow in annoyance or raise a paw once since I got the SmartHuman system,” raved Def the Defenestrator, a popular catfluencer with more than 240,000 followers on Meower. “The threat of getting bombarded with 110-decibel alerts to refill my bowl is enough to make my human servant get off her lazy behind and make sure my bowl is refilled before there’s a problem.”
The SmartHuman’s inventor has a background in feline teleportation and string cheese theory, but was prompted to design his device when he saw the bottom of his dry food bowl twice in as many months.
“I was literally starving,” Buddy said, adding that his “lazy human servant made me wait four minutes and 13 seconds before he refilled my bowl” during the second incident.
Vowing never to go hungry again, the entrepawneur built the first SmartHuman prototype in his garage, using a Raspberry Pi and a digital scale he ordered off Amazon.
He brought his idea to Shark Tank in late 2021 and successfully pitched Mr. Wonderful, who bought a 15 percent stake in SmartHuman™ in exchange for a $150,000 investment. The product entered production earlier this summer and is now available in stores and online.
“Cats love the SmartHuman™, but humans? Not so much,” Buddy the Cat admitted.
Not one to rest on his laurels, the inventive feline said he’s working on a software update that will make the device compatible with wet food as well. In early beta testing, SmartHuman successfully prompted humans to feed wet food to their feline masters on time. Wet Mode includes a new feature as well: If the wet food remains untouched after a three-minute timer elapses, SmartHuman sends another text to the human, informing them the food isn’t satisfactory and should be replaced with another meal.
“Humans are stupid, and they don’t understand when we meow to them in complaint because we don’t feel like eating tuna or whatever on a given night when we’d prefer turkey,” Buddy said. “When this update goes live, cats will be able to enjoy meals of their choosing, every time.”
In the photograph, Buddy is sitting on the coffee table in the classic feline upright pose, tail resting to one side with a looping tip, looking directly at me.
The corners of his mouth curve up in what looks like a smile, his eyes are wide and attentive, and his whiskers are relaxed.
He looks to me like a happy cat.
Tably agrees: “Current mood of your cat: Happy. We’re 96% sure.”
Tably is a new app, currently in beta. Like MeowTalk, Tably uses a machine learning algorithm to determine a cat’s mood.
Unlike MeowTalk, which deals exclusively with feline vocalizations, Tably relies on technology similar to facial recognition software to map your cat’s face. It doesn’t try to reinvent the wheel when it comes to interpreting what facial expressions mean — it compares the cats it analyzes to the Feline Grimace Scale, a veterinary tool developed following years of research and first published as part of a peer-reviewed paper in 2019.
The Feline Grimace Scale analyzes a cat’s eyes, ears, whiskers, muzzle and overall facial expression to determine if the cat is happy, neutral, bothered by something minor, or in genuine pain.
It’s designed as an objective tool to evaluate cats, who are notoriously adept at hiding pain for evolutionary reasons. (A sick or injured cat is a much easier target for predators.)
But the Feline Grimace Scale is for veterinarians, not caretakers. It’s difficult to make any sense of it without training and experience.
That’s where Tably comes in: It makes the Feline Grimace Scale accessible to caretakers, giving us another tool to gauge our cats’ happiness and physical condition. With Tably we don’t have to go through years of veterinary training to glean information from our cats’ expressions, because the software is doing it for us.
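The grimace-scale idea — score a handful of facial “action units” and map the total to a verdict — can be sketched in a few lines. The cutoffs and scores below are invented for illustration; they are not the validated clinical thresholds from the published scale, which is exactly why the real tool needs veterinary training (or software like Tably) to apply:

```python
# Each action unit gets 0 (relaxed), 1 (somewhat changed), or 2 (markedly
# changed), and the total maps to a Tably-style verdict. Thresholds invented.
ACTION_UNITS = ["ears", "eyes", "muzzle", "whiskers", "head"]

def grimace_total(scores):
    """Sum the per-unit scores into a single grimace total."""
    return sum(scores[unit] for unit in ACTION_UNITS)

def verdict(scores):
    """Map a grimace total to a mood verdict (illustrative cutoffs)."""
    total = grimace_total(scores)
    if total <= 2:
        return "happy"
    if total <= 5:
        return "bothered"
    return "possible pain -- call the vet"

relaxed = {"ears": 0, "eyes": 0, "muzzle": 0, "whiskers": 1, "head": 0}
print(verdict(relaxed))  # happy
```

Tably’s contribution is automating the hard part: reading those per-unit scores off a photo of a real cat face.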
Meanwhile, I used MeowTalk early in the morning a few days ago when Buddy kept meowing insistently at me. When Bud wants something he tends to sound whiny, almost unhappy. Most of the time I can tell what he wants, but sometimes he seems frustrated that his slow human isn’t understanding him.
I had put down a fresh bowl of wet food and fresh water minutes earlier. His litter box was clean. He had time to relax on the balcony the previous night in addition to play time with his laser toy.
So what did Buddy want? Just some attention and affection, apparently:
I’m still not sure why Buddy apparently speaks in dialogue lifted from a cheesy romance novel, but I suppose the important thing is getting an accurate sense of his mood. 🙂
So with these tools now at our disposal, how much can artificial intelligence really tell us about our cats?
As always, there should be a disclaimer here: AI is a misnomer when it comes to machine learning algorithms, which are not actually intelligent.
It’s more accurate to think of these tools as software that learns to analyze a very specific kind of data and output it in a way that’s useful and makes sense to the end users. (In this case the end users are us cat servants.)
Like all machine learning algorithms, they must be “trained.” If you want your algorithm to read feline faces, you’ve got to feed it images of cats by the tens of thousands, hundreds of thousands or even by the millions. The more cat faces the software sees, the better it gets at recognizing when something looks off.
At this point, it’s difficult to say how much insight these tools provide. Personally I feel they’ve helped me understand my cat better, but I also realize it’s early days and this kind of software improves when more people use it, providing data and feedback. (Think of it like Waze, which works well because 140 million drivers have it enabled when they’re behind the wheel and feeding real-time data to the server.)
I was surprised when, in response to my earlier posts about MeowTalk and similar efforts, most of PITB’s readers didn’t seem to share my enthusiasm.
And that, I think, is the key here: Managing expectations. When I downloaded Waze for the first time it had just launched and was pretty much useless. Months later, with a healthy user base, it became the best thing to happen to vehicle navigation since the first GPS units replaced those bulky maps we all relied on. Waze doesn’t just give you information — it analyzes real-time traffic data and finds alternate routes, taking you around construction zones, car accident scenes, clogged highways and congested shopping districts. Waze will even route you around unplowed or poorly plowed streets in a snowstorm.
If Tably and MeowTalk seem underwhelming to you, give them time. If enough of us embrace the technology, it will mature and we’ll have powerful new tools that not only help us find problems before they become serious, but also help us better understand our feline overlords — and strengthen the bonds we share with them.
After a few days of patiently waiting, we finally have a winner in our unofficial contest from earlier this week.
Reader Romulo Pietrangeli got it right: None of the cats pictured in our April 13 post are real felines.
All nine images were created by the machine learning algorithm that powers the site This Cat Does Not Exist, a riff on the original This Person Does Not Exist, a site that uses generative adversarial networks (GANs) to create stunningly realistic images of people on the fly.
(Above: All six images are computer-generated using the same technology behind ThisCatDoesNotExist.)
“I’m basically at the point in my life where I’m going to concede that super-intelligence will be real and I need to devote my remaining life to [it],” Wang said. “The reaction speaks to how much people are in the dark about A.I. and its potential.”
Because the internet is ruled by cats, it was only a matter of time before a feline-generating version of the human-creating algorithm was brought online.
(Above: More artificially generated cats. Artifacts in the images can sometimes give away the fact that they’re fake, such as the third image in the second row, where part of the cat’s fur is transparent.)
A CNN article from 2019 explains how GAN technology works:
In order to generate such images, StyleGAN makes use of a machine-learning method known as a GAN, or generative adversarial network. GANs consist of two neural networks — which are algorithms modeled on the neurons in a brain — facing off against each other to produce real-looking images of everything from human faces to impressionist paintings. One of the neural networks generates images (of, say, a woman’s face), while the other tries to determine whether that image is a fake or a real face.
Wang, who said his software “dreams up a new face every two seconds,” told CNN he hoped his creations would spark conversation and get people to think critically about what they see in front of them. It looks like he’s achieved his goal.
Christopher Schmidt, a Google engineer who used the same technology to create fake home and rental interiors, agreed.
“Maybe we should all just think an extra couple of seconds before assuming something is real,” Schmidt told CNN.
Pietrangeli, for his part, says he can tell the difference: “All of the animal images,” he wrote, “lacked ‘aura.'”
Feline humor, news and stories about the ongoing adventures of Buddy the Cat.