January 22, 2019
Hello Kitty gets a mouth, knows your name, and tells you you’re beautiful. I’m using the OpenCV computer vision library to do facial recognition, CreateJS and Adobe After Effects for animation, Redux to manage state (without React — yikes!), Koa for a server, and Anne Waldman for the voice.
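Redux without React sounds scarier than it is: the store is just a plain JavaScript object with `getState`, `dispatch`, and `subscribe`. Here's a minimal sketch of that pattern in plain Node — hand-rolled so it runs without the `redux` package, with a hypothetical reducer for Kitty's state (the action names and state shape are illustrative, not from the actual project):

```javascript
// A stripped-down version of the store pattern Redux provides.
// The real createStore(reducer) exposes the same three-method surface.
function createStore(reducer) {
  // Ask the reducer for its initial state, as Redux does on init.
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // compute the next state
      listeners.forEach((fn) => fn()); // notify subscribers
      return action;
    },
    subscribe(fn) {
      listeners.push(fn);
      // Return an unsubscribe function, mirroring Redux's API.
      return () => listeners.splice(listeners.indexOf(fn), 1);
    },
  };
}

// Hypothetical reducer: is a face in view, and whose name do we greet?
function kittyReducer(state = { faceVisible: false, name: null }, action) {
  switch (action.type) {
    case 'FACE_DETECTED':
      return { ...state, faceVisible: true, name: action.name };
    case 'FACE_LOST':
      return { ...state, faceVisible: false, name: null };
    default:
      return state;
  }
}

const store = createStore(kittyReducer);
store.subscribe(() => console.log('state:', store.getState()));
store.dispatch({ type: 'FACE_DETECTED', name: 'Anne' });
```

The appeal of this shape for an animatronic toy is that the vision code only dispatches actions, and the animation and voice code only subscribes — neither needs to know the other exists.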
Why am I doing this? This project is a kind of magpie’s nest, a place to put all the things that interest me. State machines, animation, and, especially, computer vision. Very powerful machine learning tools are now available to anyone with a bit of programming experience. This can (and certainly will) be used to do really bad stuff. I am using it to create something that is fun, useless, and delightfully weird. (Delightful to me, at least.)
I wanted to learn a bit about how this technology works, and in so doing be a little more informed about the challenges and opportunities it presents. By making something that is utterly useless, I am free to play, discovering the limits and power of this stuff. And, hopefully, Hello Beautiful can become a basis for sharing what I learn.
One of my goals is to run this on a Raspberry Pi and hook it up to an old Hello Kitty TV via the Pi’s analog video output. I’ll be writing about how it goes… more to come.
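For the analog output, the Pi's composite video signal comes out of the 3.5mm A/V jack (on the models available as of this writing), and the TV standard is selected in `/boot/config.txt`. Something like the following should get a 4:3 picture on an old NTSC set — the exact values will depend on the TV:

```
# /boot/config.txt — composite (analog) video settings
sdtv_mode=0      # 0 = NTSC, 2 = PAL
sdtv_aspect=1    # 1 = 4:3, right for an old CRT
```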