We’re using the Adafruit animated eyes bonnet with Processing and the camera on the Raspberry Pi to render custom comic eyes 👀
Code can be found here: https://github.com/dkgrieshammer/rpi-eyes
A little late for Halloween, but maybe just right for German Carnival. So I’m using the Adafruit Eyes Bonnet.
Follow their directions on how to install all the stuff:
https://learn.adafruit.com/animated-snake-eyes-bonnet-for-raspberry-pi/software-installation
The trick we want to make use of is buried in fbx2.c (and its compiled binary, fbx2). It scrapes parts of the main screen and reroutes them to the two connected OLED displays (check fbx2.c to get an idea of what’s going on).
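To get a feel for which part of the screen ends up on which OLED, here’s a tiny Processing sketch that just outlines the two halves. The 640×320 window size and the left-half/right-half split are my assumptions from reading fbx2.c, not something the repo guarantees; verify the actual crop/scale offsets for your setup.

void setup() {
  size(640, 320);  // assumed screen size; match whatever fbx2 expects on your Pi
}

void draw() {
  background(0);
  noFill();
  stroke(255, 0, 0);
  rect(0, 0, width/2 - 1, height - 1);        // left half -> left OLED (my assumption)
  stroke(0, 255, 0);
  rect(width/2, 0, width/2 - 1, height - 1);  // right half -> right OLED (my assumption)
}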
Because I’m a lazy fuck I haven’t changed their scripts yet (this also lets you switch back to their pretty cool but more or less random eye movement by simply changing one filename).
// To-do: I think I could use some image-detection add-on board, a cam like the Pi Cam, or just add some Python TensorFlow Lite stuff with the PiCam (I intend to become moar familiar with it at some point; btw, this pun was intended 🥳).
If you think about cleaning up: adjust pi-eyes.sh so it doesn’t install all the stuff we’re not using, and have a look at fbx2.c.
👍Alright, so since we’re lazy and we’re using plain Processing, we just rename the Python file that would normally be called. In my case that’s the “eyes.py” in /boot/Pi_Eyes. So what’s happening: the Pi would normally run eyes.py on boot (afaik the stock install launches it from /etc/rc.local), but since we rename it, it never gets called (I know, it’s a really ugly hack 🙂).
$ cd /boot/Pi_Eyes/ # move to the Pi_Eyes folder
$ sudo mv eyes.py eyesBAK.py # rename the file so the boot script can't find it
$ sudo reboot # reboot for the change to take effect
Instead, we start a Processing sketch that renders whatever we want at exactly the locations where fbx2 grabs the graphics data and copies it to the two OLEDs; check out the Processing sketch in the repo to witness my crimes.
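For reference, here’s a minimal sketch of the idea, not the actual sketch from the repo: two comic eyes, one per half of the window, with pupils that follow the mouse. As above, the 640×320 size and the half-per-OLED mapping are my assumptions; adjust them to whatever fbx2 grabs on your setup.

// Minimal comic-eyes sketch (a sketch of the idea, not the repo's code)
void setup() {
  size(640, 320);  // assumed resolution; match what fbx2 expects
  noCursor();
}

void draw() {
  background(0);
  drawEye(width * 0.25, height * 0.5);  // left half -> left OLED (assumed)
  drawEye(width * 0.75, height * 0.5);  // right half -> right OLED (assumed)
}

// One comic eye: white eyeball plus a pupil that wanders toward the mouse
void drawEye(float cx, float cy) {
  float d = height * 0.8;
  fill(255);
  ellipse(cx, cy, d, d);
  // pupil offset, clamped so it stays inside the eyeball
  float dx = constrain(mouseX - cx, -d * 0.25, d * 0.25);
  float dy = constrain(mouseY - cy, -d * 0.25, d * 0.25);
  fill(0);
  ellipse(cx + dx, cy + dy, d * 0.4, d * 0.4);
}

On the Pi you’d run this fullscreen (fullScreen() instead of size()) so the sketch actually covers the region fbx2 snapshots.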
That’s all folks