BadMouth started off as an installation with no purpose in life but to talk shit to people. That is still the scenario; however, execution is a reality check between how you imagine a project will go and how it actually ends. Not that the ending is negative, but there are definitely detours, challenges, and limits to our expectations.
I probably had Artificial Intelligence in mind. I ended up with a fun text-to-speech sketch in p5.js. Eventually it is all a learning and experimentation process that one has to go through to reach the best and simplest way to code the project.
First I experimented a bit with Python, working through its tutorials and some speech recognition sketches. It turned out to be an impossible mission. I then researched JavaScript, APIs, and the different libraries available online, until I went back to p5.js. Working with the speech library in p5.js was a challenge; it has some aspects that need reviewing before people can work with it comfortably.
One great site that the amazing Allison Parrish told me about is Wit.ai, effectively a great platform for creating your own speech recognition scenario and assigning it to any bot or application you are working on. It returns clean JSON files that can be used with the p5.js library.
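As a sketch of how that JSON could drive BadMouth: Wit.ai reports the intents it detected in an utterance along with confidence scores, and the sketch can pick the strongest one to decide how to respond. The field names below follow Wit.ai's message-response shape as I understand it, and the threshold value is my own assumption, so treat this as a sketch rather than the final code:

```javascript
// A minimal sketch, assuming a Wit.ai-style message response like:
// { "text": "...", "intents": [{ "name": "...", "confidence": 0.93 }] }
// (field names assumed from Wit.ai's message API -- verify against your app)

// Pick the highest-confidence intent, or null if nothing is confident enough.
function topIntent(witResponse, threshold = 0.7) {
  const intents = witResponse.intents || [];
  const best = intents.reduce(
    (a, b) => (b.confidence > (a ? a.confidence : 0) ? b : a),
    null
  );
  return best && best.confidence >= threshold ? best.name : null;
}

// Example payload (made up for illustration):
const detected = topIntent({
  text: "you are useless",
  intents: [
    { name: "insult", confidence: 0.93 },
    { name: "greeting", confidence: 0.12 },
  ],
});
console.log(detected); // → "insult"
```

In a p5.js sketch the response object would come from `loadJSON` on the Wit.ai endpoint; the selection logic stays the same.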
I have prepared two sketches: one dictates the speech BadMouth will deliver when it senses people approaching through the ultrasonic sensor, and the other is a scripted conversation with BadMouth. Eventually that second sketch will be extended so people can insult BadMouth and get different responses back.
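For the insult-and-response idea, the core logic could be as simple as matching keywords in the recognized speech and picking a comeback. Here is a minimal sketch of that mapping; the categories and lines are placeholders I made up, and in the actual p5.js sketch the `heard` string would come from the speech recognizer's result callback:

```javascript
// Hypothetical comeback table -- categories and lines are placeholders.
const comebacks = {
  stupid: ["I learned everything I know from watching you."],
  ugly: ["I'm a speaker. You're the one with a face."],
  default: ["Is that the best you've got?"],
};

// Return a comeback for whatever the visitor said.
// In the installation this would run inside the speech recognition
// result callback, with `heard` being the transcribed utterance.
function badMouthReply(heard) {
  const text = heard.toLowerCase();
  const key = Object.keys(comebacks).find(
    (k) => k !== "default" && text.includes(k)
  );
  const options = comebacks[key || "default"];
  return options[Math.floor(Math.random() * options.length)];
}

console.log(badMouthReply("you are so stupid"));
// → "I learned everything I know from watching you."
```

Adding more lines per category makes the random pick actually vary, which keeps BadMouth from repeating itself.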
Final videos of the serial communication and the BadMouth installation will be available next week, as soon as the Pcomp part is over.
For now, below are the two coding platforms I will be using for BadMouth.
Conversation with BadMouth