BadMouth Pcomp/ICM

So finally done with finals. Well, almost done… We start off thinking about a certain idea, and we definitely spend a certain amount of time trying to imagine the outcome. It usually is great in theory; when it comes to making it happen, challenges arise in different aspects of the project.

In my first blog post for BadMouth, I broke the project down into five categories.

I will go through those five sections to show how I ended up developing the project: methods I changed, methods I learned, methods I dropped…

Speech/Voice:

Speech recognition

After research and time spent checking speech/voice recognition and trying out different libraries, the easiest way for me was to go back to p5.js. It's not just about easy, it's about what I can accomplish in a certain amount of time with my knowledge. The speech library is a bit challenging: it is less developed than other libraries and has fewer examples of usage… It took some time to get it started (thanks to the ICM help session support, I was able to set it off). Continuous recognition also only works in Chrome, so the desktop p5.js editor was not a possibility; my only choice was the web editor version, which I like less.
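Just for reference, a bare-bones version of this kind of setup with the p5.speech recognizer; the names and language code here are only illustrative, not the actual sketch:

```javascript
// Minimal p5.speech recognition sketch (continuous recognition, Chrome only).
// Assumes p5.speech.js is included in index.html alongside p5.js.
let myRec;

function setup() {
  noCanvas();
  myRec = new p5.SpeechRec('en-US', gotSpeech); // recognizer + result callback
  myRec.continuous = true;      // keep listening instead of stopping after one phrase
  myRec.interimResults = false; // only report finished phrases
  myRec.start();
}

function gotSpeech() {
  if (myRec.resultValue) {
    console.log(myRec.resultString); // the recognized text
  }
}
```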
Through the process of trying out libraries, doing research, and also trying to work through Python instead of p5.js, I got to learn a lot about speech recognition, the difference with voice recognition, some history of how it all started, and the AI possibilities/limitations of the future… All amazing stuff to read and learn about. If nothing else, learning about the limitations of what's out there, what can be done, and what can't, is a very good start.
On that path, one great reference was of course given by the amazing Allison Parish: wit.ai. It's simple and easy to follow; it lets you develop the scenario easily and gives it to you as JSON files, a format that's easy to use in code. With more time in hand, one could really go into creating a full personal library based on personal scenarios.
That process made me go “again” through all the tutorials out there on JSON and APIs and how to get data into code. I had to go through many examples and apply tutorials over and over to really get it.
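As a tiny illustration of the JSON-loading pattern those tutorials cover (the file name and structure here are hypothetical):

```javascript
// Loading a JSON file of phrases in p5.js; the file name and structure are hypothetical.
let phrases;

function preload() {
  phrases = loadJSON('badmouth-lines.json'); // e.g. { "lines": ["...", "..."] }
}

function setup() {
  noCanvas();
  console.log(phrases.lines[0]); // the data is ready to use once setup runs
}
```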
When I first started working with p5.js I only knew how to repeat the examples given, work through them a bit, develop some… but it was a lot of copy-pasting. The thing that makes me happy, though it's not much, is that for the first time, while doing the speech coding for the final, I felt I could apply some logic of my own: now I can use an if statement or a true/false check, something I wasn't able to apply logically before… good to know it gets better.

Other than making the library work, there was the part of connecting it serially. I had done serial communication several times through Pcomp homework, but I guess it also has to do with the different sensors we use: the setup is the same, but the logic is sometimes different from one sensor to another, and getting the right values in p5 was tiring.
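For context, the p5 side of that serial connection looks roughly like this with the p5.serialport library; the port name is a placeholder, and it assumes the p5 serial server app is running:

```javascript
// Reading sensor values from the Arduino in p5.js via p5.serialport.
// Assumes the p5.serialcontrol app (serial server) is running;
// the port name below is just a placeholder.
let serial;
let distance = 0;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // replace with your port
  serial.on('data', gotData);          // called whenever data arrives
}

function gotData() {
  let inString = serial.readLine();    // Arduino prints one value per line
  inString = trim(inString);
  if (inString.length > 0) {
    distance = Number(inString);       // the ultrasonic distance reading
  }
}
```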

I worked on several sketches for BadMouth; below are the two presented in ICM as finals:
The first is a speech that BadMouth does: talking random shit to people (a bit long):

https://alpha.editor.p5js.org/renanou/sketches/ryz4jmPmg
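Stripped of the actual scenario, the speaking side boils down to the p5.speech synthesizer, something like this (the lines are stand-ins):

```javascript
// BadMouth speaks a random line using the p5.speech synthesizer.
// The lines here are stand-ins, not the real scenario.
let voice;
let lines = [
  'stand-in insult one',
  'stand-in insult two',
  'stand-in insult three'
];

function setup() {
  createCanvas(200, 200);
  voice = new p5.Speech();      // text-to-speech object from p5.speech
}

function mousePressed() {
  voice.speak(random(lines));   // say a random line on click
}
```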

The second is a made-up conversation between BadMouth and myself:

audio sample of the conversation

https://alpha.editor.p5js.org/renanou/sketches/SygueGqQl

When it came to combining them serially, I had to change the strategy of how I use the scenarios. I'm using an ultrasonic sensor, which not only detects presence but, more importantly, distance… so it was only logical to work with what the sensor does, meaning assigning text linked to the distance people are standing from BadMouth. BadMouth will still be bad, but now, if people are close, he will talk about that proximity and mention that they are close; if people are far, he'll call them to come closer… This logic makes more sense with an ultrasonic sensor than just assigning a random speech without taking the role of the sensor into consideration. Below is the sketch I worked on, creating different arrays of words for the different ranges detected by the ultrasonic sensor, depending on the distance or proximity of a person (a rough illustration of the logic follows the sketch link):

Video: testing the ultrasonic ranges

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/Sy8ArBRXl
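And a rough, simplified illustration of that range-to-phrase logic; the thresholds and phrases are made up, and in the real sketch the distance value arrives over serial as described above:

```javascript
// Rough illustration of the range-to-phrase logic (not the actual sketch).
// Thresholds and phrases are made up; `distance` really comes in over serial.
let voice;
let distance = 200; // pretend ultrasonic reading, in cm
let closeLines = ['back off, you are way too close'];
let midLines   = ['oh great, another visitor'];
let farLines   = ['hey you, come closer, I do not bite'];

function setup() {
  createCanvas(200, 200);
  voice = new p5.Speech();
}

function mousePressed() {
  // pick a phrase array based on which range the person falls into
  if (distance < 40) {
    voice.speak(random(closeLines));   // right in front of BadMouth
  } else if (distance < 120) {
    voice.speak(random(midLines));     // within conversation range
  } else {
    voice.speak(random(farLines));     // far away: call them over
  }
}
```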

As for the conversation with BadMouth, I had presented in ICM a made-up scenario to show speech recognition… but that scenario doesn't make sense for the project itself, so I decided to rework it to respond to one thing people might actually do or say. In this scenario it was logical to insult BadMouth: “Fuck you,” “Fuck you, BadMouth,” or “Fuck you, BadMouth, you are a loser!” This conversation is not linked to the sensor, since it's the people's output and initiative and not the role of the sensor (a simplified version of the keyword check follows the sketch link).

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/HybnhbDml
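The core of it is just checking the recognized string for the insult, roughly like this (the reply is a placeholder):

```javascript
// Core of the "talk back" sketch: check the recognized speech for the insult
// and answer. The reply is a placeholder; same p5.SpeechRec setup as before.
let myRec, voice;

function setup() {
  noCanvas();
  voice = new p5.Speech();
  myRec = new p5.SpeechRec('en-US', gotSpeech);
  myRec.continuous = true;            // keep listening (Chrome only)
  myRec.start();
}

function gotSpeech() {
  if (!myRec.resultValue) return;
  let heard = myRec.resultString.toLowerCase();
  if (heard.includes('fuck you')) {   // someone insulted BadMouth
    voice.speak('is that the best you can do?');
  }
}
```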

 

 

Motion detector:

Initially I had in mind to work with a PIR sensor. Once I discussed it in class with my great teacher Benedetta Piantella, she suggested ultrasonic and camera detection instead of the PIR, and of course, as she knows better, she was right. I went through the different tests posted on the blog earlier… Working with the PIR was really boring and not satisfying: it takes a lot of time to reset after detecting, so a hand could be moving and the responses don't really follow, because between one movement and another it has to reset.

Moving to the ultrasonic sensor was great indeed. The values are detailed, and there is a really good margin for getting creative with it (if time is available). I tried different sketches on the Arduino and always got great values, which is why I decided to adopt it for this project.

The camera would ideally have been great as well because, as Benedetta Piantella suggested, you could have BadMouth make comments when it sees a certain color. Say someone wearing a yellow shirt is standing at a certain proximity to BadMouth; BadMouth could comment on the yellow shirt. But when I tried testing with the camera I found it complicated, and I had decided to keep it till last because I was running out of time and needed to lock down and work on the project. To make up for that color-detecting idea, I also tested a color sensor; I thought it could be cool to combine the ultrasonic and the color sensor instead of the camera. The color sensor values and testing were satisfying, but the issue with the color sensor is that you really need to be at very close proximity to make it work… I don't think that makes sense in the scenario of BadMouth, where people aren't supposed to be that close… Nevertheless, working with it added another cool bit of learning to all that.

Ultrasonic details:

HC-SR04 Ultrasonic Distance Measuring Sensor Module for Arduino / NewPing Library

The ultrasonic sensor has a NewPing library that makes the values, and working with it in general, much nicer. I went through using it as well.

Below are different sketches I tried out from various sources, then changed and worked through, for the ultrasonic sensor, which has two pins to consider: the trigger pin and the echo pin.

One with the NewPing library and one without:

sketch one

sketch two

Mouth Motion:

In this section I wanted to make the lips of BadMouth move while he talks. Unfortunately, due to time constraints, I took so much time on the motion detection and speech recognition sections that this section was jeopardized. I am planning to complete it during winter break, if nothing else at least for my own satisfaction and learning.

The challenge was to work out the mechanics of making the lips move up and down. I thought that if I mounted a clip at the end of each of two motors and then attached each clip to one lip, the physical restraint would keep the motors from spinning completely, and yet they would probably make some kind of random movement (trying to spin their way out), which would be exactly what I need: just a small movement of the lips, good enough to make the impression. But realistically my imagination of the process was off: the motors worked, but once attached to the lips they didn't make a small random movement; instead they didn't move at all… Because this was tried at the last minute, I didn't have the chance to try out plan B, which I think could be the way to do it: the gripper kit tool and a servo motor. The gripper kit is simply a kind of small clip wired to a servo; unlike the motors I was using, it doesn't spin but instead does exactly the motion I need, which is the opening and closing of both ends of the clip.

samples of what I tried to do:

That was a sample powered by batteries, not yet linked to the Arduino, just to test whether it would work with BadMouth.

video of the motor/clip spinning (before attaching it to the lips).


What I think I should experiment with next:

Gripper tool kit and servo

Here's a reference from ServoCity on how it works…

Mouth Design

The design of the mouth would have been made from scratch if time had been available; for now I had to find solutions to get the feel of a realistic-looking mouth.
I found a realistic mask made from latex and decided to work with it for the time being. I laser-cut a panel with an opening for the mouth to sit in (a stuck-in-the-wall look) and another opening above it for the ultrasonic motion sensor.

Pictures below:

Scenario for BadMouth

Ideally, the scenario for BadMouth should be written personally and recorded with a distinct voice character assigned to him. For now I just worked with clichéd, cheesy, funny quotes you can find on the net and assembled them so they make sense, injecting in between some human sounds and words we say in our daily routine to give BadMouth a more humanistic feel, along with some sentences I made up, in order to put a draft scenario together for now.

Thank you

-Allison Parish and Benedetta Piantella, for the time you give, the great teaching, the help and support, but mostly for being who you are as individuals: a great inspiration to us all.

-Ben Light, for the advice and help you give even when it's not linked to your course.

-Mithru Vigneshwara, for supporting and helping out with the coding, and Manning Qu, for brainstorming with me over motor mechanics and fabrication.
