Cactus pot

I like cacti, and a cactus pot is a good fit for a 3″ x 3″ x 3″ space.
Initially I was inspired by the design of this cactus pot; I like the randomness of the polygons and the different random triangular shapes…


For some reason I couldn’t figure out how to make the points of an object randomly editable, so I went back to a classical shape that was easy to make with Revolve, a tool we learned how to use in class.

End result:

 

My Avatar?

So trying to make an avatar that looks like me was kinda fun… But that depends on whether you are trying to make an avatar that realistically looks like you, or an avatar that represents some part of you… some fantasy, some idea, some persona.

I don’t mind having super powers or belonging to a certain fantasy world or era… as long as the avatar looks like me… or else I don’t know how people would relate.

Most of the 2D avatar trials look like cartoon characters. The 3D attempts are more realistic in terms of technology and how close they come to looking like a human being with real features… However it’s frustrating, because most game avatars only let you adjust to a certain extent… especially the body… they want their avatars to remain sexy…

I’ve come across the tools below… In one I was able to work on the body, but the face didn’t look much like me; others have more developed facial features, but despite many trials they still have limitations, especially the eyes, which for me are essential.






Final Project Part II

Human Behavior through Neural Networks.

Research and Examples:

-Artificial Neural Network for Human Behavior Prediction through Handwriting Analysis:
https://www.researchgate.net/publication/43808181_Artificial_Neural_Network_for_Human_Behavior_Prediction_through_Handwriting_Analysis

-Use of an Artificial Neural Network as a Model for Human Behavior:
A Proposed Framework for Investigation of the Question of Free Will
Association of Christians in the Mathematical Sciences https://acmsonline.org/lindell-ormsbee/

In order to get acquainted with neural networks, I decided as a first step to go through some existing sketches, work through them and understand them, and then come back and set up a sample of my own. I reworked the Flappy Bird sketch as a learning exercise; a toy example of the basic ingredients is sketched below.
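Just to keep the moving parts clear in my head (inputs, weights, a bias, and an activation that decides “flap” or “don’t flap”), here is a toy single-neuron example. It is only my own illustration of the idea, not code from the Flappy Bird sketch, and the numbers are made up:

// A toy single "neuron": weighted sum of the inputs plus a bias,
// passed through a step activation that outputs 1 ("flap") or 0 ("don't flap").
function perceptron(inputs, weights, bias) {
  var sum = bias;
  for (var i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sum > 0 ? 1 : 0;
}

// e.g. inputs could be [horizontal distance to the pipe, vertical distance to the gap]
console.log(perceptron([0.4, -0.2], [0.8, -1.5], 0.1)); // prints 1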

Final Project- Step 1

How to speculate on human behavior using algorithms.

I started this project with Rest of You, thinking about how technology can help us improve and enhance our behavior by recognizing the patterns we fall into.
The simple idea is to calculate the time spent on certain events or actions (according to the data inputs we add), which will show us a pattern in how we spend our time. Ideally, though, it would not stop there: the algorithm would also speculate on future behavior based on these data inputs from our past. A rough sketch of the tallying idea is below.
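To make the tallying idea concrete, here is a minimal JavaScript sketch, assuming the data inputs are simply hand-entered events with an activity label and a duration in hours (the sample entries below are made up):

// Sum the hours per activity so a pattern starts to show.
var events = [
  { activity: 'sleep',    hours: 7.5 },
  { activity: 'whatsapp', hours: 2.0 },
  { activity: 'work',     hours: 8.0 },
  { activity: 'whatsapp', hours: 1.5 }
];

function tallyByActivity(list) {
  var totals = {};
  for (var i = 0; i < list.length; i++) {
    var e = list[i];
    totals[e.activity] = (totals[e.activity] || 0) + e.hours;
  }
  return totals;
}

console.log(tallyByActivity(events)); // { sleep: 7.5, whatsapp: 3.5, work: 8 }

With enough of these entries collected over time, the totals (or their week-to-week changes) are the pattern that the speculation step would work from.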

-Link to my post in Rest of You

https://itp.nyu.edu/classes/roy17/author/nd1289/

-Fun example that shows time spent over events:

http://blazepress.com/2014/06/sleep-26-years-lifetime-find-much-time-spend-everyday-things/

-Tracking time

https://github.com/git-time-metric/gtm/wiki/Time-Tracking-Algorithm

-Facebook uses algorithms to manipulate NewsFeed

How Facebook News Feed Works

iMessage Data Visualization

I started this project trying to work with WhatsApp data. Retrieving data from WhatsApp is a bit complicated, or at least needs a longer process. I had downloaded iExplorer, an app that backs up your phone and stores its data on your Mac. However, certain data, like Facebook and WhatsApp, needs some sort of permission. I was able to retrieve an SQLite file and downloaded an SQLite browser to open it; it turns out it splits every fragment of the data into a separate CSV (Media.csv, messages.csv and chatlog.csv), which makes it complicated to work with, because the files opened oddly and incomplete.

 

I found that with iExplorer it is easier to retrieve iMessages and convert them directly to CSV files. So I did that, combined a few chats into one big document, and played with the data in p5.js. I was able to get the data visualized, but I think I need to work more on the output and the code to get a nicer outcome.

 

p5.js sketch:

I tried to give each category a color to see the outcomes in proportion to each other: messages, names, phone numbers, dates.
The idea is to show how the visualization of one conversation differs from another. This data is a combination of many conversations, just to get the data out.
As soon as the visualization is refined into something more appealing, I will post how the same sketch changes from one conversation to another.

var nanou;       // p5.Table holding the combined iMessage CSV export
var nanouArray;  // the same table as a 2D array

function preload() {
  nanou = loadTable(
    '2Messages_with_+15029055432.csv',
    'csv',
    'header');
}

function setup() {
  createCanvas(600, 600);
  nanouArray = nanou.getArray();
  // for (var i = 0; i < nanouArray.length; i++) {
  //   print(nanouArray[i]);
  // }
}

function draw() {
  background(0);

  // a new random anchor point and dot size every frame
  var x = random(width);
  var y = random(height);
  var diameter = random(10, 30);

  for (var i = 0; i < nanouArray.length; i++) {
    // the length of each text field is used as a vertical offset,
    // so longer messages/names/dates land lower on the canvas
    fill(255, 0, 0, 200);
    ellipse(i + x, y + nanou.getString(i, 'Message').length, diameter, diameter);

    fill(0, 0, 255, 200);
    ellipse(i + x + 10, 100 + nanou.getString(i, 'Phone Number').length, diameter, diameter);

    fill(0, 255, 255, 200);
    ellipse(i + x, 300 + nanou.getString(i, 'Name').length, diameter, diameter);

    fill(40, 140, 150, 100);
    ellipse(i + x, 170 + nanou.getString(i, 'Subject').length, diameter, diameter);

    fill(170, 0, 200, 100);
    ellipse(i + x, 450 + nanou.getString(i, 'Attachments').length, diameter, diameter);

    fill(255, 255, 0, 200);
    noStroke();
    ellipse(i + y, 400 + nanou.getString(i, 'Date').length, diameter, diameter);
  }
}

data p5js

E-unconscious

For the E-unconscious I will work with my WhatsApp history data and my music library.

 

1. WhatsApp

I have decided to dig into my WhatsApp conversations. As I use the app constantly with all my friends and family members, I think there is a lot to figure out from it. I’m currently going through Daniel Shiffman’s Processing tutorials on data, and the code provided by Dan O’Sullivan, to teach myself to write the code in Processing instead of p5.js.

 

2. Music Library

With age I’ve found myself shifting over the years towards a certain kind of music, and I’m interested in seeing if I can find some meaning in the choices I’m making and whether there is a certain pattern I’ve been following, linked to a certain mood. I don’t know yet how to go about it.

Though it may seem illogical, I do want to see if there is a link between the words and conversations in WhatsApp and the music library.

Talking

Listening to yourself is hard, but talking to yourself is even harder, because we probably tell ourselves what we want to hear.
I have analyzed two experiences of talking to myself:

1-Meditation:

Back home in Beirut I took yoga courses for four months, and we used to end each session with a 10-minute meditation. Back then, meaning a few years ago, I used to come out of the class feeling very positive and full of energy. It really had an impact on my well-being. I got to understand how one can work on oneself, training the body and mind to relax and think in a more focused manner. Four months is not a lot of time, but it was enough for me to get a hint of how it’s supposed to feel.

Since then I haven’t been taking care of my mind and body, or only very little and for short periods of time. For the sake of the exercise I decided to go back and try meditation, but this time, as suggested, to experiment through an application. I chose Headspace and did it several times over the course of a week. Weirdly, this time, after doing the breathing exercises again and reworking the meditation techniques, it actually felt very good, but instead of leaving me energized I almost ended up sleeping after each 10-minute session! It is a strange outcome, maybe because I have insomnia problems and haven’t been getting much sleep. It is something I want to look into, to understand why I have such an overwhelming need to sleep afterwards…

2-Open the tap

In order to talk to myself, I have this habit of taking a piece of paper, or opening a blank page in a Word document or in Notes, and trying to pour out my heart in words or very short phrases, without censoring myself. I call it the tap: like when you open the tap and let everything flow out, as if the piece of paper will be burnt right after. The interesting part is the amount of repetition you read and highlight after writing and writing; somehow I get to see clearly what is actually bothering me. More interestingly, I hide the pages, forget about them, and open them up after a month… It’s shocking to read your state of mind once you are out of it; you surprise yourself, and you understand and look at things from a different perspective because you have taken distance from that state of mind. I could share personal examples if asked in private (but will not post any on the blog; they are too private to expose). I have done this exercise over the years and it has always been beneficial in the moment itself, when it gives me the chance to let things out, meaning let the stress out, but also to look at my states of mind from a distance.

Listening

Part 1. Dreams

I started this project wanting to listen to my dreams. I thought it is exactly where all the illusive awareness of our conscious mind takes a break and lets the more truthful, hidden layers of ourselves come out… It is where the body surrenders to our fears and desires. I had a long talk with a therapist about dreams, and what matters even more than dreams is ‘nightmares’. According to the therapist, contrary to common belief, a nightmare is a more truthful expression of a certain desire or fear than a nice dream. The latter is ‘nice’ because it is a masked metaphor or symbol for something very raw or brutal that we usually defend against or hide by disguising it as something more acceptable or tolerable according to the moral and social values we grew up with. For instance, “eating a yummy cupcake” is a nice dream and “being raped” is an awful nightmare; the irony is that they could both symbolize a sexual desire, depending on the person, the context of their life, etc. The interesting difference to note is that people who are prone to nightmares can listen more closely to their subconscious than people who have nice dreams, because the latter add layer upon layer as a defensive mechanism to hide their fears or desires.

Another aspect of nightmares is the recurrent nightmare that stems from a certain fear or situation. A personal example: since I was a child, every time I am stressed over anything, whether it’s work related, love, or a family situation, I ironically dream this same recurrent dream: it’s war, I’m hiding because there is shooting in the streets, I finally reach home, knock on the door, and another family opens the door for me. Being a child of war, this dream somehow makes sense: fear of war, fear of losing my parents, etc. But now that I’m 35 years old I still wake up in a sweat, my heart pounding in fear, because I’ve had this recurrent dream again. The therapist said it still makes sense, because whenever I’m consciously in a distressing situation, my subconscious triggers this childhood nightmare… A long subject I’m willing to explore, and I’m interested to see if I can work with it in this class.

First failing trials:

Zeo/Kinect

Ideally, to listen to and monitor sleep, as suggested by Dan O’Sullivan, I should have worked with the Zeo sensor to monitor brain activity while sleeping. As the Zeo wasn’t available, another suggestion was to use the Kinect and monitor motion while sleeping. It stays on the physical level, true, but it could have detected some interesting aspects of how much one moves and fidgets while sleeping, and whether a certain pattern emerges in those motions over a certain amount of time (a week).
Working with the Kinect for this particular assignment wasn’t the easiest thing to do: I had to use a PC (a true nightmare), I had to learn how to make the Kinect work, and I had to work through Processing, a language I hadn’t used before… It was too much learning in a very small frame of time. So after many attempts to make it work, watching tutorials, etc., I tried over two nights and one afternoon nap to record the motion, but couldn’t. The video wouldn’t keep recording.

Though dreams and sleep monitoring failed for this assignment, I at least learned how to work with the Kinect and Processing, went through the experience, and got some idea of how to make it function and of the possibilities it offers for different projects. I will revisit sleep monitoring and dreams at some point with a different approach.

Part 2. Memories and Emotions

The Pulse Sensor

I moved on from dreams to trying to detect emotions through the pulse sensor. Colleagues said the pulse sensor can only detect physical activity, but I somehow had slightly different results: across the set of experiments I tried, I actually saw visible graphical changes occurring in reaction to some strong emotions. I decided to dig into my memory and think about what would trigger strong emotions and make my heart pound. I decided to experiment with some war memories; no matter how time flies, no matter how we learn to control our emotions with age, teach ourselves to forget, and master consciously putting on a poker face, those memories, even after 25 years, still make my heart race… I guess you can’t always fool the elephant.
Memories can trigger a lot of emotions.
I set up a series of videos to watch while wearing the pulse sensor, in the same spot, same position, not moving:

-Pulse sensor with two LEDs, blinking and fading when a pulse is detected:

ROY small IMG_3944

-Pulse sensor with Processing; I found a library with a visualizer built for it:

Roy processing smallx

-Pulse graph from the Arduino plotter: a sample of the pulse and the physical response it showed when I coughed (a rough p5.js version of this kind of plot is sketched below):
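For reference, here is roughly what a p5.js version of that plotter view could look like. This is a sketch of the idea, not the Processing visualizer I actually used: it assumes the Arduino prints one analog reading per line, that the p5.serialport library and its serial server are running, and the port name is a placeholder.

var serial;        // p5.SerialPort connection to the Arduino
var readings = []; // most recent pulse readings, one per pixel column

function setup() {
  createCanvas(600, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // placeholder port name
  serial.on('data', gotData);
}

function gotData() {
  var line = serial.readLine().trim();
  if (line.length > 0) {
    readings.push(Number(line));
    if (readings.length > width) {
      readings.shift(); // keep only the newest readings, one per pixel
    }
  }
}

function draw() {
  background(0);
  stroke(255, 0, 0);
  noFill();
  beginShape();
  for (var i = 0; i < readings.length; i++) {
    // analog readings are 0-1023; map them to the canvas height
    vertex(i, map(readings[i], 0, 1023, height, 0));
  }
  endShape();
}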

Below is the set of experiments I conducted. I ran different tests over the past few days, but these are the latest and the ones I selected:

1. The first video is a random video from YouTube that I had never watched before… to keep the element of surprise in testing my emotions.

recording 3 war video SMALL

2. A scene from the movie West Beyrouth, a very good movie about how the war started; it depicts a very true reality that closely resembles my childhood at school, especially since the school scenes in the movie were actually shot at my own French school, Lycée LAK. I only watched the movie once, long ago, and was never able to re-watch it… because it reminds me of what I work hard to forget.

west beyrouth small and trimmed
3. The third video is a Massive Attack song entitled “Safe from Harm.” Not that this song is linked to war, but it is a song that triggers memories, to compare against stronger emotions like the war ones.

recording 4 massive attack song SMALL

4. The fourth video is a song by Barbara called “Mon enfance” (“My Childhood”), which also triggers a lot of emotions when I listen to it.

recording mon enfance barbara SMALL

As a final analysis, it is clear to me that the pulse responds to strong emotions or memories. Even if the shift in the graph is not very big, it clearly shows a change in pattern. For instance, in West Beyrouth the graph shifts many times, but at the end of the scene, when they show the school entrance, there is a clear shift; it is a very specific place where I used to hang out with friends during school. The same happens when the kids are gathered in the school yard to sing the French anthem and then the Lebanese one. The response is also very noticeable while listening to Barbara, more so than with Massive Attack, which actually affects me less.

 

BadMouth Pcomp/ICM

So, finally done with finals. Well, almost done… We start off thinking about a certain idea, and we definitely spend a certain amount of time trying to imagine the outcome. It’s usually great in theory; when it comes to making it happen, challenges arise in different aspects of the project.

In my first blog post for BadMouth, I broke the project down into five categories.

I will go through the five sections to show how I ended up developing the project: methods I changed, methods I learned, methods I dropped…

Speech/Voice:

Speech recognition

After research and time spent checking out speech/voice recognition and trying different libraries, the easiest way for me was to go back to p5.js. It’s not just about ease; it’s about what I can accomplish in a certain amount of time with my knowledge. The speech library is a bit challenging: it is less developed than other libraries and has fewer usage examples… It took some time to get it started (thanks to the ICM help session support I was able to set it up). Continuous recognition also only works in Chrome, so the p5.js desktop editor was not an option; my only choice was the web editor version, which I like less.
Through the process of trying out libraries, doing research, and also trying to work in Python instead of p5.js… I got to learn a lot about speech recognition, the difference between it and voice recognition, some of the history of how it all started, and the AI possibilities and limitations ahead… all amazing stuff to read and learn about. If nothing else, learning about the limitations of what’s out there, what can be done and what can’t, is a very good start.
Along that path, one great reference was of course given by the amazing Allison Parrish: wit.ai. It is simple and easy to follow; it lets you develop the scenario easily and gives it back to you as JSON files, in a format that is easy to use in code. So with more time in hand, one could really go into creating a full personal library based on personal scenarios.
That process made me go “again” through all the tutorials out there about JSON and APIs and how to get data into code. I had to go through many examples and apply the tutorials over and over to really get it.
When I first started working with p5.js I only knew how to repeat the examples given, work through them a bit, and develop some… but it was a lot of copy-pasting. The thing that makes me happy, though it’s not much, is that for the first time, while doing the speech coding for the final, I felt I could finally apply some logic of my own: now I can use an if statement or a true/false check… something I wasn’t able to apply logically before… good to know it gets better. The basic shape of the p5.speech setup is sketched below.
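The basic shape of that p5.speech setup, stripped of the BadMouth scenario, looks roughly like this. It is a simplified sketch, not my exact final code, and the reply line is a placeholder:

var speech;   // text-to-speech (p5.Speech)
var listener; // speech recognition (p5.SpeechRec), Chrome only

function setup() {
  noCanvas();
  speech = new p5.Speech();

  listener = new p5.SpeechRec('en-US', gotSpeech);
  listener.continuous = true;      // keep listening
  listener.interimResults = false; // only react to final results
  listener.start();
}

function gotSpeech() {
  if (listener.resultValue) {
    var heard = listener.resultString.toLowerCase();
    console.log('heard: ' + heard);
    // placeholder reaction; the real sketch uses BadMouth's own lines
    if (heard.indexOf('badmouth') !== -1) {
      speech.speak('Oh, so now you want to talk to me?');
    }
  }
}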

Other than making the library work, there was the part about connecting it serially. Though I had done serial communication several times for PComp homework, I guess it also has to do with the different sensors we use: the setup is the same, but the logic is sometimes different from one sensor to another, and getting the right values into p5 was tiring.

I worked on different sketches for BadMouth; below are two sketches presented in ICM as finals.
The first is a speech that BadMouth gives: talking random shit to people (a bit long):

https://alpha.editor.p5js.org/renanou/sketches/ryz4jmPmg

The second is a made-up conversation between BadMouth and myself:

audio sample of the conversation

https://alpha.editor.p5js.org/renanou/sketches/SygueGqQl

When it came to combining them serially, I had to change my strategy for using the scenarios. Since I’m using an ultrasonic sensor, which detects not only presence but, more importantly, distance, it was only logical to work with what the sensor is doing, meaning assigning text according to the distance at which people are standing in front of BadMouth. BadMouth will still be bad, but if people are close he will talk about this proximity, mentioning that they are close; if people are far, he’ll call them to come closer… This logic makes more sense with an ultrasonic sensor than just assigning a random speech without taking the role of the sensor into account. Below is the sketch, which I worked by creating different arrays of phrases for the different ranges the ultrasonic detects, depending on the distance or proximity of a person (a simplified outline of the range logic also follows the sketch link below):

video below testing

ranges ultrasonic

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/Sy8ArBRXl
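In rough outline, the range logic amounts to something like the following. This is a simplified version with placeholder phrases and a placeholder serial port; the full sketch is in the link above. It assumes the Arduino prints one distance reading (in centimeters) per line over serial.

var serial;
var distance = 0; // latest ultrasonic reading, in cm

var closeLines = ['Whoa, back up.', 'You are standing way too close.'];
var farLines   = ['Hey, you over there. Come closer.', 'I can barely see you.'];

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // placeholder port name
  serial.on('data', gotDistance);
}

function gotDistance() {
  var line = serial.readLine().trim();
  if (line.length > 0) {
    distance = Number(line);
  }
}

// pick a phrase from the array that matches the current range
function pickLine() {
  if (distance > 0 && distance < 60) {
    return random(closeLines);
  }
  return random(farLines);
}

function draw() {
  background(0);
  fill(255);
  text('distance: ' + distance + ' cm', 20, 40);
}

function mousePressed() {
  console.log(pickLine()); // in the real sketch this line would be spoken
}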

As for the conversation with BadMouth, I had presented in ICM a made-up scenario to show speech recognition… but that scenario doesn’t really fit the project itself, so I decided to rework it to respond to one thing people might actually do or say. In this scenario it was logical for people to insult BadMouth: “Fuck you,” “Fuck you, BadMouth,” or “Fuck you, BadMouth, you are a loser!” This conversation is not linked to the sensor, as it is driven by people’s own output and initiative, not by the sensor. A simplified version of the check follows the sketch link below.

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/HybnhbDml
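The recognition side boils down to checking the recognized string for the insult and answering back. Again a simplified sketch with a placeholder reply, not the exact code in the link; listener and speech are the same p5.SpeechRec and p5.Speech objects as in the setup shown earlier:

function gotSpeech() {
  if (listener.resultValue) {
    var heard = listener.resultString.toLowerCase();
    if (heard.indexOf('fuck you') !== -1) {
      speech.speak('That is rich, coming from you.');
    }
  }
}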

 

 

Motion detector:

Initially I had in mind to work with the PIR sensor. Once I discussed it in class with my great teacher Benedetta Piantella, she suggested the ultrasonic sensor and camera detection instead of the PIR, and of course, as she knows better, she was right. I went through the different tests posted on the blog earlier… Working with the PIR was really boring and unsatisfying: it takes a lot of time to reset after detecting, so a hand could be moving and the responses don’t really follow, because between one movement and another it has to reset. Moving to the ultrasonic was great indeed. The values are detailed and there is a good margin for getting creative with it (if time is available); I tried different sketches on the Arduino and always got good values, which is why I decided to adopt it for this project. The camera would ideally have been great as well because, as Benedetta Piantella suggested, you could have BadMouth make comments when it sees a certain color. Say someone wearing a yellow shirt is standing at a certain proximity to BadMouth: BadMouth could comment on the yellow shirt. But when I tried testing with the camera I found it complicated, and I had decided to keep it till last because I was running out of time and needed to settle on the project and build it. To make up for that color-detecting idea I also tested a color sensor; I thought it could be cool to combine the ultrasonic and the color sensor instead of a camera. The color sensor values and tests were satisfying, but the issue with the color sensor is that you really need to be at very close proximity to make it work… I don’t think that makes sense in the scenario of BadMouth, where people aren’t supposed to be that close… Nevertheless, working with it added another cool bit of learning to all that.

Ultrasonic details:

HC-SR04 Ultrasonic Distance Measuring Sensor Module for Arduino / NewPing Library

The ultrasonic sensor has a NewPing library that makes the values cleaner and working with it nicer. I went through using it as well.

Below are different sketches for the ultrasonic sensor, which has two pins to consider, the trigger pin and the echo pin. I took them from different sources, then changed and worked through them:

One with the NewPing library and one without:

sketch one

sketch two

Mouth Motion:

In this section I wanted to make the lips of BadMouth move while he talks. Unfortunately, due to time constraints, I spent so much time on the motion detection and speech recognition sections that this one was jeopardized. I am planning to complete it during the winter break, if nothing else at least for my own satisfaction and learning. The challenge was to work out the mechanics of making the lips move up and down. I thought that if I mounted clips on the ends of two motors and attached each clip to one lip, the physical restraint would keep the motors from spinning completely, and yet they would probably make some kind of random movement (trying to spin their way out), which would be exactly what I need: just a small movement of the lips, good enough to give the impression. Unfortunately, my mental picture of the process was off. The motors worked, but once attached to the lips they didn’t make a small random movement; they didn’t move at all… Because I tried this at the last minute, I didn’t have the chance to try out plan B, which I think could be the way to do it: the gripper kit tool and a servo motor. The gripper kit is simply a kind of small clip driven by a servo; unlike the motors I was using, it doesn’t spin, but instead does exactly the motion I need, which is opening and closing the two ends of the clip.

samples of what I tried to do:

That was a sample run on batteries, not yet linked to the Arduino, just to test whether it would work with BadMouth.

video of the motor/clip spinning (before attaching it to the lips).

IMG_2462

The plan I think I should experiment with next:

Gripper tool kit and servo

Here’s a reference from ServoCity on how it works…

Mouth Design

The mouth should have been designed from scratch if time had been available; for now I had to find solutions to get the feel of a realistic-looking mouth.
I found a realistic latex mask and decided to work with it for the time being. I laser cut a panel with an opening for the mouth to sit in (as if stuck in a wall) and another opening above it for the ultrasonic motion sensor.

Pictures below:

Scenario for BadMouth

Ideally, the scenario for BadMouth should be written personally and recorded with a particular voice character assigned specifically to BadMouth. For now I just worked with clichéd, cheesy, funny quotes you can find on the net and assembled them so that they make sense, injecting in between some human sounds and words we say in our daily routine, to give BadMouth a more human feel, along with some sentences I made up, in order to put a draft scenario together for now.

Thank you

-Allison Parrish and Benedetta Piantella, for the time you give, the great teaching, the help and support, but mostly for being who you are as individuals: a great inspiration to us all.

-Ben Light, for the advice and help you give even when it’s not linked to your course.

-Mithru Vigneshwara, for supporting and helping out with the coding, and Manning Qu, for brainstorming with me about motor mechanics and fabrication.

Motors

So for this mounting-motors project, I started by combining it with my PComp work, as I was already working on motors for BadMouth.
I will illustrate what I did for PComp, but I changed it for Intro to Fab because I wanted to do something that actually works, or almost.

I decided to make an electronic flower/fan in a laser-cut acrylic-mirror vase. But before I get to that, I want to document my motor trials for PComp.

So I thought that if I connected two motors with joints and extensions, and at the end of each added a clip holding the upper or lower lip, the lips would move a bit even while the motors were twirling… I thought clipping the clips onto the mask would limit the motors’ spin and I would get exactly what I want: a random, chaotic movement of the lips… The scenario in my head is not what happened.
The motor works perfectly and spins perfectly, but once clipped on, the motion stops completely.

video showing the motor spinning:

IMG_2462

So after much trial and error I decided to give up and make my flower-fan vase.

Material: acrylic mirror

Laser cutting the vase and making holes in it to mount the motor:

My flower fan connected to the motor through joints and extensions:

The screws I was trying to use to mount the motor on the panel inside the vase were not the right fit and fell out. I couldn’t find a replacement for now; I tried some glue, just temporarily for the sake of the presentation, but it isn’t holding. I will be fixing it.

Video of the mechanical flower vase:

IMG_2480