Pen holder and more…

This week I made a pen holder. I started simple with circles and cylinders. Life is easier with the boolean tool: I used both boolean intersection and boolean union, and also used the extrude tool.

 

Then I tried to play with typography and make a 3D pattern out of it. I wanted to make it a piece that gets lit from below, but I think due to the complexity of the object the boolean wouldn't take the shape.

How messages sound

I started out wanting to scrape through my Facebook, analyze my data a bit, and see what I would discover.
I tried and experimented with different methods of extracting data from Facebook, but the results weren't satisfying: I always came back with errors or incomplete results. I tried Facepager, old and new versions, and went through tutorials. I tried an app on Facebook called Nebizz. I tried to work with the Facebook Graph API…

Eventually I gave up and looked at how to extract data into a CSV file from WhatsApp or messages. After several trials I was finally able to retrieve a nice HTML table of my SMS, converted it to CSV, and finally to a JSON file. I had to go through some cleaning, though not much.
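
For the conversion step itself, a minimal p5.js sketch can handle the CSV-to-JSON part; this is just an illustration of the idea, not the exact script I used, and the filename and column names are placeholders:

// Minimal sketch: load a CSV export and save it back out as JSON (p5.js).
// 'messages.csv' and its column names are placeholders for illustration.
var table;

function preload() {
  table = loadTable('messages.csv', 'csv', 'header');
}

function setup() {
  noCanvas();
  var rows = [];
  for (var i = 0; i < table.getRowCount(); i++) {
    rows.push({
      date: table.getString(i, 'Date'),
      name: table.getString(i, 'Name'),
      message: table.getString(i, 'Message')
    });
  }
  saveJSON(rows, 'messages.json'); // the browser downloads the converted file
}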

Idea
Reading through my SMS, I had good, enjoyable moments and some bad moments with family, friends, and exes.
That is why I decided to scrape through the messages in code and assign soft rhythms to soft, enjoyable messages and noisy, brutal sounds and tones to aggressive, heated conversations.
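
To give a sense of that mapping, here is a very rough sketch of it in p5.js with the p5.sound library; the keyword list and the sounds are placeholders, not the final composition:

// Rough idea: a soft tone for gentle messages, a noise burst for harsh ones.
// Requires the p5.sound library; the word list is made up for illustration.
var soft, harsh;
var angryWords = ['hate', 'never', 'shut up']; // placeholder keyword list

function setup() {
  noCanvas();
  soft = new p5.Oscillator('sine');
  harsh = new p5.Noise('white');
}

function playMessage(text) {
  var isAngry = angryWords.some(function (w) {
    return text.toLowerCase().indexOf(w) !== -1;
  });
  if (isAngry) {
    harsh.amp(0.5, 0.05);         // sharp, noisy burst
    harsh.amp(0, 0.5, 0.3);       // then fade back out
  } else {
    soft.freq(220 + text.length); // pitch loosely tied to message length
    soft.amp(0.3, 0.2);           // gentle swell
    soft.amp(0, 1, 0.5);
  }
}

function mousePressed() {
  // Browsers only allow audio after a user gesture, so everything starts on click.
  soft.amp(0);
  harsh.amp(0);
  soft.start();
  harsh.start();
  playMessage('you never listen'); // sample input
}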

I later planned to physically carry the sound composition on a cassette: record it and keep it with me. Why the choice of tapes? Because I'm from the cassette era, my teenage years.

iMessages data visualization

I started this project trying to work with WhatsApp data. Retrieving the data from WhatsApp is a bit complicated, or at least needs a longer process. I had downloaded iExplorer, an app that backs up your phone and stores its data on your Mac. However, certain data, like Facebook and WhatsApp, needs some sort of permission. I was able to retrieve an SQLite file and downloaded the SQLite browser for it; it turns out it dissects every fragment of the data into separate CSVs, meaning Media.csv, messages.csv, and chatlog.csv, which makes it complicated to work with the data, because they opened oddly and incomplete.

 

I found that with iExplorer it is easier to retrieve iMessages and directly convert them to CSV files. So I did that, combined a few chats into one big document, and played with the data in p5.js. I was able to get the data visualized, but I think I need to work more on the output and the code to get a nicer outcome.

 

p5.js sketch:

I tried to give each category a color to see the outcome of each proportionally to the others: messages, names, phone numbers, dates.
The idea is to show how the visualization of one conversation differs from another. This data is a combination of many conversations, just to get the data out.
As soon as the visualization is refined into something more appealing, I will post how the same sketch changes in bulk from one conversation to another.

var nanou;
var nanouArray;

function preload() {
  nanou = loadTable(
    '2Messages_with_+15029055432.csv',
    'csv',
    'header');
}

function setup() {
  createCanvas(600, 600);
  nanouArray = nanou.getArray();
  // noLoop();
  // for (var i = 0; i < nanouArray.length; i++)
  //   print(nanouArray[i]);
}

function draw() {
  background(0);
  noStroke();

  // Random placement per frame, as in the original sketch.
  var x = random(width);
  var y = random(height);
  var diameter = random(10, 30);

  for (var i = 0; i < nanouArray.length; i++) {
    // The table cells are strings, so their lengths are used as
    // vertical offsets to keep the arithmetic numeric.
    fill(255, 0, 0, 200);
    ellipse(i + x, y + nanou.getString(i, 'Message').length, diameter, diameter);

    fill(0, 0, 255, 200);
    ellipse(i + x + 10, 100 + nanou.getString(i, 'Phone Number').length, diameter, diameter);

    fill(0, 255, 255, 200);
    ellipse(i + x, 300 + nanou.getString(i, 'Name').length, diameter, diameter);

    fill(40, 140, 150, 100);
    ellipse(i + x, 170 + nanou.getString(i, 'Subject').length, diameter, diameter);

    fill(170, 0, 200, 100);
    ellipse(i + x, 450 + nanou.getString(i, 'Attachments').length, diameter, diameter);

    fill(255, 255, 0, 200);
    ellipse(i + y, 400 + nanou.getString(i, 'Date').length, diameter, diameter);
  }
}

data p5js

Listening

Part 1. Dreams.

I started this project wanting to listen to my dreams. I thought it is exactly where all the illusive awareness of our conscious mind takes a break and lets the more truthful, hidden layers of ourselves come out… It is where the body surrenders to our fears and desires. I had had a long talk with a therapist regarding dreams, but what is more important than dreams is 'nightmares'. According to the therapist, contrary to common belief, nightmares are a more truthful expression of a certain desire or fear than a nice dream. The latter is 'nice' because it is a masked metaphor or symbol for something very raw or brutal that we usually defend against or hide by disguising it as something more acceptable or tolerable according to the moral and social values we grew up with. For instance, "eating a yummy cupcake" is a nice dream and "being raped" is an awful nightmare; the irony is that they could both symbolize a sexual desire, depending on the person, the context of their life, etc. The interesting difference to note is that people who are prone to nightmares can listen more closely to their subconscious, versus people who have nice dreams, because the latter add layers and layers as a defense mechanism to hide fears or desires.

Another aspect of nightmares is the recurrent nightmare that stems from a certain fear or situation. A personal example: since I was a child, every time I am stressed over anything, whether work related, love, or a family situation, I ironically have this recurrent dream where it's war, I'm hiding because shooting is taking place in the streets, I finally reach home, knock on the door, and another family opens it for me. Being a child of war, this dream somehow makes sense: fear of war, fear of losing my parents, etc. But now that I'm 35 years old, I still wake up in a sweat, my heart pounding in fear, because I had this recurrent dream. The therapist said it still made sense, because whenever I'm consciously in a distressing situation, my subconscious triggers this childhood nightmare… A long subject I am willing to explore, and I'm interested to see if I can work around it in this class.

First failing trials:

Zeo/Kinect

Ideally, to listen to and monitor sleep, as suggested by Dan O'Sullivan, I should have worked with the Zeo sensor to monitor brain activity while sleeping. As the Zeo wasn't available, another suggestion was to use the Kinect and monitor motion while sleeping. It remains physical, true, but it could have detected some interesting aspects linked to how much one moves and fidgets while sleeping, and we could have looked for a certain pattern in these motions over a certain amount of time (a week).
Working with the Kinect for this particular assignment wasn't the easiest thing to do, as I had to use a PC (a true nightmare). I also had to learn how to make the Kinect work, and to work with it through Processing, a language I hadn't used before… It was too much learning in a very small frame of time. So after trials to make it work, watching tutorials, etc., I tried over two nights and one afternoon nap to record the motion, but couldn't: the video wouldn't continue recording.

Though dreams and sleep monitoring failed for this assignment, I at least learned how to work with the Kinect and Processing, went through the experience, and got some idea of how to make it function and what could be done with it for different projects. I will revisit sleep monitoring and dreams at some point with a different approach.

Part 2. Memories and Emotion

The Pulse Sensor

I moved on from dreams to trying to detect emotions through the pulse sensor. Though colleagues said you can only detect physical activity with the pulse sensor, I somehow had slightly different results: through the set of experiments I tried, I actually saw visible graphical changes occurring in reaction to some strong emotions. I decided to dig into my memory and think of what would trigger strong emotions and make my heart pound. I decided to experiment with some war memories, the ones that, no matter how time flies, no matter how we learn to control our emotions with age, how we teach ourselves to forget and how we master putting on a poker face consciously, even after 25 years still make my heart race… I guess you can't always fool the elephant.
Memories can trigger a lot of emotions.
I set up a series of videos to watch while using the pulse sensor, in the same spot, same place, not moving:

-Pulse sensor with two LEDs, blinking and fading when a pulse is detected:

ROY small IMG_3944

-Pulse sensor with Processing; I found a library with a visualizer built for it:

Roy processing smallx

-Pulse graph from the Arduino serial plotter, a sample of the pulse and how it gave a physical response when I coughed:
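
I stayed with the Arduino plotter for these recordings, but for reference, a p5.js sketch could graph the same stream of values. This is only a sketch of the idea: it assumes the p5.serialport library (with the p5.serialcontrol server running), that the Arduino prints one reading per line, and a made-up port name:

// Minimal scrolling graph of pulse readings arriving over serial (p5.serialport).
var serial;
var readings = [];

function setup() {
  createCanvas(600, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411'); // placeholder port name
  serial.on('data', gotData);
}

function gotData() {
  var inString = serial.readLine();
  if (inString.length > 0) {
    readings.push(Number(trim(inString)));
    if (readings.length > width) readings.shift(); // keep one value per pixel
  }
}

function draw() {
  background(0);
  stroke(255, 0, 0);
  noFill();
  beginShape();
  for (var i = 0; i < readings.length; i++) {
    // Map raw analog readings (0-1023) onto the canvas height.
    vertex(i, map(readings[i], 0, 1023, height, 0));
  }
  endShape();
}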

Below is the set of experiments I conducted. I ran different tests over the past days, but these are my latest and the ones I selected:

1. First video: a random video from YouTube that I had never watched before… to keep the element of surprise when testing my emotions.

recording 3 war video SMALL

2. A scene from the movie West Beyrouth, a very good movie that talks about how the war started and depicts a very true reality that closely resembles my childhood at school. Especially since the school scenes in the movie were actually shot at my own French school, Lycée LAK. I only watched the movie once, long ago, and was never able to re-watch it… because it reminds me of what I work hard to forget.

west beyrouth small and trimmed
3. The third video is a Massive Attack song entitled "Safe from Harm". Not that this song is linked to war, but it's a song that triggers memories, to compare against stronger emotions like the war-related ones.

recording 4 massive attack song SMALL

4. The fourth video is a song by Barbara called "Mon enfance" ("My Childhood"), which also triggers a lot of emotions when I listen to it.

recording mon enfance barbara SMALL

As a final analysis, it is clear to me that the pulse responds to strong emotions or memories. Even if the shift in the graph is not very big, it clearly shows a change in pattern. For instance, in West Beyrouth the graph shifts many times, but at the end of the scene, when they show the school entrance, a clear shift appears; it is a very specific place where I used to hang out with friends during school. The same happens when the kids are gathered in the school yard to sing the French anthem and then the Lebanese one. It is also very visible while listening to Barbara, and less so with Massive Attack, because the latter actually affects me less.

 

BadMouth Pcomp/ICM

So, finally done with finals. Well, almost done… We start off thinking about a certain idea, and we definitely spend a certain amount of time trying to imagine the outcome. It is usually great in theory; when it comes to making it happen, challenges arise in different aspects of the project.

In my first blog post for BadMouth, I broke the project down into five categories.

I will go through the five sections to show how I ended up developing the project: methods I changed, methods I learned, methods I dropped…

Speech/Voice:

Speech recognition

After research and time spent checking out speech and voice recognition and trying different libraries, the easiest way for me was to go back to p5.js. It's not just about ease; it's about what I can accomplish in a certain amount of time with my knowledge. The speech library is a bit challenging: it is less developed than other libraries and has fewer examples of usage… It took some time to get it started (thanks to the ICM help session support I was able to set it off). It also only works in Chrome for continuous recognition, so the p5.js desktop editor was not a possibility; my only choice was the web editor version, which I like less.
Through the process of trying out libraries, doing research, and also trying to work through Python instead of p5.js… I got to learn a lot about speech recognition, the difference between it and voice recognition, some history of how it all started, and the AI possibilities and limitations of the future… All amazing stuff to read and learn about. If nothing else, learning about the limitations of what's out there, what can be done and what can't, is a very good start.
On that path, one great reference was of course given by the amazing Allison Parrish: wit.ai. Simple and easy to follow, it lets you develop a scenario easily and gives it back to you as JSON files, a format that's easy to use in code. So with more time in hand, one could really go into creating a full personal library based on personal scenarios.
That process made me go, again, through all the tutorials out there on JSON, APIs, and how to get data into code. I had to go through many examples and apply tutorials over and over to really get it.
When I first started working with p5.js I only knew how to repeat the examples given, work through them a bit, develop some… but it was a lot of copy-pasting. The thing that makes me happy, though it's not much, is that for the first time, while doing the speech coding for the final, I felt I could finally apply some logic of my own: now I could use an if statement or a true/false check… something I wasn't able to apply logically before… good to know it gets better.
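
For anyone trying the same library, the bare minimum that got recognition going for me looked roughly like this; it is a sketch from memory, not my exact final code, and it assumes the p5.speech library is loaded and the sketch runs in Chrome:

// Minimal continuous speech recognition with the p5.speech library (Chrome only).
var myRec;

function setup() {
  noCanvas();
  myRec = new p5.SpeechRec('en-US', gotSpeech); // callback fires on each result
  myRec.continuous = true;      // keep listening instead of stopping after one phrase
  myRec.interimResults = false; // only report finished phrases
  myRec.start();
}

function gotSpeech() {
  if (myRec.resultValue) {
    console.log(myRec.resultString); // the recognized phrase as plain text
  }
}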

Other than making the library work, there was the part of connecting it serially. Though I had done serial communication several times in PComp homeworks, I guess it also has to do with the different sensors we use: the setup is the same, but the logic is sometimes different from one sensor to another, and getting the right values into p5 was tiring.
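
In the end the serial side was the usual p5.serialport pattern: read one line, turn it into a number. A stripped-down version (it assumes the p5.serialcontrol server is running, that the Arduino prints one distance reading per line, and the port name is a placeholder):

// Read the ultrasonic distance from the Arduino over serial (p5.serialport library).
var serial;
var distance = 0;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1421'); // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  var inString = serial.readLine();    // one reading per line, e.g. "73"
  if (inString.length > 0) {
    distance = Number(trim(inString)); // distance in cm, as printed by the Arduino
  }
}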

I had worked on different sketches for BadMouth; below are two sketches presented in ICM as finals.
The first is a speech that BadMouth gives: talking random shit to people (a bit long):

https://alpha.editor.p5js.org/renanou/sketches/ryz4jmPmg

The second is a made-up conversation between BadMouth and myself:

audio sample of the conversation

https://alpha.editor.p5js.org/renanou/sketches/SygueGqQl

When it came to combining them serially, I had to change the strategy of how I use the scenarios. Since I'm using an ultrasonic sensor, which not only detects presence but, more importantly, distance, it was only logical to work with what the sensor is doing, meaning assigning text linked to the distance people are standing from BadMouth. BadMouth will still be bad, but now if people are close he will talk about this proximity, mentioning that they are close, and if people are far, he'll call them to come closer… This logic makes more sense with an ultrasonic sensor than just assigning random speech without taking the role of the sensor into consideration. Below is the sketch I worked on, creating different arrays of phrases for the different ranges detected by the ultrasonic, depending on the distance or proximity of a person:

video below testing

ranges ultrasonic

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/Sy8ArBRXl
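
The heart of that sketch is just a range check. A simplified version of the idea (the phrases and range cut-offs here are placeholders, not the ones in the linked sketch, and it assumes the p5.speech library for the speaking part):

// Pick a phrase array based on how far someone is standing (distance in cm).
var closeLines = ['You are way too close.', 'Personal space, please.']; // placeholders
var midLines = ['I can see you lurking over there.'];
var farLines = ['Come closer, I do not bite. Much.'];
var voice;

function setup() {
  noCanvas();
  voice = new p5.Speech(); // p5.speech text-to-speech object
}

function reactTo(distance) {
  var lines;
  if (distance < 30) {        // cut-off values are guesses, tuned while testing
    lines = closeLines;
  } else if (distance < 100) {
    lines = midLines;
  } else {
    lines = farLines;
  }
  voice.speak(random(lines)); // p5's random() picks one element from the array
}

// reactTo(distance) would be called whenever a new serial reading arrives,
// e.g. reactTo(25);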

As for the conversation with BadMouth, I had presented a made-up scenario in ICM to show speech recognition… but that scenario doesn't make sense for the project itself, so I decided to rework it to respond to one thing people might actually do or say. In this scenario it was logical to insult BadMouth: "Fuck you", "Fuck you BadMouth", or "Fuck you BadMouth, you are a loser!" This conversation is not linked to the sensor, as it comes from the people's own output and initiative and not the role of the sensor.

Below is the sketch in p5.js:

https://alpha.editor.p5js.org/renanou/sketches/HybnhbDml
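
That response boils down to checking the recognized string for the insult and talking back, roughly like this (the comeback line is invented, and it assumes the p5.speech library running in Chrome):

// Listen for an insult and answer back, using p5.speech (recognition + synthesis).
var myRec, voice;

function setup() {
  noCanvas();
  voice = new p5.Speech();
  myRec = new p5.SpeechRec('en-US', gotSpeech);
  myRec.continuous = true;
  myRec.start();
}

function gotSpeech() {
  if (!myRec.resultValue) return;
  var heard = myRec.resultString.toLowerCase();
  if (heard.indexOf('fuck you') !== -1) {        // matches any of the insult variants
    voice.speak('Is that the best you can do?'); // placeholder comeback
  }
}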

 

 

Motion detector:

Initially I had in mind to work with the PIR. Once I discussed it in class with my great teacher Benedetta Piantella, she suggested ultrasonic and camera detection instead of the PIR, and of course, as she knows better, she was right. I went through the different testing posted on the blog earlier… Working with the PIR was really boring and not satisfying: it takes a lot of time to reset after detecting, so a hand could be moving and the responses don't really follow, because between one movement and another it has to reset. Moving to the ultrasonic was great indeed. The values are detailed and there is a really good margin for getting creative with it (if time is available). I tried different sketches in the Arduino and always got great values, which is why I decided to adopt it for this project. The camera ideally would have been great as well because, as Benedetta Piantella suggested, you could have BadMouth make comments when it sees a certain color. Let's say someone wearing a yellow shirt is standing at a certain proximity to BadMouth… BadMouth could comment on the yellow shirt. But when I tried testing with the camera I found it complicated, so I decided to keep it till last, because I was running out of time and needed to decide and work on the project. To make up for that color-detecting idea I also tested the color sensor; I thought it could be cool to combine the ultrasonic and the color sensor instead of the camera. The color sensor values and testing were satisfying, but the issue with the color sensor is that you really need to be at very close proximity to make it work… I don't think it makes sense in the scenario of BadMouth, where people aren't supposed to be that close… Nevertheless, working with it added another cool bit of learning to all that.

Ultrasonic details:

HC – SR04 Ultrasonic Distance Measuring Sensor Module for Arduino / NewPing Library

The ultrasonic has a NewPing library that makes the values, and working with it, better. I went through using it as well.

Below are different sketches I used and tried out from different sources, but changed and worked through, for the ultrasonic, which has two pins to consider: the trigger pin and the echo pin.

One with the NewPing library and one without:

sketch one

sketch two

Mouth Motion:

In this section I wanted to make the lips of BadMouth move while he talks. Unfortunately, due to time constraints, I spent so much time on the motion detection and speech recognition sections that this one was jeopardized. I am planning, though, to complete it during winter break, if for nothing else then at least for my own satisfaction and my own learning. The challenge was to work through the mechanics of making the lips move up and down. I thought that if I mounted two motors with clips at their ends and attached each clip to one lip, the physical restraint would keep the motors from spinning completely, yet they would probably make some kind of random movement (trying to spin their way out), and that would be exactly what I need: just a small movement of the lips, good enough to make the impression. But unfortunately, realistically, my imagination of the process was off. The motors worked, but once attached to the lips they didn't make a small random movement; they didn't move at all… Because this was tried at the last minute, I didn't have the chance to try plan B, which I think could be the way to do it: the gripper kit tool and a servo motor. The gripper kit tool is simply a kind of small clip driven by a servo; unlike the motor I was using, it doesn't spin, but instead does exactly the motion I need, which is the opening and closing of both ends of the clip.

samples of what I tried to do:

That was a sample with batteries, not yet linked to the Arduino, just to test whether it would work with BadMouth.

video of the motor/clip spinning (before attaching it to the lips).

IMG_2462

The plan I think I should experiment with next:

Gripper tool kit and servo

Here's a reference from ServoCity on how it works…

Mouth Design

The design of the mouth would have been made from scratch if time had been available; for now I had to find solutions to get the feel of a realistic-looking mouth.
I found a realistic mask made from latex and decided to work with it for the time being. I laser-cut a panel with an opening for the mouth to sit in (a stuck-in-the-wall symbol) and another above it for the ultrasonic motion sensor to be placed.

Pictures below:

Scenario for BadMouth

The scenario for BadMouth should ideally be written personally and recorded with a specific voice character assigned to BadMouth. For now I just worked with clichéd, cheesy, funny quotes you can find on the net and assembled them so they make sense, injecting in between some human sounds or words we say in our daily routine to give BadMouth a more human feel. I also injected some sentences I made up, to be able to put a certain draft scenario together for now.

Thank you

-Allison Parrish and Benedetta Piantella, for the time you give, the great teaching, the help and support, but mostly for being who you are as individuals: a great inspiration to us all.

-Ben Light, for the advice and help you give even when it's not linked to your course.

-Mithru Vigneshwara, for supporting and helping out with the coding, and Manning Qu, for brainstorming with me over motor mechanics and fabrication.

Bad Mouth Final

BadMouth started off as an installation that has no purpose in life but to talk shit to people. That is still the scenario; however, when it comes to execution, reality checks in between how you imagine a project will go and how it ends. Not that the ending is negative, but there are definitely detours, challenges, and limitations to our expectations.

I probably had Artificial Intelligence in mind. I ended up with a fun text-to-speech piece in p5.js. Eventually, it's all a learning and experimentation process that one has to go through to reach the best and simplest form in which to code the project.
First I experimented a bit with Python: went through its tutorials and through speech recognition sketches. It turned out to be an impossible mission. I then did research on JS, APIs, and different libraries available online… until I went back to p5.js. Working with the speech library in p5.js was a challenge; it has some aspects that need reviewing so people can work with it more easily.
One great site that the amazing Allison Parrish told me about is Wit.ai, effectively a great site to create your own scenario for speech recognition and assign it to any bot or application you are starting to work with. It gives you clean JSON files that can be used with the p5.js library.

I have prepared two sketches: one that dictates the speech BadMouth will deliver when he senses people approaching through the ultrasonic sensor, and another that is a scripted conversation with BadMouth. Eventually this sketch will be set up so people can insult BadMouth and get different results.

Final videos for the serial communication and the BadMouth installation will be available next week as soon as the Pcomp part is over.

For now, below are the two pieces of code I will be using for BadMouth.

conversation with BadMouth

https://alpha.editor.p5js.org/renanou/sketches/HybnhbDml

BadMouth speech

https://alpha.editor.p5js.org/renanou/sketches/ryz4jmPmg

 

 

Final Project Part II: Testing and Coding

PIR SENSOR + PIEZO + LED:

This sensor is not the best. It is very limiting, and the responses are slow because it needs time to reset every time.

In the first trials the PIR was unresponsive; it was wired wrong.
VIDEO LINK PIR: pir-wired-wrong

In the second trials it worked, but it was too slow and not exciting as a sensor to work with:

VIDEO LINK PIR:img_2322
screen-shot-2016-11-30-at-10-51-24-am

ULTRASONIC SENSOR:

Working with the ultrasonic after the PIR was great, though it's a hard sensor because there are many values to code. I had to download the NewPing library to get better results, and for that I needed to update all my systems, but eventually it worked. The serial communication is also hard because of the values and numbers, but at least it's clearly responsive.

VIDEO links with different codes:

img_2340

ultrasonic-dist

 

screen-shot-2016-11-30-at-6-24-57-am

 

COLOR SENSOR:

The color sensor is actually cool to work with; it gives good readings and numbers.

Video Link: color-sensor

screen-shot-2016-11-30-at-7-27-24-am

Camera with Arduino:

The camera is really hard to work with. So instead I decided to work with the ultrasonic and colors, assigning BadMouth something to say linked to each color it senses.

In parallel to all the sensor wiring, testing, and setting up the serial communication, I've been working on the speech and sound code, trying Python first… It was too hard to code with, as I don't know Python, so I moved back to doing it in p5.js. I am experimenting with assigning the speech as sounds, and also working on a second piece of code with API chatterbots. The code and the speech recognition are really hard and complex, so it's taking some time to learn. I will be uploading it with the sensors soon.

Enclosures

I went to The Container Store and fell in love with the place; I wanted to buy everything in the shop… It reminds me of IKEA, but IKEA is more furniture-driven whereas this one is about containers…

I liked these bamboo shelves that can be put together: neat, and easy to make a laser-cut acrylic cover for. So the next stop was Canal Plastics, where I got black, white, and mirrored see-through acrylic. For this project the cover is just a test done on the laser, not the final cut, because I am still trying to figure out my final circuit for PComp.

This time, unlike the previous lamp project, I only had to do two trials instead of 4-5. It is getting better, and one of the trials was wrong because I miscalculated when converting between inches and millimeters…

The openings lasered into the cover are not final; it was just a tentative test to see how it would look. Eventually I think I will go for a white acrylic sheet and try to spray the wood white, if it can be sprayed with a neat finish. That's something I will have to ask about in class: how to do a presentable finish.

img_2298

screen-shot-2016-11-29-at-12-33-26-am

screen-shot-2016-11-29-at-12-33-15-am

screen-shot-2016-11-29-at-12-33-02-am

screen-shot-2016-11-29-at-12-33-38-am
screen-shot-2016-11-29-at-12-32-49-am
6
9
screen-shot-2016-11-29-at-12-32-27-am

Lamp – laser cutter

So for this project I chose to do a lamp, as I loved the reference on the site lighting up acrylic cuts.
However, I chose to work with typography. Inspired by Magritte's "Ceci n'est pas une pipe" painting, I decided to do "Ceci n'est pas une lampe": "this is not a lamp".

the-treachery-of-images

screen-shot-2016-11-13-at-2-09-37-pm

I had to go through several trials (4) in order to finally get 2 clean cuts. The laser wasn't cutting the typography properly, so I had to bring the speed down to 9 and 10 to get a proper cut.

jdflajgfjk

this-one-5

this-one-4

this-one-9

 

Later I wanted to create a small wooden stand to make it stand up and put the light underneath it.
I cut some wood and decided to screw the acrylic onto the wood to hold it.

this-one-8

screen-shot-2016-11-13-at-3-05-30-pm

jshfugfuqwh

jafkjahd

Then I glued the sides:

this-one

And finally added my LED string:

img_2134

img_2138

img_2144

 

Wood Multiples

Candle holder

Sunday, 7.11, was a long day at the ITP shop… It took some time to make the candle holder.

So this is how it started:

Initially I started the project wanting to make dominoes, and I think at some point I really do want to make that box, but I got scared that making the sliding domino box would be too complicated for my first time experimenting with the machines… So I moved to something simpler and thought of nice rounded coasters with a cool, simple holder (easier than a sliding box):

I started sketching the coasters and was convinced of the project. But when I began the execution I started to face different difficulties:

-The first was linked to the cut of wood I bought:
I went to Prince Lumber on 47th St., which is near where I live, and I loved the place.
I bought long cuts of wood almost 2″ thick… I thought there would be a way at the shop to slice down the thickness of the piece we got…
But I guess not; it's too complicated.
So I went back to buy some more sheets of wood that were already thin: I chose 0.5″-thick plywood, and another piece a bit thicker, 1″, so I wouldn't regret it again.

-The second was linked to the rounded shape:
I thought the rounded shape could simply be done with the drill press. But no… it needs the big router. After checking the great tutorial videos of my professor Ben Light, I also figured that doing circles for the first time wouldn't be easy, and that I wouldn't get to use the big machines, which was somehow the purpose of this assignment.
So I finally decided to make candle holders. I was thinking of a shape that could be playful when put together (because they are multiples), and I thought of the amazing honeycomb structure, a very famous structure in this universe. Polygons, more precisely hexagons!

150721_sci_hex-honeycomb-jpg-crop-promo-xlarge2 honey
I searched and found some references:

f1865ae9aede75a28afaab349b8dd2bc  il_570xn-193691193

 

 

But as I was afraid the structure would fail, I also decided to make squarish candle holders in parallel.

wood:

screen-shot-2016-11-07-at-12-09-03-pm

img_2026

0img_2028

0img_2020

 

I proceeded with the project:

Sketches:

img_2039

I was able to cut the squarish holders well enough (as instructed in class) from one block of wood, using a stop to set the length needed each time instead of measuring each square.

img_2035

0img_2036

Then I made a different squarish size for the honeycombs, but had to add measurements to know exactly how to cut the needed angle.
When I used the big machine for the angles it worked at first, but then I had to move to the other one, because the pieces are so small they can't be held in place to cut.

0img_2041

 

In the end I ruined some, but was also able to pull off the honeycombs in fair condition:

0img_2045

0img_2046

0img_2048

Eventually we reached the phase where we need to drill a hole for the candle… At the shop the right tool didn't exist. So posting will continue tonight,
as I will get the size I need and go back to drill the holes and finish my holders. (To be continued tonight.)

Monday 8, 11.

So I got the Spade Bits:

img_2096-2

wood-machine

hello

hello-3

Great, we have the holes! Yes, but… not quite, because the spade bit was 1.5″, exactly the size of the candle, so the candle didn't fit in the hole; it needed a bit more, 1 or 2 millimeters. The bigger spade bit was 2″, too big, and I couldn't find a 1 3/4″ spade bit; I don't know if it exists. So I had to sand the inside with the Dremel for 3 hours… That was long.
But eventually it worked.

hello-2

img_2069-1

img_2071-1

img_2074

hello-4

Now the painting and finishing remain. Trying to figure out how. (To be continued with a final picture tonight.)