[eyedeer] 3-d camera using HSB(?)

was talking to Isaac + Dan (? I think his name is Dan. It’s horrible, but I’ve talked to him a zillion times and I still can’t remember names =_=;;;) and Isaac was like: ‘if you were us, how would you create a 3-d image without using a Kinect or anything high-tech, just basic home equipment?’

(he later told me that he was trying to develop a marketing tool to sell clothes, so online shoppers could upload 3-d images of themselves, and I was like omg I’m being taken advantage of(!!!) and it was a bit wtf-inducing, but he was like ‘you should totally join us’, and I was like OK *turns into capitalist grubber*)

ANYWAY. I was like, ‘oh, I’ll use particle theory and HSB’ and he was like: ‘whut?’

Drawing theory 101 states that objects in the foreground are always darker and more saturated than objects in the background. Therefore, the distance between object 1 and object 2 can be estimated by comparing their relative HSB values, then using particles to fill the ‘space/width’ of each object, where depth is expressed as the density of particles along the z-axis. The nice thing about HSB is that saturation ignores colour temperature (pure red and pure blue share the same saturation value, even though temperature makes them register very differently on the RGB axes), and it conveniently puts everything on a 0-100% scale instead of a 0-255 scale. The best part about this method is that it doesn’t require high contrast (no IR, no converting images to greyscale for tracking or whatever) or any external technology beyond a handheld digital camera.
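
Here’s a minimal Processing sketch of the core trick, heavily hedged: ‘iceberg.jpg’ is a made-up placeholder filename, and scoring ‘nearness’ as the average of saturation and darkness is just one way of cashing out the drawing-theory heuristic, not a tested formula.

// sample random pixels and keep each particle with probability proportional
// to its 'nearness' score, so darker + more saturated zones end up denser
PImage img;

void setup() {
  size(640, 480);
  colorMode(HSB, 100); // HSB on a 0-100 scale, like the photoshop picker
  img = loadImage("iceberg.jpg"); // placeholder: any photo with depth works
  img.resize(width, height);
  img.loadPixels();
  noStroke();
  background(0, 0, 100); // white
  for (int i = 0; i < 40000; i++) {
    int x = int(random(width));
    int y = int(random(height));
    color c = img.pixels[y * width + x];
    // foreground = darker + more saturated, so score both and average them
    float near = (saturation(c) + (100 - brightness(c))) / 2.0;
    if (random(100) < near) {
      fill(hue(c), saturation(c), brightness(c));
      ellipse(x, y, 3, 3);
    }
  }
}

Zones the heuristic reads as ‘near’ come out dense and solid; the sky/background stays sparse.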

And here’s a simple clip demonstrating the concept: http://vimeo.com/32900354

I set up an automatic action (in Photoshop) to differentiate areas based on HSB, and created separate layers for each proximity zone (HSB info can be accessed with F7; use select range + the average value picker). I used a threshold of 65% between each area, but it’s quite interesting that if you lower the threshold you get a more detailed 3-d map, since more areas become divided up (at a threshold of 20%, I had something like 30+ layers, since each ‘grey’ area became subdivided). It’s also interesting to note that the whiteness of the ice doesn’t throw off the proximity measurements: it seems to recognise that the white of the background icecaps is brighter/lighter/‘whiter’(?) than the ice in the foreground. It also seems to pick up angles(?), or at least the POV the photo was taken from, because the darkest area identified by the action is the bottom left, which is where the boat/photographer is located, compared to the bottom right area.
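
And if you’d rather not do the layer-splitting by hand, the zone idea can be faked in Processing. Hedged again: banding the nearness score is only a rough stand-in for the select-range workflow, since Photoshop also splits spatially disconnected regions into their own layers, which is how it got to 30+.

// quantise each pixel's 'nearness' into flat bands, one band per proximity
// zone; shrinking bandWidth subdivides the image into more zones, the same
// way lowering the photoshop threshold did
PImage img;
float bandWidth = 65; // try 20 for a much finer depth map

void setup() {
  size(640, 480);
  colorMode(HSB, 100);
  img = loadImage("iceberg.jpg"); // same placeholder filename
  img.resize(width, height);
  img.loadPixels();
  loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    color c = img.pixels[i];
    float near = (saturation(c) + (100 - brightness(c))) / 2.0;
    int zone = int(near / bandWidth);
    // paint each zone a flat grey so the 'layers' are visible at a glance
    pixels[i] = color(0, 0, constrain(zone * bandWidth, 0, 100));
  }
  updatePixels();
}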

I’m pretty sure that if you used something like a colour profiler you could get even more accurate proximity values out of HSB. I knew a d00de called Les Walking back in Australia (the company I was working for illegally part-time sent me to him as a student so that I could help said dodgy company steal his business practices). Anyway, the most important thing I learnt from Les was the value of colour profiling, and how colour profiles can be viewed as 3-d prism objects in ‘viewable light ranges’ (for instance, Ektachrome profiles are differently shaped/modelled spectrums from sRGB profiles)…. so if you were to create a proper profile for the web camera, you’d get a better yardstick-range result because of standardisation. The beauty of colour profiling is that if you do it right across all the I/Os, you get a nice chain effect going: the user can apply it to a printer/photo/projection/camera and get the same look every time, without the upkeep of constant installation updates, and yet it’s locked-in and portable because it doesn’t touch the mechanics of the software or hardware; it just sits on the flow itself.

anyway I need some sleep…. it’s like 6.45am and I have class in a few hours!!!!! no good!!!! (。┰ω┰。)!!!!!!!~~~

[eyedeer] ofxAE

Was talking to Ben today about coding; he was showing me the Unity + C# code he’s doing for thesis, explaining how to do class extension/inheritance in OF (omg, timesaver!) and how his vector field worked (basically a base field, with the top layer referencing the bottom layer’s point grid and ++’ing the magnitude). I don’t know how we devolved into it, but we started talking about coding in general, and I was like: ‘oh, I didn’t really know how to code [before Parsons], but it’s kinda like what I used to write for After Effects, because AE has this nifty thing called Expressions’

and he started laughing, cos he was like: ‘lol, that IS coding: it has a library and you’re writing conditions that affect objects. Even if it doesn’t look like compilers/standard coding, code is a kind of thinking/mentality and not so much terminology’

And then I started thinking about it… and yeah, it totally makes sense. So I told Ben: ‘You know what AE/OF needs? A kind of export system, so you could export Xcode/OF projects directly from After Effects. It would be 1000x more useful.’ And he was like: ‘lol, that would be awesome, you could do that as a thesis project’

And the more I think about it, the more I wonder why I didn’t think of it before. A lot of the AE plugin/program base is written in C++, so they have a lot of common points: for instance, ofMap() is basically AE’s linear(), and ofGetWidth()/ofGetHeight() are thisComp.width and thisComp.height. AE has null objects with parent/child relationships, which is kinda like how you would write a class, make it public, then extend/inherit it. And the clip I made earlier using Trapcode Particular is almost exactly how I would code particles in OF: write a vector, put it on a path, add magnitude on the xyz axes, set birth/death values and velocity. I mean, just check out how you would write an if/else expression using a slider in AE:

// read a slider on a null, round it to 1 decimal place, and force a
// trailing ".0" when the value lands on a whole number
sliderVal = thisComp.layer("Null 1").effect("Slider Control")("Slider");
v = Math.round(sliderVal * 10) / 10;
check = v % 1;
if (check == 0) {
    "" + v + ".0";
} else {
    v;
}

AE reference library
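
For comparison, here’s the same slider logic written out ‘the Processing/OF way’. Hedged: map() and nf() are the cousins I’d actually reach for, not literal ports of linear() or AE’s string coercion.

// same slider -> rounded-label logic, as a processing sketch
float sliderVal = 37.26; // pretend this came in from a UI slider
float mapped = map(sliderVal, 0, 100, 0, height); // the ofMap()/linear() equivalent
float v = round(sliderVal * 10) / 10.0;
// nf() pads to one decimal place, so whole numbers print as e.g. "37.0"
String label = (v % 1 == 0) ? nf(v, 0, 1) : str(v);
println(label + " / mapped: " + mapped);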

…… mostly, think about how *useful* it would be to have an ofxAE library that is interchangeable with BOTH(!) programs: people with little programming ability could export whole apps/programs/animations directly from AE (‘save as xcode.proj?’), while programmers in OF could get access to AE features like obj, orbit camera rigs, follow-lights (where a point light follows a motion path), and simplified use of the z-axis etc… like seriously, how awesome would that be???

I guess it would be really cool to do something like that, but it’s also not a very ‘flashy’ thesis project. Sometimes I feel like to reach Ultimate Coolness Level [at Parsons] you need a big flashy interactive project with tons of videos and Arduino sensors that gives world peace and solves poverty, cos like, a lot of people want that, right???? I can’t imagine people being interested in a library/program that they can’t really see or feel during the thesis exhibition. It’ll be like LOOK, THIS IS MY LIFE’S WORK, and it’s just a code library full of, uh… text. And random symbols. And it’ll be all wtf-so-what-inducing D: (plus, would people even be interested in that?? it’ll be all ‘lol gtfo, why u here and not compsci?’)

I don’t know….. I mean, it could be that I’m just excited cos I think it’s a cool idea, and thesis is still pretty far off, but as someone once said: ‘isn’t it lovely to think about?’

[random] trapcode test

Video link: http://vimeo.com/32444017

Made using Trapcode and Magic Bullet Looks in After Effects…. everyone thought I did it in Cinema 4D! I’ve actually been working on creating 3d effects in AE for both the major studio project and simple interfaces….. I kinda wish I’d taken a motion graphics class, because self-teaching is really hard. I love LOVE Trapcode Particular and Form. It took about 4hrs to render the entire thing out, but it was worth it. It’s actually just made of 8000+ particles with different velocities and very tight x-spacing, so they form glowing lines. Then I created a motion path linked to a light (the motion path can be created in Illustrator, then IMPORTED over, wtf how awesome is that), and then I just parented a camera to the light and a null layer. The background is a base layer of fractal noise with a ramp overlaid to give depth. Overall it took about 20+ hours to create, most of which went into the pre-compose/render/export/repeat cycle, because the files were so huge you could only work in chunks.
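
(For the curious, the particle-lines trick translates to Processing almost directly; a hedged sketch, with additive blending standing in for the glow and way fewer particles:)

// emit a row of particles with tight x-spacing along a sine path so they
// read as one continuous glowing line, roughly what particular is doing
int n = 800; // nowhere near 8000+, but enough to read as a line

void setup() {
  size(640, 360);
  background(0);
  blendMode(ADD); // additive blending = cheap glow, accumulates over frames
  noStroke();
}

void draw() {
  fill(40, 120, 255, 14); // faint blue; ADD stacks it up into a glow
  for (int i = 0; i < n; i++) {
    float x = i * (width / float(n)); // tight x-spacing -> solid-looking line
    float y = height/2 + sin(x * 0.02 + frameCount * 0.05) * 60
              + random(-3, 3); // per-particle jitter gives the line thickness
    ellipse(x, y, 6, 6);
  }
}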

The potential for using particles is pretty fantastic though. I bet you could create a camera that takes 3d pictures based on a similar principle: objects in the foreground are always darker and more saturated than the background, so you could use HSB to find the relative points between the extremes, create a ‘yardstick’, and use it to measure the z-axis distance between objects…. then fill the space up with particles (the greater the density, the closer to the camera / the more ‘solid’ the object). Or you could borrow the hydrostatic pressure equation P = ρgh for the particle filling, so that each section of surface area maps to pressure data (where each floating particle acts as a point of resistance).
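
A hedged sketch of the yardstick idea in Processing’s P3D (the -500 far plane is arbitrary, and the nearness score is the same unproven heuristic from the HSB post):

// map each sampled pixel's nearness onto z, and sample nearer zones more
// often, so particle density doubles as the solidity/depth cue
PImage img;

void setup() {
  size(640, 480, P3D);
  colorMode(HSB, 100);
  img = loadImage("iceberg.jpg"); // placeholder photo again
  img.resize(width, height);
  img.loadPixels();
  stroke(0, 0, 100);
  background(0);
  for (int i = 0; i < 60000; i++) {
    int x = int(random(width));
    int y = int(random(height));
    color c = img.pixels[y * width + x];
    float near = (saturation(c) + (100 - brightness(c))) / 2.0; // 0-100 yardstick
    if (random(100) < near) { // denser sampling for nearer zones
      float z = map(near, 0, 100, -500, 0); // far plane at z = -500
      point(x, y, z);
    }
  }
}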

next sem I’ll probably do it for real and take the CG modelling/graphics classes. I kinda regret this semester because I made the wrong choices and picked stuff that looked cool but wasn’t what I was really interested in, but hey, live and learn, right?

[minigame] the good shepherd

First foray into the world of Processing-to-Android. My conclusion? Use Lua on the Corona SDK instead. A lot of Processing libraries are unsupported (actually, only 2 libraries worked with my Android-Processing file), and the emulator is so SO slow. As in drop-dead slow. Also it sometimes jams, or lags, or just doesn’t load your file.

It’s a simple game I made using boolean conditions. I first tested it using coloured ellipses (red for good guys, blue for evils), then built it out once the conditions worked, i.e. when you come within 20px of me, point++; when you come within 20px of the evils, gameOver(); that kind of thing. I really should’ve made classes, cos as Joe said, it’s a bit hacky and not really good form………….
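
For anyone who doesn’t want to grab the zip, the boolean conditions boil down to something like this (hedged: the positions are made up, and the real game has movement + more actors):

// stripped-down proximity logic: touch the red guys for points, touch a
// blue guy and it's game over
float[] sx = {100, 300, 220}; // sheep / good guys
float[] sy = {120, 200, 400};
float[] wx = {500, 80};       // wolves / evils
float[] wy = {400, 450};
int score = 0;
boolean gameOver = false;

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  fill(0);
  ellipse(mouseX, mouseY, 20, 20); // the player ('me')
  for (int i = 0; i < sx.length; i++) {
    fill(255, 0, 0);
    ellipse(sx[i], sy[i], 20, 20);
    if (dist(mouseX, mouseY, sx[i], sy[i]) < 20) score++; // within 20px: point++
  }
  for (int i = 0; i < wx.length; i++) {
    fill(0, 0, 255);
    ellipse(wx[i], wy[i], 20, 20);
    if (dist(mouseX, mouseY, wx[i], wy[i]) < 20) gameOver = true; // within 20px: over
  }
  fill(0);
  text(gameOver ? "game over D:" : "score: " + score, 10, 20);
}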

Download code here: http://www.mediafire.com/file/7qsowi1iu3xfo7w/sheepgame.zip


processing-to-arduino pulse sensor visualisation

Uses the pulse sensor that Yury/Joel made; currently I’m mapping the pulse to the amplitude of a sine wave (a very nifty sine wave using Daniel Shiffman’s equations), although, as usual, I’ve gone down the (shiny!!!!) HSB colorMode route. Did I mention how much I love HSB???

the Processing receive code is basically a serial response: it reads strings of ‘b’ (heartbeat), ‘p’ (peak value), ‘t’ (trough value) and ‘d’ (detect). The original code that comes with the sensor is a lot more complicated, since it has a ridiculous number of ints (inData), but I cut it down to 4…. and even so, I didn’t use all the variables (compare it to the Arduino code, so many ints!). I guess I just wanted to make it easy to edit, because it’s not really perfect / up to my own standards. There’s a bit of jerkiness when the pulse rate drops/rises, and debouncing doesn’t really help; it actually makes it worse, since the delay increases the change in amplitude. The coding is kinda hacky too: I mapped it as amplitude = float(inData)*1.0f/10. I needed to convert the raw sensor data to frames, but that resulted in something like 81-113 fps/amplitude, which made the graphing go crazy, so I divided by 10 to keep it within the 15fps frame rate I set in Processing.
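
Roughly, the receive side looks like this. Hedged sketch: I’m assuming newline-terminated messages like "p512" at 115200 baud, which may not match the sensor firmware exactly; the real code is in the zip below.

// minimal serial receive: first char tags the message, the rest is the value
import processing.serial.*;

Serial port;
float amplitude = 10;

void setup() {
  size(600, 300);
  frameRate(15); // the 15fps setting mentioned above
  port = new Serial(this, Serial.list()[0], 115200); // assumed baud rate
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String inData = trim(p.readString());
  if (inData.length() == 0) return;
  char tag = inData.charAt(0);
  if (tag == 'p') {
    // raw peak value, divided by 10 so it sits in a drawable amplitude range
    amplitude = float(inData.substring(1)) / 10.0;
  }
  // 'b', 't' and 'd' would be handled the same way in the full sketch
}

void draw() {
  background(0);
  stroke(255);
  for (int x = 0; x < width; x++) {
    float y = height/2 + sin(x * 0.05 + frameCount * 0.2) * amplitude;
    point(x, y);
  }
}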

I think the improvements would be: make the change on ‘b’ gradual over time(?), since right now ‘b’/beat is a boolean, and the moment it turns true it just snaps the graph (no transition); and map amplitude as ‘p’/peak minus ‘t’/trough = the drop average, which would give a smoother, more consistent rhythm. I also really *really* think this would be much easier in OF, since I could write it as sin(ofGetElapsedTimef()) instead of multiplying frames, dividing, etc etc. There are other things I could change as well, like using x-spacing (tighter x-spacing for a faster heartbeat) and using the period. Another thing that might be cool would be to use the Minim library and map height to a sound file, so the y-value becomes a ‘note’ on the sound map.
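
The smoothing fix would be tiny; a sketch assuming peakVal/troughVal have already been parsed out of the ‘p’/‘t’ messages:

// ease toward the target amplitude instead of snapping when 'b' flips true
float amplitude = 10;
float peakVal = 520, troughVal = 390; // would come in via 'p' and 't'

void draw() {
  float target = (peakVal - troughVal) / 10.0; // peak minus trough = drop average
  amplitude = lerp(amplitude, target, 0.1);    // 10% per frame = gradual change
}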

Download code here: http://www.mediafire.com/file/2e9za6wckhb8whz/pulsesensor.zip

[links list]

tutorials+:
http://www.pxleyes.com/tutorials/ (tutorial library)
http://paulbourke.net/miscellaneous/domemirror/faq.html  (spherical projection)
http://www.weltlighting.com/3d-video-mapping-projection-tutorial/ (3-d mapping)
http://vvvv.org/documentation/how-to-project-on-3d-geometry (3-d mapping)
http://www.videocopilot.net/tutorials/ (AE 3d filters/tutes database)
http://layersmagazine.com/ (Adobe et al)
http://library.creativecow.net/ (Adobe et al)
http://vector.tutsplus.com/ (Adobe et al)
http://web.media.mit.edu/~rehmi/fabric/ (conductive silk)
http://www.2d3dtutorials.com/ (3dsmax)
http://tutorialsfor3dsmax.blogspot.com/ (3dsmax)
http://my3dtextures.com/tutorials/tutorials.html (3dsmax textures)
http://www.polygonblog.com/3d-monster/ (3dsmax organic modelling)
http://www.3dtotal.com/ (3d et al)
http://3d.dtuts.com/ (3d et al)
http://www.rnel.net/tutorials/3d_Studio_Max (3d et al)
http://eclipsetutorial.sourceforge.net/totalbeginner.html (eclipse, java)
http://cycling74.com/category/articles/tutorials/ (max msp)
http://www.lua.org/pil/ (lua)

material resources:
http://www.sculpt.com (modelling/sculpting/adhesives)
http://www.sculpturehouse.com/ (modelling/sculpting/adhesives)
http://www.mcssl.com/store/westernsculptingsupply (modelling/sculpting/adhesives)
http://www.douglasandsturgess.com/mm5/merchant.mvc (modelling/sculpting/adhesives)
https://secure.farwestmaterials.com/ (modelling/sculpting/adhesives)
http://www.tapplastics.com/ (plastics/epoxies/adhesives)
http://www.bradsculpture.com/supply/home.asp (stone tools)
https://www.inventables.com/ (raw materials/composites)
http://www.surfachemgroup.co.uk/ (raw chemicals)
http://secure.sciencecompany.com/ (raw chemicals)
http://www.professionalplastics.com/ (hard plastics)
http://www.associatedfabrication.com/ (CNC machining, brooklyn)
http://www.atdprecision.com/ (CNC machining, rochester)
http://www.fiberopticproducts.com/ (fiberoptics, leds)
http://www.trinorthlighting.com/ (fiberoptics, leds)
http://www.fiberoptics4sale.com/ (fiberoptics)
http://www.huihongfiber.com/ (fiberoptics)
http://www.bareconductive.com/ (conductive ink, uk)
http://www.shguilian.com/template/electro-conductive_silk_e.html (conductive silk)
http://kuboriki.com/en/ (lace, trimmings)
http://yuwafabrics.e-biss.jp/indexpc.php (fabric, florals)
http://www.fleecefabricbytheyard.net/ (fabric, fleece)
http://www.fleecequeen.com/ (fabric, fleece)
http://www.liberty.co.uk/fcp/departmenthome/dept/fabrics (fabric, liberty)

documentation:
http://www.openframeworks.cc/documentation
http://processing.org/reference/
http://www.oracle.com/technetwork/java/index-jsp-142903.html
http://www.cplusplus.com/reference/
http://libcinder.org/docs/v0.8.3/
http://www.lua.org/manual/5.1/
http://www.anscamobile.com/corona/
http://developer.android.com/guide/index.html
http://www.php.net/manual/en/index.php

journals:
http://users.aims.ac.za/~abdelrhman/Welcome%20%7C%20gigapedia.org.html
http://aaaaarg.org/login
http://www.tuio.org/

[eyedeer] edible computing

was talking to Larry (2nd year) and one of the things we came up with was edible computing!

What it is:

tempered chocolate sheet

and

edible gold leaf

= edible PCBs! (it really works, I tested it with a voltmeter…. the gold/silver leaf is around 94-98% pure due to FDA standards, so it conducts very well)

We were thinking of building shields out of marzipan, and using fondant as solder. You could use sugarcraft/bijoux techniques to create glass-like ‘shells’ to house lighting (edible lightbulbs??) and make resistors out of agar. The only problem with edible computing is creating diodes, because that requires a semiconductor + electron band theory, and you really can’t get away from silicon for that (funnily, most silicates are actually edible, just not delicious 😦 ). I think it’d be kinda interesting to go from wearables to edibles…. how cool would it be to have your cake, light it up AND eat it?