I was talking to Isaac + Dan (? I think his name is Dan. It’s horrible, but I’ve talked to him a zillion times and I still can’t remember names =_=;;;) and Isaac was like: ‘if you were us, how would you create a 3-d image without using a Kinect or anything high-tech, just basic home equipment?’
(He later told me that he was trying to develop a marketing tool to sell clothes, so online shoppers could put up 3-d images of themselves, and I was like omg I’m being taken advantage of(!!!) and it was a bit wtf-inducing, but then he was like you should totally join us, and I was like OK *turns into capitalist grubber*)
ANYWAY. I was like, ‘oh, I’ll use particle theory and HSB’ and he was like: ‘whut?’
Drawing theory 101 states that objects in the foreground are always darker and more saturated than objects in the background. Therefore, the distance between object 1 and object 2 can be calculated by measuring their relative HSB values, then using particles to fill the ‘space/width’ of each object, where depth is based on the density of particles along the z-axis. The good thing about HSB is that it ignores colour temperature values (i.e. pure red and pure blue share the same saturation value, but because of temperature they would register differently on the RGB axes), and it conveniently puts everything on a 0–100% scale as opposed to a 0–255 scale. The best part about this method is that it doesn’t require high contrast (i.e. using IR, or converting images to greyscale for tracking or wtv) or any external technology besides a handheld digital camera.
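To make the idea concrete, here’s a minimal sketch of the comparison step, assuming two sampled RGB pixels (one per object). The function names (`hsb`, `relative_depth`) and the saturation-minus-brightness score are my own invention, not from any real pipeline:

```python
import colorsys

def hsb(rgb):
    """Convert an (r, g, b) tuple in 0-255 to HSB, each component on a 0-100% scale."""
    h, s, v = colorsys.rgb_to_hsv(rgb[0] / 255, rgb[1] / 255, rgb[2] / 255)
    return (h * 100, s * 100, v * 100)

def relative_depth(pixel_a, pixel_b):
    """Positive result: pixel_a reads as nearer (darker + more saturated)."""
    _, sat_a, bri_a = hsb(pixel_a)
    _, sat_b, bri_b = hsb(pixel_b)
    # Foreground = more saturated and darker, so score each pixel by
    # saturation minus brightness and compare the scores.
    return (sat_a - bri_a) - (sat_b - bri_b)

# Pure red and pure blue share the same saturation (100%) and brightness
# (100%) in HSB, even though they sit on different RGB axes:
print(hsb((255, 0, 0)))  # (0.0, 100.0, 100.0)
print(hsb((0, 0, 255)))
```

Note that hue is carried along but ignored by the score — only saturation and brightness feed the depth ordering, which is why colour temperature drops out.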
And here’s a simple clip demonstrating the concept: http://vimeo.com/32900354
I set up an automatic action (in Photoshop) to differentiate areas based on HSB, and created separate layers based on each proximity zone (HSB info can be accessed with F7; use select range + the average value picker). I used a threshold of 65% between each area, but it’s quite interesting that if you lower the threshold you get a more detailed 3-d map, since more areas become divided up (at a threshold of 20%, I had something like 30+ layers, since each ‘grey’ area became subdivided). It’s also interesting to note that the whiteness of the ice doesn’t throw off the proximity measurements: it seems to recognise that the white of the background icecaps is brighter/lighter/‘whiter?’ than the ice in the foreground. And it recognises angles(?) – or at least, the POV the photo was taken from – because the darkest area identified by the actions is the bottom left, which is where the boat/photographer is located, compared to the bottom right area.
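A rough code analogue of that Photoshop action, assuming a flat list of RGB pixels stands in for the image. Zone boundaries every `threshold` percent of brightness mimic the 65% / 20% settings above: a smaller step carves the image into more ‘layers’. All names here are hypothetical:

```python
import colorsys

def zone_of(rgb, threshold):
    """Bucket a pixel into a proximity zone by its HSB brightness (0-100%)."""
    brightness = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))[2] * 100
    return int(brightness // threshold)

def split_into_layers(pixels, threshold=65):
    """Group pixel indices into layers, one per occupied brightness zone."""
    layers = {}
    for i, px in enumerate(pixels):
        layers.setdefault(zone_of(px, threshold), []).append(i)
    return layers

pixels = [(30, 30, 30), (120, 120, 120), (250, 250, 250)]
print(len(split_into_layers(pixels, 65)))  # coarse step: 2 layers
print(len(split_into_layers(pixels, 20)))  # fine step: 3 layers
```

Same three pixels, but the 20% step separates the mid-grey from the dark grey — the code version of each ‘grey’ area getting subdivided.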
I’m pretty sure that if you use something like a colour profiler you could get even more accurate proximity values based on HSB. I knew a d00de called Les Walking while in Australia (the company I was working illegally for part-time sent me to him as a student, so that I could help said dodgy company steal his business practices). Anyway, the most important thing I learnt from Les was the value of colour profiling, and how colour profiles can be viewed as 3-d prism objects in ‘viewable light ranges’ (for instance, Ektachrome profiles are differently shaped/modelled spectrums from sRGB profiles)… so if you were to create a proper profile for the web camera, you would get a better yardstick-range result because of standardisation. The beauty of colour profiling is that if you do it right across all I/Os, you get a nice chain effect going: the user can implement it on a printer/photo/projection/camera and get the same look every time, without the upkeep of constant installation updates. And yet it’s locked-in and portable enough, because it doesn’t touch the mechanics of the software or hardware – it sits on the flow itself.
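This is not a real ICC workflow — just a toy stand-in to illustrate the ‘chain effect’: each device gets a characterisation curve, everything converts through a shared working space, and the look stays consistent end to end. The gamma values and function names are invented for illustration; a real setup would use actual ICC profile files and a colour-management system:

```python
def device_to_working(value, gamma):
    """Map a device channel value (0-1) into the shared working space."""
    return value ** gamma

def working_to_device(value, gamma):
    """Map a working-space value back out to another device."""
    return value ** (1 / gamma)

def convert(value, src_gamma, dst_gamma):
    """Camera -> working space -> output device: the profiling 'chain'."""
    return working_to_device(device_to_working(value, src_gamma), dst_gamma)

# A mid-grey from a hypothetical camera (gamma 2.2) round-trips cleanly
# to a display with the same characterisation, because both ends agree
# on the working space in the middle:
print(round(convert(0.5, 2.2, 2.2), 3))  # 0.5
```

The point of the shape: the curves sit on the data flow between devices, so swapping the printer for a projector only swaps the last curve — the rest of the chain is untouched.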
Anyway, I need some sleep… it’s like 6.45am and I have class in a few hours!!!!! No good!!!! (。┰ω┰。)!!!!!!!~~~