
Tuesday, 20 May 2014

Jurassic Wedding



You will have seen the instant internet classic of a dinosaur crashing a wedding... I got married this year and just had to do the same. Fortunately my wife agreed! I am a biochemist, but cloning a dinosaur to crash my wedding would have been a bit of a challenge, so I had to stick to the graphics approach instead.

So how do you get a dinosaur to crash your wedding?

Step 1: Recruit an understanding wedding photographer and guests for a quick running photoshoot. Make sure everyone is screaming and staring at something imaginary!


Step 2: Recruit a dinosaur. A virtual one will do, and I used this excellent, freely available Tyrannosaurus rex model for Blender.



Step 3: Get some dynamic posing going on! Most 3D graphics software uses a system called 'rigging', which adds bones to a 3D model to make it poseable. This is exactly what I did: with 17 bones (three for each leg, seven for the tail, two for the body and neck, and two for the head and jaw) our pet T. rex could be put into any pose.

 The bone system

The posed result
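
If you prefer scripting to clicking, Blender rigs can also be built with its Python API. This is only a rough sketch, not the rig used above: it assumes a mesh object named "TRex" (the name and bone positions are illustrative), builds a short chain of tail bones, and binds the mesh with automatic weights.

# Minimal Blender rigging sketch -- run inside Blender.
# Assumes a mesh object named "TRex"; names/positions are illustrative.
import bpy

# Add an armature and switch to edit mode to create bones
bpy.ops.object.armature_add(enter_editmode=True)
arm = bpy.context.object
bones = arm.data.edit_bones

# Build a simple connected chain of tail bones
prev = bones[0]
for i in range(3):
    b = bones.new("tail.%03d" % i)
    b.head = prev.tail
    b.tail = (0.0, float(i + 2), 1.0)  # rough positions along the tail
    b.parent = prev
    b.use_connect = True               # the chain bends as one when posed
    prev = b
bpy.ops.object.mode_set(mode='OBJECT')

# Parent the mesh to the armature with automatic weights,
# so each vertex follows the nearby bones
mesh = bpy.data.objects["TRex"]
mesh.select_set(True)
arm.select_set(True)
bpy.context.view_layer.objects.active = arm
bpy.ops.object.parent_set(type='ARMATURE_AUTO')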

Step 4: Get the T. rex into the scene. By grabbing the EXIF data from the running photo I found that it was shot with a 70mm focal length lens. By setting up a camera with the same focal length in Blender and tweaking its position, I matched the perspective of the T. rex view with that of the running people.
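
Pulling the focal length out of the EXIF data is easy to script, too. A small sketch using the Pillow library (the file name is a placeholder; 37386 is the standard EXIF FocalLength tag):

# Read the focal length from a photo's EXIF data with Pillow.
# "wedding_run.jpg" is a placeholder file name.
from PIL import Image

exif = Image.open("wedding_run.jpg").getexif()
exif_ifd = exif.get_ifd(0x8769)  # Exif sub-IFD, where FocalLength lives
focal_mm = exif_ifd.get(37386)   # standard EXIF FocalLength tag
print("Shot at %s mm" % focal_mm)

# Inside Blender the matching camera could then be set with:
#   bpy.data.cameras["Camera"].lens = float(focal_mm)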


Step 5: Making the dino look good. A 3D model is just a mesh of points in 3D space. To get it looking good, texturing and lighting need to be added, and for this project they also need to match the photo. Matching the lighting is particularly important; I used Google Maps and the time the photo was taken to work out where the sun was as accurately as possible.

The T. rex wireframe

Textured with a flat grey texture.



With a detail bump texture and accurate lighting.

With colours, detail texture and lighting.
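
The sun matching in Step 5 can also be computed rather than eyeballed. A sketch using the pysolar library (the coordinates and time below are made-up placeholders, not the actual wedding details):

# Where was the sun when the photo was taken? A pysolar sketch.
# Latitude/longitude and time are illustrative placeholders.
from datetime import datetime, timezone
from pysolar.solar import get_altitude, get_azimuth

lat, lon = 51.75, -1.26  # placeholder location (Oxford, UK)
when = datetime(2014, 5, 20, 15, 30, tzinfo=timezone.utc)

elevation = get_altitude(lat, lon, when)  # degrees above the horizon
azimuth = get_azimuth(lat, lon, when)     # degrees clockwise from north
print("Sun elevation %.1f deg, azimuth %.1f deg" % (elevation, azimuth))
# These two angles give the rotation for a sun lamp in Blender.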


Step 6: Layering it all together. To fit into the scene, the dinosaur must sit in the picture in 3D: in front of some objects and behind others. To do this I just made a copy of the guests who needed to sit in front of the dinosaur and carefully cut around them. The final result is then just a matter of layering the pictures together.
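
The final layering here was done in Paint.NET, but the same idea can be sketched in code. A minimal sketch, assuming the render and the cut-out guests are saved as transparent PNGs the same size as the photo (all file names are placeholders):

# Layering: photo at the back, rendered T. rex in the middle,
# cut-out foreground guests on top. File names are placeholders,
# and all three images must share the same pixel dimensions.
from PIL import Image

photo = Image.open("wedding_photo.jpg").convert("RGBA")
dino = Image.open("trex_render.png")      # render with transparent background
guests = Image.open("guests_cutout.png")  # hand-cut foreground guests

result = Image.alpha_composite(photo, dino)
result = Image.alpha_composite(result, guests)
result.convert("RGB").save("jurassic_wedding.jpg")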



So there you go! 6 steps to make your own wedding dinosaur disaster photo!


Software used:
Blender: 3D modelling and rendering.
Paint.NET: Final layering of the image.

Friday, 11 October 2013

Making Big Things Look Small

Any photo normally gives you an immediate sense of scale, but what is it about the picture that lets you know how big something is? Take a look at these two photos:


The first photo is clearly of some tiny water droplets on a plant, while the second is clearly of a train running through a city. But it isn't just content (train vs. droplet) and context (city vs. leaf) that inform you about the scale; other properties of the image come into play, and they can be used to trick your eyes into thinking a massive building is actually a miniature.

The key is blur. In photos of tiny objects the background and foreground of the image are normally very blurred, but with big objects the background and foreground are normally sharply in focus. I talked about this effect in my last blog post: the bigger the ratio of the size of the lens to the distance to the object, the more blurred the background and foreground look. Microscopes take this to the extreme, where out-of-focus parts of the image are so blurred that you can't see what is there at all.
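
A quick back-of-the-envelope illustration of that ratio (the numbers are mine and purely illustrative):

# Out-of-focus blur scales roughly with aperture diameter divided by
# subject distance. The numbers below are illustrative.
aperture = 0.018  # metres; a 50mm f/2.8 lens has an ~18mm aperture

for subject, distance in [("water droplet", 0.2), ("train", 200.0)]:
    ratio = aperture / distance
    print("%-13s at %6.1f m -> blur ratio %.6f" % (subject, distance, ratio))
# The droplet shot blurs its background about 1000x more than the train shot.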

This effect is universal; it even happens with your eyes. You can try it out: hold your finger close to your face (about 15cm/6in away) and look at it with one eye closed. Notice how blurred things in the background look. Now hold out your hand at arm's length and look at it with one eye closed. Now the objects in the background look more sharply in focus.

So to make something big look small you need to make the foreground and background look blurred. Simple! Before digital image processing (Photoshop), effects like this had to be done in camera. It was surprisingly easy: take a camera with a detachable lens, and detach the lens from the camera. If you now set up the lens pointing at something you want to take a photo of, but tilt the camera relative to the lens, then you get an effect which is a bit like blurring the foreground and background of the image. This happens because one side of the image sensor/film is now a bit too close to the lens. This makes that side of the camera long-sighted, and that side of the image blurry. The other side of the image sensor/film is also at the wrong distance from the lens, but is a bit too far instead of a bit too close. This makes the other side of the camera a bit short-sighted, and makes that side of the image blurry too. The middle of the image sensor/film is still at the correct distance from the lens, though, so the middle of the image stays in focus.

In principle this is simple, but in practice detaching the lens from your camera just means lots of stray light can sneak into the photo, causing glare. You really need a special lens which can be tilted while still attached to the camera, called a tilt-shift lens. These are very specialised and cost a huge amount; it is much cheaper to fake the effect using a computer! There are many pieces of software that let you imitate tilt-shift lenses; all that is needed is a gradually increasing blur as you move away from the line across the image that you want to remain in focus. I actually wrote a filter in ImageJ which does this processing. It is amazing how much this simple effect can trick your eye; it makes big things look small by imitating the limitations of focal depth when using lenses to make an image.
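
The filter I wrote is in ImageJ, but a rough equivalent is easy to sketch in Python with Pillow and NumPy. (The file name, focus line and blur radius are placeholders; a real tilt-shift blur increases progressively, while this simple sketch just blends in a single blurred copy.)

# Fake tilt-shift: keep a horizontal band sharp and blend in a blurred
# copy elsewhere. File name, focus line and radius are placeholders.
import numpy as np
from PIL import Image, ImageFilter

im = Image.open("city.jpg")
w, h = im.size
blurred = im.filter(ImageFilter.GaussianBlur(radius=10))

# Mask: 255 (sharp) at the focus line, fading to 0 (blurred) away from it
focus_y, band = h * 0.55, h * 0.18
ys = np.abs(np.arange(h) - focus_y)
column = np.clip(255 * (1 - ys / band), 0, 255).astype(np.uint8)
mask = Image.fromarray(np.tile(column[:, None], (1, w)), mode="L")

# Keep the original where the mask is white, the blurred copy elsewhere
Image.composite(im, blurred, mask).save("city_tiltshift.jpg")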

This is the same photo as above, but with a tilt-shift effect applied to it. This chunk of London now looks like a miniature, with a tiny toy train running through it.

The Tower of London, looking a lot smaller than usual!

The reverse effect happens too, though it is psychologically less strong. This is part of the reason scanning electron microscope images are so compelling; they don't use light or lenses (at least in a conventional sense) to generate the image. This lets everything, from background to foreground, be sharply focused, and it makes the microscopic world feel big and accessible.

This diatom shell is less than 0.1mm wide, but the sharpness of the foreground and background makes it feel larger.

Artificially blurring the foreground and background makes it seem much smaller.

When communicating science it is important to think about the scale things appear to be; scanning electron microscope images break your intuitive concept of scale, and it is hard to imagine just how small the sample is. The image below, by Donald Bliss and Sriram Subramaniam and awarded Honorable Mention in the National Geographic best science images of 2008, is of the structures inside a single cell, and is a perfect example of why you need to think about scale. The image is a computational reconstruction, so it could have been presented with pin-sharp focus, but by blurring the background it conveys the sense of scale extremely well. It feels like a single cell.


Software used:
Photomatix Pro: Generating the tonemapped HDR photos.
UFRaw: Camera raw file conversion.
ImageJ: Custom tilt-shift filtering.

Thursday, 20 June 2013

Sitting in a Pinhole Camera

Pinhole cameras are the simplest cameras possible, made up of only a pinhole and a screen or film; they don't even need a lens. The camera works by line of sight and the fact that light essentially always travels in straight lines. For every point on the film there is a single line of sight, out through the pinhole, to a point in the world outside the camera. Only light from that point in the outside world can get through the pinhole and hit the film, which is how the image is made. This simplicity means that you can make a pinhole camera incredibly easily. All that is needed is:
  1. A dark container or enclosure. 
  2. A small circular hole in one side of the container. 
  3. A screen or film to detect the light. 
The first two of these are easy to make at home, but the third isn't. Who has a spare piece of film and developing solutions? The other solution? Use a person to detect the light and see the image directly, or sketch it on the screen. The problem with people is that they are pretty big; you can't squeeze one into a shoebox. Let's make a bigger pinhole camera then, one the size of a room. A room-sized container is easy to find: it's called a room!


A room-sized pinhole camera!

The ability of a pinhole to project an image of a scene has been known for over 1000 years, and was first clearly described by the Persian philosopher Alhazen. Those early scientists faced the same problem of detecting light, and initially used darkened rooms or tents with a pinhole in one side, then traced the image projected by the pinhole by hand. These devices are called camera obscura, Latin for 'darkened room'!

Turning a room into a pinhole camera takes just three easy steps:

1. The dark enclosure 
A pinhole camera container has to be light-proof, to prevent light leaking in and spoiling the image. The best thing for light-proofing a room is aluminium foil, which is very opaque for its weight and thickness. Foil tape is also excellent for blocking smaller gaps.

2. The pinhole
There are a few rules for pinhole size. The bigger it is, the brighter the image: there is simply a larger gap for the light to get in. Unfortunately, larger pinholes also give a more blurred image, because they increase the range of angles at which light can pass through the pinhole and hit a particular point on the film or screen. The pinhole can't be too small either, or else diffraction of light will start blurring the image. Practically, for a room, a "pinhole" around 1.5-2.0 cm in diameter will let in enough light to see the image clearly while being small enough to give quite a sharp image.
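
For the curious, there is a classic formula (due to Lord Rayleigh) for the sharpest possible pinhole: d = 1.9 x sqrt(f x wavelength), where f is the pinhole-to-screen distance. A quick calculation:

# Rayleigh's optimum pinhole diameter: d = 1.9 * sqrt(f * wavelength).
# 550nm is roughly the middle of the visible spectrum.
from math import sqrt

wavelength = 550e-9        # metres
for f in (0.1, 1.0, 3.0):  # pinhole-to-screen distance in metres
    d = 1.9 * sqrt(f * wavelength)
    print("f = %.1f m -> sharpest pinhole ~ %.1f mm" % (f, d * 1000))
# For a ~3m room this gives ~2.4mm; a 1.5-2cm hole trades some sharpness
# for an image bright enough to actually see by eye.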

3. The screen or film
In a room-sized pinhole camera you just need to sit in the room to see the image!

Foil-blocked windows and a pinhole; the shaft of light is from the sun.

The image in a pinhole camera is inverted (upside down). This is because for light to reach the top of the film after travelling through the pinhole it has to be travelling at an upward angle; the line of sight through the pinhole means this is a view of the ground. Similarly, for the light to reach the bottom of the film after travelling through the pinhole it has to be travelling at a downward angle, in this case giving a view of the sky. The upshot of this is that in a room-sized pinhole camera the floor is a sea of clouds, with an inverted picture of the world outside projected on the wall.


The pinhole image the right way up...

... and upside down, the same way up as the projected image.


You can literally sit in the clouds!


Software used:
UFRaw: Raw to jpg conversion of the raw camera files.

Friday, 7 June 2013

Pretty Rubbish

Mikumi Zebra, through a polythene bag (and no, it's not photoshopped!).

Rubbish isn't normally viewed as pretty. A huge proportion of modern waste is plastic, and it is only recently that biodegradable plastics have become common, leaving a world filled with plastic waste. Plastic bags line trees; beaches are swamped with bottles. There is even a country-sized patch of fragmented plastic waste floating in the middle of the Pacific Ocean. Rubbish is not normally viewed as pretty because it simply isn't.

Coke Exhibit, through an Evian bottle.

Our eyes are pretty rubbish. They can only see light in a narrow range of wavelengths/frequencies (if our hearing were like our vision we would only be able to hear one octave) and can only detect wavelength and intensity. We just can't see other properties of the light, which include phase and polarisation. Polarised light is all around us; if you have ever tried to look at an LCD screen while wearing polarising sunglasses or 3D glasses from the cinema you will have seen weird effects. These happen purely because of the polarisation of the light.

Oxford Narrowboats, through a ziplock bag.

Polarisation can make rubbish pretty. Plastics are made of long linear polymers, and when plastic is stretched or stressed these line up and start interacting differently with light whose waves are travelling at different orientations. As polarised light is light whose waves travel at only one orientation, this means it interacts strangely with plastics. Because our eyes can't see polarisation we don't normally notice this, but by using tricks with polarised light to look at plastic we can reveal these effects. This can transform the appearance of rubbish.

Riverside Path, through a magazine wrapper.

Sadly pretty rubbish is a big PR statement. The spread of plastic waste is destroying pretty parts of the world, not just in appearance but through damage to ecosystems. The tragic case is that in the future the beauty of nature may be lost, preserved only through artificial images in artificial materials like plastics. Now that would be pretty rubbish.

 Iffley, through a tonic bottle.

You can explore the full set on Flickr.

Software used:
UFRaw: Raw to JPEG conversion.

None of these images are photoshopped in any way; they are 100% photo and could have been captured on a film camera.

Thursday, 6 June 2013

Laser Scanning a Room

Laser scanners are amazing pieces of tech which power everything from modern surveying to the self-driving cars Google is developing. The way they work is surprisingly simple: they are actually just echolocation, but using light.
  1. Point the laser at an object
  2. Send a pulse of laser light
  3. Measure the time the light takes to bounce back to the scanner
  4. Point at a new object and repeat
Most scan the whole environment instead of just measuring the distance to single objects, and use a rotating scanner with a spinning mirror.
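
Step 3 is just speed = distance / time rearranged, and because light is so fast the timings involved are tiny. A quick illustration:

# Time-of-flight ranging: distance = speed_of_light * round_trip_time / 2.
C = 299_792_458.0  # speed of light in m/s

for t_ns in (10, 20, 100):  # round-trip times in nanoseconds
    d = C * (t_ns * 1e-9) / 2
    print("echo after %3d ns -> object %5.1f m away" % (t_ns, d))
# A 20ns round trip is about 3m, so centimetre accuracy needs
# timing electronics good to a fraction of a nanosecond.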

So why don't the Google cars drive around dazzling people with lasers? Simple: they use infrared lasers instead. Near infrared has a wavelength of around 800nm and behaves just like the light we are used to; imagine night vision, not thermal vision. It is already common to use near infrared for communication with light that doesn't dazzle people; TV remote controls and mobile phone infrared ports both use it.

This means, if Google's forecasts on the success of their self-driving cars are to be believed, that in a few years' time there will be a hidden world of laser scanning in action. Every time a Google self-driving car goes past, you will be scanned with infrared light. So what would that hidden world look like?

I don't have a laser scanner, but I do have an infrared laser, an arm to wave it about with, an infrared camera, and perseverance. So let's fake a laser scan!


This is what a room looks like when being scanned by a laser scanner (or at least a slightly wonky human imitation of one). It's not what a laser scanner sees, but what you could see if you could see in infrared and watched one in action. Pretty cool really. You can see the path the laser moves over as it scans, and the "shadow" of the scanner (i.e. me!). You can just about make out the shape of the room, a coffee table and a sofa, by the way the path of the laser is distorted (some cheaper laser scanners actually measure this distortion to do the scan).

So what about if there were multiple laser scanners in action together? Because they only look at a single point at any one time they wouldn't interfere with each other (unless they happened to try and scan the exact same point at the exact same time), so they could all be on and working simultaneously. So here are two more fake scans in action:





If these three scans were just piled on top of each other, that would be pretty confusing to look at, so let's imagine that they use slightly different wavelengths (equivalent to colours) of light instead.




This is the amazing result of having three (faked) laser scanners in action at once. Pretty cool really. Even more scans start to get messy, but the shape of the room really stands out. This is six scans combined, with a different colour for each.
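
Combining the scans is just additive blending: tint each greyscale scan with its own colour, then sum them. A small sketch with Pillow and NumPy (file names and tints are placeholders, and the scan frames must all be the same size):

# Merge greyscale scan frames into one colour composite by tinting each
# scan and adding them together. File names and tints are placeholders.
import numpy as np
from PIL import Image

scans = ["scan1.png", "scan2.png", "scan3.png"]
tints = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2), (0.2, 0.4, 1.0)]  # RGB weights

total = None
for path, tint in zip(scans, tints):
    grey = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    coloured = np.stack([grey * w for w in tint], axis=-1)
    total = coloured if total is None else total + coloured

Image.fromarray(np.clip(total, 0, 255).astype(np.uint8)).save("combined.png")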


Now imagine the future, a future where the infrared part of the spectrum starts getting busy as technology takes advantage of it. Picture a visible light photo of five self-driving cars driving past a landmark: basically just an advert for Google. Now picture the same thing, but in infrared. That could be a work of art.

For more images like this check out my Flickr set.

Geeky notes:
Laser scanners obviously aren't the only thing that lights up the near infrared part of the spectrum. The sun pumps out near infrared light which would swamp the scene, so these photos work much better at night. Incandescent lights (i.e. lights which aren't fluorescent, LED or energy saving), and anything else which is extremely hot (think flames and anything which glows red hot or hotter), also pump out infrared light. Fortunately, as LED and fluorescent lights become more common the night-time near infrared scene will get darker, making the laser scanning pictures even more striking. On this topic, this is partly why energy-saving bulbs save energy: they waste less power producing light at wavelengths that we can't see.

Software used:
ImageJ: Photo handling and tweaking, composite image making.

Thursday, 10 January 2013

Time to Colour

It is hard to convey time in still images...


... but if colour isn't too important you can make the image black and white, then use colour to convey time.


This method works well where the background doesn't change; the splash of colour resulting from movement on the grey background draws the eye in.


It's easy to create photos like this. The hard bit is actually capturing the pictures, because you need to make sure the images align well; using a tripod is a good idea! Once you have the pictures neatly aligned, make each one greyscale and then colourise it. They need to be coloured in equally spaced shades of the spectrum, in time order, from red through the spectrum and back to red. For three images, pure red, green and blue work perfectly.

The original pictures...

... and the recoloured pictures.

Using additive blending, flatten the stack of images. Parts of the image where nothing moved should add up to shades of grey, but parts where things moved will be colourful.


The effect of movement is really clear if you look at cropped parts of the image where either nothing moves:


... or where there is a lot of movement:


Using a program that lets you write scripts to automate processing steps, it is very quick to make pictures like this. In ImageJ this macro takes an image stack and generates the recoloured and flattened images:

run("8-bit");
run("RGB Color");
setBatchMode(true);
src=getImageID();
d=nSlices();
for (i=0; i<d; i++) {
setSlice(i+1);
run("Duplicate...", "title=tmp");

run("HSB Stack");
run("Select All");
setSlice(1);
setColor(255*i/d);
fill();
setSlice(2);
setColor(255);
fill();
run("RGB Color");

run("Select All");
run("Copy");
close();
selectImage(src);
run("Paste");
}
setBatchMode(false);
run("Z Project...", "start=1 stop="+nSlices()+" projection=[Sum Slices]");

Really quite simple considering how impressive the results are.


Software used:
ImageJ: Image processing.


Wednesday, 26 September 2012

Book Scanning

I have an old book which I am mining data from... The problem is that paper is a pain: there is no Ctrl+F, and unless the index includes the terms you are after (which, in this case, it doesn't) searching becomes a real chore.

The solution? Scan it.
The problem? How to scan it.

Unlike printed, typed or even many handwritten documents, it's not easy to pull apart a book and scan the pages with an automatic machine, especially when the book is old, out of print and quite valuable. Most book scanners (including Google's) use cameras instead. This is my setup:

A very high-tech setup.

It's all very simple: a camera, a tripod to hold the camera still, a remote shutter button to snap the pictures, lots of lamps for even illumination, and a data connection to the computer so I didn't fill up the memory card too fast.


It was a pretty chunky book (801 pages) and it took a total of 489 shots (including reshoots of slightly out-of-focus pages) to capture all of it. That took nearly 1.5 hours, or about 10 seconds per photo. So what does a whole book look like?



With some semi-automated processing magic, these images are all that is needed for a perfect scan. Using ImageJ I converted them to black and white, subtracted the background and cropped/rotated the pages. These are some samples:



These processed images can simply be fed into Adobe Acrobat or other similar optical character recognition (OCR) software to translate the image of the text into machine-understandable, fully searchable text. Exactly what I need!
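
The processing itself was done in ImageJ, but a rough equivalent of the clean-up steps (greyscale, background subtraction, threshold; cropping and rotation left out for brevity) can be sketched in Python with Pillow and NumPy. The file name, blur radius and threshold are placeholders:

# Rough page clean-up: convert to greyscale, flatten uneven lighting by
# dividing by a heavily blurred copy, then threshold to black and white.
# File name, blur radius and threshold are illustrative placeholders.
import numpy as np
from PIL import Image, ImageFilter

page = Image.open("page_0001.jpg").convert("L")
background = page.filter(ImageFilter.GaussianBlur(radius=50))

p = np.asarray(page, dtype=np.float32)
b = np.asarray(background, dtype=np.float32) + 1.0  # avoid divide-by-zero
flat = np.clip(255.0 * p / b, 0, 255)

bw = np.where(flat > 180, 255, 0).astype(np.uint8)  # simple fixed threshold
Image.fromarray(bw).save("page_0001_clean.png")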

Software used:
ImageJ: Automated image processing