
Thursday, 3 July 2014

3D Lightning 2

About a year ago two redditors happened to take a photo of the same lightning bolt, but from different places, and I used them to make a 3D reconstruction: 3D Lightning.

Well, it happened again!
The two source images.

This time the lightning bolt struck One World Trade Center (the Freedom Tower), and two people got a shot of it from across the river. A little adjustment for the rotation of the images and some guesstimation of the photographers' approximate locations let me work out that there was very little vertical shift between their positions, but quite a large horizontal shift.

Just like last time, a 100% accurate reconstruction isn't possible. You would need to know the exact locations and elevations of the photographers, and the field of view of the cameras used, to do this precisely. However, just like last time, a rough reconstruction is possible: the shift in horizontal position of each part of the lightning bolt between the two images is inversely proportional to its distance from the photographers, so the bigger the shift, the closer that part of the bolt.

The approximate 3D reconstruction.

After grabbing the coordinates from the photos it was just a matter of plugging them into Blender to make an approximate 3D reconstruction.
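For the Blender step, something like this minimal bpy sketch (current Blender Python API; the coordinates here are placeholders, not the real traced values) will turn a list of reconstructed points into a polyline object:

import bpy

# Placeholder (x, y, z) points standing in for the reconstructed bolt
points = [(0.0, 0.0, 10.0), (0.5, 1.2, 7.5), (0.2, 2.0, 4.0), (1.0, 2.4, 0.0)]

# Build a mesh of vertices joined by edges (no faces needed for a bolt)
mesh = bpy.data.meshes.new("LightningBolt")
edges = [(i, i + 1) for i in range(len(points) - 1)]
mesh.from_pydata(points, edges, [])
mesh.update()

# Link it into the scene so it shows up for rendering
obj = bpy.data.objects.new("LightningBolt", mesh)
bpy.context.collection.objects.link(obj)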

Software used:
ImageJ: Image analysis.
Blender: 3D modelling and rendering.

Tuesday, 20 May 2014

Jurassic Wedding



You will have seen the instant internet classic of a dinosaur crashing a wedding... I got married this year and just had to do the same. Fortunately my wife agreed! I am a biochemist, but cloning a dinosaur to crash my wedding would have been a bit of a challenge, so I had to stick to the graphics approach instead.

So how do you get a dinosaur to crash your wedding?

Step 1: Recruit an understanding wedding photographer and guests for a quick running photoshoot. Make sure everyone is screaming and staring at something imaginary!


Step 2: Recruit a dinosaur. A virtual one will do; I used this excellent freely available Tyrannosaurus rex model for Blender.



Step 3: Get some dynamic posing going on! Most 3D graphics software uses a system called 'rigging' to add bones to a 3D model to make it poseable. This is exactly what I did, and with 17 bones (three for each leg, seven for the tail, two for the body and neck and two for the head and jaw) I made our pet T. rex poseable.

 The bone system

The posed result
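Rigging is normally done by hand in Blender's interface, but for anyone curious what it looks like as a script, here is a rough sketch (current bpy API, with made-up bone names and placeholder coordinates) of building one chain of such a rig, the seven tail bones:

import bpy

# Create the armature data block and an object to hold it
arm_data = bpy.data.armatures.new("TrexRig")
arm_obj = bpy.data.objects.new("TrexRig", arm_data)
bpy.context.collection.objects.link(arm_obj)

# Bones can only be added in edit mode
bpy.context.view_layer.objects.active = arm_obj
bpy.ops.object.mode_set(mode='EDIT')

# A chain of seven tail bones; head/tail positions are placeholders
previous = None
for i in range(7):
    bone = arm_data.edit_bones.new(f"tail.{i:02d}")
    bone.head = (0.0, -1.0 - 0.5 * i, 1.0)
    bone.tail = (0.0, -1.5 - 0.5 * i, 1.0)
    if previous is not None:
        bone.parent = previous
        bone.use_connect = True   # chain the bones end to end
    previous = bone

bpy.ops.object.mode_set(mode='OBJECT')
# The mesh is then parented to the armature (e.g. with automatic weights)
# so that posing the bones deforms the model.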

Step 4: Get the T. rex into the scene. By grabbing the EXIF data from the running photo I found that it was shot with a 70mm focal length lens. By setting up a matching camera in Blender and tweaking its position, I matched the perspective of the rendered T. rex to that of the running people.
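The camera matching boils down to two things: give the virtual camera the same focal length and sensor size as the real one, then nudge its position and rotation until the perspectives line up. A rough bpy sketch (the sensor width here assumes an APS-C camera, which may not match the photographer's actual body):

import bpy

cam_data = bpy.data.cameras.new("MatchCam")
cam_data.lens = 70.0           # focal length in mm, from the photo's EXIF
cam_data.sensor_width = 22.2   # sensor width in mm (APS-C assumption)

cam_obj = bpy.data.objects.new("MatchCam", cam_data)
bpy.context.collection.objects.link(cam_obj)
bpy.context.scene.camera = cam_obj

# Placeholder transform: in practice this is tweaked by eye until the
# rendered T. rex sits convincingly in the photo's perspective
cam_obj.location = (0.0, -15.0, 1.6)
cam_obj.rotation_euler = (1.5, 0.0, 0.0)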


Step 5: Make the dino look good. A 3D model is just a mesh of points in 3D space; to get it looking good, texturing and lighting need to be added. For this project they also need to match the photo. Matching the lighting is particularly important, and I used Google Maps and the time the photo was taken to work out where the sun was as accurately as possible.

The T. rex wireframe

Textured with a flat grey texture.



With a detail bump texture and accurate lighting.

With colours, detail texture and lighting.
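To place the sun lamp you need its direction in the sky at the time and place of the photo. A back-of-the-envelope way to get this uses the standard low-precision solar position formulas (good to within a few degrees, which is plenty for lighting; the location and time below are placeholders):

import math
from datetime import datetime, timezone

def sun_position(lat_deg, lon_deg, when_utc):
    # Approximate solar declination from the day of the year
    n = when_utc.timetuple().tm_yday
    decl = math.radians(-23.44) * math.cos(math.radians(360.0 / 365.0 * (n + 10)))
    # Approximate solar hour angle (ignores the equation of time)
    solar_hours = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_hours - 12.0))
    lat = math.radians(lat_deg)
    # Elevation above the horizon and azimuth measured clockwise from north
    elevation = math.asin(math.sin(lat) * math.sin(decl)
                          + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    azimuth = math.atan2(-math.sin(hour_angle),
                         math.tan(decl) * math.cos(lat)
                         - math.sin(lat) * math.cos(hour_angle))
    return math.degrees(elevation), math.degrees(azimuth) % 360.0

# Placeholder location and time (Oxford, mid-afternoon in spring, UTC)
print(sun_position(51.75, -1.26, datetime(2014, 4, 12, 15, 0, tzinfo=timezone.utc)))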


Step 6: Layer it all together. To fit into the scene the dinosaur must sit in the picture in 3D: in front of some objects and behind others. To do this I just made a copy of some of the guests who needed to sit in front of the dinosaur and carefully cut around them. The final result is then just a matter of layering the pictures together.



So there you go! 6 steps to make your own wedding dinosaur disaster photo!


Software used:
Blender: 3D modelling and rendering.
Paint.NET: Final layering of the image.

Thursday, 23 January 2014

A (supernova a) long time ago in a galaxy far far away...

A long time ago in a galaxy far far away, well about 12 million years ago and 1.135×10²⁰ km to be precise, a star exploded. It exploded with such force that here, on Earth, all that time later the photons are arriving, and we can see them...

I'm talking about the supernova that was detected in M82 (the cigar galaxy) just yesterday, with the catchy temporary name PSN J09554214+6940260 (now called SN 2014J). Supernovae are enormous explosions. It is really hard to get your head around how big they are. To quote XKCD:

Which is brighter?
     1) A supernova, seen from as far away as the Sun is from the Earth, or
     2) The detonation of a hydrogen bomb pressed against your eyeball?

The answer is 1. By a factor of a billion. The sheer amount of energy supernovae release is simply unimaginable. They are also rare: the last supernova to happen closer to Earth than this one was in 1987. Right now is a great and rare chance to try and see the spectacular swan song of a dying star!

Nine months ago I set myself the challenge of imaging the moons of Jupiter with just my standard digital SLR camera, and succeeded. Taking a picture of PSN J09554214+6940260, which is currently at around magnitude 11.5, seemed like a proper challenge!
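To put magnitude 11.5 in context: the magnitude scale is logarithmic, with every 5 magnitudes a factor of 100 in brightness, and the naked-eye limit is around magnitude 6. A quick calculation:

# Brightness ratio between the naked-eye limit (~mag 6) and the supernova (mag 11.5);
# each 5 magnitudes is a factor of 100 in brightness
m_supernova = 11.5
m_naked_eye = 6.0
ratio = 100 ** ((m_supernova - m_naked_eye) / 5.0)
print(f"~{ratio:.0f}x fainter than the faintest star you can see by eye")  # ~158x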

To really know what we are looking at, take a look at this picture, which I took of M82 in April last year:


M82 is the smudgy line below and left of the M82 label. Now let's see what M82 looked like tonight (22nd January):



Let's look a bit closer:

April 2013

22nd January 2014

It's faint, it's noisy, but there certainly seems to be an extra star right on M82, exactly where the supernova is positioned... Is that slight spot really the supernova? The forecast is that the supernova will get brighter over the next week or so, which means (if there are no clouds) I can take more photos and confirm it!

It is crazy to imagine the journey the few photons that my camera picked up to make this picture have been through. Created 12 million years ago in one of the most violent explosions in the universe, they travelled 1.135×10²⁰ km at the speed of light to reach Earth. They flew down through the atmosphere, into the lenses of my camera, then impacted onto the camera sensor. These photons' deaths, after their 12 million year lives, excited a few electrons, which were then detected and used to make this picture. Pretty epic.

Software used:
ImageJ: Image processing

The geeky details:
Canon EOS 450D
Sigma 18-200mm f/3.5-6.3 DC OS HSM
The lens was used at 200mm, maximum aperture (f/6.3), with focus set manually to infinity. 20 images were captured at ISO 800 with a 2.5 second exposure time, then aligned and averaged in ImageJ.
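I did the aligning and averaging in ImageJ, but the same idea can be sketched in a few lines of Python (NumPy + Pillow; the filenames are hypothetical), using phase correlation to estimate the shift between frames before averaging them:

import glob
import numpy as np
from PIL import Image

def shift_between(ref, img):
    # Phase correlation: the peak of the normalised cross-power spectrum
    # gives the integer (dy, dx) translation between the two frames
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to signed shifts
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

frames = [np.asarray(Image.open(p).convert("L"), dtype=float)
          for p in sorted(glob.glob("m82_*.png"))]
reference = frames[0]
stack = np.zeros_like(reference)
for frame in frames:
    dy, dx = shift_between(reference, frame)
    stack += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
stack /= len(frames)
Image.fromarray(np.clip(stack, 0, 255).astype(np.uint8)).save("m82_stacked.png")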

Friday, 11 October 2013

Making Big Things Look Small

Any photo normally gives you an immediate sense of scale, but what is it about the picture that lets you know how big something is? Take a look at these two photos:


The first photo is clearly of some tiny water droplets on a plant, while the second is clearly of a train running through a city. But it isn't just content (train vs. droplet) and context (city vs. leaf) that inform you about the scale; you can trick your eyes into thinking a massive building is actually a miniature. There are other properties of the image that come into play.

The key is blur. In photos of tiny objects the background and foreground of the image are normally very blurred, but with big objects the background and foreground are normally sharply in focus. I talked about this effect in my last blog post: the bigger the ratio of the size of the lens to the distance to the object, the more blurred the background and foreground look. Microscopes take this to the extreme, where out-of-focus parts of the image are so blurred that you can't see what is there at all.
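A rough rule of thumb makes this concrete: a distant background is smeared over an angle of roughly (aperture diameter ÷ subject distance). The numbers below are illustrative, not measurements:

import math

def background_blur_degrees(aperture_mm, subject_distance_mm):
    # Angular blur of a far-away background when focused on the subject,
    # approximately aperture / distance (in radians), converted to degrees
    return math.degrees(aperture_mm / subject_distance_mm)

# Macro shot: ~30 mm aperture, droplet 100 mm away -> background heavily blurred
print(background_blur_degrees(30, 100))       # ~17 degrees
# City scene: same aperture, buildings 500 m away -> background essentially sharp
print(background_blur_degrees(30, 500_000))   # ~0.003 degrees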

This effect is universal; it even happens with your eyes. You can try it out: hold your finger close to your face (about 15cm/6in away) and look at it with one eye closed. Notice how blurred things in the background look. Now hold out your hand at arm's length and look at it with one eye closed. Now the objects in the background look more sharply in focus.

So to make something big look small you need to make the foreground and background look blurred. Simple! Before digital image processing (Photoshop), effects like this had to be done in camera. It was surprisingly easy: take a camera with a detachable lens, and detach the lens from the camera. If you now set up the lens pointing at something you want to take a photo of, but tilt the camera relative to the lens, you get an effect which is a bit like blurring the foreground and background of the image. This happens because one side of the image sensor/film is now a bit too close to the lens. This makes that side of the camera long-sighted, and that side of the image blurry. The other side of the image sensor/film is also at the wrong distance from the lens, but is a bit too far instead of a bit too close. This makes the other side of the camera a bit short-sighted, and makes that side of the image blurry too. The middle of the image sensor/film is still at the correct distance from the lens, so the middle of the image stays in focus.

In principle this is simple, but in practice detaching the lens from your camera just means lots of stray light can sneak into the photo, causing glare. Instead you need a special lens which can be tilted while still attached to the camera, called a tilt-shift lens. These are very specialised and cost a huge amount; it is much cheaper to fake the effect using a computer! There are many pieces of software that let you imitate tilt-shift lenses; all that is needed is a gradually increasing blur as you move away from the line across the image that you want to remain in focus. I actually wrote a filter in ImageJ which does this processing. It is amazing how much this simple effect can trick your eye; it makes big things look small by imitating the limited depth of field you get when using lenses to make an image.
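My filter was written in ImageJ, but the idea is easy to sketch in Python with Pillow and NumPy: keep a horizontal band sharp and composite in progressively stronger Gaussian blurs as you move away from it (the function and parameter names here are mine, not the ImageJ filter's):

import numpy as np
from PIL import Image, ImageFilter

def fake_tilt_shift(path, focus_y, band=100, max_blur=12, steps=6):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    rows = np.arange(h)
    # Blur "level" per row: 0 inside the sharp band around focus_y,
    # rising smoothly to `steps` at the rows furthest from it
    dist = np.clip(np.abs(rows - focus_y) - band, 0, None).astype(float)
    level = dist / dist.max() * steps if dist.max() > 0 else dist

    out = img.copy()
    for i in range(1, steps + 1):
        blurred = img.filter(ImageFilter.GaussianBlur(i * max_blur / steps))
        # Each row blends towards the blur level it has reached
        mask_rows = (np.clip(level - (i - 1), 0, 1) * 255).astype(np.uint8)
        mask = Image.fromarray(np.tile(mask_rows[:, None], (1, w)), "L")
        out = Image.composite(blurred, out, mask)
    return out

fake_tilt_shift("london_train.jpg", focus_y=600).save("london_miniature.jpg")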

This is the same photo as above, but with a tilt shift effect applied to it. This chunk of London now looks like a miniature, with a tiny toy train running through it.

The Tower of London, looking a lot smaller than usual!

The reverse effect happens too, though it is psychologically less strong. This is part of the reason scanning electron microscope images are so compelling; they don't use light or lenses (at least in a conventional sense) to generate the image. This lets everything, from background to foreground, be sharply focused, and it makes the microscopic world feel big and accessible.

This diatom shell is less than 0.1 mm wide, but the sharpness of the foreground and background makes it feel larger.

Artificially blurring the foreground and background makes it seem much smaller.

When communicating science it is important to think about the scale things appear to be; scanning electron microscope images break your intuitive sense of scale, and it is hard to imagine just how small the sample is. The image below, by Donald Bliss and Sriram Subramaniam and awarded Honorable Mention in the National Geographic best science images of 2008, shows the structures inside a single cell, and is a perfect example of why you need to think about scale. This image is a computational reconstruction, so it could be presented with pin-sharp focus, but by blurring the background it conveys the sense of scale extremely well. It feels like a single cell.


Software used:
Photomatix Pro: Generating the tonemapped HDR photos.
UFRaw: Camera raw file conversion.
ImageJ: Custom tilt shift filtering.

Thursday, 20 June 2013

Sitting in a Pinhole Camera

Pinhole cameras are the simplest cameras possible, made up of only a pinhole and a screen or film. They don't even need a lens. The camera works by line of sight and the fact that light essentially always travels in straight lines. For every point on the film there is a single line of sight, out through the pinhole, to a point in the world outside the camera. Only light from that point in the outside world can get through the pinhole and hit that point on the film, and that is how the image is made. This simplicity means that you can make a pinhole camera incredibly easily. All that is needed is:
  1. A dark container or enclosure. 
  2. A small circular hole in one side of the container. 
  3. A screen or film to detect the light. 
The first two of these are easy to make at home, but the third isn't. Who has a spare piece of film and developing solutions? The alternative solution? Use a person to detect the light and see the image directly, or sketch it on the screen. The problem with people is that they are pretty big; you can't squeeze one into a shoebox. Let's make a bigger pinhole camera then, one the size of a room. A room-sized container is easy to find: it's called a room!


A room sized pinhole camera!

The ability of a pinhole to project an image of a scene has been known for over 1000 years, and was first clearly described by the Persian philosopher Alhazen. These early scientists faced the same problem of detecting light, and initially used darkened rooms or tents with a pinhole in one side, then traced the image projected by the pinhole by hand. These devices are called camera obscura, Latin for 'darkened room'!

Turning a room into a pinhole camera takes just three easy steps:

1. The dark enclosure 
A pinhole camera container has to be light proof to prevent light leaking in and spoiling the image. The best thing for light-proofing a room is aluminium foil, which is very opaque for its weight and thickness. Foil tape is also excellent for blocking smaller gaps.

2. The pinhole
There are a few rules for pinhole size. The bigger it is, the brighter the image: there is simply a larger gap for the light to get in. Unfortunately larger pinholes give a more blurred image, because they increase the range of angles at which light can pass through the pinhole and hit a particular point on the film or screen. The pinhole also can't be too small, or diffraction of light will start blurring the image. Practically, for a room, a "pinhole" around 1.5-2.0 cm in diameter will let in enough light to see the image clearly, but be small enough to give quite a sharp image (there is a rough calculation of this trade-off after these three steps).

3. The screen or film
In a room-sized pinhole camera you just need to sit in the room to see the image!
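For the curious, there is a well-known rule of thumb (usually credited to Lord Rayleigh) for the sharpest possible pinhole: diameter ≈ 1.9 × √(focal length × wavelength). For a room-sized camera the optimum comes out at only a couple of millimetres, which is far too dim to view by eye in real time, hence the much larger hole recommended above:

import math

focal_length = 3.0    # metres: pinhole-to-wall distance for a room-sized camera
wavelength = 550e-9   # metres: green light

# Rayleigh's rule of thumb for the diffraction-limited pinhole diameter
d_optimal = 1.9 * math.sqrt(focal_length * wavelength)
print(f"Sharpest pinhole: ~{d_optimal * 1000:.1f} mm")             # ~2.4 mm

# How much more light a practical 2 cm hole lets in (area scales with diameter squared)
print(f"A 2 cm hole is ~{(0.02 / d_optimal) ** 2:.0f}x brighter")  # ~67x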

Foil-blocked windows and the pinhole; the shaft of light is from the sun.

The image in a pinhole camera is inverted (upside down). This is because for light to reach the top of the film after travelling through the pinhole it has to be travelling at an upward angle; the line of sight through the pinhole means this is a view of the ground. Similarly for the light to reach the bottom of the film after travelling through the pinhole it has to be travelling at a downward angle, in this case giving a view of the sky. The upshot of this is that in a room sized pinhole camera the floor is a sea of clouds with an inverted picture of the world outside projected on the wall.


The pinhole image the right way up...

... and upside down, the way it actually appears on the wall.


You can literally sit in the clouds!


Software used:
UFRaw: Raw to jpg conversion of the raw camera files.

Friday, 7 June 2013

Pretty Rubbish

Mikumi Zebra, through a polythene bag (and no, it's not photoshopped!).

Rubbish isn't normally viewed as pretty. A huge proportion of modern waste is plastic, and it is only recently that biodegradable plastics have become common, leaving a world filled with plastic waste. Plastic bags line trees; beaches are swamped with bottles. There is even a country-sized patch of fragmented plastic waste floating in the middle of the Pacific Ocean. Rubbish is not normally viewed as pretty because it simply isn't.

Coke Exhibit, through an Evian bottle.

Our eyes are pretty rubbish. They can only see light in a narrow range of wavelengths/frequencies (if our hearing was like our vision we would only be able to hear one octave) and can only detect wavelength and intensity. We just can't see other properties of the light, such as phase and polarisation. Polarised light is all around us; if you have ever tried to look at an LCD screen while wearing polarising sunglasses or 3D glasses from the cinema you will have seen weird effects. These happen purely because of the polarisation of the light.

Oxford Narrowboats, through a ziplock bag.

Polarisation can make rubbish pretty. Plastics are made of long linear polymers, and when plastic is stretched or stressed these line up and start interacting differently with light depending on the orientation in which its waves oscillate. As polarised light is light whose waves oscillate in only one orientation, it interacts strangely with stressed plastics. Because our eyes can't see polarisation we don't normally notice this, but using tricks with polarised light to look at plastic we can reveal these effects. This can transform the appearance of rubbish.

Riverside Path, through a magazine wrapper.

Sadly, pretty rubbish comes with a serious message. The spread of plastic waste is destroying pretty parts of the world, not just in appearance but through damage to ecosystems. The tragedy is that in the future the beauty of nature may be lost, preserved only through artificial images in artificial materials like plastics. Now that would be pretty rubbish.

 Iffley, through a tonic bottle.

You can explore the full set on Flickr.

Software used:
UFRaw: Raw to JPEG conversion.

None of these images are photoshopped in any way; they are 100% photo and could have been captured on a film camera.

Thursday, 6 June 2013

Laser Scanning a Room

Laser scanners are an amazing piece of tech, powering everything from modern surveying to the self-driving cars Google is developing. The way they work is surprisingly simple: they are essentially just echolocation, but using light.
  1. Point the laser at an object
  2. Send a pulse of laser light
  3. Measure the time the light takes to bounce back to the scanner
  4. Point at a new object and repeat
Most scan the whole environment instead of just measuring the distance to single objects, using a rotating scanner head with a spinning mirror.
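The distance measurement itself is just a time-of-flight calculation: halve the round-trip time and multiply by the speed of light. For example:

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_echo(round_trip_seconds):
    # The pulse travels out to the object and back, so halve the time
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 100 nanoseconds bounced off something ~15 m away
print(distance_from_echo(100e-9))  # 14.99 m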

So why don't the Google cars drive around dazzling people with lasers? Simple: they use infrared lasers instead. Near infrared has a wavelength of around 800 nm and behaves just like the light we are used to (think night vision, not thermal vision). It is already common to use near infrared for communication with light that doesn't dazzle people; TV remote controls and mobile phone infrared ports use near infrared.

This means, if Google's forecasts on the success of their self-driving cars are to be believed, that in a few years' time there will be a hidden world of laser scanning in action. Every time a Google self-driving car goes past, you will be scanned with infrared light. So what would that hidden world look like?

I don't have a laser scanner, but I do have an infrared laser, an arm to wave it about with, an infrared camera, and perseverance. So let's fake a laser scan!


This is what a room looks like when being scanned by a laser scanner (or at least a slightly wonky human imitation of one). It's not what a laser scanner sees, but what you could see if you could see in infrared and watched one in action. Pretty cool really. You can see the path the laser moves over as it scans, and the "shadow" of the scanner (i.e. me!). You can just about make out the shape of the room, a coffee table and a sofa, by the way the path of the laser is distorted (some cheaper laser scanners actually measure this distortion to do the scan).

So what about if there were multiple laser scanners in action together? Because they only look at a single point at any one time they wouldn't interfere with each other (unless they happened to try and scan the exact same point at the exact same time), so they could all be on and working simultaneously. So here are two more fake scans in action:





If these three scans were just piled on top of each other that would be pretty confusing to look at, so let's imagine that they use slightly different wavelengths (equivalent to colours) of light instead. 
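Combining the scans with different colours is simple image arithmetic: give each scan its own colour channel and stack them. A small sketch (NumPy + Pillow, hypothetical filenames) for three scans:

import numpy as np
from PIL import Image

# One long-exposure frame per (faked) scanner, loaded as greyscale
scans = [np.asarray(Image.open(f"scan_{i}.png").convert("L"), dtype=float)
         for i in range(3)]

# Use each scan as one channel of an RGB image, as if each scanner worked
# at a slightly different wavelength
rgb = np.stack(scans, axis=-1)
rgb = (rgb / rgb.max() * 255).astype(np.uint8)
Image.fromarray(rgb, "RGB").save("combined_scans.png")
# (For more than three scans, multiply each by its own RGB colour and sum instead.)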




This is the amazing result of having three (faked) laser scanners in action at once. Pretty cool really. Even more scans start to get messy, but the shape of the room really stands out. This is six scans combined, with a different colour for each.


Now imagine the future, a future where the infrared part of the spectrum starts getting busy as technology takes advantage of it. Picture a visible-light photo of five self-driving cars driving past a landmark. Basically just an advert for Google. Now picture the same thing, but in infrared. Now that could be a work of art.

For more images like this check out my Flickr set.

Geeky notes:
Laser scanners obviously aren't the only thing that lights up the near infrared part of the spectrum. The sun pumps out near infrared light which would swamp the scene, so these photos will probably work better at night. Incandescent lights (i.e. lights which aren't fluorescent, LED or energy saving), and anything else which is extremely hot (think flames and anything which glows red hot or hotter), also pump out infrared light. Fortunately, as LED and fluorescent lights get more common, the night-time near infrared scene will get darker, making the laser scanning pictures even more striking. On that topic, this is why energy saving bulbs save energy: by producing less light at wavelengths that we can't see.

Software used:
ImageJ: Photo handling and tweaking, composite image making.

Thursday, 9 May 2013

3D Lightning

Reddit is a great website, where the ability to share and discuss things on the web throws up some wonderful little discoveries. Things that would otherwise seem impossibly unlikely, like two people in completely different places getting a photo of the same lightning bolt, suddenly pop up all the time.

 Pic by chordnine

 Pic by Bobo1010

Having two pictures of the exact same lightning bolt lets you do something pretty amazing: reconstruct its path in 3D. In this case, because the precise locations and elevations of the photographers aren't known, this is slightly more art than science, but it is still fun!

These are the two bolts, scaled to approximately the same size:


It is immediately clear that they are taken from about the same direction but from different heights: the second bolt looks squashed vertically. This means the two images form a rough stereo pair, but with a vertical shift instead of a horizontal one. This is just like the pair of images you would see if you had two eyes positioned vertically instead of horizontally on your head.


To analyse this, the first step is to trace the lightning bolt, making sure that every point in one image matches up to the corresponding point in the other image, then record the coordinates of all the points. This gives a nice table of numbers from which you can calculate the difference in x and y position between the two images.


Now we need to do some maths... except I don't like doing complicated maths, and it turns out there is a big simplification you can make! If both pictures are taken from a long way away from the lightning bolt (i.e. the object has quite a small angular size in the image) then the shift in position between the images is inversely proportional to the distance from the camera: bigger shifts mean that bit of the bolt is closer to the camera. This approximation is pretty accurate for the majority of cameras, so I used it here.

The other problem is the proportionality factor. If one part of the lightning bolt shifts twice as much between the two images as another part, that means it is twice as close. But twice as close as what? Without knowing exactly where the cameras were positioned, only the relative distance, not the absolute distance, can be calculated. Oh well, close enough!
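Putting that into practice is only a few lines. Given matched trace points from the two photos (the numbers below are placeholders, not the real trace), the vertical shift of each point gives its relative depth:

import numpy as np

# Matched (x, y) trace points from the two photos, after scaling them to the
# same size; these values are placeholders, not the real trace
pts1 = np.array([[512, 120], [498, 260], [470, 410], [455, 575]], dtype=float)
pts2 = np.array([[510, 138], [499, 271], [471, 430], [454, 601]], dtype=float)

# Vertical shift (disparity) of each point between the two photos
shift = np.abs(pts1[:, 1] - pts2[:, 1])

# Relative depth: distance is proportional to 1/shift, up to an unknown scale
depth = 1.0 / shift
depth /= depth.max()

# Rough 3D coordinates: image x for sideways position, relative depth for
# distance, negated image y for height (image y runs downwards)
bolt_3d = np.column_stack([pts1[:, 0], depth * 1000.0, -pts1[:, 1]])
print(bolt_3d)   # ready to plug into Blender as a polyline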

So what does the lightning bolt look like in 3D? I plugged the coordinates into Blender and this is the result:


Pretty amazing really!

Software used:
ImageJ: Image analysis.
Blender: 3D modelling and rendering.

Friday, 3 May 2013

Pebble

My Pebble arrived! It may be one of the smartest watches around, but it is also shiny and curvaceous. Explore the curves and reflections in my macro photos in this Flickr set...



 Software used:
ImageJ: Photograph animation
UFRaw: Raw extraction