
Thursday, 3 July 2014

3D Lightning 2

About a year ago two redditors happened to take a photo of the same lightning bolt from different places, and I used them to make a 3D reconstruction: 3D Lightning.

Well, it happened again!
The two source images.

This time the lightning bolt struck One World Trade Center (the Freedom Tower), and two people got a shot of it from across the river. A little adjustment for the rotation of the images and some guesstimation of the photographers' approximate locations let me work out that there was very little vertical shift between their positions, but quite a large horizontal shift.

Just like last time, a 100% accurate reconstruction isn't possible. You would need to know the exact locations and elevations of the photographers, and the fields of view of the cameras used, to do this precisely. However, just like last time, a rough reconstruction is possible, because the horizontal shift of each part of the lightning bolt between the two images is inversely proportional to its distance from the photographers.

The approximate 3D reconstruction.

After grabbing the coordinates from the photos it was just a matter of plugging them into Blender to make an approximate 3D reconstruction.

Software used:
ImageJ: Image analysis.
Blender: 3D modelling and rendering.

Wednesday, 9 April 2014

Cells and Worms - 1. The Theory

If you scatter 100 worms on a 1 meter by 1 meter patch of soil, how many will fall on top of another worm? This might seem like a really pointless question, but it is surprisingly relevant to biological research using microscopes. It's also a surprisingly hard question to answer, because worms are very wriggly! However, even this dry, theoretical research problem provides the tools for making fun illustrations...


My work involves a lot of automated image analysis: taking a picture from a microscope and automatically analysing it to extract scientific data. To make sure an automated analysis is reliable you have to think about all the likely problems that might turn up, and with cells and microscopes a common problem is two cells lying on top of each other. The trouble this causes is easy to imagine: if two cells, each with one nucleus, lie on top of each other, they might look like one cell with two nuclei.

For some types of cells it is quite easy to work out how likely two are to touch or lie partly on top of each other when they are scattered randomly over a microscope slide. An easy case is where all the cells are circular and the same size; the approximate calculation is quite simple. Unfortunately the cells I work on are more worm-like in shape, about 17 microns long and 2 microns wide... if you scatter these cells over a slide, how many will end up touching?
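To make the easy circular case concrete (a back-of-the-envelope version, not necessarily the exact calculation used here): a new circular cell touches an existing one whenever their centres lie within two radii of each other, so with N cells of radius r already scattered over a slide of area A, the chance that the next cell touches any of them is roughly 1 - (1 - π(2r)²/A)^N. No such tidy formula is within reach for wriggly, worm-shaped cells.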

To work out the answer, simulation is vital; the maths is just too complicated to do analytically. A simulation of worm-like shapes proved to be quite simple:
  1. Pick a random starting point, direction and curvature.
  2. Start drawing a curved line from that point.
  3. Occasionally re-randomise the curvature.
  4. Stop once you have reached the length of the cell.
  5. Draw the profile of the cell shape along that curve.
Following these simple rules and tweaking the parameters (e.g. the minimum and maximum curvature, how often the curvature is re-randomised, etc.) gives a simple algorithm for drawing a worm-like shape; with a bit of tweaking it could draw cells that look like trypanosomes. Using this drawing tool it was possible to measure the chance of a cell touching or lying on top of another cell already on the microscope slide: just repeat the drawing process thousands of times and detect whether each newly drawn cell intersects any previously drawn ones. Problem solved.
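As a minimal sketch of those five steps in Python (the step size, curvature range and re-randomisation frequency are illustrative guesses, not the values tuned for the trypanosome figures):

import math
import random

def worm_centreline(length=17.0, step=0.5,
                    max_curvature=0.3, rerandomise_p=0.1):
    """Trace a worm-like centreline as a list of (x, y) points."""
    # 1. Pick a random starting point, direction and curvature.
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    heading = random.uniform(0, 2 * math.pi)
    curvature = random.uniform(-max_curvature, max_curvature)
    points = [(x, y)]
    travelled = 0.0
    # 2. Start drawing a curved line from that point.
    while travelled < length:
        # 3. Occasionally re-randomise the curvature.
        if random.random() < rerandomise_p:
            curvature = random.uniform(-max_curvature, max_curvature)
        heading += curvature * step
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        points.append((x, y))
        travelled += step
    # 4. The loop stops once the cell length is reached.
    # 5. The cell body is the 2-micron-wide profile stroked along these points.
    return points

Stroking the returned centreline with a 2 pixel wide brush renders the cell, and the overlap test reduces to checking whether any stroked pixels collide with previously drawn cells.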

This process gave me the answer I needed, but it also provided a tool for drawing trypanosome-like shapes. Better than that, it was easy to adapt it to make sure no two cells overlapped and that they fitted neatly together over the image... And just like that a dry, theoretical research problem turned into a beautiful image:


This was also easy to adapt to other worm-like shapes, like earthworms:


Software used:

Friday, 28 March 2014

Monroe, Einstein and Visual Acuity

The recent Mirror newspaper advert in the UK has brought a classic optical illusion back into the public eye: a hybrid image of Marilyn Monroe and Albert Einstein which, up close, looks like Einstein but, from further away or with squinted eyes, looks like Marilyn.


Try it out! From close up, Einstein's trademark hair and moustache jump out, but squint or stand back from the screen and you can see a classic shot of Marilyn's curls, eyelashes and smile. A version of this illusion was first made by Aude Oliva for a feature in New Scientist, and it is a really striking example of a hybrid image illusion.

So what is your brain doing? And how can you make an image like this? Making an image is actually quite simple. First of all take pictures of these two pop icons with similar(ish) lighting and align them so their main features (eyes, mouth, overall face) are at the same size and position in the images:


The trick is then to use a Fourier bandpass filter to filter out the low frequency structure in the Einstein image, and the high frequency structure in the Marilyn image. You can find Fourier bandpass filters (sometimes called FFT filters) in many image editing programs.

So what is a Fourier bandpass filter? Without diving into too much maths, it is a way of separating out information based on its wavelength. Filtering out the low frequency structure in an image leaves only the short wavelength features, i.e. fine lines and sharp edges, while filtering out the high frequency structure leaves only the long wavelengths, i.e. the general brightness of different parts of the image.

Einstein with a <5px wavelength Fourier bandpass filter

Marilyn with a >10px wavelength Fourier bandpass filter

Fourier bandpass filters are easier to understand intuitively with sound than with images. It might help to imagine using a Fourier bandpass filter on some music: a low frequency (long wavelength) bandpass filter would leave only the bassline and bass drum, while a high frequency (short wavelength) bandpass filter would leave only the vocal lines and the high pitched instruments and drums.

If you are more mathematically minded it might be useful to see this in some graphs. These plot how bright the image is along a line across the middle of the two images. It is easy to see that the filtering of the Einstein image leaves only the short wavelength data, and the filtering of the Marilyn image leaves only the long wavelength data:


You can also imagine a long wavelength Fourier bandpass filter as a blur, and a short wavelength Fourier bandpass filter as the inverse of a blur: grabbing the details that are lost when the image is blurred.
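That blurring picture gives a minimal way to sketch the whole construction in code, assuming two aligned greyscale images as numpy arrays (the sigma values here are illustrative stand-ins for the 5px and 10px wavelength cutoffs above, not exact equivalents):

import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(detail_img, coarse_img, sigma_high=2.0, sigma_low=5.0):
    """Combine the fine detail of one image with the coarse light and
    shadow of another; both are float arrays of the same shape."""
    # Low-pass: a Gaussian blur keeps only long wavelength structure.
    low = gaussian_filter(coarse_img, sigma=sigma_low)
    # High-pass: subtracting a blur keeps only short wavelength detail.
    high = detail_img - gaussian_filter(detail_img, sigma=sigma_high)
    # Combine the two bands: add the detail on top of the blur.
    return np.clip(low + high, 0, 255)

Calling hybrid_image(einstein, marilyn) then gives the close-up-Einstein, far-away-Marilyn effect, and swapping the arguments gives the reverse version shown at the end of this post.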

Having made the two Fourier bandpass images it is simply a matter of averaging the two together to get the final product:


So how does it work? The trick is simply based on a limitation of how well you can see. From a greater distance your eyes are less able to see the fine detail of the image, so your brain interprets only the big structures. This leaves your brain to latch onto the Marilyn part of the image, helped by the fact that many of her photos are extremely recognisable.

From closer in, your eyes can resolve the fine detail in the image, and your brain makes its best effort at interpreting a slightly messy picture. Because the photos of Einstein and Marilyn are broadly similar (light skin on a dark background, with big hair) your brain can do a decent job of merging the fine detail of Einstein's face onto the general light and shadow of Marilyn's.

By switching the filtering of the two images you can get the reverse effect...


... although I do find Marilyn's teeth in this photo quite terrifying!

Software used:

Monday, 24 March 2014

Finding Circles

Today's random image processing tip: finding circles in images. Circular structures pop up all over the place, from craters on Mars to cross-sections through microtubules inside cells, and automatically detecting them in an image is often really useful! Take this starting image:

An electron microscope image, with the circular cross-section of a microtubule.

Using the right filter, circular structures like the microtubule pop out as bright spots. Bright spots are then easy to pull out for later automatic image analysis steps.

The electron microscope image after filtering.

The trick is to use a kernel filter designed to find circles. Kernel filters are most commonly used for quickly filtering or sharpening a picture; this simple kernel finds edges in an image:

0  1  0
1 -4  1
0  1  0

Applying a kernel to an image means looking at each pixel in turn and setting its new value according to the weights in the kernel. In this example the new pixel value is equal to -4× the current pixel, plus 1× each of the pixels above, left, below and right of it.
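In code, applying a small kernel like this is a single convolution call; a minimal sketch using scipy (the edge-handling mode is an arbitrary choice):

import numpy as np
from scipy.ndimage import convolve

# The edge-finding (Laplacian) kernel from above.
edge_kernel = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]])

def apply_kernel(image, kernel):
    """Set each pixel to the kernel-weighted sum of its neighbourhood."""
    return convolve(image.astype(float), kernel, mode="nearest")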

To find circles the kernel should look like a circle. To find the microtubule in the electron microscope image this kernel, a circle the same size as the microtubule, was used:


This image can be represented as a matrix of numbers, which can be applied as a kernel:

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 

And that's it! This kernel picks out circular features the same size as the kernel and makes them pop out as bright spots. Need to do something like this yourself? This ImageJ macro will get you started.
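Rather than typing out a matrix like the one above by hand, you can also generate a ring kernel of any size; a short sketch (radius 20 approximately reproduces the 41×41 kernel shown, and the thickness parameter is a hypothetical knob):

import numpy as np

def ring_kernel(radius=20, thickness=1.0):
    """Square kernel with 1s on a circle of the given pixel radius."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    distance = np.sqrt(x**2 + y**2)
    return (np.abs(distance - radius) < thickness).astype(int)

Convolving the image with ring_kernel(20), for example with the apply_kernel sketch above, makes circles of that radius light up as bright spots.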

Software used:
ImageJ

Thursday, 9 May 2013

3D Lightning

Reddit is a great website; the ability to share and discuss things on the web throws up some great little discoveries. Things that would otherwise seem impossibly unlikely, like two people in completely different places getting a photo of the same lightning bolt, suddenly pop up all the time.

Pic by chordnine

Pic by Bobo1010

Having two pictures of the exact same lightning bolt lets you do something pretty amazing: reconstruct its path in 3D. Because the precise locations and elevations of the photographers aren't known, this is slightly more art than science, but it is still fun!

These are the two bolts, scaled to approximately the same size:


It is immediately clear that they were taken from about the same direction but from different heights: the second bolt looks vertically squashed. This means the pair of images is roughly a stereo pair, but with a vertical shift instead of a horizontal one. It is just like the pair of images your eyes would see if they were positioned vertically instead of horizontally on your head.


To analyse this, the first step is to trace the lightning bolt, making sure that every point in one image matches up with the corresponding point in the other, then record the coordinates of all the points. This gives a nice table of numbers from which you can calculate the difference in x and y position between the two images.


Now we need to do some maths... except I don't like doing complicated maths, and it turns out there is a big simplification you can make! If both pictures are taken from a long way away from the lightning bolt (i.e. the bolt has quite a small angular size in the image) then the shift in position between the images is inversely proportional to the distance from the camera: bigger shifts mean that bit of the bolt is closer. This approximation is pretty accurate for the majority of cameras, so I used it here.

The other problem is the proportionality factor. If one part of the lightning bolt shifts twice as much between the two images as another part, that means it is twice as close. But twice as close as what? Without knowing exactly where the cameras were positioned, only relative distances, not absolute distances, can be calculated. Oh well, close enough!
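Turning the traced coordinates into relative depths is then only a few lines; a sketch under the small-angle approximation (the point lists and the normalise-to-the-nearest-point convention are my own framing, not taken from the original analysis):

def relative_depths(points_a, points_b):
    """Matched (x, y) traces of the bolt from the two photos in, one
    relative depth per point out: depth is proportional to 1 / shift."""
    depths = []
    for (xa, ya), (xb, yb) in zip(points_a, points_b):
        shift = abs(ya - yb)  # vertical disparity for this stereo pair
        depths.append(1.0 / shift if shift else float("inf"))
    # Without the camera positions only ratios are meaningful, so
    # express every depth relative to the nearest traced point.
    nearest = min(d for d in depths if d != float("inf"))
    return [d / nearest for d in depths]

The scaled depths, together with the image-plane coordinates, are what get plugged into Blender as a 3D curve.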

So what does the lightning bolt look like in 3D? I plugged the coordinates into Blender and this is the result:


Pretty amazing really!

Software used:
ImageJ: Image analysis.
Blender: 3D modelling and rendering.

Thursday, 15 April 2010

More Intelligent Scaling

Now even more intelligent... Scaling up and/or down at increased speed (although with slightly lowered quality).

Source image:
Scaled wider and shorter:
Scaled narrower and taller:
Download the ImageJ macro here.

Software used:
Image creation: ImageJ

Intelligent Image Scaling

Intelligent modification of images is all the rage at the moment, especially with Photoshop CS5's fancy new tools for healing imperfections in an image... This is my take on a classic intelligent image shrinking technique (often called seam carving): use edge detection to find the "interesting" bits of the picture, draw seams down the image through the boring regions, and then remove the pixels along each seam. A sketch of the core algorithm appears after the licence note below.

This implementation is 100% coded from scratch and can be used for any project under the terms of the GPL v2 or later.
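For reference, the seam-finding core can be sketched in a few lines of Python; this is a generic dynamic-programming version, not the ImageJ macro linked below, and energy stands for the edge-detection image described above:

import numpy as np

def find_seam(energy):
    """Cheapest top-to-bottom seam through an energy (edge) image;
    cost[y, x] is the total energy of the cheapest seam reaching (y, x)."""
    h, w = energy.shape
    cost = energy.astype(float)
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Trace the cheapest seam back up from the bottom row.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]  # seam[y] = column to delete in row y

def remove_seam(image, seam):
    """Make a greyscale image one pixel narrower by deleting one pixel per row."""
    return np.array([np.delete(row, x) for row, x in zip(image, seam)])

Removing 100 seams, as in the example below, is just 100 rounds of recomputing the energy, finding the cheapest seam and removing it.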

The test image:
The edge detection to find objects:
And five example low-detail vertical "seams" running through the image. These are the lines of boring pixels which will be removed to make the image narrower.
Removing 100 seams of pixels makes the image 100px narrower. See the following images for an example of the resizing in progress:
You can watch the scaling of four example images at YouTube.
Download the code for the ImageJ macro (used for rescaling of these images) here.

Software used:
Image creation: ImageJ
Video transcoding: VLC media player