
Tuesday 21 August 2007

Meteor Spotting

So, getting back to the task of automating the taking of pictures of the night sky and picking out the interesting stuff from the noise. I got a very useful data set from Doug Ellison over at UMSF: 300 or so 30-second exposures that he took of a reasonably clear English sky on the morning of the 13th of August, when the Perseids were pretty much at their max. He caught 4 or 5 meteors, three planes and two or three satellites, plus lots of good shots of the sort of stuff we want the motion detector to ignore, specifically clouds and tree branches moving in the wind.

As I was saying last time, I established very quickly that ImageMagick is not fast enough for this sort of work. Maybe it could be made to work with a bit of effort but, once I'd figured out that I couldn't use it as-is, I went in search of some better open source motion detection software and, thankfully, found Motiontrack. It does all the heavy lifting of flattening the files to grayscale, calculating the differences and then running a sector-based blur and edge-detection filter to identify points of interest. It then conveniently outputs the result as both a tagged image and a list of sector coordinates.

For example:

Image #1: No meteors (or planes or anything else of interest)

Image #2: Meteor in the top right (just above the tree)

Image #3: Motiontrack sector image
Motiontrack's defaults reduce the image resolution by half and then divide the result into 5x5-pixel sectors. The image above marks every sector where the average difference across the corresponding 10x10-pixel block of the original pair of images exceeds a given sensitivity level. The image is handy for quick visual checking, but the important output is the list of sectors printed to the console, which looks something like:
107,0:58 114,0:18 115,0:19 226,0:26 256,0:32 273,0:23...and so on.
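Just to pin down the format (this little snippet is an illustration of my own, not part of Motiontrack itself): each space-separated entry is x,y:difference, and a couple of lines of Perl are enough to unpack it the same way the script at the bottom of this post does.

# Illustration: unpack one Motiontrack sector line into x, y and difference values
my $line = "107,0:58 114,0:18 115,0:19 226,0:26 256,0:32 273,0:23";
foreach my $sector (split /\s+/, $line) {
    my ($x, $y, $z) = $sector =~ /^(\d+),(\d+):(\d+)$/;
    print "sector at ($x,$y) changed by $z\n";
}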

The real trick now is to figure out a way to pick that line out of the scattered background noise, and also to tell the difference between that sort of difference plot and this one:


Here the wind has caused lots of movement in the branches, which is the type of motion we don't really care about.

So what we want to do is search through this list looking for blocks of adjacent sectors, then characterize them so we can tell the difference between long(ish), straight(ish) blocks and more random ones. For the moment we'll search the list, find the block that has the greatest distance between its start sector and end sector, and output both the total number of sectors in that block and the distance between its extreme end points. Those two numbers give us a crude way to establish whether the largest notable moving feature in the image is a line or not.
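As a rough illustration of why those two numbers help (a toy example of my own, not real Motiontrack data): a streak of sectors along a line spans nearly as many sectors end-to-end as it contains, while a wind-blown clump of the same size only spans a few.

# Toy comparison: sector count vs end-to-end distance for a streak and a clump
my @streak = map { [ $_, $_ ] } 0..9;   # 10 sectors lying along a diagonal line
my @clump  = ([0,0],[1,0],[2,0],[0,1],[1,1],[2,1],[0,2],[1,2],[2,2],[1,3]);   # 10 bunched sectors
foreach my $block (\@streak, \@clump) {
    my ($fx, $fy) = @{ $block->[0] };
    my $dist = 0;
    foreach my $s (@$block) {
        my $d = int( (($s->[0] - $fx)**2 + ($s->[1] - $fy)**2) ** 0.5 );
        $dist = $d if $d > $dist;
    }
    print scalar(@$block) . " sectors, end-to-end distance $dist\n";   # streak: 10 and 12, clump: 10 and 3
}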

The approach I'm taking is to build a sparsely populated data structure to represent the sectors, using a combination of two arrays and a hash (an associative array, for those of you unfamiliar with Perl).
  • The first array is just a linear copy of the sector output from Motiontrack. Each element is itself a three-element array containing the x, y and z (brightness) values.
  • The second array keeps track of where we've been and has one entry for each element of the first array. Initially every entry is set to a value that we will take to mean "hasn't been included in a measurement yet".
  • The hash is two-dimensional and is keyed on the x and y coordinates from the sector output. The value of each element is the index of the corresponding element in the two arrays. We use it to locate valid sectors as we search for blocks/lines of adjacent moving sectors.
We could use a fully populated data structure to do the search, but that seemed like a waste to me, so I ploughed on and wrote a quick version of this in Perl. I'm posting the current working version below for anyone who really wants to dig into it. It's a fairly simple depth-first recursive scan.
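To make that layout concrete, here is roughly what the three structures would hold for a made-up two-sector input of "3,4:20 4,4:25" (following the parsing loop in the script below, where indices start at 1):

# %mapdata  maps x,y coordinates to an index:    $mapdata{3}{4} = 1;   $mapdata{4}{4} = 2;
# @mapitems maps an index to its x,y,z values:   $mapitems[1] = [3,4,20];   $mapitems[2] = [4,4,25];
# @livemap  flags indices not yet visited:       $livemap[1] = 1;   $livemap[2] = 1;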

Pushing the set of 300 images through this process, we find that if we set the sensitivity high enough we can detect most of the interesting images (~13 out of 16), but we also trigger false positives on an additional 20 or so images that are just moving branches. If we set the sensitivity a little lower we drop the false positives to about 10 but only hit about 50% of the interesting images. The solution is going to be a much more robust line-finding algorithm, so I'll be digging into that over the next few days and then trying to figure out how to convert the code below into PowerShell.

The Perl Adjacent-Blocks code.

# Find-Blocks - Recursive Line finder for sparse data
$datafile=shift();
open (INFILE, "$datafile") or die ("Unable to open $datafile");
$data="";
while (<INFILE>) {
    chomp;
    $data .= $_;
}
close(INFILE);
$data .= " ";
if ($data !~ /^(\d+\,\d+\:\d+\s+)+$/) {
    print "Data Format Error\n";
    exit;
}
@data=split(/\s/,$data);
$itemcount=0;
foreach $item (@data) {
    @items=split(/[^\d]/,$item);
    $itemcount++;
    $mapdata{$items[0]}{$items[1]}=$itemcount;
    $mapitems[$itemcount]=[$items[0],$items[1],$items[2]];
    $livemap[$itemcount]=1;
}

@traces=([-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]);

$maxlen=0;
$maxdepth=0;
$deepest=0;
$ppos=0;
# Scan every sector not yet visited and flood-fill its block (indices run from 1 to $itemcount)
for ($indexitem=1;$indexitem<=$itemcount;$indexitem++) {
    if ($livemap[$indexitem]) {
        $livemap[$indexitem]=0;
        findblocks($indexitem,$indexitem,1);
    }
}
print "$deepest $maxdepth $maxlen $ppos ".$mapitems[$ppos][0]." ".$mapitems[$ppos][1];

sub findblocks {
    # Recursive adjacent item finder
    my $lstring=shift();
    my $position=shift();
    my $depth=shift();
    my $cur_x=$mapitems[$position][0];
    my $cur_y=$mapitems[$position][1];
    my ($new_x,$new_y,$new_position);
    $new_position=0;
    my $trigger=1; # Flag for triggering output
    # check each of the 8 possible adjacent sectors
    for (my $traverse=0;$traverse<=7;$traverse++) {
        # find the new sector using the offsets in the traces array
        $new_x=$cur_x+$traces[$traverse][0];
        $new_y=$cur_y+$traces[$traverse][1];
        if (exists($mapdata{$new_x}{$new_y})) {
            $new_position=$mapdata{$new_x}{$new_y};
            if ($livemap[$new_position]) {
                $trigger=0;
                $livemap[$new_position]=0;
                $lstring .= " $new_position";
                $depth++;
                findblocks($lstring,$new_position,$depth);
            }
        }
    }
    if ($trigger) {
        # If this is true then we are at a deepest point.
        # Otherwise we're still scanning
        my @points=split(/\s/,$lstring);
        my $fx=$mapitems[$points[0]][0];
        my $fy=$mapitems[$points[0]][1];
        my $dist=0;
        foreach my $pixel (@points) {
            my $tx=$mapitems[$pixel][0];
            my $ty=$mapitems[$pixel][1];
            my $dist2=int((($tx-$fx)**2+($ty-$fy)**2)**(0.5));
            if ($dist2>$dist) {
                $dist=$dist2;
            }
        }
        if ($depth > $deepest) {
            $deepest=$depth;
        }
        if ($dist > $maxlen) {
            $ppos=$position;
            $maxlen=$dist;
            $maxdepth=$depth;
        }
    }
}
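To run it, save the sector list for an image pair to a file (sectors.txt and findblocks.pl below are just example names) and pass it in on the command line. The single line it prints is the largest sector count seen in any block, followed by the sector count, end-to-end distance, final sector index and that sector's x,y coordinates for the block with the greatest end-to-end span.

perl findblocks.pl sectors.txt
# prints one line: deepest maxdepth maxlen ppos x y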


