Monday, July 30, 2012
Now at http://sites.google.com/site/rndmthngs/
Since blogger does not allow me to upload files beyond images I'll continue posting on https://sites.google.com/site/rndmthngs/.
Monday, November 29, 2010
Some Garmin GPS receivers output wrong coordinates in NMEA sentence
Executive Summary
Some Garmin GPS receivers get a simple mathematical conversion wrong that causes them to output incorrect coordinates in their NMEA sentences. The only NMEA data streams I looked at are those of the Garmin GPS 16 and Garmin GPS 18 OEM, and both suffer from the software bug described below.

Example: walking slowly and logging one coordinate pair per second as output in the NMEA sentences of the GPS 16, the following records were obtained.
29 March 2009
Latitude (N) Longitude (W)
DDMM.FFFF DDDMM.FFFF
7122.0816 15631.9960
7122.0815 15631.9978
7122.0810 15632.9996
7122.0806 15632.0016
7122.0804 15632.0025
And another example, walking even slower.
12 May 2010
Latitude (N) Longitude (W)
DDMM.FFFF DDDMM.FFFF
7122.2581 15630.9982
7122.2583 15630.9989
7122.2584 15630.9991
7122.2585 15630.9990
7122.2585 15630.9989
7122.2587 15630.9991
7122.2590 15631.9996
7122.2590 15631.9996
7122.2591 15630.9995
7122.2593 15631.9999
7122.2595 15631.0000
7122.2596 15631.0002
Details
GPS receivers calculate their position relative to a set of satellites in Euclidean space, i.e. the same coordinate space we are familiar with since high school. In my limited experience, Garmin receivers do a nice job at that, in particular considering their price. While presumably all Garmin receivers display and store correct coordinates, at least some receivers occasionally (but systematically) output wrong coordinates through their serial interface. If you do not connect anything to your Garmin that receives GPS coordinates in real time, then you can stop reading here and continue enjoying your purchase. Have fun.

For those of us who do connect stuff to our Garmin receivers, we can sometimes choose between two data formats: the proprietary Garmin binary format and the standard (ASCII) NMEA sentences. This blurb applies only to data transferred from the GPS receiver in NMEA sentences. From what I can tell, the binary data stream is correct.
Some Garmin GPS receivers are able to output data through a serial (and more recently USB) interface. The non-proprietary format used follows the NMEA 0183 specification. In this standard, data are transmitted as plain ASCII text. Coordinates are represented in a somewhat peculiar format as DDMM.FFFF, where DD are the whole degrees, MM the whole minutes, and FFFF the fractional minutes. For example, consider 71.36 degrees. This is 71 degrees, 21 minutes, and 36 seconds, i.e. 71 degrees 21.6 minutes, and would be transmitted as 7121.6000. At least the following OEM receivers have a software bug that gets the conversion to the DDMM.FFFF NMEA format wrong: Garmin GPS 16 (discontinued) and Garmin GPS 18 OEM-PC (still available as of 2010). Note that I am not familiar with the NMEA output of the Garmin GPS 18x or any other Garmin receiver.
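For reference, the conversion itself is straightforward when done carefully. The sketch below (Python 3; the function name is illustrative and latitude-only) produces the DDMM.FFFF field, including a guard for the edge case where the formatted minutes would round up to 60:

```python
def to_nmea(degrees):
    # split decimal degrees into whole degrees and decimal minutes
    deg = int(degrees)
    minutes = (degrees - deg) * 60.0
    # guard: the formatted minutes field must never reach 60.0000;
    # roll over into the next degree instead
    if round(minutes, 4) >= 60.0:
        deg += 1
        minutes = 0.0
    # latitude: DDMM.FFFF (a longitude field would use '%03d' for DDD)
    return '%02d%07.4f' % (deg, minutes)

print(to_nmea(71.36))   # 7121.6000
```

Negative (southern/western) coordinates are not handled here; NMEA carries the hemisphere in a separate N/S or E/W field.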
At the heart of the issue seems to be sloppy programming in conjunction with the use of low-resolution (single-precision, IEEE 754) floating-point arithmetic.
Consider the following case: we attach an external data logger to a Garmin receiver that reads the NMEA sentences to keep track of our position while we walk very slowly, at something like 0.5 meters per second, due North. We may find the following (this result is simulated with the script below):
actual latitude output in NMEA sentence
in degrees in DDMM.FFFF format
71.359976 7121.5986
71.359982 7121.5991
71.359988 7121.5991
71.359994 7121.5995
71.360000 7121.6000
71.360006 7121.6004
71.360012 7121.6004
71.360018 7121.6009
So far so good. We see that the conversion took place with limited precision, but otherwise everything is fine.
Doing the same experiment a bit further to the South we find (this result is simulated with the script below):
actual latitude output in NMEA sentence
in degrees in DDMM.FFFF format
71.349982 7120.9989
71.349988 7120.9994
71.349994 7120.9994
71.350000 7121.9999
71.350006 7121.0003
71.350012 7121.0008
71.350018 7121.0012
Clearly, a rounding error occurred in the conversion that incremented the minutes from 20 to 21 before the fractional minutes actually rolled over to 0.
I have seen this error occur over and over again in both latitude and longitude in the region around Barrow, Alaska. However, I bet this problem is more general and will be observable at certain minute values all around the world, including in San Francisco (37° 46' N) and New York (40° 43' N), in case somebody would like to check.
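Anyone with logged NMEA streams can check for the glitch automatically. The following Python 3 sketch (function names made up, threshold a guess) flags neighbouring fixes from a slow-moving receiver whose minute fields jump by about a full minute, as in the simulated tables above:

```python
def minutes_of(field):
    # total minutes past the whole degree, e.g. '7121.9999' -> 21.9999;
    # the 7-character slice also works for DDDMM.FFFF longitude fields
    return float(field[-7:])

def glitched(prev_field, field):
    # consecutive one-second fixes of a slow walker should never sit
    # ~1 minute (~1 nautical mile) apart; 0.5 minutes is a guessed threshold
    return abs(minutes_of(field) - minutes_of(prev_field)) > 0.5

# values from the simulated table above
fixes = ['7120.9994', '7121.9999', '7121.0003']
for a, b in zip(fixes, fixes[1:]):
    print(a, '->', b, 'glitched' if glitched(a, b) else 'ok')
```

Note that both transitions around a bad record get flagged: the jump into the glitch and the jump back out of it.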
Reverse Engineering
How can we reproduce this error? Well, for example with the following Python 2.x script:

import numpy as np
# simulate the roll-over error in
# Garmin's degree / minute conversion
# script written by Chris Petrich, 2010
# this seems to happen if minutes are calculated in two
# different ways in conjunction with limited precision

def convert(degrees):
    # enforce use of single precision (i.e. f4)
    degrees = degrees.astype('f4')
    v60 = np.array(60., dtype='f4')
    # integer degrees
    deg = np.floor(degrees)
    # get fractional minutes:
    minute = (degrees - deg) * v60
    frac = minute - np.floor(minute)
    # now calculate whole minutes differently:
    minute = np.floor(degrees*v60 - deg*v60)
    # get the number before the dot:
    whole = int(deg) * 100 + int(minute)
    return '%.4i.%.4i' % (whole, int(frac*10000))

# generate a few coordinates:
delta = .000006
ref = 71 + 21/60.
test_degrees = np.arange(ref - 5*delta, ref + 5*delta, delta)
# and output data
print "GARMIN's rounding error"
print 'decimal deg, DDMM.FFFF (NMEA)'
for deg in test_degrees:
    print deg, '%s' % convert(deg)
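The obvious fix is to derive the whole and the fractional minutes from the same intermediate value, so the two can never disagree. A Python 3 sketch of that fix (using the standard struct module to emulate the receiver's single-precision arithmetic, so numpy is not required):

```python
import math
import struct

def f32(x):
    # round x to IEEE 754 single precision, a stdlib-only stand-in
    # for the receiver's float arithmetic
    return struct.unpack('f', struct.pack('f', x))[0]

def convert_fixed(degrees):
    degrees = f32(degrees)
    deg = math.floor(degrees)
    # ONE minutes value, used for both the whole and the fractional part
    minute = f32(f32(degrees - deg) * 60.0)
    whole = int(deg) * 100 + int(math.floor(minute))
    frac = minute - math.floor(minute)
    return '%04i.%04i' % (whole, int(frac * 10000))

print(convert_fixed(71.35))
```

At 71.350000 degrees this prints 7120.9999 instead of the glitched 7121.9999: truncation at single precision still costs the last digit, but the minutes can no longer roll over early.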
Remedy
The remedy may be obvious: if this bugs you, then write to Garmin and request a firmware patch for your receiver. Happy geo-locating!

Thursday, November 13, 2008
Technical aspects of IR photography with digital cameras
I use a digital SLR camera (DSLR), a Nikon D60 converted by lifepixel.com to take photos in the near infrared (IR), i.e. about 720 nm to 1200 nm (this is NOT thermal infrared: thermal images are taken around 10000 nm). The conversion replaces the IR-block filter between lens and CCD with an IR-pass filter. When I look through the viewfinder I see "normal" colors, but what I see on the display after taking a shot is a first rendition of an infrared image.
The conversion significantly alters the response of the color receptors of the R, G, and B channels: instead of capturing the red, green, and blue components of a scene, the combination of the IR-pass filter and the dye inside the receptors leads to the following spectral sensitivity (my measurements were inspired by Samir Kharusi):
The responses of the R, G, and B channels are shown as red, green, and blue shaded curves, respectively. First off, we see that the R, G, and B channels have different spectral response curves, i.e. with this setup we are actually able to capture something like color (the thin, light red and green lines are examples of spectral separation calculated from a linear combination of the R, G, and B channels). Most notably, the R channel has a response from 720 nm up to the limit set by the band gap of the silicon CCD; the B channel is sensitive from 800 nm on. Note the dominance of the R channel: any light in the near IR that hits the sensor and triggers a response of the G or B channel will also trigger a response of the R channel. Hence, we will probably want to use a custom white balance: if we tell the camera to use a built-in white balance (designed for visible light, not near-IR) we'll get a photo that looks very red. We also see that the R channel is most sensitive at the shorter wavelengths while the B channel is most sensitive at the longer wavelengths. Hence, we may want to swap color channels and display the R channel as blue and the B channel as red.
Camera white balance
It seems that most people set a custom white balance in IR by taking a photo of green foliage (e.g. grass) instead of something white [in the D60: Shooting Menu -> White balance -> Preset manual -> Measure]. This will render foliage white on a JPG straight out of the camera. I like this choice: foliage tends to have a flat reflectivity spectrum in the near IR, i.e. even if we were able to see colors in the near IR we would probably see foliage as gray or white.
Post-processing of JPGs
My program of choice for basic image manipulations is the free IrfanView. The very minimum amount of post-processing I do is swapping the R and B channels of my JPGs (Image -> Swap Colors -> RGB to BGR). Hence, we can interpret the resulting image as follows: if an object looks reddish it has an infrared color, and if an object looks blueish it has more of a visible (deep-red) than infrared tone. I also like to increase the saturation a bit to make the photo a bit more dramatic (Image -> Color corrections...). Here's an example:
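For batch work, the same RGB -> BGR swap is trivial to script. A Python 3 sketch (helper name and sample values made up), operating on (R, G, B) pixel tuples such as those an image library would hand back:

```python
# hypothetical helper mirroring IrfanView's RGB -> BGR swap: reddish
# (strongly IR) objects come out blueish and vice versa
def swap_rb(pixels):
    return [(b, g, r) for (r, g, b) in pixels]

# made-up sample pixels: a strongly-IR pixel and a mostly-visible one
ir_pixels = [(200, 120, 60), (90, 100, 210)]
print(swap_rb(ir_pixels))
```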
Colors in near-IR
If we apply a white balance based on a green surface, and map the R and B channels to blue and red, respectively (as outlined above), then we tend to get the following colors:
- white/gray: thick, translucent clouds; snow; leaves; grass
- blue: clear ice; water; clear sky; skin
- red: bark; dry grass; some (often black) synthetic materials; some sun glasses
Near-IR:
Visible:
JPG vs. raw image format
I usually save near-IR images in RAW format because I typically want to adjust color balance and exposure. The RAW image (Nikon .nef file) contains a thumbnail JPG which can be displayed and extracted with IrfanView. Of course, it doesn't hurt to have the camera save both JPG and RAW at the same time.
Exposure setting
This is tricky with most DSLRs because light is metered independently of the CCD and the filter in front of it. Hence, the converted D60 continues to meter light in the visible, and the brightness of a scene in the visible tells us little about its brightness in the near IR. The amount of IR light increases significantly as we move from overcast skies to clear skies (it is even higher under tungsten light, and much lower under fluorescent light). In order to prevent the R channel from saturating, I set the exposure compensation on my camera to something like 0 to -1 EV under overcast skies, 0 to -2 EV under clear skies, much lower under indoor tungsten lighting, and much higher (e.g. +3 EV or more) under fluorescent light. Actually, forget these numbers and try for yourself: it depends on the scene. Experimentation is my friend.

Optimal exposure settings depend on what's going to happen with the photo: if I am only interested in near-IR black-and-white photography then I may be most interested in wavelengths above 850 nm. In this case I will expose appropriately for the B channel and deliberately overexpose the R and G channels.
Unfortunately, it is very difficult to tell from the display of the D60 whether an image is overexposed (it may be impossible). I find that I have to look at the RAW image data to tell for sure; so I always underexpose when in doubt. Underexposure does not cause artifacts although the picture is dark and there is more noise in the frame. (The RAW images of the D60 are 12 bit.)
In order to analyze the raw image file I use dcraw (free) to convert Nikon's raw NEF image to the well-documented 16-bit PGM format [dcraw -D -4 filename.nef]. I extract a crude histogram with a short C program that I'd be happy to post here if only I knew how.
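A crude histogram can also be pulled from the PGM with a few lines of Python 3. This sketch is not that C program; it assumes the binary (P5) PGM variant with big-endian 16-bit samples and no comment lines in the header, which as far as I know matches what dcraw emits:

```python
import re

def pgm_histogram(data, nbins=16):
    # parse the P5 header: magic, width, height, maxval, one whitespace
    header = re.match(rb'P5\s+(\d+)\s+(\d+)\s+(\d+)\s', data)
    width, height, maxval = (int(g) for g in header.groups())
    raster = data[header.end():]
    counts = [0] * nbins
    for i in range(width * height):
        # 16-bit samples, most significant byte first (big-endian)
        v = (raster[2 * i] << 8) | raster[2 * i + 1]
        counts[min(v * nbins // (maxval + 1), nbins - 1)] += 1
    return counts

# tiny hand-made 2x2 frame for illustration
sample = b'P5 2 2 65535\n' + bytes([0, 0, 16, 0, 255, 255, 128, 0])
print(pgm_histogram(sample))
```

On a real file: open('filename.pgm', 'rb'), read it, and pass the bytes to pgm_histogram. A saturated R channel shows up as a pile-up in the top bin.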
Focus
The focal point of a lens depends on the wavelength. Quality lenses are designed to compensate for this from 400 nm to 700 nm; however, we use them around 900 nm. Lifepixel performs an adjustment of the focal point for a specified lens set to a specific focal length (I think they do 50 mm by default). However, we can expect to get slightly out-of-focus (soft) images when we use different equipment. Usually I use an 18-55 mm lens and haven't noticed serious issues there. However, with a 200 mm lens I have to manually adjust the focus slightly after autofocusing since the subject is obviously out of focus. (This is easy to detect in playback with the zoom function.)
Hot spot artifact
I understand that there is reflection of light between the "polished" CCD and the lens in the camera. Manufacturers make sure that this does not affect the image for visible light. However, a big ugly red spot may appear if we use an unfortunate combination of camera and lens for infrared photography. I use the following Nikon lenses without problems on my converted Nikon D60 body:
- AF-S DX Zoom-NIKKOR 18-55mm f/3.5-5.6G ED II (bundled with D40)
- AF-S DX NIKKOR 18-55mm f/3.5-5.6G VR (bundled with D60)
- AF-S DX VR Zoom-NIKKOR 18-200mm f/3.5-5.6G IF-ED
Choice of camera and lens
I was looking for a system with the following key features:
- low noise
- not too expensive
- free of hot spots
- does white balance of IR images properly
The problem with Canon was that their lenses of suitable focal lengths happened to produce hot spots unless I was willing to pay big bucks. If I were to buy a new system today I'd take into account the LiveView capabilities of today's DSLRs.