It’s common to hear us talking about our cooled CCD cameras, but why exactly do we cool them? Following on from our look at read noise and shot noise, the third type of noise associated with CCD cameras is dark current, or thermal signal. Here Steve Chambers explains what it is, what it means for your images and how we get around it by cooling our cameras.
Hello.
What I’d like to do in this video is talk about thermal noise, or rather thermal signal and the noise associated with that thermal signal. In some ways, thermal noise is very similar to shot noise in that we have a signal, which is effectively thermal current, or thermal signal, and associated with that is an element of noise that turns out to be related to the square root of the amount of dark current that we have.
So there’s both a signal element and a noise element, an uncertainty on that signal, and it’s really the uncertainty, the noise element, that degrades the image, because the signal itself we can simply subtract away.
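To make that concrete, here’s a minimal numpy sketch; the dark current figure and exposure time are made up purely for illustration. Even after the mean dark signal is subtracted, the Poisson scatter around it remains.

```python
import numpy as np

# Illustrative values only - not measured figures for any particular camera
dark_current = 0.5                 # electrons / pixel / second (assumed, warm sensor)
exposure = 600                     # seconds

mean_dark_signal = dark_current * exposure          # expected thermal signal per pixel

rng = np.random.default_rng(0)
dark_frame = rng.poisson(mean_dark_signal, size=(1000, 1000)).astype(float)

# Subtracting the mean removes the signal element...
calibrated = dark_frame - mean_dark_signal

# ...but the noise element, roughly sqrt(signal), is still there
print(calibrated.mean())   # close to zero
print(calibrated.std())    # close to np.sqrt(mean_dark_signal), ~17 electrons here
```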
Okay, so, this little graph here shows dark current versus temperature for a Kodak sensor. What we see, obviously, is that at higher temperatures we have higher dark current. Dark current here is expressed in electrons per pixel per second. And at lower temperatures we have lower levels of dark current.
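The exact curve depends on the sensor, but a common rule of thumb is that dark current roughly doubles for every 5 to 7°C rise in temperature. The sketch below uses assumed reference values, not real figures for this Kodak sensor, just to show the shape of that relationship.

```python
# A rough model of dark current versus temperature.
# Reference values are assumptions for illustration, not measured sensor data.
dark_current_at_25C = 1.0    # electrons / pixel / second at +25 C (assumed)
doubling_temp = 6.0          # degrees C per doubling of dark current (rule-of-thumb value)

def dark_current(temp_c):
    """Approximate dark current (e-/pixel/s) at a given sensor temperature."""
    return dark_current_at_25C * 2 ** ((temp_c - 25.0) / doubling_temp)

for t in (25, 10, 0, -10, -20):
    print(f"{t:+3d} C: {dark_current(t):.4f} e-/pixel/s")
```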
All our cameras – or pretty much all our cameras – are cooled CCD cameras. The reason for cooling them is to decrease the dark current, and to decrease this dark current noise.
Hopefully we get down to the point where the dark current, or dark signal, becomes insignificant compared with the other types of noise we’re going to have in an image. Those other types of noise are read noise and shot noise.
So if we have a look at an Atik 414EX, which uses a Sony sensor, that actually has remarkably low dark current; in fact, it’s so low it’s quite difficult to measure. It’s roughly 0.001 electrons per pixel per second at -10°C. That means that in a ten minute exposure we can expect less than 0.6 of an electron of thermal signal, and the noise associated with that signal is also well under an electron.
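Running those numbers through, taking the quoted 0.001 electrons per pixel per second at face value:

```python
dark_current = 0.001      # electrons / pixel / second, quoted for the 414EX at -10 C
exposure = 600            # seconds (a ten minute exposure)

thermal_signal = dark_current * exposure    # ~0.6 electrons per pixel
dark_noise = thermal_signal ** 0.5          # ~0.77 electrons (Poisson statistics)

print(thermal_signal, dark_noise)
# Both figures are well below a read noise of a few electrons, which is
# typical for this class of camera, so read noise and shot noise dominate.
```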
What this really turns out to mean is that read noise and shot noise will always dominate an image from a cooled CCD camera. That’s really important, and it’s what differentiates them from uncooled cameras: if we’re using a digital SLR camera on a warm night, we’d expect the noise from thermal sources to be quite significant compared with things like read noise. But if we move to a cooled CCD camera, that’s one source of noise that usually ends up at a much lower level than the other sources we need to deal with. And that’s obviously the reason we cool CCDs.
Hot, Warm and Cool Pixels
Related to dark current, but not exactly the same, are hot, warm, cool and cold pixels. So, dark current is something that affects every pixel. Hot, warm, cool and cold pixels are slightly different.
Hot pixels are pixels that are effectively stuck on the maximum signal, in this case around 65,000 ADU, the top of a 16-bit range. This is a dark frame from an Atik 11000, stretched to the point where we can see lots of really warm pixels. I’m not sure there are any really hot pixels on here; typically we don’t see many hot pixels, but we do see quite a few warm pixels.
And a warm pixel is a pixel whose value sits above, or outside of, the normal distribution of bias frame pixels, and they appear as these little white spots. There’s no clear-cut point where normal bias pixels end and warm pixels begin; you can choose however many standard deviations from the mean to make that definition. But as we see on the histogram, they really appear as a tail to the right of the normal distribution of bias frame pixels.
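One simple way to draw that line is sketched below: flag anything more than a chosen number of standard deviations above the bias-frame mean. The five-sigma threshold is just an example value, not one from the video.

```python
import numpy as np

def flag_warm_pixels(bias_frame, n_sigma=5.0):
    """Return a boolean map of pixels sitting in the tail above the bias distribution."""
    mean = bias_frame.mean()
    sigma = bias_frame.std()
    return bias_frame > mean + n_sigma * sigma

# warm_map = flag_warm_pixels(bias, n_sigma=5.0)   # True where a pixel looks warm
```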
We can also look at cool and cold pixels. Again, these are things we see very few of on CCDs. Cool pixels are pixels that pick up signal at a slower rate than the average pixel, and so tend to lag behind the average. If we take a flat field we may see some cool pixels, but probably not very many.
Cold pixels are pixels that are not sensitive to light at all, so they always end up at the bias frame signal. Typically we wouldn’t see any of those, but it’s possible you might have one or two on a sensor.
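Cool and cold pixels could be flagged in the same spirit, but from a flat field rather than a bias frame, since they only show up as pixels that lag behind (or fail to respond to) light. The thresholds below are arbitrary choices for illustration only.

```python
import numpy as np

def flag_cool_and_cold_pixels(flat_frame, bias_level, cool_fraction=0.8, cold_fraction=0.05):
    """Flag pixels responding slower than, or barely at all compared with, the typical pixel."""
    signal = flat_frame - bias_level          # light-generated signal only
    typical = np.median(signal)
    cool = signal < cool_fraction * typical   # picking up signal at a slower rate
    cold = signal < cold_fraction * typical   # essentially insensitive to light
    return cool, cold
```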
Okay, what I want to have a quick look at now is the idea that we can use dark frames to reduce noise. This is something that comes up from time to time, so it might be worth having a quick look at.
So what I’ve done is taken two images. They’re both ten minute exposures, they’re both dark frames so no light falls on the sensor, so two ten minute exposures in the dark. Both images come up with lots of warm and hot pixels and have some dark current on them.
So how do we fix that to make it appear as a very, very flat image? Well, what we’ve done on the left is take one image, subtract the other from it, and add an offset to make sure we don’t get any below-zero values messing up the statistics. Then we look at the standard deviation, the spread of pixel values. Doing it that way, we end up with a standard deviation of 55 ADU.
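A quick simulation of that first method shows why subtracting one noisy frame from another raises the scatter rather than lowering it: the two noise contributions add in quadrature, so the result is roughly the square root of two times the single-frame noise. The numbers below are assumed, not the Atik 11000 measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (1000, 1000)
single_frame_noise = 40.0          # ADU, assumed for the example
offset = 1000.0                    # keeps the result above zero

dark1 = rng.normal(0.0, single_frame_noise, shape)
dark2 = rng.normal(0.0, single_frame_noise, shape)

subtracted = dark1 - dark2 + offset
print(subtracted.std())            # roughly sqrt(2) * 40, about 57 ADU, not 40
```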
The other way to do this is to use a defect map. We’ve taken the first image and, rather than subtract it, we’ve gone through it and identified any warm and hot pixels. Then on the second image you go to those same pixel positions and fix them: you look at the surrounding pixel values, remove the hot or warm value, and replace it with an average of its surrounding pixels.
And if you do that, you end up with a standard deviation of 43 ADU, so you have a much tighter distribution in the background. It means any signal that is there is going to be easier to detect, and the background is simply flatter and less noisy.
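Here’s a sketch of the defect-map repair itself, replacing each flagged pixel with the average of its four nearest neighbours; this is one straightforward way to do it, not necessarily how the measurement above was made.

```python
import numpy as np

def repair_with_defect_map(image, defect_map):
    """Replace flagged pixels with the mean of their four nearest neighbours."""
    # np.roll wraps around at the edges, which is fine for a rough sketch
    neighbours = (np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0) +
                  np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1)) / 4.0
    repaired = image.copy()
    repaired[defect_map] = neighbours[defect_map]
    return repaired

# defect_map could come from flag_warm_pixels() run on a dark frame:
# fixed = repair_with_defect_map(light_frame, defect_map)
```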
It’s a slightly unfair comparison as far as dark frames go. What you’d normally do is take a lot of dark frames and average them, so you’ve reduced the noise within the single master dark. But it does emphasise the point that if you’re going to use dark frames, you need to take a lot of them. I’d suggest taking two or three times more dark frames than image frames, just to make sure the noise in the master dark isn’t significant compared with the noise in the image.
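The stacking step might look something like the sketch below: average a set of dark frames into one master dark, whose noise falls roughly as one over the square root of the number of frames used.

```python
import numpy as np

def build_master_dark(dark_frames):
    """Average a stack of dark frames; noise drops roughly as 1/sqrt(N)."""
    stack = np.stack(dark_frames, axis=0).astype(float)
    return stack.mean(axis=0)

# master = build_master_dark([dark1, dark2, dark3, ...])
# calibrated = light_frame - master
```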
The other way to deal with this sort of thing is to use techniques such as defect mapping, which is very effective, or dithering with a sigma combine, both of which effectively identify pixels that are outside the norm and repair them one way or another. So my recommendation is to try defect mapping or dithering on your images, and if you are going to use dark frames, use a lot of them.
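For completeness, here’s a rough sketch of a sigma-clipped combine of dithered frames; the three-sigma threshold is a common default rather than a recommendation from the video.

```python
import numpy as np

def sigma_clip_combine(frames, n_sigma=3.0):
    """Combine aligned frames, rejecting per-pixel outliers before averaging."""
    stack = np.stack(frames, axis=0).astype(float)
    centre = np.median(stack, axis=0)
    sigma = stack.std(axis=0)
    outliers = np.abs(stack - centre) > n_sigma * sigma
    clipped = np.where(outliers, np.nan, stack)
    return np.nanmean(clipped, axis=0)
```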
So I hope that’s been useful, thank you for watching.