1.0 Introduction
During a period of extremely poor weather in the 2016/2017 imaging season my friend Chuck Ayoub and I had plenty of time to discuss theories of how best to capture & process data. Many nights were spent discussing software settings, mechanical tweaks & potential ways to spend even more money on astrophotography…
On one of these nights Chuck mentioned that a pal of his had described a method for determining the best exposure settings for a given target, different to the usual ‘get the peak one third of the way across from the dark side of the histogram’ approach. Interestingly, he claimed that this method was more precise and supported the theory that, for the same overall total time, using exposures of optimum length gives the best possible data for processing. Shorter exposures are also very beneficial to those of us who live in areas with heavy light pollution.
This method leverages the fact that capture software, like Sequence Generator Pro, will display an average ADU (Analogue to Digital Unit) value for the image you have captured. This indirectly tells you how well exposed the image is. Since it is an averaged value, care needs to be taken with targets that have extremely bright cores, such as Messier 42, to avoid over-exposing the core & losing valuable detail. My method for checking this is to open the image in PixInsight, enable Readout Mode in 14bit mode and check that the brightest pixels do not exceed 16,383, the maximum value for a 14bit file. If they do exceed this threshold then you have over-exposed that portion of the image and the lost data cannot be retrieved, only estimated.
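If you prefer to automate this check rather than inspect pixels by hand, the following is a minimal sketch (my own, not part of the original method), assuming your subs are available as FITS files and you have numpy and astropy installed; the file name is purely illustrative.

# Count pixels at or above the 14bit maximum to flag an over-exposed core.
import numpy as np
from astropy.io import fits

SATURATION_14BIT = 16383  # maximum value for 14bit data

def count_saturated_pixels(path, threshold=SATURATION_14BIT):
    # Load the frame and count pixels that have reached the threshold.
    data = fits.getdata(path).astype(np.int32)
    return int(np.count_nonzero(data >= threshold))

# Hypothetical usage: any non-zero count means part of the image
# (for example the core of M42) has been clipped.
print(count_saturated_pixels("M42_sub_001.fits"))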
The original theory was explained for use with dedicated CCD cameras, so there are a number of conversions, and assumptions, required to apply it to a DSLR. As our imaging conditions are rarely static it is wise to define a range of ideal ADU rather than a single value; this allows us to more quickly determine the optimum settings on the night and start collecting our precious data as early as possible.
This article will concentrate on the workflow using values which I sourced online for my sensor; a follow-up article will define a process for calculating values using your own equipment, along with a method to verify the calculated ADU range by imaging a target using a range of exposure lengths and inspecting the resulting images.
2.0 Theory
To work out the ideal ADU range for your DSLR camera you need to know the read noise and gain for the ISO you wish to use. Your chosen ISO is closely linked to Unity Gain; there are a number of different theories as to which choice is best. This one describes how some extra gain is useful for boosting the signal in extremely fine detail, which is an interesting perspective. I will not discuss these theories in this article as it is a complex topic that deserves an article of its own.
This section will outline the theory of how to calculate the three main variables we require before defining the ADU range. These initial values are Unity Gain, Read Noise & Gain.
2.1 Unity Gain
For a 12bit sensor the unity gain will be the ISO where saturation is closest to 4,095.
For a 14bit sensor the unity gain will be the ISO where saturation is closest to 16,383.
For a 16bit sensor the unity gain will be the ISO where saturation is closest to 65,535.
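As a small illustration of this selection (a sketch of my own, not taken from the original source), assuming you have a table of ‘Saturation (e-)’ values per ISO, for example from sensorgen.info, the unity gain ISO for a 14bit sensor can be picked like this; the saturation numbers below are placeholders, not measurements.

# Pick the ISO whose saturation (in electrons) is closest to the 14bit maximum,
# i.e. the ISO where the gain is closest to 1 e-/ADU.
MAX_ADU_14BIT = 16383

saturation_e = {100: 30000, 200: 18000, 400: 9000, 800: 4500}  # placeholder values

unity_iso = min(saturation_e, key=lambda iso: abs(saturation_e[iso] - MAX_ADU_14BIT))
print(unity_iso)  # the ISO closest to unity gain for this (made-up) table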
2.2 Read noise
Determine the read noise empirically, as described in my article, or look it up on sensorgen.info.
I plan to update this section in the future with an empirical method to determine read noise, as I have noticed a number of sources that question the accuracy & test methodology of sensorgen.info.
2.3 Gain
Gain can be inferred from ‘Saturation (e-)’ using the following equation:
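As a sketch based on the standard definition of gain (an assumption on my part, but consistent with the saturation values in section 2.1), the relationship takes the form:

Gain (e-/ADU) = Saturation (e-) / Maximum ADU

where Maximum ADU is 4,095, 16,383 or 65,535 for 12bit, 14bit or 16bit data respectively.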
2.4 Desired ADU Range
This calculation is based on wanting at least 20 times more signal than the read noise.
There is another subtlety, shared by Chuck’s pal, that it is wise to have your data start a little offset from the read noise. I have not been able to find further information on this, so I will have to trust in his knowledge; not a huge leap as he has been correct thus far…
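To make the arithmetic concrete, here is a minimal sketch of the lower bound calculation, assuming the criterion above is taken literally (a signal of at least 20 times the read noise, both in electrons); the read noise, gain and offset figures are placeholders rather than measured values.

# Desired lower ADU from read noise and gain, following the criterion above.
def lower_adu(read_noise_e, gain_e_per_adu, offset_adu=0):
    # Signal should be at least 20x the read noise (in electrons),
    # plus an optional small offset as suggested above.
    signal_e = 20.0 * read_noise_e
    return signal_e / gain_e_per_adu + offset_adu

# Placeholder values for illustration only.
read_noise = 2.3   # e-
gain = 1.2         # e-/ADU at 14bit
print(lower_adu(read_noise, gain))  # minimum average ADU (14bit)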
3.0 Worked example
For the Nikon D7100, in 14bit mode, this was my workflow. The first step was to find the required data for my camera, shown in the table below.
Unity gain will be at the ISO where saturation is closest to 16,383. In my case this value lies between ISO 200 & ISO 400, but it is clearly closer to ISO 200. Therefore ISO 200 looks to be my optimum choice of ISO.
I can get the read noise from sensorgen.info; for ISO 200 this is 2.3 e-.
I can then use the saturation values to calculate my gain at 14bit.
I will then calculate the desired lower ADU, using the equation mentioned previously:
Compared to most figures found online this seems extremely low, but it is worth noting that those figures (and the values reported by capture software such as Sequence Generator Pro) are quoted for 16bit data, so we must convert our 14bit ADU to a 16bit figure. I was unable to find a source for this conversion, so I resorted to inferring it from my previously captured data using the following method.
To infer the ratio between 14bit and 16bit I opened a single previously captured image for each of four different targets and zoomed in so I could select individual pixels. With PixInsight in 14bit readout mode I noted the value of each corner pixel; I then switched PixInsight into 16bit readout mode and noted the corresponding value for each pixel. All of the ratios were 1:4, which is also supported by the saturation figures I quoted in section 2.1, since a 16bit file has four times the range of a 14bit file. So for my Nikon D7100 the 16bit equivalent of the lower value for my ADU range is simply four times the 14bit figure.
The upper value for my ADU range would then be:
As this is a guideline I would recommend some rounding. For my Nikon D7100 at ISO 200, based on these figures, I would use the range 200-400 ADU.
However… I was not convinced that this range was reasonable, so I decided to empirically determine the parameters for my camera. See another article on my website {HYPERLINK}; this showed my read noise to be much higher than the sensorgen figures.
4.0 Conclusion
In the past I have used ISO 400 for almost all of my imaging, so I ran through the equations substituting the ISO 400 figures for my Nikon D7100 to get an equivalent ADU range; this turned out to be 500 to 1000.
On inspecting my historic data it seemed that I had actually been working within this range for the majority of my targets. It's worth noting that I didn't subscribe to the ‘get the peak one third of the way across from the dark side of the histogram’ methodology mentioned in the introduction, as I preferred to keep cores from being over-exposed.
One other characteristic I noticed when reviewing my historic data was that the average ADU seems to be directly proportional to the exposure time. For example, if an exposure of 1 minute yielded an ADU value of 100, then a 2 minute exposure would yield an ADU value of around 200.
ADU = Exposure Time * Constant
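As a quick illustration of how this relationship could be used on the night (a sketch assuming the proportionality holds; the figures are the example values above, not measurements), a short test exposure can be used to estimate the constant and then predict the exposure length needed to land in the target ADU range.

# Estimate the per-target constant from a short test sub, then predict the
# exposure length needed to reach a desired average ADU.
def estimate_constant(test_exposure_s, measured_adu):
    return measured_adu / test_exposure_s

def exposure_for_adu(target_adu, constant):
    return target_adu / constant

k = estimate_constant(60, 100)    # e.g. a 60 second test sub averaging 100 ADU
print(exposure_for_adu(200, k))   # seconds needed for roughly 200 ADU
print(exposure_for_adu(400, k))   # seconds needed for roughly 400 ADU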
At this point I believe that the constant will vary from target to target, as it is likely also related to a number of factors including, but not limited to:
- Seeing.
- Light Pollution.
- Telescope Performance Characteristics.
- Camera Performance Characteristics.
It is my intention to continue using this newly defined ISO & ADU range and compare the results with data collected previously to verify any improvement that has been gained.
It is worth noting that this method is aimed at collecting the best quality data for a given target and is not intended to generate a final image quickly. It will require a large number of sub-exposures to build a final image, but that final image will have been composed from the best quality data that the factors listed previously will allow.