
How to estimate uncertainty of measurements of equivalent widths?




I'm measuring equivalent widths of absorption lines in the spectrum of a star. I make two or three measurements of each line by making reasonable Gaussian fits of the line with IRAF's splot tool. Then I calculate the mean of the measurements, which serves as my final equivalent width estimate.

What is a good way of estimating the uncertainty of this measurement?

My current method

I'm currently using half of the range as the uncertainty. For example, if I made two measurements, 10 and 16 mÅ (milliangstroms), then the mean is 13 mÅ and the uncertainty is 3 mÅ. This gives an equivalent width estimate of 13 ± 3 mÅ. Do you see any problems with this method of estimating uncertainty?


Yes there is a problem. You seem to be trying to derive an uncertainty in the measurement of EW by doing repeated measurements of the same data?

This can only give you the uncertainty associated with your measurement technique (i.e. where you define the limits of the line and how you set the continuum level) - the systematic error you might call it (although there can be other systematic errors inherent to EW measurements, like whether you subtracted the sky or scattered light in your spectrograph correctly for example).

What it does not do is evaluate the uncertainty in the EW caused by the quality, or signal-to-noise ratio, of the data itself. You might assess this using a rule-of-thumb formula for a Gaussian line, e.g. $$\Delta EW \simeq 1.5\,\frac{\sqrt{fp}}{\mathrm{SNR}},$$ (eqn 6 of Cayrel de Strobel 1988), where $f$ is the FWHM of the spectral line (in wavelength units), $p$ is the size of one pixel in wavelength units, and SNR is the signal-to-noise ratio of the data in an average pixel. Or you could take a synthetic spectrum, add artificial noise to it with the appropriate properties, measure the EW in several randomisations of the same spectrum, and take the standard deviation of your EW measurements as the EW uncertainty for that level of signal-to-noise ratio.
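As a hedged illustration (not from the original answer), the sketch below implements both suggestions in Python: the first function evaluates the Cayrel rule of thumb quoted above, and the second estimates the EW scatter by adding artificial noise to a noiseless synthetic line and re-measuring it many times. All numbers and names are illustrative.

```python
import numpy as np

def cayrel_ew_error(fwhm, pixel, snr):
    """Rule-of-thumb EW uncertainty: dEW ~ 1.5 * sqrt(fwhm * pixel) / SNR.
    Returns the same units as fwhm/pixel (e.g. mA if both are in mA)."""
    return 1.5 * np.sqrt(fwhm * pixel) / snr

def monte_carlo_ew_error(wave, clean_flux, snr, n_trials=500, rng=None):
    """Standard deviation of EW re-measured on noisy realisations of a
    noiseless, continuum-normalised line profile (EW by simple summation;
    summing a wide window is more pessimistic than a Gaussian fit)."""
    if rng is None:
        rng = np.random.default_rng()
    dlam = np.gradient(wave)
    ews = [np.sum((1.0 - (clean_flux + rng.normal(0.0, 1.0 / snr, clean_flux.size))) * dlam)
           for _ in range(n_trials)]
    return np.std(ews)

# illustrative example: Gaussian line, FWHM = 0.2 A, 0.02 A pixels, SNR = 100
wave = np.arange(6000.0, 6002.0, 0.02)
sigma = 0.2 / 2.3548
flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 6001.0) / sigma) ** 2)
print(cayrel_ew_error(fwhm=200.0, pixel=20.0, snr=100.0))    # ~1 mA
print(1000.0 * monte_carlo_ew_error(wave, flux, snr=100.0))  # A -> mA
```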

If this statistical uncertainty is not negligible, then you would need to combine it with any systematic uncertainties associated with your analysis of the spectrum. As far as the latter is concerned, your suggested method does give some indication of what that error might be, though I suspect it will overestimate the 1-sigma uncertainty.
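Assuming the statistical and systematic contributions are independent, the usual recipe is to combine them in quadrature rather than linearly:

$$\sigma_{\mathrm{EW}} \simeq \sqrt{\sigma_{\mathrm{stat}}^2 + \sigma_{\mathrm{sys}}^2}.$$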


Estimating uncertainty… the most rigorous approach would be Bayesian. If you follow the frequentist route, MCMC is commonly preferred, and it would be somewhat similar to what your advisor suggested, just with a more complicated algorithm. The simplest method would be what your advisor suggested, but you could measure a larger sample and use simple statistics like the mean and standard error.

There are various variations along these lines that I will not list here.


Particle lifetimes from the uncertainty principle

The energy-time uncertainty principle suggests that for particles with extremely short lifetimes, there will be a significant uncertainty in the measured energy. Measuring the mass energy of an unstable particle a large number of times gives a distribution of energies called a Lorentzian or Breit-Wigner distribution.

If the width of this distribution at half-maximum is labeled Γ, then the uncertainty in energy ΔE could be reasonably expressed as ΔE = Γ ≈ ℏ/τ,

where the particle lifetime τ is taken as the uncertainty in time τ = Δt.

In high energy scattering experiments, the energy uncertainty ΔE can be determined and the lifetime implied from it. In other cases, the lifetime is most conveniently measured and the "particle width" in energy implied from that lifetime measurement.
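A minimal numeric sketch of this ħ-based conversion in Python; the particular width and lifetime used below are illustrative values, not taken from any specific experiment.

```python
HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def lifetime_from_width(gamma_ev):
    """Lifetime implied by an energy width Gamma, via tau ~ hbar / Gamma."""
    return HBAR_EV_S / gamma_ev

def width_from_lifetime(tau_s):
    """Natural line width implied by a lifetime, via Gamma ~ hbar / tau."""
    return HBAR_EV_S / tau_s

print(lifetime_from_width(2.0e9))   # a ~2 GeV wide resonance lives ~3e-25 s
print(width_from_lifetime(1.0e-8))  # a ~10 ns atomic state has Gamma ~ 1e-7 eV
```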

Γ is often referred to as the "natural line width". It is of great importance in high energy accelerator physics, where it provides the means for determining the ultrashort lifetimes of particles produced. For optical spectroscopy it is a minor factor because the natural linewidth is typically 10⁻⁷ eV, about a tenth as much as the Doppler broadening. Another source of linewidth is the recoil of the source, but that is negligible in the optical range.

For nuclear transitions involving gamma emission in the 0.1–1 MeV range, the recoil width is typically much greater than the natural line width. The recoil of the emitting nucleus implies that the emitted gamma photon cannot be absorbed by an identical nucleus, because its energy is reduced by an amount greater than the natural line width of potential absorbing levels. Early experiments showed that the absorption could be accomplished by putting the source on a rotating arm to give it enough speed to compensate for the recoil effect. The Mössbauer effect became a useful experimental tool when it was discovered that the recoil could be suppressed by putting the emitting nucleus in a crystal lattice. The emitted gammas then exhibit something close to the natural linewidth and can be absorbed by other identical nuclei.



Evaluating uncertainty components: Type B

A Type B evaluation of standard uncertainty is typically based on scientific judgment, using all of the relevant information available, which may include:

  • previous measurement data,
  • experience with, or general knowledge of, the behavior and property of relevant materials and instruments,
  • manufacturer's specifications,
  • data provided in calibration and other reports, and
  • uncertainties assigned to reference data taken from handbooks.

Below are some examples of Type B evaluations in different situations, depending on the available information and the assumptions of the experimenter. Broadly speaking, the uncertainty is either obtained from an outside source, or obtained from an assumed distribution.

Uncertainty obtained from an outside source

Multiple of a standard deviation

Procedure: Convert an uncertainty quoted in a handbook, manufacturer's specification, calibration certificate, etc., that is a stated multiple of an estimated standard deviation to a standard uncertainty by dividing the quoted uncertainty by the multiplier.

Uncertainty obtained from an assumed distribution

Normal distribution: "1 out of 2"

Procedure: Model the input quantity in question by a normal probability distribution and estimate lower and upper limits a - and a + such that the best estimated value of the input quantity is ( a + + a - )/2 (i.e., the center of the limits) and there is 1 chance out of 2 (i.e., a 50 % probability) that the value of the quantity lies in the interval a - to a + . Then u j is approximately 1.48 a , where a is the half-width of the interval.

Normal distribution: "2 out of 3"

Procedure: Model the input quantity in question by a normal probability distribution and estimate lower and upper limits a - and a + such that the best estimated value of the input quantity is ( a + + a - )/2 (i.e., the center of the limits) and there are 2 chances out of 3 (i.e., a 67 % probability) that the value of the quantity lies in the interval a - to a + . Then u j is approximately a , where a = ( a + - a - )/2 is the half-width of the interval.

Normal distribution: "99.73 %"

Procedure: If the quantity in question is modeled by a normal probability distribution, there are no finite limits that will contain 100 % of its possible values. However, plus and minus 3 standard deviations about the mean of a normal distribution corresponds to 99.73 % limits. Thus, if the limits a - and a + of a normally distributed quantity with mean ( a + + a - )/2 are considered to contain "almost all" of the possible values of the quantity, that is, approximately 99.73 % of them, then u j is approximately a /3, where a is the half-width of the interval.

Uniform (rectangular) distribution

Procedure: Estimate lower and upper limits a - and a + for the value of the input quantity in question such that the probability that the value lies in the interval is, for all practical purposes, 100 %. Provided that there is no contradictory information, treat the quantity as if it is equally probable for its value to lie anywhere within the interval; that is, model it by a uniform (i.e., rectangular) probability distribution. The best estimate of the value of the quantity is then ( a + + a - )/2, with u j = a divided by the square root of 3, where a is the half-width of the interval.

The rectangular distribution is a reasonable default model in the absence of any other information. But if it is known that values of the quantity in question near the center of the limits are more likely than values close to the limits, a normal distribution or, for simplicity, a triangular distribution, may be a better model.

Triangular distribution

Procedure: Estimate lower and upper limits a - and a + for the value of the input quantity in question such that the probability that the value lies in the interval a - to a + is, for all practical purposes, 100 %. Provided that there is no contradictory information, model the quantity by a triangular probability distribution. The best estimate of the value of the quantity is then ( a + + a - )/2, with u j = a divided by the square root of 6, where a is the half-width of the interval.

Schematic illustration of probability distributions
The following figure schematically illustrates the three distributions described above: normal, rectangular, and triangular. In the figures, µ t is the expectation or mean of the distribution, and the shaded areas represent ± one standard uncertainty u about the mean. For a normal distribution, ± u encompasses about 68 % of the distribution; for a uniform distribution, ± u encompasses about 58 % of the distribution; and for a triangular distribution, ± u encompasses about 65 % of the distribution.
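A small Python helper, purely as a sketch, that applies the conversion factors above to turn a quoted half-width a into a standard uncertainty u j under each assumed distribution (the function name is mine, not from the GUM or any library):

```python
import math

def standard_uncertainty(half_width, distribution):
    """Convert the half-width a of a quoted interval into a standard
    uncertainty u_j for the assumed distributions described above."""
    factors = {
        "normal_50":    1.48,                  # 50 % coverage: u ~ 1.48 a
        "normal_67":    1.0,                   # 67 % coverage: u ~ a
        "normal_99.73": 1.0 / 3.0,             # "almost all" (3 sigma): u = a/3
        "rectangular":  1.0 / math.sqrt(3.0),  # uniform between the limits
        "triangular":   1.0 / math.sqrt(6.0),  # peaked at the centre
    }
    return half_width * factors[distribution]

# example: limits of +/- 0.6 units treated as a rectangular distribution
print(standard_uncertainty(0.6, "rectangular"))   # ~0.35
```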



The Heisenberg Uncertainty Principle

Heisenberg’s uncertainty principle is a key principle in quantum mechanics. Very roughly, it states that if we know everything about where a particle is located (the uncertainty of position is small), we know nothing about its momentum (the uncertainty of momentum is large), and vice versa. Versions of the uncertainty principle also exist for other quantities as well, such as energy and time. We discuss the momentum-position and energy-time uncertainty principles separately.

Momentum and Position

To illustrate the momentum-position uncertainty principle, consider a free particle that moves along the x-direction. The particle moves with a constant velocity u and momentum p = mu. According to de Broglie’s relations, p = ℏk and E = ℏω. As discussed in the previous section, the wave function for this particle is a plane wave, ψ(x, t) = A e^{i(kx − ωt)},

and the probability density is uniform and independent of time. The particle is equally likely to be found anywhere along the x-axis but has definite values of wavelength and wave number, and therefore momentum. The uncertainty of position is infinite (we are completely uncertain about position) and the uncertainty of the momentum is zero (we are completely certain about momentum). This account of a free particle is consistent with Heisenberg’s uncertainty principle.

Similar statements can be made of localized particles. In quantum theory, a localized particle is modeled by a linear superposition of free-particle (or plane-wave) states called a wave packet. An example of a wave packet is shown in (Figure). A wave packet contains many wavelengths and therefore, by de Broglie’s relations, many momenta—possible in quantum mechanics! This particle also has many values of position, although the particle is confined mostly to the interval Δx. The particle can be better localized (Δx can be decreased) if more plane-wave states of different wavelengths or momenta are added together in the right way (Δp is increased). According to Heisenberg, these uncertainties obey the following relation.

The product of the uncertainty in position of a particle and the uncertainty in its momentum can never be less than one-half of the reduced Planck constant:

Δx Δp ≥ ℏ/2.

This relation expresses Heisenberg’s uncertainty principle. It places limits on what we can know about a particle from simultaneous measurements of position and momentum. If Δx is large, Δp is small, and vice versa. This relation can be derived in a more advanced course in modern physics. Reflecting on this relation in his work The Physical Principles of the Quantum Theory, Heisenberg wrote “Any use of the words ‘position’ and ‘velocity’ with accuracy exceeding that given by [the relation] is just as meaningless as the use of words whose sense is not defined.”

Note that the uncertainty principle has nothing to do with the precision of an experimental apparatus. Even for perfect measuring devices, these uncertainties would remain because they originate in the wave-like nature of matter. The precise value of the product Δx Δp depends on the specific form of the wave function. Interestingly, the Gaussian function (or bell-curve distribution) gives the minimum value of the uncertainty product: Δx Δp = ℏ/2.

The Uncertainty Principle Large and Small Determine the minimum uncertainties in the positions of the following objects if their speeds are known with a precision of : (a) an electron and (b) a bowling ball of mass 6.0 kg.

Strategy Given the uncertainty in speed Δu, we have to first determine the uncertainty in momentum, Δp = m Δu, and then invert the uncertainty relation to find the uncertainty in position Δx.

Significance Unlike the position uncertainty for the electron, the position uncertainty for the bowling ball is immeasurably small. Planck’s constant is very small, so the limitations imposed by the uncertainty principle are not noticeable in macroscopic systems such as a bowling ball.
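Because the numerical solution did not survive extraction, here is a hedged sketch of the calculation; the speed precision of 1.0e-3 m/s is an assumed illustrative value, not necessarily the one in the original exercise.

```python
HBAR = 1.0546e-34        # J*s
M_ELECTRON = 9.109e-31   # kg
M_BOWLING_BALL = 6.0     # kg

def min_position_uncertainty(mass, dv):
    """Minimum Delta-x from Delta-x * (m * Delta-v) >= hbar / 2."""
    return HBAR / (2.0 * mass * dv)

dv = 1.0e-3  # assumed speed precision in m/s
print(min_position_uncertainty(M_ELECTRON, dv))      # ~6e-2 m: huge for an electron
print(min_position_uncertainty(M_BOWLING_BALL, dv))  # ~9e-33 m: utterly negligible
```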

Uncertainty and the Hydrogen Atom Estimate the ground-state energy of a hydrogen atom using Heisenberg’s uncertainty principle. (Hint: According to early experiments, the size of a hydrogen atom is approximately 0.1 nm.)

Strategy An electron bound to a hydrogen atom can be modeled by a particle bound to a one-dimensional box of length L = 0.1 nm. The ground-state wave function of this system is a half wave, like that given in (Figure). This is the largest wavelength that can “fit” in the box, so the wave function corresponds to the lowest energy state. Note that this function is very similar in shape to a Gaussian (bell curve) function. We can take the average energy of a particle described by this function (E) as a good estimate of the ground-state energy. This average energy of a particle is related to the average of its momentum squared, which is related to its momentum uncertainty.

Solution To solve this problem, we must be specific about what is meant by “uncertainty of position” and “uncertainty of momentum.” We identify the uncertainty of position (Δx) with the standard deviation of position, and the uncertainty of momentum (Δp) with the standard deviation of momentum. For the Gaussian function, the uncertainty product is Δx Δp = ℏ/2.

The particle is equally likely to be moving left as moving right, so the average momentum is zero. Also, the uncertainty of position is comparable to the size of the box, so Δx = L. The estimated ground-state energy is therefore E ≈ (Δp)² / (2 m_e), with Δp = ℏ / (2Δx).

Multiplying numerator and denominator by c² gives E ≈ (ℏc)² / (8 m_e c² L²) ≈ 1 eV.

Significance Based on early estimates of the size of a hydrogen atom and the uncertainty principle, the ground-state energy of a hydrogen atom is in the eV range. The ionization energy of an electron in the ground state is approximately 10 eV, so this prediction is roughly confirmed. (Note: The product ℏc ≈ 197.3 eV·nm is often a useful value in performing calculations in quantum mechanics.)

Energy and Time

Another kind of uncertainty principle concerns uncertainties in simultaneous measurements of the energy of a quantum state and its lifetime, ΔE Δt ≥ ℏ/2,

where ΔE is the uncertainty in the energy measurement and Δt is the uncertainty in the lifetime measurement. The energy-time uncertainty principle does not result from a relation of the type expressed by the momentum-position relation above, for technical reasons beyond this discussion. Nevertheless, the general meaning of the energy-time principle is that a quantum state that exists for only a short time cannot have a definite energy. The reason is that the frequency of a state is inversely proportional to time and the frequency connects with the energy of the state, so to measure the energy with good precision, the state must be observed for many cycles.

To illustrate, consider the excited states of an atom. The finite lifetimes of these states can be deduced from the shapes of spectral lines observed in atomic emission spectra. Each time an excited state decays, the emitted energy is slightly different and, therefore, the emission line is characterized by a distribution of spectral frequencies (or wavelengths) of the emitted photons. As a result, all spectral lines are characterized by spectral widths. The average energy of the emitted photon corresponds to the theoretical energy of the excited state and gives the spectral location of the peak of the emission line. Short-lived states have broad spectral widths and long-lived states have narrow spectral widths.

Atomic Transitions An atom typically exists in an excited state for about . Estimate the uncertainty in the frequency of emitted photons when an atom makes a transition from an excited state with the simultaneous emission of a photon with an average frequency of . Is the emitted radiation monochromatic?

Strategy We invert the energy-time uncertainty relation to obtain the energy uncertainty ΔE ≈ ℏ/(2Δt) and combine it with the photon energy E = hf to obtain the frequency uncertainty Δf. To estimate whether or not the emission is monochromatic, we evaluate Δf/f.

Solution The spread in photon energies is ΔE ≈ ℏ/(2Δt); the corresponding spread in frequencies is Δf = ΔE/h.

Significance Because the emitted photons have their frequencies within a tiny fraction of a percent of the average frequency, the emitted radiation can be considered monochromatic.

Check Your Understanding A sodium atom makes a transition from the first excited state to the ground state, emitting a 589.0-nm photon with energy 2.105 eV. If the lifetime of this excited state is , what is the uncertainty in energy of this excited state? What is the width of the corresponding spectral line?
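A hedged numeric sketch of this energy-time estimate; the lifetime of 1.6e-8 s is an assumed example value (the number in the exercise was lost in extraction), while the 589.0 nm / 2.105 eV photon values are those quoted above.

```python
HBAR = 1.0546e-34   # J*s
EV = 1.602e-19      # J per eV

def energy_uncertainty(lifetime_s):
    """Minimum Delta-E from Delta-E * Delta-t >= hbar / 2."""
    return HBAR / (2.0 * lifetime_s)

tau = 1.6e-8                          # assumed lifetime of the excited state, s
dE = energy_uncertainty(tau) / EV     # in eV
d_lambda = 589.0 * dE / 2.105         # nm, since d(lambda)/lambda ~ dE/E
print(dE)        # ~2e-8 eV
print(d_lambda)  # ~6e-6 nm: a very narrow natural line width
```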

Summary

  • The Heisenberg uncertainty principle states that it is impossible to simultaneously measure the x-components of position and of momentum of a particle with an arbitrarily high precision. The product of experimental uncertainties is always larger than or equal to ℏ/2.
  • The limitations of this principle have nothing to do with the quality of the experimental apparatus but originate in the wave-like nature of matter.
  • The energy-time uncertainty principle expresses the experimental observation that a quantum state that exists only for a short time cannot have a definite energy.

Conceptual Questions

If the formalism of quantum mechanics is ‘more exact’ than that of classical mechanics, why don’t we use quantum mechanics to describe the motion of a leaping frog? Explain.

Can the de Broglie wavelength of a particle be known precisely? Can the position of a particle be known precisely?

Yes, if its position is completely unknown. Yes, if its momentum is completely unknown.

Can we measure the energy of a free localized particle with complete precision?

Can we measure both the position and momentum of a particle with complete precision?

No. According to the uncertainty principle, if the uncertainty on the particle’s position is small, the uncertainty on its momentum is large. Similarly, if the uncertainty on the particle’s position is large, the uncertainty on its momentum is small.

Problems

A velocity measurement of an α-particle has been performed with a precision of 0.02 mm/s. What is the minimum uncertainty in its position?

A gas of helium atoms at 273 K is in a cubical container with 25.0 cm on a side. (a) What is the minimum uncertainty in momentum components of helium atoms? (b) What is the minimum uncertainty in velocity components? (c) Find the ratio of the uncertainties in (b) to the mean speed of an atom in each direction.


If the uncertainty in the -component of a proton’s position is 2.0 pm, find the minimum uncertainty in the simultaneous measurement of the proton’s -component of velocity. What is the minimum uncertainty in the simultaneous measurement of the proton’s -component of velocity?

Some unstable elementary particle has a rest energy of 80.41 GeV and an uncertainty in rest energy of 2.06 GeV. Estimate the lifetime of this particle.

An atom in a metastable state has a lifetime of 5.2 ms. Find the minimum uncertainty in the measurement of energy of the excited state.

Measurements indicate that an atom remains in an excited state for an average time of 50.0 ns before making a transition to the ground state with the simultaneous emission of a 2.1-eV photon. (a) Estimate the uncertainty in the frequency of the photon. (b) What fraction of the photon’s average frequency is this?


Suppose an electron is confined to a region of length 0.1 nm (of the order of the size of a hydrogen atom) and its kinetic energy is equal to the ground state energy of the hydrogen atom in Bohr’s model (13.6 eV). (a) What is the minimum uncertainty of its momentum? What fraction of its momentum is it? (b) What would the uncertainty in kinetic energy of this electron be if its momentum were equal to your answer in part (a)? What fraction of its kinetic energy is it?


4. COMPARISONS

In this section we compare results produced by DAOSPEC with data and programs from the current literature. We summarize and discuss in § 4.1 all the papers that, to our knowledge, have made use of DAOSPEC, including our own test with data from Pancino et al. (2002). We also compare DAOSPEC with EWDET (§ 4.2) and with ARES (§ 4.3) and we finally perform an abundance analysis of the Sun (§ 4.4).

4.1. Literature Tests on DAOSPEC

DAOSPEC has been available to the astronomical community since 2002, when the first test versions were circulated. Since then, it has evolved into the form presented here and in the Cookbook, and it has been used and tested by some colleagues: a few authors used the code without mentioning any specific tests (Meléndez et al. 2003; Pasquini et al. 2004; Dall et al. 2005a, 2005b, 2006; Pompéia et al. 2005; Zoccali et al. 2006; Lecureur et al. 2007).

Other papers compare DAOSPEC measurements to manual measurements with IRAF, MIDAS, or other methods and find a good agreement, but they do not explicitly show the comparison (Pancino 2004; da Silva et al. 2005, 2006; Barbuy et al. 2006, 2007; Letarte et al. 2006; Letarte 2007; Alves-Brito et al. 2006). One paper used DAOSPEC to measure radial velocities and mentions that a comparison with the results of fxcor within IRAF gives agreement within the uncertainties (Monaco et al. 2005). Another couple of papers publish the comparison of EWs measured by DAOSPEC with manual measurements (Alves-Brito et al. 2005; Venn & Hill 2005); these are discussed in § 4.1.1.

To our knowledge, only two papers test DAOSPEC extensively: Sousa et al. (2006, 2007), which are discussed in detail in the following sections.

4.1.1. Basic Comparisons

Alves-Brito et al. (2005) used DAOSPEC to measure EWs of Fe I and Fe II lines in five red giants in 47 Tuc. A comparison of IRAF interactive measurements with DAOSPEC results on star 25 showed relatively good agreement, in the sense that the average ΔEW was smaller than the spread (σ = 4.82 mÅ; see their Fig. 3). But a slight trend with EW appeared, in the sense that for larger EW the ΔEW became larger. According to the discussion in § 3.5.2 above, an effect like this could, for instance, be due to a slightly (10%) inappropriate input FWHM, but we do not know whether something like that is in operation here. In the end, they chose to adopt the IRAF measurements to derive their atmospheric parameters and iron abundances.

Another, more favorable comparison was shown by Venn & Hill (2005), who plotted IRAF EW measurements by Shetrone et al. (2003) versus DAOSPEC, on GIRAFFE spectra (R ≈ 20,000) of two stars in the Sculptor dwarf galaxy. They found good agreement (within 10%), with no sign of departures from the 1:1 relation for strong lines up to 200 mÅ. This is expected if one considers the example of Figure 7, where we show that the Gaussian approximation becomes more and more reliable, even for strong lines, as resolution goes down from R ≈ 10⁵ to R ≈ 20,000.

4.1.2. Detailed Comparisons

Only two papers have performed detailed tests on DAOSPEC, namely Sousa et al. (2006, 2007).

Sousa et al. (2006) used a synthetic (noiseless) model Solar spectrum of very high quality (R 120,000 and S/N 300) to compare DAOSPEC and IRAF EW measurements. They found essentially perfect agreement in a red window (6000–6300 Å) with ΔEW = 0.8 ± 1.1 mÅ, based on 34 lines, and fair agreement in a blue window (4400–4650 Å) with ΔEW = 4.0 ± 4.9 mÅ, based on 25 lines (see also § 4.3). This must, of course, be due to the higher crowding level and lower S/N of the blue part of the spectrum.

The synthetic spectrum was then degraded both in S/N and resolution, and the DAOSPEC measurements were compared with each other. DAOSPEC appears to give very different average EWs and variance, by as much as ΔEW = 15 ± 20 mÅ, for the lowest resolution case (S/N 10 and R 12,000). While some increase in the variance can be easily understood when varying S/N or resolution, as can be seen in our own tests (§ 3.6), EW discrepancies and variance as large as those reported by Sousa et al. (2006) are difficult to understand, and indeed we do not find such behavior in our own tests (§§ 3.6.1 and 3.6.2).

When measuring their FEROS spectra, Sousa et al. (2006) encountered some problems. In particular, they found an enormous, unacceptable spread in the resulting EWs, and they managed to obtain reasonable EWs only by cutting the spectrum into 100 Å segments and running DAOSPEC manually on each small piece. The ΔEW from IRAF measurements then went down from 12.1 ± 17.1 mÅ to 3.0 ± 4.7 mÅ (S. Sousa 2007, private communication) but, of course, at the expense of execution time and humanpower (several hours). They kindly provided us with some of their FEROS spectra (Fig. 17) and we have repeated their measurements. We found that using a different order for the continuum fit (30 instead of 8; Fig. 17) and a different FWHM (14 instead of 5; § 3.5.2, Figs. 10 and 11) gave much better EWs and decreased the execution time by a factor of roughly 50. We also tried cutting the spectrum into short pieces, both as a consistency check and to test the execution times, but we used shell scripts to run DAOSPEC automatically—in 10 minutes, total time (see § 3.7)—on the various pieces: we obtained ΔEW = -4.1 ± 4.3 mÅ when using the full spectrum, and ΔEW = -6.5 ± 4.4 mÅ when the spectrum was cut into 100 Å pieces.

Fig. 17.— Typical FEROS spectrum of a Solar-type star (kindly provided by S. Sousa 2008, private communication). Two Legendre DAOSPEC polynomials are overplotted, with an arbitrary vertical offset for clarity. As can be seen, the Legendre polynomial of eighth order (lower solid line) does not adequately represent the spectral shape, while the thirtieth order polynomial (upper solid line) fits the spectrum better.

The paper that introduced ARES (Sousa et al. 2007) was the second to perform a detailed check on DAOSPEC, using the same data sets as Sousa et al. (2006) and the same DAOSPEC configuration parameters. ARES is based on the IRAF task splot, and therefore the first comparisons made were between ARES and IRAF, and between DAOSPEC and IRAF. The results obtained with ARES were more similar to IRAF than the ones with DAOSPEC, supporting the conclusion that ARES is a very well-designed extension of splot. We have seen, however, that the most important factor in these comparisons can be the way the continuum is chosen. In the case of crowded spectra, we have claimed that the algorithm employed by DAOSPEC can give better results (§ 3.2.1), but since both IRAF and ARES are highly customizable in terms of continuum placement, we do not doubt that experienced and careful users can obtain good results with those algorithms.

To summarize, an appropriate choice of the configuration parameters is crucial to obtain good results with DAOSPEC. The Cookbook provides practical and objective methods for finding the best values for these parameters, as do the discussions and tests presented in § 3.5 here.

4.1.3. Red Giants in ω Centauri

The data set of EW measurements that Pancino et al. (2002) obtained with the IRAF task splot to derive abundances for six red giant stars in ω Cen constitutes a good testbed for DAOSPEC. The full data description can be found in the original paper; in short, the six spectra were taken with UVES at the Very Large Telescope in Paranal, Chile, with R ≈ 45,000 and S/N ≈ 100–150 per resolution element, covering the range 5250–6920 Å. Stellar metallicities range from [Fe/H] = -0.49 to -1.20, with temperatures around 4000 K and gravities of about 1 dex. The input line list contains 230 features of various elements, although only [Fe/H], [Ca/Fe], [Si/Fe], and [Cu/Fe] were published by Pancino et al. (2002).

We remeasured these spectra with DAOSPEC and compared the results (Fig. 18). A total of 1150 lines were used in the comparison. We found a very good average agreement, with DAOSPEC measurements marginally smaller, by ΔEW = -1.3 ± 10.3 mÅ. When considering the six stars separately, we found differences ranging from ΔEW = -3.7 ± 10.7 mÅ, for star WFI 222068, which is the most metal rich of the sample, to ΔEW = 1.1 ± 7.1 mÅ, for star WFI 618854, which is the most metal poor of the sample. No trend with EW is apparent.

Fig. 18.— Comparison of the original measurements from Pancino et al. (2002), obtained with IRAF (y axis), and the measurements obtained here with DAOSPEC with the same line list and on the same spectra (x axis). Perfect agreement is marked with a dotted line.

The agreement appears satisfactory within the uncertainties, especially in light of the tests performed in § 3.2.1, where we show again that an agreement between DAOSPEC and IRAF measurements gets naturally worse as metallicity (and line crowding) increases.

4.2. DAOSPEC versus EWDET

EWDET (Ramírez et al. 2001; see also § 2) was obtained by courtesy of S. Ramírez (2005, private communication). It came with a test spectrum of a moderately crowded red giant in M 71, covering the range 7900–8000 Å, with R ≈ 30,000 and a S/N of at least 100 everywhere. We used this spectrum with the default configuration file ewdet.inp provided with the code, to measure the EWs of 70 lines. Ten additional lines were found, but EWDET did not report an EW for any of them because the Gaussian fit did not converge. All the lines found by EWDET were used as the input "laboratory" line list for DAOSPEC, which we then used to obtain EWs from the same spectrum. It is perhaps worth stressing here that the input line list plays no part in the finding of candidate spectral lines by DAOSPEC; it is only after features have been detected that tentative identifications with features in the input list are sought. There is no attempt to "force" the detection of features in the spectrum at wavelengths specified by the input laboratory list. In the present case, DAOSPEC was able to (independently) find and measure all the lines that EWDET had found, including the 10 that EWDET had subsequently discarded. No apparent defect was found on those 10 lines in a visual inspection of the spectrum.

Figure 19 plots the difference between the DAOSPEC and EWDET EWs versus EW (top panel) and versus wavelength (bottom panel). The average difference is ΔEW = -1.2 mÅ with a dispersion of 11.7 mÅ. While the two sets of measurements appear in good agreement, the spread is slightly higher than expected, i.e., higher than that found in the comparison of § 4.1.3 between DAOSPEC and handmade IRAF measurements. A trend of increasing spread with increasing EW might be present, while no obvious trend with wavelength is seen.

Fig. 19.— Difference between DAOSPEC and EWDET EWs measured on a test spectrum of a giant in M 71 provided with EWDET. ΔEW is plotted vs. EW (top panel) and vs. wavelength (bottom panel). The average difference is ΔEW = -1.2 ± 11.7 mÅ.

A comparison of the model continua adopted by the two programs shows an overall systematic difference of 1.3% (DAOSPEC continuum lower), with a variance of 0.6% around this mean offset. For comparison, the residual spectrum produced by DAOSPEC has a pixel-to-pixel flux variance of 2%. On the surface, this case appears to be similar to that discussed in § 3.2 above. Such a discrepancy in the continuum levels could be the cause of the small ΔEW offset found between the two codes (see § 2, Fig. 2).

On average, the standard errors estimated by DAOSPEC are larger by 0.7 ± 1.5 mÅ than those reported by EWDET, even though the latter also includes the uncertainty due to the continuum placement and the former does not. In any case, given the large spread in ΔEW seen in Figure 19, both error estimates appear a bit small, indicating that some other unidentified source of uncertainty might be present. If we estimate an error budget, including the average errors by EWDET (~4 mÅ) and an error due to the continuum placement as estimated roughly from Figure 2 (~7 mÅ), we account for a spread of ~9 mÅ; i.e., the missing source of uncertainty must be about ~8 mÅ. This might suggest that the EWDET continuum placement uncertainty (Ramírez et al. 2001 and § 2.2 here) might be underestimated.

Finally, the average difference between the FWHM found by EWDET for each line and the FWHM found by DAOSPEC (scaled with wavelength) is ΔFWHM = 0.001 ± 0.076 Å, and the average radial-velocity difference between the two sets of measurements, in the sense EWDET minus DAOSPEC, is very small, Δvr = 0.1 ± 0.6 km s⁻¹.

Summarizing, the comparison can be considered satisfactory once all the sources of uncertainty are properly taken into account. The only minor disadvantage of EWDET is that it has been written for personal use and requires knowledge of Fortran to manually adapt some routines to meet the needs of each set of spectra, including naming conventions.

4.3. DAOSPEC versus ARES

ARES (Sousa et al. 2007) has been obtained from the ARES Web site with, inter alia, a test spectrum of the Sun obtained with HARPS from observations of Ganymede. Similarly to what we have done with EWDET, we ran ARES on the test spectrum and used its output as an input line list for DAOSPEC.

The average differences of key parameters, in the sense of DAOSPEC minus ARES, can be summarized as follows: ΔEW = -1.1 ± 3.7 mÅ, ΔFWHM = 0.01 ± 0.05 Å, Δvr = -0.002 ± 0.126 km s⁻¹, based on 98 lines in common. No error estimate is provided by ARES. At first glance, all these values appear in very good agreement within the uncertainties, even better than the comparison made with EWDET in § 4.2. This is especially true when considering the spread in ΔEW, which is 11.7 mÅ in the comparison with EWDET and only 3.7 mÅ in the comparison with ARES. The very good agreement must of course be largely due to the fine quality of the test spectrum, which has R ≈ 45,000 and S/N ≈ 350. Figure 20 confirms good agreement with no trends with wavelength or EW in the differences, except for a possible problem in the bluest and most crowded part of the spectrum. A last comparison was made on the number of lines found. The authors do not mention how many lines were found and/or identified by each code, but state that ARES finds more lines than DAOSPEC. If we compare the Solar spectra taken from Ganymede, we find that ARES identifies 101 lines, and DAOSPEC identifies 100.

Fig. 20.— Difference between DAOSPEC and ARES EWs measured on a test spectrum of the Sun provided with ARES. ΔEW is plotted vs. EW (top panel) and vs. wavelength (bottom panel). The average difference is ΔEW = -1.1 ± 3.7 mÅ.

ARES and DAOSPEC represent two very different ways of approaching the problem of measuring EWs. ARES closely follows IRAF, including a major IRAF feature, namely the possibility to customize the continuum-fitting procedure. Because of the way the continuum is fit, ARES is faster than DAOSPEC, although maybe a bit longer to configure. ARES takes of the order of seconds for each spectrum, while DAOSPEC may take from a few seconds to a few minutes, depending on the spectrum characteristics. Finally, ARES gives no error estimate or radial velocity; indeed, the radial velocity is one of the necessary inputs, not outputs, of the code.

Nevertheless, in spite of the different continuum placement philosophies, ARES and DAOSPEC give entirely comparable measurements, within the uncertainties.

4.4. Abundance Analysis of the Sun

As a final test, DAOSPEC was used on the Solar spectrum obtained with HARPS (§ 4.3) to derive iron abundances for the Sun. The results have been compared, using the same models and abundance calculation code, to the abundances obtained with the EWs measured by Moore et al. (1966) and Rutten & van der Zalm (1984).

To measure EWs with DAOSPEC, we created a line list containing all lines in common between Moore et al. (1966) and Rutten & van der Zalm (1984). This line list was fed to DAOSPEC and, for homogeneity, it was also used to derive the Solar abundance with the original Moore et al. (1966) and Rutten & van der Zalm (1984) measurements.

We used the atmospheric models by Edvardsson et al. (1993) and the latest version of the abundance calculation code originally published by Spite (1967). For the sake of homogeneity, the atomic parameters (including log gf) were taken from the line list of Rutten & van der Zalm (1984). In this way, the only difference among the three analyses comes from the EWs. The Solar temperature was kept fixed at 5780 K; gravity was allowed to vary between log g = 4.4 and 4.5, to allow for micro-adjustments of the Fe I and Fe II ionization equilibrium; the microturbulent velocity was kept as a free parameter, and the best value was chosen as the one that minimized the slope of the EW versus [Fe/H] relation.

The results of our analysis are shown in Table 1, where it can clearly be seen that only negligible variations in log g and very small variations of the microturbulence (vt) were necessary to obtain abundances that are quite compatible with each other, and with the Solar values.


How to estimate uncertainty of measurements of equivalent widths? - Astronomy

Hi Robin, yes you are right, the biggest challenge is estimating where the continuum is. IRAF gets you to manually place two points on the "continuum" and then identify all lines between these two points. It then deblends the lines by fitting Gaussians (or other functions) to the profiles and reports the line centre, FWHM, EW, etc. for all the lines.

SPLAT allows a similar process but you can provide a constant for the continuum or fit a polynomial for the background. Again it has several functions you can fit with.

I found a paper on DAOSPEC, which is an automatic program, that has a good discussion of current methods. It also introduced an interesting idea I am trying manually with SPLAT: you estimate (guess) a background and fit the lines; you then subtract the fitted lines, which leaves a new local background that provides a new estimate of the local continuum, and you repeat, iterating as long as you wish (see the sketch after the next paragraph).

They also argue that you don't want the true continuum but the local continuum, which will include unmodelled lines whose average contribution you want to avoid.
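A minimal sketch of that iterate-and-subtract idea, assuming Gaussian absorption profiles and using only numpy/scipy; the function names, window size and polynomial degree are mine, not taken from DAOSPEC or SPLAT.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_dip(x, depth, centre, sigma):
    """Absorption line modelled as a negative Gaussian."""
    return -depth * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def iterate_continuum(wave, flux, line_centres, n_iter=5, poly_deg=3, window=2.0):
    """Alternate between fitting the lines and re-estimating the local continuum
    from the line-subtracted spectrum."""
    continuum = np.polyval(np.polyfit(wave, flux, poly_deg), wave)  # first guess
    params = []
    for _ in range(n_iter):
        residual = np.array(flux, dtype=float)
        params = []
        for c in line_centres:
            sel = np.abs(wave - c) < window
            cont_sel = continuum[sel]
            p0 = [cont_sel.mean() - flux[sel].min(), c, 0.2]
            popt, _ = curve_fit(lambda x, d, mu, s: cont_sel + gaussian_dip(x, d, mu, s),
                                wave[sel], flux[sel], p0=p0)
            params.append(popt)
            residual[sel] -= gaussian_dip(wave[sel], *popt)  # put the absorbed flux back
        # the line-subtracted spectrum gives the next estimate of the local continuum
        continuum = np.polyval(np.polyfit(wave, residual, poly_deg), wave)
    return continuum, params
```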

I think I will use an ensemble of methods to give an error estimate and then pick the one that is as robust as I can make it.

PS Given how lazy I am, I shall try to work out how to download and compile DAOSPEC or similar and let it do the work!

I've also been automating equivalent width calculations recently. I suspect you will take a different approach to me, but I thought I'd share my experience.

I am trying to measure the EW of lots of lines in lots of spectra automatically. I've not gone down the path of fitting polynomials or Gaussians. Instead:

  1. Work out the local continuum.
  2. Sum up the flux from the spectrum between 2 predetermined wavelengths for the start and end of the line.
  3. Calculate a dummy flux based on my local continuum level between the 2 wavelengths.
  4. Calculate my absorption flux by subtracting the summed flux from my dummy flux
  5. Finally calculate a width based on this absorption flux and the height of my local continuum.

As in the other posts, I also find working out the local continuum level to be the tricky bit. To deal with varying levels of noise, I take the median continuum of a few wavelength bins either side of my start and end line wavelength, then take the average of the continuum at my start and end wavelengths.
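A rough Python sketch of that recipe (the step numbers match the list above; the array names and the continuum-bin width are mine, not Andy's actual code):

```python
import numpy as np

def equivalent_width(wave, flux, line_start, line_end, cont_width=5.0):
    """EW by direct summation between two predetermined wavelengths."""
    # 1. local continuum: median flux in a bin just outside each edge of the
    #    line, averaged over the two sides
    blue = (wave > line_start - cont_width) & (wave < line_start)
    red = (wave > line_end) & (wave < line_end + cont_width)
    continuum = 0.5 * (np.median(flux[blue]) + np.median(flux[red]))

    inside = (wave >= line_start) & (wave <= line_end)
    dlam = np.gradient(wave)[inside]                  # pixel widths

    line_flux = np.sum(flux[inside] * dlam)           # 2. summed flux in the line
    cont_flux = continuum * np.sum(dlam)              # 3. dummy continuum flux
    absorbed = cont_flux - line_flux                  # 4. absorbed flux
    return absorbed / continuum                       # 5. width at the continuum level
```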

That is an interesting paper, I'll have to give it a read.

Hi Andy, I think your approach will work well with well-isolated lines where you can be sure all the flux is from the line you are interested in. Where lines are blended, fitting Gaussians (or other line profiles) allows you to deblend them, at least approximately. The biggest problem I have is that there is no obvious continuum in the yellow hypergiants, which is why I am attracted to the idea in the paper of subtracting the fitted profile from the spectrum to give a new estimate of the continuum.

I am going to experiment with SPLAT and see if taking different starting continuum levels converges to the same value in a few iterations - I easily lose focus!

I used a crude but simple estimate of the EW of the Hα and Hβ emission lines in the VV Cep spectrum, in order to monitor progress of the current eclipse. I drew a 'continuum' by extending a straight line between the turning points at the base of the emission lines. As an example here's the result I got in BASS with last night's spectrum:

I realise the approach is not strictly scientific but is repeatable and seems to have given a reasonably comparable set of measurements. Here's the latest time plot of my EW measurements on my own and others' Alpy spectra:

The latest episode of Hα brightening seems to have appeared bang on time!

I'd welcome any comments on the approach

If it works for you Hugh, I don't see any good reason not to use this method. If I were using it I would just look to see if the "continuum" you are using remains stable or if it moves systematically as the line changes strength. Even this does not matter if you just wish to follow the rise and fall of the intensity.

(The same answer as Andrew but with a bit more detail,read and forget if you like!)

Your measurements are showing nice qualitative trends but you have to be a bit careful when interpreting the EW of emission lines.

EW works well as a measurement of absorption line strength because absorption is normally just a proportion of the continuum, so provided you can decide where the local continuum is, the EW gives you a good measurement of the line strength. (eg even if the star changes in brightness, if the absorption is constant, the EW stays the same.)

Emission lines are different as they normally come from a different source than the continuum so are independent of it. This means the EW value of an emission line makes less sense as we are measuring it relative to something not connected with the emission line. This is OK provided we know that the continuum flux is constant (or at least how it is changing, so we can correct the EW to give a true measurement of the line strength) Otherwise the EW results can be deceiving. Classical novae are a good example. If you plot the EW of H alpha as the nova evolves it looks like the line is continuously getting stronger. In fact though this is mainly due to the continuum falling away and for much of the time the actual line strength is constant and even decreasing at times.

If we look specifically at your VV Cep spectrum, the hot star is now fully eclipsed so the continuum is that of the cool star photosphere and the emission comes from somewhere else (possibly an extended region (disc?) associated with the hot star but there are many possible sources). The variations in the continuum around the emission line will be due to a blend of the many absorption lines in the cool star spectrum and we don't see the true continuum at this resolution (It will likely be somewhere along the high points of the spectrum). The reference points you have chosen will be somewhere in the absorption lines so will only be fixed during totality, assuming no variations in the cool star spectrum. Outside totality, the reference points will rise closer to the continuum level as the hot star reappears and the cool star absorption lines lose their relative strength. The continuum flux will also increase to that of the two stars combined. Both these will affect the EW measurement even if the emission line strength actually remained constant.

All is not lost though if you want to make an absolute measurement of the way the emission line flux is varying, as all the necessary information is available.

What we would first need to do is to convert the spectrum to absolute flux. This could be done using the available measurements of photometric brightness around the time of the spectrum. Once that has been done, we can work in absolute flux rather than relative to some poorly defined and varying continuum.

The next step would be to remove the cool star component. We could try this now by subtracting a reference spectrum of a star of the same type, adjusting it to match the intensity of the absorption lines, but probably the best way to do this would be to wait to around mid eclipse when the hot star and any circumstellar material should be hidden and use this as a template. Once this is subtracted, we should be left with the flux calibrated spectrum of the uneclipsed components, probably dominated by Balmer emission lines with their actual intensities measurable directly.
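A rough Python sketch of the last two steps (scaled template subtraction from a flux-calibrated spectrum, then direct line-flux integration); everything here, including the scaling recipe, is an assumed simplification rather than an established procedure.

```python
import numpy as np

def emission_line_flux(wave, flux_cal, template_cal, line_start, line_end):
    """Subtract a scaled cool-star template from a flux-calibrated spectrum and
    integrate whatever is left over the emission line.

    flux_cal and template_cal must be in the same absolute flux units and on the
    same wavelength grid; the scale factor is set outside the line, where the
    spectrum should be pure cool-star photosphere."""
    outside = (wave < line_start) | (wave > line_end)
    scale = np.median(flux_cal[outside] / template_cal[outside])
    residual = flux_cal - scale * template_cal          # emission component only
    inside = (wave >= line_start) & (wave <= line_end)
    return np.trapz(residual[inside], wave[inside])     # integrated line flux
```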


Examples of Relative Uncertainty Calculations

Example 1

Three 1.0 gram weights are measured at 1.05 grams, 1.00 grams, and 0.95 grams.

  • The absolute error is ± 0.05 grams.
  • The relative error (δ) of your measurement is 0.05 g/1.00 g = 0.05, or 5%.

Example 2

A chemist measured the time required for a chemical reaction and found the value to be 1.55 +/- 0.21 hours. The first step is to find the absolute uncertainty:

  • absolute uncertainty = 0.21 hours
  • relative uncertainty = Δt / t = 0.21 hours / 1.55 hours = 0.135

Example 3

The value 0.135 has too many significant digits, so it is shortened (rounded) to 0.14, which can be written as 14% (by multiplying the value times 100).

The relative uncertainty (δ) in the measurement for the reaction time is therefore 0.14, or 14 %.
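The same arithmetic as a short Python check (values taken from the examples above):

```python
def relative_uncertainty(value, absolute_uncertainty):
    """delta = absolute uncertainty / measured value."""
    return absolute_uncertainty / value

print(relative_uncertainty(1.00, 0.05))   # 0.05 -> 5 %
print(relative_uncertainty(1.55, 0.21))   # ~0.135 -> 14 % after rounding
```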




S and S/N

In photometry and spectroscopy it is important to achieve a high S/N, this parameter being related to the accuracy of the measurement.
What about S?
It is never mentioned in scientific astronomy articles, at least I have never found it.
Is it not that important?
In other words, are two images having the same S/N but different S 100 % equivalent?

#2 Taosmath

Yes, that's my understanding.

#3 robin_astro

Correct, but since we are talking about counting photons, N also depends on S. For a 100% efficient, noise-free instrument, S/N = sqrt(S), where S is the number of photons.

(To be pedantic, it is not the accuracy but the precision (ie the uncertainty) of the measurement that depends on the SNR)

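A toy illustration of that square-root scaling (pure Poisson counting, no read noise or sky background, so only a sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

for n_photons in (100, 10_000, 1_000_000):
    # simulate many exposures that each collect about n_photons photons
    counts = rng.poisson(n_photons, size=10_000)
    snr_measured = counts.mean() / counts.std()
    print(n_photons, round(snr_measured, 1), round(np.sqrt(n_photons), 1))
    # measured SNR tracks sqrt(S): ~10, ~100, ~1000
```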

#4 Taosmath

.

(To be pedantic, it is not the accuracy but the precision (ie the uncertainty) of the measurement that depends on the SNR)

My understanding is that the precision measures how many significant figures you are able to quote a result to. e.g. my height is 70" or 71" or 71.2" or 71.24" etc. (1,2,3 & 4 significant figures respectively)

The accuracy is determined by the the uncertainty. So if my height measurement is uncertain to +/- 1", then to quote to that precision is reasonable (my height is 71" +/- 1 ", meaning that you are 95% certain that my true height lies between 70" and 72"). It is not reasonable to quote my height as 71.24" if the uncertainty in that is +/- 1".

#5 robin_astro

I disagree.

My understanding is that the precision measures how many significant figures you are able to quote a result to. e.g. my height is 70" or 71" or 71.2" or 71.24" etc. (1,2,3 & 4 significant figures respectively)

The accuracy is determined by the the uncertainty. So if my height measurement is uncertain to +/- 1", then to quote to that precision is reasonable (my height is 71" +/- 1 ", meaning that you are 95% certain that my true height lies between 70" and 72"). It is not reasonable to quote my height as 71.24" if the uncertainty in that is +/- 1".

Perhaps I worded it poorly.

Precision (uncertainty) is how repeatable a measurement is. Accuracy is how close the measurement is to the true value.

It is true you need high precision for high accuracy but that alone is not sufficient. You also need to understand and correct for any systematic errors which are often larger than the uncertainty and can be much more difficult to manage.

SNR may be useful as an estimate of uncertainty (but even that is not sufficient: consider the effects of scintillation on brightness measurements, for example), but it tells us little about accuracy. A study of the systematic errors is needed to quantify that.

For example, it is relatively straightforward to collect enough photons to measure the brightness of a star to a precision of 0.1%, but much harder to measure it to that accuracy, as this requires careful calibration and correction for a number of systematic effects.

#6 zoltrix

#7 robin_astro

I think that S/N should be related to the accuracy of the measure i.e how close is your measure to the true value

You might like to think that but that is not how precision and accuracy are defined. How close your value is to the true value is indeed accuracy but a high SNR gives you precision not accuracy. Accuracy is about calibration and correcting for systematic errors. If your calibration is not correct or you have systematic errors even a high precision result from a high S/N measurement will not be accurate. I recommend reading the Wikipedia article

#8 zoltrix

Actually there is some confusion between accuracy and precision.

I took a book off the shelf:

Mean value of N = 0
Standard deviation of N = sigma

By translating ADUs into magnitudes you get a scatter of roughly Δm ≈ (2.5/ln 10) × sigma/S ≈ 1.0857/(S/N) around the measured magnitude.

If so, S/N is related to precision, i.e., the higher the S/N, the narrower the distribution of your measures around the true magnitude.
Yet the book speaks of accuracy.


#9 robin_astro


if so S/N is related to precision i.e the higher S/N the lower the distribution of your measures around the true magnitude

Certainly misunderstandings about this are common, but the distinction between the terms accuracy and precision (as used by professional scientists and engineers) is clear. A high S/N narrows the distribution of individual measurements about the mean measured value, but that is not the same as the true value.

(In fact, when talking about brightness measurements, unless you are measuring faint targets the S/N is not usually the most important factor, and in any case it can generally be improved simply by taking longer exposures. There are other, often more important, factors to consider to produce an accurate measurement.)

Let's look at a practical example. This is the AAVSO light curve of measured photometric V magnitudes for Betelgeuse around the minimum last year when there were many people following it. (I have plotted a mean curve over the individual measurements)

This was a bright target and the S/N of the images would easily have been >100, high enough to give a precision of better than 1%, or 0.01 magnitudes. The scatter, however, is roughly 10x this: the precision calculated from the S/N was better than 0.01 mag, but the typical accuracy of the individual measurements is more like 0.1 mag. Why is this? There are many possible reasons, and we could probably draw up a long list, but a couple to consider here are:

Scintillation - This is a bright target, so exposures would be short, and atmospheric turbulence causes the measured value to vary from second to second. (The effect of this can be reduced, for example, either by averaging many exposures or by defocusing the stars so that longer exposures can be taken without saturating the camera.)

Atmospheric extinction - Differential photometry relies on comparing the brightness of a star with others of known brightness. The measured brightness (and colour) of a star depends on how much atmosphere is in front of it (the air mass), which depends on the height above the horizon. Betelgeuse is a bright target and suitable comparison stars of similar magnitude are some distance away, so they can be subject to different levels of atmospheric extinction. Unless corrected for, this affects the accuracy of the result but not the precision: you could make the same measurement as many times as you like with the highest S/N and you would still not get an accurate result.
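To get a feel for the size of that effect, here is a rough sketch; the extinction coefficient and altitudes below are assumed, typical values, not measurements from the Betelgeuse data.

```python
import math

K_V = 0.20  # assumed V-band extinction coefficient [mag per airmass]; ~0.15-0.25 is typical

def airmass(altitude_deg: float) -> float:
    """Plane-parallel approximation X = sec(z); adequate above ~30 deg altitude."""
    return 1.0 / math.sin(math.radians(altitude_deg))

x_target = airmass(35.0)  # target at 35 deg altitude
x_comp = airmass(45.0)    # comparison star at 45 deg altitude

delta_m = K_V * (x_target - x_comp)
print(f"differential extinction ~ {delta_m:+.3f} mag")
# About 0.07 mag here - far larger than the ~0.01 mag precision that S/N > 100
# allows, and no amount of averaging removes it unless it is corrected for.
```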

#10 zoltrix

Do you mean that by averaging more and more measurements, with high S/N, the mean value does not get closer and closer to the true value?
Even assuming that the target and the comparison stars are not some distance apart?
If so, suppose I want to measure the magnitude of a star with an accuracy of +/- 0.1 mag; what should I do?

Edited by zoltrix, 18 May 2021 - 02:16 PM.

#11 robin_astro

Do you mean that by averaging more and more measurements, with high S/N, the mean value does not get closer and closer to the true value?
Even assuming that the target and the comparison stars are not some distance apart?
If so, suppose I want to measure the magnitude of a star with an accuracy of +/- 0.1 mag; what should I do?

I used this just as an example to show that precision and accuracy are not the same thing. If your comparison stars are good and not very different in air mass, you should easily get better than 0.1 magnitude accuracy. Getting to 0.01 magnitude accuracy, though, starts to get tough, and just getting a good SNR is not enough. To get there you need to think about things like the quality of your flat correction, the linearity of your sensor, calibrating your setup using standard stars, and correcting for the effects of air mass and for colour differences between your target and comparison stars. The AAVSO photometry manual covers these.

I am not a photometry expert but there are others on here who will have a better idea about what is required to get high accuracy.

Edited by robin_astro, 18 May 2021 - 05:22 PM.

#12 brownrb1

To use a simple analogy of an archery target: accuracy is how close the grouping is to the bullseye (the correct value), and the tightness of the grouping is the precision. You could also be very precise (a tight group) but outside the bullseye: precise but not accurate (I precisely measured the brightness curve of the wrong star).

#13 zoltrix

Hi,
To use a simple analogy of an archery target: accuracy is how close the grouping is to the bullseye (the correct value), and the tightness of the grouping is the precision. You could also be very precise (a tight group) but outside the bullseye: precise but not accurate (I precisely measured the brightness curve of the wrong star).
Dick

Suppose the archer misses the target because of a random wind that continuously changes direction, with no preferred direction.
In other words: the wind's mean value = 0 but its standard deviation != 0.
What happens if you average the results of a large number of attempts?
The mean value should tend towards the true target.
So the question is:

Even though S/N is, from an academic point of view, related to precision rather than to accuracy, can S/N, in practice, be considered synonymous with accuracy, i.e. an indication of how close you are to the true magnitude of the star?
In my opinion the answer is "yes" provided the noise is purely random, and "no" otherwise.

Edited by zoltrix, 19 May 2021 - 06:48 AM.

#14 robin_astro

Even though S/N is, from an academic point of view, related to precision rather than to accuracy, can S/N, in practice, be considered synonymous with accuracy, i.e. an indication of how close you are to the true magnitude of the star?
In my opinion the answer is "yes" provided the noise is purely random, and "no" otherwise.

This is absolutely and categorically incorrect. How close the measurement is to the true value depends on a number of factors, and the uncertainty due to noise is just one of them. S/N and similar random variations, such as scintillation, affect the precision, but for an accurate result you have to consider all the systematic errors as well. To determine the accuracy of the measurement, a full error analysis must include all these factors, not just the S/N, as my example of the Betelgeuse brightness measurements demonstrates. (Each of those measurements was very precise but not very accurate.)
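A small simulation makes the same point about averaging (numbers invented for illustration): the random part shrinks as 1/sqrt(N), but the mean converges to the biased value, not the true one.

```python
import numpy as np

rng = np.random.default_rng(1)

true_mag = 8.500
random_sigma = 0.02   # purely random noise per measurement (the "wind")
systematic = 0.08     # e.g. an uncorrected extinction difference - does not average out

for n in (10, 100, 10_000):
    m = true_mag + systematic + rng.normal(0.0, random_sigma, size=n)
    sem = random_sigma / np.sqrt(n)   # standard error of the mean (random part only)
    print(f"N={n:6d}: mean = {m.mean():.4f}  (random part +/-{sem:.4f}, "
          f"offset from truth {m.mean() - true_mag:+.4f})")
# The random scatter of the mean keeps shrinking, but the ~0.08 mag offset remains.
```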

Edited by robin_astro, 19 May 2021 - 07:20 AM.

#15 robin_astro

Some observations require high precision but not high accuracy. For example, if you are trying to measure the timing and depth of an exoplanet transit, you need very high precision so you can detect a small change, typically less than 1%, so you need a high S/N (and a low level of other random variations such as scintillation). It does not matter if the actual measured brightness is not accurate, though, as you are only interested in the change in brightness. This means you don't need to use a standard V filter and make the transformations to the standard brightness system that would otherwise be needed to produce an accurate brightness.
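As a back-of-the-envelope example of that precision requirement (the depth and safety margin below are chosen arbitrarily for illustration):

```python
import math

depth = 0.01    # 1% transit depth
margin = 5.0    # want the per-point noise ~5x smaller than the depth

depth_mag = 2.5 * math.log10(1.0 / (1.0 - depth))  # ~0.011 mag
sigma_needed = depth_mag / margin                  # ~0.002 mag per point
snr_needed = 1.0857 / sigma_needed                 # using sigma_m ~ 1.0857 / (S/N)

print(f"depth ~ {depth_mag:.4f} mag -> sigma ~ {sigma_needed:.4f} mag -> S/N ~ {snr_needed:.0f}")
# Around 500 per exposure - demanding, but an absolute zero-point error common to
# every point is irrelevant because only the relative dip is being measured.
```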

#16 Ed Wiley

Robin is right about this. A colleague and I are preparing our paper on precision and accuracy at the amateur level of photometry. A preview can be seen at

#17 zoltrix

Fantastic movie.
OK, S/N alone might not be enough; accuracy is affected by systematic errors too.
Let's get back to the original question: S/N vs S.
To be more specific: same focal ratio means the same photons per pixel and thus the same S/N, while the total number of photons also depends on the aperture.
What's the advantage, if any, of achieving a higher S?

#18 iantaylor2uk

You will get a larger signal by using a larger-aperture telescope. For example, you won't see much of M51 visually in an 80 mm refractor, but you can see its spiral structure in a 16" or 20" telescope. If you have two telescopes with the same f ratio, the one with the larger aperture will give you a larger signal (if this were not the case, why are professional telescopes so large?).

#19 robin_astro

To be more specific: same focal ratio means the same photons per pixel and thus the same S/N, while the total number of photons also depends on the aperture.
What's the advantage, if any, of achieving a higher S?

Not sure why you have introduced focal ratio, but you are correct about aperture. The number of photons S collected in a given time, e.g. from a star, is proportional to the aperture squared.

In an ideal world the S/N is then sqrt(S) (the photon noise), so the more signal, the better the S/N. In practice some noise also comes from the camera and from the sky background, but more signal always improves the S/N, which determines the detectability of an object and how precisely you can measure it.

Note that you can compensate for a smaller aperture to some extent by increasing the exposure time (collecting more photons), but this also increases the total camera noise contribution, so there is a limit to how far you can go.
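A minimal sketch of how those noise sources combine, using the usual CCD signal-to-noise expression; every instrument number below is an assumption chosen only to illustrate the scaling.

```python
import math

def snr(star_rate, t, npix=50, sky_rate=20.0, dark_rate=0.1, read_noise=8.0):
    """CCD S/N: star counts against photon, sky, dark and read noise (all in electrons).

    star_rate  -- source electrons/s summed over the photometric aperture
    sky_rate   -- sky electrons/s per pixel
    dark_rate  -- dark current electrons/s per pixel
    read_noise -- read noise electrons (RMS) per pixel per exposure
    npix       -- number of pixels inside the aperture
    """
    star = star_rate * t
    variance = star + npix * (sky_rate * t + dark_rate * t + read_noise**2)
    return star / math.sqrt(variance)

# Signal scales with collecting area, so doubling the aperture diameter gives ~4x the rate:
print(f"30 s,  smaller scope: S/N ~ {snr(1000.0, 30.0):.0f}")
print(f"30 s,  2x aperture  : S/N ~ {snr(4000.0, 30.0):.0f}")
# Longer exposures also help, though the sky and dark noise accumulate as well:
print(f"120 s, smaller scope: S/N ~ {snr(1000.0, 120.0):.0f}")
```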

Edited by robin_astro, 09 June 2021 - 09:49 AM.

#20 robin_astro

To be more specific: same focal ratio means the same photons per pixel and thus the same S/N.

You are correct that two telescopes with the same focal ratio but different apertures will collect the same number of photons per pixel, but this is not what matters for the S/N, e.g. for photometry. What matters is how many photons in total you collect from the star (or, for an extended object, per given area of sky). These might be spread over more pixels in the case of the larger-aperture scope, but in photometry you just add all of them up, so you get a higher S/N. (How much improvement depends on a number of factors, including the camera noise, the size of the pixels and the sky background level, but a larger aperture always gives a higher S/N.)
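The same kind of toy numbers as above illustrate the point about summing pixels: at the same f-ratio the electrons per pixel are identical, but the 2x aperture puts roughly 4x more star electrons into roughly 4x more pixels, and the aperture sum still wins.

```python
import math

def snr(star_electrons, npix, sky_per_pix=600.0, read_noise=8.0):
    """Star electrons against photon + sky + read noise summed over npix pixels (assumed numbers)."""
    variance = star_electrons + npix * (sky_per_pix + read_noise**2)
    return star_electrons / math.sqrt(variance)

print(f"smaller scope: S/N ~ {snr(30_000.0, npix=50):.0f}")
print(f"2x aperture  : S/N ~ {snr(120_000.0, npix=200):.0f}")
# Despite identical electrons per pixel, adding up the (4x more numerous) pixels
# gives the larger aperture roughly twice the S/N in this example.
```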

Edited by robin_astro, 09 June 2021 - 10:12 AM.

#21 zoltrix

#22 Mark Lovik

From a cursory review of the comments in this thread, there seem to be two different questions:

1. How are accuracy and precision defined?

2. How good are our measurements?

There are books across a range of engineering and scientific disciplines on this topic, and true mastery can take years.

Before you even try to answer these two questions with any scientific validity, there are some background ideas that need to be resolved.

A. S/N assumes background-offset-corrected values for both the noise and the signal components (or a background that is small enough to be ignored). Instruments and measurement strategies are designed to handle this issue.

B. What is the model response? Most of the earlier discussion assumes the response is linear; this normally needs to be checked and validated. How S/N relates to precision and accuracy depends on this model response, and working it through is the discipline usually termed "propagation of errors". For linearized systems the calculations are easy.

C. What is the dominant noise source in the system? You can often simplify the system error calculations based on the dominant error. As a rule of thumb, if the dominant error is 3-4 times larger than the other (non-systematic) errors, the other errors can be ignored.

D. Most measurements are indirect measurements: they depend on some sort of reference measurements, which also have errors. An example would be variable star measurements, where the measurement accuracy of the reference stars needs to be considered in the propagation of errors (a small worked sketch follows at the end of this post). These system errors can be separated into

  • Standard error of calibration - errors in the calibration curve of the system's response
  • Standard error of prediction - errors when using the calibration on new measurements

Now go back to precision and accuracy.

- Precision: you can usually make measurements with a precision that exceeds the expected propagated error in the system. Precision should be assigned by analysis and is not an intrinsic property. Unfortunately this is not uniform across disciplines.

- Accuracy: how close are we to the true value? In practice, for indirect calibrations (SEC, SEP above) we don't even know the true value; we only know an approximation to it.

I have spent years (in the past) doing this type of analysis for multivariate spectroscopic systems and for multi-sensor clusters. We can do simplifications (kind of like using the small angle approx. for image frames instead of straight trig formulas), but this is a deep rabbit hole. How deep do you want to go?
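As one small, hedged instance of the propagation-of-errors bookkeeping described above (points B and D), here is a sketch for simple differential photometry, where the comparison star's catalogue uncertainty plays the role of the reference-measurement error; transformation and extinction systematics would be added to the budget in the same quadrature fashion.

```python
import math

def diff_phot_error(snr_target, snr_comp, sigma_comp_catalogue):
    """Quadrature sum of independent error terms for m_target = m_comp + 2.5*log10(F_comp / F_target).

    snr_target           -- S/N of the target measurement
    snr_comp             -- S/N of the comparison star measurement
    sigma_comp_catalogue -- quoted uncertainty of the comparison star magnitude [mag]
    """
    sigma_target = 1.0857 / snr_target   # random error from target photon noise
    sigma_comp = 1.0857 / snr_comp       # random error from comparison photon noise
    return math.sqrt(sigma_target**2 + sigma_comp**2 + sigma_comp_catalogue**2)

# Even with S/N ~ 300 on both stars, a 0.02 mag catalogue error dominates the budget:
print(f"combined uncertainty ~ {diff_phot_error(300.0, 300.0, 0.02):.3f} mag")
```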