The term completeness quantifies how effective an observation or survey is at detecting the objects it targets. For a given apparent magnitude, a survey's completeness is the fraction of objects of that magnitude that are actually detected, the rest presumably lost to noise, such as that inherent in the instrument. For example, one might say that a particular survey has a completeness of 95% at magnitude 20. Generally, the larger the magnitude number (i.e., the fainter the object appears in the sky), the smaller the completeness fraction. There is a natural desire to extract and use every bit of valid information possible up to the limits of the instrument, and completeness quantifies how well that aim is achieved.
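As a rough sketch (not part of the entry itself), the fraction can be tabulated per magnitude bin whenever the true population is known, for instance from a deeper reference catalog or from simulated sources; the inputs `true_mags`, `detected`, and `bin_edges` here are hypothetical:

```python
import numpy as np

def completeness_by_magnitude(true_mags, detected, bin_edges):
    """Fraction of known objects recovered in each magnitude bin.

    true_mags : magnitudes of all objects known to be present (hypothetical)
    detected  : boolean array, True where the survey recovered the object
    bin_edges : magnitude bin boundaries
    """
    completeness = np.full(len(bin_edges) - 1, np.nan)
    for i in range(len(bin_edges) - 1):
        in_bin = (true_mags >= bin_edges[i]) & (true_mags < bin_edges[i + 1])
        if in_bin.sum() > 0:
            completeness[i] = detected[in_bin].mean()
        # bins containing no known objects stay NaN
    return completeness
```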
A completeness limit for a survey, or a dataset derived from it, and for a given limiting fraction, is the magnitude at which completeness has fallen to that fraction. One could describe a dataset as having a 95% completeness limit of magnitude +20, meaning that at least 95% of the objects brighter than magnitude +20 (i.e., with a smaller magnitude number) are included. In other words, these two are equivalent statements: "at magnitude +20, the completeness is 95%" and "the 95% completeness limit is magnitude +20".
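Given a per-bin completeness curve like the one above, the limit for a chosen fraction is simply the faintest magnitude at which the curve still meets that fraction. A minimal sketch, assuming completeness falls off monotonically toward fainter magnitudes:

```python
import numpy as np

def completeness_limit(bin_mags, completeness, fraction=0.95):
    """Faintest magnitude whose completeness is still >= fraction."""
    meets = completeness >= fraction
    if not meets.any():
        return None                   # the survey never reaches this fraction
    return bin_mags[meets].max()      # largest magnitude number = faintest

# The two equivalent statements from the text: completeness of 95% at
# magnitude 20, and a 95% completeness limit of magnitude 20.
mags = np.array([18.0, 19.0, 20.0, 21.0, 22.0])
comp = np.array([1.00, 0.99, 0.95, 0.70, 0.30])
print(completeness_limit(mags, comp, 0.95))   # 20.0
```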
Knowing a survey's completeness depends upon knowing what the survey has not seen, which must therefore be estimated. If data from a more sensitive survey are available, they can serve as the reference. Another approach is to create mock data based upon the distribution seen at closer distances, assuming some uniformity across time and space, and to calculate what is likely to be out there.
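One common way to make such an estimate is a Monte Carlo injection-and-recovery test: draw mock sources, add the survey's expected noise, and count how many would still be detected at each magnitude. The sketch below is purely illustrative; the magnitude zero point, Gaussian sky noise, and 5-sigma detection threshold are assumptions, not the procedure of any particular survey:

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_completeness(mag, n_mock=100_000, zero_point=25.0, sky_noise=20.0):
    """Estimate completeness at one magnitude from mock (injected) sources.

    Assumed model: flux = 10**(-0.4 * (mag - zero_point)), Gaussian noise of
    fixed width sky_noise, and "detected" meaning measured flux > 5 * sky_noise.
    """
    true_flux = 10.0 ** (-0.4 * (mag - zero_point))
    measured = true_flux + rng.normal(0.0, sky_noise, n_mock)
    return (measured > 5.0 * sky_noise).mean()

for m in (18.0, 19.0, 20.0, 21.0, 22.0):
    print(f"mag {m}: completeness ~ {mock_completeness(m):.2f}")
```

In practice the mock sources would typically be injected into the actual survey images and run through the survey's own detection pipeline, so that the estimate reflects real blending, crowding, and background variations rather than an idealized noise model.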
Another issue for surveys, termed contamination, is the apparent detection of objects where there are none, the illusion produced by noise (such as instrument noise). The ratio of real objects to the sum of real objects plus such noise-generated apparent objects is termed the survey's purity (at a given magnitude), and analogous purity limits can be estimated for a specific survey.
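In the same spirit, purity at a given magnitude is simply the fraction of apparent detections that correspond to real objects; a trivial sketch with made-up counts:

```python
def purity(n_real, n_spurious):
    """Fraction of apparent detections that are real objects."""
    return n_real / (n_real + n_spurious)

# Hypothetical counts in one faint magnitude bin: 180 detections matched to
# real objects and 20 judged to be noise-generated.
print(purity(180, 20))   # 0.9
```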