Nyquist–Shannon sampling theorem

The Nyquist–Shannon sampling theorem is a fundamental theorem in the field of information theory, in particular telecommunications. It is also known as the Whittaker–Nyquist–Kotelnikov–Shannon sampling theorem, or simply as the sampling theorem.

The theorem states that:

When sampling a band-limited signal (e.g., converting from an analog signal to digital), the sampling frequency must be greater than twice the input signal bandwidth in order to be able to reconstruct the original perfectly from the sampled version.

If $W$ is the signal bandwidth and $f_s$ is the sampling rate, then the theorem can be stated mathematically (called the "sampling condition" from here on) as $f_s > 2W$.

Note: The above formulation implicitly assumes that the signal $s(t)$ is a real-valued function with a Fourier transform that is zero outside the interval $(-W, W)$. In cases where the spectrum has a lowest positive frequency $f_L > 0$, it is possible to use smaller sampling frequencies, close to twice the difference between the highest and lowest frequency; see the section about undersampling below.

Aliasing

If the sampling condition is not satisfied, then shifted copies of the signal's spectrum will overlap (see the proof below). This overlap is called aliasing.

To prevent aliasing, two things can readily be done:

  1. Increase the sampling rate
  2. Introduce an anti-aliasing filter, or make an existing anti-aliasing filter more stringent

The purpose of the anti-aliasing filter is to restrict the bandwidth of the signal so that it satisfies the sampling condition. This holds in theory but is not achievable in practice, because no filter can completely remove the signal energy outside the desired bandwidth. However, the residual energy can be made small enough that the aliasing effects are negligible.
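
To make the aliasing effect concrete, here is a minimal numerical sketch (the frequencies are illustrative choices, not taken from the text above): a 7 Hz cosine sampled at 10 Hz violates the sampling condition and produces exactly the same samples as a 3 Hz cosine.

  import numpy as np

  fs = 10.0                              # sampling rate in Hz (violates fs > 2*7)
  n = np.arange(20)                      # sample indices
  t = n / fs                             # sample instants

  x_high = np.cos(2 * np.pi * 7 * t)     # 7 Hz component, above the 5 Hz Nyquist limit
  x_alias = np.cos(2 * np.pi * 3 * t)    # 3 Hz component, below the Nyquist limit

  # The 7 Hz signal folds down to 10 - 7 = 3 Hz: the two sample sets coincide.
  print(np.allclose(x_high, x_alias))    # True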

Spatial domain

[Image: Moire pattern of bricks.jpg]

Although this article discusses sampling in the time domain, the sampling theorem holds for the spatial domain as well.

For example, a digital photograph of a striped shirt with high spatial frequencies (in other words, closely spaced stripes) can produce aliasing between the shirt pattern and the camera's sensor array. The aliasing appears as a Moiré pattern. The way to obtain higher sampling in the spatial domain in this case would be to move closer to the shirt or to use a higher-resolution sensor (for example, a finer CCD).

Another example is shown to the right in the brick patterns. The top image shows what happens when the sampling theorem is not followed: the image was downsampled without low-pass filtering, and aliasing produces a Moiré pattern. When software rescales an image (the same process that created the thumbnail shown in the bottom image), it in effect runs the image through a low-pass filter first and then downsamples it, producing a smaller image that does not exhibit the Moiré pattern.

The top image was created by zooming out in GIMP and then taking a screenshot of it. The likely reason this works is that the zoom feature simply downsamples without low-pass filtering (probably for performance reasons), since the zoomed image is intended for on-screen display rather than for printing or saving.
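
As a rough illustration of the two rescaling strategies described above, the following sketch (assuming NumPy and SciPy are available; the Gaussian blur merely stands in for whatever low-pass filter an image editor actually uses) downsamples an array once naively and once after low-pass filtering.

  import numpy as np
  from scipy.ndimage import gaussian_filter

  rng = np.random.default_rng(0)
  image = rng.random((512, 512))        # placeholder for a grayscale photograph

  factor = 4                            # downsampling factor

  # Naive downsampling: keep every 4th pixel with no low-pass filtering.
  # High spatial frequencies alias and can show up as Moire patterns.
  naive = image[::factor, ::factor]

  # Anti-aliased downsampling: blur first (low-pass filter), then subsample.
  # A sigma on the order of the factor attenuates the spatial frequencies
  # that would otherwise fold back into the smaller image.
  smoothed = gaussian_filter(image, sigma=factor / 2.0)
  antialiased = smoothed[::factor, ::factor]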

Downsampling

When a signal is downsampled, the sampling condition must still be satisfied with respect to the new, lower sampling rate. This is achieved by first filtering the signal with an appropriate anti-aliasing filter and then discarding samples.
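
A minimal sketch of this, assuming SciPy is available: scipy.signal.decimate applies an anti-aliasing low-pass filter and then keeps every q-th sample, so the sampling condition still holds at the new, lower rate. The signal below is an illustrative example, not one from the text.

  import numpy as np
  from scipy.signal import decimate

  fs = 1000.0                                # original sampling rate, Hz
  t = np.arange(0, 1.0, 1.0 / fs)
  x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

  q = 4                                      # downsample by a factor of 4
  y = decimate(x, q)                         # low-pass filter, then keep every 4th sample

  # The new rate is 250 Hz (Nyquist limit 125 Hz), so the 400 Hz component is
  # removed by the filter rather than aliasing down to |400 - 2*250| = 100 Hz.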

Critical frequency

The critical frequency is defined as twice the signal bandwidth, $2W$; it is the sampling rate at which the sampling condition becomes an equality rather than a strict inequality.

If the sampling frequency is exactly twice the highest frequency of the input signal, then phase mismatches between the sampler and the signal will distort the signal. For example, sampling $\cos(\pi t)$ at times $t = 0, 1, 2, \ldots$ gives the discrete signal $\cos(\pi n) = (-1)^n$, as desired. However, sampling the same signal at times $t = 0.5, 1.5, 2.5, \ldots$ gives a constant zero signal. These two sets of samples, which differ only in phase and not frequency, give dramatically different results because they sample at exactly the critical frequency.
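
The following short check (a sketch with NumPy, using the cosine example above) reproduces this phase sensitivity numerically: samples taken on the peaks alternate between +1 and -1, while samples shifted by half a sampling interval land on the zero crossings and are all zero.

  import numpy as np

  n = np.arange(10)

  on_peaks = np.cos(np.pi * n)             # samples at t = 0, 1, 2, ...  -> (-1)^n
  on_zeros = np.cos(np.pi * (n + 0.5))     # samples at t = 0.5, 1.5, ... -> all 0

  print(on_peaks)                          # [ 1. -1.  1. -1. ... ]
  print(np.allclose(on_zeros, 0.0))        # True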

Historical background

The theorem was first formulated by Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), but was only formally proven by Claude E. Shannon in 1949 ("Communication in the presence of noise"). Kotelnikov published in 1933, Whittaker in 1915 (E.T.) and 1935 (J.M.), and Gabor in 1946.

Mathematically, the theorem is formulated as a statement about the Fourier transform.

If a real-valued function $s(t)$ has a Fourier transform $S(f)$ with $S(f) = 0$ for $|f| \ge W$, then it is completely determined by giving the value of the function at a series of points spaced $\frac{1}{2W}$ apart. The values $s_n = s\!\left(\frac{n}{2W}\right)$ are called the samples of $s(t)$.

The minimum sampling frequency that allows reconstruction of the original signal, that is $2W$ samples per unit time, is known as the Nyquist frequency (or Nyquist rate). The time between samples, $\frac{1}{2W}$, is called the Nyquist interval.

If $S(f) = 0$ for $|f| \ge W$, then $s(t)$ can be recovered from its samples by the Nyquist–Shannon interpolation formula.
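
For reference, with the conventions above (sample spacing $\frac{1}{2W}$ and $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$), the interpolation formula takes the standard form

$s(t) = \sum_{n=-\infty}^{\infty} s\!\left(\frac{n}{2W}\right) \operatorname{sinc}(2Wt - n).$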

A well-known consequence of the sampling theorem is that a signal cannot be both bandlimited and time-limited. To see why, assume that such a signal exists, and sample it faster than the Nyquist frequency. Because the signal is time-limited, only finitely many of its samples are nonzero, and this finite number of time-domain coefficients should define the entire signal. Equivalently, the entire spectrum of the bandlimited signal should be expressible in terms of these finitely many coefficients, that is, as a trigonometric polynomial. Since the signal is bandlimited and the sampling rate exceeds the Nyquist rate, this trigonometric polynomial would have to vanish on a whole interval of frequencies between the band edge and half the sampling rate, and hence would have infinitely many zeros in a bounded interval. However, a nonzero (trigonometric) polynomial cannot have more zeros than its order, a consequence of the fundamental theorem of algebra. This contradiction shows that the original assumption, that a nontrivial time-limited and bandlimited signal exists, is incorrect.

Undersampling

When sampling a non-baseband signal, the theorem must be restated as follows. Let $f_L$ and $f_H$ be the lower and upper boundaries of a frequency band and let $W = f_H - f_L$ be its bandwidth. Then there is a non-negative integer $N$ with

$f_L = N \cdot W + r$, $0 \le r < W$; that is, $N = \lfloor f_L / W \rfloor$ and $r = f_L - N \cdot W$.

Any real-valued signal $s(t)$ with a spectrum limited to this frequency band, that is with $S(f) = 0$ for $|f|$ outside the interval $[f_L, f_H]$, is uniquely determined by its samples obtained at a sampling rate $f_s$, if this sampling rate satisfies one of the following conditions:

  • $\frac{2 f_H}{N + 1 - n} \le f_s \le \frac{2 f_L}{N - n}$ for one of n = 0, 1, ..., N−1, or
  • $f_s \ge 2 f_H$.

If $N \ge 1$, then the first set of conditions allows sampling rates smaller than the Nyquist rate $2 f_H$ obtained from the upper bound of the spectrum. If the sampling rates obtained this way are still too high, the intuitive sampling-by-taking-values has to be replaced by sampling-by-taking-scalar-products, as is (implicitly) the case in frequency-division multiplexing.

Consider FM radio to illustrate the idea of undersampling. In the US, FM radio operates on the frequency band from 88 MHz to 108 MHz. The sampling conditions are satisfied for $f_L = 88$ MHz, $f_H = 108$ MHz and $W = 20$ MHz, i.e. $N = 4$ and $r = 8$ MHz. The value $n = 0$ gives the lowest interval of sampling frequencies, $43.2\ \text{MHz} \le f_s \le 44\ \text{MHz}$, and this is a scenario of undersampling.

Note that when undersampling a real-world signal, the sampling circuit must still be fast enough to capture the highest signal frequency of interest. Theoretically, each sample should be taken during an infinitesimally short interval, but this is not practically feasible. Instead, each sample must be taken over an interval short enough to represent the instantaneous value of the highest-frequency component. This means that in the FM radio example above, the sampling circuit must be able to capture a signal with a frequency of 110 MHz, not 43.2 MHz. Thus, the sampling frequency may be only a little bit greater than 43.2 MHz, but the input bandwidth of the system must be at least 110 MHz.

If the theorem were misunderstood to require twice the highest frequency, the sampling rate would be assumed to need to exceed the Nyquist frequency of 216 MHz. While such a rate does satisfy the last condition above, the signal would be grossly oversampled.

Note that if the FM radio band is undersampled at one of the rates above, then a band-pass filter, rather than a low-pass filter, is required as the anti-aliasing filter.
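
The rate conditions above are easy to check mechanically. The following sketch (an illustrative helper, not part of the original text; the function name is made up) enumerates the valid undersampling-rate intervals for a band and reproduces the FM figures: N = 4, r = 8 MHz and a lowest interval of 43.2 MHz to 44 MHz.

  def undersampling_intervals(f_low, f_high):
      # Return N, r and the list of valid undersampling-rate intervals.
      width = f_high - f_low
      big_n = int(f_low // width)             # N = floor(f_L / W)
      r = f_low - big_n * width               # remainder, 0 <= r < W
      intervals = []
      for n in range(big_n):                  # n = 0, 1, ..., N-1
          intervals.append((2 * f_high / (big_n + 1 - n),
                            2 * f_low / (big_n - n)))
      return big_n, r, intervals              # fs >= 2*f_high is always valid too

  N, r, bands = undersampling_intervals(88e6, 108e6)
  print(N, r / 1e6)                           # 4 8.0
  print(bands[0])                             # (43200000.0, 44000000.0)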

In certain problems, the frequencies of interest are not a single interval but some more general set F of frequencies. Again, the required sampling rate is proportional to the total size (measure) of F. For instance, certain domain decomposition methods fail to converge for the 0th frequency (the constant mode) and for some medium frequencies. Then the set of interesting frequencies would be something like 10 Hz to 100 Hz together with 110 Hz to 200 Hz. In this case, one would need to sample at a data rate of 360 real values per second (for example, at a sampling rate of 20 Hz with 18 real values in each sample), not 400 Hz, to fully capture these signals.

Proof

To prove the theorem, consider two continuously defined signals: any continuous signal $s(t)$ with fast decay at infinity and a Dirac comb with sampling interval $T$,

$\Delta_T(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT).$

The result of their multiplication is

$s_\Delta(t) = s(t) \cdot \Delta_T(t) = \sum_{n=-\infty}^{\infty} s(nT)\,\delta(t - nT).$

Any reconstruction has the form

$\tilde{s}(t) = (s_\Delta * h)(t) = \sum_{n=-\infty}^{\infty} s(nT)\, h(t - nT)$

with an interpolation kernel $h(t)$.

Taking the Fourier transform and applying the multiplication/convolution property (writing $S = \mathcal{F}(s)$ and $H = \mathcal{F}(h)$),

$\mathcal{F}(s_\Delta * h) = \mathcal{F}(s \cdot \Delta_T)\cdot H = \frac{1}{2\pi}\big(S * \mathcal{F}(\Delta_T)\big)\cdot H,$

one gets the Fourier transform of the reconstructed function as

$\tilde{S}(\omega) = \frac{1}{2\pi}\left(S * \frac{2\pi}{T}\sum_{k=-\infty}^{\infty}\delta(\,\cdot\, - k\,\omega_s)\right)\!(\omega)\; H(\omega),$

where $\omega_s = \frac{2\pi}{T}$ is the sampling rate (in rad/sec), and by the shifting property of the Dirac delta distribution under convolution,

$\tilde{S}(\omega) = \frac{1}{T}\sum_{k=-\infty}^{\infty} S(\omega - k\,\omega_s)\; H(\omega).$

Thus $\tilde{S}$ is the product of the periodized Fourier transform of $s$, $\frac{1}{T}\sum_k S(\omega - k\,\omega_s)$, and a "windowing" function $H$. For the sampling theorem, one wants to obtain the identity of $S$ and $\tilde{S}$. To simplify matters, one assumes that in the periodization sum at most one term is different from zero, that is, that the copies of $S$ shifted by integer multiples of $\omega_s$ do not overlap. This is true, among other possibilities,

  • in the standard case, where $S$ is zero outside the interval $\left(-\frac{\omega_s}{2}, \frac{\omega_s}{2}\right)$, or
  • in the undersampling case, where $S$ is zero outside the union $\left(-(N+1)\frac{\omega_s}{2},\, -N\frac{\omega_s}{2}\right] \cup \left[N\frac{\omega_s}{2},\, (N+1)\frac{\omega_s}{2}\right)$ for some non-negative integer $N$.

The standard case is contained as the special case $N = 0$.

Since the periodization of $S$ is now identical to $S$ on its support, the "windowing" factor $H$ has to be chosen as

$H(\omega) = T$ wherever $S(\omega) \neq 0$, and $H(\omega) = 0$ wherever one of the shifted copies $S(\omega - k\,\omega_s)$, $k \neq 0$, is nonzero.

In the standard case, the cardinal sine function $h(t) = \operatorname{sinc}\!\left(\frac{t}{T}\right) = \frac{\sin(\pi t / T)}{\pi t / T}$ satisfies this condition, since its Fourier transform equals $T$ for $|\omega| < \frac{\omega_s}{2}$ and vanishes elsewhere. Since then $\tilde{s} = s$, we get the Nyquist–Shannon interpolation formula

$s(t) = \sum_{n=-\infty}^{\infty} s(nT)\, \operatorname{sinc}\!\left(\frac{t - nT}{T}\right).$

In the undersampling case a similar formula holds, with the cardinal sine replaced by the band-pass kernel $h(t) = (N+1)\,\operatorname{sinc}\!\left(\frac{(N+1)t}{T}\right) - N\,\operatorname{sinc}\!\left(\frac{Nt}{T}\right)$, whose Fourier transform equals $T$ on $N\frac{\omega_s}{2} \le |\omega| \le (N+1)\frac{\omega_s}{2}$ and vanishes elsewhere.

If $S$ has its support inside some interval $(-\Omega, \Omega)$, and if the sampling rate $\omega_s$ is not sufficiently large to satisfy $\omega_s \ge 2\Omega$, then the terms of the periodizing summation will overlap and aliasing will be introduced.
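
As a numerical check of the interpolation formula derived above (a sketch assuming NumPy; the band-limited test signal and rates are made-up values), truncated sinc interpolation of a sampled band-limited signal reproduces it to within the truncation error.

  import numpy as np

  W = 3.0                        # bandwidth of the test signal, Hz
  T = 1.0 / (2 * W + 2)          # sample spacing, chosen so that 1/T > 2W

  def s(t):
      # band-limited test signal: components at 1 Hz and 2.5 Hz, both below W
      return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 2.5 * t)

  n = np.arange(-2000, 2001)     # enough samples that truncation error is small
  samples = s(n * T)

  t_eval = np.linspace(-1.0, 1.0, 201)
  # Whittaker-Shannon interpolation: sum_n s(nT) * sinc((t - nT) / T),
  # where np.sinc uses the normalized convention sin(pi x) / (pi x).
  recon = np.array([np.sum(samples * np.sinc((t - n * T) / T)) for t in t_eval])

  print(np.max(np.abs(recon - s(t_eval))))   # small, limited by truncation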

References

  • E. T. Whittaker, "On the Functions Which are Represented by the Expansions of the Interpolation Theory," Proc. Royal Soc. Edinburgh, Sec. A, vol. 35, pp. 181-194, 1915.
  • H. Nyquist, "Certain topics in telegraph transmission theory," Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928.
  • V. A. Kotelnikov, "On the carrying capacity of the ether and wire in telecommunications," Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA, Moscow, 1933 (Russian).
  • C. E. Shannon, "Communication in the presence of noise," Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10-21, Jan. 1949.
