University of Ottawa NMR Facility Web Site

Please feel free to make suggestions for future posts by emailing Glenn Facey.

Wednesday, March 19, 2008

Forward Linear Prediction

The digital resolution in an NMR spectrum can be improved by zero filling the FID (i.e., adding zeros to the end of the FID before Fourier transformation). Forward linear prediction, on the other hand, can be used to improve both the digital and the real resolution of a spectrum. Forward linear prediction uses the data collected in the FID to predict data points after the receiver was turned off. Both processing techniques artificially increase the acquisition time; however, only forward linear prediction adds new information to the spectrum. In the figure below, a truncated FID is transformed untreated, with zero filling, and with forward linear prediction.
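To make the comparison concrete, here is a minimal NumPy sketch of both treatments of a truncated FID. The signal, the LP order, and the amount of prediction are made-up illustration values, and the least-squares coefficient fit below stands in for whatever LP algorithm a given processing package actually uses.

```python
import numpy as np

def forward_lp_extend(fid, order=8, n_pred=None):
    """Extend a truncated FID by forward linear prediction.

    Fits coefficients a[k] such that fid[m] ~ sum_k a[k] * fid[m-1-k]
    by least squares, then runs the recursion past the end of the data.
    """
    n = len(fid)
    if n_pred is None:
        n_pred = n  # double the record length
    # Each row holds `order` past points (newest first); the target
    # is the point that follows them.
    A = np.array([fid[m:m + order][::-1] for m in range(n - order)])
    b = fid[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = np.concatenate([fid, np.zeros(n_pred, complex)])
    for m in range(n, n + n_pred):
        out[m] = np.dot(a, out[m - order:m][::-1])
    return out

# A truncated FID: a single line whose decay is far from complete
# at the last acquired point.
n = 128
t = np.arange(n)
fid = np.exp(2j * np.pi * 0.11 * t) * np.exp(-t / 400.0)

zf = np.concatenate([fid, np.zeros(n)])   # zero filling
lp = forward_lp_extend(fid)               # forward linear prediction

spec_zf = np.fft.fft(zf)
spec_lp = np.fft.fft(lp)
```

For this noiseless single line the predicted points continue the true decay, so the LP spectrum shows the narrower line of a record twice as long, while the zero-filled spectrum keeps the truncation wiggles of the short record.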


old swan said...

The example is excellent and the picture highly explanatory, as always on this blog.
Your statement "it is only forward linear prediction which adds new information to the spectrum" is not correct.
1. LP can't add any authentically NEW information, because it simply extrapolates the information already contained in the FID.
2. Zero-filling has the property of recovering the information contained in the imaginary part of the FID (uncorrelated with the real part) and transferring this additional information into the real part of the spectrum.
3. To run an LP algorithm, the spectroscopist feeds it with a hypothetical number of lines. While this is certainly NEW information not already present in the FID, it is arbitrary.
Bottom line: LP is a good thing, yet things are more complicated.
[Giuseppe Balacco]

Glenn Facey said...

Dear Old Swan,

Thank you for your very constructive input. I agree with your first and third points: the added information is an extrapolation of information already present, and the situation is indeed more complicated than what I present here. I refer to the added information as "new" only because it is non-zero.

I do not agree with (or perhaps do not understand) your second point. I'm not sure what you mean when you say that adding null points to the signal can transfer information from the imaginary signal into the real spectrum. Is this not done by quadrature detection?

Kind regards,

old swan said...

Let's say you collect an FID of n complex points. It is a sequence of n real and n imaginary numbers, or measures. They are independent measures, because no single value has been calculated from the rest. The total information content of the experiment amounts to 2n measures. After FT and phase correction you have a real spectrum of n points, while the imaginary spectrum is discarded. You may ask: "Why are we throwing away half of our information?" We could rescue it with a Hilbert transform. It is a tool for calculating the imaginary spectrum when you only have the real part, or the real spectrum when you only have the imaginary part. If the noise in the imaginary part is uncorrelated with the noise in the real part, we can add:
exp. real spectrum + real spectrum regenerated from the im. part = a more intense spectrum.
The HT works in this way: you set the real part to zero and apply an inverse FT. You arrive at an artificial FID which is symmetric (both the real and imaginary parts first decay, then grow again). Set the right half of the artificial FID to zero, then apply a direct FT. (I have omitted the adjustment of intensities, because I only want to show the concept.)
As you can see, the HT is akin to zero-filling. Although this is not a mathematical demonstration, it may convince you that zero-filling can move some information from the imaginary part onto the real one, and so can enrich the latter with real information.
It only works, however, when the signal has already decayed to zero at the end of the FID; otherwise it is so unrealistic that it generates unwelcome wiggles.
I am sorry I can't find a literature reference with the real demonstration.
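For readers who want to try the recipe numerically, here is a small NumPy sketch of the complementary direction mentioned above: regenerating the full complex spectrum from the real part alone, assuming a causal FID that has decayed within the first half of the record and has a real first point. The line position and decay are invented for illustration.

```python
import numpy as np

# A causal FID occupying only the first half of the record, with a
# real first point (as after phase correction).
n = 256
t = np.arange(n // 2)
fid = np.zeros(n, complex)
fid[: n // 2] = np.exp(2j * np.pi * 0.07 * t) * np.exp(-t / 40.0)

spec = np.fft.fft(fid)
re_only = spec.real          # pretend the imaginary part was discarded

# The recipe: inverse FT the real spectrum to obtain a symmetric
# artificial FID, adjust the intensities, zero its right half, and
# apply a direct FT.
sym = np.fft.ifft(re_only)
sym[1 : n // 2] *= 2.0       # intensity adjustment for the kept half
sym[n // 2 :] = 0.0          # discard the mirrored right half
recovered = np.fft.fft(sym)  # full complex spectrum, Re and Im alike
```

Here `recovered` matches the original complex spectrum to machine precision, which shows that for a fully decayed, causal FID the discarded imaginary spectrum is calculable from the real one.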

Glenn Facey said...

Dear Old Swan,

Thank you. This is new and interesting to me. I will do some reading.


old swan said...

Maybe a few readers of this blog, less fond of math, can better understand the following simple recipe. In high resolution 1D NMR, the signal has usually already decayed to (practically) zero at the end of the FID. In this case zero-filling and LP should yield the same results, but the former is preferable (faster and more robust). When, for whatever reason, the signal is truncated, the preference goes instead to LP.

Anonymous said...

I believe that the reference Old Swan refers to is a paper from Ernst's lab:

E. Bartholdi and R. R. Ernst, J. Magn. Reson., 1973, 11, 9-19


stan said...

Hello everybody,
I was kind of silently following this discussion thread and wondering idly about how old problems never really go away. Back in 1978, while with Bruker, I wrote a detailed paper on this theme, analyzing rigorously what zero-filling (ZF) and exponential multiplication (EM) and their combinations did both to the f-domain coherent signal and to the f-domain noise and its autocorrelation functions.
I still have the manuscript; it was supposed to appear in the Bruker Report, but it became too heavy (20 pages, dense math, and a lot of figures), so it did not get a chance and, due to other pressing company matters, remained rotting in a drawer ... Since I see that it is still pertinent, I will dust it off and publish it in Stan's Library first thing after Euromar.
But let me add right away a few comments and anticipations:
- First, if your FID is truncated, you either zero-fill and learn to live with oscillatory artifacts, or you apply some a priori knowledge (such as the fact that an FID is an auto-regressive function) to extend the FID artificially by one of the many LP algorithms. In both cases you at least double the number of data points. I agree with Old Swan that in the second case you do not really get new information from the LP; by itself, LP just makes the spectra look prettier. I bet that you could achieve the same result, or a better one, by applying a Hanning apodization (to remove the oscillations), then a bit of LG resolution enhancement (to compensate for the mild resolution loss implicit in Hanning), and then zero-filling.
- As long as one has a single coil+preamp front end, the quadrature splitting is done "artificially" by means of two separate phase detectors (no matter whether analog or digital) which use the same input signal but two orthogonal reference signals. Consequently, as Carlos Cobas points out, even though the data output by the two channels are statistically uncorrelated (because of the orthogonality of the reference channels), they are not independent at all. In fact, the Hilbert transform of either one of them matches exactly the other (including the noise).
- However, there definitely is an increase of information associated with the doubling of data points. As Bartholdi and Ernst discuss in the cited paper (Fourier spectroscopy and the causality principle), the "new" points really contain fresh information; they are not just some interpolation between the adjacent "old" points (that happens only when you push ZF beyond the factor of 2).
Getting new information from two equivalent sets of data may sound like a contradiction, until you accept the now well-known fact that a plain FFT does not extract all the information available in an FID. Doubling the number of points helps to squeeze the lemon, so to say, and get more juice out of it. So it is not that ZF somehow "creates" new information, but rather that a plain FFT is not good enough at digging it out!
Consequently, point doubling should ALWAYS be done; with the computers we have today it would be totally foolish not to. When the FID is truncated (and only in that case) one has the additional and independent problem of getting rid of the consequent ripples, which can be suppressed either by a proper apodization or by LP.
In the manuscript, I have worked out all of the above aspects of ZF and/or EM in a quantitative way, and the details are quite interesting. For example, after ZF the noises in the real and imaginary parts, seen separately, are still white (no correlation between adjacent data points), but there arises a correlation between the k-th real point and the (k+m)-th imaginary point for any odd m! This, however, goes too far for a blog comment.