University of Ottawa NMR Facility Web Site

Please feel free to make suggestions for future posts by emailing Glenn Facey.

Wednesday, May 11, 2016

Non-uniform Sampling (NUS)

Collecting 2D or 3D NMR data can be very time consuming. The indirect dimension of a 2D experiment is sampled linearly via the t1 increments in the pulse sequence: an FID must be collected for every single linearly spaced t1 increment. In the interest of collecting 2D or 3D NMR data in a more time efficient manner, a great deal of effort has gone into faster data collection techniques.  While some of these methods are based on spatial selectivity, others are based on sparse sampling of the indirect dimensions of nD NMR sequences.  One such sparse sampling method, non-uniform sampling (NUS), samples a subset of the indirect dimension in a random (or weighted random) manner and then predicts the uncollected data from the data sampled, in much the same way data are predicted in the forward and backward linear prediction methods.  The reconstructed data are then used for the indirect Fourier transforms.  A comparison of the conventional and non-uniform data sampling methods is illustrated in the figure below.
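As a rough sketch of what a weighted-random NUS schedule might look like (the decay constant and 25% coverage here are illustrative assumptions; real schedules come from the spectrometer software), the increments could be chosen like this:

```python
import math
import random

random.seed(7)
n_full = 256   # t1 increments a conventional experiment would use
n_nus = 64     # 25% NUS: increments actually sampled

# weight early increments more heavily, where the signal is strongest
weights = [math.exp(-i / (0.3 * n_full)) for i in range(n_full)]

schedule = set()
while len(schedule) < n_nus:
    schedule.add(random.choices(range(n_full), weights=weights)[0])
schedule = sorted(schedule)
print(len(schedule))  # 64 increments chosen out of 256
```

Only the FIDs at these 64 t1 values are recorded; the rest are reconstructed before the indirect Fourier transform.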
Collecting only a fraction of the FIDs reduces the experiment time to that same fraction.  The figure below shows a superposition of partial 600 MHz 1H-13C HSQC spectra of a D2O solution of sucrose.
All of the spectra were collected with 2 scans per increment using a 1.5 second recycle time.  The lower spectrum in black was collected conventionally with 256 increments in 15 minutes. The middle spectrum in blue was collected conventionally with 64 increments in 3.75 minutes. The top spectrum in purple was collected using NUS with 25% of 256 increments (i.e. 64 increments) collected in 3.75 minutes.  A comparison of the two conventionally collected data sets shows the expected loss in F1 resolution when the experiment time is reduced 4-fold by reducing the number of increments by a factor of 4. The bottom (black) conventional spectrum and the top (purple) NUS spectrum are, however, virtually indistinguishable despite the 4-fold reduction in experiment time for the NUS spectrum.  NUS is a very valuable technique for reducing experiment times without sacrificing resolution.
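The quoted experiment times can be sanity-checked from the acquisition parameters above; the recycle delay alone accounts for most of the time (per-scan acquisition time and overhead make up the difference to the stated 15 minutes):

```python
# rough experiment-time estimate from the recycle delay alone
scans, recycle = 2, 1.5  # scans per increment, recycle time (s)
minutes = {n: n * scans * recycle / 60 for n in (256, 64)}
print(minutes)  # {256: 12.8, 64: 3.2}
```

Either way, the ratio between the two schemes is the expected factor of 4.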


Anonymous said...

NUS spectra save you time, but you have to pay a price: sensitivity, or signal-to-noise. The simplest way to understand this is that since you will be recording for less time you will collect less signal. There is no magic in the NUS methods that collects more signal faster (if this were true then cryoprobes, for example, would not be needed!). If, however, you had an S/N of 10,000:1 with conventional sampling, then NUS at 25% sampling will give you an S/N of 5,000:1 (ideally), which is still plenty to distinguish peaks. So NUS only makes sense with concentrated samples where sensitivity is not an issue but resolution is. Moreover, the various NUS processing techniques introduce artefacts that depend on the intensity of the peaks and are larger than the noise. So it is not correct to talk about signal-to-noise for NUS spectra; it is more appropriate to say signal-to-artefact.
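The ideal numbers quoted above follow from S/N scaling with the square root of total measurement time:

```python
import math

conventional_snr = 10000
sampling_fraction = 0.25  # 25% NUS at the same scans per increment
# S/N scales (ideally) with the square root of total measurement time
nus_snr = conventional_snr * math.sqrt(sampling_fraction)
print(int(nus_snr))  # 5000
```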
The general rule of thumb is that if you can collect the 2D spectrum with the minimum scans per increment required by the pulse sequence phase cycle then you are better off using NUS. But if you need to do something like 64 scans per increment and 64 NUS increments in order to see the signal just protruding out of the noise (artefacts) then you will get a better result with 16 scans per increment and 256 conventional increments.
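The two schemes in this rule of thumb involve the same total number of FIDs, and therefore the same experiment time; the difference is only in how the scans are distributed:

```python
# both schemes record the same number of FIDs (same total time)
nus_fids = 64 * 64     # 64 scans/increment x 64 NUS increments
conv_fids = 16 * 256   # 16 scans/increment x 256 conventional increments
print(nus_fids, conv_fids)  # 4096 4096
```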
The most spectacular application of NUS is with 5, 6 and 7D spectra of labelled proteins where conventional sampling would have taken a few centuries while NUS with 0.001% or less sampling ratio gives the result in a few days.

Glenn Facey said...

Thank you for your insight. Yes, nothing is free.

Sameer Wahid said...

Hi Glenn,

Great post!

How long did the processing take? I've started playing with 2D NUS and compressed sensing processing (we don't have an MDD license yet) and have found the processing to take quite a while (longer if F1 phasing is required).


Glenn Facey said...

With the Bruker NUS license on our computer, a 2D data set of size 1024*1024 takes on the order of 10 seconds to process.

Sameer Wahid said...

Thanks Glenn - that's not long at all! I must have been doing something wrong.

David Rovnyak said...


Great post and data! A clarification on sensitivity and SNR.

If the nonuniform sampling is unweighted, then the sensitivity is as described by anonymous. If the signal decays (or has any other non-constant envelope) then weighting the nonuniform sampling is strongly recommended and (a) will always have greater sensitivity** than uniform sampling for equal experiment times or (b) often only slightly less sensitivity than uniform sampling if recording for less time (e.g. 10-25% less in many cases).
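A toy numeric sketch of where the weighted-sampling gain comes from (the decay constant, window length, and point count are illustrative assumptions, not from the post): for a decaying envelope with constant noise per point, concentrating the samples where the signal is strong raises the mean sampled amplitude relative to uniform sampling over the same window.

```python
import numpy as np

T2 = 0.05    # assumed decay constant (s)
T = 4 * T2   # evolution window sampled out to 4*T2
n = 256      # points in either schedule

# uniform sampling times
t_uni = np.linspace(0.0, T, n)
# exponentially weighted sampling (density ~ exp(-t/T2)) via the inverse
# CDF, using evenly spaced quantiles so the comparison is deterministic
u = (np.arange(n) + 0.5) / n
t_wtd = -T2 * np.log(1.0 - u * (1.0 - np.exp(-T / T2)))

# mean sampled signal amplitude, a proxy for sensitivity per point
# when the noise contribution per point is the same for both schedules
mean_uni = np.exp(-t_uni / T2).mean()
mean_wtd = np.exp(-t_wtd / T2).mean()
gain = mean_wtd / mean_uni
print(round(gain, 2))  # close to 2 for this decay/window combination
```

This is consistent with the up-to-a-factor-of-2 figure in the footnote; for a slowly decaying (or constant-time) envelope the two schedules converge and the gain disappears.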

A key case: these principles mean that NUS cannot be used to improve the sensitivity of constant-time evolution periods.

Also, sampling artifacts are certainly possible in NUS, but are associated both with sparsity and with the total number of samples. Users choosing, say, 50% reduction by NUS are very unlikely to encounter them. Choosing 512 of 1024 points, for example, is extremely conservative, and modern spectral estimators appear to deconvolve that below the noise. Users should experiment with more sparsity on standards; superb results can be obtained for 25-33% reduction after getting used to setting the parameters.

**The improvement may be negligible or can be up to a factor of 2 in a given dimension, depending on conditions.


Glenn Facey said...

Thanks very much David. I have only started playing with NUS and hoped that this post would be a great introduction to the technique for those not familiar with it. I am very pleased with the gain in resolution per unit data collection time but have yet to experiment with the sensitivity and artifact issues compared to conventionally collected data. I expect these are important for very dilute samples. The information that you and Anonymous have given is extremely helpful and will likely be the subject of a blog post in the future.


Krzysztof Kazimierczuk said...

Dear Sameer (and all TopSpin compressed sensing users),

As a co-author of the CS module in TopSpin, I would like to inform you that we have just improved the speed significantly. I believe that the new version (patch?) will soon be released by Bruker.

Kind Regards,
Krzysztof Kazimierczuk

PS. Good post! As many others here :)

Krzysztof Kazimierczuk said...

And one more thing,

David said: "Also, sampling artifacts are certainly possible in NUS, but are associated both with sparsity and total number of samples. Users choosing say 50% reduction by NUS are very unlikely to encounter them. Choosing 512 of 1024 points for example is extremely conservative and modern spectral estimators appear to deconvolve that below the noise. Users should experiment with more sparsity on standards; superb results can be obtained for 25-33% reduction after getting used to setting the parameters."
To be more precise, I would say: "Artifacts (at least in compressed sensing reconstructions) are associated with the absolute number of sampling points, related to the number of peaks (or important spectral points, K). The quality of the reconstruction is only slightly dependent on the size of the full grid (N). There is a famous CS condition for the minimum number of samples, which is on the order of K·log(N/K)."

In other words, sparsity expressed in % can be misleading. An HSQC of a small molecule (low number of peaks) with a broad 13C dimension (full grid of, say, 10,000 points) can easily be reconstructed from 5% of the data (500 points). On the other hand, for a 15N HSQC of a protein, with a full grid of 256 points, it may not be very reasonable to go below 25%.
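The K·log(N/K) estimate above can be put into numbers for these two scenarios (the peak counts K used here are illustrative assumptions):

```python
import math

def min_samples(K, N):
    # order-of-magnitude CS estimate: samples needed ~ K * log(N / K)
    return math.ceil(K * math.log(N / K))

# small-molecule HSQC: few peaks (assume K = 20), very large 13C grid
small_molecule = min_samples(20, 10000)   # ~125 of 10,000, i.e. ~1%
# protein 15N HSQC: many peaks (assume K = 100), small grid
protein = min_samples(100, 256)           # ~95 of 256, i.e. ~37%
print(small_molecule, protein)
```

The percentage alone says nothing: the small-molecule case needs only about 1% of the grid, while the protein case needs well over a quarter of it.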



Will said...

Our NMR facility utilizes NUS extensively for automated 2D acquisition.

Processing of 2D and 3D (and higher-D) data can be very slow if the data aren't stored locally on the computer you are processing on. TopSpin reads/writes sections of a file at a time so as not to overload the RAM (especially important for massive spectra).

NMR Spectra of mixtures may suffer more using NUS, but for traditional single compound analysis, it's great.

Josh Nick said...

Non-uniform sampling reduces the experiment time and is therefore a widely used sampling method for large data sets.

Krzysztof Kazimierczuk said...

For those interested in comparing the effectiveness of various NUS processing programs, we prepared a script that runs them on an artificially undersampled full dataset and compares the result with FT of the full data.

Here it is:


Anonymous said...

I started doing NUS recently and sometimes randomly search the internet for useful discussions. I agree with what Krzysztof Kazimierczuk pointed out. The sampling percentage alone is meaningless. It is the actual number of FIDs recorded vs. the number of signals in the data that matters most. 50% 2D NUS could fail miserably if one sets TD to only 128, but 10% may be plenty if TD = 2k.

I sort of appreciate what Anonymous above has described about the S/N. I have found that many in the area of NUS are enthusiastic about enhancing S/N by doing NUS, but in practice I don't find it rewarding, and it often comes with a big price: poorer spectral resolution and more coherent NUS artifacts that are harder to remove. These are just my observations from everyday practice; I don't know the underlying theory behind this. The NUS literature is very confusing in this area, and I disagree strongly with a lot of its statements and conclusions.

I saw some users report running 25% NUS just so they could use 4 times more scans (NS), even though the number of phase-cycle steps is low. This is absolutely wrong. The peak intensity in any NMR spectrum is the integration of all the time-domain data. Running each FID 4 times more or collecting 4 times more FIDs should yield the same signal intensity.