Common strategies to develop atomic spectrometry analytical procedures are based on obtaining optimum atomic peaks – well shaped, without secondary peaks – after each developmental stage.
Almost perfect specificity at every step of the working procedure (mineralisation and atomisation, or the use of modifiers) is a must because the measurement process concludes in a classical least squares (LS) fit, where the integrated areas (or heights) of a series of atomic peaks are regressed against the concentrations of their calibration solutions. Classical univariate LS requires the analytical signal not to be affected by concomitants or other interfering effects. In other words, the signal must be as specific as possible, allowing at most a constant background.
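To make the classical setting concrete, here is a minimal sketch of such a univariate LS calibration. All numbers are illustrative, not taken from any real dataset:

```python
import numpy as np

# Hypothetical univariate calibration: integrated peak areas measured
# for five calibration solutions of known concentration.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # ug/L (illustrative)
area = np.array([0.01, 0.22, 0.41, 0.63, 0.80])  # integrated absorbance (illustrative)

# Classical least squares fit: area = b0 + b1 * conc
b1, b0 = np.polyfit(conc, area, deg=1)

# Predict a test solution from its peak area. This is only valid if the
# area responds to the analyte alone (at most a constant background,
# which is absorbed into the intercept b0).
test_area = 0.50
pred_conc = (test_area - b0) / b1
print(f"slope={b1:.4f}, intercept={b0:.4f}, predicted={pred_conc:.2f} ug/L")
```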
Meeting this requirement is time consuming, and expensive studies are carried out to preclude or ameliorate the influence of concomitants on the measured signal, commonly using chemical modifiers. Unfortunately, many complex materials still present interferences that cannot be avoided or corrected for. Hence the method of standard additions (SAM) arises as an alternative for handling such problems.
This comes at the cost of much more work (a calibration per sample) and of more time and resources. Disappointingly, SAM may not account for all relevant problems. Furthermore, the apparently trivial use of the calibration function is not free from pitfalls – among them, the extrapolation performed to predict the concentration of the test solution, which considerably increases the associated prediction error.
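As a reminder of why the SAM prediction is an extrapolation, here is a minimal sketch with hypothetical numbers: equal aliquots of the sample are spiked with increasing amounts of analyte, the signal is regressed on the added concentration, and the unknown concentration is read from the x-intercept of the fitted line:

```python
import numpy as np

# Standard additions (hypothetical data): signal vs concentration ADDED
# to equal aliquots of the same sample.
added = np.array([0.0, 2.0, 4.0, 6.0])        # ug/L added (illustrative)
signal = np.array([0.30, 0.48, 0.67, 0.84])   # measured signal (illustrative)

b1, b0 = np.polyfit(added, signal, deg=1)

# The sample concentration corresponds to the negative x-intercept of the
# fitted line (c = b0 / b1), i.e. a point OUTSIDE the measured range --
# which is precisely why the prediction error grows.
c_sample = b0 / b1
print(f"estimated sample concentration: {c_sample:.2f} ug/L")
```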
That atomic spectroscopists currently do an excellent job is beyond question, but new strategies are required to avoid the drawbacks described above and, at the same time, to accelerate method implementation, reduce development and running costs, decrease the bench workload and cut the usage of reagents and consumables. It cannot be claimed that the approach depicted below is the solution to every problem that arises when developing an atomic spectrometry method, but it is one that should be explored in complex situations.
Changing paradigms
The key idea is to change the paradigm from ‘laboratory-achieved specificity’ (required for a unique atomic signal: integrated absorbance, or maximum height) to the ‘mathematically extracted specificity’ that advanced regression methods can obtain using the whole atomic peak. This move has already happened in molecular spectrometry, where similar problems were recognised some time ago and, for example, infrared spectra whose bands overlap heavily or cannot be ascribed to specific compounds are used to predict chemical properties of pharmaceutical and petrochemical products.
To visualise that atomic spectrometry measurements are multivariate in their very nature, refer to Figure 1. First, the ‘atomic peak’ is obviously not a unique value but the series of values that constitute it. Traditional univariate calibrations consider only a tiny piece of that signal and discard a lot of (potentially useful) information. Why should so much information, obtained so painstakingly and at such a high cost, be thrown away?
Secondly, the overall atomic signal is – strictly speaking – a sum of different signals that may arise from the atomisation of volatile components of the sample, molecular fragments, scattered radiation, volatile forms of the analyte, the atomisation of the analyte itself, or the atomisation of refractory species. Neither the maximum of the atomic peak nor even its integrated area can therefore be attributed solely to the analyte.
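A toy simulation (entirely synthetic, with Gaussian shapes standing in for the real contributions) illustrates the point: once the recorded transient is the sum of an analyte peak and overlapping interfering contributions, neither its maximum nor its area isolates the analyte:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)  # atomisation time axis (s), synthetic

def gauss(t, height, centre, width):
    """A Gaussian-shaped transient contribution."""
    return height * np.exp(-((t - centre) / width) ** 2)

analyte = gauss(t, 0.60, 2.5, 0.4)    # the signal we actually want
fragment = gauss(t, 0.15, 2.0, 0.6)   # molecular fragment (interference)
scatter = gauss(t, 0.10, 3.1, 1.0)    # scattered radiation (interference)

observed = analyte + fragment + scatter

# Both classical summaries are contaminated by the concomitants:
print(f"observed max : {observed.max():.3f} vs analyte-only {analyte.max():.3f}")
print(f"observed area: {np.trapz(observed, t):.3f} vs analyte-only {np.trapz(analyte, t):.3f}")
```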
Accordingly, powerful chemometric tools are needed to decipher the information hidden in the atomic band so that only relevant information is used to obtain the calibration model. These tools are generally referred to as ‘multivariate regression methods’ and use the overall information in the atomic band. Hence, no additional experimental work has to be performed; the only task is to optimise how the information is used.
In recent years many atomic spectroscopists dealing with complex measurements recognised that univariate calibration was more of a hindrance than a help, and they therefore applied different types of multivariate calibration. This is the case for ICP (inductively coupled plasma, with either optical or mass spectrometry detection), LIBS (laser-induced breakdown spectrometry) and several techniques for analysing solid samples directly, such as X-ray fluorescence, EPXMA (electron-probe X-ray microanalysis), laser ablation-ICP and SIMS (secondary ion mass spectrometry).
Three recent reviews compiled more than 110 applications combining multivariate regression with these atomic techniques. It is beyond the scope of this general paper to discuss the different regression methods, but it is worth mentioning that they range from standard ones (PCR, principal components regression, and PLS, partial least squares regression) to cutting-edge ones based on, for example, neural networks and support vector machines (SVMs).
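To give a flavour of the standard methods, below is a minimal PCR sketch on synthetic data: the spectra (here, simulated transients) are compressed by principal components analysis and the scores are regressed on concentration. All names and values are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic 'peaks': 30 calibration solutions x 120 time points, where the
# peak height tracks the (hypothetical) analyte concentration plus noise.
y = rng.uniform(0, 15, size=30)                            # ug/L
profile = np.exp(-((np.arange(120) - 60) / 12.0) ** 2)     # common peak shape
X = np.outer(y, profile) + rng.normal(scale=0.05, size=(30, 120))

# PCR = PCA compression followed by ordinary LS on the scores.
pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("R^2 on the calibration set:", pcr.score(X, y))
```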
Surprisingly, despite ETAAS (electrothermal atomic absorption spectrometry) being a ubiquitous workhorse of the atomic spectrometry field, it has seldom been combined with multivariate regression. This is why the comments here focus on ETAAS measurements, although the conclusions are general.
The new approach
Development of an ETAAS procedure starts by setting the temperature programmes to mineralise the aliquot of the test solution and then atomise the analyte. Detailed study of the atomic signal and its relationship with the analyte concentration may require further comprehensive studies to ascertain whether a chemical modifier – in brief, ‘modifier’ – should be used. Additional work will allow selection of a modifier from a suite of candidates.
Some questions arise immediately. How many modifiers should be assayed? How many combinations of modifiers can be tried? What is the overall cost (reagent consumption, staff workload, time) of so many trials? How are conclusions drawn?
The first questions often depend on the resources available to the chemist, whereas the last is settled by visual inspection of a ‘satisfactory’ atomic peak. Figure 2 plots some typical results gathered when measuring Sb (antimony) in soils and sediments6. There, up to seven modifiers were considered, although others might be of interest as well.
Sizeable savings
Remember, this volume of work is forced by the LS regression model, which requires almost perfect signal specificity. The workload can be reduced by accepting a suboptimal signal and then applying multivariate regression to extract the information relevant to establishing a calibration model.
In effect, instead of looking for a perfect atomic peak after every developmental step, it is possible to use a not-so-perfect signal derived from a given temperature programme and – possibly – a ‘universal’ modifier (both drawn from reference books, related published works, or laboratory experience). As long as the equipment is in a state of statistical control (as it should be anyway, whatever the development strategy), the signals will be reproducible and, despite not appearing optimal in classical terms, they do contain the information on the analyte we are interested in. This notion is the key to understanding why an apparently suboptimal atomic signal can be of real use: the information on the analyte is hidden within the atomic band – you only need to retrieve it.
Of course, a major issue arises here – what about the concomitants? After all, modifiers are used to avoid problems. Addressing this issue calls for another paradigm shift: traditional studies of the influence of concomitants on the signal are carried out on a ‘one-at-a-time’ basis. This means that a series of test solutions has to be prepared for each suspect concomitant, varying its concentration while the other concomitants are kept at fixed levels.
Unfortunately, this approach is far from optimal because it does not consider interactions between the effects of several concomitants, which may well occur in real samples. A better way is to deploy a formal experimental design that varies all the concomitants simultaneously across the experiments and then to decide on their effects. This saves a lot of effort – and money – and yields more confidence in the final conclusions.
In addition, the solutions employed in the experimental design to study the concomitants can also be used as calibration solutions, as long as the level of the analyte changes across them. Furthermore, as they include the concomitants (and their eventual interactions), the matrix variability inherent to future samples will already be represented in the calibration model, making traditional SAM unnecessary. A requirement is that the levels of the concomitants in the calibration solutions be selected in advance to represent those expected in future unknown samples, but this is common to any development strategy. Hence, efforts are optimised, costs are reduced and the analytical turnaround is improved.
As an example, Cu (copper) was measured in lubricating oils prepared as emulsions. Four concentration levels were set (0, 5, 10 and 15 µg L⁻¹) and a Plackett-Burman fractional experimental design was deployed at each level of Cu to take into account the possible interferences of seven concomitants. Figure 3 shows that some atomic peaks suffered delays, depletions or enhancements due to the concomitants. Any attempt to apply classical LS regression would be frustrated, mostly because the solutions behaved differently at the different levels of Cu, revealing complex interferences.
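To sketch how such a calibration set can be built, the snippet below hard-codes the standard eight-run Plackett-Burman design for seven two-level factors (the classic cyclic construction) and crosses it with the four Cu levels; the high/low settings of the concomitants are placeholders, not those of the cited study:

```python
import numpy as np

# Standard 8-run Plackett-Burman design for 7 two-level factors, built by
# cyclic shifts of the generating row plus a final all-minus run.
seed = np.array([1, 1, 1, -1, 1, -1, -1])
rows = [np.roll(seed, i) for i in range(7)]
rows.append(-np.ones(7, dtype=int))
design = np.array(rows)               # shape (8, 7), entries +1 (high) / -1 (low)

cu_levels = [0.0, 5.0, 10.0, 15.0]    # ug/L, as in the Cu example

# Crossing the design with the analyte levels gives 4 x 8 = 32 calibration
# solutions, each defined by a Cu level plus the settings of 7 concomitants.
solutions = [(cu, run) for cu in cu_levels for run in design]
print(f"{len(solutions)} calibration solutions")
print("first run, concomitant settings:", solutions[0][1])
```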
Multivariate regression, however, and in particular PLS (partial least squares), can handle this situation satisfactorily. PLS has become a de facto standard that performs well in many circumstances, and it has been shown to cope even with non-linear effects – at the cost of slightly increasing the complexity of the model. In addition, PLS models were found to be quite robust to the slight changes of the atomic peak that commonly affect reproducibility: advances or delays in the atomisation times, enhancements and depletions of the signals, and increased random noise (see Figure 4).
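A minimal PLS sketch with scikit-learn follows, using synthetic transients standing in for whole atomic peaks; in practice the number of latent variables would be chosen by cross-validation rather than fixed as here:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic transients: an analyte peak whose height tracks concentration,
# plus a shifted interfering peak of random size and some noise.
t = np.linspace(0, 5, 120)
y = np.tile([0.0, 5.0, 10.0, 15.0], 8)            # 32 'solutions', ug/L
peak = np.exp(-((t - 2.5) / 0.4) ** 2)
interferent = np.exp(-((t - 1.8) / 0.6) ** 2)
X = (np.outer(y, peak)
     + np.outer(rng.uniform(0, 4, y.size), interferent)
     + rng.normal(scale=0.05, size=(y.size, t.size)))

# PLS extracts the analyte-related variation from the whole peak.
pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=8).ravel()
rmse = np.sqrt(np.mean((y_cv - y) ** 2))
print(f"cross-validated RMSE: {rmse:.2f} ug/L")
```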
For a long time, a relevant weakness when applying multivariate regression was the lack of simple procedures to calculate figures of merit. Although IUPAC (International Union of Pure and Applied Chemistry) presented an approach based on the so-called net analyte signal, it was not broadly employed. This situation changed after ISO (International Organization for Standardization) and the European Union (EU) set new definitions for the limits of detection and quantification – now termed the decision limit and the capability of detection.
This fostered studies that generalised those concepts and presented a simple approach to address this complicated issue in a pragmatic, holistic way. It is based on a traditional regression between the reference values of the calibration solutions and the corresponding values predicted by the model.
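In essence, the model's predictions are regressed on the reference concentrations and univariate formulae are applied to that ‘pseudo-univariate’ line. The sketch below conveys the idea with hypothetical numbers; note that it uses the common simplified 3.3·s/slope shortcut, not the full ISO formulation based on noncentrality parameters:

```python
import numpy as np

# Hypothetical reference concentrations and the model's cross-validated
# predictions for the same calibration solutions (ug/L).
y_ref = np.array([0.0, 0.0, 5.0, 5.0, 10.0, 10.0, 15.0, 15.0])
y_prd = np.array([0.4, -0.2, 4.6, 5.3, 10.4, 9.7, 14.6, 15.3])

# 'Pseudo-univariate' accuracy line: predicted vs reference.
b1, b0 = np.polyfit(y_ref, y_prd, deg=1)
resid = y_prd - (b0 + b1 * y_ref)
s_res = np.sqrt(np.sum(resid**2) / (len(y_ref) - 2))  # residual std. dev.

# Simplified detection estimate, shown only to convey the idea.
x_det = 3.3 * s_res / b1
print(f"slope={b1:.3f}, intercept={b0:.3f}, s_res={s_res:.3f}")
print(f"approximate capability of detection: {x_det:.2f} ug/L")
```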
Holistic approaches are not new, an example being the use of CRMs (certified reference materials) and interlaboratory exercises to simplify the calculation of the uncertainty of an analytical protocol, i.e. taking a top-down approach instead of the bottom-up one, in which each individual uncertainty must be calculated and combined. Besides this, the new approach considers the statistical type I and type II errors.
To conclude, we can answer the question that motivated this paper in the affirmative. The use of multivariate regression precludes the need for lengthy traditional studies when implementing new atomic spectrometry methodologies. It requires less experimental work than traditional strategies, reduces staff dedication and diminishes the consumption of reagents, as well as the waste generated, thus enabling the development of green analytical methods.
There will be a short course at the forthcoming Pittcon Conference and Expo, called ‘Multivariate calibration as an aid to develop atomic spectroscopy methods’. Run by the author on Wednesday, March 15, 2014, it will deal with all of the topics discussed here, and practical case studies will also be presented.
Published: 27th Feb 2014 in AWE International