At the point of analysis, it is essential that a sample is still representative of the whole from which it was taken. This is an obvious statement, but one that is often overlooked. Regardless of how modern the analytical instrumentation, how sophisticated the extraction process, or how experienced the operator, all of this is irrelevant if the concentration of the target analyte has changed since the sample was taken.
Analyte concentrations can change with time. Sample degradation through bacterial activity, volatility, chemical interactions and exposure to air or light can all cause changes to the composition of a sample. Consequently, establishing the length of time that an analyte remains stable, without statistically significant changes in concentration, is extremely important. It is essential to analyse the sample within the stability timeframe, or to highlight any samples that have deviated from this target.
Deviating samples – accreditor’s viewpoint
Following recent guidance provided by the Laboratory Committee of the European co-operation for Accreditation (EA), the United Kingdom Accreditation Service (UKAS) have issued their own guidance on deviating samples. They comment that where samples “may have exceeded their maximum preservation time, are not cooled or have inappropriate headspace”, this “may jeopardise the validity of the reported test result.”
UKAS continue by stating that where laboratories are “not critical about samples they receive” in this regard, this “is not in the interest of the laboratories, their customers, or other end users of the test result(s).”
UKAS go on to state that, “as an accreditation body, it will be strengthening its focus” on this area during assessment visits to laboratories, and in particular it will be reviewing “the inclusion of disclaimers in test reports” where preservation standards have not been met.
UKAS have confirmed, however, that samples analysed outside their stability period are still accredited under ISO 17025, provided any associated disclaimers are included.
Customer responsibility
Stability time is recognised as the maximum time from when the sample is taken to when analysis must start, such that the results obtained remain consistent with the concentrations at the point of sampling. This therefore includes any time that the sample is held by the customer prior to delivery to the laboratory.
Consequently, customers must provide their laboratory provider with details of the sampling date and time in order for stability to be assessed. Under the new UKAS guidance, practices for taking and analysing samples will need to change in order to ensure that analysis is started within the stability timeframe.
The ‘start of analysis’ may simply be acidification, digestion, or solvent extraction, all of which produce a more stable environment for the target analyte.
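With the sampling time supplied by the customer and a documented stability time to hand, the check itself is simple arithmetic. The sketch below is a minimal illustration of how such a check might be automated; the analyte names and stability periods shown are illustrative only, and real values must come from recognised sources or the laboratory's own trials.

```python
from datetime import datetime, timedelta

# Illustrative stability times (maximum period from sampling to the start of analysis).
STABILITY = {
    "zinc": timedelta(days=21),      # e.g. the 21-day figure discussed later in this article
    "ammonia": timedelta(hours=24),  # hypothetical figure, for illustration only
}

def within_stability(analyte: str, sampled_at: datetime, analysis_start: datetime) -> bool:
    """Return True if analysis began within the documented stability period."""
    return analysis_start - sampled_at <= STABILITY[analyte]

# Example: a zinc sample taken by the customer and extracted five days later.
sampled = datetime(2011, 12, 1, 9, 30)
started = datetime(2011, 12, 6, 14, 0)
print(within_stability("zinc", sampled, started))  # True - inside the 21-day window
```

Any sample failing such a check would be flagged as a deviating sample and reported with the appropriate disclaimer.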
It is expected that stability times will vary from laboratory to laboratory, as stability can be affected by parameters such as bottle size, bottle material, headspace, storage conditions and the use of chemical preservatives (where applicable). Some of this will also be reliant on the customer taking the sample.
With so many variables, UKAS have not specified a protocol for establishing analyte stability times.
There are already numerous documents available that provide suggested analyte stability periods. These include recognised sources such as ISO 5667-3:2003; Methods for the Examination of Waters and Associated Materials (the Blue Books); and Standard Methods for the Examination of Water and Wastewater.
Where no documentation can be found, however, or where the documented storage conditions are not consistent with a laboratory’s facilities, then it becomes necessary for the laboratory to carry out their own stability trials. Laboratories may also wish to undertake stability trials in order to demonstrate that their storage conditions offer extended stability beyond the documented timescales. This will consequently provide their customers with a more manageable timeframe in which to deliver the samples to the laboratory, without compromising sample stability.
Stability testing – practical considerations
When designing a stability testing protocol, there are certain practical considerations that must be taken into account in order for the assessment to be reliable. These include:
• Storage conditions
• The test sample matrices
• The testing frequency and overall test period
• Establishing a starting baseline
• Analyte concentration
• Number of replicates within the test
• Interpretation of the data
These points are discussed in more detail below.
Storage conditions
A stability trial is designed to establish the length of time that samples can be held under normal storage conditions, without evidence of significant analyte degradation or change in analyte concentration.
Within this definition, ‘normal storage conditions’ means that the test samples are stored under conditions identical to those of routine samples, including bottle type, sample volume, storage temperature, preservation (where applicable) and headspace, so that, as far as practicable, the test samples experience the same storage conditions as routine samples within the laboratory.
Test matrices
Ideally, the sample matrices tested should be representative of those routinely encountered within the laboratory. This will vary between laboratories, and it may be necessary to test more than one matrix in order to cover the spread of sample types routinely received. If this is the case, UKAS have recommended that a laboratory quotes the minimum storage time established from the testing. Typical matrices might include, for example, potable water, wastewater and groundwater.
Test frequency
There is no defined frequency at which testing should take place for stability assessment, as this will change from analyte to analyte. Generally, it is expected that the sample will be tested daily.
For some analytes, it may be prudent to test more frequently at the start of the trials (e.g. every six or 12 hours where possible), while allowing the time between tests to widen as the test progresses.
It may also prove prudent for testing to begin on a Monday, so that any lack of data due to weekends equates to days 6, 7, 13, 14, etc. It is suggested that a gap no greater than five days be left between test dates.
It should also be noted that each daily test means returning to the sample in the ‘as-received’ state and beginning the analysis from fresh, including all extraction or pre-treatment processes. It is not reanalysis of a previous extract.
Starting baseline
In order to decide if a significant change in analyte concentration has occurred, it is first necessary to establish the concentration of the target analyte at time zero (Day 0 or T=0). This can prove difficult, as there is always some delay between when a sample is taken and when the analysis begins. It would not be appropriate for a T=0 measurement to be made on a sample that is already several days old. Where practicable, the T=0 measurement should be started as soon as possible after the sample is taken.
Sample spiking
In order for any sample degradation to be monitored, the target analyte must be at a reasonable concentration. Meaningful data cannot be gathered if the target analyte is close to the method’s limit of detection. A suggested guideline is that the target analyte concentration should be at least 10% of the method’s calibrated range. Where the sample matrix does not naturally contain this concentration, then spiking may be necessary.
Spiking should be undertaken with chemicals of a suitable purity and appropriate formulation e.g. creatinine rather than ammonium salts for total nitrogen samples analysed in accordance with the Urban Waste Water Directive. When spiking a sample, the first analysis (T=0) is measured from the time the spike is added, as this is the first point at which the target analyte interacts with the sample matrix and degradation potentially begins.
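As a rough sketch of the arithmetic involved in preparing such a spike (all figures below are hypothetical, and the calculation neglects the small dilution caused by the spike itself):

```python
def spike_volume_ml(target_conc, native_conc, sample_volume_ml, stock_conc):
    """
    Volume of spiking stock solution (ml) needed to raise a sample from its
    native analyte concentration to the target concentration.
    All concentrations must be in the same units (e.g. mg/l).
    The spike volume is assumed to be negligible relative to the sample volume.
    """
    if native_conc >= target_conc:
        return 0.0  # sample already contains enough analyte - no spike needed
    return (target_conc - native_conc) * sample_volume_ml / stock_conc

# Hypothetical example: calibrated range 0-50 mg/l, so the target is 5 mg/l (10%).
# The sample contains 1 mg/l natively, the bottle holds 500 ml and the stock is 1000 mg/l.
print(spike_volume_ml(target_conc=5.0, native_conc=1.0,
                      sample_volume_ml=500, stock_conc=1000))  # 2.0 ml of stock
```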
Number of replicates
When deciding if an analyte concentration has changed, it is advisable that a mean concentration is determined through multiple measurements. An average concentration can be established once any outliers within the data set are removed.
As a general rule of thumb, a minimum of six replicates is recommended at each time period. Where possible, 11 or more replicates should be considered, as this provides ten or more degrees of freedom for statistical interpretation of the data.
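To illustrate how the number of replicates feeds into the statistics, the short sketch below (using scipy, with illustrative replicate counts) shows how the two-tailed critical t-value at the 95% confidence level falls as the degrees of freedom increase:

```python
from scipy import stats

# Two-tailed critical t-values at the 95% confidence level for a mean based on
# n replicates, taking degrees of freedom as n - 1.
for n in (6, 11, 21):
    df = n - 1
    t_crit = stats.t.ppf(0.975, df)  # 0.975 = 1 - 0.05/2 for a two-tailed test
    print(f"n = {n:2d}  df = {df:2d}  critical t = {t_crit:.3f}")

# n =  6  df =  5  critical t = 2.571
# n = 11  df = 10  critical t = 2.228
# n = 21  df = 20  critical t = 2.086
```

The 2.228 figure for 11 replicates is the critical value used in the worked example later in this article.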
It should be noted that once the stability of an analyte has been established, this stability time is applicable irrespective of the analytical method used, provided the prevailing storage conditions are identical. For example, if experimentation shows that zinc within a 1 litre glass bottle at 5°C is stable for 21 days, this is true whether the zinc is then analysed by ICP-MS, ICP-OES or AAS. Individual stability assessments against each analytical method are therefore not necessary.
Interpretation of data – two examples
The data that is generated by the protocol needs interpreting in order to identify whether a significant change in analyte concentration has occurred. In its simplest form, this may just be a maximum percentage change in analyte concentration, e.g. the analyte is outside stability once the concentration difference between T=0 and T=X is greater than 5%.
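A minimal sketch of this simple check, assuming a 5% acceptance limit as in the example above (the concentrations shown are illustrative only):

```python
def percent_change(mean_t0: float, mean_tx: float) -> float:
    """Percentage change in mean concentration relative to the Day 0 (T=0) baseline."""
    return abs(mean_tx - mean_t0) / mean_t0 * 100.0

# Illustrative means (mg/l) for Day 0 and Day X, assessed against a 5% limit.
mean_day0, mean_dayx = 10.0, 10.3
change = percent_change(mean_day0, mean_dayx)
print(f"{change:.2f}% change - {'within' if change <= 5.0 else 'outside'} stability")  # 3.00% - within
```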
Other statistical tests exist that allow the comparison of mean concentration values, such as a t-test.
Consider an example with 11 replicates for an analyte at Day 0, then 11 further replicates at Day X and again at Day Y.
A simple assessment is to calculate the percentage change from Day 0. At Day X the change is just 0.79%, which may be considered acceptable.
By Day Y, the change in mean concentration from T=0 is 11.33%, which is now unacceptable. The sample is shown to be stable up to Day X, but there is no data to support stability beyond this.
A more complex examination compares the derived t-value from each data set against the critical t-value for the relevant degrees of freedom. In this example, for Day X the derived t-value of 1.779 is less than the critical value of 2.228, indicating that there is no statistically significant difference between the Day 0 data and the Day X data at the 95% confidence level. At Day Y, where the derived t-value is 18.247, the situation is reversed and a significant change is indicated.
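As a sketch of how such a comparison might be coded (the replicate concentrations below are hypothetical, and the form of test shown, comparing the Day X replicates against the Day 0 mean with n - 1 degrees of freedom, is only one possible choice; a two-sample comparison of the two replicate sets would give different degrees of freedom and a different critical value):

```python
import numpy as np
from scipy import stats

# Hypothetical replicate concentrations (mg/l); a real trial would use measured data.
day0 = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0, 9.9])
dayx = np.array([10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1, 10.0, 10.2, 9.9])

# One-sample t-test: do the Day X replicates differ significantly from the Day 0 mean?
t_stat, p_value = stats.ttest_1samp(dayx, popmean=day0.mean())
t_crit = stats.t.ppf(0.975, df=len(dayx) - 1)  # 2.228 for 11 replicates at 95%, two-tailed

if abs(t_stat) < t_crit:
    print(f"|t| = {abs(t_stat):.3f} < {t_crit:.3f}: no significant change at the 95% level")
else:
    print(f"|t| = {abs(t_stat):.3f} >= {t_crit:.3f}: significant change - outside stability")
```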
It is for each laboratory to decide how they wish to interpret their data; the two approaches above are illustrative only. Different critical t-values will be obtained if different numbers of replicates are used, due to the change in degrees of freedom. Likewise, while some laboratories use critical t-values based on the 95% confidence level, other laboratories may wish to use alternatives such as the 90% or 99% confidence levels.
All of these factors will affect the outcome of the test. Similarly, if a simple maximum percentage change is chosen as the indicator, then the value chosen will affect the outcome of the interpretation. Overall, the laboratory must choose their own ‘critical’ values in order to assess whether a sample has remained within stability or not.
When considering a stability testing protocol, there are a number of factors that must be taken into account. Although it sounds counterintuitive, it might be advisable to start by deciding what criteria you are going to assess your data against. Once this has been established, factors such as the number of replicates, frequency of testing and matrix types may become more obvious, as these are often dictated by the mathematical processing at the end. Regardless of which route an individual laboratory decides to take, any effort to establish stability times is commendable, showing that the laboratory understands and appreciates not just the analytical aspects of their work, but the wider implications of the service they are offering.
Author
Phillip Clark is Principal Scientist for Severn Trent Services, based in Coventry.
Phil is a Chartered Scientist, Chartered Chemist and Fellow of the Royal Society of Chemistry, who has worked for Severn Trent Services since 1991. His experience is mainly within the inorganic field of chemistry, and he represents Severn Trent Services in this area across all of their laboratory sites. Within his field, Phil has published numerous technical articles and papers, has contributed to a series of Industry Standard Blue Book methods and has represented Severn Trent Services on a number of national Standing Committee of Analysts bodies.
Severn Trent Services are a leading provider of analytical services for environmental monitoring. With a network of laboratories and service centres across the UK and Ireland, they are able to provide an extensive range of environmental analysis, sampling and monitoring services to meet customers’ requirements.
Severn Trent Services are experts in their industry, providing laboratory and field analysis for potable water, wastewater, groundwater, legionella, contaminated land, waste management, asbestos and endocrine disrupting chemicals monitoring, among others.
Severn Trent Services is a member of the Severn Trent Group of companies, which is a leading FTSE 100 company. Severn Trent generates revenues of more than £2 billion and employs more than 15,000 people across the UK, US and Europe.
In total, Severn Trent Services have established more than 1,000 individual wastewater stability times for the analysis they routinely undertake. Further guidance, presentations and details of stability times are available on the Severn Trent Services website under the title ‘Guidance on Deviating Samples’.
Log on to: www.stsanalytical.com/guidance-on-deviating-samples
[email protected]
+44 (0)24 7642 1213
Published: 10th Dec 2011 in AWE International