Environmental concerns continue to exert downward pressure on the detection limits of monitoring systems. Benzene, along with other aromatics, is coming under increasing scrutiny in the European Union, and the United States is also re-examining current permissible levels.
To meet these lower thresholds, many analytical techniques now preconcentrate the sample, typically for 15 minutes, to achieve the required lower limits of quantification.
Benzene (CAS: 71-43-2), a known human carcinogen, is a highly flammable, colourless, and highly volatile liquid that quickly evaporates into the air. It can be smelled and tasted at levels below 5 ppm. Benzene is commonly found in indoor and outdoor air, arising from industrial processes, vehicular emissions, cigarette smoke, and even natural sources.
Obviously, living in close proximity to petrochemical plants, brownfield sites, or petrol stations increases the chance of exposure to aromatic hydrocarbons.
There are many ways to monitor benzene, including sampling for laboratory analysis via canisters, bags, or badges; hand-held monitors; and on-site analysers.
Use of a photoionisation detector in both hand-held monitors and on-site analysers can provide some of the lowest detection limits, while also eliminating the labour-intensive sample collection step.
The World Health Organization (WHO), the United States Environmental Protection Agency (US EPA), the United Kingdom’s Department for Environment,
Food and Rural Affairs (DEFRA), and the European Commission for the Environment all understand the health hazards posed by continued exposure to benzene and agree it needs to be monitored, but there is no defined ‘safe level’ of benzene in air for humans, so standards differ by region.
The National Air Quality Reference Laboratories and the European Network (AQUILA) have set requirements for measuring benzene in the European Union as described in EN 14662 ‘Ambient Air Quality – Standard Method for Measurement of Benzene Concentrations’.
This article is in relation to Part 3 of that method, ‘Automated Pump Sampling with in situ Gas Chromatography’.
Included in this requirement is a list of operational specifications for the analyser:
• Must be linear to within 5% of the line of best fit over the range
• Short-term drift of the reading must be less than 5% over a period of 24 hours
• Must show repeatability of the reading at both the zero and limit points
• Readings must not be temperature dependent
• Readings must not be voltage dependent within the instrument’s specified range
• Interference from ozone must be less than 5%
• Interference from relative humidity on the span reading must be less than 4%
• Interference from organic compounds must be less than 5%
• Readings must not carry over more than 10% of their value into the following run
• Long-term drift must be less than 10% over a 14-day period
• Must be capable of operating without maintenance for intervals of at least two weeks
• Must not spend more than 10% of total operational time on calibration or maintenance
• Parallel readings between multiple instruments must be within 80 ppb (benzene)
• Expanded uncertainty must not be more than 25%
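The drift limits in this list reduce to simple percentage calculations. As an illustration only (a Python sketch with hypothetical readings, not part of the standard's conformity procedure), the checks might look like:

```python
def pct_drift(first: float, last: float) -> float:
    """Percentage change of a span reading over a test period."""
    return abs(last - first) / first * 100.0

# Hypothetical span readings (ppb) at the start and end of each test period
short_term = pct_drift(5.00, 4.82)   # over 24 hours
long_term = pct_drift(5.00, 4.40)    # over 14 days

print(f"short-term drift: {short_term:.1f}% (limit 5%)")   # 3.6% - passes
print(f"long-term drift:  {long_term:.1f}% (limit 10%)")   # 12.0% - fails
```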
Once an analyser meets the above specifications there is then a choice between an analyser that provides either direct measurement or preconcentration to see the required low levels of benzene.
Unfortunately, preconcentration often obscures the details of a high concentration transient by providing only a fixed time average of the concentration over the length of time required for the preconcentration step.
A more pragmatic solution is to perform numerous quick-interval analyses to map the true picture of the transient. Such a solution can provide a map of the transient while maintaining the same time average as preconcentration.
But how ‘quick’ do these intervals need to be? Can a series of discrete measurements generate the same average as a preconcentrated sample?
Short interval transients
We decided to examine two types of short interval transients with equivalent areas for comparison with preconcentration analytical techniques.
The first we designated as a ‘spike’ transient, representing a sudden, quick release of the monitored compound (for our study, we used benzene). The second type of transient we designated as a ‘decay’ transient, or a transient that rises quickly and then tails off (Figure 1).
Each of these transients was designed to yield a 900-second average as indicated by the ‘preconcentration analysis’. This preconcentration analysis value is the value that would be reported by an analytical technique that accumulated the analyte for 900 seconds (i.e. a 15-minute average concentration).
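To make the equal-area construction concrete, the two transient types can be sketched numerically. The Python fragment below is our illustration only: the shapes, timings, and 0.10 ppb target average are hypothetical choices, not the actual profiles of Figure 1.

```python
import math

T = 900              # averaging window in seconds (15 minutes)
TARGET_AVG = 0.10    # ppb; hypothetical 15-minute average for both transients

# 'Spike': a short triangular release, 120 s wide, centred at t = 450 s
spike = [max(0.0, 1 - abs(t - 450) / 60) for t in range(T)]
# 'Decay': a sharp rise at t = 400 s followed by an exponential tail
decay = [math.exp(-(t - 400) / 120.0) if t >= 400 else 0.0 for t in range(T)]

# Scale each profile so its area over the window gives the same average,
# i.e. what a 900 s preconcentration analysis would report
for profile in (spike, decay):
    scale = TARGET_AVG * T / sum(profile)
    profile[:] = [c * scale for c in profile]

print(sum(spike) / T, sum(decay) / T)   # both ~0.10 ppb
```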
In order to map these short interval transients properly, the sampling interval of a non-preconcentration analysis must be short enough to see the excursion at several points along its duration. If the sampling interval is too long, the analysis becomes a ‘spot check’ that may offer less data than the preconcentration averaging, or may miss the transient altogether, reporting a 900-second data average well below the actual concentration of the transient.
For the scenarios presented, the transients can be visualised as roughly parabolic, thus, at least three data points are required to effectively map the transient. For the transients we considered, a data interval of no greater than 120 seconds is required for an accurate profile. Further, a 120-second interval will also capture the transient regardless of the time offset (where the transient occurs in the course of the 900-second average).
Of course, there is no certainty that these transients will occur nicely centered in the 15 minute analytical window as shown. What if the transient occurs later in the preconcentration delay, or earlier? What sampling interval would catch the transient should it shift?
Figure 2 shows the collection points for the analyses of the ‘spike’ and ‘decay’ transients, illustrating where these collection points would fall should the occurrence of the transient shift within the 15 minute average window. Though these figures only illustrate a handful of time shifts for clarity’s sake, we examined shifts in five-second increments throughout the duration of the analytical window and chose examples with the highest chance of error when compared to the actual concentration. Again, the 3-point parabolic approximation holds: sampling intervals of 120 seconds or less still adequately capture the transient regardless of how it is shifted.
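This shift analysis is straightforward to reproduce numerically. The sketch below, again in Python and using a hypothetical decay profile rather than our measured data, cyclically shifts the transient in five-second increments and reports the worst-case error of the discretely sampled average for two sampling intervals:

```python
import math

T = 900
# Hypothetical 'decay' transient: sharp rise at t = 400 s, exponential tail
base = [math.exp(-(t - 400) / 120.0) if t >= 400 else 0.0 for t in range(T)]
true_avg = sum(base) / T

def window_error(interval, offset):
    """Percent error of the sampled average when the transient is shifted."""
    shifted = base[offset:] + base[:offset]
    samples = shifted[::interval]
    return abs(sum(samples) / len(samples) - true_avg) / true_avg * 100.0

# Scan offsets every 5 s, as in the study, and keep the worst case
for interval in (30, 180):
    worst = max(window_error(interval, off) for off in range(0, T, 5))
    print(f"{interval:>3d} s interval: worst-case error {worst:.1f}%")
```

For this synthetic profile the worst-case error shrinks sharply as the interval is reduced, mirroring the behaviour seen in the measured data.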
If the sampling interval is small enough, the transient will still be observed even if the offset occurs mid-transient, as the analysis wraps around from one 15 minute interval to the next. This is in stark contrast to the preconcentration analysis, as splitting the transient may well result in two intervals reported below the detectable limits, where a positive result would be reported if the entire transient had occurred during a single preconcentration cycle. Analysis intervals of <120 seconds will, of course, improve the resolution of the analysis, and would be required for shorter transient durations. Optimally, the analysis should be as short as possible to maximise the number of points assigned to the transient.
Using a gas chromatograph equipped with a high sensitivity photo-ionisation detector we were able to obtain a sampling interval of approximately 30 seconds (Figure 3). Using this interval results in a well defined transient with at least 10 data points during the duration of the transient (Figure 4).
Detection limits and 15 minute averaging
In order to obtain a lower detection limit of < 0.050 ppb for benzene, most analytical techniques require a preconcentration step that accumulates sample for around 15 minutes (900 seconds). Without this preconcentration, a single point analysis typically has a lower detection limit of 2–5 ppb for a photo-ionisation detector (PID), a common detector for monitoring aromatics. As such, a typical single point analysis would not even see the maximum excursion of either of the presented transients, merely reporting these levels as a ‘zero’ concentration, or ‘below detectable levels’.
The lower detection limit requirement is the crux of the problem. The ‘15 minute average’ analysis was required because that was the only way to obtain the very low detection limits sought by the newer or proposed environmental standards for benzene. As such, many standards are generally regarded as including a 15 minute (or other duration) average as part of the definition. In other words, the data reported is based upon discrete 15 minute intervals, so a series of higher resolution measurements would not be accepted unless it can be demonstrated that these individual measurements can produce the same 15 minute average as the preconcentration technique.
This would not be possible unless the individual measurements have the same limits of detection as the accumulated 15 minute preconcentration, as discussed previously.
Examining the transients presented here, for either the 15 minute average of the combined single point analyses or the preconcentration analysis to accurately reflect the actual total amount of benzene emitted, the total area under the transient curve should equal the area of the measurements as a function of time. In the case of the preconcentration analysis, this is the area of the rectangle represented by the concentration versus the preconcentration time.
If we consider the single point analyses as taking a single measurement that persists until the next measurement is taken, then the single point analyses can be represented by a collection of rectangles that have their upper left corners at the measurement value and extend for the length of time between measurements (Figure 5).
The closer in time that these single measurements occur, the narrower the rectangles, and thus the better the measurements represent the actual transient. In other words, the correlation between the measured and actual values worsens as the sampling interval is increased, and improves as the interval is reduced.
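This rectangle model is what signal-processing texts call a zero-order hold, and when the sampling interval divides the 900 s window evenly, the reconstructed 15 minute average is simply the mean of the discrete readings. A minimal Python sketch, with a hypothetical decay profile standing in for the measured transient:

```python
import math

T = 900
# Hypothetical 'decay' transient (1 s resolution)
profile = [math.exp(-(t - 400) / 120.0) if t >= 400 else 0.0 for t in range(T)]
true_avg = sum(profile) / T

def held_average(profile, interval):
    """Mean when each reading is held until the next (rectangle model).

    Assumes `interval` divides the window length evenly, so the mean of
    the samples equals the area of the rectangles divided by the window.
    """
    samples = profile[::interval]
    return sum(samples) / len(samples)

for interval in (30, 60, 180, 300):
    err = abs(held_average(profile, interval) - true_avg) / true_avg * 100.0
    print(f"{interval:>3d} s rectangles: {err:.1f}% error vs true average")
```

Narrower rectangles track the curve more closely, so for this profile the reported error shrinks steadily as the interval is reduced, matching the behaviour described above.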
The accuracy of the correlation between the actual and measured values of a series of single point analyses is primarily influenced by two factors: where in the analysis window the transient occurs (the transient offset), and the lower detection limit of the analysis. Since we are replicating the transient using a series of rectangles, just where those rectangles ‘fit’ into the graph will influence the accuracy of the 15 minute average, and this ‘fit’ will change as the transient is shifted within the measurement window. However, as the width of these rectangles is narrowed, the transient can shift freely within the window and still be accurately mapped, as it is in Figure 4, with 30 second wide measurements.
No data can be used to generate a measurement rectangle, however, if the measurement is below the lower detection limit of the instrument. This has the effect of truncating the transient curves at a line drawn at the lower detection limit; all data below that line is lost. For instance, if the lower detection limit is, say, 0.5 ppb, then every rectangle with an initial point of less than 0.5 ppb would be lost, resulting in a substantial inaccuracy (this hypothetical limit is shown in Figure 5). Indeed, as mentioned previously, neither of the depicted transients would even be detected with a normal photo-ionisation detector, which typically has a lower detection limit above 2 ppb.
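The effect of this truncation can be demonstrated with a short calculation. In the Python sketch below (hypothetical sub-ppb transient; the 0.05 ppb and 2 ppb detection limits are illustrative), the limit of a standard PID loses the transient entirely, while a high sensitivity limit preserves most of the average:

```python
import math

T = 900
# Hypothetical sub-ppb 'decay' transient (peak 1.0 ppb at t = 400 s)
profile = [math.exp(-(t - 400) / 120.0) if t >= 400 else 0.0 for t in range(T)]
true_avg = sum(profile) / T

def censored_average(profile, interval, lod):
    """Held-reading average when values below the detection limit are lost."""
    samples = [c if c >= lod else 0.0 for c in profile[::interval]]
    return sum(samples) / len(samples)

print(f"true 15-minute average: {true_avg:.4f} ppb")
print(f"30 s, LOD 0.05 ppb:     {censored_average(profile, 30, 0.05):.4f} ppb")
print(f"30 s, LOD 2.0 ppb:      {censored_average(profile, 30, 2.0):.4f} ppb")
```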
Figure 6 shows a summary of these errors over several example time offsets (as per Figure 2). These particular shifts were chosen as illustrative of the maximum and minimum errors encountered in the shift data. Additionally, the bottom half of this chart illustrates the additional error introduced by limiting the analysis to values above the detection limit of the instrument. This lower limit can be as low as 0.05 ppb for some high sensitivity models.
As expected, overall accuracy declines as the sample interval increases; however, for the 30 second analysis interval, errors hold at < 2% for any given shift of the transient in the analysis window. Eliminating the data below the lower detection limit can introduce a substantial amount of additional error, but this error is minimised as the measurement window decreases, adding only about 1% additional error overall when the 30 second analysis is used.
Concerns over the health risks of aromatics in the environment, particularly benzene, have led to increased scrutiny among a number of health and environmental agencies around the world. Unfortunately, these stricter requirements are pushing the limits of the available technologies for economically practical, automated monitoring.
This need has led to the application of different techniques, such as preconcentration, to automate environmental monitoring and avoid the costly, labour-intensive, and ultimately slower ‘sample gathering and lab analysis’ procedures. But by using a preconcentration technique, valuable information regarding concentration spikes or other transients is lost in the ‘smear’ of time-weighted averaging.
A ‘single point’ type analysis can readily replicate the typical 15 minute average preconcentration technique, as well as detect and characterise short term transients, provided the single point analysis has a high enough sensitivity (i.e. a low enough detection limit) and a short enough sampling interval. The demonstrated 30 second analysis is capable of mapping these transients wherever they occur in the 15 minute analysis window, while maintaining the same 15 minute average (< 2% error).
Brian Bischof and Adam Gniewek, Baseline-MOCON, Inc.,
19661 Highway 36, Lyons, Colorado, 80540
Brian Bischof is the Applications Chemist for Baseline-MOCON, a subsidiary
of MOCON, Inc. A graduate of the University of California at Davis, he has worked in analytical chemistry for 30 years, specialising in analytical instrumentation.
Adam Gniewek works in Sales and Marketing for Baseline-MOCON and is a graduate of the University of Northern Colorado with a B.S. in Mathematics.
Published: 10th Jun 2011 in AWE International