Modernization of the Standards for Elemental Impurities

Recent activity in standards-setting organizations has raised interest in the impact of testing for impurities that may enter the product before it is mined or harvested, or even through the intentional use of some reagents.
Feb 02, 2013
Volume 37, Issue 2

Heavy metals have been a concern to society for more than a century. The term "heavy metals" has been useful for describing contamination from a number of different sources. These impurities may be part of the surrounding environment or may be introduced during the processing and delivery of liquids, food, medicines, or even air. Exposure to heavy metals can cause wide-ranging health effects, and standards should be available to prevent them. Testing of the environment and of consumables has become a routine way to estimate these risks (1).

The magnitude of acceptable risk from impurity exposure has, in general, shrunk over time as the serious impacts of toxic effects are more accurately measured. For example, 100 years ago, the capability of a simple colorimetric (sulfide precipitation) test was considered an adequate way to estimate the risk associated with the most toxic and common sources of heavy metals. In fact, the term "heavy metals" is more closely associated with the name of that test than with the contamination itself. Because the spectrum of contaminants the test can indicate is broader than just the "heavy metals," and broader even than "metals," the term really describes how much color is produced rather than what is causing the positive indication.

The heavy-metals test screens for gross contamination in any consumable sample that can be prepared so that contaminants reacting with the sulfide source form a black precipitate. The intensity of the precipitate is compared with a lead standard to give a general indication of how contaminated the sample may be. The chemistry of this method proved reliable enough to support testing standards used throughout the world for many decades. The procedure was successful in spite of two weaknesses: the chemistry behind these standards cannot identify the metal causing the color change, and toxic levels of many elements can remain undetected. Because it relies on visual comparison, the sulfide method is, at best, a semiquantitative method for lead determination and, at worst, unable to detect potentially toxic levels of some elements.

In spite of these inherent weaknesses, efforts to improve the sulfide test were neglected for many decades. As evidence accumulated of the health effects of low levels of some metal contaminants, calls arose to better characterize the actual level of contamination. One such call was voiced in 1995 (2), leading to a concerted effort to improve the test. Ten years later, the methods were revised to include standardized "monitor solutions" designed to provide assurance that the sulfide test was working (i.e., that the test was revealing actual metal contamination) (3). Further attempts to improve the sulfide procedure were then abandoned in order to focus efforts on newer, more capable technologies.