The heavy-metals limit test screens for gross contamination in any consumable sample that can be suitably prepared: contaminants that react with the sulfide source form a dark precipitate. The intensity of this precipitate is compared with that of a lead standard to give a general indication of how contaminated the sample may be. The chemistry of this method proved reliable enough to support testing standards used throughout the world for many decades. The procedure succeeded despite two weaknesses: the chemistry cannot identify which metal causes the color change, and toxic levels of many elements escape detection entirely. Because the sulfide test relies on visual comparison, it is at best a semi-quantitative measure of lead and, at worst, blind to potentially toxic levels of some elements.
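The decision logic of such a limit test is simple: the sample passes only if its precipitate is no darker than that of a lead standard prepared at the specification limit. A minimal sketch in Python, with hypothetical function and parameter names and arbitrary darkness units (none of which come from the method itself), illustrates why the outcome is only a semi-quantitative, pass/fail judgment:

```python
# Hypothetical sketch of the sulfide limit test's comparison logic.
# Intensities are arbitrary darkness units assigned by a visual judgment;
# the test cannot say which metal produced the color, only whether the
# sample's precipitate exceeds the lead-standard benchmark.

def sulfide_limit_test(sample_intensity: float, standard_intensity: float) -> str:
    """Return 'pass' if the sample precipitate is no darker than the standard."""
    if sample_intensity <= standard_intensity:
        return "pass"  # no darker than the lead standard
    return "fail"      # darker than the standard: gross contamination suspected

# Example: a sample judged slightly darker than the lead standard fails,
# but the test yields no concentration and no identity for the contaminant.
print(sulfide_limit_test(sample_intensity=1.2, standard_intensity=1.0))  # fail
```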
Despite these inherent weaknesses, the sulfide test went largely unimproved for decades. As evidence mounted that even low levels of some metal contaminants harm health, calls arose for a better understanding of contamination levels. One such call was voiced in 1995 (2), prompting a concerted effort to improve the test. Ten years later, the methods were revised to include standardized "monitor solutions" designed to confirm that the sulfide test was working (i.e., that it was revealing actual metal contamination) (3). Further refinement of the sulfide procedure was then abandoned in favor of newer, more capable technologies.