In reports of levels of radioactive substances in air, water, food, or sludge, you will sometimes find the letters “ND”. This means “not detected”, and it is used when the measured level is less than a certain specified value. This value and its use are either (in the best case) misleading, based on lab studies which have nothing to do with the risk to human health… or (worse) flat-out wrong, based on a mathematical error made in 1947, perpetuated for decades, and still in use today in some places.
This table contains the latest measurements of I-131, Cs-134, and Cs-137 in Tokyo sludge. As you can see, under the column “Radioactive iodine 131” and the row “Hachioji” there is a value of 18. This means that I-131 was detected in the sludge at Hachioji, at a level of 18 becquerels per kilogram. Next to “Eastern sludge plant”, however, it says “Not detected (<29)”. That means the measurement of I-131 was below the value of 29, the cutoff for detection of radioactive iodine. But it does NOT mean that there was no I-131 in the sludge at the Eastern plant.

Lloyd A. Currie, of the National Institute of Standards and Technology, is a leading expert on the measurement of chemicals and radionuclides. His work on measurement was adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1995. A description of his work on detection and quantification limits can be found here, and another version here.
THE ZERO MYTH
In many cases the lay public believes, given sufficient effort or funding, that a concentration of zero may be detected and/or achieved. Not unlike the third law of thermodynamics, however, neither is possible, even in concept. A policy of reporting “zero” when L < LC, yielding the decision “not detected”, compounds this lack of understanding. These are issues of major national importance, especially in the context of legislation and regulation, where necessarily, and appropriately, many of the policy makers have critical sociopolitical expertise, but not necessarily scientific or technical expertise. The solution to this sociotechnical dilemma is, once again, very careful communication; and, beyond that, mutual understanding and education among the complementary disciplines.
Stating that a radionuclide is “not detected” does not mean that the isotope is absent from the sample. Zero concentration is both a practical and a conceptual impossibility: there is no such thing as “zero plutonium” in a sample. The concept of “not detected” is appropriate for lab work. It is not appropriate when human health is involved, except as a convenience.
1. There is no safe dose of radiation.
2. It is impossible to say there is no radiation in a sample of air, water, or food.
Point #2 is as important as point #1.
THE KAISER METHOD
Kaiser developed a method for determining these detection thresholds in 1947. It was in use for chemicals and radionuclides until 1995, when it was superseded by the Currie method. But the Kaiser method is still in use today in many areas. One slight problem: Kaiser ignored the problem of false negatives and based the entire method on false positives. In other words, he was totally dedicated to getting rid of results that showed there was radiation when there really wasn’t any. He completely ignored the situation where results showed there was no radiation when it was really there. This means that the probability of missing, say, plutonium that is actually present at the threshold level is (a de facto value of) 50%, even when it is “not detected”!
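That 50% figure is not rhetoric; it falls straight out of the math. Here is a minimal sketch in Python (standard library only). The background standard deviation of 10 and the α of 0.05 are illustrative assumptions, not values from any particular lab: it models a Kaiser-style threshold set purely to control false positives, then asks how often a sample truly sitting at that threshold is missed.

```python
from statistics import NormalDist

sigma0 = 10.0   # assumed standard deviation of blank (background) measurements
alpha = 0.05    # the only error rate Kaiser controls: false positives

z = NormalDist().inv_cdf(1 - alpha)   # one-sided critical z-value (~1.645)
L_C = z * sigma0                      # Kaiser-style "detection" threshold

# A sample whose TRUE level is exactly L_C yields measurements scattered
# around L_C; exactly half of them fall below the threshold and are
# reported "not detected":
false_negative_rate = NormalDist(mu=L_C, sigma=sigma0).cdf(L_C)
print(f"threshold L_C = {L_C:.1f}, false-negative rate at L_C = {false_negative_rate:.0%}")
```

However you choose σ or α, the false-negative rate at the threshold itself is always 50%, because the measurement distribution is centered on the threshold.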
I was gobsmacked when I read this. It is baffling that this could even be derived… I thought you would end up dividing by zero. It is an egregious error, one that a student in Statistics 101 would never make. And it is still being used today in many places.
This means that any study of samples of radionuclides or chemicals that depends on a “detected” or “not detected” decision made with the Kaiser method is WRONG. This error means that these detection thresholds are too high. All studies from 1947 to 1995 use this method. Environmental laws have been passed, and policy decisions have been made, all based on invalid data due to a mathematical error.
It’s not just radiation. Whether dioxin is in cornflakes, whether arsenic is in chicken, everything like this has been affected.
What a coincidence that this error has served the interests of nuclear and chemical polluters for all those years.
I have seen output from reputable radiation labs that use the Currie method. Whether Tepco uses it, I doubt: they admitted to publishing false radiation data from Fukushima for two years.
Perhaps the most serious terminological trap has been the use of the expression “detection limit” (a) by some to indicate the critical value (LC) of the estimated amount or concentration above which the decision “detected” is made; but (b) by others, to indicate the inherent “true” detection capability (LD) of the measurement process in question. The first, “signal/noise” school, explicitly recognizes only the false positive (α, Type-1 error), which in effect makes the probability of the false negative (β, Type-2 error) equal to 50%. The second, “hypothesis testing” school employs independent values for α and β, commonly each equal to 0.05 or perhaps 0.01.
The “signal/noise” school is the Kaiser method. The “hypothesis testing” school actually considers whether radiation is really there when a “not detected” decision is made.
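For the simple constant-σ, normal-distribution case, the hypothesis-testing approach yields both limits Currie distinguishes: the critical value LC (where the decision flips to “detected”) and the true detection capability LD (the smallest level reliably detected). A sketch, again with an illustrative blank standard deviation of 10 and the IUPAC default α = β = 0.05:

```python
from statistics import NormalDist

sigma0 = 10.0          # assumed blank standard deviation (illustrative)
alpha = beta = 0.05    # IUPAC default error rates

z_alpha = NormalDist().inv_cdf(1 - alpha)
z_beta = NormalDist().inv_cdf(1 - beta)

L_C = z_alpha * sigma0        # critical value: decide "detected" above this
L_D = L_C + z_beta * sigma0   # detection limit: true detection capability

# A sample truly at L_D is missed (false negative) only beta of the time:
miss_rate = NormalDist(mu=L_D, sigma=sigma0).cdf(L_C)
print(f"L_C = {L_C:.1f}, L_D = {L_D:.1f}, miss rate at L_D = {miss_rate:.1%}")
```

Note that LD comes out roughly twice LC here. The “signal/noise” school's “detection limit” is really only LC, the decision threshold; quoting it as the detection capability is exactly the terminological trap Currie describes.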
ADJUSTING THE PARAMETERS OF THE CURRIE METHOD TO REFLECT ETHICAL CONCERNS
Note that the values of α, β, and σQ given above are IUPAC-recommended default values, which serve as a common basis for measurement process assessment. They may be adjusted appropriately in particular applications where detection or quantification needs are more or less stringent.
While the Currie method is correct, the default parameters are appropriate only for lab work. In the context of risk to human health, the values of α and β need to be adjusted. False positives are much less to be feared than false negatives. Consider a glass of milk which is to be given to your child. Which is worse: being told that there is plutonium in the milk when there isn’t (a false positive)… or being told there is no plutonium in the milk when there really is (a false negative)?
So α and β should not be equal. The precautionary principle states that the burden of proof is on polluters when environmental contamination is plausible (not necessarily proved). We are not looking for proof that plutonium is in the milk; we are looking for proof that plutonium ISN’T in the milk, during a radiation catastrophe like Fukushima (and WIPP).
A more appropriate value for β would be .001, meaning only a one-tenth of one percent probability of declaring “not detected” when plutonium is really present at the detection limit. A value of maybe .25 is appropriate for α, a 25% chance of declaring “detected” when plutonium is not really there. An even higher value might be better, but at 50% it would become the Kaiser method’s error in the other direction.
These values can then be used to derive more appropriate minimum detection limits, by straightforward mathematical analysis, with the assumption of a normal distribution. Currie states that this is a special case. A distribution-free method like ODA would be even better.
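Under the normal-distribution assumption, plugging these precautionary values into the same formulas is straightforward. A sketch (the blank standard deviation of 10 and both parameter sets are illustrative, not prescriptions for any specific measurement process):

```python
from statistics import NormalDist

def currie_limits(alpha, beta, sigma0):
    """Critical value L_C and detection limit L_D for normally
    distributed measurements with constant standard deviation sigma0."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    z_beta = NormalDist().inv_cdf(1 - beta)
    L_C = z_alpha * sigma0
    return L_C, L_C + z_beta * sigma0

sigma0 = 10.0  # assumed blank standard deviation (illustrative)

L_C_def, L_D_def = currie_limits(0.05, 0.05, sigma0)   # IUPAC defaults
L_C_pre, L_D_pre = currie_limits(0.25, 0.001, sigma0)  # precautionary values

print(f"defaults:      L_C = {L_C_def:.1f}, L_D = {L_D_def:.1f}")
print(f"precautionary: L_C = {L_C_pre:.1f}, L_D = {L_D_pre:.1f}")
```

The precautionary settings lower LC (so “detected” fires more readily, accepting more false positives) and raise LD (so a “not detected” carries a far stronger guarantee that nothing is really there), which is exactly the trade-off argued for above.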
Currie and IUPAC recommend that all measured radiation values and their uncertainties be published. This means that the table above, with the Tokyo sludge measurements, does not conform to current scientific protocols, since it omits the I-131 values below the “detection limit” and does not publish the uncertainties.