Hi Genevieve,
I invite you to look closely at their "methodology" and how the measurements were made. An XRF instrument detects only individual elements, not the compounds or molecules they form. That major limitation means they can only infer that such elements are present in dangerous forms, much as chlorine can be present in a completely safe molecule like table salt, which I mentioned previously. The XRF is very limited in this regard; a sound testing program would include multiple types of instruments for exactly that reason. Also, some important toxic chemicals cannot be detected at all with their device, as they very clearly admit. Not considering chlorinated flame retardants is a major bias! You could be buying a seat with a top rating that actually contains a chlorinated flame retardant just as hazardous and common as the substances they did try to test for. I would not feel safe with their results knowing this alone!
Simply stating what instrument you used and how you used it is not a methodology. In fact, they even state in your link that they followed procedures contrary to the industry standard for using the instrument, such as in how long they subjected a sample to measurement! At minimum, a respectable study would have had another laboratory verify its results independently. A complete methodology would include important statistical information and a full disclosure of potential flaws and biases, which they do not discuss. For example, they simply do not address things required in a peer-reviewed scientific paper, such as margins of error or statistical significance. That means they don't report at all on the variability of their results. Why?
Most likely because they readily admit that in most tests they did not even bother to take multiple or repeat samples. That in turn means those results have no statistical significance at all! None! When they did take repeat samples in an area, they did not publish the results. That is most likely why some nearly identical products show so much variability. If you have looked at the individual results, I don't think you would trust that top-rated products with "zero" levels would keep that top result if multiple units of the same item were purchased from different batches and re-tested in a manner consistent with the industry standard for using an XRF scope that is properly calibrated, with a second instrument for verification. Heck, the products they did test that are essentially identical varied considerably, so much so that their results seem meaningless. From their results, how do you know the factory in China didn't vary its composition from one month to the next, or whether the supplier of the chest clip changed over time, or any of the many other normal variations in manufacturing occurred? Yes, it is very expensive to do such comprehensive sampling, but that is how quality studies are done.
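To make the repeat-sampling point concrete, here is a small sketch in Python using entirely made-up readings (these numbers are hypothetical, not from the study). With several samples you can report a mean, a standard deviation, and a rough margin of error; with a single sample, variability cannot even be estimated:

```python
import statistics

# Hypothetical XRF readings (ppm of one element) from five units of the
# same product, drawn from different batches. Illustrative numbers only.
readings = [120.0, 95.0, 140.0, 88.0, 132.0]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)  # sample standard deviation

# A rough 95% margin of error for the mean (normal approximation):
margin = 1.96 * stdev / len(readings) ** 0.5
print(f"mean = {mean:.1f} ppm, std dev = {stdev:.1f} ppm")
print(f"approx. 95% margin of error: +/- {margin:.1f} ppm")

# With only one reading, a standard deviation cannot be computed at all;
# statistics.stdev() requires at least two data points.
try:
    statistics.stdev([120.0])
except statistics.StatisticsError:
    print("single sample: variability cannot be estimated")
```

That is the whole objection in miniature: a single measurement tells you nothing about how much the next measurement of an "identical" product might differ.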
The problem with bad science is exactly what you describe: it is likely to sway the reader if it is the only data available, even if it is completely useless for the intended purpose. No, I am not saying it is acceptable for household items to contain hazardous materials. I am saying the risk is minimized if the materials have no way to be absorbed by the body. Do you have fine crystal in your house? It likely has a significant amount of lead in it, but if your child doesn't drink something stored in the crystal for a long period (like alcohol kept continually in a decanter, which can leach lead out of the crystal over time), then it really is not a hazard to them! (The amount of lead leached out over a couple of hours in a glass is typically much less than what you get in a normal diet.)
So, yes, I am throwing out their results, because they didn't bother to follow the recommended procedures and didn't include sufficient statistical information that would be acceptable to any respectable, peer-reviewed journal where quality studies are published. It's not unlike the "data" you find in an infomercial or a "white paper" or other compelling advertisement. It might be correct. It probably isn't. There is just no way to know.
Yes, these are things that aren't necessarily obvious to a casual reader. I admit that, as a test engineer, I like to look a little further into reports like this, especially when they have not appeared in a respected medical or scientific journal with a reputation at stake. We at CarseatBlog are very concerned about toxic hazards to children, but we simply can't suggest that this study is an acceptable way to determine whether one carseat is less toxic than another. The study is certainly a wake-up call, because it does seem to indicate that potential hazards may exist in carseats. It does not, however, give any information that is useful for comparing one carseat to another.
Apologies for the long-winded response, but I feel your comments deserved a good explanation of our stance on this topic.
-Written by CPSDarren