
My previous post began a conversation about applying the evidentiary rules for admissibility of scientific studies and expert testimony to the emerging studies on the health and environmental effects of nanomaterials, all in the context of the toxic tort litigation that is soon to come.  This post will continue that conversation by looking at the legal rules to determine the reliability and scientific validity of such studies.  In particular, this post will look at the Frye rule and its continuing viability in a significant minority of jurisdictions.

Under the older Frye rule, reliability is determined solely by whether the scientific technique has achieved “general acceptance in the particular field in which it belongs.”  Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).  States that have adopted and continue to apply the Frye test for admissibility of expert evidence have further clarified and refined the rule.  Thus, the Minnesota Supreme Court stated in Goeb v. Tharaldson, 615 N.W.2d 800, 810 (Minn. 2000), that a two-pronged test would apply:  “First, [the] technique must be generally accepted in the relevant scientific community, and second, the particular evidence derived from that test must have a foundation that is scientifically reliable.”  In Goeb, the plaintiffs alleged that their son had suffered permanent injuries from exposure to a pesticide that had been applied in their residence.  The court agreed that the trial court had properly excluded the plaintiffs’ expert scientific evidence of causation because the scientific methodology used was not generally accepted and because the expert’s analysis had no “independent validation.”

The Frye rule has frequently been criticized, however.  Thus, the Alaska Supreme Court (in a case adopting the Daubert rule and the federal evidentiary standard), has criticized Frye as incorrectly favoring the conclusions of scientists over courts in matters of a legal nature, arguing that it “ ‘abdicates’ judicial responsibility for determining admissibility to scientists uneducated in the law.” See State v. Coon, 974 P.2d 386, 392, 394-95 (Alaska 1999).  The Minnesota Supreme Court countered this argument by stating that “the Frye general acceptance standard ensures that the persons most qualified to assess scientific validity of a technique have the determinative voice.”  Goeb, at 813.  In Blackwell v. Wyeth, 971 A.2d 235 (Md. 2009), the Maryland Court of Appeals established a compromise rule.  In Blackwell, the plaintiffs alleged that their child’s autism was caused by thimerosal in childhood vaccines.  The court reaffirmed its adherence to the Frye doctrine, characterizing the doctrine in Maryland as requiring that “[g]enerally accepted methodology . . . must be coupled with generally accepted analysis” by the expert.  This approach thus assures that the trial judge has the final word on acceptance of the evidence.

The debate continues, however, over whether the Frye doctrine relies on excessive deference to the scientific community on matters of a legal nature.  This disagreement is not likely to be resolved soon and is reflected in the split in the states over the adoption of the Daubert rule, which, in contrast, is heavily dependent on judges to evaluate the scientific evidence.

What will happen to nanotechnology studies in a Frye jurisdiction?

The answer may depend on whether the studies are viewed as new and untested because they involve materials at a scale that has generally not been previously studied for health and environmental impacts.  Frye does not favor new technologies.  Frye admissibility is premised on a technology having developed a track record long enough to achieve general acceptance in the particular scientific community.

On the other hand, an argument could be made that such studies are simply versions of well-established and generally accepted scientific studies, whether of an epidemiological nature (statistical studies of human populations) or a toxicological nature (such as studies on mice conducted in a laboratory).  It is worth noting, too, that studies of human populations generally take much longer to develop, and nanomaterials measurable in consumer products and the environment are a relatively new occurrence in the scheme of things.  Thus, the studies on nanomaterials now emerging are laboratory experiments.  See, for example, the studies summarized in Powell & Kanarek, Nanomaterial Health Effects – Part 1:  Background and Current Knowledge, 105 Wisc. Med. J. 16 (2006).

In the next post, I will examine the Daubert reliability standard.

Many of my posts have talked about the need for studies on the health, safety, and environmental effects of nanomaterials.  But it has been a long time since I raised the question of what these studies may mean for toxic tort litigation.  As in any litigation, the evidence, including scientific studies and the experts who interpret them, must be admissible under the relevant rules of evidence.  In the United States, there are two basic approaches to the admissibility of expert evidence in the courts – (1) the federal courts’ approach, which is governed by the Federal Rules of Evidence and a trio of cases beginning with Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), and (2) the approach known as the Frye test.  Regardless of the approach used by the particular court considering the evidence, early studies that may demonstrate health or environmental risks associated with nanomaterials will have an uphill battle for admissibility in the courts.

Over the next month, I intend to discuss some of these issues in a series of posts.  This post will consider the first question:  What is it about this evidence that will be so difficult for the courts?

To begin with, let’s briefly look at the rules for admissibility of the evidence in court:

 1.  Frye Test:  This test derives from the case of Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), a criminal case that involved a scientific lie-detection technique that was a sort of precursor to the modern lie-detector tests.  The court there said that “while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”  Thus, courts view as admissible under this test only expert evidence derived from scientific studies or techniques in general use, and usually long-standing use, in the particular field, and which most experts in the field recognize as being reliable.

 2.  Daubert Test:  The Daubert case itself was a toxic tort, a prescription drug product liability action, so the Supreme Court had before it on the record scientific studies that resemble the kinds of studies of exposure-and-outcome that might be produced for nanomaterials.  The Supreme Court held that the test for admissibility of expert evidence under the Federal Rules is broader than the Frye test and requires that the proponent of the evidence demonstrate that it be reliable – i.e. that it be scientifically valid – and that it be relevant to a particular issue in the case, not that it merely be suggestive of health problems.  The Court emphasized that the trial judge is the “gatekeeper” who must make a determination at an early time in the litigation as to whether the expert evidence is admissible.  If it is not admissible, often plaintiffs’ cases are dismissed prior to trial.

 What evidentiary challenges will nanomaterial studies present?

 ●  The studies will provide only probabilistic evidence.  This means that the studies will only show statistical associations (probabilities) between exposure to a particular nanosubstance and a particular outcome (e.g. illness).  While the extent to which probabilistic evidence differs from traditional forms of proof in tort cases (such as motor vehicle accidents) is a matter of degree, the inability of the studies to confirm the causal relationship between exposure to a substance and the illness the plaintiff suffers will be problematic for plaintiffs’ cases.

 ●  The illnesses are likely to be “generic.”  Some substances previously studied are linked to “signature diseases,” which occur very rarely in the general population, but with greater frequency among people exposed to the substance.  Silicosis (silica dust), asbestosis (asbestos), and pleural mesothelioma (asbestos) are examples.  But most cancers, respiratory conditions, and neurological disorders, for example, are caused by a variety of triggers, some related to exposures, others genetic or idiopathic.  It is therefore difficult to differentiate those caused by a particular exposure and those arising for other reasons.

 ●  The nanostudies will be new.  Under either admissibility test, new and untested or unreplicated studies may not pass muster.  In toxic torts, history has shown that early plaintiffs may have considerable difficulty with the admissibility of their evidence; even if the evidence is ruled admissible, problems of proof arise because juries may not view the early evidence as having much weight.   As time goes on, these studies may gain more acceptance in the field – or, they may be proved to be aberrations.

 Next up:  Admissibility and scientific reliability of nanostudies.

On November 22, 2010, EPA submitted a proposed rule under Section 8(a) of TSCA to the Office of Management and Budget for its review.  The proposed rule includes reporting requirements for manufacturers of nanoscale materials and could be published in the Federal Register for public comment in December.

The first of three proposed rules expected in 2011, it would require disclosure of information on manufacturing and processing, as well as on exposure and release of nanomaterials.  This is merely a prelude to any actual regulation of the industries and processes making use of nanotechnology.  It is a critical step toward reducing risks to human health and the environment.  But it also highlights the fact that regulation of nanomaterials is a long, slow process that may not yield satisfactory results for many years.

 In September, an EPA representative told members of the nanomaterials industry, “We are at the stage where we really don’t have a clear idea of how to manage risk. . . . The more information we can collect through regulation—on what is being manufactured, toxicity data, and the development of the proper protocols for measuring toxic effects of the nanomaterial—the better off we will be to manage the risk and demonstrate to the American people we have a handle on the issue.”

The current proposal can be seen as an early step in risk assessment, but far from the risk management eventually envisioned by EPA.

The European Union may be further ahead.  On November 24, 2010, the European Parliament voted to extend its restriction on many hazardous substances to most electrical and electronic products, but stopped short of imposing a restriction on nanosilver and carbon nanotubes.  Observers say that it is likely that these substances will be incorporated into the law when the law comes up for review in three years.  Thus, the EU may be heading toward management of the risks of nanotechnology more quickly than the U.S.

Even so, why so slow?  Regulators should get moving on resolving obstacles such as settling the scope of nanoscale definitions, deciding how much data is enough for effective regulation, and determining whether small businesses warrant an exemption from regulation.


Sources (all available by BNA subscription):

225 BNA Daily Env’t Rptr. A-6 (Nov. 24, 2010)

34 BNA Chemical Reg. Rptr. 1149 (Nov. 24, 2010)

34 BNA Chemical Reg. Rptr. 960 (Oct. 4, 2010)

The U.S. National Nanotechnology Initiative (NNI) Strategic Plan Draft was posted for public comment on November 1, 2010.  The NNI was launched in 2001 with 8 agencies and now consists of the nanotechnology-related activities of 25 agencies.  Fifteen of these agencies have R&D budgets related to nanotechnology.

In reflecting on the 10-year history of U.S. nanotechnology research and development, the NNI Draft highlights its work as having “established a thriving nanotechnology R&D environment, laid the crucial groundwork for developing commercial applications and scaling up production, and created demand for many new nanotechnology and manufacturing jobs in the near-term.”  (Draft, p. 1)  Looking to the future, the NNI notes that nanotechnology R&D is “far from full realization.”  (Draft, p. 2)  The goals of the NNI continue to be broad:  continued development of R&D; developing the technologies into products for commercial and consumer use; and developing the physical and human resources to achieve these goals.

Goal 4 of the Draft Strategic Plan is “Support responsible development of nanotechnology,” including the twin goals of understanding and managing the risks of the technologies.  Among the NNI participating agencies in 2010 are EPA, FDA, National Institutes of Health (NIH), and National Institute for Occupational Safety and Health (NIOSH).

The NNI Draft Strategic Plan focuses directly on the benefits of nanotechnology, rather than the risks.  But many of the participating agencies – and many more – need to be involved on the risk side of the proverbial risk-benefit analysis.  This is happening, as reported previously in posts on this blog ranging from FIFRA to TSCA to the FDCA.

 But equally important is the need for communication and coordination on both the benefits and risks of nanotechnology.  And that extends beyond governmental regulation to businesses and nongovernmental organizations (NGOs).

Aside from governmental action, various voluntary initiatives and partnerships have emerged.  A report out of the Woodrow Wilson International Center for Scholars, “Voluntary Initiatives, Regulation, and Nanotechnology Oversight:  Charting a Path,” gives an overview of the initiatives – some publicly sponsored, some developed by business, and some representing joint business-NGO partnerships.  These initiatives have the common, though separate, goal of developing a strategy to oversee environmental, health, and safety risks raised by nanomaterials.  The report is available at

Three initiatives discussed in some detail in the report are:

 ●  “Nano Risk Framework,” jointly developed by duPont and the Environmental Defense Fund (EDF)

 ●  “Responsible Nano Code,” sponsored by stakeholders from the United Kingdom

 ●  “Nanoscale Materials Stewardship Program,” developed by EPA

 The report critically analyzes these specific initiatives – as well as others more generally – and concludes that they have a welcome role in the future of nanotechnology safety and health efforts.

The ideal world does not exist, of course.  But in this world, a strategy that incorporates the risks and benefits of these developing technologies and brings together as many varied interests as possible representing all affected parties, including the environment, is warranted.  It can provide needed checks and balances along the way.

The New York Times recently published an article reviewing the state of research on the adverse health effects of the chemical bisphenol-A (known as BPA), which is found in plastic used for many consumer products.  BPA is a hot topic right now, both in the health and political arenas.  The reason is that BPA has been shown in some studies to mimic the hormone estrogen, making it a potential “endocrine disruptor” capable of causing harm to humans.  But whether BPA, in mimicking estrogen, actually causes harm has yet to be determined.

Some of the concerns about conducting and interpreting the health studies on BPA are instructive as we go forward with studies on the health effects of nanosubstances.

 Some particularly instructive observations in the article are:

 1.  Some scientists have noted the conflicting results in existing studies.  Some have suggested that the inconsistent results are, at least in part, a function of different laboratories studying the chemical in different ways:  “Animal strains, doses, methods of exposure and the results being measured – as crude as body weight or as delicate as gene expression in the brain – have all varied, making it difficult or impossible to reconcile the findings,” according to the article.

 2.  Even when experiments appear to be conducted identically, the interpretations may vary among scientists of different disciplines, using different standards.

 3.  In studying BPA and many other chemicals and substances (including nanosubstances), it is particularly important to be aware of the different ways the substance may act on adults, children, and fetuses exposed in utero.  Moreover, adverse impacts of fetal exposure are not limited to fetal development; a person exposed in utero could develop exposure-related health problems throughout his or her lifetime.

 4.  Private laboratories tend to be the first to use new advances in research, while government researchers tend to lag behind.  It is not clear which is likely to yield the more accurate results – the new techniques or the tried-and-true ones.

 In thinking about studying the health effects of nanotechnology-based substances, it is important to keep in mind these points.  Because nanosubstances are available for so many and varied uses, determining the actual health impacts will take time, money, and a coordinated effort among scientific disciplines.

Now is the time to move forward with just such a coordinated effort.

 The article on BPA is available at:



What do the Gulf oil spill, the attacks on the World Trade Center on September 11, 2001, and nanotechnology have in common?  On the surface, it would seem to be nothing.  But all three involve responses to potential health and environmental threats that are instructive about how we as a society respond to such threats.  Collectively, they raise some important issues regarding how our society views health and environmental risks in general.

 Don’t get me wrong.  In no way am I suggesting that nanotechnology is comparable to the disasters at Ground Zero or in the Gulf.  Rather, I am asking that you look at how government and funded research institutes manage long-term health and environmental effects that occur as a result of chronic low-dose exposures over time.  The potential health hazards of nanotechnology fall into the long-term category.  We don’t expect to see any acute health problems associated with nanotechnology, but we should be concerned about long-term exposures, and existing efforts to study the effects should be expanded.

 Let’s contrast what happens when there is a disaster.

 On August 19, 2010, administration officials reaffirmed their commitment to the recovery and restoration of the Gulf in the aftermath of the Deepwater Horizon oil rig explosion and the subsequent movement of oil into the waters and the ecosystems of the Gulf.  The media outlets have been full of video, photographs, and articles about the efforts of many organizations, companies, and governmental entities to clean up and minimize the potential harm to natural resources, the environment, and all forms of life.

 Not so long ago, something similar went on at Ground Zero in the aftermath of the collapse of the World Trade Center towers following the 9/11 terrorist attacks.  Early on, the focus of efforts at Ground Zero was on the search for survivors.  On September 29, the focus turned to recovery and cleanup, including removal of debris.  But even before that date, the federal, state, and local governments were engaged in managing the environmental disaster that resulted from the release of hazardous substances into the air, including asbestos, silica, lead, mercury, polycyclic aromatic hydrocarbons (PAHs), dioxin, polyvinyl chloride (PVC), Freon, and polychlorinated biphenyls (PCBs), to name a few.  Workers from FDNY, NYPD, Port Authority of New York and New Jersey, emergency medical personnel, and a host of volunteers worked at the site.

It is easy to assign massive resources to the acute phase of a disaster, but much harder to sustain interest and funding as time goes on.  Now that the Macondo well is just about sealed, the media will eventually move on to other stories, as it did when the cleanup at Ground Zero was completed.  Funds have been established to make payments, and lawsuits have commenced.  But what lingers is the reality of long-term health effects that could emerge over time – ecosystem damage, cancer, or other health risks.  Society has a certain myopia about such things.  Perhaps it is human nature not to want to think about the health problems that could arise years down the line.

 The protracted task of developing valid scientific studies on the health effects of any exposures, including nanoparticles, and interpreting the results is as essential as responding to the acute phase of a disaster.  Disasters like the Gulf spill and 9/11 suggest a kind of false dichotomy – that acute harms are more worthy of recognition in the law than chronic long-term harms.  The long-term harms may seem less urgent, but there is nevertheless an urgency about them as well.

For example, following the Exxon Valdez oil spill in 1989, no concerted effort was made to assess the health effects of the cleanup on workers.  Years later, surveys told the story of respiratory and neurological illness.  This month, the National Institute of Environmental Health Sciences announced it would begin a study of the potential health effects of exposures of workers and residents as a result of the Gulf oil spill.  Even in the 9/11 context, where health screenings of Ground Zero responders have been ongoing since 2002, and a database has been established, acceptable compensation has come nearly a decade after the disaster.  The law is slower to recognize the harms from chronic exposures, and slower to act to both compensate the injured and prevent further harm.  Clearly, some of this is a result of symptoms and other harms emerging over time.  But this is all the more reason to be vigilant and investigative from the start.

Far from the spotlight of a high-profile disaster, and in the absence of a clearly exposed population to screen, studies on the health and environmental effects of exposures to substances about which we know little are essential.

 As mentioned, nanotechnology is not a disaster.  Far from it.  It is a means for creating better medical therapies, making some of our technology perform better, and offering consumers desirable features in everyday products such as textiles and cosmetics.  But this does not eliminate the need to make a concerted effort to study the long-term health and environmental effects of nanoparticles and nanomaterials.  No matter how long it takes; no matter how far out of the spotlight.

 For those interested in knowing more about the toxic aftermath of Ground Zero, see my article, Toxic Torts at Ground Zero, 39 ARIZ. ST. L.J. 383 (2007).

On the need for studies of the health impact of the Gulf spill, see

Gina M. Solomon & Sarah Janssen, Health Effects of the Gulf Oil Spill, J.A.M.A. (Aug. 16, 2010), available at

It’s fair to say that the United States has not yet tiptoed into the waters of regulating nanotechnology directly.  Rather, new efforts at regulation of chemicals and consumer products tend toward indirect regulation.  That is, these efforts would strengthen and expand existing federal regulation.  Two examples are recent bills introduced in the House of Representatives that would amend the Toxic Substances Control Act (TSCA) and the Food, Drug, and Cosmetic Act (FDCA) for substances and products that may or may not contain nanomaterials.  As discussed in a previous entry in this blog, placing nanomaterials under the same regulatory standards as non-nano substances is a subject that requires discussion on its own.

 Is the current trend toward indirect regulation a good idea?  It’s certainly easier and more efficient in the short run to promulgate broad regulations that encompass a variety of substances and uses, and to amend existing statutes.  And there is no doubt that these statutes needed updating to reflect scientific advancement and new risks.  But there is a danger that regulators – and the public – would be left with the impression that once these statutes have been updated, all substances are sufficiently regulated.  With the products of nanotechnology being so diverse, it is likely that many substances would slip through the cracks of the new legislation.

 Let’s look at the two recently introduced bills.  The Toxic Chemicals Safety Act of 2010 (H.R. 5820) would amend TSCA by requiring the chemical industry to provide EPA with minimum essential data on chemical characteristics, toxicity, exposure, and use, whereupon EPA would undertake an expedited process to reduce exposures to toxic substances in the population.  An important feature of the bill provides for public disclosure of non-confidential and otherwise non-exempt information.  The text of the bill may be found at

The current text of TSCA is at 15 U.S.C. §§ 2601 et seq.

The other recently introduced bill is the Safe Cosmetics Act of 2010 (H.R. 5786), which contains provisions for protecting consumers from carcinogenic and other toxic ingredients in certain previously unregulated household products, such as perfumes, shaving creams, shampoos, and deodorants.  Like the proposed TSCA amendment, a major purpose of this bill is to update the existing FDCA and its regulations and to disclose the information regarding hazards to the public, in this case primarily through product labels.  Currently, the cosmetics industry is mostly self-regulated, and members of the industry have complained that this new bill lacks appropriate standards and would place an undue burden on the FDA.  Instead, the industry has proposed its own new requirements.

 H.R. 5786 also references nanoparticles, clearly indicating that nanotechnology was intended to be part of the amendment.  For example, Sec. 618(a)(5) requires that cosmetic manufacturers submit various information to the FDA, including “the ingredient list as it appears on the cosmetic label or insert, including the particle size of any nanoscale cosmetic ingredients.”  Sec. 618(e) goes on to authorize the Secretary of Health and Human Services to require that

 “(1) minerals and other particulate ingredients be labeled as ‘nano-scale’ on a cosmetic ingredient label or list if not less than 1 dimension is 100 nanometers or smaller for not less than 1 percent of the ingredient particles in the cosmetic; and

(2) other ingredients in a cosmetic be designated with scale-specific information on a cosmetic ingredient label or list if such ingredients possess scale-specific hazard properties.”

 The text of this bill may be found at

 Both bills seem to be a step in the right direction.  But in the context of nanotechnology, complicated questions persist.  For example:

●  Would these updated statutes reach the products of nanotechnology as effectively as they would reach substances and products that have no nano-contents?

 ●  Because benign substances may behave differently at the nanolevel, would such regulation miss potential toxic effects?

●  What science would be behind the decisions to disclose toxicity?

●  Should nanotechnology be regulated separate from chemicals and consumer products?

● Which alternative makes the most sense?

These and other questions are ones that Congress and regulators – and all those who may be potentially exposed – need to discuss fully in the coming months and years.

In the call for studies on the health and safety of nanoparticles in various uses, it is easy to overlook important questions about what the studies mean.  Does a study demonstrating what may be considered an adverse outcome provide a basis for legal action?  The complex answer is, “Sometimes yes and sometimes no,” or in the words of every law professor, “It depends.”

Let’s take a look at a highly publicized study published in late 2009.  See Trouiller et al., Titanium Dioxide Nanoparticles Induce DNA Damage and Genetic Instability In vivo in Mice, CANCER RES. 2009; 69: (22), Nov. 15, 2009.  Researchers from UCLA conducted a study in vivo on mice to test the effects of titanium dioxide nanoparticles, regularly used in many consumer products, including cosmetics (especially sunblocks), food coloring, toothpaste, and paint.  The researchers herald their study as the first in vivo study to demonstrate a connection between the particular substance and genetic harm.  Previous in vitro studies, they say, produced mixed results and by their very nature did not involve living tissue.

First, a word about how the law views in vitro and in vivo studies.  In vitro studies, such as the Ames test, test the effects of chemicals on bacteria or other cells in a laboratory dish, looking for genetic mutations.  These studies are sometimes offered in a legal setting to suggest that exposure to the substance is carcinogenic in humans, on the theory that somatic cell mutations lead to uncontrolled cell reproduction and, ultimately, cancer.  In vivo studies compare laboratory animals exposed to a particular substance to a control group that was not exposed, looking for differences in outcomes between the two groups.  What both types of studies have in common is that they do not involve humans.  As a result, they also have in common the need to extrapolate from the test data to predictable results in humans, a process that is inherently speculative.  In other words, both types of studies fall short of demonstrating exactly what will happen when humans are exposed to the substance.  But both are relatively fast and inexpensive, and neither involves the ethical dilemmas of testing on humans.

Courts bristle when plaintiffs seek to introduce this kind of evidence, without anything else, in personal injury litigation as proof that exposure to a particular substance caused their illnesses.  The role of courts in determining what evidence is admissible under the rules of evidence is designed to keep frivolous suits from consuming resources and from reaching juries, which might be more impressionable than the court.  Regulators, however, are less constrained than courts; their role is circumscribed only by the legislation giving them authority.

In the scheme of things, the law prefers in vivo studies to in vitro studies because in vivo studies demonstrate some action of the substance on mammalian living tissue.  But both types of studies are a distant second to epidemiological studies on human populations.  Such statistical studies of risk factors examine groups of humans to determine the strength of relationships between exposures and outcomes.  But even they do not examine the direct impact of the substance on human tissues.

All of the scientific and statistical studies used to demonstrate carcinogenicity serve to illustrate the difficulty the law has with understanding and using such studies to make legal decisions.  In the important U.S. Supreme Court case of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), in which the Court provided guidance on determining the reliability of scientific studies in the federal courts (in the context of a toxic tort case involving the prescription drug Bendectin), the Court had the following to say about the distinctions between science and litigation:

[T]here are important differences between the quest for truth in the courtroom and the quest for truth in the laboratory. Scientific conclusions are subject to perpetual revision. Law, on the other hand, must resolve disputes finally and quickly. The scientific project is advanced by broad and wide-ranging consideration of a multitude of hypotheses, for those that are incorrect will eventually be shown to be so, and that in itself is an advance. Conjectures that are probably wrong are of little use, however, in the project of reaching a quick, final, and binding legal judgment – often of great consequence – about a particular set of events in the past.

Id. at 596-97.

There is strength in numbers, however.  The more reliable studies that are conducted showing similar results, the more likely it is that the substance will be regulated effectively – and the more likely that litigants will be able to assemble a package of expert scientific evidence that will support their positions.


In product liability litigation, product sellers often rely on the so-called state-of-the-art defense.  By raising this defense, the seller – usually the product manufacturer – argues that the risks or hazards of the product complained of in the current litigation were not known to it at the time the product was designed, marketed, and sold to the user or consumer.  As with everything in the law, arguments abound as to how to define the state of the art.  For example, manufacturers have argued that the state of the art should be defined as the industry standard at the time.  This was essentially the argument made by asbestos insulation products manufacturers in the seminal case of Borel v. Fibreboard Paper Products Corp., 493 F.2d 1076 (5th Cir. 1973).  The court had a very different view, however.  Reflecting concerns that using the industry standard to define the state of the art at any point in time would encourage entire industries to be lax in conducting research on the hazards of their products and in disseminating information about known hazards to the public, the court held the manufacturers to the standards of experts in the industry.  The court defined this as follows:

The manufacturer’s status as an expert means that at a minimum he must keep abreast of scientific knowledge, discoveries, and advances and is presumed to know what is imparted thereby.  But even more importantly, the manufacturer has a duty to test and inspect his product.  The extent of research and experiment must be commensurate with the dangers involved.

Id. at 1089-90.

Plaintiffs, on the other hand, prefer to define the state of the art to reflect technology on the cutting edge of scientific knowledge at the relevant time.  This concept would limit use of the state-of-the-art defense to a much smaller group of cases and result in broad liability for product sellers.  It ignores entirely whether making the product safer was feasible at the time, and whether the utility of the product outweighed any dangers it might create.  At the extreme, sellers could be absolutely liable for any and all injuries from their products.  Thus, in Beshada v. Johns-Manville Products Corp., 447 A.2d 539 (N.J. 1982) – another asbestos failure-to-warn case – the court refused to recognize the state-of-the-art defense on policy grounds, reasoning that the manufacturers were in a better position than the injured victims to bear and spread the losses associated with their products.

But the prevailing view allows product sellers to rely on state of the art as a defense to claims for defective products.  The Third Restatement of Torts:  Products Liability (1998) refers to “the foreseeable risks of harm” as a basis of liability for defective design and failure to warn of the hazards of a product.  But what is foreseeable?  All lawyers know the answer to that question is unclear and very fact specific.

Which brings us to the risks of nanotechnology.  What should we demand of sellers of nanotechnology and of the products making use of it?  Should the burdens of research into the risks be greater or lesser because the technology is still developing?  Whether or not regulation occurs, personal injury litigation will arise at some point.  It seems inevitable, given the course of litigation over other consumer and workplace products.

One thing is clear:  It will not suffice for defendants to argue that they were not aware of the potential hazards of their products if they did not conduct research into the health and safety impacts and apprise themselves of all other available and pertinent research results.  If concerns arise from initial research (as they have in some studies of nanoparticles), their obligation is to conduct further research and to use the information in product design decisions or to provide sufficient warnings.  The words of the Court of Appeals in Borel resonate here:  “But even more importantly, the manufacturer has a duty to test and inspect his product.  The extent of research and experiment must be commensurate with the dangers involved.”