A troubled marriage between safety research and practice

By Chris McCahill
Road design often is not as science-based as we like to think, according to a new study in Accident Analysis & Prevention. Years of biased or misreported research findings inform many of the design practices that are common today. And while there is plenty to be learned from safety research, especially in recent decades, it may be worth revisiting some long-held assumptions and rethinking how research informs practice.
The new study by Ezra Hauer at the University of Toronto focuses on several historical examples illustrating the “dysfunctional” relationship between research and practice. In the late 1950s, for instance, Ohio and Kansas both started programs to paint edge lines on rural roads, assuming it would improve safety. A few years into the programs, both states conducted randomized controlled experiments (painting lines on some roads but not on others) to test their effectiveness. Both found that edge lines seemed to reduce the number of crashes near intersections and access points, an effect that was unexpected and never well explained, but increased crashes at other locations. The net increase in crashes was 15 percent in Ohio and 27 percent in Kansas. Later studies showed similar effects in other states, yet edge lining continued and is now required by the Manual on Uniform Traffic Control Devices (MUTCD) for rural roads greater than 20 feet wide carrying at least 6,000 vehicles per day.
This study is not the first of its kind, but it highlights many reasons for the disconnect. One is unintentional bias. The edge line studies, like many safety studies, were conducted only after the programs were already in place, at which point there was typically little interest in discontinuing or reversing the practice. Study authors, who were likely invested in the programs' success, highlighted the benefits while downplaying the potential harms as “insignificant.” That was true in the narrow statistical sense, but it does not mean the increased crash rates were small or unimportant, a critical distinction.
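A hypothetical sketch makes the distinction concrete (the numbers below are invented for illustration, not taken from the Ohio or Kansas studies): a 15 percent increase in crash counts can easily fail a conventional significance test when the underlying counts are modest.

```python
import numpy as np

# Hypothetical illustration: a 15% increase that is "not significant."
rng = np.random.default_rng(0)

base = 100      # crashes observed on control (unlined) roads
treated = 115   # crashes observed on lined roads: a 15% increase

# Under the null hypothesis that both groups share one underlying crash
# rate, simulate paired Poisson counts and ask how often the treated group
# exceeds the control group by at least the observed margin.
shared_rate = (base + treated) / 2
sims = 100_000
a = rng.poisson(shared_rate, sims)
b = rng.poisson(shared_rate, sims)
p = np.mean(b - a >= treated - base)  # one-sided p-value by simulation

print(f"Observed increase: {(treated - base) / base:.0%}")
print(f"Simulated one-sided p-value: {p:.3f}")
# p comes out around 0.15: "statistically insignificant," yet a 15%
# increase in crashes is anything but small if the effect is real.
```

The point of the sketch is that “insignificant” describes the strength of the evidence, not the size of the harm; with small samples, even a substantial increase can slip under the significance threshold.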
Another common problem is that evaluation studies often take place after changes are made to a dangerous road or intersection. These studies may show that crashes dropped but, unless they are carefully designed, they cannot establish that the intervention was the cause. “Regression to the mean,” for instance, tells us that a location with an unusually bad crash record one year will tend to look safer the next year purely by chance, regardless of any design changes. Issues like these can then be compounded and solidified in meta-analyses, which may copy and repeat one another, sometimes creating the appearance of a scientific consensus.
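A small simulation shows the effect. In this hypothetical sketch (all rates invented), every site keeps the same underlying crash rate in both years and nothing is treated, yet the sites that looked worst in year one look markedly safer in year two:

```python
import numpy as np

# Minimal sketch of regression to the mean (all numbers hypothetical).
# Sites have fixed underlying crash rates; no treatment occurs between years.
rng = np.random.default_rng(1)

n_sites = 10_000
true_rates = rng.gamma(shape=2.0, scale=2.0, size=n_sites)  # long-run crashes/year

year1 = rng.poisson(true_rates)  # observed crashes, year 1
year2 = rng.poisson(true_rates)  # observed crashes, year 2 (same rates)

# Select the "worst" sites by their year-1 counts, as a naive before/after
# study might when choosing locations to treat.
worst = year1 >= np.quantile(year1, 0.95)

print(f"Worst sites, year 1 mean: {year1[worst].mean():.2f}")
print(f"Same sites, year 2 mean:  {year2[worst].mean():.2f}")
# Crash counts drop substantially at the "dangerous" sites even though
# nothing changed; a naive evaluation would credit the drop to treatment.
```

Because sites are usually selected for treatment precisely when their counts are at a random high, some of the subsequent drop would have happened anyway, which is why careful study designs use comparison groups or empirical Bayes adjustments.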
One problem we now face, according to Hauer, is a common belief that guiding documents like the MUTCD and other design standards are strictly evidence-based, when they are actually pieced together by committees over time. He explains: “And so, professional decisions based on these important documents reflect an agglomeration of opinions and interests without their evidence-based safety consequences being declared and, in some cases, without it being known.” He adds that the relationship between research and practice has improved in the last couple of decades, but formally revamping it will be important moving forward.
Chris McCahill is the Deputy Director at SSTI.