NHTSA’s Recall Office is Failing

Motor Trend | 6/24/2015 | Scott Evans

The National Highway Traffic Safety Administration (NHTSA) is charged with enforcing vehicle safety regulations, investigating defects, and compelling recalls. Its purpose is to keep you safe behind the wheel, but despite forcing many high-profile recalls over the years, an investigation into the agency’s inner workings revealed major systemic problems that allow many potentially dangerous defects to slip through the cracks.

An audit of NHTSA’s Office of Defects Investigation (ODI) revealed major failings in the way the agency collects, analyzes, and acts on data on potentially defective vehicles. The report, ordered by the Secretary of Transportation in response to NHTSA’s failure to identify and act on the GM ignition switch defect, describes an agency plagued by a lack of transparency, a lack of oversight, a lack of training, and a lack of resources.

Read the companion piece on the specific failings in the GM ignition case RIGHT HERE.

As a result, overworked and sometimes unqualified staffers investigate only a tiny percentage of reported safety issues with little direction and almost no supervision. They often make reports based on perceptions of which issues are likely to garner attention from management, and management prioritizes issues likely to result in a manufacturer recall. Meanwhile, issues deemed to be of lesser concern are passed over completely or filed away after a cursory investigation and rarely revisited.

Although the audit was directed at the failures leading up to the GM ignition switch recall, the Department of Transportation’s Office of Inspector General uncovered many cultural and systemic issues plaguing the ODI at all levels.

Problems With Data

ODI relies entirely on outside reports for information on potential defects. Those reports come from two sources: automobile manufacturers and the public. In both cases, the reporting systems are plagued with incomplete or outright bad information. In both cases, this can be traced to inadequate and confusing reporting forms, a lack of guidance from NHTSA, and little follow-up or oversight of the reporting entity.

The most damning allegations come against ODI’s handling of manufacturer-reported data. By law, automobile manufacturers must send quarterly reports to NHTSA on all accidents, property damage claims, warranty claims, consumer complaints, and field reports on incidents involving specific vehicle systems and components specified in regulations. Additionally, manufacturers are required to report all incidents involving injuries or deaths regardless of what vehicle systems or components were involved. Incidents involving components and systems not specifically regulated and that don’t result in an injury or death can be omitted at the manufacturer’s discretion.

In one egregious case, a manufacturer described a fire as a “strange odor” to avoid ODI scrutiny.

Despite the importance of this data, ODI’s reporting process is vague. Manufacturers must assign one of 24 codes to the incident, and ODI provides no guidance on how those codes should be applied. When manufacturers ask for help, ODI tells them to use their best judgment in determining which code applies to the component or system involved. As such, countless employees of automobile manufacturers are guessing at how tens of thousands of reports should be coded, leading to inconsistent reporting across the board. Moreover, ODI provides no guidance on how much information to report or how specific it should be, leading to a wide range in the quality of reports filed. All of this inconsistent data invariably leads to poor analysis.

In addition to handing all the responsibility for reporting to the manufacturers, ODI rarely checks to make sure they’re doing it right. Generally, ODI makes sure the reports are sent on time but doesn’t verify their accuracy or completeness, even though it has the authority to do so. In fact, the frequency at which ODI requests additional documentation from manufacturers has dwindled in recent years as workloads increase. ODI doesn’t even review manufacturers’ internal reporting procedures. As such, manufacturers are essentially on what one ODI employee called “the honor system,” and they’ve been known to intentionally miscategorize incidents to hide them from ODI. In one egregious case, a manufacturer described a fire as a “strange odor” to avoid ODI scrutiny.

ODI is toothless not only when it comes to policing the content of reports but also when enforcing report deadlines. One manufacturer, believed to be underreporting injury and death incidents, was never disciplined, even when it self-reported an additional 1,700 omitted incidents years later. In another case, an RV manufacturer went 10 years without filing a single report and was never penalized. The manufacturer blamed “internal miscommunications” and a software failure.

ODI’s other source of potential defect information, consumer reporting, fares little better. ODI has only one employee to screen complaints registered by phone and on NHTSA’s website, SaferCar.gov. This employee must screen between 40,000 and 80,000 complaints per year, an average of 330 per day. Worse, the employee is only allowed to spend half of each working day screening complaints, as he has other duties. This means your complaint, no matter its content, gets well under a minute of his time: half a working day spread across 330 complaints works out to roughly 45 seconds apiece.

Per the initial screener, only about 10 percent of all complaints are flagged for further investigation. He estimated 25 percent of complaints don’t provide enough information for investigators to go on. He also estimated 50-75 percent of complaints identify the wrong part(s).

Although certain complaints, such as those relating to airbags, are automatically flagged for review, the rest is up to the screener, who has been given no official policy or procedure to follow and instead relies on his own experience and judgment. The initial screener told auditors he prioritizes issues that can surprise a driver and ignores issues that he believes won’t be investigated if he flags them or if he believes they’re already covered by another recall.

ODI has only one employee to screen between 40,000 and 80,000 complaints per year, an average of 330 per day.

The 10 percent of flagged complaints are forwarded to an advanced screening panel of eight employees. They’re charged with doing a deeper analysis and deciding whether the issue is serious enough or has enough supporting evidence to open a full investigation. Although these advanced investigators have access to multiple sources of information and the authority to do more digging, they rarely do because it can be time-consuming. What’s more, until 2013 they weren’t required to document their reasoning when declining to investigate a complaint, so it’s impossible to know why many past decisions were made. Even now, the audit found roughly half of the rejected complaints were “incorrectly annotated or lacked critical information.” Additionally, 57 percent of rejected complaints from the fourth quarter of 2013 alone lacked any justification at all. The screeners’ excuse for these failures: “Annotating complaints is time-consuming.”

It’s unfair to pin all the blame on the screeners, however. Many of them lack experience, training, or background in the fields they’re assigned to. In many cases, investigators aren’t trained on the technologies they’re investigating, and some lack engineering backgrounds or even automotive backgrounds.

They’re not getting any help from above, either. ODI has no money for training, conferences, certifications, or other professional development, and employees are expected to do it on their own time, not during work hours. If that’s not enough, investigators and their supervisor admit there’s no formal review process for screeners, and they receive no feedback on the quality of their work.

Problems With Analysis

Not only does ODI have problems obtaining accurate and reliable data, but it also has a number of problems processing the data it gets.

Much of ODI’s defect investigation decision-making relies on identifying trends in parts failures. To do that, ODI uses four different statistical models to analyze manufacturer-supplied data. The tens of thousands of reports a manufacturer submits are fed into analytical software, so long as the software can read them. In late 2013, GM upgraded to the latest version of Microsoft Word, which saves files with the extension .docx rather than the old .doc. ODI’s software couldn’t read .docx, so it ignored those reports for months before the problem was corrected.
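The audit doesn’t describe ODI’s software in any more detail, but the lesson generalizes: a report-intake tool should surface files it can’t parse rather than silently dropping them. Below is a minimal, purely illustrative sketch of that idea in Python; the python-docx library, the file layout, and the function names are all assumptions for the example, not anything drawn from the audit.

```python
import logging
from pathlib import Path

from docx import Document  # python-docx reads .docx but not legacy .doc

logging.basicConfig(level=logging.WARNING)


def load_report_text(path: Path):
    """Return the text of a manufacturer report, or None if it can't be parsed."""
    if path.suffix.lower() == ".docx":
        doc = Document(path)
        return "\n".join(p.text for p in doc.paragraphs)
    # Anything else (legacy .doc, PDF, etc.) needs its own converter. The key
    # point is to surface the gap loudly instead of silently dropping the report.
    logging.warning("Unsupported report format, flag for manual handling: %s", path)
    return None


def load_all_reports(folder: str):
    texts, skipped = [], 0
    for path in sorted(p for p in Path(folder).iterdir() if p.is_file()):
        text = load_report_text(path)
        if text is None:
            skipped += 1
        else:
            texts.append(text)
    if skipped:
        logging.warning("%d of %d reports could not be parsed", skipped, skipped + len(texts))
    return texts
```

The design point is the warning path: an unreadable report becomes a logged, countable event instead of a months-long silent gap in the data.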

Problems go far beyond simple software issues, though. Auditors found ODI doesn’t follow standard statistical practices, which likely stems from the fact the ODI staff running the statistical analyses have no background or training in statistics. As such, the staff has been running analyses without establishing a model or a set of assumptions needed to set a baseline for all other data to be compared against. Without a model, the audit said, the analysts “cannot differentiate trends and outliers that represent random variation from those that are statistically significant — that is, scores that indicate a safety issue should be pursued.”
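The report doesn’t specify which model ODI should have used. As a rough, hypothetical illustration of what “establishing a baseline” buys you, the sketch below fits a simple Poisson baseline to made-up quarterly complaint counts and flags a new quarter only when it exceeds a control limit that random variation would rarely reach; the counts, the 0.1 percent threshold, and the use of SciPy are all illustrative assumptions.

```python
from scipy.stats import poisson

# Hypothetical quarterly complaint counts for one component on one model line.
history = [12, 9, 15, 11, 10, 13, 14, 12]   # baseline quarters
latest = 27                                  # most recent quarter

# Baseline model: treat the counts as roughly Poisson with a constant mean.
baseline_mean = sum(history) / len(history)

# Control limit: the count exceeded by pure chance only ~0.1% of the time
# if the baseline model still holds.
limit = poisson.ppf(0.999, baseline_mean)

if latest > limit:
    print(f"{latest} complaints exceeds the control limit of {limit:.0f} "
          f"(baseline mean {baseline_mean:.1f}) -- flag for review")
else:
    print("Within expected random variation -- keep monitoring")
```

Without some baseline of this kind, a jump from 12 complaints to 27 and a jump from 12 to 14 both just look like “more complaints,” which is exactly the failure the auditors describe.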

ODI’s software couldn’t read .docx, so it ignored those reports for months before the problem was corrected.

Even if they were making models, the results wouldn’t necessarily be sound. The third-party developer of one of the tests told auditors the same data added in the same order should produce the same result every time, but ODI staff said identical test runs returned different results, so the test isn’t working properly. Worse, “management has not considered this to be a problem.”
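The audit doesn’t name the test or explain why its output varied, so any fix is speculative, but the basic sanity check is simple to state: run the same analysis twice on the same data in the same order and require identical results, with any randomized step explicitly seeded. A hypothetical sketch follows; the analyze function is a stand-in, not ODI’s actual test.

```python
import random


def analyze(records, seed=0):
    """Stand-in for a statistical test; any randomized step uses an explicit seed."""
    rng = random.Random(seed)
    sample = rng.sample(records, k=min(len(records), 100))
    return sum(sample) / len(sample)


def check_determinism(records):
    """Same data, same order, same seed must give identical output."""
    first = analyze(records, seed=0)
    second = analyze(records, seed=0)
    assert first == second, "identical runs returned different results"
    return first


check_determinism(list(range(1_000)))
```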

Add to these issues that ODI hasn’t updated its statistical tests in years and doesn’t assess how well those tests perform, and you have a dangerously unreliable system for tracking potentially deadly defects. As if all that weren’t enough, ODI doesn’t even have a process in place to ensure all relevant reports make it into a statistical test. For these reasons, screeners say they don’t consider the analytical data to be particularly helpful. What’s more, screeners complain that the quarterly reporting system means any data they get isn’t timely, making it less useful for identifying defect trends quickly.

Problems With Procedure

Assuming a defect has presented itself often enough in manufacturer and consumer reporting and makes it through the first two rounds of screening, there’s still no guarantee it will get a full, formal investigation because ODI lacks strong procedures for doing so.

Auditors found that although ODI has established three key criteria when proposing a full investigation, management has given screeners “no specific guidance” on how to apply the criteria, and in fact, several of the division chiefs don’t even agree on which criteria are more important. Screeners say they instead rely on precedent, their own personal experiences, and “gut instinct” when deciding which potential defects to recommend for a full investigation. For example, one screener said “he did not propose an investigation into a safety defect that caused a vehicle’s hood to fly open while driving because previous proposals on hood latch issues did not lead to investigations.”

Screeners told auditors they do their best to guess which issues management is most likely to take up and will intentionally drop issues they believe are unlikely to get an investigation. They say management prefers to take up issues likely to result in a manufacturer recall and has in the past declined to take up issues in part because they were deemed unlikely to end in a recall.

ODI management does this for two reasons: it worries that too many investigations that don’t lead to recalls could hurt its credibility, and investigations are expensive. To spend its budget most effectively, ODI leans heavily on its screeners and tends to investigate only what look like slam-dunk cases. As such, screeners are being asked to do investigation-level research and engineering work they’re unqualified for in order to justify their proposals, which takes time away from their primary duties. On top of that, it was unclear to auditors whether the screeners even had access to enough data to do the expected level of analysis. And although ODI has the authority to compel information from manufacturers without launching a full investigation, it usually doesn’t; several screeners said they didn’t even know they could.

Putting screeners in over their heads has resulted in well-intentioned but ineffective pre-investigations. One screener, charged with assessing whether ice could build up on a brake component while driving, tested it by freezing the component. Another said he didn’t smell any exhaust inside a vehicle, but a later investigation found dangerously elevated levels of carbon monoxide (which is odorless).

Presuming a defect is potentially severe enough to survive this process, a proposal to investigate is sent to the appropriate division chief, who has two weeks to either open an investigation, decline an investigation, or send the matter to the Defects Assessment Panel (DAP) for further review. Most chiefs say they see the two-week deadline as a guide rather than a rule.

Officially, rejected proposals are listed in ODI’s database as being monitored, but ... most staff interviewed agreed a “monitored” defect is essentially forgotten.

DAP is supposed to be a review process for proposals on the edge, but most described it to auditors as “pro-forma,” a “dog and pony show,” and generally a waste of time. DAP meets less than once per month and has no formal timeline for reviewing a proposal. Some languish for months before a decision is finally made. Of the proposals ultimately rejected by DAP, more than half have no documentation on how DAP justified its decision.

Once a proposal has been rejected, either by a division chief or by DAP, it’s effectively dead. Although it’s possible for old proposals to be revisited, it almost never happens. Officially, rejected proposals are listed in ODI’s database as being monitored, but ODI doesn’t assign specific monitors to specific proposals and doesn’t track if anyone’s monitoring them at all. Most staff interviewed agreed a “monitored” defect is essentially forgotten.

NHTSA’s Response

As is standard practice, the results of the audit were sent to NHTSA before being made public so the agency could review and respond to the findings. NHTSA’s response, to paraphrase, was: Yeah, pretty much. In a letter, the agency outlined its recent achievements and the steps it has already taken to improve itself since the last audit in 2011, whose recommendations it fully implemented, but it accepted the auditors’ findings without objection and agreed to implement all 17 recommendations made. NHTSA says work will begin in September, with all recommendations in place by August 2016, barring any issues.

Shaken Faith

NHTSA is the sole regulatory body charged with enforcing vehicle safety regulations and compelling recalls, and Americans must be able to trust that their well-being behind the wheel is being adequately monitored and protected. The major structural failings discovered in the office charged with investigating potentially deadly defects severely weaken faith in the ability of regulators to keep us safe. We hope this audit serves as a much-needed wake-up call for NHTSA and that ODI is given adequate staff and resources to effectively investigate defects in the future.
