Quis custodiet ipsos medici?

Internet security is in a position similar to that of safety in the medical industry. Many doctors have an opinion like this one, quoted by Kent Bottles:
“Only 33% of my patients with diabetes have glycated hemoglobin levels that are at goal. Only 44% have cholesterol levels at goal. A measly 26% have blood pressure at goal. All my grades are well below my institution’s targets.” And she says, “I don’t even bother checking the results anymore. I just quietly push the reports under my pile of unread journals, phone messages, insurance forms, and prior authorizations.”

Meanwhile, according to the CDC, 99,000 people die in the U.S. per year because of healthcare-associated infections. That is the equivalent of an airliner crash every day. It's three times the rate of deaths from automobile accidents.

The basic medical error problems are not in any way unique to computing in medicine. They include the kind Dennis Quaid observed when his twin babies almost died from repeated massive medically administered overdoses, and the software problems Nancy Leveson so ably analyzed in the infamous 1980s Therac-25 cancer-radiation device. The solutions to those problems are analogous to some of the solutions IT security needs: measurements, plus six or seven layers of aggregation, analysis, and distribution.

As Gardiner Harris reported in the New York Times on August 20, 2010, another problem is that intravenous tubes and feeding tubes are not distinguished by shape or color:

‘…an application filed in August 2009 from Alan Reid, president of Multi-Med in West Swanzey, N.H., to produce feeding tubes for newborns that go into the stomach using the same connectors as those that go into veins. The F.D.A. was so concerned about the application that it inspected the Multi-Med plant in September and issued a warning letter for Multi-Med’s failure to test or design its pediatric feeding tubes adequately.

The similarity of feeding and intravenous tubes caused the near death of Johannah Back’s premature infant, Chloe Back, in 2006. A nurse mistakenly connected a bag of breast milk to an intravenous tube, leading Chloe to form tiny blood clots throughout her body, bleed profusely and suffer seizures for months.

“These problems have been going on since at least the 1970s. Why?” asked Ms. Back, of Las Vegas.’

We use incompatible connectors to prohibit leaded gasoline from going into unleaded tanks. Yet we permit the equivalent in medicine.

This is quite similar to the problem in the Dennis Quaid case, where the infant dose came in the same kind of packaging as the adult dose 1,000 times stronger. That's worse than packaging jet fuel the same as tractor diesel.

On top of this, there's no regular audit trail, no monitoring of audit data, no user (patient) transparency, and no reputation system (rankings) for the doctors, nurses, and aides involved.

Why? Because the manufacturers don’t want to change:

If the agency is going to crack down on pediatric feeding tubes, they need to go after every manufacturer “and not just the new guy,” Mr. Dryden said.

And medical professionals don't want to change either. Why not? Kent Bottles spells out what they believe:

“the overwhelming majority of health care workers are in the profession to help patients and doing a decent job.”

I believe this translates to what an IT security expert recently said:

“We’re still all wizards here, as far as a large segment of the population is concerned, after all.”
That large segment especially includes the wizards, or doctors, themselves.

Yet when medical professionals are actually tested on knowledge or behavior, they’re maybe not as good as they think.

The first thing we need is to keep score. With paper records, there's no way to, because the records are isolated.

“There’s no backup, no way to search them all at once, no way to harness them en masse for research.”
And, as Dennis Quaid says,
“You should be able to have access to your medical record at your fingertips, any time that you want.”
We can build on that.

As Kirt Cathey remarked:

A point-in-time measurement actually gives us very little information about the state of our physical condition — ongoing monitoring is what makes up a good physical report. Same with security. However, the oddity that currently exists in measuring security is that we really do have all the tools to perform ongoing, real-time analysis but very few have the ability to take advantage of the interfaces to gather the proper measurements…. even after developing a measurement framework. Doctors on the other hand, have very few tools available (or human resources) to perform ongoing monitoring of patients.
Ongoing measurement is a start. Then there needs to be a regular audit trail so the records are current and complete, plus monitoring of audit data, plus user (patient) transparency, plus a reputation system (rankings) for the doctors, nurses, and aides involved. Records need to be compared across patients, across doctors, and across medical facilities, and the results shown to the medical personnel, the patients, and the public.

That’s six or seven layers of aggregation, analysis, and distribution.
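
To make that concrete, here is a minimal sketch, in Python, of what the lowest few layers might look like in either domain: aggregate raw per-patient (or per-host) records, analyze them into a ranking, and distribute a readable report card. The record fields, the incident-rate scoring, and the facility names are all invented for illustration; they are not from any actual hospital or security data.

```python
# Sketch of the aggregation / analysis / distribution idea:
# roll raw per-patient (or per-host) incident records up per facility
# (or organization), rank the results, and emit a "report card".
# All field names and the scoring rule are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Record:
    facility: str      # hospital, clinic, or IT organization
    cases: int         # patients treated, or hosts monitored
    incidents: int     # infections, overdoses, or security incidents

def aggregate(records: list[Record]) -> dict[str, tuple[int, int]]:
    """Aggregation layer: roll raw records up per facility."""
    totals: dict[str, tuple[int, int]] = defaultdict(lambda: (0, 0))
    for r in records:
        cases, incidents = totals[r.facility]
        totals[r.facility] = (cases + r.cases, incidents + r.incidents)
    return dict(totals)

def rank(totals: dict[str, tuple[int, int]]) -> list[tuple[str, float]]:
    """Analysis layer: incident rate per facility, best first."""
    rates = [(name, i / c if c else 0.0) for name, (c, i) in totals.items()]
    return sorted(rates, key=lambda pair: pair[1])

def report_card(ranking: list[tuple[str, float]]) -> str:
    """Distribution layer: something staff and the public can read."""
    return "\n".join(f"{name}: {rate:.2%} incident rate"
                     for name, rate in ranking)

if __name__ == "__main__":
    sample = [Record("General", 1000, 12), Record("Mercy", 800, 4),
              Record("General", 900, 9)]
    print(report_card(rank(aggregate(sample))))
```

The point of the sketch is that each layer is simple on its own; what's missing in practice is doing all of them, continuously, and showing the output to the people being measured and to the public.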

Hospitals often get to the first layer, taking data for a given patient at a given point in time. They may even collect such data consistently during a given hospital stay. Then it all starts to get vague.

IT security could also use scores for accountability. Fortunately, in IT security copious data is usually available, so layer 1 is often already implemented.

IT security may get as far as consistent data across time on multiple computers across a single organization, but usually that’s where it stops. The higher layers mostly haven’t even been considered, except occasionally as one-shot cross-sectional studies.

As Alex Hutton tweeted during Metricon:

“the Giant Elephant in the room so far today,
the necessity of comparative analytics”

What we need are consistent metrics, collected on a broad scale over a long time, compared across organizations, analyzed to show who's doing well and who isn't, and shown to the organizations and the public, so that there can be accountability on a current and ongoing basis.

This is different from making sure that a Therac-25 or a Multi-Med product isn’t designed or implemented so shoddily as to kill people. Call that layer zero. It’s also different from the FDA approval process; call that layer 0.5.

These seven or so layers of monitoring would show quite quickly that a medical device problem is occurring, so it could be dealt with, one hopes by sending the manufacturer back to layer zero to fix the problem and banning its product until it's fixed.

Meanwhile, these layers of monitoring would catch many of the drug and device misapplications that currently kill more people than traffic accidents, and put pressure on organizations that permit them. Pressure to, for example, apply Computerized Physician Order Entry (CPOE) at around layer 1.

For the IT profession, the equivalent layer 1 reimplementation, incented by a reputation system, could be something like applying Pete Herzog's ISECOM recommendations (yes, I know ISECOM is multi-level and multi-dimensional; for the sake of my argument it's not important which parts of that correspond to one or more of the layers of monitoring I'm recommending). Or the equivalent of CPOE could be something as simple as email sender authentication. The layers of monitoring themselves are different from any specific technical security or safety preventive measure.
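
As an aside on that last point, here is a minimal sketch of what email sender authentication can look like. It only checks a domain's published SPF record against its ip4: mechanisms for a given sending address; a real verifier must also handle include:, redirect=, ip6:, and the other mechanisms of RFC 7208. The dnspython package and the example domain and address are assumptions for illustration, not a recommendation of any particular tool.

```python
# Minimal SPF-style sender check (sketch only): look up the domain's SPF
# TXT record and test whether the sending IP matches an ip4: mechanism.
# Assumes the third-party dnspython package; ignores include:, redirect=,
# ip6:, and other mechanisms a real verifier must handle (RFC 7208).
import ipaddress
import dns.exception
import dns.resolver  # pip install dnspython

def spf_record(domain: str) -> str | None:
    """Return the domain's SPF TXT record, if it publishes one."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("ascii", "replace")
        if txt.startswith("v=spf1"):
            return txt
    return None

def ip4_permitted(record: str, sender_ip: str) -> bool:
    """True if sender_ip falls inside any ip4: mechanism of the record."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

if __name__ == "__main__":
    rec = spf_record("example.com")              # illustrative domain
    if rec:
        print(rec, ip4_permitted(rec, "192.0.2.10"))  # illustrative IP
```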

The medical profession needs this kind of comparative analytics.

So does the IT security profession.

-jsq

“Physicians need report cards to tell us how well we are taking care of our patients, even when we sincerely think we are doing a fine job.”
—Kent Bottles, MD
