Bryan Caplan quotes with approval the NY Times on using effectiveness research to improve Medicare treatment:
The British control costs in part by having the will to empower a hard-nosed agency, the National Institute for Health and Clinical Excellence (N.I.C.E.), to study treatments and declare some ineffective.
[...]
Even better, use clinical evidence evaluations of the British Medical Journal. They've classified more than 3,000 treatments as either unknown effectiveness (51 percent), beneficial (11 percent), likely to be beneficial (23 percent), trade-off between benefits and harms (7 percent), unlikely to be beneficial (5 percent) and likely to be ineffective or harmful (3 percent).
Of course, this puts a lot of faith in the accuracy and reliability of such research.
John P. A. Ioannidis recently wrote in Scientific American of "An Epidemic of False Claims" in medical research:
False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies that claim some drug or treatment is beneficial have turned out not to be true. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed.
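Ioannidis's point can be illustrated with a toy simulation. The numbers below are illustrative assumptions, not his figures: suppose only 10 percent of tested hypotheses are genuinely true, studies have 80 percent power, and the conventional 5 percent false-positive threshold is used. Even then, a large share of "significant" findings are false.

```python
import random

random.seed(0)

n_studies = 100_000
prior_true = 0.10   # fraction of hypotheses actually true (assumption)
power = 0.80        # chance a real effect is detected (assumption)
alpha = 0.05        # chance a null effect looks "significant"

true_positives = false_positives = 0
for _ in range(n_studies):
    if random.random() < prior_true:
        # A genuinely true hypothesis: detected with probability `power`.
        if random.random() < power:
            true_positives += 1
    else:
        # A null hypothesis: falsely flagged with probability `alpha`.
        if random.random() < alpha:
            false_positives += 1

significant = true_positives + false_positives
print(f"Share of 'significant' findings that are false: "
      f"{false_positives / significant:.0%}")
```

Under these assumed numbers the expected share of false "discoveries" is (0.90 × 0.05) / (0.10 × 0.80 + 0.90 × 0.05), roughly a third of published positive results, even before any bias or data-dredging enters the picture.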
The Atlantic ran a long article featuring Ioannidis and his research a few months ago, stating:
He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
The problems certainly aren't intractable, and Ioannidis has been working on developing better methods for evaluating the quality of research. But the blunt fact is that good medical research is hard. It requires long-term studies with many participants, which is not only extremely expensive but also extremely time-consuming. And the last thing you want to do, if the goal is improving research quality, is raise the incentive to game results by making massive amounts of potential government funding contingent on positive findings.