Insight, analysis & opinion from Joe Paduda



Guidelines – beyond the soundbite and marketing hype

Is medicine science, art, some combination of the two, or something else?
That’s not an idle question.
If you’re trying to get more scientific about how you practice medicine or what services/procedures/drugs/treatments you pay for, you are likely relying on clinical guidelines to help provide a little more perspective, hopefully one based on something other than best guess or generally accepted knowledge or tribal wisdom.
A recent study may well give you pause. The key finding is rather alarming: many guidelines are NOT based on solid research, but on work that is kindly described as rather more superficial.
Published in the Archives of Internal Medicine, the research found “More than half of the current recommendations of the IDSA (Infectious Diseases Society of America) are based on level III evidence [expert opinion] only.” [emphasis added] Note that the research focused solely on IDSA guidelines, which cover a relatively small fraction of all the guidelines in use today. Based on that finding, the researchers concluded “Until more data from well-designed controlled clinical trials become available, physicians should remain cautious when using current guidelines as the sole source guiding patient care decisions.”
This isn’t exactly news. This from research on guidelines published in The Journal of the American Medical Association over a decade ago: “Less than 10% of the guidelines used and described formal methods of combining scientific evidence or expert opinion. Many used informal techniques such as narrative summaries prepared by clinical experts, a type of review shown to be of low mean scientific quality and reproducibility. Indeed, it was difficult to determine if some of the guidelines made any attempt to review evidence, as less than 20% specified how evidence was identified, and more than 25% did not even cite any references.”
The risk here is that our sound-bite-length attention span will lead some to use these studies to discount guidelines in their entirety, ignoring the “Until more data from well-designed controlled clinical trials become available” recommendation.
Truth is, there are lots of guidelines based on standards of evidence significantly higher than ‘expert opinion’. The pre-eminent organization in this area, and the one with the most rigorous standards, is the Cochrane Collaboration. And while not all will meet the randomized, double-blind controlled-trial methodology that most believe is the gold standard, many will indeed provide an ample and durable foundation on which to base medical decisions, treatment recommendations, and reimbursement.
With that said, there are organizations that trumpet their ‘guidelines’ as providing the basis for coverage and payment decisions, when a more-than-superficial examination indicates the ‘guidelines’ are built on mighty shaky ground.
The Agency for Healthcare Research and Quality maintains a database of evidence-based clinical guidelines; the listing is not comprehensive, as many organizations choose not to submit their guidelines for business reasons. However, while not meeting the ‘gold’ standard described above, the standard employed by AHRQ is far superior to “expert opinion only”; AHRQ requirements include “Corroborating documentation can be produced and verified that a systematic literature search and review of existing scientific evidence published in peer reviewed journals was performed during the guideline development.” (While their science is solid, they really need to get some English majors involved in the whole writing thing…)
What does this mean for you?
If an organization or vendor is touting their medical criteria or guidelines, prepare – and ask – pointed questions about the methodology, development process, quality of the evidence, and staffing of the effort. The good ones will be only too happy to share their work; the others will either not understand why you aren’t impressed or be exposed.

A thoughtful piece on ranking the evidence used in medical guideline development can be found here. [opens pdf]
Lots more info on guidelines is available here.

11 thoughts on “Guidelines – beyond the soundbite and marketing hype”

  1. There is a school of thought that randomized controlled trials are over-emphasized in human subject research, that they work well in, say, analysis of crop yield improvements, but not so well with human subjects. From this perspective, RCTs should not be so widely touted as the gold standard.

  2. Further to Peter’s point, see Jonah Lehrer’s article “The Truth Wears Off” in the December 13, 2010, issue of The New Yorker. Lehrer suggests that researchers find the result they’re looking for, even when using statistically valid samples (which are rare in randomized controlled trials). All that said, clinically based guidelines are better than individual provider biases.

  3. Peter, I hear ya and agree to some degree, but a good peer-reviewed, prospective, randomized, double-blind clinical study should be what we strive to start with (some argue that the meta-analysis is the gold standard). All evidence is weighted, and even consensus will have merit in certain situations. It isn’t perfect, but we have to have rules in the absence of controls.

  4. Joe,
    You are absolutely right about the basis for many, many “Evidence Based” treatment guidelines. In part, that is why any such guidelines, while perhaps carrying a legal presumption of correctness, must allow that presumption to be overcome by better/other evidence. Even the venerable ACOEM Guidelines are, in the majority, based on consensus.

  5. What is lacking are outcome studies to evaluate treatment guidelines in terms of whether or not they do represent effective treatment models. PMRI just completed a study with CWCI identifying cost-effective treatment pathways by provider type for low back strains. This type of study represents the next generation of research that is needed.

  6. I agree with the two very informed opinions above. In a past life I was a basic science and clinical researcher. A well known book in scientific circles, “Betrayers of the Truth: Fraud and Deceit in the Halls of Science,” documented how very well regarded scientists have “fudged” their data and conclusions. A popular saying is that “Statistics don’t lie, but statisticians do.”
    There are a lot of peer-reviewed journals that publish randomized, double blinded trials that have blatant errors. But that being said, to paraphrase Darrell above “It’s better than nothing” (but just something to be aware of). But, Joe’s posting is well taken and as usual on point.

  7. One other comment on Joe’s posting. In addition to the points on scientific validity that Joe brings up, one must consider practical applications of the guideline recommendations.
    For instance, if a guideline specifically states that an MRI is not indicated, but it is needed to declare a patient MMI or P&S, will dishing out 4 months of temporary disability payments while you argue the point be cost effective (in addition to causing animosity and increased litigation)? In other words, one must be prudent in the application of the guidelines in light of the bigger picture (i.e. how it will affect the ultimate outcome).

  8. Joe, medicine should be practiced as a science. There is no doubt that guidelines built with a true evidence-based methodology (such as the ACOEM occupational health treatment guidelines) are much more valuable to this end than others that might claim to be evidence-based but are not. However, there is an even more advanced model which we practice at MDGuidelines, which I call data-driven publishing. In this model, high-quality data on real-world outcomes is collected and statistically organized in order to “speak for itself”, providing the ultimate clinical guidance. This supersedes both expert opinion and meta-analysis of scientific literature!

  9. While placebo-controlled, randomized, double-blind clinical trials are the gold standard for medical research, in many cases they do not (or cannot) exist. For example, it would be impossible to measure the effectiveness of major surgery using these methods (who would sign up for a study in which there would be a 50% likelihood of receiving sham surgery?). As a result, basing guidelines only on this level of evidence creates a situation where recommendations will necessarily fall to the IDSA’s “Level III” category or what other standards label as “I” (for Insufficient Evidence). This means they are based on the “expert opinion” of the authors (i.e. they are consensus-based, not evidence-based). As consensus guidelines they are subject to specialty bias, where the authors are more likely to recommend treatments they are familiar with and less likely to recommend those further from their own specialty. For example, chiropractors are more likely to recommend chiropractic care, physical therapists prefer PT, and hand surgeons like hand surgery. State guidelines, though, can be even worse, with no evidence-ranking process and where political influence of device manufacturers or connections of a few at the top can weigh heavily. ODG (for which I am a salesman – full disclosure) uses a comprehensive evidence-ranking process that, while giving the most weight to systematic reviews and randomized controlled trials (collectively, the gold standard), also looks at other types of studies and lower levels of evidence, from cohort studies, to unstructured reviews, to other nationally recognized guidelines. Our position is that some evidence is better than no evidence. In the real world (not the theoretical one), high-level evidence can be difficult to find.
Joe’s advice to ask pointed methodology and evidence-ranking questions of guideline publishers is sound; also rely on the analysis of AHRQ, or other resources like studies by the Adelaide Health Technology Assessment or the Rand Corporation.

  10. The general thrust of the foregoing comments is valid. However, what is totally lacking is mention of the massive corruption of the research process at every level. Government research support is declining and being replaced by funds from industry. As just one example of what this is doing, read the excellent article by Lenzer and Brownlee in BMJ. My personal solution to the problem of obtaining minimally biased information is to refuse drug reps access to my practice and to subscribe to the Medical Letter, a publication with no advertising or ties to the drug industry.

  11. A very informative thread. But a couple of corrections and observations (again in the interest of full disclosure I am the Director of ACOEM Guidelines):
    1. The numerical majority of ACOEM recommendations are designated as “I” (insufficient evidence); this means that:
    – A multidisciplinary panel unanimously agreed that despite the absence of high quality evidence (i.e. RCTs) there was a clinical situation that needed to be addressed.
    – It does not mean that there was not lower quality or conflicting evidence used in generating the recommendation: evidence is always reviewed in any ACOEM recommendation.
    – The (I) recommendation should be applied in clinical or UR situations with the understanding that exceptions are likely to occur more often than with recommendations based on stronger evidence bases, i.e. (C), (B), (A).
    2. A guideline should assess the evidence base in its entirety; selective literature grading and reliance on other systematic reviews are a shortcut to propagation of poorly researched recommendations.
    3. Inclusion in the AHRQ Clearinghouse is not an endorsement of quality – there are almost 3000 guidelines posted, many conflicting, some outright propaganda.
    The bottom line is that a high-quality guideline should be clear in its recommendations and about when the supporting evidence is strong or weak, because quite simply there is not enough strong, clear, consistent science to address the near-infinite permutations of clinical presentations. Guidelines clarify application of evidence-based medicine only up to a point – high-quality guidelines let you know where that point is.


Joe Paduda is the principal of Health Strategy Associates



A national consulting firm specializing in managed care for workers’ compensation, group health and auto, and health care cost containment. We serve insurers, employers and health care providers.



© Joe Paduda 2024. We encourage links to any material on this page. Fair use excerpts of material written by Joe Paduda may be used with attribution to Joe Paduda, Managed Care Matters.

Note: Some material on this page may be excerpted from other sources. In such cases, copyright is retained by the respective authors of those sources.