By Joseph Haduch and Patrick Flaherty
When sourcing and qualifying common but major cardiology equipment, it is always remarkable when two experienced, well-respected cardiologists cannot agree on an interventional cardiology lab.
You’ve all experienced it. Lab A, the Philips lab, is Dr. Smith’s room. Lab B, the Siemens lab, is Dr. Jones’ room. Dr. Smith refuses to do cases in Lab B because he believes the image quality is unacceptably poor. Dr. Jones refuses to do cases in Lab A because she believes she cannot practice effectively with its image quality. Heaven help you if one of the labs goes down and you ask your physician to use the other room. It’s as if you’re asking them to travel to the moon to perform the procedure; the only difference is that the reduction in gravity would be more tolerable. In these instances, what you’re experiencing is the “art of medicine.” The blatant subjectivity in this example is not unlike what we experience in our personal lives; whether it is a color, a song, a novel, or a classic car, there can be a wide variety of opinions about what is most appealing, most enjoyable, most entertaining, or simply intolerable. While subjectivity is understandable and expected in those settings, it is not something that should determine data-driven clinical practice.
The “art of medicine” has its place and frequently informs innovative hypotheses that subsequently become the foundation of deep scientific research and validation, but the “art of medicine” should not be the platform upon which the core diagnostic capabilities of health care are based. Unfortunately, medical equipment manufacturers have become comfortable with, and have in many ways exploited, the FDA 510(k) clearance process to promulgate a sales and development process that does not fully and objectively test their brand-based features and technology in the systematic manner used by the pharmaceutical industry. The ensuing culture of medical equipment and device manufacturers, which has deepened over the decades, results in suppliers catering to the art of medicine because it is easier to navigate the sale. However, it is the “science of medicine” that should always drive our practice and business decisions. As responsible stewards of our organizations’ limited dollars, the science of medicine should be a moral and practical imperative that demands our attention and commitment.
The “science of medicine” is objective and focuses on provable outcomes; it is replicable and can be continuously tested so that patients everywhere can be assured their care always reflects current best practice. Science does not deal in gray areas or subjective opinion, nor is it open to broad interpretation; it is an anchor in a tempestuous sea of ever-changing health care information and variables. You would think all parties would agree that the science of medicine is how we should evaluate system performance, quality, outcomes and value. These concepts frequently show up in conferences as buzzwords, but they do not frequently show up in contracts, because there is a gaping hole of missing data that would test software and hardware differences in the context of patient outcomes. Value is a wonderful notion until it no longer supports a supplier’s sales narrative or a physician’s preference. Have you ever noticed that to an incumbent supplier there is no such thing as equivalent technology, but to a supplier on the outside looking in, all technology is an interchangeable commodity? The only way to successfully organize a decision is to use the science of medicine to define functions over features.
Returning to the example at the beginning of this article: do the Siemens and Philips labs provide the same level of function? Do they each provide an image whose quality can be measured? Absolutely! Given the amount of engineering incorporated, how could there not be a surplus of quantitative information with which to test objective performance? Can we measure the delivered dose at the patient level? Absolutely! Do both labs feature image algorithms that “improve” image quality? In some instances, a more accurate word might be “change” image quality — allowing Dr. Smith to customize an image with more contrast and permitting Dr. Jones to create a flatter image — but in both cases the answer is a resounding yes. Clearly, the ability to objectively measure performance from an engineering perspective exists; it is in connecting that performance to the context of the patient that the “art of medicine” exerts its control. This is where we must focus our collective attention to drive the most value from each dollar we spend.
In closing, features and aesthetics have a place in medicine, as they do in all human activities, and they certainly need to be taken into consideration. But it’s the equipment’s functionality — its purpose, its practical use — that must be at the forefront of our decision-making process. Functionality drives outcomes, savings, improved workflow and best practices. Functionality is what vendors should use to differentiate themselves from other vendors, and that functionality should always be connected to patient value, clinically and economically. Instead, we get AIDR and ASiR, two names from two different vendors for CT image reconstruction: identical functions obfuscated by the marketing sorcerers employed so effectively by members of the Medical Imaging & Technology Alliance (MITA). It is understandable that suppliers need to stay competitive in the market and offer a wide variety of features, but the arbitrary leapfrogging of features distorts any supplier’s vision toward differentiated functionality and value engineering. Our patients, providers and suppliers deserve better.
Patrick Flaherty is the vice president of operations for UPMC BioTronics.
Joseph Haduch, MBA, MS, is the senior director of clinical engineering for UPMC BioTronics.
