About Me

I'm a radiologist and writing helps me make sense of the world.

"My method is to take the utmost trouble to find the right thing to say, and then to say it with the utmost levity" -George Bernard Shaw

Saturday 8 May 2010

Heterodoxy and Hegemony Part 2

Hegemonic Assumption #1 - More Structure in Education is Better

Although we understand much about how adults learn, we are all unique learners. There are a multitude of educational opportunities and media now available for radiologists to learn or stay up to date. Each of us has preferences in how we actually go about learning. Similarly, the factors that motivate us to learn are complex, tacit and often deeply embedded in unique social and personal contexts. We have different pre-existing knowledge and skills. Some of us learn certain notions or techniques more rapidly than others. Furthermore, we are adaptive learners; we will adopt whatever learning style necessary to achieve knowledge or competency in the fastest possible time.

We also know that good postgraduate education should aim to enrich, stimulate & enthuse. It should be flexible enough to encompass disparate motivations, personalities, learning styles and rates of learning.

However, the more structure there is in education, the less flexible it becomes. Hence, few find that a heavily guided or prescriptive course suits them; this applies to trainees and trainers alike. Thus educational rigidity promotes mediocrity, as it is difficult to excel when shoehorned into an inflexible “one-size-fits-all” structure.

I am not advocating abandoning high educational standards. Quite the reverse. But I am saying that it is the end product, a competent or expert practitioner, that is crucial. We should be very clear about the standards we expect to be reached or maintained. And we should assess trainees very carefully to see that they have done so. I am also saying that the method used to arrive at such competencies should not be mandated. Sure, suggest a framework of how one might become competent, but recognize that the route taken to get there varies hugely between individuals.

Like the sound of this? Well, like it or not, these are the principles of competency-based training. Radiologists should embrace them to take radiology training into the next century.

Hegemonic Assumption #2 – Counting Cases is Valid

Another assumption is that competency comes with repetition. With a largely motor task, there is some validity in this. Perform a physical task a number of times and you get slicker. For interventional techniques consisting largely of psychomotor tasks, such repetition is important. However, complex non-motor tasks such as the visuo-perceptive skills of a radiologist don’t necessarily follow this.

There is also the concept of getting your “film miles”. This is the ill-explained, non-foveal visual expertise that comes after having seen tens of thousands of chest radiographs and enables you to spot a left lower lobe collapse in 0.01 seconds. This concept also includes the million and one radiographic factors, normal variants and common abnormalities that become second nature.

So, yes, experience counts for a lot. Without experience, book-based knowledge remains just that. But experience must be varied, broad ranging, of varying difficulty or complexity and reinforced by feedback that comes from opinions of other experts, clinico-pathological correlation or follow-up. This well-established cycle of experience, learning and reflection ensures that the trainee moves linearly and safely from conscious incompetence to conscious competence.

Thus, simple repetition alone is potentially harmful. It induces confidence but basic errors may be perpetuated and there is no guarantee of expertise developing. Conscious incompetence may become unconscious incompetence. Not knowing the boundaries of your expertise is a route to medico-legal hell and damnation.

Much of radiology training focuses on producing a logbook detailing sufficiently high numbers of films reported, procedures performed and so on. It ignores and hence sidelines other sources of learning such as consultant-led teaching, film and web-based archives, textbooks and journals, courses and conferences.

Why does training focus on numbers so much? Let me introduce you to the McNamara fallacy, which can be described as follows.

“The first step is to measure whatever can easily be measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.”

So, counting numbers is easy but crude, as it is only one aspect of radiology training. It lacks validity: it ignores the quality of the experience, doesn’t assure competence and disregards other methods of learning. Some have said that such “educational bean-counting” demeans professional training.

However, I am much, much more worried about the Law of Educational Cause & Effect. This states:-

“For every evaluative action, there is an equal (or greater) (and sometimes opposite) educational reaction”

Sometimes called consequential validity, it explains the occasional bizarre and/or maladaptive behaviour that trainees exhibit in the run-up to some form of assessment. Phrased metaphorically, “the assessment tail wags the curriculum dog” or more crudely, “grab the students by the tests and their hearts and minds will follow”.

Radiology training becomes a primary quest for achieving logbook numbers. The totality of experience is short-circuited by this lust for padding out the columns. Less scrupulous trainees guesstimate their numbers, leading to some pretty inaccurate figures. Some frankly dishonest trainees may “join the dots”, implying they have done a lot more than they really have. Conversely, scrupulous trainees who have recorded everything in meticulous detail may have logbooks that look a bit patchy. Yet independently verifying logbook numbers is virtually impossible. So, as an assessment method, it is not only crude and educationally invalid but easily fooled, and therefore neither sensitive nor specific.

What next then?

What is required in radiology now is bravery to question orthodox modern radiological training and its hegemonic assumptions. Be warned that the medico-political climate is against deregulation of any form. A degree of regulation is sensible and prudent but we must recognize and fight professionally demeaning regulation. First though, we need to get our own house in order. The supremacy of logbook numbers has had its day. Witness the dawn of the era of competency-based training; let us welcome it to our hearts.

Heterodoxy and Hegemony Part 1

“Career structure is shackles for the young designed by their elders”.

Our professional freedom is being steadily eroded. Our working lives are increasingly bounded by external constraints. Annual appraisal is enough of a paperwork headache without considering the GMC’s recently unveiled plans for revalidation.

Micromanagement by the Grey Suits from our Trusts reaches new heights (or lows, depending on how you look at it). Regular circulars arrive from said clipboard-wielders, bearing illogical diktats or mandating protracted training courses on hitherto mundane tasks. My wife, recently appointed as a consultant at a Trust not far from my own, had to attend a mandatory one-hour session on hand washing. I jest ye not. What next? Perhaps an enforced course on “Combating ano-olecranon dysagnosia”?

More pertinent to our radiological work, training in a subspecialty or introducing new techniques or modalities to local practice now often requires some kind of formal box to have been ticked: attendance on a recognized course; compilation of a logbook of x cases; training at a particular accredited centre of excellence.

In many respects, medicine is merely catching up with the rest of the NHS. The overt bureaucratization of the NHS has been an unstoppable rollercoaster for some time. Doctors rarely say that they can’t do something because they “haven’t been on the course”. However, juniors do now say, “I don’t know that patient, I’m covering the other side of the ward”, so who knows what the future holds?

An oft-repeated maxim is that we should raise professional standards and improve standards of practice. This truism is undeniable - our punters deserve the best quality care we can give. Attempting to raise standards is also a constructive way of dealing with the increasingly fashionable sport of doctor bashing. As a result, tightening control on training, accreditation and re-accreditation of specialists is now the unquestioned predominant method for the powers-that-be to solve the ills within medicine.

This isn’t the whole story though. I would contend that this “tightening control thus protecting patients” motif is being used as a stick to beat us.

Firstly, it seems that the more furious the turf war between specialities, the more hoops the wannabe practitioner has to jump through. For example, being judged competent in defaecating proctography is significantly less hotly contested than sexier techniques like carotid stenting. No one will ask to see your logbook of 300 venograms from the last year whereas ARSAC will want to see your annual reporting figures for PET/CT “to maintain competency”.

Secondly, it tends to cost doctors more than it does those in other specialities, or even other professions. The monopoly status of accrediting bodies and the perceived wealth of doctors allow limitless charges. Witness the spiralling rise in C(C)ST fees and exorbitant GMC fees, in addition to the modest annual sum charged by our own dear RCR. These enforced fee hikes are particularly galling, given recent admissions that medical training budgets have been unashamedly raided to bail out NHS debt.

Such perceptions do invite criticism of our medical higher-ups and betters. These gripes mainly circle around protectionism issues: stifling the growth of a modality, ivory-tower elitism and monopolization of private practice.

However, my major concerns about all this excessive regulation are much more fundamental. My concerns are over the hegemonic assumptions that (i) more structure in training and education is better and (ii) counting numbers of cases is a valid way of assuming competency.

For those unfamiliar with the term, I use “hegemony” to mean “…not only the political and economic control exercised by a dominant class but its success in projecting its own particular way of seeing the world, human and social relationships, so that this is accepted as “common sense” and part of the natural order by those who are in fact subordinated to it”.

This is where heterodoxy enters in. It is the antonym of orthodoxy, describing someone at variance with established belief; “cynicism with knobs on”. The heterodox do include swivel-eyed conspiracy theorists as well as Trotskyite anarchists. However, I must distance myself from such subversives. I merely believe it is good and healthy to reflect critically on the social hierarchies within our profession.

(read on in Part 2)

Humour and the Radiology Report

It’s a hard, harsh modern world in which to practise radiology. Against a background maelstrom of drastic annual “efficiency savings” (i.e. cuts) and tranches of impossible cancer targets, we are now all at the whim of truculent PACS systems. The radiological mottos of appropriateness and justification have been subsumed by a market-trader mentality.

With such a focus on quick turn-around and low costs, quality inevitably suffers. And no more so than in the radiological report. I’m not referring to the garbled medical English of a Belgian radiologist (don’t get me started…). More that, if the reporting target clock is ticking, you want to merely dictate a brief technically accurate report and move on. Your mind is on productivity. Especially in the new era of low fee-per-service scans.

Written reports have been the medium of the radiologist since year dot. Their clarity and accuracy are paramount, especially now that radiology is so complex and yet so central to clinical practice. We hope the requesting clinician reads the subsequent report, but sadly we are not assured of this. With the modern-day investigational “chocolate box” open wide to anyone with a pen or computer password, it must be hard to keep track of precisely how many “chocolates” you have helped yourself to. It must only be a question of time until a diktat from the local cancer office demands that radiological investigations revealing unexpected cancer must not only be telephoned to the requesting doctor and faxed to the GP but also delivered in person to the appropriate cancer nurse specialist on a silver salver. I digress.

There are agreed rules for radiological reports that I have no argument with – always answer the clinical question, be concise, include relevant negatives, avoid vague quantifiers, be as accurate as possible in your descriptions, summarize a long report and so on and so forth. However, there is an odious move in certain circles towards standardized reports, occasionally using a proforma or (Gawd help us) a tick-box approach. This seems to emanate mostly from across the North Atlantic, where it seems the complexity of the study, blandness of language and subsequent length of the differential diagnosis are worshipped over and above their clinical utility.

And bizarrely, these tedious lists of normal and nearly normal organs, complete with technical descriptions, a lengthy differential but no opinion seem not only acceptable but often preferred to conventional modern reports. I suspect the problem here is that modern reports may be scientifically accurate but have lost their “art”; their style, charm and readability.

When I was a young registrar, we would sit at the knee of a particularly venerable consultant, listening to loquacious descriptions of otherwise bland-seeming plain films. His (always a he) hands were a blur as the films danced across the light-box whilst he delivered a radiological soliloquy of brief yet lavish verbal pastiches. It was almost as if he were trying to entice the Dictaphone to bed with the most charming and suave reports ever to hit magnetic tape. Later, in the typing pool, the secretary transcribing his reports would always be smiling unconsciously at the graceful language flowing from ear to fingertip.

Now this kind of style is refined over decades and tens of thousands of films. But it hasn’t stopped me mulling over why such reports are so attractive and pondering on how to emulate my radiological heroes. On reflection, I have come to the conclusion that it is not only the practised, flowing language, but the sparing use of humour that makes such a reporting style so enviable. More specifically, I would contend it is the use of “whimsy”.

Why whimsy? I would contend that lightly peppering a report with playful comments of a gentlemanly nature will enrich a report but will not offend. Overt attempts at humour or bitter cynicism in a radiological report may entertain but could cause offence. My GP father recalls a local radiologist some decades back issuing occasional seasonal reports that might read, “Normal heart and lungs. And a happy Christmas to all our readers”. I imagine this would be frowned upon these days.

Although an informal tone may make reports more readable, it is all too easy to slip into a conversational register, thereby artificially elongating reports. Not only do these reports take longer to authorize, but I would imagine that clinicians are loath to wade through long swathes of semi-radiological pseudo-Socratic monologue.

Enticing as it may seem, one should never be offensive to the patient or use the patient as the butt of the humour. Not even remotely. We shouldn’t patronize them either. It will only get you into trouble.

Although you may be tempted to say, “I wish Mrs Lard had abstained from yet another full fry-up just prior to heaving her fatty liver onto my ultrasound couch, as it was difficult enough to identify the pancreas through the rolls of blubber anyway”, I would counsel against this approach. Apparently, it is a sign of being unprofessional or even dysfunctional. A sonographer may preface their report “Technically difficult scan…”, whereas the whimsical radiologist may say “Distinctly challenging scan. Might I suggest that Mrs Lard’s habitus makes her fundamentally unsuited to upper abdominal ultrasound?”

Not everyone is going to agree that there is a role for humour in radiological reports. And fair enough; vive la différence and all that. Plus I’ve met some pretty mirthless radiological types who wouldn’t know their funny bone from a thrombosed haemorrhoid. Lord knows what the result would be if they tried their hand at a whimsical report.

Perhaps the best bit of adopting a faintly whimsical style is that clinicians spontaneously mention that they “like your reports”. Who knows if the sparing use of capricious observations in a report may even entice the clinician to read it through and not just skip to the conclusion? Moreover, it might raise a dry smile and even a chuckle. And in the modern NHS, there definitely aren’t enough smiles and we are seriously short on chuckles.

In celebration of pedantry

Do your toes curl at the tautologous use of the phrase “CT scan”? Does it rankle when someone “orders” an investigation rather than “requests” it? Have you ever said, “Where are the cows?” in response to the phrase “lung fields”? Do you think routinely letting someone else verify your reports is lazy? Do you, in part, secretly admire Channel 4’s Green Wing character Dr Alan Statham?

Now read the following:

“They tend to obsess over the minutiae of subjects, and are prone to giving long detailed expositions, and the related corrections, and may gravitate to careers in academia or science where such obsessive attention to detail is often rewarded”.

Rather than describing some of your colleagues, it is actually describing the behaviour of those with Asperger’s syndrome (or high-functioning autism). Obsessive-Compulsive Personality Disorder is also characterized, in part, by a form of pedantry concerning the correct following of rules, procedures and practices.

The term “pedantry” typically carries an insulting, negative connotation, implying a personality defect. Moreover, such attention to detail is often thought unnecessary and irrelevant in NHS clinical practice.

However, it is my contention that radiologists are innately pedantic and that, rather than be embarrassed by it, we should embrace it. We all want to raise our standards of practice. Surely sloppiness of thought, word and deed is consistent with poor clinical practice? The term “pedant” comes from the Latin paedagogare, "to teach", derived from Greek terms for "child" and "to lead". So, its roots are honourable.

Admittedly, there are varying degrees amongst our ranks. And it is the degree of pedantry that seems to be the key. This is summarized well in Fowler’s Modern English Usage:

"The term, then, is obviously a relative one: my pedantry is your scholarship, his reasonable accuracy, her irreducible minimum of education and someone else’s ignorance."

A touch of pedantry when it makes a clinical difference seems entirely appropriate and defensible. But how far do we go? Prof Paul Goddard, my old head of training, had a frequently espoused maxim that “radiologists should be able to explain absolutely everything on the film”. The other end of this thought spectrum is that all knowledge is a personal construct; that life is uncertain and greyscale; and that one cannot be truly certain of anything. I guess most of us would want to be up at PG’s end of the spectrum. Quite an undertaking, perhaps, but surely the aim of an expert radiologist.

I argue that the innate pedantry in radiology comes from our training. Our tests are ranked by their ability to be specific and sensitive. Learning curves, ROC curves and operator dependencies are part of our working culture. We are trained and paid to be experts in our clinico-pathologico-radiological knowledge as well as our visuoperceptive and psychomotor skills. Our primary medium is the written radiological report. Our opinion should be as accurate as possible, in both the language and content of our reports.

I see nothing wrong with a degree of pedantry. Moreover, I am faintly proud of it. I do feel a bit embarrassed when it gets the better of me in a meeting or other public forum. A bit like wind, I suppose.

Radiology Short-Listing – A Practical Guide

It is that time of year again. You may be one of the select few who have been invited to short-list candidates for registrar posts. When I say invited, I mean sweet-talked, cajoled or frankly bullied.

In the good old days, the pile of CVs could be safely thrown down the stairs, with those alighting near the top being invited to interview. I did hear of an anaesthetist who would simply throw the top half of the pile into the bin, claiming that they were unlucky and that he didn’t want to work with unlucky people.

Nowadays, the application and short-listing process is all anonymous, structured etc., but its electronic nature makes it a little bland. For example, the presence of blood, sweat, tears +/- midnight oil-burns on a paper application form used to show a degree of effort. Consequently, the allotted two weeks of yet more staring at tiny typeface on a computer screen gets wearing. Especially when trying to differentiate the next generation of radiologists from the increasingly homogenous medical graduates.

Like most CVs, obituaries, research proposals and papers in lesser journals than our beloved Clinical Radiology, these applications contain a certain degree of economy of truth. All candidates, according to their applications, are a bunch of saintly, dedicated, super-human water-walkers. However, as Oscar Wilde once so neatly didn’t put it, “There are lies, damned lies and radiology reg applications”.

To short-list effectively requires an ability to read between the lines. So, for your edification and use, here is a key to what commonly used phrases actually mean.

My desire to experience a diverse array of clinical pathology attracted me to a career in Radiology: I am a rather morbid and sinister individual
Radiology is a career that I think will challenge, interest and stimulate me for the foreseeable future: Radiology looks cool but I may well change my mind on the 1st day of the job
I actively attend x-ray meetings: I sit at the front wearing my red tie and when I fall asleep, I try not to snore.
I enjoy working with my hands: I am a failed surgeon
I hope to pursue a career in interventional radiology: I am a failed surgeon
My interest in radiology grew after completion of each stage of my surgical career: I am a failed surgeon
My understanding of anatomy and pathology will be useful as a radiologist: I am a failed surgeon with MRCS
I have realized that my personality is more suited to radiology: My boss says that I am a liability and should be locked in a dark room away from patients
I am fascinated by the use of advanced technology: I am a nerd
I enjoy physics and the use of imaging technology complements my IT skills: I am a card-carrying über-nerd
I spent some time shadowing a consultant radiologist: Some people called it stalking; curse those pesky restraining orders!
I have spent a lot of time in the radiology department: It is a bit of a maze and I get lost easily
I often accompany my patients to the radiology department: It is en route to the canteen, you see
I have heard great things about the radiology training in your deanery: I have no idea about your scheme but a bit of random flattery can’t harm, can it?
I am confident in interpreting CXRs: I lack insight into my nauseating overconfidence
I have excellent eyesight / grey-scale discrimination / spatial awareness: I really am struggling to pad this out to 150 words
Learning to interpret images is like learning a different language: I really am making it up but I’m desperate for a job

Interruptions vs. Consultations

Reflection and navel-gazing are something of an increasing personal hobby or affliction, depending on how you look at it. At a mere 30-something years young, I don’t think it is a function of ageing, and I like to think it a “Physician, know thyself” kind of thing. At any rate, the phenomenon of “interruptions” has recently struck me as unique and worthy of discourse.

Interruptions are the many disturbances that a radiologist endures whilst trying to report. Reporting activity is all these days. Gone are the days when the worth of a radiologist was their sound opinions, the eloquence and lucidity of their reports, their humanistic qualities, their abilities as a teacher, researcher and all round good egg. If you ain’t shifting a thousand plain films a month, then you ain’t worth nuffin’. At least that is how it feels, even if your CD denies it emphatically.

This leads to slight tensions arising when you can’t seem to get any “proper” work done. After the nth ward round has taken in your PACS reporting station as their last stop that morning, if you listen carefully you can hear a vague swooshing sound as your reporting target whistles past.

Now there is no doubt that it is annoying to have your intellectual reverie shattered. The possible exception to this is the appearance of a colleague wanting to chew the proverbial radiological fat for 5 minutes. Not only is this usually a welcome break but also I regard it as valuable in-house Continuing Professional Development.

Typically though, the interruption comes midway through the oncology case with monthly pan-body investograms spanning a 7-year history with 16 separate index lesions. However, when using basal ganglia alone to report the seemingly endless in-patient chest films, the prospect of re-engaging the cortex via human contact and conversation is often appreciated.

A knock on the door or polite cough is the usual announcement. The more senior and, well, surgical the visitor, the noisier the arrival. Locally we have a neurosurgeon who announces his presence in the form of song, I kid you not. However, some folk sidle in completely soundlessly and you become aware of their presence by some bizarre 6th sense. Sometimes I feel like Haley Joel Osment when an indescribable sensation alerts me to the presence of 4 soundless juniors hovering behind me.

The interloper typically opens with “Can I ask you a question?”. I have tried the answer “And your second question is?” but most blink momentarily at this repartee, petit mal-like, before launching into their prepared spiel. The other classic of “Can I disturb you?” can be countered with “Didn’t give me much of a chance there, did you?”. Witty as it may seem, no one has ever laughed with 100% honesty at that one either. Their expressions are usually that of bewildered pity.

The task that is asked of you is usually inversely related to the time you have available. Most mornings will only see trivial queries about Mrs Lard’s bunion-ogram, whereas the Crackerjack interruption (“It’s Friday, it’s five to five…”) usually involves someone very young, very complex and very, very sick. On the birthday of a spouse or beloved offspring, all hell tends to break loose bang on quitting time, whereupon you find yourself mysteriously without a single colleague in the entire department.

A variable length of time later the trespasser has departed, your cup of tea has gone from lukewarm to stone cold and your mental radiology cache has auto-deleted. You sigh audibly, mentally ruminate about the precise dates of your next booked annual leave, and then turn back to the tombstone-like PACS monitors to start afresh.

It is perhaps understandable that radiologists have evolved dysfunctional working practices to avoid interruptions. Some of these changes are working-practice arrangements; some are geographical alterations. At worst, I’ve heard of radiologists who shut doors in juniors’ faces. However, I also think it is not really on to put rude signs on doors or hide behind locked ones.

I’m not totally decided whether changing reporting behaviour to cope with interruptions is sensible. By this I mean that some radiologists now deliberately come in at odd times to get some plain film reporting done whilst it is quiet. Some even deliberately seek out the quieter PACS reporting stations to reduce the chance of being disturbed. It won’t be long before folk are campaigning to report from home via a teleradiology link for the same reasons. It’s only on if the rest of us aren’t left stranded “on the shop floor”.

What I am taking a long way round to saying is that I think we shouldn’t call them interruptions. A change in ethos is needed – we should call them “consultations”. Maybe the clue is in the name but consultants are there to be consulted. It’s not original, but I try and remind our registrars that we should always try to help our clinical colleagues (or at least appear to be helping – a subtle but crucial difference).

Innovation and The Cup of Knowledge

The broadcaster John Humphrys talks of “hurrah” and “boo” words. This linguistic tactic tries to persuade us of something by the use of associated positive or negative words. If something is new, modern, scientific and progressive it can be easily championed. Conversely labelling something with a “boo” word such as old or traditional is a sure-fire way of killing it off. Surely “innovation” has become the ultimate “hurrah” word for the 21st century NHS.

Except it is now more than a word, bigger than an aspiration. Innovation is now a formal part of NHS infrastructure. There is an NHS Institute for Innovation and Improvement; there are nine “NHS Innovation Hubs” in the UK; and a £240m Innovation Fund has been introduced.

This unfettered embrace of innovation in the NHS is perhaps one of the defining motifs of the decade. If it isn’t innovative, you can go whistle. Common sense, clinical experience and empirical evidence are dismissed as bourgeois indulgences in a headlong rush for things modern and novel. Lysenko would be very proud! It is my nihilistic contention that innovation needs to be viewed with extreme scepticism.

Benefits of innovation

It is always hoped that an innovation has fundamental advantages over a more traditional method. However, this is often unknown at the time of implementation. Paradoxically, once an innovation is empirically studied, it ceases to be innovative any more. In truth, most innovations happen because there is always more excitement about the new and different. There are subtle yet well-studied interactions at play here.

First (and best studied) is the Pygmalion effect. This refers to situations in which people perform better simply because they are expected to do so. It entails a complex unconscious interaction involving verbal and non-verbal cues. It is named after George Bernard Shaw’s play Pygmalion (later popularized by the musical My Fair Lady), in which Henry Higgins believes that the Cockney flower girl, Eliza Doolittle, can be made into a lady. Higgins’ belief in her is a strong factor in her decision to become one.

Second is the Hawthorne effect. This phenomenon refers to improvements in productivity or quality of work that result from the mere fact that workers are being studied or observed. It is named after a series of studies at the Hawthorne Plant, near Chicago, from 1924 to 1936. It is widely invoked to describe an improvement in performance following a newly introduced change. There is no doubt that the presence of an influential figure alters the situation, as workers may want to avoid, impress, direct, deny or influence them, and to an unknown and unpredictable degree. This consideration is well known in qualitative research, where it is termed “reactivity”.

Lastly, there is the Halo effect, where the individual performs differently because of the novelty of participating in something new. Performance can improve through an unjustified belief in the new approach’s superiority. It shares similarities with the placebo effect.

After the innovation?

Virtually anything that is new will be popular at the time of its introduction and for a few years thereafter. It is axiomatic that at least 75% of innovations will have a half-life of less than five years. This isn’t surprising. Interest waxes and wanes as the enthusiasm and/or responsible personnel behind it vary.

One reading of this is an order effect. The most recently adopted approach attracts the most enthusiasm hence it will always win. This leads to a purgatory of constant change and enforced adoption of innovation. Such enforced redefinition, reinvention and reclassification indeed blights the modern NHS – “digging up the trees they’ve just planted” as it was memorably described recently [1]. This is flanked by a concurrent fixation with all things new. Sparkly ideas, twinkling notions and shiny concepts are gathered and vaunted by distinctly magpie-like NHS apparatchiks.

The converse can be true. What was new, radical and exciting will inevitably undergo heterotopic ossification, becoming fixed and immutable. Old isn’t always bad – it is madness to dismiss the old without evidence of inferiority. On the other hand, if you do so these days and call it an innovation, such evidence seems to be optional.

And this is my main point: change should be driven not by political fashion but by evidence that it is an improvement. Sadly, it is neither an original nor new point. Over 80 years ago, Sir Robert Hutchison most eloquently put it thus [2]:

“Those of us who have the duty of training the rising generation of doctors have a great responsibility. We must not inseminate the virgin minds of the young with the tares of our own fads. It is for this reason that it is easily possible for teaching to be too “up to date”. It is always well, before handing the cup of knowledge to the young, to wait until the froth has settled.”

Refs

1) BMJ 2008; 336: 919.
2) BMJ 1925; 1: 995-8.

Secrets and Ties

A terrible scourge of hospital-acquired infection stalks this land. It is a modern scandal and failing of our profession that we have allowed it to happen. There is no doubt we must be vigilant and do all we can to lessen it. We could argue the long and short of how best to do this. However, we are officially not allowed to. The edicts of infection control teams must be obeyed without question. Even pointing out 110% bed occupancies as a possible contributing factor is considered heretical. Anyone seen to be stepping out of line incurs the full wrath of the medical director (I speak from experience here).

Now one could harp on about a managerial hegemony, doctor bashing, lack of evidence; even a conspiracy theory. However, this particular drum has been beaten to death: nothing happened. There are a few points that I could perhaps add to this dying discussion. As a non-interventionalist, I have literal patient contact only during a simple handshake or via the medium of ultrasound gel. Hence the necessity for most radiologists to be complicit with the infection control mantra is questionable.

But one aspect of these changes that has made a distinct but subtle difference is the death of the tie. Our female colleagues have suffered the ignominy of having to remove treasured jewellery but this is a largely personal issue. Nowadays I can’t help thinking the plethora of open-collared shirt-wearing doctors makes the radiology department look like a louche firm of copywriters. Good-bye sartorial elegance; every day is now dress-down Friday.

Where did this come from? Well, on 17th September 2007, a Department of Health (or D’oh! as I like to call it) proclamation stated under the subheading “Poor Practice” that, “Ties are rarely laundered but worn daily. They perform no beneficial function in patient care and have been shown to be colonized by pathogens”. Unfortunately, these statements are all true. Particularly the last bit – I did a quick literature search.

You could resurrect the arguments about lack of evidence that it actually prevents infection. You could take issue that the guidelines largely refer to nurses but are taken to mean doctors too. However, my issue here is the same document also states, “It is good practice to dress in a manner which is likely to inspire public confidence”, as “general appearance [is a] proxy measure of competence”. Bit of a dichotomy, eh? Do you want a professional looking doctor or a potentially cleaner doctor? Sadly, we know the answer already.

One particular conspiracy theory cannot be totally ignored. This is the emasculation theory. This centres on the shape of the tie, plus the way it unsubtly points towards the nether regions. Hence a woolly blunted-ended tie can be seen as a direct statement of masculinity, or rather lack of it. It therefore follows that forcible removal of the tie altogether is tantamount to castration. I don’t really buy it myself, but it is food for thought.

The other aspect is the tie as a social statement. The selection of tie states membership of a certain club, or perhaps, that one is worryingly keen on cartoon characters. There is also the issue of exerting the nuances of one’s personality. Just this morning on the way into work, as I idly flicked traces of black pudding and absinthe from my College tie, I wondered what particular messages I was giving out.

We could get round the issue of infection control by all adopting bow ties. However, a professor at my alma mater used to say that wearing a bow tie marked one out as a prat. Admittedly, his actual words were a little coarser, but such crudities are not for the gentle reader. Even though I know many fine gentlemen who sport bow ties, I can’t shake his words from my head. Cravat, anyone?

So, faced with an official proscription of ties, why not embrace this emancipation? Who honestly really likes the physical discomfort of wearing a tie? Surely we will all mellow slightly as an indirect result. Despite this liberality, I still feel slightly underdressed and paradoxically uncomfortable.

Alternatively, partly in demonstration, one could radically change one’s professional garb. Sadly, a set of blue scrubs makes one look like an ODA. So, where patient contact might occur it could be justified to perform all one’s professional duties dressed in the way least likely to transmit infection - wearing only a pair of swimming trunks and lightly oiled.

Semiology & Eponyms

A recent debate in the BMJ [1] on the pros and cons of eponyms started me thinking. Personally, I’ve always been in favour of eponyms. As organically derived terms, they are fundamentally non-scientific and we shouldn’t pretend otherwise. Jargon they may be, but useful jargon nevertheless. Without the eye-watering 8,000 eponyms listed on whonamedit.com, medicine would be culturally impoverished. Just think: the lasting contributions to medical science by Reiter, Wegener and other World War II Nazi doctors would otherwise go unrecognized. Furthermore, the lack of obscure triple-barrelled Germanic syndromes would make teasing successive generations of juniors more difficult.

Radiology has few eponyms that have stood the test of time. Our very own Sir Peter Kerley’s B lines are justly famous. However, when a radiology registrar confidently pronounces the presence of A & C lines, it is usually one of the first signs of their schizophrenia. The name of the mighty Leo G Rigler is mainly associated with his classic description of a double bowel wall sign of pneumoperitoneum rather than with his description of an enlarged left ventricle on a lateral chest film. The first US professor of radiology, Henry Pancoast, has an eponym that is still thriving. Plus he is possibly the only radiologist ever with the middle name “Khunrath”. Seriously.

Rather than eponyms, radiology has many “classic signs”; 684 at the last count [2]. These typically arise when a radiologist, wild-eyed from 8 hours solid at a PACS workstation, says something like, “Wow, these hepatorhubarbomas all look like mushrooms!”. A short email to a learned journal later and, hey presto, the “mushroom sign” is born.

From a historical perspective, such radiological visual comparisons are only a century or so behind our pathological colleagues. Sorry, I should rephrase that; our colleagues in the pathology department. I can imagine a bewhiskered Victorian pathologist holding up a spleen saying “Forsooth, methinks it is like sago!” before dispatching a telegram to the Lancet.

It seems that this proliferation of radiological semiology has no end. Our transatlantic cousins should shoulder most of the blame as their journals publish not only regular slots for new signs but also endless reviews lauding previously described signs.

In fairness, such visual similes can be enlightening, easy to remember and can aid explanation or teaching. For some reason, the food signs I find quite memorable [3]. This is in distinction to eponyms (and mnemonics too), where I can remember the name of the eponym/mnemonic but precious little else about it.

The main thrust of this article is that I think radiology has some truly awful signs. Moreover, they can be tenuous, obscure and at worst misleading. Furthermore, I would argue, overtly focusing on “classic signs” is misguided. Reducing our beautiful, complex, subtle speciality to a banal dictionary of visual “Aunt Minnies” fails to reflect real-life radiology. I feel it is a destructive, anti-intellectual force – a very real “dumbing down” of radiology.

I realize that, for many, I am about to commit heresy, but if you want a shining example of a tenuous sign, take the so-called “bat’s wing appearance” of perihilar opacity in pulmonary oedema. It is traditionally taught and much beloved of medical students – in fact, it seems the only thing they can remember in finals. But it is a rubbish sign, in my ‘umble opinion. Now I am no expert on the flying mammals of the order Chiroptera, but it couldn’t look less like a bat’s wings if it tried. Plus, either I keep walking straight past it (entirely possible) or it is a pretty rare appearance to capture radiographically. Besides, perihilar oedema looks much more butterfly-like to me, but I guess SLE got there first with that one.

Some signs are exercises in obfuscation. In this category of duff signs, the origin of the visual simile only makes sense to those with a PhD in The Classics. Take, for example, the Phrygian cap, where a gallbladder fundus flops over on itself. Pretty meaningless anatomical variant, really. Not even worth a separate name. But who would be able to confidently identify a Phrygian cap? Or an Erlenmeyer flask, come to that?

I won’t launch into a diatribe concerning pedantic inaccuracies in certain radiological descriptions just now. Suffice it to say that stags have antlers not horns. Complex renal stone disease will do me just fine.

Signs naturally range in their specificity and sensitivity. At one end of the spectrum are the rare signs that are 100% sensitive and specific: the truly pathognomonic signs much treasured in teaching collections. At the other end of the spectrum is a ragbag of signs with singularly dubious evidence behind them. Of course, once someone says there is a link, it is hard to disprove it conclusively. Particularly when the link is between something rare and something common. I dread to think how much research time has been wasted in disproving iffy signs. Take, for example, the elevated pronator fat pad sign of occult distal radial fracture. Decades after it was first described as useful, someone finally does a decent study and it is found to be pretty much useless [4].

To circumvent the inherent problems in using either an eponym or a sign without resorting to dry, technical language, I propose a new “third way”. This is to encourage the conjoined eponymous sign. By this method, the surname of the originator must be conjugated to the visual simile. So, the relatively anonymous “mushroom sign” becomes “McCoubrie’s mushroom sign (of hepatorhubarbosis)”. This way, credit can go where credit is due. Plus we know who to give a good kicking when they get it wrong. You never know, it might even encourage our legally minded radiological cousins across the pond to stop inventing ever-more tenuous signs.


1) BMJ 2007; 335: 424.
2) RadioGraphics 2005; 25: 257.
3) RadioGraphics 2002; 22: 1369.
4) Clin Radiol 2003; 58: 798.

Wednesday 5 May 2010

Incompetence, Feedback and the Kruger-Dunning Effect

August 2010 will see an interesting turn of events. Not only do the 180-odd newbie radiology registrars recruited by national selection start their training but also workplace-based assessment begins in radiology.

Whilst I have heard nothing but praise for the former, I have heard plenty of grumbling about the latter. Trainees are embittered by previous experience of it as Foundation doctors. Trainers are wary of the extra workload and all the acronyms. I am sympathetic to these concerns, but the underlying educational message seems to be lost in all this negativity: namely, that workplace-based assessment is best seen as a way of giving better feedback.

Recently, a registrar listening at my knee (we can’t afford chairs in my hospital) asked me for feedback. I explained that I didn’t have my electric guitar with me; the registrar fish-mouthed and the joke made an attractive centre-parting as it sailed over their head.

Most trainees appreciate feedback; a recurring criticism is that they don’t get enough. I heard of a consultant who stopped giving feedback because they were tired of making their trainees cry. Apparently, to appear balanced and fair as a trainer, we need to give five bits of positive feedback for every negative one. So, being Dr Nice Guy is about right.

The type of trainee that really interests me is the particularly thick-skinned individual who is seemingly impervious to feedback. They listen but don’t seem to hear. I’m not talking about the common-or-garden pig-headedness typically seen in ex-surgeons. The (thankfully) rare individuals that I am fascinated by also have an indefensibly high opinion of their own abilities but are often shockingly incompetent.

Perhaps you can think of a similar individual you have trained or trained with. They are a real worry mainly as you don’t know where to start. Well, I’d been musing about this for some time when I had an epiphanic moment. Earlier this year I read of the ‘Kruger-Dunning’ effect and it all slotted into place.

US psychologists Justin Kruger and David Dunning performed an elegant series of experiments in the late 1990s to study human incompetence [1]. Using a diverse set of tasks they showed that incompetence is worse than it appears to be. It forms an unholy trinity of cluelessness: the incompetent not only overestimate their own level of skill but also fail to recognise genuine skill in others and, even after feedback, fail to recognise the extremity of their inadequacy.

Furthermore, they found that the most competent underrated their abilities. This group were aware of their own abilities but overestimated the abilities of others. Subsequent studies have confirmed that this affects many real-world settings, medical trainees being no exception [2].

On digging a little deeper, it appears that Kruger and Dunning (who, incidentally, won an Ig Nobel prize for this research in 2000) have merely crystallised and quantified the wisdom of our forebears, both recent and ancient.

For example, Confucius observed, ‘Real knowledge is to know the extent of one’s ignorance.’ Jeremy Taylor, a 17th century English priest, wrote, ‘It is impossible to make people understand their ignorance; for it requires knowledge to perceive it and therefore he that can perceive it hath it not.’ In 1871, Charles Darwin observed that, ‘Ignorance more frequently begets confidence than does knowledge.’ And then there’s the British philosopher Bertrand Russell (who, ironically, never suffered much from self-doubt) who said, ‘The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.’

So, can people ever form accurate self-impressions? Probably not, it seems. Can we become aware of our intellectual and social deficiencies without outside intervention? Possibly. The Ancient Greek aphorism of ‘Know thyself’ is a bit harder than it seems.

There is one consolation to this rather nihilistic tale. Kruger and Dunning found that if you trained those who were incompetent, they began to develop insight into their previous performance. So perhaps there is a positive take-home message. If you come across an over-confident but duff trainee with a skin as thick as hide, don’t give up on them. Protect them from themselves, train them, and they will get it eventually.

References

1) Kruger J & Dunning D (1999). Journal of Personality & Social Psychology. 77, 1121.
2) Hodges et al (2001). Academic Medicine. 76, S87.