The need to stay "up-to-date" on research is probably oversold

As someone who has literally been reading research for 30 years, this is an odd opinion to hold.  I love reading research papers.  I've been a nerd for a long time.  But… I think the pressure put on working clinicians to stay "up to date" is, for the most part, totally overcooked.


Big Caveat:

I’m not saying our profession does not need research.  We actually need more.  


So, what the hell are you saying?

I think that continuing education providers, researchers hell-bent on knowledge translation, research review services, and Instagram PubMed warriors can make clinicians feel like they know nothing, are falling behind, and should feel guilty for missing out on some purported new material.  We too easily create an unnecessary sense of inferiority and insecurity when we laud the importance of new research for clinical practice.


There seems to be a lot of "sales" associated with the consumption of research papers, and a lot of "rise and grind" mentality, that I believe can make clinicians feel overwhelmed.  And I totally reject that.


I would suggest that the need to keep up to date is exaggerated and rarely influences clinical practice in a good way.  In Part I of this blog I will lay out the critique, and then in Part II I'll argue against myself and put forth some simple strategies on how and what to read in the research space.


Why are you so critical, Gregio?

I believe that the value of being critical is that it helps people simplify things.  So, this critique is much less about bashing something and more about culling the clutter to help us focus on where we should spend our time.

The Case against being a voracious research consumer


1. The CE industry can make you feel stupid by elevating the importance of minutiae


Whether it's on purpose or not, I think those of us in the Continuing Education industry can make people feel stupid in order to make our product feel needed.  It is very easy to spout off 10 research papers on the minutiae of some disorder or exercise (I'm looking at you, BIOMECHANICS).   The average clinician may not know this material, so they think they are missing out, that they are somehow lacking, and that without knowing it they aren't serving their patients well.


But if you read a lot of impressive research papers, you will realize that they are often selling an Artificial Precision.  Meaning, the findings are real and impressive, but they don't actually do anything to help you in your clinical practice. They add unnecessary complexity and detail but really change nothing. At least not yet.

An example of this is the number of papers that rank exercises based on some biomechanical variable.  This could be the strain the Achilles tendon undergoes (link), the load the kneecap feels during common tasks (link), the ratio of spine compression to muscle EMG (link), etc.  This is cool research to me, but as clinicians we should be critical of how little it really helps us in practice.


You can see it in this wonderful paper by Baxter (2020).  They rank exercises based on Achilles tendon loads (peak load, loading rate, loading impulse).  But do you really need this paper to know that running or hopping is more stressful than a seated heel raise?   Conversely, do we know if it's relevant that single-leg lateral hopping has a peak load of 7.3x body weight (BW) while running has a peak load of 5.2x BW? What does this mean?  Is this really guiding your prescription?  Should you have people run to prepare them to laterally hop because peak load, loading impulse, and loading rate are all lower in running? Do we know if a difference of roughly 2x BW is relevant for pain, or for stimulating the development of the attributes required to recover?

And what about the other measures?   If you compare walking with a single-leg heel raise, you'll see that peak load is about the same, loading rate is lower during the heel raise, yet loading impulse is higher during the heel raise than during walking.
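
To make the point concrete, here is a minimal sketch (in Python) of the kind of ranking these papers produce. Only the 7.3x and 5.2x BW figures come from the numbers quoted above; every other value is a made-up placeholder for illustration, not data from the paper. Notice that sorting by a single loading variable mostly reproduces the ordering any clinician would guess.

    # Toy example: rank exercises by peak Achilles tendon load (multiples of body weight).
    # 7.3 and 5.2 are the values quoted above; all other numbers are placeholders.
    peak_load_xbw = {
        "seated heel raise": 1.0,       # placeholder
        "walking": 3.9,                 # placeholder
        "single-leg heel raise": 3.9,   # placeholder
        "running": 5.2,                 # quoted above from Baxter (2020)
        "single-leg lateral hop": 7.3,  # quoted above from Baxter (2020)
    }

    # Sort from least to most "stressful" by peak load.
    for exercise, load in sorted(peak_load_xbw.items(), key=lambda kv: kv[1]):
        print(f"{exercise}: {load:.1f}x BW")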


What does the precision tell you?  Is it really guiding your practice? You would need to know which loading variables are important for recovery, or which variables you should minimize or maximize, and we aren't there yet.  I'd suggest that the average clinician could just look at all of these exercises and pretty much estimate which ones are "harder" on the Achilles (whether by peak load or speed of loading), and we'd get pretty much the same results - or at least be in the ballpark. And any differences wouldn't matter (that's the artificial precision).  And you know whether it's the right or wrong exercise based on how the person feels during the exercise and the next day. So did these results really guide your exercise prescription? Or did simple things, like what the aggravating movements were and what the goal task looks like, actually drive your exercise and movement prescription?


We saw the same thing happen in the 1990s with spine "stability" exercises chosen based on the ratio between spine compression and EMG.  Hence the creation of the Big Three (bird dog, curl-up, side bridge).  All of those exercises are fine, but the claim that the compression-to-EMG ratio is needed to choose an exercise has not been borne out by any subsequent research.  Again, artificial precision. Which leads us to our next point: we adopt research too quickly and create clinical dogma.


2.  Research is bought and sold prematurely


The previous scenario falls into this category.  The spine and tendon loading research is impressive biomechanically.  But that is just the first step in establishing clinical relevance.  That's why being a researcher is so hard.  You can't just tell us that an exercise has a high tendon loading rate - you need to prove why this is important.   You need to prove that choosing exercises based on your ranking system is superior to choosing exercises by slowly building someone up to match the demands of their goal task, simply by looking at what loads and speeds people obviously experience when they do an exercise. So, this type of research is great for other researchers to build on. It's one brick in the wall, but unfortunately clinicians see this one brick and mistake it for the wall.


An example of this was seen in the VMO and knee pain world.  The researchers found something (e.g., the VMO's activation is delayed in people with kneecap pain).  A program was created to address it, and it helped people get better.  But what took a long time for people to realize was that they were getting better for another reason.  The exercise program was just a gradual loading program, and the recovery had nothing to do with VMO retraining (wonderful research self-reflection here).    So, if we had our skepticism in place at the start and hadn't wholly adopted this model too early, we would never have had to self-correct.  Related…


3. Research is often about undoing the wrongs of the past


I like research for this.  We see it in the manual therapy world, where we've realized that we don't need to be specific with manipulation, that joints don't go out of place, and that motion palpation is invalid.  This is good research, but the problem was adopting those ideas in the first place based on poor research or premature celebration of research for clinical practice.  Just like with local segmental spine "stability" training.  That research got accepted and adopted far too prematurely as a needed or superior method of rehabbing low back pain.  It was taught as a required way to "stabilize" the spine or help with pain, when it would have been more accurate to say it was simply ONE way to exercise the spine.  Research was needed here to undo the over-adoption of this method.  But again, the mistake was adopting the preliminary and premature research conclusions far too early.


We should just have waited for more research. You might have been better off if you had never kept "UP-TO-DATE".


4.  Research is overproduced and oversold


There is just too much, and so much of it feels like it's written because people need to write something.   But again, I'm speaking as a clinician.  We do need a research base, but I would say the relevance of the research is oversold to the clinician.  You can't bury in the discussion of a research paper an admission that the authors have NO IDEA what the clinical relevance of the work is, when that should be exactly what the conclusion says.  Related is the number of reviews that simply repackage ideas that have been around for decades and dress them up with the "latest" research, which really doesn't improve on the original idea.

Summary

I really do love reading research, but a lot of you don't. And you shouldn't feel too guilty. We need to be critical consumers of research. You need to ask whether a paper influences how you practice in a positive way, because a lot of what is out there doesn't, even though it may be great research.


Part II is coming soon, where I naturally argue against myself, talk about the value of research in clinical practice, and maybe lay a rough foundation for how to consume research to improve your clinical practice.

Greg Lehman