Saturday, 15 February 2014

sport science...what's the point?


This is a guest post by Canadian strength coach and sport scientist Matt Jordan.  He is currently in Sochi for the 2014 Olympic Games, where he is supporting the Canadian Alpine Ski Team.  We sat down, and also discussed a few topics over email.  I'm going to roll a few of them into a Q&A, but this one needed its own forum.

The question was:

sport science...what's the point?


quantify...

The first point of sport science for me is to quantify the impact of my training program.  I have a hard time with strength coaches who have a high level of perceived success based on having a handful of supremely talented athletes on their client list who would be great despite the training program.  I also have a hard time with strength coaches who browse the scientific literature, use scientific terminology and claim a cause-effect relationship between their programs and performance, but never actually measure anything themselves.  Essentially they cherry-pick the scientific literature when it suits them, criticize science as being 10 years behind the times when it doesn’t, and never do anything to quantify impact in their own approach.  I’m not saying a strength coach has to go to the lengths of publishing studies, but I do think it’s reasonable for us to be expected to know what matters, measure what matters and show that we changed what matters.

I think the problem is that it takes time to do this and sometimes it takes more expertise than what is provided in an undergraduate degree.  This is why I’m a big advocate for pursuing a thesis-based Masters degree.  While this process is certainly not foolproof, it does hold the individual to a much higher level of accountability when it comes to making claims about the effects of interventions.  Through this process I think an individual learns how to perform the steps of investigation, evaluation and knowledge translation, which are key for a strength coach.

Now, I completely agree that high-level science is very hard - if not impossible - to do on elite athletes, but I always emphasize the importance of quantifying impact.  Let me give you an example: I just read a tweet the other day from a very well-known strength coach saying that a compound found in a particular food burns fat.  This tweet got re-tweeted a dozen times by a bunch of athletes, and the compound that this individual identified sure made him sound smart.  The problem is that in the total absence of data, this tweet distracted a bunch of athletes into thinking there’s something out there they are missing, when the reality is there is nothing to be gained.  To me, quantifying impact would be regularly monitoring body fat in your training group and demonstrating that when this particular compound was used, the group saw an uncharacteristically large drop in body fat that occurred independently of changes in the training or nutritional program.  It could be as simple as saying that 7/10 athletes in my training group experienced a meaningful change in body fat when I began using compound ‘x’.  I realize this is not a publishable study, and is far from high-quality science, but I think it’s reasonable to expect a strength coach for elite athletes to demonstrate this basic level of testing to quantify the impact of their interventions before firing off random tweets.  In our world, the accountability to provide evidence, or to quantify the impact of what we do, is very low.
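The "7/10 athletes" style of accounting described above can be sketched in a few lines of Python.  The athlete identifiers, body-fat values and the 1.5% threshold below are purely hypothetical illustrations, not real measurements or Jordan's actual cutoff:

```python
# Flag athletes whose body-fat change exceeds an assumed "smallest
# worthwhile change" after introducing compound 'x'. All numbers are
# invented for illustration.

MEANINGFUL_CHANGE = 1.5  # % body fat; assumed meaningful-change threshold

# (athlete, body fat % before, body fat % after)
observations = [
    ("A", 12.0, 10.2), ("B", 14.5, 13.1), ("C", 16.0, 14.0),
    ("D", 11.0, 9.4),  ("E", 13.0, 12.5), ("F", 15.0, 13.5),
    ("G", 18.3, 16.0), ("H", 12.9, 11.0), ("I", 14.0, 13.1),
    ("J", 10.7, 9.0),
]

def quantify_impact(obs, threshold=MEANINGFUL_CHANGE):
    """Return (responders, total): athletes with a meaningful body-fat drop."""
    responders = [a for a, before, after in obs if before - after >= threshold]
    return len(responders), len(obs)

n, total = quantify_impact(observations)
print(f"{n}/{total} athletes showed a meaningful change")  # → 7/10
```

This is obviously not a controlled study, but it is exactly the kind of minimal bookkeeping that separates "I tweeted it" from "I measured it".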

I focus the majority of my testing on how athletes apply force.  The reality is that I have seen tons of freaky athletes who were not the leanest or the strongest in the weight room, but undoubtedly were able to apply force in ways that others couldn’t.  I also think looking at how athletes apply force and generate mechanical power provides a foundation to think outside the box with our approach to training.  For example, Nick Simpson - one of the strength coaches in our group - has done some excellent work with our sprint speedskaters using block periodization with his strength and power program.  Not only is he helping to place athletes on the podium in World Cups, but he is also demonstrating that with concentrated blocks of lifting he is able to improve lower body mechanical muscle power in a meaningful way, with the athletes doing little more than half the lifting they did previously.  In this context, when Nick says this approach works, he has a decent amount of data to quantify the impact of his approach.  I’m also using this approach for assessing athletes returning from ACL injury in the late phases of rehabilitation and to evaluate the potential benefits of eccentric loading to improve movement velocity in our speed and power athletes.


know what matters...

The second point of sport science is to know what matters for performance.  For example, I have seen strength coaches assess variables such as the 1RM power clean, 3RM front squat and a host of other strength measures that have next to ZERO correlation to performance in the sport.  From this, they generate tables and standards for what someone needs to be able to do to be good at the sport.  My response is: based on what??  Show me this is the case.  Generating a table saying this is what these athletes are capable of is meaningless.  In small cohorts over shorter time periods, I will use simple correlation analysis.  As my group gets bigger and I amass more data, I will begin to use multiple linear regression to identify important strength and power variables.  Then, once I have a ton of data, I will use techniques such as principal component analysis to identify important differentiators for performance.  To me this is absolutely critical for success in any program.
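The progression described above - simple correlations, then multiple regression, then PCA as the dataset grows - can be sketched with NumPy on synthetic data.  The cohort size, the three tests and the effect sizes are all invented for illustration; the point is only the shape of the analysis, not any real result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical testing data: rows = athletes, columns = three strength/power
# tests (e.g. jump power, squat strength, sprint time). Performance is built
# to depend mostly on test 0, somewhat on test 2, and not at all on test 1.
X = rng.normal(size=(40, 3))
performance = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=40)

# 1) Small cohort: simple correlation of each test with performance
r = [np.corrcoef(X[:, j], performance)[0, 1] for j in range(X.shape[1])]

# 2) More data: multiple linear regression (ordinary least squares)
design = np.column_stack([np.ones(len(X)), X])          # intercept + tests
coef, *_ = np.linalg.lstsq(design, performance, rcond=None)

# 3) Lots of data: PCA via eigendecomposition of the correlation matrix
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                 # ascending order
order = np.argsort(eigvals)[::-1]                       # largest first
explained = eigvals[order] / eigvals.sum()              # variance explained
```

Each step answers the same question - which variables actually matter - with a tool matched to how much data you have.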

UFC fighter Nick 'the promise' Ring - an athlete Jordan has worked with for 8 years

monitor readiness...

The third point of sport science is to monitor an athlete’s readiness to train.  In all aspects of life, science is used to monitor systems.  There are monitoring systems in your car, in hospitals, in Formula 1 racing and in environmental science.  Yet, when it comes to elite sports, too many coaches simply rely on the tried, tested and true art of asking: how are you doing today?  Now, I’m fine with this, and trust me, I learned a long time ago how important it is to ask questions and pay attention to body language and energy levels.  However, this isn’t sufficient.  I use monitoring to back up what I see and feel, and often my monitoring identifies potential issues before the athlete is even aware of them.
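A minimal sketch of how routine monitoring can flag an issue before the athlete reports one: compare today's score against a rolling baseline and flag anomalously low values.  The monitoring variable, the numbers and the -1.5 SD threshold are illustrative assumptions, not Jordan's actual protocol:

```python
import statistics

def readiness_flag(history, today, z_threshold=-1.5):
    """Return True if today's score is anomalously low vs. the baseline.

    history: recent baseline scores; today: this morning's score.
    The z-score threshold is an assumed cutoff for illustration.
    """
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    z = (today - mean) / sd
    return z < z_threshold

# e.g. daily countermovement-jump height (cm) over the last ten sessions
baseline = [41.2, 40.8, 41.5, 40.9, 41.1, 41.4, 40.7, 41.0, 41.3, 40.6]

print(readiness_flag(baseline, 41.0))  # within normal range → False
print(readiness_flag(baseline, 39.0))  # well below baseline → True
```

The coach's "how are you doing today?" still gets asked - the flag simply backs up, or contradicts, what the eye sees.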

I also use this to guide my return-to-sport process after injury and illness.  I also go back retrospectively across years of data and correlate changes in readiness to training loads.  In my elite alpine ski racers, I have four years of data as they progressed from a young group of ski racers to the top of the World Cup podium.  I can identify things that worked, things that didn’t, and phases of the year where things went well and where they didn’t.  I then use the numbers to gain insight.  I find this incredibly useful.

I also think this process becomes extremely valuable at major Games, when everyone’s senses are heightened and it’s easy to ignore the obvious or dwell on the minutiae.  In this setting, monitoring the athletes’ readiness is hugely valuable.  It’s very reassuring when we are sitting in meetings to be able to provide this information to the coach to support the decision-making process.  In my opinion, if you’re not doing this you are simply shooting from the hip.

I think a good analogy for this is the weather.  I once watched a documentary on the well-known pattern of global warming, now more appropriately referred to as climate change.  This documentary revealed that on 9/11 the planet experienced a sharp increase in temperature.  The cause was unknown.  However, by reviewing data collected around the world from agricultural settings, where the water evaporated from a pan over a 24-hour cycle (pan evaporation) was measured, researchers were able to identify that a reduction in air pollution meant more sunlight had actually been making it to the earth, and therefore had the potential to accelerate climate change.  Two things were fundamental to this: 1) systematically recording basic measurements like temperature and pan evaporation; and 2) maintaining a curious and inquisitive mindset to dig deeper into the anomalies.  In fact, the anomalies identified from routine monitoring often reveal the most groundbreaking insights.

Now, 1000 years ago, humans simply looked to the sky and got a sense of the coming weather from what they saw in the clouds, how they felt and how the animals behaved.  They also believed, based on a very limited understanding of how it all worked, that if they burned some herbs and did a dance they could change the weather.  I bet if I asked a present-day nomad who lives off the earth to give me a weather forecast, there would be decent agreement with what environmental science predicts.  However, the scientist has much greater insight, which has evolved out of those subjective assessments.  Through advancements in our understanding, the scientist is able to provide more accurate forecasts and understands that no amount of dancing or burning of herbs is going to affect the weather, because the weather is influenced by other factors.  I took the liberty of using this documentary as a parallel for sport science: no doubt our starting point is to observe and look qualitatively at what we see.  But through science and monitoring, our qualitative observations lead to good methods to measure what we see and to understand what factors influence the behavior.  From here we can use this systematic process of quiet and ubiquitous monitoring to identify anomalies, which help us make decisions and hopefully weed out the useless practices that have no effect.


To reiterate, I’m not saying this process has to mirror the scientific process - I am the first to recognize there are major limitations with this.  
Some of the limitations include: 
  • bias and corruption in the peer review process; 
  • the difficulty in changing scientific paradigms; 
  • the time-course to get something published; 
  • the lack of applicability from science done on non-elite populations; and 
  • the huge challenges in doing science on small groups of elite athletes.  

But good or bad, the scientific process is the best one I know for generating new insight in a rigorous manner.  No doubt others will disagree, but I think the process of engaging in rigorous observation, evaluation and knowledge translation still needs to be applied in our day-to-day practice.  You can call this evidence-based practice, but I prefer to call it quantifying impact: knowing what matters, measuring what matters and showing we changed what matters.  In the absence of this, we are simply like the nomad who looks to the clouds and, with great experience and wisdom, makes observations about the weather, but due to limitations in understanding believes that a rain dance will bring showers.


measure - don’t feel...

Right now I’m working through a PhD, and for the most part I love it.  I’m not going to make more money because of this process, and it’s a lot of work, but I am doing it for other reasons.  The plan is to identify the neuromuscular deficits that persist following ACL reconstruction in elite athletes in order to develop effective re-injury prevention strategies.  This process started with the simple observation that some of my elite ski racers with ACL reconstruction presented with significant deficits up to two years after injury, and that some of these athletes returned to pre-injury performance levels, while others either suffered re-injury or were not able to make it back.  Through the routine assessments I was doing of lower limb kinetics during jumping and squatting, I began to observe a threshold that seemed to differentiate the copers and non-copers.  I started using this as a guide throughout the late phase of rehabilitation to identify at-risk athletes and to guide my training prescription.  Now, this is where many strength coaches stop the quantification of impact.  However, I wanted to take this further and have my ideas vetted and tested in the scientific community.  From these observational studies, I’m delving into the neuromuscular mechanisms underlying the observed functional deficits and conducting a prospective study to identify the factors related to knee re-injury.
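A threshold of the kind described above can be sketched as a simple limb symmetry index (LSI) on jump kinetics.  The 90% cutoff, the variable (peak vertical ground reaction force) and the force values are illustrative assumptions, not Jordan's actual published threshold:

```python
# Limb symmetry index: injured-limb output as a percentage of the
# uninjured limb. The 90% cutoff below is an assumed example, not a
# validated clinical criterion.

def limb_symmetry_index(injured, uninjured):
    """LSI as a percentage of uninjured-limb output."""
    return 100.0 * injured / uninjured

def at_risk(injured_peak_force, uninjured_peak_force, cutoff=90.0):
    """Flag an athlete whose LSI falls below the assumed cutoff."""
    return limb_symmetry_index(injured_peak_force, uninjured_peak_force) < cutoff

# e.g. peak vertical ground reaction force (N) per limb during a jump
print(at_risk(1450.0, 1700.0))  # LSI ≈ 85% → True (flagged)
print(at_risk(1650.0, 1700.0))  # LSI ≈ 97% → False
```

The value of the PhD process is precisely in testing whether a field-derived cutoff like this actually predicts who copes and who re-injures.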

As I move through this process, no doubt it’s been challenging.  I’m forced to think about this on a much deeper level than I otherwise would have.  BUT, I can tell you that my initial hunches were wrong!  The factors I thought were important - based on the opinions of other strength coaches, and then on my observational studies - have proved to be minor in terms of how these athletes cope.  I’m finding new insights, and I think this is a great example of why you need to measure, not feel.  In fact, Dr. Benno Nigg (noted biomechanist) reminded me how important this is at a recent conference on science in skiing.  I opened my conversation with: “I feel this is important…”.  His response was very curt and pointed: MEASURE, DON’T FEEL.

Touché Dr. Nigg.  


Long-term, I’m hoping my research also leads to new insights on how we train elite athletes who have suffered ACL injury, so that they not only remain injury free but also return to pre-injury performance levels.   


If you enjoyed this post, please 
share it on Twitter or Facebook...thanks


Matt Jordan is a PhD student in the Faculty of Medical Science at the University of Calgary.  He is also the Head of Strength and Power Science at the Canadian Sport Institute in Calgary, Canada, and is about the smartest strength coach I know.  Matt is a lecturer in the Faculty of Kinesiology at the University of Calgary and has published many articles on strength and conditioning for athletes.  He has also presented at national and international conferences on strength training methods for high-performance athletes.  He's worked in the trenches for almost twenty years, working closely with some of Canada's top winter sport athletes.

He has previously written a short piece on McMillanSpeed on the nutrition industry. Give him a follow on Twitter...

Comments:

  1. It seems my post wasn't published so I repost it.
    Great article.

    "I will begin to use multiple linear regressions to identify important strength and power variables. And then once I have got a ton of data, I will use techniques such as principle component analysis to identify important differentiators for performance. To me this is absolutely critical for success in any program."

    When you use principal component analysis, what intuition and heuristic conclusion do you draw if, for example, you project onto the first principal component and have negative components in the first eigenvector of the correlation matrix?
    Second, if you need to consider more than one principal component - say the 3 largest eigenvalues - how do you interpret the data in terms of your original explanatory variables (since they'll now all be mixed)?

    Finally, given the nature of data gathering (it's not a criticism, it's just the nature of coaching), you don't have a well-defined experimental design: for example, you don't have a random sample, and you don't have a well-defined treatment to administer.  How do you get rid of the confounding factors, and how much value do you give to your quantitative analysis?  How do you apply that knowledge?
