Wednesday, 18 November 2015

a coaches' guide to strength development: PART VII - data collection in practice; a guest-post from Martin Bingisser

Last time out, Matt and I discussed the importance of targeted data collection and an acute awareness of the training process (it's been a while - I suggest you check it out before reading on). I can think of no better example of the application of these two constructs than the work of legendary throws coach Dr Anatoliy Bondarchuk.  I first became aware of 'Dr B' back in the late 90s. 

There were some poorly translated texts kicking around that made his work sound like the typical myth-based Russian methodology that was popular at the time through the writings of Charniga, Siff, and Yessis.  But it wasn't until I attended an awesome conference in Edmonton at the Canadian Athletics Coaching Center in 2004 that I started to become aware of the power of his methods.  Many subsequent conversations with Coach Derek Evely gave me a pretty unique insight into his philosophy - I just wish I had the balls to some day try to apply it to sprinters (although I have run quite a few 'mini-experiments' over the years)!  

Having said that, I encourage all coaches to think about how they can apply Bondarchuk's methods to their practice.  I'd love to hear from those who may be already applying it in events other than the throws - or from strength & power coaches who are using it in their weight room practice.  There is much wisdom and power in these methods if correctly applied - believe me!

Few folk in the world know the Bondarchuk system better than Martin Bingisser.  Martin has been coached by Dr B for over a decade, and knows it inside-out.  He is also a great writer, and an awesome credit to the coaching fraternity - sharing bucketloads of great info on his site, where he has a brand new webinar on Bondarchuk's methods.  

Make sure you check it out after reading this great overview of Bondarchuk's methods:


A Practical Application of Data-collection Coaching: the Bondarchuk method: a guest-post from Martin Bingisser

In the last part of this series Stu and Matt began discussing data collection. Both provided a great outline of what coaches need to know in this area and why they need to know it. The next step is putting these ideas into practice. How does a coach decide what data to collect, and how does the data actually impact training? Stu invited me to share my experience with the data-driven methods of Dr. Anatoliy Bondarchuk to give an example of how it looks to implement the principles they described. 

For those of you not familiar with Bondarchuk, he is that rare coach who has decades of both coaching and research experience. Having been an Olympic champion in the hammer throw himself, he went on to coach more than a dozen medalists and several world record holders in the throwing events. And as Soviet national coach for nearly twenty years he had access to troves of data from performance tests on thousands of athletes. His methods arose from this data and have produced some important principles that can be applied to any sport. That is a common theme on my site HMMR Media: finding these principles and how other world-class coaches have arrived at them independently. But that is a topic for another day. What is relevant here is that, having had the chance to work with him for nearly a decade, I got to see his methods first-hand, and they provide a great example of the effectiveness of data collection in driving our practice.

A Detective Needs Data

When we talk about data, we immediately start thinking about numbers, and trends, and progress. But the goal of data is rarely to see how much progress we have made; we have competitions for that. Instead we collect data to help understand why we are or are not making progress. Recently Australian sprint coach Mike Hurst shared the following quote with me from former Australian Institute of Sport athletics head coach Kelvin Giles:

“Coaching is a detective story: we are always looking for clues.”

The core philosophy behind Bondarchuk's training methods is transfer of training. Transfer of training has become a buzzword recently, yet Bondarchuk began talking about it decades ago. The philosophy requires choosing training elements and methods that will make you better on the field for your sport. This is something all coaches agree with, yet in practice it is impossible to know if your methods transfer without data. Will throwing heavy implements help a hammer thrower? Most likely. Power cleans? Probably. Bench press? Likely not. You can only come to these conclusions if you gather data to find the links between training and performance.

Put the Individual First

The links are often harder to find, and less intuitive, than we think. One study I love to cite as an example of this is a look at swimming warm-ups. A few years ago at the University of Alabama, a master's student tried to find the best warm-up for the school's swimmers. He experimented with no warm-up, a short warm-up, and a regular warm-up. The conclusion of the study was that the "regular warm-up was better than short warm-ups to achieve the fastest mean 50-yard freestyle time". However, if you look closer, the answer was less clear-cut. The regular warm-up worked for the largest share of athletes (44%), but a majority of athletes actually performed better with one of the other two approaches. 

This underscores the importance of individuality. It is rarely the case that one approach is best for everyone. This is why different individuals in your training group show different responses to the same training plan. Studies on groups can help provide a starting point, and it is then up to the coach to individualize the solution. Exercise selection should be individualized to include what the athlete reacts best to. The length of each training phase should be individualized to fit their adaptation response. Technical models should be individualized to optimize the athlete's unique characteristics. And only with proper data can you see how the individualization should be implemented and if it is working.

Bondarchuk in a Nutshell

Here is the CliffsNotes version of Bondarchuk's approach to training throwers. For a deeper discussion, check out our articles on HMMR Media or our recently released webinar on this very topic. It is important to note that this is not the training method Bondarchuk would use for all athletes - and he would be the first to admit it. Athletes have had, and will continue to have, great results with a number of systems.

Bondarchuk calls his method ‘complex periodization’, but that name is a misnomer as the method is actually quite simple in many respects. During each training phase, we select 8 to 10 exercises and simply repeat them with the same volume and intensity at every session until the athlete reaches ‘peak form’. His training for throwers is based around and builds upon this core approach.

Data is collected every day in his programs. The best throwing result from each session is measured and tracked. It is this information that is used to individualize training both quantitatively and qualitatively. Quantitatively, by measuring the daily result the coach can track the athlete's adaptation to this set of exercises. The coach can then determine how long an individual athlete needs to fully adapt and reach peak form before things need to be changed in order to stimulate a new adaptation. On the qualitative side, the coach can use the data to see how much the body responds - how much growth there is in the phase. Some peaks are higher than others, and deciphering what leads to higher peaks will help a coach replicate those results in the future. Looking at the data in this light helps determine what exercises, or combinations of exercises, produce the best results.
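To make this concrete, here is a minimal sketch of what such a daily log might look like in code. The distances, the three-session smoothing window, and the simple 'peak = best smoothed result' rule are all my own illustrative assumptions, not Bondarchuk's actual protocol:

```python
from statistics import mean

# Hypothetical daily best hammer throws (metres) across one training phase
daily_best = [68.2, 68.5, 68.1, 67.4, 66.9, 67.2, 68.8, 69.6, 70.1, 70.4, 70.2]

def rolling_mean(values, window=3):
    """Smooth day-to-day noise before looking for the peak."""
    return [round(mean(values[max(0, i - window + 1):i + 1]), 2)
            for i in range(len(values))]

smoothed = rolling_mean(daily_best)
days_to_peak = smoothed.index(max(smoothed)) + 1  # 1-indexed session count

print(f"Sessions to reach peak form (smoothed): {days_to_peak}")
```

In practice a spreadsheet does the same job; the point is only that a single number per session, logged consistently, is enough to estimate how long a given athlete takes to reach peak form.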

Martin with Dr B


Measure What Matters

With new technology, it is possible for coaches to measure thousands of variables in training. But what is the point of measuring a variable that has little connection to your sport? Bondarchuk chooses to measure the most specific element: our throwing results. Our goal is to throw far, so we measure how far we are throwing. This is the best way to see if our training is transferring to results on the field. Note that while this seems straightforward, there are some complexities, as external variables such as environment, facilities, weather, technique, etc. may all contribute to the results not matching an athlete's true form. Nevertheless, it is the best measure we have, and we are lucky: in some sports it may not be possible to objectively measure on-field performance at all. In those sports there are still good options - such as velocity measures in the weight room, jump tests, or any number of other choices. 

The key is that you want to find the best measure of what matters for your sport.

Measure What You Can Capture 

If you choose to measure something, it should be something you are prepared to repeat. Therefore it needs to be easy to capture. Force plate measurements may be cool, but if you have access to force plates twice a year, what good will so little data do for you? By measuring throwing, Bondarchuk chooses a simple measure that can be easily replicated. All we need is a cheap tape measure, and no matter where we are, we are able to measure. 

The more complicated the test, the less likely you will replicate it very often. 

Measure What You Will Use 

Before you start capturing data, have a plan for how you will use it. If you cannot answer that question, then you are wasting time and effort. Every coach I know is pressed for time. Capturing data you will not use is a waste of time and will bury the useful data in a haystack full of irrelevant info. If you look at recent US national security issues the problem is rarely that they have too little data. The problem is that they have too much data - and spend so much time collecting it that they cannot find the useful data within. Is the purpose to simply collect data, or to find something you can use? 

Bondarchuk has a clear plan of what he is looking for: he wants to find the individual's adaptation curve so that he knows when to change exercises and begin the next training period. This makes his data collection efficient and useful.

Minimize The Variables 

A major reason that data collection works for Bondarchuk is that he has minimized the variables in training. If you look at most people's training programs, you might see 30-40 training exercises in a week. If throwing results improve, how will you determine what helped? No matter how much data you capture, the link will be hard to determine. And if progress is not there, the tendency is to add even more new exercises, making the analysis even harder. However, if you reduce the elements in play - as Bondarchuk has - you can more easily identify what worked. This is similar to the concept of via negativa that has become popular in the wake of Nassim Taleb's book Antifragile (check out the 'anti-fragile athlete' here). You can make training better by making it simpler.

Many people focus on details of Bondarchuk's method in terms of volume and intensity, but when you step back you see that his system really works because of the process. It is the detective's system. It gives you more useful clues both in the short- and the long-term. I just described how the method gives coaches more useful clues in the short-term because it has fewer elements. From a long-term perspective the fact that his system also produces up to six peaks per year as opposed to the normal one or two also helps. That gives coaches more chances each year to crack the code of what works - as explained in a recent interview I did with him.

Don't Overreact 

When working with data, a tendency is to look at the details. While details are important, you cannot lose sight of the big picture. Suppose the data shows that the trend is going down. Should you change what you are doing? Not necessarily. Ups and downs are part of the body's normal adaptation process. Bad days can be expected - and indeed are needed - if you want to press the body enough to stimulate adaptation. I mentioned that with Bondarchuk we will typically take the same program and repeat it until an athlete reaches a new peak. The road to the new peak is not always up: with most athletes the results will be flat for a few weeks, then fall off, before rebounding to the new peak. If you react too quickly and change things when the results are down, you will lose out on the new peak. Data can also include outliers, which may be due as much to randomness as to training. Therefore it is important to look at the big picture before making any important decisions based on training data. 

In other words, listen to the signal and not just the noise.
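As a toy illustration of why overreacting is costly, consider a hypothetical phase with the flat-dip-rebound shape described above, and a naive rule that changes the program after any two consecutive down days:

```python
# Hypothetical daily best results (metres) showing the dip-before-rebound pattern
results = [70.0, 69.8, 70.1, 69.2, 68.5, 68.0, 68.4, 69.5, 70.6, 71.2]

# Naive rule: change the program whenever two consecutive sessions decline
naive_flags = [i for i in range(2, len(results))
               if results[i] < results[i - 1] < results[i - 2]]

# Patient view: the final session is the best of the whole phase
new_peak = max(results) == results[-1]

print(f"Naive rule would have changed the program at sessions {naive_flags}")
print(f"New peak reached by staying the course: {new_peak}")
```

The naive rule would have scrapped the program mid-dip, right before the athlete rebounded to a new personal best.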

It All Comes Back to Transfer and Individualization

Confucius described three ways that we learn: by imitation, by reflection, and by experience. We can all agree that imitation is the easiest way, and also the least effective when it comes to training. Experience is of course helpful - however, I would argue that experience alone is not a way to learn. It is only after you reflect on the experience that you actually learn. That is why data analysis is so important; it helps coaches learn by reflecting.

Data collection helps you learn about transfer. It helps you learn about your athletes. These are the two points we started with above. It should help you make sure that the training you prescribe will bring about the best adaptations for each of your individual athletes. If your data is not helping you do that, you need to reassess your approach to data collection. All you are doing then is wasting your time, your money, and your resources. 

Bondarchuk’s approach shows just one way that data collection can be simple and assist the coach in a meaningful manner.

Thanks for reading.  If you are enjoying this series, please share on Twitter or Facebook.  

Martin is a former two-time All-American hammer thrower at the University of Washington, and has gone on to represent Switzerland in 13 competitions.  He has been coaching with LC Zurich since 2010, and has lectured around the world for organizations such as USATF, UK Athletics, Swiss Athletics, and the European Athletics Coaches’ Association.  As a writer he has been published in Modern Athlete and Coach, New Studies in Athletics, Track Coach and various well-known training websites - including of course his own HMMR Media.

Monday, 3 August 2015

a coaches' guide to strength development: PART VI - awareness & data collection

Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance … do not trust anyone—including yourself—to tell you how much you should trust their judgment.
Daniel Kahneman 

As discussed in the last post, a key to the organization of training loads is not to blindly follow  pre-set ‘periodization’ models, but to develop a training-coaching philosophy that guides the way you and the athlete(s) move through the macro- and meso- levels.

Philosophy - reasoning

Our philosophy is concerned with what counts as genuine knowledge (epistemology) and with the correct principles of reasoning (logic); it is based upon rational thinking - and deals very little with instinct, faith, or analogy. It relies primarily on logic: a conscious analysis of all the options, a prediction of potential outcomes, and an understanding of their relative utility.  

But this takes time.  
Time that is often not available to coaches.
So instead we rely on what is often called our ‘awareness’, and it is this that is essential to understanding micro-adaptation.  

As it has been a while since we released the philosophy section, we highly suggest you go back and review it, as it will lend a little more context to what we will discuss in this long section on awareness - what it means, and how it can be supported by simple data-driven metrics. 

We hope you enjoy, and find it useful to your own coaching practice.  The second section (beginning with 'Tracking What Matters') of this post is written by Matt. 

First - a disclaimer:
If you read the last couple of posts in this series, you may feel we are ‘anti-plan’ - we are not.  

Having a plan is important.  
But it must be preceded by a competent understanding of our philosophical underpinnings, and it should not be so overly detailed that it affects the manner in which we respond to the dynamic nature of the daily perturbations in athlete response. 

Planning - and the terminology that comes with it - can give us a false sense of security.   Pre-set ‘periodization’ models deflect responsibility.  They allow us to pass the buck.  If things go wrong, then it wasn’t us - we can blame the model.  And there is an easy fix - we just switch the model!

The reason we began with the philosophy section is that this philosophy gives us the ability to ask the relevant questions.  

But it does not answer them.  

Philosophy asks the question.  Science gives us the answer.  

Philosophy is our theory.  Science is the experiment.  

Science is the coaches’ attempt to answer a philosophical question by forming a hypothesis (the program), then running a trial & error experiment - the results of which form the conclusion - furthering our philosophical understanding -  and leading to further questions (and therefore, further experiments). 

Philosophy is the macro.  Science is the micro. 
It is the interplay between the two - knowledge at breadth, and knowledge at depth - that distinguishes successful coaches. 

Dual Process Theory

I am a quick thinker.  Ask me a question, and invariably, you will receive an answer immediately.  Matt is a little more purposeful - he thinks things through, weighing up all sides of an issue before reaching a decision.  These two very distinct ways of arriving at an answer are together known as dual process theory.

Popularized by Daniel Kahneman in his book Thinking, Fast and Slow, dual process theory has its roots in the early days of neurological research, when in the 1800s, Wigan & Hughlings identified two cognitive systems: what were then known as verbal-analytic and narrative-experiential.  

Kahneman has since referred to the two systems as intuition (or system 1) and reasoning (system 2).  Intuition is fast and automatic; while reasoning is slower, and more conscious.  

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control … System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”

I’m currently in the UK, struggling first-hand with System 1 and System 2 oscillation:  not only am I having to drive on the other side of the road, but everything inside the car is backwards as well - from the side of the car I sit on, to the stick shift.  What is normally a fairly automatic process is now clearly far front-brain.  I expect that by the end of the week, though, most of this System 2 work will be taken over by System 1, and I can go back to driving with more fluidity.  

How does this relate to coaching?

It is important that the coach is able to switch between the two systems depending upon the situation - the dynamic interaction between systems 1 & 2 allows us to not only make on-the-spot coaching decisions, but also to debrief these decisions later, comparing with previous situations, and ultimately learning from these experiences.  As mentioned, both System 1 and System 2 are equally important, and it is important to understand that expertise in each relies on experience and knowledge. System 1 effectiveness increases as it is provided with more context through System 2.  The simplest way to understand this is to think of System 1 as a pattern-recognition device.  It relies on heuristics that have been developed over time through experiences of similarity, representativeness, and attributions of causality (Tversky & Kahneman).  Your subconscious mind finds links between your new situation and various patterns of these past experiences. 

“Skills are acquired in an environment of feedback and opportunity for learning … thinking about things - discussing things - helps to develop models-heuristics that will both cognitively and intuitively aid in making decisions”

When I first got to the UK in 2010, there was a distinct divide between the ‘scientific’ coaches, and the ‘practical’ coaches.  The practical coaches had years of experience - usually as both an athlete and a coach - and in many cases, considerable international success.  The old argument that empirical knowledge precedes scientific rationalization held very much true in the UK coaching world at the time.  ‘Practical’ coaches relied on their own personal experience as athletes, on pre-set training models, and their own intuition - rather than a deep philosophical (what they termed as ‘scientific’) understanding of the sport, or event.  The problem with this is that it is this deep understanding that leads to robust intuitive skills.  System 1 expertise has its base in System 2 depth.  

It is true, however, that many of these coaches were highly successful internationally - and we all know ‘traditional’ coaches who enjoy much success, without a deep understanding of their sport.  

Perhaps because of this reliance on ‘intuition’, they built proficient System 1 skills.  Compare this with the world of medicine a few generations ago:  medicine was - by necessity - focused on clinical symptoms rather than lab values.  Intuitive skills were developed because the only things doctors had to go by were these symptoms, so they were forced to become extremely careful observers.  

No one will argue, though, that medicine has not progressed over the last few decades - it most certainly has; the best doctors are those who have keen observational skills, supported by an understanding of all the data available to them.

The key is in this understanding.  We can all understand anything better than we currently do.  As a self-confessed ‘generalist’, I often find myself skipping over headlines rather than delving a little deeper into the details (perhaps this is why Matt and I work so well together, as he - through the necessity of doing a PhD - digs very deeply).  As discussed in the last section, we try to direct focus towards the fundamentals.  The key is to figure out what these are, and to understand them in depth.

Be Present

“It is of little use to us to be able to remember and predict if it makes us unable to live in the present”
Alan Watts 

Nothing annoys me more than watching a coach not paying attention during training.  Whether it is mindless chatting with others, continuous checking of texts and emails on his or her phone, or just simple day-dreaming, coaches who do not pay attention do not deserve to call themselves coaches.  Real coaching requires constant attention - being present.

At risk of sounding overly zen, when we are not present, our ability to react is compromised.  An over-dependence on planning (the future) and former program outcomes (the past) impedes our appreciation of what is happening right now.  Let us learn from the past - but not be dictated by it.  And let’s not get too preoccupied with the future, as that kidnaps value from the present.

When not distracted by meaningless banter, text messages from home, and what you are going to cook for dinner, we can learn to ‘listen’ with a quiet mind: what the Quakers call the ‘still small voice within’. Even well-meaning coaches can allow a busy mind to get in the way of absolute awareness: I often find myself busy planning future sessions, over-analyzing technique, theorizing, and judging; there is a fine line between being analytical and being so critical you lose sense of everything else around: the proverbial losing sight of the forest for the trees. It is important we are not biased by what we expect to see - instead remaining open to what we actually see.


A recent challenge to a coach’s presence is the continued improvement in technology - both general (smartphones, mostly), and specific (various tracking devices, etc.).  And while much of this technology has aided our understanding of athlete adaptive processes, much of it hinders our intuitive skills.   With every new technological advancement, our intuition is further impaired. Instead of developing our skills of perception, we instead rely on technology.  It is a symptom of our insecurity - just as we use mechanistic prediction models as ‘crutches’, we do the same with the latest technological gadget.  It takes the pressure away from us.

Scientists Lancelot Whyte and Trigant Burrow called this move away from instinct ‘dissociation’ - simply, that we have allowed reasoning to dominate our lives out of all proportion to intuition.  Although initially referring to the chasm that has developed between our brains and our bodies, this dissociation has only increased with the continued rapid pace of technological advancement.

As with your colleges, so with a hundred ‘modern improvements’; there is an illusion about them; there is not always positive advance … Our inventions are wont to be pretty toys, which distract our attention from serious things.  They are but improved means to an unimproved end, an end which it was already but too easy to arrive at …
Henry David Thoreau


Having a clear philosophy helps to guide us through major methodological questions in our programming.  It helps us with macro- and meso-planning.  Active improvement of our intuitive skills allows us to more effectively vacillate between System 1 and System 2 thinking - making faster, better decisions when necessary - and enables us to better understand micro-planning. 

But we are still human - and subject to all manner of biases.  

“All mortals are fallible, even the smartest among us, including the scientists. We are prey to cognitive lapses, some of them built into the very machinery of thinking, such as the statistical fallacies we are prone to commit.”
Tversky &  Kahneman

Our intuitions will always bias us, and our judgments will always be dependent on a background of intuitions. However, there is room for slight improvements - by being aware of them, by understanding our fallibility, by speaking to colleagues when we are unsure, by occasionally seeking out contradictory beliefs, and by truly paying attention to our practice, we can reduce their relative impact. 

Regression to the Mean

In addition, we often mistakenly attribute an adaptation as being caused by a certain stimulus, when the response would have happened anyway.  This is a highly important concept to understand - more than just the difference between correlation and causation, regression to the mean can bias our conclusions quite easily.  

To account for errors stemming from bias and regression to the mean, scientific experimentation employs a control group - or a group against which a treatment effect can be compared.  However, training rarely has a control group.  If a coach or athlete believes something might be effective, it is often difficult to withhold this treatment or training stimulus for the purposes of avoiding errors related to bias.  And as training has no controls, our only option is to improve our intuitive skills, and - without burying ourselves in hard to decipher data and complicated monitoring systems - devise a high quality, simple data-collection system. 
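A quick simulation makes regression to the mean tangible. All the numbers here (the true ability, the noise size, the selection cutoff) are arbitrary assumptions chosen only for illustration:

```python
import random

random.seed(7)  # deterministic for the example

TRUE_ABILITY = 70.0   # every simulated athlete's underlying ability (metres)
NOISE_SD = 2.0        # day-to-day variation

def test_result():
    return TRUE_ABILITY + random.gauss(0, NOISE_SD)

# Two independent test days for 1000 simulated athletes
athletes = [(test_result(), test_result()) for _ in range(1000)]

# Select the "underperformers" on test 1, then look at test 2 - no intervention
worst = [(a, b) for a, b in athletes if a < 67.0]
avg_first = sum(a for a, _ in worst) / len(worst)
avg_second = sum(b for _, b in worst) / len(worst)

print(f"Worst group, test 1: {avg_first:.1f} m")
print(f"Worst group, test 2 (nothing changed): {avg_second:.1f} m")
```

The 'worst' group improves on re-test with no intervention at all; a coach who had prescribed extra work to that group in between would be sorely tempted to credit the intervention.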


Tracking What Matters

Intuitively, great coaches monitor quality of movement and an athlete’s psychological disposition using the power of visual observation and conversation.  However, it’s tough to quantify these highly important elements.  For this section, I will assume we all agree on the importance of a coach who pays attention and the importance of capitalizing on intuition and the powers of daily interaction with an athlete to make decisions.  These elements must be considered and used alongside objectively determined metrics at all times.

I have to be honest: over my 15 years working with elite athletes from a variety of sports, this has consistently been one of the more challenging aspects for me.  The soft skills of strength coaching - such as motivating athletes, programming, and cueing technique - are so much easier to come by, especially for the natural coaches out there.  Further, they are oftentimes much more valued by athletes and teams than demonstrating impact on performance through data. 

I’ve always said, right or wrong, the biggest success factors for a strength coach are looking the part, effective athlete communication, and being able to get along in a high performance sport environment (see more on this in part III of this series from Brett Bartholomew).  The relative ease of acquiring these skills compared to implementing a data-driven, decision-making process and athlete-monitoring framework is significant.  Not only is there a considerable amount of time involved in collecting and analyzing data, but also a great deal of confusion around what metrics truly matter. It’s interesting to note that despite the real challenges in bringing training from art to science, it sure seems to be a consistent theme in the training philosophy of many strength coaches. Whether or not an effective data driven system is being implemented is a whole other question.  

In my early days, I can think of a few strength coaches who all independently viewed their programs as ‘scientific’, and believed objective metrics were critical, yet arrived at totally different conclusions as to what mattered.  I recall receiving a range of advice from: “testing explosive ability never told me anything – I only track progression in gym lifts” to “we need to do entry and exit testing for all training phases” - which at the time involved using a linear position transducer to assess everything from ‘power’ to strength curves (what a waste of time that was).  
As a young strength coach, I was highly confused.  On the one hand I saw strength coaches simply tracking changes in body composition and progression in lifts, and on the other, I saw strength coaches digging deeper into other strength abilities.  I also read about all sorts of different tests in both scientific publications and textbooks.  I had my undergraduate degree in Kinesiology, and I had spent a lot of time learning the art of strength development through my own career as a weightlifter and gym rat, but despite this, I could never really figure out how and why I should implement more of a data-driven approach.  

I was also somewhat shocked to see that the quantification of training load was nothing more than a theoretical consideration for most programs.  On the one hand, monitoring training load was the cornerstone of any periodization textbook worth its salt; yet on the other hand, I can’t remember a single strength coach who could actually answer the question: how much has your athlete trained in the past week?  

This seemed like a pretty straightforward question when it came down to effective periodization. While it was always discussed in terms of the theory of strength development, it just didn’t seem overly important in practical terms for the acute programming process.
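For coaches who want a simple starting answer to that question, one widely used option (not specific to any coach mentioned here) is Foster's session-RPE method: multiply each session's duration by the athlete's rating of perceived exertion. A sketch with hypothetical sessions:

```python
from statistics import mean, pstdev

# Hypothetical week: (duration in minutes, session RPE on the 0-10 scale)
week = [(60, 7), (45, 5), (75, 8), (30, 4), (60, 6)]

# Session load (arbitrary units) = duration x RPE
session_loads = [minutes * rpe for minutes, rpe in week]
weekly_load = sum(session_loads)

# Foster's derived metrics: monotony = mean/SD of daily load (rest days = 0),
# strain = weekly load x monotony
daily = session_loads + [0, 0]  # two rest days
monotony = mean(daily) / pstdev(daily)
strain = weekly_load * monotony

print(f"Weekly load: {weekly_load} AU, monotony: {monotony:.2f}")
```

A weekly sum like this, plus monotony and strain, can be maintained with nothing more than a notebook or a spreadsheet - which is exactly the kind of easy-to-capture measure argued for above.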

The truth is that the softer skills I discussed earlier were really the big rocks in the bucket.  They get you a long way in terms of effective strength programming and delivery.  However, the final few percent of programming effectiveness can only really be obtained when specific program elements are tracked and monitored.  While there are several important angles for assessment/monitoring, such as nutrition and health, I’m going to focus primarily on those implemented on an ongoing basis related to strength development and athlete adaptation.  I’m also going to focus on elements that can be quantified, rather than the important qualitative observations great coaches make every single day.  

The PUSH band is an excellent example of a simple, easy-to-use device that does not get in the way of your coaching

Why Monitor?  

Several years ago, I watched a very interesting documentary.  Whether its scientific claims are well supported is not something I want to debate – instead I want to focus on the analogy to sport.  The documentary focused on climate change, and began by highlighting a very bizarre observation for climatologists: a temperature spike following one of America’s worst days – the 9/11 terror attacks.  Global temperatures are (generally) quite stable, so this observation baffled scientists. 

It’s interesting to note that to solve this mystery, they turned in part to a consistently recorded metric called 24-hour pan evaporation.  This wasn’t a complicated measurement system – as the name indicates, farmers literally placed a pan of water outside and measured the amount of water that evaporated in a 24-hour period.  It is an indirect measure of the amount of sunlight that hits the Earth.  Well, lo and behold, in the period after 9/11 this data demonstrated that more sunlight had indeed hit the Earth, which likely explained the increase in temperatures.  The scientists went on to infer that as a result of the airplanes being grounded, the vapour trails - which are an air pollutant in the upper atmosphere - diminished, and as this air pollutant actually reflects sunlight, a significant cooling variable had been eliminated.  This finding was somewhat profound and changed the way climatologists model climate change.  All this arose from a crude, but consistently implemented, monitoring program using a farmer’s pan.

I can think of several examples in my career when athlete performance was compromised for some reason or another - and it was consistently tracked data that helped us navigate our way out of a mess.  Similar to the example above, the data proved its worth not during the good times, but when something unexplainable occurred.  The ability to rely on years of well-tracked data brought clarity to an otherwise stressful and highly uncertain time period.  This is one big reason why monitoring athletes is important: when shit hits the fan, data helps bring clarity to a situation often plagued by a high potential for bad decision-making.  

There are other important reasons for data-driven monitoring, such as: 

  1. preventing our own biases from colouring the true pros and cons of a training intervention
  2. the ability to review big chunks of data to find trends and generate new understanding
  3. the ability to create better processes for a sport around the time course of adaptation in specific strength abilities as it relates to sport performance (e.g. How much strength is enough? What strength abilities matter?  How long does it take to develop an athlete in a given sport?)

The other theme that resonated with me from the climate change example is that monitoring doesn’t need to be complicated.  Simple metrics can get you a long way.  In Calgary, I work with a great boxing coach (Doug Harder, Bowmont Boxing Club) who is one of the best coaches I know.  Doug talks about the importance of mastering the jab with all athletes - whether they are a novice or a professional fighter.  The reason is that the jab, while on the one hand a highly simple strike, is also one of the most important because it sets the foundation for all the other punches, sets range, and allows a fighter to control the fight.  It’s a simple punch but gets a boxer a long way there in terms of setting the stage for everything else.

I take a similar approach to monitoring athletes.  Similar to mastering the jab, I try to answer simple questions and master simple metrics before delving into the complicated stuff.   As Stu outlined previously, it is the question that drives the scientific process.  With respect to the monitoring system, in the absence of good questions you can guarantee that at the end of a year you will be sitting with spreadsheets upon spreadsheets of data, zero clue where to start, and a feeling that all of this monitoring is a bunch of bollocks.  Strong and simple questions drive the process for implementing a data-driven monitoring system.

The questions that matter to me are:

  1. How much training has the athlete done in the past week?
  2. How does the athlete feel he or she is adapting to the training load?
  3. How much neuromuscular fatigue is accumulating?
  4. What is the athlete’s structural tolerance (a measure of the musculoskeletal capacity to handle load)?
  5. How is performance tracking in my key indicator lifts?

The remainder of this section will detail how I go about answering these questions.

How much training has the athlete done in the past week?

The teams I work with do not have overly large budgets.  As such, we often can’t afford a $20,000 price tag for a commercially available monitoring system.  Instead, I use Google Docs and generate my own form.  The training log form asks a couple of specific questions:

  1. How long did you train (in minutes)?
  2. How hard was the training session (rating of perceived exertion, scale from 1 to 10)?

Example of Issurin meso-organization of training load

The product of these two variables gives the session load, and has been referred to in the literature as the session RPE (Foster, 1998).  By tallying up the session loads for each day over a given week, I can estimate the weekly training load for the athletes.  This allows me to identify insufficient variation in training load (monotony), to spot sharp and inappropriate increases or decreases in training load, and gives me an anchor against which I can compare changes in other performance and monitoring metrics.
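As a minimal sketch of the arithmetic, here is the session-RPE approach in a few lines of Python.  The session data is invented for illustration, and the monotony/strain definitions follow Foster's work (mean daily load divided by its standard deviation, and weekly load times monotony):

```python
from statistics import mean, pstdev

def session_load(minutes, rpe):
    """Session-RPE load (Foster, 1998): duration x rating of perceived exertion."""
    return minutes * rpe

def weekly_summary(daily_loads):
    """Return (weekly load, monotony, strain) for one week of daily loads.

    Monotony = mean daily load / SD of daily loads (low variation -> high
    monotony); strain = weekly load x monotony.
    """
    total = sum(daily_loads)
    sd = pstdev(daily_loads)
    monotony = mean(daily_loads) / sd if sd else float("inf")
    return total, monotony, total * monotony

# One invented training week: (minutes, RPE) per day, rest days as (0, 0).
sessions = [(60, 7), (45, 5), (0, 0), (90, 8), (60, 6), (30, 4), (0, 0)]
daily = [session_load(m, r) for m, r in sessions]
total, monotony, strain = weekly_summary(daily)
```

A week of identical sessions would push monotony up even if the total load looked reasonable - which is exactly the kind of pattern this summary is meant to flag.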

How does the athlete feel he or she is adapting to the training load?

Time and again, subjective athlete wellness assessments prove to be the most telling metrics for coaches.  In fact, this is what we obtain from our athletes every day simply by watching body language, having quick conversations to gauge energy levels, and watching the warm-up.  The only difference here is that I am systematically measuring an athlete’s qualitative self-assessment of their wellness in terms of indicators for non-functional overreaching and overtraining syndrome. 

My questionnaire of choice is the Hooper-MacKinnon Questionnaire (Hooper et al., 1995).  

Again, I save thousands of dollars and develop my own forms online for free.  Athletes complete this form each morning without putting too much thought or energy into the answers; we don’t want athletes to become fixated on, or obsessed with, how they are feeling.  

Instead, it’s a quick and effortless response that is done consistently each day.  This data allows me to go back over large time scales to evaluate how the athlete felt, and to anchor my training load and other performance/monitoring metrics to an athlete’s subjective feeling about how things were going during a particular time period.  It’s also a great conversation starter for the interaction I have each day in the daily training environment.  My good buddy Tyler Goodale (strength coach and performance lead for the Canadian Women’s Rugby Sevens program) has his strength and conditioning team review this information before each training session.  It helps direct the conversations that occur in the daily training environment with up to 20 athletes.  Based on the athlete wellness monitoring data, Tyler knows whom he needs to talk to and how the conversation should be focused.  
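To make the daily review concrete, here is a hedged sketch of how questionnaire responses might be scored and triaged.  It sums the four classic Hooper items (fatigue, stress, muscle soreness, sleep quality) on a 1-7 scale; the item names, scale direction, and flag threshold are my illustrative assumptions, not necessarily the exact form used by any coach mentioned here:

```python
# Assumed item set and scale: each rated 1 (very good) to 7 (very bad),
# so higher totals suggest poorer wellness.
HOOPER_ITEMS = ("fatigue", "stress", "soreness", "sleep")

def hooper_index(ratings):
    """Sum the 1-7 ratings; reject missing or out-of-range entries."""
    total = 0
    for item in HOOPER_ITEMS:
        value = ratings[item]
        if not 1 <= value <= 7:
            raise ValueError(f"{item} rating out of range: {value}")
        total += value
    return total

def flag_athletes(team, threshold=20):
    """Return names whose wellness total meets or exceeds the (assumed) threshold."""
    return [name for name, r in team.items() if hooper_index(r) >= threshold]

# Invented morning responses for two athletes.
team = {
    "A": {"fatigue": 2, "stress": 3, "soreness": 2, "sleep": 2},
    "B": {"fatigue": 6, "stress": 5, "soreness": 6, "sleep": 5},
}
```

Run against a squad's morning forms, `flag_athletes` gives a coach a short list of who to talk to before the session starts - the automated version of what Tyler's staff do by eye.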

How much neuromuscular fatigue is accumulating?

This is a tough question to answer.  However, what we know is that while high-frequency fatigue recovers in a matter of hours, low-frequency fatigue - or chronic low-frequency force depression - can persist for many days (my hunch is up to a couple of weeks; I think we send plenty of athletes into competition with way too much fatigue, which would explain why unexpected lay-offs for small injuries or illnesses so often lead to best-ever performances for an athlete…they were finally rested!).  Low-frequency fatigue is responsible for ‘dead-leg syndrome’ - the feeling of heavy legs when we need to do routine movements like running up a flight of stairs.  The problem with this type of fatigue is that it diminishes rate of force development, which is key for explosive strength.  As many sports require explosive strength for successful performance, this is an important element to monitor.  

Monitoring neuromuscular fatigue is a touch more complicated to implement.  At our training centre we have athletes perform weekly or bi-weekly vertical jump tests using a few jump variants.  Generally speaking, we control jump technique pretty carefully and measure jump performance.  Our go-to jump performance variable is take-off velocity, which is easily calculated from the vertical ground reaction force using force-plate methodology, and has excellent reliability, making it easy to detect a meaningful change.  We consistently measure coefficients of variation below 1% in a diverse group of athlete populations, and we typically use a cut-off of 5% to 10% for determining when training volume needs to be adjusted.  We also use it to understand the individual nuances between recovery in explosive strength abilities and training load/exercise prescription.  This helps us to devise appropriate taper strategies for athletes in sports reliant on explosive strength.
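The calculation itself rests on the impulse-momentum relation: take-off velocity equals the integral of net vertical force (ground reaction force minus body weight) over the jump, divided by body mass.  A minimal sketch, assuming you already have a vertical GRF trace sampled from quiet standing through take-off (the force trace below is a toy constant-force example, not real plate data):

```python
G = 9.81  # gravitational acceleration, m/s^2

def takeoff_velocity(forces, dt, body_mass):
    """Estimate take-off velocity (m/s) from a vertical GRF trace.

    Impulse-momentum: v = (integral of (F - m*g) dt) / m, integrated from
    quiet standing to the instant of take-off, via the trapezoidal rule.
    """
    net = [f - body_mass * G for f in forces]
    impulse = sum(0.5 * (a + b) * dt for a, b in zip(net, net[1:]))
    return impulse / body_mass

# Toy trace: an 80 kg athlete producing a constant 300 N of net
# propulsive force for 0.5 s, sampled at 1000 Hz.
mass = 80.0
dt = 0.001
forces = [mass * G + 300.0] * 501
v = takeoff_velocity(forces, dt, mass)
```

With weekly values of `v` in hand, a simple percent-change check against an athlete's baseline is enough to apply the kind of 5-10% cut-off described above.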

What is the structural tolerance of the athlete?

By answering this question, I am trying to identify an athlete who might get injured before the injury happens.  Essentially, I’m looking for something to support the important feedback I receive from my soft-tissue therapist and from a daily qualitative movement assessment.  This is a new area of research for us, so the data I’m discussing right now has yet to be truly fleshed out the way it should be from the standpoint of robust scientific investigation. At our training centre, in addition to measuring jump performance, we also employ a dual force-plate system that allows us to simultaneously measure the vertical ground reaction force from the right and left limbs during a double-leg jump.  We calculate a functional asymmetry index by measuring the kinetic impulse from the right and left limb separately over specific jump phases.  

We then use this asymmetry index called the kinetic impulse asymmetry index to identify athletes who might have diminished lower body structural tolerance.  Our preliminary data indicates that an asymmetry index above 15%-20% in the eccentric deceleration phase of a countermovement jump is predictive of lower body injury in elite athletes.  I just presented this data at the International Society of Biomechanics Conference in Glasgow, Scotland. As such, we flag athletes that have big changes in asymmetry, or get into this red zone, and adjust training appropriately - or triage to the appropriate person on our medical team.  Assessing functional asymmetry in this manner is also very useful for monitoring an athlete throughout the return to sport process after injury.  
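The arithmetic can be sketched briefly.  Note the hedging: the exact formula behind the kinetic impulse asymmetry index isn't spelled out above, so the version below uses one common limb-symmetry definition (percent difference relative to the stronger limb), and the 15% cut-off is taken from the preliminary range quoted in the text:

```python
def impulse(forces, dt):
    """Trapezoidal integral of one limb's force trace over a jump phase (N*s)."""
    return sum(0.5 * (a + b) * dt for a, b in zip(forces, forces[1:]))

def asymmetry_index(left_impulse, right_impulse):
    """Percent asymmetry relative to the larger limb's impulse.

    Assumed definition - one common limb-symmetry index; the index used
    at the author's centre may be calculated differently.
    """
    high = max(left_impulse, right_impulse)
    low = min(left_impulse, right_impulse)
    return (high - low) / high * 100.0

def in_red_zone(asym_pct, cutoff=15.0):
    """Flag athletes at or above the (preliminary) red-zone cut-off."""
    return asym_pct >= cutoff

# Invented eccentric-deceleration-phase impulses (N*s) for each limb.
left, right = 120.0, 100.0
asym = asymmetry_index(left, right)
```

In practice each limb's impulse would come from `impulse()` applied to that plate's force trace over the eccentric deceleration phase; the flag then routes the athlete to the medical team or to a training adjustment.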

How is performance tracking in the key indicator lifts?

I think a simple but highly important question regarding the efficacy of a strength program is whether or not an athlete is developing maximal strength.  There are lots of reasons why maximal strength is critical for athletes competing in many different sports, and plenty of evidence that at the very least, maximal strength is a foundational performance factor that should be optimized for each sport and each athlete (as discussed in part I of this series).  

In each training program, I have one or two key indicator lifts that are the primary focal point of the program.  I want to track and monitor the progression in load that occurs over a training phase.  There are many ways to accomplish this - the simplest being to have the athlete record their lifts on their program with pen and paper, and then upload this data into a spreadsheet (highly time-consuming and a pain in the ass).  Additionally, many training centres have apps and devices at each training station that eliminate the need for pen and paper, and make this type of data collection much easier.  If you have the budget to afford such a system then your problem is solved.

I personally turn to free online tools and develop my own tracking forms.  This eliminates the need for the athlete to put pen to paper, and saves a ton of time in terms of having to enter this data into a spreadsheet.  At the bottom of my Training Load form discussed above, I have a small section that allows the athlete to select from a handful of indicator lifts and enter the maximum load lifted in kilograms on their best set, along with the number of successful repetitions.  Some may criticize this approach as it does not allow me to calculate the total tonnage or lifting volume from a given workout, but I have found that information to be of little value, as my weekly training load metric gives me greater insight into how much training an athlete has done in a given week.  Instead, the more relevant question is: how is performance tracking in my key indicator lift?  Is the athlete progressing steadily in the gym or flatlining?  To get at this question, I find that the load and reps lifted on the best set for a given day tells me everything I need to know.
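A small sketch of how those best-set records could be turned into a progressing-or-flatlining answer.  The text tracks only raw load and reps; converting each set to an estimated 1RM (here via the Epley formula) so that sets at different rep counts are comparable is my addition, as is the four-week window and 1% tolerance:

```python
def estimated_1rm(load_kg, reps):
    """Epley estimate of 1RM - an assumed, common way to compare best
    sets performed at different rep counts."""
    return load_kg * (1 + reps / 30.0)

def is_flatlining(history, window=4, tolerance=0.01):
    """True if the last `window` estimates improved by less than
    `tolerance` (as a fraction) across the window.  Both parameters
    are illustrative defaults."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return (recent[-1] - recent[0]) / recent[0] < tolerance

# Invented best squat set per week: (load in kg, reps).
sets = [(140, 5), (145, 5), (150, 4), (150, 4), (150, 4), (150, 4)]
e1rms = [estimated_1rm(load, reps) for load, reps in sets]
```

Here the athlete progressed early in the phase and then stalled for four straight weeks - the kind of pattern that should prompt a change to the program rather than another identical week.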

In summary, I’ll bring your attention back to the key points: 

  1. A coach who pays attention is the best monitoring tool going
  2. Data-driven systems are necessary to confirm a coach’s intuition, and to protect against biases
  3. Great questions drive a successful data-driven monitoring program
  4. Effective monitoring systems give you the final few percent in performance - which is what every elite athlete is chasing
  5. Simple metrics collected consistently over time are extremely valuable, and are often more valuable than sophisticated measurement tools that are unsupported by good questions and difficult to implement

Ask the right questions, pick your metrics carefully - and be consistent.  A data-driven approach to programming doesn’t happen overnight and instead is something that requires commitment and curiosity.    

Thanks for reading ... if you enjoyed this post, 
please share on Twitter or Facebook