We practice to improve game performance. We design drills, training programs, and conditioning, and offer cues, instructions and feedback to enhance performance. We attribute team improvement to practice, and individual player improvements to individual workouts with the coach or an individual skills trainer. The purpose is to improve, so the practices or workouts must cause the improvements.
We assume practice causes improvement, and success, because practice is supposed to improve performance. But how do we know that practice, or which part of practice, caused the improvement?
Imagine you are sick. I give you a pill. You feel better. You attribute the improvement to the medicine. What if there was no medicine? What if the pill was a placebo?
You do not care, on an individual level. You wanted to feel better. You feel better. That is all that matters to you right now.
However, do you want the placebo next time you feel sick? Should everyone else not feeling well receive a placebo? Generally, we want something more than a placebo effect when creating treatments for a larger group.
The same is true of a player’s improvement or skill development. As a player, I am unconcerned with the method or the cause when I improve during the season. I wanted improvement. I improved. Success.
As a coach or trainer, I want to know. A placebo is fine for the immediate improvement, but if there is a more substantial method or cause of the player's improvement, I want to generalize it, to help all of my players improve.
Placebos are used as experimental controls of a treatment: The treatment or experimental group receives the treatment (medicine, training program, instruction), and the control group receives a different treatment, often a placebo.
Imagine the first scenario again. You feel sick. I give you a pill. You feel better. Without a control, I argue the medicine, the treatment, worked. After all, you feel better. However, the best we can say is the action of taking a pill, any pill, led you to feel better, as you felt better after the placebo and the treatment.
We rarely know if our interventions within sports work because we never have a control group and we never isolate one variable to study. Do players improve at practice because of the drill or the instructions and feedback or because they believe practice improves performance? Did the change in practice time (no more 6 AM practices) or practice duration (reducing practices from 2.5 to 1.5 hours) improve performance?
There are several methods used to test the efficacy of Program A (practice, drill, individual training, a specific coach/trainer, etc.).
First, we could pre-test a group, have the group train with Program A for a set period of time, and post-test the group. This is the simplest version of a study, and the manner in which we commonly conclude practice caused the improvement: We see a player today, see the player next month, and attribute the improvement to the practices from that month. Of course, practices or individual workouts are not the only activities in which the player participated that conceivably could cause or impact improvement or skill development. Therefore, we cannot conclude Program A caused the improvement, only that it was correlated with improved performance.
A player once attended my weekly clinics. She also worked with an individual trainer, a strength & conditioning coach, and her high school and AAU teams at the same time. How can we possibly determine which activity caused her improvement? We cannot, although we can suggest each was associated or correlated with her improved performance.
Now, again, from the individual perspective, she wants to improve. She does not care what caused the improvement, as long as it happened. This is how we end up with a system of trying everything under the sun: There are more improvements out there somewhere, and the motivated parents and players try different things in the pursuit of these improvements. This is great for those with an unlimited budget and time. For the rest of us, we want a better way. We would prefer to know the treatment works and is not just a placebo helping us improve because we assume we will improve.
Imagine Program A is a three times per week individual training program. Does the player also practice with his team? Does he lift weights? Does he play games? How can we conclude Program A caused the improvements instead of the team practices, games, or strength training?
Every season, when discussing NBA players' offseason improvements, television analysts talk about their work in the lab with their trainers; they rarely mention the strength and conditioning work, the pickup games, the mental relaxation, the comfort of being with the same team and coach for a second season, and more. The individual training is not the only activity that could improve a player's performance, but we believe in individual training for individual skills, so we attribute improvement to the individual training, and not to pickup games, as an example.
Second, we could use a control group to control for the confounding variables. The experimental group uses Program A, and the control group does not. Everything else — lifting weights, playing games, team practices, etc. — remains the same between the two groups. Therefore, if the experimental group improves, and the control group does not, we can suggest Program A improved performance: Players who engage in individual training three times per week improve performance more than players who do not participate in any additional training.
Of course, this demonstrates Program A is better than nothing; it does not prove the training's effectiveness. Does Program A cause the differences between the experimental group and the control group, or is it simply the extra practice hours? Is any program better than no program? Would the player have improved equally or more by playing pickup games three times per week instead of the individual training? We do not know because we compared three times per week of individual training to nothing.
This tends to be the argument for most interventions. More is better: More shots, more repetitions, more training sessions, more games. More is better than less. More is better than nothing. When is more too much? Is more X better than a comparison Y? Is more individual training better than more pickup? Is more skill work better than more strength training? Is more training better than more rest?
Third, we use a control group spending the same amount of time (three hours) doing something related to the training (shooting on one’s own, playing pickup games) to demonstrate Program A caused the improvements. Now, we know the training had some effect when the experimental group improves performance more than the control group; it was not just the time on task. We can state three weekly sessions of individual training improves performance more than three weekly sessions of pickup games over the same time period.
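The three designs can be sketched with a toy simulation. All of the numbers here are invented purely for illustration; `BASELINE_GAIN`, `PICKUP_GAIN`, and `PROGRAM_A_GAIN` are assumptions, not measurements, standing in for everything else in a player's season, the benefit of extra time on task, and any true effect of the program:

```python
import random

random.seed(42)

# Hypothetical effect sizes (made-up numbers for illustration only):
# every player improves some amount just from a season of playing,
# and each extra activity may add something on top of that.
BASELINE_GAIN = 5.0   # maturation, belief, everything else (the "placebo")
PICKUP_GAIN = 2.0     # active control: three weekly pickup sessions
PROGRAM_A_GAIN = 3.0  # the treatment: three weekly individual sessions

def simulate_group(extra_gain, n=30):
    """Return the mean pre-to-post improvement for a group of n players."""
    gains = [BASELINE_GAIN + extra_gain + random.gauss(0, 1) for _ in range(n)]
    return sum(gains) / n

# Design 1: pre-test/post-test only. Program A "works" -- but with no
# comparison, so would almost anything, including the baseline alone.
print(f"Program A alone:      {simulate_group(PROGRAM_A_GAIN):.1f}")

# Design 2: Program A vs. no extra training. Shows "better than nothing."
print(f"No extra training:    {simulate_group(0.0):.1f}")

# Design 3: Program A vs. equal time in pickup games (active control).
# Only this comparison separates the program itself from time on task.
print(f"Equal time of pickup: {simulate_group(PICKUP_GAIN):.1f}")
```

The point of the sketch is the shrinking gap: Program A looks impressive against nothing, and much less impressive against an equal dose of pickup games, which is exactly the difference between the second and third designs.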
Coaches often do not care about the cause of offseason improvements; they care that their players improved before returning in the fall. If one player works out three times per week with an individual skills trainer, and the other goes to the beach, and they improve to the same degree, does the coach care one worked out and one played at the beach? No. The coach cares the players are better, hopefully uplifting the team’s performance.
Of course, coaches likely would be disappointed by the player who went to the beach because they assume the training with the trainer caused the improvement, and coaches imagine how much more the player would have improved by working out instead of going to the beach. This fear of missing out drives many parenting and player decisions.
Why attribute improvement to the training when the two players improved to the same degree? Did the training cause the improvement when a player who went to the beach (the control group) improved just as much? Maybe the coach should encourage all the players to go to the beach! Both players improved equally, but our perception of the right or best way makes us believe the training, not the beach, caused the improvement.
We discount potential benefits from running after frisbees in the sand, playing beach volleyball, surfing, or swimming in the ocean because they are unrelated to basketball performance, whereas working with a basketball trainer is clearly related. However, what if surfing, swimming in the ocean, and playing games on the sand increased the player’s physical capacity, improving power and endurance? Would that affect basketball performance?
Maybe the improvement had nothing to do with the beach or the individual training, and the players improved because they lifted weights together five times per week before heading off in different directions; maybe their summer league games on Saturdays and Sundays against better competition caused their improvement.
Basically, the beach is the control group, and Program A is the experimental or treatment group; the beach is the placebo, and the training is the medicine. The training program is unlikely to have caused the improvement because there is no difference between the two groups.
Again, there is a difference between an individual — coach or player — and general recommendations. Coaches and players want improvement, and care less about the methods, validity, and reliability. However, on a larger scale, when we discuss training or practice design ideas to generalize to entire organizations, districts, states, and federations, we should want more evidence than a potential placebo.
This is the fallacy in claiming a drill or training program worked in one situation or with one coach. In reality, the success may have had nothing to do with the drill or the training program; the specific group or the coach may have caused it. One team winning a championship while using the three-player weave at some point during the season does not prove all teams should use the three-player weave, or that the three-player weave is a great passing drill. Maybe they just had Steph Curry on their team, and the details mattered far less than his presence and health.
Finding definitive answers in the absence of control groups is difficult. I write confidently about different ideas for several reasons:
First, I have coached men and women, boys and girls, beginners and professionals. My ideas were not formed by working with a very narrow subset of the most elite players in the world who can make almost anything look good, nor with absolute beginners for whom almost anything will elicit some improvement. My ideas have generalized in different countries, environments, levels, and leagues over a period of years.
Second, I have changed my mind. I never rested on things I was taught as a player. I questioned everything. I still use some drills I learned when I was young (I first played Rabbit, as one example, in 5th grade), but I also moved away from many things I was taught. In rejecting these ideas, I did not rely solely on my own ideas and intuition, but looked for inspiration elsewhere. I searched for ideas that generalized across sports, ages, and talent levels. I don’t teach players to cross their feet on defense because that is what I was taught (it’s not), but because I saw NBA players, NFL defensive backs, and NCAA soccer players defending in similar ways and applied this to my teaching with younger players.
I am not looking for improvement one time or in one day, although occasionally that is needed. I look for long-term improvements that generalize across ages and skill levels. Rather than compare an idea to the absence of the idea, I compare to an alternative. When I completed my doctorate, I did not ask if a hip turn worked, but if a hip turn led to faster performance over a short distance than a drop step. The drop step — the typical action — was the control or the comparison group.
The goal is not good enough, but better. And once you replace good enough with better, find best.
Players’ time and resources are limited. We should work to provide the best environments for them to improve and develop, and that requires looking more critically at our practice activities and plans. Rather than asking if a drill is good enough, our question should be:
Is _______ better than the same amount of time spent in free play, pickup games, or some form of constrained game?
The game should be the control group. Moving away from the game should require an understanding of how and why the time spent in the other activity will lead to more improvement (or fun, success, or whatever your outcome measure is).
There are plenty of activities, occasions, reasons, and more to move outside the game. But, again, we should imagine the game form as the control so we do not swallow the placebos and validate their use and effectiveness.