Evaluating with the Competitive Cauldron
Are we sure our subjective assessments of players are accurate and not biased?
I have written numerous times, most thoroughly in The 21st Century Basketball Practice, about the competitive cauldron, which I have used with various teams over the last 15 years. Essentially, I track wins and losses in every competitive drill and game during practices and keep a running record for each player throughout the season. Our point guard this season ended with the highest winning percentage of any player I have coached since I first used the competitive cauldron.
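The bookkeeping itself is simple; a spreadsheet is plenty. For illustration only, here is a minimal sketch in Python of the kind of wins/losses ledger I describe; the class, player names, and usage below are hypothetical, not my actual system:

```python
from collections import defaultdict

class CompetitiveCauldron:
    """Season-long win/loss ledger for competitive practice drills.

    A minimal sketch: real use would add dates, drill types, weighting, etc.
    """

    def __init__(self):
        # player name -> [wins, losses]
        self.records = defaultdict(lambda: [0, 0])

    def record_result(self, winners, losers):
        """Log one competitive drill, small-sided game, or scrimmage."""
        for player in winners:
            self.records[player][0] += 1
        for player in losers:
            self.records[player][1] += 1

    def win_pct(self, player):
        wins, losses = self.records[player]
        total = wins + losses
        return wins / total if total else 0.0

    def standings(self):
        """Players sorted by season winning percentage, best first."""
        return sorted(self.records, key=self.win_pct, reverse=True)

# Hypothetical usage after one practice (names are placeholders):
cauldron = CompetitiveCauldron()
cauldron.record_result(winners=["PG", "Wing A"], losers=["Wing B", "Post A"])
cauldron.record_result(winners=["PG", "Post A"], losers=["Wing A", "Wing B"])
for player in cauldron.standings():
    print(player, f"{cauldron.win_pct(player):.3f}")
```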
I first noticed him last summer when I visited the club and accused him and two others of stacking their team. Because I did not know the players well, I had asked them to form their own teams, and he joined the two players who had played for the U16 national team that summer (one left the club before this season).
The directors and other coaches mentioned him infrequently in discussions of players and potential before my arrival and during my first few weeks, yet as early as our August scrimmages, I found myself substituting him into the game every time our opponents made a run. I did not have a specific reason, as he had no standout skill, but I knew we played better with him on the court. He made plays.
I recognized early on that he was our most important player, although even I took a while to acknowledge that he was our best player. I knew he was first in practice winning percentage, and I knew I felt better with him on the court, but there was no obvious explanation. He was not the best on the team at anything: we had better shooters, dribblers, passers, defenders, and more. He simply won, regardless of teammates and opposition.
Around the New Year, I believe, I told others in the club that he was our best player and compared him to Jrue Holiday: a great player, often overlooked, but a winner who elevates his team by doing whatever a particular game requires (defense, shooting, scoring, passing, rebounding) despite not possessing a classic standout skill. The others, including my assistant, remained skeptical.
Many coaches discount the competitive cauldron, especially as an objective evaluation tool. They may use it to motivate players or to embrace daily competitive elements, punishing the losers, but few use the cauldron to challenge or support their observations, evaluations, and biases. My assistant dismissed the cauldron as a novel practice device early in the season, but bought into the idea more and more as the season progressed. As he learned that our point guard won almost every single week and held a sizable season-long lead, he too started to see him as our best player, and he noticed the little things the point guard did to win games. The cauldron helped to shape his evaluation of our players, and he viewed them differently.
Our point guard was a difference maker. I knew this intuitively early in the season, but the cauldron solidified his standing with others. I believe he went the entire season without being named player of the game (an award we did not give after every game), despite being our best and most consistent player for the duration of the season. Other players had one or two great games, often well timed to win the game MVP award, but nobody was as reliably excellent from game to game from August through May.
The cauldron captured players’ peaks and valleys throughout the season. The leader over the last six weeks won the MVP at the finals, whereas our leading scorer before the New Year was consistently in the top three through the season’s first four months and near the bottom in the last three, when his playing time and performances declined. Those were two notable examples, but the practice trends correlated with game performances as closely as with any team I can remember, which attests to our depth and competitiveness, as well as the length of the season. There was really only one player whose game performances subjectively did not match the cauldron data.
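Surfacing trends like that six-week leader only requires slicing the same ledger by date. As a hedged sketch of a trailing-window calculation (the data shape, dates, and names here are assumptions for illustration, not my actual records):

```python
from datetime import date, timedelta

def trailing_win_pct(results, player, end, weeks=6):
    """Winning percentage over a trailing window of the season.

    results: list of (day, name, won) tuples, one per drill result.
    """
    start = end - timedelta(weeks=weeks)
    wins = losses = 0
    for day, name, won in results:
        if name == player and start <= day <= end:
            if won:
                wins += 1
            else:
                losses += 1
    total = wins + losses
    return wins / total if total else 0.0

# Hypothetical log entries (dates and results are made up):
log = [
    (date(2024, 4, 1), "PG", True),
    (date(2024, 4, 15), "PG", True),
    (date(2024, 5, 10), "PG", False),
]
# 0.5: the April 1 result falls outside the six-week window.
print(trailing_win_pct(log, "PG", end=date(2024, 5, 15)))
```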
I believe in the competitive cauldron, obviously, as I have used it for over a decade, but the open question is causation. Correlation simply means two things are related; I noticed a relationship between practice trends and subsequent game performances. However, did my awareness of the practice trends influence my coaching, and ultimately the performances? Did improved practice performance reflect changes (mental, physical, playing style) that improved game performances? How much did normal teenage existence (girlfriend breakups, sleep reduction, growth spurts, probable alcohol use, illnesses, school exams, and more) influence practice and game performances?
I pushed for our point guard to receive an opportunity to play with our second professional team. My assistant/translator, who was my liaison with the club on most matters, advocated for him once he leaned into the cauldron and saw the separation between the point guard’s winning percentage and everyone else’s. Nobody else saw him as our best player, even after he appeared in several games with the professional team, including a start, and acquitted himself well. The cauldron demonstrated his value before subjective evaluations of his game performances did.
Ultimately, coaches care about winning and play the players they believe influence winning the most. Box-score statistics and subjective evaluations may not capture the most influential players. Analytics attempt to create objective measures that assist coaches and add an unbiased evaluation; the competitive cauldron has a similar effect at levels with minimal access to analytics and/or a greater emphasis on practice.
After all, if his practice performances influenced my coaching, which influenced his performances, isn’t that what we often profess as coaches? Earn playing time through practicing better. If I gave more opportunities or had more patience with players at the top of the cauldron, and fewer opportunities or less patience with players toward the bottom, doesn’t that match our popular coaching beliefs? “Playing time is earned, not given. It’s about practice” and “Players who question their playing time should first question their practice time.” Without the cauldron (or another objective measure), are our subjective assessments accurate, or are we limited by anchoring and confirmation biases?
I did not use the cauldron to determine starters and captains, as I have in the past, but the results had some impact. I varied starters based on several factors, but practice performance was among them, and it was one reason the point guard started the most games. Winning and losing, and performances in general, go beyond box-score statistics. The competitive cauldron provides one objective measure of practice to incorporate into decisions about players, lineups, distribution of playing time, and more.

