Sports coverage loves a clean before-and-after story.
The coach gets fired. The team wins four of the next six. Somebody says the room got fixed, the rotations got sharper, and the problem was the coach all along.
Sometimes that is true. But just as often the team also gets healthier, catches a softer part of the schedule, or changes pace in a way that confuses the picture.
If you compare the games after the coaching change to the games before it without modeling the schedule context, you can give the coach credit for the calendar.
This is a good place to use both parts of the stack:
- py-bbn for graph questions
- py-scm for a compact continuous intervention model
Start with the version fans actually argue about
The claim is usually simple:
- the team played better after the coach changed
- therefore the coach caused the improvement
The missing context is also simple:
- the team may have hit an easier schedule window
- the injury report may have improved at the same time
That means the coaching change is arriving inside a bundle of moving parts.
Make the graph explicit first
Here is a compact causal picture:
- Window: where the team sits in the schedule
- Injuries: how healthy the roster is
- Opponent: how strong the opponents are in that window
- Coach: whether the coaching change has happened
- Pace: one tactical style variable the coach can influence
- Margin: point differential or performance margin
The graph says:
- the schedule window affects both opponent strength and the timing context for the coaching change
- injuries affect both the chance of change and the outcome
- the coach can move margin partly through pace and partly directly
Drawn explicitly, the coaching-change story is a small DAG: Window points to Opponent and Coach; Injuries points to Coach and Margin; Coach points to Pace and Margin; Opponent points to Pace and Margin; and Pace points to Margin.
With py-bbn, you can ask which variables are doing confounding work.
```python
import networkx as nx
from pybbn.graphical import get_graph_tuple, get_minimal_confounders, get_paths

g = nx.DiGraph()
g.add_edges_from([
    ("Window", "Opponent"),
    ("Window", "Coach"),
    ("Injuries", "Coach"),
    ("Coach", "Pace"),
    ("Opponent", "Pace"),
    ("Coach", "Margin"),
    ("Opponent", "Margin"),
    ("Injuries", "Margin"),
    ("Pace", "Margin"),
])

gt = get_graph_tuple(g)

get_minimal_confounders(gt, "Coach", "Margin")
# ['Injuries', 'Window']

get_paths(gt, "Coach", "Margin")
```
That is already useful. The graph is telling you the story is not just “coach to margin.” There are backdoor routes through injuries and schedule window that have to be accounted for.
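If you want to see those backdoor routes without the py-bbn helpers, plain networkx is enough. The sketch below, using only the DAG above, enumerates the undirected simple paths from Coach to Margin and keeps the ones whose first edge points into Coach, which is the textbook definition of a backdoor path:

```python
import networkx as nx

# Same DAG as above, rebuilt with plain networkx.
g = nx.DiGraph()
g.add_edges_from([
    ("Window", "Opponent"), ("Window", "Coach"), ("Injuries", "Coach"),
    ("Coach", "Pace"), ("Opponent", "Pace"), ("Coach", "Margin"),
    ("Opponent", "Margin"), ("Injuries", "Margin"), ("Pace", "Margin"),
])

# A backdoor path is an undirected path from Coach to Margin whose
# first edge enters Coach, i.e. the second node is a parent of Coach.
backdoor_paths = [
    path
    for path in nx.all_simple_paths(g.to_undirected(), "Coach", "Margin")
    if g.has_edge(path[1], "Coach")
]
for path in backdoor_paths:
    print(" -> ".join(path))
```

All three routes run through Injuries or Window, which is why those two show up as the minimal confounders.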
Use a linear-Gaussian SCM for the numeric side
For continuous settings like pace, point differential, and opponent-strength indices, py-scm is a clean fit.
The compact trick is to specify the structural coefficients and let a helper derive the aligned covariance matrix for create_reasoning_model(...).
```python
import numpy as np
from pyscm.reasoning import create_reasoning_model


def build_linear_model(nodes, weighted_edges):
    idx = {node: i for i, node in enumerate(nodes)}

    # B[child, parent] holds the structural coefficient.
    B = np.zeros((len(nodes), len(nodes)))
    D = np.eye(len(nodes))  # unit-variance exogenous noise

    for parent, child, weight in weighted_edges:
        B[idx[child], idx[parent]] = weight

    # Linear-Gaussian SCM: Sigma = (I - B)^-1 D (I - B)^-T.
    A = np.eye(len(nodes)) - B
    cov = np.linalg.inv(A) @ D @ np.linalg.inv(A).T

    return create_reasoning_model(
        {"nodes": nodes, "edges": [(p, c) for p, c, _ in weighted_edges]},
        {"v": nodes, "m": [0.0] * len(nodes), "S": cov.tolist()},
    )


nodes = ["Window", "Injuries", "Opponent", "Coach", "Pace", "Margin"]
weighted_edges = [
    ("Window", "Opponent", -0.9),
    ("Window", "Coach", 0.7),
    ("Injuries", "Coach", 0.5),
    ("Coach", "Pace", -0.7),
    ("Opponent", "Pace", 0.2),
    ("Coach", "Margin", 1.4),
    ("Opponent", "Margin", -1.8),
    ("Injuries", "Margin", -1.1),
    ("Pace", "Margin", -0.6),
]
model = build_linear_model(nodes, weighted_edges)
```
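That covariance line is doing real work, so it is worth a sanity check. A quick simulation of the same structural equations (numpy only, assuming the unit-variance Gaussian noise the model uses) should land close to the analytic Σ = (I − B)⁻¹ D (I − B)⁻ᵀ:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000

# Simulate the structural equations in topological order.
W = rng.normal(size=n)
Inj = rng.normal(size=n)
Opp = -0.9 * W + rng.normal(size=n)
Coach = 0.7 * W + 0.5 * Inj + rng.normal(size=n)
Pace = -0.7 * Coach + 0.2 * Opp + rng.normal(size=n)
Margin = (1.4 * Coach - 1.8 * Opp - 1.1 * Inj
          - 0.6 * Pace + rng.normal(size=n))

emp_cov = np.cov(np.stack([W, Inj, Opp, Coach, Pace, Margin]))

# Analytic covariance, same formula as build_linear_model,
# with rows/columns indexed Window, Injuries, Opponent, Coach, Pace, Margin.
B = np.zeros((6, 6))
for child, parent, w in [(2, 0, -0.9), (3, 0, 0.7), (3, 1, 0.5),
                         (4, 3, -0.7), (4, 2, 0.2), (5, 3, 1.4),
                         (5, 2, -1.8), (5, 1, -1.1), (5, 4, -0.6)]:
    B[child, parent] = w
A = np.eye(6) - B
cov = np.linalg.inv(A) @ np.linalg.inv(A).T

print(np.max(np.abs(emp_cov - cov)))  # sampling noise, near zero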
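That covariance line is doing real work, so it is worth a sanity check. A quick simulation of the same structural equations (numpy only, assuming the unit-variance Gaussian noise the model uses) should land close to the analytic Σ = (I − B)⁻¹ D (I − B)⁻ᵀ:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000

# Simulate the structural equations in topological order.
W = rng.normal(size=n)
Inj = rng.normal(size=n)
Opp = -0.9 * W + rng.normal(size=n)
Coach = 0.7 * W + 0.5 * Inj + rng.normal(size=n)
Pace = -0.7 * Coach + 0.2 * Opp + rng.normal(size=n)
Margin = (1.4 * Coach - 1.8 * Opp - 1.1 * Inj
          - 0.6 * Pace + rng.normal(size=n))

emp_cov = np.cov(np.stack([W, Inj, Opp, Coach, Pace, Margin]))

# Analytic covariance, same formula as build_linear_model, with
# rows/columns ordered Window, Injuries, Opponent, Coach, Pace, Margin.
B = np.zeros((6, 6))
for child, parent, w in [(2, 0, -0.9), (3, 0, 0.7), (3, 1, 0.5),
                         (4, 3, -0.7), (4, 2, 0.2), (5, 3, 1.4),
                         (5, 2, -1.8), (5, 1, -1.1), (5, 4, -0.6)]:
    B[child, parent] = w
A = np.eye(6) - B
cov = np.linalg.inv(A) @ np.linalg.inv(A).T

print(np.max(np.abs(emp_cov - cov)))  # sampling noise, near zero
```

If the two matrices disagree by more than sampling noise, the coefficient bookkeeping is wrong.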
The observed post-change margin is larger than the coach effect
If you just condition on Coach = 1, the average margin looks strong:
```python
mean, _ = model.pquery({"Coach": 1.0})

float(mean["Margin"])
# 2.1991
```
That is the familiar sports-show number. Once the new coach is in place, the margin looks about 2.20 points better in this toy setup.
But that still mixes the coach with the context that tends to surround the change.
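That 2.20 is not mysterious. For a linear-Gaussian model, conditioning on a single variable gives the conditional mean Cov(Margin, Coach) / Var(Coach), which you can compute straight from the structural matrix with numpy alone (same coefficients as above):

```python
import numpy as np

nodes = ["Window", "Injuries", "Opponent", "Coach", "Pace", "Margin"]
idx = {node: i for i, node in enumerate(nodes)}

B = np.zeros((6, 6))
for parent, child, weight in [
    ("Window", "Opponent", -0.9), ("Window", "Coach", 0.7),
    ("Injuries", "Coach", 0.5), ("Coach", "Pace", -0.7),
    ("Opponent", "Pace", 0.2), ("Coach", "Margin", 1.4),
    ("Opponent", "Margin", -1.8), ("Injuries", "Margin", -1.1),
    ("Pace", "Margin", -0.6),
]:
    B[idx[child], idx[parent]] = weight

# Covariance of the linear SCM with unit-variance noise (D = I).
A_inv = np.linalg.inv(np.eye(6) - B)
cov = A_inv @ A_inv.T

# Gaussian conditioning: E[Margin | Coach = 1] = Cov(M, C) / Var(C).
m, c = idx["Margin"], idx["Coach"]
print(round(cov[m, c] / cov[c, c], 4))  # 2.1991
```

The ratio picks up every correlation between Coach and Margin, including the ones flowing through Window and Injuries, which is exactly the problem.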
The intervention is smaller and cleaner
Now ask the causal question directly:
```python
model.iquery("Margin", {"Coach": 1.0})
# mean 1.82

model.iquery("Margin", {"Coach": 0.0})
# mean 0.00

model.equery("Margin", {"Coach": 1.0}, {"Coach": 0.0})
# mean 1.82
```
So in this model, the coach still helps. The difference is that the clean intervention effect is about +1.82, not the +2.20 that appeared in the raw post-change slice.
That gap is the schedule-and-injuries tax on the naive story.
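The +1.82 has a transparent decomposition, too. In a linear SCM, the interventional effect is the sum, over directed paths from cause to outcome, of the product of edge weights along each path:

```python
# Two directed paths carry the coach's causal effect on margin:
direct = 1.4                 # Coach -> Margin
via_pace = (-0.7) * (-0.6)   # Coach -> Pace -> Margin

print(round(direct + via_pace, 2))  # 1.82
```

No backdoor path contributes, because do(Coach) cuts the edges from Window and Injuries into Coach.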
Counterfactuals let you revisit one game
Suppose the team won by 2.4 in a game after the change, with a favorable window and a manageable injury load.
You can ask:
```python
model.cquery(
    "Margin",
    factual={
        "Window": 1.0,
        "Injuries": 0.2,
        "Coach": 1.0,
        "Opponent": -0.6,
        "Pace": -0.8,
        "Margin": 2.4,
    },
    counterfactual=[{"Coach": 0.0}],
)
```
In this toy example, the same game context falls to a counterfactual margin of about 0.58 without the coaching change.
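Under the hood, that query is the standard three-step counterfactual: abduct the noise terms from the factual game, force Coach to 0, and replay the equations downstream. A by-hand sketch with the same coefficients (not py-scm internals) reproduces the number:

```python
# Factual game.
W, Inj, Opp, Coach, Pace, Margin = 1.0, 0.2, -0.6, 1.0, -0.8, 2.4

# Step 1: abduction - recover each noise term from its equation.
eO = Opp - (-0.9 * W)
eC = Coach - (0.7 * W + 0.5 * Inj)
eP = Pace - (-0.7 * Coach + 0.2 * Opp)
eM = Margin - (1.4 * Coach - 1.8 * Opp - 1.1 * Inj - 0.6 * Pace)

# Step 2: action - force Coach to 0, keep the context and the noise.
C2 = 0.0

# Step 3: prediction - replay the downstream equations with the old noise.
O2 = -0.9 * W + eO                  # unchanged: no edge Coach -> Opponent
P2 = -0.7 * C2 + 0.2 * O2 + eP
M2 = 1.4 * C2 - 1.8 * O2 - 1.1 * Inj - 0.6 * P2 + eM

print(round(M2, 2))  # 0.58
```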
A same-game counterfactual is often the easiest way to make that point land: it keeps the opponent window and injury context fixed and asks only what disappears when the coaching change disappears.
That is a much sharper statement than “the coach turned it around.” It says: in this particular game environment, the coach probably mattered, but not by the full amount the box score seemed to advertise.
What this buys you
The point is not that sports should become a matrix lecture. The point is that the usual sports argument already has causal structure whether the speaker admits it or not.
py-bbn helps you expose the backdoor story. py-scm helps you quantify the intervention once the story is explicit.
That combination is exactly what a causal workflow should do in a public, non-technical domain:
- start with a claim people already care about
- make the hidden assumptions visible
- compute the intervention instead of worshipping the before-and-after split
Next in the series: pitch limits can look like they belong to fragile arms, even when the limit itself is protective.

