Understand how each variable affects model behaviour and output.
Your subject-matter experts (SMEs) will often request an analysis of the modelled contribution of each variable provided as an input feature to the model. In Causal Wizard, this analysis is available when you select a Regression model for your causal effect estimation.
In a Regression model, the coefficients (the learned parameters) can be interpreted directly: each coefficient tells you how its variable affects the model's output behaviour. This is a major motivation for including regression among your selected models.
The effects of variables can also be extracted from more complex ML models using feature-explainer tools, but these require additional assumptions and analysis, which raises questions about the validity and interpretation of the resulting insights. There is therefore a direct tradeoff between model performance, complexity and interpretability.
In most cases, causal studies use tabular data with relatively few variables, and simpler models are usually indicative of the performance of more complex ones. This means you usually retain most of the predictive ability with a relatively simple model, with the added benefit that it is easy to interpret.
Using, for example, a linear regression model, we can inspect both the sign and the magnitude of each coefficient. The sign tells us the direction of the effect, and the magnitude reflects the degree to which the feature influences the outcome. This means that coefficients with larger magnitudes are more "important", in that they make a larger contribution to the output, assuming all inputs have similar magnitude (see caveats below). Other definitions of importance are equally valid - for example, a feature that makes a small but reliably correct contribution to the outcome for many samples is also important.
In the figure above, the largest coefficient is Sex / 0. Sex is a categorical variable in this data, with two values (0 and 1). Since one-hot encoding is used, each value of a categorical variable has a separate learned coefficient. In this case, the value 0 of the Sex variable has the largest coefficient overall, and the direction of its effect is positive - it increases the output when Sex is 0. In contrast, the Age variable (which is numerical) decreases the output.
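The following is a minimal sketch of this kind of inspection, assuming scikit-learn and pandas are available; the data is synthetic, and the column names simply mirror the example above.

```python
# A minimal sketch, assuming scikit-learn and pandas; synthetic data,
# with column names that mirror the example above.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "Sex": rng.integers(0, 2, n),    # categorical: values 0 and 1
    "Age": rng.uniform(18, 80, n),   # numerical
})
# Synthetic outcome: Sex == 0 raises it, Age lowers it (plus noise).
y = 2.0 * (X["Sex"] == 0) - 0.05 * X["Age"] + rng.normal(0, 0.1, n)

pre = ColumnTransformer([
    # One-hot encoding gives each category value its own coefficient.
    ("sex", OneHotEncoder(), ["Sex"]),
    # Standardising numerical features makes magnitudes comparable.
    ("age", StandardScaler(), ["Age"]),
])
model = Pipeline([("pre", pre), ("reg", LinearRegression())]).fit(X, y)

names = model.named_steps["pre"].get_feature_names_out()
for name, coef in zip(names, model.named_steps["reg"].coef_):
    print(f"{name}: {coef:+.3f}")  # sign = direction, magnitude ~ importance
```

Standardising the numerical inputs, as in the sketch, is what makes coefficient magnitudes comparable across features; without it, a variable measured in large raw units can receive a deceptively small coefficient.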
If the output variable is binary categorical, then the model's output is the predicted probability that the class is 1 (whatever the value 1 represents in your data).
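If you are using a logistic regression for such a binary outcome, the coefficients act on the log-odds of class 1 rather than on the output directly, although the sign still gives the direction of effect. A minimal sketch, assuming scikit-learn and using synthetic data:

```python
# A minimal sketch, assuming scikit-learn; data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
# Class 1 is driven (noisily) by the first feature only.
y = (X[:, 0] + rng.normal(0, 0.5, 200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.classes_)                    # [0 1]: predict_proba's column order
print(clf.predict_proba(X[:3])[:, 1])  # probability that the class is 1
# Each coefficient is the change in the log-odds of class 1 per unit
# increase in that feature; the sign still gives the direction of effect.
print(clf.coef_)
```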
Note that features with small coefficients may not be doing anything meaningful at all. Stability analysis of the coefficients can help determine this, but you can also experiment with excluding them from the model and checking whether test-set performance remains stable or improves, as in the sketch below. You might settle on a magnitude cutoff below which you consider a feature redundant.
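A minimal sketch of that pruning experiment, assuming scikit-learn; the data is synthetic and the 0.05 magnitude cutoff is an arbitrary illustration, not a recommendation.

```python
# A minimal sketch, assuming scikit-learn; synthetic data, and the 0.05
# cutoff is an arbitrary illustration, not a recommendation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# Only the first two features matter; the rest are noise.
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.5, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = LinearRegression().fit(X_tr, y_tr)
keep = np.abs(full.coef_) >= 0.05        # features below the cutoff are dropped
pruned = LinearRegression().fit(X_tr[:, keep], y_tr)

print("full   R^2:", full.score(X_te, y_te))
print("pruned R^2:", pruned.score(X_te, y_te))  # similar or better: drop them
```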
This article provides a detailed explanation of how to interpret regression coefficients in various study design scenarios.
This article provides a slightly less mathematical description of the same thing.
Check with your SMEs that the magnitudes and directions of effects are plausible for all variables. If not, dig deeper to understand why not.
In many cases, the answer is that the effect of a variable is nonlinear, or conditional on other variables, and therefore cannot be interpreted easily. For example, if a variable increases the output in some samples and decreases it in others, its learned coefficient will depend on the frequency of each type of sample and on the degree to which the variable affects the output in each case, as in the sketch below.
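The following minimal sketch simulates this situation with synthetic data: a variable x raises the outcome in one group and lowers it in another, and the single linear coefficient that comes out is a frequency-weighted blend of the two effects, matching neither.

```python
# A minimal sketch, assuming scikit-learn; synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
z = rng.integers(0, 2, n)   # group indicator
x = rng.normal(size=n)
# x raises the outcome when z == 1 and lowers it when z == 0.
y = np.where(z == 1, 2.0, -1.0) * x + rng.normal(0, 0.1, n)

X = np.column_stack([x, z])
coef_x = LinearRegression().fit(X, y).coef_[0]
# ~0.5: a frequency-weighted blend of +2.0 and -1.0, matching neither effect.
print(coef_x)
```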
However, many variables can be successfully interpreted as having linear or monotonic effects.