Null Preservation
I am in the process of reading Causal Inference by Hernan and Robins, and one idea has come up a handful of times which strikes me as important - and underdiscussed in a standard statistical curriculum (read: my statistical curriculum).
The basic premise is that, under null preservation, the model we are working with is never misspecified when the null is true. For instance, if one is considering the null of no causal effect of a treatment (\(A\)) on an outcome (\(Y\)), then the standard quadratic regression \[E[Y \mid A = a] = \beta_0 + \beta_1 a + \beta_2 a^2\] is correctly specified whenever there is no causal effect, since the conditional mean is then just some constant (\(\beta_1 = \beta_2 = 0\)), even if the model misrepresents the true mean structure between \(Y\) and \(A\) (supposing one exists).
This is a neat concept in part because it means that a rejection of the null is valid evidence against the null, regardless of whether our model is correctly specified. If we take a conservative statistician's point of view, and only ever claim evidence against the null rather than in favour of any particular alternative, then null preservation permits us to draw valid conclusions even when model misspecification is all but certain.
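A quick way to see this in action is with a small simulation. The sketch below is my own illustration, not from the book: the step-function "truth" under the alternative, the sample size, and the 5% level are arbitrary choices. The quadratic working model cannot represent the true dose-response, yet the joint test of \(\beta_1 = \beta_2 = 0\) still rejects at roughly its nominal rate when there is truly no effect, so rejections remain valid evidence against the null.

```python
# Monte Carlo sketch (illustrative only). Under the alternative the true
# dose-response is a step function, which the quadratic working model
# E[Y|A=a] = b0 + b1*a + b2*a^2 cannot represent. Null preservation says the
# joint test of b1 = b2 = 0 is still valid: under the null the constant mean
# lies inside the model, so the test rejects at about its nominal 5% level.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)


def pvalue_once(n=200, null=True):
    a = rng.uniform(0, 2, size=n)              # continuous treatment A
    if null:
        mu = np.zeros(n)                       # no effect: E[Y|A=a] is constant
    else:
        mu = np.where(a > 1, 1.0, 0.0)         # step-function effect, not quadratic
    y = mu + rng.normal(size=n)
    X = sm.add_constant(np.column_stack([a, a ** 2]))
    fit = sm.OLS(y, X).fit()
    # joint F-test of beta_1 = beta_2 = 0 via a restriction matrix
    return float(fit.f_test([[0, 1, 0], [0, 0, 1]]).pvalue)


null_rate = np.mean([pvalue_once(null=True) < 0.05 for _ in range(2000)])
alt_rate = np.mean([pvalue_once(null=False) < 0.05 for _ in range(2000)])
print(f"rejection rate under the null:        {null_rate:.3f}")  # close to 0.05
print(f"rejection rate under the alternative: {alt_rate:.3f}")   # well above 0.05
```

The second rate only speaks to power; the point is the first one: the type I error stays near its nominal level even though the working model's functional form would be wrong under the alternative.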