I tagged this question with bayes, but it concerns a standard MLE assumption as well.
Typically, the likelihood of a model makes use of the conditional independence assumption (CIA) to simplify the joint conditional distribution:
p(\mathbf{y} \mid \mathbf{\theta}) = \prod_{n=1}^{N} p(y_n \mid \mathbf{\theta})
where \mathbf{y} is the N \times 1 vector of observations (measurements) and \mathbf{\theta} contains the model parameters.
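To make the factorization concrete, here is a minimal sketch of what I mean, assuming (my choice, just for illustration) an i.i.d. Gaussian likelihood with scalar mean mu and standard deviation sigma:

```python
import math

def normal_logpdf(y, mu, sigma):
    # log density of N(mu, sigma^2) evaluated at y
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

def log_likelihood(ys, mu, sigma):
    # under the CIA, log p(y | theta) decomposes into a sum of per-observation terms
    return sum(normal_logpdf(y, mu, sigma) for y in ys)

print(log_likelihood([1.2, 0.8, 1.1], mu=1.0, sigma=0.5))
```

Without the CIA, one would instead have to work with the full joint density of \mathbf{y}, e.g. a multivariate normal with a non-diagonal covariance matrix.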

Am I right in saying that when the CIA is violated, our model does not explain \mathbf{y} well?

How can I test whether this is a reasonable assumption to make? Is it simply a matter of looking at the autocorrelation function of the observations?
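For instance, the kind of diagnostic I have in mind is the sample autocorrelation at some lag; a minimal stdlib sketch (the series below are toy examples I made up):

```python
def acf(xs, lag):
    # sample autocorrelation of xs at the given lag
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag))
    return cov / var

trend = [0.1 * t for t in range(50)]   # strongly dependent (trending) series
alternating = [1, -1] * 25             # strong negative lag-1 dependence

print(acf(trend, 1), acf(alternating, 1))
```

A lag-1 value far from zero (close to +1 for the trend, close to -1 for the alternating series) suggests the independence assumption is questionable.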

In BDA3 by Gelman et al., there is an example, "Checking the assumption of independence in binomial trials", on page 147.
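My understanding of that example is a posterior predictive check using the number of switches in the binary sequence as the test statistic; a minimal sketch, assuming (my choice) a Beta(1, 1) prior on the success probability:

```python
import random

def n_switches(seq):
    # test statistic: number of adjacent pairs that differ
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def ppc_pvalue(y, draws=2000, seed=1):
    # posterior predictive check under the independence model,
    # with a Beta(1, 1) prior on theta (my assumption)
    rng = random.Random(seed)
    n, s = len(y), sum(y)
    t_obs = n_switches(y)
    exceed = 0
    for _ in range(draws):
        theta = rng.betavariate(1 + s, 1 + n - s)  # posterior draw
        y_rep = [1 if rng.random() < theta else 0 for _ in range(n)]
        if n_switches(y_rep) >= t_obs:
            exceed += 1
    return exceed / draws

# a strictly alternating sequence has far more switches than
# independent Bernoulli trials would typically produce
print(ppc_pvalue([0, 1] * 10))
```

A p-value near 0 or 1 indicates that the replicated data under independence rarely reproduce the observed dependence pattern.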
How does that generalize to multiple regressors, so that p(\mathbf{y} \mid \mathbf{\theta}) is implicitly p(\mathbf{y} \mid \mathbf{\theta}, \mathbf{X})? Or to the case where \mathbf{y} is a continuous variable?
If there is strong dependence, how can I get rid of it?

Do I understand correctly that state-space models disguise this CIA under the Markovian process-model construction?
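To state what I mean: in a state-space model the observations are conditionally independent given the latent states \mathbf{x}, while the states themselves follow a Markov chain:

p(\mathbf{y} \mid \mathbf{x}, \mathbf{\theta}) = \prod_{n=1}^{N} p(y_n \mid x_n, \mathbf{\theta})

p(\mathbf{x} \mid \mathbf{\theta}) = p(x_1 \mid \mathbf{\theta}) \prod_{n=2}^{N} p(x_n \mid x_{n-1}, \mathbf{\theta})

so the dependence in \mathbf{y} is absorbed by the latent process rather than assumed away.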