Conditional Independence Assumption

Tagged with bayes, but this is a standard assumption in MLE settings as well.

Typically, the likelihood of a model makes use of the conditional independence assumption (CIA) to simplify the joint conditional distribution:

p(\mathbf{y}|\mathbf{\theta}) = \prod_{n=1}^{N}p(y_n|\mathbf{\theta})

where \mathbf{y} is the N \times 1 vector of observations (measurements) and \mathbf{\theta} contains the model parameters.
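To make the factorization concrete, here is a minimal sketch (with made-up Gaussian data and parameters of my own choosing) showing that, under the CIA, the joint log-likelihood is just the sum of per-observation log terms:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=100)  # hypothetical observations

# Assumed model: y_n ~ Normal(mu, sigma) i.i.d. given theta = (mu, sigma).
theta = {"mu": 2.0, "sigma": 1.0}

# Under the CIA, log p(y | theta) = sum_n log p(y_n | theta):
loglik = norm.logpdf(y, loc=theta["mu"], scale=theta["sigma"]).sum()

# Equivalently, the direct product of the per-observation densities
# (numerically fragile for large N, shown only to match the formula):
joint = np.log(norm.pdf(y, theta["mu"], theta["sigma"]).prod())
```

Here `loglik` and `joint` agree up to floating-point error, which is exactly the statement of the factorization above.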

  1. Am I right in saying that when the CIA is violated, our model does not explain \mathbf{y} well?

  2. How can I test whether this is a reasonable assumption to make? Is it simply a matter of looking at the autocorrelation function of the observations?

  3. In BDA3, by Gelman et al., there is an example, "Checking the assumption of independence in binomial trials," on page 147.
    How does that generalize to multiple regressors, so that p(\mathbf{y}|\mathbf{\theta}) is implicitly p(\mathbf{y}|\mathbf{\theta}, \mathbf{X})? Or to the case where \mathbf{y} is continuous?

  4. If there is strong dependence, how can I get rid of it?

  5. Do I understand correctly that state-space models hide this CIA inside the Markovian construction of the process model?
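Regarding questions 2 and 3: the BDA example checks independence with a posterior predictive check on the number of switches in a binary sequence. Below is a minimal sketch in that spirit, using made-up data and an assumed Beta(1, 1) prior (the data and prior are my own choices, not the book's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed binary sequence: strongly clustered,
# which an independence model would rarely produce.
y = np.array([1] * 10 + [0] * 10)
n = len(y)

def n_switches(seq):
    """Test statistic T(y): number of adjacent sign changes."""
    return int(np.sum(seq[1:] != seq[:-1]))

T_obs = n_switches(y)  # = 1 for this sequence

# Posterior for theta under a Beta(1, 1) prior is Beta(1 + s, 1 + n - s).
s = int(y.sum())
S = 10_000
theta_draws = rng.beta(1 + s, 1 + n - s, size=S)

# Replicate data under the independence model and recompute T.
T_rep = np.array([n_switches(rng.random(n) < th) for th in theta_draws])

# Posterior predictive p-value: how often replications look as
# clustered as (or more than) the observed data.
p_value = float(np.mean(T_rep <= T_obs))
```

A tiny `p_value` flags the independence assumption as implausible for these data. For a continuous response with regressors \mathbf{X}, the analogous check would replicate residuals from the fitted model and compare, for example, their lag-1 autocorrelation with that of the realized residuals.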

Posted here in case anyone wants to share their thoughts.