By the bias-variance decomposition, the MSE converges to zero:
$$\mathrm{MSE} = \mathrm{bias}^2\big(\hat F_n(x)\big) + \mathbb{V}\big(\hat F_n(x)\big) = 0 + \frac{F(x)\big(1 - F(x)\big)}{n} \to 0.$$
Equivalently, we can say that $\hat F_n(x)$ converges to $F(x)$ in the $L_2$ norm. Since $L_p$ convergence implies convergence in probability, we are done.
Remark. For each $x$, $\hat F_n(x)$ is a random variable. The above proves only that each such random variable converges in probability to the true value $F(x)$ of the CDF. The Glivenko-Cantelli Theorem yields a much stronger result; it states that $\sup_x \big|\hat F_n(x) - F(x)\big|$ converges almost surely (and hence in probability) to zero.
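To illustrate the distinction, here is a small simulation sketch (not part of the original solution) that computes the Kolmogorov-Smirnov distance $\sup_x |\hat F_n(x) - F(x)|$ for samples of increasing size, assuming standard-normal data:

```python
import random
from statistics import NormalDist

def ks_distance(sample):
    """sup_x |F_n(x) - F(x)| for a sample assumed drawn from N(0, 1).

    The supremum is attained at a sample point, so it suffices to
    compare F against F_n just before and just after each order statistic.
    """
    xs = sorted(sample)
    n = len(xs)
    F = NormalDist().cdf  # true CDF of the simulated data
    return max(max(abs(i / n - F(x)), abs((i + 1) / n - F(x)))
               for i, x in enumerate(xs))

random.seed(0)
small = ks_distance([random.gauss(0, 1) for _ in range(50)])
large = ks_distance([random.gauss(0, 1) for _ in range(5000)])
print(small, large)  # the sup-distance shrinks as n grows
```

The shrinking sup-distance is exactly the quantity the Glivenko-Cantelli Theorem says converges to zero almost surely.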
Assumption. The Bernoulli random variables in the statement of the question are pairwise independent.
The plug-in estimator is $\hat p = \frac{1}{n}\sum_{i=1}^n X_i$. The standard error is $\mathrm{se} = \sqrt{p(1-p)/n}$. We can estimate the standard error by $\widehat{\mathrm{se}} = \sqrt{\hat p(1-\hat p)/n}$. By the CLT,
$$\frac{\hat p - p}{\widehat{\mathrm{se}}} \rightsquigarrow N(0, 1),$$
and hence an approximate 90% confidence interval is $\hat p \pm 1.645\, \widehat{\mathrm{se}}$. The second part of this question is handled similarly.
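A minimal sketch of the interval computation, on a made-up Bernoulli sample (the data below are illustrative, not from the question):

```python
from math import sqrt

# Hypothetical sample of Bernoulli observations (illustrative only).
xs = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0]
n = len(xs)

p_hat = sum(xs) / n                      # plug-in estimator of p
se_hat = sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error
z = 1.645                                # z-score for a 90% interval

lower, upper = p_hat - z * se_hat, p_hat + z * se_hat
print(f"p_hat = {p_hat:.3f}, 90% CI = ({lower:.3f}, {upper:.3f})")
```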
By the CLT,
$$\sqrt{n}\big(\hat F_n(x) - F(x)\big) \rightsquigarrow N\big(0,\, F(x)(1 - F(x))\big).$$
Or, more conveniently,
$$\hat F_n(x) \approx N\!\left(F(x),\ \frac{F(x)\big(1 - F(x)\big)}{n}\right).$$
Remark. The closer (respectively, further) $F(x)$ is to (respectively, from) 0.5, the more (respectively, less) variance there is in the empirical distribution function evaluated at $x$.
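The remark can be checked directly: the variance $F(x)(1 - F(x))/n$ is maximized at $F(x) = 0.5$ and vanishes toward the tails. A quick numerical sketch, with an assumed sample size:

```python
n = 100  # assumed sample size, for illustration only

def var_Fn(F_x, n):
    """Variance of the empirical CDF at a point x where F(x) = F_x."""
    return F_x * (1 - F_x) / n

# Variance is largest at F(x) = 0.5 and symmetric about it.
values = {F_x: var_Fn(F_x, n) for F_x in (0.05, 0.25, 0.5, 0.75, 0.95)}
print(values)
```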
Without loss of generality, assume $x \le y$. Then,
$$\mathrm{Cov}\big(\hat F_n(x), \hat F_n(y)\big) = \frac{1}{n}\,\mathrm{Cov}\big(I(X_1 \le x),\, I(X_1 \le y)\big) = \frac{F(x) - F(x)F(y)}{n} = \frac{F(x)\big(1 - F(y)\big)}{n},$$
since $I(X_1 \le x)\, I(X_1 \le y) = I(X_1 \le x)$ when $x \le y$.
By the results of the previous question,
$$\mathbb{V}\big(\hat\theta\big) = \mathbb{V}\big(\hat F_n(b)\big) + \mathbb{V}\big(\hat F_n(a)\big) - 2\,\mathrm{Cov}\big(\hat F_n(a), \hat F_n(b)\big) = \frac{\theta(1 - \theta)}{n},$$
where $\hat\theta = \hat F_n(b) - \hat F_n(a)$ and $\theta = F(b) - F(a)$. We can use the estimator
$$\widehat{\mathrm{se}} = \sqrt{\frac{\hat\theta\big(1 - \hat\theta\big)}{n}}.$$
An approximate $1 - \alpha$ confidence interval is $\hat\theta \pm z_{\alpha/2}\, \widehat{\mathrm{se}}$.
Remark. The closer $\hat\theta$ is to zero or one, the smaller the standard error.
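A sketch of the computation, assuming illustrative data and endpoints $a, b$ (none of which come from the question): $\hat\theta = \hat F_n(b) - \hat F_n(a)$ is simply the fraction of observations falling in $(a, b]$.

```python
from math import sqrt
import random

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(200)]  # hypothetical data
a, b = -1.0, 1.0                                   # hypothetical endpoints

n = len(sample)
theta_hat = sum(a < x <= b for x in sample) / n    # F_n(b) - F_n(a)
se_hat = sqrt(theta_hat * (1 - theta_hat) / n)     # estimated standard error
z = 1.645                                          # z-score for a 90% interval

print(f"theta_hat = {theta_hat:.3f}, "
      f"90% CI = ({theta_hat - z*se_hat:.3f}, {theta_hat + z*se_hat:.3f})")
```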
This is an application of our findings in Question 2. In particular, we use the plug-in estimate $\hat p$, a sample proportion. A confidence interval for this estimate is $\hat p \pm z\, \widehat{\mathrm{se}}$, where
$$\widehat{\mathrm{se}} = \sqrt{\frac{\hat p\big(1 - \hat p\big)}{n}}.$$
The z-scores corresponding to 80% and 95% intervals are approximately 1.28 and 1.96.
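The quoted z-scores follow from the normal quantile function: for a $1 - \alpha$ interval, the score is $z_{\alpha/2} = \Phi^{-1}(1 - \alpha/2)$. A quick check using the standard library:

```python
from statistics import NormalDist

def z_score(level):
    """z-score for a two-sided confidence interval at the given level."""
    alpha = 1 - level
    return NormalDist().inv_cdf(1 - alpha / 2)

print(round(z_score(0.80), 2), round(z_score(0.95), 2))  # 1.28 1.96
```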