## 1.

### a)

See Question 8 of Chapter 3.

### b)

First, note that

$$S_{n}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{i}-\bar{X})^{2}=c_{n}\,n^{-1}\sum_{i}X_{i}^{2}-d_{n}\bar{X}^{2},$$

where $c_{n}=d_{n}=n/(n-1)$, so $c_{n}\rightarrow1$ and $d_{n}\rightarrow1$. By the WLLN (and the continuous mapping theorem), $n^{-1}\sum_{i}X_{i}^{2}$ and $\bar{X}^{2}$ converge in probability to $\mathbb{E}[X_{1}^{2}]$ and $\mu^{2}$, respectively. By Theorem 5.5 (d), $c_{n}n^{-1}\sum_{i}X_{i}^{2}$ and $d_{n}\bar{X}^{2}$ converge in probability to the same quantities. Lastly, by Theorem 5.5 (a), $S_{n}^{2}$ converges in probability to $\mathbb{E}[X_{1}^{2}]-\mu^{2}=\sigma^{2}$.
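
A quick Monte Carlo check that $S_{n}^{2}$ is consistent for $\sigma^{2}$ (a sketch; the $\operatorname{Unif}(0,1)$ population, with $\sigma^{2}=1/12$, is our illustrative choice, not part of the problem):

```python
import random
import statistics

random.seed(0)

def sample_variance(n):
    """S_n^2 (with the 1/(n-1) normalization) for n IID Unif(0,1) draws."""
    xs = [random.random() for _ in range(n)]
    return statistics.variance(xs)  # statistics.variance uses the n-1 denominator

# For Unif(0,1), sigma^2 = 1/12 ~= 0.0833; S_n^2 should settle near it as n grows.
print(sample_variance(100), sample_variance(100_000))
```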

## 2.

Suppose $X_{n}$ converges to $b$ in quadratic mean. By Jensen’s inequality,

$$(\mathbb{E}X_{n}-b)^{2}=(\mathbb{E}[X_{n}-b])^{2}\leq\mathbb{E}[(X_{n}-b)^{2}]\rightarrow0.$$

Therefore, $\mathbb{E}X_{n}\rightarrow b$.

Next, note that

$$\mathbb{E}[(X_{n}-b)^{2}]=\mathbb{V}(X_{n})+(\mathbb{E}X_{n}-b)^{2}.$$

Taking limits of both sides reveals $\lim_{n}\mathbb{V}(X_{n})=0$. For the converse, we can apply the limits $\lim_{n}\mathbb{E}[X_{n}]=b$ and $\lim_{n}\mathbb{V}(X_{n})=0$ directly to the equation above, which shows $\mathbb{E}[(X_{n}-b)^{2}]\rightarrow0$.
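
The decomposition $\mathbb{E}[(X-b)^{2}]=\mathbb{V}(X)+(\mathbb{E}X-b)^{2}$ also holds exactly for the empirical (denominator-$n$) moments of any finite sample, which gives an easy numerical sanity check (the $N(1,9)$ sample and $b=2$ are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)

# Arbitrary illustrative sample and reference point b.
b = 2.0
xs = [random.gauss(1.0, 3.0) for _ in range(100_000)]

mean = statistics.fmean(xs)
var_n = statistics.fmean((x - mean) ** 2 for x in xs)  # denominator-n variance
mse = statistics.fmean((x - b) ** 2 for x in xs)       # empirical E[(X - b)^2]

# The identity E[(X-b)^2] = V(X) + (E[X] - b)^2 holds exactly for these moments.
print(mse - (var_n + (mean - b) ** 2))  # ~0 up to floating-point error
```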

## 3.

First, note that

$$\mathbb{E}[(\bar{X}_{n}-\mu)^{2}]=\mathbb{V}(\bar{X}_{n})=\frac{\sigma^{2}}{n},$$

since $\mathbb{E}\bar{X}_{n}=\mu$. Taking the limit, $\mathbb{E}[(\bar{X}_{n}-\mu)^{2}]\rightarrow0$, and hence $\bar{X}_{n}$ converges to $\mu$ in quadratic mean.
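
A small Monte Carlo sketch: for IID draws with variance $\sigma^{2}$, the mean-squared error $\mathbb{E}[(\bar{X}_{n}-\mu)^{2}]=\sigma^{2}/n$ shrinks as $n$ grows (our illustrative choice is standard normal draws, so $\sigma^{2}=1$):

```python
import random
import statistics

random.seed(0)

def mse_of_mean(n, reps=2000):
    """Monte Carlo estimate of E[(Xbar_n - mu)^2] for IID N(0,1) draws (mu = 0)."""
    sq_errors = []
    for _ in range(reps):
        xbar = statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
        sq_errors.append(xbar ** 2)
    return statistics.fmean(sq_errors)

# Should track sigma^2 / n = 1/n.
for n in (10, 100, 1000):
    print(n, mse_of_mean(n))
```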

## 4.

Let $\epsilon>0$. For $n$ sufficiently large,

$$\mathbb{P}(|X_{n}|>\epsilon)=\mathbb{P}(X_{n}=n)=\frac{1}{n^{2}}\rightarrow0,$$

and hence $X_{n}$ converges in probability. However,

$$\mathbb{E}[X_{n}^{2}]=\frac{1}{n^{2}}\left(1-\frac{1}{n^{2}}\right)+n^{2}\cdot\frac{1}{n^{2}}\geq1\not\rightarrow0,$$

and hence $X_{n}$ does not converge in quadratic mean.

## 5.

It is sufficient to prove the second claim since convergence in quadratic mean implies convergence in probability. First, note that

$$\left(\frac{1}{n}\sum_{i}X_{i}^{2}-p\right)^{2}=\frac{1}{n^{2}}\sum_{i}\sum_{j}X_{i}^{2}X_{j}^{2}-\frac{2p}{n}\sum_{i}X_{i}^{2}+p^{2}.$$

Taking expectations, and using the facts that $X_{i}^{k}=X_{i}$ for all $k\geq1$ and $\mathbb{E}X_{i}=p$,

$$\mathbb{E}\left[\left(\frac{1}{n}\sum_{i}X_{i}^{2}-p\right)^{2}\right]=\frac{np+n(n-1)p^{2}}{n^{2}}-2p^{2}+p^{2}=\frac{p(1-p)}{n}\rightarrow0.$$
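
Since each $X_{i}\in\{0,1\}$ implies $X_{i}^{2}=X_{i}$, the average of squares is just the sample proportion; a quick simulation illustrates the convergence to $p$ (the value $p=0.3$ is our illustrative choice):

```python
import random
import statistics

random.seed(0)

p, n = 0.3, 100_000
xs = [1 if random.random() < p else 0 for _ in range(n)]

mean_of_squares = statistics.fmean(x * x for x in xs)

# For 0/1 draws, x*x == x, so the mean of squares equals the sample mean,
# which the WLLN sends to p.
assert mean_of_squares == statistics.fmean(xs)
print(abs(mean_of_squares - p))  # small for large n
```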

## 6.

Letting $F$ denote the CDF of a standard normal distribution, by the CLT,

## 7.

Let $f>0$ be a function and $\epsilon>0$ be a constant. Since $X_{n}\sim\operatorname{Poisson}(1/n)$ is nonnegative and integer valued, $f(n)X_{n}\neq0$ only if $X_{n}\neq0$. Then,

$$\mathbb{P}(|f(n)X_{n}|>\epsilon)\leq\mathbb{P}(X_{n}\neq0)=1-e^{-1/n}\rightarrow0.$$

It follows that $f(n)X_{n}$ converges to zero in probability. Take $f=1$ for Part (a) and $f(n)=n$ for Part (b).

## 8.

Letting $F$ denote the CDF of a standard normal distribution, by the CLT,

## 9.

Let $\epsilon>0$. Then,

Therefore, $X_{n}$ converges in probability (and hence in distribution) to $X$. On the other hand,

## 10.

Since $1\leq x^{k}/t^{k}$ whenever $x\geq t>0$, it follows that

$$I(|X|\geq t)\leq\frac{|X|^{k}}{t^{k}}\,I(|X|\geq t)\leq\frac{|X|^{k}}{t^{k}}.$$

Therefore,

$$\mathbb{P}(|X|\geq t)=\mathbb{E}[I(|X|\geq t)]\leq\frac{\mathbb{E}|X|^{k}}{t^{k}}.$$
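
The moment bound $\mathbb{P}(|X|\geq t)\leq\mathbb{E}|X|^{k}/t^{k}$ is easy to check numerically (a sketch: the $\operatorname{Exp}(1)$ distribution and the values $t=3$, $k=2$ are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)

xs = [random.expovariate(1.0) for _ in range(100_000)]
t, k = 3.0, 2

tail = statistics.fmean(1.0 if abs(x) > t else 0.0 for x in xs)  # ~ P(|X| > t)
bound = statistics.fmean(abs(x) ** k for x in xs) / t ** k       # ~ E|X|^k / t^k

print(tail, bound)  # the empirical tail probability sits below the moment bound
```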

## 11.

First, note that $X$ is almost surely zero. Let $\epsilon>0$ and $Z$ be a standard normal random variable. Since $X_{n}\sim N(0,1/n)$ has the same distribution as $Z/\sqrt{n}$,

$$\mathbb{P}(|X_{n}-X|>\epsilon)=\mathbb{P}(|X_{n}|>\epsilon)=\mathbb{P}(|Z|>\sqrt{n}\,\epsilon)\rightarrow0.$$

Therefore, $X_{n}$ converges in probability (and hence in distribution) to zero.

## 12.

Let $F$ be the CDF of an integer-valued random variable $K$, and let $k$ be an integer. It follows that $F(k)=F(k+c)$ for all $c\in[0,1)$. We use this observation multiple times below.

To prove the forward direction, suppose $X_{n}\rightsquigarrow X$. By definition, $F_{X_{n}}\rightarrow F_{X}$ at all points of continuity of $F_{X}$. Since $X$ is integer valued, the half-integers $k\pm\tfrac{1}{2}$ are continuity points, so

$$\mathbb{P}(X_{n}=k)=F_{X_{n}}(k+\tfrac{1}{2})-F_{X_{n}}(k-\tfrac{1}{2})\rightarrow F_{X}(k+\tfrac{1}{2})-F_{X}(k-\tfrac{1}{2})=\mathbb{P}(X=k).$$
To prove the reverse direction, suppose $\mathbb{P}(X_{n}=k)\rightarrow\mathbb{P}(X=k)$ for all integers $k$. Let $j$ be an integer and note that

$$F_{X_{n}}(j)=\sum_{k\leq j}\mathbb{P}(X_{n}=k)\rightarrow\sum_{k\leq j}\mathbb{P}(X=k)=F_{X}(j),$$

where the limit and sum may be exchanged by Scheffé's theorem. By the opening observation, $F_{X_{n}}$ and $F_{X}$ are constant on $[j,j+1)$ for every integer $j$, so $F_{X_{n}}\rightarrow F_{X}$ at every point, and hence $X_{n}\rightsquigarrow X$ as desired.
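
A concrete instance of this criterion (our example, not the exercise's): $\operatorname{Binomial}(n,\lambda/n)$ is integer valued, and its pmf converges pointwise to the $\operatorname{Poisson}(\lambda)$ pmf, so it converges in distribution to $\operatorname{Poisson}(\lambda)$:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

# Pointwise pmf gap between Binomial(n, lam/n) and Poisson(lam), over small k
# (the mass beyond k = 29 is negligible for lam = 2).
lam = 2.0
def max_gap(n):
    return max(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam)) for k in range(30))

print(max_gap(100), max_gap(10_000))  # the gap shrinks as n grows
```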

## 13.

First, note that

If $x\leq0$, it follows that $F_{X_{n}}(x)=0$. Otherwise, for $n$ sufficiently large,

$$F_{X_{n}}(x)=1-\left(1-\frac{\lambda x}{n}\right)^{n}\rightarrow1-e^{-\lambda x}.$$

Therefore, $F_{X_{n}}(x)\rightarrow(1-e^{-\lambda x})I_{(0,\infty)}(x)$, and hence $X_{n}$ converges in distribution to an $\operatorname{Exp}(\lambda)$ random variable.

## 14.

By the CLT,

$$\sqrt{n}(\bar{X}_{n}-\mu)\rightsquigarrow N(0,\sigma^{2}).$$

Let $g(x)=x^{2}$ so that $g^{\prime}(x)=2x$. By the delta method,

$$\sqrt{n}(\bar{X}_{n}^{2}-\mu^{2})\rightsquigarrow N\left(0,(g^{\prime}(\mu))^{2}\sigma^{2}\right)=N\left(0,4\mu^{2}\sigma^{2}\right).$$
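
A Monte Carlo sketch of the delta-method variance: for IID $N(\mu,\sigma^{2})$ draws, the variance of $\sqrt{n}(\bar{X}_{n}^{2}-\mu^{2})$ should be near $4\mu^{2}\sigma^{2}$ (the values $\mu=2$, $\sigma=1$ are our illustrative choices, giving a limiting variance of 16):

```python
import random
import statistics
from math import sqrt

random.seed(0)

mu, sigma, n, reps = 2.0, 1.0, 500, 4000

vals = []
for _ in range(reps):
    xbar = statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    vals.append(sqrt(n) * (xbar ** 2 - mu ** 2))

# The empirical variance of sqrt(n) * (Xbar^2 - mu^2) vs the delta-method value.
print(statistics.variance(vals), 4 * mu ** 2 * sigma ** 2)
```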

## 15.

Define $g:\mathbb{R}^{2}\rightarrow\mathbb{R}$ by $g(x)=x_{1}/x_{2}$. Then, $\nabla g(x)=(1/x_{2},-x_{1}/x_{2}^{2})^{\intercal}$. Define $\nabla_{\mu}=\nabla g(\mu)$ for brevity. By the multivariate delta method,

$$\sqrt{n}\left(\frac{\bar{X}_{1}}{\bar{X}_{2}}-\frac{\mu_{1}}{\mu_{2}}\right)\rightsquigarrow N\left(0,\nabla_{\mu}^{\intercal}\Sigma\nabla_{\mu}\right).$$

## 16.

Let $X$ and $Y$ be independent $N(0,1)$ random variables, and for each $n$ let $X_{n}\sim N(0,1)$ with $Y_{n}=X_{n}$. Trivially, $X_{n}\rightsquigarrow X$ and $Y_{n}\rightsquigarrow Y$. However, $\mathbb{V}(X_{n}+Y_{n})=\mathbb{V}(2X_{n})=4$ while $\mathbb{V}(X+Y)=2$, and hence $X_{n}+Y_{n}$ does not converge in distribution to $X+Y$.
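
The counterexample is easy to check numerically (a sketch; the sample size is an arbitrary choice): with $Y_{n}=X_{n}$, the sum $X_{n}+Y_{n}=2X_{n}$ has variance 4, while the independent sum $X+Y$ has variance 2.

```python
import random
import statistics

random.seed(0)

n = 100_000
z = [random.gauss(0.0, 1.0) for _ in range(n)]  # plays the role of X_n = Y_n
x = [random.gauss(0.0, 1.0) for _ in range(n)]  # independent X
y = [random.gauss(0.0, 1.0) for _ in range(n)]  # independent Y

var_dependent = statistics.variance([2.0 * zi for zi in z])               # X_n + Y_n
var_independent = statistics.variance([xi + yi for xi, yi in zip(x, y)])  # X + Y

print(var_dependent, var_independent)  # near 4 and 2, respectively
```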