## 1.

Since $\mathbb{E}_\lambda[\hat{\lambda}] = \mathbb{E}_\lambda[X_1] = \lambda$, the estimator is unbiased. Moreover, $\operatorname{se}(\hat{\lambda})^2 = \mathbb{V}_\lambda(X_1) / n = \lambda / n$. Since the bias is zero, the bias-variance decomposition gives $\operatorname{MSE} = \operatorname{se}(\hat{\lambda})^2 = \lambda / n$.
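As a sanity check, the claimed MSE can be verified by Monte Carlo. This is a sketch under the assumptions, implied but not stated above, that $X_1, \dots, X_n$ are iid $\mathrm{Poisson}(\lambda)$ (so that $\mathbb{E}[X_1] = \mathbb{V}(X_1) = \lambda$) and that $\hat{\lambda}$ is the sample mean; the parameter values `lam`, `n`, and `reps` are arbitrary choices for illustration.

```python
import numpy as np

# Monte Carlo check of bias and MSE for the sample-mean estimator.
# Assumes X_i iid Poisson(lam) and lam_hat = sample mean (implied above).
rng = np.random.default_rng(0)
lam, n, reps = 3.0, 50, 200_000

samples = rng.poisson(lam, size=(reps, n))
lam_hat = samples.mean(axis=1)              # one estimate per replication

bias_empirical = lam_hat.mean() - lam       # should be near 0 (unbiased)
mse_empirical = np.mean((lam_hat - lam) ** 2)
mse_theory = lam / n                        # MSE = se^2 = lambda / n

print(f"bias ~ {bias_empirical:.4f}; MSE ~ {mse_empirical:.4f} vs {mse_theory:.4f}")
```

With these settings the empirical MSE should agree with $\lambda/n$ to within Monte Carlo noise.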

## 2.

If $y$ is between $0$ and $\theta$,
$$\mathbb{P}_\theta(\hat{\theta} \le y) = \prod_{i=1}^n \mathbb{P}_\theta(X_i \le y) = (y/\theta)^n.$$

Differentiating yields the PDF of $\hat{\theta}$ between $0$ and $\theta$ as $y \mapsto n(y/\theta)^n / y$. Therefore,
$$\mathbb{E}_\theta[\hat{\theta}] = \int_0^\theta y \cdot \frac{n(y/\theta)^n}{y} \, dy = \frac{n\theta}{n+1}.$$

It follows that the bias of this estimator is $\mathbb{E}_\theta[\hat{\theta}] - \theta = -\theta/(n+1)$. Moreover,
$$\mathbb{E}_\theta[\hat{\theta}^2] = \int_0^\theta y^2 \cdot \frac{n(y/\theta)^n}{y} \, dy = \frac{n\theta^2}{n+2}, \qquad \mathbb{V}_\theta(\hat{\theta}) = \frac{n\theta^2}{n+2} - \frac{n^2\theta^2}{(n+1)^2}.$$

By the bias-variance decomposition, the MSE is
$$\frac{\theta^2}{(n+1)^2} + \frac{n\theta^2}{n+2} - \frac{n^2\theta^2}{(n+1)^2} = \frac{n\theta^2}{n+2} - \frac{(n^2-1)\theta^2}{(n+1)^2} = \frac{2\theta^2}{(n+1)(n+2)}.$$

Remark. Since $\mathbb{E}_\theta[\hat{\theta}] = n\theta/(n+1)$, the rescaled estimator $\hat{\theta}(n+1)/n$ is unbiased.
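The bias and MSE above can likewise be checked numerically. This sketch assumes, as the CDF computation implies, that $X_1, \dots, X_n$ are iid $\mathrm{Uniform}(0, \theta)$ and $\hat{\theta} = \max_i X_i$; `theta`, `n`, and `reps` are illustrative choices.

```python
import numpy as np

# Monte Carlo check for the maximum-of-the-sample estimator.
# Assumes X_i iid Uniform(0, theta), theta_hat = max(X_1, ..., X_n).
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))
theta_hat = samples.max(axis=1)

bias_empirical = theta_hat.mean() - theta
bias_theory = -theta / (n + 1)

mse_empirical = np.mean((theta_hat - theta) ** 2)
# 2 theta^2 / ((n+1)(n+2)) equals n theta^2/(n+2) - (n^2-1) theta^2/(n+1)^2.
mse_theory = 2 * theta**2 / ((n + 1) * (n + 2))

print(f"bias ~ {bias_empirical:.4f} vs {bias_theory:.4f}; "
      f"MSE ~ {mse_empirical:.4f} vs {mse_theory:.4f}")
```

Note that the estimator systematically undershoots $\theta$, which is exactly the negative bias $-\theta/(n+1)$ derived above.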

## 3.

Since $\mathbb{E}_\theta[\hat{\theta}] = 2 \mathbb{E}_\theta[X_1] = \theta$, the estimator is unbiased. Moreover,
$$\operatorname{se}(\hat{\theta})^2 = \frac{4 \, \mathbb{V}_\theta(X_1)}{n} = \frac{4}{n} \cdot \frac{\theta^2}{12} = \frac{\theta^2}{3n}.$$

Since the bias is zero, the bias-variance decomposition gives $\operatorname{MSE} = \operatorname{se}(\hat{\theta})^2 = \theta^2/(3n)$.
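A final Monte Carlo sketch, under the assumptions implied by $\mathbb{E}_\theta[\hat{\theta}] = 2\,\mathbb{E}_\theta[X_1] = \theta$: the $X_i$ are iid $\mathrm{Uniform}(0, \theta)$ (so $\mathbb{E}[X_1] = \theta/2$, $\mathbb{V}(X_1) = \theta^2/12$) and $\hat{\theta}$ is twice the sample mean; parameter values are again illustrative.

```python
import numpy as np

# Monte Carlo check for the method-of-moments estimator theta_hat = 2 * Xbar.
# Assumes X_i iid Uniform(0, theta) (implied by E[theta_hat] = 2 E[X_1] = theta).
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))
theta_hat = 2 * samples.mean(axis=1)

bias_empirical = theta_hat.mean() - theta       # should be near 0 (unbiased)
mse_empirical = np.mean((theta_hat - theta) ** 2)
mse_theory = theta**2 / (3 * n)                 # 4 * (theta^2/12) / n

print(f"bias ~ {bias_empirical:.4f}; MSE ~ {mse_empirical:.4f} vs {mse_theory:.4f}")
```

Comparing with problem 2 at the same $n$, this estimator's MSE $\theta^2/(3n)$ is larger than $2\theta^2/((n+1)(n+2))$ for moderate $n$, which is why the (bias-corrected) maximum is usually preferred for the uniform scale parameter.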