1.

Let $\hat{F}_n$ be the empirical distribution with moments $\hat{m}_j = \frac{1}{n} \sum_{i=1}^n X_i^j$. For the $\text{Gamma}(\alpha, \beta)$ model (shape $\alpha$, scale $\beta$), the method of moment estimators (MMEs) satisfy

$$\hat{\alpha} \hat{\beta} = \hat{m}_1 \quad \text{and} \quad \hat{\alpha} \hat{\beta}^2 + \hat{\alpha}^2 \hat{\beta}^2 = \hat{m}_2.$$

Solving for $\hat{\alpha}$ and $\hat{\beta}$,

$$\hat{\alpha} = \frac{\hat{m}_1^2}{\hat{m}_2 - \hat{m}_1^2} \quad \text{and} \quad \hat{\beta} = \frac{\hat{m}_2 - \hat{m}_1^2}{\hat{m}_1}.$$

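As a quick numerical check, the sketch below recovers the parameters from a large sample. It assumes the shape-scale parametrization with mean $\alpha \beta$, which matches numpy's convention; the values $\alpha = 2$ and $\beta = 3$ are hypothetical.

import numpy as np

np.random.seed(0)
alpha, beta = 2., 3.                  # Hypothetical true parameters.
x = np.random.gamma(alpha, beta, size=10**6)

m1 = np.mean(x)                       # First empirical moment.
m2 = np.mean(x**2)                    # Second empirical moment.

alpha_mme = m1**2 / (m2 - m1**2)      # Approximately 2.
beta_mme  = (m2 - m1**2) / m1         # Approximately 3.
print(alpha_mme, beta_mme)
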
2.

a)

The MMEs satisfy

$$\frac{\hat{a} + \hat{b}}{2} = \hat{m}_1 \quad \text{and} \quad \frac{(\hat{b} - \hat{a})^2}{12} + \left( \frac{\hat{a} + \hat{b}}{2} \right)^2 = \hat{m}_2.$$

Define $\hat{\sigma}^2 = \hat{m}_2 - \hat{m}_1^2$. It follows that

$$\hat{a} + \hat{b} = 2 \hat{m}_1 \quad \text{and} \quad \hat{b} - \hat{a} = 2 \sqrt{3} \, \hat{\sigma}.$$

Since $2 \hat{a} = (\hat{a} + \hat{b}) - (\hat{b} - \hat{a})$ and $2 \hat{b} = (\hat{a} + \hat{b}) + (\hat{b} - \hat{a})$,

$$\hat{a} = \hat{m}_1 - \sqrt{3} \, \hat{\sigma} \quad \text{and} \quad \hat{b} = \hat{m}_1 + \sqrt{3} \, \hat{\sigma}.$$

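A minimal simulation check of these formulas, reusing the values $a = 1$ and $b = 3$ from Part (d):

import numpy as np

np.random.seed(0)
x = np.random.uniform(low=1., high=3., size=10**6)

m1 = np.mean(x)
sigma = np.std(x)                  # Equals sqrt(m2 - m1**2).
print(m1 - np.sqrt(3.) * sigma,    # MME of a; approximately 1.
      m1 + np.sqrt(3.) * sigma)    # MME of b; approximately 3.
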
b)

The maximum likelihood estimators (MLEs) maximize

$$\mathcal{L}(a, b) = \prod_{i=1}^n \frac{1}{b - a} \, \mathbb{1}\{a \leq X_i \leq b\} = \frac{\mathbb{1}\{a \leq \min_i X_i\} \, \mathbb{1}\{\max_i X_i \leq b\}}{(b - a)^n},$$

which is decreasing in $b - a$ on its support. The maximum occurs at $\hat{a} = \min_i X_i$ and $\hat{b} = \max_i X_i$.

c)

By equivariance, the MLE of $\tau = \int x \, dF(x) = \frac{a + b}{2}$ is $\hat{\tau} = \frac{\hat{a} + \hat{b}}{2}$.

d)

Estimator                        Mean squared error (MSE)
MLE                              0.015
Non-parametric plug-in estimate  0.033

The MSE of $\hat{\tau}$, the MLE of the mean, is estimated by simulation using the code below.

import numpy as np

a = 1.
b = 3.
n = 10
n_sims = 10**6

samples = np.random.uniform(low=a, high=b, size=[n_sims, n])

a_mle = np.min(samples, axis=1)  # Maximum likelihood estimator of a.
b_mle = np.max(samples, axis=1)  # Maximum likelihood estimator of b.

tau     = (a     + b    ) / 2.   # Mean of Uniform(a, b).
tau_mle = (a_mle + b_mle) / 2.   # Maximum likelihood estimator of the mean.

mse = np.mean((tau_mle - tau)**2)
print('MSE of the MLE: {:.3f}'.format(mse))  # Approximately 0.015.

The non-parametric plug-in estimate of $\tau$ is the sample mean $\bar{X}$. Since this estimator is unbiased, its MSE is its variance,

$$\operatorname{Var}(\bar{X}) = \frac{(b - a)^2}{12 n} = \frac{1}{30} \approx 0.033.$$
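
This value is easily confirmed by simulation with the same settings as the code above (a sketch):

import numpy as np

a, b, n, n_sims = 1., 3., 10, 10**6
samples = np.random.uniform(low=a, high=b, size=[n_sims, n])
tau_plug_in = np.mean(samples, axis=1)           # Sample mean of each data set.
print(np.mean((tau_plug_in - (a + b) / 2.)**2))  # Approximately 1/30.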

3.

a)

Let $Z$ be a standard normal random variable so that

$$0.95 = P(X < \tau) = P\left( Z < \frac{\tau - \mu}{\sigma} \right) = \Phi\left( \frac{\tau - \mu}{\sigma} \right),$$

and hence

$$\tau = \mu + \sigma \Phi^{-1}(0.95).$$

The MLEs of the mean and standard deviation of the original distribution are $\hat{\mu} = \bar{X}$ and $\hat{\sigma} = \big( \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2 \big)^{1/2}$. By equivariance, the MLE of $\tau$ is $\hat{\tau} = \hat{\mu} + \hat{\sigma} \Phi^{-1}(0.95)$.

b)

By the delta method, using $\operatorname{Var}(\hat{\mu}) \approx \sigma^2 / n$ and $\operatorname{Var}(\hat{\sigma}) \approx \sigma^2 / (2n)$,

$$\widehat{\operatorname{se}}(\hat{\tau}) = \hat{\sigma} \sqrt{\frac{1}{n} \left( 1 + \frac{\Phi^{-1}(0.95)^2}{2} \right)}.$$

An approximate confidence interval is $\hat{\tau} \pm z_{\alpha / 2} \, \widehat{\operatorname{se}}(\hat{\tau})$.

c)

Estimator                               SE
Delta Method                            0.558
Parametric Bootstrap (100,000 samples)  0.557

Code for the delta method and the parametric bootstrap is given below.

import numpy as np
from scipy.stats import norm

data = np.array([ 3.23, -2.50,  1.88, -0.68,  4.43,  0.17,
                  1.03, -0.07, -0.01,  0.76,  1.76,  3.18,
                  0.33, -0.31,  0.30, -0.61,  1.52,  5.43,
                  1.54,  2.28,  0.42,  2.33, -1.03,  4.00,
                  0.39])

n_samples = data.size
n_sims = 10**5

# Delta method standard error; np.std computes the MLE of sigma.
se_delta_method = np.std(data) \
                * np.sqrt(1. / n_samples * (1. + 0.5 * norm.ppf(0.95)**2))

# Parametric bootstrap: simulate from the fitted normal and recompute the MLE.
samples = np.std(data) * np.random.randn(n_sims, n_samples) + np.mean(data)
tau_mles = np.std(samples, axis=1) * norm.ppf(0.95) + np.mean(samples, axis=1)
se_parametric_bootstrap = np.std(tau_mles)

4.

By Question 2 (b), the MLE of $\theta$ is $\hat{\theta} = \max_i X_i$. Its CDF is $P(\hat{\theta} \leq y) = (y / \theta)^n$ for $0 \leq y \leq \theta$. Therefore, for any $0 < \epsilon < \theta$, $P(\hat{\theta} \geq \theta + \epsilon) = 0$ and $P(\hat{\theta} \leq \theta - \epsilon) = (1 - \epsilon / \theta)^n$. It follows that

$$P(|\hat{\theta} - \theta| > \epsilon) \leq \left( 1 - \frac{\epsilon}{\theta} \right)^n.$$

Taking a limit in $n$ yields the desired result.
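
A short simulation illustrating the decay (a sketch; the values $\theta = 1$ and $\epsilon = 0.05$ are hypothetical):

import numpy as np

np.random.seed(0)
theta, eps, n_sims = 1., 0.05, 10**5
for n in [10, 50, 100]:
    theta_mles = np.max(np.random.uniform(0., theta, size=[n_sims, n]), axis=1)
    # Empirical P(|theta_mle - theta| > eps) versus the bound (1 - eps/theta)**n.
    print(n, np.mean(np.abs(theta_mles - theta) > eps), (1. - eps / theta)**n)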

5.

Since $E(X_1) = \lambda$, the MME is the sample mean $\bar{X}$.

The MLE is also the sample mean. To see this, note that

$$f(x; \lambda) = \frac{\lambda^x e^{-\lambda}}{x!}$$

and hence

$$\ell(\lambda) = \sum_{i=1}^n \log f(X_i; \lambda) = \log(\lambda) \sum_{i=1}^n X_i - n \lambda - \sum_{i=1}^n \log(X_i!).$$

Therefore, the derivative of the log likelihood is

$$\ell'(\lambda) = \frac{1}{\lambda} \sum_{i=1}^n X_i - n.$$

Setting this to zero and solving for $\lambda$ yields the desired result.

The Fisher information is

$$I(\lambda) = -E\left[ \partial_\lambda^2 \log f(X_1; \lambda) \right] = E\left[ \frac{X_1}{\lambda^2} \right] = \frac{1}{\lambda}.$$
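
The identity $I(\lambda) = 1 / \lambda$ can be checked by estimating the variance of the score $\partial_\lambda \log f(X_1; \lambda) = X_1 / \lambda - 1$ (a sketch; $\lambda = 2$ is hypothetical):

import numpy as np

np.random.seed(0)
lam = 2.
x = np.random.poisson(lam, size=10**6)
score = x / lam - 1.            # Score of a single observation.
print(np.var(score), 1. / lam)  # Both approximately 0.5.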

6.

a)

$\bar{X}$ is the MLE of the mean $\theta$. Let $Z$ be a standard normal random variable. Since

$$\psi = P(X_1 > 0) = P(Z > -\theta) = \Phi(\theta),$$

the MLE of $\psi$ is $\hat{\psi} = \Phi(\bar{X})$ by equivariance.

b)

An approximate 95% confidence interval (CI) for $\psi$ is $\hat{\psi} \pm 2 \, \widehat{\operatorname{se}}$ where, by the delta method,

$$\widehat{\operatorname{se}} = \phi(\bar{X}) \, \widehat{\operatorname{se}}(\bar{X}) = \frac{\phi(\bar{X})}{\sqrt{n}}$$

and $\phi$ denotes the standard normal density.
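
A sketch of the resulting interval on simulated data (the values $\theta = 1$ and $n = 100$ are hypothetical):

import numpy as np
from scipy.stats import norm

np.random.seed(0)
theta, n = 1., 100
x = np.random.randn(n) + theta

psi_mle = norm.cdf(np.mean(x))          # MLE of psi.
se = norm.pdf(np.mean(x)) / np.sqrt(n)  # Delta method standard error.
print(psi_mle - 2. * se, psi_mle + 2. * se)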

c)

By the law of large numbers (LLN), $\tilde{\psi} = \frac{1}{n} \sum_{i=1}^n Y_i$ converges in probability to $E(Y_1) = \psi$.

d)

Similarly to Part (b),

$$\operatorname{Var}(\hat{\psi}) \approx \frac{\phi(\theta)^2}{n}.$$

Moreover,

$$\operatorname{Var}(\tilde{\psi}) = \frac{\operatorname{Var}(Y_1)}{n} = \frac{\Phi(\theta) (1 - \Phi(\theta))}{n}.$$

Therefore,

$$\operatorname{ARE}(\tilde{\psi}, \hat{\psi}) = \frac{\operatorname{Var}(\hat{\psi})}{\operatorname{Var}(\tilde{\psi})} = \frac{\phi(\theta)^2}{\Phi(\theta) (1 - \Phi(\theta))}.$$

It’s possible to show that the ARE achieves its maximum value of $2 / \pi \approx 0.64$ at $\theta = 0$. Note that this quantity is necessarily less than one due to the asymptotic optimality of the MLE (Theorem 9.23).
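
Both claims can be verified numerically by evaluating the ARE on a grid (a sketch):

import numpy as np
from scipy.stats import norm

theta = np.linspace(-5., 5., 1001)
are = norm.pdf(theta)**2 / (norm.cdf(theta) * (1. - norm.cdf(theta)))
print(theta[np.argmax(are)], np.max(are), 2. / np.pi)  # 0.0, ~0.637, ~0.637.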

e)

Suppose the distribution is not normal. Under sufficient regularity, the LLN guarantees that the sample mean converges in probability to the true mean $\mu$ of the distribution. As such, $\hat{\psi} = \Phi(\bar{X})$ converges in probability to $\Phi(\mu)$, which need not equal $\psi = P(X_1 > 0)$, so $\hat{\psi}$ is no longer consistent in general.
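
For example, under a hypothetical centered exponential distribution, $\Phi(\bar{X})$ settles near $\Phi(0) = 0.5$ even though $P(X_1 > 0) = 1/e \approx 0.37$, while $\tilde{\psi}$ remains consistent (a sketch):

import numpy as np
from scipy.stats import norm

np.random.seed(0)
x = np.random.exponential(1., size=10**6) - 1.  # Mean 0, but not normal.

print(np.mean(x > 0.))       # psi-tilde: approximately 1/e ~ 0.37.
print(norm.cdf(np.mean(x)))  # psi-hat: approximately 0.5.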

7.

a)

The log-likelihood for drug $i \in \{1, 2\}$ is

$$\ell(p_i) = X_i \log(p_i) + (n - X_i) \log(1 - p_i) + \text{const.}$$

Taking derivatives,

$$\ell'(p_i) = \frac{X_i}{p_i} - \frac{n - X_i}{1 - p_i}.$$

It follows that the MLE is $\hat{p}_i = X_i / n$. By equivariance, the MLE of $\psi = p_1 - p_2$ is $\hat{\psi} = \hat{p}_1 - \hat{p}_2$.

b)

The Fisher information of drug $i$ is

$$I(p_i) = \frac{n}{p_i (1 - p_i)}.$$

Since the two trials are independent, the complete Fisher information is

$$I(p_1, p_2) = n \begin{pmatrix} \frac{1}{p_1 (1 - p_1)} & 0 \\ 0 & \frac{1}{p_2 (1 - p_2)} \end{pmatrix}.$$

c)

By the delta method,

$$\widehat{\operatorname{se}}(\hat{\psi}) = \sqrt{\frac{\hat{p}_1 (1 - \hat{p}_1)}{n} + \frac{\hat{p}_2 (1 - \hat{p}_2)}{n}}.$$

d)

Method                                  90% CI Lower bound  90% CI Upper bound
Delta Method                            -0.009              0.129
Parametric Bootstrap (100,000 samples)  -0.009              0.129

Code for both methods is given below.

import numpy as np
from scipy.stats import norm

n      = 200
x1     = 160
x2     = 148
n_sims = 10**5

p1_mle   = x1/n
p2_mle   = x2/n
psi_mle  = p1_mle - p2_mle
ppf_0p95 = norm.ppf(0.95)

se_delta = np.sqrt(p1_mle * (1. - p1_mle) / n + p2_mle * (1. - p2_mle) / n)
print('90% CI delta method: [{:.3f}, {:.3f}]'.format(
    psi_mle - ppf_0p95 * se_delta, psi_mle + ppf_0p95 * se_delta))

# Parametric bootstrap: resample counts from the fitted binomials.
samples1 = np.random.binomial(n, p1_mle, size=[n_sims])
samples2 = np.random.binomial(n, p2_mle, size=[n_sims])
psi_mles = samples1 / n - samples2 / n
se_parametric_bootstrap = np.std(psi_mles)
print('90% CI parametric bootstrap: [{:.3f}, {:.3f}]'.format(
    psi_mle - ppf_0p95 * se_parametric_bootstrap,
    psi_mle + ppf_0p95 * se_parametric_bootstrap))

8.

For the $N(\mu, \sigma^2)$ model, the log likelihood is

$$\ell(\mu, \sigma) = -n \log(\sigma) - \frac{1}{2 \sigma^2} \sum_{i=1}^n (X_i - \mu)^2 + \text{const.}$$

Taking derivatives,

$$\partial_\mu^2 \ell = -\frac{n}{\sigma^2}, \qquad \partial_\mu \partial_\sigma \ell = -\frac{2}{\sigma^3} \sum_{i=1}^n (X_i - \mu), \qquad \partial_\sigma^2 \ell = \frac{n}{\sigma^2} - \frac{3}{\sigma^4} \sum_{i=1}^n (X_i - \mu)^2.$$

Taking expectations,

$$E\left[ \partial_\mu^2 \ell \right] = -\frac{n}{\sigma^2}, \qquad E\left[ \partial_\mu \partial_\sigma \ell \right] = 0, \qquad E\left[ \partial_\sigma^2 \ell \right] = -\frac{2 n}{\sigma^2}.$$

Therefore, the Fisher information is

$$I(\mu, \sigma) = \frac{n}{\sigma^2} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
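
Assuming the $N(\mu, \sigma^2)$ model above, the matrix can be verified by Monte Carlo via the identity $I_1(\mu, \sigma) = E[s s^\top]$, where $s$ is the score of a single observation (a sketch; $\mu = 1$ and $\sigma = 2$ are hypothetical):

import numpy as np

np.random.seed(0)
mu, sigma = 1., 2.
x = sigma * np.random.randn(10**6) + mu

s_mu    = (x - mu) / sigma**2                   # Score with respect to mu.
s_sigma = -1. / sigma + (x - mu)**2 / sigma**3  # Score with respect to sigma.
scores = np.stack([s_mu, s_sigma])

# Approximately [[1 / sigma**2, 0], [0, 2 / sigma**2]] = [[0.25, 0], [0, 0.5]].
print(scores @ scores.T / x.size)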

9.

a)

Results are given below.

Method                                      95% CI Lower bound  95% CI Upper bound
Delta Method                                126.146             189.219
Parametric Bootstrap (100,000 samples)      126.076             189.288
Non-parametric Bootstrap (100,000 samples)  129.553             185.812

The MLE of $\theta = e^\mu$ is $\hat{\theta} = e^{\hat{\mu}}$ where $\hat{\mu} = \bar{X}$ is the MLE of the mean. By the delta method, since $\sigma = 1$ is known,

$$\widehat{\operatorname{se}}(\hat{\theta}) = e^{\hat{\mu}} \, \widehat{\operatorname{se}}(\hat{\mu}) = \frac{e^{\hat{\mu}}}{\sqrt{n}}.$$

Therefore, a 95% CI for $\theta$ is $e^{\hat{\mu}} \left( 1 \pm 2 / \sqrt{n} \right)$.

Code for all three methods is given below.

import numpy as np

np.random.seed(1)
data = np.random.randn(100) + 5.
mu_mle = np.mean(data)

n_sims = 10**5
n_samples = data.size

print('95% CI delta method: [{:.3f}, {:.3f}]'.format(
    (1. - 2. / np.sqrt(n_samples)) * np.exp(mu_mle),
    (1. + 2. / np.sqrt(n_samples)) * np.exp(mu_mle)))

# Parametric bootstrap: simulate new data sets from the fitted N(mu_mle, 1).
samples = np.random.randn(n_sims, n_samples) + mu_mle
theta_mles = np.exp(np.mean(samples, axis=1))
se_parametric_bootstrap = np.std(theta_mles)
print('95% CI parametric bootstrap: [{:.3f}, {:.3f}]'.format(
    np.exp(mu_mle) - 2. * se_parametric_bootstrap,
    np.exp(mu_mle) + 2. * se_parametric_bootstrap))

# Non-parametric bootstrap: resample the data with replacement.
indices = np.random.randint(n_samples, size=[n_sims, n_samples])
theta_mles = np.exp(np.mean(data[indices], axis=1))
se_bootstrap = np.std(theta_mles)
print('95% CI non-parametric bootstrap: [{:.3f}, {:.3f}]'.format(
    np.exp(mu_mle) - 2. * se_bootstrap,
    np.exp(mu_mle) + 2. * se_bootstrap))

b)

10.

a)

The CDF of $\hat{\theta} = \max_i X_i$ is $F_{\hat{\theta}}(y) = (y / \theta)^n$ on $[0, \theta]$. This, along with the CDFs of the bootstrap estimators, is plotted below.
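
The plot can be reproduced with a sketch along the following lines (the settings $n = 50$ and $\theta = 1$ are hypothetical):

import matplotlib.pyplot as plt
import numpy as np

np.random.seed(0)
n, n_sims = 50, 10**4
theta = 1.
data = np.random.uniform(0., theta, size=n)
theta_mle = np.max(data)

# True CDF of the MLE: (y / theta)**n on [0, theta].
y = np.linspace(0.9, 1., 1000)
plt.plot(y, (y / theta)**n, label='True')

# Parametric bootstrap: resample from Uniform(0, theta_mle).
par = np.max(np.random.uniform(0., theta_mle, size=[n_sims, n]), axis=1)
# Non-parametric bootstrap: resample the data with replacement.
nonpar = np.max(data[np.random.randint(n, size=[n_sims, n])], axis=1)

for boots, label in [(par, 'Parametric'), (nonpar, 'Non-parametric')]:
    plt.plot(np.sort(boots), np.arange(1, n_sims + 1) / n_sims, label=label)
plt.legend()
plt.show()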

b)

The parametric bootstrap estimator of this parameter has a continuous distribution and hence zero probability of being equal to $\hat{\theta}$ exactly. Let $\hat{\theta}^*$ be the non-parametric bootstrap estimator. Since $\hat{\theta}^* = \hat{\theta}$ precisely when $\max_i X_i$ appears in the resample,

$$P(\hat{\theta}^* = \hat{\theta}) = 1 - \left( 1 - \frac{1}{n} \right)^n \geq 1 - e^{-1} \approx 0.632.$$

That is, the non-parametric bootstrap estimator has a good chance of being equal to the MLE. These phenomena are visible in the plot above.
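
The 0.632 figure is easy to reproduce by simulation (a sketch; $n = 50$ and $\theta = 1$ are hypothetical):

import numpy as np

np.random.seed(0)
n, n_sims = 50, 10**5
data = np.random.uniform(0., 1., size=n)
theta_mle = np.max(data)

indices = np.random.randint(n, size=[n_sims, n])
theta_boots = np.max(data[indices], axis=1)  # Non-parametric bootstrap MLEs.
print(np.mean(theta_boots == theta_mle),     # Approximately 0.636.
      1. - (1. - 1. / n)**n)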