Parsiad Azimzadeh

The following is an expository post in which a dynamic programming principle is derived for an optimal stopping problem. The exposition is inspired by a proof in N. Touzi's textbook [1], an invaluable resource.

Before we begin, let's give some motivation. As an example, consider a stock whose price under the risk-neutral measure is given by the process $(X_t)_{t\geq 0}$. The corresponding optimal stopping problem describes the price of an American option that pays $g(X_t)$ if exercised at time $t$. Through this three-part series of posts, the reader is shown that the value of such an option is the unique viscosity solution of a partial differential equation (in particular, a single-obstacle variational inequality).

Consider a filtered probability space (with filtration $(\mathcal{F}_{t})_{t\geq0}$) satisfying the usual conditions, on which a standard Brownian motion $W_{t}$ is defined. Let $X_{s}^{t,x}$ denote the strong solution of the stochastic differential equation (SDE) $$dX_{s}=b(s,X_{s})ds+\sigma(s,X_{s})dW_{s}\text{ for }s>t\text{ and }X_{t}=x.$$ To ensure its existence and uniqueness, we need:

$b$ and $\sigma$ are Lipschitz and of linear growth in $x$ uniformly in $t$.

Let $T<\infty$ and let $\mathscr{T}_{[t,T]}$ denote the set of stopping times taking values in $[t,T]$ that are independent of $\mathcal{F}_{t}$. Consider the problem $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\text{ where }J(t,x;\tau)=\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right]$$ and $g$ is a given function. To ensure this is well-defined, we make the following assumption:

$g\colon[0,T]\times\mathbb{R}^d\rightarrow\mathbb{R}$ is continuous and of quadratic growth (i.e., $|g(t,x)|\leq K(1+|x|^2)$ for some constant $K>0$ independent of $(t,x)$).

The above assumption implies that for all $s$ and $\tau\in\mathscr{T}_{[s,T]}$, the function $(t,x)\mapsto J(t,x;\tau)$ is continuous on $[0,s]\times\mathbb{R}^{d}$ by the following argument. Let $(t_{n}^\prime,x_{n}^\prime)_{n}$ be a sequence converging to $(t^\prime,x^\prime)\in[0,s]\times\mathbb{R}^{d}$. If we can show that $(g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime}))_n$ is dominated by an integrable function, we can apply the dominated convergence theorem to get \begin{align*} \lim_{n\rightarrow\infty}J(t_{n}^\prime,x_{n}^\prime;\tau) & =\lim_{n\rightarrow\infty}\mathbb{E}\left[g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[\lim_{n\rightarrow\infty}g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[g(\tau,\lim_{n\rightarrow\infty}X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[g(\tau,X_{\tau}^{t^\prime,x^\prime})\right]\\ & =J(t^\prime,x^\prime;\tau). \end{align*} Moreover, since $g$ is of quadratic growth, \begin{align*} \mathbb{E}\left[\left|g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right|\right] & \leq\mathbb{E}\left[K\left(1+\left|X_{\tau}^{t_{n}^\prime,x_{n}^\prime}\right|^{2}\right)\right]\\ & =K\left(1+\mathbb{E}\left[\left|X_{\tau}^{t_{n}^\prime,x_{n}^\prime}\right|^{2}\right]\right)\\ & \leq K_{0}\left(1+\left|x_{n}^\prime\right|^{2}\right) \end{align*} where $K_{0}$ can depend on $T$ (by the usual argument for Ito processes using Gronwall's lemma). Since $x_{n}^\prime\rightarrow x^\prime$, domination follows.
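
For completeness, the bound invoked above is the standard second moment estimate for SDEs with Lipschitz coefficients of linear growth: combining Doob's (or the Burkholder-Davis-Gundy) inequality with Gronwall's lemma gives $$\mathbb{E}\left[\sup_{t\leq s\leq T}\left|X_{s}^{t,x}\right|^{2}\right]\leq C\left(1+\left|x\right|^{2}\right)$$ for a constant $C$ depending only on $T$ and the constants in the assumption on $b$ and $\sigma$. Applying this with $(t,x)=(t_{n}^\prime,x_{n}^\prime)$ and noting that any $\tau\in\mathscr{T}_{[s,T]}$ with $s\geq t_{n}^\prime$ takes values in $[t_{n}^\prime,T]$ yields the constant $K_{0}$ above.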

We denote by $f^{*}$ and $f_{*}$ the upper and lower semicontinuous envelopes of a function $f\colon Y\rightarrow[-\infty,\infty]$, where $Y$ is a given topological space.

Let $\theta\in\mathscr{T}_{[t,T]}$ be a stopping time such that $t < \theta < T$ and $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$. The following dynamic programming principle holds: \begin{align*} u(t,x) & \leq\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u^{*}(\theta,X_{\theta}^{t,x})\right].\\ u(t,x) & \geq\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u_{*}(\theta,X_{\theta}^{t,x})\right]. \end{align*}

Note that if $u$ is continuous, the above dynamic programming principle becomes, by virtue of $u=u_{*}=u^{*}$, $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u(\theta,X_{\theta}^{t,x})\right].$$

Intuition behind proof: The $\leq$ inequality is established by the tower property (see the formal proof below). The $\geq$ inequality requires more work. Intuitively, we can take an $\epsilon$-optimal stopping time $\tau^{\epsilon}(\theta)$ as follows: $$u(\theta,X_{\theta}^{t,x})\leq J(\theta,X_{\theta}^{t,x};\tau^{\epsilon}(\theta))+\epsilon.$$ Now, let $\tau$ be an arbitrary stopping time and $$\hat{\tau}=\tau\mathbf{1}_{\left\{ \tau<\theta\right\} }+\tau^{\epsilon}(\theta)\mathbf{1}_{\left\{ \tau\geq\theta\right\} }.$$ Then, \begin{align*} u(t,x) & \geq J(t,x;\hat{\tau})\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{\epsilon}(\theta),X_{\tau^{\epsilon}(\theta)}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{\epsilon}(\theta),X_{\tau^{\epsilon}(\theta)}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau^{\epsilon}(\theta))\right]\\ & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u(\theta,X_{\theta}^{t,x})\right]-\epsilon. \end{align*} The desired result follows since $\tau$ and $\epsilon$ are arbitrary (take a sup over $\tau$ on both sides of the inequality). However, $\hat{\tau}$ is not a $\mathscr{T}_{[t,T]}$ stopping time, so the first inequality is not justified. In the proof below, this apparent issue is dealt with rigorously. We also mention another, perhaps less grave, issue: in the event that $u$ is not continuous, we cannot say anything about its measurability, so the expectation involving $u$ at a future time may not even be well-defined (this is the reason we use upper and lower semicontinuous envelopes in the above).

The $\leq$ inequality follows directly from the tower property: \begin{align*} J(t,x;\tau) & =\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau,X_{\tau}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau,X_{\tau}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau)\right]\\ & \leq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u^{*}(\theta,X_{\theta}^{t,x})\right]. \end{align*} Now, take the supremum over all stopping times $\tau$ on both sides to arrive at the desired result. The $\geq$ inequality requires more work. For brevity, let $\mathcal{O}=(t,T)\times\mathbb{R}^{d}$ for the remainder. Let $\epsilon>0$ and $\varphi\colon[0,T]\times\mathbb{R}^d\rightarrow\mathbb{R}$ be an upper semicontinuous function satisfying $u\geq\varphi$. For each $(s,y)\in \mathcal{O}$, there exists $\tau^{s,y}\in\mathscr{T}_{[s,T]}$ such that $$ u(s,y)\leq J(s,y;\tau^{s,y})+\epsilon. $$ Using the upper semicontinuity of $\varphi$ and the continuity of $J$ (see above), we can find a family $(r^{s,y})$ of positive constants such that $$ \epsilon\geq\varphi(t^{\prime},x^{\prime})-\varphi(s,y)\text{ and }J(s,y;\tau^{s,y})-J(t^{\prime},x^{\prime};\tau^{s,y})\leq\epsilon \text{ for }(t^{\prime},x^{\prime})\in B(s,y;r^{s,y}) $$ where $$B(s,y;r)=(s-r,s)\times\left\{ x\in\mathbb{R}^d\colon\left|x-y\right| < r\right\}.$$ This seemingly strange choice for the sets above is justified later. Since $$ \left\{ B(s,y;r^{s,y})\colon(s,y)\in \mathcal{O}\right\} $$ forms a cover of $\mathcal{O}$ by open sets, Lindelöf's lemma yields a countable subcover $\{B(t_{i},x_{i};r_{i})\}$. Let $C_{0}=\emptyset$, and $$ A_{i+1}=B(t_{i+1},x_{i+1};r_{i+1})\setminus C_{i}\text{ where }C_{i+1}=A_{1}\cup\cdots\cup A_{i+1}\text{ for }i\geq0. $$ Note that the countable family $\{A_{i}\}$ is disjoint by construction, and that $(\theta,X_{\theta}^{t,x})\in\cup_{i\geq1}A_{i}$ a.s. (recall that $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$ and $t < \theta < T$ by definition). Moreover, letting $\tau^{i}=\tau^{t_{i},x_{i}}$ for brevity, \begin{align*} J(t^{\prime},x^{\prime};\tau^{i}) & \geq J(t_{i},x_{i};\tau^{i})-\epsilon\\ & \geq u(t_{i},x_{i})-2\epsilon\\ & \geq\varphi(t_{i},x_{i})-2\epsilon\\ & \geq\varphi(t^{\prime},x^{\prime})-3\epsilon & \text{for }(t^{\prime},x^{\prime})\in B(t_{i},x_{i};r_{i})\supset A_{i}. \end{align*} Now, let $A^{n}=\cup_{i\leq n}A_{i}$ for $n\geq1$. Given a stopping time $\tau\in\mathscr{T}_{[t,T]}$, let $$ \hat{\tau}^{n}=\tau\mathbf{1}_{\left\{ \tau<\theta\right\} }+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\left(T\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})+\sum_{i=1}^{n}\tau^{i}\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\right). $$ In particular, since $B(t_{i},x_{i};r_{i})\supset A_{i}$ was picked such that for all $(t^{\prime},x^{\prime})\in B(t_{i},x_{i};r_{i})$, $t^{\prime}\leq t_{i}$, we have that $\hat{\tau}^n\in\mathscr{T}_{[t,T]}$.
If we had instead chosen the open sets $B_{r_{i}}(t_{i},x_{i})$ to form our cover, we would not be able to use $\tau^{i}$ in the above definition of $\hat{\tau}^{n}$ without violating--roughly speaking--the condition that "stopping times cannot peek into the future." We first write \begin{align*} u(t,x) & \geq J(t,x;\hat{\tau}^{n})\\ & =\mathbb{E}\left[\left(\mathbf{1}_{\left\{ \tau<\theta\right\} }+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right)g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\right] \end{align*} and consider the terms in the summation separately. It follows from our choice of $A^{n}$ and the tower property that \begin{align*} \mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right] & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{i},X_{\tau^{i}}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{i},X_{\tau^{i}}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau^{i})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]\\ & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\left(\varphi(\theta,X_{\theta}^{t,x})-3\epsilon\right)\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]\\ & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]-3\epsilon. \end{align*} Moreover, $$ \mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})=\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\leq|g(T,X_{T}^{t,x})| $$ and hence the dominated convergence theorem yields $$ \lim_{n\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\right] =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\lim_{n\rightarrow\infty}\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\right]=0 $$ since we can (a.s.) find $i$ such that $(\theta,X_{\theta}^{t,x})\in A_{i}$. By Fatou's lemma, \begin{align*} \liminf_{n\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right] & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\liminf_{n\rightarrow\infty}\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\right]. \end{align*} Note that we were able to use Fatou's lemma since $\varphi(\theta,X_\theta^{t,x})$ is bounded due to the assumption $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$. 
Combining the estimates above and letting $\epsilon\downarrow0$ (recall that $\epsilon$ was arbitrary), we have that $$ u(t,x)\geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\right] \text{ for } \tau\in\mathscr{T}_{[t,T]}.$$ For the last step, let $r>0$ be such that $|X_{\theta}^{t,x}|\leq r$ a.s. (recall $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$). We can find a sequence of continuous functions $(\varphi_{n})$ such that $\varphi_{n}\leq u_{*}$ and $(\varphi_{n})$ converges pointwise to $u_{*}$ on $[0,T]\times\left\{ x\in\mathbb{R}^d\colon|x|\leq r\right\}$. Let $\varphi^{N}=\inf_{n\geq N}\varphi_{n}$ denote the nondecreasing modification of this sequence; each $\varphi^{N}$ is upper semicontinuous (being an infimum of continuous functions) and satisfies $u\geq u_{*}\geq\varphi^{N}$, so the inequality above applies with $\varphi=\varphi^{N}$. By the monotone convergence theorem, we get \begin{align*} u(t,x) & \geq\lim_{N\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi^N(\theta,X_{\theta}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u_{*}(\theta,X_{\theta}^{t,x})\right] & \text{ for } \tau \in \mathscr{T}_{[t,T]}.\end{align*} Now, take the supremum over $\tau$ on both sides to arrive at the desired result.
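
For concreteness, one standard way to produce the approximating sequence $(\varphi_{n})$ used in the last step is the Moreau-Yosida regularization $$\varphi_{n}(t,x)=\inf_{(s,y)\in[0,T]\times\mathbb{R}^{d}}\left\{ u_{*}(s,y)+n\left(\left|t-s\right|^{2}+\left|x-y\right|^{2}\right)\right\}.$$ This is only a sketch of one possible choice: since $u\geq u_{*}\geq-K_{0}(1+|x|^{2})$ (take the constant stopping time $\tau=T$ in the definition of $u$ and use the quadratic growth of $g$ together with the moment bound from the continuity argument), the infimum is finite for $n>K_{0}$, and for such $n$ the function $\varphi_{n}$ is continuous, satisfies $\varphi_{n}\leq u_{*}\leq u$, is nondecreasing in $n$, and converges pointwise to $u_{*}$ by the lower semicontinuity of $u_{*}$; in particular, for this choice the nondecreasing modification $\varphi^{N}$ is not even needed.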

Bibliography

  1. Touzi, Nizar. Optimal stochastic control, stochastic target problems, and backward SDE. Vol. 29. Springer Science & Business Media, 2012.
Parsiad Azimzadeh

I am happy to announce the release of the GNU Octave financial package version 0.5.0. This is the first version to be released since I took on the role of maintainer.

If you do not already have GNU Octave, you can download it free of charge from the GNU Octave website.

To install the package, launch Octave and run the following commands:

% The io package is a dependency of the financial package, so install it first.
pkg install -forge io
pkg install -forge financial

Perhaps the most exciting addition in this version is the Monte Carlo simulation framework, which is significantly faster than its MATLAB counterpart. A brief tutorial (along with benchmarking information) is available in a previous post. Other additions include Black-Scholes option and Greeks valuation routines, implied volatility calculations, and general bug fixes.
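
To give a flavor of the new Black-Scholes routines, the snippet below prices a European call, computes its delta, and then recovers the volatility implied by the computed price. It assumes the MATLAB-compatible names and argument orders (blsprice, blsdelta, blsimpv with Price, Strike, Rate, Time, Volatility); consult the package documentation for the authoritative interfaces. The numerical inputs are arbitrary example values.

pkg load financial

% Price a European call and put: spot 100, strike 95, rate 5%, 3 months, volatility 30%
[Call, Put] = blsprice (100, 95, 0.05, 0.25, 0.3);

% Deltas of the same call and put
[CallDelta, PutDelta] = blsdelta (100, 95, 0.05, 0.25, 0.3);

% Recover the volatility implied by the call price (should be close to 0.3)
Volatility = blsimpv (100, 95, 0.05, 0.25, Call);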

Parsiad Azimzadeh

I have recently taken on the role of maintainer of the GNU Octave financial package and have since implemented stochastic differential equation (SDE) simulation, which I discuss in this post.

If you don't already have a copy of GNU Octave and the financial package, see the previous post for installation instructions.

Consider the following SDE: $$dX_t = F(t, X_t) dt + G(t, X_t) dW_t$$ where $W$ is a standard Brownian motion. With the Octave financial package, you can now simulate sample paths of this SDE. Let's take a moment to motivate why you might want to do this...

A simple example: pricing a European call

A classic problem in finance is that of pricing a European option, for which one needs to compute an expectation involving the random variable $X_T$ at some fixed time $T>0$, referred to as the expiry time. For example, the Black-Scholes call pricing problem is that of computing $$E\left[e^{-rT} (X_T-K)^+\right]$$ where $X$ follows a geometric Brownian motion (GBM): $$dX_t = (r-\delta) X_t dt + \sigma X_t dW_t.$$ Here, $r$ is the risk-free rate, $\delta$ is the dividend rate, and $\sigma$ is the volatility.
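
For reference, this expectation admits the well-known closed-form value $$E\left[e^{-rT}(X_T-K)^+\right]=X_0e^{-\delta T}\Phi(d_1)-Ke^{-rT}\Phi(d_2),\qquad d_{1,2}=\frac{\ln(X_0/K)+\left(r-\delta\pm\tfrac{1}{2}\sigma^2\right)T}{\sigma\sqrt{T}},$$ where $X_0$ is the initial stock price and $\Phi$ is the standard normal cumulative distribution function. With the parameters used in the code below, this evaluates to the 9.3197 quoted in the final comment.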

Approximating such an expectation using a sample mean is referred to as Monte Carlo integration (a.k.a. Monte Carlo simulation). Although the Black-Scholes pricing problem admits a closed-form solution, it makes for a good expository example: let's approximate the price by Monte Carlo integration using an SDE simulation:

pkg load financial

% Test parameters
X_0 = 100.; K = 100.; r = 0.04; delta = 0.01; sigma = 0.2; T = 1.;
Simulations = 1e6; Timesteps = 10;

SDE = gbm (r - delta, sigma, "StartState", X_0);
[Paths, ~, ~] = simByEuler (SDE, 1, "DeltaTime", T, "NTRIALS", Simulations, "NSTEPS", Timesteps, "Antithetic", true);

% Monte Carlo price
CallPrice_MC = exp (-r * T) * mean (max (Paths(end, 1, :) - K, 0.));

% Compare with the exact answer (Black-Scholes formula): 9.3197

The gbm function is used to generate an object describing geometric Brownian motion (GBM). Under the hood, it invokes the sde constructor, which is capable of constructing more general SDEs.
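
For instance, the GBM above could equivalently be specified through the general constructor by supplying drift and diffusion functions of $(t, X_t)$. The snippet below is only a sketch: it assumes the sde constructor accepts MATLAB-style function handles (as its MATLAB counterpart does) and reuses the parameters from the earlier example.

% Equivalent specification via the general sde constructor:
% drift F(t, X) = (r - delta) * X and diffusion G(t, X) = sigma * X
F = @(t, X) (r - delta) * X;
G = @(t, X) sigma * X;
SDE2 = sde (F, G, "StartState", X_0);
[Paths2, ~, ~] = simByEuler (SDE2, 1, "DeltaTime", T, "NTRIALS", Simulations, "NSTEPS", Timesteps);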

Timing

The GNU Octave financial implementation uses broadcasting to speed up computation. Here is a speed comparison of the above with the MATLAB Financial Toolbox, under a varying number of timesteps:

Timesteps    MATLAB Financial Toolbox (secs.)    GNU Octave financial package (secs.)
16           0.543231                            0.048691
32           1.053423                            0.064110
64           2.167072                            0.097092
128          4.191894                            0.162552
256          8.361655                            0.294098
512          16.609718                           0.568558
1024         32.839757                           1.136864

While both implementations scale more-or-less linearly, the GNU Octave financial package implementation greatly outperforms its MATLAB counterpart.
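
To illustrate what is meant by broadcasting: in the Euler-Maruyama recursion, the state of every trial can be stored as a column of a single matrix, so that per-variable coefficient vectors act on all trials at once without an explicit loop over trials. The sketch below conveys the idea for a two-dimensional model with independent drivers and made-up parameters; it is not the package's actual implementation.

% Vectorized Euler-Maruyama sketch: NVars state variables, NTrials trials
mu = [0.03; 0.05];             % hypothetical per-asset drift rates (NVars x 1)
vol = [0.2; 0.3];              % hypothetical per-asset volatilities (NVars x 1)
X0 = [100; 40];                % hypothetical initial states (NVars x 1)
dt = 0.01; Steps = 100; NTrials = 1e5;

X = repmat (X0, 1, NTrials);   % NVars x NTrials matrix of current states
for k = 1:Steps
  dW = sqrt (dt) * randn (size (X));
  % mu and vol are NVars x 1 while X and dW are NVars x NTrials;
  % Octave broadcasts the coefficient columns across all trials.
  X += mu .* X * dt + vol .* X .* dW;
end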

A multi-dimensional example: pricing a basket call

Consider two assets, $X^1$ and $X^2$, and the basket call option pricing problem $$E\left[e^{-rT}\left(\max\left(X^1_T,X^2_T\right)-K\right)^+\right]$$ where $$dX_t^i = (r-\delta^i) X^i_t dt + \sigma^i X^i_t dW^i_t \text{ for } i = 1,2$$ and the correlation between the Wiener processes is $dW^1_t dW^2_t = \rho dt$. Sample code for this example is below:

% Test parameters
X1_0 = 40.; X2_0 = 40.; K = 40.; r = 0.05; delta1 = 0.; delta2 = 0.; sigma1 = 0.5; sigma2 = 0.5; T = 0.25; rho = 0.3;
Simulations = 1e5; Timesteps = 10;

SDE = gbm ([r-delta1 0.; 0. r-delta2], [sigma1 0.; 0. sigma2], "StartState", [X1_0; X2_0], "Correlation", [1 rho; rho 1]);
[Paths, ~, ~] = simulate (SDE, 1, "DeltaTime", T, "NTRIALS", Simulations, "NSTEPS", Timesteps, "Antithetic", true);

Max_Xi_T = max (Paths(end, :, :));
BasketCallPrice_MC = exp (-r * T) * mean (max (Max_Xi_T - K, 0.));

% Compare with the exact answer (Stulz 1982): 6.8477