Parsiad Azimzadeh

The following is a continuation of a previous post on optimal stopping. In this post, we derive a dynamic programming equation (which turns out to be a partial differential equation (PDE) to be interpreted in the viscosity sense) for the optimal stopping problem.

As before, we consider a filtered probability space (with filtration $(\mathcal{F}_{t})_{t\geq0}$) satisfying the usual conditions, on which a standard Brownian motion $W_{t}$ is defined. Let $X_{s}^{t,x}$ denote the strong solution of the stochastic differential equation (SDE) $$ dX_{s}=b(s,X_{s})ds+\sigma(s,X_{s})dW_{s}\text{ for }s>t\text{ and }X_{t}=x.$$ Let $T<\infty$ and $\mathscr{T}_{[t,T]}$ be the set of stopping times with values in $[t,T]$ that are independent of $\mathcal{F}_{t}$. Consider the problem $$ u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\text{ where }J(t,x;\tau)=\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right] $$ and $g$ is a given function.

All assumptions of the previous post hold.

The PDE we will derive (in the viscosity sense) is \begin{equation} \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)u,u-g\right\} =0\text{ on }[0,T)\times\mathbb{R}^{d},\label{eq:pde}\tag{1} \end{equation} where $\mathcal{A}$ is the infinitesimal generator of the SDE above, given by $$ \mathcal{A}\varphi=b\cdot\nabla_{x}\varphi+\frac{1}{2}\operatorname{tr}\left(\sigma\sigma^{\mathsf{T}}D_{x}^{2}\varphi\right). $$ Let's now define the notion of viscosity solution for this specific problem:

Let $\mathcal{O}=[0,T)\times\mathbb{R}^{d}$. A locally bounded function $v\colon\mathcal{O}\rightarrow\mathbb{R}$ is a viscosity subsolution (resp. supersolution) of \eqref{eq:pde} if \begin{align*} & \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v^{*}-g)(t,x)\right\} \leq0\\ \text{(resp. } & \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v_{*}-g)(t,x)\right\} \geq0\text{)} \end{align*} for all $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ such that $(v^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(v^{*}-\varphi)=0$ (resp. $(v_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(v_{*}-\varphi)=0$) and the maximum (resp. minimum) is strict. We say $v$ is a viscosity solution of \eqref{eq:pde} if it is both a subsolution and supersolution of \eqref{eq:pde}.
Suppose $u\colon\mathcal{O}\rightarrow\mathbb{R}$ is locally bounded. Then, $u$ is a viscosity solution of \eqref{eq:pde}.
We first prove that $u$ is a subsolution. Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that $$ (u^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(u^{*}-\varphi)=0 $$ where the maximum is strict. Assume, in order to arrive at a contradiction, that $$ \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(u^{*}-g)(t,x)\right\} >0. $$ Equivalently, this can be expressed as $$ (\varphi-g)(t,x)=(u^{*}-g)(t,x)>0\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)>0. $$ By continuity, we can find $h>0$ (with $t+h<T$) and $\delta>0$ such that $$ \varphi-g\geq\delta\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi\geq0\text{ on }\mathcal{N}_{h}=\left( (t-h,t+h)\times B_{h}(x) \right) \cap \mathcal{O} $$ where $B_{h}(x)$ is the ball of radius $h$ centred at $x$. Since $(t,x)$ is a strict maximizer, $$ -\gamma=\max_{\partial\mathcal{N}_{h}}\left(u^{*}-\varphi\right)<0.$$ Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that $$ (t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u^{*}(t,x). $$ Let $$\theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin\mathcal{N}_{h}\right\} . $$ Note that for $n$ large enough, $(t_{n},X_{t_{n}}^{t_{n},x_{n}})=(t_{n},x_{n})\in\mathcal{N}_{h}$ (we will always assume $n$ is large enough for this to occur). Let $$ \eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n}).$$ Let $\tau_n\in\mathscr{T}_{[t_n,T]}$ be arbitrary. 
By Ito's lemma, \begin{align*} u(t_{n},x_{n}) & =\eta_{n}+\varphi(t_{n},x_{n})\\ & \begin{gathered}=\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ +\mathbb{E}\left[\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\nabla_{x}\varphi(s,X_{s}^{t_{n},x_{n}})\cdot\sigma(s,X_{s}^{t_{n},x_{n}})dW_{s}\right] \end{gathered} \\ & =\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]. \end{align*} The Ito integral vanishes due to $t\mapsto(t,X_{t}^{t_{n},x_{n}})$ being bounded on the interval $[t_{n},\tau_{n}\wedge\theta_{n}]$ so that $$ u(t_{n},x_{n})\geq\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})\right]. $$ Due to the inequalities established on $\mathcal{N}_{h}$, \begin{align*} u(t_n,x_n) & \geq\eta_n+\mathbb{E}\left[\varphi(\tau_n\wedge\theta_n,X_{\tau_n\wedge\theta_n}^{t_n,x_n})\right]\\ & =\eta_n+\mathbb{E}\left[\varphi(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n <\theta_n\right\} }+\varphi(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n \geq\theta_n \right\} }\right]\\ & \geq\eta_n+\mathbb{E}\left[\left(g(\tau_n,X_{\tau_n}^{t_n,x_n})+\delta\right)\mathbf{1}_{\left\{ \tau_n <\theta_n\right\} }+\left(u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})+\gamma\right)\mathbf{1}_{\left\{ \tau_n\geq\theta_n\right\} }\right]\\ & \geq\eta_n+\gamma\wedge\delta+\mathbb{E}\left[g(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n<\theta_n\right\} }+u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n\geq\theta_n \right\} }\right]. 
\end{align*} Since $\tau_n\in\mathscr{T}_{[t_n,T]}$ is arbitrary and $\eta_n+\gamma\wedge\delta>0$ for $n$ sufficiently large, this contradicts the $\leq$ inequality in the dynamic programming principle established in the previous post.
We now prove that $u$ is a supersolution. The inequality $u-g\geq0$ follows directly from the definition of the value function: $$ u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\geq J(t,x;t)=\mathbb{E}[g(t,X_{t}^{t,x})]=g(t,x). $$ Taking the lower semicontinuous envelope on both sides of the inequality, we get $u_{*}-g\geq0$ (recall that $g$ is presumed to be continuous). Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that $$ (u_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(u_{*}-\varphi)=0. $$ Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that $$ (t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u_{*}(t,x). $$ Let $$ \eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n}) $$ and $$ h_{n}=\sqrt{\eta_{n}}\mathbf{1}_{\left\{ \eta_{n}\neq0\right\} }+\mathbf{1}_{\left\{ \eta_{n}=0\right\} }/n. $$ Also let $$ \theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin[t_{n},t_{n}+h_{n})\times B_{1}(x)\right\} $$ where we always assume $n$ is large enough for $t_n+h_n < T$ and $x_n \in B_1(x)$. Calling upon the $\geq$ inequality in the dynamic programming principle established in the previous post (with $\theta=\theta_{n}$), we have $$ \eta_{n}+\varphi(t_{n},x_{n})=u(t_{n},x_{n})\geq\mathbb{E}\left[u_{*}(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right]\geq\mathbb{E}\left[\varphi(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right]. $$ Applying Ito's lemma and dividing by $h_{n}$ yields $$ \frac{\eta_{n}}{h_{n}}\geq\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]. $$ As usual, the Ito integral vanishes because $t\mapsto(t,X_{t}^{t_{n},x_{n}})$ remains in a bounded set on the interval $[t_{n},\theta_{n}]$, so that the stochastic integral is a true martingale with zero expectation. For any fixed sample point $\omega$ and $n$ sufficiently large, note that $\theta_{n}(\omega)=t_{n}+h_{n}$ (since $h_{n}\rightarrow0$). By the mean value theorem for integrals, the random variable in the expectation converges almost surely.
Applying the dominated convergence theorem, we get \begin{align*} 0=\lim_{n\rightarrow\infty}\frac{\eta_{n}}{h_{n}} & \geq\lim_{n\rightarrow\infty}\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ & =\mathbb{E}\left[\lim_{n\rightarrow\infty}\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ & =\mathbb{E}\left[\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)\right]\\ & =\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x). \end{align*} Multiplying both sides of the inequality by $-1$ yields the desired result.
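Although these posts are theoretical, \eqref{eq:pde} is precisely the variational inequality one discretizes when pricing an American option numerically. The following is a minimal explicit finite-difference sketch, not taken from the posts: it assumes Black-Scholes dynamics $dX_{s}=rX_{s}ds+\sigma X_{s}dW_{s}$ and folds the discount factor into the payoff, $g(t,x)=e^{-rt}(K-x)^{+}$, so that the problem fits the undiscounted formulation above; the grid truncation and all parameter values are likewise illustrative.

```python
import numpy as np

# Explicit finite-difference sketch for the obstacle problem
#   min{ -(u_t + A u), u - g } = 0  on [0, T) x R_+,
# with A u = r x u_x + (1/2) sigma^2 x^2 u_xx (Black-Scholes generator) and
# the discounted American-put payoff g(t, x) = exp(-r t) max(K - x, 0).
# All parameters below are illustrative.

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
N, M = 200, 8000                     # space and time steps (dt small enough
x = np.linspace(0.0, 3 * K, N + 1)   # for stability of the explicit scheme)
dx, dt = x[1] - x[0], T / M

def g(t, s):
    return np.exp(-r * t) * np.maximum(K - s, 0.0)

u = g(T, x)                          # terminal condition u(T, .) = g(T, .)
for m in range(M, 0, -1):
    t = (m - 1) * dt
    # central differences for u_x and u_xx on the interior of the grid
    ux = (u[2:] - u[:-2]) / (2 * dx)
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    Au = r * x[1:-1] * ux + 0.5 * sigma**2 * x[1:-1]**2 * uxx
    u[1:-1] += dt * Au               # one explicit step of u_t + A u = 0
    u[0], u[-1] = g(t, 0.0), 0.0     # boundary conditions for the put
    u = np.maximum(u, g(t, x))       # enforce the obstacle u >= g

price = np.interp(K, x, u)           # u(0, K): at-the-money American put
print(price)
```

With these parameters the value at $x=K$ should land near the familiar American put benchmark of roughly 6.09; halving $dx$ and $dt$ together (preserving the explicit-scheme stability ratio) tightens the estimate.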

The following is an expository post in which a dynamic programming principle is derived for an optimal stopping problem. The exposition is inspired by a proof in N. Touzi's textbook [1], an invaluable resource.

Before we begin, let's give some motivation. As an example, consider a stock whose price under the risk-neutral measure is given by the process $(X_t)_{t\geq 0}$. Optimal stopping characterizes the price of an American option that pays $g(X_t)$ when exercised at time $t$. Through this three-part series of posts, we show that the value of such an option is the unique viscosity solution of a partial differential equation (in particular, a single-obstacle variational inequality).

Consider a filtered probability space (with filtration $(\mathcal{F}_{t})_{t\geq0}$) satisfying the usual conditions, on which a standard Brownian motion $W_{t}$ is defined. Let $X_{s}^{t,x}$ denote the strong solution of the stochastic differential equation (SDE) $$dX_{s}=b(s,X_{s})ds+\sigma(s,X_{s})dW_{s}\text{ for }s>t\text{ and }X_{t}=x.$$ To ensure its existence and uniqueness, we need:

$b$ and $\sigma$ are Lipschitz and of linear growth in $x$ uniformly in $t$.

Let $T<\infty$ and $\mathscr{T}_{[t,T]}$ be the set of stopping times with values in $[t,T]$ that are independent of $\mathcal{F}_{t}$. Consider the problem $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\text{ where }J(t,x;\tau)=\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right]$$ and $g$ is a given function. To ensure this is well-defined, we take the following:

$g\colon[0,T]\times\mathbb{R}^d\rightarrow\mathbb{R}$ is continuous and of quadratic growth (i.e., $|g(t,x)|\leq K(1+|x|^2)$ for some constant $K>0$ independent of $(t,x)$).

The above assumption implies that for all $s$ and $\tau\in\mathscr{T}_{[s,T]}$, the function $(t,x)\mapsto J(t,x;\tau)$ is continuous on $[0,s]\times\mathbb{R}^{d}$ by the following argument. Let $(t_{n}^\prime,x_{n}^\prime)_{n}$ be a sequence converging to $(t^\prime,x^\prime)\in[0,s]\times\mathbb{R}^{d}$. If we can show that $(g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime}))_n$ is dominated by an integrable function, we can apply the dominated convergence theorem to get \begin{align*} \lim_{n\rightarrow\infty}J(t_{n}^\prime,x_{n}^\prime;\tau) & =\lim_{n\rightarrow\infty}\mathbb{E}\left[g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[\lim_{n\rightarrow\infty}g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[g(\tau,\lim_{n\rightarrow\infty}X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right]\\ & =\mathbb{E}\left[g(\tau,X_{\tau}^{t^\prime,x^\prime})\right]\\ & =J(t^\prime,x^\prime;\tau). \end{align*} Moreover, since $g$ is of quadratic growth, \begin{align*} \mathbb{E}\left[\left|g(\tau,X_{\tau}^{t_{n}^\prime,x_{n}^\prime})\right|\right] & \leq\mathbb{E}\left[K\left(1+\left|X_{\tau}^{t_{n}^\prime,x_{n}^\prime}\right|^{2}\right)\right]\\ & =K\left(1+\mathbb{E}\left[\left|X_{\tau}^{t_{n}^\prime,x_{n}^\prime}\right|^{2}\right]\right)\\ & \leq K_{0}\left(1+\left|x_{n}^\prime\right|^{2}\right) \end{align*} where $K_{0}$ can depend on $T$ (by the usual argument for Ito processes using Gronwall's lemma). Since $x_{n}^\prime\rightarrow x^\prime$, domination follows.
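Since everything so far is abstract, it may help to see $J$ computed numerically. The sketch below estimates $J(t,x;\tau)$ by Monte Carlo with an Euler-Maruyama discretization of the SDE, for two simple stopping rules. The coefficient choices $b=0$ and $\sigma=1$ (so $X$ is a Brownian motion started at $x$) and the payoff $g(t,x)=x^{2}$ (continuous and of quadratic growth, as the assumptions require) are illustrative, not fixed by the post. For $\tau=T$ there is a closed form to check against: $J(t,x;T)=\mathbb{E}[(x+W_{T-t})^{2}]=x^{2}+(T-t)$.

```python
import numpy as np

# Monte Carlo sketch of J(t, x; tau) = E[ g(tau, X_tau^{t,x}) ] using an
# Euler-Maruyama discretization.  The choices below are purely illustrative:
# b = 0 and sigma = 1, so X is a Brownian motion started at x, and
# g(t, x) = x^2, which is continuous and of quadratic growth.

rng = np.random.default_rng(0)
b = lambda s, x: 0.0 * x
sigma = lambda s, x: np.ones_like(x)
g = lambda t, x: x**2

def J(t, x, stop_rule, T=1.0, n_steps=200, n_paths=100_000):
    """Estimate J(t, x; tau), where tau is the first time stop_rule(s, X_s)
    holds, capped at T.  The rule only sees the current state, so it cannot
    peek into the future."""
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x))
    payoff = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    s = t
    for _ in range(n_steps):
        hit = alive & stop_rule(s, X)
        payoff[hit] = g(s, X[hit])   # these paths stop and collect g now
        alive &= ~hit
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        X = X + b(s, X) * dt + sigma(s, X) * dW
        s += dt
    payoff[alive] = g(T, X[alive])   # tau = T on paths that never stopped
    return payoff.mean()

# tau = T: closed form J(t, x; T) = E[(x + W_{T-t})^2] = x^2 + (T - t).
never = lambda s, X: np.zeros_like(X, dtype=bool)
j_never = J(0.0, 1.0, never)         # approximately 1^2 + 1 = 2

# Any admissible tau gives a lower bound on u(0, 1); e.g. stop on |X| >= 2.
exit_band = lambda s, X: np.abs(X) >= 2.0
j_exit = J(0.0, 1.0, exit_band)
print(j_never, j_exit)
```

The first rule can be checked against the closed form above; the second is suboptimal here, illustrating that each admissible stopping time only furnishes a lower bound on the value function $u$.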

We denote by $f^{*}$ and $f_{*}$ the upper and lower semicontinuous envelopes of a function $f\colon Y\rightarrow[-\infty,\infty]$, where $Y$ is a given topological space.

Let $\theta\in\mathscr{T}_{[t,T]}$ be a stopping time such that $t < \theta < T$ and $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$. The following dynamic programming principle holds: \begin{align*} u(t,x) & \leq\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u^{*}(\theta,X_{\theta}^{t,x})\right].\\ u(t,x) & \geq\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u_{*}(\theta,X_{\theta}^{t,x})\right]. \end{align*}

Note that if $u$ is continuous, the above dynamic programming principle becomes, by virtue of $u=u_{*}=u^{*}$, $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u(\theta,X_{\theta}^{t,x})\right].$$
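In a discrete-time, finite-state analogue, every expectation in the dynamic programming principle is a finite sum, so the principle can be verified exactly. The sketch below is illustrative and not from the post: $X$ is a symmetric random walk, $g$ is an arbitrary continuous payoff, and $\theta$ is a deterministic intermediate time. The value from full backward induction coincides with the right-hand side of the principle, in which one may stop for $g$ before $\theta$ and collects $u(\theta,\cdot)$ otherwise.

```python
import math

# Discrete-analogue check of the dynamic programming principle on a binomial
# tree.  Node (k, j) represents time t = k*dt and state x = 1 + j*dx.  The
# payoff g and all parameters are illustrative choices.

M, dt = 20, 0.05
dx = math.sqrt(dt)  # step size matching Var(X_{k+1} - X_k) = dt

def payoff(k, j):
    t, x = k * dt, 1.0 + j * dx
    return max(1.0 - x, 0.0) * math.exp(-0.1 * t)

def solve(k_from, k_to, terminal):
    """Backward induction u_k = max(g_k, E[u_{k+1}]) from level k_to down to
    k_from, starting from the given terminal values at level k_to."""
    v = {(k_to, j): terminal(j) for j in range(-k_to, k_to + 1)}
    for k in range(k_to - 1, k_from - 1, -1):
        for j in range(-k, k + 1):
            cont = 0.5 * (v[(k + 1, j + 1)] + v[(k + 1, j - 1)])
            v[(k, j)] = max(payoff(k, j), cont)
    return v

u = solve(0, M, lambda j: payoff(M, j))        # value function on [0, M]

# DPP with theta = time index m: stop for g before theta, or collect
# u(theta, .) at theta.  This reproduces u(0, 0) exactly.
m = 7
u_dpp = solve(0, m, lambda j: u[(m, j)])
print(u[(0, 0)], u_dpp[(0, 0)])
```

On a tree the supremum over stopping times is exactly the backward induction with obstacle $g$, which is why the two numbers agree to machine precision.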

Intuition behind proof: The $\leq$ inequality is established by the tower property (see the formal proof below). The $\geq$ inequality requires more work. Intuitively, we can take an $\epsilon$-optimal control $\tau^{\epsilon}(\theta)$ as follows: $$u(\theta,X_{\theta}^{t,x})\leq J(\theta,X_{\theta}^{t,x};\tau^{\epsilon}(\theta))+\epsilon.$$ Now, let $\tau$ be an arbitrary stopping time and $$\hat{\tau}=\tau\mathbf{1}_{\left\{ \tau<\theta\right\} }+\tau^{\epsilon}(\theta)\mathbf{1}_{\left\{ \tau\geq\theta\right\} }.$$ Then, \begin{align*} u(t,x) & \geq J(t,x;\hat{\tau})\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{\epsilon}(\theta),X_{\tau^{\epsilon}(\theta)}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{\epsilon}(\theta),X_{\tau^{\epsilon}(\theta)}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau^{\epsilon}(\theta))\right]\\ & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u(\theta,X_{\theta}^{t,x})\right]-\epsilon. \end{align*} The desired result follows since $\tau$ and $\epsilon$ are arbitrary (take a sup over $\tau$ on both sides of the inequality). However, $\hat{\tau}$ is not a $\mathscr{T}_{[t,T]}$ stopping time, so the first inequality fails. In the proof below, this apparent issue is dealt with rigorously. We also mention another, perhaps less grave, issue: in the event that $u$ is not continuous, we cannot say anything about its measurability so that the expectation involving $u$ at a future time is ill-defined (this is the reason we use upper and lower semicontinuous envelopes in the above).

The $\leq$ inequality follows directly from the tower property: \begin{align*} J(t,x;\tau) & =\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau,X_{\tau}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau,X_{\tau}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau)\right]\\ & \leq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u^{*}(\theta,X_{\theta}^{t,x})\right]. \end{align*} Now, take the supremum over all stopping times $\tau$ on both sides to arrive at the desired result. The $\geq$ inequality requires more work. For brevity, let $\mathcal{O}=(t,T)\times\mathbb{R}^{d}$ for the remainder. Let $\epsilon>0$ and $\varphi\colon[0,T]\times\mathbb{R}^d\rightarrow\mathbb{R}$ be an upper semicontinuous function satisfying $u\geq\varphi$. For each $(s,y)\in \mathcal{O}$, there exists $\tau^{s,y}\in\mathscr{T}_{[s,T]}$ such that $$ u(s,y)\leq J(s,y;\tau^{s,y})+\epsilon. $$ Using the upper semicontinuity of $\varphi$ and the continuity of $J$ (see above), we can find a family $(r^{s,y})$ of positive constants such that $$ \epsilon\geq\varphi(t^{\prime},x^{\prime})-\varphi(s,y)\text{ and }J(s,y;\tau^{s,y})-J(t^{\prime},x^{\prime};\tau^{s,y})\leq\epsilon \text{ for }(t^{\prime},x^{\prime})\in B(s,y;r^{s,y}) $$ where $$B(s,y;r)=(s-r,s)\times\left\{ x\in\mathbb{R}^d\colon\left|x-y\right| < r\right\}.$$ This seemingly strange choice for the sets above is justified later. 
Since $$ \left\{ B(s,y;r^{s,y})\colon(s,y)\in \mathcal{O}\right\} $$ forms a cover of $\mathcal{O}$ by open sets, Lindelöf's lemma yields a countable subcover $\{B(t_{i},x_{i};r_{i})\}$. Let $C_{0}=\emptyset$, and $$ A_{i+1}=B(t_{i+1},x_{i+1};r_{i+1})\setminus C_{i}\text{ where }C_{i+1}=A_{1}\cup\cdots\cup A_{i+1}\text{ for }i\geq0. $$ Note that the countable family $\{A_{i}\}$ is disjoint by construction, and that $(\theta,X_{\theta}^{t,x})\in\cup_{i\geq1}A_{i}$ a.s. (recall that $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$ and $t < \theta < T$ by definition). Moreover, letting $\tau^{i}=\tau^{t_{i},x_{i}}$ for brevity, \begin{align*} J(t^{\prime},x^{\prime};\tau^{i}) & \geq J(t_{i},x_{i};\tau^{i})-\epsilon\\ & \geq u(t_{i},x_{i})-2\epsilon\\ & \geq\varphi(t_{i},x_{i})-2\epsilon\\ & \geq\varphi(t^{\prime},x^{\prime})-3\epsilon & \text{for }(t^{\prime},x^{\prime})\in B(t_{i},x_{i};r_{i})\supset A_{i}. \end{align*} Now, let $A^{n}=\cup_{i\leq n}A_{i}$ for $n\geq1$. Given a stopping time $\tau\in\mathscr{T}_{[t,T]}$, let $$ \hat{\tau}^{n}=\tau\mathbf{1}_{\left\{ \tau<\theta\right\} }+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\left(T\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})+\sum_{i=1}^{n}\tau^{i}\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\right). $$ In particular, since $B(t_{i},x_{i};r_{i})\supset A_{i}$ was picked such that for all $(t^{\prime},x^{\prime})\in B(t_{i},x_{i};r_{i})$, $t^{\prime}\leq t_{i}$, we have that $\hat{\tau}^n\in\mathscr{T}_{[t,T]}$. If we had instead chosen the open sets $B_{r_{i}}(t_{i},x_{i})$ to form our cover, we would not be able to use $\tau^{i}$ in the above definition of $\hat{\tau}^{n}$ without violating--roughly speaking--the condition that "stopping times cannot peek into the future."
We first write \begin{align*} u(t,x) & \geq J(t,x;\hat{\tau}^{n})\\ & =\mathbb{E}\left[\left(\mathbf{1}_{\left\{ \tau<\theta\right\} }+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right)g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\right] \end{align*} and consider the three terms separately. Since the $A_{i}$ are disjoint and $\hat{\tau}^{n}=\tau^{i}$ on $\left\{ \tau\geq\theta\right\} \cap\left\{ (\theta,X_{\theta}^{t,x})\in A_{i}\right\} $, it follows from the tower property that \begin{align*} \mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right] & =\sum_{i=1}^{n}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{i},X_{\tau^{i}}^{t,x})\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\right]\\ & =\sum_{i=1}^{n}\mathbb{E}\left[\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\tau^{i},X_{\tau^{i}}^{t,x})\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\mid\mathcal{F}_{\theta}\right]\right]\\ & =\sum_{i=1}^{n}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }J(\theta,X_{\theta}^{t,x};\tau^{i})\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\right]\\ & \geq\sum_{i=1}^{n}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\left(\varphi(\theta,X_{\theta}^{t,x})-3\epsilon\right)\mathbf{1}_{A_{i}}(\theta,X_{\theta}^{t,x})\right]\\ & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]-3\epsilon. 
\end{align*} Moreover, $$ \mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(\hat{\tau}^{n},X_{\hat{\tau}^{n}}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})=\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\leq|g(T,X_{T}^{t,x})| $$ and hence the dominated convergence theorem yields $$ \lim_{n\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\right] =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }g(T,X_{T}^{t,x})\lim_{n\rightarrow\infty}\mathbf{1}_{\mathcal{O}\setminus A^{n}}(\theta,X_{\theta}^{t,x})\right]=0 $$ since we can (a.s.) find $i$ such that $(\theta,X_{\theta}^{t,x})\in A_{i}$. By Fatou's lemma, \begin{align*} \liminf_{n\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right] & \geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\liminf_{n\rightarrow\infty}\mathbf{1}_{A^{n}}(\theta,X_{\theta}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\right]. \end{align*} Note that we were able to use Fatou's lemma since $\varphi(\theta,X_\theta^{t,x})$ is bounded due to the assumption $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$. Since $\tau$ and $\epsilon$ were arbitrary, we have that $$ u(t,x)\geq\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi(\theta,X_{\theta}^{t,x})\right] \text{ for } \tau\in\mathscr{T}_{[t,T]}.$$ For the last step, let $r>0$ such that $|X_{\theta}^{t,x}|\leq r$ a.s. (recall $X_{\theta}^{t,x}\in\mathbb{L}^{\infty}$). 
We can find a sequence of continuous functions $(\varphi_{n})$ such that $\varphi_{n}\leq u_{*}$ and $\varphi_{n}$ converges pointwise to $u_{*}$ on $[0,T]\times B_{r}(0)$. Letting $\varphi^N = \min_{n\geq N} \varphi_n$ denote a nondecreasing modification of this sequence, the monotone convergence theorem yields \begin{align*} u(t,x) & \geq\lim_{N\rightarrow\infty}\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }\varphi^N(\theta,X_{\theta}^{t,x})\right]\\ & =\mathbb{E}\left[\mathbf{1}_{\left\{ \tau<\theta\right\} }g(\tau,X_{\tau}^{t,x})+\mathbf{1}_{\left\{ \tau\geq\theta\right\} }u_{*}(\theta,X_{\theta}^{t,x})\right] & \text{ for } \tau \in \mathscr{T}_{[t,T]}.\end{align*} Now, take the supremum over $\tau$ on both sides to arrive at the desired result.

Bibliography

  1. Touzi, Nizar. Optimal stochastic control, stochastic target problems, and backward SDE. Vol. 29. Springer Science & Business Media, 2012.

I am happy to announce the release of the GNU Octave financial package version 0.5.0. This is the first version to be released since I took on the role of maintainer.

If you do not already have GNU Octave, you can download it for free from the GNU Octave website.

To install the package, launch Octave and run the following commands (the io package is a dependency of the financial package):

pkg install -forge io
pkg install -forge financial

Perhaps the most exciting addition in this version is the Monte Carlo simulation framework, which is significantly faster than its MATLAB counterpart. A brief tutorial (along with benchmarking information) is available in a previous post. Other additions include Black-Scholes option and Greeks valuation routines, implied volatility calculations, and general bug fixes. Some useful links are below: