Optimal stopping II: a dynamic programming equation

The following is a continuation of a previous post on optimal stopping. In this post, we derive a dynamic programming equation for the optimal stopping problem; it turns out to be a partial differential equation (PDE) that must be interpreted in the viscosity sense.

As before, we consider a filtered probability space (with filtration $(\mathcal{F}_{t})_{t\geq0}$) satisfying the usual conditions, on which a standard Brownian motion $W_{t}$ is defined. Let $X_{s}^{t,x}$ denote the strong solution of the stochastic differential equation (SDE) $$dX_{s}=b(s,X_{s})ds+\sigma(s,X_{s})dW_{s}\text{ for }s>t\text{ and }X_{t}=x.$$ Let $T<\infty$ and $\mathscr{T}_{[t,T]}$ be the set of $[t,T]$-valued stopping times independent of $\mathcal{F}_{t}$. Consider the problem $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\text{ where }J(t,x;\tau)=\mathbb{E}\left[g(\tau,X_{\tau}^{t,x})\right]$$ and $g$ is some given function.
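To make the setup concrete, here is a hedged numerical sketch; everything model-specific below is an assumption for illustration, not part of the post. We simulate $X^{t,x}$ with the Euler-Maruyama scheme for the hypothetical coefficients $b(s,x)=0.05x$ and $\sigma(s,x)=0.2x$, take $g(s,x)=(K-x)^{+}$, and estimate $J(t,x;\tau)$ by Monte Carlo for the simple (generally suboptimal) stopping rule "stop the first time the payoff is positive, else stop at $T$":

```python
import numpy as np

# Hypothetical model (not from the post): b(s, x) = 0.05 x, sigma(s, x) = 0.2 x,
# payoff g(s, x) = max(K - x, 0). The rule "stop as soon as the payoff is
# positive, else at T" is just one admissible tau, so this estimates a lower
# bound J(t, x; tau) <= u(t, x).
rng = np.random.default_rng(0)

def estimate_J(t, x, T=1.0, K=1.0, n_steps=200, n_paths=50_000):
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x))
    stopped = np.zeros(n_paths, dtype=bool)
    payoff = np.zeros(n_paths)
    for _ in range(n_steps):
        # stop any path whose current payoff is positive
        hit = ~stopped & (K - X > 0)
        payoff[hit] = K - X[hit]
        stopped |= hit
        # Euler-Maruyama step for the paths that are still running
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = np.where(stopped, X, X + 0.05 * X * dt + 0.2 * X * dW)
    payoff[~stopped] = np.maximum(K - X[~stopped], 0.0)  # tau = T on these paths
    return payoff.mean()
```

Since $u$ is the supremum of $J(t,x;\tau)$ over all admissible stopping times, any such rule gives a lower bound on the value function.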

All assumptions of the previous post hold.

The PDE we will derive (in the viscosity sense) is $$\min\left\{ -\left(\partial_{t}+\mathcal{A}\right)u,u-g\right\} =0\text{ on }[0,T)\times\mathbb{R}^{d},\label{eq:pde}\tag{1}$$ where $\mathcal{A}$ is the infinitesimal generator of the SDE above, given by $$\mathcal{A}\varphi=b\cdot\nabla_{x}\varphi+\frac{1}{2}\operatorname{tr}\left(\sigma\sigma^{\intercal}D_{x}^{2}\varphi\right).$$ Let's now define the notion of viscosity solution for this specific problem:

Let $\mathcal{O}=[0,T)\times\mathbb{R}^{d}$. A locally bounded function $v\colon\mathcal{O}\rightarrow\mathbb{R}$ is a viscosity subsolution (resp. supersolution) of \eqref{eq:pde} if \begin{align*} & \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v^{*}-g)(t,x)\right\} \leq0\\ \text{(resp. } & \min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(v_{*}-g)(t,x)\right\} \geq0\text{)} \end{align*} for all $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ such that $(v^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(v^{*}-\varphi)=0$ (resp. $(v_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(v_{*}-\varphi)=0$) and the maximum (resp. minimum) is strict. We say $v$ is a viscosity solution of \eqref{eq:pde} if it is both a subsolution and supersolution of \eqref{eq:pde}.
Suppose the value function $u\colon\mathcal{O}\rightarrow\mathbb{R}$ defined above is locally bounded. Then, $u$ is a viscosity solution of \eqref{eq:pde}.
We first prove that $u$ is a subsolution. Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that $$(u^{*}-\varphi)(t,x)=\max_{\mathcal{O}}(u^{*}-\varphi)=0$$ where the maximum is strict. Assume, in order to arrive at a contradiction, that $$\min\left\{ -\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x),(u^{*}-g)(t,x)\right\} >0.$$ Equivalently, this can be expressed as $$(\varphi-g)(t,x)=(u^{*}-g)(t,x)>0\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)>0.$$ By continuity, we can find $h>0$ (with $t+h<T$) and $\delta>0$ such that $$\varphi-g\geq\delta\text{ and }-\left(\partial_{t}+\mathcal{A}\right)\varphi\geq0\text{ on }\mathcal{N}_{h}=\left( (t-h,t+h)\times B_{h}(x) \right) \cap \mathcal{O}$$ where $B_{h}(x)$ is the ball of radius $h$ centred at $x$. Since $(t,x)$ is a strict maximizer, $$-\gamma=\max_{\partial\mathcal{N}_{h}}\left(u^{*}-\varphi\right)<0.$$ Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that $$(t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u^{*}(t,x).$$ Let $$\theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin\mathcal{N}_{h}\right\} .$$ Note that for $n$ large enough, $(t_{n},X_{t_{n}}^{t_{n},x_{n}})=(t_{n},x_{n})\in\mathcal{N}_{h}$ (we will always assume $n$ is large enough for this to occur). Let $$\eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n}).$$ Let $\tau_n\in\mathscr{T}_{[t_n,T]}$ be arbitrary. 
By Ito's lemma, \begin{align*} u(t_{n},x_{n}) & =\eta_{n}+\varphi(t_{n},x_{n})\\ & \begin{gathered}=\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ +\mathbb{E}\left[\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\nabla_{x}\varphi(s,X_{s}^{t_{n},x_{n}})\cdot\sigma(s,X_{s}^{t_{n},x_{n}})dW_{s}\right] \end{gathered} \\ & =\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]. \end{align*} The Ito integral has zero expectation because its integrand is bounded: on the interval $[t_{n},\tau_{n}\wedge\theta_{n}]$, the process $s\mapsto(s,X_{s}^{t_{n},x_{n}})$ takes values in the bounded set $\mathcal{N}_{h}$, on which $\nabla_{x}\varphi$ and $\sigma$ are bounded. Moreover, $-\left(\partial_{t}+\mathcal{A}\right)\varphi\geq0$ on $\mathcal{N}_{h}$, so dropping the nonnegative term $-\int_{t_{n}}^{\tau_{n}\wedge\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds$ yields $$u(t_{n},x_{n})\geq\eta_{n}+\mathbb{E}\left[\varphi(\tau_{n}\wedge\theta_{n},X_{\tau_{n}\wedge\theta_{n}}^{t_{n},x_{n}})\right].$$ Due to the inequalities established on $\mathcal{N}_{h}$ and its boundary, \begin{align*} u(t_n,x_n) & \geq\eta_n+\mathbb{E}\left[\varphi(\tau_n\wedge\theta_n,X_{\tau_n\wedge\theta_n}^{t_n,x_n})\right]\\ & =\eta_n+\mathbb{E}\left[\varphi(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n <\theta_n\right\} }+\varphi(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n \geq\theta_n \right\} }\right]\\ & \geq\eta_n+\mathbb{E}\left[\left(g(\tau_n,X_{\tau_n}^{t_n,x_n})+\delta\right)\mathbf{1}_{\left\{ \tau_n <\theta_n\right\} }+\left(u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})+\gamma\right)\mathbf{1}_{\left\{ \tau_n\geq\theta_n\right\} }\right]\\ & \geq\eta_n+\gamma\wedge\delta+\mathbb{E}\left[g(\tau_n,X_{\tau_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n<\theta_n\right\} }+u^{*}(\theta_n,X_{\theta_n}^{t_n,x_n})\mathbf{1}_{\left\{ \tau_n\geq\theta_n \right\} }\right]. \end{align*} Since $\tau_{n}\in\mathscr{T}_{[t_{n},T]}$ is arbitrary and $\eta_{n}+\gamma\wedge\delta>0$ for $n$ sufficiently large (recall that $\eta_{n}\rightarrow0$), taking the supremum over $\tau_{n}$ contradicts the $\leq$ inequality in the dynamic programming principle established in the previous post.
We now prove that $u$ is a supersolution. The inequality $u-g\geq0$ follows directly from the definition of the value function: $$u(t,x)=\sup_{\tau\in\mathscr{T}_{[t,T]}}J(t,x;\tau)\geq J(t,x;t)=\mathbb{E}[g(t,X_{t}^{t,x})]=g(t,x).$$ Taking the lower semicontinuous envelope on both sides of the inequality, we get $u_{*}-g\geq0$ (recall that $g$ is presumed to be continuous). Let $(t,x,\varphi)\in\mathcal{O}\times C^{1,2}(\mathcal{O})$ be such that $$(u_{*}-\varphi)(t,x)=\min_{\mathcal{O}}(u_{*}-\varphi)=0.$$ Let $(t_{n},x_{n})$ be a sequence in $\mathcal{O}$ such that $$(t_{n},x_{n})\rightarrow(t,x)\text{ and }u(t_{n},x_{n})\rightarrow u_{*}(t,x).$$ Let $$\eta_{n}=u(t_{n},x_{n})-\varphi(t_{n},x_{n})$$ (note that $\eta_{n}\geq0$ since $\varphi\leq u_{*}\leq u$) and $$h_{n}=\sqrt{\eta_{n}}\mathbf{1}_{\left\{ \eta_{n}\neq0\right\} }+\mathbf{1}_{\left\{ \eta_{n}=0\right\} }/n.$$ Also let $$\theta_{n}=\inf\left\{ s>t_{n}\colon(s,X_{s}^{t_{n},x_{n}})\notin[t_{n},t_{n}+h_{n})\times B_{1}(x)\right\}$$ where we always assume $n$ is large enough that $t_n+h_n < T$ and $x_n \in B_1(x)$. Calling upon the $\geq$ inequality in the dynamic programming principle established in the previous post (with $\theta=\theta_{n}$), we have $$\eta_{n}+\varphi(t_{n},x_{n})=u(t_{n},x_{n})\geq\mathbb{E}\left[u_{*}(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right]\geq\mathbb{E}\left[\varphi(\theta_{n},X_{\theta_{n}}^{t_{n},x_{n}})\right].$$ Applying Ito's lemma and dividing by $h_{n}$ yields $$\frac{\eta_{n}}{h_{n}}\geq\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right].$$ As before, the Ito integral has zero expectation since its integrand is bounded on the interval $[t_{n},\theta_{n}]$, on which $(s,X_{s}^{t_{n},x_{n}})$ stays in a bounded set. Fix a sample $\omega$ in the sample space. Since $h_{n}\rightarrow0$ and the paths of the process are continuous, for $n$ sufficiently large the process cannot exit $B_{1}(x)$ before time $t_{n}+h_{n}$, and hence $\theta_{n}(\omega)=t_{n}+h_{n}$. By the mean value theorem for integrals, the random variable in the expectation converges almost surely to $\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)$.
Applying the dominated convergence theorem and noting that $\eta_{n}/h_{n}\leq\sqrt{\eta_{n}}\rightarrow0$ by the choice of $h_{n}$, we get \begin{align*} 0=\lim_{n\rightarrow\infty}\frac{\eta_{n}}{h_{n}} & \geq\lim_{n\rightarrow\infty}\mathbb{E}\left[\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ & =\mathbb{E}\left[\lim_{n\rightarrow\infty}\frac{1}{h_{n}}\int_{t_{n}}^{\theta_{n}}\left(\partial_{t}+\mathcal{A}\right)\varphi(s,X_{s}^{t_{n},x_{n}})ds\right]\\ & =\mathbb{E}\left[\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)\right]\\ & =\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x). \end{align*} Multiplying both sides of the inequality by $-1$ yields $-\left(\partial_{t}+\mathcal{A}\right)\varphi(t,x)\geq0$ which, combined with $u_{*}-g\geq0$, establishes the supersolution property.
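Finally, a hedged numerical sketch of \eqref{eq:pde} itself, since monotone finite-difference schemes are a standard way to approximate viscosity solutions. For the assumed one-dimensional coefficients $b(t,x)=rx$, $\sigma(t,x)=\mathrm{vol}\cdot x$ and obstacle $g(t,x)=(K-x)^{+}$ (none of which come from the post), an explicit scheme marches backward from the terminal condition $u(T,\cdot)=g(T,\cdot)$, alternating a step of $\partial_{t}u+\mathcal{A}u=0$ with a projection onto $\{u\geq g\}$; this is the min structure of \eqref{eq:pde} in discrete form.

```python
import numpy as np

# Hedged sketch (assumed model, not from the post): explicit finite differences
# for min{-(d_t + A)u, u - g} = 0 with A u = r x u_x + 0.5 vol^2 x^2 u_xx and
# g(t, x) = max(K - x, 0), on [0, T] x [0, x_max] with u(T, .) = g(T, .).
def solve_obstacle_pde(T=1.0, K=1.0, r=0.05, vol=0.2, x_max=3.0, nx=300, nt=20_000):
    dx = x_max / nx
    dt = T / nt                        # small enough for explicit-scheme stability
    x = np.linspace(0.0, x_max, nx + 1)
    g = np.maximum(K - x, 0.0)
    u = g.copy()                       # terminal condition u(T, .) = g(T, .)
    for _ in range(nt):                # march backward in time from T to 0
        u_x = (u[2:] - u[:-2]) / (2.0 * dx)
        u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        Au = r * x[1:-1] * u_x + 0.5 * (vol * x[1:-1])**2 * u_xx
        u[1:-1] += dt * Au             # one step of d_t u + A u = 0
        u = np.maximum(u, g)           # obstacle: enforce u >= g
    return x, u
```

The boundary nodes are simply held at $g$, which is adequate for this illustration; convergence of monotone, stable, consistent schemes to the viscosity solution is the subject of the Barles-Souganidis framework.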