Financial Mathematics


Introduction to Stochastic Calculus: Definition and properties of Filtration and Brownian Motion

Filtration

A filtration is a good model of “information” in the real world (it can only increase as time goes by).
▻▻ \( \mathbb{F}= (\mathcal{F}_t)_{t \in I} \),  such that \( \mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F},\ \ 0\leq s \leq t \lt \infty \).

Usual conditions

A stochastic basis \( (\Omega, \mathcal{F}, \mathbb{P}, \mathbb{F}) \), or \( (\Omega, \mathcal{F}, \mathbb{P}, (\mathcal{F}_t)_{t \in I}) \), satisfies the usual conditions if:
(1) \( (\Omega, \mathcal{F}, \mathbb{P}) \) is complete;
(2) \(\mathcal{F}_0\) contains all \(\mathbb{P}\)-null sets of \(\mathcal{F}\);
(3) \( \mathbb{F} \) is right-continuous, i.e. \( \mathcal{F}_t = \mathcal{F}_{t_+} := \bigcap\limits_{s \gt t}  \mathcal{F}_s\).

Brownian Motion

A family of random variables \( W=(W_t)_{t \geq 0} \) is Brownian Motion with respect to filtration \( (\mathcal{F}_t) \), if:
(a) the path \( t \mapsto W_t(\omega) \) is continuous on \( [0, \infty) \) and \(W_0(\omega)=0\);
(b) \( (W_t) \) is \( (\mathcal{F}_t) \)-adapted, and \( W_t-W_s \) is independent from \( \mathcal{F}_s \);
▻▻ i.e. \( \mathbb{P}(A \cap \{ W_t - W_s \in B \})=\mathbb{P}(A)\mathbb{P}(W_t - W_s \in B)\ \ \ \forall A \in \mathcal{F}_s,\ \forall B \in \mathcal{B}(\mathbb{R}) \);
(c) Normally distributed, with \( \mathbb{E} W_t = 0 \) and \( \mathbb{E} W_t^2 = t \):
▻▻ i.e. \( \mathbb{P}( W_t \leq x ) = \frac{1}{\sqrt{2 \pi t}} \int_{- \infty}^{x} e^{-\frac{z^2}{2t}} dz \);
(d) Homogeneous (stationary increments): \( \mathbb{P}( W_{t-s} \leq x ) = \mathbb{P}( W_{t} - W_{s} \leq x ) \).
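The definition translates directly into a simulation. A minimal sketch (grid size and seed are arbitrary choices) that builds an approximate Brownian path from independent \( N(0, \Delta t) \) increments, matching (a)-(d) up to discretisation:

```python
import numpy as np

rng = np.random.default_rng(42)
T, n_steps = 1.0, 1_000
dt = T / n_steps

# W_0 = 0; the increments W_{t_i} - W_{t_{i-1}} are independent N(0, dt), as in (b)-(d).
dW = rng.standard_normal(n_steps) * np.sqrt(dt)
W = np.concatenate([[0.0], np.cumsum(dW)])

print(W[-1], W[n_steps // 2])   # single draws of W_T ~ N(0, T) and W_{T/2} ~ N(0, T/2)
```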

Properties of Brownian Motion

(1) If a Brownian motion exists, the stochastic basis can be chosen to satisfy the usual conditions;
(2) The length of the path on [0, 1] is ∞ almost surely (a path can only be sketched, never actually drawn);
(3) The path is nowhere differentiable;
(4) Independent increments (this follows from (b)); the increments are stationary by (d).

Martingales: Conditional expectation and definition of martingales

Conditional Expectation

Let \( \mathcal{G} \subseteq \mathcal{F} \) be a sub-sigma-algebra in the probability space \((\Omega, \mathcal{F}, \mathbb{P}) \). If Y is \( \mathcal{G} \)-measurable and
▻▻ \( \mathbb{E}(X\mathbb{1}_A) = \mathbb{E}(Y \mathbb{1}_A )\ \ \ \forall A \in \mathcal{G} \),
▻▻ then Y is the conditional expectation of X given \( \mathcal{G} \), written \( Y:=\mathbb{E}[X|\mathcal{G}] \).

Properties of Conditional Expectation

Let \( \mathbb{E}|X| \lt \infty \) or \( X \geq 0 \); then:
(1) if X is \( \mathcal{G} \)-measurable, \( \mathbb{E}[X|\mathcal{G}] = X \) almost surely;
(2) if X and \( \mathcal{G}\) are independent, \( \mathbb{E}[X|\mathcal{G}] = \mathbb{E}X \) almost surely;
(3) Tower property \( \mathcal{G} \subseteq \mathcal{H} \subseteq \mathcal{F} \) are sub-sigma-algebras then \( \mathbb{E}[\mathbb{E}[X|\mathcal{G}]|\mathcal{H}] = \mathbb{E}[\mathbb{E}[X|\mathcal{H}]|\mathcal{G}] = \mathbb{E}[X|\mathcal{G}] \).
(4) Linearity: \( \mathbb{E}[\alpha X + \beta Z | \mathcal{G}] = \alpha \mathbb{E}[X|\mathcal{G}] + \beta \mathbb{E}[Z|\mathcal{G}]\ \ \ \forall \alpha, \beta \in \mathbb{R} \).
(5) Take out what is known: if \( \mathbb{E}|X| \lt \infty \) and Y is \( \mathcal{G} \)-measurable and bounded, then \( \mathbb{E}[XY|\mathcal{G}] = Y \mathbb{E}[X|\mathcal{G}] \) almost surely.
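A minimal numerical sketch of properties (3) and (5), assuming the simplest possible setting where \( \mathcal{G} = \sigma(Z) \) for a discrete random variable Z (all variable names and sample sizes are hypothetical): conditioning on \( \mathcal{G} \) then amounts to replacing X by its group mean on each level set of Z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# G = sigma(Z) for a discrete Z taking values 0, 1, 2.
Z = rng.integers(0, 3, size=n)
X = Z + rng.standard_normal(n)            # X = Z plus independent noise

# E[X | G] as a random variable: the group mean of X on each set {Z = z}.
group_mean = {z: X[Z == z].mean() for z in range(3)}
E_X_given_G = np.vectorize(group_mean.get)(Z)

print(np.isclose(E_X_given_G.mean(), X.mean(), atol=1e-2))             # tower: E[E[X|G]] = E[X]
print(np.isclose((Z * X).mean(), (Z * E_X_given_G).mean(), atol=1e-2)) # take out what is known (Z is G-measurable)
```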

Adapted and Martingale

A stochastic process \( (X_t)_{t \geq 0} \), for all \( t \geq 0 \):
Is adapted, if \( X_t \text{ is } \mathcal{F}_t \)-measurable.
Is a martingale, if:
(1) \( \mathbb{E}| X_t | \lt \infty\);
(2) For \( 0 \leq s \leq t,\ \mathbb{E}[ X_t | \mathcal{F}_s ] = X_s \).
Is square integrable, if: \( \mathbb{E}X_t^2 \lt \infty \).

Itô’s integral for simple integrands: Simple process and its integral

Itô’s Integral

\( I_t(L):=\int_0^t L_s\ dW_s \).

Simple Process \( L_t \)

denoted as \( \mathcal{L}_0 \): fix a partition \( 0 = t_0 \lt t_1 \lt \dots \lt t_n = T \) and random variables \( \xi_0, \dots, \xi_{n-1} \) such that:
(1) \( \xi_i \) is \( \mathcal{F}_{t_i} \)-measurable;
(2) \( \exists C \gt 0,\ \sup\limits_{\omega \in \Omega}|\xi_i(\omega)| \lt C \), and
▻▻ \( L_t=\sum\limits_{i=1}^n \xi_{i-1}\ \mathbb{1}_{(t_{i-1},t_i ]}(t) \).
Properties:
(1) has piecewise constant paths;
(2) is an adapted process, s.t. \( L_t= \xi_{i-1}\text{ for }t \in (t_{i-1},t_i ] \).
For a fixed function \( b \), integrating the simple process pathwise gives \( \int_0^T L_t(\omega)\,db(t) = \sum\limits_{i=1}^n \xi_{i-1} (\omega) \big( b(t_i)-b(t_{i-1}) \big) \).
But note that \( W_t \) is nowhere differentiable (and of unbounded variation), so this pathwise construction does not extend beyond simple integrands.

Itô Integral on \( \mathcal{L}_0 \)

\( I_t(L)=\sum\limits_{i=1}^n \xi_{i-1} \big( W_{t_i \wedge t} - W_{t_{i-1}\wedge t} \big) \).
▻▻ where \( a \wedge b := \min\{a,b\}\).
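A minimal sketch of this formula on a simulated path (the grid, the partition \( 0 \lt 0.25 \lt 0.5 \lt 1 \) and the particular bounded, \( \mathcal{F}_{t_{i-1}} \)-measurable \( \xi \)'s are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps = 1.0, 10_000
dt = T / n_steps
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n_steps) * np.sqrt(dt))])
W_at = lambda s: W[int(round(s / dt))]      # value of the simulated path at time s

# Simple process: L_t = sin(W_0.25) on (0.25, 0.5] and cos(W_0.5) on (0.5, 1], zero before 0.25.
knots = [0.25, 0.5, 1.0]
xi = [np.sin(W_at(0.25)), np.cos(W_at(0.5))]   # bounded and measurable at the left endpoints

# I_T(L) = sum_i xi_{i-1} (W_{t_i} - W_{t_{i-1}})
I_T = sum(x * (W_at(b) - W_at(a)) for x, (a, b) in zip(xi, zip(knots[:-1], knots[1:])))
print(I_T)
```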

Properties of \( I_t( \mathcal{L}_0 ) \)

(1) Itô Isometry: \( \mathbb{E}(I_T(L))^2 = \mathbb{E} \int_0^T L_t^2dt \);
(2) \( \big(I_t(L) \big)_{t \geq 0} \) is square integrable and a continuous martingale;
(3) \( I_t (\alpha L + \beta K) = \alpha I_t(L) +\beta I_t(K) \).

Itô’s integral for general integrands: General process and its integral

Motivating example: \( \int\limits_0^T W_t\,dW_t \); the integrand \( W_t \) is not bounded, so it is not a simple process and the previous definition does not apply directly.

General process space \( \mathcal{ L}_2 \)

The process \( L=(L_t)_{t\in [ 0,T ]}\) satisfies:
(1) L is \( \mathcal{ B}([ 0,T ]) \otimes \mathcal{ F} \)-measurable;
(2) L is \( (\mathcal{ F}_t)_{t\in [ 0,T ]} \)-adapted;
(3) \( \mathbb{ E} \int\limits_0^T L_t^2dt \lt \infty \).

\( I_t(L)\) for general process

Let \( L\in \mathcal{ L}_2 \); then there exists a sequence \( (L^n)_{n \geq 0}\) of simple processes s.t.:
▻▻ \( \lim\limits_{n \to \infty} \mathbb{ E} \int\limits_0^T |L_t – L_t^n|^2 dt=0\) .
Then define:
▻▻ \( I_t(L):= \lim\limits_{n \to \infty} \int\limits_0^t L_s^ndW_s\),
▻▻ where the limit makes sense s.t. \( \lim\limits_{n \to \infty} \mathbb{E} \big( I_t(L)-I_t(L^n) \big)^2=0\).

Properties of \( I_t(\mathcal{ L}_2 )\)

(1) Itô Isometry: \( \mathbb{E}(I_t(L))^2 = \mathbb{E} \int\limits_0^t L_s^2ds \) ;
(2) \( \big(I_t(L) \big)_{t \geq 0} \) is square integrable and a martingale;
(3) \( I_t (\alpha L + \beta K) = \alpha I_t(L) +\beta I_t(K) \) .
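A minimal numerical sketch for the motivating integrand \( L_t = W_t \) (grid and sample sizes are arbitrary): left-endpoint Riemann sums approximate \( \int_0^T W_t\,dW_t \), the Itô isometry predicts \( \mathbb{E}(I_T)^2 = \mathbb{E}\int_0^T W_t^2\,dt = T^2/2 \), and pathwise the sums approach \( (W_T^2 - T)/2 \), an identity that follows from Itô's formula in the next section.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 1_000, 5_000
dt = T / n_steps

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # left endpoints: non-anticipating integrand

# Left-endpoint sums approximating int_0^T W_t dW_t on each path.
I = np.sum(W_prev * dW, axis=1)

print((I**2).mean())                             # Ito isometry: E I^2 = T^2/2 = 0.5 (and E I = 0)
print(np.mean((I - (W[:, -1]**2 - T) / 2)**2))   # ~0: pathwise I is close to (W_T^2 - T)/2
```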

Itô’s Formula: A first step in stochastic differential equation

Itô’s integral

\( X_t=X_0+ \int\limits_0^t b(s)\ ds +\int\limits_0^t \sigma(s)\ dW_s\),
▻▻ a combination of a “time integral” and a “space integral”,
▻▻ i.e. a Riemann integral plus a stochastic integral of an \( \mathcal{ L}_2 \) integrand.
Conditions:
(1) \(b(s) \) is \( \mathcal{ B}([ 0,T ]) \otimes \mathcal{ F} \)-measurable;
(2) \( b \) is \( (\mathcal{ F}_s) \)-adapted, with \( \mathbb{E} \int\limits_0^T |b(s)|ds \lt \infty \) ;
(3) \( \sigma \in \mathcal{ L}_2 \) .

Itô’s formula

Let \( f \in C^{1,2} ([ 0,T ] \times \mathbb{ R} )\); then
\( \begin{align}
f(t, X_t) = f(0, X_0) &+ \int\limits_0^t \frac{\partial f}{\partial t}(s, X_s)ds\\
&+ \int\limits_0^t \frac{\partial f}{\partial x}(s, X_s) \bigg( b(s)ds+\sigma(s)dW_s \bigg)\\
&+ \frac{ 1}{2} \int\limits_0^t \frac{\partial^2 f}{\partial x^2}(s, X_s)\sigma^2(s)ds
\end{align}
\) .
Example:
Let \( f(t,x)=e^{x-\frac{ t}{ 2} },\ X_t=W_t\) (so \( b \equiv 0 \) and \( \sigma \equiv 1 \)); then
\( f(t, W_t)=e^{W_t-\frac{ t}{ 2} } = 1 + \int\limits_0^t -\frac{ 1}{ 2} e^{W_s-\frac{ s}{ 2} }ds + \int\limits_0^t e^{W_s-\frac{ s}{ 2} } dW_s + \frac{ 1}{ 2} \int\limits_0^t e^{W_s-\frac{ s}{ 2} } ds = 1 + \int\limits_0^t e^{W_s-\frac{ s}{ 2} } dW_s\), since the two \( ds \)-integrals cancel; in particular \( e^{W_t-\frac{ t}{ 2}} \) is a martingale.
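A quick numerical sanity check of this example (a minimal sketch with arbitrary grid and sample sizes): since the \( ds \)-terms cancel, \( e^{W_t - t/2} \) should keep constant expectation 1 over time.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T = 50_000, 200, 2.0
dt = T / n_steps

W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
t = dt * np.arange(1, n_steps + 1)
M = np.exp(W - t / 2)                      # M_t = exp(W_t - t/2), broadcast over paths

print(M[:, [49, 99, 199]].mean(axis=0))    # expectations at t = 0.5, 1.0, 2.0: all close to 1
```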

Continuous time market models: Stock price process and trading strategies

Continuous time market model

(M1) a probability space \( (\Omega, \mathcal{ F}, \mathbb{ P}) \);
(M2) a filtration \( \mathbb{ F} \) such that:
▻▻ (M2-1) the usual conditions hold;
▻▻ (M2-2) \( \mathcal{ F} _0\) is trivial, i.e. \( \mathbb{ P}(A)=0 \text{ or } \mathbb{ P}(A)=1\ \forall A \in \mathcal{ F}_0 \) ;
▻▻ (M2-3) \( \mathcal{ F}_T=\mathcal{ F} \) .
(M3) d+1 traded assets:
▻▻ d stocks: \( S_1 (t),…, S_d (t)\) ;
▻▻ 1 bank account: \( S_0 (t)\) .

Generalized geometric Brownian motion (d=1)

\( S_1(t)=S_1 (0) \exp \bigg( \int \limits_{ 0}^{ t} \sigma (s)d W_s + \int \limits_{ 0}^{ t} \bigg( \alpha (s) - \frac{ 1}{ 2} \sigma ^2 (s) \bigg)ds \bigg)\) ;
▻▻ where \( \alpha, \sigma \) are bounded, measurable and adapted processes.
Itô's formula implies the equivalent form:
▻▻ \( S_1(t)=S_1(0)+ \int \limits_{ 0}^{t } \alpha (s) S_1(s) ds + \int \limits_{ 0}^{ t} \sigma (s) S_1 (s) dW_s \) .
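A minimal simulation sketch comparing the two forms, assuming constant coefficients \( \alpha, \sigma \) (the numerical values are hypothetical): the exponential formula and an Euler-Maruyama discretisation of the SDE form should give nearly the same terminal value for the same Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps = 1.0, 1_000
dt = T / n_steps
alpha, sigma, S0 = 0.05, 0.2, 100.0        # constant coefficients for simplicity

dW = rng.standard_normal(n_steps) * np.sqrt(dt)
W = np.cumsum(dW)
t = dt * np.arange(1, n_steps + 1)

# Exponential (closed) form.
S_exact = S0 * np.exp(sigma * W + (alpha - 0.5 * sigma**2) * t)

# Euler-Maruyama scheme on dS = alpha S dt + sigma S dW.
S_euler = np.empty(n_steps + 1)
S_euler[0] = S0
for i in range(n_steps):
    S_euler[i + 1] = S_euler[i] * (1.0 + alpha * dt + sigma * dW[i])

print(S_exact[-1], S_euler[-1])   # close for small dt
```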

(Special Case) Geometric Brownian motion with drift

\( S_1 (t) = S_1 (0) e^{\sigma W_t + (\alpha - \frac{ \sigma ^2}{ 2} )t},\ t \in [ 0,T ] \) ;
⇔ \( S_1 (t) = S_1(0)+ \alpha \int \limits_{ 0}^{ t} S_1 (s)ds + \sigma \int \limits_{ 0}^{ t} S_1(s) d W_s \) .

With d-dimensional Brownian motion

For \( W = (W_t^1, …, W_t^d)_{t \in [ 0,T ]}\) :
\( S_i(t):= S_i (0) + \int \limits_{ 0}^{ t} \alpha _i (s) S_i (s)ds + \sum \limits_{ j=1}^{ d} \int \limits_{ 0}^{ t} S_i (s) \sigma_{ij} (s) dW_s^j \) .

Trading strategy process

\( \varphi (t) := \big( \varphi_0 (t),…, \varphi_d (t) \big) \) form a trading strategy, if:
(M1) \( \varphi _i : [ 0,T ] \times \Omega \to \mathbb{ R} \text{ are } \mathcal{ B}([ 0,T ]) \otimes \mathcal{ F} \) -measurable and adapted;
(M2) \( \varphi_i (t) \text{ is } \mathcal{ F}_t \) -measurable for all t ;
(M3) \( \sum \limits_{ i=0}^{ d} \int \limits_{ 0}^{ T} \mathbb{ E} \varphi _i (t)^2 S_i (t)^2 dt \lt \infty \) .
Note:
▻▻ \( \varphi_i (t) \lt 0\) means short sales (stock borrowed rather than owned).

Risk neutral pricing: A method to compute the fair price of an option without arbitrage

Strong equivalent martingale measure (EMM)

(M1) \( \mathbb{ Q} \text{ is equivalent to } \mathbb{ P} \) , if:
▻▻ \( \mathbb{ Q}(A)=0\ ⇔\ \mathbb{ P}(A)=0,\ \forall A \in \mathcal{ F} \) ;
(M2) The discounted price processes are \( \mathbb{ Q} \) -martingales:
▻▻ \( \tilde{S}:= \bigg( \frac{ S(t)}{ S_0 (t)} \bigg)_{t \geq 0} \) .
Note:
Depending on the model, an EMM may exist uniquely, exist but not uniquely, or not exist at all.

Value and gain process

Value of the portfolio \( \varphi (t) = \big( \varphi_0(t), \varphi_1(t),…,\varphi_d(t) \big) \) :
▻▻ \( V_t (\varphi) := \sum \limits_{ i=0}^{ d} \varphi_i (t) S_i (t),\ \ \ t \in [ 0,T ] \) .
Gain process is:
▻▻ \( G_t(\varphi) := \sum \limits_{ i=0}^{ d} \int \limits_{ 0}^{ t} \varphi_i (u) d S_i(u) \) .
A strategy is self-financing if:
▻▻ \( V_t (\varphi)=V_0 (\varphi)+G_t (\varphi)\) .

Admissible strategy and arbitrage

A strategy \( \varphi\) is admissible, if:
(1) \( \varphi\) is self-financing ;
(2) \( V_t (\varphi) \geq 0\ \ \forall t \in [ 0,T ]\) ;
(3) \( \mathbb{ E_Q} \sup \limits_{ 0\leq t \leq T} \tilde{V}_t(\varphi)^2 \lt \infty \) , where:
▻▻ \( \tilde{V}_t (\varphi):= \frac{ V_t(\varphi)}{ S_0 (t)} \) .
A strategy \( \varphi\) is an arbitrage opportunity if:
(1) \( \varphi\) is admissible;
(2) \( V_0(\varphi) =0 \text{ and } \mathbb{ P} \big( V_T(\varphi) \gt 0 \big) \gt 0 \) .

Option

European call option: \( H=f(S_i(T))=(S_i (T) -K)^+\) ;
Basket options: \( H= ( S_1 (T)+…+ S_d (T) – K )^+ \) ;
Asian options: \( H= \bigg( \frac{ 1}{ T} \int \limits_{ 0}^{ T} S_1(t)dt -K \bigg)^+ \)

Admissible trading strategy concerning options

If the EMM exists and is unique, and \( \mathbb{ E_Q} H^2 \lt \infty \) , then there exists \( \varphi = \big( \varphi_0(s),…, \varphi_d(s) \big)_{s \in [ 0,T ]} \) s.t.:
▻▻ \( \tilde{H} := \frac{ H}{ S_0(T)} = V_0 (\varphi) + \sum \limits_{ i=1}^{ d} \int \limits_{ 0}^{ T} \varphi_i (s)d \tilde{S}_i (s) \) .
Fair Price:
\( \mathbb{ E_Q} \tilde{H} = V_0 (\varphi) \)

The Black-Scholes model: single riskless and single risky asset model

Idea

The model consists of:
▻▻ one riskless asset: \( S_0(t)=e^{rt}\) , where r is the interest rate;
▻▻ one risky asset: \( S(t)= s_0 e^{\sigma W_t - \frac{ \sigma^2}{ 2}t+\mu t }\) .
Differential equations to be solved:
\( \begin{cases} dS_0(t )&= r S_0(t)dt\\ S_0(0) &= 1 \end{cases}\) has the unique solution \( S_0(t)=e^{rt} \), and
\( \begin{cases} dS(t )&= \mu S(t)dt + \sigma S(t)dW_t\\ S(0) &= s_0 \end{cases}\) has the unique solution \( S(t)= s_0 e^{\sigma W_t - \frac{ \sigma^2}{ 2}t+\mu t }\) .

Properties of EMM \( \mathbb{ Q} \)

(1) \( \mathbb{ Q} \sim \mathbb{ P} \) ;
(2) \( \tilde{S} (t) = \frac{ S(t)}{ S_0(t)}=s_0 e^{\sigma W_t - \frac{ \sigma ^2}{ 2}t + (\mu -r )t } \) is a \( \mathbb{ Q} \) -martingale.

Girsanov’s theorem

Let \( (\theta_t)_{t \in [ 0,T ]}\) be an adapted process with \( \int_0^T \theta_s^2 ds \lt \infty\) , and assume that the process
▻▻ \( H_t = \exp \bigg( -\int \limits_{ 0}^{ t} \theta_s d W_s - \frac{ 1}{ 2} \int \limits_{ 0}^{ t} \theta_s^2 ds \bigg) \)
is a martingale. Then
▻▻ \( \mathbb{ Q}(A) = \int_A H_T\ d\mathbb{ P}(\omega),\ \ A \in \mathcal{ F}_T \)
defines a probability measure equivalent to \( \mathbb{ P} \), and under \( \mathbb{ Q} \) the process
▻▻ \( B_t := W_t + \int \limits_{ 0}^{ t} \theta_s ds \)
is a standard Brownian motion.
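A minimal Monte Carlo sketch of the change of measure at the terminal time, assuming a constant \( \theta \) (the value is hypothetical): reweighting by \( H_T \) should make \( B_T = W_T + \theta T \) look like a centred Gaussian with variance T.

```python
import numpy as np

rng = np.random.default_rng(6)
theta, T, n_paths = 0.7, 1.0, 1_000_000    # constant theta

W_T = rng.standard_normal(n_paths) * np.sqrt(T)
H_T = np.exp(-theta * W_T - 0.5 * theta**2 * T)   # density dQ/dP on F_T
B_T = W_T + theta * T                              # candidate Q-Brownian motion at time T

print(H_T.mean())                        # ~1: H is a P-martingale with E H_T = 1
print(np.average(B_T, weights=H_T))      # ~0: E_Q[B_T] = 0
print(np.average(B_T**2, weights=H_T))   # ~T: E_Q[B_T^2] = T
```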

Novikov condition

The following condition is sufficient for \( (H_t)_t\) to be a martingale:
▻▻ \( \mathbb{ E}e^{\frac{ 1}{ 2} \int \limits_{ 0}^{ T} \theta_t^2 dt } \lt \infty \) .

\( \tilde{S}(t) \) using Itô’s formula

\( \tilde{S}(t)= s_0 + \sigma \int \limits_{ 0}^{ t} \tilde{S}(s)d B_s \) , with \( B_s = W_s + \frac{ \mu -r }{ \sigma}s \).
Hence, \( \mathbb{ Q} (A):= \int_A H_T d \mathbb{ P} \) , where
▻▻ \( H_T:= \exp \bigg( -\frac{ \mu -r}{ \sigma}W_T - \frac{ (\mu -r)^2}{ 2 \sigma ^2} T \bigg) \) .
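Putting the pieces together, a minimal risk-neutral pricing sketch for a European call (all parameter values are hypothetical): under \( \mathbb{Q} \) the stock is \( S(T) = x\,e^{\sigma B_T + (r - \sigma^2/2)T} \) with \( B \) a \( \mathbb{Q} \)-Brownian motion, so the fair price \( \mathbb{E_Q}\tilde{H} \) can be estimated by Monte Carlo and compared with the standard Black-Scholes closed form (not derived in these notes).

```python
import numpy as np
from math import erf, exp, log, sqrt

rng = np.random.default_rng(5)
x, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0
n_paths = 1_000_000

# Under Q: S(T) = x exp(sigma B_T + (r - sigma^2/2) T), with B_T ~ N(0, T).
B_T = rng.standard_normal(n_paths) * sqrt(T)
S_T = x * np.exp(sigma * B_T + (r - 0.5 * sigma**2) * T)

# Fair price = E_Q[ discounted payoff ].
mc_price = exp(-r * T) * np.mean(np.maximum(S_T - K, 0.0))

# Standard Black-Scholes call value, for comparison.
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
d1 = (log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
print(mc_price, x * Phi(d1) - K * exp(-r * T) * Phi(d2))   # the two values should nearly agree
```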

Stochastic Differential Equations

Stochastic Process in Continuous Time

Stochastic process

\( (X_t)_{t \in I}\) with \( X_t: \Omega \to \mathbb{R} \).
I is an index set, \( I=[0, T] \) or \( [0, \infty) \).

Indistinguishable, modifications, same distributions

X and Y are indistinguishable if:
▻▻ \( \mathbb{P}(X_t=Y_t, t \in I)=1 \).
▻▻ i.e. \( \mathbb{P}(\{ \omega \in \Omega : X_t (\omega)=Y_t (\omega)\ \forall t \in I \})=1 \).
X and Y are modifications to each other, if:
▻▻ \( \mathbb{P}(X_t=Y_t)=1\ \ \forall t \in I\).
X and Y have the same finite-dimensional distributions (\(X{\buildrel d \over =}Y\)), where \( X=(X_t)_{t \in I}\) lives on \( (\Omega, \mathcal{F}, \mathbb{P})\) and \( Y=(Y_t)_{t \in I}\) on \( (\Omega', \mathcal{F}', \mathbb{P}') \), if:
▻▻ \( \mathbb{P}((X_{t_1},…,X_{t_n})\in B)=\mathbb{P'}((Y_{t_1},…,Y_{t_n})\in B)\ \ \ \forall n,\ \forall t_1,…,t_n \in I,\ \ B \in \mathcal{B}(\mathbb{R}^n) \).
Relationships:
▻▻ indistinguishable ⇒ modifications ⇒ same distribution
▻▻ indistinguishable ⇍ modifications ⇍ same distribution
▻▻ modifications + right/left-continuous paths ⇒ indistinguishable

Measurable, Progressively Measurable and Adapted

Given a process \( X = (X_t)_{t \in I}, \ X_t:\Omega \to \mathbb{R} \), and a filtration \( (\mathcal{F}_t)_{t \in I} \):
X is measurable, if consider X as:
▻▻ \( (\omega,t) \to X_t(\omega) \) (the function)
▻▻ \( \Omega \times I \to \mathbb{R} \) (the corresponding space)
▻▻ \( \mathcal{F} \otimes \mathcal{B}(I) \to \mathcal{B}(\mathbb{R})\) (the corresponding measurability)
X is progressively measurable, if for every \( S \in I \) the restriction of this map to \( \Omega \times [ 0, S ] \) is measurable:
▻▻ \( \Omega \times [ 0, S ] \to \mathbb{R} \) (the corresponding space)
▻▻ \( \mathcal{F}_S \otimes \mathcal{B}([ 0,S ]) \to \mathcal{B}(\mathbb{R})\) (the corresponding measurability)
X is adapted, if
▻▻ \( X_t \) is \( \mathcal{F}_t \)-measurable.

Exercises with Solutions

Convention: let \( (W_t)_{t \in [ 0,T ]}\) be a Brownian motion on \( (\Omega, \mathcal{ F}, \mathbb{ P}, \mathbb{ F} )\) , and \( 0 \leq s \lt t \leq T\).


Sum of independent normally distributed r.v.
Given \( X_1,…,X_n\) independent, normally distributed r.v. with:
\( \mathbb{ E} X_k = m_k \) and \( \mathbb{ E} (X_k – m_k)^2 =\sigma_k^2\) .
Let \( Z = a_1 X_1 + … + a_n X_n\) , calculate expectation and variance.

Using linearity,
\( \mathbb{ E} Z = a_1\mathbb{ E}X_1 +…+ a_n \mathbb{ E}X_n = a_1 m_1 + … + a_n m_n \) ;
Using independence,
\( \mathbb{ E} (Z-\mathbb{ E}Z )^2 = \mathbb{ E} \bigg( \sum \limits_{ i=1}^{ n} a_i (X_i – m_i ) \bigg)^2 = \mathbb{ E} \sum \limits_{ i=1}^{ n} a_i^2 (X_i – m_i )^2 = \sum \limits_{ i=1}^{ n} a_i^2 \sigma_i^2\) ,
because when \( j \neq k\), the cross terms in the expansion vanish:
\( \mathbb{ E}\big[ a_j (X_j-m_j)a_k (X_k-m_k)\big] = a_j a_k \mathbb{ E}(X_j-m_j)\,\mathbb{ E}(X_k-m_k) = 0 \) .


1-1 Prove that \( \mathbb{ E} e^{aW_t} \lt \infty \) .

We use the density of \( W_t \):
\( \begin{align}
\mathbb{ E} e ^{aW_t} &= \frac{ 1}{ \sqrt {2 \pi t}} \displaystyle\int_{ -\infty}^{ \infty} e^{ax} e ^{-\frac{ x^2}{ 2t} }\ dx \\
&= e^{\frac{ ta^2}{ 2} } \underbrace{ \frac{ 1}{ \sqrt {2 \pi t}} \displaystyle\int_{ -\infty}^{ \infty} e ^{-\frac{ (x-ta)^2}{ 2t} }\ dx}_{N(ta,t) = 1} \\
&= e^{\frac{ ta^2}{ 2} } \lt \infty
\end{align}\) .
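A minimal Monte Carlo check of this computation (the values of a and t are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
a, t, n_samples = 1.0, 1.0, 2_000_000

W_t = rng.standard_normal(n_samples) * np.sqrt(t)   # W_t ~ N(0, t)

print(np.exp(a * W_t).mean())    # Monte Carlo estimate of E exp(a W_t)
print(np.exp(a**2 * t / 2))      # closed form exp(t a^2 / 2) from the computation above
```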


1-2 Let \( (X_t)_{t \in [ 0, \infty )}\) stochastic process adapted to filtration \( \mathbb{ F} = (\mathcal{ F} _t)_{t \in [ 0, \infty )}\) ;
Assume that \( \forall\ 0 \leq s \lt t, \ \ \ X_t – X_s \text{ is independent from } \mathcal{ F}_s \) .
(a) Check that \( X_t – X_s\) is independent from \( X_s\) ;
(b) Show by induction that \( (X_t)_{t \in [ 0, \infty )}\) has independent increments.

(a)
It is clear that \( \mathbb{ P} ( \underbrace{X_t-X_s \in B_1}_{\text{ independent from } \mathcal{ F}_s },\ \underbrace{X_s \in B_2}_{\in \mathcal{ F}_s }) = \mathbb{ P}(X_t- X_s \in B_1) \mathbb{ P}(X_s \in B_2),\ \ \forall B_1, B_2 \in \mathcal{ B}(\mathbb{ R} ) \) .
(b)
The same argument, applied inductively: assuming \( X_{t_1}-X_{t_0},…,X_{t_{n-1}}-X_{t_{n-2}} \) are already independent,
\( \mathbb{ P} \big( \underbrace{X_{t_1}-X_{t_0} \in B_1, …, X_{t_{n-1}}-X_{t_{n-2}} \in B_{n-1}}_{ \in \mathcal{ F}_{t_{n-1}} },\ \underbrace{X_{t_n} - X_{t_{n-1}} \in B_n}_{\text{ independent from } \mathcal{ F}_{t_{n-1}} } \big) = \mathbb{ P} \big( X_{t_1}-X_{t_0} \in B_1, …, X_{t_{n-1}}-X_{t_{n-2}} \in B_{n-1} \big)\, \mathbb{ P} \big( X_{t_n}-X_{t_{n-1}} \in B_n \big) = \prod \limits_{ k=1}^{ n} \mathbb{ P} \big( X_{t_k}-X_{t_{k-1}} \in B_k \big) \) .


1-3 Calculate the conditional expectations, \( \forall\ 0\leq u \lt s \lt t\) :
(a) \( \mathbb{ E} [ W_t| \mathcal{ F}_t ] \) ;
(b) \( \mathbb{ E} [ W_t-W_s| \mathcal{ F}_s ] \) ;
(c) \( \mathbb{ E} [ (W_t-W_s)(W_s- W_u)| \mathcal{ F}_s ] \) .

Remember first to check the conditional expectation's integrability condition.
(a)
Condition: \( \mathbb{ E} |W_t| \leq \sqrt {\mathbb{ E} W_t^2 } = \sqrt{t} \lt \infty \) .
Therefore, \( \mathbb{ E} [ W_t| \mathcal{ F}_t ] = W_t \) , because \( W_t \) is \( \mathcal{ F}_t \)-measurable.
(b)
Condition: \( \mathbb{ E} | W_t-W_s| \leq \sqrt {\mathbb{ E} ( W_t-W_s)^2} = \sqrt{t-s} \lt \infty\) .
Therefore, \( \mathbb{ E} [ W_t-W_s| \mathcal{ F}_s ] = \mathbb{ E} [ W_t-W_s ] = \mathbb{ E} W_{t-s} = 0\) , using independence of the increment from \( \mathcal{ F}_s \) and homogeneity.
(c)
Condition: \( \mathbb{ E} |(W_t-W_s)(W_s- W_u)| \leq \big (\mathbb{ E} (W_t-W_s)^2 \big)^\frac{ 1}{ 2}\big (\mathbb{ E} (W_s-W_u)^2 \big)^\frac{ 1}{ 2} = \sqrt{(t-s)(s-u)} \) .
Therefore: \( \mathbb{ E} [ (W_t-W_s)(W_s- W_u)| \mathcal{ F}_s ] = (W_s - W_u ) \mathbb{ E} [ (W_t-W_s)| \mathcal{ F}_s ] = 0\) ; here we first use “take out what is known” (\( W_s - W_u \) is \( \mathcal{ F}_s \)-measurable), then independence as in (b).


2-1 Check Brownian Motion is a martingale.

We need to check the three conditions (measurability, finite expectation and the conditional-expectation identity) for a martingale.
(C1) \( W_t \text{ is } \mathcal{ F}_t \) -measurable, according to definition.
(C2) By Cauchy-Schwarz inequality \( \mathbb{ E} |W_t| \leq \sqrt{\mathbb{ E}W_t^2 } = \sqrt{t} \lt \infty\) .
(C3) \( \forall 0 \leq s \leq t \) :
\( \mathbb{ E} [ W_t | \mathcal{ F_s} ] = \mathbb{E}[ (W_t – W_s) + W_s| \mathcal{ F_s} ] = \mathbb{E}[ \underbrace{ W_t – W_s| \mathcal{ F_s}}_{ \text{ independent} } ] + \mathbb{E}[ \underbrace{ W_s| \mathcal{ F_s}} _{\text{ measurable} } ] = \underbrace{ \mathbb{ E}[ W_t – W_s]}_{\text{ =0} } + W_s = W_s \) .


2-2 Let \( (L_t)_{t \in [ 0, \infty )}\) be a simple process; compute the Itô integral \( \displaystyle\int_{ 0}^{ 3} L_s\ dW_s \) for the following (remember to check the conditions):
(a) \( L_t = W_2\, \mathbb{1}_{( 2,10 ]}(t) \) ;
(b) \( L_t = \sin(W_2)\, \mathbb{1}_{( 2,10 ]}(t) \) ;
(c) \( L_t = \min \{ |W_2|, 3\}\, \mathbb{1}_{( 1,10 ]}(t) \) .

The two conditions are 1) \( \xi_i \) is \( \mathcal{ F}_{t_i} \) -measurable; 2) \( \sup |\xi_i| \lt \infty\) .
(a)
\( \xi_{i-1} = W_2\) is not bounded, so \( L \notin \mathcal{ L}_0 \) and the simple-process formula does not apply.
(b)
\( \xi_{i-1} =\sin( W_2) \) is bounded and \( \mathcal{ F}_2 \) -measurable, therefore:
\( \displaystyle\int_{ 0}^{ 3} \sin (W_2) \mathbb{1} _{( 2,10 ]}(s) d W_s = \sum \limits_{ i=1}^{ n} \xi_{i-1} ( W_{t_i \wedge 3 } - W_ {t_{i-1} \wedge 3}) = \sin (W_2) (W_3- W_2) \) .
(c)
\( \min \{ |W_2|, 3\} \) is bounded, but it is not \( \mathcal{ F}_1 \)-measurable, so \( L \) is not adapted on \( (1,2] \) and is not a simple process.


2-4 Find out \( \mathbb{ E} \bigg( \displaystyle\int_{ 0}^{ 3} L_s\ dW_s \bigg)^2 \) for
(a) \( L_t = 2\, \mathbb{1} _{( 0,1 ]}(t) - \mathbb{1} _{( 2, 3]}(t)\) ;
(b) \( L_t = \mathbb{1} _{[ 0,\infty )}(W_2)\, \mathbb{1} _{( 2, 10]}(t) \) .

Recall Itô's isometry: \( \mathbb{ E} \bigg( \displaystyle\int_{ 0}^{ T} L_s\ dW_s \bigg)^2 = \mathbb{ E} \displaystyle\int_{ 0}^{ T} L_s^2\ ds \) .
(a)
We notice that \( L \) does not depend on \( W \) , so the expectation is not necessary:
\( \displaystyle\int_{ 0}^{ 3} L_s^2 \ ds = \displaystyle\int_{ 0}^{ 3} 4\, \mathbb{1} _{( 0,1 ]}(s) + \mathbb{1} _{( 2,3 ]}(s)\ ds = 4+1 =5\) .
(b)
We notice that \( \mathbb{1}_A ^2= \mathbb{1}_A \),
\( \mathbb{ E} \displaystyle\int_{ 0}^{ 3} L_s^2 \ ds = \mathbb{ E} \displaystyle\int_{ 0}^{ 3} \mathbb{1} _{[ 0,\infty )}(W_2) \mathbb{1} _{( 2, 10]}(s) \ ds = \mathbb{ E}\, \mathbb{1} _{[ 0,\infty )}(W_2) \displaystyle\int_{ 2}^{ 3} \mathbb{1} _{( 2, 10]}(s) \ ds = \frac{ 1}{ 2} \) ,
because \( W_2\) is normally distributed, therefore \( \mathbb{ P}(W_2 \geq 0) = \frac{ 1}{ 2} \) .
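A minimal Monte Carlo check of part (b), using that the simple-process integral reduces to \( \mathbb{1}_{[0,\infty)}(W_2)\,(W_3 - W_2) \) on \( [0,3] \) (sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n_paths = 1_000_000

W2 = rng.standard_normal(n_paths) * np.sqrt(2.0)   # W_2 ~ N(0, 2)
dW23 = rng.standard_normal(n_paths)                 # W_3 - W_2 ~ N(0, 1), independent of W_2
I = (W2 >= 0.0) * dW23                              # the Ito integral for part (b)

print((I**2).mean())    # the Ito isometry predicts E I^2 = 1/2
```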


2-5 The Wiener integral is the Itô integral of a deterministic simple integrand \( L_t = \sum \limits_{ k=1}^{ n} b_{k-1} \mathbb{1} _{( t_{k-1}, t_k ]} (t) \) with constants \( b_{k-1} \in \mathbb{ R} \) ; check that:
expectation \( \mathbb{ E} \displaystyle\int_{ 0}^{T } L_t\ d W_t = 0 \) and variance \( \mathbb{ E} \bigg[ \big( \displaystyle\int_{ 0}^{ T} L_t\ dW_t \big)^2 \bigg ] = \displaystyle\int_{ 0}^{ T} (L_t)^2\ dt \) ;

we use the properties of “sum of independent normal distributed random variables”, to get:
\( \mathbb{ E} \displaystyle\int_{ 0}^{T } L_t\ d W_t = \sum \limits_{ k=1}^{ n} b_{k-1} \mathbb{ E} (W_{t_k} – W_{t_{k-1}}) =0\) ;
\( \mathbb{ E} \bigg[ \big( \displaystyle\int_{ 0}^{ T} L_t\ dW_t \big)^2 \bigg ] = \sum \limits_{ k=1}^{ n} b_{k-1}^2 \mathbb{ E} (W_{t_k} - W_{t_{k-1}})^2 = \sum \limits_{ k=1}^{ n} b_{k-1}^2 ({t_k} - {t_{k-1}}) = \displaystyle\int_{ 0}^{ T} (L_t)^2\ dt \) .


3-1 Apply Itô’s Formula to the following,
(a) \( W_t^2\) ;
(b) \( t \sin (W_t)\) ;
(c) \( t^2\) ;
(d) \( \log (1+ W_t^2)\) .

(a)
\( W_t^2 = f(t,W_t)= W_0^2 + \displaystyle\int_{ 0}^{ t} 0\ ds + \displaystyle\int_{ 0}^{ t} 2W_s\ d W_s + \frac{ 1}{ 2} \displaystyle\int_{ 0}^{ t} 2\ ds = 2 \displaystyle\int_{ 0}^{ t} W_s\ dW_s + t\) .
(b)
\( t\sin (W_t) = 0 + \displaystyle\int_{ 0}^{ t} \sin(W_s) \ ds + \displaystyle\int_{ 0}^{ t} s \cos (W_s)\ dW_s + \frac{ 1}{ 2}\displaystyle\int_{ 0}^{ t} -s \sin (W_s) \ ds = \displaystyle\int_{ 0}^{ t} (1-\frac{ s}{ 2} ) \sin W_s\ ds +\displaystyle\int_{ 0}^{ t} s \cos W_s\ dW_s\) .
(c)
\( t^2 = 0 + \displaystyle\int_{ 0}^{ t} 2s\ ds + 0 + 0 = 2 \displaystyle\int_{ 0}^{ t} s\ ds\) .
(d)
\( \log (1+W_t^2) = 0 + 0 + \displaystyle\int_{ 0}^{ t} \frac{ 2W_s}{ 1+ W_s^2} \ dW_s + \frac{ 1}{ 2} \displaystyle\int_{ 0}^{ t} 2 \frac{ 1- W_s^2}{ (1+W_s^2)^2} \ ds = 2\displaystyle\int_{ 0}^{ t} \frac{ W_s}{ 1+ W_s^2} \ dW_s + \displaystyle\int_{ 0}^{ t} \frac{ 1- W_s^2}{ (1+W_s^2)^2} \ ds\)  .


3-3 Assume \( \sigma \gt 0,\ b \in \mathbb{ R} \) ; find the functions \( g:\mathbb{ R}^2 \to \mathbb{ R} \) that make the following process a martingale:
\( X_t = e^{\sigma W_t +bt – g(\sigma, b)t}\) .

By Itô’s Formula, \( X_t = 1 + \displaystyle\int_{ 0}^{ t} X_s (b-g (\sigma, b))\ ds + \displaystyle\int_{ 0}^{ t} X_s \sigma \ d W_s + \frac{ 1}{ 2}\displaystyle\int_{ 0}^{ t} X_s \sigma^2\ ds\) ,
\( = \underbrace { 1+ \sigma \displaystyle\int_{ 0}^{ t} X_s \ dW_s }_{\text{ already a martingale} } + \underbrace { \displaystyle\int_{ 0}^{ t} (b+ \frac{ \sigma ^2}{ 2} - g(\sigma, b)) X_s \ ds}_{\text{ must be }0 }\) .
Therefore, \( g(\sigma, b) = b+ \frac{ \sigma ^2}{ 2} \).
One also needs \( \mathbb{ E}X_t^2 \lt \infty \) (so that the stochastic integral is a true martingale); this follows from Exercise 1-1.


(4-2) Prove: for every interval \( I = (a,b)\) , \( \mathbb{ E} [ \mathbb{1} _{\{W_t-W_s+c(t-s) \in I\}}\, e^{-c(W_t-W_s)-c^2 \frac{ t-s}{ 2} } ] = \mathbb{ E} [ \mathbb{1} _{\{ W_t - W_s \in I \}} ] \) .

We note that \( X := W_t - W_s\) is distributed \( N (0, t-s)\) .
Starting from the left-hand side and writing the expectation as an integral over x:
\( = \frac{ 1}{ \sqrt{2 \pi (t-s)}} \displaystyle\int \limits_{ -\infty}^{ \infty} \mathbb{1} _{\{ x + c(t-s) \in I\}} e^{-cx-c^2 \frac{ t-s}{ 2} } e^{ -\frac{ x^2}{ 2(t-s)} }\ dx \)
\( = \frac{ 1}{ \sqrt{2 \pi (t-s)}} \displaystyle\int \limits_{ -\infty}^{ \infty} \mathbb{1} _{\{ x + c(t-s) \in I\}} e^{ -\frac{ x^2 + 2c(t-s)x + c^2 (t-s)^2}{ 2(t-s)} } \ dx \) .
Now substitute \( y= x + c (t-s)\) (so \( dy = dx \)),
\( = \frac{ 1}{ \sqrt{2 \pi (t-s)}} \displaystyle\int \limits_{ -\infty}^{ \infty} \mathbb{1} _{\{ y\in I\}} e^{ -\frac{ y^2}{ 2(t-s)} } \ dy \)
\( = \mathbb{ E}\, \mathbb{1}_{\{W_t-W_s \in (a,b)\}} \) .


5-2 Let \( t_i := \frac{ iT}{ N},\ X_N := \sum \limits_{ i=1}^{ N} (W_{t_i} - W_{t_{i-1}})^2 \) ;
find the constant \( X \in \mathbb{ R} \) such that \( \lim\limits_{ N \to \infty} \mathbb{ E} [ (X_N - X)^2 ] =0 \) . Hint: \( \mathbb{ E}W_t^4=3t^2 \) .

First we calculate \( \mathbb{ E}X_N,\ \mathbb{ E}X_N^2 \) ,
\( \mathbb{ E} X_N = \mathbb{ E} \sum \limits_{ i=1}^{ N} (W_{t_i} - W_{t_{i-1}})^2  = \sum \limits_{ i=1}^{ N} \mathbb{ E} (W_{t_i} - W_{t_{i-1}})^2 = \sum \limits_{ i=1}^{ N} \mathbb{ E} W_{t_i-t_{i-1}}^2 = \sum \limits_{ i=1}^{ N} (t_i - t_{i-1}) = T\) ;
\( \begin{align} \mathbb{ E} X_N^2 &= \mathbb{ E} \big( \sum \limits_{ i=1}^{ N} (W_{t_i} - W_{t_{i-1}})^2 \big) ^2  =\mathbb{ E} \sum \limits_{ i=1}^{ N} (W_{t_i} - W_{t_{i-1}})^4 + \underbrace {\mathbb{ E} \sum \limits_{ i \neq j} (W_{t_i} - W_{t_{i-1}})^2(W_{t_j} - W_{t_{j-1}})^2 }_{\text{cross terms of the square expansion: }N^2-N \text{ of them} } \\ &= N \cdot 3 (t_i - t_{i-1})^2 + (N^2-N) \cdot (t_i - t_{i-1})(t_j - t_{j-1}) = T^2 + \frac{ 2T^2}{ N}\end{align} \) .
Then,
\( \lim\limits_{ N \to \infty} \mathbb{ E} [ (X_N - X)^2 ]  = \lim\limits_{ N \to \infty} \big( \mathbb{ E}X_N^2 - 2X \mathbb{ E}X_N +X^2 \big) = \lim\limits_{ N \to \infty} \big( T^2 + \frac{ 2T^2}{ N} - 2XT + X^2 \big)  = (T-X)^2 =0\) .
Therefore, \( X=T\) .
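A minimal simulation sketch of this convergence (path counts and grid sizes are arbitrary): the realised quadratic variation \( X_N \) concentrates around T, with \( \mathbb{E}[(X_N - T)^2] = 2T^2/N \to 0 \).

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_paths = 2.0, 2_000

for N in (10, 100, 1_000, 10_000):
    dW = rng.standard_normal((n_paths, N)) * np.sqrt(T / N)   # Brownian increments on a grid of N steps
    X_N = np.sum(dW**2, axis=1)                               # realised quadratic variation
    print(N, np.mean((X_N - T)**2))                           # ~ 2 T^2 / N, tending to 0
```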


Define the cash-or-nothing call option as \( f_{call} = C\, \mathbb{1}_{\{x \lt K\}}, \ C \gt 0 \) ,
Assume the Black Scholes model, i.e.
\( S_0 (t) = e^{rt} \\ S(t) = xe^{\sigma W_t + rt – \sigma^2 \frac{ t}{ 2} }\) .
(a) check that \( \mathbb{ P} \) is already a martingale measure (i.e. the discounted price is a \( \mathbb{ P} \)-martingale);
(b) Find the pricing formula, i.e. \( P_0 (x) = \mathbb{ E} \frac{ f_{call} ( S(T) ) }{ S_0 (T)} \) .

(a)
It is enough to show that \( \tilde{S}(t):= \frac{ S(t)}{ S_0(t)}= xe^{\sigma W_t – \sigma^2 \frac{ t}{ 2} } \) is a martingale.
(C1) clearly \( \tilde {S}(t)\) is \( \mathcal{ F}_t \) -measurable (adapted);
(C2) \( \mathbb{ E}|\tilde{S} (t)| \lt \infty\) is clear because \( \mathbb{ E} e^{\sigma W_t} = e^{\sigma^2 \frac{ t}{ 2}} \lt \infty \) (Exercise 1-1);
(C3) \( \forall s \leq t,\\ \mathbb{ E}[ \tilde{S} (t) | \mathcal{ F}_s ] \overset{(1)}{= } \tilde{S}(s) \mathbb{ E} [ \frac{ \tilde{S}(t)}{ \tilde{S}(s)} | \mathcal{ F}_s ] \\ = \tilde{S}(s) \mathbb{ E} [ e^{\sigma (W_t - W_s) - \sigma ^2 \frac{ t-s}{ 2} }| \mathcal{ F}_s ] \\ \overset{(2)}{= } \tilde{S}(s) \mathbb{ E} [ e^{\sigma (W_t - W_s) - \sigma ^2 \frac{ t-s}{ 2} } ] \overset{( 3)}{= } \tilde{S}(s)\) .
Explanation: (1) multiply and divide by \( \tilde{S}(s) \) and use “take out what is known”; (2) independence of the increment from \( \mathcal{ F}_s \); (3) \( \mathbb{ E} e^{\sigma (W_t-W_s)} = e^{\sigma^2 \frac{ t-s}{ 2}} \) as in Exercise 1-1.
(b)
\( \begin{align} P_0 (x) &= \mathbb{ E} \frac{ f_{call} ( S(T) ) }{ S_0 (T)} = \frac{ C}{ e^{rT}} \mathbb{ P} \big( S(T) \lt K \big) \\ &= \frac{ C}{ e^{rT}} \mathbb{ P} \big( e^{ \sigma W_T + rT - \sigma^2 \frac{ T}{ 2} } \lt \frac{ K}{ x} \big) \\ &= \frac{ C}{ e^{rT}} \mathbb{ P} \big( { \sigma W_T + rT - \sigma^2 \frac{ T}{ 2} } \lt \ln K - \ln x \big) \\ &= \frac{ C}{ e^{rT}} \mathbb{ P} \big( W_T \lt \frac{\ln K - \ln x - rT + \sigma^2 \frac{ T}{ 2}}{\sigma} \big) \\ &\overset{( 1)}{= } \frac{ C}{ \sqrt{2\pi}\, e^{rT}} \displaystyle\int \limits_{ -\infty}^{ d} e^{-\frac{ z^2}{ 2} }\ dz, \text{ where } d = \frac{\ln K - \ln x - rT + \sigma^2 \frac{ T}{ 2}}{\sigma \sqrt {T}}\end{align} \) .
We note that in step (1) we used \( \frac{ W_T}{ \sqrt{T}} \sim N (0,1) \) .
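A minimal Monte Carlo check of this pricing formula (all parameter values are hypothetical):

```python
import numpy as np
from math import erf, exp, log, sqrt

rng = np.random.default_rng(10)
x, K, r, sigma, T, C = 100.0, 95.0, 0.02, 0.25, 1.0, 10.0
n_paths = 2_000_000

# Simulate S(T) = x exp(sigma W_T + rT - sigma^2 T/2) and average the discounted payoff C 1_{S(T) < K}.
W_T = rng.standard_normal(n_paths) * sqrt(T)
S_T = x * np.exp(sigma * W_T + (r - 0.5 * sigma**2) * T)
mc_price = exp(-r * T) * C * np.mean(S_T < K)

# Closed form from (b): P_0(x) = C e^{-rT} Phi(d).
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
d = (log(K) - log(x) - r * T + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
print(mc_price, C * exp(-r * T) * Phi(d))   # the two values should nearly agree
```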
