"Oranje boven" LOL
"As a Dutch patriot, you should be all over Python!"

I was the first Dutch C++ patriot in 1989. BTW I hate petty nationalism. The Danes, Norsemen and Swedes make better compilers!
I agree. One optimises a Python loop by removing the loop, e.g. change

sum_x = 0
for x in xs:
    sum_x += x

to

sum_x = sum(xs)
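A quick way to see the difference is to time both versions (a small sketch; absolute numbers vary by machine):

```python
import timeit

xs = list(range(100_000))

def loop_sum():
    # Explicit Python-level loop
    s = 0
    for x in xs:
        s += x
    return s

t_loop = timeit.timeit(loop_sum, number=100)
t_builtin = timeit.timeit(lambda: sum(xs), number=100)
print(t_loop, t_builtin)  # the built-in C-level reduction is typically several times faster
```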
sumPriceT = 0
for i in range(NSIM):
    VOld = S_0
    for j in range(NT):
        dW = random.normalvariate(0, 1)
        VNew = VOld + (dt*(r-d)*VOld) + (sqrk * sig*VOld * dW)
        VOld = VNew
    sumPriceT += Payoff(VNew, K)
price = math.exp(-r * T) * sumPriceT / NSIM
print(price)
You replaced nationalism by regionalism - or pan-Scandinavism.
Use NumPy. What is needed is loop parallelism in Python and a reduction variable (as in OpenMP).
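For the Monte Carlo loop above, the OpenMP-style pattern (parallel loop plus a per-worker reduction variable) can be sketched as follows. The function names and chunking scheme are my own illustration; note that CPython's GIL means threads give no real speedup for a pure-Python inner loop, so in practice one would switch the executor to processes, but the reduction structure is the same:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def partial_sum(n, S_0, NT, dt, r, d, sqrk, sig, K, seed):
    # One worker: simulate n paths with a private RNG and accumulate
    # a private payoff sum (the "reduction variable").
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        V = S_0
        for _ in range(NT):
            V += dt*(r - d)*V + sqrk*sig*V*rng.normalvariate(0, 1)
        s += max(V - K, 0.0)
    return s

def pricer_parallel(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K, nworkers=4):
    chunk = NSIM // nworkers
    with ThreadPoolExecutor(nworkers) as ex:
        futures = [ex.submit(partial_sum, chunk, S_0, NT, dt, r, d,
                             sqrk, sig, K, seed) for seed in range(nworkers)]
        total = sum(f.result() for f in futures)  # combine the partial sums
    return math.exp(-r*T) * total / (chunk * nworkers)

print(pricer_parallel(5, 4000, 100, 0.01, 0.01, 0, math.sqrt(0.01), 0.05, 1.0, 5))
```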
import random
import math
import numpy
import time

def pricerCuch(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K):
    sumPriceT = 0
    for i in range(NSIM):
        VOld = S_0
        for j in range(NT):
            dW = random.normalvariate(0, 1)
            VNew = VOld + (dt*(r-d)*VOld) + (sqrk * sig*VOld * dW)
            VOld = VNew
        # Replaced Payoff by European call payoff.
        sumPriceT += max(VNew - K, 0)
    return math.exp(-r * T) * sumPriceT / NSIM

def pricerNumpy(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K):
    V = numpy.full(NSIM, S_0, dtype=float)
    for j in range(NT):
        dW = numpy.random.randn(NSIM)
        V = V + (dt*(r-d)*V) + (sqrk * sig*V * dW)
    sumPriceT = numpy.sum(numpy.maximum(V - K, 0))
    return math.exp(-r * T) * sumPriceT / NSIM

S_0 = 5
K = 5
sig = 0.05
NT = 365
T = 1.
dt = T / NT
r = 0.01
NSIM = 10000
sqrk = math.sqrt(dt)
d = 0

time_start = time.time()
p = pricerCuch(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K)
time_end = time.time()
print('Price (Cuch) = {}, time = {}'.format(p, time_end - time_start))

time_start = time.time()
p = pricerNumpy(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K)
time_end = time.time()
print('Price (Numpy) = {}, time = {}'.format(p, time_end - time_start))
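Going one step further, for GBM the time loop can be removed as well: the terminal value depends only on the sum of the increments, so all paths and steps fit in a couple of array operations. This sketch samples the exact lognormal terminal distribution rather than the Euler scheme above, so the price will differ slightly from the path-wise versions:

```python
import math
import numpy as np

S_0, K, sig, r, d, T = 5.0, 5.0, 0.05, 0.01, 0.0, 1.0
NT, NSIM = 365, 10000
dt = T / NT

rng = np.random.default_rng(42)
dW = math.sqrt(dt) * rng.standard_normal((NSIM, NT))  # all increments at once
# Exact GBM terminal values from the summed increments
ST = S_0 * np.exp((r - d - 0.5*sig*sig)*T + sig*dW.sum(axis=1))
price = math.exp(-r*T) * np.maximum(ST - K, 0.0).mean()
print(price)
```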
"""
# TestBarrierOptionPricerClassic.cpp
#
# Discussion example based on Clewlow and Strickland.
# The code is hard-wired and not flexible.
#
# Additive binomial valuation of an American down-and-out.
#
# (C) Datasim Education BV 2014-2018
#
"""
import random
import math
import numpy as np
import time
def price(K, T, S, sig, r, N):  # N = number of intervals
    # Initialise coefficients based on the Trigeorgis approach
    dt = T/N
    nu = r - 0.5*sig*sig

    # Up and down jumps
    dxu = math.sqrt(sig*sig*dt + (nu*dt)*(nu*dt))
    dxd = -dxu

    # Corresponding probabilities
    pu = 0.5 + 0.5*(nu*dt/dxu)
    pd = 1.0 - pu

    # Precompute constants
    disc = math.exp(-r*dt)
    dpu = disc*pu
    dpd = disc*pd
    edxud = math.exp(dxu - dxd)
    edxd = math.exp(dxd)

    # Initialise asset prices at maturity
    St = np.full(N+1, 0.0, dtype=float)
    St[0] = S*math.exp(N*dxd)
    for j in range(1, N+1):
        St[j] = edxud*St[j-1]
    # print(St)  # debug output

    # Option value at maturity (t = N)
    C = np.full(N+1, 0.0, dtype=float)
    for j in range(0, N+1):
        C[j] = max(K - St[j], 0.0)
    # print(C)  # debug output

    # Backwards induction phase (equivalent: for i in range(N-1, -1, -1))
    i = N - 1
    while i >= 0:
        j = 0
        while j <= i:
            C[j] = dpd*C[j] + dpu*C[j+1]
            St[j] = St[j]/edxd
            # Early exercise condition, Brennan-Schwartz condition
            C[j] = np.maximum(C[j], K - St[j])
            j = j + 1
        i = i - 1

    # Option value at t = 0
    return C[0]

# Null test
K = 65.0
S = 60.0
T = 0.25
r = 0.08
q = 0.0
sig = 0.3
N = 800

optionPrice = price(K, T, S, sig, r, N)
print(optionPrice)
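The inner j-loop of the tree can also be replaced by NumPy slicing, one rollback per time step. A sketch with the same Trigeorgis coefficients (the name price_vec is mine):

```python
import math
import numpy as np

def price_vec(K, T, S, sig, r, N):
    # Trigeorgis tree, inner loop vectorised with array slices
    dt = T / N
    nu = r - 0.5*sig*sig
    dxu = math.sqrt(sig*sig*dt + (nu*dt)**2)
    dxd = -dxu
    pu = 0.5 + 0.5*nu*dt/dxu
    disc = math.exp(-r*dt)
    dpu, dpd = disc*pu, disc*(1.0 - pu)
    # Asset prices and put payoffs at maturity
    St = S * np.exp(N*dxd + np.arange(N + 1)*(dxu - dxd))
    C = np.maximum(K - St, 0.0)
    for i in range(N - 1, -1, -1):
        C = dpd*C[:-1] + dpu*C[1:]   # discounted rollback, one step
        St = St[:-1]*math.exp(dxu)   # asset prices at step i
        C = np.maximum(C, K - St)    # early exercise (American put)
    return C[0]

print(price_vec(65.0, 0.25, 60.0, 0.3, 0.08, 800))
```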
"Nice! A good experiment would be to compare the respective functions wrt accuracy and performance and try to come up with guidelines on 'best practices' (to use an awful phrase)."

It's the same algorithm, so the accuracy should be the same. You can calculate the sample variance if you want.
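On the accuracy side, the Monte Carlo prices can also be checked against the closed-form Black-Scholes value of the call (a standard-library sketch; the parameters match the S_0 = K = 5 example above):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5*(1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sig, T):
    # Black-Scholes European call, zero dividend yield
    d1 = (math.log(S/K) + (r + 0.5*sig*sig)*T) / (sig*math.sqrt(T))
    d2 = d1 - sig*math.sqrt(T)
    return S*norm_cdf(d1) - K*math.exp(-r*T)*norm_cdf(d2)

print(bs_call(5.0, 5.0, 0.01, 0.05, 1.0))  # reference value for the MC prices
```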
"BTW you use a hard-coded payoff (not even a Payoff function)."

Yes, for clarity of illustration. It's just a pedagogical example.
It's just that Payoff() introduces another level of indirection.
OK. I removed it from both functions.
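If the flexibility is wanted anyway, the payoff can be passed as a callable; in the NumPy version the indirection then costs one vectorised call per pricing rather than one Python call per path. A sketch (the names are my own):

```python
import math
import numpy

def pricer_numpy_payoff(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, payoff):
    V = numpy.full(NSIM, float(S_0))
    rng = numpy.random.default_rng(0)
    for _ in range(NT):
        dW = rng.standard_normal(NSIM)
        V = V + dt*(r - d)*V + sqrk*sig*V*dW
    return math.exp(-r*T) * payoff(V).mean()  # one payoff call for all paths

call_payoff = lambda V: numpy.maximum(V - 5.0, 0.0)  # European call, K = 5
print(pricer_numpy_payoff(5.0, 10000, 365, 1/365, 0.01, 0.0,
                          math.sqrt(1/365), 0.05, 1.0, call_payoff))
```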