### Re: Python tricks

Posted: **August 16th, 2019, 12:44 pm**

Posted: **August 16th, 2019, 1:33 pm**

As a Dutch patriot, you should be all over Python!

Posted: **August 16th, 2019, 1:35 pm**

> As a Dutch patriot, you should be all over Python!

I was the first Dutch C++ patriot in 1989. BTW I hate petty nationalism. The Danes, Norsemen and Swedes make better compilers!

Posted: **August 16th, 2019, 1:38 pm**

I agree. One optimises a Python loop by removing the loop:

```
sum_x = 0
for x in xs:
    sum_x += x
```

to

```
sum_x = sum(xs)
```

However, my example is slightly different. I don't have a list 'xs' and my reduction variable is constructed differently.
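If the values are generated on the fly rather than stored in a list, the same trick still works with a generator expression; a minimal sketch (the function `f` and the bound `n` are placeholders for whatever the loop body actually computes):

```python
import math

# Hypothetical per-step value; stands in for whatever the loop body computes.
def f(i):
    return 1.0 / (i * i)

n = 1000

# Explicit loop with a reduction variable:
total = 0.0
for i in range(1, n + 1):
    total += f(i)

# Same reduction without the loop: a generator expression, no list built.
# math.fsum gives a compensated (more accurate) floating-point sum.
total_gen = math.fsum(f(i) for i in range(1, n + 1))
```

So the reduction variable does not require a pre-existing list `xs`; any iterable of terms will do.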

Posted: **August 16th, 2019, 1:48 pm**


```
import math
import random

sumPriceT = 0.0
for i in range(NSIM):  # all NSIM paths (range(1, NSIM) would drop one from the average)
    VOld = S_0
    for j in range(NT):
        dW = random.normalvariate(0, 1)
        VNew = VOld + (dt*(r-d)*VOld) + (sqrk * sig*VOld * dW)
        VOld = VNew
    sumPriceT += Payoff(VNew, K)
price = math.exp(-r * T) * sumPriceT / NSIM
print(price)
```

Posted: **August 16th, 2019, 8:33 pm**

> I was the first Dutch C++ patriot in 1989. BTW I hate petty nationalism. The Danes, Norsemen and Swedes make better compilers!

You replaced nationalism by regionalism - or pan-Scandinavism.

Posted: **August 16th, 2019, 9:01 pm**

Use Numpy. What is needed is loop parallelism in Python and a reduction variable (as in OpenMP).


```
import random
import math
import numpy
import time

def pricerCuch(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K):
    sumPriceT = 0.0
    for i in range(NSIM):  # all NSIM paths (range(1, NSIM) would drop one from the average)
        VOld = S_0
        for j in range(NT):
            dW = random.normalvariate(0, 1)
            VNew = VOld + (dt*(r-d)*VOld) + (sqrk * sig*VOld * dW)
            VOld = VNew
        # Replaced Payoff by European call payoff.
        sumPriceT += max(VNew - K, 0)
    return math.exp(-r * T) * sumPriceT / NSIM

def pricerNumpy(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K):
    V = numpy.full(NSIM, S_0, dtype=float)
    for j in range(NT):
        dW = numpy.random.randn(NSIM)
        V = V + (dt*(r-d)*V) + (sqrk * sig*V * dW)
    sumPriceT = numpy.sum(numpy.maximum(V - K, 0))
    return math.exp(-r * T) * sumPriceT / NSIM

S_0 = 5
K = 5
sig = 0.05
NT = 365
T = 1.
dt = T / NT
r = 0.01
NSIM = 10000
sqrk = math.sqrt(dt)
d = 0

time_start = time.time()
p = pricerCuch(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K)
time_end = time.time()
print('Price (Cuch) = {}, time = {}'.format(p, time_end - time_start))

time_start = time.time()
p = pricerNumpy(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K)
time_end = time.time()
print('Price (Numpy) = {}, time = {}'.format(p, time_end - time_start))
```

Price (Cuch) = 0.12632649206449126, time = 2.5915353298187256

Price (Numpy) = 0.1263925891096993, time = 0.06981277465820312

Posted: **August 18th, 2019, 11:59 am**

Nice! A good experiment would be to compare the respective functions with respect to accuracy and performance and try to come up with guidelines on 'best practices' (to use an awful phrase). I am leading up to porting PDE models from C++ to Python, because using Python for PDE eases the transition for non-programmers.

BTW you use a hard-coded payoff (not even a Payoff function). More generally, a wrapper-pattern (FP-style) payoff function is good, but I have not investigated its performance.
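A sketch of what such a wrapper-pattern pricer might look like, with the payoff passed in as a plain function (the names `mc_price`, `call` and `put` are illustrative, not from the posted code):

```python
import math
import numpy

def mc_price(payoff, S_0, NSIM, NT, dt, r, d, sqrk, sig, T):
    # Vectorised Euler scheme; `payoff` is any function of the terminal price array.
    rng = numpy.random.default_rng(42)
    V = numpy.full(NSIM, S_0, dtype=float)
    for _ in range(NT):
        dW = rng.standard_normal(NSIM)
        V = V + dt * (r - d) * V + sqrk * sig * V * dW
    return math.exp(-r * T) * payoff(V).mean()

# Payoffs are just closures; nothing is hard-coded inside the pricer.
call = lambda K: (lambda V: numpy.maximum(V - K, 0.0))
put = lambda K: (lambda V: numpy.maximum(K - V, 0.0))

T, NT = 1.0, 365
dt = T / NT
p = mc_price(call(5.0), S_0=5.0, NSIM=10000, NT=NT, dt=dt,
             r=0.01, d=0.0, sqrk=math.sqrt(dt), sig=0.05, T=T)
```

The indirection costs one function call per simulation batch, not per path, because the payoff is applied to the whole NumPy array at once.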


Posted: **August 18th, 2019, 12:04 pm**

I have taken a couple of your ideas on board when porting C++ to Python for lattice models. The performance is quite good.

I have used while loops but I am not sure if it breaks Python Scripture.



```
"""
# TestBarrierOptionPricerClassic.cpp
#
# Discussion example based on Clewlow and Strickland.
# The code is hard-wired and not flexible.
#
# Additive binomial valuation of an American down-and-out.
#
# (C) Datasim Education BV 2014-2018
#
"""
import math
import numpy as np

def price(K, T, S, sig, r, N):  # N = number of intervals
    # Initialise coefficients based on the Trigeorgis approach
    dt = T/N
    nu = r - 0.5*sig*sig
    # Up and down jumps
    dxu = math.sqrt(sig*sig*dt + (nu*dt)*(nu*dt))
    dxd = -dxu
    # Corresponding probabilities
    pu = 0.5 + 0.5*(nu*dt/dxu)
    pd = 1.0 - pu
    # Precompute constants
    disc = math.exp(-r*dt)
    dpu = disc*pu
    dpd = disc*pd
    edxud = math.exp(dxu - dxd)
    edxd = math.exp(dxd)
    # Initialise asset prices at maturity
    St = np.full(N+1, 0.0, dtype=float)
    St[0] = S*math.exp(N*dxd)
    for j in range(1, N+1):
        St[j] = edxud*St[j-1]
    # Option value at maturity (t = N)
    C = np.full(N+1, 0.0, dtype=float)
    for j in range(0, N+1):
        C[j] = max(K - St[j], 0.0)
    # Backwards induction phase
    # for i in range (N-1, 1, -1): ??
    i = N-1
    while i >= 0:
        j = 0
        while j <= i:
            C[j] = dpd*C[j] + dpu*C[j+1]
            St[j] = St[j]/edxd
            # Early exercise condition (Brennan-Schwartz)
            C[j] = max(C[j], K - St[j])
            j = j + 1
        i = i - 1
    # Value at the root node
    return C[0]

# Null test
K = 65.0
S = 60.0
T = 0.25
r = 0.08
q = 0.0
sig = 0.3
N = 800

optionPrice = price(K, T, S, sig, r, N)
print(optionPrice)
```
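On the while-loop question: both inner loops translate directly into `for` loops over `range` with a negative step, which is the more idiomatic form. A small self-contained sketch of just the index pattern (no pricing, only the triangle traversal):

```python
# Backward induction indices with for-loops:
# range(N - 1, -1, -1) counts N-1, N-2, ..., 0 (note the stop is -1, not 1).
N = 4
visited = []
for i in range(N - 1, -1, -1):
    for j in range(0, i + 1):
        visited.append((i, j))

# Equivalent while-loop version, for comparison.
visited_while = []
i = N - 1
while i >= 0:
    j = 0
    while j <= i:
        visited_while.append((i, j))
        j += 1
    i -= 1
```

The commented-out `range(N-1, 1, -1)` in the posted code would stop at `i = 2`; `range(N-1, -1, -1)` is the faithful translation of `while i >= 0`.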

Posted: **August 18th, 2019, 5:33 pm**

> Nice! A good experiment would be to compare the respective functions with respect to accuracy and performance and try to come up with guidelines on 'best practices' (to use an awful phrase).

It's the same algorithm, so the accuracy should be the same. You can calculate the sample variance if you want.
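The sample standard error falls out of the same simulation almost for free; a sketch using a hypothetical `mc_price_with_error` variant of the NumPy pricer (parameter values as in the thread):

```python
import math
import numpy

def mc_price_with_error(S_0, NSIM, NT, dt, r, d, sqrk, sig, T, K):
    rng = numpy.random.default_rng(7)
    V = numpy.full(NSIM, S_0, dtype=float)
    for _ in range(NT):
        dW = rng.standard_normal(NSIM)
        V = V + dt * (r - d) * V + sqrk * sig * V * dW
    disc_payoffs = math.exp(-r * T) * numpy.maximum(V - K, 0.0)
    price = disc_payoffs.mean()
    # Sample standard deviation (ddof=1) over sqrt(NSIM) = standard error of the estimate.
    std_err = disc_payoffs.std(ddof=1) / math.sqrt(NSIM)
    return price, std_err

T, NT = 1.0, 365
dt = T / NT
price, se = mc_price_with_error(5.0, 10000, NT, dt, 0.01, 0.0,
                                math.sqrt(dt), 0.05, T, 5.0)
```

With 10,000 paths the two posted prices agreeing to three decimals is exactly what the standard error predicts.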

> BTW you use a hard-coded payoff (not even a Payoff function).

Yes, for clarity of illustration. It's just a pedagogical example.

Posted: **August 18th, 2019, 5:35 pm**

BTW, you will probably be able to simplify and speed up the code more if you evolve log S instead of S.
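Under the GBM assumption the log-evolution collapses the time loop entirely, since log S_T is Gaussian and can be sampled in one shot. A sketch (this is exact sampling rather than the Euler scheme, so it is a model simplification, not a drop-in replacement):

```python
import math
import numpy

# Exact GBM terminal values via log S:
#   log S_T = log S_0 + (r - d - sig^2/2) T + sig sqrt(T) Z,  Z ~ N(0, 1).
# No time loop at all; one normal draw per path suffices for a European payoff.
S_0, K, sig, r, d, T, NSIM = 5.0, 5.0, 0.05, 0.01, 0.0, 1.0, 100000
rng = numpy.random.default_rng(1)
Z = rng.standard_normal(NSIM)
ST = S_0 * numpy.exp((r - d - 0.5 * sig * sig) * T + sig * math.sqrt(T) * Z)
price = math.exp(-r * T) * numpy.maximum(ST - K, 0.0).mean()
```

For path-dependent payoffs one would instead cumulate the log-increments per time step (e.g. with `numpy.cumsum` along the time axis), which still avoids the Python-level inner loop.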

Posted: **August 18th, 2019, 7:51 pm**

> Yes, for clarity of illustration. It's just a pedagogical example.

It's just that Payoff() introduces another level of indirection.

Posted: **August 19th, 2019, 2:25 pm**

I removed it from both functions.

Posted: **August 19th, 2019, 4:55 pm**

> I removed it from both functions.

OK.

On a follow-on question: how many (newbie?) Python programmers fall for the "Copy and Paste" syndrome? Exhibit I is above.

Posted: **August 19th, 2019, 10:40 pm**

I once worked with a guy who was getting red in the face whenever someone modified anything touching his code, or even asked questions about it. Later it turned out that he copy&pasted code stolen from his previous employer into our codebase. He was kindly asked to submit his resignation. Now he's a senior developer in a Tier 1 bank, of course.