Serving the Quantitative Finance Community

Alan
Posts: 2958
Joined: December 19th, 2001, 4:01 am
Location: California

Re: Silly questions

April 17th, 2018, 10:26 pm

It's silly question time again and today's silly question is:

Consider the continuous 'hat' function

 [$]f(x) = 2x,\ 0 \leq x \leq 1/2; \quad f(x) = 2(1-x),\ 1/2 \leq x \leq 1[$].

I want to approximate [$]f(x)[$] to any accuracy by (ideally) an infinitely differentiable function. An analytical and easily computable solution is desired. Ideally, the approximation should be in continuous space; failing that, a discrete approximation might be OK.

// Some Wilmotters might remember a similar question a while back .. (the one with the Jack Nicholson avatar).
Here is an idea. Take the approximation to be the exact solution [$]u(x,t)[$] of the heat equation at time [$]t = \epsilon[$], with the hat function as initial condition and the solution held at 0 at each boundary. It should be easy to find analytically, infinitely smooth for every [$]\epsilon > 0[$], and arbitrarily close (in the MSE sense) to the hat function if you take [$]\epsilon[$] small enough (but strictly positive).

p.s. In fact, it might actually just be ppauper's just-posted series with an extra [$]e^{-n^2 t}[$] factor, as per here. But there might be some alternative analytic versions that converge faster.
Last edited by Alan on April 17th, 2018, 10:45 pm, edited 3 times in total.
 
ppauper
Posts: 11729
Joined: November 15th, 2001, 1:29 pm

Re: Silly questions

April 17th, 2018, 10:43 pm

that would be
[$]\sum_{m=1}^{\infty}\frac{8}{m^2\pi^2}e^{-c^{2}m^{2}\pi^{2}t}\sin\frac{m\pi}{2}\sin m\pi x[$]
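
For anyone who wants to play with it numerically, a minimal sketch in Python (assuming [$]c = 1[$] and truncating at N terms; the grid is an arbitrary illustration choice):

import numpy as np

# Truncate the series above at N terms; sin(m*pi/2) kills the even terms.
def hat_series(x, t, N=200, c=1.0):
    m = np.arange(1, N + 1)
    coef = (8.0 / (m**2 * np.pi**2)
            * np.exp(-(c * m * np.pi)**2 * t)
            * np.sin(m * np.pi / 2))
    return np.sin(np.pi * np.outer(x, m)) @ coef

x = np.linspace(0.0, 1.0, 11)
print(hat_series(x, t=1e-4))   # approaches the hat function as t -> 0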
 
Alan
Posts: 2958
Joined: December 19th, 2001, 4:01 am
Location: California

Re: Silly questions

April 17th, 2018, 10:44 pm

Exactly -- we cross-posted the same idea!

By the 'alternative analytic versions', I mean ones where the Green function is constructed by the method of images (as the [$]\Phi[$] here), rather than by Fourier series. The method-of-images form has terms with [$]e^{-(x-n)^2/t}[$], so the series converges faster as [$]t \downarrow 0[$], rather than slower, as the Fourier series does.
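
For reference, the image-sum form of the Green function being described is the standard one for the unit interval with absorbing ends (written here with [$]c = 1[$], matching ppauper's series):

[$]G(x,y,t)=\frac{1}{\sqrt{4\pi t}}\sum_{n=-\infty}^{\infty}\left(e^{-(x-y-2n)^2/4t}-e^{-(x+y-2n)^2/4t}\right)[$]

For small [$]t[$] only the nearest few images contribute appreciably, which is why this form converges quickly as [$]t \downarrow 0[$].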
 
lovenatalya
Posts: 187
Joined: December 10th, 2013, 5:54 pm

Re: Silly questions

April 18th, 2018, 5:42 am

Late to the party. 

I like Alan's heat equation idea immensely. In fact, the Fourier solution and the image solution are just the two (dual) sides of the Poisson summation formula, converging faster as [$]t\rightarrow\infty[$] and as [$]t\searrow 0[$], respectively. One possible drawback of the [$]\frac1t[$] solution is that the integral of the Green's function has to be evaluated either as a sum of error functions or numerically. Both outrun's Chebyshev polynomial series method and ppauper's Fourier series method are good, but, just as outrun has pointed out, they converge slowly. The reason, just as outrun suspected, is the discontinuity in the derivative of the original function. The convergence speed is [$]O(\frac1n)[$] or even lower, where [$]n[$] is the index of the series. Also, the derivative close to the vertex is not uniformly convergent but oscillates around the discontinuity. This is called the Gibbs phenomenon. All of these approximations are actually analytic functions (their Taylor series converge), not merely infinitely differentiable.

Here are two more approximations satisfying the requirement.

1. A silly one that is not analytic (the Taylor series is undefined at two points) but infinitely differentiable, and that does satisfy the requirement. For convenience I linearly transform Cuchulainn's hat function into the absolute value function [$]|x|[$]. My approximation is simply [$]|x|+e^{\frac{b}{(x+a)(x-a)}}\Theta(a^2-x^2)[$], where [$]a>0[$] and [$]b>0[$] are parameters to be adjusted according to the required fidelity of the approximation, and [$]\Theta[$] is the Heaviside function. The approximating function simply rounds off the sharp corner and assumes the shape of the original function away from the corner, with the whole curve being infinitely differentiable.

2. Any smoothing summation kernel, convolution or not, would work. The smoothness (differentiability or analyticity) of the resulting function is determined by that of the kernel. The convergence can be controlled by a parameter; in the limit, the kernel is the Dirac delta functional. A simple example is the convolution operator [$]\Phi[t,f]:=\frac1{\sqrt t}\phi\big(\frac{\cdot}{\sqrt t}\big)*f(\cdot)[$], where [$]\phi[$] is, for example, the standard Gaussian function. The convolution is analytic since [$]\phi[$] is, and it converges uniformly, with high speed, to [$]f[$] as [$]t\rightarrow0[$]. This example is just the heat equation solution with the initial condition extended by the zero function to infinity, dispensing with the finite boundary condition and thus with the infinite source images for the Green's function, or the reflection and shifting of [$]f[$].
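
A minimal numerical sketch of the #2 Gaussian-kernel convolution in Python (the grid step, padding, and value of [$]t[$] are illustration choices):

import numpy as np

# Convolve the hat function with the scaled Gaussian kernel phi(./sqrt(t))/sqrt(t).
def smoothed_hat(t, h=1e-3):
    x = np.arange(-0.5, 1.5 + h, h)                        # padded grid around [0, 1]
    hat = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, None)  # hat on [0, 1], zero outside
    s = np.sqrt(t)
    m = int(np.ceil(5.0 * s / h))
    u = h * np.arange(-m, m + 1)                           # symmetric, odd-length kernel support
    kernel = np.exp(-u**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return x, h * np.convolve(hat, kernel, mode='same')

x, f = smoothed_hat(t=1e-4)
print(f[np.argmin(np.abs(x - 0.5))])   # close to 1 (the peak) for small t
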
Last edited by lovenatalya on April 19th, 2018, 9:13 pm, edited 1 time in total.
 
ppauper
Posts: 11729
Joined: November 15th, 2001, 1:29 pm

Re: Silly questions

April 18th, 2018, 9:15 am

I wrote down the coefficients of the Fourier series and outrun plotted the Chebyshev coefficients. They both behave like [$]\frac{1}{n^{2}}[$]
 
outrun
Posts: 4573
Joined: January 1st, 1970, 12:00 am

Re: Silly questions

April 18th, 2018, 10:13 am

I wrote down the coefficients of the Fourier series and outrun plotted the Chebyshev coefficients. They both behave like [$]\frac{1}{n^{2}}[$]
For the centered version, the continuous Fourier transform is [$]\mathrm{sinc}^2(\omega)[$], which indeed decays as [$]1/\omega^2[$].
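
With the convention [$]\hat f(\omega)=\int f(x)\,e^{-i\omega x}\,dx[$], the unit triangle [$]\max(1-|x|,0)[$] transforms to [$]\big(\frac{\sin(\omega/2)}{\omega/2}\big)^2[$]; the hat here is just a shifted and rescaled copy of it, so the [$]1/\omega^2[$] decay carries over.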
 
Alan
Posts: 2958
Joined: December 19th, 2001, 4:01 am
Location: California

Re: Silly questions

April 18th, 2018, 2:35 pm

I think lovenatalya's #2 solution is the best so far. It leads to a simple analytic form containing just a couple of cumnormals and a couple of exponentials -- that's it! No infinite sum, and infinitely smooth.

With no checking, something like: define

[$]u(t,x) = 2 x   \left[ \Phi(\frac{1/2 - x}{\sqrt{t}}) - \Phi(-\frac{x}{\sqrt{t}}) \right] - \sqrt{\frac{2 t}{\pi}} \left[ e^{-(1/2-x)^2/2 t} - e^{-x^2/ 2 t} \right][$],

where [$]\Phi(\cdot)[$] is the cumnormal function.

Then, the approximating function is

[$]f(t,x) = u(t,x) + u(t,1-x)[$],

converging to the desired [$]f(x)[$] as [$]t \downarrow 0[$].

Likely typos or errors, so needs to be re-done carefully!
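
A direct transcription into Python for anyone who wants to check it numerically (the cumnormal is written via erf; the test grid and values of [$]t[$] are arbitrary):

import numpy as np
from scipy.special import erf

def Phi(z):                          # cumnormal via the error function
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def u(t, x):
    rt = np.sqrt(t)
    return (2.0 * x * (Phi((0.5 - x) / rt) - Phi(-x / rt))
            - np.sqrt(2.0 * t / np.pi)
            * (np.exp(-(0.5 - x)**2 / (2.0 * t)) - np.exp(-x**2 / (2.0 * t))))

def f_approx(t, x):                  # the approximating function u(t,x) + u(t,1-x)
    return u(t, x) + u(t, 1.0 - x)

x = np.linspace(0.0, 1.0, 1001)
hat = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, None)
for t in (1e-3, 1e-4, 1e-5):
    print(t, np.max(np.abs(f_approx(t, x) - hat)))   # max error shrinks like sqrt(t)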
 
lovenatalya
Posts: 187
Joined: December 10th, 2013, 5:54 pm

Re: Silly questions

April 18th, 2018, 7:14 pm

@ppauper and @outrun:

The decay of [$]\hat f(n)[$], the Fourier (or Chebyshev) coefficients of [$]f[$], and the error or convergence rate [$]|f(x)-(S_nf)(x)|[$], where [$]S_nf[$] stands for the partial Fourier sum up to the [$]n[$]'th terms, are distinct things. The latter decays no faster than the former. I am talking about the latter.

I should have been clearer that I am speaking about the convergence bound for a general class of functions with a discontinuity in the derivative. For an arbitrary function [$]h[$] satisfying the 1-Hölder (Lipschitz) condition, which is our case, the Fourier coefficients decay as [$]|\hat h(n)|<\frac Kn[$] for some [$]K>0[$] dependent on [$]h[$], and the uniform error satisfies [$]|h(x)-(S_nh)(x)|<K\frac{\ln n}n,\, \forall x[$], for some [$]K>0[$] dependent on [$]h[$], where [$]S_nh[$] stands for the partial Fourier sum up to the [$]n[$]'th terms.

Now, as I said above, these are the decay and error upper bounds for a general function [$]h[$]. For our specific function [$]f[$], the rates can be better: [$]\hat f(n)\sim\frac1{n^2}[$]. That implies the error or convergence rate, which is bounded by the sum of all the remaining terms, is [$]|f(x)-(S_nf)(x)|=O\big(\frac1n\big),\, \forall x[$]. This is again a bound obtained by taking absolute values of the terms. Due to the oscillation of the terms, the precise error could be smaller, but that requires a more careful analysis of the partial sum [$]S_nf[$], which is the convolution of the Dirichlet kernel with [$]f[$]. I do not have time to do it now, but will later.

Meanwhile, if either of you would plot the numerical error for each [$]x[$] as [$]n[$] increases, we could see the error (not the coefficient) decay rate.
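
A sketch of that check in Python for the Fourier partial sums (with [$]t = 0[$], [$]c = 1[$], and an arbitrary grid):

import numpy as np

# Max error of the partial Fourier sine sum S_n f against the hat function.
x = np.linspace(0.0, 1.0, 2001)
hat = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, None)

S = np.zeros_like(x)
for n in range(1, 513):
    S += 8.0 / (n**2 * np.pi**2) * np.sin(n * np.pi / 2) * np.sin(n * np.pi * x)
    if n in (8, 32, 128, 512):
        print(n, np.max(np.abs(S - hat)))   # error decay, not coefficient decay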
 
lovenatalya
Posts: 187
Joined: December 10th, 2013, 5:54 pm

Re: Silly questions

April 18th, 2018, 7:17 pm

@Alan:

Exactly! The formula looks right.

Some other smoothing kernels, such as a smooth wavelet, would do as well and will give more freedom for the control of the local behavior of the approximation.
 
outrun
Posts: 4573
Joined: January 1st, 1970, 12:00 am

Re: Silly questions

April 18th, 2018, 7:56 pm

@Alan:

Exactly! The formula looks right.

Some other smoothing kernels, such as a smooth wavelet, would do as well and will give more freedom for the control of the local behavior of the approximation.
That's a very good idea, especially since the triangle function is self-similar across scales, making the (discrete fast?) transform very efficient. The main issue imo is again the differentiability.
 
Cuchulainn
Topic Author
Posts: 20252
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Silly questions

April 19th, 2018, 3:49 pm

Truncating series is scary because it is not always possible to know where to truncate.
BTW, if you solve the heat equation with the given [$]f(x)[$] you get back ppauper's solution.

Some kind of mollifier whose integral = 1, plus a convolution, maybe?

[$]J(x) = k \exp[-1/(1-|x|^2)] \:\text{for}\: |x| < 1[$]
[$]J(x) = 0 \:\text{for}\: |x| \geq 1[$]

and the scaled mollifier

[$]J_{\varepsilon}(x) = \varepsilon^{-1}J(x/\varepsilon)[$]

Then I would like to convolve this function with [$]f(x)[$]. But .. we wind up with an integral.
This function is infinitely differentiable and approximates [$]f[$] as [$]\varepsilon \rightarrow 0[$].
Can we get an analytic solution?
// k ~ 2.25228
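
As a quick check of the normalization in Python (a sketch; quad with default tolerances):

import numpy as np
from scipy.integrate import quad

# J integrates to 1 when k is the reciprocal of the unnormalized bump's area.
def bump(x):
    return np.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

area, _ = quad(bump, -1.0, 1.0)
print(1.0 / area)   # ~ 2.25228, matching k above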
 
lovenatalya
Posts: 187
Joined: December 10th, 2013, 5:54 pm

Re: Silly questions

April 19th, 2018, 4:46 pm

@Cuchulainn:

Have you checked out my #2 kernel solution? Convolution is one example of it. I gave the example of the Gaussian kernel convolution, for which Alan has worked out the explicit expression. By the way, the max-norm convergence rate of the Gaussian kernel convolution is [$]\sqrt t[$] as [$]t\searrow 0[$]. You can replace that with [$]t^k[$] for any desired positive [$]k[$] to get any desired convergence speed.

Also, have you checked out my #1 solution? I would think that is the simplest and cleanest way: just some arithmetic operations, no integrals, no special functions, no long summations.
 
Cuchulainn
Topic Author
Posts: 20252
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Silly questions

April 19th, 2018, 5:42 pm

@lovenatalya
I'll have another, more detailed look.
 
Cuchulainn
Topic Author
Posts: 20252
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Silly questions

April 19th, 2018, 6:06 pm

Late to the party. 

I like Alan's heat equation idea immensely. In fact, the Fourier solution and the image solution are just the two (dual) sides of the Poisson summation formula, converging faster as [$]t\rightarrow\infty[$] and as [$]t\searrow 0[$], respectively. One possible drawback of the [$]\frac1t[$] solution is that the integral of the Green's function has to be evaluated either as a sum of error functions or numerically. Both outrun's Chebyshev polynomial series method and ppauper's Fourier series method are good, but, just as outrun has pointed out, they converge slowly. The reason, just as outrun suspected, is the discontinuity in the derivative of the original function. The convergence speed is [$]O(\frac1n)[$] or even lower, where [$]n[$] is the index of the series. Also, the derivative close to the vertex is not uniformly convergent but oscillates around the discontinuity. This is called the Gibbs phenomenon. All of these approximations are actually analytic functions (their Taylor series converge), not merely infinitely differentiable.

Here are two more approximations satisfying the requirement.

1. A silly one that is not analytic (the Taylor series is undefined at two points) but infinitely differentiable, and that does satisfy the requirement. For convenience I linearly transform Cuchulainn's hat function into the absolute value function [$]|x|[$]. My approximation is simply [$]|x|+e^{\frac{b}{(x+a)(x-a)}}\Theta(a^2-x^2)[$], where [$]a>0[$] and [$]b>0[$] are parameters to be adjusted according to the required fidelity of the approximation, and [$]\Theta[$] is the Heaviside function. The approximating function simply rounds off the sharp corner and assumes the shape of the original function away from the corner, with the whole curve being infinitely differentiable.

2. Any smoothing summation kernel, convolution or not, would work. The smoothness (differentiability or analyticity) of the resulting function is determined by that of the kernel. The convergence can be controlled by a parameter; in the limit, the kernel is the Dirac delta functional. A simple example is the convolution operator [$]\Phi[t,f]:=\frac1{\sqrt t}\phi\big(\frac{\cdot}{\sqrt t}\big)*f(\cdot)[$], where [$]\phi[$] is, for example, the standard Gaussian function. The convolution is analytic since [$]\phi[$] is, and it converges uniformly, with high speed, to [$]f[$] as [$]t\rightarrow0[$]. This example is just the heat equation solution with the initial condition extended by the zero function to infinity, dispensing with the finite boundary condition and thus with the infinite source images for the Green's function, or the reflection and shifting of [$]f[$].
[$]\phi\big(\frac{x}{\sqrt t}\big) = \exp(-x^2/4t)[$], yes?
So, the motivation is to solve the heat equation on an infinite rod, and the solution is a convolution integral between this kernel and [$]f[$]? I just need to get to the stage of rolling out a solution, which is what Alan sketched? I need to check those steps.

Your solution is a kind of stroke of genius.
Last edited by Cuchulainn on April 19th, 2018, 6:14 pm, edited 2 times in total.
 
lovenatalya
Posts: 187
Joined: December 10th, 2013, 5:54 pm

Re: Silly questions

April 19th, 2018, 6:13 pm

@Alan:

Exactly! The formula looks right.

Some other smoothing kernels, such as a smooth wavelet, would do as well and will give more freedom for the control of the local behavior of the approximation.
That's a very good idea, especially since the triangle function is self-similar across scales, making the (discrete fast?) transform very efficient. The main issue imo is again the differentiability.
Regarding the differentiability issue, I agree that the differentiability of the original function would be an issue for the infinite-norm convergence of a wavelet multi-scale infinite series expansion, since the Gibbs phenomenon is common to all such expansions. The smoothness is not a problem, as the smoothness of the resulting function is determined by that of the kernel. However, I am not sure this would be a problem for the infinite-norm convergence of a kernel transformation, such as the convolution. In the Gaussian kernel example of my #2 solution, of which Alan has given the explicit expression, the infinite-norm of the error is [$]\sim\sqrt t[$] as [$]t\searrow 0[$]. You can replace [$]\sqrt t[$] with [$]t^k[$] for some positive [$]k[$] to get the desired speed of convergence.