That's a property of the "double" type you told it to use (or any floating-point number type). It's not specific to Visual Studio or C.

It looks like Heisenberg's uncertainty principle surprisingly predicts the same maximum-velocity values for elementary particles as I obtained earlier from a different approach and actually a different formula. I assume the minimum uncertainty in position is the Planck length; the rest comes straight out of the Heisenberg uncertainty principle:

\begin{eqnarray}
\sigma_x \sigma_p &\geq& \hbar \nonumber \\
l_p\sigma_p &\geq& \hbar \nonumber \\
l_p\frac{mv}{\sqrt{1-\frac{v^2}{c^2}}} &\geq& \hbar \nonumber \\
l_p\frac{\frac{\hbar}{\bar{\lambda}}\frac{1}{c}v}{\sqrt{1-\frac{v^2}{c^2}}} &\geq& \hbar \nonumber \\
\frac{\frac{1}{c}v}{\sqrt{1-\frac{v^2}{c^2}}} &\geq& \frac{\bar{\lambda}}{l_p} \nonumber \\
\frac{v}{\sqrt{1-\frac{v^2}{c^2}}} &\geq& \frac{\bar{\lambda}}{l_p}c \nonumber \\
\frac{v^2}{1-\frac{v^2}{c^2}} &\geq& \frac{\bar{\lambda}^2}{l_p^2}c^2 \nonumber \\
v^2 &\leq& \frac{\bar{\lambda}^2}{l_p^2}c^2 \left(1-\frac{v^2}{c^2}\right) \nonumber \\
v^2\left(1+\frac{\bar{\lambda}^2}{l_p^2}\right) &\leq& \frac{\bar{\lambda}^2}{l_p^2}c^2 \nonumber \\
v^2 &\leq& \frac{\frac{\bar{\lambda}^2}{l_p^2}c^2}{1+\frac{\bar{\lambda}^2}{l_p^2}} \nonumber \\
v &\leq& \frac{c}{\sqrt{1+\frac{l_p^2}{\bar{\lambda}^2}}}
\end{eqnarray}

Taylor expansion shows that, to any practically relevant accuracy (for hypothetical high-energy physics experiments), this gives the same value for any observed subatomic particle as my max velocity formula; that is not so strange, as they are both limits on the velocity. For the Planck mass particle the formula above cannot be used, as the momentum of the Planck mass particle is likely always \(m_pc\), which leads to another, and exactly the same, prediction as my max velocity formula. This is an early working paper (my max velocity formula is now in 3 published papers, soon in 4 and 5).
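As a quick numerical sanity check (my own sketch, not part of the original post), the bound \(v\leq c/\sqrt{1+l_p^2/\bar{\lambda}^2}\) can be evaluated for an electron with arbitrary-precision arithmetic; the Planck length and reduced Compton wavelength used below are assumed CODATA-style values:

```python
# Evaluate the bound v <= c / sqrt(1 + l_p^2 / lambda_bar^2) for an electron.
# The constants are my assumed CODATA-style values, not taken from the post.
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough digits to resolve a deviation of order 1e-45

l_p = Decimal("1.616255e-35")        # Planck length [m] (assumed value)
lam_e = Decimal("3.8615926796e-13")  # reduced Compton wavelength of the electron [m]

r2 = (l_p / lam_e) ** 2
v_over_c = 1 / (1 + r2).sqrt()  # the bound derived above, as a fraction of c
deviation = 1 - v_over_c        # how far below c the limit sits
print(deviation)                # ~ (1/2)(l_p/lambda)^2, roughly 8.8e-46
```

The deviation from \(c\) is of order \(\tfrac{1}{2}(l_p/\bar{\lambda}_e)^2\approx 8.8\times 10^{-46}\), far beyond double precision, hence the decimal module.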

Early working paper; comments, and in particular harsh criticism, are welcome.


Last edited by Collector on January 20th, 2018, 9:34 am

- Cuchulainn
**Posts:** 56674 **Location:** Amsterdam

"undocumented" inequalities. How can we check their validity?

Cuchulainn wrote:"undocumented" inequalities. How can we check their validity?

Start with logic! Use reductio ad absurdum to exclude the alternative of today's modern physics interpretation. When several theories exist and cannot (yet) be distinguished by experiments, use the most logical one.

"Undocumented"? Math based on logic is a very good start of documentation. It is clear from my derivation that one needs to be able to go below the Planck length in uncertainty of position to have a velocity above this limit. And the Planck length seems to be very unique, as we can find it from gravity experiments without any need for the gravitational constant.

Also, the Heisenberg uncertainty principle is quite well documented. The big question here is whether the best accuracy with which we can know a position is actually limited by the Planck length or not. In my view there is a lot of logic behind it; for example, we thereby get rid of a lot of absurd predictions that we indirectly have in standard modern physics when such boundaries are not introduced.

Last edited by Collector on January 18th, 2018, 6:52 pm

- Cuchulainn
**Posts:** 56674 **Location:** Amsterdam

It's been about 45 years since I did QM. The inequalities part is OK, but are the assumptions obvious?

Cuchulainn wrote: It's been about 45 years since I did QM. The inequalities part is OK, but are the assumptions obvious?

The assumptions are not obvious; some will claim there is nothing special about the Planck length, that we can go as short as we want. So it partly boils down to whether there is a minimum length or not, which again is linked to a minimum time interval and also to the Planck mass particle (as the reduced Compton wavelength of the Planck mass particle is the Planck length). Strangely, it seems like the Planck length is central in gravity, but one can argue that G (in my view a composite constant that we do not even need) is what is central.

So basically, if \(\sigma_x\geq l_p\), then one gets my velocity limit derived above for anything with rest mass.

Assume \(\sigma_x > 0\); then one gets \(v< c\). That is the modern physics velocity limit on any mass. But at the same time, modern physics often says nothing can be shorter than \(l_p\). The thing is, it looks like from Heisenberg's uncertainty principle we cannot have \(v<c\) at the same time as \(\sigma_x\geq l_p\). This my derivation above ``proves'', and I don't think it has been shown before (but possibly; I need to search the literature more), except that my atomism theory has shown it for years now.

If one has \(\sigma_x \geq 0\), then one has \(v\leq c\), which according to SR is impossible (for anything with rest mass). And indeed, we cannot have an infinite relativistic mass.

So do we want to keep the idea of \(v< c\) and reject the idea of the Planck length as a minimum length, or do we switch to my much more logical idea, which gives us an exact speed limit linked to the reduced Compton wavelength of each elementary particle? ;-)

Interestingly, with the formula I published in 2014, derived from only two postulates rooted in ancient atomism, I predicted that the maximum velocity of anything with rest mass had to be

\(v=c\left(\frac{\bar{\lambda}^2-L^2}{\bar{\lambda}^2+L^2}\right)\)

where L was the diameter of the indivisible particle. I did not know what the diameter was back then, except that I knew it had to be very, very small. When setting it to the Planck length, \(L=l_p\), we get

\(v=c\left(\frac{\bar{\lambda}^2-l_p^2}{\bar{\lambda}^2+l_p^2}\right)\)

For an electron we then get a maximum velocity of

\(v_{max}=c\left(\frac{\bar{\lambda}_e^2-l_p^2}{\bar{\lambda}_e^2+l_p^2}\right)\approx c\times 0.9999999999999999999999999999999999999999999964966\)

(same as my formula derived only from atomism.)

This is the same velocity we get from setting the Planck length as the minimum uncertainty in position in the Kennard version of the Heisenberg uncertainty principle

\(\sigma_x\sigma_p\geq \frac{\hbar}{2}\)

and when \(\sigma_x=l_p\) we have

\(l_p\sigma_p\geq \frac{\hbar}{2}\)

a little derivation and we get

\(v\leq \frac{c}{\sqrt{1+4\frac{l_p^2}{\bar{\lambda}^2}}}\)

For an electron we get

\(v\leq \frac{c}{\sqrt{1+4\frac{l_p^2}{\bar{\lambda}_e^2}}}\approx c\times 0.9999999999999999999999999999999999999999999964966\)

Using Taylor series expansion we can easily see that they predict the same numerical result when \(\bar{\lambda}\gg l_p\), which is the case for any observed particle.
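To make the \(\bar{\lambda}\gg l_p\) agreement concrete, here is a small sketch (mine, not from the post, with assumed CODATA-style constants) that evaluates both bounds for an electron at high precision; the two only differ at order \((l_p/\bar{\lambda})^4\sim 10^{-89}\):

```python
# Compare the 2014 atomism formula with the Kennard-based bound for an electron.
# Constants are my assumed CODATA-style values, not taken from the post.
from decimal import Decimal, getcontext

getcontext().prec = 120  # the bounds only differ at order (l_p/lambda)^4 ~ 1e-89

l_p = Decimal("1.616255e-35")        # Planck length [m] (assumed)
lam_e = Decimal("3.8615926796e-13")  # reduced Compton wavelength of the electron [m]
r2 = (l_p / lam_e) ** 2

v_atomism = (1 - r2) / (1 + r2)      # v/c from v = c(lam^2 - l_p^2)/(lam^2 + l_p^2)
v_kennard = 1 / (1 + 4 * r2).sqrt()  # v/c from v <= c/sqrt(1 + 4 l_p^2/lam^2)

dev_a = 1 - v_atomism  # both deviations are ~ 2 (l_p/lam)^2, roughly 3.5e-45
dev_k = 1 - v_kennard
print(dev_a, dev_k)
```

Both deviations agree to leading order \(2(l_p/\bar{\lambda}_e)^2\), which is why the two formulas print the same long string of nines for any observed particle.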

Anyway, whether one should use the original Heisenberg uncertainty principle formulation (I actually think so) or the Kennard version would lead to a long philosophical discussion. However, the main conclusion is the same:

**Heisenberg + Max Planck = Break Down in Lorentz Symmetry**

The SR limit of v<c (for anything with rest-mass) is impossible. Just ask Heisenberg + Max Planck.

To have Einstein (SR) + Heisenberg (uncertainty principle) + Max Planck (units), one needs a new max velocity limit for anything with rest mass!

Lorentz symmetry holds all the way up to Planck energies, then breaks down; the Planck mass particle, the Planck time and the Planck length are very unique!


- Cuchulainn
**Posts:** 56674 **Location:** Amsterdam


ExSan wrote:

Interesting

How big is that class of problems? And how have they measured 'get by'?

In other words, what are the criteria for using these data types?

Looking more closely at \(\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\) in relation to high precision:

With limited digits in Pi, it looks like one will always end up with a probability above unity. In the first calculation I just set Pi to 3.141592.

NIntegrate[Exp[-x^2/2]/Sqrt[2*3141592*10^(-6)], {x, -40, 40}, WorkingPrecision -> 2000]

1.00000010402206257919...

Above unity! (**Fake \(\pi\) = fake probability**.) (The true \(\pi\) has infinitely many digits, which is impossible to compute in a limited time interval.)
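The size of the excess is in fact exactly the mis-normalization factor: the integral of \(e^{-x^2/2}\) over a wide enough range is \(\sqrt{2\pi}\), so dividing by \(\sqrt{2\times 3.141592}\) leaves a total mass of \(\sqrt{\pi/3.141592}\). A one-line check (my own, in Python rather than Mathematica):

```python
# The excess above unity is just the wrong normalization constant: the integral
# of exp(-x^2/2) over (-inf, inf) is sqrt(2*pi), so dividing by
# sqrt(2*3.141592) leaves a total mass of sqrt(pi/3.141592).
import math

excess = math.sqrt(math.pi / 3.141592)
print(excess)  # matches the leading digits of the NIntegrate result 1.00000010402206...
```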

Even when letting Mathematica calculate Pi, I get a probability above unity for a very wide integration range, here 100 standard deviations:

NIntegrate[Exp[-x^2/2]/Sqrt[2*Pi], {x, -100, 100}, WorkingPrecision -> 2000]

1.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000005167523076549771195117636230334557078093298910640836851699

Above unity!

50 standard deviations works fine:

NIntegrate[Exp[-x^2/2]/Sqrt[2*Pi], {x, -50, 50}, WorkingPrecision -> 2000]

0.99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999997838804106476726757662588421169523237980337873997773741440309828

Some testing indicates Mathematica flips to a probability above unity at 74 standard deviations (using the method above): NIntegrate[Exp[-x^2/2]/Sqrt[2*Pi], {x, -Z, Z}, WorkingPrecision -> 2000]. Increasing WorkingPrecision further did not seem to help.
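One hedged observation (my own sketch, in Python rather than Mathematica): the truncated integral with exact \(\pi\) has the closed form \(\mathrm{erf}(z/\sqrt{2})\), which is bounded by 1 for every \(z\); so any value above unity from a numerical integrator is a quadrature or rounding artifact, not a property of the distribution.

```python
import math

def mass_within(z):
    """Exact total probability of the standard normal within [-z, z]."""
    return math.erf(z / math.sqrt(2))

# The closed form never exceeds 1, so an integrator reporting P > 1 at
# 74 standard deviations is accumulating integration error.
for z in (50, 74, 100):
    assert mass_within(z) <= 1.0
```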


Last edited by Collector on April 28th, 2018, 12:12 pm

So will we always get a total probability above unity when time is limited (limited computational power)?

Do we need a second normalization, to remove the error from a normalization that is not exact because we cannot operate with infinitely many digits of Pi? And there could also be issues with the Exp function at high precision, I guess.

My main question is basically this: if we include long enough tails (the tails are infinite in this distribution), will we always end up with a total probability above unity if we do not have Pi to infinite precision? So the function can never be integrated in finite time without giving a probability above unity, if we include very long tails?


- Traden4Alpha
**Posts:** 23951

The theoretical equations assume the tails go to infinity. But the universe has bounds on time, energy, etc. Thus the tails are clipped.

Maybe the computational issues that push P>1 are balanced by the physical limits on the tails that make P<1?


Traden4Alpha wrote:The theoretical equations assume the tails go to infinity. But the universe has bounds on time, energy, etc. Thus the tails are clipped.

So such theoretical distributions are not true inside a limited time interval (energy, etc.). They can still be OK approximations, even when the total probability is above unity.

"Maybe the computational issues that push P>1 are balanced by the physical limits on the tails that make P<1?"

Yes, this is worth studying further.

I thought P might oscillate for different tail lengths, slightly above and below 1, but it seems that below 73 standard deviations P < 1 and above 73 standard deviations P > 1 in Mathematica, the way I set it up above.

If it for some reason always gives P > 1 (with limited time and energy) when including very long tails, that would be interesting. It would mean modern science is often based on fake probabilities :-) (negative or above unity).

Also, I see P increases with the number of standard deviations; does it have a convergence limit? 2, or infinite probability?

NIntegrate[Exp[-x^2/2]/Sqrt[2*Pi], {x, -10000000000, 10000000000}, WorkingPrecision -> 6000]

returns an error, and I was hoping to study the far-out tails.
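If I may venture an answer (a sketch under the assumption that only \(\pi\) is truncated): the truncated-\(\pi\) mass has the closed form \(\sqrt{\pi/\tilde{\pi}}\,\mathrm{erf}(z/\sqrt{2})\), so as the tails grow it converges to \(\sqrt{\pi/\tilde{\pi}}\approx 1.000000104\), not to 2 or infinity; with exact \(\pi\) the limit is exactly 1, so unbounded growth in NIntegrate would be accumulated integration error.

```python
import math

PI_TRUNC = 3.141592  # the truncated value of pi used in the post

def mass(z):
    # integral of exp(-x^2/2)/sqrt(2*PI_TRUNC) over [-z, z], in closed form
    return math.sqrt(math.pi / PI_TRUNC) * math.erf(z / math.sqrt(2))

# As z grows, erf(z/sqrt(2)) -> 1, so the total mass converges to
# sqrt(pi/PI_TRUNC), about 1.000000104, not to 2 or infinity.
for z in (10, 40, 1e10):
    print(z, mass(z))
```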
