"Machine Learning, Uncertain Information, and the Inevitability of Negative `Probabilities'"

"The core of the argument is that in modelling the universe through Machine Learning, we are obliged to make inferences based on finite and hence typically less-than-complete information."
It was only a matter of time before someone came up with this brainwave.
At its heart I want to challenge the assumption that probabilities have to be positive. I want to give several arguments, descriptive and formal, to indicate why the use of positive probabilities is an ideal which is both overly restrictive and unrealisable. Indeed I will argue that the use of non-positive `probabilities' is both inevitable and natural.
Well, this is quite a curious reaction. When I end up computing a negative probability, I usually think "OMG, I should fix this up". Cuchullain is quite good at removing negative probabilities. I can give some tricks too.

From "Issues in Artificial Intelligence, Robotics and Machine Learning":
"the OOM theory suffers from the negative probability problem (NPP)"
"Learned OOM may sometimes return negative probabilities for some events"
(OOM: "Observable Operator Models are a generalization of HMMs")
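The quoted NPP is easy to see in a few lines. A minimal numerical sketch (all numbers invented for illustration): an OOM scores a sequence by applying one linear operator per symbol to a state vector and then evaluating a fixed functional. The linear constraints a learner typically enforces make the scores of all length-n sequences sum to 1, but nothing forces each individual score to be nonnegative.

```python
import numpy as np

# Toy OOM sketch (all numbers invented): state vector w, evaluation
# functional sigma, one operator tau per observable symbol.  The model
# scores a sequence as  sigma @ tau_an @ ... @ tau_a1 @ w.
sigma = np.ones(2)
w = np.array([0.9, 0.1])                      # initial state
tau = {
    'a': np.array([[1.0, 0.3], [0.2, 0.5]]),
    'b': np.array([[-0.3, 0.1], [0.1, 0.1]]),
}
# The usual linear constraint holds: the operators sum to a matrix that
# preserves sigma, so the scores of all length-n sequences sum to 1 ...
assert np.allclose(sigma @ (tau['a'] + tau['b']), sigma)

def p(seq):
    """Value the model assigns to an observation sequence."""
    v = w
    for sym in seq:
        v = tau[sym] @ v
    return float(sigma @ v)

# ... yet nothing forces each individual score to be nonnegative:
# here p('a') + p('b') == 1, but p('b') < 0.  That is the NPP.
print(p('a'), p('b'))
```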
I look forward to self-driving cars! They are so flexible!
Well said!

Holy smoke! The POSITIVE probabilities are what should be used to describe incomplete information. If you have complete information, you don't need probability at all.
You can define an object to describe the difference between two probability distributions (Q. what's a good name?) if you want, but don't call it "probability".
I assume, and confess, that I have had a picture of Phyllis Nicolson in my bedroom for two days now!

There's an H in Nicholson.
"You can define an object to describe the difference between two probability distributions (Q. what's a good name?) if you want, but don't call it 'probability'."

Deep Wasserstein Distance?
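Taking the joke half-seriously: the Wasserstein (earth-mover's) distance is one standard answer to the "good name" question. A pure-Python sketch for the simplest case, two discrete distributions on the same evenly spaced 1-D support (function name and example data are mine), where the distance is just the area between the two CDFs:

```python
def wasserstein_1d(p, q, spacing=1.0):
    """Wasserstein-1 distance between two probability-mass sequences on the
    same evenly spaced 1-D support: the area between their CDFs."""
    cdf_p = cdf_q = area = 0.0
    for p_i, q_i in zip(p, q):
        cdf_p += p_i
        cdf_q += q_i
        area += abs(cdf_p - cdf_q) * spacing
    return area

# All mass at point 0 vs. all mass at point 2: moving one unit of mass
# a distance of 2 costs 2.
print(wasserstein_1d([1, 0, 0], [0, 0, 1]))  # -> 2.0
```

Unlike a signed "difference of probabilities", this is a genuine metric between distributions, which is exactly why it shouldn't be called a probability.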