
Re: The Final Frontier: negative probability in Machine Learning

Posted: June 20th, 2019, 6:39 pm
by Cuchulainn
And what about "fake AI"?

Re: The Final Frontier: negative probability in Machine Learning

Posted: June 20th, 2019, 11:05 pm
by katastrofa
My personal opinion - what I would do is recompute the probability so it's not negative. Problem solved. You probably need to do something simple with a ratio to adjust P(not) upwards. Write an error-handling function.
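A minimal sketch of the kind of error-handling function suggested above: clip any negative mass to zero and rescale the remaining entries so they sum to 1. The helper name and interface are illustrative, not from the thread.

```python
import numpy as np

def fix_probabilities(p):
    """Hypothetical repair step: clip negative entries to zero,
    then renormalize so the vector sums to 1 again."""
    p = np.clip(np.asarray(p, dtype=float), 0.0, None)  # remove negative mass
    total = p.sum()
    if total == 0.0:
        raise ValueError("all probability mass was clipped away")
    return p / total  # ratio adjustment: rescale upwards so sum(p) == 1

probs = fix_probabilities([-0.1, 0.4, 0.7])
```

Whether this is legitimate depends on why the negative value appeared in the first place, which is katastrofa's next point.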

You're probably looking at something that isn't a probability, or using an incorrect distribution. Something went wrong. Someone once told me they had computed a correlation higher than 1. I don't doubt it happened; I just acknowledge that something unanticipated happened in your maths and you need to fix it.

We have a bigger problem in AI: probability distributions that look like waves (multi-modal distributions). You have a missing categorical variable, but until you obtain it you need to either split the data or handle it some other way. Multi-modal distributions also overlap, so they are difficult to bifurcate. Do you apply a custom transformation? Do you run k-means and try to impute the category? It's a complete pain in the ass.
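The k-means-imputation idea above can be sketched as follows, on synthetic bimodal data standing in for a dataset with a missing binary category (the data and the two-cluster choice are assumptions for illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal sample: mixture of two well-separated Gaussians, playing the
# role of data whose generating categorical variable was not recorded.
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 0.5, 500)])

# Minimal 1-D two-means loop: the cluster label imputes the missing category.
c = np.array([x.min(), x.max()])  # initial centroids at the extremes
for _ in range(50):
    labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
    c = np.array([x[labels == 0].mean(), x[labels == 1].mean()])

# `labels` splits the two modes; each subset can now be modeled unimodally.
```

This works when the modes are well separated; the overlap problem katastrofa mentions is exactly where a hard split like this starts mislabeling points near the boundary.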
Are you talking about generative models, or what exactly?