Serving the Quantitative Finance Community

 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 13th, 2018, 5:07 pm

1963... There are so many confounding factors in such studies that they are practically useless. We have better, non-invasive (which matters for the conclusiveness of the study) techniques these days.
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 5:09 pm

Presumably you meant "navigate" by sound. Definitions of "communicate" imply the intentional transmission of information from the speaker to others.

It would be interesting to see if bats pass the mirror test. I would suspect that some of the bats that eat fish would see themselves reflected in the water and recognize that the acoustic signature of their reflection was not that of another bat flying below them. Whether cave-dwelling bats also see some reflection of themselves in the cave walls is not clear because the walls are probably not smooth enough to form a coherent echolocation signal.
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 6:55 pm

1963... There are so many confounding factors in such studies that they are practically useless. We have better, non-invasive (which matters for the conclusiveness of the study) techniques these days.
LOL! You do know that people successfully did science before you were born?
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 13th, 2018, 8:19 pm

I do, but you should try harder if you want to give the impression that you know something about science and scientific thinking.
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 8:46 pm

I do, but you should try harder if you want to give the impression that you know something about science and scientific thinking.
The same could be said of you, unfortunately. :(
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 13th, 2018, 9:21 pm

I think everybody can see that for themselves. And you don't get points for provoking competent people into responding to your confused or wrong deliberations.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 9:27 pm

T4A, you are a great example that some animals are very bad at learning anything, including proper manners.

As for animal experiments, the neuroscientists I quizzed about them admitted that such experiments are often inconclusive, for the reasons Katastrofa mentioned.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 9:45 pm

Since Cuch was worried that AI researchers only throw mud at the wall and see what sticks: some of them are trying to prove things. Provable adversarial defences.
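The core idea behind "provable" defences is worth a sketch: instead of testing attacks empirically, one propagates an entire L∞ ball of inputs through the network and bounds the outputs; if the true class wins for every point in the ball, robustness is certified. Below is a minimal, self-contained illustration of interval bound propagation through a single linear layer — toy numbers of my own, not the linked paper's actual method:

```python
def interval_linear(W, b, lo, hi):
    # Propagate an axis-aligned input box [lo, hi] through y = W x + b.
    # Each output extreme is attained coordinate-wise: a positive weight
    # takes the matching bound, a negative weight takes the opposite one.
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        acc_lo = acc_hi = bi
        for w, l, h in zip(row, lo, hi):
            if w >= 0:
                acc_lo += w * l
                acc_hi += w * h
            else:
                acc_lo += w * h
                acc_hi += w * l
        out_lo.append(acc_lo)
        out_hi.append(acc_hi)
    return out_lo, out_hi

# Toy 2-in, 2-out layer; input x = (0.5, -0.5) with L-infinity radius 0.1.
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, 1.0]
lo = [0.4, -0.6]
hi = [0.6, -0.4]
ylo, yhi = interval_linear(W, b, lo, hi)
# If the lower bound of the true-class logit exceeds the upper bound of
# every other logit, the prediction is certifiably robust over the box.
```

Stacking such bounds through many layers loosens them, which is exactly what the certified-training literature works to control.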
 
User avatar
Cuchulainn
Posts: 20255
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

June 21st, 2018, 10:11 am

A more serious issue is the following: reproducibility. Do you always get a gibbon from the same panda input? And if not, how serious is that for an NN in a heavy-metal production process with real-time constraints?

The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. Just because algorithms are based on code doesn't mean experiments are easily replicated. Far from it. Unpublished codes and a sensitivity to training conditions have made it difficult for AI researchers to reproduce many key results. That is leading to a new conscientiousness about research methods and publication protocols. Last week, at a meeting of the Association for the Advancement of Artificial Intelligence in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.

Seems some bridges need to be built between AI and engineering principles.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 22nd, 2018, 7:27 am

"Unpublished codes and a sensitivity to training conditions have made it difficult for AI researchers to reproduce many key results."

Publishing code is irrelevant at best, or even detrimental, to replication. Downloading someone else's code and running it is not replication. Replication is reimplementing a published algorithm in your own code and getting the same result. But that takes more work, so people just want to download the code and run it ;-)
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 22nd, 2018, 12:09 pm

It has happened to me several times: I reimplemented someone's method and obtained different results. In some cases the cause was bugs in their code, in others errors in their methods. Once I showed that all the revolutionary results of a paper in a top scientific journal were nothing but such errors and bugs. In effect I showed that the authors should change jobs to something more mechanical (sweeping streets?), but since the problem is more widespread (it concerns ca. 80% of the scientific community), I think the consensus is that it's safer to keep all those idiots in isolated places, such as universities.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 22nd, 2018, 7:46 pm

Or corporate research labs ;-)
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 22nd, 2018, 9:41 pm

That's where the best ones go! For now. Soon they will be pushed out by politically skilled, but dumb, professors, aspiring professors and their toadies (in the reverse order of their life cycle stages). They will come pushing themselves onto your author list, crawling into your projects, claiming your work, ...
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 22nd, 2018, 9:48 pm

¡No pasarán!
 
User avatar
Cuchulainn
Posts: 20255
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

June 26th, 2018, 11:32 am

Topic: global polynomial, overfitting and regularisation

Mathematically/numerically speaking, these are non-optimal choices in general for these kinds of problems (at worst, probably the wrong approach; at best, they will keep us busy with lots of tweaking). Some one-liners (no maths, it's well known..) on why we think this:

1. (High-order) global-support polynomials are basically useless for approximation. They lead to all kinds of problems that must (hopefully) be resolved by ad-hoc regularisation, which really turns an unconstrained optimisation problem into a more difficult constrained one, with new (penalty) parameters that have to be guessed.
2. Global polynomials lead to dense/non-sparse networks on equally spaced link ranges.
3. Some error functions (e.g. LAD) lead to ill-posed optimisation problems.
4. Modelling discontinuous functions by smooth global functions is difficult.

In other areas, piecewise polynomials are used. Are they known/used in the ML community? I suspect standard 'recipes' are being used, with fixes made when they break. I could be mistaken.
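Point 1 can be demonstrated in a few lines with the classic Runge example: interpolating 1/(1+25x²) on equally spaced nodes with one global high-order polynomial blows up near the endpoints, while a piecewise-linear interpolant on the same nodes stays well-behaved. A plain-Python sketch (my own illustration, not anyone's production method):

```python
def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange_eval(xs, ys, x):
    # Global interpolating polynomial through (xs, ys), Lagrange form.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def piecewise_linear_eval(xs, ys, x):
    # Piecewise-linear interpolant on the same nodes.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1.0 - t) * y0 + t * y1
    raise ValueError("x outside node range")

n = 15  # equally spaced nodes on [-1, 1] -> a degree-14 global polynomial
xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
ys = [runge(x) for x in xs]

grid = [-1.0 + 2.0 * k / 400 for k in range(401)]
err_global = max(abs(lagrange_eval(xs, ys, x) - runge(x)) for x in grid)
err_piecewise = max(abs(piecewise_linear_eval(xs, ys, x) - runge(x)) for x in grid)
# err_global is of order several units (Runge phenomenon: wild endpoint
# oscillation); err_piecewise stays small on the very same nodes.
```

Adding more equally spaced nodes makes the global polynomial worse, not better — whereas refining the piecewise mesh converges, which is the usual argument for splines and local bases.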