
 
User avatar
katastrofa
Posts: 7475
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

November 11th, 2023, 1:31 pm

In short, TensorFlow draws samples without replacement. That’s inconsistent with the assumptions of the proof of convergence of SGD.
That’s a small, obvious problem among the many others in NN implementations, training and incorrect applications you’d find on a daily basis if you joined the domain. Not that there are many bright people there advancing it in the right direction already.
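A minimal NumPy sketch of the difference (toy size, illustrative only; this is not TensorFlow's actual pipeline code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy dataset size

# What the classical SGD convergence proofs assume: at each step, draw
# an index i.i.d. uniformly *with* replacement -- repeats within an
# "epoch" are possible.
with_replacement = rng.integers(0, n, size=n)

# What tf.data-style pipelines actually do: shuffle once per epoch and
# visit every sample exactly once, i.e. draw *without* replacement.
without_replacement = rng.permutation(n)

print(with_replacement)     # may contain repeats
print(without_replacement)  # a permutation of 0..n-1, each index once
```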
 
User avatar
tags
Posts: 3176
Joined: February 21st, 2010, 12:58 pm

Re: If you are bored with Deep Networks

November 12th, 2023, 8:30 am

In short, TensorFlow draws samples without replacement. That’s inconsistent with the assumptions of the proof of convergence of SGD.
That’s a small, obvious problem among the many others in NN implementations, training and incorrect applications you’d find on a daily basis if you joined the domain. Not that there are many bright people there advancing it in the right direction already.

thank you. i get your point. your points, actually.
 
User avatar
Cuchulainn
Posts: 20317
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 13th, 2023, 10:30 am

"The human mind is not, like ChatGPT and its peers, a statistical machine greedy for hundreds of terabytes of data to get the most plausible answer to a conversation or the most likely answer to a scientific question. (...) The human mind is a surprisingly efficient and elegant system that operates with a limited amount of information. It doesn't try to infer brutal correlations from the data, but tries to create explanations. (...) Let's call Artificial Intelligence for what it is and does: a "plagiarism software", because it doesn't create anything, but copies existing works, by existing artists, altering them enough to escape copyright laws. This is the largest intellectual property theft on record since European settlers arrived on Native American lands." Noam Chomsky (1928), New York Times, March 8, 2023
 
User avatar
katastrofa
Posts: 7475
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

November 13th, 2023, 11:49 am

Unsurprisingly, we easily fall for the illusion that our brain generates the right answers so efficiently and elegantly: we have neither a spare brain nor enough data to verify them. Still, the basis of the brain’s operation is associative Hebbian learning. Just like in ChatGPT.
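For the record, a toy sketch of an associative Hebbian update (NumPy, illustrative sizes and seed; Oja's normalisation term is included because the bare Hebb rule diverges):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 4, 3
W = rng.normal(scale=0.1, size=(n_post, n_pre))  # small random synapses
eta = 0.01                                       # learning rate

for _ in range(1000):
    x = rng.random(n_pre)   # presynaptic activity
    y = W @ x               # postsynaptic response
    # Hebb's rule: strengthen each synapse in proportion to the product
    # of pre- and postsynaptic activity ("fire together, wire together").
    # The -y_i^2 * w_i term is Oja's normalisation, without which the
    # plain Hebbian update blows up.
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
```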
 
User avatar
katastrofa
Posts: 7475
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

November 13th, 2023, 11:54 am

In short, TensorFlow draws samples without replacement. That’s inconsistent with the assumptions of the proof of convergence of SGD.
That’s a small, obvious problem among the many others in NN implementations, training and incorrect applications you’d find on a daily basis if you joined the domain. Not that there are many bright people there advancing it in the right direction already.

thank you. i get your point. your points, actually.
No probs. Erratum: I wanted to write that there *are* many bright people in AI advancing it fast. I always eat nots!
 
User avatar
Paul
Posts: 6613
Joined: July 20th, 2001, 3:28 pm

Re: If you are bored with Deep Networks

November 14th, 2023, 11:27 am

Can one of you deep throat experts replace Richard Gere in the film Chicago with someone who has charisma and can dance? Hugh Jackman or Ryan Gosling, say.
 
User avatar
Cuchulainn
Posts: 20317
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

December 16th, 2023, 11:48 pm

Here is a rubber duck (not the movie). How can I determine its material properties (e.g., light reflection/emission properties)? Are pixels enough?
(you know, like a hologram)

[image: rubber duck]
 
User avatar
katastrofa
Posts: 7475
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

December 17th, 2023, 10:58 pm

Here is a rubber duck (not the movie). How can I determine its material properties (e.g., light reflection/emission properties)? Are pixels enough?
(you know, like a hologram)

[image: rubber duck]
I patented a training algorithm which makes the model robust against adversarial attacks :-)
[image]

Reminds me of how an AI researcher once showed me a series of pairs of images, original and adversarial. I recognised all the adversarials without a problem; I could easily see the odd patterns that others couldn’t. Funnily enough, I sent the test to a good friend of mine who’s a spectroscopy expert, and she could see them too. We are yet to meet for a pint to figure out the phenomenon of our sixth sense.
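For context, a minimal sketch of how such adversarial images are typically produced, here the fast gradient sign method (FGSM) in TensorFlow; `model` stands for any Keras classifier and `eps` is an illustrative perturbation budget:

```python
import tensorflow as tf

def fgsm_perturb(model, x, y_true, eps=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/- eps in the
    direction that increases the loss -- visually almost nothing changes,
    but the model's prediction often flips."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            y_true, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + eps * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # stay in valid pixel range
```

The "odd patterns" in the story would be this eps-sized sign-of-gradient texture laid over the whole image.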
 
 
User avatar
Cuchulainn
Posts: 20317
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

March 1st, 2024, 1:46 pm

"Adversarial prompts based on recent research look like a readable phrase concatenated with a suffix of out-of-place words and punctuation marks designed to lead the model astray. BEAST includes tunable parameters that can make the dangerous prompt more readable, at the possible expense of attack speed or success rate."
 
In numerical analysis (matrix inversion, PDEs) this would be called an ill-posed/ill-conditioned problem in the sense of Hadamard, for which regularisation (e.g. Tikhonov) is needed.
I suppose LLMs share some of the same flaws as ANNs, so it doesn't come as a surprise.
Anyway, these models don't seem to be terribly robust.
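To make the analogy concrete, a minimal NumPy sketch of Tikhonov regularisation on an ill-conditioned system (the Hilbert matrix; size and noise level illustrative):

```python
import numpy as np

# A classic ill-conditioned system: the Hilbert matrix. Its condition
# number is astronomical, so naive inversion amplifies tiny noise in b.
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(0).normal(size=n)

x_naive = np.linalg.solve(A, b)  # noise amplified by cond(A) ~ 1e13

# Tikhonov regularisation: minimise ||Ax - b||^2 + lam*||x||^2,
# i.e. solve (A^T A + lam*I) x = A^T b; the penalty damps the
# components belonging to the tiny singular values.
lam = 1e-8
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.linalg.norm(x_naive - x_true))  # typically huge
print(np.linalg.norm(x_tik - x_true))    # typically orders of magnitude smaller
```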
 
 
User avatar
jasonbell
Posts: 60
Joined: May 6th, 2022, 4:16 pm
Location: Limavady, NI, UK
Contact:

Re: If you are bored with Deep Networks

March 2nd, 2024, 8:55 am

I suppose LLMs share some of the same flaws as ANNs, so it doesn't come as a surprise.
Never perfect; think of an LLM as one big overfitted model :)

Anyway, these models don't seem to be terribly robust.
After a week of larking about with orca-mini:2b and gemma:2b to see how to fine-tune them... the large 175B-parameter models will always win. Their outputs are questionable too, though.

Great if you're writing marketing copy, and a handy support if you're stuck on some code. I wouldn't bet the farm on it, though.
Twitter: @jasonbelldata
Author of Machine Learning: Hands on for Developers and Technical Professionals (Wiley).
 
User avatar
Cuchulainn
Posts: 20317
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

March 3rd, 2024, 12:19 pm

We can thank the NYT for promoting AI back in 1959(?), praising Rosenblatt's (flawed) NN model.
 
User avatar
Cuchulainn
Posts: 20317
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

March 3rd, 2024, 12:20 pm

We can thank the NYT for promoting AI back in 1959(?), praising Rosenblatt's (flawed) NN model.
AI has been a story of unfulfilled promises ever since, caught between Scylla and Charybdis.

https://news.cornell.edu/stories/2019/0 ... s-too-soon

If you need a diagram to explain a mathematical concept, then you know something is rotten.
 
User avatar
jasonbell
Posts: 60
Joined: May 6th, 2022, 4:16 pm
Location: Limavady, NI, UK
Contact:

Re: If you are bored with Deep Networks

March 3rd, 2024, 1:56 pm

We can thank the NYT for promoting AI back in 1959(?), praising Rosenblatt's (flawed) NN model.
AI has been a story of unfulfilled promises ever since, caught between Scylla and Charybdis.
Somewhere, in some server room, that 5-ton IBM machine will still be churning out insurance quotes and being billed per query. :)
Twitter: @jasonbelldata
Author of Machine Learning: Hands on for Developers and Technical Professionals (Wiley).
 
User avatar
katastrofa
Posts: 7475
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

March 5th, 2024, 2:47 pm

We can thank the NYT for promoting AI back in 1959(?), praising Rosenblatt's (flawed) NN model.
AI has been a story of unfulfilled promises ever since, caught between Scylla and Charybdis.
Somewhere, in some server room, that 5-ton IBM machine will still be churning out insurance quotes and being billed per query. :)
June 1958. I read that interview many times, and it’s been prophetic given the advances in the field and in computational hardware that followed. Had Rosenblatt not died tragically, things would possibly have advanced even faster... I think all of them (Rosenblatt, Minsky and Papert) were well aware of how to overcome the “flaw” of the one-layer perceptron, and probably even of how to train it (backpropagation was mentioned several times before it finally took hold in 1986).
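To make the “flaw” concrete: a toy NumPy sketch (illustrative sizes and seed) of XOR, which no single-layer perceptron can represent, being learned by one hidden layer trained with backpropagation:

```python
import numpy as np

# XOR: the canonical counterexample to the one-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)      # forward pass
    p = sigmoid(h @ W2 + b2)
    dp = (p - y) * p * (1 - p)    # backward pass, squared-error loss
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= h.T @ dp; b2 -= dp.sum(axis=0)   # gradient step (rate 1.0)
    W1 -= X.T @ dh; b1 -= dh.sum(axis=0)

print(p.round(2).ravel())  # typically converges towards [0, 1, 1, 0]
```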