
 
User avatar
frolloos
Topic Author
Posts: 1619
Joined: September 27th, 2007, 5:29 pm
Location: Netherlands

Impact factor rankings

January 18th, 2020, 10:45 am

Why isn't Wilmott included in impact factor rankings for quant finance journals? Neither is Risk. I'm not sure how these things work or how relevant the rankings are.
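As far as I can tell, the standard two-year impact factor for year Y is just the number of citations received in year Y to items a journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. With made-up numbers:

IF(2019) = (citations in 2019 to 2017-2018 items) / (citable items published in 2017-2018) = (150 + 100) / (60 + 65) = 250 / 125 = 2.0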
 
User avatar
Alan
Posts: 9975
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Impact factor rankings

January 18th, 2020, 7:12 pm

I will guess that whatever rankings you are looking at arbitrarily exclude, for any field whatsoever, "(trade) magazines" as opposed to "(academic) journals", regardless of the relative quality of the content and/or actual influence/impact. It might matter if you are an academic up for a tenure decision -- and the decision makers care about such things. Conversely, it could matter in the other direction for a hiring manager at a trading desk.
 
User avatar
bearish
Posts: 5393
Joined: February 3rd, 2011, 2:19 pm

Re: Impact factor rankings

January 18th, 2020, 8:38 pm

I don't have the actual answer, but the impact factor business is a private equity thing. So the goal is not so much to further scientific discourse as to make money. A bit like college ratings, but possibly not as corrosive. To be clear, I'm not against the making of money per se, but when it gets tangled up with science and education, the track record is iffy at best.
 
User avatar
frolloos
Topic Author
Posts: 1619
Joined: September 27th, 2007, 5:29 pm
Location: Netherlands

Re: Impact factor rankings

January 19th, 2020, 8:50 am

Yes, the Journal of Risk has an impact rating, but Risk magazine, the one I meant, I believe does not.

It would be silly if something were excluded from impact ratings merely because it is called a "magazine" rather than a "journal", but that may be the reason. It's hard to argue that SABR and local volatility, to name just two, both published in mere magazines, have had no impact.

I suppose journal impact rating databases are like credit rating agencies; nice, but not to be taken too seriously.
 
User avatar
katastrofa
Posts: 8772
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Impact factor rankings

January 19th, 2020, 9:10 am

It’s “pop-finance”, just as Scientific American is a pop-science magazine.
I think you will gain more recognition in your circles from publishing in Risk magazine, but inspire more scientific debate (as measured by IF) if you publish in the Journal of Risk. Sounds right to me.
 
User avatar
Collector
Posts: 4436
Joined: August 21st, 2001, 12:37 pm

Re: Impact factor rankings

January 19th, 2020, 5:45 pm

Some journals should consider adding a minus sign in front of their impact factor and then adopting the mantra "all PR is good PR". For example, journals where some of the most cited papers have been shown to be based on garbage statistics.

I've also heard that journals need to pay the rating agency to get an official impact factor. (?)
 
User avatar
frolloos
Topic Author
Posts: 1619
Joined: September 27th, 2007, 5:29 pm
Location: Netherlands

Re: Impact factor rankings

January 20th, 2020, 7:51 am

>For example, journals where some of the most cited papers have been shown to be based on garbage statistics.

Which journal(s)/paper(s)?

There seems to be a general irreproducibility crisis in scientific research: https://www.nature.com/collections/prbfkwmwvz
 
User avatar
katastrofa
Posts: 8772
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Impact factor rankings

January 20th, 2020, 4:31 pm

From the point of view of hard-core statistics, that's not unexpected: you can't step into the same river twice (where the river is spacetime). It's of course good to know why the results have changed.

There are also accidents like RANDU, an IBM PRNG widely used in the 60s and 70s before it was shown to fail the spectral test terribly. I wonder how much research from that period is invalid because of that mishap. Had they used it in the Manhattan Project (for Monte Carlo simulations), we might have lost the war!
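To make the RANDU point concrete: the multiplier is 65539 = 2^16 + 3, so 65539^2 = 6*65539 - 9 (mod 2^31) and every output satisfies x_{n+2} = 6*x_{n+1} - 9*x_n (mod 2^31). That is why consecutive triples collapse onto just 15 planes in the unit cube and the spectral test flags it. A small C++ check of the lattice relation (my own sketch, just for illustration):

[code]
#include <cstdint>
#include <iostream>

// RANDU: x_{n+1} = 65539 * x_n mod 2^31, with an odd seed.
// Since 65539^2 mod 2^31 = 6*65539 - 9, every output satisfies
// x_{n+2} = 6*x_{n+1} - 9*x_n (mod 2^31), so consecutive triples
// collapse onto 15 planes in the unit cube - the spectral test's nightmare.
int main() {
    const std::uint64_t m = 1ull << 31;
    auto next = [m](std::uint64_t x) { return (65539 * x) % m; };

    std::uint64_t x0 = 1;                 // any odd seed
    std::uint64_t x1 = next(x0), x2 = next(x1);
    for (int i = 0; i < 10; ++i) {
        // 6*x1 - 9*x0 mod m, arranged so unsigned arithmetic never underflows
        std::uint64_t predicted = (6 * x1 + 9 * (m - x0)) % m;
        std::cout << "triple " << i << ": x2 = " << x2
                  << ", 6*x1 - 9*x0 mod 2^31 = " << predicted
                  << (x2 == predicted ? "  (matches)" : "  (mismatch)") << '\n';
        x0 = x1;
        x1 = x2;
        x2 = next(x1);
    }
}
[/code]

Plot (x_n, x_{n+1}, x_{n+2})/2^31 for a few thousand draws, rotate the cloud, and you can see the planes with the naked eye.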


Revolutionary research like that published on viXra should have its IF counted in imaginary units, defined as the rate of success with girls in clubs ;-)
 
User avatar
Cuchulainn
Posts: 61185
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Impact factor rankings

January 20th, 2020, 5:54 pm


>There are also accidents like RANDU, an IBM PRNG widely used in the 60s and 70s before it was shown to fail the spectral test terribly. I wonder how much research from that period is invalid because of that mishap. Had they used it in the Manhattan Project (for Monte Carlo simulations), we might have lost the war!

That's exactly what I told a group of 22 C++ students yesterday in the West Midlands. They even used it in an MC simulation. It's also slower than std::mt19937.

minstd_rand0 (C++11): discovered in 1969 by Lewis, Goodman and Miller, adopted as the "Minimal standard" in 1988 by Park and Miller.
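If anyone wants to check the speed claim on their own machine, a toy comparison along these lines would do (my own sketch: the pi-by-Monte-Carlo kernel, the seed and the 10^7 sample count are arbitrary choices, and the relative timings depend entirely on compiler, flags and hardware):

[code]
#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>

// Toy benchmark: estimate pi by Monte Carlo with two standard-library
// engines and time each run. Results depend heavily on compiler and flags.
template <typename Engine>
double pi_mc(std::size_t n, unsigned seed) {
    Engine eng(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::size_t inside = 0;
    for (std::size_t i = 0; i < n; ++i) {
        double x = unif(eng), y = unif(eng);
        if (x * x + y * y < 1.0) ++inside;
    }
    return 4.0 * static_cast<double>(inside) / static_cast<double>(n);
}

template <typename Engine>
void run(const char* name, std::size_t n) {
    auto t0 = std::chrono::steady_clock::now();
    double pi = pi_mc<Engine>(n, 42u);
    auto t1 = std::chrono::steady_clock::now();
    std::chrono::duration<double> dt = t1 - t0;
    std::cout << name << ": pi ~ " << pi << "  (" << dt.count() << " s)\n";
}

int main() {
    const std::size_t n = 10000000;       // 1e7 pairs of uniforms per engine
    run<std::minstd_rand0>("minstd_rand0", n);
    run<std::mt19937>("mt19937     ", n);
}
[/code]

Whichever engine wins on a given box, the more important difference in practice is statistical quality, where mt19937 is in a different league from a 1969-vintage LCG.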
“Done in 1609, Kepler's fakery is one of the earliest known examples of the use of false data by a giant of modern science. Donahue, a science historian, turned up the falsified data while translating Kepler's master work, Astronomia Nova, or The New Astronomy, into English.”
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
frolloos
Topic Author
Posts: 1619
Joined: September 27th, 2007, 5:29 pm
Location: Netherlands

Re: Impact factor rankings

January 21st, 2020, 6:12 am

>Done in 1609, Kepler's fakery is one of the earliest known examples of the use of false data by a giant of modern science.

So Kepler practised fake news. Disappointing.

I'm willing to wager one Higgs boson that CERN's CMS results are irreproducible too. 

On another topic, I tend to shy away from biographies of scientists, but has anyone read The Strangest Man? Would you recommend it? Much has been written and said about Einstein and Feynman, but less so about Dirac.
 
User avatar
Cuchulainn
Posts: 61185
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Impact factor rankings

January 21st, 2020, 9:47 am

Another forgotten scientist is Gordon Welchman. I bet very few people have heard of him. He was the architect of Hut 6 at Bletchley Park and even worked for the Americans during the Cold War.
https://www.bookdepository.com/The-Hut- ... oUQAvD_BwE

The self-effacing statistics professor at TCD who lectured us had also worked at Bletchley Park, but no one knew it until after he passed away. He also set up the 9-digit code that became the ISBN.

https://en.wikipedia.org/wiki/Gordon_Foster
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
katastrofa
Posts: 8772
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Impact factor rankings

January 22nd, 2020, 2:48 pm

>>Done in 1609, Kepler's fakery is one of the earliest known examples of the use of false data by a giant of modern science.
>
>So Kepler practised fake news. Disappointing.
>
>I'm willing to wager one Higgs boson that CERN's CMS results are irreproducible too.
>
>On another topic, I tend to shy away from biographies of scientists, but has anyone read The Strangest Man? Would you recommend it? Much has been written and said about Einstein and Feynman, but less so about Dirac.

It may be disappointing to a modern researcher, but remember that it was long before science established its (to us, the only correct and obvious one) modern method - with all the good and bad (and the ugly) habits... Such "tampering" with data was quite popular in Kepler's times, as old-time scientists looked for data to confirm their vision of nature rather than the other way around (much as ML does nowadays). If you imagine a scientific discipline at the early stages of building its foundations, it's quite understandable: you need to sketch some logical shapes (inspired by your broader knowledge and intuition - or the fear of The Holy Inquisition) into which you can start fitting the surrounding chaos. If it more or less fits, adding some missing points or shifting them a bit up or down to comply with your theory is a matter of aesthetic correction. A sort of Kepler's licentia astronomica :-) Others will come, build telescopes and correct the commas.
It's not a strictly on-point comment, but that's how I imagine being in Kepler's shoes. I believe it started to change around Newton (when natural science split into hard sciences and philosophy; poor Newton couldn't come to terms with this division, so he did what you Western folk do in such situations - turn to occultism). Anyway, I'm much more appalled by the pervasive habits of data dredging and multiple comparisons in modern data science than by Kepler's cheats.
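Since multiple comparisons came up, a toy simulation makes the point stark (my own sketch; the 20 hypotheses, 30 observations per test and the normal approximation are arbitrary choices): test enough pure-noise hypotheses at the 5% level and a "significant" finding is all but guaranteed.

[code]
#include <cmath>
#include <iostream>
#include <random>

// Multiple comparisons under the null: each test alone has a 5% false
// positive rate, but with K independent tests per "study" the chance of
// at least one spurious hit is roughly 1 - 0.95^K (about 64% for K = 20).
int main() {
    const int trials = 100000;    // simulated null "studies"
    const int K = 20;             // hypotheses tested per study
    const int n = 30;             // observations per test
    const double z_crit = 1.96;   // two-sided 5% critical value (normal approx.)

    std::mt19937 rng(12345);
    std::normal_distribution<double> noise(0.0, 1.0);

    int dredged = 0;              // studies reporting at least one "effect"
    for (int t = 0; t < trials; ++t) {
        bool any_significant = false;
        for (int k = 0; k < K && !any_significant; ++k) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i) sum += noise(rng);
            double z = (sum / n) * std::sqrt(static_cast<double>(n));  // z-score of pure noise
            any_significant = std::fabs(z) > z_crit;
        }
        if (any_significant) ++dredged;
    }
    std::cout << "fraction of null studies with a 'significant' finding: "
              << static_cast<double>(dredged) / trials
              << "  (theory: " << 1.0 - std::pow(0.95, K) << ")\n";
}
[/code]

With any C++11 compiler it prints a fraction around 0.64, close to the 1 - 0.95^20 you would expect.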
 
User avatar
frolloos
Topic Author
Posts: 1619
Joined: September 27th, 2007, 5:29 pm
Location: Netherlands

Re: Impact factor rankings

January 22nd, 2020, 5:37 pm

>but remember that it was long before science established its (to us, the only correct and obvious one) modern method - with all the good and bad (and the ugly) habits... Such "tampering" with data was quite popular in Kepler's times,

Even now there could be some tampering going on, perhaps not in the sense of blatant manipulation of data or results, but more in the 'marketing' sense: for instance, showing only the particular situations in which one's proposed model performs well and not pointing out the circumstances in which it utterly fails. This is not dishonesty, and perhaps some models are too complex to say exactly when they could fail, but it's not entirely in the spirit of scientific research either.