I recently listened to a talk by a post-doc from Queen Mary University in London. She was optimising the random seed of the PRNG used by her model as a hyper-parameter, showing plots of the reward function for "good seeds" and "bad seeds". To her credit, she did say "some people may find it controversial". Eleven authors, several universities involved, and this research was funded.
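The practice described above is easy to reproduce in miniature. Here is a minimal sketch (not the talk's actual setup, just an illustration): a toy "training run" whose outcome depends on its random initialisation, with the seed then ranked like a hyper-parameter. The objective and all constants are made up for the example.

```python
import math
import random

def objective(x):
    # A bumpy 1-D objective: hill-climbing can get stuck in local optima,
    # so the final "reward" depends heavily on the random starting point.
    return math.sin(5 * x) + 0.5 * x

def train(seed, steps=200):
    rng = random.Random(seed)
    x = rng.uniform(-2.0, 2.0)  # random init, fully determined by the seed
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.05)
        if objective(candidate) > objective(x):
            x = candidate
    return objective(x)

# "Tuning" the seed: rank seeds by final reward and pick the best one.
scores = {seed: train(seed) for seed in range(10)}
best = max(scores, key=scores.get)
spread = max(scores.values()) - min(scores.values())
print(f"best seed: {best}, reward spread across seeds: {spread:.2f}")
```

The nonzero spread is the whole story: the seed changes which basin the optimiser lands in, so seed selection "improves" the reward without telling you anything about the model.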
LOL!
Falsehoods also pass through the same three stages. Some even come out of stage 3 stronger than ever.
Why do you think it's so bad? Do you mean the execution by the authors is bad, or that the whole approach is just idiotic?

Applications of maths and computers in magic are still undervalued. To start with: https://arxiv.org/abs/1709.03803
I briefly wondered once whether various Deep Learning models that require random inputs (e.g. autoencoders) could learn to predict the next random value generated, thus "cheating". But I doubt it's possible without a humongous training sample and a large model capacity.

They could guess the PRNG's seed state based on just a few output numbers.

But you do use cryptographically secure PRNGs, don't you?