- cross-posted to:
- technology@beehaw.org
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125
That’s a very bold presumption. How can they be so sure that no future models can tackle the issue? Have they got proof or something?
No, they just extrapolate from increasing the size of the training set… it’s not that complicated. Which is a fair presumption, as that is how we’ve increased predictive precision so far.
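The extrapolation argument can be sketched numerically. The paper's core finding is that zero-shot performance improves roughly log-linearly with how often a concept appears in training data, which implies exponentially more data is needed for each fixed gain. The toy constants and data sizes below are made-up illustration values, not figures from the paper:

```python
import math

def zero_shot_score(n_examples, a=0.10, b=0.08):
    """Toy log-linear scaling curve: score = a + b * log10(n).

    a, b are arbitrary illustration constants, not fitted values
    from the paper.
    """
    return a + b * math.log10(n_examples)

# Each 10x increase in data yields the same fixed score bump (b),
# so linear performance gains require exponentially more examples.
for n in [10**3, 10**4, 10**5, 10**6]:
    print(f"{n:>9,} examples -> score {zero_shot_score(n):.2f}")
```

Under this trend, going from a score of 0.34 to 0.58 would take a thousand times more data, which is the "no zero-shot without exponential data" claim in miniature.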