But the fact that it can do so much is an awesome (and maybe scary) result in and of itself. These LLMs can write working code examples, write convincing stories, give advice, solve simple problems quite reliably, and more, all from just learning to predict the next word. I feel like people are moving the goalposts way too quickly, focusing so much on the mistakes it makes instead of the impressive feats that have been achieved. Having AI do all this was simply unthinkable a few years ago. And yes, OpenAI is currently using a lot of hardware, and ChatGPT might indeed have gotten worse. But none of that changes what has been achieved and how impressive it is.
Maybe it’s because of all these overhyped clickbait articles that reality seems disappointing. As someone in the field who’s always been cynical about what would be possible, I can’t be anything other than impressed with the rate of progress. I was wrong with my predictions 5 years ago, and who knows where we’ll go next.