• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.

    On top of that, Google/Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, the OpenAI API is being shoved everywhere under the sun for all kinds of tasks, and nobody is attempting to clarify its weaknesses in their marketing.

    As it becomes more and more common, more and more users who don’t understand that it’s fundamentally incapable of reliably doing these things will crop up.



  • Eh, this is a thing: large companies often have internal rules and hard caps on how much they can pay for any given job title. For example, on our team everyone we hire is given the title “senior full stack developer”. Not because they’re particularly senior, in some cases we’re literally hiring straight out of college, but because it lets us pay them better given internal company politics.


  • I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

    Because fundamentally it doesn’t understand the words it’s writing. The more I play with it and learn about it, the more it feels like a glorified autocomplete/autocorrect (see the toy sketch below for what I mean). I suspect issues like hallucination and “Waluigis” or “jailbreaks” are fundamental problems for a language model trying to complete a story, as opposed to an actual intelligence acting with a purpose.
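
    To make the “glorified autocomplete” point concrete, here’s a minimal toy sketch in Python (the corpus and all names are invented for illustration): a bigram model that “completes” text purely by looking up which word most often followed the previous one in its tiny training text. Real LLMs replace the lookup table with a transformer trained on vastly more text, but the objective, predicting the next token, is the same.

    ```python
    from collections import Counter, defaultdict

    # Tiny invented "training corpus".
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # For every word, count which words follow it and how often.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def complete(word, steps=5):
        """Greedily append the most frequent next word, one step at a time."""
        out = [word]
        for _ in range(steps):
            seen = following.get(out[-1])
            if not seen:
                break  # nothing ever followed this word in training
            out.append(seen.most_common(1)[0][0])
        return " ".join(out)

    print(complete("the"))  # fluent-looking, meaning-free: "the cat sat on the cat"
    ```

    The output looks like language, but nothing in the model knows what a cat or a mat is; it’s just statistics over word order. That’s the sense in which hallucination isn’t a bug to be patched so much as the mechanism itself.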