• Zerush@lemmy.ml
    7 days ago

    Well, they “know” only in the sense that chatbots are built on an existing knowledge base scraped from web content, and that base is rarely updated (ChatGPT’s can be years old). So in a chat, the LLM can pick up the concept behind your question, but because its “knowledge” is limited it drifts into inventions; it lacks reasoning, which is exactly what an AI doesn’t have. This is a minor issue with search bots, because they don’t have the capability to chat with you and imitate a human: they are limited to extracting the concept of your question, searching the web for it, and comparing several pages to produce a summary. That is a very different approach from a chat AI. Yes, they can also give BS as an answer, depending on the pages they consult, the same way a normal web search can land you on flat-earther pages, but this is much less of a problem with search AIs than with chatbots.
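
    The retrieve-compare-summarize pattern described above can be sketched roughly like this (a hypothetical illustration, not Andisearch’s or anyone’s real API; `fetch_pages` and `summarize` are made-up placeholders passed in by the caller):

```python
def answer_with_sources(query, fetch_pages, summarize, n_pages=5):
    """Sketch of a search-bot answer: summarize several pages for a
    query and keep the source URLs so the reader can fact-check.

    fetch_pages(query) -> list of {"url": ..., "text": ...} dicts
    summarize(texts)   -> one summary string for a list of page texts
    Both are hypothetical placeholders supplied by the caller.
    """
    pages = fetch_pages(query)[:n_pages]            # several independent sources
    summary = summarize([p["text"] for p in pages])  # compare/condense them together
    sources = [p["url"] for p in pages]              # links kept for verification
    return {"summary": summary, "sources": sources}
```

    The key difference from a chatbot is visible in the return value: the answer never stands alone, it always carries the pages it was derived from.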

    AI is a tool and we have to use it as such, to help us with research and tasks, not to substitute our own intelligence and creativity; that substitution is the real problem nowadays. For example, I have several posts on Lemmy in World News, Science and Technology based on articles and scientific papers I found on the web, most of them long texts. Because of this I also post a summary made by Andisearch, which is consistently accurate and adds several different sources on the issue, so you can check the content. The other reason I like Andisearch is that when it doesn’t find an answer, it doesn’t invent one; it simply offers you a normal web search to do yourself, using a search API from DDG and other privacy search engines.

    Anyway, using AI for research always requires a fact check before we use the content; the only real error is to use the answers as-is, or to use biased AI from big (US) corporations. With almost 8,000 different AI apps and services currently in existence, specialized for very different tasks, we can’t generalize about all of them from the BS produced by the chatbots of Google, M$, METH, Amazon & co.; we can only blame the lack of our own common sense, like a kid with a new toy. The differences are too big.

    • B0rax@feddit.org
      7 days ago

      Again: LLMs don’t know anything. They don’t have a “knowledge base” like you claim, as in a database where they look up facts. That is not how they work.

      They give you the answer that sounds most likely like a response to whatever prompt you give them. Nothing more. It is surprising how well this works, but it will never be 100% fact-based.
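
      A toy illustration of that point (the probability table below is entirely made up; a real LLM scores continuations with a neural network, not a lookup table):

```python
# Made-up continuation probabilities for two prompts. The model picks
# whatever scores highest -- it has no notion of whether it is true.
next_token_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "purple": 0.03},
    "The first person on Mars was": {"Neil": 0.50, "nobody": 0.30, "Elon": 0.20},
}

def most_likely_next(prompt: str) -> str:
    """Return the highest-probability continuation, true or not."""
    probs = next_token_probs[prompt]
    return max(probs, key=probs.get)
```

      For the first prompt the most likely continuation happens to be correct; for the second it is a confident invention, because "sounds most likely" and "is a fact" are different things.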