• skisnow@lemmy.ca
    4 days ago

    Mathematically you might be able to prove AIs can’t always avoid it (and I’m not convinced even of that; I don’t think there is an inherent contradiction like the one used in the proof of the Halting Problem). But the bar for acceptable false positives is sufficiently low, and the scenario is such an edge case of an edge case of an edge case, that anyone trying to use the whole principle to argue that AIs will always hallucinate is grasping at straws, big time.
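
    For reference, the Halting Problem contradiction mentioned above comes from diagonalization: any claimed halting oracle can be fed a program built to do the opposite of whatever the oracle predicts about it. A minimal Python sketch (all names here are hypothetical illustration, not anyone’s actual proof code):

    ```python
    def make_contrarian(halts):
        """Given any claimed halting oracle `halts(f)`, build a program
        that contradicts the oracle's verdict about itself."""
        def g():
            if halts(g):        # oracle claims g halts...
                while True:     # ...so g loops forever
                    pass
            # oracle claims g never halts, so g returns immediately
        return g

    # Whatever the oracle answers about g, it is wrong:
    #  - if halts(g) is True, g loops forever
    #  - if halts(g) is False, g halts at once
    # Demo with a toy "oracle" that always answers False:
    g = make_contrarian(lambda f: False)
    g()  # returns immediately, refuting the "never halts" verdict
    ```

    The point of contention in the comment is that hallucination arguments have no analogous self-referential construction, so the analogy to this proof doesn’t carry over cleanly.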