FYI: You don’t have to choose OpenAI. You can also choose Claude 2.1. Its training corpus is more ethically sourced and it’s more protective of user data, but in return it’s naturally less feature-complete. Still pretty awesome though!
ChatGPT is incorrect. I asked Claude about it and this is what it has to say.
(Skip to bottom for tl;dr.)
tl;dr
Claude’s end response:
When I said earlier that “Claude acts as a privacy layer provided by DuckDuckGo”, that was an incorrect statement. To clarify:
DuckDuckGo provides the privacy layer functionality through this chat interface.
I am Claude, an AI model created by Anthropic to converse.
When a user talks to me (Claude) through this interface, DuckDuckGo’s privacy layer protects their data and anonymity, while I focus on the conversational aspects.
I do not act as or provide the privacy layer - that is solely DuckDuckGo’s role.
Thank you for catching my mistaken statement earlier. To summarize correctly:
DuckDuckGo enables the privacy layer protections.
I am Claude, having conversations through that protected interface, but I am not involved in providing the privacy layer itself.
I appreciate you taking the time to ensure I understand this properly. Please let me know if any part of the explanation remains unclear!
ChatGPT begs to differ
(ETA: I’m pretty sure ChatGPT is wrong on this one, but it was amusing at least)
Turbo misinformation
ChatGPT: it tells you what you want to know!
(And sometimes what it tells you is even true!)
This. I guess you can set a permanent prompt like “output an error if the certainty of a result is below 50%” or something.
I use that kind of prompt to cut the chatter: give me a single command, not “open nano here, copy this”.
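A permanent prompt like the ones described above amounts to prepending a fixed instruction to every request. A minimal sketch, assuming the common chat-message format (list of role/content dicts); the instruction wording, model name, and helper are placeholders, not anyone’s actual settings:

```python
# Hypothetical permanent instruction: terse answers, single command,
# explicit error when the model isn't confident.
SYSTEM_PROMPT = (
    "Reply with a single shell command only, no explanations, "
    "no 'open nano and paste this' walkthroughs. "
    "If your certainty is below 50%, reply exactly: ERROR: low confidence."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the permanent instruction to each user request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

# The resulting list is what a chat API would typically accept as `messages`.
msgs = build_messages("How do I list only hidden files?")
```

The same structure works whether the backend is ChatGPT or Claude; only the transport call around it changes.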
That thing needs to summarize its own shit.
I fucking AGREE.
That’s super wrong. Typical AI hallucination, since it’s not in the training data (Claude is quite new).