The only thing that changes is the data passed to the LLM, which on each iteration includes the last token that the LLM itself generated. So yes, sort of: the LLM itself doesn't change state, just the data that is fed into it.
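Roughly, the loop looks like this (a toy sketch in Python; `model` here is just a deterministic stand-in for a real forward pass, and the vocabulary is made up for illustration):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def model(tokens):
    # Stand-in for a real LLM forward pass: same input -> same probabilities.
    rng = random.Random(hash(tuple(tokens)))
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)               # full context, including prior outputs
        next_token = random.choices(VOCAB, weights=probs)[0]
        if next_token == "<eos>":
            break
        tokens.append(next_token)           # the only "state" is this growing list
    return tokens

print(generate(["the", "cat"]))
```

The model function itself is pure; all the apparent statefulness lives in the token list that keeps getting fed back in.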
It’s also non-deterministic insofar as similar inputs won’t necessarily give similar outputs. The only way to actually predict its output is to use the exact same input, and even then all you get is an identical list of token probabilities on the other end. Every LLM chatbot, by default, then makes a random selection based on those probabilities. It can be set to always pick the most probable token, but that tends to produce repetitive, low-quality output.
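To make that concrete, here's a sketch of the two selection strategies over one hypothetical probability list (the numbers are invented for illustration):

```python
import random

# Hypothetical next-token probabilities for some prompt.
probs = {"blue": 0.46, "grey": 0.30, "overcast": 0.20, "falling": 0.04}

# Greedy decoding: always take the most probable token. Fully deterministic,
# but over longer outputs it tends to loop and repeat itself.
greedy_pick = max(probs, key=probs.get)

# Default chatbot behaviour: sample according to the probabilities, so two runs
# with the *identical* input can still diverge.
sampled_pick = random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy_pick, sampled_pick)
```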