

You are assuming the author is being careless and not auditing the code for even basic security issues.
Let me present another angle: small teams of volunteer open-source developers finally have a way to ease the burden of the code they have to produce, but you want them to keep doing all the work manually because AI hurts your feefees.
Further, you are openly declaring you don’t trust the devs to audit their own code.
If you can find a security vulnerability in the code (it is open source, after all), I’ll concede the point. Otherwise, I think it’s a good thing that responsible AI use can help shoulder the work these folks do for our benefit.

Two things: first, the experiment you are referring to was specifically designed to deceive, whereas any vulnerabilities introduced by AI would just be ordinary bugs, not deliberately disguised ones.
Second, the security requirements of the Linux kernel are far more stringent than those of Lutris, which has no special access and is often further sandboxed when installed via Flatpak.
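For anyone who wants to check that sandboxing claim rather than take my word for it, Flatpak can print exactly what an app is allowed to touch. A minimal example, assuming the standard Flathub package (app ID net.lutris.Lutris):

    # List the sandbox permissions granted to the Lutris Flatpak
    flatpak info --show-permissions net.lutris.Lutris

If anything in that output worries you, permissions can be tightened per-app with flatpak override.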
I just don’t see this as an issue until it proves to be one. People are always welcome to fork a “pure” version.