How would they know, though, if the human operating the LLM removes the stupid comments?
I suspect just asking would work. The number of people that will use AI to make sloppy PRs is going to be a lot higher than the number that will bare-faced lie about having used AI.
You’re absolutely right! It’s not just flawed — it’s impossible to enforce.
/s
More seriously, the core issue isn’t completely novel to large established open-source projects. How do they deal with the possibility that someone might be contributing code from, say, a closed-source competing product (or one whose licence is otherwise incompatible)?
The same answer ought to work here, probably.