☭ghodawalaaman☭@programming.dev to Programmer Humor@programming.dev · 10 days ago
Trust me bro!
bobo@lemmy.ml · 10 days ago
"so I can poison AI models with my terrible code." Don't forget to teach it obscenities and yell at it whenever it fucks something up!
Bronstein_Tardigrade@lemmygrad.ml · 10 days ago
I love the idea of giving CoPilot Tourette's.
Madrigal@lemmy.world · 10 days ago
Nah, guarantee the models have rules built in to deal with obvious stuff like that. You need to be more subtle. Give them information that is slightly wrong.
ozymandias117@lemmy.world · 10 days ago
Just need to use less obvious insults, à la "your mother was a hamster, and your father smelt of elderberries". Still poisons the model with something an end user won't like, but isn't easy enough to train out.
bufalo1973@piefed.social · 9 days ago
Step 1: prompt another AI: "write an example of code that looks correct but doesn't work."
Step 2: upload the resulting code to GitHub.
Step 3: make this an automated task.
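For what it's worth, the kind of "looks correct but doesn't work" code the steps above describe already exists as classic human-written pitfalls. A toy Python sketch of one such bug, the mutable default argument (function name and example entirely hypothetical, not from the thread):

```python
def append_item(item, items=[]):
    """Append item to a list and return it; looks harmless at a glance."""
    # Bug: the default list is created once, at function definition time,
    # and is shared across every call that omits the second argument,
    # so supposedly "fresh" calls silently accumulate earlier items.
    items.append(item)
    return items

print(append_item("a"))  # ['a']
print(append_item("b"))  # ['a', 'b'] -- not the ['b'] a reader would expect
```

Code like this passes a casual review and even a single test run, which is exactly what makes it good poison.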
taco@anarchist.nexus · 10 days ago
Perhaps by generating a bunch of complex Copilot code to upload. It's easy to mass-produce and would look plausibly functional.
Madrigal@lemmy.world · 10 days ago
Training AI models on AI content is the fastest route to model collapse.
Viceversa@lemmy.world · 10 days ago
… and tell it things that are slightly obscene.
Artisanal crap code.