  • The real issue is that any fingerprint mandated for AI content must be implemented algorithmically, and anything implemented algorithmically can be removed algorithmically.

    For example, let’s say companies voluntarily adopt, or are forced to adopt, text fingerprinting in LLM output. Automated AI-writing detection tools already exist, but they’re not reliable. In principle, though, we could make LLM output easy to identify: force the models to adopt subtle but highly distinctive patterns of word choice, punctuation, sentence structure, etc. Then if any student uploaded an LLM-generated essay to their course website, the system could flag it as AI-generated with high accuracy.

    But…if those patterns are clear and unambiguous enough to detect reliably, third-party tools can detect them just as easily. If one person can program ChatGPT to add a fingerprint to the text it generates, another person can write a program that takes pasted ChatGPT text and strips that fingerprint back out.
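    The symmetry argument above can be sketched in a few lines. This is a deliberately toy scheme invented for illustration (a zero-width space appended to every third word), not any real LLM watermark proposal — real proposals are statistical — but the insert/detect/remove symmetry is the same:

    ```python
    # Toy fingerprint: append a zero-width space to every 3rd word.
    # Hypothetical scheme for illustration only.
    ZWSP = "\u200b"  # zero-width space: invisible in most renderers

    def add_fingerprint(text: str, every: int = 3) -> str:
        """Mark the text by tagging every `every`-th word."""
        words = text.split(" ")
        return " ".join(
            w + ZWSP if (i + 1) % every == 0 else w
            for i, w in enumerate(words)
        )

    def detect_fingerprint(text: str) -> bool:
        """Flag text that contains the marker character."""
        return ZWSP in text

    def remove_fingerprint(text: str) -> str:
        """Strip the marker -- exactly as easy as adding it."""
        return text.replace(ZWSP, "")
    ```

    Any detector precise enough to flag the pattern gives an attacker exactly the specification needed to write `remove_fingerprint`.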

  • This just doesn’t pass the smell test for me. Why would they need to go to that much trouble? It’s not as if they can’t figure out who is trans through name- and gender-change court records, health records, etc. Even if those records aren’t gathered into a single list anywhere, it wouldn’t be hard to have an AI run through name-change records and flag anyone who changed their name from a typically masculine name to a typically feminine one, or vice versa.

    Sure, there are some very paranoid/privacy-minded trans folks who never put their transness in government or medical records. They never change their name legally, they DIY HRT, etc. Those folks would be difficult to find. However, I imagine most such folks would be far too cautious to join a trans-specific dating app in the first place.