• Personally I don’t see why the views of those who write software should really concern us, as long as the technical implementation is not biased. It’s open-source, and people can take it and do with it what they please. No one is forcing you to accept certain views or to think about things critically (including assessing others’ viewpoints that may differ from yours). I feel it’s a bit of a waste of time to worry about these things.

    • mathemachristian[he]@lemmy.ml · 1 day ago

      It is. In the instance chooser there is a “defederation score” that rates you “good” if you have defederated from a sufficient number of instances Rimu doesn’t like.

    • eldavi@lemmy.ml · 1 day ago

      biases manifest in really strange and unexpected ways, and you’ll fail even when you try to intentionally account for them; that’s why facial recognition success rates correlate with the darkness of your skin, and why successful ai recognition of text/speech depends on how different your language is from english or mandarin.

      the only way to successfully guard against bias in software development is to have developer teams comprised of people who can naturally keep each other’s biases in check.

      • I think facial recognition technology is very different from threadiverse software. The fact that those technologies are trained on predominantly white data is no surprise; both of your examples are data-based (ML models) where the data itself contains the bias.

        I am talking more about the open-source projects themselves. It’s important, as you rightfully call out, that we have a varied group of opinions within the developer group 👍

        • eldavi@lemmy.ml · 3 hours ago

          it’s not just ai or its training data; it’s the developers themselves too, including fediverse ones.

          the biggest non-ai/training-data examples that i can think of come from times before ai was ever a thing, like:

          • usps had difficulty validating addresses because the software they obtained assumed a euro-centric naming schema (see the sketch after this list)

          • airlines, health care providers, hotels, and state motor vehicle departments rejected registrations/reservations because trans people had no option to select their sex

          • health care providers misdosed patients because the software they used didn’t account for highly athletic/bodybuilding people or people with chronic conditions

          there are SO MANY examples out there where the bias clearly comes from the developer instead of the training data, and there’s no way that any piefed developer is immune or can even effectively mitigate their own biases.
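          to make the usps example concrete, here’s a toy sketch (hypothetical code, not anything usps actually ran) of how a developer’s assumptions about names reject valid input with no training data involved:

          ```python
          import re

          # hypothetical illustration, not actual usps code: a name validator
          # whose developer assumed every name is "First Last" in ascii letters
          NAME_PATTERN = re.compile(r"^[A-Za-z]+ [A-Za-z]+$")

          def is_valid_name(name: str) -> bool:
              """accepts only names matching the developer's mental model."""
              return bool(NAME_PATTERN.match(name))

          print(is_valid_name("John Smith"))         # True
          print(is_valid_name("María José García"))  # False: accents, three name parts
          print(is_valid_name("Sukarno"))            # False: mononym
          print(is_valid_name("李小龙"))              # False: non-latin script
          ```

          every rejection here comes straight from the regex the developer wrote, not from any dataset.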

    • Diva (she/her)@lemmy.ml · 1 day ago

      > as long as the technical implementation is not biased.

      Piefed has been baking in biases, though, like putting a bunch of leftist/international sources in a huge blocklist labelled ‘qanon’, which has to be disabled manually if you don’t want it.

      Not to mention their clear disregard for breaking ActivityPub interoperability.