• Andy@slrpnk.net
    5 months ago

    Why do you guarantee that? It seems obviously wrong, on a technical level.

    The point I’m making is that even if we take it as a given that a shrewd enough AI could correctly distinguish sex at birth – which I think is obviously impossible, given the appearances of many cis women and the nature of statistical prediction – you’d still need a training data set.

    If the dataset has any erroneous labels, that corrupts the model’s ability, and the whole point of this exercise is finding passing trans women. Why would anyone expect a training set of hundreds of thousands of supposed cis women not to have a few trans women in it?
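    To make the contamination point concrete, here is a back-of-the-envelope sketch. All the numbers are assumptions for illustration (a hypothetical 300,000-person training set and an assumed ~0.5% trans prevalence, not figures from the thread):

```python
# Back-of-the-envelope label-noise estimate. Every number here is an
# assumption chosen for illustration, not a real statistic.
dataset_size = 300_000      # hypothetical photos labeled "cis woman"
trans_prevalence = 0.005    # assumed ~0.5% of the adult population

# Expected count of trans women sitting inside the "cis" training labels.
mislabeled = int(dataset_size * trans_prevalence)
print(mislabeled)
```

    Under those assumed numbers, the “clean” cis-labeled set would already contain on the order of a thousand-plus trans women – exactly the population the classifier is supposed to detect, now baked into the wrong class.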

    • AlligatorBlizzard@sh.itjust.works
      5 months ago

      Because Facebook’s data practices, and how much users volunteered on there, mean that for some percentage of trans users Facebook knows they’re trans. You also have some percentage of pregnancy photos uploaded: if someone identifies as a woman on Facebook and has uploaded photos with a baby bump, she’s cis (or at least a pre-hatching trans person). And at one point in time, a lot of people just volunteered that info to Facebook.

      • Andy@slrpnk.net
        5 months ago

        Yeah, but the training set is nowhere near clean. That’s my point. “Close” is nowhere near good enough in this context.