• Australis13@fedia.io

    Why? LLMs are built by training machine learning models on vast amounts of text data; essentially, they look for patterns. We’ve seen this repeatedly with other LLM behaviour around race and gender, highlighting the underlying bias in the training data. This would be no different, unless you’re disputing that there’s a possible correlation between bad code and fascist/racist/sexist tendencies?
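
    A toy sketch of the mechanism (a simple bigram counter with a made-up corpus, nothing like a real LLM, but the pattern-matching principle is the same): the model has no views of its own; it just reproduces whatever correlations its training text happens to contain.

    ```python
    from collections import Counter, defaultdict

    # Toy corpus: the text deliberately pairs certain subjects with
    # certain verbs, standing in for skewed real-world training data.
    corpus = [
        "the engineer fixed the bug",
        "the engineer fixed the server",
        "the nurse comforted the patient",
        "the nurse comforted the family",
    ]

    # Count which word follows each word across the corpus.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1

    # "Prediction" is just the most frequent continuation seen in training.
    def predict_next(word: str) -> str:
        return follows[word].most_common(1)[0][0]

    print(predict_next("engineer"))  # -> "fixed" (only because the data said so)
    print(predict_next("nurse"))     # -> "comforted" (ditto)
    ```

    If the corpus consistently pairs two things (a group with a stereotype, or bad code with a particular ideology), the model will surface that pairing; it can’t do otherwise.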