For background, I am a programmer, but have largely ignored everything having to do with AI (re: LLMs) for the past few years.

I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?

Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?

I just feel like there might be some optimization to be gained by something that understands both the code and the machine at that level.
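To make the gap concrete, here's roughly what the two forms look like for a trivial function. This is a minimal sketch: I'm assuming x86-64 with the System V calling convention, and the exact instructions and bytes depend on the compiler and flags.

    /* The human-readable form a reviewer can check at a glance. */
    int add(int a, int b) {
        return a + b;
    }

    /*
     * What an optimizing compiler typically emits for this on x86-64
     * (System V ABI: a arrives in edi, b in esi, result returned in eax):
     *
     *     lea eax, [rdi+rsi]
     *     ret
     *
     * As raw bytes, that's just: 8d 04 37 c3
     */

That's the gap I mean: the four raw bytes do the same thing as the few lines of C, but only one of the two is easy for a person to read, review, or edit.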

  • f43r05@lemmy.ca · 17 hours ago

    This here. Black box machine code, created by a black box, sounds terrifying.

    • HubertManne@piefed.social · 17 hours ago

      I mean, we know the code doesn't always work, and even when it does it's often not the cleanest. If AI-generated code were reliable in a six-sigma way, say 99.999% of the time, then I could see accepting the black box and only digging into the low-level output when something goes wrong. Even then, any time it didn't work you'd need it to hand you a human-readable version so we could find the bug, but if it were that good, that should only happen like once a year or something.