For background, I am a programmer, but have largely ignored everything having to do with AI (read: LLMs) for the past few years.

I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for an x86 CPU to execute?

Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?

I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.
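Just to make the question concrete, here is a minimal sketch of what "raw 1s and 0s for x86 to execute" actually looks like. The bytes below are hand-assembled (not AI-generated), and this assumes x86-64 Linux, where Python's `mmap` can request an executable page (hardened kernels may forbid writable+executable mappings).

```python
# Executing raw machine code from Python. x86-64 Linux only;
# the byte string was hand-assembled, not produced by an AI.
import ctypes
import mmap

# mov eax, 42 ; ret  -- a function that just returns 42
code = b"\xb8\x2a\x00\x00\x00\xc3"

# Ask the OS for a page the CPU is allowed to execute
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Treat the page's address as a C function pointer and call it
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())  # prints 42
```

Even this six-byte function only works on one architecture and one calling convention, which hints at why emitting portable high-level source is the easier target.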

  • Shimitar@downonthestreet.eu · +70/−1 · 11 hours ago

    They would not be able to.

    AIs only mix and match what they have copied from human work, and most of the code out there is in high-level languages, not machine code.

    In other words, AIs don’t know what they are doing; they just maximize the probability of an answer, that’s all.

    But really, the objective is to provide a human with more or less correct boilerplate code, and humans wouldn’t read machine code.

      • Riskable@programming.dev · +15/−1 · 11 hours ago

        To add to this: It’s much more likely that AI will be used to improve compilers—not replace them.

        Aside: AI is so damned slow already. Imagine AI compiler times… Yeesh!

        • naught101@lemmy.world · +2 · 3 hours ago

          Strong doubt that AI would be useful for producing improved compilers. That’s a task that would require an extremely detailed understanding of the logical edge cases in a given language’s translation to machine code. By definition, no content exists that can be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines, but I can’t see them being actually useful.

    • Thaurin@lemmy.world · +9/−1 · 11 hours ago

      This is not necessarily true. Many models have been trained on assembly code, and you can ask them to produce it. Some mad lad created scripts a while ago to let an AI “compile” to assembly and create an executable. It sometimes worked for simple “Hello, world” type stuff, which is hilarious.

      But I guess it is easier for a large language model to produce working code in a higher-level programming language, where concepts and functions are better represented in the corpus it was trained on.
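      For reference, the “compile via LLM” trick described above amounts to asking the model for something like the following (a standard hand-written x86-64 Linux “Hello, world” in GNU assembler syntax, shown here as an illustration, not AI output):

      ```asm
      .global _start
      .text
      _start:
          mov $1, %rax          # syscall number: sys_write
          mov $1, %rdi          # fd 1 = stdout
          lea msg(%rip), %rsi   # pointer to the message
          mov $14, %rdx         # message length in bytes
          syscall
          mov $60, %rax         # syscall number: sys_exit
          xor %rdi, %rdi        # exit status 0
          syscall
      msg:
          .ascii "Hello, world!\n"
      ```

      which the scripts would then feed to a real assembler and linker (e.g. `as hello.s -o hello.o && ld hello.o -o hello`). Get a single register or syscall number wrong and the program crashes, which is presumably why it only “sometimes worked.”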