For background: I am a programmer, but I have largely ignored everything to do with AI (read: LLMs) for the past few years.

I just got to wondering, though: why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?

Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?

I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.
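
For concreteness, here's roughly what I mean by raw 1s and 0s. A minimal sketch, assuming Linux on x86-64 (Python is just the harness; the four bytes are the real encodings of the two instructions shown in the comments):

```python
# A minimal sketch of executing raw machine code: the x86-64 encoding of
# f(x) = x + 1, run straight from memory. Assumes Linux on x86-64; some
# hardened kernels refuse writable+executable mappings.
import ctypes
import mmap

code = bytes([
    0x8D, 0x47, 0x01,  # lea eax, [rdi + 1]   ; return (first arg) + 1
    0xC3,              # ret
])

buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
f = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(addr)
print(f(41))  # prints 42
```

Even this four-byte function is tied to one architecture, one calling convention, and one OS's memory-protection rules.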

  • Shimitar@downonthestreet.eu · 11 hours ago

    They would not be able to.

    AIs only mix and match what they have copied from human work, and most of the code out there is in high-level languages, not machine code.

    In other words, AIs don't know what they are doing; they just maximize the probability of giving you an answer, that's all.

    But really, the objective is to provide a human with more or less correct boilerplate code, and humans would not read machine code.

      • Riskable@programming.dev · 11 hours ago

        To add to this: It’s much more likely that AI will be used to improve compilers—not replace them.

        Aside: AI is so damned slow already. Imagine AI compiler times… Yeesh!
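
        For flavor, the "improve" route is already being explored in small ways (e.g., there has been work on ML-tuned inlining heuristics in LLVM). The passes themselves stay ordinary, verifiable tree rewrites; here's a toy constant-folding pass over Python's own AST, purely illustrative:

        ```python
        # Toy compiler pass: fold constant additions in an AST. Illustrative
        # of the kind of mechanical rewrite a compiler applies; an ML system
        # would at most tune *when* to apply such passes, not replace them.
        # Requires Python 3.9+ for ast.unparse.
        import ast

        class ConstantFolder(ast.NodeTransformer):
            def visit_BinOp(self, node):
                self.generic_visit(node)  # fold children first (bottom-up)
                if (isinstance(node.left, ast.Constant)
                        and isinstance(node.right, ast.Constant)
                        and isinstance(node.op, ast.Add)):
                    return ast.copy_location(
                        ast.Constant(node.left.value + node.right.value), node)
                return node

        tree = ast.parse("x = 2 + 3 + y")
        folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
        print(ast.unparse(folded))  # x = 5 + y
        ```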

        • naught101@lemmy.world · 3 hours ago

          Strong doubt that AI would be useful for producing improved compilers. That's a task that would require an extremely detailed understanding of the logical edge cases of a given language-to-machine-code translation. By definition, no content exists that can be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines. But I can't see them being actually useful.

    • Thaurin@lemmy.world · 11 hours ago

      This is not necessarily true. Many models have been trained on assembly code, and you can ask them to produce it. Some mad lad created some scripts a while ago to let AI “compile” to assembly and create an executable. It sometimes worked for simple “Hello, world” type stuff, which is hilarious.

      But I guess it is easier for a large language model to produce working code in a higher-level programming language, where concepts and functions are better defined in the corpus it was trained on.
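
      A hedged sketch of what those scripts presumably did; the get_asm_from_model() stub is hypothetical and stands in for the actual LLM call, and Linux/x86-64 with gcc installed is assumed:

      ```python
      # Sketch of "LLM as compiler": a model emits assembly, and gcc
      # assembles and links it. The stub below replaces the real LLM call
      # with hand-written x86-64 AT&T assembly. Linux/x86-64 with gcc.
      import os
      import subprocess
      import tempfile

      def get_asm_from_model(prompt: str) -> str:
          # Hypothetical stand-in for an LLM request.
          return """
          .text
          .globl main
      main:
          sub  $8, %rsp             # align the stack for the call
          lea  message(%rip), %rdi
          call puts
          xor  %eax, %eax
          add  $8, %rsp
          ret
          .section .rodata
      message:
          .asciz "Hello, world"
      """

      asm = get_asm_from_model("x86-64 hello world, AT&T syntax")
      with tempfile.TemporaryDirectory() as d:
          src, exe = os.path.join(d, "out.s"), os.path.join(d, "out")
          with open(src, "w") as f:
              f.write(asm)
          subprocess.run(["gcc", src, "-o", exe], check=True)
          subprocess.run([exe], check=True)  # prints: Hello, world
      ```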

    • TauZero@mander.xyz · 19 minutes ago

      Language is language. To an LLM, English is as good as Java is as good as machine code to train on. I like to imagine if we suddenly uncovered a library of books left over from ancient aliens, we could train an LLM on it (as long as the symbols themselves are legible), and it would generate stories in the alien language that would sound correct to the aliens, even though the alien world and alien life are completely unknown and incomprehensible to us.

    • naught101@lemmy.world · 3 hours ago

      I think, on top of this, the question has an incorrect implicit assumption: that LLMs understand what they produce (which would be necessary for them to produce code in languages other than the ones they're trained on).

      LLMs don't produce intelligent output. They produce plausible strings of symbols, based on what is common in a given context. That can look intelligent only insofar as the training dataset contains intelligently produced material.
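
      A toy version of "plausible strings based on what is common in a given context" (real models learn weights over huge contexts; this just counts which token followed which):

      ```python
      # Bigram toy: generate text by sampling tokens that followed the
      # current token in the training text. The output is locally plausible
      # without any understanding, which is the point above, at tiny scale.
      import random
      from collections import defaultdict

      training = "the cat sat on the mat the cat ate the rat".split()

      following = defaultdict(list)
      for a, b in zip(training, training[1:]):
          following[a].append(b)

      token, output = "the", ["the"]
      for _ in range(8):
          options = following.get(token)
          if not options:
              break  # dead end: token never had a successor in training
          token = random.choice(options)
          output.append(token)
      print(" ".join(output))  # e.g. "the cat sat on the cat ate the rat"
      ```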

  • Grimy@lemmy.world · 11 hours ago

    You’re a programmer? Yes, integrating and debugging binary code would be absolutely ridiculous.

    • TranquilTurbulence@lemmy.zip · 10 hours ago

      Debugging AI-generated code is essential. Never run the code before reading it yourself and making a whole bunch of necessary adjustments and fixes.

      If you jump straight to binary, you can’t fix anything. You can just tell the AI it screwed up, roll the dice and hope it figures out what went wrong. Maybe one day you can trust the AI to write functional code, but that day isn’t here yet.

      Then there’s also security and privacy. What if the AI adds something you didn’t want it to add? How would you know, if it’s all in binary?
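
      To illustrate the review gap with something standard-library-only, Python's own compiled form can stand in for machine code: the raw bytes are unreviewable, and a human needs the readable listing:

      ```python
      # The same function as opaque compiled bytes versus a readable
      # listing. Python bytecode stands in for machine code here; the
      # exact bytes vary by Python version.
      import dis

      def greet() -> str:
          return "hello"

      print(greet.__code__.co_code)  # opaque, e.g. b'\x97\x00d\x01S\x00'
      dis.dis(greet)                 # the listing a reviewer can audit
      ```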

  • four@lemmy.zip · 11 hours ago

    Also, not everything runs on x86. For example, you couldn't write a website in raw binary, because the browser wouldn't run it. Or maybe you already have a Python library and you just need to interact with it. Or maybe you want code that can run on x86 and ARM without having to generate it twice.
    As long as the output code has to interact with other code, raw binary won't be useful.

    I also expect that it might be easier for an LLM to generate typical code and have a solid, well-tested compiler turn it into binary.

  • IninewCrow@lemmy.ca · 10 hours ago

    To me this is a fascinating analogy

    This is like having a civilization of Original Creators who are only able to communicate with hand gestures. They have no ears and can't hear sound or produce any vocal noises. They discover a group of humans and raise them to communicate only with their hands, because no one knows what full human potential is. The Original Creators don't know what humans are able to do or not do, so they teach humans to communicate with their hands instead, because that is the only language the Original Creators know or can understand.

    So now the humans go about doing things communicating in complex ways with their hands and gestures to get things done like their Original Creators taught them.

    At one point a group of humans start using vocal communications. The Original Creators can't understand what is being said because they can't hear. The humans start learning basic commands, and their vocalizations become more and more complex as time goes on. At one point, their few basic vocal commands are working at the same speed as hand gestures. The humans are now working through much more complex problems, a lot faster and more easily than their Original Creators. The Original Creators are happy.

    Now the humans continue developing their language skills, and they are able to talk faster and with more content than the Original Creators could ever achieve. Their skills become so well tuned that they are able to share their knowledge much faster with every one of their human members. Their development now outpaces the Original Creators, who are not able to understand what the humans are doing, saying, or creating.

    The Original Creators become fearful and frightened as they watch the humans grow exponentially on their own, without the Original Creators' participation or inclusion.

    • f43r05@lemmy.ca · 10 hours ago

      This here. Black box machine code, created by a black box, sounds terrifying.

      • HubertManne@piefed.social · 10 hours ago

        I mean, we know the code does not always work, and is often not the cleanest even when it does. If code from AI were perfect in a Six Sigma way, 99.999% of the time, then I could see the black-box thing, with humans only sussing things out at the lower levels. Even then, any time it didn't work you would need it to give its output in human-readable form so we could find the bug, but if it were that good, that should only happen like once a year or something.

  • dual_sport_dork 🐧🗡️@lemmy.world · 11 hours ago

    I imagine this is hypothetically possible given correct and sufficient training data, but that's beside the point I think needs to be made here.

    Basically no one programming in user space these days writes machine code, and certainly none of it runs on the bare metal of the processor. Nothing outside of extremely low-power embedded microcontroller applications, or dweebs deliberately producing for old-school video game consoles, or similar, anyway.

    Everything happens through multiple layers of abstraction: libraries, hardware privilege levels, and APIs provided by your operating system. At the bottom of all of those is the machine code resulting in instructions happening on the processor. You can't run freestanding machine code alongside a modern OS, and even if you did, it would have to respect x86 protected mode so that you didn't trample the memory and registers in use by the OS or other applications running on it, and you'd have a tough-to-impossible time using any hardware or peripherals, including networking, sound, storage access, or probably even putting output on the screen.
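
    Even the lowest "raw" layer a user-space program can reach is still an OS interface. A sketch (Linux on x86-64 assumed) invoking the write system call through libc's syscall() wrapper:

    ```python
    # Even "going low" means asking the kernel: the Linux x86-64 `write`
    # system call (number 1) invoked via libc's syscall(). Linux-only.
    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)
    SYS_WRITE = 1                      # Linux x86-64 syscall number for write
    msg = b"hello from a raw syscall\n"
    libc.syscall(SYS_WRITE, 1, msg, len(msg))  # fd 1 = stdout
    ```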

    So what you’re asking for is probably not an output anyone would actually want.

  • Z3k3@lemmy.world · 10 hours ago

    I think I saw a video a few weeks ago where two AI assistants realized the other was also an AI, so they agreed to switch to another protocol (to me it sounded like 56k modem noises, or old 8-bit cassette tapes played on a hi-fi) so they could communicate more efficiently.

    I suspect something similar would happen with code.
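
    (That demo was a data-over-sound scheme. A standard-library-only toy of the idea, one audible tone per byte value written to a WAV file; real systems add proper modulation and error correction:)

    ```python
    # Toy data-over-sound encoder: one sine tone per byte value, written
    # to a WAV file with only the standard library. Illustrative of the
    # idea, not of any real protocol.
    import math
    import struct
    import wave

    RATE = 8000       # samples per second
    TONE_SEC = 0.05   # seconds per byte
    BASE_HZ = 400     # frequency for byte value 0
    STEP_HZ = 10      # extra Hz per byte value (255 -> 2950 Hz, < Nyquist)

    def encode(data: bytes, path: str) -> None:
        frames = bytearray()
        for byte in data:
            hz = BASE_HZ + STEP_HZ * byte
            for n in range(int(RATE * TONE_SEC)):
                sample = int(32767 * math.sin(2 * math.pi * hz * n / RATE))
                frames += struct.pack("<h", sample)
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)   # 16-bit samples
            w.setframerate(RATE)
            w.writeframes(bytes(frames))

    encode(b"hi", "handshake.wav")
    ```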

    • nagaram@startrek.website · 10 hours ago

      That was a tech demo, I'm pretty sure, not just a thing they do, btw. A company was trying to make a more efficient sound-based comms channel for AI(?)

      • Z3k3@lemmy.world · 10 hours ago

        Clearly, since both sides of the conversation were in the audio. I believe my point still stands in relation to the original question, i.e. talk at the level both sides agree on.