I was looking back at some old Lemmy posts and came across GPT4All. Didn’t get much sleep last night because it’s awesome, even on my old (10-year-old) laptop with a Compute 5.0 NVIDIA card.

Still, I’m after more. I’d like to be able to generate images and view them in the conversation, and if it generates Python code, to be able to run it (I’m using Debian and have a default Python env set up). Local file analysis would also be useful. CUDA Compute 5.0 / Vulkan compatibility is needed too, with the option to use some of the smaller models (1–3B, for example). A local API would also be nice for my own Python experiments.

Is there anything that can tick all the boxes, even if I have to switch between models for some of the features? I’d prefer a desktop client application over a Docker container running in the background.
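On the local API point: GPT4All (and some of the alternatives mentioned below) can expose an OpenAI-compatible HTTP endpoint, so plain Python is enough to talk to it. A minimal sketch, assuming the server is enabled in the app's settings — the port and model name below are examples and will differ per app, so check yours:

```python
# Sketch of querying a local OpenAI-compatible chat endpoint.
# Port 4891 and the model name are assumptions; adjust to your setup.
import json
import urllib.request

API_URL = "http://localhost:4891/v1/chat/completions"  # assumed port


def build_payload(prompt: str, model: str = "llama-3.2-1b-instruct") -> dict:
    """Build a chat-completions request body (model name is an example)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Say hello in one sentence."))
```

No extra dependencies needed since it's all stdlib; swapping in the `openai` client library with `base_url` pointed at localhost would also work if you prefer.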

  • catty@lemmy.worldOP · edited · 23 hours ago

    I’ve discovered jan.ai, which is far faster than GPT4All and visually a little nicer.

    EDIT: After using it for an hour or so, it seems to crash all the time; I keep having to restart it, and it’s currently freezing for no apparent reason.

    • otacon239@lemmy.world · edited · 1 day ago

      I also started using this recently and it’s very plug-and-play: just open and run. It’s the only client so far that I’d feel comfortable recommending to non-geeks.

      • catty@lemmy.worldOP · edited · 1 day ago

        I agree. It looks nice, explains the models fairly well, hides away the model settings nicely, and even recommends some low-requirement models to get started with. I like the concept of plugins, but I haven’t yet found a way to, for example, run Python code it creates and display the output in the window.