• Analog@lemmy.ml · 1 day ago

    Can run decent size models with one of these: https://store.minisforum.com/products/minisforum-ms-s1-max-mini-pc

    For $1k more you can have the same thing from Nvidia in their DGX Spark. You can use a high-speed fabric to connect two of ‘em and run 405B-parameter models, or so they claim.

    Point being, that’s some pretty big models in the $3-4k range, and massive models for less than $10k. The Nvidia one supports ComfyUI, so I assume it supports CUDA.
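    A quick sanity check on the "two linked boxes run a 405B model" claim. This is back-of-envelope arithmetic assuming 4-bit quantized weights (~0.5 bytes per parameter) and 128 GB of unified memory per machine; it ignores KV cache, activations, and runtime overhead, which add real headroom requirements on top.

```python
# Back-of-envelope: does a 405B-parameter model fit across two 128 GB boxes?
params = 405e9                 # 405B parameters
bytes_per_param_q4 = 0.5       # ~4-bit quantized weights (assumption)
weights_gb = params * bytes_per_param_q4 / 1e9

pooled_ram_gb = 2 * 128        # two 128 GB machines linked over the fabric
print(f"Q4 weights: ~{weights_gb:.0f} GB vs {pooled_ram_gb} GB pooled")
```

    So roughly 202 GB of weights against 256 GB of pooled memory: it fits at Q4, but an FP8 copy (~405 GB) would not, which is presumably why the claim is hedged.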

    It ain’t cheap and AI has soooo many negatives, but… it does have some positives and local LLMs mitigate some of the minuses, so I hope this helps!

    • melfie@lemy.lol · 1 day ago

      Nice, though $3k is still getting pretty pricey. I see mini PCs with an AMD Ryzen AI Max+ 395 and 96GB of RAM can be had for $2k, or even $1k with less RAM: https://www.gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc?variant=f6803a96-b3c4-40e1-a0d2-2cf2f4e193ff

      I’m looking for something that also does path tracing well if I’m going to drop that kind of coin. It sounds like this chip can be on par with a 4070 for rasterization, but it only gets a Blender rendering benchmark score of 495, compared to 3110 for even an RTX 4060. RDNA 5 with true RT cores should drastically change the situation for chips like this, though.

      • brucethemoose@lemmy.world · edited 4 hours ago

        FYI, you can buy this: https://frame.work/products/framework-desktop-mainboard-amd-ryzen-ai-max-300-series?v=FRAFMK0002

        And stick a regular Nvidia GPU on it. Or an AMD one.

        That’d give you the option to batch renders across the integrated and discrete GPUs, if such a thing fits your workflow. Or to use one GPU while the other is busy. And if a particular model doesn’t play nice with AMD, it’d give you the option to use Nvidia + CPU offloading very effectively.

        It’s only PCIe 4.0 x4, but that’s enough for most GPUs.
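        For a rough sense of what x4 buys you: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so the usable bandwidth works out to just under 8 GB/s each way (before protocol overhead, which shaves off a bit more in practice).

```python
# Rough usable bandwidth of a PCIe 4.0 x4 link.
lanes = 4
gt_per_s = 16.0              # PCIe 4.0 transfer rate per lane
encoding = 128 / 130         # 128b/130b line-coding efficiency
gb_per_s = lanes * gt_per_s * encoding / 8   # bits -> bytes
print(f"~{gb_per_s:.1f} GB/s each way")
```

        Around 7.9 GB/s: that's plenty for feeding rendering or inference workloads, though it would hurt anything that streams textures or weights across the bus constantly.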

        TBH I’m considering exactly this: hanging my venerable 3090 off the board, since I’m feeling the FOMO crunch of all hardware getting so expensive. And $2k for 16 cores with 128GB of ridiculously fast quad-channel RAM is not bad, even just as a CPU.