  • If you have a reasonably capable NVIDIA card (roughly a 1080 Ti or better), download KoboldCpp and a .gguf model from Hugging Face and run it locally (see the download sketch below).

    The quality is directly tied to your GPU's VRAM size, which limits how big a model you can load into it, so don't expect the same results as an LLM running in a data center. For example, I can load a 20 GB .gguf model onto a 3090 with 24 GB of VRAM; the rough fit check below shows the arithmetic.
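
    A minimal sketch of grabbing a quantized .gguf file programmatically, assuming the `huggingface_hub` package is installed (`pip install huggingface_hub`). The repo id and filename here are hypothetical placeholders, not a specific recommendation; swap in whichever model actually fits your card.

    ```python
    from huggingface_hub import hf_hub_download

    # Hypothetical repo/filename -- replace with the model you chose.
    path = hf_hub_download(
        repo_id="TheBloke/SomeModel-GGUF",
        filename="somemodel.Q4_K_M.gguf",
    )
    # hf_hub_download caches the file locally and returns its path,
    # which you can then point KoboldCpp at.
    print(path)
    ```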
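
    And a rough back-of-envelope fit check for the VRAM point above, assuming you offload the whole model to the GPU: the file size plus some cushion for the KV cache and runtime overhead should stay under your card's VRAM. The 2.5 GB cushion is an assumption, not a measured number; real overhead grows with context length.

    ```python
    import os

    VRAM_GB = 24        # e.g. an RTX 3090
    OVERHEAD_GB = 2.5   # assumed cushion for KV cache / runtime; varies

    # Uses the hypothetical file downloaded in the sketch above.
    size_gb = os.path.getsize("somemodel.Q4_K_M.gguf") / 1024**3
    fits = size_gb + OVERHEAD_GB <= VRAM_GB
    print(f"{size_gb:.1f} GB model, fits fully in VRAM: {fits}")
    ```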