• 0 Posts · 106 Comments · Joined 3 years ago · Cake day: June 24th, 2023



  • Tanoh@lemmy.world to Selfhosted@lemmy.world · Three Docker related questions · 21↑ 1↓ · 15 days ago

    I know the purists might sneer at me for this, but I just spun up a server via Hetzner so I could run Docker in the cloud

    Not an answer to your question, but don’t let some gatekeeper… well, gatekeep. There are many ways to selfhost. Running your own hardware is one; renting a VPS but hosting the services yourself is another. Neither is better than the other.

    Just pick what is the best solution for you and your problem.


  • And Signal is open source so, if it did anything weird with private keys, everyone would know

    Well, no. At least not by default, as you are running a compiled version of it. Someone could inject code you don’t know anything about before compilation that, for example, leaked your keys.

    One way to be more confident no one has would be reproducible builds: you recreate the build yourself and then compare the file fingerprints. But I do not think that is possible, at least on Android, as Google holds the signing keys to apps.
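The fingerprint comparison described above can be sketched in a few lines. This is a toy illustration with made-up byte strings, not a real APK check (which would also have to strip the store's signature block before hashing):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A build is "reproducible" if compiling the same source yields a
    # byte-identical artifact, so the hashes match.
    return hashlib.sha256(data).hexdigest()

# Illustrative stand-ins for the two artifacts being compared:
official_apk = b"compiled app bytes"  # what the store distributes
local_apk = b"compiled app bytes"     # what you built from source

if fingerprint(official_apk) == fingerprint(local_apk):
    print("fingerprints match: the published binary came from this source")
else:
    print("fingerprints differ: the binaries are not the same build")
```

If anything was injected before compilation, the distributed binary's hash would no longer match a clean local rebuild.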


  • Keep in mind that not all workloads scale perfectly. You might have to add 1,100 computers due to overhead and other scaling issues. It is still pretty good, though, and most of those clusters work on highly parallelised tasks, as clusters are very well suited for those.

    There are other workloads that do not scale at all. Like the old joke in programming: “A project manager is someone who thinks that nine women can have a baby in one month.”
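The scaling limit above is Amdahl's law: the serial fraction of a workload caps the speedup no matter how many machines you add. A minimal sketch with illustrative numbers:

```python
def speedup(n_workers: int, parallel_fraction: float) -> float:
    # Amdahl's law: the serial part runs at full cost regardless of workers,
    # only the parallel part is divided among them.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even a 95%-parallel job on 1,100 machines is nowhere near 1,100x faster:
print(speedup(1100, 0.95))  # ~19.7x
print(speedup(1100, 1.00))  # ~1100x, only if nothing at all is serial
# And the "nine women, one month" job is fully serial:
print(speedup(9, 0.0))      # 1.0 - no speedup at all
```

This is why adding 10% extra machines to cover overhead is a good outcome; many workloads hit a hard ceiling long before that.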


  • Get you hooked on the extreme convenience, much like a drug dealer would, and then pump up the price or flood every prompt with ads.

    There is a big difference between “normal” SaaS and LLMs.

    In a normal SaaS you get a lot of benefit from being at scale. Going from 1,000 to 10,000 users is not that much harder than going from 10,000 to 1,000,000. Once you have your scaling set up you can just add more servers and/or data centers. But most importantly, the cost per user goes waaay down.

    With AI it just doesn’t scale at all: the 500,000th user will most likely cost as much as the 5th. So I don’t think doing a Netflix/Spotify/etc. is going to work unless they can somehow make it a lot cheaper per user. OpenAI fails to turn a profit even on their most expensive tiers.

    Edit: to clarify, obviously you get some small benefits from being at scale: better negotiating power, already having server racks, etc. But a traditional SaaS gets those same benefits, and so much more that an LLM doesn’t, because the LLM’s cost per user doesn’t drop.
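The fixed-vs-marginal cost argument above can be made concrete with a toy model. All numbers here are invented for illustration, not real figures for any company:

```python
def cost_per_user(users: int, fixed: float, marginal: float) -> float:
    # Fixed costs (development, infrastructure) amortize across users;
    # marginal costs (e.g. inference compute) are paid per user forever.
    return fixed / users + marginal

# Hypothetical services with the same fixed cost but different marginal costs:
saas = lambda n: cost_per_user(n, fixed=100_000.0, marginal=0.05)
llm = lambda n: cost_per_user(n, fixed=100_000.0, marginal=20.0)

for n in (1_000, 10_000, 1_000_000):
    print(n, round(saas(n), 2), round(llm(n), 2))
# The SaaS cost per user collapses with scale;
# the LLM's never drops below its ~$20 marginal floor.
```

In this model the 500,000th LLM user really does cost about as much as the 5th, which is the commenter's point about why a flat-fee Netflix-style subscription is hard to make profitable.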