• 0 Posts
  • 100 Comments
Joined 3 years ago
Cake day: June 24th, 2023







  • And Signal is open source so, if it did anything weird with private keys, everyone would know

    Well, no. At least not by default, since you are running a compiled version of it. Someone could inject code you don’t know anything about before compilation that, for example, leaked your keys.

    One way to be more confident no one has would be reproducible builds: rebuild the app yourself and compare the file fingerprints against the distributed version. But I do not think that is possible, at least on Android, as Google holds the signing keys for apps.
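    The fingerprint comparison itself is the easy part. A minimal sketch (filenames are hypothetical, and this is only meaningful if the build process is byte-for-byte reproducible in the first place):

    ```python
    # Sketch: compare SHA-256 fingerprints of a locally built APK
    # and the one distributed to users. Filenames are hypothetical.
    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # local = sha256_of("signal-local-build.apk")
    # store = sha256_of("signal-from-store.apk")
    # print("match" if local == store else "MISMATCH")
    ```

    In practice, store-side re-signing is exactly what breaks this naive byte-for-byte check.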


  • Keep in mind that not all workloads scale perfectly. You might have to add 1100 computers due to overhead and other scaling issues. It is still pretty good, though, and most of those clusters work on highly parallelised tasks, as they are very well suited for it.

    There are other workloads that do not scale at all. Like the old joke in programming: “A project manager is someone who thinks that nine women can have a child in one month.”
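    The overhead point can be made concrete with Amdahl’s law: if even a small fraction of the work is inherently serial, adding machines gives diminishing returns and eventually hits a hard ceiling. A quick sketch (the 1% serial fraction is just an illustrative number):

    ```python
    # Amdahl's law: speedup from n machines when a fraction `serial`
    # of the work cannot be parallelised. Numbers are illustrative.
    def speedup(n: int, serial: float) -> float:
        return 1.0 / (serial + (1.0 - serial) / n)

    # With just 1% serial work, 1000 machines give roughly a 91x
    # speedup, not 1000x -- hence needing extra machines (or never
    # reaching the target at all) on less parallel workloads.
    print(round(speedup(1000, 0.01), 1))
    ```

    The ceiling is 1/serial: with 1% serial work, no number of machines ever gets you past a 100x speedup.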






  • Get you hooked on the extreme convenience, much like a drug dealer would, and then pump up the price or flood every prompt with ads.

    There is a big difference between “normal” SaaS and LLMs.

    In a normal SaaS you get a lot of benefit from being at scale. Going from 1000 to 10000 users is not that much harder than going from 10000 to 1000000. Once you have your scaling set up you can just add more servers and/or data centers. But most importantly, the cost per user goes waaay down.

    With LLMs it just doesn’t scale down at all: the 500000th user will most likely cost as much as the 5th. So doing a Netflix/Spotify/etc., I don’t think, is going to work unless they can somehow make it a lot cheaper per user. OpenAI fails to turn a profit even on their most expensive tiers.

    Edit: to clarify, obviously you get some small benefits from being at scale. Better negotiations, already having server racks, etc. But a traditional SaaS gets those same benefits as well, plus so much more that LLMs don’t, because the cost per user doesn’t drop.
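    A toy model of the difference (all numbers invented, purely to show the shape of the curves): in classic SaaS, fixed infrastructure cost amortises across users, so per-user cost collapses toward a tiny marginal cost; with LLM inference there is a roughly constant per-user compute cost that never amortises.

    ```python
    # Toy cost-per-user model. All numbers are invented; only the
    # shape matters: SaaS amortises, LLM inference cost does not.
    def saas_cost_per_user(users: int, fixed: float = 100_000.0,
                           marginal: float = 0.05) -> float:
        # Fixed infrastructure spread over users, plus tiny marginal cost.
        return fixed / users + marginal

    def llm_cost_per_user(users: int, fixed: float = 100_000.0,
                          inference: float = 8.0) -> float:
        # Same fixed cost, but each user adds constant inference compute.
        return fixed / users + inference

    for n in (1_000, 1_000_000):
        print(n, round(saas_cost_per_user(n), 2), round(llm_cost_per_user(n), 2))
    ```

    At a thousand users both look similarly expensive; at a million, the SaaS line has collapsed toward its marginal cost while the LLM line is still stuck at the inference floor.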