

To be fair, sometimes updating Linux too sparingly results in conflicts. Of course, the likelihood of that happening depends on the distro. Also, the vast majority of Linux updates don’t require a reboot.




Qwen 3.5 can be run via ollama
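For anyone unfamiliar, the Ollama workflow looks roughly like this. The exact model tag is an assumption on my part; check the Ollama model library for the name it’s actually published under:

```shell
# Pull the model weights locally (tag is illustrative; verify with
# the Ollama model library or `ollama list`)
ollama pull qwen3

# Start an interactive session with the model
ollama run qwen3
```

Everything runs on your own machine; nothing in the prompt or response leaves it.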


Qwen 3.5 is one of the best open-weight (self-hostable) models right now. It’s not as good as some of the much larger proprietary models, like the bigger Claude models.
Sure, but to get communication started you would begin with facts both sides agree on, like the positions of stars or basic chemistry.
The model we currently have for the universe goes well beyond anything we could learn with our natural senses and the way we intuitively think about the world because of those senses.
It’s true that we keep refining our models and it’s very possible that an alien would have slightly different models, but at the end of the day, we are trying to describe the same universe and those models are going to overlap a lot because of that.
First of all, there has been a lot of research into the minimal set of assumptions needed to reproduce what we consider “basic math,” and into what happens if you tweak those assumptions.
Second of all, the main goal for science and the type of math we use for science is to effectively model the world we live in.
Any aliens that live in the same universe are subject to the same physics, and any civilization advanced enough to detect our messages will know some basic universal facts about the world, and those facts are what we hope to use as the basis for starting communication.


Signal already has that setting. It’s up to the user to decide their level of convenience vs security.



The data security failure in that case had nothing to do with the LLM.
That’s kinda my point.


“I don’t trust companies to keep their promises” is a very different argument from:
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
It is certainly possible to implement a secure LLM service.


Phone, wallet, keys, and headphones


This is about extracting data that was used as training data. Just don’t put sensitive data in the training set.


LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
This is simply false. AI sucks but it doesn’t help to lie about it.
EDIT:
Go run a local model on your own computer, and delete the context when you are done. Boom: you just used an LLM in a way that maintains your data security.
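The idea can be sketched in a few lines of Python. `query_model` here is a stand-in for a call to a locally hosted model (e.g. through Ollama); it’s stubbed out so the sketch is self-contained:

```python
# Sketch: keep the chat context in memory only, and wipe it when the
# session ends. Nothing is logged or sent off-machine.

def query_model(context: list[str]) -> str:
    # Placeholder for a call to a locally hosted model.
    return f"(reply to {len(context)} message(s))"

context: list[str] = []

def ask(prompt: str) -> str:
    context.append(prompt)
    reply = query_model(context)
    context.append(reply)
    return reply

ask("Summarize these patient notes.")
assert len(context) == 2  # prompt + reply

# Session over: the sensitive context is gone for good.
context.clear()
assert context == []
```

The point is that data security is a property of how the system is deployed, not of the model itself.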


I have a bare private git repo on my homelab server. Not great for sharing my work, but great for personal projects.
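Setting this up takes two commands; the paths and hostname below are illustrative, not my actual setup:

```shell
# On the server: create a bare repo (no working tree, just the object store)
git init --bare /tmp/demo.git

# On a client: point a project at it over SSH and push as usual
# git remote add origin ssh://user@homelab.local/srv/git/demo.git
# git push -u origin main
```

No hosting service involved; anything you can SSH to can be a git remote.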
LLMs tend to be a “jack of all trades, master of none.” You are likely to find them useful for something you are inexperienced at, but not for something you are an expert in. Because they lie a lot, it’s best to double-check the information, but an LLM can still help with the “you don’t know what you don’t know” problem.


Two weeks is about how long that trip took me, but I did some sightseeing along the way. Also, even on the longest day, I don’t think I drove for 10 hours.
Natural disaster -> you can no longer access everything you have online, including bank and insurance accounts, at precisely the time you least want to deal with that.


Somebody do this to Minecraft


More specifically I thought one of the approaches to an omni-treadmill would catch on enough for an at-home model to be available to the public.
Previous admin was pro-Ukraine but this admin is pro-Russia
I have a metal Apple Watch band that has accidentally started filing away the edge on one side of my MacBook.