

Oh wow, they actually wrote this title. 😯
A significant decrease in the share of the surplus value society produces going to tech companies selling proprietary software, which is most of them. Basically, the cost of using software for a whole lotta things would drop. That would make the society's products cheaper, both for itself and for export. It would free up its labour for more useful things, one of which could be new FOSS software. But also helping out with the green transition, taking care of the ageing population, education, etc.
Or join the US as the CHERISHED 51st STATE, and then the smoke would be AMERICAN smoke, and let me tell you - don't we love American smoke!
A moderate Democrat worth tens of millions proposes to tinker around the edges without consulting anyone, fails.
All-in, I wanted something on the order of 1MB for client app, server, all dependencies, everything.
Okay that’s gotta be radically different!
Well, you gotta start it somehow. You could rely on Compose's built-in service management, which will restart containers upon system reboot if they were started with `-d` and have the right restart policy. But you still have to start them at least once. How do you do that? Unless you plan to start them manually, you have to use some service startup mechanism. That leads us to a systemd unit: I have to write a systemd unit that does `docker compose up -d`. But then I'm splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd; I have to go find where the compose file is and issue `docker compose down`. Not great. Instead I'd write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that's kinda what I'm doing already, isn't it? Except that if I start it with `docker compose up` without `-d`, I don't need a separate stop line and systemd can directly monitor the process. As a result I get logs in `journald` too, and I can use systemd's restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It's way more powerful than Compose's restart policy. Finally, I like to clean up any data I haven't explicitly intended to persist across service restarts, so that I don't end up debugging an issue that only manifests because of some persisted piece of data I'm completely unaware of.
These can’t shoot down F35, can they?
Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.
Why start anew instead of forking or contributing to Jellyfin?
I think I lost neurons reading this. Other commenters in this thread had the resilience to explain what the problems with it are.
This sounds plausible. Has anyone caught him in the act?
The problem is that Grok has been put in a position of authority on information. It's expected to produce accurate information, not just spit out whatever you ask it for regardless of factuality. So the expectation its owners created for it is not the same as the one for Google. You can't expect most people to understand what an LLM does, because that doesn't scale. The general public uses Twitter, and most people get their information about the products they're sold from the manufacturer. So the issue here is with the manufacturer and their marketing.
I use a fixed tag. 😂 It's more of a simple way to update. Change the tag in SaltStack, apply the config, the service is restarted, the new tag is pulled. If the tag doesn't change, the pull is a no-op.
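As a sketch of what I mean (the state, file, and pillar names here are made up, and the tag value is just an example, not my actual config), the tag lives in one place and Salt re-renders the compose file and restarts the unit whenever it changes:

```yaml
# /srv/salt/immich/init.sls -- hypothetical Salt state
immich-compose:
  file.managed:
    - name: /opt/immich-docker/docker-compose.yml
    - source: salt://immich/docker-compose.yml.jinja
    - template: jinja
    - context:
        # Bump this one value to upgrade; if it's unchanged, the pull is a no-op
        immich_tag: v1.119.0

immich-service:
  service.running:
    - name: immich-docker
    - enable: True
    - watch:
      # Re-render of the compose file triggers a service restart
      - file: immich-compose
```

The `watch` requisite is what ties it together: touching the tag re-renders the file, which restarts the systemd unit, which pulls the new image on its way up.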
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
Yup. Everything is in one place and there are no hardcoded paths outside the work dir, making it trivial to move across storage or even machines.
Because on restart I clean up everything that isn't explicitly persisted to disk:
[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
WorkingDirectory=/opt/immich-docker
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target
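For completeness, wiring a unit like this up is the standard systemd workflow; the unit filename `immich-docker.service` is my assumption here, not something fixed:

```shell
# Install the unit (path assumed) and reload systemd's unit database
sudo cp immich-docker.service /etc/systemd/system/immich-docker.service
sudo systemctl daemon-reload

# Start now and on every boot; systemd owns the full lifecycle from here
sudo systemctl enable --now immich-docker.service

# Stop, restart, and logs all through one tool -- no hunting for the compose file
sudo systemctl stop immich-docker.service
journalctl -u immich-docker.service -f
```

Because `ExecStart` runs `docker compose up` without `-d`, systemd monitors the compose process directly and `journalctl` shows the containers' output.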
Did you run the Smart Search job?
That's a Celeron, right? I'd try a better AI model. Check this page for the list. You could try the heaviest one: it'll take a long time to process your library, but inference is faster. I don't know how much faster; maybe it would be fast enough to be usable. If not, choose a lighter model. There are execution times in the table that I assume tell us how heavy the models are. Once you change the model, you have to let it rescan the library.
Yes, it depends on how you’re managing the service. If you’re using one of the common patterns via systemd, you may be cleaning up everything, including old volumes, like I do.
E: Also, if you have any sort of lazy prune op running on a timer, it could blow things up at some point.
Nice. So this model is perfectly usable by lower end x86 machines.
I discovered that the Android app shows results a bit slower than the web app. For the majority of the wait, the request hasn't even reached Immich; I'm not sure why. When searching from the web app, Immich receives the request immediately.