

Two rack rails bolted together, with a power strip and a tray holding the mini PC I use as a server. My router is bolted on as well; it acts as a switch for everything while also providing Wi-Fi to my phone and laptop.


I kind of railroaded myself into using Calibre, unfortunately.
I have a very specific file-naming scheme that I originally came up with back when I organised my books purely with folders; its purpose is to group together books that belong to a series where the series itself is part of a larger universe.
Basically, my folder structure is {World}/{Reading Order}; {Series} #{Series_Index} - {Title} - {Author}
On my Kobo I have the autoshelf plugin installed, which automatically parses this information when I add books, groups them together by world, and fills out the series information.
To properly make use of this system I need Calibre's custom columns and the ability to export books with this specific name format, and I have yet to find a program other than Calibre that supports that.
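For anyone wondering how machine-readable that scheme actually is: in Calibre terms, with custom columns like #world and #reading_order (the column names are just my guess here), the save-to-disk template would look something like {#world}/{#reading_order}; {series} #{series_index} - {title} - {authors}. Parsing it back out is a simple regex; here is a hypothetical Python sketch (not the plugin's actual code, example data made up):

```python
import re

# {World}/{Reading Order}; {Series} #{Series_Index} - {Title} - {Author}
path = "Cosmere/2; Mistborn #1 - The Final Empire - Brandon Sanderson"

pattern = re.compile(
    r"(?P<world>[^/]+)/"        # {World}
    r"(?P<order>[^;]+); "       # {Reading Order}
    r"(?P<series>.+) "          # {Series}
    r"#(?P<index>[\d.]+) - "    # {Series_Index}
    r"(?P<title>.+) - "         # {Title}
    r"(?P<author>[^-]+)$"       # {Author} (simplified: breaks on hyphenated names)
)

match = pattern.match(path)
if match:
    print(match.groupdict())
    # {'world': 'Cosmere', 'order': '2', 'series': 'Mistborn',
    #  'index': '1', 'title': 'The Final Empire', 'author': 'Brandon Sanderson'}
```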
It would probably be smarter for me to reorganize my books at some point, but I really like being able to drop a ton of books onto my reader at once over SFTP. As far as I can tell, all the common alternatives rely on manually downloading the books, sending them directly to the reader, or pulling them from internal storage in whatever form the application stores them…
I do like Audiobookshelf for the ability to add a book to multiple series, but the missing mass-export function stops me from switching.


I name mine after Greek and Roman gods.
My NAS is named Hestia, the goddess of the hearth and home.
My Docker server is called Poseidon, due to Docker's sea iconography. The second iteration of my Docker server, where I tried playing around with Podman, I called Neptune.
I briefly had a Raspberry Pi for experimenting with some stuff, which was called Eileithyia, the goddess of childbirth.
My Proxmox machine, on which pretty much all my other servers run as VMs, is called Atlas, as the Titan holding up my personal network.
I also have a TrueNAS VM which I boringly called truenas…


How high is your power bill? I considered getting more smaller drives, but I figured it's more power-efficient in the long run to buy bigger HDDs, not to mention that I only have 4 disk slots.
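For reference, my back-of-the-envelope math behind that (all numbers are assumptions, not measurements; check your drive's datasheet and your tariff):

```python
# Rough yearly running cost of one extra spindle.
idle_watts = 7.0      # typical 3.5" HDD draw; varies by model and workload
eur_per_kwh = 0.30    # electricity price; adjust to your tariff

kwh_per_year = idle_watts * 24 * 365 / 1000
cost = kwh_per_year * eur_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> {cost:.2f} EUR/year per drive")

# Two 8 TB drives draw roughly twice what one 16 TB drive does for the
# same capacity, which is why fewer, bigger disks tend to win long-term.
```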


I wish I could afford an SSD NAS, since my main server is located in my bedroom. For now I have to be content with shutting down anything that triggers HDD activity overnight.
I used to have a 4 TB IronWolf HDD but ran out of space on that too. Since I already use a 2x16 TB NAS as a backup destination, I looked to get another 16 TB drive that I might repurpose at some point in the future.
I had to settle for a WD Elements HDD at about 310 euros. My IronWolf was really quiet, possibly because it is a 5400 RPM drive. The Elements almost drives me mad because the drive head clicks very loudly.
For the same reason I don't use my actual Synology NAS with Toshiba MG08 drives as more than a backup server, but at least those are actual server HDDs and usually aren't expected to be quiet.
I also just wanted to rant a bit. Don't mind me.


Quick question: when you say server/agent architecture, does that mean the server manages the backup schedule and pulls the backups from the systems, or does the connected computer initiate the backups?
I'm currently using Synology Active Backup for my server and used to also use it for my desktop. Linux support is not ideal, though, and I would like to move to something with similar capabilities that is not vendor-locked.
My personal use case would be backing up a single server, a desktop and a laptop.


Good questions. I would like to know that too.
I have a bare minimum of documentation as markdown files, which I take care to keep in an accessible location, aka not on my server.
If my server ever does go down, I might really want access to its (admittedly limited) documentation.
I read the title and this was literally the first thing that popped into my head.


Yeah, that would be the ideal scenario, I guess.
It should technically be possible by mapping the compose files into the opt folder via Docker mounts, but I think that's an unreasonable way to go about it, since every compose file would need its own mount point.


Proxmox to manage my VMs, SSH for anything on the command line, and Portainer for managing my Docker containers.
One day I will probably switch to Dockge so my docker-compose files are stored as plain files on the hard drive, but for now Portainer works flawlessly.


I remember building something vaguely related in a university course on AI, before ChatGPT was released and the whole LLM thing took off.
The user could enter a couple of movies (so long as they were present in the weird semantic database our professor told us to use), and we calculated a similarity matrix between them and every other movie in the database, based on their tags and on running the descriptions through a natural language processing pipeline.
The result was the user getting a couple of surprisingly accurate recommendations.
Considering we had to calculate this similarity score for every movie in the database, it was obviously not very efficient, but I wonder how it would stack up against current LLMs, both in terms of accuracy and of energy efficiency.
One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don't think an LLM is as predictable.
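For the curious, the core idea was conceptually along these lines (a toy sketch with made-up data, not our actual course code; we used the professor's semantic database rather than plain TF-IDF):

```python
# Tag/description text -> TF-IDF vectors -> cosine similarity matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

movies = {
    "Alien":        "sci-fi horror space crew alien creature",
    "The Thing":    "horror isolation creature paranoia antarctic",
    "Blade Runner": "sci-fi dystopia android detective neo-noir",
}

titles = list(movies)
vectors = TfidfVectorizer().fit_transform(movies.values())
sim = cosine_similarity(vectors)  # full N x N similarity matrix

liked = "Alien"
i = titles.index(liked)
ranked = sorted(
    ((titles[j], sim[i, j]) for j in range(len(titles)) if j != i),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # deterministic: same input always yields the same ranking
```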


Thanks. I'll keep this in mind in case my new stack causes issues again.


Hey, just wanted to let you know that my updated stack has been running perfectly since I changed it based on your setup. Thanks!


I guess I missed that.
Anyway, I updated my stack to be similar to what you pasted, and so far it seems to be working. I'll have to check tomorrow whether the reboot issue persists.


I know that the port-forwarding command can be simplified. In my case it's this complex because the version listed in the gluetun wiki did not work, even though I disabled authentication for my local network. The largest part of the script is authenticating with username and password before actually sending the port-forwarding command.
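To give an idea of the shape of it, here's a trimmed-down Python sketch (illustration only, not my actual script; qBittorrent stands in as the example client, and the gluetun control server on port 8000 is the default per the gluetun wiki):

```python
import json
import requests

GLUETUN = "http://127.0.0.1:8000"  # gluetun control server (default port)
QBIT = "http://127.0.0.1:8080"     # qBittorrent WebUI (example client)

# 1. Ask gluetun which port the VPN provider forwarded.
resp = requests.get(f"{GLUETUN}/v1/openvpn/portforwarded", timeout=10)
port = resp.json()["port"]

# 2. The part that bloats my script: logging in first.
session = requests.Session()
session.post(
    f"{QBIT}/api/v2/auth/login",
    data={"username": "admin", "password": "changeme"},  # placeholders
    timeout=10,
).raise_for_status()

# 3. The actual "port forwarding command": push the new listening port.
session.post(
    f"{QBIT}/api/v2/app/setPreferences",
    data={"json": json.dumps({"listen_port": port})},
    timeout=10,
).raise_for_status()
print(f"Updated listening port to {port}")
```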
I'll definitely try adjusting my stack to your variant, though. I've also tried the healthcheck option before, but I must have configured it wrong, because it caused my gluetun container to get stuck.
One question regarding your stack: is there a specific reason for binding /dev/net/tun into gluetun?


As far as I am aware, Mullvad removed port forwarding support a while ago. I'm not sure which VPN providers besides Proton still support it, but I vaguely remember seeing a short list some time ago that named Proton as one of the few trustworthy ones left.


Good thing I decided against switching to it, though my main reason is that, as far as I know, my weird book organisation scheme currently isn't feasible with anything but Calibre or manual organisation.
I use a MikroTik router, and while I do love the amount of power it gives me, I very quickly realized that I had jumped in at the deep end. Deeper than I can deal with, unfortunately.
I did get everything running after a week or so, but I absolutely had to fight the router to do it.
Sometimes less is more, I guess.
I'm not that well informed on the specifics of the topic, but I would say AI has a lot of potential to do good in medical applications. I believe there has been quite a bit of research into detecting various forms of cancer earlier and more reliably using neural networks.