• 2 Posts
  • 92 Comments
Joined 10 months ago
Cake day: December 27th, 2023

  • Your experiences are anecdotal.

    by publishing them they become measurable, which also removes the “anecdotal” flag once there are numbers. also, maybe ask archaeologists how much evidence a complaint written on papyrus actually is a “while” after it was written.

    also, the studies that found out “why” public services don’t serve in the first place have become quite old* by now, which is the very opposite of anecdotal. but nothing has been done so far to change this known state of non-serving services for decades, so why should they have changed without changing the actions that affect them?

    *) i read parts of them >20 years ago, and the studies’ observations and conclusions matched 100% of what i personally experienced/witnessed from within a family “working” in such services.


  • i meant improving society and strengthening its immune system against threats that would otherwise cause collapses. empires rise and collapse/vanish all the time; just try to count the already vanished ones in “known” history. i guess while you are still busy counting the known ones, even more will be “newly found” by archaeologists. lots of them just vanished without a trace, and some vanished but simply haven’t been found yet.











  • well, for e2ee you obviously have to let one “end” encrypt the data for the other “end” (good luck with newsletters then). for the usual services you could kindly ask them to support either s/mime or gpg for outgoing emails; that would at least let them know the wish exists, but good luck there too.

    i think the already mentioned solution of encrypting incoming messages on your side just before the mda delivers them to your inbox should be the closest possible to what op wants. one would need to check whether a message is already encrypted and skip encryption for those (a sketch of such a recipe is at the end of this comment).

    if you only want the admin of that email (imap) server to not be able to read all emails, maybe placing a separate encrypting server (smtp + encrypt + forward) in between the outside world and your imap server could be a solution.

    one should have a look into the logfiles too, as some mailers might log message subjects and of course senders/recipients along with ip addresses of incoming/outgoing servers, which op might not want to be readable either (i don’t know protonmail that well).

    also, gpg IMHO allows for sign-then-encrypt, hiding the signature within the encrypted data, which could be wanted (see the command sketch below). also, one might want to look at exactly which parts of a message’s contents and headers are encrypted or plaintext on the server before feeling safe from the threat one wants to be protected from.
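
    a minimal sketch of the encrypt-before-delivery idea, assuming procmail is the mda and gpg is available; the key id is a placeholder, and the “already encrypted” check only looks for a PGP/MIME header, so treat it as an illustration rather than a drop-in config:

    ```
    # ~/.procmailrc -- encrypt not-yet-encrypted mail with gpg before delivery
    # KEYID is a placeholder for your own public key
    KEYID=0xDEADBEEF

    :0
    # skip messages that already look PGP/MIME-encrypted (naive check)
    * ! ^Content-Type:.*multipart/encrypted
    {
      # f = filter, b = body only, w = wait for the filter to succeed
      :0 fbw
      | gpg --batch --yes --trust-model always --armor --encrypt -r $KEYID
    }

    :0
    # final delivery of the (possibly re-encrypted) message
    $DEFAULT
    ```

    note that only the body gets encrypted this way; the headers stay plaintext on the server, which is exactly the kind of thing to double-check before feeling safe.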
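
    and the sign-then-encrypt point, shown with plain gpg on the command line (the address and filename are placeholders); the signature ends up inside the ciphertext instead of being attached in the clear:

    ```
    # sign with your secret key, then encrypt for the recipient; the signature
    # is wrapped inside the encrypted payload, not visible on the outside
    gpg --armor --sign --encrypt -r alice@example.org message.txt
    # writes message.txt.asc; decrypting it on the other end also verifies the signature
    ```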



  • you’re welcome.

    what i’d suggest… a general rule that i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let’s say you have your mailbox and want to try getting new mails from it using fetchmail. first, you can use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; i once had an email account with a cheap provider that deadlocked the inbox when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done (a minimal test config is sketched below).

    every change (not only fetchmail things) could be tested this way before going live with it. filtering could be done with procmail for example, but if the mda that is called by procmail somehow exits with success even though the email really wasn’t delivered, then the email might get lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)

    have fun !
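
    ps: a minimal sketch of such a test setup in fetchmail terms, assuming a pop3 test account (server name, user and password are placeholders); `keep` plus `uidl` fetches every mail exactly once while leaving everything on the server:

    ```
    # ~/.fetchmailrc-test  (must be chmod 600)
    poll pop.example.org proto pop3
        user "testuser" pass "test-password"
        keep                              # leave all mails on the server
        uidl                              # remember which mails were already fetched
        mda "/usr/bin/procmail -d %T"     # hand delivery over to procmail
    ```

    run it with `fetchmail -v -f ~/.fetchmailrc-test` against the test account first; only when the whole chain behaves as expected, point a second config file at the real account.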


  • it’s possible to tell your mta (like postfix) to use another mta for all mail, or only for some domains etc., so using a third party to play the internet-facing service, fetching the mails with fetchmail and storing them in a dovecot server is easy. on the sending side you could use your standard email client (e.g. thunderbird on a pc or k9-mail on a smartphone) to hand mail to your postfix instance that sits on the same server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could simply mean relaying everything through the mta of your freemailer using the username/password of your account (a relay sketch is at the end of this comment). i am doing this, but the “external” mail system is my own servers as well; i just don’t want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for email only i’d say a 3 or older would do too), and adding a disk via usb makes storage huge and cheap; i use two usb ssds in a raid1 for storage… that server could be accessible only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9-mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates these before proxying the plain connection to the pi; a sketch of that is at the end of this comment). clients like thunderbird can offline-store all emails (configurable per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.

    maybe adjust the maximum mail size of your own mta to exactly match (or stay slightly below) that of the freemailer you use, to prevent surprises where a big email is accepted locally but then never sent (the relay sketch below includes the relevant knob).

    it’s possible to have a nextcloud instance on that same pi that acts as a webmailer just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a bad idea either ;-)

    i also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)
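
    a minimal sketch of the relaying part in postfix terms, assuming the freemailer offers authenticated submission on port 587 (hostname, credentials file and the size value are placeholders/examples); `message_size_limit` is the knob for the size match mentioned above:

    ```
    # /etc/postfix/main.cf (excerpt) -- send everything out via the freemailer
    relayhost = [smtp.example-freemail.org]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    # stay at or slightly below the freemailer's maximum, e.g. ~50 MB here
    message_size_limit = 52428800
    ```

    /etc/postfix/sasl_passwd then contains one line like `[smtp.example-freemail.org]:587 username:password` and gets turned into the hash map with `postmap /etc/postfix/sasl_passwd`.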
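
    and a sketch of the haproxy idea, assuming haproxy terminates tls plus client-certificate checking on the public side and forwards the plain imap connection to the pi (paths and addresses are placeholders):

    ```
    # /etc/haproxy/haproxy.cfg (excerpt)
    frontend imaps_in
        mode tcp
        # server cert + key in server.pem; only clients presenting a cert
        # signed by client-ca.pem get through
        bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
        default_backend pi_imap

    backend pi_imap
        mode tcp
        server pi 192.168.1.10:143 check
    ```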


  • you sound like you want to fight anti-civilisation with a non-civilised approach, but that’s not going to do it. as you wrote “infested”, i strongly believe that becoming evil while fighting evil only helps evil. what comes next after you have hung some of those who you thought should accompany those oak trees for a while? you chose to believe that would solve the problem for now, didn’t you? what would really happen (in case you didn’t just want to death-wish them, only to find that parasites are immune to death-wishing)? even if you somehow don’t go to jail, what have you achieved then? wouldn’t there be another one to fill that position*, one who “thanks” you for helping him into it? but how would that “thank you” look? well, since you removed the previous one from his position and were the “friend” who helped the new villain into his, that friendship turns into a threat to him the moment he ‘has’ the position. then you’re not welcome any more, but an enemy to take down, unless you start using your new skills to do other dirty work for that villain, becoming another evil yourself instead of helping against evil.

    *(where “position” is anything someone can hide behind by law)

    those hiding behind laws will always be there until those laws are adjusted. the persons might change, but the possibility of hiding behind laws would still be there and attract villains; the bigger the villain, the more attractive those laws are to him.

    try to control your anger first and become really good at it, or you might end up being controlled by some villain using your anger for his own purposes (do you watch tv, btw? do you get manipulated by it?)

    maybe step back and instead become a scientist who analyses the parasites, how they do what they do, and how they could really be fought one day without just being evil and achieving nothing. and a scientist who helps others become parasite-analysing scientists themselves.

    if you find another civilised approach that works, or at least should, plz tell me.




  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd
    1 month ago

    one example of a program that did multiple things is sfdisk: it used to make the kernel reload the new partition table, but that was never its main job, which is only changing partition tables. the extra functionality moved to blockdev, which is much closer to that kind of task, as it also triggers flushing buffers and, i think, setting read/write status. i am fully ok with that change, because it removes code from a program that doesn’t need it and puts it into one that already does similar things. other partitioning programs like gdisk, fdisk or parted could go the same way, so the maintainers of the reread-partition-table logic can concentrate on one solution in one place (in userspace) instead of opening issues at an unknown number of projects that also alter partition tables (see the example at the end of this comment).

    the “do one thing” paradigm is good for the developers who maintain the code, and i very much appreciate their work. if you only want one-day flies that either die or take huge amounts of resources just to be kept alive (picture a mayfly in an emergency room, heart-lung machine attached, surgeons rushing around trying to lengthen its life by a few more seconds), then you are fine with monolithic tools that can hardly be maintained and suck all day because no one wants to fix any bugs, or cannot do so without creating new ones due to the tightened internal dependency hell.

    the point is not a lack of examples of doing it wrong, but where one wants to be heading.
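
    the sfdisk/blockdev split described above, as it looks on the command line today (the device name and dump file are placeholders, and these commands rewrite partition tables, so purely illustrative):

    ```
    # change the partition table -- sfdisk's actual job
    sfdisk /dev/sdX < new-layout.dump

    # then tell the kernel to re-read it -- the part that moved to blockdev
    blockdev --rereadpt /dev/sdX

    # blockdev already handles the related low-level tasks anyway
    blockdev --flushbufs /dev/sdX
    blockdev --setro /dev/sdX      # or --setrw
    ```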


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd
    1 month ago

    Lol what???

    wouldn’t that be the definition of stable?

    the computer on voyager 2 has been running for 47 years now. they might have rebooted some parts meanwhile, but overall that’s a long time, and if a program is free of bugs, how long it can run depends only on the durability of the hardware and on protection from cosmic rays (which, afaik, were mostly the problems the voyager probes faced, not bugs). that can be quite long if the hardware is protected from hazardous environments, maybe by using optoelectronics. the point is that bug-free software can run forever, depending only on hardware durability and energy supply; beyond that, no humans are needed for a veery long time ;-)