• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • The process is to go step by step. First, connect directly to the modem you have (bridged connection if possible) and test with multiple bandwidth measurements (speedtest, fast.com, downloading a big file from some university FTP…), then work your way downstream through the network. At every step test multiple scenarios where possible, preferably with multiple devices (a couple of example commands at the end of this comment).

    When I got a 1Gbit fiber connection a few years back I got an Ubiquiti EdgeRouter X with PoE options. On paper that should’ve been plenty for my network, but in practice, with NAT, DNAT, firewall rules and things like that, it capped at 600-700Mbps depending on what I used it for. With small packets and VPN it dropped even more. So now that thing acts as a glorified PoE switch and the main routing is handled by a Mikrotik device, which according to the manufacturer’s tests should be able to push 7Gbps under optimal conditions. I only have 1/1Gbps, so there’s plenty of room, but with very specific loads that thing is still pushed to its limit (mostly small packet sizes with other stuff on top), though it can manage full-duplex 1000Base-T. In normal everyday use it’s running at around 20% load, but I like the fact that it can handle even the more challenging scenarios.
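    For the actual measurements, a couple of examples of what I mean (the download URL is just a placeholder, any big file from a fast mirror works):

        speedtest-cli                                          # CLI version of the speedtest.net test
        wget -O /dev/null https://example.com/large-file.iso   # the average rate it reports is your throughput

    Run the same tests at each point in the chain (straight from the modem, behind the router, over WiFi…) so you can see where the numbers drop.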


  • I’m pretty sure that you’ve already checked, but the obvious things sometimes fly under the radar and go unnoticed: is the phone in file transfer mode in the first place? Another one (which has bitten me): if you’re using a USB hub, try a direct connection and/or different ports on the host computer.

    Personally I’ve spent far too long trying to hunt down something obscure when the fix was really simple, like some default option that changed with an update or whatever. And in general I’ve forgotten to check the simple things first way too many times, and that has cost me more wasted hours than I want to count or admit.
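    If the simple things check out, a quick way to see whether the phone is even detected (nothing phone-specific here, this just shows what the kernel sees):

        lsusb          # the phone should show up somewhere in the device list
        sudo dmesg -w  # leave this running, replug the cable and watch what gets logged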



  • There’s already a ton of great examples I can relate to (I’ve been using Linux since 1998 or ’99), but maybe the biggest difference today, apart from everything being SO MUCH EASIER now, is that the internet wasn’t really the thing it is today. Especially the bandwidth. It took hours and hours over the phone line to download anything; on a good day you could get 100MB in just under 4 hours. Of course things were a lot smaller back then too, but it still took ages, and I’m pretty sure I now have more bandwidth on my home connection than most of the local universities had back in the 90s.


  • Back when CRT monitors were a thing and all this fancy plug’n’play technology wasn’t around, you had modelines in your configuration files which told the system what kind of resolutions and refresh rates your actual hardware could support. And if you put the wrong values there, your dumb analog monitor would just try to eat them as-is, with wildly different results. Most of the time it just resulted in a blank screen, but other times the monitor would literally squeal as it attempted to push components well over their limits. And in extreme cases with older monitors it could actually physically break your hardware. And everything was expensive back then.

    Fun times.
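    For the curious, a modeline was just a line of raw timing numbers in the X configuration. Roughly what one looked like in an old XF86Config/xorg.conf Monitor section (this particular line is generated with cvt for 1024x768@60 and is only an illustration, not from a real config of mine):

        Section "Monitor"
            Identifier  "CRT0"
            HorizSync   30.0 - 70.0
            VertRefresh 50.0 - 120.0
            # pixel clock followed by horizontal and vertical timings;
            # the monitor tried to obey whatever numbers you put here
            Modeline "1024x768_60.00"  63.50  1024 1072 1176 1328  768 771 775 798 -hsync +vsync
        EndSection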



  • I want to prevent myself from reinstalling my system.

    Any even remotely normal file on disk doesn’t stop that, regardless of encryption, privileges, attributes or anything your running OS could do to the drive. If you erase the partition table it’ll take your ‘safety’ file with it, no questions asked, because at that point the installer doesn’t care about (nor see or manage) individual files on the medium. And this is exactly what the ‘use this drive automatically for installation’ option does in pretty much all of the installers I’ve seen.

    Protecting myself from myself.

    That’s what backups are for. If you want to block any random USB-stick installer from running, you could set up boot options in the BIOS to exclude external media and set a BIOS password, but that only limits whether you can ‘accidentally’ reinstall the system from external media.

    And neither of those has anything to do with read/copy protection for the files. If they contain sensitive enough data they should be encrypted (and backed up), but that’s a whole other problem than protecting the drive from an accidental wipe. Any software-based limitation concerning your files falls apart immediately (excluding reading the data, if it’s encrypted) when you boot another system from external media or another hard drive, as whatever solution you’re using to protect them is no longer running.

    Unless you hand system management over to someone else (root passwords, BIOS password and settings…) who can keep you from shooting yourself in the foot, there’s nothing that will get you what you want. Maybe some cloud-based filesystem from Amazon with immutable copies could achieve that, but it’s not really practical on any level, financially very much included. And even with that (if it’s even possible in the first place, I’m not sure), if you’re the one holding all the keys and passwords, the whole system is at your mercy anyway.

    So the real solution is to back up your files, verify regularly that the backups actually work, and learn not to break your things.
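    As a minimal sketch of what that can look like, assuming an external drive mounted at /mnt/backup (the paths are just placeholders):

        # copy home to the backup drive, preserving permissions/ACLs/xattrs
        # and removing files that no longer exist in the source
        rsync -aAX --delete /home/ /mnt/backup/home/

        # verify: a checksum-based dry run should report no unexpected differences
        rsync -aAXcn --delete /home/ /mnt/backup/home/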




  • Then do sudo apt install xfce4 and sudo apt purge cinnamon* muffin* nemo*.

    It’s been a while since I installed xfce4 on anything, but if things haven’t changed I think the metapackage doesn’t include xfce4-goodies and some other packages, so if you’re missing something it’s likely you just need to ‘apt install xfce4-whatever’. Additionally you can keep Cinnamon around as long as you like as a kind of backup; just change lightdm (or whatever login manager LMDE uses) to use xfce4 as the default. And then there are even lighter WMs than XFCE, like LXDE, which is also easy to install via apt and try out to see if it works for you.
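    Roughly something like this, assuming LMDE still uses lightdm (package and session names are from memory, so double-check them):

        # the desktop plus the extra utilities the metapackage may leave out
        sudo apt install xfce4 xfce4-goodies

        # make Xfce the default session for lightdm:
        # in /etc/lightdm/lightdm.conf, under the [Seat:*] section, set
        #   user-session=xfce

        # or try an even lighter desktop
        sudo apt install lxde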


  • I understand the mindset you have, but trust me, sooner or later you’ll learn the habit of pausing to check your command before hitting enter. For some it takes a bit longer and it’ll bite you in the butt a few times (so have backups), but everyone has gone down that path and everyone has had to fix their mistakes now and then. If you want a hard (and fast) way to learn to confirm your commands, use dd a lot ;)

    One way to make it a bit less scary is to ‘mv <thing you want removed> /tmp’ and, once you’ve confirmed that nothing extra got moved, ‘cd /tmp; rm -rf <thing>’, but that still includes the ‘rm -rf’ part.
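    Concretely, with a made-up directory name (old-project is just a placeholder):

        mv ~/old-project /tmp/        # stage it for removal
        ls /tmp/old-project           # check that nothing extra came along
        rm -rf /tmp/old-project       # the actual removal still needs rm -rf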


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Linux on old School Machines?
    1 month ago

    Absolutely. Maybe leave Gnome/KDE out and use a lighter WM, but they’ll be just fine. Especially if they have 8GB or more RAM. I suppose those have at least dual-core processors, so that won’t be a (huge) bottleneck either. You can do a ton of stuff with those beyond just web browsing, like programming/text editing/spreadsheets and so on. I’d guess that available RAM is the biggest limit on what they can do, especially if you like to open a ton of tabs in your browser.


  • Make sure you have the alsa-utils package installed and try to run alsamixer. That’ll show all the audio devices your system detects. Maybe you’re lucky and it’s just that some volume control is muted, and if you’re not, it’ll at least give you some info to work with. The majority of audio devices don’t need any additional firmware and almost always work out of the box just fine. What’s the hardware you’re running? Maybe it’s something exotic whose driver isn’t installed by default (which I doubt).

    And additionally, what are you trying to play audio from? For example MP3s need non-free codecs to be installed, and without them your experience is “a bit” limited on the audio side of things.
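    Roughly the checks I’d start with, assuming a Debian-style system (treat the output as a starting point, not a diagnosis):

        sudo apt install alsa-utils   # if it isn't installed already
        aplay -l                      # list the playback devices ALSA sees
        alsamixer                     # check that nothing relevant is muted (MM = muted)
        speaker-test -c 2             # play test noise on the default device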


  • They both use the upstream version number (as in the number the software developer gave to the release). They might additionally have some kind of revision number related to packaging, or a patch number, but as a rule of thumb, yes, the bigger number is the more recent one. Whether you should use that as the only criterion for deciding which to install is, however, another discussion. Sometimes the dpkg/apt version is preferred over snap regardless of version differences, for example to save a bit of disk space, but that depends on a ton of different things.
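    A quick way to compare them yourself (<package> is a placeholder for whatever you’re looking at):

        apt policy <package>   # version installed/available via apt, including the packaging revision
        snap info <package>    # versions offered on the different snap channels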


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · *Permanently Deleted*
    2 months ago

    Mullvad (apparently; first time I’ve heard of the service) uses DNS over TLS and I don’t think the current GUI version has the option to enable it. Here’s a quickly googled howto from Fedora on how to enable it on your system. If that doesn’t help, search for ‘NetworkManager DoT’ or ‘DNS over TLS’.
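    If your system uses systemd-resolved, the gist of those howtos is usually something like the following in /etc/systemd/resolved.conf (the address and hostname are placeholders; use the resolver Mullvad actually documents):

        [Resolve]
        DNS=<resolver-ip>#<resolver-hostname>
        DNSOverTLS=yes

    Then restart the service with ‘sudo systemctl restart systemd-resolved’.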


  • Most, but not all, do. So it might be as simple as setting a static address, or that address may end up overlapping with the DHCP pool in the future.

    You could ask your ISP (or try it out yourself) whether you can use some addresses outside of the DHCP pool; my ISP router had a /24 subnet with .0.1 as the gateway but the DHCP pool started from .0.101, so there would’ve been plenty of addresses to use. Mine had an ‘end user’ account too, from where I could’ve changed LAN IPs, SSID and other basic stuff, but I replaced the whole thing with my own.
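    If it turns out there’s room outside the pool, a sketch of setting it with nmcli (connection name and addresses are placeholders for that same kind of /24 with the gateway at .0.1):

        nmcli con mod "Wired connection 1" ipv4.method manual \
            ipv4.addresses 192.168.0.50/24 ipv4.gateway 192.168.0.1 \
            ipv4.dns 192.168.0.1
        nmcli con up "Wired connection 1"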