A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.

Certified know-it-all.

  • 2 Posts
  • 1.54K Comments
Joined 5 years ago
Cake day: August 21st, 2021

  • Yeah, you’ll have to do a lot more troubleshooting than this. Did Docker successfully bind to port 8000? Can you curl it from the VPS itself? Do the container and the things in it run properly? Are there any error messages in the logs?

    I’m not a Docker expert, but I’d start with the docker commands which show whether a container is running and which ports it actually binds. Maybe an ss -at. Then do a curl http://localhost:8000 and see if it returns your webpage. If it doesn’t, you need to fix your webpage container first. Or see if you can come up with an easier method to deploy your website.

    A reverse proxy, in any shape or form, will require your website to run first.
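    For illustration, the first checks could look like this (a sketch; port 8000 is taken from the post above, nothing else about the setup is verified):

    ```shell
    # Which containers are running, and which host ports did they publish?
    docker ps --format '{{.Names}}\t{{.Ports}}' 2>/dev/null || echo "docker not reachable from here"

    # Is anything on the host actually listening on port 8000?
    ss -ltn 'sport = :8000' 2>/dev/null || echo "ss not available"

    # Does the app answer locally? -sf makes curl stay quiet and fail on HTTP errors.
    curl -sf http://localhost:8000/ || echo "nothing answered on :8000"
    ```

    If that last curl already fails on the VPS itself, the reverse proxy isn’t your problem yet.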


  • Well, computer programmers still do things like Project Euler and Codewars. Some people go geocaching, or to more organized events which include riddles at different places. We got escape rooms… People still listen to shortwave radio and figure out whether numbers stations change due to the Iran war… I read people tried to use modern AI on the Voynich manuscript and other old riddles… It’s probably all still out there; the internet just changed, and now it’s almost impossible to find in the big haystack, walled Discord rooms etc. And social media got more consumerist. You’d (on average) be mindlessly doomscrolling there these days, not actively looking for puzzles to solve.


  • hendrik@palaver.p3x.de to Linux@lemmy.ml · A Counter-View on the Age Verification Law

    “3 bits of information is not meaningful surveillance.”

    By the way, as I said in my other comment, I don’t think your maths is correct. 3 bits is huge!
    If you extend a browser fingerprint from an estimated 18.1 bits of information by 3 bits, to 21.1 bits, you’d catch 2^21.1 − 2^18.1 = roughly 2 million people. That means out of all the citizens of a state like Nebraska or Idaho, they can tell it’s you. That’s the scale of 3 (additional) bits, if my maths is correct.
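    For what it’s worth, the subtraction checks out (same numbers as above, nothing new assumed):

    ```shell
    # 2^21.1 - 2^18.1: how many more people 3 extra bits of fingerprint entropy distinguish
    awk 'BEGIN { printf "%.0f\n", 2^21.1 - 2^18.1 }'   # prints roughly 1.97 million
    ```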


  • Well… There’s just a lot of misinformation out there regarding this. First of all, it doesn’t do age verification. What it tries to do is age attestation. It’s supposed to mandate parental controls in operating systems. It specifically does not verify anyone’s age.

    But it’s poorly written. It contains contradictions. Some phrasings can’t ever work, like how this is supposed to be done by software, but then the developer shouldn’t make their software request the signal, yet they themselves need to request the signal?! How is that even supposed to work? Ultimately we need law to be consistent, and this law reads to me like it was written by someone who doesn’t know how computers work. And that would be my issue with it.

    But I think some of your points are moot as well. If you want universal legislation (2), why do a bazillion different state bills? That’s the opposite of it. And (3) doesn’t make sense either: we can’t just give up privacy/freedom because other random things set a precedent. We can use it to strike some balance, yes. But the 3 bits don’t work like that. They don’t apply to the total, they come on top! Every additional bit holds a lot of meaning and will be the thing that homes in on you, narrowing a potential group of thousands of people down to exactly you. In privacy, every single bit of information is very, very important and valuable. In the realm of browser fingerprinting, an additional 3 bits of linearly independent information would rat you out in a group of roughly 2 million people! That’s more than some states/countries have citizens. (This isn’t 3 bits of independent information, though.)




  • Gee. God forbid anyone answers the one interesting question. And that’s whether the book / storytelling is any good.

    I mean, it’s not surprising to me that random internet people disagree. They always do… But we could just ask a professional?! Book critic is a real job. They could tell us within a few hours whether the book is any good. Or full of common story tropes. And “sudden plot twists” like when I tried writing a story with AI 😅 And whether it’s going anywhere, or how it compares to other books which have some artistic quality or meaning to them.


  • Lol. For someone who says they expect other people to learn something, you’re a bit short on supplying any. I mean, this would be an opportunity for someone (me) to learn something. But a down-vote won’t do it. And lessons on what not to do (discuss for 2.5h, expect it to think) don’t lead anywhere either. I’d need to know what to do in my situation. Or where to find such information?!

    Or was it because I said I value efficiency and for some reason you’re team bloat? I seriously don’t get it.


  • I don’t have a definite answer to that. Could be the case that I’m somehow intelligent enough to remember all the quirks of C and C++, devour a book on my favorite microcontroller in 3 days and remember details about the peripherals and processor, but somehow too stupid to figure out how AI works. I can’t rule it out. At least I’ve tried.

    I still think microcontroller programming is way more fun than coding some big Node.js application with a bazillion dependencies.

    And I sometimes wish people would write an instant messenger as if we had 4 MB of RAM available, and not eat up 1 GB with their Electron app, which then also gets flagged by the maintainers twice a year for using some components that have open vulnerabilities.

    I mean I don’t see any reason why I shouldn’t be allowed to complain about it.

    But yeah, software development is always changing. And sometimes I wonder if things are for the better or the worse.

    I’ve had a lot of bad experiences with embedded stuff and trying to let AI do it for me. I mostly ended up wasting time. I always thought it must be because these LLMs are mainly trained on regular computer code, without these constraints, and that’s why they always smuggle in silly mistakes. And while fixing one thing, they break a different thing. But it could also be my stupidity.
    I’ve had a way better time letting it do webfrontends, CSS, JavaScript… even architecture.

    But I don’t think this (specifically) is one of the big issues with AI anyway. People are free to learn whatever they want. There’s a lot of niches in computer science. And diversity is a good thing.



  • Haha. I think there’s often a rough idea on what kind of programmer people are, judging by their opinion on these AI tools.

    Have you tried arguing with your AI assistant for 2.5h straight about memory allocation, and why it can’t just take some example code from some documentation? And it keeps doing memory allocation wrong? Scold it over and over again to use linear algebra instead of trigonometric functions which won’t cut it? Have you tried connecting Claude Code to your oscilloscope and soldering iron to see what kind of mess its code produces?

    I’m fairly sure there are reasons to use AI in software development. And there are also good reasons to do without AI, just use your brain and be done with it in one or two hours instead of wasting half a workday arguing and then still ending up doing it yourself 😅

    I don’t think these programmers are idiots. There’s a lot of nuance to it. And it’s not easy at all to apply AI correctly so it ends up saving you time.


  • Good comment. The main issue is this: Back in the day I could have a quick look at the code and tell within a minute whether something was coded by a 12 year old or by some experienced programmer. Whether someone put in so much effort that I could be pretty sure they’re gonna maintain the project. Whether they put in some love because it solves some use-case in their life, and it’s going to do the same for me. Assess their skill level in languages I’m fluent in.

    These days, not so much. All code quality looks pretty much the same. Could be utter garbage. Could be good software, could be maintained. Could be anything; Claude always makes it look good at first glance. There are also new ulterior motives for why software exists. And it takes me a good amount of time and detective work to find out. And I often can’t rely on other people either, because they’re either enraged or bots, and the entire arguments are full of falsehoods.

    As a programmer and avid Linux user, I rely a lot on other people’s software. And the Free Software community indeed used to be super reliable. I could take libraries for my software projects. Could install everything from the Debian repo and I never had any issues. It’s mostly rock solid. There were never any nefarious things going on.

    And now we’ve added deceit to the mix. People try to keep the true nature of projects a secret. And I think that’s super unhealthy. I had a lot of trust in my supply chain. And now I’m gonna need to put in a lot of effort to keep it that way. And not fall prey to some shiny new thing which might be full of bugs and annoyances and security vulnerabilities, and gone by tomorrow once someone stops their OpenClaw… Yet the project looks like some reliable software.

    And I don’t share the opinion on sandboxing. Linux doesn’t have sandboxing (on the desktop). That’s a macOS thing (and Android and iOS). All we have is Flatpak. But that forces me to install 10 GB of runtimes. Bypass the distro maintainers, who always had a second pair of eyes on what software does, whether it had tracking or weird things in it, whether it had security vulnerabilities in the supply chain. Maintainers who also provided a coherent desktop experience to me. And now I’m gonna pull software from random people/upstreams on the internet, and trust them? Really? Isn’t that just worse in every aspect?

    And wasn’t there supposed to be some dividing line in DevOps? Why is it now every operator’s job to do static analysis on the millions of moving parts on their servers… Isn’t that a development job?

    And I don’t think Flatpak’s permission system is even fine-grained enough. Plus, how does it even help in many cases? If I want to use a password manager, it obviously needs access to my passwords. I can’t sandbox that away. So if the developers decide to steal them, there’s no sandboxing stopping them in any way. Same for all the files on my Nextcloud. So I don’t see how sandboxing is gonna help with any of that.
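    To make the password-manager point concrete, here’s roughly what a locked-down per-app Flatpak override looks like (a sketch; the app ID and file path are made up for illustration):

    ```ini
    # ~/.local/share/flatpak/overrides/org.example.PasswordManager  (hypothetical app ID)
    [Context]
    # Revoke broad home-directory access, but the vault file itself must stay readable --
    # so a malicious developer’s code can still read exactly the secrets that matter.
    filesystems=!home;~/.local/share/hypothetical-vault.kdbx
    ```

    The sandbox can narrow the blast radius, but it can’t substitute for trusting the developer with the data the app exists to handle.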

    I just don’t think it’s a good argument. I mean if you have a solution on how sandboxing helps with these things, feel free to teach me. I don’t see a way around trust and honesty as the basic building blocks. And then sandboxing/containerization etc on top to help with some specific (limited) attack vectors.

    I mean, don’t get me wrong here. I’m not saying we need to ban AI in software development. I’m also not saying 12 year olds aren’t allowed to code. I did. And some kids do great things. That in itself isn’t any issue.



  • Yeah. Maybe it’s time to adopt some new rule in the selfhosted community, mandating disclosure. Because we got several AI-coded projects in the last few days or weeks.

    I just want some say in what I install on my computer. And not be fooled by someone into using their software.

    I mean I know why people deliberately hide it, and say “I built …” when they didn’t. Because otherwise there’s an immediate shitstorm coming in. But deceiving people about the nature of the projects isn’t a proper solution either. And it doesn’t align well with the traditional core values of Free Software. I think a lot of value is lost if honesty (and transparency) isn’t held up anymore within our community.





  • Interesting. Thanks for the info. I’ll re-think whether I recommend it to random people around the world, then.

    In Germany it’s great. I’ve been using it for many years now. But we have some good/strong hacker organizations, digital sovereignty and privacy groups, nonprofits and some generous IT companies. Maybe it’s random private individuals in other countries and they’re not as reliable.

    Seems like right now there’s something going wrong anyway. I don’t think the amount of “offline” servers is normal. And a good amount of them aren’t even offline, but still answer my DNS queries.