• testfactor@lemmy.world · 28 days ago

    This article feels pretty disingenuous to me.

    It glosses over the fact that this is surveillance on computers that the school owns. This isn’t them spying on kids’ personal laptops or phones. This is them exercising reasonable and appropriate oversight of school equipment.

    This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work. You can cry big brother all you want, but I think most people are fine with the idea that the corporation I work for has a reasonable case for putting monitoring software on the computer they gave me.

    The article also makes the point that, while the companies claim they’ve stopped many school shootings before they’ve happened, you can’t prove they would have happened without intervention.

    And sure. That’s technically true. But the article then goes on to treat that assertion as if it’s proof that the product is worthless and has never prevented a school shooting, and that’s just bad logic.

    It’s like saying that your alarm clock has woken you up 100 days in a row, and then being like, “well, there’s no proof that you wouldn’t have woken up on time anyway, even if the alarm wasn’t there.” Yeah, sure. You can’t prove a negative. Maybe I would usually wake up without it. I’ve got a pretty good sleep schedule after all. But the idea that all 100 are false positives seems a little asinine, no? We don’t think it was effective even once?

    • IsoKiero@sopuli.xyz · 28 days ago

      This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work. You can cry big brother all you want, but I think most people are fine with the idea that the corporation I work for has a reasonable case for putting monitoring software on the computer they gave me.

      European point of view: my work computer and the network in general have filters so I can’t access porn, gambling, malware and other such stuff on it. It has monitoring for viruses and malware; that’s a pretty normal and well-understood need. BUT. It is straight up illegal for my employer to actively monitor my email content (they’ll of course have filtering for incoming spam and such), my chats on Teams or whatever, and in general to be intrusive of my privacy, even at work.

      There are of course mechanisms in place where they can access my email if anything work-related requires it. So in case I’m lying in a hospital or something, they are allowed to read work-related emails from my inbox, but if there’s anything personal, it’s protected by the same laws which apply to traditional letters and other communication.

      Monitoring ‘every word’ is just not allowed, no matter how good your intentions are. And that’s a good thing.

      • TimeSquirrel@kbin.melroy.org · 28 days ago

        Do you mix personal and work email accounts? Do you not keep separate ones? My work email has absolutely no personal conversations in it, nothing unrelated to work. And work isn’t aware of any of my personal accounts.

        • IsoKiero@sopuli.xyz · 27 days ago

          I personally don’t, but many do. It doesn’t matter, though: my employer isn’t legally allowed to read my emails unless it’s some sort of emergency. My vacation, a weekend, a short sick leave and things like that do not qualify. And even then, if the criteria are met, it’s illegal to read anything other than strictly work-related messages out of my mailbox.

          We even have a form where people leaving the company sign permission for their mailbox to be accessed by their team leader, and without that signature we’re not allowed to grant permissions to anyone, unless the legal department is on the case and the terms for a privacy breach are met.

      • testfactor@lemmy.world · 28 days ago

        You say “the last time this happened” as if this wasn’t a generalized trend across all schooling for the past decade or so.

        Out of the tens of thousands of schools implementing systems like this, I’m not surprised that one had some letch who was spying on kids via webcam.

        And I’m all for having increased forms of oversight and protection to prevent that kind of abuse.

        But this argument is just as much of a “won’t someone think of the children” as the opposite. Just because one school out of thousands did a bad thing doesn’t mean the tech is worthless or bad.

        • catloaf@lemm.ee · 28 days ago

          Like any tool, the tech is fine. It’s the people using it who have been shown to be irresponsible. Therefore, we should not allow use of these tools.

          • testfactor@lemmy.world · 27 days ago

            That argument could be expanded to any tool though.

            People run people over with cars or drive drunk. Ban cars?

            People use computers to distribute CP. Ban computers?

            People use baseball bats to bludgeon people to death. Ban baseball?

            The question of whether a tool should be banned is driven by whether its utility is outweighed by the negative externalities of its use by bad actors.

            The answer is wildly more nuanced than “if it can hurt someone it must be banned.”

            • catloaf@lemm.ee · 27 days ago

              The utility of these tools does not outweigh their misuse.

              • testfactor@lemmy.world · 27 days ago

                That is what we’re debating, yes.

                If it could be conclusively proven that a system like this has saved a child’s life, would that benefit outweigh the misuse?

                If not, how many children’s lives would it need to save for it to outweigh the misuse?

    • tee9000@lemmy.world · 28 days ago

      Counterpoint: when this isn’t an obscure thing, and kids are aware of it, they will purposefully use trigger words, because they are kids.

      If kids/people are having mental health issues, what’s the best way to handle that? By scanning for the symptom and telling them to stop being mentally troubled? I really doubt kids are getting the care they need based on these flags. It seems like a band-aid for the cultural/systemic issues that cause the mental illness/harm in the first place.

      • testfactor@lemmy.world · 27 days ago

        Sure, maybe, but I’d also say you shouldn’t let the perfect be the enemy of the good.

        Yes, we should absolutely have better mental healthcare safety nets. Yes, deliberately triggering false positives is probably a pretty common prank.

        But this isn’t a zero-sum game. This can work in tandem with a therapist/counsellor to try and identify someone before they shoot up a school and get them help. This might let the staff know a kid is struggling with suicidal ideation before they find the kid OD’d on mom’s sleeping pills.

        In an ideal world would this be unnecessary? Absolutely. But we don’t live in that ideal world.

        • tee9000@lemmy.world · 27 days ago (edited)

          In fairness, you can’t just say it’s not a zero-sum game when the article is supported with a quote from one individual saying they were glad it told them in some cases. We don’t know how effective it is.

          This is normalizing very intimate (and automated) surveillance. Kids all have smartphones and can google anything they want when they aren’t using school hardware. If kids have any serious premeditation to do something bad, then they will do it on their smartphones.

          The only way this would be effective is if it catches students before they are aware they are being watched (poof, that’s gone tomorrow), or if the student is so dirt poor that they don’t have a smartphone or craptop.

          And what else will the student data be used for? Could it be sold? It would certainly have value. Good intentions are right now… data is FOREVER.

          • sugar_in_your_tea@sh.itjust.works · 27 days ago

            Exactly.

            As a concerned and very involved parent, I do not want this nonsense tracking my child, because I value my child’s privacy. I also don’t provide them a phone, because they haven’t demonstrated to me that they’ll use one responsibly (the oldest is 10). If I let my kids have a device, it’s because I trust them with it; I let them use my desktop and laptop w/o supervision and w/o any tracking to do things like play games and do homework, but they only get access when I say they can. When they earn my trust, I’ll let them have their own device with their own passwords that I don’t know.

            But I don’t think I’m the target audience here. My kids aren’t poor and we have a pretty good relationship. I consistently tell them who they can talk to (teachers, school counselors, ecclesiastical leaders, certain neighbors, etc.) if they don’t feel comfortable talking to me about something. I wish all kids had parents with the time and inclination to care for their emotional needs, but that’s not the world we live in. That said, we do exist, so whatever policies exist need to cater to privacy-minded families who properly take care of their kids.

            I don’t know the solution here, but I will oppose any hidden surveillance I hear about.

    • CosmicTurtle0@lemmy.dbzer0.com · 28 days ago

      This conversation seems to come up every now and again, and Lemmy seems to split between two camps:

      • students, especially low-income students who can’t afford their own devices, will use devices to do things kids do (yes, this includes porn)
      • schools, as part of their duty to provide a safe learning environment, have a responsibility to provide some level of filtering and content monitoring

      Where that line gets drawn has to be an active conversation between schools, parents, and students. But this conversation often devolves into “BuT tHiNk Of ThE cHiLdReN!”

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 27 days ago

      This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work.

      That is an extreme leap, what the fuck.

      Network filters that block certain domains.

      Vs

      A keylogger that tracks everything you type into the computer, even things you’ve deleted.
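
      To make that scope difference concrete, here’s a minimal, hypothetical sketch (Python, made-up domain and word lists; none of this comes from the article or any specific vendor): a network filter only ever sees which domain was requested, while keystroke-level monitoring inspects the full text of everything typed.

      ```python
      # Hypothetical illustration of the scope difference, not any vendor's actual code.

      BLOCKED_DOMAINS = {"example-gambling.test", "example-adult.test"}  # made-up blocklist

      def domain_filter_allows(requested_domain: str) -> bool:
          """Network filter: sees only the domain being requested, nothing else."""
          return requested_domain not in BLOCKED_DOMAINS

      def keystroke_scan(keystroke_log: str, flag_words: set) -> list:
          """Keylogger-style scan: sees every word typed, even text later deleted."""
          return [word for word in keystroke_log.lower().split() if word in flag_words]

      # The filter answers one narrow question per request:
      print(domain_filter_allows("wikipedia.org"))                          # True -> allowed
      # The keystroke scan inspects everything the student ever typed:
      print(keystroke_scan("i feel hopeless today", {"hopeless", "hurt"}))  # ['hopeless']
      ```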