I think 3D geometry has a lot of quirks, with many seemingly intuitive results that don't actually hold up. In the link I share a discussion with ChatGPT where I asked the following:

assume a plane defined by a point A=(x_0,y_0,z_0) and a normal vector n=(a,b,c) (whose exact values don't matter here), and suppose a point P=(x,y,z) also sits in the space R^3. The question is:
If H is a point on the plane such that (AH) is perpendicular to (PH), does it follow immediately that H is the projection of P onto the plane?
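The answer is indeed no. A quick numerical sketch (my own numbers, not from the thread) makes this concrete: take the plane z = 0 with A at the origin, and a point P off the plane. Points H on the plane with AH ⊥ PH sweep out a whole circle (the plane's intersection with the sphere of diameter AP), and only one of them is the actual foot of the perpendicular from P.

```python
# Counterexample: the right-angle condition AH ⊥ PH does NOT
# single out the projection of P onto the plane.
# Plane: z = 0, with A = (0,0,0) on it and normal n = (0,0,1).
# P = (1,0,1) lies off the plane; its true projection is F = (1,0,0).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

A = (0.0, 0.0, 0.0)   # point on the plane z = 0
n = (0.0, 0.0, 1.0)   # plane normal
P = (1.0, 0.0, 1.0)   # point off the plane

# True orthogonal projection of P onto z = 0 (drop the z-coordinate):
F = (P[0], P[1], 0.0)

# A different point of the plane that still satisfies the condition:
H = (0.5, 0.5, 0.0)

AH = sub(H, A)
PH = sub(H, P)

print(dot(AH, PH))   # 0.0   -> AH is perpendicular to PH
print(H == F)        # False -> yet H is not the projection of P
```

So the condition is necessary for H to be the projection (the projection F does satisfy it, since AF lies in the plane and PF is along the normal), but far from sufficient.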

I suspected the answer was no before asking, but GPT gave the wrong answer "yes", then corrected itself afterwards.

So don't we really need more education about 3D space in high schools? It shouldn't be that hard to recall such simple properties on the fly, even for the best knowledge-retrieval tool of the moment.

  • Are_Euclidding_Me [e/em/eir]@hexbear.net
    12 hours ago

    Well see, here you have good proof that chatGPT isn’t actually “the best knowledge retrieving tool at the moment”. ChatGPT (and every other LLM) suuuucks at complicated math, because these text extruders don’t reason. Seriously, try out some more complicated math problems. I think you’ll find chatGPT gets most of them wrong, and in infuriating ways that make very little sense.

    I don’t disagree that we need better math instruction for students. I’ve been saying this since I was a student. But using chatGPT being horrible at math as evidence of this is, well, ridiculous, frankly. ChatGPT’s performance isn’t based on how well your average high schooler understands something, and I don’t know why you’re trying to tie those two very different things together.

    • zaknenou@lemmy.dbzer0.comOP
      10 hours ago

ChatGPT is trained on forum discussions and, quite likely, pirated books. If it had found the idea in a previously established text, it would have answered correctly. That's why I DO think it is representative of what the average good student was taught (not of how smart or good at problem solving they are). What's funny is that after reasoning it found the right answer, which is counterintuitive, since ChatGPT is supposed to be good at retrieving information, not at reasoning!