Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterward, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

  • Praise Idleness@sh.itjust.works · 1 month ago

    It’s not about using it. It’s about using it in a helpful and constructive manner. Obviously no one’s going to learn anything if all they do is blatantly ask for answers and written work.

    LLMs have been a wonderful tool for me to further understand various topics.

    • Petter1@lemm.ee · 1 month ago

      This! Don’t blame the tech, blame the grown-ups who aren’t able to teach the young how to use it!

      • jacksilver@lemmy.world · 1 month ago

        The study is still valuable; this is a math class, not a technology class, so understanding its impact is important.

        • Petter1@lemm.ee · 1 month ago

          Yeah, I didn’t catch that the prompt-engineered ChatGPT did better than the non-ChatGPT class 😄 but I guess that proves my point as well, because if the students in the normal-ChatGPT group had been taught how to prompt it so that it answers in a more teacher-like style, I bet they would have had similar results to the students with the prompt-engineered ChatGPT.

      • MBM@lemmings.world · 1 month ago

        Can I blame the tech for using massive amounts of electricity, making e.g. Ireland use more fossil fuels again?

    • trollbearpig@lemmy.world · 1 month ago (edited)

      If you actually read the article, you will see that they tested both letting the students ask the LLM for answers and limiting the students to asking it only for guidance. In the first case the students did significantly worse than their peers who didn’t use the LLM. In the second they performed the same as students who didn’t use it. So, if the results of this study can be replicated, this shows that LLMs are at best useless for learning and most likely harmful. Most students are not going to limit their use of LLMs to guidance.

      You AI shills are just ridiculous; you defend this technology without even bothering to read the points under discussion. Or maybe you read an LLM-generated summary? Hahahaha. In any case, do better, man.

    • Ledivin@lemmy.world · 1 month ago

      Obviously no one’s going to learn anything if all they do is blatantly ask for answers and written work.

      You should try reading the article instead of just the headline.

      • Praise Idleness@sh.itjust.works · 1 month ago

        The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves.

        I did? What are you trying to say?

    • ColeSloth@discuss.tchncs.de · 1 month ago

      If you’d read the article, you would have learned that there were three groups: one with no GPT, one with plain GPT access, and a third with a GPT that would only give hints and clues toward the answer but wouldn’t directly give it.

      That third group tied the first group in test scores. The issue was that ChatGPT is dumb and often gave incorrect instructions on how to solve the problem, or came up with the wrong answer. I’m sure that if GPT were capable of withholding the answer and actually giving correct instructions for solving each problem, that group would have easily beaten the no-GPT group.