• ohwhatfollyisman@lemmy.world
    2 months ago

    In general, the report found that the AI summaries showed “a limited ability to analyze and summarize complex content requiring a deep understanding of context, subtle nuances, or implicit meaning.” Even worse, the Llama summaries often “generated text that was grammatically correct, but on occasion factually inaccurate,”

    How is this being accepted? One would have to go through any output with a fine-toothed comb anyway to weed out AI hallucinations, as well as to preserve nuance and context.

    It’s like the AI tells you that the Mona Lisa has three eyes and a nose, and her mouth is closed but her denim jacket is open. You’re going to report that in your story without ever looking at the painting?

    • Grimy@lemmy.world
      2 months ago

      These important limitations highlight why it’s still important to have humans involved in the analysis process here. The NYT notes that, after querying its LLMs to help identify “topics of interest” and “recurring themes,” its reporters “then manually reviewed each passage and used our own judgment to determine the meaning and relevance of each clip… Every quote and video clip from the meetings in this article was checked against the original recording to ensure it was accurate, correctly represented the speaker’s meaning and fairly represented the context in which it was said.”

      It’s literally the paragraph right after.

      They verify it.