• BotCheese@beehaw.org
    1 year ago

    And we’re nowhere near done scaling LLMs

    I think we might be. I remember hearing OpenAI was training on so much literary data that they couldn’t find enough left over for testing the model. Though I may be misremembering.

    • newde@feddit.nl
      1 year ago

      No, that’s definitely the case. However, Microsoft is now working on making LLMs depend more heavily on a set of high-quality sources. For example, encyclopedias would count as more important sources than random Reddit posts.
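
      For the idea of weighting training sources by quality, a minimal sketch of what that could look like is below: sampling training documents in proportion to a per-source quality weight. The source names and weights here are purely illustrative assumptions, not anything Microsoft has published.

      ```python
      import random

      # Hypothetical per-source quality weights: a higher weight means the
      # source is sampled more often when assembling training batches.
      # These sources and numbers are made up for illustration.
      SOURCE_WEIGHTS = {
          "encyclopedia": 5.0,  # curated, high-quality text
          "books": 3.0,
          "reddit": 0.5,        # noisy, mixed-quality text
      }

      def sample_source(rng: random.Random) -> str:
          """Pick a training-data source in proportion to its quality weight."""
          sources = list(SOURCE_WEIGHTS)
          weights = [SOURCE_WEIGHTS[s] for s in sources]
          return rng.choices(sources, weights=weights, k=1)[0]

      rng = random.Random(0)
      counts = {s: 0 for s in SOURCE_WEIGHTS}
      for _ in range(10_000):
          counts[sample_source(rng)] += 1
      print(counts)  # encyclopedia documents dominate the sampled mix
      ```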