The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional dialects; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine what language or languages will be used in the future? Thoughts and beliefs generally follow language, at least on the social scale; how will LLMs’ effects on language affect how we think and act? What we believe?

  • Kethal@lemmy.world
    7 months ago

    Maybe they’ll help people sort out the difference between “affect” and “effect”.

  • Lvxferre@mander.xyz
    7 months ago

    [shameless ad] This sort of question fits well !linguistics@mander.xyz [/shameless ad]

    What causes the loss of a local variety (dialect or language) is not simply exposure to other varieties, but the loss of the identity associated with said variety. In other words, what led to the blending and death of those dialects wasn’t the audio communication technology - it was economic, social, and ideological pressures, such as nationalism.

    I’ll exemplify this using rhoticity in England. If telephone, radio and TV led to blending and death of dialects, you’d expect rhoticity in England to increase, due to exposure to American media. It didn’t - it’s decreasing:


    Source for the map: it’s a collation of both maps in this article. The reason for the shift however becomes obvious when you look at identity matters: “you’re a Brit, speak like a Brit”.

    The exact same reasoning applies to other languages, by the way. Caipira Portuguese features aren’t being replaced with the ones from that weird Globo TV accent, but with the ones spoken in São Paulo city; sheísmo in Argentina seems to be spreading, regardless of media from other countries; Occitan was not killed in France by simply exposing kids to French, but by making them feel ashamed of speaking Occitan.


    With that out of the way, it’s hard to predict the future impact of machine text generation, be it through LLMs or better models. It’s perfectly possible that this sort of tech helps the preservation of local varieties, as LLMs are kind of good at translation; for example, I’ve noticed that Gemini is able to parse Venetian, even if unable to answer in the language.

  • The Picard Maneuver@lemmy.world
    7 months ago

    It’ll be interesting to see how it affects the average person’s written communication. When we know technology can handle something for us, our brains seem to let it carry the load. Think of all the people who aren’t great communicators or might not be confident in their English who would love to rely on this already.

    I guess it’s a matter of perspective whether you view it as a crutch or a boon, which I’m sure has been a conversation about many pieces of technology over the years:

    People were better at remembering phone numbers before cell phones stored them. People were better at remembering how to spell words before spell check/autocorrect. People were better at writing by hand before typewriters/keyboards. etc

    • trolololol@lemmy.world
      7 months ago

      Each generation thinks they had it the right way and younger ones have it easy. You can go back centuries with people pushing each other down.

      What should be encouraged is the exchange of ideas and healthy debate. Words are just a tool for that, and spelling and grammar and “not knowing Latin” are components of it.

      A couple of generations down the road we may be able to accurately transmit our thoughts to other people, calibrated for their culture and upbringing biases, and the generation immediately before will whine that LLMs were the right way to communicate.

      • Icalasari@lemmy.world
        7 months ago

        Eh, LLMs do have a significant problem in that they can generate false information by themselves. Every prior tool required a person to create said false information, but an LLM can just generate it when asked a question.

  • The Snark Urge@lemmy.world
    7 months ago

    I will share a journal entry from when I was mulling this over last December. Interested in your thoughts:

    In old media, such as books and movies, we passively receive the media. We hear stories of heroes, songs about how the singer feels, written thoughts from inside another writer’s mind. These are valuable because of how we connect with others and thereby grow.

    Interactive media, e.g. video games, allow us to tinker with a story and thereby interrogate our relationship and attitude towards its ideas and themes. We pull a lever, and the story changes direction. Video games have become such a large industry thanks to the more profound personal connection we can develop with the art through prescribed mechanical interactions. We press the buttons, and become the hero.

    With the advent of artificial intelligence, it won’t be long before someone invents a new form of storytelling predicated on this technology. While we used to read stories, it now becomes possible for stories to be read into us. An AI can now be created that observes your life, and makes sense of it in a profound larger context.

    This new media would be an AI companion who acts as a fourth wall of your life; layering your struggles and triumphs within a larger context, lightly editorializing, adding soundtracks that seamlessly portray your energy and emotional state (or humorously juxtapose it), adding humorous asides or callbacks that keep you in the moment, gently reminding and prompting next activities, reflecting on failures or calling attention to bad habits one is trying to break, and generally contriving to elevate the daily experience to the level of storytelling. It would give life an enhanced sense of meaningful examination, refining our sense of self and bringing our life into focus. This is a form of media that is not itself passively received, but actively treats your life as a fully interactive lived experience.

    Art is integral to our ability to relate to others, experience things that are larger than ourselves, and to create meaning. This “fourth wall” AI would be a new form of media that seeks to amplify our understanding of ourselves, integrating our egos with our life as it exists as we change and grow throughout life.

    The risks posed by malfeasant propagation of such a medium are at once beyond imagining and entirely predictable; the manufacturing of consent, the corrupting influence of profit motives, and the use of media as a social control mechanism are all pre-21st century concepts in media.

    Whether a “fourth wall AI” represents a new threat or merely a quantum leap in the scale of preexisting threats cannot be known in advance. All of the above is to merely assert that we will see, and that such a medium could theoretically be used as art in the true sense, if such technology can be put in the hands of artists, and not just corporations.

  • Paragone@lemmy.world
    7 months ago

    I figure they can either help or harm, depending on implementation:

    Hugging Face ( I always think of the “face-huggers” in Alien when I see that name… and have NO idea why they thought that association would be a Good Thing™ ) has an LLM which apparently can do Sanskrit.

    Consider, though:

    All the Indigenous languages where we’ve only actually got a partial record of the language, because the “majority rule, minority extinguishes” “answer” of our normal process … obliterated all native speakers of that language ( partly through things like residential schools, etc )…

    now it becomes possible to have an LLM for that specific language, & to study the language, even though we’ve only got a piece of it.

    This is like how we’ve so butchered the ecology that we can only study pieces of it now; there’s simply too much missing from what was there a few centuries ago, so we’re not looking at the original/proper thing, in either ecologies or languages.

    sigh

    This wasn’t supposed to be depressing.


    Consider how search-engines have altered how we have to communicate…

    In order to FORCE a search-engine to consider a pair-of-words to be a single-term, you have to remove all intervening space/hyphens/symbols from between them.

    ClimatePunctuation is a single search-token, but “Climate Punctuation” is two separate, unrelated terms, which may or may-not appear in the results.

    It’s obscene.

    I’m almost mad-enough to want legislation forcing search-engines to respect some kind of standard set of defaults ( add more terms == narrowing the search, ie defaulting to Boolean AND, as one example ),

    so they’d stop enshittifying our lives while “pretending” that they’re helping.

    ( there was a Science news site which would not permit narrowing-of-search, and I hope they fscking died.

    Making search unusable on a science site??

    probably some “charity” who pays most of their annual-budget to their administration, & only exists for their entitlement.

    I’m saying that after having encountered that religion in charities. )


    Interesting:

    search-engines alter our use-of-language,

    social-sites do too,

    LLMs do too,

    marketing/propaganda does,

    astroturfing does,

    … it begins looking like real events are … rather-insignificant … influences in our languages?

    Hm…

    • elshandra@lemmy.world
      7 months ago

      Do you actually believe this?

      LLMs are the opposite of a dead end. More like the opening of a pipe. It’s not that they will burn out; it’s just that they’ll perhaps reach a point where they’re one function of a more complete AI.

      At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools for proving truth, or anything like that.

      If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct.

      Validating the facts of the response is another function again, which would employ LLMs as a translation tool.

      It’s not a long leap from there to a language translation tool between humans, where an AI is an interpreter. deepl on roids.

      • HelloThere@sh.itjust.works
        7 months ago

        Do you actually believe this?

        Yes. I’m also very happy to be proven wrong in the years to come.

        If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct

        I don’t want to get too philosophical here, but you cannot detach understanding / comprehension from the accuracy of the reply, given how LLMs work.

        An LLM, through its training data, establishes what an answer looks like based on similarity to what it’s been taught.

        I’m simplifying here, but it’s like an actor in a medical drama. The actor is given a script that they repeat, that doesn’t mean they are a doctor. After a while the actor may be able to point out an inconsistency in the script because they remember that last time a character had X they needed Y. That doesn’t mean they are right, or wrong, nor does it make them a doctor, but they sound like they are.

        This is the fundamental problem with LLMs. They don’t understand, and in generating replies they just repeat. It’s a step forward on what came before, that’s definitely true, but repetition is a dead end because it doesn’t hold up to application or interrogation.
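        The “establishes what an answer looks like” point above can be caricatured with a toy bigram model (my own sketch in Python, vastly simpler than any real LLM, and the training text is made up; it only illustrates generation as repetition of training statistics):

```python
from collections import Counter, defaultdict

# Hypothetical training text. A bigram model just counts which word
# follows which word in the data it was shown.
training = "the patient has a fever the patient has chills".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """'Generate' by repeating the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict("patient"))  # prints "has" - the most common continuation
```

        The model sounds fluent within its training text, but it has no notion of whether “has a fever” is medically true; scaling the counts up by many orders of magnitude does not change that in kind, which is the actor-with-a-script point.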

        The human-machine interface part, of being able to process natural language requests and then handing off those requests to other systems, operating in different ways, is the most likely evolution of LLMs. But generating the output themselves is where it will fail.

        • elshandra@lemmy.world
          7 months ago

          So I feel like we agree here. LLMs are a step to solving a low level human problem, i just don’t see that as a dead end… If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.

          I appreciate your analogy, I am well aware LLMs are just clever recursive conditional queries with big semi self-updating datasets.

          Regardless of whether or not something replaces LLMs in the future, the data and the processing that’s gone into that data will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

          • HelloThere@sh.itjust.works
            7 months ago

            Regardless of whether or not something replaces LLMs in the future, the data and the processing that’s gone into that data will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

            I’m a big proponent of research for the sake of research, so I agree that lessons will be learnt.

            But to go back to OP’s original question, how will LLMs affect spoken language: they won’t.

            • elshandra@lemmy.world
              7 months ago

              But to go back to OP’s original question, how will LLMs affect spoken language: they won’t.

              That’s a rather closed-minded conclusion. It makes it sound like you don’t think they stand a chance.

              LLMs have the potential to pave the way to aligning spoken language, perhaps even evolving human communication to a point where speech is an occasional thing because it’s really inefficient.

              • HelloThere@sh.itjust.works
                7 months ago

                You’re putting the cart very much before the horse here.

                For what you describe to happen requires global ubiquity. For ubiquity to happen, it must be something with sufficient utility that people from all walks of life, and in all contexts (ie not just professional), gain value from it.

                For that to happen, given the interface is natural language, the LLM must work across languages to a very high level, which works against the idea that human language will adapt to it. To work across language to that level it must adapt to humans, not the other way around.

                This is different to other technology which has come before - like post, or email - where a technical restriction in particular format/structure (eg postal or email address) was secondary to the main content (the message).

                For LLMs to affect language you’re basically talking about human-to-human communication adopting “prompt engineering” characteristics. I just don’t see this happening on the scale you describe; human-to-human communication is woolly and imperfect, with large non-verbal elements, and while most people make do most of the time, we all broadly speaking suck at making points with perfect clarity and no misunderstanding.

                For any LLM to be successful, it must be able to handle that, and being able to handle that dramatically reduces the likelihood of effecting change, because if change is required it won’t be successful.

                It’s basically a tautology, which is why it’s such a difficult thing, and why our current generation of models is supported mainly through hype and fomo.

                Lastly, the closest examples to a highly structured prompt that currently exist are programming languages. These are used by millions of people every day, and still developers do not talk to each other in their preferred language’s syntax.

                • elshandra@lemmy.world
                  7 months ago

                  This is interesting and thought provoking discussion, ty.

                  You’re absolutely right, I was looking for the dead end - plugging LLM into a solution.

                  I’m more thinking LLMs used in conjunction with other tech will have these effects on our communicating. LLMs, or whatever replaces them to do that interpretation, are necessary to facilitate that.

                  When we come up with something better, to do the same job better, then of course, LLMs will be redundant. If that happens, great.

                  We are already seeing a boom in popularity of LLMs outside of professional use. Global ubiquity for anything is never going to happen unless we can fix communication, which we probably can’t. We certainly can’t alone. It’s very much a chicken-and-egg problem, but one we can only gain from by progressing towards.

                  Imagining vocalising in programming languages gave me a chuckle. I have been known to do things like use s/x/y/ to correct myself in written chats, though.
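                  (For anyone unfamiliar, the s/x/y/ notation comes from sed’s substitute command; a minimal example of the correction pattern:)

```shell
# Replace the first occurrence of "teh" with "the" on each input line
echo "teh cat sat" | sed 's/teh/the/'
# prints "the cat sat"
```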

                  Programming languages allow us to talk to and listen to machines. LLMs will hopefully allow machines to listen and talk to/between us.