Another W for the EU. I just hope they stop making so many sus decisions and don’t accept the chat control laws and stuff like that
What is a W?
Win I guess.
Basically it’s a victory of any kind
Win/Loss records are generally abbreviated as W/L. “Take the L” is its opposite.
Not sure when people started to use ‘W’. It appears multiple times in this thread
Edit: then again I’m of a generation where ‘y’ means why and not yes. Maybe I’m just not hip anymore sadfaceemoticon.jpg
I wanna say it became a thing from Twitch streamers back when esports was big, but I’m by no means sure that’s correct.
Is it definitely a W that EU perspectives won’t be as represented in the AI programs that we are all using?
Is it definitely a W if a government allows privacy-invasive, legally grey and copyrighted-material-stealing technologies?
If the government allows it, then by definition it’s not “legally grey”.
I think it’s “legally grey” in the sense that governments have largely made no policies one way or the other on data harvesting. It’s not banned, but it’s not openly encouraged either, and there’s no real legal precedent to point to for this specific matter besides the general data harvesting big tech does.
The closest parallel, I feel, is music sampling, and as far as I know, the music industry was very quick to ensure that data harvesting for AI had to follow the same copyright laws as sampling.
It makes the government look weak. But anyways all the other points remain the same
There are many laws that go entirely against a country’s constitution. You can start by looking at DMCA laws that violate a bunch of rights in MANY countries. “Legally gray” falls short; those laws are outright illegal, and they still get enforced because $$$
It’s a much much bigger issue than this. Would you rather live in a world where other countries have good AI and you do not? Would you like it if only China has powerful AI? I get the copyright issue, but some things are more important than other things. This is an arms race, and everyone slowing down isn’t exactly an option.
It seems like you severely misunderstand what “AI” as we have it nowadays actually is (it’s not true AI), what it is capable of (not very much), and most importantly what it is not capable of (most of the things it is advertised to do). Even if investor magazines and tech CEOs try to make it seem that way, we’re not one step away from creating HAL 9000. LLMs are extremely overhyped, and in most of the areas where they have been deployed they are a straight-up dysfunctional scam. The only arms race happening right now is about who can waste the most money and violate the most privacy laws with this nonsense, while all the necessary data centers and their insane power and water demands accelerate the destruction of our environment even further.
I’m happy to take the time to alter your perspective, if you are open to new information.
The term “AI” has been in use since 1956 to describe a wide variety of computer algorithms and capabilities. Neural nets and large language models fall very firmly under the term’s umbrella.
What you’re talking about is a specific kind of AI, artificial general intelligence (AGI). Very few people believe that an LLM on its own can become AGI, and even fewer believe that current LLMs are AGI, so unfortunately you’re jousting with a strawman here.
The person he’s replying to clearly believes current LLMs are a bigger deal than they are though…
They’re not claiming it’s AGI, though. You’re missing a broad middle ground between dumb calculators and HAL 9000.
If you are genuinely open to understanding the path we are on, the new situational awareness paper would be very eye-opening. It is 160 pages, so it’s probably a bit too much to get through, but there are really good videos that explain it. Matthew Berman has a great video about it. I’m not interested in swaying you and not going to debate; I’m hundreds of hours deep into this and have been absolutely obsessed with it. Nobody doubted its impact as much as me. Education on the matter will undeniably change your mind tremendously. The information is there if you want a peek at the future.
https://situational-awareness.ai/
The plagiarism machines aren’t what you think they are.
You could have a much more complex understanding of what they are. It isn’t nearly as simple as you are imagining. If you genuinely are curious about what you’re overlooking, then here is a link.
https://situational-awareness.ai/
I would much rather live in a country with no good AI.
Yeah, what a loss. Now it will only be able to suggest glue on burgers. /s
Thank you for your thought-provoking question, “AI has use”. I’m sure this is a legitimate question coming from a real human.
Good answer, no way AI will possibly ever catch up to such brilliant responses as this. Certainly, there is no reason to want to have our views represented in the next generation of technology.