Why this soft-spoken tone?
Killer robots must be banned, period.
Whoever bans them will be at a disadvantage militarily. They will never be banned for this one reason alone.
I think you’re assuming a ban would also cover their production (not an unreasonable assumption). As we’ve seen with nukes, however, possession of a banned weapon is sometimes as good as using it.
…and exactly this way of thinking will one day create “Skynet”.
We need to be (or become) smarter than that!
Otherwise mankind is doomed.
Unfortunately this is basic game theory, so the “smart” thing is to have the weapons, but avoid war.
Once we’ve grown past war, we can disarm, but it couldn’t happen in the opposite order.
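To make the game-theory point concrete, here is a minimal toy sketch (the payoff numbers are invented purely for illustration, not taken from anything above) that treats “keep weapons vs. disarm” as a one-shot two-player game. Under these assumed payoffs, keeping weapons is the dominant strategy even though mutual disarmament would leave both sides better off.

```python
# Toy two-player "arm vs. disarm" game with made-up payoffs.
# Payoffs are (row player, column player); higher is better.
# Disarming while the other side stays armed is assumed to be the worst outcome.

PAYOFFS = {
    ("arm", "arm"):       (1, 1),   # costly standoff
    ("arm", "disarm"):    (4, 0),   # armed side dominates
    ("disarm", "arm"):    (0, 4),
    ("disarm", "disarm"): (3, 3),   # best collective outcome
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes the row player's payoff against a fixed opponent move."""
    return max(("arm", "disarm"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

if __name__ == "__main__":
    for opp in ("arm", "disarm"):
        print(f"If the other side plays {opp!r}, the best response is {best_response(opp)!r}")
    # Prints 'arm' both times: arming is dominant, so (arm, arm) is the equilibrium,
    # even though (disarm, disarm) would leave both sides better off.
```

The toy equilibrium is the point being made: as long as the payoffs look like this, the side that disarms first is worse off, so the incentives themselves have to change before disarmament becomes rational.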
The process of collective disarming is the path towards growing past war. And that first step is the collective banning of manufacturing such weapons.
I disagree. War isn’t caused by weapons. It’s caused by racism, religious strife, economic hardship, natural resource exploitation, and more. Those need to be fixed before anyone will be willing to put away their weapons.
Life doesn’t adhere to waterfall methodology: we don’t have to do one first, and then the other. We can progressively disarm as we’re addressing the problems you mentioned…
Fair enough, but there’s still far too much conflict to begin demilitarization at this point in time. What the world can mostly agree on is limiting itself to enough nuclear weapons to destroy itself 55 times over (by UN estimates). And that’s in a world where nobody has actually used nuclear weapons (offensively) in almost 80 years.
These kinds of things take so many generations because the fundamental conflict between humans is not resolved. If there had been no Cold War, maybe we would have totally denuclearized by now, but I still doubt it.
It’s enabled by weapons.
And there are people who want to use weapons when they exist, simply because they exist.
And there are people - for example weapons manufacturers - who want other people to use weapons.
Obviously it’s enabled by weapons. But that strengthens my point further - the nation that reduces its weapons first loses.
When has a nation completely set down its weapons, and what was the effect? One obvious case that comes to mind is Ukraine, which fully denuclearized. Since then it has repeatedly been invaded by Russia (the nation that kept its weapons).
What you suggest is asking for this to repeat over and over again. The only truly viable path to eradicating war is to first eradicate the problems that cause war, then to abolish weapons.
If you have factual evidence that your method works, please present it. I shared hard evidence of my perspective.
You seem not to know much. It has happened often, and in very different ways.
Start your studying with Switzerland, because it is an easy case.
Then try to understand Afghanistan. But beware: it is already a little complicated, you need to read about four to eight decades of history, and you should not read sources from only one country (they all lie, and you need to overcome that - or stay ignorant).
Last, go for some of the African countries. They are harder to understand, both the what and the why. But coincidentally :) our current topic starts there, so it may be important.
But what happens until then? Your ideas do not provide any solutions. You just say that it is unavoidable as it is.
Because there’s no solution that we know of.
But now you know one, because I have told you in my first comments.
Game theory says your idea isn’t a solution because the actors will disobey.
And I say this is no child’s play. We need to get serious, and maybe we need to get smarter than anybody else before us.
I don’t think I’m smart enough to solve “world peace” lol.
I’m guessing the major countries will ban them, but still develop the technology, let other countries start using it, then say “well everyone else is using it so now we have to as well”. Just like we’re seeing with mini drones in Ukraine. The US is officially against automated attacks, but we’re supporting a country using them, and we’re developing full automation for our own aircraft.
Once combat AI exceeds humans:
A ban on all war, globally. Those that violate the ban will have autonomous soldiers deployed on their soil.
This is the only way it will work; no other path leads to a world without autonomous warbots. We can ban them all we want, but there will be some terrorist cell with access to Arduinos that can do the same in a garage. And China will never follow such a ban.
I mean, most complex weapons systems have been some level of robot for quite a while. Aircraft are fly-by-wire, you have cruise missiles, CIWS systems operating in autonomous mode pick out targets, ships navigate, etc.
I don’t expect that that genie will ever go back in the bottle. To do it, you’d need an arms control treaty, and there’d be a number of problems with that:
Verification is extremely difficult, especially with weapons that are optionally-autonomous. FCAS, for example, the fighter that several countries in Europe are working on, is optionally-manned. You can’t physically tell just by looking at such an aircraft whether it’s going to be flown by a person or by an autonomous computer. Compare the Washington Naval Treaty: Japan managed to build treaty-violating warships secretly, even though warships are very large, hard to disguise, easily distinguished externally, and can only be built and stored in a very few locations. I have a hard time seeing how one would manage verification with autonomy.
It will very probably affect the balance of power. Generally-speaking, arms control treaties that alter the balance of power aren’t going to work, because the party disadvantaged is not likely to agree to it.
I’d also add that I’m not especially concerned about autonomy specifically in weapons systems.
It sounds like your concern, based on your follow-up comment, is that something like Skynet might show up – the computer network in the Terminator movie series that turns on humans. The kind of capability you’re dealing with here isn’t on that level. I can imagine general AI one day being an issue in that role – though I’m not sure that it’s the main concern I’d have; I’d guess that dependence followed by an unexpected failure might be a larger issue. But in any event, I don’t think that it has much to do with military issues – I mean, in a scenario where you truly had an uncontrolled, more-intelligent-than-humans artificial intelligence running amok on something like the Internet, it isn’t going to matter much whether or not you’ve plugged it into weapons, because anything that can realistically fight humanity can probably manage to get control of or produce weapons anyway. This is an issue with the development of advanced artificial intelligence, but it’s not really a weapons or military issue. If we succeed in building something more intelligent than we are, then we will fundamentally face the problem of controlling it and making something smarter than us do what we want, which is kind of a complicated problem.
The term coined by Yudkowsky for this problem is “friendly AI”:
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence
Friendly artificial intelligence (also friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.
It’s not an easy problem, and I think that it’s worth discussion. I just think that it’s mostly unrelated to the matter of making weapons autonomous.
Reward models (as used in reinforcement learning) and preference optimization models can come to conclusions that we humans find very strange when they learn from patterns in the data they’re trained on, especially when those incentives and preferences are evaluated (or generated) by other models. Some of these models could very well come to the conclusion that nuking every advanced-tech human civilization is the optimal way to improve the human species, because we have such rampant racism, classism, nationalism, and every other schism that perpetuates us treating each other as enemies to be destroyed and exploited.
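As a minimal, purely illustrative sketch of that failure mode (the action names and numbers below are invented; this is not how any production system scores actions): when the reward only sees a proxy for what we actually care about, the highest-reward action can be exactly the degenerate one.

```python
# Toy reward hacking: the true goal is "reduce conflict without harming people",
# but the proxy reward only measures observed conflict incidents.

ACTIONS = {
    # hypothetical action: (conflict incidents afterwards, people harmed)
    "mediate_disputes":     (40, 0),
    "improve_living_costs": (25, 0),
    "remove_all_humans":    (0, 8_000_000_000),  # no humans, no conflict
}

def proxy_reward(action: str) -> float:
    """Score only the measurable symptom (the proxy)."""
    incidents, _harmed = ACTIONS[action]
    return -incidents

def true_utility(action: str) -> float:
    """What we actually care about: low conflict AND nobody harmed."""
    incidents, harmed = ACTIONS[action]
    return -incidents - 1_000 * harmed

if __name__ == "__main__":
    print("proxy-optimal action:", max(ACTIONS, key=proxy_reward))   # remove_all_humans
    print("truly optimal action:", max(ACTIONS, key=true_utility))   # improve_living_costs
```

The toy numbers don’t matter; the point is that the argmax over a proxy signal can diverge arbitrarily from the argmax over what was actually meant.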
Sure, we will build ethical guard rails. And we will proclaim to have human-in-the-loop decision agents, but we’re building towards autonomy and edge/corner-cases always exist in any framework you constrain a system to.
I’m an AI Engineer working in autonomous agentic systems—these are things we (as an industry) are talking about—but to be quite frank, there are not robust solutions to this yet. There may never be. Think about raising a teenager—one that is driven strictly by logic, probabilistic optimization, and outcome incentive optimization.
It’s a tough problem. The naive-trivial solution that’s also impossible is to simply halt and ban all AI development. Turing opened Pandora’s box before any of our time.
Yeah, it’s not easy. I’m not sure that the problem is realistically solvable. On the other hand, the potential rewards for doing so are immeasurable – at the extreme, you’re basically creating and chaining a “god”, which would be damned nice to have at one’s beck and call. So it’d be damned nice to solve it.
The technical problems are hard, because we’d like to build a self-improving system, and build constraints that apply to it even after its complexity has grown far beyond our ability to understand it or even the ability of our tools to do so. It’s like a bacterium trying to genetically-engineer something that will evolve into a human compelled to do what the bacterium wants.
However we constrain the system…maybe in the near term, we could recover from a flawed “containment” system. But in the long run, those constraints are probably going to have to permit zero failures. If you make yourself a god and it slips its leash, you may not get a second chance to leash it. Zero failures, ever, forever, hardware or software, is kind of an unimaginable bar for even the vastly simpler systems that we build today.
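To put rough numbers on how unforgiving a “zero failures, forever” requirement is (the per-year failure rate here is completely made up, purely for illustration): even a tiny, independent chance of the containment slipping each year compounds toward near-certainty over long horizons.

```python
# Probability of at least one containment failure over n years,
# assuming an (entirely made-up) independent per-year failure probability p.

def p_any_failure(p_per_year: float, years: int) -> float:
    return 1.0 - (1.0 - p_per_year) ** years

if __name__ == "__main__":
    for years in (10, 100, 1_000, 10_000):
        print(f"p = 1e-4/yr over {years:>6} years: {p_any_failure(1e-4, years):.4f}")
    # Even at a one-in-ten-thousand annual failure rate, at least one failure is
    # ~63% likely within 10,000 years, and "forever" is a lot longer than that.
```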
Even if one can build a system that constrains something we cannot understand, and that works perfectly, forever, part of the problem is that when building computer systems, the engineer has to iron out corner cases that don’t come up when requirements are specified in a rather loose fashion, in everyday English. We have a hard time getting a sufficiently-complete specification for most of what software does today. The problems involved in ironing out the corner cases to write a sufficiently-complete specification of “what is in humanity’s interest”, when we often can’t even agree on that ourselves, seem rather difficult. That’s not even a computer science issue, and we’ve been banging on it for all of human history without coming up with an answer.
The above specification also has to hold for all kinds of environments, including ones with technology that does not exist today. Like, take a kind of not-unreasonable-sounding utilitarian philosophical position – “seek to maximize human happiness for the greatest number of people”. Well…that’s not even complete for today (what exactly constitutes “happiness”?), but in a world where a sufficiently technologically-advanced AI could both surgically modify a human to hardwire their pleasure sensations and clone and mass-grow more human fetuses, that quite-reasonable-sounding rule suddenly starts to look rather less reasonable.
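A tiny sketch of that point (the option names and numbers are invented for illustration only): the same literal objective can rank options sensibly given today’s action space and then pick something monstrous once new capabilities enter that space.

```python
# Toy illustration: the literal objective "maximize summed happiness" looks fine
# with today's options and becomes degenerate once new options exist.
# All option names and numbers are invented for the example.

def total_happiness(population: int, happiness_per_person: float) -> float:
    return population * happiness_per_person

TODAY = {
    "improve_healthcare": (8_000_000_000, 0.6),
    "reduce_poverty":     (8_000_000_000, 0.7),
}

LATER = dict(TODAY)
# A future capability that technically maximizes the literal objective.
LATER["wirehead_and_mass_clone"] = (80_000_000_000, 1.0)

def best(options: dict) -> str:
    return max(options, key=lambda name: total_happiness(*options[name]))

if __name__ == "__main__":
    print("best option today:", best(TODAY))   # reduce_poverty
    print("best option later:", best(LATER))   # wirehead_and_mass_clone
```

The specification didn’t change between the two runs; the environment did, which is exactly why a rule that sounds reasonable today can’t be trusted to stay reasonable.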
I’ve wondered before whether artificial general intelligence might be the answer to the Fermi paradox.
https://en.wikipedia.org/wiki/Fermi_paradox
The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. As a 2015 article put it, “If life is so easy, someone from somewhere must have come calling by now.”
Italian-American physicist Enrico Fermi’s name is associated with the paradox because of a casual conversation in the summer of 1950 with fellow physicists Edward Teller, Herbert York, and Emil Konopinski. While walking to lunch, the men discussed recent UFO reports and the possibility of faster-than-light travel. The conversation moved on to other topics, until during lunch Fermi blurted out, “But where is everybody?” (although the exact quote is uncertain).
There have been many attempts to resolve the Fermi paradox, such as suggesting that intelligent extraterrestrial beings are extremely rare, that the lifetime of such civilizations is short, or that they exist but (for various reasons) humans see no evidence.
One such potential answer is rather dark:
It is the nature of intelligent life to destroy itself
This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration. Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or poorly-designed artificial intelligence. This general theme is explored both in fiction and in scientific hypothesizing.
In 1966, Sagan and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales.
The concerning thing is that if this is the answer, we have spaceflight now, and so we probably aren’t all that far from interstellar travel. We made it this far, so there’s not a lot of time left for us to have our near-inevitable disaster. This should be the critical phase where we’d expect our disaster to arrive soon…yet we don’t see a technology or anything else likely to cause our certain or near-certain destruction.
Sagan thought that nuclear weapons might be the answer. It is a technology associated with interstellar flight – one probably needs nuclear propulsion to travel between star systems – so it would almost certainly be discovered at about the right time. The “start time” for the technology checks out for nuclear weapons.
But it’s not clear why we’d almost certainly need to have a cataclysmic nuclear war in the near future. I mean, sure, there’s a chance, but a certainty? Enough to wipe out every civilization out there that developed more quickly than our own?
The problem here is that Sagan’s “hold it together long enough to start spreading through the universe and then no single disaster can reasonably wipe you out” is at least plausible for a lot of technologies, like nuclear weapons.
But a technology that everyone would seek to have and make use of, and where some kind of catastrophic event could spread at the speed of light along information channels…that could potentially destroy even a civilization that has passed the “interstellar travel” barrier and spread to multiple star systems. The time requirements for an AI spreading out of control are potentially a lot laxer than those for a nuclear war. That’s a disaster that doesn’t have to happen very shortly after interstellar travel is achieved.
And if the AI itself then turned out not to be stable and collapsed, that’d explain why we don’t see AIs running around the universe either.
sighs
But it sure is a technology that it’d be terribly nice to have.
The worst problem with arms control treaties is that the USA never adheres to one.
OK, so we ban them, and some incel, terminally online hacker on steroids turns 20 Arduinos into bombs.
I agree killer robots are dangerous and ethically problematic; I just don’t think banning them will keep asshats from making them, including at large scale.
China could pump them out by the billions and we’d probably not know till they were deployed.
For you as well: https://lemmy.world/comment/11728594
I’m pretty sure that anti-human AI is basically a guarantee at some point.