Yeah, well… for the heavy lifting it could be nice. But I’m not letting AI build my house.
Lifting heavy crap up to the roof or something like that? Sure. That is what machines are good at.
Welding? Well, welding robots have existed for a long time; they just need to be programmed perfectly. I’ve worked with a couple of them, and the results are not always consistent, so they required some quality checks. Still, it’s easier: manual welding takes more skill and takes longer. I just don’t need the AI part, though. That makes it unpredictable, and if I let a robot do something, it should be predictable.
I do woodworking and fix things around the house. Even when building new stuff, there are always issues you have to solve on the spot: walls that are not straight, angles that are not perfect, spaces you cannot reach, et cetera.
As long as AI does not get it 100% right every time, it is not touching my house. And yes, a professional doesn’t hit that rate either, but at least they know that, they double-check themselves, and they know how to fix things.
AI can also be made to double-check itself and to fix things.
(Read: “I don’t actually understand how ML works”)
It’s not AI. Stop calling it AI.
The term “artificial intelligence” has been in use since the 1950s and it encompasses a wide range of fields in computer science. Machine learning is most definitely included under that umbrella.
Why do you think an AI can’t double-check things and fix them when it notices problems? It’s a fairly straightforward process.
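To make that concrete, here’s a toy sketch of such a check-and-repair loop. `generate`, `check`, and `fix` are hypothetical stand-ins for whatever the real system would do; the point is only the loop structure:

```python
def generate():
    # First attempt, deliberately flawed (toy stand-in for the real work)
    return "resutl"

def check(result):
    # Hypothetical validity check on the output
    return result == "result"

def fix(result):
    # Hypothetical repair step applied when the check fails
    return "result"

def produce(max_attempts=3):
    """Generate a result, then check-and-repair until it passes or we give up."""
    result = generate()
    for _ in range(max_attempts):
        if check(result):
            return result
        result = fix(result)  # noticed a problem, try to repair it
    raise RuntimeError("could not produce a valid result")

print(produce())  # prints "result"
```

Nothing about the loop itself is AI-specific; the hard part in practice is making `check` and `fix` good, not wiring them together.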
The halting problem. Machines cannot, as a matter of logic, double-check themselves.
What are you trying to argue? That humans aren’t Turing-complete? That would be an insane self-own. That we can decide the undecidable? That would prove you don’t know what you’re talking about; it’s called undecidable for a reason. Deciding an undecidable problem makes as much sense as a barber who shaves everyone who doesn’t shave themselves.
Aside from that, why would you assume that checking results would, in general, involve solving the halting problem?
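That barber analogy translates almost directly into code. Here’s a sketch of the classic diagonal argument: whatever a supposed halting predictor answers, a program built to consult it and do the opposite proves it wrong. `halts` below is a dummy that always answers “never halts”, but any concrete guesser fails the same way:

```python
def halts(f):
    """A would-be halting predictor. Any concrete guesser will do;
    this one always answers 'never halts'."""
    return False

def troublemaker():
    # Do the opposite of whatever halts() predicts about this very function.
    if halts(troublemaker):
        while True:   # predicted to halt -> loop forever instead
            pass
    return "halted"   # predicted to loop forever -> halt immediately instead

# halts() claimed troublemaker never halts, and yet:
print(troublemaker())  # prints "halted", so the predictor was wrong
```

This is why no general `halts` can exist, and also why it doesn’t matter much in practice: checking a *result* is a different, usually decidable, question from predicting whether an arbitrary program terminates.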
It has nothing to do with whether humans are Turing-complete or not. No Turing machine is capable of solving an undecidable problem. But humans can solve undecidable problems. Machines cannot solve problems the way a human would. So, no, humans are not machines.
This by definition limits the autonomy a machine can achieve. A human can predict when a task will cause a logic halt and prepare or adapt accordingly; a machine can’t, unless it is intentionally limited by a programmer to stop being Turing-complete and to account for the undecidables beforehand (thus with the help of the human). This is why machines suck at unpredictable or ambiguous tasks that humans fulfill effortlessly on a daily basis.
This is why a machine that adapts to the real world is so hard to make. This is why autonomous cars can only drive in pristine weather, on detailed, premapped, really high-maintenance roads, with a vast array of sensors. This is why robot factories are extremely controlled and regulated environments. This is why you have to rescue your Roomba regularly. Operating on the biggest undecidable there is (i.e. the future parameters of operation) is the biggest yet-unsolved technological problem (next to sensor integration for world parametrization and modeling). Machine learning is a step towards it, on a road several thousand miles long that is yet to be traversed.
But humans can solve undecidables.
No, we can’t. Or, more precisely: there is no version of your assertion that would be compatible with cause and effect, or with physics as we understand it.
Don’t blame me, I didn’t do it. The universe just is that way.
The halting problem is an abstract mathematical issue; in actual real-world scenarios it’s trivial to handle cases where you don’t know how long a process will run. Just add a check that watches for the process running too long, and break into some kind of handler when that happens.
I’m a professional programmer, I deal with this kind of thing all the time. I’ve literally written applications using LLMs that do this.
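A minimal version of that watchdog pattern in Python, using a worker thread and a result timeout. `slow` and the fallback here are placeholders for whatever long-running step and recovery logic a real application would have:

```python
import concurrent.futures
import time

def watched(fn, timeout_s, on_timeout):
    """Run fn in a worker thread; if it runs too long, break into a handler."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return on_timeout()  # fn ran too long: hand control to the handler
    finally:
        pool.shutdown(wait=False)  # don't block on a runaway worker

def slow():
    time.sleep(1)  # stands in for a process that runs "too long"
    return "done"

print(watched(slow, timeout_s=0.1, on_timeout=lambda: "gave up"))
# prints "gave up"
```

Note the practical caveat: the timed-out worker thread isn’t killed, it’s just abandoned, which is exactly the pragmatic trade-off being described. You never *decide* whether the task halts; you simply stop waiting.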
They keep trying that, but it’s not working out. Like all those concrete house printers that stack layers like pancakes. Just failing.