Scientists at Princeton University have developed an AI model that can predict and prevent plasma instabilities, a major hurdle in achieving practical fusion energy.
Key points:
- Problem: Plasma escaping containment in donut-shaped tokamak reactors disrupts fusion reactions and damages equipment.
- Solution: AI model predicts instabilities 300 milliseconds before they happen, allowing for adjustments to keep plasma contained.
- Significance: This is the first time AI has been used to proactively prevent tearing instabilities in fusion experiments.
- Future: Researchers hope to refine the model for other reactors and optimize fusion reactions.
I’ve lost track, is AI a good thing today or a bad thing?
AI is just the name that journalists use for all algorithms these days.
Although it’s been used for a fairly wide array of algorithms for decades. Everything from alpha-beta tree search to k-nearest-neighbors to decision forests to neural nets is considered AI.
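To make the point concrete: k-nearest-neighbors, one of the "AI" algorithms named above, fits in a dozen lines. This is a minimal illustrative sketch with made-up toy data, not anything from the fusion paper.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k closest training points.
    `train` is a list of ((x, y), label) pairs."""
    # Sort training points by Euclidean distance to the query point.
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical toy dataset: two clusters labeled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b")]
print(knn_classify(train, (0.5, 0.5)))  # query sits in the "a" cluster
```

No neural net, no training loop, no buzzwords, and yet it has counted as AI for decades.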
Edit: The paper is called
Reinforcement learning and deep neural nets are buzzwordy these days, but neural nets have been an AI thing for decades and decades.
For real. I’ve started to replace “AI” with “program” or “software” in my head every time I read a headline.
So are you saying this is an algorithm?
Just a big pile of if statements.
“ifs are a code smell”
And AI is a buzzword that encompasses a variety of statistical tools. Articles write “AI” to evoke generative tools in people’s minds, but very specialized tools are at work here.
deleted by creator
It’s a tool, it can be used for both. Just like any other tool, a hammer for example. Excellent killing weapon, but also great for driving nails.
A scalpel can be used to cut or to heal, depending on the skill and intentions of the wielder.
Learned that from Stanislav Grof. He was talking about LSD.
Two nice examples
AI is most likely here to stay, so if you have it do “good” things effectively, then it’s a good boi. If it is ineffective or you have it do “bad” things, then it’s a bad boy.
AI is just the tool. It’s not good or bad by itself.
Skynet assures you it’s a good thing. Matrix disagrees because it points out that Skynet is closed source and no one knows what it’s really doing.
The funny thing is that with “AI” (aka machine learning), even when it’s open source, nobody knows what it’s doing or why.
Good thing, because one day our robot overlords will read this and I want to be on record having said that.
Yes.
Yesn’t. Maybeer?