Those are some very bold, generic claims for an accelerator chip startup that doesn’t provide any details or benchmarks beyond some basic diagrams and graphs, all while looking for funding and partners.
Kind of reminds me of basically every tech kickstarter ever.
“Extraordinary claims require extraordinary evidence” (a.k.a., the Sagan standard)
Should I even click?
Valtonen says that this has made the CPU the weakest link in computing in recent years.
This is contrary to everything I know as a programmer today. CPUs are fast, and spare cores still go underutilized because efficient parallel programming is a capital-H Hard problem.
The weakest link in computing is RAM, which is why CPUs have three levels of cache: to squeeze the most out of the bottlenecked memory bus. Whole software architectures are designed around cache efficiency.
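To make the cache point concrete, here’s a toy sketch (my own, nothing to do with the startup; `N` is an arbitrary size): the same sum over the same data is memory-friendly or memory-hostile depending only on traversal order.

```c
#include <stddef.h>

enum { N = 1024 };  /* arbitrary matrix size for the sketch */

/* Unit-stride walk: consecutive addresses, so each 64-byte cache
   line is fully used and the prefetcher keeps up. */
double sum_row_major(const double a[N][N]) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Stride of N doubles per step: nearly every access touches a new
   cache line, so this is typically several times slower despite
   doing identical arithmetic. */
double sum_col_major(const double a[N][N]) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}
```

Same result, very different memory behavior, which is exactly why people say RAM, not the ALU, is the bottleneck.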
I’m not sure I understand how just adding more cores as a coprocessor (not even a floating-point-optimized unit, which GPUs already are) will boost performance so much. Unless the thing can magically schedule single-threaded apps as parallel.
Even then, it feels like market momentum is already behind TPUs and “AI-enhancement” boards as the next required daughterboards after GPUs.
Eh, as always: It depends.
For example: memcpy, one of their claimed 100x-performance tasks, can be I/O-bound on systems where the CPU doesn’t have many memory channels. But on a well-provisioned architecture, e.g. a modern server CPU with many more memory channels available, it’s actually pretty hard to saturate the memory bandwidth completely.
Big if true. Going to need some real convincing benchmarks to believe this one, though. From a read, it seems like they’re putting ASICs on processor dies, which is not at all a new concept.
Very big if, though.
The biggest bigs usually do
Sounds like a TEDx presentation.
Why does this remind me of the math co-processors back in the 386 days?
Glad I didn’t have to scroll far to find this. That’s right where my mind went. Though if you think about it, it’s functionally no different than GPUs, upcoming NPUs, E-cores on chips or other ASICs.
So, they’re essentially claiming they’ve found a way around Amdahl’s Law?
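For anyone who wants the textbook formula behind that objection (standard Amdahl, not anything from the article): with parallel fraction p and n processors, speedup is 1 / ((1 − p) + p/n), so even a workload that’s 95% parallelizable tops out below 20x no matter how many cores you throw at it.

```c
/* Amdahl's Law: p = parallelizable fraction of the work,
   n = number of processors. The serial fraction (1 - p)
   bounds the speedup at 1 / (1 - p) as n -> infinity. */
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}
```

Plug in p = 0.95 with a million cores and you still get roughly 20x, not 100x, which is why the claim needs real benchmarks.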