For those, like me, who wondered how much data was written in 400 picoseconds, the answer is a single bit.
If I’m doing the math correctly, one bit per 400 ps is about 2.5 Gbit/s (~0.3 GB/s) per cell, so with a few dozen to a few hundred cells written in parallel that’s write speeds in the 10s-100s of GB/s range (rough math below).
If it’s sustainable.
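A back-of-the-envelope sketch of that estimate, assuming the 400 ps figure is per bit per cell; the parallel-cell counts are illustrative guesses, not numbers from the article:

```python
# Back-of-the-envelope write throughput from the 400 ps/bit figure.
# The parallel-cell counts are illustrative assumptions, not from the article.
WRITE_TIME_S = 400e-12                           # 400 ps per bit

bits_per_s_per_cell = 1 / WRITE_TIME_S           # 2.5e9 bits/s
bytes_per_s_per_cell = bits_per_s_per_cell / 8   # ~0.31 GB/s

for parallel_cells in (1, 64, 512):              # hypothetical parallelism
    gb_per_s = bytes_per_s_per_cell * parallel_cells / 1e9
    print(f"{parallel_cells:4d} cells in parallel -> ~{gb_per_s:5.1f} GB/s")
```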
Still about 100 picoseconds too slow for my taste.
400 too slow for my use case; we’re trying to violate causality.
The human eye can’t even perceive faster than 1000 picoseconds, so…
Really? I would have guessed the eye was 6 orders of magnitude slower than that.
Other than just making everything generally faster, what’s a use case that would benefit the most from something like this? My first thought is high-speed cameras: some Phantom cameras can capture hundreds, even thousands of gigabytes of data per second, so this tech could probably find some great applications there.
There are some servers using SSDs as a direct extension of RAM. Flash doesn’t currently have the write endurance or the latency to fully replace RAM; this solves one of those.
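A minimal sketch of that idea as it exists today, using a memory-mapped file on an NVMe SSD as a byte-addressable extension of RAM; the path and size are placeholders, and real memory-tiering setups (swap on NVMe, CXL memory, etc.) are more involved:

```python
# Minimal sketch: treat an SSD-backed file as byte-addressable memory via mmap.
# The file path and size are placeholders assumed for illustration.
import mmap

PATH = "/mnt/nvme/scratch.bin"   # assumed to live on an NVMe SSD
SIZE = 1 << 30                   # 1 GiB region

with open(PATH, "w+b") as f:
    f.truncate(SIZE)
    with mmap.mmap(f.fileno(), SIZE) as mem:
        mem[0:5] = b"hello"      # writes go through the page cache, then to flash
        print(mem[0:5])          # reads come back through ordinary memory access
```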
Imagine, though, if we could unify RAM and mass storage. That’s a major assumption in the memory hierarchy that goes away.
The article highlights on-device AI processing. That could be game-changing in a lot of ways.
I doubt it would work for the buffer memory in a high-speed camera. That buffer needs to be overwritten very frequently until the camera is triggered, and they didn’t say what the erase time or write endurance is. It could work for quickly dumping the RAM after triggering, but you don’t need low latency for that; a large number of normal flash chips written in parallel will work just fine.
The speed of many machine learning models is bound by the bandwidth of the memory they’re loaded into, so that’s probably the biggest one.
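A rough illustration of that bound, under the assumption that single-stream LLM decoding has to stream every weight from memory once per generated token; the model size and bandwidth numbers are placeholders, not from the article:

```python
# Rough memory-bandwidth bound on single-stream LLM decoding.
# Model size and bandwidth figures are illustrative assumptions.
def max_tokens_per_second(model_bytes: float, mem_bandwidth_bytes_per_s: float) -> float:
    """Upper bound: each generated token streams all weights from memory once."""
    return mem_bandwidth_bytes_per_s / model_bytes

model_bytes = 7e9 * 2  # a 7B-parameter model in 16-bit weights (~14 GB)

for name, bandwidth in [("DDR5 (~60 GB/s)", 60e9), ("HBM3 (~3 TB/s)", 3e12)]:
    print(f"{name}: ~{max_tokens_per_second(model_bytes, bandwidth):.0f} tokens/s")
```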
It’s using graphene, so we’ll see this as soon as the hundreds of other promised graphene innovations arrive, which is who knows when.
So… How many cycles can it withstand?
At least 1