Distances though? I’ve seen similar breakthroughs in the past, but they were only good for networking within the same room.
It’s optical fiber, so it’s good for miles. It’s unlikely to reach homes for decades, but telcos will use it for connecting networks.
Optical fiber is already 100 gigabit, so the article comparing it to your home connection is stupid.
So the scientist improved current fiber speed by 10×, not 1.2 million×.
Note they did not say 1.2 million times faster than fiber. Instead they compared it to the broadband definition: an obvious choice of clickbait terminology.
OM1 through OM4 multimode fiber have full-rate distances of less than 800 meters.
Yes, there is faster stuff that goes for literal miles, but saying that optical fiber can always go miles is incorrect.
To be fair, they obviously mean single-mode fiber, not multimode.
No one said “always”; the original comment is correct that fiber can literally go miles.
It’s much more than just 100 Gb/s.
A single fiber can carry over 90 channels of 400G each. The public is misled by articles like this. It’s like saying that scientists have figured out how to deliver the power of the sun, but that technology would be reserved for the power company’s generation facilities, not your house.
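A quick sketch of the arithmetic, for anyone curious how these headline multipliers get manufactured. The 25 Mb/s baseline below is my assumption (the old FCC broadband floor); the article may have used a different figure:

```python
# Back-of-the-envelope: aggregate DWDM capacity on one fiber,
# and the headline multiplier you get against a broadband baseline.
channels = 90               # C-band channels on one strand (figure from above)
per_channel_bps = 400e9     # 400G per channel

aggregate_bps = channels * per_channel_bps
print(f"Aggregate: {aggregate_bps / 1e12:.0f} Tb/s")        # 36 Tb/s

broadband_bps = 25e6        # assumed baseline: the old 25 Mb/s FCC definition
print(f"Times 'broadband': {aggregate_bps / broadband_bps:,.0f}")  # 1,440,000
```

Divide tens of terabits by a consumer broadband figure and "a million times faster" falls out of ordinary commercial gear, no breakthrough required.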
You mean with 50 GHz channels in the C-band? That would put you at something like 42 Gbaud with DP-64QAM modulation. It probably works, but your reach is going to be pretty shitty because your OSNR requirements will be high, so you can’t tolerate many amplifier spans. I would think that 58 channels at 75 GHz or even 44 channels at 100 GHz are the more likely deployment scenarios.
On the other hand, we aren’t struggling for spectrum yet, so I haven’t really had to make that call.
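For anyone following along, the symbol-rate arithmetic looks roughly like this; the ~20% FEC overhead is an assumption (real transponders vary), but with it you land right around the figure above:

```python
# Minimal sketch: symbol rate needed for 400G net over DP-64QAM.
bits_per_symbol = 6 * 2        # 64-QAM = 6 bits/symbol, times 2 polarizations
net_rate_bps = 400e9           # 400G client rate
fec_overhead = 0.20            # assumed ~20% FEC overhead; varies in practice

line_rate_bps = net_rate_bps * (1 + fec_overhead)
symbol_rate_baud = line_rate_bps / bits_per_symbol
print(f"Symbol rate: {symbol_rate_baud / 1e9:.0f} Gbaud")   # 40 Gbaud

# With spectral roll-off, ~40 Gbaud barely squeezes into a 50 GHz slot,
# which is why 75 GHz or 100 GHz spacing is more comfortable.
```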
It’s not stupid at all. “Broadband” speed is a term that laypeople across the country can at least conceptualize. Articles like this aren’t necessarily written exclusively for industry folks. If the population can’t relate to the information, how can they hope to pressure telcos for better services?
So it’s fine if an article says SpaceX developed a new rocket that travels 100× faster than a car?
Because that implies a breakthrough when it’s actually not significantly faster than other rockets: it’s the speed needed to reach the ISS.
10× faster than existing fiber would be accurate reporting, especially given that there are labs that have already transmitted at petabit speeds over optical fiber. So terabit isn’t significant; only his method is.
I wonder what non-telco applications will use this.
I wonder if something like a sports stadium has video requirements that would get close with HFR 8K video?
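Ballparking that with assumed numbers (120 fps, 10-bit 4:2:2 chroma, 40 camera feeds; real productions compress heavily, so treat this as an upper bound), uncompressed 8K gets into terabit territory surprisingly fast:

```python
# Rough uncompressed bandwidth for HFR 8K camera feeds at a venue.
width, height = 7680, 4320     # 8K UHD resolution
fps = 120                      # assumed "high frame rate"
bits_per_pixel = 20            # assumed 10-bit 4:2:2 chroma

feed_bps = width * height * fps * bits_per_pixel
print(f"One feed: {feed_bps / 1e9:.0f} Gb/s")                     # ~80 Gb/s

cameras = 40                   # assumed stadium camera count
print(f"{cameras} feeds: {cameras * feed_bps / 1e12:.1f} Tb/s")   # ~3.2 Tb/s
```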
To be fair, it all trickles down to home users eventually. We’re starting to see 10+ Gbps fiber in enthusiast home networks and internet connections. Small offices are widely adopting 100 Gbps fiber. It wasn’t that long ago that we were adopting 1 gigabit Ethernet in home networks, and it won’t be long before we see widespread 800+ Gbps fiber.
Streaming video is definitely a big application where more bandwidth will come in handy. I think transferring large AI models, in the hundreds of gigabytes, may also become a large share of traffic in the near future.
Yup, my city has historically had mediocre Internet, and now they’re rolling out fiber and advertising 10 Gb/s at a relatively reasonable $200/month.
I’m probably not getting it anytime soon (I’m happy with my 50/20 service), but I know a few enthusiasts who will. I’ll see what the final pricing looks like and decide if it’s worth upgrading my infrastructure (I only have Wireless AC, so there’s no point in going above 300 Mbps or so).
Man, the tech is so pricey though. 10G switches are scary lol
Yeah. I honestly think 10GBASE-T was a mistake, since it fragmented the 10 Gbit market and made it so expensive.
The SFP+ switches aren’t too bad; here’s an 8-port unmanaged one for $150: https://www.amazon.com/MokerLink-Support-Bandwidth-Unmanaged-Ethernet/dp/B09W24RZDC/
SFP+ still pretty much requires PCIe cards or home-server-style hardware, but it’s pretty accessible. And you can buy 10GBASE-T adapters for backwards compatibility for $40.
Some Wi-Fi routers are even starting to adopt SFP+, though it’s ungodly expensive. https://www.amazon.se/TP-Link-Deco-BE85-2-pack-Tri-Band-router/dp/B0C5Y46J1W/
Disaggregated compute might be able to leverage this in the data center. I could use it to get my server, gaming PC, and home theater to share memory bandwidth on top of storage; heck, maybe even some direct memory access between distributed accelerators.
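Rough math on why that actually needs terabit-class links (the DDR5-4800 per-channel bandwidth is the standard spec number; the four-channel figure is my assumption):

```python
# Why disaggregated memory wants terabit links: a single DDR5 channel
# already saturates hundreds of gigabits of network bandwidth.
ddr5_channel_bytes = 38.4e9   # DDR5-4800: 4800 MT/s * 8 bytes per transfer
channel_bps = ddr5_channel_bytes * 8
print(f"One DDR5 channel: {channel_bps / 1e9:.0f} Gb/s")    # ~307 Gb/s

channels = 4                  # assumed channels exposed to remote hosts
print(f"{channels} channels: {channels * channel_bps / 1e12:.1f} Tb/s")  # ~1.2 Tb/s
```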
Gotta eat those PCIe lanes somehow.
I don’t think people would fuck with amplifiers in a DC environment. Just using more fiber would be so much cheaper and easier to maintain. At least I haven’t heard of any current datacenters even using conventional DWDM in the C-band.
At best, Google was using BiDi optics, which I suppose is a minimal form of wavelength-division multiplexing.