Yeah, I’m not sure my reaction to them adding Pandas as a playable race (in the Warcraft III expansion) was that they were “really badass” as OP seemed to think.
Also, the main problem with LIDAR is that it doesn’t really see any more than cameras do. It uses visible or near-infrared light, so it gets blocked by basically the same things that block a camera. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.
Spinning lidar sensors also mechanically shed occlusions like raindrops and dust. And one important thing about lidar is that it actively emits laser light, so it’s a two-way operation, like driving with headlights, not just passive sensing, like driving by sunlight.
Waymo’s approach appears to differ in a few key ways:
There’s a school of thought that because many of these would need to be eliminated for true Level 5 autonomous driving, Waymo is in danger of walking down a dead end that never gets them to the destination. But another take is that this is akin to scaffolding during construction: it serves an important function while the permanent structure goes up, but can be taken down afterward.
I suspect that the lidar/radar/ultrasonic/extra cameras will be more useful for training the models needed to reduce reliance on human intervention, and maybe eventually reduce the number of sensors. Not just by adding to the quantity of training data, but by serving as a filtering/screening function that improves the quality of the data fed into training.
BYD was just a cell phone battery company, and was like “well we’ve got the lithium supply chain locked down, you know what needs huge batteries: guess we’re doing cars now.”
Waymo chose the more expensive but easier option, but it also limits their scope and scalability.
I don’t buy it. The lidar data is useful for training the vision models, so there’s plenty of reason to believe that Waymo can solve the vision issues faster than Tesla.
The thing is, if Intel doesn’t actually get 18A and beyond competitive, it might be in a death spiral toward bankruptcy as well. Yes, they’ve got a ton of cash on hand and several very profitable business lines, but that won’t last forever, and they need plans to turn profits in the future, too.
Compared to the AMD FX series, the Intel Core and Core 2 were so superior that it was hard to see how AMD could come back from that.
Yup, an advantage in this industry doesn’t last forever, and a lead in a particular generation doesn’t necessarily translate to the next paradigm.
Canon wants to challenge ASML and get back in the lithography game, with a tooling shift (nanoimprint lithography) they’ve been working on for 10 years. The Japanese “startup” Rapidus wants to get into the foundry game by starting with 2nm, and they’ve got the backing of pretty much the entirety of the Japanese electronics industry.
TSMC is holding onto finFET a little bit longer than Samsung and Intel, as those two switch to gate-all-around FETs (GAAFETs). Which makes sense, because those two never got to the point where they could compete with TSMC on finFETs, so they’re eager to move onto the next thing a bit earlier while TSMC squeezes out the last bit of profit from their established advantage.
Nothing lasts forever, and the future is always uncertain. The history of the semiconductor industry is a constant reminder of that.
Intel got caught off guard by the rise of advanced packaging, where AMD’s chiplet design could actually compete with a single die (while having the advantage of being more resilient against defects, and thus higher yield).
Intel fell behind on manufacturing when finFETs became the standard. TSMC leapfrogged Intel (and Samsung fell behind) based on TSMC’s undisputed advantage at manufacturing finFETs.
Those are the two main areas where Intel gave up its lead, both on the design side and the manufacturing side. At least that’s my read of the situation.
So with the case/mobo/power supply at $259 and the CPU/GPU at $329, you’ve got $11 left for RAM and an SSD if you want to be competitive with the $599 base model Mac Mini.
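Spelled out as a back-of-the-envelope check (just the numbers from above, in Python):

```python
# Back-of-the-envelope budget check against the $599 base Mac Mini.
mac_mini_base = 599   # base configuration price
case_mobo_psu = 259   # case + motherboard + power supply
cpu_gpu = 329         # CPU with integrated graphics

remaining = mac_mini_base - (case_mobo_psu + cpu_gpu)
print(f"Left for RAM + SSD: ${remaining}")  # Left for RAM + SSD: $11
```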
That’s what I mean. If you’re gonna come close to competing with the entry-level price of the Mac Mini (to say nothing of the frequent sales/offers/coupons that Best Buy, Amazon, B&H, and Costco run), you’ll have to sacrifice and use a significantly lower-tier CPU. Maybe you’d rather have more RAM/storage and are OK with that lower-performing CPU and twice the power consumption (around 65W rather than 30W), but at that point you’re basically comparing a different machine.
Ok, let’s put together a mini PC with a Ryzen 9700X for under $600. What case, power supply, motherboard, RAM, and SSD are we gonna get? How’s it compare on power, noise, and form factor?
It’s an apples to oranges comparison, and at a certain point you’re comparing different things.
When I was last comparing laptops a few years back I was seriously leaning towards the Framework AMD. It was clearly a tradeoff between Apple’s displays, trackpad, lid hinges, CPU/GPU benchmarks, and battery life, versus much more built in memory and storage, a tall display form factor, and better Linux support. Price was kinda a wash, as I was just comparing what I could get for $1500 at the time. I ended up with an Apple again, in the end. I’m keeping an eye on progress with the Asahi project, though, and might switch OSes soon.
For the Mac Mini? The Apple Silicon line has always been a really good value for the CPU, compared to similar performance from Intel and AMD. The upcharge on RAM and storage basically meant it broke even somewhere around 1 or 2 upgrades, if you were looking for a comparable CPU/GPU.
For my purposes the M1 Mac Mini was cheaper than anything I was looking at for a low power/quiet home server, back in 2021, through some random Costco coupon for $80 off the base $599 configuration. A little more CPU than I needed, and a little less RAM than I would’ve preferred, but it was fine.
Plus, having official Mac hardware lets me run a BlueBubbles server and hack Backblaze pricing (unlimited data backup for any external storage you can hook up to a Mac), so that was a nice little bonus compared to running a Linux server.
On their laptops, they’re kinda cost-competitive if you’re looking for high-DPI laptop screens, and there’s just not really a good comparison for that CPU/GPU performance per watt. If you don’t need or want those things, then Macs aren’t a good value, but if you are looking for those things, the other computer manufacturers aren’t going to be offering better value.
You can’t just use an audio file by itself. It has to come from somewhere.
The courts already have a system in place for this: anyone who seeks to introduce a screenshot of a text message, a printout of a webpage, a VHS tape with video, or a plain audio file has to get it admitted as evidence, with someone testifying that it is real and accurate, and with an opportunity for the other side to question and even investigate where it came from and how it was made/stored/copied.
If I just show up to a car accident case with an audio recording that I claim is the other driver admitting that he forgot to look before turning, that audio is gonna do basically nothing unless and until I show that I had a reason to be making that recording while talking to him, why I didn’t give it to the police who wrote the accident report that day, etc. And even then, the other driver can say “that’s not me and I don’t know what you think that recording is” and we’re still back to a credibility problem.
We didn’t need AI to do impressions of people. This has always been a problem, or a non-problem, in evidence.
A camera that authenticates the timestamp and contents of an image is great. But it’s still limited. If I take that camera, mount it on a tripod, and take a perfect photograph of a poster of Van Gogh’s Starry Night, the resulting image will be yet another one of millions of similar copies, only with a digital signature proving that it was a newly created image today, in 2024.
Authenticating what the camera sensor sees is only part of the problem, when the camera can be shown fake stuff, too. Special effects have been around for decades, and practical effects are even older.
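To make that concrete, here’s a minimal sketch (Python, using the third-party cryptography package; the key handling and payload layout are illustrative, not any real camera’s scheme) of what such in-camera signing amounts to:

```python
# Minimal sketch of in-camera image signing. The signature proves these
# exact bytes were produced under this camera's key at the claimed time;
# it says nothing about whether the scene in front of the lens was real.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # imagine this baked in at manufacture
image_bytes = b"...sensor readout..."      # could be a poster of Starry Night!
payload = str(time.time()).encode() + image_bytes

signature = camera_key.sign(payload)

# Anyone with the camera's public key can confirm provenance...
try:
    camera_key.public_key().verify(signature, payload)
    print("These bytes came from this camera at the claimed time.")
except InvalidSignature:
    print("Tampered, or not from this camera.")
# ...but an authentic capture still isn't proof of an authentic scene.
```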
uh that was Siri’s fault
He’s a great guy, but sometimes a little hard to follow if you’re only taking part in one conversation at a time while he’s talking in two and listening to a third. He expects you to be on the ball in your own discussion when he jumps in to drop a tidbit or ask a question, like a chess master playing four games in the park at once.
If it’s like simultaneous chess, why isn’t the single thread sufficient context for everything that happens in that thread? It just sounds like the guy you’re describing has low cognitive empathy and doesn’t understand other people’s minds. At that point you’re just describing a neurodivergent person who may or may not be a genius in certain domains, while being a moron in this one domain that you’ve described.
Yeah, Netscape 4.0 was simply slower than IE 4.0. Back then, when a browser was a program that would actually push the limits of the hardware, that was a big deal.
Now that splash screen, with its pixelated gradient of the 256-color palette, brings back some nostalgic memories.
It’s funny because we mostly see pixelated stuff today in shitty JPEG artifacts, but those follow the JPEG algorithm’s rules for conserving file size within its compression scheme, so they look different. This splash screen seemingly has every pixel meticulously chosen so that it’s in the right place, working within the limits of the color space.
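The difference is easy to reproduce. A quick sketch (Python with Pillow; “splash.png” is a hypothetical stand-in for any image you have on hand):

```python
# Two different kinds of "pixelated": 256-color palette quantization
# (what that splash screen is doing) vs. JPEG compression artifacts.
from PIL import Image

img = Image.open("splash.png").convert("RGB")

# Snap every pixel to one of 256 palette colors (with dithering): the
# grainy, deliberate-looking gradients of old splash screens.
img.quantize(colors=256).save("palette_256.png")

# Crank up JPEG compression: the DCT-based scheme discards high-frequency
# detail per 8x8 block, giving smeary ringing instead of crisp color steps.
img.save("jpeg_q10.jpg", quality=10)
```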
That’s the start, of course. One could always play good cop, bad cop: “I have to do this to comply with the law, sorry, there’s nothing else I can do.” What Linus has done here is play bad cop, bad cop: “the law says I have to obey sanctions, and by the way I support the sanctions and this move anyway.”
I actually have fairly high hopes for Intel’s 18A and the upcoming technology changes presenting competition for TSMC (including others like Samsung and the Japanese startup Rapidus). And even if it turns into a 3-way race among Asian companies, the three nations are different enough that there’s at least some strength in diversity.
TSMC’s dominance over the last decade can, I think, be traced to their clear advantage in producing finFETs at scale. As we move on from the finFET paradigm toward GAA and backside power delivery, there are a few opportunities to leapfrog TSMC. And in fact, TSMC is making such good money on their 3nm and 4nm processes that their roadmap to GAAFETs and backside power is slower than Intel’s and Samsung’s, seemingly to squeeze the very last bit out of finFETs before moving on.
If there’s meaningful competition in the space, we might see lower prices, which could lead to greater innovation from their customers.
Do I think it will happen? I’m not sure. But I’m hopeful, and wouldn’t be surprised if the next few process nodes show big shakeups in the race.
For the news articles themselves, each of the big media companies uses a full-fledged CMS, many developed in-house or licensed from another major media organization.
But for things like journalist microblogging, Mastodon seems like a drop-in replacement for Twitter or Threads or Bluesky, one that could theoretically integrate with the existing authentication/identity/account-management system they already use for logins, email, intranet access, publishing rights on whatever CMS they do have, etc.
Same with universities. Sure, each department might have official webpages, but why not provide faculty and students with the ability to engage on a university-hosted service like Mastodon or Lemmy?
Governments (federal, state, local) could do the same thing with official communications.
It could be like the old days of email, where people got their public-facing addresses from their employer or university and were able to use that address relatively freely, including for personal use in many cases. In a sense, the domain/instance shows your association with the domain owner (a university or government or newspaper or company), but you’re still speaking as yourself when using that service.
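The fediverse already has plumbing for exactly that domain-shows-association idea, via WebFinger (RFC 7033), the lookup Mastodon uses to resolve handles. A small sketch with a made-up handle (Python, stdlib only):

```python
# Resolve a fediverse handle via WebFinger (RFC 7033). The domain in the
# handle is the association: the server at that domain vouches that the
# account exists there. "alice@university.example" is made up.
import json
import urllib.parse
import urllib.request

handle = "alice@university.example"
_, domain = handle.split("@")
url = (f"https://{domain}/.well-known/webfinger?"
       + urllib.parse.urlencode({"resource": f"acct:{handle}"}))

with urllib.request.urlopen(url) as resp:
    jrd = json.load(resp)

print(jrd["subject"])                 # e.g. "acct:alice@university.example"
for link in jrd.get("links", []):
    print(link.get("rel"), link.get("href"))
```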