Photon is a strange beast. How do you install it?
It seems to only come as a Docker container. That’s weird. I don’t have Docker installed, and Docker should really be one choice… not the sole means of installation. I see no deb file or tarball. It seems the project has taken a direction that is non-conducive to ever becoming part of the official Debian repos.
Then it also seems that their official site “phtn.app” is a Cloudflare site – which is a terrible sign. It suggests the devs are out of touch with digital rights, decentralisation, and privacy. That doesn’t in itself mean the app is bad, but the tool is looking quite sketchy so far. Several red flags here.
(edit) I found a tarball on the releases page.
I just need to work out exactly what the effect of the user-configured node block is. In principle, if an LW user replies to either my thread or one of my comments in someone else’s thread, I would still want to see their comments and I would still want a notification. But I would want all LW-hosted threads to be hidden in timelines and search results.
On one occasion I commented in an LW-hosted thread without realising it. Then I later blocked the community that thread was in (forgetting about my past comment). Then at one point I discovered someone replied to me and I did not get the notification. That scenario should be quite rare but I wonder how it would pan out with the node-wide blocking option.
Ah, I see! Found it. Indeed that was not there last time I checked.
I’m on both Lemmy and mbin. I have several Lemmy accounts.
Now I need to understand the consequences of blocking lemmy.world. Is it just the same as blocking every lemmy.world community, or does it go further than that? E.g. If I post a thread and a LW user replies, I would not want to block their reply from appearing in my notifications. I just don’t want LW threads coming up in searches or appearing on timelines.
I think he is talking about admins blocking instances in the settings for the whole node. AFAIK, users on Lemmy and k/mBin have no such setting.
I don’t get why you want users to be able to apply cloudflare filters, though.
Suppose an instance has these users:
And suppose the instance is a special-interest instance focused on travel. The diverse group of people above have one thing in common: they want to converge on the expat travel node, and the admin wants to accommodate all of them. Norm, and many like him, are happy to subscribe to countless exclusive and centralised forums, as they are pragmatic people with no thought about tech ethics. These subscriptions flood an otherwise free-world node with exclusive content. Norm subscribes to !travelpics@exclusivenode.com. Then Victor, Terry, and sometimes Cindy all see broken pics in their view because they are excluded by Cloudflare Inc. Esther is annoyed from an ethical standpoint that this decentralised free-world venue is being polluted by exclusive content from places like Facebook Threads™ and LemmyWorld. Even though she can interact with it from her clearnet position, she morally objects to feeding content to oppressive services.
The blunt choice of the admin to federate or not with LemmyWorld means the admin cannot satisfy everyone. It’s too blunt of an instrument. Per-community blocks per user give precision but it’s a non-stop tedious manual workload to keep up with the flood of LW communities. It would be useful for a user to block all of LemmyWorld in one action. I don’t want to see LW-hosted threads and I don’t want LW forums cluttering search results.
Cloudflare is an exclusive walled garden that excludes several demographics of people. I am in Cloudflare’s excluded group. This means:
CF nodes like LW break the fedi in arbitrary ways that undermine the fedi’s design and philosophy. So the use case is to get rid of the pollution: to get broken pieces out of sight and unbury the content that is decentralised, inclusive, open and free; to reach conversations with people who share the same values, who oppose digital exclusion and centralised corporate control, and who embrace privacy. It’s also necessary to de-pollute searches. If I search for “privacy”, the results are flooded with content from people and nodes that are antithetical to privacy. Blocking fixes that. If I take a couple of minutes to block oxymoronic venues like lemmy.world/c/privacy, do the same for a dozen other Cloudflared nodes, and then search for “privacy” again, I get better results.
When crossposting from Lemmy, there is a pulldown list of target communities, which is another search tool. That breaks when there are more communities than fit in the box. And it’s often jam-packed with Cloudflare venues – places that digital rights proponents will not feed. Blocking the junk CF-centralised communities makes it possible to select the target community I’m after.
So it works. The federated timeline is also more interesting now because it’s decluttered of exclusive places. The problem is that it’s more tedious than it needs to be. I am blocking hundreds of LW communities right now. It probably took 500 clicks to reach the config I have, and I probably have hundreds more clicks to go. When in fact I should simply have been able to enter ~10 or so nodes.
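The hundreds of manual clicks could, in principle, be scripted against Lemmy’s HTTP API (GET /api/v3/community/list and POST /api/v3/community/block). This is a rough sketch under that assumption – the helper below only filters a community listing down to the ids hosted on one instance; the actual block requests are left as comments, and endpoint details should be checked against your server version:

```python
# Hypothetical sketch: bulk-block every community hosted on a given
# instance, instead of clicking through hundreds of block buttons.
from urllib.parse import urlparse


def communities_on_instance(communities, host):
    """Return the ids of communities whose actor_id lives on `host`.

    `communities` mimics the `communities` array returned by
    GET /api/v3/community/list: each entry holds a `community` dict
    with `id` and `actor_id` fields.
    """
    ids = []
    for entry in communities:
        c = entry["community"]
        if urlparse(c["actor_id"]).hostname == host:
            ids.append(c["id"])
    return ids


# Sending the blocks would then look roughly like this (untested,
# auth details vary by Lemmy version):
#
#   import requests
#   for cid in communities_on_instance(listing, "lemmy.world"):
#       requests.post(f"{my_instance}/api/v3/community/block",
#                     json={"community_id": cid, "block": True},
#                     headers={"Authorization": f"Bearer {jwt}"})
```

Paginating through the full community listing is the tedious part a script absorbs; the per-user effect should match clicking “block” on each community by hand.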
tl;dr:
I’ve been using Lemmy for years, back when there were only 2 or 3 nodes and federation capability did not exist. It’s a shit show. Extremely buggy web clients and no useful, proper desktop clients. I must say it’s sensible that the version numbers are still 0.x. It’s also getting worse: 0.19.3 was more usable than 0.19.5, which introduced serious bugs that make it unusable in some variants of Chromium.
mBin has been plagued with serious bugs. But it’s also very young. It was not ready for prime time when it got rolled out, but I think it (or kbin) was pushed out early because many Redditors were jumping ship and those refugees needed a place to go. IMO mbin will outpace Lemmy and take the lead. Mbin is bad at searching: you can search for mags that are already federated, but if a community does not appear in a search, I’m not even sure whether or how a user can create the federated relationship.
The running goat fuck with Lemmy in recent years is the shitty JavaScript web client. There’s only so much blame you can fairly put on those devs, though, because they need to focus on a working server. The shitty JavaScript web client should just be considered a proof-of-concept experimental sandbox. JavaScript is unfit for this kind of purpose. It’s really on the FOSS community to produce a decent proper client. What has happened instead is a focus on a dozen or so different phone apps (wtf?) and no real effort on a desktop app.
Both Lemmy and Mbin lack the ability to filter out or block Cloudflare nodes. They both only give a way to block specific forums. So you get immersed/swamped in LemmyWorld’s walled garden, and getting LemmyWorld out of sight takes a big manual effort of blocking hundreds of communities. It’s a never-ending game of whack-a-mole.
Yes indeed… “threads” in the generic sense of the word pre-dates the web. And the threadiverse is a few years older than “FB Threads™”. That’s what’s so despicable about Facebook hijacking the name. It’s also why I will not refer to them as Meta (another hijacking of a generic term with useful meaning that their egocentric marketers fucked up).
What do you say? Am I too lazy, or is it impractical to stay away from big tech?
Laziness is what the surveillance advertisers are exploiting. It is everyone’s duty to resist the tyranny of convenience that Tim Wu articulates in a famous essay.
After a year I’m starting to think that maybe my data is not worth the hassle just to keep big tech out of my digital life… I guess Big Brother wins
Think of it as boycotting. Exposure of your personal data may not be worth the effort of protecting it, but the big picture is that privacy seekers are not just looking for confidentiality. Privacy is about power and agency. You are exercising your right to boycott a harmful entity. Boycotts are no longer simply a matter of not handing money over, because data is worth money. So boycotting now entails not handing your data over. Giving Google your data feeds Google’s profits.
So you are really asking, “should I give up the boycott”? The answer is no, because the boycott is not just a duty to yourself; it’s a duty everyone benefits from (except Google).
Cloudflare is not at all sensible from a privacy standpoint. Cloudflare is a bigger privacy offender than Google and far more detrimental to our rights.
https://git.kescher.at/dCF/deCloudflare/src/branch/master/subfiles/rapsheet.cloudflare.md
Reverse proxying your website through Cloudflare is actually an attack on privacy. You make yourself part of the problem by arbitrarily blocking several demographics of people from your website including Tor and VPN users (people doing their part to retain privacy).
As far as we know, Google is not giving up any data. The crawler still must store a copy of the text for the index. The only certainty we have is that Google is no longer sharing it.
Here’s the heart of the not-so-obvious problem:
Websites treat the Google crawler like a 1st class citizen. Paywalls give Google unpaid junk-free access. Then Google search results direct people to a website that treats humans differently (worse). So Google users are led to sites they cannot access. The heart of the problem is access inequality. Google effectively serves to refer people to sites that are not publicly accessible.
I do not want to see search results I cannot access. Google cache was the equalizer that neutralized that problem. Now that problem is back in our face.
From the article:
“was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.” (emphasis added)
Bullshit! The web gets increasingly enshitified and content is less accessible every day.
For now, you can still build your own cache links even without the button, just by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search.
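The manual construction described above can be reduced to a one-line helper. A minimal sketch (the `webcache.googleusercontent.com` prefix is the one quoted in the paragraph; whether Google keeps serving it is anyone’s guess):

```python
# Build a Google cache lookup URL by prepending the webcache prefix
# to an ordinary page URL, percent-encoding anything unsafe while
# keeping ":" and "/" intact so the target URL stays readable.
from urllib.parse import quote

CACHE_PREFIX = "https://webcache.googleusercontent.com/search?q=cache:"


def google_cache_url(url: str) -> str:
    """Return the webcache lookup URL for `url`."""
    return CACHE_PREFIX + quote(url, safe=":/")


print(google_cache_url("https://example.com/article"))
# https://webcache.googleusercontent.com/search?q=cache:https://example.com/article
```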
You can also use 12ft.io.
Cached links were great if the website was down or quickly changed, but they also gave some insight over the years about how the “Google Bot” web crawler views the web. … A lot of Google Bot details are shrouded in secrecy to hide from SEO spammers, but you could learn a lot by investigating what cached pages look like.
Okay, so there’s a more plausible theory about the real reason for this move. Google may be trying to increase the secrecy of how its crawler functions.
The pages aren’t necessarily rendered like how you would expect.
More importantly, they don’t render the way authors expect. And that’s a fucking good thing! It’s how caching helps give us some escape from enshitification. From the 12ft.io FAQ:
“Prepend 12ft.io/ to the URL webpage, and we’ll try our best to remove the popups, ads, and other visual distractions.”
It also circumvents #paywalls. No doubt there must be legal pressure on Google from angry website owners who want to force their content to come with garbage.
The death of cached sites will mean the Internet Archive has a larger burden of archiving and tracking changes on the world’s webpages.
The possibly good news is that Google’s role shrinks a bit. Any Google shrinkage is a good outcome overall. But there is a concerning relationship between archive.org and Cloudflare. I depend heavily on archive.org largely because Cloudflare has broken ~25% of the web. The day #InternetArchive becomes Cloudflared itself, we’re fucked.
We need several non-profits to archive the web in parallel redundancy with archive.org.
Bingo. When I read that part of the article, I felt insulted. People see the web getting increasingly enshitified and less accessible. The increased need for cached pages has justified the existence of 12ft.io.
~40% of my web access is now dependent on archive.org and 12ft.io.
So yes, Google is obviously bullshitting. Clearly there is a real reason for nixing cached pages and Google is concealing that reason.
This is probably an attempt to save money on storage costs.
That’s in fact what the article claims as Google’s reason. But it seems irrational. Google still needs to index websites for the search engine, so the storage is still needed since the data collection is still needed. The only difference (AFAICT) is that Google is simply not sharing that data. Also, there are bigger pots of money in play than piddly storage costs.
You were given plenty of references. You can verify it yourself if you want to get a clue – or continue to spread misinfo to the contrary. You are doing a disservice to your users and the fedi by maintaining patronage to the privacy-abusing corp.
If you truly don’t understand the problems with Cloudflare, why not embrace transparency and inform people who visit your site that CF is used and that CF sees all their traffic despite the padlock? If you are proud of this, why conceal it?
Not exactly. !showerthoughts@lemmy.world was a poor choice, as is:

- !showerthoughts@zerobytes.monster ← Cloudflare
- !showerthoughts@sh.itjust.works ← Cloudflare
- !showerthoughts@lemmy.ca ← Cloudflare
- !showerthoughts@lemm.ee ← Cloudflare
- !hotshowerthoughts@x69.org ← Cloudflare, and possibly irrelevant
- !showerthoughts@lemmy.ml ← not CF, but copious political baggage, abusive moderation & centralised by disproportionate size

They’re all shit & the OP’s own account is limited to creating a new community on #lemmyWorld. !showerthoughts@lemmy.ml would be the lesser of evils, but the best move would be to create an acct on a digital-rights-respecting instance that allows community creation and then create a showerthoughts community there.

(EDIT) !showerThoughts@fedia.io should address these issues.
Normal users don’t have these issues.
That’s not true. Cloudflare marginalizes both normal users and street-wise users. In particular:
There are likely more oppressed groups beyond that because there is no transparency with Cloudflare.
Thanks for the insights. I was looking for a client, not a server, so maybe this can’t help me. A server somewhat hints that it would be bandwidth-heavy. I’m looking to escape the stock JS web client. At the same time, I am on a very limited uplink. To give an idea, I browse the web with images disabled because they would suck my quota dry.