Use kagi.com. By default it flags paywalled sites, and you can also block whole domains if you choose. Listicles are broken out separately, and if you’re feeling ambitious, Kagi supports regex-based redirects, so you could redirect paywalled domains to a paywall-bypass site.
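From what I remember, Kagi’s custom redirects are one rule per line in a regex|replacement form, so a rule in this spirit could bounce a paywalled domain to an archive (illustrative only: the archive target and the exact pipe/capture-group syntax are my assumptions, so check Kagi’s Redirects settings for the real format):

^https://www\.nytimes\.com/(.*)|https://archive.ph/newest/https://www.nytimes.com/$1

Any result that matches the pattern then opens at the redirect target instead of the paywalled page.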
I’m concerned that adopting Kagi is just taking your data back from multiple greedy corporations and handing it to one corporation instead, and also giving them a direct link to who you are via your payment method.
Their privacy policy is rock solid, and there’s no business incentive for them to exploit that data, at least at the moment.
Wish they would at least allow payments in crypto.
Understandable. Unfortunately, this is the world we live in.
You can kinda do it with a Google Custom Search Engine (CSE), which is basically a thin wrapper around Google. In a regular Google search you can use syntax like -site:ignorethisdomain.com to exclude specific domains (I do this with Pinterest whenever I’m searching for images, for example). But manually typing in a large list of blacklisted domains would be tedious, so instead you can set up a CSE with every domain you want to ignore and then just use its special URL as your search engine.
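For the manual version, the exclusions just get chained onto the query, e.g.:

mid-century chairs -site:pinterest.com -site:pinterest.co.uk

With a CSE you enter that exclusion list once in the control panel, and the public search page it gives you (a cse.google.com URL carrying your engine’s cx ID; the exact URL is whatever the control panel shows, so treat that detail as approximate) applies it to every query.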
For cookies you just need to enable one of the ‘Cookie notices’ filter lists in uBO, and for paywalls you can add the
https://gitlab.com/magnolia1234/bypass-paywalls-clean-filters/-/raw/main/bpc-paywall-filter.txt
filter list.

Maybe not exactly what you’re looking for, but Marginalia focuses on non-commercial and text-based content.
I like it, though I don’t use it as a daily driver.
‘This search engine isn’t particularly well equipped to answering queries posed like questions, instead try to imagine some text that might appear in the website you are looking for, and search for that.’ That’s really old school, in a good way.
There is also https://stract.com/. Probably not a daily driver either, but it shows some Lemmy results, they have their own crawler, and they’re open source.
Yeah, agreed, Marginalia’s more suited to discovering small-web content.
Another thing that’d work better as a daily driver, but requires manual curation, is filtering specific domains out of your searches. Brave supports that with its Goggles feature, Mojeek calls it Focus, and AFAIK Kagi has something similar (see the rough Goggle sketch below).
I don’t know any search engine that’s able to fully exclude paywalled content, though.
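To give a sense of the manual curation involved, a minimal Brave Goggle that just discards a couple of hand-picked domains could look roughly like this (the metadata lines and the $discard,site= rule form are from memory of Brave’s Goggles spec, so treat the details as assumptions):

! name: Skip these domains
! description: Drop a few hand-picked domains from results
! public: false
! author: you

$discard,site=pinterest.com
$discard,site=quora.com

The file gets hosted at a public URL (e.g. GitHub/GitLab) and loaded into Brave Search; Mojeek’s Focus and Kagi’s domain blocking are set up in their settings pages instead, no file needed.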
Stract gives you ‘optics’, which include things like ‘copycats removal’. https://stract.com/
Excluding paywalled content would probably have to be self-curated, or rely on an existing list, though.
I think what you want is about:blank. It contains a list of all the websites without cookie, auth, and paywalls.