• 2 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Oh, I could easily be wrong about Forgejo having integrated CI/CD already. It’s the only tool I mentioned above that I have never used before. I’m not a good source on this one.

    But I have used both Flux and Argo quite a lot. I’ll admit that my Flux implementation may have been bad, but it was a bad experience for everyone using it with me. It was a memory hog and often crashed. Very few people understood how to use it correctly. When there were errors with e.g. a Helm template, you just had to go digging through the logs for issues. It moved git tags around, so you didn’t get a history of what Flux was doing. I could probably remember more issues if I tried.

    But none of that was a problem with Argo. We just started using it successfully on day 1. Plus its UI is fantastic and a huge advantage: it’s easy to navigate, spot issues, troubleshoot, etc. It also exposes users to resources they unknowingly create, because Argo displays owned resources. This part really helped people understand what was going on in k8s. Oh, and Argo is very extensible. Maybe Flux is too, but I haven’t tried.


  • They’re both good and quite similar on the surface. But I find that larger, more complicated uses tend to get messy with GitLab because of the heavy use of bash. However, Actions are (always?) written in TypeScript. If your automation needs a lot of logic to handle varying uses, then it’s nice to avoid bash and work in a more expressive language (see the sketch below).

    In other words, I’ve seen a few monstrosities that large companies build into gitlab and yikes!
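    For contrast, here’s a minimal sketch of a custom action in TypeScript. The @actions/core package and its functions are real; the input names and deploy logic are hypothetical:

    ```typescript
    // main.ts — a tiny custom action (input names are made up for illustration)
    import * as core from "@actions/core";

    async function run(): Promise<void> {
      try {
        // Typed, declared inputs instead of parsing env vars in bash
        const environment = core.getInput("environment", { required: true });
        const retries = Number(core.getInput("retries") || "3");

        if (!["dev", "staging", "prod"].includes(environment)) {
          throw new Error(`Unknown environment: ${environment}`);
        }

        core.info(`Deploying to ${environment} with up to ${retries} retries`);
        core.setOutput("deployed-to", environment);
      } catch (err) {
        // Failures become structured annotations instead of grepping bash exit codes
        core.setFailed(err instanceof Error ? err.message : String(err));
      }
    }

    run();
    ```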


  • That basic idea is roughly how compression works in general. Think zip, tar.gz, etc. Identify highly used byte sequences and build a “map” of where each sequence is used. These methods work great on simple types of data like text files, where there’s a lot of repetition. Photos have a lot more randomness and tend not to compress as well. At least not so simply.

    You could apply the same methods to multiple image files but I think you’ll run into the same challenge. They won’t compress very well. So you’d have to come up with a more nuanced strategy. It’s a fascinating idea that’s worth exploring. But you’re definitely in the realm of advanced algorithms, file formats, and storage devices.
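    To see that difference concretely, here’s a quick Node sketch using the built-in zlib (random bytes stand in for photo-like data; exact sizes will vary):

    ```typescript
    // Repetitive text deflates dramatically; high-entropy bytes barely shrink.
    import { deflateSync } from "node:zlib";
    import { randomBytes } from "node:crypto";

    const text = Buffer.from("the quick brown fox jumps over the lazy dog\n".repeat(1000));
    const noise = randomBytes(text.length);

    console.log(`text:  ${text.length} -> ${deflateSync(text).length} bytes`);
    console.log(`noise: ${noise.length} -> ${deflateSync(noise).length} bytes`);
    ```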

    That’s apparently my long response for “the other responses are right”


  • I looked into Proton Pass ~9 months ago and it just wasn’t ready. It needed a few more features before I was willing to move from Bitwarden. However, I gave it another look 2 weeks ago and Proton Pass satisfied all of my needs. Since I was already paying for Proton Unlimited, it just made sense for me to change. And it’s been a perfectly good experience so far! A couple of thoughts:

    While I do run Linux, I don’t need a native app for it. I exclusively use a browser extension on my desktop. It does everything that I need. I do use a native app on iOS and it works quite well.

    The 2FA support in Proton Pass is pretty good now, which I needed. It can also store other types of data like credit cards, identities, etc. But it’s not quite as good as Bitwarden at identifying fields for autofill. Pretty close though, so I’m not bothered by this.

    My biggest “complaint” is protecting my Proton account itself. I use it for email, storage, etc., so I can’t accept a weak password for it. But I also need reliable access to the other passwords stored in Proton Pass. For this, I want something long yet memorable and easy enough to type out. These two requirements are roughly at odds with each other.
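    One pattern that fits those constraints is a diceware-style passphrase: long, memorable, typable. A toy sketch (the word list here is a placeholder; a real list like EFF’s has thousands of words):

    ```typescript
    // Toy diceware-style generator. Use a real, large word list for actual entropy.
    import { randomInt } from "node:crypto";

    const words = ["orbit", "velvet", "cactus", "lantern", "maple", "quartz", "ember", "tundra"];

    function passphrase(count = 6): string {
      // randomInt is cryptographically secure, unlike Math.random
      return Array.from({ length: count }, () => words[randomInt(words.length)]).join("-");
    }

    console.log(passphrase()); // e.g. "maple-ember-orbit-quartz-velvet-tundra"
    ```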

    My solution for now is to keep my Bitwarden account and use it as a source to recover my Proton account when necessary. I think it’s a good pattern, actually, and I may expand on it in the future with things like syncing data between the two tools.


  • I largely agree. The title and opening words are misleading. The rest of the article is much clearer that they are defending their position of using VPN software that relies on storage and securing it with full disk encryption.

    Also, full disk encryption doesn’t solve everything. If an attacker has access to the running server, the disk is effectively unencrypted: the volume is mounted and transparently decrypted. At that point, reading files is much easier than reading RAM from a running process.


  • Never do this.

    Git is all about tracking changes over time, which is meaningless for binary files. They bloat your repo, slowing down operations. Depending on the repo, they are likely to change from CI with every commit; that also means every commit turns into 2 commits, btw. They can ruin diffs. I could go on for a long time here.

    There are basically 0 upsides. Use an artifact repository instead!


  • A complicated plugin ecosystem (e.g. Jenkins) makes for a terrible user experience. It’s annoying to maintain a bunch of config files, and managing plugin dependencies can be a complete nightmare. These problems also complicate your CI/CD.

    So I’ll offer a slightly different answer. I prefer a single file instead of splitting up the config, and I’ll use OpenTelemetry as an excellent example of why: the plugins are compiled right into the app binary. This offers a ton of advantages, including a great reason to merge all of your app config into a single file.
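    To illustrate the general pattern (this is not the actual OpenTelemetry Collector code, just a sketch of the compile-time plugin idea):

    ```typescript
    // Plugins are plain values linked into the binary at build time...
    type Exporter = (batch: string[]) => void;

    const exporters: Record<string, Exporter> = {
      stdout: (batch) => batch.forEach((line) => console.log(line)),
      noop: () => {},
    };

    // ...so one config file can reference any of them by name,
    // and unknown names fail fast at startup instead of at plugin-install time.
    interface Config { exporter: string }

    function buildPipeline(config: Config): Exporter {
      const exporter = exporters[config.exporter];
      if (!exporter) throw new Error(`Unknown exporter: ${config.exporter}`);
      return exporter;
    }

    buildPipeline({ exporter: "stdout" })(["hello", "world"]);
    ```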

    This really only works well if you have a good app though.