I don’t entirely subscribe to the first paragraph – I’ve never worked at a place so dear to me that it spurred me to spend time thinking about its architecture (beyond the usual rants). Other than that, spot on.

  • Mikina@programming.dev · 7 months ago

    I had the same issue with the gamedev industry, but thankfully I’ve realized very quickly that that’s how work works, and you usually have a choice: either earn a good living as a code monkey, or find a job at a small company that has passion but won’t be able to afford to pay you well, or do it in your free time as a hobby. Capitalism and passion don’t work together.

    So I went to work part-time in cybersecurity, where the money is enough to reasonably sustain me, and use the free time to work on games. Recently, I’ve picked up an amazing second part-time job at a small local indie studio that is exactly the kind of environment I was looking for, with passion behind their projects – but they simply can’t afford to pay a competitive wage. I’m not there for the money, though, so I don’t mind and am happy to help them. Since there are no investors whose pockets I’m filling – the company is owned by a bunch of my friends – I have no issue with being underpaid.

    But it’s important to realize this as soon as possible, before trying to make a living from something you’re passionate about burns you out. A job has one purpose: to earn you a living. Companies will exploit every single penny they can out of you, so fuck them – don’t give them anything more than the bare minimum, and keep your energy for your own projects.

    And be careful with trying to earn a living on your own – because whatever you do, no matter how passionate you are, if it’s your only income and your life depends on it, you will eventually have to make compromises to get by. It’s better to keep money separate from whatever you like doing, and just keep your passion pure.

    EDIT: Oh, I forgot to mention one important thing – I’m fortunate to not have children, I share living costs with a partner, and I live in a city with good public transport (so no need for a car) and free healthcare. I suppose that makes it a lot easier to get by with just a part-time job.

    • heeplr@feddit.de · 7 months ago

      either earn a good living being a code monkey, or find a job in a small company that has passion

      crazy idea: let’s publicly fund FOSS projects so devs working with passion on stuff they like can actually make a good living, and enable sustainable non-profits to hire expertise, marketing and all the stuff a company needs

      the result would be actually good software and happy devs

  • roanutil_@programming.dev · 7 months ago

    I am genuinely so thankful for my job. Small startup where the founder is funding the whole thing himself and actually works as a dev when he’s able.

    The amount of autonomy I’ve had since day 1 is wonderful. I put in a lot of time because I enjoy the work. My pay is a little low but not bad and usually increases by a lot each year. We’re 100% remote.

    I just can’t imagine willingly leaving after reading the nonsense that most of you are dealing with. I got so lucky and you can pry my current job from my cold, dead hands.

    • Daxtron2@startrek.website · 7 months ago

      Sounds pretty nice if it’s something you enjoy! I miss being able to do 12 hr days on a project I was actually passionate about lol

  • Mikina@programming.dev · 7 months ago

    I’m starting to think that “good code” is simply a myth. They drilled a lot of “best practices” into me during my master’s, yet no matter how much you try, you will eventually end up with something overengineered, or with a new feature or a bug that’s really difficult to squeeze into whatever architecture you’ve chosen.

    But, OK, that doesn’t prove anything – maybe I’m just a bad programmer.

    What made me sceptical, however, isn’t that I never managed to do it right in any of my projects, but the last two years of experience porting games – some of them well-known and fairly large – to consoles.

    I’ve already seen several codebases, each with a different take on how to structure the core game architecture, and each one inevitably had some horrible issues that turned up during bugfixing. Making changes was hard: either it was overengineered and almost impenetrable, or we had to resort to ugly hacks, since there simply wasn’t a way to do it properly without rewriting a huge chunk.

    Right now, my whole programming knowledge about game architecture is a list of “this doesn’t work in the long run”, and if I were to start a new project, I’d be really at a loss about what the fuck I should choose. It’s a hopeless battle; every approach I’ve seen or tried still ran into problems.

    And I think this may be the author’s problem – it’s really easy to see that something doesn’t work. “I’d have done it differently” or “there has to be a better way” is something you notice very quickly. But I’m certain that whatever he proposed would just lead to a different set of problems. And I suspect that’s what may be happening with his leads not letting him stick his nose into stuff: they have probably seen it before, and it rarely helps.

    • andyburke@fedia.io · 7 months ago

      We have an almost total lack of real discipline and responsibility in software engineering.

      “Good enough” is the current gold standard, so you get what we have.

      If we were more serious there wouldn’t be 100 different languages to choose from – just a handful, chosen based on requirements, and those would become truly time-worn, tested and reliable.

      Instead, we have no apprenticeships, no unions, and very little institutional knowledge older than a few years. We are only pretending to be an actual discipline.

      • kbin_space_program@kbin.run · 7 months ago

        Best project I’ve worked on: we implemented a strict code standard, based on the standard used by the firm that contracted my team to do the work.

        Worked perfectly. Beautiful, maintainable code. Still used today without major reworks – it doesn’t need them. The front end got several major updates, but the back end uses what is now called microservice architecture, and we implemented it long before the phrase was common.

        Got the opportunity to go back to it this year. The devs from the second firm not only ignored all of the documentation we put out, they ignored their own coding standards document.

    • magic_lobster_party@kbin.run · 7 months ago

      There’s bad code and then there’s worse code. “Best practices” might help you to avoid writing worse code.

      Good code might appear occasionally. In the rare event when it’s also useful, people will start to have opinions about what it should do. Suddenly requirements change and now it’s bad code.

      • nous@programming.dev · 7 months ago

        “Best practices” might help you to avoid writing worse code.

        TBH I’m not sure about this. I have seen many “best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in the hope of making their code better, without actually looking at whether it is making their code better.

        For instance, I see this a lot with DRY. While the rule is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I’ve lost count of the number of times I have added duplication back into code to remove a layer of abstraction that wasn’t working, only to maybe reapply it in a different way, often keeping some duplication.
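
        A toy sketch of what that can look like in practice (all names hypothetical): a single “flexible” helper that has accreted a flag for every caller, versus the duplication added back as two small functions that can evolve independently.

```python
# Over-applied DRY (hypothetical example): one "flexible" helper
# serves every caller and ends up driven by option flags.
def format_name(user, *, for_email=False, reverse=False, upper=False):
    name = (f"{user['last']}, {user['first']}" if reverse
            else f"{user['first']} {user['last']}")
    if upper:
        name = name.upper()
    return f"<{name}>" if for_email else name

# Duplication added back: each function is trivial on its own and
# changes only when its own requirements change.
def display_name(user):
    return f"{user['first']} {user['last']}"

def email_display_name(user):
    return f"<{user['first']} {user['last']}>"

user = {"first": "Ada", "last": "Lovelace"}
print(display_name(user))        # Ada Lovelace
print(email_display_name(user))  # <Ada Lovelace>
```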

        Suddenly requirements change and now it’s bad code.

        This only leads to bad code when people get too afraid to refactor things in light of the new requirements – which sadly happens far too often. People seem to like to keep what was already there and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements – quite often in code I had written before and others have added to over time.

        • Dark Arc · 7 months ago

          This only leads to bad code when people get too afraid to refactor things in light of the new requirements – which sadly happens far too often. People seem to like to keep what was already there and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements – quite often in code I had written before and others have added to over time.

          Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle). If you have everything broken down into pieces where the description of the function/class is something like “given X, this function does Y” (and unrelated things thus aren’t unnecessarily coupled), it makes reorganizing the higher-level logic to fit the current requirements a lot easier.
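
          A minimal sketch of that idea (names hypothetical): each function’s docstring is a single “given X, this does Y” sentence, and the top-level function is just a composition of them, so the high-level logic can be reshuffled later without touching the pieces.

```python
def parse_price(text: str) -> float:
    """Given a string like '$1,299.99', return the numeric price."""
    return float(text.replace("$", "").replace(",", ""))

def apply_discount(price: float, percent: float) -> float:
    """Given a price and a discount percentage, return the discounted price."""
    return price * (1 - percent / 100)

def checkout_total(price_texts, discount_percent):
    """Given raw price strings and a discount, return the order total."""
    return sum(apply_discount(parse_price(t), discount_percent)
               for t in price_texts)

print(checkout_total(["$10.00", "$5.00"], 10))  # 13.5
```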

          For instance, I see this a lot with DRY. While the rule is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I’ve lost count of the number of times I have added duplication back into code to remove a layer of abstraction that wasn’t working, only to maybe reapply it in a different way, often keeping some duplication.

          Preach. DRY is IMO the most abused/misunderstood best practice, particularly by newer programmers. DRY is not about compressing your code or minimizing line count. It’s about … avoiding things like writing the exact same general algorithm (e.g., a sort) inline in a dozen places. People are really good at finding patterns and “overfitting” them, making up abstractions that make no sense.

          • nous@programming.dev · 7 months ago

            Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle).

            Even that gets overused and abused. My big problem with it is: what is a single responsibility? It is poorly defined, and leads people to think that the smallest possible thing is one responsibility. But when people think like that, they create thousands of one-to-three-line functions, which just ends up losing track of what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function.

            There is a nice middle ground where SRP makes sense, but like all patterns, nobody ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don’t think about what it is trying to achieve and whether applying it still fits that goal.

            Basically, everything in moderation and never lean on a single thing.

            • Dark Arc · 7 months ago

              Hmmm… that’s true. My rough litmus test is: “can you explain what this thing does in fairly precise language without having to add a bunch of qualifiers for different cases?”

              If you meet that bar, the function is probably fine and doesn’t need to be broken up further.

              That said, I don’t particularly care how many functions I have to jump through or what their line count is because I can verify “did the function do the thing it says it’s supposed to do?” after it’s called in a debugger. If it did, then I know my bug isn’t there. If it didn’t, I know where to look.

              Just like with commits, I’d rather have more small commits to feed through git bisect than a few larger ones, because it makes identifying where/when a contract/test case/invariant was violated much more straightforward.

        • magic_lobster_party@kbin.run · 7 months ago

          For instance I see this a lot in DRY code.

          DRY is one of the most misunderstood practices. If you read The Pragmatic Programmer (where DRY was coined), they make it clear that DRY doesn’t mean “avoid all repetition at all cost”. Just because two pieces of code look identical doesn’t necessarily mean they are the same. If they can grow independently of each other, then they’re not repetition according to DRY and should be left alone.
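
          A contrived illustration of that distinction (the policies are hypothetical): the two functions below currently return the same number, but they answer different business questions and will change for different reasons, so per the book they are not a DRY violation.

```python
# These look like "repetition", but merging them would couple a
# billing rule to a compliance rule that only coincidentally
# matches today.
def invoice_due_days(customer):
    # Billing policy: invoices are due 30 days after issue.
    return 30

def data_retention_days(customer):
    # Compliance policy: records are currently kept for 30 days.
    return 30
```

If a regulation later mandates 90-day retention, only the second function changes; had the two been merged in the name of DRY, that edit would silently alter the invoice terms too.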

          • nous@programming.dev · 7 months ago

            Yup, and that is because people only ever learn DRY by its name – never what it originally meant, when to use it and, more importantly, when not to use it. So loads of people apply it religiously and overuse it. This is true of all the popular, catchily named methodologies and principles.

    • Kache@lemm.ee · 7 months ago

      Good code is code that’s easy to delete.

      I’m not a game dev, but it’s got a reputation for being more of a software engineering shit show than other software industries, which your story only reinforces.

    • Dark Arc · 7 months ago

      I’ll contest that there is such a thing as good code. I don’t think experienced devs always do the best job of passing on what works and what doesn’t, though. Universities certainly could do more software engineering/architecture.

      My personal take is that SRP (the single responsibility principle) is the #1 thing to keep in mind. In my experience DRY (don’t repeat yourself) often takes precedence over SRP – IMO because DRY is easy to (mis-)understand – and that ends up making some major messes when good/reasonable code is rewritten into some ultra-compact (typically) inheritance- or template-based mess that’s “fewer lines of code, so better.”

      I’ve never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance. I’ve similarly never regretted breaking down a function into smaller functions (even if it introduces more lines of code). I’ve also never regretted generalizing code that’s actually general (e.g., a sum N elements function is always a sum N elements function).

      The most important thing with all of these best practices though is “apply it if it makes sense.” If you’re writing some code and you’ve got a good reason to have a function that does multiple things … just write the function, don’t bend over backwards doing something really weird to “technically” abide by the best practice.
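
      For what that composition trade-off can look like, here’s a minimal sketch (names hypothetical): the composed version costs a few lines of forwarding boilerplate, but the storage and the logging policy can now vary independently instead of being welded into a class hierarchy.

```python
# Inheritance version: the logging behavior is baked into the
# hierarchy, so every variation needs another subclass.
class LoggingDict(dict):
    def __setitem__(self, key, value):
        print(f"set {key}")
        super().__setitem__(key, value)

# Composition version: a little more boilerplate, but the store and
# the logging policy are independent, swappable parts.
class AuditedStore:
    def __init__(self, store, log):
        self._store = store   # any mapping-like object
        self._log = log       # any callable taking a message

    def set(self, key, value):
        self._log(f"set {key}")
        self._store[key] = value

    def get(self, key):
        return self._store[key]

events = []
s = AuditedStore({}, events.append)
s.set("hp", 100)
print(s.get("hp"), events)  # 100 ['set hp']
```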

      • magic_lobster_party@kbin.run · 7 months ago

        I’ve never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance.

        I second this. It doesn’t necessarily eliminate bad code, but it certainly makes it more manageable.

        Every time inheritance is used, it will almost certainly cause pain points that are hard to revert. It leads to all these overly abstracted class hierarchies that give OOP a bad rep. And it can easily be avoided in almost all cases.

        Just use interfaces if you really need polymorphism. Often you don’t even need polymorphism at all.
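
        In Python terms, that interface-based polymorphism can be sketched with `typing.Protocol` – structural typing with no shared base class (names hypothetical):

```python
from typing import Protocol

class Renderer(Protocol):
    def render(self, text: str) -> str: ...

# Neither class inherits from anything; they just happen to have
# the right shape, which is all the Renderer protocol requires.
class PlainRenderer:
    def render(self, text: str) -> str:
        return text

class HtmlRenderer:
    def render(self, text: str) -> str:
        return f"<p>{text}</p>"

def show(r: Renderer, text: str) -> str:
    return r.render(text)

print(show(PlainRenderer(), "hi"))  # hi
print(show(HtmlRenderer(), "hi"))   # <p>hi</p>
```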

  • tsonfeir@lemm.ee · 7 months ago

    I believe refactoring never ends. Just because it works doesn’t mean it can’t work better. The better it works, the easier it will be to add features… which can be refactored too.

    • monkeyman512@lemmy.world · 7 months ago

      I think there is an author who said, “Books aren’t completed, they are abandoned.” Code can feel the same.

    • blackbirdbiryani@lemmy.world · 7 months ago

      Bosses will never understand this and discourage refactoring until months later nothing works and everything has to be rewritten…

      • nous@programming.dev · 7 months ago

        Refactoring should not be a separate task that a boss can deny. If you need to do feature X and feature X benefits from reworking some abstraction a bit, then you rework that abstraction before starting on feature X. And then maybe you refactor a bit more after feature X, now that you know what it looks like. None of that should take substantially longer, and it saves vast amounts of time later on compared with never folding refactoring into the feature work.

        You can occasionally squeeze in a feature without reworking things first if time is tight, but you will run into problems if you do this too often and start thinking of refactoring as a separate task from feature work.

      • MajorHavoc@programming.dev · 7 months ago

        So true! That’s why I never use ‘the R word’.

        Instead, I use synonyms:

        • performance tuning
        • proactive maintenance
        • fixed a subtle bug
        • fixed a failing test
        • corrected a CI/CD failure

        CI/CD failure is my favorite, because technically our CI/CD enforces a code review, so technically “we don’t like how this is written” counts as a CI/CD failure.

    • Dark Arc · 7 months ago

      I agree; I prefer a “hammer and chisel” strategy: I tend to leave things a little less precisely organized/factored earlier in the project, and then make some incremental passes to clean things up as it becomes clearer that what I’ve done handles all the cases it needs to handle.

      It’s in the same vein as “don’t prematurely optimize.”

      Minimizing the responsibilities of individual functions/classes/components is the only thing I take a pretty hard line on – making sure that I can reason about the code later and objectively say simple sentences like “given X, this does Y.” I want all the complex pieces isolated into their own smaller pieces that can be reasoned about.

      In all of the code bases where I’ve gone “oh my god, why?”, the typical reason has been that that’s not true: when I’m in a function, I don’t know what it does, because it does a lot of things depending on different state flags.

  • TCB13@lemmy.world · 7 months ago

    Yes, this industry is pretty much a race to the bottom (when it comes to wages), with methodologies and micromanagement added at every corner to make people more “productive”. It’s just sad to see that most people don’t realize they’re in that race, just because IT still pays more than average and/or doesn’t require as many certificates as other fields to get into. The downside of the lack of professionalization is that people abuse developers every day, and benefits like the freedom to negotiate your higher-than-average salary are quickly vanishing in the face of ever more complex software and big consulting companies taking over the internal development teams and departments companies used to have.

    To make things worse, cloud/SaaS providers keep profiting from this mess by reconfiguring the entire development industry in a way that favors the sale of their services and takes away all the knowledge developers used to need for developing and deploying solutions. Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so that everyone becomes hostage to their platforms. We now have a generation of developers who don’t understand the basics of their tech stack, about networking, about DNS, or about how to deploy a simple thing onto a server that doesn’t use some orchestration with service X or isn’t a third-party cloud deploy-from-GitHub service.

    Consulting companies that make software for others also benefit from this “reconfiguration”, as they are able to hire more junior or less competent developers and transfer the complexity to those cloud services. The “experts” who work at consulting companies are part of this, as they usually don’t even know how to do things without the proprietary solutions. Let me give you an example: I once had to work with E&Y, one of those big consulting companies, and I noticed some awkward things while having conversations with both low-level employees and partners/middle management – most of the time, they weren’t aware that alternatives exist. A manager of a digital transformation and cloud solutions team who started his career at E&Y wasn’t aware that there are open-source alternatives to Google Workspace and Microsoft 365 for e-mail. I probed a TON around that, and the guy, a software engineer with a university degree, didn’t even know what Postfix was, or the history of email.

    All those new technologies keep pushing this “develop and deploy quickly” mindset and commoditizing development – a feedback loop that never ends. I say commoditizing because, if you look at it, those technologies only make things easier for entry-level developers, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” who can configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.

  • state_electrician@discuss.tchncs.de · 7 months ago

    20+ years of writing code have taught me a few things. The first and most important is that every code base, given enough time, will end up difficult to maintain and full of things you hate – and you might have written some of those things yourself. And I think that’s fine. Striving for perfect, clean code is impossible, because the understanding of what that means changes over time. Code needs to do its job and be reasonably easy to maintain. That’s what I strive for these days.

    If that is too boring for you, you’ll need to find a new job or write open source software. A company that decides to pay you isn’t usually looking for your ideas about whichever tool or paradigm you get excited about. They want you to make them more money than they pay you. You can bemoan that, but it will be as effective as complaining that water is wet.

    I actually enjoy solving problems, and luckily as tech lead I still get to do that, because they pass the really hard problems on to me. That’s enough for me to enjoy my job. Of course the money helps too.

  • jkrtn@lemmy.ml · 7 months ago

    I am more alienated by the processes surrounding everything. If I had to do idiotic agile sprint bullshit and also write mind-numbingly boring code I would lose my mind. Luckily I have gotten away with making improvements in architecture so it is at least an interesting problem on occasion.

    It sounds like this author would feel better working on open source, a passion project, or a deep academic paper. I think I’d prefer that also. I wish it were easier to live while doing that.

  • pelya@lemmy.world · 7 months ago

    I believe the author got the wrong job position. If your job title is something like “software developer”, yeah, you are measured by the amount of lines of code. You should aim for a senior role such as “system architect” or “technical lead”; then you get some kind of guidelines from the sales side of the business, and your job is to turn them into requirements and produce the final product – and you choose the tech stack and other details that are inconsequential for sales but will get the programmers flinging keyboards.

      • pelya@lemmy.world · 7 months ago

        That’s exactly the difference. The business needs to sell shit, so your management needs you to get the shit done at just good enough quality to sell it, because otherwise you’re burning their money in salary.

        Take any of your hobby projects, and ask yourself - ‘How do I sell this thing?’. You’ll arrive at all the same problems you are seeing in your company. Good managers will explain this and let developers make their own decisions and take part in business processes, bad managers will just dictate which buttons you need to press on your keyboard.

        Lines of code is a really ancient metric used by managers who are totally ignorant of technology; I was just putting it here for emphasis.

    • Clent@lemmy.world · 7 months ago

      I agree.

      One can’t claim to love programming while calling the act of writing code “being a code monkey”. Whatever they actually love about the process may not exist in the industry.

      I would suggest they explore alternative roles and perhaps alternative industries. They sound like they are new to the industry, so even landing a senior role is likely to lead to different disappointments.

      The best way to do something often isn’t the best way to implement something – that’s why this is a senior role. The author does not appear to understand this concept and will be horribly disappointed when their perfect architecture is ignored by the realities of development.

    • rebelsimile@sh.itjust.works · 7 months ago

      Yeah, and/or if he wants to delve deeper into the whys behind decision making and why we are making the software the way we are making it, he’d probably be better off in product or design.

    • Kissaki@programming.dev · 7 months ago

      This description is so foreign to me. I guess you’re talking about big [software] companies?

      Nobody in my company, a software development company, measures by lines of code. We bring value through the software we develop and the collaborations we do.