• pinball_wizard@lemmy.zip
      2 months ago

      Exactly. I’ve been sabotaging the AI with shitty code output since long before LLMs existed. That’s how I play 4D chess. (This is just meant to get a laugh. Some of my code is even quite nice, actually.)

  • ViatorOmnium@piefed.social
    2 months ago

    Yes, and so can most experienced developers. In fact, unmaintainable human-written code is more often caused by organisational dysfunction than by lack of individual competence.

    • Samskara@sh.itjust.works
      2 months ago

      In my experience there’s usually a confluence of individual and institutional failures.

      It usually goes like this.

      1. hotshot developer is hired at a company with crappy software
      2. hotshot dev pitches a complete rewrite that will solve all issues
      3. complete rewrite is rejected
      4. hotshot dev shoehorns a new architecture and trendy dependencies into the old codebase
      5. hotshot dev leaves
      6. software is more complex, inconsistent, and still crappy
      • ViatorOmnium@piefed.social
        2 months ago

        That’s one of the failure modes; good orgs have design and review processes to stop it.

        There are other classics like arbitrary deadlines, conflicting and shifting requirements and product direction, perverse incentives, etc.

        I would even say the AI craze is a result of the latter.

        • PapstJL4U@lemmy.world
          2 months ago

          Yeah, some code develops organically (i.e. under shifting demands). Devs know the code gets worse, but because of time or money constraints they don’t have the option to review and redo it.

    • pinball_wizard@lemmy.zip
      2 months ago

      Yes. But the important thing is that now dysfunctional organizations have access to tools to write unmaintainable code really fast.

      • kindnesskills@literature.cafe
        1 month ago

        I want to write gnocchi code, where each little nugget is good on its own and they still blend together perfectly in the sauce. But I still end up with mashed-potato code if I don’t watch myself.

      • skuzz@discuss.tchncs.de
        2 months ago

        A tale as old as time. The US nuclear missile codes were 000000, but it didn’t matter. The chain of command was purpose-built, ironically, so that the front-line soldier in a cold war scenario had to make the last decision to delete all life on the planet. Chain of command doesn’t matter at that point. You are choosing to kill everyone you know on an order from who knows who. The ultimate checksum.

        You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.

        • declanruediger@aussie.zone
          2 months ago

          I don’t understand your point about the soldier on the front line, but I’m interested. If you get a chance, can you elaborate please?

      • Echo Dot@feddit.uk
        2 months ago

        If you’re a complete novice then obviously not, but I think anyone reasonably proficient in a language would be able to identify optimisations that an AI just doesn’t seem to perceive, largely because humans are better at context.

        It’s like that question about whether it’s worth driving your car to the car wash if the car wash is only 10 metres away. AIs have no experience of the real world, so they don’t inherently understand that you can’t wash a car if it’s not at the car wash. A human would instantly know that’s a stupid statement without even thinking about it, whereas unless you instruct an AI to actually think deeply about something, it just gives you the first answer it comes up with.

        • rumba@lemmy.zip
          2 months ago

          That’s why they’re pushing for the datacenters: they want to make every query that deep. The tech is here, but the ability to sustain it isn’t. They build the data centers, kick the developers out, depress the education market for it, and then raise the prices.

          Companies will be paying the AI companies 60k per year per seat in a decade.

        • yabbadabaddon@lemmy.zip
          2 months ago

          I agree with you. But the tool will output basic code that mostly does what was asked in seconds instead of tens of minutes, if not hours. So now we could argue whether the optimisations you make are worth the added cost of writing the code yourself, or whether it’s better to have the tool generate the code and then optimise it.

  • neukenindekeuken@sh.itjust.works
    2 months ago

    I mean, yes, absolutely I can. So can my peers. I’ve been doing this for a long, long time, as have my peers.

    The code we produce is many times more readable and maintainable than anything an LLM can produce today.

    That doesn’t mean LLMs are useless, and it also doesn’t mean that we’re irreplaceable. It just means this argument isn’t very effective.

    If you’re comparing an LLM to a Junior developer? Then absolutely. Both produce about the same level of maintainable code.

    But for Senior/Principal level engineers? I mean this without any humble bragging at all: but we run circles around LLMs from the optimization and maintainability standpoint, and it’s not even close.

    This may change in the future, but today it is true (and I use all the latest Claude Code models).

    • SparroHawc@lemmy.zip
      2 months ago

      The biggest problem with using AI instead of junior developers is that junior developers eventually become senior developers. LLMs … don’t.

    • sexual_tomato@lemmy.dbzer0.com
      2 months ago

      With LLMs I get work done about 3-5x faster, with the same level of maintainability and readability I’d have gotten writing it myself. Where LLMs fail is architecting stuff out: they can’t see the blind alleys their architecture decisions lead them down. They also can’t remember to activate Python virtual environments, like, ever.
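
      For anyone unfamiliar, the forgotten step is tiny. A minimal sketch of the usual one-off session (project path and prompt output are illustrative, not from the thread):

      ```shell
      # Create the virtual environment once per project, then activate it
      # in the current shell before running anything -- the step agents skip.
      python3 -m venv .venv
      . .venv/bin/activate
      # Inside a venv, sys.prefix diverges from sys.base_prefix:
      python -c 'import sys; print(sys.prefix != sys.base_prefix)'   # prints True
      ```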

      • neukenindekeuken@sh.itjust.works
        2 months ago

        I think it depends on what you’re writing code for. For greenfield/new features that don’t touch legacy code or systems too much? Sure, I agree with that assessment.

        Unfortunately that’s a small fraction of the kind of work I’m required to do, as most software dev work in most places is adding shit to bloated and poorly maintained legacy systems.

        Working in those environments, LLMs are a lot less effective. Maybe that’ll change some day. But today, they don’t know how to reuse code, refactor methods across classes/design patterns, etc. At least, not very well. Not without causing side effects.
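
        A toy sketch of the kind of cross-class refactor meant here (all class names hypothetical, not from any real codebase): hoisting logic duplicated across two classes into one shared base, so it changes in exactly one place.

        ```python
        # Before: two classes duplicate the same total-with-discount logic.
        class Invoice:
            def total(self, items):
                # items is a list of (price, quantity) pairs
                return sum(p * q for p, q in items) * 0.9  # discount hard-coded here...

        class Quote:
            def total(self, items):
                return sum(p * q for p, q in items) * 0.9  # ...and again here

        # After: the shared logic lives in one base class that both reuse.
        class Priced:
            DISCOUNT = 0.9

            def total(self, items):
                return sum(p * q for p, q in items) * self.DISCOUNT

        class InvoiceRefactored(Priced):
            pass

        class QuoteRefactored(Priced):
            pass
        ```

        It’s a trivial change for a human, but it requires seeing the duplication across files, which is exactly the whole-codebase context LLMs tend to miss.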