…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn’t work well; I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple and involved some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
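
(As a made-up illustration of the flavor of bug I kept getting, not my actual code: think something like degrees/radians confusion buried in an otherwise plausible-looking rotation.)

```python
import math

def rotate_z(point, angle_deg):
    """Rotate a 3D point about the Z axis."""
    # Subtle bug: math.cos/math.sin expect radians, but angle_deg is in degrees.
    # The correct version would first convert: a = math.radians(angle_deg)
    a = angle_deg
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)
```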

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • AlphaOmega@lemmy.world · 19 days ago

    This sounds on par for all the AI I have been dealing with. I find it works best if you give it a lot of rules, then treat it like a 12-year-old and expect wild mistakes for anything more complicated than a simple calculator. I work primarily with Gemini, having it build simple HTML/CSS, and it’s infuriating how many times I have told it to use &amp; instead of &.
    Now every time it does anything, it tells me how it included the correct ampersand. It can’t tell me why it screwed up like 5 times prior; it just makes up some BS and apologizes profusely.
    The more rules you give it, even if it ignores them sometimes, the better.
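
    (For anyone unfamiliar: a bare & in HTML is supposed to be written as the entity &amp;. As a quick illustration, Python’s standard html module shows the escaping it keeps forgetting:)

    ```python
    import html

    raw = "Fish & Chips"
    # html.escape turns the bare & into the entity form HTML expects
    print(html.escape(raw))  # -> Fish &amp; Chips
    ```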

  • silver@das-eck.haus · 18 days ago

    I think it’s pretty heavily dependent on what you’re trying to do. I’ve gotten a lot of push from higher-ups at my company to use Copilot wherever possible. So, I’ve spent a lot of time lately having Copilot + Opus write code for me. Most of what I’m doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it’s a pretty good experience.

    However, if I ask it to do something totally new, it does only okay, more like what you’ve experienced. It takes a lot of hand-holding, but it usually gets the job done as long as you’re very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.

    • OwOarchist@pawb.social · 19 days ago

      Oh, it will ‘find bugs’ alright. And then flood FreeBSD’s bug report system with bullshit bug reports that turn out to be nothing, but require expert human review to discern that.

  • Feyd@programming.dev · 19 days ago

    producing subtly broken junk

    The difference between you and people that say it’s amazing is that you are capable of discerning this reality.

    • JustEnoughDucks@feddit.nl · 18 days ago

      I wonder if it was even able to compile. I am a shitty hobby coder who just does it to make my embedded hardware projects function.

      I have yet to get compilable code out of any of the AI bots I have tried: Gemini, Mistral, and ChatGPT. I am not making an account, lol.

      I have gotten some compilable Python and VBA code for data analysis stuff at work, so I wonder if it is because embedded stuff uses specific SDKs that it can’t handle.

      Either way, I have given up on it for anything besides bouncing ideas off of, or debugging where electromagnetics issues could lie (though it has been completely wrong about that too; even when it uses the wrong concepts, it at least reminds me of concepts that I might have overlooked).

    • OwOarchist@pawb.social · 19 days ago

      What I don’t get, though, is how the vibe code bros can’t discern this reality.

      How can they sit there and not see that their vibe-coded app just doesn’t do what they wanted it to do? Eventually, you’ve got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn’t work?

      • Oisteink@lemmy.world · 19 days ago

        I do apps that work, I do patches that are production quality. Half the CS world does… I do full-stack AI debugging of ESP32 projects.

        It’s a powerful tool, you just need to learn its strong and weak points, just like any other tool you use.

        • Kissaki@programming.dev · 18 days ago

          Half the CS world does…

          What’s the basis for this claim? I’m doubtful, but don’t have wide data for this.

          • Oisteink@lemmy.world · 18 days ago

            A rough estimate from my personal connections only. There are some workplaces where AI is not possible, but everyone who has made an effort reports good code. You need to work with what it is: a word generator that sometimes gives correct results. Make it research rather than trust its training. Never let it do things on its own; require a plan and reasoning. Make it evaluate its own work/plan.

            Most issues I have stem from models being too eager. Restrain them and remove the “I can do this next…” behaviour.

            Context is king, so use proper MCP and documentation that is agent-facing. I use Serena since I can get LSP for YAML and markup, and I keep these docs like that.

      • tleb@lemmy.ca · 19 days ago

        Eventually, you’ve got to try actually running the app, right?

        At least at my company, no, they just start selling it.

      • Lumelore (She/her)@lemmy.blahaj.zone · 19 days ago

        Vibe code bros aren’t real programmers. They’re business people, not computer people. Even if they have a CS degree, they only got that because they think it’ll get them more money. They lack passion and they don’t care about understanding anything. They probably don’t even care about what they’re generating beyond its potential to be used in a grift.

        I graduated college not that long ago and my CS classes had quite a few former business majors. They switched because they thought it would be more lucrative, but since they only care about money they didn’t bother to actually learn the material, especially since they could just vibe code through everything.

        • b_n@sh.itjust.works · 18 days ago

          So much this.

          After working in tech companies for the last 10 years, I’ve noticed the difference between people that “generate code” and those that engineer code.

          My worry about the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those that use vibe tools) are not engineering as much code by volume as “the generators”.

          My hope is that this is one of those “short term gain, long term pain” things that might self correct in a couple of years 🤞.

          • sobchak@programming.dev · 17 days ago

            It’s insane that companies are going back to metrics like LOC (or tokens generated), when the industry figured out decades ago that these are horrible, counterproductive metrics.

            “The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.” - No Silver Bullet (1986)

      • favoredponcho@lemmy.zip · 17 days ago

        You do try running the app, and then you see what is broken, and then you have Claude fix it. The process is still iterative, just like regular coding. I haven’t met a software engineer who wrote a perfect app on the first try; it’s always broken, even in subtle ways. Why does everyone think vibe coding needs to be perfect on the first shot?

      • Feyd@programming.dev · 19 days ago

        They’re the same people who copied code from Stack Overflow and had to be told how to actually fix it in every PR. The difference is that the C-suite types are backing them this time.

  • zerofk@lemmy.zip · 19 days ago

    “Almost but not quite” is exactly my experience with Claude.

    The only time I’ve had real success is telling it to do a simple API change that touches a dozen files. It took a while and I’m not sure it was faster than doing it manually, but at least it was less boring.

    Possibly important context: I only started really using it a few weeks ago.

  • lakemalcom@sh.itjust.works · 19 days ago

    I have yet to be able to vibe code anything relatively involved. The closest I’ve come is an ffmpeg wrapper script to edit out scenes from a video with a fade-in/fade-out title card. But even then, I ended up at some point having to debug and add my own arg support because it kept screwing things up. The first draft did do something, though.
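
    (Roughly this kind of skeleton, minus all the fade/title-card logic; the argument names here are just illustrative, not what the script actually uses:)

    ```python
    import argparse
    import subprocess

    # Minimal sketch of an ffmpeg wrapper: cut one segment out of a video.
    parser = argparse.ArgumentParser(description="Trim a clip with ffmpeg")
    parser.add_argument("input")
    parser.add_argument("output")
    parser.add_argument("--start", required=True, help="start time, e.g. 00:01:00")
    parser.add_argument("--end", required=True, help="end time, e.g. 00:02:30")
    args = parser.parse_args()

    subprocess.run(
        ["ffmpeg", "-i", args.input,
         "-ss", args.start, "-to", args.end,
         "-c", "copy",  # stream copy is fast, but you'd re-encode to add fades
         args.output],
        check=True,
    )
    ```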

    I find at this point that it’s still only useful if I have a very clear goal in mind with a lot of context on the area I need to make changes to. That lets me write a more specific prompt, and then I’ll still need to review the output. I have only ever gotten a successful one-shot like this with tests.

  • Gsus4@mander.xyz · 19 days ago

    Their usual (crap) defense is:

    a) you’re not paying enough, so of course it is crap

    b) you’re not prompting right, you need to use detailed, precise language…

    c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.

    d) it will improve…

    (any others anyone has noticed?)

    • solomonschuler@lemmy.zip · 13 days ago

      English is cheap to replicate; there is no science to prompting, it’s just asking the goddamn question.

      If AI companies are so keen on keeping humans dumb, that’s an issue on their part, not with my fucking English.

  • bruce_babbler@lemmy.zip · 19 days ago

    You’re probably done with this, but if you give Claude a test case or two (or have it try to make them), you can have it run the test case, and then it will iterate.

    Also, aggressively use plan mode, and if Claude screws up more than three times, run /clear, explain to it that it’s screwing up, and then give it new instructions.

  • dgdft@lemmy.world · edited · 19 days ago

    Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.

    Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.

    You also must aggressively manage information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error, and give it better instructions to steer it away from the pitfall it fell into. In the same vein, you also need to reset ASAP after pushing past the 100k-token mark, because the models start melting into putty soon after (yes, even the “extended” 1M-window ones).

    I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.

    • something183786@lemmy.world · 18 days ago

      My preferred way of using LLM coders is:

      • plan only
      • read the spec file I just wrote
      • optionally ask me questions in ‘qa.md’; I’ll reply inline

      Repeat until it stops asking me questions, then switch to a different model and ask again. I usually use both gpt5.3-codex AND Claude Sonnet.

      Then I have it update the spec. I start a new session to have it implement. Finally, I review the code. If I don’t like it, I undo and revisit the spec. Usually it’s because I’m trying to do too much at once and need to break it down into multiple specs.

      • BehindTheBarrier@programming.dev · 18 days ago

        Adversarial reviews are also a great way to prune bad ideas and assumptions from plans. They have helped me out greatly and often made the better LLMs go “the plan said do X, but doing that is an unknown, huge risk that may take longer than the rest of the plan”.

        The superpowers plugin handles the brainstorm, QA, design plan, implementation plan, implement, review flow quite well. It should aid the process of actually doing feature-type work. I also add adversarial reviews into the process; it saves a lot of time debugging what went wrong after implementation.

    • eodur@piefed.social · 18 days ago

      This is the most pragmatic take I’ve read, and it resonates strongly with my own experience. Claude can be a very useful tool, but like any other there is a learning curve and often many sharp edges. I’ve had Claude build some reasonably complex codebases, but it takes work. It’s pretty decent at “coding” but pretty terrible at the rest of software engineering.

  • cecilkorik@lemmy.ca · 19 days ago

    No, I think you do get it. That’s exactly right. Everything you described is absolutely valid.

    Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.

    /s but also not /s because this is the unfortunate reality we live in now. We’re all going to eat slop and sooner or later we’re going to be forced to like it.

    • pinball_wizard@lemmy.zip · 18 days ago

      “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.

      Exactly. The consequences are at worst a problem for “future me”, and at best “somebody else’s problem”.

      AI didn’t create this reality, but it’s certainly moved it into the spotlight and to “center stage.”

    • GiorgioPerlasca@lemmy.ml · 19 days ago

      Or maybe we will be forced to switch off LLMs and start solving the bugs introduced by their usage using our minds.

      • cecilkorik@lemmy.ca · 19 days ago

        As a professional software developer, I truly hope that is the case (and I plan to charge at least 10x my current rate when I’m looking for my next job after the AI bubble pops, as I expect there to be a massive shortage of people skilled enough to actually deal with the nightmare spaghetti AI codebases).

        Fun times ahead.

        • tohuwabohu@programming.dev · 19 days ago

          It will be interesting (read: bad) times getting to that point, and I agree. The junior market has been basically nonexistent ever since coding agents appeared, stripping the industry of its future seniors. We will be chained to our desks.

    • vga@sopuli.xyz · 18 days ago

      Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways”

      Sure, but you have to note that it reaches that point in minutes, sometimes on a task that would take humans a week. The power is not that it creates correct stuff; it’s that it creates almost-correct stuff 100 times faster than a human. Plus the typical machine benefits: it never gets tired, demotivated, etc.

      So then the challenge becomes being able to be that human who can review stuff extremely well and rapidly, with a natural instinct for probing the stuff LLMs tend to be wrong about. It’s sort of the same challenge that every tech lead had before LLMs, just subtly different, because LLMs don’t exactly think like we do.

  • Alexstarfire@lemmy.world · 19 days ago

    I haven’t used tools to make stuff from scratch but we do use them, or similar, where I work. What kind of stuff are you prompting it for? I find it works best when you give it a very small/simple task to do. And it’s pretty good when it comes to making tests for existing code.

    But if the main problem is getting math equations and such wrong, I’m not sure there is much we can do to help. You’d have to provide it the equations at a minimum and probably explain to it how they should be used.

    But there are definitely times where it can be very frustrating. I had a similar issue to yours yesterday. It made a code change and it wasn’t working how it was supposed to. I kept telling it the problem and it kept trying to fix it but failing. I gave up after far too long and looked at all the code changes it had made, since it was working correctly before. It had just put a change slightly too far down in a process, and all I had to do was move it up, wholesale, by like 10 lines and it fixed my problem. Like, how could it not figure out something that simple?

    So, it’s not the best at actually fixing things but does work more often than not. But if you can tell it exactly what code is causing the problem and where you want it to be instead, it’ll fix it.

    • OwOarchist@pawb.social · 19 days ago

      I find it works best when you give it a very small/simple task to do.

      If it’s a small/simple task, why do I need help at all?

      • lepinkainen@lemmy.world · 18 days ago

        Because the simple tasks are boring as fuck?

        If an LLM can generate 90% of an HTTP API correctly, why would you want to do it manually?

        • OwOarchist@pawb.social · 18 days ago

          If an LLM can generate 90% of an HTTP API correctly, why would you want to do it manually?

          Because figuring out which 10% it did wrong and then fixing that will take longer and be more effort than just doing it from scratch myself.

      • Alexstarfire@lemmy.world · 19 days ago

        Because it might be something that needs to be done in lots of places. Or it may just be something you don’t want to do, so you fire it off and then go look at or work on something else.

        Now, that might be useless for your workflow, but not every tool is useful in every circumstance.

        And you can still use it for larger tasks, but often I need to come behind it and clean up its work. Just like you would an intern or junior dev.

  • jubilationtcornpone@sh.itjust.works · 19 days ago

    I rarely use LLMs for generating code. Usually, by the time I’ve provided all the necessary context, I might as well have just written the code myself. I do use LLMs for doing research. As long as it’s understood that the response is only as accurate as the source material, they often do a decent job of distilling things down to what I’m actually looking for.

  • pixxelkick@lemmy.world · 19 days ago

    1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it’ll see warnings/hints/suggestions from the LSP.

    2. Unit tests. Unit tests. Unit tests. Unit tests.

    I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.

    Instead of reasoning out “it should do this”, they can just run the damn test and find out.

    They’ll iterate on it until it actually works, and then you can look at it and confirm whether it’s good or not.
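
    For example, even something this small changes the dynamic (made-up function, but this is the shape of test I mean):

    ```python
    # Hypothetical example: a tiny pure function plus tests the LLM can run itself.
    def clamp(value, low, high):
        """Clamp value into the inclusive range [low, high]."""
        return max(low, min(high, value))

    def test_clamp_inside_range():
        assert clamp(5, 0, 10) == 5

    def test_clamp_below_range():
        assert clamp(-3, 0, 10) == 0

    def test_clamp_above_range():
        assert clamp(42, 0, 10) == 10
    ```

    Run that with pytest and the model gets a hard pass/fail instead of vibes.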

  • Yardy Sardley@lemmy.ca · 19 days ago

    I used Opus 4.6 Extended

    Stop being cheap, OP. You clearly just need to shell out multiple billions of dollars for access to mythos /s

  • homes@piefed.world · 19 days ago

    I tried using Claude to convert some bash scripts to Docker Compose files, and it made several mistakes with case sensitivity and failed to properly encapsulate certain path declarations that had spaces in them. If it can make such incredibly simple mistakes converting a script to a markup language, I wouldn’t dare trust it to actually compose anything in an actual programming language like Python or Rust or C# or Swift or whatever you’re using.

    • zbyte64@awful.systems · 18 days ago

      I have similar problems whenever I send it to investigate a bug and the local runtime is inside a container. It cannot reliably translate paths without the help of an IDE. Hell, it even occasionally mangles API paths if I have them prefixed elsewhere in the codebase (despite having Claude.md etc.; your context needs to be pure for it to be reliable). Having it fix a Dockerfile is comically bad.

      • homes@piefed.world · 18 days ago

        like… it fixed it when I called it out, but it made the mistake again later on. I was only using it to save time converting, like, 11 or so files, but it made the mistake 3 or 4 times, not only with the encapsulation but with the case sensitivity, too. Both with paths, although I couldn’t see any particular pattern to it.

        just annoying, and I had to read through each compose file just to check it for errors. In the end it did save time, but much less than I thought it would.

        • zbyte64@awful.systems · 18 days ago

          If it gets it wrong the first time, I rarely reprompt. I know I can get it to fix it, but it’s usually faster for me to do it myself because I’ve already figured out where and what the fix should be. Low-key, I think it’s just a ploy to get us to burn more tokens. Sure, correcting it means it writes a few lines to the memory file, but it’s only a matter of time before it trips over that context as well.

          • homes@piefed.world · 18 days ago

            yeah, I also wonder if it’s a ploy. that’s really the only time I’ve used it for any kind of code assistance, and I really haven’t used Claude much overall, but it generally seems more capable than ChatGPT, for example. it felt a bit strange that it would make such a simple mistake.