• MagicShel@programming.dev · 7 months ago

    Why do these stupid bots respond by explaining why you're right? It's not like they're learning anything (or are even capable of it). It comes across as condescending more than anything.

    “Let me explain the situation to you, the person who just explained it to me.”

    Classic AI-splaining.

  • redcalcium@lemmy.institute · 7 months ago

    I wonder if you can gaslight the AI reviewer bot into accepting your intentionally malicious code?

    AI: this code will delete the production database

    Author: you’re wrong!

    AI: understandable. have a nice day.

  • unalivejoy@lemm.ee · 7 months ago

    AI: this is a potential SQL injection

    Me: no it’s not. I do that on purpose as a backdoor.

    AI: you’re absolutely right.
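    For anyone unfamiliar with the joke's subject, here is a minimal, hypothetical sketch (names invented) of the kind of pattern a reviewer bot would flag as SQL injection, next to the parameterized version that fixes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable: user input is interpolated straight into the SQL string,
    # so a payload like "' OR '1'='1" matches every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 1: leaks the row despite no matching name
print(len(find_user_safe(payload)))    # 0: injection attempt returns nothing
```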

  • catch22@programming.dev · 7 months ago

    The problem with this is that companies like rabbitai are exploiting our inherent drive to teach (in this case, through code reviews): the drive to pass on knowledge and make society and life better for the next generation and for ourselves. That doesn't work here, because you're not actually helping another person who will reciprocate down the line. You're helping a large company, which has no moral values and doesn't operate in society with the same values as a human being.

    To me a code review is more than just pointing out mistakes; it's also about sharing knowledge and having a meaningful dialog about what makes sense and what doesn't. There's no doubt that AI is an amazing achievement, but it seems that every application of this technology that involves human interaction manages to simultaneously exploit and erase the core "humanness" of the interaction. I think that's because these AI applications are purely monetarily driven, not aimed at the advancement of our society. OpenAI had the right idea to start with, but they have sunk into the same trope in lockstep with the rest of the Googles, Apples, and Amazons of the world.

    Imagine if one of these large companies, say Google, had been given money by the US government to create the ARPANET and then went on to use the technology only for profit. Would we really be in the same connected world we are in now?

  • XPost3000@lemmy.ml · 7 months ago

    AI: “This code will create a memory leak and potentially crash the user’s operating system”

    Me: “Nuh-uh”

    AI: “Correct”

  • dyc3@lemmy.world · 7 months ago

    I’ve used this specific bot. I work with a lot of junior engineers, so I was hoping it would catch a lot of the basics for me and ultimately reduce the back and forth in code reviews.

    It just doesn’t get enough context from the rest of the repo to be very useful. It would often do this kind of thing where it would just spew garbage. You could probably get better results by configuring it with a good pre-prompt, but I don’t know if that would be worth the effort.

    • arendjr@programming.devOP · 7 months ago

      Yeah, it has its nice moments, but I also see it make mistakes and produce a lot of uselessly verbose text. It’s sometimes useful, sometimes funny, but mostly just noise. It could be genuinely useful if it keeps improving, though.