• Lucy :3@feddit.org

    Genius strategy:

    • Replace Juniors
    • Old nerds knowing stuff die out
    • Now nobody knows anything about programming and security
    • Everything’s now a battle between LLMs
    • jaybone@lemmy.zip

      I’ve already had to reverse engineer shitty old spaghetti code written by people who didn’t know what they were doing, so I could fix obscure bugs.

      I can wait until I have to do the same thing for AI generated code.

    • OctopusNemeses@lemmy.world

      This is a generalized problem. It’s not only programming. The world faces a critical collapse of expertise if we defer to AI.

  • rozodru@pie.andmc.ca

    Not with any of the current models, none of them are concerned with security or scaling.

    • marcos@lemmy.world

      AI should start breaking code much sooner than it can start fixing it.

      Maybe breaking isn’t even far off, because the AI can be wrong 90% of the time and still be successful.

    • notarobot@lemmy.zip

      A few years back someone made a virus that connected to an LLM server and kept finding ways to infect computers in a simulated network. I think it was kind of successful. Not viable as a real virus, but an interesting idea nonetheless.

    • NuXCOM_90Percent@lemmy.zip

      I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.

      Intrusion Countermeasures Electronics. The idea that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it but it is a very system/mental intensive process and the stronger the ICE, the stronger the tools need to be.

      In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor… if your models are good (big ol’ “if” that).

      Which then gets into the idea of Black ICE that is actively antagonistic towards those who are detected as attempting to bypass it. In the books it would fry brains. In the modern day it isn’t overly dissimilar from how so many VPN controlled IPs are just outright blocked from services and there is always the risk of getting banned because your wifi coffee maker is part of a botnet.

      But it is also not hard to imagine a world where a counter-DDOS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.

      • Ex Nummis@lemmy.world

        In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor… if your models are good (big ol’ “if” that).

        Turns out Harlan Ellison was a goddamn prophet when he wrote I Have No Mouth, and I Must Scream.

        • bleistift2@sopuli.xyz

          I have no clue how you think these two are related in any way, except for the word “AI” occurring in both.

          • Warl0k3@lemmy.world

            Tbf, every day that goes by is starting to feel more and more like we’re all being tortured by a psychotic omnipotent AI… With a really boring sense of humor.

    • ronigami@lemmy.world

      It’s like the “bla bla bla, blablabla… therefore God exists”

      Except for CEOs it’s “blablablabla, therefore we can fire all our workers”

      Same shit different day

  • tidderuuf@lemmy.world

    Ah yes, I’m sure AI just patched that software so that other AI could use that patched software and make things so much more secure. What a brilliant idea from an Ex-CISA head.

    • OpenStars@piefed.social

      Yeah, good luck with that…

      1+1=your mom

      I’m not holding out any hope for “good” AI for a very long time…

      • ProgrammingSocks@pawb.social

        I don’t think the poster means that literally. I think they’re drawing a parallel between the flaws of gun anti-regulation arguments and AI arguments. The point is that both arguments are bad.

        • OpenStars@piefed.social

          Perhaps I came on too strong but I took it as just joking around.

          But if the argument was to be real, then I note that a gun is actually quite an effective tool to accomplish its purpose, whereas AI is not truly “I”(ntelligent).

  • Ex Nummis@lemmy.world

    If an AI can be used for automatic scalable defense, it can also be used offensively. It’ll just be another digital arms race between blackhats and everyone else.

  • Baron von Fajita@infosec.pub

    Except that most risks are from bad leadership decisions. Exhibit A: patches exist for so many vulnerabilities that remain unpatched because of bad business decisions.

    I think in a theoretical sense, she is correct. However, in practice things are much different.

    • Ex Nummis@lemmy.world

      My old job had so many unpatched servers, mostly Linux ones, because of the general idea that “Linux is safe anyway.” The Windows updates, meanwhile, were staggered and phased because they would often break critical infrastructure.

      But we’ve seen plenty of infected Linux packages since, so it’s almost a given there are huge open holes in that security somewhere.

  • Qwel@sopuli.xyz

    Easterly said that if cybercrime were a country, it would have the third-biggest economy in the world, just behind the US and China.