Long story short, my VPS, which forwards traffic to my home servers over Tailscale, got hammered by thousands of requests per minute from Anthropic’s Claude AI, all of them coming from different AWS IPs.

The VPS has a 1 TB monthly cap, but it’s still kinda shitty to get hit with huge spikes like the 13 GB in just a couple of minutes today.

How do you deal with something like this?
I’m only really running a Caddy reverse proxy on the VPS, which forwards my home server’s services through Tailscale.

I’d really like to avoid solutions like Cloudflare, since they f over CGNAT users very frequently and all that. I don’t think a WAF would help with this at all(?), but rate limiting on the reverse proxy might work.
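For reference: rate limiting in Caddy seems to need a plugin, since the standard build doesn’t ship a rate limiter; the third-party rate_limit handler (github.com/mholt/caddy-ratelimit), built in with xcaddy, looks like the usual choice. Something like this appears to be the general shape, though I haven’t tested it and the addresses and numbers are placeholders:

```
{
	# Third-party directives need an explicit position in the handler order.
	order rate_limit before reverse_proxy
}

example.com {
	rate_limit {
		zone per_client_ip {
			key    {remote_host}   # one bucket per client IP
			events 60              # allow 60 requests...
			window 1m              # ...per minute, then reject the rest
		}
	}
	# Placeholder: the home server's Tailscale address and port.
	reverse_proxy 100.64.0.2:8080
}
```

Since the bot traffic comes from thousands of different AWS IPs, a per-IP limit alone might not cut it; matching on the crawler’s User-Agent as well is probably the other half of it.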

(VPS has fail2ban and I’m using /etc/hosts.deny for manual blocking. There’s a WIP website on my root domain with robots.txt that should be denying AWS bots as well…)
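(Side note on the robots.txt: it matches user-agent tokens rather than source IPs, so it can’t target “AWS” as such, and it’s purely advisory. For crawlers that do honor it, a block for the big AI scrapers looks roughly like this; the token list changes over time, so treat these as examples:)

```
# robots.txt - advisory only; well-behaved crawlers honor it, the rest won't
User-agent: ClaudeBot
User-agent: anthropic-ai
User-agent: GPTBot
User-agent: CCBot
Disallow: /
```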

I’m still learning and would really appreciate any suggestions.

  • mholiv@lemmy.world · 4 hours ago

    I see your point, but I think you underestimate the skill of coders. You make sure your timeout is inclusive of JavaScript run times. Maybe set a memory limit too. Imagine you wanted to scrape the internet: you could solve all these tarpits. Any capable coder could. Now imagine a team of 20 of the best coders money can buy, each paid 500.000€. They can certainly do the same.
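    For illustration, the kind of budget being described is only a handful of lines on the scraper side. A rough Python sketch, with the limits and names made up:

    ```python
    import time
    import requests

    MAX_SECONDS = 15        # hard wall-clock budget per page (made-up number)
    MAX_BYTES = 2_000_000   # give up on pages bigger than ~2 MB (made-up number)

    def fetch_with_budget(url: str) -> bytes | None:
        """Fetch a page, but bail out if a tarpit trickles bytes or bloats the response."""
        deadline = time.monotonic() + MAX_SECONDS
        body = bytearray()
        try:
            with requests.get(url, stream=True, timeout=(5, 5)) as resp:
                for chunk in resp.iter_content(chunk_size=8192):
                    body.extend(chunk)
                    if time.monotonic() > deadline or len(body) > MAX_BYTES:
                        return None  # abandon this page and move on
        except requests.RequestException:
            return None
        return bytes(body)
    ```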

    Like I see the appeal of running a tar pit. But like I don’t see how they can “trap” anyone but script kiddies.

    • WhyJiffie@sh.itjust.works · 54 minutes ago

      you still couldn’t solve tarpits completely. they may hold up the scrapers for less time, but they will still do that for the length of the timeout

      • mholiv@lemmy.world · 37 minutes ago

        Maybe not with just if statements, but with a heuristic system I bet any site that runs a tarpit would be caught out very quickly.
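        As in, the heuristic doesn’t have to be clever. A toy Python sketch of the idea, with invented thresholds:

        ```python
        from collections import defaultdict

        MIN_BYTES_PER_SEC = 1024   # below ~1 KB/s sustained looks like a tarpit (invented threshold)
        SAMPLE_SIZE = 5            # judge a host on its last few fetches

        samples = defaultdict(list)   # host -> list of (seconds, bytes) per fetch

        def record_fetch(host: str, seconds: float, num_bytes: int) -> None:
            samples[host].append((seconds, num_bytes))

        def looks_like_tarpit(host: str) -> bool:
            recent = samples[host][-SAMPLE_SIZE:]
            if len(recent) < SAMPLE_SIZE:
                return False
            total_time = sum(s for s, _ in recent)
            total_bytes = sum(b for _, b in recent)
            # Sustained, pathologically low throughput -> stop crawling this host.
            return total_bytes / max(total_time, 0.001) < MIN_BYTES_PER_SEC
        ```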

    • Nobody is paying software developers 500.000€. It might cost the company that much, but no developer is making that much. The highest software engineer salaries are still in the US, and the average is around $120k. High-end salaries are around $160k; you might creep up a little above that, but that’s also location specific. Silicon Valley salaries might be higher, but then it costs far more to live in that area.

      In any case, the question is ROI. If you have to spend $500,000 to address some sites that are being clever about wasting your scrapers’ time, is that data worth it? Are you going to make your $500k back? And you have to keep spending it, because people keep changing tactics and putting in new mechanisms to ruin your business model. Really, the only time this sort of investment makes sense is when you’re breaking into a bank and are going to get a big pay-out in ransomware or outright theft. Getting the contents of my blog is never going to be worth the investment.

      Your assumption is that slowly served content is considered not worth scraping. If that’s the case, then it’s easy enough for people to prevent their content from being scraped: put in sufficient delays. This is an actual method for addressing spam: add a delay to each interaction. Even relatively small delays add up and cost spammers money, especially if you run a large email service and do it at scale.

      Make the web a little slower. Add a few seconds to each request, on every web site. Humans might notice, but probably not enough to be a big bother, while the impact on data harvesters will be huge.
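      As a toy illustration of “a few seconds per request” (Flask and the 3-second figure are arbitrary, and sleeping like this ties up one of your own workers too, so it’s a sketch of the idea rather than a production tarpit):

      ```python
      import time
      from flask import Flask

      app = Flask(__name__)
      DELAY_SECONDS = 3.0   # arbitrary; barely noticeable to a human, expensive at crawler scale

      @app.before_request
      def add_friction():
          # Blunt on purpose: every request, human or bot, waits the same few seconds.
          time.sleep(DELAY_SECONDS)

      @app.get("/")
      def index():
          return "slow, but fine for a human"
      ```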

      If you don’t think this is already the defense, consider how almost every Cloudflare interaction now involves a time-wasting front page, and an increasingly large number of other sites do the same. They usually say something like “making sure you’re human” with a spinning disk, but really all they need to be doing is adding 10 seconds to each request. If a scraper is trying to index only a million pages a day, and each page adds a 10 s delay, that’s 1,000,000 × 10 s, or roughly 2,800 hours of wasted scraper compute time per day. And they’re trying to scrape far more than a million pages a day; it’s estimated (they don’t reveal the actual number) that Google indexes billions of pages every day.

      This is good, though; I’m going to go change the rate limit on my web server; maybe those genius software developers will set a timeout such that they move on before they get any content from my site.

      • mholiv@lemmy.world · edited · 1 hour ago

        When I worked in the U.S. I was well above $160k.

        When you look at leaks you can see $500k or more for principal engineers. Look at Valve’s lawsuit information: https://www.theverge.com/2024/7/13/24197477/valve-employs-few-hundred-people-payroll-redacted

        Meta is paying $400k BASE for AI research engineers, with stock on top, which in my experience is an additional 300% to 600%, vesting over 2 to 4 years. And this is for H-1B workers, who are traditionally paid less.

        Once you get to principal and staff level engineering positions compensation opens up a lot.

        https://h1bdata.info/index.php?em=meta+platforms+inc&job=&city=&year=all+years

        ROI does not matter when companies are telling investors that they might be first to AGI. Investors go crazy over this. At least they will until the AI bubble pops.

        I support people resisting if they want by setting up tar pits. But it’s a hobby and isn’t really doing much.

        The sheer amount of resources going into this is beyond what people think.

        That, and a competent engineer can probably write something on the BEAM VM that can handle a crap ton of parallel connections. Six figures of concurrent connections, maybe? Being slow-walked means low CPU use, which means room for more green threads.