I am running this Docker image: https://github.com/nextcloud/docker behind a Cloudflare Tunnel, meaning the webserver sees all the traffic coming from a single IP in 172.16.0.0/12.
The documentation says:
The apache image will replace the remote addr (IP address visible to Nextcloud) with the IP address from X-Real-IP if the request is coming from a proxy in 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 by default
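In Apache terms, that default behaviour is conceptually something like this mod_remoteip snippet (my sketch, not the image's actual config file):

RemoteIPHeader X-Real-IP
RemoteIPTrustedProxy 10.0.0.0/8
RemoteIPTrustedProxy 172.16.0.0/12
RemoteIPTrustedProxy 192.168.0.0/16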
So I thought this wouldn't be a problem, as other Docker images can automagically figure out the real IP address of traffic coming through Cloudflare tunnels.
In the beginning it worked fine, then it got SLOW. Like two full minutes to load new feeds in News, ages to complete a sync, and so on. I rebooted the server on those occasions, and then it worked fine for a day.
Because at the time I was running it on Unraid, I blamed the lag on that OS plus my weird array of HDDs with decades of usage on them. I migrated to Debian on an NVMe array and… same lag!
Wasted hours trying Caddy + FPM instead of Apache, and it was the same: worked fine for a day, then slow again.
Then I wondered: what if the program is “smart” and throttles it by itself without any warning to the admin if it thinks that an ip address is sending too many requests?
Modified the docker-compose like this:

nextcloud:
  image: nextcloud

became

nextcloud:
  build: .
and I created a Dockerfile with
FROM nextcloud
# refresh and upgrade the base packages
RUN apt-get update && apt-get upgrade -y
# the bz2 PHP extension needs the libbz2 headers to build
RUN apt-get install -y libbz2-dev
RUN docker-php-ext-install bz2
# enable mod_rewrite and mod_remoteip
RUN a2enmod rewrite remoteip
# override the trusted-proxy list with the Cloudflare ranges below
COPY remoteip.conf /etc/apache2/conf-enabled/remoteip.conf
with this as the content of remoteip.conf:
RemoteIPHeader CF-Connecting-IP
RemoteIPTrustedProxy 10.0.0.0/8
RemoteIPTrustedProxy 172.16.0.0/12
RemoteIPTrustedProxy 192.168.0.0/16
RemoteIPTrustedProxy 173.245.48.0/20
RemoteIPTrustedProxy 103.21.244.0/22
RemoteIPTrustedProxy 103.22.200.0/22
RemoteIPTrustedProxy 103.31.4.0/22
RemoteIPTrustedProxy 141.101.64.0/18
RemoteIPTrustedProxy 108.162.192.0/18
RemoteIPTrustedProxy 190.93.240.0/20
RemoteIPTrustedProxy 188.114.96.0/20
RemoteIPTrustedProxy 197.234.240.0/22
RemoteIPTrustedProxy 198.41.128.0/17
RemoteIPTrustedProxy 162.158.0.0/15
RemoteIPTrustedProxy 104.16.0.0/12
RemoteIPTrustedProxy 172.64.0.0/13
RemoteIPTrustedProxy 131.0.72.0/22
RemoteIPTrustedProxy 2400:cb00::/32
RemoteIPTrustedProxy 2606:4700::/32
RemoteIPTrustedProxy 2803:f800::/32
RemoteIPTrustedProxy 2405:b500::/32
RemoteIPTrustedProxy 2405:8100::/32
RemoteIPTrustedProxy 2a06:98c0::/29
RemoteIPTrustedProxy 2c0f:f248::/32
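To confirm the override actually takes effect, one quick check (my addition, not part of the original setup) is an Apache log format that records both the substituted client address (%a) and the raw TCP peer (%{c}a); when mod_remoteip is working, the two differ for proxied requests:

LogFormat "%a (peer %{c}a) %t \"%r\" %>s" remoteip_check
CustomLog ${APACHE_LOG_DIR}/remoteip_check.log remoteip_check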
And now, because Nextcloud sees all the different IP addresses, it doesn't throttle the connections anymore!
Why do so many people tunnel their personal data through Cloudflare anyway? No port forwarding possible? Afraid of DDoS attacks? Or am I missing something?
Security.
Cloudflare handles a very large amount of traffic and sees many different types of attacks (think CSRF, injections, etc.). It is unlikely that you or I will be individually targeted, but drive-bys are a thing, and thanks to the amount of traffic they monitor, the WAF will more likely block and patch a 0-day before I'm able to update my apps.
Also, while the WAF is a paid feature, other free features, such as the free DDoS protection, help prevent other attacks.
It's a trade-off, sure; they're technically MITM'ing your traffic, but frankly, I don't care. Much like no one cares to target/attack me individually, they aren't going to look at my content individually.
Additionally, it makes accessing things much easier. It's also much more likely I'd find an SME using Cloudflare than some janky custom self-hosted tunnel setup, so from the point of view of using a homelab to build professional experience, it's much more applicable as well.
Thx for explaining. I'm not sure if I'm willing to make the same trade-offs. Supposedly their WAF is very good and quite a few people use it, probably for a good reason… It just comes at a hefty price.
I'm doing selfhosting to emancipate myself, stay independent and in control. I'm not sure that becoming dependent on a single large company, and terminating my encryption on their servers that do arbitrary magic and whatever with my packets, aligns with my goals. (Or ethics, since I think the internet is meant to connect people on a level playing field, and that's no longer the case once many people transfer control to a single entity.) But I don't see a way around that. Afaik you have to choose between one or the other. Are there competitors to Cloudflare that handle things differently? Maybe provide people with the WAF and databases to run on their own hardware, let them stay in control, and just offer to tunnel their encrypted data with a configurable firewall?
Edit: Just found modsecurity.org while looking that up. But I guess a good and quick database of bad actors’ IPs is another thing that would be needed for an alternative solution.
It'd be a challenge to keep up: 0-days aren't going to be added to a self-hosted solution faster than they can be detected and deployed on a massively leveraged system. Economies of scale on full display.
I mean, theoretically… I guess, if they do it right? It depends a bit. Some Linux distributions are crazy fast with patching stuff, and some stable channels have a really good track record on open vulnerabilities. Nowadays that's not the only way of distributing software either; vulnerability might depend on your Docker container setup, etc.
Are there actual numbers on what Cloudflare adds on top? Which 0-days do they focus on? I mean, do they have someone sitting there reading Lemmy CVEs and then immediately getting to work writing a regex that filters out such requests?
And how much does it cost? They also list the same ModSecurity in their lower plans. I don't think 0-day protection would help people like me if it's $200 a month.
The difference, in my opinion, is that it doesn't matter how fast upstream vendors patch issues: there's a window between the issue being detected, the patch being implemented, the release getting pushed, the notification of the release being received, and finally the update getting deployed. Whereas on the cloud WAF front, they can look at requests across all sites, run analysis, and deploy instantly.
There is a free tier with their basic “Free managed ruleset”, which they deployed for everyone with the orange cloud enabled when the Log4j issue hit a couple of years back. That protection applied to all applications, not just the ones that were able to turn around quickly with a patch.
If you want more bells and whistles, there's a fee associated with it, and I understand having fees is not for everyone, though the price point is much lower: you get some more WAF features on the $25/mo ($20/mo when paid annually) tier as well, before having to fork out for the full $250/mo ($200/mo when paid annually) tier. There's a documentation page on all the price points and rulesets available.
I tried to look it up but wasn't very successful. What they do in their free tier remains a mystery to me. The $20/month tier is the core ruleset from ModSecurity, and I don't need to pay them $20 to deploy that for me; the ruleset is free and publicly available. I've just installed it on my VPS… it's only a few lines in Nginx to enable it.
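For reference, the few lines look roughly like this with the ModSecurity-nginx connector module loaded (paths are illustrative and distro-dependent):

load_module modules/ngx_http_modsecurity_module.so;

http {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
}

where main.conf pulls in the base config plus the OWASP Core Rule Set:

Include /etc/nginx/modsec/modsecurity.conf
Include /etc/nginx/modsec/coreruleset/crs-setup.conf
Include /etc/nginx/modsec/coreruleset/rules/*.conf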
And what you're talking about is $200 a month. I seriously doubt anyone here uses that plan for their home server. I wouldn't pay $2400 a year for it.
I still don't get how that would work. Sure, you can filter spam that way, mitigate attacks while the worst wave washes through the net, or do machine learning and find out if usage patterns change. But how would it extend to 0-days faster than the software gets patched? This sounds more like snake oil to me.
If someone finds a way to inject something into a Nextcloud plugin and change things in the database so they have access, and then they do it to 100 Cloudflare customers, how would Cloudflare know? If it's a 0-day, they, per definition, don't know about it in advance. And they're just a WAF: they don't know if a user is authorized by mistake or supposed to have access; they don't know anything about my database, since it runs on my machine; and they don't know the software's endpoints or which request is going to trigger a vulnerability, unless it manifests in some way that's obvious to them, like 100 machines immediately starting to blast spam through their connection with one common request in the logfiles.
Otherwise, all they can do is protect against known exploits, and maybe race the software vendor to filter things before they get patched. I just don't see any substantial 0-day protection that extends beyond “keep your server up to date and don't use unmaintained software.” Especially not for the home user.
The free tier was rolled out specifically to address upstream vendors patching Log4j too slowly. They're able to monitor the requests and intercept malicious patterns before they hit a server running applications that are still unpatched (because no upstream fix is available yet). They've stated they are updating the free tier with more rules over time. The extras from the paid tiers are more rulesets, more analytics around what was blocked, and so on.
At the end of the day though, you do you; the benefit for me may not be a benefit for you. I'm not selling their service, and I gain nothing whatsoever should anyone opt into it.
I just use a VPS with caching and basic https stripping protection
If they don't care to attack you, why would they DDoS you? 😄
The things CF fans make up about “security” are hilarious.
If you ever got hit with a DDoS while on the free tier they’d just disconnect you.
If you ever got hit with a DDoS while on the free tier they’d just disconnect you.
I can’t find anything that supports that statement. What is your source?
From what I understand, you can do a bunch of things when under attack, like requiring captchas.
Up to a certain volume they serve a page that runs some JavaScript heuristics to figure out if the client making the request is legit or not.
Past a certain volume your service is cut off completely.
The cutoff point depends on the load on their free tier network, which is shared by all freeloaders. Could be someone else under attack and you’d still get cut off.
CloudFlare is a CDN first of all, and it makes its money from paying customers. The free tier and the registrar and the DNS and the reverse proxy and basic DoS heuristics etc. are just there to generate word of mouth and free advertising. Nobody was talking about CF a few years ago when they didn’t offer these free services, now every selfhoster and their dog will recommend them.
The cutoff point depends on the load on their free tier network, which is shared by all freeloaders. Could be someone else under attack and you’d still get cut off.
Again, do you have a source for that?
All the information I can find points to the DDoS protection being essentially the same regardless of price plan. The paid plans just get some more features, like extra firewall stuff.
On the product offering page for Free DDoS Web Protection, the features table shows that “Unmetered DDoS Protection” is available for everyone regardless of tier, from Free all the way up to Enterprise. This change was rolled out on 2017-09-25; prior to that, there was a throughput cap depending on price point (though still very generous for the free tier, from what I remember).
Sometimes people make up their mind about something and never update their knowledge, and it would appear this is one of those cases.
Nobody is going to go through the effort to DDoS a personal site. 😂
Tell this to the Russian bots that are hammering my personal site for some reason.
It’s way easier to make a rule “no Russia” or even “only my country”
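With Cloudflare that's literally a one-line firewall rule expression with the action set to Block; something like this (country codes illustrative, field name per their rules language):

(ip.geoip.country eq "RU")

or, for the allowlist approach:

(ip.geoip.country ne "IT")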
Getting brute-forced by bots isn't a DoS attack.
That's not a DDoS. Not even close. Your ISP would be getting involved if it were.
You don't even need a distributed DoS against a home system, since your bandwidth is so easy to overwhelm. A single EC2 instance could flood your standard home network.
It's not a distributed denial of service, but a single bot requesting the same fucking WordPress page every 100 ms is still a denial of service for my poor home server. In one click I was able to ban the whole Asian continent without too much effort.
Has it “denied service” to you? I’d be genuinely surprised. Are you on dial-up? I’ve run servers on my home network for “never you mind how long” and have never had a denial of service due to bot traffic.
Yes, I got lots of lag due to WordPress using all the CPU time to render the same page over and over again.
I could have spent days setting up a caching proxy and other stuff, but for a website with 10 monthly visitors that's overkill; it's faster to block everyone outside the target audience. If someone is visiting from Russia or China they have 120% malicious intent in my case, so there's no need to serve them content.
They think the free CF tier offers DDoS protection, which (a) will never happen to their server and (b) if it ever happened would consist of CF disconnecting their tunnel and black-holing their IP and domain until it blows over.
They also think a CDN helps when your services are behind authentication.
Some of them just find it convenient that CF is registrar, DNS provider and sets up reverse proxy for them so they never stop to think too much about it.
Thanks. I read a lot of people recommending Cloudflare. I believe a substantial portion of that group is on the free tier and not exactly making informed choices. Being a registrar and DNS provider, and offering tunneling / port forwarding or some other mechanism to traverse your home NAT, are valid use cases.
Some of them just find it convenient that CF is registrar, DNS provider and sets up reverse proxy
You should never put all your eggs in one basket. Using one company for all three of these essentially gives them full control of your domain.
It’s a best practice to use separate companies for registrar and website/proxy. If there’s ever some sort of dispute about the contents of the site, you can change the DNS to point to a different host. That’s not always possible when the same company handles both.
Simple reason: at home I don’t have a static IPv4 address and I can’t do port forwarding
What about DDNS?
Edit: never mind, reread your comment and saw the port forwarding caveat. Sorry pal.
Thx, that is a good reason to do it. I'm eventually going to lose my static IPv4 address too. But I'm preparing to move some of my services to a VPS instead, and in the process set up the firewall, the reverse proxy to the Nextcloud on my homeserver, and so on there (on that VPS).
I don’t have a static IP but host services off my paid domain. I use duckdns and point host records to the duckdns address. I have to use CloudFlare to manage my DNS records for this to work.
I do the same, but I just use a script that runs periodically to update Cloudflare with my current IP via their native API.
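A minimal sketch of such a script, assuming an API token with DNS-edit permission; the zone ID, record ID and hostname are placeholders you'd substitute:

#!/bin/sh
# placeholders - fill in your own values
ZONE_ID="your_zone_id"
RECORD_ID="your_record_id"
API_TOKEN="your_api_token"

# discover the current public IP
IP=$(curl -s https://ifconfig.me)

# overwrite the A record via Cloudflare's v4 API
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":120}"

Run it from cron every few minutes and the record follows your public IP.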
Ah, makes sense. I don't think you have to use Cloudflare specifically in that case. But I remember CNAME records can't be used for everything; there are some limitations. I know I had issues with dyndns and a domain at some point, I just can't remember the details. It didn't work with every registrar / DNS provider, but some of them offer some magic to make certain things work. I believe back then we ended up transferring that domain to another hoster. My domains are with a company that offers an API, so I can just have a small script run in the background that changes entries and does dyndns that way. But obviously you need to pay attention to things like the time to live on your records and set it accordingly once you do dyndns yourself.
CNAME flattening is not a regular DNS feature, so I have to use Cloudflare. Maybe other providers do the same, but I haven't looked around. It's certainly not something Namecheap offers.
I point my TLD to the dynamic DNS record and then point the other records to the TLD as CNAME records. I'm using Nginx Proxy Manager to reverse-proxy traffic to the different services. These all live on a Raspberry Pi 4.
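On paper that layout looks something like this (names illustrative; ALIAS stands in for the provider's flattened-CNAME pseudo-record, it's not standard zone-file syntax):

example.com.         ALIAS   myhome.duckdns.org.
cloud.example.com.   CNAME   example.com.
files.example.com.   CNAME   example.com.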
Took me a while to remember… I think other providers don't call it CNAME flattening but ALIAS records, and Namecheap lists them in their documentation. You may need to look it up if you're interested, but I think they do in fact offer it. (I'm not advertising for or against anything here; if you're happy with your provider and your setup works, that's fine. It's definitely not available everywhere.)
Awesome info! I wasn’t overly happy with having to use CloudFlare for just this one feature. I’ll have a test with my registrar.
Get a $15/year VPS and run your own tunnel using Wireguard.
Is there a better way to expose my services when behind an ISP CGNAT?
Cloudflare, PageKite, a cheap VPS with a reverse proxy. Maybe IPv6-only access if your ISP provides IPv6 alongside the CGNAT; ngrok, serveo, rathole, sish, a VPN… I also found portmap.io, Webhook Relay, Packetriot and countless other smaller companies. There are quite some tools and services available, and which one is right for you might depend on the exact situation and what you're hosting. I'm not an expert on this. I have an internet connection without a NAT, and additionally a really tiny VPS with a mailserver, a small website and WireGuard, which I just use to tunnel through NAT if I need to. That means I haven't compared all the other services, since I don't need them (yet). I've learned a bit about Cloudflare from this discussion.
Because it makes them “feel” more secure.
I just don't see the point of using cloudflared. It's easy to use, but it just gives all your data to Cloudflare in return for very little.
You can easily set up SSL with a self-signed certificate; they get nothing.
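For what it's worth, generating such a certificate is a single openssl command (hostname illustrative):

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout origin.key -out origin.crt -subj "/CN=cloud.example.com"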
Afaik, they decrypt and re-encrypt all traffic.
Source?
That's just how they work. They terminate SSL and then connect to your origin server as a client, which gives them access to read anything submitted to your site, or any other site they manage, in the clear.
It's a reverse proxy in front of your services. That's fundamental to how an RP functions. Just like your own reverse proxy.
It says right in your first link that it doesn't support self-signed certs.
Interesting - I didn’t bother to set the X-Real-IP headers until now and this might speed up my instance too. Thanks!
Then I wondered: what if the program is “smart” and throttles it by itself without any warning to the admin if it thinks that an IP address is sending too many requests?
The word you're looking for is “rate limiting”, and according to the documentation you can also disable it completely.
But I guess the cleanest and most secure solution would be to just set the headers on the reverse proxy.
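For completeness, if you'd rather switch the throttling off than fix the headers, I believe the relevant knob is Nextcloud's bruteforce protection setting (check the documentation for your version; sketch below):

// config/config.php (excerpt), inside the $CONFIG array
'auth.bruteforce.protection.enabled' => false,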
Good job debugging it. Where’d you get that list of IPs?
It's the list of IPs that belong to Cloudflare.
I think that because I'm using tunnels it's not necessary to have all of them, just the Docker IP address space.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CF: CloudFlare
DNS: Domain Name Service/System
IP: Internet Protocol
NAT: Network Address Translation
SSL: Secure Sockets Layer, for transparent encryption
VPN: Virtual Private Network
VPS: Virtual Private Server (opposed to shared hosting)
Does this also apply to the linuxserver/nextcloud image? Because that's what I'm using.
Yes.