Tracking a single cat doesn’t seem like DB work
Why wouldn’t a simple spreadsheet and some pivot tables work?
Czech Republic A4, Czech Republic A5, Czech Republic A6…
No worries, the properly implemented CI/CD pipelines will catch the bad code!
Just buy it for ten years. You’re ultimately saving money and it’ll give you more time to incubate your dream!
There’s not much cost with S3 object storage. It’s just a file system in Linux, and replication is a protocol standard.
Use object storage for media and backups, then use S3 replication to put a copy somewhere else.
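A rough sketch of what that could look like with boto3; the bucket names and replication role ARN below are placeholders, not anything from this thread, and both buckets need versioning enabled first:

```python
# Sketch: turn on S3 bucket replication with boto3.
# Bucket names and the IAM role ARN are placeholders.
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on both source and destination buckets.
for bucket in ("media-primary", "media-replica"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Copy every object written to the source bucket into the destination bucket.
s3.put_bucket_replication(
    Bucket="media-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::media-replica"},
            }
        ],
    },
)
```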
Dell, HPE and Supermicro. System integrators are buying shitloads to resell.
Get a second PC and a KVM switch
I did 100TB: 100 simultaneous streams of 1TB each, all with rsync
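If you want to fan out streams like that yourself, here is a loose sketch in Python; the source tree, destination host, and stream count are invented for illustration:

```python
# Sketch: run many rsync streams in parallel, one per top-level directory.
# Source path, destination host, and worker count are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC_ROOT = Path("/data")             # local source tree
DEST = "backup-host:/mnt/archive/"   # remote rsync destination

def sync(subdir: Path) -> int:
    # -a preserves metadata, --partial lets an interrupted transfer resume.
    return subprocess.run(
        ["rsync", "-a", "--partial", str(subdir), DEST],
        capture_output=True,
    ).returncode

if __name__ == "__main__":
    subdirs = [p for p in SRC_ROOT.iterdir() if p.is_dir()]
    # One thread per concurrent stream; rsync does the actual copying.
    with ThreadPoolExecutor(max_workers=10) as pool:
        for path, code in zip(subdirs, pool.map(sync, subdirs)):
            status = "ok" if code == 0 else f"rsync exited {code}"
            print(f"{path}: {status}")
```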
Red Hat, because it’s free for developers and used by a lot of enterprises.
Remind me: who provides most of FF’s funding?
If you have enough users and systems that this is a problem, then you should be centrally managing it. I get that you want to inventory what you have, but I’m saying that you’re probably doing it wrong right now, and your ask is solved by using a central IAM system.
It sounds like you’re probably looking for some kind of SAML-compliant IAM system, where credentials and access can be centrally managed. Active Directory and LDAP are examples of that.
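As a loose illustration of what centrally managed buys you, a small ldap3 sketch that binds to a directory and pulls the enabled user accounts; the server, bind DN, password, and base DN are all made up for the example:

```python
# Sketch: query a central directory (Active Directory over LDAP) for user accounts.
# Server, bind DN, password, and base DN are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://dc01.example.internal", get_info=ALL)
conn = Connection(
    server,
    user="CN=svc-inventory,OU=Service Accounts,DC=example,DC=internal",
    password="changeme",
    auto_bind=True,
)

# Enabled user accounts, with the attributes an inventory usually cares about.
conn.search(
    search_base="DC=example,DC=internal",
    search_filter="(&(objectClass=user)"
                  "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))",
    attributes=["sAMAccountName", "mail", "memberOf"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, entry.mail)
```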
OpenShift Virtualization
Well, 1ms of latency is roughly 300km of distance at the speed of light (closer to 200km in fiber), so unless you have something really misconfigured or overloaded, or you’re across the country, latency shouldn’t be an issue. 10-20ms is normally the high-water mark for most synchronous replication, so you can go a long way before a protocol like DNS becomes an issue.
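Back-of-the-envelope version of that arithmetic (the 300km-per-ms figure is the vacuum speed of light; fiber is closer to 200km per ms):

```python
# Sketch: rough one-way distance covered per millisecond of latency.
VACUUM_KM_PER_S = 300_000   # speed of light in vacuum, ~300,000 km/s
FIBER_KM_PER_S = 200_000    # roughly 2/3 of c in typical fiber

def km_per_ms(speed_km_per_s: float) -> float:
    return speed_km_per_s / 1000

print(km_per_ms(VACUUM_KM_PER_S))       # 300.0 km per ms in vacuum
print(km_per_ms(FIBER_KM_PER_S))        # 200.0 km per ms in fiber

# At the 10-20ms ceiling usually quoted for synchronous replication,
# that is roughly 2,000-4,000 km of one-way fiber distance.
print(km_per_ms(FIBER_KM_PER_S) * 10)   # 2000.0 km
```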
Sure, but how many foods are we talking here? This sounds like probably <20 rows on a sheet, with columns for ingredients.