I’m running opnsense on proxmox with some lxc containers and docker hosts.

I’ve never done internal DNS routing, just a simple DMZ with Cloudflare proxies and static entries for some external services. I want to simplify things and stop relying on memorized IPs internally.

For example, I have the ports for the services on my docker hosts memorized, and only a couple of mapped hosts in opnsense; nothing centralized.

What is the best way to handle internal DNS name resolution for both docker and the lxc containers? Internal CA certs? External unroutable (security)?

Any tips and setups appreciated.

  • just_another_person@lemmy.world · 2 months ago

    Focus on DNS for the host machine and its port mappings, not the individual containers.

    If you’re instead asking “How can I easily map a DNS name to service and port?”, then you want a reverse proxy on your host machine, like nginx (simplest) or Traefik (more complex, but geared towards service discovery and containers).

    In the latter scenario you set up a named virtual host for each service that maps back to the service port exposed for your containers. Example: a request for jellyfin.localdomain.com points to the host machine, nginx answers the request, matches the host name in the request, then proxies your session to the container.
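    A minimal nginx sketch of the jellyfin example above (the hostname, upstream address, and port are placeholders; adjust to whatever your container actually exposes):

    ```nginx
    # Hypothetical named vhost: jellyfin.localdomain.com -> container port 8096
    server {
        listen 80;
        server_name jellyfin.localdomain.com;

        location / {
            # Proxy to the port the container publishes on this host
            proxy_pass http://127.0.0.1:8096;
            # Preserve the original host and client IP for the backend
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    Each additional service is the same block with a different server_name and proxy_pass port.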

    It’s copy and paste for the most part once you get the first one going unless you’re dealing with streaming.

    If you’re running a flexible platform on your router like OpenWRT, you could also do some port forwarding as a means to achieve the same thing.

    • ___@lemm.eeOP · 2 months ago

      This is what I was thinking also. Just let the host reverse-proxy the requests and map the DNS name to the host in opnsense.
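      For the opnsense side, a host override in the Unbound resolver boils down to a local-data record; conceptually something like this (zone, hostname, and IP are placeholders for illustration):

      ```
      # Hypothetical internal zone answered locally by Unbound
      local-zone: "home.lan." static
      # Point the service name at the docker/reverse-proxy host
      local-data: "jellyfin.home.lan. IN A 192.168.1.50"
      ```

      In practice you'd add these as host overrides in the opnsense GUI rather than editing unbound.conf by hand.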