# Infrastructure Reference

Last updated: 2026-04-11

- Database connections: see DATABASES.md
- DNS management: AdGuard Home at http://adguard.lan/#dns_rewrites
- Credentials: ~/.credentials (per-host passwords); project-level credentials in each repo's private/credentials
## Network Overview

Two subnets on the domestic LAN, each served by a dedicated Proxmox host:

| Subnet | Gateway | Proxmox Host | Purpose |
|---|---|---|---|
| 192.168.1.0/24 | 192.168.1.1 | titan.lan (.1.22) | Legacy VMs/CTs, NAS, DNS, network services |
| 192.168.2.0/24 | 192.168.2.1 | phoebe.lan (.2.16) | Primary workloads, LLM, production VMs |

DNS for `*.lan` is managed by AdGuard Home on ca-server (192.168.1.97).
## IP Allocation Scheme

### 192.168.2.0/24 (phoebe.lan)

| Range | Category | Current Assignments |
|---|---|---|
| .1 | Gateway | Router |
| .10-.19 | Physical hosts / bare metal | delphi.lan=.15, phoebe.lan=.16 |
| .20-.29 | (reserved) | iris1.lan=.21 |
| .30-.39 | Production VMs | naiad01.lan=.31, orcus.lan=.32 |
| .40-.49 | LLM / AI services | ollama-phoebe.lan=.40, janus01.lan=.41 |
| .50-.59 | Dev / test | cursor-demo.lan=.50, zeno.lan=.51, dev.lan=.52 |
| .60-.69 | Infrastructure services | torrent-search.lan=.60 |
| .70-.79 | Scraping / fetch | web-fetch.lan=.70 |
| .80-.89 | Databases | surrealdb01.lan=.80, merlin-db.phoebe.lan=.81 — see DATABASES.md |
| .90-.99 | (free) | |
### 192.168.1.0/24 (titan.lan)

| Range | Category | Current Assignments |
|---|---|---|
| .1 | Gateway | Router |
| .9-.14 | Jason CTs (dev/test containers) | energystats=.9, sybil=.10, merlin01=.11, jason04=.12, jason05=.13, jason06=.14 |
| .17 | End-user machines | paris.lan (Windows 11) |
| .19 | VPN containers | vpn-fetch.lan |
| .20-.23 | Proxmox hosts + misc VMs | ovid=.20, juno=.21, titan=.22, hyperion=.23 |
| .40-.44 | LLM / AI | ollama-titan.lan=.40, delphi01=.43, delphi-dt-12=.44 |
| .50-.54 | Web / app VMs | zeus01=.50, zeus01-dev=.51, pindar=.52, neon=.53, argon=.54 |
| .70-.71 | File services | hephaestus=.70, nextcloud-test=.71 |
| .80 | Databases | resolve-pg14.lan |
| .96-.99 | Network infrastructure | lucerna/pi-hole=.96, adguard/ca-server=.97, pharos=.99 |
| .200-.202 | NAS / storage | ds916=.200, ds1571=.201, atrium=.202 |
### Rule

Before assigning a new static IP, always:

- Check this document
- Check AdGuard rewrites: http://adguard.lan/#dns_rewrites
- Scan the subnet:

  ```sh
  ip neigh show | grep "192.168.X\."
  ```
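The neighbour table only lists hosts the kernel has seen recently, so treat the scan as a lower bound. A small helper (the `used_ips` name is mine, not an existing tool) normalises the output so it is easier to compare against the allocation tables above:

```shell
used_ips() {
    # Print the unique addresses for one /24 found on stdin, sorted by
    # last octet. Feed it `ip neigh show` output (or an nmap -sn sweep).
    subnet=$1   # e.g. 192.168.2
    grep -o "${subnet}\.[0-9]\{1,3\}" | sort -t . -k 4 -n -u
}

# Usage on a live host:
#   ip neigh show | used_ips 192.168.2
```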
## Physical Hosts

| Hostname | IP | Hardware | OS | Purpose |
|---|---|---|---|---|
| phoebe.lan | 192.168.2.16 | AMD desktop, RTX 3060 (12 GB) | Proxmox VE 8.2 | Primary hypervisor |
| titan.lan | 192.168.1.22 | Server | Proxmox VE 7.1 | Secondary hypervisor |
| delphi.lan | 192.168.2.15 | 2x RTX 3090 (48 GB VRAM) | Ubuntu (bare metal) | LLM inference, SearXNG |
| atrium.lan | 192.168.1.202 | Synology DS916+ | DSM | NAS (~35 TB), CIFS shares |
| ds1571 | 192.168.1.201 | Synology DS1571 | DSM | NAS (Textz archive) |
| paris.lan | 192.168.1.17 | Desktop | Windows 11 | End-user workstation |
## Proxmox Inventory: phoebe.lan

### Virtual Machines

| VMID | Name | IP | Status | RAM | Cores | Purpose |
|---|---|---|---|---|---|---|
| 232 | orcus | 192.168.2.32 | running | 16 GB | 6 | Main dev workstation VM (Ubuntu 22.04). Hosts all web apps via Apache. |
| 252 | dev.lan | 192.168.2.52 | running | 48 GB | 4 | Development/staging server |
| 400 | ollama-phoebe | 192.168.2.40 | running | 8 GB | 4 | Ollama LLM (RTX 3060 passthrough) |
| 401 | web-fetch | 192.168.2.70 | running | 8 GB | 4 | Playwright / scraping and general web download (Ubuntu 24.04). 64 GB disk on local-zfs. |
| 231 | naiad01-u24DT | 192.168.2.31 | stopped | 32 GB | 6 | Web dev VM (Ubuntu 24.04 desktop) |
| 233 | orcus-clone | -- | stopped | 16 GB | 6 | Backup clone of orcus |
| 100 | u22-DT-LVM | -- | stopped | 32 GB | 8 | Template/test (Ubuntu 22.04 desktop, LVM) |
| 101 | u22-DT-ZFS | -- | stopped | 8 GB | 6 | Template/test (Ubuntu 22.04 desktop, ZFS) |
| 102 | zeno-ZFS | 192.168.2.51 | stopped | 24 GB | 6 | Dev VM |
| 103 | zeno-LVM | -- | stopped | 32 GB | 8 | Dev VM (LVM variant) |
| 104 | zeno-test | -- | stopped | 32 GB | 8 | Test VM |
### LXC Containers

| CTID | Name | IP | Status | RAM | Cores | Purpose |
|---|---|---|---|---|---|---|
| 300 | vpn-fetch | 192.168.1.19 (DHCP) | running | 512 MB | 2 | Alpine Linux. PIA WireGuard VPN + transmission-daemon for torrent downloads. |
| 301 | cursor-demo | 192.168.2.50 | stopped | 1 GB | 2 | Ubuntu 22.04. Cursor demo environment. |
| 302 | torrent-search | 192.168.2.60 | running | 2 GB | 2 | Ubuntu 22.04. Docker: Jackett + FlareSolverr for 1337x torrent search. PIA VPN. |
| 303 | merlin-db (legacy) | 192.168.2.81 | stopped | 2 GB | 2 | Ubuntu 22.04 + PostgreSQL 14. Kept for rollback safety. |
| 304 | merlin-db | 192.168.2.81 | running | 2 GB | 2 | Ubuntu 22.04 + PostgreSQL 16. Production database for Merlin. Migrated from 303 (2026-02-28). Apache: http://merlin-db.phoebe.lan/ — local info page with connection details. |
### Storage

| Name | Type | Total | Used | Available |
|---|---|---|---|---|
| local | dir | 6.6 GB | 10 MB | 6.5 GB |
| local-zfs | zfspool | 7.2 GB | 656 MB | 6.5 GB |
| atrium-images | cifs | 34.9 TB | 11.5 TB | 23.4 TB |
## Proxmox Inventory: titan.lan

### Virtual Machines (active)

| VMID | Name | IP | Status | RAM | Cores | Purpose |
|---|---|---|---|---|---|---|
| 102 | merlin01 | 192.168.1.11 | running | 4 GB | -- | Merlin agent / STT |
| 120 | zeus01 | 192.168.1.50 | running | 8 GB | -- | Zeus web app (production) |
| 121 | zeus01-dev | 192.168.1.51 | running | 8 GB | -- | Zeus web app (dev) |
| 400 | ollama-titan | 192.168.1.40 | running | 8 GB | -- | Ollama LLM (RTX 3060 passthrough) |
| 851 | j-u23s-01 | -- | running | 8 GB | -- | Julian's Ubuntu 23 server |
| 896 | lucerna | 192.168.1.96 | running | 4 GB | -- | Pi-hole / monitoring |
### Virtual Machines (stopped -- templates, legacy, test)

| VMID | Name | Notes |
|---|---|---|
| 100 | ubuntu20-template | Template |
| 101 | chronos00-clone01 | Legacy |
| 104 | u24.04-LTS | Template |
| 110 | athena01 | Auth service (stopped) |
| 210, 211 | jason210, jason211 | Test VMs |
| 220, 221 | iris0, iris1-u22-DT | ML/AI VMs (stopped) |
| 230, 231 | naiad-00, naiad01 | Web dev VMs (stopped, migrated to phoebe) |
| 701 | nextcloudtest | Test |
| 800 | surrealdb | Database VM (stopped) |
| 841 | janus01 | Messaging service (stopped) |
| 850, 860, 870 | Templates | Julian's Ubuntu/MX templates |
| 861 | j-u23d-01 | Julian's Ubuntu 23 desktop (stopped) |
| 990 | hephaestus | High-RAM compute (65 GB, stopped) |
| 995 | pve-test-ip72 | Network test |
### LXC Containers

| CTID | Name | IP | Status | RAM | Purpose |
|---|---|---|---|---|---|
| 880 | resolve-pg14 | 192.168.1.80 | running | 2 GB | PostgreSQL 14 |
| 897 | ca-server | 192.168.1.97 | running | 512 MB | AdGuard Home DNS + certificate authority |
| 981 | jason04 | 192.168.1.12 | running | 6 GB | Dev container |
| 982 | jason05 | 192.168.1.13 | running | 6 GB | Dev container (on 2.x subnet in DNS: 192.168.2.13) |
| 986 | jason02 | 192.168.1.9 | running | 6 GB | Dev container |
| 988 | jason03 | 192.168.1.10 | running | 6 GB | Dev container |
| 989 | jason06 | 192.168.1.14 | running | 6 GB | Dev container (FastAPI, Matomo, energy charts) |
| 890 | pharos01 | 192.168.1.99 | stopped | 1 GB | Network monitoring |
| 980 | jason-template | 192.168.1.12 | stopped | 6 GB | Template |
| 985 | jason01-defunct | 192.168.1.9 | stopped | 6 GB | Defunct |
| 987 | jason03-defunct | 192.168.1.10 | stopped | 6 GB | Defunct |
## External Servers

| Name | IP | Provider | Purpose |
|---|---|---|---|
| du1 | 161.35.36.240 | DigitalOcean | Production hosting (philanthropy-planner, concerts.freebyrd.live) |
| du2 | 142.93.45.250 | DigitalOcean | Dev/staging (dev.philoenic.com, dev.sso.merlin-ai.com) |

SSH access: `ssh jd@161.35.36.240`, `ssh jd@142.93.45.250`
## Key Services & Ports

### On orcus.lan (192.168.2.32)

| Service | Port | URL | Notes |
|---|---|---|---|
| Apache (web apps) | 443 | https://orcus.lan | Hosts ~15 vhosts (media-manager, zap, philoenic, prospecta, etc.) |
| LLM Gateway | 443 | https://llm.orcus.lan | Apache reverse proxy to Ollama servers |
| PostgreSQL 17 | 5432 | -- | zap database, zap-projects |
| Centrifugo | 25001 | ws://orcus.lan:25001 | WebSocket server for real-time updates |
### LLM Servers

| Host | Port | GPU | Models |
|---|---|---|---|
| delphi.lan | 11434 | 2x RTX 3090 (48 GB) | Large models (qwen3:30b, etc.) |
| ollama-phoebe.lan | 11434 | RTX 3060 (12 GB) | qwen3:8b (~63 tok/s) |
| ollama-titan.lan | 11434 | RTX 3060 (12 GB) | llama3.1:8b (~67 tok/s) |
### Search & Torrents

| Service | Host | Port | Purpose |
|---|---|---|---|
| SearXNG | delphi.lan | 8888 | Federated web + torrent search |
| Jackett | torrent-search.lan | 9117 | Torrent indexer proxy (1337x via FlareSolverr) |
| FlareSolverr | torrent-search.lan | 8191 | Cloudflare bypass for Jackett |
| Transmission | vpn-fetch.lan | 9091 | BitTorrent client behind PIA VPN |
### DNS & Network

| Service | Host | Port | Purpose |
|---|---|---|---|
| AdGuard Home | ca-server (192.168.1.97) | 80 | DNS server for *.lan, rewrite rules |
| Pi-hole | lucerna (192.168.1.96) | 80 | Secondary DNS / monitoring |
## AdGuard Home: backing up DNS rewrites (runbook)

DNS rewrites and most other AdGuard Home settings live in the main configuration file `AdGuardHome.yaml` on the host that runs AdGuard (on this network: ca-server / 192.168.1.97, UI at http://adguard.lan).
### Why back up

- Rewrite lists are large and easy to lose if the CT/VM is rebuilt or the workdir is wiped.
- The YAML can also hold API credentials and upstream settings — treat backups as sensitive.
### What to copy

- Primary: `AdGuardHome.yaml` from AdGuard's working directory (typical location: /opt/AdGuardHome/, or the path shown under Settings in the AdGuard UI). If unsure, confirm on the host via `AdGuardHome --help`, the service unit, or `ps`.
- Optional: the whole working directory, if you also want filter lists and the query-log policy; for DNS entries alone, the single YAML is usually enough.
### Where to put backups (NAS)

- Store under the BACKUPS share on atrium: `\\atrium\BACKUPS\adguard\` (create `adguard` once if missing). If BACKUPS is not mounted on the machine performing the copy, use another atrium backup path you already trust (see Storage (NAS Mounts on orcus.lan)).
- Use timestamped filenames, e.g. `AdGuardHome-20260322T040001Z.yaml` (UTC), or `.yaml.gz` when `ADGUARD_BACKUP_GZIP=1` (see script below).
- Restrict permissions on the NAS folder (admin accounts only).
### How (manual)

- SSH to ca-server (or whichever host runs AdGuard Home).
- Locate `AdGuardHome.yaml` (see above).
- Copy to the NAS with `scp`, `rsync`, or an SMB mount; keep multiple generations (the last 7 daily copies is a reasonable default).
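The manual steps can be sketched as a small function. Everything here is a hedged sketch: the function name, the retention logic, and the example paths are mine; the canonical automation is the tracked `scripts/backup-adguard-config.sh` described below.

```shell
backup_adguard() {
    # src: path to AdGuardHome.yaml; dest: backup directory (e.g. a mounted
    # BACKUPS share); keep: how many generations to retain (default 7).
    src=$1; dest=$2; keep=${3:-7}
    stamp=$(date -u +%Y%m%dT%H%M%SZ)          # UTC timestamp for the filename
    mkdir -p "$dest" || return 1
    cp "$src" "$dest/AdGuardHome-$stamp.yaml" || return 1
    # Retention: delete everything but the newest $keep backups.
    ls -1t "$dest"/AdGuardHome-*.yaml | tail -n +"$((keep + 1))" | xargs -r rm -f
}

# Example (paths assume the share is already mounted):
#   backup_adguard /opt/AdGuardHome/AdGuardHome.yaml /mnt/atrium-backups/adguard 7
```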
### Restore (outline)

- Stop AdGuard Home (method depends on the install: `systemctl stop AdGuardHome` or equivalent).
- Replace `AdGuardHome.yaml` with a known-good backup (or merge carefully).
- Start the service and verify rewrites in the UI (#dns_rewrites).
### Tracked automation (weekly)

- Script (in git): `scripts/backup-adguard-config.sh` — copies `AdGuardHome.yaml` to `BACKUPS\adguard` with UTC timestamps, optional `ADGUARD_BACKUP_GZIP=1`, and count-based retention (default: keep the newest 8 files).
- Runtime: ca-server (the AdGuard host). Two modes:
  - CIFS mounted: set `BACKUP_DEST_DIR` to the adguard directory on a mounted `//atrium.lan/BACKUPS` (or `//192.168.1.202/BACKUPS`) tree. `BACKUP_DEST_DIR` is required in this mode.
  - smbclient mode (typical on LXC): a kernel `mount -t cifs` may return "Operation not permitted" in unprivileged containers. Use `ADGUARD_BACKUP_USE_SMBCLIENT=1` plus `SMB_SERVER` (often 192.168.1.202 — atrium.lan may not resolve inside the CT), `SMB_SHARE=BACKUPS`, `SMB_SUBDIR=adguard`, and `SMB_CREDENTIALS_FILE` pointing to a chmod 600 file with `username=` / `password=` lines. Run the script as root so it can read /opt/AdGuardHome/AdGuardHome.yaml (root-owned, 0600).
- Deployed (ca-server, 2026-03): script at /usr/local/sbin/backup-adguard-config.sh; SMB credentials at /home/jd/.smb-atrium-backups; root crontab, weekly on Sunday at 04:00, in smbclient mode with SMB_SERVER=192.168.1.202. Log: /var/log/adguard-config-backup.log.
- Deploy / updates: copy the script from the repo to /usr/local/sbin/ and `chmod 755` it.
- Cron: example schedule `0 4 * * 0` (Sunday 04:00). Examples: infra/cron/ca-server-adguard-backup.cron.example (CIFS) and the root crontab on ca-server (smbclient — see above).
- systemd timer: an alternative to cron — the same script run by a weekly one-shot timer, if you prefer.
- Security: backup files can contain API credentials — do not commit them; restrict permissions on the NAS folder; keep SMB credential files `chmod 600`.
- Test backup (mandatory before enabling cron): run the script once manually with the same environment the cron job will use. Verify: exit code 0; a new `AdGuardHome-*.yaml` or `.yaml.gz` under `\\atrium\BACKUPS\adguard\`; non-zero size; plausible YAML (e.g. `dns:` / rewrite-related keys). Only then install the crontab line.
- Do not commit `AdGuardHome.yaml` or SMB passwords to git.
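For reference, a root crontab line matching the deployed smbclient-mode setup could look like the following. All values are taken from the deployment notes above, but the authoritative copy is the root crontab on ca-server, so treat this as a sketch, not a paste-ready line.

```
0 4 * * 0 ADGUARD_BACKUP_USE_SMBCLIENT=1 SMB_SERVER=192.168.1.202 SMB_SHARE=BACKUPS SMB_SUBDIR=adguard SMB_CREDENTIALS_FILE=/home/jd/.smb-atrium-backups /usr/local/sbin/backup-adguard-config.sh >> /var/log/adguard-config-backup.log 2>&1
```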
## New Vhost and DNS Checklist

Use this checklist whenever adding a new hostname for a Zap app or any other local web service.

1. Choose the canonical hostname
   - prefer short service-facing hostnames when that matches existing conventions
   - keep the repo/app slug and the public hostname conceptually separate if that makes the URL cleaner
   - decide whether the hostname is a local `.lan` host or a public domain/subdomain before making server changes
2. Add the web-server routing
   - create or update the Apache vhost/reverse-proxy config under infra/apache/
   - set the correct `ServerName`
   - point `DocumentRoot` or the proxy target at the real app entry point
   - enable the site if needed and reload Apache
   - verify Apache config validity before and after the reload
3. Add the matching DNS entry
   - before bulk edits: ensure the AdGuard Home configuration backup is current (especially before adding many rewrites or rebuilding ca-server)
   - for new `.lan` hostnames, add the DNS rewrite in AdGuard Home: http://adguard.lan/#dns_rewrites
   - for new public domains or subdomains, add DNS via Namecheap instead
   - do not assume a vhost is reachable until DNS has been added as part of the same task
4. Verify reachability in the right order
   - confirm hostname resolution first
   - then verify the landing page
   - then verify a health endpoint or equivalent low-cost endpoint
   - only after routing and DNS both work should the hostname be treated as browser-testable
5. Update documentation
   - update this file if the hostname becomes part of the continuing platform inventory
   - add the hostname to the relevant app docs or README when useful
   - update any project metadata that depends on the canonical URL

### Notes

- `.lan` => AdGuard Home
- public domain/subdomain => Namecheap
- if DNS is not ready yet, a temporary path-based route on an already-working hostname can be useful for short-term testing, but it should not replace the canonical hostname
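The verification order in step 4 can be sketched as a small helper. The function name is mine and the hostname in the usage line is hypothetical; add a third, app-specific health-endpoint check where one exists.

```shell
verify_vhost() {
    # Check a new hostname in the documented order: DNS resolution first,
    # then the landing page. Distinct return codes show which step failed.
    host=$1
    getent hosts "$host" > /dev/null || { echo "DNS not resolving: $host"; return 1; }
    curl -skf "https://$host/" -o /dev/null || { echo "landing page not reachable: $host"; return 2; }
    echo "OK: $host resolves and serves a page"
}

# Usage (hypothetical hostname):
#   verify_vhost newapp.orcus.lan
```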
## Local TLS and CA Reality

### Current orcus TLS state

orcus subdomains are currently served by the local mkcert wildcard certificate, not by CA-server-issued per-host certificates.

- Verified active Apache cert paths on orcus include:
  - /var/www/zap/infra/ssl/orcus.lan+wildcard.pem
  - /var/www/zap/infra/ssl/orcus.lan+wildcard-key.pem
- Verified live TLS checks for extract.orcus.lan and polyglot.orcus.lan showed:
  - issuer: mkcert development CA
  - SAN includes `DNS:*.orcus.lan`
- Treat this as the current pragmatic working state, not as proof that the planned CA-server migration has already been completed.
### What ca-server.lan is currently doing

ca-server.lan (192.168.1.97) is currently a manual local CA workstation, not an automated issuing service such as step-ca.

- Verified CA material on ca-server.lan:
  - /home/jd/easy-rsa/pki/ca.crt
  - /home/jd/easy-rsa/pki/private/ca.key
- Verified CA workflow documentation on ca-server.lan:
  - /home/jd/cert-docs/README_atrium-lan_CA_DNS_CERTS.md
- The documented workflow is manual:
  - generate a key + CSR
  - define SANs explicitly
  - sign with the Easy-RSA CA
  - copy/install the cert and key manually on the target host
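The first two manual steps (key + CSR with explicit SANs) can be sketched with plain `openssl`; the hostname is hypothetical, and the Easy-RSA commands in the comment are the stock `easyrsa` subcommands, not the exact invocation documented on ca-server.lan.

```shell
# Generate a key + CSR with an explicit SAN for a hypothetical new host.
HOST=newhost.lan
openssl req -new -newkey rsa:2048 -nodes \
    -keyout "$HOST.key" -out "$HOST.csr" \
    -subj "/CN=$HOST" \
    -addext "subjectAltName=DNS:$HOST"

# Then sign with the Easy-RSA CA on ca-server.lan, e.g.:
#   cd ~/easy-rsa && ./easyrsa import-req /path/to/newhost.lan.csr newhost.lan
#   ./easyrsa sign-req server newhost.lan
```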
### Currently verified CA-managed names / artifacts

- atrium.lan
  - verified CA-issued cert on ca-server.lan
  - SAN includes `DNS:atrium.lan` and `IP:192.168.1.202`
- zap-msg.janus01.lan
  - verified CA-issued cert on ca-server.lan
  - SAN includes `DNS:zap-msg.janus01.lan`
- delphi.lan
  - cert, CSR, and key artifacts present on ca-server.lan
- backend.crt
  - older/internal CA-managed cert artifact present on ca-server.lan
### Required future migration rule for orcus

- Do not assume Windsurf plans, chat summaries, or agent output mean the CA migration has already happened.
- Before claiming a host uses the CA-server workflow, verify both:
  - the Apache `SSLCertificateFile` / `SSLCertificateKeyFile` paths
  - the live presented certificate via `openssl s_client`
- For each new orcus hostname, do one of the following explicitly:
  - document that it is temporarily using the current mkcert wildcard cert
  - or issue and install a dedicated CA-server certificate for that hostname
- Target end state for future cleanup:
  - generate a dedicated key + CSR on ca-server.lan
  - include exact SANs for the hostname(s)
  - sign with the Easy-RSA CA
  - install on orcus with Apache-readable permissions
  - update the Apache vhost to use the CA-issued files
  - reload Apache and verify with `openssl s_client`
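The live-certificate check can be wrapped as a small helper; the function name is mine, and the commented usage shows the `openssl s_client` invocation for one of the verified hostnames above.

```shell
# Summarise the issuer and SANs of a PEM certificate read from stdin.
cert_summary() {
    openssl x509 -noout -issuer -ext subjectAltName
}

# Live check against a vhost (expect the mkcert issuer on orcus until the
# CA-server migration is actually done):
#   echo | openssl s_client -connect extract.orcus.lan:443 \
#       -servername extract.orcus.lan 2>/dev/null | cert_summary
```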
## Storage (NAS Mounts on orcus.lan)

| Mount Point | Remote Share | Purpose |
|---|---|---|
| /mnt/atrium-shared | //atrium.lan/SHARED | Main shared storage |
| /mnt/atrium-data | //atrium.lan/DATA | Data archive |
| /mnt/media-downloader | //atrium.lan/SHARED/MOVIES-TV-AUDIO/MOVIES (in English) | Downloaded films (A-Z subfolders) |
| /mnt/merlin-backups | //atrium.lan/JULIAN/backups-orcus | Database backups (7 daily + 4 weekly + 3 monthly) |
| /mnt/nas/zap-recordings | //atrium.lan/DATA/zap-orcus | Zap recordings |
| /mnt/atrium-celine | //atrium.lan/CELINE | Celine's share |
| /mnt/ds1571-textz | //192.168.1.201/Textz | DS1571 text archive |
Content type paths on NAS:

| Type | Path | SMB Path |
|---|---|---|
| Films (English) | /mnt/media-downloader | \\atrium\SHARED\MOVIES-TV-AUDIO\MOVIES (in English) |
| TV | /mnt/atrium-shared/MOVIES-TV-AUDIO/TV | \\atrium\SHARED\MOVIES-TV-AUDIO\TV |
| Documentaries | /mnt/atrium-shared/MOVIES-TV-AUDIO/DOCUMENTARIES | \\atrium\SHARED\MOVIES-TV-AUDIO\DOCUMENTARIES |
| Animation | /mnt/atrium-shared/MOVIES-TV-AUDIO/ANIMATION | \\atrium\SHARED\MOVIES-TV-AUDIO\ANIMATION |
| Foreign | /mnt/atrium-shared/MOVIES-TV-AUDIO/MOVIES (not in English) | \\atrium\SHARED\MOVIES-TV-AUDIO\MOVIES (not in English) |
| Zap-Notes live assets (uploads + combined export saves) | /mnt/atrium-data/zap-orcus/zap-notes/assets | \\atrium\DATA\zap-orcus\zap-notes\assets |
## VPN Configuration

### vpn-fetch.lan (CT 300 on phoebe)

- OS: Alpine Linux
- VPN: PIA WireGuard (currently connected to a PIA server; region varies)
- Purpose: Transmission torrent downloads routed through the VPN
- PostUp route: `ip route add 192.168.0.0/16 via 192.168.1.1` (LAN access preserved)

### torrent-search.lan (CT 302 on phoebe)

- OS: Ubuntu 22.04
- VPN: PIA WireGuard (UK region preferred, auto-reconnect every 12 h via cron)
- Purpose: Jackett + FlareSolverr Docker containers for 1337x torrent search
- PostUp route: `ip route add 192.168.0.0/16 via 192.168.2.1` (LAN access preserved)
- Reconnect script: /usr/local/bin/pia-reconnect.sh
- Cron: `0 */12 * * * /usr/local/bin/pia-reconnect.sh >> /var/log/pia-reconnect.log 2>&1`
- Docker: Jackett (port 9117) + FlareSolverr (port 8191); compose file at /opt/torrent-search/docker-compose.yml
- Jackett API key: stored in media-manager private/credentials
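For context, the PostUp routes above live in the PIA WireGuard configs consumed by wg-quick. A minimal sketch for torrent-search.lan follows; only the PostUp line is taken from this doc, while the file path and the remaining Interface keys are assumptions.

```ini
# e.g. /etc/wireguard/pia.conf (path assumed; PIA provisions the real values)
[Interface]
# PrivateKey, Address, DNS: as issued by PIA
PostUp = ip route add 192.168.0.0/16 via 192.168.2.1
```

Without the PostUp route, the default route through the tunnel would also capture LAN traffic; re-adding 192.168.0.0/16 via the local gateway keeps both subnets reachable.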
## Web Apps on orcus.lan

All served via Apache with TLS. At present, the working local subdomains on orcus still primarily use the local mkcert wildcard certificate for *.orcus.lan; they have not yet been fully migrated to CA-server-issued per-host certs. Each vhost maps a subdomain to a /var/www/&lt;project&gt;/ directory.

| Subdomain | Project | Repo | Notes |
|---|---|---|---|
| zap.orcus.lan | Zap platform | merlinai-com/zap | Main platform: mail, calendar, projects, recordings |
| zap-notes.orcus.lan | Zap-Notes | merlinai-com/zap (apps/zap-notes) | Notebook OCR, Postgres on phoebe, CRM bridge to zap-mail |
| media-manager.orcus.lan | Media Manager | merlinai-com/media-manager | Film/TV discovery, torrent search & download |
| philoenic.orcus.lan | Philoenic | merlinai-com/philoenic.com | Wine events (local dev; deployed to dev.philoenic.com) |
| prospecta.orcus.lan | Prospecta (legacy PHP on disk) | merlinai-com/prospecta-felix | Felix app; greenfield design/docs/mockups: merlinai-com/prospecta |
| philanthropy-planner.orcus.lan | Philanthropy Planner | merlinai-com/philanthropy-planner | CC data explorer (deployed to du1) |
| llm.orcus.lan | LLM Gateway | (part of zap) | Reverse proxy to Ollama servers |
| merlin-agent.orcus.lan | Merlin Agent | -- | Agent interface |
| status.orcus.lan | Status page | (part of zap) | System status dashboard |
## DNS Entries (AdGuard Rewrites)

Managed at http://adguard.lan/#dns_rewrites. Grouped by IP:

### 192.168.1.x

| IP | Hostnames |
|---|---|
| .9 | energystats.lan |
| .10 | sybil.lan |
| .11 | merlin01.lan, merlin.lan, merlin-stt.lan |
| .12 | jason04, jason04.lan, whisper-v1.jason04.lan |
| .14 | jason06, jason06.lan, charts.energystats.lan, fastapi.jason06.lan, matomo.jason06.lan, test.multipage.lan, tls.jason06.lan |
| .17 | paris.lan |
| .19 | vpn-fetch.lan |
| .20 | ovid.lan, sibyl, sibyl.ovid.lan |
| .21 | juno.lan, wss.juno.lan |
| .22 | proxmox.lan, titan.lan |
| .23 | hyperion, hyperion.lan |
| .40 | ollama-titan.lan |
| .43 | delphi01, delphi01.lan |
| .44 | delphi-dt-12.lan |
| .50 | zeus01, zeus01.lan, zeus.lan, athena.zeus01.lan, energycharts.zeus01.lan, rd.merlin.lan, sso.zeus01.lan |
| .51 | zeus01-dev, zeus01-dev.lan, pindar.lan, athena.zeus01-dev.lan, energycharts.zeus01-dev.lan, jd.merlin.lan, tls.zeus01-dev.lan |
| .52 | pindar2.lan |
| .53 | neon.lan, test-fastcgi |
| .54 | argon.lan |
| .70 | hephaestus, hephaestus.lan |
| .71 | nextcloud-test.lan |
| .80 | resolve-pg14.lan |
| .96 | lucerna.lan, pi-hole.lan |
| .97 | adguard.lan, ca-server, ca-server.lan |
| .99 | pharos, pharos.lan, pharos01.lan |
| .200 | ds916 |
| .201 | ds1571 |
| .202 | atrium.lan |
### 192.168.2.x

| IP | Hostnames |
|---|---|
| .13 | jason05 |
| .15 | delphi, delphi.lan |
| .16 | phoebe, phoebe.lan |
| .21 | iris1, iris1.lan |
| .31 | naiad01, naiad01.lan, naiad.lan, + sub-vhosts (aifutures, energystats, felix-project, freebird, getzap, pdf-ops, philoenic, philoenos) |
| .32 | orcus.lan, + sub-vhosts (biz-cards, energystats-app, energystats, extract, file-finder, llm, media-downloader, media-manager, merlin-agent, merlin, pdf-extraction, philanthropy-planner, philoenic, prospecta, status, zap-cal, zap-mail, zap, zap-notes) |
| .40 | ollama-phoebe.lan |
| .41 | janus01, janus01.lan, zap-msg.janus01.lan |
| .50 | cursor-demo.lan |
| .51 | zeno, zeno.lan |
| .52 | dev, dev.lan |
| .60 | torrent-search.lan |
| .80 | surrealdb01, surrealdb01.lan |
| .81 | merlin-db, merlin-db.phoebe.lan |