Docker Swarm Cluster Report

Generated: 2025-11-29 17:13:31 UTC

Swarm Cluster Overview

Node Status

| HOSTNAME   | STATUS | AVAILABILITY | MANAGER STATUS | ENGINE VERSION |
|------------|--------|--------------|----------------|----------------|
| p0-compute | Ready  | Active       | Reachable      | 29.0.4         |
| p1-control | Ready  | Active       | Reachable      | 29.0.4         |
| p2-control | Ready  | Active       | Reachable      | 29.0.4         |
| p3-control | Ready  | Active       | Leader         | 29.0.4         |
| p4-compute | Ready  | Active       | Reachable      | 29.0.4         |
| p5-compute | Ready  | Active       | Reachable      | 29.0.4         |
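The node status table above has the shape of `docker node ls` output; a minimal sketch of how it can be collected (assumes the Docker CLI on a manager node — the template fields are standard `docker node ls` format fields):

```shell
# Hedged sketch: print the node table with only the columns shown above.
# Wrapped in a function because it needs a live Docker daemon on a
# manager node.
node_table() {
  docker node ls --format \
    'table {{.Hostname}}\t{{.Status}}\t{{.Availability}}\t{{.ManagerStatus}}\t{{.EngineVersion}}'
}
```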

Cluster Details:

  • Cluster ID: iutii5nymo40dsgcuxnfdh6jr
  • Created: 2025-11-13 14:34:48.038680238 +0000 UTC
  • Total Nodes: 6
  • Manager Nodes: 6
  • Engine Version: 29.0.4
  • Operating System: Ubuntu 24.04.3 LTS
  • Kernel: 6.8.0-88-generic

Hardware Details

p0-compute (Current Node)

Processor:

Architecture:                         x86_64
CPU(s):                               16
On-line CPU(s) list:                  0-15
Model name:                           AMD Ryzen 7 5800H with Radeon Graphics
Thread(s) per core:                   2
Core(s) per socket:                   8
Socket(s):                            1
CPU(s) scaling MHz:                   83%
CPU max MHz:                          4463.0000
CPU min MHz:                          400.0000
NUMA node0 CPU(s):                    0-15

Memory:

               total        used        free      shared  buff/cache   available
Mem:            28Gi       2.2Gi       7.7Gi       2.4Mi        18Gi        26Gi
Swap:          8.0Gi          0B       8.0Gi

Storage:

Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   39G   55G  42% /
/dev/nvme0n1p2                     2.0G  198M  1.6G  11% /boot
/dev/nvme0n1p1                     1.1G  6.2M  1.1G   1% /boot/efi

Block Devices:

NAME                        SIZE TYPE MOUNTPOINT FSTYPE
nvme0n1                   953.9G disk            
├─nvme0n1p1                   1G part /boot/efi  vfat
├─nvme0n1p2                   2G part /boot      ext4
└─nvme0n1p3               950.8G part            LVM2_member
  └─ubuntu--vg-ubuntu--lv   100G lvm  /          ext4

Network Interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 10.0.4.11/24 metric 100 brd 10.0.4.255 scope global dynamic enp1s0
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
106: vethb68b357@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
211: veth281d180@if210: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
215: veth07de1c1@if214: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
219: vethbb8c162@if218: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
225: veth26a4063@if224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
231: veth7d33e67@if230: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
237: veth5e29c61@if236: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 

System Load:

 17:13:31 up 8 days,  4:59,  3 users,  load average: 0.30, 0.16, 0.19
Load: 0.30 0.16 0.19 1/849 3444576

Remote Nodes

p1-control (10.0.4.12)

  • CPU: Cortex-A76 (4 cores)
  • Memory: N/A
  • Storage: 235G total, 8.9G used, 216G available (4% used)
  • Uptime: up 1 week, 1 day, 4 hours, 58 minutes

p2-control (10.0.4.13)

  • CPU: Cortex-A76 (4 cores)
  • Memory: N/A
  • Storage: 117G total, 7.3G used, 105G available (7% used)
  • Uptime: up 1 week, 1 day, 4 hours, 56 minutes

p3-control (10.0.4.14)

  • CPU: Cortex-A76 (4 cores)
  • Memory: N/A
  • Storage: 470G total, 6.9G used, 444G available (2% used)
  • Uptime: up 1 week, 1 day, 4 hours, 55 minutes

p4-compute (10.0.4.15)

  • CPU: 12th Gen Intel(R) Core(TM) i5-12600K (5 cores)
  • Memory: N/A
  • Storage: 60G total, 22G used, 36G available (38% used)
  • Uptime: up 28 minutes

p5-compute (10.0.4.16)

  • CPU: 12th Gen Intel(R) Core(TM) i5-12600K (5 cores)
  • Memory: N/A
  • Storage: 60G total, 20G used, 38G available (34% used)
  • Uptime: up 28 minutes


Storage Infrastructure

Mounted Filesystems

Filesystem                         Size  Used Avail Use% Mounted on
10.0.4.10:/mnt/user/frostlabs      4.6T  815G  3.8T  18% /home/doc/projects/homelab

GlusterFS Shared Storage

localhost:swarm-data on /home/doc/projects/swarm-data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)
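The mount line above shows a FUSE-mounted GlusterFS volume named `swarm-data`. A hedged sketch of a basic health check (assumes the `gluster` CLI on a node in the trusted pool; both subcommands are standard GlusterFS CLI):

```shell
# Hedged sketch: basic health checks for the swarm-data Gluster volume.
# Wrapped in a function because it needs a live glusterd.
gluster_health() {
  gluster volume status swarm-data      # brick and process status
  gluster volume heal swarm-data info   # pending self-heal entries
}
```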

Application Data Usage:

1.3G	/home/doc/projects/swarm-data/n8n
815M	/home/doc/projects/swarm-data/paperless
65M	/home/doc/projects/swarm-data/crowdsec
26M	/home/doc/projects/swarm-data/traefik
13M	/home/doc/projects/swarm-data/leantime
873K	/home/doc/projects/swarm-data/portainer
295K	/home/doc/projects/swarm-data/webfiles
148K	/home/doc/projects/swarm-data/authentik
101K	/home/doc/projects/swarm-data/cluster-reports
25K	/home/doc/projects/swarm-data/peertube
23K	/home/doc/projects/swarm-data/pulse
14K	/home/doc/projects/swarm-data/webservers
12K	/home/doc/projects/swarm-data/wiki
11K	/home/doc/projects/swarm-data/swarm-cluster-report.md
8.0K	/home/doc/projects/swarm-data/outline
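The per-application figures above have the shape of `du -sh` output sorted by size; a small sketch (the directory argument is the caller's choice — the path in the report is one example):

```shell
# Hedged sketch: summarize each application's disk usage under a data
# directory, largest first. `sort -rh` orders human-readable sizes
# descending.
app_usage() {
  du -sh "${1:?usage: app_usage <dir>}"/* 2>/dev/null | sort -rh
}
```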

Docker Storage

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          14        6         9.906GB   3.897GB (39%)
Containers      6         6         76.17MB   0B (0%)
Local Volumes   6         4         859.7MB   219B (0%)
Build Cache     43        0         2.718GB   2.718GB

Docker Volumes

DRIVER    VOLUME NAME
local     2a12ce4d80ed228df0407922ec7e7ab4ec377a5bf1ba705c66af91c55b2a5fe9
local     253ac8ad683409f2b8c98984020595bc40e44f39cc2be9dda83c52b3948a5288
local     e8b304d313cf8e6d5f916b407e3136eb8dce42113185395c0d4bdac187c54420
local     netdata_netdatacache
local     netdata_netdataconfig
local     netdata_netdatalib

Networking

Docker Networks

NETWORK ID     NAME              DRIVER    SCOPE
96516fc92029   bridge            bridge    local
108989d65846   docker_gwbridge   bridge    local
uqtj8gddx10e   frostlabs         overlay   swarm
91db06c956a7   host              host      local
wkdnemyo9t0m   ingress           overlay   swarm
d3024e25ac24   none              null      local

Overlay Network Details (frostlabs)

{
    "Name": "frostlabs",
    "Id": "uqtj8gddx10e5t4qdea05tt2c",
    "Created": "2025-11-29T17:04:44.054714019Z",
    "Scope": "swarm",
    "Driver": "overlay",
    "EnableIPv4": true,
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": null,
        "Config": [
            {
                "Subnet": "10.0.1.0/24",
                "IPRange": "",
                "Gateway": "10.0.1.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": true,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Options": {
        "com.docker.network.driver.overlay.vxlanid_list": "4097"
    },
    "Labels": {},
    "Peers": [
        {
            "Name": "4d3d131a9072",
            "IP": "10.0.4.12"
        },
        {
            "Name": "2dc3e26647be",
            "IP": "10.0.4.16"
        },
        {
            "Name": "0fa05fb4badc",
            "IP": "10.0.4.14"
        },
        {
            "Name": "ba45d88141ad",
            "IP": "10.0.4.13"
        },
        {
            "Name": "d5e101a6aa8a",
            "IP": "10.0.4.11"
        },
        {
            "Name": "532747b2bb73",
            "IP": "10.0.4.15"
        }
    ],
    "Containers": {
        "387433afd0a4949b3f19ca2de1d397cfe7e8ddcd784686e72cbefd8fa8aeb70e": {
            "Name": "netdata_netdata.vtgnge0wpukgwn3ct2c3ue81g.2uap80qv8lbrs5atilc2u3qia",
            "EndpointID": "cc36398dc0cf6273ba8b4743dacfe0d517d4dc8b76d60af1fba6186e2d7f4549",
            "MacAddress": "02:42:0a:00:01:75",
            "IPv4Address": "10.0.1.117/24",
            "IPv6Address": ""
        },
        "422044b5c056c63c580fc795e8519c603c8045562a14d9646d03274fd0acf324": {
            "Name": "core_agent.vtgnge0wpukgwn3ct2c3ue81g.3ftzdym5g2cej9i3rineisqs7",
            "EndpointID": "3c3bc64b2285a70a4587499c211c89333d816635232b571baee68314881c9d14",
            "MacAddress": "02:42:0a:00:01:5c",
            "IPv4Address": "10.0.1.92/24",
            "IPv6Address": ""
        },
        "65b52609f5a5b103f1ec3412de4afbcfb7dfe54a9705f6120b3d931ce5fe1aae": {
            "Name": "n8n_n8n.1.kv9s1hom7fk7olv4smstk7n43",
            "EndpointID": "686d537fe00c8f733eb41169fafb0ce560e3ac5df25031ad6302a0c12115e523",
            "MacAddress": "02:42:0a:00:01:6e",
            "IPv4Address": "10.0.1.110/24",
            "IPv6Address": ""
        },
        "7787e194dd2f76696d6858cceb4007bc4ab16e78b181f7a109388a23a92906d5": {
            "Name": "core_redis.1.j1qqhrwydjyrlpxyhrjryfmhi",
            "EndpointID": "7db1e8881a7ade0b3d3b9a751404b7358a6833c8c0b58b5929ff1b8cdfb7f9d9",
            "MacAddress": "02:42:0a:00:01:63",
            "IPv4Address": "10.0.1.99/24",
            "IPv6Address": ""
        },
        "9ff8ccaa9e50caeafa27c2404aefe9306ac3ca65177d33c8100e453bb6eeaac2": {
            "Name": "dozzle_dozzle.vtgnge0wpukgwn3ct2c3ue81g.xylxcjit84hequnjrbxbt5088",
            "EndpointID": "bd7250dc9c019897f468d46569b7206a34e0004f3f58e1382b35637e6d18cee6",
            "MacAddress": "02:42:0a:00:01:6c",
            "IPv4Address": "10.0.1.108/24",
            "IPv6Address": ""
        },
        "d773d035f9402422f6f1d0fd37a14d9f741d32576a59c52337fd5dfa5914871d": {
            "Name": "peertube_peertube.1.xkkvqxwhryep4enozc0zsfbf1",
            "EndpointID": "420575b413b02b6ecaf3e8c46fdb9cab0dbadb16b053c97011497f9f5f96a4e2",
            "MacAddress": "02:42:0a:00:01:7f",
            "IPv4Address": "10.0.1.127/24",
            "IPv6Address": ""
        },
        "lb-frostlabs": {
            "Name": "frostlabs-endpoint",
            "EndpointID": "0a1eb5fb6b06df20ff9f0cbc75a04936a0028441138579bab0f1401730559377",
            "MacAddress": "02:42:0a:00:01:60",
            "IPv4Address": "10.0.1.96/24",
            "IPv6Address": ""
        }
    },
    "Status": {
        "IPAM": {
            "Subnets": {
                "10.0.1.0/24": {
                    "IPsInUse": 58,
                    "DynamicIPsAvailable": 198
                }
            }
        }
    }
}

Published Ports

adminer_adminer: 
core_agent: 
core_authentik_server: 
core_authentik_worker: 
core_portainer: *:9000->9000/tcp
core_redis: 
core_traefik: *:80->80/tcp, *:443->443/tcp, *:8082->8080/tcp
dozzle_dozzle: *:8080->8080/tcp
n8n_n8n: *:5678->5678/tcp
netdata_netdata: *:19999->19999/tcp
paperless_paperless_redis: 
paperless_paperless_webserver: 
peertube_peertube: 
peertube_postgres: *:5432->5432/tcp
peertube_redis: 
tracker_tracker-nginx: *:8180->80/tcp
wiki_wiki: *:3000->3000/tcp

Deployed Services

Stacks

NAME        SERVICES
adminer     1
core        6
dozzle      1
n8n         1
netdata     1
paperless   2
peertube    3
tracker     1
wiki        1

Services

ID             NAME                            MODE         REPLICAS   IMAGE                                        PORTS
m9omofpn9bw6   adminer_adminer                 replicated   1/1        adminer:latest                               
sj19vr2nw0kp   core_agent                      global       6/6        portainer/agent:latest                       
5k4qkbgfeys8   core_authentik_server           replicated   1/1        ghcr.io/goauthentik/server:2025.10.0         
65qy6wmjbdsn   core_authentik_worker           replicated   1/1        ghcr.io/goauthentik/server:2025.10.0         
xng4hh1x7spf   core_portainer                  replicated   1/1        portainer/portainer-ce:latest                *:9000->9000/tcp
cd0cvtzw8j0t   core_redis                      replicated   1/1        redis:alpine                                 
rjsk8v0lebc0   core_traefik                    replicated   1/1        traefik:v3.6.1                               *:80->80/tcp, *:443->443/tcp, *:8082->8080/tcp
i6mu3tf3u5ef   dozzle_dozzle                   global       6/6        amir20/dozzle:latest                         *:8080->8080/tcp
pp0cz85w2kdi   n8n_n8n                         replicated   1/1        n8nio/n8n:latest                             *:5678->5678/tcp
kpe5e0aez8j0   netdata_netdata                 global       6/6        netdata/netdata:stable                       *:19999->19999/tcp
imy76qvd40hq   paperless_paperless_redis       replicated   1/1        redis:alpine                                 
eqf52b9sgmcf   paperless_paperless_webserver   replicated   1/1        ghcr.io/paperless-ngx/paperless-ngx:latest   
ysy9weeo8n41   peertube_peertube               replicated   1/1        chocobozzz/peertube:production-bookworm      
uyo4qxsa8r4v   peertube_postgres               replicated   1/1        postgres:17-alpine                           *:5432->5432/tcp
703wunzrgu99   peertube_redis                  replicated   1/1        redis:7-alpine                               
p32115n71f0s   tracker_tracker-nginx           replicated   1/1        nginx:alpine                                 *:8180->80/tcp
sujusu1pzal8   wiki_wiki                       replicated   1/1        ghcr.io/requarks/wiki:2                      *:3000->3000/tcp

Service Distribution by Stack

Stack: adminer

adminer_adminer.1	p4-compute	Running 7 minutes ago

Stack: core

core_agent.5lfnhogleddgqenj9pb3bnbiq	p5-compute	Running 8 minutes ago
core_agent.9k5wdjeo2pn6bgm6t3kni105w	p4-compute	Running 8 minutes ago
core_agent.lmqjm4tqh1dw12rn49b5zmsv7	p1-control	Running 8 minutes ago
core_agent.v9u4wpdzqvjsw1d7v2nqnudjv	p2-control	Running 8 minutes ago
core_agent.vbd9ze987kd06kb6oorfziuga	p3-control	Running 8 minutes ago
core_agent.vtgnge0wpukgwn3ct2c3ue81g	p0-compute	Running 8 minutes ago
core_authentik_server.1	p1-control	Running 8 minutes ago
core_authentik_worker.1	p5-compute	Running 8 minutes ago
core_portainer.1	p2-control	Running 8 minutes ago
core_redis.1	p0-compute	Running 8 minutes ago
core_traefik.1	p3-control	Running 8 minutes ago

Stack: dozzle

dozzle_dozzle.5lfnhogleddgqenj9pb3bnbiq	p5-compute	Running 7 minutes ago
dozzle_dozzle.9k5wdjeo2pn6bgm6t3kni105w	p4-compute	Running 7 minutes ago
dozzle_dozzle.lmqjm4tqh1dw12rn49b5zmsv7	p1-control	Running 7 minutes ago
dozzle_dozzle.v9u4wpdzqvjsw1d7v2nqnudjv	p2-control	Running 7 minutes ago
dozzle_dozzle.vbd9ze987kd06kb6oorfziuga	p3-control	Running 7 minutes ago
dozzle_dozzle.vtgnge0wpukgwn3ct2c3ue81g	p0-compute	Running 7 minutes ago

Stack: n8n

n8n_n8n.1	p0-compute	Running 6 minutes ago

Stack: netdata

netdata_netdata.5lfnhogleddgqenj9pb3bnbiq	p5-compute	Running 6 minutes ago
netdata_netdata.9k5wdjeo2pn6bgm6t3kni105w	p4-compute	Running 6 minutes ago
netdata_netdata.lmqjm4tqh1dw12rn49b5zmsv7	p1-control	Running 6 minutes ago
netdata_netdata.v9u4wpdzqvjsw1d7v2nqnudjv	p2-control	Running 6 minutes ago
netdata_netdata.vbd9ze987kd06kb6oorfziuga	p3-control	Running 6 minutes ago
netdata_netdata.vtgnge0wpukgwn3ct2c3ue81g	p0-compute	Running 6 minutes ago

Stack: paperless

paperless_paperless_redis.1	p5-compute	Running 6 minutes ago
paperless_paperless_webserver.1	p4-compute	Running 6 minutes ago

Stack: peertube

peertube_peertube.1	p0-compute	Running 6 minutes ago
peertube_postgres.1	p2-control	Running 6 minutes ago
peertube_redis.1	p1-control	Running 6 minutes ago

Stack: tracker

tracker_tracker-nginx.1	p5-compute	Running 6 minutes ago

Stack: wiki

wiki_wiki.1	p4-compute	Running 6 minutes ago

Container Distribution

Current Node Containers

NAMES                                                                 IMAGE                                     STATUS
peertube_peertube.1.xkkvqxwhryep4enozc0zsfbf1                         chocobozzz/peertube:production-bookworm   Up 6 minutes
n8n_n8n.1.kv9s1hom7fk7olv4smstk7n43                                   n8nio/n8n:latest                          Up 6 minutes (healthy)
netdata_netdata.vtgnge0wpukgwn3ct2c3ue81g.2uap80qv8lbrs5atilc2u3qia   netdata/netdata:stable                    Up 7 minutes (healthy)
dozzle_dozzle.vtgnge0wpukgwn3ct2c3ue81g.xylxcjit84hequnjrbxbt5088     amir20/dozzle:latest                      Up 7 minutes
core_redis.1.j1qqhrwydjyrlpxyhrjryfmhi                                redis:alpine                              Up 8 minutes (healthy)
core_agent.vtgnge0wpukgwn3ct2c3ue81g.3ftzdym5g2cej9i3rineisqs7        portainer/agent:latest                    Up 8 minutes

All Nodes Container Count

  • p0-compute: 6 containers
  • p1-control: 5 containers
  • p2-control: 5 containers
  • p3-control: 4 containers
  • p4-compute: 6 containers
  • p5-compute: 6 containers

Node Labels & Roles

p0-compute: task=compute
p1-control: task=control
p2-control: task=control
p3-control: task=control
p4-compute: task=compute
p5-compute: task=compute

Cluster Configuration

{
    "NodeID": "vtgnge0wpukgwn3ct2c3ue81g",
    "NodeAddr": "10.0.4.11",
    "LocalNodeState": "active",
    "ControlAvailable": true,
    "Error": "",
    "RemoteManagers": [
        {
            "NodeID": "vbd9ze987kd06kb6oorfziuga",
            "Addr": "10.0.4.14:2377"
        },
        {
            "NodeID": "vtgnge0wpukgwn3ct2c3ue81g",
            "Addr": "10.0.4.11:2377"
        },
        {
            "NodeID": "5lfnhogleddgqenj9pb3bnbiq",
            "Addr": "10.0.4.16:2377"
        },
        {
            "NodeID": "9k5wdjeo2pn6bgm6t3kni105w",
            "Addr": "10.0.4.15:2377"
        },
        {
            "NodeID": "lmqjm4tqh1dw12rn49b5zmsv7",
            "Addr": "10.0.4.12:2377"
        },
        {
            "NodeID": "v9u4wpdzqvjsw1d7v2nqnudjv",
            "Addr": "10.0.4.13:2377"
        }
    ],
    "Nodes": 6,
    "Managers": 6,
    "Cluster": {
        "ID": "iutii5nymo40dsgcuxnfdh6jr",
        "Version": {
            "Index": 135075
        },
        "CreatedAt": "2025-11-13T14:34:48.038680238Z",
        "UpdatedAt": "2025-11-29T10:25:03.236675911Z",
        "Spec": {
            "Name": "default",
            "Labels": {},
            "Orchestration": {
                "TaskHistoryRetentionLimit": 5
            },
            "Raft": {
                "SnapshotInterval": 10000,
                "KeepOldSnapshots": 0,
                "LogEntriesForSlowFollowers": 500,
                "ElectionTick": 10,
                "HeartbeatTick": 1
            },
            "Dispatcher": {
                "HeartbeatPeriod": 5000000000
            },
            "CAConfig": {
                "NodeCertExpiry": 7776000000000000
            },
            "TaskDefaults": {},
            "EncryptionConfig": {
                "AutoLockManagers": false
            }
        },
        "TLSInfo": {
            "TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBajCCARCgAwIBAgIUOkVEhR7ir7HWSBC4Eoj71XBnCUgwCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjUxMTEzMTQzMDAwWhcNNDUxMTA4MTQz\nMDAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABPqdHvAk/XP51IaMZE8GWt2h90o8JsKoo1O8VS6Qs4yJ0N0HZ0vHmiIm9T3i\nkJ6Vhj6IfSBNBReFe3MVX3i4FvejQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBSzhamb4jVwspS8Ceflk62jvU9AwDAKBggqhkjO\nPQQDAgNIADBFAiEAnduA5iS0SMX/jllJv8Y/XNgVoDcOTXs5gntn5uZOhwYCIEGD\nEsN73RfTYcLhmknpiMiiDnqAY6infpMC27w66iNz\n-----END CERTIFICATE-----\n",
            "CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
            "CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE+p0e8CT9c/nUhoxkTwZa3aH3SjwmwqijU7xVLpCzjInQ3QdnS8eaIib1PeKQnpWGPoh9IE0FF4V7cxVfeLgW9w=="
        },
        "RootRotationInProgress": false,
        "DefaultAddrPool": [
            "10.0.0.0/8"
        ],
        "SubnetSize": 24,
        "DataPathPort": 4789
    }
}

System Health

Service Health Status

adminer_adminer	1/1
core_agent	6/6
core_authentik_server	1/1
core_authentik_worker	1/1
core_portainer	1/1
core_redis	1/1
core_traefik	1/1
dozzle_dozzle	6/6
n8n_n8n	1/1
netdata_netdata	6/6
paperless_paperless_redis	1/1
paperless_paperless_webserver	1/1
peertube_peertube	1/1
peertube_postgres	1/1
peertube_redis	1/1
tracker_tracker-nginx	1/1
wiki_wiki	1/1

Node Availability

p0-compute	Ready	Active	Reachable
p1-control	Ready	Active	Reachable
p2-control	Ready	Active	Reachable
p3-control	Ready	Active	Leader
p4-compute	Ready	Active	Reachable
p5-compute	Ready	Active	Reachable

Resource Utilization (Current Node)

CPU Load:  0.36, 0.17, 0.20
Memory: Total: 28Gi, Used: 2.2Gi, Free: 7.7Gi, Available: 26Gi
Disk: Total: 98G, Used: 39G, Available: 55G, Use%: 42%

Docker Daemon Status

 Server Version: 29.0.4
 Storage Driver: overlay2
 Logging Driver: json-file
 Cgroup Driver: systemd
 Kernel Version: 6.8.0-88-generic
 Operating System: Ubuntu 24.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 28.31GiB
 Docker Root Dir: /var/lib/docker

Recommendations

Maintenance Tasks

  • Review and prune unused Docker images: docker image prune -a
  • Review and prune unused volumes: docker volume prune
  • Check for service updates: Review each service for newer image versions
  • Backup GlusterFS volumes to external storage
  • Review container logs for errors: docker service logs <service-name>
  • Test failover by cordoning a node: docker node update --availability drain <node>
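The prune tasks above can be batched; a cautious sketch that only prints the commands unless explicitly armed (the `RUN=1` convention is this sketch's own, not a Docker flag):

```shell
# Hedged sketch: dry-run wrapper around the prune commands recommended
# above. Nothing is executed unless RUN=1 is set in the environment.
maybe() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}
maybe docker image prune -a -f   # unused images (3.9GB reclaimable above)
maybe docker volume prune -f     # dangling volumes
maybe docker builder prune -f    # build cache (2.7GB reclaimable above)
```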

Monitoring Suggestions

  • Consider deploying Prometheus + Grafana for metrics
  • Implement centralized logging (ELK, Loki, or similar)
  • Set up alerting for node failures
  • Monitor GlusterFS health and replication status
  • Track container restart counts
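For the restart-count suggestion above, a minimal sketch (needs a live daemon; `RestartCount` is a standard field in `docker inspect` output):

```shell
# Hedged sketch: print name and restart count for every running
# container; a nonzero count is worth investigating.
restart_counts() {
  docker ps -q | while read -r id; do
    docker inspect --format '{{.Name}} {{.RestartCount}}' "$id"
  done
}
```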

Security Audit

  • Ensure all services use specific version tags (not latest)
  • Review published ports and restrict unnecessary exposure
  • Implement Docker secrets for sensitive configuration
  • Enable TLS for all inter-service communication
  • Review CrowdSec rules and ban lists
  • Audit Traefik configuration for security headers
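The first audit item above (avoiding `latest`) can be checked mechanically. The awk filter below is pure; the commented `docker service ls` line shows how it would be fed on a live cluster:

```shell
# Hedged sketch: flag services whose image is pinned to the mutable
# :latest tag. Expects "name image" pairs on stdin.
flag_latest() {
  awk '$2 ~ /:latest$/ {print $1 " uses a mutable :latest tag"}'
}
# Live usage (manager node):
#   docker service ls --format '{{.Name}} {{.Image}}' | flag_latest
```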

Summary

This Docker Swarm cluster report was generated on 2025-11-29 17:13:31 UTC.

Cluster Statistics:

  • Nodes: 6 (6 managers)
  • Stacks: 10
  • Services: 17
  • Containers (current node): 6
  • Images: 14
  • Networks: 6
  • Volumes: 6

Status: Cluster is operational and all manager nodes are reachable.


Report generated by: generate-swarm-report.sh
Script location: /home/doc/projects/swarm-data/cluster-reports
