# Docker with HAProxy - Microservices Architecture

A Docker infrastructure project with a reverse proxy (HAProxy), web services (Hugo, PlantUML), monitoring tools (Grafana, Prometheus, Node Exporter), and an availability dashboard (Uptime Kuma).
## Objectives

- Centralize access to several internal services through a single reverse proxy (HAProxy).
- Monitor service health with Prometheus + Grafana.
- Visualize availability with Uptime Kuma.
- Deploy an optimized static Hugo site behind Varnish and HAProxy.
## Architecture overview

```mermaid
graph TD;
    subgraph Internet
        A[User]
    end
    subgraph ReverseProxy
        B[HAProxy]
        V[Varnish]
    end
    subgraph Apps
        H[Hugo Nginx]
        K[Uptime Kuma]
        G[Grafana]
        P[Prometheus]
        N[Node Exporter]
        U[UnifiedPush]
        M[PlantUML]
    end
    A --> B
    B --> V
    V --> H
    B --> K
    B --> G
    B --> U
    B --> M
    G --> P
    P --> N
```
## Docker Compose stack

### docker-compose.yml

Note that Varnish and UnifiedPush, shown in the diagram above, are not included in this excerpt.

```yaml
version: "3.9"

services:
  haproxy:
    image: haproxy:2.9
    container_name: haproxy
    ports:
      - "80:80"
      - "443:443"
      - "8404:8404"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/usr/local/etc/haproxy/certs:ro
    networks:
      - reverse_proxy
    restart: unless-stopped

  hugo_nginx:
    image: nginx:alpine
    container_name: hugo_nginx
    volumes:
      - ./site:/usr/share/nginx/html:ro
    networks:
      - reverse_proxy

  plantuml:
    image: plantuml/plantuml-server:latest
    container_name: plantuml
    networks:
      - reverse_proxy

  uptime_kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime_kuma
    volumes:
      - uptime_kuma_data:/app/data
    networks:
      - reverse_proxy

  prometheus:
    image: prom/prometheus:v3.6.0
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks:
      - reverse_proxy

  node_exporter:
    image: prom/node-exporter:v1.9.0
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    volumes:
      - /:/host:ro,rslave
    networks:
      - reverse_proxy

  grafana:
    image: grafana/grafana:12.2.0
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    driver: bridge

volumes:
  grafana_data:
  uptime_kuma_data:
```
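The Compose file mounts a `./prometheus.yml` that is not shown here. A minimal sketch, assuming default metrics ports (`node_exporter` on 9100, Grafana's built-in `/metrics` on 3000) and HAProxy's Prometheus exporter enabled on the stats port:

```yaml
# prometheus.yml — minimal scrape configuration (sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['node_exporter:9100']

  - job_name: haproxy
    # Assumes the prometheus-exporter service is enabled on the stats listener
    metrics_path: /metrics
    static_configs:
      - targets: ['haproxy:8404']

  - job_name: grafana
    static_configs:
      - targets: ['grafana:3000']
```

Container names double as DNS names on the `reverse_proxy` bridge network, so Prometheus can reach each target directly.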
## HAProxy configuration excerpt

```haproxy
global
    stats socket /run/haproxy/admin.sock mode 660 level admin
    log stdout format raw local0

defaults
    log     global
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    redirect scheme https

frontend https_in
    bind *:443 ssl crt /usr/local/etc/haproxy/certs/
    acl host_hugo     hdr(host) -i hugo.monsite.fr
    acl host_grafana  hdr(host) -i grafana.monsite.fr
    acl host_plantuml hdr(host) -i plantuml.monsite.fr
    acl host_kuma     hdr(host) -i status.monsite.fr
    use_backend hugo_backend     if host_hugo
    use_backend grafana_backend  if host_grafana
    use_backend plantuml_backend if host_plantuml
    use_backend kuma_backend     if host_kuma

backend hugo_backend
    server hugo hugo_nginx:80 check

backend grafana_backend
    server grafana grafana:3000 check

backend plantuml_backend
    server plantuml plantuml:8080 check

backend kuma_backend
    server kuma uptime_kuma:3001 check

listen stats
    bind *:8404
    stats enable
    stats uri /stats
```
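For Prometheus to scrape HAProxy itself, the built-in exporter (available in HAProxy 2.0+) can be exposed on the same stats listener. A sketch of the extended `listen stats` section:

```haproxy
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    # Serve Prometheus metrics on /metrics of the stats port
    http-request use-service prometheus-exporter if { path /metrics }
```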
## Launch

```shell
docker compose up -d
```

Once started, the services are reachable at:

- https://hugo.monsite.fr → Portfolio
- https://grafana.monsite.fr → Monitoring and dashboards
- https://plantuml.monsite.fr → Online PlantUML editor
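The `https_in` frontend loads every certificate found in `./certs`, so at least one PEM file must exist there before startup. For local testing, a self-signed certificate can be generated with `openssl` (a sketch; the `monsite.fr` wildcard and file names are illustrative):

```shell
# Generate a self-signed certificate for local testing and bundle
# cert + key into a single PEM, as HAProxy's "crt" directive expects.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout certs/monsite.key \
  -out certs/monsite.crt \
  -days 365 \
  -subj "/CN=*.monsite.fr"
cat certs/monsite.crt certs/monsite.key > certs/monsite.pem
rm certs/monsite.key certs/monsite.crt
```

In production, these files would instead come from a real CA such as Let's Encrypt.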
## Monitoring

- Prometheus collects metrics from `node_exporter`, `haproxy`, and `grafana`.
- Grafana displays the performance dashboards.
- Uptime Kuma continuously monitors the exposed URLs.

Note: this project is inspired by a real production environment, simplified for demonstration purposes in this portfolio.
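Uptime Kuma probes the services from the outside; container-level checks can complement it with Compose healthchecks. A sketch for the `hugo_nginx` service (the `wget` probe relies on BusyBox tools shipped in `nginx:alpine`):

```yaml
  hugo_nginx:
    image: nginx:alpine
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
```

With this in place, `docker compose ps` reports each container as healthy or unhealthy, independently of the external probes.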