Advanced containerization: Docker and Kubernetes
Docker in production
Optimized multi-stage images
In production, image size and security are critical. Multi-stage builds let you separate the build environment from the runtime:
# Stage 1: build (dev dependencies are needed to compile)
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Build, then strip dev dependencies so only the runtime deps are copied forward
RUN npm run build && npm prune --omit=dev
# Stage 2: production
FROM node:20-alpine AS production
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001 -G appgroup
WORKDIR /app
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/main.js"]
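A `.dockerignore` file (a minimal example, adjust to your repository) keeps `node_modules`, build output, and local secrets out of the build context, which speeds up `COPY . .` and avoids leaking files into image layers:

```
# .dockerignore
node_modules
dist
.git
.env
*.md
```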
Container security
# Scan images for vulnerabilities
# docker scout cves mon-image:latest
# trivy image mon-image:latest

# Use distroless images to minimize the attack surface
# (reuses the "builder" stage from the multi-stage build above)
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/node_modules /app/node_modules
WORKDIR /app
CMD ["dist/main.js"]
Docker Compose for development
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: builder
    volumes:
      - ./src:/app/src
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru

volumes:
  pgdata:
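Compose automatically merges a `compose.override.yaml` next to the main file, which is a convenient place for dev-only tweaks. A hypothetical override (the `npm run dev` script and inspector port are assumptions about the app):

```yaml
# compose.override.yaml — merged automatically by `docker compose up`
services:
  api:
    command: npm run dev        # hot reload instead of the built bundle
    environment:
      - NODE_ENV=development
    ports:
      - "9229:9229"             # Node.js inspector for debugging
```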
Kubernetes: orchestration in production
Cluster architecture
graph TB
    subgraph "Control Plane"
        API[API Server]
        ETCD[etcd]
        SCHED[Scheduler]
        CM[Controller Manager]
    end
    subgraph "Worker Node 1"
        K1[Kubelet]
        P1[Pod: API v1]
        P2[Pod: API v2]
    end
    subgraph "Worker Node 2"
        K2[Kubelet]
        P3[Pod: API v3]
        P4[Pod: Worker]
    end
    subgraph "Worker Node 3"
        K3[Kubelet]
        P5[Pod: Frontend]
        P6[Pod: Redis]
    end
    API --> K1
    API --> K2
    API --> K3
    API --> ETCD
    SCHED --> API
    CM --> API
Deployments and update strategies
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
        version: v2.1.0
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values: [api]
                topologyKey: kubernetes.io/hostname
      containers:
        - name: api
          image: registry.example.com/api:v2.1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
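The rolling-update settings above protect availability during deploys; a PodDisruptionBudget extends that protection to voluntary evictions such as node drains. A minimal sketch, assuming the same labels and namespace as the Deployment:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
  namespace: production
spec:
  minAvailable: 2        # never evict below 2 ready pods
  selector:
    matchLabels:
      app: api
```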
Horizontal Pod Autoscaler (HPA)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
Ingress and traffic management
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rpm: "100"  # 100 requests per minute per client IP
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3000
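The Ingress routes to a Service named `api` that the manifests above never define. A minimal sketch of that ClusterIP Service, assuming names and ports matching the Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api            # matches the Deployment's pod labels
  ports:
    - port: 3000        # port referenced by the Ingress backend
      targetPort: 3000  # containerPort in the pod spec
```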
Helm: the Kubernetes package manager
# values.yaml
replicaCount: 3

image:
  repository: registry.example.com/api
  tag: "v2.1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilization: 70
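These values are consumed by the chart's templates. A hypothetical excerpt from `templates/deployment.yaml` (file layout and names assumed) shows the wiring; note that `replicas` is omitted when autoscaling is enabled so the HPA owns the replica count:

```yaml
# templates/deployment.yaml (excerpt, hypothetical chart layout)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  template:
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```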
Installation and upgrade:
# Install
helm install api ./charts/api -f values-production.yaml -n production
# Upgrade
helm upgrade api ./charts/api --set image.tag=v2.2.0 -n production
# Roll back if something goes wrong
helm rollback api 1 -n production
# Release history
helm history api -n production