Docker in Practice: Case Studies

1. Web Application Deployment

1.1 Node.js Application Deployment

1.1.1 Application Structure

Project directory layout:

nodejs-app/
├── src/
│   ├── index.js
│   ├── routes/
│   └── controllers/
├── package.json
├── package-lock.json
├── .dockerignore
├── Dockerfile
└── docker-compose.yml

Application code:

// src/index.js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from Docker!',
    timestamp: new Date().toISOString()
  });
});

app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Dockerfile:

# Dockerfile
FROM node:18-alpine

# Set the working directory
WORKDIR /app

# Copy package manifests first to leverage the build cache
COPY package*.json ./

# Install production dependencies (--omit=dev replaces the deprecated --only=production)
RUN npm ci --omit=dev

# Copy application code
COPY src/ ./src/

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Switch to the non-root user
USER nodejs

# Expose the application port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"

# Start the application
CMD ["node", "src/index.js"]

Docker Compose:

# docker-compose.yml
version: '3.8'

services:
  app:
    build: .
    image: nodejs-app:latest
    container_name: nodejs-app
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - PORT=3000
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
    networks:
      - app-network
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  app-network:
    driver: bridge

Deployment script:

#!/bin/bash
# deploy.sh

set -e

echo "Building application..."
docker-compose build

echo "Starting application..."
docker-compose up -d

echo "Waiting for application to be healthy..."
timeout 60 bash -c 'until docker inspect --format="{{.State.Health.Status}}" nodejs-app | grep -q healthy; do sleep 2; done'

echo "Application deployed successfully!"
docker-compose ps
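The `timeout 60 bash -c 'until …'` line in the script above polls the container's health status every two seconds until it reports healthy or the deadline passes. The same wait-until-healthy loop can be sketched in Python; `get_status` here is a stand-in for a call to `docker inspect --format='{{.State.Health.Status}}'`:

```python
import time

def wait_until_healthy(get_status, timeout=60, interval=2):
    """Poll get_status() until it returns 'healthy' or the timeout expires.

    get_status is any zero-argument callable, e.g. a wrapper around
    `docker inspect --format={{.State.Health.Status}} <container>`.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "healthy":
            return True
        time.sleep(interval)
    return False

# Simulate a container that becomes healthy on the third probe.
statuses = iter(["starting", "starting", "healthy"])
print(wait_until_healthy(lambda: next(statuses), timeout=5, interval=0))  # → True
```

The `start_period` in the HEALTHCHECK plays the same role on the Docker side: probes that fail during that window do not count against `retries`.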

1.1.2 Multi-Stage Build Optimization

Optimized Dockerfile:

# Dockerfile
# Build stage
FROM node:18-alpine AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine

WORKDIR /app

# Install production dependencies only
COPY package*.json ./
RUN npm ci --omit=dev && \
    npm cache clean --force

# Copy build artifacts (do NOT copy the builder's node_modules,
# which contains dev dependencies and would overwrite the line above)
COPY --from=builder /app/dist ./dist

# Security hardening: create a non-root user and hand over ownership
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 && \
    chown -R nodejs:nodejs /app

USER nodejs

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"

CMD ["node", "dist/index.js"]

1.2 Python Application Deployment

1.2.1 Flask Application

Project structure:

flask-app/
├── app/
│   ├── __init__.py
│   ├── routes.py
│   └── models.py
├── requirements.txt
├── Dockerfile
└── docker-compose.yml

Application code:

# app/__init__.py
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    # Read the connection string from the environment (docker-compose.yml
    # sets DATABASE_URL); fall back to the in-network default.
    app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
        'DATABASE_URL', 'postgresql://user:pass@db:5432/mydb')
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    
    db.init_app(app)
    
    from app.routes import main
    app.register_blueprint(main)
    
    return app

Dockerfile:

# Dockerfile
FROM python:3.11-slim

# Environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        gcc \
        postgresql-client && \
    rm -rf /var/lib/apt/lists/*

# Set the working directory
WORKDIR /app

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app/ ./app/

# Create a non-root user
RUN useradd --create-home --shell /bin/bash app && \
    chown -R app:app /app

USER app

EXPOSE 5000

# Health check via stdlib urllib: no dependency on requests, and
# urlopen raises (nonzero exit) on connection errors and HTTP >= 400
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')"

CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:create_app()"]
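The CMD above pins Gunicorn to 4 workers. A widely cited guideline from the Gunicorn documentation is (2 × CPU cores) + 1; a quick sketch of that rule, so the value can be tuned to the CPU limits used elsewhere in this chapter:

```python
import os

def gunicorn_workers(cores=None):
    """Suggested Gunicorn worker count per the common (2 * cores) + 1 guideline."""
    if cores is None:
        cores = os.cpu_count() or 1
    return 2 * cores + 1

print(gunicorn_workers(cores=2))  # → 5
```

With the `cpus: '0.5'` limits shown later in the microservices Compose file, a single worker (plus threads) is usually the better fit; the formula is a starting point, not a rule.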

Docker Compose:

# docker-compose.yml
version: '3.8'

services:
  web:
    build: .
    image: flask-app:latest
    container_name: flask-app
    restart: unless-stopped
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network
    volumes:
      - ./app:/app/app:ro

  db:
    image: postgres:15-alpine
    container_name: flask-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:

2. Database Deployment

2.1 MySQL Master-Slave Replication

2.1.1 Master-Slave Architecture

Replication topology:

MySQL master-slave replication:
┌─────────────────┐
│  MySQL Master   │
│  (writes)       │
│  Port: 3306     │
└────────┬────────┘
         │
         │ replicates to
         ↓
┌─────────────────┐
│  MySQL Slave    │
│  (reads)        │
│  Port: 3307     │
└─────────────────┘

Docker Compose configuration:

# docker-compose.yml
version: '3.8'

services:
  mysql-master:
    image: mysql:8.0
    container_name: mysql-master
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=root123
      - MYSQL_DATABASE=mydb
      # The replication user is created by init-replication.sh below;
      # the official mysql image has no MYSQL_REPLICATION_* variables.
    ports:
      - "3306:3306"
    volumes:
      - mysql_master_data:/var/lib/mysql
      - ./master.cnf:/etc/mysql/conf.d/master.cnf:ro
    command:
      - --server-id=1
      - --log-bin=mysql-bin
      - --binlog-format=ROW
      - --gtid_mode=ON
      - --enforce-gtid-consistency=ON
    networks:
      - mysql-network

  mysql-slave:
    image: mysql:8.0
    container_name: mysql-slave
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=root123
      - MYSQL_DATABASE=mydb
    ports:
      - "3307:3306"
    volumes:
      - mysql_slave_data:/var/lib/mysql
      - ./slave.cnf:/etc/mysql/conf.d/slave.cnf:ro
    command:
      - --server-id=2
      - --log-bin=mysql-bin
      - --binlog-format=ROW
      - --gtid_mode=ON
      - --enforce-gtid-consistency=ON
      - --relay-log=relay-bin
    depends_on:
      - mysql-master
    networks:
      - mysql-network

networks:
  mysql-network:
    driver: bridge

volumes:
  mysql_master_data:
  mysql_slave_data:

Master configuration file:

# master.cnf
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog-format = ROW
gtid_mode = ON
enforce-gtid-consistency = ON
binlog-do-db = mydb

Slave configuration file:

# slave.cnf
[mysqld]
server-id = 2
log-bin = mysql-bin
binlog-format = ROW
gtid_mode = ON
enforce-gtid-consistency = ON
relay-log = relay-bin
read-only = 1

Replication setup script:

#!/bin/bash
# init-replication.sh

# Create the replication user on the master
docker exec -i mysql-master mysql -uroot -proot123 <<EOF
CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY 'repl123';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
EOF

# Show master status (informational; GTID auto-positioning is used below)
MASTER_STATUS=$(docker exec mysql-master mysql -uroot -proot123 -e "SHOW MASTER STATUS\G")
echo "Master status:"
echo "$MASTER_STATUS"

# Configure replication on the slave
docker exec -i mysql-slave mysql -uroot -proot123 <<EOF
CHANGE MASTER TO
  MASTER_HOST='mysql-master',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl123',
  MASTER_AUTO_POSITION=1;
START SLAVE;
EOF

# Check replication status on the slave
docker exec mysql-slave mysql -uroot -proot123 -e "SHOW SLAVE STATUS\G"
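The final `SHOW SLAVE STATUS\G` output is what tells you whether replication is actually running: `Slave_IO_Running` and `Slave_SQL_Running` must both be `Yes`, and `Seconds_Behind_Master` shows lag. A small Python sketch of parsing that `\G`-formatted output, useful for wiring the check into an automated script:

```python
def parse_slave_status(output):
    """Parse `SHOW SLAVE STATUS\\G` output into a field -> value dict."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def replication_ok(status):
    # Both replication threads must be running.
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

sample = """\
*************************** 1. row ***************************
               Slave_IO_Running: Yes
              Slave_SQL_Running: Yes
          Seconds_Behind_Master: 0
"""
print(replication_ok(parse_slave_status(sample)))  # → True
```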

2.2 Redis High-Availability Deployment

2.2.1 Redis Master-Slave with Sentinel

Master-slave-sentinel stack (Docker Compose):

# docker-compose.yml
version: '3.8'

services:
  redis-master:
    image: redis:7-alpine
    container_name: redis-master
    restart: unless-stopped
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
    volumes:
      - redis_master_data:/data
    networks:
      - redis-network

  redis-slave1:
    image: redis:7-alpine
    container_name: redis-slave1
    restart: unless-stopped
    command: redis-server --replicaof redis-master 6379 --appendonly yes
    depends_on:
      - redis-master
    ports:
      - "6380:6379"
    volumes:
      - redis_slave1_data:/data
    networks:
      - redis-network

  redis-slave2:
    image: redis:7-alpine
    container_name: redis-slave2
    restart: unless-stopped
    command: redis-server --replicaof redis-master 6379 --appendonly yes
    depends_on:
      - redis-master
    ports:
      - "6381:6379"
    volumes:
      - redis_slave2_data:/data
    networks:
      - redis-network

  redis-sentinel1:
    image: redis:7-alpine
    container_name: redis-sentinel1
    restart: unless-stopped
    command: redis-sentinel /etc/redis/sentinel.conf
    depends_on:
      - redis-master
    ports:
      - "26379:26379"
    volumes:
      - ./sentinel.conf:/etc/redis/sentinel.conf  # writable: Sentinel rewrites its config (use one copy per sentinel)
    networks:
      - redis-network

  redis-sentinel2:
    image: redis:7-alpine
    container_name: redis-sentinel2
    restart: unless-stopped
    command: redis-sentinel /etc/redis/sentinel.conf
    depends_on:
      - redis-master
    ports:
      - "26380:26379"
    volumes:
      - ./sentinel.conf:/etc/redis/sentinel.conf  # writable: Sentinel rewrites its config (use one copy per sentinel)
    networks:
      - redis-network

  redis-sentinel3:
    image: redis:7-alpine
    container_name: redis-sentinel3
    restart: unless-stopped
    command: redis-sentinel /etc/redis/sentinel.conf
    depends_on:
      - redis-master
    ports:
      - "26381:26379"
    volumes:
      - ./sentinel.conf:/etc/redis/sentinel.conf  # writable: Sentinel rewrites its config (use one copy per sentinel)
    networks:
      - redis-network

networks:
  redis-network:
    driver: bridge

volumes:
  redis_master_data:
  redis_slave1_data:
  redis_slave2_data:

Sentinel configuration:

# sentinel.conf
port 26379
# Needed (Redis >= 6.2) because the master is referenced by container name
sentinel resolve-hostnames yes
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 60000
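The quorum of 2 in `sentinel monitor` means at least two of the three sentinels must agree the master is unreachable before it is marked objectively down; the failover election additionally requires a majority of all known sentinels. A sketch of those two conditions:

```python
def can_mark_down(reporting, quorum=2):
    """Objective-down: enough sentinels agree the master is unreachable."""
    return reporting >= quorum

def can_failover(alive, total=3):
    """The failover leader election needs a majority of ALL known sentinels."""
    return alive >= total // 2 + 1

print(can_mark_down(2))       # → True  (quorum of 2 reached)
print(can_failover(alive=1))  # → False (1 of 3 sentinels cannot elect a leader)
```

This is why three sentinels is the practical minimum: with only two, losing one sentinel leaves no majority and no failover can happen.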

3. Microservices Architecture

3.1 Architecture Design

3.1.1 Architecture Diagram

Microservices architecture:

┌─────────────────────────────────────────┐
│         API Gateway (Nginx)             │
│         Port: 80                        │
└────────────┬────────────────────────────┘
             │
    ┌────────┼────────┬────────┬────────┐
    │        │        │        │        │
    ↓        ↓        ↓        ↓        ↓
┌────────┐┌────────┐┌────────┐┌────────┐┌────────┐
│ User   ││ Order  ││Product ││Payment ││ Notify │
│Service ││Service ││Service ││Service ││Service │
│ :3001  ││ :3002  ││ :3003  ││ :3004  ││ :3005  │
└────────┘└────────┘└────────┘└────────┘└────────┘
    │        │        │        │        │
    └────────┴────────┴────────┴────────┴────────┘
                      │
         ┌────────────┼────────────┐
         │            │            │
         ↓            ↓            ↓
    ┌────────┐  ┌────────┐  ┌──────────┐
    │ MySQL  │  │ Redis  │  │ RabbitMQ │
    │ :3306  │  │ :6379  │  │  :5672   │
    └────────┘  └────────┘  └──────────┘

3.1.2 Service Configuration

API Gateway and services (Docker Compose):

# docker-compose.yml
version: '3.8'

services:
  # API Gateway
  nginx:
    image: nginx:alpine
    container_name: api-gateway
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - user-service
      - order-service
      - product-service
    networks:
      - microservices-network

  # User service
  user-service:
    build: ./user-service
    image: user-service:latest
    container_name: user-service
    restart: unless-stopped
    environment:
      - SERVICE_NAME=user-service
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=user_db
      - DB_USER=root
      - DB_PASSWORD=root123
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - microservices-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # Order service
  order-service:
    build: ./order-service
    image: order-service:latest
    container_name: order-service
    restart: unless-stopped
    environment:
      - SERVICE_NAME=order-service
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=order_db
      - DB_USER=root
      - DB_PASSWORD=root123
      - RABBITMQ_HOST=rabbitmq
      - RABBITMQ_PORT=5672
    depends_on:
      mysql:
        condition: service_healthy
      rabbitmq:
        condition: service_started
    networks:
      - microservices-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # Product service
  product-service:
    build: ./product-service
    image: product-service:latest
    container_name: product-service
    restart: unless-stopped
    environment:
      - SERVICE_NAME=product-service
      - DB_HOST=mysql
      - DB_PORT=3306
      - DB_NAME=product_db
      - DB_USER=root
      - DB_PASSWORD=root123
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - microservices-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # MySQL database
  mysql:
    image: mysql:8.0
    container_name: mysql
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=root123
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init-db:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - microservices-network

  # Redis cache
  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - microservices-network

  # RabbitMQ message broker
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    restart: unless-stopped
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin123
    ports:
      - "15672:15672"
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    networks:
      - microservices-network

networks:
  microservices-network:
    driver: bridge

volumes:
  mysql_data:
  redis_data:
  rabbitmq_data:

Nginx configuration:

# nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream user-service {
        server user-service:3001;
    }

    upstream order-service {
        server order-service:3002;
    }

    upstream product-service {
        server product-service:3003;
    }

    server {
        listen 80;

        # User service
        location /api/users {
            proxy_pass http://user-service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Order service
        location /api/orders {
            proxy_pass http://order-service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Product service
        location /api/products {
            proxy_pass http://product-service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Health check
        location /health {
            return 200 'OK';
            add_header Content-Type text/plain;
        }
    }
}
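The gateway routes requests by path prefix, with `/health` handled locally. Nginx picks the longest matching prefix `location`; a sketch of that dispatch logic (the upstream URLs mirror the config above):

```python
# Prefix -> upstream, mirroring the nginx location blocks above.
ROUTES = {
    "/api/users": "http://user-service:3001",
    "/api/orders": "http://order-service:3002",
    "/api/products": "http://product-service:3003",
}

def route(path):
    """Pick the upstream whose location prefix is the longest match."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    if not matches:
        return None  # no upstream: handled by the gateway itself (or 404)
    return ROUTES[max(matches, key=len)]

print(route("/api/users/42"))  # → http://user-service:3001
print(route("/health"))        # → None
```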

3.2 Inter-Service Communication

3.2.1 REST API Communication

User service API:

// user-service/src/routes.js
const express = require('express');
const router = express.Router();
const axios = require('axios');

// Fetch a user together with their orders
router.get('/:id', async (req, res) => {
  try {
    const userId = req.params.id;

    // Load the user from the database (getUserFromDB is defined elsewhere)
    const user = await getUserFromDB(userId);

    // Call the order service for this user's orders
    const orders = await axios.get(`http://order-service:3002/api/orders/user/${userId}`);
    
    res.json({
      user,
      orders: orders.data
    });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

module.exports = router;
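Note that the route above returns a 500 for the whole request if the order-service call fails, even though the user record itself was available. A common refinement is graceful degradation: serve the user with an empty order list and a flag. A minimal Python sketch of that decision (`fetch_orders` stands in for the axios call):

```python
def get_user_view(user, fetch_orders):
    """Assemble the response, degrading gracefully when orders are unavailable."""
    try:
        orders = fetch_orders(user["id"])
        degraded = False
    except Exception:
        # Order service unreachable: return the user anyway, without orders.
        orders, degraded = [], True
    return {"user": user, "orders": orders, "degraded": degraded}

def failing(_user_id):
    raise ConnectionError("order-service unreachable")

print(get_user_view({"id": 1}, lambda uid: [{"order_id": 7}]))
print(get_user_view({"id": 1}, failing))  # degraded response, no 500
```

In production this is usually paired with a timeout and a circuit breaker, as mentioned in the best practices at the end of this chapter.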

3.2.2 Message Queue Communication

Order service publishing a message:

// order-service/src/messaging.js
const amqp = require('amqplib');

async function sendOrderMessage(order) {
  const connection = await amqp.connect('amqp://rabbitmq:5672');
  const channel = await connection.createChannel();
  
  const queue = 'order_queue';
  await channel.assertQueue(queue, { durable: true });
  
  const message = JSON.stringify(order);
  channel.sendToQueue(queue, Buffer.from(message), { persistent: true });
  
  console.log('Order message sent:', message);
  
  await channel.close();
  await connection.close();
}

module.exports = { sendOrderMessage };

Notification service consuming messages:

// notification-service/src/messaging.js
const amqp = require('amqplib');

async function receiveOrderMessage() {
  const connection = await amqp.connect('amqp://rabbitmq:5672');
  const channel = await connection.createChannel();
  
  const queue = 'order_queue';
  await channel.assertQueue(queue, { durable: true });
  
  channel.consume(queue, (msg) => {
    const order = JSON.parse(msg.content.toString());
    console.log('Received order:', order);
    
    // Send the notification
    sendNotification(order);
    
    channel.ack(msg);
  });
}

function sendNotification(order) {
  console.log(`Sending notification for order ${order.id}`);
}

receiveOrderMessage();
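The pattern above relies on three broker behaviors: messages are serialized and persisted (`durable` + `persistent`), held until the consumer acks, and redelivered if the consumer dies without acking. A toy in-memory model of those semantics, useful for reasoning about what `channel.ack(msg)` buys you:

```python
import json
from collections import deque

class DurableQueue:
    """Toy model of assertQueue/sendToQueue/consume-with-ack semantics."""
    def __init__(self):
        self.pending = deque()   # persisted, not yet delivered
        self.unacked = {}        # delivered, awaiting ack

    def send(self, payload):
        self.pending.append(json.dumps(payload))   # serialize like sendToQueue

    def consume(self):
        body = self.pending.popleft()
        tag = id(body)
        self.unacked[tag] = body                   # held until acked
        return tag, json.loads(body)

    def ack(self, tag):
        del self.unacked[tag]                      # now safe to discard

    def requeue_unacked(self):
        """What the broker does when a consumer dies without acking."""
        while self.unacked:
            _, body = self.unacked.popitem()
            self.pending.appendleft(body)

q = DurableQueue()
q.send({"id": 42, "total": 99.5})
tag, order = q.consume()
print(order["id"])                      # → 42
q.ack(tag)
print(len(q.pending), len(q.unacked))   # → 0 0
```

If the notification service crashes between `consume` and `ack`, the broker requeues the message and another consumer picks it up, which is exactly why the consumer acks only after the work is done.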

4. Production Best Practices

4.1 Security Configuration

4.1.1 Container Security

Security configuration checklist (Docker Compose):

# docker-compose.yml
version: '3.8'

services:
  app:
    image: myapp:latest
    container_name: myapp
    # Security options
    security_opt:
      - no-new-privileges:true
      - apparmor:docker-default
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    # Run as an unprivileged user
    user: "1000:1000"
    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    # Networking
    networks:
      - app-network
    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 10s
    # Logging
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "service,environment"

networks:
  app-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
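The `ipam` block pins the bridge network to 172.20.0.0/16, which makes firewall rules and allow-lists predictable. Subnet membership checks like the ones you would write for such rules are a one-liner with the stdlib `ipaddress` module:

```python
import ipaddress

# Matches the ipam subnet configured above.
subnet = ipaddress.ip_network("172.20.0.0/16")

print(ipaddress.ip_address("172.20.0.5") in subnet)   # → True  (a container address)
print(ipaddress.ip_address("172.21.0.5") in subnet)   # → False (outside the /16)
print(subnet.num_addresses)                           # → 65536
```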

4.1.2 Network Security

Network isolation configuration:

# docker-compose.yml
version: '3.8'

services:
  # Frontend
  frontend:
    image: nginx:alpine
    networks:
      - frontend-network
    ports:
      - "80:80"

  # Backend
  backend:
    image: myapp:latest
    networks:
      - frontend-network
      - backend-network
    environment:
      - DB_HOST=db

  # Database
  db:
    image: postgres:15-alpine
    networks:
      - backend-network
    volumes:
      - db_data:/var/lib/postgresql/data

networks:
  frontend-network:
    driver: bridge
  backend-network:
    driver: bridge
    internal: true  # internal network: containers on it have no external access

volumes:
  db_data:

4.2 Monitoring and Logging

4.2.1 Monitoring Setup

Prometheus + Grafana monitoring stack:

# docker-compose.yml
version: '3.8'

services:
  # Application service
  app:
    image: myapp:latest
    ports:
      - "3000:3000"
    networks:
      - monitoring-network
    labels:
      - "prometheus.io/scrape=true"
      - "prometheus.io/port=3000"
      - "prometheus.io/path=/metrics"

  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.enable-lifecycle'
    networks:
      - monitoring-network

  # Grafana
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin123
      - GF_INSTALL_PLUGINS=
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - monitoring-network

  # Node Exporter
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    ports:
      - "9100:9100"
    networks:
      - monitoring-network

  # cAdvisor
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest  # google/cadvisor on Docker Hub is deprecated
    container_name: cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - monitoring-network

networks:
  monitoring-network:
    driver: bridge

volumes:
  prometheus_data:
  grafana_data:

Prometheus configuration:

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'docker-containers'
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      - source_labels: [__meta_docker_container_label_prometheus_io_scrape]
        regex: true
        action: keep
      - source_labels: [__meta_docker_container_label_prometheus_io_path]
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_docker_container_label_prometheus_io_port]
        target_label: __address__
        regex: (.+):(?:\d+);(\d+)
        replacement: $1:$2
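The last relabel rule rewrites `__address__`: the two source labels are joined by the default `;` separator (giving something like `ip:exposed-port;label-port`), and the regex swaps in the port advertised via the `prometheus.io/port` label. The rewrite can be checked with `re.sub`, where Prometheus's `$1:$2` corresponds to Python's `\1:\2`:

```python
import re

# __address__ joined with the prometheus_io_port label by the default ';'.
joined = "172.20.0.5:8080;3000"

# Same regex as the relabel rule above; $1:$2 becomes \1:\2 in re syntax.
rewritten = re.sub(r"(.+):(?:\d+);(\d+)", r"\1:\2", joined)
print(rewritten)  # → 172.20.0.5:3000
```

So a container exposing 8080 but labeled `prometheus.io/port=3000` is scraped on 3000.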

4.2.2 Log Collection

Log collection with the ELK stack:

# docker-compose.yml
version: '3.8'

services:
  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - elk-network

  # Logstash
  logstash:
    image: docker.elastic.co/logstash/logstash:8.8.0
    container_name: logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
    networks:
      - elk-network

  # Kibana
  kibana:
    image: docker.elastic.co/kibana/kibana:8.8.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - elk-network

  # Application service
  app:
    image: myapp:latest
    container_name: myapp
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    networks:
      - elk-network

networks:
  elk-network:
    driver: bridge

volumes:
  elasticsearch_data:

Logstash configuration:

# logstash.conf
input {
  file {
    path => "/var/lib/docker/containers/*/*.log"
    codec => json
    type => "docker"
  }
}

filter {
  if [type] == "docker" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "docker-logs-%{+YYYY.MM.dd}"
  }
}
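The grok filter above splits each line into a timestamp, a log level, and the remaining message. A rough Python equivalent of that pattern shows what the named captures extract (the exact grok definitions of `TIMESTAMP_ISO8601` and `LOGLEVEL` are broader than this sketch):

```python
import re

# Rough equivalent of:
# %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}
LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?)\s+"
    r"(?P<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<msg>.*)"
)

m = LINE.match("2024-05-01T12:00:00Z ERROR connection refused")
print(m.group("level"), "|", m.group("msg"))  # → ERROR | connection refused
```

Lines that do not match leave grok's `_grokparsefailure` tag on the event, which is worth watching for when tuning the pattern.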

4.3 Backup and Restore

4.3.1 Data Backup

Backup script:

#!/bin/bash
# backup.sh

BACKUP_DIR="/backup"
DATE=$(date +%Y%m%d_%H%M%S)

# Create the backup directory
mkdir -p ${BACKUP_DIR}/${DATE}

# Dump all MySQL databases
docker exec mysql mysqldump -uroot -proot123 --all-databases > ${BACKUP_DIR}/${DATE}/mysql_backup.sql

# Back up Redis data (SAVE blocks until dump.rdb is written;
# BGSAVE returns immediately and could leave us copying a stale file)
docker exec redis redis-cli SAVE
docker cp redis:/data/dump.rdb ${BACKUP_DIR}/${DATE}/redis_backup.rdb

# Back up the Docker volume
docker run --rm \
  -v ${BACKUP_DIR}/${DATE}:/backup \
  -v mysql_data:/data \
  alpine tar czf /backup/mysql_volume.tar.gz -C /data .

# Compress the backup
cd ${BACKUP_DIR}
tar czf ${DATE}.tar.gz ${DATE}
rm -rf ${DATE}

# Delete backups older than 7 days
find ${BACKUP_DIR} -name "*.tar.gz" -mtime +7 -delete

echo "Backup completed: ${BACKUP_DIR}/${DATE}.tar.gz"
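The `find ... -mtime +7 -delete` line implements the retention policy. Because the archive names embed a `date +%Y%m%d_%H%M%S` timestamp, the same policy can also be expressed against the names themselves, which is handy when backups are shipped to object storage where `mtime` is unreliable:

```python
from datetime import datetime, timedelta

def expired(backups, now, keep_days=7):
    """Return archive names older than keep_days, mirroring find -mtime +7."""
    cutoff = now - timedelta(days=keep_days)
    out = []
    for name in backups:
        # Names follow the script's date +%Y%m%d_%H%M%S convention.
        stamp = datetime.strptime(name.removesuffix(".tar.gz"), "%Y%m%d_%H%M%S")
        if stamp < cutoff:
            out.append(name)
    return out

now = datetime(2024, 5, 10, 3, 0, 0)
print(expired(["20240501_030000.tar.gz", "20240509_030000.tar.gz"], now))
# → ['20240501_030000.tar.gz']
```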

4.3.2 Data Restore

Restore script:

#!/bin/bash
# restore.sh

BACKUP_FILE=$1
RESTORE_DIR="/tmp/restore_$$"

# Extract the backup archive
mkdir -p ${RESTORE_DIR}
tar xzf ${BACKUP_FILE} -C ${RESTORE_DIR}

# Restore the MySQL databases
docker exec -i mysql mysql -uroot -proot123 < ${RESTORE_DIR}/*/mysql_backup.sql

# Restore Redis data (restart so Redis reloads dump.rdb)
docker cp ${RESTORE_DIR}/*/redis_backup.rdb redis:/data/dump.rdb
docker restart redis

# Restore the Docker volume
docker run --rm \
  -v ${RESTORE_DIR}:/backup \
  -v mysql_data:/data \
  alpine sh -c "cd /data && tar xzf /backup/*/mysql_volume.tar.gz"

# Clean up temporary files
rm -rf ${RESTORE_DIR}

echo "Restore completed!"

5. Chapter Summary

5.1 Case Study Recap

5.1.1 Web Application Deployment

Web application deployment takeaways:
├─ Node.js apps: multi-stage build optimization
├─ Python apps: Gunicorn + Nginx
├─ Configuration: environment variables + config files
└─ Health checks: container health monitoring

Best practices:
├─ Use multi-stage builds to shrink images
├─ Run containers as a non-root user
├─ Configure health checks
├─ Limit container resources
└─ Set up log collection

5.1.2 Database Deployment

Database deployment takeaways:
├─ MySQL master-slave replication: high-availability architecture
├─ Redis: master-slave with Sentinel
├─ Data persistence: volume mounts
└─ Backup and restore: scheduled backups

Best practices:
├─ Persist data with volumes
├─ Configure replication for availability
├─ Back up data regularly
├─ Monitor database performance
└─ Set resource limits

5.1.3 Microservices Architecture

Microservices takeaways:
├─ Service decomposition: split by business capability
├─ API Gateway: single entry point
├─ Service discovery: registration and lookup
└─ Message queue: asynchronous communication

Best practices:
├─ Draw sensible service boundaries
├─ Use an API Gateway as the single entry point
├─ Monitor every service
├─ Decouple services with message queues
└─ Implement circuit breaking and graceful degradation

5.2 Next Chapter Preview

Next chapter: Advanced Docker Features

You will learn:

  • 🎯 Docker plugin mechanisms
  • 📊 Custom network drivers
  • 🔧 Storage plugin development
  • 🚀 Advanced orchestration techniques

📝 Exercises

Basics

  1. Web application deployment: deploy a Node.js application that:

    • Uses a multi-stage build
    • Has a health check
    • Runs as a non-root user
  2. Database deployment: deploy MySQL master-slave replication that:

    • Has replication configured
    • Persists its data
    • Fails over automatically
  3. Redis: deploy a Redis master-slave + Sentinel setup.

Intermediate

  1. Microservices: build a simple microservices stack containing:

    • An API Gateway
    • A user service
    • An order service
    • A database
  2. Monitoring: set up Prometheus + Grafana monitoring.

  3. Log collection: set up an ELK stack to collect logs.

Hands-On

  1. Full application: deploy a complete web application with:

    • Frontend + backend + database
    • Monitoring and logging configured
    • A CI/CD pipeline
    • Backup and restore
  2. Production environment: build a production setup that includes:

    • Multi-environment deployment
    • Monitoring and alerting
    • Log collection
    • Automated backups
    • Security hardening