AstrBot Deployment and Configuration

Deployment Environment

  • AstrBot v4.23.5
  • Mac mini 2024 M4 (macOS Tahoe 26.4.1)
  • OrbStack (Docker & Docker Compose)

Deployment

mkdir ~/astrbot && cd ~/astrbot

touch docker-compose.yml && nano docker-compose.yml

Adjust the relevant parts of docker-compose.yml to match your environment.

docker-compose.yml
name: astrbot
services:
  # shipyard-neo sandbox environment
  bay:
    image: ghcr.io/astrbotdevs/shipyard-neo-bay:latest
    container_name: bay
    ports:
      - "8114:8114"
    networks:
      - astrbot-network
    volumes:
      # Docker socket — Bay creates sandbox containers dynamically
      - /var/run/docker.sock:/var/run/docker.sock
      # Config file
      - ~/astrbot/bay-config.yml:/app/config.yml:ro
      # SQLite database persistence
      - bay-data:/app/data
      # Cargo storage persistence
      - bay-cargos:/var/lib/bay/cargos
    environment:
      - BAY_CONFIG_FILE=/app/config.yml
      - BAY_DATA_DIR=/app/data
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8114/health" ]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 15s
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
    restart: unless-stopped
  astrbot:
    image: soulter/astrbot:latest
    container_name: astrbot
    volumes:
      - ~/astrbot/data:/AstrBot/data
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "6185:6185"
    networks:
      - astrbot-network
    environment:
      TZ: Asia/Shanghai
    depends_on:
      bay:
        condition: service_healthy
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "3"
    restart: unless-stopped
  # Local speech-to-text service (optional)
  whisper-local:
    image: onerahmet/openai-whisper-asr-webservice:latest
    container_name: whisper-local
    networks:
      - astrbot-network
    environment:
      ASR_MODEL: base
    restart: unless-stopped

networks:
  astrbot-network:
    name: astrbot-network
    driver: bridge

volumes:
  bay-data:
    name: bay-data
  bay-cargos:
    name: bay-cargos
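
Before pulling anything, it helps to validate the compose file. A minimal sketch (run from ~/astrbot; the echoed messages are just for readability):

```shell
# Validate docker-compose.yml without creating or starting any containers.
# Run this from the directory that holds docker-compose.yml (~/astrbot here).
if docker compose config --quiet >/dev/null 2>&1; then
  result="compose: OK"
else
  result="compose: validation failed (is Docker running?)"
fi
echo "$result"
```

Run without --quiet, `docker compose config` also prints the fully resolved file (variables and relative paths expanded), which is handy for spotting mount-path mistakes.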

Below is the configuration file for shipyard-neo. Adjust it to match your environment; see the official documentation for details.

bay-config.yml
server:
  host: "0.0.0.0"
  port: 8114

database:
  # SQLite for single-instance deployment.
  # For HA / multi-instance, switch to PostgreSQL:
  #   url: "postgresql+asyncpg://user:pass@db-host:5432/bay"
  url: "sqlite+aiosqlite:///./data/bay.db"
  echo: false

driver:
  type: docker

  # Pull latest images when creating new sandboxes.
  # Production recommendation: "always" ensures you always get the latest image.
  image_pull_policy: always

  docker:
    socket: "unix:///var/run/docker.sock"

    # Bay in container, Ship/Gull in container — use container network direct connect
    connect_mode: container_network

    # Shared network name (must match the network in docker-compose.yml)
    network: "astrbot-network"

    # Disable host port mapping — sandbox containers don't need to be reachable
    # from outside the Docker network, reducing attack surface.
    publish_ports: false
    host_port: null

cargo:
  root_path: "/var/lib/bay/cargos"
  default_size_limit_mb: 1024
  mount_path: "/workspace"

security:
  # CHANGE-ME: set a strong random key (e.g. `openssl rand -hex 32`)
  api_key: "CHANGE ME"
  allow_anonymous: false

# Container proxy environment injection.
# When enabled, Bay injects HTTP(S)_PROXY and NO_PROXY into sandbox containers.
proxy:
  enabled: false
  # http_proxy: "http://proxy.example.com:7890"
  # https_proxy: "http://proxy.example.com:7890"
  # Optional extra entries to append to default NO_PROXY list
  # no_proxy: "my-internal.service"

# Warm Pool — pre-start standby sandbox instances to reduce cold-start latency.
# When a user creates a sandbox, Bay will first try to claim an available warm instance,
# delivering near-instant startup instead of waiting for container boot.
warm_pool:
  enabled: true
  warmup_queue_workers: 2          # Concurrent warmup workers
  warmup_queue_max_size: 256       # Maximum queue depth
  warmup_queue_drop_policy: "drop_newest"
  warmup_queue_drop_alert_threshold: 50
  interval_seconds: 30             # Pool maintenance scan interval
  run_on_startup: true

profiles:
  # ── Standard Python sandbox ────────────────────────
  - id: python-default
    description: "Standard Python sandbox with filesystem and shell access"
    image: "ghcr.io/astrbotdevs/shipyard-neo-ship:latest"
    runtime_type: ship
    runtime_port: 8123
    resources:
      cpus: 1.0
      memory: "1g"
    capabilities:
      - filesystem  # includes upload/download
      - shell
      - python
    idle_timeout: 1800  # 30 minutes
    warm_pool_size: 1   # Keep 1 pre-warmed instance ready
    # Environment variables injected into the runtime container (available in Python and Shell)
    # Example: env: { TZ: "Asia/Shanghai", LANG: "en_US.UTF-8", CUSTOM_VAR: "value" }
    env: {}
    # Optional profile-level proxy override
    # proxy:
    #   enabled: false

  # ── Data Science sandbox (more resources) ──────────
  - id: python-data
    description: "Data science sandbox with extra CPU and memory"
    image: "ghcr.io/astrbotdevs/shipyard-neo-ship:latest"
    runtime_type: ship
    runtime_port: 8123
    resources:
      cpus: 2.0
      memory: "4g"
    capabilities:
      - filesystem  # includes upload/download
      - shell
      - python
    idle_timeout: 1800
    warm_pool_size: 1
    env: {}

  # ── Browser + Python multi-container sandbox ───────
  - id: browser-python
    description: "Browser automation with Python backend"
    containers:
      - name: ship
        image: "ghcr.io/astrbotdevs/shipyard-neo-ship:latest"
        runtime_type: ship
        runtime_port: 8123
        resources:
          cpus: 1.0
          memory: "1g"
        capabilities:
          - python
          - shell
          - filesystem  # includes upload/download
        primary_for:
          - filesystem
          - python
          - shell
        env: {}
      - name: browser
        image: "ghcr.io/astrbotdevs/shipyard-neo-gull:latest"
        runtime_type: gull
        runtime_port: 8115
        resources:
          cpus: 1.0
          memory: "2g"
        capabilities:
          - browser
        env: {}
    idle_timeout: 1800
    warm_pool_size: 1

gc:
  # Enable automatic GC for production
  enabled: true
  run_on_startup: true
  interval_seconds: 300  # 5 minutes

  # Instance identifier — MUST be unique in multi-instance deployments
  instance_id: "bay-prod"

  idle_session:
    enabled: true
  expired_sandbox:
    enabled: true
  orphan_cargo:
    enabled: true
  orphan_container:
    # Enable in production to clean up leaked containers.
    # Safe as long as instance_id is unique per Bay instance.
    enabled: true
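
The security.api_key placeholder above must be replaced before exposing Bay. A minimal sketch mirroring the openssl hint in the config comment:

```shell
# Generate a 64-character hex key for bay-config.yml's security.api_key.
KEY="$(openssl rand -hex 32)"
echo "api_key: \"$KEY\""
```

Keep the generated value around: the same string is entered later in the AstrBot dashboard as the Shipyard Neo access token.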

Finally, pull the images and start the services.

sudo docker compose pull
sudo docker compose up -d
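
After `up -d`, you can wait for Bay to report healthy from the host by polling the same /health endpoint the compose healthcheck uses. A small sketch; the URL assumes the default 8114 port mapping:

```shell
# Poll Bay's /health endpoint until it answers or we give up.
# BAY_URL assumes the default "8114:8114" port mapping from docker-compose.yml.
BAY_URL="${BAY_URL:-http://localhost:8114}"
status="not healthy"
for _ in 1 2 3 4 5; do
  if curl -fsS "$BAY_URL/health" >/dev/null 2>&1; then
    status="healthy"
    break
  fi
  sleep 2
done
echo "bay: $status"
```

`docker compose ps` shows the same health state as reported by the container healthcheck, and astrbot will only start once bay is healthy (the depends_on condition above).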

Configuration

Once the services are up, open <ip>:6185 in a browser to reach the AstrBot dashboard.

Add a Model Provider

Click "New", find the model provider you want to add, then fill in the API Key and API Base URL according to the provider's API documentation.

For 阿里云百炼 (Alibaba Cloud Bailian), you can select OpenAI Compatible.

For the speech-to-text provider, select Whisper (API), set the API Base URL to http://whisper-local:9000/v1, and set the model ID to base. The whisper-local hostname resolves because both containers sit on the shared astrbot-network, which is why no host port mapping is needed.

Modify the Default Configuration

Model

  • Enable speech-to-text (optional): True
  • Default speech-to-text model: (select the STT model you just added)

Web Search

Using the web search feature requires registering for Tavily.

  • Enable web search: True
  • Web search provider: Tavily
  • Tavily API Key: (enter your Tavily API Key)

Computer Use

  • Runtime environment: sandbox
  • Require AstrBot admin privileges: (as appropriate)
  • Sandbox driver: shipyard_neo
  • Shipyard Neo API Endpoint: http://bay:8114
  • Shipyard Neo access token: (the security.api_key value from bay-config.yml)
  • Shipyard Neo Profile: python-default / python-data / browser-python
  • Shipyard Neo sandbox time-to-live (seconds): 3600

Other Settings

  • Streaming output: True
  • Platforms that do not support streaming replies: real-time segmented replies
  • Real-world time awareness: True
  • Max tool-call rounds: 50
  • Tool-call timeout (seconds): 600
  • Tool-call mode: full

Persona (Optional)

  • Write a complete system prompt
  • Select the available function tools and Skills

Connect an Instant Messaging Platform

Connecting a personal WeChat account is recommended; refer to the official documentation for configuration.