Initial commit

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
commit a1b97f3e4b by jack, 2026-03-20 19:39:26 +07:00
28 changed files with 1220 additions and 0 deletions

.gitignore (vendored, new file)
@@ -0,0 +1,7 @@
inventory/group_vars/all.vault.yml
.vault-password-file
*.retry
__pycache__/
*.pyc
.DS_Store
.ansible_cache/

CLAUDE.md (new file)
@@ -0,0 +1,61 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Commands
```bash
# Prerequisites (once, on operator machine)
ansible-galaxy collection install community.general community.docker ansible.posix
echo "yourpassword" > ~/.vault-password-file && chmod 600 ~/.vault-password-file
# First-time server setup (run as root)
ansible-playbook playbooks/bootstrap.yml -u root
# Idempotent deploy (all subsequent runs)
ansible-playbook playbooks/deploy.yml
# Edit secrets
ansible-vault edit inventory/group_vars/all.vault.yml
# Check syntax without connecting
ansible-playbook playbooks/deploy.yml --syntax-check
# Dry run
ansible-playbook playbooks/deploy.yml --check
# Run only specific role
ansible-playbook playbooks/deploy.yml --tags base
ansible-playbook playbooks/deploy.yml --tags docker
ansible-playbook playbooks/deploy.yml --tags services
```
## Architecture
**Traffic flow:** Internet → Traefik (ports 80/443, TLS via Let's Encrypt ACME) → services. Ports 80 and 443 are open on the server.
**Secrets:** All secrets live in `inventory/group_vars/all.vault.yml` (Ansible Vault, AES-256). The file `all.yml` references them via `"{{ vault_* }}"` aliases. The vault password must exist at `~/.vault-password-file` on the operator machine — this path is in `.gitignore` and never committed.
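The split is easiest to see side by side (values illustrative):

```yaml
# inventory/group_vars/all.vault.yml (encrypted, never committed in plaintext)
vault_acme_email: "you@example.com"

# inventory/group_vars/all.yml (plain, committed)
acme_email: "{{ vault_acme_email }}"
```

Roles and templates reference only the unprefixed name (`acme_email`); the `vault_*` names never leak outside these two files.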
**Roles:**
- `base` — OS hardening: UFW (allow SSH + 80/443, plus Syncthing 22000/21027), fail2ban, sshd config, deploy user
- `docker` — Docker CE + Compose plugin via official apt repo
- `services` — renders Jinja2 templates → `/opt/services/`, then runs `docker compose up`
**Templates → server files:**
- `roles/services/templates/docker-compose.yml.j2` → `/opt/services/docker-compose.yml`
- `roles/services/templates/env.j2` → `/opt/services/.env` (mode 0600)
- `roles/services/templates/traefik/traefik.yml.j2` → `/opt/services/traefik/traefik.yml`
- `acme.json` created at `/opt/services/traefik/acme.json` (mode 0600, mounted into Traefik)
**Docker networks:**
- `backend` (internal) — traefik ↔ user-facing services
- `forgejo-db` (internal) — forgejo ↔ its postgres
- `plane-internal` (internal) — all plane components (api, worker, beat, db, redis, minio)
**Adding a new service:** add the container to `docker-compose.yml.j2` on the `backend` network with `traefik.enable=true` and `traefik.http.routers.X.tls.certresolver=letsencrypt` labels, then add its domain variable to `all.yml`.
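A minimal sketch of such an addition (the `whoami` container and the `domain_whoami` variable are hypothetical):

```yaml
# roles/services/templates/docker-compose.yml.j2 — hypothetical new service
whoami:
  image: traefik/whoami:v1.10
  container_name: whoami
  restart: unless-stopped
  networks:
    - backend
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.whoami.rule=Host(`{{ domain_whoami }}`)"
    - "traefik.http.routers.whoami.entrypoints=websecure"
    - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
    - "traefik.http.services.whoami.loadbalancer.server.port=80"
```

Plus `domain_whoami: "whoami.{{ domain_base }}"` in `all.yml` and a matching DNS A-record.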
## Deployment
DNS: add A-records for each subdomain → server IP (or wildcard `*` → IP).
Fill `all.vault.yml` → set `domain_base` in `all.yml` → run bootstrap + deploy. Traefik obtains TLS certificates automatically on first request to each domain.

README.md (new file)
@@ -0,0 +1,168 @@
# Infra
Ansible + Docker infrastructure for a team. All services are served over HTTPS — traffic terminates directly on ports 80/443, and TLS certificates are issued automatically via Let's Encrypt.
**Services:**
- `vault.csrx.ru` — Vaultwarden (password manager)
- `git.csrx.ru` — Forgejo (Git)
- `plane.csrx.ru` — Plane (project management)
- `sync.csrx.ru` — Syncthing (Obsidian sync)
- `traefik.csrx.ru` — Traefik dashboard
---
## Prerequisites
### 1. On the operator machine
```bash
# Ansible
pip install ansible
# Collections
ansible-galaxy collection install community.general community.docker ansible.posix
# SSH key (if you don't have one)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
```
---
## Step 1 — DNS
Add A-records at your DNS provider: each subdomain → `87.249.49.32`.
Or a wildcard (if the provider supports it): `*` → `87.249.49.32`.
| Record | Value |
|--------|-------|
| `vault.csrx.ru` | `87.249.49.32` |
| `git.csrx.ru` | `87.249.49.32` |
| `plane.csrx.ru` | `87.249.49.32` |
| `sync.csrx.ru` | `87.249.49.32` |
| `traefik.csrx.ru` | `87.249.49.32` |
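If the provider accepts raw zone records, the wildcard variant is a single line (illustrative fragment in zone-file syntax):

```
*.csrx.ru. 3600 IN A 87.249.49.32
```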
---
## Step 2 — Fill in the secrets
Edit `inventory/group_vars/all.vault.yml`:
```yaml
vault_acme_email: "you@example.com" # email for Let's Encrypt notifications
vault_vaultwarden_admin_token: "..." # pick a long password
vault_forgejo_db_password: "..." # pick a PostgreSQL password
vault_plane_db_password: "..." # pick a PostgreSQL password
vault_plane_secret_key: "..." # generate with: openssl rand -hex 32
vault_plane_minio_password: "..." # pick a MinIO password
# Generate with: htpasswd -nb admin 'yourpassword'
# Every $ must be doubled: $apr1$ → $$apr1$
vault_traefik_dashboard_htpasswd: "admin:$$apr1$$..."
vault_syncthing_basic_auth_htpasswd: "admin:$$apr1$$..."
```
Generate the required values:
```bash
# plane_secret_key
openssl rand -hex 32
# htpasswd (needs apache2-utils or httpd-tools)
htpasswd -nb admin 'yourpassword'
# macOS, no extra installs (note: the crypt module was removed in Python 3.13):
python3 -c "import crypt; print('admin:' + crypt.crypt('yourpassword', crypt.mksalt(crypt.METHOD_MD5)))"
```
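The `$`-doubling step can be scripted (a sketch with `sed`; the sample hash below is made up):

```shell
# Double every $ so docker compose doesn't treat the hash as ${var} interpolation.
hash='admin:$apr1$q1w2e3r4$abcdefgh'   # hypothetical htpasswd output
printf '%s\n' "$hash" | sed 's/\$/$$/g'
# admin:$$apr1$$q1w2e3r4$$abcdefgh
```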
Then encrypt the file:
```bash
# Create the vault password file
echo "pick-a-vault-password" > ~/.vault-password-file
chmod 600 ~/.vault-password-file
# Encrypt
ansible-vault encrypt inventory/group_vars/all.vault.yml
```
> `~/.vault-password-file` lives only on the operator machine — never commit it.
---
## Step 3 — Set the domain
In `inventory/group_vars/all.yml` set:
```yaml
domain_base: "csrx.ru" # already set; change if needed
```
---
## Step 4 — First run (as root)
```bash
# Creates the deploy user, sets up sudo
ansible-playbook playbooks/bootstrap.yml -u root
```
---
## Step 5 — Deploy
```bash
ansible-playbook playbooks/deploy.yml
```
Installs Docker, configures UFW/fail2ban (opens 22, 80, 443, plus Syncthing's 22000/tcp+udp and 21027/udp), and brings up all containers.
Traefik obtains TLS certificates automatically on the first request to each domain.
---
## Verification
```bash
# On the server
ssh deploy@87.249.49.32
docker compose -f /opt/services/docker-compose.yml ps
```
All services should be in `Up` state. Then open in a browser:
- `https://vault.csrx.ru` — Vaultwarden
- `https://git.csrx.ru` — Forgejo (initial setup via the web UI)
- `https://plane.csrx.ru` — Plane
- `https://sync.csrx.ru` — Syncthing (credentials from `syncthing_basic_auth_htpasswd`)
- `https://traefik.csrx.ru` — Traefik dashboard (credentials from `traefik_dashboard_htpasswd`)
---
## Initial service setup
### Vaultwarden
- Open `https://vault.csrx.ru/admin` → enter `vault_vaultwarden_admin_token`
- Create users through the admin panel (public registration is disabled)
### Forgejo
- Open `https://git.csrx.ru` → complete the installation wizard
- The first registered user becomes the administrator
### Plane
- Open `https://plane.csrx.ru` → create a workspace
### Syncthing
- Open `https://sync.csrx.ru`
- Copy the server's Device ID
- On each team member's device: add the server as a remote device and share the Obsidian vault folder
---
## Updating
```bash
ansible-playbook playbooks/deploy.yml
```
Idempotent — safe to run any number of times.

ansible.cfg (new file)
@@ -0,0 +1,24 @@
[defaults]
timeout = 60
inventory = inventory/hosts.ini
roles_path = roles
vault_password_file = ~/.vault-password-file
remote_user = deploy
private_key_file = ~/.ssh/id_ed25519
host_key_checking = True
deprecation_warnings = False
stdout_callback = default
callback_result_format = yaml
callbacks_enabled = profile_tasks
fact_caching = jsonfile
fact_caching_connection = .ansible_cache
fact_caching_timeout = 3600
[ssh_connection]
retries = 5
ssh_args = -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -o ConnectTimeout=15
[privilege_escalation]
become = true
become_method = sudo
become_user = root

dns-zone.zone (new file)
@@ -0,0 +1,23 @@
$ORIGIN csrx.ru.
$TTL 3600
; ── Service A-records ────────────────────────────────────────────────────────
vault IN A 87.249.49.32
git IN A 87.249.49.32
plane IN A 87.249.49.32
sync IN A 87.249.49.32
traefik IN A 87.249.49.32
mail IN A 87.249.49.32
; ── Mail ─────────────────────────────────────────────────────────────────────
@ IN MX 10 mail.csrx.ru.
; SPF — allow sending only from our mail server
@ IN TXT "v=spf1 mx ~all"
; DMARC — monitor-only (p=none), reports to admin@csrx.ru
_dmarc IN TXT "v=DMARC1; p=none; rua=mailto:admin@csrx.ru"
; DKIM — add after the first Stalwart start (take the key from mail.csrx.ru → DKIM)
; Example of what it will look like:
; mail._domainkey IN TXT "v=DKIM1; k=rsa; p=<key from Stalwart>"
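The DMARC value above is a semicolon-separated tag list; a throwaway parser (not part of the repo) shows how receivers read it:

```python
def parse_dmarc(record: str) -> dict:
    """Split 'v=DMARC1; p=none; rua=mailto:...' into a tag -> value dict."""
    return dict(
        part.strip().split("=", 1)   # split on the first '=' only: rua keeps 'mailto:...'
        for part in record.split(";")
        if part.strip()
    )

tags = parse_dmarc("v=DMARC1; p=none; rua=mailto:admin@csrx.ru")
print(tags["p"])    # none
print(tags["rua"])  # mailto:admin@csrx.ru
```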

@@ -0,0 +1,25 @@
---
# Non-secret variables
domain_base: "csrx.ru"
# Derived domains
domain_vault: "vault.{{ domain_base }}"
domain_git: "git.{{ domain_base }}"
domain_plane: "plane.{{ domain_base }}"
domain_sync: "sync.{{ domain_base }}"
domain_traefik: "traefik.{{ domain_base }}"
# Service paths
services_root: /opt/services
deploy_user: deploy
deploy_group: deploy
# Secrets (from vault)
acme_email: "{{ vault_acme_email }}"
vaultwarden_admin_token: "{{ vault_vaultwarden_admin_token }}"
forgejo_db_password: "{{ vault_forgejo_db_password }}"
plane_db_password: "{{ vault_plane_db_password }}"
plane_secret_key: "{{ vault_plane_secret_key }}"
plane_minio_password: "{{ vault_plane_minio_password }}"
traefik_dashboard_htpasswd: "{{ vault_traefik_dashboard_htpasswd }}"
syncthing_basic_auth_htpasswd: "{{ vault_syncthing_basic_auth_htpasswd }}"

inventory/hosts.ini (new file)
@@ -0,0 +1,5 @@
[servers]
main ansible_host=87.249.49.32
[servers:vars]
ansible_python_interpreter=/usr/bin/python3

playbooks/bootstrap.yml (new file)
@@ -0,0 +1,51 @@
---
# First-run playbook executed as root before deploy user exists
# ansible-playbook playbooks/bootstrap.yml -u root
- name: Bootstrap server
hosts: servers
become: false
remote_user: root
tasks:
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
- name: Install essential packages
ansible.builtin.apt:
name:
- python3
- python3-pip
- sudo
- curl
- git
state: present
- name: Create deploy group
ansible.builtin.group:
name: deploy
state: present
- name: Create deploy user
ansible.builtin.user:
name: deploy
group: deploy
groups: sudo
shell: /bin/bash
create_home: true
state: present
- name: Set up authorized keys for deploy user
ansible.posix.authorized_key:
user: deploy
state: present
key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
- name: Allow deploy user passwordless sudo
ansible.builtin.lineinfile:
path: /etc/sudoers.d/deploy
line: "deploy ALL=(ALL) NOPASSWD:ALL"
create: true
mode: "0440"
validate: "visudo -cf %s"

playbooks/deploy.yml (new file)
@@ -0,0 +1,12 @@
---
# Idempotent deploy playbook
# ansible-playbook playbooks/deploy.yml
- name: Deploy all services
hosts: servers
roles:
- role: base
tags: base
- role: docker
tags: docker
- role: services
tags: services

playbooks/site.yml (new file)
@@ -0,0 +1,10 @@
---
# Master playbook — for reference only.
# Do NOT run this directly: bootstrap.yml requires `-u root`,
# deploy.yml runs as the deploy user. Run them separately:
#
# ansible-playbook playbooks/bootstrap.yml -u root # first time only
# ansible-playbook playbooks/deploy.yml # all subsequent runs
#
# - import_playbook: bootstrap.yml
# - import_playbook: deploy.yml

@@ -0,0 +1,24 @@
---
# SSH hardening
sshd_port: 22
sshd_permit_root_login: "no"
sshd_password_authentication: "no"
sshd_pubkey_authentication: "yes"
sshd_x11_forwarding: "no"
sshd_max_auth_tries: 3
sshd_client_alive_interval: 300
sshd_client_alive_count_max: 2
# Packages to install
base_packages:
- ufw
- fail2ban
- curl
- wget
- git
- htop
- vim
- unzip
- ca-certificates
- gnupg
- lsb-release

@@ -0,0 +1,10 @@
---
- name: Restart sshd
ansible.builtin.systemd:
name: sshd
state: restarted
- name: Restart fail2ban
ansible.builtin.systemd:
name: fail2ban
state: restarted

@@ -0,0 +1,79 @@
---
- name: Allow SSH
community.general.ufw:
rule: allow
port: "{{ sshd_port }}"
proto: tcp
comment: "SSH"
- name: Allow HTTP
community.general.ufw:
rule: allow
port: "80"
proto: tcp
comment: "HTTP (ACME challenge)"
- name: Allow HTTPS
community.general.ufw:
rule: allow
port: "443"
proto: tcp
comment: "HTTPS"
- name: Allow Syncthing sync TCP
community.general.ufw:
rule: allow
port: "22000"
proto: tcp
comment: "Syncthing sync"
- name: Allow Syncthing sync UDP
community.general.ufw:
rule: allow
port: "22000"
proto: udp
comment: "Syncthing sync"
- name: Allow Syncthing discovery UDP
community.general.ufw:
rule: allow
port: "21027"
proto: udp
comment: "Syncthing discovery"
- name: Set UFW default deny incoming
community.general.ufw:
direction: incoming
policy: deny
- name: Set UFW default allow outgoing
community.general.ufw:
direction: outgoing
policy: allow
- name: Enable UFW
community.general.ufw:
state: enabled
- name: Ensure fail2ban is configured for SSH
ansible.builtin.copy:
dest: /etc/fail2ban/jail.local
content: |
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5
[sshd]
enabled = true
port = {{ sshd_port }}
logpath = %(sshd_log)s
backend = %(sshd_backend)s
mode: "0644"
notify: Restart fail2ban
- name: Ensure fail2ban is started and enabled
ansible.builtin.systemd:
name: fail2ban
state: started
enabled: true

@@ -0,0 +1,5 @@
---
- import_tasks: packages.yml
- import_tasks: users.yml
- import_tasks: sshd.yml
- import_tasks: firewall.yml

@@ -0,0 +1,18 @@
---
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
retries: 3
delay: 10
register: apt_cache
until: apt_cache is succeeded
- name: Install base packages
ansible.builtin.apt:
name: "{{ base_packages }}"
state: present
retries: 3
delay: 10
register: apt_packages
until: apt_packages is succeeded

roles/base/tasks/sshd.yml (new file)
@@ -0,0 +1,10 @@
---
- name: Configure SSH daemon
ansible.builtin.template:
src: sshd_config.j2
dest: /etc/ssh/sshd_config
owner: root
group: root
mode: "0644"
validate: /usr/sbin/sshd -t -f %s
notify: Restart sshd

@@ -0,0 +1,22 @@
---
- name: Ensure deploy group exists
ansible.builtin.group:
name: "{{ deploy_group }}"
state: present
- name: Ensure deploy user exists
ansible.builtin.user:
name: "{{ deploy_user }}"
group: "{{ deploy_group }}"
groups: sudo
shell: /bin/bash
create_home: true
state: present
- name: Ensure deploy user has passwordless sudo
ansible.builtin.lineinfile:
path: "/etc/sudoers.d/{{ deploy_user }}"
line: "{{ deploy_user }} ALL=(ALL) NOPASSWD:ALL"
create: true
mode: "0440"
validate: "visudo -cf %s"

@@ -0,0 +1,33 @@
# Managed by Ansible — do not edit manually
Port {{ sshd_port }}
AddressFamily inet
ListenAddress 0.0.0.0
# Authentication
PermitRootLogin {{ sshd_permit_root_login }}
PasswordAuthentication {{ sshd_password_authentication }}
PubkeyAuthentication {{ sshd_pubkey_authentication }}
AuthorizedKeysFile .ssh/authorized_keys
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM yes
# Forwarding
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding {{ sshd_x11_forwarding }}
PrintMotd no
# Timeouts and limits
LoginGraceTime 30
MaxAuthTries {{ sshd_max_auth_tries }}
MaxSessions 5
ClientAliveInterval {{ sshd_client_alive_interval }}
ClientAliveCountMax {{ sshd_client_alive_count_max }}
# Subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
# Only allow the deploy user
AllowUsers {{ deploy_user }}

@@ -0,0 +1,5 @@
---
- name: Restart Docker
ansible.builtin.systemd:
name: docker
state: restarted

@@ -0,0 +1,81 @@
---
- name: Remove old Docker versions
ansible.builtin.apt:
name:
- docker
- docker-engine
- docker.io
- containerd
- runc
state: absent
purge: true
- name: Create keyrings directory
ansible.builtin.file:
path: /etc/apt/keyrings
state: directory
mode: "0755"
- name: Add Docker GPG key
ansible.builtin.get_url:
url: https://download.docker.com/linux/ubuntu/gpg
dest: /etc/apt/keyrings/docker.asc
mode: "0644"
retries: 5
delay: 10
register: gpg_key
until: gpg_key is succeeded
- name: Add Docker repository
ansible.builtin.apt_repository:
repo: >-
deb [arch={{ ansible_facts['architecture'] | replace('x86_64', 'amd64') | replace('aarch64', 'arm64') }}
signed-by=/etc/apt/keyrings/docker.asc]
https://download.docker.com/linux/ubuntu
{{ ansible_facts['distribution_release'] }} stable
filename: docker
state: present
retries: 3
delay: 10
register: docker_repo
until: docker_repo is succeeded
- name: Install Docker Engine and Compose plugin
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: present
update_cache: true
retries: 3
delay: 10
register: docker_install
until: docker_install is succeeded
notify: Restart Docker
- name: Configure Docker daemon (registry mirrors)
ansible.builtin.copy:
dest: /etc/docker/daemon.json
content: |
{
"registry-mirrors": [
"https://dockerhub.timeweb.cloud"
]
}
mode: "0644"
notify: Restart Docker
- name: Ensure Docker is started and enabled
ansible.builtin.systemd:
name: docker
state: started
enabled: true
- name: Add deploy user to docker group
ansible.builtin.user:
name: "{{ deploy_user }}"
groups: docker
append: true

@@ -0,0 +1,19 @@
---
services_root: /opt/services
# Image versions
# IMPORTANT: pin each image to a specific version tag.
# Check Docker Hub for the latest stable release before updating.
traefik_image: "traefik:v3.3" # https://hub.docker.com/_/traefik/tags
vaultwarden_image: "vaultwarden/server:1.32.7" # https://hub.docker.com/r/vaultwarden/server/tags
forgejo_image: "codeberg.org/forgejo/forgejo:9"
forgejo_db_image: "postgres:16-alpine"
plane_frontend_image: "makeplane/plane-frontend:stable" # https://hub.docker.com/r/makeplane/plane-frontend/tags
plane_backend_image: "makeplane/plane-backend:stable" # https://hub.docker.com/r/makeplane/plane-backend/tags
plane_db_image: "postgres:16-alpine"
plane_redis_image: "redis:7-alpine"
# IMPORTANT: MinIO stopped publishing images to Docker Hub as of October 2025.
# Last stable tag on Docker Hub: RELEASE.2025-04-22T22-12-26Z
# Consider switching to alpine/minio or building from source.
plane_minio_image: "minio/minio:RELEASE.2025-04-22T22-12-26Z" # https://hub.docker.com/r/minio/minio/tags
syncthing_image: "syncthing/syncthing:1.27" # https://hub.docker.com/r/syncthing/syncthing/tags

@@ -0,0 +1,10 @@
---
- name: Restart stack
community.docker.docker_compose_v2:
project_src: "{{ services_root }}"
state: present
pull: never
- name: Stack deployed
ansible.builtin.debug:
msg: "Stack deployed/updated successfully"

@@ -0,0 +1,37 @@
---
- name: Deploy .env file
ansible.builtin.template:
src: env.j2
dest: "{{ services_root }}/.env"
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0600"
notify: Restart stack
- name: Deploy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ services_root }}/docker-compose.yml"
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0644"
notify: Restart stack
- name: Deploy Traefik static config
ansible.builtin.template:
src: traefik/traefik.yml.j2
dest: "{{ services_root }}/traefik/traefik.yml"
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0644"
notify: Restart stack
- name: Create acme.json for Let's Encrypt certificates
ansible.builtin.file:
path: "{{ services_root }}/traefik/acme.json"
state: touch
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0600"
modification_time: preserve
access_time: preserve

@@ -0,0 +1,26 @@
---
- name: Create services root directory
ansible.builtin.file:
path: "{{ services_root }}"
state: directory
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0755"
- name: Create service subdirectories
ansible.builtin.file:
path: "{{ services_root }}/{{ item }}"
state: directory
owner: "{{ deploy_user }}"
group: "{{ deploy_group }}"
mode: "0755"
loop:
- traefik
- traefik/dynamic
- vaultwarden/data
- forgejo/data
- forgejo/db
- plane/pgdata
- plane/media
- syncthing/config
- syncthing/data

@@ -0,0 +1,67 @@
---
- import_tasks: directories.yml
- import_tasks: configs.yml
- name: Pull Docker images one by one
ansible.builtin.command: docker pull {{ item }}
loop:
- "{{ traefik_image }}"
- "{{ vaultwarden_image }}"
- "{{ forgejo_image }}"
- "{{ forgejo_db_image }}"
- "{{ plane_frontend_image }}"
- "{{ plane_backend_image }}"
- "{{ plane_db_image }}"
- "{{ plane_redis_image }}"
- "{{ plane_minio_image }}"
- "{{ syncthing_image }}"
register: pull_result
changed_when: "'Status: Downloaded newer image' in pull_result.stdout"
retries: 5
delay: 30
until: pull_result.rc == 0
- name: Deploy Docker Compose stack
community.docker.docker_compose_v2:
project_src: "{{ services_root }}"
state: present
pull: never
retries: 3
delay: 15
register: compose_result
until: compose_result is succeeded
notify: Stack deployed
- name: Wait for MinIO to be ready
ansible.builtin.command: docker exec plane-minio curl -sf http://localhost:9000/minio/health/live
register: minio_ready
changed_when: false
retries: 15
delay: 10
until: minio_ready.rc == 0
- name: Get plane-internal network name
ansible.builtin.shell: >
docker inspect plane-minio |
python3 -c "import sys,json; d=json.load(sys.stdin)[0];
print([k for k in d['NetworkSettings']['Networks'] if 'plane-internal' in k][0])"
register: plane_internal_network
changed_when: false
- name: Create MinIO uploads bucket via mc container
# The minio/mc entrypoint is mc, so we need --entrypoint sh
# access-key = the MinIO username (plane-minio), secret-key = the password
ansible.builtin.shell: |
docker run --rm \
--entrypoint sh \
--network "{{ plane_internal_network.stdout | trim }}" \
-e MC_ACCESS="{{ plane_minio_password }}" \
minio/mc:RELEASE.2025-05-21T01-59-54Z \
-c 'mc alias set local http://plane-minio:9000 plane-minio "{{ plane_minio_password }}" 2>/dev/null \
&& mc mb --ignore-existing local/uploads \
&& echo "Bucket created or already exists"'
register: minio_bucket
changed_when: "'Bucket created' in minio_bucket.stdout"
retries: 5
delay: 10
until: minio_bucket.rc == 0
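The inline python in the `Get plane-internal network name` task above can be exercised standalone against a sample payload (the `services_` compose-project prefix is assumed; real names come from `docker inspect`):

```python
import json

# Minimal stand-in for `docker inspect plane-minio` output (hypothetical network names).
sample = json.dumps([{
    "NetworkSettings": {
        "Networks": {
            "services_backend": {},
            "services_plane-internal": {},
        }
    }
}])

d = json.loads(sample)[0]
# Same expression the task pipes through python3 -c: keep the network whose
# name contains 'plane-internal', whatever prefix compose gave it.
name = [k for k in d["NetworkSettings"]["Networks"] if "plane-internal" in k][0]
print(name)  # services_plane-internal
```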

@@ -0,0 +1,331 @@
# Docker Compose stack — generated by Ansible
# Do not edit manually; re-run ansible-playbook deploy.yml
networks:
# proxy — the only public network, Traefik-only: needed for outbound internet
# access (ACME Let's Encrypt, external services). backend is internal: true,
# so services have no direct outbound internet access.
proxy:
driver: bridge
backend:
driver: bridge
internal: true
forgejo-db:
driver: bridge
internal: true
plane-internal:
driver: bridge
internal: true
volumes:
vaultwarden_data:
forgejo_data:
forgejo_db_data:
plane_pgdata:
plane_redis_data:
plane_minio_data:
plane_media:
syncthing_config:
syncthing_data:
services:
# ── Traefik ────────────────────────────────────────────────────────────────
  # proxy — for ACME (outbound internet), backend — for routing to services
traefik:
image: {{ traefik_image }}
container_name: traefik
restart: unless-stopped
ports:
- "80:80"
- "443:443"
environment:
- DOCKER_API_VERSION=1.45
networks:
- proxy
- backend
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- {{ services_root }}/traefik/traefik.yml:/etc/traefik/traefik.yml:ro
- {{ services_root }}/traefik/dynamic:/etc/traefik/dynamic:ro
- {{ services_root }}/traefik/acme.json:/acme/acme.json
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik-dashboard.rule=Host(`{{ domain_traefik }}`)"
- "traefik.http.routers.traefik-dashboard.entrypoints=websecure"
- "traefik.http.routers.traefik-dashboard.tls.certresolver=letsencrypt"
- "traefik.http.routers.traefik-dashboard.service=api@internal"
- "traefik.http.routers.traefik-dashboard.middlewares=traefik-auth"
- "traefik.http.middlewares.traefik-auth.basicauth.users={{ traefik_dashboard_htpasswd }}"
# ── Vaultwarden ────────────────────────────────────────────────────────────
vaultwarden:
image: {{ vaultwarden_image }}
container_name: vaultwarden
restart: unless-stopped
networks:
- backend
volumes:
- vaultwarden_data:/data
environment:
- ADMIN_TOKEN=${VAULTWARDEN_ADMIN_TOKEN}
- DOMAIN=https://{{ domain_vault }}
- SIGNUPS_ALLOWED=false
- INVITATIONS_ALLOWED=true
- LOG_LEVEL=warn
- EXTENDED_LOGGING=true
- TZ=UTC
labels:
- "traefik.enable=true"
- "traefik.http.routers.vaultwarden.rule=Host(`{{ domain_vault }}`)"
- "traefik.http.routers.vaultwarden.entrypoints=websecure"
- "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"
- "traefik.http.services.vaultwarden.loadbalancer.server.port=80"
# ── Forgejo ────────────────────────────────────────────────────────────────
forgejo:
image: {{ forgejo_image }}
container_name: forgejo
restart: unless-stopped
depends_on:
forgejo-db:
condition: service_healthy
networks:
- backend
- forgejo-db
volumes:
- forgejo_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
- USER_UID=1000
- USER_GID=1000
- FORGEJO__database__DB_TYPE=postgres
- FORGEJO__database__HOST=forgejo-db:5432
- FORGEJO__database__NAME=forgejo
- FORGEJO__database__USER=forgejo
- FORGEJO__database__PASSWD=${FORGEJO_DB_PASSWORD}
- FORGEJO__server__DOMAIN={{ domain_git }}
- FORGEJO__server__ROOT_URL=https://{{ domain_git }}
- FORGEJO__server__SSH_DOMAIN={{ domain_git }}
- FORGEJO__service__DISABLE_REGISTRATION=true
labels:
- "traefik.enable=true"
- "traefik.http.routers.forgejo.rule=Host(`{{ domain_git }}`)"
- "traefik.http.routers.forgejo.entrypoints=websecure"
- "traefik.http.routers.forgejo.tls.certresolver=letsencrypt"
- "traefik.http.services.forgejo.loadbalancer.server.port=3000"
forgejo-db:
image: {{ forgejo_db_image }}
container_name: forgejo-db
restart: unless-stopped
networks:
- forgejo-db
volumes:
- forgejo_db_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=forgejo
- POSTGRES_PASSWORD=${FORGEJO_DB_PASSWORD}
- POSTGRES_DB=forgejo
- PGDATA=/var/lib/postgresql/data/pgdata
healthcheck:
test: ["CMD-SHELL", "pg_isready -U forgejo"]
interval: 10s
timeout: 5s
retries: 5
mem_limit: 512m
# ── Plane ──────────────────────────────────────────────────────────────────
  # Routing through Traefik:
  #   /api/* and /auth/* → plane-api:8000 (Django, on backend + plane-internal)
  #   everything else    → plane-web:3000 (Next.js, on backend + plane-internal)
  # The longer PathPrefix rule automatically gets higher priority in Traefik.
plane-web:
image: {{ plane_frontend_image }}
container_name: plane-web
restart: unless-stopped
command: node web/server.js
depends_on:
- plane-api
networks:
- backend
- plane-internal
environment:
- NEXT_PUBLIC_API_BASE_URL=https://{{ domain_plane }}
labels:
- "traefik.enable=true"
- "traefik.http.routers.plane.rule=Host(`{{ domain_plane }}`)"
- "traefik.http.routers.plane.entrypoints=websecure"
- "traefik.http.routers.plane.tls.certresolver=letsencrypt"
- "traefik.http.services.plane.loadbalancer.server.port=3000"
plane-api:
image: {{ plane_backend_image }}
container_name: plane-api
restart: unless-stopped
mem_limit: 512m
command: ./bin/docker-entrypoint-api.sh
depends_on:
plane-db:
condition: service_healthy
plane-redis:
condition: service_started
plane-minio:
condition: service_healthy
networks:
- backend
- plane-internal
volumes:
- plane_media:/app/media
environment:
- DATABASE_URL=postgresql://plane:${PLANE_DB_PASSWORD}@plane-db:5432/plane
- REDIS_URL=redis://plane-redis:6379/
- SECRET_KEY=${PLANE_SECRET_KEY}
- DEBUG=0
- DJANGO_SETTINGS_MODULE=plane.settings.production
- WEB_URL=https://{{ domain_plane }}
- FILE_SIZE_LIMIT=5242880
- USE_MINIO=1
- AWS_REGION=us-east-1
- AWS_ACCESS_KEY_ID=plane-minio
- AWS_SECRET_ACCESS_KEY=${PLANE_MINIO_PASSWORD}
- AWS_S3_ENDPOINT_URL=http://plane-minio:9000
- AWS_S3_BUCKET_NAME=uploads
- MINIO_ROOT_USER=plane-minio
- MINIO_ROOT_PASSWORD=${PLANE_MINIO_PASSWORD}
labels:
- "traefik.enable=true"
- "traefik.http.routers.plane-api.rule=Host(`{{ domain_plane }}`) && (PathPrefix(`/api/`) || PathPrefix(`/auth/`))"
- "traefik.http.routers.plane-api.entrypoints=websecure"
- "traefik.http.routers.plane-api.tls.certresolver=letsencrypt"
- "traefik.http.services.plane-api.loadbalancer.server.port=8000"
plane-worker:
image: {{ plane_backend_image }}
container_name: plane-worker
restart: unless-stopped
command: ./bin/docker-entrypoint-worker.sh
depends_on:
- plane-api
networks:
- plane-internal
volumes:
- plane_media:/app/media
environment:
- DATABASE_URL=postgresql://plane:${PLANE_DB_PASSWORD}@plane-db:5432/plane
- REDIS_URL=redis://plane-redis:6379/
- SECRET_KEY=${PLANE_SECRET_KEY}
- DEBUG=0
- DJANGO_SETTINGS_MODULE=plane.settings.production
- USE_MINIO=1
- AWS_REGION=us-east-1
- AWS_ACCESS_KEY_ID=plane-minio
- AWS_SECRET_ACCESS_KEY=${PLANE_MINIO_PASSWORD}
- AWS_S3_ENDPOINT_URL=http://plane-minio:9000
- AWS_S3_BUCKET_NAME=uploads
- MINIO_ROOT_USER=plane-minio
- MINIO_ROOT_PASSWORD=${PLANE_MINIO_PASSWORD}
plane-beat:
image: {{ plane_backend_image }}
container_name: plane-beat
restart: unless-stopped
command: ./bin/docker-entrypoint-beat.sh
depends_on:
- plane-api
networks:
- plane-internal
environment:
- DATABASE_URL=postgresql://plane:${PLANE_DB_PASSWORD}@plane-db:5432/plane
- REDIS_URL=redis://plane-redis:6379/
- SECRET_KEY=${PLANE_SECRET_KEY}
- DEBUG=0
- DJANGO_SETTINGS_MODULE=plane.settings.production
- USE_MINIO=1
- AWS_REGION=us-east-1
- AWS_ACCESS_KEY_ID=plane-minio
- AWS_SECRET_ACCESS_KEY=${PLANE_MINIO_PASSWORD}
- AWS_S3_ENDPOINT_URL=http://plane-minio:9000
- AWS_S3_BUCKET_NAME=uploads
- MINIO_ROOT_USER=plane-minio
- MINIO_ROOT_PASSWORD=${PLANE_MINIO_PASSWORD}
plane-db:
image: {{ plane_db_image }}
container_name: plane-db
restart: unless-stopped
mem_limit: 512m
networks:
- plane-internal
volumes:
- plane_pgdata:/var/lib/postgresql/data
environment:
- POSTGRES_USER=plane
- POSTGRES_PASSWORD=${PLANE_DB_PASSWORD}
- POSTGRES_DB=plane
- PGDATA=/var/lib/postgresql/data/pgdata
healthcheck:
test: ["CMD-SHELL", "pg_isready -U plane"]
interval: 10s
timeout: 5s
retries: 5
plane-redis:
image: {{ plane_redis_image }}
container_name: plane-redis
restart: unless-stopped
networks:
- plane-internal
volumes:
- plane_redis_data:/data
command: redis-server --appendonly yes
plane-minio:
image: {{ plane_minio_image }}
container_name: plane-minio
restart: unless-stopped
mem_limit: 512m
networks:
- plane-internal
volumes:
- plane_minio_data:/data
environment:
- MINIO_ROOT_USER=plane-minio
- MINIO_ROOT_PASSWORD=${PLANE_MINIO_PASSWORD}
command: server /data --console-address ":9001"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
# ── Syncthing ──────────────────────────────────────────────────────────────
  # Ports 22000 and 21027 are needed for device-to-device sync (not just the UI).
  # backend is internal: true, but Syncthing's published ports reach out via the host.
syncthing:
image: {{ syncthing_image }}
container_name: syncthing
restart: unless-stopped
networks:
- backend
ports:
- "22000:22000/tcp"
- "22000:22000/udp"
- "21027:21027/udp"
volumes:
- syncthing_config:/var/syncthing/config
- syncthing_data:/var/syncthing/data
environment:
- PUID=1000
- PGID=1000
- TZ=UTC
labels:
- "traefik.enable=true"
- "traefik.http.routers.syncthing.rule=Host(`{{ domain_sync }}`)"
- "traefik.http.routers.syncthing.entrypoints=websecure"
- "traefik.http.routers.syncthing.tls.certresolver=letsencrypt"
- "traefik.http.routers.syncthing.middlewares=syncthing-auth"
- "traefik.http.middlewares.syncthing-auth.basicauth.users={{ syncthing_basic_auth_htpasswd }}"
- "traefik.http.services.syncthing.loadbalancer.server.port=8384"

@@ -0,0 +1,12 @@
# Generated by Ansible — do not edit manually
VAULTWARDEN_ADMIN_TOKEN={{ vaultwarden_admin_token }}
FORGEJO_DB_PASSWORD={{ forgejo_db_password }}
PLANE_DB_PASSWORD={{ plane_db_password }}
PLANE_SECRET_KEY={{ plane_secret_key }}
PLANE_MINIO_PASSWORD={{ plane_minio_password }}
DOMAIN_BASE={{ domain_base }}
DOMAIN_VAULT={{ domain_vault }}
DOMAIN_GIT={{ domain_git }}
DOMAIN_PLANE={{ domain_plane }}
DOMAIN_SYNC={{ domain_sync }}
DOMAIN_TRAEFIK={{ domain_traefik }}

@@ -0,0 +1,45 @@
# Traefik v3 static configuration
# Generated by Ansible
global:
checkNewVersion: false
sendAnonymousUsage: false
log:
level: INFO
accessLog: {}
api:
dashboard: true
insecure: false
entryPoints:
web:
address: ":80"
http:
redirections:
entryPoint:
to: websecure
scheme: https
websecure:
address: ":443"
certificatesResolvers:
letsencrypt:
acme:
email: "{{ acme_email }}"
storage: /acme/acme.json
httpChallenge:
entryPoint: web
providers:
docker:
exposedByDefault: false
network: backend
file:
directory: /etc/traefik/dynamic
watch: true
serversTransport:
insecureSkipVerify: false
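The mounted `dynamic` directory starts empty (created by `directories.yml`); anything dropped there is hot-reloaded thanks to `watch: true`. A hypothetical file it could hold, in Traefik's file-provider syntax, e.g. shared security headers:

```yaml
# /opt/services/traefik/dynamic/security-headers.yml (illustrative)
http:
  middlewares:
    security-headers:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        frameDeny: true
        contentTypeNosniff: true
```

Attach it to any router with a `traefik.http.routers.X.middlewares=security-headers@file` label.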