:PROPERTIES:
:ID: 6d5b8adf-7c20-4be0-9cb5-388cec1a616e
:END:
#+TITLE: Deploying the Arcology
#+ROAM_TAGS: Arcology
#+PROPERTY: header-args :mkdirp yes
#+ARCOLOGY_KEY: arcology/deploy
#+CCE_MODULE: arcology-deploy
#+CCE_ANSIBLE: arcology-deploy
#+CCE_ROLES: server
#+CCE_PREDICATE: t

I have a fair bit of existing automation infrastructure for managing my systems, and I plan to rely heavily on it for [[file:../arcology.org][Arcology]]. Right now it's managed with a simple Dockerfile and a systemd service template, and frankly I don't think I'm going to enjoy sticking with that: it'll all be templated and weird and hard to reason about. I intend to make that better over time -- for now, the design of my new server system is in flux and getting a deployment out is the higher priority.

Quickly, for deploying [[file:../arcology.org][Arcology]], these links can be clicked or executed with =C-c C-o=:

- Update the version in:
  - [[file:phoenix.org::/project_definition/][Arcology Phoenix]]'s =mix.exs= block
  - the [[file:deploying.org::/Dockerfile/][Dockerfile]] below
  - Automate this for the love of self!
- [[shell:make tangle &][Run =make tangle= if you haven't recently]] to extract all the code from the org-mode documents
- [[shell:bin/build &][Run the bin/build script]] to generate the Distillery =tar.gz=
- [[shell:docker build . -t arcology &][Build the Dockerfile]] with the release.
- [[shell:docker run --rm --name=arcology -it -v $PWD/tmp/:/data -v /home/rrix/org:/org -p 127.0.0.1:4000:4000 rrix/arcology &][Test it locally]] with the local arcology injected in.
- [[shell:docker exec -it arcology /opt/arcology/bin/arcology remote &][/opt/arcology/bin/arcology remote]] in the container will present an IEx shell.
- [[shell:docker stop arcology][Stop the container]] if you gotta.
- [[shell:ansible-playbook -i ~/org/cce/inventory deploy.yml &][Deploy the container]] after installing =python3-docker= or your system's equivalent.

If this bothers you, see the git branch =nix-docker-checkpoint= and make that work, sucker.
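
Strung together, the checklist looks something like this. A sketch only, not tangled into the repo; the =run= wrapper and =DRY_RUN= guard are my own additions so the steps can be previewed without touching docker:

#+begin_src shell
# DRY_RUN=1 prints each step instead of executing it; it defaults to on.
run() {
  if [ -n "$DRY_RUN" ]; then
    echo "$@"
  else
    "$@"
  fi
}

deploy_all() {
  run make tangle                                         # extract sources from the org files
  run bin/build                                           # produce the Distillery tarball
  run docker build . -t arcology                          # bake it into an image
  run ansible-playbook -i ~/org/cce/inventory deploy.yml  # push + restart on the server
}

DRY_RUN="${DRY_RUN-1}" deploy_all
#+end_src

Setting =DRY_RUN=''= runs the real commands; note the default uses =${DRY_RUN-1}= (no colon) precisely so an empty value disables the preview.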
* Distillery Config

This is mostly boilerplate generated by =mix distillery.init=; a bit has been added to the release to support [[id:63c0724e-3065-42e4-8ced-80eccb526821][Runtime Configurable Elements]].

#+begin_src elixir :tangle rel/config.exs
use Distillery.Releases.Config,
    default_release: :default,
    default_environment: Mix.env()

environment :prod do
  set include_erts: true
  set include_src: false
  set cookie: :"As1{bTJ;}(>HQZi=@6ln>r<{wG2RZIte(>`:;C:xK(rApd&u^xJ}PnITd{Q3M|W!"
  set vm_args: "rel/vm.args"
end

release :arcology do
  set version: current_version(:arcology)

  set applications: [
    :runtime_tools
  ]

  set config_providers: [
    {Distillery.Releases.Config.Providers.Elixir, ["${RELEASE_ROOT_DIR}/etc/config.exs"]}
  ]

  set overlays: [
    {:copy, "rel/config/config.exs", "etc/config.exs"}
  ]
end
#+end_src

The =vm.args= file is used to configure the low-level Erlang VM; it's tangled with an =eex= suffix for =mix release= to compile. It doesn't do anything right now, but I'm leaving it here for my future self's sake.

#+begin_src erlang :tangle rel/vm.args.eex :comments none
-smp auto
#+end_src
* Distillery Releases

This shell script creates a tarball of the release using Distillery:

#+begin_src shell :shebang #!/usr/bin/env bash :tangle bin/build :mkdirp yes
set -e

APP_NAME="$(grep 'app:' mix.exs | sed -e 's/\[//g' -e 's/ //g' -e 's/app://' -e 's/[:,]//g')"
APP_VSN="$(grep 'version:' mix.exs | cut -d '"' -f2)"

mkdir -p ./rel/artifacts

# Install updated versions of hex/rebar
mix local.rebar --force
mix local.hex --if-missing --force

export MIX_ENV=prod

# Fetch deps and compile
mix deps.get --only $MIX_ENV
# Run an explicit clean to remove any build artifacts from the host
mix do clean, compile --force

# Compile assets
pushd assets; npm run deploy; popd
# Digest assets
mix phx.digest

# Build the release
mix release

# Copy tarball to output, using the app name derived from mix.exs above
cp "_build/prod/${APP_NAME}-${APP_VSN}.tar.gz" "rel/artifacts/${APP_NAME}-${APP_VSN}.tar.gz"
#+end_src
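
The two =grep= pipelines at the top are the only clever part of the script. Here's a quick sketch of what they extract, run against a stand-in =mix.exs= fragment (the demo file path is my own; =bin/build= reads the real project file):

#+begin_src shell
# Stand-in mix.exs fragment, just to show what the pipelines pull out.
cat > /tmp/demo-mix.exs <<'EOF'
  def project do
    [
      app: :arcology,
      version: "0.1.1",
    ]
  end
EOF

# Same pipelines as bin/build, pointed at the stand-in file.
APP_NAME="$(grep 'app:' /tmp/demo-mix.exs | sed -e 's/\[//g' -e 's/ //g' -e 's/app://' -e 's/[:,]//g')"
APP_VSN="$(grep 'version:' /tmp/demo-mix.exs | cut -d '"' -f2)"

echo "$APP_NAME-$APP_VSN"   # arcology-0.1.1
#+end_src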
* =docker build= it

Every solution I try is worse than this, so here's a Dockerfile. All of the build happens outside the container, which I *hate*, but, well, every other solution I try is worse.[fn:2]

#+begin_src dockerfile :tangle Dockerfile
FROM fedora:33
LABEL maintainer="ryan@whatthefuck.computer"

RUN export BUILD_PACKAGES="glibc-common" \
    PACKAGES="findutils git pandoc emacs-nox inotify-tools sqlite glibc-langpack-en" && \
    dnf -y update && dnf -y install ${BUILD_PACKAGES} ${PACKAGES} && dnf clean all

ENV LANG=en_US.UTF-8

RUN git clone https://code.rix.si/upstreams/org-roam /opt/org-roam
RUN mkdir /opt/arcology
COPY rel/artifacts/arcology-0.1.1.tar.gz /tmp/arcology.tar.gz
# RUN uses /bin/sh, which has no pushd; extract with tar -C instead.
RUN tar -xf /tmp/arcology.tar.gz -C /opt/arcology
# BUILD_PACKAGES from the earlier RUN doesn't survive into this layer, so name
# the package again; note this hides files in a new layer rather than shrinking
# the image.
RUN dnf erase -y glibc-common

ENV ARCOLOGY_DIRECTORY=/org
ENV ORG_ROAM_SOURCE=/opt/org-roam
ENV ARCOLOGY_DATABASE=/data/arcology.db
ENV ARCOLOGY_PORT=4000

EXPOSE ${ARCOLOGY_PORT}

CMD bash -c "/opt/arcology/bin/arcology start"
#+end_src

Run [[shell:docker build . -t rrix/arcology &][docker build]].
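
Since the release is built outside the container, =docker build= will happily bake in a stale or missing tarball. A small guard I'd sketch around it; the =0.1.1= in the usage comment is illustrative, matching the =COPY= line above:

#+begin_src shell
# Refuse to build the image when the release tarball isn't there.
check_tarball() {
  if [ ! -f "$1" ]; then
    echo "no release tarball at $1; run bin/build first" >&2
    return 1
  fi
}

# Illustrative usage; 0.1.1 must match the version in mix.exs:
#   check_tarball "rel/artifacts/arcology-0.1.1.tar.gz" && docker build . -t rrix/arcology
#+end_src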
* Deploy the container

[[shell:ansible-playbook -i ~/org/cce/inventory deploy.yml &][Run it]] with this playbook:

#+begin_src yaml :tangle deploy.yml
- name: deploy arcology
  hosts: fontkeming.fail
  gather_facts: no

  roles:
    - role: arcology-deploy
      become: yes
#+end_src

The tasks:

[[file:/usr/lib/python3.9/site-packages/ansible/modules/cloud/podman/podman_image.py::if '/' not in self.name:][ansible treats any image with a slash in the name specially, lmao]], whee.

#+begin_src yaml :mkdirp yes :tangle roles/arcology-deploy/tasks/main.yml
- name: image is pushed
  podman_image:
    name: arcology
    tag: latest
    push: yes
    push_args:
      dest: docker.fontkeming.fail
      auth_file: /run/user/1000/containers/auth.json
  tags:
    - arcology
    - deploy
  # hahaha
  become: no
  connection: local
  register: push_image
  notify: restart arcology

- name: arcology image pulled
  docker_image:
    source: pull
    tag: latest
    name: docker.fontkeming.fail/arcology
  tags:
    - arcology
    - deploy
  when: push_image.changed
  notify: restart arcology

- name: systemd service template in place
  template:
    src: systemd.service.j2
    dest: /etc/systemd/system/arcology.service
  tags:
    - arcology
    - deploy
  notify: restart arcology
#+end_src

#+begin_src yaml :mkdirp yes :tangle roles/arcology-deploy/handlers/main.yml
- name: restart arcology
  systemd:
    name: arcology
    state: restarted
    enabled: yes
    daemon_reload: yes
#+end_src

The systemd unit:

#+begin_src conf :tangle systemd.service.j2
[Unit]
Description=Arcology website engine
After=docker.service

[Service]
Type=simple
ExecStart=/usr/bin/docker run --name arcology -v /srv/files/services/arcology:/arcology:z -v /srv/files/rrix/org/org:/org/:z -e ARCOLOGY_DIRECTORY=/org -e ORG_ROAM_SOURCE=/opt/org-roam -e ARCOLOGY_DATABASE=/arcology/arcology.db -e ARCOLOGY_PORT=5000 -p 127.0.0.1:5000:5000 docker.fontkeming.fail/arcology
ExecStop=/bin/bash -c "/usr/bin/docker stop arcology && /usr/bin/docker rm arcology"

# Without an [Install] section, the handler's "enabled: yes" has nothing to
# enable and systemctl will refuse.
[Install]
WantedBy=multi-user.target
#+end_src

This will run on port 5000. It's really bare-bones, sorry!
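
A quick smoke test for after the handler fires; a sketch assuming =curl= on the host, not part of the tangled deploy:

#+begin_src shell
# Poll a URL until it answers, or give up after $2 attempts (default 30).
wait_for_http() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# e.g. wait_for_http http://127.0.0.1:5000/
#+end_src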

The nginx frontend is currently outside this system, in legacy code on a legacy operating system. More to come...


* Footnotes

[fn:2] [2021-02-21] I tried fiddling with nix again, [[https://discourse.nixos.org/t/nix-elixir-and-dependencies-with-port-compilation/11208][trying to get nix-elixir]] working... I wasted a bunch of effort and am taking a "wait and see" attitude with it. ansible-bender is exiting with some nonsense error and I don't feel like dealing with it! Anyways, that image was 2 GiB and didn't cache at all.

[fn:3] [[file:../personal_software_can_be_shitty.org][Personal Software Can Be Shitty]], and my time is finite. I of course am not out there to make my site go down for weeks, but I don't need [[file:../site_reliability_engineering.org][Site Reliability Engineering]] brainworms infecting all my work.