Migrating from Jenkins to Github/Gitea Actions

I wanted to title this post:

Tea and Tears and Github Actions

This experience was truly traumatic.  Let me first clarify my environment and what I was trying to do, then let me break down what I learned and how you can (hopefully) do it too.

  1. I am using Gitea.  I have been a big fan of Gogs for years, and recently migrated to Gitea, a fork of Gogs that includes artifact management and CI/CD via a Github Actions compatible interface.
  2. I have lots of different types of CI/CD builds.  I am a polyglot of coding languages, working in many different styles and patterns.
  3. I currently use Nexus for artifact management, and Jenkins for builds.

A Simple Build

    I did not start with it, but the first build I got working on Gitea was hardly a build at all.  I have a static site built with Hugo for a charity.  Let's start with the Jenkins version of the code:


def remote = [:]
remote.name = '**name**'
remote.host = '**ip**'
remote.user = '**username**'
remote.allowAnyHosts = true

pipeline {
    agent {
        label 'hugo'
    }
    stages {
        stage('Clean and Setup Environment') {
            steps {
                sh 'rm -rf *'
                checkout scm
                sh 'git submodule update --init'
            }
        }
        stage('Build') {
            steps {
                sh(script: "HUGO_ENV=production hugo")
            }
        }
        stage('Push') {
            steps {
                script {
                    withCredentials([sshUserPrivateKey(credentialsId: '**key name**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        remote.user = USER
                        remote.identityFile = IDENTITY
                        sshPut remote: remote, from: './public/', into: '/var/www/**site name**/.'
                    }
                }
            }
        }
    }
}

  1. The first thing you might notice is that I'm running the code on an agent labeled "hugo".  This is common in Jenkins: you have to run builds on a server (or in a container) that has the tools the process needs.  Github Actions handles this differently, as you'll see later.
  2. Next, you might notice that I clear the directory and re-checkout my code from source control at the start of the process.  This is something I learned to do, in particular with static sites.  I don't want lingering artifacts from previous code hanging around in my workspace, so I clear the directory of previous code.  I'm not doing this with Github Actions yet, but it might be necessary.  You'll see why later.
  3. The build step here is very simple.  I'm just running the hugo command with the environment set to production.
  4. Finally, there's a push step, which is actually a deployment, not a build.  At this point I simply copy the code to a server over SSH.

Let's see what that looks like as a Github Action.  Note: This is not where I started, but where I ended up.


name: Deploy **site**

on: [push]

jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
        with:
          submodules: true
      - name: Build Site
        uses: jakejarvis/hugo-build-action@v0.111.3
        with:
          args: --environment production
      - name: Upload Site
        uses: garygrossgarten/github-action-scp@v0.8.0
        with:
          local: public
          remote: /var/www/**site name**
          host: '**ip**'
          username: ${{ secrets.SSH_USER }}
          privateKey: ${{secrets.SSH_KEY}}
          rmRemote: true

Looks smaller, right?  Nah, YAML is just more dense than Groovy.

  1. My first learning experience was with checking out code.  Unlike with Jenkins, the code is not checked out at the start of the job; you need to do it yourself.  I found out about the Github Marketplace later, but that's where this example action comes from.  The checkout action is a quick way to check out your code before you start your build.  Because Hugo (often) uses submodules to include themes, I also needed to check out the git submodules, which is done easily with the submodules: true flag above.  Why use the action instead of just checking out my code with git?  The action uses Github's (or Gitea's, in this case) existing login to pull down your code, something which doesn't otherwise exist on the runner.  You'll see more about this later.
  2. After that hassle, I noticed that the runs-on container name is not as meaningful as it seems.  It turns out that the build needs to run on a container with the right features installed, and there aren't really runners with every dependency you need pre-installed.  I couldn't just pull down an image with Hugo from Dockerhub; it needed the right tools (e.g. nodejs) in order to be able to run the build at all.  This made the Github Marketplace of actions make a whole lot more sense.  These pre-built scripts install and manage the tools you need, thanks to hard-working open-source developers.  That deeper understanding made it super easy to search out and use the hugo-build-action.  Instead of setting an environment variable, I needed to pass arguments to the build.  Pretty simple transition.
  3. With the involvement of the SCP action, I noticed the need for secrets.  This is much smoother than with Jenkins: you just add them to the repository's secrets in its settings.
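One refinement worth noting (not in my original workflow, and the branch name here is an assumption): the on: trigger can be narrowed so the site only deploys on pushes to a specific branch, rather than on every push anywhere:

```yaml
on:
  push:
    branches:
      - main   # assumed branch name; pushes to other branches are ignored
```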


You may not see the Actions tab in Gitea/Github.  You may need to enable Repository Actions in the repository settings first.
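If the tab is missing entirely, Actions may be disabled at the instance level.  In Gitea this is controlled in app.ini (a sketch; the default has changed between Gitea versions):

```ini
[actions]
ENABLED = true
```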

Something More Complex

    This next build creates a docker image, deploys it to my servers, and even has a cronjob that restarts the docker container regularly.

def tag = "2"

pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Clean and Setup Environment') {
            steps {
                sh 'rm -rf *'
                checkout scm
                script {
                    tag = sh(returnStdout: true, script: 'git describe --tags --abbrev=0').trim()
                }
            }
        }
        stage('Build') {
            steps {
                sh "/usr/bin/docker build -t **ip**:**port**/**image name**:${tag} ."
                sh "/usr/bin/docker save -o **image name**.tar **ip**:**port**/**image name**:${tag}"
            }
        }
        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: '**credentials id**', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                    sh "/usr/bin/docker login -u ${USERNAME} -p ${PASSWORD} http://**ip**:**port**"
                    sh "/usr/bin/docker image push **ip**:**port**/**image name**:${tag}"
                    sh "/usr/bin/docker logout http://**ip**:**port**"
                }
            }
        }
    }
}

  1. You'll notice the use of the tag variable here.  I did things a bit backward with this build, building to the last published tag until I created a new one.  It's better to release only when you create a new tag, something I got to implement cleanly with Github Actions.
  2. This job is fairly simple overall.  I just build a docker image and publish it to my registry (Nexus in this case).  Since the Jenkins server was on the same network, I was publishing inside the network to the IP over http instead of https over the internet.

Deceptively Simple.

In Github Actions, this is broken up into multiple files, defined as workflows.


name: Build **image name**

on: [push]

jobs:
  Build-And-Push:
    runs-on: docker
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Login
        uses: docker/login-action@v3
        with:
          registry: **gitea url**
          username: ${{secrets.DOCKER_USER}}
          password: ${{secrets.DOCKER_PASSWORD}}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: **gitea url**/**organization**/**image name**:latest, **gitea url**/**organization**/**image name**:${{gitea.ref_name}}, **gitea url**/**organization**/**image name**:${{gitea.run_id}}

  1. I used runs-on: docker here, but it isn't important.  I tried using different images to run my code and found the choice to be irrelevant.  In this case I'm just using catthehacker's image, but it changes nothing.  This was my attempt to do the docker build before I discovered the Github Marketplace.
  2. This repo uses the docker build-push-action.  Obviously, this requires me to log in before I can push.  With Gitea secrets, I was able to create a service user to deploy this code.  The other note is that I don't need to create a tar file of the docker image to store as an artifact (as I was doing in Jenkins).  I can just let Gitea store the image, tagged appropriately for the build using Gitea's build environment variables.
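To get the tag-only releases I mentioned wanting, the push trigger can also be limited to tags.  A sketch, assuming tags are named like v1.2.3 (adjust the pattern to your own scheme):

```yaml
on:
  push:
    tags:
      - 'v*'   # only runs when a tag starting with "v" is pushed
```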


name: Deploy **job name**

env:
  CONTAINER_NAME: **container name**
  IMAGE_NAME: '**organization**/**image name**'

on: release

jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.SSH_KEY}}" > ~/.ssh/repo.key
          chmod 600 ~/.ssh/repo.key
          cat >>~/.ssh/config <<END
          Host **server 1**
            HostName **ip**
            User ${{secrets.SSH_USER}}
            IdentityFile ~/.ssh/repo.key
            StrictHostKeyChecking no
          Host **server 2**
            HostName **hostname**
            User ${{secrets.SSH_USER}}
            IdentityFile ~/.ssh/repo.key
            StrictHostKeyChecking no
            Port **custom port**
          END
      - name: Update **server 1**
        env:
          RUN_ARGS: "-e '**environment variable**' -e **environment variable name for secret**=${{secrets.CLOUDFLARE_TOKEN}} "
        run: ssh **server 1** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **gitea url** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **gitea url**/${IMAGE_NAME}:latest && docker run ${RUN_ARGS}--name ${CONTAINER_NAME} **gitea url**/${IMAGE_NAME}:latest; docker logout **gitea url**"
      - name: Update **server 2**
        env:
          RUN_ARGS: "-e '**environment variable**' -e **environment variable name for secret**=${{secrets.CLOUDFLARE_TOKEN}} "
        run: ssh **server 2** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **gitea url** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **gitea url**/${IMAGE_NAME}:latest && docker run ${RUN_ARGS}--name ${CONTAINER_NAME} **gitea url**/${IMAGE_NAME}:latest; docker logout **gitea url**"

Well, so much for being more succinct than Jenkins eh?

  1. This manual SSH configuration and deployment is pretty crazy.  I found this script on StackOverflow; it uses my repository secrets to generate an SSH configuration file, which makes it much easier to run remote commands over SSH.  With it in place, I can easily log into my two remote servers and remove and recreate my docker containers.
  2. Notice that I used environment variables for each run step to simplify the really complicated docker commands.
  3. Also, the on: command has changed.  This job is triggered whenever I create a release on my repository (which I can do through the UI or via other means), so I don't deploy every single build.  The on: command is something that doesn't exist in Jenkins, and it's important.  In Jenkins, you have to interact with Jenkins itself to create the jobs to run, even if you use a Jenkinsfile to define them.  In Github Actions, simply adding the file to your repository inside the Github/Gitea workflows folder will trigger the action whenever the on: requirement is met.  I even have a demo.yaml file in my .gitea folder that I used to make sure Github Actions were working.
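For reference, the layout is just workflow files under the .gitea directory; roughly this shape (the build and deploy file names are illustrative, not necessarily my real ones):

```
.gitea/
└── workflows/
    ├── build.yaml    # on: push
    ├── deploy.yaml   # on: release
    └── demo.yaml     # throwaway file used to verify Actions worked
```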


And finally, the cronjob.


name: **job name** Execution
run-name: Update
on:
  schedule:
    - cron: '*/5 * * * *'

env:
  CONTAINER_NAME: **container name**

jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.SSH_KEY}}" > ~/.ssh/repo.key
          chmod 600 ~/.ssh/repo.key
          cat >>~/.ssh/config <<END
          Host **server 1**
            HostName **ip**
            User ${{secrets.SSH_USER}}
            IdentityFile ~/.ssh/repo.key
            StrictHostKeyChecking no
          END
      - name: Update **server 1**
        run: ssh **server 1** "docker stop ${CONTAINER_NAME};docker start ${CONTAINER_NAME}"

This functions very similarly to the previous job, restarting the container on server 1 every 5 minutes.  You'll notice we were able to insert the cron string into the on: command under schedule.  Very convenient.
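One addition I'd suggest (not in my file above): a workflow_dispatch trigger alongside the schedule lets you fire the job manually from the Actions UI while testing, instead of waiting for the next cron tick.  Gitea's support for this has varied by version, so treat it as a sketch:

```yaml
on:
  schedule:
    - cron: '*/5 * * * *'   # every five minutes
  workflow_dispatch:         # also allow manual runs from the Actions UI
```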

The Big Job

With all this extra knowledge, I decided to tackle a more complex, production application stack with microservices and the like.  To say the least, it was messy.  This one actually had multiple Jenkinsfiles.  One for the build:


def tag = "1"

pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Clean and Setup Environment') {
            steps {
                sh 'rm -rf *'
                checkout changelog: false, scm: scmGit(branches: [[name: '*/master']], browser: gogs('https://**git url**/**organization**/**repository**'), extensions: [submodule(depth: 1, parentCredentials: true, recursiveSubmodules: true, reference: '', shallow: true)], userRemoteConfigs: [[credentialsId: '**credentials id**', url: 'ssh://**git ssh url for repository**']])
                script {
                    tag = sh(returnStdout: true, script: 'git describe --tags --abbrev=0').trim()
                }
            }
        }
        stage('Install Dependencies') {
            steps {
                sh 'python3 -m venv ./venv'
                sh './venv/bin/pip3 install -r requirements.txt'
            }
        }
        stage('Setup Test Environment') {
            steps {
                sh 'docker compose up -d'
                sleep 90
            }
        }
        stage('Test') {
            steps {
                sh './venv/bin/python3 -m unittest discover tests'
            }
        }
        stage('Build') {
            steps {
                sh(script: "/usr/bin/docker build -t **ip**:**port**/**image name**:${tag} .")
                sh "/usr/bin/docker save -o **image name**.tar **ip**:**port**/**image name**:${tag}"
                sh(script: "/usr/bin/docker build -t **ip**:**port**/**image name**:latest .")
                sh "/usr/bin/docker save -o **image name**.tar **ip**:**port**/**image name**:latest"
            }
        }
        stage('Push') {
            steps {
                sh "/usr/bin/docker login -u **service user** -p **service password** http://**ip**:**port**"
                sh "/usr/bin/docker image push **ip**:**port**/**image name**:${tag}"
                sh "/usr/bin/docker image push **ip**:**port**/**image name**:latest"
                sh "/usr/bin/docker logout http://**ip**:**port**"
            }
        }
    }
    post {
        always {
            sh 'docker compose down --remove-orphans'
        }
    }
}

  1. Like the Hugo build, I needed to checkout the code with submodules.  This time, however, the submodules were private repositories and needed credentials.  This will be a problem when we look at the code for Github Actions.
  2. Otherwise, it looks like a pretty simple python setup, except for that pesky docker compose up command in there.  That's me spinning up a database and filling it with data to test with.  I prefer my docker compose files do that by default, so I can do local development in the same way.  It takes a while for postgres to spin up.
  3. I used a feature of the postgresql docker compose file to initialize the database by mounting a volume.  This isn't shown here, but is important later.  For clarity, this volume was also a git submodule.
  4. Then I do a basic docker build, login, and push of the image.  Finally, I tear down the docker compose environment, even on failure.
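The compose file itself isn't shown, but the relevant part is roughly this shape (service names, password, and paths here are illustrative, not my real file; the image is the same one my workflow uses later):

```yaml
services:
  db:
    image: timescale/timescaledb-ha:pg14-latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example   # dev-only password
    volumes:
      # postgres-based images run any *.sql in this directory on first start;
      # in my case the mounted directory is itself a git submodule
      - ./sql:/docker-entrypoint-initdb.d
```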


def server = [:]
server.name = "**server name**"
server.host='**ip**'
server.port=22
server.allowAnyHosts = true

name = '**image name**'

pipeline {
    agent {
        label 'linux'
    }
    stages {
        stage('Get Tag') {
            steps {
                sh 'rm -rf *'
                checkout scm
                script {
                    tag = sh(returnStdout: true, script: 'git describe --tags --abbrev=0').trim()
                }
            }
        }
        stage('Update Docker Image') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: '**credentials id**', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                        withCredentials([sshUserPrivateKey(credentialsId: '**other credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                            server.user = USER
                            server.identityFile = IDENTITY
                            sshCommand remote: server, command: "/usr/bin/docker login -u ${USERNAME} -p ${PASSWORD} http://**ip**:**port**"
                            sshCommand remote: server, command: "/usr/bin/docker pull **ip**:**port**/${name}:${tag}"
                        }
                    }
                }
            }
        }
        stage('Update **microservice 1**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" -e "**environment variable**" -e "**environment variable 2**" -e "**environment variable 3**" -e "**environment variable 4**" -e "**environment variable 5**" '
                    cmd = ' **microservice script**'
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **microservice 2**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **6 other environment variables** '
                    cmd = ' **microservice script**'
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **microservice 3**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **5 more environment variables** '
                    cmd = ' **microservice script**'
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **microservice 4**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **4 more environment variables** '
                    cmd = ' **microservice script**'
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **dev app**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **9 more environment variables** '
                    cmd = ''
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **dev app 2**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **12 more environment variables** '
                    cmd = ''
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
        stage('Update **dev app 3**') {
            steps {
                script {
                    container = '**container name**'
                    args = '--restart=unless-stopped -e "PYTHONUNBUFFERED=1" **9 more environment variables** '
                    cmd = ''
                    withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                        server.user = USER
                        server.identityFile = IDENTITY
                        sshCommand remote: server, command: "/usr/bin/docker container stop ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker container rm ${container}"
                        sshCommand remote: server, command: "/usr/bin/docker run -d ${args}--name ${container} **ip**:**port**/${name}:${tag}${cmd}"
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                withCredentials([sshUserPrivateKey(credentialsId: '**credentials id**', keyFileVariable: 'IDENTITY', passphraseVariable: '', usernameVariable: 'USER')]) {
                    server.user = USER
                    server.identityFile = IDENTITY
                    sshCommand remote: server, command: "/usr/bin/docker logout http://**ip**:**port**"
                }
            }
        }
    }
}

While the build is simple, this app is a monster and deployment is no joke.  As you can see, the deployment uses a lot of SSH commands to reload the docker containers in my test environment.  Outside of that, it's pretty rinse-and-repeat.  So I thought: hey, the deployment is complex, but the build should be easy.  Boy was I wrong.  Here's my ultimately working Github Action script, after many attempts.


name: Build **job name**

on: [push]

jobs:
  Test:
    runs-on: docker
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.SSH_KEY}}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          cat >>~/.ssh/config <<END
          Host **server 1**
            Hostname **hostname**
            User **username**
            IdentityFile ~/.ssh/id_rsa
            StrictHostKeyChecking no
            Port **port**
          Host **server 2**
            Hostname **ip**
            User **username**
            IdentityFile ~/.ssh/id_rsa
            StrictHostKeyChecking no
            Port **port**
          END
          ssh-keyscan -p **port** **ip** >> ~/.ssh/known_hosts
          ssh-keyscan -p **port** **hostname** >> ~/.ssh/known_hosts
      - name: Check out repository code
        run: git clone --depth 1 --recurse-submodules ssh://**server 1**/**organization**/**repository**.git .
      - name: Install Dependencies
        run: |
          apt-get update && apt-get install libgeos-dev build-essential python3-dev postgresql-client -y
          wget https://www.python.org/ftp/python/3.9.19/Python-3.9.19.tgz
          export PYTHON_VERSION=3.9.19
          export PYTHON_MAJOR=3
          tar -xvzf Python-${PYTHON_VERSION}.tgz
          cd Python-${PYTHON_VERSION}
          ./configure \
            --prefix=/opt/python/${PYTHON_VERSION} \
            --enable-shared \
            --enable-optimizations \
            --enable-ipv6 \
            LDFLAGS=-Wl,-rpath=/opt/python/${PYTHON_VERSION}/lib,--disable-new-dtags
          make
          make install
          /opt/python/${PYTHON_VERSION}/bin/python${PYTHON_MAJOR} --version
          cd ..
          /opt/python/${PYTHON_VERSION}/bin/python${PYTHON_MAJOR} -m venv ./venv
          ./venv/bin/pip install -r requirements.txt
      - name: Setup Database
        run: |
          for file in ./sql/*; do
              PGPASSWORD=**dev password** psql -U postgres -h db -f "${file}"
          done
      - name: Run Tests
        run: |
          MIC_DBIP=db ./venv/bin/python -m unittest discover tests --verbose
    services:
      db:
        image: "timescale/timescaledb-ha:pg14-latest"
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: **Dev password**
          POSTGRES_USER: postgres
  Build-And-Push:
    needs: Test
    runs-on: docker
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.SSH_KEY}}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          cat >>~/.ssh/config <<END
          Host **server 1**
            Hostname **hostname**
            User **username**
            IdentityFile ~/.ssh/id_rsa
            StrictHostKeyChecking no
            Port **port**
          Host **server 2**
            Hostname **ip**
            User **username**
            IdentityFile ~/.ssh/id_rsa
            StrictHostKeyChecking no
            Port **port**
          END
          ssh-keyscan -p **port** **ip** >> ~/.ssh/known_hosts
          ssh-keyscan -p **port** **hostname** >> ~/.ssh/known_hosts
      - name: Check out repository code
        run: git clone --depth 1 --recurse-submodules ssh://**server 1**/**organization**/**repository**.git .
      - name: Login
        uses: docker/login-action@v3
        with:
          registry: **hostname**
          username: ${{secrets.DOCKER_USER}}
          password: ${{secrets.DOCKER_PASSWORD}}
      - uses: mr-smithers-excellent/docker-build-push@v6
        name: Build & push Docker image
        with:
          image: **organization**/**image name**
          tags: latest, ${{gitea.ref_name}}, ${{gitea.run_id}}
          registry: **hostname**
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}

  1. First, you might notice a modified version of the manual SSH configuration from a previous example.  This is because I have to clone my code manually: I cannot use the checkout action because of the private submodules.  The documentation for the checkout action suggests this should work, but it simply did not.
  2. You'll also notice that I have to add the server keys to my known hosts file manually.  This is because, even with StrictHostKeyChecking off, we still had host key complaints from git.
  3. Finally, you'll notice we use a different docker build push action.  That's because the default version of the docker build and push action actually runs the checkout action behind the scenes.  I'm so glad someone else wrote a version of this that doesn't do that.
  4. But that Test step is a doozy.  Basically, I have to install python and all dependencies manually (notice that I have to build the older version of python I need from source, complete with make!).  I have to use the fancy Github Actions services feature to spin up my database, and then I have to fill the database manually with bash.  This is because of what we learned earlier: Github Actions don't actually run on the container you specify in runs-on, but on the runner itself.  Further, because the runner uses the host for docker, if I ran docker compose up like in Jenkins, I wouldn't be able to access the services, as they'd be on the host and not in my workspace container.  I also tried the rootless version of the Gitea Runner, but rootless docker has trouble mounting directories, so while my postgres container appeared, it had no data.  Instead I had to do this ugly workaround to create a test environment as part of the build process.
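In hindsight, one thing worth trying before compiling Python from source: the setup-python action from the Marketplace, which installs a requested interpreter version for you.  I haven't verified it against my Gitea runner (it depends on the runner having node and a compatible tool cache), so this is a sketch:

```yaml
- name: Install Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.9'   # the version I was compiling by hand above
- name: Install Dependencies
  run: |
    python -m venv ./venv
    ./venv/bin/pip install -r requirements.txt
```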

I don't like my workaround, because it deviates from how my development environment is set up, but it's the best I can do for now.  Let's look at the deploy file.


name: Deploy **job name**

env:
  CONTAINER_NAME: **container name**
  IMAGE_NAME: '**organization**/**container name**'

on: release

jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "${{secrets.SSH_KEY}}" > ~/.ssh/repo.key
          chmod 600 ~/.ssh/repo.key
          cat >>~/.ssh/config <<END
          Host **server**
            HostName **ip**
            User ${{secrets.SSH_USER}}
            IdentityFile ~/.ssh/repo.key
            StrictHostKeyChecking no
          END
      - name: Update **microservice 1**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ' **microservice script**'
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d --pull=always ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"
      - name: Update **microservice 2**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ' **microservice script**'
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"
      - name: Update **microservice 3**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ' **microservice script**'
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"
      - name: Update **dev app 1**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ''
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"
      - name: Update **dev app 2**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ''
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"
      - name: Update **dev app 3**
        env:
          RUN_ARGS: "--restart=unless-stopped -e 'PYTHONUNBUFFERED=1' **more environment variables** "
          CONTAINER_NAME: '**container name**'
          COMMAND: ''
        run: ssh **server** "docker login -u ${{secrets.DOCKER_USER}} -p ${{secrets.DOCKER_PASSWORD}} **hostname** && docker container stop ${CONTAINER_NAME} && docker container rm ${CONTAINER_NAME}; docker pull **hostname**/${IMAGE_NAME}:latest && docker run -d ${RUN_ARGS}--name ${CONTAINER_NAME} **hostname**/${IMAGE_NAME}:latest${COMMAND}; docker logout **hostname**"

You'll notice I configured SSH manually again so I can run a series of remote SSH commands.  Pretty basic stuff, and very similar to what we did in Jenkins.
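Since every Update step above is the same command with different names and arguments, a build matrix could collapse the copy-paste into a single step.  A hypothetical sketch (service names, args, and commands are placeholders):

```yaml
jobs:
  Deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - name: microservice-1
            command: ' run-microservice-1.sh'
          - name: dev-app-1
            command: ''
    steps:
      - name: Configure SSH
        run: echo "same SSH setup as above"
      - name: Update ${{ matrix.name }}
        run: ssh myserver "docker pull registry.example.com/org/${{ matrix.name }}:latest && docker run -d --name ${{ matrix.name }} registry.example.com/org/${{ matrix.name }}:latest${{ matrix.command }}"
```

One trade-off: each matrix entry becomes its own job, so the SSH configuration step runs once per service, and the updates happen in parallel rather than in sequence.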

So that's my saga: migrating my various CI/CD tasks from Jenkins to Github Actions.  I'm proud that I managed to do it, but infuriated by the number of workarounds and problems I hit along the way.  Still, my builds are green and I'm moving forward.  With any luck, I'll soon be able to eliminate both Nexus and Jenkins from my infrastructure, which would be a big deal.  Thanks for reading.  Happy Easter.

