Manually updating container image tags after every CI build defeats the purpose of continuous deployment. I built a fully automated GitOps pipeline on GKE using ArgoCD Image Updater that closes the loop between pushing code and deploying it — no human intervention required.
This post walks through the architecture, the key configuration decisions, and what I learned.
What the Pipeline Does
The flow is straightforward:
- Code gets pushed to `main`
- GitHub Actions builds the Docker image, tags it with SemVer (`1.0.{run_number}`), and pushes it to Google Artifact Registry
- ArgoCD Image Updater detects the new tag in the registry
- Image Updater writes the updated tag back to Git
- ArgoCD syncs the change to the cluster
- Slack gets notified
No one touches kubectl. No one edits a values file. The whole thing runs on its own.
Git Folder Structure
The repo manages multiple environments and namespaces, with Helm charts handling the templating:
```
nodejs-express-mysql/
├── Dockerfile
├── deploy/
│   ├── dev/
│   │   └── values-dev.yaml
│   ├── prod/
│   │   └── values-prod.yaml
│   └── helm/
│       ├── Chart.yaml
│       ├── templates/
│       └── values.yaml
```
Each environment gets its own values-{env}.yaml with specific replica counts, resource limits, and autoscaling thresholds. The Helm chart stays generic — the values files are where environments diverge.
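For a sense of what diverges, here is a sketch of a production values file. The key names assume a fairly conventional chart layout and the numbers are placeholders, not the project's real settings:

```yaml
# values-prod.yaml -- illustrative sketch; assumes the chart templates
# replicas, resources, and an autoscaling block under these keys.
replicaCount: 3

image:
  repository: europe-west1-docker.pkg.dev/project/repo/app
  tag: 1.0.42          # placeholder tag

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```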
GitHub Actions: Build and Push
The CI workflow triggers on every push to main, builds the image, and pushes to Artifact Registry with a semantic version tag:
```yaml
name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    if: github.actor != 'argocd-image-updater'
    runs-on: ubuntu-latest
    env:
      IMAGE_TAG: 1.0.${{ github.run_number }}
    steps:
      - uses: actions/checkout@v3
      - uses: google-github-actions/auth@v1
        with:
          credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
      - run: gcloud auth configure-docker europe-west1-docker.pkg.dev
      - run: |
          docker build -t europe-west1-docker.pkg.dev/project/repo/app:${{ env.IMAGE_TAG }} .
          docker push europe-west1-docker.pkg.dev/project/repo/app:${{ env.IMAGE_TAG }}
```
The `if: github.actor != 'argocd-image-updater'` guard is important — without it, the Image Updater's write-back commits would trigger infinite build loops.
ArgoCD Image Updater Configuration
The Image Updater needs to authenticate with Google Artifact Registry. I used a ConfigMap with an authentication script that retrieves an OAuth2 token from the GKE node's metadata server:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-cm
  namespace: argocd
data:
  auth.sh: |
    #!/bin/sh
    ACCESS_TOKEN=$(wget --header 'Metadata-Flavor: Google' \
      http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token \
      -q -O - | grep -Eo '"access_token":.*?[^\\]",' | cut -d '"' -f 4)
    echo "oauth2accesstoken:$ACCESS_TOKEN"
```
The Image Updater config references this script with a 30-minute credential expiration:
```yaml
registries:
  - name: GCP Artifact Registry
    prefix: europe-west1-docker.pkg.dev
    api_url: https://europe-west1-docker.pkg.dev
    credentials: ext:/auth/auth.sh
    credsexpire: 30m
```
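The write-back behaviour is driven by annotations on the ArgoCD Application itself, which the snippets above don't show. Here is a hedged sketch of how such an Application is typically annotated for Image Updater; the app name, image alias, repo URL, and value-file paths are placeholder assumptions rather than the project's real values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-dev                      # placeholder name
  namespace: argocd
  annotations:
    # Watch this image in Artifact Registry, pick the highest SemVer tag,
    # and commit the new tag back to Git instead of patching the cluster.
    argocd-image-updater.argoproj.io/image-list: app=europe-west1-docker.pkg.dev/project/repo/app
    argocd-image-updater.argoproj.io/app.update-strategy: semver
    argocd-image-updater.argoproj.io/write-back-method: git
    # Tell Image Updater which Helm values keys hold the image reference.
    argocd-image-updater.argoproj.io/app.helm.image-name: image.repository
    argocd-image-updater.argoproj.io/app.helm.image-tag: image.tag
spec:
  project: default
  source:
    repoURL: https://github.com/example/nodejs-express-mysql   # placeholder
    targetRevision: main
    path: deploy/helm
    helm:
      valueFiles:
        - ../dev/values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```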
Secrets Management with External Secrets Operator
Sensitive values like Slack API tokens are stored in Google Cloud Secret Manager and pulled into the cluster using External Secrets Operator. A ClusterSecretStore connects to GCP Secret Manager, and individual ExternalSecret resources reference specific secrets:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: argocd-notifications-secret
  namespace: argocd
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-manager
    kind: ClusterSecretStore
  target:
    name: argocd-notifications-secret
    creationPolicy: Owner
  data:
    - secretKey: slack-token
      remoteRef:
        key: slack-token
        version: latest
```
This keeps secrets out of Git entirely while remaining declaratively managed.
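The referenced ClusterSecretStore isn't reproduced above. A minimal sketch, assuming Workload Identity is used to reach Secret Manager; the project ID, cluster details, and service account names are placeholders:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: gcp-secret-manager
spec:
  provider:
    gcpsm:
      projectID: my-gcp-project            # placeholder project ID
      auth:
        workloadIdentity:
          clusterLocation: europe-west1    # assumed to match the GKE region
          clusterName: my-gke-cluster      # placeholder cluster name
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```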
Pub/Sub-Based Autoscaling
The application processes messages from Google Cloud Pub/Sub, so CPU alone isn't enough for scaling decisions. The HPA is configured with both CPU utilization and Pub/Sub unacknowledged message count:
```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
      target:
        type: Value
        value: "50"
```
When the message backlog grows — even if CPU is idle — the HPA adds replicas to clear the queue. This was critical for handling bursty workloads where message volume spikes don't always correlate with compute pressure.
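In practice the external metric also needs a selector so the HPA watches one specific subscription, and the metric has to be exposed to the cluster by an adapter such as the Custom Metrics Stackdriver Adapter. A sketch of the surrounding HPA, with the deployment name, replica bounds, and subscription ID as placeholder assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app                           # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                         # placeholder deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: pubsub.googleapis.com|subscription|num_undelivered_messages
          selector:
            matchLabels:
              # Scope the metric to a single subscription (placeholder ID).
              resource.labels.subscription_id: app-subscription
        target:
          type: Value
          value: "50"
```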
Slack Notifications
ArgoCD notifications push sync events to Slack for visibility. The ConfigMap defines templates for success, failure, and health degradation events:
```yaml
service.slack: |
  token: $slack-token
subscribe.on-sync-failed.slack: '#argocd-img-updater'
subscribe.on-sync-succeeded.slack: '#argocd-img-updater'
```
Every deploy posts the app name, sync status, and a link to the ArgoCD dashboard. Simple, but it means the team knows what shipped without checking anything.
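The templates and triggers themselves live in the same ConfigMap. As a hedged illustration following the standard argocd-notifications layout (the message wording and the trigger condition below are mine, not copied from the project), a success template and its trigger look like this:

```yaml
# Illustrative template/trigger pair for argocd-notifications-cm.
template.app-sync-succeeded: |
  message: |
    {{.app.metadata.name}} synced to {{.app.status.sync.revision}}
    Phase: {{.app.status.operationState.phase}}
trigger.on-sync-succeeded: |
  - when: app.status.operationState.phase in ['Succeeded']
    send: [app-sync-succeeded]
```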
What I Learned
SemVer tagging works well with Image Updater. Using `1.0.{run_number}` gives you predictable, sortable tags that Image Updater can compare. Avoid mutable tags like `latest` — they break the update detection.
The OAuth2 metadata approach is clean. Instead of managing service account keys as secrets, using the GKE node's metadata endpoint for registry auth keeps the credential management simple and rotation-free.
Guard against feedback loops. The `github.actor != 'argocd-image-updater'` check in the CI workflow prevents the Image Updater's Git write-backs from triggering new builds. Miss this and you get an infinite loop.
Multi-metric HPA needs tuning. The Pub/Sub threshold was something I iterated on — too low and pods scale up on every small burst, too high and the queue backs up. Start conservative and adjust based on real traffic patterns.
The full implementation is on GitHub.