GitOps: Managing Infrastructure the Same Way You Manage Code

Philip Rehberger Mar 22, 2026 8 min read

GitOps applies Git workflows to infrastructure management, making every change auditable, reversible, and peer-reviewed. Here's how to implement it without the common pitfalls.

GitOps is a simple idea with profound implications: Git is the single source of truth for your entire system state, and automation continuously reconciles what's running with what's declared in your repository. No more manual kubectl applies, no more "who changed that config last Tuesday," no more drift between environments.

If you already trust Git for code, you can trust it for infrastructure.

The GitOps Principles

The OpenGitOps specification defines four core principles:

  1. Declarative: The system is described in its desired state, not via procedural steps
  2. Versioned and immutable: Desired state is stored in Git, providing history and rollback
  3. Pulled automatically: Approved changes are applied automatically by software agents
  4. Continuously reconciled: Software agents observe actual state and correct drift

The key distinction from traditional CI/CD: in classic pipelines, a push event triggers a deployment. In GitOps, a controller inside the cluster continuously watches Git and reconciles what's running with what's declared.
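
The reconcile loop itself is conceptually tiny. Here is a toy sketch in shell, where two variables stand in for the Git-declared state and the live cluster state; a real controller diffs rendered manifests against the Kubernetes API server instead:

```shell
# Toy reconcile loop: 'desired' stands in for the state declared in Git,
# 'live' for what is actually running in the cluster.
desired="replicas: 5"
live="replicas: 3"

if [ "$desired" != "$live" ]; then
  echo "drift detected, applying desired state"
  live="$desired"            # "apply": converge live state toward Git
fi
echo "live is now: $live"
```

A real controller runs this comparison continuously, so drift is corrected whether it came from a new commit or from someone editing the cluster by hand.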

How GitOps Differs From Traditional CI/CD

Traditional Pipeline

Developer pushes code
    → CI builds and tests
    → CI runs kubectl apply (push model)
    → Cluster is updated

Problems:

  • CI system needs cluster credentials
  • No reconciliation — drift is undetected
  • Rollback means re-running a pipeline
  • Hard to see what's actually deployed

GitOps Approach

Developer opens PR to update manifests
    → Review and approval
    → Merge to main
    → GitOps controller detects change (pull model)
    → Controller reconciles cluster to match Git

Benefits:

  • Cluster credentials stay inside the cluster
  • Any drift triggers automatic correction
  • Rollback is a git revert
  • Git history shows exactly what's deployed and when
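
The rollback claim is easy to verify for yourself. The sketch below replays it in a throwaway repo (file contents, tags, and commit messages are made up); in a real setup you would revert the release commit on the config repo and let the controller reconcile:

```shell
# Demonstrate rollback-as-revert in a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.email ci@example.com && git config user.name demo
echo "newTag: 1.4.1" > kustomization.yaml
git add . && git commit -qm "release: api 1.4.1"
echo "newTag: 1.4.2" > kustomization.yaml
git commit -qam "release: api 1.4.2"   # the bad release
git revert --no-edit HEAD              # rollback is just an inverse commit
cat kustomization.yaml                 # back to newTag: 1.4.1
```

Because the revert is itself a commit, the rollback shows up in history with an author and a timestamp, which is exactly the audit trail you want during an incident.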

Repository Structure

GitOps requires careful thought about how you organize your Git repositories. Two main patterns:

Monorepo: Everything in One Place

gitops-repo/
├── apps/
│   ├── api-service/
│   │   ├── base/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   └── kustomization.yaml
│   │   └── overlays/
│   │       ├── staging/
│   │       │   └── kustomization.yaml
│   │       └── production/
│   │           └── kustomization.yaml
│   └── worker-service/
│       └── ...
├── infrastructure/
│   ├── cert-manager/
│   ├── ingress-nginx/
│   └── monitoring/
└── clusters/
    ├── staging/
    └── production/

Good for: Smaller organizations, tight coupling between app and infra changes

Polyrepo: Separate App and Config

app-source-repo/          # Application code
    → CI builds image, tags with commit SHA
    → CI opens PR to config repo

app-config-repo/          # Kubernetes manifests
    → GitOps controller watches this
    → Changes here affect what's running

Good for: Larger organizations, separate ownership of app code vs deployment config

The polyrepo pattern is more common at scale because it separates concerns: developers own app source, platform or release engineers own deployment config.

Kustomize for Environment Differences

Kustomize lets you maintain a single base configuration and apply patches per environment, keeping things DRY without templating.

# apps/api/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myorg/api:latest
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"

# apps/api/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: api
  - patch: |
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/memory
        value: "512Mi"
    target:
      kind: Deployment
      name: api
images:
  - name: myorg/api
    newTag: "1.4.2"  # CI updates this line

The CI pipeline's job is simple: build the image, push it, then update the newTag value in the production overlay and open a pull request. Human review happens in Git, not in the deployment pipeline.
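
For contrast, the staging overlay the pipeline auto-updates can stay minimal. A sketch, assuming staging keeps the base's single replica and only pins the image tag:

```yaml
# apps/api/overlays/staging/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: myorg/api
    newTag: "d4f5e6a"  # CI sets this to the commit SHA on every merge
```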

Argo CD: The GitOps Controller

Argo CD is the most widely adopted GitOps tool for Kubernetes. It runs inside your cluster, watches your Git repository, and reconciles the cluster state continuously.

Installation

kubectl create namespace argocd
kubectl apply -n argocd -f \
  https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Access the UI
kubectl port-forward svc/argocd-server -n argocd 8080:443

# Get initial admin password
argocd admin initial-password -n argocd

Defining an Application

# argocd-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/gitops-config
    targetRevision: main
    path: apps/api-service/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # Delete resources removed from Git
      selfHeal: true   # Correct drift automatically
    syncOptions:
      - CreateNamespace=true

With selfHeal: true, if someone manually edits a deployment (a cardinal GitOps sin), Argo CD will revert it within minutes. This enforces the rule that Git is the source of truth.

App of Apps Pattern

For managing many applications, use the App of Apps pattern — an Argo CD application that manages other Argo CD applications:

# clusters/production/apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-apps
  namespace: argocd
spec:
  source:
    repoURL: https://github.com/myorg/gitops-config
    path: clusters/production
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

The clusters/production directory itself contains one Application manifest per app:

clusters/production/
├── api-service.yaml        # Application CR
├── worker-service.yaml     # Application CR
├── monitoring.yaml         # Application CR
└── kustomization.yaml      # Lists all files

Now bootstrapping a new cluster means applying a single manifest. Argo CD discovers and deploys everything else.
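
The kustomization.yaml in that directory does nothing more than enumerate the Application manifests. A sketch matching the tree above:

```yaml
# clusters/production/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - api-service.yaml
  - worker-service.yaml
  - monitoring.yaml
```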

Handling Secrets in GitOps

Secrets are the hardest part of GitOps. You cannot commit plaintext secrets to Git. Two solid approaches:

Sealed Secrets

Bitnami Sealed Secrets encrypts secrets with a cluster-specific key, making the encrypted form safe to commit:

# Install kubeseal
brew install kubeseal

# Create a regular secret
kubectl create secret generic db-creds \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
  kubeseal --format yaml > sealed-db-creds.yaml

# Commit sealed-db-creds.yaml to Git — safe!
# Only the cluster can decrypt it
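
The file kubeseal produces is an ordinary custom resource. Its shape looks roughly like this; the encryptedData value here is a truncated placeholder, not real ciphertext:

```yaml
# sealed-db-creds.yaml (illustrative shape of kubeseal's output)
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    password: AgBy3i4OJSWK...  # long ciphertext, safe to commit
  template:
    metadata:
      name: db-creds
      namespace: default
```

The Sealed Secrets controller in the cluster decrypts this into a normal Secret; the private key never leaves the cluster.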

External Secrets Operator

A more flexible approach — store secrets in AWS Secrets Manager, GCP Secret Manager, or Vault. The operator syncs them into Kubernetes secrets:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: production/db/credentials
        property: password

This manifest is safe to commit. The actual secret value lives in Secrets Manager and is pulled at runtime.
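
The ExternalSecret above references a ClusterSecretStore named aws-secrets-manager. A minimal sketch of that store, assuming AWS Secrets Manager with service-account (IRSA-style) auth; the region, names, and namespace are illustrative:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```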

The CI/CD Pipeline in a GitOps World

Your CI pipeline changes shape. It no longer deploys — it prepares:

# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]

jobs:
  build-and-update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push image
        run: |
          IMAGE_TAG=${{ github.sha }}
          docker build -t myorg/api:$IMAGE_TAG .
          docker push myorg/api:$IMAGE_TAG

      - name: Update staging manifest
        run: |
          # Checkout config repo
          git clone https://x-access-token:${{ secrets.GITOPS_TOKEN }}@github.com/myorg/gitops-config
          cd gitops-config

          # Update image tag in staging overlay
          cd apps/api-service/overlays/staging
          kustomize edit set image myorg/api=myorg/api:${{ github.sha }}

          git config user.email "ci@myorg.com"
          git config user.name "CI Bot"
          git commit -am "chore: update api to ${{ github.sha }}"
          git push

      # Production update via PR for human review
      - name: Open PR for production
        run: |
          cd gitops-config
          git checkout -b release/${{ github.sha }}
          cd apps/api-service/overlays/production
          kustomize edit set image myorg/api=myorg/api:${{ github.sha }}
          git commit -am "release: api ${{ github.sha }}"
          git push origin release/${{ github.sha }}
          gh pr create --title "Release api ${{ github.sha }}" --body "Auto-generated release PR"
        env:
          GH_TOKEN: ${{ secrets.GITOPS_TOKEN }}

Staging gets auto-deployed on every commit to main. Production requires a PR approval — that's your change control process, built into Git.

Multi-Cluster Management

For multiple clusters, Argo CD's ApplicationSet controller generates applications dynamically:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: api-service
spec:
  generators:
    - list:
        elements:
          - cluster: staging
            url: https://staging.k8s.myorg.com
          - cluster: production
            url: https://prod.k8s.myorg.com
  template:
    metadata:
      name: "api-service-{{cluster}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/myorg/gitops-config
        path: "apps/api-service/overlays/{{cluster}}"
        targetRevision: main
      destination:
        server: "{{url}}"
        namespace: default

Add a new cluster to the list, and all your applications get deployed there automatically.

Measuring GitOps Effectiveness

Track these to validate GitOps is working:

  • Drift incidents: How often does Argo CD alert on drift? Should trend toward zero
  • Manual kubectl applies: Should drop to zero over time
  • Rollback time: Time from "something is wrong" to "reverted" — should be minutes
  • Audit coverage: % of changes that have a corresponding Git commit with reviewer
  • Lead time: Time from merged PR to running in production

Common Mistakes

Committing generated files: Let the GitOps controller handle applying. Committing the output of kubectl get -o yaml creates noise and drift.

Mixing app code and config: Keep your application source and deployment manifests in separate commits or repos. Mixing them makes the history confusing.

Skipping code review for config changes: The whole point of GitOps is that config changes get reviewed. Don't auto-merge config PRs without review, especially for production.

Not testing manifests: Validate manifests in CI before merging. Tools like kubeval, kube-score, and conftest catch problems before they hit the cluster.

# Validate manifests in CI
kubectl apply --dry-run=server -k apps/api-service/overlays/production/
kube-score score apps/api-service/base/deployment.yaml

GitOps makes infrastructure management predictable, auditable, and recoverable. The discipline of treating Git as the source of truth pays dividends every time you need to understand what changed and when — which is exactly when it matters most.

Building something that needs to scale? We help teams architect systems that grow with their business. scopeforged.com
