
I'm migrating a chunk of applications to k8s. Some of them have large numbers of config files which are best held in Git, as their size exceeds the maximum size for a ConfigMap. I have a simple git-sync image which I can configure to keep a persistent volume in sync with a Git repository, and I had hoped to use it as a sidecar in some deployments.

Here's the crux. Some applications (like vendor apps that I can't control) require the configuration files to be present before the application starts. This means I can't just run the git-sync container as a sidecar, because there's no guarantee it will have cloned the Git repo before the main app starts. I've worked around this by running the git sync in a separate deployment and giving my main application an initContainer which checks for the existence of the cloned Git repo before the app starts.

This works but it feels a little messy. Any thoughts on a cleaner approach to this?

Here's a yaml snippet of my deployments:

#main-deployment
...
initContainers:
- name: wait-for-git-sync
  image: my-git-sync:1.0
  command: ["/bin/bash"]
  args: [ "-c", "until [ -d /myapp-config/stuff ] ; do echo \"config not present yet\"; sleep 1; done; exit;" ]
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config
containers:
- name: myapp
  image: myapp:1.0
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config

volumes:
- name: myapp-config
  persistentVolumeClaim:
    claimName: myapp-config
...

#git-sync-deployment
...
containers:
- name: myapp-git-sync
  image: my-git-sync:1.0
  env:
  - name: GIT_REPO
    value: ssh://mygitrepo
  - name: SYNC_DIR
    value: /myapp-config/stuff
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config

volumes:
- name: myapp-config
  persistentVolumeClaim:
    claimName: myapp-config
...

beirtipol

2 Answers


Maybe a readiness probe will help. The kubelet will call your pod on /health; an HTTP error status code means not ready, otherwise ready. As long as the pod is not ready, the Service will not route traffic to it.

  - name: name
    image: "docker.io/app:1.0"
    imagePullPolicy: Always
    readinessProbe:
      httpGet:
        path: /health
        port: 5000
      initialDelaySeconds: 5

And in your code

import os
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    if not os.path.exists('gitfile'):
        return "not ok", 500
    return "OK", 200

Alternatively, a livenessProbe with an exec check evaluates the exit code of the command it runs: zero means success, anything else means failure.

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
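
A readinessProbe can also run a command instead of an HTTP check, which avoids adding an endpoint to the application. A rough sketch, reusing the /myapp-config/stuff path from the question:

readinessProbe:
  exec:
    command:
    - test
    - -d
    - /myapp-config/stuff
  initialDelaySeconds: 5
  periodSeconds: 5
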
Serve Laurijssen
  • using a livenessprobe: that's a recipe for app-crashes, file-still-there, pod-wont-restart. readiness makes more sense, when coded into app, alongside whatever other dependency there would be – SYN Aug 24 '22 at 15:54
  • With a readinessProbe I can achieve this much cleaner. Can you update your answer to change from a livenessProbe to a readinessProbe and use 'ls' instead of 'cat' ? – beirtipol Aug 25 '22 at 10:29

You should use your initContainer to clone the repository. Don't wait for a file to be present: just pull or clone your conf.
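
For example, a rough sketch of what that could look like, assuming the my-git-sync image has bash and git available (adjust the clone command to whatever your image actually runs):

initContainers:
- name: git-clone
  image: my-git-sync:1.0
  # skip the clone if a previous run already populated the volume
  command: ["/bin/bash", "-c", "[ -d \"$SYNC_DIR/.git\" ] || git clone \"$GIT_REPO\" \"$SYNC_DIR\""]
  env:
  - name: GIT_REPO
    value: ssh://mygitrepo
  - name: SYNC_DIR
    value: /myapp-config/stuff
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config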

SYN
  • That doesn't work if I want to keep the repository in sync with a cron job calling git pull/push, as the init container dies as soon as the clone finishes – beirtipol Aug 25 '22 at 10:19
  • It was clear this is what you're doing, and I didn't want to start my answer with this, but that sounds wrong. Beyond the concurrency/corruption issues you'll face when both the app and the job try to write the same file, how do you know your app actually re-reads files that change on its filesystem? You'd better have the job restart your deployment. GitOps can be nice, but this is wrong. – SYN Aug 25 '22 at 15:40