otomi (diff|apply|sync|template) commands are delegated to
helmfile, which in turn delegates the deployment work to
helm. Sometimes it is not clear whether an issue comes from Helm or Helmfile, so we address them together in this section.
otomi apply does not seem to change resources.
The otomi apply command uses helmfile's
apply command, which combines its
diff and sync commands. It first runs a
helmfile diff against helm's bookkeeping (which resides in versioned secrets, e.g.
sh.helm.release.v1.loki.v1). This is the most cost-effective approach and does not lead to a new release version being deployed when there are no changes. However, when you change cluster resources without the otomi cli (so without using helm), this is not reflected in those secrets.
helmfile diff will not see any changes in the secret, so it won't execute the subsequent
helmfile sync. If you wish to overwrite the desired state on the cluster, use the
otomi sync -l name=$releaseName command directly. Use the label selector to target only the affected release, so you don't force-sync all releases, which costs a lot of time.
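Conceptually, helmfile's apply behavior amounts to the following sketch (the release name is hypothetical, and helmfile's actual implementation differs in detail):

```shell
RELEASE_NAME=loki  # hypothetical release name

# `helmfile diff --detailed-exitcode` exits non-zero when there are changes.
if helmfile -l "name=$RELEASE_NAME" diff --detailed-exitcode; then
  echo "No changes detected against helm's bookkeeping; skipping sync."
else
  helmfile -l "name=$RELEASE_NAME" sync
fi
```

This is why out-of-band changes to cluster resources go unnoticed: the diff compares desired state against the release secrets, not against the live resources.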
Helmfile uses Helm 3 under the hood, and it will throw errors in certain situations:
When a resource already exists and was not deployed with the chart before (alien to Helm), it is possible to 'adopt' the resource beforehand by labeling and annotating it correctly:
This functionality exists in the stack in
bin/upgrades/adopt-by-helm.sh, and is used in the upgrade scripts.
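For reference, adoption boils down to setting the ownership label and release annotations that Helm 3 checks before taking over a resource. A minimal sketch (the resource, release, and namespace names are hypothetical; bin/upgrades/adopt-by-helm.sh may do more):

```shell
RELEASE_NAME=loki            # hypothetical release name
RELEASE_NAMESPACE=monitoring # hypothetical namespace

# Helm 3 only takes over a pre-existing resource when it carries
# this ownership label and these release annotations:
kubectl -n "$RELEASE_NAMESPACE" label configmap loki-config \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n "$RELEASE_NAMESPACE" annotate configmap loki-config \
  "meta.helm.sh/release-name=$RELEASE_NAME" \
  "meta.helm.sh/release-namespace=$RELEASE_NAMESPACE" --overwrite
```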
Error: "$releaseName" has no deployed releases
This may happen when you try to install a chart (usually for the first time) and it fails. This results in the release's deployment having state 'failed'.
- When this was the first install: destroy with
otomi destroy -l name=$releaseName and then apply with
otomi apply -l name=$releaseName again.
- When it was successfully deployed before: remove the latest versioned helm secret that is causing the blockage (it follows the sh.helm.release.v1.$releaseName.vN naming shown earlier).
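A hedged sketch of clearing the blocking secret (release name, namespace, and revision number are assumptions; inspect the list first to find the failed revision):

```shell
RELEASE_NAME=loki
NAMESPACE=monitoring  # hypothetical namespace holding the release secrets

# Inspect helm's versioned release secrets, newest last:
kubectl -n "$NAMESPACE" get secrets \
  -l "owner=helm,name=$RELEASE_NAME" \
  --sort-by=.metadata.creationTimestamp

# Remove the latest (failed) revision, e.g. revision 2:
kubectl -n "$NAMESPACE" delete secret "sh.helm.release.v1.${RELEASE_NAME}.v2"
```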
Error: UPGRADE FAILED: failed to replace object: ... field is immutable
This usually happens when a manifest is not allowed to be patched in place and needs to be replaced. Retry the failing release with
otomi apply -l name=$releaseName --extraArgs='--force=true' which does exactly that.
Problem: Sometimes the otomi cli will time out when operating on a Google cluster.
Cause: This happens when the containerized kubectl binary wants to refresh an access token, but it can't find the binary that was registered to do so in the otomi docker container.
Workaround: Retry the command. Before every invocation with the containerized
kubectl binary, otomi cli first runs
kubectl version with the local binary to invoke a token refresh, resulting in an up-to-date config to mount.
The otomi cli is a docker container with all the binaries it needs to deploy to these clusters. When running a command, the local cloud configs are mounted. These configs may contain configuration for token refresh mechanisms, including the name of a binary to execute with certain parameters. This makes it possible to include the binaries in the image and make them available under their known names.
However, the Google Cloud SDK breaks with that approach by hard-coding the path to the local gcloud binary. Sample user section from a kubeconfig:
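It looks roughly like this (cluster name and path are illustrative; the hard-coded cmd-path is the problem):

```yaml
users:
- name: gke_my-project_europe-west4_my-cluster  # illustrative cluster name
  user:
    auth-provider:
      name: gcp
      config:
        # Absolute path on the user's host machine; this binary does not
        # exist at that path inside the otomi container.
        cmd-path: /home/user/google-cloud-sdk/bin/gcloud
        cmd-args: config config-helper --format=json
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
```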
This unfortunately will not work with containerization. We also can't predict the path to this binary on the user's host computer, so we have to hope that Google fixes this some day. They do not seem inclined to:
Maybe they will start to see the importance of this after getting more feedback ;)