ATTENTION: This documentation is based on the old CLI.
We will soon update the Install with CLI section based on the new CLI.
Otomi needs a git repo to store its configuration. We call it a values repo.
You can now bootstrap the versioned artifacts for the aws/azure/google profile of your choice. The following example bootstraps values for azure:
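The exact flags differ per CLI generation; as a sketch (the command name is taken from the bootstrap step mentioned later on this page, and we assume the azure profile is selected through the repo's configuration):

```shell
# Bootstrap the versioned artifacts into the values repo
otomi bootstrap
```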
This will install the value files, but also the needed artifacts, such as the Otomi CLI. Lastly, it sources aliases you can use, such as the `otomi` alias for the imported CLI.
The essential Otomi platform configurations are stored in the `env/secrets.settings.yaml` files. Inspect them and customize the values to match your environment.
The environment variables are defined in the `env/.env` file, where:

- `K8S_CONTEXT` indicates the Kubernetes context name to be used with the otomi CLI
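For example (the context name shown is a placeholder, not a real value):

```shell
# env/.env
K8S_CONTEXT=my-cluster
```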
No encryption needed?
If you don't need encryption straight away, please continue to the next step.
Otomi will encrypt any `secrets.*.yaml` files with sops, but only if it finds a `.sops.yaml` configuration file. (How to work with sops is not in scope of this documentation.)
In order to en-/decrypt the secrets in the values repo, the KMS configuration needs to be provided in `.sops.yaml`. Examples are provided in `.sops.yaml.sample` for the big 3 cloud KMS providers. Copy it, and then edit it:
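As an illustration, a minimal `.sops.yaml` for Google KMS could look like this (the `path_regex` and the key resource name are assumptions, not values from this repo):

```yaml
creation_rules:
  - path_regex: secrets\..*\.yaml$
    gcp_kms: projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
```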
Now, these KMS endpoints also need credentials to access them. Your active AWS profile is picked up automatically (make sure you have loaded the correct one that has access), but in the case of Google KMS you need to add the service account key to the environment:
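Based on the Drone secrets listed later on this page, the Google KMS credentials are provided through `GCLOUD_SERVICE_KEY` (placing it in `env/.env` is an assumption):

```shell
# env/.env: the full service account json as a single value
GCLOUD_SERVICE_KEY='{ "type": "service_account", ... }'
```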
Then you can run `otomi bootstrap` again, which will result in the creation of `gcp-key.json`, which is needed for sops to work locally. To make `git diff` show unencrypted values, you must register the sops diffing routine once with git. To register it:
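A common way to register a sops differ with git, as described in the sops README (the `.gitattributes` pattern is an assumption for this repo layout):

```shell
# Tell git which files the sops differ applies to
echo 'env/secrets.*.yaml diff=sopsdiffer' >> .gitattributes
# Define the differ: decrypt with sops before diffing
git config diff.sopsdiffer.textconv 'sops -d'
```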
This only registers the sops differ, which is responsible for invoking sops. But sops still needs the credentials to the KMS service. Again, your active AWS profile is picked up automatically, but in the case of Google KMS you will need to point `GOOGLE_APPLICATION_CREDENTIALS` to the `gcp-key.json` file holding your account information:
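Assuming `gcp-key.json` was created in the root of the values repo:

```shell
# Point sops' Google KMS client at the service account key
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/gcp-key.json
```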
Now try a diff:
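With the differ registered, an ordinary diff now shows the decrypted values:

```shell
git diff
```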
Otomi Enterprise Edition license needed
If you have a license for Otomi EE you can run the console locally for initial configuration.
If you have not done so already, put the pull secret you have been given in `otomi.pullSecret`. Also make sure the git details are correctly added to `charts/*otomi-api.yaml`. Remember that some providers like GitHub require an access token when MFA/2FA is turned on, so create one (see https://github.com/settings/tokens) and provide that as the `password`. At least the following values are expected:
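The exact keys depend on the chart's values schema; a hypothetical sketch (all key names are assumptions, only `password` is mentioned above):

```yaml
# charts/otomi-api.yaml (hypothetical keys)
git:
  repoUrl: https://github.com/my-org/my-values-repo.git
  user: my-bot-user
  password: <personal access token>
```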
Make sure these are correct and allow access to the repository that was initialized and pushed above.
Then bootstrap again and start the console:
The console allows for easy configuration of many settings, but not all. Assuming the setup steps are completed, you now need to configure the Otomi values repository. This repo is the source configuration for the Otomi Container Platform. It contains the Drone pipeline configuration that listens for updates to these values and targets the cluster the Drone instance runs on.
Configuration can be done much more easily through the Otomi Console, so if you have a license, please refer to the Otomi Console documentation. Not all configuration is (yet) exposed through the console, however, so please look at the values repo's `env/*` files to edit the configuration.
Important things to note:
- Every configuration file can have a `secrets.*.yaml` counterpart, but these are optional.
- A JSON schema and vscode settings are imported by the bootstrap (in `.vscode/*`), so you will have automatic linting and hinting for the configuration when vscode is used.
- If `.sops.yaml` is correctly configured, automatic de-/encryption will also be performed when editing a `secrets.*.yaml` file in vscode.
Configuration that is currently managed by the console:
- Team settings
- Team secrets
- Team services

Configuration not (yet) managed by the console:
- Cluster config
- Otomi settings
- Charts config
Please follow the guidance of the yaml hinting, as it has all the descriptions and example values you need to operate on these files.
Otomi YAML hinting only works in vscode
VSCode automatically loads the provided `.vscode/values-schema.yaml` schema. Please inspect it, or wire it up manually when using another editor.
Make sure to have working DNS management credentials
The most important prerequisite for getting the platform deployed is having correctly set credentials for DNS management. Without them, no domains or IP addresses can be registered, and certificate validation will fail.
If you wish to be sure of your changes, you can always do a `git diff`. If you chose to use encryption and correctly followed the corresponding instructions, you should see a diff with the unencrypted values. That is, if you modified any ;)
When you are done with the configuration you can validate the results:
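On this generation of the CLI, the validation subcommand is presumably `otomi validate-values` (the subcommand name is an assumption):

```shell
otomi validate-values
```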
If you have made an error in the format of the values, this will be reported.
To check that all the output manifests are valid for the target cluster's k8s version and follow best practices, you can run another variation:
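Presumably (again, the subcommand name is an assumption):

```shell
otomi validate-templates
```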
In our demo files we refer to Azure AD as the IDP for Keycloak, which is a common use case. The IDP needs to know the valid callback URLs to return control to Otomi's oauth endpoints. The following callbacks need to be in place (change $clusterDomain to the proper one):
This will be the same for any IDP.
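A hypothetical example of such a callback, assuming Keycloak runs on a `keycloak.` subdomain, uses the default master realm, and the IDP alias is `azure` (all three are assumptions):

```
https://keycloak.$clusterDomain/auth/realms/master/broker/azure/endpoint
```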
The output manifests generated by otomi are deployed in two ways:
- Uncharted: some base manifests are applied directly with `kubectl apply`.
- Charted: manifests that are packaged up in helm charts.
Ideally, we would like to deploy everything as helm charts, as charts bring many benefits such as rollback. But in some cases we can't, or don't wish to, for the following reasons:
- Some resources we don't want governed by charts (as charts might get accidentally removed, erasing everything that was deployed with it).
- Some existing resources have to be patched (like pull secrets in service accounts), which helmfile won't do, as it does not modify existing resources that are not annotated as being under a chart's control.
- Some resources need to exist before the charts are deployed (such as CRDs).
The manifests that are currently not charted are:
k8s/base(unparameterized, mostly rbac roles)
values/cloud(applies cloud specific "normalization" patterns, such as for storageclasses)
values/k8s(team resources, such as namespaces, service accounts, pull secrets)
Currently we don't have any subcommand that only works on uncharted resources, but we have the following commands that target the entire bundle:
- `otomi test`: does a dry run, showing all manifests that will be deployed, and will also show any errors in the output manifests.
- `otomi deploy`: deploys all the manifests (uncharted first, then charted).
So after doing `otomi test`, if all looks ok, go ahead and do the initial deployment of all resources:
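The initial deployment:

```shell
otomi deploy
```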
This command executes two stages (please see `bin/deploy.sh`). The first stage deploys all uncharted resources with `kubectl apply`, and the second stage deploys all the charted resources with helmfile.
Whenever you add a team, or change or add to these uncharted resources, you have to run `otomi deploy` to apply them. When you let Drone do the syncing for you, it will invoke that command to synchronize the cluster.
During development iterations you will probably not touch uncharted resources often, but instead you will add features in charts.
Otomi has these subcommands that only target charted resources:
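Based on the commands referenced in the rest of this section, these are:

```shell
otomi diff    # show what would change against the live cluster
otomi apply   # deploy charts that have changes
otomi sync    # apply all charted manifests without diffing first
```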
You can always target a single chart like this:
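Since the flags are passed on to helmfile, a single release can presumably be selected with helmfile's label selector (the release name is a placeholder):

```shell
otomi apply -l name=<release-name>
```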
(For a list of all supported flags for these subcommands, we defer to the helmfile documentation, as the flags are passed on to the helmfile CLI.)
Let's do a diff of all the charts that are enabled:
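Diff all enabled charts:

```shell
otomi diff
```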
Whenever you modify resources without using helm, helm's internal bookkeeping (the versioned secrets in the namespaces) will not change, and any subsequent `otomi apply` commands will not modify anything. If you notice this and want to overwrite the cluster state with the output manifests, you can use `otomi sync`, which will skip the diff and instead apply all charted manifests as a new version.
After the initial deployment, to enable Continuous Deployment of this repo from within Drone (running in the cluster), do the following for each cluster:
- Login to Drone and activate the values repo to sync with: https://drone.$clusterDomain/
- (Optional) Configure the encryption related secrets as referred to in the configuration section:
  - Google KMS: set `GCLOUD_SERVICE_KEY` to the contents of the service account json file.
  - AWS KMS: set `AWS_ACCESS_KEY_ID` (and `AWS_SECRET_ACCESS_KEY`) for an account that has access.
  - Azure Key Vault: provide credentials for a service principal with access (sops uses `AZURE_TENANT_ID`, `AZURE_CLIENT_ID` and `AZURE_CLIENT_SECRET`).
Sync is now live, and every git change in the values repo is applied by each cluster's Drone.
When you are not using Otomi Enterprise Edition, or are doing development, you will operate on values directly and have to commit them manually:
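The commit is presumably done via the CLI (the subcommand name is an assumption):

```shell
otomi commit
```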
This will detect any version changes, generate the Drone pipeline configuration, and then commit all files with the standardized message "Manual commit". (We believe all values repo configuration changes are equally meaningful and don't need explicit commit messages.) Directly doing a `git commit` is discouraged by a git hook saying so, but whenever you did not touch any versions in `env/clusters.yaml` you may bypass it with `git commit -m "Manual commit" --no-verify` to save development time.
This will then trigger the pipeline of any configured Drone (if you followed the previous step).