# .lagoon.yml

The `.lagoon.yml` file is the central file to set up your project. It contains configuration in order to do the following:
- Define routes for accessing your sites.
- Define pre-rollout tasks.
- Define post-rollout tasks.
- Set up SSL certificates.
- Add cron jobs for environments.
The `.lagoon.yml` file must be placed at the root of your Git repository.
## General Settings

### `docker-compose-yaml`

Tells the build script which Docker Compose YAML file should be used, in order to learn which services and containers should be deployed. This defaults to `docker-compose.yml`, but could be used for a specific Lagoon Docker Compose YAML file if needed.
### `environment_variables.git_sha`

This setting allows you to enable injecting the deployed Git SHA into your project as an environment variable. By default this is disabled. Setting the value to `true` sets the SHA as the environment variable `LAGOON_GIT_SHA`.
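As a minimal sketch, these two general settings sit at the top level of `.lagoon.yml` (the values shown are only illustrative):

```yaml
docker-compose-yaml: docker-compose.yml

environment_variables:
  git_sha: 'true'
```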
## Routes

Routes are used to direct traffic to services. Each service in an environment can have routes, in which the domain names are defined manually or automatically. The top level `routes` section applies to all routes in all environments.

### `routes.autogenerate`

This allows you to configure automatically created routes. Manual routes are defined per environment.
- `enabled`: Set to `false` to disable autogenerated routes. Default is `true`.
- `allowPullrequests`: Set to `true` to override `enabled: false` for pull requests.

    ```yaml
    routes:
      autogenerate:
        enabled: false
        allowPullrequests: true
    ```

- `insecure`: Configures HTTP connections. Default is `Allow`.
    - `Allow`: Route will respond to HTTP and HTTPS.
    - `Redirect`: Route will redirect any HTTP request to HTTPS.
- `prefixes`: Configure prefixes for the autogenerated routes of each environment. This is useful for things like language prefix domains, or a multi-domain site using the Drupal `domain` module.
```yaml
routes:
  autogenerate:
    prefixes:
      - www
      - de
      - fr
      - it
```
## Tasks

There are different types of tasks you can define, and they differ in when exactly they are executed in a build flow:

### Pre-Rollout Tasks - `pre_rollout.[i].run`
Here you can specify tasks which will run against your project after all images have been successfully built, but before:
- Any running containers are updated with the newly built images.
- Any other changes are made to your existing environment.
This feature enables you to, for example, create a database dump before updating your application. This can make it easier to roll back in case of a problem with the deploy.
Info
The pre-rollout tasks run in the existing pods before they are updated, which means:
- Changes made to your Dockerfile since the last deploy will not be visible when pre-rollout tasks run.
- If there are no existing containers (e.g. on the initial deployment of a new environment), pre-rollout tasks are skipped.
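As a sketch, a pre-rollout task uses the same `run` fields (`name`, `command`, `service`, and so on) that are documented for post-rollout tasks below; the database-dump command shown here is only an illustrative placeholder:

```yaml
- run:
    name: Dump database before rollout
    command: drush sql-dump --result-file=/app/pre-deploy-dump.sql # illustrative command, adjust for your project
    service: cli
```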
### Post-Rollout Tasks - `post_rollout.[i].run`
Here you can specify tasks which need to run against your project, after:
- All images have been successfully built.
- All containers are updated with the new images.
- All containers are running and have passed their readiness checks.
Common uses for post-rollout tasks include running `drush updb`, `drush cim`, or clearing various caches.
- `name` - The name is an arbitrary label for making it easier to identify each task in the logs.
- `command` - Here you specify what command should run. These are run in the `WORKDIR` of each container; for Lagoon images this is `/app`. Keep this in mind if you need to `cd` into a specific location to run your task.
- `service` - The service in which to run the task. If following our Drupal example, this will be the CLI container, as it has all your site code, files, and a connection to the database. Typically you do not need to change this.
- `container` - If the service has multiple containers (e.g. `nginx-php`), you will need to specify which container in the pod to connect to (e.g. the `php` container within the `nginx` pod).
- `shell` - In which shell the task should be run. By default `sh` is used, but if the container also has other shells (like `bash`), you can define it here. This is useful if you want to run some small if/else bash scripts within the post-rollouts. See the example below to learn how to write a script with multiple lines.
- `when` - The "when" clause allows for the conditional running of tasks. It expects an expression that will evaluate to a true/false value which determines whether the task should be run.
Note: If you would like to temporarily disable pre/post-rollout tasks during a deployment, you can set either of the following environment variables in the API at the project or environment level (see how on Environment Variables).

- `LAGOON_PREROLLOUT_DISABLED=true`
- `LAGOON_POSTROLLOUT_DISABLED=true`
### Example post-rollout tasks
Here are some useful examples of post-rollout tasks that you may want to use or adapt for your projects.
Run only if Drupal not installed:

```yaml
- run:
    name: IF no Drupal installed
    command: | # (1)
      if tables=$(drush sqlq "show tables like 'node';") && [ -z "$tables" ]; then
        #### whatever you like
      fi
    service: cli
    shell: bash
```

1. This shows how to create a multi-line command.
Different tasks based on branch name:

```yaml
- run:
    name: Different tasks based on branch name
    command: |
      ### Runs if current branch is not 'production'
    service: cli
    when: LAGOON_GIT_BRANCH != "production"
```
Run shell script:

```yaml
- run:
    name: Run Script
    command: './scripts/script.sh'
    service: cli
```
Target specific container in pod:

```yaml
- run:
    name: show php env variables
    command: env
    service: nginx
    container: php
```
Drupal & Drush 9: Sync database & files from master environment:

```yaml
- run:
    name: Sync DB and Files from master if we are not on master
    command: |
      # Only if we don't have a database yet
      if tables=$(drush sqlq 'show tables;') && [ -z "$tables" ]; then
        drush sql-sync @lagoon.master @self # (1)
        drush rsync @lagoon.master:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX
      fi
    service: cli
    when: LAGOON_ENVIRONMENT_TYPE != "production"
```

1. Make sure to use the correct aliases for your project here.
## Backup Retention

### `backup-retention.production.monthly`

Specify the number of monthly backups Lagoon should retain for your project's production environment(s).

The global default is `1` if this value is not specified.
### `backup-retention.production.weekly`

Specify the number of weekly backups Lagoon should retain for your project's production environment(s).

The global default is `6` if this value is not specified.
### `backup-retention.production.daily`

Specify the number of daily backups Lagoon should retain for your project's production environment(s).

The global default is `7` if this value is not specified.
### `backup-retention.production.hourly`

Specify the number of hourly backups Lagoon should retain for your project's production environment(s).

The global default is `0` if this value is not specified.
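Taken together, the retention settings form one block; this sketch simply spells out the documented defaults:

```yaml
backup-retention:
  production:
    monthly: 1
    weekly: 6
    daily: 7
    hourly: 0
```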
## Backup Schedule

### `backup-schedule.production`

Specify the backup schedule for this project. Accepts cron-compatible syntax with the notable exception that the Minute block must be the letter `M`. Any other value in the Minute block will cause the Lagoon build to fail. This allows Lagoon to randomly choose a specific minute for these backups to happen, while users can specify the remainder of the schedule down to the hour.

The global default is `M H(22-2) * * *` if this value is not specified. Take note that these backups will use the cluster's local timezone.
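For example, a sketch that backs up once per night at a random minute of 22:00 (the hour chosen here is only illustrative):

```yaml
backup-schedule:
  production: "M 22 * * *"
```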
## Environments

Environment names match your deployed branches or pull requests. This allows for each environment to have a different config. In our example it will apply to the `main` and `staging` environments.
### `environments.[name].routes`

Manual routes are domain names that are configured per environment to direct traffic to a service. Since all environments get automatically created routes by default, it is typical that manual routes are only set up for the production environment, using the main domain of the project's website, like `www.example.com`.
Tip
Since Lagoon has no control over the manual routes, you'll need to ensure the DNS records are configured properly at your DNS provider. You can likely set a `CNAME` record to point to the automatic route.
The first element after the environment is the target service, `nginx` in our example. This is how we identify which service incoming requests will be sent to.

The simplest route is `example.com`, as seen in our example `.lagoon.yml` - you can see it has no additional configuration. This will assume that you want a Let's Encrypt certificate for your route and no redirect from HTTP to HTTPS.
In the "www.example.com"
example below, we see three more options (also
notice the :
at the end of the route and that the route is wrapped in "
,
that's important!):
- "www.example.com":
tls-acme: true
insecure: Redirect
hstsEnabled: true
### SSL Configuration `tls-acme`

Warning

If you switch from `tls-acme: true` to `tls-acme: false`, this will remove any previously generated certificates for this route. This could result in unexpected behaviour if you're using an external CDN and do any certificate pinning.
- `tls-acme`: Configures automatic TLS certificate generation via Let's Encrypt. Default is `true`, set to `false` to disable automatic certificates.
- `insecure`: Configures HTTP connections. Default is `Allow`.
    - `Allow`: Route will respond to HTTP and HTTPS.
    - `Redirect`: Route will redirect any HTTP request to HTTPS.
- `hstsEnabled`: Adds the `Strict-Transport-Security` header. Default is `false`.
- `hstsMaxAge`: Configures the `max-age` directive. Default is `31536000` (1 year).
- `hstsPreload`: Sets the `preload` directive. Default is `false`.
- `hstsIncludeSubdomains`: Sets the `includeSubDomains` directive. Default is `false`.
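A sketch of a route that combines these options (the non-default values chosen here, such as the two-year `hstsMaxAge`, are only illustrative):

```yaml
- "www.example.com":
    tls-acme: true
    insecure: Redirect
    hstsEnabled: true
    hstsMaxAge: 63072000
    hstsPreload: true
    hstsIncludeSubdomains: true
```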
Info
If you plan to switch from an SSL certificate signed by a Certificate Authority (CA) to a Let's Encrypt certificate, it's best to get in touch with amazee.io support to oversee the transition.
### Monitoring a specific path
Info
Lagoon does not provide any monitoring capabilities out of the box, only labels and annotations. Check with amazee.io support if monitoring is supported.
Lagoon will add the label `lagoon.sh/primaryIngress=true` to the first route defined in the `.lagoon.yml` file for an environment.

If a specific path on a route requires monitoring, define `monitoring-path` with the path to use. Lagoon will add this path to the route in the annotation `monitor.stakater.com/overridePath`.
- "www.example.com":
monitoring-path: "/bypass-cache"
Info
The annotation `monitor.stakater.com/overridePath` used by `monitoring-path` references the Stakater monitoring controller; it is not used by Lagoon itself. This annotation will eventually be replaced with a `lagoon.sh`-scoped annotation.
### Ingress annotations
Warning
Route/Ingress annotations are only supported by projects that deploy into clusters that run nginx-ingress controllers! Check with amazee.io support if this is supported.
`annotations` can be a YAML map of annotations supported by the nginx-ingress controller. This is specifically useful for easy redirects and other configurations.
#### Restrictions

Some annotations are disallowed or partially restricted in Lagoon. The table below describes these rules.

If your `.lagoon.yml` contains one of these annotations it will cause a build failure.
| Annotation | Notes |
|---|---|
| `nginx.ingress.kubernetes.io/auth-snippet` | Disallowed |
| `nginx.ingress.kubernetes.io/configuration-snippet` | Restricted to `rewrite`, `add_header`, `set_real_ip`, and `more_set_headers` directives. |
| `nginx.ingress.kubernetes.io/modsecurity-snippet` | Disallowed |
| `nginx.ingress.kubernetes.io/server-snippet` | Restricted to `rewrite`, `add_header`, `set_real_ip`, and `more_set_headers` directives. |
| `nginx.ingress.kubernetes.io/stream-snippet` | Disallowed |
| `nginx.ingress.kubernetes.io/use-regex` | Disallowed |
#### Ingress annotations redirects

In this example any requests to `example.ch` will be redirected to `https://www.example.ch` while keeping folders or query parameters intact (`example.ch/folder?query` -> `https://www.example.ch/folder?query`).

```yaml
- "example.ch":
    annotations:
      nginx.ingress.kubernetes.io/permanent-redirect: https://www.example.ch$request_uri
- www.example.ch
```
You can of course also redirect to any other URL not hosted on Lagoon. This will direct requests to `example.de` to `https://www.google.com`:

```yaml
- "example.de":
    annotations:
      nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com
```
#### Trusted Reverse Proxies

Warning

Kubernetes will only process a single `nginx.ingress.kubernetes.io/server-snippet` annotation. Please ensure that if you use this annotation on a non-production environment route, you also include the `add_header X-Robots-Tag "noindex, nofollow";` annotation as part of your server-snippet. This is needed to stop robots from crawling development environments: the default server-snippet set in the ingress templates to prevent this in development environments will get overwritten by any server-snippets set in `.lagoon.yml`.
Some configurations involve a reverse proxy (like a CDN) in front of the Kubernetes clusters. In these configurations, the IP of the reverse proxy will appear in the `REMOTE_ADDR`, `HTTP_X_REAL_IP`, and `HTTP_X_FORWARDED_FOR` headers in your applications. The original IP of the requester can be found in the `HTTP_X_ORIGINAL_FORWARDED_FOR` header.

If you want the original IP to appear in the `REMOTE_ADDR`, `HTTP_X_REAL_IP`, and `HTTP_X_FORWARDED_FOR` headers, you need to tell the ingress which reverse proxy IPs you want to trust:
- "example.ch":
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
set_real_ip_from 1.2.3.4/32;
This example would trust the CIDR `1.2.3.4/32` (the IP `1.2.3.4` in this case). Therefore, if there is a request sent to the Kubernetes cluster from the IP `1.2.3.4`, the `X-Forwarded-For` header is analyzed and its contents injected into the `REMOTE_ADDR`, `HTTP_X_REAL_IP`, and `HTTP_X_FORWARDED_FOR` headers.
### `environments.[name].types`

The Lagoon build process checks the `lagoon.type` label from the `docker-compose.yml` file in order to learn what type of service should be deployed (read more about them in the documentation of `docker-compose.yml`).

Sometimes you might want to override the type just for a single environment, and not for all of them. For example, if you want a standalone MariaDB database (instead of letting the Service Broker/operator provision a shared one) for your non-production environment called `develop`:
`service-name: service-type`

- `service-name` is the name of the service from `docker-compose.yml` you would like to override.
- `service-type` is the type of the service you would like to use in your override.
Example for setting up a standalone MariaDB for the `develop` environment:

```yaml
environments:
  develop:
    types:
      mariadb: mariadb-single
```
### `environments.[name].templates`

The Lagoon build process checks the `lagoon.template` label from the `docker-compose.yml` file in order to check if the service needs a custom template file (read more about them in the documentation of `docker-compose.yml`).

Sometimes you might want to override the template just for a single environment, and not for all of them:
`service-name: template-file`

- `service-name` is the name of the service from `docker-compose.yml` you would like to override.
- `template-file` is the path and name of the template to use for this service in this environment.
#### Example Template Override

```yaml
environments:
  main:
    templates:
      mariadb: mariadb.main.deployment.yml
```
### `environments.[name].rollouts`

The Lagoon build process checks the `lagoon.rollout` label from the `docker-compose.yml` file in order to check if the service needs a special rollout type (read more about them in the documentation of `docker-compose.yml`).

Sometimes you might want to override the rollout type just for a single environment, especially if you also overwrote the template type for the environment:
`service-name: rollout-type`

- `service-name` is the name of the service from `docker-compose.yml` you would like to override.
- `rollout-type` is the type of rollout. See the documentation of `docker-compose.yml` for possible values.
#### Custom Rollout Type Example

```yaml
environments:
  main:
    rollouts:
      mariadb: statefulset
```
### `environments.[name].autogenerateRoutes`

This allows for any environments to get autogenerated routes when route autogeneration is disabled.

```yaml
routes:
  autogenerate:
    enabled: false
environments:
  develop:
    autogenerateRoutes: true
```
### `environments.[name].cronjobs`

Cron jobs must be defined explicitly for each environment, since it is typically not desirable to run the same ones for all environments. Depending on the defined schedule, cron jobs may run as a Kubernetes native `CronJob` or as an in-pod cron job via the crontab of the defined service.
#### Cron Job Example

```yaml
cronjobs:
  - name: Hourly Drupal Cron
    schedule: "M * * * *" # Once per hour, at a random minute.
    command: drush cron
    service: cli
  - name: Nightly Drupal Cron
    schedule: "M 0 * * *" # Once per day, at a random minute from 00:00 to 00:59.
    command: drush cron
    service: cli
```
- `name`: Any name that will identify the purpose and distinguish it from other cron jobs.
- `schedule`: The schedule for executing the cron job. Lagoon uses an extended version of the crontab format. If you're not sure about the syntax, use a crontab generator.
    - You can specify `M` for the minute, and your cron job will run once per hour at a random minute (the same minute each hour), or `M/15` to run it every 15 mins, but with a random offset from the hour (like `6,21,36,51`). It is a good idea to spread out your cron jobs using this feature, rather than have them all fire off on minute `0`.
    - You can specify `H` for the hour, and your cron job will run once per day at a random hour (the same hour every day), or `H(2-4)` to run it once per day within the hours of 2-4.
Timezones:
- The default timezone for cron jobs is UTC.
- Native cron jobs use the timezone of the node, which is UTC.
- In-pod cron jobs use the timezone of the defined service, which can be configured to something other than UTC.
- `command`: The command to execute. This executes in the `WORKDIR` of the service. For Lagoon images, this is `/app`.
Warning
Cronjobs may run in-pod, via crontab, which doesn't support multiline commands. If you need a complex or multiline cron command, you must put it in a script that can be used as the command. Consider whether a pre- or post-rollout task would work.
Danger
Cronjobs run in Kubernetes pods, which means they can be interrupted due to pod rescheduling. Therefore when creating a cronjob you must ensure that the command can be safely interrupted and re-run at the next cron interval.
- `service`: Which service of your project to run the command in. For most projects, this should be the `cli` service.
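For a complex or multiline job, the sketch below calls a script committed to the repository instead of inlining the commands (the script path and name are hypothetical):

```yaml
cronjobs:
  - name: Nightly maintenance
    schedule: "M 1 * * *"
    command: ./scripts/nightly-maintenance.sh # hypothetical script in your repository
    service: cli
```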
## Polysite

In Lagoon, the same Git repository can be added to multiple projects, creating what is called a polysite. This allows you to run the same codebase, but allow for different, isolated, databases and persistent files. In `.lagoon.yml`, we currently only support specifying custom routes for a polysite project. The key difference from a standard project is that `environments` becomes the second-level element, and the project name the top level.
To utilize this, you will need to:

- Create two (or more) projects in Lagoon, each configured with the same Git URL and production branch, named per your `.lagoon.yml` (i.e. `poly-project1` and `poly-project2` below).
- Add the deploy keys from each project to the Git repository.
- Configure the webhook for the repository (if required) - you can then push/deploy. Note that a push to the repository will simultaneously deploy all projects/branches for that Git URL.
### Polysite Example

```yaml
poly-project1:
  environments:
    main:
      routes:
        - nginx:
          - project1.com

poly-project2:
  environments:
    main:
      routes:
        - nginx:
          - project2.com
```
## Specials

### `api`

Info

If you run directly on amazee.io hosted Lagoon you will not need this key set.

With the key `api` you can define another URL that should be used by the Lagoon CLI and `drush` to connect to the Lagoon GraphQL API. This needs to be a full URL with a scheme, like: `http://localhost:3000`. This usually does not need to be changed, but there might be situations where amazee.io support tells you to do so.
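A minimal sketch, reusing the example URL above:

```yaml
api: http://localhost:3000
```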
### `ssh`

Info

If you run directly on amazee.io hosted Lagoon you will not need this key set.

With the key `ssh` you can define another SSH endpoint that should be used by the Lagoon CLI and `drush` to connect to the Lagoon remote shell service. This needs to be a hostname and a port separated by a colon, like: `localhost:2020`. This usually does not need to be changed, but there might be situations where amazee.io support tells you to do so.
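A minimal sketch, reusing the example endpoint above:

```yaml
ssh: localhost:2020
```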
### `container-registries`

The `container-registries` block allows you to define your own private container registries to pull custom or private images.

To use a private container registry, you will need a `username`, `password`, and optionally the `url` for your registry. If you don't specify a `url` in your YAML, it will default to using Docker Hub. We also recommend adding a `description` to your container-registry entries to provide a bit of information about them; some examples are provided.
There are 2 ways to define the username and password used for your registry user:

- Define them as environment variables in the API.
- Hardcode them in the `.lagoon.yml` file (we don't recommend this though).
#### Environment variables method

First, define the `container-registries` in your `.lagoon.yml`; you don't need to define the username or password here. If you do use a custom registry, you will still need to provide the `url`, for example:
```yaml
container-registries:
  docker-hub:
    description: "username and password consumed from environment variables for the default docker.io registry"
  my-custom-registry:
    description: "username and password consumed from environment variables for my custom registry"
    url: my.own.registry.com
  another-custom-registry:
    description: "password consumed from environment variables for my other registry"
    username: myotheruser
    url: my.other.registry.com
```
If you do define a username in the `.lagoon.yml` you don't need to add the associated variable, but if you do add the variable, the value of the variable will be preferred.
Next, create environment variables in the Lagoon API with the type `container_registry`:

```bash
lagoon add variable -p <project_name> -N <registry_username_variable_name> -V <username_goes_here> -S container_registry
lagoon add variable -p <project_name> -N <registry_password_variable_name> -V <password_goes_here> -S container_registry
```

(See more on Environment Variables.)
The name of each variable is derived from the name of the registry defined in the `.lagoon.yml` file; it should:

- be uppercase
- replace `-` with `_`
- have the prefix `REGISTRY_`
- have the suffix `_USERNAME` or `_PASSWORD`.
Some examples of this are:

- `dockerhub` would become `REGISTRY_DOCKERHUB_USERNAME` and `REGISTRY_DOCKERHUB_PASSWORD`
- `docker-hub` would become `REGISTRY_DOCKER_HUB_USERNAME` and `REGISTRY_DOCKER_HUB_PASSWORD`
- `my-custom-registry` would become `REGISTRY_MY_CUSTOM_REGISTRY_USERNAME` and `REGISTRY_MY_CUSTOM_REGISTRY_PASSWORD`
- Lowercased versions may still work if there are no `-` in them, for example `REGISTRY_dockerhub_USERNAME`, but the uppercased version will always be chosen above others.
Legacy method of defining registry password

A previous method allowed the password to be defined using an environment variable, with the name of the variable defined in the `.lagoon.yml` file like so:

```yaml
container-registries:
  docker-hub:
    username: dockerhubuser
    password: MY_DOCKER_HUB_PASSWORD
```

The username needs to be provided in this file too, unless the supported variable for defining the username is provided.

The variable can then be added to the API like so:

```bash
lagoon add variable -p <project_name> -N MY_DOCKER_HUB_PASSWORD -V <password_goes_here> -S container_registry
```

While we will continue to support this method, it may be deprecated in the future. We will ensure that warnings are presented within builds to give users time to change to the supported method.

If a supported variable password is provided, it will be used instead of the custom-named variable.
#### Hardcoded values method

You can also define the password directly in the `.lagoon.yml` file in plain text, however we do not recommend this.

```yaml
container-registries:
  docker-hub:
    description: "the default docker.io registry credentials"
    username: dockerhubuser
    password: MySecretPassword
  my-custom-registry:
    description: "the credentials for my own registry"
    url: my.own.registry.com
    username: mycustomuser
    password: MyCustomSecretPassword
```
#### Consuming a custom or private container registry image

To consume a custom or private container registry image, you need to update the service inside your `docker-compose.yml` file to use a build context instead of defining an image:

```yaml
services:
  mariadb:
    build:
      context: .
      dockerfile: Dockerfile.mariadb
```
Once the `docker-compose.yml` file has been updated to use a build, you need to create the `Dockerfile.<service>` and then set your private image as the `FROM <repo>/<name>:<tag>`:

```dockerfile
FROM dockerhubuser/my-private-database:tag
```
## Example `.lagoon.yml`

This is an example `.lagoon.yml` which showcases all possible settings. You will need to adapt it to your project.
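Below is a condensed sketch assembled from the settings documented in the sections above; the domain names, branch names, and values are illustrative only, and the sketch is not exhaustive:

```yaml
docker-compose-yaml: docker-compose.yml

environment_variables:
  git_sha: 'true'

routes:
  autogenerate:
    prefixes:
      - www

backup-retention:
  production:
    monthly: 1
    weekly: 6
    daily: 7

backup-schedule:
  production: "M 22 * * *"

environments:
  main:
    routes:
      - nginx:
        - example.com
        - "www.example.com":
            tls-acme: true
            insecure: Redirect
            hstsEnabled: true
    types:
      mariadb: mariadb-single
    templates:
      mariadb: mariadb.main.deployment.yml
    rollouts:
      mariadb: statefulset
    cronjobs:
      - name: Nightly Drupal Cron
        schedule: "M 0 * * *"
        command: drush cron
        service: cli
```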
## Deprecated

These settings have been deprecated and should be removed from use in your `.lagoon.yml`.

- `routes.autogenerate.insecure`

    The `None` option is equivalent to `Redirect`.

- `environments.[name].monitoring_urls`
- `environments.[name].routes.[service].[route].hsts`
- `environments.[name].routes.[service].[route].insecure`

    The `None` option is equivalent to `Redirect`.