Deployment methodology, workflow and best practices
The challenge of deploying Drupal sites
Let's talk about an optimal deployment workflow for Drupal. The discussion rests on a few assumptions:
- Composer-based Drupal 8 projects.
- There are at least two environment types for the site (Continuous integration and production).
- Knowledge of Docker or other virtualization tools.
Composer-based Drupal: what is Composer?
Composer is a dependency manager for PHP (like npm for Node or pip for Python). Drupal core uses Composer to manage core dependencies like Symfony components and Guzzle. Composer allows us to systematically manage a list of dependencies and their subsidiary dependencies. Composer installs these dependencies via a manifest file called composer.json.
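For example, on Drupal 8.8+ a Composer-based project can be created from the official template, and contributed modules are added as regular dependencies (the project name below is just illustrative):

    # Create a new Composer-based Drupal project from the official template.
    composer create-project drupal/recommended-project my_site

    # Add a contributed module; Composer resolves its dependencies for us.
    cd my_site
    composer require drupal/admin_toolbar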
What to commit?
The composer.json file: this is obvious when using Composer.
The composer.lock file: this is important, since it allows you to rebuild the entire codebase in exactly the state it was in at a given point in the past.
The fully built site is commonly left out of the repository. But this also means that you need a way to rebuild and deploy the codebase safely.
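As a hedged sketch, assuming the common layout with a web/ docroot, such a repository typically ignores everything Composer builds:

    $ cat .gitignore
    # Built artifacts are rebuilt from composer.json/composer.lock, not committed.
    /vendor/
    /web/core/
    /web/modules/contrib/
    /web/themes/contrib/
    /web/profiles/contrib/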
Should we run Composer in our production environments?
You would clearly never run composer update on the production server, as you want to be sure that you are deploying the very same code you have been developing against. For a while, we considered it enough to have Composer installed on the server (or in the Docker image or other virtualization environment) and run composer install to fetch the dependencies pinned in the (committed) composer.lock file.
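In that approach, the deploy script would run something along these lines (standard Composer flags; --no-dev leaves out development-only packages):

    # Reproduce the codebase exactly as pinned in the committed composer.lock.
    composer install --no-dev --optimize-autoloader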
The process is not robust. A transient network error or timeout can result in a failed build, introducing uncertainty into the deploy scripts. Easy to handle, but still not desirable as part of a delicate step such as deployment.
The process inevitably takes a long time. If you run composer install directly in the webroot, your codebase can be unstable for a few minutes. This is orders of magnitude longer than a standard update process (i.e., running drush updb and drush cim) and it may affect your site's availability. This can be circumvented by building separate virtualization images and swapping the old image for the new one once the latter is ready, as sketched below.
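A hedged sketch of that image-swap approach (image and service names are hypothetical, assuming a docker-compose setup whose web service uses the mysite:live tag):

    # Build the new codebase into a fresh image while the old one keeps serving.
    docker build -t mysite:release-42 .

    # Swap only once the build is complete: retag and recreate the container.
    docker tag mysite:release-42 mysite:live
    docker-compose up -d --force-recreate web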
Even composer install can be unpredictable, especially on servers with restrictions or running different versions of Composer or PHP; in rare circumstances, a build may succeed but yield a different codebase. This can be mitigated by enforcing (e.g., through Docker or other virtualization) a dev/staging environment that matches production, but you are still giving up control over a relatively lengthy process.
Composer simply does not belong on a production server. It is a tool with a different scope, unrelated to the main tasks of a production server.
After ruling out the production server, where should the codebase be built then?
Building it locally (i.e., in a developer's environment) can't work: besides the differences between the development and the production (--no-dev) setup, there is the risk that small patches applied to the local codebase slip in unnoticed. And a totally clean build is always necessary anyway.
We ended up using Continuous Integration for this task. Besides the standard CI job, which runs after every push to the branches under active development, performs a clean installation and executes the automated tests, a second CI job builds the full codebase from the master branch and the composer.lock file. The resulting build can be shared between developers, deployed quickly to production as a tarball or via rsync, and used to actually test the upgrade for maximum safety, with a process like: automatically import the production database, run database updates, import the new configuration, and run a subset of the automated tests to ensure that basic site functionality has no regressions.
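As an illustrative sketch (repository URL, artifact name and rsync target are hypothetical), the build job could boil down to a script like this:

    #!/usr/bin/env bash
    set -euo pipefail

    # Build the codebase exactly as pinned by the committed lock file.
    git clone --branch master https://example.com/site.git build
    cd build
    composer install --no-dev --optimize-autoloader   # reads composer.lock

    # Package the fully built codebase as a deployable artifact.
    tar -czf "../site-$(git rev-parse --short HEAD).tar.gz" .

    # Ship the artifact to the production host for a fast deployment.
    rsync -az ../site-*.tar.gz deploy@prod.example.com:/var/artifacts/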
Why Docker or other virtualization tools?
This is an easy answer: working in a Docker environment with the same characteristics as the other environments lets you be confident that everything will behave consistently and that no new feature will turn out to be incompatible with production.
Using Docker in this case also enables fast deployments.
Drupal developers use a variety of tools and Drupal APIs to facilitate the synchronization of code and configuration between environments. The complete process is known as a deployment workflow. At a very high level it includes the following steps (sketched in commands right after the list):
- Make a change to configuration or code on a development environment.
- Export any configuration changes to files that can be stored in version control.
- Commit code and configuration changes to version control.
- Synchronize version-controlled files to the live server.
- Run database updates (update.php) and import any configuration changes.
- Clear the cache.
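As a minimal sketch of those steps using Drush (assuming Drush 9+ command names and a config/ sync directory, which is an illustrative path), the command side of the workflow might look like:

    # On the development environment: export configuration for version control.
    drush config:export -y
    git add config/ && git commit -m "Export configuration changes"

    # On the live server, once the version-controlled files are synchronized:
    drush updatedb -y        # run database updates (the update.php step)
    drush config:import -y   # import the configuration changes
    drush cache:rebuild      # clear the cache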
DDEV development tool
Drupal developers also rely on a well-known tool called DDEV, which lets you set up a complete development environment in a matter of minutes.
DDEV is an open source tool that makes it dead simple to get local PHP development environments up and running within minutes. It's powerful and flexible as a result of its per-project environment configurations, which can be extended, version controlled, and shared. In short, DDEV aims to allow development teams to use Docker in their workflow without the complexities of bespoke configuration.
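As an example, a quickstart for an existing Composer-based Drupal 8 project could look like this (a hedged sketch; the exact ddev config flags vary slightly between DDEV versions):

    # Inside the project checkout: generate the per-project .ddev/ configuration.
    ddev config --project-type=drupal8 --docroot=web

    # Start the Docker-based environment and install dependencies inside it.
    ddev start
    ddev composer install

    # Open the site in your browser.
    ddev launch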
With this tool you can adapt the configuration of the Docker images, PHP extensions and many other services to mirror the LIVE environment. You will also be able to debug code and run compatibility tests quickly.
Beyond that, DDEV also lets you develop code for other applications such as Laravel, WordPress, different Drupal versions, etc.
DDEV Hosting provider integration
DDEV provides the pull command for whatever provider recipes you have configured; for example, ddev pull acquia if you have created .ddev/providers/acquia.yaml.
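For instance (assuming the acquia.yaml recipe is already in place; --skip-files is one of the standard ddev pull options):

    # Refresh the local environment from the hosting provider.
    ddev pull acquia               # pull both the database and user files
    ddev pull acquia --skip-files  # pull the database only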
In practice, DDEV integrates with the CI/DEV platforms where teams build, host, and manage their websites, such as Pantheon.io, Platform.sh, DDEV Live and Acquia.
DDEV also provides the push command to push databases and files upstream. This is dangerous for your upstream site and should only be used with extreme caution. It is recommended not to implement the push stanzas in your YAML file at all; but if it fits your workflow, use it carefully.