In this article, I will explain why we decided to build "general purpose" PHP images for Docker, and what you can gain from using one of these images (spoiler alert: it's time).
TL;DR
Have a look at thecodingmachine/php. This project provides a set of general-purpose Docker images for PHP with:
- 3 variants: Apache, CLI or PHP-FPM
- NodeJS (optional, version 6 or 8)
- The most common PHP extensions (can be enabled using environment variables)
- Cron (configurable via environment variables)
- ... and much more!
Why?
At TheCodingMachine, we build intranets, extranets and websites for our clients. And we build a lot of these. Each project has slightly different needs. Some require PHP + mysqli extension, others PHP + postgresql extension, others the mongodb and redis extensions. Some will need apcu, others memcache, others GD, etc...
For each of these projects, we used to start with the stock "php" Docker image and enable extensions using a Dockerfile.
And in each of our projects, we had a Dockerfile that generally contained this:
- installation of required PHP extensions
- installation of Composer
- installation of NodeJS (if we need to run webpack for static assets)
The Dockerfiles were copy-pasted from one project to another, slightly modified each time, and became increasingly hard to track and to read.
We needed to do better. We needed a "universal" PHP image that would be usable in most of our projects, yet flexible enough not to incur any performance cost. And so we began our journey of building the thecodingmachine/php images.
General purpose images
Our idea is to build "one-fits-all" PHP images by:
- building the most common PHP extensions right into the image
- enabling or disabling these extensions on container startup using environment variables
The images are developer friendly. For instance:
- they come with the nano editor installed
- they come with composer
- the xdebug extension, if enabled, will configure the remote host automatically (and correctly, whether you use Linux, Windows or MacOS!); see the example below
- ...
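For instance, in a development environment, enabling xdebug should boil down to a single environment variable, following the PHP_EXTENSION_XXX convention detailed in the Usage section below:
# development only: enable xdebug, the remote host is configured automatically
PHP_EXTENSION_XDEBUG=1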
As you probably understand, we are willfully trading some image size for an improved developer experience. More on this later.
Usage
We have images for PHP 7.1 and PHP 7.2.
Images are tagged according to PHP version, image version, variant and Node version.
For instance:
thecodingmachine/php:7.1-v1-apache-node6
- 7.1: PHP version
- v1: image version
- apache: variant (apache, cli or fpm)
- node6: Node version (empty, 6 or 8)
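To make the naming scheme concrete, here are a couple of other tags that should exist under it:
thecodingmachine/php:7.2-v1-fpm         # PHP 7.2, FPM variant, no NodeJS
thecodingmachine/php:7.1-v1-cli-node8   # PHP 7.1, CLI variant, NodeJS 8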
By default, images come with these PHP extensions enabled: apcu mbstring mysqli opcache pdo pdo_mysql redis zip soap.
However, you can easily enable or disable any available PHP extension using environment variables:
# Let's enable the PostgreSQL extension and disable the Mysqli extension!
PHP_EXTENSION_PGSQL=1
PHP_EXTENSION_MYSQLI=0
As an alternative, you can enable a bunch of extensions in a single line with the PHP_EXTENSIONS environment variable:
PHP_EXTENSIONS=pgsql gettext imap sockets
You can also change any value of the php.ini file by using the associated PHP_INI_XXX environment variable:
# set the parameter memory_limit=1g
PHP_INI_MEMORY_LIMIT=1g
# set the parameter error_reporting=E_ALL
PHP_INI_ERROR_REPORTING=E_ALL
Finally, if you are using the Apache variant of the image, you can also enable any Apache extension using environment variables:
APACHE_EXTENSION_DAV=1
APACHE_EXTENSION_SSL=1
Utility features
Quite often, when setting up a development environment, or when changing branches in your development process, there are a number of recurring tasks that need to be performed: running composer install, applying Doctrine migrations, or starting webpack in "watch" mode.
In my development environment, I personally like to run these tasks on container startup. That way, once my environment is started, I'm sure that the project dependencies are up-to-date with my colleagues, that database patches have been applied...
The thecodingmachine/php image allows you to register commands that will be executed on startup. Therefore, my docker-compose.yml looks like this:
docker-compose.yml
version: '3'
services:
  my_app:
    image: thecodingmachine/php:7.1-v1-apache-node8
    environment:
      STARTUP_COMMAND_1: composer install
      STARTUP_COMMAND_2: vendor/bin/doctrine orm:schema-tool:update
      STARTUP_COMMAND_3: webpack --watch &
Please note these are settings I use exclusively in development. For production images, I run composer install and the webpack build at build time, from the Dockerfile.
I do, however, apply database migrations on container startup, even in production.
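For instance, a production compose file could keep only the migration step at startup; the image tag and migration command below are just placeholders for whatever your project uses:
version: '3'
services:
  my_app:
    image: thecodingmachine/php:7.1-v1-apache
    environment:
      STARTUP_COMMAND_1: vendor/bin/doctrine-migrations migrations:migrate --no-interaction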
The image also bundles "cron" to run recurring tasks:
docker-compose.yml
version: '3'
services:
  my_app:
    image: thecodingmachine/php:7.1-v1-apache-node8
    environment:
      CRON_USER_1: root
      CRON_SCHEDULE_1: '* * * * *'
      CRON_COMMAND_1: vendor/bin/console do:stuff
Pretty useful, as setting up cron in Docker containers is a challenging task. Also, the scripts run by cron will have their output redirected to the Docker logs.
Usage in continuous integration environments
The image can be tremendously useful in continuous integration environments.
At TheCodingMachine, we are pretty fond of Gitlab CI. Our CI file now looks like this:
test:
  image: thecodingmachine/php:7.1-v1-cli
  variables:
    PHP_EXTENSIONS: gd event
  before_script:
    - composer install
  script:
    - vendor/bin/phpunit
Woot! So easy!
Actually, I can even replace vendor/bin/phpunit by phpunit alone, because ./vendor/bin is part of the PATH. That's right, anything in the vendor/bin directory can be accessed from your project's root directory... Did I say developer friendly? :)
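With that, the script section of the CI job above could simply become:
script:
  - phpunit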
File permissions management
Permissions management is a tricky issue when it comes to Docker. Depending on your use case (development or production), and depending on the OS you are using, you can have a wide range of issues to solve. We really tried to do our best to simplify this, without sacrificing security.
File permissions on a development environment
When you are developing using Docker, you typically mount your working directory into the container's /var/www/html directory.
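In docker-compose terms, that mount typically looks like this (the service name and image tag are just examples):
version: '3'
services:
  my_app:
    image: thecodingmachine/php:7.1-v1-apache
    volumes:
      - .:/var/www/html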
For a good development workflow:
- your IDE must have write access to the files
- your web-server should have write access to the files (for caching or upload purposes)
- scripts executed in the container (like composer install or php-cs-fixer) should have write access too
If you are using MacOS or Windows, Docker does not really enforce any permissions in the file system. For instance, any user can modify files owned by root on an OSX Docker mount point.
If you are using Linux, on the other hand, things are really more secure (and therefore more tricky). Typically, Docker will enforce the permissions across the mount points. If you run a composer install in the Docker container as root, your files will belong to root on the host file system (something you want to avoid, because your IDE won't be able to touch those files).
The thecodingmachine/php image solves this problem by taking the following steps:
- Out of the box, it has a docker user (whose ID is 1000 by default)
- Apache is run by this docker user (and not by www-data)
- On container startup, a script will first try to detect whether the /var/www/html directory is mounted or not, and whether it is a Windows, MacOS, or Linux mount
- If this is a Linux mount, it will look at the owner of the /var/www/html directory. Let's assume the directory belongs to the user "foobar" whose ID is 1001. The container will dynamically change the docker user's ID to 1001 (instead of the default 1000). This is done using the -u flag of the usermod command, as sketched below
- Therefore, the ID of the docker user (that is running Apache) and the ID of the mounted directory owner on the host match. No more permission issues while developing (Hooray!)
File permissions on a production environment
Of course, in a production environment, you don't want this. In production, you will typically not use any mount. Instead, you will copy your PHP files inside the container's /var/www/html directory.
By default, the /var/www/html directory belongs to www-data. The container will detect this and act accordingly.
You should still give back ownership of the Apache processes to the www-data user. This can be done easily with a couple of environment variables:
ENV APACHE_RUN_USER=www-data \
    APACHE_RUN_GROUP=www-data
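Putting the production advice together, a Dockerfile based on this image could look roughly like the sketch below. The npm "build" script, the paths and the composer flags are assumptions, and depending on which user runs the build you may need to adjust file ownership:
FROM thecodingmachine/php:7.1-v1-apache-node8
WORKDIR /var/www/html
# ship the code inside the image instead of mounting it
COPY . /var/www/html/
# install dependencies and build assets at build time, not at container startup
RUN composer install --no-dev --optimize-autoloader && \
    npm install && \
    npm run build
# give the Apache processes back to www-data in production
ENV APACHE_RUN_USER=www-data \
    APACHE_RUN_GROUP=www-data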
Is this following Docker best practices?
Hell no!
In order to develop this image, we violated a number of Docker best-practices.
So you want to be state-of-the-art?
Instead of using thecodingmachine/php, you should do this:
Avoid installing unnecessary packages
thecodingmachine/php contains a lot of packages that are pretty useful for development but not needed in production (like the nano editor, or all the PHP extensions that you are not enabling).
If you want to be state of the art, you should write your own Dockerfile and install the bare minimum in the container.
Of course, you should store the image on your own registry.
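For example, a minimal tailored Dockerfile built on the stock image could be as short as this (the extension list is obviously project-specific):
FROM php:7.2-apache
# install only the extensions this particular project needs
RUN docker-php-ext-install pdo_mysql opcache
# copy the application code into the image
COPY . /var/www/html/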
Use multi-stage builds
Some variants of thecodingmachine/php come with NodeJS installed. The expectation is that you will need NodeJS to build your JS/CSS assets (probably using webpack).
However, this means that the image will run in production with NodeJS installed, while it is absolutely not necessary (it is only used at build time).
Starting with Docker 17.05, Docker added this wonderful feature named multi-stage builds.
From your Dockerfile, you can call another container to perform build stages. So from your PHP container, you could call a NodeJS container to perform a build, while not storing NodeJS in your own PHP image. Useful.
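A sketch of what that could look like, assuming a package.json with a "build" script that outputs to dist/:
# stage 1: build the JS/CSS assets with NodeJS
FROM node:8 AS assets
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# stage 2: the image that actually ships, without NodeJS
FROM php:7.2-apache
COPY . /var/www/html/
# only the built assets are taken from the first stage
COPY --from=assets /app/dist/ /var/www/html/dist/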
Each container should have only one concern
thecodingmachine/php images bundle cron. So strictly speaking, they have 2 concerns:
- one is to answer HTTP requests
- one is to trigger events at regular intervals
If you want to be state of the art, you should delegate the scheduling of events to a separate container like Tasker or one of the other alternatives.
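In docker-compose terms, the split could look like this; the scheduler image below is a placeholder (it could be Tasker, a plain cron image, or even the cli variant of this image):
version: '3'
services:
  web:
    # answers HTTP requests, nothing else
    image: thecodingmachine/php:7.1-v1-apache
  scheduler:
    # a separate container whose only job is to trigger recurring tasks
    image: your-scheduler-image   # placeholder, not a real image name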
Image size
Here is a quick image size comparison:
Image | Size (uncompressed) | Variation
---|---|---
php:7.1-apache | 392 MB |
thecodingmachine/php:7.1-v1-apache | 575 MB | +183 MB (+47%)
thecodingmachine/php:7.1-v1-apache-node8 | 664 MB | +272 MB (+69%)
So should I use the thecodingmachine/php images?
Well it depends!
If you are working on a single application for years to come, you might want to build your own Dockerfile, completely tailored to your needs.
But if, like us, you are working on a new project every 3 months, the benefits of a fully tailored Dockerfile might not be worth the setup and maintenance effort.
Using thecodingmachine/php general purpose images will help you get started quickly, while ensuring a pretty decent quality.
You are trading some additional disk space (which is cheap) for some of your time (which is valuable). This is a great deal!
Alternatives
There are a few alternatives worth mentioning:
- Laradock, which builds tailored images locally. It's pretty cool for setting up a development environment, but less easy to use for continuous integration or deployment in production.
- Kickoff Docker PHP, which also builds tailored images locally, with a focus on differentiating development and production environments using docker-compose.
- webdevops/php-apache[-dev], which are Docker PHP images available for a big variety of base images (if you want to extend a specific Debian version, for instance).
About the author
David is CTO and co-founder of TheCodingMachine and WorkAdventure. He is the co-editor of PSR-11, the standard that provides interoperability between dependency injection containers. He is also the lead developer of GraphQLite, a framework-agnostic PHP library to implement a GraphQL API easily.