In this tutorial, you will learn how to host your Umbraco ASP.NET 5 powered website within a docker container. If you are new to docker and are unsure about the benefits of using containers with Umbraco V9, this is the tutorial for you. In simple terms, combining Umbraco V9 and Docker will help you to streamline your ASP.NET Core development workflow. We all want to use the latest and greatest frameworks, however, an often neglected ASP.NET 5 topic is the negative impact it can have on your productivity, let me explain 🤔
When building a CMS project, I find that the most efficient way to build a website locally is using a combination of the debugger to test your code changes and then using a persistent website when you need to make CMS changes. Having persistent access to the CMS while developing is useful as it will give you quick access to the backend. When you need to create new document types, check the logs, generate new Umbraco models, etc..., having to waste time opening Visual Studio and waiting for the debugger to launch and the site to start gets annoying quickly 😞
In the old world, creating a persistent website was easy. You could host a site in IIS, simply point the website to your webroot, and job's a good 'un. If you want to create a persistent website that is constantly updated with your latest changes within ASP.NET 5, things are no longer as easy. You need to perform an MSBuild publish every time you make a change, which is annoying. Having to perform a publish also introduces a new issue: synchronisation. Your published environment will share the same database connection, so no CMS data will be lost, however, if you upload some media within the CMS, that media will live within the published website folder rather than your webroot. The next time you fire up your local debugging server, your website could look funky, as those files will not be within your webroot. You would need to manually copy the files from the published website folder to the webroot to fix the issue. There are other similar situations around this area; the main takeaway is that the old ways of working are no longer ideal. We need to mix it up.
As hosting your local persistent website in IIS is very annoying, when it comes to v9 development, a better solution is to host your development environment in a docker container. Having your site hosted in docker will mean you have persistent access to it. Using docker's volume and mount capabilities, whenever anyone uploads a file in the CMS, the mount can be mapped so the files are added to your webroot. Using a docker container will give you almost all the same benefits as the old way of working. The only caveat is building the docker file. There are other benefits of using Docker around testing and deployments, however, the focus of this article is building a development environment that will allow you to work efficiently. If this sounds good to you, read on 🔥🔥🔥
Before we start creating the docker file, let us ensure you have all the prerequisites to get this working locally ticked ✔️
Enable SQL Server Access: Umbraco uses a database to store all its data. It is possible to work with Umbraco using a file-based database, however, when using containers it's a better idea to host your database within SQL Server. To do this, your container will need to talk to your local SQL instance. Out-of-the-box, the type of access docker needs is disabled, so you will need to enable some things. Getting this access set up can be fiddly. To make life as simple as possible, I also recommend that you test using your machine's local IP address and the local port your SQL instance is running under. I will go over all these steps in this section.
To allow your SQL server to talk to docker, you will need to use a tool called SQL Server Configuration Manager. This tool should be installed on your PC already, however, it is pretty well hidden. You can not find it from the Start menu; it lives in the `Windows` folder. Assuming you use SQL 18, you can find the config manager here:
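For reference, on a default installation the tool lives under `C:\Windows\SysWOW64`. The exact filename depends on your SQL Server version, so treat the path below as an assumption; this is the typical location for a SQL Server 2019 instance:

```
C:\Windows\SysWOW64\SQLServerManager15.msc
```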
A list of which version of this tool you should use for your SQL instance can be found on the Microsoft website here. From within SQL Server Configuration Manager, you need to do these things:
- Ensure TCP/IP for your SQL instance is enabled
- Ensure Listen All is enabled
- Get the IP and the port number your instance is listening on
- Restart the server so the new changes are applied
Follow the steps in the video above. After making these changes, I recommend you fire up SQL Server Management Studio and try to connect to your database manually. Use your machine's internal IP address and the SQL port number for this testing. I found the port number my SQL server was running under in the `IP Addresses` tab, under `TCP Dynamic Ports`. If you struggle to find yours, for reference, the default SQL port is `1433`. When testing the SQL connection in SQL Server Management Studio, use the username and password that you define in your connection string. To get your internal IP address, run the `ipconfig` command in a terminal. You can find your IP in one of the `IPv4 Address` fields. It should start with `192`. The format of your host is the IP and port, separated by a comma, like this:
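For example, assuming your machine's internal IP is `192.168.1.100` and SQL is listening on the default port:

```
192.168.1.100,1433
```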
After you have successfully set up your container and got it talking to SQL, you should be able to swap the IP and port for the special hostname `gateway.docker.internal`. Using `gateway.docker.internal` is a better and more robust approach, however, for initial testing, focus on just getting the communication up and running. After making this change, your connection string should look like this:
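A sketch of what the settings could contain, using Umbraco 9's default `umbracoDbDSN` connection string key. The database name and credentials below are placeholders; swap in your own:

```json
{
  "ConnectionStrings": {
    "umbracoDbDSN": "Server=gateway.docker.internal,1433;Database=UmbracoDb;User Id=sa;Password=YourStrongPassword;TrustServerCertificate=True"
  }
}
```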
The easiest way to use this connection string within your container is to create an environment within Visual Studio and create an environment-specific version of `appsettings.json`. It is likely you might want to configure some other bits and bobs too. You will need to make sure your website is configured to read in the correct environment settings file. This configuration is usually defined within `Program.cs`. An example of the code required to ensure your website applies the changes from an environment-specific file is shown below:
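As a sketch, here is what that could look like in a .NET 5 `Program.cs`, using the standard host builder APIs. Note that `Host.CreateDefaultBuilder` already loads `appsettings.{Environment}.json` automatically (driven by the `ASPNETCORE_ENVIRONMENT` variable); the explicit `ConfigureAppConfiguration` call below just makes that behaviour visible. The `MySite.Web` namespace and the environment name `Docker` are assumptions:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

namespace MySite.Web
{
    public class Program
    {
        public static void Main(string[] args)
            => CreateHostBuilder(args).Build().Run();

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((hostingContext, config) =>
                {
                    var env = hostingContext.HostingEnvironment;
                    // Layer the environment-specific file, e.g.
                    // appsettings.Docker.json when ASPNETCORE_ENVIRONMENT=Docker,
                    // over the base appsettings.json.
                    config.AddJsonFile($"appsettings.{env.EnvironmentName}.json",
                        optional: true, reloadOnChange: true);
                })
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
    }
}
```

In an Umbraco 9 project, the template-generated builder also chains `.ConfigureUmbracoDefaults()` into this pipeline; keep that call if your project already has it.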
Permissions: To set up a container for development success, you will need to create a file mount. You will need to map folders in your docker container to the corresponding folders in your webroot. This way you can upload things in the container and get access to them in your webroot. This will allow for the dual running of your site in docker and in a debugger. It will also mean you can easily add newly created files into source control.
In order to make this work, you will need to set the correct file permissions on the host machine. If you forget to do this, your docker container will fail to run and you can waste hours of your life figuring out why... trust me 😕. On your host machine, you need to ensure that all the folders you want to map have read/write access and are not read-only. For best practice, ensure the `docker-user` account has read/write permission. As we are talking about a development environment, I add the `Everyone` account with full permissions and replicate the permissions down the tree for simplicity 😉
Creating The Docker File
In order to host your website in a container, you need to create a docker configuration file. In this example, I am going to add my Umbraco V9 starter kit into a container. You can clone this starter kit from my GitHub here. The important thing to note with this site is that it is structured with two additional class libraries. One library is used for my custom code and the other one is for the models generated by Umbraco Model Builder.
When setting up a docker file, you need to copy all the folders your website relies on to the container image. As I need to copy multiple folders, it makes sense for this project to create the docker file within the root solution folder. If you only have a single website project, you could create the docker file in your webroot. You will obviously need to adjust the script below accordingly 🤔
Creating a docker file is easy, create a new blank file called `Dockerfile`. Within the docker file, you will add the instructions for what is to be included within the container. The first thing you need to do is define the base container your image will use and then set the location where the files and folders will be created under. This is done using the `FROM` and `WORKDIR` commands, like so:
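A minimal sketch, assuming the official .NET 5 SDK image for the build stage (adjust the tag to match your framework version):

```dockerfile
# Build stage: the full SDK image is needed to restore and publish the site
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
```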
Next, you need to copy all the solution files you care about into the container. You can do that by targeting all the `.csproj` files you want to copy. For each project, you should also call `dotnet restore` to ensure the container has each of the library's dependencies. In terms of ordering, add the reference to your main website project last!
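Continuing the sketch, with placeholder project names (swap in your own class libraries and website project):

```dockerfile
# Copy each project file and restore its dependencies,
# referencing the main website project last.
COPY ["MySite.Core/MySite.Core.csproj", "MySite.Core/"]
RUN dotnet restore "MySite.Core/MySite.Core.csproj"
COPY ["MySite.Models/MySite.Models.csproj", "MySite.Models/"]
RUN dotnet restore "MySite.Models/MySite.Models.csproj"
COPY ["MySite.Web/MySite.Web.csproj", "MySite.Web/"]
RUN dotnet restore "MySite.Web/MySite.Web.csproj"
```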
The next step is to build the app:
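Something like the following, again using a placeholder project name:

```dockerfile
# Copy the remaining source and publish a Release build
COPY . .
RUN dotnet publish "MySite.Web/MySite.Web.csproj" -c Release -o /app/publish
```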
Then you need to define the runtime image. You need to specify the base image and the working directory again. This time the directory is `app`. Note, I am also exposing port 80, so we can access the website via a web browser:
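The runtime section could look like this, assuming the lighter ASP.NET 5 runtime image:

```dockerfile
# Runtime stage: the aspnet runtime image is enough to host the published site
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS runtime
WORKDIR /app
EXPOSE 80
```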
The main difference in this second section is the command to copy the output of the build to the publish folder:
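Assuming your build stage is named `build` and publishes its output to `/app/publish`:

```dockerfile
# Pull the published output from the build stage into the runtime image
COPY --from=build /app/publish .
```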
The final command is to start the site using `ENTRYPOINT`, which takes two arguments. The first is the command; running the site is done using the `dotnet` CLI command. The second is the file to launch, which in this example is the DLL named after the starter kit's main project:
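For example (the DLL name below is a placeholder; it should match your website project's assembly name):

```dockerfile
# Launch the published site; swap "MySite.Web.dll" for your assembly name
ENTRYPOINT ["dotnet", "MySite.Web.dll"]
```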
Build The Image
To build the image from your docker file, you can use the `docker build` command:
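Run this from the folder that contains the docker file. `umbraco9starterkit` is the name we tag the image with:

```shell
docker build -t umbraco9starterkit .
```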
This command will use the docker file and create an image from it. When docker has finished creating your image, you can confirm the image has been successfully created using the `docker images` command. To create a container from an image, you use the `docker run` command. The `docker run` command that you will need to run is quite complex:
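A sketch of the command, using the flags broken down below (the container name `umbraco` and image name `umbraco9starterkit` are the ones used throughout this article):

```shell
docker run --rm -d -p 8080:80 --name umbraco umbraco9starterkit
```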
Let us break down this command per flag so it is less confusing:
- `--rm`: Cleans up any previous crap so the container always starts in a fresh state. You can omit this flag if you want to, however, it will mean your container will not always be in a fresh state
- `-d`: Starts the container as a background task. This prevents your terminal window from being locked up!
- `-p`: Maps a port on your machine to a port in the container. When you are trying to access a website in a container, it is best not to use port `80` on the host machine, as it is likely you will still want that port for IIS. In this example, I use `8080`. You can use any port you want as long as it is not already mapped to something else!
- `--name`: Specifies the container name. If you omit this, docker will generate a random one. Being able to easily identify the container is very useful for debugging!
- `umbraco9starterkit`: The name of the image to use
After running this command, you should have a working container that you should be able to access using `localhost` and port `8080`.
Sometimes getting a container working can be annoying. In these instances, you might want to check the file structure of your container. The quick and easy way to do this is to copy the data out of the container onto your local machine. You can do this using `docker cp`, like this:
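A sketch of the command; the `/app` source path and `./container-files` destination folder are assumptions, adjust them to taste:

```shell
# Copy the app folder out of the container into a local folder for inspection
docker cp umbraco:/app ./container-files
```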
In this example, `umbraco` is the name of the container!
One limitation with running an Umbraco-powered website within a container is file persistence. If a content editor uploads some images in the CMS and the container is destroyed, all the uploaded images will also be deleted. The focus of this tutorial is on creating a developer environment. If you want to run your Umbraco website within a container in production, I recommend that you consider using Umbraco with a cloud-based storage provider, like Azure Blob Storage. Showing you how to set up a file provider in Umbraco is an article in itself. The benefit of using a cloud file provider is that when content editors upload media it will be persisted. If you want to learn more, I recommend checking out Umbraco.StorageProviders for some inspiration.
As we are focusing on a development environment, the focus is efficiency. This is where docker's `mount` capability is great for development. A mount allows you to map a folder on the host machine to a folder in the container. This means you can automatically access files in your solution folder when they are uploaded to the container. Great!
Mounts are not the only way to deal with persistent data storage in docker. Docker also offers volumes. Volumes are considered better practice compared to mounts, however, with a volume you can not map folders to the host machine. In production, it is not very likely that you would want to map a container folder to a folder on the server, however, in development, it is very useful. You can learn more about docker volumes here. Volumes will allow you to persist data and they can be shared across multiple containers. Volumes are still useful for some folders in development. For example, it makes more sense to map the log folder and the temp data folder as volumes rather than mounts. Using volumes means that your temp data and logs will not get destroyed between containers, improving performance, plus aiding you while you are debugging.
You will start your container with a mix of volumes and mounts. The big question is what will you need to map? Below is a list of the folders that I tend to add:
- `wwwroot`: The default area where assets are added within the backend. In the Umbraco editor, it's possible for an admin to make JS and CSS amends. If you do not map this folder, these changes and any images or assets uploaded will be lost when the container is destroyed. As we will want to check these changes into source control, it makes sense to use a mount for this mapping.
- `umbraco`: Umbraco keeps a lot of temp data in the `umbraco` folder. Things like the site's log files and the front-end cache can be found here. For performance, it is handy to persistently store this data between containers. The contents of these folders will not need to be added to source control, so you can use a volume for this mapping rather than a mount.
- `uSync`: If you use uSync, you will want to commit any updates made in the containers into source control, so it should be a mount.
- `Umbraco.Models`: If you use the Umbraco models builder in `SourceCodeManual` mode, you will want to include any updates in source control. This means this mapping will also be a mount.
Taking all these mappings into account, this is how I structure the `docker run` command to also include all the mounts and volumes needed:
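A sketch of the full command. The host paths use bash/WSL syntax (`$(pwd)`); in PowerShell, use `${PWD}` instead. The project folder names and container target paths are placeholders, and the `umbraco-data` named volume persists the `umbraco` temp and log folder between containers:

```shell
docker run --rm -d -p 8080:80 --name umbraco \
  --mount type=bind,source="$(pwd)/MySite.Web/wwwroot",target=/app/wwwroot \
  --mount type=bind,source="$(pwd)/MySite.Web/uSync",target=/app/uSync \
  --mount type=bind,source="$(pwd)/MySite.Models",target=/app/MySite.Models \
  -v umbraco-data:/app/umbraco \
  umbraco9starterkit
```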
Running this command will start your container with the correct mappings. Now, whenever a content editor updates files within the CMS, you should get access to those updates in the solution folder in your host machine.
Finally... we made it! Granted, this guide has a lot of steps, however, when you get this set-up and working once, life is peachy! In terms of effort, it is quicker to set up additional sites compared to hosting stuff in IIS. When working with Umbraco V9 and onwards, I find that the container approach is a better route for efficiency so I recommend you give it a go. Happy Coding 🤘