I haven’t seen many posts about Continuous Integration. Perhaps this is because the Umbraco Cloud option offers this feature off the shelf, but I have some customers that require customizations (custom databases) that would be costly on the Umbraco Cloud setup. Therefore I decided to spin up Jenkins, connect it to a GitHub repo, and do automatic deployments from the main branch whenever there is a push to it.
Here are a few quick steps to get you started.
1. Set up GitHub
Whether you use GitHub, GitLab, or your own Git repo, these steps do not differ. You just need a branching strategy where the master branch matches what is on your live site.
Merge feature branches into the master branch using pull requests.
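The branching strategy can be sketched locally like this (the repo contents and branch name are made up; on GitHub the final merge would of course happen through a pull request rather than a local merge):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"

# master mirrors the live site
git init -q -b master
git config user.email "ci@example.com"   # hypothetical identity for the demo
git config user.name "CI Demo"
echo "v1" > site.txt
git add site.txt && git commit -qm "initial live state"

# work happens on a feature branch
git checkout -qb feature/banner
echo "banner" >> site.txt
git commit -qam "add banner"

# on GitHub this step is the pull-request merge into master
git checkout -q master
git merge -q --no-ff feature/banner -m "Merge pull request: add banner"
```

Once this merge lands on master, Jenkins picks it up and deploys.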
I have a Windows Server 2019 root server running on Hetzner, but I hear OVHcloud is good as well. So I installed Jenkins instances directly on my test server and live server. I could do the setup with PowerShell remoting, but Jenkins is quick to install and requires very little maintenance and few resources, so I just have them running isolated on each server.
You need some plugins in order to build .NET and frontend projects. Try these:
Global Slack Notifier Plugin (for notifying about build successes/failures)
NodeJS Plugin (to control npm builds)
PowerShell Plugin (for scripting)
ThinBackup (for moving Jenkins configs between instances)
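If you prefer not to click through the plugin manager, the same plugins can also be installed from the command line with Jenkins’ plugin tool. A sketch, assuming the plugin IDs below match your Jenkins version:

```shell
# Install the plugins listed above (IDs are my best guess; verify on plugins.jenkins.io)
jenkins-plugin-cli --plugins slack nodejs powershell thinBackup
```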
Here is my sample configuration for a Freestyle Jenkins project.
NuGet restore command and injection into the assembly
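The build step itself is roughly the following (the solution name and AssemblyInfo path are placeholders; on a Windows agent this would run as a batch or PowerShell step, shown here in shell form):

```shell
# Restore NuGet packages for the solution (MySite.sln is a placeholder name)
nuget restore MySite.sln

# Inject the Jenkins BUILD_NUMBER into the assembly version (path assumed)
sed -i "s/AssemblyVersion(\"1\.0\.0\.0\")/AssemblyVersion(\"1.0.0.$BUILD_NUMBER\")/" \
    MySite/Properties/AssemblyInfo.cs

# Build in Release mode
msbuild MySite.sln /p:Configuration=Release
```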
Copy files from the Jenkins build folder to the webroot
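The copy step boils down to mirroring the build output into the webroot while leaving the live configuration alone. A minimal sketch with made-up paths (on Windows I would use robocopy with an exclude flag for the same effect):

```shell
set -e
# Stand-ins for the Jenkins workspace and the IIS webroot
build=$(mktemp -d)
webroot=$(mktemp -d)

# Pretend build output and a live config that must not be overwritten
mkdir -p "$build/bin"
echo "dll" > "$build/bin/MySite.dll"
echo "dev settings" > "$build/web.config"
echo "live settings" > "$webroot/web.config"

# Copy everything from the build folder except web.config
(cd "$build" && find . -type f ! -name web.config -exec cp --parents {} "$webroot/" \;)
```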
Post-build actions
That’s it. This is the minimal Jenkins setup I use for Umbraco projects. The same setup works for any .NET project, as I did not include any uSync steps or anything else Umbraco-specific.
I have done a few DevOps setups on Jenkins, TeamCity, Azure DevOps, and GitHub Actions. Whichever CI/CD tool you are using, I recommend writing your scripts in Node and/or PowerShell so you can easily jump between tools, as they all work essentially the same way.
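In practice this means keeping the real build logic in a script the repo owns, and letting each CI tool merely invoke it. A minimal sketch (the script name and contents are illustrative):

```shell
set -e
# build.sh lives in the repo; Jenkins, GitHub Actions, etc. all just call it
cat > build.sh <<'EOF'
#!/bin/sh
set -e
echo "restoring packages"
echo "building"
echo "running tests"
EOF
chmod +x build.sh

./build.sh
```

Switching CI tools then only means changing the one line that calls the script.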
Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells).
Start a new session with session name
screen -S <session_name>
List running sessions / screens
screen -ls
Attach to a running session
screen -r
Attach to a running session with a name
screen -r <session_name>
Detach a running session
screen -d <session_name>
Switching between screens
When you use nested screens, you can switch between them with “Ctrl-A” and “n“, which moves to the next screen. To go to the previous screen, just press “Ctrl-A” and “p“.
To create a new screen window, just press “Ctrl-A” and “c“.
There are two ways to leave the screen. First, use “Ctrl-A” and “d” to detach the screen. Second, use the exit command to terminate the screen. You can also use “Ctrl-A” and “k” to kill the screen.
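Put together, a typical workflow with the commands above looks like this (the session and script names are made up):

```shell
screen -dmS deploy ./long_deploy.sh   # start the job in a detached session
screen -ls                            # list sessions to confirm it is running
screen -r deploy                      # reattach; press Ctrl-A d to detach again
```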
I love Azure Functions for small jobs, and deploying a Kubernetes cluster from GitHub Actions to Amazon EKS is a blast. But I think that is only for business customers, not for small startups or pet projects.
At work I am using Azure and AWS, and I used to have GCP and AWS for my pet projects as well, but now my projects run on Hetzner VPSs and Hetzner Cloud. The main reason for my decision to favour Hetzner over the big guys is the uncertainty and confusion of their billing. It would be nice to have the scaling possibility if I suddenly become the next Facebook, but that is the least of my worries and quite a nice problem to have. What I like about VPSs and Hetzner Cloud is the straightforward and simple pricing per month. If I want a Cloud instance with 2 GB RAM and 1 vCPU, it’s 3 euros. If I run out of resources I can scale up; going to 32 GB is 35 euros.
I have a small startup with a worker process written in Python, an API written in Node, and a client running as an SPA (Vue.js/Nuxt.js). The data layer is PostgreSQL/Redis. Without much thought, I set this up in AWS and it worked nicely, but soon I noticed it cost 100 euros/month. Since this is at the Minimum Viable Product stage, I decided to move to a VPS. So I spun up a VPS with Ubuntu, deployed everything there, and changed the GitHub Actions. It took me one evening, and I don’t consider myself a sysadmin. I think installing Linux with the above tech stack is easier than understanding how to run the same setup on AWS, GCP, or Azure. If at some point I need scaling, I can always move back to a Kubernetes setup on AWS, but sometimes less is more.
Sometimes you may have a publicly facing website that you don’t want indexed by the major search engines. The easy way is to add a robots.txt file or meta tags to the HTML, but a more global setting can be made in IIS: from ‘Administrative Tools’, open ‘Internet Information Services (IIS) Manager’ > select the server > ‘HTTP Response Headers’.
Once the dialog for HTTP Response Headers is open:
Add > Name = X-Robots-Tag > Value = noindex > and press OK.
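The same header can also be set in the site’s web.config instead of clicking through the IIS Manager UI; a minimal sketch:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Tell crawlers not to index any response from this site -->
        <add name="X-Robots-Tag" value="noindex" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```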
General info about X-Robots-Tag HTTP header
The X-Robots-Tag can be used as an element of the HTTP header response for a given URL. Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag. Here’s an example of an HTTP response with an X-Robots-Tag instructing crawlers not to index a page:
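A generic response with the header set (the date and content type are placeholders):

```http
HTTP/1.1 200 OK
Date: Tue, 25 May 2021 21:42:43 GMT
Content-Type: text/html
X-Robots-Tag: noindex
```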
About Robots Tag
A meta robots tag can include the following values:
Index: Allows search engines to add the page to their index.
Noindex: Disallows search engines from adding a page to their index and from showing it in search results for that search engine.
Follow: Instructs search engines to follow links on a page so the crawler can find other pages.
Nofollow: Instructs search engines to not follow links on a page.
None: This is a shortcut for noindex, nofollow.
All: This is a shortcut for index, follow.
Noimageindex: Disallows search engines from indexing images on a page (images can still be indexed, though, if they are linked to directly from another site).
Noarchive: Tells search engines to not show a cached version of a page.
Nocache: This is the same as the noarchive tag, but specific to Bingbot/MSNbot.
Nosnippet: Instructs search engines to not display text or video snippets.
Notranslate: Instructs search engines to not show translations of a page in SERPs.
Unavailable_after: Tells search engines a specific day and time after which they should not display the page in search results.
Noyaca: Instructs Yandex crawler bots to not use page descriptions in results.
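For comparison, the same directives expressed as a meta robots tag in the page itself look like this (values are combined as a comma-separated list):

```html
<!-- Equivalent of sending the header "X-Robots-Tag: noindex, nofollow" -->
<meta name="robots" content="noindex, nofollow">
```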