I haven’t seen many posts about Continuous Integration. Perhaps this is because the Umbraco Cloud option offers this feature off the shelf, but I have some customers who require customizations (custom databases) that would be costly on an Umbraco Cloud setup. Therefore I decided to spin up Jenkins, connect it to a GitHub repo, and deploy automatically from the main branch whenever there is a push to it.
Here are a few quick steps to get you started.
1. Set up GitHub
Whether you use GitHub, GitLab, or your own Git repo, these steps are the same. You just need a branching strategy where the master branch matches what is on your live site.
Merge feature branches into the master branch using pull requests.
I have a Windows Server 2019 root server running on Hetzner, but I hear OVHcloud is good as well. I installed Jenkins instances directly on my test server and live server. I could do the setup with PowerShell remoting, but Jenkins is quick to install and requires very little maintenance and few resources, so I just have the instances running in isolation on each server.
You need some plugins in order to build .NET projects and frontend projects. Try these:
Global Slack Notifier Plugin (for notifying build success/failures)
NodeJS Plugin (control npm builds)
PowerShell plugin (for scripting)
ThinBackup (for moving Jenkins configs between instances)
Here is my sample configuration for a Jenkins Freestyle project.
NuGet restore command and injection into the assembly
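As a sketch, this step can be a single PowerShell build step along these lines. The solution name MySite.sln, the AssemblyInfo.cs path, and the version format are all placeholder assumptions; adjust them to your own project:

```powershell
# Jenkins "Windows PowerShell" build step (sketch; names and paths are placeholders).
# Restore NuGet packages for the solution.
nuget restore MySite.sln

# Inject the Jenkins build number into the assembly version so you can
# always tell which build is running on the site.
$version = "1.0.$env:BUILD_NUMBER.0"
(Get-Content .\MySite\Properties\AssemblyInfo.cs) `
    -replace 'AssemblyVersion\("[\d\.]+"\)', "AssemblyVersion(`"$version`")" |
    Set-Content .\MySite\Properties\AssemblyInfo.cs

# Build in Release mode.
msbuild MySite.sln /p:Configuration=Release
```

This assumes nuget.exe and msbuild are on the PATH of the Jenkins agent.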
Copy files from the Jenkins build folder to the webroot
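A rough sketch of that copy step using robocopy; the workspace and webroot paths are hypothetical, and the excluded folders are the ones you typically don’t want a deployment to overwrite on an Umbraco site:

```powershell
# Copy build output from the Jenkins workspace to the IIS webroot (sketch).
$source = "$env:WORKSPACE\MySite"
$target = 'C:\inetpub\wwwroot\MySite'

# /MIR mirrors the source; /XD excludes folders editors write to at runtime.
robocopy $source $target /MIR /XD "$target\media" "$target\App_Data"

# Robocopy exit codes 0-7 mean success; 8 and above mean failure,
# so map them to an exit code Jenkins understands.
if ($LASTEXITCODE -ge 8) { exit 1 } else { exit 0 }
```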
Post-build actions
That’s it: the minimal Jenkins setup for Umbraco projects that I use. This same setup works for any .NET project, as I did not include any uSync actions or anything else Umbraco-specific.
I have done a few DevOps setups on Jenkins, TeamCity, Azure DevOps, and GitHub Actions. Whichever CI/CD tool you are using, I recommend writing your scripts in Node and/or PowerShell so you can easily jump between tools, as they all work in essentially the same way.
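One way to keep such scripts portable is to resolve each tool’s environment variables in one place and keep the rest of the script tool-agnostic. A small sketch (the script itself is hypothetical, but the variable names are the ones each tool actually sets):

```powershell
# deploy.ps1 - shared deployment script callable from any CI tool (sketch).
# Each tool exposes the build number under a different variable name.
$buildNumber = $env:BUILD_NUMBER                                   # Jenkins / TeamCity
if (-not $buildNumber) { $buildNumber = $env:BUILD_BUILDNUMBER }   # Azure DevOps
if (-not $buildNumber) { $buildNumber = $env:GITHUB_RUN_NUMBER }   # GitHub Actions

Write-Host "Deploying build $buildNumber"
# ...actual deployment steps go here...
```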
Sometimes you may have a publicly facing website that you don’t want indexed by the major search engines. An easy way is to add a robots.txt file or meta tags to the HTML, but a more global setting can be applied in IIS: from ‘Administrative Tools’ open ‘Internet Information Services (IIS) Manager’ > select the server > ‘HTTP Response Headers’.
Once the HTTP Response Headers dialog is open:
Add > Name = X-Robots-Tag > Value = noindex > and press OK.
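If you prefer to keep the setting with the site itself rather than in the IIS UI, the same header can be added in the site’s web.config via the standard IIS custom headers section:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Robots-Tag" value="noindex" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```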
General info about X-Robots-Tag HTTP header
The X-Robots-Tag can be used as an element of the HTTP header response for a given URL. Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag. Here’s an example of an HTTP response with an X-Robots-Tag instructing crawlers not to index a page:
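A trimmed-down response carrying the header might look like this (other headers omitted):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-Robots-Tag: noindex
```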
About Robots Tag
A meta robots tag can include the following values:
Index: Allows search engines to add the page to their index.
Noindex: Disallows search engines from adding the page to their index and prevents it from appearing in that search engine’s results.
Follow: Instructs search engines to follow links on a page, so that the crawler can find other pages.
Nofollow: Instructs search engines to not follow links on a page.
None: This is a shortcut for noindex, nofollow.
All: This is a shortcut for index, follow.
Noimageindex: Disallows search engines from indexing images on a page (the images can still be indexed, though, if they are linked to directly from another site).
Noarchive: Tells search engines to not show a cached version of a page.
Nocache: This is the same as noarchive, but specific to Bingbot/MSNBot.
Nosnippet: Instructs search engines to not display text or video snippets.
Notranslate: Instructs search engines to not show translations of a page in SERPs.
Unavailable_after: Tells search engines a specific date and time after which they should no longer display the page in search results.
Noyaca: Instructs Yandex crawler bots to not use page descriptions in results.
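These values go in the content attribute of the robots meta tag in the page head. For example, to block both indexing and link following on a single page:

```html
<!-- Block indexing and link following for this page only -->
<meta name="robots" content="noindex, nofollow">
```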