
Umbraco Continuous Deployment with Jenkins

I haven’t seen many posts about Continuous Integration for Umbraco. Perhaps this is because the Umbraco Cloud option offers this feature off the shelf, but I have some customers that require customizations (custom databases) that would be costly on the Umbraco Cloud setup. Therefore I decided to spin up Jenkins, connect it to a GitHub repo and do automatic deployment from the main branch whenever there is a push to it.

Here are a few quick steps to get you started.

1. Set up GitHub

Whether you use GitHub, GitLab or your own Git repo does not matter for these steps. You just need a branching strategy where the master branch matches what is in your live site.

[Image: release branch workflow]

Merge feature branches into the master branch using pull requests.

2. Install Jenkins

I have a Windows Server 2019 root server running on Hetzner, but I hear OVHcloud is good as well. I installed Jenkins instances directly on my test server and live server. I could do the setup with PowerShell remoting, but Jenkins is quick to install and requires very little maintenance and few resources, so I just have them running isolated on each server.

On the Jenkins server you also need Microsoft Build Tools and NuGet installed and globally available on the PATH.

3. Configure Jenkins for Umbraco

You need some plugins in order to build .NET projects and front-end projects. Try these:

  • MSBuild Plugin
  • Global Slack Notifier Plugin (for notifying build success/failures)
  • NodeJS Plugin (control npm builds)
  • NuGet Plugin
  • PowerShell plugin (for scripting)
  • ThinBackup (for moving Jenkins configs between instances)

Here is my sample configuration for a Freestyle Jenkins project.

Node version
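
With the NodeJS plugin you select the Node version the job should use and then run the front-end build as a normal build step. A minimal sketch (the npm scripts here are assumptions; use whatever your package.json defines):

# restore front-end packages and build the client assets
npm ci
npm run build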

Nuget restore command and Injection to Assembly

Run MSBuild
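
The NuGet restore and MSBuild steps boil down to commands along these lines (a sketch only; MySolution.sln and the Release configuration are placeholders for your own solution and build configuration):

# restore NuGet packages for the solution
nuget restore MySolution.sln
# build the solution with MSBuild from the Build Tools installation
msbuild MySolution.sln /p:Configuration=Release /verbosity:minimal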

Copy files from Jenkins build folder to Webroot


Notice that running IISReset here is probably not the best idea. I have done the same thing on Sitecore projects, where I instead stop and start the application pool (Stop-WebAppPool -Name "DefaultAppPool"); you may make restarting the application a few seconds faster that way. But you do need to stop either the app pool or IIS, because sometimes artifacts are locked, and if that happens things go south.
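
A deploy step along those lines could look like this in the PowerShell plugin (a sketch under my assumptions: the app pool name, workspace path and webroot path are placeholders for your own setup):

# stop the app pool so locked artifacts can be replaced
Import-Module WebAdministration
Stop-WebAppPool -Name "DefaultAppPool"

# copy the build output from the Jenkins workspace to the webroot
Copy-Item -Path "$env:WORKSPACE\build\*" -Destination "C:\inetpub\wwwroot\mysite" -Recurse -Force

# start the app pool again
Start-WebAppPool -Name "DefaultAppPool"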

4. Post-build actions

You can also set up email notifications here, or if you use Teams, RocketChat, etc., there are plugins for sending a notification when the build completes or fails.
If you work in a bigger team, you can also email whoever pushed the code when the build succeeds or fails.

That’s it: the minimal Jenkins setup for Umbraco projects that I use. The same setup works for any .NET project, as I did not include anything Umbraco-specific such as uSync actions.

I have done a few DevOps setups with Jenkins, TeamCity, Azure DevOps and GitHub Actions. Whichever CI/CD tool you are using, I recommend writing your scripts in Node and/or PowerShell so you can easily jump between tools, as they all work essentially the same way.


Quick Screen command reference

Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells).

Start a new session with a session name: screen -S <session_name>
List running sessions / screens: screen -ls
Attach to a running session: screen -x
Attach to a running session by name: screen -r <session_name>
Detach a running session: screen -d <session_name>

Switching between screens

When you use nested screens, you can switch between them with “Ctrl-A” and “n”, which moves to the next screen. When you need to go to the previous screen, just press “Ctrl-A” and “p”.

To create a new screen window, just press “Ctrl-A” and “c“.

Leaving Screen

There are two ways to leave screen. First, use “Ctrl-A” and “d” to detach the screen. Second, use the exit command to terminate the screen. You can also use “Ctrl-A” and “k” to kill the screen.
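
A typical day-to-day flow with a named session looks like this (mysession is just a placeholder name):

# start a named session and run your long task inside it
screen -S mysession
# detach with Ctrl-A d, then later list sessions and reattach
screen -ls
screen -r mysession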

That’s some of my daily screen usage. There are still a lot of features in the screen command. See the screen man page for more detail, or this page: https://gist.github.com/jctosta/af918e1618682638aa82.


I don’t use “cloud” providers for personal projects anymore.

I love Azure Functions for small jobs, and deploying a Kubernetes cluster from GitHub Actions to Amazon EKS is a blast. But I think that is only for business customers, not for small startups or pet projects.

At work I am using Azure and AWS, and I used to have GCP and AWS for my pet projects as well, but now my projects run on Hetzner VPSs and Hetzner Cloud. The main reason for my decision to favour Hetzner over the big players is the uncertainty and confusion of their billing. It would be nice to have the scaling possibility if I suddenly became the next Facebook, but that is the least of my worries and quite a nice problem to have. What I like about VPSs and Hetzner Cloud is the straightforward and simple pricing per month. If I want a cloud instance with 2 GB RAM and 1 vCPU, it’s 3 euros. If I run out of resources I can scale up; for example, going to 32 GB is 35 euros.

I have a small startup with a worker process written in Python, an API written in Node and a client running as an SPA (Vue.js/Nuxt.js). The data layer is PostgreSQL/Redis. Without thinking, I set this up in AWS and it worked nicely, but soon I noticed it was 100 euros/month. Since this is at the Minimum Viable Product stage, I decided to move to a VPS. So I spun up a VPS with Ubuntu on it, deployed everything there and changed the GitHub Actions. It took me one evening, and I don’t consider myself a sysadmin. But I think installing Linux with the above tech stack is easier than understanding how to run the same setup on AWS, GCP or Azure. If at some point I need scaling, I can always move back to a Kubernetes setup on AWS, but sometimes less is more.


New blog

I started a new blog here.

Right now I am busy with customer projects, but soon I will start to push some content here.


4 Useful extensions for Visual Studio Code

Bracket Pair Colorizer

This extension allows matching brackets to be identified with colours. The user can define which tokens to match, and which colours to use.

Bracket Pair Colorizer

Git graph

View a Git Graph of your repository, and easily perform Git actions from the graph. Configurable to look the way you want!

Git Graph

GitLens

Visualize code authorship at a glance.

GitLens

Live Share

Real-time collaborative development.

Live Share


App Engine vs Cloud Functions

Both Cloud Functions (CFs) and Google App Engine (GAE) are designed for building a “microservice architecture” in a “serverless” environment.

Google says that Cloud Functions is basically for SERVERLESS FUNCTIONS & EVENTS, whereas App Engine is for SERVERLESS HTTP APPLICATIONS. However, when I read this short description I am still confused: if I am running an SPA application, what prevents me from using just CFs for my server-side code? When exactly would I use GAE instead of CFs?

I did a small investigation on this and here are my findings.

A slightly longer description from Google:

Cloud Functions

An event-driven compute platform to easily connect and extend Google and third-party cloud services and build applications that scale from zero to planet scale.

Use Cases

  • Asynchronous backend processing
  • Simple APIs (like one or two functions, not a full RESTful service; see the sketch after this list)
  • Rapid prototyping and API stitching
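
As an illustration of the “simple API” case, a minimal HTTP-triggered Cloud Function in Node.js is roughly this (a sketch only; the function name and runtime are just examples):

// index.js: the exported function handles the HTTP trigger
exports.helloHttp = (req, res) => {
  res.send('Hello from Cloud Functions!');
};

It can then be deployed with something like: gcloud functions deploy helloHttp --runtime nodejs18 --trigger-http --allow-unauthenticated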


App Engine standard environment

A fully managed serverless application platform for web and API backends. Use popular development languages without worrying about infrastructure management.

Use Cases

  • Web applications
  • APIs, like mobile and SPA backends


I found this answer on Stack Overflow, which I am updating here with a few of my edits.

When creating relatively complex applications, CFs have several disadvantages compared to GAE.

  • Limited to Node.js, Python, and Go. GAE also supports .NET, Ruby, PHP and Java.
  • CFs are designed for lightweight, standalone pieces of functionality; attempting to build complex applications from such components quickly becomes “awkward”. Yes, the inter-relationship context for every individual request must be restored on GAE just as well, but GAE benefits from more convenient means of doing that which aren’t available on CFs, for example user session management, as discussed in other comments.
  • GAE apps have an app context that survives across individual requests; CFs don’t have that. Such a context makes access to certain Google services more efficient/performant (or even possible at all) for GAE apps, but not for CFs, for example memcached.
  • The availability of the app context for GAE apps can support more efficient/performant client libraries for other services which can’t operate on CFs. For example, accessing the Datastore with the ndb client library (only available for standard environment GAE Python apps) can be more efficient/performant than using the generic Datastore client library.
  • GAE can be more cost effective, as it is priced “wholesale” (based on instance-hours, regardless of how many requests a particular instance serves) compared to the “retail” pricing of CFs (where each invocation is charged separately).
  • Response times might typically be shorter for GAE apps than for CFs, since the app instance handling the request is usually already running, thus:
    • the GAE app context doesn’t need to be loaded/restored, it’s already available; CFs need to load/restore it
    • the handling code is (most of the time) already loaded, while CFs’ code still needs to be loaded. Not too sure about this one, though; I guess it depends on the underlying implementation.

Note that nothing prevents us from mixing both notions: an App Engine application can launch jobs through Cloud Functions.

Summary

Use Cloud Functions (CFs) for “tasks” and use Google App Engine (GAE) for “full applications”.



ERROR: gcloud crashed (ValueError): unknown locale: UTF-8

I was getting this error on gcloud CLI when trying to deploy Cloud Functions to Google Cloud.

Here’s the quick fix – add these lines to your ~/.zshrc or ~/.bash_profile:

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
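
Then reload the shell configuration (or open a new terminal) so the change takes effect:

source ~/.zshrc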

Essential Docker Compose Commands

Launch in background

docker-compose up -d

If you want to rebuild the Docker images you can use the --build flag after the up command. This is essentially the same as writing:

# docker build .
# docker run myimage

docker-compose up --build

Stop containers

docker-compose down

List running containers

docker-compose ps
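
These commands assume a docker-compose.yml in the current folder. A minimal sketch for a single web service (the service name, build context and port are just examples):

version: "3"
services:
  web:
    build: .
    ports:
      - "8081:8081"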


Tagging docker images

Normally with “docker build .” you get an image ID that you can run with “docker run IMAGEID”, but if you want a friendlier name you can tag it like this:

docker build -t YOURDOCKERUSERNAME/PROJECT:latest .

After that you can refer to the image with the tag instead of the ID, like this:

docker run -p 8081:8081 YOURDOCKERUSERNAME/PROJECT
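
You can also add a tag to an image you have already built (IMAGEID is the ID that docker build printed):

docker tag IMAGEID YOURDOCKERUSERNAME/PROJECT:latest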


Create, Run and Delete Container from Dockerfile

First, let’s make a simple “hello world” Node.js web server that runs inside a container.

STEP 1

Create a folder and put the following files in it:

Dockerfile

# Specify a base image
FROM node:alpine
WORKDIR /app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
# Default command
CMD ["npm", "start"]

package.json

{
  "dependencies": {
    "express": "*"
  },
  "scripts": {
    "start": "node index.js"
  }
}

index.js

const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(8081, () => {
  console.log('Listening on port 8081');
});

This will create a simple web server that listens on port 8081 and spits out “Hello world!”.

STEP 2

Build the Docker image and run it:

docker build .

This will create an image from the Dockerfile on your computer.

Tip: You can have multiple configurations, for example a different configuration for local development. Use the -f flag to point to it, like this: docker build -f Dockerfile.dev .

With the previous command Docker created an image for you and gave you the image ID. It looks something like this on the console:

Successfully built 6bf0f35fae69

Now take this image ID and run it like this:

docker run 6bf0f35fae69

The Docker container is now running, but we created a web server and the host has no idea how to access the container, so we need to do some port mapping.

Stop the container with CTRL+C, then run the same command again but with port mapping:

docker run -it -p 8081:8081 6bf0f35fae69

In the port parameter, ports are mapped as host:container.

STEP 3

View and delete containers:

docker ps -a
docker rm CONTAINERID

To remove all containers:

docker rm $(docker ps -a -q)

To check existing images on your system: docker images

See more at https://docs.docker.com/engine/reference/commandline/docker/