Michael Blum

Developer from Chicago

Building a Blog Part 2 - Git Server

Why? There’s GitHub

I’m interested in what GitLab is offering - a full-featured Git server that supports many of the features found on GitHub:

  • Pages
  • Issue Tracking
  • Pull Requests
  • Private Repos (GitLab offers unlimited private repos, as opposed to the five I get on GitHub)

and the list goes on.


GitHub webhooks

One difference I noticed is in how webhooks work: the developer is responsible for maintaining a server to receive them. While this works well for plug-and-play integrations with services like Travis CI, for the webhook I’m building I don’t want to deal with maintaining a publicly available POST endpoint.

Installing GitLab

It took just a few clicks on DigitalOcean:

one-click gitlab install

Securing GitLab with LetsEncrypt and Nginx


Once you’ve set up a custom hostname - like git.yourdomain.com - you can secure your source code with a free certificate from LetsEncrypt.

git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Before starting the installation we need to take care of a few things:

  1. Turn off GitLab, as it forcibly redirects requests to port 80:

    # Stop all GitLab components
    sudo gitlab-ctl stop
  2. Turn off Nginx

    sudo service nginx stop

  3. Install LetsEncrypt

As of this writing, the LetsEncrypt script doesn’t have the same automagic for Nginx as it does for Apache2.

If we run LetsEncrypt by default:

cd /opt/letsencrypt
sudo ./letsencrypt-auto certonly -a webroot --webroot-path=/usr/share/nginx/html -d git.mblum.me

I get a cryptic error saying my DNS record is wrong:

Requesting root privileges to run letsencrypt...
   /root/.local/share/letsencrypt/bin/letsencrypt certonly --webroot -w /var/www/letsencrypt/ -d git.mblum.me -manual
Failed authorization procedure. git.mblum.me (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Error parsing key authorization file: Invalid key authorization: 70 parts

 - The following errors were reported by the server:

   Domain: git.mblum.me
   Type:   unauthorized
   Detail: Error parsing key authorization file: Invalid key
   authorization: 70 parts

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.

After doing some research I found that Nginx plays a role in the LetsEncrypt installation.

Basically, LetsEncrypt writes ACME challenge files into a webroot and then tries to fetch them over HTTP at the specified fully qualified domain name (FQDN) - so a web server needs to be serving that directory.
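A minimal sketch of that flow, with localhost and a throwaway Python server standing in for the real domain and Nginx (the token and thumbprint values here are made up, and port 8099 is arbitrary):

```shell
# Sketch of the http-01 webroot challenge - a local stand-in, not the real ACME flow
webroot=$(mktemp -d)
mkdir -p "$webroot/.well-known/acme-challenge"

# The client writes <token>.<account-thumbprint> into a token file
token="example-token"
echo "$token.example-thumbprint" > "$webroot/.well-known/acme-challenge/$token"

# The CA then fetches that file over plain HTTP at the domain being
# validated; here a throwaway local server plays the part of Nginx
( cd "$webroot" && exec python3 -m http.server 8099 ) >/dev/null 2>&1 &
server=$!
sleep 1
response=$(curl -fsS "http://localhost:8099/.well-known/acme-challenge/$token")
kill "$server"
echo "$response"
```

If any link in that chain is broken - wrong DNS, nothing serving the webroot - validation fails with an "unauthorized" error like the one above.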

Configuring Nginx for LetsEncrypt

We want a directory that can be read from a browser. Traditionally these sorts of files live in /var/www. Let’s create a directory for LetsEncrypt’s files and a temporary file to confirm we set up Nginx properly:

mkdir -p /var/www/letsencrypt
chown www-data /var/www/letsencrypt
cd /var/www/letsencrypt
echo 'letsencrypt' >> letsencrypt.txt
chown www-data letsencrypt.txt

Now let’s set up Nginx:

sudo vi /etc/nginx/sites-available/default

and configure Nginx to serve files out of /var/www/letsencrypt

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /var/www/letsencrypt;
        index index.html index.htm;

        server_name git.mblum.me;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
        }
}

Start Nginx with sudo service nginx start and our test file should now be available at http://git.mblum.me/letsencrypt.txt.

With Nginx running, let’s install LetsEncrypt:

sudo ./letsencrypt-auto certonly -a webroot --webroot-path=/var/www/letsencrypt -d git.mblum.me

The certificates are generated under /etc/letsencrypt/live/git.mblum.me/:

sudo ls -l /etc/letsencrypt/live/git.mblum.me/

You’ll find cert.pem, chain.pem, fullchain.pem, and privkey.pem - Nginx wants fullchain.pem and privkey.pem.
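One thing worth knowing: LetsEncrypt certificates only last 90 days, so you’ll want to check the expiry date now and then. openssl x509 -noout -dates prints it; demonstrated below on a throwaway self-signed certificate (on the server, point -in at the fullchain.pem above instead):

```shell
# Generate a throwaway 90-day self-signed cert just to demonstrate the check
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=git.example.com" \
    -keyout "$tmp/privkey.pem" -out "$tmp/fullchain.pem" 2>/dev/null

# Print the validity window; on the real server this would be:
#   sudo openssl x509 -in /etc/letsencrypt/live/git.mblum.me/fullchain.pem -noout -dates
dates=$(openssl x509 -in "$tmp/fullchain.pem" -noout -dates)
echo "$dates"
```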

Enable HTTPS on Nginx

DigitalOcean has a guide on configuring Nginx and LetsEncrypt:

LetsEncrypt and Nginx DigitalOcean

but I’ll put down the last steps for posterity:

1. Create a strong Diffie-Hellman group

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

These parameters live at /etc/ssl/certs/dhparam.pem

2. Enable HTTPS with Nginx

sudo vi /etc/nginx/sites-available/default

Add LetsEncrypt certificates and upgrade all traffic to HTTPS. This involved quite a bit of trial and error:

	## GitLab

	upstream gitlab-workhorse {
		server unix:/var/opt/gitlab/gitlab-workhorse/socket fail_timeout=0;
	}

	## Redirects all HTTP traffic to the HTTPS host
	server {
		listen 80 default_server;
		listen [::]:80 ipv6only=on default_server;
		server_name git.mblum.me;
		## Don't show the nginx version number, a security best practice
		server_tokens off;
		return 301 https://$http_host$request_uri;
		access_log  /var/log/nginx/gitlab_access.log;
		error_log   /var/log/nginx/gitlab_error.log;
	}

	## HTTPS host
	server {
		listen 443 ssl default_server;
		listen [::]:443 ipv6only=on ssl default_server;
		## Don't show the nginx version number, a security best practice
		server_tokens off;
		root /opt/gitlab/embedded/service/gitlab-rails/public;

		server_name git.mblum.me;

		client_max_body_size 100m;

		ssl on;
		ssl_certificate /etc/letsencrypt/live/git.mblum.me/fullchain.pem;
		ssl_certificate_key /etc/letsencrypt/live/git.mblum.me/privkey.pem;

		ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
		ssl_prefer_server_ciphers on;
		ssl_dhparam /etc/ssl/certs/dhparam.pem;
		ssl_session_timeout 1d;
		ssl_session_cache shared:SSL:50m;
		ssl_stapling on;
		ssl_stapling_verify on;
		add_header Strict-Transport-Security max-age=15768000;

		## Individual nginx logs for this GitLab vhost
		access_log  /var/log/gitlab/nginx/gitlab_access.log;
		error_log   /var/log/gitlab/nginx/gitlab_error.log;

		location / {
			## If you use HTTPS make sure you disable gzip compression
			## to be safe against BREACH attack.

			## https://github.com/gitlabhq/gitlabhq/issues/694
			## Some requests take more than 30 seconds.
			proxy_read_timeout      300;
			proxy_connect_timeout   300;
			proxy_redirect          off;

			proxy_http_version 1.1;

			proxy_set_header Host $http_host;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			## fixes 422 errors
			proxy_set_header X-Forwarded-Proto https;
			proxy_pass http://gitlab-workhorse;
		}

		error_page 404 /404.html;
		error_page 422 /422.html;
		error_page 500 /500.html;
		error_page 502 /502.html;
		location ~ ^/(404|422|500|502)\.html$ {
			root /opt/gitlab/embedded/service/gitlab-rails/public;
		}
	}

3. Modify your /etc/gitlab/gitlab.rb to use the external Nginx

external_url "https://git.mblum.me/"
web_server['external_users'] = ['www-data']
nginx['enable'] = false

This points GitLab at its public HTTPS address, grants the www-data user access to GitLab’s sockets, and disables the bundled Nginx.

4. Configure & restart GitLab and Nginx

gitlab-ctl reconfigure
gitlab-ctl restart
service nginx restart

5. Check our A+ SSL grade: https://www.ssllabs.com/ssltest/analyze.html?d=git.mblum.me

A+ SSL grade from ssllabs

GitLab up and running

gitlab up and running

Log in with the initial root credentials found in motd.tail:

Thank you for using DigitalOcean's GitLab Application.
Your GitLab instance can be accessed at
The default credentials for GitLab are:
Username: root

You can find more information on using this image at: http://do.co/gitlabapp
To delete this message of the day: rm -rf /etc/motd.tail


Repository too large

After running git push gitlab master, the GitLab server hangs up on us:

Pushing to https://git.mblum.me/mblum/mblum.me-blog.git
Counting objects: 1438, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (712/712), done.
Writing objects: 100% (1438/1438), 9.65 MiB | 1.27 MiB/s, done.
Total 1438 (delta 758), reused 1275 (delta 718)
POST git-receive-pack (chunked)
error: RPC failed; result=22, HTTP code = 413
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date

This is because our Jekyll blog has lots of images, so the push exceeds Nginx’s default 1 MB request body limit - hence the HTTP 413. Adding client_max_body_size 100m; to the HTTPS server block in the Nginx proxy allows for larger pushes.
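As a sanity check before picking a limit, git count-objects -vH reports the packed size a fresh push would have to transfer. Sketched here on a throwaway repo (run it inside your real repository instead):

```shell
# Illustrative: measure a repository's packed size before tuning
# nginx's client_max_body_size. Uses a throwaway repo for the demo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "hello" > file.txt
git add file.txt
git commit -qm "demo"
git gc --quiet                 # pack loose objects so size-pack is meaningful
pack=$(git count-objects -vH | grep "size-pack")
echo "$pack"
```

If size-pack is anywhere near the configured limit, pushes over HTTPS will start failing with 413s again.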