How to deploy your node app on Linux, 2016 edition

The latest version of the top node deployment guide

By Mike on 30th Dec 2015

‘How to deploy node on Linux’ is the most popular guide to deploying node. It's been running since 2013, and covers all Linux distros. This version has been updated for node LTS, handles current Linux distros, adds HTTPS termination, and includes a lot of small tweaks and fixes suggested by the community.

Here’s how to deploy your node app on a modern Linux setup. This will take you from 'I have written a node app' to 'I have a deployed node app on the internet'.

If you're a web developer, and not a devops person, this is just enough devops to get your app live.

The steps below require only basic shell skills, and a box running SSH. We don't care whether you use Debian/Ubuntu or Red Hat/CentOS. We have commands for both.

This is just the basics: there's nothing stopping you from doing more advanced stuff like using Ansible playbooks, Dockerfiles, building your own deployment software using your cloud provider's API, or building your own cloud provider. However, before you can do those things, you need to understand how to get a server up and running, which is what this guide is for.

Contents

Install Linux

Set up user accounts and SSH keys

Set timezone to UTC

Install node

Add a shrinkwrap file, for consistent deploys

Set up git and deploy keys

Set up HTTPS

Allow inbound access to your app's ports

Start node app as a Linux service

Deploying

Conclusion

Install an OS + the bits necessary for node

Run these as root, or prefix them with sudo if your provider already gave you a sudo-capable account:

# RHEL / CentOS
yum -y update
# Debian / Ubuntu
apt-get update && apt-get -y upgrade
# RHEL / CentOS
yum -y group install "Development Tools"
# Debian / Ubuntu
apt-get -y install build-essential

Allow your team to log in with their public keys

Your cloud provider might already set up a user account and public key access — for example, AWS creates an ‘ec2-user’ account with sudo access and your public key already authorised to log in to this account. If that’s the case, skip ahead.

However some cloud providers (like Digital Ocean) just give you a root prompt. Logging in as root all the time is considered somewhat insecure — for one thing, everyone already knows the user name, so let’s make a regular user account:

useradd -m myaccount

Wheel is an old Unix term that apparently comes from the expression ‘big wheel’ meaning a powerful person. For all intents and purposes, it’s your ‘admin’ group (on Debian/Ubuntu the equivalent group is called ‘sudo’; substitute ‘sudo’ for ‘wheel’ in the commands below). Add the user you just made to the ‘wheel’ secondary group:

usermod -aG wheel myaccount

If you're using GitHub: you can find a copy of your user's public keys at https://github.com/(their GitHub user name).keys

Otherwise, ask each member of your team for a copy of their public key. If they’re not sure, check the ~/.ssh/id_rsa.pub file on their Mac or Linux box, and if the file doesn’t exist, run ssh-keygen to make a new one.

Make sure people only send you the .pub files, and never their private keys: a private key that's been shared is compromised, and they'd have to regenerate the whole pair.

Got the users' keys? Copy them, one key per line, into ~/.ssh/authorized_keys on the box (note the American English spelling). Putting the public keys into a user’s authorized_keys allows whoever holds the corresponding private key to log in as that user.
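
If you made the account yourself, ~/.ssh probably doesn't exist yet, and sshd is fussy about permissions. Something like this, run as root for our hypothetical 'myaccount' user, sets it up:

mkdir -p /home/myaccount/.ssh
# paste the public keys, one per line, into:
vi /home/myaccount/.ssh/authorized_keys
chmod 700 /home/myaccount/.ssh
chmod 600 /home/myaccount/.ssh/authorized_keys
chown -R myaccount:myaccount /home/myaccount/.ssh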

Enable passwordless sudo access for people in the ‘wheel’ group — this means you won’t have to retype your password to run sudo. Editing the sudoers file is a bit special: if you mess it up, you could lock yourself out. So run:

visudo

…which just opens your editor on the sudoers file, and checks the syntax before you save. Uncomment the line that looks like:

%wheel ALL = (ALL) NOPASSWD: ALL

Now try logging in as a regular user from the outside world. You should be able to log in using your public key (ie, without needing a password) and be able to run

sudo -l

to list your access. Does it say a lot of ALLs? Good. You now have a working wheel (admin) account. You can run

sudo someCommand

to run a single command, or:

sudo -i

to run an interactive shell.

Let’s disable root logins over SSH then.

Edit /etc/ssh/sshd_config, find the PermitRootLogin line and change it to no. Then run:

systemctl reload sshd

to apply the changes.
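
For reference, the relevant line in /etc/ssh/sshd_config should now read:

PermitRootLogin no

Before you close your root session, open a new terminal and confirm you can still SSH in as your regular user.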

Set timezone to UTC

There’s a whole bunch of reasons why you might want to set the time on your server to UTC. First let's update the timezone:

rm /etc/localtime
ln -s /usr/share/zoneinfo/UTC /etc/localtime

To check your work, look at the local time right now:

date

The answer should be the same as the UTC date:

date -u
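
On distros running systemd (which is everything this guide covers), timedatectl does the same job in one line:

timedatectl set-timezone UTC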

Install an LTS Node

Install node 4, the current LTS release.

The lovely people at NodeSource make official packages of node for most distros. We've included instructions for the more popular distros below.

RHEL / CentOS

curl --silent --location https://rpm.nodesource.com/setup_4.x | bash
yum install -y nodejs

Debian and Ubuntu

curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

If you can’t find a Node package for your distro

Visit http://nodejs.org/download/, download the Linux binaries, extract them, and move the extracted directory to /usr/local/node:

tar -xf node-someversion.tgz
mv node-someversion /usr/local/node

Make some symlinks so node is in your PATH:

ln -s /usr/local/node/bin/node /usr/local/bin/node
ln -s /usr/local/node/bin/npm /usr/local/bin/npm
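
Whichever route you took, check node and npm are on your PATH:

node --version
npm --version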

Add a shrinkwrap file, for consistent deploys

npm's inbuilt shrinkwrap tool ensures that your dependencies (and, crucially, your dependencies' dependencies) remain the same, so the package versions you use on your developer machine are the same as the ones that end up on the server:

npm shrinkwrap

And commit the resulting npm-shrinkwrap.json file.
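
Whenever you add or update a dependency, re-run shrinkwrap and commit the updated file. For example ('some-package' is just a hypothetical package name):

npm install --save some-package
npm shrinkwrap
git add package.json npm-shrinkwrap.json
git commit -m "update some-package"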

Set up git

We’re going to update our apps using git. If you use GitHub or GitLab, rather than using a particular person’s SSH key for git, you can create a special ‘deploy key’ which only has read access to a single project. You’d normally make the key on the machine:

ssh-keygen -t rsa -C "myapp@mycompany.com"

Add the deploy key to GitHub or GitLab.
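
The part to paste into GitHub or GitLab is the public half of the key. Assuming you accepted ssh-keygen's default location, print it with:

cat ~/.ssh/id_rsa.pub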

HTTPS

You should really be using HTTPS these days.

For smaller apps

If your app is unlikely to go beyond a single instance, and you can take it down during upgrades, node's inbuilt TLS stack is the simplest option. Using it means less software to maintain, particularly since node's event-based IO means node doesn't need a separate evented web server for static files the way Python or Ruby apps typically do.

For a typical Express app on Node 4, that's just:

var https = require('https');
// app is your Express application; privateKey, certificate and
// certificateAuthorityCertificate are PEM strings, eg read with fs.readFileSync()
var server = https.createServer({
    key: privateKey,
    cert: certificate,
    ca: certificateAuthorityCertificate
}, app);
server.listen(443);

Check your work at SSL Labs - you should get at least an A.

For apps that need to scale

However if you expect to have multiple servers, it's best to terminate your HTTPS on a load balancer: a single place where HTTPS is handled before traffic is passed on, as plain HTTP, to one of your running node servers.
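
HAProxy is a common choice here. Below is a minimal sketch, not a production config: the backend addresses, port 3000 and the certificate path are assumptions, and the .pem file is your certificate chain and private key concatenated together.

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/myapp.pem
    mode http
    default_backend node_apps

backend node_apps
    mode http
    # plain HTTP from the load balancer to the node processes
    server app1 10.0.0.10:3000 check
    server app2 10.0.0.11:3000 check backup

Marking the second server as 'backup' gives you the active/passive setup used in the Deploying section below.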

Check your work at SSL Labs - you should get at least an A.

Allow Inbound Access to Your App’s Ports

Pull down your app: make /var/www, change into it, and git clone your app.
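
Something like this works; the repository URL is a placeholder, and chown-ing /var/www to your own account keeps git happy without sudo:

sudo mkdir -p /var/www
sudo chown $(whoami) /var/www
cd /var/www
git clone git@github.com:mycompany/myapp.git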

On Unix, by default, only the root user can bind to ports below 1024. We don’t want to run our app as root though, as that’s not secure. If you used HAProxy or an ELB earlier, you can skip this step - HAProxy or the ELB will forward incoming traffic straight to the relevant port.

So, to run node on a low port, we'll need to give it the 'capability' - Linux speak for a special privilege:

sudo setcap 'cap_net_bind_service=+ep' $(readlink -f $(which node))

If you're wondering why the readlink: setcap needs to work on a real file rather than a symlink. The capability is saved to your filesystem, so it survives reboots; you may need to re-run the command when node is updated, but that's it.
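
You can check the capability took effect with getcap, which ships alongside setcap:

getcap $(readlink -f $(which node))
# the output should mention cap_net_bind_service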

Install and Start Services

Services on all popular Linux distributions now use systemd. This means we don’t have to write init shell scripts, explore the wonders of daemonization, change user accounts, clear environment variables, set up automatic restarts, log to weird syslog locations like ‘local3', or do a bunch of other stuff ourselves.

Instead, we just make a .service file for the app and let the OS take care of these things. Here’s an example one, called myapp.service:

[Unit]
Description=Your app
After=network.target
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
Group=nobody
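# Debian/Ubuntu has no 'nobody' group; use Group=nogroup there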
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target

Here are the more interesting lines:

After means we’ll start this service after the network service has started.

ExecStart is the app to run. It should have an interpreter on the first line of the file — eg, you’ll want to add #!/usr/bin/env node to app.js if it’s not already there. You'll also want the file to be executable: chmod +x app.js

Environment lines set environment variables — setting NODE_ENV to production is a node convention that tells apps like Express and gulp to use their production settings.

Copy your service file into the /etc/systemd/system directory. Then make systemd aware of the new service:

systemctl daemon-reload

Start the service:

systemctl start myapp
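
You'll probably also want the app to start automatically at boot:

systemctl enable myapp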

All your node console output is logged to the journal with the same name as your .service file. To watch logs for ‘myapp’ in realtime:

journalctl --follow -u myapp

Unless you’re perfect, the first time you run your app you might hit the odd problem — you forgot to run gulp, file permissions are wrong, and so on. Read the log output using journalctl, fix anything it tells you about, then restart the app with:

systemctl restart myapp

Your app should soon be up and running.

Deploying

Deploying should be a matter of cleaning any generated files, pulling the latest code, installing whatever new packages your npm-shrinkwrap.json file specifies, and restarting the service:

git clean -f -d
git pull origin master
npm install
gulp build
systemctl restart myapp
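
If you're doing this often, you might wrap the steps in a small script; deploy.sh is a hypothetical name, and the path assumes the /var/www/myapp layout used earlier:

#!/bin/bash
# deploy.sh: run the deploy steps above in one go, stopping on the first error
set -e
cd /var/www/myapp
git clean -f -d
git pull origin master
npm install
gulp build
sudo systemctl restart myapp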

If you're using HAProxy to run an active/passive setup as described above, you'd deploy onto the non-running (passive) server and check it manually. If you're happy, run systemctl stop myapp on the previously active server to trigger a failover. This is a common deployment technique: it keeps your site up during upgrades and still lets you easily fall back to a working environment.

One final note

So, recapping everything we've done: we installed Linux and node, set up user accounts and SSH keys, set the box to UTC, added a shrinkwrap file for consistent deploys, set up git deploy keys and HTTPS, opened up the app's port, and turned the app into a proper Linux service.

We've now got a working, basic environment! As you build your devops skills you can incorporate them into your environment. Want to build an Ansible playbook or Dockerfile from the above? Go for it - and let us know!

I hope that was useful — if you have additions or corrections feel free to let us know in the corresponding Hacker News post.

Thanks to the following people for their contributions:

Mike MacCana, founder at CertSimple.

CertSimple makes EV HTTPS fast and painless.

An EV certificate proves your website is controlled by a real business. But getting verified is a slow painful process.
CertSimple provides EV HTTPS certificates 40x faster than other vendors. We check your company registration, network details, physical address and flag common errors before you pay us, provide specific validation help for your company, update in realtime during the validation process, and even check your infrastructure to help you set up HTTPS securely.
Prove your identity now!