
Deploy to production

This guide explains how to set up and run Ory Kratos in an exemplary production environment. It uses PostgreSQL as the database, Nginx as the reverse proxy, DigitalOcean as the cloud provider, and the Ory Kratos NodeJS UI Reference as the user interface. You can use another relational database, a different reverse proxy, any other cloud host, and a custom user interface written in your favorite language - this is just an example!

Create a Droplet

Spin up a droplet with the following configuration:

  • OS: Ubuntu 20.04
  • Plan: Basic
  • CPU options: Regular with SSD
  • RAM: 1 GB
  • SSD: 25 GB
  • VPC network: default
  • Authentication: SSH Keys. Don't forget to add your own SSH key
  • Region: Choose your region
Note

This example shows a configuration for a single small droplet. The configuration of your virtual machine (VM) may vary depending on the scale of your application.

Wait for the virtual machine (VM) to start up and copy the IP address.

This guide uses accounts.example.com as the hostname for Ory Kratos. Replace it with <something>.<your-domain> for your purposes. You need to configure DNS: create an A record and point it to the VM's IP address.
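Once the record is created, you can check that it resolves to your droplet's IP. This assumes the dig utility is available (on Ubuntu it ships in the dnsutils package):

dig +short accounts.example.com
# Should print the droplet's IP address once the record has propagated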

Install required dependencies

Connect to your droplet via SSH or use the Droplet Console.

First, upgrade all packages on your VM.

apt-get update && apt-get upgrade -y

Ubuntu 20.04 ships with Node.js v10.19.0 by default, so we need to install a newer version. You can use the following script to install Node.js 16.x on an Ubuntu system.

# Install Node.js 16.x and other dependencies
curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh
bash /tmp/nodesource_setup.sh
apt-get install nodejs jq unzip -y
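Confirm that the expected Node.js version is now installed:

node --version
# Should print a v16.x version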

Install PostgreSQL

Install PostgreSQL and switch to the postgres user by running the following commands:

sudo apt install postgresql postgresql-contrib -y
sudo -i -u postgres

Create the database:

createdb kratos

Change the default password encryption to a stronger one as recommended by PostgreSQL:

psql
# Postgres command line
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
# Change the default password encryption to stronger one
SELECT pg_reload_conf();
# Reload configuration

Let's create a database user for Kratos (use your own secure password!):

CREATE USER kratos PASSWORD 'CHANGE-ME-INSECURE-PASSWORD';

Give the newly created account access to the database:

GRANT CONNECT ON DATABASE kratos TO kratos;

We need to change the Postgres client authentication configuration so that local TCP connections use scram-sha-256. Open the file at /etc/postgresql/12/main/pg_hba.conf and add the following line:

host    all             all             127.0.0.1/32            scram-sha-256
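For the new rule to take effect, PostgreSQL needs to reload its configuration. One way to do this on Ubuntu is (run it from a root shell, not the postgres session):

systemctl reload postgresql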

Check your credentials against PostgreSQL:

psql -U kratos -W -h 127.0.0.1

Type or paste the password for the Ory Kratos user we created before. If everything works correctly you should see a prompt similar to this:

Password:
psql (12.9 (Ubuntu 12.9-0ubuntu0.20.04.1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

kratos=>

Congratulations, you have installed and configured the PostgreSQL database to store your identities. Next, we will set up Ory Kratos.

Install Ory Kratos

For security reasons, create a dedicated system user that isn't allowed to log in:

useradd -s /bin/false -m -d /opt/kratos kratos

Create folders to hold the installation data and configuration files.

mkdir /opt/kratos/{bin,config}

Install Ory Kratos:

cd /opt/kratos/bin
# Download the Ory Kratos release you want to install
wget https://github.com/ory/kratos/releases/download/<version-you-want>/kratos_<version-you-want>-linux_64bit.tar.gz
# Extract the contents
tar xfvz kratos_<version-you-want>-linux_64bit.tar.gz
# Remove redundant files
rm *md
rm LICENSE
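To verify that the binary runs on your machine, print its version:

/opt/kratos/bin/kratos version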

Download the Quickstart configuration files:

cd ../config
wget https://raw.githubusercontent.com/ory/kratos/<version-you-want>/contrib/quickstart/kratos/email-password/identity.schema.json
wget https://raw.githubusercontent.com/ory/kratos/<version-you-want>/contrib/quickstart/kratos/email-password/kratos.yml

We need to configure Ory Kratos to use the Postgres database. Open kratos.yml and change the dsn configuration. By default, dsn: memory is set, which means Ory Kratos only stores data in memory. Change the dsn key to point to the Postgres database you configured before:

- dsn: memory
+ dsn: postgres://kratos:CHANGE-ME-INSECURE-PASSWORD@127.0.0.1:5432/kratos?sslmode=disable&max_conns=20&max_idle_conns=4

We also need to change the default location of the identity schema:

identity:
  default_schema_id: default
  schemas:
    - id: default
-     url: file:///etc/kratos/config/identity.schema.json
+     url: file:///opt/kratos/config/identity.schema.json

Apply migrations:

/opt/kratos/bin/kratos -c /opt/kratos/config/kratos.yml migrate sql -y postgres://kratos:CHANGE-ME-INSECURE-PASSWORD@127.0.0.1:5432/kratos?sslmode=disable
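If you want to double-check that the migrations ran, one way is to list the tables that were created (the exact table set depends on the Kratos version, but it should include an identities table):

sudo -u postgres psql -d kratos -c '\dt'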

Test your setup using the serve command:

/opt/kratos/bin/kratos -c /opt/kratos/config/kratos.yml serve

You should see something like this once the service has been started:

INFO[2022-05-10T20:51:56Z] TLS has not been configured for public, skipping  audience=application service_name=Ory Kratos service_version=v0.9.0-alpha.3
INFO[2022-05-10T20:51:56Z] Starting the public httpd on: 0.0.0.0:4433 audience=application service_name=Ory Kratos service_version=v0.9.0-alpha.3
INFO[2022-05-10T20:51:56Z] TLS has not been configured for admin, skipping audience=application service_name=Ory Kratos service_version=v0.9.0-alpha.3
INFO[2022-05-10T20:51:56Z] Starting the admin httpd on: 0.0.0.0:4434 audience=application service_name=Ory Kratos service_version=v0.9.0-alpha.3

Run Ory Kratos using Systemd

We need to change the owner of the /opt/kratos directory to the kratos user:

chown -R kratos /opt/kratos/

Create a /etc/systemd/system/kratos.service file

cd /etc/systemd/system
nano kratos.service

Paste the following content to configure systemd to start Ory Kratos using the serve command from earlier.

[Unit]
Description=Kratos Service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=kratos
ExecStart=/opt/kratos/bin/kratos -c /opt/kratos/config/kratos.yml serve

[Install]
WantedBy=multi-user.target

To run Ory Kratos using systemd, enable the service so it starts at boot:

systemctl enable kratos.service
Created symlink /etc/systemd/system/multi-user.target.wants/kratos.service → /etc/systemd/system/kratos.service.

Start the kratos service using systemd:

systemctl start kratos.service

Check running processes with ps ax | grep kratos. If everything worked correctly you should see something like this:

19191 ?        Ssl    0:00 /opt/kratos/bin/kratos -c /opt/kratos/config/kratos.yml serve
19206 ?        Ss     0:00 postgres: 12/main: kratos kratos 127.0.0.1(36094) idle
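You can also inspect the service state and recent log lines via systemd:

systemctl status kratos.service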

Check that the service is accessible. If your DNS record hasn't propagated yet (this can take up to 24 hours), you can send the request directly to your droplet's IP instead:

curl -s http://accounts.example.com:4433/sessions/whoami | jq
{
  "error": {
    "code": 401,
    "status": "Unauthorized",
    "reason": "No valid session cookie found.",
    "message": "The request could not be authorized"
  }
}

Ory Kratos is now up and running, but the Kratos Admin API must not be reachable from the public internet. We'll put a reverse proxy in front of it and set serve.public.host and serve.admin.host to 127.0.0.1 so that Ory Kratos listens only on the loopback interface.

Change the following sections in your kratos.yml (in case you forgot: you can find it in /opt/kratos/config):

serve:
  public:
    base_url: http://127.0.0.1:4433/
+   host: 127.0.0.1
    cors:
      enabled: true
  admin:
    base_url: http://kratos:4434/
+   host: 127.0.0.1

and then restart Kratos:

service kratos restart

Since Kratos now listens only on the loopback interface, the public port is no longer reachable from outside:

curl -I http://accounts.example.com:4433
curl: (7) Failed to connect to accounts.example.com port 4433: Connection refused

Install and configure Nginx

We'll use Nginx as the reverse proxy and load balancer for our service; you can use a different reverse proxy and load balancer instead. Install Nginx and Certbot by running:

apt-get install nginx certbot python3-certbot-nginx -y

Create a default configuration for the virtual host. To do this, create the file accounts.example.com in /etc/nginx/sites-available/ with the following content:

cd /etc/nginx/sites-available/
nano accounts.example.com
server {
    listen 80;
    server_name accounts.example.com;
}

We need to create a symlink in the sites-enabled directory:

ln -s /etc/nginx/sites-available/accounts.example.com /etc/nginx/sites-enabled/accounts.example.com

Configure SSL via Certbot. Run the following command and answer the questions to set it up. When prompted choose to redirect HTTP traffic to HTTPS.

certbot --nginx -d accounts.example.com
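The certbot package on Ubuntu sets up automatic certificate renewal through a systemd timer. If you want to confirm that renewal will work, you can simulate it:

certbot renew --dry-run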

After running Certbot your configuration file will look like this:

cat /etc/nginx/sites-enabled/accounts.example.com
server {
    listen 80;
    server_name accounts.example.com;
    if ($host = accounts.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/accounts.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/accounts.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

Your configuration is still missing some parts: we need to configure locations and upstreams. With upstreams you can balance network traffic between several Ory Kratos instances running on different virtual machines. We need two upstreams:

  • public_api to proxy traffic to the public API of Ory Kratos
  • admin_api to proxy traffic to the admin API of Ory Kratos

Add the following configuration before the server sections in the /etc/nginx/sites-enabled/accounts.example.com file:

upstream public_api {
    server 127.0.0.1:4433;
    server 127.0.0.1:4433; # We can load balance the traffic to support scaling
}
upstream admin_api {
    server 127.0.0.1:4434;
    server 127.0.0.1:4434;
}
server {
...

After adding your locations, /etc/nginx/sites-enabled/accounts.example.com has the following content:

upstream public_api {
    server 127.0.0.1:4433;
    server 127.0.0.1:4433; # We can load balance the traffic to support scaling
}
upstream admin_api {
    server 127.0.0.1:4434;
    server 127.0.0.1:4434;
}
server {
    listen 80;
    server_name accounts.example.com;
    if ($host = accounts.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}
server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/accounts.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/accounts.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://public_api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /admin {
        # Example of managing access control
        # for the /admin endpoint:
        # in this example we allow access
        # either from the subnet
        # or by checking the query parameter ?secret=
        set $allow 0;
        # Check against the remote address
        if ($remote_addr ~* "172.24.0.*") {
            set $allow 1;
        }
        # Check against the ?secret param
        if ($arg_secret = "GuQ8alL2") {
            set $allow 1;
        }
        if ($allow = 0) {
            return 403;
        }

        rewrite /admin/(.*) /$1 break;

        proxy_pass http://admin_api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /identities {
        proxy_pass http://admin_api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Test the Nginx configuration:

nginx -t

If your configuration is correct, you should get the following outputs:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now reload the Nginx service:

service nginx reload

Check that Ory Kratos is now reachable through the proxy:

curl -s https://accounts.example.com/sessions/whoami | jq
{
  "error": {
    "code": 401,
    "status": "Unauthorized",
    "reason": "No valid session cookie found.",
    "message": "The request could not be authorized"
  }
}
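The access rules in the /admin location can be checked in a similar way. Without the secret (and from outside the allowed subnet), Nginx itself rejects the request; with the secret, the request is proxied to the admin API. The second check below assumes the admin API exposes the standard Ory /health/ready endpoint:

curl -s -o /dev/null -w "%{http_code}\n" https://accounts.example.com/admin/identities
403
curl -s "https://accounts.example.com/admin/health/ready?secret=GuQ8alL2"
{"status":"ok"}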

Install Ory Kratos UI

The installation of the Ory Kratos NodeJS UI Reference is straightforward:

useradd -s /bin/false -m -d /opt/uinode uinode
cd /opt/uinode
wget https://github.com/ory/kratos-selfservice-ui-node/archive/refs/tags/<version-you-want>.zip
unzip <version-you-want>.zip
rm <version-you-want>.zip
mv kratos-selfservice-ui-node-<version-you-want>/ ui
cd ui
npm i
chown -R uinode /opt/uinode

Create a configuration file for systemd. Create a file named uinode.service at /etc/systemd/system/ with the following content (change https://accounts.example.com to your domain!):

[Unit]
Description=Kratos Self-Service UI Node Service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=uinode
ExecStart=/usr/bin/npm start
Environment=KRATOS_PUBLIC_URL=http://127.0.0.1:4433
Environment=KRATOS_BROWSER_URL=https://accounts.example.com
WorkingDirectory=/opt/uinode/ui

[Install]
WantedBy=multi-user.target

Let's enable the systemd service:

systemctl enable uinode

You should get the following response:

Created symlink /etc/systemd/system/multi-user.target.wants/uinode.service → /etc/systemd/system/uinode.service.

Now start the self-service UI:

systemctl start uinode

You can use ps ax | grep node to check if it is running.
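The reference UI listens on port 3000 by default, which is what the Nginx upstream in the next step assumes. A quick local check (the exact response depends on the UI version, but you should get an HTTP status line rather than a connection error):

curl -sI http://127.0.0.1:3000/ | head -n 1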

Configure User Interface

In kratos.yml, change the URLs to use the user interface you installed in the previous step:

serve:
  public:
-   base_url: http://127.0.0.1:4433/
+   base_url: https://accounts.example.com/
    host: 127.0.0.1
    cors:
      enabled: true
  admin:
    host: 127.0.0.1
    base_url: http://127.0.0.1:4434/

selfservice:
-  default_browser_return_url: http://127.0.0.1:4455/
+  default_browser_return_url: https://accounts.example.com/auth/
  allowed_return_urls:
-    - http://127.0.0.1:4455
+    - https://accounts.example.com

  methods:
    password:
      enabled: true

  flows:
    error:
-     ui_url: http://127.0.0.1:4455/error
+     ui_url: https://accounts.example.com/auth/errors

    settings:
-     ui_url: http://127.0.0.1:4455/settings
+     ui_url: https://accounts.example.com/auth/settings
      privileged_session_max_age: 15m

    recovery:
      enabled: true
-     ui_url: http://127.0.0.1:4455/recovery
+     ui_url: https://accounts.example.com/auth/recovery

    verification:
      enabled: true
-     ui_url: http://127.0.0.1:4455/verification
+     ui_url: https://accounts.example.com/auth/verification
      after:
-       default_browser_return_url: http://127.0.0.1:4455/
+       default_browser_return_url: https://accounts.example.com/auth/

    logout:
      after:
-       default_browser_return_url: http://127.0.0.1:4455/login
+       default_browser_return_url: https://accounts.example.com/auth/login

    login:
-     ui_url: http://127.0.0.1:4455/login
+     ui_url: https://accounts.example.com/auth/login
      lifespan: 10m

    registration:
      lifespan: 10m
-     ui_url: http://127.0.0.1:4455/registration
+     ui_url: https://accounts.example.com/auth/registration
      after:
        password:
          hooks:
            - hook: session

Configure Nginx and add the missing configuration for your UI:

upstream ui_node {
    server 127.0.0.1:3000;
}
...
    location /auth {
        rewrite /auth/(.*) /$1 break;

        proxy_pass http://ui_node;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
...
    error_page 401 = @error401;
    # Catch if 401/unauthorized and redirect for login
    location @error401 {
        return 302 https://accounts.example.com/auth/login;
    }

Test the Nginx configuration:

nginx -t

You should get the following response if the configuration is correct:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Reload the Nginx and Ory Kratos services:

service nginx reload
service kratos restart

Open https://accounts.example.com/auth/login and you should see the Login Screen.

Secure secrets

We covered a basic deployment of Ory Kratos to production with Nginx. However, the configuration of Kratos itself is not production-ready. We need to:

  1. Set up secure keys.
  2. Run Ory Kratos in production mode.

Generate secure keys using the following openssl command:

openssl rand -base64 32

Edit the /opt/kratos/config/kratos.yml file and replace the following values:

secrets:
  cookie:
    - PLEASE-CHANGE-ME-I-AM-VERY-INSECURE
  cipher:
    - 32-LONG-SECRET-NOT-SECURE-AT-ALL

with unique and generated strings:

secrets:
  cookie:
    - xgrxYZLM0jGgJqijvvf73x30URJkRENoCm2ExQw9+CE=
  cipher:
    - rtxrVjiWwe85nxB9XeWoDP8U17+sT6kHo0E1t3Tr5uY=

Change log.leak_sensitive_values to false so that sensitive values such as secrets don't leak into the logs:

log:
-  leak_sensitive_values: true
+  leak_sensitive_values: false

Save changes and restart Ory Kratos:

systemctl restart kratos

Next Steps