On Premise Setup

Hardware requirements

Hardware requirements are highly dependent on the number of active users simultaneously working on documents inside a single SMASHDOCs installation. By an active user we mean a user who is typing text in the editor, using copy and paste, moving sections around, writing comments, and so on.

Minimal hardware requirements: one server/VM with Ubuntu 16 x64, a 2-core CPU, 8GB RAM, and 40GB storage (SSD preferred).

Our performance tests show the following hardware requirements depending on the number of active users:

Number of active users   CPU        RAM    Storage
20                       2 cores    8GB    40GB, SSD preferred
50                       4 cores    8GB    40GB, SSD preferred
100                      8 cores    16GB   40GB, SSD mandatory
300                      16 cores   32GB   40GB, SSD mandatory
>300                     Please contact support@smashdocs.net

Requirements

To set up a SMASHDOCs on-premise installation you will need the following branding assets:

  • default_logo
  • brand_logo
  • email_logo
  • favicon

Docker Engine is required to complete this installation. We deliver our services as Docker containers, so this is a prerequisite for running SMASHDOCs. If you do not already have Docker installed, obtain Docker Engine for your operating system of choice.

Docker Compose is required to compose and orchestrate the SMASHDOCs environment. If you do not already have Docker Compose installed, obtain it for your operating system of choice.
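To verify that both tools are installed and available on your PATH, you can run the standard version commands:

$(Host) docker --version
$(Host) docker-compose --version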

Docker Compose set-up

System Architecture

[Diagram: smashdocs_architecture.png]
A basic SMASHDOCs installation consists of the following Docker images:
  1. mongo:3.4
  2. redis:3.2.8
  3. smashdocs/nginx:2.0.2.8 - a standard Nginx web server (version 1.11.3) with the nginx-upload-module and the SMASHDOCs Nginx config file
  4. smashdocs/backend:latest - the SMASHDOCs backend, used in 4 Docker containers (Beat, Worker, Backend, Socketserver)
  5. smashdocs/frontend:latest - the SMASHDOCs frontend

To ensure an easy setup of a SMASHDOCs installation, we provide a Docker Compose file that spawns all Docker containers needed for one SMASHDOCs instance.

Certificates

SMASHDOCs can only be served via SSL. Therefore you must ensure that your SSL certificates are placed in the /opt/smashdocs/certs/ folder and are named wildcard.crt and wildcard.key.

If you don’t have signed SSL certificates at hand you can generate a self-signed certificate as follows:

$(Host) cd /tmp
$(Host) openssl req -x509 -newkey rsa:2048 -keyout wildcard.key -out wildcard.crt -days XXX -nodes -subj "/C=DE/ST=Bayern/L=Muenchen/O=Smashdocs Example/OU=Org/CN=*.smashdocs.local"

$(Host) mkdir -p /opt/smashdocs/certs/
$(Host) cp -R /tmp/* /opt/smashdocs/certs/
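Optionally, you can inspect the certificate with openssl to confirm the subject and validity dates before starting the stack:

$(Host) openssl x509 -in /opt/smashdocs/certs/wildcard.crt -noout -subject -dates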

Traffic Routing and required webservers

SMASHDOCs uses Nginx to route and serve all traffic to an installation.

1: Nginx: The “NGINX-Proxy” docker container accepts incoming TCP (port 4434) and HTTP/HTTPS (ports 80, 443) traffic and is responsible for SSL termination. SMASHDOCs ships the NGINX-Proxy with a default nginx.config for easy installation and usage. Some of the properties in the default Nginx config are overwritten with the defined environment variables by the Docker entrypoint script.

2: Frontend: The “frontend” docker container serves the frontend HTML, CSS and JavaScript code via Nginx.

3: Backend: The “backend” docker container serves the backend. Both frontend and backend traffic is routed through the NGINX-Proxy.

Docker container setup

The docker-compose.customer.yml file is an example Docker Compose file which sets up a SMASHDOCs installation reachable via https://customer.smashdocs.net. If you set up your own SMASHDOCs instance, make sure you change the configuration according to your desired name and URL.

Download: docker-compose.customer.yml

version: '2'

networks:
  customer:

volumes:
  asset-data:
  mongo-data:

services:
  nginx-proxy:
    image: smashdocs/nginx:2.1.8
    mem_limit: 256m
    user: root
    restart: always
    volumes:
      - "/opt/docker/ssl/certs:/etc/nginx/certs:ro"
      - "asset-data:/usr/local/data:rw"
    environment:
      - "SSL_CERT=wildcard.crt"
      - "SSL_KEY=wildcard.key"
      - "SERVER_NAME_FRONTEND=customer.smashdocs.net"
      - "SERVER_NAME_BACKEND=customer-api.smashdocs.net"
      - "UPSTREAM_FRONTEND=frontend_upstream"
      - "UPSTREAM_BACKEND=backend_upstream"
      - "UPSTREAM_SOCKET=socket_upstream"
      - "UPSTREAM_FRONTEND_VALUE=frontend:80"
      - "UPSTREAM_BACKEND_VALUE=backend:8080"
      - "UPSTREAM_SOCKET_VALUE=socketserver:8080"
    networks:
      - customer
    ports:
      - 80:80
      - 443:443
      - 4434:4434

  frontend:
    image: smashdocs/frontend:2.9.7
    mem_limit: 256m
    user: root
    restart: always
    networks:
      - customer
    environment:
      - "API_URL=https://customer.smashdocs.net/api"
      - "MODE=normal"
    depends_on:
      - nginx-proxy

  backend:
    image: smashdocs/backend:2.9.7
    mem_limit: 5g
    user: nobody
    restart: always
    networks:
      - customer
    volumes:
      - "asset-data:/usr/local/data:rw"
    environment:
      - "DATABASE_DATABASE=customer"
      - "DATABASE_ADDRESS=mongodb"
      - "DATABASE_PORT=27017"
      - "API_URL_API_URL=https://customer.smashdocs.net/api"
      - "HTTP_SERVER_ADDRESS=http://customer.smashdocs.net"
      - "HTTP_SERVER_SSL_ADDRESS=https://customer.smashdocs.net"
      - "CELERY_ENABLED=true"
      - "CELERY_BROKER=redis://redis:6379/0"
      - "CELERY_BACKEND=redis://redis:6379/0"
      - "ASSETS_ASSET_ROOT=/usr/local/data/assets"
      - "REDIS_ADDRESS=redis"
      - "REDIS_PORT=6379"
      - "LOCAL_ENABLED=true"
      - 'PROVISIONING_ENABLED=true'
      - 'PROVISIONING_KEY=REPLACE_PROVISIONING_KEY'
    depends_on:
      - nginx-proxy
      - mongodb
      - redis
    links:
      - mongodb:mongodb
      - redis:redis

  beat:
    image: smashdocs/backend:2.9.7
    mem_limit: 512m
    user: nobody
    restart: always
    networks:
      - customer
    command: "beat"
    volumes:
      - "asset-data:/usr/local/data:rw"
    environment:
      - "SERVICE_NAME=beat-customer"
      - "SERVICE_TAGS=smashdocs,rest"
      - 'CELERY_ENABLED=true'
      - "CELERY_BROKER=redis://redis:6379/0"
      - "CELERY_BACKEND=redis://redis:6379/0"
      - 'CELERY_BEAT_SCHEDULE_PATH=/usr/local/data/celery/smashdocs_beat_schedule'
    depends_on:
      - redis
    links:
      - redis:redis

  worker:
    image: smashdocs/backend:2.9.7
    mem_limit: 512m
    user: nobody
    restart: always
    networks:
      - customer
    command: "worker"
    volumes:
      - "asset-data:/usr/local/data:rw"
    environment:
      - "DATABASE_DATABASE=customer"
      - "DATABASE_ADDRESS=mongodb"
      - "DATABASE_PORT=27017"
      - "API_URL_API_URL=https://customer.smashdocs.net/api"
      - "HTTP_SERVER_ADDRESS=http://customer.smashdocs.net"
      - "HTTP_SERVER_SSL_ADDRESS=https://customer.smashdocs.net"
      - "CELERY_ENABLED=true"
      - "CELERY_BROKER=redis://redis:6379/0"
      - "CELERY_BACKEND=redis://redis:6379/0"
      - 'CELERY_BEAT_SCHEDULE_PATH=/usr/local/data/celery/smashdocs_beat_schedule'
    depends_on:
      - mongodb
      - redis
    links:
      - mongodb:mongodb
      - redis:redis

  socketserver:
    image: smashdocs/backend:2.9.7
    mem_limit: 5g
    user: nobody
    restart: always
    networks:
      - customer
    command: "backend_socket"
    volumes:
      - "asset-data:/usr/local/data:rw"
    environment:
      - "DATABASE_DATABASE=customer"
      - "DATABASE_ADDRESS=mongodb"
      - "DATABASE_PORT=27017"
      - "DATABASE_MIGRATIONS=false"
      - "API_URL_API_URL=https://customer.smashdocs.net/api"
      - "HTTP_SERVER_ADDRESS=http://customer.smashdocs.net"
      - "HTTP_SERVER_SSL_ADDRESS=https://customer.smashdocs.net"
      - "CELERY_ENABLED=true"
      - "CELERY_BROKER=redis://redis:6379/0"
      - "CELERY_BACKEND=redis://redis:6379/0"
      - "ASSETS_ASSET_ROOT=/usr/local/data/assets"
      - "REDIS_ADDRESS=redis"
      - "REDIS_PORT=6379"
    depends_on:
      - nginx-proxy
      - mongodb
      - redis
    links:
      - mongodb:mongodb
      - redis:redis

  redis:
    image: redis:3.2.8
    mem_limit: 512m
    user: redis
    restart: always
    networks:
      - customer

  mongodb:
    image: mongo:3.4
    mem_limit: 5g
    user: root
    restart: always
    networks:
      - customer
    ports:
      - 27017
    volumes:
      - "mongo-data:/data/db:rw"
    environment:
      - "SERVICE_NAME=customer-mongodb"
      - 'SERVICE_TAGS=mongodb'
    command: "--storageEngine wiredTiger"

External Database Setup

SMASHDOCs can be used with an existing MongoDB database server. In this case, the mongodb service and its dependencies in the example Docker Compose file above need to be removed.

The following environment variables can be used in these services:

  • Backend
  • Worker
  • Socketserver
Name                Type     Description
DATABASE_ADDRESS    String   Database address (default: 127.0.0.1)
DATABASE_PORT       String   Database port (default: 27017)
DATABASE_USER       String   Database user (empty string for no authentication)
DATABASE_PASSWORD   String   Database password (empty string for no authentication)
DATABASE_DATABASE   String   Database name (required)
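As an illustration, an external-database configuration for the backend service could look like the following snippet; the hostname and credentials are placeholders, not values shipped with SMASHDOCs:

backend:
  ...
  environment:
    - "DATABASE_ADDRESS=mongo.internal.example.com"
    - "DATABASE_PORT=27017"
    - "DATABASE_USER=smashdocs"
    - "DATABASE_PASSWORD=changeme"
    - "DATABASE_DATABASE=customer"

Apply the same variables to the worker and socketserver services.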

E-Mail Server Setup

For your own SMASHDOCs installation you can use your own SMTP server to send emails to your customers. The following environment variables can be configured in these services:

  • Backend
  • Worker
Name                        Type     Description
EMAIL_SMTP_SERVER_ADDRESS   String   E-mail SMTP address (default: SMASHDOCs mail server)
EMAIL_SMTP_SERVER_PORT      String   E-mail SMTP port (default: 587)
EMAIL_SMTP_USERNAME         String   SMTP username (empty string for no authentication)
EMAIL_SMTP_PASSWORD         String   SMTP password (empty string for no authentication)
EMAIL_STANDARD_EMAIL        String   Sender e-mail address (default: no-reply@smashdocs.net)
EMAIL_STANDARD_FROM         String   Sender e-mail display text (default: SMASHDOCs Team)
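For example, a backend (and worker) configuration using your own SMTP server could look like this; server name and credentials are placeholders:

backend:
  ...
  environment:
    - "EMAIL_SMTP_SERVER_ADDRESS=smtp.yourdomain.net"
    - "EMAIL_SMTP_SERVER_PORT=587"
    - "EMAIL_SMTP_USERNAME=smashdocs-mailer"
    - "EMAIL_SMTP_PASSWORD=changeme"
    - "EMAIL_STANDARD_EMAIL=no-reply@yourdomain.net"
    - "EMAIL_STANDARD_FROM=Your Company Team"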

Step 1: Prepare the configuration

Copy the Docker Compose example setup file from above and place it on your host's file system as /opt/docker-compose/docker-compose.smashdocs.yml.

The example configuration contains the domains customer.smashdocs.net and customer-api.smashdocs.net. These should be changed to reflect your environment's needs.

$(Host) sed -i -- "s/customer.smashdocs.net/smashdocs.yourdomain.net/g" /opt/docker-compose/docker-compose.smashdocs.yml
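Note that the compose file also contains the backend domain customer-api.smashdocs.net (SERVER_NAME_BACKEND), which the command above does not touch. If your API should be served on a separate hostname, run a similar replacement for it; the target hostname here is only an example:

$(Host) sed -i -- "s/customer-api.smashdocs.net/smashdocs-api.yourdomain.net/g" /opt/docker-compose/docker-compose.smashdocs.yml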

Step 2: Replace the Provisioning key

SMASHDOCs has a Provisioning API (provisioning.html) which can be used to configure a SMASHDOCs installation. The provisioning key is a random key a partner can choose themselves and enable or disable as needed.

For security reasons we advise enabling the Provisioning API only when needed.

In this example setup the provisioning key is generated using a Python expression piped through sha256sum and written to the PROVISIONING_KEY environment variable:

$(Host) export PROVISIONING_KEY=`python -c "import random; print(random.randint(5,1000000))" | sha256sum | awk '{print $1}'`
$(Host) sed -i -- "s/REPLACE_PROVISIONING_KEY/$PROVISIONING_KEY/g" /opt/docker-compose/docker-compose.smashdocs.yml

Step 3: Select the Frontend MODE

SMASHDOCs can be run in 2 different modes: Standalone and Partner mode.

  1. In Standalone mode (config variable "MODE=normal") a user can create an account and log in. The user will see a document list and can create and open documents.
  2. In Partner mode (config variable "MODE=partner") the system can only be accessed via the Partner API.

Set exactly one of the two values in the frontend service:

frontend:
  ...
  environment:
    ...
    - "MODE=normal"    # or "MODE=partner"

Step 4: Authenticate with Dockerhub

This step requires a contract with SMASHDOCs. The containers required to run a SMASHDOCs environment are hosted in a protected private registry. Please contact SMASHDOCs if you require authentication credentials.

$(Host) docker login --username <partneruser> --password <partnerpassword> https://index.docker.io/v1

Step 5: Run the docker compose file

On the host system, run the following:

$(Host) /usr/local/bin/docker-compose -f /opt/docker-compose/docker-compose.smashdocs.yml -p smashdocs up -d

Wait until all Docker containers are spawned. List the running containers using:

$(Host) /usr/local/bin/docker-compose -f /opt/docker-compose/docker-compose.smashdocs.yml -p smashdocs ps

On consecutive executions, any changes to the compose file are applied to the affected parts of the configuration: Docker Compose restarts the changed services and their dependent services.
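To watch a service while the stack starts, the standard docker-compose logs command can be used, e.g. for the backend:

$(Host) /usr/local/bin/docker-compose -f /opt/docker-compose/docker-compose.smashdocs.yml -p smashdocs logs -f backend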

Step 6: Configure the DNS

The domain names chosen above must be resolvable from all hosts using the SMASHDOCs system. This step differs from customer to customer. SMASHDOCs consulting can be acquired.
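Once the records exist, you can verify resolution from a client machine with a standard DNS lookup (the hostname is the example from Step 1):

$(Host) dig +short smashdocs.yourdomain.net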

Kubernetes set-up

This guide describes Kubernetes cluster provisioning using Azure managed Kubernetes-as-a-service (AKS) and installing the SMASHDOCs app on the Kubernetes (k8s) cluster.

The main architectural considerations are:

  • all SMASHDOCs app components (frontend, adminui, backend, socketserver, beat, worker, nginx, redis, mongodb) run in the k8s cluster
  • each k8s pod runs one SMASHDOCs app component
  • one k8s service is created for each SMASHDOCs app component
  • the nginx ingress controller is used for http/https termination
  • kube-lego is used for SSL certificate generation
  • persistent data is stored in Azure storage, using the Disk Storage and Azure Files services

At the end of this guide you will have a working instance of the SMASHDOCs app in the Azure cloud with public HTTPS access.

Install CLI utilities

The following prerequisites are required for installing AKS:

  • az - Azure CLI utility
  • kubectl - Kubernetes CLI utility
  • helm - Kubernetes package manager client
  • jq - JSON parsing utility

Steps to install on macOS:

$ brew install azure-cli
$ brew install kubernetes-cli
$ brew install kubernetes-helm
$ brew install jq

On Linux, use the appropriate package manager (apt/yum).
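To confirm that all four utilities are installed, the usual version commands can be used:

$ az --version
$ kubectl version --client
$ helm version --client
$ jq --version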

Prerequisite steps

Before creating the AKS k8s cluster, we need to make changes in our Azure account:

  • sign in using az CLI utility
  • enable write permissions for Azure account
  • register Azure providers in namespaces: Compute, Network, Storage

These steps are done once, before cluster creation.

Sign in using the az CLI utility. This step is required for further Azure resource provisioning from the CLI.

$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code B4JGSFDGSV to authenticate.

Enable write permissions as described here - https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

Make sure the Azure providers available from your account have the proper permissions for cloud resource provisioning. The following command lists the available Azure providers with their registration status (Registered/NotRegistered):

$ az provider list --query "[].{Provider:namespace, Status:registrationState}" --out table

To be able to create resources, the providers need to be registered in the following namespaces: Compute, Network, Storage.

# register providers in namespaces
$ az provider register --namespace Microsoft.Compute
$ az provider register --namespace Microsoft.Network
$ az provider register --namespace Microsoft.Storage

Create K8S in AKS

Azure Kubernetes Service is supported in the following locations:

  • East US
  • West Europe
  • Central US
  • Canada Central
  • Canada East

Check https://docs.microsoft.com/en-us/azure/aks/container-service-quotas for updates.

You need to choose the Azure location in which the SMASHDOCs app in AKS will be deployed. List the short names of all locations and pick the one you want:

# list short name for all Azure locations
$ az account list-locations | jq .[].name

We will use canadacentral.

Pick a resource group name and create it in your chosen Azure location. We will use k8s-cc-gr as the group name and create the resource group in the Canada Central location.

# list resource groups you already have
$ az group list

# create resource group at desired location for AKS deployment
$ az group create --name k8s-cc-gr --location canadacentral

# make sure it's created
$ az group list

# show newly created group details
$ az group show  --name k8s-cc-gr

Choose a node flavor from the Azure website - https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general

We will use the Standard_B2ms node flavor - 2 vCPUs, 8GB RAM, 16GB SSD.

Pick a name for the k8s cluster; we will use k8s-cc-test. The following command creates the cluster with the specified name, node count, and node flavor. It may take some time to finish provisioning.

# create k8s cluster
$ az aks create --resource-group k8s-cc-gr --name k8s-cc-test --node-count 3 --node-vm-size Standard_B2ms --generate-ssh-keys

# show details about newly created cluster
$ az aks show --resource-group k8s-cc-gr --name k8s-cc-test

[JSON output with k8s Azure location, name, provisioningState, resourceGroup]

Look for "provisioningState": "Succeeded" in the output; it means the cluster is ready to use.
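If you prefer to poll just that field, the az --query option (standard JMESPath syntax, not SMASHDOCs-specific) can extract it:

# print only the provisioning state
$ az aks show --resource-group k8s-cc-gr --name k8s-cc-test --query provisioningState -o tsv
Succeeded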

Get k8s cluster credentials

To use the k8s cluster we need to get its credentials (certificates). The following command merges them into the kubectl config located at $HOME/.kube/config:

# get k8s credentials and merge it to kubectl config
$ az aks get-credentials -g k8s-cc-gr -n k8s-cc-test

The k8s CLI tool kubectl can manage multiple k8s clusters (local/dev/staging/prod, for example). In this step we view the available k8s cluster contexts, select the context for future use, and check cluster connectivity.

# view available contexts
$ kubectl config get-contexts
CURRENT   NAME          CLUSTER       AUTHINFO
*         k8s-cc-test   k8s-cc-test   clusterUser_k8s-cc-gr_k8s-cc-test

# use specified context
$ kubectl config use-context k8s-cc-test
Switched to context "k8s-cc-test".

# view cluster nodes
$ kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-24768417-0   Ready     agent     20h       v1.9.6
aks-nodepool1-24768417-1   Ready     agent     20h       v1.9.6
aks-nodepool1-24768417-2   Ready     agent     20h       v1.9.6

# view cluster pods (containers)
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-2xp79              1/1       Running   2          20h
kube-system   heapster-55f855b47-q9qxn                2/2       Running   0          20h
kube-system   kube-dns-v20-7c556f89c5-7pwxh           3/3       Running   0          20h
kube-system   kube-dns-v20-7c556f89c5-g7v6g           3/3       Running   0          20h
kube-system   kube-proxy-96qbk                        1/1       Running   0          20h
kube-system   kube-proxy-x9vms                        1/1       Running   0          20h
kube-system   kube-svc-redirect-84wq8                 1/1       Running   0          20h
kube-system   kube-svc-redirect-cnn77                 1/1       Running   0          20h
kube-system   kubernetes-dashboard-546f987686-99xd7   1/1       Running   4          20h
kube-system   tunnelfront-646d798b4-dkrxc             1/1       Running   0          20h

# view cluster info
$ kubectl cluster-info
Kubernetes master is running at https://k8s-cc-tes-k8s-cc-gr-2acabf-255c18b1.hcp.canadacentral.azmk8s.io:443
Heapster is running at https://k8s-cc-tes-k8s-cc-gr-2acabf-255c18b1.hcp.canadacentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://k8s-cc-tes-k8s-cc-gr-2acabf-255c18b1.hcp.canadacentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://k8s-cc-tes-k8s-cc-gr-2acabf-255c18b1.hcp.canadacentral.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Status Ready is OK; the nodes can now take workloads.

Create Azure cloud storage account

Using AKS we can store cluster persistent data in Azure, but first we need to create an Azure storage account.

The previous step created an additional resource group prefixed with MC_ - in our case MC_k8s-cc-gr_k8s-cc-test_canadacentral.

$ az group list --output table
Name                                    Location       Status
--------------------------------------  -------------  ---------
k8s-cc-gr                               canadacentral  Succeeded
MC_k8s-cc-gr_k8s-cc-test_canadacentral  canadacentral  Succeeded

Look for the group matching our cluster name with the MC prefix, and create the storage account in this MC-prefixed resource group. The account name must be lowercase and must not contain '-' or '_'.

We choose sdk8sccstorageaccount as storage account name.

# create Azure storage account
$ az storage account create --resource-group MC_k8s-cc-gr_k8s-cc-test_canadacentral --name sdk8sccstorageaccount --location canadacentral --sku Standard_LRS

# show Azure storage account details
$ az storage account show --name sdk8sccstorageaccount --resource-group MC_k8s-cc-gr_k8s-cc-test_canadacentral

Kubernetes storage class provisioning

The following Kubernetes StorageClass objects, with support for special access modes, need to be provisioned in the cluster before installing the SMASHDOCs app:

  • ReadWriteMany mode - the azure-file storage class for the assets volume; it is attached in read-write mode to the nginx and backend containers
  • ReadWriteOnce mode - the azure-disk storage class for the mongodb volume

Edit the storage class file for the SMASHDOCs app - azure-file-sc.yaml - and replace the storage account sdk8sccstorageaccount with the storage account you created in the previous step.

Download: azure-disk-sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  kind: Managed
  cachingmode: None

Download: azure-file-sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
parameters:
  storageAccount: sdk8sccstorageaccount # - replace with your storage account name
  skuName: Standard_LRS

Create storage classes in k8s

# create storage class "azure-file"
$ kubectl create -f azure-file-sc.yaml

# create storage class "azure-disk"
$ kubectl create -f azure-disk-sc.yaml

# display storage classes
$ kubectl get sc
NAME                PROVISIONER                AGE
azure-disk          kubernetes.io/azure-disk   1h
azure-file          kubernetes.io/azure-file   1h

Initialize k8s package manager - helm

Initialize the server-side part of the Kubernetes package manager helm, called tiller. Empty output from helm list is OK; it means no releases have been installed by helm in this k8s cluster yet.

# initialize server-side pod, tiller
$ helm init

# wait a few moments, before server-side initialization is finished
# upgrade tiller to latest version
$ helm init --upgrade

# show helm-deployed releases
$ helm list

# empty output expected, we didn't deploy anything with helm yet
$

Create k8s ingress

The standard way to receive incoming traffic in k8s is through a special component, provisioned by creating an Ingress object.

Install the nginx k8s ingress controller into the ingress-nginx namespace, without RBAC enabled:

# install nginx helm chart
$ helm install stable/nginx-ingress --namespace ingress-nginx --set rbac.create=false --set rbac.createRole=false --set rbac.createClusterRole=false

The ingress deployment will create an Azure LoadBalancer pointing to the newly created k8s cluster. We need to find out its public IP and create a DNS A record - azure.smashdocs.net in our case.

Execute the following command and look for the EXTERNAL-IP address - it will be <pending> at first, but later becomes an address like 40.85.221.174. This is the Azure load balancer's public IP address; the operation may take a few minutes to complete.

$ kubectl get service  -n ingress-nginx -l app=nginx-ingress
NAME                                                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
irreverant-ladybird-nginx-ingress-controller        LoadBalancer   10.0.162.217   40.85.221.174   80:31933/TCP,443:30924/TCP   13d

Let’s assume we want to deploy the SMASHDOCs app to https://azure.smashdocs.net.

Create a DNS A record pointing to the EXTERNAL-IP; in our case we’ve created azure.smashdocs.net with value 40.85.221.174.

Verify that DNS is provisioned correctly and k8s is working: open your URL in a browser or with a CLI utility like curl. The following output is expected:
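A minimal check with curl (the -k flag skips certificate verification, which is useful before the Letsencrypt certificate exists):

$ curl -k https://azure.smashdocs.net/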

default backend - 404

Install kube-lego (Letsencrypt SSL certs)

In this step we install the kube-lego app into our k8s cluster; it is needed for dynamic SSL certificate generation using the Letsencrypt API. Change your@email.com to your email address and run the following command:

# install kube-lego chart
$ helm install stable/kube-lego \
  --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
  --set config.LEGO_EMAIL=your@email.com

# list installed release status
$ helm list
NAME                REVISION    UPDATED                     STATUS      CHART                   NAMESPACE
knobby-labradoodle  1           Thu Jun  7 07:38:48 2018    DEPLOYED    nginx-ingress-0.20.1    ingress-nginx
vigilant-billygoat  1           Thu Jun  7 07:50:45 2018    DEPLOYED    kube-lego-0.4.2         default

Now we’ve successfully deployed a Kubernetes ingress with Letsencrypt SSL support.

Install SMASHDOCs app

The SMASHDOCs app can be deployed using a helm chart - a collection of templates which describes k8s components like deployments, services, volumes, configmaps, ingress, etc.

One possible way to organize resources (deployments, pods, volumes) in Kubernetes is by using namespaces. A common recommendation is to have one namespace per application.

Let’s choose a namespace name - we will use azure - and a k8s release name - we will use azure-smashdocs.

The release name is optional; if not specified, a random word1-word2 value is assigned, which can be looked up later by running helm list.

The actual chart content (deployment templates) is located in the chart archive smashdocs-x.x.x.tgz. Untar it, then change into the chart directory to adjust the install-specific settings:

# extract archive
$ tar zxvf smashdocs-x.x.x.tgz

# change directory
$ cd smashdocs

Inside you will find the smashdocs-values.yaml file, which describes the SMASHDOCs app settings: versions, keys, domain name settings, etc.

Download: smashdocs-values.yaml

# Default values for SMASHDOCs App
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.


# customer URL - dnsname.domain
# shortname is equal to dnsname without dashes
customer:
  dnsname: azure
  shortname: azure
  domain: smashdocs.net

replicaCount: 1

frontend:
  name: frontend
  image:
    repository: smashdocs/frontend
    tag: 2.9.7

backend:
  name: backend
  provisioningKey: 9c25ba5292a968514e9852ac0ee670af0028cdac1a9bbde870de5373c64da674
  image:
    repository: smashdocs/backend
    tag: 2.9.7

assets:
  name: assets
  persistentVolume:
    size: 40Gi
    storageClass: azure-file

email:
  smtpServerAddress: email-smtp.us-east-1.amazonaws.com
  smtpServerPort: 587
  smtpUsername: XXX
  smtpPassword: XXX

socketserver:
  name: socketserver
  command: backend_socket

worker:
  name: worker
  command: worker

beat:
  name: beat
  command: beat

nginx:
  name: nginx
  image:
    repository: smashdocs/nginx
    tag: 2.0.3

backup:
  name: backup
  image:
    repository: smashdocs/backup
    tag: latest

redis:
  name: redis
  image:
    repository: redis
    tag: 3.2

mongodb:
  enabled: true
  name: mongodb
  image:
    repository: mongo
    tag: 3.6
  persistentVolume:
    size: 20Gi
    storageClass: azure-disk

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

You will need to review and edit this file, replacing the following values:

  • provisioningKey - mandatory value, used internally for frontend-backend authentication; can be generated with a Python one-liner (requires Python installed on the system):
$ python -c "import random; print(random.randint(5,1000000))" | shasum -a 256 | awk '{print $1}'
2ae777842419e4ab1691655b3db1625412b816e8af70573378b32c81882cc13c

Place it in your smashdocs-values.yaml file, replacing the default provisioningKey value:

backend:
  name: backend
  provisioningKey: 9c25ba5292a968514e9852ac0ee670af0028cdac1a9bbde870de5373c64da674
  image:
    repository: smashdocs/backend
    tag: 2.9.7
  • dnsname - first word in the fully qualified domain name, azure in our case
  • shortname - used internally by the chart, equals dnsname without dashes, azure in our case
  • domain - customer domain, smashdocs.net
customer:
  dnsname: azure
  shortname: azure
  domain: smashdocs.net
  • email settings - server name, user name, password - used for outgoing email, e.g. user confirmation during registration

The default values are SMASHDOCs AWS SES credentials.

Optional values:

SMASHDOCs frontend and backend components can report runtime errors to Sentry; to use this, the following chart values need to be enabled in the sentry section:

sentry:
  enabled: true
  sentryURL: https://54a7890c3aa9457688c6560eb77bb28b:fffa5d77bd59403791ea038247d9cd36@sentry.io/278077
  serverName: azure-smashdocs-backend
  • enabled - true or false
  • sentryURL - Sentry project DSN URL, like https://54a7890c3aa9457688c6560eb77bb28b@sentry.io/278077
  • serverName - component name for Sentry reporting, default is shortname-frontend or shortname-backend

Now we need to create a Kubernetes secret with Docker login credentials. We use hub.docker.com for the private Docker image registry; as our partner, you can request your existing Docker login to be linked for image pull.

# create kubernetes docker login secret in given namespace with given credentials
kubectl -n azure create secret docker-registry smashdocs-dockerlogin --docker-username=YOURLOGIN --docker-password=YOURPASSWORD --docker-email=your@email.com
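If helm has not created the azure namespace yet, the secret creation above fails with a NotFound error; in that case create the namespace first:

$ kubectl create namespace azure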

Now we are ready to install the SMASHDOCs app into your k8s cluster. The following command checks the configuration and deployment file syntax. It outputs the k8s deployment yaml files, but no objects are created:

# check install, dry run
$ helm install --dry-run --name azure-smashdocs --namespace azure -f smashdocs-values.yaml .

Now let’s install the SMASHDOCs app from the helm chart, as release azure-smashdocs in namespace azure, with the app settings (values) defined in smashdocs-values.yaml:

# install SMASHDOCs app
$ helm install --name azure-smashdocs --namespace azure -f  smashdocs-values.yaml .

# list installed releases
$ helm list

NAME            REVISION    UPDATED                     STATUS      CHART                       NAMESPACE
azure-smashdocs     6           Fri Jun  1 08:55:05 2018    DEPLOYED    smashdocs-0.1.0             azure
bumptious-sheep 1           Fri May 11 13:32:28 2018    DEPLOYED    kube-lego-0.4.0             default
zeroed-markhor  1           Fri May 25 15:03:40 2018    DEPLOYED    nginx-ingress-0.19.2        kube-system

# get running pods in namespace = 'azure', it's freshly installed SMASHDOCs app
$ kubectl -n azure get pods
NAME                                            READY     STATUS              RESTARTS   AGE
azure-smashdocs-adminui-5f59b78468-p987n        1/1       Running             0          <invalid>
azure-smashdocs-backend-7b6769474b-xzcpt        0/1       ContainerCreating   0          <invalid>
azure-smashdocs-beat-756549df5b-tmqrw           0/1       ContainerCreating   0          <invalid>
azure-smashdocs-frontend-6cdb5df48d-mcm6v       1/1       Running             0          <invalid>
azure-smashdocs-mongodb-85b76bdd85-fxfrn        0/1       Pending             0          <invalid>
azure-smashdocs-nginx-6cd6c7c784-cjj5x          1/1       Running             0          <invalid>
azure-smashdocs-redis-f45f6d49c-nhwhz           0/1       PodInitializing     0          <invalid>
azure-smashdocs-socketserver-559f4ffcbb-w7948   0/1       ContainerCreating   0          <invalid>
azure-smashdocs-worker-7cccccf465-l542h         0/1       ContainerCreating   0          <invalid>

# wait a few minutes while Azure creates the volumes and attaches them to the containers
$ kubectl get pods -n azure

NAME                                          READY     STATUS    RESTARTS   AGE
azure-smashdocs-adminui-5f59b78468-p987n        1/1       Running   0          3m
azure-smashdocs-backend-7b6769474b-xzcpt        0/1       Running   0          3m
azure-smashdocs-beat-756549df5b-tmqrw           1/1       Running   0          3m
azure-smashdocs-frontend-6cdb5df48d-mcm6v       1/1       Running   0          3m
azure-smashdocs-mongodb-85b76bdd85-fxfrn        1/1       Running   0          3m
azure-smashdocs-nginx-6cd6c7c784-cjj5x          1/1       Running   0          3m
azure-smashdocs-redis-f45f6d49c-nhwhz           1/1       Running   0          3m
azure-smashdocs-socketserver-559f4ffcbb-w7948   1/1       Running   0          3m
azure-smashdocs-worker-7cccccf465-l542h         1/1       Running   0          3m

App component statuses:

  • READY 0/1 - app component is up, but its readinessProbe is not passing yet
  • READY 1/1 - app component is up and running, readinessProbe is OK

The first initialization of the backend component may take some time, because Azure network-attached storage with ReadWriteMany mode is slow.

Open azure.smashdocs.net in a browser.

Congratulations, you have SMASHDOCs running in Azure cloud!

Debug option: you can add --debug to the helm command when installing or upgrading a release.

Upgrade SMASHDOCs app

# change to the directory that contains the smashdocs chart
$ cd smashdocs_chart_directory

# list releases, look for smashdocs-x.x.x in CHART column
$ helm list

# upgrade SMASHDOCs release
$ helm upgrade -f smashdocs-values.yaml SMASHDOCS_RELEASE_NAME .
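If an upgrade misbehaves, helm keeps the revision history, so you can inspect it and roll back; REVISION is the number reported by helm history:

# list revisions of the release
$ helm history SMASHDOCS_RELEASE_NAME

# roll back to a previous revision
$ helm rollback SMASHDOCS_RELEASE_NAME REVISION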

Backup SMASHDOCs app assets and database

Azure cloud

Get Azure connection string, create storage bucket

# list resource groups
$ az group list --output=table
Name                               Location        Status
---------------------------------  --------------  ---------
ccgroup                            canadacentral   Succeeded
MC_ccgroup_azure-cc_canadacentral  canadacentral   Succeeded

# define the resource group that starts with the MC prefix
$ RESOURCE_GROUP=MC_ccgroup_azure-cc_canadacentral

# list storage accounts
$ az storage account list --output=table
CreationTime                      Kind     Location       Name            PrimaryLocation    ProvisioningState    ResourceGroup                      StatusOfPrimary
--------------------------------  -------  -------------  --------------  -----------------  -------------------  ---------------------------------  -----------------
2018-06-07T11:18:57.673477+00:00  Storage  canadacentral  sdccstorageacc  canadacentral      Succeeded            MC_ccgroup_azure-cc_canadacentral  available

# define storage account name
$ STORAGE_ACCOUNT=sdccstorageacc

# get Azure CLI connection string
$ AZURE_STORAGE_CONNECTION_STRING="$(az storage account show-connection-string -n $STORAGE_ACCOUNT -g $RESOURCE_GROUP --query connectionString -o tsv)"

# verify: it outputs the connection string for the Azure CLI
$ echo $AZURE_STORAGE_CONNECTION_STRING
DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=youraccountname;AccountKey=.....

# choose storage bucket name
$ AZURE_STORAGE_BUCKET=azure-k8s-backup

# create storage bucket
$ az storage container create -n $AZURE_STORAGE_BUCKET --public-access off --account-name=youraccountname --account-key=youraccountkey

Finished[#############################################################]  100.0000%
{
  "etag": "\"0x8D5D07B701617E5\"",
  "lastModified": "2018-06-12T15:44:52+00:00"
}

Encode Azure storage credentials in base64

$ echo -n "$AZURE_STORAGE_BUCKET" | base64
YXp1cmUtazhzLWJhY2t1cA==

$ echo -n "$AZURE_STORAGE_CONNECTION_STRING" | base64
RGVmYXVsdEVuZHBvaW50c1Byb3RvY29sPWh0dHBzO0VuZHBvaW50U3VmZml4PWNvcmUud2luZG93cy5uZXQ7QWNjb3VudE5hbWU9eW91cmFjY291bnRuYW1lO0FjY291bnRLZXk9Li4uLi4=

Include the encoded Azure storage values in a Kubernetes secret. Replace the name with your namespace-backupsecret and the values with the base64-encoded Azure storage container name, connection string, and GPG key id, which will be used for encrypting backups before upload to Azure storage.

An empty GPG_KEY_ID disables backup encryption before the upload to the cloud; this is not recommended.

Download: backup-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: azure-backupsecret
type: Opaque
data:
  BACKUP_CONTAINER: YXp1cmUtazhzLWJhY2t1cA==
  AZURE_STORAGE_CONNECTION_STRING: RGVmYXVsdEVuZHBvaW50c1Byb3RvY29sPWh0dHBzO0VuZHBvaW50U3VmZml4PWNvcmUud2luZG93cy5uZXQ7QWNjb3VudE5hbWU9eW91cmFjY291bnRuYW1lO0FjY291bnRLZXk9Li4uLi4=
  GPG_KEY_ID: MTIzNDU2Nzg=

Now create the secret in Kubernetes with your backup settings (base64-encoded: blob storage container name, Azure connection string, GPG public key id):

# create the secret in the kubernetes namespace "azure"
$ kubectl -n azure create -f backup-secret.yaml

# verify secrets existence
$ kubectl -n azure get secrets

azure-smashdocs-nginx-tls                     kubernetes.io/tls                     2         27d
azure-storage-account-sdccstorageacc-secret   Opaque                                2         27d
default-token-gf9qs                           kubernetes.io/service-account-token   3         27d
smashdocs-backupsecret                        Opaque                                8         38m
smashdocs-dockerlogin                         kubernetes.io/dockerconfigjson        1         13d

After the secret is created, you need to edit the cronjob schedule, which determines how often the backup is triggered. Review the backup section in your values yaml file:

backup:
  enabled: true
  name: backup
  type: azure
  schedule: "5 5 * * *"
  image:
    repository: smashdocs/backup
    tag: latest

Backup cronjob steps:

  • create tar.gz archives of the SMASHDOCs assets and a mongodb dump
  • fetch the public GPG key material from hkps.pool.sks-keyservers.net (if a GPG key id is specified, optional)
  • encrypt the dumps with the fetched public key (optional)
  • upload the dumps to the Azure blob storage container with the name specified in the backup secret

After the cronjob description is created, upgrade the SMASHDOCs app release to apply the changes, then verify the cronjob status and secret content:

$ helm upgrade --install --namespace azure -f smashdocs-values.yaml azure-smashdocs .

# get cronjob status
$ kubectl -n azure get cronjob
NAME                     SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
azure-smashdocs-backup   5 5 * * *   False     1         14s             5m

# check cronjob results
$ kubectl -n azure get pods | grep cronjob
azure-smashdocs-backup-1528952700-mhvb7        0/1       Completed   0          1m


# get backup container events - image pull, container start, etc
$ kubectl -n azure describe pod azure-smashdocs-backup-1528952700-mhvb7

# get backup container logs - actual backup operations
$ kubectl -n azure logs azure-smashdocs-backup-1528952700-mhvb7
[backup logs]
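To test the backup without waiting for the schedule, recent kubectl versions can spawn a one-off job from the cronjob template; the job name manual-backup-test is arbitrary:

$ kubectl -n azure create job --from=cronjob/azure-smashdocs-backup manual-backup-test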

Troubleshooting

Get namespaces list

$ kubectl get namespaces
NAME            STATUS    AGE
azure           Active    31d
default         Active    36d
ingress-nginx   Active    36d
kube-public     Active    36d
kube-system     Active    36d

Display pods in a given namespace

$ kubectl get pods -n azure

Describe the pod lifecycle and look for pod events

$ kubectl -n azure describe pod azure-smashdocs-mongodb-85b76bdd85-fxfrn
...
Events:
  Type     Reason                 Age              From                               Message
  ----     ------                 ----             ----                               -------
  Warning  FailedScheduling       4m (x7 over 5m)  default-scheduler                  PersistentVolumeClaim is not bound: "azure-smashdocs-mongodb" (repeated 2 times)
  Normal   Scheduled              4m               default-scheduler                  Successfully assigned azure-smashdocs-mongodb-85b76bdd85-fxfrn to aks-nodepool1-24768417-1
  Normal   SuccessfulMountVolume  4m               kubelet, aks-nodepool1-24768417-1  MountVolume.SetUp succeeded for volume "default-token-d7zdh"
  Warning  FailedMount            1m               kubelet, aks-nodepool1-24768417-1  Unable to mount volumes for pod "azure-smashdocs-mongodb-85b76bdd85-fxfrn_azure(ee0a6920-68ea-11e8-92cf-0a58ac1f0520)": timeout expired waiting for volumes to attach/mount for pod "azure"/"azure-smashdocs-mongodb-85b76bdd85-fxfrn". list of unattached/unmounted volumes=[mongodb-data]
  Normal   SuccessfulMountVolume  1m               kubelet, aks-nodepool1-24768417-1  MountVolume.SetUp succeeded for volume "pvc-edc6720e-68ea-11e8-92cf-0a58ac1f0520"
  Normal   Pulling                1m               kubelet, aks-nodepool1-24768417-1  pulling image "mongo:3.6"
  Normal   Pulled                 48s              kubelet, aks-nodepool1-24768417-1  Successfully pulled image "mongo:3.6"
  Normal   Created                48s              kubelet, aks-nodepool1-24768417-1  Created container
  Normal   Started                48s              kubelet, aks-nodepool1-24768417-1  Started container

Get pod logs from stdout/stderr (nginx, frontend, backend, socketserver)

$ kubectl -n azure logs azure-smashdocs-backend-7b6769474b-xzcpt
 ...
 [backend logs here]

Check storage classes

$ kubectl get sc
NAME                PROVISIONER                AGE
azure-disk          kubernetes.io/azure-disk   6h
azure-file          kubernetes.io/azure-file   6h

$ kubectl describe sc azure-file
Name:                  azure-file
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           kubernetes.io/azure-file
Parameters:            skuName=Standard_LRS,storageAccount=sdk8sccstorageaccount
AllowVolumeExpansion:  <unset>
MountOptions:
  dir_mode=0777
  file_mode=0777
ReclaimPolicy:      Retain
VolumeBindingMode:  Immediate
Events:             <none>

Check the persistent volume claims and persistent volume status; Status Bound is OK

$ kubectl -n azure get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS   REASON    AGE
persistentvolume/pvc-edc46301-68ea-11e8-92cf-0a58ac1f0520   40Gi       RWX            Delete           Bound     azure/azure-smashdocs-assets    azure-file               39m
persistentvolume/pvc-edc6720e-68ea-11e8-92cf-0a58ac1f0520   20Gi       RWO            Delete           Bound     azure/azure-smashdocs-mongodb   azure-disk               38m

NAME                                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/azure-smashdocs-assets    Bound     pvc-edc46301-68ea-11e8-92cf-0a58ac1f0520   40Gi       RWX            azure-file     39m
persistentvolumeclaim/azure-smashdocs-mongodb   Bound     pvc-edc6720e-68ea-11e8-92cf-0a58ac1f0520   20Gi       RWO            azure-disk     39m

Delete pods stuck in Terminating status

# with --all-namespaces, column 1 is the namespace, column 2 the pod name, and column 4 the STATUS
kubectl get pods --all-namespaces | awk -v ns="YOUR_NAMESPACE" '$1==ns && $4=="Terminating" {print "kubectl delete -n "ns" pod "$2" --grace-period=0 --force"}' | bash

Access pods within k8s cluster

Let’s assume our release is named azure-smashdocs and is running in the azure namespace, and we need access to the mongo database. This can be done with kubectl port forwarding.

Please change azure to YOUR_NAMESPACE when executing the commands from the example below:

# list installed releases, to find out proper namespace
$ helm list
NAME                 REVISION        UPDATED                         STATUS          CHART                   NAMESPACE
azure-smashdocs      166             Tue Jul 10 17:46:02 2018        DEPLOYED        smashdocs-0.2.0         azure

# OR
$ kubectl get namespaces
NAME            STATUS    AGE
azure           Active    28d
default         Active    33d
ingress-nginx   Active    33d
kube-public     Active    33d
kube-system     Active    33d

# so our components are running in the 'azure' namespace; list its services
$ kubectl -n azure get svc
NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
azure-smashdocs-adminui        ClusterIP   10.0.172.143   <none>        80/TCP      1h
azure-smashdocs-backend        ClusterIP   10.0.187.210   <none>        8080/TCP    3h
azure-smashdocs-beat           ClusterIP   10.0.169.224   <none>        8080/TCP    3h
azure-smashdocs-frontend       ClusterIP   10.0.108.36    <none>        80/TCP      3h
azure-smashdocs-mongodb        ClusterIP   10.0.23.47     <none>        27017/TCP   3h
azure-smashdocs-nginx          ClusterIP   10.0.109.157   <none>        80/TCP      3h
azure-smashdocs-redis          ClusterIP   10.0.159.33    <none>        6379/TCP    3h
azure-smashdocs-socketserver   ClusterIP   10.0.233.101   <none>        8080/TCP    3h
azure-smashdocs-worker         ClusterIP   10.0.193.207   <none>        8080/TCP    3h

# forward local 37017 to remote service azure-smashdocs-mongodb port 27017
$ kubectl -n azure port-forward svc/azure-smashdocs-mongodb 37017:27017
Forwarding from 127.0.0.1:37017 -> 27017
Forwarding from [::1]:37017 -> 27017

Now you can connect with MongoHub, Studio 3T, or the mongo CLI to localhost:37017; the connection is forwarded to the mongodb pod inside the cluster.
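For example, with the mongo shell from the MongoDB 3.x tools (assuming it is installed locally):

$ mongo --host 127.0.0.1 --port 37017
> show dbs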

Delete SMASHDOCs release

WARNING - DESTRUCTIVE actions

Delete the SMASHDOCs k8s release:

helm delete RELEASE_NAME

Delete the k8s cluster from AKS:

az aks delete -n k8s-cc-test -g k8s-cc-gr

Multitenancy

SMASHDOCs supports multiple tenants on one installation. Each tenant in SMASHDOCs is called an Organization and can be created and updated via the Provisioning API: provisioning.html

Redundant Setup

Please contact us.