
This article will show how to get Synology DSM working on Docker.
At the time of writing, there is no known issue with DSM "6.2.3-25426".

Requirements (at the time of writing):

  • Xpenology Jun's Loader 1.03b for ds3615xs (which can be found in this forum)
  • DSM 6.2.3 (25426) PAT file (here)

Source: https://github.com/uxora-com/xpenology-docker

Warning
This system is for testing and educational purposes ONLY. It is NOT recommended for use in a production environment, because it has no support and has not been proven stable/reliable.

If DATA LOSS happens while using this system, it is ONLY on your own responsibility.

If you are happy after testing this product, I would highly recommend going for original Synology hardware, especially for a PRODUCTION environment where data is critical.

We recommend ...
... at least 512MB RAM
... at least 16GB of free disk space

Configure LXC container [Proxmox users only]

This part is for users running Proxmox or an LXC container; if that is not your case, skip it and go to the next part.

Add the overlay and aufs modules and the nested option

On the Proxmox host, execute as root (or with sudo):

# Check if the nested virtualization option is set (kvm_amd here; use kvm_intel on Intel CPUs)
[root@host]$ cat /sys/module/kvm_amd/parameters/nested
    0

# Set the nested option
[root@host]$ echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
#For Intel# echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

# Reload the kernel module
[root@host]$ modprobe -r kvm_amd && modprobe kvm_amd
#For Intel# modprobe -r kvm_intel && modprobe kvm_intel

# Check the nested virtualization option again
[root@host]$ cat /sys/module/kvm_amd/parameters/nested
    1


# Add overlay and aufs kernel modules for docker lxc
[root@host]$ echo -e "overlay\naufs" >> /etc/modules-load.d/modules.conf

# Reboot or load the modules now
[root@host]$ modprobe aufs
[root@host]$ modprobe overlay

# Check that the modules are active
[root@host]$ lsmod | grep -E 'overlay|aufs'

# Add permission to kvm
[root@host]$ chmod o+rw /dev/kvm
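The Intel/AMD split above can be automated. Here is a sketch (the helper name `kvm_flavor` is my own, not from the original tutorial) that picks the right KVM module from the CPU flags:

```shell
# kvm_flavor: read CPU flags on stdin and echo which KVM flavor applies.
kvm_flavor() {
    flags=$(cat)   # buffer stdin so both grep checks can see it
    if printf '%s\n' "$flags" | grep -qw vmx; then
        echo intel                        # Intel VT-x
    elif printf '%s\n' "$flags" | grep -qw svm; then
        echo amd                          # AMD-V
    else
        echo none
    fi
}

# On the Proxmox host you could then do:
#   flavor=$(kvm_flavor < /proc/cpuinfo)
#   echo "options kvm-${flavor} nested=1" > "/etc/modprobe.d/kvm-${flavor}.conf"
#   modprobe -r "kvm_${flavor}" && modprobe "kvm_${flavor}"
kvm_flavor < /proc/cpuinfo
```

If this prints `none`, hardware virtualization is disabled in the BIOS/UEFI and nested virtualization cannot work.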

 

Create LXC container

Create a new unprivileged LXC container:

  • With the template "debian-10-standard_10.5-1_amd64.tar.gz" downloaded in Proxmox VE
  • Core 1, RAM 1GB, Swap 1GB, Root disk 32GB

Edit the container config file /etc/pve/lxc/111.conf to look like the following:

/etc/pve/lxc/111.conf
	arch: amd64
	cores: 1
	features: keyctl=1,nesting=1
	hostname: ct-deb10-docker
	memory: 1024
	mp0: /var/lib/vz/bindmounts/shared,mp=/shared,replicate=0
	net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=CA:5A:B3:F4:71:09,ip=dhcp,type=veth
	ostype: debian
	rootfs: local:111/vm-111-disk-0.raw,size=32G
	swap: 1024
	unprivileged: 1
	lxc.apparmor.profile: lxc-container-default-cgns-with-mounting
	lxc.apparmor.raw: mount,
	lxc.cgroup.devices.allow: c 10:200 rwm
	lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
	lxc.cgroup.devices.allow: c 10:232 rwm
	lxc.mount.entry: /dev/kvm dev/kvm none bind,create=file,rw,uid=0,gid=105
	lxc.cgroup.devices.allow: c 10:238 rwm
	lxc.mount.entry: /dev/vhost-net dev/vhost-net none bind,create=file
	lxc.cgroup.devices.allow: c 10:241 rwm
	lxc.mount.entry: /dev/vhost-vsock dev/vhost-vsock none bind,create=file
	lxc.cgroup.devices.allow: b 7:* rwm
	lxc.cgroup.devices.allow: c 10:237 rwm
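The `c 10:232`-style entries in the `lxc.cgroup.devices.allow` lines are device major:minor numbers. You can verify them on your own host with `stat` (note that `%t`/`%T` print in hex), shown here on `/dev/null` since it exists everywhere:

```shell
# Print the type and major:minor numbers (in hex) of a device node.
# On the Proxmox host, run this against /dev/kvm, /dev/net/tun,
# /dev/vhost-net and /dev/vhost-vsock to confirm the allow-list numbers.
stat -c 'type=%F major:minor=%t:%T' /dev/null
```

For `/dev/null` this prints `major:minor=1:3`; `/dev/kvm` should come back as `a:e8`, i.e. 10:232 in decimal, matching the config above.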

Start the container with pct start 111

Then access its console with pct enter 111 or lxc-attach --name 111

Then continue to the next part to configure this Linux container.

Pre-requisite

Check virtual host capability

Your host needs virtualization capability for this Docker setup to work.

On the host server, open a shell as root and execute the following commands:

# Check cpu virtualization
[root@host]$ egrep --color '(svm|vmx)' /proc/cpuinfo
[root@host]$ lscpu | egrep --color '(svm|vmx)'

# Check virtual host capability
[root@host]$ apt-get install libvirt-clients
[root@host]$ virt-host-validate
    QEMU: Checking for hardware virtualization                                 : PASS
    QEMU: Checking if device /dev/kvm exists                                   : PASS
    QEMU: Checking if device /dev/kvm is accessible                            : PASS
    QEMU: Checking if device /dev/vhost-net exists                             : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
    QEMU: Checking if device /dev/net/tun exists                               : PASS
    QEMU: Checking for cgroup 'cpu' controller support                         : PASS
    QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
    QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
    QEMU: Checking for cgroup 'memory' controller support                      : PASS
    QEMU: Checking for cgroup 'devices' controller support                     : PASS
    QEMU: Checking for cgroup 'blkio' controller support                       : PASS
    QEMU: Checking for device assignment IOMMU support                         : PASS
    QEMU: Checking if IOMMU is enabled by kernel                               : PASS
     LXC: Checking for Linux >= 2.6.26                                         : PASS
     LXC: Checking for namespace ipc                                           : PASS
     LXC: Checking for namespace mnt                                           : PASS
     LXC: Checking for namespace pid                                           : PASS
     LXC: Checking for namespace uts                                           : PASS
     LXC: Checking for namespace net                                           : PASS
     LXC: Checking for namespace user                                          : PASS
     LXC: Checking for cgroup 'cpu' controller support                         : PASS
     LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
     LXC: Checking for cgroup 'cpuset' controller support                      : PASS
     LXC: Checking for cgroup 'memory' controller support                      : PASS
     LXC: Checking for cgroup 'devices' controller support                     : PASS
     LXC: Checking for cgroup 'freezer' controller support                     : PASS
     LXC: Checking for cgroup 'blkio' controller support                       : PASS
     LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

Troubleshooting:

  • If there is a /dev/kvm permission issue: chmod o+rw /dev/kvm
  • If there is a fuse issue: modprobe fuse
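The checks above boil down to a few device files and CPU flags. Here is a minimal pre-flight sketch (the helper name `preflight` is my own) you can paste into a root shell before installing anything:

```shell
# preflight: report the devices and CPU flags this setup depends on.
preflight() {
    for dev in /dev/kvm /dev/net/tun /sys/fs/fuse/connections; do
        if [ -e "$dev" ]; then
            echo "ok: $dev"
        else
            echo "MISSING: $dev"
        fi
    done
    if grep -qEw 'svm|vmx' /proc/cpuinfo; then
        echo "ok: cpu virtualization flags"
    else
        echo "MISSING: svm/vmx (enable VT-x/AMD-V in the BIOS)"
    fi
}
preflight
```

Any `MISSING` line maps to one of the troubleshooting items above (or to a BIOS setting).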

Install Docker

# Install docker
[root@host]$ apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y
[root@host]$ curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
[root@host]$ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
[root@host]$ apt-get update && apt-get install docker-ce -y

# Change the storage driver to overlay2 to save disk space, then restart docker
# (note the ">" overwrite: appending with ">>" would corrupt the JSON on a rerun)
[root@host]$ echo -e '{\n  "storage-driver": "overlay2"\n}' > /etc/docker/daemon.json
[root@host]$ systemctl restart docker
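Docker only reads /etc/docker/daemon.json at startup, and a malformed file keeps the daemon from starting at all, so it is worth validating the JSON before restarting. A small sketch (the helper name `write_daemon_json` is my own; it assumes python3 is installed):

```shell
# write_daemon_json <path>: write a minimal daemon.json selecting overlay2,
# then parse it back with python3 as a sanity check (non-zero exit on bad JSON).
write_daemon_json() {
    printf '{\n  "storage-driver": "overlay2"\n}\n' > "$1"
    python3 -c 'import json, sys; json.load(open(sys.argv[1]))' "$1"
}

# On the Docker host:
#   write_daemon_json /etc/docker/daemon.json && systemctl restart docker
```

After the restart, `docker info` should report `Storage Driver: overlay2`.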

Bootloader in a web server

The bootloader "synoboot_103b_ds3615xs_virtio_9p.img", downloaded from this forum, needs to be stored somewhere that provides a direct URL to the file.
For example, you can:

  • Store it on your own web server
  • Upload it to gofile.io free file storage sharing, then use the "download" URL

This URL will be used as the BOOTLOADER_URL parameter in Docker.

Start Xpenology Docker

You can find all the documentation and instructions at https://github.com/uxora-com/xpenology-docker .

Simple run

# Simple docker run
[root@host]$ docker run --privileged \
  -e BOOTLOADER_URL="http://example.com/path/synoboot.img" \
  uxora/xpenology

More advanced run

# Run with more specific parameters
[root@host]$ docker run --privileged --cap-add=NET_ADMIN \
  --device=/dev/net/tun --device=/dev/kvm \
  -p 5000:5000 -p 5001:5001 -p 2222:22 -p 8080:80 \
  -e CPU="qemu64" \
  -e THREADS=1 \
  -e RAM=512 \
  -e DISK_SIZE="8G 16G" \
  -e DISK_PATH="/image" \
  -e BOOTLOADER_URL="http://example.com/path/synoboot.img" \
  -e BOOTLOADER_AS_USB="Y" \
  -e VM_ENABLE_VIRTIO="Y" \
  -v /shared/data:/datashare \
  uxora/xpenology
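The same advanced run can be kept in a compose file for easy restarts. This is a sketch I wrote from the flags above, not an official file from the uxora-com/xpenology-docker repo; the service name is my own choice:

```yaml
version: "3"
services:
  xpenology:
    image: uxora/xpenology
    privileged: true
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
      - /dev/kvm
    ports:
      - "5000:5000"
      - "5001:5001"
      - "2222:22"
      - "8080:80"
    environment:
      CPU: qemu64
      THREADS: "1"
      RAM: "512"
      DISK_SIZE: "8G 16G"
      DISK_PATH: /image
      BOOTLOADER_URL: "http://example.com/path/synoboot.img"
      BOOTLOADER_AS_USB: "Y"
      VM_ENABLE_VIRTIO: "Y"
    volumes:
      - /shared/data:/datashare
```

Then `docker-compose up -d` starts it in the background and `docker-compose logs -f` follows the boot.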

Install Xpenology DSM

Once your Xpenology Docker container is running, follow this tutorial to install Xpenology DSM by opening a web page at <HOST_IP>:5000.

Note: Do not forget to change the vid/pid (as explained in the tutorial) to get minor updates working.
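For reference, the vid/pid values live in the loader's grub.cfg. Assuming you have loop-mounted the image's boot partition somewhere (e.g. with `losetup -P` and `mount`), a helper like this can rewrite the two lines; `set_vid_pid`, the mount path, and the hex values below are all my own placeholders, not values from the tutorial:

```shell
# set_vid_pid <grub.cfg path> <vid> <pid>
# Rewrites the "set vid=..." / "set pid=..." lines in the loader's grub.cfg.
set_vid_pid() {
    sed -i "s/^set vid=.*/set vid=$2/; s/^set pid=.*/set pid=$3/" "$1"
}

# Example, assuming the loader partition is mounted at /mnt (placeholder values):
#   set_vid_pid /mnt/grub/grub.cfg 0x0001 0x0002
```

Remember to unmount and detach the loop device afterwards so the image is not corrupted.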

 

HTH,
Michel.

 

Reference
Forum Xpenology (xpenology.com)
Tutorial: DSM 6.x on Proxmox (Thread on xpenology.com)
Proxmox backup template (Thread on xpenology.com)
Xpenology running on docker (Thread on xpenology.com)
Tutorial to compile xpenology dsm driver (xpenology.club)
Install Xpenology DSM 6.1.x on Proxmox (uxora.com)
Install Xpenology DSM 6.2.x on Proxmox (uxora.com)

 
