
This article shows how to get Synology DSM (Xpenology) running on Docker.
At the time of writing, DSM "6.2.3-25426" works without any known issues.

Requirements (at the time of writing):

  • Xpenology Jun's Loader 1.03b for ds3615xs (which can be found in this forum)
  • DSM 6.2.3 (25426) file named "DSM_DS3615xs_25426.pat" (here)

Source: https://github.com/uxora-com/xpenology-docker

Warning
This system is for testing or educational purposes ONLY. It is NOT recommended for use in a production environment, because it has no support and has not been proven stable or reliable.

If DATA LOSS occurs while using this system, it is SOLELY your own responsibility.

If you are happy with this product after testing it, I highly recommend going for original Synology hardware, especially for a PRODUCTION environment where data is critical.

We recommend:

  • at least 512 MB of RAM
  • at least 16 GB of free disk space
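A quick sanity check on the host (the df path below is only an example; point it at whichever filesystem will hold the Docker data):

# Check available memory and free disk space
[root@host]$ free -m
[root@host]$ df -h /var/lib/docker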

Configure LXC container [Proxmox users only]

This part is for users running Proxmox or an LXC container; if that is not your case, skip it and go to the next part.

Add the overlay and aufs modules and the nested option

On proxmox host, execute as root (or with sudo):


# Check if the nested virtualization option is set (use kvm_amd instead of kvm_intel on AMD CPUs)
[root@proxmox]$ cat /sys/module/kvm_intel/parameters/nested
    N

# Set the nested option
[root@proxmox]$ echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
#For AMD# echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

# Reload the kernel module (make sure no VM is running)
[root@proxmox]$ modprobe -r kvm_intel && modprobe kvm_intel
#For AMD# modprobe -r kvm_amd && modprobe kvm_amd

# Check the nested virtualization option again
[root@proxmox]$ cat /sys/module/kvm_intel/parameters/nested
    Y


# Add overlay and aufs kernel module for docker lxc
[root@proxmox]$ echo -e "overlay\naufs" >> /etc/modules-load.d/modules.conf

# Reboot or load modules
[root@proxmox]$ modprobe aufs
[root@proxmox]$ modprobe overlay

# Check if the modules are active
[root@proxmox]$ lsmod | grep -E 'overlay|aufs'

# Add permission to kvm
[root@proxmox]$ chmod o+rw /dev/kvm
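
Note that the chmod above does not survive a reboot. If you want the permission to persist, one option (an assumption, not part of the original guide) is a udev rule:

# Optional: keep /dev/kvm world read/write across reboots via a udev rule
[root@proxmox]$ echo 'KERNEL=="kvm", MODE="0666"' > /etc/udev/rules.d/99-kvm.rules
[root@proxmox]$ udevadm control --reload-rules && udevadm trigger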

 

Create LXC container

Create a new unprivileged LXC container (a pct create sketch follows the list):

  • Using the template "debian-10-standard_10.5-1_amd64.tar.gz" downloaded in Proxmox VE
  • 1 core, 1 GB RAM, 1 GB swap, 32 GB root disk
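
If you prefer the command line over the web GUI, here is a sketch of the equivalent pct create command (the storage name "local" and VMID 111 are assumptions matching the conf file below):

# Sketch: create the container from the Proxmox shell (adjust storage and VMID to your setup)
[root@proxmox]$ pct create 111 local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz \
    --hostname ct-deb10-docker --cores 1 --memory 1024 --swap 1024 \
    --rootfs local:32 --unprivileged 1 --features keyctl=1,nesting=1 \
    --net0 name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,type=veth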

Edit the container conf file /etc/pve/lxc/111.conf so it looks like the following:

/etc/pve/lxc/111.conf
	arch: amd64
	cores: 1
	features: keyctl=1,nesting=1
	hostname: ct-deb10-docker
	memory: 1024
	mp0: /var/lib/vz/bindmounts/shared,mp=/shared,replicate=0
	net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=CA:5A:B3:F4:71:09,ip=dhcp,type=veth
	ostype: debian
	rootfs: local:111/vm-111-disk-0.raw,size=32G
	swap: 1024
	unprivileged: 1
	lxc.apparmor.profile: lxc-container-default-cgns-with-mounting
	lxc.apparmor.raw: mount,
	lxc.cgroup.devices.allow: c 10:200 rwm
	lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
	lxc.cgroup.devices.allow: c 10:232 rwm
	lxc.mount.entry: /dev/kvm dev/kvm none bind,create=file,rw,uid=0,gid=105
	lxc.cgroup.devices.allow: c 10:238 rwm
	lxc.mount.entry: /dev/vhost-net dev/vhost-net none bind,create=file
	lxc.cgroup.devices.allow: c 10:241 rwm
	lxc.mount.entry: /dev/vhost-vsock dev/vhost-vsock none bind,create=file
	lxc.cgroup.devices.allow: b 7:* rwm
	lxc.cgroup.devices.allow: c 10:237 rwm
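
Before starting the container, make sure the mp0 bind-mount source exists on the Proxmox host; since the container is unprivileged, you may also want to hand its ownership to the container's root user (mapped to UID 100000 by default, assuming the standard Proxmox idmap):

# Create the bind-mount source directory used by mp0 above
[root@proxmox]$ mkdir -p /var/lib/vz/bindmounts/shared
# Optional: give the unprivileged container's root (UID/GID 100000) ownership
[root@proxmox]$ chown 100000:100000 /var/lib/vz/bindmounts/shared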

Start the container with pct start 111

Then access its console with pct enter 111 or lxc-attach --name 111
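
Once inside the container, a quick check (not from the original guide) confirms that the devices bind-mounted in the conf file are visible:

# Inside the container: verify the bind-mounted devices exist
[root@ct-deb10-docker]$ ls -l /dev/kvm /dev/net/tun /dev/vhost-net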

Then continue to the next part to configure this linux container.

Prerequisites

Check virtual host capability

Your host needs virtualization capability for this Docker setup to work.

On the host server, open a shell as root and execute the following commands:


# Check cpu virtualization
[root@host]$ egrep --color '(svm|vmx)' /proc/cpuinfo
[root@host]$ lscpu | egrep --color '(svm|vmx)'

# Check virtual host capability
[root@host]$ apt-get install libvirt-clients
[root@host]$ virt-host-validate
    QEMU: Checking for hardware virtualization                                 : PASS
    QEMU: Checking if device /dev/kvm exists                                   : PASS
    QEMU: Checking if device /dev/kvm is accessible                            : PASS
    QEMU: Checking if device /dev/vhost-net exists                             : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
    QEMU: Checking if device /dev/net/tun exists                               : PASS
    QEMU: Checking for cgroup 'cpu' controller support                         : PASS
    QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
    QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
    QEMU: Checking for cgroup 'memory' controller support                      : PASS
    QEMU: Checking for cgroup 'devices' controller support                     : PASS
    QEMU: Checking for cgroup 'blkio' controller support                       : PASS
    QEMU: Checking for device assignment IOMMU support                         : PASS
    QEMU: Checking if IOMMU is enabled by kernel                               : PASS
     LXC: Checking for Linux >= 2.6.26                                         : PASS
     LXC: Checking for namespace ipc                                           : PASS
     LXC: Checking for namespace mnt                                           : PASS
     LXC: Checking for namespace pid                                           : PASS
     LXC: Checking for namespace uts                                           : PASS
     LXC: Checking for namespace net                                           : PASS
     LXC: Checking for namespace user                                          : PASS
     LXC: Checking for cgroup 'cpu' controller support                         : PASS
     LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
     LXC: Checking for cgroup 'cpuset' controller support                      : PASS
     LXC: Checking for cgroup 'memory' controller support                      : PASS
     LXC: Checking for cgroup 'devices' controller support                     : PASS
     LXC: Checking for cgroup 'freezer' controller support                     : PASS
     LXC: Checking for cgroup 'blkio' controller support                       : PASS
     LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS

Troubleshooting:

  • If there is a /dev/kvm permission issue: chmod o+rw /dev/kvm
  • If there is a fuse issue: modprobe fuse (see the note below to make it persistent)
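
These fixes do not survive a reboot; to have the fuse module load automatically at boot (an assumption, adapt the path to your distro), add it to the modules list:

# Optional: load the fuse module at every boot
[root@host]$ echo "fuse" >> /etc/modules-load.d/modules.conf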

Install Docker


# Install docker
[root@host]$ apt-get update && apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y
[root@host]$ curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
[root@host]$ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
[root@host]$ apt-get update && apt-get install docker-ce -y

# Change the storage driver to overlay2 to save disk space
# (this creates /etc/docker/daemon.json; merge by hand if the file already contains settings)
[root@host]$ echo -e '{\n  "storage-driver": "overlay2"\n}' > /etc/docker/daemon.json
[root@host]$ systemctl restart docker
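
You can then confirm that Docker picked up the new storage driver (a quick check, not part of the original article):

# Confirm the active storage driver
[root@host]$ docker info | grep -i "storage driver"
    Storage Driver: overlay2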

Bootloader in a web server

The bootloader "synoboot_103b_ds3615xs_virtio_9p.img", downloaded from this forum , need to be stored in a place where it can provide a URL to the file.
For example, you can :

  • Store it in your own web server
  • Upload it to gofile.io, a free file-sharing service, then use the "download" button link as the URL

The URL will be used as the BOOTLOADER_URL parameter in Docker.
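
If you just need a throwaway HTTP endpoint on your LAN, one simple option (an example, not part of the original instructions) is Python's built-in web server, run from the folder holding the image:

# Serve the bootloader image over HTTP on port 8000 (Ctrl+C to stop)
[root@host]$ cd /path/to/bootloader-folder && python3 -m http.server 8000
# BOOTLOADER_URL would then be http://<HOST_IP>:8000/synoboot_103b_ds3615xs_virtio_9p.img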

UPDATE:

If you do not want to use a web server but a local folder (e.g. /xpenodock/syst), then:

  • Copy the bootloader to /xpenodock/syst/bootloader.img
  • Then use the following parameters in the docker run command line (see the sketch after this list):
    • -e DISK_PATH="/xpy_syst"
    • -v /xpenodock/syst:/xpy_syst
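
In practice that boils down to something like this (the source path of the image is a placeholder):

# Prepare a local bootloader folder instead of using a URL
[root@host]$ mkdir -p /xpenodock/syst
[root@host]$ cp /path/to/synoboot_103b_ds3615xs_virtio_9p.img /xpenodock/syst/bootloader.img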

Start Xpenology Docker

You can find all the documentation and instructions at https://github.com/uxora-com/xpenology-docker .

Simple run


# Simple docker run
[root@host]$ docker run --privileged \
  -e BOOTLOADER_URL="http://example.com/path/synoboot.img" \
  uxora/xpenology

More advanced run: my preferred configuration

# To avoid ip_tables error on docker
[root@host]$ modprobe ip_tables

# Create directories which will store vm data
[root@host]$ mkdir -vp /xpenodock/{syst,data,lnk}

# Run with more specific parameters
[root@host]$ docker run --privileged --cap-add=NET_ADMIN \
  --device=/dev/net/tun --device=/dev/kvm \
  -p 5000:5000 -p 5001:5001 -p 2222:22 -p 8080:80 \
  -e CPU="qemu64" \
  -e THREADS=1 \
  -e RAM=512 \
  -e DISK_SIZE="8G" \
  -e DISK_PATH="/xpy_syst" \
  -e BOOTLOADER_URL="http://192.168.0.14/joomla/tmp/synoboot.img" \
  -e BOOTLOADER_AS_USB="Y" \
  -e VM_ENABLE_VIRTIO="Y" \
  -e VM_PATH_9P="/xpy_data" \
  -v /xpenodock/data:/xpy_data \
  -v /xpenodock/syst:/xpy_syst \
  uxora/xpenology

In this configuration:

  • Snapshots will be usable
    • e.g. [root@host]$ docker exec -ti $( docker container ls -f 'ancestor=uxora/xpenology' -f "status=running" -q ) vm-snap-create
  • /xpenodock/syst
    • will contain the bootloader and VM data files.
    • if bootloader.img already exists in this folder, it will be used instead of being downloaded from BOOTLOADER_URL.
    • you can quickly switch bootloaders by replacing those files
  • /xpenodock/data
    • will be a shared folder between the host and DSM
    • will be used as a 9p mount point in DSM, as follows:
      • First, from the DSM GUI, "Create New Shared Folder" in "File Station" named "9pDataShare"
      • Then open an SSH connection to DSM and create the mount point (a quick verification follows this list):
        • [xpenology]$ sudo mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 hostdata0 /volume1/9pDataShare
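
Once mounted, you can confirm the share from the DSM SSH session (a generic check, not from the original article):

# From the DSM shell: confirm the 9p share is mounted
[xpenology]$ mount | grep 9p
[xpenology]$ df -h /volume1/9pDataShare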

If you want to "pass through" a full disk, you can do it by:

  • Creating a symbolic link to the device
    • e.g. [root@host]$ ln -s /dev/disk/by-id/ata-SAMSUNG_XXX /xpenodock/lnk/sdz
  • Then adding these parameters to the docker command line (see the sketch after this list):
    • --device=/xpenodock/lnk/sdz:/dev/sdz
    • -e DISK_SIZE="8G /dev/sdz"
  • Note that snapshots will no longer work once a raw disk is added
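
Putting it together, a pass-through variant of the earlier run could look like this (the disk ID, symlink path and bootloader URL are illustrative):

# Sketch: pass a whole disk through to DSM via a symlink
[root@host]$ ln -s /dev/disk/by-id/ata-SAMSUNG_XXX /xpenodock/lnk/sdz
[root@host]$ docker run --privileged --cap-add=NET_ADMIN \
  --device=/dev/net/tun --device=/dev/kvm \
  --device=/xpenodock/lnk/sdz:/dev/sdz \
  -p 5000:5000 -p 5001:5001 \
  -e DISK_SIZE="8G /dev/sdz" \
  -e BOOTLOADER_URL="http://example.com/path/synoboot.img" \
  uxora/xpenology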

Install Xpenology DSM

Once your Xpenology Docker container is running, follow this tutorial to install Xpenology DSM by opening a web page at <HOST_IP>:5000.

Note0: Do not forget to change the vid/pid (as explained in the tutorial) to get minor updates working

Note1: If you have any issues, please check the troubleshooting part of the GitHub README here: https://github.com/uxora-com/xpenology-docker

 

HTH,
Michel.

 

Reference
Forum Xpenology (xpenology.com)
Tutorial: DSM 6.x on Proxmox (Thread on xpenology.com)
Proxmox backup template (Thread on xpenology.com)
Xpenology running on docker (Thread on xpenology.com)
Tutorial to compile xpenology dsm driver (xpenology.club)
Install Xpenology DSM 6.1.x on Proxmox (uxora.com)
Install Xpenology DSM 6.2.x on Proxmox (uxora.com)

 

Comments   

# No dice (Stian, 2022-07-07 10:35)
Hi, I ran the docker run command from git. All seems to start up fine, but I cannot open a webpage at DockerHost:5000. I get to the shell and I see it start up. Look at my startup log on Pastebin. I have plenty of other Docker containers set up on Bridge working fine. This bootloader also works fine on my KVM as a VM.

https://pastebin.com/JiCym3DB



NAT Network 20.20.20.21.
# RE: Xpenology on Docker (Ken Zhang, 2021-08-10 04:25)
Succeeded in entering Docker-DSM, thank you so much. But here is another question: is it possible to map the entire hard disk to Docker? Please help if it's possible.
# RE: Xpenology on Docker (Ken Zhang, 2021-08-10 04:26)
I mean map the entire hard disk to Docker-DSM.
# RE: Xpenology on Docker (Ken Zhang, 2021-08-10 04:24)
Xpenology on Docker
# RE: Xpenology on Docker (Ahluck, 2021-01-27 12:45)
Hi Michel,

I keep getting this error when starting Docker for Xpenology. Could you please help? Much appreciated.

0K ........ ........ ........ ........ 64% 201M 0s
32768K ........ ........ .. 100% 249M=0.2s
INFO: Bootloader has been successfully downloaded from URL.
INFO: /image/bootloader.raw file size seems valid for synoboot.
INFO: Bootloader has been converted to qcow2
INFO: No Initial Disk found, creating disk /image/vm-disk-1.qcow2

INFO: KVM acceleration enabled
INFO: Configuring network ...
net.ipv4.ip_forward = 1
INFO: DHCP configured to serve IP 20.20.20.21/24 via dockerbridge
iptables: No chain/target/match by that name.
# RE: Xpenology on Docker (UxOra DBA, 2021-01-28 00:12)
Hi,

Well, I just tried on a new Docker host, and it still works for me.

It seems to fail on the following command:
# Hack for guest VMs complaining about "bad udp checksums in 5 packets"
$ iptables -A POSTROUTING -t mangle -p udp --dport bootpc -j CHECKSUM --checksum-fill

Not sure why you got this error, but I have read that some people resolved it just by restarting Docker:
$ systemctl restart docker
# ERROR HD (DAVID, 2022-04-05 11:15)
Hello, thanks for the guide. I am using the long command "with more specific parameters" and the redpill boot image. It works well, I can log in to ip:5000, but then it shows this error:

Something went wrong: detected errors on the hard drives (7,8), and the SATA ports have also been disabled. Please shut down your DS3615xs to replace or remove the hard drives and try again.

Can you help to solve this?