I spent some time this weekend playing around with my Synology DS1511+ NAS and various file functions. I decided to start collecting some of the less obvious (at least to me) things I learned. My setup is a Synology DS1511+ with two DX510 expansion units.
- RAID array rebuilds are a low priority background task and have no noticeable impact on performance.
Example: I was doing a RAID rebuild while at the same time running robocopy to copy about 5.5 TB from the drive array to a different array. The RAID array rebuild progressed less than 10% during the first day and a half while the copy (and other activity) were running. Once the copy was done and the drive wasn’t being used the remaining rebuild took less than 6 hours.
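DSM is Linux underneath and uses mdraid, so if SSH is enabled you can watch rebuild progress in `/proc/mdstat`. Here's a sketch that parses a sample `mdstat` snippet (the device names, sizes, and percentage below are made up for illustration, not taken from my unit):

```shell
# On the NAS itself you'd just run: cat /proc/mdstat
# The sample text below stands in for that output so the parsing is visible.
mdstat='md2 : active raid5 sda3[0] sdb3[1] sdc3[2] sdd3[3] sde3[4]
      11711191040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      [==>..................]  recovery = 12.5% (365974720/2927797760) finish=312.4min speed=136675K/sec'

# Pull out just the rebuild percentage.
echo "$mdstat" | grep -o 'recovery = [0-9.]*%'
```

The `finish=` and `speed=` figures on the recovery line are also worth watching; they drop noticeably while other I/O (like a large copy) is running, which matches the behavior above.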
- DSM seems to queue even unrelated drive/volume changes and does them serially.
Example: During an array rebuild on one disk group I expanded an iSCSI LUN on a volume configured on a second disk group. Despite being on different drives and seemingly unrelated, the LUN expansion had to wait for the array rebuild to finish. It could be initiated, but it stayed in a "waiting" state until the RAID rebuild was done.
- The root and swap file systems (basically the OS) are on the DS1511+ drives but not part of the file system volume that’s on the drives. They appear to be spread across all drives.
Example: Removing the volume that’s on the drives still lets the system boot. Removing one of the drives, even with no volumes on it, results in messages that the root and swap volumes have entered a degraded state. Popping in a new drive eventually results in a message that the consistency check on both volumes is done. At this point an additional drive can be removed and replaced without a negative impact (but with another degrade/consistency check).
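This matches how Synology lays things out: the root and swap partitions are small RAID1 arrays (typically `md0` and `md1`) mirrored across every bay, separate from the data arrays. A quick way to spot the degraded state is the status brackets in `/proc/mdstat`, where a missing member shows as `_`. The snippet below is an illustrative sample, not real output from my unit:

```shell
# Sample /proc/mdstat fragment for the root array with one drive pulled:
# [5/4] means 4 of 5 members present; the "_" marks the missing mirror.
mdstat='md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      2490176 blocks [5/4] [UUUU_]'

# Crude check: any "_" in the member status means a degraded system partition.
echo "$mdstat" | grep -q '_' && echo "system partition degraded"
```

Once the replacement drive is resynced, the brackets go back to `[5/5] [UUUUU]`, which corresponds to the "consistency check done" message in DSM.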
- DSM did not like it when I removed Volume 1 from the DS1511+. I was a bit concerned it wouldn’t let me remove the only volume in the unit, but it went ahead without any warning messages once I confirmed (I had moved everything off it – it does warn if a share, service or package would be impacted). During the removal, though, I got a message that the volume had crashed. There’s not much to do with a crashed volume, so I rebooted. The volume was gone after the reboot and everything seems fine.
- Disk Group and Volume names are system generated and cannot be changed (at least through the GUI). Just something to keep in mind if you try to organize disk groups. When I deleted Disk Group 1 I still had disk groups 2 & 3. When I went to recreate the disk group it was named Disk Group 4 and I had no Disk Group 1. (This might be related to the crash when I removed the original Volume 1, but it does seem completely gone.) [Updated Apr. 29th: When I created a disk group and volume about a week later they slotted in as Volume 1 and Disk Group 1 respectively. I also did a firmware upgrade before doing the volume and group creation.]
That was it for this weekend’s explorations. I did come up with a couple of questions I want to explore in the future:
- Can a disk group be moved between expansion units (for example, if an expansion unit fails, can its drives be moved to a new one)? A similar question is whether the drives can be moved to a new NAS with the data preserved, but I don’t have the hardware to check that out.
- If DSM 4 is re-installed, will the disk groups and anything installed on them remain after the installation? According to Synology this can be done, although with the loss of configuration data for some Synology DSM services that are kept in its internal database.
- After swapping out the fourth drive (out of 5) I never got the consistency check finished messages for the root and swap. Maybe it did a consistency check on one of the reboots and never logged it. Maybe I’ll crash the system when I pull that fifth disk. [Apr.29th: Pulling the drive went fine, see below.]
I suspect that the DS1511+ will crash when I pull that 5th drive for replacement. Synology’s solution involves having their tech support remote in to check/repair the file system. Instead I’ll make sure I have an extra backup and then go for it. Might as well test how this thing holds up. Once I know, I’ll update this article.
[Updated Apr. 29th]
I was wrong in point #3. Everything was fine when I pulled that 5th drive. I got the degraded error message for root and swap but everything worked, even with a reboot. Eventually I received the message that the rebuild was complete. So it looks like those earlier consistency check complete messages were just lost in the ether.