I had to deal with a Synology DSM 6.2 device (a DS 215J) that didn’t want to update anymore because of "insufficient capacity for update". It was a bit of a hassle to figure out what was causing the problem, so here I’ll explain what was going on.
Recently one of the drives of this NAS had crashed and has since been replaced. Both drives were set up as separate volumes, so no SHR. In the time between the crash and the replacement, the NAS was still up and running and being used. In retrospect, this is what caused the problem. But while searching for the answer I came across several other possible causes, so the explanation below might not solve your problem; the steps, however, can still be followed to find your own cause.
So, first of all, you should connect to your NAS via SSH using the admin account. On macOS you can open a terminal window and type:
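Replace the IP address below with the address of your own NAS (you can find it in Control Panel > Network):

```shell
# Connect to the NAS as the admin user (example IP, replace with yours)
ssh admin@192.168.1.10
```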
On Windows, I used PuTTY to connect via ssh (follow the link to download the client).
Once connected, you should execute the commands as root. Be careful with the commands that you execute!
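On DSM 6 the admin account can elevate itself with sudo; it will ask for the admin password again:

```shell
# Become root; enter the admin password when prompted
sudo -i
```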
Now go to the root of your drive.
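That is:

```shell
# Change to the root of the filesystem
cd /
```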
And check which filesystem is using too much space.
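df shows the usage per mounted filesystem in human-readable sizes:

```shell
# Show size, usage and mountpoint of every mounted filesystem
df -h
```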
In my case it looked like this:
Filesystem         Size  Used  Avail  Use%  Mounted on
/dev/root          2.4G  2.1G   230M   91%  /
none               249M     0   249M    0%  /dev
/tmp               251M  1.2M   250M    1%  /tmp
/run               251M  5.4M   246M    3%  /run
/dev/shm           251M   12K   251M    1%  /dev/shm
/dev/vg1/volume_2  2.7T  1.9T   847G   70%  /volume2
/dev/vg2/volume_1  3.6T  535G   3.1T   15%  /volume1
So /dev/root, which is mounted on /, is almost full. To see which folder is consuming the most space, I used the following command. Note: I excluded both volumes, because scanning them would be too time-consuming and the issue isn’t there anyway.
du -h --max-depth=1 --exclude=volume1 --exclude=volume2 /
However, the totals only added up to about 800 MB, not 2.1 GB, which was strange. Then I read that the difference can be leftover data from when a volume was unmounted — in my case, when one of the drives crashed. Files written to a mountpoint folder while the volume is not mounted land on the system partition and become hidden underneath the volume once it is mounted again. To see if that caused the problem, I had to unmount the volumes and check whether any files remained in the mountpoint folders.
Before you continue, make sure you are on the same network as the NAS and not remotely connected via VPN or anything else. The following steps will stop any running applications (like VPN) on the NAS.
Make sure you’re in the root again.
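Change back to the filesystem root:

```shell
cd /
```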
Then stop all running applications with this command.
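On DSM 6 there is a built-in task for this; it stops the services and unmounts the data volumes (and will drop your SSH session):

```shell
# Stop all Synology services and unmount the data volumes
syno_poweroff_task -d
```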
Your SSH connection will probably be closed as well, so reconnect as admin and become root again.
Now check if the volumes are unmounted.
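Run df again and look for the /volume entries:

```shell
# The /dev/vg* lines should no longer appear if the unmount succeeded
df -h
```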
If the volumes are not listed anymore, then they are unmounted.
Now check whether there are any files left in the volume folders.
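List the contents of both mountpoint folders:

```shell
# With the volumes unmounted, these folders should be empty
ls -la /volume1
ls -la /volume2
```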
If this lists a few folders, there is leftover data. If it lists all the folders that live on the volume, the unmount probably failed, so try again from the top. Make sure you check both folders.
In my case, there were files left in one of the mountpoint folders. I used the following command to check how much data they were using (1.2 GB in my case).
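For example, if the leftover files turned up under /volume1 (adjust the path to wherever you found them):

```shell
# Summarize the size of the leftover data in the unmounted mountpoint folder
du -sh /volume1/*
```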
To remove the folders I used:
rm -r folder_name
After that I checked again with df -h how much space was available.
This showed that I now had 1.4GB available instead of 200 MB.
Time to reboot and then check via the GUI whether updating works again.
reboot -f -n
After this I could update my NAS again.
Other people reported that MariaDB or Zarafa were causing issues, or that the folder /var/log was full of log files.
So if you run into the same issue as described above, hopefully these steps help you solve your insufficient-storage problem.