abnormal disk usage with DropBox sync

Dropbox is a free service that lets you bring your photos, docs, and videos anywhere and share them easily. Never email yourself a file again!

Moderator: Lillian.W@AST

Pilloso
Posts: 19
Joined: Tue Feb 02, 2016 6:32 pm

abnormal disk usage with DropBox sync

Post by Pilloso »

I configured Dropbox sync and everything works fine (I have about 1GB of data), but the disks keep working hard even when the sync is finished and no update is needed because all the PCs with Dropbox are turned off.
The disks also keep working if I set some hours of pause in the schedule grid.
The only way to stop the disks is to press the pause button on the main screen of DataSync, but obviously in that mode no sync is performed at all.
There is no reason the disks should be busy when the sync is complete and the timetable is set to pause (blank cells). By "busy" I mean not just spun up: they are actually reading or writing, making non-stop noise, and the disk LEDs blink continuously.
I'm sure it's not the RAID volume, because if I press the pause button all this disk activity stops immediately.
Any suggestions?
martinb
Posts: 5
Joined: Fri Apr 17, 2020 9:50 am

Re: abnormal disk usage with DropBox sync

Post by martinb »

I had a similar issue with DataSync for OneDrive, constant disk access even though all files were up to date.

In my case I found there was a small file named user_info.json in the /.datasync-center/connections/1/ folder which was constantly being updated. It contained the username and usage information displayed on the Status tab.
(Attachment: Screenshot 2020-12-14 131651.jpg - the Status tab)
Seems it was constantly polling onedrive to get updated info. I ended up writing a startup script which redirects the .datasync-center folder to /tmp/ so the constantly updated files are in memory, not on disk.
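
If you want to check what keeps getting rewritten on your own box, one quick way (my own suggestion, nothing to do with DataSync itself; adjust the path to your home folder, and it assumes your find supports -mmin) is to repeatedly list files modified in the last minute:

Code: Select all

# list files under the DataSync config dir modified in the last minute,
# every 60 seconds - Ctrl-C to stop
while true; do
    find /volume1/home/admin/.datasync-center -type f -mmin -1 -exec ls -l {} \;
    echo "----"
    sleep 60
done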
Pilloso
Posts: 19
Joined: Tue Feb 02, 2016 6:32 pm

Re: abnormal disk usage with DropBox sync

Post by Pilloso »

martinb wrote:I ended up writing a startup script which redirects the .datasync-center folder to /tmp/ so the constantly updated files are in memory, not on disk.
I also have this file in the same directory... I don't know how to write a script for the NAS. Could you kindly share yours, and some instructions to make it work?
martinb
Posts: 5
Joined: Fri Apr 17, 2020 9:50 am

Re: abnormal disk usage with DropBox sync

Post by martinb »

I didn't notice Pilloso's reply until after he PM'ed me. I'll add my reply to him to this thread for completeness. Probably should add a disclaimer too: this works for me, but I'm no linux guru so I can't say how it will behave on your system :P
martinb wrote:First up I have four different onedrive accounts syncing. I set up all four onedrive jobs in the web interface using the NAS admin login. I don't let my family members near the NAS web interface - they just see their files on the network.
If you have multiple users who have each configured their own sync jobs it will get more complicated because they will each have user_info.json files within their home folders.
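
If you are in that multi-user situation, a quick way to see who has their own copy (my suggestion, assuming the default home folder layout) is to search for the file by name:

Code: Select all

# find every per-user copy of the constantly-updated status file
find /volume1/home -name user_info.json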

I suggest installing 'Shell In a Box' (if you don't already have a preferred way to connect to the NAS console) and 'Midnight Commander' from App Central. Midnight Commander is a pretty neat old-school file manager - I'm really not a command-line type of guy - give me anything with a GUI to save me typing. And it's great for having a poke around when you're hunting for something.

Remember I have done all this using the admin login; if you have a different login the directory names will need to be changed.

First up, let's take a copy of the datasync directory - we'll need it later:

Code: Select all

cp -R /volume1/home/admin/.datasync-center /volume1/home/admin/.datasync-center-save
You don't need to run the two commands below; they are just for explanation - the script does all the work.
The basic idea is to create a folder in the temp directory:
mkdir -p /tmp/.datasync-center/connections/

And then redirect the default .datasync-center folder to the temp directory with a symbolic link:
ln -sf /tmp/.datasync-center/connections/ /volume1/home/admin/.datasync-center

But when you shutdown all your .datasync-center data in the temp dir is lost.

To make it all work we need a startup/shutdown script to:
1. recreate the folder in the temp directory
2. restore the data (from the backup you made earlier)
3. symlink the .datasync-center directory to temp
4. And importantly, save the data to backup on shutdown

In the event of a power outage you will still lose the temp data, so I also have a CRON job that does regular backups. But a regular backup would itself write to disk - preventing the disks from sleeping, which is the whole point - so the CRON job compares the temp data with a second copy, also kept in temp, and only writes to the backup directory if important files have actually changed.

So here is the script

Code: Select all

#!/bin/sh -e
### BEGIN INIT INFO
# Provides:          onedrive-daemon
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start or stop the dropbox-daemon.
### END INIT INFO


## Datasync Center has a dir which contains configuration and log files for all the sync jobs.
## The logs get constantly updated meaning the HDDs never sleep
## The script symlinks the config/log dir to one created in memory /tmp
## It doesn't matter if all the log data is lost on reboot, but because the user edited
## .conf files are in the same dir we need to save and restore them from a backup on HDD

TMPDIR=/tmp/.datasync-center
BAKDIR=/volume1/home/admin/.datasync-center-save
DEFDIR=/volume1/home/admin/.datasync-center

case "$1" in
    start)
        echo "Redirecting Datasync Centre Logs"
        ### hack by MB to move log files to RAM to allow HDD sleep
        ##mkdir /tmp/.datasync-center/
        #create in memory directory under /tmp to hold the datasync-center logs
        mkdir -p $TMPDIR
        #copy datasync-center configurations to tmp from backup dir 
        cp -rf $BAKDIR/connections $TMPDIR/connections/
        #create symlink to direct default datasync-center config dir to the one we created in /tmp
        ln -sf $TMPDIR/connections/ $DEFDIR
        #use rsync to create a second copy of the .conf files in a separate /tmp dir
        #will use a cron job to periodically compare the copy with the live files and
        #if there are any changes we will save to the backup dir
        rsync -r -c --stats --include="*.conf" --exclude="log.sqlite" --include="*.sqlite" --include="*/" --exclude="*" $TMPDIR/connections/ $TMPDIR/confs/
        chown -R admin:root $TMPDIR/
        ###
        ;;
    stop)
        echo "Stopping"
	#$0 backup
	cp -rf $TMPDIR/connections $BAKDIR/
	;;
    restart)
        echo "Restarting"
        $0 stop
        $0 start
        ;;
    refresh)
        #use rync to compare copy .conf files with the live files and
        #if there are any changes we will save to the backup dir
        echo "Checking Datasync Centre Logs"
        #pipe the output of rsync --stats and search for the line about number of files transferred
        transferred=$(rsync -r -c --stats --include="*.conf" --exclude="log.sqlite" --include="*.sqlite" --include="*/" --exclude="*" $TMPDIR/connections/ $TMPDIR/confs/ | awk '/files transferred/ {print $5}')
        #echo "$transferred"
        if [[ "$transferred" != "0" ]]
        then
            #echo "work"
            #cp -rf /tmp/.datasync-center/connections $BAKDIR/
            $0 backup
        fi
        ;;
    backup)
        echo "Saving Datasync Centre Logs"
	## hack by MB copy from RAM before shut down and when any changes detected
	cp -rf $TMPDIR/connections $BAKDIR/
	;;
    *)
        echo "usage: $0 {start|stop|restart}"
        exit 2
        ;;
esac

exit 0

I named the script move-config.sh and placed it in the backup directory /volume1/home/admin/.datasync-center-save.
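
If the script doesn't end up executable after you save it there, mark it as such (assuming the same path as above):

Code: Select all

chmod +x /volume1/home/admin/.datasync-center-save/move-config.sh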

To get the script to run on startup and shutdown you need to create two symlinks to the move-config script in the /usr/local/etc/init.d/ directory:

Code: Select all

   ln -s /volume1/home/admin/.datasync-center-save/move-config.sh /usr/local/etc/init.d/S49-datasync-move-config
   ln -s /volume1/home/admin/.datasync-center-save/move-config.sh /usr/local/etc/init.d/K51-datasync-move-config
Scripts starting with S are run at startup in numeric order; K scripts run at shutdown. There was already an S50datasync-center script, so I used S49 to make sure the config was moved into place before DataSync started, and K51 to take the copy after it stopped.
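
You can double check the links point at the right place with a quick listing (nothing clever, just a sanity check):

Code: Select all

ls -l /usr/local/etc/init.d/ | grep datasync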

Finally, to get the regular backups I added a CRON job to run the move-config script with the refresh switch. Add the following line to /var/spool/cron/crontabs/root:

Code: Select all

3 * * * * /bin/sh /volume1/home/admin/.datasync-center-save/move-config.sh refresh

I use the built-in editor in Midnight Commander - type mc at the command line to launch it.
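
If you want to test the refresh logic before leaving it to cron, you can run the script by hand with the refresh switch and see whether it decides a backup is needed (just my suggestion):

Code: Select all

/bin/sh /volume1/home/admin/.datasync-center-save/move-config.sh refresh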


And that's it.

One thing the script doesn't do is remove redundant files from the backup directory. So if you delete a sync job you will need to manually remove the corresponding numbered subdirectory from the backup connections directory. Otherwise you will get phantom sync jobs reappearing after a reboot. I could probably improve that by using rsync instead of cp to copy the files.
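
If anyone wants to try that rsync improvement, something like this in the backup) case should mirror the temp copy into the backup and drop deleted jobs at the same time (untested on my setup, so treat it as a sketch):

Code: Select all

# untested alternative to the cp in backup): mirror the live connections
# dir into the backup, removing anything that was deleted in temp
rsync -a --delete $TMPDIR/connections/ $BAKDIR/connections/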

Oh and it looks like there are some redundant comments at the start of the script - I must have copied another script.

Thanks for getting me to write this up - it will come in handy if I ever have to do it again :P

Good luck and if you make any improvements let me know.
