Sorry, but the other answers here are too complicated :-). A simpler answer that works for me (using rsync over ssh, i.e., -e ssh): optionally move the rsync temp file into place, then resume using rsync.

TL;DR: Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace.

The issue is that the rsync server processes (of which there are two; see `rsync --server` in ps output on the receiver) continue running, waiting for the rsync client to send data.

If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (i.e., no temporary suffix). You'll then be able to resume.

If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync; otherwise, it will not move the partial file into place but rather delete it (and thus there is no file to resume).

To politely ask rsync to terminate, do not SIGKILL (e.g., -9), but SIGTERM (e.g., `pkill -TERM -x rsync`; this is only an example, and you should take care to match only the rsync processes concerned with your client).

Fortunately, there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well. For example, if you specify `rsync ... --timeout=15 ...`, both the client and server rsync processes will cleanly exit if they do not send/receive data within 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.

I'm not sure of the default timeout, i.e., how long the various rsync processes will try to send/receive data before they die (it might vary with the operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (i.e., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" a bad internet connection for 15-20 seconds.

If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes), though only the newer, second temporary file has new data being written (received from your new rsync client process).

Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), rsync appears to merge (assemble) all the partial files into the new, properly named file. So, imagine a long-running partial copy which dies (and you think you've "lost" all the copied data), and a short-running re-launched rsync (oops!). You can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

Finally, a few short remarks:

- Don't use --inplace to work around this. You will undoubtedly have other problems as a result; man rsync for the details.
- It's trivial, but -t in your rsync options is redundant; it is implied by -a.
- An already compressed disk image sent over rsync without compression might result in a shorter transfer time (by avoiding double compression). However, I'm unsure of the compression techniques in both cases.
- As far as I understand --checksum / -c, it won't help you in this case; it affects how rsync decides whether it should transfer a file. Though, after a first rsync completes, you could run a second rsync with -c to insist on checksums, to prevent the strange case where file size and modtime are the same on both sides but bad data was written.