When using the WriteFile() command to deliver a file to a target machine, you may find that it takes a long time - longer than a direct file copy over the network.
This is due to the way in which the file is encrypted and transferred.
The file being transferred is broken into small chunks in order to minimise any problems caused by poor network reliability. By using smaller chunks of data, in the event of a network dropout only the last failed chunk needs to be re-sent, rather than the whole file.
In addition to this, each chunk needs to be encrypted before being sent.
At the receiving end the process runs in reverse, with the file chunks being decrypted, then appended to the chunks already received, building the file as it goes.
This process adds an overhead; even without any other problems, such as an unreliable network, there can be a considerable decrease in expected transfer speed.
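The chunk-and-encrypt flow described above can be sketched as follows. This is a simplified illustration only, not Kaseya's actual implementation: the chunk size, the single-byte XOR "cipher", and all function names are assumptions made for the example.

```python
# Simplified sketch of a chunked, per-chunk-encrypted file transfer.
# CHUNK_SIZE, the XOR "cipher", and all names are illustrative
# assumptions - NOT Kaseya's real protocol or real encryption.

CHUNK_SIZE = 4096  # assumed; smaller chunks mean cheaper re-sends on failure
KEY = 0x5A         # placeholder single-byte XOR key, not real cryptography


def encrypt(chunk: bytes) -> bytes:
    return bytes(b ^ KEY for b in chunk)


def decrypt(chunk: bytes) -> bytes:
    return bytes(b ^ KEY for b in chunk)  # XOR is its own inverse


def send_file(data: bytes):
    """Sender side: encrypt and yield one chunk at a time."""
    for offset in range(0, len(data), CHUNK_SIZE):
        yield encrypt(data[offset:offset + CHUNK_SIZE])


def receive_file(chunks) -> bytes:
    """Receiver side: decrypt each chunk and append it to the file so far."""
    assembled = bytearray()
    for chunk in chunks:
        assembled.extend(decrypt(chunk))
    return bytes(assembled)


payload = b"example payload " * 1000
assert receive_file(send_file(payload)) == payload
```

The per-chunk encrypt/decrypt calls in the loops are where the extra CPU cost comes from relative to a plain file copy, and the chunk loop is what allows a retry to resume from the last failed chunk instead of the start of the file.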
AV can also be a problem. Some AV products can lock the file or chunks as they are being written, which adds further overhead. Microsoft Security Essentials has been seen to exhibit this behaviour.
SOLUTION / WORKAROUND
Although the "chunking" and encryption cannot be changed, try disabling the AV temporarily as a test, just to see if the performance changes. If the AV is causing a big enough overhead, you may want to consider excluding certain directories from the AV scan - a typical example is c:\kworking (the Kaseya temp directory).
One option, which has a couple of additional benefits, is to use the GetURL() command. This allows you to specify a URL and instructs the endpoint to pull the file at that URL - which can be an EXE, INI, or any other file - down to itself.
This method does not use the agent-server communication channel, so many of the overheads are removed. You can still store the file on the Kaseya server or, if you prefer, on a hosted site such as Dropbox or Google Drive - as long as there is a URL for the file, this method will work.
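In effect, GetURL() performs a plain HTTP(S) pull of the file to local disk, with none of the per-chunk encryption overhead described above. The sketch below shows the equivalent operation in Python; the URL and target path in the usage comment are placeholders, not real Kaseya values.

```python
# Sketch of what a GetURL()-style pull amounts to on the endpoint:
# a straightforward streamed HTTP(S) download to a local path.
# The example URL and destination path are placeholders.
import shutil
import urllib.request


def pull_file(url: str, dest_path: str) -> str:
    """Download url to dest_path in streamed blocks and return the path."""
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        shutil.copyfileobj(response, out)  # streams block by block, no per-chunk crypto
    return dest_path


# Example usage (placeholder URL and path):
# pull_file("https://example.com/files/installer.exe",
#           r"c:\kworking\installer.exe")
```

Because this is an ordinary download, anything that can serve the file over HTTP - the Kaseya server, Dropbox, Google Drive - works as the source.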
This is a known limitation of the WriteFile() command.
#92193 / VAKF-497