

Backup Exec's "internal" deduplication is limited to 64 terabyte per deduplication storage, with one deduplication storage per backup server.

The Deduplication Option within Backup Exec supports three different methods of deduplication:

Server-Side Deduplication

Server-side deduplication means that all deduplication work is done at the backup server. This implies that the source server sends all files to the backup server, as if it was a regular backup job. The backup server receives the data stream, splits it into 512 kB chunks and calculates a unique hash value for each chunk, which is stored in a dedicated database. If the backup server receives a data packet whose hash value already exists in the database, the packet is discarded and only a pointer to the existing chunk is created in the database.

Deduplicating data by splitting files into chunks yields better results than doing it at the file level. Thereby it is completely irrelevant whether a data packet is part of an office document, a file from an operating system or a video clip.

Please note that files deriving from NDMP filers like NetApp, EMC etc. cannot be deduplicated this way. This is due to the fact that Backup Exec cannot read the content of the data stream sent by these systems and therefore cannot split it into chunks.

Client-Side Deduplication

If you enable client-side deduplication for a backup job in Backup Exec, the source server's remote agent splits the backup data into chunks and sends their hash values to the backup server to determine whether the server already has a copy of each chunk. If a chunk is already present in the deduplication storage on the backup server, the source agent drops the packet and only a pointer is created on the backup server. So only unique packets are sent over the network.

Since this game of questions and answers takes a while, the first run of a backup job may be quite lengthy. During subsequent runs of the same backup job, the backup server sends the remote agent a blob of information containing the answers to all the questions it asked during the last run. Using this, the client itself can sort out packages the server will not need and directly demand the creation of the pointers. This means that from the second run on, the amount of data sent over the network is massively reduced, resulting in less need for bandwidth and smaller backup windows.

Nevertheless, as in so many things in IT, this technology also has a downside: the source server has to meet some technical requirements, among them that the Backup Exec remote agent must be installed on it. The latter means that this technology cannot be used for VMware environments where you want to do host-based backups, as the remote agent can't be installed on the ESX host.

OST Appliances

The abbreviation OST means Open Storage Technology and describes a technology used to integrate deduplication-aware hardware appliances into Backup Exec's deduplication option. Among other things, this technology enables the Backup Exec server to control and monitor the deduplication process on the appliance. When using Backup Exec together with an OST appliance, the appliance does the deduplication work and just reports to the backup server, so Backup Exec can keep its catalogs and database entries up to date.

One of the advantages you get when using OST appliances is that you can bypass the 64 terabyte boundary Backup Exec has for its "internal" deduplication storages. Another one is the huge performance these appliances deliver during backups as well as during restores. The most important disadvantage, however, is the fact that Backup Exec handles OST appliances similar to tape devices. This means that GRT restores cannot be done directly but rather require a staging process, and therefore enough local disk space on the backup server to restore the whole container file.
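The server-side chunk-and-hash scheme can be sketched as follows. This is a toy model, not Backup Exec's actual implementation: the 512 kB chunk size comes from the text above, while SHA-256 and the dictionary-based storage are stand-ins for whatever hash and database the product really uses.

```python
import hashlib

CHUNK_SIZE = 512 * 1024  # the backup server splits the stream into 512 kB chunks

def split_into_chunks(stream: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a backup data stream into fixed-size chunks."""
    return [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]

class DeduplicationStorage:
    """Toy model of the server-side store: unique chunks keyed by hash,
    plus a pointer list describing how to reassemble each stream."""

    def __init__(self):
        self.chunks = {}    # hash -> chunk data, stored only once
        self.pointers = []  # chunk hashes in arrival order (reassembly map)

    def ingest(self, stream: bytes) -> int:
        """Store a stream; return how many chunks were actually new."""
        new = 0
        for chunk in split_into_chunks(stream):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:   # unseen chunk: keep the data
                self.chunks[digest] = chunk
                new += 1
            self.pointers.append(digest)    # known chunk: pointer only
        return new

# Two "backups" of identical data: the second run stores no new chunks,
# only pointers, which is exactly the space saving deduplication is after.
store = DeduplicationStorage()
data = bytes(3 * CHUNK_SIZE)   # three identical zero-filled chunks
first = store.ingest(data)     # only 1 unique chunk ends up in storage
second = store.ingest(data)    # 0 new chunks, pointers only
```

Note that the dedup key is the chunk's hash alone, which is why it is irrelevant what kind of file a packet belongs to.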
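The client-side question-and-answer protocol, including the answer blob that shortens subsequent runs, might look like this in outline. All class and method names are invented for illustration; only the protocol shape follows the description above.

```python
import hashlib

CHUNK_SIZE = 512 * 1024  # same 512 kB chunks as on the server side

class BackupServer:
    """Toy backup server: remembers which chunk hashes it already stores."""

    def __init__(self):
        self.known = set()

    def query(self, digests):
        """Answer the agent's questions: which of these chunks exist already?"""
        answers = {d: (d in self.known) for d in digests}
        # After answering, every asked-about chunk ends up in storage,
        # either as freshly sent data or as a pointer to an existing chunk.
        self.known.update(digests)
        return answers

class RemoteAgent:
    """Toy source-side agent performing client-side deduplication."""

    def __init__(self, server):
        self.server = server
        self.cached_answers = {}  # the answer "blob" from the last run

    def backup(self, stream):
        """Return (chunks sent over the network, questions asked)."""
        digests = {hashlib.sha256(stream[i:i + CHUNK_SIZE]).hexdigest()
                   for i in range(0, len(stream), CHUNK_SIZE)}
        # Sort out chunks the cached blob already marks as present...
        unknown = [d for d in digests if d not in self.cached_answers]
        # ...and only ask the server about the rest.
        answers = self.server.query(unknown)
        sent = sum(1 for d in unknown if not answers[d])
        self.cached_answers.update(dict.fromkeys(digests, True))
        return sent, len(unknown)

# First run: every chunk is a question and has to be sent.
server = BackupServer()
agent = RemoteAgent(server)
data = b"".join(bytes([i]) * CHUNK_SIZE for i in range(4))
first_run = agent.backup(data)   # (4, 4)
# Second run: the cached answers eliminate both questions and transfers.
second_run = agent.backup(data)  # (0, 0)
```

The second run illustrates why bandwidth demand drops so sharply from the second job execution on: unchanged chunks trigger neither a question nor a transfer.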
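The GRT restriction with OST appliances can be made concrete with a short sketch. Since the appliance is handled like a tape device, restoring a single item first stages the whole container file to local disk; only then is the item extracted. All names are hypothetical, and the container is modeled as a simple dict serialized to JSON.

```python
import json
import os
import tempfile

def grt_restore(appliance_container: dict, item_name: str, staging_dir: str):
    """Restore one item from a container that lives on an OST appliance.

    The staging directory must be large enough to hold the ENTIRE
    container, even though only a single item is wanted."""
    staged_path = os.path.join(staging_dir, "container.stage")
    with open(staged_path, "w") as f:   # step 1: stage the full container
        json.dump(appliance_container, f)
    with open(staged_path) as f:        # step 2: extract one item from it
        container = json.load(f)
    return container[item_name]

with tempfile.TemporaryDirectory() as tmp:
    container = {"mail_001": "hello", "mail_002": "world"}
    restored = grt_restore(container, "mail_002", tmp)
```

With Backup Exec's own deduplication storage this staging step is unnecessary, which is the disadvantage the text refers to.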
