Saturday, February 25, 2012

Backup of a 9 GB Database

Hello all,
I was wondering whether there is any documentation or advice on how to implement a backup that will not interrupt use of the database.
Basically, we have an application that needs to be available 24/7.
We have two servers: one is the primary, and the second receives log shipping in case the primary server dies. However, once a day, when we perform a full database backup, the system becomes non-responsive for 30-45 minutes.
Is there any way around this? What are the best practices? We are thinking of creating a cluster of 2-3 SQL servers; however, from what I have read, a cluster will still have the same issue with backups.
Thanks.

To limit the time, you can use tools like SQL Litespeed. This will bring the backup size down to about 25% to 30% of the original size. The total I/O is also lower: the server still has to read the same amount of data from the database, but it writes far less to the backup file. The time needed to complete the backup is likewise about 25% to 30% less than a normal backup.
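If a third-party tool is not an option, recent SQL Server versions (2008 and later) offer native backup compression that achieves a similar effect. A minimal sketch, with the database name and file path purely illustrative:

```sql
-- Hypothetical database name (OrdersDB) and path; adjust to your environment.
BACKUP DATABASE OrdersDB
TO DISK = N'D:\Backups\OrdersDB_full.bak'
WITH COMPRESSION,   -- smaller backup file, less write I/O
     CHECKSUM,      -- verify page checksums while reading
     STATS = 10;    -- report progress every 10 percent
```

The compression ratio depends on the data, but the reduction in write I/O and elapsed time is typically in the same range the tools above advertise.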

A second option to consider is a database snapshot: create a snapshot of the database and take the backup from the snapshot (if that is possible).
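Creating the snapshot itself might look like the sketch below (names and file paths are illustrative). One caveat worth checking: SQL Server does not support running BACKUP DATABASE against a snapshot, so the snapshot is better thought of as a point-in-time, read-only view you can query or export from rather than something you back up directly.

```sql
-- Illustrative names; OrdersDB_data must match the logical name of the
-- source database's data file (see sys.database_files).
CREATE DATABASE OrdersDB_snap
ON ( NAME = OrdersDB_data,
     FILENAME = N'D:\Snapshots\OrdersDB_data.ss' )
AS SNAPSHOT OF OrdersDB;
```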

|||

Hi Kiryl,

MSSQL uses online ("hot") backups that should not cause any interruption to service, and certainly not of the magnitude of 40 minutes!

When you say "unresponsive", what form does this take? Blocking, locking, a CPU flat-line, etc.?
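One way to narrow this down while the backup is running is to query the active-requests DMV and look at what the backup session is waiting on, and whether anything is blocked. A hedged sketch:

```sql
-- Run while the nightly backup is in progress. Shows the backup request's
-- wait type (e.g. BACKUPIO suggests disk throughput is the bottleneck)
-- and any sessions involved in blocking.
SELECT r.session_id,
       r.command,              -- e.g. 'BACKUP DATABASE'
       r.wait_type,
       r.blocking_session_id,
       r.percent_complete
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'BACKUP%'
   OR r.blocking_session_id <> 0;
```

If the waits are I/O-related rather than lock-related, the fix is more likely to be backup compression or a separate backup target disk than a clustering change.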

Cheers

Rob
