This article gives some advice on making RMAN backups the right way, based on my experience.
Backup and Recovery
IMHO, the most important DBA task to do is backup and the most important DBA task to avoid is recovery.
Daily backups can be a tedious task for something you will hopefully never use, but they keep you safe.
Hopefully you will never have to do a database restore/recovery in production, but it is something a DBA needs to be well prepared for.
enable bad block detection
This adds a slight CPU overhead, but Oracle will be able to detect block corruption earlier if you enable the parameters that control protection levels: DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT.
From 11g, the DB_ULTRA_SAFE parameter is introduced to set the default values for these three parameters in one go:
SQL> alter system set db_ultra_safe=data_only scope=spfile ;
enable block change tracking
For incremental backups, change tracking feature improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.
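Block change tracking can be enabled with the command below (the file path is just an example; with OMF, the USING FILE clause can be omitted and Oracle picks the location):

SQL> alter database enable block change tracking using file '/u01/app/oracle/bct.chg' ;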
duplex online logs and archive logs
If one of the log files get corrupted then the second member file can be used to recover.
This may affect performance, depending on the write speed of the media.
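For example, archive logs can be duplexed to two destinations, and a second member added to an online log group (the paths and group number below are placeholders to adapt to your environment):

SQL> alter system set log_archive_dest_1='LOCATION=/arch1' scope=both ;
SQL> alter system set log_archive_dest_2='LOCATION=/arch2' scope=both ;
SQL> alter database add logfile member '/redo2/redo01b.log' to group 1 ;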
enable controlfile autobackup
The controlfile needs to be backed up last so that it includes the latest database backup.
RMAN can do this for you automatically with the following setting:
RMAN> configure controlfile autobackup on ;
Set one datafile per backup piece
RMAN must read through an entire backup piece to extract a datafile or archivelog.
With only one file per backup piece, restoring a single datafile or archive log completes faster.
RMAN> backup database filesperset 1 plus archivelog delete input ;
Disable Backup Optimization feature
This feature tells RMAN to skip files that have already been backed up and have not changed since.
It is a good feature to save disk space, but it makes backup management riskier if you delete an old backupset that another one depends on.
So I recommend disabling this option to make sure unchanged files are still included in the backupset:
RMAN> configure backup optimization off ;
Use CHECK LOGICAL with backup
This makes RMAN check for logical corruption by performing additional validation during the backup.
It is a good option to prevent corrupted blocks from being backed up:
RMAN> backup check logical database plus archivelog delete input;
Do not use DELETE ALL INPUT
With DELETE ALL INPUT, after any one copy of an archivelog is backed up, all copies of it are deleted.
This loses the redundancy you gained by duplexing archive logs: if an archivelog turns out to be missing or corrupt, you no longer have that second chance.
With DELETE INPUT, only the copy that was just backed up is deleted; the other copy remains for subsequent backups to handle.
Set Archivelog Deletion Policy
Let RMAN manage archivelog deletion instead of doing it manually.
Simply configure an archivelog deletion policy as follows:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 2 TIMES TO DEVICE TYPE DISK ;
In this case, when backing up archivelogs with the DELETE INPUT option, RMAN will delete an archivelog only after it has been backed up in two different backupsets and applied on all standby databases.
Set retention policy
Configure the RMAN retention policy:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 28 DAYS;
If not using a catalog, ensure that the control_file_record_keep_time instance parameter is set.
SQL> alter system set control_file_record_keep_time=28 scope=both ;
CROSSCHECK ARCHIVELOG is NOT recommended
Archive logs marked as expired will be silently ignored during all subsequent archive log backups. This will compromise the recoverability of the database without so much as a warning message.
Delete obsolete backups
To avoid the catalog growing indefinitely and to keep backups manageable, you may want to delete old backupsets.
First make sure you have set the retention policy as explained above, then delete old backupsets as follows:
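With a retention policy in place, backups that fall outside it can be removed with:

RMAN> delete noprompt obsolete ;

The NOPROMPT option suppresses the confirmation prompt, which is handy in scripted backups.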
Validate your backup
To ensure that you have a working backup, it can be validated as follows:
RMAN> restore database validate ;
or you can specify the backupset key as follow:
RMAN> validate backupset 9 ;
Make the most of your Logs
The log file needs to be reviewed and checked for errors to confirm that the backup completed successfully.
To log additional useful information about the backup, I recommend the following:
- Set NLS_DATE_FORMAT, NLS_LANGUAGE to make output more readable.
- Output all parameters used for the backup
- Echo commands and its output to the log file
- Output a "preview restore database" after backup (see below)
- Use a unique log filename (i.e. with a timestamp) to avoid overwriting the logfile
This information can be valuable during a full recovery or in a stressful situation.
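The "echo" and "parameters" items above map to standard RMAN commands (NLS_DATE_FORMAT and NLS_LANGUAGE are operating system environment variables to export before launching RMAN):

RMAN> set echo on ;
RMAN> show all ;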
Preview restore database
In a restore situation, you can preview the backups that RMAN will use for the restore.
This can also be included in the backup log output for additional information.
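The preview itself is obtained with:

RMAN> restore database preview ;

or in the shorter summary form:

RMAN> restore database preview summary ;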
In a recovery situation, you can also first test the recovery without making any change to the datafiles:
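The TEST clause performs a trial recovery that applies redo in memory only and then rolls it back, so the datafiles are left untouched:

RMAN> recover database test ;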
Send an email notification of the backup with a keyword like "SUCCESS" or "FAIL" in the subject line.
Set parallelism without compression:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 ;
Set parallelism + compression:
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC' ;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET ;
Advanced compression license
An Advanced Compression Option license is NOT required for the BASIC compression algorithm, but the other compression algorithms (LOW, MEDIUM and HIGH) require it.
Database Backup and Recovery User's Guide (docs.oracle.com)
Enjoyed this article? Please like it or share it.