User Managed Backup
There are two types of user managed backup in Oracle, and they can be further divided as shown below:
- Physical Backup
  - Cold Backup (Consistent Backup)
  - Hot Backup (Inconsistent Backup)
- Logical Backup
  - Export / Import
  - Expdp / Impdp (available from Oracle 10g onwards)
First, let us look at the physical backup.
Physical Backup
Cold Backup (Consistent Backup)
A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from off-line backups, as no recovery (from archived logs) is required to make the database consistent.
Why is a cold backup called a consistent backup?
The cold backup is taken after the database has been shut down cleanly. At that point the SCN in every datafile header is the same, so when we restore the cold backup, no redo needs to be applied during recovery. The backup includes the datafiles, logfiles, controlfiles, parameter file & password file.
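You can verify this claim yourself: after a clean shutdown, mount the database and compare the datafile header SCNs. A minimal sketch (run as SYSDBA; the columns are from V$DATAFILE_HEADER):
Startup mount;
Select file#, checkpoint_change#, fuzzy from v$datafile_header;
-- After a clean shutdown, every row shows the same CHECKPOINT_CHANGE#
-- and FUZZY = 'NO', which is why no redo is needed on restore.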
Cold Backup Steps
1. Get the file path information using the below queries.
Select name from v$datafile;
Select member from v$logfile;
Select name from v$controlfile;
2. Take a backup of the password file & parameter file (optional).
3. Alter system switch logfile;
4. Shutdown immediate
5. Copy all the datafiles / controlfiles / logfiles to the backup path using OS commands.
6. Start up the database.
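Putting the steps together, a complete cold backup session might look like the sketch below (the source path /u01/oracle/raja and the backup path /u01/backup are illustrative):
-- Run from SQL*Plus as SYSDBA
Select name from v$datafile;
Select member from v$logfile;
Select name from v$controlfile;
Alter system switch logfile;
Shutdown immediate;
-- From the OS shell, copy every file reported above, for example:
-- cp /u01/oracle/raja/*.dbf /u01/backup/
-- cp /u01/oracle/raja/*.ctl /u01/backup/
-- cp /u01/oracle/raja/*.log /u01/backup/
Startup;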
Hot Backup (Inconsistent backup)
A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do on-line backups when the database is in ARCHIVELOG mode.
Why is a hot backup called an inconsistent backup?
The hot backup is taken while the database is running. During this time, the datafile header SCNs of the tablespace being backed up do not match those of the other tablespaces' datafiles. When we restore the hot backup, redo must be applied during recovery.
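You can see this while a tablespace is in backup mode; a minimal sketch joining V$DATAFILE_HEADER and V$BACKUP (run as SYSDBA):
Select h.file#, h.checkpoint_change#, h.fuzzy, b.status
from v$datafile_header h, v$backup b
where h.file# = b.file#;
-- Files with STATUS = 'ACTIVE' keep an older CHECKPOINT_CHANGE# than
-- the rest of the database; that gap is what recovery has to replay.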
Hot Backup Steps
1. Get the file path information using the below query.
Select tablespace_name, file_name from dba_data_files order by 1;
2. Put the tablespace in begin backup mode, then use OS commands to copy the datafiles belonging to that tablespace to the backup path.
3. Once the datafiles are copied, release the tablespace from begin backup mode with end backup.
4. Repeat steps 2 and 3 for all of your tablespaces.
5. Take the controlfile backup.
Alter database backup controlfile to '/u01/backup/control01.ctl'; --> binary format
The one below is in readable format (used for cloning):
Alter database backup controlfile to trace;
Alter database backup controlfile to trace as '/u01/backup/control01.ora';
6. Back up all the archive log files generated between the previous backup and this backup as well (see the sketch after this list).
7. Take a backup of the password file & parameter file (optional).
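For step 6, something like the sketch below can be used (the time window and the archive destination path are illustrative):
Alter system archive log current;   -- force the current log to be archived
Select name from v$archived_log
where completion_time > sysdate - 1;   -- logs since the previous backup
-- Copy the listed files to the backup path with OS commands, for example:
-- cp /u01/oracle/raja/arch/*.arc /u01/backup/arch/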
Example
Steps
1. Alter tablespace system begin backup;
To confirm the tablespace is in begin backup mode, use the below query:
Select * from v$backup; (refer to the CHANGE# and TIME columns)
2. Host cp /u01/oracle/raja/system1.DBF /u01/backup/system01.dbf (OS command to copy the datafile)
3. Alter tablespace system end backup;
To confirm the tablespace has been released from begin backup mode, use the below query:
Select * from v$backup;
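Rather than typing the begin/end backup commands for every tablespace, you can generate them from the data dictionary. A minimal sketch (the spool file name is illustrative):
Set pagesize 0 feedback off
Spool /u01/backup/begin_backup.sql
Select 'alter tablespace ' || tablespace_name || ' begin backup;'
from dba_tablespaces
where contents <> 'TEMPORARY';
Spool off
-- Run the spooled script, copy the datafiles with OS commands, then
-- generate and run the matching 'end backup' script the same way.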
Hot Backup Internal Operations
Note
While writing up what internal operations go on during a hot backup, I referred to several notes; the one below is the clearest and nicest explanation I found.
During backup mode, for each datafile in the tablespace, here is what happens:
When BEGIN BACKUP is issued:
- The hot backup flag in the datafile headers is set, so that the copy is identified as a hot backup copy. This is to manage the backup consistency issue when the copy is used for a recovery.
- A checkpoint is done for the tablespace, so that in case of recovery, no redo generated before that point needs to be applied. The begin backup command completes only when the checkpoint is done.
During backup mode:
- The datafile header is frozen, so that whenever it is copied, it reflects the checkpoint SCN from the beginning of the backup. Then, when the copy is restored, Oracle knows that it needs to start recovery at that SCN to apply the archived redo logs. This is to avoid the header inconsistency issue. It means that any further checkpoints do not update the datafile header SCN (but they do update a 'backup' SCN).
- The first modification to each block in the buffer cache writes the full block into the redo thread (in addition to the default behaviour of writing only the change vector). This is to avoid the fractured block issue. There may be a fractured block in the copy, but it will be overwritten during the recovery with the full block image.
That means that everything goes on as normal, except for the below operations:
- At checkpoint, the datafile header SCN is not updated.
- When a block is updated for the first time since it came into the buffer cache, the whole before-image of the block is recorded in the redo.
- Direct path writes do not go through the buffer cache, but they always write full blocks, and the full block is written to the redo log (if not in NOLOGGING mode).
When END BACKUP is issued:
- A record that marks the end of the backup is written to the redo thread, so that if the copy is restored and recovered, it cannot be recovered to a point earlier than that. This is to avoid the backup consistency issue.
- The hot backup flag in the datafile headers is unset.
- The header SCN is updated to the current one.
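You can watch the frozen header behaviour yourself; a minimal sketch using the SYSTEM tablespace from the example above (file# 1 is assumed to be a SYSTEM datafile):
Alter tablespace system begin backup;
Alter system checkpoint;
Select file#, checkpoint_change# from v$datafile_header where file# = 1;
Select checkpoint_change# from v$database;  -- moves ahead of the frozen header
Alter tablespace system end backup;
-- After end backup, the next checkpoint brings the header SCN current again.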
Remarks:
- A fractured block is not frequent, as it happens only if the I/O for the copy is done at the same time, on the same block, as the I/O for the update. But the only means to avoid the problem is to do that full logging of the block for each block, just in case.
- If the OS I/O size is a multiple of the Oracle block size (e.g. a backup done with dd bs=1M), that supplemental logging is not useful, as fractured blocks cannot happen.
- The begin backup checkpoint is mandatory to manage the fractured block issue: as Oracle writes the whole before-image of the block, it needs to ensure that it does not overwrite a change done previously. With the checkpoint at the beginning, it is certain that no change vector preceding the begin backup has to be applied.
- The supplemental logging occurs when a block is modified for the first time in the buffer cache. If the same block is reloaded into the buffer cache, supplemental logging will occur again. I haven't seen that point documented, but a test case doing a 'flush buffer_cache' proves it.
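A minimal version of that test case might look like the sketch below (the table T and its column X are illustrative; 'redo size' comes from V$MYSTAT joined to V$STATNAME):
-- With the tablespace holding table T already in begin backup mode:
Update t set x = x + 1 where rownum = 1;   -- first touch: full block logged
Select s.value from v$mystat s, v$statname n
where s.statistic# = n.statistic# and n.name = 'redo size';
Alter system flush buffer_cache;           -- force the block out of the cache
Update t set x = x + 1 where rownum = 1;   -- reloaded block: logged in full again
Select s.value from v$mystat s, v$statname n
where s.statistic# = n.statistic# and n.name = 'redo size';
-- The second update again adds roughly a block-sized amount of redo,
-- showing that the supplemental logging repeats after the reload.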
I hope this article helped you to understand user managed physical backup. Suggestions are welcome.