
Swap 2 Values, in SQL, Without Using a Temporary Variable


Recently, I saw a mention of an interview question for SQL developers. It was something along the lines of:

There is a table with a sex column. It has been discovered that the values are swapped around and need to be corrected. How would you swap all ‘M’ values to ‘F’ and all ‘F’ values to ‘M’, while leaving the other values untouched, in a single SQL statement and without using any temporary variables?

So, how would you do it? Here’s my example.

First, the before look:

select sex, forename from bedrock;

S  FORENAME
-  --------
M  Wilma
   Dino
M  Betty
F  Fred
F  Barney
U  Pebbles

6 rows selected.

I think we can safely say that the sex column is somewhat back to front. My solution was to use a CASE expression in an SQL UPDATE statement:

update bedrock
set sex =
  case sex
    when 'F' then 'M'
    when 'M' then 'F'
    else sex   -- leave 'U', NULL and anything else untouched
  end;

Finally, the after look:

select sex, forename from bedrock;

S  FORENAME
-  --------
F  Wilma
   Dino
F  Betty
M  Fred
M  Barney
U  Pebbles

6 rows selected.

It appears to be safe to commit!

Using a CASE expression like this is useful. In the old days, we would have needed something like the following:

Update bedrock set sex = 'T' where sex = 'M';
Update bedrock set sex = 'M' where sex = 'F';
Update bedrock set sex = 'F' where sex = 'T';

For huge tables, that could have taken a while, and it’s highly unlikely that a sex column would be indexed, so three full table scans would have been the order of the day.

Using CASE we get away with a single scan; plus, the expression short-circuits when it hits the first matching WHEN clause. So, if a sex value is ‘F’ it is changed to ‘M’, but evaluation then stops, so the newly set ‘M’ is not changed back to an ‘F’.
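
For what it’s worth, the same swap can also be written with Oracle’s older DECODE function. This is just a sketch, equivalent to the CASE version above – the final argument is the default, so the other values are left alone:

update bedrock
set sex = decode(sex, 'F', 'M',   -- 'F' becomes 'M'
                      'M', 'F',   -- 'M' becomes 'F'
                      sex);       -- anything else is left as it is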


Statspack Snapshot Fails ORA-01400 Cannot Insert NULL …


Oh hum. An 11.2.0.3 Enterprise Edition production database has statspack taking a regular snapshot under the control of a dbms_scheduler job. For no apparent reason, the snapshot started failing with ORA-01400 Cannot insert NULL into PERFSTAT.STATS$SYSTEM_EVENT.EVENT. This was an interesting one to fix.

The following is the investigative process, in brief.

  • Test the snapshot process with a manual one – same error.
  • Google and My Oracle Support (aka MOS) were no help whatsoever. I was on my own! Twitter was useful and Noons (@wizofoz2k) suggested a statspack mismatch could cause this error.
  • Knowing that I had reinstalled statspack on this database a few weeks ago led me to drop and reinstall statspack and to recreate the jobs required to take regular snapshots and to purge old data. No joy, same problem.
  • Hunt down the code in V$SQL to see what’s going on here. A quick script helped out:
    SELECT sql_fulltext
    FROM   v$sql
    WHERE  DBMS_LOB.INSTR(sql_fulltext, 'SYSTEM_EVENT') <> 0
    AND    DBMS_LOB.INSTR(sql_fulltext, 'INSERT') <> 0;
    

    That showed an insert statement, as expected, reading the EVENT column from V$SYSTEM_EVENT – which, given half a brain, makes sense! I didn’t have half a brain at the time – as will become obvious!

  • Another quick script showed that there were 5 rows in V$SYSTEM_EVENT where EVENT was NULL:
    SELECT count(*)
    FROM   v$system_event
    WHERE  event IS NULL;
    
    COUNT(*)
    --------
           5
    

    WTF?

  • Looking at the EVENT column showed a huge load of crud and nothing much like a proper Oracle event. Some of the data were:
    rwp err: No dash in error string
    r removing error %d is [ ][ ]
    rrupt, error stack is [ ][ ]
    down and process is starting up
    indicate must count [ ][ ]
    ...
    

    WTF? (The sequel!)

  • The next stop – I did say I didn’t have half a brain, didn’t I? – was the alert log, where I should have been looking in the first place! Bingo!:
    WARNING: Oracle executable binary mismatch detected.
    Binary of new process does not match binary which started instance.
    
  • And there we have it. Noons was correct in as much as the version of statspack in use – 11.2.0.3 EE – didn’t match the running database binary which was 11.2.0.3 SE. Somehow, someone (no, not me – but thanks for asking!) had managed to start the database running on SE rather than EE.

    I’m thinking that this could have been done when an SE environment was enabled in a session, and someone simply did an export ORACLE_SID=whatever not realising that EE was required. This posting might help in that case! :-)

    After shutting down the database and making sure that the correct environment was set, a restart got rid of the messages in the alert log, and a snapshot was successfully executed.

    So, an interesting challenge that could have been resolved earlier if I’d gone straight to the alert log rather than dicking about thinking I knew that it must have been the reinstall I did previously! That’ll teach me to think then!

    And by the way, I know (oops) from previous experience, that the snapshot code in V$SQL will have table names and commands in upper case, which is why I used upper case tests for INSERT and SYSTEM_EVENT in the script above. If I wasn’t so sure, I’d have done this instead:

SELECT sql_fulltext
FROM   v$sql
WHERE  DBMS_LOB.INSTR(upper(sql_fulltext), 'SYSTEM_EVENT') <> 0
AND    DBMS_LOB.INSTR(upper(sql_fulltext), 'INSERT') <> 0;
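
For the record, a quick sanity check along these lines would also have shown the edition mismatch, without going anywhere near the alert log – the banner states Enterprise or Standard Edition quite clearly:

SELECT banner
FROM   v$version
WHERE  banner LIKE 'Oracle Database%';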

Cheers.

Internet Explorer Won’t Upload Files to MOS?


Are you forced to use Internet Explorer at work? Are you, like me, forced to use an old, insecure, broken version of IE at work, because it’s the Government Standard version? And are you, like me, unable to upload evidence files to My Oracle Support?

  • You need to go to Tools, then Internet Options.
  • On the Security tab, click the Custom Level button.
  • Now find and enable the Include Local Directory Path option.
  • OK your way back out, restart IE, and Robert is your mother’s brother.

This works and has been tested on IE7, IE8 and IE9, and possibly applies to IE10 too (untested). Most other browsers are not affected, although I’m told that Chrome also has problems.

Firefox, on the other hand, just works.

Hope this helps; unlike me, you may be able to find and click the above options. Our (Government) security policies have that option disabled and I’m not allowed to change it. I’m not allowed to use Firefox either. Go figure.

Security – sometimes it’s there to stop you doing your job.

Cheers.

ORA-03262 While Dropping Empty Datafile


Tired of trying to drop datafiles, which you know are empty, from a tablespace? Keep getting errors telling you the data file isn’t empty? Getting frustrated with the whole thing? Me too. This link has the reason and solution.

This Listener Problem is Driving Me Mad!


I have been looking at this far too long, and I’m stumped. I resolved a similar problem yesterday on another server. That was down to the ORACLE_HOME setting in listener.ora having a ‘1’ in it rather than a ‘2’. Took ages to spot that.

Anyway, here’s the stuff you’ll need to know to sort this out for me, or to suggest things to try. It’s also a question on oracle-l, seeing as there is a lot of evidence to post.

As ever, server names etc have been changed to protect the innocent!

Update We have a solution! Scroll to the bottom for details.

Oracle and OS Versions

Oracle Database: Standard Edition, 11.2.0.3 64 bit.
Server: SLES 10 sp 4
Uname -r: 2.6.16.60-0.97.1-smp
hostname: orcl11gserver 

The Problem

In short, setting ORACLE_SID and connecting as user/password works fine. Connecting as user/password@alias gives the following error:

ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0

Database Info

I can connect to the database, both as sysdba and as a non-sysdba user provided I don’t use the listener:

$ sqlplus / as sysdba
...
Connected.

SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
listener_networks                    string
local_listener                       string
remote_listener                      string


SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      orcl11g


SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      orcl11g.world


SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
orcl11g.WORLD

Listener.ora

lsnr_orcl11g =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orcl11gserver)(PORT = 1521))
      )
    )
  )

SID_LIST_lsnr_orcl11g =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl11g)
      (ORACLE_HOME = /opt/oracle/product/11.2.0.3/db_1)
      (SID_NAME = orcl11g)
    )
)

DYNAMIC_REGISTRATION_lsnr_orcl11g = off
SUBSCRIBE_FOR_NODE_DOWN_EVENT_lsnr_orcl11g=OFF

Tnsnames.ora

orcl11g,orcl11g.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = orcl11gserver)(Port = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl11g)
    )
  )

Sqlnet.ora

NAMES.DIRECTORY_PATH= (LDAP, TNSNAMES, EZCONNECT, HOSTNAME)

Oratab

orcl11g:/opt/oracle/product/11.2.0.3/db_1/:N

Tnsping

$ tnsping orcl11g:

Used parameter files:
/opt/oracle/product/11.2.0.3/db_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(Host = orcl11gserver)(Port = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl11g)))
OK (0 msec)

Listener Status

$ lsnrctl status lsnr_orcl11g
...
TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /opt/oracle/product/11.2.0.3/db_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/orcl11gserver/lsnr_orcl11g/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=orcl11gserver.testds.ntnl)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl11gserver)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     lsnr_orcl11g
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                10-MAY-2013 16:51:52
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/11.2.0.3/db_1/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/orcl11gserver/lsnr_orcl11g/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=orcl11gserver.testds.ntnl)(PORT=1521)))
Services Summary...
Service "orcl11g" has 1 instance(s).
  Instance "orcl11g", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Where is Listener Running From?

$ ps -ef | grep -i ls[n]r_orcl11g

oracle   21180     1  0 16:51 ?        00:00:00 /opt/oracle/product/11.2.0.3/db_1/bin/tnslsnr lsnr_orcl11g -inherit

Listener Log

The listener log shows the connection attempt being made, and established OK with a result code of zero.

<msg time='2013-05-10T17:25:56.866+01:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='orcl11gserver'
 host_addr='10.57.18.116'>
 <txt>10-MAY-2013 17:25:56 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl11g)(CID=(PROGRAM=sqlplus)(HOST=orcl11gserver)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.57.18.116)(PORT=12633)) * establish * orcl11g * 0
 </txt>
</msg>

Client Trace

Don’t worry, I’m not about to paste an entire ADMIN level trace here. But looking in one, I saw this extract:

nsbasic_brc:type=12, plen=11
nsbasic_brc:what=17, tot =11
nsbasic_brc:packet dump
nsbasic_brc:00 0B 00 00 0C 00 00 00  |........|
nsbasic_brc:01 00 01                 |...     |
nsbasic_brc:exit: oln=0, dln=1, tot=11, rc=0
nioqrc: found a break marker...
nioqrc: Recieve: returning error: 3111

This is sort of interesting, as it seems to indicate I got a break from somewhere or something! I saw this on my other similar problem as well, so it’s the same in the two trace files, but I solved the other problem by correcting the Oracle Home in listener.ora. Not this time!

The Solution

There are many people on oracle-l who took the time to look at the problem, so thanks to all. There are, however, two people to whom I am extremely grateful. They took mere minutes to discover what had been staring me in the face all day, and the winners are:

  • @martinberx on Twitter.
  • David Barbour on oracle-l.

Both noticed that in /etc/oratab, the Oracle Home path had a trailing slash, while in the listener.ora, it did not. Sheesh!

Thanks to both.
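
For future reference, a quick side-by-side look at the two files – something like the following, using the paths from the examples above – would have shown the mismatch straight away (note the trailing slash in oratab):

$ grep '^orcl11g:' /etc/oratab
$ grep 'ORACLE_HOME' /opt/oracle/product/11.2.0.3/db_1/network/admin/listener.ora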

The Fix

The fix was relatively simple:

  • With the current (wrong) oratab settings in force, shut down the database and the listeners. (The problem affected a number of databases/listeners on this server, not just the one I used in the above example.)
  • Edit oratab to remove the trailing slash.
  • Restart the listeners and databases with the new improved oratab.
  • Test – it all “just works”.

:-)

Spatial Indexes and Oracle Errors. How to fix.


If, like me, you have suffered from ORA-29902 Error in executing ODCIIndexStart() routine errors where Spatial indexes are involved, the following might help you fix it.

The error involved in the following has been extracted from a log file for a system which doesn’t use Spatial or Locator itself, but calls out to a separate database which does have Locator installed. This latter database was created using Transportable Tablespaces, exported from 10.2.0.5 Enterprise Edition on HP-UX and imported into 11.2.0.3 Standard Edition on Linux x86-64.

There were a number of errors creating a few of the spatial indexes on tables, like the one that follows in the example, that had zero rows in them. Oracle Support assured us that this was not a problem. And we believed them. Sigh!

The Problem

The following query demonstrates the problem.

CONNECT CADDBA/password

SELECT * FROM TEXT_FORESHORE A
WHERE MDSYS.SDO_RELATE( A.GEOM, 
                        MDSYS.SDO_GEOMETRY(2003,81989,
                        NULL,
                        MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),
                        MDSYS.SDO_ORDINATE_ARRAY(362000,600000,363000,601000)),
                        'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE';
SELECT * FROM TEXT_FORESHORE A
*
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 333

Working Out

I am definitely not a Spatial guru, but the above doesn’t look right to me. Looking at Google, the problem is caused by the Spatial Index being not there, missing, absent. Ok, let’s create it.

CREATE INDEX IDX_T142_GEOM ON TEXT_FORESHORE(GEOM)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS('TABLESPACE=CAD_PRSN_IDX_SPAT SDO_RTR_PCTFREE=0')
NOPARALLEL;

CREATE INDEX IDX_T142_GEOM ON TEXT_FORESHORE
                    *
ERROR at line 1:
ORA-00955: name is already used by an existing object

Ok, to me, that says that the index is actually present. DBA_INDEXES shows this to be the case. Apparently, it needs to be dropped and recreated, so I carry on:

DROP INDEX IDX_T142_GEOM ;
Index dropped.

CREATE INDEX IDX_T142_GEOM ON TEXT_FORESHORE(GEOM)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS('TABLESPACE=CAD_PRSN_IDX_SPAT SDO_RTR_PCTFREE=0')
NOPARALLEL;

CREATE INDEX IDX_T142_GEOM ON TEXT_FORESHORE
*
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10

Aha. Something different this time. Still not working though. It might be as simple as the CADDBA user not having the correct privileges. CREATE TABLE and CREATE SEQUENCE are required to create a spatial index – whether you are creating it directly in your own schema or as another user creating one in the schema in question. So:

CONNECT / AS SYSDBA

SELECT PRIVILEGE
FROM DBA_SYS_PRIVS
WHERE PRIVILEGE IN ('CREATE TABLE', 'CREATE SEQUENCE' )
AND GRANTEE = 'CADDBA';

PRIVILEGE
---------------
CREATE SEQUENCE
CREATE TABLE

2 rows selected.

So that’s not the problem this time. Looking into the USER_SDO_GEOM_METADATA view for this user (every user with Spatial data should have this view), I see nothing for this table_name and column_name:

CONNECT CADDBA/password

SELECT * FROM USER_SDO_GEOM_METADATA
WHERE TABLE_NAME = 'TEXT_FORESHORE'
AND COLUMN_NAME = 'GEOM';

no rows selected

Ok, a clue. I (vaguely) know that in order to create a spatial index, that view needs some data telling it all about the column in question. As this database had been created from a legacy database (which very very rarely gets updated) I was ok to extract the data from legacy and insert it directly here.
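
If there happens to be a database link back to the legacy database, the metadata row can be pulled across directly. The following is only a sketch – the link name legacy_db is made up, so adjust to suit:

INSERT INTO USER_SDO_GEOM_METADATA
SELECT table_name, column_name, diminfo, srid
FROM   user_sdo_geom_metadata@legacy_db
WHERE  table_name  = 'TEXT_FORESHORE'
AND    column_name = 'GEOM';

COMMIT;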

Did I mention that each time the CREATE INDEX command fails, it still leaves the index in question behind? So after each failure, you have to drop it again. Sigh!

DROP INDEX CADDBA.IDX_T142_GEOM ;
Index dropped.

INSERT INTO USER_SDO_GEOM_METADATA
VALUES ('TEXT_FORESHORE','GEOM',
        mdsys.SDO_DIM_ARRAY(
             mdsys.SDO_DIM_ELEMENT('Easting', 0, 700000, .0005),
             mdsys.SDO_DIM_ELEMENT('Northing', 0, 1300000, .0005)
        ), 81989);

1 row created.

COMMIT;
Commit complete.

Now can I create the index?

CREATE INDEX IDX_T142_GEOM ON TEXT_FORESHORE(GEOM)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS('TABLESPACE=CAD_PRSN_IDX_SPAT SDO_RTR_PCTFREE=0')
NOPARALLEL;

Index created.

And success at long last. Spatial, I hate you! Does the query work now?

SELECT * FROM TEXT_FORESHORE A
WHERE MDSYS.SDO_RELATE( A.GEOM, 
                        MDSYS.SDO_GEOMETRY(2003,81989,
                        NULL,
                        MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),
                        MDSYS.SDO_ORDINATE_ARRAY(362000,600000,363000,601000)),
                        'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE';

no rows selected

After all that work, no rows selected is exactly the correct answer. The table is empty, so I would have been very surprised to see anything other than that response.

The Solution

The solution to my particular problem was to:

  • Drop the so-called missing index.
  • Make sure the correct data is in USER_SDO_GEOM_METADATA for the table and column in question. Each user with Spatial data has their own version of this view, so you need to be connected as the appropriate user.
  • Create the index again.
  • Test the failing query, and it should work.

Cheers.

Oracle Proxy Users – What Are They Used For?


This post has also been categorised under “rants and raves” as you will see below! Oracle 10g was the first time that proxy users could be used easily from SQL. Prior to that only Java and/or OCI programs could use them. They’ve been around since 8i, but not (well) documented. Want to know more? Read on….

A Bit of Background

Many years ago, a software company I worked in – as a DBA – was taken over and we inherited a system (no names – you will see why later) which numerous users were able to use, and some of them got to create documents from within the application. I have no idea, to this day, which version of Oracle was the first to be used for the system, but it was apparent (from discussions with their technical people) that it was once a COBOL program using indexed files as the “database”. Apparently a straight conversion to Oracle was carried out, replacing each indexed file with an Oracle table.

The system was a bit of a nightmare. There were a number – at least three – of application owners in the database. Each of these had privileges and synonyms pointing to the other two, and in a few cases, User_a had a synonym that pointed to one of User_b’s objects, which turned out to be a synonym back to User_a! Go figure. As you can imagine, this made running a full database import (only exp and imp in those days) a bit of a nightmare with all those circular references back and forth between the application owners.

The worst part, and if you are security conscious in any way, I suggest you sit down now and take a deep breath before reading on, was this. The users in the system, who were able to create documents, had the following two privileges assigned in addition to their others:

  • CREATE ANY PROCEDURE
  • EXECUTE ANY PROCEDURE

Yes, that’s correct: in order to create a document, the end users had to be able to create a procedure in the application owner’s schema, then execute it! End users each had their own database login, of course; this allowed auditing to work correctly.

When I demonstrated this problem to the head of IT one day, he saw how simple it was for an end user to connect to the database and wipe out anything s/he desired, with only those two privileges, plus CREATE SESSION of course. His advice? Do not tell the customers about this! I didn’t.

Unfortunately, I never did get the chance to dig down and discover why the document-enabled users had to have those abilities; I might have been able to suggest something else instead.

Moving On – Proxy Users

Oracle’s proxy users could have been a solution to this massive security problem. The application logged in as each end user and used the two privileges above to create a procedure in the application owner schema, executed it, then dropped the procedure again. That is how the documents were produced.

However, had the application been a little more up to date, and using Oracle 10g, we could have still had the same abilities as above, but without the need to have those nasty “ANY” privileges. Here’s how we could have done it with Proxy Users.

Assume the following:

  • App_owner is the application owner, at least, the one responsible for document creation. All document creation will be done, within this user, using procedures owned by this user.
  • All other users who require to connect to the database will do so, and will be able to effectively become the app_owner user, but using a proxy connection rather than logging in directly as app_owner. For the purpose of this demonstration, we shall refer to these users, collectively, as doc_user although there can be more than one, obviously.
  • We do not want to have those “ANY” privileges granted to anyone.

Creating APP_OWNER

The application owner would be created as follows:

SQL> create user app_owner
  2  identified by secret_password 
  3  default tablespace users
  4  quota unlimited on users;
User created.

SQL> create role app_owner_role;
Role created.

SQL> grant
  2    create session,
  3    create table,
  4    create procedure,
  5    create trigger
  6  to app_owner_role;
Grant succeeded.

SQL> grant app_owner_role to app_owner;
Grant succeeded.

In the real world, there would be more privileges granted, but these will do for now.

At this point in time, the application would be initialised by the creation of tables, procedures, functions, packages and so on. All done under the app_owner user. Once the application has been set up, we can consider creating the doc_user account(s). Before we do so, we need to create a role that defines only the privileges that the doc_user requires when connected to the application as the app_owner:

SQL> connect / as sysdba
Connected.

SQL> create role doc_user_role;
Role created.

SQL> grant 
  2    create procedure,
  3    create session
  4  to doc_user_role;
Grant succeeded.

SQL> grant doc_user_role to app_owner;
Grant succeeded.

Remember, we wish to connect as the doc_user but become the app_owner for the duration of our document production. Therefore, the role we create needs to be granted to the app_owner user and not to the doc_user user(s).

In addition, because the app_owner user already has create session via the app_owner_role, you may be wondering why we also grant it to the doc_user_role. Are you wondering? I’ll tell you soon. Read on!

Obviously, the role could be enhanced with other privileges, as required, to allow the application requirements to be achieved. For this demonstration, create procedure is enough as we need the doc_user to be able to create and execute a procedure within the app_owner schema.

Creating DOC_USER

The application users, able to create documents, would be created as follows:

SQL> create user doc_user
  2  identified by another_password;
User created.

That is all that is required. The doc_user(s) will not be creating tables etc, merely logging into the system, in this case, by becoming the app_owner and using only the privileges granted to that user via the doc_user_role. If the doc_users needed to connect as themselves for certain parts of the application that didn’t involve document production, they would obviously require the appropriate privileges, such as create session.

As above, the real application would require some other privileges, but these will do for this demonstration.

So far, so normal. But, in order to allow the doc_user the ability to login and become the app_owner user, we need to tell Oracle that the app_owner can be connected to, through the doc_user and to only allow the privileges granted to the role doc_user_role:

SQL> alter user app_owner
  2    grant connect through doc_user
  3    with role doc_user_role;
User altered.

This is why we had to grant create session to the doc_user_role earlier. If we had not done so, we would have seen the following error when we tried to do a proxy connection:

ERROR:
ORA-01045: user APP_OWNER lacks CREATE SESSION privilege; logon denied

If you see this, make sure that your user – app_owner in this case – has create session granted directly or to the role that will be enabled when proxy connections take place.
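
A quick way to check, before anyone proxies in, is to look at what the role actually contains – for example:

select privilege
from   dba_sys_privs
where  grantee = 'DOC_USER_ROLE';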

It is permitted to allow the app_owner to connect through numerous doc_users, it need not be just the one. If you have doc_user_1 through doc_user_n, then execute an alter user as above for each, and any of those will be able to become the app_owner for the purpose of creating documents.
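
For example, assuming a couple of extra (hypothetical) document users already exist, the grant is simply repeated for each of them:

alter user app_owner grant connect through doc_user_1 with role doc_user_role;
alter user app_owner grant connect through doc_user_2 with role doc_user_role;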

What Magic is This?

The alter user statement above has done two things: doc_user is now able to log in as a proxy for app_owner and, when it does so, it will actually be logged in as app_owner with only the privileges granted to the doc_user_role available. Had we omitted the with role clause, doc_user would have had all the privileges of app_owner – and this is not as secure as we would like. Oracle applications, and thus users, should operate on the least privilege basis.

The doc_user account doesn’t even need create session any more, unless it needs to log in as itself for other reasons.

Proxy logging in is as follows:

SQL> connect doc_user[app_owner]/another_password
Connected.

SQL> show user
USER is "APP_OWNER"

You will hopefully notice two things above:

  • Even though the doc_user has no create session privileges, it logged in quite happily with that username and password.
  • Although we logged in as doc_user, we are connected as app_owner.

You can see how the doc_user logs in, effectively as itself, but specifies the user that it wants to become after login in square brackets. Because the app_owner has been told to use the role doc_user_role, then after becoming app_owner, only that role will be enabled:

SQL> select role from session_roles;

ROLE
------------------------------
DOC_USER_ROLE

Now, can we create a procedure? Remember, the doc_user has not been given any privileges that allow it to do so; however, the enabled doc_user_role does have this ability:

SQL> -- Create a "document" via a procedure.
SQL> create procedure doc_user_document
  2  as
  3  begin
  4    null;  -- This would normally do stuff to create a document.
  5  end;
  6  /
Procedure created.

SQL> -- Do the document "creation" by executing said procedure.
SQL> exec doc_user_document;

PL/SQL procedure successfully completed.


SQL> -- Tidy up again.
SQL> drop procedure doc_user_document;
Procedure dropped.

We can be sure that we don’t have any of app_owner’s other privileges active, by trying to create a table, for example:

SQL> create table test(a number);
create table test(a number)
*
ERROR at line 1:
ORA-01031: insufficient privileges

That looks fine – even though app_owner has create table etc, via the app_owner_role, that role isn’t active when doc_user proxies in as app_owner.
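
If you want to see exactly which privileges are live in the proxied session, SESSION_PRIVS lists them. With the grants above it should show nothing more than CREATE SESSION and CREATE PROCEDURE:

select privilege from session_privs;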

Finding Proxy Users

You can find details of proxy users in the PROXY_USERS view:

SQL> conn / as sysdba
Connected.

SQL> select * from proxy_users;

PROXY            CLIENT           AUT FLAGS
---------------- ---------------- --- -----------------------
DOC_USER         APP_OWNER        NO  PROXY MAY ACTIVATE ROLE

This shows you that the doc_user is a proxy user which is permitted to become the client user, app_owner, and that a role may/will be activated at login. It doesn’t tell you anything about which role will be activated at login though. To discover that information, you should use either USER_PROXIES or DBA_PROXIES where the ROLE column has the details you need:

SQL> desc dba_proxies
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PROXY                                              VARCHAR2(30)
 CLIENT                                    NOT NULL VARCHAR2(30)
 AUTHENTICATION                                     VARCHAR2(3)
 AUTHORIZATION_CONSTRAINT                           VARCHAR2(35)
 ROLE                                               VARCHAR2(30)
 PROXY_AUTHORITY                                    VARCHAR2(9)

SQL> desc user_proxies
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CLIENT                                    NOT NULL VARCHAR2(30)
 AUTHENTICATION                                     VARCHAR2(3)
 AUTHORIZATION_CONSTRAINT                           VARCHAR2(35)
 ROLE                                               VARCHAR2(30)

As usual, DBA_PROXIES gives you details of all the proxy users in the database while USER_PROXIES only shows the ones that your currently logged in user can become, as per this example:

SQL> conn doc_user/another_password
Connected.

SQL> select client, role from user_proxies;

CLIENT                         ROLE
------------------------------ ------------------------------
APP_OWNER                      DOC_USER_ROLE

We can see that our current user – doc_user – can proxy connect as the app_owner with the role doc_user_role enabled. It cannot proxy login to any other database user account.

What About Other Roles, Privileges and PL/SQL?

We have seen above that when a user is granted “connect through … with role” then only the privileges granted to that specific role are enabled at proxy login. What about privileges granted directly to the user we are becoming?

SQL> conn / as sysdba
Connected.

SQL> grant create table to app_owner;
Grant succeeded.

SQL> conn doc_user[app_owner]/another_password
Connected.

SQL> create table test(a number);
Table created.

Whoops! So, privileges granted directly to the app_owner are also enabled when we proxy login to it, even though a role was specified. This could be something to consider when setting up your proxy users.
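
A quick check for this sort of thing is to look for system privileges granted directly to the client user, as these will be live in every proxy session regardless of the with role clause:

select privilege
from   dba_sys_privs
where  grantee = 'APP_OWNER';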

Who owns the new table then?

SQL> select owner from all_tables where table_name = 'TEST';

OWNER
------------------------------
APP_OWNER

So, that’s another thing to consider: when you have become another user via a proxy login, that other user owns any objects you create. As the owner, it also has INSERT, DELETE, UPDATE and SELECT privileges on these tables, as well as on any other tables it owns, to which SELECT etc may never have been granted to the doc_user, in this case.

Bear in mind also that roles enabled in a session are disabled when stored (definer’s rights) PL/SQL is being compiled or executed.

Have fun!

Terminology

In the discussions above, I’ve tended to stay away from the various terminology that Oracle uses in an effort to try and make things a little more clear. However, before I go, here’s the information you may require:

  • Proxy User – is the user that is allowed to become another user. In the above, the proxy user is the doc_user as it is permitted to become the app_owner.
  • Client User – is the user that the proxy user is allowed to become. In the above, the client user is app_owner.
  • Proxy Login – is a special format connect string where the proxy user’s name and password is used, as normal, but with the addition of the client user’s name in square brackets after the proxy user’s name.
    connect proxy_user[client_user]/proxy_user_password@.....
    

Oracle RMAN for Beginners – Part 9


It’s been a while since the previous post in this series, but I’m back again. This time out, we are looking at incremental backups. What they are, how they work, and how – of course – to take them and use them to restore and recover your databases.

More Terminology

What exactly is an incremental backup? Previously, this series has shown you how to take a full backup, be that of the database, archived logs, tablespaces or data files. Each time, for example, you back up a full database, you copy everything – even data that have not changed. This does make it simpler to restore and recover the database as you only require the most recent backup, but it can take quite some time if you are backing up a multi-terabyte database every night.

An incremental backup works on the principle that if you already have a backup, and only some of your data has changed, then surely it will be quicker to just backup the changed data? This is exactly how it works.

Incremental Backup Levels

There are two incremental backup levels:

  • Level 0 (zero) – this is almost the equivalent of a full backup, but it prepares the way for any following incremental backups, of any kind (see below), by giving you a starting position from which to work.

    You must start from a level zero incremental backup and not a previous full backup. Although both back up everything, only the level zero incremental one can be used as a parent backup for future level one backups.

  • Level 1 (one) – this is your normal incremental backup level. It only backs up blocks which have changed recently. Exactly which blocks it backs up depends on the backup type – which is described below. A level 1 incremental backup should be smaller and faster than a full or level 0 backup as it normally wouldn’t copy the entire database. However, the final size does depend on the number of changes and, as mentioned, the backup type.

Incremental Backup Types

There are two different incremental backup types:

  • Differential – the default incremental backup type. This backs up only those blocks which have changed since the previous level 0 or level 1 incremental backup.
  • Cumulative – this incremental backup will back up all blocks changed since the previous level 0 incremental backup.

All incremental backups use backup sets, not file copies. However, see the next article in the series for details of something which may, at first glance, appear to contradict this statement! (Merged file copy incremental backups.)

A Contrived Example

The following backup strategy should hopefully clarify things – the matching RMAN commands are sketched just after the list:

  • On Monday, you take a level 0 incremental backup.
  • On Tuesday, you take a differential level 1 incremental backup. This means that only those blocks which changed since the previous level 0 or level 1 backup will be copied. This means, changes since Monday’s level 0 backup.
  • On Wednesday, you take another differential level 1 incremental backup. This means that only those blocks which changed since the previous level 0 or level 1 backup will be copied. This means, changes since Tuesday’s level 1 backup.
  • On Thursday, you take a cumulative level 1 incremental backup. This backup will copy all changed blocks from the previous level 0 backup – which took place on Monday.
  • On Friday, you take a differential level 1 incremental backup. This backup will copy all changed blocks from the previous level 0 or level 1 backup – which took place on Thursday – because you can mix and match the level 1 backup types without any worries.
  • On Saturday, you take another differential level 1 incremental backup. This backup will copy all changed blocks from the previous level 0 or level 1 backup – which took place on Friday.
  • You have Sunday off to be with your family!
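
In RMAN terms, the week above might look something like this – a sketch only, with the tags purely illustrative:

RMAN> backup incremental level 0 database tag "Mon Level 0";
RMAN> backup incremental level 1 database tag "Tue Diff";
RMAN> backup incremental level 1 database tag "Wed Diff";
RMAN> backup incremental level 1 cumulative database tag "Thu Cumulative";
RMAN> backup incremental level 1 database tag "Fri Diff";
RMAN> backup incremental level 1 database tag "Sat Diff";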

Why would you want to take a cumulative backup at certain points when you are already taking differential backups? Well, even though you are taking incremental backups, it is possible that there will be a lot of changes. Cumulative backups help with the restoration of the database after a failure – they can reduce the amount of work you need to do, tapes to be located etc.

Here’s what RMAN would do in the event of a database failure on any of the days when the failure is prior to the backups being successfully completed. Assume that the entire database will require restoring and recovery.

  • Tuesday – Restore Monday’s backup and use the archived logs to recover the database.
  • Wednesday – As for Tuesday, but Tuesday’s level 1 backup would be required in addition to the archived logs for recovery.
  • Thursday – As for Wednesday, but Wednesday’s level 1 backup would be required in addition to the archived logs for recovery.
  • Friday – Restore Monday’s backup, recover using Thursday’s cumulative backup, in addition to the archived logs needed to bring us back up to date.
  • Saturday – As per Friday, but also requires Friday’s backup as part of the recovery.
  • Sunday – As per Saturday but add in Saturday’s backup as part of the recovery.
  • Monday – As per Sunday, but you better make sure that you have all the archived logs that were created after the Saturday backup in order that you can recover bang up to date! Maybe you should consider reviewing the above backup strategy? 😉

So you can see, using cumulative backups reduces the number of incremental backups you need to use to recover the database. RMAN will choose the best backup it knows about – in the controlfile or the catalogue – to restore the database – this may be a level 0 incremental backup.

When you recover the database, RMAN will choose to use archived logs or an incremental level 1 backup or a mixture of both, as required, to get the database back to where you requested it.

Incremental backups take less time to produce, but can require a little more thought when trying to restore and recover a database (tablespace etc) as you may need to keep more backups in the backup location to allow an adequate recovery window.

Full backups are easier, RMAN will most likely restore the most recent backup and use the archived logs to recover the database, but as mentioned, these backups may be very much larger.

Slightly Technical Stuff

Under normal conditions, RMAN will perform an incremental backup by reading each and every (non-virgin) block into the buffer and checking if the SCN is higher (or the same according to Kuhn, Alapati and Nanda) than the SCN of the most recent level 0 (or level 1 depending on the incremental backup type) backup. If the block’s SCN is higher, this block requires backup.

You can make this process a lot quicker, especially with large databases, by turning on Block Change Tracking. This process uses a binary file to record the blocks that have changed over the normal day to day running of the database, since the most recent level 0/level 1 backup.

If this feature is turned on, when RMAN performs an incremental backup at level 1, it doesn’t need to read each and every (non-virgin) block into the buffer as it knows already which blocks need copying. You can turn on block change tracking as follows – assuming that we are not using Oracle Managed files:

SQL> alter database enable block change tracking using file '/path/to/file/change_tracking.chg' reuse;
Database altered.

The filename will be created if it doesn’t exist, but the path should already exist or the command will fail. If the file already exists, it will be overwritten. This could cause the loss of existing change tracking data, so check first, to see if it is already enabled:

SQL> select * from v$block_change_tracking;

STATUS     FILENAME                                                            BYTES
---------- ------------------------------------------------------------------  -----------
ENABLED    /srv/nffs/flashback_area/ant12/change_tracking/change_tracking.chg  11599872
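
As an aside, if Oracle Managed Files are in use (DB_CREATE_FILE_DEST is set), the USING FILE clause can simply be omitted and Oracle will choose the location and name itself:

SQL> alter database enable block change tracking;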

Block change tracking can be turned off, if desired, as follows:

SQL> alter database disable block change tracking;
Database altered.

You cannot specify a size for the change tracking file. Oracle will allocate the required space itself, based on the database size, the number of data files in the database and the number of redo threads that are enabled. In 11gR2, the file is initially created at 10Mb and grows in 10Mb chunks as new space is required. There is 320 Kb allocated in the file for each data file in the database. (Information from Kuhn, Alapati & Nanda.)

Enough Talking, Lets Do Backups!

As ever, we should consider setting NLS_DATE_FORMAT in the shell before we begin. This way, we get better dates when we look at the dates and times of our backups. You cannot set this within RMAN itself, sadly.

$ export NLS_DATE_FORMAT='dd/mm/yyyy hh24:mi:ss'

$ rman target /

Once we are in RMAN and connected to the target database (and catalogue, if one is in use) we can start the incremental process by taking an initial level 0 incremental backup. You will note from the following that I am including the archived logs too. You can’t incrementally back those up, so they get copied in the normal manner.

RMAN> backup incremental level 0 database tag "DB Level 0";

Starting backup at 12/08/2013 15:51:59
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
input datafile file number=00001 name=/srv/nffs/oradata/ant12/data/system01.dbf
input datafile file number=00002 name=/srv/nffs/oradata/ant12/data/sysaux01.dbf
input datafile file number=00007 name=/srv/nffs/oradata/ant12/data/audit01_01.dbf
input datafile file number=00008 name=/srv/nffs/oradata/ant12/data/utility01_01.dbf
input datafile file number=00003 name=/srv/nffs/oradata/ant12/data/undotbs01.dbf
input datafile file number=00005 name=/srv/nffs/oradata/ant12/data/tools01.dbf
input datafile file number=00006 name=/srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: starting piece 1 at 12/08/2013 15:52:00
channel ORA_DISK_1: finished piece 1 at 12/08/2013 15:52:15
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp tag=DB LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 12/08/2013 15:52:15

Starting Control File and SPFILE Autobackup at 12/08/2013 15:52:15
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823276335_90kxnzkh_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 15:52:16

Don’t forget the archived logs though! Even though we are doing an incremental backup, the archived logs are still required.

RMAN>  backup incremental level 0 archivelog all delete input tag "ARC Level 0";

Starting backup at 12/08/2013 15:54:14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=57 RECID=57 STAMP=823275596
input archived log thread=1 sequence=58 RECID=58 STAMP=823275613
input archived log thread=1 sequence=59 RECID=59 STAMP=823276422
input archived log thread=1 sequence=60 RECID=60 STAMP=823276454
channel ORA_DISK_1: starting piece 1 at 12/08/2013 15:54:14
channel ORA_DISK_1: finished piece 1 at 12/08/2013 15:54:15
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_annnn_ARC_LEVEL_0_90kxrpvq_.bkp tag=ARC LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/srv/nffs/flashback_area/ant12/ANT12/archivelog/2013_08_12/o1_mf_1_57_90kwxw9z_.arc RECID=57 STAMP=823275596
archived log file name=/srv/nffs/flashback_area/ant12/ANT12/archivelog/2013_08_12/o1_mf_1_58_90kwyfhj_.arc RECID=58 STAMP=823275613
archived log file name=/srv/nffs/flashback_area/ant12/ANT12/archivelog/2013_08_12/o1_mf_1_59_90kxqpck_.arc RECID=59 STAMP=823276422
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/srv/nffs/flashback_area/ant12/ANT12/archivelog/2013_08_12/o1_mf_1_60_90kxrpl4_.arc thread=1 sequence=60
Finished backup at 12/08/2013 15:54:15

Starting Control File and SPFILE Autobackup at 12/08/2013 15:54:15
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823276456_90kxrr72_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 15:54:17

Yes, I know, it looks silly! How on earth can you incrementally backup an archived log? Well, the whole log file will be copied because each and every block has changed! But in effect, it’s a normal archived log backup.

Now, you may be wondering why I didn’t just use the following command:

RMAN> backup incremental level 0 database 
2> plus archivelog all delete input
3> tag "DB Level 0";

The command does work, and does back up both the database and the archived logs. However, the archived logs will be copied first and will take the requested tag – “DB Level 0” – then the archived logs on disc will be deleted as normal, according to the deletion policy configuration. When the database is backed up, the tag is not used, so the actual database backup gets an RMAN generated tag and not the one we wanted. The current redo log is then archived and backed up – getting the requested tag again – and finally the controlfile and spfile are backed up according to the autobackup policy and again, get an RMAN generated tag.

It appears that whatever tag is requested is only applied to the first backupset in the backup. If you wish the database and archived log backups to be tagged in this way, then run separate backups with an appropriate tag. The autobackup of the controlfile and/or spfile will not use the requested tag.

That’s the initial level 0 backup. We can now do some work in the database – I shall not show that here, but you can be assured that I have made some changes to tables in the working schema I’m using – and then backup only the changes.

RMAN> backup incremental level 1 database tag "DB LEvel 1";

Starting backup at 12/08/2013 16:07:18
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
input datafile file number=00001 name=/srv/nffs/oradata/ant12/data/system01.dbf
input datafile file number=00002 name=/srv/nffs/oradata/ant12/data/sysaux01.dbf
input datafile file number=00007 name=/srv/nffs/oradata/ant12/data/audit01_01.dbf
input datafile file number=00008 name=/srv/nffs/oradata/ant12/data/utility01_01.dbf
input datafile file number=00003 name=/srv/nffs/oradata/ant12/data/undotbs01.dbf
input datafile file number=00005 name=/srv/nffs/oradata/ant12/data/tools01.dbf
input datafile file number=00006 name=/srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: starting piece 1 at 12/08/2013 16:07:18
channel ORA_DISK_1: finished piece 1 at 12/08/2013 16:07:19
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1_90kyk6qw_.bkp tag=DB LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12/08/2013 16:07:19

Starting Control File and SPFILE Autobackup at 12/08/2013 16:07:19
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823277239_90kyk81j_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 16:07:20

That’s it done, it took all of 1 second to backup the changes I made. Because I didn’t specify to perform a cumulative backup, RMAN defaulted to differential, as described above. If I now carry out some more work, and take another level 1 differential backup, I’ll get only the changed blocks since the backup we just carried out.

RMAN>  backup incremental level 1 database tag "DB Level 1 - part 2";

I have omitted the output from this one as it’s almost identical to the one above. Also, you will note that I am not showing the archived logs being backed up – you can assume that I am backing them up, because I should be; I’m just not showing the output.

And now, we shall take a cumulative incremental backup, which will create a new backup consisting of all the blocks that changed since the previous level zero backup. This is effectively, everything in the two differential backups taken above.

RMAN>  backup incremental level 1 cumulative database tag "DB Level 1 - cumulative";

Starting backup at 12/08/2013 16:12:21
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
...
channel ORA_DISK_1: starting piece 1 at 12/08/2013 16:12:21
channel ORA_DISK_1: finished piece 1 at 12/08/2013 16:12:22
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp tag=DB LEVEL 1 - CUMULATIVE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12/08/2013 16:12:22

Starting Control File and SPFILE Autobackup at 12/08/2013 16:12:22
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823277542_90kytq64_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 16:12:23

You will note that there is little indication that this is a cumulative backup, other than the tag I have used to show me what it is! How can we discover what level a backup is, what type and so on?

Listing Backups

We can see the backups, and their tags, as follows:

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
62      B  A  A DISK        12/08/2013 15:38:01 1       1       NO         FULL BACKUP
63      B  F  A DISK        12/08/2013 15:38:10 1       1       NO         TAG20130812T153802
64      B  A  A DISK        12/08/2013 15:38:18 1       1       NO         FULL BACKUP
65      B  F  A DISK        12/08/2013 15:38:20 1       1       NO         TAG20130812T153819
70      B  0  A DISK        12/08/2013 15:52:07 1       1       NO         DB LEVEL 0
71      B  F  A DISK        12/08/2013 15:52:16 1       1       NO         TAG20130812T155215
72      B  A  A DISK        12/08/2013 15:53:42 1       1       NO         ARC LEVEL 0
73      B  F  A DISK        12/08/2013 15:53:44 1       1       NO         TAG20130812T155343
74      B  A  A DISK        12/08/2013 15:54:14 1       1       NO         ARC LEVEL 0
75      B  F  A DISK        12/08/2013 15:54:16 1       1       NO         TAG20130812T155416
76      B  1  A DISK        12/08/2013 16:07:19 1       1       NO         DB LEVEL 1
77      B  F  A DISK        12/08/2013 16:07:20 1       1       NO         TAG20130812T160719
78      B  1  A DISK        12/08/2013 16:09:29 1       1       NO         DB LEVEL 1 - PART 2
79      B  F  A DISK        12/08/2013 16:09:31 1       1       NO         TAG20130812T160931
80      B  1  A DISK        12/08/2013 16:12:21 1       1       NO         DB LEVEL 1 - CUMULATIVE
81      B  F  A DISK        12/08/2013 16:12:23 1       1       NO         TAG20130812T161222
82      B  A  A DISK        12/08/2013 16:13:31 1       1       NO         ARC LEVEL 1
83      B  F  A DISK        12/08/2013 16:13:33 1       1       NO         TAG20130812T161332

In the above, KEY is the backupset key, TY is the backup type which in these examples is all Backupset types, LV is the backup level – 0 (zero) is incremental level 0, 1 (one) is incremental level 1, A is archived logs and F is a full backup (or autobackup of spfile and/or controlfile). The S column is the status where A indicates an available backup, and finally, the TAG column displays our requested tags.

You can see from the various columns how the backups do appear to match up to our tags. With the exception of the RMAN generated tags for the autobackups of course. You will also see that regardless of how the tag was originally specified in the backup command, it is converted to upper case.

Backing Up Parts of the Database

You do not have to carry out an incremental backup of the entire database. As with full backups, you can back up at the tablespace or data file level. You cannot back up individual data blocks though!

A tablespace backup would be as follows:

RMAN> backup incremental level 1
2> tablespace users
3> tag "TS USERS - Level 1";

Starting backup at 12/08/2013 16:34:59
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: starting piece 1 at 12/08/2013 16:34:59
channel ORA_DISK_1: finished piece 1 at 12/08/2013 16:35:00
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TS_USERS___LEVEL_1_90l053hb_.bkp tag=TS USERS - LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12/08/2013 16:35:00

Starting Control File and SPFILE Autobackup at 12/08/2013 16:35:00
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823278900_90l054z0_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 16:35:01

While a couple of data files (in different tablespace, as it happens) would be thus:

RMAN> backup incremental level 1
2> datafile 5,7;

Starting backup at 12/08/2013 16:36:23
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/srv/nffs/oradata/ant12/data/audit01_01.dbf
input datafile file number=00005 name=/srv/nffs/oradata/ant12/data/tools01.dbf
channel ORA_DISK_1: starting piece 1 at 12/08/2013 16:36:23
channel ORA_DISK_1: finished piece 1 at 12/08/2013 16:36:24
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TAG20130812T163623_90l07qc1_.bkp tag=TAG20130812T163623 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12/08/2013 16:36:24

Starting Control File and SPFILE Autobackup at 12/08/2013 16:36:24
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_12/o1_mf_s_823278984_90l07rml_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12/08/2013 16:36:25

Restoring Incremental Backups

You can, as mentioned previously, restore from incremental backups. Given the choice, RMAN will choose the most appropriate backup to restore from, and when you recover, either a level 1 incremental or the archived logs, or a mixture of both, will be used to recover back to the most recent state of the database. Assuming you didn’t specify an SCN or an until time etc of course!

Equally, as with full backups, you can restore to the database, tablespace, datafile or block level, as well as restoring the archived logs.

The following examples show each of these recoveries.

Database Recovery

To restore the entire database, it must be shut down to the mounted state. This is no different from carrying out a restore from a full backup.

RMAN> shutdown
database closed
database dismounted
Oracle instance shut down

RMAN> startup mount
connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area     768331776 bytes
...

RMAN> restore database;

Starting restore at 12/08/2013 16:42:48
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /srv/nffs/oradata/ant12/data/system01.dbf
...

channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp
...

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:16
Finished restore at 12/08/2013 16:44:04

RMAN> recover database;

Starting recover at 12/08/2013 16:46:45
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /srv/nffs/oradata/ant12/data/system01.dbf
...

channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp
...

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00006: /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TS_USERS___LEVEL_1_90l053hb_.bkp
...

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: /srv/nffs/oradata/ant12/data/tools01.dbf
destination for restore of datafile 00007: /srv/nffs/oradata/ant12/data/audit01_01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TAG20130812T163623_90l07qc1_.bkp
...

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 12/08/2013 16:46:48

RMAN> alter database open;
database opened

So, RMAN chose the incremental level zero backup to restore everything. In the recovery phase it then applied the cumulative level one backup (rather than the two separate differential level one backups), followed by the tablespace backup of the USERS tablespace and the data file backup of the data files we backed up above, rather than using the archived log backups.

Tablespace Recovery

To recover one or more tablespaces, the database can remain open unless either (or both) of the SYSTEM or UNDO tablespaces needs to be restored and recovered. In that case, the database must be in the mounted state.

The first example restores a single tablespace, USERS, and this can be done with the database open, but the tablespace must be taken offline. Users will only be affected if their work requires access to the tablespace(s) being restored.

RMAN> sql "alter tablespace users offline";
sql statement: alter tablespace users offline

RMAN> restore tablespace users;

Starting restore at 12/08/2013 20:03:12
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp tag=DB LEVEL 0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 12/08/2013 20:03:14

RMAN> recover tablespace users;

Starting recover at 12/08/2013 20:03:20
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00006: /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp tag=DB LEVEL 1 - CUMULATIVE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00006: /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TS_USERS___LEVEL_1_90l053hb_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TS_USERS___LEVEL_1_90l053hb_.bkp tag=TS USERS - LEVEL 1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 12/08/2013 20:03:22

RMAN> sql "alter tablespace users online";
sql statement: alter tablespace users online

You can see, once again, that RMAN chose to restore the initial level zero incremental backup of the database, but only the USERS tablespace was restored from it. The recovery phase used the cumulative level one incremental backup plus the USERS tablespace backup that we took earlier. No archived logs were used in this recovery either.

The next example restores the SYSTEM tablespace and, as such, requires the database to be mounted.

RMAN> shutdown;
...
Oracle instance shut down

RMAN> startup mount;
...
database mounted

Total System Global Area     768331776 bytes
...

RMAN> restore tablespace system;

Starting restore at 12/08/2013 20:08:15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /srv/nffs/oradata/ant12/data/system01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp tag=DB LEVEL 0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
Finished restore at 12/08/2013 20:08:41

RMAN> recover tablespace system;

Starting recover at 12/08/2013 20:11:03
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /srv/nffs/oradata/ant12/data/system01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp tag=DB LEVEL 1 - CUMULATIVE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 12/08/2013 20:11:05

RMAN> alter database open;
database opened

This time RMAN restored from the same level zero backup as previously and recovered from the cumulative level one backup. Again, neither of the differential backups was used, simply because the cumulative backup was the best choice for the recovery.

Data File Recovery

This example shows the restoration and recovery of a data file that is not a member of the SYSTEM or UNDO tablespaces. For this, the data file(s) need to be taken offline, but the database can remain open. Users will only be affected if their work requires access to the data files being restored.

RMAN> sql "alter database datafile 5 offline";
sql statement: alter database datafile 5 offline

RMAN> restore datafile 5;

Starting restore at 12/08/2013 20:16:39
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /srv/nffs/oradata/ant12/data/tools01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd0_DB_LEVEL_0_90kxnj9z_.bkp tag=DB LEVEL 0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 12/08/2013 20:16:41

RMAN> recover datafile 5;

Starting recover at 12/08/2013 20:16:47
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: /srv/nffs/oradata/ant12/data/tools01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_DB_LEVEL_1___CUMULAT_90kytoof_.bkp tag=DB LEVEL 1 - CUMULATIVE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00005: /srv/nffs/oradata/ant12/data/tools01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TAG20130812T163623_90l07qc1_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_12/o1_mf_nnnd1_TAG20130812T163623_90l07qc1_.bkp tag=TAG20130812T163623
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 12/08/2013 20:16:49

RMAN> sql "alter database datafile 5 online";
sql statement: alter database datafile 5 online

You can see, as above, that the backups used in the restore and recovery were chosen by RMAN to bring the data file in question back up to date as efficiently as possible.

Had we needed to restore and recover data file(s) belonging to the SYSTEM or UNDO tablespaces, we would, once again, have had to have the database mounted rather than open. This would, of course, affect all users of the database.

As this is very similar to the tablespace example given above for SYSTEM, I shall only sketch it briefly here.
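
A minimal sketch – assuming data file 1 is the SYSTEM data file, as it is in this database – would be:

RMAN> shutdown;
RMAN> startup mount;
RMAN> restore datafile 1;
RMAN> recover datafile 1;
RMAN> alter database open;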

Data Block Recovery

RMAN incremental backups allow the recovery of a single, or multiple, data blocks. Why recover a whole data file when you can make matters much quicker by recovering only the affected blocks – assuming you know which ones are affected, of course!

You can do this with the database open, even if the blocks are in the SYSTEM or UNDO tablespaces. You will notice below that there is no need to restore the data blocks in question – you cannot. You simply recover them.

This first example recovers a data block in a normal tablespace:

RMAN> recover datafile 7 block 17;

Starting recover at 12/08/2013 20:22:23
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 12/08/2013 20:22:25

And now, one block in the SYSTEM tablespace:

RMAN> recover datafile 1 block 9;

Starting recover at 12/08/2013 20:25:13
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 12/08/2013 20:25:13
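
If you do not know which blocks are damaged, the V$DATABASE_BLOCK_CORRUPTION view lists any corruption found during backups or validation, and RMAN can repair everything it lists in one go. A sketch:

SQL> select file#, block#, blocks, corruption_type
  2  from v$database_block_corruption;

RMAN> recover corruption list;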

Oracle RMAN for Beginners – Part 10


A slight variation on the incremental backups. In this (short) article, I demonstrate the use of database file copy backups which are themselves updated on a regular basis to avoid having to restore and recover using numerous incremental backups.

What’s Going On Here?

This took me a wee while to get my head around. We can take a backup of the database, in incremental form, and then, use our nightly incremental backups to update the backup itself to the latest state of the database. This means that although we are taking a regular incremental backup, a restore will require much less work as the backup is nearly up to date.

This requires the use of file copy backups, as opposed to backupsets, and was hinted at in the previous article where I introduced incremental backups.

Image copies of the database are usually full copies – you would take a copy of the data files every night. With RMAN, however, we can take an initial file copy of the database and then incrementally update it with each day’s changes. It’s not quite as simple as that, as we shall see, but bear with me!
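
For comparison, a plain image copy of the whole database, with no incremental updating, would be something along these lines (the tag is illustrative):

RMAN> backup as copy database tag "Nightly Copy";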

File Copy Updates

In order to perform this type of backup, we need to use the following commands in a run block:

RMAN> run {                                                   
2> recover copy of database with tag "Daily";
3> backup incremental level 1
4> for recover of copy
5> with tag "Daily"
6> database;
7> }

So, what happens?

  • Day 1 – there is nothing to recover and no file copy backup of the database exists. The commands will therefore create a new file copy of the database. The autobackup of the controlfile etc creates a backup as normal. The list backup summary command will show the latter while the list copy of database command will show details of the former. Unfortunately, there is no summary option on the list copy command.
  • Day 2 – Yesterday’s file copy still cannot be recovered because there are, as yet, no incremental backups to apply. However, this pass will create a new level 1 incremental backup. As ever, whatever has been configured to autobackup, will do so.
  • Subsequent days – Yesterday’s level 1 backup will be used to recover the file copy of the database taken on Day 1. A new level 1 incremental backup will be created afterwards and finally, the usual autobackup requirements will be carried out.

With the exception of the first two runs of these commands, there will be a file copy of the database as of the end of yesterday’s level 1 backup, and a new level 1 backup from today. A recovery from this backup situation will involve restoring from the file copy and recovering from the level 1 backup – plus any archived or online redo logs, as required.
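
In practice, this run block would be scheduled nightly, perhaps from cron via a small wrapper script. The following is only a sketch – the script name is illustrative, and it assumes ORACLE_HOME and ORACLE_SID are already set in the environment:

$ cat daily_image_backup.sh
#!/bin/bash
# Nightly incrementally updated image copy backup.
export NLS_DATE_FORMAT='dd/mm/yyyy hh24:mi:ss'
rman target / <<'EOF'
run {
  recover copy of database with tag "Daily";
  backup incremental level 1
  for recover of copy
  with tag "Daily"
  database;
}
exit;
EOF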

A Worked Example

As ever, set NLS_DATE_FORMAT in the shell prior to running RMAN – this gives better quality date formatting in the output and listings.

$ export NLS_DATE_FORMAT='dd/mm/yyyy hh24:mi:ss'

$ rman target /

In this demonstration, assume that this is the first time this set of commands has ever been run – or, at least, the first time for this tag value.

RMAN> ### DAY ONE ###
2> run {                                                   
recover copy of database with tag "Daily Backup";
backup incremental level 1
for recover of copy
with tag "Daily Backup"
database;
} 

Starting recover at 13/08/2013 10:02:53
using channel ORA_DISK_1
no copy of datafile 1 found to recover
...
Finished recover at 13/08/2013 10:02:53

Starting backup at 13/08/2013 10:02:53
using channel ORA_DISK_1
no parent backup or copy of datafile 1 found
...

channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
output file name=/srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_perfstat_90mxky4f_.dbf tag=DAILY BACKUP RECID=118 STAMP=823341801
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35

...

Finished backup at 13/08/2013 10:04:53

Starting Control File and SPFILE Autobackup at 13/08/2013 10:04:53
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_13/o1_mf_s_823341893_90mxopfw_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13/08/2013 10:04:56

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
...
117     B  F  A DISK        13/08/2013 10:04:55 1       1       NO         TAG20130813T100453

RMAN> list copy;
...

List of Datafile Copies
=======================

Key     File S Completion Time     Ckp SCN    Ckp Time           
------- ---- - ------------------- ---------- -------------------
119     1    A 13/08/2013 10:03:52 682467     13/08/2013 10:03:29
        Name: /srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_system_90mxm1cv_.dbf
        Tag: DAILY BACKUP

...

So, we can see that the recover command did nothing as there was no file copy to be recovered. The backup command started by noting that the requested level 1 incremental backup didn’t have a parent (level zero) backup, and so created a file copy backup with our requested tag.

When we look at the list of backups, we see only one – the tag matches the date and time of the controlfile autobackup and this is confirmed if we list backupset 117.

The following day, we execute exactly the same set of commands.

RMAN> ### DAY TWO ###
2> run {
recover copy of database with tag "Daily Backup";
backup incremental level 1
for recover of copy
with tag "Daily Backup"
database;
}

Starting recover at 13/08/2013 10:05:53
using channel ORA_DISK_1
no copy of datafile 1 found to recover
...
Finished recover at 13/08/2013 10:05:53

Starting backup at 13/08/2013 10:05:53
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
...

channel ORA_DISK_1: starting piece 1 at 13/08/2013 10:05:53
channel ORA_DISK_1: finished piece 1 at 13/08/2013 10:05:54
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mxqkq9_.bkp tag=DAILY BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13/08/2013 10:05:54

Starting Control File and SPFILE Autobackup at 13/08/2013 10:05:54
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_13/o1_mf_s_823341954_90mxqm3y_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13/08/2013 10:05:55

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
...
117     B  F  A DISK        13/08/2013 10:04:55 1       1       NO         TAG20130813T100453
118     B  1  A DISK        13/08/2013 10:05:54 1       1       NO         DAILY BACKUP
119     B  F  A DISK        13/08/2013 10:05:55 1       1       NO         TAG20130813T100554

RMAN> list copy;
...

List of Datafile Copies
=======================

Key     File S Completion Time     Ckp SCN    Ckp Time           
------- ---- - ------------------- ---------- -------------------
119     1    A 13/08/2013 10:03:52 682467     13/08/2013 10:03:29
        Name: /srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_system_90mxm1cv_.dbf
        Tag: DAILY BACKUP
...

At the end of day two, we see that the file copy backup has still not been recovered. This is because there wasn’t a level one incremental backup created yesterday. Remember, the recovery applies the latest incremental backup to the file copy and that incremental backup has not been created yet.

We then see that a new level one incremental backup is being created with our requested tag value, followed by the normal autobackup stuff.

Listing the backups shows that backups 118 and 119 have been created today. The latter is the autobackup and the former is our new (first) incremental backup. It is a coincidence that the file copy is also 119 – this need not always be the case, as tomorrow will show.

And the third, and subsequent, runs produce output similar to that shown below.

RMAN> ### DAY THREE AND SUBSEQUENT DAYS ###
2> run {                                                   
recover copy of database with tag "Daily Backup";
backup incremental level 1
for recover of copy
with tag "Daily Backup"
database;
}

Starting recover at 13/08/2013 10:06:45
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile copies to recover
recovering datafile copy file number=00001 name=/srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_system_90mxm1cv_.dbf
...

channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mxqkq9_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mxqkq9_.bkp tag=DAILY BACKUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished recover at 13/08/2013 10:06:46

Starting backup at 13/08/2013 10:06:47
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/srv/nffs/oradata/ant12/data/perfstat01_01.dbf
...

channel ORA_DISK_1: starting piece 1 at 13/08/2013 10:06:47
channel ORA_DISK_1: finished piece 1 at 13/08/2013 10:06:48
piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mxs7ob_.bkp tag=DAILY BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13/08/2013 10:06:48

Starting Control File and SPFILE Autobackup at 13/08/2013 10:06:48
piece handle=/srv/nffs/flashback_area/ant12/ANT12/autobackup/2013_08_13/o1_mf_s_823342008_90mxs90z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13/08/2013 10:06:49

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
...
117     B  F  A DISK        13/08/2013 10:04:55 1       1       NO         TAG20130813T100453
118     B  1  A DISK        13/08/2013 10:05:54 1       1       NO         DAILY BACKUP
119     B  F  A DISK        13/08/2013 10:05:55 1       1       NO         TAG20130813T100554
120     B  1  A DISK        13/08/2013 10:06:47 1       1       NO         DAILY BACKUP
121     B  F  A DISK        13/08/2013 10:06:49 1       1       NO         TAG20130813T100648

RMAN> list copy;
...

List of Datafile Copies
=======================

Key     File S Completion Time     Ckp SCN    Ckp Time           
------- ---- - ------------------- ---------- -------------------
131     1    A 13/08/2013 10:06:45 682600     13/08/2013 10:05:53
        Name: /srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_system_90mxm1cv_.dbf
        Tag: DAILY BACKUP
...

We see from the above that we have finally recovered the image copy of the database using yesterday’s level one backup. That takes the image copy of the data files up to date as of the end of yesterday’s backup. We also see that a brand new level one backup was taken, containing today’s changes. This will be applied to the file copy backup on tomorrow’s run of these commands. The file copy of the database will always be as up to date as of yesterday, and not as of today.

Looking at the list of backups, we can see that the most recent ones are the tagged daily backup – the level one created just now – and the usual autobackup.

Looking at the file copies, we see that yesterday’s key numbers have gone and new ones are in place, as are new checkpoint SCNs – in other words, the files have been updated.

Running Recoveries

So, what happens when we need to recover a tablespace, for example?

RMAN> sql "alter tablespace users offline";
sql statement: alter tablespace users offline

RMAN> restore tablespace users;

Starting restore at 13/08/2013 10:57:59
using channel ORA_DISK_1

channel ORA_DISK_1: restoring datafile 00006
input datafile copy RECID=140 STAMP=823343078 file name=/srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_users_90mxonmc_.dbf
destination for restore of datafile 00006: /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00006
output file name=/srv/nffs/oradata/ant12/data/users01.dbf RECID=0 STAMP=0
Finished restore at 13/08/2013 10:58:00

RMAN> recover tablespace users;

Starting recover at 13/08/2013 10:58:06
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00006: /srv/nffs/oradata/ant12/data/users01.dbf
channel ORA_DISK_1: reading from backup piece /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mytqy3_.bkp
channel ORA_DISK_1: piece handle=/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mytqy3_.bkp tag=DAILY BACKUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 13/08/2013 10:58:06

RMAN> sql "alter tablespace users online";
sql statement: alter tablespace users online

The first thing we see is that the data file copy with RECID 140 is being used as the source of the restore. We can see what that refers to with the following:

RMAN> list datafilecopy 140;

List of Datafile Copies
=======================

Key     File S Completion Time     Ckp SCN    Ckp Time           
------- ---- - ------------------- ---------- -------------------
140     6    A 13/08/2013 10:24:38 682690     13/08/2013 10:06:47
        Name: /srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_users_90mxonmc_.dbf
        Tag: DAILY BACKUP

Or, we could have found out what files would have been restored for the tablespace as follows:

RMAN> list copy of tablespace users;

List of Datafile Copies
=======================

Key     File S Completion Time     Ckp SCN    Ckp Time           
------- ---- - ------------------- ---------- -------------------
140     6    A 13/08/2013 10:24:38 682690     13/08/2013 10:06:47
        Name: /srv/nffs/flashback_area/ant12/ANT12/datafile/o1_mf_users_90mxonmc_.dbf
        Tag: DAILY BACKUP

Either way, we now know that a file copy was restored. The recovery phase isn’t so easy to decode, but we can see which backuppiece was used. It is highlighted in bold above. Checking what that actually is gives the following:

RMAN> list backuppiece "/srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mytqy3_.bkp";


List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
122     122     1   1   AVAILABLE   DISK        /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mytqy3_.bkp

This gives the backupset key number, which also happens to be 122. We can interrogate this next:

RMAN> list backupset 122;


List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time    
------- ---- -- ---------- ----------- ------------ -------------------
122     Incr 1  464.00K    DISK        00:00:01     13/08/2013 10:24:40
        BP Key: 122   Status: AVAILABLE  Compressed: NO  Tag: DAILY BACKUP
        Piece Name: /srv/nffs/flashback_area/ant12/ANT12/backupset/2013_08_13/o1_mf_nnnd1_DAILY_BACKUP_90mytqy3_.bkp
  List of Datafiles in backup set 122
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/system01.dbf
  2    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/sysaux01.dbf
  3    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/undotbs01.dbf
  4    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/perfstat01_01.dbf
  5    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/tools01.dbf
  6    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/users01.dbf
  7    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/audit01_01.dbf
  8    1  Incr 683288     13/08/2013 10:24:39 /srv/nffs/oradata/ant12/data/utility01_01.dbf

The output from the above shows that we are applying a level 1 incremental backup. Phew! At least we now know. There must be an easier way than this to find out though, surely?

Restoring individual data files, the whole database, or just a block or two is as easy as shown in previous examples. Don’t forget that tablespaces and data files need to be offline before they can be restored, and if the tablespace or data files belong to SYSTEM or UNDO, then the whole database needs to be in mount mode.

Introduction to Oracle Datapump – Part 1


Oracle Datapump, aka expdp and impdp, was introduced in Oracle 10g to replace the old faithful exp and imp utilities. Many DBAs around the world find it hard to change from what we know like the back of our hand to something new. We need to change, though, because exp is deprecated from 10g onwards and may already have vanished from 12c – which I have to install as one of my upcoming tasks. The following is a quick introduction for people like me – running Oracle on Linux and slightly averse to change! 😉

Introduction to Datapump

All of the following is based on 11.2.0.2 Enterprise Edition of Oracle, running on Suse Linux, Enterprise Edition version 11 SP 0.

Oracle datapump, as mentioned, is a replacement for the old exp and imp utilities. It comes with numerous benefits, and a couple of minor niggles! All will be revealed.

Internally, datapump uses a couple of PL/SQL packages:

  • DBMS_DATAPUMP
  • DBMS_METADATA

Normally, you won’t need to bother with these – for taking logical backups of the database, but you can, if you wish, call them explicitly. Doing so is beyond the scope of this article – as they say – but if you wish to find out more, have a look in the Oracle Database PL/SQL Packages and Types Reference Guide for more details.

Datapump has two modes:

  • Command line
  • Interactive

The former is similar to using the old exp or imp utilities, while the latter allows you to connect to long running datapump jobs, and enquire as to progress and add files or similar. This will be discussed later, but for the rest of us Luddites, command line mode will probably be most familiar to start with.

Before we start, we need to make sure we have a couple of things (technical term) set up.

Please note. In the following, some Linux commands need to be executed as the root user, these are prefixed with a ‘#’. All other commands are prefixed by a ‘$’ prompt, and these are executed as the oracle user.

Prerequisites

When you come to use the Datapump utilities, you need to have a pre-existing Oracle Directory within the database being exported or imported. This directory object tells datapump where to write dump files to, and read them from. By default, every database created has a directory already set up for datapump to use. It is called DATA_PUMP_DIR and defaults to the location $ORACLE_HOME/rdbms/log.

SQL> !echo $ORACLE_HOME
/srv/oracle/product/11gR1/db/

SQL> select owner, directory_name, directory_path
  2  from dba_directories
  3  where directory_name = 'DATA_PUMP_DIR';

OWNER   DIRECTORY_NAME  DIRECTORY_PATH
------  --------------  ----------------------------------------
SYS     DATA_PUMP_DIR   /srv/oracle/product/11gR1/db//rdbms/log/

This isn’t normally the best location, so you have a choice of amending (ie dropping and recreating) the current one, or creating a new one for our own use. I find it best to create a new one:

SQL> connect / as sysdba
Connected.

SQL> create directory my_datapump_dir as '/srv/oracle/datapump';
Directory created.

The location pointed to need not exist when the above command is executed, but it must exist when you attempt to use it and the oracle user must be able to read from and write to the specified location.

# mkdir /srv/oracle/datapump
# chown oracle:dba /srv/oracle/datapump/
# ls -l /srv/oracle
...
drwxr-xr-x  2 oracle dba   4096 Aug 29 15:03 datapump
...

If you only ever intend to run datapump jobs as a SYSDBA-enabled user, then this is all you need. However, if you intend to set up another user for this purpose, the following needs to be carried out or the user in question won’t be able to run the datapump utilities.

SQL> create user datapump_admin identified by secret
  2  default tablespace users 
  3  quota unlimited on users;
User created.

SQL> grant create table to datapump_admin;
Grant succeeded.

SQL> grant datapump_exp_full_database to datapump_admin;
Grant succeeded.

SQL> grant datapump_imp_full_database to datapump_admin;
Grant succeeded.

SQL> grant read, write on directory my_datapump_dir to datapump_admin;
Grant succeeded.

That’s it, we are ready to do some exporting – new style! In case you were wondering, we need the create table privilege because datapump utilities need to create a table for each job executed. More on this later.

Exporting

This article concentrates mainly on the exporting of data from a database using the expdp utility which replaces the old exp utility we know and love so much!

Export Full Database

The command to export a full database is as follows:

 $ expdp datapump_admin/secret directory=my_datapump_dir dumpfile=full.dmp logfile=full.exp.log full=y

However, we can put these parameters into a parameter file – just like when we used exp!

$ cat fulldp.par

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=full.exp.log 
full=y

If you omit the password from the userid parameter, expdp and impdp will prompt you.

Running a full export is now a simple matter of:

$ expdp parfile=fulldp.par

What happens next is that a pile of “stuff” scrolls up the screen, some of which is useful, some not so. Here is an excerpt with the good bits highlighted:

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 15:20:12 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option

Starting "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01":  datapump_admin/******** parfile=fulldp.par 

Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13.75 MB

Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PASSWORD_VERIFY_FUNCTION
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
...

The job name created to carry out the export is DATAPUMP_ADMIN.SYS_EXPORT_FULL_01. Other jobs will have a different numeric suffix to keep them unique. If something goes wrong with the export, we will need that job name to allow us to fix things. The job, while it is in progress, also creates a table within the schema specified in the userid parameter. That table is also called SYS_EXPORT_FULL_01.
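
While the job is running, you can see it – and confirm that the master table exists – from another session. A quick check (a sketch):

SQL> select owner_name, job_name, operation, job_mode, state
  2  from dba_datapump_jobs;

The running job should show up with a STATE of EXECUTING, and a table with the same name as the job will exist in the DATAPUMP_ADMIN schema.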

We can also see an estimate of the amount of disc space we will need in the location the dump file is being written to. In my case, 13.75 MB is required (it’s not a big database!).

Then we get a list of the objects being exported as they occur. Nothing much here that’s new, it’s very similar to that output by a full exp in the old days!

Of course, like many things, it doesn’t always go to plan:

ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS

This LOB is part of the above-mentioned master table. Expdp will sit there in a resumable wait (for two hours by default) until I fix the problem. I need to use SQL*Plus (or Toad etc) to fix the underlying space problem, then expdp can continue.
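
While expdp sits waiting, the suspended statement is visible in DBA_RESUMABLE, along with the error that is holding it up. A quick look (a sketch):

SQL> select name, status, error_msg
  2  from dba_resumable;

The fix itself is just a matter of adding space to the USERS tablespace: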

SQL> select file_id, bytes/1024/1024 as mb
  2  from dba_data_files
  3  where tablespace_name = 'USERS';

   FILE_ID	   MB
---------- ----------
	 6	   10

SQL> alter database datafile 6 resize 50m;
Database altered.

As soon as the extra space is added, the datapump job resumes automatically. There is no need to tell it to continue. The screen output starts scrolling again:

...
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DATAPUMP_ADMIN.SYS_EXPORT_FULL_01 is:
  /srv/oracle/datapump/full.dmp
Job "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01" completed with 5 error(s) at 15:46:57

The message about the master table means that the table has now been dropped, as the datapump job completed. The final message, indicating a number of errors, can look quite threatening, but the count is simply the number of times that the utility wrote a message to the screen (and log file) telling you that there was a problem:

ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment DATAPUMP_ADMIN.SYS_LOB0000015147C00045$$ by 128 in tablespace USERS

As you can see, the 5 “errors” are not really errors; they are merely hints that something needed attending to a bit quicker than I managed!

The full log file for the job is created in the output area, as is the dump file itself.

$ ls -l /srv/oracle/datapump/

total 18360
-rw-r----- 1 oracle users 18751488 2013-08-29 15:46 full.dmp
-rw-r--r-- 1 oracle users    23410 2013-08-29 15:46 full.exp.log

One thing that a number of DBAs will be miffed at: you cannot compress the dump file “on the fly” through a named pipe, as we used to do in the old days of exp:

$ mknod exp.pipe p
$ cat exp.pipe | gzip -9 - > full.dmp.gz &
$ exp ... file=exp.pipe log=full.log ....

This is no longer possible. Expdp does have a compression parameter; however, to compress the actual data you need to have paid extra for the Advanced Compression Option. Not good! You are allowed, at no extra cost, to compress the metadata only when exporting. Thanks, Oracle. Add the following to the parameter file:

compression=metadata_only

The default is compression=none.

Also, if you run the same export again, expdp will fail because it refuses to overwrite dump files that already exist.

$ expdp parfile=fulldp.par 

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 15:59:29 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/srv/oracle/datapump/full.dmp"
ORA-27038: created file already exists
Additional information: 1

There are two ways to avoid this issue:

  • Don’t put the file names in the parameter file. Take them out and specify them on the command line, explicitly.
    $ cat fulldp.par
    
    userid=datapump_admin/secret 
    directory=my_datapump_dir 
    full=y
    
    $ expdp parfile=fulldp.par dumpfile=full.`date +%Y%m%d`.dmp logfile=full.`date +%Y%m%d`.exp.log ...
    
  • Add the following to the parameter file (or command line):
    reuse_dumpfiles=y

    The default is not to reuse the existing dump files.

I find that putting all the common parameters into the parameter file, while keeping the changeable ones on the command line is probably the best idea.

What about the old consistent=y parameter? Can’t you do that with expdp? Well, yes, you can. Since 11gR2 you can anyway. If you add the legacy parameters to the parameter file, or command line as follows:

consistent=y

Then expdp will notice that you are a Luddite like me, and tell you what to do next time in order to get a proper expdp consistent export. As follows:

Export: Release 11.2.0.2.0 - Production on Fri Aug 30 17:05:33 2013
...
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Parameter File, Replaced with: "flashback_time=TO_TIMESTAMP('2013-08-30 17:05:33', 'YYYY-MM-DD HH24:MI:SS')"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01":  datapump_admin/******** parfile=fulldp.par 
...

Note the time that the export started, and note how the consistent parameter was converted to an expdp flashback_time parameter set to the same time as the start of the job. That’s how to get a consistent export. You can also use the flashback_scn parameter if you happen to know the desired SCN, of course.

Note also that legacy mode turns on, automatically, the ability to overwrite existing dump files, even if you don’t specify it in the parameter file or command line.

You can, if you wish, add the following to your parameter files, or the command line, according to what you are using, to get an export equivalent to an old consistent=y one from exp:

flashback_time=to_timestamp(sysdate)

or:

flashback_time=systimestamp

Export Schemas

Exporting a schema, or more than one, is equally as simple. Simply specify the schemas= parameter in your parameter file or on the command line:

$ cat user_norman.par

userid=datapump_admin/secret 
directory=my_datapump_dir 
schemas=norman
reuse_dumpfiles=y

$ expdp parfile=user_norman.par dumpfile=norman.dmp logfile=norman.exp.log

If you have more than one schema to export, simply specify them all with commas between. For example, you might have the following in a parameter file (or on the command line):

schemas=barney,fred,wilma,betty,bambam,dino

The output is similar to that for the full database:

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 16:39:30 2013
...
Starting "DATAPUMP_ADMIN"."SYS_EXPORT_SCHEMA_01":  datapump_admin/******** parfile=user_norman.par 

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB

Processing object type SCHEMA_EXPORT/USER
...
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "NORMAN"."NORM"                             5.023 KB       4 rows
Master table "DATAPUMP_ADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DATAPUMP_ADMIN.SYS_EXPORT_SCHEMA_01 is:
  /srv/oracle/datapump/norman.01.dmp
Job "DATAPUMP_ADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at 16:40:04

Export Tablespaces

Exporting a tablespace, or more than one, is just as simple as exporting schemas. Simply specify the tablespaces= parameter in your parameter file or on the command line:

$ cat user_tablespace.par

userid=datapump_admin/secret
directory=my_datapump_dir
dumpfile=users_ts.dmp
logfile=users_ts.exp.log      
tablespaces=users
reuse_dumpfiles=y


$ expdp parfile=user_tablespace.par

This time, I don’t care about having a unique name, so I’ve specified the logfile and dumpfile parameters within the parameter file.

If you have more than one tablespace to export, simply specify them all with commas between. For example, you might have the following in a parameter file (or on the command line):

tablespaces=bedrock_data,bedrock_index

The output is similar to that for the full database:

$ expdp parfile=user_tablespace.par 

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 16:45:01 2013
...
Starting "DATAPUMP_ADMIN"."SYS_EXPORT_TABLESPACE_01":  datapump_admin/******** parfile=user_tablespace.par 

Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB

Processing object type TABLE_EXPORT/TABLE/TABLE
...
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "APP_OWNER"."TEST"                          5.031 KB       5 rows
. . exported "NORMAN"."NORM"                             5.023 KB       4 rows
Master table "DATAPUMP_ADMIN"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DATAPUMP_ADMIN.SYS_EXPORT_TABLESPACE_01 is:
  /srv/oracle/datapump/users_ts.dmp
Job "DATAPUMP_ADMIN"."SYS_EXPORT_TABLESPACE_01" successfully completed at 16:45:27

Export Tables

And finally, for now, exporting a list of tables is simple too. Once again, and as with exp, it is best to put the required tables in a parameter file.

$ cat tables.par 

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=tables.dmp 
logfile=tables.exp.log 
tables=norman.norm,app_owner.test
reuse_dumpfiles=y

As before, I’m only interested in these particular tables, so I name them in the parameter file. Also, dumpfile and logfile are in there too.

You should be quite familiar with the output by now:

$ expdp parfile=tables.par

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 16:49:05 2013
...
Starting "DATAPUMP_ADMIN"."SYS_EXPORT_TABLE_01":  datapump_admin/******** parfile=tables.par 

Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "APP_OWNER"."TEST"                          5.031 KB       5 rows
. . exported "NORMAN"."NORM"                             5.023 KB       4 rows
Master table "DATAPUMP_ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DATAPUMP_ADMIN.SYS_EXPORT_TABLE_01 is:
  /srv/oracle/datapump/tables.dmp
Job "DATAPUMP_ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at 16:49:12

Points to Note

Job names.

The job names created for an expdp or impdp job are made up as follows:

schema_name + "." + "SYS_" + "EXPORT_" or "IMPORT_" + Level + "_" + Unique Identifier.

Expdp uses “EXPORT” while impdp uses “IMPORT”, for obvious reasons. The level part is one of:

  • FULL
  • SCHEMA
  • TABLESPACE
  • TABLE

The unique identifier is simply a numeric suffix, starting at 01 and increasing for each concurrent datapump job at that level.

So, a job running under the schema of datapump_admin, exporting a schema level dump, would have the full job name of:

DATAPUMP_ADMIN.SYS_EXPORT_SCHEMA_01

While the same user, exporting a full database would have the job name of

DATAPUMP_ADMIN.SYS_EXPORT_FULL_01

Tables created for jobs.

As mentioned above, datapump jobs create a table in the default tablespace of the user running the utility. The table name is the same as the job name, minus the schema prefix.

When the job completes, the tables are dropped.
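
If a job is killed or crashes, however, the master table can be left behind. A hedged sketch of how to spot the leftovers – any job showing as NOT RUNNING with no attached sessions is a candidate for investigation:

SQL> select owner_name, job_name, state, attached_sessions
  2  from dba_datapump_jobs
  3  where state = 'NOT RUNNING';

Once you are sure such a job really is dead, its master table (which has the same name as the job) can be dropped.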

Estimating space requirements.

As seen above, expdp produces an estimate of the space it will need to carry out the requested export. However, it might be nice to have this information well in advance of running the export – so that you can be sure that the export will work without problems. This can be done:

$ expdp datapump_admin/secret full=y estimate_only=y

Export: Release 11.2.0.2.0 - Production on Thu Aug 29 16:27:26 2013
...
Starting "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01":  datapump_admin/******** full=y estimate_only=y 
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
...
.  estimated "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"             0 KB
.  estimated "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"          0 KB
.  estimated "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB
.  estimated "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB
Total estimation using BLOCKS method: 13.75 MB
Job "DATAPUMP_ADMIN"."SYS_EXPORT_FULL_01" successfully completed at 16:27:39

Now, armed with the above information, you can make sure that the destination for the dump file(s) has enough free space. Don’t forget, there will be a log file as well.
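
A quick way to confirm that there is room in the directory we created earlier:

$ df -h /srv/oracle/datapump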

The next article in this short series will feature the corresponding imports using impdp.

Exporting – Cheat Sheet

The following is a list of “cheats” – basically, a list of the parameters you would find useful in doing a quick export at any level from full down to individual tables. I have listed each one in the form of a parameter file – for ease of copy and paste, which you are free to do by the way, assuming you find it interesting and/or useful!

The following assumes you have set up a suitable Oracle Directory object within the database being exported, however, the first part of the cheat sheet summarises the commands required to create one, and a suitably endowed user to carry out the exports.

All the following default to consistent exports – there’s really no reason why an export should be taken any other way!

Create an Oracle Directory

The following is executed in SQL*Plus, or similar, as a SYSDBA enabled user, or SYS:

create directory my_datapump_dir as '/your/required/location';

The following is executed as root:

mkdir -p /your/required/location
chown oracle:dba /your/required/location

Create a Datapump User Account and Privileges

The following is executed in SQL*Plus, or similar, as a SYSDBA enabled user, or SYS:

create user datapump_admin identified by secret_password
default tablespace users 
quota unlimited on users;

grant create table to datapump_admin;
grant datapump_exp_full_database to datapump_admin;
grant datapump_imp_full_database to datapump_admin;
grant read, write on directory my_datapump_dir to datapump_admin;

Full, Consistent Export

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=full.exp.log 
full=y
reuse_dumpfiles=y
flashback_time="TO_TIMESTAMP('your_date_time_here', 'YYYY-MM-DD HH24:MI:SS')"

You may wish to use consistent=y rather than flashback_time; it will default to the timestamp of the start of the expdp job.

Consistent Schema(s) Export

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=schema.dmp 
logfile=schema.exp.log 
schemas=user_a,user_b
reuse_dumpfiles=y
flashback_time="TO_TIMESTAMP('your_date_time_here', 'YYYY-MM-DD HH24:MI:SS')"

You may wish to use consistent=y rather than flashback_time; it will default to the timestamp of the start of the expdp job.

Consistent Tablespace(s) Export

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=tablespaces.dmp 
logfile=tablespaces.exp.log 
tablespaces=ts_a,ts_b
reuse_dumpfiles=y
flashback_time="TO_TIMESTAMP('your_date_time_here', 'YYYY-MM-DD HH24:MI:SS')"

You may wish to use consistent=y rather than flashback_time; it will default to the timestamp of the start of the expdp job.

Consistent Table(s) Export

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=tables.dmp 
logfile=tables.exp.log 
tables=user.table_a,user.table_b
reuse_dumpfiles=y
flashback_time="TO_TIMESTAMP('your_date_time_here', 'YYYY-MM-DD HH24:MI:SS')"

You may wish to use consistent=y rather than flashback_time; it will default to the timestamp of the start of the expdp job.

Adding Compression

By default there is no compression. You add it as follows in 10g and higher with no further options purchased:

compression=metadata_only

Or off with this:

compression=none

Adding Even More Compression

If you have purchased a license for Oracle’s Advanced Compression Option, in Oracle 11g and higher, then you have the options of adding compression as follows:

Nothing is compressed:

compression=none

Only the metadata is to be compressed:

compression=metadata_only

Compress the data only:

compression=data_only

Compress the metadata and the data:

compression=all

Even More Compression, in 12c

From 12c, a new parameter named compression_algorithm allows you to specify which level of compression you would like:

compression_algorithm=basic
compression_algorithm=low
compression_algorithm=medium
compression_algorithm=high

These options may well incur extra CPU costs as the data are required to be compressed for export and uncompressed on import.

Adding Encryption

If you have Enterprise Edition and have paid the extra cost license fee for the Oracle Advanced Security option, then you can encrypt the data in the dump files.

Encrypt the metadata and the data:

encryption=all

Encrypt the data only:

encryption=data_only

Encrypt just the metadata:

encryption=metadata_only

No encryption (the default):

encryption=none

Encrypt the data in columns that are defined as encrypted in the database:

encryption=encrypted_columns_only

Introduction to Oracle Datapump – Part 2


In this, the second part of the Introduction to Oracle Datapump mini-series, we take a look at importing dump files using impdp. If you missed the first part which concentrated on exporting with expdp, have a read of it here. Once again, the following is a quick introduction for people like me – running Oracle on Linux and slightly averse to change! 😉

Introduction to Datapump Imports

All of the following is based on 11.2.0.2 Enterprise Edition of Oracle, running on Suse Linux, Enterprise Edition version 11 SP 0.

Please note. In the following, any Linux commands that need to be executed as the root user, are prefixed with a ‘#’ prompt. All other commands are prefixed by a ‘$’ prompt, and these should be executed as the oracle user.

Before we start, we need to make sure we have a couple of things (technical term) set up.

Prerequisites

When you come to use Datapump’s impdp utility, you need to have a pre-existing Oracle Directory within the database being imported into. This directory object tells datapump where to find dump files and where to write log files. As mentioned in the previous article, every database created has a directory already set up for datapump to use. It is called DATA_PUMP_DIR and defaults to the location $ORACLE_HOME/rdbms/log.

This isn’t normally the best location, so you have a choice of amending (ie dropping and recreating) the current one, or creating a new one for our own use. I find it best to create a new one. I also like to set up a special datapump_admin user dedicated to carrying out exports and imports. Instructions on creating Oracle Directories and setting up the datapump_admin user, and its required privileges, were covered in the previous article, and will not be repeated here.

Importing

This article concentrates mainly on importing data into a database using the impdp utility, which replaces the old imp utility we know and love so much!

Before we start looking at specifics, remember that when we Luddites use imp, it appends data to existing tables (provided ignore=y, of course); if we didn’t want this to happen, we either had to log in and drop the tables in question or, at the very least, truncate them. With impdp we don’t need to do this! We have two parameters that replace the ignore=y parameter of imp: CONTENT and TABLE_EXISTS_ACTION.

The following sections describe each of these parameters in turn, and further down the page, there is a section describing what exactly happens when these parameters are used in certain combinations. Beware, some combinations produce misleading error messages – in 11gR2 at least.

Content

The content parameter takes the following values:

  • All – the default. Impdp attempts to create the objects it finds in the dump file, and load the data into them.
  • Metadata_only – impdp will only attempt to create the objects it finds in the dump file. It will not attempt to bring in the data. This is equivalent to rows=n in imp.
  • Data_only – impdp will not try to create the objects it finds in the dump file, but it will attempt to load the data. This is equivalent to ignore=y in imp.

    If there are objects in the dump file which do not exist in the database, schemas or tablespaces being imported into, then errors will be listed in the log file and on screen, and those missing tables will not be imported.

Table_Exists_Action

This parameter tells impdp what to do when it encounters a table that already exists in the database being imported into, whether or not it has data in it. It takes the following values:

  • Append – impdp will simply append the data from the dump file to the table(s) in question. Existing data will remain, untouched. This is the default option if content=data_only.
  • Replace – impdp will drop the object, recreate it and then load the data. Useful, for example, if the definition of a table has been changed, this option will ensure that the new definition – in the dump file – is used. This value cannot be used if the content=data_only parameter is in use.
  • Skip – the default, unless content=data_only. Impdp will not attempt to load the data and will skip this table and move on to the next one. This is very handy if an import went wrong, for example, and after sorting out the failing table, you simply restart the import and have it skip the tables it has already completed.
  • Truncate – impdp will truncate any tables it finds, but not drop them, and then load the data.

Legacy Mode Parameters

Of course, if you can’t be bothered to learn the new parameters for impdp, then from 11gR2 onwards, you can specify a number of the old imp parameters and have impdp convert them to the new ones for you. It’s best to learn the new ones though, you never know if Oracle will drop legacy mode at some future date.
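As a rough example – a sketch only, using legacy parameters that imp certainly accepted (file, log, ignore, full); exactly how impdp maps each one, and where it then looks for the dump file, is worth checking against the Data Pump Legacy Mode documentation – the following old-style command line is accepted and silently translated into the new-style equivalents:

$ impdp datapump_admin/secret file=full.dmp log=full.imp.log full=y ignore=y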

Right, onwards, we have data to import!

Import a Full Database Dump File

We created a full database export file – full.dmp – last time, which we can use to import back into the same, or a different database. As before, it is usually wise to put the required parameters in a parameter file, and specify that on the command line – this is especially true if you need to have double or single quotes in some of the parameters – as these will need escaping on the command line, but not in the parameter file.

$ cat fullimp.par

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=full.imp.log 
full=y
content=all
table_exists_action=truncate

If you omit the password from the userid parameter, impdp will prompt you.

Running a full import is now a simple matter of:

$ impdp parfile=fullimp.par

What happens next is that a pile of “stuff” scrolls up the screen, some of which is useful, some not so. Straight away, you will notice (because I’ve highlighted a couple!) a number of errors about objects already existing. This is because we have content=all. Had we simply had content=data_only these errors would not have appeared.

Here is an excerpt with some of the good bits highlighted:

Import: Release 11.2.0.2.0 - Production on Sat Aug 31 16:47:13 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option

Master table "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01":  datapump_admin/******** parfile=fullimp.par
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
...
Processing object type DATABASE_EXPORT/PASSWORD_VERIFY_FUNCTION
...
Processing object type DATABASE_EXPORT/SCHEMA/USER
ORA-31684: Object type USER:"OUTLN" already exists
ORA-31684: Object type USER:"ORACLE" already exists
ORA-31684: Object type USER:"NORMAN" already exists
...
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39120: Table "SYSTEM"."DEF$_DESTINATION" can't be truncated, data will be skipped. Failing error is:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
...
. . imported "APP_OWNER"."TEST"                          5.031 KB       5 rows
. . imported "NORMAN"."NORM"                             5.023 KB       4 rows
...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/AUDIT
Job "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01" completed with 700 error(s) at 16:50:18

Well, that went well – but full imports – even with imp – usually do create masses of errors. They are probably best avoided. I suppose they might be useful when you have created a brand new database, with only the Oracle required tablespaces etc, and you are happy to have impdp bring in a clone, almost, of another database. Maybe! 😉

You will notice that, because of all the extra errors caused by the numerous objects that already exist, checking the log file for real errors will be a little difficult.

If you already know that all the objects, present in the dump file, exist in the database that you are importing into, then use the content=data_only parameter which will prevent these errors from appearing. Remember to set a suitable value for the table_exists_action parameter as well, otherwise the default action is to skip whichever objects it finds already existing.

You will not be allowed to use replace because there will be no metadata in the import, so there will be no commands to recreate the objects. You only have append, truncate or skip available. If you try to use a forbidden option, you will see the following error:

ORA-39137: cannot specify a TABLE_EXISTS_ACTION of REPLACE for a job with no metadata

Running the above full import again, but with a data_only import this time, resulted in far fewer errors:

...
Job "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01" completed with 21 error(s) at 16:52:41

And this time, these were due to attempts to truncate tables that are referenced by enabled foreign key constraints.

You will see highlighted above, the usual job name details. The format of the job names was described in the previous article in this mini-series, and will not be discussed further here.

Following the job details – and as already mentioned, a table of the same name will be created within the datapump_admin user while the job is running – we start to see the list of various objects that already exist. Then, eventually, in amongst all the errors, we see the tables actually being imported. Phew!

Note, on the final line, the number of errors I have to check to ensure that they are not critical. Hmmm, I really do not like database full imports, never have done and I doubt I ever will. I think we should move on, quickly, and take a look at schema imports instead.

Importing Schemas

Importing a schema or schemas is a better way to bring data into an existing database. You can, if you wish, import a particular schema from a full dump – you don’t have to have exported the schemas specifically – the example below will demonstrate this.

As with expdp, you simply need to specify the schemas= parameter in your parameter file or on the command line:

$ cat schema_imp.par

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=schema.imp.log 
schemas=norman
content=all
table_exists_action=replace

$ impdp parfile=schema_imp.par 

Import: Release 11.2.0.2.0 - Production on Mon Sep 2 16:17:24 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
Master table "DATAPUMP_ADMIN"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "DATAPUMP_ADMIN"."SYS_IMPORT_SCHEMA_01":  datapump_admin/******** parfile=schema_imp.par 
Processing object type DATABASE_EXPORT/SCHEMA/USER
ORA-31684: Object type USER:"NORMAN" already exists
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "NORMAN"."NORM"                             5.023 KB       4 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "DATAPUMP_ADMIN"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 16:17:35

What happens if you specify the schema-only dump file that was previously created, but supply the full=y parameter? Exactly as you would expect, the full contents of the dump file are imported and, in the following example, the norman schema is once more imported – and existing objects are replaced.

$ cat schema2_imp.par 

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=norman.dmp 
full=y
logfile=schema.imp.log 
content=all
table_exists_action=replace

$ impdp parfile=schema2_imp.par 

Import: Release 11.2.0.2.0 - Production on Mon Sep 2 16:24:39 2013
...
Starting "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01":  datapump_admin/******** parfile=schema2_imp.par 
...
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NORMAN" already exists
...
. . imported "NORMAN"."NORM"                             5.023 KB       4 rows
...
Job "DATAPUMP_ADMIN"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 16:24:49

So, the old imp behaviour is still available to use!

Importing Tablespaces

As before, when you wish to import a tablespace, it can be from a full dump file, or a tablespace level dump file – provided that the tablespace(s) you want to import are actually present in the file. A table or schema level dump file will not be suitable.

What will happen if, for some reason, a tablespace – call it tools – exists in the dump file, but not in the database? Well, it will not be recreated – impdp will not recreate the tablespaces, only the objects within them.

To import at the tablespace level, add the tablespaces= parameter in your parameter file or on the command line:

$ cat tablespace_imp.par 

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=tablespace.imp.log 
tablespaces=users
content=all
table_exists_action=replace

$ impdp parfile=tablespace_imp.par 

Import: Release 11.2.0.2.0 - Production on Mon Sep 2 20:05:19 2013
...
Master table "DATAPUMP_ADMIN"."SYS_IMPORT_TABLESPACE_01" successfully loaded/unloaded
Starting "DATAPUMP_ADMIN"."SYS_IMPORT_TABLESPACE_01":  datapump_admin/******** parfile=tablespace_imp.par 
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "APP_OWNER"."TEST"                          5.031 KB       5 rows
. . imported "NORMAN"."NORM"                             5.023 KB       4 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "DATAPUMP_ADMIN"."SYS_IMPORT_TABLESPACE_01" successfully completed at 20:05:26

Importing Tables

And finally, importing specific tables is as easy as the above. You should be completely at home with the parameter file and parameters by now:

$ cat tables_imp.par 

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=tables.dmp 
logfile=tables.imp.log 
tables=norman.norm,app_owner.test
content=all
table_exists_action=replace

$ impdp parfile=tables_imp.par

Import: Release 11.2.0.2.0 - Production on Thu Sep 5 09:09:01 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option
Master table "DATAPUMP_ADMIN"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DATAPUMP_ADMIN"."SYS_IMPORT_TABLE_01":  datapump_admin/******** parfile=tables_imp.par 
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "APP_OWNER"."TEST"                          5.031 KB       5 rows
. . imported "NORMAN"."NORM"                             5.023 KB       4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "DATAPUMP_ADMIN"."SYS_IMPORT_TABLE_01" successfully completed at 09:09:05

Content and Table_Exists_Action Parameters

The following is a summary of the actions or errors you will see when various combinations of these two parameters are used together.

Content = All

  • If table_exists_action is append, then you will see error messages for any objects that currently exist in the database; existing data will be left untouched and the contents of the dump file will be appended to those tables present in both the dump file and the current database.
  • If table_exists_action is truncate, then you will see error messages for any objects that currently exist in the database; any object that is being imported, and which already exists in the database, will be truncated prior to the data from the dump file being imported.
  • If table_exists_action is replace, then you will see error messages for any objects that currently exist, but those objects will subsequently be dropped and recreated prior to the data being imported.
  • If table_exists_action is skip, then you will see error messages for any objects that currently exist in the database and nothing will be imported for those existing objects. Objects which exist in the dump file but not in the database will be created – but not tablespaces.

Content = Data_Only

  • If table_exists_action is append, then provided that the definition of the objects in the dump file matches that in the database, data will be appended and no error messages shown for existing objects. If an existing table has a different definition to that in the dump file, errors will be shown. Any objects in the dump file that do not exist in the database will report an error and will not be created.
  • If table_exists_action is truncate, then no errors will be shown for existing objects and those tables that exist in both the database and the dump file will first be truncated prior to the data being imported. Any objects in the dump file that do not exist in the database will report an error and will not be created.
  • If table_exists_action is replace, then you will receive an error, as this is an invalid combination with this setting of the content parameter. You cannot replace an object when the metadata used to recreate it is not being imported.
  • If table_exists_action is Skip, then no data are imported for existing objects. Any objects in the dump file that do not exist in the database will report an error and will not be created.

Content = Metadata_only

  • If table_exists_action is append, then objects which do not exist in the database, but do exist in the dump file, will be created. Tables which already exist in the database will not be affected in any way, however, the following misleading message will be displayed:
    Table "USER"."TABLE_NAME" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    

    This is misleading as no data will be loaded at all. We are specifying content=metadata_only after all!

  • If table_exists_action is truncate, then existing tables will be truncated but no new data will be loaded. New objects will be created from the dump file. Existing tables will cause the following message to be displayed:
    Table "USER"."TABLE_NAME" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
    

    This message is misleading as there will be no data loaded; the table will be empty after the import. If the dump file’s metadata is different from the existing table definition, then the existing table definition will remain in force.

  • If table_exists_action is Replace, then no messages will be displayed for existing tables. These will be dropped and recreated using the metadata from the dump file. No data will be loaded, so they will be empty after the import. New objects in the dump file will be created.
  • If table_exists_action is Skip, then nothing will be done for existing objects. New objects, in the dump file, will be created. No data will be loaded. The following – correct – message will be produced for existing tables:
    Table "USER"."TABLE_NAME" exists. All dependent metadata and data will be skipped due to table_exists_action of skip.
    

Importing – Cheat Sheet

The following is a list of “cheats” – basically, a list of the parameters you would find useful in doing a quick import at any level from full database right down to individual tables. As before, I have listed each one in the form of a parameter file – for ease of (your) copy and paste.

The following parameter files assume that you have set up a suitable Oracle Directory object within the database being imported into, however, the first part of the cheat sheet summarises the commands required to create one, and a suitably endowed user to carry out the imports.

Create an Oracle Directory

The following is executed in SQL*Plus, or similar, as a SYSDBA enabled user, or SYS:

create directory my_datapump_dir as '/your/required/location';

This location is where all the dump files need to be copied before running impdp. The log files for the imports will be created here as the jobs run.

The following must be executed as root, unless the location is already owned by the oracle account of course!

mkdir -p /your/required/location
chown oracle:dba /your/required/location

Create a Datapump User Account and Privileges

The following is executed in SQL*Plus, or similar, as a SYSDBA enabled user, or SYS:

create user datapump_admin identified by secret_password
default tablespace users 
quota unlimited on users;

grant create table to datapump_admin;
grant datapump_exp_full_database to datapump_admin;
grant datapump_imp_full_database to datapump_admin;
grant read, write on directory my_datapump_dir to datapump_admin;

Full Import

This one is suitable for a database where the objects already exist. Tables will be truncated prior to the data replacing the existing contents.

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=full.imp.log 
full=y
content=data_only
table_exists_action=truncate

Schema Import

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=schema.imp.log 
schemas=user_a,user_b
content=all
table_exists_action=replace

Tablespaces Import

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=tablespace.imp.log 
tablespaces=users
content=all
table_exists_action=replace

Table Import

userid=datapump_admin/secret 
directory=my_datapump_dir 
dumpfile=full.dmp 
logfile=tables.imp.log 
tables=[user.]table_a,[user.]table_b
content=all
table_exists_action=replace

So, How Do You Change a User’s Password

The Oracle database allows the users to change their passwords as follows:

SQL> ALTER USER me IDENTIFIED BY my_new_password;

Alternatively, they can use the PASSWORD command, which prompts for the old and new passwords.
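For anyone who has not used it, a typical PASSWORD session in SQL*Plus looks something like the following – the output is from memory, so treat it as approximate:

SQL> password
Changing password for NORMAN
Old password:
New password:
Retype new password:
Password changed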

Of course, if the user has forgotten their old password, the system manager can do the necessary:

SQL> ALTER USER forgetful_user IDENTIFIED BY a_new_password;

Now, if there are profiles in use, as there are, and these profiles have a password verification function defined, these passwords will be validated to ensure that they adhere to the installation standards.

Sadly, all is not as it seems.

The verification function is passed three parameters:

  • Username
  • Old Password
  • New Password

In 12c, the standard verification function has an inbuilt helper function called string_distance which determines how different the new password is from the old one. The problem is, regardless of what you have set for that difference to be, it is not always executed. The code in the verification function, $ORACLE_HOME/rdbms/admin/utlpwdmg.sql, has something resembling this check in it:

    if old_password is not null then
        result := string_distance(old_password, new_password)
        ...

Interesting – so the old password can be NULL? How can this be? Well, in reality, it is simple. Here are the cases where the old password will not be NULL:

  • When the user uses the PASSWORD command.
  • When the user executes ALTER USER me IDENTIFIED BY my_new_password REPLACE old_password;

And here are the cases when the old password will be NULL:

  • When the user executes ALTER USER me IDENTIFIED BY my_new_password;
  • When the SYS user executes ALTER USER forgetful_user IDENTIFIED BY a_new_password;
  • When the SYS user executes ALTER USER forgetful_user IDENTIFIED BY a_new_password REPLACE old_password;

And thereby hangs the rub. If the SYSDBA always changes the passwords, then the old password is always NULL, and some of the verification checks will not be carried out. Only when the user affected changes the password using the PASSWORD command or passes the REPLACE clause to the ALTER USER command, will the old password be supplied to the verification function.

It actually makes sense, if you think about it, the password is stored, by Oracle, as the result of a one-way hash. This means that there is no way to retrieve the plain text password from the hashed value. However, it appears that regardless of whether the SYSDBA user supplies the old password in the ALTER USER ... REPLACE command, it is not passed through to the verification function.

Just a little something to watch out for as it can allow your users to get past some of the checks in the verification function – adding a one-character suffix to the old password, for example, can get past the checks if the old password is not supplied.
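If you want to close that loophole, one option – shown here purely as a sketch to drop into your own copy of the verification function, not the standard Oracle code – is to reject any change where the old password has not been supplied. Note that this also blocks straightforward SYSDBA resets, so decide whether the trade-off suits your site:

-- Hypothetical fragment for a custom password verify function:
-- refuse to validate a change unless the old password was supplied,
-- i.e. the user used the PASSWORD command or ALTER USER ... REPLACE.
if old_password is null then
    raise_application_error(-20001,
        'Password changes must supply the old password ' ||
        '(use the PASSWORD command or ALTER USER ... REPLACE).');
end if;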

What do you mean, you never knew there was a REPLACE clause on the ALTER USER command? 😉

Interesting Data Guard Problem

While checking out a dataguarded database prior to being handed over into production, I needed to test that both OEM and dgmgrl could carry out a switchover and failover from the (stand-alone) primary db (ORCL_PDB) to the physical standby database (ORCL_SBY), and back again.

The Problem

Database and server names have been changed, to protect the innocent, and me!

OEM had no problems, other than the usual “bug” whereby the credentials used for the standby server were those for the primary server, but hey, that’s OEM for you, it’s nothing if not inconsistent! However, when I tried to use dgmgrl I found a small problem.

While I could happily switchover to the standby database, from either server, switching back always failed with the following error:

DGMGRL> switchover to "ORCL_PDB"

Performing switchover NOW, please wait...
New primary database "ORCL_PDB" is opening...
Operation requires shutdown of instance "ORCL_SBY" on database "ORCL_SBY"
Shutting down instance "ORCL_SBY"...
ORACLE instance shut down.
Operation requires startup of instance "ORCL_SBY" on database "ORCL_SBY"
Starting instance "ORCL_SBY"...
Unable to connect to database
ORA-12521: TNS:listener does not currently know of instance requested in connect descriptor

Failed.
Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up and mount instance "ORCL_SBY" of database "ORCL_SBY"

A quick srvctl start database -d $ORACLE_SID -o mount sorted things out while I investigated the problem.

Data Guard requires that there be an entry in tnsnames.ora for both databases and also for a service name consisting of the database and “DGMGRL”. I checked.

Both TNSNAMES.ORA files have the following, and all entries are configured correctly:

  • ORCL_PDB
  • ORCL_PDB_DGMGRL
  • ORCL_SBY
  • ORCL_SBY_DGMGRL

I had no problems running sqlplus sys/password@orcl_whatever as sysdba for any of the above.

Looking in the listener logfile for the standby server’s listener, I noticed that there were entries where the error codes shown above (ORA-12521 and also ORA-12514) were present; however, there was a problem with the instance_name.

msg time='yyyy-mm-ddThh:mm:ss.fff+00:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='sby_server'
 host_addr='x.x.x.x'
 15-JAN-2014 12:44:13 * (CONNECT_DATA=(SERVICE_NAME=ORCL_SBY_DGMGRL)(INSTANCE_NAME=ORCL_XXX)(SERVER=DEDICATED)(CID=(PROGRAM=dgmgrl)(HOST=sby_server)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=x.x.x.x)(PORT=53336)) * establish * ORCL_SBY_DGMGRL * 12521

The logfile was showing the instance_name – ORCL_XXX – as something completely unrelated to the instance_name for the standby database – ORCL_SBY. Most confusing, especially when I had already confirmed that tnsnames.ora was correct and also that all the entries functioned correctly, from both servers. Where was this erroneous instance name coming from?

Looking in dgmgrl again, I checked the StaticConnectIdentifier property for both databases.

DGMGRL> show database 'ORCL_PDB' StaticConnectIdentifier

StaticConnectIdentifier =
'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=pdb_server)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=ORCL_PDB_DGMGRL)(INSTANCE_NAME=ORCL_PDB)(SERVER=DEDICATED)))'


DGMGRL> show database 'ORCL_SBY' StaticConnectIdentifier

StaticConnectIdentifier =
'(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sby_server)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=ORCL_SBY_DGMGRL)(INSTANCE_NAME=ORCL_XXX)(SERVER=DEDICATED)))'

Bingo! At least, after staring at the screen for a few minutes, it was bingo! I finally spotted that the standby database’s property had the wrong instance name.

The Solution

A simple property edit for the standby database was carried out in dgmgrl as follows:

 DGMGRL> edit database 'ORCL_SBY' set property
StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sby_server)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=ORCL_SBY_DGMGRL)(INSTANCE_NAME=ORCL_SBY)(SERVER=DEDICATED)))'; 

The edit database command above is all on one line by the way.

And that was it. After making the change, I was able to run switchovers to and from the standby on either server. Job done.

Impdp Hangs Importing Materialized Views

A simple exercise to refresh a schema in a test database caused no end of problems when it hung at 99% complete. The last message on screen indicated that it was importing the Materialized Views (yes, with a ‘Z’). After a long time, the DBA running the import killed it, cleaned out, and restarted the whole process. Exactly the same happened.

Background

  • The databases in question were both 11.2.0.3 Enterprise Edition.
  • The materialized views were created and refreshed from a table in another database, utilising a database link.

Investigation

While the impdp was running, breaking into the session and running the status command repeatedly showed that the import was at 99% completion and that so many bytes had been processed. Using the command status=120 we could see that this was not moving on at all as time went by. (That command re-runs the status display every 2 minutes.)

Checking the server for the worker process doing the import, DW00, we noted its operating system process id and were then able to extract the SID for the process:


select sid, serial#
from v$session s
where paddr = (select addr from v$process
               where spid = &PID);

Running the above in SQL*Plus, and entering the Unix process id of the DW00 process for the database, we were able to find the SID and SERIAL# for the hung process.

Looking in V$SESSION_WAIT for that process, we could see that it was counting up from around 128 seconds, and was waiting on an event named “SINGLE-TASK MESSAGE”.
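For the record, watching the wait needs nothing more than something along these lines, substituting the SID found above:

select event, state, seconds_in_wait
from   v$session_wait
where  sid = &SID;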

Googling around for this event seemed to indicate that it was mostly reported when a process creates a synonym for a table at the other end of a database link, taking up to 10 minutes to fail. Not quite our problem, but we might as well check.

In the importing database, we could see the SQL used to create the materialized views and noted the fact that they were all created from data held in another table, on the far end of a database link. Hmmm, suspicious!

Checking DBA_DB_LINKS we made a note of the HOST column, and in a shell session, tried a tnsping – no response.
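That check is simply a matter of listing the links and feeding the HOST values to tnsping in a shell session, something like:

select owner, db_link, host
from   dba_db_links;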

A quick edit to the tnsnames.ora file to add in the appropriate details for these “hosts” and suddenly, the impdp session completed with no errors. This is good, but what exactly was going on?

What Impdp Does Down a DB Link

A test session was set up whereby a materialized view was created with a data source at the far end of a database link. This was refreshed, checked, and exported before being dropped.

The database at the far end of the link was set up with a trigger that fired “after logon” and if the user in question was being logged into, set event 10046 at level 12 – might as well get more data than we need!
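The trigger need not be anything fancy. A minimal sketch of the idea follows – the MVIEW_SOURCE user name is made up, and creating a database-level logon trigger needs the ADMINISTER DATABASE TRIGGER privilege:

create or replace trigger trace_mview_source_logon
after logon on database
begin
    -- Only trace sessions belonging to the user that the
    -- materialized views select from (hypothetical name).
    if sys_context('USERENV', 'SESSION_USER') = 'MVIEW_SOURCE' then
        execute immediate
            'alter session set events ''10046 trace name context forever, level 12''';
    end if;
end;
/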

When we re-ran the import of the materialized view, we could see in the generated trace file that Oracle was connecting to the remote database and parsing the SQL statement that was used to refresh the data for the materialized view. Note, it was never executed or fetched from, only parsed. Basically, while creating the materialized view, Oracle was checking that the source of the data would still be valid when refresh time came around.

So, when you are doing this sort of thing in future, make sure that any database links that exist in the schema(s) owning the materialized views, or that are being imported into the schema, are going to be valid and usable at the time the materialized view itself is imported. If not, you will see this wait and your import will never get past the materialised view section.

This problem may well also rear its ugly head if you have tables, views, or PL/SQL code in packages etc that make use of database links.

Tnsnames.ora Parser

Have you ever wanted to use a tool to parse the manually typed up “stuff” that lives in a tnsnames.ora file, to be absolutely certain that it is correct? Ever wanted some tool to check that all the opening and closing brackets match? I may just have the very thing for you.

Download the binary file Tnsnames.Parser.zip and unzip it. Source code is also available on Github.

When unzipped, you will see the following files:

  • README – this should be obvious!
  • tnsnames_checker.sh – Unix script to run the utility.
  • tnsnames_checker.cmd – Windows batch file to run the utility.
  • antlr-4.4-complete.jar – Parser support file.
  • tnsnames_checker.jar – Parser file.
  • tnsnames.test.ora – a valid tnsnames.ora to test the utility with.

The README file is your best friend!

All the utility does is scan the supplied input file, passed via standard input, and write any syntax or semantic problems out to standard error.

Working Example

There are no errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh < tnsnames.test.ora

Tnsnames Checker.
Using grammar defined in tnsnames.g4.
Parsing ....
Done.

Non-Working Example

After a bit of fiddling, there are now some errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh < tnsnames.test.ora

Tnsnames Checker.
Using grammar defined in tnsnames.g4.
Parsing ....
line 5:12 missing ')' at '('
line 8:16 extraneous input 'address' expecting {'(', ')'}
Done.

You can figure out where and what went wrong from the messages produced.

Have fun.


Add and Drop Discs From ASM in a Single Command

Recently I was tasked to do something that I hadn’t done before. I was required to swap out all the existing discs in the two diskgroups +DATA and +FRA, with minimal downtime. Almost all the places I looked seemed to indicate that I had to add the new discs, rebalance, drop the old discs and rebalance again. My colleague, Ian Greenwood, had a much better idea – thanks Ian.

alter diskgroup DATA add disk
--
'/path/to/disk_1' name DISK_1001,
'/path/to/disk_2' name DISK_1002,
...
'/path/to/disk_n' name DISK_100N
--
drop disk
--
DISK_0001,
DISK_0002,
...
DISK_000N
--
-- This is Spinal Tap!
--
rebalance power 11;

Then the same again for +FRA and we were done. Well, I say done, once the rebalance had finished we were done, and the Unix team could then completely remove the old discs. That did need ASM to be bounced though, which was a bit of a nuisance for the (one) database on the server, but the users were happy to let us take it down.
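Keeping an eye on the rebalance is just a matter of watching V$ASM_OPERATION from the ASM instance – when it returns no rows, the rebalance is finished. A quick check along these lines:

select group_number, operation, state, power, est_minutes
from   v$asm_operation;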

Job done and very little messing around. Sometimes, it’s helpful to look at the Oracle Manuals before hitting MOS or Google (other web search engines are available – but they are not as good!) for hints when you have new stuff to do.

Yes, I spell disc with a ‘c’ while Oracle spell it with a ‘k’. :-)

How To Extract Details From /etc/oratab on Linux

Ever wanted to parse /etc/oratab but ignore all the comments and blank lines? So did I. Here’s how … I can’t claim all the credit for this, it is based on something I was doing plus a bit of “stolen” code from SLES.

OLDIFS=$IFS
IFS=:
grep -v '^\(#\|$\)' /etc/oratab |\
while read ORASID ORAHOME AUTOSTART … Continue reading How To Extract Details From /etc/oratab on Linux
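The feed cuts the excerpt short, so here is a sketch of how the fragment might be completed – the echo in the loop body is my assumption, adapt to taste:

#!/bin/bash
# Parse /etc/oratab, skipping comment and blank lines, and print
# one line per database entry (sketch only).
OLDIFS=$IFS
IFS=:
grep -v '^\(#\|$\)' /etc/oratab |\
while read ORASID ORAHOME AUTOSTART
do
    echo "SID=${ORASID} HOME=${ORAHOME} AUTOSTART=${AUTOSTART}"
done
IFS=$OLDIFS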

Wondering Why The Oracle Databases Won’t Start With A SLES Reboot?

Me too. Took ages to hit the “duh” moment, then it became pretty obvious! The file /etc/init.d/oracle – also known as rcoracle to root users – can be used to do a number of things, such as starting the databases, starting the (default LISTENER) listener, CRS and so on, but, as I eventually found out, you have to configure it to do so!

The configuration file is /etc/sysconfig/oracle.

Most of the options are defaulted to off, except for the setting of kernel parameters (SET_ORACLE_KERNEL_PARAMETERS="yes") which is very useful.
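To see what is currently switched on, something like the following (run as root) does the trick – the grep pattern assumes the switch names all start with START_ or SET_, which is worth verifying against your own copy of the file:

# List the on/off switches in the orarun configuration file
grep -E '^(START_|SET_)' /etc/sysconfig/oracle

# After editing the file, start whatever is now enabled
rcoracle start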

Cheers.
