Friday, September 28, 2012

Oracle: Locked Users or Schemas

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.

Locked Users or Schemas
One of the users comes to tell you that they can't log in because their account is locked.

"Please fix, ASAP!"

Firstly, why has this happened?

Let's see if the default profile is assigned to this user.

SQL> select username, account_status, profile
  2  from dba_users
  3  where username = '<USER_NAME>';

USERNAME                       ACCOUNT_STATUS                   PROFILE
------------------------------ -------------------------------- -------------
<USER_NAME>                    LOCKED                           DEFAULT
If there is another profile, you will follow the same method to investigate.

SQL> set linesize 120 pagesize 25
SQL> select resource_name, resource_type, limit
  2  from dba_profiles
  3  where profile = 'DEFAULT';

RESOURCE_NAME                    RESOURCE LIMIT
-------------------------------- -------- ----------------------------------------
COMPOSITE_LIMIT                  KERNEL   UNLIMITED
SESSIONS_PER_USER                KERNEL   UNLIMITED
CPU_PER_SESSION                  KERNEL   UNLIMITED
CPU_PER_CALL                     KERNEL   UNLIMITED
LOGICAL_READS_PER_SESSION        KERNEL   UNLIMITED
LOGICAL_READS_PER_CALL           KERNEL   UNLIMITED
IDLE_TIME                        KERNEL   UNLIMITED
CONNECT_TIME                     KERNEL   UNLIMITED
PRIVATE_SGA                      KERNEL   UNLIMITED
FAILED_LOGIN_ATTEMPTS            PASSWORD 10
PASSWORD_LIFE_TIME               PASSWORD UNLIMITED
PASSWORD_REUSE_TIME              PASSWORD UNLIMITED
PASSWORD_REUSE_MAX               PASSWORD UNLIMITED
PASSWORD_VERIFY_FUNCTION         PASSWORD NULL
PASSWORD_LOCK_TIME               PASSWORD UNLIMITED
PASSWORD_GRACE_TIME              PASSWORD UNLIMITED
16 rows selected.

SQL>

You will notice the line: FAILED_LOGIN_ATTEMPTS            PASSWORD 10
This means that if 10 wrong passwords are entered in a row without a correct entry, the account becomes locked.  If somebody enters the correct password before the limit is reached, the failed-login counter is reset.  The default for FAILED_LOGIN_ATTEMPTS is 10.
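DBA_USERS also records when the account was locked, which helps to pin down the time of the failed attempts:

SQL> select username, account_status, lock_date
  2  from dba_users
  3  where username = '<USER_NAME>';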

So now you need to do some research as to why there were ten attempts to log in with the wrong password.  It could be that, because of the nature of the application, this is to be expected.  In that case you may consider changing this setting to a higher number, or even unlimited.  If it was a transient occurrence, then you can leave it at the default, or at the number it was set to.

SQL> alter profile default limit failed_login_attempts 25;

or

SQL> alter profile default limit failed_login_attempts unlimited;

If this was not expected from the application, then there may have been an attempt by an unauthorized agent to compromise the security of your Oracle database.  You will need to investigate this possibility.
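If auditing of logons is enabled (for example with AUDIT SESSION WHENEVER NOT SUCCESSFUL), a sketch like this shows where the failed attempts came from; return code 1017 corresponds to ORA-01017, invalid username/password:

SQL> select username, userhost, terminal, timestamp, returncode
  2  from dba_audit_session
  3  where returncode = 1017
  4  order by timestamp desc;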

However, in the meantime the users are probably keen to be able to connect again, especially if it is a production environment, so you will unlock the user account.  You don't need to reset the password in this case; you only need to unlock the account.

SQL> alter user <user_name> account unlock;
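A quick check confirms that the account is open again:

SQL> select username, account_status
  2  from dba_users
  3  where username = '<USER_NAME>';

USERNAME                       ACCOUNT_STATUS
------------------------------ --------------
<USER_NAME>                    OPEN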

You are done.

Franz Devantier,
Need a database health check?




Monday, September 24, 2012

Is the database up and running?

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.

Is the database up and running?
Have you noticed that whenever there is an issue, one of the first questions is whether the database is up and running?

One trick that some DBAs use is to log on to the database and execute this:
SQL> select * from dual;

D
-
X

If they get an 'X' back, they are prepared to stick their neck out and say that the database is up and running, and that there is nothing wrong with it.  This is obviously not the way to go; you will need to investigate further before taking a position, if you want to remain a DBA for a long period of time.

Firstly, if you get an error message back from dual instead of the single 'X', it probably indicates that the database is mounted but not open.  Again our cowboy DBAs will rush in and declare that there is corruption.  This is based on the assumption that if the database got stuck at the mounted stage, it could not open because the instance could not access certain datafiles, and that therefore there must be corruption and we need media recovery.  That is a very dangerous assumption; please don't fall into this simplistic trap.

So although selecting * from dual may in fact provide a quick indication of whether the database is up or not, don't rely too heavily on it.

What has made it easier to rely on this mechanism is that if you can connect to the database from a remote logon, then the network is in place, and the OS and the hardware are up and running.

So we have to agree, in the end, that a quick "select * from dual;" does have some value if you suspect that the database may not be up and running.  Keep in mind that if you have logged in as sysdba, this will not reveal that the archive log directory has run out of space and that the database is currently hanging, which means that nobody else can log in.
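If your archive logs go to the flash recovery area, a quick look at V$RECOVERY_FILE_DEST (available from 10g onwards) shows how close it is to being full:

SQL> select name, space_limit, space_used, space_reclaimable
  2  from v$recovery_file_dest;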

So if we are going to use "select * from dual;", then we will need to follow it up with some other checks to make it a more useful and reliable test.  One of the first things to check after this would be that there is available space at the operating system level.  This confirms that the archive logs can still write to disk, and that the tablespaces can grow if you have them in auto-extend mode.
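On UNIX, for example, a quick look at the relevant file systems is enough; the mount points below are only illustrative:

> df -h /u01 /u02

On older platforms that do not support -h, df -k gives the same information in kilobytes.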

Then you will go and have a look at the alert log.  In UNIX you can do something like:
> tail -400 alert_<sid>.log | grep ORA-
to see if there are any recent errors in the alert log.

In Windows you can open the alert log in Notepad or similar.  Scroll to the bottom of the file.  Do a search going backwards, looking for "ORA-".  If you find such an error, look at the date-time stamp to decide whether the error was recent or not.  You see, we are looking for recent errors here.  A few days back is still recent.  A few weeks or months back is not recent, and for the purposes of this exercise, where we are determining whether the database is up and running, we will ignore those for now.
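If you prefer the command line on Windows, findstr does roughly what grep does; the path below is only an example of where the alert log might live:

> findstr "ORA-" C:\oracle\admin\<sid>\bdump\alert_<sid>.log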

Now let's go back to the SQL prompt and do a few more checks.  Let's make sure that we are looking at the right database.

SQL> select name, open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
<DBName>  READ WRITE

Is this database on the correct host?
SQL> select instance_name, host_name, version, startup_time, status from v$instance;

INSTANCE_NAME     HOST_NAME      VERSION     STARTUP_TIME    STATUS
----------------  -------------  ----------  --------------  ------------
<instance_name>   <host_name>    10.2.0.4.0  <startup_time>  OPEN

So we have now answered the question with some level of certainty.  The database is up and running, and you can connect to it.  There are no recent errors in the alert log, and at first look there seems to be enough OS space.  We have not determined whether we are about to run out of tablespace space, or looked at other potential issues, but that was not the purpose of this exercise.

Franz Devantier,
Need a database health check?
devantierf@gmail.com




Saturday, September 22, 2012

ORA-00060: Deadlock detected.

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.

ORA-00060: Deadlock detected
You may have confused this with an ORA-00600 for a moment.  But from a DBA perspective this is not a database problem; it is nothing like an internal error, so the database is not in trouble.  The application is in trouble though.

The trace file that Oracle writes when this happens says as much:

DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL.
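To see how an application gets here, consider a minimal sketch using the classic SCOTT.EMP demo table; the table and row keys are just an example, any two rows updated in opposite order by two sessions will do.

Session 1:
SQL> update emp set sal = sal + 100 where empno = 7369;

Session 2:
SQL> update emp set sal = sal + 100 where empno = 7499;

Session 1 (now waits on session 2's lock):
SQL> update emp set sal = sal + 100 where empno = 7499;

Session 2 (closes the cycle):
SQL> update emp set sal = sal + 100 where empno = 7369;

One of the two sessions receives ORA-00060 and its current statement is rolled back.  The fix belongs in the application: take locks in a consistent order.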

Franz Devantier,
Need a database health check?
devantierf@gmail.com



Thursday, September 20, 2012

Oracle: Setting up Datapump Exports

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.

Datapump Exports:

The first thing you need to do, to set up datapump exports, is to verify that you have a suitable Directory object. 
Get to a shell in UNIX, or a command prompt in Windows:
> export ORACLE_SID=<SidName>     (in Windows: set ORACLE_SID=<SidName>)
Then start sqlplus / as sysdba, or connect as your DBA user.
SQL> set linesize 120
SQL> select directory_name||' : '||directory_path from dba_directories;

This should give you a list of directory names and directory paths.  Determine whether one of these directory paths is suitable for your purposes.  Go to the directory path and determine whether it is being used for other purposes.  It may not be suitable for whatever reason: for example, there may not be sufficient space on the file system (or, in Windows, the drive), or it may already be used for other purposes and you want to keep things separate.  In that case you would first create a directory.

For example:
SQL>  create directory <Directory_name> as '<Directory_path>';

Then you need to grant rights to the user who will be running datapump.  This may be the system user, your DBA user, or another designated user.
SQL> grant execute, read, write on directory <Directory_name> to <user_name>;
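As a concrete sketch, with a made-up directory name and path:

SQL> create directory dp_exports as '/u01/app/oracle/exports';
SQL> grant read, write on directory dp_exports to scott;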

Now you are ready to start creating your datapump export script.  Have a look at the syntax by invoking datapump with help=y, which displays the available parameters:
> expdp help=y

The best approach here is to create a parameter file, for example expdp_<dbname>.par.
There are three important parameters to have in this file:
DIRECTORY=<directory_name>
DUMPFILE=<dumpfile_name>.dmp
LOGFILE=<logfile_name>.log
Note that these names are not fully qualified, because the Directory parameter will point to the path.  So the parameters DUMPFILE and LOGFILE should contain only the file names.

You also need to specify what you need to export.  If you are not sure, then you can export the entire database:
FULL=Y

If you know which schemas you need, then you can add the schema names:
SCHEMAS=scott

Or if you only need a few tables then:
TABLES=(schema.table_name,schema.table_name:partition_name)
Exporting tables enables you to drill down to specific partitions, which can save space and time if you have large partitioned tables.

Now that you have set up your parameter file, you are ready to run.  At this point you may decide to run everything from the command line and keep the parameter file purely as documentation, or you may run the export using the parameter file.
From the command line:
> expdp user@connect_string directory=<directory_name> dumpfile=<dumpfile_name> logfile=<logfile_name> full=y
You will then be prompted for the password.  This is best practice, because it avoids the password being visible (for example in the process list) while the export is running.

Using a parameter file:
> expdp user@connect_string parfile=expdp_<dbname>.par
You may find that you need to specify the fully qualified path for the parameter file.

Here is one last parameter that you may like to add to your parameter file.  With datapump, if the dumpfile already exists, the job stops with an error before it starts.  The older exp utility would simply have overwritten it.
REUSE_DUMPFILES=Y
This parameter will cause your old dumpfile of the same name to be automatically overwritten.  Make sure that this is what you want before you add this parameter.

So your final parameter file could look something like this:
DIRECTORY=<directory_name>
DUMPFILE=<dumpfile_name>.dmp
LOGFILE=<logfile_name>.log
TABLES=(schema.table_name1,schema.table_name2)
REUSE_DUMPFILES=Y

If you run this from UNIX or Windows directly on the command line, you can't close that command window or UNIX shell until it is finished.  If you connected to a Windows server with Remote Desktop Connection, you can click the 'X' in the top right-hand corner, and the server will continue to process your script until you log in again.  The problem with this is that by default you can have only two concurrent connections, so somebody may bump your connection off if they need to connect to the server a little later on.  To overcome this issue in Windows, you can run it from the scheduler.

In UNIX, you will have to sit around until it is finished before you can close the shell.  Running it from cron, the UNIX scheduler, would also give you more options.  However, here is one common way to work around this problem in UNIX:

Put the process in the background, or start it in the background.  Furthermore, if you run it with nohup, you will have a log afterwards of everything that got spooled to the screen.  For example:
> nohup expdp user@connect_string parfile=expdp_<dbname>.par &

Now you can watch what is happening in real time, like this:
> tail -50f nohup.out

If you need to exit the UNIX shell you can, because the job is running in the background.  When you log in again, you can just run tail -50f nohup.out again to see where the process is.  nohup.out should be at roughly the same point as the log file that we defined in this exercise, so it is a good idea to check them both.  You can watch the log file in the same way: "tail -50f <logfile_name>.log"

Now the important part comes when you validate whether the export was successful or not.
You start off by going right to the end of the log file, to see if there were any errors.  If you see a message that implies it ended successfully without errors, then you are good to go.  If you see something that tells you it finished with errors or warnings, then you need to go back through the log file to determine what went wrong.
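For example, on UNIX you can check the tail of the log; a successful Data Pump job ends with a completion message along these lines (the job name is system-generated and yours will differ):

> tail -5 <logfile_name>.log
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 10:15:30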

Always keep the log file with the dump file.  The log file will tell you in detail what was exported and whether it was successful.  Without the information in the log file, the dump file cannot be trusted as a valid source to recover from, and therefore has very limited value.  So always keep your log files.

Well that should be enough to get you going on datapump exports.  Keep looking at this blog for more database administration insights.

Franz Devantier,
Need a database health check?



Sunday, September 16, 2012

Oracle: Pinning Frequently Used Packages

Dear Readers,

My name is Franz Devantier, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g) Security DBA.  I will be sharing with you the basic duties of an Oracle DBA, and also some of the undocumented and not-so-well-known tasks.

Pinning Frequently Used Packages
The reason why we would consider pinning packages in memory is that if they are used frequently and aged out of the shared pool by the Least Recently Used (LRU) algorithm, they need to be loaded into memory again, and associated code may also need to be reparsed.  So to improve performance, we can pin these frequently used packages in memory.

Firstly you will need to verify that the package sys.dbms_shared_pool exists.  From a SQL*Plus prompt:
SQL> select owner, object_type, object_name
  2  from dba_objects
  3  where object_name = 'DBMS_SHARED_POOL';

You should get something like this:

SYS     PACKAGE             DBMS_SHARED_POOL
SYS     PACKAGE BODY        DBMS_SHARED_POOL

If the package does not exist, then you need to create it:

Go to $ORACLE_HOME/rdbms/admin (or %ORACLE_HOME%\rdbms\admin on Windows).
Connect to SQL*Plus as sysdba and run the following:
SQL> @dbmspool.sql
SQL> @prvtpool.plb

This package has four procedures in it.  We only need one of them, KEEP, for our purposes:
PROCEDURE KEEP
PROCEDURE PURGE
PROCEDURE SIZES
PROCEDURE UNKEEP

There are now three steps to follow to achieve the goal.
Step 1.  Choose the packages to keep in memory.
From a SQL*Plus prompt:
SQL> SELECT Owner, substr(name,1,20) "Name", type, kept, loads,executions
    FROM v$db_object_cache
    where type like 'PACKAGE%'
    and executions > 100000
    order by executions desc;

OWNER   Name                  TYPE           KEPT  LOADS  EXECUTIONS
------- --------------------- -------------- ----- ------ ----------
APPS    FND_CACHE_VERSIONS_P  PACKAGE BODY   NO    5      2662144
SYS     DBMS_LOB              PACKAGE BODY   NO    2      1274938
SYS     DBMS_UTILITY          PACKAGE BODY   NO    1      1235305
SYS     DBMS_ASSERT           PACKAGE BODY   NO    1      1111957
APPS    FND_PROFILE           PACKAGE BODY   NO    4      746524
SYS     STANDARD              PACKAGE BODY   NO    1      261366
APPS    FND_GLOBAL            PACKAGE BODY   NO    4      254258
SYS     DBMS_SPACE_ADMIN      PACKAGE BODY   NO    1      223115
SYS     DBMS_AQ               PACKAGE BODY   NO    1      188273
SYS     DBMS_METADATA_UTIL    PACKAGE        NO    5      116324
SYS     DBMS_SESSION          PACKAGE BODY   NO    5      101181

You may have to play around with the number of executions until you have generated more or less the top 10 performers.  You will choose the packages where KEPT is 'NO', with the highest executions, to pin into the SGA.  Some of these packages may in fact be SYS-owned packages.


Step 2.  Re-start the database.  This is optional, but it is good practice, so that we can be sure the system does not run out of memory when we load the pinned objects into the SGA.

Step 3.  Call the procedure to pin the packages in memory:
SQL> execute dbms_shared_pool.keep('SYS.DBMS_LOB', 'P');

The parameter 'P' indicates package/procedure/function.
'T' indicates type.
'R' indicates trigger.
'Q' indicates sequence.


Now you can verify what is kept in memory by re-running the query:
SQL> SELECT Owner, substr(name,1,20) "Name", type, kept, loads,executions
    FROM v$db_object_cache
    where type like 'PACKAGE%'
    and executions > 100000
    order by executions desc;

However, you may want the packages to remain pinned in memory even after re-starting the database.  For that you can create a trigger covering, in this case, the most frequently used packages, that is, the ones with the most executions.  From a SQL*Plus prompt:

create or replace trigger pin_packages
after startup on database
BEGIN
   dbms_shared_pool.keep('APPS.FND_CACHE_VERSIONS_P', 'P');
   dbms_shared_pool.keep('SYS.DBMS_LOB', 'P');
   dbms_shared_pool.keep('SYS.DBMS_UTILITY', 'P');
   dbms_shared_pool.keep('SYS.DBMS_ASSERT', 'P');
   dbms_shared_pool.keep('APPS.FND_PROFILE', 'P');
   dbms_shared_pool.keep('SYS.STANDARD', 'P');
   dbms_shared_pool.keep('APPS.FND_GLOBAL', 'P');
   dbms_shared_pool.keep('SYS.DBMS_AQ', 'P');
   dbms_shared_pool.keep('SYS.DBMS_SESSION', 'P');
END;
/

Now every time you start up the database, these packages will be pinned into memory.  You can also use dbms_shared_pool.unkeep() to unpin them again.
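For example, to unpin one of them again, using the same flag convention as keep:

SQL> execute dbms_shared_pool.unkeep('SYS.DBMS_LOB', 'P');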


Franz Devantier,
Need a database health check?
