Monday 22 August 2016

How to export/import database data

In this post, I will show you the basic steps to export/import data from an Oracle Database 11g instance.

Please note that the output of the Data Pump Export (expdp) and Import (impdp) utilities is not compatible with the original Export (exp) and Import (imp) utilities.

For details, please refer to the Oracle online documentation:
https://docs.oracle.com/cd/E18283_01/server.112/e17222/expimp.htm#i262247


Remark:
To ensure the data is exported from the correct database instance, set ORACLE_SID before you execute the export/import command.

export ORACLE_SID=dummydb;


A. Sample scripts for the original Export (exp) and Import (imp) utilities (compatible with 9i and earlier).

Export (exp) utility - export data for a specified table:

EXP [USER] TABLES=[TABLE_NAME] FILE=[DUMP_FILE] LOG=[LOG_FILE] FULL=[Y/N] ROWS=[Y/N]

E.g
EXP HR TABLES=EMPLOYEES FILE=exp_hr_employees.dmp LOG=exp_hr_employees.log FULL=N ROWS=Y

Import (imp) utility - import data for a specified table:

IMP [USER] FILE=[EXP_DUMP_FILE_NAME] IGNORE=[Y/N] FROMUSER=[EXPORT_SOURCE_USER] TOUSER=[IMPORT_TARGET_USER] LOG=[LOG_NAME] TABLES=[TABLE_NAME] ROWS=[Y/N] COMMIT=[Y/N]

E.g
IMP SCOTT FILE=exp_hr_employees.dmp IGNORE=Y FROMUSER=HR TOUSER=SCOTT LOG=imp_hr_employees.log TABLES=EMPLOYEES ROWS=Y COMMIT=Y


B. Sample scripts for the Data Pump utilities (expdp/impdp, compatible with 10g and above).

The Data Pump utilities require a directory object to be created in the database before you perform an export/import.

The directory object is only a pointer to a physical directory; creating it does not create the physical directory on the file system of the database server. So if the physical location changes, we also need to update the path on the database side.

The directory object may need to be created by a DBA if your user does not have the CREATE DIRECTORY privilege.

Also, additional privileges are required for a database user who wants to export/import another user's data; see the sketch below.
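
A minimal sketch of that one-time setup, run as a DBA. The directory name DATA_EXP_IMP (used in the examples below) and the OS path are assumptions for illustration; on 11g the DATAPUMP_* roles should already exist, but verify them on your database.

-- create the directory object; the OS path must already exist on the database server
CREATE OR REPLACE DIRECTORY DATA_EXP_IMP AS '/u01/app/exp_imp';

-- let HR read/write dump and log files through this directory object
GRANT READ, WRITE ON DIRECTORY DATA_EXP_IMP TO HR;

-- only needed if HR has to export/import other users' data
GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO HR;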

1. Export Utility

Reference: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#SUTIL200

1.1 Export specified table data only

EXPDP [USER] DIRECTORY=[DIRECTORY_OBJECT_NAME] TABLES=[TABLE_NAME] DUMPFILE=[DUMP_FILE] LOGFILE=[LOG_FILE_NAME]
E.g
EXPDP HR DIRECTORY=DATA_EXP_IMP TABLES=TABLE_A DUMPFILE=impstestexp.dmp LOGFILE=action.log

1.2 Full Export

EXPDP [USER] DIRECTORY=[DIRECTORY_OBJECT_NAME] DUMPFILE=[DUMP_FILE] LOGFILE=[LOG_FILE_NAME]
E.g
EXPDP HR DIRECTORY=DATA_EXP_IMP DUMPFILE=impstestexp.dmp LOGFILE=action.log

1.3 Export a specified schema

EXPDP [USER] DIRECTORY=[DIRECTORY_OBJECT_NAME] DUMPFILE=[DUMP_FILE] LOGFILE=[LOG_FILE_NAME] SCHEMAS=[SCHEMA_NAME]
E.g
EXPDP SYSTEM DIRECTORY=DATA_EXP_IMP DUMPFILE=impstestexp.dmp LOGFILE=action.log SCHEMAS=HR

2. Import Utility

Reference: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL300

IMPDP [USER_NAME] DIRECTORY=[DIRECTORY_OBJECT_NAME] REMAP_SCHEMA=[EXPORT_SOURCE_USER]:[IMPORT_TARGET_USER] DUMPFILE=[EXPORT_DUMP_FILE] LOGFILE=[LOG_FILE_NAME]
E.g
IMPDP SCOTT DIRECTORY=DATA_EXP_IMP REMAP_SCHEMA=HR:SCOTT DUMPFILE=impstestexp.dmp LOGFILE=action.log
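
If you only need specific tables from the dump, impdp also accepts a TABLES list together with REMAP_SCHEMA. A minimal sketch reusing the names from the examples above (the log file name is only illustrative):

IMPDP SCOTT DIRECTORY=DATA_EXP_IMP REMAP_SCHEMA=HR:SCOTT TABLES=HR.TABLE_A DUMPFILE=impstestexp.dmp LOGFILE=imp_table_a.log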

Thursday 21 July 2016

Gather the statistic of Archive log generation by hour

Use the following SQL to gather statistics on archive log generation by hour. This information can help you identify which hours of the day the database instance changes data most frequently.


SQL> select
  2  to_char( a.COMPLETION_TIME, 'dd-mm-yyyy')as generation_date,
  3  to_char( a.COMPLETION_TIME, 'hh24')||':00' as start_time,
  4   round(sum(a.BLOCKS * a.BLOCK_SIZE)/1048576, 0) as Total_size_in_MB,
  5   count(a.recid) as no_of_archived_log
  6   from v$archived_log a
  7   group by to_char( a.COMPLETION_TIME, 'dd-mm-yyyy'),  to_char( a.COMPLETION_TIME, 'hh24')
  8   order by 1, 2, 3 desc, 4 desc;

GENERATION START TOTAL_SIZE_IN_MB NO_OF_ARCHIVED_LOG
---------- ----- ---------------- ------------------
01-07-2016 04:00               66                  2
01-07-2016 09:00               64                  2
01-07-2016 14:00               64                  2
01-07-2016 19:00               64                  2
01-07-2016 22:00               96                  2
02-07-2016 02:00               64                  2
02-07-2016 06:00               81                  2
02-07-2016 09:00               64                  2
02-07-2016 11:00               66                  2
02-07-2016 14:00               74                  2
02-07-2016 18:00               68                  2

Wednesday 20 July 2016

Got security error "Security token does not match. You must be login again..." on 12c EM Express

When you log in as SYSTEM on 12c EM Express, you may get the security token error quoted in the title after login.


Basically, this error may happen when the user does not have the privileges to use EM Express; therefore, you should grant the "EM_EXPRESS_BASIC" or "EM_EXPRESS_ALL" role to the specified user.

In general, "EM_EXPRESS_BASIC" has been granted to "EM_EXPRESS_ALL" by default and the DBA role include "EM_EXPRESS_ALL".  

"EM_EXPRESS_ALL" grants a user all privileges required to perform any action in EM Express.


SQL> column grantee format a20
SQL> select grantee, granted_role from dba_role_privs where granted_role = 'EM_EXPRESS_ALL';

GRANTEE         GRANTED_ROLE
-------------------- --------------
DBA     EM_EXPRESS_ALL
SYS     EM_EXPRESS_ALL

SQL> GRANT EM_EXPRESS_ALL TO SYSTEM;

Grant succeeded.

SQL> select grantee, granted_role from dba_role_privs where granted_role = 'EM_EXPRESS_ALL';

GRANTEE         GRANTED_ROLE
-------------------- --------------
SYSTEM         EM_EXPRESS_ALL
DBA     EM_EXPRESS_ALL
SYS     EM_EXPRESS_ALL

SQL> 



After that, when you log in again as SYSTEM on EM Express, you should be able to view the database instance performance successfully. If you still get the security error, try another browser.



"ORA-600 [3020]" / "ORA-16472" Perform failover after restore standby backup on standby site

If you get "ORA-600 [3020]" or "ORA-16472" after performing a failover on the target standby database, you can try to perform a data-loss failover on the target standby database.

According to the Oracle documentation, this error is raised when the database protection mode is configured as "MaxAvailability" or "MaxProtection" and data loss is detected during the failover process.

To solve this issue, you can either change the database protection mode to "Max Performance" before the failover operation on the target standby database, or perform a data-loss failover on the target standby database.

Here is the example:

Background:
1. The original protection mode of the database is configured as "Max Availability".
2. The database is configured as a physical standby database.
3. Oracle Database 12c on Oracle Linux.
4. Both approaches restore from the same backup with the latest archive logs applied; with approach A, it is not necessary to perform a data-loss failover on the target standby database.

A. Perform failover after changing the protection mode

RMAN> recover database;

Starting recover at 02-DEC-15
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=56
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=57
channel ORA_DISK_1: reading from backup piece /data_02/rman_disk/a_3045506482_20151202_151
channel ORA_DISK_1: piece handle=/data_02/rman_disk/a_3045506482_20151202_151 tag=TAG20151202T120618
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/arch/testdgbdr/1_56_896806709.dbf thread=1 sequence=56
archived log file name=/u01/app/arch/testdgbdr/1_57_896806709.dbf thread=1 sequence=57
media recovery complete, elapsed time: 00:00:02
Finished recover at 02-DEC-15

RMAN> alter database set standby database to maximize performance;

Statement processed

RMAN> exit


Recovery Manager complete.
[oracle@standby02 rman_disk]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 2 17:22:25 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select db_unique_name, protection_mode from v$database;

DB_UNIQUE_NAME       PROTECTION_MODE
------------------------------ --------------------
testdgbdr       MAXIMUM PERFORMANCE

SQL> alter database failover to testdgbdr;

Database altered.

SQL> select db_unique_name, database_role, protection_mode from v$database;

DB_UNIQUE_NAME       DATABASE_ROLE PROTECTION_MODE
------------------------------ ---------------- --------------------
testdgbdr       PRIMARY MAXIMUM PERFORMANCE

SQL> exit

Also, you can perform a manual failover using DGMGRL (the Data Guard broker command-line interface).

DGMGRL> FAILOVER TO testdgbdr;


B. Perform failover without changing the protection mode

RMAN> recover database;

Starting recover at 02-DEC-15
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=56
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=57
channel ORA_DISK_1: reading from backup piece /data_02/rman_disk/a_3045506482_20151202_151
channel ORA_DISK_1: piece handle=/data_02/rman_disk/a_3045506482_20151202_151 tag=TAG20151202T120618
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/arch/testdgbdr/1_56_896806709.dbf thread=1 sequence=56
archived log file name=/u01/app/arch/testdgbdr/1_57_896806709.dbf thread=1 sequence=57
media recovery complete, elapsed time: 00:00:03
Finished recover at 02-DEC-15

RMAN> exit


Recovery Manager complete.
[oracle@standby02 rman_disk]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 2 17:34:11 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select db_unique_name, database_role, protection_mode from v$database;

DB_UNIQUE_NAME       DATABASE_ROLE PROTECTION_MODE
------------------------------ ---------------- --------------------
testdgbdr       PHYSICAL STANDBY MAXIMUM AVAILABILITY

SQL> alter database failover to testdgbdr;
alter database failover to testdgbdr
*
ERROR at line 1:
ORA-16472: failover failed due to data loss

SQL> alter database activate physical standby database;

Database altered.

SQL> exit

Also, you can perform a manual failover using DGMGRL (the Data Guard broker command-line interface).

DGMGRL> FAILOVER TO testdgbdr IMMEDIATE;

Reference:
https://docs.oracle.com/database/121/SBYDB/role_management.htm#SBYDB4773
https://docs.oracle.com/database/121/DGBKR/sofo.htm#DGBKR370

Tuesday 19 July 2016

Got error "ORA-00054" when truncate table

Sometimes we may get the following error when truncating a table. This issue can happen when the table is locked by one or more sessions; while the table is locked, we cannot perform any DDL on it.

Therefore, find out which session(s) currently hold the lock on this object, kill the related session(s), and then truncate the table again.


SQL> TRUNCATE TABLE EMPLOYEES;
TRUNCATE TABLE EMPLOYEES
               *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired 


SQL> select * from v$locked_object where object_id in (
  2  select object_id from dba_objects where object_name = 'EMPLOYEES');

    XIDUSN    XIDSLOT     XIDSQN  OBJECT_ID SESSION_ID
---------- ---------- ---------- ---------- ----------
ORACLE_USERNAME                OS_USER_NAME
------------------------------ ------------------------------
PROCESS                  LOCKED_MODE
------------------------ -----------
        60         11     243914     164033         28
HR                        oracle
6944:5636                          3


SQL> select sid from v$lock where id1=164033;

       SID
----------
        28

SQL> select sid, serial# from v$session where sid=28;

       SID    SERIAL#
---------- ----------
        28       5921

SQL> alter system kill session '28,5921';

System altered.

SQL> select * from v$locked_object where object_id in (
  2  select object_id from dba_objects where object_name = 'EMPLOYEES');

no rows selected

SQL> TRUNCATE TABLE EMPLOYEES;
Table truncated.


SQL> SELECT COUNT(*) AS EMPLOYEES FROM EMPLOYEES;
 EMPLOYEES                                                                      
----------                                                                      
         0  

Friday 15 July 2016

Gather user login information from database

How do we know who has connected to the Oracle database? Or, when we see "Fatal NI connect error 12170" in the alert log, how can we get further information from the database for investigation?

Normally, user login information such as the OS username, machine, and login time can be logged in the database; the view "DBA_AUDIT_TRAIL" facilitates this kind of investigation.


 select os_username,
       username,
       userhost,
       terminal,
       to_char(timestamp, 'dd-mm-yyyy hh24:mi:ss') as timestamp,
       action,
       action_name,
       to_char(logoff_time, 'dd-mm-yyyy hh24:mi:ss') as logoff_time,
       extended_timestamp,
       os_process,
       dbid
  from dba_audit_trail
 where timestamp between
       to_timestamp('13-07-2016 12:00:00', 'dd-mm-yyyy hh24:mi:ss') and
       to_timestamp('13-07-2016 13:00:00', 'dd-mm-yyyy hh24:mi:ss')
 order by os_process;
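
Note that DBA_AUDIT_TRAIL is only populated when standard database auditing is enabled (AUDIT_TRAIL set to DB or DB,EXTENDED) and the relevant audit options, such as AUDIT SESSION, are in effect. A quick check, as a sketch:

SQL> show parameter audit_trail
SQL> -- if the value is NONE, enable it (requires an instance restart) and audit logons/logoffs
SQL> alter system set audit_trail=DB scope=spfile;
SQL> audit session;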


Reference:
http://www.dba-oracle.com/t_dba_audit_trail.htm

Wednesday 13 July 2016

Set up unixODBC for an Oracle database instance to connect to a DB2 database

This post describes how to set up unixODBC on the Oracle database server so the instance can connect to a DB2 database through the ODBC gateway (dg4odbc).

A. Driver installation:

1. Unzip and extract
unixODBC:
Download from: http://www.unixodbc.org/download.html

gzip -d unixODBC-2.3.2.tar.gz
tar xf unixODBC-2.3.2.tar

DB2 client:
Download from: http://www-01.ibm.com/support/docview.wss?uid=swg24034634

gzip -d v9.7fp8_hpipf64_odbc_cli.tar.gz
tar xf v9.7fp8_hpipf64_odbc_cli.tar

2. Change Directory
 cd unixODBC-2.3.2

3. Set PATH and Compiler flags

PATH=$PATH:/usr/ccs/bin;export PATH
PATH=$PATH:/usr/sbin;export PATH
CXXFLAGS=+DD64;export CXXFLAGS
CFLAGS=+DD64;export CFLAGS

4. Configure, compile, and install
   ./configure --prefix=/u01/app/oraodbc/unixODBC-2.3.2 --enable-gui=no --enable-drivers=no
   make
   make install

5. Configure the DB2 ODBC client:
Location: /u01/app/oraodbc/db2_odbc_client/odbc_cli/clidriver/cfg
File: db2cli.ini

[DB2]
database=
protocol=TCPIP
hostname=
servicename=
BitData=1
BlockLobs=1
SQLCODEMAP=1

6. Configure the ODBC ini files:
Location: /u01/app/oraodbc/unixODBC-2.3.2/etc/
File: odbc.ini
Location: /u01/app/oracle/product/11.2.0/dbhome_1
File: .odbc.ini
Add to odbc.ini and .odbc.ini 
   [ODBC Data Sources]
   DB2=IBM DB2 ODBC DRIVER

   [DB2]
   Driver=/u01/app/oraodbc/db2_odbc_client/odbc_cli_o/odbc_cli/clidriver/lib/libdb2o.so
   AUTHENTICATION=SERVER
   SECURITY=SSL
   CHARSET=AMERICAN_AMERICA.WE8MSWIN1252

7. Create the init file for Oracle Heterogeneous Services (e.g. initDB2.ora, matching the SID_NAME in listener.ora)
Location: /u01/app/oracle/product/11.2.0/dbhome_1/hs/admin
# This is a sample agent init file that contains the HS parameters that are
# needed for the Database Gateway for ODBC

#
# HS init parameters
#
HS_FDS_CONNECT_INFO = DB2
HS_FDS_TRACE_LEVEL = DEBUG
HS_FDS_SHAREABLE_NAME = /u01/app/oraodbc/unixODBC-2.3.2/lib/libodbc.so
HS_FDS_SQLLEN_INTERPRETATION=64
HS_FDS_TIMESTAMP_MAPPING="TIMESTAMP(6)"
HS_FDS_DATE_MAPPING=CHAR
# fix the column-size tripling problem caused by character set conversion
HS_KEEP_REMOTE_COLUMN_SIZE=ALL
HS_FDS_SUPPORT_STATISTICS=FALSE
#HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8
HS_LANGUAGE=AMERICAN_AMERICA.WE8MSWIN1252
HS_FDS_REMOTE_DB_CHARSET=AMERICAN_AMERICA.AL32UTF8
HS_NLS_NCHAR=AL16UTF16

#
# ODBC specific environment variables
#
set ODBCINI=/u01/app/oraodbc/unixODBC-2.3.2/etc/odbc.ini


# Environment variables required for the non-Oracle system
#
#set DB2INSTANCE=db2inst1
set DELIMIDENT=y


8. Update listener.ora and tnsnames.ora
Location: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin
Add to listener.ora 

SID_LIST_LISTENER1521=
      (SID_LIST=
        (SID_DESC=
          (SID_NAME=DB2)
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)    
          (ENVS=LD_LIBRARY_PATH64=/u01/app/oracle/product/11.2.0/dbhome_1/hs/lib:/u01/app/oraodbc/db2_odbc_client/odbc_cli_o/odbc_cli/clidriver/lib:/u01/app/oracle/product/11.2.0/dbhome_1/lib)
          (PROGRAM=dg4odbc)
        )
      )

Add to tnsnames.ora

DG4ODBC_DB2_SSL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host001)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = DB2)
    )
    (HS=OK)
  )

9. Reload Listener
lsnrctl status
lsnrctl reload

10. Create DBLINK
 create database link DBLINK_ODBC_DB2
 connect to ibmdb2user identified by *******
 using 'DG4ODBC_DB2_SSL';

11. Testing the connection:
 select * from user.table01@DBLINK_ODBC_DB2;


B. SSL Configuration:

Reference:
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html

1. Download GSKit Package from IBM

Note:
The GSKit is not available as a free download from the IBM site, and root access is required to install it.

Install GSKit 8 Scripts:
http://www-01.ibm.com/support/docview.wss?uid=swg21628888 

Operation Steps : HP-UX - Install Instructions for GSKit 8

1. Extract the file
==> cd
==> gunzip -c DB2_SF_SSL_10_HP-UX_IA64.tar.gz | tar -xf -

2. The contents of the folder:
==> cd gskit/hpipf64/
./hpipf64:
gskcrypt64-8.0.14.14.hpux.ia64.tar.Z
gskssl64-8.0.14.14.hpux.ia64.tar.Z

Description of the files:

For TDS 64 bit Client components:
Global Security Kit V8.r.m.f for HPUX IA 64-bit Crypto - gskcrypt64-8.r.m.f.hpux.ia64.tar.Z
Global Security Kit V8.r.m.f for HPUX IA 64-bit - gskssl64-8.r.m.f.hpux.ia64.tar.Z

Note: r.m.f are release, maintenance and fix level of GSKit V8.
3. Now extract individual package files:
==> uncompress gsk*Z
==> tar -xf gskcrypt64*.tar
==> tar -xf gskssl64*.tar
==> ls -1d gsk*64 # the following output will be seen
gskcrypt64
gskssl64
      
4. Install the packages:

For 64 bit GSKit: (From the 64 subfolder where you can see the listing of gskssl64 and gskcrypt64)
==> cd 8*-TIV-GSKIT-HPUXIA64-*/64 # this full path becomes $PWD and is set in most shells.
==> ls -1d gsk*64
gskcrypt64
gskssl64

==> swinstall -s $PWD/gskcrypt64 gskcrypt64
==> swinstall -s $PWD/gskssl64 gskssl64
      
5. Verify the installed fileset levels:
==> swlist | egrep "gskcrypt|gskssl"


2. Upload ".arm" key to db server (E.g. "/u01/app/oraodbc/keystore")

3. Set PATH

PATH=$PATH:/bin:/lib;export PATH
echo $PATH

4. Create key database under keystore folder
Location: /u01/app/oraodbc/keystore
client key database name: ibmdb2db.kdb
gsk8capicmd_64 -keydb -create -db "keystore db name" -pw "keystore password" -stash

5. Import server certificate to the client key database
client key database name: ibmdb2db.kdb
server certificate: serverdb2.arm
gsk8capicmd_64 -cert -add -db "keystore db name" -pw "keystore password" -label "dbselfsigned" -file "serverdb2.arm" -format ascii -fips

Note:
For example, the following command creates a key database called mydbclient.kdb and a stash file called mydbclient.sth:
gsk8capicmd_64 -keydb -create -db "mydbclient.kdb" -pw "mydbclientpw0" -stash
The -stash option creates a stash file at the same path as the key database, with a file extension of .sth. At connect time, GSKit uses the stash file to obtain the password to the key database.


6. Update the DB2 ODBC client configuration (add the SSL-related lines shown below):
Location: /u01/app/oraodbc/db2_odbc_client/odbc_cli/clidriver/cfg
File: db2cli.ini
[DB2]
database=
protocol=TCPIP
hostname=
servicename=
security=ssl
ssl_client_keystoredb=/u01/app/oraodbc/keystore/ibmdb2db.kdb
ssl_client_keystash=/u01/app/oraodbc/keystore/ibmdb2db.sth

7. Reload Listener
lsnrctl status
lsnrctl reload


Enjoy =)

Auto-housekeep aged-out files in Unix

How do we housekeep aged-out files on a Unix platform?

For example, suppose you would like to keep only the files from the last 2 days and remove the aged-out files every hour.

1. Prepare Shell Script.
filename: auto_housekeep_file.sh
housekeep files location: /data_01


#!/bin/sh
v_date=`date +%d%m%y-%H-%M-%S`
v_host=host101
echo $v_date;
echo $v_host;
echo "executed: auto_housekeep_file.sh ";

log_file=/data_01/$v_date.$v_host.auto_housekeep_file.log
echo "Host: host101" >> $log_file
echo "Start Date/Time: " $(date) >> $log_file
echo "disk info" >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " data mount point - before remove the aged file " >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " " >> $log_file
df -k /data_01 >> $log_file
echo " " >> $log_file

echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo "List of all aged out files:" >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " " >> $log_file
find /data_01/*.* -type f -mtime +1 >> $log_file
echo " " >> $log_file
echo "Remove more than 2 days archive log backup now......" >> $log_file
echo " " >> $log_file
find /data_01/*.* -type f -mtime +1 -exec rm {} + >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo "After removed, listed all file:                 " >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " " >> $log_file
find /data_01/* -type f >> $log_file
echo " " >> $log_file
echo " " >> $log_file
echo "End Date/Time: " $(date) >> $log_file
echo "disk info" >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " data mount point - after remove the aged file  " >> $log_file
echo "++++++++++++++++++++++++++++++++++++++++++++++++" >> $log_file
echo " " >> $log_file
df -k /data_01 >> $log_file
echo " " >> $log_file
echo "End of List " >> $log_file

2. Set the cron job
For example: hourly, every day between 08:00 and 22:00
cmd> crontab -e

# auto housekeep files
# run hourly between 08:00 and 22:00 to housekeep the aged-out files
00 8-22 * * * /data_01/auto_housekeep_file.sh
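
One extra check, assuming the script path above: make sure the script is executable, and run it once manually to review the generated log before relying on the cron schedule.

chmod +x /data_01/auto_housekeep_file.sh
/data_01/auto_housekeep_file.sh
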

Enjoy! =)