Wednesday, May 28, 2014
Diagnose RAC Problems
General Comments:
The vast majority of RAC issues I've encountered have been caused by one or more of the following:
- Incorrect network configuration. Remember, the public IP addresses, VIPs and SCAN IPs must all be on the same public network. The private IPs must be on a different network to the public network. The public IPs and the private IPs must all be pingable prior to the installation.
- Incorrect shared disk configuration. The voting disk and OCR location, as well as all the database files, need to be on shared storage for RAC to function properly. Any problems with the shared disk configuration will cause RAC to fail.
- Missing prerequisites. There are a lot of prerequisites that must be completed before you can start a RAC installation. It may be tempting to miss steps out, but this will invariably cause problems. Make sure all prerequisites are met before starting the installation.
- Insufficient available resources. This is especially true of people doing virtual RAC installations. The minimum requirements of 11gR2 RAC are quite significant. Without some clever tricks to free up memory, you are going to need at least 4G RAM per node to complete a fairly basic installation. Trying to install RAC on under-specced hardware can lead to some rather unpredictable results.
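The pingability requirement above is easy to script before you start the installer. A minimal sketch follows; the host names passed in are placeholders, so substitute your own public and private node addresses.

```shell
# check_hosts -- verify that every listed address answers a single ping.
# The names used in the example call are placeholders, not real hosts.
check_hosts() {
  fail=0
  for h in "$@"; do
    if ping -c 1 "$h" > /dev/null 2>&1; then
      echo "OK   $h"
    else
      echo "FAIL $h"
      fail=1
    fi
  done
  return $fail
}

# Example run against placeholder names; replace with your node IPs.
check_hosts rac1-pub rac1-priv rac2-pub rac2-priv || echo "some addresses unreachable"
```

Run this on every node before the installation; any FAIL line must be fixed first.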
Tools for Checking:
crsctl:
Amongst other things, the crsctl command allows you to check the health of the cluster. The following command displays the top-level view of the cluster.
# ./crsctl check cluster -all
The following command gives information about the individual resources.
# ./crsctl stat res -t
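The "-t" output is wide, and a quick way to spot trouble is to filter it for non-ONLINE states. The sketch below runs against a captured sample string so it is self-contained; in practice you would pipe `crsctl stat res -t` straight into grep. The sample layout is an assumption, abbreviated from real output.

```shell
# Abbreviated sample of "crsctl stat res -t" output (assumed layout; the
# real output is wider and includes section headers).
sample='ora.DATA.dg       ONLINE  ONLINE   ol6-112-rac1
ora.LISTENER.lsnr ONLINE  ONLINE   ol6-112-rac1
ora.asm           ONLINE  OFFLINE  ol6-112-rac2'

# Count resources reporting OFFLINE; anything non-zero needs a closer look.
printf '%s\n' "$sample" | grep -c OFFLINE    # -> 1
```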
olsnodes:
Run the olsnodes command on all cluster nodes and see that it returns a list of all the nodes in each case.
# ./olsnodes
cluvfy:
You have probably run the runcluvfy.sh utility from the installation media before installing the clusterware software. Once the Oracle software is installed, the cluvfy utility is available to provide useful post-installation information. Use the "-help" flag for usage information.
$ cluvfy stage -help
$ cluvfy stage -post crsinst -n ol6-112-rac1,ol6-112-rac2
$ cluvfy stage -pre dbcfg -n ol6-112-rac1,ol6-112-rac2 -d /u01/app/oracle/product/11.2.0/db_1
RACcheck:
Oracle provides the RACcheck tool (MOS [ID 1268927.1]) to audit the configuration of RAC, CRS, ASM, GI etc. It supports database versions from 10.2 to 11.2, making it a useful starting point for most analysis. The MOS note includes the download and setup details. If you are using 11.2.0.4 or later, RACcheck is included by default.
$ unzip raccheck.zip
$ cd raccheck
$ chmod 755 raccheck
$ ./raccheck -a
Tuesday, May 20, 2014
Oracle GoldenGate using a data pump.
SOURCE:
GGSCI (db01.hieupham) 1> edit params ./GLOBALS

GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB

GGSCI (db01.hieupham) 43> edit params mgr

PORT 7809
USERID ggs_owner, PASSWORD ggs_owner
PURGEOLDEXTRACTS /tmp/rt, USECHECKPOINTS

GGSCI (db01.hieupham) 43> start mgr
GGSCI (db01.hieupham) 43> info manager
GGSCI (db01.hieupham) 16> dblogin userid ggs_owner, password ggs_owner
GGSCI (db01.hieupham) 16> ADD CHECKPOINTTABLE CHKPTAB
GGSCI (db01.hieupham) 42> edit params HIEUPV

EXTRACT HIEUPV
USERID ggs_owner, PASSWORD ggs_owner
EXTTRAIL /tmp/rt
TABLE hieupv.productskey;

GGSCI (db01.hieupham) 42> ADD EXTRACT HIEUPV, TRANLOG, BEGIN NOW
GGSCI (db01.hieupham) 42> ADD EXTTRAIL /tmp/rt, EXTRACT HIEUPV
GGSCI (db01.hieupham) 42> START EXTRACT HIEUPV
GGSCI (db01.hieupham) 42> EDIT PARAMS PUMP

EXTRACT PUMP
PASSTHRU
RMTHOST db02, MGRPORT 7809
RMTTRAIL /tmp/rt
TABLE hieupv.productskey;

GGSCI (db01.hieupham) 42> ADD EXTRACT PUMP, EXTTRAILSOURCE /tmp/rt
GGSCI (db01.hieupham) 42> ADD RMTTRAIL /tmp/rt, EXTRACT PUMP
GGSCI (db01.hieupham) 42> START EXTRACT PUMP
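Before starting the pump it is worth confirming that the target manager is actually reachable on its MGRPORT. A minimal sketch, assuming a bash shell (it relies on bash's built-in /dev/tcp); the host db02 and port 7809 come from the RMTHOST/MGRPORT parameters above.

```shell
# port_open HOST PORT -> prints "open" or "closed" depending on whether a
# TCP connection to HOST:PORT succeeds (uses bash's /dev/tcp redirection;
# on shells without it the probe simply reports "closed").
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Probe the target manager from the source host:
port_open db02 7809
```

If this prints "closed", fix the network or the manager on db02 before starting the pump; inside GGSCI, `info extract PUMP, detail` then shows whether trail data is being shipped.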
TARGET:
GGSCI (db02.hieupham) 1> edit params ./GLOBALS

GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB

GGSCI (db02.hieupham) 17> edit params mgr

PORT 7809
USERID ggs_owner, PASSWORD ggs_owner

GGSCI (db02.hieupham) 16> dblogin userid ggs_owner, password ggs_owner
GGSCI (db02.hieupham) 16> ADD CHECKPOINTTABLE CHKPTAB
GGSCI (db02.hieupham) 16> EDIT PARAMS HIEUPV

REPLICAT HIEUPV
USERID ggs_owner, PASSWORD ggs_owner
ASSUMETARGETDEFS
DISCARDFILE /tmp/reportd.dsc, APPEND
MAP hieupv.productskey, TARGET hieupv.productskey;

GGSCI (db02.hieupham) 16> ADD REPLICAT HIEUPV, EXTTRAIL /tmp/rt
GGSCI (db02.hieupham) 16> START REPLICAT HIEUPV
GGSCI (db02.hieupham) 16> info all
----------- ohhhh, it runs smoothly! -----------
Sunday, May 11, 2014
FTP vs SCP
FTP is usually faster than SCP because it adds no encryption overhead, but for the same reason it is not secure: user names, passwords and file contents all cross the network in plain text, while SCP encrypts everything over SSH.
FTP COMMANDS
# cd local_dir
# ftp remote_host
ftp> cd remote_dir
ftp> binary
ftp> prompt
ftp> mput * (upload everything from local_dir to remote_dir)
ftp> mget * (download everything from remote_dir to local_dir)
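For comparison, the same transfer over scp is a single command and is encrypted end to end. The host and paths below are placeholders; the runnable lines use scp's local-copy form (no host: part) so the syntax can be tried without a remote server, with a cp fallback in case scp is not installed.

```shell
# Push a whole directory to the remote machine (placeholder host/paths):
#   scp -r local_dir user@remote_host:/path/to/remote_dir
# Pull it back:
#   scp -r user@remote_host:/path/to/remote_dir local_dir

# With no "host:" part, scp behaves like a local copy -- handy for a dry run:
mkdir -p src_dir && echo "hello" > src_dir/f.txt
scp -r src_dir dst_dir 2>/dev/null || cp -r src_dir dst_dir  # cp fallback if scp is absent
cat dst_dir/f.txt   # prints "hello"
```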