Database and Cloud World

Live the life you love. Love the life you live

Oracle Export/Import COMPRESS


Import (IMP) and Export (EXP) are among the oldest surviving Oracle tools. They are command-line utilities used to extract tables, schemas, or entire database definitions from one Oracle instance so they can be imported into another instance or schema.

COMPRESS=Y – Despite its name, this EXP parameter does not compress the contents of the exported data. It controls how the STORAGE clause for exported objects is generated. If left as Y (the default), the storage clause for each object will have an INITIAL extent equal to the sum of its current extents. That is, EXP generates a CREATE statement that attempts to fit the object into one single extent.
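The arithmetic is easy to check against the demo that follows: table T occupies 768 blocks, which (assuming the common 8 KB database block size; check db_block_size on your system) is exactly the 6291456 bytes reported by USER_EXTENTS, and exactly the INITIAL value that COMPRESS=Y writes into the generated DDL:

```python
# Sanity check of the demo numbers below.
blocks = 768          # sum(blocks) from user_extents
block_size = 8192     # assumed 8 KB database block size
segment_bytes = blocks * block_size
print(segment_bytes)  # 6291456 -> matches sum(bytes) and the INITIAL generated by COMPRESS=Y
```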

idle> connect scott/oracle
Connected.
scott@10G> drop table t purge;

Table dropped.

Elapsed: 00:00:02.51
scott@10G> create table t as select * from all_objects;

Table created.

Elapsed: 00:00:10.13
scott@10G> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name = 'T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.11
scott@10G> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

D:>exp userid=scott/oracle file=d:\t.dmp log=d:\t_log.txt tables=T COMPRESS=Y

Export: Release 10.2.0.3.0 – Production on Tue Feb 16 23:51:52 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 – Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path …
. . exporting table T 51353 rows exported
Export terminated successfully without warnings.

D:>imp userid=scott/tiger@10GR2 file=d:\t.dmp log=d:\imp_log.txt fromuser=scott touser=scott

Import: Release 10.2.0.3.0 – Production on Tue Feb 16 23:53:07 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table "T" 51353 rows imported
Import terminated successfully without warnings.

scott@10GR2> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name = 'T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.79

scott@10GR2> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 6291456 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

Elapsed: 00:00:01.23

D:>exp userid=scott/oracle file=d:\t.dmp log=d:\t_log.txt tables=T COMPRESS=N

Export: Release 10.2.0.3.0 – Production on Wed Feb 17 00:03:19 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 – Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path …
. . exporting table T 51353 rows exported
Export terminated successfully without warnings.

D:>imp userid=scott/tiger@10GR2 file=d:\t.dmp fromuser=scott touser=scott log=d:\imp_log.txt

Import: Release 10.2.0.3.0 – Production on Wed Feb 17 00:06:16 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table "T" 51353 rows imported
Import terminated successfully without warnings.

scott@10GR2> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name = 'T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.71

scott@10GR2> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

May 14, 2020 | 12c Database

Oracle Performance Tuning views


alter session set sql_trace=TRUE
alter system set timed_statistics=TRUE
SELECT * FROM V$PROCESS
select username,addr,spid,program,terminal,traceid from v$process
SELECT * FROM V$SYSTEM_EVENT
select event,total_waits from v$system_event
SELECT * FROM V$ROWCACHE
SELECT * FROM V$STATNAME

select st.name,se.sid,se.statistic#,sy.username from v$statname st,v$sesstat se,v$session sy where st.statistic#=se.statistic# and se.sid=sy.sid

SELECT * FROM V$SESSTAT
SELECT * FROM V$STATNAME

select event,wait_time,state from v$session_wait
select event,state from v$session_wait
select event from v$session_wait where wait_time=0
SELECT * FROM V$DATABASE
SELECT * FROM V$LIBRARYCACHE

select gets,pins,reloads,namespace,dlm_invalidations from v$librarycache

SELECT * FROM V$SQL
SELECT * FROM V$SQLAREA
select sql_text,users_executing,executions,loads from v$sqlarea

select sum(pins) “exec”,sum(reloads) “miss”,sum(reloads)/sum(pins) from v$librarycache
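As a rough illustration of how to read the ratio this query returns (the pin and reload counts below are hypothetical, and the sub-1% target is a commonly cited rule of thumb, not an Oracle-documented threshold):

```python
pins, reloads = 10000, 50     # hypothetical values from v$librarycache
miss_ratio = reloads / pins   # same expression as sum(reloads)/sum(pins) in the query
print(miss_ratio)             # 0.005 -> 0.5% of pins required a reload
assert miss_ratio < 0.01      # rule of thumb: keep reloads below ~1% of pins
```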

select gets,pins,reloads,namespace,gethitratio,invalidations from v$librarycache

select count(*) from lokesh.emp
select sum(pins) “exec”,sum(reloads) “miss”,sum(reloads)/sum(pins) from v$librarycache
analyze table lokesh.emp compute statistics
select gets,pins,reloads,namespace,gethitratio,invalidations from v$librarycache

SELECT * FROM V$DB_OBJECT_CACHE
select owner,name,db_link,namespace from v$db_object_cache
select sum(sharable_mem) from v$db_object_cache

SELECT * FROM V$SHARED_POOL_RESERVED

select * from v$db_object_cache where sharable_mem > 10000 and (type like 'PACKAGE%' or type = 'FUNCTION' or type = 'PROCEDURE') and kept = 'NO'

alter system flush shared_pool
select sql_text from v$sql

select sum(value) || ' bytes' "tot sess mem" from v$mystat, v$statname where name = 'session uga memory' and v$mystat.statistic# = v$statname.statistic#

Latches are of two types:

Willing to wait: the process waits, then requests the latch again, and repeats this until the latch becomes available.

Immediate: the process does not wait; if the latch is unavailable, it continues processing other instructions.

desc v$latch

May 14, 2020 | Scripts

Kubernetes PODS creation


A note about creating pods using kubectl run.

You can create pods from the command line in either of the following two ways:

  1. Create an NGINX Pod (using --generator)

kubectl run --generator=run-pod/v1 nginx --image=nginx

  2. Create an NGINX Pod (using --restart=Never)

kubectl run nginx --image=nginx --restart=Never

If you run the kubectl run command without --restart=Never or --generator=run-pod/v1, the command will create a Deployment instead (as of version 1.16).

Note that this way of creating a deployment is deprecated and should not be used. Instead, use the kubectl create command to create a deployment:

kubectl create deployment nginx --image=nginx

Kubernetes Concepts – https://kubernetes.io/docs/concepts/

Pod Overview- https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/

May 14, 2020 | Kubernetes

Dropping Index in Oracle


declare
  -- build a DROP statement for every index owned by the given schema
  cursor cursor1 is
    select 'drop ' || object_type || ' ' || owner || '.' || object_name as query_txt
    from dba_objects
    where object_type in ('INDEX')
    and owner in ('USERNAME')  -- schema names are stored in uppercase
    order by object_type, object_name;
begin
  for rec in cursor1 loop
    begin
      execute immediate rec.query_txt;
    exception when others then
      null;  -- ignore failures (e.g. indexes enforcing constraints) and continue
    end;
  end loop;
end;
/

May 14, 2020 | 12c Database

Docker private registry


docker run -d -p 5000:5000 --name=registry registry:2
docker images
docker image tag my-image localhost:5000/mysql
docker image tag mysql localhost:5000/mysql
docker ps -a
docker image tag mysql-db localhost:5000/mysql
docker image tag mysql:5.6 localhost:5000/mysql
docker push localhost:5000/mysql
docker pull localhost:5000/mysql

May 14, 2020 | Docker

Docker Container Logging


Create a container using syslog.

Enable and start the Docker service.

sudo systemctl enable docker
sudo systemctl start docker
Create a container called syslog-logging using the httpd image.

docker container run -d --name syslog-logging httpd

Create a container using a JSON file.

Create a container that uses the JSON file for logging.

docker container run -d --name json-logging --log-driver json-file httpd

Verify that the syslog-logging container is sending its logs to syslog.

Make sure that the syslog-logging container is logging to syslog by checking the message log file:

tail /var/log/messages

Verify that the json-logging container is sending its logs to the JSON file.

Execute docker logs for the json-logging container.

docker logs json-logging

May 14, 2020 | Uncategorized

Oracle Disk issue reserve policy


We have often seen database instance issues after the Unix team performs an EMC PowerPath update. Please find below the list of preventive actions that need to be followed before an EMC PowerPath upgrade.

Preventative Actions:

  • The Oracle Clusterware will be disabled prior to starting O/S system maintenance that involves reboots – DBA.
  • Prior to restarting the Oracle Clusterware after EMC Power Path software updates the following sequence will be implemented – Unix:

Restart the server

Check that disk inventory, ownership, permissions and sharing attributes (reserve_policy) match the original values

May 14, 2020 | Uncategorized

ASM kfed utility


Kfed parameters

  • aun – Allocation Unit (AU) number to read from. Default is AU0, or the very beginning of the ASM disk.
  • aus – AU size. Default is 1048576 (1MB). Specify the aus when reading from a disk group with non-default AU size.
  • blkn – block number to read. Default is block 0, or the very first block of the AU.
  • dev – ASM disk or device name. Note that the keyword dev can be omitted, but the ASM disk name is mandatory.

Understanding ASM disk layout

Read ASM disk header block from  AU[0]

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD   <– ASM disk header

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9          <– ASM disk name

kfdhdb.grpname:                    TEST ; 0x048: length=4          <– ASM DG name

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9          <– ASM Failgroup

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200            <– Disk sector size   

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000            <– ASM block size

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000        <– AU size : 1 Mbyte

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff        <– ASM disk size : 1 GByte  

Check ASM block types for the first 2 AUs

AU[0] :

[root@grac41 Desktop]# kfed find /dev/asm_test_1G_disk1

Block 0 has type 1

Block 1 has type 2

Block 2 has type 3

Block 3 has type 3

Block 4 has type 3

Block 5 has type 3

Block 6 has type 3

Block 7 has type 3

Block 8 has type 3

Block 9 has type 3

Block 10 has type 3

..

Block 252 has type 3

Block 253 has type 3

Block 254 has type 3

Block 255 has type 3

AU[1] :

[root@grac41 Desktop]#  kfed find /dev/asm_test_1G_disk1 aun=1

Block 256 has type 17

Block 257 has type 17

Block 258 has type 13

Block 259 has type 18

Block 260 has type 13

..

Block 508 has type 13

Block 509 has type 13

Block 510 has type 1

Block 511 has type 19

Summary:

–> Disk header size is 512 bytes

    AU size = 1 MByte and ASM block size = 4096, so one AU holds 1048576 / 4096 = 256 blocks (AU[0] spans blocks 0 - 255)

    Block 0 and block 510 both store an ASM disk header ( == type 1 ); block 510 is the backup copy
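Every block number used in this post follows mechanically from the two header fields (kfdhdb.ausize, kfdhdb.blksize) read earlier:

```python
# Derive the kfed/dd block numbers from the ASM disk header values.
au_size = 1048576                 # kfdhdb.ausize
block_size = 4096                 # kfdhdb.blksize
blocks_per_au = au_size // block_size
print(blocks_per_au)              # 256 -> AU[0] = blocks 0-255, AU[1] = blocks 256-511

backup_blkn = blocks_per_au - 2   # 254: offset within AU[1], as in: kfed read ... aun=1 blkn=254
absolute_block = blocks_per_au + backup_blkn
print(absolute_block)             # 510: absolute block number, as in: dd ... seek=510
```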

Run the kfed command below if you are interested in a certain ASM block type (use the output from kfed find to get the type info):

[root@grac41 Desktop]#  kfed read  /dev/asm_test_1G_disk1 aun=1 blkn=255  | egrep 'type'

kfbh.type:                           19 ; 0x002: KFBTYP_HBEAT

Some ASM block types

[root@grac41 Desktop]# kfed read  /dev/asm_test_1G_disk1 aun=0 blkn=0  | egrep 'type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.type:                            2 ; 0x002: KFBTYP_FREESPC

kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL

kfbh.type:                            5 ; 0x002: KFBTYP_LISTHEAD

kfbh.type:                           13 ; 0x002: KFBTYP_PST_NONE

kfbh.type:                           18 ; 0x002: KFBTYP_PST_DTA

kfbh.type:                           19 ; 0x002: KFBTYP_HBEAT

Repair ASM disk header block in AU[0] with kfed repair

  • In ASM versions 11.1.0.7 and later, the ASM disk header block is backed up in the second-to-last ASM metadata block of allocation unit 1.

Verify ASM DISK Header block located in  AU[0] and AU[1]

AU[0] :

[root@grac41 Desktop]# kfed read  /dev/asm_test_1G_disk1 aun=0 blkn=0 | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

AU[1] :

[root@grac41 Desktop]# kfed read  /dev/asm_test_1G_disk1 aun=1 blkn=254  | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

Erase Disk header block in first AU ( aun=0 blkn=0 )

# dd if=/dev/zero of=/dev/asm_test_1G_disk1  bs=4096 count=1

Verify ASM disk header

# kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

–> Corrupted ASM disk header detected in AU [0]

Repair disk header in AU[0] with kfed

[grid@grac41 ASM]$ kfed repair  /dev/asm_test_1G_disk1

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

–> kfed repair worked – Disk header restored

Can kfed repair the disk header block stored in the second AU?

Delete  Disk header block in AU[1]

First use dd to figure out whether we are getting the correct block

[grid@grac41 ASM]$  dd if=/dev/asm_test_1G_disk1 of=block1 bs=4096 count=1 skip=510 ; strings block1

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.000464628 s, 8.8 MB/s

ORCLDISK

TEST_0000

TEST

TEST_0000

–> looks like an ASM disk header – go ahead and erase that block

[grid@grac41 ASM]$  dd if=/dev/zero of=/dev/asm_test_1G_disk1  bs=4096 count=1  seek=510

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.00644028 s, 636 kB/s

Verify ASM disk header block in AU[1]

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=1 blkn=254

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

–> Corrupted ASM disk header detected

[grid@grac41 ASM]$ kfed repair  /dev/asm_test_1G_disk1

KFED-00320: Invalid block num1 = [0], num2 = [1], error = [endian_kfbh]

–> kfed repair doesn't work

Repair block with dd

[grid@grac41 ASM]$ dd if=/dev/asm_test_1G_disk1 of=/dev/asm_test_1G_disk1 bs=4096 count=1 seek=510

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.0306682 s, 134 kB/s

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

# kfed read /dev/asm_test_1G_disk1 aun=1 blkn=254

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

Summary:

To fix the backup block (the copy of the ASM disk header in AU[1]), you need to use dd; kfed repair only restores the primary header in AU[0].

May 14, 2020 | ASM

Understanding Oracle Architecture


Interacting with an Oracle Database
The following example describes Oracle database operations at the most basic level. It illustrates an
Oracle database configuration in which the user and associated server process are on separate
computers, connected through a network.

  1. An instance has started on a node where Oracle Database is installed, often called the host or
    database server.
  2. A user starts an application spawning a user process. The application attempts to establish a
    connection to the server. (The connection may be local, client/server, or a three-tier connection
    from a middle tier.)
  3. The server runs a listener that has the appropriate Oracle Net Services handler. The server
    detects the connection request from the application and creates a dedicated server process on
    behalf of the user process.
  4. The user runs a DML-type SQL statement and commits the transaction. For example, the user
    changes the address of a customer in a table and commits the change.
  5. The server process receives the statement and checks the shared pool (an SGA component) for
    any shared SQL area that contains a similar SQL statement. If a shared SQL area is found, the
    server process checks the user’s access privileges to the requested data, and the existing shared
    SQL area is used to process the statement. If a shared SQL area is not found, a new shared SQL
    area is allocated for the statement so that it can be parsed and processed.
  6. The server process retrieves any necessary data values, either from the actual data file (table) or
    from values stored in the SGA.
  7. The server process modifies data in the SGA. Because the transaction is committed, the
    LogWriter process (LGWR) immediately records the transaction in the redo log file. The
    Database Writer process (DBWn) writes modified blocks permanently to disk when it is
    efficient to do so.
  8. If the transaction is successful, the server process sends a message across the network to the
    application. If it is not successful, an error message is transmitted.
  9. Throughout this entire procedure, the other background processes run, watching for conditions
    that require intervention. In addition, the database server manages other users’ transactions and
    prevents contention between transactions that request the same data.

May 14, 2020 | Uncategorized

Useful Sar (Sysstat) Examples for UNIX / Linux /Oracle Performance Monitoring


 Useful Sar (Sysstat) Examples for UNIX / Linux Performance Monitoring

Using sar you can monitor performance of various Linux subsystems (CPU, Memory, I/O..) in real time.

Using sar, you can also collect all performance data on an on-going basis, store them, and do historical analysis to identify bottlenecks.

Sar is part of the sysstat package.

This article explains how to install and configure sysstat package (which contains sar utility) and explains how to monitor the following Linux performance statistics using sar.

  1. Collective CPU usage
  2. Individual CPU statistics
  3. Memory used and available
  4. Swap space used and available
  5. Overall I/O activities of the system
  6. Individual device I/O activities
  7. Context switch statistics
  8. Run queue and load average data
  9. Network statistics
  10. Report sar data from a specific time

This is the only guide you'll need for the sar utility, so bookmark it for future reference.

I. Install and Configure Sysstat

Install Sysstat Package

First, make sure the latest version of sar is available on your system. Install it using any one of the following methods depending on your distribution.

sudo apt-get install sysstat

(or)

yum install sysstat

(or)

rpm -ivh sysstat-10.0.0-1.i586.rpm

Install Sysstat from Source

Download the latest version from sysstat download page.

You can also use wget to download it:

wget http://pagesperso-orange.fr/sebastien.godard/sysstat-10.0.0.tar.bz2

tar xvfj sysstat-10.0.0.tar.bz2

cd sysstat-10.0.0

./configure --enable-install-cron

Note: Make sure to pass the option --enable-install-cron. This does the following automatically for you. If you don't configure sysstat with this option, you have to do this ugly job yourself manually.

  • Creates /etc/rc.d/init.d/sysstat
  • Creates appropriate links from /etc/rc.d/rc*.d/ directories to /etc/rc.d/init.d/sysstat to start the sysstat automatically during Linux boot process.
  • For example, /etc/rc.d/rc3.d/S01sysstat is linked automatically to /etc/rc.d/init.d/sysstat

After the ./configure, install it as shown below.

make

make install

Note: This will install sar and other systat utilities under /usr/local/bin

Once installed, verify the sar version using “sar -V”. Version 10 is the current stable version of sysstat.

$ sar -V

sysstat version 10.0.0

(C) Sebastien Godard (sysstat  orange.fr)

Finally, make sure sar works. For example, the following gives the system CPU statistics 3 times (with 1 second interval).

$ sar 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:27:32 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle

01:27:33 PM       all      0.00      0.00      0.00      0.00      0.00    100.00

01:27:34 PM       all      0.25      0.00      0.25      0.00      0.00     99.50

01:27:35 PM       all      0.75      0.00      0.25      0.00      0.00     99.00

Average:          all      0.33      0.00      0.17      0.00      0.00     99.50

Utilities part of Sysstat

Following are the other sysstat utilities.

  • sar collects and displays ALL system activities statistics.
  • sadc stands for “system activity data collector”. This is the sar backend tool that does the data collection.
  • sa1 stores system activities in binary data file. sa1 depends on sadc for this purpose. sa1 runs from cron.
  • sa2 creates daily summary of the collected statistics. sa2 runs from cron.
  • sadf can generate sar report in CSV, XML, and various other formats. Use this to integrate sar data with other tools.
  • iostat generates CPU, I/O statistics
  • mpstat displays CPU statistics.
  • pidstat reports statistics based on the process id (PID)
  • nfsiostat displays NFS I/O statistics.
  • cifsiostat generates CIFS statistics.

This article focuses on sysstat fundamentals and sar utility.

Collect the sar statistics using cron job – sa1 and sa2

Create sysstat file under /etc/cron.d directory that will collect the historical sar data.

# vi /etc/cron.d/sysstat

*/10 * * * * root /usr/local/lib/sa/sa1 1 1

53 23 * * * root /usr/local/lib/sa/sa2 -A

If you’ve installed sysstat from source, the default location of sa1 and sa2 is /usr/local/lib/sa. If you’ve installed using your distribution update method (for example: yum, up2date, or apt-get), this might be /usr/lib/sa/sa1 and /usr/lib/sa/sa2.

Note: To understand cron entries, read Linux Crontab: 15 Awesome Cron Job Examples.

/usr/local/lib/sa/sa1

  • This runs every 10 minutes and collects sar data for historical reference.
  • If you want to collect sar statistics every 5 minutes, change */10 to */5 in the above /etc/cron.d/sysstat file.
  • This writes the data to /var/log/sa/saXX file. XX is the day of the month. saXX file is a binary file. You cannot view its content by opening it in a text editor.
  • For example, If today is 26th day of the month, sa1 writes the sar data to /var/log/sa/sa26
  • You can pass two parameters to sa1: interval (in seconds) and count.
  • In the above crontab example: sa1 1 1 means that sa1 collects sar data 1 time with 1 second interval (for every 10 mins).

/usr/local/lib/sa/sa2

  • This runs close to midnight (at 23:53) to create the daily summary report of the sar data.
  • sa2 creates the /var/log/sa/sarXX file (note that this is different from the saXX file created by sa1). The sarXX file created by sa2 is an ASCII file that you can view in a text editor.
  • This will also remove saXX files that are older than a week. So, write a quick shell script that runs every week to copy the /var/log/sa/* files to some other directory to do historical sar data analysis.
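A minimal sketch of that weekly copy job, written here in Python (the destination directory /var/log/sa-archive is an assumption; any path outside /var/log/sa works):

```python
# Copy sar data files out of /var/log/sa before sa2's weekly cleanup removes them.
import datetime, glob, os, shutil

def archive_sa(src="/var/log/sa", dst_root="/var/log/sa-archive"):
    """Copy all saXX/sarXX files into a dated archive directory and return its path."""
    dst = os.path.join(dst_root, datetime.date.today().isoformat())
    os.makedirs(dst, exist_ok=True)
    for path in glob.glob(os.path.join(src, "sa*")):
        if os.path.isfile(path):
            shutil.copy(path, dst)
    return dst
```

Run it from a weekly cron entry, for example `0 1 * * 0 root python3 /usr/local/bin/archive_sa.py` (path hypothetical).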

II. 10 Practical Sar Usage Examples

There are two ways to invoke sar.

  1. sar followed by an option (without specifying a saXX data file). This will look for the current day’s saXX data file and report the performance data that was recorded until that point for the current day.
  2. sar followed by an option, and additionally specifying a saXX data file using -f option. This will report the performance data for that particular day. i.e XX is the day of the month.

In all the examples below, we are going to explain how to view certain performance data for the current day. To look for a specific day, add “-f /var/log/sa/saXX” at the end of the sar command.
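Since saXX is just the zero-padded day of the month, the data file path for a given day can be computed directly (matching the sa26 example above):

```python
day = 26  # day of the month
datafile = f"/var/log/sa/sa{day:02d}"
print(datafile)  # /var/log/sa/sa26
```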

All sar commands will have the following as the first line of their output.

$ sar -u

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

  • Linux 2.6.18-194.el5PAE – Linux kernel version of the system.
  • (dev-db) – The hostname where the sar data was collected.
  • 03/26/2011 – The date when the sar data was collected.
  • _i686_ – The system architecture
  • (8 CPU) – Number of CPUs available on this system. On multi core systems, this indicates the total number of cores.

1. CPU Usage of ALL CPUs (sar -u)

This gives the cumulative real-time CPU usage of all CPUs. "1 3" reports every 1 second, a total of 3 times. Most likely you'll focus on the last field "%idle" to see the CPU load.

$ sar -u 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:27:32 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle

01:27:33 PM       all      0.00      0.00      0.00      0.00      0.00    100.00

01:27:34 PM       all      0.25      0.00      0.25      0.00      0.00     99.50

01:27:35 PM       all      0.75      0.00      0.25      0.00      0.00     99.00

Average:          all      0.33      0.00      0.17      0.00      0.00     99.50
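The Average row sar prints is simply the arithmetic mean of the interval samples, which is easy to verify from the output above:

```python
user   = [0.00, 0.25, 0.75]      # %user samples from the three intervals
system = [0.00, 0.25, 0.25]      # %system samples
idle   = [100.00, 99.50, 99.00]  # %idle samples

mean = lambda xs: round(sum(xs) / len(xs), 2)
print(mean(user), mean(system), mean(idle))  # 0.33 0.17 99.5 -> matches the Average row
```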

Following are few variations:

  • sar -u Displays CPU usage for the current day that was collected until that point.
  • sar -u 1 3 Displays real time CPU usage every 1 second for 3 times.
  • sar -u ALL Same as "sar -u" but displays additional fields.
  • sar -u ALL 1 3 Same as "sar -u 1 3" but displays additional fields.
  • sar -u -f /var/log/sa/sa10 Displays CPU usage for the 10th day of the month from the sa10 file.

2. CPU Usage of Individual CPU or Core (sar -P)

If you have 4 Cores on the machine and would like to see what the individual cores are doing, do the following.

"-P ALL" indicates that it should display statistics for ALL the individual cores.

In the following example under “CPU” column 0, 1, 2, and 3 indicates the corresponding CPU core numbers.

$ sar -P ALL 1 1

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:34:12 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle

01:34:13 PM       all     11.69      0.00      4.71      0.69      0.00     82.90

01:34:13 PM         0     35.00      0.00      6.00      0.00      0.00     59.00

01:34:13 PM         1     22.00      0.00      5.00      0.00      0.00     73.00

01:34:13 PM         2      3.00      0.00      1.00      0.00      0.00     96.00

01:34:13 PM         3      0.00      0.00      0.00      0.00      0.00    100.00

"-P 1" indicates that it should display statistics only for the 2nd core. (Note that core numbering starts from 0.)

$ sar -P 1 1 1

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:36:25 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle

01:36:26 PM         1      8.08      0.00      2.02      1.01      0.00     88.89

The following are a few variations:

  • sar -P ALL – Displays CPU usage broken down by all cores for the current day.
  • sar -P ALL 1 3 – Displays real-time CPU usage for ALL cores every 1 second, 3 times (broken down by core).
  • sar -P 1 – Displays CPU usage for core number 1 for the current day.
  • sar -P 1 1 3 – Displays real-time CPU usage for core number 1, every 1 second, 3 times.
  • sar -P ALL -f /var/log/sa/sa10 – Displays CPU usage broken down by all cores for the 10th day of the month from the sa10 file.
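To quickly spot the busiest core in one interval, you can sort a captured "sar -P ALL" snapshot on the %user column. This is a hypothetical sketch; the sample rows and field positions (core id in field 3, %user in field 4) follow the output layout shown above:

```shell
# Pick the core with the highest %user from one "sar -P ALL" interval.
# Sample data imitates the per-core rows shown above.
sar_p='01:34:13 PM         0     35.00      0.00      6.00
01:34:13 PM         1     22.00      0.00      5.00
01:34:13 PM         2      3.00      0.00      1.00'

echo "$sar_p" | sort -k4 -rn | head -1 | awk '{ print "core", $3, $4 }'
```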

3. Memory Free and Used (sar -r)

This reports memory statistics. "1 3" reports every 1 second, a total of 3 times. Most likely you'll focus on "kbmemfree" and "kbmemused" for free and used memory.

$ sar -r 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:28:06 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact

07:28:07 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204

07:28:08 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204

07:28:09 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204

Average:      6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204

The following are a few variations:

  • sar -r
  • sar -r 1 3
  • sar -r -f /var/log/sa/sa10
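The %memused column is simply kbmemused as a percentage of total memory. As a sketch (field positions assumed from the layout above: kbmemfree is field 3, kbmemused field 4, and the sample line is copied from the output shown), you can recompute it yourself:

```shell
# Recompute %memused from kbmemfree (field 3) and kbmemused (field 4)
# of a "sar -r" data row. Sample line taken from the output above.
mem_line='07:28:07 AM   6209248   2097432     25.25    189024   1796544'

echo "$mem_line" | awk '{ printf "%.2f\n", $4 / ($3 + $4) * 100 }'
```

The result matches the 25.25 that sar itself reported for that sample.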

4. Swap Space Used (sar -S)

This reports swap statistics. "1 3" reports every 1 second, a total of 3 times. If "kbswpused" and "%swpused" are at 0, your system is not swapping.

$ sar -S 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:31:06 AM kbswpfree kbswpused  %swpused  kbswpcad   %swpcad

07:31:07 AM   8385920         0      0.00         0      0.00

07:31:08 AM   8385920         0      0.00         0      0.00

07:31:09 AM   8385920         0      0.00         0      0.00

Average:      8385920         0      0.00         0      0.00

The following are a few variations:

  • sar -S
  • sar -S 1 3
  • sar -S -f /var/log/sa/sa10
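The "is the system swapping?" check described above can be scripted. A minimal sketch, assuming the field layout shown in the output above (%swpused is field 5 of a data row; the sample line is copied from that output):

```shell
# Report whether a "sar -S" sample shows swap in use, based on the
# %swpused column (field 5). Sample line taken from the output above.
swap_line='07:31:07 AM   8385920         0      0.00         0      0.00'

echo "$swap_line" | awk '{ print ($5 > 0 ? "swapping" : "not swapping") }'
```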

Notes:

  • Use "sar -R" to identify the number of memory pages freed, used, and cached per second by the system.
  • Use "sar -H" to identify the hugepages (in KB) that are used and available.
  • Use "sar -B" to generate paging statistics, i.e., the number of KB paged in (and out) from disk per second.
  • Use "sar -W" to generate page swap statistics, i.e., pages swapped in (and out) per second.

5. Overall I/O Activities (sar -b)

This reports I/O statistics. "1 3" reports every 1 second, a total of 3 times.

The following fields are displayed in the example below:

  • tps – Transfers (I/O requests) per second; this includes both reads and writes
  • rtps – Read transfers per second
  • wtps – Write transfers per second
  • bread/s – Blocks read per second
  • bwrtn/s – Blocks written per second

$ sar -b 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:56:28 PM       tps      rtps      wtps   bread/s   bwrtn/s

01:56:29 PM    346.00    264.00     82.00   2208.00    768.00

01:56:30 PM    100.00     36.00     64.00    304.00    816.00

01:56:31 PM    282.83     32.32    250.51    258.59   2537.37

Average:       242.81    111.04    131.77    925.75   1369.90

The following are a few variations:

  • sar -b
  • sar -b 1 3
  • sar -b -f /var/log/sa/sa10

Note: Use "sar -v" to display the number of inode handlers, file handles, and pseudo-terminals used by the system.

6. Individual Block Device I/O Activities (sar -d)

To identify the activities of individual block devices (i.e., a specific mount point, LUN, or partition), use "sar -d".

$ sar -d 1 1

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util

01:59:46 PM    dev8-0      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10

01:59:46 PM    dev8-1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10

01:59:46 PM dev120-64      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62

01:59:46 PM dev120-65      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62

01:59:46 PM  dev120-0      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30

01:59:46 PM  dev120-1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30

01:59:46 PM dev120-96      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

01:59:46 PM dev120-97      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

In the above example “DEV” indicates the specific block device.

For example, "dev53-1" means a block device with 53 as the major number and 1 as the minor number.

The device name (DEV column) can display the actual device name (for example: sda, sda1, sdb1, etc.) if you use the -p (pretty-print) option as shown below.

$ sar -p -d 1 1

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util

01:59:46 PM       sda      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10

01:59:46 PM      sda1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10

01:59:46 PM      sdb1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62

01:59:46 PM      sdc1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62

01:59:46 PM      sde1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30

01:59:46 PM      sdf1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30

01:59:46 PM      sda2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

01:59:46 PM      sdb2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

The following are a few variations:

  • sar -d
  • sar -d 1 3
  • sar -d -f /var/log/sa/sa10
  • sar -p -d
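When many devices are listed, a quick sort on %util shows which one is the bottleneck. This is a sketch only; it assumes the 11-field layout of the "sar -p -d" data rows above (device name in field 3, %util in the last field), and the sample rows are abridged from that output:

```shell
# Find the busiest block device in one "sar -p -d" interval by sorting
# on %util (field 11). Sample rows abridged from the output above.
sar_d='01:59:46 PM       sda      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM      sdb1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM      sde1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30'

echo "$sar_d" | sort -k11 -rn | head -1 | awk '{ print $3, $NF }'
```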

7. Display context switch per second (sar -w)

This reports the total number of processes created per second and the total number of context switches per second. "1 3" reports every 1 second, a total of 3 times.

$ sar -w 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

08:32:24 AM    proc/s   cswch/s

08:32:25 AM      3.00     53.00

08:32:26 AM      4.00     61.39

08:32:27 AM      2.00     57.00

The following are a few variations:

  • sar -w
  • sar -w 1 3
  • sar -w -f /var/log/sa/sa10
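Note that unlike "sar -u", the example above shows no Average line; you can compute one yourself. A sketch, assuming cswch/s is the last field of each data row (the sample rows are copied from the output above):

```shell
# Average the context-switch rate (cswch/s, last field) across the
# captured "sar -w" samples. Sample rows taken from the output above.
sar_w='08:32:25 AM      3.00     53.00
08:32:26 AM      4.00     61.39
08:32:27 AM      2.00     57.00'

echo "$sar_w" | awk '{ sum += $NF; n++ } END { printf "%.2f\n", sum / n }'
```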

8. Reports run queue and load average (sar -q)

This reports the run-queue size and the load averages over the last 1, 5, and 15 minutes. "1 3" reports every 1 second, a total of 3 times.

$ sar -q 1 3

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

06:28:53 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked

06:28:54 AM         0       230      2.00      3.00      5.00         0

06:28:55 AM         2       210      2.01      3.15      5.15         0

06:28:56 AM         2       230      2.12      3.12      5.12         0

Average:            3       230      3.12      3.12      5.12         0

Note: The "blocked" column displays the number of tasks that are currently blocked, waiting for an I/O operation to complete.

The following are a few variations:

  • sar -q
  • sar -q 1 3
  • sar -q -f /var/log/sa/sa10
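When reviewing a day of historical "sar -q" data, the peak load is often what you want. As a sketch (assuming ldavg-1 is field 5 of each data row, per the layout above; sample rows are copied from that output):

```shell
# Find the highest 1-minute load average (ldavg-1, field 5) among
# captured "sar -q" samples. Sample rows taken from the output above.
sar_q='06:28:54 AM         0       230      2.00      3.00      5.00         0
06:28:55 AM         2       210      2.01      3.15      5.15         0
06:28:56 AM         2       230      2.12      3.12      5.12         0'

echo "$sar_q" | awk '$5 > max { max = $5 } END { print max }'
```

Combine this with `-f /var/log/sa/saXX` input to scan a whole day.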

9. Report network statistics (sar -n)

This reports various network statistics: for example, the number of packets received (transmitted) through the network card, packet failure statistics, etc. "1 3" reports every 1 second, a total of 3 times.

sar -n KEYWORD

KEYWORD can be one of the following:

  • DEV – Displays vital statistics for network devices (eth0, eth1, etc.)
  • EDEV – Display network device failure statistics
  • NFS – Displays NFS client activities
  • NFSD – Displays NFS server activities
  • SOCK – Displays sockets in use for IPv4
  • IP – Displays IPv4 network traffic
  • EIP – Displays IPv4 network errors
  • ICMP – Displays ICMPv4 network traffic
  • EICMP – Displays ICMPv4 network errors
  • TCP – Displays TCPv4 network traffic
  • ETCP – Displays TCPv4 network errors
  • UDP – Displays UDPv4 network traffic
  • SOCK6, IP6, EIP6, ICMP6, UDP6 are for IPv6
  • ALL – This displays all of the above information. The output will be very long.

$ sar -n DEV 1 1

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:11:13 PM     IFACE   rxpck/s   txpck/s   rxbyt/s   txbyt/s   rxcmp/s   txcmp/s  rxmcst/s

01:11:14 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00

01:11:14 PM      eth0    342.57    342.57  93923.76 141773.27      0.00      0.00      0.00

01:11:14 PM      eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
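If you only care about one interface, filter on the IFACE column. This sketch assumes the field layout above (IFACE is field 3, rxbyt/s field 6), with sample rows abridged from that output:

```shell
# Show only eth0's receive throughput, converted from bytes/s to KB/s.
# Sample rows abridged from the "sar -n DEV" output above.
sar_net='01:11:14 PM        lo      0.00      0.00      0.00      0.00
01:11:14 PM      eth0    342.57    342.57  93923.76 141773.27'

echo "$sar_net" | awk '$3 == "eth0" { printf "rx KB/s: %.1f\n", $6/1024 }'
```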

10. Report Sar Data Using Start Time (sar -s)

When you view historic sar data from the /var/log/sa/saXX file using the "sar -f" option, it displays all the sar data for that specific day, starting from 12:00 a.m. of that day.

Using the "-s hh:mi:ss" option, you can specify a start time. For example, if you specify "sar -s 10:00:00", it will display sar data starting from 10 a.m. (instead of starting from midnight) as shown below.

You can combine the -s option with other sar options.

For example, to report the load average on the 23rd of this month starting from 10 a.m., combine the -q and -s options as shown below.

$ sar -q -f /var/log/sa/sa23 -s 10:00:01

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked

10:10:01 AM         0       127      2.00      3.00      5.00         0

10:20:01 AM         0       127      2.00      3.00      5.00         0

11:20:01 AM         0       127      5.00      3.00      3.00         0

12:00:01 PM         0       127      4.00      2.00      1.00         0

Depending on your sysstat version, you may be able to limit the end time with the "-e hh:mi:ss" option; otherwise you can get creative and use the head command as shown below.

For example, starting from 10 a.m., if you want to see 7 entries, pipe the above output to "head -n 10".

$ sar -q -f /var/log/sa/sa23 -s 10:00:01 | head -n 10

Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked

10:10:01 AM         0       127      2.00      3.00      5.00         0

10:20:01 AM         0       127      2.00      3.00      5.00         0

10:30:01 AM         0       127      3.00      5.00      2.00         0

10:40:01 AM         0       127      4.00      2.00      1.00         2

10:50:01 AM         0       127      3.00      5.00      5.00         0

11:00:01 AM         0       127      2.00      1.00      6.00         0

11:10:01 AM         0       127      1.00      3.00      7.00         2
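Another workaround is to filter by timestamp with awk. This rough sketch only works within a single AM (or PM) block, where a lexicographic comparison of the hh:mm:ss strings matches time order; the sample rows are abridged from the historical output above:

```shell
# Keep only rows between 10:00 and 11:00 AM from captured "sar -q"
# history. String comparison works here because all rows are "AM".
sar_hist='10:10:01 AM         0       127      2.00
10:50:01 AM         0       127      3.00
11:10:01 AM         0       127      1.00'

echo "$sar_hist" | awk '$2 == "AM" && $1 >= "10:00:00" && $1 <= "11:00:00"'
```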

There is a lot more to cover in Linux performance monitoring and tuning. We are only getting started; more articles to come in this performance series.

Taken from another website for learning purposes.

May 14, 2020 | Uncategorized