Database and Cloud World

Live the life you love. Love the life you live

Kubectl imperative commands


POD
Create an NGINX Pod

kubectl run --generator=run-pod/v1 nginx --image=nginx

Generate POD Manifest YAML file (-o yaml). Don't create it (--dry-run)

kubectl run --generator=run-pod/v1 nginx --image=nginx --dry-run -o yaml
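
The generated manifest looks roughly like this (a sketch; the exact output can vary by kubectl version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx      # kubectl run labels the pod with run=<name>
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  restartPolicy: Always
```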

Deployment
Create a deployment

kubectl create deployment --image=nginx nginx

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run)

kubectl create deployment --image=nginx nginx --dry-run -o yaml

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run) with 4 Replicas (--replicas=4)

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml

The usage --generator=deployment/v1beta1 is deprecated as of Kubernetes 1.16. The recommended way is to use the kubectl create option instead.

IMPORTANT:

kubectl create deployment does not have a --replicas option. You could first create it and then scale it using the kubectl scale command.

Save it to a file (if you need to modify or add some other details):

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml > nginx-deployment.yaml

OR

kubectl create deployment --image=nginx nginx --dry-run -o yaml > nginx-deployment.yaml

You can then update the YAML file with the replicas or any other field before creating the deployment.
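
For example, after editing, nginx-deployment.yaml might look roughly like this (a sketch; exact fields vary by kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4          # added by hand before creating the deployment
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
```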

Service
Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run -o yaml

(This will automatically use the pod's labels as selectors)

Or

kubectl create service clusterip redis --tcp=6379:6379 --dry-run -o yaml

(This will not use the pod's labels as selectors; instead it will assume the selector app=redis. You cannot pass selectors in as an option, so it does not work well if your pod has a different label set. Generate the file and modify the selectors before creating the service.)

Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes:

kubectl expose pod nginx --port=80 --name nginx-service --dry-run -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port manually before creating the service.)

Or

kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run -o yaml

(This will not use the pod's labels as selectors)

Both of the above commands have their own challenges: one cannot accept a selector, while the other cannot accept a node port. I would recommend going with the kubectl expose command. If you need to specify a node port, generate a definition file using the same command and manually add the nodePort before creating the service.
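
The edited definition might look like this (a sketch; the selector must match your pod's actual labels, and run=nginx is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx        # assumption: the pod carries the label run=nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # added by hand; must fall in the NodePort range
```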


March 30, 2021 | Kubernetes

AWS RDS Migration Checklist


Database migration checklist

  1. What is the size of your database?
  2. How many schemas and tables do you have?
  3. How many really big tables do you have (200 gigabytes or 200 million rows in size)?
  4. What do the transaction boundaries look like?
  5. Do you have engine-specific data types that won’t be migrated by your migration tool?
  6. Do you have LOBs in your tables, and how large are they?
  7. Do all your tables with LOBs have primary keys?
  8. How hot (busy) is your source database?
  9. What kind of users, roles, and permissions do you have on the source database?
  10. When was the last time you vacuumed, or compacted, your database?
  11. How can your database be accessed (firewalls, tunnels, VPNs)?
  12. Do you know what VPC you want to use in AWS?
  13. Do you know what VPC security group you can use?
  14. Do you have enough bandwidth to move all your data?
  15. Can you afford downtime? How much?
  16. Do you need the source database to stay alive after the migration? For how long?
  17. Do you know why you preferred one target database engine over another?
  18. What are your high availability (HA) requirements?
  19. Does all the data need to move?
  20. Does it need to move to the same place?
  21. Do you understand the benefits offered by Amazon RDS?
  22. Do you understand any Amazon RDS limitations which might affect you?
  23. What happens to your application after the migration?
  24. What is your contingency plan if things go wrong?

March 30, 2021 | Uncategorized

Oracle Export import Compress


Import (IMP) and Export (EXP) are among the oldest surviving Oracle tools. They are command line tools used to extract tables, schemas, or entire database definitions from one Oracle instance, to be imported into another instance or schema.

COMPRESS=Y – This EXP parameter does not compress the contents of the exported data. It controls how the STORAGE clause for exported objects is generated. If left at Y, the storage clause for each object will have an initial extent equal to the sum of its current extents; that is, EXP generates a CREATE statement that attempts to fit the object into one single extent.

idle> connect scott/oracle
Connected.
scott@10G> drop table t purge;

Table dropped.

Elapsed: 00:00:02.51
scott@10G> create table t as select * from all_objects;

Table created.

Elapsed: 00:00:10.13
scott@10G> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name ='T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.11
scott@10G> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

D:>exp userid=scott/oracle file=d:\t.dmp log=d:\t_log.txt tables=T COMPRESS=Y

Export: Release 10.2.0.3.0 - Production on Tue Feb 16 23:51:52 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path …
. . exporting table T 51353 rows exported
Export terminated successfully without warnings.

D:>imp userid=scott/tiger@10GR2 file=d:\t.dmp log=d:\imp_log.txt fromuser=scott touser=scott

Import: Release 10.2.0.3.0 - Production on Tue Feb 16 23:53:07 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table “T” 51353 rows imported
Import terminated successfully without warnings.

scott@10GR2> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name ='T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.79

scott@10GR2> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 6291456 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

Elapsed: 00:00:01.23

D:>exp userid=scott/oracle file=d:\t.dmp log=d:\t_log.txt tables=T COMPRESS=N

Export: Release 10.2.0.3.0 - Production on Wed Feb 17 00:03:19 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path …
. . exporting table T 51353 rows exported
Export terminated successfully without warnings.

D:>imp userid=scott/tiger@10GR2 file=d:\t.dmp fromuser=scott touser=scott log=d:\imp_log.txt

Import: Release 10.2.0.3.0 - Production on Wed Feb 17 00:06:16 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table “T” 51353 rows imported
Import terminated successfully without warnings.

scott@10GR2> select sum(blocks) as blocks, sum(bytes) as bytes
2 from user_extents
3 where segment_name ='T';

BLOCKS      BYTES

   768    6291456

Elapsed: 00:00:00.71

scott@10GR2> SELECT dbms_metadata.get_ddl('TABLE','T') FROM DUAL;

DBMS_METADATA.GET_DDL('TABLE','T')

CREATE TABLE "SCOTT"."T"
( "OWNER" VARCHAR2(30) NOT NULL ENABLE,
"OBJECT_NAME" VARCHAR2(30) NOT NULL ENABLE,
"SUBOBJECT_NAME" VARCHAR2(30),
"OBJECT_ID" NUMBER NOT NULL ENABLE,
"DATA_OBJECT_ID" NUMBER,
"OBJECT_TYPE" VARCHAR2(19),
"CREATED" DATE NOT NULL ENABLE,
"LAST_DDL_TIME" DATE NOT NULL ENABLE,
"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),
"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),
"SECONDARY" VARCHAR2(1)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

Note how the import after COMPRESS=Y produced INITIAL 6291456 (the sum of the original extents), while COMPRESS=N preserved the original INITIAL 65536.

May 14, 2020 | 12c Database

Oracle Performance Tuning views


alter session set sql_trace=TRUE
alter system set timed_statistics=TRUE
SELECT * FROM V$PROCESS
select username,addr,spid,program,terminal,traceid from v$process
SELECT * FROM V$SYSTEM_EVENT
select event,total_waits from v$system_event
SELECT * FROM V$ROWCACHE
SELECT * FROM V$STATNAME

select st.name,se.sid,se.statistic#,sy.username from v$statname st,v$sesstat se,v$session sy where st.statistic#=se.statistic# and se.sid=sy.sid

SELECT * FROM V$SESSTAT
SELECT * FROM V$STATNAME

select event,wait_time,state from v$session_wait
select event,state from v$session_wait
select event from v$session_wait where wait_time=0
SELECT * FROM V$DATABASE
SELECT * FROM V$LIBRARYCACHE

select gets,pins,reloads,namespace,dlm_invalidations from v$librarycache

SELECT * FROM V$SQL
SELECT * FROM V$SQLAREA
select sql_text,users_executing,executions,loads from v$sqlarea

select sum(pins) "exec",sum(reloads) "miss",sum(reloads)/sum(pins) from v$librarycache

select gets,pins,reloads,namespace,gethitratio,invalidations from v$librarycache

select count(*) from lokesh.emp
select sum(pins) "exec",sum(reloads) "miss",sum(reloads)/sum(pins) from v$librarycache
analyze table lokesh.emp compute statistics
select gets,pins,reloads,namespace,gethitratio,invalidations from v$librarycache

SELECT * FROM V$DB_OBJECT_CACHE
select owner,name,db_link,namespace from v$db_object_cache
select sum(sharable_mem) from v$db_object_cache

SELECT * FROM V$SHARED_POOL_RESERVED

select * from v$db_object_cache where sharable_mem>10000 and (type like 'PACKAGE%' or type='FUNCTION' or type='PROCEDURE') and kept='NO'

alter system flush shared_pool
select sql_text from v$sql

select sum(value) || 'bytes' "tot sess mem" from v$mystat,v$statname where name='session uga memory' and v$mystat.statistic#=v$statname.statistic#

Latches are of two types:

Willing-to-wait: the process waits, then requests the latch again; this cycle continues until the latch becomes available.

Immediate: the process does not wait; if the latch is unavailable, it continues processing other instructions.

desc v$latch
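
As a sketch (columns are from the standard v$latch view, not from the original post), latch contention can be surveyed with:

```sql
-- Willing-to-wait activity shows up in gets/misses,
-- immediate-mode activity in immediate_gets/immediate_misses.
select name, gets, misses, immediate_gets, immediate_misses
from v$latch
order by misses desc;
```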

May 14, 2020 | Scripts

Kubernetes PODS creation


A note about creating pods using kubectl run.

You can create pods from the command line using any of the below two ways:

  1. Create an NGINX Pod (using --generator)

kubectl run --generator=run-pod/v1 nginx --image=nginx

  2. Create an NGINX Pod (using --restart=Never)

kubectl run nginx --image=nginx --restart=Never

If you run the kubectl run command without the --restart=Never or the --generator=run-pod/v1 option, the command will create a deployment instead (as of version 1.16).

Note that this way of creating a deployment is deprecated and should not be used.

Instead, use the kubectl create command to create a deployment:

kubectl create deployment nginx --image=nginx

Kubernetes Concepts – https://kubernetes.io/docs/concepts/

Pod Overview- https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/

May 14, 2020 | Kubernetes

Dropping Index in Oracle


declare
cursor cursor1 is
select 'drop '||object_type||' ' || owner || '.' ||object_name as query_txt
from dba_objects
where object_type in ('INDEX')
and owner in ('USERNAME')   -- owner names are stored in uppercase
order by object_type,object_name;
begin
for rec in cursor1 loop
begin
execute immediate(rec.query_txt);
exception when others then
null;
end;
end loop;
end;
/

May 14, 2020 | 12c Database

Docker private registry


docker run -d -p 5000:5000 --name=registry registry:2
docker images
docker image tag my-image localhost:5000/mysql
docker image tag mysql localhost:5000/mysql
docker ps -a
docker image tag mysql-db localhost:5000/mysql
docker image tag mysql:5.6 localhost:5000/mysql
docker push localhost:5000/mysql
docker pull localhost:5000/mysql
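
If the registry is reached over plain HTTP from another host (a common lab setup), the Docker daemon on the client must be told to trust it in /etc/docker/daemon.json (a sketch; registry.example.local:5000 is a hypothetical hostname):

```json
{
  "insecure-registries": ["registry.example.local:5000"]
}
```

Restart the Docker daemon after editing the file for the setting to take effect.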

May 14, 2020 | Docker

Docker Container Logging


Create a container using syslog.

Enable and start the Docker service.

sudo systemctl enable docker
sudo systemctl start docker
Create a container called syslog-logging using the httpd image.

docker container run -d --name syslog-logging httpd

Create a container using a JSON file.

Create a container that uses the JSON file for logging.

docker container run -d --name json-logging --log-driver json-file httpd

Verify that the syslog-logging container is sending its logs to syslog.

Make sure that the syslog-logging container is logging to syslog by checking the message log file:

tail /var/log/messages

Verify that the json-logging container is sending its logs to the JSON file.

Execute docker logs for the json-logging container.

docker logs json-logging
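
The default log driver for new containers can also be set daemon-wide in /etc/docker/daemon.json (a sketch; the syslog address is an assumption for a local syslog listener):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://127.0.0.1:514"
  }
}
```

Restart the Docker daemon for the change to take effect; a per-container --log-driver flag still overrides the daemon default.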

May 14, 2020 | Uncategorized

Oracle Disk issue reserve policy


We have often seen database instance issues after the Unix team performs a PowerPath update. Please find below the list of preventive actions that need to be followed before an EMC PowerPath upgrade.

Preventative Actions:

  • The Oracle Clusterware will be disabled prior to starting O/S system maintenance that involves reboots – DBA.
  • Prior to restarting the Oracle Clusterware after EMC PowerPath software updates, the following sequence will be implemented – Unix:

Restart the server.

Check that the disk inventory, ownership, permissions, and sharing attributes (reserve_policy) match the original values.

Restart the server a second time.

Check the same attributes once more.

May 14, 2020 | Uncategorized

ASM kfed utility


Kfed parameters

  • aun – Allocation Unit (AU) number to read from. Default is AU0, or the very beginning of the ASM disk.
  • aus – AU size. Default is 1048576 (1MB). Specify the aus when reading from a disk group with non-default AU size.
  • blkn – block number to read. Default is block 0, or the very first block of the AU.
  • dev – ASM disk or device name. Note that the keyword dev can be omitted, but the ASM disk name is mandatory.

Understanding ASM disk layout

Read ASM disk header block from  AU[0]

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD   <-- ASM disk header

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9          <-- ASM disk name

kfdhdb.grpname:                    TEST ; 0x048: length=4          <-- ASM DG name

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9          <-- ASM failgroup

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200            <-- Disk sector size

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000            <-- ASM block size

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000        <-- AU size: 1 MByte

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff        <-- ASM disk size: 1 GByte

Check ASM block types for the first 2 AUs

AU[0] :

[root@grac41 Desktop]# kfed find /dev/asm_test_1G_disk1

Block 0 has type 1

Block 1 has type 2

Block 2 has type 3

Block 3 has type 3

Block 4 has type 3

Block 5 has type 3

Block 6 has type 3

Block 7 has type 3

Block 8 has type 3

Block 9 has type 3

Block 10 has type 3

..

Block 252 has type 3

Block 253 has type 3

Block 254 has type 3

Block 255 has type 3

AU[1] :

[root@grac41 Desktop]#  kfed find /dev/asm_test_1G_disk1 aun=1

Block 256 has type 17

Block 257 has type 17

Block 258 has type 13

Block 259 has type 18

Block 260 has type 13

..

Block 508 has type 13

Block 509 has type 13

Block 510 has type 1

Block 511 has type 19

Summary:

--> The ASM disk header itself is 512 bytes.

    AU size = 1 MByte, ASM block size = 4096 bytes.

    That translates to 1048576 / 4096 = 256 blocks per AU (blocks 0 - 255).

    Block 0 and block 510 both store an ASM disk header (type 1).
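
The block arithmetic above can be sketched as (sizes taken from the kfed output: kfdhdb.ausize=1048576, kfdhdb.blksize=4096):

```shell
au_size=1048576                          # kfdhdb.ausize: 1 MB per AU
block_size=4096                          # kfdhdb.blksize: 4 KB per ASM metadata block
blocks_per_au=$((au_size / block_size))
echo "$blocks_per_au"                    # 256 blocks per AU, numbered 0-255
```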

Run the kfed command below if you are interested in a certain ASM block type (use the output from kfed find to get the type info):

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 aun=1 blkn=255 | egrep 'type'

kfbh.type:                           19 ; 0x002: KFBTYP_HBEAT

Some ASM block types

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0 | egrep 'type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfbh.type:                            2 ; 0x002: KFBTYP_FREESPC

kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL

kfbh.type:                            5 ; 0x002: KFBTYP_LISTHEAD

kfbh.type:                           13 ; 0x002: KFBTYP_PST_NONE

kfbh.type:                           18 ; 0x002: KFBTYP_PST_DTA

kfbh.type:                           19 ; 0x002: KFBTYP_HBEAT

Repair ASM disk header block in AU[0] with kfed repair

  • In ASM versions 11.1.0.7 and later, the ASM disk header block is backed up in the second last ASM metadata block in the allocation unit 1.
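
That location can be derived as a quick sketch, assuming the 1 MB AU and 4 KB block sizes reported by kfed (AU[1] covers absolute blocks 256-511, and the backup header sits in its second-last block):

```shell
blocks_per_au=$((1048576 / 4096))             # 256 blocks per AU
backup_abs=$(( 2 * blocks_per_au - 2 ))       # absolute block number of the backup header
backup_rel=$(( backup_abs - blocks_per_au ))  # block number within AU[1]
echo "$backup_abs $backup_rel"
```

This matches the dd skip=510 and the kfed aun=1 blkn=254 used in the session below.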

Verify ASM DISK Header block located in  AU[0] and AU[1]

AU[0] :

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0 | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

AU[1] :

[root@grac41 Desktop]# kfed read /dev/asm_test_1G_disk1 aun=1 blkn=254 | egrep 'name|size|type'

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

Erase Disk header block in first AU ( aun=0 blkn=0 )

# dd if=/dev/zero of=/dev/asm_test_1G_disk1  bs=4096 count=1

Verify ASM disk header

# kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

--> Corrupted ASM disk header detected in AU[0]

Repair disk header in AU[0] with kfed

[grid@grac41 ASM]$ kfed repair  /dev/asm_test_1G_disk1

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

--> kfed repair worked – disk header restored

Can kfed repair the disk header block stored in the second AU?

Delete  Disk header block in AU[1]

First use dd to verify that we are reading the correct block (write it to a scratch file, then inspect it):

[grid@grac41 ASM]$ dd if=/dev/asm_test_1G_disk1 of=block1 bs=4096 count=1 skip=510; strings block1

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.000464628 s, 8.8 MB/s

ORCLDISK

TEST_0000

TEST

TEST_0000

--> Looks like an ASM disk header – go ahead and erase that block

[grid@grac41 ASM]$  dd if=/dev/zero of=/dev/asm_test_1G_disk1  bs=4096 count=1  seek=510

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.00644028 s, 636 kB/s

Verify ASM disk header block in AU[1]

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=1 blkn=254

kfbh.type:                            0 ; 0x002: KFBTYP_INVALID

KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

--> Corrupted ASM disk header detected

[grid@grac41 ASM]$ kfed repair  /dev/asm_test_1G_disk1

KFED-00320: Invalid block num1 = [0], num2 = [1], error = [endian_kfbh]

--> kfed repair doesn't work

Repair the block with dd (copy the intact header in block 0 over block 510):

[grid@grac41 ASM]$ dd if=/dev/asm_test_1G_disk1 of=/dev/asm_test_1G_disk1 bs=4096 count=1 seek=510

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.0306682 s, 134 kB/s

[grid@grac41 ASM]$ kfed read /dev/asm_test_1G_disk1 aun=0 blkn=0

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

# kfed read /dev/asm_test_1G_disk1 aun=1 blkn=254

kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD

kfdhdb.dskname:               TEST_0000 ; 0x028: length=9

kfdhdb.grpname:                    TEST ; 0x048: length=4

kfdhdb.fgname:                TEST_0000 ; 0x068: length=9

kfdhdb.capname:                         ; 0x088: length=0

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000

kfdhdb.dsksize:                    1023 ; 0x0c4: 0x000003ff

Summary:

--> To fix the backup disk header block in AU[1] you need to use dd; kfed repair only restores the primary header in AU[0].

May 14, 2020 | ASM