HELLO

With this post I am finally saying hello to the blogging world, something I had long thought about but never got around to. My plan was to write useful posts, share the situations I run into, and in particular be helpful to those looking for Turkish-language resources. I hope I can make those plans a reality here.

Hoping this becomes a useful blog where new information is shared regularly,

Hello World 🙂


Extend mount point and swap

Step: create a partition on the newly added disk /dev/sdg.
[root@loc1 ~]# fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x94481826.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527

Command (m for help): l

0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx
5 Extended 42 SFS 86 NTFS volume set da Non-FS data
6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt
9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access
a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
1e Hidden W95 FAT1

Command (m for help): p

Disk /dev/sdg: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x94481826

Device Boot Start End Blocks Id System
/dev/sdg1 1 6527 52428096 83 Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@loc1 ~]#
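
As a quick sanity check, the disk size fdisk reports follows directly from the geometry line it prints: one cylinder is heads × sectors/track × sector size. A minimal sketch with the numbers from the fdisk output above:

```shell
# Cylinder size = heads * sectors/track * sector size
heads=255; sectors_per_track=63; sector_bytes=512
cylinder_bytes=$((heads * sectors_per_track * sector_bytes))
echo "cylinder size: ${cylinder_bytes} bytes"   # 8225280, matching "Units = cylinders of 16065 * 512"

# 6527 cylinders give roughly the 53.7 GB fdisk reports for /dev/sdg
cylinders=6527
echo "disk size: $((cylinder_bytes * cylinders)) bytes"
```

The product is slightly under the 53687091200 bytes of the raw device, since the last partial cylinder is unused.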

Step: create a new LVM physical volume on partition /dev/sdg1.

[root@loc1 ~]# pvcreate /dev/sdg1
Physical volume "/dev/sdg1" successfully created
[root@loc1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 vg_loc1 lvm2 a--u 19.64g 0
/dev/sdg1 lvm2 --- 50.00g 50.00g
[root@loc1 ~]#

Step: now I can extend my existing volume group vg_loc1.

[root@loc1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_loc1 1 2 0 wz--n- 19.64g 0
[root@loc1 ~]#
[root@loc1 ~]#
[root@loc1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root vg_loc1 -wi-ao---- 17.62g
lv_swap vg_loc1 -wi-ao---- 2.01g
[root@loc1 ~]#

[root@loc1 ~]# vgextend vg_loc1 /dev/sdg1
Volume group "vg_loc1" successfully extended
[root@loc1 ~]#

-- 50 GB of free space has been added to the VG:
[root@loc1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_loc1 2 2 0 wz--n- 69.63g 50.00g
[root@loc1 ~]#

Step: extend lv_root and the / (root) mount point.

[root@loc1 ~]# lvextend -L +10G /dev/vg_loc1/lv_root
Size of logical volume vg_loc1/lv_root changed from 17.62 GiB (4512 extents) to 27.62 GiB (7072 extents).
Logical volume lv_root successfully resized.
[root@loc1 ~]#
[root@loc1 ~]# resize2fs /dev/vg_loc1/lv_root
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/vg_loc1/lv_root is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 2
The filesystem on /dev/vg_loc1/lv_root is now 7241728 blocks long.

[root@loc1 ~]#
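
The extent counts lvextend prints line up with the VG's 4 MiB PE size (see vgdisplay below): adding 10 GiB adds 2560 extents. A quick check:

```shell
pe_mib=4                                 # PE Size from vgdisplay: 4.00 MiB
old_extents=4512                         # lv_root before: 17.62 GiB
added_extents=$((10 * 1024 / pe_mib))    # lvextend -L +10G
echo "new extent count: $((old_extents + added_extents))"   # 7072, as lvextend reported
```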

[root@loc1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_loc1-lv_root
28G 12G 15G 44% /
tmpfs 1.9G 292K 1.9G 1% /dev/shm
/dev/sda1 477M 160M 288M 36% /boot
Downloads_win 238G 148G 90G 63% /media/sf_Downloads_win
asmdisks 238G 148G 90G 63% /media/sf_asmdisks
[root@loc1 ~]#

[root@loc1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_root vg_loc1 -wi-ao---- 27.62g
lv_swap vg_loc1 -wi-ao---- 2.01g

[root@loc1 ~]# vgdisplay
--- Volume group ---
VG Name vg_loc1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 69.63 GiB
PE Size 4.00 MiB
Total PE 17826
Alloc PE / Size 7587 / 29.64 GiB
Free PE / Size 10239 / 40.00 GiB
VG UUID 91Zb79-jXl8-nG5a-W2rl-hF0s-U8Ns-SrcYOw

Step: extend swap size

[root@loc1 ~]# more /proc/meminfo | grep -i swap
SwapCached: 0 kB
SwapTotal: 2109436 kB
SwapFree: 2109436 kB

[root@loc1 ~]# swapoff -v /dev/vg_loc1/lv_swap
swapoff on /dev/vg_loc1/lv_swap

[root@loc1 ~]# lvm lvresize /dev/vg_loc1/lv_swap -L +6G
Size of logical volume vg_loc1/lv_swap changed from 2.01 GiB (515 extents) to 8.01 GiB (2051 extents).
Logical volume lv_swap successfully resized.

[root@loc1 ~]# mkswap /dev/vg_loc1/lv_swap
mkswap: /dev/vg_loc1/lv_swap: warning: don't erase bootbits sectors
on whole disk. Use -f to force.
Setting up swapspace version 1, size = 8400892 KiB
no label, UUID=92ea9789-706c-48e4-a965-428d6dd6748c

[root@loc1 ~]# swapon -va
swapon on /dev/mapper/vg_loc1-lv_swap
swapon: /dev/mapper/vg_loc1-lv_swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/vg_loc1-lv_swap: pagesize=4096, swapsize=8602517504, devsize=8602517504
[root@loc1 ~]#
[root@loc1 ~]# more /proc/meminfo | grep -i swap
SwapCached: 0 kB
SwapTotal: 8400892 kB
SwapFree: 8400892 kB
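
The device size swapon reports can be cross-checked against the new extent count of lv_swap (2051 extents at the VG's 4 MiB PE size):

```shell
extents=2051   # lv_swap after lvresize -L +6G
pe_mib=4       # PE Size from vgdisplay
bytes=$((extents * pe_mib * 1024 * 1024))
echo "lv_swap device size: ${bytes} bytes"   # 8602517504, matching swapon's devsize
```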

Kill session procedure

If you need to give the kill-session privilege to a non-DBA user, you can create a procedure like the one below and grant execute on it.


CREATE OR REPLACE PROCEDURE sys.kill_session
IS
   is_found_rec   BOOLEAN := FALSE;

   CURSOR c1
   IS
      SELECT ss.sid, ss.serial#, ss.inst_id
        FROM gv$session ss
       WHERE wait_class = 'Application'
         AND event = 'SQL*Net break/reset to client'
         AND username = 'USERNAME'
         AND status = 'INACTIVE'
         AND TYPE = 'USER';
BEGIN
   FOR i IN c1
   LOOP
      is_found_rec := TRUE;
      DBMS_OUTPUT.put_line (
         'killing ' || i.sid || ',' || i.serial# || ',' || i.inst_id);

      EXECUTE IMMEDIATE
            'Alter System Kill Session '''
         || i.sid
         || ','
         || i.serial#
         || ',@'
         || i.inst_id
         || ''' IMMEDIATE';
   END LOOP;

   IF NOT is_found_rec
   THEN
      DBMS_OUTPUT.put_line ('Cannot find any rows!');
   END IF;
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line (
         'SQL Error Message= ' || SQLERRM || ' Error Code= ' || SQLCODE);
END;
/

grant execute on sys.kill_session to opUser;
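
The dynamic SQL the procedure builds has to follow the 'sid,serial#,@inst_id' form exactly; on RAC, the @inst_id part routes the kill to the right instance. A small sketch of the string assembly the EXECUTE IMMEDIATE produces (the session identifiers here are hypothetical values, as the cursor would return them):

```shell
# Hypothetical gv$session values
sid=143; serial=2031; inst_id=2

# Same concatenation the PL/SQL loop performs
stmt="Alter System Kill Session '${sid},${serial},@${inst_id}' IMMEDIATE"
echo "$stmt"
```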

Move OCR, Vote File, ASM SPFILE to new Diskgroup

1-) Check the configuration before the operation.

[oracle@demir01 patch]$ ocrconfig -showbackup
demir01 2018/12/13 13:29:52 +OCRVOTE:/demir-cluster/OCRBACKUP/backup00.ocr.284.994771763 70732493
demir01 2018/12/13 09:29:22 +OCRVOTE:/demir-cluster/OCRBACKUP/backup01.ocr.290.994757335 70732493
demir01 2018/12/13 05:28:53 +OCRVOTE:/demir-cluster/OCRBACKUP/backup02.ocr.289.994742909 70732493
demir01 2018/12/12 01:25:31 +OCRVOTE:/demir-cluster/OCRBACKUP/day.ocr.285.994641933 70732493
demir01 2018/12/10 21:22:12 +OCRVOTE:/demir-cluster/OCRBACKUP/week.ocr.286.994540935 70732493
demir01 2018/12/10 15:58:30 +OCRVOTE:/demir-cluster/OCRBACKUP/backup_20181210_155830.ocr.259.994521511 70732493


SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +OCRVOTE/demir-cluster/asmparameterfile/registry.253.973865833


[oracle@demir01 patch]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 03b049c195d64f82bf0db455e23a97e8 (/dev/mapper/asmvote1) [OCRVOTE]


[oracle@demir01 patch]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 96816
Available space (kbytes) : 394868
ID : 1416219024
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user
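
A small consistency check on the ocrcheck figures: used plus available space should add up to the total.

```shell
total_kb=491684; used_kb=96816; avail_kb=394868   # from ocrcheck above
[ $((used_kb + avail_kb)) -eq "$total_kb" ] && echo "ocrcheck space figures are consistent"
```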

2-) Create and mount the new ASM diskgroup.

3-) Run the following as root: add the new diskgroup to the OCR configuration, remove the old diskgroup, and replace the voting files.

[root@demir01 ~]# ocrconfig -add +SYSTEMDG
[root@demir01 ~]# ocrconfig -delete +OCRVOTE
[root@demir01 ~]# crsctl replace votedisk +SYSTEMDG
Successful addition of voting disk 7c8630a1a2b84f55bf0e6ea4667c00d5.
Successful addition of voting disk 0079a48605d84f41bf602265300d048b.
Successful addition of voting disk 2c81fb6664eb4f8ebf7d05b94f11ead0.
Successful addition of voting disk 3d65a4ee99434fe7bf00888e0e314a6e.
Successful addition of voting disk 6a864950b38f4f25bf6e92c9a9230174.
Successful deletion of voting disk 03b049c195d64f82bf0db455e23a97e8.
Successfully replaced voting disk group with +SYSTEMDG.
CRS-4266: Voting file(s) successfully replaced

4-) Check the new configuration.

[root@demir01 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 7c8630a1a2b84f55bf0e6ea4667c00d5 (/dev/mapper/asmdata206) [SYSTEMDG]
2. ONLINE 0079a48605d84f41bf602265300d048b (/dev/mapper/asmdata208) [SYSTEMDG]
3. ONLINE 2c81fb6664eb4f8ebf7d05b94f11ead0 (/dev/mapper/asmdata210) [SYSTEMDG]
4. ONLINE 3d65a4ee99434fe7bf00888e0e314a6e (/dev/mapper/asmdata211) [SYSTEMDG]
5. ONLINE 6a864950b38f4f25bf6e92c9a9230174 (/dev/mapper/asmdata214) [SYSTEMDG]
Located 5 voting disk(s).


[root@demir01 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 96816
Available space (kbytes) : 394868
ID : 1416219024
Device/File Name : +SYSTEMDG
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

5-) Change the ASM instance SPFILE location.


[oracle@demir01 patch]$ sqlplus / as sysasm;

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Dec 13 15:52:10 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> create pfile='/tmp/asm.pfile' from spfile;

File created.

SQL> create spfile='+SYSTEMDG' from pfile='/tmp/asm.pfile';

File created.

SQL> exit

6-) Check the new SPFILE location with gpnptool.


[oracle@demir01 patch]$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line:
/oracle/app/18.0.0.0/grid/bin/gpnptool.bin get -o-

... SPFile="+SYSTEMDG/demir-cluster/ASMPARAMETERFILE/registry.253.994780399" ...

7-) Stop and start CRS on all cluster nodes.

[root@demir01 ~]# crsctl stop crs
[root@demir01 ~]# crsctl start crs

Finally, check the cluster resources and alert logs.

ref: Move OCR, Vote File, ASM SPFILE to new Diskgroup (Doc ID 1638177.1)

Move MGMTDB files to a new diskgroup on 12c onward

1-) Download the mdbutil.pl tool from Doc ID 2065175.1 and copy it to the DB server.
2-) Check the current status.

[oracle@demir01 patch]$ ./mdbutil.pl --status
mdbutil.pl version : 1.95
2018-12-13 14:27:27: I Checking CHM status...
2018-12-13 14:27:28: I Listener MGMTLSNR is configured and running on demir01
2018-12-13 14:27:30: I Database MGMTDB is configured and running on demir01
2018-12-13 14:27:31: I Cluster Health Monitor (CHM) is configured and running
--------------------------------------------------------------------------------
CHM Repository Path = +OCRVOTE/_MGMTDB/7CABD0A0D9D40761E05302168C0A2CFA/DATAFILE/sysmgmtdata.280.994522813
MGMTDB space used on DG +OCRVOTE = 26235 Mb
--------------------------------------------------------------------------------

3-) Create the new ASM diskgroup and mount it on all nodes.

4-) Move the MGMTDB files from +OCRVOTE to +SYSTEMDG with the mdbutil.pl tool. The command has to be run as the grid user with the grid environment set.


[oracle@demir01 patch]$ ./mdbutil.pl --mvmgmtdb --target=+SYSTEMDG
mdbutil.pl version : 1.95
Moving MGMTDB, it will be stopped, are you sure (Y/N)? Y
2018-12-13 14:33:30: I Checking for the required paths under +SYSTEMDG
2018-12-13 14:33:32: I Creating new path +SYSTEMDG/_MGMTDB/PARAMETERFILE
2018-12-13 14:33:36: I Creating new path +SYSTEMDG/_MGMTDB/CONTROLFILE
2018-12-13 14:33:40: I Creating new path +SYSTEMDG/_MGMTDB/ONLINELOG
2018-12-13 14:33:44: I Creating new path +SYSTEMDG/_MGMTDB/DATAFILE
2018-12-13 14:33:48: I Creating new path +SYSTEMDG/_MGMTDB/TEMPFILE
2018-12-13 14:33:52: I Creating new path +SYSTEMDG/_MGMTDB/DATAFILE/PDB$SEED
2018-12-13 14:33:55: I Creating new path +SYSTEMDG/_MGMTDB/DATAFILE/TEMPFILE/PDB$SEED
2018-12-13 14:33:58: I Creating new path +SYSTEMDG/_MGMTDB/DATAFILE/demir_cluster
2018-12-13 14:34:00: I Creating new path +SYSTEMDG/_MGMTDB/TEMPFILE/demir_cluster
2018-12-13 14:34:02: I Getting MGMTDB Database files location
2018-12-13 14:34:02: I Getting MGMTDB Temp files location
2018-12-13 14:34:02: I Getting MGMTDB PDB PDB$SEED files location
2018-12-13 14:34:03: I Getting MGMTDB PDB PDB$SEED Temp files location
2018-12-13 14:34:04: I Getting MGMTDB PDB GIMR_DSCREP_10 files location
2018-12-13 14:34:04: I Getting MGMTDB PDB GIMR_DSCREP_10 Temp files location
2018-12-13 14:34:11: I Creating temporary PFILE
2018-12-13 14:34:11: I Creating target SPFILE
2018-12-13 14:34:15: I Stopping the Cluster Health Analysis Resource
2018-12-13 14:34:21: I Stopping mgmtdb
2018-12-13 14:34:54: I Copying MGMTDB DBFiles to +SYSTEMDG
2018-12-13 14:35:05: I Copying MGMTDB PDB$SEED DBFiles to +SYSTEMDG
2018-12-13 14:35:16: I Copying MGMTDB PDB DBFiles to +SYSTEMDG
2018-12-13 14:36:50: I Creating the CTRL File
2018-12-13 14:37:28: I The CTRL File has been created and MGMTDB is now running from +SYSTEMDG
2018-12-13 14:37:28: I Setting MGMTDB SPFile location
2018-12-13 14:37:29: I Modifing the init parameter
2018-12-13 14:37:29: I Removing old MGMTDB
2018-12-13 14:37:31: I Changing START_DEPENDENCIES
2018-12-13 14:37:32: I Changing STOP_DEPENDENCIES
2018-12-13 14:37:32: I Restarting MGMTDB using target SPFile
2018-12-13 14:38:52: I Startin the Cluster Health Analysis Resource
2018-12-13 14:38:53: I MGMTDB Successfully moved to +SYSTEMDG!

5-) Check after the migration.

[oracle@demir01 patch]$ ./mdbutil.pl --status
mdbutil.pl version : 1.95
2018-12-13 14:43:05: I Checking CHM status...
2018-12-13 14:43:06: I Listener MGMTLSNR is configured and running on demir01
2018-12-13 14:43:08: I Database MGMTDB is configured and running on demir01
2018-12-13 14:43:09: I Cluster Health Monitor (CHM) is configured and running
--------------------------------------------------------------------------------
CHM Repository Path = +SYSTEMDG/_MGMTDB/DATAFILE/demir_cluster/sysmgmtdata.20181213143404.dbf
MGMTDB space used on DG +SYSTEMDG = 2736 Mb
--------------------------------------------------------------------------------
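
Comparing the two --status runs shows how much space the move freed on the old diskgroup:

```shell
before_mb=26235   # MGMTDB space on +OCRVOTE, from the first --status run
after_mb=2736     # MGMTDB space on +SYSTEMDG, after the move
echo "space reclaimed on +OCRVOTE: $((before_mb - after_mb)) MB"
```

The big drop is expected: recreating the repository datafiles drops the accumulated free space inside them.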

ref : MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)

ORA-12012 ORA-12534

-- Create a dbms_job with a globally identified user.

DECLARE
   int_job_id   BINARY_INTEGER;
BEGIN
   DBMS_JOB.SUBMIT (
      job         => int_job_id,
      what        => 'user1.pck_tb.create_job(0, 500);',
      next_date   => SYSDATE,
      interval    => NULL);
   COMMIT;
END;
/

-- But the job fails with the errors below.

ORA-12012: error on auto execute of job 62392
ORA-12534: TNS:operation not supported

Oracle Support states that DBMS_JOB fails because of the globally identified user, referring to Bug 5489095.
The solution is to use DBMS_SCHEDULER instead, or not to use DBMS_JOB with a globally identified user.
See: DBMS_JOB Does Not Work For Globally Identified Users (Doc ID 1060368.1)

Unplug PDB and plug into another CDB

Unplug a PDB from CDB1 and plug it into CDB2.

on CDB1

SQL> select CON_ID, DBID, CON_UID, GUID,NAME from v$pdbs

CON_ID DBID CON_UID GUID NAME
---------- ---------- ---------- -------------------------------- --------------------------------------------
2 2743990362 2743990362 6C77C27828EC08A2E05303E1060A2939 PDB$SEED
3 2688943372 2688943372 71E7C773DC8E7659E05303E1060A61C4 PDB3
4 2848179871 2848179871 6D052E1C34FE1EC3E05303E1060A72DF PDB1
5 4140312712 4140312712 6DCD095FEFDC2693E05303E1060A9333 PDB2

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO
5 PDB2 READ WRITE NO
8 PDB3 READ WRITE NO
SQL> set lines 100
SQL> select name from v$datafile where con_id=8

NAME
----------------------------------------------------------------------------------------------------
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/system.374.982858999
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/sysaux.373.982858999
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/undotbs1.378.982858999
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/undo_2.371.982859021
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/users.370.982859025
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/PDB3data.269.982859213
+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/PDB3index.267.982859213

7 rows selected.
SQL> alter pluggable database PDB3 close instances=all;
Pluggable database altered.

SQL> alter pluggable database PDB3 unplug into '/dblog/PDB3.xml';
Pluggable database altered.

SQL> drop pluggable database PDB3 keep datafiles;
Pluggable database dropped.

on CDB2
-- Check compatibility of the unplugged PDB with the destination CDB.


set serveroutput on

DECLARE
   compatible   BOOLEAN := FALSE;
BEGIN
   compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY (
                    pdb_descr_file => '/dblog/PDB3.xml');

   IF compatible THEN
      DBMS_OUTPUT.PUT_LINE ('Is pluggable PDB3 compatible? YES');
   ELSE
      DBMS_OUTPUT.PUT_LINE ('Is pluggable PDB3 compatible? NO');
   END IF;
END;
/

Is pluggable PDB3 compatible? YES

PL/SQL procedure successfully completed.
SQL> create pluggable database PDB3 using '/dblog/PDB3.xml' COPY FILE_NAME_CONVERT=('+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751','+UDATA');

Pluggable database created.
SQL> select pdb_name, status from cdb_pdbs where pdb_name='PDB3';

PDB_NAME STATUS
------------------------------------------------------------- -----------------------------------
PDB3 NEW

SQL> select open_mode from v$pdbs where name='PDB3';

OPEN_MODE
----------
MOUNTED

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB3 MOUNTED
SQL> select name from v$datafile where con_id=4

NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/system.286.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/sysaux.287.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/undotbs1.284.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/undo_2.285.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/users.281.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/PDB3data.282.988222753
+UDATA/TEK/72385D6869CF333FE05303E1060AB751/DATAFILE/PDB3index.283.988222753

7 rows selected.

SQL> alter pluggable database PDB3 open instances=all;
Pluggable database altered.

SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB3 READ WRITE NO
[oracle@server01 ~]$ sqlplus sys/@10.5.5.5:1521/PDB3 AS SYSDBA

SQL*Plus: Release 12.2.0.1.0 Production on Sun Sep 30 18:04:16 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show con_name

CON_NAME
------------------------------
PDB3
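
FILE_NAME_CONVERT in the create pluggable database command works as a simple prefix substitution on each source datafile path. The same rewrite can be sketched in shell, using the source prefix and one datafile from the listings above:

```shell
src_prefix='+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751'
dst_prefix='+UDATA'

old='+DATA/CDBCONS/72385D6869CF333FE05303E1060AB751/DATAFILE/system.374.982858999'
new="${dst_prefix}${old#"$src_prefix"}"   # strip the source prefix, prepend the target
echo "$new"
```

Since +UDATA is an ASM diskgroup, Oracle-managed files then place the actual copies under the destination diskgroup's own directory tree (the +UDATA/TEK/<GUID>/DATAFILE names in the v$datafile listing), so the converted name is only the starting point.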


Linux Kernel parameters HugePages

On Linux systems, Oracle recommends using HugePages, because memory allocation for the SGA otherwise causes constant page table updates.

Without HugePages, the operating system keeps each 4 KB of memory as a page. When it allocates pages to the database System Global Area (SGA), the operating system kernel must continually update its page table with the page lifecycle (dirty, free, mapped to a process, and so on) for each 4 KB page allocated to the SGA.

With HugePages, the operating system page table (virtual memory to physical memory mapping) is smaller, because each page table entry is pointing to pages from 2 MB to 256 MB.

Also, the kernel has fewer pages whose lifecycle must be monitored. For example, if you use HugePages with 64-bit hardware, and you want to map 256 MB of memory, you may need one page table entry (PTE). If you do not use HugePages, and you want to map 256 MB of memory, then you must have 256 MB * 1024 KB/4 KB = 65536 PTEs.
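
The arithmetic in that example, sketched out for 4 KB pages versus 2 MB HugePages (with a larger huge page size, e.g. 256 MB on some platforms, a single entry suffices, as the text notes):

```shell
map_mb=256                         # amount of memory to map
ptes_4k=$((map_mb * 1024 / 4))     # 4 KB pages  -> 65536 page table entries
ptes_2m=$((map_mb / 2))            # 2 MB HugePages -> 128 entries
echo "4K pages: ${ptes_4k} PTEs, 2M HugePages: ${ptes_2m} PTEs"
```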

HugePages provides the following advantages:

  • Increased performance through increased TLB hits
  • Pages are locked in memory and never swapped out, which provides RAM for shared memory structures such as SGA
  • Contiguous pages are preallocated and cannot be used for anything else but for System V shared memory (for example, SGA)
  • Less bookkeeping work for the kernel for that part of virtual memory because of larger page sizes

Check HugePages:

$ grep Huge /proc/meminfo| grep "HugePages_Total"

Add the following line to /etc/sysctl.conf to set HugePages to 200 GB; since our Hugepagesize is 2 MB, that is 102400 pages:

vm.nr_hugepages = 102400

Check the Hugepagesize:

[root@server01 ~]# more /proc/meminfo | grep Hugepagesize
Hugepagesize: 2048 kB
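
The 102400 figure follows directly from the target size and the Hugepagesize shown above:

```shell
target_gb=200          # desired HugePages pool
hugepagesize_kb=2048   # from /proc/meminfo
nr_hugepages=$((target_gb * 1024 * 1024 / hugepagesize_kb))
echo "vm.nr_hugepages = ${nr_hugepages}"   # 102400
```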

Apply the change with:

[root@server01 ~]# sysctl -p

Then verify the new HugePages value with:

$ grep Huge /proc/meminfo| grep "HugePages_Total"

Linux Kernel parameters SHMMAX SHMMNI SHMALL

SHMMAX:
Oracle recommends half the size of physical memory, in bytes, for 64-bit systems.

See My Oracle Support Note 567506.1 for additional information about configuring shmmax.

This parameter defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space.
When trying to allocate an SGA larger than SHMMAX, the error below occurs:
"SGA size cannot be greater than maximum shared memory segment size (shmmax value)"
We need to change the shmmax value according to Oracle's recommendations.

Check the current shmmax value:

[root@server01 ~]# cat /proc/sys/kernel/shmmax
68719476736

My total memory is 512 GB, so shmmax would be 250 GB here.
Change the kernel.shmmax line to 268435456000 in the /etc/sysctl.conf file,
then apply the change with:

sysctl -p

Then check it with:

 [root@server01 ~]# cat /proc/sys/kernel/shmmax
268435456000

SHMMNI: recommended value is 4096.

SHMALL: recommended as 40% of the physical memory size, in pages.

Note: If the server supports multiple databases, or uses a large SGA, then set this parameter to a value that is equal to the total amount of shared memory, in 4K pages, that the system can use at one time.

Check the default page size:

[root@toryum01 ~]# getconf PAGE_SIZE
4096
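
Putting the recommendations together for this 512 GB box: half of RAM for shmmax and 40% of RAM, expressed in pages, for shmall, using the 4096-byte page size from getconf. (The post rounds shmmax down to 250 GB; the sketch below uses the literal half.)

```shell
mem_bytes=$((512 * 1024 * 1024 * 1024))        # 512 GB of physical memory
page_bytes=4096                                # from getconf PAGE_SIZE
shmmax=$((mem_bytes / 2))                      # half of RAM, in bytes
shmall=$((mem_bytes / page_bytes * 40 / 100))  # 40% of RAM, in pages
echo "kernel.shmmax = ${shmmax}"
echo "kernel.shmall = ${shmall}"
```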

Create AWR for PDB on 12.2

Step: set the awr_pdb_autoflush_enabled value to true.
The default value of AWR_PDB_AUTOFLUSH_ENABLED is false. Thus, by default, automatic AWR snapshots are disabled for all the PDBs in a CDB.

When you change the value of AWR_PDB_AUTOFLUSH_ENABLED in the CDB root, the new value takes effect in all the PDBs in the CDB.

Therefore, if you change the value of AWR_PDB_AUTOFLUSH_ENABLED in the CDB root to true, the value of AWR_PDB_AUTOFLUSH_ENABLED is also changed to true in all of the PDBs, so that automatic AWR snapshots are enabled for all the PDBs.

You can also change the value of AWR_PDB_AUTOFLUSH_ENABLED in any of the individual PDBs in a CDB, and the value that is set for each individual PDB will be honored. This enables you to enable or disable automatic AWR snapshots for individual PDBs.

When a new PDB is created, or a PDB from a previous database release is upgraded to the current database release, automatic AWR snapshots are enabled or disabled for the PDB based on the current value of AWR_PDB_AUTOFLUSH_ENABLED in the root.

For a specific PDB:

alter session set container=PDBtest;
alter system set awr_pdb_autoflush_enabled=true;

For the CDB and all PDBs:

alter session set container=CDB$ROOT;
alter system set awr_pdb_autoflush_enabled=true;

Step: set the interval and retention period at the PDB or CDB$ROOT level. The script has to be run separately for the CDB and each PDB. For a 45-day retention and a 30-minute interval, the script can be as below.

alter session set container=CDB$ROOT;
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 64800);
alter session set container=PDB1;
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 64800);
alter session set container=PDB2;
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 64800);
alter session set container=PDB3;
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 64800);
alter session set container=PDB4;
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 64800);
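
The 64800 retention value is simply 45 days expressed in minutes, which is the unit modify_snapshot_settings expects:

```shell
days=45
retention_minutes=$((days * 24 * 60))
echo "retention => ${retention_minutes}"   # 64800
```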

Step: if you have many instances or PDBs, you can set the AWR_SNAPSHOT_TIME_OFFSET parameter to a value greater than 0 (the default is 0). To avoid CPU spikes, this parameter can be set at the CDB level.

AWR snapshots normally start at the top of the hour (12:00, 1:00, 2:00, and so on). This parameter allows DBAs to specify an offset for the AWR snapshot start time.

This is a useful parameter to avoid CPU spikes from multiple instances all starting their AWR snapshots at the same time. If you have a large system with many instances on it (like many Exadata installations), and you are experiencing such CPU spikes, this parameter can be very useful.

The parameter is specified in seconds. Normally, you set it to a value less than 3600. If you set the special value 1000000 (1,000,000), you get an automatic mode, in which the offset is based on the database name.

The automatic mode is an effective way of getting a reasonable distribution of offset times when you have a very large number of instances running on the same node.

alter session set container=CDB$ROOT;
alter system set AWR_SNAPSHOT_TIME_OFFSET=3600 scope=both;

Step: create snapshots for the PDB.

alter session set container=PDB1;
SQL> exec dbms_workload_repository.create_snapshot();
SQL> exec dbms_workload_repository.create_snapshot();
SQL> exec dbms_workload_repository.create_snapshot();
SQL> select * from awr_pdb_snapshot;

Step: create the AWR report.

SQL>@?/rdbms/admin/awrrpt

Specify the location of AWR Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWR_ROOT - Use AWR data from root (default)
AWR_PDB - Use AWR data from PDB

...

RMAN-06091: no channel allocated for maintenance (of an appropriate type)

While backing up archive logs, an RMAN script ended with the errors below. The backup itself actually finished successfully, but when deleting the backed-up archive logs RMAN failed because of obsolete backups.

RMAN-03002: failure of delete command at 09/04/2018 14:15:18
RMAN-06091: no channel allocated for maintenance (of an appropriate type)

Deleting the obsolete backups resolves the problem.

Solution for tape backups:

RMAN> allocate channel for maintenance device type 'sbt_tape' PARMS 'ENV=(NSR_SERVER=, NSR_CLIENT=)';

RMAN> delete obsolete;

RMAN> crosscheck archivelog all;

RMAN> delete archivelog all backed up 2 times to device type 'sbt_tape';

RMAN> crosscheck archivelog all;

Solution for disk backups:

RMAN> allocate channel for maintenance device type disk;

RMAN> delete obsolete device type disk;

RMAN> crosscheck archivelog all;

RMAN> delete archivelog all backed up 2 times to device type 'sbt_tape';

RMAN> crosscheck archivelog all;

ref: Encountered RMAN-03002 and RMAN-06091 when Deleting Obsolete Backups (Doc ID 567555.1)