HELLO

With this post I am saying hello to the blogging world, something I had long been planning but never got around to. My aim was to write useful articles, share the situations I encounter, and be helpful especially to those looking for Turkish-language resources. I hope I can make those plans come true here.

Hoping this becomes a useful blog where new information is shared regularly,

Hello World 🙂

Oracle 12c: Move All Database Files to a New Mount Point

The existing mount point is near 100% usage, so we need to extend it.
But for technical reasons we cannot extend the mount point, so we have to move all the Oracle files on the old mount point to a newly created mount point.
Then we will drop the old mount point.

step: change db_create_file_dest
alter system set db_create_file_dest='/new_dest' scope=both; 
step: move all datafiles online.
OMF is enabled, so I do not specify a new destination.
With the "keep" option the old files remain at the old location.
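The list of move statements below was typed by hand; for databases with many files, a similar list can be generated from v$datafile (a sketch, assuming the old files all live under /old_dest):

```sql
-- Generate "alter database move datafile N keep;" for every datafile
-- still sitting on the old mount point (the path is an assumption).
SELECT 'alter database move datafile ' || file# || ' keep;' AS move_cmd
  FROM v$datafile
 WHERE name LIKE '/old_dest/%'
 ORDER BY file#;
```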

alter database move datafile   2 keep;
alter database move datafile   3 keep;
alter database move datafile   4 keep;
alter database move datafile   5 keep;
alter database move datafile   7 keep;
alter database move datafile   8 keep;
alter database move datafile   9 keep;
alter database move datafile  10 keep;
alter database move datafile  11 keep;
alter database move datafile  12 keep;
alter database move datafile  13 keep;
alter database move datafile  14 keep;
alter database move datafile  15 keep;
alter database move datafile  16 keep;
alter database move datafile  17 keep;
alter database move datafile  18 keep;
alter database move datafile  19 keep;
alter database move datafile  20 keep;
alter database move datafile  21 keep;
alter database move datafile  22 keep;
alter database move datafile  23 keep;
step: drop the old temp tablespace and create a new one.
Create the new temp tablespace; it will be created in the new location because of the new "db_create_file_dest". If you do not use OMF, please specify the new location explicitly.
Change the default temp tablespace to the newly created one.
Find and kill the sessions that are using the old temp tablespace.
Drop the old temp tablespace.

create temporary tablespace temp2;

alter database default  temporary tablespace TEMP2;


SELECT 'alter system kill session '''||sid||','||serial#||''' immediate;'
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr;


DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
step: move the controlfiles from the old destination to the new destination.
Change the control_files parameter in the spfile to point to the new location. If, like me, you have more than one copy of the controlfile, you can change only the entries that require it; retype the controlfile copies that do not need to change exactly as they are.
Then shut down the database and move the controlfile from the old location to the new one with the Linux mv command.
Start the database and check the "control_files" parameter.


select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/old_dest/MYDB/controlfile/o1_mf_ggf7lh18_.ctl
/u02/MYDB/controlfile/o1_mf_ggf7lh4j_.ctl


alter system set control_files='/new_dest/MYDB/controlfile/o1_mf_ggf7lh18_.ctl', '/u02/MYDB/controlfile/o1_mf_ggf7lh4j_.ctl' scope=spfile;

shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down


linux shell# mv /old_dest/MYDB/controlfile/o1_mf_ggf7lh18_.ctl /new_dest/MYDB/controlfile/o1_mf_ggf7lh18_.ctl

select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/new_dest/MYDB/controlfile/o1_mf_ggf7lh18_.ctl
/u02/MYDB/controlfile/o1_mf_ggf7lh4j_.ctl
step: relocate the redo log members.
Shut down the database and move the redo log files/members to the new location.
Start the database in mount mode and run the "alter database rename file" command.
Open the database.


select group#, member from v$logfile;
    GROUP# MEMBER
---------- ------------------------------------------------------------------------------------------
         3 /old_dest/MYDB/onlinelog/o1_mf_3_ggf7lklb_.log
         3 /u02/MYDB/onlinelog/o1_mf_3_ggf7lrmb_.log
         2 /old_dest/MYDB/onlinelog/o1_mf_2_ggf7lkkr_.log
         2 /u02/MYDB/onlinelog/o1_mf_2_ggf7lpys_.log
         1 /old_dest/MYDB/onlinelog/o1_mf_1_ggf7lkk6_.log
         1 /u02/MYDB/onlinelog/o1_mf_1_ggf7lm5h_.log

6 rows selected.

shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

mv /old_dest/MYDB/onlinelog/o1_mf_3_ggf7lklb_.log /new_dest/MYDB/onlinelog/o1_mf_3_ggf7lklb_.log
mv /old_dest/MYDB/onlinelog/o1_mf_2_ggf7lkkr_.log /new_dest/MYDB/onlinelog/o1_mf_2_ggf7lkkr_.log
mv /old_dest/MYDB/onlinelog/o1_mf_1_ggf7lkk6_.log /new_dest/MYDB/onlinelog/o1_mf_1_ggf7lkk6_.log


startup mount;
ORACLE instance started.

Total System Global Area 4966055936 bytes
Fixed Size                  8630856 bytes
Variable Size            1040190904 bytes
Database Buffers         3909091328 bytes
Redo Buffers                8142848 bytes
Database mounted.
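The transcript above stops after the mount; the rename and open steps described earlier would look like this, using the file names from the v$logfile listing above:

```sql
-- Point the controlfile at the moved redo log members, then open the database.
alter database rename file '/old_dest/MYDB/onlinelog/o1_mf_3_ggf7lklb_.log' to '/new_dest/MYDB/onlinelog/o1_mf_3_ggf7lklb_.log';
alter database rename file '/old_dest/MYDB/onlinelog/o1_mf_2_ggf7lkkr_.log' to '/new_dest/MYDB/onlinelog/o1_mf_2_ggf7lkkr_.log';
alter database rename file '/old_dest/MYDB/onlinelog/o1_mf_1_ggf7lkk6_.log' to '/new_dest/MYDB/onlinelog/o1_mf_1_ggf7lkk6_.log';
alter database open;
```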

Troubleshooting ‘enq: TX – index contention’ Waits

When running an OLTP system, it is possible to see high TX enqueue contention on indexes associated with tables that have high concurrency from the application. This usually happens when the application performs a lot of INSERTs and DELETEs concurrently. In a RAC system, the concurrent INSERTs and DELETEs can come from all instances.

The reason for this is index block splits while inserting a new row into the index. Transactions have to wait for the TX lock in mode 4 until the session doing the block split completes the operation.

A session initiates an index block split when it cannot find space in the index block where it needs to insert a new row. Before starting the split, it cleans out all the deleted keys in the block to check whether that frees up sufficient space.


Splitter has to do the following activities:
    o          Allocate a new block.
    o          Copy a percentage of rows to the new buffer.
    o          Add the new buffer to the index structure and commit the operation.

In RAC environments, this could be an expensive operation, due to the global cache operations included. The impact will be more if the split is happening at a branch or root block level.

Causes:

Most probable reasons are:

   o          Indexes on the tables which are being accessed heavily from the application.
   o          Indexes on table columns which are monotonically growing. In other words, most of the index insertions occur only on the right edge of an index.
   o          Large data purge has been performed, followed by high concurrent insert

Identifying the Hot index:

The indexes which are having contention can be identified from the AWR reports taken during the time of the issue.

Top 5 Timed Events:

Event                       Waits      Time(s)   Avg Wait(ms)  % Total Call  Time Wait Class 
enq: TX - index contention  89,350     40,991    459           63.3          Concurrency 
db file sequential read     1,458,288  12,562    9             19.4          User I/O 
CPU time                               5,352                   8.3   

Instance Activity Stats:

Statistic                Total     per Second    per Trans 
branch node splits       945       0.26          0.00 
leaf node 90-10 splits   1,670     0.46          0.00 
leaf node splits         35,603    9.85          0.05 

And the objects can be found either from V$SEGMENT_STATISTICS or from the ‘Segments by Row Lock Waits’ and ‘Segments by ITL Waits’ sections of the AWR reports.

Segments by Row Lock Waits:

Owner     Tablespace  Object Name            Obj.Type   Row Lock Waits  % of Capture 
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_PK   INDEX      3,425           43.62 
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_ST   INDEX      883             11.25 
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_DT   INDEX      682             8.69 
 
Segments by ITL Waits

Owner   Tablespace Name Object Name     Subobject Name  Obj. Type       ITL Waits       % of Capture
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_PK                  INDEX              6       50.00
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_ST                  INDEX              3       25.00
ACSSPROD  ACSS_IDX03  ACSS_ORDER_HEADER_DT                  INDEX              3       25.00

Solutions:

The solution here is to tune the indexes to avoid heavy access on a small set of blocks.

Following are the options we could try:

  • Rebuild the index as a reverse key index, or hash partition the indexes listed in the ‘Segments by Row Lock Waits’ section of the AWR reports

For example :

CREATE INDEX <index name> ON <table> (<column>) REVERSE;

From the Performance Tuning Guide –

Reverse key indexes are designed to eliminate index hot spots in insert-heavy applications. These indexes are excellent for insert performance, but the downside is that they may hurt the performance of index range scans.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/design.htm#sthref112


The hash method can improve the performance of indexes where a small number of leaf blocks have high contention in a multiuser OLTP environment. In some OLTP applications, index insertions happen only at the right edge of the index. This happens when the index is defined on monotonically increasing columns. In such situations the right edge of the index becomes a hotspot because of contention for index pages, buffers, and latches for update, plus additional index maintenance activity, which results in performance degradation.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/data_acc.htm#i2678
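In the same placeholder style as the reverse-key example, a hash-partitioned index would be created like this (the partition count is an assumption; pick a value appropriate to your concurrency):

```sql
-- Spread inserts on a monotonically increasing column over 8 hash
-- partitions instead of a single hot right-edge leaf block.
CREATE INDEX <index name> ON <table> (<column>)
  GLOBAL PARTITION BY HASH (<column>) PARTITIONS 8;
```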

It’s recommended to test the application performance after rebuilding the indexes as reverse key or hash partitioned.

  • Consider increasing the CACHE size of the sequences
alter sequence <owner>.<seq name> cache <required value>;

When we use monotonically increasing sequences to populate column values, the leaf block holding the highest sequence key changes with every insert, which makes it a hot block and a potential candidate for a block split.

With a larger CACHE size (and probably the NOORDER option), each instance starts using sequence keys from a different range, which reduces the number of index keys being inserted into the same set of leaf blocks.

  • Rebuild or shrink associated index after huge amount of data purge

If a huge amount of data purge (delete) has been done, rebuilding or shrinking the associated index should help reduce the waits, via the alter index rebuild or alter index shrink space commands.
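For example, in placeholder style:

```sql
-- Rebuild the index (ONLINE keeps it usable during the rebuild) ...
ALTER INDEX <index name> REBUILD ONLINE;

-- ... or compact it in place after the purge (requires an ASSM tablespace).
ALTER INDEX <index name> SHRINK SPACE;
```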

  • Increase PCTFREE for the index

A higher PCTFREE would help avoid ITL contention on index blocks. Contention for ITLs occurs when all the available ITL slots in a block are in use and there is not enough space left in the PCTFREE area for Oracle to dynamically allocate a new ITL slot. 
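A rebuild with a higher PCTFREE (and, optionally, a higher INITRANS so more ITL slots are pre-allocated per block) might look like this; the values are assumptions to adapt to your workload:

```sql
-- Leave 20% free space per block and pre-allocate 8 ITL slots.
ALTER INDEX <index name> REBUILD PCTFREE 20 INITRANS 8;
```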

ref :


Troubleshooting ‘enq: TX – index contention’ Waits (Doc ID 873243.1)

ORA-01031: insufficient privileges error when trying to flash back package, procedure, etc. source

When you try to flash back a package, procedure, etc., Oracle raises the ORA-01031: insufficient privileges error even if you have the DBA role. Only users with the SYSDBA privilege can see the result of the flashback query.

SELECT TEXT
    FROM SYS.DBA_SOURCE
         AS OF TIMESTAMP TO_TIMESTAMP ('11.04.2022 09:00:00', 'dd.mm.yyyy hh24:MI:SS')
   WHERE name = 'MYPROC' AND TYPE = 'PROCEDURE' AND OWNER = 'ADMIN'
ORDER BY line;

[Error] Execution (23: 14): ORA-01031: insufficient privileges

There is a workaround: grant flashback on the source$ table to the required user. The same query then returns results to a user without the SYSDBA privilege.

grant flashback on source$ to user1;

conn user1/password

  SELECT TEXT
    FROM SYS.DBA_SOURCE
         AS OF TIMESTAMP TO_TIMESTAMP ('11.04.2022 09:00:00', 'dd.mm.yyyy hh24:MI:SS')
   WHERE name = 'MYPROC' AND TYPE = 'PROCEDURE' AND OWNER = 'ADMIN'
ORDER BY line;

ref: https://ora600tom.wordpress.com/2012/05/31/flashback-a-package-from-the-current-schema/

Get critical alerts report from EM with MGMT$ views

You have to intervene immediately on critical alerts from EM, because they are past the critical threshold and, if you do not, they will soon cause a problem.

But sometimes you need the big picture; maybe you need a summary, or to view all alerts together. In that situation, you can use this query to view all critical alerts in a single result set.

SELECT ac.target_name,
         ac.target_type,
         ac.metric_name,
         ac.metric_column,
         ac.metric_label,
         ac.column_label,
         ac.key_value,
         TO_CHAR (ac.collection_timestamp, 'DD/MM/YYYY HH24:MI:SS') AS alert_time,
         ac.violation_level,
         ac.message,
         ac.current_value
    FROM SYSMAN.MGMT$ALERT_CURRENT ac,  sysman.mgmt$target tg
   WHERE     ac.collection_timestamp >= SYSDATE - 1 --change interval according to your needs
         AND ac.alert_state = 'Critical'
         AND ac.TARGET_GUID=tg.TARGET_GUID 
         and tg.target_type IN ('oracle_database',
                                'oracle_listener',
                                'host',
                                'oracle_emd',
                                'oracle_home',
                                'oracle_dbsys')  --add target_types that you need to get 
ORDER BY alert_time, ac.target_name;

How to query SYSMAN.MGMT$ views with a custom user

When you try to select from Enterprise Manager Cloud Control (EM)’s management views, known as MGMT$ views, the query returns no rows even though you have granted access privileges to the user.
The reason is VPD. The OEM repository (SYSMAN) uses VPD policies, so you need to exempt the custom user from those policies. Let’s exempt the personal/custom user from the access policies.

grant EXEMPT ACCESS POLICY to <username>;

now grant select on all MGMT$ views to the custom/personal user;

grant MGMT_USER to <username>;

or you can grant only specific views;

grant select on MGMT$STORAGE_REPORT_DATA TO <USERNAME>;

now queries against the MGMT$ views return data, provided of course there is any data 🙂

Monitor Create Projection on Vertica

We need the status of progress when creating big tables/projections. After a bit of googling I could not find any monitoring scripts. The Vertica documentation mentions monitoring, but it is not sufficient. So I wrote my own scripts by trial and error. You can use these scripts to monitor the progress of projection creation. Please inform me if you discover a problem with them; in my cases these scripts ran well.

Case 1: create a new superprojection or projection from an existing table. In my tracing I see 2 long-running phases, so you will get a 2-row result, each row corresponding to one phase. First, step 1 reaches 100%, then step 2 starts. After step 2 reaches 100%, the create projection finishes successfully.

eg:

create projection public.monitor_projection_del_p
as select * from public.monitor_projection_del order by date, privateAddress,region;
select start_refresh();
(SELECT  'step 1' as 'step',
		round(((eep.counter_value)/ <anchor_table_row_nums>)*100,2)  as progress
FROM execution_engine_profiles  eep, projection_refreshes qp
where operator_name='StorageUnion' 	
	--and operator_name='ValExpr' 	--and operator_name='Sort' 	--and operator_name='TopK' 	--use anyone of this operator_name, result should be same
	and counter_name='rows produced'  
	and eep.is_executing
	and counter_value != 0
	and eep.transaction_id=qp.transaction_id
	and eep.session_id=	qp.session_id
	and qp.is_executing
	limit 1) --projection_refreshes returns 2 rows because of k-safety=1: one of them is <projection_name_b0> and the other is <projection_name_b1>. They are buddy projections and contain exactly the same rows, and therefore the same storage on disk. With limit 1 I get only one projection's information; it doesn't matter which one, both are the same under all conditions. 
union all
select 'step 2' as 'step',  
		round(sum(progress)/2,2) as progress -- projection_refreshes returns 2 rows because of k-safety=1, so sum(progress) reaches 200 in total; I divide by 2 so the script returns 100% at completion.
from
(
SELECT  eep.counter_value/<anchor_table_row_nums>*100  as progress
FROM execution_engine_profiles  eep, projection_refreshes qp
where operator_name='DataTarget'
	and counter_name='written rows' 
	and counter_tag like '%isDV=f%'
	and eep.is_executing
	and counter_value != 0
	and eep.transaction_id=qp.transaction_id
	and eep.session_id=	qp.session_id
	and qp.is_executing
) a;

Case 2: create table as select (CTAS) statement. In my tracing I see 3 long-running phases, so you will get a three-row result, each row corresponding to one phase. First, step 1 reaches 100%, then step 2 starts. After step 2 reaches 100%, step 3 starts. After step 3 reaches 100%, the create projection finishes successfully.

eg:
create table public.monitor_projection_test as select * from (select * from public.netflow limit 500000000) a order by date,message;
SELECT  'step 1' as 'step',
         round(((eep.counter_value)/<anchor_table_row_nums>)*100,2) as 'progress %'
FROM execution_engine_profiles  eep, QUERY_PROFILES qp
where eep.session_id='gdatalab_db_node0003-212021:0x3ba7c4' --the "create projection" session's id
       and operator_name='StorageUnion'
       and counter_name='rows produced'
       and  eep.is_executing
       and counter_value != 0
       and eep.transaction_id=qp.transaction_id
       and eep.session_id= qp.session_id
       and qp.is_executing
union all
SELECT   'step 2' as 'step',
         round(((eep.counter_value)/ <anchor_table_row_nums>)*100,2) as 'progress %'
FROM execution_engine_profiles  eep, QUERY_PROFILES qp
where eep.session_id='gdatalab_db_node0003-212021:0x3ba7c4' --the "create projection" session's id
       and operator_name='ValExpr'  --and operator_name='Sort' --and operator_name='TopK' --use anyone of this operator_name, result should be same
       and counter_name='rows produced' 
       and eep.is_executing
       and counter_value != 0
       and eep.transaction_id=qp.transaction_id
       and eep.session_id= qp.session_id
       and qp.is_executing
union all
select  'step 3' as 'step',
		round(sum(progress),2) as 'progress %'
from
(
SELECT  
        eep.counter_value/<anchor_table_row_nums>*100 as 'progress'
FROM execution_engine_profiles  eep, QUERY_PROFILES qp
where eep.session_id='gdatalab_db_node0003-212021:0x3ba7c4' --the "create projection" session's id
       and operator_name='DataTarget'
       and counter_name='written rows' 
       and counter_tag like '%isDV=f%'
       and eep.is_executing
       and counter_value != 0
       and eep.transaction_id=qp.transaction_id
       and eep.session_id= qp.session_id
       and qp.is_executing
       order by  counter_value desc
) a;

Trace Listener Dynamic Registration

Until 12.1, dynamic registration was controlled by PMON. With 12c, the LREG (Listener REGistration) background process performs dynamic registration.

LREG

*registers instance information with the listener.
*is a critical background process for each database instance (if it is killed, Oracle goes down).
*does everything PMON used to do, reporting service_update, service_register, and service_died in the listener.log.
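To confirm the process is running on a 12c+ instance, you can look it up in v$process (a quick sanity check, not taken from the referenced note):

```sql
-- LREG appears as a background process with pname 'LREG'.
select pname, spid from v$process where pname = 'LREG';
```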

11g to 18c tracing option

From 11.1.0.7 to 18.0, tracing can be enabled using the following command:

alter system set events='immediate trace name listener_registration level 3';

When you have finished collecting the traces, turn this off using:

alter system set events='immediate trace name listener_registration level 0';

19c tracing option

alter system set events 'trace[LREG] disk highest';
alter system set events = 'immediate trace name LREG_STATE level 3';

For other releases please read the doc below.

ref:How to Trace Dynamic Registration from PMON or LREG (12c) (Doc ID 787055.1)

Run DDL statement in a DDL trigger

Oracle prevents running a DDL statement in a DDL trigger. If you try to run a DDL statement in a DDL trigger, Oracle throws the ORA-30511 error.

ORA-00604: error occurred at recursive SQL level 1
ORA-30511: invalid DDL operation in system triggers

In my case I was trying to grant privileges to user B when user A creates a table, but the ORA-30511 error blocked me. After some googling, this blog rescued me. Anyway, I will also share the solution on my blog; here it is:

create or replace procedure execute_grant(v_ddl in varchar2)
 is
 begin
 
   execute immediate v_ddl;
 
 end;
 /


create or replace trigger ddl_trigger_test
  after create on A.SCHEMA
  declare
    v_ddl   varchar2(4000);
    v_job   number;
  begin
    if ora_dict_obj_owner = 'A' and ora_dict_obj_type = 'TABLE' then
      v_ddl := 'grant select,insert,update,delete on ' || ora_dict_obj_owner || '.' || ora_dict_obj_name || ' to B';
    end if;

  --create a one time running job
    dbms_job.submit(job => v_job, what =>  'execute_grant(''' || v_ddl || ''');', next_date => sysdate+(5/24/60/60), interval => null);
	
  end;
  /

if you want to create a dbms_schedular job please use this notation for a one time running job

  declare
    v_ddl   varchar2(4000) := 'grant select on A.some_table to B';  -- sample DDL; replace with your statement
  begin
    dbms_scheduler.create_job 
    (  
      job_name      =>  'ONE_TIME_RUNNING_JOB',  
      job_type      =>  'PLSQL_BLOCK',  
      job_action    =>  'begin execute_grant(''' || v_ddl || '''); end;',  
      start_date    =>  systimestamp + interval '5' second,  
      enabled       =>  TRUE,  
      auto_drop     =>  TRUE,  
      comments      =>  'one time running job');
  end;
  /

and maybe you want to create the trigger for DDL operations on views, procedures, etc. This link gives you an idea of the events and objects that you can use in a DDL trigger.

MRP0: Background Media Recovery terminated with error 1111. ORA-01111 – ORA-01110

Oracle throws the alerts below into the alert.log on the standby.

Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1111
Errors in file /u01/app/oracle/diag/rdbms/orcl_stby/orcl/trace/orcl_pr00_24837.trc:
ORA-01111: name for data file 127 is unknown - rename to correct file
ORA-01110: data file 127: '/u01/app/oracle/product/11.2.0.4/db_1/dbs/UNNAMED00127'
ORA-01157: cannot identify/lock data file 127 - see DBWR trace file
ORA-01111: name for data file 127 is unknown - rename to correct file
ORA-01110: data file 127: '/u01/app/oracle/product/11.2.0.4/db_1/dbs/UNNAMED00127'
Completed: alter database recover managed standby database disconnect
Recovery Slave PR00 previously exited with exception 1111
MRP0: Background Media Recovery process shutdown (orcl)

The /u01/app/oracle/product/11.2.0.4/db_1/dbs/UNNAMED00127 file could not be found on the standby server, so the ORA-01111 alert occurs in the alert.log. As I remember, Oracle did not have write privileges on the “db_create_file_dest” and “db_file_name_convert” destination locations, so the datafile was created in the $ORACLE_HOME/dbs location. So far there was no problem, because the file existed in $ORACLE_HOME/dbs. In my case someone manually deleted the /u01/app/oracle/product/11.2.0.4/db_1/dbs/UNNAMED00127 file, and then Oracle threw ORA-01111 errors because the file was no longer in place.

There are various solutions according to the Oracle doc: How to resolve ORA-01111 ORA-01110 ORA-01157 in a physical standby database (Doc ID 1416554.1)

  • you can create a backup from the primary and restore it on the standby
  • you can create an empty file on the standby side and then alter the controlfile with "ALTER DATABASE RENAME FILE '/<oracle_home path>/dbs/UNNAMED00127' to '</absolute path/real datafile name>';"
  • you can migrate the unnamed file (when the file causing the ORA-01111 error still exists in place, in $ORACLE_HOME/dbs) with "alter database create datafile '<unnamed file path>' as '<absolute path>';"

In my case the unnamed file on the standby is lost, so I will create an empty datafile and run the rename command.

touch /data2/test.dbf
alter system set standby_file_management='MANUAL';
ALTER DATABASE RENAME FILE '/u01/app/oracle/product/11.2.0.4/db_1/dbs/UNNAMED00127' to '/data2/test.dbf';
alter system set standby_file_management='AUTO';
alter database recover managed standby database disconnect;

when the problem is active you can check the issue in the dictionary views:

SQL> select * from v$recover_file where error like '%FILE%';

     FILE# ONLINE  ERROR
---------- ------- --------------------
       127 ONLINE  FILE MISSING

ref: https://heliosguneserol.com/2016/07/21/mrp0-background-media-recovery-terminated-with-error-1111/


How to resolve ORA-01111 ORA-01110 ORA-01157 in a physical standby database (Doc ID 1416554.1)