ORA-39346: Data Loss In Character Set Conversion During Import (IMPDP)


APPLIES TO:

Oracle Database - Standard Edition - Version 12.1.0.2 to 12.1.0.2 [Release 12.1]
Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Information in this document applies to any platform.

SYMPTOMS

A datapump import, with the source database being 11.2.0.2 and the target 12.1.0.2, both with database character set AL32UTF8, reports:
ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
ORA-39346: data loss in character set conversion for object DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY

CAUSE

Invalid or corrupt characters are stored in the source database, usually in comments on objects.

The fix for Bug 13696958 added warning messages to 12.1 Data Pump import for characters that could not be converted; the problem characters may well have existed before 12.1, but they would not have been reported.
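To verify the character set configuration of the source and target databases before investigating further, the character sets can be checked with a standard query against NLS_DATABASE_PARAMETERS (both databases report AL32UTF8 in the scenario above):

-- Check the database and national character sets
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');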

SOLUTION


Previously, a fix for Bug 16173664 was made available to report the affected object in the import log.
The fix was withdrawn and the Bug was superseded by unpublished Bug 21342624 - DATA LOSS DUE TO CHARACTER SET NOT SHOWING CORRECT SET OF OBJECTS DURING IMPORT.
- Apply interim patch 21342624 and run post-install step:
$ datapatch
- With the fix installed, redo the import, review the objects reported in the import log file and correct them. Look for and fix replacement characters (often an upside down '?' character) in the source environment.
The scripts below can be used to identify table comments, column comments, and stored source lines containing characters outside the 7-bit ASCII range:
set serveroutput on
declare
  -- table comments in application schemas (Oracle-supplied schemas excluded)
  cursor c_comments is
    select owner, table_name, comments
      from dba_tab_comments
     where owner not in ('APEX_050100','APPQOSSYS','AUDSYS','CTXSYS','DBSNMP','FLOWS_FILES','MDSYS','OJVMSYS','ORDSYS','PERFSTAT','SYS','SYSTEM','WMSYS','XDB')
       and comments is not null;
begin
  for i in c_comments loop
    -- scan each comment character by character for bytes outside 7-bit ASCII
    for j in 1..length(i.comments) loop
      if ascii(substr(i.comments,j,1)) > 128 then
        dbms_output.put_line(i.owner||'.'||i.table_name||' '||i.comments);
      end if;
    end loop;
  end loop;
end;
/

set serveroutput on
declare
  -- column comments in application schemas (Oracle-supplied schemas excluded)
  cursor c_comments is
    select owner, table_name, column_name, comments
      from dba_col_comments
     where owner not in ('APEX_050100','APPQOSSYS','AUDSYS','CTXSYS','DBSNMP','FLOWS_FILES','MDSYS','OJVMSYS','ORDSYS','PERFSTAT','SYS','SYSTEM','WMSYS','XDB')
       and comments is not null;
begin
  for i in c_comments loop
    for j in 1..length(i.comments) loop
      if ascii(substr(i.comments,j,1)) > 128 then
        dbms_output.put_line(i.owner||'.'||i.table_name||' '||i.column_name||' '||i.comments);
      end if;
    end loop;
  end loop;
end;
/

set serveroutput on
declare
  -- stored source in application schemas; only PROCEDUREs are checked here,
  -- adjust the TYPE filter to cover functions, packages, types, etc.
  cursor c_text is
    select owner, name, text
      from dba_source
     where owner not in ('APEX_050100','APPQOSSYS','AUDSYS','CTXSYS','DBSNMP','FLOWS_FILES','MDSYS','OJVMSYS','ORDSYS','PERFSTAT','SYS','SYSTEM','WMSYS','XDB')
       and type='PROCEDURE';
begin
  for i in c_text loop
    for j in 1..length(i.text) loop
      if ascii(substr(i.text,j,1)) > 128 then
        dbms_output.put_line(i.owner||'.'||i.name||' '||i.text);
      end if;
    end loop;
  end loop;
end;
/
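Once an offending comment has been found, it can be inspected and corrected in the source database before re-running the import. A hedged example is shown below; the HR.EMPLOYEES table and the replacement comment text are hypothetical, and DUMP() is used only to expose the suspect bytes:

-- Inspect the raw bytes of a suspect table comment (hypothetical HR.EMPLOYEES example)
select dump(comments, 1016) as comment_bytes
  from dba_tab_comments
 where owner = 'HR'
   and table_name = 'EMPLOYEES';

-- Replace the comment with clean text to remove the invalid characters
comment on table hr.employees is 'Employee master data';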

Note:
1/ With Patch 21342624 installed, the impdp log will report the objects that experience data loss during character set conversion.
With the fix installed, the character data is checked even when the client and database character sets are the same, to prevent input of invalid character data.
The fix for Bug 21342624 does not stop the error messages or repair any bad data in the source database; it only allows identification of the objects that contain the bad data.
2/ To check for conflicting patches, please use the MOS Patch Planner Tool. Refer to Note 1317012.1 - How To Use MOS Patch Planner To Check And Request The Conflict Patches?
If no patch exists for your version, please contact Oracle Support for a backport request.


- reference (Oracle Doc ID 1958604.1)

ALTER SYSTEM CANCEL SQL : Cancel a SQL Statement in a Session in Oracle Database 18c


The ALTER SYSTEM CANCEL SQL command was introduced in Oracle Database 18c to cancel a SQL statement in a session, providing an alternative to killing a rogue session. If you ultimately have to kill the session, that is covered in a separate article.

Syntax

The basic syntax of the ALTER SYSTEM CANCEL SQL statement is shown below.
ALTER SYSTEM CANCEL SQL 'SID, SERIAL[, @INST_ID][, SQL_ID]';
If the INST_ID is omitted, it is assumed you mean the current instance. If the SQL_ID is omitted, it is assumed you mean the SQL that is currently running in the specified session. Some of the variations are shown below.
-- Current SQL in session on this instance.
ALTER SYSTEM CANCEL SQL '738, 64419';

-- Current SQL in session on instance with INST_ID = 1.
ALTER SYSTEM CANCEL SQL '738, 64419, @1';

-- Specified SQL in session on this instance.
ALTER SYSTEM CANCEL SQL '738, 64419, 84djy3bnatbvq';

-- Specified SQL in session on instance with INST_ID = 1.
ALTER SYSTEM CANCEL SQL '738, 64419, @1, 84djy3bnatbvq';
All four pieces of information can be retrieved from the GV$SESSION view, as shown below.

Identify the Session to be Cancelled

Cancelling a SQL statement in the wrong session can be very destructive, so be very careful when identifying the session and SQL.
Identify the offending session and SQL using the GV$SESSION view. The following query joins to the GV$PROCESS view to get the SPID column, which is not really necessary for this command.
SET LINESIZE 150
COLUMN spid FORMAT A10
COLUMN username FORMAT A30
COLUMN program FORMAT A45

SELECT s.inst_id,
       s.sid,
       s.serial#,
       s.sql_id,
       p.spid,
       s.username,
       s.program
FROM   gv$session s
       JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE  s.type != 'BACKGROUND';
The SID, SERIAL#, INST_ID and SQL_ID values of the relevant session can then be substituted into the commands in the previous sections.
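For example, if the query above returned INST_ID=1, SID=738, SERIAL#=64419 and SQL_ID=84djy3bnatbvq (the same illustrative values used earlier), the running statement could be cancelled with:

-- Cancel the identified statement without killing the session
ALTER SYSTEM CANCEL SQL '738, 64419, @1, 84djy3bnatbvq';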

How to Format Corrupted Block Not Part of Any Segment

Reference Doc ID 336133.1

Scenarios :
1. Rman backup fails with ORA-19566 error and the block reported corrupt does not belong to any object
2. Dbverify shows the block as corrupted
3. Corrupted block does not belong to any object

Cause :
A corrupted block that does not belong to any object will still be reported by RMAN and DBV until it is reused and reformatted.

Solution :

DISCLAIMER: The steps given in this note are not always guaranteed to work.
A possible way to fix the problem is provided below. It is not guaranteed to work, but it has been known to resolve the problem in several cases. Also, if many blocks are reported corrupt in a particular datafile, pass the highest block number reported corrupt for that datafile when prompted for the block number in Step 6.
When an object is recreated, the blocks allocated to it (even corrupted blocks) are returned to free space, where they await reallocation to an object requiring additional space. Once they are reallocated to a new extent of an object, and only when a DML operation makes use of those blocks (including corrupted blocks that were in free space and are now allocated), they are reformatted just before the DML operation modifies them.
Note that a simple allocation of an extent does not format the block. Step 7 allocates the extent and Step 8 performs a DML operation that makes use of the blocks allocated in Step 7, so that the corrupt block gets reused and formatted.

In this note we try to reformat the block manually.

Step 1 - Identify corrupt datafile
The corruption may be reported by tools such as DBV and RMAN, or in the alert.log.

For example, you may receive the following during your RMAN backups:
RMAN-03009: failure of backup command on nm4501 channel at 04/29/2005 09:44:41
ORA-19566: exceeded limit of 0 corrupt blocks for file E:\xxxx\test.ORA.

Corrupt block is present in file E:\xxxx\test.ORA.
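The file name reported in the error can be mapped to its absolute and relative file numbers, which are needed in the later steps; a simple lookup against v$datafile (using the example file name from the error above):

-- Find the absolute (file#) and relative (rfile#) file numbers of the affected datafile
SELECT file#, rfile#, name
FROM   v$datafile
WHERE  name = 'E:\xxxx\test.ORA';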

Step 2 - Run DBV/RMAN validate on the affected datafile and check for the corrupt block
Run dbverify on the datafile which reports corrupt block.
# dbv userid={system/password} file={full path filename} logfile={output filename}
Check the {output filename} for the result
Sample Output:
    DBVERIFY: Release 9.2.0.3.0 - Production on Thu Aug 25 11:15:54 2005
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


DBVERIFY - Verification starting : FILE = E:\xxxx\test.ORA

 Page 48740 is marked corrupt    ***

 Corrupt block relative dba: 0x01c0be64 (file 7, block 48740)
 Bad check value found during dbv:

  Data in bad block -

  type: 0 format: 2 rdba: 0x0000be64
  last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
  consistency value in tail: 0x00000001
  check value in block header: 0xb964, computed block checksum: 0x2a5a
  spare1: 0x0, spare2: 0x0, spare3: 0x0
  ***

 DBVERIFY - Verification complete


  Total Pages Examined         : 64000
  Total Pages Processed (Data) : 0
  Total Pages Failing   (Data) : 0
  Total Pages Processed (Index): 1751
  Total Pages Failing   (Index): 0
  Total Pages Processed (Other): 45
  Total Pages Processed (Seg)  : 0
  Total Pages Failing   (Seg)  : 0
  Total Pages Empty            : 62203
  Total Pages Marked Corrupt   : 1
Note that Block 48740 is reported as corrupt in datafile 7.

Or in RMAN:
For Entire database

Rman> backup validate check logical database ;

For specific datafile

Rman> backup validate check logical datafile <fileno> ;

Once done query

SQL>Select * from v$database_block_corruption ;

 **If a large number of blocks is reported, proceed to Step 4 instead of Step 3.

Step 3 - Check whether block is part of any object - For Small Number of Corrupted Blocks
Query dba_extents and  cross check the block does not belong to any object :
SQL> select segment_name, segment_type, owner
       from dba_extents
      where file_id = <Absolute file number>
        and <corrupted block number> between block_id
            and block_id + blocks -1;

If it doesn't belong to an object, double-check whether it exists in dba_free_space:
SQL> Select * from dba_free_space where file_id= <Absolute file number>
     and <corrupted block number> between block_id and block_id + blocks -1;

If the block cannot be found in DBA_FREE_SPACE or DBA_EXTENTS, the block might be a file space usage bitmap and cannot be reformatted.

Step 4 - Check whether block is part of any object - For Large Number of Corrupted Blocks
If you have already run the RMAN validate in Step 2, go directly to the SQL*Plus script given below to identify the object.
$ rman target / nocatalog
or
$ rman target sys/ nocatalog

run {
allocate channel d1 type disk;
allocate channel d2 type disk;
------------------------------------------------------------------------
--  multiple channels may be allocated for parallelizing purposes
--  depends: RMAN - Min ( MAXOPENFILES , FILESPERSET )
--  Defaults: MAXOPENFILES =8, FILESPERSET =64
------------------------------------------------------------------------
allocate channel dn type disk;
backup check logical validate database;
}

Note: if RDBMS is < 11g and NOARCHIVELOG mode, then db must be in MOUNTED mode


*** The RMAN check logical validate database command MUST be executed and completed before going any further.
*** The v$database_block_corruption view is only populated when this command completes.
*** If it has not completed, you risk getting invalid or incomplete information from the step below.


 Run the following SQL query to determine whether the block is in free space or occupied:
set lines 200 pages 10000
col segment_name format a30

SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
, greatest(e.block_id, c.block#) corr_start_block#
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
- greatest(e.block_id, c.block#) + 1 blocks_corrupted
, null description
FROM dba_extents e, v$database_block_corruption c
WHERE e.file_id = c.file#
AND e.block_id <= c.block# + c.blocks - 1
AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#
, header_block corr_start_block#
, header_block corr_end_block#
, 1 blocks_corrupted
, 'Segment Header' description
FROM dba_segments s, v$database_block_corruption c
WHERE s.header_file = c.file#
AND s.header_block between c.block# and c.block# + c.blocks - 1
UNION
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
, greatest(f.block_id, c.block#) corr_start_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
- greatest(f.block_id, c.block#) + 1 blocks_corrupted
, 'Free Block' description
FROM dba_free_space f, v$database_block_corruption c
WHERE f.file_id = c.file#
AND f.block_id <= c.block# + c.blocks - 1
AND f.block_id + f.blocks - 1 >= c.block#
ORDER BY file#, corr_start_block#;



Step 5 - Create a dummy table as a user other than SYS or SYSTEM
SQL> connect scott/password

Create a dummy table in the tablespace containing datafile which has the corrupt block - and use nologging option to prevent redo records from being generated: 
SQL> create table s (
       n number,
       c varchar2(4000)
     ) nologging tablespace <tablespace name having the corrupt block> pctfree 99; 

Different storage parameters can be used to suit the specific environment.
We use PCTFREE 99 to speed up the reformatting of the blocks.

Verify that the table is created in the correct tablespace by querying user_segments:
    
SQL> select segment_name,tablespace_name from user_segments
      where segment_name='S' ;

Please note that in 11gR2, due to deferred segment creation, the query above against user_segments may not report any rows; in such cases, query user_tables instead:
SQL> select table_name, tablespace_name from user_tables where table_name = 'S';


Step 6 - Create trigger on dummy table which throws exception once the corrupted block is reused
Login as SYSDBA
Please note that when prompted for the file number, enter the relative file number (rfile# value from v$datafile).
CREATE OR REPLACE TRIGGER corrupt_trigger
  AFTER INSERT ON scott.s
  REFERENCING OLD AS p_old NEW AS new_p
  FOR EACH ROW
DECLARE
  corrupt EXCEPTION;
BEGIN
  IF (dbms_rowid.rowid_block_number(:new_p.rowid)=&blocknumber)
 and (dbms_rowid.rowid_relative_fno(:new_p.rowid)=&filenumber) THEN
     RAISE corrupt;
  END IF;
EXCEPTION
  WHEN corrupt THEN
     RAISE_APPLICATION_ERROR(-20000, 'Corrupt block has been formatted');
END;
/

When prompted for the block number, provide the block reported corrupt as input.
When prompted for the file number, enter the relative file number (rfile# value from v$datafile) of the corrupt datafile.
Step 7- Allocate space to the table from the affected datafile
Notes
1) If this is an ASSM tablespace, you may need to repeat this step a few times. That is, create multiple tables and allocate multiple extents.
And periodically look at dba_extents to ensure that the free space is now allocated to a dummy table.

This is because ASSM will automatically determine the size of the next extent
2) It is advisable to ensure that AUTOEXTEND is OFF for the datafile, to prevent it from growing; a way to check and disable it is shown below.
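A minimal sketch of how this might be checked and switched off, assuming the example datafile name used earlier in this note:

-- Check whether the affected datafile can autoextend
SELECT file_id, file_name, autoextensible
FROM   dba_data_files
WHERE  file_name = 'E:\xxxx\test.ORA';

-- Disable autoextend while the corrupt block is being reused
ALTER DATABASE DATAFILE 'E:\xxxx\test.ORA' AUTOEXTEND OFF;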

Please note: if your database is on 10.2.0.4/11.1.0.7 and the tablespace containing the dummy table uses ASSM, you might hit the bug below while manually allocating extents to the dummy table. Ensure the one-off Patch 6647480 is installed on 10.2.0.4/11.1.0.7 databases before manually allocating extents.
Bug 6647480 - Corruption / OERI [kddummy_blkchk] .. [18021] with ASSM (Doc ID 6647480.8)
Firstly, find the extent size by querying dba_free_space  
SQL> Select BYTES from dba_free_space where file_id=<file no> and <corrupt block no> between block_id and block_id + blocks -1;

BYTES
----------
     65536
In this case it's 64K, so allocate the extent as follows:

SQL> alter table scott.s
allocate extent (DATAFILE 'E:\xxxx\test.ORA' SIZE 64K);

If there are multiple extents of 64K free in this datafile, you may need to use this loop:
BEGIN
  FOR i IN 1..1000000 LOOP
    EXECUTE IMMEDIATE 'alter table scott.s allocate extent (DATAFILE ''E:\xxxx\test.ORA'' SIZE 64K)';
  END LOOP;
END;
/


Keep allocating until the corrupted block is part of scott.s. Use this query to confirm:
SQL> select segment_name, segment_type, owner
       from dba_extents
      where file_id = <Absolute file number>
        and <corrupt block number> between block_id
            and block_id + blocks -1 ;


Step 8 -  Insert data into dummy table to format the block
Sample code (depending on the size of the tablespace it may vary):
BEGIN
  FOR i IN 1..1000000000 LOOP
    INSERT /*+ APPEND */ INTO scott.s SELECT i, LPAD('REFORMAT',3092,'R') FROM dual;
    COMMIT;
  END LOOP;
END;
/

 Or
BEGIN
  FOR i IN 1..1000000000 LOOP
    INSERT INTO scott.s VALUES(i,'x');
  END LOOP;
END;
/
 Or use the below code which includes 2 loops:
BEGIN
  FOR i IN 1..1000000000 LOOP
    FOR j IN 1..1000 LOOP
      INSERT INTO scott.s VALUES(i,'x');
    END LOOP;
    COMMIT;
  END LOOP;
END;
/

The trigger fires for every row inserted into the table, and an exception with ORA-20000 is raised as soon as the first row is inserted into the corrupt block.
Step 9 - Confirm that the block is now corruption free
Run dbverify or RMAN validate on the corrupt datafile (or entire database) again. It will not show the block as corrupted.
Ensure you do a couple of manual log switches or checkpoints so that the information in memory is written to disk.
RMAN backup will not report any error on this block.

Before running the actual backup, you can re-run the RMAN validate command on the datafile and check that v$database_block_corruption no longer shows the formatted block as corrupted.
 For DB version <= 10gR2:
 Rman> backup validate check logical datafile <fileno>,<fileno> ;
 For DB version >= 11gR1:
 Rman> backup validate check logical datafile <fileno> ;
 or
 Rman> validate datafile <fileno> block <blockno reported corrupt>, <blockno reported corrupt> ;

 Once done
 SQL>Select * from v$database_block_corruption ;


Step 10 - Drop the dummy table created in Step 5

SQL> DROP TABLE scott.s ;

If 10gR1 or above, drop the table with the PURGE option, as shown below.
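For example, using the same dummy table created in Step 5:

SQL> DROP TABLE scott.s PURGE;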

Step 11 - Do a manual log switch and checkpoint

Do a couple of log switches and checkpoints so that the blocks formatted in memory are written to disk and dbverify no longer reports errors.
SQL> Alter system switch logfile ;  --> Do this a couple of times

SQL>Alter system checkpoint ;

Step 12 - Drop the trigger created in Step 6
SQL> DROP trigger corrupt_trigger ;

Important:
Bug 7381632 - ORA-1578 Free corrupt blocks may not be reformatted when Flashback is enabled

Oracle DBMS_SHARED_POOL
Version 12.2.0.1


General Information
Purpose: Pin and unpin objects in memory
AUTHID: DEFINER
Dependencies: DBMS_OUTPUT, DBMS_UTILITY, V$SQLAREA, DBMS_SQLTCB_INTERNAL, V$DB_OBJECT_CACHE, X$KGLOB, DBMS_STANDARD
Documented: Yes
First Available: 10gR1
Security Model: Owned by SYS with EXECUTE granted to the EXECUTE_CATALOG_ROLE role
Source: {ORACLE_HOME}/rdbms/admin/dbmspool.sql
Subprograms
ABORTED_REQUEST_THRESHOLD
Sets the aborted request threshold for the shared pool
dbms_shared_pool.aborted_request_threshold(threshold_size IN NUMBER);

-- the range of threshold_size is 5000 to ~2 GB inclusive
exec dbms_shared_pool.aborted_request_threshold(100000000);
KEEP
Pin A Cursor In Memory

Overload 1
dbms_shared_pool.keep(name IN VARCHAR2, flag IN CHAR DEFAULT 'P');

Flag Value  Description
C           cursor
JC          java class
JD          java shared data
JR          java resource
JS          java source
P           procedure
Q           sequence
R           trigger
T           type
conn sys@pdbdev as sysdba

GRANT select ON gv_$open_cursor TO uwclass;

conn uwclass/uwclass@pdbdev

-- SQL statement to load cursor into the shared pool
VARIABLE x REFCURSOR

BEGIN
   OPEN :x for
   SELECT *
   FROM all_tables;
END;
/

--Determine address and hash value of the SQL statement
SELECT address, hash_value
FROM gv$open_cursor
WHERE sql_text LIKE '%ALL_TABLES%';

-- substitute your query results for mine, below
exec sys.dbms_shared_pool.keep('1C5B28DC, 3958201300', 'C');

conn sys@pdbdev as sysdba

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND TYPE = 'CURSOR';
Pin A Package, Procedure Or Function In Memory (this is the default)

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';

CREATE OR REPLACE PROCEDURE testproc IS
BEGIN
  NULL;
END testproc;
/

exec sys.dbms_shared_pool.keep('testproc', 'P');

conn sys@pdbdev as sysdba

col owner format a30

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';
Pin A Sequence In Memory

conn sys@pdbdev as sysdba
SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';

conn uwclass/uwclass@pdbdev

CREATE SEQUENCE seq_test;

exec sys.dbms_shared_pool.keep('seq_test', 'Q');

conn sys@pdbdev as sysdba

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';
Pin A Trigger In Memory

conn uwclass/uwclass@pdbdev

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';

CREATE TABLE t (
testcol VARCHAR2(20));

CREATE OR REPLACE TRIGGER testtrig
BEFORE UPDATE
ON t
BEGIN
   NULL;
END testtrig;
/

exec sys.dbms_shared_pool.keep('testtrig', 'R');

conn sys@pdbdev as sysdba

SELECT owner, name, type
FROM gv$db_object_cache
WHERE kept = 'YES'
AND owner = 'UWCLASS';
Overload 2
dbms_shared_pool.keep(
schema    IN VARCHAR2,
objname   IN VARCHAR2,
namespace IN NUMBER,
heaps     IN NUMBER);
TBD
Overload 3
dbms_shared_pool.keep(
hash      IN VARCHAR2,
namespace IN NUMBER,
heaps     IN NUMBER);
TBD
MARKHOT
Mark a library cache object as a hot object

Overload 1
dbms_shared_pool.markhot(
schema    IN VARCHAR2,
objname   IN VARCHAR2,
namespace IN NUMBER  DEFAULT 1,     -- library cache namespace to search
global    IN BOOLEAN DEFAULT TRUE); -- If TRUE mark hot on all RAC instances
TBD
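No demo is provided above; a minimal, hypothetical sketch might look like the following, reusing the UWCLASS.TESTPROC procedure created in the KEEP demo and assuming the default namespace and global settings are appropriate for a PL/SQL unit:

-- Mark the procedure as hot so the library cache can create separate copies and reduce contention
exec sys.dbms_shared_pool.markhot(schema=>'UWCLASS', objname=>'TESTPROC');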
Overload 2
dbms_shared_pool.markhot(
hash      IN VARCHAR2,              -- 16-byte hash value for the object
namespace IN NUMBER  DEFAULT 1,
global    IN BOOLEAN DEFAULT TRUE);
CREATE OR REPLACE TRIGGER pin_markhot_objects
AFTER STARTUP ON DATABASE
BEGIN
  dbms_shared_pool.markhot(hash=>'01630e17906c4f222031266c21b49303',namespace=>0);
  dbms_shared_pool.markhot(hash=>'119df082543f104e29cad00ee793c8aa',namespace=>0);
  dbms_shared_pool.markhot(hash=>'251d24517d18ee7b2154e091b80e64d2',namespace=>0);
  dbms_shared_pool.markhot(hash=>'28104e170c4020b7d6991509b4886443',namespace=>0);
  dbms_shared_pool.markhot(hash=>'3362900d064bc7d9a1812303ea49391e',namespace=>0);
END;
/
PURGE
Purge the named object or particular heap(s) of the object

Overload 1
dbms_shared_pool.purge(
name  IN VARCHAR2,
flag  IN CHAR   DEFAULT 'P',
heaps IN NUMBER DEFAULT 1);


Note: heaps is a bitmask of the heaps to purge. For example, if heap 0 and heap 6 are to be purged:
1<<0 | 1<<6 => hex 0x41 => decimal 65, so specify heaps=>65.
The default is 1, i.e. heap 0, which means the whole object will be purged.

In some versions this may not work unless you set event 5614566 so
I have shown that in this demo. It is not necessary in 11gR1 or above.
conn sys@pdbdev as sysdba

alter session set events '5614566 trace name context forever';

SELECT /* find me */ COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

SELECT address, hash_value, sql_text
FROM v$sqlarea
WHERE sql_text LIKE '%find me%';

exec dbms_shared_pool.purge('385C52F8,943808449', 'C');

SELECT address, hash_value, sql_text
FROM v$sqlarea
WHERE sql_text LIKE '%find me%';
Purge the named object or particular heap(s) of the object

Overload 2
dbms_shared_pool.purge(
schema    IN VARCHAR2,
objname   IN VARCHAR2,
namespace IN NUMBER,
heaps     IN NUMBER);
TBD
Purge the object or particular heap(s) of the object

Overload 3
dbms_shared_pool.purge(
hash      IN VARCHAR2,
namespace IN NUMBER,
heaps     IN NUMBER);
TBD
SIZES
Shows what is in the Shared Pool larger than a specified size in bytes
dbms_shared_pool.sizes(minsize IN NUMBER);
set serveroutput on

exec dbms_shared_pool.sizes(500);
UNKEEP
Unkeep the named object
Overload 1
dbms_shared_pool.unkeep(name IN VARCHAR2, flag IN CHAR DEFAULT 'P');
exec dbms_shared_pool.unkeep('UWCLASS.TESTPROC', 'P');
Unkeep an object in the shared pool

Overload 2
dbms_shared_pool.unkeep(
schema    IN VARCHAR2,
objname   IN VARCHAR2,
namespace IN NUMBER);
TBD
Overload 3
dbms_shared_pool.unkeep(
hash      IN VARCHAR2,
namespace IN NUMBER);
TBD
UNMARKHOT
Unmark a library cache object as a hot object

Overload 1
dbms_shared_pool.unmarkhot(
schema    IN VARCHAR2,
objname   IN VARCHAR2,
namespace IN NUMBER  DEFAULT 1,
global    IN BOOLEAN DEFAULT TRUE);
TBD
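A matching hypothetical sketch to reverse the MARKHOT example above, using the same assumed object and default settings:

-- Remove the hot mark from the previously marked procedure
exec sys.dbms_shared_pool.unmarkhot(schema=>'UWCLASS', objname=>'TESTPROC');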
Overload 2
dbms_shared_pool.unmarkhot(
hash      IN VARCHAR2,
namespace IN NUMBER  DEFAULT 1,
global    IN BOOLEAN DEFAULT TRUE);
exec dbms_shared_pool.unmarkhot(hash=>'7eb6e0f357f73998ba9116f63f50f54e',namespace=>0);
Related Queries
Find information on pinned cursors

SELECT address, hash_value
FROM gv$sqlarea
WHERE sql_text LIKE '%<name_from_v$db_object_cache>%';