Structured Query Language/COMMIT and ROLLBACK + SAVEPOINT

COMMIT and ROLLBACK:

A DBMS offers a special service: we can undo a single write or delete operation, or even multiple consecutive ones. To do so we use the command ROLLBACK. When modifying data, the DBMS first writes all new, changed or deleted data to a temporary space. During this stage the modified data is not yet part of the 'regular' database. If we are sure the modifications should apply, we use the COMMIT command. If we want to revert our changes, we use the ROLLBACK command. All changes up to the final COMMIT or ROLLBACK are considered to be part of a so-called transaction.
The syntax of COMMIT and ROLLBACK is very simple.
COMMIT WORK;     -- commits all previous INSERT, UPDATE and DELETE commands
                 -- which occurred since the last COMMIT or ROLLBACK
ROLLBACK WORK;   -- reverts all previous INSERT, UPDATE and DELETE commands
                 -- which occurred since the last COMMIT or ROLLBACK
The keyword 'WORK' is optional.

AUTOCOMMIT

The AUTOCOMMIT feature automatically performs a COMMIT after every write operation (INSERT, UPDATE or DELETE). This feature is not part of the SQL standard, but it is implemented, and activated by default, in some implementations. If we want to use the ROLLBACK command, we must deactivate AUTOCOMMIT. (After an automatic or explicit COMMIT command a ROLLBACK command is syntactically okay, but it does nothing, as everything is already committed.) Often we can deactivate AUTOCOMMIT with a separate command like 'SET autocommit = 0;' or 'SET autocommit off;', or by clicking an icon in a GUI.
To test the following statements it is necessary to work without AUTOCOMMIT.
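How AUTOCOMMIT is switched off depends on the product and the client tool. As a rough, hedged sketch (assuming a MySQL session, an Oracle SQL*Plus session and a PostgreSQL psql session respectively), the commands look like this:

-- MySQL / MariaDB: disable autocommit for the current session
SET autocommit = 0;

-- Oracle SQL*Plus: client-side setting (the Oracle server itself does not autocommit)
SET AUTOCOMMIT OFF

-- PostgreSQL psql: client-side meta-command (psql autocommits by default)
\set AUTOCOMMIT off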

COMMIT

Let us insert a new person into the database and test the COMMIT.
-- Store a new person with id 99.
INSERT INTO person (id, firstname, lastname, date_of_birth,    place_of_birth, ssn,           weight)
VALUES             (99, 'Harriet', 'Flint',  DATE'1970-10-19', 'Dallas',       '078-05-1120', 65);

-- Is the new person really in the database? The process which executes the write operation will see its results,
-- even if they are not yet committed. (One hit expected.)
SELECT *
FROM   person
WHERE  id = 99;

-- Try COMMIT command
COMMIT;

-- Is she still in the database? (One hit expected.)
SELECT *
FROM   person
WHERE  id = 99;
Now we remove the person from the database.
-- Remove the new person
DELETE
FROM   person
WHERE  id = 99;

-- Is the person really gone? Again, the process which performs the write operation will see the changes, even
-- if they are not yet committed. (No hit expected.)
SELECT *
FROM   person
WHERE  id = 99;

-- Try COMMIT command
COMMIT;

-- Is the person still in the database? (No hit expected.)
SELECT *
FROM   person
WHERE  id = 99;

ROLLBACK

The exciting command is ROLLBACK. It reverts the changes made by previous INSERT, UPDATE or DELETE commands.
We delete and restore Mrs. Hamilton from our example database.
DELETE
FROM   person
WHERE  id = 3; -- Lisa Hamilton

-- no hit expected
SELECT *
FROM   person
WHERE  id = 3;

-- ROLLBACK restores the deletion
ROLLBACK;

-- ONE hit expected !!! Else: check AUTOCOMMIT
SELECT *
FROM   person
WHERE  id = 3;
The ROLLBACK is not restricted to one single row. It may affect several rows, several commands, different kinds of commands and even several tables.
-- same as above
DELETE
FROM   person
WHERE  id = 3; 
-- destroy all e-mail addresses
UPDATE contact
SET    contact_value = 'unknown'
WHERE  contact_type = 'email';

-- verify modifications
SELECT * FROM person;
SELECT * FROM contact;

-- A single ROLLBACK command restores the deletion in one table and the modifications in another table
ROLLBACK;

-- verify ROLLBACK
SELECT * FROM person;
SELECT * FROM contact;


SAVEPOINT

As transactions can span many statements, it is likely that runtime errors or logical errors arise. In some of these cases an application may want to roll back only part of the current transaction and commit the rest, or retry the processing. To support this, it is possible to define intermediate transaction boundaries that mark all processing from the start of the transaction up to that point in time. Such intermediate boundaries are called savepoints. COMMIT and ROLLBACK statements terminate the complete transaction, including all of its savepoints.

-- Begin the transaction with an explicit command
START TRANSACTION;
--
INSERT ... ;
-- Define a savepoint
SAVEPOINT step_1;
--
UPDATE ... ;
-- Discard only the UPDATE. The INSERT remains.
ROLLBACK TO SAVEPOINT step_1;
-- try again (or do any other action)
UPDATE ... ;
-- confirm INSERT and the second UPDATE
COMMIT;
During the lifetime of a transaction a savepoint can be released if it's no longer needed. (At the end of the transaction it's implicitly released.)
-- ...
-- ...
RELEASE SAVEPOINT <savepoint_name>;
-- This has no effect on the results of previous INSERT, UPDATE or DELETE commands. It only eliminates the
-- possibility to ROLLBACK TO SAVEPOINT <savepoint_name>.
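To make the skeleton above concrete, here is a minimal sketch against the person table used in the earlier examples. The id and the column values are purely illustrative and assume the same columns as the INSERT in the COMMIT section.

START TRANSACTION;

-- insert a new person (illustrative values)
INSERT INTO person (id, firstname, lastname, date_of_birth,    place_of_birth, ssn,           weight)
VALUES             (98, 'Victor',  'Adams',  DATE'1982-03-05', 'Boston',       '078-05-1121', 80);

-- remember the state reached so far
SAVEPOINT step_1;

-- an update we are not sure about
UPDATE person
SET    weight = 0
WHERE  id = 98;

-- discard only the UPDATE; the INSERT remains part of the open transaction
ROLLBACK TO SAVEPOINT step_1;

-- try again with a sensible value
UPDATE person
SET    weight = 81
WHERE  id = 98;

-- confirm the INSERT and the second UPDATE
COMMIT;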




The Hadoop FAQ for Oracle DBAs

Oracle DBAs, get answers to many of your most common questions about getting started with Hadoop.
As a former Oracle DBA, I get a lot of questions (most welcome!) from current DBAs in the Oracle ecosystem who are interested in Apache Hadoop. Here are a few of the more frequently asked questions, along with my most common replies.
How much does the IT industry value Oracle DBA professionals who have switched to Hadoop administration, or added it to their skill set?
Right now, a lot. There are not many experienced Hadoop professionals around (yet)!
In many of my customer engagements, I work with the DBA team there to migrate parts of their data warehouse from Teradata or Netezza to Hadoop. They don’t realize it at the time, but while working with me to write Apache Sqoop export jobs, Apache Oozie workflows, Apache Hive ETL actions, and Cloudera Impala reports, they are learning Hadoop. A few months later, I’m gone, but a new team of Hadoop experts who used to be DBAs is left in place.
My solutions architect team at Cloudera also hires ex-DBAs as solutions consultants or system engineers. We view DBA experience as invaluable for those roles.
What do you look for when hiring people with no Hadoop experience?
I strongly believe that DBAs have the skills to become excellent Hadoop experts — but not just any DBAs. Here are some of the characteristics I look for:

  • Comfort with the command line. Point-and-click DBAs and ETL developers need not apply.
  • Experience with Linux. Hadoop runs on Linux so that’s where much of the troubleshooting will happen. You need to be very comfortable with Linux OS, filesystem, tools, and command line. You should understand OS concepts around memory management, CPU scheduling, and IO.
  • Knowledge of networks. ISO layers, what ssh is really doing, name resolution, basic understanding of switching.
  • Good SQL skills. You know SQL and you are creative in your use of it. Experience with data warehouse basics such as partitioning and parallelism is a huge plus. ETL experience is a plus. Tuning skills are a plus.
  • Programming skills. Not necessarily Java (see below). But, can you write a bash script? Perl? Python? Can you solve a few simple problems in pseudo-code? If you can't code at all, that's a problem.
  • Troubleshooting skills. This is huge, as Hadoop is far less mature than Oracle. You’ll need to Google error messages like a pro, but also be creative and knowledgeable about where to look when Google isn’t helpful.
  • For senior positions, we look for systems and architecture skills too. Prepare to explain how you’ll design a flight-scheduling system or something similar.
  • And since our team is customer facing, communication skills are a must. Do you listen? Can you explain a complex technical point? How do you react when I challenge your opinion?
Is that maybe too much to ask? Possibly. But I can’t think of anything I could remove and still expect success with our team.
How do I start learning Hadoop?
The first task we give new employees is to set up a five-node cluster in the AWS cloud. That’s a good place to start. Neither Cloudera Manager nor Apache Whirr is allowed; they make things too easy.
The next step is to load data into your cluster and analyze it.
There are also tutorials that show how to load Twitter data using Apache Flume and analyze it using Hive.
Also, Cloudera’s QuickStart VM (download here) includes TPC-H data and queries. You can run your own TPC-H benchmarks in the VM.
There are also some good books to help you get started. My favorite is Eric Sammer's Hadoop Operations – it's concise and practical, and I think DBAs will find it very useful. The chapter on troubleshooting is very entertaining. Other books that DBAs will find useful are Hadoop: The Definitive Guide, Programming Hive, and Apache Sqoop Cookbook (all of which are authored or co-authored by Clouderans).

Do I need to know Java?
Yes and no :)
You don’t need to be a master Java programmer. I’m not, and many of my colleagues are not. Some never write Java code at all.
You do need to be comfortable reading Java stack traces and error messages. You’ll see many of those. You’ll also need to understand basic concepts like jars and classpath.
Being able to read Java source code is useful. Hadoop is open source, and digging into the code often helps you understand why something works the way it does.
Even though mastery isn't required, the ability to write Java is often useful. For example, Hive UDFs are typically written in Java (and it's easier to do that than you think).

Conclusion

If you’re an Oracle DBA interested in learning Hadoop (or working for Cloudera), this post should get you started.

Hadoop FAQ – But What About The DBAs?


There is one question I hear every time I make a presentation about Hadoop to an audience of DBAs. This question was also recently asked in LinkedIn’s DBA Manager forum, so I finally decided to answer it in writing, once and for all.
“As we all see there are lot of things happening on Big Data using Hadoop etc….
Can you let me know where do normal DBAs like fit in this :
DBAs supporting normal OLTP databases using Oracle, SQL Server databases
DBAs who support day to day issues in Datawarehouse environments .
Do DBAs need to learn Java (or) Storage Admin ( like SAN technology ) to get into Big Data ? ”
I hear a few questions here:
  • Do DBAs have a place at all in Big Data and Hadoop world? If so, what is that place?
  • Do they need new skills? Which ones?
Let me start by introducing everyone to a new role that now exists in many organizations: Hadoop Cluster Administrator.
Organizations that did not yet adopt Hadoop sometimes imagine Hadoop as a developer-only system. I think this is the reason why I get so many questions about whether or not we need to learn Java every time I mention Hadoop. Even within Pythian, when I first introduced the idea of Hadoop services, my managers asked whether we will need to learn Java or hire developers.
Organizations that did adopt Hadoop found out that any production cluster larger than 20-30 nodes requires a full-time admin. This admin's job is surprisingly similar to a DBA's job – he is responsible for the performance and availability of the cluster, the data it contains, and the jobs that run there. The list of tasks is almost endless and also strangely familiar – deployment, upgrades, troubleshooting, configuration, tuning, job management, installing tools, architecting processes, monitoring, backups, recovery, etc.
I have not seen a single organization with a production Hadoop cluster that didn't have a full-time admin, but if you don't believe me – note that Cloudera is offering a Hadoop Administrator certification and that O'Reilly is selling a book called "Hadoop Operations".
So you are going to need a Hadoop admin.
Who are the candidates for the position? The best option is to hire an experienced Hadoop admin. In 2-3 years, no one will even consider doing anything else. But right now there is an extreme shortage of Hadoop admins, so we need to consider less perfect candidates. The usual suspects tend to be: junior Java developers, sysadmins, storage admins, and DBAs.
Junior Java developers tend not to do well in a cluster admin role, just like PL/SQL developers rarely make good DBAs. Operations and dev are two different career paths that tend to attract different types of personalities.
When we get to the operations personnel, storage admins are usually out of consideration because their skillset is too unique and valuable to other parts of the organization. I’ve never seen a storage admin who became a Hadoop admin, or any place where it was even seriously considered.
I’ve seen both DBAs and sysadmins become excellent Hadoop admins. In my highly biased opinion, DBAs have some advantages:
  • Everyone knows DBA stands for “Default Blame Acceptor”. Since the database is always blamed, DBAs typically have great troubleshooting skills, processes, and instincts. All of these are critical for good cluster admins.
  • DBAs are used to managing systems with millions of knobs to turn, all of which have a critical impact on the performance and availability of the system. Hadoop is similar to databases in this sense – tons of configurations to fine-tune.
  • DBAs, much more than sysadmins, are highly skilled in keeping developers in check and making sure no one accidentally causes critical performance issues on an entire system. This skill is critical when managing Hadoop clusters.
  • DBA experience with DWH (especially Exadata) is very valuable. There are many similarities between DWH workloads and Hadoop workloads, and similar principles guide the management of the system.
  • DBAs tend to be really good at writing their own monitoring jobs when needed. Every production database system I’ve seen has a crontab file full of customized monitors and maintenance jobs. This skill continues to be critical for Hadoop systems.
To be fair, sysadmins also have important advantages:
  • They typically have more experience managing huge numbers of machines (much more so than DBAs).
  • They have experience working with configuration management and deployment tools (puppet, chef), which is absolutely critical when managing large clusters.
  • They can feel more comfortable digging in the OS and network when configuring and troubleshooting systems, which is an important part of Hadoop administration.
Note that in both cases I’m talking about good, experienced admins – not those that can just click their way through the UI. Those who really understand their systems and much of what is going on outside the specific system they are responsible for. You need DBAs who care about the OS, who understand how hardware choices impact performance, and who understand workload characteristics and how to tune for them.
There is another important role for DBAs in the Hadoop world: Hadoop jobs often get data from databases or output data to databases. Good DBAs are very useful in making sure this doesn’t cause issues. (Even small Hadoop clusters can easily bring down an Oracle database by starting too many full-table scans at once.) In this role, the DBA doesn’t need to be part of the Hadoop team as long as there is good communication between the DBA and Hadoop developers and admins.
What about Java?
Hadoop is written in Java, and a fairly large number of Hadoop jobs will be written in Java too.
Hadoop admins will need to be able to read Java error messages (because this is typically what you get from Hadoop), understand the concepts of Java virtual machines and a bit about tuning them, and write small Java programs that can help in troubleshooting. On the other hand, most admins don’t need to write huge amounts of Hadoop code (you have developers for that), and for what they do write, non-Java solutions such as Streaming, Hive, and Pig (and Impala!) can be enough. My experience taught me that good admins learn enough Java to work on a Hadoop cluster within a few days. There’s really not that much to know.
What about SAN technology? 
Hadoop's storage system is very different from SAN: it generally uses local disks (JBOD), not storage arrays and not even RAID. Hadoop admins will need to learn about HDFS, Hadoop's file system, but not about traditional SAN systems. However, if they are DBAs or sysadmins, I suspect they already know far too much about SAN storage.
So what skills do Hadoop Administrators need?
First and foremost, Hadoop admins need general operational expertise such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks. I will assume that any good DBA has these covered.
Second, good knowledge of Linux is required, especially for DBAs who spent their life working with Solaris, AIX, and HPUX. Hadoop runs on Linux. They need to learn Linux security, configuration, tuning, troubleshooting, and monitoring. Familiarity with open source configuration management and deployment tools such as Puppet or Chef can help. Linux scripting (perl / bash) is also important – they will need to build a lot of their own tools here.
Third, they need Hadoop skills. There’s no way to avoid this :) They need to be able to deploy a Hadoop cluster, add and remove nodes, figure out why a job is stuck or failing, configure and tune the cluster, find the bottlenecks, monitor critical parts of the cluster, configure name-node high availability, pick a scheduler and configure it to meet SLAs, and sometimes even take backups.
So yes, there’s a lot to learn. But very little of it is Java, and there is no reason DBAs can’t do it. However, with Hadoop Administrator being one of the hottest jobs in the market (judging by my LinkedIn inbox), they may not stay DBAs for long after they become Hadoop Admins…
Any DBAs out there training to become Hadoop admins? Agree that Java isn’t that important? Let me know in the comments.

Tidying up SYSAUX – removing old snapshots which you didn’t know existed

How to reduce the space taken up in the SYSAUX tablespace when many more AWR snapshots are being retained than you think.
Firstly, let's take an example database where we can see that we are using 92GB of space.

set linesize 120
set pagesize 100
col ash form a30
col retention form a30
col snap form a30
COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40
SELECT  occupant_name "Item",
space_usage_kbytes/1048576 "Space Used (GB)",
schema_name "Schema",
move_procedure "Move Procedure"
FROM v$sysaux_occupants
WHERE occupant_name = 'SM/AWR'
ORDER BY 1
/

Item                      Space Used (GB) Schema                    Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/AWR                              91.88 SYS
Now we are looking to see how long we should be retaining AWR snapshots for, and what we actually have:
select sysdate - a.sample_time ash,
sysdate - s.begin_interval_time snap,
c.RETENTION
from sys.wrm$_wr_control c,
(
select db.dbid,
min(w.sample_time) sample_time
from sys.v_$database db,
sys.Wrh$_active_session_history w
where w.dbid = db.dbid group by db.dbid
) a,
(
select db.dbid,
min(r.begin_interval_time) begin_interval_time
from sys.v_$database db,
sys.wrm$_snapshot r
where r.dbid = db.dbid
group by db.dbid
) s
where a.dbid = s.dbid
and c.dbid = a.dbid;

ASH                            SNAP                           RETENTION
------------------------------ ------------------------------ ------------------------------
+000000875 22:22:41.045        +000000030 12:22:28.323        +00030 00:00:00.0
We want to keep 30 days worth of snapshots – we have set the retention period to that  – but we have 875 days worth of active session history.
I rather smugly said let's show an example, and then I produce a schema with nearly 3 years' worth of snapshots being maintained, but there is no trickery involved. Looking around at other systems I can find a few similar examples, and it is always down to the same issue. Let's look further.
select table_name, count(*)
from dba_tab_partitions
where table_name like 'WRH$%'
and table_owner = 'SYS'
group by table_name
order by 1;

TABLE_NAME                       COUNT(*)
------------------------------ ----------
WRH$_ACTIVE_SESSION_HISTORY             2
WRH$_DB_CACHE_ADVICE                    2
WRH$_DLM_MISC                           2
WRH$_EVENT_HISTOGRAM                    2
WRH$_FILESTATXS                         2
WRH$_INST_CACHE_TRANSFER                2
WRH$_INTERCONNECT_PINGS                 2
WRH$_LATCH                              2
WRH$_LATCH_CHILDREN                     2
WRH$_LATCH_MISSES_SUMMARY               2
WRH$_LATCH_PARENT                       2
WRH$_OSSTAT                             2
WRH$_PARAMETER                          2
WRH$_ROWCACHE_SUMMARY                   2
WRH$_SEG_STAT                           2
WRH$_SERVICE_STAT                       2
WRH$_SERVICE_WAIT_CLASS                 2
WRH$_SGASTAT                            2
WRH$_SQLSTAT                            2
WRH$_SYSSTAT                            2
WRH$_SYSTEM_EVENT                       2
WRH$_SYS_TIME_MODEL                     2
WRH$_TABLESPACE_STAT                    2
WRH$_WAITSTAT                           2

24 rows selected.
There is the problem: all the WRH$ data is held in 2 partitions and the overnight house-keeping job does not have time to finish its tidy-up before its 15 minutes of fame is over again for another 24 hours.
Two Server Manageability (SM) components that reside in the SYSAUX tablespace cause the problem. The components involved are the Automatic Workload Repository (AWR) and Optimizer Statistics History (OPTSTAT). Both of these components have retention periods associated with their data, and the MMON process should run nightly, as part of the scheduled maintenance tasks, to purge data that exceeds these retention periods. From version 11g onwards, the MMON purging process has been constrained to a time-limited window for each of the purges; if this window is exceeded the purging stops and an ORA-12751 error is written to an m000 trace file.
For the AWR data, held in tables with names commencing with WRH$, the probable cause is the fact that a number of the tables are partitioned. New partitions are created for these tables as part of the MMON process. Unfortunately, it seems that the partition splitting process is the final task in the purge process. As the later partitions are not split, they end up containing more data. This results in partition pruning within the purge process becoming less effective.
For the OPTSTAT data, held in tables with names commencing with WRI$, the cause is more likely to be related to the volume of data held in the tables. WRI$ tables hold historical statistical data for all segments in the database for as long as specified by the stats history retention period. Thus, if the database contains a large number of tables and has a long retention period – say 30 days – then the purge process will have an issue trying to purge all of the old statistics within the specified window.
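As a quick, hedged way to gauge whether the OPTSTAT history is part of the problem, the following sketch checks the current statistics history retention and the volume of history rows; WRI$_OPTSTAT_HISTHEAD_HISTORY is used here only as one representative of the WRI$_OPTSTAT% tables.

-- current stats history retention in days (the default is 31)
select dbms_stats.get_stats_history_retention from dual;

-- oldest statistics history still available
select dbms_stats.get_stats_history_availability from dual;

-- rough volume check on one of the larger OPTSTAT history tables
select count(*) from sys.wri$_optstat_histhead_history;

-- if required, shorten the retention and purge old history manually
-- exec dbms_stats.alter_stats_history_retention(8);
-- exec dbms_stats.purge_stats(sysdate - 8);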
I also think that two scenarios lead to this problem:
1) SYSAUX running out of space, which I know has happened on at least one occasion; the volume of data left in a partition is then too much to handle in the constrained time-window, and so it continues to grow in the same partition.
2) The database being shut down over the period of the maintenance task, so that, again, the volume in the partition becomes too large to handle in a short time.

The resolution to this problem is split into two stages.
Firstly, a temporary manual fix is required to rectify the partition splitting failure. A job needs to be scheduled on the database, to be run as the SYS user, which executes the following code. This job will force the splitting of partitions for the WRH$ tables, and should be scheduled to run on a daily basis until the number of partitions per table exceeds the AWR retention period.
begin
  execute immediate 'alter session set "_swrf_test_action" = 72';
end;
/
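One hedged way to run this block daily is a DBMS_SCHEDULER job; the job name and the 02:00 schedule below are illustrative choices only.

-- illustrative sketch: run the partition-split block daily as SYS
begin
  dbms_scheduler.create_job(
    job_name        => 'FORCE_AWR_PARTITION_SPLIT',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin execute immediate ''alter session set "_swrf_test_action" = 72''; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => true);
end;
/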
What I do is drop the retention period to 8 days (or maybe less) and then run this procedure once a day. It can take up to an hour, but afterwards you can see that the additional partitions have been created.
TABLE_NAME                       COUNT(*)
------------------------------ ----------
WRH$_ACTIVE_SESSION_HISTORY             3
WRH$_DB_CACHE_ADVICE                    3
WRH$_DLM_MISC                           3
WRH$_EVENT_HISTOGRAM                    3
WRH$_FILESTATXS                         3
WRH$_INST_CACHE_TRANSFER                3
WRH$_INTERCONNECT_PINGS                 3
WRH$_LATCH                              3
WRH$_LATCH_CHILDREN                     2
WRH$_LATCH_MISSES_SUMMARY               3
WRH$_LATCH_PARENT                       2
WRH$_OSSTAT                             3
WRH$_PARAMETER                          3
WRH$_ROWCACHE_SUMMARY                   3
WRH$_SEG_STAT                           3
WRH$_SERVICE_STAT                       3
WRH$_SERVICE_WAIT_CLASS                 3
WRH$_SGASTAT                            3
WRH$_SQLSTAT                            3
WRH$_SYSSTAT                            3
WRH$_SYSTEM_EVENT                       3
WRH$_SYS_TIME_MODEL                     3
WRH$_TABLESPACE_STAT                    3


Once the correct number of partitions has been created, the old data may be automatically purged by the MMON process. If this does not occur, it will be necessary to manually purge the data from the AWR tables.
Firstly turn off the production of snapshots for the duration of this activity:

-- get the dbid for this database
select dbid from v$database;

-- get current snap interval (in minutes)
select extract( day from snap_interval) *24 *60
       + extract( hour from snap_interval) *60
       + extract( minute from snap_interval) snap_interval
       from wrm$_wr_control where dbid = <dbid>;

-- disable snaps by setting interval to 0
exec dbms_workload_repository.modify_snapshot_settings(interval=> 0, dbid => <dbid>)
Then determine the maximum snapshot to be purged from the database by querying the maximum snap-id outside of the retention period:
select max(snap_id) max_snap
from wrm$_snapshot
where begin_interval_time
    < (sysdate - (select retention
                  from wrm$_wr_control
                  where dbid = <dbid>));
This will return the maximum snapshot to be purged.
The AWR tables will not purge if baselines exist, so check and remove as required. The following should return only 1 row called SYSTEM_MOVING_WINDOW:
select baseline_name,creation_time
from dba_hist_baseline;
If other baselines are found they will need to be removed:
exec dbms_workload_repository.drop_baseline(baseline_name => <baseline>);
If we look at the current retention period it is set to 15 days in this example, so we set it to 8 days (MODIFY_SNAPSHOT_SETTINGS takes the retention in minutes, so 8 days is 11520 minutes and 90 days would be 129600):
select d.dbid, w.snap_interval, w.retention from DBA_HIST_WR_CONTROL w, v$database d where w.dbid = d.dbid;

-- retention 11520 minutes = 8 days
begin DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(11520,0); end;
/
-- retention 129600 minutes = 90 days
begin DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(129600,0); end;
/
Then we purge the snapshots (in ranges if there are a lot, as in this example). This can take several hours.
EXEC dbms_workload_repository.drop_snapshot_range(low_snap_id=>100000, high_snap_id=>114159);
EXEC dbms_workload_repository.drop_snapshot_range(low_snap_id=>116261, high_snap_id=>116265);
Once all that is complete it is just a matter of tidying up the indexes. The retention now matches the 8 days of history we kept, but the space used by SM/AWR is still high until the segments are rebuilt:
ASH                            SNAP                           RETENTION
------------------------------ ------------------------------ ------------------------------
+000000008 23:47:01.421        +000000008 23:46:55.865        +00008 00:00:00.0

Item                      Space Used (GB) Schema                    Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/AWR                              93.70 SYS
select 'alter index '||segment_name||' rebuild online parallel (degree 4);'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'WRH$_%'
and    segment_type = 'INDEX'
order  by segment_name;
Double-check for unusable indexes:

select 'alter index '||s.segment_name||' rebuild online parallel (degree 4);'
from   dba_segments s, dba_indexes i
where  s.tablespace_name = 'SYSAUX'
and    s.segment_name like 'WRH$_%'
and    s.segment_type = 'INDEX'
and    i.status = 'UNUSABLE'
and    i.index_name = s.segment_name
order  by s.segment_name;

-- generate statements to move the WRH$ tables and reclaim the space
select 'alter table '||segment_name||' move tablespace sysaux;'
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'WRH$_%'
and    segment_type = 'TABLE'
order  by segment_name;
After the move and rebuild statements have been run, the same v$sysaux_occupants query shows the space used by SM/AWR greatly reduced:

Item                      Space Used (GB) Schema                    Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/AWR                              10.05 SYS