Architecture of Exadata Database Machine

Exadata Database Machine provides a high-performance, highly available platform with plenty of storage space for Oracle Database. High-availability clustering is provided by Oracle RAC, while ASM is responsible for storage mirroring. InfiniBand technology provides a high-bandwidth, low-latency cluster interconnect and storage network. Powerful compute nodes join the RAC cluster to deliver great performance.

In this article, we will look at:
  • Exadata Database Machine Network architecture
  • Exadata Database Machine Storage architecture
  • Exadata Database Machine Software architecture
  • How to scale up the Exadata Database Machine
  • Key components of the Exadata Database Machine

    Shared storage: Exadata Storage Servers

    The Database Machine provides intelligent, high-performance shared storage to both single-instance and RAC implementations of Oracle Database using Exadata Storage Server technology. The Exadata storage servers are designed to provide storage to Oracle Database through ASM (Automatic Storage Management). ASM keeps redundant copies of data on separate Exadata Storage Servers, protecting against data loss if you lose a disk or an entire storage server.

    Shared Network – InfiniBand

    The Database Machine uses an InfiniBand network for the interconnect between the database servers and the Exadata storage servers. The InfiniBand network runs at 40 Gb/s, so latency is very low and bandwidth is high. In the Exadata Database Machine, multiple InfiniBand switches and interface bonding are used to provide network redundancy.

    Shared cache:

    In the Database Machine's RAC environment, the database instances' buffer caches are shared. If one instance has cached data that another instance requires, the data is shipped to the requesting node over the InfiniBand cluster interconnect. This improves performance because the transfer happens memory to memory across the cluster interconnect.

    Database Server cluster:

    A full rack of the Exadata Database Machine consists of 8 compute nodes, and you can build an 8-node cluster using Oracle RAC. Each compute node has up to 80 CPU cores and 256 GB of memory.

    Cluster interconnect:

    By default, the Database Machine is configured to use the InfiniBand storage network as the cluster interconnect.
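    To verify which network the database is actually using as its interconnect, you can query the standard GV$CLUSTER_INTERCONNECTS view (a quick sketch; on Exadata you would expect to see the bonded InfiniBand interface here rather than the client Ethernet network):

```sql
-- Show the interconnect interface each RAC instance is using
SQL> select inst_id, name, ip_address, is_public, source
       from gv$cluster_interconnects
      order by inst_id;
```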

    Database Machine – Network Architecture

    Exadata Network Architecture
    The diagram above shows three different networks.
    Management Network –  ILOM:
    ILOM (Integrated Lights Out Manager) is the default remote hardware management facility on all Oracle servers. It uses a traditional Ethernet network to manage the Exadata Database Machine remotely. ILOM provides graphical remote administration and also helps system administrators monitor the hardware remotely.
    Client Access:
    The database servers are accessed by application servers over an Ethernet network. A bond is created across multiple Ethernet adapters for network redundancy and load balancing. Note: the Database Machine includes a Cisco switch to provide connectivity to the Ethernet networks.
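    As a quick check of the bond state on a database server, you can inspect the Linux bonding driver status (a sketch; the bond interface name, bondeth0 here, is an assumption and varies by configuration):

```shell
# Show bonding mode, the currently active slave, and the link state of each slave NIC
cat /proc/net/bonding/bondeth0
```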

    InfiniBand Network Architecture

    The diagrams below show how the InfiniBand links are connected to the different components in the X3-2 Half/Full Rack setup.
    infiniband switch x3-2 half-full rack
    The spine switch exists only in the half-rack and full-rack Exadata configurations. The spine switch helps you scale the environment by providing InfiniBand links to multiple racks. In the quarter-rack X3-2 model, you get only leaf switches. You can scale up to 18 racks by adding InfiniBand cables between the InfiniBand switches.
    How can we interconnect two racks? Look closely at the diagram below: a single InfiniBand network is formed based on a fat-tree topology.
    Scale two Racks
    Six ports on each leaf switch are reserved for external connectivity. These ports are used for connecting to media servers for tape backup, connecting to external ETL servers, and client or application access, including Oracle Exalogic Elastic Cloud.

    Database Machine Software Architecture

    Software architecture - Exadata
    CELLSRV, MS, RS, and IORM are the important processes of the Exadata storage cell servers. On the database servers, the storage cells' grid disks are used to create the ASM disk groups. The database server also contains a special library called LIBCELL. In combination with the database kernel and ASM, LIBCELL transparently maps database I/O to the Exadata storage servers.
    No other file systems are allowed on the Exadata storage cells. Oracle Database must use ASM as its volume manager and file system.
    Customers can choose between Oracle Linux and Oracle Solaris x86 as the database servers' operating system. Exadata supports Oracle Database 11g Release 2 and later versions.

    Database Machine Storage  Architecture

    Exadata Storage cell
    The Exadata storage servers run the software components shown above. Oracle Linux is the operating system for the Exadata storage cell software. CELLSRV is the core Exadata storage component and provides most of the services. Management Server (MS) provides Exadata cell management and configuration; MS is responsible for sending alerts and collects some statistics in addition to those collected by CELLSRV. Restart Server (RS) is used to start up and shut down the CELLSRV and MS services, and it monitors these services to automatically restart them if required.
    How are the disks mapped to the database from the Exadata storage servers?
    Exadata Disks overview
    If you look at the image below, you can see that the database servers treat each cell node as a failure group.
    Exadata DG
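    You can confirm this from ASM itself: each grid disk should report the storage cell it belongs to as its failure group. A hedged sketch using the standard V$ASM_DISK and V$ASM_DISKGROUP views (disk group names vary per installation):

```sql
-- List each ASM disk with its disk group and failure group (one failure group per storage cell)
SQL> select g.name as diskgroup, d.failgroup, d.name as disk_name, d.path
       from v$asm_disk d
       join v$asm_diskgroup g on d.group_number = g.group_number
      order by g.name, d.failgroup;
```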

Oracle GoldenGate Concepts and Architecture Made Simple!

Oracle GoldenGate supports the replication of data across various heterogeneous platforms. The GoldenGate replication topology covers the capture and transfer of extracted data from the source database across to the destination database.
Below are the topologies which can be used to fulfill various data transfer requirements using data replication.
Goldengate Topologies
• Uni-directional: Data is replicated in one direction from source to target
• Bi-Directional: The data flows in both direction and stays synced up between site A and site B
• Peer to Peer: Similar to bi-directional but involves more than 2 databases which stay synced up
• Broadcast: Data from source is sent to multiple destinations
• Consolidation: Data from multiple sources is delivered to one destination DB
• Cascading: Data from one source is routed through an intermediate database on its way to one or more destinations

Oracle Golden Gate Logical Architecture

The Oracle Golden Gate architecture consists of the following components:
Oracle Golden Gate Architecture diagram

GoldenGate Components

Manager: The Manager is the process that starts the other GoldenGate processes. This process must be running on the source and target systems for the configuration and startup of all the other GoldenGate processes. The Manager process also manages disk space by purging old trail files. Only one Manager process is required per GoldenGate installation.
Extract: The Extract process is responsible for capturing the committed DML transactions and the DDL from Oracle Redo logs. Then Extract writes these data changes into Trail or Extract Files.
Data Pump: The Pump process, which is also an Extract process, is optional in the GoldenGate setup. This process copies the trail files containing the data to the target system.
Replicat: The Replicat process is the apply process in the Goldengate configuration. This process runs at the end point of the data delivery chain on the target database. This process reads the destination trail files and applies the data changes to the target systems.
Trail/Extract Files: The Extract process on the source database creates trail files for consumption by the Pump process (for transfer to a remote database) or by a local Replicat on the source system.
Checkpoint: The Extract, Pump, and Replicat processes use checkpoints to track their progress. This mechanism marks the position up to which data changes have been retrieved from, or applied out of, the trail files. It allows the processes to recover without data loss and to know their starting point after a failure.
Collector: The Collector process runs on the target system and writes the data changes received from the source into the target trail files, known as RMTTRAIL. Before writing to RMTTRAIL it reassembles the incoming data.
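To make the roles above concrete, here is a minimal, hypothetical pair of parameter files for an Extract and a Replicat (the process names ext1/rep1, the ogg credentials, the trail prefix ./dirdat/lt, and the scott.emp table are illustrative assumptions, not part of the original article):

```
-- Extract parameter file (source side): capture committed changes to scott.emp into a local trail
EXTRACT ext1
USERID ogg, PASSWORD ogg
EXTTRAIL ./dirdat/lt
TABLE scott.emp;

-- Replicat parameter file (target side): read the trail and apply the changes
REPLICAT rep1
USERID ogg, PASSWORD ogg
ASSUMETARGETDEFS
MAP scott.emp, TARGET scott.emp;
```

Both processes are then registered and started from GGSCI under the Manager, e.g. ADD EXTRACT ext1, TRANLOG, BEGIN NOW followed by START EXTRACT ext1.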

Oracle Database 12c: INTERACTIVE QUICK REFERENCE

How to Backup and Restore Statistics

Backup Optimizer Statistics
Step Checklist: 
1. Create a statistics table in the user schema
2. Transfer the statistics to this table
Step detail: 
------------
1. Create a statistics table in the user schema:
Here, user is the owner of the tables whose CBO statistics need to be preserved.
SQL> connect user/password 
SQL> exec dbms_stats.create_stat_table(user,'STAT_TIMESTAMP'); 
PL/SQL procedure successfully completed.

 2a. Transfer the statistics to this table :

Transfer of statistics is achieved using the 'dbms_stats.export_table_stats' procedure.
Run the package once for each set of statistics to transfer. 
In the following example there are 2 tables:
SQL> exec dbms_stats.export_table_stats(user,'<TABLE_NAME>',NULL,'STAT_TIMESTAMP'); 
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.export_table_stats(user,'<TABLE_NAME_2>',NULL,'STAT_TIMESTAMP'); 
PL/SQL procedure successfully completed.

If you want to export user/schema-level statistics:

SQL> exec dbms_stats.export_schema_stats(user,'STAT_TIMESTAMP'); 
PL/SQL procedure successfully completed.

2b. Transfer SYSTEM statistics to this table :
----------------------------------------------

Transferring SYSTEM statistics : 
If you have system statistics (the SQL below returns rows):
connect system/password
Check for System stats:
select sname,pname,pval1 from sys.aux_stats$ where pval1 is not null;
Create stats storage table
exec dbms_stats.create_stat_table(user,'STAT_TIMESTAMP');
-- Export:
exec dbms_stats.export_system_stats('STAT_TIMESTAMP');
-- Import:
exec dbms_stats.import_system_stats('STAT_TIMESTAMP');
Restore a set of statistics
=========================
Use your statistics backup table and re-import your statistics:
exec dbms_stats.import_table_stats(NULL,'<TABLE_NAME>', NULL,'STAT_TIMESTAMP'); 
exec dbms_stats.import_table_stats(NULL,'<TABLE_NAME_2>', NULL,'STAT_TIMESTAMP');
To find the table statistics stored in the STAT_TIMESTAMP table:
 select distinct c1 from STAT_TIMESTAMP where type ='T';
To restore statistics for all tables stored in the STAT_TIMESTAMP table:
 exec dbms_stats.import_schema_stats(user,'STAT_TIMESTAMP');


Example with the SCOTT user:

SQL> show user
USER is "SCOTT"
SQL> exec dbms_stats.create_stat_table('SCOTT','STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE
STAT_TIMESTAMP TABLE  => new table created for stats

To export statistics for the DEPT table:
SQL> exec dbms_stats.export_table_stats('SCOTT','DEPT',NULL,'STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
To export statistics for the EMP table:
SQL> exec dbms_stats.export_table_stats('SCOTT','EMP',NULL,'STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
To export user-level (schema) statistics:
SQL> exec dbms_stats.export_schema_stats('SCOTT','STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
To list the table statistics stored in the STAT_TIMESTAMP table:
SQL> select distinct c1 from STAT_TIMESTAMP where type ='T';
C1
DEPT
EMP
BONUS
SALGRADE
SQL>

Move  DBMS_STATS Statistics to a Different Database:

1: First, run the export:
%exp scott/tiger tables=STAT_TIMESTAMP file=STAT_TIMESTAMP.dmp
About to export specified tables via Conventional Path ...
. . exporting table STAT_TIMESTAMP ...
Then on the new database, run import:
2: %imp scott/tiger file=STAT_TIMESTAMP.dmp full=y log=implog.txt
Populate the data dictionary in the new database.
3: SQL> exec dbms_stats.import_table_stats('SCOTT','EMP',NULL,'STAT_TIMESTAMP',NULL,TRUE);
PL/SQL procedure successfully completed.

Take care with the target and source schema names.

Same schema:
============
If there are two databases and the user name is the same in both (i.e. SCOTT), then the procedure is as simple as below.
SQL> exec dbms_stats.export_table_stats('SCOTT','DEPT',NULL,'STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
exec dbms_stats.import_table_stats('SCOTT','DEPT',NULL,'STAT_TIMESTAMP');
PL/SQL procedure successfully completed.
Different schema:
=================
If you export stats under one schema name and import them into a different schema name (Bug 1077535):
example
SQL> set autot trace explain
SQL> select * from dept;
Execution Plan
----------------------------------------------------------
Plan hash value: 3383998547
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4 | 80 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DEPT | 4 | 80 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------
SQL>
The schema names must match exactly.
If the target database schema name (import database) is different from the source
database schema name (export database), then you may update the table you exported the statistics
into and set the C5 column to the target schema name.
See example below:
————————————–
STAT_TIMESTAMP = table to store statistics in
DEPT - is my table
SCOTT & COPY_SCOTT - user accounts
---------------------------------------
Checking current explain plan of table DEPT on target db: 
select * from COPY_SCOTT.DEPT;
Execution Plan
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4 | 80 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DEPT | 4 | 80 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------
Update the STAT_TIMESTAMP table, which contains the statistics from the source db (schema SCOTT),
setting the C5 column to the new schema name on the target db:
update STAT_TIMESTAMP set c5 = 'COPY_SCOTT' where c5 = 'SCOTT';
commit;
Now import the statistics into the data dictionary on the target db:
exec dbms_stats.import_table_stats('COPY_SCOTT','DEPT',NULL,'STAT_TIMESTAMP');
Check the explain plan. Should reflect new statistics imported:
select * from COPY_SCOTT.DEPT;

All About Statistics In Oracle


Understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
     ESTIMATE_PERCENT => 100,
     METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY',
     CASCADE => TRUE,
     DEGREE => 4,
     OPTIONS => 'GATHER STALE',
     GATHER_SYS => TRUE,
     STATTAB => 'PROD_STATS');

CASCADE => TRUE : Gathers statistics on the indexes as well. If not set, Oracle decides whether to collect index statistics or not.
DEGREE => 4 : Degree of parallelism.
OPTIONS:
       =>'GATHER' : Gathers statistics on all objects.
       =>'GATHER AUTO' : Oracle determines which objects need new statistics and how to gather them.
       =>'GATHER STALE' : Gathers statistics on stale objects and returns the list of stale objects.
       =>'GATHER EMPTY' : Gathers statistics on objects that have no statistics and returns the list of those objects.
       =>'LIST AUTO' : Returns the list of objects that GATHER AUTO would process.
       =>'LIST STALE' : Returns the list of stale objects, as determined from the *_tab_modifications views.
       =>'LIST EMPTY' : Returns the list of objects which currently have no statistics.
GATHER_SYS => TRUE : Gathers statistics on the objects owned by the SYS user.
STATTAB => 'PROD_STATS' : Table in which to save the current statistics; see the SAVE & IMPORT STATISTICS section (last third of this post).
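As a sketch of how the LIST options can be consumed from PL/SQL (variable names are illustrative; DBMS_STATS.ObjectTab is the documented collection type for the OBJLIST output parameter):

```sql
SQL> set serveroutput on
SQL> DECLARE
       l_objects DBMS_STATS.ObjectTab;
     BEGIN
       -- Ask Oracle for the list of stale objects without gathering anything
       DBMS_STATS.GATHER_DATABASE_STATS(options => 'LIST STALE', objlist => l_objects);
       FOR i IN 1 .. l_objects.COUNT LOOP
         DBMS_OUTPUT.PUT_LINE(l_objects(i).ownname || '.' || l_objects(i).objname);
       END LOOP;
     END;
     /
```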

Note: All of the above parameters are valid for the other kinds of statistics (schema, table, ...) except GATHER_SYS.
Note: Skewed data means the data inside a column is not uniform: one or more particular values are repeated much more often than the other values in the same column. For example, consider the gender column in an employee table with two values (male/female). In a construction or security services company, where most employees are male, the gender column is likely to be skewed; in an entity like a hospital, where the male and female workforce are almost equal, the gender column is likely not skewed.

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What's new?
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE => Lets Oracle estimate the sample size; it handles skewed values well and is the DEFAULT.
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => Histograms are not recommended on all columns.
Removed "CASCADE => TRUE" => Lets Oracle determine whether index statistics should be collected.
Doubled the parallelism to "degree => 8" => This depends on the number of CPUs on the machine and the CPU overhead you can accept while gathering DB statistics.

Starting with Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable Automatic Optimizer Statistics task:
SQL> BEGIN
     DBMS_AUTO_TASK_ADMIN.ENABLE(
     client_name => 'auto optimizer stats collection',
     operation => NULL,
     window_name => NULL);
     END;
     /

In case you want to Disable Automatic Optimizer Statistics task:
SQL> BEGIN
     DBMS_AUTO_TASK_ADMIN.DISABLE(
     client_name => 'auto optimizer stats collection',
     operation => NULL,
     window_name => NULL);
     END;
     /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS (or *_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS), you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh its parent table mon_mods_all$ with recent data from the SGA. Otherwise you have to wait for the internal Oracle job that refreshes that table: once a day in 10g onwards (except 10gR2), every 15 minutes in 10gR2, or every 3 hours in 9i and earlier. The table is also refreshed when you manually run one of the GATHER_*_STATS procedures.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
     ownname =>'SCOTT',
     estimate_percent=>10,
     degree=>1,
     cascade=>TRUE,
     options=>'GATHER STALE');


Gather TABLE Statistics:
====================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'SCOTT',
     tabname => 'EMP',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /

CASCADE => TRUE : Gathers statistics on the indexes as well. If not set, Oracle determines whether to collect them or not.
DEGREE => 2 : Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE : (DEFAULT) Automatically sets the sample size for skewed (distinct) values; more accurate and faster than setting a manual sample size.
METHOD_OPT => : For gathering histograms:
 FOR COLUMNS SIZE AUTO : You can specify one column instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT : Prevents deletion of histograms; collects them only for columns that already have histograms.
 FOR ALL COLUMNS : Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY : Collects histograms only for columns with skewed values (skewness is tested first).
 FOR ALL INDEXED COLUMNS : Collects histograms only for columns that have indexes.
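For example, a hedged sketch of restricting histogram collection to a single column (SCOTT.EMP and the JOB column are illustrative):

```sql
-- Gather table stats, collecting a histogram only on the JOB column
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt => 'FOR COLUMNS JOB SIZE AUTO');
```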


Note: Truncating a table will not update the table statistics; it only resets the High Water Mark. You have to re-gather statistics on that table.

Good source from "Mahmmoud ADEL"
Inside the "DBA BUNDLE" there is a script called "gather_stats.sh"; it will help you easily and safely gather statistics on a specific schema or table, and it provides advanced features such as backing up / restoring statistics in case of fallback.
To learn more about "DBA BUNDLE" please visit this post:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html


Gather Index Statistics:
===================
SQL>
BEGIN
DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT',indname => 'EMP_I',estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
----------------------------
-Fixed objects are the x$ tables (loaded into the SGA during startup) on which the V$ views are built (V$SQL etc.).
-If statistics are not gathered on fixed objects, the optimizer will use predefined default values for the statistics. These defaults may lead to inaccurate execution plans.
-Statistics on fixed objects are not gathered automatically, nor as part of gathering DB stats.

How frequent to gather stats on fixed objects?
-------------------------------------------------------
Only once, for a representative workload, unless you hit one of these cases:

- After a major database or application upgrade.
- After implementing a new module.
- After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
- Poor performance/Hang encountered while querying dynamic views e.g. V$ views.


Note:
- It's recommended to gather the fixed object stats during peak hours (while the system is busy), or just after the peak hours while the sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent the DB activity well.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
---------------------------------------------

First, check the last analyzed date:
------ -----------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
        from dba_tab_statistics where table_name='X$KGLDP';
Second, export the current fixed stats into a table (in case you need to revert back):
------- -----------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
        ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');

SQL> EXEC dbms_stats.export_fixed_objects_stats
        (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');
Third, gather the fixed objects stats:
-------  ------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;

Note:
In case you experience bad performance on the fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
        (stattab =>'STATS_TABLE_NAME',STATOWN =>'OWNER');


#################
SYSTEM STATISTICS
#################

What is system statistics:
-------------------------------
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. They were introduced in Oracle 9i.

Why gathering system statistics:
----------------------------------------
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
-----------------------------------
This simulates a workload (not the real one) and does not collect full statistics. It is less accurate than workload statistics, but if you can't capture statistics during a typical workload, you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats();

WORKLOAD statistics:
-------------------------------
This gathers statistics during the current workload (which is supposed to be representative of the actual system I/O and CPU workload on the DB).
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends (after 1, 2, 3... hours, or whatever), stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60);

Check the system values collected:
-------------------------------------------
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;

cpuspeedNW: The noworkload CPU speed (average number of CPU cycles per second).
ioseektim:  The sum of seek time, latency time, and OS overhead time.
iotfrspeed: I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed:   CPU speed during a workload statistics collection.
maxthr:     The maximum I/O throughput.
slavethr:   Average parallel slave I/O throughput.
sreadtim:   The average time for a random single-block read.
mreadtim:   The average time (in seconds) for a sequential multiblock read.
mbrc:       The average multiblock read count in blocks.

Notes:
-Gathering NOWORKLOAD statistics collects only the (cpuspeedNW, ioseektim, iotfrspeed) system statistics.
-The above values can be modified manually using the DBMS_STATS.SET_SYSTEM_STATS procedure.
-According to Oracle, collecting workload statistics doesn't impose additional overhead on your system.
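For example, a minimal sketch of overriding one workload value manually with DBMS_STATS.SET_SYSTEM_STATS (the value here is purely illustrative):

```sql
-- Manually set the average single-block read time (in ms) used by the optimizer
SQL> exec dbms_stats.set_system_stats('sreadtim', 4);
```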

Delete system statistics:
------------------------------
SQL> execute dbms_stats.delete_system_stats();


####################
Data Dictionary Statistics
####################

Facts:
-------
> Dictionary tables are the tables owned by SYS and residing in the system tablespace.
> Normally, data dictionary statistics are not required in 9i unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained by the automatic statistics gathering job run during the nightly maintenance window.

If you choose to switch off that job for application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');

When to gather Dictionary statistics:
---------------------------------------------
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
---------------------------------------------
SQL> select table_name, last_analyzed from dba_tables
     where owner='SYS' and table_name like '%$' order by 2;

Gather Dictionary Statistics:  
-----------------------------------
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
->Will gather stats on 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
->Will gather stats on 100% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys=>TRUE);
->Will gather stats on the whole DB+SYS schema.



################
Extended Statistics "11g onwards"
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
====================================
If you run a query with a function like UPPER/LOWER on a column in the WHERE clause, the optimizer's estimates will be off and an index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott';

In order to make the optimizer work with function-based predicates, you need to gather extended stats:

1-Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SCOTT','EMP','(lower(ENAME))') from dual;

2-Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
----
*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SCOTT',tabname => 'EMP',
     method_opt => 'for all columns size skewonly for
     columns (lower(ENAME))');
     end;
     /

To check the Existence of extended statistics on a table:
----------------------------------------------------------------------
SQL> select extension_name,extension from dba_stat_extensions where owner='SCOTT'and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on column function:
------------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on column group: -related columns-
=================================
Certain columns in a table that are used together in WHERE clauses are often correlated, e.g. (country, state). You want to make the optimizer aware of the relationship between two or more columns, instead of it using separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can determine more accurately the relationship between columns that are used together in the WHERE clause of a SQL statement. For example, columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
Even if extra columns are referenced in the WHERE clause alongside the column group, the optimizer will still make use of the column group statistics.

1- create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)')from dual;
2- Re-gather stats|histograms for table so optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats ('SH','customers',method_opt=> 'for all columns size skewonly');

OR
---

*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats
     (ownname => 'SH',tabname => 'CUSTOMERS',
     method_opt => 'for all columns size skewonly for
     columns (country_id,cust_state_province)');
     end;
     /

Drop extended stats on column group:
--------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)');


#########
Histograms
#########

What are Histograms?
-----------------------------
> Hold data about the distribution of values within a table column: the number of occurrences of a specific value/range.
> Used by the CBO to choose the optimal access path for a query (e.g. index fast full scan vs. full table scan).
> Usually gathered on columns whose data is repeated frequently, like a country or city column.
> Gathering histograms on a column with all-distinct values (e.g. a PK) is useless, because values are not repeated.
> Two types of Histograms can be gathered:
  -Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255 (e.g. the number of countries is always less than 254).
  -Height-balanced histograms: similar to frequency histograms in their design, but used when the number of distinct values is > 254.
    See an Example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which, depending on the method_opt parameter, may not collect histograms or may even delete existing ones).
> Mainly gathered on foreign key columns and columns used in WHERE clauses.
> Help in SQL multi-table joins.
> Column histograms, like other statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation.
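To act on that last recommendation, histogram creation can be disabled by regathering with a single bucket (SIZE 1 means one bucket, i.e. no histogram). A minimal sketch; the schema/table/column names here are illustrative:

```sql
-- Delete existing stats (including histograms) for one column:
SQL> exec dbms_stats.delete_column_stats('SCOTT','EMP','JOB');

-- Re-gather without histograms: SIZE 1 = one bucket = no histogram.
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP', method_opt=>'FOR ALL COLUMNS SIZE 1');
```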

Cautions:
   – Do not create them on Columns that are not being queried.
   – Do not create them on every column of every table.
   – Do not create them on the primary key column of a table.

Verify the existence of histograms:
---------------------------------------------
SQL> select column_name,histogram from dba_tab_col_statistics
     where owner='SCOTT' and table_name='EMP';

Creating Histograms:
---------------------------
e.g.
SQL> Exec dbms_stats.gather_schema_stats
     (ownname => 'SCOTT',
     estimate_percent => dbms_stats.auto_sample_size,
     method_opt => 'for all columns size auto',
     degree => 7);


method_opt values:
FOR COLUMNS SIZE AUTO           => Fastest; you can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT     => Prevents deletion of histograms; collects them only for columns that already have histograms.
FOR ALL COLUMNS                 => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY   => Collects histograms only for columns with skewed data.
FOR ALL INDEXED COLUMNS         => Collects histograms for columns that have indexes.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
     where OWNER='SCOTT' and TABLE_NAME='EMP' group by column_name;

Drop Histograms: 11g
----------------------
e.g.
SQL> Exec dbms_stats.delete_column_stats
     (ownname=>'SH', tabname=>'SALES',
     colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');


Stop gather Histograms: 11g
------------------------------
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs
     ('SH', 'SALES','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO,FOR COLUMNS SIZE 1 PROD_ID');
>Will continue to collect histograms as usual on all columns in the SALES table except for PROD_ID column.

Drop Histograms: 10g
----------------------
e.g.
SQL> exec dbms_stats.delete_column_stats(user,'T','USERNAME');


################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this approach, statistics are exported into a table and can later be imported back from that table.

1-Create STATS TABLE:
-----------------------------
SQL> Exec dbms_stats.create_stat_table(ownname => 'SYSTEM', stattab => 'prod_stats',tblspace => 'USERS');

2-Export statistics to the STATS table: [Backup Statistics]
---------------------------------------------------
The following commands back up the statistics into the PROD_STATS table we just created under the SYSTEM schema.

For Database stats:
SQL> Exec dbms_stats.export_database_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname=>'SCHEMA_NAME',stattab=>'STATS_TABLE',statown=>'STATS_TABLE_OWNER');
e.g.
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname=>'SCOTT',stattab=>'prod_stats',statown=>'system');
For Table:
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats(ownname => 'SCOTT',tabname => 'EMP',statown => 'SYSTEM', stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP',statown => 'SYSTEM', stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',statown => 'SYSTEM', stattab=>'prod_stats');

Parameters:
ownname: The owner of the object that will have its statistics backed up.
tabname: The table name which will have its stats backed up.
indname: The index name which will have its stats backed up.
statown: The owner of the table which stores the backed up statistics.
stattab: The table which stores the backed up statistics.

3-Import statistics from PROD_STATS table to the dictionary:
---------------------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_Dictionary_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS
     (stattab => 'prod_stats',statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS
     (ownname => 'SCOTT',stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats
     ( ownname => 'SCOTT', stattab => 'prod_stats',tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats
     ( ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For COLUMN:
SQL> Exec dbms_stats.import_COLUMN_stats
     (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');


4-Drop STAT Table:
--------------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE (stattab => 'prod_stats',ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are automatically saved in the SYSAUX tablespace for 31 days by default.

Restore Dictionary stats as of timestamp:
------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1);

Restore Database stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1);

Restore SYSTEM stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1);

Restore FIXED OBJECTS stats as of timestamp:
----------------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1);

Restore SCHEMA stats as of timestamp:
---------------------------------------
SQL> Exec dbms_stats.restore_SCHEMA_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>sysdate-1);
OR:
SQL> Exec dbms_stats.restore_schema_stats
     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS
     (ownname=>'SYSADM', tabname=>'T01POHEAD',AS_OF_TIMESTAMP=>sysdate-1);


Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats(ownname=>'SCOTT',tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats(ownname => 'SCOTT',indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats(ownname =>'SCOTT',tabname=>'EMP',colname=>'EMPNO');

Note: This operation can be rolled back by restoring the old statistics with the DBMS_STATS.RESTORE_* procedures.


Pending Statistics:  "11g onwards"
===============
What is Pending Statistics:
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them system-wide, unless/until you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
Note: Any new statistics gathered on the database will be marked PENDING until you set the parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');

Gather statistics: "as you used to do"
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable using pending statistics on your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;
Then any SQL statement you will run will use the new pending statistics...
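Before publishing, the pending values can be inspected and compared with the current ones; they are exposed (11g onwards) in dictionary views such as DBA_TAB_PENDING_STATS and DBA_COL_PENDING_STATS. The SH.SALES names below are just the example table used above:

```sql
-- Pending table-level statistics (not yet used by other sessions):
SQL> select owner, table_name, num_rows, last_analyzed
     from dba_tab_pending_stats
     where owner='SH' and table_name='SALES';

-- Pending column-level statistics:
SQL> select column_name, num_distinct, num_nulls
     from dba_col_pending_stats
     where owner='SH' and table_name='SALES';
```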

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS();

Once you finish don't forget to return the Global PUBLISH parameter to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
>If you don't, all newly gathered statistics on the database will be marked as PENDING, which may confuse you or any other DBA working on this DB who is not aware of the parameter change.


Lock Statistics:
=============
Gathering new statistics is not always a good approach: it may change your application's query/report execution plans for the worse, and it is not guaranteed that new statistics will lead to better execution plans. I've learned this lesson the hard way, but having a backup of the old statistics before gathering new ones saved my day!
This is why you want to avoid the scenario where a DBA on your team accidentally gathers new statistics on the whole DB, scrambling the execution plans of most application queries in the hope of generating better ones. In this case you can lock the statistics of one or more schemas, or of key tables, to prevent them from being refreshed by such unattended maintenance activities.

To lock the statistics on all tables under a specific schema:
SQL> exec dbms_stats.lock_schema_stats('SCHEMA_NAME');
e.g. exec dbms_stats.lock_schema_stats('SCOTT');


To lock the statistics on a specific table:

SQL> exec dbms_stats.lock_table_stats('OWNER','TABLE_NAME');
e.g. exec dbms_stats.lock_table_stats('SCOTT','EMP');

Note: This will lock the table's statistics and its indexes.

When you need to gather new statistics on tables whose statistics are locked, you must first unlock the statistics, then gather new statistics as usual.
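A minimal sketch of that unlock/gather/relock cycle on a single table (schema and table names are illustrative):

```sql
SQL> exec dbms_stats.unlock_table_stats('SCOTT','EMP');
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');
SQL> exec dbms_stats.lock_table_stats('SCOTT','EMP');
```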

To check all tables that have their statistics locked:
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED, STATTYPE_LOCKED from DBA_TAB_STATISTICS
where STATTYPE_LOCKED is not null
and OWNER not in ('SYS','SYSTEM','SQLTXPLAIN','WMSYS')
order by OWNER, TABLE_NAME;

To unlock all tables under a specific schema:
SQL> exec dbms_stats.unlock_schema_stats('SCHEMA_NAME');
e.g. exec dbms_stats.unlock_schema_stats('SCOTT');

To unlock a specific table:
SQL> exec dbms_stats.unlock_table_stats('OWNER','TABLE_NAME');
e.g. exec dbms_stats.unlock_table_stats('SCOTT','EMP');

Note: This will unlock the table's statistics and its indexes.

=========
Advanced:
=========

To check the current stats history retention period (days):
-------------------------------------------------------------------
SQL> select dbms_stats.get_stats_history_retention from dual;
To check the oldest timestamp for which stats history is available:
SQL> select dbms_stats.get_stats_history_availability from dual;
To modify current Stats history retention period (days):
-------------------------------------------------------------------
SQL> Exec dbms_stats.alter_stats_history_retention(60);

Purge statistics older than 10 days:
------------------------------------------
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Reclaiming space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space is not reclaimed automatically when you purge stats; you must reclaim it manually using the following steps:

Check Stats tables size:
>>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 160
select sum(bytes/1024/1024) Mb,
segment_name,segment_type from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like 'WRI$_OPTSTAT%'
and segment_type='TABLE'
group by segment_name,segment_type
order by 1 asc
/

Check Stats indexes size:
>>>>>
col Mb form 9,999,999
col SEGMENT_NAME form a40
col SEGMENT_TYPE form a6
set lines 160
select sum(bytes/1024/1024) Mb, segment_name,segment_type
from dba_segments
where  tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
group by segment_name,segment_type
order by 1 asc
/

Move Stats tables in same tablespace:
>>>>>
select 'alter table '||segment_name||'  move tablespace
SYSAUX;' from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='TABLE'
/

Rebuild stats indexes:
>>>>>>
select 'alter index '||segment_name||'  rebuild online;'
from dba_segments
where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%'
and segment_type='INDEX'
/

Check for unusable indexes:
>>>>>
select  di.index_name,di.index_type,di.status
from dba_indexes di , dba_tables dt
where  di.tablespace_name = 'SYSAUX'
and dt.table_name = di.table_name
and di.table_name like '%OPT%'
order by 1 asc
/

##############################
Gathering Fixed Objects Statistics
##############################

What are the fixed objects:

Fixed objects are the x$ tables and their indexes.

Why we must gather statistics on fixed objects:

If the statistics are not gathered on fixed objects, the Optimizer will use predefined default values for the
statistics. These defaults may lead to inaccurate execution plans.

Does Oracle gather statistics on fixed objects:

Statistics on fixed objects are not gathered automatically, nor as part of the database-wide stats gathering procedures.

When we should gather statistics on fixed objects:

-After a major database or application upgrade.
-After implementing a new module.
-After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
-Poor performance/Hang encountered while querying dynamic views e.g. V$ views.
-This task should be done only a few times per year.

Note: 
-It's recommended to gather the fixed object stats during peak hours (while the system is busy), or right after the peak while the sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent real DB activity.
-Performance degradation may be experienced while the statistics are being gathered.
-Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:

Firstly Check the last analyzed date:


select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KGLDP';




OWNER      TABLE_NAME      LAST_ANAL
---------- --------------- ---------
SYS        X$KGLDP         20-MAR-12


Secondly Export the current fixed stats in a table: (in case you need to revert back)

exec dbms_stats.create_stat_table('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');
exec dbms_stats.export_fixed_objects_stats(stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

Thirdly Gather fixed objects stats:


exec dbms_stats.gather_fixed_objects_stats;


In case of reverting to the old statistics:
If you experience bad performance on fixed tables after gathering the new statistics:

exec dbms_stats.delete_fixed_objects_stats();
exec dbms_stats.import_fixed_objects_stats(stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

Creating a physical standby database

A physical standby database is created from an existing database, which then acts as the primary database.
In this text, it is assumed that the primary database uses a spfile.

Getting the primary database ready

The primary database must meet two conditions before a standby database can be created from it:
  • it must be in force logging mode and
  • it must be in archive log mode (automatic archiving must be enabled and a local archiving destination must be defined).
Use v$database.force_logging to determine if a database is in force logging mode. If not, enable it like so:
alter database force logging;
Use v$database.log_mode to determine if a database is in archive log mode. If not, enable it.
The (local) archive destination should be specified like so:
alter system set log_archive_dest_1='LOCATION=c:\oracle\oradb\arch MANDATORY' scope=both;

Creating the standby database

Copying the datafiles

The standby database is created from the existing datafiles of the primary database. These can be queried from the v$datafile view:
 select name from v$datafile;
These files must be copied to the standby database. However, the primary database must be shut down before they can be copied.
shutdown immediate;
After copying the datafiles, the primary database can be started up again.
startup

Creating a standby database control file

A control file needs to be created for the standby system. Execute the following on the primary system:
alter database create standby controlfile as '/some/path/to/a/file';
The created file must meet two conditions:
  • its filename must be different from any other control file and
  • it must be created after the backup of the datafiles.

Creating an init file

A pfile is created from the spfile. This pfile needs to be modified and then be used on the standby system to create an spfile from it.
create pfile='/some/path/to/a/file' from spfile;
The following parameters must be modified or added:
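The exact list depends on the Oracle version and configuration. A hedged sketch of commonly adjusted parameters follows; the paths and the TO_PRIMARY/TO_STANDBY net service names are assumptions for this example:

```
*.db_unique_name='stdby'                          # db_name stays the same as the primary
*.control_files='/some/path/standby.ctl'          # the standby controlfile created above
*.log_archive_dest_1='LOCATION=/some/path/arch MANDATORY'
*.fal_server='TO_PRIMARY'                         # net service name pointing to the primary (gap resolution)
*.fal_client='TO_STANDBY'                         # net service name pointing to this standby
*.standby_file_management='AUTO'
```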

Creating an oracle service (if on windows)

If the environment is running on windows, create a new oracle service on the standby system with oradim:
oradim -new -sid stdby -startmode manual

Configuring the listener

In order to enable dead connection detection, specify sqlnet.expire_time=2 (or any other appropriate value) in the sqlnet.ora:

sqlnet.expire_time=2

Creating net service names

Net service names must be created on both the primary and the standby database; they will be used by the log transport services. That is, something like the following lines must be added to tnsnames.ora:
TO_STANDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stdby_host)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = stdby)
    )
  )
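On the standby side, a corresponding entry pointing back to the primary is needed as well; the host name and service name here are assumptions:

```
TO_PRIMARY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = prim_host)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = prim)
    )
  )
```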

Creating the spfile on the standby database

On the still idle standby database, the pfile is turned into an spfile. Then the instance is started up.
set ORACLE_SID=stdby
sqlplus "/ as sysdba"

create spfile from pfile='/.../../modified-pfile';
Then the standby database needs to be started as a physical standby database, but without starting recovery yet:
startup nomount
alter database mount standby database;

Creating standby redo logs

On the standby database, standby redo logs can be created (if lgwr transmission is to be used).
alter database add standby logfile '/some/path/file_1.log' size 1M, '/some/path/file_2.log' size 1M, '/some/path/file_3.log' size 1M;

Archiving to the standby from the primary

alter system set log_archive_dest_2='service=to_standby lgwr' scope=both; 
alter system set log_archive_dest_state_2=enable scope=both;

Putting standby in recovery mode

alter database recover managed standby database disconnect from session;

Verify environment

After everything has been done, verify the physical standby database.  http://www.oracleops-support.com/2017/12/verify-standby-environment.html
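As a quick sanity check on the standby (in addition to the link above), the following queries show whether redo is being received and applied:

```sql
-- The MRP0 process should show a status such as APPLYING_LOG:
SQL> select process, status, sequence# from v$managed_standby;

-- Archived logs received from the primary and whether they were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;
```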