Oracle LogMiner & Oracle GoldenGate
Oracle LogMiner:
Oracle GoldenGate
- Enable the GoldenGate replication parameter on all database instances:
ENABLE_GOLDENGATE_REPLICATION=true
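As a minimal sketch, the parameter can be set dynamically and persisted to the spfile (verify scope requirements for your database version):

```sql
-- Enable GoldenGate replication; SCOPE=BOTH changes the running instance and the spfile
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE=BOTH;
```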
- Archivelog mode must be enabled.
- Force logging:
- Forced logging mode forces the logging of all transactions and loads, overriding any user or storage settings to the contrary. This ensures that no source data in the Extract configuration is missed.
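A sketch of enabling and verifying force logging on the source database:

```sql
-- Check the current logging settings
SELECT log_mode, force_logging FROM v$database;

-- Force logging of all transactions and loads
ALTER DATABASE FORCE LOGGING;
ALTER SYSTEM SWITCH LOGFILE;
```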
- Minimum supplemental logging:
- This adds row chaining information, if any exists, to the redo log for update operations.
- Schema-level supplemental logging is required when using DDL replication.
- Enable table-level supplemental logging if schema-level logging is not enabled.
- ADD TRANDATA: Enables supplemental logging of the primary key and conditional supplemental logging of the unique key/foreign key of a table, plus changed columns.
- ADD TRANDATA ALLCOLS: Enables unconditional supplemental logging of all columns of the table.
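The supplemental-logging steps above can be sketched as follows (the ggadmin user and the hr schema/table names are illustrative, not from the original):

```text
-- Database-wide minimum supplemental logging
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Table-level logging from GGSCI
GGSCI> DBLOGIN USERID ggadmin PASSWORD ********
GGSCI> ADD TRANDATA hr.employees
GGSCI> ADD TRANDATA hr.employees ALLCOLS

-- Schema-level logging (required for DDL replication)
GGSCI> ADD SCHEMATRANDATA hr
```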
- Configure STREAMS_POOL_SIZE for each process (Extract/Replicat) connecting to the database. Start with a 1 GB size and tune per workload.
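For example, the streams pool can be sized as a starting point like this (tune the value for your workload):

```sql
-- Start with 1 GB and grow as capture/apply workload increases
ALTER SYSTEM SET STREAMS_POOL_SIZE = 1G SCOPE=BOTH;
```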
- Classic Architecture: Oracle GoldenGate Classic Architecture provides the processes and files required to move data effectively across a variety of topologies. These processes and files form the main components of the classic architecture, which was the product design until this release. It supports Oracle and other databases.
- Microservices Architecture (MA): Oracle GoldenGate Microservices Architecture is a new microservices architecture that provides REST-enabled services as part of the Oracle GoldenGate environment. The REST-enabled services provide remote configuration, administration, and monitoring through HTML5 web pages, the command line, and APIs. It supports only the Oracle database.
- Each replicated table needs a unique row identifier, which can be one of:
- A primary key, or
- NOT NULL unique keys, or
- Columns specified with KEYCOLS
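When a table has neither a primary key nor a usable unique key, KEYCOLS can define a logical key in the Extract and Replicat parameter files. A sketch (the schema, table, and column names are hypothetical):

```text
-- Extract parameter file: define a logical key for change capture
TABLE src.order_lines, KEYCOLS (order_id, line_no);

-- Matching Replicat MAP statement on the target
MAP src.order_lines, TARGET tgt.order_lines, KEYCOLS (order_id, line_no);
```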
A supplemental log group is the set of additional columns to be logged when supplemental logging is enabled. There are two types of supplemental log groups that determine when columns in the log group are logged:
Unconditional supplemental log groups: The before-images of specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group.
Conditional supplemental log groups: The before-images of all specified columns are logged only if at least one of the columns in the log group is updated.
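The two group types can be created directly in SQL; a sketch with hypothetical table and column names:

```sql
-- Unconditional (ALWAYS) log group: before-images logged on every update of the row
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_always (employee_id, email) ALWAYS;

-- Conditional log group: logged only when at least one listed column is updated
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_cond (salary, department_id);
```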
Because integrated capture is fully integrated with the database, no additional setup is required to work with Oracle RAC, ASM, and TDE.
Integrated capture uses the database logmining server to access the Oracle redo stream, with the benefit of being able to automatically switch between different copies of archive logs or different mirrored versions of the online logs. Thus integrated capture can transparently handle the absence of a log file caused by disk corruption, hardware failure, or operator error, assuming that additional copies of the archived and online logs are available.
Integrated capture enables faster filtering of tables.
Integrated capture handles point-in-time recovery and RAC integration more efficiently.
Integrated capture features integrated log management. The Oracle Recovery Manager (RMAN) automatically retains the archive logs that are needed by Extract.
Integrated capture is the only mode that supports capture from a multitenant container database. One Extract can mine multiple pluggable databases within a multitenant container database.
For a release 11.2.0.4 source database and later (with source compatibility set to 11.2.0.4 or higher), the capture of DDL is performed by the logmining server asynchronously and requires no special triggers, tables, or other database objects to be installed. Oracle GoldenGate upgrades can be performed without stopping user applications. The use of a DDL trigger and supporting objects is required when Extract is in integrated mode with an Oracle 11g source database that is earlier than version 11.2.0.4.
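Setting up an integrated-mode Extract can be sketched as below (the Extract name, credentials, and PDB names are examples; in a multitenant database the Extract is registered once at the root container):

```text
GGSCI> DBLOGIN USERID ggadmin PASSWORD ********
GGSCI> REGISTER EXTRACT exta DATABASE
GGSCI> ADD EXTRACT exta, INTEGRATED TRANLOG, BEGIN NOW

-- Multitenant: register at the root, listing the pluggable databases to mine
GGSCI> REGISTER EXTRACT exta DATABASE CONTAINER (pdb1, pdb2)
```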
- If the row with the old key is not found in the target, the change record in the trail is converted to an insert.
- If a row with the new key exists in the target, Replicat deletes the row that has the old key (it would not exist if the update had executed successfully), and then the row with the new key is updated as an overlay where the trail values replace the current values.
- When Replicat encounters a duplicate-record error, the static record that was applied by the initial load is overwritten by the change record in the trail. Overlaying the change is safer from an operational standpoint than ignoring the duplicate-record error.
Missing updates are ignored.
Missing deletes are ignored.
Duplicate inserts are turned into updates.
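The collision-handling behavior described above is enabled with the HANDLECOLLISIONS parameter, typically only during initial-load catch-up. A sketch of a Replicat parameter file (names are illustrative):

```text
REPLICAT rep1
USERIDALIAS ggtgt
-- Resolve duplicates/missing rows while the initial load overlaps live changes
HANDLECOLLISIONS
MAP src.*, TARGET tgt.*;
-- Remove HANDLECOLLISIONS (or apply NOHANDLECOLLISIONS) once source and target are in sync
```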
- Do not use HANDLECOLLISIONS with parameters that require Replicat to take an alternate action when an error occurs for an update, insert, or delete. The parameter INSERTMISSINGUPDATES, for example, will not work if used with HANDLECOLLISIONS, because the latter prompts Replicat to ignore the update DML if the update cannot find the row.
- If you have a business reason to skip a DML operation, for example due to a collision or a row not found, use REPERROR in the MAP statement for that table or table set (wildcard). For example, instead of HANDLECOLLISIONS, use MAP SRC.TABLE, TARGET TARGET.TABLE, REPERROR (1403, DISCARD). This skips only the ORA-01403 error and records it in the discard file.
- If your source and target databases are both Oracle and you need to resync a table or database, use the SCN/CSN to coordinate the initial load rather than using HANDLECOLLISIONS.
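The REPERROR approach requires a discard file so that skipped records are preserved for review. A sketch of the relevant Replicat parameter lines (the file path and table names are examples):

```text
-- Route "no data found" (ORA-01403) rows to the discard file instead of abending
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 100
MAP src.orders, TARGET tgt.orders, REPERROR (1403, DISCARD);
```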
GGSCI> add trandata <schema>.<tablename>
4) Start the Extract.
5) Add the new table to the Extract pump parameter file and save it.
6) Start the Extract pump.
7) Get the current SCN from the source database, e.g.:
SQL> select current_scn from v$database;
CURRENT_SCN
------------------------
5343407
8) Re-sync the newly added table from source to target (using normal export/import).
Make sure to use the FLASHBACK_SCN parameter for the export.
9) Add the table to the Replicat parameter file, including the filter option below. Up to 11g it uses double quotes: ( FILTER ( @GETENV ("TRANSACTION", "CSN") > <scn_number obtained from source db>) ). From 12c onwards it uses single quotes: ( FILTER ( @GETENV ('TRANSACTION', 'CSN') > <scn_number obtained from source db>) ). See the example below.
e.g. (up to 11g, double quotes):
MAP source.test1, TARGET target.test1, FILTER ( @GETENV ("TRANSACTION", "CSN") > 5343407);
MAP source.test2, TARGET target.test2, FILTER ( @GETENV ("TRANSACTION", "CSN") > 5343407);
e.g. (12c onwards, single quotes):
MAP source.test2, TARGET target.test2, FILTER ( @GETENV ('TRANSACTION', 'CSN') > 5343407);
10) Start the Replicat.
11) Verify the tables on source and target; once the lag is zero, remove the filter parameter from the Replicat parameter file and restart.
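Step 8 above (the SCN-consistent export) can be sketched with Data Pump as follows; the credentials, directory object, and file names are examples:

```text
-- Consistent export of the new table as of the SCN captured in step 7
expdp system/******** tables=source.test1 flashback_scn=5343407 \
      directory=DATA_PUMP_DIR dumpfile=test1.dmp logfile=test1_exp.log
```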
- Extract: Use TRANLOGOPTIONS EXCLUDEUSER GGS to avoid loopback in bidirectional replication. (Valid for Extract.)
- Replicat: Use the GROUPTRANSOPS parameter to control the number of SQL operations contained in a Replicat transaction when operating in its normal mode (non-BATCHSQL).
- For an integrated Replicat, GROUPTRANSOPS is effective only when the integrated Replicat parameter PARALLELISM is set to 1.
- Default: nonintegrated Replicat: 1000 operations; integrated Replicat: 50 operations.
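A sketch of the relevant Replicat parameter lines (the value 2000 is illustrative):

```text
-- Nonintegrated Replicat: group roughly 2000 source operations per target transaction
GROUPTRANSOPS 2000

-- Integrated Replicat: GROUPTRANSOPS is honored only with PARALLELISM set to 1
DBOPTIONS INTEGRATEDPARAMS (PARALLELISM 1)
GROUPTRANSOPS 2000
```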
- Replicat: BATCHSQL:
- BATCHSQL works best for a large number of small row changes. For large row changes the performance gain is limited; you would normally not use BATCHSQL for rows larger than 5 KB.
- BATCHSQL optimizes the replication of individual DML operations by combining them into single SQL statements, whereas GROUPTRANSOPS optimizes the apply process by grouping multiple transactions together and applying them as one batch.
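BATCHSQL can be tuned with options that cap batch sizes; a sketch with illustrative values:

```text
-- Enable batching of small DML operations into array-style statements
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 1200
```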
- Avoid infinite loopback by creating a TRACETABLE in the source and target databases.
- GETREPLICATES and IGNOREREPLICATES parameters control whether DML transactions issued by Replicat are captured or ignored by the Extract process which is processing the same tables on the same system.
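The loopback-avoidance pieces above can be sketched as follows (the ggadmin user name is an example):

```text
-- Create the trace table in both databases from GGSCI
GGSCI> DBLOGIN USERID ggadmin PASSWORD ********
GGSCI> ADD TRACETABLE

-- Extract parameter file: skip transactions applied by the local Replicat
TRANLOGOPTIONS EXCLUDEUSER ggadmin
IGNOREREPLICATES
```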