Monday, February 1, 2016

OPTIMIZING SAP MIGRATION



Preface
What is Migration?
                Migrating an SAP database and application environment, along with the associated system software and unbundled products, is one of the most demanding tasks an IT team can encounter. This white paper explains the process of moving an SAP environment from one system to another.
                In general, migration is the process of changing an environment; people, for example, migrate from one place to another in search of a better environment. Several factors are involved: why we migrate, where we migrate to, which path and process we follow, and what we need to consider once we reach the destination.
                As in the example above, migration with respect to SAP is simply the movement of the current SAP application and data from one environment to another. For SAP, the environment refers to the platform on which SAP is installed. The target platform can have the same or a different operating system and database, and based on this the migration is categorized into two types: homogeneous and heterogeneous. A homogeneous migration is a migration to a platform with the same operating system and database as the source platform; a heterogeneous migration is a migration to a platform with a different operating system or database.
Why and when needed?
A client may plan a change in infrastructure management strategy due to hardware, operating system, or database constraints. There can be situations where the current hardware is not scalable enough to cope with business requirements and the future growth of the system. Databases may also have constraints when working with specific operating systems; for example, database selection may depend on the operating system, and operating system platforms fall into two classes: little endian and big endian.
Little Endian and Big Endian 
When converting to Unicode, the export code page must correspond to the endianness of the target system. More precisely, the export code page should correspond to the endianness of the machine where the R3load import processes will run. For example, when exporting from a big-endian source (such as Solaris_SPARC) for import on a little-endian Linux X86_64 target, the export uses the little-endian code page 4103.
Big Endian: The most significant byte is stored in memory at the lowest address, and the least significant byte at the highest address. (The big end comes first.)  As an analogy, we say "twenty-four" in English; the more significant number, twenty, comes first. This is also called most significant byte (MSB) ordering.
Little Endian: The least significant byte of the number is stored in memory at the lowest address, and the most significant byte at the highest address. (The little end comes first.) As an analogy, we say "fourteen" in English; the less significant number, four, comes first. This is also called least significant byte (LSB) ordering.
Code page 4103 (Little Endian): Alpha, Intel X86 (and clones), X86_64, Itanium (Windows and Linux), Solaris_X86_64
Code page 4102 (Big Endian): IBM 390, AS/400, PowerPC (AIX), Linux on zSeries (S/390), Linux on Power, Solaris_SPARC, HP PA-RISC, Itanium (HP-UX)
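
If you are unsure which class a particular host belongs to, a quick check is to let od interpret a known byte sequence in host byte order. This is only a convenience sketch; it assumes a POSIX shell with printf and od available, which is the case on the usual UNIX/Linux SAP hosts:

        # Write the bytes 01 02 03 04 and read them back as one 4-byte integer:
        # a little-endian host prints 04030201, a big-endian host prints 01020304.
        printf '\001\002\003\004' | od -An -tx4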

1.           Using Migration Monitor

1.1.      Introduction:

SAP provides a couple of tools to optimize (or perform) the Unicode conversion export/import; one of these tools is the Migration Monitor. The Migration Monitor is a set of scripts and configuration files that is internally based on SAP tools such as R3load, R3szchk, R3ldctl and R3ta. As of NetWeaver 2004 SR1 the Migration Monitor is integrated into SAPinst, and it can also be used as an independent tool without SAPinst; for older SAP releases the Migration Monitor can likewise be used independently. It is recommended to use the Migration Monitor for medium-sized databases (500 GB to 1 TB) and when you need to perform the export/import in parallel. For larger databases, use the Distribution Monitor.
There are three transfer methods used in MIGMON: net, ftp and socket. With the net method you need a common shared area, accessible by both the source and the target system, where the exported data and the corresponding signal files are stored. When no shared space is available between source and target, the exported data created on the source is transferred to the target via the FTP protocol. The socket option depends entirely on the network, as the exported data is transferred directly to the destination over the network; the advantage of this method is that it saves all the space needed to store the exported data, but its biggest disadvantage is that it requires a very stable network, since any glitch can lead to problems. The number of ports needed to transfer the data is equal to 1 + the number of R3load jobs running in parallel. For example, if you specify port 6678 for the socket method and have set jobNum to 10, then you need to tunnel all 11 ports from 6678 to 6688 between source and destination.
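Before starting, it is worth verifying that the whole port range is actually reachable between the two hosts. Below is a minimal sketch of such a check, assuming the netcat (nc) and seq utilities are available and using the port range from the example above together with the hypothetical host name remotehost:

        # Probe every port used by the socket transfer
        # (6678 = base port, 6679-6688 = one port per parallel R3load job)
        for p in $(seq 6678 6688); do
            nc -z remotehost "$p" && echo "port $p reachable" || echo "port $p NOT reachable"
        done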
MIGMON is used to perform the tasks below:
-   Creates R3load command files
-   Creates R3load task files if required
-   Starts R3load processes to unload and load data
-   Transfers packages from source to destination if required
-   Starts the R3load import process for a package as soon as that package is available

1.2.      Advantages:

-   The biggest advantage of using the Migration Monitor is parallel export/import.
-   The user has full control over the load and unload processes.
-   It automates the dump shipping between source and target system.

1.3.      Disadvantages:

-   You need to have the complete configuration ready before starting the export/import.
-   The manual configuration increases the chance of human error.

1.4.      Limitations:

-   Can be used only for the ABAP stack or the ABAP part of a dual stack, but not for the Java stack; for Java you must follow the standard system copy method.

2.           Migration Monitor - Basic Configuration

The following are some of the basic files used with the Migration Monitor:

export_monitor_cmd.properties
    Property file in which all the user inputs needed to run the export part of MIGMON manually are stored, e.g. the export location and other important directories. These parameters are described in detail in the sections below.
import_monitor_cmd.properties
    Property file in which all the user inputs needed to run the import part of MIGMON manually are stored, e.g. the export location, the load type and other important directories. These parameters are described in detail in the sections below.
orderby.txt
    Defines the order in which the tables and packages should be imported. Note that the .WHR files for the split tables must be ready before you create this file. The file can have any name, but its path must be maintained in the properties file above.
Tablesplit.txt (name can be changed)
    Contains the details of the table splitting. The file can have any name, but its path must be maintained in the properties file above.
export_monitor.sh (UNIX) / export_monitor.bat (Windows)
    Scripts used to start the export on the source system.
import_monitor.sh (UNIX) / import_monitor.bat (Windows)
    Scripts used to start the import on the target system.

2.1.      Configuration of the properties file:

Before you start the export or import you should have the basic configuration ready. All required parameters are maintained in configuration files, referred to as '.properties' files: export_monitor_cmd.properties for the export and import_monitor_cmd.properties for the import. Some of the important parameters needed to perform the export/import are given below:
Exchange mode (ftp | net): defines how the export files are transferred to the target. The ftp option actually transfers the files to the target using the FTP protocol, whereas with the net option a common directory structure shared between source and target is used.
exportDirs: points to the location where the exported files will be stored.
installDir: points to the location where the export logs are stored.
orderBy: location of the order-by file.
ddlFile: location of the DDL control file, depending on the type of export/import you decide on, i.e. sorted or unsorted: DDL<DBTYPE>.TPL or DDL<DBTYPE>_LRG.TPL.
r3loadExe: location of the source system's R3load executable.
dataCodepage: code page with which the data files will be exported.
loadArgs: defines the R3load options for load and unload.
jobNum: number of R3load jobs to run in parallel; this parameter is dynamic and can be changed at runtime based on the resources utilized.
netExchangeDir: location of the directory where the .SGN (signal) files are created whenever the export of a package is complete; these act as the signal for the target R3load to start the import of that package.
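
To make the parameters concrete, below is a minimal sketch of an export_monitor_cmd.properties file for the net exchange method. The parameter names are the ones listed above; the paths, code page and job count are only illustrative examples and must be adapted to your landscape and checked against the Migration Monitor User's Guide for your release:

        # Export dump files are written here (example path)
        exportDirs=/export/ABAP
        # Directory for Migration Monitor logs and state files
        installDir=/export/migmon_exp
        # Optional processing order of packages
        orderBy=/export/migmon_exp/orderby.txt
        # Unsorted unload template for this database type (example name)
        ddlFile=/export/migmon_exp/DDL<DBTYPE>_LRG.TPL
        # R3load executable of the source system
        r3loadExe=/usr/sap/<SID>/SYS/exe/run/R3load
        # Target code page (4103 = little endian, 4102 = big endian)
        dataCodepage=4103
        # Additional R3load command line options (if any)
        loadArgs=
        # Number of parallel R3load processes; can be changed at runtime
        jobNum=10
        # Shared directory where .SGN signal files are written
        netExchangeDir=/export/exchange

The import_monitor_cmd.properties file is built the same way, with the import-specific parameters (e.g. the load type) maintained in addition.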

2.2.    Optimization of the load and unload of the data.

                Optimization plays a very important role in the loading and unloading of data, especially when a large database has to be migrated. There are a few DB-specific methods that can be used for this. The most common optimization techniques used in SAP migrations are table splitting and package splitting.
                In table splitting, the largest tables are split into small chunks so that a single table can be processed by several R3load processes in parallel. You can use R3ta to split the top tables using WHERE conditions. Once the tables are split, .WHR files are created. These tables are then excluded from package splitting. SAPinst can be used for table splitting, but you can also do it manually.
                In package splitting, the tables are divided into a number of packages, and each package is processed by a single R3load. Apart from the tables considered for table splitting, you can further decide to exclude tables above a certain size so that they are treated as separate packages. Similar to the .WHR files in table splitting, package splitting produces .STR files, which are then used by the R3load processes.
                Once the table splitting is done, you can decide on the order in which the packages should be processed, to make sure that the long-running tables are processed at the beginning. This order is maintained in the orderBy file (orderby.txt); a sketch of such a file is shown below.
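
Below is a hypothetical orderby.txt to illustrate this. It assumes one package name per line, processed from top to bottom, with the packages of a split table (here CDCLS, split into three .WHR chunks) and other long-running packages listed first; check the Migration Monitor User's Guide for the exact syntax supported by your release:

        CDCLS-1
        CDCLS-2
        CDCLS-3
        SAPAPPL1
        SAPAPPL0
        SAPAPPL2
        SAPSSEXC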

3.           Execution:

                For executing the export and import scripts you should have the configuration files, the table splitting and some other prerequisites ready:
-   Set JAVA_HOME to a Java version 1.4.1 or above.
-   Make sure that you have the latest R3load, DBSL, R3ta and R3util.
-   The target database must be tuned in order to avoid issues arising during the import process.
-   Make sure that you have enough space available on the target.
-   Export preparation is run before the start of the export to make sure that DBSIZE.XML is ready and can be given as input to start the import in parallel. A couple of quick checks for the last two points are sketched below.
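
A minimal sketch of such checks in a shell on the target host; the /export path is only an example for the dump area and must be adapted:

        # Free space in the dump area (enough space must be available on the target)
        df -k /export
        # Verify that export preparation has produced DBSIZE.XML somewhere in the dump
        find /export -name DBSIZE.XML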

Hint: If you are not sure which Java is installed on the system, or if you doubt whether the current Java supports MIGMON, you can always set JAVA_HOME to the "/tmp/sapinst_instdirexe*****/jre" directory generated by SWPM.
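
A minimal sketch of setting this in a Bourne-type shell before starting the monitors; the path is hypothetical and must point to a JRE that is actually installed on your host:

        # Point JAVA_HOME at a suitable JRE (hypothetical path, adapt to your host)
        export JAVA_HOME=/opt/java1.4
        # Verify that the java binary is found and check its version
        "$JAVA_HOME"/bin/java -version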

To start the export and import you just need to enter the commands below:

                                ./export_monitor.sh -bg
                                ./import_monitor.sh -bg

The '-bg' option runs the process in the background.

4.           Monitoring MIGMON:

                The export/import process can be monitored using some of the files generated during the export/import; they are similar to those generated by SWPM:
-   export_monitor.log and import_monitor.log
-   export_state.properties and import_state.properties
-   <Package>.TSK
-   <Package>.log
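
A few quick checks based on these files (a sketch only; it assumes you run the commands in the installDir of the respective monitor, that the state files mark successfully processed packages with '+' and failed ones with '-' as described in the Migration Monitor User's Guide, and that SAPAPPL0 stands for any package name):

        # Follow the overall progress of the export
        tail -f export_monitor.log
        # Count packages finished successfully and packages that failed so far
        grep -c "=+" export_state.properties
        grep -c "=-" export_state.properties
        # Look for errors reported by an individual R3load package
        grep -i error SAPAPPL0.log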

5.           Further Reading:

To further optimize the export/import there are various options available, but each has its own advantages and disadvantages, and you can choose the technique that is most suitable for your environment. Sorted versus unsorted load/unload is one such option that you can use to speed up the migration process. Apart from this, there are a few DB-specific techniques to optimize the migration; refer to the DB-specific optimization guides.
For migrations of bigger databases, you can reduce the downtime by increasing process parallelization. One of the key requirements for this is to have multiple servers processing the migration export/import. Similar to MIGMON, SAP provides an enhanced tool that serves this purpose: DISTMON (Distribution Monitor). For more information, see SAP Note 855772.

6.           References:

1.        Migration Monitor User's Guide
2.        TADM70 - SAP System - OS and DB Migration