AWS Database Blog

Best practices for migrating an Oracle database to Amazon RDS PostgreSQL or Amazon Aurora PostgreSQL: Migration process and infrastructure considerations

An Oracle to PostgreSQL migration in the AWS Cloud can be a complex, multistage process with different technologies and skills involved, from the assessment stage to the cutover stage. This blog series covers the environment and configuration setups for your source Oracle database, AWS Database Migration Service (AWS DMS), and the target PostgreSQL database. The series focuses on the source and target database infrastructure, setup, tools, and configurations used for migrating the production, development, testing, and staging database environments. We start with the data migration process from Oracle to either Amazon RDS for PostgreSQL or Amazon Aurora with PostgreSQL compatibility.

Although each of these environments is different, they share common features. The blog posts in this series provide high-level information about the components to consider for a database migration. These posts don’t address the complexities and intricacies of the application components and different scenarios, because these depend a great deal on the use case. To better understand the complexities involved, see the AWS Database Blog post Database Migration—What Do You Need to Know Before You Start?

This series contains three blog posts, covering the three stages of the migration:

  • Stage 1: Migration process and infrastructure considerations. This post discusses the source server infrastructure setup and gives a high-level view of the migration process, which requires access to the Oracle database and to the client hardware and operating system as appropriate.
  • Stage 2: Source database considerations for the Oracle and AWS DMS CDC environment. The second blog post covers setting up the Oracle source database environment with AWS DMS for both one-time migrations and ongoing change data capture (CDC) replication.
  • Stage 3: Target database considerations for the PostgreSQL environment. The blog series concludes with setting up the PostgreSQL target database environment for AWS DMS.

The blog series addresses the following aspects:

  • Setting up tools for each stage of the migration process
  • Choosing appropriate instance types and database versions for the source environment, migration tools, and target environment
  • Building and testing the environment for Oracle, AWS DMS, and PostgreSQL

The planning decisions that you make about which databases to migrate, and how to migrate them, help define the environments required to develop the proper solutions. This series covers only the Oracle to PostgreSQL migration process. It doesn’t discuss the manual aspects of database migration, such as application dependencies. For more information about how to manually migrate specific Oracle database objects and features to Aurora PostgreSQL, see the Oracle Database 11g/12c to Amazon Aurora with PostgreSQL Compatibility Migration Playbook.

Migration process

Database migration is a crucial operation within any enterprise, and failure can cause a lot of pain. With failure, the organization’s ability to move forward with new technology is stopped dead in its tracks, after significant expenditures have been made with little or no value gained. The migration process has many moving parts, so you must streamline it with care. Therefore, when planning to move the data from Oracle to PostgreSQL, you must consider the various available options. These include your networks, CPU, memory, and operating system or systems. They also include your migration tools and process, for example, whether to use ETL tools like AWS Glue, export the data as files, or use AWS DMS to extract the data into PostgreSQL with CDC.

A critical step in the process is the technical assessment phase, which uses the Workload Qualification Framework (WQF). WQF includes self-service tools, a questionnaire, and design review guidance. AWS solutions architects, partners, and consultants can use these to classify workloads and determine the magnitude of the migration, the staff-hour consumption, and the AWS service to migrate to. The WQF classifies the two main types of database workloads, OLTP and data warehouse (DW), into five separate categories to determine the effort that is required when moving the data, as shown in the following table.

Category | Workload type                          | Ease of migration    | Staff-hour consumption
1        | ODBC/JDBC workload                     | Easy                 | Light
2        | Light Oracle feature workload          | Easy                 | Medium
3        | Heavy Oracle feature workload          | Hard                 | Heavy
4        | Oracle-specific application framework  | Difficult to migrate | N/A
5        | OCI workloads                          | Hard                 | Heavy

For further information, see the AWS Database Freedom program website.

The database migration process takes a significant amount of planning for each stage. This planning helps ensure that all incompatibility and configuration issues between environments are addressed effectively up front. For example, if you are moving 100 GB of data on a local network and can take an outage for a few hours, things can be relatively simple. However, suppose that your database is 5 TB and you are migrating from on-premises hardware to a cloud environment with a two-hour maintenance window. In this case, an Oracle to PostgreSQL migration can prove challenging.

The database migration process consists of the following four main stages:

Migration Steps

Stage 1: Infrastructure

In this stage, you do the following:

  • Configure the network that connects your source and target databases to an AWS DMS replication instance (RI). Doing this can be as simple as connecting two AWS resources in the same VPC as the replication instance. It can also involve more complex configurations, such as connecting an on-premises database to an Amazon RDS DB instance over VPN. The network speeds can range from typical internet service to AWS Direct Connect for faster transfers.
  • Perform capacity planning for the target database and the replication instance, as sketched in the example following this list.
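As a minimal sketch of the infrastructure step, the following AWS CLI command provisions an AWS DMS replication instance. The instance class, storage size, subnet group, and security group shown here are hypothetical placeholders; size them based on your own capacity planning.

# Sketch: provision an AWS DMS replication instance (identifiers and sizes are placeholders)
aws dms create-replication-instance \
    --replication-instance-identifier ora-to-pg-ri \
    --replication-instance-class dms.r5.large \
    --allocated-storage 200 \
    --replication-subnet-group-identifier my-dms-subnet-group \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --multi-az \
    --no-publicly-accessible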

This stage is discussed in the current blog post.

Stage 2: Oracle environment setup for capturing source data using AWS Schema Conversion Tool (AWS SCT)

In this stage, you do the following:

  • Set up the Oracle database environment properly for better performance and integration with AWS tools and services.
  • Set up AWS SCT to convert your existing Oracle database schemas and data to the PostgreSQL database. You can convert both relational OLTP schemas and data warehouse schemas. AWS SCT converts your secondary indexes, sequences, default values, stored procedures, triggers, synonyms, views, and other schema objects not specifically related to data migration. It generates data definition language (DDL) statements to create the new PostgreSQL database, based on the converted model objects. Running the DDL statements creates the objects in the PostgreSQL database.
  • Review the migration assessment report in SCT to understand the incompatibility issues and technical challenges before you embark on the project.
  • Look at unsupported data types and objects. Some source data types need to be converted into equivalent data types for the target database; a hedged example query for taking stock of the data types in use follows this list. For more information on supported data types, see the Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Migration Playbook.
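As a hedged example of taking stock of data types, a query like the following inventories the column data types in a schema so that you can compare them against the types that AWS SCT and AWS DMS support. The schema name MYAPP and the use of SQL*Plus are assumptions; run the equivalent query from any Oracle client with dictionary access.

# List the data types in use for a hypothetical schema (MYAPP is a placeholder)
sqlplus -s system@ORCL <<'SQL'
SELECT data_type, COUNT(*) AS column_count
FROM   dba_tab_columns
WHERE  owner = 'MYAPP'
GROUP  BY data_type
ORDER  BY column_count DESC;
SQL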

This stage is discussed in the second series post, Source database considerations for the Oracle and AWS DMS CDC environment.

Stage 3: DMS setup to move the data

In this stage, you set up AWS DMS to migrate data.

AWS DMS migrates data from your Oracle source into your PostgreSQL target. AWS DMS also captures data manipulation language (DML) and supported DDL changes that happen on your source database and applies these changes to your target database. This way, AWS DMS keeps your target databases in sync with the source database.
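As a minimal sketch, assuming that the source and target endpoints and the replication instance already exist, a full load plus CDC task might be created as follows. The ARNs, task identifier, and table-mapping schema name are placeholders.

# Sketch: create a full-load-and-cdc replication task (ARNs and names are placeholders)
aws dms create-replication-task \
    --replication-task-identifier ora-to-pg-task \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:ORASRC \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:PGTGT \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:ORA2PGRI \
    --migration-type full-load-and-cdc \
    --table-mappings '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1","object-locator":{"schema-name":"MYAPP","table-name":"%"},"rule-action":"include"}]}'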

This stage is discussed in the second series post, Source database considerations for the Oracle and AWS DMS CDC environment.

Stage 4: PostgreSQL database environment setup

In this stage, you set up the PostgreSQL database environment for the best possible migration performance and integration with AWS tools. This stage completes the migration process. This stage is discussed in the last series post, Target database considerations for the PostgreSQL environment.

Typically, the database migration process also includes a rollback process and methods, in case the production cutover experiences issues. We don’t discuss rollback strategy in this blog series.

Database migration process

Database migration from Oracle DB to RDS PostgreSQL or Aurora PostgreSQL refers to the complete migration of all of the table structure, data, associated indexes, triggers, and stored procedures. After these are migrated, the new RDS or Aurora PostgreSQL database is kept in sync by using data replication with the source until cutover.

You can migrate data between these heterogeneous databases in many ways. These include the dedicated data migration tools provided by database vendors. You can also combine AWS database migration tools with the procedures that have been developed for the design and implementation of cross-platform data migrations.

In particular, AWS offers two tools that enable a simple and easy heterogeneous migration from databases such as Oracle to PostgreSQL: AWS SCT and AWS DMS.

AWS SCT helps you convert Oracle dictionaries and secondary objects to any supported database target with minimal work involved. AWS SCT creates a migration assessment report that identifies objects and code that require manual changes or rework.

AWS DMS migrates the database and uses change data capture (CDC) with an Oracle source endpoint, whether the storage infrastructure is Oracle ASM or non-ASM.

The migration of Oracle to PostgreSQL using the DMS and SCT services is a multistage process, and you must take several dependency factors into consideration. Here, you need to address both technical and nontechnical aspects, such as project planning, resources, training, testing processes, and so on.

To build a sound migration strategy, include the following technical aspects when using DMS and SCT:

  • Server capacity planning for source, replication instance, and target
  • Tuning the source operating system (for example, Linux) and network
  • Schema and data migration process
  • Source Oracle database parameters, including these:
    • Archive logging
    • Supplemental logging
    • Storage
  • Target PostgreSQL database parameters

Database server capacity planning requires that we know about data platform architecture, performance counters, external dependencies, DB versions and editions, migration paths, hardware, virtualization, non-functional requirements, licensing, and so on. One of the key points in estimating the capacity needs for the new data platform environment is to understand the trend for each performance counter. Counters include, for example, Oracle instance RAM and CPU usage patterns, database IOPS, and log and data file sizes. You also should be able to extrapolate the counter time series against the best-fitting trend and then compare them in a seasonal manner. This extrapolation applies both to target databases and to Amazon EC2 instances.

For performance counters such as CPU usage, you must understand what happens if you plan to change the underlying hardware by migrating from on premises into the AWS Cloud. For example, there can be great differences between the source and target hardware platform performance. Therefore, you must be able to calculate things like the CPU performance ratio between the current source server setup and the hypothetical target server setup. By doing this, you can extrapolate the counter time series for instance-level or database-level performance to a new norm.
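As a simple illustration of this kind of extrapolation, the following shell arithmetic estimates a target vCPU count from the source core count, its observed peak utilization, an assumed per-core performance ratio, and a headroom factor. All of the numbers are made up for the example.

# Illustration only: every value here is a hypothetical input
SOURCE_CORES=16      # physical cores on the source Oracle server
PEAK_UTIL=0.70       # observed peak CPU utilization (70 percent)
PERF_RATIO=0.8       # assumed ratio of source-core performance to target-vCPU performance
HEADROOM=1.3         # 30 percent allowance for growth and bursts

TARGET_VCPUS=$(echo "$SOURCE_CORES * $PEAK_UTIL * $PERF_RATIO * $HEADROOM" | bc -l)
printf "Estimated target vCPUs: %.0f\n" "$TARGET_VCPUS"   # prints 12 for these inputs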

In addition to CPU and memory, you must understand the storage requirements in a heterogeneous migration. The points to evaluate or assess in the source system to optimize storage on the target include the following:

  • Audit your source Oracle database. Evaluate every schema and all objects within each schema to determine if any of the objects are no longer used. Deprecate these unused objects in the source Oracle database, because you don’t need to migrate unused objects.
  • If load capacity permits, get the maximum size in kilobytes for each LOB type on the source database (a hedged query sketch follows this list). You need this information later in the process.
  • Move columns with LOBs (BLOB, CLOB, NCLOB), LONG, LONG RAW, and XMLTYPE to Amazon S3, Amazon DynamoDB, or any other external data store.
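The following is a hedged sketch of the LOB sizing step for a single, hypothetical CLOB column (myapp.documents.doc_body); repeat it for each LOB column of interest. Note that DBMS_LOB.GETLENGTH returns characters for CLOB and NCLOB columns and bytes for BLOB columns.

# Report the largest LOB value, in KB, for a hypothetical CLOB column
sqlplus -s system@ORCL <<'SQL'
SELECT ROUND(MAX(DBMS_LOB.GETLENGTH(doc_body)) / 1024) AS max_kb
FROM   myapp.documents;
SQL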

External LOB data objects are stored as operating system files, outside the database tablespaces. This being the case, only the LOB locator (pointer) is stored in the Oracle database. For the actual value of the LOBs, LONG, or LONG RAW, the entire data is stored externally. This approach reduces the Oracle database’s storage requirements and improves query performance.

For example, you can store the CLOBs (XML files) or BLOBs (binary data objects) in Amazon S3 and then store URL references to the S3 objects in your database. The support for integrity and durability is provided by the underlying storage as governed by the operating system. Taking this approach simplifies your source Oracle database for an easier migration. It also lowers the capacity requirements for the target PostgreSQL database.
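As a hedged sketch, with the bucket, file, host, and table names all hypothetical, offloading a document to Amazon S3 and keeping only a URL reference in the target database might look like the following.

# Upload the binary object to S3 instead of storing it in the database
aws s3 cp ./invoice-1001.pdf s3://my-lob-offload-bucket/invoices/invoice-1001.pdf

# Store only the object URL in the target PostgreSQL table
psql -h mytarget.cluster-example.us-east-1.rds.amazonaws.com -U appuser -d appdb \
     -c "UPDATE invoices SET doc_url = 'https://my-lob-offload-bucket.s3.amazonaws.com/invoices/invoice-1001.pdf' WHERE invoice_id = 1001;"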

Operating system and network

When considering a DB platform migration, we find that users typically check the source’s CPU utilization, memory size, and I/O patterns. However, users often ignore the importance of network performance, especially when migrating to AWS. Therefore, we dig a bit deeper into network validation topics in this blog post, though we also recommend that you engage network and system engineers.

Networks entail overhead that can add a certain amount of delay to processing. To optimize performance, ensure that your network throughput is fast. Additionally, try to reduce the number of messages that are sent over the network. Lastly, be aware that it can be difficult to measure the delay that the network adds. If possible, consult a network or systems engineer early in the validation process. Network problems that begin at the source system or data center can cause performance issues and slow down the migration.

When you address network performance, get a network sniffer trace to analyze for performance degradations. To do so, capture the traffic and load it into a network analyzer (for example, Wireshark or a commercial equivalent), where you can examine the individual packets for clues.
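For example, you might capture traffic between the source host and the Oracle listener with tcpdump and then open the capture file in Wireshark. The interface name, peer address, and listener port 1521 below are assumptions for a typical setup.

# Capture database traffic for later analysis in Wireshark
sudo tcpdump -i eth0 -s 0 -w /tmp/oracle-migration.pcap host 10.10.10.5 and port 1521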

In addition, use network monitoring tools to ensure that you have the source prepped for migration:

  • A network bandwidth monitor can identify issues with bandwidth performance limits being exceeded on the network, the client, or the server. The monitor can determine if issues are related to either of the following:
    • Throttling – Requests are throttled based on the category of the request and number of calls the customer has made in that category.
    • Bandwidth utilization patterns that show physical limitations.
  • Check for Ethernet collisions, dropped packets or errors, and so on.
  • Check for stability of WAN links. WAN optimization addresses latency, packet loss, and bandwidth challenges that cause critical applications to be unresponsive or unreliable across the WAN.
  • Check for Quality of Service (QoS) or packet prioritizing that might be in effect.
  • Check for a firewall in the traversal path and whether it blocks specific ports or protocols (a quick port-check sketch follows this list).
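A quick, hedged way to verify the last two points is to probe the database ports through the firewall and to check the interface error counters. The host names below are placeholders; 1521 is the default Oracle listener port and 5432 is the default PostgreSQL port.

# Confirm that the firewall passes the Oracle listener and PostgreSQL ports
nc -zv oracle-source.example.com 1521
nc -zv postgres-target.example.com 5432

# Check interface counters for errors, drops, and collisions
ip -s link show eth0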

The source database servers can run an operating system such as Windows, SunOS, IBM AIX, HP-UX, Linux, or another major vendor’s OS. We don’t intend here to present every available command for every OS. We assume throughout the blog series that the DB migration is from a popular open-source Linux system.

Given this, you need a handful of Linux commands to effectively manage a system, and we list a few important ones following. We strongly recommend that you test them before using them.

Network commands

Many utilities, such as netstat, traceroute, ifconfig, and port scanners, are available for troubleshooting a connection and looking for anomalies. Some important commands include these:

  • The ifconfig command shows the details of the network interface or interfaces defined in the system.
    • The most common option is -a (ifconfig -a), which shows all the interfaces; look for packet drops or errors, MTU sizes, and so on.
    • The usual name of the primary Ethernet network interface is eth0. To find out the details of a specific interface, for example eth0, you can use: # ifconfig eth0
  • /proc/net/dev, netstat, and mii-tool all deal with port speeds.
  • Use netstat -s or netstat -su and look for udpInOverflows, packet receive errors, or fragments dropped. A consistent increase in these metrics indicates an issue.
  • NIC driver version: ethtool --driver eth0 (equivalent to ethtool -i eth0) shows the driver and firmware versions, which relate to the port and packet speed setup.

The following commands provide information about the current network drivers and firmware versions. At times, a simple search after identifying the current version can provide suggestions for an upgrade in case of a performance issue. Manual directions for finding the name of the network adapter follow (you must have permissions to run these commands in Linux):

  • Enter lspci -v | grep Ethernet -A 1
  • Identify the name of the adapter to ensure that the right adapter and drivers are on the subsystem line. The example for Intel PRO/1000 PT Dual Port Server Adapter is as follows:
    MyServer:/home/MyUser # lspci -v | grep "Ethernet" -A 1
    02:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
    Subsystem: Intel Corporation PRO/1000 PT Dual Port Server Adapter
    --
    02:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
    Subsystem: Intel Corporation PRO/1000 PT Dual Port Server Adapter
  • We strongly recommend that you ensure that the latest and best-supported drivers are installed for best performance. Check manually to find the driver version as follows, and if required replace it:
    • Type ethtool -i ethx where ethx is the Ethernet port
    • Read the driver name and version (to confirm that you have the latest and best). The example for the first Ethernet port, eth0, is as follows:
      MyServer:/home/MyUser # ethtool -i eth0
      driver: e1000
      version: 7.6.5-NAPI
      firmware-version: 5.6-2
      bus-info: 0000:02:00.0

To improve network throughput for the source servers, you can set up jumbo frames on the host’s network adapter with an MTU size of 9000, and make the setting persistent in the interface configuration so that it survives reboots. However, you must ensure that the network path supports jumbo packets end to end to avoid packet fragmentation.

For example, you can run ifconfig eth0 mtu 9000, followed by ifconfig -a to confirm the setting. To confirm that jumbo packets can traverse the whole path, use the following:

[node01] $ traceroute -F node02-priv 9000
traceroute to node02-priv (10.10.10.2), 30 hops max, 9000 byte packets
1 node02-priv (10.10.10.2) 0.232 ms 0.176 ms 0.160 ms

[node01] $ traceroute -F node02-priv 9001
traceroute to node02-priv (10.10.10.2), 30 hops max, 9001 byte packets
traceroute: sendto: Message too long
1 traceroute: wrote node02-priv 9001 chars, ret=-1

In addition, you can do the following:

  1. Check the network throughput using the iperf tool to determine whether network bandwidth utilization is nearing the maximum observed throughput (a hedged example follows this list).
    Set the network parameter values appropriately to support the maximum network throughput. Find the bandwidth delay product (BDP) value and set the network buffer size accordingly. The BDP is calculated as the product of the link bandwidth and the round-trip time. For example, for a 1 Gb/s network with a round-trip time of 0.1 second, the BDP is (0.1 * 10^9)/8 = 12,500,000 bytes. On such a network, set the following parameter values in /etc/sysctl.conf:

    net.core.rmem_max = 12500000
    net.core.wmem_max = 12500000
    net.ipv4.tcp_rmem = 4096 87380 12500000
    net.ipv4.tcp_wmem = 4096 65536 12500000  
    
  2. Additionally, increase the following parameters:
    net.core.netdev_max_backlog = 30000
    net.ipv4.tcp_max_syn_backlog = 4096
  3. If the bandwidth, latency, and utilization look normal, then check the network route from the client to the target server. Understanding the travel times on a network gives you an idea of how long a transaction takes. Client-server communication requires many small packets, and high latency on a network slows the transaction down because of the time interval between sending a request and getting the response. At times, the traversal path is suboptimal. For example, the transaction might have to hop across many unnecessary paths due to an incorrect setup, a repeater issue in the path, a route configuration issue, or another factor. To get address information for each device in the path, use traceroute or the equivalent from the client to the server.
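For the throughput check in step 1, a hedged iperf3 run between the source server and a host in the target VPC might look like the following; the host name, stream count, and duration are placeholders. The final command applies the kernel settings from steps 1 and 2 without a reboot.

# On a host in the target VPC (the server side of the test)
iperf3 -s

# On the source database server: 4 parallel streams for 30 seconds
iperf3 -c target-vpc-host.example.com -P 4 -t 30

# Apply the /etc/sysctl.conf changes without rebooting
sudo sysctl -p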

System performance indicators

Some useful commands for system performance indicators are the following:

  • To check disk I/O and memory utilization, use sar, vmstat, and iostat.
  • To review CPU utilization, use /proc/cpuinfo, mpstat, and top.
  • To review memory utilization, use /proc/meminfo, /proc/slabinfo, and free.
  • For the kernel version and release, use cat /proc/version.
  • For the types of I/O cards, review the versions and drivers by using lspci -vv.
  • To list all hardware PCI devices, use lspci -v.
  • To review kernel messages for obvious issues or problems, check /var/log/messages and /var/log/dmesg.
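A minimal baseline-collection sketch using the commands above follows; the sampling interval and count are arbitrary choices.

# Sample CPU, memory, and disk I/O every 5 seconds, 12 times (one minute of data)
sar -u 5 12            # CPU utilization
vmstat 5 12            # memory, swap, and run-queue statistics
iostat -xz 5 12        # extended per-device I/O statistics, omitting idle devices
free -m                # current memory usage in megabytes
cat /proc/version      # kernel version and release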

Summary

In this blog post, we address network issues in depth by design. We often find that users address the CPU, memory, and I/O topics but neglect network performance. Network performance can at times make or break the DB migration strategy, whereas CPU, memory, and I/O for any production system are generally well monitored and well understood.

This post concludes our discussion of the system and network administration tasks to consider in setting up the infrastructure for database migration. For the next stage, preparing the source Oracle database setup and configuration for migration, see the next blog post, Source database considerations for the Oracle and AWS DMS CDC environment.


About the Author

Mahesh Pakala has worked with Amazon since April 2014. Prior to joining Amazon, he worked with companies such as Ingres, Oracle Corporation, and Dell Inc., where he advised strategic marquee customers on designing highly available, scalable applications, helped with heterogeneous cloud application migrations, and assisted with system performance.