
Oracle University Podcast

Oracle Corporation

158 episodes

  • Oracle University Podcast

    Exploring the Oracle Analytics AI Assistant

    17/03/2026 | 17 mins.
    Join hosts Lois Houston and Nikita Abraham for a special episode of the Oracle University Podcast as they explore the Oracle Analytics AI Assistant. In this episode, you'll discover how Oracle's AI-powered conversational tool empowers users of all backgrounds to interact with business data using simple, natural-language questions. Learn how the assistant interprets queries, surfaces visualizations, and delivers actionable insights in seconds, all within Oracle's secure analytics environment. The episode dives into best practices for data preparation, security and privacy safeguards, how to configure datasets for optimal AI performance, and tips for getting the most relevant results. You'll also hear how synonyms, column indexing, and user permissions make analytics more accessible and accurate.
     
    Visualize Data with the Oracle Analytics AI Assistant: https://mylearn.oracle.com/ou/article-course/visualize-data-with-the-oracle-analytics-ai-assistant/156941/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
    -------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University.
    Nikita: Hi everyone! Today's episode is on the Oracle Analytics AI Assistant, which is all about making business data accessible and useful, no matter your background. Whether you're a seasoned pro or just starting out with Oracle Analytics, you'll want to stick around for this episode because we're covering everything you need to know to unlock powerful, intuitive, and secure data insights.
    01:06
    Lois: That's right. And full disclosure before we start. We're trying something a little different for this episode. Instead of a live guest, our expert will be an AI-generated voice sharing insights drawn directly from Oracle's official course materials. Think of it as getting a taste of what our training courses are like, with a little help from AI. So, with that, let's kick things off by taking a closer look at what the Oracle Analytics AI Assistant really is.
    Expert: The Oracle Analytics AI Assistant is an AI-powered tool that provides a conversational interface for data analysis. With this tool, data exploration becomes more intuitive and efficient, helping you access fast, personalized insights. The AI Assistant makes use of Generative AI to process queries, analyze indexed datasets, and create or refine relevant visualizations. It is fully integrated into the Oracle Analytics platform, complementing existing analytic and visualization capabilities.
    02:13
    Nikita: So, put simply, users have the ability to interact with their data in plain English and receive immediate, visual answers.
    Expert: Exactly! You can ask natural language questions, such as, "What were my sales in the United States last Tuesday?" or "Show me monthly sales for this year," and the assistant interprets the question, queries the right data, and generates the best visualization.
    02:39
    Lois: Before we dive deeper, let's ground ourselves in some of the core concepts behind this technology. Here's an overview of the AI technologies powering the assistant.
    Expert: 
    - Artificial Intelligence refers to systems or machines that perform tasks which typically require human intelligence, like reasoning, learning, perception, and language understanding.
    - Large Language Models or LLMs are AI programs trained on very large data sets. LLMs can generate human-like language and perform complex language tasks, such as writing emails or answering questions.
    - Generative AI is a branch of AI that can create new content, such as text, images, and audio. GenAI includes chatbots and virtual assistants capable of human-like conversations, answering questions, and creating content based on user prompts.
    - Natural Language Processing or NLP is a subfield of AI, targeting how computers understand and generate human language.
    03:42
    Lois: Now, let's look at what happens behind the scenes when someone interacts with the Oracle Analytics AI Assistant.
    Expert: Here is how the process works. You ask a question or make a request in natural language. Oracle Analytics Cloud identifies the most relevant dataset to answer that question, looking at metadata and attribute values. The platform prepares a prompt for the LLM that includes dataset metadata, column names, synonyms, and your question. The LLM and Natural Language Understanding interpret the question, and then translate it into a structured query. Oracle Analytics validates this query against your data model, and then queries your database.
    Based on the results, the AI Assistant creates the most appropriate visualization, like a chart, table, or similar format, and provides additional natural language insights.
    04:36
    Nikita: Security and privacy are top priorities for organizations using tools like this, so let's get into Oracle's approach to protecting user data.
    Expert: At Oracle, your data privacy and security are always top priorities. Specifically, your data is never shared with external model providers or other customers. Pre-trained generative AI models are accessed exclusively within Oracle's secure cloud infrastructure. No customer data is stored or retained by the AI models after processing, and prompt data is not used to train the models. And finally, all data processed is fully isolated and never combined or visible to anyone outside your organization.
    05:20
    Lois: In other words, users always remain in full control of their own data, with no risk of leakage or exposure to outside parties.
    Nikita: Yeah, this kind of reassurance is absolutely critical for enterprises.
    05:32
    Lois: That's right, Niki. Next, let's cover how to get the most accurate and relevant insights from the AI Assistant by following some best practices for prompting.
    Expert: To get the best answers, you need to be specific. Include key data points, timeframes, or filters. For example, something like: "Show total sales by country for Q2 2024." Keep questions focused, clear, and concise. Refine your request as needed. If you want different details or a simpler trend line, follow up with something like, "Show by quarter," or "Replace product category with customer segment." Avoid complex prompts, like highly nested or multi-step ones. Ask a series of concise questions instead. When typing column names or field values, pause briefly to let the Assistant suggest the correct field. This increases prompt accuracy. Consider the context of the conversation. Filters and refinements made in previous messages persist, so be aware that context builds over the conversation unless reset.
    06:36
    Nikita: So, you might start with something like, "Show me sales trends for the last 5 years," and then get more granular, like, "Include only technology products," or "Break the results down by product sub-category."
    Lois: But sometimes, you may just want to start from scratch, so let's discuss how you can reset your session with the AI Assistant.
    Expert: Just select the "Clear Assistant History" option and you can begin a new analysis.
    07:03
    Nikita: Language capabilities are another important consideration, so here's an overview of which languages the Assistant currently supports.
    Expert: Right now, English is the primary language supported. Simple questions in other languages may work, but with less accuracy and fewer features. Talk to your Oracle Analytics administrator if you have multilingual needs.
    07:26
    Lois: Let's clarify what kinds of questions are beyond the scope of the Assistant.
    Expert: The Assistant is built for business-oriented, goal-driven queries, not for technical schema questions or database logic. So, don't ask about dataset structures or technical metadata. But do ask about trends, comparisons, breakdowns, and summaries that relate to your business.
    07:53
    Do you want to fast-track your learning goals? Join us for live events hosted by Oracle expert instructors! Get certification exam tips, learn about new technology, and ask your questions in real time. Take charge of your learning. Visit mylearn.oracle.com and join a live event today! 
    08:13
    Nikita: Welcome back! Now, let's discuss why configuring datasets is crucial for working effectively with the AI Assistant.
    Expert: Effectively indexing and configuring your dataset can make a huge difference when working with the AI Assistant. When you index a dataset, you're basically creating searchable references. This makes it easier for the AI Assistant to quickly locate the most relevant columns and give accurate responses to natural language questions. 
    It's important to know that you'll need to manually select which columns to index. For example, if your users are likely to ask about sales in the United States, you'll want to make sure that both the "Country" column and the "Sales" column are included when indexing. That way, the Assistant knows exactly where to look when someone asks a question about U.S. sales figures. 
    Another thing to remember is that you can make your analytics more user-friendly by resolving ambiguities and assigning synonyms to your dataset columns. For instance, if there's a generic "date" column, clarify whether that refers to the "order date" or the "ship date." It helps to add synonyms as well, so the assistant can handle different ways users might phrase their questions. 
    So, while it may take a little extra effort upfront, making your dataset easy to search and understand pays off. Your AI Assistant can respond quickly and accurately, and your users get the answers they're looking for with less hassle. 
    09:43
    Lois: Next, we'll outline the steps for configuring and indexing datasets for optimal performance.
    Expert: First you need to confirm dataset access. You'll need read/write privileges to enable the AI Assistant and index the dataset. Then, on the Search tab, under "Index Dataset For," select "Assistant." Choose your language and, optionally, set an indexing schedule. Carefully pick columns users will likely question, like sales, region, or date. Avoid technical metadata, sensitive data, and high-cardinality columns like Customer IDs. Choose whether to index only column names or names plus data values. Including data values helps with typing suggestions and nuance. Avoid values no one will search on. Importantly, indexed dataset values are never sent to the LLM. They are retrieved from the dataset when visualizations are created. Assign synonyms to attribute names. Oracle Analytics suggests synonyms, but you can also add your own. Finally, save the changes and run indexing to make the dataset searchable by the Assistant.
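To make the column-selection advice concrete, here is a small Python sketch of the heuristic the expert describes: pick business-relevant columns, and skip identifiers and high-cardinality fields. This is a hypothetical illustration only; the column names, cardinality threshold, and keyword list are assumptions, not part of Oracle Analytics.

```python
# Hypothetical sketch of the column-selection heuristic for indexing.
# Threshold and keyword list are illustrative, not an Oracle API.

def pick_index_columns(columns, max_cardinality=1000):
    """Keep business-relevant columns; skip IDs and high-cardinality fields."""
    skip_keywords = ("id", "guid", "hash")  # technical metadata to avoid
    picked = []
    for name, distinct_count in columns:
        lowered = name.lower()
        if any(k in lowered for k in skip_keywords):
            continue  # avoid indexing identifiers such as Customer IDs
        if distinct_count > max_cardinality:
            continue  # high-cardinality values add noise, not accuracy
        picked.append(name)
    return picked

stats = [("Sales", 250), ("Country", 45), ("Customer ID", 98000), ("Order Date", 730)]
print(pick_index_columns(stats))  # ['Sales', 'Country', 'Order Date']
```

The same filtering idea applies whether you index column names only or names plus data values.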
    10:50
    Nikita: Now, let's look at how configuring subject areas can further tailor the experience.
    Expert: You'll need to navigate to the Search Index by going through the Console's Configuration and Settings. Choose your language and indexing schedule. Index folders relevant to business questions; avoid non-relevant or sensitive columns. Select the Index Type: "Index Metadata Only" for high-cardinality columns (like IDs); "Index" for columns and values that users reference. As with datasets, clarify column meanings with user-friendly synonyms. Finalize settings and run the index to prepare your subject area for AI-powered queries.
    Special care must be taken with date columns. Select and clearly identify the main business date so queries don't become ambiguous.
    11:39
    Lois: Synonyms play an important role in reducing ambiguity and enhancing results, so let's review the best practices for setting them up effectively.
    Expert: If your columns use abbreviations, acronyms, or codes—like "custNo" or "Pname"—it's a good idea to provide synonyms to clarify what those attributes actually mean. Think about how people typically refer to those columns in everyday language. So instead of just "custNo," add "Customer Number" as a synonym, and for "Pname," you would use "Product Name." 
    If you can, actually renaming the column is usually more effective than just adding a synonym. But if that's not possible for some reason, a synonym is the next best thing. 
    Dates can be another tricky area. Datasets often have several date columns, like "Ship Date," "Order Date," and "Invoice Date." If a user asks, "Show me revenue by date," the system has to decide which date column to use, and it may just pick one for you. If you definitely want "Order Date" to be considered the default date, make sure to assign "date" as a synonym specifically for that column. 
    There's also the situation where different tables have columns with the same name—like "name" from both a Product table and an Employee table. You'll want to use synonyms for these columns too, to make it clear what each one means. 
    Adding more than one synonym can help as well. For example, if you have a "Yield" column, maybe also specify "revenue" and "income" as synonyms, so users can ask questions however they naturally would. 
    Avoid using reserved words or special characters in your synonyms. This means words like "Count," "Year," or anything that's also a SQL function, plus characters like "@" or special symbols. Also, steer clear of Unicode characters and terms that are analytical functions or date formats. 
    The whole point is to make your columns easy for business users or anyone else to reference naturally, using the terms they're most likely to try in a search. 
    And finally, just a few rules of thumb: synonyms can be up to 50 characters long, you can use up to 20 synonyms for each column, and you don't need to worry about uppercase or lowercase; column names aren't case sensitive. 
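The synonym rules above (up to 50 characters, up to 20 per column, no reserved words or special characters, case-insensitive) can be sketched as a simple validator. The reserved-word list here is an illustrative subset, not Oracle's full list.

```python
import re

RESERVED = {"count", "year", "sum", "avg"}  # illustrative subset of reserved words

def validate_synonyms(synonyms):
    """Return the synonyms that satisfy the rules described above."""
    valid = []
    for s in synonyms[:20]:  # at most 20 synonyms per column
        if len(s) > 50:
            continue  # synonyms can be up to 50 characters long
        if s.lower() in RESERVED:
            continue  # skip SQL reserved words and function names (case-insensitive)
        if not re.fullmatch(r"[A-Za-z0-9 _-]+", s):
            continue  # no '@' or other special / Unicode characters
        valid.append(s)
    return valid

print(validate_synonyms(["Customer Number", "cust@no", "Count", "Revenue"]))
# ['Customer Number', 'Revenue']
```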
    Besides the basic setup and using synonyms, you can really improve the quality of answers from the AI Assistant (and the LLM it uses) by prepping and enriching your data. It's easier for the AI to work with words than numbers. Try "binning" numerical values into simple categories people can understand. For instance, instead of showing a long list of sales amounts, split them into groups like "small," "medium," and "large." 
LLMs handle words better than blanks. If your data has missing or null values, fill them in with something meaningful, like "Unknown," "Not specified," or "Not available." Skipping this step could cause problems: reports might miss customers because their country is blank; averages or summaries can come out wrong, especially if missing values are ignored; and forecasting can suffer if data gaps throw off trends. The AI Assistant might also skip important columns or even generate errors.
    Ambiguous or duplicate column names confuse both users and the LLM. Make your names clear and consistent. 
    You can use Oracle Analytics's Transform editor to add even more context. For example, you might extract the day of the week from a date, so you can easily ask, "Show sales for all Fridays in 2026." 
    By preparing your data with these steps, you help the AI Assistant give you more accurate and insightful answers, making data analysis a lot smoother!
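The three data-prep ideas above (binning numbers into words, filling nulls with a meaningful label, and extracting the day of the week) can be sketched in a few lines of Python. The band thresholds and field names are hypothetical; this only illustrates the kind of enrichment the Transform editor performs.

```python
from datetime import date

def bin_sales(amount, small=1_000, large=10_000):
    """Bin a numeric sales amount into a word the LLM handles well."""
    if amount is None:
        return "Unknown"        # fill nulls with a meaningful label
    if amount < small:
        return "small"
    if amount < large:
        return "medium"
    return "large"

def enrich(row):
    """Add day-of-week and sales-band columns, as the Transform editor might."""
    row["day_of_week"] = row["order_date"].strftime("%A")
    row["sales_band"] = bin_sales(row["sales"])
    return row

row = enrich({"order_date": date(2026, 1, 2), "sales": 4200})
print(row["day_of_week"], row["sales_band"])  # Friday medium
```

With a day_of_week column in place, a question like "Show sales for all Fridays in 2026" maps directly onto the data.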
    15:27
    Nikita: Finally, let's walk through the process of making the Oracle Analytics AI Assistant accessible to end users directly within their workbooks.
    Expert: Permissions are controlled through application roles. Your administrator must create a specific role enabling access to the AI Assistant.
    To enable consumer access, open your workbook in edit mode and select Present. From the Workbook tab, toggle it on in the Insights Panel section. Choose tabs like Watch Lists and Workbook Assistant. Decide which data sources in your workbook are available to the consumer.
    Save, and then use Preview to simulate the user experience.
    Consumers can access the AI Assistant by selecting Auto Insights at the top of the workbook. They can then type in natural language questions, review visualizations, and follow up.
    Repeat these steps for each workbook you wish to enable.
    16:22
    Lois: This really puts agile, self-service analytics at everyone's fingertips, all while keeping data security and integrity front and center.
    Nikita: And it's not just plug-and-play. To get the best results, you configure your data, enrich it, apply the right synonyms and permissions, and then your team can ask questions and visualize results just by using natural language.
    Lois: If you're ready to kickstart or deepen your journey with the Oracle Analytics AI Assistant, or you want to review the topics we covered in today's episode in even greater detail, visit mylearn.oracle.com.
    Nikita: That wraps up this episode. Thanks for spending time listening to us today. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    17:14
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Oracle Database@AWS: Monitoring, Logging, and Best Practices

    10/03/2026 | 19 mins.
    Running Oracle Database@AWS is most effective when you have full visibility and control over your environment.
     
    In this episode, hosts Lois Houston and Nikita Abraham are joined by Rashmi Panda, who explains how to monitor performance, track key metrics, and catch issues before they become problems. Later, Samvit Mishra shares key best practices for securing, optimizing, and maintaining a resilient Oracle Database@AWS deployment.
     
    Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
    ------------------------------------------------------
    Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services.
    Lois: Hello again! Last week's discussion was all about how Oracle Database@AWS stays secure and available. Today, we're joined by two experts from Oracle University. First, we'll hear from Rashmi Panda, Senior Principal Database Instructor, who will tell you how to monitor and log Oracle Database@AWS so your environment stays healthy and reliable.
    Nikita: And then we're bringing in Samvit Mishra, Senior Manager, CSS OU Cloud Delivery, who will break down the best practices that help you secure and strengthen your Oracle Database@AWS deployment. Let's start with you, Rashmi. Is there a service that allows you to monitor the different AWS resources in real time?
Rashmi: Amazon CloudWatch is the cloud-native AWS monitoring service that can monitor the different AWS resources in real time. It allows you to collect resource metrics, create customized dashboards, and even take action when certain criteria are met. Integration of Oracle Database@AWS with Amazon CloudWatch enables monitoring of the metrics of the different database resources that are provisioned in Oracle Database@AWS.
    Amazon CloudWatch collects raw data and processes it to produce near real-time metrics data. Metrics collected for the resources are retained for 15 months. This facilitates analyzing the historical data to understand and compare the performance, trends, and utilization of the database service resources at different time intervals. You can set up alarms that continuously monitor the resource metrics for breach of user-defined thresholds and configure alert notification or take automated action in response to that metric threshold being reached.
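To illustrate the alarm idea, here is a tiny Python sketch of how a threshold alarm evaluates consecutive datapoints. It is a simplified model of the concept, not the CloudWatch service or its API; the metric values and threshold are made up.

```python
def alarm_state(datapoints, threshold, periods=3):
    """Return 'ALARM' if the last `periods` datapoints all breach the threshold,
    mimicking how an alarm evaluates consecutive evaluation periods."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [42.0, 55.3, 81.2, 87.9, 92.4]   # CPU utilization (%) per period
print(alarm_state(cpu, threshold=80))  # ALARM: the last three points exceed 80%
```

In the real service, the transition to the ALARM state is what triggers the alert notification or automated action Rashmi mentions.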
    02:19
    Lois: What monitoring features stand out the most in Amazon CloudWatch?
Rashmi: With Amazon CloudWatch, you can monitor Exadata VM Cluster, container database, and Autonomous Database resources in Oracle Database@AWS. Oracle Database@AWS reports metrics data specific to the resource in the AWS/ODB namespace of Amazon CloudWatch. Metrics can be collected only when the database resource is in an available state in Oracle Database@AWS.
Each of the resource types has its own metrics defined in the AWS/ODB namespace, for which the metrics data get collected. 
    02:54
    Nikita: Rashmi, can you take us through a few metrics?
Rashmi: At the Exadata VM Cluster level, there are metrics for CPU utilization, memory utilization, swap space and storage file system utilization, as well as load average on the server, node status, the number of allocated CPUs, et cetera.
Then for the container database, there is CPU utilization, storage utilization, block changes, parse count, execute count, and user calls, which are important elements that can provide metrics data on database load. And for Autonomous Database, metrics data includes DB time, CPU utilization, logins, IOPS and IO throughput, RedoSize, parse, execute, and transaction counts, and a few others.
    03:32
    Nikita: Once you've collected these metrics and analyzed database performance, what tools or services can you use to automate responses or handle specific events in your Oracle Database@AWS environment?
Rashmi: Then there is Amazon EventBridge, which can monitor events from AWS services and respond automatically with actions that you define. You can monitor events from Oracle Database@AWS in EventBridge; event data is sent continuously to EventBridge in real time. EventBridge forwards this event data to targets such as AWS Lambda and Amazon Simple Notification Service to perform actions when certain events occur.
Oracle Database@AWS events are structured messages that indicate changes in the life cycle of the database service resource. EventBridge can filter events based on your defined rules, process them, and deliver them to one or more targets. The event bus is the router that receives the events, optionally transforms them, and then delivers them to the targets. Events from Oracle Database@AWS can be generated by two means: they can be generated from Oracle Database@AWS in AWS, and they can also be generated directly from OCI and received by EventBridge in AWS.
You can monitor Exadata Database and Autonomous Database resource events. Ensure that the Exadata infrastructure is in an available state. You can configure how the events are handled for these resources. You can define rules in EventBridge to filter the events of interest and the target that will receive and process those events. You can filter events based on a pattern depending on the event type, and apply this pattern using the Amazon EventBridge put-rule API, with the default event bus, to route only the matching events to targets.
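The rule-based filtering Rashmi describes can be sketched as a miniature pattern matcher in Python. This models only a small subset of EventBridge pattern matching, and the event source and detail-type values shown are illustrative, not the exact strings Oracle Database@AWS emits.

```python
def matches(pattern, event):
    """Minimal subset of EventBridge-style pattern matching: every pattern key
    must be present in the event, with a value from the pattern's list."""
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not isinstance(event.get(key), dict) or not matches(allowed, event[key]):
                return False
        elif event.get(key) not in allowed:
            return False
    return True

# Illustrative rule: route only Oracle Database@AWS state-change events.
# Source and detail-type strings here are hypothetical examples.
rule = {"source": ["aws.odb"], "detail-type": ["ODB Resource State Change"]}
event = {"source": "aws.odb", "detail-type": "ODB Resource State Change",
         "detail": {"status": "AVAILABLE"}}
print(matches(rule, event))  # True
```

A real rule created with put-rule works the same way in spirit: events that match the pattern are routed to the configured targets; everything else is ignored.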
    05:13
    Lois: And what about events that AWS itself generates?
    Rashmi: Events that are generated in AWS for the Oracle Database@AWS resources are delivered to the default event bus of your AWS account. These events that are generated in AWS for Oracle Database@AWS resources include lifecycle changes of the ODB network. The different network events are successful creation or failure of the creation of the ODB network, and successful deletion or failure in deletion of the ODB network.
When you subscribe to Oracle Database@AWS, an event bus with the prefix aws.partner/odb is created in your AWS account. All events generated in OCI for Oracle Database@AWS resources are then received on this event bus. When you create a filter pattern using the Amazon EventBridge put-rule API, you must set the event bus name to this event bus. Make sure you do not delete this event bus. The events generated in OCI and received on this event bus are extensive. They include events for Oracle Exadata infrastructure, VM Cluster, container, and pluggable databases.
    06:14
    Lois: If you want to look back at what's happened in your environment, like who made the changes or accessed resources, what's the best AWS service for logging and auditing all that activity?
Rashmi: Amazon CloudTrail is a logging service in AWS that records the different actions taken by a user, a role, or an AWS service. Oracle Database@AWS is integrated with Amazon CloudTrail. This enables logging of all the different events on Oracle Database@AWS resources. 
Amazon CloudTrail captures all the API calls to Oracle Database@AWS as events. These API calls include calls from the Oracle Database@AWS console and code calls to Oracle Database@AWS API operations. The log files are delivered to an Amazon S3 bucket that you specify. These logs capture the identity of the caller who made the request to Oracle Database@AWS, the IP address from which the call originated, the time of the call, and some additional details. 
CloudTrail Event History stores an immutable record of the past 90 days of management events in an AWS Region. You can view, search, and download these records from CloudTrail Event History, and access to it is enabled automatically when you create an AWS account. If you would like to retain the logs for longer than 90 days, you can create CloudTrail trails or a CloudTrail Lake event data store. 
    Management events in AWS provide information about management operations that are performed on the resources in your AWS account. Management operations are also called control plane operations. Thus, the control plane operations in Oracle Database@AWS are logged as management events in CloudTrail logs. 
    07:59
    Are you a MyLearn subscriber? If so, you're automatically a member of the Oracle University Learning Community! Join millions of learners, attend exclusive live events, and connect directly with Oracle subject matter experts. Enjoy the latest news, join challenges, and share your ideas. Don't miss out! Become an active member today by visiting mylearn.oracle.com.
    08:25
    Nikita: Welcome back! Samvit, let's talk best practices. What should teams keep in mind when they're setting up and securing their Oracle Database@AWS environment? 
    Samvit: Use IAM roles and policies with least privilege to manage Oracle Database@AWS resources. This ensures only authorized users can provision or modify DB resources, reducing the risk of accidental or malicious changes. 
    Oracle Data Safe monitors database activity, user risk, and sensitive data, while AWS CloudTrail records all AWS API calls. Together, they give full visibility across the database and cloud layers.
    Autonomous Database supports Oracle Database Vault for enforcing separation of duties. Exadata Database Service can integrate with Audit Vault and Database Firewall to prevent privileged users from bypassing security controls.
    Enable multifactor authentication for AWS IAM users managing Oracle Database@AWS. This adds a strong second layer of protection against stolen credentials. 
    Always deploy your Oracle Database@AWS in private subnets without public IPs. Use AWS security groups and NACLs to strictly limit inbound and outbound traffic, allowing access only from trusted applications.
Exadata Database Service supports integration with OCI Vault for key lifecycle management. In the case of Autonomous Database, the transparent data encryption keys are automatically managed, but you can bring your own keys with OCI Vault. Key rotation ensures compliance and reduces the risk of key compromise.
    Oracle Database@AWS enforces encrypted connections by default. Ensure clients connect with TLS 1.2 or 1.3 to protect data in transit from interception or tampering. 
Use Oracle Data Safe's user assessment features to detect dormant users or excessive privileges. Disable unused accounts and rightsize permissions to reduce insider threats and security gaps.
Export database audit logs to Oracle Data Safe Audit Vault or AWS S3 with object lock for immutability. This prevents log tampering and ensures audit evidence is preserved for compliance. 
    11:25
    Lois: OK, that covers security. Do you have any tips for making sure your Oracle Database@AWS setup is reliable and resilient?
    Samvit: Start with clear recovery objectives. Define how much downtime and data loss each workload can tolerate. These targets drive your HADR architecture and backup strategy. 
Implement business continuity measures to deliver maximum uptime for your databases. As a best practice, configure a disaster recovery environment for your critical databases so that, in the event of any disaster affecting the primary database, applications can be immediately failed over to the DR environment, ensuring minimal application downtime and zero or minimal data loss. With Oracle Database@AWS, you can automate the creation and management of the DR environment for your database services using different deployment capabilities. You can opt to configure either cross-availability-zone DR in the same region or cross-region DR. Since cross-availability-zone DR can only provide site failure protection, you must also configure cross-region DR to protect against regional failure.
    A DR plan is only effective if tested. Regular failover and switchover drills validate that people, processes, and systems can recover as designed. 
    For Exadata Database, Autonomous Recovery Service provides automated backup validation, recovery guarantees, and protection against accidental data loss or corruption. 
    Oracle-managed backups are fully managed by OCI. When you create your Oracle Exadata Database, you can enable automatic backups by choosing Enable Automatic Backups in the OCI Console. When you do that, you can select Amazon S3 or OCI Object Storage or Autonomous Recovery Service as the backup destination.
    Don't just take backups. You also need to test them. Regularly restore backups into non-production environment to validate integrity and recovery time. 
    Plan beyond just the database. Map application and middleware dependencies to ensure end-to-end business resilience. A database failover is useless if dependent apps can't reconnect.
    14:09
    Nikita: Another area of interest is performance and cost. What practices help teams balance the two?
    Samvit: Autonomous Database automatically scales CPU and storage as workloads grow. This ensures performance during peaks while avoiding overprovisioning. So you should enable ADB auto-scaling. 
    Monitor CPU, memory, and IO metrics with AWS CloudWatch to rightsize your compute. Scale up or down based on actual utilization instead of static provisioning.
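The rightsizing loop described here can be sketched as a simple decision function; in practice the utilization samples would come from CloudWatch over a representative window, and the 30%/75% thresholds below are assumptions for illustration:

```python
from statistics import mean

def rightsize(cpu_samples: list[float], low: float = 30.0, high: float = 75.0) -> str:
    """Suggest a scaling action from CPU-utilization samples (percent).

    Hypothetical thresholds; real decisions should also weigh memory and
    IO metrics, as the episode notes.
    """
    avg = mean(cpu_samples)
    if avg > high:
        return "scale up"    # sustained pressure: add OCPUs or nodes
    if avg < low:
        return "scale down"  # paying for idle capacity
    return "hold"            # utilization is in the healthy band
```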
    Autonomous Database continuously evaluates and creates indexes automatically. This improves query performance without requiring manual tuning. 
    Use connection pooling in your applications to optimize database connections. Minimizing round trips reduces latency and improves throughput.
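A toy illustration of the pooling pattern, using only the Python standard library; a real application would rely on a driver-level pool rather than this sketch:

```python
import queue

class ConnectionPool:
    """Minimal pool: open N connections up front, hand them out on demand,
    and return them for reuse instead of closing. Illustrative only."""

    def __init__(self, factory, size: int):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # pay the connection cost once

    def acquire(self):
        return self._pool.get()        # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)           # recycle rather than reconnect
```

Because each checkout reuses an already-open connection, the per-request connect/teardown round trips disappear, which is exactly the latency saving mentioned above.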
    Apply AWS tags to database and related resources for cost allocation and chargeback. Tagging also helps with governance and cost visibility. 
    Choose between bring your own license and license-included models for Oracle Database@AWS. The right model depends on your existing license portfolio and cost strategy.
    Not all workloads need long backup retention. Adjust retention policies based on business needs to balance compliance with storage costs. 
    Exadata Database supports Oracle multitenant with pluggable databases. Consolidating databases reduces infrastructure footprint and licensing costs.
    Performance tuning isn't just technical. Align metrics with business KPIs. Correlating database performance with user experience and revenue impact helps prioritize optimizations. 
    16:20
    Lois: Before we wrap up, Samvit, let's look at operational efficiency. What advice do you have for making day-to-day operations more efficient?
    Samvit: Use infrastructure as code tools like Terraform or AWS CloudFormation to automate provisioning. This ensures consistent, repeatable deployments with minimal manual errors. 
    For Autonomous Database, enable auto-start/stop to optimize costs by running databases only when needed. This is ideal for dev/test or seasonal workloads.
    Exadata Database Service provides fleet maintenance to patch multiple systems consistently. This reduces downtime and simplifies lifecycle management. 
    Integrate AWS CloudWatch for performance monitoring and EventBridge for event-driven automation. This helps detect issues early and trigger automated workflows.
    Oracle Data Safe provides ready-to-use audit and compliance reports. Use these to streamline governance and reduce the effort of manual compliance tracking. 
    For Autonomous Database, Performance Hub simplifies monitoring, while Exadata users benefit from AWR and ASH reports. Together, they give deep insights into performance trends.
    Automated tagging policies and change management workflows help maintain governance. They ensure resources are tracked properly and changes are auditable. 
    Monitor storage consumption and growth patterns using AWS CloudWatch and the ADB Console. Proactive tracking helps avoid capacity issues and unexpected costs.
    Send CloudTrail logs into EventBridge to trigger automated incident responses. This shortens response time and builds operational resilience. 
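An event-driven response like this is typically a small Lambda function triggered by an EventBridge rule. The sketch below is a hypothetical handler: the event names matched are real CloudTrail actions, but the "page-oncall" response is a placeholder for whatever paging or ticketing integration you use:

```python
def handler(event: dict, context=None) -> dict:
    """React to a CloudTrail event delivered via EventBridge.

    Illustrative sketch: a production handler would notify on-call,
    open an incident, or revoke credentials instead of returning a dict.
    """
    detail = event.get("detail", {})
    name = detail.get("eventName", "")
    # Actions worth an automated response, e.g. someone disabling audit trails.
    risky = {"StopLogging", "DeleteTrail", "DeleteDBCluster"}
    if name in risky:
        return {"action": "page-oncall", "event": name}
    return {"action": "ignore", "event": name}
```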
    18:36
    Nikita: Samvit and Rashmi, thanks for spending time with us today. Your insights always help bring the bigger picture into focus.
    Lois: They definitely do. And if you'd like to go deeper into everything we covered, head over to mylearn.oracle.com and look up the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    19:03
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    How Oracle Database@AWS Stays Secure and Available

    03/03/2026 | 16 mins.
    When your business runs on data, even a few seconds of downtime can hurt. That's why this episode focuses on what keeps Oracle Database@AWS running when real-world problems strike.
     
    Hosts Lois Houston and Nikita Abraham are joined by Senior Principal Database Instructor Rashmi Panda, who takes us inside the systems that keep databases resilient through failures, maintenance, and growing workloads.
     
    Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
    --------------------------------------------------
     
    Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University.
    Nikita: Hi everyone! In our last episode, we explored the security and migration strengths of Oracle Database@AWS. Today, we're joined once again by Senior Principal Database Instructor Rashmi Panda to look at how the platform keeps your database available and resilient behind the scenes.
    01:00
    Lois: It's really great to have you with us, Rashmi. As many of you may know, keeping critical business applications running smoothly is essential for success. And that's why it's so important to have deployments that are highly resilient to unexpected failures, whether those failures are hardware-, software-, or network-related. With that in mind, Rashmi, could you tell us about the Oracle technologies that help keep the database available when those kinds of issues occur?
    Rashmi: Databases deployed in Oracle Database@AWS are built on Oracle's foundational high availability architecture. Oracle Real Application Clusters, or Oracle RAC, is an active-active architecture in which multiple database instances run concurrently on separate servers, all accessing the same physical database on shared storage to simultaneously process various application workloads.
    Even though each instance runs on a separate server, they collectively appear as a single unified database to the application. As the workload grows and demands additional computing capacity, new nodes can be added to the cluster to spin up new database instances. This enables you to scale out your database deployments without bringing down your application and eliminates the need to replace existing servers with higher-capacity ones, offering a more cost-effective solution.
    02:19
    Nikita: That's really interesting, Rashmi. It sounds like Oracle RAC offers both scalability and resilience for mission-critical applications. But of course, even the most robust systems require regular maintenance to keep them running at their best. So, how does planned maintenance affect performance? 
    Rashmi: Maintenance on databases can take a toll on your application uptime. Database maintenance activities typically include applying database patches or performing updates. Along with the database updates, there may also be updates to the host operating system. These operations often demand significant downtime for the database, which in turn leads to higher application downtime.
    Oracle Real Application Clusters provides rolling patching and rolling upgrades, enabling patching and upgrades one node at a time without bringing down the entire cluster, which significantly reduces application downtime. 
    03:10
    Lois: And what happens when there's a hardware failure? How does Oracle keep things running smoothly in that situation?
    Rashmi: In the event of an instance or hardware failure, Oracle RAC ensures automatic service failover. This means that if one of the instances or nodes in the cluster goes down, the system transparently fails over the service to an available instance in the cluster, ensuring minimal disruption to your application.
    This feature enhances the overall availability and resilience of your database. 
    03:39
    Lois: That sounds like a powerful way to handle unexpected issues. But for businesses that need even greater resilience and can't afford any downtime, are there other Oracle solutions designed to address those needs?
    Rashmi: Oracle Exadata is the maximum availability architecture database platform for Oracle databases. The core design principle of Oracle Exadata is redundancy, spanning networking, power supplies, and database and storage servers and their components.
    This robust architecture ensures protection against the failure of any individual component, effectively guaranteeing continuous database availability. The scale-out architecture of Oracle Exadata allows you to start your deployment with two database servers and three storage servers, with different numbers of CPU cores and different sizes and types of storage to meet current business needs.
    04:26
    Lois: And if a business suddenly finds demand growing, how does the system handle that? Is it able to keep up with increased needs without disruptions?
    Rashmi: As the demand increases, the system can be easily expanded by adding more servers, ensuring that performance and capacity grow with your business requirements. Exadata Database Service deployment in Oracle Database@AWS leverages these foundational technologies to provide a highly available database system. This is achieved by provisioning databases using Oracle Real Application Clusters, hosted on the redundant infrastructure provided by the Oracle Exadata infrastructure platform.
    This deployment architecture provides the ability to scale compute and storage to meet growing resource demands without the need for downtime. You can scale up the number of enabled CPUs symmetrically in each node of the cluster when there is a need for higher processing power, or you can scale out the infrastructure by adding more database and storage servers, up to the Exadata infrastructure model limit, which is itself large enough to support any large workload.
    The Exadata Database Service running on Oracle RAC instances enables any maintenance on individual nodes or patching of the database to be performed with zero or negligible downtime. The rolling feature allows patching one instance at a time while services seamlessly fail over to the available instances, ensuring that applications experience little to no disruption during maintenance.
    Oracle RAC, coupled with Oracle Exadata's redundant infrastructure, protects the database service from any single point of failure. This fault-tolerant architecture features redundant networking and mirrored disks, enabling automatic failover in the event of a component failure. Additionally, if any node in the cluster fails, there is zero or negligible disruption to dependent applications.
    06:09
    Nikita: That's really impressive, having such strong protection against failures and so little disruption, even during scaling and maintenance. But let's say a company wants those high-availability benefits in a fully managed environment, so they don't have to worry about maintaining the infrastructure themselves. Is there an option for that?
    Rashmi: Similar to Oracle Exadata Database Service, Oracle Autonomous Database Service on dedicated infrastructure in Oracle Database@AWS offers the same features, with the key difference being that it's a fully managed service. This means customers have zero responsibility for maintaining and managing the database service.
    It again uses the same Oracle RAC technology and Oracle Exadata infrastructure to host the database service, where most database activities are fully automated, giving you a highly available database with extreme performance capability. It provides an elastic database deployment platform that can scale up storage and CPU online, or can be enabled to autoscale storage and compute.
    Maintenance activities on the database like database updates are performed automatically without customer intervention and without the need of downtime, ensuring seamless operation of applications.
    07:20
    Lois: Can we shift gears a bit, Rashmi? Let's talk about protecting data and recovering from the unexpected. What Oracle technologies help guard against data loss and support disaster recovery for databases?
    Rashmi: Oracle Database Autonomous Recovery Service is a centralized backup management solution for Oracle Database services in Oracle Cloud Infrastructure.
    It automatically takes backup of your Oracle databases and securely stores them in the cloud. It ensures seamless data protection and rapid recovery for your database. It is a fully managed solution that eliminates the need for any manual database backup management, freeing you from associated overhead.
    It implements an incremental forever backup strategy, a highly efficient approach where only the changes since the last backup are identified and backed up. This approach drastically reduces the time and storage space needed for backup, as the size of the incremental changes is significantly lower than the full database backup.
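The storage savings of the incremental-forever approach are easy to see with a back-of-the-envelope model. This is purely illustrative arithmetic; real backups also compress and deduplicate, so actual figures will differ:

```python
def storage_used(full_gb: float, daily_change_gb: float, days: int,
                 incremental_forever: bool) -> float:
    """Rough backup-storage model: one baseline full backup, then either a
    full copy per day or just the changed blocks per day."""
    if incremental_forever:
        # Baseline full plus only the deltas, as in incremental-forever.
        return full_gb + daily_change_gb * days
    # A complete full backup every day on top of the baseline.
    return full_gb * (days + 1)
```

For example, a 1,000 GB database changing 20 GB a day needs about 1,600 GB for 30 days of incremental-forever protection, versus 31,000 GB if a full backup were taken daily.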
    08:17
    Nikita: And what's the benefit of using this backup approach?
    Rashmi: The benefit of this approach is that your backups complete faster, with far less compute and network resources, while still guaranteeing the full recoverability of your database in the event of a failure. You can achieve zero data loss with this backup service by enabling the real-time protection option, minimizing data loss by recovering data up to the last subsecond.
    It is highly recommended to enable this option for mission-critical databases that cannot tolerate any data loss, whether due to a ransomware attack or due to an unplanned outage. The protection policy can retain the protected database backups for a minimum of 14 days to a maximum of 95 days.
    The recovery service requires and enforces that backups are encrypted. These backups are compressed and encrypted during the backup process. The integrity of the backups is continuously validated without placing a burden on the production database.
    This ensures that the stored backup data is consistent and recoverable when needed, and protects against malicious user activity or ransomware attacks. With a strict policy-based retention strategy, it prevents modification or deletion of backup data by malicious users.
    09:30
    Lois: Now, let's look at the next layer of protection. Rashmi, can you tell us about Oracle Active Data Guard?
    Rashmi: Oracle Active Data Guard provides highly available data protection and disaster recovery for enterprise Oracle databases. It creates and manages one or more transactionally consistent standby copies of the production database, which is the active primary.
    The standby database is isolated from the production environment, located miles away in a distant data center, ensuring the standby remains protected and unaffected even if the primary is impacted by a disaster.
    In the event of a disaster or data corruption at the primary, the standby can take over the role of new primary, allowing the business to continue operations uninterrupted. Data Guard keeps the standby database in sync with the production database by continuously applying change logs from production.
    10:25
    Do you want to stay ahead in today's fast-paced world? Check out our New Features courses for Oracle Fusion Cloud Applications. Each quarter brings new updates and hands-on training to keep your skills sharp and your knowledge current. Head over to mylearn.oracle.com to dive into the latest advancements!
    10:45
    Nikita: Welcome back! Rashmi, how does Oracle Active Data Guard operate in practice?
    Rashmi: It uses its knowledge of the Oracle Database block format to continuously validate blocks for physical or logical intra-block corruption during redo transport and change apply. With the automatic block repair feature, whenever a corrupt block is detected in the primary or the standby database, it is automatically repaired by transferring a good copy of the block from another destination that holds it. This is handled transparently, without any error being reported to the application.
    It enables you to offload read-only workloads and backup operations to the standby database, reducing the load on the production database. You can achieve zero data loss at any distance by configuring a special synchronization mechanism known as Far Sync.
    File systems form the attack surface for ransomware. Since Active Data Guard replicates the data at the memory level, any ransomware attack on the primary database will never be replicated to the standby database. This allows for a safe failover to the standby without any data loss, shielding the database from the effects of the attack.
    You can enable automatic failover of the primary database to a chosen standby database without any manual intervention by configuring a Data Guard Broker. The Data Guard Broker continuously monitors the primary database and automatically performs a failover to the standby when the predefined failover conditions are met. Active Data Guard enables you to perform database maintenance or database software upgrades with almost zero or minimal downtime.
    12:18
    Lois: And how does disaster recovery work for Exadata Database Service in Oracle Database@AWS?
    Rashmi: Exadata Database Service, by design, is already protected against local failures through technologies like Oracle RAC and Oracle Exadata.
    By deploying Exadata Database Service across multiple availability zones in an AWS region, you can ensure that your database services remain resilient to site failures. It leverages Oracle Active Data Guard to create a standby in a separate availability zone, so that if the primary availability zone is affected, all application traffic can be routed to the database services in the secondary availability zone, restoring business continuity for the application.
    Through continuous validation of the data blocks at both the primary and the standby database, any potential corruption is detected and prevented. This ensures data integrity and protection across the entire database service.
    By leveraging zero data loss Autonomous Recovery Service, the database ensures that the backup remains secure and unaffected by ransomware. This enables rapid restoration of clean, uncompromised data in the event of an attack.
    Periodic patching and upgrades are performed online in a rolling fashion, with little to no impact on application uptime, using a combination of Oracle RAC and Oracle Active Data Guard technologies. Resource-intensive workloads that are read-only in nature, like database backups or generating monthly reports, can be offloaded to the standby, reducing the load on the production database.
    In the cross-availability-zone DR setup, you have the flexibility to configure Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby database.
    Choosing which network to use for the traffic is entirely at the enterprise's discretion. Both are Oracle maximum availability architecture-compliant, and the setup is pretty simple. Whichever network the traffic uses, OCI or AWS, the respective cloud provider is responsible for ensuring its reliability.
    You do have to take into account the different charges that each cloud provider may apply. And you can provision multiple standby databases using the console. Optionally, you may set up a broker manually to enable automatic failover capability.
    14:30
    Nikita: We just covered cross-availability-zone protection. But what if an entire AWS region goes down?
    Rashmi: This is where we can provide an additional level of protection by provisioning cross-region disaster recovery for your Exadata Database Service in Oracle Database@AWS. 
    This deployment protects your database against regional disasters. You can provision another DR environment in a different AWS region that supports Oracle Database@AWS. This deployment, together with the cross-availability zone deployment, complements your highly available and protected database service deployment in Oracle Database@AWS.
    Under the hood, it uses the same Oracle Database technologies, including Oracle Active Data Guard, OCI Autonomous Recovery Service, Oracle Exadata, and Oracle RAC, to provide the same capabilities as the cross-availability-zone deployment.
    Here too, you have the flexibility to configure Oracle Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby. The network traffic options remain the same, except for a small difference with respect to chargeback.
    When using the OCI network for cross-region deployment, there is no charge for the first 10 TB of data transfer per month. Beyond that, standard OCI charges apply. When using the AWS network, refer to the AWS pricing sheet for cross-region traffic.
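The OCI-network chargeback rule just described can be modeled in a few lines. The per-TB rate here is a placeholder; check current OCI pricing for the actual figure:

```python
def oci_cross_region_transfer_cost(tb_per_month: float, rate_per_tb: float) -> float:
    """Monthly cross-region transfer cost over the OCI network, per the
    episode: the first 10 TB each month are free, and standard rates
    apply beyond that. rate_per_tb is an assumed input, not a real price."""
    FREE_TIER_TB = 10.0
    billable = max(0.0, tb_per_month - FREE_TIER_TB)
    return billable * rate_per_tb
```

So a deployment shipping 8 TB of redo per month pays nothing for OCI-network transfer, while one shipping 12 TB pays for only the 2 TB above the free tier.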
    15:49
    Nikita: Thank you so much, Rashmi, for this insightful episode.
    Lois: Yes, thank you! And if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    16:13
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Security and Migration with Oracle Database@AWS

    24/02/2026 | 20 mins.
    In this episode, hosts Lois Houston and Nikita Abraham are joined by special guests Samvit Mishra and Rashmi Panda for an in-depth discussion on security and migration with Oracle Database@AWS. Samvit shares essential security best practices, compliance guidance, and data protection mechanisms to safeguard Oracle databases in AWS, while Rashmi walks through Oracle's powerful Zero-Downtime Migration (ZDM) tool, explaining how to achieve seamless, reliable migrations with minimal disruption.
     
    Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
     
    -------------------------------------------------------------
     
    Episode Transcript:
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services.
    Lois: Hello again! We're continuing our discussion on Oracle Database@AWS and in today's episode, we're going to talk about the aspects of security and migration with two special guests: Samvit Mishra and Rashmi Panda. Samvit is a Senior Manager and Rashmi is a Senior Principal Database Instructor. 
    00:59
    Nikita: Hi Samvit and Rashmi! Samvit, let's begin with you. What are the recommended security best practices and data protection mechanisms for Oracle Database@AWS?
    Samvit: Instead of everyone using the root account, which has full access, we create individual users with AWS IAM Identity Center or the IAM service.
    In addition, you must use multi-factor authentication. So basically, as an example, you need a password and a temporary code from a virtual MFA app to log in to the console. 
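For the curious, the temporary code a virtual MFA app displays is typically a TOTP value (RFC 6238). The sketch below shows how such a code is derived from a shared secret and the current time; it is an illustration of the algorithm, not something you would implement yourself in production:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HMAC the time-step counter with the shared
    secret, then dynamically truncate the digest to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current 30-second window, an intercepted value expires almost immediately, which is what makes it a useful second factor alongside the password.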
    Always use SSL or TLS to communicate with AWS services. This ensures data in transit is encrypted. Without TLS, the sensitive information like credentials or database queries can be intercepted.
    AWS CloudTrail records every action taken in your AWS account: who did what, when, and from where. This helps with auditing, troubleshooting, and detecting suspicious activity. So you must set up API and user activity logging with AWS CloudTrail. 
    Use AWS encryption solutions along with all the default security controls within AWS services. To store and manage keys for transparent data encryption, which is enabled by default, Oracle Database@AWS uses OCI Vault. Currently, Oracle Database@AWS doesn't support the AWS Key Management Service.
    You should also use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3. 
    03:08
    Lois: And how does Oracle Database@AWS deliver strong security and compliance?
    Samvit: Oracle Database@AWS enforces transparent data encryption for all data at rest, ensuring stored information is always protected. Data in transit is secured using SSL and native network encryption, providing end-to-end confidentiality.
    Oracle Database@AWS also uses OCI Vault for centralized and secure key management. This allows organizations to manage encryption keys with fine-grained control, rotation policies, and audit capabilities to ensure compliance with regulatory standards. At the database level, Oracle Database@AWS supports unified auditing and fine-grained auditing to track user activity and sensitive operations.
    At the resource level, AWS CloudTrail and the OCI Audit service provide comprehensive visibility into API calls and configuration changes. At the database level, security is enforced using database access control lists and Database Firewall to restrict unauthorized connections. At the VPC level, network ACLs and security groups provide layered network isolation and access control. Again at the database level, Oracle Database@AWS enforces access controls through Database Vault, Virtual Private Database, and row-level security to prevent unauthorized access to sensitive data. And at the resource level, AWS IAM policies, groups, and roles manage user permissions with fine-grained control.
    05:27
    Lois: Samvit, what steps should users be taking to keep their databases secure?
    Samvit: Security is not a single feature but a layered approach covering user access, permissions, encryption, patching, and monitoring.
    The first step is controlling who can access your database and how they connect. At the user level, strong password policies ensure only authorized users can log in. And at the network level, private subnets and network security groups allow you to isolate database traffic and restrict access to trusted applications only.
    One of the most critical risks is accidental or unauthorized deletion of database resources. To mitigate this, grant delete permissions only to a minimal set of administrators. This reduces the risk of downtime caused by human error or malicious activity.
    Encryption ensures that even if the data is exposed, it cannot be read. By default, all databases in OCI are encrypted using transparent data encryption. For migrated databases, you must verify encryption is enabled and active. Best practice is to rotate the transparent data encryption master key every 90 days or less to maintain compliance and limit exposure in case of key compromise.
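The 90-day rotation practice mentioned here is easy to enforce with a scheduled check. A minimal sketch (the check itself is trivial; the alerting around it is up to you):

```python
from datetime import date

def rotation_overdue(last_rotated: date, today: date,
                     max_age_days: int = 90) -> bool:
    """Flag a TDE master key that has gone unrotated past the 90-day
    best practice. Illustrative: a real job would read the rotation
    timestamp from the vault and raise an alert instead of returning."""
    return (today - last_rotated).days >= max_age_days
```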
    Unpatched databases are one of the most common entry points for attackers. Always apply Oracle critical patch updates on schedule. This mitigates known vulnerabilities and ensures your environment remains protected against emerging threats.
    07:33
    Nikita: Beyond what users can do, are there any built-in features or tools from Oracle that really help with database security?
    Samvit: Beyond the basics, Oracle provides powerful database security tools. Features like data masking allow you to protect sensitive information in non-production environments. Auditing helps you monitor database activity and detect anomalies or unauthorized access.
    Oracle Data Safe is a managed service that takes database security to the next level. It can assess your database configuration for weaknesses. It can also detect risky user accounts and privileges, and identify and classify sensitive data. It can implement controls such as masking to protect that data. And it can continuously audit user activity to ensure compliance and accountability.
    Now, transparent data encryption enables you to encrypt sensitive data that you store in tables and tablespaces. It also enables you to encrypt database backups. After the data is encrypted, this data is transparently decrypted for authorized users or applications when they access that data.
    You can configure OCI Vault as part of the transparent data encryption implementation. This enables you to centrally manage keystores across your enterprise. OCI Vault gives centralized control over encryption keys, including key rotation and customer-managed keys.
    09:23
    Lois: So obviously, lots of companies have to follow strict regulations. How does Oracle Database@AWS help customers with compliance? 
    Samvit: Oracle Database@AWS has achieved a broad and rigorous set of compliance certifications. The service supports SOC 1, SOC 2, and SOC 3, as well as HIPAA for health care data protection. If we talk about SOC 1, that basically covers internal controls for financial statements and reporting. SOC 2 covers internal controls for security, confidentiality, processing integrity, privacy, and availability.
    SOC 3 covers SOC 2 results tailored for a general audience. And HIPAA is a federal law that protects patients' health information and ensures its confidentiality, integrity, and availability. Oracle Database@AWS also holds certifications and attestations such as CSA STAR, C5, and HDS.
    C5 is a German government standard that verifies cloud providers meet strict security and compliance requirements. CSA STAR attestation is an independent third-party audit of cloud security controls, while CSA STAR certification validates a cloud provider's security posture against CSA's Cloud Controls Matrix. And HDS is a French certification that ensures cloud providers meet stringent requirements for hosting and protecting health care data.
    Oracle Database@AWS also holds ISO and IEC standards. You can also see PCI DSS, which is for payment card security, and HITRUST, which is a high-assurance health care framework. These certifications ensure that Oracle Database@AWS not only adheres to best practices in security and privacy, but also provides customers with assurance that their workloads align with globally recognized compliance regimes.
    11:47
    Nikita: Thank you, Samvit. Now Rashmi, can you walk us through Oracle's migration solution that helps teams move to OCI Database Services?
    Rashmi: Oracle Zero-Downtime Migration is a robust and flexible end-to-end database migration solution that can completely automate and streamline the migration of Oracle databases. With bare-minimum input from you, it can orchestrate and execute the entire migration task, requiring virtually no manual effort.
    And the best part is you can use this tool for free to migrate your source Oracle databases to OCI Oracle Database Services quickly and reliably, eliminating the chance of human error. You can migrate individual databases or an entire fleet of databases in parallel.
    12:34
    Nikita: Ok. For someone planning a migration with ZDM, are there any key points they should keep in mind? 
    Rashmi: When migrating using ZDM, your source databases may require minimal downtime of up to 15 minutes, or no downtime at all, depending on the scenario. ZDM is built on the principles of Oracle Maximum Availability Architecture and leverages technologies like Oracle GoldenGate and Oracle Data Guard to achieve high availability and an online migration workflow, using Oracle migration methods like RMAN, Data Pump, and Database Links.
    Depending on the migration requirement, ZDM provides different migration method options. It can be logical or physical migration in an online or offline mode. Under the hood, it utilizes the different database migration technologies to perform the migration.
    13:23
    Lois: Can you give us an example of this?
    Rashmi: When you are migrating a mission-critical production database, you can use the logical online migration method. And when you are migrating a development database, you can simply choose the physical offline migration method.
    As part of the migration job, you can perform database upgrades or convert your database to multitenant architecture. ZDM offers greater flexibility and automation in performing the database migration.
    You can customize the workflow by adding pre- or post-run scripts. You can run prechecks to catch possible failures that may arise during migration and fix them, and audit migration job activity and user actions. You can control execution: pause and resume a job if needed, suspend and resume it, schedule it, or terminate a running job. You can even rerun a job from its point of failure, among other capabilities.
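    As a rough sketch, that job control is exposed through the ZDM command-line interface along these lines. The command names and options below are recalled from the ZDM CLI and may differ by release, and the hostnames, database name, job ID, and response-file path are invented for illustration, so verify everything against the documentation for your ZDM version:

    ```shell
    # Run only the prechecks for a migration job (evaluation mode)
    zdmcli migrate database -sourcedb orclprod -sourcenode srchost \
        -srcauth zdmauth -rsp /home/zdmuser/migration.rsp -eval

    # Start the actual migration job
    zdmcli migrate database -sourcedb orclprod -sourcenode srchost \
        -srcauth zdmauth -rsp /home/zdmuser/migration.rsp

    # Monitor, pause, resume, or terminate the job by its ID
    zdmcli query job -jobid 42
    zdmcli suspend job -jobid 42
    zdmcli resume job -jobid 42
    zdmcli abort job -jobid 42
    ```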
    14:13
    Lois: And what kind of migration scenarios does ZDM support?
    Rashmi: The minimum version of your source Oracle Database is 11.2.0.4. For lower versions, you will first have to upgrade to at least 11.2.0.4. You can migrate Oracle databases of either Standard or Enterprise Edition.
    ZDM supports migration of single-instance, RAC One Node, and RAC databases. It can migrate from Unix platforms like Linux, Oracle Solaris, and AIX. For Oracle databases on the AIX and Oracle Solaris platforms, ZDM uses the logical migration method.
    But if the source platform is Linux, it can use both the physical and logical migration methods. You can use ZDM to migrate databases that are on premises, in a third-party cloud, or even within Oracle Cloud Infrastructure. ZDM leverages Oracle technologies like RMAN, Data Pump, Database Links, Data Guard, and Oracle GoldenGate when choosing a specific migration workflow.
    15:15
    Are you ready to revolutionize the way you work? Discover a wide range of Oracle AI Database courses that help you master the latest AI-powered tools and boost your career prospects. Start learning today at mylearn.oracle.com.
    15:35
    Nikita: Welcome back! Rashmi, before someone starts using ZDM, is there any prep work they should do or things they need to set up first?
    Rashmi: Working with ZDM requires a few simple configuration steps. Zero Downtime Migration provides a command-line interface to run your migration jobs. First, you download the ZDM binary, preferably from My Oracle Support, where you can get the binary with the latest updates.
    You then set up and configure the binary by following the instructions available in the same My Oracle Support note. The host on which ZDM is installed and configured is called the Zero Downtime Migration service host. The host has to be Oracle Linux version 7 or 8, or it can be RHEL 8.
    Next is the orchestration step, where connectivity to the source and target is configured and tested: SSH configuration with the source and target, opening the required ports at the respective destinations, creating the dump destination, and granting the required database privileges. You then prepare the response file with parameter values that define the workflow ZDM should use during the migration.
    You can also customize the migration workflow using the response file. You can plug in scripts to be executed before or after a specific phase of the migration job. These customizations are called custom plug-ins with user actions.
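    To make the response file concrete, here is a minimal illustrative fragment for a physical online migration. The parameter names follow the general shape of the ZDM response-file template, but treat the specific keys and values as assumptions to be checked against the template shipped with your ZDM release:

    ```
    # Migration workflow: physical migration while the source stays online
    MIGRATION_METHOD=ONLINE_PHYSICAL

    # Move backups through OCI Object Storage
    DATA_TRANSFER_MEDIUM=OSS

    # DB_UNIQUE_NAME of the target database (hypothetical value)
    TGT_DB_UNIQUE_NAME=orclprod_target

    # Target platform type, e.g., an Exadata cloud service target
    PLATFORM_TYPE=EXACS
    ```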
    Your sources may be hosted on premises, on OCI-managed database services, or even in a third-party cloud. They may be Oracle Database Standard or Enterprise Edition, running on Exadata infrastructure or on standard compute.
    The target can be of the same type as the source. But additionally, ZDM supports migration to multicloud deployments on Oracle Database@Azure, Oracle Database@Google Cloud, and Oracle Database@AWS.
    You begin with a migration strategy: listing the different databases to be migrated, classifying and grouping them, performing pre-migration checks on things like dependencies, downtime requirements, and versions, and preparing the migration order, the target migration environment, et cetera.
    17:27
    Lois: What migration methods and technologies does ZDM rely on to complete the move?
    Rashmi: There are primarily two types of migration: physical or logical.
    Physical migration copies the database's physical blocks to the target database, whereas logical migration copies the logical elements of the database, like metadata and data.
    Each of these migration methods can be executed with the database online or offline. In online mode, the migration is performed while changes are still in progress in the source database.
    While in offline mode, all changes to the source database are frozen. Physical offline migration uses a backup-and-restore technique, while physical online migration creates a physical standby using backup and restore, then performs a switchover once the standby is in sync with the source database.
    Logical offline migration exports and imports database metadata and data into the target database, while logical online migration combines an export and import operation with the subsequent application of incremental updates from the source to the target database. The physical or logical offline migration methods are used when the application can allow some downtime for the migration.
    The physical or logical online migration approach is ideal for scenarios where any downtime for the source database would badly affect critical applications. The only downtime the application experiences is during the connection switchover to the migrated database.
    One other advantage is that ZDM can migrate one database or a fleet of Oracle databases by executing multiple jobs in parallel, where each job's workflow can be customized to a specific database's needs. It can perform physical or logical migration of your Oracle databases, and whether that migration is performed online or offline depends on the downtime the business can approve.
    19:13
    Nikita: Samvit and Rashmi, thanks for joining us today.
    Lois: Yeah, it's been great to have you both. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    19:35
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Getting Started with Oracle Database@AWS

    17/02/2026 | 23 mins.
    If you've ever wondered how Oracle Database really works inside AWS, this episode will finally turn the lights on.
     
    Join Senior Principal OCI Instructor Susan Jang as she explains the two database services available (Exadata Database Service and Autonomous Database), how Oracle and AWS share responsibilities behind the scenes, and which essential tasks still land on your plate after deployment.
     
    You'll discover how automation, scaling, and security actually work, and which model best fits your needs, whether you want hands-off simplicity or deeper control.
     
    Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
     
    ------------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
     
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. 
    Nikita: Hi everyone! In our last episode, we began the discussion on Oracle Database@AWS. Today, we're diving deeper into the database services that are available in this environment. Susan Jang, our Senior Principal OCI Instructor, joins us once again. 
    00:56
    Lois: Hi Susan! Thanks for being here today. In our last conversation, we compared Oracle Autonomous Database and Exadata Database Service. Can you elaborate on the fundamental differences between these two services?  
     
    Susan: Now, the primary difference between the services is really the management model. Autonomous Database is fully managed by Oracle, while Exadata Database Service provides the flexibility to customize your database environment while still having the infrastructure managed by Oracle.
    01:30
    Nikita: When it comes to running Oracle Database@AWS, how do Oracle and AWS each chip in? Could you break down what each provider is responsible for in this setup? 
    Susan: Oracle Database@AWS is a collaboration between Oracle and AWS. It allows customers to deploy and run Oracle Database services, including Oracle Autonomous Database and Oracle Exadata Database Service, directly in AWS data centers.
    Oracle provides the ability of having the Oracle Exadata Database Service on a dedicated infrastructure. This service delivers full capabilities of Oracle Exadata Database on the Oracle Exadata hardware. It offers high performance and high security for demanding workloads. It has cloud automation, resource scaling, and performance optimization to simplify the management of the service. 
    Oracle Autonomous Database on dedicated Exadata infrastructure provides a fully autonomous database on dedicated infrastructure within AWS. It automates database management tasks, including patching, backups, and tuning, and has built-in AI capabilities for developing AI-powered applications and interacting with data using natural language. Oracle Database@AWS integrates those core database services with various AWS services for a comprehensive, unified experience.
    AWS provides cloud-based object storage through Amazon S3. You also have other services, such as Amazon CloudWatch, which monitors database metrics and performance. Amazon Bedrock provides a development environment for generative AI applications.
    And last but not least, among the many other services, you also have SageMaker, a cloud-based platform for developing machine learning models, which integrates well with our AI application development needs.
    03:54
    Lois: How has the work involved in setting up and managing databases changed over time? 
    Susan: When we take a look at how our systems have changed through the years, we see that responsibility has increasingly shifted from the customer, from human interaction, to the service. As database technology evolved from traditional on-premises systems to Exadata engineered systems, and finally to Autonomous Database, tasks previously requiring significant manual intervention have become increasingly automated and optimized.
    04:34
    Lois: How so? 
    Susan: A more traditional database environment requires manual configuration of the hardware, operating system, and database software, along with initial database creation. As we evolve into the Exadata environment, the Exadata Database, specifically the Exadata Cloud Service, simplifies provisioning through a web-based wizard, making it faster and easier to deploy Oracle Database on optimized hardware.
     
    But when we move to an Autonomous environment, the entire provisioning process is automated, allowing users to rapidly deploy mission-critical databases without manual intervention or DBA involvement. So as customers move through Exadata toward Autonomous Database, there are fewer components the customer needs to manage in the database stack, which gives them more time to focus on the important parts of the business.
    The Exadata Database service provides co-management of backup, restore, patches and upgrades, monitoring, and tuning, and it allows administrators to customize the configuration to meet their very specific business needs. With Autonomous Database, it's now fully automated, and a greater share of the responsibility shifts to the service. With Autonomous Database on dedicated infrastructure, Oracle performs that fine-grained tuning for you.
    06:15
    Nikita: If we narrow it down just to Oracle and AWS for a moment, which parts of the infrastructure or day-to-day ops are handled by each company behind the scenes? 
    Susan: Oracle Database@AWS operates under a shared responsibility model, dividing the service responsibilities among AWS, Oracle, and you, the customer.
    AWS has the data center. Remember, this is where everything is running. The Oracle Database@AWS database infrastructure may be managed by Oracle and run from OCI, but it is physically located within AWS regions, availability zones, and AWS data centers.
    The AWS infrastructure, in this case, is AWS's responsibility: securing the environment, including the physical security of the data center, the network infrastructure, and foundational services like compute, storage, and networking within AWS.
    Next in the shared responsibility model is Oracle, and that would be the hardware. We provide the hardware. While the hardware may physically reside in the AWS data center, Oracle Cloud Infrastructure's operations team manages this infrastructure, including software patching, infrastructure updates, and other operations, through a connection to OCI. This means Oracle handles the provisioning and maintenance of the underlying Exadata infrastructure hardware.
    Beyond the Exadata infrastructure itself, Oracle is also responsible for managing the hardware environment through the database control plane. So Oracle manages the administration and operations for the Oracle Database@AWS service, which resides in OCI. This includes the capabilities for management, upgrades, and operational features.
    08:37
    Nikita: And what are the key things that still remain on the customer's plate?  
    Susan: Whether you are in an Exadata environment or an Autonomous environment, it is you, the customer, who is responsible for most of the database administration operations, as well as managing users and their privileges to access the database. No one knows the database, and who should be accessing the data, better than you.
    You are responsible for securing the applications and the data in the database, which means you define who has access to it, control the data encryption, and secure the applications that interact with Oracle Database@AWS.
    09:29
    Lois: Susan, we've talked about both Autonomous Database and Exadata Database Service being available on Oracle Database@AWS, but what's different about how each works in this environment, and why might someone pick one over the other? 
    Susan: Both databases run on the same Exadata Cloud Infrastructure, and both can be deployed in the public cloud as well as in the customer's data center, which is Oracle Cloud@Customer.
    The Autonomous Database is a fully managed, completely automated environment. And this provides a capability of having a fully Autonomous Database Service running on a dedicated Oracle Exadata Infrastructure within your AWS data center. 
    The Exadata Database Service is provided and managed by Oracle and physically runs in the AWS data center. It is designed for mission-critical workloads and includes a RAC environment, Real Application Clusters, offering high performance, availability, and full-feature capability similar to other Exadata environments, such as those running in our customers' data centers.
    The primary difference really comes down to how the two services are managed and scaled. With Autonomous Database, the customer pays only for the compute resources that are used, and autoscaling can automatically scale compute resources up or down as variable workloads require.
    The Autonomous Database also has automatic optimization for data warehousing, transaction processing, and JSON workloads. With the Exadata service, the customer also pays for the compute resources they allocate, but here's the key thing: the customer initiates the scaling, because it's specific to the workload that is needed.
    So when you look at the two database services, one gives you the ability to let Oracle fully manage everything, including the scaling capability. The other, Exadata, provides an environment whose infrastructure is managed by Oracle while you, as a database administrator, may wish to have a bit more granular control over not only how the database scales, but how it runs.
    12:10
    Nikita: Focusing on Autonomous Database for a moment, what should teams know about how it actually runs within AWS?  
    Susan: The Autonomous Database on the Oracle Database@AWS brings the power of the Oracle's self-managing, self-securing, and self-repairing database into your AWS environment. 
    It automates many of the traditional, complex, and time-consuming database management tasks, such as provisioning, patching, backup, scaling, and performance tuning, reducing the need for manual intervention by the database administrator.
    Running the Autonomous Database in your AWS region enables low-latency access for AWS applications and services deployed within AWS, improving performance and response times. It also supports integration with various AWS services, such as IAM, CloudFormation, CloudWatch for monitoring, and S3 for storage.
    You can easily migrate existing Exadata workloads, including those running on Oracle RAC, to AWS with minimal or no changes to your databases or applications. In addition, there's a really powerful capability of the database called zero-ETL, that is, zero extract, transform, and load.
    It's an integration capability with services like Amazon Redshift, enabling near real-time analytics and machine learning on the transactional data stored within the Autonomous Database in your AWS environment. So the Autonomous Database checks off many of the boxes for automatically securing, tuning, and scaling the database.
    With the Autonomous Database on Dedicated Exadata Infrastructure, the Exadata Cloud Infrastructure resource represents the physical system, which can be expanded with storage as well as compute servers, the compute hosts. This provides an isolated zone with the highest protection from other tenants. The data is stored on dedicated servers for only one customer, and that would be you.
    14:56
    Lois: Could you explain the role of Autonomous VM? What are its primary benefits? 
    Susan: The virtual machine, or as we refer to it, the VM cluster, includes the grid infrastructure and provides private network isolation. This gives you the capability of custom memory, core, and storage allocation.
    The Oracle Grid Infrastructure includes Oracle Clusterware, which manages the cluster and the servers, and ensures that the database can fail over to another server in case of any failure.
    15:34
    Be a part of something big by joining the Oracle University Learning Community! Connect with over 3 million members, including Oracle experts and fellow learners. Engage in topical forums, share your knowledge, and celebrate your achievements together. Discover the community today at mylearn.oracle.com. 
    15:55
    Nikita: Welcome back! Susan, what is the Autonomous Container Database? 
    Susan: The Autonomous Container Database, which you need if you're going to create an Autonomous Database, is provisioned within your Autonomous Exadata VM cluster. It serves as a container to hold, or house, one or more Autonomous Databases.
    This allows multiple Autonomous Databases to coexist in the same infrastructure while still being logically separated. And this allows for the separation of databases based on their intended use. Think of a database for production. Think of a database for development. Think of a database for testing. You may have different database versions within the same infrastructure. 
    This isolation makes it easier for you to meet your SLAs, your Service Level Agreements, any long-term backup requirements, and very specific encryption key needs, and to prevent issues in one database from impacting another. So you have the ability to keep everything isolated and secure while still grouping databases in a manner that meets your business needs.
    17:08
    Lois: Looking at Exadata Database Service specifically, what are some standout advantages for customers who deploy it on Oracle Database@AWS? Is there anything in particular they should get excited about in terms of performance or integration with AWS? 
    Susan: The Exadata Database Service runs on dedicated Exadata infrastructure deployed within the AWS data center. It delivers the same Exadata service experience and cloud control plane as Oracle Cloud Infrastructure, allowing you to leverage existing skills and processes across your multicloud environment.
    It addresses data residency. That's a scenario where many of our customers have a need: because of security and compliance requirements, the data must stay local to you. By having the Exadata Database Service in Oracle Database@AWS, it is running in your data center, so this addresses that very important data residency need.
    It also allows for seamless integration with other AWS services and applications, so you now have a hybrid cloud architecture leveraging the benefits of both Oracle Exadata and your AWS systems. It has built-in high availability with Real Application Clusters, as well as Data Guard, which addresses disaster recovery.
    This also provides the ability for you to scale your compute, storage, and I/O resources independently. So as mentioned, with Exadata you have flexibility in how you want your database to run. Just like the Autonomous, the Exadata Database checks off many of the boxes for running mission-critical workloads, with highly available, highly redundant hardware and software features, along with extreme performance, scalability, and reliability.
    This now allows you to run your AI environment, your online transaction processing, your analytic workload on any scale on the Exadata Infrastructure running in the Oracle Cloud. And in this case, running in your data center. 
    19:45
    Nikita: If a business suddenly needs more capacity, how does scaling work with Exadata Database Service versus Autonomous Database on Oracle Database@AWS?  
    Susan: So with Exadata scaling, you can scale to meet expected demands: you know that at a certain point you will need more, and you ask it to scale at that point. As an example, say I assign it three compute cores all the time, but there may be demands, think of end-of-quarter or end-of-year processing, when I need more. So you enable additional compute cores to scale at the time you need them.
    And what's cool is that when it's no longer needed, it scales back down to the original three cores you assigned, so you only pay for the enabled cores. But what's very cool about the Autonomous is that it is real-time scaling. Since the Autonomous Database is self-tuning and self-monitoring, it actually monitors the workload requirements and scales to match the workload demand.
    Once the minimum level of compute is defined and enabled, automatic scaling is set. Autonomous Database will adjust to the consumption when needed, and it will scale back down when it's not. So though Exadata scaling is pretty cool, letting you scale up and down on workload demand, the Autonomous capability is even more powerful. It is real-time scaling based on usage at that moment, with built-in automatic increases to meet workload demand when it spikes and automatic scale-back when it's not needed.
    A very powerful capability with all of our Oracle databases, even traditional ones, is the ability to define what you may need: Exadata scaling for peak demands, as well as Autonomous scaling for real-time consumption when needed.
    When you look at all of our options, one of the key things to bear in mind is a phrase that we use: performance scales as more servers are added. What this is really saying is that Oracle's automated scaling capability for the database can maintain or improve performance under increased workload by automatically adding computational resources when needed.
    This process is also known as horizontal scaling: adding more servers, or compute instances, to a cluster to share the processing load. And the service has that capability automatically.
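    To make the contrast between the two scaling styles concrete, here is a toy Python model of what Susan describes: customer-initiated scaling for known peaks versus demand-following autoscaling within a configured range. The function names, core counts, and hours are invented for illustration and do not reflect actual Oracle metering or billing:

    ```python
    # Toy model contrasting the two scaling styles described above.
    # All numbers are illustrative; the real services meter differently.

    def exadata_billed_cores(baseline, scheduled_peaks, hour):
        """Customer-initiated scaling: cores change only when the DBA
        enables them for a known peak (e.g., end-of-quarter processing)."""
        return scheduled_peaks.get(hour, baseline)

    def autonomous_billed_cores(baseline, maximum, demand):
        """Real-time autoscaling: the service follows measured demand,
        bounded by the configured minimum and maximum."""
        return min(max(baseline, demand), maximum)

    # A day with steady demand of 2 cores and a spike to 7 at hour 14.
    demand_by_hour = {h: 2 for h in range(24)}
    demand_by_hour[14] = 7

    # Exadata-style: the DBA pre-enables 8 cores for hour 14 only.
    exa = [exadata_billed_cores(3, {14: 8}, h) for h in range(24)]
    # Autonomous-style: scales to demand automatically, capped at 8.
    adb = [autonomous_billed_cores(3, 8, demand_by_hour[h]) for h in range(24)]

    print(sum(exa))  # 23 hours * 3 cores + 8 = 77 core-hours
    print(sum(adb))  # 23 hours * 3 cores + 7 = 76 core-hours
    ```

    The point of the sketch: the pre-scheduled model pays for whatever the DBA enabled for the peak, while the autoscaling model follows the actual spike, so it never over- or under-provisions within its configured bounds.
    
    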
    22:53
    Nikita: There's so much more we can discuss about Oracle Database@AWS, but let's pause here for today! Thank you so much Susan for joining us. 
    Lois: Yeah, it's been really great to have you, Susan. If you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… 
    Nikita: And Nikita Abraham, signing off! 
    23:23
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

About Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.