This is the official channel by Oracle Database Upgrade and Migrations team at Oracle. We are mainly posting videos that are related to database upgrade, migration, and patching, but you can also find other Oracle-related videos. We try to publish videos every week. Finally, our promise to you:
It is all tech - no buzzword, no marketing!
Maintained by:
- Mike Dietrich, Distinguished Product Manager @ Oracle
- Daniel Overby Hansen, Senior Principal Product Manager @ Oracle
- Bill Beauregard, Senior Principal Product Manager @ Oracle
- Rodrigo Jorge, Senior Principal Product Manager @ Oracle
- Roy Swonger, Vice President @ Oracle
Is there any way to list all the critical bug fix patches released against a quarterly DB RU? Also, is there an API where we can find all the critical bug fix patches released against a quarterly DB RU?
Very informative. Question: why is it not recommended to create a guaranteed restore point before running datapatch on the primary, as it is easier to roll back using a GRP? Kindly suggest.
Hi, That's a good point. If you keep users off the database until you have finished datapatch, then a GRP could be a viable rollback plan. However, you can run datapatch while users are connected to the database. This is what we do with standby-first and RAC rolling patch apply. But even for single-instance databases you can run datapatch with users connected. This minimizes the downtime needed. In such a case, a GRP is not a viable rollback plan because it would mean data loss. Every patch applied by Datapatch has a corresponding rollback action that can revert the apply. In case of a problem, Datapatch can always roll back to the starting point without issues. But feel free to use a GRP as a rollback plan. It works - you just need to keep users off the database while you run Datapatch. Regards, Daniel
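As a hedged illustration of that rollback bookkeeping: every apply and rollback Datapatch performs is recorded in the data dictionary, so you can verify the state at any time (view and column names as documented for 19c):

```sql
-- Show every action Datapatch has performed, newest last
SELECT patch_id, action, status, action_time, description
  FROM dba_registry_sqlpatch
 ORDER BY action_time;
```

A STATUS of SUCCESS on a ROLLBACK action confirms the SQL changes were reverted without needing a restore point.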
@@upgradenow Thanks Daniel for the detailed answer, but I would like to share a real, challenging patching experience using dbaascli. Until now we have taken 7-8 hours of downtime for RU patching of an ExaCS system, and we ended up with a complex problem where datapatch failed and MOS also found it hard to fix during the downtime. We were then blamed by MOS for not having a GRP to roll back datapatch. Later a very senior MOS engineer joined, read the code, and fixed it internally to bring the PDB up. So I am still not sure if datapatch can be run while users are connected. Please share the MOS note number to support this, so I can take it up with the team internally. Thanks a lot
Hi, No, you can't. There is no direct upgrade from 11.2.0.4 to 23ai. You have to go through 19c first. Further, Data Guard is not a vehicle for upgrades, unless you use a transient logical standby (search for DBMS_ROLLING). But if you have standby databases, you can upgrade those together with the primary database. Regards, Daniel
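For readers curious about DBMS_ROLLING, a minimal sketch of the main phases; `boston` is an illustrative db_unique_name for the designated future primary:

```sql
-- Rolling upgrade with a transient logical standby (sketch)
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'boston');
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;    -- converts the standby to a transient logical standby
-- ... upgrade the transient logical standby to the new release ...
EXEC DBMS_ROLLING.SWITCHOVER;    -- swaps roles with minimal downtime
EXEC DBMS_ROLLING.FINISH_PLAN;   -- rebuilds the old primary as a physical standby
```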
Hi, Yes, you can. It should work more or less like with a single-instance database. When you upgrade the transient logical standby on RAC, you must have only one instance running, but AutoUpgrade takes care of that for you. Regards, Daniel
Hi, Good catch. The database is upgraded fine, however, it seems like we used the wrong environment settings in the demo (that's not ideal). We should have set the environment to the 19c Oracle home and used the binaries from there. Regards, Daniel
Hi, This looks fine. However, on my system the datafile of the TS is restored to +DATA, pointing to the correct position +DATA/<DBNAME>/143BA60E583F672AE0633DE6010A7B01/DATAFILE/xxxx.469.1167307693. Source is a 12c database, target is 19c (non-CDB). I set dest_datafile_location to +DATA. Both databases run on the same host, but I guess this should not cause problems. Thx for any help
Hi, There shouldn't be a problem migrating on the same host. If you don't like the destination of the data files after the migration, you can always move the data files later on with the ALTER DATABASE MOVE DATAFILE command. Regards, Daniel
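A hedged example of that online move; the source path is illustrative, and with OMF configured you can omit the TO clause so the database picks the destination from db_create_file_dest:

```sql
-- Move a data file online to an explicit ASM target
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf' TO '+DATA';

-- Alternatively, with OMF configured, let the database pick the destination
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf';
```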
Hi Mike, I had a small doubt. In FTEX (without RMAN incremental) the USERS tablespace is also part of the export dump. We know that the target database also has the same tablespace by default. To avoid a conflict, Oracle suggests renaming the tablespace or dropping the destination USERS tablespace. If we rename the tablespace, that of course will not rename the datafile associated with it. So how can we copy the source datafiles to the destination when a file with the same name is already there? If I overwrite it, that will lead to corruption, as the database ID in the overwritten users01.dbf will be different from that of the destination database.
Hi, If you are using OMF, the data files will always have different names. If you are not using OMF, then you might need to rename the data files as well. However, the script restores the data files to a location different from where the data files usually are in the target database. I don't expect you'd run into this problem. Regards, Daniel
@1:14:35 Why is it said that "never use sys as sysdba for datapump"? What is the exact reason behind this? What makes it different when we use a dedicated user for Data Pump?
Hi, Internally in the database, SYS behaves differently in many ways. Some features are disabled for SYS, and in other situations a SYS session will do things differently. Regards, Daniel
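A sketch of the alternative: a dedicated Data Pump user instead of SYS. The user name and password are placeholders; the roles are the standard Data Pump roles:

```sql
-- Dedicated export/import user (replace the placeholder password)
CREATE USER dpadmin IDENTIFIED BY "change_me";
GRANT CREATE SESSION TO dpadmin;
GRANT DATAPUMP_EXP_FULL_DATABASE TO dpadmin;
GRANT DATAPUMP_IMP_FULL_DATABASE TO dpadmin;
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO dpadmin;
```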
Thanks, Daniel. I may be unintentionally asking too much information from you. But to understand this, can you just provide one example of what is disabled for the SYS user when performing an export with it?
I tried this out. However, I wanted to transport only the TS listed in dbmig_ts_list.txt: a1,a2. When I launch dbmig_driver_m5.sh with the L1F option, I run into an ORA-29335 error for all other TS that are not taken read only. Well, I note you spoke about "whole database" - but is there a way to do the job on single tablespaces? Thx
Hi, The procedure was developed originally for full transportable export/import where you take the entire database including all tablespaces. Having said that, the RMAN portion should work fine with just a subset of the tablespaces, but it might require that you edit the scripts a little. Or you can extract the relevant commands from the generated scripts and run them yourself. For the Data Pump part, you need to move from a full transportable to a regular transportable job with just the tablespaces you'd like. Then also, you need to create users and all other metadata. I hope that helps, Daniel
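A hedged sketch of the regular transportable job Daniel describes; the directory, dump file name, and tablespace names (taken from the question) are illustrative:

```shell
# The tablespaces must be READ ONLY before the export:
#   ALTER TABLESPACE a1 READ ONLY;
#   ALTER TABLESPACE a2 READ ONLY;
expdp system directory=DATA_PUMP_DIR dumpfile=tts_a1_a2.dmp \
      transport_tablespaces=a1,a2 transport_full_check=y
```

Note that, unlike a full transportable job, this carries only the tablespace metadata, so users, grants, and other metadata must be created separately.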
Hi, The impact is... unknown. We don't test that situation or take it into account. But I've seen other situations where it led to unexpected, intermittent errors. Could also be ORA-00600. Regards, Daniel
Very confusing video. I did not learn anything from it. Does this only work with ExaCC? Does this only work with ZDLRA? How much time does it take to migrate 1,500 databases? Do all databases have the same character set and time zone?
Hi, This is a short extract of a longer webinar. Perhaps you get a better picture if you watch it all. The procedure works for any Linux platform including our engineered systems and it is optional to use ZDLRA. In the complete webinar we touch upon the benefits of using ZDLRA. The source and target databases must have the same character set and time zone, but whether the individual databases also had that, I don't know. But I don't see how this could be relevant for the project. If you want an up-to-date reference story about cross-platform migrations, please check out our webinar "Cross Platform Migration - Transportable Tablespaces to the Extreme" on dohdatabase.com/webinars. Regards, Daniel
16 hours is still huge. I have a similar requirement where we need to migrate from AIX to Linux. The database is running 19c on a file system, the database size is 45 TB, and the plan is to move to Linux on ASM. The customer can afford only 1-2 hours of downtime. What technologies would you recommend?
Hi, This reference project is a few years old and uses an older technique. We have a new, improved method for cross-platform migrations that we explain in our webinar "Cross Platform Migration - Transportable Tablespaces to the Extreme". You find the video and slides here: dohdatabase.com/webinars/ For a cross-platform migration of such a large database, I think only Oracle GoldenGate is a viable solution. Transportable tablespaces as shown above might be possible in Oracle Database 21c and later with parallel transportable jobs in Data Pump, but only for very simple databases (they might be big, but the dictionary must not be complex). Regards, Daniel
Hi, Could you please be so kind as to tell us the maximum number of characters or tablespace names, separated by commas, in the dbmig_ts_list.txt file? (M5). Thanks in advance
Hi, There shouldn't be a maximum - the script should be able to handle whatever the database can handle. We recently launched a new version of the M5 script which handles a case with 1,000s of tablespaces better. In another case I worked on, we had almost 10,000 data files. If you run into trouble, you must have really many, many tablespaces; in that case, create a service request. Regards, Daniel
Great tutorial. Two questions, however: Does this work if the databases have the same endianness, so no conversion is needed? Must the source and target databases have the same name? I guess so, as the database files have this info in the header. Thanks a lot, Christian
Hi Christian, Thanks for the positive feedback. Much appreciated. Yes, you can use this method even if the source and target databases are on the same platform. RMAN investigates the data files and determines automatically whether a conversion is needed when it restores the data files. No, you can transport the tablespaces into a database of any name. It does not matter. In the target database, Data Pump plugs in the new tablespaces and makes the necessary adjustments. I hope that helps, Daniel
I'm glad you all posted this. We're testing with an EBS database despite that M5 is not yet certified with Ebusiness Suite cross platform. V4 is so convoluted, we'd love to use M5 and hope it will be certified for EBS soon. Have you all tried it on a sizeable EBS database and can you report any successes, issues, workarounds? Thanks much for the video. Really appreciate it.
Hi Jeff, I have actually tried it on an EBS database, and it worked without problems. However, that's just the database part. I haven't tried any of the other EBS steps that you must do. Technically, I don't see why it shouldn't work. However, I must recommend that you follow the official procedures from Oracle and the EBS team. We have reached out to the EBS team for guidance on M5 and EBS, but so far nothing concrete. You can send me an e-mail at daniel.overby.hansen@oracle.com. I can update you when we hear from the EBS team. Regards, Daniel
@@upgradenow Hi Daniel, thanks for the quick reply. I'll send to your email address. I'm pretty sure this method will be the "goto" for cross platform as this is sorely needed in the oracle bag of tools. Hopefully it will be certified with EBS soon as their support for v4 is not very strongly worded anyway. They say to be sure to keep a standard export for backup, etc. etc. We are planning on installing the middle tier software from scratch of course as you can't migrate the homes. But we'll handle the cloning parts as usual. Thanks again and I'll send you a note, as we have some questions. Great video. 🙂
Great video. Can I handle "ALTER SYSTEM FLUSH REDO" etc. from DGMGRL? I find it more straightforward to do everything from one interface (as we use dgmgrl for switchover, failover, etc.)
Hi, That's a good question. I'm not a Data Guard expert, so perhaps I'm wrong, but judging from the documentation it appears you can flush redo from DGMGRL. Regards, Daniel
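For reference, the SQL form of the command mentioned in the question looks like this; `prod_stby` is an illustrative db_unique_name of the standby receiving the redo:

```sql
-- Flush unsent redo from the primary to a standby
-- (typically used before a failover, with the target mounted)
ALTER SYSTEM FLUSH REDO TO 'prod_stby';
```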
At my workplace, we are migrating from SuperCluster M8 to Exadata M9. We are also migrating tablespaces from 18c to 19c. At the point of importing the metadata into 19c, we get an error about the version. Could you kindly help? We used the transportable tablespace framework.
Hi, I'm afraid I need a little more detail to answer your question. I suggest that you read my blog post series on XTTS. It has a lot of information, a step-by-step guide, and links to demos. dohdatabase.com/xtts Further, we are hosting a 2-hour webinar on such migrations. Specifically, we have a similar customer case which could be of interest to you. We are using a newer method, but much of the information applies to a migration from 18c to 19c as well. dohdatabase.com/webinars/ Regards, Daniel
Hi, Thanks for your question. Yes, you can have two Oracle homes of the same release on the same server. You can have as many as you like. You can even have two Oracle homes on the exact same patch level. Regards, Daniel
Hi, You can find the information here: docs.oracle.com/en/database/oracle/oracle-database/19/multi/plugging-in-a-pdb.html#GUID-EE58804F-A999-4F7C-93DE-D641B3C1047B You might also find this useful: docs.oracle.com/en/database/oracle/oracle-database/19/multi/introduction-to-the-multitenant-architecture.html#GUID-D67CAB03-AB86-4761-BCF3-F91CA648B315 We even have a webinar with much more detail: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7PNTJqbX5Ew.html Regards, Daniel
Hi, AutoUpgrade automatically determines the state of the source database. Whether or not an upgrade is needed is determined by examining the specified target Oracle home (target_home). If it is on a newer version, then AutoUpgrade knows that an upgrade is needed. Whether you also want to plug your non-CDB into a CDB (and convert it) is determined by the parameter "target_cdb". By specifying that, you are saying that you want to convert to the CDB architecture. Regards, Daniel
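A minimal config file sketch showing the parameters mentioned above; all paths, SIDs, and names are illustrative:

```
upg1.sid=DB122
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.target_cdb=CDB19
```

With target_cdb set, AutoUpgrade upgrades the non-CDB and then plugs it into and converts it for the named CDB; without it, the database stays a non-CDB.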
I had this one doubt. Let's say there is a component called 'A' visible in the registry in an INVALID state. I uninstalled it, then started the upgrade. Will that component be recreated if the new release supports the same component?
Hi, That's a good question. The upgrade process does not install any new components. If you previously uninstalled a component, then during upgrade that component is completely skipped. Regards, Daniel
Hi, Thanks for the positive feedback. Much appreciated. If there is any video that you would like us to do, please leave a note in our RU-vid community: www.youtube.com/@upgradenow/community. Regards, Daniel
Hi Daniel, Thank you for this clear video explanation. We are in the midst of switching from in-place patching to out-of-place patching and automating as much as possible with AutoUpgrade. I was wondering if the shown method also works in this case, especially when we also upgrade the time zone file using the "timezone_upg=yes" option in the configuration file. With a time zone upgrade of a CDB, extra restarts of the database are needed. How are these restarts reflected on the standby? Thank you
Hi Frank-Jan, If you upgrade the time zone file on a primary database, there is no need to perform anything on the standby database. The time zone file upgrade happens entirely in the database, and the changes are recorded in the redo. Once the redo is applied on the standby, it is implicitly upgraded. The only restart of the primary database is the restart into upgrade mode. The time zone file upgrade happens with the database in upgrade mode. However, when the redo is applied on the standby database, there is no such requirement. The redo is just applied. Regards, Daniel
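To verify this, you can compare the time zone file version on the primary and the standby after redo apply, using the standard `V$TIMEZONE_FILE` view:

```sql
-- Run on both primary and standby; versions should match once redo is applied
SELECT filename, version FROM v$timezone_file;
```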
@@upgradenow Yes, I liked it and have personally used it. Many aspects (parameters, deprecated ones, roles, privs, etc.) related to db upgrade versions are covered. I came to know about it when Mike Dietrich mentioned it in one of his seminars. A big thanks to the team and the people involved in making this tool. And thank you for sharing this.
Nice. Path selection applies when source and target are equivalent, e.g. Linux to Linux. Btw, duplicate for standby with a higher version is not possible; backup/restore is handy in that case.
Hi, Yes, you are right. Data Guard is a great tool for migrations, however, redo transport requires SQL*Net connection between source and target. In some situations, that's not allowed and here incremental backups can be very useful. But if Data Guard is an option, I take Data Guard any day of the week. Regards, Daniel
What is the estimated "Down time" when switching over to PROD2? It appears as though there will still be some type of "outage" when making the switch over that will affect the users. If we switch to Oracle RAC, will this resolve the "down time" entirely?
Hi, Estimated downtime is .... it depends! :) You will perform a Data Guard switchover and there are numerous things that influence it. Typically, we see switchover completing in less than a minute or up to a few minutes. If you use RAC you can completely hide the outage by performing a rolling patch apply. If you have RAC, you probably don't want to switchover to the standby. You can just do the standby database first, and then a rolling patch apply on the primary. That yields no downtime. Regards, Daniel
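A hedged sketch of the broker commands for that switchover; `prod2` is the illustrative db_unique_name of the standby:

```
DGMGRL> VALIDATE DATABASE prod2;
DGMGRL> SWITCHOVER TO prod2;
```

VALIDATE DATABASE beforehand flags issues (missing standby redo logs, lagging apply) that would otherwise stretch the switchover window.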
Hi, We don't have a demo of a migration using this approach, unfortunately. However, you should be able to use the commands from this video to pull it off. You can watch the full section here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zharMnB02Uw.html Regards, Daniel
In the execution analysis task, execution type => 'TEST EXECUTE': should we run this? Will it run all the DELETE, UPDATE, or INSERT queries in the migrated database?
Does AutoUpgrade set cluster_database=false for RAC environments? Or do I need to create a .sql file to do that and include the script in the .cfg?
Hi, All the commands should be in the recorded demo. Since we made the demo, it has become even easier to connect SQLcl to ADB. Check the documentation for an up-to-date description: docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/connect-tools.html#GUID-CF6C7E1B-D0D4-4641-BADA-5C57DEA7C73B. This might help as well: docs.oracle.com/en-us/iaas/autonomous-database/doc/connect-oracle-sqlcl.html. Regards, Daniel
In a Windows Oracle 19c database, I have moved a data file to another location successfully, but the data file was not automatically removed from the original location. So do we need to remove it physically?
Hi, On Windows, you might see data files being locked by the operating system even though the files have been moved. This is an issue specific to Windows and has to do with the way file locking is implemented in the OS. The database only uses the new file, but often we see that the old file stays locked by the database process until the database is restarted. There's nothing Oracle can do about that; it's just how it is on Windows. If you are on Windows and this is a big issue, my only recommendation is to use ASM instead. Regards, Daniel
It depends on your client and the exact nature of your workload. Most workloads should be able to replay without problems, but you would need to analyze your specific application. Regards, Daniel