Oracle Database Upgrades and Migrations
This is the official channel of the Oracle Database Upgrade and Migrations team at Oracle. We mainly post videos related to database upgrade, migration, and patching, but you can also find other Oracle-related videos. We try to publish videos every week. Finally, our promise to you:

It is all tech - no buzzwords, no marketing!


Maintained by:
Mike Dietrich, Distinguished Product Manager @ Oracle
Daniel Overby Hansen, Senior Principal Product Manager @ Oracle
Bill Beauregard, Senior Principal Product Manager @ Oracle
Rodrigo Jorge, Senior Principal Product Manager @ Oracle
Roy Swonger, Vice President @ Oracle
Demo of a simple Data Pump job
2:54
A month ago
Getting started with Oracle Data Pump
4:38
A month ago
Data Pump architecture - part 2
4:30
A month ago
Data Pump architecture - part 1
5:07
A month ago
Comments
@ligariusHector, 1 day ago
Amazing videos...thank you very much Mike, take care and catch you later! ;)
@parameshhc, 5 days ago
Is there any way to list all the critical bug-fix patches released against a quarterly DB RU? Also, is there any API where we can find all the critical bug-fix patches released against a quarterly DB RU?
@upgradenow, 4 days ago
Hi, We publish that information in MOS note 555.1. To my knowledge there is no API to get that information. Regards, Daniel
@rajatsanwal9919, 19 days ago
Hi Daniel, At @03:57, you did not provide the old Grid home path anywhere. So how does the -switchGridHome command identify the old Grid home location?
@upgradenow, 18 days ago
Hi, That's a good question. It comes from inventory.xml, I assume. Regards, Daniel
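To make Daniel's pointer concrete: the central inventory records every registered Oracle home with its name and location. The snippet below builds a simplified, hypothetical sample of such a file and lists the homes from it; the real file usually lives under the oraInventory directory (e.g. ContentsXML/inventory.xml), and the home names and paths here are placeholders, not taken from the video.

```shell
# Simplified, hypothetical sample of a central inventory file.
# On a real system, look in <oraInventory>/ContentsXML/inventory.xml.
cat > /tmp/inventory_sample.xml <<'EOF'
<INVENTORY>
 <HOME_LIST>
  <HOME NAME="OraGI19Home1" LOC="/u01/app/19.0.0.0/grid_old" TYPE="O" IDX="1"/>
  <HOME NAME="OraGI19Home2" LOC="/u01/app/19.24.0.0/grid_new" TYPE="O" IDX="2"/>
 </HOME_LIST>
</INVENTORY>
EOF

# List every registered home name and its location
grep -o 'NAME="[^"]*" LOC="[^"]*"' /tmp/inventory_sample.xml
```

With both the old and the new Grid home registered like this, a tool can discover the currently unused home without the user specifying it.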
@mayanksingh6663, 25 days ago
Very informative. Question: why is it not recommended to create a guaranteed restore point before running Datapatch on the primary, as it is easier to roll back using a GRP? Kindly suggest.
@upgradenow, 25 days ago
Hi, That's a good point. If you keep users off the database until you have finished Datapatch, then a GRP could be a viable rollback plan. However, you can run Datapatch while users are connected to the database. This is what we do with standby-first and RAC rolling patch apply. But even for single-instance databases you can run Datapatch with users connected. This minimizes the downtime needed. In such a case, a GRP is not a viable rollback plan because it would mean data loss. Every patch applied by Datapatch has a corresponding rollback action that can revert the apply. In case of a problem, Datapatch can always roll back to the starting point without issues. But feel free to use a GRP as a rollback plan. It works - you just need to keep users off the database while you run Datapatch. Regards, Daniel
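For readers who do keep users off the database during patching, the GRP approach Daniel describes can be sketched like this (the restore point name is a placeholder, and flashing back requires the database in MOUNT mode):

```sql
-- Before running Datapatch, with users off the database:
CREATE RESTORE POINT before_datapatch GUARANTEE FLASHBACK DATABASE;

-- ... run datapatch and validate ...

-- If validation succeeds, release the space held by the GRP:
DROP RESTORE POINT before_datapatch;

-- If you must roll back instead (MOUNT mode, discards all changes since
-- the restore point - hence the data-loss caveat above):
-- FLASHBACK DATABASE TO RESTORE POINT before_datapatch;
```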
@mayanksingh6663, 25 days ago
@@upgradenow Thanks, Daniel, for the detailed answer, but I would like to share a really challenging patching experience using dbaascli. Until now we were taking 7-8 hours of downtime for RU patching of an ExaCS system, and we ended up with a complex problem where Datapatch failed and MOS also found it hard to fix during the downtime. We were then blamed by MOS for not having a GRP to roll back Datapatch. Later a very senior MOS engineer joined, read the code, and fixed it internally to bring the PDB up. So I am still not sure if Datapatch can be run while users are connected. Please share the MOS note number to support this, so I can take this up with the team internally. Thanks a lot.
@fellipemorsolin6761, a month ago
Hello, can I use Data Guard to migrate Oracle 11g to 23c?
@upgradenow, a month ago
Hi, No, you can't. There is no direct upgrade from 11.2.0.4 to 23ai. You have to go through 19c first. Furthermore, Data Guard is not a vehicle for upgrades, unless you use a transient logical standby (search for DBMS_ROLLING). But if you have standby databases, you can upgrade them together with the primary database. Regards, Daniel
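The DBMS_ROLLING route Daniel mentions follows a fixed plan lifecycle. A minimal sketch, run on the primary, looks roughly like this (the db_unique_name 'boston' is an assumption, and real plans usually need parameter tuning between BUILD_PLAN and START_PLAN):

```sql
-- Transient logical standby rolling upgrade via DBMS_ROLLING (sketch):
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'boston');
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;   -- converts the standby to a transient logical standby

-- ... upgrade the (logical) standby to the new release here ...

EXEC DBMS_ROLLING.SWITCHOVER;   -- roles swap; users move to the upgraded side
EXEC DBMS_ROLLING.FINISH_PLAN;  -- rebuilds the old primary as a standby
```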
@sirshakchanda1922, a month ago
Thanks !!
@upgradenow, a month ago
You're welcome!
@facundote2, a month ago
Hi, is it possible to use this method when the environment is a 12c RAC primary and a 12c RAC physical standby? Thanks.
@upgradenow, a month ago
Hi, Yes, you can. It should work more or less like with a single-instance database. When you upgrade the transient logical standby on RAC, you must have only one instance running, but AutoUpgrade takes care of that for you. Regards, Daniel
@facundote2, a month ago
Hi, this video is very useful! Thanks.
@upgradenow, a month ago
Hi, I'm glad that you found it useful. Regards, Daniel
@saurabhnigam27, a month ago
It was showing 12c after the upgrade. Please check.
@upgradenow, a month ago
Hi, Good catch. The database is upgraded fine, however, it seems like we used the wrong environment settings in the demo (that's not ideal). We should have set the environment to the 19c Oracle home and used the binaries from there. Regards, Daniel
@christianfloreck8347, a month ago
Hi, This looks fine. However, on my system the data file of the tablespace is restored to +DATA, pointing to the correct position +DATA/<DBNAME>/143BA60E583F672AE0633DE6010A7B01/DATAFILE/xxxx.469.1167307693. Source is a 12c database, target is 19c (non-CDB). I set dest_datafile_location to +DATA. Both databases run on the same host, but I guess this should not cause problems. Thanks for any help.
@upgradenow, a month ago
Hi, There shouldn't be a problem migrating on the same host. If you don't like the destination of the data files after the migration, you can always move a data file later on with the ALTER DATABASE MOVE DATAFILE command. Regards, Daniel
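The online move Daniel refers to can be sketched as follows; the file paths are illustrative only. With OMF in the target disk group, the TO clause can be omitted and the database names the new file itself:

```sql
-- Move a data file online; the database keeps serving the tablespace
-- while the file is copied and switched:
ALTER DATABASE MOVE DATAFILE
  '+DATA/MYDB/DATAFILE/users.469.1167307693'
  TO '+DATA2/MYDB/DATAFILE/users01.dbf';
```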
@christianfloreck8347, a month ago
@@upgradenow Thanks, Daniel. Great to get a response so fast. Christian
@rajatsanwal9919, a month ago
Hi Mike, I have a small doubt. In FTEX (without RMAN incremental), the USERS tablespace is also part of the export dump. We know that the target database also has the same tablespace by default. To avoid a conflict, Oracle suggests renaming the tablespace or dropping the destination USERS tablespace. If we rename the tablespace, that of course will not rename the data file associated with it. So how can we copy the source data files to the destination when a file with the same name is already there? If I overwrite it, that will lead to corruption, as the database ID in the overwritten users01.dbf will differ from that of the destination database.
@upgradenow, a month ago
Hi, If you are using OMF, the data files will always have different names. If you are not using OMF, then you might need to rename the data files as well. However, the script restores the data files to a location different from where the data files usually are in the target database. I don't expect you'd run into this problem. Regards, Daniel
@nagaonnet, a month ago
Thanks for this video
@upgradenow, a month ago
Most welcome
@rajatsanwal9919, a month ago
@1:14:35 Why is it said to "never use SYS AS SYSDBA for Data Pump"? What is the exact reason behind this? What makes it different when we use a dedicated user for Data Pump?
@upgradenow, a month ago
Hi, Internally in the database, SYS behaves differently in many ways. Some features are disabled for SYS, and in other situations a SYS session will do things differently. Regards, Daniel
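A common alternative is a dedicated export account. This is a minimal sketch; the user name, password, and directory name are placeholders, and your security policy may require tighter grants:

```sql
-- Create a dedicated Data Pump user instead of exporting as SYS:
CREATE USER dpadmin IDENTIFIED BY "change_me_now";
GRANT CREATE SESSION TO dpadmin;
GRANT DATAPUMP_EXP_FULL_DATABASE TO dpadmin;
GRANT READ, WRITE ON DIRECTORY dump_dir TO dpadmin;

-- Then run the export as that user, e.g.:
--   expdp dpadmin full=y directory=dump_dir dumpfile=full%U.dmp
```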
@rajatsanwal9919, a month ago
Thanks, Daniel. I may be unintentionally asking too much of you. But to understand this, can you provide one example of what is disabled for the SYS user when performing an export with it?
@christianfloreck8347, 2 months ago
I tried this out. However, I wanted to transport only the tablespaces listed in dbmig_ts_list.txt: a1,a2. When I launch dbmig_driver_m5.sh with the L1F option, I run into an ORA-29335 error for all other tablespaces that are not made read-only. You spoke about "whole database" - but is there a way to do the job on single tablespaces? Thanks
@upgradenow, a month ago
Hi, The procedure was developed originally for full transportable export/import, where you take the entire database including all tablespaces. Having said that, the RMAN portion should work fine with just a subset of the tablespaces, but it might require that you edit the scripts a little. Or you can extract the relevant commands from the generated scripts and run them yourself. For the Data Pump part, you need to move from a full transportable to a regular transportable job with just the tablespaces you'd like. Then you also need to create users and all other metadata. I hope that helps, Daniel
@luap12000, 2 months ago
What is the impact of "forgetting" to run Datapatch on a given database after the RU apply on the DB home?
@upgradenow, 2 months ago
Hi, The impact is... unknown. We don't test that situation or take it into account. But I've seen other situations where it led to unexpected, intermittent errors. Could also be ORA-00600. Regards, Daniel
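To avoid this situation, the SQL-level patch state can be checked inside the database and compared against the binaries in the home. A minimal check, under the assumption that you have access to the dictionary view:

```sql
-- What has Datapatch actually applied in this database?
SELECT patch_id, action, status, action_time
  FROM dba_registry_sqlpatch
 ORDER BY action_time;
```

Any patch present in the Oracle home's inventory but missing here (or not in SUCCESS status) suggests Datapatch still needs to run, e.g. via $ORACLE_HOME/OPatch/datapatch -verbose.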
@sushmag6427, 2 months ago
Hi, we are planning to migrate a 64 TB database which has large tables of a few TBs each. Can you suggest a good migration method to move to AWS RDS?
@upgradenow, 2 months ago
Hi, I have no knowledge about the ins and outs of Amazon RDS, so I can't provide good advice. Sorry. Regards, Daniel
@fmsilva11, 2 months ago
Very confusing video. I did not learn anything from it. Does this only work with ExaCC? Does this only work with ZDLRA? How much time does it take to migrate 1,500 databases? Do all databases have the same character set and time zone?
@upgradenow, 2 months ago
Hi, This is a short extract of a longer webinar. Perhaps you get a better picture if you watch it all. The procedure works for any Linux platform including our engineered systems and it is optional to use ZDLRA. In the complete webinar we touch upon the benefits of using ZDLRA. The source and target databases must have the same character set and time zone, but whether the individual databases also had that, I don't know. But I don't see how this could be relevant for the project. If you want an up-to-date reference story about cross-platform migrations, please check out our webinar "Cross Platform Migration - Transportable Tablespaces to the Extreme" on dohdatabase.com/webinars. Regards, Daniel
@MukeshSahu-mn2ps, 2 months ago
16 hrs is still huge. I have a similar requirement where we need to migrate from AIX to Linux. The database is running 19c on a file system, the DB size is 45 TB, and the plan is to move to Linux on ASM. The customer can afford only 1-2 hrs of downtime. What technologies would you recommend?
@upgradenow, 2 months ago
Hi, This reference project is a few years old and uses an older technique. We have a new, improved method for cross-platform migrations that we explain in our webinar "Cross Platform Migration - Transportable Tablespaces to the Extreme". You find the video and slides here: dohdatabase.com/webinars/ For a cross-platform migration of such a large database, I think only Oracle GoldenGate is a viable solution. Transportable tablespaces as shown above might be possible in Oracle Database 21c and later with parallel transportable jobs in Data Pump, but only for very simple databases (they might be big, but the dictionary must not be complex). Regards, Daniel
@cesar_dba, 2 months ago
👏
@upgradenow, 2 months ago
Thanks :)
@manzambibissengomatila7383, 2 months ago
Hi, Could you please be so kind as to tell us the maximum number of comma-separated tablespace names in the dbmig_ts_list.txt file (M5)? Thanks in advance
@upgradenow, 2 months ago
Hi, There shouldn't be a maximum - the script should be able to handle whatever the database can handle. We recently launched a new version of the M5 script which handles cases with thousands of tablespaces better. In another case I worked on, we had almost 10,000 data files. If you run into trouble, you must have really many, many tablespaces. In that case, create a service request. Regards, Daniel
@manzambibissengomatila7383, 2 months ago
@@upgradenow We have 3,329 tablespaces, 5,714 data files, and a DB size of 390 TB. Is it possible to go with the recent M5 script, please?
@christianfloreck8347, 2 months ago
Great tutorial. Two questions, however: Does this work if the databases have the same endianness, so no conversion is needed? Must the source and target database have the same name? I guess so, as the database files have this info in the header. Thanks a lot, Christian
@upgradenow, 2 months ago
Hi Christian, Thanks for the positive feedback. Much appreciated. Yes, you can use this method even if the source and target databases are on the same platform. RMAN investigates the data files and determines automatically whether a conversion is needed when it restores them. No, you can transport the tablespaces into a database of any name. It does not matter. In the target database, Data Pump plugs in the new tablespaces and makes the necessary adjustments. I hope that helps, Daniel
@christianfloreck8347, 2 months ago
Hi Daniel, yes, that really helped. I was a little unsure about the DB names.
@kotagiriSrikanth4, 3 months ago
1st to view this gem of a video
@upgradenow, 2 months ago
Thanks
@jeffh2125, 3 months ago
I'm glad you all posted this. We're testing with an EBS database even though M5 is not yet certified with E-Business Suite cross-platform. V4 is so convoluted; we'd love to use M5 and hope it will be certified for EBS soon. Have you all tried it on a sizeable EBS database, and can you report any successes, issues, or workarounds? Thanks much for the video. Really appreciate it.
@upgradenow, 3 months ago
Hi Jeff, I have actually tried it on an EBS database, and it worked without problems. However, that's just the database part. I haven't tried any of the other EBS steps that you must do. Technically, I don't see why it shouldn't work. However, I must recommend that you follow the official procedures from Oracle and the EBS team. We have reached out to the EBS team for guidance on M5 and EBS, but so far nothing concrete. You can send me an e-mail at daniel.overby.hansen@oracle.com. I can update you when we hear from the EBS team. Regards, Daniel
@jeffh2125, 3 months ago
@@upgradenow Hi Daniel, thanks for the quick reply. I'll send to your email address. I'm pretty sure this method will be the "go-to" for cross-platform, as this is sorely needed in the Oracle bag of tools. Hopefully it will be certified with EBS soon, as their support for V4 is not very strongly worded anyway. They say to be sure to keep a standard export for backup, etc. We are planning on installing the middle-tier software from scratch, of course, as you can't migrate the homes. But we'll handle the cloning parts as usual. Thanks again, and I'll send you a note, as we have some questions. Great video. 🙂
@christianfloreck8347, 4 months ago
Great video. Can I handle "ALTER SYSTEM FLUSH REDO" etc. from DGMGRL? I find it more straightforward to do everything from one interface (as we use DGMGRL for switchover, failover, etc.)
@upgradenow, 3 months ago
Hi, That's a good question. I'm not a Data Guard expert, so perhaps I'm wrong, but judging from the documentation it appears you can flush redo from DGMGRL. Regards, Daniel
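For reference, the SQL*Plus form of the operation under discussion looks like this; the target db_unique_name is a placeholder, and whether your DGMGRL version exposes an equivalent command should be checked against the Broker documentation for your release:

```sql
-- Flush unsent redo from a (former) primary to a standby;
-- typically issued during failover preparation:
ALTER SYSTEM FLUSH REDO TO 'standby_db';
```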
@busingerogers1086, 4 months ago
At my workplace we are migrating from SuperCluster M8 to Exadata M9. We are also migrating tablespaces from 18c to 19c. At the point of importing metadata into 19c, we get an error about the version. Could you kindly help? We used the transportable tablespace framework.
@upgradenow, 4 months ago
Hi, I'm afraid I need a little more detail to answer your question. I suggest that you read my blog post series on XTTS. It has a lot of information, a step-by-step guide, and links to demos: dohdatabase.com/xtts Further, we are hosting a 2-hour webinar on such migrations. Specifically, we have a similar customer case which could be of interest to you. We are using a newer method, but much of the information applies to a migration from 18c to 19c as well: dohdatabase.com/webinars/ Regards, Daniel
@rajatsanwal9919, 4 months ago
Are two Oracle homes of the same release possible on one server? @44:42
@upgradenow, 4 months ago
Hi, Thanks for your question. Yes, you can have two Oracle homes of the same release on the same server. You can have as many as you like. You can even have two Oracle homes on the exact same patch level. Regards, Daniel
@rajatsanwal9919, 4 months ago
Thanks Daniel.
@user-zt1ux2wv7d, 4 months ago
How to convert a CDB database to a non-CDB database using SQL commands?
@upgradenow, 4 months ago
Hi, You can find the information here: docs.oracle.com/en/database/oracle/oracle-database/19/multi/plugging-in-a-pdb.html#GUID-EE58804F-A999-4F7C-93DE-D641B3C1047B You might also find this useful: docs.oracle.com/en/database/oracle/oracle-database/19/multi/introduction-to-the-multitenant-architecture.html#GUID-D67CAB03-AB86-4761-BCF3-F91CA648B315 We even have a webinar with much more detail: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7PNTJqbX5Ew.html Regards, Daniel
@rajatsanwal9919, 4 months ago
How does AutoUpgrade know whether the source non-CDB needs to be plugged into a CDB or not? What if we want to keep it non-CDB in 19c too?
@upgradenow, 4 months ago
Hi, AutoUpgrade automatically determines the state of the source database. Whether or not an upgrade is needed is determined by examining the specified target Oracle home (target_home). If it is on a newer version, then AutoUpgrade knows that an upgrade is needed. Whether you also want to plug your non-CDB into a CDB (and convert it) is determined by the parameter "target_cdb". By specifying that, you are saying that you want to convert to the CDB architecture. Regards, Daniel
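Daniel's explanation maps directly onto an AutoUpgrade config file. A hypothetical example (all paths, SIDs, and names are placeholders); omitting the last line would upgrade the database but keep it a non-CDB:

```
# Hypothetical AutoUpgrade config: upgrade non-CDB DB12 and plug it into CDB1
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.sid=DB12
upg1.target_cdb=CDB1
```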
@rajatsanwal9919, 4 months ago
Thanks Daniel, for this wonderful explanation.
@rajatsanwal9919, 4 months ago
I had one doubt. Let's say there is a component called 'A' visible in the registry in an INVALID state. I uninstall it and then start the upgrade. Will that component be recreated if the new release supports the same component?
@upgradenow, 4 months ago
Hi, That's a good question. The upgrade process does not install any new components. If you previously uninstalled a component, then during upgrade that component is completely skipped. Regards, Daniel
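The component inventory the question refers to can be inspected before and after an upgrade with a simple dictionary query:

```sql
-- List installed components and their status (VALID, INVALID, REMOVED, ...):
SELECT comp_id, comp_name, version, status
  FROM dba_registry
 ORDER BY comp_id;
```

Running this before the upgrade and again afterwards confirms Daniel's point: a previously removed component does not reappear.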
@rajatsanwal9919, 4 months ago
I recently landed on this YouTube channel. It is far more engaging and knowledgeable than the others. Kudos to you, experts!!
@upgradenow, 4 months ago
Hi, Thanks for the positive feedback. Much appreciated. If there is any video that you would like us to do, please leave a note in our YouTube community: www.youtube.com/@upgradenow/community. Regards, Daniel
@rnesouza, 4 months ago
Nice video. Do you have a video with an example of creating an 11g Data Guard standby and then upgrading on another server?
@upgradenow, 4 months ago
Hi, Thanks for the positive feedback. Much appreciated. I don't think that we have such a demo, sorry. Regards, Daniel
@fjfranken, 4 months ago
Hi Daniel, Thank you for this clear video explanation. We are in the midst of switching from in-place patching to out-of-place patching and automating as much as possible with AutoUpgrade. I was wondering if the shown method also works in this case, especially when we also upgrade the time zone using the "timezone_upg=yes" option in the configuration file. With a time zone upgrade of a CDB, extra restarts of the database are needed. How are these restarts reflected on the standby? Thank you
@upgradenow, 4 months ago
Hi Frank-Jan, If you upgrade the time zone file on a primary database, there is no need to perform anything on the standby database. The time zone file upgrade happens entirely inside the database, and the changes are recorded in the redo. Once the redo applies on the standby, it is implicitly upgraded. The only restart of the primary database is to restart in upgrade mode. The time zone file upgrade happens with the database in upgrade mode. However, when the redo is applied on the standby database, there is no such requirement. The redo is just applied. Regards, Daniel
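The effect Daniel describes can be verified on both sides: once redo has applied, primary and standby report the same time zone file version. A quick check, assuming query access on each database:

```sql
-- Active time zone file; compare the result on primary and standby:
SELECT filename, version FROM v$timezone_file;
```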
@marikupatadze1947, 4 months ago
👏👏
@upgradenow, 4 months ago
We're glad you like it. Regards, Daniel
@vedantpadwalkar-us8xr, 4 months ago
Great tool.
@upgradenow, 4 months ago
Thanks, we're glad you like it. Let us know if there's anything we can do to make the tool even greater. Regards, Daniel
@vedantpadwalkar-us8xr, 4 months ago
@@upgradenow Yes, I liked it and have personally used it. Many aspects related to database upgrade versions are covered (parameters, deprecated ones, roles, privileges, etc.). I came to know about it when Mike Dietrich mentioned it in one of his seminars. A big thanks to the team and the people involved in making this tool. And thank you for sharing this.
@williamslaverde8132, 4 months ago
Nice. This path applies when the source and target platforms are the same, e.g., Linux to Linux. Btw, duplicating for standby to a higher version is not possible; backup/restore is handy in that case.
@upgradenow, 4 months ago
Hi, You're right. The platform must be the same. I think we talk about this in one of the other videos. Regards, Daniel
@vedantpadwalkar-us8xr, 5 months ago
Migration can be achieved by configuring a standby database and performing a switchover to the new system.
@upgradenow, 4 months ago
Hi, Yes, you are right. Data Guard is a great tool for migrations, however, redo transport requires SQL*Net connection between source and target. In some situations, that's not allowed and here incremental backups can be very useful. But if Data Guard is an option, I take Data Guard any day of the week. Regards, Daniel
@vedantpadwalkar-us8xr, 4 months ago
@@upgradenow I agree with you.
@SukaFresh, 5 months ago
What is the estimated "Down time" when switching over to PROD2? It appears as though there will still be some type of "outage" when making the switch over that will affect the users. If we switch to Oracle RAC, will this resolve the "down time" entirely?
@upgradenow, 5 months ago
Hi, Estimated downtime is .... it depends! :) You will perform a Data Guard switchover and there are numerous things that influence it. Typically, we see switchover completing in less than a minute or up to a few minutes. If you use RAC you can completely hide the outage by performing a rolling patch apply. If you have RAC, you probably don't want to switchover to the standby. You can just do the standby database first, and then a rolling patch apply on the primary. That yields no downtime. Regards, Daniel
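The switchover Daniel describes is typically driven from the Broker. A hedged sketch, where 'prod2' is the placeholder db_unique_name of the standby; validating first surfaces most problems before any role change happens:

```
DGMGRL> VALIDATE DATABASE 'prod2';   -- check readiness for the role change
DGMGRL> SWITCHOVER TO 'prod2';       -- prod2 becomes the new primary
```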
@SukaFresh, 5 months ago
@@upgradenow Okay that makes a little more sense. Appreciate the assistance.
@MrEnrique2k, 5 months ago
Can we have a practical demo of these scenarios?
@upgradenow, 5 months ago
Hi, We don't have a demo of a migration using this approach, unfortunately. However, you should be able to use the commands from this video to pull it off. You can watch the full section here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zharMnB02Uw.html Regards, Daniel
@rajanikanthkalipatnapu4040, 5 months ago
In the execution analysis task, execution type => 'TEST EXECUTE' - should we run this? Will it run all the DELETE, UPDATE, or INSERT queries in the migrated database?
@upgradenow, 5 months ago
Hi, The test execution does not change your data. It only tests.
@Actions_Only, 6 months ago
Thank you so much❤
@upgradenow, 6 months ago
You're welcome 😊
@rafavmmoracle, 6 months ago
Does AutoUpgrade set cluster_database=false for RAC environments? Or do I need to create a .sql file to do that and include the script in the .cfg?
@upgradenow, 6 months ago
Hi, AutoUpgrade does it all for you - including cluster_database. Regards, Daniel
@Princesinsha, 7 months ago
After setting ociid, wname, and mpwd, which command needs to be executed?
@upgradenow, 6 months ago
Hi, All the commands should be in the recorded demo. Since we made the demo, it has become even easier to connect SQLcl to ADB. Check the documentation for an up-to-date description: docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/connect-tools.html#GUID-CF6C7E1B-D0D4-4641-BADA-5C57DEA7C73B. This might help as well: docs.oracle.com/en-us/iaas/autonomous-database/doc/connect-oracle-sqlcl.html. Regards, Daniel
@Gys-bm5bz, 7 months ago
Thanks. Straightforward steps and clear to proceed. Thanks again.
@upgradenow, 7 months ago
You're welcome!
@Best_Stocks, 8 months ago
In a Windows Oracle 19c database, I moved a file to another location successfully, but the data file was not removed physically from the original location automatically. So do we need to remove it physically?
@upgradenow, 8 months ago
Hi, On Windows, you might see data files being locked by the operating system even though the files have been moved. This is an issue specific to Windows and has to do with the way file locking is implemented in the OS. The database only uses the new file, but often we see that the old file is locked by the database process until the database is restarted. There's nothing Oracle can do about that. It's just how it is on Windows. If this is a big issue on Windows, my only recommendation is to use ASM instead. Regards, Daniel
@Nikhil-ds9xh, 8 months ago
How about batch processing - will this also replay, or will it fail?
@upgradenow, 8 months ago
It depends on your client and the exact nature of your workload. Most workloads should be able to replay without problems, but you would need to analyze your specific application. Regards, Daniel
@emadal-mousa8723, 9 months ago
Great post and work as usual....Thanks Daniel
@upgradenow, 9 months ago
Hi, Glad you enjoyed it. Did you try ORAdiff yet? What's your impression? Regards, Daniel
@emadal-mousa8723, 9 months ago
informative so far 👍@@upgradenow