@@dbagenesis thanks a ton Arun bhaiya. Do make a video on patching in depth, I mean: patching, its types, and how patching on RAC is different from patching on Data Guard?
Sorry Arun, but I don't agree with the suggestion to insert 5M records (video time 6:50) and then do a single commit. We should commit more frequently, i.e. after every 10k records.
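A minimal PL/SQL sketch of the batched-commit approach this comment proposes (table and column names are hypothetical):

```sql
-- Hypothetical tables: copy rows in batches of 10k, committing after each batch
DECLARE
  v_batch CONSTANT PLS_INTEGER := 10000;
  v_rows  PLS_INTEGER;
BEGIN
  LOOP
    INSERT INTO target_table (id, payload)
    SELECT s.id, s.payload
      FROM source_table s
     WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id)
       AND ROWNUM <= v_batch;
    v_rows := SQL%ROWCOUNT;       -- capture before COMMIT resets the attribute
    COMMIT;                       -- commit after every 10k rows
    EXIT WHEN v_rows = 0;         -- nothing left to copy
  END LOOP;
END;
/
```

Note that whether frequent commits help or hurt is workload-dependent; this only illustrates the pattern being suggested.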
I installed VMware on my laptop, which has Windows 7 as the host OS. I created a VMware machine and installed Oracle Linux into it. The virtual hard disk grew to 40 GB as I had created many databases, but then I deleted all the databases. The virtual hard disk is still 40 GB even though it has no data in it. Can you tell me how to reclaim the space on the virtual hard disk?
These are prerequisite requirements set by Oracle. Read the Oracle documentation and search Google for "Oracle installation pre-requisites"; you will find an explanation of each and every parameter.
Let's look at what Oracle has to say about it: the addition of temporary files to TEMP tablespaces on the primary site is not handled automatically through the normal redo apply mechanisms the way regular datafiles are when the parameter STANDBY_FILE_MANAGEMENT is set to AUTO. The DBA must manually synchronize the primary and standby tempfile configurations if both sites are required to be the same.
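For illustration, manually adding a tempfile on the standby might look like this (path and sizes are made up):

```sql
-- Run on the standby: tempfiles are not created by redo apply, so add them by hand
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/app/oracle/oradata/STBY/temp02.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;

-- Run on both sites to compare the tempfile layout
SELECT name, bytes/1024/1024 AS size_mb FROM v$tempfile;
```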
Q2. As you said, we should keep the old binaries for at least 30 days after an upgrade. Should we keep the COMPATIBLE parameter at the old value for 30 days too, so that we can downgrade the DB if required?
INDEX REBUILD ONLINE is slow compared to INDEX REBUILD without the ONLINE clause. This is because with INDEX REBUILD ONLINE, Oracle waits for any existing DML transactions that hold locks and proceeds with the index rebuild once the locks are released, while continuing to allow DML during the rebuild. When you run a plain ALTER INDEX REBUILD command, it acquires an exclusive lock immediately and blocks all DML during the rebuild process. Tip: you can also use the NOLOGGING clause with INDEX REBUILD to avoid high archive log generation!
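As a sketch (the index name is hypothetical), the variants described above look like this:

```sql
-- Offline rebuild: takes an exclusive lock immediately, blocks DML until done
ALTER INDEX sales_idx REBUILD;

-- Online rebuild: waits for in-flight DML locks, then rebuilds while allowing DML
ALTER INDEX sales_idx REBUILD ONLINE;

-- Online rebuild with minimal redo; take a fresh backup afterwards, since
-- NOLOGGING changes cannot be recovered from archive logs
ALTER INDEX sales_idx REBUILD ONLINE NOLOGGING;
```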
I have one doubt. There are 2 DBs on a single server. The 1st DB is registered with the default listener and the 2nd DB is registered with a local listener. DB no. 1 is down; DB no. 2 is up and running. Even so, connections keep trying to reach DB no. 1. Same host but different port numbers. Kindly give your valuable suggestions.
Hi Arun, thank you so much for the detailed explanation; your videos are very helpful. I have one doubt: in my procedure I am inserting data into a temporary table using a SELECT statement. How can I improve the performance of this SELECT statement? It uses an aggregate function, and this insert statement alone takes 30 to 45% of the procedure's execution time.
Look at the SELECT statement carefully and observe which index it is using, or whether it is going for a full table scan. If possible, post the insert statement here. If you are not comfortable posting it here, send it to support@dbagenesis.com
Hi Arun, thanks for your explanation. I have an issue and need your suggestion. My database is 12c and I have tables with partitions. My issue: when I add new partitions to the tables, the indexes become invalid and take a long time to rebuild, because they are global indexes, not local indexes. Could you please explain the difference between global and local indexes, and what will happen if I replace the global indexes with local ones? Regards
Of course it will take time. A global index is like one index spanning multiple table partitions. A local index has a one-to-one index partition for each table partition. In most scenarios, a local index will be faster than a global index, especially for partition maintenance.
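A small sketch of the difference (table and index names are made up):

```sql
-- Range-partitioned table
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
);

-- LOCAL: one index partition per table partition; partition maintenance on the
-- table touches only the matching index partition
CREATE INDEX sales_date_li ON sales (sale_date) LOCAL;

-- GLOBAL: one index structure spanning all partitions; operations such as DROP,
-- TRUNCATE or SPLIT PARTITION can mark it UNUSABLE unless maintained inline
CREATE INDEX sales_id_gi ON sales (sale_id);

-- Keep global indexes usable during partition maintenance (at extra cost)
ALTER TABLE sales DROP PARTITION p2023 UPDATE GLOBAL INDEXES;
```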
I have a query regarding the tablespace high water mark. I have a tablespace named AUDIT_DATA in my UAT environment and all my audit data is stored in this tablespace. After a month, the size of this tablespace is approx. 25G, and I am supposed to truncate the AUD$ table after taking a backup. But the physical space was not released after truncating the table; the data file still shows 25G. What can I do to release the space as well? Please help me, I have to do this job in the live database.
It depends on your RMAN backup retention policy, or the recovery window you decided on. If a backup falls beyond the retention period, it becomes obsolete, which means it is no longer needed.
@25:50-25:53 You mean the SID on the Primary DB and Standby DB must be the same? We used to set different SIDs on the Primary and Standby DBs, and it is a headache to change the data source on the application in case there is a switchover.
Hahaha.. I have been there. If that's the case, create a service (with TAF enabled) on your primary and allow clients to connect to the DG environment using the TAF service. The service runs only on the active primary, so you do not have to deal with different SIDs. When you perform a switchover / failover, the clients will automatically reconnect, because the service will come up on the new primary. Check this article on how to set up TAF in Data Guard: support.dbagenesis.com/knowledge-base/client-connectivity-in-data-guard-configuration/
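The linked article covers the full setup, but a rough sketch of creating such a service on the primary could look like this (the service name 'app_taf' is made up):

```sql
-- Create a TAF-enabled service; clients connect to this name instead of the SID
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name     => 'app_taf',
    network_name     => 'app_taf',
    failover_method  => 'BASIC',
    failover_type    => 'SELECT',
    failover_retries => 180,
    failover_delay   => 1);
END;
/

-- Common pattern: start the service only when this database starts as primary,
-- so it follows the primary role across switchover / failover
CREATE OR REPLACE TRIGGER trg_start_app_taf
AFTER STARTUP ON DATABASE
DECLARE
  v_role VARCHAR2(30);
BEGIN
  SELECT database_role INTO v_role FROM v$database;
  IF v_role = 'PRIMARY' THEN
    DBMS_SERVICE.START_SERVICE('app_taf');
  END IF;
END;
/
```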
You can use the below command to just change the SCAN name: srvctl modify scan -n racnodepdb-orascan NOTE: There are many more steps involved in order to move / modify the SCAN when there is a change in the subnet / network / SCAN IPs etc.
Thanks for your video. Here is my question: if I want to learn more about database administration, which one should I take up? Right now two options are revolving in my mind: "Oracle Big Data Cloud" or "SQL Server". Please suggest. Thanks in advance.
I am not sure about your background, so I cannot give you a good suggestion on this, but if you ask me, a good career for the next 5 to 7 years would be Oracle GoldenGate! Database replication is a business requirement whether your DB is on physical servers or in the cloud.
You can use Oracle Data Pump to export and import tables that have encrypted columns. When you do, the ENCRYPTION parameter enables encryption of data in dump file sets. The ENCRYPTION parameter allows the following values:
ENCRYPTED_COLUMNS_ONLY: writes only encrypted columns to the dump file set in encrypted format
DATA_ONLY: writes all of the data to the dump file set in encrypted format
METADATA_ONLY: writes all of the metadata to the dump file set in encrypted format
ALL: writes all of the data and metadata to the dump file set in encrypted format
NONE: does not use encryption for dump file sets
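For example, a hypothetical export parameter file using ENCRYPTION=ALL might look like this (schema, file names and password are made up; an encryption password or an open keystore is required):

```
# export.par -- illustrative Data Pump parameter file
directory=DATA_PUMP_DIR
dumpfile=hr_enc.dmp
logfile=hr_enc.log
tables=hr.employees
encryption=ALL
encryption_password=MySecret123
```

It would then be invoked as: expdp system parfile=export.par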
Hey Arun, suppose we have a 4GB SGA and we want to insert 50GB of data into the database with simple insert statements. How would the insert be processed? I want to understand the processing cycle.
Netca (Net Configuration Assistant) is a wizard-style utility used to configure Oracle Net files such as listener.ora and tnsnames.ora. Netmgr (Oracle Net Manager) is a GUI used to configure and fine-tune Oracle Net components such as naming methods, listeners, and profiles, and it can help when troubleshooting connectivity issues.
First get OCA certified and then OCP. Once you complete both trainings, you get a great deal of hands-on experience. Having working experience is very important! Later on you can choose to learn Oracle GoldenGate, which will give you career stability for the next 5 to 7 years!
@@dbagenesis How do I determine how much memory (SGA and PGA) is needed for my database instance in OLTP and DW environments? And how do I prevent memory leaks at both the database instance and server level?
Yup, if you are like me and hate programming, then yes, you should go for an Oracle DBA career. But if you think you just need to learn a few concepts and settle into a DBA career, it won't last long. You must constantly upgrade your skills and keep up with market changes.
@@dbagenesis I want to pursue a career in Oracle but, as a fresher, I am confused about which version of Oracle I should learn first: 11g? Or should I go for the Oracle 12c certification directly?
@@arpitseth96 First master any one version and all other versions will be a cakewalk. It's like Windows: if one learns any one version, all other versions become a cakewalk. You do not go for Windows training every time Windows releases a new version, right!! Go for 12c for now and master it; forget about 11g, 18c and 19c for the moment... !!! Become a master of any one version first ;)
@@dbagenesis Thanks Arun :) Is v$sga_dynamic_components OK to use? If the sum of the current sizes of the different SGA components equals SGA_MAX_SIZE and the server is swapping, does this mean the RAM has to be increased?
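For reference, the comparison described in this question can be sketched as:

```sql
-- Sum of the current SGA component sizes (MB)
SELECT ROUND(SUM(current_size)/1024/1024) AS sga_components_mb
  FROM v$sga_dynamic_components;

-- Compare against SGA_MAX_SIZE / SGA_TARGET
SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$parameter
 WHERE name IN ('sga_max_size', 'sga_target');
```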