Welcome to a channel created exclusively for tech enthusiasts who have a passion for coding in Python, R, Machine Learning, Django, big data tools, the Hadoop ecosystem, ReactJS, Docker, Java, Android, Flutter, React Native, block-based coding (like Scratch, mBlock, Thumbnel, etc.), and various technical courses. Join us for an immersive journey into the world of technology, where you'll find exciting content and gain valuable insights. Subscribe now and elevate your coding skills to new heights! 💻🌟
Follow me on: 1) LinkedIn: www.linkedin.com/in/kundankumar011
Thank you so much for loving my video. Yes, I am going to add it to my to-do list; once it's ready, you will get a notification, dear. Please encourage me by subscribing to my channel and pressing the bell icon 🔔
@@kundankumar011 I have already subscribed to your channel and recommended that my friends do the same. I installed Hadoop and Hive watching your videos.
Before executing hdfs namenode -format, you must execute the command rmdir /S "C:\hadoop\data\datanode"; otherwise you will get the error: Incompatible clusterIDs in C:\hadoop\data\datanode: namenode clusterID = CID-9ffe06b7-e903-46f6-a3fe-508ccc12e155; datanode clusterID = CID-d349
Thank you for watching my video. Apologies for the delayed response. I hope you were able to resolve the error; if not, here's a quick guide to help you. The "Incompatible clusterIDs" error occurs when there is a mismatch between the NameNode's and the DataNode's cluster IDs. This typically happens if Hadoop is reinstalled or the namenode -format command is run again without resetting the DataNode. To resolve it, delete the contents of the DataNode directory before formatting the NameNode, as sketched below.
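For example, assuming the data directory from the video is C:\hadoop\data\datanode (adjust the path to match your hdfs-site.xml), the sequence would look like this:

rmdir /S /Q "C:\hadoop\data\datanode"
hdfs namenode -format
start-all.cmd

The /Q flag just skips the confirmation prompt. Be aware that this wipes any HDFS data stored on that DataNode.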
Sir, there is an error showing:
Usage Error: Unrecognized or legacy configuration settings found: dir - run "yarn config -v" to see the list of settings supported in Yarn
(in <environment>)
$ yarn run [--inspect] [--inspect-brk] [-T,--top-level] [-B,--binaries-only] [--require #0] <scriptName> ...
C:\Windows\System32>
Apologies for the late reply, dear. If localhost:8088 (the YARN ResourceManager web interface) is not working, it typically indicates that YARN is either not running correctly or there are configuration issues preventing access. Cross-check the YARN configuration file yarn-site.xml, and also check your firewall settings to make sure the port is not blocked (see the quick check below).
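As a quick check, you can see whether anything is actually listening on port 8088; these are standard Windows commands, nothing specific to my setup:

netstat -ano | findstr :8088

If nothing is listed, the ResourceManager is not running (confirm with jps); if a line shows LISTENING, the problem is more likely on the browser or firewall side.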
Sorry to know that you are facing a challenge and that localhost is not working. This could be due to various reasons, dear. 1) Check with the jps command whether the Hadoop daemons (NameNode, ResourceManager, etc.) are still running while you are accessing localhost (see the example below). 2) Cross-check that the Hadoop configuration files are set properly. 3) Check the firewall settings; sometimes the firewall blocks ports like 8080, 9000, 50070, etc. If so, change the settings to allow them. Please do subscribe to my channel to encourage me, dear, and so you don't miss upcoming relevant videos. Thank you ❤️
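For point 1, on a healthy single-node setup jps should list all four daemons, something like this (the process IDs here are just placeholders):

C:\>jps
11200 NameNode
11324 DataNode
11456 ResourceManager
11588 NodeManager
11700 Jps

If any of these is missing, check that daemon's log file under %HADOOP_HOME%\logs.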
@@samiaahmed-k7n Thank you so much for watching this video 💗. Sorry to know that you are facing the "SDK not found" error. To fix it, make sure that you have set the correct environment variable for the SDK path. Please rewind the video to see how to set the SDK path in the environment variables, dear. Please don't forget to encourage me by subscribing to my channel so that you don't miss notifications on upcoming relevant videos.
When I execute the command "start-all.cmd" and type "jps", at the beginning all the daemons are there (NodeManager, NameNode, ResourceManager, DataNode). After a moment I retype "jps" and see only two daemons (ResourceManager and DataNode). Is this normal? Thank you
@@amazighkab9904 Sometimes the NameNode and NodeManager may stop automatically due to a lack of disk space or system resources, but it should not happen every time, dear. If it keeps happening, check the log files under %HADOOP_HOME%\logs to see why they are shutting down.
Thank you so much for the appreciation, dear 💗. It motivates me. I hope you have subscribed to my channel and pressed the bell 🔔 icon so that you don't miss notifications on upcoming relevant videos.
Sir, I am using Windows 11, but I get this error: "Missing hadoop installation: C:\hadoop\bin must be set" when I run C:\hive\bin>hive --service schematool -dbType derby -initSchema at this step: "11. To Initialize Hive Schema: a) Navigate to `C:\hive\bin` and run the below command: hive --service schematool -dbType derby -initSchema". It is different from the error in your video.
Sorry to know that you are facing issues. The error "Missing hadoop installation: C:\hadoop\bin must be set" means that Hive cannot find your Hadoop installation because the HADOOP_HOME environment variable is not set correctly. Please set HADOOP_HOME, and also cross-check that JAVA_HOME is set well (a quick check is below). Follow the steps in the video and then cross-check what you missed, dear. For the Hive services, the errors you encounter can vary depending on the specific configuration of your computer.
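You can verify both variables from a fresh Command Prompt; the JDK path below is just an example, yours may differ:

C:\>echo %HADOOP_HOME%
C:\hadoop
C:\>echo %JAVA_HOME%
C:\Program Files\Java\jdk1.8.0_301

If either prints the variable name back unexpanded, set it via System Properties > Environment Variables (or with setx HADOOP_HOME "C:\hadoop") and open a new terminal.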
@iranzilionel5090 Thank you for your comment, dear ❤. However, it seems unrelated to this video, as you are asking about another viewer. Larissa? Oh, she is the star student in the class of awesomeness. She is an obedient student who always asks the best questions and brings amazing energy to the group. Keep up the fantastic work, Larissa! 💪🎓
This video is well elaborated; however, just as I was about to finish installing Hive, I encountered the following error. How do I resolve it?
C:\Windows\System32>StartNetworkServer -h 0.0.0.0
'StartNetworkServer' is not recognized as an internal or external command, operable program or batch file.
Sorry to know that you encountered the error. It shows that there is an issue with your Path setting: StartNetworkServer is a script that ships with Apache Derby, so Windows can only find it if the Derby bin folder is on your Path. I advise you to cross-check the path and, if possible, watch the video again to see whether you made any mistake while setting it (see the example below).
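For example, assuming Derby is extracted to C:\derby (adjust this to wherever you placed it), you can run the script by its full path:

C:\>C:\derby\bin\StartNetworkServer -h 0.0.0.0

If the full path works but the bare command does not, the fix is simply to add C:\derby\bin to the Path environment variable and open a new terminal.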
I encountered this problem: 'jps' is not recognized as an internal or external command, operable program or batch file. Can you please provide the solution?
Thank you so much for watching my video. The error "'jps' is not recognized as an internal or external command, operable program or batch file" typically occurs because the Java Development Kit (JDK) tools (like jps) are not properly set up, or the JAVA_HOME environment variable is not configured correctly. Please follow these steps to fix it: 1) Check that the JDK is installed: open a command prompt and type java -version to check the installed version of Java. 2) Ensure the JDK bin directory is on the PATH environment variable (a quick check is below). Please do subscribe to my channel to encourage me, dear, and don't forget to press the bell icon to receive notifications on upcoming relevant videos.
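These two standard Windows checks tell you whether the JDK tools are reachable; the install path shown is just an example:

C:\>java -version
java version "1.8.0_301"
C:\>where jps
C:\Program Files\Java\jdk1.8.0_301\bin\jps.exe

If "where jps" finds nothing, add the JDK's bin folder (the one containing jps.exe) to PATH and open a new terminal. Note that jps ships with the JDK, not the JRE.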
Sir, all the steps were clear, but I'm getting this error:
C:\Users\srika>cd c:\hive\bin
c:\hive\bin>hive --service schematool -dbType derby -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hive/lib/log4j-slf4j-impl-2.18.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/pyspark/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.UnsupportedOperationException: 'posix:permissions' not supported as initial attribute
        at sun.nio.fs.WindowsSecurityDescriptor.fromAttribute(WindowsSecurityDescriptor.java:358)
        at sun.nio.fs.WindowsFileSystemProvider.createDirectory(WindowsFileSystemProvider.java:492)
        at java.nio.file.Files.createDirectory(Files.java:674)
        at java.nio.file.TempFileHelper.create(TempFileHelper.java:136)
        at java.nio.file.TempFileHelper.createTempDirectory(TempFileHelper.java:173)
        at java.nio.file.Files.createTempDirectory(Files.java:950)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:296)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:245)
PLEASE HELP!!!
Hi, thank you for the details. It is very helpful! However, I am getting an error: "Missing Hive Execution Jar: C:\hive\lib/hive-exec-*.jar". How do I resolve this? Thanks.
@@dishtisoni3937 Thank you for watching my video; happy to know the detail in the video was helpful. The error you're encountering suggests that Hive can't find the required hive-exec jar file. The steps to fix it are below: 1) Check the Hive installation: ensure that you have installed Hive correctly. The hive-exec-*.jar file should be in the C:\hive\lib directory (see the check below); if it's missing, you may need to reinstall Hive. 2) Cross-check the environment variables: make sure HIVE_HOME is set to C:\hive, and ensure that the CLASSPATH includes C:\hive\lib\*. 3) Configuration files: check your hive-site.xml and make sure all configurations are correct and point to valid paths. Let me know if you managed to fix it, dear. Please do subscribe to my channel, if you haven't yet, to encourage me and not miss notifications on upcoming relevant videos. Keep learning!
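A quick way to confirm point 1, assuming HIVE_HOME is C:\hive:

C:\>dir C:\hive\lib\hive-exec-*.jar

If the jar is listed (the exact version number depends on your Hive release), the problem is the environment variables rather than the installation itself.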
Exception in thread "main" java.lang.UnsupportedOperationException: 'posix:permissions' not supported as initial attribute
        at sun.nio.fs.WindowsSecurityDescriptor.fromAttribute(WindowsSecurityDescriptor.java:358)
        at sun.nio.fs.WindowsFileSystemProvider.createDirectory(WindowsFileSystemProvider.java:492)
        at java.nio.file.Files.createDirectory(Files.java:674)
        at java.nio.file.TempFileHelper.create(TempFileHelper.java:136)
        at java.nio.file.TempFileHelper.createTempDirectory(TempFileHelper.java:173)
        at java.nio.file.Files.createTempDirectory(Files.java:950)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:296)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:245)
How do I solve this? Someone please help.
@@mohamedmoawad9670 Glad to know that it was very helpful for you and saved you lots of time. Enjoy your learning, dear ❤️. I hope you do subscribe to my channel to encourage me and not miss notifications on upcoming relevant videos.
Thank you so much for watching my video, dear. I apologize for the delay in replying. To count the number of words and letters in a file stored on HDFS, you can pipe the file's contents to the wc (word count) command. On plain Windows, though, you will see this:
C:\Windows\System32>hdfs dfs -cat /user/KUMAR/inputfile.txt | wc -w
'wc' is not recognized as an internal or external command, operable program or batch file.
That is because wc is a Unix tool. Hadoop provides its own way to process files: instead of piping to wc, you can write a simple Hadoop MapReduce job to count words and letters. But if you want to use the wc command directly, you need a Unix-like environment such as Git Bash. After installing it, open Git Bash and run the same command (see the example below). Note: change the path of the file on Hadoop as per your system. Let me know if it works. Please do subscribe to my channel to encourage me and not miss notifications on upcoming relevant videos.
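In Git Bash the commands would look like this; the file path is the one from your comment, so adjust it to your own HDFS layout:

$ hdfs dfs -cat /user/KUMAR/inputfile.txt | wc -w    # word count
$ hdfs dfs -cat /user/KUMAR/inputfile.txt | wc -m    # character (letter) count

Note that wc -m counts every character, including spaces and newlines, so subtract those if you want strictly letters.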
Hi, I followed your video and am stuck at point 12, where we give the command "hive" to start Hive. The error is: java.io.EOFException: End of File Exception between local host is <> and destination host is <>. Any help here, please?
Thank you for watching my video, dear, and sorry for the late reply. It looks like you're encountering an EOF (End of File) exception when trying to start Hive, which can be due to several reasons related to the configuration or communication between Hive and Hadoop in your setup. 1. Check Java version compatibility: I am using Java 1.8 and Hadoop 3.2.4. 2. Check the Hive configuration files, especially hive-site.xml, to confirm they are configured as shown in the video. 3. Permissions and connectivity: make sure the user running Hive has sufficient permissions for Hadoop and HDFS, and that HDFS itself is reachable (see the quick check below). Please do subscribe to my channel, if you haven't yet, to encourage me and not miss notifications on upcoming relevant videos.
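Before starting Hive, it is worth confirming that the Hadoop daemons are up and that HDFS answers on the address configured in core-site.xml (fs.defaultFS). For example:

C:\>jps
C:\>hdfs dfs -ls /

If "hdfs dfs -ls /" also throws a connection or EOF error, the problem is on the Hadoop side rather than in Hive.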
This is an excellent tutorial for installing Hive, and the previous one (installing Hadoop) is very handy too. I need to make a connection with DBeaver; do you have any video? Can you share it, please?
@@FeelFenix Thank you for your appreciation of the steps for installing Hadoop and Hive❤️. Regarding your request for a video on connecting Apache Hive with DBeaver, I don’t have one yet, but I’ll try to find time to make it. I hope you’ve subscribed and clicked the bell icon on my channel to stay updated with my upcoming videos.
Thank you for watching my video! I’m sorry to hear that you're facing issues rendering HTML template files. There could be several reasons for this, but let me guide you through a few important steps that were demonstrated in the video: 1) Create a "templates" folder as guided in the video, and ensure it's placed correctly in your project directory. 2) Update settings.py to include the path to your templates directory; make sure the `TEMPLATES` setting has the correct `'DIRS'` path (a sketch is below). 3) Check your HTML file name to ensure it's not typed incorrectly when rendering. If everything seems fine and the issue persists, please share your error logs with me so I can assist you further. Please do subscribe to my channel, if you haven't yet, to encourage me and not miss any notifications of upcoming relevant videos.
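For point 2, the relevant part of settings.py usually looks like this; this is a minimal sketch assuming a project-level "templates" folder next to manage.py:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # point DIRS at the project-level templates folder;
        # BASE_DIR is already defined near the top of settings.py
        'DIRS': [BASE_DIR / 'templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            # keep the default context_processors generated by startproject
            'context_processors': [
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

In newer Django versions BASE_DIR is a Path object, so BASE_DIR / 'templates' works directly; on older versions use os.path.join(BASE_DIR, 'templates') instead. With 'APP_DIRS': True, Django also searches each app's own templates/ folder automatically.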
@@kundankumar011 I created the templates folder, path, urls, and settings as per your instructions in the video, but even then I got the error "templates doesn't exist". BTW, I subscribed to your channel. Do you have any social media platforms, such as a Messenger group or WhatsApp group, where I can get your support? Thank you. I am from Bangladesh 🇧🇩🇧🇩
At first, when I installed, everything was fine and all the environment variables were correct, but now I'm facing an issue with jps: after entering start-all.cmd, I'm getting only "Jps" and no NameNode, DataNode, ResourceManager, or NodeManager. Can you please help, sir?
Thank you so much for watching this video, and happy to know that you had successfully configured it at first, though unfortunately it stopped working. Since you say all the environment variables are set well, to fix this you must check the log files to see what is causing the error. Review the logs for any errors or issues with starting the daemons; on Windows they are typically located in %HADOOP_HOME%\logs (see the example below). Please do subscribe to my channel, if you have not yet, to encourage me and not miss notifications on upcoming relevant videos.
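For example (the exact log file names include your username and hostname, so the one shown here is just a placeholder):

C:\>cd %HADOOP_HOME%\logs
C:\>dir
C:\>type hadoop-<username>-namenode-<hostname>.log

Look at the last lines of the log for each daemon that fails to stay up; the shutdown reason is usually printed there as an ERROR or FATAL entry.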
Pleasure to know that you liked the way of explanation and the reference links. Please do subscribe to my channel to encourage me and not miss notifications on upcoming relevant videos.
Glad to know that this video was very informative for you as well. Please do subscribe to my channel to encourage me and not miss notifications on upcoming relevant videos.
I used:
CREATE EXTERNAL TABLE retail_data (
    record_no STRING,
    invoice_no STRING,
    stockcode STRING,
    description STRING,
    quantity INT,
    invoicedate STRING,
    price DOUBLE,
    customer_id STRING,
    country STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data_BDSAssignmet/';
where my CSV file is stored in the data_BDSAssignment dir in HDFS. The above command created the table, but it did not load any data. I am getting the below output when I run select *:
hive> select * from retail_data;
2024-09-08T21:38:14,830 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: 2f59663d-4a5c-4cde-9103-7e72ed85a913
2024-09-08T21:38:14,831 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Updating thread name to 2f59663d-4a5c-4cde-9103-7e72ed85a913 main
2024-09-08T21:38:15,464 INFO [2f59663d-4a5c-4cde-9103-7e72ed85a913 main] org.apache.hadoop.hive.common.FileUtils - Creating directory if it doesn't exist: hdfs://localhost:9000/tmp/hive/k.Prachetha/2f59663d-4a5c-4cde-9103-7e72ed85a913/hive_2024-09-08_21-38-14_855_3060643932666893220-1/-mr-10001/.hive-staging_hive_2024-09-08_21-38-14_855_3060643932666893220-1
OK
Time taken: 0.696 seconds
2024-09-08T21:38:15,591 INFO [2f59663d-4a5c-4cde-9103-7e72ed85a913 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: 2f59663d-4a5c-4cde-9103-7e72ed85a913
2024-09-08T21:38:15,591 INFO [2f59663d-4a5c-4cde-9103-7e72ed85a913 main] org.apache.hadoop.hive.ql.session.SessionState - Resetting thread name to main
Thank you so much for watching my video, dear. I will try to share it soon, or I can make a video and upload it, dear. Please do subscribe to my channel, if you have not yet, to encourage me and not miss notifications on upcoming relevant videos.
Thank you so much for watching my video, and sorry for the delay in replying. If you are still facing the issue: the error you are encountering indicates a mismatch between the ClusterID of the NameNode and the DataNode. One of the steps to fix it is to clear the DataNode data directory. If you're okay with clearing the DataNode's data and starting fresh, you can delete the DataNode storage directory, reformat the NameNode, and restart the daemons. This will cause the DataNode to re-register with the NameNode using the current ClusterID. Hope this will help you; otherwise let me know, dear.
@@sivashankar47 Glad to know that it is working for you as expected ❤️. Keep learning and enjoy. Please do subscribe to my channel to encourage me and not miss any notifications on upcoming relevant videos.
@@only_voice_of_tamizhi Thank you so much for watching this video. I have used JDK 8 and Apache Hadoop 3.2.4 for this Hive setup. Thank you so much for subscribing to my channel ❤️
@@MohammedSalman-nb2pu I am glad to know that this video was very helpful for you and that you got your answers here ❤️. Please do subscribe to my channel to encourage me, dear, and to get notifications for upcoming relevant videos.
Sir, I ran into an issue at this step:
wget -r -np -nH --cut-dirs=3 -R index.html svn.apache.org/repos/asf/hive/trunk/bin/
'wget' is not recognized as an internal or external command, operable program or batch file.
@@RiteshSingh-zb5ss Thank you so much for watching this video, dear, and happy to know that you managed to configure it successfully. Try to use localhost:50070/ and let me know if you can open it. This link is also given in the video description, dear. Please do subscribe to my channel to encourage me.
@@RiteshSingh-zb5ss That's good progress. What error is showing in the web browser when you open the cluster page? Please do subscribe to encourage me, dear.
Thank you so much for watching this video. If the Hadoop DataNode is not working or not starting, it could be due to several reasons, including misconfiguration, resource issues, or problems with the underlying storage. To troubleshoot, please check in the hdfs-site.xml configuration file that the DataNode path is specified correctly (a sketch is below). In addition, check that you have sufficient disk space. You can also try restarting the computer, starting all the daemons again, and seeing if that works. If possible, please do subscribe to my channel to encourage me.
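For reference, the DataNode directory entry in hdfs-site.xml normally looks like this; the path is the one used in the video, so adjust it if yours differs:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>C:\hadoop\data\datanode</value>
</property>

The directory must exist and be writable by the user starting the daemons; a bad or missing path here is one of the most common reasons the DataNode exits right after start-all.cmd.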
Thank you so much for watching my video, dear. Sorry to know that you are facing issues with the Hadoop installation. Ensure that you have configured Hadoop properly before the Hive installation, and set the HADOOP_HOME environment variable. If you haven't installed Hadoop yet, there is another video on my channel on installing Hadoop on Windows; you can watch that, dear. Let me know if it helped you. Thank you so much for subscribing to the channel.
Hi, thank you so much for the great effort you put into this tutorial; it is even better organized than the Hadoop one. The steps are straightforward and the explanations are clear. One point that was critical for me was the version compatibility between Hadoop and Hive: in my first installation I missed that information, then re-checked and found that the Hive version I had installed was not compatible with my Hadoop version, so I repeated the complete tutorial to ensure the Hive version matched the Hadoop version. Can you please do a video on how to install Cassandra on Windows, just like you did here for Hive? Thanks again
@@tonyobikwelu9783 The pleasure is mine dear ❤️ I hope you’ve subscribed to my channel to encourage me and receive notifications about upcoming videos.
Thank you so much for watching this video ❤️. Sorry to know that you are facing issues. If the NodeManager and ResourceManager are shutting down immediately after starting, it typically indicates a configuration issue. Please validate the configuration files: double-check "yarn-site.xml" and "core-site.xml" for any misconfigurations, and ensure that yarn.resourcemanager.hostname and the other essential parameters are correct (a sketch is below). If these are all okay, then it could be an issue of insufficient space. Let me know if this helped you, dear. Please do subscribe to my channel and encourage me.
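For a single-node Windows setup, the essential yarn-site.xml entries usually look like this; treat it as a minimal sketch rather than a complete file:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>localhost</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

If a property name or value here is misspelled, the NodeManager often exits immediately, with the reason written to its log under %HADOOP_HOME%\logs.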
Hi sir, I followed every step you advised, but after running hdfs namenode -format I'm getting an error: "big" is not recognised and "classpatch" is not recognised.
@@GlobalTalesChronicles Thank you so much for watching this video, dear. Sorry to know that you are getting errors while formatting the NameNode. What is the "big" word here? Can you copy-paste the whole error?
I am facing this error:
C:\hive\apache-hive-4.0.0-bin>hive --service schematool -dbType derby -initSchema
Exception in thread "main" java.lang.UnsupportedOperationException: 'posix:permissions' not supported as initial attribute
        at sun.nio.fs.WindowsSecurityDescriptor.fromAttribute(WindowsSecurityDescriptor.java:358)
        at sun.nio.fs.WindowsFileSystemProvider.createDirectory(WindowsFileSystemProvider.java:496)
        at java.nio.file.Files.createDirectory(Files.java:674)
        at java.nio.file.TempFileHelper.create(TempFileHelper.java:136)
        at java.nio.file.TempFileHelper.createTempDirectory(TempFileHelper.java:173)
        at java.nio.file.Files.createTempDirectory(Files.java:950)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:296)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:245)
Can anyone help?