Welcome to my AI and Computer Vision channel! We cover some cool things here, with a main focus on hardcore AI, ML, and Computer Vision. I make both theoretical and project-based videos. Visit my AI Career Program at: www.nicolai-nielsen.com/aicareer
I already followed the steps one by one, but I get this error: ScannerError: while scanning a simple key in "<unicode string>", line 11, column 1: test:/content/dsfs-8/test/images ^ could not find expected ':' in "<unicode string>", line 12, column 1: train:/content/dsfs-8/train/images. I don't even know where line 12 is or how to check it.
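For anyone hitting the same ScannerError: YAML requires a space after each colon, so a line like `test:/content/dsfs-8/test/images` is not parsed as a key/value pair, and the parser fails on the following line. A minimal sketch of what that data.yaml might look like with the spaces added (the two paths are copied from the error message; the `val` path, class count, and class names below are placeholders, not from the original):

```yaml
# data.yaml — note the space after every ':'
train: /content/dsfs-8/train/images
val: /content/dsfs-8/valid/images   # placeholder; adjust to your split folder
test: /content/dsfs-8/test/images

nc: 1                # number of classes (placeholder)
names: ["object"]    # class names (placeholder)
```

The "line 11, column 1" in the error refers to the line inside the YAML file itself, so opening data.yaml in any text editor and counting lines from the top shows where it is.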
I had the most success downloading "Anaconda3-2022.05-Windows-x86_64.exe" from the archives. That's the version he was using at the time. NumPy version 1.26.4.
and it does not build one single opencv_worldXXX.lib; it builds many _worldXXX.lib files, like this: opencv_world_AVX.vcxproj -> E:\cuda opencv\build\modules\world\opencv_world_AVX.dir\Release\opencv_world_AVX.lib
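For what it's worth, the per-instruction-set libraries like opencv_world_AVX.lib are intermediate artifacts of OpenCV's CPU dispatch, and the combined opencv_worldXXX.lib normally only appears after the whole solution (or the INSTALL target) finishes building. A hedged sketch of the configure-and-build steps, assuming a Windows layout with `opencv` and `opencv_contrib` checkouts side by side (paths are examples, not the exact ones from the video):

```shell
REM Sketch: configure OpenCV with the single "world" library enabled,
REM then build the full Release solution so the combined lib is produced.
cmake -S opencv -B build ^
  -DBUILD_opencv_world=ON ^
  -DOPENCV_EXTRA_MODULES_PATH=opencv_contrib/modules ^
  -DWITH_CUDA=ON
cmake --build build --config Release --target INSTALL
```

If the build completes, look for the combined library under build\lib\Release (and in the install folder) rather than under the per-module .dir folders.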
How do I set the augmentation parameters?

```python
# Load the pretrained YOLOv8 model
model = YOLO(model_path)  # use the .pt file for pretrained weights

# Define custom augmentation parameters
augmentation_params = {
    'degrees': 19,        # rotation in degrees
    'translate': 0.1,     # translation as a fraction of image size
    'scale': 0.5,         # scaling factor
    'shear': 2.0,         # shear angle in degrees
    'perspective': 0.0,   # perspective transformation
    'flipud': 0.5,        # vertical flip probability
    'fliplr': 0.5,        # horizontal flip probability
    'mosaic': 1.0,        # mosaic augmentation probability
    'mixup': 0.2,         # mixup augmentation probability
    'hsv_h': 0.015,       # HSV hue augmentation
    'hsv_s': 0.7,         # HSV saturation augmentation
    'hsv_v': 0.4,         # HSV value augmentation
}

# Additional training parameters
training_params = {
    'data': dataset_path,  # path to your dataset
    'imgsz': 640,          # image size
    'augment': True,       # enable augmentation
    'patience': 10,        # early stopping patience
    'save_period': 1,      # save the model after every epoch
    'save': True,          # enable saving of model checkpoints
    'resume': True,        # resume training from the last checkpoint
    'project': save_dir,   # directory to save the project
    **augmentation_params, # include custom augmentation parameters
}

# Train the model with custom parameters
model.train(**training_params)
```

This didn't work.
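Not an official fix, but two things in that snippet commonly trip people up: `'resume': True` raises an error unless a checkpoint from a previous run actually exists, and `train()` also expects an epoch count. A minimal sketch of assembling the keyword arguments without those pitfalls (the dataset path, project directory, and the dropped keys are assumptions on my part, not confirmed against every ultralytics version):

```python
# Sketch: build train() kwargs from the commenter's dicts, dropping
# 'resume' (fails without an existing checkpoint) and 'augment', and
# adding an explicit 'epochs' value.
augmentation_params = {
    'degrees': 19, 'translate': 0.1, 'scale': 0.5, 'shear': 2.0,
    'perspective': 0.0, 'flipud': 0.5, 'fliplr': 0.5,
    'mosaic': 1.0, 'mixup': 0.2,
    'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4,
}

training_params = {
    'data': 'data.yaml',       # placeholder dataset path
    'epochs': 100,             # train() needs an epoch count
    'imgsz': 640,
    'patience': 10,
    'save_period': 1,
    'save': True,
    'project': 'runs/custom',  # placeholder save directory
    **augmentation_params,     # augmentation values go in as plain kwargs
}

# With ultralytics installed, the call would then be:
#   from ultralytics import YOLO
#   model = YOLO('yolov8n.pt')
#   model.train(**training_params)

print(sorted(training_params))
```

If training still fails, the traceback from `model.train()` usually names the exact argument it rejects, which is the fastest way to see which key your installed version does not accept.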
Awesome video, Nicolai! A few years ago it would have been unimaginable to have such a powerful model available. I'm sure it will send the productivity of teams working on annotation and other computer vision tasks through the roof!
Is pipreqs smart enough to analyze my Python script and determine that some imports are not actually used, and therefore not add them to requirements.txt? I'd like a way to give it a script and have it report the versions the user will need, but ONLY for the packages that are actually used somewhere in the code!
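For what it's worth, pipreqs works by scanning the import statements in your source files rather than dumping your whole environment, so packages you never import won't appear. The core idea can be sketched with the standard-library `ast` module (a toy version for illustration, not pipreqs itself):

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Return the top-level module names imported by a Python source string."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split('.')[0])   # "os.path" -> "os"
        elif isinstance(node, ast.ImportFrom):
            if node.module:  # skip relative imports like "from . import x"
                found.add(node.module.split('.')[0])
    return found

script = "import numpy as np\nfrom PIL import Image\n\nprint(np.zeros(3))\n"
print(sorted(imported_modules(script)))  # → ['PIL', 'numpy']
```

One caveat: this (and, as far as I know, pipreqs) detects *import statements*, not whether the imported name is later used, so an unused `import numpy` at the top of a file would still end up in requirements.txt.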
Join My AI Career Program www.nicolai-nielsen.com/aicareer Enroll in the Investing Course outside the AI career program nicolai-nielsen-s-school.teachable.com/p/investment-course Camera Calibration Software and High Precision Calibration Boards camera-calibrator.com/
Appreciate your effort! It's motivating that typing a few lines with YOLOv10 gets objects detected. Kindly make a video on manually selecting a desired object from a drone / NVIDIA Jetson setup, etc. And please put some free YOLOv10 course videos on YouTube, so that after watching and being satisfied, we will definitely buy your course.
In order to study artificial intelligence engineering at university in Turkey, I, currently in my third year of high school, need to work more than 10 hours a day for two years... that's sad. I hope it's worth it... :(
I trained my own model but I'm getting errors. OSError: [WinError 126] The specified module could not be found. Error loading ".conda\envs\yolov10x_derin\Lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
Hey Nicolai, it seems like you forgot to add the following file to the GitHub repo for this example: "frozen_inference_graph.pb". Both .txt files are present, but not the .pb file. Would it be possible for us to get that file?