Amazing how quickly things are becoming easier! Thanks for all your work. I'm interested in designing a state machine that actually analyzes the LLM Vision inferences and parses them into coherent ideas, to be somehow aware of what's going on, just like humans. For example: what happened? The answer is somewhere in comparing the differences between two or more LLM Vision query replies/inferences from two or more frames separated by a second or two, with states using different lists of prompts. Wouldn't it be interesting to have a go at that?!?
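To make the idea concrete, here's a minimal sketch of the "compare two inferences" step — all names are hypothetical (this is not a real LLM Vision API), and the replies are made-up examples of what two frames a second apart might return:

```python
# Sketch: diff two LLM vision replies to answer "what happened?"
# between two frames. Hypothetical helper names, not a real API.

def tokenize(reply: str) -> set:
    """Reduce an inference reply to a set of lowercase keywords."""
    return {w.strip(".,!?").lower() for w in reply.split() if len(w) > 3}

def what_happened(reply_t0: str, reply_t1: str) -> dict:
    """Diff two replies: what appeared and what disappeared between frames."""
    before, after = tokenize(reply_t0), tokenize(reply_t1)
    return {"appeared": after - before, "disappeared": before - after}

# Made-up replies for two frames taken a second or two apart:
r0 = "An empty driveway with a closed garage door."
r1 = "A delivery truck parked in the driveway with a closed garage door."
events = what_happened(r0, r1)
# events["appeared"] now holds {"delivery", "truck", "parked"}
```

A real state machine would then transition on `events` (e.g. "truck appeared" moves it to a "delivery in progress" state with its own prompt list), but the core is this frame-to-frame diff.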
Does supervision provide a way to retrieve information for each ID that is detected? In my case, where a car and a truck are two different objects crossing the line zone in supervision, can I find out the class information of the objects that have passed?
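A sketch of the pattern I'm hoping works — assuming `LineZone.trigger(detections)` returns boolean `crossed_in`/`crossed_out` masks aligned with the detections, you could index `detections.class_id` with them. Plain NumPy arrays stand in for the real objects here (the values are made up):

```python
import numpy as np

# Stand-ins for detections.class_id and the masks LineZone.trigger
# would return (hypothetical values; in real code these come from
# the tracker and the line zone on each frame).
class_id = np.array([2, 7, 2])            # e.g. COCO: 2 = car, 7 = truck
crossed_in = np.array([True, False, True])

# Classes of the objects that crossed the line going "in":
crossed_classes = class_id[crossed_in]
```

Accumulating `crossed_classes` across frames (e.g. into a `collections.Counter`) would give per-class crossing counts, not just the totals.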
Can you please create a notebook or video explaining how to run the YOLO-NAS model on the Jetson Nano? I am unable to set up the dependencies required for inferencing with the YOLO-NAS model on the Jetson. I am using the Jetson Nano 2GB model with JetPack version 4.6.1. Can you please specify clear instructions on the version of each dependency that needs to be installed on the Jetson Nano, and how to install them? Please🙏🙏🙏🙏