I'm sorry, I don't have the code for this talk from the speaker. But here's a link to a paper by the speaker with similar objectives that might interest you: www.sciencedirect.com/science/article/abs/pii/S2212420922006549
@TheGeoICT You mean what kind of examples? I focus on traditional methods to detect both marine debris and algae, and I can discriminate between them automatically.
Hi. If you check the description, the slides are linked there. The slides include a link to the Colab notebook that you can run for prediction, etc. Thanks and happy predicting ;)
@TheGeoICT I'd be curious to know what you think about how recent LLMs could fit into EO. The challenge I'm setting for myself, in my spare time, is to build a free EO mobile app for village chiefs or chiefs of transhumant herds, letting them decide for themselves when to sow, when to harvest, when to start transhumance, and which of the various ancestral paths to take, without help from any paid EO experts. After all, today they rely on their own observations of the wind, the rains, and the clouds. And maybe watching TV weather girls 😉
🎯 Key Takeaways for quick navigation:
01:54 🌐 *The Segment Anything Model (SAM) is a powerful and versatile segmentation system that can segment a wide variety of objects in images without additional training.*
03:30 📦 *SAM is a zero-shot generalization model, meaning it doesn't require task-specific training for different objects. It was released by Meta AI and has found applications beyond geospatial data, including computer vision and medical imaging.*
05:46 🧠 *SAM's architecture consists of an image encoder, a prompt encoder, and a mask decoder; users obtain segmentation masks by providing prompts such as points, bounding boxes, or text.*
08:16 🌐 *SAM ships with the SA-1B ("Segment Anything 1 Billion") dataset, containing 11 million highly diverse images with corresponding masks. Training took 3-5 days on 256 GPUs, making it computationally intensive.*
19:46 ⚙️ *SAM can be used through Python functions in both automatic mask generation and prompt-based prediction modes, making it accessible for applications ranging from geospatial analysis to medical imaging.*
23:58 🔄 *SAM integrates with various packages, making it user-friendly even for new users on platforms such as Google Colab, JupyterLab, or SageMaker Studio Lab.*
25:32 🌐 *SAM can segment spatial features from GeoTIFFs, producing georeferenced outputs. Integrated tools such as sliders simplify visualization and enhance user interaction.*
28:01 🗺️ *SAM supports segmentation prompts in the form of points, polygons, bounding boxes, and text. Users can interactively mark foreground and background, streamlining the segmentation process.*
32:50 📦 *SAM facilitates bulk processing of bounding boxes, letting users draw boxes or use existing vector data, which is useful for efficient segmentation and integration with diverse datasets.*
36:19 ⚙️ *For large imagery, SAM's functions subdivide images into tiles for segmentation, overcoming GPU memory limitations. The tool offers both interactive and programmatic segmentation, enhancing flexibility and memory efficiency.*
47:20 🖼️ *Traditional automated segmentation models are often fine-tuned for specific domains, limiting their ability to handle diverse imagery.*
48:28 🦉 *As a foundation model, SAM can be applied across many domains, making it versatile for tasks such as animal surveys and remote sensing applications.*
48:43 🌐 *Ongoing SAM developments involve fine-tuning and integration with other models for improved results in remote sensing applications.*
49:29 📷 *SAM works best on high-resolution imagery; since its segmentation operates on NumPy arrays, it is adaptable to various kinds of geospatial data.*
51:35 🧠 *Integration of TensorFlow into geemap is under consideration for future development, aiming to simplify the workflow and make it more accessible to users.*
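The tiling behaviour described at 36:19 can be illustrated with a small sketch. This is a toy illustration of the general idea only, assuming a simple fixed-size grid with overlap; the function name `tile_image` and its parameters are my own, not samgeo's actual API:

```python
import numpy as np

def tile_image(img, tile, overlap=0):
    """Split a (H, W, C) array into overlapping tiles.

    Illustrates how a large raster can be subdivided so each tile fits
    in GPU memory; each tile is returned with its top-left (row, col)
    offset so predicted masks can be stitched back together.
    """
    step = tile - overlap          # stride between tile origins
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            # edge tiles may be smaller than `tile`; that's fine for SAM,
            # which accepts arbitrary NumPy array shapes
            tiles.append(((y, x), img[y:y + tile, x:x + tile]))
    return tiles

# A 1000x1000 image cut into 256-pixel tiles with 32 pixels of overlap
img = np.zeros((1000, 1000, 3), dtype=np.uint8)
chunks = tile_image(img, tile=256, overlap=32)
print(len(chunks))  # 5 x 5 grid of tiles
```

The overlap matters when stitching: objects cut at a tile boundary are still seen whole in the neighbouring tile.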
Thank you very much. Very clear and easy to follow. How do I participate in your next workshop? Do you run any training on request? How can we get in contact? I am in Botswana. Thank you.
That's a great question! Since LSTM is a time-series algorithm, as long as each example preserves the time sequence within it, you can randomly split training and testing data at the example level. Happy coding!
You can find more info on GEE access here: developers.google.com/earth-engine/guides/access. If you're looking for individual sign-up for commercial use, you can sign up using this link: signup.earthengine.google.com/#!/. Happy working with GEE!
Absolutely. The link to the repo for Day 1 is here: code.earthengine.google.com/?accept_repo=users/tjm0042/WA_ML_Training and the link to the notebook for Day 2 is here: colab.research.google.com/drive/1tqWoLQSUkgjlxmcdKoLL0fwtcxqixcBt?usp=sharing. Make sure to make a copy before running things so you can save your own changes. Happy learning!
Yes, you should see them in your Reader section with this link: code.earthengine.google.com/?accept_repo=users/biplov/bhutan-aces-v-1 Note that this repository was created specifically for the training in Bhutan.