😂 You should listen to Michael Levin, Karl Friston, etc. Keith Duggar and Yannic Kilcher have some amazing insights on conscious systems and extrapolation vs interpolation too. There's so much amazing content on this channel dude, welcome to the rabbit hole lol.
@paxdriver I have been listening to Levin and Friston non-stop. This conversation is amazing: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-J6eJ44Jq_pw.html
To be honest, after 3 minutes I felt numb, as if I hadn't learned a thing in 4 years of doing ML. Maybe her physics background helps her define these concepts, but I have never tested my metrics this thoroughly.
Thanks for this interesting tidbit! (Very well explained by Adele!) I really like the general concept of sampling/transforming inputs to a different space before running ML “end to end”, which is just going to learn a similar transform anyway, or fail to and imitate it poorly. You can complain about building in priors, but I think we’re still making progress doing exactly that for now, and this paper is a perfect example.
Great explanation. From simple to more complex. Great channel. High quality, information dense, every word matters. Great interviewees. Super-smart beautiful people. Thank you
Cool talk, but how well it works with real data is another question; we'll see how this idea can be improved. It's a well-suited solution for a specific problem rather than a generic solution to geometric DL, but a reasonable start in a new direction.
Very cool. Maximizing R^2 won't guarantee a good fit, but at worst that's only a distraction from the big ideas here. A and B will be optimized for maximum R^2, even though R^2 isn't necessarily the right measure of fit. Honestly, the R^2 part is easily replaceable and inconsequential to the idea as a whole. If any haters have an issue with it, they're splitting hairs and missing the point.