Deep Learning: Alchemy or Science?
Topic: Troubling Trends in ML Scholarship
Speaker: Zachary Lipton
Affiliation: Carnegie Mellon University
Date: February 22, 2019
For more videos, please visit video.ias.edu
These are pretty much the things that bug anyone who has worked in industry and then faced the flakiness and warped incentives of pure research in any hot field. That said, I think "mathiness" is not necessarily an obfuscation tactic; it is often the "real product" in the authors' minds, even if its impact is uncertain, while the data analysis veers off in a different direction in order to find an impactful-looking "result" (that's where the system-gaming really happens).
@@zacwinzurk2715 I guess it has to be different strokes for different folks. In my case "features" doesn't make sense at all, but "covariates" tells me almost the whole story. Furthermore, "features" doesn't even imply that things might be co-varying. Also, look back 3-4 years at ML papers where folks started using the word "inference" just to mean prediction. Horrible and confusing: ask any statistician what inference means and you'll get a completely different answer, unrelated to prediction.
No one is saying that all words used in ML are bad. "Features" and "targets" are fine. The point is that there is an epidemic of sloppy, imprecise, and often suggestive or anthropomorphic language in recent ML papers.