"Do not forsake wisdom, and she will protect you; love her, and she will watch over you. Wisdom is supreme; therefore get wisdom. Though it cost all you have, get understanding." - Proverbs 4:6-7. Thank you Stanford.
Interesting. Lots of concern about bias, but the root vulnerability to bias is found in the modeling. If you want a specific outcome (a bias toward equity versus equality, for example), model it and everything will be based on that. GIGO.
Can't we just find the max length of the two strings? In this case the max length is that of string 2, which is "The Cats". Then use the LCS algorithm with DP (recursion), which returns the longest common subsequence, and subtract its length from the max length of the string. Can we approach it this way? Someone please look into this!
I think doing this by LCS would be easy. First we find the max length of the two strings:

#include <string.h>

/* length of the longer of the two strings */
int maxLen(const char *str1, const char *str2) {
    int s1 = strlen(str1);   /* sizeof would give the pointer size, not the string length */
    int s2 = strlen(str2);
    return (s1 > s2) ? s1 : s2;
}

/* length of the longest common subsequence of s[0..m-1] and t[0..n-1] */
int LCS(const char *s, const char *t, int m, int n) {
    if (m == 0 || n == 0) return 0;   /* an empty string has an empty LCS */
    if (s[m-1] == t[n-1])
        return 1 + LCS(s, t, m-1, n-1);
    int a = LCS(s, t, m-1, n);
    int b = LCS(s, t, m, n-1);
    return (a > b) ? a : b;           /* LCS keeps the longer branch (max, not min) */
}

Finally, return maxLen(str1, str2) - LCS(str1, str2, m, n). This way we can find the minimum edit distance between the two strings. NOTE -> We have not considered the space while calculating the max! Please do correct me if I am wrong, anyone?
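A note on the formula above (my own addition, not from the lecture): max(m, n) - LCS(m, n) equals the edit distance only when the shorter string is a subsequence of the longer one, which does happen to hold for "The Cat" vs "The Cats". In general, if only insertions and deletions are allowed, the distance is m + n - 2*LCS(m, n); once substitutions are allowed (the usual Levenshtein definition), the distance cannot be derived from the LCS alone. A minimal sketch of the insert/delete-only version, using a bottom-up DP table (the 64-character cap is just an assumption for the sketch):

#include <stdio.h>
#include <string.h>

/* LCS length via a bottom-up DP table:
   dp[i][j] = LCS of s[0..i-1] and t[0..j-1] */
int lcs(const char *s, const char *t) {
    int m = strlen(s), n = strlen(t);
    int dp[64][64] = {0};   /* assumes both strings are shorter than 64 chars */
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            dp[i][j] = (s[i-1] == t[j-1])
                     ? dp[i-1][j-1] + 1
                     : (dp[i-1][j] > dp[i][j-1] ? dp[i-1][j] : dp[i][j-1]);
    return dp[m][n];
}

int main(void) {
    const char *s = "The Cat", *t = "The Cats";
    int m = strlen(s), n = strlen(t);
    /* insert/delete-only edit distance */
    printf("%d\n", m + n - 2 * lcs(s, t));   /* prints 1: one insertion of 's' */
    return 0;
}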
You check the cache FIRST, before running all the computation: the "if (m, n) in cache => return cache(m, n)" lines go at the top, before everything else. So basically, if the result is already in the cache there is no need to run the 3 recursive computations again; just return the stored result.
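To make that concrete, here is a minimal memoized sketch of the 3-way edit distance recursion; names like cache and dist are mine, not from the lecture, and the cache check sits at the very top:

#include <stdio.h>
#include <string.h>

#define MAXN 64
int cache[MAXN][MAXN];   /* -1 means "not computed yet" */
const char *s, *t;

int dist(int m, int n) {
    if (m == 0) return n;             /* only insertions remain */
    if (n == 0) return m;             /* only deletions remain */
    if (cache[m][n] != -1)            /* cache check FIRST, before everything else */
        return cache[m][n];
    if (s[m-1] == t[n-1])             /* last characters match: no edit needed */
        return cache[m][n] = dist(m-1, n-1);
    int del = dist(m-1, n);           /* the 3 computations we avoid repeating */
    int ins = dist(m, n-1);
    int sub = dist(m-1, n-1);
    int best = del < ins ? del : ins;
    if (sub < best) best = sub;
    return cache[m][n] = 1 + best;
}

int main(void) {
    s = "The Cat"; t = "The Cats";
    memset(cache, -1, sizeof cache);  /* start with an empty cache */
    printf("%d\n", dist(strlen(s), strlen(t)));  /* prints 1 */
    return 0;
}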
Such an amazing session. But I can't understand why eta is used in generating the new value of w, and that too without any conditions. Can someone clear this up? It would be much help.
That was a bit quick, right? 😅 If my math serves me right, eta is the value by which you jump after each iteration, basically the same as the learning rate that shows up in a lot of AI stuff. I'm probably butchering the explanation, but all you need to know is that it is a parameter you play around with in these types of models: the lower it is, the longer the model takes to reach the minimum, and vice versa.
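In symbols, the update being described is w_new = w - eta * slope. A toy sketch with a made-up objective L(w) = (w - 3)^2, just to have a concrete slope (the objective and values are mine, not from the lecture):

#include <stdio.h>

/* Minimize L(w) = (w - 3)^2 by gradient descent.
   Update rule: w <- w - eta * L'(w), with L'(w) = 2(w - 3). */
int main(void) {
    double w = 0.0;        /* starting point */
    double eta = 0.1;      /* the learning rate / step size */
    for (int i = 0; i < 100; i++)
        w -= eta * 2.0 * (w - 3.0);
    printf("w = %f\n", w); /* approaches the minimum at w = 3 */
    return 0;
}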
Here the problem is relatively simple, I mean the graph is simple: there is just one minimum. For functions with more than one minimum, a flat slope, or a narrow pit in the graph, it becomes essential to control the step size by which we move down the gradient after each iteration, otherwise we might miss the minimum. If we update the starting point by too large a value each time, we are descending the graph too fast; at some point we will skip over the minimum and never converge. And if there is a plateau anywhere in the graph, a very small step size would mistake it for the minimum, since it would never be able to cross it in such small steps. So we play around with this value to get the desired result and to reduce the error, in order to have better predictions.
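To see both failure modes on a concrete example, here is the same toy objective run with three step sizes (the eta values are arbitrary, picked only for illustration):

#include <stdio.h>

/* Same toy objective L(w) = (w - 3)^2; gradient L'(w) = 2(w - 3). */
static double descend(double eta, int steps) {
    double w = 0.0;
    for (int i = 0; i < steps; i++)
        w -= eta * 2.0 * (w - 3.0);
    return w;
}

int main(void) {
    printf("eta=0.01: w=%f\n", descend(0.01, 50)); /* too small: still far from 3 */
    printf("eta=0.10: w=%f\n", descend(0.10, 50)); /* reasonable: converges to ~3 */
    printf("eta=1.10: w=%f\n", descend(1.10, 50)); /* too large: overshoots and diverges */
    return 0;
}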