Here you can find (nearly) all talks from the conference "FOMUS: Foundations of Mathematics: Univalent Foundations and Set Theory - What are Suitable Criteria for the Foundations of Mathematics?" For more information, please see the conference homepage: www.fomus.weebly.com.
This workshop was organised with the generous support of the Association for Symbolic Logic (ASL), the Association of German Mathematicians (DMV), the Berlin Mathematical School (BMS), the Center of Interdisciplinary Research (ZiF), the Deutsche Vereinigung für Mathematische Logik und für Grundlagenforschung der Exakten Wissenschaften (DVMLG), the German Academic Merit Foundation (Stipendiaten machen Programm), the Fachbereich Grundlagen der Informatik of the German Informatics Society (GI) and the German Society for Analytic Philosophy (GAP).
There is a mathematically very important distinction between "A and B are isomorphic" and "A and B are _uniquely_ isomorphic". In mathematical practice, one can only safely identify two structures if the isomorphism between A and B is unique. Non-uniqueness of isomorphisms is in fact easily detectable: the group of self-isomorphisms (automorphisms) of A is non-trivial. This is already apparent for basically the simplest mathematical object: the Booleans. As a finite set, the Booleans are just a two-element set, and that set manifestly has a non-trivial automorphism T <-> F. However, as a Boolean algebra it no longer has a non-trivial automorphism; the isomorphism is unique, and it is completely harmless to change its representation. (In fact this is actually done in practice: in most computer hardware a Boolean is represented by F <-> 0, T <-> 1, but in some others it is represented by F <-> 0, T <-> ~0 = 0b111111...111 = -1.)
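The two hardware conventions mentioned above can be sketched in a few lines of Python (the encodings and names are mine, chosen for illustration). The point is that matching the two representations leaves no freedom: the Boolean-algebra structure pins down a unique correspondence.

```python
# Two common hardware encodings of the Booleans (illustrative sketch):
#   convention 1: F -> 0, T -> 1
#   convention 2: F -> 0, T -> ~0 (all bits set; -1 in two's complement)
enc1 = {False: 0, True: 1}
enc2 = {False: 0, True: -1}

# The unique Boolean-algebra isomorphism between the two encodings must
# match F with F and T with T: there is no freedom to swap them, because
# the algebra structure distinguishes the top element from the bottom.
iso = {0: 0, 1: -1}

for b in (False, True):
    assert iso[enc1[b]] == enc2[b]
print("the two encodings agree up to the unique isomorphism")
```

By contrast, viewing the Booleans as a bare two-element set, the swapped map `{0: -1, 1: 0}` would be an equally valid bijection, which is exactly the non-trivial automorphism the comment points at.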
Understanding with precision the samenesses and differences (identities and deltas/differentials) between things is what led me to homotopy type theory. Stating the differences and samenesses (importantly, the differences) between the *tools* for stating samenesses and differences is profoundly useful, and it frees people to pick whichever tools suit their needs best!
Mathematicians are getting further away from the foundations of mathematics, not closer. Does an infinity groupoid explain the difference between cardinals and ordinals?
Oh, of course: a cardinal is a truncation, while an ordinal is a simulation. See The Univalent Foundations Program 2013, sections 10.2 and 10.3. You can take truncation and simulation either verbally or formally, just don't mix the two.
Sorry, this is unwatchable. I stopped watching after you kept using the term "token" to describe the term "term". "Term" is the right term for "term", for god's sake. Don't get me started about "witness".
I ran into this in the real world. Say you use a service that signs statements it believes about you with a MAC, literally hashing strings: A = H("fname:thorsten"), etc. A is evidence that the string was hashed by our authority. A(B ~ C) = AB ~ AC, where hashes are literally multiplied and "~" means they differ by some constant.

Say K is a secret key that we want to calculate to open a logical padlock by storing values. K - (AB ~ AC) = (K - AB) ~ (K - AC) means we can calculate K by adding AB to (K - AB), or adding AC to (K - AC). To handle "not" logic, we would need negated expressions (which are true), such as !D. That works fine, because !D has an actual witness value. AB and AC are "witnesses".

But you get stuck when you want proof that a statement E is *missing*. The certificate a user holds is a list of statements "and"ed together, like: A B D E G H !M !N. We have evidence for these. But if we are asked to prove that X is MISSING from the certificate, such as X = H("DrugConviction2020"), we can easily walk the certificate to verify that X is not in the list, yet we can't provide a specific witness value that calculates the key to unlock the padlock. I.e.: AB = 2394, AC = 2300, K = 1300. For the AB case the padlock stores -1094; for AC it stores -1000. The user must be able to supply the value of AB or AC to recover the key.

This is very much like needing a witness, because the key K must be calculated the same way no matter who the user is. We have a similar issue with a general not(p) of propositions, in that we can't produce a consistent witness unless the padlock brings the witness value with it. The intention is that the padlocks are totally public and should not leak the values of any attributes, though they may leak what they are looking for.
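The padlock arithmetic in the comment (AB = 2394, AC = 2300, K = 1300) can be sketched numerically. This is a toy model only, not a secure construction; the hash-based check for a recovered key is my own addition so the sketch is self-contained.

```python
import hashlib

def h(x):
    # Toy commitment: hash of the decimal string (illustration only).
    return hashlib.sha256(str(x).encode()).hexdigest()

AB, AC = 2394, 2300   # witness values a user may hold (numbers from the comment)
K = 1300              # the key the padlock protects

# Public padlock: one slot per acceptable witness (K minus that witness),
# plus a commitment to K so a candidate key can be checked without
# publishing K itself.
slots = [K - AB, K - AC]      # [-1094, -1000]
commitment = h(K)

def unlock(witness):
    """Try the witness against every slot; return K on success, None otherwise."""
    for s in slots:
        if h(s + witness) == commitment:
            return s + witness
    return None
```

A holder of either AB or AC recovers K (`unlock(2394) == 1300`), while a wrong value yields nothing. Note how the "missing statement" problem shows up here: there is no number a user could feed to `unlock` that demonstrates X is *absent* from their certificate, which is exactly the gap the comment describes.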
This is a reasonable definition of Boolean logic, and it even admits a reasonable interpretation for values between 0 and 1: def and(a,b): return a*b; def not(a): return 1-a; def or(a,b): return not(and(not(a),not(b))). But these lack witness information. I am definitely handling and/or/not, but the inputs are not just zero and one.

With general circuits, it does not seem possible to make the user certificate provide all the witnesses required to handle NOT cases. The padlock may need to provide (i.e. leak!) a witness value, to tell what to look for in the certificate. And there doesn't seem to be a way to enforce that the code honors a missing derogatory attribute in its certificate. It's not clear how to create a witness value for something proven missing from a certificate.

I tried all kinds of crazy stuff. One thing was having user and padlock encode their beliefs into points on a polynomial (which amounts to polynomial neural nets for questioning witnesses), where they agree on what hashes to x but disagree on what the y values are, and using the difference at a particular point between the user's (x,y) and the padlock's (x,y) (i.e. H("DrugTest2020Pass")) answers in the polynomial.

This is the first time that all the abstract gobbledygook I read in "Type Theory And Programming" made sense. Even when the witnesses required to handle NOT are not secrets, you can only walk the certificate (i.e. an environment) to prove that the statement is "missing", which is different from it being explicitly asserted as false (with an explicit witness value).
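The definitions quoted above are not valid Python as written, since `and`, `or`, and `not` are reserved words. A runnable version (with the names trivially renamed) shows they reproduce Boolean logic on {0, 1} and extend to values in between:

```python
# Runnable version of the comment's definitions. `and`, `or`, and `not`
# are Python keywords, so the functions carry a trailing underscore.
def and_(a, b):
    return a * b

def not_(a):
    return 1 - a

def or_(a, b):
    # De Morgan's law: or(a, b) = not(and(not a, not b)) = a + b - a*b
    return not_(and_(not_(a), not_(b)))

# On {0, 1} these agree with ordinary Boolean logic...
assert and_(1, 1) == 1 and and_(1, 0) == 0
assert not_(0) == 1 and not_(1) == 0
assert or_(0, 0) == 0 and or_(1, 0) == 1

# ...and on [0, 1] they give the product-style interpretation
# the comment mentions.
assert or_(0.5, 0.5) == 0.75   # 0.5 + 0.5 - 0.25
```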
This talk was at times hard to understand, but it is so mind-expanding that it is a must-watch. Requiescat in pace, Voevodsky; you were a great mathematician.
"Internal / External" is far more comprehensible than "propositional" or "logical". For example, I have no idea what a "proposition" is supposed to mean as a function of math or semantics, but I have a very clear idea of how "internal" relates to "external". That pairing has a dualistic (structural) notion, whereas "proposition" is a circular, abstract term, and "logic" only concerns consistency, not meaning. The reason so many people hate math is that mathematicians are horrible at conveying meaning and are very poor thinkers overall in the modern day. Math is a LANGUAGE. Nothing more. It is not some divine idea. Logic is a language too, with the same intuitions and assumptions as any other language. He is trying to explain the MEANING of ideas. That is what a philosopher should try to do.
I find the idea that "A function is a set of pairs" interesting. I must have learned enough computer science before set theory to not think of it that way, except (ironically?) for finite functions like &&, ||, !=, etc.. The way I think of a && b is as its logic table "00->0;01->0;10->0;11->1", which can be manipulated as a whole to show certain logic identities. For any other function, though, I think of it more as the way to *generate* the table (or graph), than as a lookup. Even for functions which I don't know how to calculate, I think of it like a blackbox, which hypothetically spits out values when you give it input.
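The two views in the comment above, a function as a stored table of pairs versus a rule that generates the table, can be put side by side in a few lines of Python (my own illustration):

```python
# AND viewed two ways. First, as a set of pairs: the logic table
# "00->0; 01->0; 10->0; 11->1" stored literally as data.
AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Second, as a rule that *generates* the table on demand.
def and_rule(a, b):
    return a & b

# Extensionally they are the same function: they agree on every input.
for pair, value in AND_TABLE.items():
    assert and_rule(*pair) == value
```

For finite functions the table is a perfectly good object to manipulate; for functions over infinite domains only the rule (or the black box) is available, which is the shift in perspective the comment describes.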
In set theory, we can say that ℕ ⊂ ℤ ⊂ ℝ, which encapsulates the idea that 3 :: ℕ behaves the same as 3 :: ℤ, so in some sense they are the same (and in set theory they literally are). It does seem important to be able to say that a `double` can represent the same things that an `int` can, and more.
Actually, in set theory they are not... In naive set theory they may actually be the same, but in ZF the "integers" are actually (equivalence classes of) pairs of natural numbers. So ℕ ⊂ ℤ ⊂ ℝ is not literally true (at least, none of them is contained in the next as a set). But it is true that they behave in the same way. The proper way to phrase it is that there is a way to "insert ℕ into ℤ" (an embedding) that preserves every property of ℕ.
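The ZF-style construction described above can be made concrete with a small Python sketch (representations and names are mine): an integer is modeled as a pair of naturals (a, b) read as "a - b", and the embedding of ℕ sends n to (n, 0).

```python
# ZF-style integers as pairs of naturals (a, b), read as "a - b".
# Two pairs denote the same integer when a + d == c + b.
def int_eq(p, q):
    (a, b), (c, d) = p, q
    return a + d == c + b

def int_add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

# The embedding of ℕ into ℤ: n ↦ (n, 0).
def embed(n):
    return (n, 0)

# The embedding preserves addition: embed(2) + embed(3) equals embed(5)
# as integers...
assert int_eq(int_add(embed(2), embed(3)), embed(5))

# ...but the set-theoretic objects are literally different: 3 is not (3, 0).
assert 3 != embed(3)
```

The last assertion is the whole point of the reply: ℕ is not literally a subset of this ℤ, it merely embeds into it in a structure-preserving way.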
@@tonaxysam It's not necessarily false. You can define ℕ as the subset of ℤ that "behaves like" the set of natural numbers. This doesn't break any rules.
What's interesting is that it's not true that a double (64-bit float) is a superset of the 64-bit integers, due to floating-point precision. Any number with more significant bits than the floating-point format's mantissa may not be representable in the set of floating-point numbers, even if it can be represented by an integer of the same width. You can lose precision converting in either direction!
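The precision loss described above is easy to demonstrate. Python's `int` is arbitrary precision, but `float` is an IEEE 754 double with a 53-bit significand, so integers just above 2^53 collapse when round-tripped:

```python
# A double has a 53-bit significand, so 2**53 is the last point where
# every consecutive integer is exactly representable.
exact = 2**53                              # 9007199254740992

assert int(float(exact)) == exact          # survives the round trip
assert int(float(exact + 1)) != exact + 1  # rounds back down to 2**53
assert float(exact + 1) == float(exact)    # the two collapse to one double
```

So a 64-bit integer type and a 64-bit double each contain values the other cannot represent, which is why neither is a superset of the other.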
Thorsten seems to interpret "extensional" and "intensional" as some complicated, hard-to-explain concepts. But at least in philosophy, they were never meant to be complicated; see for example: philosophy.stackexchange.com/questions/16164/what-is-the-difference-between-intensional-and-extensional-logic. What worries me even more is that Thorsten feels the name "intensional type theory" is paradoxical, and that he feels a paper by Per Martin-Löf explaining some closely related concepts would be based on some confusion. Maybe Thorsten just failed to get that people mean something utterly trivial when they say "extensional"...
I will readily admit to being stupid, but it really irritates me that I feel I almost, but not quite, understand this lecture. Is there a place with a better explanation (viz., an explanation for dummies) of the Pi and Sigma thingamabobs?
@@lhpl ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-VxINoKFm-S4.html This lecture and the related book focus specifically on exploring dependent types, which is the direction for logic that Thorsten is advocating here (usage of Pi & Sigma types).
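For the Pi and Sigma "thingamabobs" asked about above, a minimal Lean 4 sketch may help (the definitions are my own toy examples, not from the talk):

```lean
-- Π-type (dependent function): the *return type* varies with the argument.
-- For each n, `largest n` lives in `Fin (n + 1)`, a type with n + 1 elements,
-- so different inputs produce outputs of different types.
def largest : (n : Nat) → Fin (n + 1) :=
  fun n => ⟨n, Nat.lt_succ_self n⟩

-- Σ-type (dependent pair): the *second component's type* depends on the
-- first component. This pair packages a bound n together with an element
-- of `Fin (n + 1)`, i.e. a number at most n.
def aPair : (n : Nat) × Fin (n + 1) :=
  ⟨2, largest 2⟩
```

An ordinary function type `A → B` is the special case of Π where the return type doesn't vary, and an ordinary pair `A × B` is the special case of Σ where the second type doesn't depend on the first.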
Pity the community of mathematicians having to deal with this man who thinks SOOOOOOOOOOOOOO outside the box. His project will probably have to wait. Sad he left us so soon.
Yes, indeed- like Grassmann's ideas. On the other hand, it seems that computers will verify proofs of ever increasing length, which is certainly disconcerting to those who work with paper and pen.
@Calum Tatum Once pen and paper mathematicians figure out how to express their ideas using definitions and structures that are precise enough to be investigated/verified by a computer (a.k.a "virtual graduate student"), I think the status quo will change fairly rapidly. At least I hope so.
@@JoelHealy Perhaps many of the pen-and-paper mathematicians will never change, and never get the hang of using a proof assistant. But if they are open-minded, they can play a role like system analysts working with coders. They will have grad students who grew up coding proofs into a proof assistant: digital natives. The transition will be difficult for many mathematicians and departments, but it can be made smoother. User-friendly interfaces to the next generation of proof assistants, or even Lean if they bring back HoTT compatibility in some (cubical?) version, would be a great help.