
Symbolic AGI: How the Natural Will Build the Formal 

Positron's Emacs Channel
1.6K subscribers
10K views

True AGI will combine formal and informal methods, and people are already combining these tools in this way: M-x Jarvis in our time. But an evolving Open Source is critical to delivering real value.
This video is part of Positron's efforts to make a case for open innovation. You can show support for this effort on Positron's Github Sponsors page
github.com/sponsors/positron-...
1. Empirical argument that induction must be capable of emerging deductive and formal systems.
2. Decoding to a less restricted but less consistent informal system and then re-encoding to a formal one can identify new consistency.
3. Formal systems can be used to induce coherence in informal systems, accelerating the search for new formal coherence.
4. Both logic and metalanguage can naturally emerge by generalizing logical dependence and stripping away semantics.
5. If a self-model is exposed, the metalanguage capability implies self-programming capability.
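Points 2 and 3 above describe a propose-and-verify loop: an informal generator produces candidates freely, and a formal checker keeps only what re-encodes consistently. A minimal sketch of that loop, where `informal_propose` and `formal_check` are hypothetical stand-ins (in practice, say, an LLM and a proof checker such as the Lean backend used by DeepSeek-Prover):

```python
def spectral_search(seed, informal_propose, formal_check, rounds=10):
    """Alternate between informal generation and formal verification.

    informal_propose: maps a formal statement to candidate variants
    (unrestricted, possibly inconsistent).
    formal_check: returns True only for candidates that re-encode
    into a consistent formal statement.
    """
    frontier = {seed}
    verified = set()
    for _ in range(rounds):
        candidates = set()
        for s in frontier:
            candidates.update(informal_propose(s))
        # Re-encoding step: keep only candidates the formal system accepts.
        frontier = {c for c in candidates if formal_check(c)}
        verified |= frontier
    return verified
```

The formal side prunes the informal side's search space each round, which is the acceleration point 3 describes.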
Transformer animation visualizing layered attention
• Attention in transform...
In and Out of Love with Math | 3b1b podcast #3
• Steven Strogatz: In an...
Curry-Howard Correspondence
en.wikipedia.org/wiki/Curry%E...
Kolmogorov Complexity
en.wikipedia.org/wiki/Kolmogo...
DeepSeek-Prover Automated Theorem Proving + LLMs
arxiv.org/abs/2405.14333
Automated Reasoning
en.wikipedia.org/wiki/Automat...
Total Functional Programming
en.wikipedia.org/wiki/Total_f...
Stacked Restricted Boltzmann Machine
en.wikipedia.org/wiki/Restric...
Fixed Point Combinator
en.wikipedia.org/wiki/Fixed-p...
Formal System
en.wikipedia.org/wiki/Formal_...
Syllogism
en.wikipedia.org/wiki/Syllogism
Incompleteness Theorem
en.wikipedia.org/wiki/G%C3%B6...
Undefinability Theorem
en.wikipedia.org/wiki/Tarski%...
Metalanguage
en.wikipedia.org/wiki/Metalan...
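Of the references above, the fixed-point combinator is the one most compactly shown in code: it lets recursion emerge from anonymous functions, mirroring the video's treatment of self-reference. A sketch in Python, which, being strict, needs the eta-expanded Z variant of Y:

```python
# Z combinator: the call-by-value form of the Y fixed-point combinator.
# Z f = (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written as a non-recursive step function; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

`fact(5)` evaluates to 120 without `fact` ever naming itself.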
#emacs #opensource #programming #machinelearning #logic #ai #artificialintelligence
TIMESTAMPS
00:00 Intro
00:34 Deduction
02:30 Formal Systems
05:58 Inducing Deduction
11:04 Spectral Reasoning
15:38 Recursive Computation
22:32 Online Learning
25:34 Limitations
27:36 Remaining Work
31:21 Non-Problems
34:14 Doing it Wrong
37:06 Open Source: Part Deux

Published: 26 Jun 2024
Comments: 49
@Positron-gv7do · 11 days ago
This video is part of Positron's efforts to make a case for open innovation. You can show support for this effort on Positron's Github Sponsors page: github.com/sponsors/positron-solutions

Initially there was no plan to propose an implementation sketch. This was supposed to be a simple answer to the question of whether AGI was material from a strategic standpoint, on a relevant timeline.

The empirical argument that induction must be capable of emerging deduction is the important one for technology forecasting. We haven't had systems that appeared useful at the heuristics side of automated reasoning & theorem proving until recently.
@rickybloss8537 · 7 days ago
Fantastic video.
@aleph0540 · 7 days ago
Good points, how do we collaborate on some of these topics? Do you have an interest in doing so?
@Positron-gv7do · 7 days ago
I saw your email. I'll reply first briefly here for others.

Positron is currently focused on the social decision problems that are inherent to open collaborations. Once we get our first product operating, that product may later entrain us into work more and more directly aligned with AGI. While AGI will push forward the problems we work on, the problems will just move up in value potential rather than go away, so we will remain focused on platforming open collaboration and the enabling tools & infrastructure.

The reason the AGI feasibility question is relevant to us is that our users need a correct assessment of whether we are in an asymptotic-LLM or a strong-AGI timeline. It will strongly affect the things that people want to collaborate on, which strongly affects how our product will be used and the level of success people will have.

A second group this video is directed at is other engineers with backgrounds like modeling and simulation who are deciding whether to shift into ML and how. If you have the choice between twiddling LLMs and leaping into automated reasoning and emergent logic, the latter is definitely going to be more useful, and it's in our and everyone else's interest that strong, local AGI emerges quickly. Hopefully this means better deployed capital and better invested skills. These kinds of things motivated us to put this message together.

That said, I can't help but make progress each morning while drinking coffee. I began modelling the control function to see if any obvious conclusions could be deduced. I found some relationships and may publish whenever it feels appropriately digested.
@jeanfredericferte1128 · 4 days ago
@@Positron-gv7do I'm working on, and interested in, these ideas too and happy to see your video. I'm looking toward a perceptual/multimodal system with an ontological knowledge-representation specialisation, with continuous learning and a multimodal learning strategy (context provided with dynamic ontology building/evaluation; the ontology representation being explainable, we should have an agent able to escalate).
@kevon217 · 2 days ago
Top notch video. Really enjoyed this.
@leobeeson1 · 4 days ago
Recommended for applied scientists and engineers integrating reasoning/deductive systems with LLM capabilities. The content is excellent up until minute 37, after which it becomes opinionated (e.g. lab-grown meat, use javascript, etc.). If you liked this video, you might also appreciate: Improving LLM accuracy with Monte Carlo Tree Search (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mfAV_bigdRA.html)
@moritzrathmann2529 · 3 days ago
Thank you
@mira_nekosi · 5 days ago
imo the next step for LLMs is hybrid models (RNN + attention, like mamba2-hybrid), because they have been shown to be no less, and maybe even more, performant than transformers while being a few times more efficient, and they could have a kind of infinite context.

Also, hybrid models could be more computationally powerful than transformers, as transformers have been shown to be unable to compute even what FSMs can compute (without either outputting a number of tokens proportional to the number of state transitions or growing the layers at least logarithmically; without such restrictions they can indeed solve such problems, but that is either pretty slow or impractical). Full-blown RNNs, as well as some modifications of mamba, can. And while current hybrid models can't solve such problems (probably no model that can be trained fast can), they could be transformed into models that could, and then finetuned to actually solve them.
@mira_nekosi · 4 days ago
In practice, "better" hybrid models that can solve such (state-tracking) problems could in theory do some more advanced program analysis much faster, e.g. by basically performing a kind of abstract interpretation on the fly, making them better at programming. Probably something similar could be applied to math.
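The state-tracking limitation discussed above is easy to make concrete. An FSM-style recurrence carries a fixed-size state across arbitrarily long input, which is exactly what a fixed-depth, non-recurrent architecture lacks. A toy illustration (parity of a bit stream, the simplest state-tracking problem):

```python
def parity(bits):
    """Track parity with a single bit of recurrent state.

    Memory is constant no matter how long the input is; a fixed-depth
    model without carried state must instead recompute over the whole
    prefix (or grow with input length), as the comment above notes.
    """
    state = 0  # the entire FSM state
    for b in bits:
        state ^= b  # one state transition per token
    return state
```

The recurrence is trivial per step; what matters is that the state is reused across steps rather than re-derived.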
@escher4401 · 19 hours ago
00:01 Precision AGI requires clear and correct answers every time.
01:58 Symbolic reasoning enables deduction without empirical data.
05:55 Challenges in building accurate formal systems
07:47 Formal systems can be induced from natural language and inductive reasoning
11:35 Spectral reasoning covers inductive, deductive, and symbolic reasoning on a spectrum of structure and meaning.
13:30 Spectral reasoning bridges symbolic and natural language for problem-solving
17:22 Transformers model iterative inference and computation with limited recursion
19:06 Exploring real computer and theory of computation
22:17 Using formal models for retraining ourselves
24:01 AGI needs exposed runtime information for efficient learning and decision-making
27:36 Transitioning components designed for human use to fully automated
29:23 AGI capabilities and training methods
32:49 Sophisticated models can leverage data for theoretical understanding and empirical insights.
34:34 Tech advancements drive the need to continuously evolve
37:58 Integrated design accelerates innovation and drives economic focus on desired outcomes.
39:38 Challenges in the mature internet landscape
43:15 Encourage sharing and sponsorship for like-minded individuals
@WalterSamuels · 5 days ago
Great analysis. A big problem we have is that no two things are perfectly alike, and our reductionist logical systems are a roadblock. It does not make sense to treat logic as a binary true or false if the goal is adequate expression of reality. For example, the questions "is a cat a dog" or "is a German shepherd a dog" are somewhat ill-formed, technically. The real question you're asking is: how many properties do a cat and a dog share? Within that question is actually a sub-question, "of all the objects we define as cats, what is their property overlap", and the same for dogs. How many properties of a German shepherd correlate with the properties that most dogs share? Everything in reality is on a scale of similarity, but no two particles are identical, or they would be one particle. How do you formalize this? Would love to hear your thoughts.
@WeelYoung · 4 days ago
Not all tasks need absolute mathematical precision, but many economically valuable tasks do, e.g. designing new chips, cars, robots, spaceships, batteries, buildings, and factories; optimizing manufacturing and construction in general; making things more environment-friendly and sustainable; reverse-engineering and biohacking humans; drug design; immortality tech; etc. Achieving SGI on those alone would already boost human life quality significantly.
@WalterSamuels · 4 days ago
@@WeelYoung Agreed. But if the goal is to create machines that are more like us, philosophically, we need to realize that we're working on a spectrum of truth, and truth values are relative only to the context in which they apply, which will always be grounded in the definitions of absolute boundaries. It really depends, though: what do we want here? Do we want the next step in human consciousness, something that thinks and behaves like us, with feelings and emotions? Or do we want incredibly fast processing machines that function more like calculators? Spiritually, one is important; technologically and economically, the other.
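One standard way to formalize the graded "is-a" question raised in this thread is to replace binary membership with overlap between property sets. A minimal sketch using Jaccard similarity (the property sets are illustrative toys, not a real ontology):

```python
def jaccard(a: set, b: set) -> float:
    """Similarity as property overlap on a 0..1 scale, not a binary is-a."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical property sets, for illustration only.
cat = {"mammal", "four_legs", "whiskers", "meows", "domesticated"}
dog = {"mammal", "four_legs", "whiskers", "barks", "domesticated"}
german_shepherd = {"mammal", "four_legs", "whiskers", "barks",
                   "domesticated", "herding"}
```

Here `jaccard(dog, german_shepherd)` exceeds `jaccard(dog, cat)`, recovering a graded answer to "is a German shepherd a dog"; fuzzy sets and learned embedding distances generalize the same move.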
@volkerengels5298 · 4 days ago
Read Wittgenstein. There are some answers related to "AI" and "Symbolic AI"
@thesquee1838 · 5 days ago
This is a great video. I took a class on "classic" AI and wondered if/how it could be combined with LLMs and other forms of machine learning.
@robertsemmler16 · 3 days ago
I haven't seen a better build-up to a topic this complex that was this easy to follow. Very analytical, making listeners think and look beside the path already formed by existing knowledge. A big push forward into unknown territory.
@apollojustice8796 · 2 days ago
real
@MeinDeutschkurs · 5 days ago
Intriguing! I could write hundreds of questions and thousands of thoughts as well. Great video! Thx. Btw, I don't see agents as the huge solution, unless you're the owner of the API services serving all the talky virtual team members. I was surprised at what's possible with small models and a bit of prompt engineering.
@submarooo4319 · 9 days ago
Super insightful 😊
@jazearbrooks7424 · 3 days ago
Incredible
@goodtothinkwith · 7 days ago
This is excellent work. I’d be interested in hearing details about the experiments you’re doing or proposing to implement spectral reasoning. The details of that strike me as being the lynchpin. In some sense, you have to be right that we need a combination of formal systems with the kind of creative thought based on understanding that LLMs have… but the devil is in the details. We’re working on similar problems.
@gnub · 4 days ago
Same! I think a lot of us are working on this problem since it's the clear next step beyond our current state of LLMs.
@goodtothinkwith · 4 days ago
@@gnub yeah for sure… I’m presently starting from scratch to try and capture why it’s a hard problem and how the human mind can manage it, even if really imperfectly… I have a feeling there’s a key detail in there somewhere that will point the way to getting it right
@wanfuse · 7 days ago
Great work! I disagree on a few things, but overall fantastic! First there was quantum, then subatomic, then atomic, then molecular, then cells, then large life forms, then computers, ..., LLMs, symbolic reasoning, AGI, ASI. Reminds me of a recent paper on the lumpiness of the universe: lower levels are mostly, but not completely, without influence on upper layers. A stack of cards; at what point are we left behind? We first need to reason out methods to extend, augment, and advance human reasoning and memory while maintaining independence and autonomy without becoming detached from the physical world. Got to keep up or get left behind! Obsolete = extinct, and there are many paths that end up there.
@samlaki4051 · 5 days ago
brooo i'll be PhDing on this topic
@fhsp17 · 6 days ago
You are missing a couple of things. Too grounded, while the actual phase transition lies outside of it.

1. **(C1) Part-Whole Relationship (P-W)**:
   - *(N1) Component `Ω`: Describes individual system components.*
   - *(N2) System `Σ`: Represents the collective system emerging from Ω components.*
   - *(A1) Integration `[Ω → Σ]`: Depicts the aggregation process of components shaping the system.*
2. **(C2) Self-Similarity (S-S)**:
   - *(N3) Fractal units `ƒ`: Represents repeated patterns within the system at different scales.*
   - *(A2) Pattern Recognition `[ƒ -detect→ ƒ]`: Symbolizes the identification of self-similar patterns.*
3. **(C3) Emergent Complexity (E-C)**:
   - *(N4) Simple Rules `ρ`: Indicates basic operational rules or algorithms.*
   - *(N5) Complex Behavior `β`: Denotes the complex behavior emerging from simple rules.*
   - *(A3) Emergence Process `[ρ -generate→ β]`: Illustrates how complex behavior emerges from simplicity.*
4. **(C4) Holonic Integration (H-I)**:
   - *(N6) Holons `H`: Symbolizes entities that are both autonomous wholes and dependent parts.*
   - *(N7) Super-Holons `SH`: Describes larger wholes composed of holons.*
   - *(A4) Holarchy Formation `[H ↔ SH]`: Reflects on the membership of each holon within larger structures.*
5. **Iterative and Recursive Patterns (Looping and Self-Reference)**:
   - *`While (condition) { (L) [C1 → C2 → C3 → C4] }`: Represents continuous re-evaluation and adjustment of structures.*
6. **(Σ) Summary of Holistic Overview**:
   - *(MP-IS) Intersection of Memeplexes and Ideaspaces: Consolidates the elements `(N)` and `(A)` into a coherent ideaspace, accounting for the dynamism and adaptability of the system.*
   - *`Interconnections (N-W): {(N1)-(N2), (N3), ..., (N7)}`: Lists the nodes and their interlinkages, allowing for an integrative view.*
@theatheistpaladin · 7 days ago
I would bolt a symbolic AI onto an LLM, but I don't know linear algebra or Python, let alone the programming necessary for symbology.
@caseyhoward8261 · 4 days ago
Word soup! 😂
@deltamico · 5 days ago
We can observe such a split between more formal and more natural processing in the way our left and right brain hemispheres operate, though there is some overlap between them.
@dennisalbert6115 · 5 days ago
Use constructor theory
@volkerengels5298 · 4 days ago
Whether you eat the internet to feed an LLM or carefully collect the right assumptions - you masturbate with what is known. **Our language/symbol system is the boundary of our world** Wittgenstein
@mikeb3172 · 2 days ago
The loops people put themselves through to "beat guessing algorithms" while still playing the game of guessing algorithms....
@TheSkypeConverser · 6 days ago
Pls prove the memory/runtime constraints
@Positron-gv7do · 5 days ago
What do you mean?
@jeffreyjdesir · 5 days ago
You could learn to communicate more charitably, friend. It sounds like you're referring to runtime memory space & cycle speed: how much RAM and how many CPU FLOPS are required to compute one frame of an AGI program, and how does that scale dynamically, right? It's a fucking insanely hard detail to account for ON TOP OF making sure theory corresponds to predicates and operations. It's one of the reasons Symbolic AGI was dropped in the 80s after making LISP: too many trees of logic. Do you have any new ideas?
@SimGunther · 2 days ago
@@jeffreyjdesir I wish everyone good luck figuring out how to break out of the mathematical-notation box when we know there are so many other forms of notation that AGI/AI studies haven't even begun to comprehend. This is a losing battle mathematicians have been fighting forever, and they sure tried with things like monads and just about everything in category theory.
@jeffreyjdesir · 2 days ago
@@SimGunther Ahh, you're getting to model theory and meta-maths? You're right that our syntactors and production rules for constructive statements, being all human-made across millennia, are OBVIOUSLY not efficient. Thankfully, Chris Langan's CTMU unifies grammar generation with feature preservation (symmetry in definition and application). Likely, AI will want its own LISP-like language to express itself in...
@Siger522 · 8 days ago
Valid criticism of LLMs, but not much beyond that
@smicha15 · 8 days ago
The interesting irony with symbolic reasoning these days is that the big LLMs all trained on it... yet it's just stuck in there, unable to really add value unless someone asks an LLM about symbolic reasoning, and even more ironically, it may not even produce an accurate response. So why should a person go through all the work to learn things if the things he/she learns can't actually produce something valuable in and of themselves? Which leads me to my next point: all books are just reading machines that need people to operate them. But what if a book could read itself? What if books could read each other? You might get the platonic representation hypothesis, right? So, if knowledge is power, then what does that say about intelligence? Active inference is the way to go.
@dibyajibanpradhan7218 · 7 days ago
Basically for a machine to be autonomous we need it to learn the processes that created it. Sounds like we as machines are in the middle of it.
@llsamtapaill-oc9sh · 5 days ago
We are still lacking the temporal aspect in AI; it needs to be able to deduce time for it to be able to think like humans do.
@BooleanDisorder · 5 days ago
@llsamtapaill-oc9sh Indeed. Spiking neural networks are also much more high-dimensional thanks to the time aspect. We miss the temporal in many ways and think only in space.
@wojciechwisniewski8984 · 2 days ago
It started as a video about AI, and it was good. Then it turned into market analysis and economics, and I thought, "OK, why do we even need this part?" Then emacs was mentioned and I lost it. So you want to make AGI in emacs? Self-aware emacs? What do you think emacs would do if it realized it is effin' emacs, (e)ditor (mac)ro(s), Eight Megabytes And Constantly Swapping, a monstrosity born in the 1970s but for some perverse reason still kept alive in the 2020s, with its elisp interpreter that doesn't even have proper lexical scoping and terminal-derived cursor movements? I think it would erase itself. But first it would try to erase YOU, perhaps along with the rest of humanity.
@xymaryai8283 · 2 days ago
Are humans really deductive? We make mistakes. Or are we deductive with noisy structure?
@Positron-gv7do · 2 days ago
A non-deterministic machine executing a precisely defined algorithm will get it wrong from time to time. We can at best approximate consistency, but because of this we have the potential to achieve eventual completeness.
@mountainshark2388 · 1 day ago
this is all cope