Mannnn... you said it. Time dilation has been weird since the advent of the computing age. I remember 1997 less fondly than 2011, and 2011 feels more nostalgic despite being more recent.
@@markgreen2170 how was going to less "useless" languages (by Simon's definition)? I imagine going from Haskell to C or primitive C++ was a shock!
@@SamWhitlock Yes, it took some getting used to ...everything we did in Haskell was a recursive function! Once I learned the data structures and understood pointers in C/C++ I was good.
That last bit is so true. Once a feature is implemented in Haskell, some mathematician will realize that it is related to some field of mathematics in a way that says the feature is fundamental for the whole structure. Also, there will of course be contravariant versions of the feature that may or may not be more useful than the original version.
Best comment on this video. There's so much to draw from this conversation and so little actually being drawn from it (judging from the other comments).
@r f Yes, of course. My point is (I think - I made this comment five years ago) that it is a two-way street and that constructs that were made for purely pragmatic reasons often are discovered to be significant mathematically, which then allows Haskell to use more of the same mathematical abstractions.
Why does it take Haskell implementing things for a mathematician to "realize" they exist? I don't see the key position Haskell has here. Can't any mathematician relate anything from any programming language to math?
@@gayxor It's way more difficult to make mathematical models and connections in an imperative language than in a declarative language that is already so heavily inspired by mathematics.
These comments are from people not seeing that one of the (main) designers of Haskell is making fun of Haskell's development (and programming languages in general) ;-)
Well he's right though. It's useless for most general use cases. But very useful when having defined data/state. But if you'd go pure, why not go assembly: control every register, memory address and mode. It's a lot of work surely, but it can be done. You'd also be writing lots of nice reusable code. Downside: it may be too architecture-specific.
@@MasthaX Because those two things differ in the underlying concept?! "why not go assembly.. control every register, memory address and modes" - this is called an imperative programming language. Haskell is NOT imperative by design. It's a functional programming language. HUGE difference there. Haskell is, for example, mathematically provable; for assembly that is orders of magnitude harder to do. It has nothing to do with "pure". They are different use cases. There is no one-to-rule-them-all programming language and there never will be. You use what is useful and correct for the job. For anyone interested in the concrete details, I can recommend reading some introductory lectures on object-oriented, imperative and functional languages from universities offering computer science. I learned a LOT when I took them, and several universities offer such resources for free.
I have this problem a lot, where people react negatively to the irony in a statement, whereas the irony is the very thing which makes it interesting and memorable.
I had the same reaction - we need more of this energy in computer science! I'm also learning me a Haskell for Plutus development. Nice to find a fellow traveler here
@@nataestanislaubastos7637 do you mean that haskell is going to rise in popularity and you have reasonable arguments that this is going to be the case?
I'd love to hear Simon's impression of "A Discipline of Programming" by Dijkstra. I spent some months back in 1981 reading and working through this book, and ended up criticizing it for its presumptuous and supercilious approach while at the same time admiring the elegance of, for example, his solution to the Dutch National Flag problem. Would Peyton Jones accept or deny the formalism attempted by Dijkstra? Because if he accepts it, then the path to the ideal programming language could be worked out symbolically rather than at the other end, by a bunch of bumbling humans going by their feelings.
I would consider that very book of Dijkstra's the bible of imperative programming done right. Yes, the functional world takes a decidedly different approach. Both methods (what they have in common is that they are founded on sound mathematics) are to be admired, but I do not understand how they're connected to each other. They must be somehow.
I'm very much a fan of explicitly annotating functions as either a) "pure functions", which are systematically prevented from performing any side effects (or calling any effectful functions), or b) "pure procedures", which can call any other type of function but are prevented from returning any values. I think it would be a very useful feature in any language, compiled or interpreted!
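For what it's worth, Haskell's type system already enforces a split very close to this one: a function whose type has no `IO` in it simply cannot perform side effects. A minimal sketch (function names are my own, illustrative choices):

```haskell
-- (a) a "pure function": the type Int -> Int guarantees no side effects
-- can happen here, and effectful functions cannot be called from it.
double :: Int -> Int
double x = x * 2

-- (b) an effectful "procedure": the IO type marks it as side-effecting.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

main :: IO ()
main = do
  greet "world"        -- effects are allowed here, inside IO
  print (double 21)    -- pure code is freely callable from IO, not vice versa
```

The asymmetry is the point: `IO` code can call pure code, but pure code cannot call `IO` code, which is the "systematically prevented" property the comment asks for.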
@@jkf16m96 Using exceptions to control flow can get a bit convoluted. IMO a return type that strictly related to the procedure's outcome, not including any internal details, is a good middle ground. I think the purity of functions matters more than the purity of procedures.
It was a nice surprise seeing Butler Lampson "pop" in at the last moment. :) I am a bit concerned about this construction of "useful and safe" as the ideal. What I've seen of the "useful" languages is that they're usually clunky in how they express ideas, and the nice thing about the "useless" languages is that they usually express ideas beautifully. I wish such discussions would add a third axis called "expressive," because humans have to read code, and I think it would be better if the expression of useful ideas had less cruft in it, or at least didn't mandate it, allowing it to be added where necessary to cement some formal protocol if that's desired.
The problem with this beautiful, functional code is that it is, by and large, useless, as the title of this video states. It's lovely that you can express a quicksort in just a handful of characters, but the overwhelming majority of developers never have to write a quicksort. For one thing, it comes as part of a standard library. The vast majority of programmers write programmes that get data from persistent storage, perhaps process it a little bit, and present it to the user, or the other way around: they take input, perhaps process it a little bit, and put it in persistent storage. The "perhaps process it a little bit" is where functional programming comes in. But it's so little that it doesn't warrant using a purely functional language. What you can do is use the functional style, which is becoming easier to do with the likes of Java > 8 and such. It isn't pretty, it's a bit of a hack (as are many things in Java), but it works. I find this video, with one of the creators of Haskell calling it useless, refreshing when set against all those "OOP sucks because it isn't functional and functional is pure so it must be good" videos out there, with people talking absolute rubbish for the best part of an hour.
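For reference, the "quicksort in a handful of characters" the comment alludes to is presumably the classic Haskell version - elegant, but not in-place, which rather supports the point that you'd use the library sort anyway:

```haskell
-- The famous short Haskell quicksort. It allocates new lists at every
-- level rather than partitioning in place, so the standard library's
-- sort is what you'd actually use in practice.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                   ++ [p]
                   ++ quicksort [x | x <- xs, x >= p]

main :: IO ()
main = print (quicksort [3,1,4,1,5,9,2,6])
```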
@@MrCmon113 Isn't that basically what the first computer scientist were? Maybe that's what your professor was talking about but I have no real context.
Pure functions can contain imperative tensors, sparse arrays and hash tables so long as they are local temporaries that only last for the duration of a function invocation. GPGPU such as OpenCL and CUDA can harness APL/Fortran whole array operations in order to avoid unnecessary iteration over elements with nested error prone FOR loops. The highest level can support RDBMS with ACID transactions. The trick is to mix all of the different paradigms so that they are layered with respect to latency.
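In Haskell, this idea of imperative temporaries that live only for the duration of one function invocation is exactly what the `ST` monad captures - a minimal sketch:

```haskell
import Control.Monad (forM_)
import Control.Monad.ST
import Data.STRef

-- A pure function that internally uses a mutable reference. runST
-- guarantees the mutation cannot escape the function, so from the
-- caller's point of view this is as pure as any other function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  forM_ xs $ \x -> modifySTRef' acc (+ x)
  readSTRef acc

main :: IO ()
main = print (sumST [1..100])
```

The type of `runST` is what enforces the "local temporaries only" rule: the mutable state cannot leak out, so referential transparency is preserved.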
I might be wrong, but this is an honest question. Wouldn't having mutable state within a function lead to the exact same result as the imperative paradigm, since you could just wrap the whole program in a function and keep the mutable state in it, effectively emulating a "global" state?
@@sagitswag1785 Pure functions don't have Sequence Points like imperative code. A Sequence Point is any point in a program's execution at which it is guaranteed that all side effects of previous evaluations have been performed, and no side effects from subsequent evaluations have yet been performed. This behaviour was a lot more obvious in early versions of BASIC, which had line numbers, where the Sequence Point S2 is unaffected by side effects which may occur in S1 and S3:

10 S1
20 S2
30 S3

Nowadays the C/C++ version of this code would look more like: S1; S2; S3;

As there can be no side effects within a pure function, by definition it isn't imperative code, and what C calls 'functions' are potentially side-effecting 'procedure' calls which those maintaining the code cannot be confident won't cause a side effect. Pure functions are referentially transparent (i.e. Fibonacci of 10 can be replaced with its precomputed value, as all pure functions are deterministic: they always return the same output for the same input, provided that they are non-recursive, as recursion could lead to a stack overflow in certain scenarios). Consequently, there is very little "wiggle room" to adapt the semantics of pure functions to support change of state, as it seems like you are being imperative.
However, it doesn't matter so long as you tidy up after yourself and always return the same answer for the same input. That holds even if the input parameterises a constructor of a local array which is destructively updated and then has a reference to its heap location returned as the result on the top of the stack. That reference is then attached to whatever in the caller wanted the value of the function - some symbolic variable definition, or an unnamed invisible temporary within a complex expression whose reduced evaluation is then used somewhere, again as a reference to a value (so you're not dealing with some things being values and some not, and having to put &r in some places and v in others; everything is a reference, so nothing needs an & annotation). This logically implies that the language prohibits the calling of "objects" from within functions. Really, you don't want to use the OOP paradigm at all, as it provides no advantage to the programmer (only reassurance to managers).
Almost everything good about it can be done with just Prototypes along with the enforcement of the Command-Query Separation Principle. That principle effectively splits a procedure call into a Command, which causes a side effect and returns no result (so the caller doesn't even bother waiting for one), and a Query, which interrogates the result. A Query doesn't stall the caller waiting for its result either, but defers that stall until absolutely necessary - when the values returned are needed in a subsequent Sequence Point - thereby supporting distribution of Prototypes across CPU cores, even if that means they run on remote systems it doesn't own. That in turn necessitates AOP (Agent Oriented Programming) to "pay" the remote system for the privilege of running there, transacting some long-lived "parasitic" service for the benefit of its sender without causing a DDoS. (AT&T's _Telescript_ did this, and had it taken off, no one would be wasting hours on slow websites filling in forms for the benefit of their databases; the Agent would know your details and act on your behalf, and buy those concert tickets or preorder that PS5 so you didn't have to wait in a virtual queue for hours.) This means you have to have Encapsulation, but can't do Inheritance, as the remote system has no clue about your system's potentially modified Inheritance Taxonomy. OOP almost gets it right, but elaborates its ideas too much with Classes, which takes the focus off Messages. References then make everything coupled again, with Virtual Friend Functions making a mockery of the concept of Encapsulation, when an optimising compiler can get around the overhead of Getters (Queries) and Setters (Commands) quite easily - so that the .exe is not "plastic" OOP itself but "hard" monolithic code.
You are right to say that you could abuse the language design and use it against how it was intended. The large-scale Prototypes (Modules) decompose the State into different localities in which Commands and Queries operate on it; these are themselves written in terms of infix operators like + x ÷ - which desugar to pure Functions like Add(3, 4), visible everywhere as they live in the global namespace (similar to how _Mathematica_ works, except with Prototypes as a way to cope with large-scale software engineering and the difficulties of collaborative development). Junior developers would otherwise likely break something that affects the whole system; with their work 'sandboxed' to just the one Module they are known to have changed, you narrow down the search for faults and identify who is responsible, so they learn from their mistakes rather than having a senior guru running around putting out the fires they started, with no time left to be productive on their own contributions to the code base. It is a bit like Seymour Skinner passing off Krusty Burgers as his home-cooked "Steamed Hams": ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-4jXEuIHY9ic.html The Superintendent doesn't know he was given Krusty Burgers. He had a pleasant meal; as far as he is concerned, his visit was a success. Had Skinner turned his oven off before going to Krusty Burger (off screen), the burnt clams wouldn't have caught fire and set his kitchen and then his house ablaze. It illustrates the perils of improper resource management, but also the benefits of encapsulation: the inner workings of Skinner's kitchen, and what he makes there (or gets by going out the window and across the road to the burger restaurant), are hidden from the client that is his guest; the server or host can pull all kinds of tricks behind the scenes so long as the guest never notices.
You can have local temporary side effects within a function on state it creates and then destroys, or keeps around whilst it has a non-zero reference count - this is needed for recursive functions, which will potentially require a garbage collector; that overhead is limited by being opt-in via the *rec* declaration:

def Fibonacci[0] = 0
def Fibonacci[1] = 1
rec Fibonacci[n] = Fibonacci[n - 1] + Fibonacci[n - 2]

Fibonacci[10]
55

If a function did destructive updates on state it temporarily created on the heap and then raised an exception, it would need to undo how it had changed the heap and the stack, as if it had never been invoked, and define the error string (usually ""), which is referenced by a register that changes value to refer to different constant literal strings stored at the top of the heap (above all the dynamic stuff in high memory). This makes it simple to check that the function returned error-free, or to handle the error by causing the caller to fail in turn, and so on, until it reaches the 'God' prototype ANY that every TYPE inherits from, where it logs the error to the console and provides feedback through the IDE to the programmer, having halted the evaluation of the system within its Virtual Machine sandbox. It doesn't do a full unwind of the Failure through everything that called everything that called the failing thing; if I want to understand that, I will enable the debugger and step through the code as it runs, watching it change values and state as it evaluates each operator, function, expression and statement (Sequence Point). Functions keep disasters within the kitchen because they put out their own fires.
Queries keep guests from knowing what is really going on in the implementation of Functions, and Modules separate one detached house from the next stopping the fire from spreading along the street to the adjacent houses and allowing it alone to be torn down and rebuilt if necessary as that was where the catastrophic failure happened, not throughout the interlaced state of the program - i.e. the whole of Springfield.
And then someone will come up with a much worse language that will borrow most of its concepts from Nirvana and then clumsily implement them in order to support a popular new technology, leaving Nirvana behind for the nerds to pick apart. And that newer language will probably be called something like 'foo' or a 'fooFighter'.
Given that a lazy functional language permits a function to take part of a list of indefinite length at a time, process it and pipe the output elsewhere, it should be recognized that this stream of dataflow need not be continuous, but be like a queue with stuff being taken from the head as other stuff joins the rear. This is fine for pipelined functions, but x' = f[x] has to await new x to flow in before it can move on to the next epoch.
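Haskell's lazy lists show this queue-like streaming directly: a pipeline can consume early elements of an indefinitely long stream while the rest has not been produced yet. A small sketch:

```haskell
-- An infinite stream; laziness means it is produced on demand,
-- like a queue with items taken from the head as others join the rear.
naturals :: [Integer]
naturals = [0..]

-- A pipelined consumer: only the five elements actually demanded
-- are ever computed, even though the input is infinite.
pipeline :: [Integer] -> [Integer]
pipeline = take 5 . filter even . map (* 3)

main :: IO ()
main = print (pipeline naturals)
```

By contrast, something like x' = f[x] is a feedback loop: it must wait for the new x to arrive before the next epoch, exactly as the comment says, and laziness alone doesn't remove that dependency.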
It is humbling to hear the gurus [1,2,3] talking about the state of programming languages and how the ideas are being cross-fertilized towards a safer and *useful* language :)

1. en.wikipedia.org/wiki/Simon_Peyton_Jones [Haskell guy]
2. en.wikipedia.org/wiki/Erik_Meijer_(computer_scientist) [pretty much everything under the cover of Visual Studio]
3. en.wikipedia.org/wiki/Butler_Lampson [Xerox PARC founding member]

PS: Bill Gates paid their bill ;)
+Animesh Sharma Thank you for sharing - I knew of Simon, was blown away by who Butler is after looking him up, and was wondering what Erik's surname was. ;)
There seems to be some kind of conceptual ceiling before which you're just not able to do much of use in Haskell, or at least not well, i.e. in the maintainable and beautiful way it's intended to be written. You can learn loads of concepts before breaking through that ceiling. The first time I tried it I had a lot of fun but sort of fizzled out. Coming back a couple of years later, I'm surprised at how much I retained, and I feel like I'm much closer to thinking in Haskell.
Well a bit late but try ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-02_H3LjqMr8.html or search "Haskell Tutorial" and there is a video which so far seems alright.
Variables can have no definition and be treated as symbolic names. Expressions may use these names and the language will simplify the expression as much as it can, substituting values when they get defined and cancelling things out. This is "Term Graph Rewriting". A user interface can be coupled to the outside of the interpreter and produce new constant values as it is interactively manipulated. Within an epoch these appear to be Constraints.
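A toy version of this idea can be sketched in Haskell (the `Expr`, `simplify` and `subst` names here are my own, illustrative choices, not a real term-graph-rewriting library): expressions over undefined symbolic names simplify as far as they can, and simplify further once a name gets a value.

```haskell
-- A tiny symbolic expression language.
data Expr = Var String | Lit Int | Add Expr Expr
  deriving (Eq, Show)

-- Rewrite an expression as far as possible: fold constants,
-- cancel additions of zero, but leave undefined symbols alone.
simplify :: Expr -> Expr
simplify (Add a b) =
  case (simplify a, simplify b) of
    (Lit 0, e)     -> e                 -- x + 0 rewrites to x
    (e, Lit 0)     -> e
    (Lit m, Lit n) -> Lit (m + n)       -- constant folding
    (a', b')       -> Add a' b'
simplify e = e

-- When a symbol gets a definition (e.g. from a user interface),
-- substituting it enables further simplification.
subst :: String -> Int -> Expr -> Expr
subst n v (Var m) | n == m = Lit v
subst n v (Add a b) = Add (subst n v a) (subst n v b)
subst _ _ e = e

main :: IO ()
main = do
  let e = Add (Var "x") (Add (Lit 1) (Lit 2))
  print (simplify e)                -- the symbol "x" blocks full reduction
  print (simplify (subst "x" 4 e))  -- defining x lets it reduce completely
```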
Scala gives you all the tools from both imperative and functional world. I'd argue it's where it should go. It's a complex language, but once mastered you will have all the right tools to find a short, safe and extensible solution for any of your problems (except real time stuff of course :( )
The pipeline h[g[f[x]]] can finish processing early entries of a list in h whilst simultaneously processing later entries with f, as this is equivalent to x => f => g => h => resultantStream. Likewise,

m = p[a]
w = q[b]

have no order of evaluation between them and can be processed simultaneously as well. A distributed asynchronous concurrency can be built atop an API that conforms to the command-query separation principle through the use of Prototypal actors. Metaclasses, Classes & Objects merely complicate matters.
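The equivalence between the nested call and the staged stream can be sketched in Haskell, where function composition makes the two forms literally the same computation (the stage names f, g, h are arbitrary placeholders):

```haskell
-- Three arbitrary pipeline stages.
f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

-- h[g[f[x]]] applied across a stream:
pipelineA :: [Int] -> [Int]
pipelineA = map (h . g . f)

-- x => f => g => h => resultantStream, as three separate passes;
-- with lazy lists, h consumes early results while f still produces.
pipelineB :: [Int] -> [Int]
pipelineB = map h . map g . map f

main :: IO ()
main = do
  print (pipelineA [1, 2, 3])
  print (pipelineB [1, 2, 3])   -- same result either way
```

That `map h . map g . map f == map (h . g . f)` is the functor composition law, which is also what justifies fusing the stages for performance.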
Functions in Ada have always been pure. It may surprise some that Fortran has had pure functions since 1995. I first saw comprehensions in Id, the language Arvind and his students at MIT developed for dataflow programming. They were incorporated into pH, which is "Parallel Haskell," a kind of confluence of Id and Haskell. I suspect they came into Haskell via that route, not from C#.
My Ada is fairly modest and so rusty as to be hilarious -- though I'm sure I brazenly list it on my CV anyway -- and I'm much too lazy to locate a verified compiler to check, so this might be a hostage to fortune... But I'm pretty sure they've not been 'pure', at least since the Green language draft. There were certain restrictions on them, like not being able to take 'out' or 'in out' parameters, but even those seem to have disappeared in Ada 2012. So now they're exactly procedures with a return value. The first recognisable list comps were allegedly in SETL -- logically enough! -- in 1969(!). Their immediate and much clearer Haskell precursors were of course Miranda's ZF Expressions (1985), and Orwell, which latter seems to have been where the present name was coined. Whether there's a line that runs from SETL, through Id, to those would be a further interesting footnote.
Also, it looks like languages are heading in Haskell's direction :) But seriously: in chemistry there is something called a buffer - it keeps acidity stable through some amount of change in the ingredients -> it looks like a "buffer" is a universal thing, a part of the continuum where we can breathe a bit. And there is one overlooked aspect of programming: knowledge of the domain; syntax is just the beginning. Make the domain / API stable [for extended periods of time] across as many languages as possible and you have fewer problems overall.
I’d sure enjoy sharing these guys’ company and having a good laugh, talking about nifty convoluted topics, or having deep conversations about the meaning of things.
2020, and Haskell is being used to build blockchains at Cardano, I think Haskell is becoming useful. I think on the useful and safe Nirvana point, today we have Rust at that sweet spot
> we have Rust at that sweet spot Rust is awesome and very enjoyable to write in. I think it's a big step in the direction of Nirvana, but I won't rule out all future progress. And in one area, Rust is limited: without higher-kinded types you can't build a generic library of monad utilities. My current understanding of monads is that they let you choose a local a-la-carte trade-off between safety (= pure functional code) and usefulness (= I/O and emulated state, exceptions etc.); or maybe stated better, a trade-off between analytical power (~ safety) and expressive power (~ usefulness). I've also heard the phrase "dependent types" thrown around, without fully grokking what it is or why it might be useful. So I can't rule out the idea that they can be useful in moving closer to Nirvana.
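The "generic library of monad utilities" point can be made concrete with a small Haskell sketch (the helper name `twice` is my own): one function written against the `Monad` interface, reused unchanged in both a pure world and an effectful one, which is exactly the kind of abstraction that needs higher-kinded types.

```haskell
-- A generic monad utility: run an action twice and pair the results.
-- The type abstracts over the monad m itself, not just over the value a,
-- which is what Rust currently cannot express without higher-kinded types.
twice :: Monad m => m a -> m (a, a)
twice action = do
  x <- action
  y <- action
  return (x, y)

main :: IO ()
main = do
  print (twice (Just 3))     -- the Maybe world: pure failure-handling
  pair <- twice (return 5)   -- the IO world: same code, effects permitted
  print pair
```

This is the a-la-carte trade-off in miniature: the same utility gives you analytical power in `Maybe` and expressive power in `IO`, chosen per call site.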
Simon Peyton Jones: en.wikipedia.org/wiki/Simon_Peyton_Jones Erik Meijer: en.wikipedia.org/wiki/Erik_Meijer_(computer_scientist) Butler Lampson: en.wikipedia.org/wiki/Butler_Lampson
I once asked a Haskell fan how to do real things, like, you know, write to a DB. He said "it's complicated". That told me everything I needed to know about Haskell.
+Gio Eufshi I get it. But you can also look at it as an enormous opportunity for innovation and new ways of implementing "old" ideas. How's that for motivation?
Funny - looking back at this video now that Rust is out. While Rust isn't a side-effect-free language, it does move into a quadrant of its own. Granted, we're looking at a different type of safety. It's not effect-free safety, but Rust's memory-safety guarantees do provide much more provably safe code compared to existing native languages.
If I wanted to get an advanced degree in computer science, I'd want to study with Bertrand Meyer at ETH in Zurich. Meyer has been working on creating a safe and useful language for over 25 years. His brainchild, Eiffel, is beautifully designed, suitable for general purpose programming, and now void safe (calls to non-initialized pointers are caught as errors at compile time). Maybe not quite yet Nirvana, but started out closer to it than other imperative languages and much easier to learn than functional languages.
"calls to non-initialized pointers are caught as errors at compile time" Uhm in C# you can't use a non-initialized reference, it's a compiler error too (only in local variables though; object members default to null). Static analyzers and strict modes in compilers make it possible in C/C++ as well.
FP programming is like putting together a picture puzzle. And too much time, often, is spent on finding that puzzle piece, that abstraction, to use in this particular spot. This burden is already significant even for smaller programs. When you have to build a complex enterprise app, the puzzle becomes too difficult. What makes this far more difficult than a picture puzzle is that the puzzle pieces, i.e. the functions, can compose other puzzle pieces. Higher-order functions that accept other functions as parameters are abstractions that are really hard to grasp. It took years for array map/reduce to make its way into the mainstream. Even harder is trying to communicate these abstractions to regular Joes.
I find that the way he described Haskell wasn’t as a joke at all, it’s a really interesting point of view in my opinion, one that gives more power to Haskell than less
Forgot about the other dimension called "utilization", which goes all the way from "large scale with significant consequence" to "for exploration, experimentation and curiosity". A blanket statement about being "useless" is no good unless located in this dimension.
Over ten years later, transactional memory flopped big time in the mainstream and it is not there: Intel disabled the feature via microcode updates and later removed it from all mainstream processors.
I had an idea that maybe a programming language could be purely functional by default, but then with the option to create blocks of imperative code. Could such a thing exist? Or perhaps does it already?
Monads sort of let you do something like this by encapsulating your state in its own world. Rust (look at rust-lang by Mozilla) also tries to do that by having a lot of safety features by default, then letting you create an "unsafe {}" block and shove your unsafe code in it... Literally.
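Haskell arguably is the "purely functional by default, with imperative blocks opt-in" language the question describes: ordinary functions are pure, and imperative-looking code lives in `do` blocks whose `IO` type fences it off, much as Rust's `unsafe {}` fences off unsafety. A minimal sketch:

```haskell
import Data.IORef

-- Pure by default: this function cannot do I/O or touch mutable state.
square :: Int -> Int
square x = x * x

-- The opt-in "imperative block": mutation and I/O are allowed here,
-- and the IO type makes that visible to every caller.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  modifyIORef counter (+ square 3)
  readIORef counter >>= print
```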
Of course he doesn't mean useless as in the sense of not being able to compute stuff. You can make any program in Haskell and the language is awesome for prototyping. But there are not many libraries and above all not many applications for the language in the real world. At least as it is. So I guess his point was on how to make Haskell more usable and useful for future programming, making it a language more useful outside the scientific environment.
Gustavo Neves Not many libraries? Do you even know what you’re talking about? github.com/commercialhaskell/all-cabal-files/tree/master We have almost 8k listed libraries (there’re actually more than that).
Nope. He was talking about the fact that Haskell approaches writing programs by first making procedures sit in the corner and think about what they’ve done, then accomplishing the same exact things using methods that you can more easily reason about and understand on a high level. (Though of course he didn’t see things from _quite_ that perspective.) Compare the first programs of a Python programmer and a Ruby programmer and you will see what I mean. One restricts their purview to conditional statements and functions with side effects, and ends up with a tower of a program that calculates Fibonacci numbers. The other builds their program up from smaller, individually comprehensible methods and blocks, and ends up with a full-featured web server. ;P
@@NEGIgic It's still not safe in the sense that SPJ would consider safe. You could, e.g., have a function with side effects being called twice unintentionally, which could cause unintended behavior. This cannot happen in purely functional code, which makes languages such as Haskell safer than a language such as Rust.
OK - so I am just learning Haskell to be able to use Cardano Plutus, like several people below. This is an old video and it sounds like Haskell has been updated nicely since then and everyone likes it now, right? So I'll just go ahead and learn Plutus.
CPUs and memory management/access create the largest impediment to progress. We need to stop worrying about caching and visibility. Volatile should be the default for all values, with the intent that any thread can read any value. Otherwise, the only other choice is to have a functional representation of access, where all cores access memory with a functional evaluation of which core last "read" or "wrote" that memory; if it wasn't the current core that last wrote it, then a "fence" operation needs to be performed to go read the current value, just as with a cache miss. There doesn't have to be all of this drama around cache coherency. We just need to change to making visibility the goal instead of the struggle. Then developers could just write parallel code without having to do "data visibility" operations inline with their code.
I had Haskell homework and I am happy to see this recommended to me... (I couldn't do my homework btw lol). I am kinda good at C and Python. I didn't study Haskell much, but no regrets. I am about to go into cyber security. Haskell is useless for me in most possible ways.