Brian and his Valhalla team have just discovered the E=mc² of Java. It really is so beautifully elegant. Now they have a little matter of general and special relativity to iron out and then they’re done. I’ve been watching this space for the last 10 years and am super impressed with how clean and simple you’ve made it. Well done!
Top 5 things that I am incredibly excited about in java: Type enforced nullability (thanks Valhalla) Project Valhalla Project Valhalla Project Valhalla Project Valhalla
@curio78 Hearing from whom? Also, some features are already in preview in preparation for Valhalla. There is a prototype JDK version they released which I've tested.
@curio78 Saying they haven't figured out serialisation, based on the talk, is quite a mischaracterization. Even the talk doesn't say that. You seem to have misinterpreted it. They already figured it out, and another talk was published a few hours ago which highlights the path value objects will take in serialisation, which is through strict initialisation. The approach uses the constructor of the value type, similar to how records do serialisation but with further strictness, and now the developer specifies which constructor to use.
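For context, plain records already behave this way today: deserialization runs back through the canonical constructor, so validation applies on both paths. A minimal sketch of that existing behavior (the `Range` record and helper methods are illustrative, not from the talk; the stricter value-class mechanics go beyond what standard Java shows):

```java
import java.io.*;

public class RecordSerDemo {
    // Deserializing a record invokes its canonical constructor,
    // so the lo > hi check below also guards the deserialization path.
    record Range(int lo, int hi) implements Serializable {
        Range {
            if (lo > hi) throw new IllegalArgumentException("lo > hi");
        }
    }

    static byte[] write(Object o) throws IOException {
        var bos = new ByteArrayOutputStream();
        try (var oos = new ObjectOutputStream(bos)) { oos.writeObject(o); }
        return bos.toByteArray();
    }

    static Object read(byte[] bytes) throws IOException, ClassNotFoundException {
        try (var ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trips through the canonical constructor
        Range r = (Range) read(write(new Range(1, 5)));
        System.out.println(r);
    }
}
```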
@curio78 Okay, then explain how brainstorming should work and how they are doing trial and error! It's easier to type paragraphs than to refactor the whole JVM to provide these features. "No, I heard him speak." I didn't hear him speak about it in the video. Wait, out of curiosity: which programming language do you mainly use? I suspect it's a specific one where I tend to see this kind of negative discussion.
Great insights into what is under the hood, and into how the improvements are thoroughly thought through and coming step by step into the language. Great thanks to all the people who are working on that, with so much passion and so much attention to detail, which in the end leaves a very well-rounded impression of the language.
I love how this is the first JVMLS video I can't watch now but have to wait for. Edit: To be clear, this comment is written in good fun. On a more serious note I'm thoroughly impressed with the work the Java team has done in the past few years and as a long time Java dev, I appreciate and much prefer the "get it right" approach and mentality.
Wow, I am impressed! There are obviously ingenious and brave design decisions involved. This seems to be another huge step forward. Maybe we can see some of this already with the next LTS, Java 25?
I am somewhat disappointed that "integrity first" means we don't get flattening for complex values unless we make our code messy with explicit annotations/interfaces/whatever. If the language has a volatile keyword specifically to designate things that are intended to be updated from multiple threads, why isn't everything else that doesn't have this attribute allowed to tear? It seems like a strange choice to disable all these optimizations just because someone out there may be writing poorly protected multithreaded code.
My opinion on operator overloading is that there should be no operator overloading in Java. My reasoning is that operator overloading reduces clarity without bringing any material benefit (syntax sugar notwithstanding). Operator overloading is confusing (what does "+" mean?) and incomplete (far more operators are needed than are present in any language). And I fail to see the benefit of writing "p1 + p2" instead of "p1.add( p2 )" for anything other than pre-defined primitive types. Others may have a different opinion, and I am fine with that, but "no operator overloading in Java" is mine.
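For what it's worth, the method-call alternative this comment prefers looks like this in today's Java (a sketch; `Point` and `add` are illustrative names, not anything proposed in the talk):

```java
public class NoOverload {
    // A plain method call in place of an overloaded "+" operator
    record Point(double x, double y) {
        Point add(Point other) {
            return new Point(x + other.x, y + other.y);
        }
    }

    public static void main(String[] args) {
        Point p1 = new Point(1, 2);
        Point p2 = new Point(3, 4);
        System.out.println(p1.add(p2)); // Point[x=4.0, y=6.0]
    }
}
```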
Sorry, so what happens to `ArrayList`? From my watching of this video, it seems like I will be able to write `ArrayList` and get a lowering optimization that will back the resulting instance with something more like an `int[]` instead of an `Object[]`, correct?
First, Vector is a problem that was solved eons ago with:
* Vector
* Auto-Boxing & Auto-Un-Boxing

Second, to me the difference between an Object and a Primitive is like the difference between an Apple and an Orange. Furthermore, the most applicable / usable Value-Classes (Integer, Long, Float, etc.) have already been rolled out ages ago, right? What am I missing? Classes that have 'final' declared Data-Fields could all be converted to 'Value-Classes', but how much "Value" or "Savings" (and in terms of what metrics) is being achieved with this idea? It seems to me like this idea would add an infinite amount of complexity (on the Oracle side of the Coding-Fence), but I don't understand the gains. Classes with "Low Amounts of Data" and "final-constant" data could be declared "Value" Classes? But are there actual and realizable speed improvements? Is the Garbage-Collector operating more efficiently?
I'm going to keep harping on this: why are you making value class fields final? If you persist with that design decision then it no longer "codes like a class", and I don't see the benefit. Why can't you allow fields to be declared as final or not? I believe that is how C# implements value classes.
Because making fields final ensures there will be at most one write to each field, which allows the JVM to do 1) more and better flattening and 2) more optimizations under the hood. That's what Valhalla is all about. If you need your fields to mutate, then there is no point in using value classes/records, since the JVM won't be able to optimize memory and performance as much as it could, but you would still pay the trade-off of lacking identity. So why would you use a value class/record then?
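To make the trade-off concrete: with all-final fields, an "update" is really the construction of a replacement value, which is what lets the runtime treat the object as pure data. A sketch using a plain record (illustrative names; the value-class syntax itself is still in preview and not shown here):

```java
public class WitherDemo {
    // With all-final fields, "mutation" means building a replacement value;
    // the original is never touched.
    record Complex(double re, double im) {
        Complex withRe(double newRe) { return new Complex(newRe, im); }
    }

    public static void main(String[] args) {
        Complex c = new Complex(1.0, 2.0);
        Complex c2 = c.withRe(5.0); // c is unchanged
        System.out.println(c + " -> " + c2);
    }
}
```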
@Mirage2020 How does making fields final provide better "flattening"? Data-oriented models require controlling memory layout and mutability. Laying out structs in a vector is a common pattern, as is creating your own arenas. The whole point of value classes is controlling memory allocation and size. The feature is the optimization.
@Mirage2020 Thanks for the response. I don't know the particulars of JVM optimization, but I do know raw coding in C and C++, and even FORTRAN back in the day. In the end it comes down to allocating raw memory blocks and navigating that memory with address pointers. Somewhere there has to be a pointer to the value object, and then you get to its fields relative to that starting pointer. I did a lot of HPC engineering programming, and in that area I like to allocate memory to hold massive amounts of input data, output data, and state data that gets changed every iteration of the solvers. Having that data clustered together in memory reduces paging and other performance-robbing tasks. I was really excited about the flattening aspect of value objects, but if I can't change them then I can't allocate that memory ahead of time and use it for output and state data.
@Mirage2020 This is a follow-up to my last comment; I'm just thinking out loud. To achieve flattening/clustering of objects in memory, why don't you implement an array allocator that instantiates all of the objects in the array when the array is allocated? That way all objects in the array can be created in one continuous memory block.

// Suggestion
Point[] array = new Point[1000000]; // Create Point array, instantiate all objects in continuous memory
@sstevenson638 If they went with zero initialization instead of strict initialization, new Point[100000] would just be zeroed-out memory. No need to initialize anything from the JVM side of things. No hidden code running. Since value classes are immutable and require strict initialization, creating an array like that gets more complicated. It is a bad design.
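As a rough illustration of what is possible today, without flattening or zero-default arrays: one can pre-fill an array of immutable records and replace elements per iteration, or fall back to a structure-of-arrays of primitives for guaranteed contiguity (all names here are illustrative, not from the Valhalla prototype):

```java
import java.util.Arrays;

public class PreAlloc {
    record Point(double x, double y) {}

    public static void main(String[] args) {
        // (1) Array of immutable records, filled up front;
        //     an "update" replaces the element reference.
        Point[] state = new Point[4];
        Arrays.setAll(state, i -> new Point(i, 0.0));
        state[2] = new Point(2.0, 9.9);

        // (2) Structure of arrays: primitive arrays are
        //     stored contiguously in today's JVMs.
        double[] xs = new double[4];
        double[] ys = new double[4];
        xs[2] = 2.0; ys[2] = 9.9;

        System.out.println(state[2] + " / (" + xs[2] + ", " + ys[2] + ")");
    }
}
```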
I can clearly see that Java's sunset is coming faster. Maybe my comment will be funny, so hold your laughter until it happens, and then we will laugh together, my friends.