Performance Summit
Performance Summit events serve as a place for software performance enthusiasts and practitioners to meet and discuss challenges, research, and possible solutions around delivering delightful and efficient software.

You can find additional event resources at github.com/ttsugriy/performance-summit
Performance Laws by Taras Tsugrii
30:20
a year ago
Performance Summit Trailer Sep 2021
0:43
2 years ago
London Perf Summit - Panel Discussion
1:03:16
3 years ago
Comments
@petrvset1960 20 hours ago
Hard to understand English and unpleasantly small text...
@richard2845 a month ago
thanks so much for the tutorial
@JakobJenkov 3 months ago
I personally use 9 performance principles - some of which are the same as the 5 outlined in this video. Some of the differences seem to be "CPU to data distance" and "data size" ... I cover the principles I use in my video "My 9 + 1 core performance optimization principles" : ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ULlFWomaPVw.html
@liaforubeb 4 months ago
This is cringe
@mytech6779 8 months ago
Somebody should have taken a minute to optimize that terrible audio compression.
@maximilian19931 8 months ago
Adobe should optimise their software to have less source code in general, based on the results of this. Write slimmer code and the performance issue goes away.
@mrfantasticindian1593 9 months ago
Awesome 🇷🇺
@DavidM_GA 11 months ago
The "Closure" part of the optimization is basically how Forth works: a code pointer and a piece of data for each "word".
@lenkite a year ago
Thanks for the video. Is there any new material on the topic of "closure generation" interpreters? Very hard to find material that is not bytecode based.
@Roxas99Yami a year ago
Thanks, very appreciated. Especially the examples in C. Is this directly compatible with Cython?
@martingeorgiev999 a year ago
I don't understand why these architecture-specific instructions are not emitted directly by gcc at -O3.
@pingkai a year ago
Does anybody who really cares about latency want to use RPC and streaming? If I can tolerate 18ms, I don't see why I can't tolerate 100ms.
@yuangchen905 a year ago
great video. Thank you very much for your enlightening example and insightful explanation!
@poker53281 a year ago
Great talk. Thank you Sean.
@YoloMonstaaa a year ago
great talk
@RicardoGonzalez-tg3lw a year ago
This was amazingly useful! Loved it. Thanks for the great work!
@sezgin_murat a year ago
An amazing talk, full of knowledge.
@Roibarkan a year ago
50:00 Sean might be referring to “Eytzinger binary search”
@Roibarkan a year ago
21:45 ru-vid.com/group/PLGvfHSgImk4Y1thqJLpcSscMTwgN-XM9l
@Roibarkan a year ago
31:43 Great talk. More about optimization remarks and viewing them can be found in Ofek’s talks such as ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-6HbyacS5eZQ.html
@performancesummit4604 a year ago
thank you for the kind words and sharing this excellent talk!
@karimmanaouil7278 2 years ago
This guy is seriously a god.
@Shogoeu 2 years ago
What happened to Concord.io?
@alexandergallego5899 a year ago
sold it to akamai.
@allanwind295 2 years ago
Audio is too low. The (section) flags are neat. As it's a performance presentation, it would have been neat to include a running tally as you go through the optimizations. You do cover it in the conclusion, of course. Good job!
@mjwchapman 2 years ago
Thank you Laurence, very interesting. I have a trading app written in Java that takes a good few hours to speed up. This has given me the incentive to find out exactly when JIT compilation is happening.
@mjwchapman 2 years ago
this was brilliant. thank you very much. next stop ScyllaDB
@AbderrahmaneBenbachir 2 years ago
I recognise Suchakra's slide at 4:00 :)
@jayakrishnanjr5194 2 years ago
The link in the description is broken.
@performancesummit4604 2 years ago
Fixed! Thank you for raising this!
@GlebWritesCode 3 years ago
"Developers love the Kafka API" - are you serious? After a year working with Kafka, there are configuration choices and design problems I still shudder at.
@bruceritchie7613 2 years ago
Oh hell yes. Completely agree.
@alexandergallego5899 a year ago
Lossy compression here. What people want is their existing apps to go faster with no code changes. The partitioning scheme of an unordered collection with totally ordered sub-collections is pretty handy as a model. What you point out is the heavyweight nature of partitions, which is true, but the mental model is helpful.
@rurban 3 years ago
He is missing baseline JITs: very fast JIT compilers that are about 10x faster than the fastest interpreters.
@SimGunther 2 years ago
As he mentioned in the intro, the whole point of "fast CHEAP interpreters" is that you wouldn't need JIT compilation which is "not cheap" because special domain knowledge on assembly/hardware is necessary for JIT/inline threading.
@hardknockscoc 8 months ago
@SimGunther I think rurban's point is that there are JITs that are quite simple to make. Look at the Kaleidoscope tutorial for LLVM. You don't actually need to know anything about platform-specific ASM.
@SimGunther 8 months ago
@hardknockscoc It's one thing to use someone else's JIT library so you don't need "platform-specific knowledge" of the assembly you're creating, but it's a different story rolling your own JIT library, which is what I assumed in the original comment. You could also be a LISP-kinda person if you want to write a whole interpreter in SBCL/Scheme, running your whole program tree with C shared libraries for the performance-intensive stuff. That is technically a way of going about JIT without "platform-specific knowledge", unless performance is the only concern for the interpreter.
@nanman_chief 2 months ago
@hardknockscoc Even when using LLVM IR, it's a huge hassle for high-level languages. For instance, implementing a language like Scheme (or any language with call/cc) is straightforward in a virtual machine or interpreter, but it's a major headache on the machine stack. Simply translating it won't yield effective results. Another issue is compile time. Usually, we use scripting languages not for compute-bound programs, but to assist the host language in configuring some data. Imagine your build-system script just contains some build logic and file paths, but the script's compile time is longer than its execution time. To address this, sufficiently mature JIT compilers are typically multi-tiered, but implementing that is also a significant challenge.
@shweddedglowy 3 years ago
Benchmarking these approaches provides very valuable insight for anyone considering them! Thanks for providing all these learnings. 😎
@kokizzu 3 years ago
Why do I never see a throughput benchmark for this? XD
@tuberovayer9099 3 years ago
Excellent sneak peek at query-based compilers.
@yafz 3 years ago
The discussion in the last 10 minutes is full of insights! Thanks!
@kadiyamsrikar9565 3 years ago
Good talk. 👍
@cepuofficial9025 3 years ago
Any tutorial on this?
@tarastsugrii4367 3 years ago
vectorized.io has extensive documentation for Redpanda - vectorized.io/docs.
@ravengelen 3 years ago
Here is the link to the GitHub project page: github.com/Genivia/ugrep
@gukki5 3 years ago
he stated that intra-datacenter network latencies between machines are nowadays on par with inter-NUMA latencies. that's just categorically untrue: 100s of nanoseconds vs 100s of microseconds. that's 3 orders of magnitude.
@k2t210 4 years ago
Fantastic talk! Thanks for the wisdom!
@Super_Mario0o 4 years ago
Great job guys, and thanks Nitin for sharing this discussion and your vision. I really liked the concept and topic, which fits an app such as LinkedIn. Nevertheless, there were some missing points:
- Evaluation of the prognostic models used (accuracy of the prediction concepts, expressed by error analysis)
- A comparison of ML (XGBoost) vs. DL (DNN) or offline NQS
- The type and quality of the NN architectures used for the predictive models (info about the best model and its hyperparameters)
- A feature-importance mechanism was used for feature engineering only for XGBoost, since in deep learning it is done by the NN itself
- Is it possible to access the dataset you used, e.g. as open source?
@NitinPasumarthy 4 years ago
Hey Mehryar. These are some good questions. Thanks for posting them. We are working on a detailed LinkedIn engineering blog focused on the machine learning aspects of this solution. We haven't open sourced this data yet, but we have it on our radar.