Here's why f(S&) is called nonetheless: in C++, overload resolution prefers f(S&&) when the argument is an rvalue (a temporary object), because that enables moves. But inside the wrap function the parameter has a name, and a named variable is an lvalue expression even if its declared type is an rvalue reference. So unless wrap explicitly restores the original value category (e.g. with std::forward), the compiler picks the lvalue overload - f(S&).
Nobody. This is the 'backup', since the desktop recording failed us this time. But luckily most of the slides are clearly visible in the camera recording.
C++ will live longer than Rust. It's safer than Rust. Someday, when stupid managers, cheaters, and government authorities understand what safety means, they will agree with that point. Far too many words off the core theme.
I agree with your (original, pre-edited) comment that the use of 'auto' does not make the code more readable. Exceptions do exist: generic code where the types are not known, or places where the type is super obvious, like vector.begin() ...
@HaraldAchitz I edited my original comment because I thought it was overly negative - more than anything else this is an interesting talk by someone who knows a lot more about C++ than me. That being said, I don't like the use of auto and think it should almost never be used. In fact, I find the entire drive towards terser code often misguided; there's a lot to be said for code that is a bit longer but more explicit. The time it takes to type something out shouldn't be a factor in deciding how to write code, as it's only ever a tiny amount compared to the time spent planning the code and then reading it back again. If code is made terser by making more use of declarative styles, that is one thing, and it can make the code easier to understand - but that isn't a problem solved by auto, which exists purely for laziness.
I know this isn't everyone's approach though - some people really like auto and use it nearly all the time. I've seen code examples for libraries that use auto throughout and are of little use as examples as a result.
This talk is from 2018. std::function_ref will (most likely) come to C++ in 2026, which means 'official compiler support' will arrive around 2027 (C++26 won't be a voted ISO standard until 2026).
To elaborate on @QuavePL’s answer - I believe the idea is that all the public functions in AsyncService should return as quickly as possible, do their work asynchronously (for example on another thread - in this case, the main thread), and call ‘Finish()’ when they are done (ideally once they have populated the response object). This quick return “ensures” that the gRPC thread isn’t blocked.
Correct answers above. We need to handle the request on a separate thread for asynchronous handling and return from the gRPC thread immediately. I just picked Boost.Asio as an example since it is pretty easy to work with.
You can find the dates for the next 2 meetups in the video. Block the dates and check our homepage; they will be shown there (or on the meetup page ;-)
Correct, the PDP-11 is 16-bit, but some CPU registers could also be used as 8-bit registers. I guess that's what they did, since the reports state that the overflow happened on every 256th call.
The CRTP case is not equivalent - the explicit-object-parameter code may do Unexpected Things if called through a reference to base:

#include <cstdio>
#include <utility>

struct Base {
    template<typename Self>
    auto&& work(this Self&& self) { return std::forward<Self>(self).do_work(); }
    void do_work() { puts("do_work default impl"); }
};

struct Derived : Base {
    void do_work() { puts("do_work override"); }
};

int main() {
    Derived d;
    d.work();     // do_work override
    Base& b = d;
    b.work();     // do_work default impl - no compiler diagnostic required
}

In the classical CRTP this scenario is much less likely, since the actual base class for each Derived is a unique template instantiation, so even if you still wanted to get a reference-to-base, you'd have to make it explicitly templated, or it won't compile. In other words, the legacy CRTP actually provides better type safety than the templated explicit object parameter, which might be quite a compelling reason to avoid the latter in new code, even in a codebase with C++23 enabled.
Main issues with SOUP and other requirements from IEC 62304: 1) SOUP validation procedures, especially if the SOUP must be validated for class C, are not strictly prescribed. In other words, if memory serves me correctly, the IEC standard says nothing about how you are supposed to perform validation activities on the SOUP. Do you test only the functions you use, or the whole package - and if the latter, how? Imagine you use numpy to perform matrix multiplications: what do you do to ensure that numpy is fit for purpose in your class C application? 2) I personally found myself unable to verify whether currently open bugs in third-party software were affecting us, because the vendor (note: not an open-source vendor, but a closed-source microcontroller vendor) refused to release information about currently open defects in their software and hardware. The consequence is that you cannot do what IEC 62304 asks of you: verify which standing issues in your third-party products may affect your product. Eventually I managed to get them to release this information to us only, but it took *a lot* of effort.
Very interesting talk, thanks. I have been trying to convert my Pi1541 code to run on a Pi Pico (a Cortex-M0) and need the fastest code possible. I have been inspecting the disassembly to see what the compiler generated and altering the C code to get the most efficient output. I am finding that macros still beat static inline functions. It will be interesting to try out the presenter's code/ideas and compare the generated assembly.
About that switch/jump table and "one predictable plus one unpredictable jump vs. one unpredictable jump": I know about this technique thanks to Eli Bendersky's blog - search for his 2012 article "Computed goto for efficient dispatch tables". He has more articles on the topic, and he claims a 15-20% improvement for the CPython interpreter.
7:40 On a Swedish keyboard, you need to hit AltGr + the tilde key, then n, to get ñ; skipping the AltGr gives "¨n", so this is more of a user/input error than a bug in PowerPoint.
This is such a good talk, the explanations are incredibly helpful and considerate as well! Explaining the ? operator as either use the value or return early with an error made a lot of sense to me. Thanks Mats.
I am confused as to why it is considered advisable to let people pass a writable reference to an unnamed variable - that is what happens when you forward any non-const reference (forwarding or plain) to a function that might write to it. The changes made to it would be unavailable to the calling function, since the object has no name in that context by which to refer to the changed data. While I thought this was a good tutorial, I would have appreciated an actual example from a real program that benefits from this feature.
17:27 (slide 45) this SoA approach could have been taken to an extreme, with a separate array for each of {x, y, z, dx, dy, dz, hp}. That would also give better chances of SIMD vectorization of the calculations.
At CppCon 2021, Eduardo Madrid gave a talk about the capabilities of his ‘zoo’ library for writing nice-looking data-oriented code: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-QbffGSgsCcQ.html
At C++Now 2023, Floris Bob van Enzelingen gave a good talk about data-oriented programming and writing libraries for it: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GoIOnQEmXbs.html
Also at C++Now 2023, Chandler Carruth gave a good talk about using the same approach for the Carbon lexer/parser: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZI198eFghJk.html
I made a presentation on multithreaded perf optimization where I discussed using Tracy - a really great tool that can be handy even for simple CPU-bound examples: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-K33CPCQcF14.htmlsi=3ivMrtLsyZCUkQZQ
I appreciate the intuitive model for atomics, though I was a bit disappointed that the speaker's shaky understanding of atomics leaked into the presentation and some takeaways. "memory_order_relaxed is most likely a bug" is simply not true: when two threads talk to each other by means of a single atomic variable (and nothing else), memory_order_relaxed is perfectly acceptable. The phrase "last resort" also didn't make sense to me. I assume they meant we should reach for the safest settings first and optimize as we see fit, but that wasn't clear to me at first.
27:23 Can't agree more. I say "I hate code!" when I delete it, then feel happy for a couple of hours. This lecture is very useful for the vast majority of C++ users and should definitely have more than 3.3K views.