ACCU is the conference for developers and programmers who are devoted to and passionate about their profession.
Bringing together some of the best minds and game changers in the industry, for over 25 years the ACCU organisation has gained a deserved reputation as a trusted source of the latest information on software development.
Taking place across four days with five concurrent streams and a range of pre-conference tutorials, the ACCU programme boasts a range of international speakers covering many aspects of software development, from programming and design, testing and programming languages through to requirements, agile process, project management and professionalism in programming. With a solid core of C++ and C related sessions, ACCU is a conference for polyglot programmers.
For all conference enquiries please contact Archer Yates Associates Ltd - www.archer-yates.co.uk
For video/talk/RU-vid specific enquiries please contact Digital Medium - www.digital-medium.co.uk
17:18 vectors have a member type called size_type. Only sizes that fit within that type can be the size of a vector. If a std::vector<T> already has std::numeric_limits<std::vector<T>::size_type>::max() elements, a push_back can't just grow the vector's size. I don't know if/how push_back is defined in this (extremely unlikely, and on basically all machines and implementations impossible) case. (On that slide: capacity() specifically also returns size_type, which usually is size_t but doesn't need to be.)
I wonder if there is a joke that the x0 register is the correct way to get rid of data. If you leave data in memory for too long it will rot, get moldy, or even evolve into sentient lifeforms. So you regularly run an automatic garbage collection, which sends it to x0, where it is disposed of correctly!
When has Stroustrup ever agreed to break backwards compatibility in the name of basic safety or security guarantees? He doesn't even support a built-in package manager. Rather than wait a century for reasonably high-quality defaults, adopt a modern systems language. Even Go can do the job for the vast majority of applications.
noexcept is important for reasons beyond performance, such as safety, realtime programming, and intuitive, predictable APIs. Tragically, the C++ stdlib continues to rely on exceptions in 2024, ultimately encouraging programmers to switch to C, Go, Rust, assembler, and so on. Exceptions encourage bad data models. Exceptions overcomplicate program control flow. Exceptions waste screen real estate and mental cycles. Any conflicts with noexcept are bugs in the compiler or bugs in language design.
It's so sad that the profile/debug/breakpoint features never got into clang-query. Even the srcloc stuff not being there is very disappointing, because how else do you discover these things about the AST?
Around 47': campanology (not campinology). Knuth has some fascinating coverage of the subject. Ringing of full peals is not a bad example of how to enumerate all possible permutations. The big accountants' printing desk calculators also use RPN. Fewer keystrokes.
Managers hire engineers based on charisma rather than merit. The engineers refuse to implement basic performance or efficiency improvements, exclaiming "it's good enough!" because they're terrified of studying, which would require acknowledging knowledge gaps. Senior leadership demands tacking on AI and other features where they are not needed. Customers insist on delivering features they never use. Benchmarks are faked. Customers buy based on marketing budget. Without any paragraphs about Moore's law, concurrency, cache alignment, data modeling, or any such technical detail, we see the foundational reason why, in 2024, the average web page demands megabytes to render a cocktail recipe.
CISC is fundamentally vulnerable to cyberattacks that offset the program counter by a single byte or so, creating arbitrary instructions out of data. W^X is only a partial solution, and is still not implemented on many popular operating systems, let alone microcontrollers. Also, variable-length instructions don't play as well with instruction-cache alignment. Variable-length instructions waste more CPU time when programs are analyzed by antivirus scans. Variable-length things generally behave in anti-HPC ways, like char stars rather than fixed-capacity char arrays.
If your argument for not switching technologies is that you need to learn another programming language, then you've just admitted to being a bad programmer.
I think Walter is a really bright guy, and his C++ compiler, even if he's not still maintaining it, is the absolute best in existence. I do think simplifying things from how C++ does them is a good idea, but perhaps not quite as much as he has done for D. For instance, I don't agree with his choice for the template syntax nor for scope resolution. I also don't like how his standard library is organized, but I also don't think that most languages get that right. One thing this talk has exposed for me is that writeln and all of those associated functions can't be used with @nogc functions, but especially with an @nogc main(). As far as I can tell looking at the source there's no reason for that because at first glance they don't appear to allocate anything. I feel like that's something that needs to be remedied by making them all @nogc, which would still render them usable from everywhere else even in GC enabled code. I'm sure most will disagree with all of my complaints about D, but I don't like the lambda syntax. Although, the post return syntax is better than C++'s because -> in C++ is a member pointer operator and thus it further confuses context for no gain. I dislike UFCS and method chaining in general. The contrived examples always suck, but when you add on two unnecessary calls to strip() to make a point about convoluted code, that only undermines the point. At least pull a real-world example, because even if I disagree with it, at least it'll be valid. Also, the point about mixins is really disappointing as a preprocessor doesn't have to mean a lack of type checking. A simple addition could be `#macro int foo( int arg )` and give it an almost proper function styling, yet it could still act as a text replacement mechanism if you don't use a function style macro or have type annotations at all. There's no reason to not have both #import and #include as a preprocessor is an incredibly powerful thing. 
One thing I really like about D is the concept of CTFE, and I try to do as much of that in my own language as I can, having made my compiler into an interpreter as well; like other languages, just calling it directly opens a REPL. And I hate constexpr and consteval in C++. However, I still don't understand the point of most uses of type annotations, and especially typeof(). I feel more languages should aim for more type inference, as I have in my own language. For instance, that other contrived example would in my language look more like this `v := [7, 5, 8, 2, 4, 1, 3]; min := *v; for i in v: if i < min: min = i;` but if we take the example seriously, then it would ideally be `v := #{ 7, 5, 8, 2, 4, 1, 3 }; min := *v;` because #{} denotes a built-in tree type and thus the first element would denote the minimum. Since I strive for CTFE too, if `v` isn't used other than to acquire the minimum of that set, then assuming the answer is printed out, it would generate code that just prints the constant value. While I do take some inspiration from Python, I take more from C and its child C++: whitespace isn't significant for scoping, and default types can literally be changed by aliasing whatever you want to the int and float "generic" types. So if you want 32-bit floats to be the default, you can simply do `alias float32 as float;` at the top. Though, I opted for RAII over GC, and the user can configure the default allocator per module if desired, for cases when objects have a runtime representation. Ultimately, I would suggest learning more languages, whether side-by-side or not, because it can improve your programming skills regardless of which language you end up using, and there are certain big projects that everyone should take on at least once, such as writing a compiler, even if it's only for a subset of functionality present in other languages.
I fully expect that no one will read this long-winded post, but I tend to ramble on as a means of catharsis, so it doesn't matter anyway.
This presentation entirely misses the elephant in the room: it doesn't cover data centers in detail, although they are mostly responsible for the summary percentages.
Circle is without a doubt the most promising development for C++. Too many people unfortunately can't set their egos aside to give it the attention it really deserves.
I like the whole idea of views, but I'm concerned about the performance of this stuff. I'm pretty sure, although I don't have any evidence for this, that it would be faster to do what you're getting with a view yourself. Maybe it would be more readable as a view, I don't know; it would certainly be more concise, but that doesn't always equate to readability. Having views might be more confusing to junior programmers than something you've done yourself. This sort of thing with views has existed in C# for ages with LINQ, and it performed pretty badly there. So as a concept, views seem okay, but outside the odd case or two, I've got no real need for them. In my opinion, a lot of these things they're cramming into the standard are included just for the sake of inclusion, not for any real-world benefit. Sure, they're nice to have, but are they going to change the way that most C++ developers write code? Probably not. Well, not unless you're one of those annoying bleeding-edge coders who likes to shoehorn a spaceship operator into their code whenever they can.
Great talk! In many ways D seems to answer an early-aughts question: how can you combine the productivity and safety of Java with at least most of the performance of C++? I really love how D makes many things that are hard or ugly in C++ seem simple. Mixins are a very simple way to get powerful metaprogramming. Automated marshalling of structs to JSON, yes please! Less undefined behavior and UFCS, hell yeah! However, I think the reliance on a GC and built-in safety checks disqualified it from the high-performance domains occupied by C++ early on. Recently even Herb Sutter has acknowledged that we should be safe by default (paying a small runtime tax for that) with the option to opt out if we really need to. This makes D relevant today, but unfortunately I think the boat has sailed for D due to Rust, which manages to provide safety with most of the checks done at compile time. If you really value productivity and still want decent performance, then I think a lot of people will look into Go instead.
The only people who think this is great are Rust fanatics that will never write a line of C in their lives so why is the committee trying to appease them?
@Roibarkan Yes, but using NC (non-commercial) and ND (no derivatives). This means cppfront can't be included by default in any GNU/Linux distribution, no company can use it, and those contributors Herb celebrates are technically not even allowed to provide pull requests, since that involves modifying the sources. Because of that it is hard to take cppfront seriously until Herb switches to a free license.
I hate putting the ampersand and asterisk signs left-aligned. I teach newbies by asking this: int* a, b; and then I ask for the types of the variables. Most of them say they are both pointers. No, they are not.
The reason that I know of, and that is most often cited in the community, is that they made some "bad" decisions in the implementation, but now they can't change them because that would be an ABI break.
Adding auto with the same meaning as in C++ is a strange choice. I think it's generally a bad feature in C++, but at least it has some utility there due to long typenames, iterators, and namespaces. But in C? Why?
If you look at it, the humble 6502 was the first RISC CPU. It had competition (6800, Z80, 6809, 8008/8080), but its minimized register count and minimized instruction set made it very simple to implement efficiently, and it could do everything that any other 8-bit CPU could do, and often just as quickly.
45:21 Assuming sz can get very big, the argument to malloc can overflow and create a smaller allocation than expected. There's calloc for this purpose. I thought this was common knowledge.