C is really awesome! The creators of C (Ken Thompson and Dennis Ritchie) deserve so much respect, they literally built the foundation of modern programming. Thanks Chris Hawkes!
Although Ken Thompson was around and undoubtedly had some input, he is not considered one of the creators of C, and *certainly* not listed first. C really was Dennis Ritchie's baby.
@@MrWendijohanes I hate web dev, I'd rather make a website in C than with those kiddy scripting languages, and even then I wouldn't make a website in the first place
I've used C for going on 40 years now... I still like how it works. One advantage is that it's small. I don't think there are even 100 keywords to the whole thing. I like that simplicity. The standard libraries are well tested and robust. C has proved to be a most useful tool.
I’ve been studying compsci for a while now (since middle school) and one of the biggest game changers for me was reading “The C Programming Language” written by the creators themselves. Absolutely legendary book.
I have to say, although C can be a pain to deal with and it is certainly not my first choice to use for anything, I am really glad that it was my first language. You learn all about building your own data structures from scratch, manually managing memory, etc. and then you feel like a master when you go into languages like ruby and python and realize that you don't need to do any of that stuff :)
If you're into embedded systems design C is probably the language you should start with because most microcontrollers use it, for example your Arduino uses what's essentially C programming.
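Embedded C mostly boils down to setting and clearing bits in memory-mapped registers. A minimal sketch of the idiom, with the register modeled as a plain variable so it can run anywhere (on real hardware it would be a volatile pointer to a board-specific address; the name and address here are made up):

```c
#include <stdint.h>

/* On a real MCU this would be memory-mapped, e.g.
 *   #define PORTB (*(volatile uint8_t *)0x25)   -- address is hypothetical
 * here a plain variable stands in so the bit-twiddling is visible. */
static uint8_t portb = 0;

static void pin_high(uint8_t pin) { portb |= (uint8_t)(1u << pin); }  /* set bit */
static void pin_low(uint8_t pin)  { portb &= (uint8_t)~(1u << pin); } /* clear bit */
static int  pin_read(uint8_t pin) { return (portb >> pin) & 1; }      /* test bit */
```

Arduino's digitalWrite() is, under the hood, roughly this kind of masked store after mapping the pin number to a port.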
C is a big pain compared to something like Python though. Granted, C is great and lower level compared to Python, but how easy would it be to write in C a quick script like the one I produce in this video with Python?
Just discovered this and going to have to respectfully disagree with you here Chris (Hawkes). I would love to give every developer in the world basic exposure to C; especially those who primarily work in interpreted or typeless languages. C teaches and enforces good programming habits. I've been in a rotational engineering/development role at a Fortune 500 company for a couple years, and the number of questionable production systems I've already found is crazy. The issue is always something like no regard for data type, poor memory management, rushed overall data structure, or straying too far from the underlying computer science. These are all things that you cannot get away with easily in C. C really forces you to evaluate your programming design and refine it to be what you need. So yes, while I agree that I certainly wouldn't use C for every application, I strongly believe it's important to understand and respect how to build everything you develop at the C-programming level. There are also definitely some developers out there that would benefit from gaining an appreciation of C before they put yet another loosely-constructed, over-complicated java(script) app into production. Sorry for the long response; I get really passionate about this.
I'm definitely biased, because I got my upbringing in C and have literally replaced the Linux system memory allocator with C code before, but I still feel like lack of understanding of the things C brings to the table is a huge issue in the modern development world.
I honestly think that pulling up embedded systems / microcontrollers, kernel development and the age of the language as the main strong points of C does the language a disservice, because these are the things perhaps most commonly associated with C and the usual reasons why C is shunted to the side and disregarded when it comes to discussing or choosing programming languages. The need to have "an intimate knowledge of how a computer is actually operating" is also not a reason for C's awesomeness, and neither is the need to manage the program's memory. Saying that C is great because other languages or their implementations are built on C also, in my opinion, serves little purpose - if we, as humanity, have taken tool X and our knowledge of it, and with it we've built tools Y and Z that make our jobs easier and help us achieve what we want with less effort, why would we go back to using tool X for those same jobs?

I think you've missed a good deal of why C is actually awesome to the programmer. It's well-designed - simple, consistent and with a high degree of semantic coherence. It does a good job of telling the programmer what the code does, without making it overly complicated. It doesn't make the programmer work hard for that information, at least definitely not as hard as with some other languages (_cough_ C++ _cough_ sorry, I have a cold); it doesn't obscure its low-level workings, but it also doesn't prematurely force implementation details on the programmer. It's flexible - you can do both operating system kernel implementation and high-level application programming, all in the same language, without much effort at all. It's so old and still used today for all sorts of tasks because of how well it has stood the test of time and how strong its design has turned out to be.
It's easy to pick up, learn, and do stuff with, even without the "intimate knowledge" about CPUs and memory, because nearly every high school kid is smart enough to write a C program that asks for 2 integers, adds them together, and spits out the result, during their very first class about the language. And probably some other neat things about C that I've forgotten about. Cool video idea, but I think a lot of people in the industry really miss out on the true elegance of C. The bland powerpoint slides with a mouse pointer racing across them didn't help the video, either :P
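For the record, the add-two-integers exercise mentioned above really is this small (splitting out a helper function is my addition, so the arithmetic can be checked in isolation):

```c
#include <stdio.h>

static int add_two(int a, int b) {
    return a + b;
}

/* Parse a line like "2 3", print the sum, and return it (0 on bad input). */
static int sum_line(const char *line) {
    int a, b;
    if (sscanf(line, "%d %d", &a, &b) != 2)
        return 0;
    printf("%d + %d = %d\n", a, b, add_two(a, b));
    return add_two(a, b);
}
```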
Oh, and I have a very simple project if someone wants to help transition from screen-space-only calculations to calculations off the screen's borders, at a very slight loss in FOV, for cleanness purposes. Example: 1936x1089 internal, with the borders cut off to the 1920x1080 actual pixel density. Not supersampling... Any help?
I started doing a little bit of Python. Then going to college (still studying) and learning how to design algorithms in pseudocode and to program in C made me a much better overall programmer. I still have a lot to learn, but I can't wait to keep going and keep getting better.
C has the perfect level of abstraction from the hardware. It's high enough to be 100% portable and human-readable, but low enough to be efficient for performance- and memory-sensitive applications. And it's much easier to understand other people's code than in something like C++, where you can even overload operators (what I call an encryption/obfuscation scheme, rather than a helpful language feature). With C99 and later, you can also create an excellent object-oriented runtime, way better than C++.
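A sketch of what "object-oriented C" usually means in practice: per-instance data plus a function pointer for dynamic dispatch. All the names here (Shape, area, and so on) are made up for illustration:

```c
/* An "object" is a struct carrying its own method pointer. */
typedef struct Shape {
    double (*area)(const struct Shape *self);  /* the "virtual" method */
    double w, h;
} Shape;

static double rect_area(const Shape *s)     { return s->w * s->h; }
static double triangle_area(const Shape *s) { return 0.5 * s->w * s->h; }

/* Works on any mix of shapes: dispatch goes through the pointer. */
static double total_area(const Shape *shapes[], int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += shapes[i]->area(shapes[i]);
    return sum;
}
```

Full OO runtimes in C (GObject, for instance) extend this with shared vtables and inheritance, but the underlying mechanism is the same.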
You can overload operators; feel free to use this feature carefully and relevantly. I personally find C not abstract enough for me: I don't want to have memory management concerns when I'm doing high-level work, and as long as you have to manage strings and collections, C will force you to handle memory manually whatever level of abstraction you are coding at. C++ allows you to forget about it, doing exactly what you would do manually, maybe even better.
Why did Stroustrup call the C language obsolete, then? I honestly don't get it. Can somebody please elaborate? I wasted my time learning JavaScript (for millennials) when what I really wanted was a language that gave me control over real machines, real hardware.
Please cite your source of information. Which microcontrollers, specifically? Was it because BASIC was the only language those MCUs supported? I am really curious to learn more about this.
I learned C as my first programming language and it's so efficient that I have no idea why anyone would even bother with OOP. And with the right text editor like Vim you can type C reeeeaally fast.
OOP keeps your code and data together in one place and allows more complex abstractions (e.g. Tensors, Atoms). Procedural programming is all about how to compute - you give the computer all the steps; functional programming is all about what to compute - you state the problem and let the computer figure out how to solve it; OOP is somewhere in between. Also, no paradigm is perfect. Different problems ask for different solutions. I personally don't understand pure OOP, but it must be working for the majority, given the popularity of C# and Java.
I started coding in BASIC. It originally had no subroutines and long programs were sometimes referred to as "spaghetti code". Subroutines were an improvement as they allowed you to organize code into logical functions. However, sometimes you have several logical functions that work with a couple of shared variables. C offered no way to organize those. Enter C++. I loved C, but C++ is the next progression of being able to organize your program making it easier to organize, understand and use.
***** The question is what is the definition of _quality code_. To the extent that organization and readability is key, you just can't do as well in raw C (for the reasons I outlined above). That isn't to say you can't write quality code in C--I did for many years. But I can give my code a higher level of organization and many operations are much easier to read using classes.
Flash Man I have never programmed in a non OOP language. I am learning C++ now, and I have a friend who told me if I know C++, then I know C. Is this true? And also, what is wrong with OOP? I am just curious lol.
Kyle Stankovich Yes, that is true for the most part as C++ is basically C with a number of extensions to the language. And there is absolutely nothing wrong with OOP. OOP is a better way to organize your application code. Is it slower? It can be only because it makes it easier to do more complex things. And sometimes you need to be fairly proficient to know everything that is happening within the compiler. But I'm a guy who held on to assembly programming because I loved the hand optimizations. As software gets more complex and hardware gets faster, you'll need to move on if you want your skills to be marketable.
I have worked in C, Java and a host of other languages, but I still prefer C. You can do just about anything in C, and you can do it efficiently and cleanly. It's very efficient once you know it.
No. Not really. I mostly spend my time in Java nowadays (as it is the language of choice in many larger business-oriented systems) but I rarely do actual object orientation even here. OOP has its uses, but IMHO it often complicates things rather than simplifies them. Abstractions are useful, but when they get in the way of seeing the actual problem at hand, they are simply in the way of the solution.
When I was studying computer engineering in 2003/2004, first year, they taught us C and C++. Those days I loved the C programming language and the logic of the language. After that year one of the teachers pushed us to learn C#, and I don't know why, but I couldn't easily adapt to it or like it as much as I liked C/C++. Later on, in my master's degree, we focused on Java and I became a Java developer. Now, after all those years, I'm teaching myself the C programming language, and I confess it's really amazing.
Our development team came from c backgrounds and we were tasked to develop this simulation program in Java. It had to use the CPU as efficiently as possible and also run for weeks under high stress conditions. Long story short, we had multiple years of growing pains learning you DID have to actively worry about Java's memory management. We would constantly fight the introduction of memory leaks. Some small seemingly innocuous fix in one part of the code would cause a slow memory leak only noticeable after running for several days. This ended up with everyone having to learn memory profiling tools. Often, the root cause was difficult to locate and you could easily spend multiple days attempting to figure it out.
C also just says, “Screw type safety, I want to do this myself!” giving you the freedom to do things far more efficiently, but at the risk of shooting yourself in the foot if you don’t know what you’re doing.
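A small, well-defined example of the kind of thing that's meant: looking at a float's raw bits. The memcpy is the legal way to do it; a pointer cast would break strict-aliasing rules:

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret a float's bytes as a 32-bit unsigned integer. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* byte-for-byte copy, no numeric conversion */
    return u;
}
```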
I'm a huge type nerd and proponent of Rust and Typescript. But absolutely, there is time and a place! I love Rust's approach of putting unsafe{} around such code. I love safety by default with an easy escape hatch for when you need to do the fiddly bits.
A great C programming book is The C Programming Language, 2nd ed., by Kernighan, B. W. & Ritchie, D. M. (256 pages). The C Answer Book, 2nd ed., by Tondo, C. L. & Gimpel, S. E., is the answer guide for the above (201 pages). Also, you're correct that you need to know something about computers. A good book for that is The Architecture of Computer Hardware, Systems Software & Networking by Englander, I. Great video.
C is easier to learn and faster to compile compared to C++. You have forgotten to mention one of the most important benefits of C: a proper, stable ABI.
C is not easier to learn than C++. C++ comes with some features that lower the minimum-learning-barrier to entry. References for example. If you want to write a function that changes its arguments outside its scope, you have to use pointers. Which is one more thing you have to learn. That and streams, stuff in STL, function (and operator) overloading, templates to name a few other features. Also, being able to define functions inside structs without having to know what function pointers are once again makes it easier to pick up.
@@SArthur221 I disagree. "C is not easier to learn than C++. C++ comes with some features that lower the minimum-learning-barrier to entry. References for example. If you want to write a function that changes its arguments outside its scope, you have to use pointers. Which is one more thing you have to learn." First off, you have to learn pointers in C++ too, so that makes no sense, and second, it's not that hard to do. You can either pass a pointer, or return the data you want to edit. Ex:

// First way
int weridfunction(int a);

int main(void) {
    int outofscope = weridfunction(1); // gives outofscope a value of 2
}

int weridfunction(int a) {
    return ++a;
}

// Second way
void weridfunction(int* a);

int main(void) {
    int outofscope = 1;
    weridfunction(&outofscope); // gives outofscope a value of 2
}

void weridfunction(int* a) {
    ++a;
}

I'm pretty sure this will compile properly, but I'm a little tired right now so eh. "That and streams, stuff in STL, function (and operator) overloading, templates to name a few other features. Also, being able to define functions inside structs without having to know what function pointers are once again makes it easier to pick up." All of that is more complex stuff to learn. You complained about learning pointers (not that hard a topic to pick up imo), but learning templates, overloading and such is easier?
@@lightskinche You forgot to dereference A in the 2nd weridfunction. It should be ++(*a) not ++a; But what you're saying is true. It literally took me a week to basically master Pointers. They're only hard if you can't imagine the boxes in your head.
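Putting that fix together, a compilable version of the pointer approach (the function name is changed here just for clarity):

```c
/* Increment the caller's variable through a pointer. */
static void increment(int *a) {
    ++(*a);  /* dereference first, then bump the pointee */
}

/* usage:
 *   int x = 1;
 *   increment(&x);   -- pass the address; x is now 2
 */
```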
C is faster to compile than C++ if you use templates in C++, but if you don't use templates you will rewrite a lot of code. What you gain in compilation time, you lose in maintainability.
When personal computers ran off of floppy disks, program efficiency was much more important. I liked C programming because it was easy to see how each statement would be translated into machine language. I could write a C program, compile it, pick at the assembly code, and end up with a small and fast program. I actually had a C program produce a standalone distributable binary that was 453 bytes long (of course it used stdio from a console library, and a system library). A program that was only 453 bytes to download was nice when our modems maxed out at 1200 baud.
I totally agree with you, as an ex-C developer of a great many years' standing. After a 10-year-plus "break" from coding/development I have recently come back to it, after my wife's death, to take advantage of the Arduino revolution. Sadly, I cannot find a "RAW COMPILER" any more. The last one I used commercially was VC 4 or 5, and then it all moved on to C++, and now of course C#, but both of these rely, as you rightly pointed out, on endless different libraries and frameworks, all of which you need to be pretty familiar with if you are to be able to code in these environments. I think the introduction of MS IntelliSense is the best indication of this need, as without it I suspect that few programmers would ever really get anything done without understanding all of these arcane library canyons.
I agree with the majority of this presentation regarding C, but I don't think the primary reason for choosing C for the Curiosity rover is reducing overhead. NASA has many of their own higher-level languages for their own use. C is chosen because none of the other languages can easily do I/O with computer chips, robotic devices, and device drivers. There are exceptions, since you do have chips such as the Arduino's that allow higher-level support. I suspect there are hundreds of device vendors that NASA contracts with that require programmatic communication across low-level ports, which cannot be done with other higher-level languages. I've personally worked satellite contract jobs that were similar to what I described. Regardless, this was a good and informative presentation. One last thing to add: another good reason to learn C is that all of the higher-level languages you mentioned are based on C/C++.
import static java.lang.System.out;

public class Hello {
    public static void main(String[] args) {
        out.println("Hello World of C - Greetings from Mx. :) "
                + "C, C++, Java, C# etc... they all are awesome languages "
                + "Learn them all and have fun :) ");
    } // end main
} // end class
#include <stdio.h>

int main(void) {
    int ans;
    printf("\tIs the user saying hi to me?\n");
    printf("1>Yes 2>No\n");
    printf("Answer: ");
    scanf("%d", &ans);
    if (ans == 1) {
        printf("\t\tHeelooooooo!!\n");
    } else if (ans == 2) {
        printf("aww, sad noises\n");
    } else {
        printf("Please enter 1 or 2 only\n");
        main();
    }
    return 0;
}
brian lucore C is a powerful language with a lot of flexibility, but it doesn't protect you from yourself. If you don't program defensively and comment your code thoroughly, it can be a real pain to debug later. The more a high-level language takes care of for you, the more it can limit your options. So C gives you a lot of freedom, but at the price of your having to take responsibility for more things.
shouting back. When 32-bit first became readily accessible (in the form of the 386-DX), and GUIs were in their infancy, C programmers wrote code that made such (limited) devices useful. Speed was paramount. Disk-thrash was death. Keep it lean. I rode that wave (and I miss it pretty badly.)
C is to modern programming languages what Latin is to almost all European languages today. It's the basis, and everything shows its influence, including Perl.
@@brainletexplains8271 yeah, in the UK you can take a GCSE course in Latin (this is a high school level qualification not sure what the American equivalent will be)
Yeah, but you lose the sense of how it works. Like Java creating pointers without even letting the programmer know that you just created a pointer. Don't worry about the garbage collector or the dreaded pointers. Here's an example that I got from GitHub. It says: The following code shows some Java object references. Notice that there are no *'s or &'s in the code to create pointers. The code intrinsically uses pointers. Also, the garbage collector (Section 4) takes care of the deallocation automatically at the end of the function.

public void JavaShallow() {
    Foo a = new Foo(); // Create a Foo object (no * in the declaration)
    Foo b = new Foo(); // Create another Foo object
    b = a;             // This is automatically a shallow assignment --
                       // a and b now refer to the same object.
    a.Bar();           // This could just as well be written b.Bar();
    // There is no memory leak here -- the garbage collector
    // will automatically recycle the memory for the two objects.
}

Slower: because the language takes responsibility for implementing so much pointer machinery at runtime, Java code runs slower than the equivalent C code.
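For comparison, a C version of the same shallow-assignment idea, with the pointers and the cleanup spelled out (Foo is a stand-in struct):

```c
#include <stdlib.h>

typedef struct Foo { int x; } Foo;

static int c_shallow(void) {
    Foo *a = malloc(sizeof *a);  /* create a Foo object */
    Foo *b = malloc(sizeof *b);  /* create another */
    Foo *old_b = b;              /* remember it, or it leaks below */
    b = a;                       /* shallow assignment: a and b now alias */
    b->x = 42;                   /* same object, so a->x is 42 too */
    int result = a->x;
    free(old_b);                 /* no garbage collector: free by hand */
    free(a);                     /* a and b point at one block; free it once */
    return result;
}
```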
@@BlazertronGames I can't do much in pure JavaScript but that doesn't mean I just can't use it. I can still program in JavaScript using frameworks. Just try a framework and see the magic happen.
If you want to program anything low level in C you ALWAYS need to read the hardware's (MCUs, sensors, interfaces...) user manuals (UMs) first. This is how you learn hardware, and it is a lot of fun if you like to read. I learned basic C while cross-compiling ARM programs on Linux. To me it was easy to (a) learn ARM by reading the CPU's UM, (b) interact with sensors integrated alongside the CPU on the same MCU by reading the MCU's UM, or (c) program external sensors connected to the embedded system the MCU is on, by studying the UM for those sensors... So I was able to learn how to do basic stuff with embedded boards, but... On the other hand it was very hard to (a) find any study material on how to actually show anything on an LCD and create a graphics library, or (b) write C drivers for Linux so that Linux can recognize the embedded system, or in other words the firmware that is run on that same embedded system. This is where you see the flaws of Linux, and you feel how Linux knowledge is intentionally hidden in order to sell it through seminars, certification...
I think if you are serious about programming you should take an assembly language class. Even if you never touch it again, it gives you a much better understanding of what is going on at the CPU level.
So to summarise: If you are writing for a Mars rover, use C. The rest of us use Java (or whatever is required for the job). For the record: I'm a programmer and I've written code in 7 of the top 10 listed in the video, plus a few more besides. C is a fine language, but it's horses for courses. I would suggest being able to write good C code makes you a better programmer in the whatever language you actually need to write code in, which probably won't be C.
I do all things in C. Once you learn how to wield your '.dll' and '.so' files you don't even need scripting languages for any reason, you can make small configuration/high level adjustments in one compilation to the dynamic/shared library and just compile that from C too. It's not sexy, but eventually I got bored with sexy and realized pure and simple is the only way to go... I do use the '.cpp' extension and compile as C++ though, so that I can use C++ style struct declarations and very occasional operator overloading. The biggest shortcoming of C is lack of good introspection. Can't easily turn an enum value into a string, or list the members of a struct by index, etc, but once you're willing to write your own metaprogram to generate a few .h files, that can pretty much be solved as well.
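The enum-to-string gap can often be papered over without a separate metaprogram by using the "X macro" trick: define the list once, expand it twice. A minimal sketch (all the names are illustrative):

```c
#include <string.h>

/* Define the list once... */
#define COLOR_LIST \
    X(COLOR_RED)   \
    X(COLOR_GREEN) \
    X(COLOR_BLUE)

/* ...expand it once as enum constants... */
#define X(name) name,
typedef enum { COLOR_LIST COLOR_COUNT } Color;
#undef X

/* ...and once more as the matching string table. */
#define X(name) #name,
static const char *const color_names[] = { COLOR_LIST };
#undef X

static const char *color_name(Color c) {
    int i = (int)c;
    return (i >= 0 && i < COLOR_COUNT) ? color_names[i] : "?";
}
```

Adding a new color to COLOR_LIST updates both the enum and the string table, so they can never drift apart.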
C, unlike ISO C++ plus the STL, has a very thin compile-time and run-time. I'm assuming that's partly why NASA picked C. After all, if it's an embedded thing that you're debugging, you need to fully understand what the relevant parts of the hardware and your code are doing and how they're interacting, and that gets harder the more layers, abstractions and complexity you have in between. Wasn't it C.A.R. Hoare who, in one of his old papers, posed the rhetorical question of what point there is in using a tool that is more complicated than the problem you're trying to solve? The language isn't supposed to be there to add accidental complexity to the situation.
I don’t know how to code in C yet, but hearing all these testimony makes me want to start programming class immediately. Can someone please recommend a good book?
It depends on what you're doing. If you're into embedded design, C is probably the only language worth knowing. I do a lot of embedded design; at least 95% of the code I have written in the last 20 years was in C. As a matter of fact, a lot of small processors with 8- and 16-bit bus widths, and millions of those things are used in everyday products, have no compilers available in any other language. If you're interested in that kind of work it is 100% C.
I started working on a video game with my friends, and sometimes wish that I had used C. I went with C++ because there were supposed to be other people helping me with the programming and they only knew C++, but it turned out they didn't understand the code well enough and it was too complicated, so I'm doing it alone. I am writing it as half C and half C++, though (using classes and cout, but also a lot of struct-based stuff and little that is C++-only).
What is awesome about C is the fact that your "framework" is not a runtime. Perhaps you are backed by a library, but in general, most of the time, your framework is the very same machine you are programming. IMHO.
If I want to build a watch I use a magnifying glass and tiny tools, screws and cogs. If I want to build a bridge I use large steel beams, bolts and steel cables. Nobody would use watchmaker tools to build a bridge or vice versa. So it's all about the purpose which tools (or programming language) I use. I agree that C is a beautiful language. But I wouldn't want to use C to build a large website or a major enterprise data application.
@@realchrishawkes What is the point of this statement? No one disputes the power or beauty of the C language. It is just that you wouldn't choose C to build a big data-driven application or a large website, just as you wouldn't choose a tiny brush, well suited for painting the details of a portrait, to paint the walls of your house.
The first point doesn't make much sense; by that logic we would all be using assembly, because C was built with it. C is NOT portable out of the box: you need to make a lot of code changes to make code run on all systems. And you could see Java running on the Mars rover (you just need to implement the JVM on it), but you never would, because Java has too much overhead for it; it has nothing to do with portability. I still think that C is a good programming language, sitting between assembly (really low level) and higher-level languages. Best language for machine programming.
NASA would build their own very efficient, robust runtime libraries for their robots, in a manner where code execution is deterministic. Also, applications would not use malloc -- not directly, anyway. Everything must be controllable. Garbage collection would be a BIG NO-NO on the Mars rover.
At 6:39, what causes those spikes in job demand? Is it simply big layoffs in companies or is it just by chance that hundreds (if not thousands) of C jobs become in demand at one time?
The Tiny C Compiler is reasonable for beginners. I have a general math ebook with most of the practical math functions found on scientific calculators, written in basic ANSI C.
Why Assembler Programming is Awesome

I'm just gonna touch on a few points why I think Assembly is good, maybe as a first language...

Almost all languages are built on top of Assembly, including C, C++, C#, Python, Java, etc.

The lines of C that power the Mars Curiosity rover are compiled into Assembly.

Before being able to comfortably program kernels in C, an Assembly base is needed.

In Assembler, you have to manage your own memory, and you need an intimate knowledge of the hardware you're writing for. In fact, said knowledge is required to know what instructions are available.

It's gonna give you much more appreciation for what languages like C, C++ and Pascal are doing for you, when they make certain tasks much easier than what you would have to go through with Assembler, like loops, recursion, and such.
C syntax is the ABSOLUTE best. Decades old and still great. I only wish they would make a version with garbage collection and JavaScript style strings.
C by itself will be a tough language to program in. I like C+. Yes, you heard me right. If you remove most of the bullshit from C++ (and from the IDE, cough, Visual Studio) and leave just enough, that would be perfect. But OK if speaking seriously it's important to know C so that you can understand what's going on in a more complex language under the hood, such as C++, then C#, etc. C brings you closer to the bare metal and the knowledge of memory allocation, stack, heap, PE file, pointers, etc. is important for any software developer to have. But it doesn't mean you'll be using C for everything. Doing so will be stupid.
I've been writing C since before there were function prototypes in the language (arghhhh - that was an infinite cause of mystery crashes when a sequence of parameters didn't match what a function expected!!!). I got to the point (eventually) where I knew how not to shoot off my toes, and life was fairly good. But there are still lots of opportunities for run-time disaster if one is not careful: strcpy() to a too-small buffer, fixed-size arrays to hold file names, and the ever-popular return of a pointer to an automatic variable. I started C++ programming about 10 years ago, but I've been told I write C++ like a C programmer. Oh well... Objects can be nice, but it's possible to write horrible, incomprehensible code in pretty much any language. My preference for obfuscation of code is Forth. I found I could write a word in Forth and 15 minutes later I couldn't figure out how it worked. Now most of my coding is for fun doing embedded stuff for Arduinos. And although they don't embrace it, the Arduino language is C++ - but the build process makes doing classes in separate files a pain. Still, it's much more fun than freakin' Visual C++. FWIW
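For anyone collecting these foot-guns, the two classics mentioned have well-known safe counterparts. A sketch (buffer sizes are arbitrary):

```c
#include <stdio.h>
#include <string.h>

/* strcpy() into a too-small buffer overflows; snprintf() truncates and
 * always NUL-terminates instead. */
static void copy_name(char *dst, size_t dstsz, const char *src) {
    snprintf(dst, dstsz, "%s", src);
}

/* Returning a pointer to an automatic (stack) variable is undefined
 * behavior; let the caller own the storage instead. */
static const char *greeting(char *buf, size_t bufsz) {
    snprintf(buf, bufsz, "hello");
    return buf;  /* fine: buf lives in the caller's frame */
}
```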
43 keywords to make up the world. the problem with C is not that it is complicated. the problem with c is that it is too simple. and many people have a hard time wrapping their brains around simple. :)
The problem is that you need to drill a hole but are given a stick, a string, a wooden plate and a stone. Nothing is simpler than the things you are given, and one can certainly do it with them, but you can also just use a prebuilt drill. It may not be optimal for your use case, but chances are the same is true of your self-made one, if you're not experienced enough.
Might as well say the same thing for Java then since it too was built on top of C. Interpreter's are nice, but they're not the best when it comes to optimization.
The C programming language was the first programming language that I ever learned. And I still use it today for everything that I need to do: for building desktop applications and for building embedded software.
Radiation and big libraries are not related. 64MB is as good as 512MB these days; surface area is what relates to radiation risk. C in the Mars rover makes sense because you need to read pulses and control I/O, so there's no need to have a VM for Java on top of your CPU, where you would have to write wrappers (in C or asm) to get to the hardware. This is why a lot of embedded systems do not need higher abstraction: you're doing very direct, low-level work.
1st argument: OK, so C is built on top of assembler. Does that in and of itself constitute an argument for assembler being "awesome"? For the kernel argument, again, a large part of a kernel is assembler. No, you don't need to intimately know how the memory works, or the hardware, in order to be able to use C; syscalls talk to the hardware. C might be cool and low level, but it's difficult.
C is usually written in C (google: bootstrap compiler). Other languages are often written in C too. Sometimes a language written in C (Python) will be rewritten in itself (PyPy). C is an immensely popular abstraction over assembly. Other languages are often heavier abstractions over C.
Have you even actually tried C? It is NOT difficult. You can master C in like a year or 2... Within 5 years, I mastered the language, know the C standard library, and even remember how to implement data structures entirely in C and other nifty tricks and tips using the language. Take another language like C++, Python, Java, etc. and tell me if you can master those languages and everything else I said within a time frame of 5 years? I should also add that I was lazy in those 5 years so if I was hard working, I probably would've mastered C in the ways I mentioned in like 2-3 years...
I have not only tried C. I've used it quite a bit and i like it. I'm not against using it at all. But you can't say it's easier to pick up compared to something like JavaScript (even though i like C over it)
@@Mike-yr8yz My point exactly. It all started lower than C. That in and of itself doesn't mean it's simpler or more awesome than C. As you say, C is a popular abstraction over Assembly. Other programming languages are abstractions over C.
I love Ruby, but Ruby made me love C; that's why I am here. My intention is to extend Ruby and Lua with C. Maybe I will have my own programming language, who knows lmao. I love C, Ruby, Lua.
3:34 For more than 30 years? C was developed at Bell Laboratories in 1972 by Dennis Ritchie, who also worked with Ken Thompson to develop UNIX. This video was released in 2016, already 44 years after C was created. By now it's 46 years old.
After 50 years, the C language is still viable and with some augmentations (see @C - augmented version of C programming language) it can also celebrate the centenary.
2:06 C is radiation hardened??? Are there special libraries with redundancy or error checking built in that could detect a radiation-induced hardware fault? One of C's biggest strengths (and potential pitfalls) is its use of pointers and pointer arithmetic. Conditional pre-processor directives can give your C code tremendous flexibility, but can also make tracking down some bugs a lot more difficult. Managing memory can get a little tedious. No exception handling though. "C is for cookie, that's good enough for me." - Cookie Monster
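The conditional-compilation point in a nutshell: the same source can select an implementation at build time. USE_FAST_PATH is a made-up flag, and both branches here deliberately compute the same thing:

```c
/* Compile with -DUSE_FAST_PATH to take the first branch. */
#ifdef USE_FAST_PATH
static int scale(int x) { return x << 1; }  /* shift form of x * 2 */
#else
static int scale(int x) { return x * 2; }   /* portable default */
#endif
```

Handy for platform-specific code, but as the comment says: every #ifdef doubles the number of builds you may have to debug.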
I think what he meant was that a spacecraft has to endure conditions which make a large onboard library, which would have to substitute for an internet-based set of system call procedures, inadvisable. Smaller means lighter and easier to shield from radiation, and being compiled directly to optimized machine code makes the program smaller. In the Apollo project, when the effects of solar and cosmic radiation on memory and logic circuits were less understood, they played it safe with "macrame" (core rope) memory for the onboard navigational computer. Each bit of this "read only" memory was a donut-shaped magnetic core, penetrated by driving wires for each address containing a 1 in a given bit position of a word, and by a sense wire for that bit position. These wires were hand-knotted for each copy of the computer. Radiation couldn't move a wire from one core to another!