The fallacy of high-level programming

For the last 5 years or so, I’ve stopped writing on technical subjects on this blog. That doesn’t mean I stopped writing completely; in fact, I still write a lot, but I keep it to myself instead of posting it. For reasons too numerous to list, or, in short, just because… I’m lazy… Software engineering is still a relatively young industry, hence the naivety, untruths, deceptions, myths, lies, and dogmas are… countless. In that environment, writing can be controversial and misleading, so I chose to note down my ideas in private.

I started with Turbo-C on DOS, then moved on to different dialects and relatives of the C language: Watcom C, Borland C++, C++, glibc, Obj-C… For me, the most important thing in programming is… crashes. The program crashes right away to tell you that you’ve done something wrong! It crashes when you access a null pointer, it crashes when you use an API the wrong way, it crashes when you allocate an infeasible amount of memory, it crashes when you access a dangling pointer referencing an object that has gone out of scope, just because you failed to track the life cycle of that object. It doesn’t even throw an exception and keep going until the situation is unmanageable. Simply put, there’s NO exception; you’re punished immediately, as soon as you’ve done something wrong!

I strongly advocate the use of ARC for memory management; in fact, I would call it the most brilliant feature to come to the Obj-C language, ARC makes life much easier. But I also advocate the use of crashes as a “graceful” way to tell that you’ve done something wrong with deallocated blocks. The program crashes right away when you allocate an unbearable amount of memory, so you know that your algorithms and data structures are not efficient enough, and that you need to improve them, to tune, to optimize! Modern languages are good and friendly; the downside is that they’re also too friendly to developers. Without punishment, how can devs’ skills improve?

Thus, through the interaction between you, the coder, and the computer & compiler combination, this reward–punishment model greatly boosts devs’ skills over time and helps produce good code. There are huge differences between an experienced programmer, who writes good code and foresees possible bugs, and a novice, who only tries to make it… just run. I really want to emphasize here that the “reward–punishment” model of programming is what makes a good programmer! Also, learning to handle memory problems by yourself gives you opportunities to follow and understand the life cycles of objects and memory blocks, to understand the precise flow of the code, and to understand the costs and benefits of each coding approach.

To summarize the languages: C is like Sanskrit, extremely precise and accurate, with a rigid grammar and strong types, where every syntactic construct has profound implications. High-level languages such as Swift, .NET, JavaScript, etc. are like… Vietnamese: lacking a rigorous grammar, and quite vague and inconsistent in meaning. Of course, learning C is hard, and not everyone wants to do things the hard way. On the other hand, it’s too easy to quickly draft simple apps in high-level languages, which naturally creates the fallacy that the devs are good: whatever they write seems to run “smoothly and perfectly”! Of course, high-level languages have their roles, for example in prototyping… until things get huge and complex!