Monday 4 May 2015

Javascript: has everyone forgotten how it became what it is?

From time to time I get the occasional question from people who think I'm really smart and ask me for advice on which programming language they should learn to ensure a good career ahead. Of course, I always try to answer these questions instead of focusing on the real question, which is why they think I'm so smart when in fact I am not.
 
And things always end up in a debate about how important it is to know Javascript and how much of a future it has. Then you stumble upon debates about how good Node.js is because you use the same language on client and server, and what a great language Javascript is for writing the server side of an application.

And while I think it is important that everyone knows Javascript, I don't think it is going to be the only programming language they will need, or that they will work in Javascript a lot. Because Javascript is good for what it was created for, not for writing performance-sensitive server code or huge code bases.

And when raising that point, most people seem to forget how Javascript became what it is today.

See, in the beginning of the web there was only one browser. It was called Mosaic, and it did not have any scripting capabilities. You could not do any client-side programming in web pages. Period. If you wanted your web pages to change, you had to write some server code, usually using something called CGI and a language that was able to read and write standard input and output. But let's not digress.

Then came Netscape, a company where many of the authors of the original Mosaic code ended up working. These guys left their previous Mosaic code behind, started from scratch and created a web browser that was the seed of the web revolution. Besides being faster and more stable than Mosaic, the Netscape browser, known as Navigator, had a lot of new features, some of which became crucial for the development of the World Wide Web as we know it today. Yes, Javascript was one of those.

So they needed a programming language. They created something with a syntax similar to Java, and even received permission from Sun Microsystems (owner of Java at the time) to call it Javascript. Legend says Javascript was created in 10 days, which is in itself no small feat and speaks volumes about the technical abilities of the Netscape team, most notably in this case of Brendan Eich.

At that point, Javascript was a nice and welcome addition to the browser, and to your programming toolbox, because it enabled things that previously were simply not possible with a strict client-server model.

Then it all went boomy and bubbly, and later crashy. The web was the disruptive platform that ... changed everything. The server side (running Perl, Java, ASP or whatever) plus the client side executing Javascript was soon used to create sophisticated applications that replaced their desktop counterparts while also being universally available from anywhere, instantly accessible, without requiring a client capable of running anything but a browser and a TCP/IP network stack.

Javascript provided the missing piece of the puzzle necessary to replace applications running on the desktops and laptops of the time with just a URL typed in the address bar of a web browser. Instantly available and updated, accessible from any device, anywhere. Remember, there were no smartphones back then.

That of course rang a lot of bells at Microsoft. They saw the internet and the browser as a threat to their Windows desktop monopoly and, being the smart people they are, set out to counter that threat. The result was Internet Explorer.

Internet Explorer was Microsoft's vision of a web browser integrated into Windows. It was faster than Netscape's Navigator. It was more stable. It crashed less. It came already installed with Windows, so you did not have to download anything to start browsing the web. Regardless of the anti-monopoly lawsuits arising from how Microsoft pushed Internet Explorer in the market, the truth was that Internet Explorer was a better browser than Netscape's Navigator in almost every dimension. And I say almost because I'm sure someone can remember something where Navigator was better, but I sincerely can't.

And it contained a number of technologies designed by Microsoft to keep control of their Windows desktop monopoly. Among them, the ill-fated ActiveX technology (later to become one of the greatest sources of security vulnerabilities of all time) and the VBScript engine. That was part of Microsoft's "embrace, extend, extinguish" tactic. Now you could write your web page scripts in a Visual Basic dialect instead of Javascript.

Internet Explorer practically crushed Navigator out of the browser market, leaving it with 20% or so of its previous 99% market share. It was normal at the time for web developers to place "works best with Internet Explorer" stickers on their pages, or even to refuse to load a page in any other browser and pop up a message asking you to use Explorer to view it. Microsoft was close to realizing its dream of controlling the web and keeping its Windows desktop monopoly untouched.

And then came Mozilla. And then came the iPhone. Which are other stories, and very interesting by themselves, but not the point of this post...

What is interesting from a history perspective is that developers were using many proprietary IE features and quirks, yet their web page scripts were still mostly written in Javascript, not in VBScript. And VBScript faded away like ActiveX, FrontPage and other Microsoft ideas about how web pages should be created. Web developers were happily using Microsoft's proprietary extensions but kept using Javascript.

Why did that happen? Why embrace lots of proprietary extensions, to the point of making your pages unreadable outside a specific browser, but keep your code in Javascript instead of Microsoft's nurtured VBScript? Basically two reasons. First, there was still a significant minority of non-Internet Explorer users browsing the web, and Javascript programs worked on both browsers with few changes from one to the other. Second: VBScript sucked. You may think that developers immersed in a proprietary web were choosing Javascript over VBScript because the language was superior. And it was. But this was not a case of choosing among the very best available. It was just a matter of keeping that remaining 20% happy while picking the one of the two languages that sucked less.

Mind you, Javascript had no notion of a module or package system. No strong typing; almost no typing at all. No notion of what a thread was. No standard way of calling libraries written in other languages.

But if after reading that list of missing items you think Javascript sucks, you have to see VBScript to appreciate the difference. A language sharing all the deficiencies of Javascript, plus more of its own. VBScript sucked more than Javascript. Javascript was the lesser of two evils.
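A quick sketch of what "almost no typing at all" means in practice. These are standard Javascript coercion rules, easy to verify in any browser console or Node REPL:

```javascript
// Javascript's loose typing in action: operators silently coerce
// their operands to whatever type makes the operation "work".
console.log(1 + "1");   // + prefers strings here: "11"
console.log("5" - 1);   // - only works on numbers: 4
console.log([] + {});   // both coerced to strings: "[object Object]"

// And with no module system, every top-level var lands in the
// single shared global namespace of the page.
var counter = 0;
```

Note that `+` and `-` applied to the same mix of types coerce in opposite directions, which is exactly the kind of behavior that fueled the "Javascript sucks" arguments of the time.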

And as of today, Javascript still has all those deficiencies. Don't think of Javascript as the language of choice for writing web page front ends; think of it as your only choice. You don't have any alternative when working with web pages. Period. Javascript is not used because it is the best language. It is used because it is the only one available.

It was much later that Google created the V8 Javascript engine, making Javascript fast enough to be considered acceptable for anything beyond animations and data validation. It was even later that Ryan Dahl, the creator of Node.js, had the crazy idea of running V8 on a server and having it handle incoming HTTP requests. Node.js works very well on a very limited subset of problems, and fails completely outside of it.

The corollary is: Javascript will be around for ages. You need to know it, and know it well, together with the framework of the week, if you want to do anything at all on the client side. But it will not be the language that future web servers are programmed in.

Phew. And all this still does not completely answer the question of which programming languages you need to know. Javascript is one of them, for sure, but not the most important or the most relevant. It is a necessary evil.

Three guys in a garage, the NIH syndrome and big projects

As the old adage says, experience is the root (or perhaps was it the mother?) of science. Not so in software development, it seems. For what it's worth, not a week passes without another report of a disastrous software project being horrendously late, over budget, underperforming, or all of the above at the same time.

This bad news usually concerns the public sector, which is great news for those ideologically inclined to think that government should be doing as little as possible, even nothing, in our current society. That argument usually does not take into account that these government-run projects are almost always awarded to private contractors. Apparently, these same contractors are able to deliver as expected when the money does not come from public funds, hence the blame should sit squarely on the way government entities manage those projects, not on the contractors, right?

I have bad news for that kind of argument: these publicly funded failures are simply more visible than the private ones. With various degrees of transparency, and by their very nature, these kinds of projects are much more likely to be audited and reviewed than any privately funded project.

That is, for each Avon Canada or LSE failure that is made public, we get many, many more reports of public sector failures such as healthcare.gov or Queensland Health. So next time you consider that argument, ask whether it is simply that you don't hear about private sector projects going wrong so often. Is it really because all goes well? Or is it because the private sector is more opaque and thus can hide its failures better?

Anyway, I'm digressing here. The point is that big projects, no matter where the funding comes from, are much more likely to fail. Brooks explained it in The Mythical Man-Month (mandatory reading) years ago. Complex projects mean complex requirements, which mean bigger systems with more moving parts and interactions. These are technically challenging enough, but they are nothing compared to the challenges of human communication and interaction, which rise exponentially as the number of people involved increases.

What is more surprising is how often, when one reads the post-mortem of these projects, there is some kind of odd technical decision that contributed to the failure. This is usually discussed in detail, with long discussion threads pondering how one solution can be better than another and inevitably pointing to the NIH syndrome. This can take the shape of using XML as a database, a home-grown transaction processor or home-grown database, using a document database as structured storage (or vice versa), using an unstructured language to develop an object-oriented system, and so on.

There is an explanation for focusing on technical rather than organizational issues when discussing these failures: the technical bits are much more visible and easier to understand. Organizational, process or methodology issues are much more opaque, except to those directly involved in the project. While technical decisions usually contribute to a project's failure, the fact is that very, very complex and long projects have been successfully executed with far more rudimentary technologies in the past, so it is only logical to conclude that technology choices are not that decisive in the fate of a project.

And we quickly forget something that we tend to apply in our own projects: the old "better the devil you know" adage. More often than not, technologies are chosen conservatively, as it is far easier to deal with their known weaknesses than to battle new unknowns. We cannot, of course, disregard other reasons behind these choices. Sometimes there are commercial interests from vendors intent on shoehorning in their technology, and these are difficult to detect. But we have to admit that sometimes the project team genuinely believed the chosen option was the best for the problem at hand. Which leads to the second point of this post: the NIH syndrome.

To someone unfamiliar with the context, a strange choice of technology can be dismissed as just another instance of "Not Invented Here" syndrome. But what is NIH to someone claiming to "know best" in a specific technology area is perhaps the most logical decision for someone else. What looks attractive as a standalone technology for a specific use case may not look so good when integrated into a bigger solution. This is why people still choose to add text indexing plugins to a relational database instead of using a standalone unstructured text search engine, for example.

Another often-cited reason for failure in these projects -and in all software projects in general- is the huge volume of changes introduced once the project has started. What is missing here is not some magic ingredient, but a consistent means of communication that states clearly that changing anything already done is going to cost time and money. Projects building physical things -as opposed to software- seem to get along with this quite well, if only because one has no trouble explaining the effects of change on physical things. But the software world has not yet managed to create the same culture.

So now that we've reasonably concluded that technology choices are not usually the reason projects fail, that change is a constant that needs to be factored in and cannot be avoided, and that there is a strong correlation between project size and complexity, is there a way of keeping these projects from failing?

In my opinion, there is only one way: start small. Grow the project as it succeeds, and if it does not, just discard it. But, I hear you crying, there is no way for these projects to "start small", as their up-front requirements usually cover hundreds of pages full of technical and legal constraints that must be met from day one. These projects don't have "small" states. They just go off in a big bang and have to work as designed from day one. And that's exactly why they fail so often.

Otherwise, three guys in a garage will always outperform them and deliver something superior. They are not constrained by hundreds of requirements, corporate policies and processes, or financial budget projections.