null + null = 0, +new Date, and why loosely typed languages like Javascript are a pain

Javascript is a loosely typed language. This was done to expedite the writing of code and avoid tedious tasks such as declaring variables and explicitly defining their types. You know, the kind of stuff only suckers would do. The irony is that these so-called shortcuts actually result in more tedium as everything requires a tremendous amount of type checking. Consider the simple act of concatenating two strings: str1 and str2.

str1.concat(str2) fails immediately if str1 is undefined or null, so the concat method is out of the question. But we can always use the javascript “+” operator right? Sure, if you want to trust the implicit type casting in Javascript. In a strongly typed language such as C#, if str1 and str2 are null, str1 + str2 results in an empty string: null string operands are simply treated as “”. The result is intuitive and expected. In Javascript, the result of str1 + str2 is the number 0, because both nulls are coerced to numbers. This can lead to some hilarious and unintended consequences in the code that can be a pain in the ass to track down. But wait, there’s more! If both str1 and str2 are undefined, str1 + str2 is NaN (not a number). If str1 is “” and str2 is null, then str1 + str2 is equal to “null”. That’s right, because str1 is a string, Javascript implicitly casts str2 as a string as well. The expression becomes “” + “null” which evaluates to “null”. Ugh. If str1 is “” and str2 is undefined, well guess what? The string “undefined” is what you get when you concatenate the two.
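Each of these coercion rules is easy to verify in any Javascript console:

```javascript
// Demonstrating Javascript's implicit coercion during "+".
var a = null, b = null;
console.log(a + b);          // 0: null coerces to the number 0

var c, d;                    // both undefined
console.log(c + d);          // NaN: undefined coerces to NaN

console.log("" + null);      // "null": null is coerced to a string
console.log("" + undefined); // "undefined"
```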

If you want a result that is not nonsensical, you wind up having to write some hideous code to convert all null and undefined values to an empty string first:

if (str1 === undefined || str1 === null)
    str1 = "";

if (str2 === undefined || str2 === null)
    str2 = "";

var result = str1 + str2;

And yes, the triple equals is absolutely necessary. Without it, the implicit casting rears its ugly head again. So to sum up, use common sense when writing Javascript code. Stick with the basic best practices.
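If this null-to-empty-string dance comes up in more than one place, one option is to wrap the checks in a small helper. A minimal sketch (safeConcat is a hypothetical name, not a built-in):

```javascript
// Coerce null and undefined to "" before concatenating, so the
// result is always a plain string concatenation.
// safeConcat is a hypothetical helper, not part of Javascript.
function safeConcat(a, b) {
    if (a === undefined || a === null) a = "";
    if (b === undefined || b === null) b = "";
    return a + b;
}

var result = safeConcat(null, "world"); // "world", not 0 or "nullworld"
```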


str1 = "hello"; //BAD but valid, sets str1 as a property of the global object. That's right, str1 is now a global variable. Horrible
var str2 = "world"; //You have to type three extra characters but it saves you a ton of grief


var str1 = "hello";
str1 = 5;  //For the love of god, just declare a new variable here instead


var start = +new Date();

This is a clever snippet of code, but it can be rewritten in this functionally equivalent form:

// the longhand equivalent of +new Date()
var startDateTime = new Date();
var startTicks = startDateTime.getTime();

Both versions of the code do the same thing. The second requires a bit more typing but is far more readable. +new Date is gimmicky. Yes it “looks cool” and shows off your Javascript skillz, but the brevity in this case is detrimental in the long run to maintainability. +new Date works because it first creates a new Date object. By default, the Date constructor returns the current date and time. The “+” unary operator is then applied, which implicitly casts the Date to a number: the number of milliseconds elapsed since 1/1/1970. Great, but why have this esoteric code? It relies on an implicit cast, so it is entirely dependent on the version of Javascript and the browser it is running on. Any future changes could potentially break the code. If typing two lines instead of one is so painful, create a utility function instead that calls +new Date and slap a comment on there. Or don’t, because +new Date has horrible performance.
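Here is one way that utility function might look. nowTicks is a hypothetical name; the comment spells out what the implicit cast was doing:

```javascript
// Returns the number of milliseconds elapsed since 1/1/1970,
// i.e. the same value as +new Date, with the intent spelled out.
// nowTicks is a hypothetical helper name, not a built-in.
function nowTicks() {
    return new Date().getTime();
}

var start = nowTicks();
```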

So remember, scripting languages such as javascript make writing code easy, but the ease of use leads to ease of abuse. Be careful or you’ll wind up making things more difficult in the long run.

Can’t we all just get along?

Good teamwork is vital for the success of any endeavor, whether it be winning an NBA championship, getting an A for that final group presentation in school, or running a business. This is especially true in the tech industry, where the explosion of complexity confronting companies today, combined with increased globalization, makes teamwork a requirement. Just take a look at your typical web company: The development team alone is going to consist of multiple people with different specialties. No one person can possibly be an expert at everything, which makes it a necessity to work together with others. Front end developers will know CSS, Javascript, and some sort of server side scripting language. A backend developer, on the other hand, will know everything there is to know about tuning a database. Even though there will be skill overlap among developers, an entire code base can span millions of lines of code: it’s not humanly possible for just one person to maintain it all! And that’s not even looking at the marketing, operations, HR, accounting, legal and business departments, all of which serve vital roles in the company. Everyone must work together for the company to run smoothly.

The difficulty with teamwork lies in scaling up. For example, a small startup company naturally fosters good teamwork. Because of its size, everybody must contribute by working hard and helping out others. People learn to depend on one another, forming close bonds in the process. However, this camaraderie tends to get lost as the company grows larger. A once tight-knit group inevitably splits off into multiple teams. As more headcount is added, entire departments are formed, then divisions, and finally even subsidiaries! All too often, these org units have their own bottom lines and agendas, creating an atmosphere of Machiavellian politicking. Dick Brass, a former Microsoft VP, wrote a fascinating op-ed piece that touches on this very issue.

He paints a grim picture of the internecine warfare at Microsoft. I was left with the impression that groups within Microsoft were not entirely unlike the warring city states of Renaissance Italy. Teams did not play nicely with one another and were more likely to sabotage, rather than help, one another. Microsoft ClearType, a font display technology that garnered much praise, did not make it into Windows until a decade after it was invented. Why?

“It … annoyed other Microsoft groups that felt threatened by our success. Engineers in the Windows group falsely claimed it made the display go haywire when certain colors were used. The head of Office products said it was fuzzy and gave him headaches. The vice president for pocket devices was blunter: he’d support ClearType and use it, but only if I transferred the program and the programmers to his control.”

A similar situation occurred with the tablet, which Microsoft had developed long before the iPad came out. And yet, despite having the support of top management, it was essentially torpedoed by the VP of Office:

“When we were building the tablet PC in 2001, the vice president in charge of Office at the time decided he didn’t like the concept. The tablet required a stylus, and he much preferred keyboards to pens and thought our efforts doomed. To guarantee they were, he refused to modify the popular Office applications to work properly with the tablet. So if you wanted to enter a number into a spreadsheet or correct a word in an e-mail message, you had to write it in a special pop-up box, which then transferred the information to Office. Annoying, clumsy and slow.”

I experienced this internal bickering firsthand when I worked as a consultant at Avanade on a fairly large scale project for the Microsoft Entertainment and Devices division. Despite multiple teams being involved and the obvious dependencies between them, help was hard to come by. Getting access to documentation, sample code, servers, or even the developers themselves was like pulling teeth. I quickly learned that the only way to get timely responses to my nagging emails asking for documentation was to CC somebody important. Bugs that blocked other teams were not prioritized correctly and took far too long to get fixed. Not surprisingly, the project fell behind schedule. In the ensuing shitstorm, everyone tried to pin the blame on everyone but themselves. The funny thing is, all this could have easily been avoided had there been a little more cooperation.

My team was hardly blameless either. I remember in one meeting, one of the developers on another team asked us if we could squeeze in an extra bug fix for the build. Our PM said that we were already swamped and that this would require pushing the build back by at least a day. After much posturing and positioning, an agreement was reached. In the time it took to do so, I literally could have checked in the one line fix on my laptop. Instead, I kept quiet, fully aware of what was going on. We could not just give up something for nothing; we had to get something in return. Such is how the game is played.

This failure to cooperate appears to be the inevitable result of companies that have grown bloated from their own success. Individuals employed by these corporations are loyal to themselves first, their coworkers second, other teams a distant third, and the actual company dead last. I don’t want to paint too grim a picture here however. I offer just this simple observation: The most effective way for a large corporation to deal with this problem is to have a compelling vision and a strong CEO who can carry it out. This CEO must be a benevolent dictator who “speaks softly but carries a big stick”, rewarding those who work together and punishing those who don’t.

The textbook example is Steve Jobs. Walter Isaacson’s biography of him makes him appear more as a dictator than as someone who was benevolent, but he definitely got the job done. Jobs brought his innovative ideas to bear by exercising complete control over the entire end to end user experience. Despite being an asshole at times, his passion and his ability to revolutionize technology gave everyone a reason to rally behind him and set aside their differences, knowing that the end result in the long run would be well worth it. The results speak for themselves.

This is why Apple succeeded where Sony couldn’t. Sony should have been able to come up with the iPod long before Apple did. As discussed in Steve Jobs’s biography:

“But Sony couldn’t. It had pioneered portable music with the Walkman, it had a great record company, and it had a long history of making beautiful consumer devices. It had all of the assets to compete with Jobs’s strategy of integration of hardware, software, devices, and content sales. Why did it fail? Partly because it was a company … that was organized into divisions (that word itself ominous) with their own bottom lines; the goal of achieving synergy in such companies by prodding the divisions to work together was usually elusive” (p. 407, Steve Jobs by Walter Isaacson).

Because the executives at Sony placed the success of their own divisions before the success of the company as a whole, they missed the big picture and failed to think long term. The irony here is that had they actually come up with a true successor to the Walkman, their bottom lines would all have profited enormously. Sony shouldn’t feel too bad though. Failure to work together is a recurring theme throughout history, from primitive tribes fighting over resources to the centuries of warfare that wracked Europe during the Middle Ages. Of course, the history textbooks also show what happens when petty tribal rivalries are set aside: Once Genghis Khan was able to “unite the clans”, he conquered most of the known world. Similarly, CEOs who wish to capture a dominant global market share with their products would do well to study Steve Jobs and see how he rallied all of Apple behind him.

Conceptual Integrity and the Design of Design

The Mythical Man Month is Frederick P. Brooks’ famous book about software engineering, read by computer science students around the world. Of course, it’s not immediately obvious from the title what the book is going to be about, but the title of his latest book, The Design of Design, pretty much says it all. Like The Mythical Man Month, The Design of Design is a collection of essays, but unlike its predecessor, it is not a computer book per se. Rather than focus solely on the design of computer systems, Brooks talks about design in general, exploring the possibility that widely disparate design disciplines such as music and art share elements in common with software. Brooks brings to bear his many years of experience building everything from houses to computer architectures, sharing his insight and opinions on a wide variety of fascinating topics, ranging from open source, to the design rationale behind his vacation beach house, to empiricism versus rationalism. The one overarching theme that ties all these essays together is the idea of conceptual integrity – which Brooks defines as consistency, coherence, and uniformity of style. This is the glue that holds a design together.

Brooks argues that all great works have conceptual integrity and that this holds true across all design mediums, not just computer system design. He uses Reims cathedral to illustrate this point:

“most European cathedrals show differences in plan or architectural style between parts built in different generations by different builders… So the peaceful Norman transept abuts and contradicts the soaring Gothic nave… Against these, the architectural unity of Reims stands in glorious contrast. The joy that stirs the beholder comes as much from the integrity of the design as from any particular excellences. As the guidebook tells, this integrity was achieved by the self-abnegation of eight generations of builders, each of whom sacrificed some of his ideas so that the whole might be of pure design.” 1

This pattern of design purity being achieved through a single unifying idea can be found everywhere. The sonata form has existed in classical music for centuries, and has been one way for a composer to structure their music around a central theme that is first introduced in the exposition, explored and contrasted in the development, and finally revisited in the recapitulation. The most famous example of this form is the first movement of Beethoven’s fifth symphony. Any great piece of literature will also be structured around a few central themes that underpin the entire story: Shakespeare explores the age-old question of fate versus free will in Macbeth, as does Tolstoy in War and Peace. Likewise, a good photograph will capture a particular mood or moment in time. Attention is drawn to the main subject of the composition by careful cropping, so that the subject is not lost among the noise of a busy background. Everything that remains must visually support and complement this subject.

Similarly, good software (and hardware) is deliberately built around a set of strong underlying ideas. Eric Raymond talks at length in his book, The Art of UNIX Programming, about the design philosophy that shaped UNIX:

“The UNIX philosophy originated with Ken Thompson’s early meditations on how to design a small but capable operating system with a clean service interface. It grew as the Unix culture learned things about how to get maximum leverage out of Thompson’s design. It absorbed lessons from many sources along the way… The UNIX philosophy is bottom-up, not top-down. It is pragmatic and grounded in experience.” 2

The end result is a clean and elegant system centered around the command line interface. Programs were built to do one task, and to do it well. Furthermore, they were made to play well with others: The output of one tool would serve as the input to another. To facilitate this, programs were designed to run in batch mode, with data stored in transparent text formats (compared to the opaque binary formats favored by Windows applications). Tools such as awk and sed were created to provide strong text processing capabilities. By adhering to these principles, a strong foundation was built that fostered extensibility and flexibility, allowing programs to be combined in powerful ways that would not otherwise have been possible.

One of the reasons conceptual integrity produces such good designs is that the systems themselves are easier to build. Conceptual integrity provides a rough roadmap of what the finished product will look like; in a sense, all the difficult decisions have already been made. If you think of a design as a tree, with the final product at the root and each branch representing a set of possible design decisions, then conceptual integrity is what allows us to prune this tree, which is theoretically infinite in size. Not only will the end product be easier to build and maintain, it will be easier to use as well, because it will be consistent in structure.

Why is this a good thing? States Brooks: “Consistency is reinforcing and self-teaching, because it confirms and encourages our expectations… It aids the comprehensibility of the design” 3. This allows an end user to rely more on their own intuition to figure out how a program works rather than having to read a thousand page manual. For instance, the desktop metaphor pioneered by Xerox PARC, which we all take for granted today, allows any user to easily manipulate the files and folders on his machine with just a mouse. It is not a difficult metaphor to grasp, and as a result can readily be taught even to the most inexperienced computer user. Now imagine a system that does not consistently apply this metaphor: Dragging a file onto the recycle bin makes a copy of the file, and dragging one window on top of another closes the one on the bottom. This may sound like a contrived scenario, but there are such poorly designed systems out there. In a previous article, I rant and rage about the horribly inconsistent and illogical design decisions in VB Script.

Products that lack conceptual integrity are everywhere and easy to spot: They are bloated, ill-conceived, and difficult to use. Features are haphazardly slapped together, often by the clueless non-technical middle management that you’d find in a Dilbert comic. Corporate intranet sites, bloatware found on new PCs, and websites of non tech companies are rife with examples of this. A look at the code base is revealing as well. Take, for example, a poorly built web application. Instead of a clean and modular architecture with orthogonal functions, we find code in the business logic layer that returns HTML, and insert functions in the database layer that also delete and update rows in other tables. Needless to say, maintaining such a code base is a nightmare, and is something that should be avoided. The goal, then, is to strive for conceptual integrity in our designs. Brooks’ latest book provides a great starting point for discussion, ideas, and inspiration.


1 Brooks, Frederick. “Achieving Conceptual Integrity”, The Mythical Man Month
2 Raymond, Eric. The Art of UNIX Programming
3 Brooks, Frederick. “Esthetics and Style in Technical Design”, The Design of Design

ASP and VBS, how do I love thee, let me count the ways

Over the past year I have had to work on a large legacy ASP code base spanning millions of lines of code. Not surprisingly, I have developed a deep and passionate hatred of VB Script as a result. This angry rant is the culmination of months of hate, pain, and tears. Now, I don’t want to come off as too negative, so before I begin I will say that VB Script does have a few positives: It is positively a great case study in how not to design a scripting language, and it positively inspires developers (myself included) to think of creative ways to migrate and port the code base over to a language that doesn’t suck.

So without further ado, here is a list, in no particular order, of why VB Script is garbage.

  • If statements don’t short circuit – In most languages, the conditionals in an if statement are evaluated in order from left to right. With a logical AND, if any of them evaluates to false, none of the conditionals to its right are evaluated. This makes sense. One common usage of an if statement is to first check if an object is null, and if it’s not, to then check the value of one of its members. For example in C#:

    if (someObject != null && someObject.memberValue == 5)

    This of course is not possible in VB Script. The second conditional would cause a null reference exception if someObject were null. This forces the developer to write two separate if statements where one would suffice. Words cannot describe how mind numbingly stupid this is. I’m scratching my head trying to figure out the rationale behind this design decision. Perhaps the designers felt that developers might want to write conditionals that had side effects. For example, perhaps a conditional would call a function that updates some global variable, and this needed to happen every time the if statement ran. Needless to say, this is a terrible coding practice. Evaluating an if statement should never have side effects, as this causes a testing and maintenance nightmare that is a ripe breeding ground for bugs.
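For comparison, Javascript (like C#) does short circuit, so the null check safely guards the member access:

```javascript
// Javascript's && short circuits: when someObject is null, the
// right-hand operand is never evaluated, so no error is thrown.
var someObject = null;
var matches = (someObject !== null && someObject.memberValue === 5);
console.log(matches); // false, with no null reference error

// || short circuits too, which makes fallback defaults easy:
var settings = null;
var effective = settings || { memberValue: 0 };
```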

  • Subroutines and functions are redundant – A sub in VB Script is just a function that doesn’t have a return value. In other words, it simply returns void. There is really no need for this distinction. I understand that subs and functions derive from BASIC, VB Script’s ancient predecessor, but BASIC is decades old. Perhaps there was a good reason for this architecture back then. If I had to hazard a guess, I’d suspect that the BASIC compiler could perform optimizations based on whether a sub or a function was called. Decades later, there is simply no excuse not to trim the fat. It’s sloppy and it’s lazy. This design decision makes even less sense because you can declare functions that have no return value, making the existence of subroutines about as useful as those phonebooks that they keep delivering to my door.
  • Calling a subroutine is ugly – Let’s say you have  a FooBar subroutine that takes in two parameters.    There are two ways of invoking this subroutine, but both of them are ugly:
    Call FooBar(1, 2) 
    FooBar 1,2 

    Why the hell can’t you just call it like you would a function?


    Oh that’s right, because then subs would be even more indistinguishable from functions, and we wouldn’t want that!   Then we might have to get rid of subs altogether.    That’d make far too much sense.

  • WEND – Yes, there is actually a statement called WEND in VB Script. Let that sink in for a moment. It is used to indicate the end of a block of code in a while loop. In addition to being a hideously ugly statement, it is inconsistent with the rest of the language. For loops use “Next”. If statements use “End If”. “End Sub” and “End Function” are used to denote the end of a subroutine and a function, respectively. If I may paraphrase Frederick P. Brooks in his excellent book, The Design of Design, the key hallmark of a well designed product is conceptual integrity.1 Conceptual integrity gives the product a consistent and easily comprehensible design, which in turn allows users to easily and intuitively learn and use the product, and more importantly, remember how to use it years later.

    VB Script does not achieve any semblance of conceptual integrity at all. Instead, it takes exactly the opposite approach, using widely divergent syntax for its statement blocks. Again, this syntax is taken directly from BASIC, but the designers had a chance to make a clean break from its predecessor. It’s a terrible excuse that spawned a terrible language.

  • & is used to concatenate strings – This is not a good choice for a string concatenation operator. Am I being petty and subjective? No. & is almost universally used in other languages to denote some kind of AND operation. It looks awkward when used to concatenate strings.
  • ‘ is used to denote comments – Again, ‘ is almost universally used in all languages to denote some kind of string or character.      The designers could have picked a semantically better choice.
  • Multi line string concatenation cannot handle comments – To continue a string concatenation across multiple lines in VB Script, you use an ampersand followed by the underscore line continuation character. Ignoring the fact that this is hideously ugly, VB Script chokes when you try to add comments. Take a look at the following code:
    Dim foobar
    foobar = "hello" & _ 'this is a comment
        "world" 'this is another comment

    The first comment causes a syntax error, because nothing is allowed to follow the line continuation character. The equivalent C# code compiles and runs just fine. Typically, you’d want to break up a string into multiple lines to help with readability. To further assist in elucidation, it’s conceivable that you’d want to add a comment next to each line. I’m not sure if this is disallowed in VB Script because the comment parsing code is awful, or if this was an intentional design decision. I’m also not sure which of those two is worse.

  • Variables are not case sensitive – When VB Script first appeared in 1996, most languages were already case sensitive. There are a number of naming conventions that are based on capitalization. Not having a case sensitive language means that not only are these naming conventions impossible, but different programmers will capitalize variables differently, often in the exact same function, making for code that makes your eyes bleed.
  • Option explicit is off by default – Option explicit forces all variables to be declared before they can be used. By default, this is OFF in VB Script. You have to manually set this in your code in order for it to take effect. VB Script came out in 1996. By that time, requiring variables to be declared before being used was the default for most languages. It was just common sense. Not making this the default makes it possible to introduce a large number of easily preventable bugs. Part of good design is setting sensible defaults and encouraging best practices. Allowing programmers to make typos and then waste hours trying to track down the resulting bugs is asinine. Of course, no good programmer would ever not use option explicit in the first place. Actually, a good programmer would probably be using a better scripting language altogether.
  • Empty, Null, Nothing – Why are three separate keywords needed to indicate a variable does not have a value? A good design is achieved not when nothing more can be added, but when nothing more can be taken away. Having all these useless and unnecessary distinctions causes lots of useless and unnecessary bugs when a developer confuses Empty with Null. Languages that came out before VB Script, such as C and Java, get away with just using NULL and have not been any worse off as a result. Perhaps you have heard of these moderately successful languages before.
  • IsNumeric(Empty) evaluates to true – VB Script is a scripting language, which means it’s not very strongly typed. Scripting languages in general tend to play fast and loose with types, to facilitate rapid prototyping and easy-to-write code. However, they typically provide GOOD type checking functions so that strong type checking CAN be done if necessary. A GOOD type checking function would not return true when checking if Empty is numeric. What the hell were the designers even thinking? Did they even stop to think about the use cases for IsNumeric? In what scenario would a programmer trying to validate data want IsNumeric(Empty) to evaluate to true? In every situation I can think of, I would never want Empty to be considered a number. This terrible design decision forces programmers to do two separate checks, one for IsEmpty, then another for IsNumeric, instead of just one.
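To be fair, Javascript has a close cousin of this trap: isNaN("") is false, because the empty string coerces to the number 0. A stricter check has to reject empty input explicitly. A sketch, where isStrictlyNumeric is a hypothetical helper name:

```javascript
// isNaN("") is false because "" coerces to 0, much like
// IsNumeric(Empty) returning True in VB Script.
console.log(isNaN("")); // false

// A stricter validator: reject empty and whitespace-only strings,
// then require a finite numeric value.
// isStrictlyNumeric is a hypothetical helper, not a built-in.
function isStrictlyNumeric(value) {
    if (typeof value === "string" && value.trim() === "") return false;
    return !isNaN(parseFloat(value)) && isFinite(value);
}
```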
  • Functions can be called without parentheses – No big deal right? Not quite. This makes the following innocuous looking line of code completely ambiguous:

    x = y

    “y” could either be a function or a variable. Who knows? I guess the VB Script designers wanted to keep programmers on their toes.

  • Let and set are useless and unnecessary syntax constructs – “let” is an optional keyword that is used to assign values to primitives. Since it’s optional, why does it even exist in the first place? “set”, on the other hand, is a required keyword that is used to assign values to an object. For example: “set object = new [classname]”. Why did VB Script decide to make a useless distinction between primitives and objects in this manner? Were the designers afraid that programmers would not be able to tell when a variable in question was an object or a primitive? Wouldn’t the “new” keyword suffice (as it does in Java and Javascript, both of which appeared before VB Script)? If the VB Script designers were that worried, why is it not possible to tell if y is a function or a variable (see previous example)? You’d think that’d be cause for concern as well.
  • Setting the return value in a function does not terminate the function. You must call “Exit Function” to do so – In every other language, there is typically a one line “Return x” statement that causes a function to return the value and stop execution. Not so in VB Script. They decided one line wasn’t good enough. You have to set the return value, and then manually exit the function. For example, if you have a function FooBar and you want to return the value “x”, you’d need two statements: “FooBar = x” followed by “Exit Function”. It’s as if the VB Script designers wanted to add useless filler to their language to make it even uglier. Mission accomplished. Talk about a pyrrhic victory.
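Compare this with Javascript, where a single return statement sets the result and terminates the function in one step:

```javascript
// In Javascript, return both produces the value and exits
// immediately; no separate "Exit Function" step is needed.
function fooBar(x) {
    if (x > 0) return "positive"; // execution stops here when x > 0
    return "non-positive";
}
```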
  • You cannot declare a variable and set its value in the same line – This is something I think every language on the planet supports. You’d think the syntax would look something like this:
    Dim foobar = 5

    Nope!   Syntax error.    I guess the designers were looking for more ways to differentiate VB Script.  Not satisfied with requiring two lines of code to return a value in a function, they decided declaring and setting a variable in one line would be disallowed as well.   To be fair, you can get around this limitation by using the colon, which allows you to execute multiple statements on the same line.   You could for example write

    Dim foobar : foobar = 5

    But that’s not quite the same thing as having built in support, and it’s also ugly as hell.

  • VB Script Regex does not support lookahead/lookbehind – Ok, this is a nitpick, since a lot of other languages don’t support this either. But given that this is a rant, I decided to include this one in the list for completeness. Catharsis cannot be achieved otherwise.
  • No built in support for inheritance – VB Script provides a class construct, but only does so in a half-assed way that does not allow inheritance. So why even bother having a class construct in the first place? Javascript, a language that came out earlier, does not have this limitation.2

    If I may digress for a moment, this is no excuse for developers not to use inheritance in VB Script. To quote Steve McConnell from his book, Code Complete:3

    "Programmers who program in a language limit their thoughts to constructs that the language directly supports. If the language tools are primitive, the programmer's thoughts will also be primitive....   Programmers who program into a language first decide what thoughts they want to express, and then they determine how to express those thoughts using the tools provided by their specific language."

    So yes, technically you can do inheritance in VB Script, but you have to put in the effort to hack it in. Of course, why not put in the extra effort and just rewrite all the code from scratch using any of the other languages out there, which are all vastly superior? You know it’s inevitable anyway.

  • Dim Array(3) creates an array of size 4 – Again, this is not intuitive. You’d expect Dim Array(3) to create an array that holds 3 objects. Instead, the 3 denotes the largest index value of the array. This flies in the face of normal convention. I’m not sure if the designers were being intentionally defiant or simply looking for ways to make VB Script different. Whatever the case, this syntax is completely boneheaded.
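To make the inheritance gripe above concrete, here is roughly what Javascript’s built-in prototypal inheritance looks like – the very thing VB Script’s class construct cannot do. This is just an illustrative sketch (the Animal/Dog names are made up for the example):

```javascript
// Base "class": a constructor function plus methods on its prototype.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + " makes a sound";
};

// Derived "class": reuse the parent constructor and chain the prototypes.
function Dog(name) {
  Animal.call(this, name);                          // run parent constructor
}
Dog.prototype = Object.create(Animal.prototype);    // inherit methods
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () {                 // override the method
  return this.name + " barks";
};

var rex = new Dog("Rex");
console.log(rex.speak());           // "Rex barks"
console.log(rex instanceof Animal); // true
```

Nothing here is syntactically exotic; the language supports delegation out of the box. In VB Script the closest you can get is manually forwarding every method call to a contained “parent” object.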


Because VB Script and ASP go together like peanut butter and jelly, no rant against VB Script would be complete without an obligatory rant against ASP. Most developers who have ever worked on an ASP code base know the horror that is include file hell. Languages that support include files typically provide a way of checking whether a file has already been included. In C, you put all your code inside an #ifndef directive and #define an identifier to prevent the code from being included twice. Not providing such a mechanism makes code a nightmare to maintain as you navigate the maze of include dependencies. Most companies (that still use ASP) have simply resorted to using one giant include file that includes everything, which defeats the whole purpose of include files in the first place.
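The guard idea is simple enough to fit in a few lines. Here is a toy sketch of “include once” semantics in Javascript, mimicking what C’s #ifndef guards give you (the includeOnce helper and the file name are invented for illustration – ASP provides nothing like this):

```javascript
// Track which "files" have been included; a guard, like C's #ifndef.
var included = {};

function includeOnce(name, body) {
  if (included[name]) return;   // already included: skip it
  included[name] = true;
  body();                       // first inclusion: actually run the file
}

// A plain function stands in for the contents of an include file.
var loadCount = 0;
var utilFile = function () { loadCount++; };

includeOnce("util.asp", utilFile);
includeOnce("util.asp", utilFile);  // second include is a no-op

console.log(loadCount); // prints 1
```

With a mechanism like this, including the same file from two different places is harmless, and the giant include-everything file becomes unnecessary.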

Also, ASP implements include files in a way that encourages abuse. Include statements can appear anywhere, allowing truly unreadable code to exist. Witness the following (note that I had to add a space in the include directives, since WordPress would otherwise treat them as actual HTML comments. Despite downloading various Raw HTML plugins for WordPress, I have not yet found a workaround. Just remember to remove the extra space should you want to try this yourself):

<% Dim X : X = True %>
< !-- #include file="inc_if.asp" -->
One<br />
< !-- #include file="inc_else.asp" -->
Two<br />
< !-- #include file="inc_endif.asp" --> 

Inc_if.asp contains:

<% If x = true then %> 

Inc_else.asp contains:

<% Else %>

Inc_endif.asp contains:

<% End If %>

Sure enough, this code results in “One” being printed out in the response. Granted, only the most sick and depraved person who had already given his two weeks’ notice would write code like this intentionally, but allowing unreadable code such as this to exist in a language is inexcusable. If a loaded gun had the safety on, would you leave it in a room with an unattended baby? Hell no. In the case of ASP, the “safety” is nothing more than relying on programmers to know better.

End Rant

In this long and rambling diatribe, I have repeatedly used the word ugly. Why do I make such a big deal out of this? Sure, ugly code can be written in any language. That cannot be argued. However, a clean and elegantly designed language will by its very nature help encourage clean and elegant code. An ugly language such as VB Script, however, encourages ugly code. In fact, sometimes it even demands it. I noted quite a few examples of VB Script requiring multiple lines of code for a task that other languages can do in one. Clean code can be written in VB Script, but it requires far more discipline and effort. Most developers won’t even bother, or will have long ago given up. The moral of this story is: don’t use ugly languages unless you absolutely have to (regular expressions, for example, are hideously ugly, but compensate by offering powerful functionality and expressiveness. VB Script, however, has no real redeeming qualities whatsoever).

1 From the essay “Esthetics and Style in Technical Design” in The Design of Design by Frederick P. Brooks.

2 Javascript, being a prototype-based programming language, does not provide the classical inheritance found in object-oriented languages such as C++ or Java; it uses prototypal inheritance instead.

3 McConnell, Steven C. 2004. Code Complete, Second Edition. p. 68.

Temporary workarounds are not so temporary

In a perfect world, development schedules would be based on realistic estimates, with plenty of buffer time factored in for the unexpected contingencies that always arise. But the real world is never ideal. Shit happens. Development cycles end up being too short and inflexible. Even worse, deadlines are oftentimes determined by a completely arbitrary release date.

As a result of all this, everyone cuts corners to meet deadlines. From a development perspective, this manifests itself as hastily written code: sprawling thousand-line functions, “magic numbers” scattered everywhere*, far too much inline CSS, hard-coded English strings (which make localization a pain to deal with later), code layers that bleed all over one another (the business logic layer outputting HTML and Javascript, the UI layer performing business logic checks), and of course, lots of hidden bugs everywhere. It’s not a pretty picture.

A fun thing to do on an old code base is to search for the word “hack” and read all the hilarious comments that crop up. These comments typically start out with a warning along the lines of: “This is a horrible hack!”, “UGH hack hack bad bad”, or “DANGER! Hack!” This is followed by an explanation of why the code in question is, to put it nicely, less than optimal. I don’t think anybody who has ever checked in code like that expected their changes to be permanent; they were meant as temporary band-aid fixes to meet a deadline. Unfortunately, many of the comments I see date back many years. Oops.

It’s not hard to see why. You check in the code, promising yourself you’ll come back and revisit it at some later point to do some badly needed cleanup, refactoring, or, in the worst case, a complete rewrite. But inevitably there are new features to work on, bug fixes to make, meetings to attend, and pretty soon you are completely sidetracked. After all, if the current feature you’re cutting corners on is part of an unforgiving development cycle, why would future development cycles be planned any differently?

Worse, each passing day makes a refactor that much riskier. Think about it – the easiest bug fixes to make are the ones you do a few days after you’ve completed a feature. Your brain doesn’t have to make a “context switch”; everything is still fresh in your memory. As time goes on, however, you grow less familiar with the code in question, and you’re more likely to introduce new bugs any time you modify it. The problem is compounded by the fact that new functionality will invariably be built on top of the old code. In the worst case, a temporary hack becomes an integral cornerstone of the system architecture, and you’ll need to be cognizant of a whole set of complex dependencies that simply didn’t exist when the code was first written. After a certain point it becomes more expensive and risky to refactor or rewrite a broken piece of code than to simply leave it as is.

It’s also worth mentioning the political barrier to rewrites and refactors: upper management, sales, and marketing don’t really care about clean and elegant code, especially if it comes at the expense of more tangible things, such as a shiny new feature. It’s difficult to make a compelling PowerPoint slide explaining to customers that the new version of a product uses 30% more design patterns. This problem is mitigated if you work for a good tech company, but not everyone is so fortunate.

The bottom line here is that it is wishful thinking to hope that a temporary hack will be anything other than permanent. So what’s a developer to do? As the cliche goes, anything worth doing is worth doing right. Great, but even if you work weekends, there are only so many hours in a day. Given an inflexible deadline, you’ve got to figure out what truly matters. Luckily, tough decisions such as these are why upper management exists. Transparency and honesty go a long way; all you can do is give them enough information to make an informed decision. The important thing is to emphasize the tradeoff between high-quality code and being 100% feature complete by a given date.

Obviously, everyone would be happy if bug-free code shipped on the feature complete date. But in most cases that’s not possible. Management may not be happy to hear that kind of news, but it’s a safe bet they’d be far more upset to find out later on down the line that you were unable to deliver on what you promised. The choices involve some combination of pushing back the release date, slipping certain features (or slipping/modifying certain requirements), and of course, scheduling time for the inevitable bug fix patches.

If shoddy code is knowingly shipped, the key thing is to stress the importance of a refactor. As I mentioned before, non-techies typically don’t grasp the benefits of a well written piece of code, so it is your duty to make sure that they do. A well architected solution not only has fewer bugs, it will also be flexible enough to accommodate future requirements, making subsequent dev work that much easier. The downside is that it obviously takes longer to come up with a good solution. However, this is a one-time price paid upfront. In contrast, bad code is bug ridden and inflexible. New functionality built on such a shoddy foundation will take longer to write and be buggier as well. This is a steep price that will be paid repeatedly in the future.

This is exactly why any time code of dubious quality is shipped, it is imperative that you convince the powers that be to create a bug/task/feature in whatever bug tracking / project planning system your company uses, to make sure this code gets improved upon. Simply promising yourself that you will do it at some later, unspecified date won’t be enough; chances are it’s not gonna happen.

*“Magic numbers” are numeric values that appear in code with no indication of their meaning. Ideally, they should be replaced with a descriptive variable name. For example, instead of:

weight = mass * 9.80665;

the following would be better:

gravitationalAcceleration = 9.80665;
weight = mass * gravitationalAcceleration;