CIW Javascript Specialist Certification

Last year I got my MCTS Exam 70-433: Microsoft SQL Server 2008 Database Development certification and wrote about my experience. To recap, while I doubt that there is much intrinsic value in the certificate itself, the extrinsic value of setting a goal (improving my T-SQL skills) and having a test with which I could objectively measure progress was worthwhile. It also happened to be one of my annual SMART objectives, one of those corporate self-development programs. You know, the kind of stuff that HR comes up with in order to justify their continued existence: it’s got a clever acronym (what exactly it stands for escapes me at the moment) and it’s tied to your annual performance review too.

This year, for lack of a better imagination, my SMART objective was to get a Javascript certification. In addition to having done a lot of front end development this year, I had also just finished reading all 800+ pages of Professional Javascript for Web Developers. I searched around online for certifications and was surprised to see that there really weren’t any. W3Schools has a Javascript certification, but it’s a self-proctored online exam, which means it’d be of dubious validity at best. I finally found the CIW Javascript Specialist certification, which is administered by Prometric.

I immediately encountered some red flags. The CIW website has a maximum password length on its account creation page, which I found hilarious. The study guide for the exam assumed that the reader had little to no prior programming experience and held your hand every step of the way. I skimmed a few chapters and decided it wasn’t worth my time. Much to my disappointment, the actual test proved to be almost comically easy. I had done all the practice exams the night before, and that proved to be more than sufficient. Most of the exam questions were taken almost verbatim from the practice tests, which weren’t very difficult in the first place.

The questions that tested knowledge of general programming constructs such as if statements and for loops were predictably straightforward. I rolled my eyes at the multiple choice syntax questions. These were freebies. “Spot the valid function declaration” and “which one of these is a valid variable name?” were my favorites. There weren’t really any questions that dealt with advanced Javascript language features: I encountered one question about prototypal inheritance. Probably the most “difficult” part of the exam was the Javascript API questions. These required a lot of rote memorization. For example, normally I wouldn’t know the difference between the substr and substring functions and would need to rely on Google to find out. However, after spending a few hours going over the practice problems, they became a non-issue on exam day.

The FAQ on the CIW website indicates that the exam was recently updated to reflect major third party APIs such as jQuery. Well, it turns out there aren’t actually any questions about jQuery itself on the exam. Rather, they threw in some generic questions about how to include jQuery in your web application, as well as some questions about the pros and cons of using third party Javascript libraries.

If the MCTS Exam 70-433: Microsoft SQL Server 2008 Database Development is geared toward those with intermediate to advanced expertise in SQL Server, then the CIW Javascript Specialist certification is geared toward absolute beginners. Anyone with a rudimentary knowledge of Javascript will be able to pass it. Even those without Javascript knowledge who have done any programming at all could probably go through the practice exams the night before and pass. I wouldn’t recommend getting this certification if you have to pay for it out of your own pocket, but if you can convince your company to foot the bill, then go for it. It’s low-hanging fruit: it won’t take long, and it can’t hurt.

Professional Javascript for Web Developers

Professional Javascript for Web Developers is a comprehensive 800+ page tome that does a deep dive on all things Javascript. It starts off with a brief history of the ECMAScript standard that Javascript is derived from before launching into the basics, teaching everything there is to know about types, statements, operators, and functions. These earlier chapters provide a solid basis of understanding that the later parts of the book build off of, covering the more advanced language features such as function expressions, closures, prototypes, and inheritance. The book also devotes entire chapters to the day-to-day tasks that web developers perform on a regular basis: DOM manipulation, forms, AJAX, JSON, and event handling. Along the way, it manages to cover just about everything else as well, including client detection, XPath, XSLT, and the Web Storage API. Heck, it even covers 3D graphics programming with WebGL!

When I say the book does a deep dive, I do mean deep. I had originally intended to skim the sections on language basics, but ended up learning a lot about the various nuances and caveats of Javascript. For example, due to the dynamically typed nature of the language, there are a lot of “gotchas” involved when using the equality and comparison operators. This book covers all of them. While reading the section on variables, I was surprised to learn that all function arguments are passed by value, and that copying a reference type (e.g. var obj1 = new Object(); var obj2 = obj1) merely copies a pointer to the original object on the heap. Useful information like this is found throughout the book, and upon completing it, I found myself with a greater appreciation and understanding of the language.

I also gained a greater appreciation of just what a pain in the ass it is to support multiple browsers. The slight differences in Javascript implementations across browsers (or not so slight, in the case of IE) can be quite problematic for front end developers, so the book takes great pains to discuss techniques for writing cross-browser compatible code. To that end it spends an entire chapter on client detection; the sample code from this chapter is used extensively throughout the later portions of the book.

All in all, I would strongly recommend this book to anybody who does front end web development. The breadth of topics covered makes it a great reference, while the depth makes it an excellent textbook as well. Particularly valuable are the best practices found throughout every chapter; that in and of itself is reason enough to own this book. As an added bonus, it is quite funny to read about all the quirks in earlier versions of IE’s Javascript engines that were either egregious bugs or blatant disregard for the official standards.

Javascript arguments.callee

All functions in Javascript are instances of the Function type, meaning that they are actually objects. And just like any other object in Javascript, they can have their own set of properties and methods. In addition, every function has access to an arguments object, which contains a list of all the parameters passed into the function. The arguments object in turn contains a callee property, which points to the function that owns the arguments object. This provides us with a way to reference the function being called from inside the function body itself.

Why would you want to do this? Well, it can come in quite handy if we want to define a static variable scoped to the function itself. Perhaps the function contains code that performs a particularly complex calculation or an expensive initialization that only needs to be run once. We can store the result inside a static variable, which will improve response time on subsequent calls to the function. For example:

function createFoo()
{
   if (typeof arguments.callee.result != "number")
   {
      //perform the expensive logic here; it only needs to run once
      //(the value assigned is a placeholder for the real computation)
      arguments.callee.result = 42;
   }

   //now we can refer to arguments.callee.result without having to
   //perform the expensive logic again on subsequent calls
   return arguments.callee.result;
}

Using closures to simulate encapsulation in Javascript

Javascript is not an object oriented programming language per se. Although everything in Javascript is an object, there is no real concept of a class construct; objects are simply collections of key/value pairs. As a result, tenets of OOP such as classical inheritance are not really possible in the language. Instead, Javascript supports “object reuse” through prototypal inheritance, a different flavor of inheritance where objects delegate key/value lookups to prototype objects. Encapsulation, another key tenet of object oriented programming, is not directly supported in Javascript either. There are no “public”, “private”, or “protected” keywords.
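As a rough illustration of prototypal inheritance, an object can serve directly as the prototype of another via ES5’s Object.create (the objects and names here are my own invention):

```javascript
var animal = {
	legs: 4,
	describe: function () {
		return "an animal with " + this.legs + " legs";
	}
};

//Object.create makes a new object whose prototype is animal;
//dog "inherits" key/value pairs by delegating lookups to its prototype
var dog = Object.create(animal);
dog.sound = "woof";

console.log(dog.legs);        // 4, found on the prototype
console.log(dog.describe());  // "an animal with 4 legs"

//shadowing: assigning on dog does not touch the prototype
dog.legs = 3;
console.log(dog.legs);        // 3
console.log(animal.legs);     // still 4
```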

However, it is possible to simulate private variables in Javascript through closures. A closure is simply a function defined inside another function. The closure has access to all the local variables defined within the containing function. Note that any local variable defined within the containing function is not accessible outside said function, as it will no longer be in scope. For example:

function Foo()
{
	var bar = 5;
}

var myFoo = new Foo();
console.log(bar);	//ReferenceError: bar is not defined

The last line will error, since bar is defined inside Foo and is not accessible from the global context. To get around this, we define a closure inside Foo:

function Foo()
{
	var bar = 5;

	this.IncrementBar = function () {
		bar++;
		return bar;
	};
}

var myFoo = new Foo();
console.log(myFoo.IncrementBar());	//6

The IncrementBar() function is defined within Foo() and thus has access to the bar variable. Furthermore, because IncrementBar() is assigned to the “this” object, instantiating an instance of Foo gives the new object a public method with which it can modify bar. The bar variable is local to the Foo() constructor, and hence cannot be accessed from outside. Trying to set myFoo.bar will create a new property on the myFoo object, but will not override the bar variable declared within Foo(), which is no longer in scope. bar is, for all intents and purposes, “private” to the myFoo object. While using closures is not quite as easy as slapping a private keyword in front of bar, it achieves the same effect.

Javascript === operator

Like most scripting languages, Javascript is not strongly typed. Implicit type conversion occurs when two variables of different types are compared, resulting in some interesting behaviors. A prime example of this is the equality operator ==. If either operand is NaN, the equality operator returns false. Even if both operands are NaN, the comparison still evaluates to false, because NaN is never equal to NaN (similar to how comparing anything to an ANSI NULL in SQL never evaluates to true).
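A quick sketch of the NaN behavior:

```javascript
console.log(NaN == NaN);     // false
console.log(NaN === NaN);    // false, even with strict equality

//use isNaN() to actually test for NaN
console.log(isNaN(NaN));     // true
console.log(isNaN("hello")); // true ("hello" can't be coerced to a number)
```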

Or consider the following:

var op1 = "";
var op2 = true;

if (op1 == op2)
{
	//never reached
}

The if condition evaluates to false. When a boolean is compared using ==, it is first converted to a number, so true becomes 1; the string operand is likewise converted to a number, and the empty string converts to 0. Since 0 is not equal to 1, the comparison fails. However…

var op1 = "0";
var op2 = true;

if (op1 == op2)
{
	//never reached
}

In this case the if condition also evaluates to false: “0” is converted to the number 0, true is converted to 1, and 0 is not equal to 1. If op1 were “1”, then the condition would evaluate to true.

To deal with all these caveats and gotchas, Javascript introduced a strict equality operator, ===, and its equivalent inequality operator, !==. These compare both type and value: if the types do not match, the result is false. Thus 0 === "0" evaluates to false. One interesting thing to note is that null == undefined is true, but null === undefined is false. This is because null and undefined are not the same type. In order to maintain type integrity in the code, it’s considered a best practice to use === whenever possible. It also potentially saves on performance, since no implicit casting is performed.
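A few quick comparisons summarizing the difference:

```javascript
console.log(0 == "0");           // true  (the string is coerced to a number)
console.log(0 === "0");          // false (different types)

console.log(null == undefined);  // true
console.log(null === undefined); // false (different types)

console.log("" == false);        // true  (both coerce to the number 0)
console.log("" === false);       // false
```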

Solving problems with two dimensional indexes

I recently attended a one-day Couchbase conference in Seattle. It began with a brief overview of NoSQL databases and a demo of Couchbase, followed by some in-depth crash courses on how to use it in your code. The final presentations were given by real world Couchbase users. One talk in particular, “Scaling Geodata with MapReduce”, given by Nathan Vander Wilt, was quite interesting.

GeoCouch is a service embedded in Couchbase that stores geodata (lat/long coordinates) using R-trees, which allows for highly optimized bounding box queries (find all the locations within this rectangle). GeoCouch does not support polygon queries yet, but it will provide radius support in the future. Nathan gave some examples of how to use GeoCouch to answer common questions (Where have I been? What photos have I taken in this area?). The most enlightening part of Nathan’s talk was when he began discussing how to use GeoCouch in more unconventional ways.

As Pike states in Notes on C Programming, “Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.”

GeoCouch essentially provides a 2-dimensional index, so many problems whose data can be projected into two dimensions* can suddenly be solved in creative ways. Take for instance the question, “When have I flown?” If you store a data set of altitude over time, with altitude on the Y axis and time on the X axis, the data would look like the graph below. We can safely assume that all points with a y-coordinate greater than, say, 10,000 ft represent times you’ve flown on a plane. We then simply draw a bounding box that captures all these points, and GeoCouch can quickly answer the question.

Another example that Nathan gave was storing a photograph’s camera model (we can define an enum to map each model to a numerical value) along with its rating (1-5 stars). By then drawing a bounding box around, say, all Canon photos with a rating lower than 2, we can quickly locate all our bad photos.

While I found the entire Couchbase Conference to be incredibly educational and helpful, I especially enjoyed Nathan’s talk because it contained a creative and novel problem solving approach that is not specific to just Couchbase and GeoCouch.

*Projection is a mathematical term. The simplest example is a map: A topographical map is a 2 dimensional projection of a 3 dimensional space.

Finding crap in the SQL Server database

Anyone who has ever maintained an old legacy code base has likely found themselves working with a decade-old database schema that now spans hundreds of tables, each of which contains dozens of columns. Trying to locate a particular column or table in the db can be particularly difficult, especially if the only clue you have is a vague inkling of what said table/column might be named. Luckily, SQL Server provides several system views that hold metadata that can help you find the proverbial needle in the haystack.

We can query sys.tables and sys.columns like so:

--look for a particular table
SELECT t.name AS table_name
FROM sys.tables AS t
WHERE t.name LIKE '%needle%';

--look for a particular column
SELECT t.name AS table_name, c.name AS column_name
FROM sys.tables AS t
JOIN sys.columns AS c
	ON t.object_id = c.object_id
WHERE c.name LIKE '%needle%'
ORDER BY table_name;

The Information_Schema.Columns view lets you accomplish the same thing. It exposes much of the same information as sys.columns (though not the object_id), but it already contains the table name, so you don’t have to do a join. This is good if you are lazy and dislike typing more than necessary.

SELECT table_name, column_name
FROM Information_Schema.Columns
WHERE column_name LIKE '%needle%';

These queries provide a good starting point for exploring the database and all the metadata available to you. Of course, if you’re too lazy to do any typing at all, you can simply open SQL Server Management Studio, click on “View” in the menu, and select Object Explorer Details. From there you can do a search. It’s important to note that this searches EVERYTHING: views, stored procedures, triggers, tables, columns, and everything else that is an object in SQL Server.

If you run the profiler, you can see that it is indeed querying all the system tables, including sys.database_principals, sys.asymmetric_keys, and sys.all_objects.

Oftentimes this UI search will get the job done, although it can produce a lot of noise.

Why every developer should work for a startup at least once in their lives

Shortly before I graduated from the University of Washington, I joined the startup company Twango as a software development intern. We were a web 2.0 company whose premise was to provide a centralized place on the web where users could upload, manage, and share all their digital media with friends. Our tagline was “Share your life”. Think of a more advanced version of Flickr that catered to power users and also had support for video, audio, and office documents. I was offered a full time job there after completing my degree in computer science, which I gladly accepted. Looking back, it was one of the best decisions I ever made.

I got to experience a truly hands-on learning environment that was far more in-depth and educational than any of the projects I had worked on in school. This is not a knock on the University of Washington; there is only so much you can do in a quarter system when you are constrained by the course material itself. The professor is not going to deviate much from her carefully prepared course plan*. The “curriculum” at a startup is much more fluid and dynamic. Because the team itself is so small, everyone actively participates in every aspect of running the company, from development and testing to design, operations, customer support, and even marketing. I had to master many different technologies and learn on the fly. This helped foster a “can do” attitude that has served me well over the years. It didn’t matter what language, tool, framework, or API was needed. I was willing and able to learn whatever was necessary to get the job done.

This amazing learning experience is free from the constraints of bureaucratic processes and red tape found in larger corporations. I am not saying that processes and development methodologies are bad, only that corporations tend to adhere to these things religiously, rather than exercising flexibility and common sense. At a startup, you are given a lot more freedom in what features you work on and how you will design and build them. For instance, Twango was missing a private messaging feature, so I went ahead and built one. At a giant corporation, something so simple would have required multiple meetings, requirements gathering, functional specifications, and a whole lot of other hurdles. That’s assuming the higher-ups would even OK the feature in the first place. At a startup, you can pitch your idea directly to the founders. At a corporation, you as a developer are typically pigeonholed into a narrow role working on a small subset of the functionality. At a startup, you have freedom and responsibility over the entire code base. It sounds daunting, but it’s actually quite empowering.

Of course, the best part about working for a startup? It’s a lot of fun! Sadly, most people forget that work can and should be enjoyable. Older, grizzled veterans of the development industry tend to become cynical over the years, and view their jobs as a way to collect their monthly paycheck. They forget the passion and enthusiasm that got them excited about computers in the first place. I’m lucky that I got to experience the startup culture straight out of school. Otherwise, I wouldn’t really know what I was missing out on. As a result, I can screen the companies that I interview with, and tailor my search toward the types of places that evoke the same happy feelings I had when I was at Twango.

This is why I’d recommend that anyone who recently graduated from school go find a startup to work for. The pay will not be as competitive. You might never IPO or get bought out. But it is better to take that kind of risk while you are young. There will be plenty of time later on down the line to find a high paying job. Trust me, the skills you acquire at a startup will allow you to easily find one later. Not to mention, there are other perks. The Twango founders constantly took us out for lunch, movies, and even the occasional ski trip. These are the sort of warm fuzzy memories that are truly priceless.

*To be fair, many of the University of Washington capstone courses do a good job of giving the student more freedom over their own projects.

Can’t we all just get along?

Good teamwork is vital for the success of any endeavor, whether it be winning an NBA championship, getting an A on that final group presentation in school, or running a business. This is especially true in the tech industry, where the explosion of complexity confronting companies today, combined with increased globalization, makes teamwork a requirement. Just take a look at your typical web company: the development team alone is going to consist of multiple people with different specialties. No one person can possibly be an expert at everything, which makes it a necessity to work with others. Front end developers will know CSS, Javascript, and some sort of server-side scripting language. A backend developer, on the other hand, will know everything there is to know about tuning a database. Even though there will be skill overlap among developers, an entire code base can span millions of lines of code: it’s not humanly possible for just one person to maintain it all! And that’s not even looking at the marketing, operations, HR, accounting, legal, and business departments, all of which serve vital roles in the company. Everyone must work together for the company to run smoothly.

The difficulty with teamwork lies in scaling up. For example, a small startup company naturally fosters good teamwork. Because of its size, everybody must contribute by working hard and helping out others. People learn to depend on one another, forming close bonds in the process. However, this camaraderie tends to get lost as the company grows larger. A once tight knit group inevitably splits off into multiple teams. As more headcount is added, entire departments are formed, then divisions, and finally even subsidiaries! All too often, these org units all have their own bottom lines and agendas, creating an atmosphere of Machiavellian politicking. Dick Brass, a former Microsoft VP, wrote a fascinating op-ed piece that touches on this very issue.

He paints a grim picture of the internecine warfare at Microsoft. I was left with the impression that groups within Microsoft were not entirely unlike the warring city-states of medieval Italy. Teams did not play nicely with one another and were more likely to sabotage, rather than help, each other. Microsoft ClearType, a font display technology that garnered much praise, did not make it into Windows until a decade after it was invented. Why?

“It … annoyed other Microsoft groups that felt threatened by our success. Engineers in the Windows group falsely claimed it made the display go haywire when certain colors were used. The head of Office products said it was fuzzy and gave him headaches. The vice president for pocket devices was blunter: he’d support ClearType and use it, but only if I transferred the program and the programmers to his control.”

A similar situation occurred with the tablet, which Microsoft had developed long before the iPad came out. And yet, despite having the support of top management, it was essentially torpedoed by the VP in charge of Office:

“When we were building the tablet PC in 2001, the vice president in charge of Office at the time decided he didn’t like the concept. The tablet required a stylus, and he much preferred keyboards to pens and thought our efforts doomed. To guarantee they were, he refused to modify the popular Office applications to work properly with the tablet. So if you wanted to enter a number into a spreadsheet or correct a word in an e-mail message, you had to write it in a special pop-up box, which then transferred the information to Office. Annoying, clumsy and slow.”

I experienced this internal bickering firsthand when I worked as a consultant at Avanade and got to work on a fairly large scale project at the Microsoft Entertainment and Devices division. Despite multiple teams being involved and the obvious dependencies between them, help was hard to come by. Getting access to documentation, sample code, servers, or even the developers themselves was like pulling teeth. I quickly learned that the only way I could get timely responses to my nagging emails asking for documentation was to CC somebody important. Bugs that blocked other teams were not prioritized correctly and took far too long to get fixed. Not surprisingly, the project fell behind schedule. In the ensuing shitstorm, everyone tried to pin the blame on everyone but themselves. The funny thing is, all this could easily have been avoided had there been a little more cooperation.

My team was hardly blameless either. I remember in one meeting, one of the developers on another team asked us if we could squeeze in an extra bug fix for the build. Our PM said that we were already swamped and that this would require pushing the build back by at least a day. After much posturing and positioning, an agreement was reached. In the time it took to do so, I literally could have checked in the one line fix on my laptop. Instead, I kept quiet, fully aware of what was going on. We could not just give up something for nothing; we had to get something in return. Such is how the game is played.

This failure to cooperate appears to be the inevitable result of companies that have grown bloated from their own success. Individuals employed by these corporations are loyal to themselves first, their coworkers second, other teams a distant third, and the actual company dead last. I don’t want to paint too grim a picture here however. I offer just this simple observation: The most effective way for a large corporation to deal with this problem is to have a compelling vision and a strong CEO who can carry it out. This CEO must be a benevolent dictator who “speaks softly but carries a big stick”, rewarding those who work together and punishing those who don’t.

The textbook example is Steve Jobs. Walter Isaacson’s biography of him makes him appear more as a dictator than as someone who was benevolent, but he definitely got the job done. Jobs brought his innovative ideas to bear by exercising complete control over the entire end to end user experience. Despite being an asshole at times, his passion and his ability to revolutionize technology gave everyone a reason to rally behind him and set aside their differences, knowing that the end result in the long run would be well worth it. The results speak for themselves.

This is why Apple succeeded where Sony couldn’t. Sony should have been able to come up with the iPod long before Apple did. As discussed in Steve Jobs’s biography:

“But Sony couldn’t. It had pioneered portable music with the Walkman, it had a great record company, and it had a long history of making beautiful consumer devices. It had all of the assets to compete with Jobs’s strategy of integration of hardware, software, devices, and content sales. Why did it fail? Partly because it was a company … that was organized into divisions (that word itself ominous) with their own bottom lines; the goal of achieving synergy in such companies by prodding the divisions to work together was usually elusive” (p. 407, Steve Jobs by Walter Isaacson).

Because the executives at Sony placed the success of their own divisions before the success of the company as a whole, they missed the big picture and failed to think long term. The irony is that had they actually come up with a true successor to the Walkman, all their bottom lines would have profited enormously. Sony shouldn’t feel too bad though. Failure to work together is a recurring theme throughout history, from primitive tribes fighting over resources to the centuries of warfare that wracked Europe during the Middle Ages. Of course, the history books also show what happens when petty tribal rivalries are set aside: once Genghis Khan was able to “unite the clans”, he conquered most of the known world. Similarly, CEOs who wish to capture a dominant global market share with their products would do well to study Steve Jobs and see how he rallied all of Apple behind him.

Conceptual Integrity and the Design of Design

The Mythical Man-Month is Frederick P. Brooks’s famous book about software engineering, read by computer science students around the world. Of course, it’s not immediately obvious from the title what the book is about, but the title of his latest book, The Design of Design, pretty much says it all. Like The Mythical Man-Month, The Design of Design is a collection of essays, but unlike its predecessor, it is not a computer book per se. Rather than focus solely on the design of computer systems, Brooks talks about design in general, exploring the possibility that widely disparate design disciplines such as music and art share elements in common with software. Brooks brings to bear his many years of experience building everything from houses to computer architectures, sharing his insight and opinions on a wide variety of fascinating topics, from open source to the design rationale behind his vacation beach house to empiricism versus rationalism. The one overarching theme that ties all these essays together is the idea of conceptual integrity, which Brooks defines as consistency, coherence, and uniformity of style. This is the glue that holds a design together.

Brooks argues that all great works have conceptual integrity and that this holds true across all design mediums, not just computer system design. He uses Reims cathedral to illustrate this point:

“most European cathedrals show differences in plan or architectural style between parts built in different generations by different builders… So the peaceful Norman transept abuts and contradicts the soaring Gothic nave… Against these, the architectural unity of Reims stands in glorious contrast. The joy that stirs the beholder comes as much from the integrity of the design as from any particular excellences. As the guidebook tells, this integrity was achieved by the self-abnegation of eight generations of builders, each of whom sacrificed some of his ideas so that the whole might be of pure design.” 1

This pattern of design purity being achieved through a single unifying idea can be found everywhere. The sonata form has existed in classical music for centuries, and has been one way for a composer to structure their music around a central theme that is first introduced in the exposition, explored and contrasted in the development, and finally revisited in the recapitulation. The most famous example of this form is the first movement of Beethoven’s fifth symphony. Any great piece of literature will also be structured around a few central underlying themes that underpin the entire story: Shakespeare explores the age old question of fate versus free will in Macbeth, as does Tolstoy in War and Peace. Likewise, a good photograph will capture a particular mood or moment in time. Attention will be drawn to the main subject of the composition by carefully cropping it in such a way so that the subject is not lost among the noise of a busy background. Everything that remains must visually support and complement this subject.

Similarly, good software (and hardware) is deliberately built around a set of strong underlying ideas. Eric Raymond talks at length in his book, The Art of UNIX Programming, about the design philosophy that shaped UNIX:

“The UNIX philosophy originated with Ken Thompson’s early meditations on how to design a small but capable operating system with a clean service interface. It grew as the Unix culture learned things about how to get maximum leverage out of Thompson’s design. It absorbed lessons from many sources along the way… The UNIX philosophy is bottom-up, not top-down. It is pragmatic and grounded in experience.” 2

The end result is a clean and elegant system centered around the command line interface. Programs were built to do one task, and to do it well. Furthermore, they were made to play well with others: the output of one tool would serve as the input to another. To facilitate this, programs were designed to run in batch mode and to use transparent text formats (in contrast to the opaque binary formats favored by Windows applications); tools such as awk and sed were created to provide strong text processing capabilities. By adhering to these principles, a strong foundation was built that fostered extensibility and flexibility, allowing programs to be combined in powerful ways that would not otherwise have been possible.
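This composability is easy to demonstrate. The sketch below (any POSIX shell; the sample text is invented for illustration) counts word frequencies by chaining four single-purpose tools, each reading the plain-text output of the one before it:

```shell
# Count word frequencies by chaining single-purpose tools.
# Each program reads the previous one's plain-text output.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split the line into one word per line
  sort |          # group duplicate words together
  uniq -c |       # count each run of duplicates
  sort -rn        # show the most frequent words first
```

None of these tools knows anything about the others; the transparent text format flowing through the pipe is the only contract between them.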

One of the reasons conceptual integrity produces such good designs is that the systems themselves are easier to build. Conceptual integrity provides a rough roadmap of what the finished product will look like; in a sense, all the difficult decisions have already been made. If you think of a design as a tree, with the final product at the root and each branch representing a set of possible design decisions, then conceptual integrity is what allows us to prune this theoretically infinite tree. Not only will the end product be easier to build and maintain, it will be easier to use as well, because it will be consistent in structure.

Why is this a good thing? States Brooks: “Consistency is reinforcing and self-teaching, because it confirms and encourages our expectations… It aids the comprehensibility of the design” 3. This allows end-users to rely more on their own intuition to figure out how a program works rather than having to read a thousand-page manual. For instance, the desktop metaphor pioneered by Xerox PARC, which we all take for granted today, allows any user to easily manipulate the files and folders on their machine with just a mouse. It is not a difficult metaphor to grasp, and as a result it can readily be taught even to the most inexperienced computer user. However, imagine a system that does not consistently apply this metaphor: dragging a file onto the recycle bin makes a copy of the file, and dragging one window on top of another closes the one on the bottom. This may sound like a contrived scenario, but such poorly designed systems do exist. In a previous article, I rant and rage about the horribly inconsistent and illogical design decisions in VBScript.

Products that lack conceptual integrity are everywhere and easy to spot: they are bloated, ill conceived, and difficult to use. Features are haphazardly slapped together, often by the clueless non-technical middle management that you’d find in a Dilbert comic. Corporate intranet sites, bloatware found on new PCs, and websites of non-tech companies are rife with examples of this. A look at the code base is revealing as well. Take, for example, a poorly built web application. Instead of a clean and modular architecture with orthogonal functions, we find code in the business logic layer that returns HTML, and insert functions in the database layer that happen to delete and update rows in other tables. Needless to say, maintaining such a code base is a nightmare. The goal, then, is to strive for conceptual integrity in our designs. Brooks’s latest book provides a great starting point for discussion, ideas, and inspiration.
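The web application anti-pattern above can be made concrete with a small sketch. This is purely illustrative Python; the function and table names are invented and do not come from any real codebase:

```python
# --- Tangled: one function mixes logic, presentation, and storage ---
def add_order_tangled(db, order):
    db["orders"].append(order)
    db["audit"].clear()  # surprise side effect on an unrelated table
    return f"<p>Order {order['id']} saved</p>"  # HTML from the logic layer


# --- Orthogonal: each layer does exactly one job ---
def save_order(db, order):
    """Data layer: storage only, no rendering, no side effects elsewhere."""
    db["orders"].append(order)


def render_order(order):
    """Presentation layer: produces HTML only, never touches the database."""
    return f"<p>Order {order['id']} saved</p>"


def add_order(db, order):
    """Logic layer: composes the other layers without duplicating their work."""
    save_order(db, order)
    return render_order(order)
```

In the orthogonal version, each function can be understood, tested, and replaced in isolation, which is exactly what the tangled version makes impossible.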


1 Brooks, Frederick. “Achieving Conceptual Integrity”, The Mythical Man-Month
2 Raymond, Eric. The Art of UNIX Programming
3 Brooks, Frederick. “Esthetics and Style in Technical Design”, The Design of Design