Why changing jobs every few years is good for your career


I recently quit my job. My letter of resignation was short and to the point: I had accepted an offer at another company and was giving my two weeks' notice. I read and reread my email a few times, took a deep breath, and hit send. I was now officially a short-timer, and before I knew it, my last day had come. It was a bittersweet occasion. I had already said my heartfelt goodbyes, given hugs, shaken hands, and parted ways with coworkers who had become friends. I shed a tear as I walked off into the sunset, but not once did I look back.

There are pros and cons when it comes to changing jobs. The benefits of staying in one place for the long haul are pretty self-explanatory. The awkward time spent floundering about at the start is now a distant memory. Having built up credibility, your input is actually valued. You find yourself entrusted with critical projects, given more freedom and flexibility in execution, climbing the corporate ladder with every success. The sky is the limit. So I'm going to talk about the flip side, and why you would ever want to give all that up voluntarily.

The obvious reason is unhappiness. Maybe the opportunities for growth and advancement aren't really there. Perhaps the work-life balance is nonexistent. You could have a pointy-haired boss. The technology stack could suck. The company could be on a downward spiral: it lost its VC funding and now the employees are leaving in droves. There are myriad issues that could make you want to leave.

But why would you ever want to leave a place where you are quite comfortable and content? Well, there's a fine line between contentment and complacency, and the longer you stay at a place, the easier it is for complacency to become complete stagnation. A change of jobs can shake things up and expose you to new ideas. Every place has its own way of doing things. Ideally, the goal should be to learn new languages, design patterns, tools, and frameworks wherever you go. It can also be quite instructive to pay close attention to the org chart. For example, how do people report up the chain of command? Do they use horizontal integration, vertical integration, or a combination of both? Also, observe the processes put into place by the company. What is the product release cycle? When there are blocking issues or showstopper bugs, how are these problems escalated? Where are the bottlenecks? There is no one-size-fits-all approach to structuring and running a corporation, so by examining these things closely, you can figure out what works well and what doesn't given a specific set of circumstances. This will allow you to assist the company by suggesting improvements and sharing your own experiences. Furthermore, this experience can refine your future job searches by helping you identify the well-run companies.

On a similar note, by experiencing a wide range of different work environments, you can quickly learn what is tolerable and what is not. For me personally, I'd never work anywhere that required a suit and tie. Outdated and obsolete technologies like classic ASP are also a complete dealbreaker. Free coffee and soda are a perk, but not a necessity. Again, this helps you narrow down your search criteria when it comes to finding a new job. In a way, this is similar to dating. Much in the same way that you cannot know what you are looking for in a significant other until you have been in at least a few relationships, you cannot really know what companies will be a good fit for you until you've worked for at least a few of them.

Which brings me to my final point. Gone are the days when an employee joins a company and works there until retirement. It is best to get used to changing jobs, because there are things out of your control, such as mass layoffs, budget cuts, and the collapse of the US real estate market, that can lead to unemployment. To the uninitiated, finding a new job can be very stressful. The mad scramble to update the resume, apply for positions, talk to recruiters, and run the interview gauntlet can be completely overwhelming*. Starting a new job is even more stressful. The ramp-up process can be unforgiving, especially in the tech industry, where one is expected to be self-sufficient and able to adapt on the fly. In my first week at a consulting gig I did at Microsoft, I had no cubicle (we had to work in the atrium until we found enough office space for the whole team), no developer image on my laptop (we had to set up our dev environment from scratch), and no account in source control (we had to email our code in zip files so other devs who did have access could check our files in). Oh, and the project was already behind schedule, so we needed to work weekends too. If you go through multiple job changes early in life, hectic starts like this will no longer faze you later on, when the stakes are higher and you have a family to feed. This will make your job transitions go much more smoothly and successfully.


*Luckily, there are many great books and resources on this. Land the Tech Job You Love is one, and it contains a lot of great advice. For example, you don't want to make a disaster preparedness kit after an earthquake hits. Likewise, you want to update your resume at least once a year, so that if a layoff or other disaster strikes, you'll be ready. Better yet, if a great opportunity arises, you'll be ready to respond immediately.



The Singularity is nearer than you think

The Singularity Is Near is a thought-provoking book. The emotions I experienced while reading it ran the whole gamut, from deeply disturbed to inspired. The author, Ray Kurzweil, is an entrepreneur who graduated from MIT and made his fortune from a whole host of inventions, ranging from musical synthesizers to OCR, text-to-speech, and speech-to-text systems, among other things. In his book, he shares his vision of the future and makes a compelling argument as to why the technological singularity is close at hand.

The singularity is defined as the point in time at which the rate of technological progress becomes too fast for the human brain to process, due to the emergence of an artificial intelligence that has surpassed human intelligence. Such an AI would be able to quickly iterate and improve upon its own design without the need for any human intervention. Not only would each iteration yield exponentially more computing power and capability, the time needed for each iteration would decrease exponentially as well. The resulting runaway AI would herald the arrival of the singularity, and by definition, no one can really say what will happen afterward.

And that to me is the worrisome part. Humanity will necessarily relinquish control of its own fate and leave it in the hands of its artificial creations. There are many who are not enthused by this prospect. At the far end of this spectrum are guys such as Ted "The Unabomber" Kaczynski, who believe that technological progress inevitably leads to an increase in human suffering and loss of freedom. While Kaczynski's actions are morally reprehensible, many of the concerns that he raises are valid. Improvements in technology often necessitate restrictions on freedom. Consider the invention of the automobile. In order to accommodate everyone being able to drive a car, bureaucracy in the form of traffic regulations and legislation had to be created, in addition to the formation of state-level DMVs. This bureaucracy limits what we can and cannot do. Before cars, we could walk wherever we pleased without needing to stay on sidewalks or heed traffic lights. Consider also the current generation of military drones that go on surveillance missions and launch remote strikes on military targets. One can only imagine that the next generation of such drones will be smaller, smarter, and stronger. Such drones would no doubt enable the government to create a 24/7 Orwellian surveillance system capable of quickly and quietly dispatching dissidents. Even corporations can harvest the personal information that we so freely post on social networks. Given that technology already impinges on our personal freedoms, it is not at all farfetched to imagine that the invention of a superintelligent AI would reduce humans to the status of household pets.

This is but one possible negative outcome. Science fiction features many robot uprising scenarios wherein the human race is completely obliterated. But what is more disturbing to me than the prospect of total annihilation is the eventual emergence of neural nanotechnology. Nanotechnology would allow us to enhance our neural pathways, vastly increasing our biological intelligence by augmenting it with machine intelligence. This would bypass one of the biggest limitations of the human brain: knowledge transfer. A concert pianist must take years of piano lessons and spend thousands of hours practicing before she can master her craft. Computers, on the other hand, can quickly copy data from one machine to another, sharing information with ease. Now imagine a world where we could simply download "knowledge modules" and install them onto our machine brains. Suddenly everyone would be able to speak multiple languages, play chess at a grandmaster level, and solve differential equations, all while having a great sense of humor. With nothing to distinguish us from one another, we would lose all individuality. It is reminiscent of the Borg collective of Star Trek, where newly acquired knowledge is quickly shared among all the drones. Such an egalitarian society seems quite dull to me. In a chapter discussing aging and death (and how technology can someday make us immortal), Kurzweil dismisses the argument that our limitations make us human. In the context of mortality, I would agree. However, in the case of our inherent knowledge transfer limitations, I feel that such limitations make life rewarding. Taking years to learn something is not a downside, but a fulfilling journey. It will be interesting to see how human/machine hybrids find purpose and meaning in the post-singularity world (assuming the robots don't kill everybody off first). Of course, just getting to that point will be troublesome.

Consider what happens as technology continues to improve and automates away tasks that previously required a lot of human intervention. Expedia and Concur destroyed the livelihood of many travel agents. Sites such as Zillow and Redfin will someday do away with most real estate agents (although why they have not succeeded yet is a different topic altogether). Grocery stores have self-checkout lanes. Retail stores use software to handle all their complicated supply chain logistics. Today there is almost no industry where computers are not utilized in some way. Now imagine what happens as artificial intelligence continues improving at an ever-accelerating pace and eliminates the need for human intervention altogether. Today, the Google driverless car has logged hundreds of thousands of miles on the road. Commercial driverless cars are soon to follow. In a few years, bus drivers, taxi drivers, and chauffeurs may all be out of a job. IBM's Watson computer beat 74-time champion Ken Jennings at Jeopardy! quite convincingly, and now IBM is using Watson as a medical diagnosis tool. How many in the medical profession will still have a job once computers outperform them? Even art is being automated. There are already AI programs that can churn out novels, songs, and paintings. Who is still going to have a job in the future? The industrial revolution put every artisan out of a job. Likewise, a similar fate awaits all humans as the technology sector continues to innovate. Some will argue that new jobs will be created by technology; however, as AI continually improves, even those will be automated away. Entire industries will be wiped out. This massive unemployment will obviously cause a lot of social upheaval. How will governments respond? Will money have any meaning in the future if nobody works for a living?

Kurzweil does not address these issues in his book, which is unfortunate, because it would have been cool to hear his insights on the matter; he has obviously given a lot of thought to the dangers that future innovations in genetics, nanotechnology, and robotics will pose. In fact, he devotes an entire chapter to this topic. Despite this, Kurzweil remains optimistic about the future, believing that we will eventually merge with our technology and transcend our humanity. Others picture a future that will be a lot grimmer for humanity. Given these two diametrically opposed viewpoints, which vision of the future will be proven correct? In the end, it may not really matter. As Kurzweil astutely points out, history has shown that progress cannot be stopped. Even a complete relinquishment of all scientific research wouldn't really work: it would require a global totalitarian regime. Of course, in order for such a regime to maintain its power, it would need to make sure it always had a technological advantage over its citizens, thus making its Luddite agenda an unsustainable self-contradiction. Even a doomsday scenario in which the entire human race was wiped out by a massive meteor, nuclear war, viral pandemic, or some other form of unpleasantry would only serve as a hard reset. Other sentient life forms would likely emerge again, here on earth or elsewhere in the universe (assuming this hasn't occurred already). The entire process would start all over again; it would appear that technology is on an inexorable march toward the future.

Where does this path lead? Kurzweil believes that the universe has an ultimate destiny and that the technological singularity is a major milestone along the way. He provides a fascinating roadmap of the journey, dividing the history of the universe into six major epochs, each characterized by the nature of information and how it replicates. Each epoch builds on the foundations of the previous one in order to generate information of increasing complexity.

The first epoch is that of "dumb" matter in the universe. A vast amount of information is encoded in every piece of matter: the number of molecules it's made of, the number of atoms in each molecule, the spin state and energy level of the electrons orbiting each atom, and so on. Matter, and the information stored within it, can replicate itself, although not efficiently. For example, a crystal consists of a precise arrangement of atoms in a lattice. As a crystal "grows," it repeats this pattern over and over again. Although not intelligent, the matter in the universe coalesces into objects of increasing complexity. Entire planets, solar systems, and galaxies are formed. From these arise the conditions necessary for biological life, leading to the second epoch.

In this second epoch, biological life encodes information about itself in its genes via DNA. DNA, of course, is itself made up of the "dumb" molecules from the first epoch. The information stored within DNA, then, represents a much higher level of abstraction. It can self-replicate much more efficiently, and even has mechanisms for error correction in the copying process. As life evolves on earth over millions of years, the first sentient life forms appear. In this third epoch, information is now encoded in the neural patterns of the brain. The invention of spoken language and a written alphabet by Homo sapiens facilitates the transmission of these patterns, which now replicate as memes. Educational institutions help preserve these memes over the centuries, allowing humans to retain and build on the knowledge of their ancestors. Standing on the shoulders of giants, scientists and engineers build the first computer (although there is much dispute as to which computer was actually first, for the purposes of this narrative we will pretend there is one clear progenitor), heralding the fourth epoch. Information is now stored in electronic circuitry, and replicating this data is a simple matter of copying bits between machines, facilitated by massive communication networks such as the internet. As the machines continue to increase their computing power, artificial intelligence comes to rival that of humanity. The singularity marks the arrival of the fifth epoch. AI begins to saturate the universe, harnessing the matter of the universe itself as computational substrate (e.g. Dyson spheres).

The sixth and final epoch that Kurzweil describes is a gradual "awakening" of the universe. In essence, the entire universe is turned into a computer and becomes self-aware. This is not to anthropomorphize the universe; the awakening will be an emergent process, wherein the inanimate universe transforms and transcends into something altogether different. Carl Sagan once said, "We are a way for the cosmos to know itself." The sixth epoch, then, represents the fulfillment and realization of that statement. This is of course all highly speculative and borders on the religious. One of the criticisms of Kurzweil is that he writes a lot of religious science fiction and that the singularity he describes is nothing more than a "rapture for nerds." Personally, I found his description of this final stage in the evolution of the universe to be quite beautiful and profound, with none of the trappings of religious dogma. Whether or not any of it comes true remains to be seen.

There are, of course, many other criticisms of Kurzweil's work. He even devotes an entire chapter of his book to addressing them. Because he makes such strong assertions, including the bold prediction that by 2045 a laptop will possess billions of times more computing power than every human brain in existence (both past and present) combined, many have told him that what he is saying either could not possibly happen, or could not happen so soon. Kurzweil points to the exponential rate at which technology is improving (referred to as the law of accelerating returns in his book), while the naysayers argue that such growth will continue only until it doesn't.

The question boils down to whether or not there are limits to our knowledge and ability. The pragmatists take the more conservative position: some things are by their very nature unknowable and undoable. The optimists feel that there are always workarounds. With regard to the singularity, the two main barriers are the hardware required to run a computer powerful enough to surpass the human brain's parallel processing capabilities and, no less important, the software that can mimic the brain. Kurzweil takes great pains to discuss the promising ideas and solutions that can be found in the research pipeline.

On the hardware side of things, one of the major problems will be all the heat generated by the increased computing power. Paradigms such as reversible computing will significantly reduce heat dissipation, allowing computing power to continue increasing at an exponential clip. Moore's law will eventually come to an end due to the fundamental physical limits on how small silicon transistors can become, but companies are already looking into what comes after silicon. A variety of technologies such as carbon nanotube computers, DNA computing, and quantum computing (just to name a few) could potentially allow Moore's law to continue unabated.

In order to take advantage of this powerful hardware, software will need to be written that can mimic the brain. Instead of inefficiently hand-coding each rule as is done in an old-fashioned AI expert system, self-emergent machine learning techniques such as genetic algorithms, neural nets, and Bayesian networks will need to be employed. At the same time, as brain scanning technology continues to improve, we can integrate what we learn from reverse engineering the brain to come up with ever more accurate models. The key here is to operate at the right level of abstraction. Consider the Mandelbrot set. It is a fractal of literally infinite complexity: rendering its boundary in full detail would take an infinite amount of time, yet a simple equation represents it in its entirety. There is strong evidence that the brain is fractal in nature. Instead of painstakingly modeling the brain by mapping every dendrite and neuron, it would be much easier to generate an accurate model of the brain by finding the right set of equations. Of course, deriving these equations is nontrivial, but the analogy illustrates why a top-down approach to the problem may work best.
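To make the Mandelbrot analogy concrete, here is a minimal sketch in JavaScript (the function name and iteration cap are my own illustrative choices, not from the book): the entire infinitely complex set falls out of repeatedly iterating the simple equation z = z² + c.

```javascript
// Tests whether the complex point c = (cRe, cIm) appears to belong to the
// Mandelbrot set by iterating z = z^2 + c and checking for escape.
function inMandelbrotSet(cRe, cIm, maxIterations) {
    var zRe = 0, zIm = 0;
    for (var i = 0; i < maxIterations; i++) {
        // z = z^2 + c, expanded into real and imaginary parts
        var nextRe = zRe * zRe - zIm * zIm + cRe;
        var nextIm = 2 * zRe * zIm + cIm;
        zRe = nextRe;
        zIm = nextIm;
        // once |z| exceeds 2, the point is guaranteed to escape to infinity
        if (zRe * zRe + zIm * zIm > 4) {
            return false;
        }
    }
    return true; // never escaped within the iteration budget
}

inMandelbrotSet(0, 0, 100); // true: the origin is in the set
inMandelbrotSet(1, 1, 100); // false: escapes after two iterations
```

A handful of lines of code generates unbounded detail, which is exactly the hope behind modeling the brain at the right level of abstraction.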

All in all, The Singularity Is Near was a great read. It is hard to categorize the book, as it contains a mix of philosophy, religion, and science. It was necessarily epic in scope given its subject matter. Topics ranged from Fermi's paradox to the speed of light to the nature of consciousness, and everything in between. There is something in it for everybody. As a software developer, I used it as a springboard for a Wikipedia binge as I looked up the machine learning techniques and paradigms he described. Anyone interested in science will find themselves learning much by googling the research articles he cites: I was personally amazed to find out that there was a naturally occurring nuclear reactor in Africa 1.7 billion years ago. There are many more of these nuggets of knowledge contained within. That alone makes the book worth the read, but more importantly, it got me thinking about all the massive change brought about by the explosive growth in technological innovation that I've seen just in my lifetime. Humans have a linear view of present-day trends: even an exponential curve is a straight line if you zoom in close enough. Hence we miss the exponential rate of change that has happened in just this past decade. Even more change is in store for the future, and it will be quite different from anything we've experienced up until now. Important discussions need to be had about this. So whether or not you agree with Kurzweil, it is worth reading his seminal work and considering its implications; it makes for deep conversation.

CIW JavaScript Specialist Certification

Last year I got my MCTS Exam 70-433: Microsoft SQL Server 2008 Database Development certification and wrote about my experience. To recap: while I doubt there is much intrinsic value in the certificate itself, the extrinsic value of setting a goal (improving my T-SQL skills) and having a test with which I could objectively measure progress was real. It also happened to be one of my annual SMART objectives, one of those corporate self-development programs. You know, the kind of stuff that HR comes up with in order to justify their continued existence: it's got a clever acronym (what exactly it stands for escapes me at the moment) and is tied to your annual performance review too.

This year, for lack of better imagination, my SMART objective was to get a JavaScript certification. In addition to having done a lot of front end development this year, I had also just finished reading all 800+ pages of Professional JavaScript for Web Developers. I searched around online for certifications and was surprised to see that there really weren't many. W3Schools has a JavaScript certification, but it's a self-proctored online exam, which means it'd be of dubious validity at best. I finally found the CIW JavaScript Specialist certification, which is administered by Prometric.

I immediately encountered some red flags. The CIW website has a maximum password length on its account creation page, which I found to be hilarious. The study guide for the exam assumed that the reader had little to no prior programming experience, and seemed to hold your hand every step of the way. I skimmed a few chapters and decided it wasn't worth my time. Much to my disappointment, the actual test proved to be almost comically easy. I had done all the practice exams the night before, and that proved to be more than sufficient. Most of the exam questions were taken almost verbatim from the questions on the practice tests, which weren't very difficult in the first place.

The questions that tested knowledge of general programming constructs such as if statements and for loops were predictably straightforward. I rolled my eyes at the multiple choice syntax questions. These were freebies: "spot the valid function declaration" and "which one of these is a valid variable name?" were my favorites. There weren't really any questions that dealt with advanced JavaScript language features; I encountered just one question about prototypal inheritance. Probably the most "difficult" part of the exam was the JavaScript API questions. These required a lot of rote memorization. For example, normally I wouldn't know the difference between the substr and substring functions and would need to rely on Google to find out. However, after spending a few hours going over the practice problems, they became a non-issue on exam day.
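For the curious, here is the substr/substring distinction that the exam expects you to memorize: substr takes a length as its second argument, while substring takes an end index (exclusive).

```javascript
var str = "JavaScript";

// substr(start, length): second argument is a character count
var a = str.substr(4, 6);     // "Script"

// substring(start, end): second argument is an end index (exclusive)
var b = str.substring(4, 6);  // "Sc"
```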

The FAQ on the CIW website indicates that the exam was recently updated to reflect major third party APIs such as jQuery. Well, it turns out there aren't actually any questions about jQuery itself on the exam. Rather, they threw in some generic questions about how to include jQuery in your web application, as well as some questions about the pros and cons of using third party JavaScript libraries.

If the MCTS Exam 70-433: Microsoft SQL Server 2008 Database Development certification is geared toward those with intermediate to advanced expertise in SQL Server, then the CIW JavaScript Specialist certification is geared toward absolute beginners. Anyone with a rudimentary knowledge of JavaScript will be able to pass it. Even those without JavaScript knowledge who have done any programming at all would probably be able to go through the practice exams the night before and pass. I wouldn't recommend getting this certification if you have to pay for it out of your own pocket, but if you can convince your company to foot the bill, then go for it. It's low-hanging fruit: it won't take long, and it can't hurt.

Professional JavaScript for Web Developers

Professional JavaScript for Web Developers is a comprehensive 800+ page tome that does a deep dive on all things JavaScript. It starts off with a brief history of the ECMAScript standard that JavaScript is derived from, before launching into the basics, teaching everything there is to know about types, statements, operators, and functions. These earlier chapters provide a solid basis of understanding that the later parts of the book build on, going into more advanced language features such as function expressions, closures, prototypes, and inheritance. In addition, the book devotes entire chapters to the typical tasks that web developers perform on a regular basis: DOM manipulation, forms, Ajax, JSON, and event handling. Along the way, it manages to cover just about everything else as well, including client detection, XPath, XSLT, and the Web Storage API. Heck, it even covers 3D graphics programming with WebGL!

When I say the book does a deep dive, I do mean deep. I had originally intended to skim the language basics sections, but ended up learning a lot about the various nuances and caveats of JavaScript. For example, due to the dynamically typed nature of the language, there are a lot of "gotchas" involved when using the equality and comparison operators. This book covers all of them. While reading the section on variables, I was surprised to learn that all function arguments are passed by value, and that copying a reference type (e.g. var obj1 = new Object(); var obj2 = obj1) merely copies a pointer to the original object on the heap. Useful information such as this is found throughout the book, and upon completing it, I found myself with a greater appreciation and understanding of the language.
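A short sketch of that pass-by-value behavior (the function and property names here are my own, not the book's): the reference itself is copied into the parameter, so mutating the object is visible to the caller, but reassigning the parameter is not.

```javascript
function setName(obj) {
    // obj holds a copy of the reference, so this mutation is visible outside
    obj.name = "updated";
    // reassigning obj only changes the local copy of the reference
    obj = new Object();
    obj.name = "replaced";
}

var person = new Object();
setName(person);
person.name; // "updated": the reassignment inside setName had no outside effect
```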

I also gained a greater appreciation of just what a pain in the ass it is to support multiple browsers. The slight differences in JavaScript implementations across browsers (or not so slight, in the case of IE) can be quite problematic for front end developers, so the book takes great pains to discuss techniques for writing cross-browser compatible code. To that end it spends an entire chapter on client detection; the sample code from this chapter is used extensively throughout the later portions of the book.

All in all, I would strongly recommend this book to anybody who does front end web development. The breadth of topics covered makes it a great reference, while the depth makes it an excellent textbook as well. Particularly valuable are the best practices found throughout every chapter. That in and of itself is reason enough to own this book. As an added bonus, it is quite funny to read about all the errors in earlier versions of IE's JavaScript engines that were either egregious bugs or blatant disregard for the official standards.

JavaScript arguments.callee

All functions in JavaScript are instances of the Function type, meaning that they are actually objects. And just like any other object in JavaScript, they can have their own set of properties and methods. Inside every function body, the arguments object contains a list of all the parameters passed in to the function. The arguments object itself has a callee property, which points to the function that owns the arguments object. This provides us with a way to reference the function being called from inside the function body itself.

Why would you want to do this? Well, this can come in quite handy if we want to define a static variable scoped to the function itself. Perhaps the function contains code that performs a particularly complex calculation or an expensive initialization that only needs to be run once. We can store the result in a static variable, which will improve response time on subsequent calls to the function. For example:

function createFoo() {
    if (typeof arguments.callee.bar != "number") {
        arguments.callee.bar = 42;  // placeholder value; perform the expensive logic here
    }

    // now we can refer to arguments.callee.bar without having to perform
    // the expensive logic again on subsequent calls
    return arguments.callee.bar;
}

Using closures to simulate encapsulation in JavaScript

JavaScript is not an object oriented programming language per se. Although everything in JavaScript is an object, there is no real class construct; objects are simply collections of key/value pairs. As a result, tenets of OOP such as classical inheritance are not directly possible in the language. Instead, JavaScript supports object reuse through prototypal inheritance, a different flavor of inheritance in which objects delegate property lookups to prototype objects. Encapsulation, another key tenet of object oriented programming, is not directly supported in JavaScript either. There are no "public", "private", or "protected" keywords.
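A minimal sketch of what prototypal inheritance looks like in practice (the object names are illustrative): a property lookup that fails on the object itself is delegated up the prototype chain.

```javascript
var animal = {
    legs: 4,
    describe: function () {
        return "an animal with " + this.legs + " legs";
    }
};

// Object.create() returns a new object whose prototype is animal
var dog = Object.create(animal);
dog.bark = function () {
    return "Woof!";
};

dog.legs;       // 4: not found on dog, so the lookup is delegated to animal
dog.describe(); // "an animal with 4 legs"
dog.bark();     // "Woof!": defined on dog itself
```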

However, it is possible to simulate private variables through closures. A closure is a function that has access to the variables of the scope in which it was created; in practice, this usually means a function defined inside another function, which can access all of the containing function's local variables. Note that any local variable defined within the containing function is not accessible outside that function, as it is no longer in scope. For example:

function Foo() {
	var bar = 5;
}

console.log(bar);

This will error, since bar is defined inside Foo, and not accessible from the global context. To get around this, we define a closure inside Foo:

function Foo() {
	var bar = 5;

	this.IncrementBar = function () {
		bar++;
	};
}

var myFoo = new Foo();

The IncrementBar() function is defined within Foo() and thus has access to the bar variable. Furthermore, because IncrementBar() is assigned to the "this" object, instantiating an instance of Foo gives the new object a public method with which it can modify bar. The bar variable is local to the Foo() constructor, and hence cannot be accessed from outside. Trying to set myFoo.bar will create a new property on the myFoo object, but will not override the bar variable declared within Foo(), which is not in scope outside the constructor and its closures. bar is, for all intents and purposes, "private" to the myFoo object. While using closures is not quite as easy as slapping a private keyword in front of bar, it achieves the same effect.
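To see the encapsulation in action, here is a quick sketch based on the Foo constructor above (the GetBar accessor is my own addition so the private state can be observed):

```javascript
function Foo() {
    var bar = 5; // "private": only reachable through the closures below

    this.IncrementBar = function () {
        bar++;
    };
    this.GetBar = function () {
        return bar;
    };
}

var myFoo = new Foo();
myFoo.IncrementBar();
console.log(myFoo.GetBar()); // 6
myFoo.bar = 100;             // creates a new, unrelated property
console.log(myFoo.GetBar()); // still 6: the closure variable is untouched
```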

The JavaScript === operator

Like most scripting languages, JavaScript is not strongly typed. Implicit type coercion occurs when two variables are compared, resulting in some interesting behaviors. A prime example of this is the equality operator ==. If either operand is NaN, the equality operator returns false. Even if both operands are NaN, the comparison still evaluates to false, because NaN is not equal to anything, including itself (similar to how comparing anything to an ANSI NULL in SQL never evaluates to true).
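The NaN behavior is easy to verify in the console:

```javascript
console.log(NaN == NaN);  // false
console.log(NaN === NaN); // false
// use the isNaN function to test for NaN instead of an equality check
console.log(isNaN(NaN));  // true
```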

Or consider the following:

var op1 = "";
var op2 = true;

if (op1 == op2)

The if statement evaluates to false. When a boolean is compared with ==, it is first converted to a number (true becomes 1), and the string operand is then converted to a number as well. The empty string converts to 0, and 0 is not equal to 1. However…

var op1 = "0";
var op2 = true;

if (op1 == op2)

In this case, the if statement also evaluates to false: true converts to 1, while "0" converts to the number 0, and 0 is not equal to 1. If op1 were "1", then the statement would evaluate to true.

To deal with all these caveats and gotchas, JavaScript provides a strict equality operator, ===, and its equivalent inequality operator, !==. These perform a strict comparison with no type coercion; if the types do not match, the result is false. Thus 0 === "0" evaluates to false. One interesting thing to note is that null == undefined is true, whereas null === undefined is not, because null and undefined are not the same type. To maintain type integrity in the code, it's considered a best practice to use === whenever possible. It can also save on performance, since no implicit conversion is performed.
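A few comparisons that illustrate the difference between the two operators:

```javascript
console.log(0 == "0");           // true:  the string is coerced to a number
console.log(0 === "0");          // false: different types
console.log("" == false);        // true:  both coerce to the number 0
console.log("" === false);       // false: different types
console.log(null == undefined);  // true
console.log(null === undefined); // false: different types
```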

Solving problems with two dimensional indexes

I recently attended a one day Couchbase conference in Seattle. It began with a brief overview of NoSQL databases and a demo of Couchbase, followed by some in-depth programming crash courses on how to use it in your code. The final presentations were given by real world Couchbase users. One talk in particular, "Scaling Geodata with MapReduce", given by Nathan Vander Wilt, was quite interesting.

GeoCouch is a service embedded into Couchbase that stores geodata (lat/long coordinates) using R-trees, which allow for highly optimized bounding box queries (find all the locations within this rectangle). GeoCouch does not support polygon queries yet, but it will provide radius support in the future. Nathan gave some examples of how to use GeoCouch to answer common questions (Where have I been? What photos have I taken in this area?). The most enlightening part of Nathan's talk was when he began discussing more unconventional uses of GeoCouch.

As Rob Pike states in "Notes on Programming in C", "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

GeoCouch essentially provides a 2-dimensional index, so many problems that can project their data in two dimensions* can suddenly be solved in creative ways. Take for instance the question, “When have I flown?” If you store a data set of altitude and time, with altitude on the Y axis and time on the X axis, the data would look like the graph below. We can safely assume that all points with a y-coordinate greater than say … 10,000 ft represent times you’ve flown on a plane. We then simply draw a bounding box that captures all these points, and we can now use GeoCouch to quickly answer the question.
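GeoCouch gets its speed from the R-tree index, but the underlying idea can be sketched as a naive bounding-box filter in plain JavaScript (the data points and the boundingBoxQuery helper below are made up for illustration):

```javascript
// Each reading is a point: x = time (hours), y = altitude (feet)
var readings = [
    { x: 1, y: 0 },      // on the ground
    { x: 2, y: 35000 },  // cruising altitude
    { x: 3, y: 36000 },  // still in the air
    { x: 4, y: 500 }     // landed
];

// Naive stand-in for a bounding box query: return all points inside
// the rectangle [minX, maxX] x [minY, maxY] by scanning every point
function boundingBoxQuery(points, minX, maxX, minY, maxY) {
    return points.filter(function (p) {
        return p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY;
    });
}

// "When have I flown?" = every point above 10,000 ft
var flights = boundingBoxQuery(readings, -Infinity, Infinity, 10000, Infinity);
console.log(flights.length); // 2
```

The linear scan above is O(n); the point of an R-tree index is that it answers the same rectangle query without examining every point.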

Another example that Nathan gave was storing a photograph's camera model (we can define an enum mapping each model to a numerical value) and rating (1-5 stars). By then drawing a bounding box around, say, all Canon photos with a rating lower than 2, we can quickly locate all our bad photos.

While I found the entire Couchbase Conference to be incredibly educational and helpful, I especially enjoyed Nathan’s talk because it contained a creative and novel problem solving approach that is not specific to just Couchbase and GeoCouch.

*Projection is a mathematical term. The simplest example is a map: a topographical map is a 2-dimensional projection of a 3-dimensional space.

Finding crap in the SQL Server database

Anyone who has maintained an old legacy code base has likely found themselves working with a decade-old database schema that now spans hundreds of tables, each of which contains dozens of columns. Trying to locate a particular column or table in the db can be particularly difficult, especially if the only clue you have is a vague inkling of what said table/column might be named. Luckily, SQL Server provides several system catalog views that hold metadata that can help you find the proverbial needle in the haystack.

We can query sys.tables and sys.columns like so:

--look for a particular table
SELECT t.name AS table_name
FROM sys.tables AS t
WHERE t.name LIKE '%needle%'

--look for a particular column
SELECT t.name AS table_name,
c.name AS column_name
FROM sys.tables AS t
JOIN sys.columns AS c
    ON c.object_id = t.object_id
WHERE c.name LIKE '%needle%'
ORDER BY table_name;

The information_schema.columns view lets you accomplish the same thing. It shares much of the same information as sys.columns (though it does not contain the object_id), and it already includes the table name, so you don't have to do a join. This is good if you are lazy and dislike typing more than necessary.

--look for a particular column, no join required
SELECT table_name,
       column_name
FROM information_schema.columns
WHERE column_name LIKE '%needle%'
ORDER BY table_name;

These queries provide a good starting point for exploring the database and all the metadata available to you. Of course, if you're too lazy to do any typing at all, you can simply open SQL Server Management Studio, click on "View" in the menu, and select Object Explorer Details. From there you can do a search. It's important to note that this searches EVERYTHING: views, stored procedures, triggers, tables, columns, and everything else that is an object in SQL Server.

If you run the profiler, you can see that it is indeed querying all the system tables, including sys.database_principals, sys.asymmetric_keys, and sys.all_objects.

Oftentimes this UI search will get the job done, although it can produce a lot of noise.

Why every developer should work for a startup at least once in their lives

Shortly before I graduated from the University of Washington, I joined the startup company Twango as a software development intern. We were a web 2.0 company whose premise was to provide a centralized place on the web where users could upload, manage, and share all their digital media with friends. Our tagline was "Share your life". Think of a more advanced version of Flickr that catered to power users and also had support for video, audio, and office documents. I was offered a full time job there after completing my degree in computer science, which I gladly accepted. Looking back, it was one of the best decisions I ever made in my life.

I got to experience a truly hands-on learning environment that was far more in-depth and educational than any of the projects I had worked on in school. This is not a knock on the University of Washington; there is only so much you can do in a quarter system when you are constrained by the course material itself. The professor is not going to deviate much from her carefully prepared course plan*. The "curriculum" at a startup is much more fluid and dynamic. Because the team itself is so small, everyone actively participates in every aspect of running the company: development, testing, design, operations, customer support, and even marketing. I had to master many different technologies and learn on the fly. This helped foster a "can do" attitude that has served me well over the years. It didn't matter what language, tool, framework, or API was needed. I was willing and able to learn whatever was necessary to get the job done.

This amazing learning experience is free from the constraints of bureaucratic processes and red tape found in larger corporations. I am not saying that processes and development methodologies are bad, only that corporations tend to adhere to them religiously, rather than exercising flexibility and common sense. At a startup, you are given a lot more freedom in what features you work on and how you design and build them. For instance, Twango was missing a private messaging feature, so I went ahead and built one. At a giant corporation, something so simple would have required multiple meetings, requirements gathering, functional specifications, and a whole lot of other hurdles. And that's assuming the higher-ups would even OK the feature in the first place. At a startup, you can pitch your idea directly to the founders. At a corporation, you as a developer are typically pigeonholed into a narrow role working on a small subset of the functionality. At a startup, you have freedom and responsibility over the entire code base. It sounds daunting, but it's actually quite empowering.

Of course, the best part about working for a startup? It's a lot of fun! Sadly, most people forget that work can and should be enjoyable. Older, grizzled veterans of the development industry tend to become cynical over the years and view their jobs as a way to collect their monthly paycheck. They forget the passion and enthusiasm that got them excited about computers in the first place. I'm lucky that I got to experience the startup culture straight out of school. Otherwise, I wouldn't really know what I was missing out on. As a result, I can screen the companies I interview with, and tailor my search toward the types of places that evoke the same happy feelings I had at Twango.

This is why I'd recommend that anyone who recently graduated from school go find a startup to work for. The pay will not be as competitive. You might never IPO or get bought out. But it is better to take that kind of risk while you are young; there will be plenty of time later on to find a high paying job. Trust me, the skills you acquire at a startup will make it easy to find one later. Not to mention, there are other perks. The Twango founders constantly took us out for lunch, movies, and even the occasional ski trip. These are the sort of warm fuzzy memories that are truly priceless.

*To be fair, many of the University of Washington capstone courses do a good job of giving the student more freedom over their own projects.