Interview practice: Racing horses

This one is a puzzle question. I’m personally not the biggest fan of these types of problems and would never ask something like this in an interview. That being said, they are kind of fun, and the following problem is no exception. It’s an entertaining logic question that can give insight into a candidate’s thought process and how they approach an optimization problem.

There are 25 race horses, and you want to find the three fastest. You can only race five horses at a time, and you can’t write down their times. How many races does it take to find the three fastest horses?

Here’s how I approached it. The first thing to do is figure out all the assumptions inherent in any puzzle question. The first and most crucial assumption is that the finish times for each horse remain the same across races. Without this assumption, the task is impossible, as the fastest horse could change between races: horse #1 could beat horse #2 in the first race, but lose to it in the second race. The second assumption is that there are no ties; there is a clear ranking in finish times for the horses.

Now with those out of the way, we can approach the problem. The most obvious thing to do first is to divide the 25 horses into five groups, and then race each group. This will identify the fastest horse in each heat. Now we just race the five fastest horses from each group. The winner will be the fastest out of all 25 horses, and we will have identified the fastest horse in just six races. Let’s call this winner horse #1, and assume he came from group #1. That’s the easy part. Now the question remains, how many races do we now need to find the second fastest horse? Should we just pick the second fastest horse from race #6? Should we race all the second place finishers from the five groups as well?

At this point it would be helpful to do some whiteboarding. Let’s write down some concrete race times.

Race #1:
Horse #1 1 minute
Horse #2 2 minutes
Horse #3 3 minutes
Horse #4 4 minutes
Horse #5 5 minutes

Race #2:
Horse #6 1.6 minutes
Horse #7 1.7 minutes
Horse #8 1.8 minutes
Horse #9 1.9 minutes
Horse #10 1.95 minutes

By writing out the first two races and assigning some times we can see that it is entirely possible that all the horses in the second heat are actually faster than all of the other horses in the first heat save for horse #1. Let’s write down some more race times for the other heats.

Race #3:
Horse #11 1.1 minutes
Horse #12 1.2 minutes
Horse #13 1.3 minutes
Horse #14 1.4 minutes
Horse #15 1.5 minutes

Again, we see that for the third heat, it is entirely possible that all the horses are faster than all of the other horses in the first heat save for horse #1. At this point it may seem that it is necessary to race all the second place finishers in each heat, before racing the winner of that seventh race against the second place finisher of the sixth race. But that’s not quite true.

Race #4:
Horse #16 1.01 minutes
Horse #17 1.02 minutes
Horse #18 1.03 minutes
Horse #19 1.04 minutes
Horse #20 1.05 minutes

Race #5:
Horse #21 1.001 minutes
Horse #22 1.002 minutes
Horse #23 1.003 minutes
Horse #24 1.004 minutes
Horse #25 1.005 minutes

Race #6:
Horse #1 comes in first place at 1 minute
Horse #21 comes in second at 1.001 minutes
Horse #16 comes in third at 1.01 minutes
Horse #11 comes in fourth at 1.1 minutes
Horse #6 comes in fifth at 1.6 minutes

From this we can quickly see that the second place finisher of race #6 (horse #21 from heat 5 in our whiteboard example) is faster than the first place finishers of heats 2, 3, and 4. This means that he is also going to be faster than all the horses in heats 2, 3, and 4. We can quickly eliminate all those horses from contention for second fastest. Thus the only candidates for second fastest are the second place finisher of race #6 and the second place finisher from the heat the fastest horse came from (horse #2 from heat 1 in this case). So we race them. So far, we’ve had to do seven races.

So what about identifying the third fastest horse? Based on the same logic as above, we know that the third place finisher of race #6 (horse #16 from heat 4 in our whiteboard example) is faster than all the horses in heats 2 and 3. We do not know if this third place finisher is faster than the second or third place finisher from the winning heat (horses #2 and #3 from heat 1), or the second place finisher from the heat that the second fastest horse came from (horse #22 from heat 5 in this case). Note for the latter case, we don’t need to consider the third place finisher from that heat, because at best it could only be the fourth fastest horse overall (we know by its placement that it must be slower than the first and second place finishers in its heat and slower than the fastest horse overall). Since we don’t care about the fourth fastest horse, we can eliminate it from consideration. So, there are only four horses to consider for third fastest, and we will need to have them compete in an eighth race to find the third fastest horse overall.

Or do we? In race #7, we raced horse #2 from heat 1 against horse #21 from heat 5. In the proposed race #8, we would race horse #2 from heat 1 again. There are only five distinct horses in contention for second and third fastest, so we can combine races #7 and #8 into one. The second and third place finishers of this seventh race are the second and third fastest horses, respectively. In total, we just need seven races.

The general algorithm is:
Divide the 25 horses into groups of five.
Race each group.
Select the winner from each group and race them.
The winner from this race is the fastest horse. Let heat A denote the winning heat this horse is from, heat B denote the heat that the second place winner is from, and heat C denote the heat that the third place winner is from.
Now race horses #2 and #3 from heat A, horses #1 and #2 from heat B, and horse #1 from heat C (where #1, #2, and #3 denote the place each horse finished in its heat) in a final, seventh race.
The first and second place winners of this race are the second and third fastest horses.
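To make the counting concrete, the whole procedure can be sketched in Python (a minimal simulation of my own; the horse ids and the race() helper are illustrative, not part of the original puzzle). The code only ever observes finish order, never raw times, and a counter confirms that exactly seven races are used:

```python
import random

def fastest_three(times):
    """Find the 3 fastest of 25 horses in exactly 7 five-horse races.

    `times` maps horse id -> finish time (lower is faster); the algorithm
    only ever sees finish *order* within a race, never the times.
    """
    races = 0
    def race(horses):                    # one race: returns 5 horses in finish order
        nonlocal races
        races += 1
        return sorted(horses, key=times.__getitem__)

    horses = list(times)
    heats = [race(horses[i:i + 5]) for i in range(0, 25, 5)]  # races 1-5
    final = race([heat[0] for heat in heats])                 # race 6: heat winners
    heat_a = next(h for h in heats if h[0] == final[0])       # fastest horse's heat
    heat_b = next(h for h in heats if h[0] == final[1])       # runner-up's heat
    heat_c = next(h for h in heats if h[0] == final[2])       # third-place finisher's heat
    # Race 7: the only five candidates left for 2nd and 3rd overall.
    playoff = race([heat_a[1], heat_a[2], heat_b[0], heat_b[1], heat_c[0]])
    assert races == 7
    return [final[0], playoff[0], playoff[1]]

# Sanity check against randomly assigned, tie-free times:
t = dict(enumerate(random.sample(range(10_000), 25)))
assert fastest_three(t) == sorted(t, key=t.get)[:3]
```

Running the sanity check against random, tie-free times confirms the seven-race schedule always returns the top three in order.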

Scalatra Tutorial (Part 2): Logging, debugging, and tests

Part 1 of this tutorial is here

You can follow along by getting the code from github here: https://github.com/jieyangh/scalatra-sample-API
The git hash for part 2 is 32190b1ae5eb685f6a06eaae6cd5fa15d5cf23bd

Now that we have a barebones implementation of a web API, let’s start adding on functionality and utility.

Debugging 

Remote debugging with IntelliJ makes life much easier. Here’s how to set it up. Open the project in IntelliJ and go to Run->Edit Configurations. Hit the “+” button, choose Remote, and enter a name for the configuration (I chose “ScalatraDebug” for simplicity). The default settings are fine and will set up a debug port on port 5005:

remote debug

We will need to modify the SBT parameters accordingly, since SBT defaults to a different port. Go to File->Settings->Other Settings and select SBT from the side nav. Under IDE Settings, change the VM parameters so that it’s listening on port 5005:

-Xmx512M -XX:MaxPermSize=256M -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

SBTDebug

Launch the application in the SBT console by typing “container:start” as we did before in part 1. Once the app is running, go to Run->Debug and select the remote debugging configuration that you previously created.

Logging

Any serious enterprise-level application will need logging. Scalatra uses the logback framework by default, so that’s what we’ll use too. First we’ll need to add this dependency to the libraryDependencies sequence in build.sbt:

"ch.qos.logback" % "logback-classic" % "1.0.1"

Next we will create a logback.xml under src/main/resources. Chances are if you’re reading this you’ve used logback before. If not, the file contents are fairly self explanatory. We can modify the pattern to suit our purposes, but including the date, thread, level, logger, and actual message is as good a starting point as any:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>Captain's Log Stardate %date{ISO8601}. [%thread] %level %logger %msg
            </pattern>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Finally, we’ll need to utilize the logger in our code. Let’s add some logging to the GreetingController scala class. First let’s create the logger:

val logger =  LoggerFactory.getLogger(getClass)


And now we just need to use our logger object to log things:

logger.info("We have entered a spectacular binary star system in the Kavis Alpha sector");

Again, pretty self explanatory. Here’s the full listing for the GreetingController:

package sampleApi.controllers

import org.scalatra.ScalatraServlet
import org.slf4j.{Logger, LoggerFactory}

class GreetingController extends ScalatraServlet {
  val logger =  LoggerFactory.getLogger(getClass)

  get("/") {
    logger.info("We have entered a spectacular binary star system in the Kavis Alpha sector");
    "Hello world"
  }

  get("/:name") {
    val name = params.getOrElse("name", "world")
    "Hello " + name
  }
}

Tests

Now we’ll want to add some automated testing. ScalaTest appears to be the go-to test framework for Scala, and it provides support for Scalatra as well. This can be accomplished by adding a dependency for Scalatra ScalaTest in the libraryDependencies sequence in build.sbt:

"org.scalatra" %% "scalatra-scalatest" % "2.2.2" % "test"

Next we’ll want to create a test under src/test/scala. We’ll want our test class to extend the ScalatraSuite class, which allows us to use simple sentences to specify the expected behavior of our application. This is Behavior Driven Development (BDD). We also want our test class to mix in the “FunSuite” trait so that each test case is treated as a function value. Note that the code listed in the official Scalatra testing example didn’t compile for me because it could not resolve “FunSuiteLike”; replacing “FunSuiteLike” with “FunSuite” fixed the issue.

import org.scalatra.test.scalatest.ScalatraSuite
import org.scalatest.FunSuite
import sampleApi.controllers.GreetingController

class GreetingControllerTests extends ScalatraSuite with FunSuite {
  addServlet(classOf[GreetingController], "/*")

  test("simple get") {
    get("/bob") {
      status should equal (200)
      body should include ("Hello bob")
    }
  }
} 

The test code is also pretty straightforward. An HTTP get request for “/bob” should return a status code of 200, and the response message should include “Hello bob”. You can run the test in IntelliJ by right clicking anywhere on the test code, or by creating a test configuration in the IDE and running it via the Run menu.

Does Javascript pass by value or pass by reference?

Javascript exhibits quite interesting behavior when it passes object parameters. At first glance, it would appear that it passes everything by reference. Consider the following HTML page.

<html>
<body>
<script type="text/javascript">

    function MyObject() {
        this.value = 5;
    }

    var o = new MyObject(); 
    alert(o.value); // o.value = 5

    function ChangeObject(oReference) {
        oReference.value = 6;
    }

    ChangeObject(o);
    alert(o.value); // o.value is now equal to 6

</script>

</body>
</html>

We initialize an object o by calling the MyObject() constructor. The constructor sets the value member to 5. We then pass o into the ChangeObject() function, where it sets the value member to 6. If o were passed by value, that would mean we are passing a copy of o, and that its value member would not be changed once the function exits. However, if we open up this HTML page in a browser, the final alert(o.value) indeed prints 6. This would lead us to conclude that Javascript passes by reference.

However, things get really interesting when we look at this HTML page.

<html>
<body>

<script type="text/javascript">

    function MyObject() {
        this.value = 5;
    }

    var o = new MyObject();    
    alert(o.value); // o.value = 5

    function ChangeObjectWithNew(oReference) {
        oReference = new MyObject();
        oReference.value = 6;
    }

    ChangeObjectWithNew(o);
    alert(o.value); // o.value is still 5

</script>

</body>
</html> 

This page is identical to the last, with the slight modification that we call the ChangeObjectWithNew() function instead. ChangeObjectWithNew creates a new instance of MyObject and assigns it to oReference, and then sets this new instance’s value member to 6. If Javascript passed by reference, o would now point to this new instance, and its value member would be 6 after ChangeObjectWithNew ran. However, if you run this page in a browser, the final alert(o.value) still prints 5. How is this possible?

What is happening behind the scenes is that Javascript is passing a reference to o by value. This can be a little confusing, so think of it this way. When o is first declared and initialized, it points to the memory location of the actual object that we instantiated. When the ChangeObject function is called, it passes in a copy of this pointer that points to the exact same memory location. Thus any changes to oReference’s member variables will be reflected in o, since both point to the same spot in memory. However, when we call ChangeObjectWithNew, the line “oReference = new MyObject()” causes oReference to point to an entirely different spot in memory. Thus any further changes to oReference’s member variables will no longer be reflected in o.

A picture helps clarify things.

Here is what the situation looks like when we call ChangeObjectWithNew:

After ChangeObjectWithNew has run, note that oReference now points to a new memory location.

Since Javascript hides the implementation details of how it handles its pointers, the following C program demonstrates both passing by reference and passing a “reference” by value. ChangeObject is called with a pointer to a MyObject; this is passing by reference. ChangeObjectWithNew is called with a copy of the original MyObject pointer. As a result, reassigning the pointer inside the function and modifying the new object’s value member does not affect the original object.

#include <stdio.h>
#include <stdlib.h>

typedef struct 
{
    int value;
} MyObject;

void ChangeObject(MyObject * objReference)
{
    (*objReference).value = 6;
}
 
MyObject * ChangeObjectWithNew(MyObject * objReference)
{
    objReference = (MyObject *)malloc(sizeof(MyObject));
    objReference->value = 6;
    return objReference;  /* return the new allocation so the caller can free it */
}

int main(int argc, char* argv[])
{       
    MyObject * o = (MyObject *)malloc(sizeof(MyObject));  
    MyObject * oCopy;  //this will be a copy of the original object pointer      
    oCopy = o;

    printf("Setting o->value to 5\n");
    o->value = 5;
    printf("MyObject.value: %d\n", o->value);
    
    //now pass by reference
    printf("Calling ChangeObject with original pointer\n");
    ChangeObject(o);  //this will change o->value to 6    
    printf("MyObject.value: %d\n", o->value);  //prints 6

    //reset o->value to 5
    printf("Resetting o->value to 5\n");
    o->value = 5;
    printf("MyObject.value: %d\n", o->value);  //prints 5

    //now pass a COPY of the original pointer
    //this is how Javascript behaves, minus the memory leak
  
    printf("Passing in a copy of the pointer o to ChangeObjectWithNew\n"); 
    oCopy = ChangeObjectWithNew(oCopy);
    printf("MyObject.value: %d\n", o->value); //still prints 5

    //free the memory 
    free(o);
    free(oCopy);
    o = NULL;
    oCopy = NULL;

    printf("Press any key to continue\n");
    getchar();
    return 0;
}

The fact that Javascript passes a “reference” by value is important to keep in mind. Assuming it passes everything by reference can lead to some difficult-to-track-down bugs later on.

The Singularity is nearer than you think

The Singularity is Near is a thought-provoking book. The emotions I experienced while reading ran the whole gamut from deeply disturbed to inspired. The author, Ray Kurzweil, is a billionaire entrepreneur who graduated from MIT and made his fortune from a whole host of inventions, including musical synthesizers, OCR, text-to-speech, and speech-to-text, among other things. In his book, he shares his vision of the future and makes a compelling argument as to why the technological singularity is close at hand.

The singularity is defined as the point in time at which the rate of technological progress becomes too fast for the human brain to process, due to the emergence of an artificial intelligence that has surpassed human intelligence. Such an AI would be able to quickly iterate and improve upon its own design without the need for any human intervention. Not only would each iteration yield exponentially more computing power and capability, the time needed for each iteration would decrease exponentially as well. The resulting runaway AI would herald the arrival of the singularity, and by definition, no one can really say what will happen after that.

And that to me is the worrisome part. Humanity will necessarily relinquish control of its own fate and leave it in the hands of its artificial creations. There are many who are not enthused by this prospect. At the far end of this spectrum are guys such as Ted “The Unabomber” Kaczynski, who believe that technological progress inevitably leads to an increase in human suffering and loss of freedom. While Kaczynski’s actions are morally reprehensible, many of the concerns that he raises are valid. Improvements in technology necessitate a restriction in freedom. Consider the invention of the automobile. In order to accommodate everyone being able to drive a car, bureaucracy in the form of traffic regulations and legislation had to be created, in addition to the formation of state-level DMVs. This bureaucracy limits what we can and cannot do. Before cars, we could walk wherever we pleased without needing to stay on sidewalks or heed traffic lights. Consider also the current generation of military drones that go on surveillance missions and launch remote strikes on military targets. One can only imagine that the next generation of such drones will be smaller, smarter, and stronger. Such drones would no doubt enable the government to create a 24/7 Orwellian surveillance system capable of quickly and quietly dispatching dissidents. Even corporations can collect and harvest the personal information that we so freely post on social networks. Given that technology already impinges on our personal freedoms, it is not at all farfetched to imagine that the invention of a superintelligent AI would reduce humans to the status of household pets.

This is but one possible negative outcome. Science fiction features many robot uprising scenarios wherein the human race is completely obliterated. But what is more disturbing to me than the prospect of total annihilation is the eventual emergence of neural nanotechnology. Nanotechnology would allow us to enhance our neural pathways, vastly increasing our biological intelligence by augmenting it with machine intelligence. This would bypass one of the biggest limitations of the human brain: knowledge transfer. A concert pianist must take years of piano lessons and spend thousands of hours practicing before she can master her craft. Computers, on the other hand, can quickly copy data from one machine to another, sharing information with ease. Now imagine a world where we could simply download “knowledge modules” and install them onto our machine brains. Suddenly everyone would be able to speak multiple languages, play chess at a grandmaster level, and solve differential equations, all while having a great sense of humor. With nothing to distinguish one another, we will lose all individuality. It is reminiscent of the Borg collective of Star Trek, where newly acquired knowledge is quickly shared among all the drones. Such an egalitarian society to me seems quite dull. In a chapter discussing aging and death (and how technology can someday make us immortal), Kurzweil dismisses the argument that our limitations make us human. In the context of mortality, I would agree. However, in the case of our inherent knowledge transfer limitations, I feel that such limitations make life rewarding. Taking years to learn something is not a downside, but a fulfilling journey. It will be interesting to see how human/machine hybrids find purpose and meaning in the post-singularity world (assuming the robots don’t kill off everybody, of course). Of course, just getting to that point will be troublesome.

Consider what happens as technology continues to improve and automate away tasks that previously required a lot of human intervention. Expedia and Concur destroyed the livelihood of many travel agents. Sites such as Zillow and Redfin will someday do away with most real estate agents (although why they have not succeeded – yet – is a different topic altogether). Grocery stores have self-checkout lanes. Retail stores use software to handle all their complicated supply chain logistics. Today there is almost no industry where computers are not utilized in some way. Now imagine what happens as artificial intelligence continues improving at an ever accelerating pace and eliminates the need for human intervention altogether. Today, the Google driverless car has logged hundreds of thousands of miles on the road. Commercial driverless cars are soon to follow. In a couple of years, bus drivers, taxi drivers, and chauffeurs will all be out of a job. IBM’s Watson computer beat 74-game champion Ken Jennings at Jeopardy quite convincingly, and now IBM is using Watson as a medical diagnosis tool. How many in the medical profession will still have a job once computers outperform them? Even art is being automated. There are already AI programs that can churn out novels, songs, and paintings. Who is still going to have a job in the future? The industrial revolution put every artisan out of a job. Likewise, a similar fate awaits all humans as the technology sector continues to innovate. Some will argue that new jobs will be created by technology; however, as AI continually improves, even those will be automated away. Entire job industries will be wiped out. This massive unemployment will obviously cause a lot of social upheaval. How will governments respond? Will money have any meaning in the future if nobody works for a living?

Kurzweil does not address these issues in his book, which is unfortunate, because it would have been cool to hear his insights on the matter; he has obviously given a lot of thought to the dangers that future technological innovations in genetics, nanotechnology, and robotics will pose. In fact, he devotes an entire chapter to this topic. Despite this, Ray Kurzweil remains optimistic about the future, believing that we will eventually merge with our technology and transcend our humanity. Others picture a future that will be a lot more grim for humanity. Given these two diametrically opposed viewpoints, which vision of the future will be proven correct? In the end, it may not really matter. As Kurzweil astutely points out, history has shown that progress cannot be stopped. Even a complete relinquishment of all scientific research wouldn’t really work: this would require a global totalitarian regime. Of course, in order for such a regime to maintain its power, it would need to make sure it always had a technological advantage over its citizens, thus making its Luddite agenda an unsustainable self-contradiction. Even a doomsday scenario in which the entire human race was wiped out via a massive meteor, nuclear war, viral pandemic, or some other form of unpleasantry would only serve as a hard reset. Other sentient life forms would likely emerge again here on earth, or elsewhere in the universe (assuming this hasn’t occurred already). The entire process would start all over again; technology, it would appear, is on an inexorable march toward the future.

Where does this path lead?   Kurzweil believes that the universe has an ultimate destiny and that the technological singularity is a major milestone along the way.    He provides a fascinating roadmap of the journey, dividing the universe up into six major epochs, each characterized by the nature of information and how it replicates.    Each epoch builds on the foundations of the previous one in order to generate information of increasing complexity.

The first epoch is that of “dumb” matter in the universe.      A vast amount of information is encoded into every piece of matter:  the number of molecules it’s made of, the number of atoms in each molecule, the spin state and energy level of the electrons orbiting each atom, and so on.    Matter, and the information stored within it, can replicate itself – although not efficiently.   For example, a crystal is comprised of a precise arrangement of atoms in a lattice.    As a crystal “grows”, it repeats this pattern over and over again.       Although not intelligent, the matter in the universe coalesces into objects of increasing complexity.    Entire planets, solar systems, star systems, and galaxies are formed.   From these arise the conditions necessary for biological life, leading to the second epoch.

In this epoch, biological life encodes information about itself in its genes via DNA. DNA, of course, is itself made up of the “dumb” molecules from the first epoch. The information stored within DNA, then, represents a much higher level of abstraction. It can self-replicate much more efficiently, and even has mechanisms for error correction in the copying process. As life evolves on earth over millions of years, the first sentient life forms appear. In this third epoch, information is now encoded in the neural patterns of the brain. The invention of a spoken language and a written alphabet by Homo sapiens facilitates the transmission of these patterns, which now replicate as memes. Educational institutions help preserve these memes over the centuries, allowing humans to retain and build on the knowledge of their ancestors. Standing on the shoulders of giants, scientists and engineers build the first computer (although there is much dispute as to which computer was the first one, for the purposes of this narrative we will pretend there is only one clear progenitor), heralding the fourth epoch. Information is now stored in electronic circuitry, and replication of this data is a simple matter of copying bits between machines. This is facilitated by massive communication networks such as the internet. As the machines begin to increase their computing power, artificial intelligence rivals that of humanity. The singularity marks the arrival of the fifth epoch. AI begins to saturate the universe, harnessing the matter of the universe itself as computational substrate (e.g., Dyson spheres).

The sixth and final epoch that Kurzweil describes is a gradual “awakening” of the universe. In essence, the entire universe itself is turned into a computer and becomes self-aware. This is not to anthropomorphize the universe; this awakening will be an emergent process, wherein the inanimate universe transforms and transcends into something altogether different. Carl Sagan once said, “we are a way for the cosmos to know itself”. The sixth epoch, then, represents the fulfillment and realization of that statement. This, of course, is all highly speculative and borders on the religious. One of the criticisms of Kurzweil is that he writes a lot of religious science fiction and that the singularity he describes is nothing more than a “rapture for nerds”. Personally, I found his description of this final stage in the evolution of the universe to be quite beautiful and profound, with none of the trappings of religious dogma. Whether or not any of it comes true remains to be seen.

There are of course, many other criticisms of Kurzweil’s work.    He even devotes an entire chapter in his book to address them.    Because he makes such strong assertions, including the bold prediction that in 2045 a laptop would possess billions of times more computing power than every human brain in existence (both past and present), many have told him that what he was saying either could not possibly happen, or that it could not happen so soon.    Kurzweil points to the exponential rate at which technology is improving (referred to as the law of accelerating returns in his book), while the naysayers argue that such growth will continue until it doesn’t.

The question boils down to whether or not there are limits to our knowledge and ability. The pragmatists take a more conservative position on this: some things are by their very nature unknowable and not doable, while the optimists feel that there are always workarounds. With regards to the singularity, the two main barriers are the hardware required to run a computer powerful enough to surpass the human brain’s parallel processing capabilities, and no less important, the software that can mimic it. Kurzweil takes great pains to discuss the promising ideas and solutions that can be found in the research pipeline.

On the hardware side of things, one of the major problems will be all the heat generated by the increased computing power. Paradigms such as reversible computing will significantly reduce heat dissipation, allowing computing power to continue increasing at an exponential clip. Moore’s law as we know it will come to an end due to fundamental physical limits on how small silicon transistors can become, but companies are already looking into what’s next after silicon. A variety of technologies such as carbon nanotube based computers, DNA computing, and quantum computing (just to name a few) will potentially allow Moore’s law to continue unabated.

In order to take advantage of this powerful hardware, software will need to be written that can mimic the brain. Instead of inefficiently hand-coding each rule manually as is done in an old fashioned AI expert system, self-emergent machine learning techniques such as genetic algorithms, neural nets, and Bayesian networks will need to be employed. At the same time, as brain scanning technology continues to improve, we can integrate what we learn from reverse engineering the brain to come up with ever more accurate models. The key here is to operate at the right level of abstraction. Consider the Mandelbrot set. It is a fractal of literally infinite complexity. To calculate every point in the set would take an infinite amount of time, yet we can use a simple equation to represent it in its entirety. There is strong evidence that the brain is fractal in nature. Instead of painstakingly modeling the brain by mapping every dendrite and neuron, it would be much easier to generate an accurate model of the brain by finding the right set of equations. Of course, deriving these equations is nontrivial, but it serves to illustrate why a top-down approach to the problem will work best.
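The Mandelbrot set makes that point nicely. Here is a minimal Python sketch (my own illustration, not anything from the book) of its membership test: the defining rule fits in one line, yet the boundary it traces is infinitely intricate:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c; c belongs to the set if z stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| exceeds 2, it provably escapes to infinity
            return False
    return True

print(in_mandelbrot(0j))      # True: the origin never escapes
print(in_mandelbrot(1 + 1j))  # False: escapes after two iterations
```

A single quadratic recurrence stands in for an object of unbounded detail, which is exactly the kind of compression a good model of the brain would need.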

All in all, The Singularity Is Near was a great read. It is hard to categorize the book, as it contains a mix of philosophy, religion, and science. It was necessarily epic in scope given its subject matter. Topics discussed ranged from Fermi’s paradox to the speed of light to the nature of consciousness, and everything in between. There is something in it for everybody. As a software developer, it served as a springboard for a Wikipedia binge as I looked up the machine learning techniques and paradigms he described. Anyone interested in science will find themselves learning much by googling all the research articles that he cites: I was personally amazed to find out there was a naturally occurring nuclear reactor in Africa 1.7 billion years ago. There are a lot more of these nuggets of knowledge contained within. That alone makes the book worth the read, but more importantly, it got me thinking about all the massive change brought about by the explosive growth in technological innovation that I’ve seen just in my lifetime. Humans have a linear view of present-day trends: even an exponential curve is a straight line if you zoom in close enough. Hence we miss the exponential rate of change that has happened in just this past decade. Even more change is in store for the future, and it will be quite different from anything we’ve experienced up until now. Important discussions need to be had about this. So whether or not you agree with Kurzweil, it is worth reading his seminal work and considering all its implications; it makes for deep conversation.

The Monty Hall problem

The Monty Hall problem is a well known math problem, famous among nerds around the world who have studied math and computer science. It has garnered somewhat of an infamous reputation, stemming from the fact that although the problem itself is simple and straightforward, the correct solution is anything but. In fact, the right answer is so unintuitive that when the problem was first introduced to the general populace via a 1990 Parade Magazine “Ask Marilyn” column, many reacted with vitriolic letters expressing their disagreement and disbelief. I know that when I was first introduced to the problem in my Discrete Mathematics class, I had a hard time wrapping my head around the professor’s explanation. I ended up second-guessing myself anytime I worked on homework involving probability and proofs (although to be fair, all my math and computer science courses did that).

Here is the gist of the problem: You are a contestant on a game show. In front of you are three doors. For storytelling’s sake, let’s say that behind one of these doors is the grand prize of one million dollars, and behind the other two doors is a whole lot of Monopoly money. All you need to do to win is pick the door that contains the grand prize. Simple, right? But wait, there’s a twist! After you have made your selection, the host, Monty Hall, opens one of the remaining doors that does not contain the grand prize. He then gives you the option of switching doors.

So the question is, what should you do? Stick with your original choice or pick the other door instead? Does it even matter? A lot of people will argue that it doesn’t matter which door you pick at this point, because it’s a 50/50 chance that either door is a winner. Well, it turns out this is not true. In fact, by switching doors, you double your chances of winning! As you can imagine, many people have a hard time accepting this.

There are quite a number of proofs floating around the Internet that explain why switching is the right choice. There’s even one proof involving Bayes’ theorem that I could probably still understand, but only if I dusted off my probability textbooks. However, I’m but a mere mortal, so I prefer simpler explanations. The probability that you picked the correct door initially is 1/3. The probability that you picked the wrong door is 2/3. If you picked the wrong door initially, then the door Monty Hall leaves closed must be the winner, so switching means you’ve won. Here’s the part that tends to trip people up: because you have a 2/3 chance of picking incorrectly, there is a 2/3 chance that the remaining door is the winning door.
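
For the skeptical, the 1/3 vs. 2/3 split can also be checked by brute-force enumeration of the three equally likely prize placements. Here’s a small javascript sketch (the function name is mine; it assumes, without loss of generality, that the contestant always picks the first door):

```javascript
// Enumerate every equally likely arrangement: the prize is behind door
// 0, 1, or 2. The contestant always picks door 0, Monty opens a door
// that is neither the pick nor the prize, and we count wins for each
// strategy across all three cases.
function montyCaseAnalysis() {
  let stayWins = 0, switchWins = 0;
  for (let prize = 0; prize < 3; prize++) {
    const pick = 0;
    // Monty opens some door that is neither the pick nor the prize.
    const monty = [0, 1, 2].find(d => d !== pick && d !== prize);
    // Switching means taking the one remaining unopened door.
    const switched = [0, 1, 2].find(d => d !== pick && d !== monty);
    if (pick === prize) stayWins++;
    if (switched === prize) switchWins++;
  }
  return { stayWins, switchWins }; // out of 3 equally likely cases
}
```

Staying wins in exactly one of the three cases, switching in the other two: the 1/3 vs. 2/3 split, with no probability theory required.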

To illustrate this more clearly, it helps to look at an extreme version of the problem. Let’s say there are one hundred doors, and after you make your initial selection, the host opens 98 of the other doors that do not contain the winning prize. The probability you chose the correct door initially is 1/100. Are you really willing to stick with this initial shot in the dark? The chance that you picked correctly does not increase as the host starts opening doors, because you guessed blindly, whereas the host knows which doors do not contain the prize. The probability you chose the incorrect door is 99/100. In ninety-nine cases out of one hundred, switching to the remaining door wins.

Of course, some people may still vehemently disagree with this simple explanation and spend all their time looking for a flaw in the argument. Another recourse is good old fashioned empirical evidence. One can simply simulate the game with a friend, using simple setups such as three playing cards or three cups (one of which hides a toy prize). By the law of large numbers, the observed percentage of games won will approach the true probability of winning by switching as more games are played. In other words, given a large enough sample size, 2/3 of the games will be won by always switching. Try it for yourself below and see (and feel free to view page source to make sure there’s no funny business going on; this really is an accurate rendition of the game):

 



[Interactive Monty Hall game: select one of three doors; a scoreboard tracks games played, games won, games switched, games staying with the original choice, and the win percentages for switching versus staying.]

 


Those without patience can instead run many trials at once via a computer simulation. I wrote one myself for fun. This one lets you modify the number of doors as well as the number of total test runs. As stated previously, in a setup with n doors where the host reveals n – 2 incorrect doors after you have made your initial selection, the probability of winning by switching is (n – 1) / n. Again, feel free to view page source to make sure there are no smoke and mirrors being used to alter the results. Granted, random number generators aren’t technically random; they use fancy-pants algorithms such as a linear congruential generator to simulate randomness, and the javascript RNG is no different. But for the purposes of this simulation, it will suffice.
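
As an illustration of the idea (this is a sketch of mine, not the actual code embedded in this page), the n-door version boils down to just a few lines of javascript, since switching wins exactly when the initial blind pick was wrong:

```javascript
// Simulate the n-door Monty Hall game with the always-switch strategy.
// After the host opens n - 2 losing doors, the single remaining closed
// door holds the prize whenever the initial pick did not, so switching
// wins exactly when pick !== prize. Returns the observed win rate,
// which should approach (n - 1) / n as runs grows.
function simulate(runs, doors) {
  let switchWins = 0;
  for (let i = 0; i < runs; i++) {
    const prize = Math.floor(Math.random() * doors);
    const pick = Math.floor(Math.random() * doors);
    if (pick !== prize) switchWins++;
  }
  return switchWins / runs;
}
```

With 1000 runs and 100 doors, the observed switch-win rate should hover near 0.99, just as the formula predicts.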

 



[Simulation controls: number of test runs (10 to 1000; invalid values default to 1000) and number of doors (3 to 100 inclusive; invalid values default to 3).]

* A special thank you to my sunshine Mary, for suggesting the idea of writing a javascript version of the Monty Hall game :)

The ReceiveTimeout registry key and the obligatory rant on the registry

The Windows registry is one of those delightful features1 that came out over a decade ago and still manages to screw people over to this day, resulting in tears, bitterness, and a vast collection of horror stories. In fact, I’m surprised no one has published a short story anthology about the Registry yet. I can picture it now: Scary IT Stories to Tell in the Dark – Windows Registry Edition. Well, just in case such a book ever sees the light of day, here’s my story:

One of our web pages was intermittently returning a “The page cannot be displayed” error in IE. Not only did this bug not repro consistently, it did not even repro on everybody’s machines; that would be making things too easy2. Naturally, due to the nondeterministic nature of the bug, everyone’s first inclination was to look for some kind of race condition. Fun fact: in some versions of Internet Explorer, if you have javascript code that tries to modify the DOM before the entire DOM has been loaded, you will get this error3. So we combed through the recent check-ins looking for an error along those lines.

After much hair pulling and frustration, somebody noticed that the error always happened around the 30 second mark. Luckily, in a clutch miracle save, one of the people on our team had run into this exact same problem years ago and immediately recognized the issue. The ReceiveTimeout key under HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings was the culprit! From the Microsoft page: “Internet Explorer imposes a time-out limit for the server to return data”. This registry key controls that time-out value; once that time period has elapsed, IE aborts the connection, causing the error. The people who experienced the issue all had that registry key defined, and it was set to only 30 seconds. We’re not sure how or why that key got added on some people’s machines and not on others, but that’s typical of a lot of keys in the registry. Case closed, problem solved!4 Now all that remains is to rant.
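
For reference, the key in question is a DWORD value measured in milliseconds. A .reg file that bumps the time-out to five minutes would look something like this (the five-minute figure is just an example; pick whatever suits your environment):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings]
; 0x000493e0 = 300000 milliseconds = 5 minutes
"ReceiveTimeout"=dword:000493e0
```

Deleting the value entirely restores IE’s default behavior, which is exactly why the machines without the key never hit the error.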

The Windows Registry is one of those design decisions where the cure is worse than the disease. The original problem it was trying to solve was the plethora of .INI files that stored application settings in Windows 3.x, scattered about the filesystem. This is not entirely unlike the conf files in UNIX found under /etc, which are actually laid out in a fairly clean and straightforward way. Setting aside the obvious design flaw of the registry being a single point of failure, where a rogue application or even a simple misclick can result in catastrophic failure, one of the other problems with the registry is that it is not transparent. I’m not talking about the file format of the registry either, although the fact that it is a non-human-readable binary format makes troubleshooting and maintenance a pain. The transparency I’m referring to is the fact that whenever a registry key comes into play, the end user has no way of knowing.

This is highlighted by the hours of wasted debugging that our team spent trying to track down this error. We simply did not know that the ReceiveTimeout key was what was causing the problem. IE merely displayed a generic error message with no troubleshooting information whatsoever. As a result, we couldn’t even google the error (a common troubleshooting tactic that can work wonders and yield a lot of insight into the issue).

Now I can understand hiding this complexity from an end user on a home PC who might get intimidated. However, why isn’t there more information available on Windows Server 2008? Shouldn’t there be some sort of utility that lets a user know when, where, and how a registry key has come into play? This lack of transparency is exacerbated by the fact that the Registry is a monolithic store containing not just settings used by the operating system itself, but also a myriad of settings used by the countless applications installed on the machine (on a home PC this problem is compounded by all the bloatware, malware, and spyware further polluting the registry with their dubious keys). Even if we had known the problem was somewhere in the registry, tracking down the problematic key among the hundreds of thousands of other keys would have been incredibly difficult. Talk about the proverbial needle in a haystack. It would have been much easier to narrow down the search in a system that used conf files, because there is a one-to-one mapping between a conf file and the application that uses it, allowing one to easily eliminate many, if not most, conf files from consideration.

Here’s hoping someday Microsoft finally does the right thing and dumps the Registry. That would make a great advertisement for their new Windows OS! I can see it now: “Hey, I’m Bob. The other day I was doing some software development and I got screwed over by the Registry. So then I was thinking, why not get rid of it? Well next thing I know, Windows 8 came out, and there was no more registry! They got rid of it! I’m a PC, and Windows 8 was my idea! Hooray!”

FOOTNOTES:

1. I’m being sarcastic, which is oftentimes hard to convey on the internet

2. The typical approach to tracking down a bug is to isolate the various variables, eliminating each one as a possible culprit. This narrows the search, which is essential when dealing with computer software: not only does your application potentially span millions of lines, it also interacts with the OS (millions of lines) and other applications (also millions of lines), all running on top of various hardware platforms. It would take forever to examine every possible cause of a bug.

Unfortunately, this process of isolating variables gets trickier when the program does not fail in a deterministic fashion: the absence of buggy behavior under a given set of variables does not guarantee it will not appear later under those exact same conditions. Eliminating potential suspects becomes far more difficult, because you cannot prove an assertion simply by bringing up multiple examples where said assertion holds true. You can, however, disprove the assertion by bringing up just one example to the contrary. To borrow Nassim Taleb’s iconic example: all it takes is one black swan to disprove the statement “All swans are white”.

3. Using the IETester tool, which lets you emulate the various versions of IE, copy this HTML into a test.html file in your IIS webroot directory:


<html>
	<body>
		<div>
		<script type="text/javascript">
			function appendme()
			{
				var newDiv = document.createElement("div");
				newDiv.innerHTML = "Appended text";
				document.body.appendChild(newDiv);
			}
			// calling this before the DOM has finished loading
			// triggers the error in IE 6 and IE 7
			appendme();
		</script>
		</div>
		<div>Placeholder div</div>
	</body>
</html>

If you then browse to this file at http://localhost/test.html in IE 6 or IE 7, you will get a “page cannot be displayed” error. The right way to do this is to call the function after the page has loaded, something along these lines:

addEvent(window, 'load', appendme);
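
addEvent here isn’t a built-in; it refers to the classic cross-browser helper that was ubiquitous at the time, needed because older versions of IE only supported attachEvent rather than the standard addEventListener. A minimal sketch of such a helper:

```javascript
// Cross-browser event binding: modern browsers (and IE 9+) expose
// addEventListener, while older IE versions only expose attachEvent,
// which expects the event name prefixed with "on" (e.g. "onload").
function addEvent(elem, type, handler) {
  if (elem.addEventListener) {
    elem.addEventListener(type, handler, false);
  } else if (elem.attachEvent) {
    elem.attachEvent("on" + type, handler);
  }
}
```

With this in place, the appendme call is deferred until the load event fires, by which point the DOM is safe to modify.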

4. Well, not quite. The correct solution, of course, is to make sure our page returns in under 30 seconds, which is already an eternity in computer time. That, and perhaps streaming down bits and pieces of the response using AJAX.