# Interview questions explained: Fun with multiplication!

Write a method that takes as its input an array of integers. For each element in the array, print the product of all the other elements in the array. For example, given the following array:
[1,2,3,4,5]
The output would be:
120, 60, 40, 30, 24

Let’s start with the most literal solution. For each element in the array, loop through all the other elements and multiply them together:

```
int[] j = { 1, 2, 3, 4, 5 };

for (int outerIndex = 0; outerIndex < j.Length; outerIndex++)
{
    int product = 1;
    for (int innerIndex = 0; innerIndex < j.Length; innerIndex++)
    {
        if (innerIndex != outerIndex)
        {
            product *= j[innerIndex];
        }
    }

    Console.WriteLine(product);
}
```

This solution requires an outer for loop to iterate through each element in the array and then an inner for loop to iterate through the other n-1 elements in the array. The asymptotic complexity of this is O(n^2).

Obviously, this isn’t a very good solution, so we can do better. The key observation to make here is that for any given element with value x in the array, the product of all the other elements in the array is simply the total product of all the elements divided by x. We only need to calculate the total product once.

This solution only requires two passes through the array. The first calculates the total product, and the second divides the total product by each element in the array:

```
int[] j = { 1, 2, 3, 4, 5 };

int totalProduct = 1;
for (int i = 0; i < j.Length; i++)
{
    totalProduct *= j[i];
}

for (int i = 0; i < j.Length; i++)
{
    //assume we check for zeroes beforehand to prevent divide by zero errors
    Console.WriteLine(totalProduct / j[i]);
}
```

This solution has O(n) complexity, since it requires only two linear scans of the array.

Now let’s make the problem a little bit more challenging. What if we cannot use division? Assume that the operation is too prohibitively expensive and that we need to find a workaround (a not uncommon scenario in the real world).

We can use an algorithm design technique known as dynamic programming: break the problem up into smaller sub-problems, solve and store the solution for each one, and then combine the solutions as necessary to arrive at the answer to the original problem. One difference between dynamic programming and the divide-and-conquer class of algorithms is that dynamic programming stores the solutions to the sub-problems, which are then retrieved at a later point in time. This optimization prevents the same sub-problems from being solved multiple times. One example of this approach is calculating the nth element of the Fibonacci sequence. The Fibonacci sequence starts with 0, 1, 1, 2, and every subsequent number is the sum of the previous two numbers in the sequence. Typically, the solution involves using either recursion or iteration. However, we can use dynamic programming by precomputing the sequence up to some index j. Then, for all n <= j, instead of having to use recursion or iteration, we can do a simple lookup. For n > j, we can still reduce the number of computations by summing forward from the stored values at j and j-1 instead of starting over from 0 and 1.
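To make this concrete, here is a small Javascript sketch of the idea (the function name makeFibLookup and the table size of 10 are illustrative choices, not anything prescribed): the sequence is precomputed once up to index j, lookups for n <= j become constant time, and larger n resume summing from the stored values.

```javascript
// Precompute the Fibonacci sequence up to index j once,
// then answer queries for n <= j with a simple table lookup.
function makeFibLookup(j) {
    const table = [0, 1];
    for (let i = 2; i <= j; i++) {
        table[i] = table[i - 1] + table[i - 2];
    }
    return function fib(n) {
        if (n <= j) {
            return table[n]; // already-solved sub-problem: constant-time lookup
        }
        // for n > j, resume summing from the stored values at j-1 and j
        // instead of starting over from 0 and 1
        let prev = table[j - 1];
        let curr = table[j];
        for (let i = j + 1; i <= n; i++) {
            const next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    };
}

const fib = makeFibLookup(10);
console.log(fib(7));  // 13, straight from the lookup table
console.log(fib(12)); // 144, computed by summing forward from index 10
```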

With respect to the problem of determining the products without the usage of division, we can use a similar approach. Note that the inefficiency of the first solution is due to the same multiplication operations being carried out multiple times. We can eliminate these redundant calculations by using dynamic programming. For any given element k in the array j of size n, we want to multiply all the numbers to the left of k with all the numbers to the right of k. By precomputing the running product of all the elements in the array in both directions (from left to right and vice versa), we now know the product of all the numbers to the left and right of any element k in the array. The problem can then be solved:

```
int[] j = { 1, 2, 3, 4, 5 };

int[] runningProductLeft = new int[j.Length];
int[] runningProductRight = new int[j.Length];

int product = 1;
//there is no element to the left of the start of the array, so set it to 1
runningProductLeft[0] = 1;

//since we already set the first element of runningProductLeft,
//start populating it from index 1
for (int i = 1; i < j.Length; i++)
{
    product = product * j[i - 1];
    runningProductLeft[i] = product;
}

//we want to populate runningProductRight in reverse by starting from the end of the array

//there is no element to the right of the end of the array, so set it to 1
runningProductRight[j.Length - 1] = 1;
product = 1;

//since we already populated the last element of runningProductRight,
//start populating from the second to last element in the array
for (int i = j.Length - 2; i >= 0; i--)
{
    product = product * j[i + 1];
    runningProductRight[i] = product;
}

//now that the running products have been precomputed, printing out the solution becomes trivial
for (int i = 0; i < j.Length; i++)
{
    product = runningProductLeft[i] * runningProductRight[i];
    Console.WriteLine(product);
}
```

This solution requires three linear scans through the array, so the runtime complexity is still O(n).

# JSON Schema: how to reference other schemas in jsonschema2pojo

I’ve been using the jsonschema2pojo utility at work to automatically generate Java classes from JSON schemas, simplifying a lot of the tedious code that needs to be written for the marshalling and unmarshalling of JSON objects. JSON Schema is a developing standard that is analogous to what XSD is to XML, providing schema definition and validation for JSON. Just like with XSD, a JSON schema can reference other schemas. This is done by using the special object property $ref.

The following is a simple JSON schema for an address object. Assume that it is stored in the file Address.json:

```
{
    "id": "Address",
    "title": "Address",
    "description": "This is the json schema for an address",
    "type": "object",
    "properties": {
        "streetAddress": {
            "type": "string"
        },
        "city": {
            "type": "string"
        },
        "state/province": {
            "type": "string"
        },
        "zipcode": {
            "type": "string"
        }
    }
}
```

Here is a simple JSON schema for a person object. A person object contains an address. To reference the previously defined address schema, add an address object that contains a $ref property pointing to the filepath of the Address.json file:

```
{
    "id": "Person",
    "title": "Person",
    "description": "This is the json schema for a person",
    "type": "object",
    "properties": {
        "firstName": {
            "type": "string"
        },
        "lastName": {
            "type": "string"
        },
        "age": {
            "description": "Age in years",
            "type": "integer",
            "minimum": 0
        },
        "address": {
            "$ref": "Address.json"
        }
    }
}
```

Note that both relative and absolute pathing work in jsonschema2pojo.

To add a reference to a list of addresses, add an “addresses” object with a “type” property of array and an “items” object. Add a $ref property under “items” that points to the filepath of Address.json:

```
{
    "id": "Person",
    "title": "Person",
    "description": "This is the json schema for a person",
    "type": "object",
    "properties": {
        "firstName": {
            "type": "string"
        },
        "lastName": {
            "type": "string"
        },
        "age": {
            "description": "Age in years",
            "type": "integer",
            "minimum": 0
        },
        "addresses": {
            "id": "addresses",
            "type": "array",
            "items": { "$ref": "Address.json" },
            "uniqueItems": true
        }
    }
}
```

jsonschema2pojo will generate a Java class that contains a List of Address objects. Incidentally, adding the property “uniqueItems”: true makes it a HashSet of Address objects instead.

A free schema generation tool is available at http://www.jsonschema.net/. When paired with utilities like jsonschema2pojo, it makes life for developers that much easier.

# vim tutorial – using the s command to replace text on the fly

The vim text editor comes with a powerful :s (substitution) command that is more versatile and expressive than the search-and-replace functionality found in GUI-based text editors.

The general form of the command is:
`:[g][address]s/search-string/replace-string[/option]`

The address specifies which lines vim will search. If none is provided, it defaults to the current line only. You can enter a single line number, or specify an inclusive range by entering the lower and upper bounds separated by a comma. For example, an address of 1,10 means lines one through ten inclusive, so `:1,10s/old/new/` would substitute on each of those lines.

You can also provide a string value for the address by enclosing it in forward slashes. vim will operate on the next line that matches this string. If the address string is preceded by “g”, vim will search all lines that match this string. `/hello/` matches the next line that contains hello, whereas `g/hello/` matches every line that contains hello.

The search-string is a regular expression and the replace-string can reference the matched string by using an ampersand (&).

[option] allows even more fine-grained control over the substitution. One of the more common options is “g”, not to be confused with the “g” that precedes the address. Option “g”, which appears at the end of the command, replaces every occurrence of the search-string on the line. Normally, the substitute command only replaces the first occurrence and then stops.

`s/ten/10/g `

run on the following line:
`ten + ten = 20`

results in:

`10 + 10 = 20`

as opposed to:

`10 + ten = 20 `

without the global option.

Given all this versatility, the :s command comes in quite handy. Consider the following scenario: there is a comma-delimited file that is missing trailing commas on some lines but not others. In order to normalize the text file so that all lines end with a comma, you could run:

`1,$s/[^,]$/&,`

The address range 1,$ spans the entire file ($ in the address means the last line in the file). The search-string “[^,]$” is a regular expression that matches every line that ends with any character except a comma ($ in a regex indicates the end of the line). The replace-string has an &, which refers to the character matched by the search-string. By setting the replace-string to “&,”, we are telling vim to take the last character on every line that is not a comma and append a comma to it.

[^,]$ won’t match blank lines because [^,] expects at least one character to be on the line. To get around this problem, you would normally use a negative lookbehind; however, the vim regex does not seem to support the usual syntax for them. The easiest way around this is to use a second substitute command for empty lines:
`1,$s/^$/,/`

This tells vim to add a comma to any line that contains no characters at all (^ in a regex indicates the start of the line, so ^$ matches an empty line).

This is just one example of course. By coming up with the right regex for the search-string, you can automate all sorts of normally tedious tasks with succinct commands. The best part is that, unlike with those cumbersome GUI-based editors that often require the use of a pesky mouse, your hands never have to leave the keyboard! For even more control and flexibility, you could use sed, but :s can handle most day-to-day tasks quite easily.

# Linux tutorial: Searching all files of a specific type for a string

Let’s say you want to search all the *.txt files in a given parent directory and all its children for the word “hello”.

Something like ` grep -rl "hello" *.txt ` would not work because the shell will expand “*.txt” for the current directory only. The -r flag for recursion would essentially be ignored. For example, if the parent directory contained:

`a.txt`

and the child directory contained:

`a.txt b.txt c.txt d.txt`

`grep -rl "hello" *.txt` would only search a.txt in the parent directory. This is because the shell will only evaluate the * wildcard for the parent directory from which the command is run.

What we actually want to do is use the find command to recursively list all the text files in the directory and its children, and then pass each of these files as arguments into grep, which will then search each argument for any instances of the string “hello”.

The find command to locate all the text files looks like this:
`find ./ -name "*.txt"`

In order to pass each file as an argument into grep, we use xargs. The xargs utility reads in parameters from standard input (the default delimiters are whitespace and newlines). For each item read in from standard input, xargs will then execute a given command with that item passed in as an argument.

Essentially what it is doing is this:

```foreach item in stdin { execute "[command] [initial arguments] [arg]" }```

In our example, we want to run `xargs grep "hello"` (grep being [command] and “hello” being [initial arguments]), with stdin coming from the output of the find command. Putting this all together, we get the following:

` find ./ -name "*.txt" | xargs grep "hello"`

Combining commands together is the strength of the UNIX design philosophy. The various command line utilities are designed to play well with one another, using the output from one as the input into another. Think of each utility as a puzzle piece that can fit together with any other puzzle piece, combining in interesting ways to solve complex problems. Oftentimes there will be many possible solutions to a given problem; such is the versatility of the platform!

# Does Javascript pass by value or pass by reference?

Javascript exhibits quite interesting behavior when it passes object parameters. At first glance, it would appear that it passes everything by reference. Consider the following HTML page.

```
<html>
<body>
<script type="text/javascript">

function MyObject() {
    this.value = 5;
}

var o = new MyObject();
alert(o.value); // o.value = 5

function ChangeObject(oReference) {
    oReference.value = 6;
}

ChangeObject(o);
alert(o.value); // o.value is now equal to 6

</script>
</body>
</html>
```

We initialize an object o by calling the MyObject() constructor. The constructor sets the value member to 5. We then pass o into the ChangeObject() function, where it sets the value member to 6. If o were passed by value, that would mean we are passing a copy of o, and that its value member would not be changed once the function exits. However, if we open up this HTML page in a browser, the final alert(o.value) indeed prints 6. This would lead us to conclude that Javascript passes by reference.

However, things get really interesting when we look at this HTML page.

```
<html>
<body>
<script type="text/javascript">

function MyObject() {
    this.value = 5;
}

var o = new MyObject();
alert(o.value); // o.value = 5

function ChangeObjectWithNew(oReference) {
    oReference = new MyObject();
    oReference.value = 6;
}

ChangeObjectWithNew(o);
alert(o.value); // o.value is still 5

</script>
</body>
</html>
```

This page is identical to the last, with the slight modification that we call the ChangeObjectWithNew() function. ChangeObjectWithNew creates a new instance of MyObject and assigns it to oReference, and then sets this new instance’s value member to 6. If Javascript passed by reference, o would now point to this new instance, and its value member would be 6 after ChangeObjectWithNew ran. However, if you run this page in a browser, the final alert(o.value) still prints 5. How is this possible?

What is happening behind the scenes is that Javascript is passing a reference of o by value. This can be a little confusing, so think of it this way. When o is first declared and initialized, it points to the memory location of the actual object that we instantiated. When the ChangeObject function is called, it passes in a copy of this pointer that points to the exact same memory location. Thus any changes to oReference’s member variables will be reflected in o, since both point to the same spot in memory. However, when we call ChangeObjectWithNew, the line “oReference = new MyObject()” causes oReference to point to an entirely different spot in memory. Thus any further changes to oReference’s member variables will no longer be reflected in o.
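The same contrast can be shown in a few lines of standalone Javascript (a minimal sketch; the function names mutate and reassign are just illustrative): changing a member through the passed reference is visible to the caller, while reassigning the parameter itself is not.

```javascript
function mutate(objReference) {
    // the caller's variable and objReference point to the same object,
    // so this change is visible outside the function
    objReference.value = 6;
}

function reassign(objReference) {
    // this rebinds only the local copy of the reference;
    // the caller's variable still points to the original object
    objReference = { value: 7 };
}

const o = { value: 5 };
mutate(o);
console.log(o.value); // 6: the mutation is visible to the caller

reassign(o);
console.log(o.value); // still 6: the reassignment is not
```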

A picture helps clarify things: when ChangeObjectWithNew is called, o and oReference both point to the same memory location. After ChangeObjectWithNew has run, oReference points to a new memory location, while o still points to the original one.

Since Javascript hides the implementation details of how it handles its references, the following C program makes them explicit by demonstrating both passing by reference and passing a “reference” by value. ChangeObject is called with a pointer to a MyObject; this is passing by reference. ChangeObjectWithNew is called with a copy of the original MyObject pointer, so reassigning that copy (and modifying the new object’s value member) does not affect the original object.

```
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    int value;
} MyObject;

void ChangeObject(MyObject * objReference)
{
    (*objReference).value = 6;
}

void ChangeObjectWithNew(MyObject * objReference)
{
    //this reassignment only changes the local copy of the pointer;
    //note that the newly allocated object is never freed, so this leaks memory
    objReference = (MyObject *)malloc(sizeof(MyObject));
    objReference->value = 6;
}

int main(int argc, char* argv[])
{
    MyObject * o = (MyObject *)malloc(sizeof(MyObject));
    MyObject * oCopy;  //this will be a copy of the original object pointer
    oCopy = o;

    printf("Setting o->value to 5\n");
    o->value = 5;
    printf("MyObject.value: %d\n", o->value);

    //now pass by reference
    printf("Calling ChangeObject with original pointer\n");
    ChangeObject(o);  //this will change o->value to 6
    printf("MyObject.value: %d\n", o->value);  //prints 6

    //reset o->value to 5
    printf("Resetting o->value to 5\n");
    o->value = 5;
    printf("MyObject.value: %d\n", o->value);  //prints 5

    //now pass a COPY of the original pointer
    //this is how Javascript behaves, minus the memory leak
    printf("Passing in a copy of the pointer o to ChangeObjectWithNew\n");
    ChangeObjectWithNew(oCopy);
    printf("MyObject.value: %d\n", o->value); //still prints 5

    //free the memory (o and oCopy point to the same object, so free it only once)
    free(o);
    o = NULL;
    oCopy = NULL;

    return 0;
}
```

The fact that Javascript passes a “reference” by value is important to keep in mind. Assuming it passes by reference can lead to some difficult-to-track-down bugs later on.

# Dynamically modifying client endpoints in WCF

I was doing some contract work for a client, building out a SOA backend for them. It consisted of some WCF services that all talked to one another. Due to how the client had set up their deployment process, we had to dynamically load the client endpoint addresses from an external XML file at runtime, instead of setting them inside the web.config. In other words, we needed a way to dynamically change the address attribute:

```
<system.serviceModel>
    <client>
        <endpoint address="http://someaddress.com/webservice"
            binding="ws2007HttpBinding" bindingConfiguration="WebService_ws2007HttpBinding"
            contract="WebService.ServiceContracts.IWebService" name="ws2007HttpBinding_IWebService" />
    </client>
</system.serviceModel>
```

WCF allows you to programmatically configure any System.ServiceModel setting in the web.config, so the challenge was to inject the endpoint before the call was actually made. Normally, you can pass in the endpoint address to the channel factory constructor and be done with it. However, the service in question we needed to modify was a simple passthrough WCF router. There was no code to speak of, so in order to modify the client endpoints we decided to do so in a service behavior.

The first task was to figure out how to access the endpoints themselves. At first, I tried:

```
ClientSection clientSection =
    ConfigurationManager.GetSection("system.serviceModel/client") as ClientSection;
ChannelEndpointElementCollection endpointCollection = clientSection.Endpoints;
```

However, when I actually tried to edit the endpoints inside ChannelEndpointElementCollection, I got an error: “The configuration is read only”. After searching online, I tried using WebConfigurationManager.OpenWebConfiguration instead:

```
Configuration webConfig = WebConfigurationManager.OpenWebConfiguration("~");
ClientSection clientSection = webConfig.GetSection("system.serviceModel/client") as ClientSection;
ChannelEndpointElementCollection endpointCollection = clientSection.Endpoints;

//dynamically load the URI here
Uri serviceAddress = new Uri("http://sometempuri.org");

endpointCollection[0].Address = serviceAddress;
webConfig.Save();
```

This option was a non starter because webConfig.Save() literally saves the actual web.config file itself. This causes the application pool to recycle, and since it edits the physical file, the changes made won’t apply to the current request.

Ultimately, we ended up implementing the IEndpointBehavior interface. IEndpointBehavior has an ApplyClientBehavior method that takes as a parameter a client service endpoint. This method fires only once per client endpoint for the lifetime of the application, which is exactly what we wanted. The following sample code demonstrates how this behavior can be used to dynamically set the EndpointAddress for the client endpoint.

```
public class WebServiceEndpointBehavior : IEndpointBehavior
{
    public void Validate(ServiceEndpoint endpoint)
    {
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        Uri serviceAddress = new Uri("http://sometempuri.org"); //dynamically load URL here
        endpoint.Address = new EndpointAddress(serviceAddress);
    }
}
```

From here, it was just a matter of coding the behavior extension element that loads the behavior, as well as wiring this extension element into the web.config. Here are the relevant snippets:

```
namespace WebService
{
    public class WebServiceEndpointBehaviorExtensionElement : BehaviorExtensionElement
    {
        protected override object CreateBehavior()
        {
            return new WebServiceEndpointBehavior();
        }

        public override Type BehaviorType
        {
            get { return typeof(WebServiceEndpointBehavior); }
        }
    }
}
```
```
<endpointBehaviors>
    <behavior name="updateEndpointAddress">
        <webserviceEndpointBehavior/>
    </behavior>
</endpointBehaviors>
```

```
<extensions>
    <behaviorExtensions>
        <add name="webserviceEndpointBehavior" type="WebService.WebServiceEndpointBehaviorExtensionElement, WebService" />
    </behaviorExtensions>
</extensions>
```

# CredentialCache.DefaultNetworkCredentials is empty and so is Thread.CurrentPrincipal.Identity.Name

I was working on a simple console application in C# that issued HTTP DELETE requests to WebDAV to delete expired documents from the file system. Once completed, this was to run periodically as a job. However, I kept getting back 401 Unauthorized on the DELETE requests. While troubleshooting the issue, I went down the rabbit hole and learned some interesting things. I was passing in CredentialCache.DefaultNetworkCredentials as my HttpRequest credentials. So as a sanity check, I tried viewing it in the debugger to make sure the program was passing in my credentials, only to find that CredentialCache.DefaultNetworkCredentials.UserName was blank.

Well, it turns out that you can’t actually view the credentials unless you set them manually yourself. According to the MSDN documentation:

“The ICredentials instance returned by DefaultCredentials cannot be used to view the user name, password, or domain of the current security context.”

So I tried checking the value of Thread.CurrentPrincipal.Identity.Name instead. This was blank as well. Upon further reading, I determined that this was due to the principal policy not being explicitly set to WindowsPrincipal. Once I did so, Thread.CurrentPrincipal.Identity.Name correctly displayed my Windows login ID:

```
AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
Console.WriteLine(Thread.CurrentPrincipal.Identity.Name);
```

It is helpful to review what an application domain is before proceeding. In Windows, every application runs as its own process, complete with its own set of resources. By isolating applications via processes, this minimizes the risk that one badly coded application can negatively impact others. In the Common Language Runtime, application domains provide an even more granular level of isolation. A single process (the application host) can run multiple application domains, each with the same level of isolation that separate processes would have, minus any of the overhead.
Because every app domain is separate from one another, each has its own Principal object. This object represents the current security context that the code is running as. The PrincipalPolicy is an enum that indicates how this principal object is to be created for the given app domain. Setting it to WindowsPrincipal will map the principal object to the Windows user that the application host is executing as. By default, the PrincipalPolicy will be UnauthenticatedPrincipal, which will set Name to empty string.

After doing some more digging, I also found out that I could use WindowsIdentity.GetCurrent().Name to determine what user the program was executing as:

```
Console.WriteLine(WindowsIdentity.GetCurrent().Name);
```

Having finally proven to myself that my program was running as the correct user, I eventually figured out the issue. It was completely unrelated to the code of course; I had simply forgotten to enable Windows Authentication in IIS. I didn’t mind the time sink, as it proved to be quite educational.

# Retrieving a return value from a stored procedure using the Entity Framework DBContext.Database class

I was trying to figure out how to call a stored procedure and then retrieve the return value using the Entity Framework in .NET. Entity Framework is designed to be an ORM layer, so it doesn’t have very strong support for stored procs. My problem seemed to be fairly common, as I found a decent number of results on Google. It seems like lots of places that use Entity Framework still require the use of procs. Perhaps they don’t trust the performance or scalability of Entity Framework, or perhaps the particular procedure in question encapsulated some complex and highly optimized SQL logic.

Whatever the reason, the code had to call a stored procedure and retrieve a return value. The stored procedure in question could not be modified to use an output parameter instead of a return value. This also seemed to be a common requirement. Typically the proc would be really old legacy TSQL, and changing it would have required too much bureaucracy to be worth it.

So there are a couple of ways to solve the problem. Probably the simplest and easiest way is to not use Entity Framework at all and just use plain old ADO.NET instead. However, in my case, everything else in the solution was already using the Entity Framework, so I wanted the code to be consistent. After doing some investigation and testing, I finally figured it out. The trick is to set the direction of the SqlParameter to ParameterDirection.Output. This is what tripped me up initially, as in ADO.NET you would declare the SqlParameter with direction type ParameterDirection.ReturnValue. Another interesting thing to note is that Database.ExecuteSqlCommand returns an integer, but this int does not correspond to the return value of the stored procedure. The MSDN documentation seemed to indicate otherwise; it states that the return value is “The result returned by the database after executing the command.” When I tried storing the result, I got back -1, and I’m not really sure why.

Everything I found online consisted of just code snippets, so here is a complete code sample that deletes a row from a database table and checks the return value to see if it was successful.

```
public bool DeleteRowFromDB(int rowId)
{
    bool success = false;
    var retVal = new SqlParameter("@Success", SqlDbType.Int) { Direction = ParameterDirection.Output };

    object[] parameters =
    {
        new SqlParameter("@RowID", SqlDbType.BigInt) { Value = rowId },
        retVal
    };

    string command = "exec @Success = dbo.spDeleteTableRow @RowID";

    //the result is -1 regardless of what the stored procedure returns
    //note that ExecuteSqlCommand maps to sp_executesql in SQL Server,
    //which will append the type and value for @RowID
    int result = _DBContext.Database.ExecuteSqlCommand(command, parameters);

    if ((int)retVal.Value == 1)
    {
        success = true;
    }

    return success;
}
```

# Why changing jobs every few years is good for your career

I recently quit my job.   My letter of resignation was short and to the point:   I had accepted an offer at another company and was giving my two weeks notice.   I read and reread my email a few times, took a deep breath, and hit send.    I was now officially a short timer, and before I knew it, my last day had come.    It was a bittersweet occasion for me.    I had already said my heartfelt goodbyes, gave hugs, shook hands, and parted ways with coworkers who had become friends.   I shed a tear as I walked off into the sunset, but not once did I look back.

There are pros and cons when it comes to changing jobs.     The benefits of staying in one place for the long haul are pretty self explanatory.      The awkward time spent floundering about at the start is now a distant memory.    Having built up credibility, your input is actually valued.   You find yourself entrusted with critical projects, given more freedom and flexibility in execution, climbing the corporate ladder with every success.    The sky is the limit.     So I’m going to talk about the flip side and why you would ever want to give all that up voluntarily.

The obvious reason is unhappiness.    Maybe the opportunities for growth and advancement aren’t really there.    Perhaps the work life balance is non-existent.     You could have a pointy haired boss.     The technology stack could suck.     The company could be on a downward spiral.   It lost its VC funding and now the employees are leaving in droves.      There’s a whole myriad of issues that would make you want to leave.

But why would you ever want to leave a place where you are quite comfortable and content? Well, there’s a fine line between contentment and complacency, and the longer you stay at a place, the easier it is for complacency to become complete stagnation. A change of jobs can shake things up and expose you to new ideas. Every place has its own way of doing things. Ideally, the goal should be to learn new languages, design patterns, tools, and frameworks wherever you go. It can also be quite instructive to pay close attention to the org chart. For example, how do people report up the chain of command? Do they use horizontal integration, vertical integration, or a combination of both? Also, observe the processes put into place by the company. What is the product release cycle? When there are blocking issues or showstopper bugs, how are these problems escalated? Where are the bottlenecks? There is no one size fits all when it comes to structuring and running a corporation, so by examining these things closely, you can figure out what works well and what doesn’t given a specific set of circumstances. This will allow you to assist the company by suggesting improvements and sharing your own experiences. Furthermore, this experience can refine your future job searches by helping you identify the well run companies.

On a similar note, by experiencing a wide range of different work environments, you can quickly learn what is tolerable and what is not.    For me personally, I’d never work anywhere that required a suit and tie.    Outdated and obsolete technologies like classic ASP are also a complete dealbreaker.    Free coffee and soda are a perk, but not a necessity.     Again, this helps you narrow down your search criteria when it comes to finding a new job.     In a way, this is similar to dating.     Much in the same way that you cannot know what you are looking for in a significant other until you have been in at least a few relationships, you cannot really know what companies will be a good fit for you, until you’ve worked for at least a few of them.

Which brings me to my final point. Gone are the days when an employee joins a company and works there until retirement. It is best to get used to changing jobs, because things outside your control, such as mass layoffs, budget cuts, and the collapse of the US real estate market, can lead to unemployment. To the uninitiated, finding a new job can be very stressful. The mad scramble to update the resume, apply for positions, talk to recruiters, and run the interview gauntlet can be completely overwhelming*. Starting a new job is even more stressful. The ramp-up process can be unforgiving, especially in the tech industry, where you are expected to be self-sufficient and able to adapt on the fly. In my first week at a consulting gig at Microsoft, I had no cubicle (we had to work in the atrium until we found enough office space for the whole team), no developer image on my laptop (we had to set up our dev environment from scratch), and no account in source control (we had to email our code in zip files so other devs who did have access could check our files in). Oh, and the project was already behind schedule, so we needed to work weekends too. If you go through multiple job changes early in life, hectic starts like this will no longer faze you later on, when the stakes are higher and you have a family to feed. This will make your job transitions go much more smoothly.

*Luckily, there are many great books and resources on this. Land the Tech Job You Love is one, and it contains a lot of great advice. For example: you don't want to assemble a disaster preparedness kit after the earthquake hits. Likewise, you want to update your resume at least once a year, so that if a layoff or other disaster strikes, you'll be ready. Better yet, if a great opportunity arises, you'll be able to respond immediately.

# The Singularity is nearer than you think

The Singularity Is Near is a thought-provoking book. The emotions I experienced while reading it ran the gamut from deeply disturbed to inspired. The author, Ray Kurzweil, is a successful entrepreneur who graduated from MIT and made his fortune from a whole host of inventions, including music synthesizers, OCR, text-to-speech, and speech recognition systems. In his book, he shares his vision of the future and makes a compelling argument for why the technological singularity is close at hand.

The singularity is defined as the point in time at which technological progress happens too quickly for the human brain to process, due to the emergence of an artificial intelligence that has surpassed human intelligence. Such an AI would be able to rapidly iterate on and improve its own design without any human intervention. Not only would each iteration yield exponentially more computing power and capability, but the time needed for each iteration would decrease exponentially as well. The resulting runaway AI would herald the arrival of the singularity, and, by definition, no one can really say what happens after that.

And that, to me, is the worrisome part. Humanity would necessarily relinquish control of its own fate and leave it in the hands of its artificial creations. There are many who are not enthused by this prospect. At the far end of this spectrum are people such as Ted “The Unabomber” Kaczynski, who believe that technological progress inevitably leads to an increase in human suffering and a loss of freedom. While Kaczynski's actions are morally reprehensible, many of the concerns he raises are valid. Improvements in technology necessitate restrictions on freedom. Consider the invention of the automobile. To accommodate everyone being able to drive, bureaucracy had to be created in the form of traffic regulations and legislation, along with state-level DMVs. This bureaucracy limits what we can and cannot do. Before cars, we could walk wherever we pleased, without needing to stay on sidewalks or heed traffic lights. Consider also the current generation of military drones, which fly surveillance missions and launch remote strikes on military targets. One can only imagine that the next generation of drones will be smaller, smarter, and stronger. Such drones would no doubt enable a government to build a 24/7 Orwellian surveillance system capable of quickly and quietly dispatching dissidents. Even corporations can freely collect and harvest the personal information we so willingly post on social networks. Given how technology already impinges on our personal freedoms, it is not at all farfetched to imagine that the invention of a superintelligent AI would reduce humans to the status of household pets.

This is but one possible negative outcome. Science fiction features many robot-uprising scenarios wherein the human race is completely obliterated. But what disturbs me more than the prospect of total annihilation is the eventual emergence of neural nanotechnology. Nanotechnology would allow us to enhance our neural pathways, vastly increasing our biological intelligence by augmenting it with machine intelligence. This would bypass one of the biggest limitations of the human brain: knowledge transfer. A concert pianist must take years of lessons and spend thousands of hours practicing before she can master her craft. Computers, on the other hand, can quickly copy data from one machine to another, sharing information with ease. Now imagine a world where we could simply download “knowledge modules” and install them onto our machine brains. Suddenly everyone would be able to speak multiple languages, play chess at a grandmaster level, and solve differential equations, all while having a great sense of humor. With nothing left to distinguish one person from another, we would lose all individuality. It is reminiscent of the Borg collective in Star Trek, where newly acquired knowledge is quickly shared among all the drones. Such an egalitarian society seems quite dull to me. In a chapter discussing aging and death (and how technology may someday make us immortal), Kurzweil dismisses the argument that our limitations make us human. In the context of mortality, I would agree. But in the case of our inherent knowledge-transfer limitations, I feel that such limitations make life rewarding. Taking years to learn something is not a downside but a fulfilling journey. It will be interesting to see how human/machine hybrids find purpose and meaning in the post-singularity world (assuming the robots don't kill off everybody first). Of course, just getting to that point will be troublesome.

Consider what happens as technology continues to improve and automates away tasks that previously required a lot of human intervention. Expedia and Concur destroyed the livelihoods of many travel agents. Sites such as Zillow and Redfin will someday do away with most real estate agents (although why they have not yet succeeded is a different topic altogether). Grocery stores have self-checkout lanes. Retail stores use software to handle their complicated supply-chain logistics. Today there is almost no industry where computers are not used in some way. Now imagine what happens as artificial intelligence continues improving at an ever-accelerating pace and eliminates the need for human intervention altogether. The Google driverless car has already logged hundreds of thousands of miles on the road, and commercial driverless cars are soon to follow. In a few years, bus drivers, taxi drivers, and chauffeurs may all be out of a job. IBM's Watson beat 74-game champion Ken Jennings at Jeopardy quite convincingly, and now IBM is positioning Watson as a medical diagnosis tool. How many in the medical profession will still have a job once computers outperform them? Even art is being automated; there are already AI programs that can churn out novels, songs, and paintings. Who is still going to have a job in the future? The industrial revolution put most artisans out of work, and a similar fate awaits the rest of us as the technology sector continues to innovate. Some will argue that technology creates new jobs; however, as AI continually improves, even those will be automated away. Entire industries will be wiped out. Such massive unemployment will obviously cause a lot of social upheaval. How will governments respond? Will money have any meaning in a future where nobody works for a living?

Kurzweil does not address these issues in his book, which is unfortunate, because it would have been interesting to hear his insights on the matter; he has obviously given a lot of thought to the dangers that future innovations in genetics, nanotechnology, and robotics will pose. In fact, he devotes an entire chapter to this topic. Despite this, Kurzweil remains optimistic about the future, believing that we will eventually merge with our technology and transcend our humanity. Others picture a future that is far grimmer for humanity. Given these two diametrically opposed viewpoints, which vision of the future will prove correct? In the end, it may not really matter. As Kurzweil astutely points out, history has shown that progress cannot be stopped. Even a complete relinquishment of all scientific research wouldn't really work: it would require a global totalitarian regime. And for such a regime to maintain its power, it would need to ensure it always had a technological advantage over its citizens, making its Luddite agenda an unsustainable self-contradiction. Even a doomsday scenario in which the entire human race was wiped out by a massive meteor, nuclear war, viral pandemic, or some other form of unpleasantry would only serve as a hard reset. Other sentient life forms would likely emerge again, here on earth or elsewhere in the universe (assuming this hasn't occurred already), and the entire process would start all over again. Technology, it would appear, is on an inexorable march toward the future.

Where does this path lead? Kurzweil believes that the universe has an ultimate destiny and that the technological singularity is a major milestone along the way. He provides a fascinating roadmap of the journey, dividing the history of the universe into six major epochs, each characterized by the nature of information and how it replicates. Each epoch builds on the foundations of the previous one to generate information of increasing complexity.

The first epoch is that of “dumb” matter. A vast amount of information is encoded in every piece of matter: the number of molecules it is made of, the number of atoms in each molecule, the spin states and energy levels of the electrons orbiting each atom, and so on. Matter, and the information stored within it, can replicate itself, although not efficiently. A crystal, for example, is a precise arrangement of atoms in a lattice; as the crystal “grows,” it repeats this pattern over and over again. Although not intelligent, the matter in the universe coalesces into objects of increasing complexity. Entire planets, star systems, and galaxies are formed. From these arise the conditions necessary for biological life, leading to the second epoch.

In this second epoch, biological life encodes information about itself in its genes via DNA. DNA, of course, is itself made up of the “dumb” molecules of the first epoch; the information stored within it represents a much higher level of abstraction. It can self-replicate far more efficiently, and it even has mechanisms for error correction during the copying process. As life evolves on earth over millions of years, the first sentient life forms appear. In this third epoch, information is encoded in the neural patterns of the brain. The invention of spoken language and written alphabets by Homo sapiens facilitated the transmission of these patterns, which now replicate as memes. Educational institutions help preserve these memes over the centuries, allowing humans to retain and build on the knowledge of their ancestors. Standing on the shoulders of giants, scientists and engineers build the first computer (there is much dispute over which computer was actually first, but for the purposes of this narrative we will pretend there is one clear progenitor), heralding the fourth epoch. Information is now stored in electronic circuitry, and replicating it is a simple matter of copying bits between machines, facilitated by massive communication networks such as the internet. As the machines continue to increase in computing power, artificial intelligence comes to rival that of humanity. The singularity marks the arrival of the fifth epoch. AI begins to saturate the universe, harnessing the very matter of the universe as computational substrate (e.g., Dyson spheres).

The sixth and final epoch Kurzweil describes is a gradual “awakening” of the universe. In essence, the entire universe is turned into a computer and becomes self-aware. This is not to anthropomorphize the universe; the awakening would be an emergent process, wherein the inanimate universe transforms and transcends into something altogether different. Carl Sagan once said, “We are a way for the cosmos to know itself.” The sixth epoch represents the fulfillment of that statement. All of this, of course, is highly speculative and borders on the religious. One criticism of Kurzweil is that he writes religious science fiction and that the singularity he describes is nothing more than a “rapture for nerds.” Personally, I found his description of this final stage in the evolution of the universe to be quite beautiful and profound, with none of the trappings of religious dogma. Whether any of it comes true remains to be seen.

There are, of course, many other criticisms of Kurzweil's work; he devotes an entire chapter of his book to addressing them. Because he makes such strong assertions, including the bold prediction that by 2045 a laptop will possess billions of times more computing power than every human brain in existence (both past and present) combined, many have told him that what he describes either cannot possibly happen or cannot happen so soon. Kurzweil points to the exponential rate at which technology improves (the “law of accelerating returns,” as he calls it in the book), while the naysayers argue that such growth will continue only until it doesn't.

The question boils down to whether there are limits to our knowledge and ability. The pragmatists take the conservative position that some things are by their very nature unknowable or undoable, while the optimists feel there are always workarounds. With regard to the singularity, the two main barriers are the hardware required to run a computer powerful enough to surpass the human brain's parallel processing capabilities and, no less important, the software that can mimic it. Kurzweil takes great pains to discuss the promising ideas and solutions in the research pipeline for both.

On the hardware side, one of the major problems will be the heat generated by all that computing power. Paradigms such as reversible computing could significantly reduce heat dissipation, allowing computing power to keep increasing at an exponential clip. Moore's law as we know it will eventually end due to fundamental physical limits on how small silicon transistors can become, but companies are already looking into what comes after silicon. Technologies such as carbon-nanotube computers, DNA computing, and quantum computing (to name a few) could allow the exponential growth behind Moore's law to continue unabated.

To take advantage of this powerful hardware, software will need to be written that can mimic the brain. Instead of hand-coding each rule, as in an old-fashioned AI expert system, self-organizing machine learning techniques such as genetic algorithms, neural nets, and Bayesian networks will need to be employed. At the same time, as brain-scanning technology continues to improve, we can integrate what we learn from reverse engineering the brain to build ever more accurate models. The key here is to operate at the right level of abstraction. Consider the Mandelbrot set. It is a fractal of literally infinite complexity: enumerating every point in the set would take an infinite amount of time, yet a simple recurrence represents it in its entirety. There is evidence that the brain, too, is fractal in nature. Instead of painstakingly modeling the brain by mapping every neuron and dendrite, it would be much easier to generate an accurate model of the brain by finding the right set of equations. Deriving those equations is of course nontrivial, but this illustrates why a top-down approach to the problem may work best.
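To make the Mandelbrot point concrete, here is a minimal sketch (in the same C# style as the earlier examples; the method name and iteration cap are my own choices, not anything from the book) of the standard escape-time test. Membership in this infinitely detailed fractal is decided by iterating the single recurrence z → z² + c and checking whether |z| stays bounded:

```csharp
using System;

public class MandelbrotDemo
{
    // Escape-time test: c is (approximately) in the Mandelbrot set if
    // iterating z -> z^2 + c from z = 0 never makes |z| exceed 2.
    // Complex arithmetic is expanded into real/imaginary doubles.
    public static bool InMandelbrotSet(double cRe, double cIm, int maxIterations = 1000)
    {
        double zRe = 0.0, zIm = 0.0;
        for (int i = 0; i < maxIterations; i++)
        {
            // z = z^2 + c, split into real and imaginary components
            double nextRe = zRe * zRe - zIm * zIm + cRe;
            double nextIm = 2 * zRe * zIm + cIm;
            zRe = nextRe;
            zIm = nextIm;

            // |z| > 2 guarantees divergence, so c is outside the set
            if (zRe * zRe + zIm * zIm > 4.0)
                return false;
        }
        return true;  // never escaped within the iteration budget
    }

    public static void Main()
    {
        Console.WriteLine(InMandelbrotSet(0.0, 0.0));  // True: the origin is in the set
        Console.WriteLine(InMandelbrotSet(1.0, 1.0));  // False: escapes after two iterations
    }
}
```

The entire structure, with all its infinite detail, falls out of that one short loop; the description is tiny compared to the thing it generates, which is exactly the sense in which finding the right equations could be a shortcut to modeling the brain.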

All in all, The Singularity Is Near was a great read. The book is hard to categorize, containing as it does a mix of philosophy, religion, and science. Given its subject matter, it is necessarily epic in scope, with topics ranging from Fermi's paradox to the speed of light to the nature of consciousness and everything in between. There is something in it for everybody. As a software developer, I used it as a springboard for a Wikipedia binge, looking up the machine learning techniques and paradigms Kurzweil describes. Anyone interested in science will learn a great deal by googling the research articles he cites; I was personally amazed to discover that there was a naturally occurring nuclear reactor in Africa 1.7 billion years ago. There are many more nuggets of knowledge like this within. That alone makes the book worth reading, but more importantly, it got me thinking about the massive change brought about by the explosive growth in technology just within my own lifetime. Humans take a linear view of present-day trends: even an exponential curve looks like a straight line if you zoom in close enough. Hence we miss the exponential rate of change that has occurred in just the past decade. Even more change is in store, and it will be quite different from anything we've experienced up to now. Important discussions need to be had about this. So whether or not you agree with Kurzweil, his seminal work is worth reading, and its implications worth considering; it makes for deep conversation.