Make Me Think

Good software is typically developed with the idea that it should be easy and intuitive to use. Don’t Make Me Think, a book by Steve Krug about human-computer interaction and usability, argues that this interaction should come with the least possible friction: so little, in fact, that the interaction feels intuitive rather than forcing the user to “think” about how they need to interact with the computer to get what they want out of it.

This idea has changed the way user experience experts lay out programs for users and how we, as users, interact with complex computer systems and logic. It has been great for consumers of software around the globe, because typically the computer “just works” or “just does what I want it to do”.

I think we can all agree that this is a good idea.

What is happening, as open source software gets better and better, and as more developers take these “off the shelf” open source solutions and use them in their code and projects, is that we, the developers, are now the clients of a lot of software that we did not write. This software (following good principles) was developed to hide the details of its complexity so that it is easy to use and plug in: so easy that we don’t even have to think about it.

Once again, I think that we can agree that overall this is a great thing. It helps developers and teams become more productive with the amount of code they can write and ship to their clients.

However, every year, the distance between the average developer and “the bare metal” grows larger and larger. As the popular Microsoft employee Scott Hanselman has said, “we are abstracting on the shoulders of giants”, and I agree. If you haven’t read the article, he makes a good point about how developers can build something amazing by reusing complex software that has been tested and proven by other great developers. This reflects some of software development’s most popular idioms (DRY, “Don’t Repeat Yourself”, and “Only Write the Code That Only You Can Write”), and with all the open source software on the web for everyone to grab and plug in to their projects, we make great use of it and build some extraordinary solutions to problems very quickly.

This is great for developer productivity, but sometimes I wonder whether it comes at a cost.

Recently, I have started to get into competitive programming. For those of you not familiar with this term, I wrote a post about what it is and why you might be interested in it here.

It’s been a lot of fun competing in these programming challenges, but it has made me realize just how inefficient many of the solutions we code every day tend to be (I’m speaking as an enterprise developer at large companies), because of the vast resources we have in CPU power and memory. Every solution in competitive programming is run against many predefined tests, and even with the fastest processors and unlimited RAM, a solution will fail under the stress of these tests if its algorithms and data structures are not optimized.

The advances in hardware, combined with a “plug and play” attitude, allow the developer to code something that works great but hides the details of why it works or, perhaps more importantly, when it won’t work.

And sometimes even that is okay (though surely not optimal), because the probability of reaching that limit (the point where your solution won’t work) is very slim.

Why would you want to solve a complex problem twice anyway? It’s a problem that’s already been solved by another developer.

My argument is that many developers have never thought about the complexities of these solutions even once. So in reality they’ve never solved this problem.

Could this be a bad thing?

What concerns me is this: if you have no prior experience with these kinds of complex problems of computational logic and efficiency, then you have zero insight to bring to a new problem, or algorithmic approach, that would benefit greatly from your experience with an old one, and you miss the chance to solve that new problem.

Don’t get me wrong, you won’t see me complaining that someone has already implemented a compiler for me, or that I can use an open source ORM instead of coding my own (I do think it’s crucial that a developer understands these, though, and could code a very basic implementation). That being said, the developer community seems to be getting farther and farther away from what (I believe) really makes developers valuable. And that is how we think.

This gets me back to competitive programming (which I will now refer to as CP). In just the last month, I have probably had more mental stimulation and growth than in the last year of my day job combined, and I owe this to the problems that I have had to sit down, THINK about, and solve. This is not to say my day job is easy (believe me, we have many problems to work through), but they aren’t the type of problems that force me to think outside the box very often.

Many times, the hard problems that require actually stopping, thinking, and reflecting have already been solved for me by someone else and implemented in an open source library. It’s just too easy to grab it and use it when it has been tested and proven to work for my needs. These libraries are necessary for us to be productive and for software to be where it is today. However, just because we have a calculator to do division doesn’t mean we should stop understanding how to perform long division by hand, because that understanding provides insight into how arithmetic actually works that you otherwise would not have. You don’t just magically have the answer when you divide one number into another. There is a process, and it is this process that is being abstracted farther and farther away from development teams.

Let me give you an example. I was working on a problem that required me to develop an algorithm that found the answers to sub-problems in order to build the answer to the parent problem. For simplicity’s sake, let’s refer to the common problem of calculating the next number in the Fibonacci sequence. If you are not familiar with this, the short explanation is that the first two numbers in the Fibonacci sequence are either 1 and 1, or 0 and 1, depending on the chosen starting point of the sequence, and each subsequent number is the sum of the previous two. Therefore, if you want to calculate the Nth Fibonacci number, you need to compute all of the Fibonacci numbers before N to arrive at your solution.

Many people will use simple, straightforward recursion as a solution to this type of problem and, in fact, that is what I initially did as well. The problem with this, which I noticed after I started solving complex problems in CP, is that during recursion the same sub-problems may be solved multiple times.
Consider the example of calculating the Nth Fibonacci number:

fibo(n)   = fibo(n-1) + fibo(n-2)
fibo(n-1) = fibo(n-2) + fibo(n-3)
fibo(n-2) = fibo(n-3) + fibo(n-4)
...
fibo(2)   = fibo(1)   + fibo(0)

In just the first three steps, you can clearly see that fibo(n-3) is calculated twice. Go deeper into the recursion and you will find the same sub-problems repeated again and again. In fact, the running time satisfies the recurrence T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially.

A better way of doing this (for this problem definition, anyway) is to implement what is known as a “dynamic programming” solution. This is a very popular algorithmic technique that remembers the results of our sub-problems to avoid doing the work more than once, and it can be applied in many cases where a naive recursive solution has been implemented. It looks like this:

#include <stdio.h>

int fib(int n)
{
    /* Declare an array to store Fibonacci numbers.
       (This is a C99 variable-length array.) */
    int f[n + 1];
    int i;
    if (n <= 1)
        return n;
    /* 0th and 1st numbers of the series are 0 and 1 */
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
        /* Add the previous 2 numbers in the series
           and store the result */
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}

int main(void)
{
    int n = 9;
    printf("%d", fib(n));
    return 0;
}

Refactoring the solution in this way reduces the time complexity to O(n), so we are in linear time (Yay!). This can actually be optimized further, to O(log n), by expressing the recurrence as a 2x2 matrix M and computing power(M, n) with repeated squaring; the (n+1)th Fibonacci number then appears as the element in the first row and first column of the resulting matrix.

Obviously, things get more and more complex the further this is optimized. However, if you did not know that a solution built from the answers to its sub-problems could be optimized from exponential time down to linear, or even logarithmic, time, you would miss a great opportunity to apply this knowledge to future problems you come across.

This is just one very small area that shows the benefit of solving problems like this on a regular basis: it builds up the intuition and insight you need when you come across a future complex problem and need a solution that works and is correct.

In conclusion, I do believe that, for the most part, the “Don’t Make Me Think” paradigm is good for delivering solutions to our clients. However, what makes us unique and valuable as developers is our ability to think, and our confidence that we have been here before and know what to do: that we can recognize similarities that provide insight into solving even harder problems down the road.

For myself, CP has been a great influence in helping me recognize the error of my ways. I hope this will inspire you not to be afraid of “thinking”, and that the more you do it, the better you will become at it.

As a side note, I want you to know that premature optimization is just as bad as unoptimized code. Know when and where to spend your efforts when you are applying these optimization techniques.

I hope you enjoyed this article, and if you did, please subscribe to my blog! If you didn’t, then tell me why in the comments! I would also really appreciate it if you liked or shared the article with other developers you think would benefit from it! Thanks for reading!
