The Curious Programmer

Software, Gadgets, Books, and All Things Geek

Learn Anything Faster and More Effectively — June 30, 2015

Learn Anything Faster and More Effectively

It used to be hard to find an abundance of information about a particular subject. You would have to go to the library and find books that were written by subject matter experts and then decide if that information was even up to date with the latest research or if it was even valid any longer.

Then the web was born. The problem then wasn’t so much the lack of information, but the difficulty in finding quality information quickly.

Then Google was born, and it changed the web by delivering the information you wanted along with the highest-rated content its web crawlers had come across. It even has a Knowledge Graph that takes advantage of “Structured Data” (structured data markup is a standard way to annotate your content so machines can understand it).

It is ridiculously easy to find the answers to questions and to learn new topics just from a few searches and a few blog posts by industry experts. One of the best and most overlooked aspects of this is just how quickly information can be updated: barely a day may pass between an expert learning a new technique or idea and us, the consumers, reading about it on their blog. Truly amazing.

As the world’s information becomes easier and easier to obtain, we are suddenly realizing that it isn’t the lack of good information, or knowing where to find it, that is holding us back anymore. Our real problem is our ability to learn and understand this information fast enough to take advantage of the vast amount of data and apply what we have learned to our own lives.

I began to have this problem a few years ago when I truly decided that I wanted to master my craft (software engineering). I started out by gathering resources: blogs that I wanted to read every day, books that were vital to the industry, new research being done in the area, Pluralsight videos, YouTube videos, technology podcasts, and much, much more.

It didn’t take long to realize that if I truly wanted to master my craft, I needed a way to learn faster.


This is where my “learning journey” began. I started to rigorously research the ways our brains learn and how I could hack this process to learn as quickly as possible, while retaining and comprehending all the information that I was absorbing.

The results were astounding. I realized that I had been doing most things wrong in terms of effective learning. However, by researching and applying the facts to my real life and my real learning habits, I began to retain and understand concepts and topics on a much deeper level than I ever had before.

That’s what I want to share with you. I have found what works best for me and what experts say will work best for everyone trying to learn and master new material. A common statement you come across in the media is that “every kid is different; teaching has to be specific and cater to the kid’s learning style.” On the face of it, the statement looks obvious. The empirical evidence, however, does not support it.

So, without further ado, here are my suggestions to help you learn any material more quickly and more effectively than you have been doing in the past.

Rule 1. Speed Reading (with 85-95% comprehension)

I don’t think I could mention tips on how to learn more quickly without talking about speed reading. The reality is that if you want to absorb more information, then you will obviously have to work on your ability to process information from some medium faster.

There are many ways to consume information: videos, podcasts, books, audiobooks, classroom settings, etc. The fact of the matter, though, is that the quickest way to obtain the most information is through reading. You should be able to read much faster than you can listen to (audiobooks, classrooms) or watch (videos) any type of information.

However, the sad fact is that most people can only read about as fast as they talk. That is because they have fallen victim to one of the worst habits you can have while reading: subvocalization. This is when you look at each word and speak it in your head. You will never be able to read at a speed of 800-1000 words per minute (an average rate for a speed reader) unless you break this habit.

There are many great books out there to help you learn how to speed read. The ones I suggest are Breakthrough Rapid Reading and Become a SuperLearner. Both of these books go through great techniques to get you reading around 800 wpm (250 wpm is the average rate people read) in just a few months or even weeks, depending on how much you practice (and you will have to practice). When you can read books and blog posts at a rate 4 times faster than the average human and RETAIN that information better as well (Become a SuperLearner goes through many techniques on memory retention), then you are indeed on your way to becoming a Super Human. If you want to practice speed reading right now, Spreeder is a free web application that helps you read faster and comprehend more of what you read.
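To put those rates in perspective, here is the simple arithmetic behind the “4 times faster” claim (the 90,000-word book length is just an illustrative figure, not from either book):

```java
public class ReadingTime {
    // Minutes needed to read a text at a given words-per-minute rate.
    static double minutesToRead(int words, int wpm) {
        return (double) words / wpm;
    }

    public static void main(String[] args) {
        int novel = 90_000; // roughly a full-length book
        System.out.printf("At 250 wpm: %.0f minutes (~%.0f hours)%n",
                minutesToRead(novel, 250), minutesToRead(novel, 250) / 60);
        System.out.printf("At 800 wpm: %.0f minutes (~%.1f hours)%n",
                minutesToRead(novel, 800), minutesToRead(novel, 800) / 60);
    }
}
```

A six-hour read drops to under two hours, which is what makes the practice time worth it.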

Rule 2. Don’t Reread

Rereading text and massed practice of a skill or new knowledge are by far the preferred study strategies of learners of all stripes, but they’re also among the least productive. Opt for active practice over review. If you are learning a skill, a foreign language or any other topic, practice retrieving it from memory rather than rereading your text or reviewing instructional material.

Recalling what you have learned makes the learning stronger and more easily recalled again later. Space out your practice sessions, letting time elapse between them. Massed practice (like cramming) leads to fast learning but also to rapid forgetting compared to spaced practice.

Spacing helps embed learning in long-term memory. Look for tools on your phone or on the web (I use Anki) that remind you to review material right when you are about to forget it (the best time for retrieval and building a strong mental connection). When you space out practice at a task and get a little rusty between sessions, or you interleave the practice of two or more subjects, retrieval is harder and feels less productive, but the effort produces longer lasting learning and enables more versatile application of it in later settings.
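The scheduling idea behind tools like Anki can be sketched in a few lines. The doubling rule below is a deliberate simplification for illustration, not Anki's actual algorithm (which is based on SM-2 and tracks per-card ease factors):

```java
public class SpacedRepetition {
    // Next review interval in days: double it on a successful recall,
    // reset to one day when the card is forgotten (a simplified rule).
    static int nextInterval(int currentDays, boolean recalled) {
        return recalled ? Math.max(1, currentDays * 2) : 1;
    }

    public static void main(String[] args) {
        int interval = 1;
        // Four successful reviews: the interval doubles 1 -> 2 -> 4 -> 8 -> 16 days.
        for (int i = 0; i < 4; i++) {
            interval = nextInterval(interval, true);
            System.out.println("Review again in " + interval + " days");
        }
        interval = nextInterval(interval, false); // a lapse resets the card
        System.out.println("Forgot it: back to " + interval + " day");
    }
}
```

The point of the growing gaps is exactly the effortful retrieval described above: each review happens right when the material is about to slip away.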

Rule 3. Mix Up Practice and Interleave Ideas

Practice also has to be interleaved. Interleaving is practicing two or more subjects, or two different aspects of the same subject, in parallel. You shouldn’t study one aspect of a subject to completion, then move on to the next, and so on. Linearity isn’t good.

Let’s say you are learning some technique, for example the EM algorithm. If you stick to the data mining field, you will see its application in, let’s say, mixture estimation. However, by interleaving your practice with state space models, you see the EM algorithm being used to estimate the hyperparameters of a model.

This interleaving of various topics gives a richer understanding. Obviously, there is a price to pay. The learner is just about to understand something when he is asked to move to another topic, so the feeling that he hasn’t got a full grasp of the topic remains. Interleaving is a good thing to do, but an unpleasant situation that a learner must handle.

Rule 4. Teach What You Learn

Have you ever heard the saying, “You never truly understand something until you can teach it to a child”? I couldn’t agree more. When you are able to draw on analogies to explain a complex subject (water flowing through a pipe to help explain electricity flowing through a circuit), then you are on your way to a deeper understanding of the topic. When you are learning new material, try to draw connections in your brain to things you already understand. Doing this will help you remember what you have learned and better understand how it relates to other subjects that you already know. Discussing new information in your own words and connecting it to things you already know makes learning more efficient and longer lasting.

Teaching also helps you find your weak points in understanding. When you try to explain something to someone and they don’t understand it, or they think about it in a different way than you initially did, you can take the chance to revisit the material and get a deeper understanding of the subject matter.

Rule 5. Pay Attention and Test Yourself

It’s easy to start daydreaming when you are reading a book or an informative article. You must fight the urge to do this by having a purpose for reading the article or chapter.

I do this by quickly skimming the chapter subheadings or paragraphs to get an overall idea of what the chapter is about. Then I quickly think of about 5 things that I expect to have learned after reading the chapter, and I test myself on them at the end.

When you are reading with a purpose, you are far less likely to start daydreaming or lose your focus on the article. This is a surprisingly effective and simple strategy.

Rule 6. Repeat “focus bursts,” where we give our very best effort for a short period of time, then take fulfilling and refreshing breaks.

There are multiple studies that confirm that proper rest increases brain functioning. The typical, caffeine-induced, late night cramming session that most students engage in at least once in their life is not the most effective way to learn. In fact, there is evidence to suggest that it is the least effective way. If we want to learn something quickly, we need to do it when our minds are fresh. We need to engage in “focus bursts” where, with fresh energy and a well-rested mind, we focus all our attention on learning, perfecting, and linking the chunks. Then, when we start to feel our effectiveness dissipate, we take breaks to recharge.

Focus burst, recharge, focus burst, recharge. Over and over again. This is the way to speed up the learning process. Long study sessions are not as effective as short bursts. In long sessions we are prone to distraction, and we are also prone to focusing on time rather than repetitions. However, if we will train ourselves to learn like a top athlete trains (in smaller, high intensity chunks) we will be very happy with the results that we get.

Rule 7. Binaural Beats

Binaural beats involve playing two close frequencies simultaneously to produce alpha, beta, delta, or theta waves, which are associated with sleep, restfulness, relaxation, meditation, alertness, or concentration. Binaural beats are used in conjunction with other exercises for a type of super-learning. If you have ever heard of music helping you learn, or of studying while listening to classical music, it is because of the state of mind that the music can put you in. This can actually affect brain waves and produce a more focused mindset. I recommend downloading “Focus Zen” from the App Store or googling “Binaural Beats” to find a good YouTube playlist. These work much better when listened to with headphones.
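The “beat” itself is simple physics: when each ear hears a slightly different tone, you perceive a pulse at the difference of the two frequencies. A quick sketch (the specific frequencies are illustrative, not a recommendation):

```java
public class BinauralBeat {
    // The perceived beat frequency is the absolute difference of the two tones.
    static double beatFrequency(double leftHz, double rightHz) {
        return Math.abs(leftHz - rightHz);
    }

    public static void main(String[] args) {
        // 440 Hz in one ear and 446 Hz in the other produce a 6 Hz beat,
        // which falls in the theta range (roughly 4-8 Hz).
        System.out.println(beatFrequency(440.0, 446.0) + " Hz");
    }
}
```

This is also why headphones matter: each ear has to receive its own tone for the effect to occur.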

Well, those are the main “learning hacks” that I have been using, and I have seen a great increase in the amount of information I am able to learn in a day. I hope they can do the same for you. It truly is amazing how much information we have access to, but if you want to take advantage of it all, then you really need to LEARN HOW TO LEARN!

I hope you enjoyed this post and, if you haven’t already, then please subscribe to my blog or share this post with anyone you think would like it!

Good luck with your new learning abilities! Use your power for good 🙂

Start Programming…Competitively! — June 26, 2015

Start Programming…Competitively!

I love programming. If you are reading this article, then there is a good chance that you do too, or are at least interested in the idea. The problem with programming for a company these days, though, is that the problems we are solving just aren’t as challenging or fun as the problems that computer scientists and enthusiastic programmers really salivate over (unless you are working at Microsoft, Google, or somewhere else in Silicon Valley).

I work for a large Fortune 500 company, and before that I worked for a biomedical company, and though the work can be challenging, it is usually the business problems that are challenging and not so much the technical/mathematical (interesting) problems.

Business problems can sometimes be intellectually stimulating (I actually got my undergraduate degree in business), but more often than not, I’ve found that you will need to go through a million channels to come to a conclusion that was the most obvious one from the very beginning. This left me bored and frustrated.

If you are like me, then you will instead CRAVE those problems that need just the right algorithm to solve. For these rare problems, choosing and implementing the correct algorithm and using clever tricks of the programming/mathematical trade will determine whether they can even be solved in a given lifetime. My dopamine neurons fire on all cylinders when I encounter one of these bad boys.


Unfortunately for me, they just don’t show up enough. I need more. Like an addict who hasn’t had their drugs, I began to look for other ways to get my fix.

And that’s when I found it. A place where these “programming pearls” were in abundance and I could tinker away at these problems in my leisure whenever I wanted! The place was TopCoder.com, and I was addicted.

Now TopCoder.com isn’t the only place that offers stimulating computer programming problems. There are other sites as well. Actually, my favorite right now has become LeetCode.com for its ease of use. The thing that all these sites have in common is that they are sites for what is becoming known as “Competitive Programming”.

A programmer who has mastered his or her language can go to these sites and “face off” against other competent code slingers and battle to the death!… or at least until one of them completes the programming problem with an acceptable result in the accepted amount of time. Who would have known that I could have fun solving problems against other people! It’s like playing a video game, but one that can actually have real-world benefits!

And that’s what I really want to hit on. For all the coders out there who want to get better in their language of choice… or already think they are the best, I challenge you to go to one of these sites and get involved in some competitions. Fair warning though: these problems aren’t for the faint of heart. They are heavily math-based, so you may need to brush up on some of those algebra concepts that you haven’t seen in a while 🙂

These problems will make you a better programmer. I thought I was on top of my game until I got thrown into one of these matches, and a guy solved a problem that would take most programmers a few days to solve (if they could even solve it) in just a few minutes. It’s a humbling experience and one that has helped me grow a lot. You don’t know if you are on par with the best until you are up against some of the best, and some of the best programmers in the world are frequent visitors of these sites. It keeps you sharp and is actually a lot of fun once you get the hang of it.

If you are new to algorithm development and aren’t familiar with some of the most famous algorithms that have already been written, or aren’t sure how to analyze an algorithm to find its asymptotic complexity (read my previous article), then picking up the book Introduction to Algorithms, by Thomas H. Cormen, would be a great place to start.

If you are ready to jump in then this article will get you up and competing in O(1) time!

I hope you enjoyed this post and if you haven’t already then please subscribe to my blog at jasonroell.com!

Happy coding everyone!!

Know Your Big-O! — June 1, 2015

Know Your Big-O!


As a software developer (or someone involved in higher-level mathematics), you’ve probably heard the term “Big-O” used a lot. I remember skimming through my Introduction to Algorithms book in college and wondering what the heck “Big-O” was all about. It doesn’t go away once you enter the workforce either. If you have to deal at all with computational performance and designing systems that can scale gracefully given extra workload, then you have to understand what “Big-O” is all about.

Okay, I’ve now used the term “Big-O” multiple times and still haven’t suggested what this notation means! Sorry, just wanted to build up a little curiosity first (it’s good for memory retention!). Anyway….

Big-O, also known as “Big-O Notation,” is just a way we, as software engineers or mathematicians, like to describe our algorithms. It is very commonly used to describe the asymptotic time and space complexity of algorithms. The term asymptotic means “approaching a value or curve arbitrarily closely”. In computer science and mathematics, asymptotic analysis is a method of describing limiting behavior. When we want to asymptotically bound the growth of a running time from above, to within constant factors, we use Big-O notation, since it bounds the growth of the running time from above for large enough input sizes.

I know it’s probably still a little unclear what that means. Think about it like this: if we have an algorithm that does work on n data inputs, then as n grows (always approaching infinity), how will our algorithm grow with respect to its running time (time to complete the processing of n units) and its space in physical memory (typically how much RAM will be needed or used as n grows to infinity)?

Big-O is defined as an asymptotic upper bound of a function. In plain English, it means that O(g(n)) describes a function that covers the maximum values our function could take. Big-O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big-O notation usually only provides an upper bound on the growth rate of the function.
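For the mathematically inclined, the standard definition of that upper bound is:

```latex
f(n) = O\big(g(n)\big) \iff \exists\, c > 0,\ \exists\, n_0 > 0 \ \text{such that} \ 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
```

In words: past some input size n0, f never exceeds a constant multiple of g.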

Growth Rate    Name
1              Constant
log(n)         Logarithmic
n              Linear
n*log(n)       Linearithmic
n^2            Quadratic
n^3            Cubic
2^n            Exponential

Plotted on a graph, these functions look as follows. You can see that as n (the number of input elements) approaches infinity, the number of operations taken to complete some of these algorithms grows VERY rapidly.

Big O Analysis
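You can also get a feel for these growth rates by printing approximate operation counts for a few input sizes (a quick illustration; the numbers are the functions from the table above, not measured running times):

```java
public class GrowthRates {
    // Base-2 logarithm of n.
    static double log2(int n) {
        return Math.log(n) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.printf("%6s %10s %12s %12s %16s%n",
                "n", "log n", "n log n", "n^2", "2^n");
        for (int n : new int[] {10, 20, 30}) {
            System.out.printf("%6d %10.1f %12.1f %12d %16d%n",
                    n,
                    log2(n),
                    n * log2(n),
                    (long) n * n,
                    1L << n); // 2^n; a long is enough up to n = 62
        }
    }
}
```

Going from n = 10 to n = 30, n^2 grows by a factor of 9, while 2^n grows by a factor of over a million. That is the difference the chart is showing.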

Alright, I know that the last paragraph had a lot to take in, so I want to make it a little more concrete and precise by providing some examples of when we might use Big-O notation to describe some common algorithms.

Probably the most common place you will see Big-O notation is when you are reading about sorting algorithms. Big-O is the main way we describe these algorithms, allowing the person using them to understand their performance and choose the correct algorithm for their situation based on the data sets and hardware limitations that they have.

Sorting Algorithms

void sort(int[] arr) {
    for (int x = 1; x < arr.length; x++) {
        // Move arr[x] left until it is no smaller than the element before it.
        for (int y = x; y > 0 && arr[y - 1] > arr[y]; y--) {
            int t = arr[y];        // swap arr[y] and arr[y-1]
            arr[y] = arr[y - 1];
            arr[y - 1] = t;
        }
    }
}

Do you recognize this algorithm? It’s called Insertion Sort. It has two nested loops, which means that as the number of elements n in the array arr grows, the sorting will take on the order of n * n operations. In big-O notation, this is represented as O(n^2), and we say the algorithm is a quadratic function (see the growth chart above).

What would happen if the array arr was already sorted? That would be the best-case scenario. The inner for loop will never go through all the elements in the array (because the condition arr[y-1] > arr[y] won’t be met). So the algorithm will run in O(n) time.

We are not living in an ideal world, though, so O(n^2) is also the average-case time complexity.
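You can see the best-case/worst-case gap for yourself by counting the inner-loop comparisons on a sorted versus a reversed array. This is a lightly instrumented variant of the sort above (the counter is added purely for illustration):

```java
public class InsertionSortCount {
    // Insertion sort that returns the number of inner-loop comparisons made.
    static long sortAndCount(int[] arr) {
        long comparisons = 0;
        for (int x = 1; x < arr.length; x++) {
            for (int y = x; y > 0; y--) {
                comparisons++;
                if (arr[y - 1] <= arr[y]) break; // already in order: stop early
                int t = arr[y];                  // swap arr[y] and arr[y-1]
                arr[y] = arr[y - 1];
                arr[y - 1] = t;
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n], reversed = new int[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;
            reversed[i] = n - i;
        }
        // Roughly n comparisons on sorted input, ~n^2/2 on reversed input.
        System.out.println("sorted:   " + sortAndCount(sorted));
        System.out.println("reversed: " + sortAndCount(reversed));
    }
}
```

For n = 1000 the sorted array needs 999 comparisons while the reversed one needs 499,500: linear versus quadratic, exactly as the big-O analysis predicts.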

This is just one of the many sorting algorithms that have been developed over the years, and in fact, it is not typically the best choice as a sorting algorithm to implement if you have a large data set to be sorted.

A more advanced sorting algorithm is Merge Sort. It uses the principle of divide and conquer to solve the problem faster.

  • Divide the array in half
  • Keep dividing the halves until only one or two elements remain
  • Sort each of these small pieces
  • Merge them back together

This algorithm (as with most divide-and-conquer algorithms) has a much more efficient running time, described in big-O notation as O(n log(n)), which happens to be very fast for sorting data elements.
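The steps above can be sketched in code like this (one of several possible formulations, matching the int[] style of the insertion sort earlier):

```java
import java.util.Arrays;

public class MergeSortDemo {
    static void mergeSort(int[] arr) {
        if (arr.length < 2) return;                        // base case: nothing to divide
        int mid = arr.length / 2;
        int[] left = Arrays.copyOfRange(arr, 0, mid);      // divide the array in half...
        int[] right = Arrays.copyOfRange(arr, mid, arr.length);
        mergeSort(left);                                   // ...sort each half recursively...
        mergeSort(right);
        merge(arr, left, right);                           // ...and merge them back together
    }

    // Merge two sorted arrays into out.
    static void merge(int[] out, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        mergeSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 5, 6, 9]
    }
}
```

Notice the extra arrays created in the divide step; they are where merge sort's additional memory usage comes from.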

So if merge sort is faster than insertion sort, why do we even have to be concerned with insertion sort (or any of the slower sorting algorithms, for that matter)? The reason is that we have only been considering asymptotic time complexity and haven’t given thought to asymptotic space complexity. The great time complexity of the merge sort algorithm didn’t come without a cost: with merge sort, we are trading memory (space complexity) for speed (time complexity). This is sometimes okay, and sometimes it’s not. It all depends on your situation and the resources that you have at your disposal.

The asymptotic space complexity for merge sort happens to be O(n). This means that the extra memory it needs grows linearly with the number of input elements n. Now consider insertion sort, which has an asymptotic space complexity of just O(1). That is about as good as it gets in asymptotic analysis. When an algorithm is described as O(1), that means n can get as large as it wants but it will have NO EFFECT on the algorithm’s memory usage. This is known as a constant function.

Therefore, you could use this knowledge to apply the insertion sort algorithm, with its low memory overhead, if you have a very limited amount of memory to use for your processing. However, if memory is less of a concern, then you would most likely use merge sort and spend more memory to get substantial performance gains on large data sets.

Consider the following table comparing the time and space complexities of the two algorithms, and I think you can see why it is important to understand when and where to use the correct one.

Big-O Time and Space Complexities for Insertion Sort and Merge Sort:

Algorithm        Best          Average       Worst         Space
Insertion Sort   O(n)          O(n^2)        O(n^2)        O(1)
Merge Sort       O(n log(n))   O(n log(n))   O(n log(n))   O(n)

Notice that the table also shows the space complexity. How much space an algorithm takes is an important parameter when comparing algorithms. Merge sort uses an additional array, and that’s why its space complexity is O(n); insertion sort, however, is O(1) because it does the sorting in place.

Big-O looks intimidating at first, but just remember that it is a quick and easy way to describe how a particular algorithm will perform as its input size grows towards infinity. Knowing and understanding this notation will quickly help you as a developer to choose the correct algorithm for the right job, and keep you and your team from making a mistake that could cost you O(2^n) time!!

I hope you enjoyed this post and that it shed some light on a topic that is not always explained well in school or to beginner software engineers. Let me know of any other useful resources that describe Big-O, or how it has helped you in your algorithm development! As always, if you liked this post, please subscribe to my blog to get the latest updates at jasonroell.com! Have a great day!