Moore's Law and the Programming Language Renaissance

Within select academic communities, programming languages have been developed that make multicore programming far less daunting.

Moore's Law observes that roughly every 18 months, the number of transistors that can fit on a CPU doubles. This means every 18 months you can stuff twice as many transistors onto a CPU, historically making it roughly twice as fast. This trend has held since the dawn of the integrated circuit in the late 1950s, and as technology has advanced, not only have CPUs been able to hold more transistors, but the transistors themselves have kept shrinking. Intel recently announced plans this September for a 14 nanometer CPU. 14 nanometers is only about 60 times the width of a silicon atom, which means you can fit billions of transistors on a chip -- over a million times more than you could in the 1950s.

Throughout the second half of the 20th century, Moore's Law was the software developer's best friend: thanks to ever-increasing CPU speed, software became twice as fast every 18 months without any extra effort on the developer's part. Around 2002, however, heat and power limits in a CPU's circuitry demanded a new architecture if Moore's Law were to hold true. Out of these limitations the multicore processor was born. Essentially, before 2002 you had one CPU in your computer, and after 2002 you have multiple CPUs -- or cores -- in your computer. Each individual core has not gotten much faster since 2002, but the number of cores is doubling every 18 months, which means Moore's Law still holds true.

In order for today's developers to benefit from the speed increases Moore's Law offers, they need to take advantage of multicore processors. But it's not that simple. For the past 50 years developers have written programs to run on a single CPU, and most of the tools, programming languages, college curricula and research have followed suit. A program designed with all of that knowledge and experience will run on only one core of a multicore CPU, and that core isn't getting any faster. Now developers have to rethink their programming approach and coordinate -- or parallelize -- programs across cores, which is no easy feat.

An apt metaphor for describing the difficulties of multicore programming is the game "telephone." For those of you who aren't familiar with this childhood game, the basic premise is that several children stand in a line and the first child whispers a word or phrase to the next, who whispers the same word or phrase to the next child, and so on. When the last child hears the word or phrase, he says it out loud so everyone can hear, and it is never the same as it was at the beginning of the chain. I remember once starting one of these infamous chains with the phrase "I think Jenny's cute" only to have it end with "Jenny smells like monkey poop" -- clearly not the desired outcome. Multicore programming is like this but with millions of kids whispering millions of words, a million times a second.
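To make that concrete, here's a minimal sketch in Scala (a language I'll come back to shortly) of how a message gets garbled in multicore code. The names RaceDemo and counter are just illustrative. Two threads share one mutable counter, and because their updates interleave, the total usually comes out short:

    // Two threads increment one shared, mutable counter.
    object RaceDemo {
      var counter = 0 // shared mutable state -- the whispered phrase

      def main(args: Array[String]): Unit = {
        val threads = (1 to 2).map { _ =>
          new Thread(new Runnable {
            def run(): Unit = for (_ <- 1 to 100000) counter += 1
          })
        }
        threads.foreach(_.start())
        threads.foreach(_.join())
        // Expect 200000; the printed value is usually less, because
        // increments from the two threads trample each other.
        println(counter)
      }
    }

That silent corruption is the "monkey poop" moment, and it's maddeningly hard to reproduce and debug.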

Today's most popular programming languages (C, C++, C#, Objective-C, Java, Python, Ruby) are ill-equipped to stand up to the challenge of multicore programming. Developers can still use these languages for single-CPU programming, but doing so leaves the speed of their programs stuck in 2002. And unfortunately, that is exactly what is happening with much of the code being written today.

However, within select academic communities, programming languages have been developed that make multicore programming far less daunting. These are known as "functional programming languages," and while they don't completely solve the difficulties of multicore programming, they go a long way toward taming them. To return to the earlier metaphor, playing "telephone" with a functional programming language is like playing with adults instead of children: there is still room for "monkey poop" moments, but they are far less likely.
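Here's a small sketch of why, again in Scala. In the functional style the data is immutable and each element is transformed independently, so the work can be spread across cores without any whispering between threads. The .par call comes from Scala's parallel collections (built into the language at the time of writing; newer Scala versions ship them as a separate module):

    // No shared mutable state: each doubling is independent,
    // so the runtime can safely farm the work out to all cores.
    val numbers = (1 to 1000000).toVector
    val doubled = numbers.par.map(n => n * 2) // runs across cores
    println(doubled.sum)                      // same answer every time

Because nothing is mutated, it doesn't matter which core handles which element, or in what order -- the message can't get garbled in transit.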

My favorite of these functional programming languages is Scala. Scala runs on the Java Virtual Machine, which is great because it means all of the knowledge a Java programmer has built up over their lifetime -- the libraries, the tooling, the JVM itself -- carries over directly to Scala. It's also flexible enough that a Java developer can ease into the language. Scala describes itself as an "object-oriented-functional hybrid" language. Java programmers have been doing object-oriented programming since Java's inception, so they only need to adapt to a few of the functional programming paradigms.
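Here's a small sketch of what that hybrid feels like in practice. The class and function names (User, names) are just illustrative, but note that java.util.UUID -- an ordinary Java library -- is used from Scala without any ceremony:

    import java.util.UUID // any existing Java library works as-is

    // Object-oriented, just like Java: a class with fields
    class User(val name: String) {
      val id: UUID = UUID.randomUUID()
    }

    // Functional: a pure function over an immutable list -- no loops, no mutation
    def names(users: List[User]): List[String] = users.map(_.name)

    val team = List(new User("Ada"), new User("Grace"))
    println(names(team)) // prints List(Ada, Grace)

A Java developer can start by writing the class exactly as they always have, then pick up the functional pieces like map as they go.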

Scala has been dramatically gaining in popularity over the past decade. Companies like LinkedIn, Twitter, Intel, Foursquare, and the Huffington Post are using it in production. Large communities have sprouted up all over the world through sites like meetup.com (also using Scala). And it's becoming easier to learn since free courses on the language are being offered through coursera.org.

Now it's easy for me to sing the praises of Scala, as an avid user and the organizer of both the New York Scala Meetup and the Boulder Scala Meetup. So I think it's only fair to note that a few other languages out there also make programming on multicore processors easier. Some popular ones are Clojure, F#, and Go. These languages all have their upsides and downsides, as any language does, but given the challenge the multicore turn in Moore's Law has posed to developers, it's in their best interest to start learning one.
