The Legend Of The Phoenix: Building The New HuffPost

There is a rule of thumb in the world of software development that every single developer learns at the beginning of their career: "When something works, don't touch it." Well, rules are meant to be broken, and that's exactly what we have done with the Phoenix project.
This post was published on the now-closed HuffPost Contributor platform.


Let's start from the beginning:

When we started the project, it was temporarily named "Performance improvement," and the goal was to improve the page load speed. I bet every IT-driven business (nowadays that is literally every single business) faces performance optimization issues once in a while. After spending a couple of weeks reviewing our codebase and investigating spots that might require performance optimization, we faced a dilemma. There were two options: either apply a series of patches to the codebase or start from scratch.

The first approach required less time and fewer resources and was incremental, meaning we could deploy updates one by one. By choosing this option we wouldn't break the rule of thumb I mentioned in the first paragraph. However, we would have become hostages of that rule, combing through every single line of code without the ability to improve specific spots because of the existing architecture.

The second approach required our team to build a new architecture, taking into account not only the current state of the HuffPost business model, but also short- and long-term perspectives. This approach would give us a chance not only to improve scalability, but also to implement a bunch of tasks pending in our backlog and apply some solutions we had postponed for a while due to their incompatibility with the current architecture. Following this path would require more time, a longer debugging period and more QA resources. However, the great win from the complete rebuild would be the ability to build a new Document Object Model (DOM) and rethink every single logic unit of the page from the HTML/JS point of view, reviewing its backend logic and business necessity. This is directly connected with the project goal of improving page load speed: the less information you transfer, the faster the page loads.

The decision was made in favor of a new architecture, and that's how the project got its name: Phoenix. When I was asked to give the project a name it came to me right away. I think Phoenix describes the new architecture precisely -- "burnt the old to give birth to something completely new".

The Phoenix development process:

It was necessary to choose the foundation of the project. Before Phoenix started, I was working on the architecture of the HuffPost mobile web version. For that project we used a Foundation/SASS combination and found it to be a good, stable solution. In order to use the same technologies, and with a responsive future in mind, we stuck with the same combination for the desktop version as well. Here are the simple rules we set for ourselves in order to keep the code clean and the logic simple and easily understandable:

  • Don't copy/paste anything without knowing 100 percent that it's still necessary and in use
  • Don't add any complexity to the logic; keep it as simple as possible
  • Before adding something to the current logic, think of scalability
  • Check every single unit with the Editorial team before adding it to the project

By following these rules we've avoided a lot of "code loops" and redundant logic. We've reviewed all the logic and verified it with the editorial team to make sure that every single unit was needed and working properly. Along with that we've discovered several algorithm changes the editorial team had in mind, and we were able to implement those right away. This saved us a lot of time and resources -- pretty nice, right?

I read an article stating that the key concept in web-document optimization is to avoid loading unnecessary files. Don't invent techniques and frameworks with fancy titles; just don't load stuff you won't use. The key idea of the article highlighted exactly what I was thinking about, and this is how I came up with the idea of modularity for the Phoenix architecture.

In a nutshell, you should shift your point of perception. You should consider a web page both as a whole and as a combination of units. We've split the page into atomic units. Units can be of a higher or lower level, and every single page can be a combination of these units, like LEGO blocks. All you need to do is build the atomic, autonomous blocks. Building a new unit is as simple as:

  • describe HTML in new_unit.tpl
  • add new_unit.js (if necessary)
  • create new_unit.sass (compiled into new_unit.css)
  • describe unit connections in configuration file
  • describe API data source (if necessary)

Once units are built, you can use as many combinations as you wish. All you need to do afterwards is describe the combination for every single page. Where? Let's say in JSON. Once it's done, you can easily add/edit/delete new units and combine them in any way you like. Just like LEGOs. It's so simple.
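To make the LEGO idea concrete, here is a minimal sketch of how a page could be composed from named units. The unit names, markup and `renderPage` helper are all hypothetical illustrations, not HuffPost's actual code; the point is only that a page reduces to an ordered JSON list of unit names.

```typescript
// Each atomic unit knows how to render itself into an HTML fragment.
type Unit = {
  render: (data: Record<string, unknown>) => string;
};

// A registry of hypothetical units (the real ones would come from
// new_unit.tpl / new_unit.js / new_unit.sass as described above).
const units: Record<string, Unit> = {
  top_news: { render: (d) => `<section class="top-news">${String(d.headline)}</section>` },
  trending: { render: () => `<aside class="trending">Trending now</aside>` },
  footer:   { render: () => `<footer>About | Contact</footer>` },
};

// A page is nothing more than a JSON description: an ordered combination of units.
const frontPage = { units: ["top_news", "trending", "footer"] };

// Compose the page by concatenating each unit's fragment, in order.
function renderPage(page: { units: string[] }, data: Record<string, unknown>): string {
  return page.units.map((name) => units[name].render(data)).join("\n");
}
```

Adding, removing or reordering units then means editing the `units` array in the page's JSON, nothing more.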

The next step is moving the site configuration, describing the tree itself and all the units, into JSON files. This makes building a new section of our site as simple as editing one file. Along with the structure configuration, the JSONs also contain section-specific values for each unit. This approach spares us tons of conditional statements on the backend and makes the templates clearer and more transparent.

Once you delete a unit from the configuration file, the corresponding CSS and JS are removed right away. No need to search the codebase for connections. No need to worry whether removing a condition will break something else. All you need to do is change "true" to "false" in the JSON. Furthermore, you can build a UI, and toggling a unit becomes as simple as clicking a checkbox on the configuration page. It all depends on how far your imagination and creativity can take you.
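A hypothetical section configuration (the section and unit names here are invented for illustration) might look like this: each unit carries an enabled flag plus its section-specific values, and the build simply filters on the flag.

```typescript
// Hypothetical section configuration: which units the section uses,
// whether each is enabled, and section-specific values for each unit.
const sectionConfig = {
  section: "tech",
  units: {
    top_news: { enabled: true, headline_count: 5 },
    trending: { enabled: true, source: "social" },
    newsletter_promo: { enabled: false },   // switched off with a single flag
  },
};

// The build keeps only units whose flag is true; their CSS/JS and API
// calls drop out automatically, with no conditionals in the templates.
function enabledUnits(config: typeof sectionConfig): string[] {
  return Object.entries(config.units)
    .filter(([, cfg]) => cfg.enabled)
    .map(([name]) => name);
}
```

Flipping `enabled` from `true` to `false` is the whole change; nothing else in the codebase needs to know.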

Build process:

Now that we have built atomic units and described the page structure, it's time for the build process. It starts with building the tree of the page, which we build every single time before doing anything else. Once we have the tree, we know exactly which units a specific page will have. What benefit does this information give us? Well, by knowing the exact units we need, we can call only the APIs we need, build only the HTML we need and include only the CSS/JS we need. Nothing else. Imagine you have 70+ sections and 10+ international editions. Each of those might have its own unique units or a completely different layout (like our TED Weekends section).
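The "include only what the tree needs" step can be sketched as follows. The unit-to-asset mapping is an invented example, not our real manifest; the point is that the tree alone determines every CSS file, JS file and API endpoint a page pulls in.

```typescript
// Hypothetical mapping from each unit to the assets and APIs it needs.
const unitAssets: Record<string, { css: string[]; js: string[]; apis: string[] }> = {
  top_news: { css: ["top_news.css"], js: ["top_news.js"], apis: ["/api/entries"] },
  trending: { css: ["trending.css"], js: [], apis: ["/api/trending"] },
  footer:   { css: ["footer.css"], js: [], apis: [] },
};

// Given the tree (the units of one specific page), collect only the assets
// those units need. Anything not in the tree is never loaded.
function collectAssets(tree: string[]) {
  const css = new Set<string>();
  const js = new Set<string>();
  const apis = new Set<string>();
  for (const unit of tree) {
    const a = unitAssets[unit];
    a.css.forEach((f) => css.add(f));
    a.js.forEach((f) => js.add(f));
    a.apis.forEach((f) => apis.add(f));
  }
  return { css: [...css], js: [...js], apis: [...apis] };
}
```

A page without the `trending` unit never fetches `/api/trending` or ships `trending.css`, which is exactly where the page-weight savings come from.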

Speaking of data sources, the Phoenix version of HuffPost is completely autonomous in the sense that it makes no data queries inside the codebase. Instead, we made the necessary API calls (which are also described in JSON) at the data collection point and used this data to populate the templates afterwards. When the architecture and configuration adjustments were done, the last step was adding sharding and a lazy loading mechanism. We load all the assets, such as images and third-party elements, in parallel after the document is complete.
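A single data collection point can be sketched like this: every endpoint the page's units declare is fetched in parallel, once, before any template is populated. The `collectData` helper and the injected `fetcher` are hypothetical illustrations, not our production code.

```typescript
// Hypothetical data collection point: fetch every declared endpoint in
// parallel, then hand the results to the template layer keyed by endpoint.
async function collectData(
  endpoints: string[],
  fetcher: (url: string) => Promise<unknown>,
): Promise<Record<string, unknown>> {
  // Fire all requests at once; Promise.all keeps results in endpoint order.
  const results = await Promise.all(endpoints.map(fetcher));
  const data: Record<string, unknown> = {};
  endpoints.forEach((url, i) => {
    data[url] = results[i];
  });
  return data;
}
```

Because the fetcher is injected, the same collection point works with a real HTTP client in production and a stub in tests.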

In regard to the timeline, about 50 percent of the overall development process was devoted to debugging and shaping the units themselves, rather than working on the general architecture. Also, we completely changed the data schema, which required a thorough investigation on the infrastructure side to make sure it wouldn't hurt our current infrastructure configuration.

Looking back, Project Phoenix was a huge team effort: editorial, product, development, tech ops and QA. Everybody was very excited about working together on something huge and valuable. We realized we were creating the next generation of HuffPost, and that's a great feeling. We achieved our goal of reducing page load time by almost 50 percent, built a new scalable and maintainable codebase, revisited all of the units of our website and verified their logic and necessity. Additionally, we established a completely new philosophy inside our team, where everybody is working on something big and feeling their personal impact. Even though this is a huge step, it's just the first one. Our team has more plans to make Huffington Post even faster and more accessible in order to improve the way our readers experience the news.
