Frédéric Véron

The Causes and Solutions to the State of Big Data in the Mortgage Industry

Posted: 01/12/12 04:00 PM ET

While many books have been written about the causes and after-effects of the 2008 financial crisis, little analysis has appeared on the discrete technical issues affecting the state of financial information. A concurrent crisis occurred within the community charged with managing mortgage-related data, and its causes and subsequent solutions have had a greater effect on mainstream companies and consumers than one might realize.

The cause

Prior to 2008, there were four key problems with the state of Big Data within the mortgage industry:

1. Unstructured: Different firms used different data standards and terminology; imagine two people who don't speak the same language attempting to communicate through hand signals.
2. Unorganized: Because data lived in disparate standards, management processes were scattered across divisions and organizations rather than concentrated within a dedicated team.
3. Inefficient: Most companies maintained far more data elements than is realistically manageable.
4. Exposed: The industry lacked the granularity needed to provide a dynamic view of counterparty risk.
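The standards mismatch in point 1 can be made concrete with a toy sketch. The field names and "shared standard" below are entirely hypothetical -- they stand in for any common vocabulary two firms might agree on, not an actual industry schema:

```python
# Two hypothetical firms describe the same loan with different field
# names. Mapping both to a shared vocabulary lets their records be
# compared directly. All names here are illustrative.

FIRM_A_TO_STANDARD = {
    "loan_amt": "original_loan_amount",
    "int_rate": "note_rate",
    "fico": "borrower_credit_score",
}

FIRM_B_TO_STANDARD = {
    "principal": "original_loan_amount",
    "rate_pct": "note_rate",
    "credit_score": "borrower_credit_score",
}

def normalize(record: dict, mapping: dict) -> dict:
    """Translate a firm-specific record into the shared vocabulary."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = normalize({"loan_amt": 250_000, "int_rate": 4.5, "fico": 720},
              FIRM_A_TO_STANDARD)
b = normalize({"principal": 250_000, "rate_pct": 4.5, "credit_score": 720},
              FIRM_B_TO_STANDARD)
# Once normalized, the two records are identical and can be compared
# field by field -- the "hand signals" problem disappears.
```

Without the shared mapping, even this three-field comparison requires bespoke translation logic for every pair of firms.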

The state of mortgage-related data was -- to be kind -- a bit of a mess. So what did all of this mean in layman's terms?

It meant firms were trying to understand their exposure to others but lacked the ability to view it by counterparty. Companies did not understand their exposure at the financial-instrument level. As a result, it was difficult -- if not impossible -- to know your company's exposure to third parties with which no direct relationship existed.
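To make the granularity point concrete, here is a toy sketch of rolling positions up by counterparty and by instrument -- the names, instruments, and amounts are invented for illustration, and real exposure calculations are far more involved:

```python
from collections import defaultdict

# Hypothetical positions: (counterparty, instrument, exposure in dollars).
positions = [
    ("Bank A", "MBS-2007-1", 5_000_000),
    ("Bank A", "MBS-2007-2", 3_000_000),
    ("Bank B", "MBS-2007-1", 2_000_000),
]

def exposure_by_counterparty(positions):
    """Total exposure rolled up per counterparty."""
    totals = defaultdict(int)
    for counterparty, _instrument, amount in positions:
        totals[counterparty] += amount
    return dict(totals)

def exposure_by_instrument(positions):
    """Total exposure rolled up per financial instrument."""
    totals = defaultdict(int)
    for _counterparty, instrument, amount in positions:
        totals[instrument] += amount
    return dict(totals)
```

The point of the pre-2008 problem is that neither roll-up was reliably possible: without consistent identifiers for counterparties and instruments, the grouping keys themselves did not line up across systems.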

How did it get to that point? A recurring theme within the modern finance industry is that the most powerful institutions only think about data in short-term increments. There was no long-term planning, and no consideration of the consequences of data inefficiency during the inevitable periods of instability. Debates about data integrity, storage, management, optimization, and even the types of data being sought and managed only happened in times of crisis. And it took the biggest crisis of all for influencers to realize the status quo within data management needed to change.

The effect

As the financial services and mortgage sectors were turned on their heads in late 2008, we finally saw industry-wide recognition that data must be standardized. This started a bottom-up movement to develop stronger and more consistent data standards to ensure the crisis could not repeat itself.

Institutions realized they needed to extract real value from data and transform it into tangible pieces of information. This shift would enable businesses to do stronger modeling and drive analytics. It renewed acute interest in predictive analysis and business intelligence that would allow institutions to see patterns around delinquencies and catch potential issues earlier in the mortgage process.
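As a toy illustration of the kind of delinquency-pattern detection described above -- a sketch under assumed data shapes and thresholds, not any institution's actual model:

```python
# Hypothetical early-warning scan: flag loans whose payment history
# shows delinquency early in the loan's life. The data shape, window,
# and threshold are all illustrative assumptions.

def early_delinquency_flags(histories, window=12, threshold=2):
    """Return loan ids with >= `threshold` missed payments in the
    first `window` months.

    `histories` maps loan id -> list of monthly statuses,
    each "paid" or "missed".
    """
    flagged = []
    for loan_id, months in histories.items():
        missed = months[:window].count("missed")
        if missed >= threshold:
            flagged.append(loan_id)
    return flagged

histories = {
    "L-001": ["paid"] * 12,
    "L-002": ["paid", "missed", "paid", "missed"] + ["paid"] * 8,
}
# L-002 shows two missed payments in its first year and gets flagged.
```

Real business-intelligence tooling would of course use statistical models rather than a fixed threshold, but the prerequisite is the same: consistent, well-structured payment data to scan.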

We also saw Congress embracing the role of data through legislative action. One of Dodd-Frank's most important creations was the Office of Financial Research. This office was tasked with ensuring that the financial and mortgage industries established consistent data standards, adding a top-down component to a process that was already organically in motion.

What it means

We now have new rules for success at the enterprise level in this evolving era of data. But data cannot be seen as "just a tech issue." Business accountability and governance are equally important components, and data itself is a critical business asset that must be treated as such.

Therefore, as we standardize data delivery and move toward standards that everyone can follow, the industry will focus much more on enterprise data management. We will align resources against what needs to be done to best manage data across the enterprise. To this end, ownership, accountability, and stewardship are critical.

2011 ushered in a movement across the financial industry to review data as soon as it comes in the door, catching mistakes and bad data much earlier in the process. This substantially reduces overall risk for the industry and our customers.

At this point, data quality and enrichment become critical and need to be implemented at the source and monitored throughout end-to-end processes. At Fannie Mae, we are building toward our ultimate goal of qualifying data at the outset and viewing quality issues at the enterprise level for our most critical data, thus building a safety net across the data life cycle. This will strengthen our ability to find more issues, in more places, earlier in the process, and improve quality control across the mortgage industry.
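A minimal sketch of what qualifying data "at the outset" can look like in practice -- the field names, rules, and thresholds below are hypothetical illustrations, not Fannie Mae's actual checks:

```python
# Hypothetical intake validation: check a loan record the moment it
# comes in the door rather than downstream. Every rule here is an
# illustrative assumption.

def validate_loan(record):
    """Return a list of data-quality errors for one loan record
    (empty list means the record passes intake checks)."""
    errors = []
    if record.get("original_loan_amount", 0) <= 0:
        errors.append("loan amount missing or not positive")
    rate = record.get("note_rate")
    if rate is None or not (0 < rate < 25):
        errors.append("note rate missing or implausible")
    score = record.get("borrower_credit_score")
    if score is None or not (300 <= score <= 850):
        errors.append("credit score missing or outside FICO range")
    return errors

clean = {"original_loan_amount": 250_000, "note_rate": 4.5,
         "borrower_credit_score": 720}
bad = {"original_loan_amount": -1, "note_rate": 4.5}
# `clean` passes; `bad` is rejected at intake for its negative amount
# and missing credit score, before it can pollute downstream systems.
```

The design point is where the check runs: at the source, so that a bad record is quarantined once rather than rediscovered separately by every downstream consumer.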

What's next

Better data management delivers increased efficiency and reduces reputational risk, in addition to improving our industry's ability to deliver better reporting. Working toward more centralized third-party data from fewer sources will help increase quality and optimize spend, and it is a critical component of this strategy.

Fannie Mae's practices are in line and on pace with others' efforts -- which is a necessary part of this evolution. Financial firms must work together in new and encouraging ways to get where we need to be on data. We all remain interconnected, so one company getting its data in order means little if the others lag well behind.

Most financial firms are in the beginning stages of data reform, and it's important to remember that this is new for most of us. It is a monumental challenge that will take time and resources, but the industry is heading in the right direction. Shoring up our data at the enterprise level will improve the country's business performance and help ensure we learn the important lessons about prioritizing data management and integrity.