Big Data Continues to Evolve

01/31/2017 10:34 am ET Updated Jan 31, 2017

The act of gathering and storing large amounts of information for eventual analysis is much older than the term “big data,” which gained momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream definition of big data as the three Vs:

Volume. Organizations collect data from a variety of sources, including business transactions, social media and sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.

Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.

Variety. Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.

Data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. The credo of the big data analyst to date has been: “centralize, centralize, centralize,” because the most sensible way to connect and correlate relationships, hierarchies and multiple data linkages was to have them all flow into one place. De-siloing data was an important precursor to analysis.

However, data centralization presents its own challenges. Most classical analytics, data warehousing, and big data analysts advise us to “bring it all together, then we’ll create some value—in about fifteen months.”

This obsession with centralizing data and methods before creating value and the resulting constant need for application and hardware upgrades are having a very negative impact on companies’ bottom lines. Only recently have new techniques emerged that allow solution designers to source, process, and deliver value-added results by interacting directly with disparate systems rather than at points of centralization.
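The contrast between the two approaches can be sketched in miniature. In the federated style described above, each source is queried in place and only the small result sets are correlated, instead of copying all raw data into one warehouse first. The two in-memory SQLite databases below are hypothetical stand-ins for disparate source systems; the table names and data are illustrative assumptions, not anything specific to Pneuron.

```python
import sqlite3

# Two in-memory databases stand in for separate source systems
# (e.g., a CRM and a billing system) that are queried in place.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme Corp"), (2, "Globex")])

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.executemany("INSERT INTO invoices VALUES (?, ?)",
                    [(1, 1200.0), (1, 300.0), (2, 450.0)])

# Federated step: query each system directly, then correlate the
# small result sets in the application instead of centralizing raw data.
names = dict(crm.execute("SELECT id, name FROM customers"))
totals = {}
for cust_id, amount in billing.execute(
        "SELECT customer_id, amount FROM invoices"):
    totals[cust_id] = totals.get(cust_id, 0.0) + amount

report = {names[cid]: total for cid, total in totals.items()}
print(report)  # {'Acme Corp': 1500.0, 'Globex': 450.0}
```

The point is architectural rather than syntactic: each system answers only its own query, and only the aggregated results cross system boundaries.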

Pneuron, founded in 2010 and headquartered in New York, has developed a solution designed to deal with the data and systems fragmentation common in today’s enterprises without requiring massive centralization. Pneuron integrates with big data software like Hadoop, analytics and visualization tools such as Tableau, and much more – any software or application with an API. It was designed as an extensible solution to help teams leverage their existing tooling and IT investments.

Pneuron can be used to query, modify and merge existing apps, databases and data services. It can be used to build any type of business application, from supply chain intelligence to accelerating conversions, mergers and acquisitions, and it is particularly well suited to human-intensive processes that span highly diverse and distributed systems, such as risk management, anti-money laundering investigations (AML/BSA compliance), network security, and customer relationship management.

Pneuron Corporation is a privately held, venture-backed business orchestration software vendor with over 50 employees. In 2011, the company announced a $2 million Series A round of funding from Osage Venture Partners. In 2013, it raised a $6 million Series B round led by Safeguard Scientifics and Osage Venture Partners. In 2015, Pneuron announced a $5 million Series B-1 round of funding from Safeguard Scientifics, Osage Venture Partners and Scott Group LLC. This latest round brings the company’s total funding to US$13 million.

The founders were business people frustrated by the time, cost, and complexity of solving business problems in which multiple, diverse sources of value needed to be brought together quickly and flexibly. They worked in various parts of the financial services industry, focused on the investigation of money laundering alerts. That highly manual process requires gathering targeted data from both internal bank systems and various external data services. They recognized they needed an effective and efficient way to automate the data gathering, integration, and processing across all the different types of systems involved, and the solution had to be agile enough to keep pace with constantly changing regulations. The challenges of time to value, cost, and agility across their various business needs eventually came down to “there’s got to be a better way” and “someone’s going to solve this.”

Other use cases in financial services well suited to a solution like Pneuron are those that require the acquisition of diverse data elements from both internal and external systems: assessing customer risk during the new customer onboarding process as well as when a customer’s status changes (like job title); evaluating the presence of known bad actors in your customer records based on publicly available data (e.g., Panama Papers). Typical use cases also extend to insurance, supply chain, healthcare, and many other industries.

While Pneuron can dynamically scale to handle analytical problems of reasonable size, it really excels at helping firms realize the value of the new insights generated by Big Data analytics. Typically, Pneuron orchestrates the use of Big Data platforms to derive new insight, then directly orchestrates downstream actions across diverse platforms to enrich that insight into an actionable form, and finally delivers the results to the variety of applications and staff that can take action.

An example would be taking social media monitoring and sentiment analysis a step further. Traditionally, sentiment is reported in aggregate or reviewed by humans; with hundreds of thousands of mentions a month, human review becomes impractical. Pneuron can leverage Big Data infrastructures so that, as sentiment derivations are completed by the Big Data platform, Pneuron sends those results downstream, routing reports and alerts to the relevant Brand and Product Managers along with recent sales, sentiment, and other value-added analyses. This increases their awareness of social media feedback and shortens their response cycle time.
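A minimal sketch of the downstream routing step described above might look like the following. The product names, manager identifiers, threshold, and sentiment figures are all illustrative assumptions; a real deployment would consume results streaming from the Big Data platform rather than a hard-coded list.

```python
# Hypothetical sentiment results as they might arrive, already derived,
# from a Big Data platform; all names and numbers are assumptions.
results = [
    {"product": "WidgetX", "mentions": 1800, "sentiment": -0.62},
    {"product": "WidgetY", "mentions": 950,  "sentiment": 0.41},
    {"product": "WidgetZ", "mentions": 4200, "sentiment": -0.15},
]

# Routing table mapping each product to its responsible manager.
ROUTING = {"WidgetX": "brand_mgr_x", "WidgetY": "brand_mgr_y",
           "WidgetZ": "brand_mgr_z"}
ALERT_THRESHOLD = -0.5  # alert when sentiment drops to or below this

def route(results):
    """Turn completed sentiment derivations into per-manager alerts."""
    alerts = []
    for r in results:
        if r["sentiment"] <= ALERT_THRESHOLD:
            alerts.append((ROUTING[r["product"]],
                           f"Negative sentiment {r['sentiment']:+.2f} "
                           f"across {r['mentions']} mentions "
                           f"of {r['product']}"))
    return alerts

for manager, message in route(results):
    print(manager, "->", message)
```

Here only WidgetX crosses the alert threshold, so a single alert is routed to its manager; the aggregate figures for the other products would still flow into routine reports.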

Another way Pneuron takes the burden off human staff is an e-Audit solution. In conventional audits, a team of auditors swoops in each January, asks for reams of data from already overloaded IT staff, then performs piles of analyses to verify that proper controls are in place and properly executed. The problem is that the process is so time-intensive that findings are generated up to 12 months after an incident took place. With Pneuron, controls testing can be scheduled to run periodically across a diverse set of source systems: extracting data, performing analyses, checking results against configured thresholds, preparing reports, and automatically sending alerts back to auditors, management, regulators and anyone else who needs them. These runs can be spread across the year so that detection happens as close to the time of the incident as possible.
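One scheduled controls-testing run of the kind described above can be sketched as follows. The control (duplicate-payment detection), the materiality threshold, and the sample records are hypothetical; in practice the payment data would be extracted from the source systems at run time.

```python
from datetime import date

# Illustrative control: duplicate payments above a materiality
# threshold. The threshold and sample records are assumptions.
THRESHOLD = 10_000.0

payments = [
    {"vendor": "V001", "invoice": "INV-9", "amount": 12_500.0},
    {"vendor": "V001", "invoice": "INV-9", "amount": 12_500.0},  # duplicate
    {"vendor": "V002", "invoice": "INV-3", "amount": 800.0},
]

def run_control(payments, run_date):
    """One scheduled execution: test records and emit findings."""
    seen, findings = set(), []
    for p in payments:
        key = (p["vendor"], p["invoice"], p["amount"])
        # Flag a repeat of an already-seen payment above the threshold.
        if key in seen and p["amount"] >= THRESHOLD:
            findings.append({"run": run_date.isoformat(),
                             "control": "duplicate-payment",
                             "detail": key})
        seen.add(key)
    return findings

findings = run_control(payments, date(2017, 1, 31))
print(findings)
```

Each finding carries the run date, so when runs are scheduled throughout the year the gap between incident and detection shrinks from months to the length of one scheduling interval.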

In logistics, Pneuron provides a smarter supply chain solution, offering highly current “situational awareness” across the diverse components of a highly outsourced supply chain. On both a scheduled and an event-driven basis, Pneuron executes configured workflows that pull up-to-the-minute inventory information (quantities, models) from both internal and partner/distributor systems, enabling an accurate valuation calculation. Further, by analyzing trends and forecasted consumption, Pneuron triggers replenishment orders to the appropriate internal and distributor systems to avoid stockouts.
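The valuation and replenishment logic described above can be sketched under stated assumptions. The model names, unit prices, forecast figures, and two-week lead time are hypothetical; the inventory snapshots stand in for pulls from internal and distributor systems.

```python
# Inventory snapshots from internal and distributor systems; all
# item names, prices, and forecast figures are illustrative.
internal = {"model_a": 120, "model_b": 40}
distributor = {"model_a": 60, "model_b": 15}
unit_price = {"model_a": 25.0, "model_b": 90.0}
weekly_forecast = {"model_a": 50, "model_b": 30}  # forecast consumption
LEAD_TIME_WEEKS = 2  # assumed replenishment lead time

def combine(*snapshots):
    """Merge per-system inventory snapshots into total stock on hand."""
    totals = {}
    for snap in snapshots:
        for model, qty in snap.items():
            totals[model] = totals.get(model, 0) + qty
    return totals

stock = combine(internal, distributor)
valuation = sum(qty * unit_price[m] for m, qty in stock.items())

# Trigger replenishment when stock would not cover forecasted
# consumption over the lead time; order just enough to cover it.
orders = {m: weekly_forecast[m] * LEAD_TIME_WEEKS - qty
          for m, qty in stock.items()
          if qty < weekly_forecast[m] * LEAD_TIME_WEEKS}

print(valuation, orders)  # 9450.0 {'model_b': 5}
```

With these figures, model_a’s combined stock comfortably covers two weeks of demand, while model_b falls five units short, so only model_b triggers a replenishment order.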

This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.