How Big is Big Data?

Before becoming too exuberant about this potentially transformative technology, consider some risks.

TRUE TECHNOLOGICAL revolutions are few and far between, and the uneven global economy is crying out for a new one. If the equivalent of 19th- or 20th-century mechanization is too much to expect, then something on the order of post–World War II information technology or the 1990s dot-com boom would probably do just fine.

Over the past couple of years — and with particular gusto in recent months — technologists and economists have seized on a likely candidate: the industrialization of information. For purposes of headlines, strategy discussions and sales brochures, it has been dubbed “big data.”

A May 2011 report by McKinsey Global Institute, the research arm of consulting firm McKinsey & Co. — “Big Data: The Next Frontier for Innovation, Competition and Productivity” — essentially threw down a gauntlet. Rather than trumpeting the volumes of extant data in unfathomable numbers of terabytes (trillions of characters) or petabytes (quadrillions), McKinsey made anecdotal points. For one, “all the world’s music” could be stored on a $600 disk drive. Used “creatively and effectively,” McKinsey said, big data could deliver $300 billion in annual cost reductions and other benefits to the U.S. health care industry and $149 billion in efficiencies to European governments.
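To make those units concrete, a minimal back-of-envelope sketch in Python runs the arithmetic; the one-byte-per-character equivalence and the four-megabyte average song size are illustrative assumptions, not figures from the McKinsey report.

```python
# Back-of-envelope arithmetic for "big data" units.
# Assumes one character is roughly one byte, and decimal (SI) prefixes.
TERABYTE = 10**12   # about a trillion characters
PETABYTE = 10**15   # about a quadrillion characters

# Illustrative assumption only: an average compressed song of ~4 MB.
SONG_BYTES = 4 * 10**6

print(f"1 TB holds roughly {TERABYTE // SONG_BYTES:,} songs")  # ~250,000
print(f"1 PB holds roughly {PETABYTE // SONG_BYTES:,} songs")  # ~250,000,000
```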

Information overload may still be a problem, but now it is perceived by some as a mother lode to be mined and refined, using new database technologies, to create wealth-producing insight and intelligence. Big data “has the potential to transform economies, delivering a new wave of productivity growth and consumer surplus,” said McKinsey.

In January the World Economic Forum weighed in with “Big Data, Big Impact: New Possibilities for International Development” for its annual meeting in Davos, Switzerland. The report pointed to “the potential for channeling these torrents of data” for purposes of economic uplift and emphasized the need for concerted action by “governments, development organizations and companies to ensure that this data helps the individuals and companies that create it.”

The promise may be real and some excitement justified, but there is much work to be done, and attention must be paid to looming risks and challenges. McKinsey listed a few things that should be addressed first, including data privacy, security and intellectual-property issues, as well as organizations’ agility and ability to recruit talent. WEF’s “Global Risks 2012” report lumped cybersecurity and other IT threats together in a cautionary scenario described as “the dark side of connectivity.”

Perhaps the most fundamental issue is execution. Long before “big data” was a catchphrase, the financial services, telecommunications and transportation industries, among others, struggled to identify returns on their investments in vast warehouses of management and customer information. In many cases it took years to develop useful analytics and business intelligence.

So it can hardly be expected that what the WEF calls the emerging data ecosystem — a broad range of technologies, billions of consumers, companies in multiple sectors, and governments — is ready to be harnessed, orchestrated and interconnected.

Even relatively modest or focused programs will be daunting, but they could indicate big data’s ultimate workability. One critical test case will be in financial regulation. Securities regulators in the U.S. have proposed aggregating all transaction data into a consolidated audit trail for market surveillance. Advanced technologies for data collection and on-the-fly analysis have worked on individual exchanges. An all-seeing eye calls for a new, big-data ecosystem.
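As a rough suggestion of what on-the-fly analysis over a consolidated trade stream involves, consider the toy Python sketch below, which flags trades that deviate sharply from a rolling average price. This is an illustration under stated assumptions, not the actual consolidated-audit-trail design; every field name, value and threshold is hypothetical.

```python
from collections import deque

# Hypothetical trade records: (exchange, symbol, price, size).
# A real consolidated audit trail would carry a far richer schema.
trades = [
    ("NYSE",   "XYZ", 100.1, 200),
    ("NASDAQ", "XYZ", 100.2, 150),
    ("NYSE",   "XYZ", 100.0, 300),
    ("BATS",   "XYZ",  90.0, 500),  # an outlier worth flagging
]

WINDOW = 3        # rolling window of recent trades (illustrative)
THRESHOLD = 0.05  # flag moves of more than 5% vs. the rolling mean

recent = deque(maxlen=WINDOW)
for exchange, symbol, price, size in trades:
    if recent:
        mean = sum(recent) / len(recent)
        if abs(price - mean) / mean > THRESHOLD:
            print(f"ALERT {symbol}: {price} on {exchange} "
                  f"deviates from rolling mean {mean:.2f}")
    recent.append(price)
```

Even this trivial check hints at the scale problem: run across every venue and every symbol at once, the surveillance task becomes the kind of big-data ecosystem the proposal implies.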

The U.S. Treasury Department’s Office of Financial Research — created by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 to support the Financial Stability Oversight Council’s mission of monitoring and mitigating systemic risks — is getting down to work on macrosize data collection. The OFR can extract virtually anything it desires from the books and records of financial institutions, but will it have the computing and analytical resources that its mission requires? Will it give regulators what they need to prevent a repeat of the 2008 crisis?

Not all bankers are comfortable with the implications of that ecosystem; some worry that so much data in one place will be at risk of falling into the wrong hands. The sensitivity is justified. According to the Computing Technology Industry Association, 65 percent of companies reporting data thefts or disappearances last year lost confidential corporate financial data.

It is best to lock down security now, lest big data become a big attraction for hackers.

Jeffrey Kutler is editor-in-chief of Risk Professional magazine, published by the Global Association of Risk Professionals.
