Managing Data One Byte at a Time

Increasing connectivity, driven by the Internet and the ease with which we now collect data, has changed the pace and relationships of business. To a great extent, purely local business no longer exists; all commerce and trade are tied together by information, a fact made plain whenever the world's economy stumbles.

Business is, quite simply, acting like a single organism in which each element affects the others. Yet we still lack a clear understanding of how to manage data within business processes. Global supply chains, in particular, are at risk because companies misjudge which data is actually meaningful.

To act effectively we need to know far more than we used to. We need three things: more data (for example, when a specific container clears security at the Los Angeles seaport); more reliable data (secure, clear, and non-redundant information); and better systems for processing that data (a combination of good software and hardware).


Years ago, we lived quite comfortably with megabytes of information, and were rather astounded when we crept up to gigabytes. Today, many of us reside in a flush of data numbered in terabytes.

The world of data consists of more than terabytes. There are petabytes of storage and, on the processing side, petaflops, the measure of the supercomputers built by IBM and Cray Inc. These data dynamos run more than one quadrillion calculations per second.

Such scale may be difficult to comprehend, but this staggering connectivity to myriad processes and information presents a tremendous business upside. Connectivity in big bytes is a competitive differentiator.

Google, a great data hog, uses tens of thousands of servers to digest 20 petabytes of data daily. That’s a lot of searching and video playing. More and more businesses are consuming this level of data, whether they like it or not.

The capacity to gather information, and act on it, matters at smaller scales too, notably in transportation and logistics. A truck used to simply fill up on gas, oil, and air. Now it also fills up on data, much of which connects the driver's work and the vehicle's behavior to broader supply and demand flows. Today, a truck is nothing less than a moving data warehouse.

This flood of information will only continue. Beyond petabytes, the world of exabytes, or one quintillion bytes, now looms. Five exabytes, according to the UC Berkeley School of Information, would contain all the words ever spoken by people (including presidential candidates).
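The ladder of units running through this piece, from megabytes up to exabytes, can be made concrete with quick arithmetic. A minimal sketch, assuming the decimal (SI) prefixes in which these figures are usually quoted:

```python
# Decimal (SI) byte units; each step is 1,000x the previous one.
UNITS = ["byte", "kilobyte", "megabyte", "gigabyte",
         "terabyte", "petabyte", "exabyte"]

for power, name in enumerate(UNITS):
    print(f"1 {name} = 10^{3 * power} bytes = {10 ** (3 * power):,} bytes")

# Google's reported daily intake: 20 petabytes.
daily_bytes = 20 * 10 ** (3 * UNITS.index("petabyte"))
print(f"20 petabytes = {daily_bytes:,} bytes per day")
```

Printed out this way, the jump from terabyte to exabyte is a millionfold, which is why a word like "exaflood" was felt to be necessary.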

The very notion of exabytes is daunting, and a new word, exaflood, has been coined simply to describe the coming data torrent. Collecting, processing, and saving these raging rivers of data is, in sum, what connectivity now demands.


But the question remains whether we are prepared to manage this connectivity. An ounce of prevention is worth a pound of cure, but are we anticipating the need for new levels of data management?

The downside of connectivity is that we are obligated to use these vast amounts of data, often at great cost in time and money. Data management carries three responsibilities: making sure the data is necessary, managing it expeditiously, and using it wisely.

This is a tall order, because regardless of how well we collect data, garbage in is still garbage out. It is all a matter of accessing more data intelligently. And that is no small matter.
