The Ugly Truth: Your Procurement Data is Holding You Back

by Tom Beaty, Founder & CEO of Insight Sourcing Group

Organizations across the world tend to suffer from “bad” procurement data. Given the extraordinary investments companies have made in ERP systems and procurement technologies, poor procurement data is stunningly common.

Basic spend visibility is “table stakes” for any procurement organization, as you cannot manage what you cannot see. After all, spend visibility is the gateway to opportunity identification, compliance management, vendor management, spend trending, savings tracking, and a host of other levers for value creation and sustainability.

One telling indicator of this fact is that many private equity firms are starting to use spend data quality within a portfolio company as a measure of the quality of the procurement organization itself. In other words, an organization without a handle on its data, almost by definition, cannot execute at a high level.

Even if such an organization were world-class, how would you prove it without clear data? As a result, many leading private equity firms are buying spend visibility solutions for the companies they own.

Spend analytics solutions have been around for many years, yet some companies that own them still struggle. Many waste time and energy manipulating data when they have much higher-value opportunities to pursue.

Traditional spend analytics systems suffer from an over-reliance on automated categorization and a lack of procurement expertise. They would work perfectly in a perfect world, but they stumble in the messy reality that exists in most companies. They are undone by noncompliance, data entry errors, data fragmented across multiple systems, and death by a thousand other cuts.

The solution, for better or for worse, seems to require companies to take their data outside of their native systems and do the “pick and shovel” work to clean it up. First, companies need to normalize vendor names to eliminate duplicates and consolidate them to a standard naming convention for the purposes of the spend analysis.
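
To make that first step concrete, here is a minimal Python sketch of vendor normalization, assuming the spend data has already been extracted into a list of raw vendor strings. The cleanup rules, similarity threshold, and vendor names are illustrative assumptions, not a description of any particular tool.

```python
# Illustrative vendor-normalization sketch: collapse raw vendor strings to a
# single standard name so duplicates roll up correctly in the spend analysis.
import re
from difflib import SequenceMatcher

# Common legal suffixes to strip before comparing names (illustrative list).
LEGAL_SUFFIXES = re.compile(r"\b(INC|INCORPORATED|CORP|CORPORATION|CO|COMPANY|LLC|LTD)\b")

def clean(raw: str) -> str:
    """Uppercase, drop punctuation and legal suffixes, squeeze whitespace."""
    name = re.sub(r"[^A-Z0-9 ]", " ", raw.upper())
    name = LEGAL_SUFFIXES.sub(" ", name)
    return re.sub(r"\s+", " ", name).strip()

def normalize(raw_names, threshold=0.90):
    """Map each raw vendor string to a canonical name via fuzzy matching."""
    canonical, mapping = [], {}
    for raw in raw_names:
        cleaned = clean(raw)
        match = next((c for c in canonical
                      if SequenceMatcher(None, cleaned, c).ratio() >= threshold), None)
        if match is None:
            canonical.append(cleaned)
            match = cleaned
        mapping[raw] = match
    return mapping

if __name__ == "__main__":
    sample = ["Acme Corp.", "ACME Corporation", "Acme, Inc.", "Globex LLC"]
    for raw, std in normalize(sample).items():
        print(f"{raw!r:<22} -> {std}")
```

In practice the fuzzy threshold only gets you so far; an expert review of the resulting vendor list is still what separates a usable spend cube from a misleading one.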

Companies should then move away from complex organizational schemas such as UNSPSC, which was designed for other purposes, and apply a categorization structure that reflects the way supply markets are organized. For most companies, this typically requires only 100 to 150 subcategories for indirect spend. Compare that to the 200 to 1,000 categories some companies struggle through in order to structure their spend data. This complex web of fractional categories is unwieldy and, ultimately, ineffective for core procurement requirements.
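
As a rough illustration of what a market-facing structure can look like, the sketch below models a two-level taxonomy (category and subcategory) plus a simple vendor-to-subcategory rule table. The categories, vendor names, and assignments are invented for illustration only, not a recommended taxonomy.

```python
# Illustrative market-facing taxonomy: a small two-level structure
# (category -> subcategories) instead of thousands of generic codes.
TAXONOMY = {
    "IT": ["Hardware", "Software & SaaS", "Telecom"],
    "Facilities": ["Janitorial", "Security Services", "HVAC & Maintenance"],
    "Professional Services": ["Consulting", "Legal", "Staffing"],
    "Logistics": ["Parcel", "LTL Freight", "Warehousing"],
}

# Rule table mapping a normalized vendor name to one or more subcategories.
VENDOR_RULES = {
    "ACME": ["Janitorial"],
    "GLOBEX": ["Software & SaaS"],
}

def categorize(vendor: str) -> list[str]:
    """Return subcategories for a normalized vendor; flag unknowns for review."""
    return VENDOR_RULES.get(vendor, ["Unclassified - needs review"])
```

The point of the small structure is not elegance for its own sake: a buyer can actually own 100 to 150 subcategories, keep the rules current, and trust the roll-ups that come out of them.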

Once the data is well organized, companies should create a “back map” connecting the new, ideal data to the old, disorganized data. For example, each original vendor name should be linked to its new, standard version and then paired with the appropriate category or categories so that the data can be refreshed easily in the future. Finally, companies should ask their most advanced sourcing and procurement professionals to identify the analyses and views of the data that drive the greatest value. These should be embedded into an analytical tool so that each time the data is refreshed, these insights and views can be generated automatically, or at least quickly.
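
A back map can be as simple as a lookup table. The hypothetical sketch below links each original vendor string to its normalized name and subcategories, then reuses that table to refresh a new spend extract and roll it up by subcategory, one example of an embedded view. The function names and sample figures are assumptions made for illustration.

```python
# Minimal "back map" sketch: tie every original vendor string to its normalized
# name and subcategories so a fresh data extract can be refreshed without
# repeating the manual cleanup.
from collections import defaultdict

def build_back_map(raw_to_std, vendor_rules):
    """Link each original vendor string to (normalized name, subcategories)."""
    return {raw: (std, vendor_rules.get(std, ["Unclassified"]))
            for raw, std in raw_to_std.items()}

def refresh(spend_lines, back_map):
    """Apply the back map to a new extract and roll spend up by subcategory."""
    totals = defaultdict(float)
    for raw_vendor, amount in spend_lines:
        _, subcats = back_map.get(raw_vendor, ("UNKNOWN", ["Unclassified"]))
        for subcat in subcats:
            totals[subcat] += amount / len(subcats)  # split evenly if multi-category
    return dict(totals)

if __name__ == "__main__":
    back_map = build_back_map(
        {"Acme Corp.": "ACME", "Globex LLC": "GLOBEX"},
        {"ACME": ["Janitorial"], "GLOBEX": ["Software & SaaS"]},
    )
    lines = [("Acme Corp.", 12_500.00), ("Globex LLC", 48_000.00)]
    print(refresh(lines, back_map))  # spend by subcategory for the new period
```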

Many of the current systems would work perfectly in a perfect world. Unfortunately, the procurement environment is riddled with nuances and exceptions. A simplified approach with a heavy dose of expertise is required to free procurement professionals to do what they do best.