A recent meeting with a client subject matter expert highlighted for me just how much change has occurred in the arenas of application development and data warehousing over the past 30 years. During a discussion about how to best develop an “information blueprint” (a graphical depiction of the organization’s high-level analytic requirements), a debate ensued on whether we should start by defining business processes or the key performance indicators (KPIs) that the client wants to use for measuring success. The client advocated that his organization’s analytics requirements should be determined by modeling the required data attributes from “data flows” between business processes. Even though I was a very strong advocate of that approach earlier in my career, I found myself suggesting that it’s neither viable nor necessary for two very important reasons:
1. Prevalent adoption of ERP and industry-specific applications
The enterprise process/data modeling approach was necessary when IT departments were developing custom applications. If you didn’t understand your organization’s business processes, as well as the data required to support those processes, you had no chance of developing a suite of applications to satisfy business requirements. However, organizations have been implementing enterprise applications such as SAP, Oracle Financials, Epic, and Cerner since the mid-to-late 1990s. To a significant degree, an organization’s business processes are tailored to the requirements of the purchased enterprise application during the implementation. Modeling all business processes and their associated data flows is thus no longer necessary.
2. The overarching business problem is not a lack of data, but rather how to implement the KPIs the executive leadership team needs to measure success
Organizations are no longer struggling with data acquisition. The enterprise application suite provides a vast amount of “raw” materials that can be used for reporting and analytics, supplemented by a flood of unstructured data obtained from a wide variety of sources and stored in “data lakes.” When we did model the organization’s business processes and data flows, we were primarily concerned with capturing all the data needed to satisfy the requirements of custom-developed applications in a relational database. That’s simply no longer a problem. The problem now is how to quickly provide information stakeholders with meaningful dashboards derived from an ocean of data.
I believe that rather than “boiling the ocean” and developing a technically elegant enterprise business process model and associated data model, organizations should adopt a top-down approach to developing an information blueprint. This approach allows the organization to quickly identify the KPIs required to measure operational success, how these KPIs should be analyzed or “filtered,” and where the required data resides. The key steps are to:
Develop an inventory of your KPIs—This doesn’t have to be a daunting task. If it takes more than a few weeks to analyze your current reports and dashboards and interview stakeholders to identify and prioritize your KPIs, you’re probably over-engineering the process.
Identify the “business filters” associated with these KPIs—Determine how your stakeholders wish to “slice and dice” their KPIs by business filters, such as an Organization Hierarchy, Product, or Customer. It’s important to note that these filters directly equate to the organization’s “master data,” and should be managed in accordance with a well-defined data governance strategy.
Determine candidate data sources for these KPIs and business filters—Identify possible sources for this data. This doesn’t have to be a comprehensive source-to-target data mapping exercise, but rather ensuring you have confidence that your current data sources can populate your future analytics framework.
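The three steps above boil down to building a simple inventory that maps each prioritized KPI to its business filters and candidate data sources. As a minimal sketch of what that deliverable might look like in structured form (the KPI names, systems, and field names below are hypothetical examples, not from the original post):

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """One entry in the information blueprint inventory."""
    name: str
    priority: int  # 1 = highest, from stakeholder interviews
    business_filters: list[str] = field(default_factory=list)   # "slice and dice" dimensions
    candidate_sources: list[str] = field(default_factory=list)  # systems believed to hold the data

def unsourced_kpis(blueprint: list[KPI]) -> list[str]:
    """Flag KPIs with no candidate data source -- gaps to resolve before design begins."""
    return [k.name for k in blueprint if not k.candidate_sources]

# Hypothetical blueprint for a healthcare provider:
blueprint = [
    KPI("30-Day Readmission Rate", priority=1,
        business_filters=["Facility", "Service Line", "Payer"],
        candidate_sources=["Epic"]),
    KPI("Days in Accounts Receivable", priority=2,
        business_filters=["Facility", "Payer"],
        candidate_sources=["Oracle Financials"]),
    KPI("Patient Satisfaction Score", priority=3,
        business_filters=["Facility", "Service Line"],
        candidate_sources=[]),  # no confirmed source yet -- a gap to close
]

print(unsourced_kpis(blueprint))  # ['Patient Satisfaction Score']
```

Note that the business filters double as the organization’s master data entities, so this inventory also surfaces which dimensions your data governance strategy must cover.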
Much has changed in the last few decades from an application development and analytics perspective. We no longer need to model every business process and every data flow to ensure we’re capturing all of the data we need. However, we do need a clear understanding of the information our stakeholders need to manage the business and what data is needed to source the dashboards they require. Of course, we also need to ensure the quality of the data we’re providing them, but that’s a subject for another blog post.
Client Solution Architect
John Walton is a CTG Client Solution Architect and consulting professional with more than 35 years of IT experience spanning multiple disciplines and industries. He has more than 20 years of experience leading data warehousing, business intelligence, and data governance engagements. He has extensive experience working with a broad range of healthcare and life sciences organizations, including IDNs, national healthcare payers, regional HMOs, a global pharmaceutical company, academic medical centers, and community and pediatric hospitals.