Happy 2018! December was a busy month – as I’m sure many of you, like me, experienced. We started the month with a trip to the Data Governance Winter Conference in Delray Beach, Florida. It is a great place to be in December, not only for the weather but also for the great conversations with our peers. I wanted to start the New Year with a brief recap of those conversations, and to relay some key themes that I took away for those of you unable to attend this year.
One of the overriding themes was the evolving definition of “data governance.” It appears that customers (and some vendors) are still very much at quite different levels of understanding and engagement with the idea of data governance. While there is some core agreement on what data governance might encompass (and that is a very broad scope), everyone talking about data governance seems to mean something different – based on what they focus on and prioritize.
At its core, the top-down approach to Data Governance is about securing executive buy-in and establishing organizational structure and policies for data governance based on clearly defined mission statements. In this regard, everyone starts from common ground: reducing cost, increasing ROI, improving CRM, and so on. But most companies, and especially government agencies, have missions well beyond those shared goals: the increasing need to target specific audiences, with specific services, for specific periods of time, on specific types of platforms, in certain locations. This extensive segmentation of data consumption is already one of the fundamentally new processes making the need for flexible data governance urgent.
The bottom-up approach starts with data quality and cleansing. It focuses much more on data management and on systems for ensuring the trustworthiness and stability of the organization’s data lake, and builds out initiatives, business rules, frameworks and policies from there. This approach has been the standard for 30 years; however, data influx rates, the increasingly rapid addition of new types of data, and, perhaps most importantly, the integration of varied types of access with complex context association are all becoming difficult to manage with such traditional data practices.
David Loshin and a number of others spoke specifically about a “middle-out” approach to Data Governance. Organizations often start from the bottom up, or the top down, and then lose sight of where to go next and how to stitch the two approaches together. How do you get policy and business rules to meet reference data and data integrity management without ending up in confusion?
At TopQuadrant we believe in a top-down, bottom-up and middle-out approach to Data Governance, as we outlined in the talk at the conference by our CTO, Ralph Hodgson: “Delivering Business Value by Putting Data Governance to Work.” You can also read more about this in our latest press release.
There were also many challenges noted, even for those who seem well on their way with a data governance program. These include the continual introduction of new systems and types of data, the rapidly increasing scale of enterprise data that must be used and managed, and frequent changes in organizational structure. These challenges are constantly driven and amplified by rapidly changing goals and opportunities, and by the dynamic situations organizations must cope with (mergers, changing markets, interruptions in service or supply lines, etc.). With cloud, multi-cloud, sensors, mobile data, grid systems and the internet of increasingly smart things all proliferating quickly, linking data across multiple platforms for multiple purposes, all while delivering the right information to the right people at the right time, will be critical for governance and organizational success. Any inflexibility in your current data governance platform will soon become obvious.
In addition to these challenges, the need to automate processes, including data governance processes, also became apparent at this year’s conference. In their comprehensive talk, IBM discussed how data stewards can become a weak link in the process if they are overwhelmed in their efforts. Across the board, the push to automate data governance wherever possible is advancing.
Last but not least, and articulated most precisely and extensively in Malcolm Chisholm’s (First San Francisco Partners) presentation, is the way any kind of compliance definition, large or small (from GDPR or HIPAA to ISO standards), is likely to evolve over time. In the larger instances (he cites GDPR as his example), a continuously growing body of case law builds up around all the specific, contestable terms; in the smaller instances, standards are continually interpreted and revised as they are rolled out, tested, and meet the limits of what they constrain.
As noted earlier, this is all occurring in an environment where the core concepts and features of data governance have not yet coalesced into any agreed-upon standards. Consultants and SMEs in the field seem to be in an active contest to set the terms of its definition. What is clear, though, is that as any kind of structuring system, data governance included, becomes more rigorous and extends its reach across extensively heterogeneous environments, it will become increasingly subject to the need for flexibility, transformability, and rapid adaptation to the sea of ongoing changes beneath what it formalizes. Systems incapable of such flexibility will fail, or be quickly set aside due to expense and the potential for catastrophe.
Learn more about how TopQuadrant’s Enterprise Data Governance (EDG) can help you overcome these challenges with a semantic approach to Data Governance.