Burwood Group


Revamp Your Data Strategy: Empower Your Business with Data Modernization

By now, most organizations recognize the critical importance of data to their daily operations, their competitive differentiation, and their strategic direction. Data permeates every layer of the business, informs every critical decision, and offers insight into customer behavior to a degree that seemed the stuff of science fiction just a few years ago. Or, at least, it should. The reality is that while modern organizations are surrounded by a wealth of data, the ability to translate that data into information, knowledge, and wisdom is sorely lacking, and much of it remains under-utilized.

We are currently at a great inflection point in the data landscape. Over the last several years, the amount of data humanity generates in daily life has grown exponentially; one estimate from Statista projects that 120 zettabytes of data will be generated in 2023. That’s 120,000,000,000 terabytes. Every minute of every day, people generate virtually endless streams of data through their devices. Why, then, are so many organizations unable to harness this wealth of data effectively, when it seems like there’s more of it than anyone could ever want or need? In short, why are companies today still struggling with data intelligence? Put plainly, it’s because organizational data intelligence has always been complex, and the modern data landscape has made it exponentially more so.

To thrive in the contemporary data landscape, businesses must reassess how they manage analytical data. For years, companies have operated under the belief that a centralized team and architecture were the most effective way to fulfill their data requirements: a single specialized team and a centralized data stack serving every request. That paradigm must be reconsidered in light of the unprecedented volume and variety of data now being generated. To excel in this new era, organizations must prioritize adopting a framework that emphasizes data intelligence.

Traditional Data Warehouse Model

Consider the traditional data warehouse model as an example. Data warehouses, along with their specialized data models, cater to specific reporting needs. As a result, they undergo a thorough requirement-gathering and design process before deployment. This can take anywhere from weeks to months, delaying the availability of data for business purposes. There are also often inconsistencies between the engineering team’s initial understanding of the requirements and the actual business need. What’s more, when a request comes in to update the warehouse to support a new data source, or to build a model it wasn’t initially designed for, a version of this entire process begins again. All the while, valuable data sits unused. Add the constant data access requests and system support tickets the central team must serve, and that is a lot of varied responsibility for one team. A centralized solution, while well-intentioned, creates a major bottleneck between the data and the valuable insights it can provide.

Self-Service Data Intelligence

The solution is to rethink the traditional centralized data warehouse approach and enable a self-service, democratized data intelligence culture. By shifting the central IT team’s responsibilities toward an enabling function and empowering business users to engage with the data directly, scale is more easily achieved, and time-to-insight can be drastically shortened. One recent conceptual framework that enables this self-service approach is the data mesh, proposed by Zhamak Dehghani. While entire books can be (and have been) written on this approach, a key aspect of a data mesh is the decentralization, rather than centralization, of data resources. The idea is to create discoverable data products that are aligned to, and managed by, domain-specific data product teams within an organization. Users throughout the business can access these data products and even build and publish new data products from existing ones. The subject matter experts in each business domain provide the know-how about the data and its usage when designing data products, while the platform admin team handles the underlying cloud infrastructure, data engineering, and enforcement of data governance via computational tools (services like Google Cloud Dataplex and Azure Purview).
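To make the data product idea more concrete, here is a minimal Python sketch of the kind of metadata a domain-owned data product might carry. Every name and field below is an illustrative assumption, not the schema of any particular platform; tools like Google Cloud Dataplex and Azure Purview define their own catalog formats for this information.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A discoverable, domain-owned data product in a data mesh.

    All fields are illustrative; real catalog tools define their own schemas.
    """
    name: str                 # e.g. "sales.orders_daily_summary"
    domain: str               # owning business domain
    owner_team: str           # domain team accountable for quality
    output_location: str      # where consumers read the data
    schema_version: str       # published contract consumers rely on
    upstream_products: list[str] = field(default_factory=list)

# A downstream team can build and publish a new product from existing ones.
customer_ltv = DataProduct(
    name="marketing.customer_ltv",
    domain="marketing",
    owner_team="marketing-analytics",
    output_location="gs://mesh-marketing/customer_ltv/",
    schema_version="1.0.0",
    upstream_products=["sales.orders_daily_summary", "crm.customers"],
)
```

In a real mesh, this metadata would be published to a shared catalog so that users throughout the business can discover and consume the product without routing every request through a central team.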

Planning a Self-Service Data Intelligence Platform

When planning and implementing a self-service data intelligence platform, it is important to identify a small number of pilot use cases that can comprise the MVP (minimum viable product) of the solution. As with other cloud initiatives, building a cloud-native data platform is complex, and it is important to demonstrate value quickly to gain early buy-in and drive initial adoption. It is critical at this stage to work closely with your initial stakeholders to translate their needs into valuable data products. This earns the trust and support of your early adopters, who can then act as champions for the new platform approach throughout the organization. After the MVP is delivered, additional features are added to the platform in an iterative fashion. DevOps and automation play a key role here: many aspects of the platform should be templated with infrastructure as code and re-used in a service catalog approach. For example, a templated pipeline for moving data from on-premises systems and performing a standard suite of validation checks can be developed once and re-used across multiple domain teams or data products. Deploying these resources through an automation harness drastically shortens development and deployment times for future iterations of the platform.
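As a sketch of that template-and-reuse pattern, the hypothetical Python function below stamps out the same ingestion-and-validation pipeline specification for multiple domains. The function, check names, and paths are all assumptions for illustration; in practice the output would feed an infrastructure-as-code tool or deployment manifest rather than remain a Python dictionary.

```python
# Standard validation suite applied to every pipeline the template produces.
# Check names are illustrative, not a specific tool's built-in checks.
STANDARD_CHECKS = ["row_count_nonzero", "schema_matches_contract", "no_null_keys"]

def ingestion_pipeline(domain: str, source_table: str, target_bucket: str) -> dict:
    """Return a pipeline spec: move on-prem data to the cloud, then validate."""
    return {
        "name": f"{domain}-{source_table}-ingest",
        "source": {"type": "on_prem_jdbc", "table": source_table},
        "target": {"type": "object_storage",
                   "path": f"{target_bucket}/{source_table}/"},
        "validation": STANDARD_CHECKS,   # same suite for every domain
        "owner": f"{domain}-data-team",
    }

# Two domain teams reuse the same template with their own parameters.
pipelines = [
    ingestion_pipeline("sales", "orders", "gs://mesh-sales"),
    ingestion_pipeline("finance", "invoices", "gs://mesh-finance"),
]
```

Because every pipeline produced this way carries the same validation suite, quality standards are applied consistently without each domain team rebuilding them from scratch.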

A New Approach

A self-service approach to data intelligence will likely change how pre-existing processes are structured. For example, under a legacy approach, data governance may have been the sole responsibility of central IT. A self-service, decentralized data platform requires a shared-responsibility approach to data governance: the domain experts dictate governance and access standards based on their expert knowledge of their data, while central IT encodes those standards into a tool, enforces them for platform users, and logs data access activity for audit purposes. This is a major transformative initiative for an organization operating on a traditional centralized monolith, from both a technology and an organizational change management perspective. An experienced professional data services partner like Burwood is invaluable when planning and implementing a cloud-native solution to enable self-service data intelligence.
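A minimal sketch of what that shared responsibility might look like in code, assuming a hypothetical policy table and role names: the domain teams supply the access standards, while the central platform encodes them once, enforces them, and logs every decision for audit.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_access_audit")

# Access standards are dictated by the domain experts; central IT encodes
# and enforces them. Product and role names here are illustrative.
DOMAIN_POLICIES = {
    "sales.orders": {"allowed_roles": {"sales-analyst", "finance-analyst"}},
    "hr.payroll": {"allowed_roles": {"hr-admin"}},
}

def check_access(user: str, role: str, product: str) -> bool:
    """Enforce a domain-defined policy and record the decision for audit."""
    policy = DOMAIN_POLICIES.get(product, {"allowed_roles": set()})
    allowed = role in policy["allowed_roles"]
    audit_log.info(
        "%s | user=%s role=%s product=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, product, allowed,
    )
    return allowed

check_access("avery", "sales-analyst", "sales.orders")   # allowed, logged
check_access("avery", "sales-analyst", "hr.payroll")     # denied, logged
```

The key point is the division of labor: the policy content comes from the domains, while the enforcement and audit mechanics live in the platform.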

While we are currently in the middle of an explosion of data availability, the value of that data is harder to harness than ever before. To do so effectively, we must challenge the long-standing assumption that an organization’s intelligence stack should be built on a centralized system and fully managed by a centralized team. Designing and implementing a data platform that challenges this paradigm will involve new technologies and new ways of addressing organizational data responsibilities like governance and sharing. It is both a technologically and culturally significant change, but a worthwhile endeavor.

A self-service, decentralized approach to data intelligence is key to unlocking the potential of the wealth of data that is available to organizations today.


June 2, 2023

