Wikipedia tells us that “data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted or where it is physically located. Unlike the traditional extract, transform, load (ETL) process, the data remains in place, and real-time access is given to the source system for the data, thus reducing the risk of data errors and reducing the workload of moving data around that may never be used.”
Reports can be generated from data spread across multiple platforms in the enterprise, from the growing data lakes within the operator, or from a cloud application currently in use. The abstraction layer understands where the data is and magically removes that complexity from the user, developer and tool. “Off the shelf” software presents a veneer of simplicity to the end user, who can happily create reports ‘on the fly’, blissfully unaware of the integration, processing and retrieval complexity behind the scenes! Fantastic!
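To make that abstraction concrete, here is a minimal sketch in Python. It is purely illustrative; the VirtualLayer class and its register_source/query methods are hypothetical names of my own, not any vendor’s API. The point it demonstrates is the one above: consumers ask for a logical dataset by name, while the layer resolves format and location at request time and leaves the data in place, rather than copying it up front as ETL would.

```python
import csv
import io
from typing import Callable, Dict, Iterable, List


class VirtualLayer:
    """Hypothetical sketch: maps logical dataset names to source-specific fetchers."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], Iterable[dict]]] = {}

    def register_source(self, name: str, fetch: Callable[[], Iterable[dict]]) -> None:
        # The layer, not the report writer, knows where and how the data lives.
        self._sources[name] = fetch

    def query(self, name: str, where: Callable[[dict], bool] = lambda row: True) -> List[dict]:
        # Data stays in place; rows are pulled from the source system at query time.
        return [row for row in self._sources[name]() if where(row)]


# Two very different "systems of record": a CSV extract and an in-memory cloud app.
CSV_BILLING = "account,amount\nA1,120\nA2,75\n"

def fetch_billing() -> Iterable[dict]:
    return csv.DictReader(io.StringIO(CSV_BILLING))

def fetch_crm() -> Iterable[dict]:
    return [{"account": "A1", "segment": "enterprise"},
            {"account": "A2", "segment": "consumer"}]


layer = VirtualLayer()
layer.register_source("billing", fetch_billing)
layer.register_source("crm", fetch_crm)

# The report writer sees only logical names, never formats or locations.
print(layer.query("billing", where=lambda r: int(r["amount"]) > 100))
print(layer.query("crm"))
```

A real product would add query federation, caching and security on top, but the design choice is the same: one logical interface in front of many physical sources.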
Most enterprise architectures are struggling to manage the growing complexity of connected systems and are burdened by ever-increasing volumes of traffic. BI architects juggle a myriad of challenges daily, such as data governance, security, maintenance, data management, performance, regulatory compliance and operational cost, while battling to implement changes within reasonable timelines for a constantly mutating business. Data virtualization aims to simplify and consolidate the data extraction function. So why doesn’t every enterprise have one?!
One operator explained that, in practice, it is not possible to have one overarching data virtualization layer. The capability provided is simply not suited to every use case, end-user need and application. Like an unwritten law of physics, their belief is that the more you customize in response to a particular business need, the more you have to fragment your architecture and solution. This fragmentation runs contrary to the data management simplification, optimization and consolidation concept of this abstraction layer.
The BI and IT architects attending this ETIS work stream shared experiences based on varying degrees of progress along the rocky road of enterprise transformation to meet the current demands of the business. Again, I observed with surprise the apparent lack of strategic vision with respect to these architectural transformations. It appeared that these architects are struggling to keep abreast of current business needs, a battle complicated by side units branching off to do their own thing in order to satisfy their particular data needs. The idea of designing and implementing an architecture dimensioned and engineered in anticipation of the explosive volumes and performance expectations of the digital age is simply beyond their line of sight. But perhaps we should apply Darwinian survival theory to BI evolution: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.” In other words, the evolution we see today proceeds at the natural pace of change necessary for these BI systems to provide value now, rather than as a clever anticipation of what may be needed down the line.