When virtualization became mainstream in the early 2000s, we realized we could decouple applications from the physical limitations of the underlying infrastructure. This led to a revolution in how we design and operate applications in the modern enterprise. Fast-forward to 2021, and virtualization is now being applied to data in much the same way.
Today the objective is to infuse data and insights into every aspect of the business and embrace the data-driven mindset of the modern era. The problem is that our data environments are inflexible, and transforming data into a compatible, usable format takes hard work.
Current data environments are often plagued by high costs, multiple copies of large and growing datasets scattered across the enterprise, a variety of complex programming languages, and a never-ending demand from the business for more and more data.
Much like infrastructure virtualization in the early 2000s, virtualization is now being used to decouple data from the underlying complexities. The result is an agile, flexible virtual layer between the consumer of the data and its source. This means no more difficult, time-consuming data transformations, no more endless copies of data across the environment, and one simple way to access all data, no matter where it lives or what format it's in.
Join renowned data analyst, consultant and author Rick van der Lans to learn how the world's leading companies are using Data Virtualization to build virtual data marts, warehouses and lakes in a matter of hours, and why Data Virtualization is essential for any modern data environment.