The use of information as a strategic advantage across industries has created a powerful industry of its own: publishers, data aggregators, providers of statistical and econometric data, market and competitive research firms, and a wide array of cloud services companies. Ranging from global giants to niche specialists, they serve their clients with financial market data, legal case law, medical regulatory data, oil and gas information, and dozens of other niche services.
All of these companies face common challenges. The volume of information they must aggregate has increased exponentially. Since most of this information can be obtained in raw form in the public domain, the perceived value depends on how well the final data service is quality-improved and curated, which in turn demands increasingly sophisticated editorial, semantic interpretation, and data governance capabilities. Finally, the delivery and application of this data needs to be easy for clients to integrate into their own business processes. With the exception of some financial data companies, most of the data aggregation, transformation, and delivery mechanisms used by this industry today are woefully inadequate and manually intensive.
Learn how you can bring the benefits of data virtualization for information and cloud services into your organization. It has become an essential tool to aggregate, virtually integrate, and deliver data services for myriad information offerings at scale. It automates highly manual data collection tasks so they scale, freeing lean editorial staff to focus on value-added work, and it provides an abstraction layer that takes curated data and serves it up in varied formats. Not all data virtualization tools are suited to this industry, however; only those with strong web extraction capabilities, the ability to semantically integrate semi-structured and unstructured information with structured data, strong search and text capabilities, and the ability to deliver hierarchical and semantic data formats as outputs.
Increasingly, information and cloud services companies have been using data virtualization in core information processes and product platforms as described above, as well as in corporate and business intelligence functions:
The leading provider of structured and unstructured data to the energy industry found that it had hundreds of databases segmented by dozens of data types (well, lease, oil production, pipeline, seismic, pricing, etc.), by geography, and by organizational silo. This frustrated customers, who were forced to buy separate data subscriptions when what they really wanted was unified information that could be subscribed to by region or granularly by well or field. The company also wanted to move strategically beyond selling pure data into new derivative products such as analytics and valuation models (e.g., well/field asset valuation). Data virtualization helped achieve both objectives by building an abstraction layer between the physical data stores and the cross-domain, cross-datatype canonical data services that became the new "go-to" layer for product development teams.
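The abstraction-layer pattern can be sketched in a few lines. This is a minimal illustration, not the vendor's implementation: two regional, siloed stores are unified behind a single canonical view that product teams query instead of the physical databases. All table and column names (wells_us, wells_eu, canonical_wells) are hypothetical.

```python
import sqlite3

# Two siloed regional "databases", stood in for here by two tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wells_us (well_id TEXT, region TEXT, daily_bbl REAL);
CREATE TABLE wells_eu (well_id TEXT, region TEXT, daily_bbl REAL);
INSERT INTO wells_us VALUES ('US-001', 'Texas', 1200.0);
INSERT INTO wells_eu VALUES ('EU-001', 'North Sea', 950.0);

-- The canonical "go-to" layer: one virtual view over both silos,
-- computed at query time rather than copied into a new store.
CREATE VIEW canonical_wells AS
    SELECT well_id, region, daily_bbl FROM wells_us
    UNION ALL
    SELECT well_id, region, daily_bbl FROM wells_eu;
""")

rows = conn.execute(
    "SELECT well_id, region FROM canonical_wells ORDER BY well_id"
).fetchall()
print(rows)
```

Because the view is virtual, a new regional silo can be exposed to product teams by extending the view definition, without migrating any physical data.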
A company that provides information services to many verticals in 40 countries found that unprecedented growth in regulatory and social data had fueled demand for its products, but also posed significant scalability and complexity challenges in automating data collection. Its healthcare unit maintained a content database of over 10 million documents that had to be constantly refreshed, and its web extraction process, first manual and then automated through Perl scripts, could not scale and was being thwarted by dynamic web technologies. This impacted customer experience and revenue. Advanced web automation integrated with a data virtualization platform was used to access websites as a human would (filling forms, retrieving search results, interacting with dynamic web elements, etc.) and to retrieve and present information in a structured format, all with industrial reliability and performance. This has empowered the company to explore new market opportunities.
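The "retrieve and present in structured format" step can be sketched as follows. This is an illustrative assumption, not the platform's actual code: the HTML snippet stands in for an already-fetched search-results page, and in a real pipeline a browser-automation layer would first fill the search form and render any dynamic elements before this parsing stage runs.

```python
from html.parser import HTMLParser

# Stand-in for a fetched results page; field markup is hypothetical.
SAMPLE_RESULTS = """
<ul>
  <li class="doc"><span class="title">Device recall notice 12</span>
      <span class="date">2013-04-01</span></li>
  <li class="doc"><span class="title">Labeling guidance update</span>
      <span class="date">2013-05-17</span></li>
</ul>
"""

class ResultParser(HTMLParser):
    """Turn each title/date span pair into one structured record."""
    def __init__(self):
        super().__init__()
        self.records, self._field = [], None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("title", "date"):
            self._field = cls

    def handle_data(self, data):
        if self._field == "title":
            self.records.append({"title": data.strip()})
        elif self._field == "date":
            self.records[-1]["date"] = data.strip()
        self._field = None

parser = ResultParser()
parser.feed(SAMPLE_RESULTS)
print(parser.records)
```

The point of industrializing this step is that the output is uniform records rather than raw pages, so downstream refresh and deduplication logic never touches HTML.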
Retail consumer analytics is one of the hottest sectors in big data analytics. One company provides both data and analytics services for major retail and CPG companies: analyzing store and web traffic through to conversions, predicting social trends, and using public health data (e.g., flu trends in certain zip codes) and weather data (storm warnings) to shift advertising spend and shelf-stocking recommendations. Its data and algorithms grew in lockstep, and to keep up it uses data virtualization to ingest new data feeds with extreme agility and to integrate existing data into new canonical views without impacting source systems.
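The "new feed without impacting source systems" idea can be sketched as a registry of source adapters merged at read time. This is a minimal illustration under assumed names; the feeds, fields, and values are all hypothetical.

```python
# Each source registers a fetch adapter; the canonical view is computed
# at read time, so sources themselves are never modified or copied.
SOURCES = {}

def register(name, fetch):
    """Expose a source through the virtualization layer."""
    SOURCES[name] = fetch

def canonical_view(zip_code):
    """Merge every registered feed's signal for one zip code."""
    view = {"zip": zip_code}
    for name, fetch in SOURCES.items():
        view[name] = fetch(zip_code)
    return view

# Existing feeds (stubbed with fixed values for illustration).
register("store_traffic", lambda z: {"visits": 1800})
register("flu_trend", lambda z: {"index": 0.72})

# Later, onboarding a brand-new weather feed is just one more
# registration; no existing source or consumer changes.
register("storm_warning", lambda z: {"active": True})

print(canonical_view("60614"))
```

A consumer querying `canonical_view` sees the new weather signal immediately, which is the agility the company was after.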
The leader in geo-spatial and navigation platforms used in mobile, automotive, and other applications sought to enrich its location-based data services with real-time and contextual social data. Data virtualization creates a logical data service that combines base-map data on points of interest with semi-static data feeds (ratings, opening hours, offers, prices, etc.) from a variety of suppliers, further enriched with real-time, dynamic information coming directly from users, social media sites, and context-sensitive commerce. It has enabled the company to deliver a win-win for its customers and partners with greater agility and lower cost.
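The shape of that logical data service can be sketched as a runtime join across three tiers of freshness. This is a simplified illustration, not the vendor's design; the identifiers, fields, and values are invented for the example.

```python
# Static base map: points of interest, rarely changing.
BASE_MAP = {
    "poi-17": {"name": "Cafe Aurora", "lat": 52.37, "lon": 4.90},
}
# Semi-static supplier feeds: hours, offers, prices.
SUPPLIER_FEED = {
    "poi-17": {"hours": "08:00-18:00", "offer": "2-for-1 espresso"},
}
# Real-time social feed: ratings and activity from users.
SOCIAL_FEED = {
    "poi-17": {"rating": 4.6, "checkins_last_hour": 12},
}

def location_service(poi_id):
    """Federate the three tiers into one response at request time,
    without copying any of them into a new physical store."""
    record = dict(BASE_MAP[poi_id])
    record.update(SUPPLIER_FEED.get(poi_id, {}))
    record.update(SOCIAL_FEED.get(poi_id, {}))
    return record

result = location_service("poi-17")
print(result["name"], result["rating"])
```

Keeping the tiers separate behind one service lets each refresh on its own cadence: the base map quarterly, supplier feeds daily, and the social layer continuously.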