Figure 1: The extract, transform, load (ETL) process extracts information from the source databases, transforms it, and then loads it into the data warehouse.
Figure 2: The data-integration system interfaces with the source databases via wrapper code if required.

Issues with combining
heterogeneous data sources, often referred to as
information silos, under a single query interface have existed for some time. In the early 1980s, computer scientists began designing systems for interoperability of heterogeneous databases. The first data integration system driven by structured metadata was designed in 1991 at the
University of Minnesota for the
Integrated Public Use Microdata Series (IPUMS). IPUMS used a
data warehousing approach, which
extracts, transforms, and loads data from heterogeneous sources into a unique view
schema so data from different sources become compatible. By making thousands of population databases interoperable, IPUMS demonstrated the feasibility of large-scale data integration. The data warehouse approach offers a
tightly coupled architecture because the data are already physically reconciled in a single queryable repository, so it usually takes little time to resolve queries. The
data warehouse approach is less feasible for data sets that are frequently updated, requiring the
extract, transform, load (ETL) process to be continuously re-executed for synchronization. Difficulties also arise in constructing data warehouses when one has only a query interface to summary data sources and no access to the full data. This problem frequently emerges when integrating several commercial query services like travel or classified advertisement web applications.
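The ETL pattern described above can be illustrated with a minimal sketch, assuming two hypothetical source record sets with differing field names and units that are reconciled into one warehouse schema; all names and values below are illustrative, not taken from any particular system.

<syntaxhighlight lang="python">
# Minimal ETL sketch: extract rows from two hypothetical heterogeneous
# sources, transform them into one shared warehouse schema, and load the
# result. Source and field names are illustrative assumptions only.

def extract():
    """Pull raw rows from two hypothetical source databases."""
    source_a = [{"person_id": 1, "age_years": 34, "income_usd": 52000}]
    source_b = [{"id": "X-7", "age": "29", "annual_income_cents": 4100000}]
    return source_a, source_b

def transform(source_a, source_b):
    """Map both source schemas onto a single warehouse view schema."""
    unified = []
    for row in source_a:
        unified.append({"person": str(row["person_id"]),
                        "age": int(row["age_years"]),
                        "income_usd": float(row["income_usd"])})
    for row in source_b:
        unified.append({"person": row["id"],
                        "age": int(row["age"]),
                        "income_usd": row["annual_income_cents"] / 100.0})
    return unified

def load(rows, warehouse):
    """Append the reconciled rows to the warehouse (here, just a list)."""
    warehouse.extend(rows)

warehouse = []
load(transform(*extract()), warehouse)
print(warehouse)  # every row now uses the same fields and unit conventions
</syntaxhighlight>

Keeping such a job synchronized with frequently updated sources is exactly the re-execution burden noted above.
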
A trend began in 2009 favoring the loose coupling of data and providing a unified query-interface to access real time data over a
mediated schema (see Figure 2), which allows information to be retrieved directly from original databases. This is consistent with the
service-oriented architecture (SOA) approach popular in that era. This approach relies on mappings between the mediated schema and the schema of original sources, and on translating a query into decomposed queries to match the schema of the original databases. Such mappings can be specified in two ways: as a mapping from entities in the mediated schema to entities in the original sources (the "Global-as-View" (GAV) approach), or as a mapping from entities in the original sources to the mediated schema (the "Local-as-View" (LAV) approach). The latter approach requires more sophisticated inferences to resolve a query on the mediated schema, but makes it easier to add new data sources to a (stable) mediated schema.
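The distinction between the two mapping styles can be sketched as follows, assuming a hypothetical mediated relation Movie(title, year) and two hypothetical sources s1 and s2; the relation names, source descriptions, and query rewriting shown are illustrative only.

<syntaxhighlight lang="python">
# Illustrative GAV vs. LAV mappings over a hypothetical mediated relation
# Movie(title, year) and two hypothetical sources, s1 and s2.
# All names, descriptions, and data are assumptions for illustration.

s1 = [("Alien", 1979), ("Heat", 1995)]   # source 1: (title, year) pairs
s2 = [("Brazil",), ("Heat",)]            # source 2: titles only

# Global-as-View: the mediated relation is *defined* as a query over the
# sources, so answering a mediated query is straightforward view unfolding.
def movie_gav():
    return ([{"title": t, "year": y} for (t, y) in s1]
            + [{"title": t, "year": None} for (t,) in s2])

# Local-as-View: each source is *described* as a view over the mediated
# schema; answering a query requires reasoning over these descriptions.
lav_descriptions = {
    "s1": "SELECT title, year FROM Movie",  # s1 holds some (title, year) of Movie
    "s2": "SELECT title FROM Movie",        # s2 holds some titles of Movie
}

def titles_lav():
    # A query for all known movie titles is rewritten against every source
    # whose description can contribute tuples (a deliberately trivial case).
    return sorted({t for (t, _) in s1} | {t for (t,) in s2})

print(movie_gav())
print(titles_lav())
</syntaxhighlight>

Under GAV, adding a source means revising the mediated view definitions; under LAV, only a new source description is added, which is why LAV eases source addition at the cost of more complex query answering.
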
Some of the work in data integration research concerns the semantic integration problem. This problem addresses not the structuring of the architecture of the integration, but how to resolve
semantic conflicts between heterogeneous data sources. For example, if two companies merge their databases, certain concepts and definitions in their respective schemas like "earnings" inevitably have different meanings. In one database it may mean profits in dollars (a floating-point number), while in the other it might represent the number of sales (an integer). A common strategy for the resolution of such problems involves the use of
ontologies which explicitly define schema terms and thus help to resolve semantic conflicts. This approach represents
ontology-based data integration. On the other hand, the problem of combining research results from different bioinformatics repositories requires benchmarking of the similarities, computed from different data sources, on a single criterion such as positive predictive value. This enables the data sources to be directly comparable and to be integrated even when the natures of the experiments are distinct.
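A minimal sketch of how the "earnings" conflict above might be reconciled, assuming hypothetical per-column annotations that play the role of a small ontology; all schema and concept names are illustrative.

<syntaxhighlight lang="python">
# Sketch of resolving a semantic conflict between two merged databases whose
# "earnings" columns denote different things. The per-column annotations
# below play the role of a tiny ontology; all names are illustrative.

schema_a = {"earnings": {"concept": "profit", "unit": "usd", "type": float}}
schema_b = {"earnings": {"concept": "sales_count", "unit": "items", "type": int}}

rows_a = [{"earnings": 1250000.0}]
rows_b = [{"earnings": 8400}]

def integrate(rows, schema):
    """Rename each column to the concept it denotes, so identically named
    but semantically different fields no longer collide."""
    return [{schema[col]["concept"]: value for col, value in row.items()}
            for row in rows]

merged = integrate(rows_a, schema_a) + integrate(rows_b, schema_b)
print(merged)  # [{'profit': 1250000.0}, {'sales_count': 8400}]
</syntaxhighlight>
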
It was also determined that current data modeling methods were imparting data isolation into every
data architecture in the form of islands of disparate data and information silos. This data isolation is an unintended artifact of the data modeling methodology that results in the development of disparate data models. Disparate data models, when instantiated as databases, form disparate databases. Enhanced data model methodologies have been developed to eliminate the data isolation artifact and to promote the development of integrated data models. One enhanced data modeling method recasts data models by augmenting them with structural
metadata in the form of standardized data entities. As a result of recasting multiple data models, the set of recast data models will now share one or more commonality relationships that relate the structural metadata now common to these data models. Commonality relationships are a peer-to-peer type of entity relationship that relates the standardized data entities of multiple data models. Multiple data models that contain the same standard data entity may participate in the same commonality relationship. When integrated data models are instantiated as databases and are properly populated from a common set of master data, these databases are integrated.
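The recasting idea can be sketched as follows, assuming three hypothetical data models annotated with made-up standardized entity names; the structures shown are illustrative and not the notation of any specific methodology.

<syntaxhighlight lang="python">
# Sketch of commonality relationships between recast data models: each model
# carries structural metadata naming the standardized data entities it uses,
# and models sharing a standard entity are related. Names are illustrative.

model_crm = {"name": "CRM", "standard_entities": {"Party", "PostalAddress"}}
model_erp = {"name": "ERP", "standard_entities": {"Party", "Invoice"}}
model_hr  = {"name": "HR",  "standard_entities": {"Party", "PostalAddress"}}

def commonality_relationships(models):
    """Return, for each standardized entity, the models that share it."""
    shared = {}
    for model in models:
        for entity in model["standard_entities"]:
            shared.setdefault(entity, set()).add(model["name"])
    # Keep only entities that actually relate two or more data models.
    return {e: names for e, names in shared.items() if len(names) > 1}

print(commonality_relationships([model_crm, model_erp, model_hr]))
# e.g. {'Party': {'CRM', 'ERP', 'HR'}, 'PostalAddress': {'CRM', 'HR'}}
</syntaxhighlight>
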
Since 2011, data hub approaches have been of greater interest than fully structured (typically relational) Enterprise Data Warehouses. Since 2013,
data lake approaches have risen to the level of data hubs. (See the popularity of all three search terms on Google Trends.) These approaches combine unstructured or varied data into one location, but do not necessarily require an (often complex) master relational schema to structure and define all data in the hub. In recent times, as the number of applications in use has increased manyfold and application-to-application integration has become critical, unified APIs have emerged to help developers integrate their applications with one another; more recently, the Model Context Protocol (MCP) has taken this a step further for AI agents.

Data integration plays a big role in business regarding
data collection used for studying the market. Businesses try to convert the raw data retrieved from consumers into coherent information when considering what steps to take next. Organizations are more frequently using data mining to collect information and patterns from their databases, a process that helps them develop new business strategies to increase business performance and perform economic analyses more efficiently. Compiling the large amounts of data they collect for storage in their systems is a form of data integration adapted for business intelligence, improving their chances of success.

==Example==