Middleware provides all the services and applications necessary for efficient management of datasets and files within the data grid while giving users quick access to those datasets and files. There are a number of concepts and tools that must be available to make a data grid operationally viable. At the same time, not all data grids require the same capabilities and services, because of differences in access requirements, security, and the location of resources relative to users. In any case, most data grids will have similar middleware services that provide for a universal name space, a data transport service, a data access service, a data replication service, and a resource management service. Taken together, they are key to the data grid's functional capabilities.
===Universal namespace===
Since the data within a data grid comes from multiple separate systems and networks using different file naming conventions, it would be difficult for a user to locate data within the data grid and be sure they retrieved what they needed based solely on existing physical file names (PFNs). A universal or unified name space makes it possible to create logical file names (LFNs) that can be referenced within the data grid and that map to PFNs. When an LFN is requested or queried, all matching PFNs are returned, including possible replicas of the requested data. The end user can then choose from the returned results the most appropriate replica to use. This service is usually provided as part of a management system known as a Storage Resource Broker (SRB). Information about the locations of files and the mappings between LFNs and PFNs may be stored in a metadata or replica catalogue. The replica catalogue contains information about LFNs that map to multiple replica PFNs.
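The LFN-to-PFN mapping can be illustrated with a minimal sketch of a replica catalogue. The class, method names, and URLs below are illustrative assumptions, not the interface of any particular SRB implementation:

```python
# Minimal sketch of a replica catalogue mapping logical file names (LFNs)
# to physical file names (PFNs); one LFN may map to several replica PFNs.

class ReplicaCatalogue:
    def __init__(self):
        self._catalogue = {}  # LFN -> list of PFNs (replicas)

    def register(self, lfn, pfn):
        """Map a logical name to one physical replica location."""
        self._catalogue.setdefault(lfn, []).append(pfn)

    def lookup(self, lfn):
        """Return all PFNs (replicas) known for the given LFN."""
        return list(self._catalogue.get(lfn, []))

catalogue = ReplicaCatalogue()
# Hypothetical dataset replicated at two sites:
catalogue.register("lfn://experiment/run42/events.dat",
                   "gsiftp://site-a.example.org/data/run42/events.dat")
catalogue.register("lfn://experiment/run42/events.dat",
                   "gsiftp://site-b.example.org/mirror/events.dat")

# A query returns every replica; the user (or a broker) picks one.
replicas = catalogue.lookup("lfn://experiment/run42/events.dat")
print(replicas)
```

Here the user querying a single logical name receives both physical locations and can select whichever replica is closest or least loaded.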
===Data transport service===
Another middleware service is that of providing for data transport or data transfer. Data transport encompasses multiple functions that are not limited to the transfer of bits, including such items as fault tolerance and data access. Fault tolerance can be achieved in a data grid by providing mechanisms that ensure data transfer will resume after each interruption until all requested data has been received. Possible methods range from restarting the entire transmission from the beginning of the data to resuming from the point where the transfer was interrupted. As an example, GridFTP provides fault tolerance by sending data from the last acknowledged byte without restarting the entire transfer from the beginning. The data transport service also provides the low-level access and connections between hosts for file transfer. It may use any number of modes to implement the transfer, including parallel data transfer, where two or more data streams are used over the same channel; striped data transfer, where two or more streams access different blocks of the file for simultaneous transfer; and the use of the underlying built-in capabilities of the network hardware or specially developed protocols to support faster transfer speeds. The data transport service might optionally include a network overlay function to facilitate the routing and transfer of data, as well as file I/O functions that allow users to see remote files as if they were local to their system. The data transport service hides from the user the complexity of access and transfer between the different systems, so the data grid appears as one unified data source.
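The resume-from-last-acknowledged-byte behaviour described above can be sketched as follows. This is not the GridFTP protocol itself, only an in-memory illustration of the idea; the function name and the simulated interruption are assumptions:

```python
# Sketch of resumable transfer: after an interruption, copying restarts
# from the last acknowledged byte rather than from the beginning.
import io

def resumable_copy(source, dest, chunk_size=4, fail_after=None):
    """Copy bytes from `source` starting at dest's current length.

    `fail_after` simulates a link failure after that many chunks."""
    offset = len(dest.getvalue())      # last acknowledged byte
    source.seek(offset)                # resume, do not restart
    chunks = 0
    while True:
        if fail_after is not None and chunks >= fail_after:
            raise ConnectionError("link dropped")
        data = source.read(chunk_size)
        if not data:
            return dest.getvalue()
        dest.write(data)
        chunks += 1

source = io.BytesIO(b"0123456789abcdef")
dest = io.BytesIO()
try:
    resumable_copy(source, dest, fail_after=2)   # interrupted after 8 bytes
except ConnectionError:
    pass
# The retry seeks to byte 8 and transfers only the remainder.
result = resumable_copy(source, dest)
print(result)  # b'0123456789abcdef'
```

Only the untransferred tail is re-sent on retry, which is what saves bandwidth on large, frequently interrupted grid transfers.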
===Data access service===
Data access services work hand in hand with the data transport service to provide security, access controls and management of any data transfers within the data grid. Security services provide mechanisms for authentication of users to ensure they are properly identified. Common forms of security for authentication include the use of passwords or Kerberos. Authorization services are the mechanisms that control what the user is able to access after being identified through authentication. Common forms of authorization mechanisms can be as simple as file permissions. More stringently controlled access to data is provided using Access Control Lists (ACLs), Role-Based Access Control (RBAC) and Task-Based Authorization Controls (TBAC). These types of controls can provide granular access to files, from limits on access times or duration of access down to controls that determine which files can be read or written. The final data access service that might be present to protect the confidentiality of the data transport is encryption. The most common form of encryption for this task has been the use of SSL while data is in transport. While all of these access services operate within the data grid, access services within the various administrative domains that host the datasets will still stay in place to enforce access rules. The data grid access services must be in step with the administrative domains' access services for this to work.
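The authorization step can be sketched as a toy role-based access control (RBAC) check. The role names, users, and permission sets below are illustrative assumptions, not those of any real grid security service:

```python
# Toy RBAC sketch: a user's role determines which actions on grid files
# are permitted after authentication has identified the user.

ROLE_PERMISSIONS = {
    "analyst": {"read"},            # read-only role
    "curator": {"read", "write"},   # may also modify datasets
}

USER_ROLES = {
    "alice": "curator",
    "bob":   "analyst",
}

def authorize(user, action):
    """Return True if the user's role grants the requested action."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("alice", "write"))  # True: curators may write
print(authorize("bob", "write"))    # False: analysts are read-only
```

A real deployment would layer such checks on top of the hosting domain's own ACLs, consistent with the requirement that grid-level and domain-level access services stay in step.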
===Data replication service===
To meet the needs for scalability, fast access and user collaboration, most data grids support replication of datasets to points within the distributed storage architecture. The use of replicas allows multiple users faster access to datasets and preserves bandwidth, since replicas can often be placed strategically close to or within sites where users need them. However, replication of datasets and creation of replicas is bound by the availability of storage within sites and bandwidth between sites. The replication and creation of replica datasets is controlled by a replica management system. The replica management system determines user needs for replicas based on input requests and creates them based on the availability of storage and bandwidth. All replicas are then catalogued or added to a directory within the data grid so that users can query their locations. In order to perform the tasks undertaken by the replica management system, it needs to be able to manage the underlying storage infrastructure. The replica management system also ensures that changes to datasets are propagated to all replicas in a timely fashion.
====Replication update strategy====
There are a number of ways the replica management system can handle the updates of replicas. The updates may be designed around a centralized model, where a single master replica updates all others, or a decentralized model, where all peers update each other. The following sections describe several strategies for replica placement.
====Dynamic replication====
Dynamic replication is an approach to the placement of replicas based on the popularity of the data. The method has been designed around a hierarchical replication model. The data management system keeps track of available storage on all nodes. It also keeps track of the requests (hits) that clients (users) at a site make for particular data. When the number of hits for a specific dataset exceeds the replication threshold, it triggers the creation of a replica on the server that directly services the user's client. If that directly servicing server, known as a father, does not have sufficient space, then the father's father in the hierarchy becomes the target to receive a replica, and so on up the chain until it is exhausted. The data management system algorithm also allows for the dynamic deletion of replicas that have a null access value, or a value lower than the access frequency of the data to be stored, in order to free up space. This improves system performance in terms of response time and number of replicas, and helps balance load across the data grid. This method can also use dynamic algorithms that determine whether the cost of creating a replica is truly worth the expected gains given the location. If the average number of requests exceeds the previous threshold and shows an upward trend, and storage utilization rates indicate capacity for more replicas, more replicas may be created. Likewise, replicas whose access counts fall below the threshold and that were not created in the current replication interval can be removed to make space for new replicas.
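The hit-counting and walk-up-the-hierarchy behaviour can be sketched as follows. The threshold value, node layout and dataset name are illustrative assumptions:

```python
# Sketch of threshold-triggered dynamic replication in a hierarchy:
# when hits for a dataset exceed the threshold, replicate at the father
# node, or walk up the hierarchy until a node with space is found.

REPLICATION_THRESHOLD = 3  # assumed value for illustration

class Node:
    def __init__(self, name, free_space, parent=None):
        self.name, self.free_space, self.parent = name, free_space, parent
        self.replicas = set()

def record_hit(node, dataset, hits, size):
    """Count a request; replicate once hits exceed the threshold."""
    hits[dataset] = hits.get(dataset, 0) + 1
    if hits[dataset] > REPLICATION_THRESHOLD:
        target = node
        while target is not None and target.free_space < size:
            target = target.parent           # try the father's father, etc.
        if target is not None and dataset not in target.replicas:
            target.replicas.add(dataset)
            target.free_space -= size

root = Node("root", free_space=100)
father = Node("site-server", free_space=0, parent=root)  # no local space
hits = {}
for _ in range(5):                           # five client requests
    record_hit(father, "run42/events.dat", hits, size=10)
print(root.replicas)  # replica placed at the father's father
```

Because the father node has no free space, the fourth hit pushes the replica one level up the hierarchy instead, exactly the escalation the text describes.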
====Fair-share replication====
Like the replication methods described before, fair-share replication is based on a hierarchical replication model, and the popularity of files plays a key role in determining which files will be replicated. The difference with this method is that the placement of replicas is based on the access load and storage load of candidate servers. A candidate server may have sufficient storage space but be servicing many clients accessing stored files; placing a replica on this candidate could degrade performance for all clients accessing it. Therefore, placement of replicas with this method is done by evaluating each candidate node's access load to find a suitable node for the replica. If all candidate nodes are rated equivalently for access load, i.e. none is accessed more than the others, then the candidate node with the lowest storage load is chosen to host the replicas. Methods similar to those of the other replication strategies are used to remove unused or less-requested replicas if needed. Replicas that are removed might be moved to a parent node for later reuse should they become popular again.
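The candidate-selection rule, access load first, storage load as the tie-breaker, can be sketched in a few lines. The server names and load figures are invented for illustration:

```python
# Sketch of fair-share replica placement: choose the candidate with the
# lowest access load, breaking ties by the lowest storage load.

candidates = [
    {"name": "srv-a", "access_load": 120, "storage_load": 0.40},
    {"name": "srv-b", "access_load":  35, "storage_load": 0.80},
    {"name": "srv-c", "access_load":  35, "storage_load": 0.55},
]

def choose_host(candidates):
    """Rank by (access_load, storage_load); lowest tuple wins."""
    return min(candidates, key=lambda c: (c["access_load"], c["storage_load"]))

chosen = choose_host(candidates)
print(chosen["name"])  # srv-c: tied with srv-b on access load,
                       # but its storage is less full
```

Note that srv-a, despite possibly having plenty of disk, is rejected outright because its heavy access load would degrade service for existing clients.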
====Other replication====
The strategies above are but three of many possible replication strategies that may be used to place replicas within the data grid where they will improve performance and access. Below are some others that have been proposed and tested along with the previously described replication strategies.
• Static – uses a fixed set of replica nodes with no dynamic changes to the files being replicated.
• Best Client – each node records the number of requests per file received during a preset time interval; if the request count for a file exceeds the set threshold, a replica is created at the best client, the one that requested the file the most; stale replicas are removed based on another algorithm.
• Cascading – used in a hierarchical node structure where the number of requests per file received during a preset time interval is compared against a threshold. If the threshold is exceeded, a replica is created at the first tier down from the root; if it is exceeded again, a replica is added to the next tier down, and so on like a waterfall effect until a replica is placed at the client itself.
• Plain Caching – if the client requests a file, it is stored as a copy on the client.
• Caching plus Cascading – combines the caching and cascading strategies.
• Fast Spread – also used in a hierarchical node structure, this strategy automatically populates all nodes in the path of the client that requests a file.
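The cascading strategy's tier-by-tier descent can be sketched as a small simulation. The tier names, threshold, and per-interval request counts are assumptions for illustration:

```python
# Sketch of the cascading strategy: each interval in which the request
# count exceeds the threshold pushes a replica one tier further down,
# from just below the root toward the client.

THRESHOLD = 2                                  # assumed request threshold
tiers = ["root", "regional", "site", "client"]  # hierarchy, top to bottom

def cascade(request_counts):
    """Return the tiers that receive a replica, one per interval in
    which the threshold is exceeded, moving down like a waterfall."""
    placed = []
    depth = 1                                  # first tier below the root
    for count in request_counts:               # one count per time interval
        if count > THRESHOLD and depth < len(tiers):
            placed.append(tiers[depth])
            depth += 1
    return placed

placements = cascade([3, 5, 4])                # three busy intervals
print(placements)  # ['regional', 'site', 'client']
```

Sustained demand thus walks the replica down the hierarchy until a copy sits at the client itself, which is where Plain Caching starts in the first place.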
===Tasks scheduling and resource allocation===
Characteristics of data grid systems such as large scale and heterogeneity require specific methods of task scheduling and resource allocation. To resolve the problem, the majority of systems use extended classic methods of scheduling. Others employ fundamentally different methods based on incentives for autonomous nodes, such as virtual money or the reputation of a node. Another specific property of data grids, their dynamic nature, consists in the continuous process of nodes connecting and disconnecting, together with local load imbalance during the execution of tasks. This can make the results of the initial resource allocation for a task obsolete or non-optimal. As a result, many data grids utilize execution-time adaptation techniques that permit the systems to react to dynamic changes: balance the load, replace disconnecting nodes, take advantage of newly connected nodes, and recover task execution after faults.
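One of the adaptation techniques mentioned, replacing disconnecting nodes, can be sketched as a simple rescheduling step. The node and task names, and the least-loaded-first policy, are illustrative assumptions rather than a specific scheduler's algorithm:

```python
# Sketch of execution-time adaptation: when a node disconnects, its
# unfinished tasks are reassigned to the currently least-loaded node.

assignments = {
    "node-1": ["t1", "t2"],
    "node-2": ["t3"],
    "node-3": ["t4", "t5", "t6"],
}

def handle_disconnect(assignments, lost_node):
    """Move each task of the lost node to the least-loaded survivor."""
    orphaned = assignments.pop(lost_node, [])
    for task in orphaned:
        target = min(assignments, key=lambda n: len(assignments[n]))
        assignments[target].append(task)
    return assignments

handle_disconnect(assignments, "node-1")
print(assignments)  # node-2, the lightest-loaded node, absorbs the tasks
```

The same greedy step also serves load balancing when a newly connected node appears with an empty task list, since `min` will route work to it first.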
===Resource management system (RMS)===
The resource management system represents the core functionality of the data grid. It is the heart of the system that manages all actions related to storage resources. In some data grids it may be necessary to create a federated RMS architecture, rather than using a single RMS, because of differing administrative policies and the diversity of capabilities found within the data grid. In such a case, the RMSs in the federation will employ an architecture that allows for interoperability based on an agreed-upon set of protocols for actions related to storage resources.
====RMS functional capabilities====
• Fulfillment of user and application requests for data resources based on the type of request and on policies; the RMS will be able to support multiple policies and multiple requests concurrently
• Scheduling, timing and creation of replicas
• Policy and security enforcement within the data grid resources, including authentication, authorization and access
• Support for systems with different administrative policies to inter-operate while preserving site autonomy
• Support for quality of service (QoS) when requested, if the feature is available
• Enforcement of system fault tolerance and stability requirements
• Management of resources, i.e. disk storage, network bandwidth and any other resources that interact directly with or as part of the data grid
• Management of trusts concerning resources in administrative domains; some domains may place additional restrictions on how they participate, requiring adaptation of the RMS or federation
• Support for adaptability, extensibility and scalability in relation to the data grid

==Topology==