Rozo provides an open source POSIX filesystem, built on top of
a distributed file system architecture similar to that of
Google File System,
Lustre or
Ceph. RozoFS's distinguishing characteristic lies in the way data is stored: data to be stored is transformed into several chunks using the
Mojette Transform and distributed across storage devices in such a way that it can be retrieved even if several chunks are unavailable; conversely, individual chunks are meaningless on their own. Redundancy schemes based on coding techniques, such as the one used by RozoFS, achieve significant storage savings compared with simple replication.

The file system comprises three components:

* Exports server (metadata server): manages the location (layout) of chunks (handling capacity load balancing with respect to high availability), file access, and the namespace (hierarchy). Multiple replicated metadata servers provide
failover. The exports server is a user-space
daemon; the metadata is stored synchronously on a conventional file system, which must support extended attributes.
* Storage servers (chunk servers): store the chunks. The storage server is also a user-space
daemon that relies on the underlying local file system to manage the actual storage.
* Clients: talk to both the exports server and the storage servers and are responsible for the data transformation. Clients mount the file system in user space via
FUSE.

== See also ==