Red Hat Storage is software-only, scale-out storage that provides flexible
and affordable unstructured data storage for the enterprise. GlusterFS, a
key building block of Red Hat Storage, is based on a stackable user-space
design and can deliver exceptional performance for diverse workloads.
GlusterFS aggregates various storage servers over network interconnects
into one large, parallel network file system.
Multiple insecure temporary file creation flaws were found in Red Hat
Storage. A local user on the Red Hat Storage server could use these flaws
to cause arbitrary files to be overwritten as the root user via a symbolic
link attack. (CVE-2012-4417)
These issues were discovered by Kurt Seifried of Red Hat, and Jim Meyering.
This update also fixes the following bugs:
If geo-replication was started with a large number of small static files,
an E2BIG error was displayed. This was due to the way rsync was invoked by
geo-replication. This issue has been fixed, and geo-replication now works
correctly with a large number of files. (BZ#859173)
RHS automatically modified the smb.conf file and started or restarted the
SMB service when a new volume was created, regardless of the chkconfig
status of the service. This resulted in an improper SMB configuration and
logged errors. This issue has been fixed by performing a “condrestart”
instead of an unconditional “start”. (BZ#863907)
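The difference between the two operations can be sketched as follows; this is an illustrative shell sketch of condrestart semantics, not the actual RHS init scripts:

```shell
# Illustrative sketch of "condrestart" semantics (not the actual RHS code):
# condrestart restarts a service only if it is already running, so a
# service the administrator disabled via chkconfig is left alone, while an
# unconditional "start" would launch it anyway.
smb_running=false   # assume the administrator left the smb service stopped

if [ "$smb_running" = true ]; then
    action="condrestart: restarting smb"
else
    action="condrestart: smb not running, nothing to do"
fi
echo "$action"
```

With an unconditional “start”, the else branch would launch the service regardless, which is exactly the misbehavior this update removes.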
Issuing the “gluster peer probe” command with a Fully Qualified Domain Name
(FQDN) that contained a digit as the first character of the domain name
caused the command to fail. This issue has been fixed by allowing
digits as the first character in the FQDN. (BZ#863908)
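A minimal sketch of the relaxed validation, assuming an RFC 1123-style hostname pattern (the regex and the hostname are illustrative, not the actual glusterd code):

```shell
# Illustrative validation sketch (not the actual glusterd code): per
# RFC 1123, a hostname label may begin with a digit, so an FQDN such as
# "1storage.example.com" must be accepted by "gluster peer probe".
fqdn="1storage.example.com"
if echo "$fqdn" | grep -Eq '^([0-9A-Za-z]([0-9A-Za-z-]*[0-9A-Za-z])?\.)+[A-Za-z]{2,}$'; then
    result=valid
else
    result=invalid
fi
echo "$fqdn is $result"
```

A pattern that required a letter as the first character of a label, as the pre-fix code effectively did, would reject this otherwise legal name.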
In a replicated configuration, rebooting one of the servers during active
I/O resulted in stale locks and caused some glusterfs commands to fail.
This issue has been fixed by adjusting the timeout value. (BZ#866758)
In a replicated volume, when a file was in a split-brain state, reads were
still permitted on that file from the NFS mount. This issue has been fixed
by reporting an I/O error instead. (BZ#855913)
After an upgrade, the geo-replication status was reported as “N/A” if the
checkpoint service was not functional. This was due to a change in the
location of the Unix domain sockets used for inter-component communication.
This issue has been fixed by having glusterd specify the socket location to
gsyncd. (BZ#873380)
On a replicated volume, when one of the bricks was offline, executing the
ln command from an NFS mount failed. This was because getattr called
lookup with a NULL parent. This issue has been fixed by properly
populating the parent information. (BZ#874051)
In addition, this update adds the following enhancements:
This erratum includes a replication enhancement called server-side quorum
enforcement, which reduces the chances of split-brain. Quorum is enforced
at the glusterd level, and each volume can choose whether or not to enforce
quorum by setting the relevant volume options; the default quorum ratio is
>50%. A ratio of >50% means that, at any point in time, more than half the
nodes in the trusted storage pool must be started and connected to each
other. If network disconnects and outages leave a smaller portion of the
storage pool offline, bricks running on those nodes are taken down,
preventing further writes to the minority. For a two-node cluster, quorum
enforcement requires an arbitrator in the trusted storage pool that does
not have bricks participating in the quorum-enforcing volume. (BZ#840122)
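The >50% ratio can be sketched numerically; the node count below is illustrative:

```shell
# Server-side quorum sketch: with the default >50% ratio, more than half
# the nodes in the trusted storage pool must be up and connected for
# bricks to keep accepting writes. (node count is illustrative)
nodes=5
quorum=$(( nodes / 2 + 1 ))   # smallest integer strictly greater than nodes/2
echo "a pool of $nodes nodes needs $quorum nodes up to hold quorum"
```

This also shows why a two-node pool needs a third, brick-less arbitrator: with nodes=2 the quorum is 2, so losing either node would otherwise take down the surviving bricks. In GlusterFS the behavior is controlled through volume options such as cluster.server-quorum-type and cluster.server-quorum-ratio.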
sosreport is a tool that generates debugging information for the system it
is run on. This tool has been packaged in the Red Hat Storage channel,
which will enable further Red Hat Storage-specific tweaks and enhancements
and improve debuggability. (BZ#856673)
All users of Red Hat Storage are advised to upgrade to these updated
packages, which fix these issues and add these enhancements.
OS | Version | Architecture | Package | Affected Version | Filename |
---|---|---|---|---|---|
RedHat | 6 | src | glusterfs | < 3.3.0.5rhs-37.el6 | glusterfs-3.3.0.5rhs-37.el6.src.rpm |
RedHat | 6 | x86_64 | glusterfs-rdma | < 3.3.0.5rhs-37.el6rhs | glusterfs-rdma-3.3.0.5rhs-37.el6rhs.x86_64.rpm |
RedHat | 5 | x86_64 | glusterfs-rdma | < 3.3.0.5rhs-37.el5 | glusterfs-rdma-3.3.0.5rhs-37.el5.x86_64.rpm |
RedHat | 6 | x86_64 | glusterfs | < 3.3.0.5rhs-37.el6rhs | glusterfs-3.3.0.5rhs-37.el6rhs.x86_64.rpm |
RedHat | 6 | x86_64 | glusterfs-geo-replication | < 3.3.0.5rhs-37.el6rhs | glusterfs-geo-replication-3.3.0.5rhs-37.el6rhs.x86_64.rpm |
RedHat | 5 | x86_64 | glusterfs-debuginfo | < 3.3.0.5rhs-37.el5 | glusterfs-debuginfo-3.3.0.5rhs-37.el5.x86_64.rpm |
RedHat | 6 | src | sos | < 2.2-17.1.el6rhs | sos-2.2-17.1.el6rhs.src.rpm |
RedHat | 6 | x86_64 | glusterfs-server | < 3.3.0.5rhs-37.el6rhs | glusterfs-server-3.3.0.5rhs-37.el6rhs.x86_64.rpm |
RedHat | 6 | x86_64 | glusterfs-rdma | < 3.3.0.5rhs-37.el6 | glusterfs-rdma-3.3.0.5rhs-37.el6.x86_64.rpm |
RedHat | 6 | x86_64 | glusterfs-devel | < 3.3.0.5rhs-37.el6rhs | glusterfs-devel-3.3.0.5rhs-37.el6rhs.x86_64.rpm |