(RHSA-2018:2179) Moderate: Red Hat Ceph Storage 3.0 security and bug fix update

2018-07-11 18:20:07
access.redhat.com

8.1 High (CVSS3)

Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Scope: UNCHANGED
Confidentiality Impact: NONE
Integrity Impact: HIGH
Availability Impact: HIGH

CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H
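
For reference, the 8.1 base score can be reproduced from the vector above using the CVSS v3.0 base-score formula with the standard metric weights (AV:N = 0.85, AC:L = 0.77, PR:L = 0.62 for unchanged scope, UI:N = 0.85, C:N = 0, I:H = A:H = 0.56):

    ISS            = 1 - (1 - 0) x (1 - 0.56) x (1 - 0.56)  = 0.8064
    Impact         = 6.42 x 0.8064                          = 5.18
    Exploitability = 8.22 x 0.85 x 0.77 x 0.62 x 0.85       = 2.84
    Base score     = roundup(min(5.18 + 2.84, 10))          = 8.1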

5.5 Medium (CVSS2)

Access Vector: NETWORK
Access Complexity: LOW
Authentication: SINGLE
Confidentiality Impact: NONE
Integrity Impact: PARTIAL
Availability Impact: PARTIAL

AV:N/AC:L/Au:S/C:N/I:P/A:P

0.004 Low (EPSS)

Percentile: 74.7%

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)

  • ceph: cephx uses weak signatures (CVE-2018-1129)

  • ceph: ceph-mon does not perform authorization on OSD pool ops (CVE-2018-10861)

For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

  • Previously, Ceph RADOS Gateway (RGW) instances in zones configured for multi-site replication would crash if configured to disable sync ("rgw_run_sync_thread = false"). Therefore, multi-site replication environments could not start dedicated non-replication RGW instances. With this update, the "rgw_run_sync_thread" option can be used to configure RGW instances that will not participate in replication even if their zone is replicated; a minimal configuration sketch follows this list. (BZ#1552202)

  • Previously, when increasing "max_mds" from "1" to "2", if the Metadata Server (MDS) daemon was in the starting/resolve state for a long period of time, then restarting the MDS daemon led to an assertion failure. This caused the Ceph File System (CephFS) to be in a degraded state. With this update, increasing "max_mds" no longer causes CephFS to be in a degraded state; the command used to raise the setting is shown after this list. (BZ#1566016)

  • Previously, the transition to containerized Ceph left some "ceph-disk" unit files behind. The files were harmless, but appeared as failing units. With this update, executing the "switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook (an example invocation follows this list) disables the "ceph-disk" unit files as well. (BZ#1577846)

  • Previously, the "entries_behind_master" metric in the output of the "rbd mirror image status" CLI command did not always reduce to zero under synthetic workloads. This could raise a false alarm that there was an issue with RBD mirroring replication. With this update, the metric is updated periodically without the need for an explicit I/O flush in the workload; an example of checking the metric follows this list. (BZ#1578509)

  • Previously, when using the "pool create" command with "expected_num_objects", placement group (PG) directories were not pre-created at pool creation time as expected, resulting in performance drops when filestore splitting occurred. With this update, the "expected_num_objects" parameter is passed through to filestore correctly, and PG directories for the expected number of objects are pre-created at pool creation time (an example command appears after this list). (BZ#1579039)

  • Previously, internal RADOS Gateway (RGW) multi-site sync logic behaved incorrectly when attempting to sync containers with S3 object versioning enabled. Objects in versioning-enabled containers would fail to sync in some scenarios, for example when using "s3cmd sync" to mirror a filesystem directory (an example invocation follows this list). With this update, RGW multi-site replication logic has been corrected for the known failure cases. (BZ#1580497)

  • When restarting OSD daemons, the "ceph-ansible" restart script iterates over the daemons by listing their units with "systemctl list-units". Under certain circumstances, the output of that command contains extra spaces, which caused the parsing, and therefore the restart, to fail. With this update, the underlying code has been changed to handle the extra spaces; a whitespace-tolerant parsing sketch follows this list.

  • Previously, the Ceph RADOS Gateway (RGW) server treated negative byte-range object requests ("bytes=0--1") as invalid. Applications that expect the AWS behavior for negative or otherwise invalid range requests saw unexpected errors and could fail. With this update, a new option, "rgw_ignore_get_invalid_range", has been added to RGW. When "rgw_ignore_get_invalid_range" is set to "true", the RGW behavior for invalid range requests is backwards compatible with AWS; a configuration sketch follows this list.
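
For the "rgw_run_sync_thread" fix (BZ#1552202), a minimal ceph.conf sketch for a dedicated non-replicating RGW instance in a replicated zone could look like the following; the instance name is a placeholder:

    [client.rgw.gateway-noreplication]
    # Serve client traffic only; do not participate in multi-site sync
    # even though this zone is replicated.
    rgw_run_sync_thread = false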
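
For the "max_mds" fix (BZ#1566016), the setting is typically raised with the CephFS administration command; the file system name "cephfs" is a placeholder:

    # Allow two active MDS daemons for the file system.
    ceph fs set cephfs max_mds 2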
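
For the containerization fix (BZ#1577846), the playbook named above is run through ceph-ansible as usual; the inventory path and the "infrastructure-playbooks" directory reflect a typical ceph-ansible layout and may differ on a given installation:

    ansible-playbook -i /etc/ansible/hosts \
        infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml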
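
For the RBD mirroring fix (BZ#1578509), the metric can be inspected per image; the pool and image names are placeholders:

    # "entries_behind_master" should now trend to zero without an explicit flush.
    rbd mirror image status mypool/myimage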
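
For the "expected_num_objects" fix (BZ#1579039), the expected object count is supplied as the final argument at pool creation time; the pool name, PG counts, rule name, and object count below are placeholders:

    # Pre-create PG directories sized for roughly one million objects.
    ceph osd pool create mypool 128 128 replicated replicated_rule 1000000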
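
For the multi-site versioning fix (BZ#1580497), the failure scenario mentioned above corresponds to mirroring a directory into a versioning-enabled bucket; the path and bucket name are placeholders:

    # Mirror a local directory into a versioning-enabled bucket.
    s3cmd sync ./exports/ s3://versioned-bucket/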
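
For the OSD restart fix, a whitespace-tolerant way to extract unit names from "systemctl list-units" is sketched below; this illustrates the approach rather than the actual ceph-ansible code:

    # awk splits on runs of whitespace, so extra spaces in the output
    # of "systemctl list-units" do not break the parsing.
    systemctl list-units 'ceph-osd@*' --no-legend | awk '{print $1}'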
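
For the invalid-range fix, the new option is enabled per RGW instance; the instance name is a placeholder:

    [client.rgw.gateway1]
    # Answer invalid byte-range GET requests (such as "bytes=0--1") the way
    # AWS S3 does instead of returning an error.
    rgw_ignore_get_invalid_range = true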
