This update for ceph fixes the following issues:
Security issues fixed:
- CVE-2018-7262: rgw: malformed http headers can crash rgw (bsc#1081379).
- CVE-2017-16818: User reachable asserts allow for DoS (bsc#1063014).
Bug fixes:
- bsc#1061461: OSDs keep generating coredumps after adding a new OSD node to
the cluster.
- bsc#1079076: RGW openssl fixes.
- bsc#1067088: Upgrade to SES5 restarted all nodes; the majority of OSDs
abort during start.
- bsc#1056125: Some OSDs are down when doing performance testing on rbd
image in EC Pool.
- bsc#1087269: allow_ec_overwrites option not in command options list.
- bsc#1051598: Fix mountpoint check for systemctl enable --runtime.
- bsc#1070357: Zabbix mgr module doesn’t recover from HEALTH_ERR.
- bsc#1066502: After upgrading a single OSD from SES 4 to SES 5 the OSDs
do not rejoin the cluster.
- bsc#1067119: Crushtool decompile creates wrong device entries (device 20
device20) for non-existent / deleted OSDs.
- bsc#1060904: Misleading log level during Keystone authentication.
- bsc#1056967: Monitors go down after pool creation on a cluster with 120
OSDs.
- bsc#1067705: Issues with RGW Multi-Site Federation between SES5 and RH
Ceph Storage 2.
- bsc#1059458: Stopping / restarting the RADOS gateway as part of DeepSea
stage.4 execution causes a core dump of radosgw.
- bsc#1087493: Commvault cannot reconnect to storage after restarting
haproxy.
- bsc#1066182: Container synchronization between two Ceph clusters failed.
- bsc#1081600: Crash in civetweb/RGW.
- bsc#1054061: NFS-GANESHA service failing while trying to list mountpoint
on client.
- bsc#1074301: OSDs keep aborting: SnapMapper failed asserts.
- bsc#1086340: XFS metadata corruption on rbd-nbd mapped image with
journaling feature enabled.
- bsc#1080788: fsid mismatch when creating additional OSDs.
- bsc#1071386: Metadata spill onto block.slow.
This update was imported from the SUSE:SLE-12-SP3:Update update project.