Security Bulletin: The Elastic Storage Server and the GPFS Storage Server are affected by a vulnerability in IBM Spectrum Scale (CVE-2017-1654)

Published: 2018-06-18 00:50:57

Summary

The Elastic Storage Server and the GPFS Storage Server are affected by a vulnerability in IBM Spectrum Scale that could allow a local unprivileged user access to information located in dump files. User data could be sent to IBM during service engagements.

Vulnerability Details

**CVEID:** CVE-2017-1654
**DESCRIPTION:** IBM Spectrum Scale 4.1.1, 4.2.0, 4.2.1, 4.2.2, 4.2.3, and 5.0.0 could allow a local unprivileged user access to information located in dump files. User data could be sent to IBM during service engagements.
CVSS Base Score: 4.3
CVSS Temporal Score: See https://exchange.xforce.ibmcloud.com/vulnerabilities/133378 for the current score
CVSS Environmental Score: Undefined
CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N)

Affected Products and Versions

The Elastic Storage Server 5.0.0 through 5.2.1

The Elastic Storage Server 4.5.0 through 4.6.0

The Elastic Storage Server 4.0.0 through 4.0.6

The Elastic Storage Server 3.5.0 through 3.5.6

The Elastic Storage Server 3.0.0 through 3.0.5

The Elastic Storage Server 2.5.0 through 2.5.5

The GPFS Storage Server 2.0.0 through 2.0.7

Remediation/Fixes

For ESS 5.2.0 through 5.2.1, customers should upgrade to ESS 5.3
For ESS 5.1.0 through 5.1.1, customers should upgrade to ESS 5.3
For ESS 5.0.0 through 5.0.3, customers should upgrade to ESS 5.3
For ESS 4.5.0 through 4.6.0, customers should upgrade to ESS 5.3
For ESS 4.0.0 through 4.0.6, customers should upgrade to ESS 5.3
For ESS 3.5.0 through 3.5.6, customers should upgrade to ESS 5.3
For ESS 3.0.0 through 3.0.5, customers should upgrade to ESS 5.3
For ESS 2.5.0 through 2.5.5, customers should upgrade to ESS 5.3

The ESS 5.3 package is available at
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%20defined%20storage&product=ibm/StorageSoftware/IBM+Elastic+Storage+Server+(ESS)&release=5.3.0&platform=All&function=all

Notes:
If you are unable to upgrade to ESS 5.3, please contact IBM Service to obtain an efix:

  • For ESS 5.0.0 through 5.2.1, reference APAR IJ03165
  • For ESS 4.0.0 through 4.6.0, reference APAR IJ03164
  • For ESS 2.5.0 through 3.5.6, reference APAR IJ03140

For the GPFS Storage Server 2.0.0 through 2.0.7, contact IBM Service to obtain an efix referencing APAR IJ03140.

To contact IBM Service, see http://www.ibm.com/planetwide/

Once ESS 5.3, or the efix, is installed on a node, issue the **mmstartup** command on the node to restart Spectrum Scale and enable the fix.

This fix addresses file permissions of Spectrum Scale dump and trace files.

Spectrum Scale dump and trace files are generally created in the directory specified by the **dataStructureDump** configuration attribute of the **mmchconfig** command. If **dataStructureDump** is not explicitly set to a value, dump and trace files are created in **/tmp/mmfs**.
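
To check or change the dump directory, the standard Spectrum Scale administration commands can be used. As a sketch (the path /var/mmfs/tmp below is purely illustrative):

# mmlsconfig dataStructureDump
# mmchconfig dataStructureDump=/var/mmfs/tmp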

Spectrum Scale dump and trace files might exist on any node in a Spectrum Scale cluster. Whether or not such files exist on a particular node depends on the history of the node. The dump directory might be empty if tracing was never started on the node and no event triggering the collection of problem determination data has ever occurred on the node. The dump directory might also be empty if it has been purged of aged files, perhaps directly by an administrator or through a cron job.

Note that if the dataStructureDump configuration attribute has been changed, dump and trace files might exist in both former and current dump directories.

Following are some common file names that might be found in Spectrum Scale dump directories (this is not an exhaustive list):

internaldump.* Internal state of Spectrum Scale
kthreads.* Kernel thread stacks
extra.* Additional system state
logdump.* Dump of a Spectrum Scale recovery log (binary)
trcrpt.* Formatted Spectrum Scale trace records
trcfile.* Unformatted (binary) Spectrum Scale trace records
lxtrace.trc.* Unformatted (binary) Spectrum Scale trace records (Linux only)
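
To locate existing dump and trace files on a node, a command along the following lines can be used; it assumes the default dump directory /tmp/mmfs and should be repeated for any former dump directories:

# find /tmp/mmfs -type f \( -name 'internaldump.*' -o -name 'kthreads.*' \
    -o -name 'extra.*' -o -name 'logdump.*' -o -name 'trcrpt.*' \
    -o -name 'trcfile.*' -o -name 'lxtrace.trc.*' \) -ls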

Non-privileged User Access to Dump and Trace Files
After the fix is enabled on a node, whenever Spectrum Scale on that node creates certain dump files and trace files in the dump directory, the files are created with restricted permissions. The permissions set for such a file grant the file’s user owner and group owner read access to the file content, and deny other users any access to the file content. On UNIX systems, the file’s user owner is typically root (user ID 0), and the file’s group owner is typically the primary group of the root user. Different ownership is possible if the sticky bit is set on the dump directory, if the dump directory is configured to be within a remote file system, or if the node is running Windows.

Dump and trace files created before the application of the fix may have permissions that allow any user access to the file’s content. Application of the fix affects the permissions of subsequently created files, but does not affect the permissions of already created dump and trace files.
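
To identify such pre-existing files, GNU find can match on permission bits. A sketch, again assuming the default dump directory, that lists files to which other users have any access:

# find /tmp/mmfs -type f -perm /o=rwx -ls

Files reported by this command can then be restricted with the chmod invocations shown below.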

An administrator wishing to restrict access to existing dump and trace files can do so by changing the permissions of individual files, or by changing the permissions of the dump directory itself.

If changing the permissions of an individual file, it is recommended that other users be given no access to the file. Here are some sample invocations of the chmod command that deny access for other users:

# chmod o= FILE...
# chmod o-rwx FILE...

Here are some additional sample invocations of the chmod command that also explicitly set read permissions for the owning user and group:

# chmod ug=r,o= FILE...
# chmod 440 FILE...

If changing the permissions of the dump directory, the simplest approach is to remove all access permissions to the directory for other users. Note that removing execute (x) access to the directory is needed to prevent a user from accessing file content through the directory; removing only read (r) access is not sufficient. Some examples:

# chmod o-rwx /tmp/mmfs
# chmod o-wx /tmp/mmfs

Transmission of User Data to IBM during Service Engagements
On a node on which a Spectrum Scale file system is mounted, file system updates originating from the node may be logged to allow caching of updates in memory while ensuring file system consistency in the event of node failure. Traditionally, the recovery log only contains information related to file system metadata. However, if highly-available write cache (HAWC) is enabled for the file system, user data may be written to the recovery log. If the node fails, the file system manager performs log recovery; it replays file system updates described in the recovery log.
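
Whether HAWC is in use can be checked before deciding how to handle recovery logs. HAWC is governed by the file system's write cache threshold; as a sketch (fs1 is a placeholder device name, and the flag should be verified against your Spectrum Scale level):

# mmlsfs fs1 --write-cache-threshold

A nonzero threshold indicates that HAWC is enabled and that the recovery log may contain user data.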

Before the application of this fix, if log recovery fails, the file system manager node dumps the contents of the recovery log into a file in the dump directory. The file name’s pattern is logdump.fsName., where fsName is the name of the file system. If HAWC is currently enabled for the file system, or if it has been enabled in the past, the logdump.fsName. file could contain user data. If you do not want this data transmitted to IBM during a service engagement, remove the logdump.* files from the dump directory of each cluster node before running the **gpfs.snap** command.

After the application of this fix, if log recovery fails, the file system manager node by default does not dump the contents of any recovery log. However, logdump.* files created before the application of this fix might still exist in the dump directory. As noted above, if you do not want this data transmitted to IBM during a service engagement, remove the logdump.* files from the dump directory of each cluster node before running the **gpfs.snap** command.
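
As a concrete sketch, assuming the default dump directory /tmp/mmfs, the following removes recovery log dumps from a node; run it on every node in the cluster before collecting the snap:

# rm -f /tmp/mmfs/logdump.*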

After the application of this fix, if you do want the file system manager to dump the contents of a recovery log for which recovery has failed, use the **mmchconfig** command to change the value of the **allowUserDataDump** configuration attribute to yes. The **mmchconfig** command option **-i** is supported for **allowUserDataDump**, putting the change into effect immediately.
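
For example, the following enables recovery log dumps again, with -i putting the change into effect immediately:

# mmchconfig allowUserDataDump=yes -i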

Note that if HAWC has never been enabled for any of the cluster’s file systems, logdump.* files will not contain user data, whether they were created before the application of this fix or were allowed to be created afterward by setting the **allowUserDataDump** configuration attribute.

Workarounds and Mitigations

None
