Rocks Clusters – the httpd update that breaks your cluster and how to fix it

I’ve had a cluster running Rocks 6.2 (Sidewinder) for a few months and it has been working well. I recently had a request to add a new user, so I created the account with a minimal useradd command (specifying only the comment, the uid, the group and the username), then ran the ‘rocks sync users’ command, which copies various files, including /etc/passwd, out to the nodes and restarts some daemons.
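
For reference, it was roughly this pair of commands (the name, uid and group below are placeholders, not the real ones):

useradd -c "New User" -u 12345 -g cluster_users newuser
rocks sync users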

A few hours later the user got back to me to say his jobs were queued, but not running. So I used the checkjob command to see what the problem was, and found that his uid was unknown on the node. Indeed, looking at the password file on the node, I saw that his account was not there. So I rebooted the node and ran rocks sync users again, with no joy. So I set the node to rebuild on boot and rebooted it, and it came up with no user accounts at all.

There were errors like this in the log:

Jul 27 17:39:43 compute-0-8 411-alert-handler[13333]: Error: http://10.1.1.1:372/411.d/etc.auto..masterupdating Could not get file 'http://10.1.1.1:372/411.d/etc.auto..master': 400 Bad

The nodes get the password files amongst other things from the head node using the 411 service. So running the command below on the node should get all the files.

411get --all

However, all I got was:

Error: Could not get file 'http://10.1.1.1:372/411.d//': 400 Bad

I could ssh to a node and fetch the files successfully with wget, which only added to my confusion.

I had updated the head node recently, and this turned out to be my problem. I asked on the Rocks mailing list, and the answer I got was:

The latest CentOS 6 httpd update breaks 411.  To fix, add this to the
end of /etc/httpd/conf/httpd.conf and reload httpd:

HttpProtocolOptions Unsafe

So I did that, and now rocks sync users is working again. The version of httpd which caused the problem was httpd-2.2.15-60.el6.centos.4.x86_64.
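
On the head node that works out as something like the following (CentOS 6, hence service rather than systemctl):

echo "HttpProtocolOptions Unsafe" >> /etc/httpd/conf/httpd.conf
service httpd reload
rocks sync users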

I’m putting this here in case anyone else gets hit by this.

Making suexec work…

suexec is a useful way of getting apache to run interactive magic (cgi scripts, php scripts etc) with a different user/group than the one that apache is running as.

Most configuration guides tell you:

  • Add “SuexecUserGroup $OWNER $GROUP” to your apache config (there’s a minimal sketch of this just after the list)
  • Look in /var/log/httpd/suexec.log to see what’s going wrong
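
A minimal sketch of that first point might look something like this (the site name, user and group are made up for illustration):

<VirtualHost *:80>
    ServerName app.example.com
    DocumentRoot /var/www/app
    # run CGI as this user/group rather than as the apache user
    SuexecUserGroup appuser appgroup
    ScriptAlias /cgi-bin/ /var/www/app/cgi-bin/
</VirtualHost>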

What they don’t tell you is that suexec makes some assumptions about where to find the things it will execute, and that you can’t guarantee the log location is consistent across distros (or even versions of the same distro). I’m setting this up on CentOS 7, so the examples below were produced in that environment.

You can get useful information about both of the above by running the following:

[myuser]$ sudo suexec -V
-D AP_DOC_ROOT="/var/www"
-D AP_GID_MIN=100
-D AP_HTTPD_USER="apache"
-D AP_LOG_SYSLOG
-D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
-D AP_UID_MIN=500
-D AP_USERDIR_SUFFIX="public_html"
[myuser]$

AP_DOC_ROOT and AP_USERDIR_SUFFIX control which paths suexec will execute files from. In this case we’re restricted to running stuff that lives somewhere under “/var/www” or “/home/*/public_html”.

If you’ve got content elsewhere (for example, an application which expects to be installed under /usr/share/foo/cgi-bin) then it’s not sufficient to put a symlink from /var/www/foo/cgi-bin to /usr/share/foo/cgi-bin as suexec checks the actual location of the file, not where it was called from.

This is sensible, as it stops you putting a symlink in place which points at something nasty like /bin/sh.

AP_GID_MIN and AP_UID_MIN limit which users/groups suexec will run stuff as. In this case it won’t run anything with a GID < 100 or a UID < 500. This is sensible as it stops you running CGI scripts as privileged system users.

The GID limit is probably not an issue, but the UID limit might cause wrinkles if you look after one of the 18 UoB users who have a centrally allocated unix UID that is under 500 (because they’ve been here since before that was a problem)[1]

AP_LOG_SYSLOG is a flag that says "send all log messages to syslog" – which is fine, and arguably an improvement over writing to a specific log file. It doesn’t immediately tell you where those messages end up, but I eventually found them in /var/log/secure… which seems a sensible place for them to end up.

Once you’ve got all that sorted, you’ll need to make selinux happy. Thankfully, that’s dead easy and can be done by enabling the httpd_unified boolean. If you’re using the jfryman/selinux puppet module, it’s as easy as:


selinux::boolean { 'httpd_unified': }
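
If you’re not managing selinux with Puppet, the equivalent one-off command is (the -P makes the setting persistent across reboots):

setsebool -P httpd_unified 1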

I think that’s all the bumps I’ve hit on this road so far, but if I find any more I’ll update this article.

-Paul
[1] but then again, if you look after them (or one of the 84 users whose UID is under 1000) you’re probably already used to finding odd things that don’t work!

RHEL 7.2 Authconfig follow up — don’t mix local user info with sssd!

Quick follow-up on my previous post about authconfig with more info.

So it turns out that this was intentional, and the change was made because 2-factor authentication (2FA) support was added to SSSD.
This was added as a fix for RHEL bug 1204864, with the following comment:

With the current configuration pam_unix will always prompt the user for a password. Letting SSSD ask users of 2FA again for the password will lead to a bad user experience. Letting SSSD only ask for the second factor will make it hard for applications like gdm to show specific 2FA dialogs.

This means that if you use a mix of local (/etc/passwd or /etc/shadow) and remote (via sssd) user information for a particular user, then the user in question will only auth against their local password.
If they don’t have a local password then they will be unable to authenticate.
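
A quick way to spot an account that might be affected is to ask each NSS source separately and see whether the same user shows up in both (the username here is just a placeholder):

getent -s files passwd someuser   # the local /etc/passwd entry, if any
getent -s sss passwd someuser     # the entry sssd would provide, if any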

This seems a particularly odd thing to change during a point-release of RHEL, as I would expect that using a mix of local and remote user information is more common than using 2FA with sssd…

I thought this was worth stating separately from the previous post, as it’s more general than just when performing hackery to change UIDs — any local user entry will cause this to happen when used in conjunction with sssd.

Additional info:

The code in authconfig which enforces this is as follows (from authconfig-6.2.8/authinfo.py in the current CentOS 7.x git sources, line 3812):

  # do not continue to following modules if authentication fails
  if name == "unix" and stack == "auth" and (self.enableSSSDAuth or
    self.implicitSSSDAuth or self.enableIPAv2) and (not self.enableNIS):
    logic = LOGIC_FORCE_PKCS11 # make it or break it logic

…so this is specifically for the case where you are using SSSD and not NIS; it doesn’t apply to other remote authn/authz methods such as KRB5 without SSSD.
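
For reference, I believe LOGIC_FORCE_PKCS11 expands to the PAM control value [success=done ignore=ignore default=die], so the pam_unix auth line that authconfig writes into /etc/pam.d/system-auth-ac ends up looking something like the line below. That’s from memory though, so check your own generated file rather than trusting me:

auth        [success=done ignore=ignore default=die]    pam_unix.so nullok try_first_pass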

git – deleting local branches that were merged upstream

Like most people, we’re using git right at the centre of our puppet config management workflow. As I’ve mentioned previously, it features prominently in my top 10 most frequently used commands.

Our workflow is based around feature branches, and quite often we end up in a situation where we have a lot of local branches which have already been merged into the copy held upstream on github/gitlab/etc.

Today, I looked and noticed that while we only had 4 active branches on the gitlab server, I had 41 branches locally, most of which related to features fixed a long time ago.

This doesn’t cause much of a problem, although it can get confusing (especially if you’re likely to re-use a branch name in the future), and 41 branches is enough that deleting them one at a time by hand is tedious.

It looks like some gui tools/IDEs will take care of this for you, but I’m a command line kinda guy, and the git command line tools don’t seem to quite have this functionality baked in.

After a bit of poking about, I came up with the following approach which deletes any branch which no longer exists upstream.


# Delete all stale remote-tracking branches in origin.
git remote prune origin

# "git branch -vv" now includes the word "gone" against branches which the previous command removed, so
# use awk to identify those branches and plumb the list into "git branch -d" which will delete them locally
git branch -vv | awk '/: gone\]/ { print $1 }' | xargs git branch -D

The above seemed to do the right thing for the two repos I tested it on, but well… you might want to try it on something unimportant before you trust it!
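
One way to reduce the risk is to run the pipeline without its destructive final stage first, so it just lists the branches it would delete:

git remote prune origin
git branch -vv | awk '/: gone\]/ { print $1 }'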

NB if a branch has only ever existed locally (and never appeared under origin), it should leave it alone. But I’ve not tested that bit either.

Fusion MPT SAS-2 / sas2ircu disk replacement

Just a quick tip for anyone confused about how to replace a failed disk on a Fusion MPT SAS-2 controller under Linux (shows up as 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) via lspci).

The sas2ircu command-line tool is quite “light” on features, and it wasn’t at all obvious to me how to get a replacement disk to re-add to an array. There aren’t any options for replacing a disk in an array, and the server in question has a very minimal remote management console which doesn’t even mention storage at all…

The replacement disk showed up as “Ready (RDY)” in the output of the sas2ircu 0 DISPLAY command, but didn’t automatically replace the failed disk in the array and cause a rebuild.

The only available option for replacing the disk was to set it as a “hot spare” with:

sas2ircu 0 hotspare 2:10

— the disk in question was 2:10 as it was the tenth disk on what showed up as the second (for some reason, even though there’s only one!) enclosure.

This gives a large warning about data loss or corruption, to which you must (after ensuring it’s the correct disk ID!) say YES. It then adds that disk as a hot spare and immediately uses it to rebuild the array containing the failed disk.
This adds it back into the array as though nothing had failed at all — which is what I wanted, but couldn’t see another way to do it!
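
If you want to keep an eye on the rebuild afterwards, the controller status command should show the current operation and percentage complete (controller 0, as in the example above):

sas2ircu 0 STATUS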

Fairly odd but easy to remember once you realise that there’s no other option with sas2ircu to allow you to replace a failed disk in an array! 🙂

(Maybe there are other tools which make this more obvious, but sas2ircu is the only one I had to hand)

Dell servers, warranty facts and refresh-mcollective-metadata

On our physical Dell servers we install the dell-omsa packages which give us the ability to monitor our underlying hardware.

With that in place, you can use facter to report on all sorts of useful things about the hardware, including the state of the warranty.

The fact which checks warranty information uses dell-omsa to pull the service tag of the server and submits it to Dell’s API, which then returns info about the status of your warranty.

You can then use mcollective to report on this. This can be really useful if you can’t remember what you bought when!

Unfortunately, from time to time it breaks and we start getting cronjob output which looks like this:

/usr/libexec/mcollective/refresh-mcollective-metadata
Could not retrieve fact='warranty_end', resolution='': undefined method `[]' for nil:NilClass
Could not retrieve fact='warranty_days_left', resolution='': can't dup NilClass
Could not retrieve fact='warranty_start', resolution='': undefined method `[]' for nil:NilClass
Could not retrieve fact='warranty_end', resolution='': undefined method `[]' for nil:NilClass

This happens just frequently enough to be a familiar problem for us, but not frequently enough for the fix to stick in my mind!

Googling for the error messages yields a couple of mailing list threads asking about this error and how to work around it – both of which were started by my colleague Jonathan Gazeley the first time we hit the problem. [1]

There are no actual fixes in those threads, although one post did hint at the root cause being mcollective caching the result of the Dell API call – without actually stating where it gets cached.

So, it’s strace time!

sudo strace -e open /usr/libexec/mcollective/refresh-mcollective-metadata 2>&1 | less

Skip to the end, and page back until you get to the bit where it starts complaining about the warranty fact, and you find that it’s trying to open /var/tmp/dell-warranty-XXXXXXX.json where XXXXXXX is the service tag of the hardware.

...
open("/var/tmp/dell-warranty-XXXXXXX.json", O_RDONLY) = 3
Could not retrieve fact='warranty_end', resolution='': undefined method `[]' for nil:NilClass
Could not retrieve fact='warranty_days_left', resolution='': can't dup NilClass
...

In our most recent case, the contents of that file looked like this:

$ cat /var/tmp/dell-warranty-XXXXXXX.json 
{
  "GetAssetWarrantyResponse": {
    "GetAssetWarrantyResult": {
      "Response": null,
      "Faults": {
        "FaultException": {
          "Message": "The tag you sent is not present. Check your separator character and ensure it is |.",
          "Code": 4001
        }
      }
    }
  }
}

That looks a lot to me like the API call failed for some reason.

The fix is to remove that stale cache file and re-run the refresh-mcollective-metadata script.

$ sudo rm /var/tmp/dell-warranty-*.json
$ sudo /usr/libexec/mcollective/refresh-mcollective-metadata

Then inspect the cached file again. It should now contain a lot of warranty info.
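
A quick way to eyeball it (and confirm it’s valid JSON at the same time) is something like:

cat /var/tmp/dell-warranty-*.json | python -m json.tool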

If it doesn’t, well… then you need to start working out why, and that’s an exercise left for the reader!

-Paul
[1] https://groups.google.com/forum/#!msg/puppet-users/LsK3HbEBMGc/-DSIOMNCDzIJ

I freely admit that the intent behind this post is mostly about getting the “fix” into those google search results – so I don’t have to resort to strace next time it happens!

“KDC has no support for encryption type” with old ciphers against Active Directory

A single machine somehow ended up with a differently-configured /etc/krb5.conf file, and recently all logins (both over ssh and on the console, except for root) stopped working. The messages in the logs were of the form:

Sep 29 15:04:58 test-host sshd[1433]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= host=host.example.com user=user12345
Sep 29 15:04:58 test-host sshd[1433]: pam_krb5[1433]: authentication fails for 'user12345' (user12345@REALM.EXAMPLE.COM): Authentication failure (KDC has no support for encryption type)
Sep 29 15:05:00 test-host sshd[1433]: Failed password for user12345 from 1.2.3.4 port 50432 ssh2

The reason for this was simple – the Kerberos config in /etc/krb5.conf contained the following lines:

[libdefaults]
        ... (other lines snipped)
        default_tkt_enctypes = des-cbc-crc
        default_tgs_enctypes = des-cbc-crc

These settings force the use of an older DES encryption type which is only 56-bit, and which has been disabled by default since Windows 7/Windows Server 2008 R2. Removing these lines, so that the encryption type is negotiated automatically, allows a stronger encryption type supported by the Active Directory servers to be used, and lets us log in once more. Phew!
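
A quick way to confirm the fix is to request a ticket as an affected user and look at which encryption types were actually negotiated:

kinit user12345@REALM.EXAMPLE.COM
klist -e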

(This is a legacy CentOS 5 server, all the newer ones have the same Kerberos config on them — thankfully the same config works on CentOS 5/6/7 and Debian/Ubuntu without modifications thus far!)

What’s using all my swap?

On a couple of occasions recently, we’ve noticed swap use getting out of hand on a server or two. There’s been no common cause so far, but the troubleshooting approach has been the same in each case.

To try and tell the difference between a VM which is generally “just a bit tight on resources” and a situation where a process has run away, it can sometimes be handy to work out which processes are hitting swap.

The approach I’ve been using isn’t particularly elegant, but it has proved useful so I’m documenting it here:

grep VmSwap /proc/*/status 2>&1 | perl -ne '/\/(\d+)\/[^\d]*(\d+) (.B)$/g;if($2>0){$name=`ps -p $1 -o comm=`;chomp($name);print "$name ($1) $2$3\n"}'

Let’s pick it apart a component at a time.

grep VmSwap /proc/*/status 2>&1

The first step is to pull out the VmSwap line from the PID status files held in /proc. There’s one of these files for each process on the system and it tracks all sorts of stuff. VmSwap is how much swap is currently being used by this process. The grep gives output like this:

...
/proc/869/status:VmSwap:	     232 kB
/proc/897/status:VmSwap:	     136 kB
/proc/9039/status:VmSwap:	    5368 kB
/proc/9654/status:VmSwap:	     312 kB
...

That’s got a lot of useful info in it (eg the PID is there, as is the amount of swap in use), but it’s not particularly friendly. The PID is part of the filename, and it would be more useful if we could have the name of the process as well as the PID.

Time for some perl…

perl -ne '/\/(\d+)\/[^\d]*(\d+) (.B)$/g;if($2>0){$name=`ps -p $1 -o comm=`;chomp($name);print "$name ($1) $2$3\n"}'

Dealing with the shell side of things first (before we dive into the perl code), “-ne” says to perl “I want you to run the following code against every line of input I pipe your way”.

The first thing we do in perl itself is run a regular expression across the line of input looking for three things; the PID, the amount of swap used and the units reported. When the regex matches, this info gets stored in $1, $2 and $3 respectively.

I’m pretty sure the units are always kB but matching the units as well seemed safer than assuming!

The if statement allows us to ignore processes which are using 0kB of swap because we don’t care about them, and they can cause problems for the next stage:

$name=`ps -p $1 -o comm=`;chomp($name)

To get the process name, we run a “ps” command in backticks, which allows us to capture the output. “-p $1” tells ps that we want information about a specific PID (which we matched earlier and stored in $1), and “-o comm=” specifies a custom output format which is just the process name.

chomp is there to strip the ‘\n’ off the end of the ps output.

print "$name ($1) $2$3\n"

Lastly we print out the $name of the process, its PID and the amount of swap it’s using.

So now, you get output like this:

...
automount (869) 232kB
cron (897) 136kB
munin-node (9039) 5364kB
exim4 (9654) 312kB
...
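
If the list is long, piping it through sort makes the biggest consumers easier to spot (a reverse numeric sort on the third column; this assumes GNU sort, which copes with the trailing “kB”):

grep VmSwap /proc/*/status 2>&1 | perl -ne '/\/(\d+)\/[^\d]*(\d+) (.B)$/g;if($2>0){$name=`ps -p $1 -o comm=`;chomp($name);print "$name ($1) $2$3\n"}' | sort -k3 -rn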

The output is a little untidy, and there is almost certainly a more elegant way to get the same information. If you have an improvement, let me know in the comments!

Molly-guard for RHEL/CentOS – protect your hosts from accidental reboots!

Molly-guard is a very useful package which replaces the default halt and reboot (and other related) commands with a version which prompts you to type the hostname of the host you intended to halt/reboot before it continues to do so. For example:

root@testhost:~# reboot
I: molly-guard: reboot is always molly-guarded on this system.
Please type in hostname of the machine to reboot: [type incorrect hostname]
Good thing I asked; I won't reboot testhost ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

This is invaluable if you use a lot of different systems and they are often in use by other users whom you don’t want to anger with accidental reboots…

For Debian-based distros (including Ubuntu), it’s available via a simple apt-get install molly-guard. On RHEL-based distros, unfortunately, it’s not in the base repositories and I was unable to find a suitably-trustworthy repository which contains it.

So this leads to asking some questions:

What does it do?

Simply put, it copies the existing /sbin/halt and related commands to a separate directory (by default /lib/molly-guard), and replaces them with symlinks to /lib/molly-guard/molly-guard to ensure that the new executable is used.

By default it only requires hostname confirmation when you are logged in via ssh, but this can be changed to always ask for the hostname by setting the ALWAYS_QUERY_HOSTNAME variable in the /etc/molly-guard/rc configuration file. Further customisations are possible by adding scripts to run to the /etc/molly-guard/run.d directory, and if any of these exit with a non-zero exit code then the reboot is aborted. (This is how the hostname check is done, but you can add whatever logic you want via this method)
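
As an example of that last point, a hypothetical /etc/molly-guard/run.d/20-block-if-flagged script could abort any reboot while a “do not reboot” flag file is in place:

#!/bin/sh
# abort the halt/reboot if someone has left a flag file in place
if [ -e /etc/molly-guard/do-not-reboot ]; then
  echo "/etc/molly-guard/do-not-reboot exists; aborting." >&2
  exit 1
fi
exit 0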

How can we make this work on RHEL / Why are there no packages for RHEL?

Someone kindly ported a version of molly-guard from Debian to RHEL, and a github repo of this is available here. Unfortunately this doesn’t quite solve the problem, as creating a package from this (having updated it for molly-guard 0.6.2) creates an RPM which gives us errors when we try to install it:

Running Transaction Test


Transaction Check Error:
  file /sbin/halt from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/poweroff from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/reboot from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/shutdown from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64

RPM really doesn’t like replacing files which are owned by another package, so an alternative (you’ll see what I did there in a minute) strategy is required.

Handily there’s a tool called alternatives which can handle selecting which of a set of binaries to use, via managing a directory of symlinks. (See!)

If we re-create the RPM without the explicit symlinks and instead use a post-install script snippet which copies the halt/reboot binaries to the molly-guard directory and sets up alternatives to point at them then this might just work!
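
As a rough sketch of the copying half of that idea (the exact list of commands and the paths would need to match whatever the spec file actually installs):

# in the RPM %post scriptlet: keep a copy of the real binaries for the molly-guard wrapper to call
mkdir -p /lib/molly-guard
for cmd in halt poweroff reboot shutdown; do
  if [ -e "/sbin/$cmd" ] && [ ! -e "/lib/molly-guard/$cmd" ]; then
    cp -a "/sbin/$cmd" "/lib/molly-guard/$cmd"
  fi
done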

There are a bunch more reboot/halt commands which are in /usr/bin/... on RHEL, so we need to turn those into links as well:

alternatives --install /sbin/halt halt              /lib/molly-guard/molly-guard 999 \
  --slave /sbin/poweroff          poweroff          /lib/molly-guard/molly-guard \
  --slave /sbin/reboot            reboot            /lib/molly-guard/molly-guard \
  --slave /sbin/shutdown          shutdown          /lib/molly-guard/molly-guard \
  --slave /sbin/coldreboot        coldreboot        /lib/molly-guard/molly-guard \
  --slave /sbin/pm-hibernate      pm-hibernate      /lib/molly-guard/molly-guard \
  --slave /sbin/pm-suspend        pm-suspend        /lib/molly-guard/molly-guard \
  --slave /sbin/pm-suspend-hybrid pm-suspend-hybrid /lib/molly-guard/molly-guard \
\
  --slave /usr/bin/reboot         usrbinreboot      /lib/molly-guard/molly-guard \
  --slave /usr/bin/halt           usrbinhalt        /lib/molly-guard/molly-guard \
  --slave /usr/bin/poweroff       usrbinpoweroff    /lib/molly-guard/molly-guard

# Ensure we're using this by default:
alternatives --set halt /lib/molly-guard/molly-guard

At this point we’ve got a set of alternatives and a post-install RPM scriptlet which copies the required commands into the /lib/molly-guard directory, but there is still a problem that RPM will clobber these when the clashing packages are updated! So we definitely need to ensure that our alternatives get re-added after any updates. To do this we can do one of the following:

  • Have a cron job which runs regularly and enforces the alternatives setting for “halt” (the “slave” entries trigger all the others to update when this is done)
  • Use Puppet or some other configuration management to enforce the alternatives setting (will have a small window of the binaries being the new version until the alternatives are reset by Puppet on the next run)
  • Scrap the alternatives method and use something else like modifying the PATH to ensure our reboot/halt versions are before the system versions (will break if scripts use full paths to the commands, which is pretty common)
  • Drop this approach and create a PAM module which does the same as molly-guard, then use consolehelper and PAM instead (untested)

I’ve decided to go for the second of these and have configuration management ensure that the “alternatives” settings are enforced.
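
For what it’s worth, even a plain exec resource is enough to keep that enforced; something along these lines (resource name made up, and whether “alternatives --set” fully repairs a clobbered link after a package update is something you’d want to test):

exec { 'molly-guard-alternatives':
  command  => 'alternatives --set halt /lib/molly-guard/molly-guard',
  unless   => '[ "$(readlink -f /sbin/halt)" = "/lib/molly-guard/molly-guard" ]',
  path     => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
  provider => shell,
}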

Will this have any side-effects?

One side-effect is that on RHEL6 the reboot-related commands in /usr/bin use “consolehelper” to control access to these commands through PAM. Without some additional jiggery-pokery this functionality will be broken by this update.

Updates to the packages will need to be handled somehow, perhaps by detecting that the binaries have been put back in place and updating the molly-guard-controlled versions. (Eek!)

I’ve not looked at updating the package for RHEL7, where the reboot and related commands all link to systemctl for systemd control. This is likely to need some different shenanigans to get it to work…

Where can I get the packages?

These are a work in progress but the updated git repository is available here. RPMs will be available from http://packages.bris.ac.uk/centos/6/zone_d/ (UoB internal only access) once finalised. If it works well these should be published more widely 🙂

No RHEL7 packages yet I’m afraid — I need to investigate how to get this to work with systemd!

(Bonus Q: Why is it called Molly-guard? See this definition for an explanation)