Service availability monitoring with Nagios and BPI

Several times, senior management have asked Team Wireless to provide an uptime figure for eduroam. While we do have an awful lot of monitoring of systems and services, it has never been possible to give a single uptime figure, because making sense of the many Nagios checks (currently 2704 of them) takes some detailed knowledge.

From the point of view of a Bristol user on campus, there are three services that must be up for eduroam to work: RADIUS authentication, DNS, and DHCP. For resilience, the RADIUS service for eduroam is provided by three servers, DNS by two servers and DHCP by two servers. It’s hard to see the overall state of the eduroam service at a glance from which systems and services are currently up in Nagios.

Nagios gives us detailed performance monitoring and graphing for each system and service but has no built-in aggregation tools. I decided to use an addon called Business Process Intelligence (BPI) to do the aggregation. We built this as an RPM for easy deployment, and configured it with Puppet.

BPI lets you define meta-services which consist of other services that are currently in Nagios. I defined a BPI service called RADIUS which contains all three RADIUS servers. Any one RADIUS server must be up for the RADIUS group to be up. I did likewise for DNS and DHCP.

BPI also lets meta-services depend on other groups. To consider eduroam to be up, you need the RADIUS group and the DNS group and the DHCP group to be up. It’s probably easier to see what’s going on with a screenshot of the BPI control panel:

[Screenshot: BPI control panel]

So far, these BPI meta-services are only visible in the BPI control panel and not in the Nagios interface itself. The BPI project does, however, provide a Nagios plugin, check_bpi, which allows Nagios to monitor the state of BPI meta-services. Once Nagios is checking a meta-service, its standard availability reporting can draw a table of availability data for it.
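To hook the meta-services into Nagios we point a service check at check_bpi. I won't reproduce our exact config here, but the sketch below shows the general shape – it assumes check_bpi simply takes the BPI group ID as its only argument (check the BPI documentation for the exact arguments your version expects), and the host name is made up:

# Hypothetical sketch only - the argument format for check_bpi is an assumption
define command {
    command_name    check_bpi
    command_line    $USER1$/check_bpi $ARG1$
}

define service {
    use                  generic-service   ; assumes the stock template exists
    host_name            nagios-server     ; hypothetical host
    service_description  eduroam BPI
    check_command        check_bpi!eduroam
}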

[Screenshot: eduroam uptime]

So now we have a definitive uptime figure for the overall eduroam service. How many nines? An infinite number of them! 😉 (Also, I like the fact that “OK” is split into scheduled and unscheduled uptime…)

This availability report is still only visible to Nagios users though. It’s a few clicks deep in the web interface and provides far more detail than is actually needed. We needed a simpler way of getting at this information.

So I wrote a script called nagios-report which runs on the same host as Nagios and generates custom availability reports with various options for output formatting. As an example:

$ sudo /usr/bin/nagios-report -h bpi -s eduroam -t lastmonth -o uptime -v -d
Total uptime percentage for service eduroam during period lastmonth was 100.000%

This can now be run as a cron job to automagically email availability reports to people. The one we were asked to provide is monthly, so this is our crontab entry to generate it on the first day of each month:

# Puppet Name: eduroam-availability
45 6 1 * * nagios-report -h bpi -s eduroam -t lastmonth -o uptime -v -d
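Cron emails the job's output to the owner of the crontab by default; if the report needs to go to a shared address instead, a MAILTO line at the top of the crontab takes care of it (the address below is a made-up example):

MAILTO=wireless-reports@example.org
45 6 1 * * nagios-report -h bpi -s eduroam -t lastmonth -o uptime -v -d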

It’s great that our work on resilience has paid off. Just last week (during the time covered by the eduroam uptime table) we experienced a temporary loss of about a third of our VMs, and yet users did not see a single second of downtime. That’s what we’re aiming for.

Making suexec work…

suexec is a useful way of getting apache to run interactive magic (CGI scripts, PHP scripts, etc.) as a different user/group from the one that apache itself runs as.

Most configuration guides tell you:

  • Add “SuexecUserGroup $OWNER $GROUP” to your apache config
  • Look in /var/log/httpd/suexec.log to see what’s going wrong
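For example, a minimal vhost snippet might look like this (the vhost name, user and group are made up for illustration):

<VirtualHost *:80>
    ServerName app.example.org
    # Run CGI under this vhost as appuser:appgroup rather than the apache user
    SuexecUserGroup appuser appgroup
    ScriptAlias /cgi-bin/ /var/www/app/cgi-bin/
</VirtualHost>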

What they don’t tell you is that suexec makes some assumptions about where to find the things it will execute, and that you can’t guarantee the log location is consistent across distros (or even versions of the same distro). I’m setting this up on CentOS 7, so the examples below were produced in that environment.

You can get useful information about both of the above by running the following:

[myuser]$ sudo suexec -V
-D AP_DOC_ROOT="/var/www"
-D AP_GID_MIN=100
-D AP_HTTPD_USER="apache"
-D AP_LOG_SYSLOG
-D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
-D AP_UID_MIN=500
-D AP_USERDIR_SUFFIX="public_html"
[myuser]$

AP_DOC_ROOT and AP_USERDIR_SUFFIX control which paths suexec will execute scripts from. In this case we’re restricted to running stuff that lives somewhere under “/var/www” or “/home/*/public_html”.

If you’ve got content elsewhere (for example, an application which expects to be installed under /usr/share/foo/cgi-bin) then it’s not sufficient to put a symlink from /var/www/foo/cgi-bin to /usr/share/foo/cgi-bin as suexec checks the actual location of the file, not where it was called from.

This is sensible, as it stops you putting a symlink in place which points at something nasty like /bin/sh.

AP_GID_MIN and AP_UID_MIN limit which users/groups suexec will run stuff as. In this case it won’t run anything with a GID < 100 or a UID < 500. This is sensible as it stops you running CGI scripts as privileged system users.

The GID limit is probably not an issue, but the UID limit might cause wrinkles if you look after one of the 18 UoB users who have a centrally allocated unix UID that is under 500 (because they’ve been here since before that was a problem)[1]

AP_LOG_SYSLOG is a flag that says "send all log messages to syslog" – which is fine, and arguably an improvement over writing to a specific log file. It doesn’t immediately tell you where those messages end up, but I eventually found them in /var/log/secure… which seems a sensible place for them to end up.

Once you’ve got all that sorted, you’ll need to make selinux happy. Thankfully, that’s dead easy and can be done by enabling the httpd_unified boolean. If you’re using the jfryman/selinux puppet module, it’s as easy as:


selinux::boolean { 'httpd_unified': }
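If you're not managing selinux with puppet, the equivalent one-off command is (the -P makes it persistent across reboots):

sudo setsebool -P httpd_unified 1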

I think that’s all the bumps I’ve hit on this road so far, but if I find any more I’ll update this article.

-Paul
[1] but then again, if you look after them (or one of the 84 users whose UID is under 1000) you’re probably already used to finding odd things that don’t work!

RHEL 7.2 Authconfig follow up — don’t mix local user info with sssd!

Quick follow-up on my previous post about authconfig with more info.

So it turns out that this was intentional, and the change was made because 2-factor authentication (2FA) support was added to SSSD.
This was added as a fix for RHEL bug 1204864, with the following comment:

With the current configuration pam_unix will always prompt the user for a password. Letting SSSD ask users of 2FA again for the password will lead to a bad user experience. Letting SSSD only ask for the second factor will make it hard for applications like gdm to show specific 2FA dialogs.

This means that if you use a mix of local (/etc/passwd or /etc/shadow) and remote (via sssd) user information for a particular user, then the user in question will only auth against their local password.
If they don’t have a local password then they will be unable to authenticate.

This seems a particularly odd thing to change during a point-release of RHEL, as I would expect that using a mix of local and remote user information is more common than using 2FA with sssd…

I thought this was worth stating separately from the previous post, as it’s more general than just when performing hackery to change UIDs — any local user entry will cause this to happen when used in conjunction with sssd.
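A quick way to check whether a particular account is affected (i.e. whether it has a local passwd entry as well as one in sssd) is to query each NSS source separately – "testuser" is a hypothetical account name:

# Is there an entry in /etc/passwd for this user?
getent -s files passwd testuser

# ...and does sssd also know about them?
getent -s sss passwd testuser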

Additional info:

The code in authconfig which enforces this is as follows (from authconfig-6.2.8/authinfo.py in the current CentOS 7.x git sources, line 3812):

  # do not continue to following modules if authentication fails
  if name == "unix" and stack == "auth" and (self.enableSSSDAuth or
          self.implicitSSSDAuth or self.enableIPAv2) and (not self.enableNIS):
      logic = LOGIC_FORCE_PKCS11  # make it or break it logic

..so this applies specifically when you are using SSSD and not NIS; it doesn’t kick in for other remote authn/authz methods such as KRB5 without SSSD.

Rocks and /install directory on compute nodes

I noticed recently that some of our compute nodes were getting a bit short on disk space, as we have a fairly large set of (multiple versions of) large applications installed into /opt on each node rather than sharing those applications over NFS via /share/apps (which is apparently the usual Rocks way).

In looking at this I noticed that the /install directory on each compute node contained a complete copy of all packages installed during rebuild! In our case the /install directory was using about 35GB, mostly because of multiple versions of Matlab and Mathematica being installed (which are up to 10GB each now…).
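If you want to see how much space this is costing on one of your own nodes, a quick check is:

# Space used by the local package copies and the yum cache
du -sh /install /var/cache/yum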

Anyhow, to avoid this you can convert from using the <package> XML tags in your extend-compute.xml file to using a post-install script (the <post> section) which calls yum install explicitly for the packages you want to install.
Be sure to also run yum clean packages regularly in the script, otherwise you’re just moving the packages from /install into the yum package cache in /var/cache/yum !

e.g. Convert this:

<package>opt-matlab-R2016a</package>

..into this:

<post>
yum install -y opt-matlab-R2016a
yum clean packages
</post>

This has allowed us to continue installing packages locally until we use up that extra 35GB 🙂

Users with UIDs < 1000: workarounds and how RHEL 7.2 breaks some of these hacks

When RHEL/CentOS 7.2 was released there was a change in PAM configs which authconfig generates.
For most people this won’t have made any difference, but if you occasionally use entries in /etc/passwd to override user information from other sources (e.g. NIS, LDAP) then this can bite you.

The RHEL bug here shows the difference and discussion around it, which can be summarised in the following small change.

In CentOS 7.1 you see this line in the PAM configs:

auth sufficient pam_unix.so nullok try_first_pass

…whilst in 7.2 it changes to this:

auth [success=done ignore=ignore default=die] pam_unix.so nullok try_first_pass

The difference here is that the `pam_unix` entry essentially changes from (in PAM terms) “sufficient” to “required”, and any failure there means that authentication is denied.
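To see why that matters, it helps to look at where pam_unix sits in the generated auth stack. The lines below are roughly what authconfig produces for a 7.1-era sssd setup (trimmed, and the exact lines will vary with the options you have enabled):

auth        required      pam_env.so
# In 7.2 the control field on the next line becomes [success=done ignore=ignore default=die],
# so a pam_unix failure now stops the stack before pam_sss is ever consulted.
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        sufficient    pam_sss.so use_first_pass
auth        required      pam_deny.so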

“Well, my users are in LDAP so this won’t affect me!”

Probably, but if you happen to add an entry to /etc/passwd to override something for a user, such as their shell, home directory or (in this case) their UID (yes, hacky I know, but old NIS habits die hard…):

testuser:*:12345:12345:Test User:/home/testuser:/bin/bash

..then this means that your user is defined as being a target for the pam_unix module (since it’s a local user defined in the local passwd file), and from 7.2 you hit that modified pam_unix line and get auth failures. In 7.1 you’d get entries in the logs saying that pam_unix denied access but it would continue on through the subsequent possibilities (pam_ldap, pam_sss or whatever else you have in there) and check the password against those.

The bug referenced above suggests a workaround of using authconfig --enablenis, as this happens to set the pam_unix line back to the old version, but that has a bunch of other unwanted effects (like enabling NIS in /etc/nsswitch.conf).

Obviously the real fix for our particular case is to not change the UID (which was a terrible hack anyway) but to reduce the UID_MIN used in /etc/login.defs to below the minimum UID required, and hope that there aren’t any clashes between users in LDAP and users which have already been created by packages being added (possibly ages ago…).
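For reference, that's a one-line change in /etc/login.defs (the value below is an arbitrary example – pick something below your lowest legacy UID):

# /etc/login.defs (excerpt)
# Minimum UID for ordinary (non-system) user accounts
UID_MIN    200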

Hopefully this saves someone else some trouble with this surprise change in behaviour when upgrades are applied to existing machines and authconfig runs!

Some additional notes:

  • This won’t change until you next run authconfig, which in this case was loooong after the 7.2 update…
  • Not recommended : Putting pam_sss or pam_ldap before pam_unix as a workaround for the pam_unix failures in 7.2. Terrible problems happen with trying to use local users (including root!) if the network goes away.
  • Adding the UID_MIN change as early as possible is a good idea, so in your bootstrap process would be sensible, to avoid package-created users getting added with UIDs near to 1000.
  • These Puppet modules were used in creation of this blog post:
    • joshbeard/login_defs 0.2.0
    • sgnl05/sssd 0.2.1

Merging SELinux policies

We make extensive use of SELinux on all our systems. We manage SELinux config and policy with the jfryman/selinux Puppet module, which means we store SELinux policies in plain text .te format – the same format that audit2allow generates them in.

One of our SELinux policies that covers permissions for NRPE is a large file. When we generate new rules (e.g. for new Nagios plugins) with audit2allow it’s a tedious process to merge the new rules in by hand and mistakes are easy to make.
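For anyone who hasn't worked with them, a policy in .te format looks something like this (the module name, types and classes here are illustrative rather than taken from our real policy):

module nrpe_local 1.0;

require {
    type nrpe_t;
    type var_log_t;
    class file { read open };
}

# An example of the kind of rule that needs merging into the existing policy
allow nrpe_t var_log_t:file { read open };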

So I wrote semerge – a tool to merge SELinux policy files with the ability to mix and match stdin/stdout and reading/writing files.

This example accepts input from audit2allow and merges the new rules into an existing policy:

cat /var/log/audit/audit.log | audit2allow | semerge -i existingpolicy.te -o existingpolicy.te

And this example deduplicates and alphabetises an existing policy:

semerge -i existingpolicy.te -o existingpolicy.te

There are probably bugs so please do let me know if you find it useful and log an issue if you run into problems.

git – deleting local branches that were merged upstream

Like most people, we’re using git right at the centre of our puppet config management workflow. As I’ve mentioned previously, it features prominently in my top 10 most frequently used commands.

Our workflow is based around feature branches, and quite often we end up in a situation where we have a lot of local branches which have already been merged in the copy held upstream on github/gitlab/etc.

Today, I looked and noticed that while we only had 4 active branches on the gitlab server I had 41 branches locally, most of which related to features fixed a long time ago.

This doesn’t cause much of a problem although it can get confusing (especially if you’re likely to re-use a branch name in the future) – 41 branches is enough that deleting them one at a time by hand is tedious.

It looks like some gui tools/IDEs will take care of this for you, but I’m a command line kinda guy, and the git command line tools don’t seem to quite have this functionality baked in.

After a bit of poking about, I came up with the following approach which deletes any branch which no longer exists upstream.


# Delete all stale remote-tracking branches in origin.
git remote prune origin

# "git branch -vv" now includes the word "gone" against branches which the previous command removed, so
# use awk to identify those branches and plumb the list into "git branch -d" which will delete them locally
git branch -vv | awk '/: gone\]/ { print $1 }' | xargs git branch -D

The above seemed to do the right thing for the two repos I tested it on, but well… you might want to try it on something unimportant before you trust it!
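One way to do that is a dry run: stick an echo in front of the final git command so it prints what it would delete without touching anything:

# Dry run - print the branches that would be deleted, but don't delete them
git branch -vv | awk '/: gone\]/ { print $1 }' | xargs echo git branch -D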

NB if a branch has only ever existed locally (and never appeared under origin), it should leave it alone. But I’ve not tested that bit either.

Nagios – aggregating performance data

[Graph: bind_aggregate]

The Wireless team use Nagios to monitor our servers. As well as availability monitoring, we use pnp4nagios to collect and graph performance data. This works reasonably well for us, and we can easily draw graphs of everything from CPU temperature to how many queries/second our mariadb servers are handling.

However, the graphs are drawn on a per-host basis, which wasn’t a problem until now…

Like a lot of people at UoB, we’re migrating services to the f5 load balancers so that we can scale them out as we need to. Services which were previously single hosted are now fronted by several servers in a load balanced configuration.

It would be nice to be able to combine performance data from multiple nodes so we can get a picture of how many queries/second the entire pool is handling. As I’ve written about previously, this sort of information is very useful when it comes to capacity planning.

The f5 will tell us how many tcp/udp connections it’s handling for that pool, and the amount of traffic, but that’s not quite the same thing as the number of queries. Nagios has that information, it just can’t graph it easily.

I had a look around at a few nagios plugins that claimed to solve this problem. The best one I could find looked difficult to deploy without dragging in more dependencies than we wanted to maintain on a production box. Its licence wasn’t particularly conducive to hacking it about to make it deployable in our environment, so I wrote my own from scratch.

It’s available from here: https://github.com/uobnetops/nagios_aggregate_perfdata

The plugin works by scanning through the status.dat file on the nagios server itself, summarizing the checks/hosts which match. It then reports the sum (or average if that’s what you prefer) as perfdata for nagios to graph.
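If you're not sure where status.dat lives on your install, the main Nagios config will tell you (the path below is the usual one for the RHEL/CentOS packages):

# Find the status_file setting in the main Nagios config
grep '^status_file' /etc/nagios/nagios.cfg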

If you think it might be useful to you, please use it! If you spot something it doesn’t do (or doesn’t do as well as you like) we’re more than happy to accept pull requests or issues logged through github.

DNS Internals: delegating a subdomain to a server listening on a non-standard port

I’m writing this up because it took me quite some time to get my head around how to do this, and I found answers around the internet varying from “not possible” through to “try this” (which didn’t work) and “switch off this security feature you really like having” (no)

I found a way to make it happen, but it’s not easy. I’ll walk you through the problem, and how each way I attempted to solve it failed.

All the names below are hypotheticals, and for the sake of argument we’re trying to make “foo.subdomain.local” resolve via the additional server.

Problem:
Suppose you have two DNS servers. One which we’ll call “NS1” and one which we’ll call “NS-NEW”.

  • NS1 is a recursive server running bind, which all your clients point at to get their DNS information. It’s listening on port 53 as standard.
  • NS-NEW is an authoritative server which is listening on a non-standard port (8600) and for these purposes it’s a black box, we can’t change its behaviour.

You want your clients to be able to resolve the names that NS-NEW is authoritative for, but you don’t want to have to reconfigure the clients. So NS1 needs to know to pass those queries on to NS-NEW to get an answer.
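One thing that makes testing easier: you can talk to NS-NEW directly, because dig lets you specify a port:

# Query NS-NEW directly on its non-standard port (substitute the real IP)
dig @$IP_OF_NS-NEW -p 8600 foo.subdomain.local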

Attempt 1 – “slave zone”
My first thought was to configure NS1 to slave the zone from NS-NEW.

zone "subdomain.local" {
        type slave;
        file "/var/named/slave/priv.zone";
        masters { $IP_OF_NS-NEW port 8600; };
};

This didn’t work for me because NS-NEW isn’t capable of doing zone transfers. Pity, as that would have been really neat and easy to manage!
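You can confirm whether a server is willing to hand out a zone by asking it for a transfer manually:

# Ask NS-NEW for a full zone transfer - if this fails, slaving the zone won't work
dig @$IP_OF_NS-NEW -p 8600 subdomain.local AXFR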

Attempt 2 – “forward zone”
Then I tried forwarding queries from NS1 to NS-NEW, by using bind’s “forward zone” feature.

zone "subdomain.local" {
        type forward;
        forward only;
        forwarders { $IP_OF_NS-NEW port 8600; };
};

This didn’t work because NS1 is configured to check for valid DNSSEC signatures. The root zone says that all its children are signed, and bind takes that to mean that all the grandchildren of the root should be signed as well.

The software running on NS-NEW isn’t capable of signing its zone information.

It doesn’t appear to be possible to selectively turn off DNSSEC checking on a per-zone basis, and I didn’t want to turn that off for our whole infrastructure as DNSSEC is generally a Good Thing.

Attempt 3 – “delegation”
I did think I could probably work around it by making NS1 authoritative for the “local.” top level domain, then using NS records in the zonefile for “local.” to directly delegate the zone to NS-NEW.

Something like this:

$TTL 86400	; default TTL for this zone
$ORIGIN local.
@       IN  SOA  NS1.my.domain. hostmaster.my.domain. (
                     2016031766 ; serial number
                     28800      ; refresh
                     7200       ; retry
                     604800     ; expire
                     3600       ; minimum
                     )
        IN  NS  NS1.my.domain.

; delegated zones
subdomain  IN  NS NS-NEW.my.domain.

Unfortunately that doesn’t work either, as it’s not possible to specify a port number in an NS record, and NS-NEW isn’t listening on a standard port.

Attempt 4 – “a little of option 2 and a little of option 3”
Hold on to your hats, this gets a little self referential.

I made NS1 authoritative for “local.”

zone "local" {
        type master;
        file "/var/named/data/zone.local";
};

I configured NS records in the “local.” zone file, which point back at NS1

$TTL 86400	; default TTL for this zone
$ORIGIN local.
@       IN  SOA  NS1.my.domain. hostmaster.my.domain. (
                     2016031766 ; serial number
                     28800      ; refresh
                     7200       ; retry
                     604800     ; expire
                     3600       ; minimum
                     )
        IN  NS  NS1.my.domain.

; delegated zones
subdomain  IN  NS NS1.my.domain.

I then configured a “subdomain.local.” forward zone on NS1 which forwards queries on to NS-NEW

zone "subdomain.local" {
        type forward;
        forward only;
        forwarders { $IP_OF_NS-NEW port 8600; };
};

To understand why this works, you need to understand how the recursion process for a query like “foo.subdomain.local.” happens.

When the query comes in, NS1 does this:

  • Do I already know the answer from a previously cached query? Let’s assume no for now.
  • Do I know which DNS server is responsible for “subdomain.local.” from a previously cached query? Let’s assume no for now.
  • Do I know which DNS server is responsible for “local.”? Ooh! Yes! That’s me!
  • Now I can look in the zone file for “local.” to see how to resolve “subdomain.local.” – there’s an NS record which says I should ask NS1 for an authoritative answer.
  • Now I ask NS1 for an answer to “foo.subdomain.local.”
  • NS1 can then forward my query off to NS-NEW and fetch an answer.

Because we haven’t had to go all the way up to the root to get our answer, we avoid encountering the DNSSEC issue for this zone.
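At that point a quick end-to-end test from a client (which uses NS1 as its resolver) should come back with the answer held by NS-NEW:

# From a client pointed at NS1
dig foo.subdomain.local

# ...or explicitly via NS1 (substitute NS1's real address)
dig foo.subdomain.local @$IP_OF_NS1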

Did you really do it like *that*?
Yes and no.

The above is a simplified version of what I actually had to do, as our production equivalent of NS1 isn’t a single server, and I had to take account of our zone file management process; all of that adds complexity which I don’t need to go into here.

There are also a few extra hoops to jump through to make sure that the “local.” domain can only be accessed by clients on our network, and to make sure that our authoritative infrastructure doesn’t “leak” the “local.” zone to the outside world.

What would you have liked to have done?
If NS-NEW was able to listen on a standard port, I’d have used a straight delegation to do it.

If NS-NEW was able to sign its zone data with DNSSEC, I’d have used a simple forward zone to do it.

NS-NEW isn’t *quite* the black box I treated it as in this article, but the restriction about not being able to make it listen on port 53 is a real one.

The software running on NS-NEW does have a feature request in its issue tracker for DNSSEC, which I’ll watch with interest – as that would allow me to tidy up our config and might actually enable some other cool stuff further down the line…