“KDC has no support for encryption type” with old ciphers against Active Directory

A single machine somehow managed to have a differently-configured /etc/krb5.conf file, and recently all logins (both via ssh and on the console, except for root) stopped working. The messages in the logs were of the form:

Sep 29 15:04:58 test-host sshd[1433]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= host=host.example.com user=user12345
Sep 29 15:04:58 test-host sshd[1433]: pam_krb5[1433]: authentication fails for 'user12345' (user12345@REALM.EXAMPLE.COM): Authentication failure (KDC has no support for encryption type)
Sep 29 15:05:00 test-host sshd[1433]: Failed password for user12345 from 1.2.3.4 port 50432 ssh2

The reason for this was simple – the Kerberos config in /etc/krb5.conf contained the following lines:

[libdefaults]
        ... (other lines snipped)
        default_tkt_enctypes = des-cbc-crc
        default_tgs_enctypes = des-cbc-crc

These settings force the use of the older DES encryption type, which is only 56-bit and has been disabled by default since Windows 7/Windows Server 2008 R2. Removing these lines lets the encryption type be negotiated automatically, so a stronger type supported by the Active Directory servers can be used, allowing us to log in once more. Phew!
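To check what gets negotiated once those lines are removed, request a fresh ticket and inspect the cache (klist’s -e flag prints the encryption type of each ticket; the exact enctype you’ll see depends on your AD setup, so treat this as a sketch):

kinit user12345@REALM.EXAMPLE.COM
klist -e
# Expect something like aes256-cts-hmac-sha1-96 rather than des-cbc-crc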

(This was a legacy CentOS 5 server; all the newer ones already share the same Kerberos config — thankfully that config works on CentOS 5/6/7 and Debian/Ubuntu without modification so far!)

What’s using all my swap?

On a couple of occasions recently, we’ve noticed swap use getting out of hand on a server or two. There’s been no common cause so far, but the troubleshooting approach has been the same in each case.

To tell the difference between a VM which is generally “just a bit tight on resources” and a situation where a process has run away, it can sometimes be handy to work out which processes are hitting swap.

The approach I’ve been using isn’t particularly elegant, but it has proved useful so I’m documenting it here:

grep VmSwap /proc/*/status 2>/dev/null | perl -ne '/\/(\d+)\/[^\d]*(\d+) (.B)$/g;if($2>0){$name=`ps -p $1 -o comm=`;chomp($name);print "$name ($1) $2$3\n"}'

Let’s pick it apart a component at a time.

grep VmSwap /proc/*/status 2>/dev/null

The first step is to pull out the VmSwap line from the PID status files held in /proc (sending errors about unreadable files to /dev/null). There’s one of these files for each process on the system, and it tracks all sorts of stuff. VmSwap is how much swap is currently being used by this process. The grep gives output like this:

...
/proc/869/status:VmSwap:	     232 kB
/proc/897/status:VmSwap:	     136 kB
/proc/9039/status:VmSwap:	    5368 kB
/proc/9654/status:VmSwap:	     312 kB
...

That’s got a lot of useful info in it (e.g. the PID is there, as is the amount of swap in use), but it’s not particularly friendly. The PID is part of the filename, and it would be more useful if we could have the name of the process as well as the PID.

Time for some perl…

perl -ne '/\/(\d+)\/[^\d]*(\d+) (.B)$/g;if($2>0){$name=`ps -p $1 -o comm=`;chomp($name);print "$name ($1) $2$3\n"}'

Dealing with the shell side of things first (before we dive into the perl code): “-ne” tells perl “I want you to run the following code against every line of input I pipe your way”.

The first thing we do in perl itself is run a regular expression across the line of input, looking for three things: the PID, the amount of swap used, and the units reported. When the regex matches, this info gets stored in $1, $2 and $3 respectively.

I’m pretty sure the units are always kB but matching the units as well seemed safer than assuming!

The if statement allows us to ignore processes which are using 0kB of swap because we don’t care about them, and they can cause problems for the next stage:

$name=`ps -p $1 -o comm=`;chomp($name)

To get the process name, we run a “ps” command in backticks, which allows us to capture the output. “-p $1” tells ps that we want information about a specific PID (which we matched earlier and stored in $1), and “-o comm=” specifies a custom output format which is just the process name.

chomp is there to strip the ‘\n’ off the end of the ps output.

print "$name ($1) $2$3\n"

Lastly we print out the $name of the process, its PID and the amount of swap it’s using.

So now, you get output like this:

...
automount (869) 232kB
cron (897) 136kB
munin-node (9039) 5364kB
exim4 (9654) 312kB
...

The output is a little untidy, and there is almost certainly a more elegant way to get the same information. If you have an improvement, let me know in the comments!
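For what it’s worth, here’s one possible tidier variant as a starting point: a plain shell loop instead of grep-into-perl. It assumes a /proc with VmSwap fields and, like the original, needs enough privilege to read other users’ status files; process names containing spaces would confuse the final sort.

for f in /proc/[0-9]*/status; do
  pid=${f#/proc/}; pid=${pid%/status}                  # PID is the middle path component
  swap=$(awk '/^VmSwap/ {print $2}' "$f" 2>/dev/null)  # kB of swap, if any
  if [ -n "$swap" ] && [ "$swap" -gt 0 ]; then
    printf '%s (%s) %skB\n' "$(ps -p "$pid" -o comm=)" "$pid" "$swap"
  fi
done | sort -k3 -rn    # biggest swap users first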

Molly-guard for RHEL/CentOS – protect your hosts from accidental reboots!

Molly-guard is a very useful package which replaces the default halt and reboot (and other related) commands with a version which prompts you to type the hostname of the host you intended to halt/reboot before it continues to do so. For example:

root@testhost:~# reboot
I: molly-guard: reboot is always molly-guarded on this system.
Please type in hostname of the machine to reboot: [type incorrect hostname]
Good thing I asked; I won't reboot testhost ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

This is invaluable if you use a lot of different systems and they are often in use by other users whom you don’t want to anger with accidental reboots…

For Debian-based distros (including Ubuntu), it’s available via a simple apt-get install molly-guard. On RHEL-based distros, unfortunately, it’s not in the base repositories and I was unable to find a suitably-trustworthy repository which contains it.

So this leads to some questions:

What does it do?

Simply put, it copies the existing /sbin/halt and related commands to a separate directory (by default /lib/molly-guard), and replaces them with symlinks to /lib/molly-guard/molly-guard to ensure that the new executable is used.
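On a machine with molly-guard installed, the layout looks something like this (illustrative, based on the description above rather than captured from a real host):

$ ls -l /sbin/reboot
lrwxrwxrwx 1 root root 28 ... /sbin/reboot -> /lib/molly-guard/molly-guard
$ ls /lib/molly-guard/
halt  molly-guard  poweroff  reboot  shutdown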

By default it only requires hostname confirmation when you are logged in via ssh, but it can be made to always ask by setting the ALWAYS_QUERY_HOSTNAME variable in the /etc/molly-guard/rc configuration file. Further customisation is possible by adding scripts to the /etc/molly-guard/run.d directory; if any of these exits with a non-zero exit code then the reboot is aborted. (This is how the hostname check is done, but you can add whatever logic you want via this method.)
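The rc file is just sourced shell, so the always-ask setting is a one-liner (the value shown is an assumption about the expected format):

# /etc/molly-guard/rc
# Ask for the hostname on every halt/reboot, not just over ssh:
ALWAYS_QUERY_HOSTNAME=true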

How can we make this work on RHEL / Why are there no packages for RHEL?

Someone kindly ported a version of molly-guard from Debian to RHEL, and a github repo of this is available here. Unfortunately this doesn’t quite solve the problem: building a package from it (updated for molly-guard 0.6.2) produces an RPM which gives us errors when we try to install it:

Running Transaction Test

Transaction Check Error:
  file /sbin/halt from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/poweroff from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/reboot from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64
  file /sbin/shutdown from install of molly-guard-0.6.2-1.1.noarch conflicts with file from package upstart-0.6.5-13.el6_5.3.x86_64

RPM really doesn’t like replacing files which are owned by another package, so an alternative (you’ll see what I did there in a minute) strategy is required.

Handily there’s a tool called alternatives which can handle selecting which of a set of binaries to use, via managing a directory of symlinks. (See!)

If we re-create the RPM without the explicit symlinks, and instead use a post-install script snippet which copies the halt/reboot binaries to the molly-guard directory and sets up alternatives to point at them, then this might just work!

There are a bunch more reboot/halt commands in /usr/bin/... on RHEL, so we need to turn those into links as well:

alternatives --install /sbin/halt halt              /lib/molly-guard/molly-guard 999 \
  --slave /sbin/poweroff          poweroff          /lib/molly-guard/molly-guard \
  --slave /sbin/reboot            reboot            /lib/molly-guard/molly-guard \
  --slave /sbin/shutdown          shutdown          /lib/molly-guard/molly-guard \
  --slave /sbin/coldreboot        coldreboot        /lib/molly-guard/molly-guard \
  --slave /sbin/pm-hibernate      pm-hibernate      /lib/molly-guard/molly-guard \
  --slave /sbin/pm-suspend        pm-suspend        /lib/molly-guard/molly-guard \
  --slave /sbin/pm-suspend-hybrid pm-suspend-hybrid /lib/molly-guard/molly-guard \
\
  --slave /usr/bin/reboot         usrbinreboot      /lib/molly-guard/molly-guard \
  --slave /usr/bin/halt           usrbinhalt        /lib/molly-guard/molly-guard \
  --slave /usr/bin/poweroff       usrbinpoweroff    /lib/molly-guard/molly-guard

# Ensure we're using this by default:
alternatives --set halt /lib/molly-guard/molly-guard
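You can then confirm that the whole group of commands points where you expect (the output wording varies between versions, but it should report the link target as /lib/molly-guard/molly-guard):

alternatives --display halt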

At this point we’ve got a set of alternatives and a post-install RPM scriptlet which copies the required commands into the /lib/molly-guard directory, but there’s still a problem: RPM will clobber these files whenever the clashing packages are updated! So we need to ensure that our alternatives get re-added after any updates. To do this we can do one of the following:

  • Have a cron job which runs regularly and enforces the alternatives setting for “halt” (the “slave” entries trigger all the others to update when this is done)
  • Use Puppet or some other configuration management to enforce the alternatives setting (there will be a small window where the binaries are the unguarded versions, until the alternatives are reset by Puppet on the next run)
  • Scrap the alternatives method and use something else like modifying the PATH to ensure our reboot/halt versions are before the system versions (will break if scripts use full paths to the commands, which is pretty common)
  • Drop this approach and create a PAM module which does the same as molly-guard, then use consolehelper and PAM instead (untested)

I’ve decided to go for the second of these, and have configuration management enforce the “alternatives” settings.
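Whichever of the first two options you pick, the command being re-asserted is the same one as above, since --set on the master link drags all the slave entries along with it. As a concrete sketch of the cron variant (the cron.d file name is hypothetical):

# /etc/cron.d/molly-guard-alternatives
# Re-assert the molly-guard alternatives every 15 minutes; --set is idempotent.
*/15 * * * * root /usr/sbin/alternatives --set halt /lib/molly-guard/molly-guard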

Will this have any side-effects?

One side-effect is that on RHEL6 the reboot-related commands in /usr/bin use “consolehelper” to control access to these commands through PAM. Without some additional jiggery-pokery this functionality will be broken by this update.

Updates to the packages will need to be handled somehow, perhaps by detecting that the binaries have been put back in place and updating the molly-guard-controlled versions. (Eek!)

I’ve not looked at updating the package for RHEL7, where the reboot and related commands all link to systemctl for systemd control. This is likely to need some different shenanigans to get it to work…

Where can I get the packages?

These are a work in progress, but the updated git repository is available here. RPMs will be available from http://packages.bris.ac.uk/centos/6/zone_d/ (UoB internal access only) once finalised. If it works well these should be published more widely 🙂

No RHEL7 packages yet I’m afraid — I need to investigate how to get this to work with systemd!

(Bonus Q: Why is it called Molly-guard? See this definition for an explanation)

Rsync between two hosts using sudo and a password prompt

Using rsync normally is nice and straightforward, e.g.:

# rsync -av -e 'ssh' /some/local/path user@remote:/some/remote/path

This works fine and prompts for the ssh password to log into the remote machine if required.

But what if the remote end needs root (or a different user’s) rights to write into the destination directory? Just whack in an --rsync-path option to add sudo to the rsync command, right?

# rsync -av -e 'ssh' --rsync-path='sudo rsync' /some/local/path user@remote:/some/remote/path
user@remote's password: 
sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(632) [sender=3.0.4]

Oh, that didn’t work — sudo couldn’t ask us for a password. Adding the -t option to the ssh used by rsync doesn’t work either, as it can’t allocate a tty:

# rsync -av -e 'ssh -t' --rsync-path='sudo rsync' /some/local/path user@remote:/some/remote/path
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@remote's password:

So we need to tell sudo to ask for the password some other way. Fortunately there is a -A option for sudo which tells it to use an “askpass” program, but we also need to tell it what askpass program to use (and it’s not in the default path on most machines). We can find this with locate askpass on the remote machine:

[user@remote]$ locate askpass
/etc/profile.d/gnome-ssh-askpass.csh
/etc/profile.d/gnome-ssh-askpass.sh
/usr/libexec/openssh/gnome-ssh-askpass
/usr/libexec/openssh/ssh-askpass

We’ll use /usr/libexec/openssh/ssh-askpass as that should pick an appropriate version according to what is available to sudo. Again sudo won’t have a tty to ask for the password on, so how about we use an X11 askpass program and enable X forwarding for ssh:

# rsync -av -e 'ssh -X' --rsync-path='SUDO_ASKPASS=/usr/libexec/openssh/ssh-askpass sudo -A rsync' /some/local/path user@remote:/some/remote/path
user@remote's password: [type user password]
[Then a dialog pops up for the sudo password]

Hooray, this works! Bit of a faff but it could be scripted or made into a shell function to save having to remember it 🙂
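For example, something like this in your shell profile would do the trick (the function name is just a suggestion):

# Hypothetical wrapper: rsync via ssh with X forwarding, running the remote
# rsync under sudo and letting ssh-askpass prompt for the sudo password.
sudo_rsync() {
    rsync -av -e 'ssh -X' \
        --rsync-path='SUDO_ASKPASS=/usr/libexec/openssh/ssh-askpass sudo -A rsync' \
        "$@"
}
# Usage: sudo_rsync /some/local/path user@remote:/some/remote/path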

Yes, you could enable remote root login, but it would definitely be preferable to avoid that.

RPM addsign fail on vendor-provided package (and a workaround)

We’ve been signing RPM packages in local repos for a while now, and this has been working nicely (see previous posts about rpm signing)… until today.

The Intel Fortran 2015 installer provides RPMs which are already signed by Intel and which install and work fine, so we push these out with Puppet from our local (private) repo. However, even though I’d signed them myself, they were failing signature verification…

Original RPM before re-signing locally:

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  intel-fcompxe-187-15.0-3.noarch.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f) 

  => rpm -qpi intel-fcompxe-187-15.0-3.noarch.rpm
  warning: intel-fcompxe-187-15.0-3.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID 7a5a985f: NOKEY
  Name        : intel-fcompxe-187            
  ...
  Signature   : RSA/SHA1, Fri 10 Apr 2015 13:23:59 BST, Key ID 27fbcd8d7a5a985f

So I go ahead and sign the package:

  => rpm --addsign intel-fcompxe-187-15.0-3.noarch.rpm 
  Enter pass phrase: [type passphrase]
  Pass phrase is good.
  intel-fcompxe-187-15.0-3.noarch.rpm:

All looks fine, so let’s check the signature again:

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

  => rpm -qpi intel-fcompxe-187-15.0-3.noarch.rpm
  warning: intel-fcompxe-187-15.0-3.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID 7a5a985f: NOKEY
  Name        : intel-fcompxe-187            
  ...
  Signature   : RSA/SHA1, Fri 10 Apr 2015 13:23:59 BST, Key ID 27fbcd8d7a5a985f

Odd, it hasn’t changed… Let’s try removing the signature instead:

=> rpm --delsign intel-fcompxe-187-15.0-3.noarch.rpm 
intel-fcompxe-187-15.0-3.noarch.rpm:

=> rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

That’s very odd; it’s added tags to the signature header. And if you try a few more times (just to be sure, right? :), it adds yet more tags to the header:

=> rpm --delsign intel-fcompxe-187-15.0-3.noarch.rpm 
intel-fcompxe-187-15.0-3.noarch.rpm:
...
Packager    : http://www.intel.com/software/products/support
Summary     : Intel(R) Fortran Compiler XE 15.0 Update 3 for Linux*
Description :
Intel(R) Fortran Compiler XE 15.0 Update 3 for Linux*

=> rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 sha1 sha1 sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 md5 md5 md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

If you do this a few more times then rpm can’t read the package at all anymore!

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  error: intel-fcompxe-187-15.0-3.noarch.rpm: rpmReadSignature failed: sigh tags: BAD, no. of tags(33) out of range

  => rpm -ql -v -p intel-fcompxe-187-15.0-3.noarch.rpm 
  error: intel-fcompxe-187-15.0-3.noarch.rpm: rpmReadSignature failed: sigh tags: BAD, no. of tags(33) out of range
  error: intel-fcompxe-187-15.0-3.noarch.rpm: not an rpm package (or package manifest)

Ooops!

Workaround: add the vendor keys, as you should do anyway, rather than re-signing. Appending the public key to the required RPM-GPG-KEY-* file is all that’s needed, and then the packages install just fine.
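Importing the key with rpm --import achieves the same thing (the key file path and name here are illustrative; use wherever your RPM-GPG-KEY-* files live):

# Import the vendor's public key rather than re-signing the package:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-intel
rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
# Should now report the signature as OK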

Future work: Submit bug report about this to the rpm-sign developers…

How old is this Solaris box?

Sometimes it’s useful to know how old a Solaris server is, without having to dig out its serial number or documentation.

Turns out it’s really easy: “prtfru -c” will give you the build date of various bits of hardware in the system. For example, here’s a server that we’ve just retired (which was long overdue!):

oldserver:$ sudo prtfru -c | grep UNIX_Timestamp
Password:
      /ManR/UNIX_Timestamp32: Mon Aug 22 02:52:32 BST 2005
      /ManR/UNIX_Timestamp32: Fri Jun  3 19:48:16 BST 2005
      /ManR/UNIX_Timestamp32: Wed Aug  3 11:39:47 BST 2005
      /ManR/UNIX_Timestamp32: Fri Jun  3 19:46:50 BST 2005
oldserver:$ 

SELinux quicktip

A while ago, Jonathan wrote a really useful post about how to use SELinux – I tend to refer to it every time I need to build an SELinux policy to get something working.

However, yesterday I hit a wrinkle not covered in that post. I was working on a nagios plugin which didn’t work when run by nrpe. It worked from the command line, and it worked via nrpe with SELinux disabled (which pointed the finger neatly at SELinux), but it didn’t leave any traces in the audit log, which makes building a policy difficult!

It seems that the default policies in CentOS include a list of “don’t audit” rules, which silently block some types of behaviour. The intention is to keep a lot of common noise out of the audit log, but that doesn’t help you much when you’re trying to build a policy!

Luckily you can turn that behaviour on and off.

# Turn it off:
sudo semodule --disable_dontaudit --build
sudo setenforce 0

# Turn it back on:
sudo semodule --build
sudo setenforce 1

With dontaudit disabled, I got the information I needed in the audit log and was able to successfully build a policy that made my nagios check work.
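For completeness: with the denials now visible, the usual audit2allow dance applies (the module name here is just an example):

# Build and load a local policy module from the fresh AVC denials:
sudo ausearch -m avc -ts recent | audit2allow -M mynagioscheck
sudo semodule -i mynagioscheck.pp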

What’s in your history?

A little bit of Friday frivolity for you. A friend of mine recently discovered zsh_history, which tells you which commands you run most often from your shell. Obviously zsh_history is pretty zsh-specific, but a bit of rummaging in the code shows it pretty much does this:

history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl |  head -n10

In my case, on my work desktop my top 10 commands are:

     1	321  32.1%  git
     2	214  21.4%  ls
     3	151  15.1%  cd
     4	105  10.5%  vi
     5	29   2.9%   ssh
     6	29   2.9%   exit
     7	21   2.1%   grep
     8	14   1.4%   less
     9	13   1.3%   nslookup
    10	11   1.1%   whois

Given that git is a key component of the Team ResNet puppet workflow, it’s probably not surprising that it’s top of my list.

If you want to join in, hit us up in the comments and tell us what’s top of your list. Are there any typos which show up more often than you were expecting?

Linux VMs on Hyper-V – be sure to install and run hyperv-daemons!

A short post, just to say that if you are running Linux VMs on Hyper-V hypervisors you really should install and run the hyperv daemons.

On RHEL7-based distros this means:

yum install hyperv-daemons
systemctl enable hypervvssd
systemctl enable hypervkvpd
# "enable" only takes effect at the next boot, so start them now as well:
systemctl start hypervvssd
systemctl start hypervkvpd

Ubuntu users should install the packages specified on this Microsoft TechNet page (I’ve not tested this myself, as I don’t yet have any Ubuntu VMs on Hyper-V).

Once you’ve done this a whole host of important features will work, including:

  • Live migration of VMs
  • IP injection (?)
  • Dynamic memory sizing
  • etc..

This will avoid any surprise reboots when hypervisor nodes are taken down for maintenance (which is what happened to me before I installed these..).
Obviously the complete failure of a hypervisor will still cause VM downtime.

I leave it as an exercise to the reader to use configuration management to add these to all their Hyper-V VMs automatically 🙂

Further reading : Best Practices for running Linux on Hyper-V

Puppet future parser — what you’ll have to update in your manifests…

The Puppet Future Parser is the new implementation of the manifest parser which will become the default in 4.0, so I thought I’d take a look to see what I’d need to update.
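If you want to experiment, the parser can be selected per-run or in puppet.conf; this is how it worked on Puppet 3.x, at least:

# One-off test of some future-parser code:
puppet apply --parser future -e 'each([1,2,3]) |$n| { notice("got ${n}") }'

# Or enable it permanently in puppet.conf:
#   [main]
#   parser = future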

Also, there are some fancy new features like iteration, and you can now use [1,2] array notation or {a=>b} hash notation anywhere that you’d previously have used a variable containing an array or hash.

The iteration and lambda features are intended to replace create_resources calls, as they are more flexible and can loop round repeatedly to create individual definitions.

For example, here’s a dumb “sudo” profile which uses the each construct to iterate over an array:

class profiles::sudo {
  # This is a particularly dumb version of use of sudo, to allow any commands:
  $admin_users = hiera_array('admin_users')
  # Additional users with special sudo rights, but no ssh access (e.g. root):
  $sudo_users  = hiera_array('sudo_users')

  class { '::sudo': }

  $all_sudo_users = concat($sudo_users, $admin_users)

  # Create a resource for each entry in the array:
  each($all_sudo_users) |$u| {
    sudo::entry { $u:
      comment  => "Allow ${u} to run anything as any user",
      username => $u,
      host     => 'ALL',
      as_user  => 'ALL',
      as_group => 'ALL',
      nopasswd => false,
      cmd      => 'ALL',
    }
  }
}

Making this work with create_resources, by trying to splice the username for each user in the list into a hash, looked like it would be messy, requiring at least an additional layer of define — this method is much neater.

This makes it much easier to create data abstractions over existing modules — you can programmatically massage the data you read from your hiera files and call definitions using that data in a much more flexible way than when passing hashes to create_resources. This “glue” can be separated into your roles and profiles (which could be the subject of another post, but are described well in this blog post), creating a layer which neatly separates the use of the module from the data which drives it.

So this all sounds pretty great, but there are a few changes you’ll possibly encounter when switching to the future parser:

  • Similar to the switch from puppet master to puppet server, the future parser is somewhat more strict about data formats. e.g. I found that my hiera data definitely needed to be properly quoted when I started using puppet server, so entries like mode : 644 in a file hash wouldn’t give the number you were expecting… (needs mode : 0644 or mode : '644' to avoid conversion from octal to decimal…). The future parser extends this to being more strict in your manifests, so a similarly-incorrect file { ... mode => 644 } declaration needs quoting or a leading zero. If you use puppet-lint you’ll catch this anyway — so use it! 🙂
  • It’s necessary to use {} instead of undef when setting default values for hiera_hash (and likewise [] instead of undef for hiera_array), to allow conditional expressions of the form if $var { ... } to work as intended. It seems that, in terms of truthiness for arrays and hashes, undef is in fact true… (could be a bug, as this page in the docs says: “When used as a boolean, undef is false”)
  • Dynamically-scoped variables (which are pretty mad and difficult to follow anyway, which is why most languages avoid them like the plague…) don’t pass between a class and any sub-classes which it creates. This is in the docs here, but it’s such a common pattern that it could well have made it through from your old (pre-Puppet 2.7) manifests and still have been working OK until the switch to the future parser. e.g.:
    class foo {
      $var = "x"
    }
    
    class bar {
      include foo
      # $var isn't defined here, as dynamic scope rules don't allow it in Puppet >2.7
    }
    

    Instead you need to explicitly qualify your variables to pull them out of the correct scope — $foo::var in this case. In your erb templates, as a common place where the dynamically-scoped variables might have ended up getting used, you can now use scope['::foo::var'] as a shorthand for the previously-longer scope.lookupvar('::foo::var') to explicitly qualify the lookup of variables. The actual scope rules for Puppet < 2.7 are somewhat more complicated and often led to confusing situations if you unintentionally used dynamic scoping, especially when combined with overriding variables from the parent scope…

  • I’m not sure that expressions of the form if "foo" in $arrayvar { ... } work how they should, but I’ve not had a chance to investigate this properly yet.

Most of these are technically just the parser adhering more strictly to the specification, but it’s easy for them to have accidentally crept into your manifests if you’re not being good and using puppet-lint and other tools to check them.

In conclusion : Start using the Future Parser soon! It adds excellent features for iteration which make abstracting data a whole lot easier than using the non-future (past?) parser allows. Suddenly the combination of roles, profiles and the iteration facilities in the future parser mean that abstraction using Puppet and hiera makes an awful lot more sense!