Rsync between two hosts using sudo and a password prompt

Using rsync normally is nice and straightforward, e.g.:

# rsync -av -e 'ssh' /some/local/path user@remote:/some/remote/path

This works fine and prompts for the ssh password to log into the remote machine if required.

But what if the remote end needs root (or a different user) rights to write into the destination directory? Just whack in an --rsync-path option to add sudo to the rsync command, right?:

# rsync -av -e 'ssh' --rsync-path='sudo rsync' /some/local/path user@remote:/some/remote/path
user@remote's password: 
sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(632) [sender=3.0.4]

Oh, that didn’t work — sudo couldn’t ask us for a password. Adding the -t option to the ssh used by rsync doesn’t work either, as it can’t allocate a tty:

# rsync -av -e 'ssh -t' --rsync-path='sudo rsync' /some/local/path user@remote:/some/remote/path
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@remote's password:

So we need to tell sudo to ask for the password some other way. Fortunately there is a -A option for sudo which tells it to use an “askpass” program, but we also need to tell it what askpass program to use (and it’s not in the default path on most machines). We can find this with locate askpass on the remote machine:

[user@remote]$ locate askpass
/etc/profile.d/gnome-ssh-askpass.csh
/etc/profile.d/gnome-ssh-askpass.sh
/usr/libexec/openssh/gnome-ssh-askpass
/usr/libexec/openssh/ssh-askpass

We’ll use /usr/libexec/openssh/ssh-askpass as that should pick an appropriate version according to what is available to sudo. Again sudo won’t have a tty to ask for the password on, so how about we use an X11 askpass program and enable X forwarding for ssh:

# rsync -av -e 'ssh -X' --rsync-path='SUDO_ASKPASS=/usr/libexec/openssh/ssh-askpass sudo -A rsync' /some/local/path user@remote:/some/remote/path
user@remote's password: [type user password]
[Then a dialog pops up for the sudo password]

Hooray, this works! Bit of a faff but it could be scripted or made into a shell function to save having to remember it 🙂
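
For example, a minimal wrapper function for your shell startup files (an untested sketch; adjust the askpass path to suit your distro):

sudo-rsync() {
  # rsync to a remote host, running the remote rsync end under sudo.
  # Usage: sudo-rsync /some/local/path user@remote:/some/remote/path
  rsync -av -e 'ssh -X' \
    --rsync-path='SUDO_ASKPASS=/usr/libexec/openssh/ssh-askpass sudo -A rsync' \
    "$@"
}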

Yes, you could enable remote root login, but it would definitely be preferable to avoid that.

RPM addsign fail on vendor-provided package (and a workaround)

We’ve been signing RPM packages in local repos for a while now, and this has been working nicely (see previous posts about rpm signing)… until today.

The Intel Fortran 2015 installer provides RPMs which are already signed by Intel and which install and work fine, so we push these out with Puppet from our local (private) repo. However, even though I’d signed them myself, they were failing to verify the signature…

Original RPM before re-signing locally:

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  intel-fcompxe-187-15.0-3.noarch.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f) 

  => rpm -qpi intel-fcompxe-187-15.0-3.noarch.rpm
  warning: intel-fcompxe-187-15.0-3.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID 7a5a985f: NOKEY
  Name        : intel-fcompxe-187            
  ...
  Signature   : RSA/SHA1, Fri 10 Apr 2015 13:23:59 BST, Key ID 27fbcd8d7a5a985f

So I go ahead and sign the package:

  => rpm --addsign intel-fcompxe-187-15.0-3.noarch.rpm 
  Enter pass phrase: [type passphrase]
  Pass phrase is good.
  intel-fcompxe-187-15.0-3.noarch.rpm:

All looks fine, so let’s check the signature again:

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

  => rpm -qpi intel-fcompxe-187-15.0-3.noarch.rpm
warning: intel-fcompxe-187-15.0-3.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID 7a5a985f: NOKEY
  Name        : intel-fcompxe-187            
  ...
  Signature   : RSA/SHA1, Fri 10 Apr 2015 13:23:59 BST, Key ID 27fbcd8d7a5a985f

Odd, it hasn’t changed… Let’s try removing the signature instead:

=> rpm --delsign intel-fcompxe-187-15.0-3.noarch.rpm 
intel-fcompxe-187-15.0-3.noarch.rpm:

=> rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

That’s very odd, it’s added tags to the signature header. And if you try a few more times (just to be sure, right? :), it adds more tags to the header:

=> rpm --delsign intel-fcompxe-187-15.0-3.noarch.rpm 
intel-fcompxe-187-15.0-3.noarch.rpm:
...
Packager    : http://www.intel.com/software/products/support
Summary     : Intel(R) Fortran Compiler XE 15.0 Update 3 for Linux*
Description :
Intel(R) Fortran Compiler XE 15.0 Update 3 for Linux*

=> rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
intel-fcompxe-187-15.0-3.noarch.rpm: RSA RSA sha1 sha1 sha1 sha1 sha1 sha1 ((MD5) PGP) ((MD5) PGP) md5 md5 md5 md5 md5 md5 NOT OK (MISSING KEYS: (MD5) PGP#7a5a985f (MD5) PGP#262a742e) 

If you do this a few more times then rpm can’t read the package at all anymore!

  => rpm --checksig intel-fcompxe-187-15.0-3.noarch.rpm
  error: intel-fcompxe-187-15.0-3.noarch.rpm: rpmReadSignature failed: sigh tags: BAD, no. of tags(33) out of range

  => rpm -ql -v -p intel-fcompxe-187-15.0-3.noarch.rpm 
  error: intel-fcompxe-187-15.0-3.noarch.rpm: rpmReadSignature failed: sigh tags: BAD, no. of tags(33) out of range
  error: intel-fcompxe-187-15.0-3.noarch.rpm: not an rpm package (or package manifest)

Ooops!

Workaround: Add the vendor keys as you should do, rather than re-signing. Appending the public key to the required RPM-GPG-KEY-* file is all that’s required, and then you can install the packages just fine.
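
For example (filenames here are illustrative; use whichever key file your .repo configuration points at):

# Append the vendor's public key to the key file referenced by the repo config:
cat intel-public-key.asc >> /etc/pki/rpm-gpg/RPM-GPG-KEY-local
# ...or import it straight into the rpm keyring:
rpm --import intel-public-key.asc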

Future work: Submit bug report about this to the rpm-sign developers…

How old is this Solaris box?

Sometimes it’s useful to know how old a Solaris server is, without having to dig out its serial number or documentation.

Turns out it’s really easy. “prtfru -c” will give you the build date of various bits of hardware in the system. For example, here’s a server that we’ve just retired (which was long overdue!):

oldserver:$ sudo prtfru -c | grep UNIX_Timestamp
Password:
      /ManR/UNIX_Timestamp32: Mon Aug 22 02:52:32 BST 2005
      /ManR/UNIX_Timestamp32: Fri Jun  3 19:48:16 BST 2005
      /ManR/UNIX_Timestamp32: Wed Aug  3 11:39:47 BST 2005
      /ManR/UNIX_Timestamp32: Fri Jun  3 19:46:50 BST 2005
oldserver:$ 

SELinux quicktip

A while ago, Jonathan wrote a really useful post about how to use SELinux – I tend to refer to it every time I need to build an SELinux policy to get something working.

However, yesterday I hit a wrinkle not covered in that post. I was working on a nagios plugin which didn’t work when run by nrpe. It worked from the command line, and worked via nrpe with SELinux disabled (which pointed the finger neatly at SELinux) but it didn’t leave any traces in the audit log, which makes building a policy difficult!

It seems that the default policies in CentOS include a list of “don’t audit” rules, which silently block some types of behaviour. The intention is to keep a lot of common noise out of the audit log, but that doesn’t help you much when you’re trying to build a policy!

Luckily you can turn that behaviour on and off.

# Turn it off:
sudo semodule --disable_dontaudit --build
sudo setenforce 0

# Turn it back on:
sudo semodule --build
sudo setenforce 1

With dontaudit disabled, I got the information I needed in the audit log and was able to successfully build a policy that made my nagios check work.
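
For completeness, once the denials do show up in the audit log, the usual approach is to feed them to audit2allow, something along these lines (the module name is just an example):

# Build and load a local policy module from recent AVC denials:
ausearch -m avc -ts recent | audit2allow -M nrpe_local
semodule -i nrpe_local.pp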

What’s in your history?

A little bit of Friday frivolity for you. A friend of mine recently discovered zsh_history, which tells you what commands you run most often from your shell. Obviously zsh_history is pretty zsh-specific, but a bit of rummaging in the code shows it pretty much does this:

history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl |  head -n10

In my case, on my work desktop my top 10 commands are:

     1	321  32.1%  git
     2	214  21.4%  ls
     3	151  15.1%  cd
     4	105  10.5%  vi
     5	29   2.9%   ssh
     6	29   2.9%   exit
     7	21   2.1%   grep
     8	14   1.4%   less
     9	13   1.3%   nslookup
    10	11   1.1%   whois

Given that git is a key component of the Team ResNet puppet workflow, it’s probably not surprising that it’s top of my list.

If you want to join in, hit us up in the comments and tell us what’s top of your list. Are there any typos which show up more often than you were expecting?

Linux VMs on Hyper-V – be sure to install and run hyperv-daemons!

A short post, just to say that if you are running Linux VMs on Hyper-V hypervisors you really should install and run the hyperv daemons.

On RHEL7-based distros this is as simple as:

yum install hyperv-daemons
systemctl enable hypervvssd
systemctl enable hypervkvpd
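# 'enable' only takes effect at the next boot, so start the daemons now too:
systemctl start hypervvssd
systemctl start hypervkvpd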

Ubuntu users should install the packages specified on this Microsoft TechNet page (I’ve not tested this myself, as I don’t yet have any Ubuntu VMs on Hyper-V).

Once you’ve done this a whole host of important features will work, including:

  • Live migration of VMs
  • IP injection (?)
  • Dynamic memory sizing
  • etc..

This will avoid any surprise reboots when hypervisor nodes are taken down for maintenance (which is what happened to me before I installed these..). Obviously the complete failure of a hypervisor node will still cause VM downtime, but planned maintenance shouldn’t.

I leave it as an exercise to the reader to use configuration management to add these to all their Hyper-V VMs automatically 🙂

Further reading : Best Practices for running Linux on Hyper-V

Puppet future parser — what to expect that you’ll have to update in your manifests…

The Puppet Future Parser is the new implementation of the manifest parser which will become the default in 4.0, so I thought I’d take a look to see what I’d need to update.
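
If you’re still on Puppet 3.x you can try it out before it becomes the default, either per-run or via puppet.conf (a quick sketch):

# Per-run, for experimenting:
puppet apply --parser future -e 'each([1,2,3]) |$n| { notice("got ${n}") }'

# Or persistently, in the [main] section of puppet.conf:
#   parser = future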

There are also some fancy new features, like iteration, and the ability to use [1,2] array notation or {a=>b} hash notation anywhere that you’d previously have needed a variable containing an array or hash.

The iteration and lambda features are intended to replace create_resources calls, as they are more flexible and can loop round repeatedly to create individual definitions.

For example, here’s a dumb “sudo” profile which uses the each construct to iterate over an array:

class profiles::sudo {
  # This is a particularly dumb version of use of sudo, to allow any commands:
  $admin_users = hiera_array('admin_users')
  # Additional users with special sudo rights, but no ssh access (e.g. root):
  $sudo_users  = hiera_array('sudo_users')

  class { '::sudo': }

  $all_sudo_users = concat($sudo_users, $admin_users)

  # Create a resource for each entry in the array:
  each($all_sudo_users) |$u| {
    sudo::entry { $u:
      comment  => "Allow ${u} to run anything as any user",
      username => $u,
      host     => 'ALL',
      as_user  => 'ALL',
      as_group => 'ALL',
      nopasswd => false,
      cmd      => 'ALL',
    }
  }
}

Making this work with create_resources and trying to splice the username for each user in the list into a hash looked like it would be messy, requiring at least an additional layer of define — this method is much neater.

This makes it much easier to create data abstractions over existing modules — you can programmatically massage the data you read from your hiera files and call definitions using that data in a much more flexible way than when passing hashes to create_resources. This “glue” can be separated into your roles and profiles (which could be the subject of another post but are described well in this blog post), creating a layer which separates the use of the module from the data which drives that use nicely.

So this all sounds pretty great, but there are a few changes you’ll possibly encounter when switching to the future parser:

  • Similar to the switch from puppet master to puppet server, the future parser is somewhat more strict about data formats. e.g. I found that my hiera data definitely needed to be properly quoted when I started using puppet server, so entries like mode : 644 in a file hash wouldn’t give the number you were expecting… (needs mode : 0644 or mode : '644' to avoid conversion from octal to decimal…). The future parser extends this to being more strict in your manifests, so a similarly-incorrect file { ... mode => 644 } declaration needs quoting or a leading zero. If you use puppet-lint you’ll catch this anyway — so use it! 🙂
  • It’s necessary to use {} instead of undef when setting default values for hiera_hash (and likewise [] instead of undef for hiera_array), to allow conditional expressions of the form if $var { ... } to work as intended. It seems that, in terms of falseness for arrays and hashes, undef is in fact true… (could be a bug, as this page in the docs says: “When used as a boolean, undef is false”)
  • Dynamically-scoped variables (which are pretty mad and difficult to follow anyway, which is why most languages avoid them like the plague…) don’t pass between a class and any sub-classes which it creates. This is in the docs here, but it’s such a common pattern that it could well have made it through from your old (pre-Puppet 2.7) manifests and still have been working OK until the switch to the future parser. e.g.:
    class foo {
      $var = "x"
    }
    
    class bar {
      include foo
      # $var isn't defined here, as dynamic scope rules don't allow it in Puppet >2.7
    }
    

    Instead you need to explicitly qualify your variables to pull them out of the correct scope — $foo::var in this case. In your erb templates, as a common place where the dynamically-scoped variables might have ended up getting used, you can now use scope['::foo::var'] as a shorthand for the previously-longer scope.lookupvar('::foo::var') to explicitly qualify the lookup of variables. The actual scope rules for Puppet < 2.7 are somewhat more complicated and often led to confusing situations if you unintentionally used dynamic scoping, especially when combined with overriding variables from the parent scope…

  • I’m not sure that expressions of the form if "foo" in $arrayvar { ... } work how they should, but I’ve not had a chance to investigate this properly yet.

Most of these are technically the parser more strictly adhering to the specifications, but it’s easy to have accidentally had them creep into your manifests if you’re not being good and using puppet-lint and other tools to check them.

In conclusion : Start using the Future Parser soon! It adds excellent features for iteration which make abstracting data a whole lot easier than using the non-future (past?) parser allows. Suddenly the combination of roles, profiles and the iteration facilities in the future parser mean that abstraction using Puppet and hiera makes an awful lot more sense!

mod_auth_cas on CentOS7 / Apache 2.4

For CentOS, this is now available in the EPEL repo

mod_auth_cas (https://github.com/Jasig/mod_auth_cas) is an Apache module that plugs into the Apache mod_auth framework, to provide authentication against a Jasig CAS SSO server.  Unfortunately development on it seems to have stalled, and it currently doesn’t support Apache 2.4 – https://github.com/Jasig/mod_auth_cas/issues/49.

There is a fork that does support Apache 2.4, so this will cover how to build and install it.

# Clone the forked github repo
git clone https://github.com/klausdieterkrannich/mod_auth_cas.git .

# install development libraries
yum install gcc httpd-devel openssl-devel libcurl-devel automake

# Run configure
./configure

# Run Make
make

# If you get errors like:
/opt/mod_auth_cas/missing: line 81: aclocal-1.12: command not found
WARNING: 'aclocal-1.12' is missing on your system.
# then symlink the binaries as follows:
ln -s /usr/bin/aclocal /usr/bin/aclocal-1.12
ln -s /usr/bin/automake /usr/bin/automake-1.12
# and run make again

# Assuming it built OK, then install it:
make install

# On CentOS this will put the binaries into:
#/usr/lib64/httpd/modules
# and you can then copy them from here to your production systems.
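
Once the module is installed, a minimal Apache config to protect a location might look something like this (hostnames and paths are illustrative, not a tested config):

cat > /etc/httpd/conf.d/auth_cas.conf <<'EOF'
LoadModule auth_cas_module modules/mod_auth_cas.so
CASCookiePath /var/cache/httpd/mod_auth_cas/
CASLoginURL https://cas.example.com/cas/login
CASValidateURL https://cas.example.com/cas/serviceValidate

<Location /secured>
    AuthType CAS
    Require valid-user
</Location>
EOF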


DHCP fingerprinting

We wanted to find out what sort of devices are active on the wireless network, and the vendor tools we’ve got don’t quite give us the level of detail we were after.

However, everything which hits our wireless network gets a DHCP lease from our dhcp servers.  With a bit of dhcpd.conf magic, you can make it profile each client when it requests or renews a lease and record a fingerprint in the logs.

dhcpd.conf – collecting fingerprints

# put the dhcp options request fingerprint in the leases file
set dhcp-op-req-string = binary-to-ascii(10,8,":",option dhcp-parameter-request-list);

# log the fingerprint in the format:
# Jul 17 14:36:06 dhcp2 dhcpd: FINGERPRINT 1,3,6,12,15,28 for 00:10:20:30:40:50

log(info,
concat("FINGERPRINT ",
binary-to-ascii(10,8,",",option dhcp-parameter-request-list),
" for ",
concat (  # MAC
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 1, 1))),2), ":",
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 2, 1))),2), ":",
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 3, 1))),2), ":",
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 4, 1))),2), ":",
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 5, 1))),2), ":",
        suffix (concat ("0", binary-to-ascii (16, 8, "",
          substring (hardware, 6, 1))),2)
       )        # End MAC
));
# End DHCP fingerprinting

Now every time a device interacts with our DHCP server, we get a FINGERPRINT line appearing in our logs along with the mac address which requested the lease.

So far, so good. Now we need to process those logs into something anonymous, but meaningful.

Data Prep
The easiest approach is to cat our logfile, strip out just the fields we’re interested in (mac address and fingerprint), then sort them to remove duplicates (we only want to count each machine once!) and finally throw away the mac addresses (because all we really want are the fingerprints).

We can do that easily enough with a lovely long pipeline:

cat /var/log/dhcpd.log | grep FINGERPRINT | awk '{ print $9 " " $7 }' | sort -u | awk '{ print $2 }'

There are probably more elegant ways to do it, but the above isn’t really the interesting bit. All you get out of it is a list of fingerprints. The magic is in converting those into something meaningful.

Chewing on your fingerprints
To process, identify and count these fingerprints, we need the help of the fingerbank project who have collected DHCP fingerprints from all over the place.

I’m grabbing the fingerprint list as a config file from their github repo: https://github.com/inverse-inc/fingerbank/blob/master/dhcp_fingerprints.conf although since I first started playing with this about 6 months ago, it seems they’ve made their fingerprint database available as an SQLite DB – which would have been much easier to wrangle than parsing the config file.

So here’s a slightly shonky Perl script to parse the config file and produce a CSV summary of the output. This is probably not as elegantly done as it could be, so please don’t judge too harshly! I’ve tried to make it readable, but some of the datastructures are a bit on the deep side. If you want to see what’s going on, make plenty of use of “Data::Dumper” – I know I had to when writing it.

It assumes dhcp_fingerprints.conf is in the same folder as the script, and expects to be fed fingerprints over STDIN one line at a time – so you can stick it on the end of the pipeline I mentioned earlier.

#!/usr/bin/perl -wT

use strict;

use Config::IniFiles;
use Data::Dumper;

my %dhcp_fingerprints; # tied version of the config file
my ($fprint_db, $fprint_class, $os_counter); # DStructs which we query later

# Tie fingerprint config file from fingerbank to a DS so we can parse it
tie %dhcp_fingerprints, 'Config::IniFiles', ( -file => "dhcp_fingerprints.conf" );

# Build $fprint_class (maps OS name to "class")
foreach my $class (tied(%dhcp_fingerprints)->GroupMembers("class") ) {
  my ($min,$max) = split /\D/, $dhcp_fingerprints{$class}{"members"};
  $$fprint_class{ $dhcp_fingerprints{$class}{"description"} }{min}=$min;
  $$fprint_class{ $dhcp_fingerprints{$class}{"description"} }{max}=$max;
}

# Build $fprint_db (maps fingerprint to OS name)
foreach my $os ( tied(%dhcp_fingerprints)->GroupMembers("os") ) {
  $os =~ m/os (.*)$/gi;
  my $os_id = $1;

  if ( exists( $dhcp_fingerprints{$os}{"fingerprints"} ) ) {
    if ( ref( $dhcp_fingerprints{$os}{"fingerprints"} ) eq "ARRAY" ) {
      foreach my $dhcp_fingerprint ( @{ $dhcp_fingerprints{$os}{"fingerprints"} } ) {
        $$fprint_db{$dhcp_fingerprint}{"description"}=$dhcp_fingerprints{$os}{"description"};
        $$fprint_db{$dhcp_fingerprint}{"os"}=$os_id;   
      }
    } else {
      if (defined $dhcp_fingerprints{$os}{"fingerprints"}) {
        foreach my $dhcp_fingerprint (split(/\n/, $dhcp_fingerprints{$os}{"fingerprints"})) {
        $$fprint_db{$dhcp_fingerprint}{"description"}=$dhcp_fingerprints{$os}{"description"};
        $$fprint_db{$dhcp_fingerprint}{"os"}=$os_id;
        }
      }
    }
  }
}

# now we loop through all the fingerprints we've been given on STDIN and try to ID them
while (<STDIN>) {
  chomp;
  my $fingerprint = $_;

  # See if it appears in $fprint_db...
  if(defined $$fprint_db{$fingerprint}) {
    # Count it
    $$os_counter{$$fprint_db{$fingerprint}{"description"}}{"count"}++;

    # Try to identify the type of OS
    foreach my $class (keys %$fprint_class) {
      if ($$fprint_db{$fingerprint}{"os"} >= $$fprint_class{$class}{"min"} && $$fprint_db{$fingerprint}{"os"} <= $$fprint_class{$class}{"max"}) {
        $$os_counter{$$fprint_db{$fingerprint}{"description"}}{"class"}=$class;
      }
    }
    
    # If we haven't yet set the OS class, set it to "unknown"
    $$os_counter{$$fprint_db{$fingerprint}{"description"}}{"class"}="unknown" unless (defined $$os_counter{$$fprint_db{$fingerprint}{"description"}}{"class"});

  } else {
    # No idea what it was, so add it to the unknown count
    $$os_counter{"unknown"}{"count"}++;
    $$os_counter{"unknown"}{"class"}="unknown";
  }
}

# Print summary output as a CSV
print "\n\nClass,OS,Count\n";
foreach my $os(keys %$os_counter) {
  print qq["$$os_counter{$os}{class}","$os","$$os_counter{$os}{count}"\n];
}
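
Putting the whole thing together (assuming the script above is saved as fingerprint-summary.pl alongside dhcp_fingerprints.conf):

grep FINGERPRINT /var/log/dhcpd.log | awk '{ print $9 " " $7 }' | sort -u \
  | awk '{ print $2 }' | ./fingerprint-summary.pl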

If I let that chew on a decent chunk of today’s logs (from about 7am to 2pm) it spits out the following:

Class,OS,Count
"Smartphones/PDAs/Tablets","Samsung Galaxy Tab 3 7.0 SM-T210R","39"
"Home Audio/Video Equipment","Slingbox","49"
"Dead OSes","OS/2 Warp","1"
"Gaming Consoles","Xbox 360","6"
"Windows","Microsoft Windows Vista/7 or Server 2008","1694"
"Printers","Lexmark Printer","1"
"Network Boot Agents","Novell Netware Client","1"
"Macintosh","Mac OS X Lion","2783"
"Misc","Eye-Fi Wireless Memory Card","1"
"Printers","Kyocera Printer","1"
"unknown","unknown","40"
"Smartphones/PDAs/Tablets","LG Nexus 5 & 7","1797"
"Printers","HP Printer","54"
"CD-Based OSes","PHLAK","1"
"Smartphones/PDAs/Tablets","Nokia","13"
"Smartphones/PDAs/Tablets","Motorola Android","2"
"Macintosh","Mac OS X","145"
"Smartphones/PDAs/Tablets","Generic Android","2989"
"Gaming Consoles","Playstation 2","1"
"Linux","Chrome OS","39"
"Linux","Ubuntu/Debian 5/Knoppix 6","5"
"Routers and APs","Cisco Wireless Access Point","69"
"Linux","Generic Linux","7"
"Linux","Ubuntu 11.04","21"
"Windows","Microsoft Windows 8","1792"
"Routers and APs","Apple Airport","2"
"Routers and APs","DD-WRT Router","3"
"Smartphones/PDAs/Tablets","Sony Ericsson Android","1"
"Linux","Debian-based Linux","51"
"Smartphones/PDAs/Tablets","Symbian OS","2"
"Storage Devices","LaCie NAS","27"
"Windows","Microsoft Windows XP","30"
"Smartphones/PDAs/Tablets","Android Tablet","24"
"Monitoring Devices","Tripplite UPS","1"
"Smartphones/PDAs/Tablets","Apple iPod, iPhone or iPad","12289"
"Smartphones/PDAs/Tablets","Samsung S5260 Star II","2"
"Smartphones/PDAs/Tablets","RIM BlackBerry","63"

I’m not sure I 100% believe the above (OS/2 Warp? Really?) but the bits I disbelieve are largely in the noise.

Chewing on the above stats a bit shows us that the wireless network is roughly 27% laptops and 72% mobile devices (tablets etc). Amongst the laptops, Windows is just about in the lead with 53%, and OSX is close behind at 44% (which is probably higher than a lot of people think). Linux laptops are trailing behind at only 2%.

The mobile device landscape is less evenly split, with 71% iOS and 28% Android.

I wouldn’t read too much into the above analysis though, as it represents a comparatively small time slice (and only 23775 of the 37000 devices we see on the wireless each week).

Who knows, perhaps we’ve got 13K windows phones owned by people who just don’t come onto campus on a Monday…

Update 2015-05-11: I’ve been asked under what license I’ve released the perl script in this post. I didn’t put any thought into licenses at the time (I was just trying to solve a problem and answer a question I’d been asked!) but I’ll put my hand up: part of the script is based on prior art.

The section which parses the fingerprint database is taken from the process_fingerprints() function in https://github.com/inverse-inc/fingerbank/blob/master/obsolete/tools/fingerprint-find-candidate-matches.pl – a script which seems to be covered by the GPLv2 licence.

As I understand it, under the terms of the GPLv2 license, that means that the script above should also be distributed under the GPLv2 license (which I’m OK with) and that under the terms of that license it should be distributed along with a copy of the GPLv2 license… which can be found here: https://www.gnu.org/licenses/gpl-2.0.html

Puppet — making array expansions in resource calls unique

Let’s say that you have an array of NFS clients to allow access to an export. You want to expand this into a list of resources, which you’d normally do via something like:

$allow = hiera_array('some_variable', undef)
nfs::export { $allow:
  ...
}

This works fine until you want to have multiple exports to the same name/IP, at which point you have two nfs::export resources with the same name. But you need the expansion of the array to generate multiple calls to nfs::export, so you need to prefix each string in the array:

$allow = hiera_array('some_variable', undef)
$hacked_allow = prefix($allow, "${fqdn} export of ${path} to client ")
nfs::export { $hacked_allow:
  ...
}

…then in the called define you extract the client fqdn like so:

define nfs::export ( ... ) {
  $client = regsubst($name, "^.* export of .* to client (.*)$", '\1')
  validate_string($client)

  ... [use $client as fqdn in here]
}
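
You can sanity-check the prefix/regsubst round trip from the command line (assuming puppetlabs-stdlib is installed; all names here are made up):

# Expand the array into unique resource titles:
puppet apply -e '$t = prefix(["client1.example.org"], "server1 export of /srv/data to client ") notice($t)'

# ...and recover the client fqdn from one of those titles:
puppet apply -e 'notice(regsubst("server1 export of /srv/data to client client1.example.org", "^.* export of .* to client (.*)$", "\\1"))'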

This seems like a terrible hack, but it’s a reasonable workaround for the need to expand arrays and keep the names unique to avoid duplicate resources.