Mortality

I’m old enough now to think about death personally. As in, my own. When people I know, or even people my age that I don’t know, die now, it’s much more meaningful to me than it was when I was younger. A man in Greenland, NH, a town not far from where we live, shot and killed another man. The victim was 46, and it’s in the news because he was the Greenland Chief of Police, working with his staff on a drug raid. That’s sad and tragic. What brought it home to me personally, of course, is that – well – I’m 46 this year. And I’m friends with our police chief. And he’s about my age too. And it really could have been my friend, Richard, whose son and wife would be learning today to live without him, minute by minute.

Sometimes, getting older is fun. Sometimes, it’s just learning on a personal level that everyone dies. It’s the price that all of us pay.

First positive experience with SELinux!

Yes, we all “hate” SELinux.  But, as I tell my kids, “hate” probably really means this:

I prefer not to use it because it stops me from doing things, and since I don’t know how to manage it, I can’t do anything but turn it off entirely and feel dumb about it…. 🙂

However, it’s probably actually a good thing.  And it’s enabled by default on RHEL and Fedora now.  And we may see it adopted by other mainstream distros that market to commercial and government organizations.

So, I took a stab at it today.

Just after installing a fresh RHEL 6 Server, I went to enable private key authentication between account@{our other server} and account@{this new server} – needed for a certain process that {the new server} will support.

So, I copied the public key from “account@{our other server}” to the .ssh/authorized_keys file in the home directory for “account@{the new server}”.  I did the right chmod magic.  In the past, that was it – from then on, we could ssh from one machine to the other with no passwords.  That makes certain automated tasks possible.
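
Roughly, the steps look like this (hostnames and account names here are placeholders):

ssh-copy-id account@newserver

Or, by hand, followed by the chmod magic:

cat ~/.ssh/id_rsa.pub | ssh account@newserver 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
ssh account@newserver 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'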

But, this time, sshd still prompted for a password.  I checked a lot of stuff.  Then I googled and discovered that SELinux was probably blocking me.  By reading a bit on The UnOfficial SELinux FAQ, then reading some man pages, then googling more intelligently, I eventually learned that, along with your standard file system permissions, SELinux-enabled systems have this nice method of enforcing which daemons can access which files.  Sort of like how we use security groups in Active Directory to regulate access to resources on the Windows Domain.  Much more useful than the old Unix groups.

So, now I know it’s more accurate to describe the situation like this:

SELinux was blocking a daemon from accessing files in a sensitive area.  Specifically, the .ssh directory and the files in it that I created in “account’s” home directory aren’t known to the SELinux system.  And since it knows that “account’s” home directory is sensitive – a config file says so – it doesn’t know whether it should allow sshd to read them.
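
You can see what SELinux knows about the files with ls -Z (the exact labels will vary):

ls -Z /home/account/.ssh

Files created by hand may carry a generic home-directory type (something like user_home_t) instead of the ssh_home_t type that sshd is allowed to read.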

Our current method of setting up private key authentication – copying around the authorized_keys file – doesn’t include updating that information so that sshd can access the files.  So, on this SELinux-enabled RHEL 6 server, itsa-no-a-go-ah!

Here’s how you can update the SELinux information for the new files:

restorecon -R -v -n /home/account/.ssh

The -n makes it show the changes it proposes instead of applying them.  If you like what you see, run it again without the -n option.

That’s it.  So, {the new server} is a RHEL 6 Server with SELinux enabled and working properly.

Character encodings and Black Diamonds

From an e-mail I wrote today to a colleague confused about character encodings.  We copied a bunch of files from an old HP-UX web server to a new RHEL server running modern Apache.  The files, viewed from the new server, have the dread black diamonds all over the place, and he is trying to understand why.


Hi, I misread your mail when I first responded that you were exactly right.  Sadly, you weren’t:

Be aware that this looks to be more of a file system issue that a web server issue from my initial findings. It may be a case of both, but the fact that a file moved to [Computer2] got the problem and then moved to [Computer1] still had the problem, when the original file on [Computer1] was fine. I think the UTF8 encoding is being set on the file when it is moved to the [Computer2] machine.

It’s not a file system issue.  Character encoding isn’t an attribute of the file, like read-only or hidden, that can be set on a file or munged by moving it from one file system to another.  It’s a table that maps the numbers that are actually in the file to the symbols we humans use to read and write.  “Code-page” is another term for character encoding; it started with IBM and is the term Microsoft uses today.

The files we think of as “just plain text, dangit!” have been written with an encoding too.  It’s just that here in the US, one encoding, ASCII, and a few close relatives and descendants have been ubiquitous since the beginning, so we’ve never had to even consider the concept.  Now that the web is international, though, we North Americans are having to learn about this.

MS Word uses one set of numbers (one encoding) to represent the characters we type, and the Apache web server is (currently) configured to assume a different set was used.

Specifically: Apache is configured to assume a file is encoded with UTF-8 if it isn’t told otherwise, and MS Word writes in either Windows-1252, AKA CP-1252, or ISO-8859-1, which are very similar to one another.

So, some of the numbers Windows-1252 uses (for curly quotes, for instance) aren’t valid UTF-8 sequences at all.  When a browser that’s been told to expect UTF-8 hits one, it shows us the black diamond thingy: the replacement character.
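
Two ways out, then (file names here are just for illustration): convert the files to UTF-8 with iconv, or tell Apache to declare the legacy encoding.

iconv -f WINDOWS-1252 -t UTF-8 page.html > page.utf8.html

Or, in the Apache config:

AddDefaultCharset ISO-8859-1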

Help?

Best,
Mike

Cisco WCS and Microsoft IAS

I’m deploying a Cisco Unified Wireless Network at the office.  It’s a cool but complex beast.  Along the way, I’m learning lots and lots of stuff…. one of which is how to use RADIUS to authenticate users.  It’s an old but great protocol.  In my situation, I want the Cisco Wireless Control System (WCS) to allow members of an MS Active Directory (AD) security group to log in and administer the system.  The WCS software can use RADIUS to authenticate users, but it needs the RADIUS server to return a bunch of information along with the Access-Accept (OK, let him in) message.  It took me a while to understand what that meant and how to make it happen, and along the way, I found that:

  1. Configuring FreeRADIUS, the OSS solution, has a steep, steep learning curve.  I fell off about 3/4 of the way up.
  2. The available documentation doesn’t seem sufficient to help total protocol newbies get up to speed.
  3. Microsoft IAS is a pretty nice little AAA server that does RADIUS just fine, thank you, and it’s already part of Server 2003!

So, I hopped down off the FreeRADIUS learning curve and walked up the shallowly sloped IAS learning ramp and made some progress.  Until, that is, I realized that, like many Microsoft system administration interfaces, adding more than a few items at a time is an all-day click-a-thon…

And then I discovered that – lo and behold – the data storage for IAS is an Access database!  No kidding!  So, I stopped the service, closed the MMC snap-in that manages IAS, copied C:\Windows\system32\ias\ias.mdb to a computer with Office 2007 and Access 2007 installed, and away I went.  Once I figured out the record format, I made a CSV of the entries I needed to add, using vim, and imported them.  Then I copied the database file back, started IAS, and – woooo hooooo!  It worked!  Five hundred attributes entered in about twenty minutes.

These two posts taught me about IAS and got me started with it:

  1. Configure your Cisco routers to authenticate … using … IAS.
  2. Configuring PEAP on Cisco WCS using Microsoft’s Radius (IAS) Server

Sadly, the folks at deployingradius.com were not so useful.

DOS Line endings break CGI script execution

Some fellows in another office asked for help figuring out why the CGI script they were writing wouldn’t run.

Yesterday, I’d helped them find that the interpreter named on the shebang line (#!) of their scripts didn’t exist, and I’d recommended they use “#!/usr/bin/env perl” instead.  They did, but they wrote the test.cgi script in Notepad and then transferred the file to the Linux web server with something that left the DOS line endings intact.

So, when Apache tried to run the script, the /usr/bin/env program got handed “perl^M” instead of “perl”, couldn’t find that anywhere in the path, and blew chunks.  Here’s what I wrote back to explain:

The problem exists when you use /usr/bin/env to invoke the interpreter, passing it a string (the interpreter name) on the command line, and that string ends in a trailing ^M.  That ^M is part of the DOS line ending that dos2unix would remove.  So, in your simpletest.cgi script, this is what Unix is seeing as the top line:

#!/usr/bin/env perl^M

So, the /usr/bin/env program is trying to invoke a program named “perl^M”, which doesn’t exist.  If you’d like to have some fun with this, make /usr/bin/perl^M a symlink to /usr/bin/perl and watch your program run.  🙂
(Make the ^M by pressing CTRL-V and then either CTRL-M or <ENTER>.)

In your new file, listdocs2011, you won’t see this problem crop up, because the line looks like this:

#!/usr/bin/env perl -w ^M

In this line, you have a space after the -w switch.  You won’t see it in Notepad, but it’s there.  And it’s what allows that script to run as a CGI: /usr/bin/env is getting “perl” and “-w” and “^M” as three separate words on the command line.  It knows what to do with the first two, and perl gets the ^M, which it probably ignores.

If this were my mess to clean up, I’d probably use grep -l '^M$' to list the names of files that have the problem and then run dos2unix on just those file names.  In bash:

dos2unix $(grep -l '^M$' *)

But remember, that ^M is made with CTRL-V and then CTRL-M.  Not the ^ and the M keys.
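
If typing the literal ^M gets old, bash’s ANSI-C quoting can stand in for it.  A variant, assuming bash plus GNU grep and xargs:

grep -l $'\r$' * | xargs -r dos2unix

Here $'\r' expands to a real carriage return, and -r tells xargs to do nothing when no files match.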


Extending Xen VM size

Draft of notes so I don’t lose the method.

On CentOS 5.0 from 2005.
Xen VM images in /xenvms.
Task: extend the databases VM from 5 GB to 56 GB.

This wasn’t hard once I learned how to do it.  In this case, there were only two partitions on the device – p1: root and p2: swap – so I was able to delete the second partition without any hassle and resize the root partition.  Then I put a new swap partition at the end of the newly sized device.

In the virtual machine, comment out the swap partition in /etc/fstab.  You’ll be deleting it shortly, and it won’t be back when you boot up again.

Shut down the virtual machine – the guest:

  • On the Xen host, run “xm console databases”
  • Log in as root
  • {do the commenty-outy thing in /etc/fstab}
  • Run shutdown -hP -t 0 now

Back up the image file first:

cd /xenvms; cp -a databases.img databases.2010061301.img

I renamed my original for easier manipulation and to avoid accidentally munging my backup with filename-completion foo:

mv databases.img t1.img

Grow the img file:

dd if=/dev/zero of=t1.img oflag=direct bs=1M seek=57000 count=1

This means, effectively, “write 1 MB of zeros starting at position 57,000 MB into the device.”  The effect is to very rapidly move the end of the device out to about 57 GB.
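
A quick sanity check, if you want one (numbers from this example):

ls -lh t1.img
du -h t1.img

ls should now report something like 56G (ls -h counts in GiB), while du stays small: the zeros past the old end are a hole, so the file is sparse.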

Create a loop device to represent the device, then use fdisk, not parted, to delete the original partitions and create a new root partition with its end point out where you want it.

losetup /dev/loop1 t1.img
fdisk /dev/loop1

  • Set your units to cylinders with ‘u’
  • Use ‘p’ to print the current partition table
  • Note the starting cylinder, the partition type, and any flags
  • Delete the swap and root partitions with ‘d’
  • Create a new root partition with everything the same, but put the end out where you want it.  In this case, that was at cylinder 7200.
  • Don’t bother with the swap partition.  It’s easier to recreate in the VM when you have it running again.

Get a loop device for the root partition so you can resize it offline.  You need to know how many bytes into the device the partition starts.  Use fdisk -l -u t1.img to find out; the -u makes it report in sectors, and sectors are 512 bytes.  Example:

fdisk -u -l t1.img
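
The notes stop here, so here’s a sketch of how the rest might go, assuming an ext3 root partition starting at sector 63 (substitute the start sector fdisk actually reports):

losetup -o $((63 * 512)) /dev/loop2 t1.img   # offset = start sector * 512; 63 is assumed
e2fsck -f /dev/loop2                         # required before resize2fs will proceed
resize2fs /dev/loop2                         # grows the fs to fill the loop device
losetup -d /dev/loop2
losetup -d /dev/loop1
mv t1.img databases.img                      # rename back before booting the guest

With no size argument, resize2fs grows the filesystem to the end of the loop device, which runs to the end of the image; if your partition ends earlier, pass an explicit size.  Then boot the guest and recreate swap in the leftover space: a new type-82 partition with fdisk, mkswap on it, un-comment the /etc/fstab line, and swapon -a.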

Custom compiled samba drove me nuts.

A long time ago, someone compiled Samba 3.0.7 from source and installed it in /opt/csw on a Solaris 8 machine we have.  I surmise that the person ran ./configure with “--prefix=/opt/csw”.

Today we wanted to do “net ads join” and couldn’t.  We got /etc/krb5/krb5.conf right – we know because kinit worked.  But /opt/csw/bin/net ads join -U myadminaccount kept failing.  Kept saying:

ads_connect: Cannot resolve network address for KDC in requested realm

I ran the command with debugging and tried tweaking this, that, and the other thing over and over.  It was infuriating.  It was like the tool (net) couldn’t see the “kdc =” lines in krb5.conf.  Well, it finally dawned on me: it CAN’T.  It thinks krb5.conf should be in /opt/csw/etc.  So I made a symlink and it worked!!!
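
The fix was something like this (the config directory follows from that --prefix):

ln -s /etc/krb5/krb5.conf /opt/csw/etc/krb5.conf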

Whooo hoo!

Sauron….

Sauron seems cool.  It’s been out of development for a while.  I’ve installed 0.7.3 on a server at work to try it out.  It’s an IPAM – IP Address Management – system that also writes BIND and ISC DHCP configs and zone files, backed by PostgreSQL.  Looks like it can manage a bunch of servers for us.

So, I ran into some troubles.  PostgreSQL is up to version 8.3.9 in the SLES 11 SP1 distro.  In PG 8.0 and earlier, user table creation defaulted to adding the OID system column, and the Sauron developers made use of that column in a few places.  Since I’m using 8.3.9 and I’m brand bloody new to both PG and Sauron, it took me a while to figure this out.  Ultimately, I just changed ~postgres/data/postgresql.conf so that it works the way it used to – default_with_oids = on – and then redid the database creation and population work.
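
Concretely, the change was just one line in ~postgres/data/postgresql.conf (then restart PostgreSQL and redo the createdb/population steps):

default_with_oids = on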

There are some other bits of cruft I’d like to document and send in to the Sauron folks.  Maybe they’d like to have patches?  Here’s a short list:

  • Would be nice to have better docs about creating the sauron user in the database.  Or any docs about that, really. 🙂
  • Current PostgreSQL (8.1+) doesn’t make OID columns in user tables by default; either change the table creation scripts or configure PG for the old behavior
  • Sauron’s import-ethers script can protect against bad UTF-8 data easily, thus:

use utf8;
use Encode;
$info = encode( "UTF-8", $info );

So, there are some notes for myself.  If this is useful to you too and you want more detail (I’ll forget to come update this, I’m sure), drop me a line and I’ll do what I can.

Bash functions for posting to blosxom from the command line

Here are the two bash functions I use at work to turn my blosxom blog into a daily time log.  I made them because I wanted to be able to dash off a one-liner entry into a logfile that would be easily viewable later.

Timelog function

Makes a single-line entry in the post for the current day.  If there’s no post yet for today, it creates one.

tl ()
{
   timelog_file="$HOME/blosxom/timelog/$(date +%Y-%m-%d).txt"
   [[ ! -f $timelog_file ]] && echo Timelog for $(date "+%A %F") >> $timelog_file
   time=$(date +%H:%M)
   [[ $# -eq 0 ]] && msg=$(cat)
   [[ $# -gt 0 ]] && msg="$*"
   echo "<li>$time $msg</li>" >> $timelog_file
}
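
A couple of example invocations (the entries are made up):

tl Rebooted the mail server

Or run tl with no arguments, type a longer entry on stdin, and press CTRL-D to finish.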

Note function

Lets me make a very simple post to my blog from the command line.  I type “note” followed by the title of the note and then press Enter.  From then on, until I press CTRL-D, I’m composing my entry.  After I press CTRL-D, the note is posted.

note ()
{
   if [[ $# -eq 0 ]]
    then
      echo "Need file name on CL"
      return
    fi

   filename="$HOME/blosxom/notes/$*.txt"

   if [[ -f $filename ]]
    then
      echo Extant post file
      return
    fi
   cat > "$filename"
}
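
For example (title made up):

note Fixing the printer queue

Then type the body and press CTRL-D; the post lands in ~/blosxom/notes/Fixing the printer queue.txt.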

Apache UserDir cgi-bin configuration

I use blosxom as a simple blog at work.  I have two bash functions that allow me to use the blog as a timelog.  Whenever I change workstations, my blog breaks for a while until I get around to relearning how to set up CGI for user directories.  This time, I’m writing it down! 🙂

First, make sure you are loading mod_userdir.  Since I’m doing this on Ubuntu 10.10 with Apache 2.2, I get root, cd into /etc/apache2/mods-enabled, and run:

ln -s ../mods-available/userdir.load ./
ln -s ../mods-available/userdir.conf ./

and then I change the userdir.conf to look like this:

<IfModule mod_userdir.c>
        UserDir www
        UserDir disabled root

        <Directory /home/*/www>
                AllowOverride FileInfo AuthConfig Limit Indexes
                Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                        Order allow,deny
                        Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                        Order deny,allow
                        Deny from all
                </LimitExcept>
        </Directory>

        ScriptAliasMatch ^/~([^/]*)/cgi-bin/(.*) /home/$1/www/cgi-bin/$2
        <Directory /home/*/www/cgi-bin>
            Options +ExecCGI
            SetHandler cgi-script
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

</IfModule>
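
After saving, it’s worth having Apache check the config and then reloading it:

apache2ctl configtest
/etc/init.d/apache2 reload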

SUSE and RedHat related distros put the configs in different places. The general idea, though, is the same: get Apache to load the userdir module and then configure it.

Thanks very much to “mastercomputers” for her (his?) post about this at:

http://www.astahost.com/info.php/apaches-userdir-very-own-cgi-bin_t3698.html