Studying I/O performance in a complex system

We have a pair of identical computers (Dell R610s) clustered as our Subversion server.  The repo administrators began reporting poor performance about six weeks ago.  Performance like this: a directory listing (ls -l) sometimes takes over 30 seconds to return. That’s just annoying, but it’s indicative of an underlying problem which is causing very, very long delays in loading data into and exporting data from the repositories.  And that’s not just annoying – they do small loads and exports all day long to keep the master repo server in sync with the remote servers in our other offices.

The Subversion repositories are in a ~450 GB OCFS2 filesystem, /srv/data1, on a DRBD device, /dev/drbd1.  The DRBD device is active/active and configured to mirror data between the two computers; both members of the cluster can read from and write to the device at the same time.  On each computer, the DRBD device is hosted by a Linux software RAID mirror, /dev/md3. At the very bottom of this pile are the two 500 GB SATA HDDs in each computer.  They each have a single partition, /dev/sdc1 and /dev/sdd1, which are mirrored with Linux software RAID (mdadm).  If you’re with me so far, you’ll understand that we have four copies of the Subversion repos.

The challenge is to find out where the bottleneck is and solve it.  Or them.

One idea we’ve tossed around is that the software RAID is slower than hardware RAID would be.  However, I’ve found results from a test showing software RAID outperforming hardware RAID in situations similar to ours.  The Wikipedia article on RAID also seems to indicate that software RAID can usually outperform hardware RAID in our situation.  From what the author(s) write and what I read elsewhere, I think I can believe that.  Up until you bring dedicated SANs and NASes into the picture, I mean.

Another idea is that maybe “chunk” or “stripe” size mismatches could be to blame.  We have OCFS2 on DRBD on MD RAID.  According to the linux.org RAID HowTo authors, though, mirrors don’t use “stripes,” so I can eliminate RAID stripe size from the question.

Perhaps the problem is in the configuration of DRBD. Maybe having the metadata “internal” isn’t so good for this application: in DRBD, “internal” means the metadata lives on the same backing store device as the data. I don’t see any other tunables so far. I have plenty of space on the system drives to store the metadata. Per the formula at http://www.drbd.org/users-guide-emb/ch-internals.html#s-meta-data-size, I need 15 MB. I wonder if I can “move” the metadata….
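
I haven’t tried it yet, but as far as I can tell from the users’ guide, “moving” it would mean pointing each node’s disk section at an external meta-disk instead of internal. A sketch, with a made-up device name (/dev/sda5 is hypothetical; our real addresses differ too):

resource r1 {
  on node-a {
    device    /dev/drbd1;
    disk      /dev/md3;
    address   10.0.0.1:7789;
    meta-disk /dev/sda5[0];   # external metadata instead of "internal"
  }
  # ...matching "on node-b" stanza with its own address and meta-disk...
}

Changing this on a live resource apparently means dumping and restoring the metadata with drbdmeta, so it’s not a casual tweak.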

Finally found docs for ifcfg-em1 and friends

All the Red Hat-derived systems I’ve ever used store network configuration in /etc/sysconfig/network-scripts and use the ifup and ifdown commands to start and stop interfaces. Fine. I can’t tell you how many times over the years, though, I’ve tried to look up the options I can put in those files. I’ve never found anything in the man pages or the info pages. I’ve always just fallen back on Google to find examples to crib.

Well, I’ve been working with CentOS systems for a few months now, I like them, and there’s no end in sight, so this mystery has become a bit more of a problem for me. Today I discovered that, completely ignoring the Unix convention of documenting things in man pages, the authors of the initscripts package have placed the options for the many /etc/sysconfig/* files here:

/usr/share/doc/initscripts-9.03.38/sysconfig.txt

Of particular note is the section about /etc/sysconfig/network-scripts, in which they list and discuss all the options for an ifcfg-{whatever} config file.
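
To give you a taste, here’s a minimal static-IP ifcfg-em1 using options from that file (addresses are placeholders, of course):

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none        # static config; use "dhcp" for DHCP
IPADDR=192.168.10.20
NETMASK=255.255.255.0
GATEWAY=192.168.10.1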

Grrrr….

Yes, security of the identity store is important.

A friend at work identified a “feature” of SAML-based identity federation systems. The weakness is likely present in *any* SAML identity federation system. To explain, I’ll posit a Google Apps domain configured to use SSO with a company that uses Oracle’s Identity and Access Management products. In that product line, OIF is the federation server and OAM is the authorization server.

Basically, it works like this:

Someone types http://mail.example.com/ into her browser. Her browser and the OS resolve the hostname to an IP address at Google’s SSO servers; the browser connects and sends an HTTP Host header saying which host it’s trying to reach: “mail.example.com.”

Google now knows only that she’s trying to get into the Example company’s Google Apps account. It sets a session cookie in her browser and redirects her to our OIF server. At this point, it has no idea who she is.

The next thing she sees is the SSO Login Page (actually presented by OAM, because access to OIF is controlled by OAM). Our user types in her username and password. OAM checks these against AD. If they are correct, it sets its own session cookie in her browser and then redirects her to OIF.

OIF sees that OAM authenticated her (sees the cookie) and retrieves whatever e-mail address is stored in her AD account or, if she’s visited recently, from its cache of her account from an earlier visit.

Next, OIF makes up a wee packet, encrypted with Google’s own public key. It contains the e-mail address and maybe some other fluff. OIF hands that to the browser along with a redirection URL which will cause it to load a special “page” at Google’s SSO system, delivering the encrypted packet.

Google’s SSO system sees the session cookie it set earlier, decrypts the “packet” from OIF, and sees the e-mail address OIF read from AD (or its own cache). Now Google knows that the browser with that session cookie is authorized to access the Google Apps account identified by that e-mail address.
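
If it helps to picture the “wee packet”: at heart it’s a SAML assertion whose Subject carries the e-mail address. A stripped-down, purely illustrative skeleton (real ones are signed and carry much more):

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://oif.example.com/fed/idp</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      someone@example.com
    </saml:NameID>
  </saml:Subject>
  <saml:Conditions NotBefore="2012-06-01T16:00:00Z" NotOnOrAfter="2012-06-01T16:05:00Z"/>
</saml:Assertion>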

Now, if you absorbed all that, you’ll see the “weakness.” Quotes because I’m not sure a stipulation or basic assumption about how the system works can fairly be called a weakness.

Anyone who can change the contents of the identity store (AD, in our case) can game the system – so to speak. And that’s not at all new. That’s *always* been true, for *any* system.

IIS7 keeps using old SSL cert

A user reported to me that his browser was warning that one of the websites I maintain was sending out a revoked SSL certificate as its identity. I checked and found that, sure enough, the certificate authority (CA), which I also run, had put that cert on the CRL. It had been superseded when I’d issued a new cert for the server with different extensions.

However, when I checked the web server config, I couldn’t find the old, revoked cert listed anywhere. And it wasn’t listed in the host computer’s Certificates MSC either. Weird.

Restarting the web server didn’t help. Neither did rebooting the computer. Neither did getting yet another new certificate for the computer.

Finally, I woke up and “asked” google.

Well, that was easy. I edited the “Bindings” on the “Default Site” and selected the new certificate. The old one didn’t appear in the list, though. Apparently, if you simply delete the old cert from the computer, IIS7 doesn’t clear it from its own config, even though you can’t see the cert anywhere in that config.
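
A tip for anyone else chasing this: the binding IIS7 actually serves from lives down in HTTP.SYS, and you can see (and clear) it from a command prompt. Something like this – check the ipport value against your own site before deleting anything:

netsh http show sslcert
rem to remove a stale binding by hand:
rem netsh http delete sslcert ipport=0.0.0.0:443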

Bash function to ease smbclient usage

Quick and dirty: I find smbclient incredibly useful at the command line in Linux, but I can never remember how to put the command together. So I wrote a bash function to simplify it for me. After the function code, I’ll describe how to store your credentials safely so you don’t need to type them in each time.

Here it is:

sm ()
{
    # Samba client connection function
    # 2012-11-15 mike@diehn.net
    #   Simplify command line file transfer connections to
    #   Windows systems from Linux.  You need smbclient.
    #
    # Gotta give at least a hostname
    [ -z "$1" ] && {
      echo "sm: usage: sm hostname [service]"
      return
    }
 
    # Putchyer own creds file in place, put the
    # pathname here, and prolly better do this:
    #   chmod -R go-rwx $HOME/.creds
    #
    #auth="-A $HOME/.creds/{REALM}/{username}"
 
    host=${1}
    # strip everything from the first dot to get the short (NetBIOS) name
    nbname=${host%%.*}
 
    # If no service given, list what's available, otherwise
    # connect to the service
    [   -z "$2" ] && cmd="-L $nbname" || cmd="//$nbname/${2}"
 
    # execute the command
    echo smbclient $auth -I $host $cmd
    smbclient $auth -I $host $cmd
 
}

Credential storage for use with smbclient and friends:

I create a $HOME/.creds directory. In it, I make a dir for each authentication realm, and in those, a file for each username. In each username file, I put this:

username = myusername
password = mypassword
domain   = MY_AD_DOMAIN_NAME
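
Putting it all together, setup and use look about like this (realm and names are placeholders):

mkdir -p $HOME/.creds/EXAMPLE.COM
vi $HOME/.creds/EXAMPLE.COM/myusername   # the three lines shown above
chmod -R go-rwx $HOME/.creds

sm fileserver01          # no service given: lists the shares
sm fileserver01 users    # connects to //FILESERVER01/users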

Enjoy!

Making Oracle SSL wallets from scratch

Some hard won knowledge:

Here’s what I did:

I used openssl on my Linux workstation to create a new private key and a CSR. Then I bought a signed cert from DigiCert using that CSR. I rolled those into a JKS using keytool – no trouble. But then I learned that if I want to use those with Oracle HTTP Server (OHS), I’d need them in an SSL wallet.

Took a long time to learn these things:

  1. a “wallet” is a directory containing a file named ewallet.p12
  2. ewallet.p12 must contain the private key, the signed cert for it and the certs of all the CAs in the chain that signed the cert.
  3. cwallet.sso is the one that OHS actually uses – it has no password
  4. unlike ewallet.p12, cwallet.sso is tailored to the machine it’s generated on. You can’t use it on other machines.
  5. The “orapki.bat” that you get when you install OHS is broken. I had to edit the file and wrap %JAVA_HOME% in quotes on all three lines on which it appears: 19, 20 and 90

Here are the commands for building this beast from scratch. I’m copying in the text file I wrote for myself earlier and adding to it:

# In this demo, I'm naming the files "self.*" just to keep them
# short and to indicate we're working with a self-signed cert.
# In real life, you'd name your files something meaningful to
# your use.  Like, I might use "oiam-external-ssl.*"

# Start by making a brand new private key.
# Put the keylength you want as the number of bits at the end of
# the command line.  I use 2048 - a good balance of strength
# versus speed of operations
#
# I don't put a passphrase on this key
#
openssl genrsa -out self.key 2048

# From here down, I break commands into lines to make them easier
# to read and understand.  You could paste these as they are
# because I've put Bash line continuation characters at the end of
# each line.  However, you may want to join them up again.  If
# you do, keep the pieces in order because some of that order is
# important.  Mostly the first two or three lines.

# Make the CSR.  Your subjectAltNames are in the openssl.cnf
# named in the -config option.
#
openssl req -new \
  -config openssl.cnf \
  -key    self.key \
  -out    self.csr

# If you buy a cert, you skip this step
#
# In the test, sign the CSR (public key) with your own private
# key.  Because we're using the same key to sign that we used to
# make the CSR, we're producing a "self-signed" certificate.
#
#
openssl x509 -req \
  -days    1450 \
  -in      self.csr \
  -signkey self.key \
  -out     self.crt

# To make the next step a bit easier, cat the private key and signed
# public key files together into one.  They're BASE64 encoded
# blocks, commonly called PEM encoded, so they won't get mixed
# up.
#
# If you bought a signed cert, make sure it's in PEM format first!

cat self.key self.crt > self.pem

# If you bought a signed cert, add in the CA certs.  They should
# have come with the cert.  Look for something called a chain.
#
cat ca-cert-1.crt ca-cert-2.crt ca-cert-3.crt >> self.pem
#
# You should have *real* CA cert filenames there, not these!

# Create the PKCS12 file
#
# Notes:
#   use the -name option to make the "alias" that marks the item
#   in the keystore.  You'll look for this name/alias in many
#   places in the future - choose a meaningful name/alias here.
#
#   Export Password: don't leave it blank.
#
#   To let the -chain option work, and you need it to work, you
#   must have put the various CA certs in the default CA store on
#   your computer so openssl can find them and add them to the
#   pkcs12 file it's making for you.  In debian based systems,
#   like Ubuntu and Mint, use man update-ca-certificates to learn
#   how to do that.
#
openssl pkcs12 -export \
  -in   self.pem \
  -out  self.p12 \
  -name self-test \
  -chain
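
# (A sanity check I like to do here: confirm the p12 really holds
# the key and the whole CA chain before moving on.)
#
openssl pkcs12 -info -in self.p12 -noout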

# At this point, you have made the wallet file.  Copy the self.p12 and name
# that copy "ewallet.p12."  Put it in a directory named for your wallet.
# For example, I did this:
mkdir oif-wallet
cp self.p12 oif-wallet/ewallet.p12

# Then you copy the whole directory to the server you want to have
# use it.

In my environment, MIDDLEWARE_HOME is C:\Oracle\Middleware. So, first, I copy the directory to where I want it to live. Then I get a DOS window and CD to the parent of the wallet directory. Then I use orapki to create the cwallet.sso, which OHS will actually use when it starts so it can access its private key and its cert. In my case, it looked like this:

cd C:\Oracle\Middleware\oiam-ssl
\Oracle\Middleware\oracle_common\bin\orapki wallet create -wallet oif-wallet -auto_login

It asked me for the password I’d set earlier and then exited silently. But when I looked in oif-wallet, I found a shiny new file: “cwallet.sso.” And OHS started up and is listening on the ports I configured in ssl.conf.
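
If you want to double check the result, orapki can also list the wallet’s contents; run from the same parent directory:

\Oracle\Middleware\oracle_common\bin\orapki wallet display -wallet oif-wallet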

Recovering data from an NTFS laptop hard drive with MFT failures

A client brought me a Dell Inspiron 5150 and reported it wouldn’t boot.  Other techs had looked at it and reported a hard-drive failure.  I learned the drive was mechanically operable and that the NTFS file system had suffered a double MFT failure.  The MFT is the Master File Table, which you can read about in the Internals section of the Wikipedia NTFS article.  Without it, and with its backup lost as well, the OS and most recovery methods have no means of locating files on the physical media.  Detailed below is the process I used to analyze the problem and then to recover much of the data from that filesystem.
First, I used an IDE to USB adapter to connect the hard drive, removed from the laptop, to my Linux workstation.  I use Ubuntu 12.04 right now, but that’s not important.  I had to work as root.  Here’s how it went, blow by blow.

I used dd_rescue to duplicate the entire drive into a raw .img file on my hard drive so I could work from a copy instead of the original.  The command line was simply:

dd_rescue -v -l /media/laptop-sdb.log /dev/sdb /media/laptop-hdd.img

Then I tried “loop mounting” the NTFS partition in that image to a mount point on my filesystem.  First, I looked at the partition table to find out where in the image the first partition starts:
root@mjdlnx2:/media# fdisk -l laptop-hdd.img
Disk laptop-hdd.img: 60.0 GB, 60011642880 bytes
255 heads, 63 sectors/track, 7296 cylinders, total 117210240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf52bcf0e
   Device Boot      Start        End      Blocks  Id  System
laptop-hdd.img1 *       63  117194174   58597056   7  HPFS/NTFS/exFAT
Now I know there’s a single partition, likely holding an NTFS. It starts at sector 63, which is normal. Now I need to know how many bytes that is so I can tell the loop device setup command (losetup on Linux) where in the hdd img file to find the start of the partition. There are 512 bytes per sector, according to the fdisk output above.
root@mjdlnx2:/media# echo '63 * 512' | bc
32256
So, that’s 32,256 bytes. I’ll use that as an argument to losetup next, and then mount the loop device losetup creates for us. You’ll see the error messages from the mount command that tell me the MFT is bad:
root@mjdlnx2:/media# losetup -o32256 /dev/loop0 /media/laptop-hdd.img

root@mjdlnx2:/media# mount /dev/loop0 /media/t/
ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 1024 usa_ofs: 32296 usa_count: 35161: Invalid argument
Record 0 has no FILE magic (0x43425355)
Failed to load $MFT: Input/output error
Failed to mount '/dev/loop0': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
root@mjdlnx2:/media#
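
(An aside I picked up later: mount can take the offset directly and skip the separate losetup step. It fails the same way on this broken filesystem, of course, but it’s a handy shortcut on a healthy image, and read-only is safer while investigating:)

mount -o loop,ro,offset=32256 /media/laptop-hdd.img /media/t/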

OK, so I can see something is wrong. I’d already tried chkdsk earlier and had similar “can’t do it” reports from that program.  So I used a program called testdisk on my Linux workstation next.  It’s a menu-driven tool you run in a terminal window, so I’m not going to show everything I did with it. Here are the steps, though:

  1. at the command-line, I start testdisk like this:
    testdisk /media/laptop-hdd.img
  2. select media laptop-hdd.img and “proceed”
  3. choose [Intel] partition table type
  4. I did an [Analyse] and, in there, a [Quick Search], then “P” to list files; it told me the file system is damaged.
  5. Then I quit back to the main menu and went to [Advanced], [Boot].  It says the boot sector and its backup are OK and identical.
  6. I tried to use [Repair MFT] since it’s the MFT that mount complained about.
  7. Got this message: “MFT and MFT mirror are bad. Failed to repair them.”
  8. That’s the end of this game.

The Master File Table (MFT) and its backup were both corrupt, making most normal recovery techniques impossible, since most of those methods rely on copying the backup MFT over the primary.  Once I’d discovered that, I purchased a license for Zero Assumption Recovery’s data recovery tool.

I have my Windows 7 workstation in a VirtualBox VM.  So, I made a copy of the laptop-hdd.img file with VBoxManage, converting it on the fly into a .vdi that my VM could attach as a D: drive.  Here’s that command:

cd ~/VBox/vdi
VBoxManage convertfromraw /media/laptop-hdd.img laptop-hdd.vdi

Then I started up the VirtualBox UI, edited my VM storage settings to add the new laptop-hdd.vdi as a disk on my virtual SATA controller, and started my VM.  I installed the ZAR program I’d bought and used its dead simple interface to do everything else.  It scanned the drive and assembled what it found into a tree structure that was largely intact.  It stored the many unidentifiable files and directories in two new root-level directories called “Lost Directories” and “Lost Files.”

Once the recovery was finished, I simply marked what I wanted copied in the ZAR recovery interface, “aimed” it at a folder on my VM’s desktop and clicked “go” or whatever it said.  When that was done, I just copied that folder to a 65GB USB stick for my client and away she went, mostly happy.

Weblogic AdminServer refused to start – truncated system-jazn-data.xml

We rebooted the computer on which we have OID and OVD installed. When it started up, we noticed the AdminServer wasn’t running. We have our system configured to start NodeManager, which should start the AdminServer.  When it didn’t, I went to the DOS prompt and used startWeblogic.cmd so I could easily see the output.  Here’s the relevant bit:

<Aug 14, 2012 11:58:27 AM EDT> <Info> <Security> <BEA-090065> <Getting boot identity from user.>
Enter username to boot WebLogic server:weblogic
Enter password to boot WebLogic server:
<Aug 14, 2012 11:58:33 AM EDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
<Aug 14, 2012 11:58:33 AM EDT> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>
Error at line:990 col:27 ' ' expected a valid beginning name character
   at weblogic.xml.babel.scanner.Name.read(Name.java:33)
   at weblogic.xml.babel.scanner.Name.read(Name.java:20)
   at weblogic.xml.babel.scanner.OpenTag.read(OpenTag.java:58)
   at weblogic.xml.babel.scanner.Scanner.startState(Scanner.java:251)
   at weblogic.xml.babel.scanner.Scanner.scan(Scanner.java:178)

There was a whole slew of crud like this, miles and miles of java spew.  I did find this also, though, that helped:

weblogic.security.SecurityInitializationException: The loading of OPSS java security policy provider 
failed due to exception, see the exception stack trace or the server log file for root cause. If 
still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. 
Error message: oracle.security.jps.JpsException: [PolicyUtil] Exception while getting default policy Provider

After searching Google for a while, I found myself reading about jazn-data.xml in a post by Kavita (thanks!).  That led me to search our computer’s file system below C:\Oracle for “jazn-data.xml.”  I found many of them.  I noticed, though, that there were a bunch named system-jazn-data.xml, and there was one in C:\Oracle\Middleware\user_projects\domains\ANSYSPROD\config\fmwconfig.  I opened that one and found that it was truncated about halfway down.

Noticing that all the other system-jazn-data.xml files I’d seen in the search results were dated the same and were the same size, I took a chance and copied one of them into place, after backing up the truncated version, and voila!  It worked!
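
For the record, the hunt and the fix at the DOS prompt looked roughly like this (the good-copy path is whichever twin you pick from the dir listing):

cd C:\Oracle
dir /s /b system-jazn-data.xml

cd C:\Oracle\Middleware\user_projects\domains\ANSYSPROD\config\fmwconfig
copy system-jazn-data.xml system-jazn-data.xml.truncated
copy <path-to-a-good-copy>\system-jazn-data.xml .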


Weblogic SSL is a screaming baby in the night.

See, here’s the silver lining in the very dark cloud of sysadmin hell I find myself in lately: I’ve learned that I never, EVER, want to have to dig my way through an Oracle product again. So, there’s that.

Figuring out the SSL stuff between the nodemanager, admin servers and managed servers feels sort of like trying to figure out why my three month old baby is screaming all night long again. I’m reading anything I can find about baby sleep, night terrors, colic, anything I can get my hands on that looks remotely related! I’m asking – no, begging every parent I know: “tell me! What can I do!” Nothing works reliably. And what worked last night won’t tonight.

Seems really familiar, but see, in this nightmare, I’m a SINGLE, WORKING DAD! No partner to take turns with.

And someone keeps sneaking into the room after I go to sleep (hah!) and scattering caltrops all OVER the floor between my bed and the crib. Wait, no, those are legos.

Need SANs? Creating a JKS keystore with openssl and keytool.

I needed to buy a single SSL cert from Verisign that works for two hostnames and can be installed on nine servers.  Wow.

To do that, you buy a SAN (Subject Alternative Name) SSL Cert.  I’m installing this cert on nine Windows 2008 R2 based Oracle Weblogic 10.3 managed servers (web servers).  They’ll be behind load balancers that hold the IP to which the two hostnames resolve.  Weblogic 10.3 managed servers easily let you point them at a java keystore to get the SSL cert so they can serve HTTPS.  Nice.  So, I need a keystore with this SAN Cert in it.  Keystores also hold the private key that identifies the server. Hmmm…

Oracle recommends using either Sun’s Cert Util – part of the Weblogic installation – or the java keytool program to create your private key and generate a certificate signing request (CSR), which is what you send to Verisign (or whomever) to get your SSL cert.  However, neither of those can put the subjectAltName extensions into the CSR, and openssl can.  I figured there must be some way to get the stuff openssl creates into a java keystore, so I set out across the GoogleScape and found this guy Nick, explaining how to do it!  There’s also a cunning guy with the useful tidbit about using -alias 1.  Gotta have that.

Well, almost.  They showed me how to use keytool to turn a pkcs12 file into a java keystore.  They’d left one hole in the road, though: how do I make that pkcs12 file?  Well, I learned that a pkcs12 file is a container holding both the private key and the corresponding signed public key, and that openssl builds one from PEM-format inputs.  Openssl outputs PEM by default, so my private key was already in the right format.  And my self-signed cert is in PEM too – I used openssl to make it for my experiment.  I don’t think the cert from Verisign is going to be in PEM, but I know I can convert it – I’ve done that before.
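
(That conversion, for reference, is a one-liner; the filenames here are illustrative:)

openssl x509 -inform der -in verisign-cert.cer -out verisign-cert.pem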

So I used openssl to create my public and private key pair in PEM and then used it to generate a certificate signing request (CSR) that includes the subjectAltName x509 v3 extensions.  Here’s a blow by blow, actually tested recipe for doing it, using openssl 1.0.1 and the keytool program in JRE 1.6.26.

In this demo, I’m naming the files “self.*” just to keep them short and to indicate we’re working with a self-signed cert. In real life, you’d name your files something meaningful to your use. Like, I might use “oiam-external-ssl.*”

Start by making a brand new private key. Put the keylength you want as the number of bits at the end of the command line. I use 2048 – a good balance between strength and speed of operations. I don’t put a passphrase on this key

openssl genrsa -out self.key 2048

From here down, I break commands into lines to make them easier to read and understand. You could paste these as they are because I’ve put Bash line continuation characters at the end of each line. However, you may want to join them up again. If you do, keep the pieces in order because some of that order is important. Mostly the first two or three lines.

Make the CSR. Your subjectAltNames are in the openssl.cnf named in the -config option.

openssl req -new \
 -config openssl.cnf \
 -key self.key \
 -out self.csr
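
In case your openssl.cnf needs the same surgery mine did, here’s the shape of the relevant pieces – hostnames are placeholders, and your SANs go in the v3_req section:

[ req ]
prompt             = no
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[ req_distinguished_name ]
CN = www.example.com
O  = Example Inc.
C  = US

[ v3_req ]
subjectAltName = DNS:www.example.com, DNS:mail.example.com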

If you buy a cert, you skip this step

In the test, sign the CSR (public key) with your own private key. Because we’re using the same key to sign that we used to make the CSR, we’re producing a “self-signed” certificate.

openssl x509 -req \
 -days 1450 \
 -in self.csr \
 -signkey self.key \
 -out self.crt

To make the next step a bit easier, cat the private key and signed public key files together into one. They’re BASE64-encoded blocks, commonly called PEM encoded, so they won’t get mixed up. (Name the two files explicitly; a glob like self.* would also sweep in the CSR – and self.pem itself on a second run.)

cat self.key self.crt > self.pem

Create the PKCS12 file and we’ll use java keytool on that to make our keystore.

Notes:

  • use the -name option to make the “alias” that marks the item in the keystore. You’ll look for this name/alias in many places in the future – choose a meaningful name/alias here.
  • Export Password: don’t leave it blank. Keytool requires that the keystore it imports from have a password, so set one. I use “changeit” but delete this PKCS12 file as soon as I know I have a working keystore.
openssl pkcs12 -export \
 -in self.pem \
 -out self.p12 \
 -name self-test

Create the keystore! Set a real, safe, strong password when you are asked for one. This keystore will be around a long time and you don’t want it compromised easily, right?

keytool -importkeystore \
 -srckeystore self.p12 -srcstoretype pkcs12 \
 -destkeystore self.jks -deststoretype jks
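
About that -alias 1 tidbit: if your pkcs12 came from somewhere that didn’t set a friendly name, the entry’s alias may just be “1.” You can list the p12 to see, and rename the entry on the way in with -srcalias and -destalias:

keytool -list -keystore self.p12 -storetype pkcs12

keytool -importkeystore \
 -srckeystore self.p12 -srcstoretype pkcs12 \
 -srcalias 1 -destalias self-test \
 -destkeystore self.jks -deststoretype jks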

And that’s it. You can verify you’ve got a functioning keystore with this command. You’ll need the password you just set. I’ll put the output I see in the demo just below so you know what to expect from both commands:

COMMAND:
keytool -list -keystore self.jks

OUTPUT:
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
self-test, Jun 1, 2012, PrivateKeyEntry,
Certificate fingerprint (MD5): AE:B5:3B:F5:DD:42:6F:38:C2:BA:EF:57:B2:26:12:AB

# And here’s the verbose command and output. See the -v after the -list option?

COMMAND:
keytool -list -v -keystore self.jks

OUTPUT:
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
Alias name: self-test
Creation date: Jun 1, 2012
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=oif.ansys.com, OU=Information Technology, O=ANSYS Inc., L=Canonsburg, ST=Pennsylvania, C=US
Issuer: CN=oif.ansys.com, OU=Information Technology, O=ANSYS Inc., L=Canonsburg, ST=Pennsylvania, C=US
Serial number: 84xcfasdfxcvxca8
Valid from: Fri Jun 01 16:05:49 EDT 2012 until: Tue May 31 16:05:49 EDT 2016
Certificate fingerprints:
 MD5: AE:B5:3B:F5:DD:42:6F:38:C2:BA:EF:57:B2:26:12:AB
 SHA1: 84:5B:7F:A0:A0:88:DC:EE:E7:BB:9C:90:6D:04:B1:53:65:A2:11:BD
 Signature algorithm name: SHA1withRSA
 Version: 1
*******************************************
*******************************************