SysAdmin's Journey

SCSA vs RHCE

After taking and passing both my SCSA and my RHCE exams this year, it’s time to reflect. Read on for the pros and cons of each from a student’s point of view.

SCSA

The Sun Certified Solaris Administrator certification is obtained by taking two exams, both entirely multiple-choice and drag-and-drop. The first test costs $300 and consists of 59 questions; you have 2 hours in which to complete them. Anything over 61% is passing, and you walk out of the testing center with your grade in-hand. You must pass this first test before moving to the second test.

The second test also costs $300, and consists of 60 questions. Anything over 63% is passing, and you have 105 minutes to complete the second test. Like the first test, all the questions are multiple choice or drag-and-drop, and you walk out with your grade in-hand.

The time limits on both exams were adequate for me. I had enough time to go through once, then review all my answers at leisure with a little time remaining.

Find out more from Sun’s SCSA home page.

RHCE

The Red Hat Certified Engineer certification is obtained by taking one exam. The exam costs $799, and you have 3.5 hours to finish the exam. The exam includes content from the RHCT and the RHCE – you must score 70% or higher on the RHCT portion, and 70% or higher on the RHCE content to get your RHCE. It is possible to get over 70% on the RHCT portion, but get less than 70% on the RHCE content; in this case you will obtain your RHCT (think of it as a consolation prize). You will not know your score until some time after your exam. I took my exam on Friday and got my score Tuesday afternoon.

The exam is what Red Hat calls a “performance-based” test environment. There are no multiple choice, drag-and-drop, or written answers. You sit down at a machine, and the questions tell you to do things to that machine. Red Hat doesn’t care how you get the machine there; they only care about the end state of the machine.

The time limit on the exam was enough for me to go once through, and then go back and double check my answers one time with only a couple minutes to spare. I actually found some mistakes on my review, so it was time well spent. The nature of the performance-based exam means that you may not be able to proceed to the next step until you finish the current step, so if you get stuck, you may find yourself short on time.

Start your research at Red Hat’s RHCE home page.

My Opinion

I’m under NDAs for both exams, so I’m limited in what I can say. Here are a few points to ponder:

  • RHCE
    • I feel like a huge nerd for saying this, but the RHCE was the most fun I’ve ever had during a test.
    • You don’t get access to the Internet, but you do have access to the commands and the man pages just like you would in the real world.
    • The RHCE does not test your memorization skills, your ability to “read between the lines”, or your test-taking ability. It tests your ability to administer a Red Hat Linux system.
  • SCSA
    • The SCSA covers, in my opinion, much more material – in both depth and breadth.
    • There are what I would call trick questions, and you are tested on your ability to memorize. You have to memorize command-line flags, and many times you have to really read into the question to determine the right answer. I honestly left the testing center angry, because I felt I knew the answers to much of what I missed.

This could be due to my background knowledge going into the two exams, but it felt to me like the SCSA exam focuses more on the core OS, whereas the RHCE puts more focus on third-party daemons. For example, setting up an IMAP server is on the RHCE Exam Prep page, but not addressed by the SCSA Objectives.

Both exams have free online assessments you can do to find out where you stand, both have the objectives of the exam listed online, and all the study material you need is available free online.

Study Materials Used

Personally, for the RHCE, I took the RH300 “Rapid Track” course, which served as a nice refresher before taking the exam. I think I would have obtained my RHCE without the course, but I certainly wouldn’t have done so well. Aside from the course, I didn’t buy any books or anything. Oh, but I have used Red Hat since Red Hat Linux 5.2 - that’s gotta count for something.

For the SCSA, I bought Sun’s Web-Based training course. After taking the first exam, I felt the course was inadequate in preparing me for the exam, so I picked up Bill Calkins’ Solaris 10 Exam Prep CX-310-202 book. It was excellent, and I strongly recommend it to anyone looking to take the exam. The book actually hadn’t been printed yet when I needed it, so I picked it up pre-press via Safari’s RoughCuts program. I actually had quite a few emails back and forth with the author – he knows his stuff and does an excellent job teaching the reader.

In Closing

In the end, I must say that I feel that I learned more by obtaining my SCSA, but much of that knowledge has since “fallen away”. I remember the technologies, but I’ve forgotten the command-line flags I was forced to memorize. The RHCE, on the other hand, was almost easy for me. It tests not what you’ve memorized, but what you know.

I’m glad I have both, but I think all certifications should take Red Hat’s lead and switch to that testing format. I feel it’s far and away a better real-world testing method.

Using Fssnap and Ufsdump to Create Point-in-time Backups of Mounted UFS Partitions in Solaris 10

With all the (deserved) hype about ZFS, there are still a lot of systems out there that make use of UFS. And for all the things ZFS can do, there are still some things it can’t (incompatibility with flash archives and POSIX ACLs are examples). I basically needed to make an image of a T1000 that had some non-global zones installed, stick it in a lab for a couple weeks, and then return it to its previous state. Since this server had non-global zones, using flars was questionable, so I decided to use fssnap and ufsdump to make my backups.

The best part about UFS is that while it may not have the latest and greatest features, what features it does have are rock-solid stable and supported. ufsdump has been around for a long time, but it’s only safe to run against unmounted (or idle) slices. To do a ufsdump of your / mount, you either need to boot to rescue media, or create a snapshot and run ufsdump against that.

For this example, we’ll assume that you have just two slices – / and /apps. The first step is to find out where to store your backing store files. A backing store file is not the size of the entire slice; it only holds the changes made to the slice since the snapshot was taken. Let’s say your /apps mount is 40GB but seldom changes - your backing store file will likely be less than 512MB in size. Nonetheless, the backing store must not reside on the same partition that you’re snapshotting. For our example, we’ll assume that we have a third slice available, mounted at /snaps.
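
Before going any further, it doesn’t hurt to confirm that the backing store slice really is separate from the slices you’re snapshotting and has some headroom. Something as simple as this will do (the mount points are just the ones from this example):

# df -h / /apps /snaps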

Before creating your snapshot, it’s best to get the system into a state where things are as quiet as possible. The best way to do this is to switch to single user mode, but you can do whatever you like here. Issue the following two commands to create your snapshots:

# fssnap -F ufs -o bs=/snaps/root.back.file /
/dev/fssnap/0
# fssnap -F ufs -o bs=/snaps/apps.back.file /apps
/dev/fssnap/1

You can see here that it has created two devices for us that represent the snapshots. Note that these commands may take 20 seconds or so to return to the shell. Once your snapshot devices are created, you may return the system to a normal state. Once you’re back to normal, we need to create our UFS dumps, using our snapshot devices as the source. In our example, we have an NFS mount at /mnt/shared that has all the room we need.
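
If you want to double-check which snapshots exist before kicking off the dumps, running fssnap -i with no other arguments lists them. For this example, the output should look roughly like this:

# fssnap -i
   0  /
   1  /apps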

Now, let’s create our UFS dump files:

# ufsdump 0uf /mnt/shared/root.ufsdump /dev/rfssnap/0 
  DUMP: Date of this level 0 dump: Tue Aug 25 08:49:31 2009
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rfssnap/0 to /mnt/shared/root.ufsdump.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 32 Kilobyte records
  DUMP: Estimated 21955062 blocks (10720.25MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 44.74% done, finished in 0:12
  DUMP: 94.38% done, finished in 0:01
  DUMP: 21955006 blocks (10720.22MB) on 1 volume at 8638 KB/sec
  DUMP: DUMP IS DONE
  DUMP: Level 0 dump on Tue Aug 25 08:49:31 2009
# ufsdump 0uf /mnt/shared/apps.ufsdump /dev/rfssnap/1 
  DUMP: Date of this level 0 dump: Tue Aug 25 08:49:48 2009
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rfssnap/1 to /mnt/shared/apps.ufsdump.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 32 Kilobyte records
  DUMP: Estimated 80736236 blocks (39421.99MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 11.32% done, finished in 1:18
  DUMP: 19.82% done, finished in 1:20
  DUMP: 21.32% done, finished in 1:50
  DUMP: 22.99% done, finished in 2:14
  DUMP: 24.85% done, finished in 2:31
  DUMP: 26.69% done, finished in 2:44
  DUMP: 28.71% done, finished in 2:53
  DUMP: 30.93% done, finished in 2:58
  DUMP: 32.57% done, finished in 3:06
  DUMP: 34.46% done, finished in 3:10
  DUMP: 36.08% done, finished in 3:14
  DUMP: 38.21% done, finished in 3:14
  DUMP: 40.29% done, finished in 3:12
  DUMP: 43.34% done, finished in 3:03
  DUMP: 50.89% done, finished in 2:24
  DUMP: 64.35% done, finished in 1:28
  DUMP: 78.02% done, finished in 0:47
  DUMP: 88.56% done, finished in 0:23
  DUMP: 97.63% done, finished in 0:04
  DUMP: 99.83% done, finished in 0:00
  DUMP: 80736126 blocks (39421.94MB) on 1 volume at 3347 KB/sec
  DUMP: DUMP IS DONE
  DUMP: Level 0 dump on Tue Aug 25 08:49:48 2009

As you can see, the /apps mount was quite large, but even after the backup, the backing store file was less than 30MB when I was done. Make sure you remember to remove your snapshots when you’re done with them:

# fssnap -d /
# fssnap -d /apps
# rm /snaps/*.back.file
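
If you’d like a quick sanity check that the dump files are actually readable, ufsrestore can list an archive’s table of contents without restoring anything:

# ufsrestore tf /mnt/shared/root.ufsdump | head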

Stay tuned for how to restore these ufsdump files!

RHCE!!!

Just got my test results back – I got 100% on my exam, so now I’m an RHCE!

Don't Reboot After Adding Partitions - Partprobe!

Another one of those simple topics that got about half the class buzzing with excitement came up today. After running fdisk, you will almost always get a warning that the kernel is still using the old partition table you just modified. Before GNU released parted, you had to reboot for the kernel to purge its cache and reload the partition table, but now all you need to do is run partprobe after exiting fdisk. AFAIK, partprobe is included in nearly every distro.
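
A typical session looks something like this (the device name is just an example):

# fdisk /dev/sdb          # add the new partition, write the table, and quit
# partprobe /dev/sdb      # ask the kernel to re-read the partition table
# cat /proc/partitions    # the new partition should now show up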

Iptables Options in RHEL/CentOS

Today in class we were talking about how you need to save your iptables changes with service iptables save before rebooting at the end of the test, or else you’ll fail that section. I brought up setting IPTABLES_SAVE_ON_STOP to “yes” in /etc/sysconfig/iptables-config, and no one else knew about that file. There are some pretty cool settings in there - read on for details.

The file /etc/sysconfig/iptables-config provides a place to configure the behavior of the iptables initscript in /etc/init.d/iptables. The file is documented very well, so give it a quick read. Here’s some of the more interesting settings:

  • IPTABLES_SAVE_ON_STOP - this defaults to “no”. When set to “yes”, every time the initscript is called with the argument of “stop” (whether via the command line or via system shutdown), the initscript will take the current iptables ruleset and dump it into /etc/sysconfig/iptables. Essentially, this is doing a service iptables save behind the scenes when you do a service iptables stop. This is great for sysadmins who get distracted and forget to commit their iptables changes to persistent storage.
  • IPTABLES_SAVE_ON_RESTART - defaults to “no”. When set to “yes”, it does the exact same thing as IPTABLES_SAVE_ON_STOP, except the save happens when the initscript is called with the “restart” option.
  • IPTABLES_SAVE_COUNTER - defaults to “no”. When set to “yes”, every time service iptables save is called (including in the two cases above), the rule and chain counters are saved to the file and restored on startup. This prevents your counters from being reset every time you restart the service.
  • IPTABLES_STATUS_NUMERIC - defaults to “yes”. When set to “yes”, service iptables status prints IPs instead of hostnames. When set to “no”, it does reverse DNS lookups on all the IPs and /etc/services lookups on all the ports.
  • IPTABLES_STATUS_VERBOSE - prints packet and byte counters in the output of service iptables status.
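
Pulled together, the interesting chunk of /etc/sysconfig/iptables-config might look something like this (the values below are just my example preferences, not the RHEL defaults):

# Save the running ruleset to /etc/sysconfig/iptables on stop and restart
IPTABLES_SAVE_ON_STOP="yes"
IPTABLES_SAVE_ON_RESTART="yes"
# Don't bother persisting packet/byte counters across restarts
IPTABLES_SAVE_COUNTER="no"
# Show raw IPs and ports, plus packet/byte counters, in "status" output
IPTABLES_STATUS_NUMERIC="yes"
IPTABLES_STATUS_VERBOSE="yes"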

There are a few other settings in there, but these are the ones that I’m usually interested in. Happy firewalling!

RedHat 6 Tidbits

During my RH300 course, my instructor mentioned that RHEL 6 is likely to come out sometime in Q1 2010. I wanted to know more about it, so I hit Google and came up with some interesting results.

This post over at linuxquestions.org starts out innocently enough – someone asks for an expected release date for RHEL 6. A poster named lazlow, who appears to be a Red Hat or Fedora dev, gives a few interesting tidbits:

  • RHEL 5 is based on Fedora Core 6. No wonder it feels a little long in the tooth!
  • RHEL 6 was intended to be based upon Fedora 9, but it had too many bugs to even be considered.
  • Since Fedora development is driven by the community, the focus has shifted toward new features. I’ve seen this before in community-driven projects: unless devs are motivated by cash or the bug affects them directly, no one wants to fix bugs. New features are more fun to work on.
  • To solve the problem, Red Hat didn’t take the project out of the community’s hands; they paid their own devs to fix Fedora bugs. This is very commendable behavior for a big corporation, and I feel it’s a win/win for Red Hat and the community.
  • It looks like RHEL 6 will be based upon Fedora 11.

Now, this could all be someone spouting off about things they don’t know anything about, but it looks like it checks out to me. Some pretty interesting tidbits, and (if true) an example of a corporation contributing to OSS and making money off of it. If anyone can confirm or deny this information, please do so!

Happiness Is *NOT*...

I bit the bullet and jumped on a sweet deal on a latest-gen 17" MacBook Pro late last week. It was a refurb, and I was too cheap to pay for quick shipping, so Apple told me it wouldn’t ship for 5-7 business days. Whatdya know, they were on the ball and shipped it out early. It arrived at my desk on Monday. Normally this would be good, except that I’m almost 400 miles away from my desk and won’t be back until Friday! Arrrggghhh! Oh well, I probably wouldn’t get any studying done if I had it with me!

Trying for My RHCE

This week, I’m off to my RH300 course which involves taking my RHCE exam on Friday. It’s funny – studying one thing from 9-5 without any distractions or multitasking truly feels like a vacation to me. After 5pm, I go back to my room and play. I’m pumped! Wish me luck!

Hudson > (CruiseControl * 2)

CruiseControl and I have never really gotten along. When you’re a Java shop, you have to use continuous integration. In fact, if you’re a code shop, you need CI. For the longest time, CruiseControl was the only kid on the block. I’d heard about Hudson quite a bit, but I didn’t take the time to try it. Why not? Well, because CI is hard and takes forever to get set up right – I didn’t want to have to re-invest all that time. Man, if only I’d known how wrong I was.

Everyone’s gripe with CruiseControl was that you had to edit XML files to do the configuration. Well, I don’t mind XML, and it’s often pretty good for config files. But CruiseControl was always quirky. Switching from CVS to SVN? A day’s worth of work. Adding a new build? At least an hour or two. Little things: CruiseControl would freak out and die if you didn’t do the initial checkout from CVS/SVN yourself - CruiseControl only does updates, not checkouts. We often joke that the CruiseControl developers’ favorite motto is “let the sucker sysadmin deal with it”.

So, I downloaded Hudson, and in less than 10 minutes I had everything that was being done in CruiseControl working in Hudson. And, I’m being honest here, I actually smiled a few times to myself when setting it up! It took another 20 minutes, and I have authentication working against our LDAP server, which I never had working in CruiseControl.

If you’re running CruiseControl now, drop everything, do yourself a favor, and go try Hudson. If it doesn’t do what you want, it has plugins that do. It has APIs for XML, JSON, and Python, and the XML implementation has full XPath support. Every field in the web interface has inline help that is actually helpful. Having different projects use different Javas and Ants is a click away. You can build multiple projects at once, create build dependencies, and even have distributed builds run amongst multiple machines.
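
As a quick taste of the remote API, here’s roughly what poking at it with curl looks like (the hostname is made up - follow the “api” link at the bottom of any Hudson page for the exact URLs your instance supports):

# Dump the list of jobs and their last build status as JSON
curl http://hudson.example.com/api/json

# The same data as XML; tack on an xpath= query parameter to trim it down
curl http://hudson.example.com/api/xml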

Please, I’m begging you. Give Hudson a try, and get back some of your life from CruiseControl! If you’re not running CruiseControl or Hudson, then you probably should be.

Forcing Apache's Mod_deflate Module to Compress JSP's From Weblogic

This is one of those “note for myself, and maybe it will help someone else” posts. When you use Apache and mod_weblogic as a frontend to a WebLogic application server, you will likely want to compress your output. It makes sense to put the load of compression on the webservers, since the application servers are busy doing other things.

With all the buggy browsers out there, blindly gzipping everything can cause a lot of issues, so most people end up with a stanza such as this in their config:

AddOutputFilterByType DEFLATE text/html text/css application/x-javascript text/plain
# Instead of a blacklist, we use a whitelist:
BrowserMatch "MSIE [6-9]" gzip

Well, unfortunately, this will not catch your JSP files. I think it has to do with the way that Weblogic is passing through the MIME type as well as the order of filters in the chain. No matter the exact cause, here is the fix:

<LocationMatch ".*\.jsp$">
     ForceType text/html
</LocationMatch>

This simply forces Apache to treat all files ending in .jsp as type text/html. That happens before the mod_deflate filter is applied, and therefore your JSPs will be gzipped!
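
A quick way to verify the fix is to request a JSP with an Accept-Encoding header and a User-Agent that matches the whitelist above, then look for Content-Encoding: gzip in the response headers (the URL and User-Agent string are just placeholders):

# "Content-Encoding: gzip" in the output means mod_deflate did its job
curl -s -o /dev/null -D - -A 'Mozilla/4.0 (compatible; MSIE 7.0)' \
     -H 'Accept-Encoding: gzip' http://www.example.com/somepage.jsp | grep -i content-encoding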