Ubergiek’s Uberlab (Server Build)

I wanted to take this time to give you the rundown on my home lab. It took a lot of time and research to put together, so I hope I can save others some of that effort. vm-help.com, Newegg.com, and good old-fashioned Googling helped me piece together unsupported hardware into serviceable white boxes for ESXi 4.x. I learned a lot about the hypervisor's hatred of the Realtek 8139 and 8168 chipsets, and that trying to hack the kernel modules can be quite frustrating. I eventually called no joy and settled for Intel PCI cards.

My lab consists of six ESXi 4.x white-box servers and an Openfiler SAN. I know you are here for the stuff that actually works, so without further ado, here is the list of gear:

First purchase (I convinced my wife the equipment listed below was all I needed). I built my first two ESXi 4.x hosts with these specs, using parts from newegg.com:

Item Number | Description | Cost
N82E16814125251 | GIGABYTE GV-R435OC-512I Radeon HD 4350 | $35.00
N82E16819115131 | Intel Core2 Quad Q9400 2.66GHz | $190.00
N82E16820231166 | G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 | $80.00 (2)
N82E16813131347 | ASUS P5Q SE PLUS LGA 775 Intel P45 | $100.00
N82E16817182074 | Rosewill Stallion Series RD400-2-SB 400W ATX | $35.00
N82E16811147074 | Rosewill R220-P-BK Black SECC Steel ATX Mid | $30.00

At this point, I figured the two onboard 1GbE interfaces would suffice. Little did I know, I would be thrown head-first into the rest of the community's attempts to hack the unsupported Realtek hardware into the ESX kernel. After hours of "fun," I went ahead and purchased a few Intel NICs to solve the problem. Finally, ESXi installed happily, and the virtualization adventure began.
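If you want to skip my Realtek pain, the quickest sanity check before buying a board is the PCI vendor ID of its onboard NIC. Here is a minimal sketch (my own illustration, not something from this build) that assumes you can boot the box from a Linux live CD and run lspci -nn; it simply flags Intel (vendor 8086) versus Realtek (vendor 10ec) Ethernet controllers so you know whether to budget for an add-in card.

```python
# Sketch: flag onboard Ethernet controllers by PCI vendor ID using `lspci -nn`.
# Intel (8086) parts generally had in-box ESXi 4.x drivers; Realtek (10ec) usually did not.
import re
import subprocess

KNOWN_VENDORS = {
    "8086": "Intel (usually supported)",
    "10ec": "Realtek (usually unsupported)",
}

def list_ethernet_vendors():
    """Parse `lspci -nn` output and report the vendor of each Ethernet controller."""
    output = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
    for line in output.splitlines():
        if "Ethernet controller" not in line:
            continue
        # lspci -nn appends the IDs as [vendor:device], e.g. [10ec:8168]
        match = re.search(r"\[([0-9a-f]{4}):([0-9a-f]{4})\]", line)
        if match:
            vendor_id = match.group(1)
            verdict = KNOWN_VENDORS.get(vendor_id, "unknown -- check the VMware HCL")
            print(f"{line.strip()}\n  -> vendor {vendor_id}: {verdict}")

if __name__ == "__main__":
    list_ethernet_vendors()
```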

Intel Goodness

Item Number | Description | Cost
N82E16833106121 | Intel PWLA8391GT 10/100/1000Mbps PCI | $35.00 (2)

All seemed well until I started seeing spontaneous reboots and the infamous ESXi PSOD (Pink Screen of Death). Annoying as it was, I could not quite pin down the cause. It appeared to be a problem with memory timing, but adjusting the memory's timing values in the BIOS did not yield better results. I later purchased some larger-capacity memory (see the second purchase below), which resolved the issue.

Funny tangent: apparently the PSOD is NOT a common occurrence. In a VMware training class I attended, my instructor mentioned the PSOD and said he had never seen one. I had to laugh as I explained that I saw it daily.


Second purchase (I earned my VCP4 and convinced my wife more equipment was needed). Again, multiply everything by two 🙂

Item Number | Description | Cost
N82E16833106121 | Intel PWLA8391GT 10/100/1000Mbps PCI | $30.00 (2)
N82E16820211392 | A-DATA 4GB 240-Pin DDR2 SDRAM DDR2 800 | $75.00 (2)
N82E16814125251 | GIGABYTE GV-R435OC-512I Radeon HD 4350 | $35.00
N82E16813128358 | GIGABYTE GA-EP45-UD3P | $130.00
N82E16819115131 | Intel Core2 Quad Q9400 2.66GHz | $190.00
N82E16833106033 | Intel EXPI9301CTBLK 10/100/1000Mbps PCI-Express | $29.00 (4)
N82E16817182074 | Rosewill Stallion Series RD400-2-SB 400W ATX | $35.00
N82E16811147074 | Rosewill R220-P-BK Black SECC Steel ATX Mid | $30.00
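For anyone pricing out a similar pair of hosts, here is a quick back-of-the-napkin tally. The part names and prices come straight from the list above; the quantities are my reading of the parenthesized counts, and the final doubling reflects the "multiply everything by two" note, so treat the totals as a rough estimate rather than gospel.

```python
# Rough tally of the second purchase, using the prices listed above.
# Quantities are my interpretation of the parenthesized counts; the x2 at the
# end mirrors the "multiply everything by two" note -- adjust to taste.
second_purchase = [
    ("Intel PWLA8391GT PCI NIC",           30.00, 2),
    ("A-DATA 4GB DDR2 800 DIMM",           75.00, 2),
    ("GIGABYTE Radeon HD 4350",            35.00, 1),
    ("GIGABYTE GA-EP45-UD3P motherboard", 130.00, 1),
    ("Intel Core2 Quad Q9400",            190.00, 1),
    ("Intel EXPI9301CTBLK PCIe NIC",       29.00, 4),
    ("Rosewill RD400-2-SB 400W PSU",       35.00, 1),
    ("Rosewill R220-P-BK mid tower",       30.00, 1),
]

per_host = sum(price * qty for _, price, qty in second_purchase)
print(f"Per host:  ${per_host:,.2f}")
print(f"Two hosts: ${per_host * 2:,.2f}")
```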


Memory Problems Resolved:

On a hunch that the culprit was memory timing, I swapped the 2GB Corsair DIMMs in the ASUS board for the A-DATA 4GB DIMMs. This not only resolved the random reboots, but also bumped the systems up to their maximum of 16GB. This made me extremely happy.

I recently found a great deal on the HP dc5700 desktops that gave me additional capacity in my lab. See my recent post on this HP gem for more details.

I know, I know. You are thinking, "I thought the point of virtualization was consolidation: reducing the physical footprint." You're right. But I would compare my desire for more cool toys in my lab to an automobile enthusiast who has to have a supercharged V8. There isn't really a practical reason for it, except that it makes other like-minded hobbyists drool.

I will follow up with the physical and logical layout in a later post. Stay tuned…

Thanks for visiting!

Ubergiek

Comments

Eric,

Thanks for visiting! You can expect a follow-up post on the SAN piece soon. I am currently using Openfiler 2.6 x64 with a single 1TB 7200RPM spindle. I have six spindles allocated for the SAN, but I recently moved from a single-core 3GHz/2GB RAM system to a more robust quad-core box and haven't had a chance to move the drives. The move wasn't strictly necessary; I did it to track down the cause of very high packet-drop counts, which I ultimately believe was a buggy Realtek R8-series driver. At any rate, I will post the specs on both SAN systems, but for now I will leave you with this: Openfiler works very well for an ESXi implementation, but Windows Server 2008 R2 failover clusters (Hyper-V) require SCSI-3 persistent reservations. Check out the Windows Storage Server 2008 iSCSI target (not available on 2008 R2 Standard without a hack) or the StarWind iSCSI target (free limited license) for the latter.

–Ubergiek

