VM and Lab Hardware

My lab now sits on upgraded hardware, as follows:

VM Host

Node 1

Dual Xeon X5687 CPUs (4 cores / 8 threads per CPU)

36GB DDR3 ECC RAM

3x 300GB 15K RPM drives in RAID 0


FreeNAS 9.10 Host

Intel Core i7-4790

32GB DDR3

Two 6-drive RAID-Z2 pools of 4TB drives, for a total of 12 4TB drives

Services: CIFS (AD-joined to the domain), ownCloud, HTTP manager



pfSense 2.3.1

Quad-core 2.8GHz CPU

8GB DDR2

400GB HDD



ISP cable modem (DOCSIS 3)

Linksys 24-port smart managed switch

Welcome to the start of my lab adventure

I have been in the IT industry for many years now, mostly in desktop support roles. I am at the point in my career where I am trying to get out of this role and move on to something better: I really want to get into system admin/server admin roles. In order to do this I need to start studying for server certifications, which requires that I get my own VM host for learning and studying purposes. I was recently given the opportunity to receive a Dell Precision T5500, which makes a perfect VM host. I then got some more RAM for it and a riser card so I could add a second CPU.



‘Doomsday’ worm uses seven NSA exploits (WannaCry used two)

I found this story interesting. I take no credit for it, but wanted to share it with everyone. Please see https://www.cnet.com/au/news/doomsday-worm-eternalrocks-seven-nsa-exploits-wannacry-ransomware/ for the story itself.


The recently discovered EternalRocks joins a set of highly infectious bugs created from the NSA’s leaked tools.


If the NSA’s leaked hacking tools had a Voltron, it would be EternalRocks.

On Sunday, researchers confirmed new malware, named EternalRocks, that uses seven exploits first discovered by the National Security Agency and leaked in April by the Shadow Brokers group. Experts described the malware as a “doomsday” worm that could strike suddenly.

Earlier this month, the WannaCry ransomware plagued hospitals, schools and offices around the world and spread to more than 300,000 computers. It uses two NSA exploits that were leaked by the Shadow Brokers, EternalBlue and DoublePulsar. A few days later, researchers found Adylkuzz, new malware that spread using those same exploits and created botnets to mine for cryptocurrency.

Now, there’s EternalRocks. Miroslav Stampar, a cybersecurity expert for Croatia’s CERT, first discovered the hodgepodge of hacks on Wednesday. The earliest findings of EternalRocks go all the way back to May 3, he wrote in a description on GitHub.

EternalRocks uses EternalBlue, DoublePulsar, EternalChampion, EternalRomance, EternalSynergy, ArchiTouch and SMBTouch — all tools leaked by the Shadow Brokers. Stampar said he found the packed hack after it infected his honeypot, a trap set to monitor incoming malware.

The majority of the tools exploit vulnerabilities with standard file sharing technology used by PCs called Microsoft Windows Server Message Block, which is how WannaCry spread so quickly without being noticed. Microsoft patched these vulnerabilities in March, but many outdated computers remain at risk.

Unlike WannaCry, which alerts victims they’ve been infected through ransomware, EternalRocks remains hidden and quiet on computers. Once in a computer, it downloads Tor’s private browser and sends a signal to the worm’s hidden servers.

Then, it waits. For 24 hours, EternalRocks does nothing. But after a day, the server responds and starts downloading and self-replicating. That means security experts who want to get more information and study the malware will be delayed by a day.

“By delaying the communications the bad actors are attempting to be more stealthy,” Michael Patterson, CEO of security firm Plixer, said in an emailed statement. “The race to detect and stop all malware was lost years ago.”

It even names itself WannaCry in an attempt to hide from security researchers, Stampar said. Like variants of WannaCry, EternalRocks also doesn’t have a kill-switch, so it can’t be as easily blocked off.

For now, EternalRocks remains dormant as it continues to spread and infect more computers. Stampar warns the worm can be weaponized at any time, the same way that WannaCry’s ransomware struck all at once after it had already infected thousands of computers.

What’s in my tech laptop bag, you ask




Lenovo L420
Intel Core i3-2350
8GB DDR3
320GB HDD

Dual boot:
Windows 10 Pro, set up with all needed tools

Fedora 25 Workstation, set up with all needed tools


1TB HDD, partitioned:

80GB partition loaded with boot tools, Windows installers, Linux installers, and troubleshooting tools

Partition set with BitLocker holding all my software and small apps
Partition set with BitLocker holding my programs library
Partition for pure storage


640GB USB 3.0 HDD for storage

USB-to-serial console cable
Network cable
Power cable for laptop
USB charging cable for phone
USB 3.0 cable

My remote/tech support/work laptop setup

Well, my remote/tech support/work laptop is finally set up the way I want it.
Windows 10 partition:
Windows 10
Private Disk
SCCM client view
Sysinternals Suite
PsTools
OpenVPN into home network
Domain account for homelab
OS partition set with BitLocker
Partition with software kept separate so it doesn’t trip corporate firewalls over something stupid, since AV flags things that aren’t viruses far too often

Dual booted with Fedora 25 Workstation:
Updated and set up with encryption
Remote tools
TeamViewer

How to Prevent WannaCry-Like Ransomware Attacks

I found this story interesting. I take no credit for it, but wanted to share it with everyone. Please see http://gadgets.ndtv.com/internet/features/how-to-prevent-ransomware-attacks-wanna-cry-1694527/ for the story itself and the video shared.


The WannaCry ransomware has caused a scare across the world within a few days of being discovered. The biggest ransomware attack yet, WannaCry was temporarily stopped in its tracks by a British researcher by registering an obscure web address, even as it infected 200,000 computers worldwide.

People soon created new WannaCry versions that could not be taken out with the original fix. And the scope of this ransomware is huge. Computers in over 150 countries have been hit, from police departments in India to schools and universities in China, and from Britain’s National Health Service to Telefónica in Spain.

The WannaCry hackers have demanded payments of $200 to $600 (roughly Rs. 13,000 to Rs. 38,000) in bitcoins from organisations as well as individual users whose computers had been infected, or else the data would be wiped.

Even after individual users and IT departments patch and update their systems, there are lingering concerns here. And if you would like to safeguard yourself against such attacks in the future, there’s quite a bit that you can do. Here are some basic things to keep in mind to protect yourself from ransomware attacks.

Never run files you don’t trust

Most computer worms, including WannaCry, spread themselves with the help of unwitting computer users who run a file that they don’t know enough about. These files are sent through emails as attachments, or via obscure URLs masquerading as safe links.

If you receive an email from an unknown source, or an executable file that you don’t trust, never click on it. Discard it into your junk/ spam folder, or delete the file, and empty the recycle bin.

Moreover, Windows OSes since Vista have a security feature called User Account Control, which restricts unauthorised programs, such as the ransomware in question, from full administrative access. If an unknown app brings up a UAC prompt, steer clear of giving it any such permission.

There are ways to safely execute an untrustworthy program, by running them inside a virtual environment. In such a scenario, the program can’t interact with any other files on your computer. Security researchers use this method to study malware but you shouldn’t try it if you don’t know what you’re doing.

Stay away from outdated and pirated OSes

The biggest reason for WannaCry’s success has been the fact that most institutions, corporations and government agencies had been running an unsupported version of Windows, or an outdated one – XP in most cases – owing to a lack of funding for their IT department. Malware like WannaCry rely on exploiting vulnerabilities in your system, and with Microsoft ending support for Windows XP in 2014, thousands of computers were at risk.

The other issue was that there’s a heavy culture of software piracy in countries such as India, China, and Russia, where businesses, and even government offices, were using pirated copies of Windows, which don’t always have the required security updates.

There’s also the fact that Windows XP is really old (it was released in 2001, 16 years ago), and the burden of security lies on the end-user too. As IT departments scramble to fix things around the world, they should implore their companies to either pay Microsoft for extended support contracts, or upgrade from outdated systems to newer versions to avert spread of ransomware such as WannaCry.

For an individual user, it’s obviously much easier. If you’re on an old Windows machine, and haven’t been infected yet, install Microsoft’s emergency patch MS17-010. In the future, stay away from pirated/ unsupported Windows since you won’t receive timely updates, and make sure you’re using a version – Windows 7, 8.1 or 10 – that will get security updates in the long run. If you don’t wish to pay, consider moving to a Linux distro.

Keep automatic updates on

Simply having the latest Windows OS installed – Windows 7, 8.1 or 10 – isn’t enough. In the case of WannaCry, only the users who had the most recent (May 2017) updates installed, and the latest Windows Defender virus definitions, were not vulnerable to the WannaCry ransomware attack. This goes to show how important the boring update cycle can be, and why you shouldn’t take it lightly.

Here’s how you can make sure you receive automatic updates on the supported Windows systems. If you don’t see some of the options below, make sure you’re logged in with an administrative account.

On Windows 7 –

  1. Head to Start > Control Panel > System and Security > Windows Update.
  2. On the left-hand side, choose Change settings.
  3. Under Important updates, make sure it says Install updates automatically (recommended).
  4. Check all the other boxes on the page, and then click OK.

On Windows 8.1 –

  1. Hit Win key + X, and click Control Panel.
  2. Head to System and Security > Windows Update.
  3. On the left-hand side, choose Change settings.
  4. Under Important updates, make sure it says Install updates automatically (recommended).
  5. Check all the other boxes on the page, and then click OK.

On Windows 10 –

  1. Hit Start key, and click on the Settings gear icon.
  2. Head to Update & security, and then click Windows Update on the left.
  3. On the right, choose Advanced options.
  4. Under Choose when updates are installed, make sure it says Current Branch, and that both the values for feature and quality update are set to 0.
  5. Check the first two boxes, and close the window.

Third-party firewall and anti-virus

The sheer ubiquity of Windows around the world means that hackers and criminals usually design their code for the most common environment, which includes the default Windows Firewall and Windows Defender. And though both are capable, they are far from perfect.

If you wish to increase protection, you should consider investing in a good firewall and anti-virus, ideally the best of each in its own regard. The two are often marketed together as ‘Internet security suites’ these days, but it’s better to go for an individual winner in each category for improved security.

We have a long list of anti-virus solutions – paid and free – that you can look at, and there are several firewalls – Comodo, Kaspersky, and ZoneAlarm among them – that make great contenders.

Most anti-virus and firewall programs also offer extended protection in the form of website filtering, which warns you of unsafe websites; network scans, which look at security issues with your router and network protocols; and a software updater, which makes sure that you aren’t using an outdated version of a program.

Some even offer a built-in password manager, a VPN solution, and a more secure browser. There might even be a sandbox option that helps you execute a file in a virtual environment, like we talked about earlier. And if you’re worried about an impact on your performance, there’s usually a ‘game mode’ option, as well.

Back up your important data regularly

Despite taking all the above precautions, there’s always a chance that your system can be compromised. If you’ve got sensitive data, always have a backup. Ideally, multiple ones.

It ensures that you won’t start sweating and break down if something happens to your computer. The basic rule about backups is that they should always be on a separate hard drive from your computer.

It can be something as simple as an external hard drive, a network-attached storage device with RAID functionality (it’s like having a backup of a backup), or an account with a subscription-based cloud service that regularly backs up all your important data.
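To make the separate-drive rule concrete, a scheduled backup can be as simple as a small script. Here is a minimal sketch in Python; the function and paths are my own illustration, not from the article, so point it at your own documents folder and backup drive:

```python
import tarfile
from datetime import date
from pathlib import Path

def backup(source_dir, backup_dir):
    """Create a date-stamped .tar.gz archive of source_dir inside backup_dir."""
    source = Path(source_dir)
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)  # e.g. a folder on the external drive
    archive = dest / f"{source.name}-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return archive
```

Run it on a schedule (Task Scheduler or cron) against a folder on a separate drive, and keep a second copy offsite or in the cloud for the backup-of-a-backup effect.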

Microsoft held back a free WannaCry patch, report says

I found this story interesting. I take no credit for it, but wanted to share it with everyone. Please see https://www.cnet.com/news/microsoft-reportedly-held-back-wannacry-patch-for-older-windows-versions/ for the story itself and the video shared.


Microsoft could have slowed the devastating spread of the WannaCry ransomware to businesses, the Financial Times reports. Instead, it held back a free repair update for machines running older software like Windows XP.

Microsoft wanted hefty fees of up to $1,000 a year from businesses for “custom” support and protection against attacks like WannaCry, which locks your computer unless you pay the hackers in bitcoin, said the publication.

While Microsoft finally did make the patch available free of charge to Windows XP machines last Friday, damage had already been done. The company has since been trying to convince customers, business or otherwise, to switch to its newer and more secure Windows 10. Despite the lack of cover, plenty of Microsoft’s customers are still running older software that may still be vulnerable.

“Recognizing that for a variety of business reasons, companies sometimes choose not to upgrade even after 10 or 15 years, Microsoft offers custom support agreements as a stopgap measure,” said a Microsoft spokesperson in a statement to CNET.

“To be clear, Microsoft would prefer that companies upgrade and realize the full benefits of the latest version rather than choose custom support. Security experts agree that the best protection is to be on a modern, up-to-date system that incorporates the latest defense-in-depth innovations. Older systems, even if fully up-to-date, simply lack the latest protections.”

Initial WannaCry attacks were slowed by a security professional who found the ransomware’s kill switch, but newer, more resistant versions have appeared. At last count, over 200,000 computers in over 150 countries had been hit with the ransomware.

Automated shotgun-style hiring trends rant

Most recruiting is done shotgun style these days. Nothing is looked at, and no candidate is vetted before being called. I have literally had emails sent asking about my interest in being a car mechanic. You respond with anything and automatically get a reply asking you to send your resume; then an actual person looks at your resume and says you aren’t qualified. It’s all automated now, and it’s a poor business model that is being used a lot.

I am starting to notice a very unhealthy trend in the automation between job seekers and employers. I completely understand that there is a need for automation given the volume of applications for a job posting. I get so many automated emails because my resume matches keywords on job posting sites for positions I have zero experience in. I can either (a) ignore them, which is unprofessional, or (b) respond and get an automated reply asking for my resume, only to be told I am not qualified. That is how pre-screening gets done now, it seems, and it is a waste of time and energy for both parties.

I have also started to notice a lot of talk about this on LinkedIn. It seems this is something a lot of professionals who are looking for work or changing jobs are seeing as well. As an example, you can take a look at one of the posts that is getting a lot of traction on LinkedIn. Please see





What’s in my technician toolbag

USB charging cord for phone
Screwdrivers
Computer tool kit
Various lengths of network patch cables
USB 2.0 cord
USB 3.0 cord
Socket wrenches
USB HDD docking station
USB card reader
Thermal paste
SATA cables
SATA power cords
Small keyboard/mouse
Various assortment of cables
Ethernet cable tester
Zip ties
Ratcheting screwdriver and bits
Electric screwdriver and bits
RJ45 heads
2-port KVM
8-port gigabit switch
Headphones
Allen wrenches

Lessons learned using FreeNAS

Well, I have had my FreeNAS machine up and running for a good solid year at this point. I am sure I have said this in the past, but during the planning phase, before even purchasing anything, take the following into consideration:

  1. Make sure you buy the number of drives needed to start off with
  2. Consider the type of array you are using (Z1, Z2, or Z3)
  3. Know how much space you currently need to back up all your data
  4. Have backups already in place if data needs to be moved
  5. Plan for future space needs, and overestimate
  6. Don’t cheap out on hard drives; you will regret it later
  7. Don’t cheap out on the rest of the hardware either (there is a reason hardware recommendation guides exist)
  8. Plan out your datasets and sharing structure before you start
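For items 3 and 5 it helps to put numbers on usable space before buying. A quick back-of-the-envelope helper (my own sketch; it ignores ZFS metadata and padding overhead, so treat the result as an upper bound):

```python
def usable_tb(drives, drive_tb, raidz_level, fill_ratio=0.80):
    """Rough usable space for a single RAID-Z vdev, in TB.

    raidz_level: 1, 2, or 3 (the number of parity drives).
    fill_ratio: the ZFS rule of thumb says stay under ~80% full.
    """
    data_drives = drives - raidz_level
    return data_drives * drive_tb * fill_ratio

# Two 6-drive Z2 pools of 4TB drives vs. one 12-drive Z2:
two_pools = 2 * usable_tb(6, 4, 2)   # loses 2 parity drives per pool
one_pool = usable_tb(12, 4, 2)       # loses 2 parity drives total
print(two_pools, one_pool)
```

The single wide vdev comes out ahead on space because only one set of parity drives is paid for, which is exactly the trade-off discussed below.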

I made a few mistakes along the way. I chose to start with six 4TB drives and six 3TB drives in two separate 6-drive Z2 arrays. From the start that is a bad thing to do, since it makes your arrays lopsided and can cause performance issues down the road. Also, from the start, don’t use used drives that don’t have a lot of lifespan left. I made this mistake and had instant regret. I don’t think I made it more than a few weeks before all of my 3TB hard drives started to die very quickly. I was then forced into rushing out and buying brand new hard drives, which I wasn’t ready to purchase. I also had no backups in place at this point, since I was taking two RAID arrays and combining them into one storage machine. I was short on funds and should have waited until I had all the needed parts first; looking back, that was another very large mistake. I ended up buying six brand new 4TB hard drives and hoping I wouldn’t have more than two drives die at once (I sweated bullets for about a week). Each resilver took about 5-10 hours, doing one at a time. As soon as I resilvered one failed drive, another one died. Somehow the storage gods smiled upon me and I made it through that giant mess.

Now let’s roll forward to the present. I am sitting here with two 6-drive Z2 arrays (meaning I lose two drives per pool to parity), and I am getting email alerts about going over my capacity. There is a rule of thumb in ZFS and FreeNAS to not go over 80% capacity of your pool. This isn’t set in stone, but it is a good rule to go by; in all honesty you could probably go to about 85% before you really start to see your performance drop off. So I sat there wondering what to do. I really couldn’t afford to go out and buy 12 brand new hard drives; that would hurt the wallet way too much. I then made the drastic decision to nuke my pools and recreate them as a single 12-drive Z2 (I can hear the community screaming in horror at this point). I understand completely that an array that wide is somewhat risky, but I considered that I have two complete backups of all my data. I felt pretty safe, considering one backup was on a separate array in a completely different system and one was sitting in a mostly offline cold-storage USB enclosure. At this point I pretty much blew a weekend killing my array, recreating the reconfigured array, and copying back roughly 24TB of data from backups.
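If you want the same capacity alerting outside the FreeNAS GUI, a small script can parse `zpool list` output and flag pools past the threshold. A sketch, assuming text in the shape of `zpool list -H -o name,capacity` (the pool names here are made up):

```python
def pools_over_threshold(zpool_output, threshold=80):
    """Return (name, percent) pairs for pools at or past the threshold.

    zpool_output: text in the form of `zpool list -H -o name,capacity`,
    i.e. one "name<TAB>NN%" entry per line.
    """
    flagged = []
    for line in zpool_output.strip().splitlines():
        name, capacity = line.split()
        percent = int(capacity.rstrip("%"))
        if percent >= threshold:
            flagged.append((name, percent))
    return flagged

# Example with canned output; in practice you would feed it the stdout of
# subprocess.run(["zpool", "list", "-H", "-o", "name,capacity"], ...)
sample = "tank\t83%\nbackup\t41%"
print(pools_over_threshold(sample))  # [('tank', 83)]
```

Wire the result into a cron job that emails you and you have the same 80% warning the GUI gives, anywhere `zpool` runs.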

I hope that sharing my experience serves as a lesson to others thinking about getting into FreeNAS as their main storage system. Hopefully those digging in won’t make the same mistakes I did.


Well, you have Plex installed. Now how do you monitor usage?

Well, you have your awesome Plex media server installed and all set up. Now you are probably wondering: how do I monitor usage, stats, who’s on, etc., right? I had been looking for a good solution for a while and have found one that works really well for me: a software package called PlexPy, which you can find at https://github.com/JonnyWong16/plexpy


For the Windows-based installation you will need to do the following:

  • Go to http://msysgit.github.io and download Git.
  • Run the installer, selecting all the defaults except for the section called “Adjusting your PATH environment” – here select “Use Git from the Windows command prompt”.
  • Complete the rest of the install with the default options.
  • Right-click on your desktop and select “Git Gui”.
  • Select “Clone Existing Repository”.
  • In “Source Location” enter: https://github.com/JonnyWong16/plexpy.git
  • In “Target Directory” create a new folder where you want to install PlexPy.
  • Click “Clone”.
  • When it’s finished, a Git Gui window will appear; just close it.
  • Browse to where you cloned the PlexPy repository and double click PlexPy.

For other OS/platform installs please go to https://github.com/JonnyWong16/plexpy/wiki/Installation
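On other platforms the same install boils down to cloning the repo and running the script with Python. A rough helper for the clone step (the function is my own sketch, not part of PlexPy; it assumes `git` is on the PATH):

```python
import subprocess
from pathlib import Path

def clone_repo(url, target_dir):
    """Clone a git repository into target_dir, skipping if already cloned."""
    target = Path(target_dir)
    if not (target / ".git").exists():
        subprocess.run(["git", "clone", url, str(target)], check=True)
    return target

# Usage (after cloning, start PlexPy by running PlexPy.py from the clone):
# plexpy_dir = clone_repo("https://github.com/JonnyWong16/plexpy.git", "plexpy")
```

Updating later is just a `git pull` in the same directory, which is why the project distributes itself through Git rather than an installer.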


I was able to get my Windows-based installation, sitting on the same VM as my Plex server, going in a matter of minutes. Setup was a breeze and I had things going very quickly. One thing to keep in mind is that it takes a while for the stats and graphs to pull your data and start showing correctly. If you have more than one server, you will need to set that up as well. PlexPy is able to show you the most recent plays, who is on the most, popular movies, and more; it is pretty powerful. Please see my screens below for examples of what you can do.



Confused about where to start and what RAID to use? Take a look here


I wanted to clear the air on software RAID vs. hardware RAID vs. HBAs. I take no credit; this is a good read taken from


Software RAID (OS/ File system Level)

Generally when one speaks of pure software RAID they mean a controller agnostic RAID platform that does mirroring, striping, and parity calculations using the CPU. Some hybrid solutions, like the Promise C3500 and C5500 based solutions use special embedded Intel Xeon processors with RAID functions built in to allow an OS to perform quicker parity calculations. Those solutions do blur the lines a bit between pure software RAID, but as this is a general primer, I will focus on the common cases.

Intel SASUC8I HBA and RAID 0/1/10 LSI 1068E based Controller

Common incarnations of software RAID would include Oracle/Sun ZFS, Linux’s mdadm, FlexRAID, Drobo BeyondRAID, Lime Technology’s unRAID, Windows Dynamic Disk-based RAID functionality, NetApp’s RAID-DP, and so on. Windows Home Server V1’s Drive Extender was not a RAID 1 implementation, but it utilized the CPU to make stored data redundant, as can be attested to by anyone who has been impacted by DEmigrator.exe. For purposes of picking hardware, if one continues to use Windows Home Server V1 Drive Extender, then the software RAID category is probably the place to look for ideas.

One big advantage of software RAID is that it can be hardware agnostic when it comes to migrating drives and arrays. If a server fails, one can move drives to a new system with new HBAs and access data in most cases assuming that the vendor allows migration and the new system’s controllers are compatible. An example of migration not working using software RAID would be if one were to take Drobo drives and place them into another system without the proprietary RAID implementation.

Another major advantage of software RAID is that one can get many advanced features, and the feature set may expand over time. ZFS is a great example here, with things like de-duplication, L2ARC SSD caching, encryption, and triple-parity RAID-Z3, all really enterprise-class features, added in successive ZFS versions.
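Block-level de-duplication, one of the ZFS features just mentioned, is easy to illustrate: hash each block and store identical blocks only once. A toy sketch of the idea (my own illustration; real ZFS dedup keeps an on-disk dedup table keyed by SHA-256 checksums and is far more involved):

```python
import hashlib

def dedup_blocks(data, block_size=4):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, index): store maps hash -> block, and index is the
    ordered list of hashes needed to reconstruct the original data.
    """
    store, index = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored only once
        index.append(digest)
    return store, index

store, index = dedup_blocks(b"AAAABBBBAAAABBBB")
print(len(index), len(store))  # 4 blocks referenced, only 2 actually stored
```

The index reconstructs the original stream exactly, which is why dedup is transparent to whatever reads the file back.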

For software RAID, one wants to purchase simple host bus adapters (HBAs) for use in systems. HBAs perform the simple task of providing the hardware interface so that a drive can be accessed by the underlying operating system. It is best practice not to use RAID controllers with additional RAID logic built in, because one does not want three layers (the drive’s firmware, the RAID controller’s logic, and the OS) all potentially trying to do things like error correction.

From a cost and support perspective, this is an area where LSI excels. HBAs based upon controllers such as the LSI 1068E and SAS2008 can be flashed into initiator-target (IT) mode, discussed extensively on this site, to turn them into simple HBAs. These two controllers are used in literally millions of systems, as they are sold by OEMs such as Dell, IBM, Intel, HP, Sun, and Supermicro. As a result, driver support is generally excellent and prices are reasonable.

Fake-RAID the Hardware-Software Solution

Users generally refer to “Fake-RAID” when referring to products such as the Intel ICH10R, AMD SB850, and various Marvell products (as another example), where RAID mirroring, striping, and parity calculations occur through software powered by the host system’s CPU. The key here is that this solution, unlike one done at the OS level, is generally tied to a controller type. Although some add-in cards do support arrays spanning multiple controllers, the vast majority limit array size to a single controller type. Controller type is important here because one can generally migrate arrays from one system to another, so long as the new system’s controller is compatible. For example, moving a RAID 1 array from an Intel ICH9R to an ICH10R is a very simple process.

HighPoint 2680 Controller

The major advantage of Fake-RAID is simply cost. Intel supports it with Intel Matrix Storage, and AMD has south bridge support too. For most users, especially those using a decent server chipset (or most non-budget-conscious consumer motherboards), this is a “free” feature. For RAID 0 and RAID 1, especially using a south bridge/PCH implementation, Fake-RAID can have solid performance due to high-bandwidth, low-latency interfaces to the CPU. Another advantage of Fake-RAID is that many implementations can be used by multiple operating systems. For example, one can format a FAT32 volume based on an ICH10R and then change the host system’s operating system to Linux and still utilize the volume. Under software RAID, by contrast, such cross-OS use (for example, accessing ZFS volumes directly from Windows) is at minimum difficult and in most cases impossible.

One caveat is that most Fake-RAID solutions are limited to at most RAID 0, RAID 1, RAID 10, RAID 5, and RAID 50. With modern 2TB and 3TB drives, double-parity protection schemes such as RAID 6 become both practical and arguably necessary over single-parity RAID 4 and RAID 5 implementations.
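The single-parity schemes mentioned here (RAID 4/5) come down to XOR: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the survivors. A toy illustration (real controllers work on stripes of sectors, and RAID 6 adds a second, Reed-Solomon-style parity so two losses survive):

```python
def xor_blocks(blocks):
    """XOR byte strings of equal length together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "drives" worth of data plus one parity drive (RAID 5-style).
drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(drives)

# Drive 1 dies; rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks([drives[0], drives[2], parity])
print(rebuilt)  # b'BBBB'
```

Lose two blocks, though, and XOR alone cannot tell you what either one was, which is exactly why double parity matters as drives get larger and rebuilds get longer.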

ASUS P8P67-Pro SATA Ports Intel and Marvell

Aside from the lack of double-parity options, there is one other major Fake-RAID caveat: write-back cache (also known as copy-back cache) can be enabled in many applications, but should be avoided by those building storage servers. Fake-RAID implementations are tied to hardware, but do not have on-board DRAM cache. As a result, enabling write-back cache means that data is temporarily stored in main system memory before being written to disk. While one may think this is a good thing if they have lots of fast memory, it is not good for data integrity: if power fails, data stored in main memory will be lost. To mitigate this risk, UPS systems and redundant power supplies can be used; however, in a major failure, data can still be lost. Without write-back caching, RAID 5 and RAID 50 performance is hindered in situations with large numbers of writes. Best practice is to not turn on write-back cache on Fake-RAID controllers.

From a controller recommendation perspective, I would argue that the Intel ICH10R/PCH solutions and AMD SB850 solutions are probably the best bets using RAID 1 or RAID 10 (RAID 0 does not provide redundancy). Frankly, for years to come both setups will have available, low-cost motherboards that can read the arrays in a recovery situation. That often limits one to four to six ports of connectivity; for drive counts in excess of that, look to something like an LSI SAS2008 controller in IR (RAID) mode for RAID 1 or 10, or to hardware/software RAID solutions. Both Silicon Image and Marvell make popular controllers that are used in “Fake-RAID”-class add-in controllers.

Hardware RAID

Hardware RAID is usually the most expensive option, but it still provides a lot of value in many applications. Hardware RAID can most easily be thought of as a miniature computer on an expansion board. It generally has components such as its own BIOS, its own management interface, sometimes a web server and NIC (in high-end Areca cards), a CPU (such as the venerable Intel IOP348 or newer chips), onboard ECC DRAM, optional backup power supplies (battery backup units), drive interfaces, and I/O through the PCIe bus to peripherals (in this case, the rest of the PC). If one wants to understand why many true hardware RAID solutions are expensive, that picture of a hardware RAID controller as akin to a PC is probably a good model to keep in mind.

Adaptec ABM 800T BBU

Hardware RAID has some definite advantages. It is usually OS agnostic, so volumes are not specific to an OS/file system like software RAID. Beyond that, hardware RAID usually has at least options for battery-backed or newer capacitor-flash based write caches. These allow write-back caching to be enabled with the added security of protection during extended power outages. In battery-backed write cache schemes, a battery backup unit (BBU) is connected to the controller and maintains power to the DRAM in the event that power is no longer being supplied to the card. In capacitor-flash based protection schemes, a power outage allows the DRAM to transfer its contents to NAND storage while the capacitor keeps the NAND and DRAM powered. BBUs are typically spec’d for at least two days of power protection, while NAND storage can theoretically provide months of data retention. This is not best practice, but as an interesting note, one could theoretically pull the plug on a server while data is being written and cached in DRAM, install the controller and drives into a new system the next day, and lose no data. I have done this on two occasions, but I will stress: do not try this unless there is no other option.

Performance wise, hardware RAID is an interesting mix. When new controllers are released, they generally offer higher IOPS, newer PCIe and drive interfaces, faster DRAM, and so on, which have a positive effect on performance. Near the end of a controller’s life cycle, performance is generally not up to par for the newest generation(s) of drives. For example, a Dell PERC 5/i with an older Intel IOP333 processor will choke when used with eight “generation 2” solid state drives. Solid state drives are not the only way to bottleneck an older controller; many large disks in arrays can cause long rebuild times due to processor speed and the sheer amount of data to be processed.

One important factor is that many vendors offer things like SSD caching (with Adaptec maxCache and LSI CacheCade), SSD optimizations, controller fail-over (drives are not the only storage components that fail), and so on, on hardware RAID cards. Many times, these features do guide purchasing decisions.

Areca 1880i Hardware RAID Controller

Probably the two biggest disadvantages of hardware RAID controllers are vendor lock-in and cost. Vendor lock-in means being able to migrate arrays only to controllers from the same vendor. Product lines can be crossed (for example, arrays created on an Adaptec 3805 can be migrated to Adaptec 5805 controllers if appropriate firmware revisions are used); however, those same arrays will not work on Areca RAID controllers. This is especially important when cost is put into perspective. Full-featured hardware RAID controllers can cost several hundred dollars, with BBUs adding an additional $100 or more per controller. If a controller fails, a replacement can be an expensive proposition. A disadvantage of some controllers is that RAID arrays cannot span multiple controllers. If this is the case, a solution is limited to the number of drives that can be connected to a single controller. Software RAID usually does not have this limitation.

Right now, the Areca 1880 series and the LSI 9260 and 9280 series are probably the top hardware RAID solutions, offering 6.0Gbps SAS and SATA connectivity and a host of enhancements over previous generations. It should be noted that expensive battery-backed hardware RAID solutions are only really required if RAID 5 or RAID 6 is being used. RAID 1 or RAID 10 solutions work fairly well even without expensive hardware RAID controllers and can be acquired relatively inexpensively.


This was a big article to write in an evening, but hopefully it helps people understand the general implications of different RAID implementations. There are a lot of variations in each of the above, so consider this a general overview of the subject. As always, there is more to come in the coming weeks and months on this subject. If anyone sees something that could be clearer, please let me know on the forums. Also, the RAID controller and HBA forums are a great place to discuss different controller options.