Intel NUC vs. other? Build vs. Buy?

I'm not sure anyone cares enough to read the whole thing, so I'll just give you the highlights:

My main server is an FX-8320 with 32GB of RAM. I also have a Dell PERC H700 RAID controller with 4x500GB drives in RAID 10. All of the VMs that run on this host (host 4) run off of the RAID 10. I have a SQL Server 2012 Developer license installed on the vCenter server; it hosts the databases for vCenter, VMware View, and a few other instances. I should move SQL to its own box, but I just haven't had time. The RAID 10 is important both for performance, since a lot of machines are in use, and for resiliency. I've had three of these drives fail, so it is important to keep an eye on drive health. I think about going to two hot spares, but I don't have another cage in the chassis.

It also has a Hotlava 6-port Intel-based NIC (http://www.ebay.com/itm/Hotlava-Vesuvius-6-Port-Gig-Intel-Based-NIC-PCIe-6CGigNIC-6C11810A3-/191450536777?pt=LH_DefaultDomain_0&hash=item2c93575b49). Four of the ports carry NFS traffic from the three cluster hosts: each host has one dedicated port, and they share the fourth for failover. I've only ever tested it once, but reading from VMs on all three hosts concurrently I got 350MB/s out of it, with zero impact on my traffic to/from the internet. http://i.imgur.com/rDiiJLk.png
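
If anyone wants to sanity-check that 350MB/s number, the back-of-the-envelope math below shows it's right about the ceiling for three dedicated gigabit ports (the 0.95 efficiency factor is just a rough guess at protocol overhead, not something I measured):

```python
# Back-of-the-envelope: three dedicated gigabit NFS uplinks vs. the ~350MB/s observed.
# The 0.95 efficiency factor is a rough guess at TCP/NFS overhead, not a measured value.
GIGABIT_BITS_PER_SEC = 1_000_000_000
EFFICIENCY = 0.95          # guessed protocol overhead
dedicated_ports = 3        # one dedicated NFS uplink per cluster host

per_port_mb_s = GIGABIT_BITS_PER_SEC / 8 / 1_000_000 * EFFICIENCY
aggregate_mb_s = per_port_mb_s * dedicated_ports
print(f"~{per_port_mb_s:.0f} MB/s per port, ~{aggregate_mb_s:.0f} MB/s aggregate")
# -> ~119 MB/s per port, ~356 MB/s aggregate
```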

Everything I absolutely need runs here. The VMs auto-start in the order indicated by the numbers in the screenshot.

First is my NAS. It has 10GB of memory and six vCPUs, and I use compression on my backup volumes. Originally I passed my onboard SATA controller through to this VM to build the NAS and used a hardware RAID card for the ESXi install and the datastore for these VMs. When we retired an older SAN and I ended up with 36x2TB drives, I picked up a 24-port SAS expander (http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html), and now the NAS VM has that passed through to it. The SAS expander has 16x2TB drives connected to it in raidz2 with two hot spares (24TB of usable space), plus 2x256GB SSDs I had lying around from an old build. I use the SSDs as L2ARC, which really helps with performance. I kept the rest of the drives because they were used for three years and I'm bound to see failures eventually, so I have a drawer full of cold spares.

The NAS has several exports: NFS (VMware), CIFS (all Windows clients and ownCloud), and iSCSI (all of my Exchange instances). What's really cool is that all of the shares are available via NFS, and I have the NFS share mounted on the Windows 10 install running here as well. I'll explain that later on.
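
For anyone checking the math on the 24TB figure, here's roughly how it falls out if you read the 16 drives as one 14-wide raidz2 vdev plus the two hot spares (rough math, ignoring ZFS metadata overhead):

```python
# raidz2 usable capacity, assuming one 14-wide raidz2 vdev plus 2 hot spares
# (that breakdown is an assumption based on the 24TB figure, and overhead is ignored).
drive_tb = 2
vdev_width = 14      # drives actually in the raidz2 vdev
parity_drives = 2    # raidz2 keeps two drives' worth of parity
hot_spares = 2       # sit idle until a pool member fails

usable_tb = (vdev_width - parity_drives) * drive_tb
total_drives = vdev_width + hot_spares
print(f"{total_drives} drives -> ~{usable_tb} TB usable")
# -> 16 drives -> ~24 TB usable
```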

Next up is the DC. This VM has one vCPU and 4GB of memory; since it doesn't use PCIe passthrough its memory isn't fully reserved, and it rarely consumes much of it. Right now it's sitting at 300MB. All of my FSMO roles run here, the primary DHCP server runs here, and it is the primary DNS server. I have backups for DHCP and DNS that run on the cluster, but this guy handles all of my authentication. Everything I run is LDAP integrated with my AD domain: VPN accounts, email accounts, ownCloud. After storage is up, this guy needs to be up second.
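
To give an idea of what "LDAP integrated" looks like for those apps, here's a minimal sketch of the kind of AD bind and lookup they do, using Python's ldap3 library; the hostname, base DN, and service account below are made-up placeholders, not my real ones:

```python
# Minimal sketch of an AD-backed lookup like ownCloud or the VPN would do.
# Hostname, base DN, account, and password are placeholders, not real values.
from ldap3 import Server, Connection, NTLM, SUBTREE

server = Server("dc01.jc101.example")                    # placeholder DC hostname
conn = Connection(server,
                  user="JC101\\svc_ldap",                # placeholder service account
                  password="********",
                  authentication=NTLM,
                  auto_bind=True)

# Look up a user and the groups they belong to.
conn.search(search_base="DC=jc101,DC=example",           # placeholder base DN
            search_filter="(sAMAccountName=jdoe)",
            search_scope=SUBTREE,
            attributes=["displayName", "memberOf"])

for entry in conn.entries:
    print(entry.displayName.value, entry.memberOf.values)
conn.unbind()
```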

Third up is vCenter. This machine also skips PCIe passthrough. It has four vCPUs and 6GB of memory, and it generally consumes all 6GB because of SQL. vCenter needs the NAS (for the other VM storage) and the domain controller (for authentication), so it is scheduled to boot third.
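
Since the boot order matters so much (storage, then AD, then vCenter), here's a rough pyvmomi sketch of setting that autostart ordering on a standalone ESXi host; the hostname, credentials, and delays are placeholders, and in practice you can also just set it in the host client:

```python
# Sketch: setting VM autostart order on a standalone ESXi host with pyvmomi.
# Host name, credentials, VM names, and delays are placeholders, not the real config.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                      # lab box with a self-signed cert
si = SmartConnect(host="esxi-whitebox.lab", user="root", pwd="********", sslContext=ctx)

# Standalone host: single datacenter -> single compute resource -> one host.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

boot_order = ["NAS", "DC", "vCenter"]                       # storage, then auth, then management
power_info = []
for order, vm_name in enumerate(boot_order, start=1):
    vm = next(v for v in host.vm if v.name == vm_name)
    power_info.append(vim.host.AutoStartManager.AutoPowerInfo(
        key=vm,
        startOrder=order,
        startAction="powerOn",
        startDelay=120,                                     # give each VM time to settle
        waitForHeartbeat="systemDefault",
        stopAction="guestShutdown",
        stopDelay=-1))

host.configManager.autoStartManager.ReconfigureAutostart(
    vim.host.AutoStartManager.Config(powerInfo=power_info))
Disconnect(si)
```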

After vCenter is the HTPC. This VM does use PCIe passthrough. When I first created it, it used a Radeon HD 5450, which was a really nice single-slot, passively cooled card with DisplayPort out (which also carried audio). My girlfriend (just recently fiancée, actually!) wanted something to game on other than her laptop, so I tossed in an HD 7970 that I picked up used on Amazon. The card is getting a bit long in the tooth, but it can run most games she likes to play (Borderlands, Diablo, etc.), and she can use it from the coffee table with a wireless keyboard and mouse. It also works well for Steam Big Picture. To pass through input devices, I also gave it ownership of my onboard USB 3.0 controller. That saved a PCIe slot and let me connect an external RAID tower (http://www.sansdigital.com/towerraid-/tr5utplusb.html), filled with 5x2TB drives in RAID 5 from the previously mentioned pile, because I have a lot of them, they were free, and I still have cold spares. All of the movies and whatnot are stored there.

Plex also runs on the HTPC, so when I'm on the road I'm streaming from these disks a lot. I get a bit irritated if she does software updates or something and shuts it down, because Plex will not auto-start. I'm thinking about moving Plex to a server, but I don't really want to move the media (around 5TB) onto the other storage. Also connected to the USB controller is an Xbox 360 wireless receiver, which makes Steam Big Picture really awesome. I go on and on about this to anyone who will listen because I think it's super fun that I have a virtualized gaming PC.

The next VM is Sophos UTM. We deploy this for some of our small business clients and I needed to develop the skillset fast, so I replaced my pfSense router with it. I have to say I love it; it does a lot more than just routing and firewalling. I use it as my inbound email filter, I use it to centrally manage antivirus on my servers, and it can do web filtering. I will be using it to create a separate wireless network with RADIUS authentication for neighbors so I can resell my internet to them. I pay for 150Mbps down/20Mbps up, but I generally get around 180 down/35 up. Most importantly, it is also a VPN server. When I'm on the road I use SSL VPN with split tunneling. My fiancée is from Malaysia, so her family there uses PPTP to connect so they can stream Netflix/Hulu/Amazon/etc. Lastly, I use it for site-to-site VPN to work, but I turn that on and off as needed.

http://i.imgur.com/fvEXyH6.png

The last important VM here is ownCloud. All of the ISOs I need when I'm in the field are stored here: VMware, Microsoft, and Red Hat ISOs, installers, patches, etc. I keep them on a USB drive in my laptop bag too, but I worry that it'll stop working. Also, I don't keep the CD keys on the drive since it is unencrypted, so those live in text files on my ownCloud server. ownCloud uses the CIFS export from the NAS for my home directory, so my Persona profile is also there (similar to a Windows roaming profile, if you're unfamiliar with VMware View). That way I have access to all my docs anywhere very easily.

The last VM here is my Windows 10 test VM. I have a VDI deployment running at home, but the other three servers are all 1.8GHz cores, so for decent testing I run this VM here. The part I mentioned earlier comes in here, and it's pretty funny: this VM has everything on the NAS mapped to it as a local drive. I use CrashPlan (http://www.code42.com/crashplan/) with unlimited storage for one computer's backup. Well, this VM is the one that I back up, and it has all the NFS exports, CIFS shares, etc. mapped, so for $6 a month I'm backing up around 16TB. I also run a personal CrashPlan server here: friends do encrypted backups to me, which in turn get backed up to CrashPlan, all for the low price of $6. I've never had to test recovery of files from CrashPlan, but I have tested recovery to my laptop from my own CrashPlan server, and it worked very well.

There are a lot of VMs running in the cluster, and some of it is proprietary stuff, so I'm not going to take any screenshots there. The hardware is a Dell C6005 with three trays: in 2U I have 36 1.8GHz Opteron cores and 96GB of memory. I added a dual-port NIC to each box because my switch hated the onboard NICs (they have two MACs each, one of which is for the IP KVM) and they caused it to freak out.
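
For a per-tray view, assuming the cores and memory are split evenly across the three trays:

```python
# Per-tray breakdown of the cluster chassis, assuming an even split of cores
# and memory across the three trays (the split itself is an assumption).
total_cores, total_mem_gb, trays = 36, 96, 3
print(f"{total_cores // trays} cores and {total_mem_gb // trays} GB per tray")
# -> 12 cores and 32 GB per tray
```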

The cluster runs a half dozen or so AD forests with trust relationships to my jc101 domain. This is so I can do Exchange testing with multiple versions without having to consider what my AD schema looks like; I run no Exchange in jc101 to avoid schema problems. I also run a VMware View VDI deployment here, some RDS servers, and two nested Hyper-V machines (I have one MCSA exam left).

By day I'm a systems engineering consultant, so I use the cluster to install and configure anything I would do out in the field. That's about where the separation is: anything that's basically just for me runs on my whitebox, and anything I'm doing for work runs on the cluster. AD is the exception, as I use that authentication for everything.

The switch backing all of this is a Cisco SG300-20. Of the 20 ports, six are used by the main server, six by the cluster, two by extension switches (the cluster's VMware traffic goes out the onboard NICs my core switch hates, each to a separate 8-port switch that then uplinks to the core), two by separate WAPs, and one by my cable modem. The cable modem sits on a VLAN alone with the UTM; I have it plugged in that way so I don't have to dedicate a physical port on the big server to WAN traffic.

I don't have any Visio diagrams of it, but I do have a huge diagram on the whiteboard in my office. I have VPN accounts set up for friends in the industry who need a place to do lab work. All in all, the only complaints I have about it are heat and noise. :)
