When I mentor junior sysadmins, I generally tell them that with modern virtualisation, you can create enterprise-scale networks pretty easily at home. Simulating WANs, VPNs, high-latency connections and the like is trivial with a copy of any popular Linux distro and the built-in virtualisation and networking tools. In fact, I currently work for a company that delivers large JBoss clusters for realtime web applications, and 100% of our complex development environments sit on virtualised infrastructure. We don't hit bare metal until our staging environments. Linux includes all sorts of tools that can simulate internet-style conditions (packet loss, high latency, jitter, etc) quite easily, so you can really put your apps through their paces.

Most of my juniors are die-hard PC gamers, sporting multi-core PCs packed with 8+ GB of RAM, yet they tell me they can't afford test labs at home. I tell them they already have one, they just don't know it.

The Raspberry Pi is going to get an official Fedora spin, which I think is good news if you're after a system where you don't want to have to screw around too much to get packages working, and can instead concentrate on the application level. I find with some embedded systems, half the battle is just compiling up the stuff you need for basic OS-level operations to work the way you want. If you get something that works with a popular distro like Fedora, you've got the power of their entire packaging community there to make your life a hell of a lot easier. Just my 2c.
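For the curious, here's a minimal sketch of what that one-box lab can look like, assuming a Linux host with libvirt/KVM installed. The network name, ISO path, VM sizing and bridge name are all placeholders you'd swap for your own; check what bridge libvirt actually creates before applying the netem rules.

```shell
# Define an isolated virtual network to act as the lab's "WAN" segment.
cat > testlab-net.xml <<'EOF'
<network>
  <name>testlab</name>
  <ip address="10.10.10.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.10.10.100" end="10.10.10.200"/>
    </dhcp>
  </ip>
</network>
EOF
sudo virsh net-define testlab-net.xml
sudo virsh net-start testlab

# Spin up a small VM attached to that network (ISO path is a placeholder).
sudo virt-install --name node1 --memory 2048 --vcpus 2 \
  --disk size=10 --cdrom /path/to/Fedora.iso \
  --network network=testlab

# Simulate internet-style conditions on the bridge libvirt created for
# the network (confirm the real bridge name with `virsh net-info testlab`):
# 100ms latency with 20ms of jitter, plus 1% packet loss.
sudo tc qdisc add dev virbr1 root netem delay 100ms 20ms loss 1%

# Remove the impairment when you're done testing.
sudo tc qdisc del dev virbr1 root
```

The nice part is that netem sits on the host bridge, so every VM on that virtual network sees the degraded link without any changes inside the guests.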