Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.
Yeah, that's called "Target fixation". Happens to the best of us.
Couldn't for the life of me think of the correct term when I was writing the post.
We've got our PXE boot -> build -> Puppet -> "online in production" system down to 7 minutes now.
I had an urgent render on the floor, and one of our rendernodes was playing silly buggers. These things suffer segfaults and OOMs all day long thanks to the weird collection of proprietary applications they run from dozens of vendors, which are all giant maths/physics simulators at heart and can do weird things to hardware when they go bad. Rebooted it, and it came back with all sorts of weirdness (TTY constantly crashing and respawning, AppArmor errors everywhere).
CBF dealing with that, got too much work to do. Hit the "nuke it from space" button, node is online and rendering again in 7 minutes.
Boy I love our network mounted applications. Our systems are pretty much just an OS (average install is around 5GB), and some mount points. Everything else is distributed through the network, so machines are "treated like cattle, not pets" as per the wise words of the LHC sysadmins.
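The "just an OS plus mount points" setup above can be sketched roughly like this. This is only an illustration of the idea, not their actual config: the server name, share names and /apps layout are all invented.

```shell
#!/bin/sh
# Sketch: application shares live on the network, the local disk only
# carries the OS. All names/paths below are hypothetical examples.
APPSERVER="appserver.example.com"   # invented NFS server

mount_cmd() {
    # Print the mount command for one application share.
    share="$1"
    echo "mount -t nfs -o ro,noatime $APPSERVER:/exports/$share /apps/$share"
}

# Dry run: show what would be mounted for a few example shares.
for share in maya houdini nuke; do
    mount_cmd "$share"
done
```

Because nothing application-specific lives on the local disk, a rebuilt node only needs the OS image and these mounts to be back in production.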
I do similar, except without the advantage of netboot and network resources - I may be at one customer one day and another the next.
If one of our devices (RHEL based) has an issue and I can't fix it in 5 minutes, it gets rebuilt from a DVD/ISO with a kickstart script and some custom RPMs in about 10 minutes. Punch in the appropriate IP addresses and it's back online.
Saves me and other colleagues countless hours.
(and I have the RPM build process automated, so when we release a new version it's packaged and a new ISO ready in 5 minutes).
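An automated "new release -> RPM -> fresh kickstart ISO" step might look something like the sketch below. The spec file name, ISO tree and output paths are invented placeholders, and the commands are only printed here rather than run:

```shell
#!/bin/sh
# Sketch of a release -> RPM -> respun kickstart ISO pipeline.
# Spec/tree/ISO names are invented; this just shows the shape of it.

rpm_cmd() {
    # Command to build a binary RPM from a spec file.
    echo "rpmbuild -bb $1"
}

iso_cmd() {
    # Command to respin a bootable ISO from a tree that already holds
    # the kickstart file (ks.cfg) and the custom RPM repo.
    echo "genisoimage -o $2 -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -R -J $1"
}

rpm_cmd ourproduct.spec
iso_cmd /srv/iso-tree /srv/out/ourproduct.iso
```

With both steps scripted, "packaged and a new ISO ready in 5 minutes" is mostly just waiting on the build.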
I do a similar thing, but on Windows 10, also averaging about 10 minutes. If I'm doing a local SSD to SSD restore I can be up in 3 minutes on fast hardware. Linux would be quicker if I automated some things, but I don't have the time for it at the moment (and I don't roll out too many Linux workstations anyway).
Windows 10 here, about an hour for ours (PXE boot MDT image) - thin image (Windows + Office + Adobe suite), and then a boatload of other applications. (K-12 school)
Was thinking of setting up a thick image (created automatically) and doing it that way in the future. Just so many applications to install.
I always prefer this approach. If a system decides it wants to be a unique snowflake then it's not worth the time to diagnose. Anything longer than about 10 minutes worth of basic diagnostics (in case it's something which may form a pattern) and I'd rather restore from backup / rebuild via our orchestration system.
I trust the system and engineers to have everything correct so there's no pain in this approach. If they've missed something, then it's a timely reminder why documentation / automation is king.
If only there was a way I could install Autodesk products without it taking an age and a half.
Or adobe suite, or vectorworks, or.....manually scripting 60% of the apps to install...
It makes me bitter when half the time they aren't really doing anything.
You fire up procmon to see what it's waiting on, and it literally isn't doing anything...
It's like it's got a loop coded with a 60 second wait after every single thing it does.
Adobe suite packages are the most annoying - it takes the best part of an hour, SSD > SSD, for the master collection.
My automated images take about 8-9 hrs on a server, building with all the W7 updates, Office install and Adobe packages.
And the app to make the packages in the first place takes forever, or crashes.
We had a big discussion about this some time ago. We now even automate the installation of many of our "one off" systems. It takes about 2-3 times as long for a single system, so immediately it seems like a waste.
But 50% of the time someone wants another one because they didn't plan ahead, so that pays for itself immediately.
The best bit is once it's in Puppet, DR becomes nothing more than "make another one of those", and no steps are missed.
All of our Autodesk tools are network mounted. I'm not going to lie: this is the biggest pain in my arse on a daily basis. It takes easily 10 times as long to get Autodesk tools working from network mount points than any other vendor, and their installers make a whole bunch of assumptions about your environment that are fucking stupid.
However, what it means is that our PXE/Puppet installed systems don't have any Autodesk stuff installed locally, and that alone is a damned good incentive to do it the way we do.
We've communicated with Autodesk on a number of occasions about their tools, but they just don't seem to get it (or more realistically, there's not enough people inside Autodesk who do get it in order for them to change). They're apparently convinced everyone wants a wizard with a million "click next" checkboxes, and that we're too dumb to untar a file and set a few envars for PATH and libraries.
Compare and contrast to many of our other vendors who just give us tar files, and a 3-line README with the envars to set, and away we go.
Best of all with our model, we can have dozens of different versions of our tools online all the time. We have scripts that set an environment per job for our artists, and they just PATH in to all the tools they need based on a small configuration file per project or sub-project. This means when people work on a project that spans years, they can slowly upgrade tools as they go along, but they know that if they ever change their environment back into an old portion of a project, they'll switch back to the correct version of tools and not force themselves to upgrade files, break their scenes, and destroy precious work.
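The per-project environment trick could be sketched like this: a tiny config file pins each tool to a version, and a script turns that into a PATH prefix. The config format and /apps layout here are made up for illustration, not their actual scheme:

```shell
#!/bin/sh
# Sketch: build a PATH prefix from a per-project config that pins
# tool versions, e.g. a file containing lines like "maya=2016.2".
# The file format and /apps/<tool>/<version>/bin layout are invented.

path_for() {
    # Turn "tool=version" into /apps/tool/version/bin
    tool="${1%%=*}"
    ver="${1#*=}"
    echo "/apps/$tool/$ver/bin"
}

build_path() {
    # Read a project config and emit a colon-joined PATH prefix.
    cfg="$1"
    prefix=""
    while IFS= read -r line; do
        case "$line" in ""|\#*) continue ;; esac   # skip blanks/comments
        prefix="${prefix:+$prefix:}$(path_for "$line")"
    done < "$cfg"
    echo "$prefix"
}

# Usage: PATH="$(build_path /jobs/bigshow/tools.cfg):$PATH"
```

Switching projects is then just re-running the script against a different config, which is what lets old portions of a project resolve to the old tool versions.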
Sadly, Adobe is our exception to the rule. They've made it utterly impossible to use in an intelligent manner, and have also made it very clear that they don't give a shit.
Everything else we've managed to work around. Even Cinema4D, for which I've got a script that creates upper and lower case symlinks to all of their files, because they ship case-sensitive filenames but refer to them in weird case-insensitive ways within their scripts and code!
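The symlink workaround might look roughly like the sketch below (the install tree path is an invented example, and this is a guess at the approach, not the actual script):

```shell
#!/bin/sh
# Sketch: for every file in a directory, add lower- and upper-case
# symlinks alongside it, so code referring to filenames with the
# wrong case still resolves. Path in the usage line is invented.

link_cases() {
    dir="$1"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        base=$(basename "$f")
        lower=$(echo "$base" | tr 'A-Z' 'a-z')
        upper=$(echo "$base" | tr 'a-z' 'A-Z')
        for alias in "$lower" "$upper"; do
            # Don't clobber the real file or an existing link.
            [ -e "$dir/$alias" ] || ln -s "$base" "$dir/$alias"
        done
    done
}

# Usage: link_cases /apps/cinema4d/r17/plugins
```

The links are relative (pointing at the basename), so the tree can still be moved or network-mounted elsewhere without breaking them.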
Our SCCM images take anywhere between 45m to 2 hours depending on how many collections the tech adds the PC to.
I've made the entire process modular, so that at any stage we want to upgrade anything (drivers, os, apps etc) it's a case of adding in the new package for it.
My biggest issue so far has been with random mining apps that have no automated installer strings or options at all. Datamine being one of them, but luckily their installer extracts the payload, which is just a bunch of MSIs, and I install them manually in the same order and it's worked pretty damn well.
I've been tempted to tell them to go shove it and fix their shit, but then again, I've found a working solution and I don't want them changing it again.
Thankfully I don't have to deal with much Adobe/AutoCAD stuff. I've tried to package their TrueView DWG/DXF viewer thing and it's a nightmare.
Two completely separate clients, with services on Justhost.
One of those clients has Telstra ADSL and cannot access their service, but can via their Optus or Vodafone services.
The other client has Dodo ADSL and Telstra mobile, and cannot access their server.
A second person at the second client, same scenario, cannot access.
A third person, Optus mobile, internet provider unknown, cannot access.
Some weird shit going on, but it's next to pointless ringing either of those vendors.
I'm seriously beginning to wonder just how many days of my life I've waited for dotnet to process updates at this point.
fuck dotnet with a rake.
I am pretty sure you look after accountants, so you probably know MYOB AE 2015.2 requires .NET 4.6 - that's 20 mins for the SQL server and 20 mins per workstation, approx.
Best yet is when it's a shared server and the auto script fails because it can't kill a bunch of .NET stuff and hangs for 40 minutes before you realise...
At least win 10 is supported now.
The wait for Server 16 and office 16 continues. Very expensive licensing bill this year. Server, Desktop, SQL and office - all at once.
Any other vendor and you'd ride it out.
And just by coincidence, I had to "apt-get install wine-mono" today to get a little MAME ROM scanner working. First time I've needed anything .Net related in eons.
(And absolutely zero to do with enterprise computing.)
TLDR: Specify to the sparky what you want - don't leave it up to him, then complain.
If it's what I think you're talking about, this is a 50/50 religious debate on whether fibres should be delivered as single cores 'straight through', or considered duplex pairs where each pair is crossed.
Single-core straight-through makes a bit more sense when troubleshooting patching, particularly if there are multiple passive patches.
Duplex-pair cross makes things super easy - if everyone sticks to the plan.
Most people who deal with fibre are very good at splitting fibre patch bail clips and checking for fibre polarity issues before looking at more complex causes of no link.
As I said, make sure you specify to your cabler how you expect the terminations - if he still gets it wrong then you have a reason to put the boot in.
Dotnet is amazing, take it back.