July 08, 2016

Techno Bits vol. 75: Adapting Big Environment Solutions For Small Environments


Summer has arrived in all its humid glory here in the Nation's Capital. July's a bit of a crazy month here, especially in an election year. Most of our clients are outside of that political realm, though, for which I'm grateful. That's just not our jam. 

I'm back from a quick trip up to the Penn State University Mac Admins Conference in scenic State College, PA, and once again my brain is full of ideas. I enjoyed some really fascinating talks from Tom Burgin and Ed Marczak on decompiling, troubleshooting, and live-editing binaries with various tools, as well as Ryan Manly's talk on Bash Scripting Best Practices (no backticks ever, mommy dearest!), Dan Griggs' session on defensible Macs through better log monitoring, and the work of my collaborator and friend Chris Dawe on Wi-Fi at Scale for iOS and macOS.

The talk that's been percolating most in my head, though, is the one from Shea Craig and Elliot Jordan on Intelligent Practices for Structuring, Maintaining and Collaborating, which I will absolutely post the video for when it's released. Shea (in-house at a large software company) and Elliot (a consultant for medium and large businesses with Macs) have spent a lot of time working in Munki environments that are substantially larger than my entire practice.

So, it's got me thinking: how do I adapt their advice to my smaller markets? Right now we manage deployment and updates in a couple of different ways: with Gruntwork (for now) for clients with 10 or fewer machines on a single site, or where it doesn't make sense to have a server of any kind, and with a local Munki repository for clients above that size, or where it otherwise makes sense to do so.

Be sure to click through for Elliot & Shea's presentation slides, which are beautiful and well architected.

The workflow that Elliot and Shea described relies entirely on storing the Munki repo in source control, as described by this wiki article. Source control lets you manage your Munki repo carefully and meticulously, and it enforces a review process before anything is released, so full care and attention goes into every change you push to your Macs.
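If you want to see the shape of that, here's a minimal sketch of the commit-before-release checkpoint, written in Python; the repo path, branch name, and commit message are my own assumptions, not anything from Elliot and Shea's setup:

    #!/usr/bin/env python3
    """Sketch: checkpoint a git-backed Munki repo before rebuilding catalogs.
    The repo path and branch are assumptions -- adjust for your environment."""
    import subprocess

    REPO = "/Users/Shared/munki_repo"   # assumed path to a git-backed Munki repo

    def run(*args):
        # run a command inside the repo and fail loudly if it errors
        subprocess.run(args, cwd=REPO, check=True)

    # record exactly what changed (pkginfo, manifests) before anything ships
    run("git", "add", "pkgsinfo", "manifests")
    run("git", "commit", "-m", "Promote this week's tested updates")

    # only rebuild catalogs once the change is committed and reviewable
    run("/usr/local/munki/makecatalogs", REPO)

    # push so a collaborator (or future you) can review or roll back
    run("git", "push", "origin", "master")

The point isn't the specific commands; it's that every catalog rebuild has a commit behind it that you can diff and revert.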

But it also requires infrastructure that your small business may not have or need. 

Which raises the question: what's the sane way to make sure you're not being aggressively stupid with Munki changes? How do you maintain an orchestra of Munki servers without that source control scheme, and without changes that could substantially harm your client machines?

I've been working on this, and I think a scheme built around the following steps may be your best bet:
  1. If you're really pressed for time and effort, let someone else vet the updates. This isn't ideal, but I know how busy consultant-type folks can be. You don't want to let systems wither on the vine, but you can't tend them all perfectly. For this, I recommend Gruntwork's approach to vetting: careful testing in their own fleet and their own environment, followed by slow promotion to the production environment.
  2. Use Munki on your own machines, and make that your master repository. If you're setting Munki up for your clients but you're not all-in on it yourself, living in it every day, you really need to fix that. You have to get your updates the way your users do. Downloading something for yourself? Unless it's coming from the MAS, you should be putting it in Munki. Better yet? Write an AutoPkg recipe for it. And an override.
  3. Now that you have a master repository, put your own daily driver in the testing catalog. Okay, maybe not. But you need a machine that you touch daily that gets the bleeding edge for testing purposes. A VM is probably fine, but you need something that's set up to feel like home, that you're actually going to use and test. Put AutoPkgr in place somewhere it can check for changes that will affect you, set it to run hourly, and get to work testing updates as they come in.
  4. Elevate updates locally after you've tested them for no less than 48 hours, unless circumstances dictate otherwise. The counterpoint here is Flash, right? Apple's got a nasty habit of deprecating old Flash Player versions the hard way after about two days, so your window to test is narrow. Good testing protocol says you should try these things against a couple of different tasks and workflows, and that you apply some extra elbow grease where you have it to bring to bear. Get in the habit of testing and elevating quickly so a narrow window like that doesn't catch you out. (There's a rough sketch of this promotion step after this list.)
  5. Once an update is good in your master repository, elevate it elsewhere. Set a schedule, build a calendar, do something to make sure your remote repositories are kept up to date. If it's Thursday at 4pm, have that space blocked out on your calendar to make sure your remote repositories get the updates you've tested and marked as good locally.
  6. Use Tools To Track Testing Updates and Your Repo. Shea wrote his script Spruce to look for unused packages, as well as to track your updates through the testing cycle. I'm hopeful that Greg Neagle will release the repoclean script he wrote as part of the PSU Mac Hackathon, which shows substantial promise as well.
  7. Make Sure to Follow New Developments. Since I started writing this article, a commit has been made to AutoPkg (sure to be released into production before too long) that will allow you to audit what your trusted recipes are doing. If you haven't seen Elliot's talk on How Not to Do Bad Things with Autopkg, click through and watch it to see why this is important.
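To make steps 4 and 5 concrete, here's the rough promotion sketch I mentioned above. It assumes your pkginfo files carry the _metadata creation_date that munkiimport and AutoPkg write, that your master repo lives at the path below, and that 48 hours is your soak time; none of that came from Shea or Elliot, so treat it as a starting point, not gospel:

    #!/usr/bin/env python3
    """Sketch: move Munki items from the 'testing' catalog to 'production'
    once they've soaked for at least 48 hours. Repo path, catalog names, and
    the reliance on _metadata/creation_date are assumptions."""
    import datetime
    import plistlib
    import subprocess
    from pathlib import Path

    REPO = Path("/Users/Shared/munki_repo")   # assumed local master repo
    MIN_AGE = datetime.timedelta(hours=48)    # the 48-hour soak from step 4
    now = datetime.datetime.utcnow()          # plist dates come back as naive UTC

    for path in (REPO / "pkgsinfo").rglob("*"):
        if path.is_dir():
            continue
        try:
            with open(path, "rb") as f:
                pkginfo = plistlib.load(f)
        except Exception:
            continue   # skip .DS_Store and anything that isn't a plist

        catalogs = pkginfo.get("catalogs", [])
        created = pkginfo.get("_metadata", {}).get("creation_date")
        if "testing" in catalogs and "production" not in catalogs:
            if isinstance(created, datetime.datetime) and now - created >= MIN_AGE:
                catalogs.append("production")
                pkginfo["catalogs"] = catalogs
                with open(path, "wb") as f:
                    plistlib.dump(pkginfo, f)
                print("Promoted", pkginfo.get("name"), pkginfo.get("version"))

    # rebuild the catalogs so clients actually see the promoted items
    subprocess.run(["/usr/local/munki/makecatalogs", str(REPO)], check=True)

This is just the dumb middle piece that moves tested items forward on a schedule; tools like Spruce (and, hopefully, repoclean) are what tell you whether what's left behind is stale.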
This isn't straightforward or easy. Every compromise is a compromise. But we're not all big shops with big resources, and so we do what we must. The key is doing as little harm to the structure as possible.

Now Reading: