Friday, April 11, 2014

Code Catastrophes

Dave's top five (5) reasons your code will crash and burn:

1. You didn't close a string value in matching quotes.

2. You didn't close a block of code in matching curly braces or parentheses.

3. You forgot a comma somewhere.

4. You got your variable names crossed up somewhere.

5. You lost sight of the end game and became engulfed in one chunk of code.

Config Manager Contusions: Finding Advertisements Pointed at Active Directory OUs

If you have a large "enterprise" environment, which has System Center Configuration Manager in it, and it's been in place for a long time, you could have a rather wide sprawling infrastructure, replete with Collections that point to things you completely forgot about.  Case in point: Collections based on Query rules which point at AD Organizational Units (OU).

So, when one of your engineers asks the respectable question: "Do we push any apps at computers in OU ", you're left scratching your head about how to compile a layer of potential Group Policy software installations with another layer of potential Configuration Manager Advertisements.  Say hello to one more Rubik's cube of the IT world.

Thankfully, Microsoft spent just enough time on their SQL back-end for Config Manager to make a lot of chores surprisingly simple with just a few minutes poking around in SQL Management Studio.

Here's one example to demonstrate how to identify which Collections are built on query-rules that point at AD OUs, and (AND) which have Advertisements pointed at them...

[T-SQL Query]

-- pick whichever columns you need; these are the obvious ones
SELECT v_Collection.CollectionID, v_Collection.Name,
    v_Advertisement.AdvertisementName,
    v_CollectionRuleQuery.QueryExpression
FROM v_CollectionRuleQuery INNER JOIN
    v_Collection ON
    v_CollectionRuleQuery.CollectionID = v_Collection.CollectionID
INNER JOIN v_Advertisement ON
    v_Collection.CollectionID = v_Advertisement.CollectionID
WHERE (v_CollectionRuleQuery.QueryExpression LIKE '%SYSTEM_OU_NAME%')
ORDER BY v_Advertisement.AdvertisementName

[/T-SQL Query]

Once you start scratching the SQL surface, it's hard to stop.

Monday, March 31, 2014

RoboCopy, RoboCopy, Wherefore Art Thou, RoboCopy?

Comparing exit codes with exit codes...

[vbscript: windows 8.1 64-bit]

Set objShell = CreateObject("Wscript.Shell")

' robocopy syntax: robocopy <source> <destination> [files] [options]
cmdstr = "robocopy \\server1\d$\path1 \\server2\f$\path1 /xo /s /xf *.db *.bak"

' Run() returns the process exit code when bWaitOnReturn is True
result = objShell.Run(cmdstr, 1, True)
' Err.Number is untouched by Run(), so this will always be 0
exitcode = Err.Number

wscript.echo "result is " & result
wscript.echo "exit code is " & exitcode


If the [source] or [target] path is bogus (e.g. does not exist), result = 16, but if both are valid, result = 2.  In every case, exitcode = 0.
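As it turns out, those values aren't arbitrary: robocopy's exit code is a documented bit mask, so 16 flags a fatal error (like a bogus path) while 2 merely means "extra files or directories were detected" at the destination.  A quick decoder sketch (Python, purely for illustration):

```python
# Decode robocopy's bit-mask exit code (documented by Microsoft):
#   1 = one or more files were copied
#   2 = extra files or directories were detected
#   4 = mismatched files or directories were detected
#   8 = some files or directories could not be copied
#  16 = serious error - robocopy did not copy anything
FLAGS = {
    1: "files copied",
    2: "extra files/dirs detected",
    4: "mismatched files/dirs",
    8: "copy failures",
    16: "fatal error",
}

def decode_robocopy_exit(code):
    """Return the list of flag descriptions set in a robocopy exit code."""
    if code == 0:
        return ["no files copied, no failures"]
    return [desc for bit, desc in FLAGS.items() if code & bit]

print(decode_robocopy_exit(16))  # -> ['fatal error']
print(decode_robocopy_exit(2))   # -> ['extra files/dirs detected']
```

Handy to keep in mind when a scheduled robocopy job "fails" with exit code 1 or 3, which is actually success with files copied.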

Why fix something when you can easily order a battalion of coders to just move on to build a new fort?

Tuesday, March 25, 2014

Factors of Refactoring

Even if you've never written a line of program or script code in your life, you know what refactoring is.  You may not have known it, but you knew it.  Confused yet?  Let me help.

Refactoring is basically "refinement" or "optimization" or "streamlining".

Let's try this analogy on:

You build a bookcase from scraps of wood and some hand tools in your garage.  After the first bookcase, you decide you want to build another one.  But this time you modify the design to change some of the parts and how they're joined together.  Maybe you change it so there are fewer parts overall.  The first one was assembled from eight (8) pieces of wood, and sixty-four (64) screws.  The second one from six (6) pieces, and thirty-two (32) screws.  After a few more, you realize you can rearrange the cut patterns on a sheet of raw plywood to get more parts from each sheet with fewer wasted scraps.  Eventually, you have a sturdier product, requiring less material and lower cost to build each one.

Or how about this:

You cook meals and find that you tend to make mostly Italian recipes.  After several dinners you realize that by reorganizing the pantry and spice rack you can more easily get the ingredients to the stove top in less time and with less confusion.  As time goes on, you move things around: pot racks, islands, roller carts, etc.  After a month of "trial and error" you now have the most streamlined kitchen setup imaginable.  Everything from the cookware, cooking area and ingredients to the paths you traverse moving from one station to another is now as good as it can be.  Eventually, you realize you've reduced the time and effort required to cook most meals by 50% and lowered your costs as well.

Or how about this:

You wrote a script, and it does a lot of separate, but sequentially-dependent tasks in order to arrive at a desired result.  The first draft contains 12 distinct custom function blocks, and repeated "for" or "while" loops over the same set of data.  The second draft contains 10 distinct custom function blocks, and the repeated iteration blocks are now calls to one of the custom functions.  By the third draft, the total number of code lines has been reduced from 1034 to 811, and then down to 643.
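To make that concrete, here's a tiny sketch of the draft-one-to-draft-two move (Python, with made-up data): the pasted iteration blocks collapse into one call to a custom function.

```python
# Draft 1 style: the same "for" loop pasted twice with small variations.
records = [
    {"name": "app1", "type": "msi", "size": 12},
    {"name": "app2", "type": "exe", "size": 48},
    {"name": "app3", "type": "msi", "size": 7},
]

msi_names = []
for r in records:
    if r["type"] == "msi":
        msi_names.append(r["name"])

exe_names = []
for r in records:
    if r["type"] == "exe":
        exe_names.append(r["name"])

# Draft 2 style: the repeated iteration blocks become one call
# to a custom function.
def names_of_type(items, kind):
    """One reusable loop replaces every pasted copy."""
    return [i["name"] for i in items if i["type"] == kind]

assert names_of_type(records, "msi") == msi_names
assert names_of_type(records, "exe") == exe_names
```

Multiply that by a dozen pasted loops and the 1034-to-643 line-count drop stops looking like magic.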

Going back to look for ways to accomplish the same task in fewer steps.  With better quality results.  Improved reliability, predictability, dependability.  All those other -ilities.  Reusing things instead of making new ones all the time.  Getting more out of the same effort than you did before.

This is refactoring.

Case in Point:  Program Files Example 101

Phase 1 - two files, one does option "A" and the other option "B"
Phase 2 - files are combined and use input parameters (i.e. "switches") to drive the code workflow.
Phase 3 - instead of distinct blocks of code, sections are combined into calls to one function with input params.
Phase 4 - instead of forked code paths in each function, the function now passes the input params through to drive external interfaces via concatenated references (e.g. you call one SQL view or another by passing in the param as the view name itself)
Phase 5 - SQL view is replaced with a stored procedure or db function that builds expression with input param directly.
Phase 6 - your work day now goes from 6 hours of keyboard banging and eating at your desk, to 4 hours of enjoyable keystrokes and going out for lunch.
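Phases 3 and 4 are worth a sketch of their own.  Below is a hypothetical Python rendition (all names invented): the forked code paths collapse into one function whose input param builds the external reference directly — just validate the param first, or you've reinvented injection attacks.

```python
# Phase 1/2 style: forked code paths, one branch per option.
def export_report_forked(option):
    if option == "A":
        view = "v_Report_A"
    elif option == "B":
        view = "v_Report_B"
    else:
        raise ValueError(option)
    return f"SELECT * FROM {view}"

# Phase 4 style: the input param drives the external reference via
# concatenation, so ONE code path handles every option.
VALID_OPTIONS = {"A", "B"}

def export_report(option):
    if option not in VALID_OPTIONS:  # guard the concatenation!
        raise ValueError(option)
    return f"SELECT * FROM v_Report_{option}"

assert export_report("A") == export_report_forked("A")
assert export_report("B") == export_report_forked("B")
```

Phase 5 just pushes the same trick down a layer, into a stored procedure or db function that builds the expression from the param itself.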

For those of you who already knew this and are staring at this page like you're witnessing a cat eating a pig, whole, I apologize for your agony.

Thursday, March 20, 2014

Stick Shift or Automatic: Software Deployment by the Numbers

No.  I'm not trying to promote sales of any books/ebooks, not even my own (cough-cough).  I am about to dive into a murky subject that many Windows-environment Systems Administrators and Systems Engineers have a very tough time understanding.  And, as if that wasn't enough, I would venture to bet that very, very, veerrrrrrrry few Business Analysts, Project Managers and executive management folks have even a basic grasp of this subject.

The irony is that this is about as fundamental to any medium-to-large scale IT environment as bricks are to building a big house.  But even a small business can benefit from this, so don't scoff and walk away just yet.

What am I talking about?...

Manual versus Automated Software Product installation, and Repackaging.

Two things which fit together like peas and carrots.  Or hookers and politicians.  Or Florida and hurricanes.

Yes.  It's 2014, and from what I can tell there is a frightening number of so-called "technology professionals" who sincerely believe that there is little or no difference in terms of cost or quality between these two approaches.  These two diametrically-opposed approaches, that is.  In fact, many think that manual installations are cheaper, even when dozens of installations are involved per product.  I am not joking, nor am I exaggerating.  Please read on.

Most of them, if fed enough beer, or Xanax, would bet their retirement interest on the assumption that the differences between these two are as close as driving a vehicle with stick-shift versus an automatic transmission.  That this is a monolithic, linear-scale, pound-for-pound comparison.

It's a good thing for them that the retirement interest on their fortunes is almost invisible to their overall balance sheet.  A few zeros can get lost in all that green I'm sure.  When you dig into the real numbers, the comparison is about as close as a boxing match between Mike Tyson on his "best day ever" and Richard Simmons after getting a root canal.

All kidding aside, let's do some math. mmkay?

Quasi-Scientific Analysis

Let's say product "Fubar 2014", from a well-known vendor, is required by 500 of your employees in order to "do their assigned job duties".  You have a minimum wage minion whip out a stopwatch and begin timing the installation on the first five computers.  The minion tallies up the results and hands it to you. It goes something like this:

  1. Technician walks over to computer "123" in building 42, up on the 3rd floor, in room 112 and sits down. Time spent getting there by foot/bicycle/car/private jet/yacht or teleporter is, on average, 5 minutes.
  2. Technician then logs on and opens Windows Explorer. 2 minutes (waits for initial profile setup)
  3. Navigates to central file server share on the network (Active Directory domain environment) to locate the folder containing Fubar 2014 setup files and related files.  1 minute.
  4. Navigates to "prereqs" subfolder to install individual products which are required before installing Fubar 2014:  Java  Runtime 1.5 (vendor says it can't work with 1.6 or later), then Apple Quicktime 7.1, Adobe Flash Player 11.5 (Fubar 2014 won't work with version 12), and a few other items.  10 minutes.
  5. Double-clicks on Fubar 2014 "setup.exe" file to launch main setup. Clicks Next on the Welcome page.
  6. Accepts default for installation target folder path, clicks Next
  7. Checks the EULA terms and enters a license product key.  Clicks Next.
  8. Waits for installation to complete. 8 minutes.
  9. Goes back into Windows Explorer and right-clicks on a folder under C:\Program Files\Fubar to open the Properties form.  Modifies the NTFS permissions to allow members of the local "Users" group to have Modify/Change permissions on that folder and all sub-folders.  This is required since the users do not have local Administrator rights, so UAC has been a problem.  This has been known to resolve the problem, so your tech goes ahead with the routine modification.  5 minutes.
  10. Tech goes into Windows Services and disables a service that Fubar 2014 uses to check for periodic updates, which users cannot install without elevated permissions, so this is a standard practice at your shop to disable it.  2 minutes
  11. Tech opens REGEDIT, navigates down to HKEY_LOCAL_MACHINE\Software\Fubar\Fubar 2014\Settings\ and changes the value of "autoupdate" from 1 to 0.  1 minute.
  12. Tech reboots computer, and waits for login screen to log back on.  2 minutes.
  13. Tech logs back on (1 minute or less) and launches Fubar 2014 to confirm it works.  While still opened, Tech navigates into settings to change the option (Tools / Options / Data) to set the "Default library location" to a central UNC server path where all the users share templates and common items to maintain standards.  2 minutes.
  14. Tech closes Fubar 2014 and logs off.
  15. Tech goes on to next location and repeats this process.

Paying the Bill

If you kept track of the time spent above, that's 5+2+1+10+8+5+2+1+2+2 or 38 minutes.  That's without ANY interruptions or unexpected problems.  And that's assuming the computers are relatively new and performing well.

In reality, from tests I have witnessed over the past 5 years alone, in various enterprise environments from 5,000 to 50,000 computers, the average time to perform an installation of this magnitude is roughly between 35 and 50 minutes.  

When performed during business hours with people around in close proximity, the times averaged 45 minutes to 1 hour.

When additional problems had to be resolved, such as missing updates, recovering disk space, removing conflicting components, that range increased to around 1 hour 20 minutes to 1 hour 50 minutes.  

I haven't even mentioned:
  • Time spent deactivating old licenses
  • Time spent activating new licenses
  • Time spent dealing with device drivers
  • Time spent dealing with custom network interface settings
  • Time spent on the phone dealing with vendor support:
    • Large vendor: waiting on line, listening to 70's pop music, interlaced with endless repeats of ads for their other products, like their "new cloud services".  awesome.
    • Small vendor: waiting for guy (company owner/programmer/tester/web admin/support rep) to move his cat off the desk so he can flip through his paper stack to find your purchase order.
  • Impact on end-users while they wait for the tech to do their work
  • Impact on production from unexpected conflicts with other line-of-business products which are only discovered after the installation because there was no lead-time testing afforded.

In situations where a previous version had to first be uninstalled before performing a new install of the later version (usually because the vendor didn't want to take the time to handle this within their new installation package) the time ranges increase to around 2 hours to 2 hours 30 minutes.

Simple:  35 - 50 minutes
Complex:  120 - 150 minutes

In beancounter English: that's a range of roughly 1 hour to 2-1/2 hours.

Repeat this times 500 and you get anywhere from 316 hours (simple) to 1125 hours (pain in the ass).

Multiply that by the technician labor rate of, say, $9/hour (you're a cheap bastard, after all), and that equates to roughly $2,850 to $10,125 of labor.  For ONE software installation.
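For anyone who wants to check my beancounter math, here it is in runnable form (using the 38-minute walkthrough and the middle of the 120-150 minute complex range):

```python
# Manual-install labor: per-seat minutes x 500 seats, at $9/hour
SEATS = 500
TECH_RATE = 9  # dollars per hour

def labor(minutes_per_seat):
    hours = minutes_per_seat * SEATS / 60
    return hours, hours * TECH_RATE

simple_hours, simple_cost = labor(38)     # the 38-minute walkthrough above
complex_hours, complex_cost = labor(135)  # middle of the complex range

print(round(simple_hours), "hours, $", round(simple_cost))    # ~317 hours, $2850
print(round(complex_hours), "hours, $", round(complex_cost))  # 1125 hours, $10125
```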

I'd guess you probably have more than a few products that would be handled this same way across your organization.

Are you starting to see where this is going yet?

Sanity Check Time

Now, let's crank this puppy through ONE cycle of repackaging effort and see how this spews out the other end of the meat grinder.
  1. Software Package Engineer (hereinafter SPE) opens a CMD console within a guest virtual machine (VM) running inside of VMware Workstation or Microsoft Hyper-V (take your pick).
  2. Navigates to folder where Fubar files are stored.
  3. Launches setup.exe -r and completes a normal setup process.
  4. SPE grabs the resulting setup.iss file from C:\Windows and copies it into new folder along with the original setup files.  5 minutes total by now.
  5. SPE opens a text/code editor and loads a template script to enter some lines to handle checks for prerequisites like JRE, Silverlight, Quicktime and so forth.
  6. SPE enters code to invoke the setup.exe with setup.iss and redirect the output to a new log file.  Total of 15 minutes by now.
  7. SPE saves script and puts all the files into the new deployment source folder.  SPE launches a separate VM, which is configured to match the configuration of the computers in use by employees who will be getting the installation.  SPE runs the install using a command shell to ensure it runs "silent" and requires no user interaction whatsoever.  Total runtime, including launching the VM and logging on is now around 30 minutes.
  8. SPE emails or IM's the designated customer SME (that's subject-matter-expert) who was nominated to be the "test user" and asks them to log into the VM using Remote Desktop and kick the tires.  Time spent contacting the customer about 1 minute.
  9. SPE moves on to work on other packages or tasks while waiting for customer to respond and attempt the testing (parlance: User Acceptance Testing, or "UAT")  No time expended by SPE during this period by the way.
  10. Customer gives the package "two thumbs-up!" and the SPE moves it into staging for production deployment.  SPE creates a new "Application" in System Center Configuration Manager 2012, creates a resource Collection with the computers to be targeted, and assigns the Application to the Collection using an Advertisement.  10 minutes (he's drinking decaf this morning)
  11. Advertisement is scheduled to run after hours, avoiding impact on production time against the customer staff who will receive the new installation.  SPE does not have to wait for the installation because it is scheduled to run on its own, so he/she checks on it the next morning.
Total time spent:  5+15+30+1+10 = 61 minutes.

I realize I said "he" a lot, but "she" could do just as well obviously, so that's irrelevant.

Things I didn't include:
  • UAT problems resulting in having to go back and make adjustments and retesting
  • Pre-flight deployments to verify conflicts in production on a limited subset of computers.
  • Next-day support calls for incidental one-offs like machines being offline or a service was stopped or an application was open and had a lock on a file that prevented the Advertisement from completing successfully.
  • Cats walking across keyboards and causing a BSOD.
  • Who knows what else.

Taking those things into account, the range can jump from 60-80 minutes up to 2 hours, even for a simple repackaging effort like the one Fubar 2014 involves.

In the "real world" some products can be much more difficult to repackage and may consume days or weeks of development and testing in order to get a rock-solid package into production.  Those are rare, but even then, EVEN THEN, the savings when calculated across hundreds or thousands of computers, spread across multiple locations, states or continents, can be well worth the effort.  

Think of "mission critical" applications, where the time window to get them into production, with 99.999% success rate, is only an hour or two, end to end, over 20,000 or 30,000 computers.  That's not fiction.  There are industries where this is not uncommon, and they rely heavily on this methodology to ensure:
  • Highest probability of success
  • Consistent and predictable results
  • Minimized impact on production operations
  • Optimum transparency of all moving parts (think reporting and monitoring)

Steak Dinner Anyone?

So, this SPE makes $75,000 a year in USD, roughly $36/hour, and spent an hour building this simple package.  That's $36 to deploy one product to 500 computers over one evening without asking any users to step away from their computers during work hours.  

The cheapest scenario in the first example was $2,850.
The most expensive scenario in the latter example was $1,442.

Even if the SPE had to devote an entire week to one product, or roughly 40 hours x $36/hour ≈ $1,442, that's a LOT CHEAPER than a $9/hour tech running around to 500 computers, or 10 x $9/hour techs running around to 50 computers each.
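Pulling both scenarios into one snippet (same assumed figures: $9/hour tech, $75,000/year SPE at 2,080 work hours per year):

```python
# Manual deployment vs. one Software Package Engineer, for 500 seats
TECH_RATE = 9.0          # $/hour for the manual technician
SPE_RATE = 75000 / 2080  # ~$36.06/hour at $75,000/year

manual_best = 38 * 500 / 60 * TECH_RATE    # cheapest manual scenario
manual_worst = 135 * 500 / 60 * TECH_RATE  # most expensive manual scenario

spe_simple = 1 * SPE_RATE  # one hour to build the package
spe_week = 40 * SPE_RATE   # a whole week sunk into one package

print(f"manual:   ${manual_best:,.0f} to ${manual_worst:,.0f}")
print(f"packaged: ${spe_simple:,.0f} to ${spe_week:,.0f}")

# Even a full week of packaging undercuts the cheapest manual run
assert spe_week < manual_best
```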

That's Not All

Billy Mays homage:  Now, if you go with the repackaging and automated deployment scenario, you have a mechanism in place that does the following for you without ANY additional cost:
  • Provides automatic self-healing of deployments, to cover situations where a targeted computer is replaced or reimaged with a fresh Windows configuration.
  • Provides simple, effortless scalability for future growth.
  • Provides a robust auditing and reporting trail for accountability and compliance.
  • Provides fault tolerance
  • Provides coverage during non-production hours.

Still think that thumb drive is the way to go?  Hmmmm?

Dave's Top 10 Windows Command-Line Goodies

I'll admit it, I like geeky "Top 10" lists.  The irony of saying "Windows" and "command-line" in the same sentence also fits nicely with how my wobbly brain works, so I couldn't resist.  While scripting is obviously in a class of its own with regards to flexibility, scale and so on, the built-in tool set provided within Windows 8 (and Windows Server 2012) is still worthy of some healthy respect.

The commands listed below are some that I use quite often, and by that I mean pretty much every day. On a typical day I will have two CMD consoles open, a PowerShell ISE console and at least one or two instances of some code editor application like TextPad or Visual Studio Express.  Regardless, I will often jump back to one of my CMD consoles to do something that involves one of the items mentioned below.  Maybe there's a few in here you haven't tried yet.


1. CMDKEY

I use this for managing my credential mappings between my laptop, my home domain environment, my various "work" environments, and TS/RDP environments.  To list saved credentials, use "cmdkey /list".  You can narrow down the list to one target, such as a server you connect to by SMB or RDP named "fubar", by typing "cmdkey /list:fubar".  To add a new explicit mapping, such as connecting to server "fubar" from your tablet which is not part of the same domain or workgroup, use "cmdkey /add:fubar /user:dave /pass:p00rb@$taRd".  For more information about this command, type "cmdkey" and press Enter.

Example: save login credentials for domain server "serverXYZ" from your workgroup laptop...
cmdkey /add:serverXYZ /user:dave /pass:Ih@teP@$$wordZ


2. WINRS

This is another command I find extremely helpful.  There are a few times where it doesn't shine as brightly as I'd like, but that's rare.  One such example (and don't let this put you off) is invoking a utility like "SendSchedule.exe" on a remote box, even when the associated XML config file is in the appropriate location.  You just get nothing.  Again, that's not enough to tarnish this powerful utility.

What does it do?  Magical stuff.  That's what.  It's basically a remote shell wrapper, and lets you execute tasks on the remote computer as if you're on that remote computer.  So if you have a script, say "dosomething.ps1" sitting in the c:\stuff folder on serverXYZ, and you're on your laptop named "dingy8", you can open a command console (or PowerShell console) and type:

winrs -r:serverXYZ powershell.exe c:\stuff\dosomething.ps1

Then watch the results unfold as if you were logged onto serverXYZ and running the script directly.

There's way more to this command than I can possibly blurt out here, so try "winrs -?" to begin exploring.


3. PUSHD

This one has been around almost as long as me.  Same for its cousin: popd.  The pushd command creates a temporary "ad hoc" drive mapping to a specified share.  So if you just need a drive letter, under the context of whatever the command console is running as, just type "pushd " and bang! you have a Z: drive and it's the current drive as well.  So why is this any better than "net use"?  Well, to disconnect a "net use" mapping you have to run "net use" again with a few more keystrokes.  In some cases, you forget and leave the drive mapped even after repeated logins.  A pushd mapping only lasts until you either log off (or "sign out" in Windows 8 parlance), or use "popd" to release it.

Example: map a temp drive to share \\serverXYZ\stuff --> pushd \\serverXYZ\stuff

4. SC

The sc command is another golden-age oldie that provides command-line control over the Windows Services environment.  SC allows you to stop, start, create, delete, and modify Windows Services.  Nuff said.

Example: Check on the status of the WinRM service --> sc query winrm
Example: Stop the WinRM service -->  sc stop winrm
Example: Change WinRM startup to manual -->  sc config winrm start= demand

For more information, type "sc /?"


5. SCHTASKS

Like the sc command, this one is a counterpart to a common GUI tool.  This one provides management features for Scheduled Tasks (hence the abbreviated name: "SCHeduled TASKS").  You can create, delete, modify, list, export and import scheduled task jobs with incredible detail and control.  In many cases, it's easier to shove a command string using schtasks through a shell operation from within a script, than to use the direct API alternatives like WMI and .NET, but it depends on your circumstances of course.

Example: List all scheduled tasks on remote computer "abc123": schtasks /query /s abc123
Example: Run task "doit" on remote computer "abc123":  schtasks /run /s abc123 /tn doit


6. SHUTDOWN

Ah, this is a most powerful, yet most simple command tool.  Almost as flexible as "psshutdown" and some other third-party tools, it lets you request or "force" a shutdown, logoff or restart of a remote computer, as well as the local computer.

Example: Restart remote computer "abc123" in 30 seconds --> shutdown -r -f -t 30 -m \\abc123
Example: Restart "abc123" with a custom display message to the current users...
shutdown -r -f -t 30 -m \\abc123 -c "IT rulers are kicking you minions off in 30 seconds!"

7. REG

If you haven't heard of this command already, you might have guessed that it has something to do with the Registry.  Yep.  This cool gadget provides command-line capabilities for reading, writing, importing and exporting registry keys and values, and a little more as well.

Example: Display installed apps...
reg query hklm\software\microsoft\windows\currentversion\uninstall /s

Example: Import a .reg file
reg import regfile.reg

Example: Import a .reg file into 32-bit view on a 64-bit computer:
reg import regfile.reg /reg:32


8. ROBOCOPY

Oh boy.  What can I say about this command that hasn't already been said?  Once you get used to it, xcopy and the like will be flushed down the drain for good.  Powerful.  Flexible.  Simple.  Incredibly useful.  And it was one command I was happy to see rolled into the base operating system configuration, where it was once relegated to Resource Kit add-ons.

One scenario I use this in quite often is wrapping it in a .bat script to synchronize remote project folders to a central location, and then invoking the 7-Zip command-line interface to archive the backup content into .zip files and offload them to attached or removable storage.  For me it's just another redundant redundancy of backups, in addition to server backups, DropBox, Google Drive and OneDrive.

Example: backup only .vbs and .ps1 files which are newer than those already backed up, or added since last backup, from \\server123\stuff to d:\archives\scripts...
robocopy \\server123\stuff d:\archives\scripts *.vbs *.ps1 /xo /s
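For the curious, here's a rough cross-platform sketch of that sync-then-archive routine in Python — hypothetical paths, with robocopy's /xo (exclude older) behavior approximated by comparing file modification times, and the 7-Zip step swapped for Python's built-in zipfile module:

```python
import os
import shutil
import zipfile

def sync_newer(src, dst, patterns=(".vbs", ".ps1")):
    """Copy matching files that are new, or newer than the backed-up copy."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            if not name.lower().endswith(patterns):
                continue
            s = os.path.join(root, name)
            rel = os.path.relpath(s, src)
            d = os.path.join(dst, rel)
            os.makedirs(os.path.dirname(d) or dst, exist_ok=True)
            # /xo equivalent: skip unless source is newer (or missing at dest)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
                copied.append(rel)
    return copied

def archive(folder, zip_path):
    """Zip the backup folder, like handing it off to the 7-Zip CLI."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                z.write(full, os.path.relpath(full, folder))
```

Call sync_newer first, then archive, and you've got a .zip ready to offload to removable storage.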

One interesting aspect to robocopy is its "help" output.  On older versions, "robocopy /?" and "robocopy /???" gave very different results.  However, on Windows 8 and Windows Server 2012 both options were merged to produce the same results.

9. MSG

This sneaky little turd hijacked the old "net send" command like a stealthy ninja.  It's not quite as easy to implement, due to the choke collar placed on the Messenger service since Windows XP was taken out in a boat and shot in the head (Godfather homage).  I wrote a blog post some years back on nothing but how to implement msg in a Windows 7 environment and enable the plumbing using a GPO.

If you don't have a GUI IM product in use, such as Communicator or Lync (same thing, different lipstick), and don't have an IRC app in use, this is another means to annoy your coworkers.

Example: tell my son Zach to get to bed, while he's uploading guitar demos at his computer in his bedroom...  msg zachary /server:zachspc /w "bed time dude!"  (the /w option waits for him to click the 'OK' button to indicate he read it).

10. MSTSC and MSRA

Holy crap, these are powerful, offering everything from really simple/basic/easy usage up to arguably complex yet powerful usage.  MSTSC is the Remote Desktop utility.  So instead of navigating the Start Menu or Start Screen for Remote Desktop, and entering a computer name to access, you can invoke it straight from the command line.  It also offers command-line options for dealing with multiple monitors (/span and /multimon), shielding credentials (/restrictedAdmin) and invoking a preconfigured connection file.

Example: Remote into serverXYZ -->  mstsc /v:serverXYZ

MSRA is the command-line counterpart to Windows Remote Assistance.  There are a ton of options for this command (type msra /? for a list of them), but /offerra is the most commonly used of them.

Example: Initiate a Remote Assistance offer to the user on computer "fubar5"...
msra /offerra fubar5

You can go crazy with msra and invoke email invitations, password protection on assistance offers, and much more.


There are obviously many more command-line tools available on Windows 8 and Windows Server 2012 I could have included.  Some that come to mind, which I also use quite often, include MSIEXEC, WEVTUTIL, WBADMIN, WMIC, OPENFILES, FTYPE, DRIVERQUERY, REGINI and FINDSTR.  I won't argue that there aren't better alternatives outside of what comes built into Windows, even many that are "free", but it's nice that these still exist for times when you need them.


Thursday, March 13, 2014


This may be a bit deep, but I just finished folding laundry at a nearby laundromat whilst a group of over-caffeinated high-schoolers pretended it was audition night for "The Voice" and "Tosh.O" in the same place.  After getting home and putting the goods away, I did dishes and consumed a tasty Dogfish Burton Baton and now my brain needs some playtime.  You've been warned.  Sit down.  Strap in.  Enjoy the ride....

To many folks, the term "NIH" means National Institutes of Health.  But to many folks in the legal field, particularly those specializing in intellectual property issues ("IP" law, that is), it means "not invented here".  It's a uniquely-American term, but the concept predates America itself actually.  Allow me to digress...

I learned this while doing some contract work in that field a few years back; patent drawings and technical procedure writing actually.  "NIH" refers to a policy some renowned international corporate entities (I won't name any) have towards how they approach dealing with intellectual property.  This encompasses copyright, patents and trademarks, obviously, but it also includes service marks and more.

In basic English: if the item was "not invented here" they don't want anything to do with it.  In most cases, they mean "at all", as in "not in any respect whatsoever".  For example, if you invent some cool toy, and approach some particularly HUGE toy manufacturing corporation to negotiate some kind of "deal" (I'll leave their actual name to your imagination), and they adhere to the NIH policy, and you are not a direct employee of that company, they will very likely ask you to leave.  If you continue to insist on a discussion, they may call security to help you find the door.

I am not joking.

So, I recently had a discussion with some colleagues (past and present) on the subject of: "why do so many IT organizations seem to abhor the idea of their own staff daring to develop their own tools for their own needs and then being so brazen as to ask their employers if they'd like to join in on the party?"  Keep in mind that this is not about selling the invention to the employer.  It's about asking the employer to put some elbow grease into it and help launch the airplane off the flight deck with a little more "umpf!".  Of the six or so folks present, and their recollection of some two dozen secondary contacts and colleagues, not ONE of them could recall ever hearing of a successful effort in that regard.

Not one.

I am very familiar with this, as I have been, without planning or expectation, in the center of such situations, many times.  At my last four employers in fact.  I was present during the putting-out of some particular fires (metaphorically-speaking), in which there were NO available "off-the-shelf" solutions to be had.  None.  So we built our own (or I built my own).  Like many contraptions, mechanical or otherwise, which are conceived to solve a "real" problem, they tend to gain a life of their own.  Problems, it seems, tend to recur so having a tool which is custom-fit to solve it tends to be very appealing.  Especially when that tool or solution was provided at no "additional" cost to the parties involved.

Yes.  No additional cost.  Translation:  It was conceived, built, tested and applied using existing funding and task vehicles (that's beancounter speak for "it was part of my job duties, so I did it").  Even the technologies involved with building the tool were 100% "free".  Things like built-in API's, and SQL Server Express, IIS, scripting, etc.  Aside from the already-paid cost of the operating system license itself, there were no additional costs required or incurred.

So, why the fuss?

That's a good question.  A lot of theories were discussed around this one key aspect.  Everything from businesses shying away from stepping outside of their "core competencies" to "perceived risk" to "obligatory support liabilities" and so on.  Blah blah blah.  Fear.  It's just fear.  I add laziness to that, but fear and laziness go together like hookers and politicians.

Then it dawned on me that it's really about NIH.  I realize that fear+laziness nearly equates to NIH in most respects, but it has a different sauce poured over it.

What's even more interesting about this entire mess is that in none of the examples we could cite were there any ulterior motives on the part of the employee/developer.  It was all above-board and in good faith, not only in what they accomplished, but in how they approached and engaged their employer.  The employer however, no matter how the approach was toasted, garnished and served-up, consistently took a hostile, defensive stance in their reaction to the employee.  As if indicating their distrust in the employee; probably assuming the employee concocted the whole idea just to negotiate a deal in the same sense as a blackmail operation.  Holding them hostage.  Whatever that could mean.

But the question remains: why?  Why not actually hold detailed, sincere discussions with the employee, rather than closing the gates and shooting arrows off the guard towers?

Legal risk.  Once again, attorneys, and their corporate financial overlords (retainer clients, usually) have successfully cultivated an atmosphere of risk-avoidance.  Risk avoidance is another name for "fear of innovation".  Imagine if Henry Ford had been told that challenging horses would land him in court.  I'm sure someone tried it, but what if he had actually caved in to that?  Oh boy.  You can argue that the environment would have fared better today than it has, but think of the wider ramifications of that.  Now, start that idea-clock in motion today and imagine where it will be in 50 or 100 years.

It's already too late to save our federal government system from the corruption of corporate PAC influences.  Let's not let the rest of the baby go down the bathtub drain as well.  There's a few companies and entrepreneurs out there still taking risks (Elon Musk, Richard Branson being just two of them), so maybe there's still hope things will turn around in favor of imagination and risk-taking.  It can only work when it's cooked in the same pot as the money comes from though.  It seems those of us stirring around in the bottom ranks of the IT world are going to have to fight our way out day by day, in spite of the risk-averse surroundings.

But I digress.  Sweet dreams! :)

Wednesday, March 12, 2014

Walking the Walk

Quick post before I unplug and go comatose for the evening:  Tomorrow, when you get to work (or if you're in a different time zone and it's daylight right now) try this on:

  1. Write down all of the key functions your IT group performs.  AD accounts, software deployment, patching, server provisioning, backups, storage management, cloud integration, etc. whatever.  List them out.
  2. Identify the role(s) which relate to each of them:  Account Managers, App Packagers, App Deployers, Server Managers, Cloud Administrators, etc. whatever.
  3. Assign actual names to those roles.
  4. Compare that mapping with reality.  Grade yourself on a 100 point scale by checking off how many roles/people are actually assigned accordingly with what you are already doing today.

Let me know your score, what scale your environment is (small, medium, ginormous, etc.) and which country you're based out of.  Just curious how we all rate ourselves.


Monday, March 3, 2014

Dastardly Dissections: Recipe for Cooking a Web-Based SCCM "ReRun Advertisement" Tool

If you are familiar with Microsoft System Center Configuration Manager, you've probably heard of the infamous "Right-Click Tools".  Maybe the "Client Center" as well.  There are quite a few such add-on or enhancement applications out there ranging from MMC extensions to HTA scripts to Powershell cmdlets and so on.  The IT world is increasingly becoming a big Lego kit.  Let's rummage through some bins and build something!

This post is just a slightly different spin on the typical "how-to" article concept, and comes with a slight twist:  I'm not giving the code away.  I find it more fun to follow a treasure map and find the goodies at the end.  So I will guide you through the process and you can connect the dots.  If you're half-way decent at scripting and know how to make a table in a database you should be fine.  If not, there are plenty of web sites you can scour to get help.

It's sleeting and snowing outside today and will continue through the night.  Yesterday, it was 68 F and I was on a bike ride in shorts and a t-shirt.  I still have a light sunburn to remind me it wasn't a dream.  After a glass of wine I'm ready to get all verbose on yo ass, so you've been warned.  Enjoy. :)

Since most of Configuration Manager is accessible, and manageable, from "under-the-hood" using command-line and API features, it makes it really nice to build a "kit" to suit your own needs.  This is just one such "kit".  Apologies for the "icky-looking" formatting, but Google seems to have put Blogger on hold while it focuses on Drive updates.  Hopefully they'll come around to improving Blogger editing tools soon.

The Goal

To be able to run a Configuration Manager Advertisement on a remote computer at-will, using a web browser as the interface.  This makes it possible to access and use the capability from anywhere you have a web browser and sufficient permissions to invoke the process.  More on that last part later.  The benefits are slim, but include not having to install anything on any computer where you wish to leverage this feature.  You can remote in from home and use it.  Open it on your phone or tablet, and use it, and so on.

The Ingredients

  • System Center Configuration Manager 2007 or 2012
  • An Active Directory environment
  • A computer running Windows 7, 8.x, Server 2008 R2 or Server 2012 (or higher)
  • A database (preferably SQL Server, but SQL Express will do just fine)
  • A little scripting
  • A little time and coffee

The Process

The way a "re-run" request works is actually a bit odd, but it makes sense once you think it through. Basically, an Advertisement has properties you can configure regarding re-run behavior, but they can be overridden in certain situations.  In short, you capture the current "rerunBehavior" setting, replace it with "always" (as in always let it be re-run), run it again, and then set it back to the original setting.

This requires you to have an account within the SCCM environment (site) that allows you to create and/or modify Advertisements and their settings.  If not, stop right here and take care of that or switch into a lab environment where you have more control over things. Actually, I should say here: Do this in a lab or "non-production" environment before you try it in the real production environment.  Okay, back to the cooking show...

The Building Blocks - Part 1: The Database Side

  1. Create a new Database and create a Table inside of it.  I chose the name "SCCMTools" for the database name, and "ClientActions" for the table name.  If you already have a database to play with (hopefully SQL Server or SQL Server Express), you can simply add a new table (see step 2).
  2. The Table should have some basic fields.  I've suggested a few below:
    • ID (to identify the unique row.  I prefer making this an Integer value, assigning it as the primary key, and giving it an IDENTITY(1,1) setting, akin to the old MS-Access "autonumber" fields. This should be NOT NULL of course)
    • ComputerName (to identify the computer on which the advertisement will be launched.  This can be a Varchar or nVarchar data type, but only has to be as long as your longest client computer name.  Generally, the default of 50 is fine.  This should be NOT NULL)
    • AdvID (to identify the Configuration Manager Advertisement ID.  This can also be a Varchar or nVarchar value and should be NOT NULL.  The length is always going to be 8 chars (3-char Site Code + 5 chars), well under 12, but you never know)
    • DateAdded (to record when you submitted the request.  This should be a SmallDateTime value, but that's up to you.  NOT NULL also)
    • AddedBy (to record who added it; in case you share this with others. In most cases this will contain a user account name (aka sAMAccountName value). This is an optional field, but should be Varchar or nVarchar and NOT NULL if you're going to use it at all).
    • ResultCode (to record the exit code or result value from your attempt to re-run the Advertisement on each computer.  This should allow NULL and could be an Integer value).
    • DateProcessed (to record when the request was actually completed.  Since this won't contain a value until the request is processed, this can be left as a NULL column)
  3. You can add other columns/fields as you prefer.  The list above is only a suggestion, not a mandatory requirement or anything.  Some other columns you might consider include "Comment", or "Site" or "Group" or whatever.  Be creative, but be careful to not over-complicate it too soon as well.
  4. Grant permissions to the Database and the Table for access by whomever you will allow to use it.  That same account will be used for setting up the web interface later on.  You can (and probably should) use a Security Group instead of an individual user, or you could use a SQL user account and secure password, but that's entirely up to you.  Regardless, the account, group or role assignment you use should have rights to the table for select, insert, delete and update.
  5. After step 4 I strongly recommend you test the connectivity by using a small script.  First, shove some sample data into the table, it doesn't matter what as long as it makes sense.  Then use the script to run a "select * from whatever" query and see if you get anything back.  Make sure you consider the security "context" under which the script is running, as well as whether you employ internal credentials within the script itself.  I hope that makes sense.  If not, drink up and it'll either make sense or you'll pass out and forget what the question was.
As a sample, assuming your SCCM Site code is "ABC" and the remote computer which has already run a given advertisement is named "12345DT", your table row might look like this:

ID: 1
ComputerName: 12345DT
AdvID: ABC0012D
DateAdded: 2014-03-03 20:07:01
AddedBy: dstein
DateProcessed: (NULL)
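Assuming the suggested names from steps 1 and 2 above, the table might be created with T-SQL something like this (a sketch only; adjust names, types and lengths to taste):

```sql
CREATE TABLE dbo.ClientActions (
    ID            INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- unique row, "autonumber" style
    ComputerName  VARCHAR(50)       NOT NULL,              -- target client computer
    AdvID         VARCHAR(12)       NOT NULL,              -- Advertisement ID (Site Code + 5 chars)
    DateAdded     SMALLDATETIME     NOT NULL DEFAULT GETDATE(),
    AddedBy       VARCHAR(50)       NOT NULL,              -- who submitted the request
    ResultCode    INT               NULL,                  -- exit code from the re-run attempt
    DateProcessed SMALLDATETIME     NULL                   -- NULL = still pending
);

-- Shove in a sample row matching the example above (handy for step 5's connectivity test)
INSERT INTO dbo.ClientActions (ComputerName, AdvID, AddedBy)
VALUES ('12345DT', 'ABC0012D', 'dstein');
```

The DEFAULT on DateAdded just saves the web form from having to supply a timestamp; drop it if you'd rather pass the value explicitly.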

Part 2 - The Electric Web Side Boogaloo

This is where you put some rubber to the electronic pavement.  You can build this with whatever web platform/language/toolkit you prefer.  The only basic needs are that it must support interfacing with the database and the Configuration Manager site Management Point server via WMI/SWBEM scripting.  If you are comfortable with .NET and want to use the framework tools to connect and interact with it, have a blast.  It doesn't matter as long as it works for you.

I would recommend Visual Studio Web Express, or Visual Studio (full-blown) if you have access to it.  However, if Java, PHP or Ruby is your thing, go for it.

The web code is simple, and all it really needs to provide is a few files/pages as follows:
  • A "home" page of some sort (so you can put a picture of your cat or dog and a spiffy looking logo thing), and include links to...
  • A list / table report that shows current table entries.  This is important to see what has already been completed and what is still waiting to be run.
  • A "new request" form for submitting, well, a new request.  It should include a box (or even better: a drop-down list) to specify the target computer (or computers), as well as a box (or still even more better: a drop-down list) to specify the Advertisement to re-run on the selected computer(s).
  • Optional pages/files for inserting the request records, or deleting pending requests (which haven't yet been processed, obviously).
  • Feel free to go nuts.  Maybe add files to truncate or clear table entries, batch update rows, etc.  It's your baby, raise it however you want.
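For the list/table report page, the query can be as simple as this (a sketch, using the table and column names suggested earlier; pending requests float to the top):

```sql
-- Show pending requests first, then completed ones, newest first within each group
SELECT ID, ComputerName, AdvID, DateAdded, AddedBy, ResultCode, DateProcessed
FROM dbo.ClientActions
ORDER BY CASE WHEN DateProcessed IS NULL THEN 0 ELSE 1 END, DateAdded DESC;
```

Bind that to whatever grid or table control your web toolkit provides and you have the "what's waiting vs. what's done" view.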

Part 3 - The 12-Cylinder Engine with NOS

Okay, that's a little excessive, but this is really where it all "happens".

This is actually a combination of two things:  A script that performs the actual operation, and a scheduled task that repeatedly checks the database for pending requests, and then runs those pending requests through the script process. The actual process is as follows, and it's not complicated, trust me:
  1. Query the database table for rows, using something like "Select * from ClientActions WHERE DateProcessed IS NULL".
  2. Loop through the recordset (do-while or do-until, etc.), and for each record grab the "ComputerName" and "AdvID" values.
  3. Pass those two values to the script, and run that script under a context which has access into both the SCCM site and the remote client desktop.  It needs local admin rights (or something suitable enough to allow it to invoke COM operations remotely) and that means you may need to tinker with firewall stuff, but you already knew that, right?  Mmmhmmm. Okay.
  4. The script connects to the remote client computer, does its thing, and if the exit code (return/result value) is 0 (zero, or "success", etc.) updates the matching table row to enter the current Date/Time into the "DateProcessed" column.
  5. Capturing the exit or result code is very important.  This is why I suggested the "ResultCode" column earlier.  It allows you to capture the successes and failures as well.  If you just have a "DateProcessed" value, you can't really distinguish successes from failures.
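Steps 4 and 5 above boil down to one UPDATE per processed row.  A sketch, where @ID and @ExitCode are placeholders for the values your script captured (per step 4, DateProcessed is only stamped on success, so failed rows stay "pending"; stamp it unconditionally if you don't want retries):

```sql
-- Record the outcome for the row that was just processed
UPDATE dbo.ClientActions
SET ResultCode = @ExitCode,
    DateProcessed = CASE WHEN @ExitCode = 0 THEN GETDATE() ELSE DateProcessed END
WHERE ID = @ID;
```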
If you need some pointers for making the script processor part, here are just a few links that I found by entering "re-run SCCM advertisement script" in a Google or Bing search:

Baking Time - 2 - 4 Hours

Okay, sooner if your brain is a microwave and you've consumed some caffeine.

Fetch a computer from your SCCM environment and get a list of the Advertisements it has already run.  Identify one of the Advertisements which can be re-run on it without causing any harm to the computer.  You might want to make a fake or shell package that just runs a script to create a log file on the client.  Something to demonstrate it was run once and that will leave a trace when it's run again afterwards.

Using the computer name and the Advertisement ID value, enter that information into your database table using your web form.  It's okay if you just directly edit the table using something like SQL Server Management Studio, or the database tools in Visual Studio.  It's just nice to test it all together if you can.

Place the script in a folder on a computer where you intend to host it for future use.  Use that computer to configure the scheduled task, along with the credentials to run the script itself, and save it.

Run the scheduled task.  This will help you verify the following:
  • The script can connect to the database and obtain records
  • The script can connect to the remote client computer
  • The script can modify the Advertisement properties for re-run behavior
  • The script can invoke the SCCM client agent actions to re-run the Advertisement
  • The script can update the matching rows in the database table to indicate completion
  • The script can do all of this under the security context by which the task scheduler is invoking it
Now, it's time for dessert.  Post a comment to let me know how this works out for you.

Sunday, March 2, 2014

The Professional References Dilemma

If you've held a job for more than a few years, especially the kind where you had to write (or borrow) a resume to qualify for an interview, you've probably had to list some "references" as well.

A professional "reference" is supposed to be someone whom you've known, professionally, long enough to tell a prospective employer good things about you.  Things like how well you work with others, your skill set, types of projects or operational work, and so on.

A professional reference is NOT someone you worked in the same office with, but didn't interact with every day. Nor is it one of your drinking buddies.

I'm not sure why, but I've been asked to give permission to list me as a reference for more than a dozen current and former colleagues.  I say that because my professional career path hasn't been the shiniest example for others to follow.  In some ways I consider myself the guy walking backwards through a minefield, giving out advice on how to detect mines.  Yeah, I saw that episode of Benny Hill.

The problem comes into play when someone you're friendly with, maybe really good buddies with, asks you to be a reference for a job they're applying for; but you can't honestly say you worked directly alongside this person enough to vouch for every skill the new job is asking for.  Maybe you didn't work with this person directly at all.  The risk you take is that you may say "Sure, this guy/girl is an awesome ___. I'd hire them in a heartbeat."  Then they get hired and things fall apart.  Now your reference is devalued by having vouched for someone that just didn't cut it for them.

Granted, this is a risk for any such circumstance, but you greatly reduce that risk when you stick to your guns and only agree to vouch for people you truly KNOW about on a professional level.

This spills over into LinkedIn skill recommendations as well.  I can't count how many people have tagged me for a skill I barely know.  VMware ESX?  I played with it.  SharePoint? I can install it, configure it, build some sites and libraries and post pictures of cute animals.  After I started seeing notifications that so-and-so tagged me as an expert on these things, I began to go back and remove those items which I consider myself to be marginally skilled at best.

To be fair, it's not really something I can blame on my LinkedIn network, it's just everyone trying to help each other out, and that's very much appreciated.  But it also puts them at risk of damaging their street cred by saying I'm a pro at something that I'm really not that well-versed in.  To mitigate the chances of hurting their good intentions, I'm making the effort to clean up my skills list.  I think it's a good idea.

Back to ironing shirts for Monday staff meetings.  Cheers!