Wednesday, April 23, 2014

These are a Few of My Favorite Views (SQL with Fries and a Pepsi)

If you use System Center Configuration Manager 2007 or 2012, and you like to drink beer, play with firearms and flammable substances, and poke around in things like SQL Server Management Studio, well, I have no clue what you're thinking about.  Just kidding.  There are three places I can promise you I can spend a lot of time, calmly perusing the landscape:

  • A Lowe's or Home Depot store (preferably on a slow weeknight)
  • The beer section of a store like Total Wine
  • SQL Server Management Studio and a System Center Configuration Manager site database
The first two are not at all surprising.  Almost any guy who lives near a big hardware store or beer store is in the same boat (apologies to any recovering alcoholics out there).  But the third is my personal weakness.  In days long past, it would have been supplanted by a good Lego kit, a stack of Marvel comic books, or a pile of Revell model kits or Estes rocket kits.  These days, having crossed the 50-year milestone, it's become more geeky.  Sure, I love me some good woodworking projects (I just finished a badass bench swing in my backyard, all done by hand, no power tools. p-shaaaa!)

My favorite database views?
  • v_Advertisement
  • v_Collection
  • v_Package
  • v_GS_Computer_System
  • v_R_System
  • v_GS_Installed_Software_Categorized
  • v_GS_System_Enclosure
  • Any of the v_CM_COLL_XXXXXXXX collection views
Of course, I really dig poking around the site-related views (boundaries, discoveries, metering, etc.), as well as the operating system views, x86 memory stuff, collected files, software products and so on.

The saddest part of this?  I rattled those off from my pointy head, without being anywhere near the actual tables or views.  That's how deeply they've become ingrained in my squishy skull.

If you're as twisted as I am, open SQL Server Management Studio, right-click on the Views node and create a new View.  Drag in some of the views named above, link them on ResourceID (or MachineID for some), and start exploring what kinds of cool things you can build on your own.  Data is like Legos to me now.
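If you'd like to see what that ResourceID linking actually does without a site database handy, here's a toy model using Python's sqlite3 module.  The table names mimic two of the real views, but the columns and rows here are made up purely for illustration:

```python
import sqlite3

# Toy model only: table names mimic two real ConfigMgr views,
# but the schema and data are invented for this demo.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE v_R_System (ResourceID INTEGER, Name0 TEXT);
CREATE TABLE v_GS_SYSTEM_ENCLOSURE (ResourceID INTEGER, ChassisTypes0 INTEGER);
INSERT INTO v_R_System VALUES (100, 'WS001'), (101, 'LT002');
INSERT INTO v_GS_SYSTEM_ENCLOSURE VALUES (100, 3), (101, 9);
""")

# The same kind of join you'd build in the SSMS view designer: link on ResourceID
rows = con.execute("""
SELECT s.Name0, e.ChassisTypes0
FROM v_R_System AS s
INNER JOIN v_GS_SYSTEM_ENCLOSURE AS e ON s.ResourceID = e.ResourceID
ORDER BY s.Name0
""").fetchall()
print(rows)  # [('LT002', 9), ('WS001', 3)]
```

The real views obviously carry far more columns, but the join itself works exactly like this.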

Sunday, April 20, 2014

IT Catastrophes: Triage and Compression with Fries and Coke

Triage (noun)
(b) the sorting of patients (as in an emergency room) according to the urgency of their need for care
Compression (noun)
The state of being compressed (i.e. reduced in size or volume, as by pressure)
(source: Merriam-Webster online dictionary)
This is another one of my silly-brained IT monologues about subjects which are rarely discussed.

What I'm talking about is a loose comparison and contrast with these two words as they relate to medical and technology fields.  It is however a very real subject (or subjects) for those of us who occasionally deal with critical outages, especially those which involve things like:

  • Highly-Available Hosting Services (think: Google, Microsoft, Facebook, etc.)
  • Mission Critical Systems (think: Defense, Lifesaving, etc.)
  • Service Level Agreements (the dreaded SLAs)
It's kind of funny how most businesses feel their operations are "mission critical" or "highly-available", when they're being subjective.  From an objective view however, it's not always as "critical" when things go "down" for a few minutes; even a few hours.

By the way: compression, as it pertains to this article, refers to the compression of the window of time in which you have to operate: the time between a failure and sufficient restoration of services.

When dealing with a system outage in a truly "critical" environment, the first steps are pretty much the same as what an Emergency Medical Technician (EMT) would have to consider:
  1. What exactly is not working?
  2. How serious is the impact?
  3. What is known about what led to this outage?
  4. How long has it been down?
  5. How much time is left?
You were probably thinking of the Who, What, Where, When, Why and How sequence.  I kind of tripped you up with two What's and three How's.  (Technically, #4 could be a "when", and #2 could be a "who" or "where", but whatever).  Let's move along.

With regards to a human, the general rule of thumb is 4-6 minutes, total.  That's about how long the brain can go without oxygen and still recover.  Chest-compression CPR is usually the first course of action to sustain blood flow, keeping the remaining oxygen-rich blood reserves moving through the brain.  Enough pseudo-medical blabbering.  The main point is that there is a "first course of action" to resort to in most cases.

What aspects are shared between a medical outage and an IT system outage?
  • There are measurable limits to assessing what can be saved and how
  • There are identifiable considerations with regards to impact on various courses of action
  • Techniques can be developed and stored for more efficient use when needed
  • Steps can be taken to identify probable risks and apply risk mitigation
With regards to a system-wide outage, the general rule of thumb is not so clear-cut as the 4-6 minute rule.  It truly varies by what the system does and who (or what) it supports.  Consider the following two scenarios:

Scenario 1

The interplanetary asteroid-tracking system you maintain is monitoring a projectile traveling at extremely high velocity toward planet Earth.  The system "goes down" during a window of time in which it would be able to assess a specific degree of variation in its trajectory.  The possible margin of error from the last known projected path could have it hit the Earth, or miss it by a few hundred miles.  The sooner the system is back online, the sooner a more precise forecast can be derived.

Every hour the system is offline, the margin of error could potentially be re-factored (and reduced) by a considerable amount, possibly ruling out a direct hit.  The best estimate of a direct impact places the date and time somewhere around one year from right now.  Your advisers state that it would require at least six months to prepare and launch an interceptor vehicle in time to deflect or divert the projectile away from a direct Earth impact.

Scenario 2

Your order-tracking system is down and customers are unable to place orders for new sucky shoes.  Your financial manager estimates that during this particular period of the year, using past projections combined with figures collected up until the outage, every hour the system is offline you are losing $500,000 of potential sales revenue.  The system has reportedly been offline for two hours.  So far, that's $1 million.

Which of these scenarios is more critical? 

Answer: It depends

What are the takeaways from each scenario?
  • How long do you have to restore operations before things get really bad?
  • Having the time window defined, what options can you consider to diagnose and restore services?
  • How prepared are you with regards to the outage at hand?
  • What resources are at your disposal, for how long, and how soon?
In the first scenario, you have roughly six months to get things going.  Odds would generally be good that you can restore services sooner than that, but what if the outage was caused by an earthquake that decimated your entire main data center?  Ouch.

In the second scenario, the margin would depend on the objective scale of revenue your business could withstand losing.  If you're Google, a million dollar outage might be bad, but not catastrophic.  If you're a much smaller business, it could wipe you out entirely.

What's really most important (besides the questions about what systems are down, why, when and how) is knowing what the "limits" are.  Remember the 4-6 minutes rule?  SLAs are obviously important, but an SLA is like a life insurance policy; not like a record of discussion between the EMT in the ambulance with the attending physician back at the hospital ER.  One is prescriptive and didactic.  The other is matter-of-fact, holy shit, no time to f*** around.

QUESTION:  When was the last time you or your organization sat down and clearly defined what losses it can absorb and where the line exists whereby you would have to consider filing for bankruptcy?

Is your IT infrastructure REALLY critical to the business, or just really important?  In other words: could your business continue to operate at ANY level without the system in operation?

Forget all the confidence you have in your DR capabilities for just a minute.  Imagine if ALL of your incredibly awesome risk avoidance preparation were to fail.  How long could you last as a business?  At what point would you lose your job?  At what point would your department, division or unit fail?  At what point would the organization fail?  Or do you think it's fail-proof?

Friday, April 11, 2014

Code Catastrophes

Dave's top five (5) reasons your code will crash and burn:

1. You didn't close a string value in matching quotes.

2. You didn't close a block of code in matching curly braces or parentheses.

3. You forgot a comma somewhere.

4. You got your variable names crossed up somewhere.

5. You lost sight of the end game and became engulfed in one chunk of code.

Config Manager Contusions: Finding Advertisements Pointed at Active Directory OUs

If you have a large "enterprise" environment, which has System Center Configuration Manager in it, and it's been in place for a long time, you could have a rather wide sprawling infrastructure, replete with Collections that point to things you completely forgot about.  Case in point: Collections based on Query rules which point at AD Organizational Units (OU).

So, when one of your engineers asks the respectable question, "Do we push any apps at computers in a particular OU?", you're left scratching your head about how to compile one layer of potential Group Policy software installations with another layer of potential Configuration Manager Advertisements.  Say hello to one more Rubik's cube of the IT world.

Thankfully, Microsoft spent just enough time on their SQL back-end for Config Manager to make a lot of chores surprisingly simple with just a few minutes poking around in SQL Management Studio.

Here's one example to demonstrate how to identify which Collections are built on query-rules that point at AD OUs, and (AND) which have Advertisements pointed at them...

[T-SQL Query]

SELECT
    v_Advertisement.AdvertisementName,
    v_Collection.Name AS CollectionName,
    v_CollectionRuleQuery.QueryExpression  -- column list is just an example set
FROM v_CollectionRuleQuery INNER JOIN
    v_Collection ON 
    v_CollectionRuleQuery.CollectionID = v_Collection.CollectionID 
INNER JOIN v_Advertisement ON 
    v_Collection.CollectionID = v_Advertisement.CollectionID
WHERE (v_CollectionRuleQuery.QueryExpression LIKE '%SYSTEM_OU_NAME%')
ORDER BY AdvertisementName

[/T-SQL Query]

Once you start scratching the SQL surface, it's hard to stop.

Monday, March 31, 2014

RoboCopy RoboCopy Where For Art Thou RoboCopy

Comparing exit codes with exit codes...

[vbscript: windows 8.1 64-bit]

Set objShell = CreateObject("Wscript.Shell")

' Mirror newer files from server1 to server2, skipping .db and .bak files
cmdstr = "robocopy \\server1\d$\path1 \\server2\f$\path1 /xo /s /xf *.db *.bak"

' Run() returns the process exit code when the third argument (bWaitOnReturn) is True
result = objShell.Run(cmdstr, 1, True)
exitcode = err.Number   ' Err.Number is NOT the process exit code; it stays 0 here

wscript.echo "result is " & result
wscript.echo "exit code is " & exitcode


If the source or target path is bogus (e.g. does not exist), result = 16 (robocopy's fatal-error code), but if both are valid, result = 2.  In every case, exitcode = 0.

Why fix something when you can easily order a battalion of coders to just move on to build a new fort?

Tuesday, March 25, 2014

Factors of Refactoring

Even if you've never written a line of program or script code in your life, you know what refactoring is.  You may not have known it, but you knew it.  Confused yet?  Let me help.

Refactoring is basically "refinement" or "optimization" or "streamlining".

Let's try this analogy on:

You build a bookcase from scraps of wood and some hand tools in your garage.  After the first one, you decide you want to build another.  But this time you modify the design to change some of the parts and how they're joined together.  Maybe you change it so there are fewer parts overall.  The first one was assembled from eight (8) pieces of wood and sixty-four (64) screws; the second one from six (6) pieces and thirty-two (32) screws.  After a few more, you realize you can rearrange the cut patterns on a sheet of raw plywood to get more parts from each sheet with less wasted scrap.  Eventually, you have a sturdier product, requiring less material and lower cost to build each one.

Or how about this:

You cook meals and find that you tend to make mostly Italian recipes.  After several dinners you realize that by reorganizing the pantry and spice rack you can more easily get the ingredients to the stove top in less time and with less confusion.  As time goes on, you move things around: pot racks, islands, roller carts, etc.  After a month of "trial and error" you now have the most streamlined kitchen setup imaginable.  Everything from the cookware, cooking area and ingredients to the paths you traverse moving from one station to another is now as good as it can be.  Eventually, you realize you've reduced the time and effort required to cook most meals by 50% and lowered your costs as well.

Or how about this:

You wrote a script, and it does a lot of separate, but sequentially-dependent tasks in order to arrive at a desired result.  The first draft contains 12 distinct custom function blocks, and repeated "for" or "while" loops over the same set of data.  The second draft contains 10 distinct custom function blocks, and the repeated iteration blocks are now calls to one of the custom functions.  By the third draft, the total number of code lines has been reduced from 1034 to 811, and then down to 643.

Going back to look for ways to accomplish the same task in fewer steps.  With better quality results.  Improved reliability, predictability, dependability.  All those other -ilities.  Reusing things instead of making new ones all the time.  Getting more out of the same effort than you did before.

This is refactoring.

Case in Point:  Program Files Example 101

Phase 1 - two files, one does option "A" and the other option "B"
Phase 2 - files are combined and use input parameters (i.e. "switches") to drive the code workflow.
Phase 3 - instead of distinct blocks of code, sections are combined into calls to one function with input params.
Phase 4 - instead of forked code paths in each function, the function now passes the input params through to drive external interfaces via concatenated references (e.g. you call one SQL view or another by passing in the param as the view name itself)
Phase 5 - SQL view is replaced with a stored procedure or db function that builds expression with input param directly.
Phase 6 - your work day now goes from 6 hours of keyboard banging and eating at your desk, to 4 hours of enjoyable keystrokes and going out for lunch.
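If you're more of a code person than a kitchen person, here's Phases 1 through 3 in miniature.  This is illustrative Python, not from any real project: two near-duplicate blocks collapse into one function driven by an input parameter.

```python
# Phase 1 (sketch): two near-duplicate blocks, one per option -- hypothetical example
def report_a(items):
    total = 0
    for i in items:
        total += i * 2
    return total

def report_b(items):
    total = 0
    for i in items:
        total += i * 3
    return total

# Phase 2/3 (sketch): one function, the option becomes an input parameter,
# and the repeated loop becomes a single reusable expression
def report(items, factor):
    return sum(i * factor for i in items)

data = [1, 2, 3]
assert report(data, 2) == report_a(data) == 12
assert report(data, 3) == report_b(data) == 18
```

Same results, fewer moving parts, and one place to fix when something changes.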

For those of you who already knew this and are staring at this page like you're witnessing a cat eating a pig, whole, I apologize for your agony.

Thursday, March 20, 2014

Stick Shift or Automatic: Software Deployment by the Numbers

No.  I'm not trying to promote sales of any books/ebooks, not even my own (cough-cough).  I am about to dive into a murky subject that many Windows-environment Systems Administrators and Systems Engineers have a very tough time understanding.  And, as if that wasn't enough, I would venture to bet that very, very, veerrrrrrrry few Business Analysts, Project Managers and executive management folks have even a basic grasp of this subject.

The irony of this is that this is about as fundamental to any medium-to-large scale IT environment as bricks are to building a big house.  But even a small business can benefit from this, so don't scoff and walk away just yet.

What am I talking about?...

Manual versus Automated Software Product installation, and Repackaging.

Two things which fit together like peas and carrots.  Or hookers and politicians.  Or Florida and hurricanes.

Yes.  It's 2014, and from what I can tell there is a frightening number of so-called "technology professionals" who sincerely believe that there is little or no difference in terms of cost or quality between these two approaches.  These two diametrically-opposed approaches, that is.  In fact, many think that manual installations are cheaper, even when dozens of installations are involved per product.  I am not joking, nor am I exaggerating.  Please read on.

Most of them, if fed enough beer, or Xanax, would bet their retirement interest on the assumption that the differences between these two are as close as driving a vehicle with stick-shift versus an automatic transmission.  That this is a monolithic, linear-scale, pound-for-pound comparison.

It's a good thing for them that the retirement interest on their fortunes is almost invisible to their overall balance sheet.  A few zeros can get lost in all that green I'm sure.  When you dig into the real numbers, the comparison is about as close as a boxing match between Mike Tyson on his "best day ever" and Richard Simmons after getting a root canal.

All kidding aside, let's do some math. mmkay?

Quasi-Scientific Analysis

Let's say product "Fubar 2014", from a well-known vendor, is required by 500 of your employees in order to "do their assigned job duties".  You have a minimum wage minion whip out a stopwatch and begin timing the installation on the first five computers.  The minion tallies up the results and hands it to you. It goes something like this:

  1. Technician walks over to computer "123" in building 42, up on the 3rd floor, in room 112 and sits down. Time spent getting there by foot/bicycle/car/private jet/yacht or teleporter is, on average, 5 minutes.
  2. Technician then logs on and opens Windows Explorer. 2 minutes (waits for initial profile setup)
  3. Navigates to central file server share on the network (Active Directory domain environment) to locate the folder containing Fubar 2014 setup files and related files.  1 minute.
  4. Navigates to "prereqs" subfolder to install individual products which are required before installing Fubar 2014:  Java  Runtime 1.5 (vendor says it can't work with 1.6 or later), then Apple Quicktime 7.1, Adobe Flash Player 11.5 (Fubar 2014 won't work with version 12), and a few other items.  10 minutes.
  5. Double-clicks on Fubar 2014 "setup.exe" file to launch main setup. Clicks Next on the Welcome page.
  6. Accepts default for installation target folder path, clicks Next
  7. Checks the EULA terms and enters a license product key.  Clicks Next.
  8. Waits for installation to complete. 8 minutes.
  9. Goes back into Windows Explorer and right-clicks on a folder under C:\Program Files\Fubar to open the Properties form.  Modifies the NTFS permissions to allow members of the local "Users" group to have Modify/Change permissions on that folder and all sub-folders.  This is required since the users do not have local Administrator rights, so UAC has been a problem.  This has been known to resolve the problem, so your tech goes ahead with the routine modification.  5 minutes.
  10. Tech goes into Windows Services and disables a service that Fubar 2014 uses to check for periodic updates, which users cannot install without elevated permissions, so this is a standard practice at your shop to disable it.  2 minutes
  11. Tech opens REGEDIT, navigates down to HKEY_LOCAL_MACHINE\Software\Fubar\Fubar 2014\Settings\ and changes the value of "autoupdate" from 1 to 0.  1 minute.
  12. Tech reboots computer, and waits for login screen to log back on.  2 minutes.
  13. Tech logs back on (1 minute or less) and launches Fubar 2014 to confirm it works.  While still opened, Tech navigates into settings to change the option (Tools / Options / Data) to set the "Default library location" to a central UNC server path where all the users share templates and common items to maintain standards.  2 minutes.
  14. Tech closes Fubar 2014 and logs off.
  15. Tech goes on to next location and repeats this process.
Paying the Bill

If you kept track of the time spent above, that's 5+2+1+10+8+5+2+1+2+2 or 38 minutes.  That's without ANY interruptions or unexpected problems.  And that's assuming the computers are relatively new and performing well.

In reality, from tests I have witnessed over the past 5 years alone, in various enterprise environments from 5,000 to 50,000 computers, the average time to perform an installation of this magnitude is roughly between 35 and 50 minutes.  

When performed during business hours with people around in close proximity, the times averaged 45 minutes to 1 hour.

When additional problems had to be resolved, such as missing updates, recovering disk space, removing conflicting components, that range increased to around 1 hour 20 minutes to 1 hour 50 minutes.  

I haven't even mentioned:
  • Time spent deactivating old licenses
  • Time spent activating new licenses
  • Time spent dealing with device drivers
  • Time spent dealing with custom network interface settings
  • Time spent on the phone dealing with vendor support:
    • Large vendor: waiting on line, listening to 70's pop music, interlaced with endless repeats of ads for their other products, like their "new cloud services".  awesome.
    • Small vendor: waiting for guy (company owner/programmer/tester/web admin/support rep) to move his cat off the desk so he can flip through his paper stack to find your purchase order.
  • Impact on end-users while they wait for the tech to do their work
  • Impact on production from unexpected conflicts with other line-of-business products which are only discovered after the installation because there was no lead-time testing afforded.
In situations where a previous version had to first be uninstalled before performing a new install of the later version (usually because the vendor didn't want to take the time to handle this within their new installation package) the time ranges increase to around 2 hours to 2 hours 30 minutes.

Simple:  35 - 50 minutes
Complex:  120 - 150 minutes

In beancounter English: that's a range of roughly 1 hour to 2-1/2 hours.

Repeat this 500 times and you get anywhere from roughly 316 hours (simple) to 1,125 hours (pain in the ass).

Multiply that by technician labor of, say, $9/hour (you're a cheap bastard, after all), and that equates to roughly $2,850 to $10,125 of labor.  For ONE software installation.
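If you want to double-check my beancounter math, here's a quick sketch (same assumed numbers as above: 500 seats, 38 minutes per simple install, roughly 135 minutes per complex install, $9/hour):

```python
# Sanity-check the manual-install math (assumed numbers from the scenario above)
seats = 500
rate = 9            # dollars per hour for the manual-install tech

simple_hours = seats * 38 / 60     # 38 minutes per seat
complex_hours = seats * 135 / 60   # midpoint of the 120-150 minute range

print(int(simple_hours), int(complex_hours))                     # 316 1125
print(round(simple_hours * rate), round(complex_hours * rate))   # 2850 10125
```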

I'd guess you probably have more than a few products that would be handled this same way across your organization.

Are you starting to see where this is going yet?

Sanity Check Time

Now, let's crank this puppy through ONE cycle of repackaging effort and see how this spews out the other end of the meat grinder.
  1. Software Package Engineer (hereinafter SPE) opens a CMD console within a guest virtual machine (VM) running inside of VMware Workstation or Microsoft Hyper-V (take your pick).
  2. Navigates to folder where Fubar files are stored.
  3. Launches setup.exe -r and completes a normal setup process.
  4. SPE grabs the resulting setup.iss file from C:\Windows and copies it into new folder along with the original setup files.  5 minutes total by now.
  5. SPE opens a text/code editor and loads a template script to enter some lines to handle checks for prerequisites like JRE, Silverlight, Quicktime and so forth.
  6. SPE enters code to invoke the setup.exe with setup.iss and redirect the output to a new log file.  Total of 15 minutes by now.
  7. SPE saves script and puts all the files into the new deployment source folder.  SPE launches a separate VM, which is configured to match the configuration of the computers in use by employees who will be getting the installation.  SPE runs the install using a command shell to ensure it runs "silent" and requires no user interaction whatsoever.  Total runtime, including launching the VM and logging on is now around 30 minutes.
  8. SPE emails or IM's the designated customer SME (that's subject-matter-expert) who was nominated to be the "test user" and asks them to log into the VM using Remote Desktop and kick the tires.  Time spent contacting the customer about 1 minute.
  9. SPE moves on to work on other packages or tasks while waiting for customer to respond and attempt the testing (parlance: User Acceptance Testing, or "UAT")  No time expended by SPE during this period by the way.
  10. Customer gives the package "two thumbs-up!" and the SPE moves it into staging for production deployment.  SPE creates a new "Application" in System Center Configuration Manager 2012, creates a resource Collection with the computers to be targeted, and assigns the Application to the Collection using an Advertisement.  10 minutes (he's drinking decaf this morning)
  11. Advertisement is scheduled to run after hours, avoiding impact on production time against the customer staff who will receive the new installation.  SPE does not have to wait for the installation because it is scheduled to run on its own, so he/she checks on it the next morning.
Total time spent:  5+15+30+1+10 = 61 minutes.

I realize I said "he" a lot, but "she" could do just as well obviously, so that's irrelevant.

Things I didn't include:
  • UAT problems resulting in having to go back and make adjustments and retesting
  • Pre-flight deployments to verify conflicts in production on a limited subset of computers.
  • Next-day support calls for incidental one-offs like machines being offline or a service was stopped or an application was open and had a lock on a file that prevented the Advertisement from completing successfully.
  • Cats walking across keyboards and causing a BSOD.
  • Who knows what else.
Taking those things into account, the ranges can jump from 60-80 minutes for a simple scenario to 2 hours, for just a simple repackaging effort like the one Fubar 2014 involves.  

In the "real world" some products can be much more difficult to repackage and may consume days or weeks of development and testing in order to get a rock-solid package into production.  Those are rare, but even then, EVEN THEN, the savings when calculated across hundreds or thousands of computers, spread across multiple locations, states or continents, can be well worth the effort.  

Think of "mission critical" applications, where the time window to get them into production, with 99.999% success rate, is only an hour or two, end to end, over 20,000 or 30,000 computers.  That's not fiction.  There are industries where this is not uncommon, and they rely heavily on this methodology to ensure:
  • Highest probability of success
  • Consistent and predictable results
  • Minimized impact on production operations
  • Optimum transparency of all moving parts (think reporting and monitoring)
Steak Dinner Anyone?

So, this SPE makes $75,000 a year in USD, roughly $36/hour, and spent an hour building this simple package.  That's $36 to deploy one product to 500 computers over one evening without asking any users to step away from their computers during work hours.  

The cheapest scenario in the first example was $2,850.
The most expensive scenario in the latter example was $1,442.

Even if the SPE had to devote an entire week to one product (roughly 40 hours x $36.06/hour = $1,442), that's a LOT CHEAPER than a $9/hour tech running around to 500 computers, or 10 x $9/hour techs running around to 50 computers each.

That's Not All

Billy Mays homage:  Now, if you go with the repackaging and automated deployment scenario, you have a mechanism in place that does the following for you without ANY additional cost:
  • Provides automatic self-healing of deployments, to cover situations where a targeted computer is replaced or reimaged with a fresh Windows configuration.
  • Provides simple, effortless scalability for future growth.
  • Provides a robust auditing and reporting trail for accountability and compliance.
  • Provides fault tolerance
  • Provides coverage during non-production hours.
Still think that thumb drive is the way to go?  Hmmmm?

Dave's Top 10 Windows Command-Line Goodies

I'll admit it, I like geeky "Top 10" lists.  The irony of saying "Windows" and "command-line" in the same sentence also fits nicely with how my wobbly brain works, so I couldn't resist.  While scripting is obviously in a class of its own with regards to flexibility, scale and so on, the built-in tool set provided within Windows 8 (and Windows Server 2012) is still worthy of some healthy respect.

The commands listed below are some that I use quite often, and by that I mean pretty much every day. On a typical day I will have two CMD consoles open, a PowerShell ISE console and at least one or two instances of some code editor application like TextPad or Visual Studio Express.  Regardless, I will often jump back to one of my CMD consoles to do something that involves one of the items mentioned below.  Maybe there's a few in here you haven't tried yet.


1. CMDKEY

I use this for managing my credential mappings between my laptop, my home domain environment, my various "work" environments, and TS/RDP environments.  To list saved credentials, use "cmdkey /list".  You can narrow down the list to one target, such as a server you connect to by SMB or RDP named "fubar", by typing "cmdkey /list:fubar".  To add a new explicit mapping, such as connecting to server "fubar" from your tablet which is not part of the same domain or workgroup, use "cmdkey /add:fubar /user:dave /pass:p00rb@$taRd".  For more information about this command, type "cmdkey" and press Enter.

Example: save login credentials for domain server "serverXYZ" from your workgroup laptop...
cmdkey /add:serverXYZ /user:dave /pass:Ih@teP@$$wordZ


2. WINRS

This is another command I find extremely helpful.  There are a few times where it doesn't shine as brightly as I'd like, but that's rare.  One such example (and don't let this put you off) is invoking a utility like "SendSchedule.exe" on a remote box, even when the associated XML config file is in the appropriate location.  You just get nothing.  Again, one quirk is not worth tarnishing this powerful utility over.

What does it do?  Magical stuff.  That's what.  It's basically a remote shell wrapper, and lets you execute tasks on the remote computer as if you're on that remote computer.  So if you have a script, say "dosomething.ps1" sitting in the c:\stuff folder on serverXYZ, and you're on your laptop named "dingy8", you can open a command console (or PowerShell console) and type:

winrs -r:serverXYZ powershell.exe c:\stuff\dosomething.ps1

Then watch the results unfold as if you were logged onto serverXYZ and running the script directly.

There's way more to this command than I can possibly blurt out here, so try "winrs -?" to begin exploring.


3. PUSHD

This one has been around almost as long as me.  Same for its cousin: popd.  The pushd command creates a temporary "ad hoc" drive mapping to a specified share.  So if you just need a drive letter, under the context of whatever the command console is running as, just type pushd followed by the share path and bang! you have a Z: drive and it's the current drive as well.  So why does this do any better than "net use"?  Well, to disconnect a "net use" mapping you have to run "net use" again with some more keystrokes.  In some cases, you forget and leave the drive mapped even after repeated logins.  A pushd mapping only lasts until you either log off (or "sign out" in Windows 8 parlance), or use "popd" to release any pushd mappings.

Example: map a temp drive to share \\serverXYZ\stuff --> pushd \\serverXYZ\stuff

4. SC

The sc command is another golden-age oldie that provides command-line control over the Windows Services environment.  SC allows you to stop, start, create, delete, and modify Windows Services.  Nuff said.

Example: Check on the status of the WinRM service --> sc query winrm
Example: Stop the WinRM service -->  sc stop winrm
Example: Change WinRM startup to manual -->  sc config winrm start= demand

for more information, type "sc /?"


5. SCHTASKS

Like the sc command, this one is a counterpart to a common GUI tool.  This one provides management features for Scheduled Tasks (hence the abbreviated name: "SCHeduled TASKS").  You can create, delete, modify, list, export and import scheduled task jobs with incredible detail and control.  In many cases, it's easier to shove a command string using schtasks through a shell operation from within a script than to use the direct API alternatives like WMI and .NET, but it depends on your circumstances of course.

Example: List all scheduled tasks on remote computer "Abc123": schtasks /s abc123 /query
Example: Run task "doit" on remote computer "abc123":  schtasks /s abc123 /run /tn doit
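Creating and exporting tasks is where schtasks really earns its keep. A sketch along those lines (the task name, script path and remote computer are all hypothetical):

```bat
rem Create a daily 2 AM task on remote computer abc123 that runs a local script
schtasks /create /s abc123 /tn NightlyCleanup /tr "powershell.exe c:\stuff\cleanup.ps1" /sc daily /st 02:00

rem Export the task definition to XML for safekeeping or reuse elsewhere
schtasks /query /s abc123 /tn NightlyCleanup /xml > NightlyCleanup.xml

rem Remove the task without a confirmation prompt
schtasks /delete /s abc123 /tn NightlyCleanup /f
```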


6. SHUTDOWN

Ah, this is a most powerful, yet most simple command tool.  Almost as flexible as "psshutdown" and some other third-party tools, it lets you request or "force" a shutdown, logoff or restart of a remote computer, as well as the local computer.

Example: Restart remote computer "abc123" in 30 seconds --> shutdown -m \\abc123 -r -f -t 30
Example: Restart "abc123" with a custom display message to the current users...
shutdown -m \\abc123 -r -f -t 30 -c "IT rulers are kicking you minions off in 30 seconds!"
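And if you fire one off by mistake, a pending shutdown can be cancelled before the timer runs out (same hypothetical computer name):

```bat
rem Schedule a restart of abc123 in 120 seconds...
shutdown -m \\abc123 -r -f -t 120

rem ...then change your mind and abort it before the countdown expires
shutdown -a -m \\abc123
```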

7. REG

If you haven't heard of this command already, you might have guessed that it has something to do with the Registry.  Yep.  This cool gadget provides command-line capabilities for reading, writing, importing and exporting registry keys and values, and a little more as well.

Example: Display installed apps...
reg query hklm\software\microsoft\windows\currentversion\uninstall /s

Example: Import a .reg file
reg import regfile.reg

Example: Import a .reg file into 32-bit view on a 64-bit computer:
reg import regfile.reg /reg:32
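Reading and writing round-trip nicely too. A quick sketch using a scratch key under HKCU (the key and value names are made up, and HKCU is used so no admin rights are needed):

```bat
rem Write a test value, then read it back to confirm
reg add hkcu\Software\ScratchKey /v Demo /t REG_SZ /d "hello" /f
reg query hkcu\Software\ScratchKey /v Demo

rem Export the key to a .reg file (/y overwrites without prompting), then clean up
reg export hkcu\Software\ScratchKey scratch.reg /y
reg delete hkcu\Software\ScratchKey /f
```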


8. ROBOCOPY

Oh boy.  What can I say about this command that hasn't already been said?  Once you get used to it, xcopy and the like will be flushed down the drain for good.  Powerful.  Flexible.  Simple.  Incredibly useful.  And it was one command I was happy to see rolled into the base operating system configuration, where it was once relegated to Resource Kit add-ons.

One scenario I use this in quite often is wrapping it in a .bat script to synchronize remote project folders to a central location, and then invoke the 7-Zip command-line interface to archive the backup content into .zip files and offload them to attached or removable storage.  For me it's just another redundant redundancy of backups, in addition to server backups, Dropbox, Google Drive and OneDrive.

Example: backup only .vbs and .ps1 files which are newer than those already backed up, or added since last backup, from \\server123\stuff to d:\archives\scripts...
robocopy \\server123\stuff d:\archives\scripts *.vbs *.ps1 /xo /s

One interesting aspect of robocopy is its "help" output.  On older versions, "robocopy /?" and "robocopy /???" produced very different results.  On Windows 8 and Windows Server 2012, however, both options were merged to produce the same output.
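The sync-and-zip wrapper scenario described above can be sketched as a short .bat script. Every path, share name and the 7-Zip install location here are assumptions, so adjust to taste:

```bat
@echo off
rem Sketch of a sync-then-archive backup wrapper (all paths are hypothetical)
set SRC=\\server123\stuff
set DST=d:\archives\scripts
set ZIP="c:\Program Files\7-Zip\7z.exe"

rem Pull down only newer .vbs/.ps1 files, walking subfolders
robocopy %SRC% %DST% *.vbs *.ps1 /xo /s

rem Archive the result with a datestamped name (date format varies by locale)
%ZIP% a d:\archives\scripts-%date:/=-%.zip %DST%\*

rem Offload the archive to removable storage
copy d:\archives\scripts-*.zip e:\offload\
```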

9. MSG

This sneaky little turd hijacked the old "net send" command like a stealthy ninja.  Not quite as easy to implement, due to the choke collar placed on the Messenger service since Windows XP was taken out in a boat and shot in the head (Godfather homage).  I wrote a blog post some years back on nothing but how to implement msg in a Windows 7 environment and enable the plumbing using a GPO.

If you don't have a GUI IM product in use, such as Communicator or Lync (same thing, different lipstick), and don't have an IRC app in use, this is another means to annoy your coworkers.

Example: tell my son Zach to get to bed, while he's uploading guitar demos at his computer in his bedroom...  msg zachary /server:zachspc /w "bed time dude!"  (the /w option waits for him to click the 'OK' button to indicate he read it).

10. MSTSC and MSRA

Holy crap, these are powerful, offering everything from really simple/basic/easy usage up to arguably complex yet powerful usage.  MSTSC is the Remote Desktop utility.  So instead of navigating the Start Menu or Start Screen for Remote Desktop and entering a computer name to access, you can invoke it straight from the command line.  It also offers command-line options for dealing with multiple monitors (/span and /multimon), shielding credentials (/restrictedAdmin) and invoking a preconfigured connection file.

Example: Remote into serverXYZ -->  mstsc /v:serverXYZ

MSRA is the command-line counterpart to Windows Remote Assistance.  There are a ton of options for this command (type msra /? for a list of them), but /offerra is the most commonly used of them.

Example: Initiate a Remote Assistance offer to the user on computer "fubar5"...
msra /offerra fubar5

You can go crazy with msra and invoke email invitations, password protection on assistance offers, and much more.


There are obviously many more command-line tools available on Windows 8 and Windows Server 2012 I could have included.  Some that come to mind, which I also use quite often, include MSIEXEC, WEVTUTIL, WBADMIN, WMIC, OPENFILES, FTYPE, DRIVERQUERY, REGINI and FINDSTR.  I won't argue that there aren't better alternatives outside of what comes built into Windows, even many that are "free", but it's nice that these still exist for times when you need them.


Thursday, March 13, 2014


This may be a bit deep, but I just finished folding laundry at a nearby laundromat whilst a group of over-caffeinated high-schoolers pretended it was audition night for "The Voice" and "Tosh.O" in the same place.  After getting home and putting the goods away, I did dishes and consumed a tasty Dogfish Burton Baton and now my brain needs some playtime.  You've been warned.  Sit down.  Strap in.  Enjoy the ride....

To many folks, the term "NIH" means National Institutes of Health.  But to many folks in the legal field, particularly those specializing in intellectual property issues ("IP" law, that is), it means "not invented here".  It's a uniquely-American term, but the concept predates America itself actually.  Allow me to digress...

I learned this while doing some contract work in that field a few years back; patent drawings and technical procedure writing, actually.  "NIH" refers to a policy some renowned international corporate entities (I won't name any) have towards how they approach dealing with intellectual property.  This encompasses copyright, patents and trademarks, obviously, but it also includes service marks and more.

In basic English: If the item was "not invented here" they don't want anything to do with it.  In most cases, they mean "at all", as in "not in any respect whatsoever".  For example, if you invent some cool toy, and approach some particularly HUGE toy manufacturing corporation to negotiate some kind of "deal" (I'll leave their actual name to your imagination), and they adhere to the NIH policy, and you are not a direct employee of that company, they will very likely ask you to leave.  If you continue to insist on a discussion, they may call security to help you find the door.

I am not joking.

So, I recently had a discussion with some colleagues (past and present) about the subject of "why do so many IT organizations seem to abhor the idea of their own staff daring to develop their own tools for their own needs, and then being so brazen as to ask their employers if they'd like to join in on the party?"  Keep in mind that this is not about selling the invention to the employer.  It's about asking the employer to put some elbow grease into it and help launch the airplane off the flight deck with a little more "umpf!".  Of the six or so folks present, and their recollection of some two dozen secondary contacts and colleagues, not ONE of them could recall ever hearing of a successful effort in that regard.

Not one.

I am very familiar with this, as I have been, without planning or expectation, in the center of such situations, many times.  At my last four employers in fact.  I was present during the putting-out of some particular fires (metaphorically-speaking), in which there were NO available "off-the-shelf" solutions to be had.  None.  So we built our own (or I built my own).  Like many contraptions, mechanical or otherwise, which are conceived to solve a "real" problem, they tend to gain a life of their own.  Problems, it seems, tend to recur so having a tool which is custom-fit to solve it tends to be very appealing.  Especially when that tool or solution was provided at no "additional" cost to the parties involved.

Yes.  No additional cost.  Translation:  It was conceived, built, tested and applied using existing funding and task vehicles (that's beancounter speak for "it was part of my job duties, so I did it").  Even the technologies involved with building the tool were 100% "free".  Things like built-in API's, and SQL Server Express, IIS, scripting, etc.  Aside from the already-paid cost of the operating system license itself, there were no additional costs required or incurred.

So, why the fuss?

That's a good question.  A lot of theories were discussed around this one key aspect.  Everything from businesses shying away from stepping outside of their "core competencies" to "perceived risk" to "obligatory support liabilities" and so on.  Blah blah blah.  Fear.  It's just fear.  I add laziness to that, but fear and laziness go together like hookers and politicians.

Then it dawned on me that it's really about NIH.  I realize that fear+laziness nearly equates to NIH in most respects, but it has a different sauce poured over it.

What's even more interesting about this entire mess is that in none of the examples we could cite were there any ulterior motives on the part of the employee/developer.  It was all above-board and in good faith, both in how they accomplished the item in question and in how they approached and engaged their employer.  The employer, however, no matter how the approach was toasted, garnished and served up, consistently took a hostile, defensive stance toward the employee.  As if indicating their distrust in the employee; probably assuming the employee concocted the whole idea just to negotiate a deal in the same sense as a blackmail operation.  Holding them hostage.  Whatever that could mean.

But the question remains: why?  Why not actually hold detailed, sincere discussions with the employee, rather than closing the gates and shooting arrows off the guard towers?

Legal risk.  Once again, attorneys, and their corporate financial overlords (retainer clients, usually), have successfully cultivated an atmosphere of risk-avoidance.  Risk avoidance is another name for "fear of innovation".  Imagine if Henry Ford had been told that challenging horses would land him in court?  I'm sure someone tried it, but what if he had actually caved in to that?  Oh boy.  You can argue that the environment would have fared better today than it has already, but think of the wider ramifications of that.  Now, start that idea-clock in motion today and imagine where it will be in 50 or 100 years.

It's already too late to save our federal government system from the corruption of corporate PAC influences.  Let's not let the rest of the baby go down the bathtub drain as well.  There are a few companies and entrepreneurs out there still taking risks (Elon Musk and Richard Branson being just two of them), so maybe there's still hope things will turn around in favor of imagination and risk-taking.  It can only work when it's cooked in the same pot as the money comes from though.  It seems those of us stirring around in the bottom ranks of the IT world are going to have to fight our way out day by day, in spite of the risk-averse surroundings.

But I digress.  Sweet dreams! :)

Wednesday, March 12, 2014

Walking the Walk

Quick post before I unplug and go comatose for the evening:  Tomorrow, when you get to work (or if you're in a different time zone and it's daylight right now) try this on:

  1. Write down all of the key functions your IT group performs.  AD accounts, software deployment, patching, server provisioning, backups, storage management, cloud integration, etc. whatever.  List them out.
  2. Identify the role(s) which relate to each of them:  Account Managers, App Packagers, App Deployers, Server Managers, Cloud Administrators, etc. whatever.
  3. Assign actual names to those roles.
  4. Compare that mapping with reality.  Grade yourself on a 100 point scale by checking off how many roles/people are actually assigned accordingly with what you are already doing today.

Let me know your score, what scale your environment is (small, medium, ginormous, etc.) and which country you're based out of.  Just curious how we all rate ourselves.