
Sunday, June 15, 2014

IT Kool Aid Flavors: Vendor or Reality

There are two general realms, or flavors, that exist in most of the IT world.  The vendor flavor, and the reality flavor.

Case in point:

According to one reseller, the world is running, or eagerly in the process of pursuing, a "pure" Windows Server 2012 R2 and Windows 8.1 / Update 1 environment.

Other sprinkle-toppings may include flavor crystals like all the 2013 products (but don't forget SQL Server 2014), and of course, the ubiquitous "cloud" world and Office 365/Azure.  And don't forget, if you bundle you get a free kid's toy.  Is this for a boy or a girl?  Do you want to super-size that as well?

And, please don't get me started on how to properly pronounce Azure.

Anybody still using XP?  Vendor says: Pfffft!  I think not (p-shaaaa!).

Not so fast.

The uncounted 70-90% of the computer-swilling world doesn't have the luxury of IT project plans, operational efficiency directives and SLAs to worry about. They're busy trying to make things, sell things, build things, fix things, provide services, and all that mumbo-jumbo. The kind of stuff that a lot of larger shops don't seem to have as much "direct / hands-on" exposure to anymore.

Nowadays, many larger shops have grown into detached sector/division/department/project/task-group/tiger-team environments, where they fit into a mesh of bean-counter menageries that eventually lead to something that tickles shareholders and keeps the paychecks flowing.

I have no intention of offending or insulting anyone by this (well, okay, maybe some resellers and sales-folks), but the truth can be summed up in a very simple example:

Kathy's landscaping shop has a few apps they bought with personal funds to help with designing backyard ponds, estimate water coverage, soil depths, and seasonal impacts on gardening.  They bought them when they bought their prized Dell or HP desktop they still use with Windows XP.  And guess what:  IT STILL WORKS.  In their view, shiny new touch-screen tiles and cloud things are not as exciting as kicking the shit out of the revenue numbers compared with the nearby Lowe's or Home Depot.

Their IT support center?  1-800-ASK-DELL or 1-800-WHATS-YOUR-KIDS-FRIENDS-NUMBER-AGAIN?

Sure, there are distinct, and tangible values to the new features provided by Windows 8 and so on, but for many (okay, dare I say: most) small businesses, and home users as well, the deciding factor is "why do I need to buy another new computer if the one I have still works?"  For many small "mom-and-pop" shops, the apps they depend on aren't tops on the lists of bigger companies.  They tend to be very industry-specific, and extremely function-specific as well.  Things that perform one task, maybe two, but do them well, and are also either cheap, or free.

Ask any software repackager who deals with more than a hundred titles, and they'll probably have no trouble recalling a list of those "oddball" apps that are tough to wrestle into a package, but for whatever reason, HAVE to be made available or the planets will spin out of orbit and gravity will dissolve.  Floral arrangement apps may seem stupid, but tell that to a small, family-owned Florist.

The consumer isn't broken. The rationale isn't broken either.  And neither are the products. What's broken is the sales pitch.

Remember the Daffy Duck salesman episode?  Hey Bud, you need a house to go with this door knob.

PS.  In case you're wondering, the photo depicts (for me, anyways), from left to right: me, a vendor, and a small-business owner.

Sunday, May 4, 2014

Windows 7 Desktop to Windows 8 Tablet Migrations: With Fries, a Drink and a Xanax

One of my clients is in the midst of a mildly-frustrating pivot: changing a like-for-like computer hardware refresh project into one that inserts the added dimension (or dementia) of offering users the option to purchase Windows 8 tablets.

The basics of the environment are this:

  • Active Directory (Windows Server 2012)
  • Windows 7 SP1 Enterprise 32-bit on 99.99% of the desktops and laptops
  • Office 2010, IE8 on most everything, but some smattering of IE11 and Google Chrome
  • Almost no Windows 8 production computers
  • Roughly 1400 distinct software applications
  • Roughly 6000 desktops / laptops / users
All of the Windows 8 tablet devices are configured with Windows 8.1 64-bit edition.  They will also come with Office 2013, IE11, Google Chrome 30-something, and a handful of other staple apps for various doo-hickey things (VLC Player, Adobe Reader, etc.)

Of the approximate 1400 software products in the production environment, roughly 40% are big-name vendors (Adobe, Microsoft, Oracle, etc.).  Another 40% are from lesser-known vendors, but still have a real "support center" you can call into, somewhere.  The last 20% are what some might call "garageware".  Those are the kinds of one-off products which are (more often than expected) "must-have" tools for various department functions.  Those are also quite often duct-taped together by a "vendor" who answers the phone while trying to scoot their noisy cat away from the food dish sitting beside their keyboard.

But all of this is just backdrop to the real issues...

While the bigger headaches with XP-to-Win7 migrations circled around UAC challenges, the newer challenges are less dramatic, but no less serious.  Some of the key aspects to consider when reviewing a given software product for use on a tablet device (I'm just picking on Windows 8, but it could be any tablet OS actually), are these things:
  1. Does the application sport a UI that is easy to use on a tablet? (i.e. the "UX" aspects)
  2. Does the application work well with the kinds of touch input "gestures" that a tablet provides?
  3. Does the vendor give a shit about tablets, or Windows 8 for that matter?
Go ahead and laugh at #3, but you might be surprised how many would seriously tell you "no".

Now, aside from the aesthetics, there's the nuts-and-bolts stuff to consider:
  1. Does the application rely on a specific CPU architecture?  Not just AMD vs Intel, but how it pertains to Windows (e.g. "Program Files" vs. "Program Files (x86)", and the wonderful Registry hive "WOW6432node" stuff)
  2. Does the application rely on specific browser interfaces?  Will it work as well with IE11 as it does (or did) with IE8?
  3. Does it rely on a particular Microsoft Office version?
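
For item 1, a few quick checks can tell you what you're standing on.  These are stock PowerShell/.NET calls, shown here as a sketch:

[Environment]::Is64BitOperatingSystem    # OS bitness (.NET 4 and later)
[Environment]::Is64BitProcess            # bitness of the running PowerShell session
Test-Path 'HKLM:\SOFTWARE\Wow6432Node'   # exists on 64-bit Windows (the 32-bit registry view)
${env:ProgramFiles(x86)}                 # only defined on 64-bit Windows
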
But wait... there's more!!!

If you're supporting a shop that's large enough to care about things like this, you could fall within the general realm of having to care about this as well:  Licensing Models.

Let's say you have 1,000 desktop computers running Autodesk's infamous AutoCAD 2014 (okay, you're probably still on an older version, trying to squeeze budgets as usual).  Maybe you were smart enough to install them using the provided deployment toolkit, and you opted to use a network license server (e.g. "Flexnet" or "FlexLM").  You did some analysis and figured out that concurrent usage never exceeded 75% of the installed seats across roughly 90% of a typical work week, so you purchased 750 licenses.

Now you show up at a Monday morning meeting, wearing a clean shirt, toting your notepad, pen and a steaming hot cup of coffee.  You're in the midst of enjoying a nice, long swill of that caffeinated goodness, when one of the managers across the room says that she wants to replace 500 of the desktops with Windows 8.1 mobile tablets and run AutoCAD as well.  Now you've got coffee stains all over your clean shirt and on the arm of the poor guy next to you.  What a bummer.

Did you remember to configure "license borrowing" on your server?  Did you plan for the possibility of such a large number of licenses possibly going off into the wild, for who knows how long?  Will that 750 number still work for this new direction? 
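
A back-of-napkin version of that math (a sketch; the numbers come from the scenario above, and the worst case is my assumption, not a prediction):

$desktops   = 1000
$licenses   = [math]::Ceiling($desktops * 0.75)   # the 750 concurrent seats purchased
$tablets    = 500                                 # proposed Windows 8.1 tablets
# worst case: every tablet borrows a license and wanders off the network...
$checkedOut = $tablets
$remaining  = $licenses - $checkedOut             # 250 seats left for the other 500 desktops. Ouch.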

Things to make you rub your chin and say "hmmmmmm...."


Wednesday, February 5, 2014

BPA. Again? Yes. And THIS time with Fries and a Drink

The Challenge

Your tech staff wants to be able to swap-out existing desktop computers with the least amount of administrative overhead.  Basically, they want to be able to walk up with a new computer, unplug the old one, plug in the new one, turn it on and walk away.  The stuff on the existing computer should magically end up on the new computer (sans all the usual personal crap that shouldn't be stored on a local computer in the first place).

What is this "stuff"?  Applications, default/initial configuration settings, printer mappings, Outlook profile settings, etc.

The Tools at Hand

Active Directory (Windows Server 2012), Group Policy, System Center Configuration Manager (2007, but that's for another discussion), SCCM OSD+MDT+ADK+Coffee, SQL Server 2012, ASP (yes, the sticky, smelly old classic kind), Packaged Software (already loaded into SCCM), a tablet running Windows 8.1 and a bluetooth barcode scanner.  Also, a van, a hand-truck, appropriate weather-oriented clothing, hydration substances and a fully-loaded firearm (okay, not for all situations).

Options

Options are good.  This is just one option.  One that I am fortunate to be working with actually.

If the computer is mapped into the appropriate collections within Configuration Manager, and managed and monitored via logical memberships and dispositional relationships (Active Directory Security Groups, Active Directory Organizational Unit location), you're in what NFL fans would call "the Red Zone".

The Binding Adhesives:

  • Assuming that the existing computer finds its way into a functional or organizational related Collection within SCCM, you have a means for aiming things at it by function and/or organization (sector, department, division, business unit, etc.)
  • Assuming that the existing computer has things configured via Group Policy, it is likely grouped under a logical OU environment.
  • Assuming it is targeted via either query-based Collections (handy), or direct-membership Collections (a little more work but still handy) you have another means for aiming things at it in a logical manner.
  • Assuming, and this is a reach for some environments, but very common in others: you have barcode labels on assets that map to their AD account names (oooh).  You are in a very good position to throw a touchdown now.
The Process Walk-Through
  • Technician arrives at customer location with new computer, already imaged with a generic, standard "load" (Windows, Office, base applications, etc.) and joined to AD in a generic OU. It also has a functional SCCM client agent.
  • The technician gets the existing barcode and the new barcode values.  Enters them into a web form (from the computer itself or from a third device, such as a tablet or phone).  Hit "submit".  Swap-out the hardware, reconnect, power up the new stuff and run off with the old stuff.
In the background:
  • The data is queued (in a relational database, such as SQL Server) for the "old" and "new" computer names.  Since both already exist in AD and in Configuration Manager...
  • A scheduled, or triggered, task invokes a script that queries the database for pending items (those which have not been swapped out yet)
  • Step 1 is fetching the "old" computer information:
    • AD OU
    • AD Security Groups
    • AD account description property (if desired)
    • AD account location property (if desired)
    • SCCM direct-membership Collections
  • Step 2 is taking action:
    • Move "new" computer into correct OU
    • Add "new" computer to same AD groups
    • Add "new" computer to same SCCM direct-membership Collections
    • Update AD account properties (description, location, etc. if desired)
    • Disable "old" computer AD account
    • Remove "old' computer from AD security groups
    • Move "old" computer AD account to special OU (for GPO management aspects)
    • Update asset inventory tracking database (operational status, if relevant, etc.)
    • Force a restart of the "new" computer (to force GPO updates, SCCM client policy polling and discovery updates)
  • Step 3 - mopping up
    • No process model is without exceptions:  manual installations, special device installation, white-glove stuff, etc.
Suiting Up

All of this is not only possible, it's not that difficult with the right planning and testing.  I'm currently using ASP on IIS from an internal Windows Server 2012 box.  The queue is managed in SQL Server 2012.  The script process uses VBscript, but will soon be ported to PowerShell.  The interfaces are COM, WMI, SWBEM, ADO and LDAP/ADSI.  All are very common building blocks in Windows environments.
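
Since the PowerShell port is coming anyway, here's a minimal sketch of what the AD half of Steps 1 and 2 might look like, assuming the RSAT ActiveDirectory module.  The computer names and the "Retired" OU are hypothetical stand-ins for the real queue data, and the SCCM direct-membership piece (which rides on WMI/SWBEM) is left out here:

Import-Module ActiveDirectory

$oldName = 'WS-OLD-001'   # hypothetical names; in the real process these
$newName = 'WS-NEW-001'   # come from the pending rows in the SQL queue

# Step 1: fetch the "old" computer's OU, group memberships and properties
$old   = Get-ADComputer $oldName -Properties Description, Location, MemberOf
$oldOU = ($old.DistinguishedName -split ',', 2)[1]   # parent OU (naive split)

# Step 2: apply them to the "new" computer
$new = Get-ADComputer $newName
foreach ($grp in $old.MemberOf) { Add-ADGroupMember -Identity $grp -Members $new }
Set-ADComputer -Identity $new -Description $old.Description -Location $old.Location
Move-ADObject -Identity $new.DistinguishedName -TargetPath $oldOU

# Retire the "old" account
foreach ($grp in $old.MemberOf) { Remove-ADGroupMember -Identity $grp -Members $old -Confirm:$false }
Disable-ADAccount -Identity $old
Move-ADObject -Identity $old.DistinguishedName -TargetPath 'OU=Retired,DC=contoso,DC=com'   # hypothetical OU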

When was the last time you played with a Lego kit or one of those car/ship/aircraft model kits?

Repeat after me: "It often comes with headaches, but I usually love what I do for a living."

:)

Miscellaneous Nerd Notes after a Busy Day at Work

These are just reminders.  Today was a rocky start and a smooth ending.  Assisted by a relaxing conversation with a long-time colleague (and all-around nice guy), followed by a run around the neighborhood and a cold Belgian Ale to cool down.  Some of these may seem obvious to some of you, but they are just reminders for my own feeble brain cells.

The "RuleSet" property for the Configuration Manager SWBEM Collection direct-rule interface property expects a client "name", not a ResourceID.

Make sure your functions return values properly.

There is no such thing as too much error checking.

Remoting is good.  Whether it's PowerShell, Sysinternals, or throwing a heavy object at the butthead in the next cube.

Colleagues that consistently shout out "___ is broken." without providing the usual, necessary, and requisite details about "what", "where" and "when", deserve a good choking.

The Office 2013 default UI is pretty, but too cluttered and poorly organized.

The downloads for SQL Server Express 2012 and SSMS for x64 are stupidly confusing and not linked very well on the MS downloads site.

As soon as a social app says it's going to "monetize", it usually means it smells fear from its own staff.

When it snows in Virginia, drivers turn the asshole knob to "max".

Web apps rock.  Building them is fun.  Using them is rewarding, especially when you build them to solve real problems you face every day.

Listen to the entire problem before responding.  Ask questions before spewing directions.

Cats can't eat as much chicken as they think they can.  The results usually end up on the floor in the early morning.

Monday, January 27, 2014

Dastardly Dissections: PowerShell and Software Deployment Dabbling

I was turned onto PowerShell several years ago, but like most music that I've held onto, it took a while to grow on me.  That is, until I had one of those "a-HA!" moments, which was just this past week.  I have to give a double-extra "thank you!" to folks like Jeffery Hicks and Don Jones (among many others), as well as all the folks who patiently help others on sites like StackOverflow and Microsoft TechNet.  If I had that much patience I'd be a therapist.


The Meat and Potatoes

I wanted to find a balance between efficiency and reusable code structures.  Ever since I was forged in the fires of LISP programming by an incredible guru named Brad Hamilton, I've sought to make my code as refactoringly refined and reusable as possible.  It should work like a Lego block, as he once mentioned to me.  Another word he used was "organic".  It should work and feel like it grew out of nature, not like a 7-legged cat trying to climb an ice mountain.

Much of what you will see below (and soon-after stab your own eyes out with a plastic fork, out of the sheer horror of it all) is my own personal seasoning.  I like to put a nerdy block-style heading at the top, followed by a group of related custom variable assignments, and then start to work destroying any sense of productivity soon after.

In a nutshell: I define some variables to identify the product, the installer file, the source path, the target path that the installer creates on a typical client, and then move on.

The next part checks if the file is already present, which indicates a previous installation was already completed, and exits if that's the case.  If not found, it goes ahead and runs the installation and returns the exit code.

(Updated 1/28/2014: the second $srcPath line below replaces the commented-out line just above it.  Ensures the script calls the installer from the same location / path)

[powershell-begin-ugly-code]

#------------------------------------------------------------
# filename...: install-orca.ps1
# author.....: David M. Stein
# date.......: 01/27/2014
# purpose....: install Microsoft Orca using PowerShell 3
#------------------------------------------------------------

# comment: define variables and assignments

$appName = "Microsoft Orca"
$msifile = "orca.msi"
# $srcPath = "\\appserver3\utils\microsoft"
$srcPath = Split-Path -Parent $PSCommandPath
$path32  = "C:\Program Files (x86)\Orca\orca.exe"
$path64  = ""

$f1 = get-location

write-host "info: searching for existing installation of $appName..."
if (test-path -Path $path32) {
  write-host "info: $appName is already installed (aborting install)"
  $retval = 5000
} else {
  write-host "info: installing $msiFile....."
  set-location $srcPath
  write-host "info: working path is $srcPath"

# comment: the following line may wrap incorrectly in a browser...
  $retval = (start-process msiexec.exe -ArgumentList "/i $msifile /qn" -Wait -PassThru).ExitCode

  switch ($retval) { 
    0    {write-host "info: success"} 
    3010 {write-host "info: success (reboot pending)"} 
    1603 {write-host "fail: I hate 1603. A useless error code!"}
    1605 {write-host "skip: target application was not found (uninstall abort)"} 
    default {write-host "fail: uh-oh? exit code is $retval"}
  }
 
  set-location $f1
  write-host "info: installation complete."
}
exit $retval

[powershell-end-ugly-code]


Why exit with code number 5000?  Good question. I wanted to be able to filter in on that via System Center Configuration Manager, especially through direct T-SQL queries and BI reporting.  I tend to "live" in the SQL Server environment more than anywhere else for some reason.  It feels like wandering around a big-volume hardware store on a quiet night.

If I treated an existing install as a "failure" or exception, I would have to assign a non-zero result code.  I could consider it a "success" and return 0 (zero) as well, but then I wouldn't be able to query for unnecessary attempts in my production environments.  Artificial flavoring has its uses.

To invoke this from a non-PowerShell state, I fire off the command string as follows...

powershell -File install-orca.ps1

Then I can fetch the result implicitly via the command pipeline or explicitly by interrogating %errorlevel% via the CMD shell interface.
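
For example, from a calling PowerShell session the exit code of the child process lands in $LASTEXITCODE (a minimal sketch; the -ExecutionPolicy switch is just an assumption about your environment's policy):

powershell.exe -ExecutionPolicy Bypass -File .\install-orca.ps1
if ($LASTEXITCODE -eq 5000) { write-host "info: was already installed" }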

Ripping It Out Again

So, what about the Uninstall flip-side of this?  Let's try this out...

[powershell-begin-stupid-code]

#------------------------------------------------------------
# filename...: uninstall-orca.ps1
# author.....: David M. Stein
# date.......: 01/27/2014
# purpose....: uninstall Microsoft Orca using PowerShell 3
#------------------------------------------------------------

# comment: define variables and assignments

$appName = "Microsoft Orca"
$guid    = "{85F4CBCB-9BBC-4B50-A7D8-E1106771498D}"
$path32  = "C:\Program Files (x86)\Orca\orca.exe"
$path64  = ""

$f1 = get-location

if (test-path -Path $path32) {
  write-host "info: $appName is installed.  Uninstall it now..."

  # comment: the following line may wrap incorrectly in a browser also...

  $retval = (start-process msiexec.exe -ArgumentList "/x ""$guid"" /qn" -Wait -PassThru).ExitCode

  switch ($retval) { 
    0 {
        write-host "info: uninstallation was successful."
        write-host "info: removing leftover files and folders..."
        Remove-Item $path32 -Recurse
      }
    3010 {write-host "info: success (reboot pending)"} 
    1605 {write-host "info: target application was not found (uninstall abort)"} 
    default {write-host "fail: exit code is $retval"}
  }
 
} else {
  $retval = 0
  write-host "info: $appName was not found on this computer (abort uninstall)"
}
write-host "info: completed"
exit $retval

[powershell-end-stupid-code]


A few notes on the example above:

  1. First, you may notice the additional code to remove leftover files and folders.  That's because it's not uncommon to find leftover files and folders after a "successful" uninstall.  The reasons are many, but in short: just clean them up if needed.  
  2. Second, if the installation was not found, I force a 0 return value here.  I could have also forced something like 5001 or 6000 or 227001 or whatever (as long as it's not in conflict with known result codes used by other apps or processes).  I chose 0 because I'm tired and sitting in a realllllllly comfortable chair right now.  Too lazy to use a longer value.
  3. I could have used Test-Path to find the Registry Key instead of a folder and file.  That would work as well, and the example would look instead something like the following...

Test-Path "HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\{85F4CBCB-9BBC-4B50-A7D8-E1106771498D}"

(note that I had to wrap the path in matching double quotes; otherwise it tries to evaluate the { and } as code).

If you're not familiar with Windows Installer methods (e.g. msiexec syntax), that's okay.  That means you're probably "normal".  I'm not.  You can invoke an uninstall using "/x" and provide a specific .msi package file, or you can locate the associated application GUID from the Registry (see HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall, or for 32-bit apps on a 64-bit client, like Orca, refer to HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall) and use that as well.  Below is a screen capture of REGEDIT showing the key and value on my cheap laptop...


Find the "UninstallString" value and grab a copy to inspect.  The irony is the "/I" prefix, which denotes "Install", but you can ignore that.  Almost every product entry will have an "UninstallString" value and it will almost always contain "Msiexec /I{blahblahblah}".  For an uninstall operation, you replace "/I" with "/X" (upper or lower case, doesn't matter), and move on.  It's the rest of that string that matters.  That's the GUID, and it is usually pretty reliable for use with msiexec to uninstall a known product entry.

More Mindless Notes:
  • You may encounter cases where you do NOT want to delete leftover files and folders (I didn't even mention leftover Registry keys and values, did I?).  Just comment that line and you're good to go.
  • You may need to stop services in order to perform some tasks.  You can use the Get-Service cmdlet or Stop-Service.  To remove a service, you can use the ancient SC.EXE command (under \Windows\System32) to invoke the Delete method.  Just don't forget that it may require a reboot (see the sketch after this list).
  • Be careful to validate every exit code before assuming anything other than 0 (zero) is "bad".  3010 for example is good.  In some contexts (is that proper English?), the exit code 1605 could be considered "good" as well.
  • I'm NOT a PowerShell expert.  I'm still learning and very possibly farther behind on this stuff than you (in which case you're probably not reading this sentence because you already left this page to find what you're really looking for).  I hope I'm not the smartest guy in the room.  That's a boring proposition to consider.  I'd rather be learning.
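
Here's the sketch promised in the services note above ('SomeService' is a hypothetical service name, of course):

Stop-Service -Name 'SomeService' -Force -ErrorAction SilentlyContinue
sc.exe delete SomeService   # removal may not complete until a reboot
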
Namaste.

Wednesday, September 18, 2013

Knowing Which Seeds to Water

(Warning: I've had some coffee today)



In The Beginning

In 1990, at the age of 26, I worked as a drafter for a small Naval engineering firm in Virginia.  It was my third job in the field of naval design, and I was assigned to work in the Piping Systems Division with maybe two dozen others, in support of contracts for overhauling U.S. warships.  It was during this time that PC-based CAD entered the fray in the defense industry.  ThisCAD and ThatCAD were everywhere, but AutoCAD was the eventual and clear winner.  Until then, anything with "CAD" in the name wasn't even considered unless it ran on UNIX-powered hardware.

While learning to use this new "AutoCAD" tool, I tripped over something and looked down to realize that inside this little product was a shiny gem called "AutoLISP".  A customization programming tool, built right into the product!  Having tinkered with CMD and Batch scripting for MS-DOS and Windows, I was hooked on programming from that moment on.  Mainly because it made it possible to draw and "create" visible objects on the screen, rather than a bunch of numbers and text.

After a few weeks, I built some menus, functions (or "routines", as they were often called then), and eventually wrapped them in dialog forms and prettier stuff.  After sharing them with my coworkers, I began to get feedback, and ideas started coalescing like a tropical storm into a hurricane.  The momentum continued to build, and within a few months I had a complete "design package" for automating many of the tasks involved with creating and validating engineering and design drawings for piping systems.

Auto-Something-or-other

Not long after that threshold was crossed, I spread out into HVAC systems, and eventually into the other primary system groups involved with nautical engineering: Structure, Outfitting, Machinery, Electrical and Electronics.  Then it was on to building the top of this strange pyramid:  Notes, References, Sheet Formats, Materials Lists, Tables, and so on.  In much the same way a Lego kit ends up becoming a city with elevated monorails and skyscrapers around a kid's room, I ended up gluing in data files, database tables and views, symbol tables, icon files, drawing parts (block inserts, XREFs, etc.).  Building on top of what a predecessor from our New York office had started, it became an entirely new animal.

My boss was supportive, as was the Department Manager, and the Division Manager.  But once it cleared the cloud layer, things got less clear.  I never asked for a raise or a promotion, oddly enough. All I asked for was the approval to tap a few key "power users" in each department to form a "team" to help improve this automation tool even further and faster.  Silence.



In 1996, I was contacted by a much-larger company, a nearby shipyard, to take on a newly created role of "AutoCAD Systems Manager" for an entire Division.  That meant a lot of things at once for me:  Automating the deployment (installation), configuration, maintenance and licensing of AutoCAD and AutoCAD Mechanical Desktop to roughly 1,300 users.  There were other Divisions, but they were tied to UNIX products and rebuffed any consideration of anything that ran on a scruffy PC. This offer also meant I'd take over licensing administration (i.e. FLEXlm), and my prized role: Customization.  Oh yeah, it also meant a considerable pay increase and better benefits, but customization was what I had my eyes on the entire time.

Project: Mariner

Within a few months of that new job, I began building an entirely new suite-based, collection of design automation tools to run on AutoCAD and MDT for Piping, HVAC, Mechanical, Hull-Structure, Hull-Outfitting, Electrical, and Materials.  This new beast grew a beard and a deeper voice and eventually was named "Mariner".  A fitting name I thought.  I sure get wrapped around the axle when it comes to choosing a name for software projects, but that's for another story.

This process continued to grow and I was allowed to form an unofficial "team" to help maintain and improve it as well. Once again, I never asked for a raise or promotion, but things seemed to progress much more easily.


Sometime in late 1999, this shipyard began contracting in designers from local firms to handle the volume of work going on.  The contractors were required to learn this new abstraction layer, so I embarked on developing a training guide and a training course, and was even authorized to issue training certificates for completing the training.  Seriously, they printed some 1400 books with color graphics and sturdy permanent binder edges.  Nifty stuff!


Project: ShipWorks

In early 2000, one of the contractors asked if they could license this "Mariner" product to use back in their offices.  The rationale at the time involved a lack of physical space at the shipyard to bring in any more contractors, while the workload continued to rise.  I approached the corporate overlords, their legal masters and the contracts department gurus, and soon there was a "first-ever" licensing contract produced to allow their "partners" to use this product.  Until then, no other such vehicle had existed, or so I was told.  Then, I inquired about approaching Autodesk or some other (now defunct) software vendor, to help take it to the next logical level: external marketing.  There was interest from nearly all of the outside contracting firms, as well as several software vendors.  The company said: "NO.  We are not a software development company."

Growing tired of the lack of management support, I accepted an offer to work for a local Autodesk product reseller.  I submitted my two-weeks notice and packed my belongings to move on to yet another employer.  On my last Friday, I received a phone call from the contracting firm that had initially approached our company about licensing Mariner.  They heard I was leaving, they counter-offered and, stunned and shocked, I accepted.  I went to that new employer and, again, started development on a totally new product, incorporating all of the lessons-learned from the Mariner project.  This animal grew into something called "ShipWorks".  Much to the chagrin of Autodesk, it was never intended to run on Solidworks, nor was that ever attempted.  Still, they were obviously not too happy about the "works" suffix.  Just an odd side note now, I think.

That leg of my journey into the software technology world is where I officially transitioned from a mostly-engineering environment into a mostly-IT environment.  I absorbed managing Windows Server, SMS and Configuration Manager, WSUS, RIS and WDS and a whole bunch of other weird things that I found interesting and helpful, and which helped cut costs and make for a better computing environment.

In this new role, I was given a team, management support, resources, and things finally seemed to be on a good track.  Then in 2007, the company was sold and split apart.  I ended up bouncing to a consulting firm, which lasted about three months before the economy tanked, and I had to make ends meet doing side work for a few months before crawling back to the shipyard on my knees to beg for my job back.  They graciously accepted.



From here on, I haven't touched AutoCAD much at all.  Most of the work I've done since involves things like ASP or PHP, along with SQL Server, Oracle, Active Directory, SMS or Configuration Manager, Inventory systems, Service Request systems, and so on.  Basically, gluing things together horizontally with a big bucket of sticky web application goo.

Looking Back

At every one of the places I've worked, I've built something custom to help them operate more efficiently, and tried to make the users happy with the results.  In every case, my immediate supervisors were very supportive.  In every case, when it went above my immediate supervisors, things got shaky and less reaffirming.  The support and reinforcement began to vaporize the higher up it went.


The Takeaway

Over the past twenty-odd years, I've seen a lot of potential wasted because someone decided a project was not worthy of basic consideration, without even giving it a second thought.  The results could have been astoundingly helpful for a lot of people and businesses.  Judging too quickly was always the culprit, killing the dream before it could begin to take shape.  I'm writing this today because I still see this happen too often, in too many places.

If you have a lone developer, or a small team of developers, within your business, official or not, and they are actually producing useful things, support them.  Especially if they don't ask for monetary compensation, but simply want to see that management cares and wants to help them push it further.  Maybe it's outside of your "core business" comfort zone.  Maybe you never considered your business to include this mysterious thing called "applications development".  Try making it work anyway.  You say you have "gut instincts" for business; well, use them.  You might be amazed what good can come from it.  I'm not suggesting you rubber-stamp every app-dev project without checking on its merits.  Verify and validate them all.  But don't reject them simply because they involve "application development".  That's an unforgivable crime of business.

Cheers.

Friday, May 17, 2013

Shiny New AutoCAD, Same Old VLISP

I'm beyond the point of crying over the demise of Visual LISP.  A once-mighty development platform with an impressive following (and one-time unrivaled volume-king of content), now relegated to bleeding out on the scrap heap of soon-to-be forgotten languages.

When John Walker chose LISP as the core extensible language for AutoCAD, he did so on the basis of its inherent dynamic polymorphic nature.  Recursion and chameleon-like characteristics made it as fluid and flexible as the T2 walking through the mental hospital metal bar gate (without the pistol, of course).

What Autodesk is ignoring is potential. There is and always has been potential within the Visual LISP world to grow the language as a standalone platform. It could be used for so much more than CAD purposes. Even DCL could join in on the ride beyond the walls of Fort AutoCAD.

Once unfamiliar programmers got used to working with lists and functions like mapcar, apply and lambda, who knows where it could lead?

Friday, February 22, 2013

What's On Your Desktop?

I got into a brief, but interesting discussion about what applications certain IT folks typically have open at ANY given time of day.  More specifically: if you were to guess as to what you'd find if you were to walk up behind various staff members, and see what apps were opened on their desktop(s), how accurate would your prediction be?

Here's what I almost always have open on my dual monitor setup at work:

  • Google Chrome
  • Internet Explorer 9
  • Microsoft Outlook 2010
  • Microsoft System Center Configuration Manager 2007 admin console
  • Microsoft SQL Server 2012 Management Studio
  • Microsoft RSAT: Active Directory Users and Computers
  • VMware Workstation 9
  • AdminStudio 11.5 / InstallShield 2012 (depending upon workload)
  • TextPad 6
  • Paint.NET
  • Several CMD consoles
At home, from my laptop (usually on the couch):
  • Google Chrome
  • Microsoft Word 2010
  • Microsoft PowerPoint 2010
  • Microsoft Outlook 2010
  • VMware Workstation 9
  • PowerShell ISE
  • Several CMD consoles
  • TextPad 9
  • Paint.NET
  • iTunes 11

Wednesday, February 6, 2013

The Not-So Fine Art of Software Re-Packaging and Deployment


New and Improved / Amended Version!

Note: Thanks to Mikko Järvinen for reminding me about "Uninstall Testing", also known as "Package Removal Testing" or "PRT".  I added the changes below in blue (just FYI).

I had to prepare the following for an internal FAQ document to help our "customers" better understand the ramifications, and pontifications, with respect to processing a request to have software packaged (re-packaged) for deployment to their computers.  It's a work in progress, but this is where it stands right now...


Overview

Software can be installed in a variety of ways, but when it comes to installing software on a large number of computers in the shortest time, AND with the highest level of consistency and reliability, it requires a little more work.

Unfortunately, software vendors do not follow a common playbook when it comes to packaging their products for installation. Some products are packaged in a way that makes it easy to install them using automation tools. Most are not. When software is not packaged in a way that lends itself to being installed easily, it often requires "repackaging".

Packaging vs. Re-packaging

Packaging is the process whereby a software product is compiled into an original installation package. This is often a single ".exe" or "msi" file, but in many cases it results in a collection of many folders and files. An installer that comes as a single ".exe" file is referred to as an "Executable installer". An installer that comes as a single ".msi" file is referred to as a "Windows installer package".

Executable installer packages are the most common.  These are the familiar .exe packages (e.g. setup.exe). They often provide a built-in mechanism for launching them along with a list of options or settings to pre-configure the installation without having to step through a series of dialog input forms. In many cases, this includes the option for running it in "silent" mode. "Silent" mode allows the installation to run without displaying any forms or prompting for user input. This is essential for mass deployment using automation tools such as Microsoft Configuration Manager or Group Policy.
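
For illustration, here's roughly what those silent invocations look like from PowerShell.  The switches shown are common conventions only, not guarantees; every vendor does it differently, so check the product's documentation:

Start-Process .\setup.exe  -ArgumentList '/S' -Wait             # many .exe installers
Start-Process msiexec.exe  -ArgumentList '/i app.msi /qn' -Wait # Windows Installer packages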

Repackaging is the process of taking the smelly crap that some vendors hand you, and mooshing it up into a new, less-smelly ball of goodness that can be installed "silently" and pre-configured, just the way your precious customers are begging for.  There are no limits to what qualifies as "repackaging".  It can be wrapping the installation parameters within a script file (pick your script language, it really doesn't matter that much), or it can be squeezing it through the meat-grinder of something like InstallShield, or AdminStudio, to make a whole new installation binary (i.e. a new .EXE, or a Windows Installer .MSI file, etc.).  I won't even bother with discussing .ZAP files because I just ate.

Did I just say that it doesn't really matter what scripting language you use? Well, that's sort of true.  I don't recommend you just pick a scripting language without considering how it will fit into what the rest of your organization uses.  Even more important, you need to consider what the environment will support (if you do everything in one language, you might find out some of the target devices don't support it).

Complications and Time Factor

One of the most commonly asked questions about packaging and re-packaging is "How long does it take?"  The answer is always "It depends."  No two software installations work exactly the same. Because of this, it is impossible to predict how long it will take to get a workable unattended installation package prepared and tested. Some of the variables that play a critical role in determining the time it takes to re-package an installation include:
  • The original installation package format and integrity
  • Vendor licensing and activation requirements
  • Removal or Upgrade of older versions
  • Checking for, and resolving prerequisites
  • Per-machine vs Per-user configuration settings
  • Operating System dependencies
  • Business-specific configuration settings
  • Vendor compliance with Microsoft's recommended guidelines
  • Client/Server dependencies
This is not an exhaustive list, and each of these can greatly impact the difficulty with re-packaging the installation. In some cases, it can render the re-packaging process ineffective, requiring manual installation and configuration; something we try to avoid at all costs.

Other factors that should be factored in:
  • The relative chemical stimulant consumption rate of the under-paid coders hired by the vendor (on contract, of course).
  • The address of the mobile trailer they call an "office"
  • Whether they include "Grateful Dead Reunion Tour" dates as paid holidays
  • When you say "InstallShield" and they respond with "What's that?"
  • When you really have to explain to the vendor what a "silent install" is
  • When you call their "support line" and reach the owner/president/senior architect/coder guy every time.
I'm sure I could add more, but I'm too tired right now.  Let's move on...

Testing and Validation

Once an installation package has been developed, the next step in the process is to test it. In most cases, this is done by using "test" computers whereby a designated user will "remote" into the test computer from their own location and test the software installation. This eliminates the need for customers to travel around to physically sit down at the test computer and allows greater flexibility with scheduling.

In most cases, a virtual machine "test computer" will suffice just fine.  It doesn't matter what you prefer to work with (VMware, VirtualPC, VirtualBox, etc.) as long as it works and users can remote into it and do what they do (crash and break things, usually).

In some cases, usually when special hardware devices are required to be used with the software, it may be necessary for the customer to physically sit down at the test computer and log on, so they can use the hardware devices properly.

The process whereby customers test the software prior to it being deployed into production, is referred to as "User Acceptance Testing", or UAT.  But be careful, as UAT is *NOT* the entire testing process.  It's just one piece of it.

The most basic testing process goes something like this:

  1. Install the application using the normal means, on an isolated test computer.  This helps the repackager get familiar with how the application "normally" installs, and what options and settings it provides along the way to completion.  This is sometimes called "Installation Analysis Testing" or IAT.  It's also equally important to use this step for documenting the "footprint" an application installation leaves on a computer.  Having a complete list of changes it makes to the Registry, File System, Services and security environment, are all crucial pieces of information. This is required for making sure that you build an uninstall package that does a thorough job of cleaning up when the application is removed.
  2. After repackaging, use the new repackaged package (a mouthful, sorry) to do the install, to verify that it (A) installs properly, even silently, and (B) launches and functions properly after installation. This is sometimes called "Initial Package Testing" or IPT.  After running the IPT, it is vital that you confirm that the installed application functions properly. This also adds a new dimension to the "footprint" by virtue of launching and using the application, which often initiates a chain of post-installation configuration processes that modify additional things in the Registry, File System, Services, and so forth.  This is where a wrench often comes flying in from left field during the uninstall testing (PRT), so be prepared to make some adjustments to your uninstall package.
  3. After the repackaged package has passed IPT, it's time to load it up into your deployment/distribution system (i.e. Microsoft System Center Configuration Manager) - (and you thought I couldn't string a bunch of words together into a longer name than that, pffft!).  Once loaded into your deployment system, the next step is to target a test computer to verify that the deployment system delivers the installation package, and installs it successfully.  This is sometimes called "Package Deployment Testing" or PDT.
  4. After PDT, it's time to go to User Acceptance Testing, or UAT.  In most situations, you can use the same targeted test computer from the PDT without having to do another deployment, but the choice is yours (and varies by individual circumstances).
  5. Once UAT is complete, you should be ready to remove the safety lock and fire with both barrels.  In other words, you should be ready to go to Production Deployment.
Some important notes pertaining to the above gibberish:
  • Steps 2, 3 and 4 should be performed on a test computer for each type of target configuration.  In other words, if you will be expected to deploy this to Windows XP, Vista, Windows 7 and Windows 8 computers, you should definitely perform each test on an appropriate test computer.  And don't forget that 32-bit and 64-bit configurations add another layer of complexity (and testing).
  • If your target user base does not (generally) have local Administrative permissions on their computer device, make sure you package and test with that expectation.  And more importantly: Be sure to have a user account logon and launch the application the FIRST TIME after being deployed, so that it will behave as it would on the other 99.99% of the target clients (unless you expect to walk around, or remote into, every computer after the deployment - which would probably suck).
  • Useful tools in your arsenal for developing installation packages are InstallShield and AdminStudio.  But in addition to their primary capabilities, another useful aspect of AdminStudio is to use the Repackager "snapshot" feature to help compare "before" and "after" system states when doing your Uninstall development and testing.  For example, you can take a "before" snapshot, install and run the application, and then take an "after" snapshot.  The results of comparing both snapshots will reveal what aggregate changes were made to the system, thereby helping to shine some light on how to develop an effective package for removing the application completely and cleaning up behind it.
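
If you don't have AdminStudio handy, a poor-man's version of that snapshot compare can be faked with plain PowerShell (file system only here; the Registry would follow the same pattern with Get-ChildItem against HKLM:).  A sketch:

$before = Get-ChildItem 'C:\Program Files (x86)' -Recurse | Select-Object -ExpandProperty FullName
# ... install and launch the application here ...
$after  = Get-ChildItem 'C:\Program Files (x86)' -Recurse | Select-Object -ExpandProperty FullName
Compare-Object $before $after | Where-Object { $_.SideIndicator -eq '=>' }   # items added by the install
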
The level of testing you employ will depend upon the nature of your environment obviously.  The smaller and less complex an environment is, the less likely you will need to perform as many phases of testing. But it never hurts to test more than you think you should.  It usually saves you from yourself later on (if your poorly-tested, bad packaging output lands on 10,000 devices over a weekend, your Monday will very likely suck ass indeed).

Deployment

Once a software installation package has been tested and approved by the customer, the IT department can then begin to deploy it to the requested devices. The method we use is an automated deployment tool named Microsoft System Center Configuration Manager.

Final Thoughts

Software Packaging and Repackaging (two distinct processes) are a combination of science and art. While based upon technology (science), there is a lot of human intuition (art) involved as well.  If you expect to master these things using one or the other alone, you will be in for a tough time.  Also, be prepared to consume additional quantities of caffeine.

Sunday, November 11, 2012

Crude But Effective: Part 2, the Electric Boogaloo

In my previous article I described a system for replicating some of the functionality of the ConfigMgr Right-Click Tools (aka "SCCM Right-Click Tools") through a web interface (intranet web portal application), using a combination of HTML, ASP, and a database back-end.  What I planned to do was provide a little more detail on each of the pieces in follow-on articles.  This way, if you really cared enough, you could build your own setup (and probably do a better job of it than I have).

In this article I'm going to expand on the part of the process which involves the database back-end, and the script that runs on a schedule to query, process, and update the database table.

The Database Table

To bring all of the processing into one central "hub", I chose to use a Microsoft SQL Server database, and create a table to capture the incoming requests from the portal.   My database server is named "DB1" and is running on SQL Server 2012, but it doesn't matter what version you use really.  I've tested this setup on 2005, 2008 and 2008 R2 with equal results.  The name of my database is "AMS" (for Asset Management Services), but you can call it whatever you want, just modify the names below to suit your needs.  The table I created is named "ClientToolsLog", but again, that's not required, so you could name it "DogPoo" and it won't matter.

USE [AMS]

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[ClientToolsLog](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [ClientName] [varchar](50) NOT NULL,
 [ActionName] [varchar](50) NOT NULL,
 [Network] [varchar](50) NOT NULL,
 [Comment] [varchar](255) NULL,
 [AddedBy] [varchar](50) NOT NULL,
 [DateAdded] [smalldatetime] NOT NULL,
 [DateProcessed] [smalldatetime] NULL,
 [ResultData] [varchar] (50) NULL
)
GO

GRANT SELECT, INSERT, UPDATE, DELETE on ClientToolsLog TO amsManager
GO
 
GRANT SELECT on ClientToolsLog TO amsReadOnlyUser
GO

The Table Structure

Each of the columns has a purpose, so I'll explain them each below:
  • ID - This is used to identify the specific row in the table.  Because it's an integer value, and auto-incremented by 1, you don't specify a value for this field when inserting a new row. You only need it if you want to query, modify, or delete a specific row.
  • ClientName - (required) This stores the name of the target computer the action should be run against.  The processing script (shown below) reads this value to locate the client and invoke the requested action remotely.
  • ActionName - (required) This is where the specific action name is entered.  I use my own abbreviated codenames to save on space (this log can easily grow very quickly with multiple users!).  For example, I use "MACHINE_POLICY" to indicate "Machine Policy Retrieval and Evaluation", and "HWINV" to indicate "Hardware Inventory Cycle", and so on. (see image below for the list of default available actions for ConfigMgr 2012 clients)
  • Network - (required) This is for storing the AD domain name or the CM site name; the choice is yours and it really doesn't matter.  I made it mandatory, but you can modify "NOT NULL" to "NULL" if you prefer.  It's just there to enable filtering on specific environments when needed.
  • Comment - (optional) This is for entering a comment if desired. I had initially intended this to be a [textarea] field on the web form, but decided to skip it to avoid unnecessary data.
  • AddedBy - (required) This stores the username of the person who submitted the request from the web site form.  For this to work, you MUST enable "Windows Authentication" in IIS for the web site or the virtual folder.  If you leave it on "Anonymous" there won't be any way to track who the user was unless you build in forms-based authentication (yuck!)
  • DateAdded - (required) This stores the date and time when the request was submitted
  • DateProcessed - This is initially NULL until the script comes along and processes the request, at which time it enters the date and time it was completed.
  • ResultData - This is also initially NULL until the script updates the row when the request has been processed.
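
For illustration, here's a hedged PowerShell sketch of queueing a request the way the portal's insert works (my portal uses classic ASP, but the SQL is the same; the computer name, action, network, user and password values below are hypothetical placeholders):

$cs  = "Server=DB1;Database=AMS;User ID=amsManager;Password=<password>;"
$sql = "INSERT INTO dbo.ClientToolsLog (ClientName, ActionName, Network, AddedBy, DateAdded) " +
       "VALUES ('WS001','HWINV','CONTOSO','jsmith', GETDATE())"
$conn = New-Object System.Data.SqlClient.SqlConnection $cs
$conn.Open()
$cmd  = $conn.CreateCommand()
$cmd.CommandText = $sql
[void]$cmd.ExecuteNonQuery()
$conn.Close()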

Security

I chose SQL accounts for this setup, which requires mixed-mode authentication; you could use Windows accounts instead.  I do a lot of things by force of habit, and SQL accounts are pretty common in my work, so I tend to use mixed-mode setups.  In any case, I have two user accounts for this system:
  • amsManager - This account has rights to SELECT, INSERT, UPDATE and DELETE data and rows in the table.  I use this account from within the web application to insert new records, and it's used in the script (discussed later) to update the rows when requests are processed.
  • amsReadOnlyUser - This account only has SELECT rights, and is used for any applications/scripts/processes where someone needs to be able to consume (read) the data but not have the ability to modify or delete anything.

The Script

Now that the database is created, the table created and the permissions applied to the table, the next step is getting a script to work with it to do the heavy-lifting.  You can do this with almost any language, including PowerShell, VBscript, KiXtart, Perl, Python or whatever.  As long as the language you choose can do the following things it should work fine:
  • Open a database connection to query (read) and update data in the rows.
  • Execute shell operations to call external .exe applications (SendSchedule.exe), as well as invoke COM interfaces such as WMI and SWBEM requests.
Again, out of habit, I chose VBScript.  I was going to do it with PowerShell, but I got lazy.  Here's the code, but I have to mention that one key "action" is left out for now, and that's the "Re-Run Advertisement" option.  The reason is that I'm still working on this part and having some challenges.  When I get it working reliably and consistently I will post an update:

'****************************************************************
' Filename..: ams_client_tools.vbs
' Author....: David M. Stein
' Date......: 11/11/2012
' Purpose...: invoke ConfigMgr Agent "client actions" on remote clients
'             using a SQL table and WMI invocation
' SQL.......: DB1\AMS
' Comment...: Beware of line-wrapping!  If I wrap it I used [& _]
'****************************************************************
Dim query, conn, cmd, rs, objShell, scriptPath, recID, objFSO

' controls DebugPrint output
Const verbose = True

' database connection
Const dsn = "DRIVER=SQL Server;SERVER=DB1;database=AMS;UID=amsManager;PWD=P@ssw0rd$123;"

' database table name
Const strTable = "dbo.ClientToolsLog"

'------------------------------------------------------------
scriptPath = Replace(wscript.ScriptFullName, "\" & wscript.ScriptName, "")
'------------------------------------------------------------
' constants used by this script (abridged format)
'------------------------------------------------------------
Const adOpenDynamic = 2
Const adOpenStatic = 3
Const adLockReadOnly = 1
Const adLockPessimistic = 2
Const adLockOptimistic = 3
Const adUseServer = 2
Const adUseClient = 3
Const adCmdText = &H0001
Const adStateClosed = &H00000000
Const adStateOpen = &H00000001
Const ForReading = 1
Const ForWriting = 2
Const ForAppend = 8
Const TristateUseDefault = -2
Const TriStateTrue = -1
Const TriStateFalse = 0

'------------------------------------------------------------

DebugPrint "info: begin processing..."

Set objShell = CreateObject("Wscript.Shell")

query = "SELECT * FROM " & strTable & _
  " WHERE DateProcessed IS NULL ORDER BY ID"

Set conn = CreateObject("ADODB.Connection")
Set cmd  = CreateObject("ADODB.Command")
Set rs   = CreateObject("ADODB.Recordset")

On Error Resume Next
conn.ConnectionTimeOut = 5
conn.Open dsn
If err.Number <> 0 Then
  wscript.echo "fail: database connection failed"
  wscript.quit(err.Number)
Else
  On Error GoTo 0
End If

rs.CursorLocation = adUseClient
rs.CursorType = adOpenStatic
rs.LockType = adLockReadOnly

Set cmd.ActiveConnection = conn
cmd.CommandType = adCmdText
cmd.CommandText = query
rs.Open cmd

If Not(rs.BOF And rs.EOF) Then
  xrows = rs.RecordCount
  counter = 0
  Do Until rs.EOF
    recID    = rs.Fields("ID").value
    compName = rs.Fields("ClientName").value
    actName  = rs.Fields("ActionName").value
    actCode  = ClientActionCode(actName)
    addBy    = rs.Fields("AddedBy").value
    DebugPrint "record id...... " & rs.Fields("ID").value
    DebugPrint "client name.... " & compName
    DebugPrint "action name.... " & actName
    DebugPrint "action code.... " & actCode
    DebugPrint "requestor...... " & addBy
    DebugPrint "request date... " & rs.Fields("DateAdded").value
    DebugPrint "network........ " & rs.Fields("Network").value
    If IsOnline(compName) Then
      retval = ExecAction(compName, actName, actCode, addBy)
    Else
      DebugPrint "result......... offline!"
      retval = 100
    End If
    DebugPrint "result......... " & retval
    MarkRecord recID, retval
    counter = counter + 1
    DebugPrint "-------------------------------------------"
    rs.MoveNext
  Loop
  DebugPrint "info: " & counter & " processed"
Else
  DebugPrint "info: no records found"
End If

rs.Close
conn.Close
Set rs = Nothing
Set cmd = Nothing
Set conn = Nothing

'------------------------------------------------------------
' function: return datestamp formatted for log file use
'------------------------------------------------------------

Function LogTime()
  LogTime = FormatDateTime(Now, vbShortDate) & " " & _
    FormatDateTime(Now, vbLongTime)
End Function

'------------------------------------------------------------
' function: return TRUE if computer responds to a PING request
' note: this feature can be impacted by firewall settings!
'------------------------------------------------------------

Function IsOnline(strComputer)
  Dim objWMI, objPing, query, objStatus
  IsOnline = False
  If strComputer <> "" Then
    query = "SELECT * FROM Win32_PingStatus WHERE Address='" & strComputer & "'" 
    Set objWMI  = GetObject("winmgmts:{impersonationLevel=impersonate}")
    Set objPing = objWMI.ExecQuery(query)
    For Each objStatus in objPing
      If Not(IsNull(objStatus.StatusCode)) And objStatus.StatusCode = 0 Then
        IsOnline = True
      End If
    Next
  End If
End Function

'------------------------------------------------------------
' function:
'------------------------------------------------------------

Function ClientActionCode(actionName)
  Select Case actionName
    Case "MACHINE_POLICY":
      ' Machine Policy Retrieval and Evaluation Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000021}"
    Case "HWINV":
      ' Hardware Inventory Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000001}"
    Case "SWINV":
      ' Software Inventory Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000002}"
    Case "DISCOVERY":
      ' Discovery Data Collection Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000003}"
    Case "RERUN_ADV":
      ' Re-Run Advertisement
      ClientActionCode = "RERUNADV"
    Case "INST_SOURCE":
      ' Windows Installer Source List Update Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000032}"
    Case "UPDATE_SCAN":
      ' Software Updates Scan Cycle
      ClientActionCode = "{00000000-0000-0000-0000-000000000113}"
    Case "AMT_PROV":
      ' AMT Auto Provisioning Policy / Out-of-Band Mgt Scheduled Event
      ClientActionCode = "{00000000-0000-0000-0000-000000000120}" 
    Case "BRANCH_DP":
      ' Branch Distribution Point Maintenance Task 
      ClientActionCode = "{00000000-0000-0000-0000-000000000062}"
    Case "UPDATE_DEP":
      ' Software Updates Deployment Evaluation Cycle 
      ClientActionCode = "{00000000-0000-0000-0000-000000000108}"
    Case "SW_METERING":
      ' Software Metering Usage Report Cycle 
      ClientActionCode = "{00000000-0000-0000-0000-000000000031}"
    Case "USER_POLICY":
      ClientActionCode = "{00000000-0000-0000-0000-000000000027}"
    Case Else:
      ClientActionCode = ""
  End Select
 
  ' list of codes for future inclusion...
  '
  '{00000000-0000-0000-0000-000000000010}  File Collection
  '{00000000-0000-0000-0000-000000000021}  Request machine assignments
  '{00000000-0000-0000-0000-000000000023}  Refresh default MP
  '{00000000-0000-0000-0000-000000000024}  Refresh location services
  '{00000000-0000-0000-0000-000000000025}  Request timeout value for tasks
  '{00000000-0000-0000-0000-000000000026}  Request user assignments
  '{00000000-0000-0000-0000-000000000032}  Request software update source
  '{00000000-0000-0000-0000-000000000061}  DP: Peer DP status report
  '{00000000-0000-0000-0000-000000000062}  DP: Peer DP pending status check
  '{00000000-0000-0000-0000-000000000111}  Send unset state messages
  '{00000000-0000-0000-0000-000000000112}  Clean state message cache
  '{00000000-0000-0000-0000-000000000114}  Refresh update status
End Function

'--------------------------------------------------------
' function: execute the requested client action on the remote computer
'--------------------------------------------------------

Function ExecAction(clientName, actionName, actionCode, userID)
  Dim strCmd, result

  DebugPrint "info: executing action request for " & clientName

  If actionCode = "RERUNADV" Then
    ' result = RerunAdv(compName, advID)
    ' [[ I will cover this in part 4 of this article ]]
    result = 200 ' denotes request was ignored (for now)
  Else
    strCmd = scriptPath & "\SendSchedule.exe " & actionCode & " " & clientName
    wscript.echo "info: command = " & strCmd
    result = objShell.Run(strCmd, 1, True)
  End If

  '--------------------------------------------------------

  ExecAction = result

End Function

'------------------------------------------------------------
' function: update the table row with the result code and completion timestamp
'------------------------------------------------------------

Sub MarkRecord(recID, pVal)
  Dim query, conn, cmd, rs

  wscript.echo "info: marking record completed..."

  DebugPrint "info: id = " & recID & " / result = " & pval

  query = "SELECT * FROM " & strTable & " WHERE id=" & recID
 
  Set conn = CreateObject("ADODB.Connection")
  Set cmd  = CreateObject("ADODB.Command")
  Set rs   = CreateObject("ADODB.Recordset")
 
  On Error Resume Next
  conn.ConnectionTimeOut = 5
  conn.Open dsn
  If err.Number <> 0 Then
    wscript.echo "fail: connection failed"
    wscript.quit(err.Number)
  Else
    On Error GoTo 0
  End If
 
  rs.CursorLocation = adUseClient
  rs.CursorType = adOpenDynamic
  rs.LockType = adLockPessimistic
 
  Set cmd.ActiveConnection = conn
 
  cmd.CommandType = adCmdText
  cmd.CommandText = query
  rs.Open cmd
 
  If Not(rs.BOF And rs.EOF) Then
    rs.Fields("DateProcessed").value = Now
    rs.Fields("ResultData").value = pVal
    rs.Update
  Else
    DebugPrint "error: no records found"
  End If
 
  rs.Close
  conn.Close
  Set rs = Nothing
  Set cmd = Nothing
  Set conn = Nothing

End Sub

'------------------------------------------------------------
' function: verbose echo printing
'------------------------------------------------------------

Sub DebugPrint(s)
  If verbose = True Then
    wscript.echo s
  End If
End Sub

What The Script Does

As I mentioned before, each time the Scheduled Task runs, it calls the script.  The script performs the following actions, in the order listed below:
  • Opens a Connection to the database using ADO (COM) with SQL user permissions
  • Submits a Query for all rows where the DateProcessed value is NULL (indicating the request has not been processed yet).  The results are obtained as an ADO RecordSet object.
  • Iterates the RecordSet rows to get the remote Computer Name and ActionName fields, which determine the specific things that need to be done for the requested action (for example: looking up the Action Code GUID)
  • Initiates a WMI (Win32_PingStatus) request to determine if the remote computer is online.
    • If not online, the ResultData column is updated with a value to indicate the client was offline
    • If online, the Action is processed...
  • Executes the requested Action:
    • If a "Client Action" is requested: Open a Shell session using WScript Shell object (COM) and executes the SendSchedule.exe application with the appropriate GUID for the Action and the name of the remote computer.  Gets the result/exit code from the SendSchedule process.
    • If "Re-Run Advertisement" is requested:  (to be continued)
  • Updates the database table row by entering the appropriate result code (ResultData) and the timestamp of the completion (DateProcessed)
  • Exits
Not really complicated, actually.  This is a pretty straightforward and common process for interacting with database tables via ADO.  You could separate the requests and the results into two tables if you prefer, but I'm not shooting for 3NF or 4NF here.  I'm too lazy for that much work.
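
In case it helps to picture the back-end table itself, here's a minimal one-time setup sketch.  The column names come straight from the script above, but the table name ("ClientActions"), the data types, and the SQL Server assumption are all mine, so adjust to suit your environment:

' hypothetical one-time setup: creates the request table the script reads and updates
' (column names match the script above; table name and data types are assumed)
Set setupConn = CreateObject("ADODB.Connection")
setupConn.Open dsn   ' same DSN string the main script uses
setupConn.Execute "CREATE TABLE ClientActions (" & _
  "ID INT IDENTITY(1,1) PRIMARY KEY, " & _
  "ClientName VARCHAR(64) NOT NULL, " & _
  "ActionName VARCHAR(32) NOT NULL, " & _
  "AddedBy VARCHAR(64) NOT NULL, " & _
  "DateAdded DATETIME DEFAULT GETDATE(), " & _
  "Network VARCHAR(32) NULL, " & _
  "DateProcessed DATETIME NULL, " & _
  "ResultData VARCHAR(16) NULL)"
setupConn.Close
Set setupConn = Nothing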

The Scheduled Task

This is where the Security aspect comes into play.  You need to execute the script under a context which has permissions to invoke the Configuration Manager Agent on remote computers over your network from a WMI interface.  I created a special Domain user account for this and added it to the local Administrators group on every desktop and laptop computer using Group Policy and Restricted Groups.

Before setting up the Scheduled Task, I highly recommend testing the script directly.  Open a session (interactive login or use RunAs to open a CMD console) under the credentials of the user account you intend to use for the Scheduled Task.  Test the script until you are satisfied it works correctly.
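
For example, something along these lines (the account and script names here are placeholders, obviously):

runas /user:MYDOMAIN\svc-cmactions cmd.exe
(then, from the new console that opens...)
cd /d C:\Scripts
cscript.exe //nologo clientactions.vbs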

As a force of habit, I use a simple BAT script to wrap my calls to VBScript so I can pipe the output (wscript.echo or DebugPrint results) to a log file if I want.  Or you can do it from within the VBScript code using basic FileSystemObject (FSO) methods if you prefer.  Either way, it can be helpful to generate a log file for diagnosing issues, such as the database being unavailable when the scheduled task fires.
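
If you go the FSO route, a minimal sketch might look something like this (the log path is an assumption, and you would call WriteLog from DebugPrint or wherever suits you):

' minimal FSO logging sketch (log path is assumed - change it to taste)
Const logFile = "C:\Logs\clientactions.log"

Sub WriteLog(s)
  Dim objFSO, objLog
  Set objFSO = CreateObject("Scripting.FileSystemObject")
  ' ForAppend (8) is defined in the constants above; True = create the file if missing
  Set objLog = objFSO.OpenTextFile(logFile, ForAppend, True)
  objLog.WriteLine LogTime() & "  " & s
  objLog.Close
  Set objLog = Nothing
  Set objFSO = Nothing
End Sub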

The Schedule you choose is entirely arbitrary.  I run mine at ten (10) minute intervals all day, every day.  It also doesn't matter how you choose to create the Scheduled Task.  You can obviously use the GUI, or do it from the command line using SchTasks.exe, or from a script or whatever.
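
If you take the SchTasks.exe route, a one-liner along these lines would do it (the task name, script path, and account are placeholders; /RP * prompts you for the password):

schtasks /Create /TN "CM Client Actions" /TR "cscript.exe //nologo C:\Scripts\clientactions.vbs" /SC MINUTE /MO 10 /RU MYDOMAIN\svc-cmactions /RP *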

Summary

Everything I've covered here is essentially the "back-end" of the process.  I hope you find it useful.  Let me know by posting a comment below.  In the next part of this article, I will delve into the web form and the user-interaction aspects.

Wednesday, October 10, 2012

Software Feature Entropy Cycles - Part 2, Example

I figured I might need to provide some concrete elaboration on my previous post about "Software Feature Entropy Cycles", so here goes.

Back in the late 1980's, while working for a "Naval Architecture and Engineering Firm", I began my career as a programmer.  According to my IRS tax return, my job was "Senior Engineer Technician", which was basically a glorified drafter who also did some calculations.  My real job was writing tons and tons of LISP code for AutoCAD R10 and R11, and eventually R12 and onward, to automate design processes for piping and HVAC systems on U.S. Navy warships.

One of the interesting aspects about the world of CAD is that every niche industry has evolved its own unique "standards" of design and drafting.  From sheet borders, to dimensions and callouts, to tables and lists, to fonts and font sizes, to colors and layers.  You name it, I've seen every bizarre permutation of "standards" you can imagine. Metric vs. Standard. All Modelspace vs. Modelspace/Paperspace vs. all Paperspace.  Microscale and Macroscale.  Orthographic and Isometric and Oblique and Perspective, and whatever.

Some of the "standards" required for U.S. Naval design drawings were "NEVER CROSS CALLOUT LEADERS" and "NEVER CROSS A DIMENSION FEATURE WITH A CALLOUT LEADER".  Of course, in reality that wasn't always possible.  Some drawings contains such complicated spewage of goo (technical term for tons and tons of shit, making the end result difficult to read and make sense of), so breaking those "rules" was not only hard to avoid, it was downright required.

So another (much better) programmer named Brad and I went to work making some LISP routines to automatically detect when a Leader crossed another Leader or Dimension, and automatically break out a "gap" at the intersection.  Later versions of AutoCAD actually supported "real" LEADER entities, and also added the GROUP entity, so we updated our code to break the leader, remove the arrow head from the label leg, and apply both parts into a single GROUP entity to retain some behavioral integrity.  It worked pretty well, actually.

Then Autodesk released "Bonus Tools", later renamed "Express Tools", which included a leader-gap feature that worked as well as ours, maybe better.  So we deprecated our code and moved on.  This is a fairly typical iterative process for most developers: as the base product/technology/platform gains new features which previously required custom extensions, those custom extensions become unnecessary and deprecated.  Remaining feature "gaps" continue to be filled by custom extensions (aka program code), newly-identified gaps are addressed with new extensions, and the cycle continues.

That's a pretty simple, yet concise example of a Software Feature Entropy Cycle.

Friday, October 5, 2012

Software Feature Entropy Cycles

I'm overdue for a long-winded article.  I used to post a lot more of these kinds of brain dribble, but I took a break from it for reasons unknown.  You've been fortunate.  Tonight, I just finished playing whiffle ball, eating spaghetti, and lying on the couch watching The Sandlot on TV.  My brain juices are flowing, so you know what that means...

There's a lot of talk about the "life-cycle" of software, and "life-cycle management", yada yada yada.  But there's another "cycle" within software technology that doesn't get talked about much.  This "cycle" actually has more impact on professional lives and the course of technology than most of the other "cycles".  If I were to borrow a bit from Kevin Kelly's "What Technology Wants" (a great book, by the way), I'd probably coin the term I'm talking about as "Feature Entropy".

Before you start thinking I've lost my mind, or that I'm blowing smoke, hold on and hear me out.  This will actually make sense.

Programmers everywhere make their living filling in gaps. Feature gaps.  The gaps that exist between what a retail software product (or service, or technology) offers, and what you or your customers/users really need. Whether it's custom templates for content creation, or formulas for spreadsheets, or reports from databases, or components to connect services, or applications to solve problems.  Whatever.  So, you install Windows, or OSX or Ubuntu or Android, or iOS, whatever, and you discover you need something to solve a problem.  Maybe it's just automating a common task; maybe it's for pure entertainment.  Maybe it's to earn additional income.

This is when you start brainstorming and designing a solution.  From that design, you start tinkering, and building key pieces of this puzzle.  Soon you have portions linked together and things start to take shape. You get excited, and realize it's almost 3:00 AM and you're still driven to write more code, but you're about to crash.  You wake up, stretch, hit the bathroom, eat and start coding again.

Sound familiar?

This is actually the first phase, or "phase one" of the entropy cycle.  Phase Two is when your solution is running on its own legs and enters production.  Phase Three is about maturity and refinement.  Phase Four is about absorption.  By that, I mean when the vendor realizes that a lot of developers are building the same general type of solution for a large install base.  The vendor realizes that there is value to be gained in absorbing that solution into the next version of their product.  The next version comes out and, if customers find value in the improvements (versus the perceived challenges of new changes), they buy the upgrade and now that class of "solutions" you and many other developers were thriving on is no longer required.  That would be Phase Five, or "closure", in some respects.

That may sound like doom and gloom, but the machine rolls on.  Developers size up the new version and find yet more gaps that need filling and they continue on.  Nothing is truly a "one-size-fits-all" solution for all customers.  That is what makes it possible for developers to exist.  It's why builders build custom houses, and why auto-detailers customize cars.  People want, even need, for things to be a certain way in order to fulfill their needs.

The entropy here is the logical flow of features from outside to inside.  Outside being the developer community at-large.  Inside being the vendor who makes the base product or technology upon which the developers are building their tools that fill the gaps.  As features are incorporated into base, and versions roll onward, the flow continues inward.

If you really want to get technical about this "flow" aspect, it would be more accurate to describe it as toroidal.  That's right, a Toroid.  This is because there is also an outward flow aspect to this cycle.  Vendors invest quite a bit into extensibility, not because it makes customers feel good, or to be altruistic.  They do it because it helps to grow an ecosystem that spreads the use of their products, while also building a community from which to cultivate ideas and features to be absorbed into future versions and future products/services.  It's kind of like scattering seeds in a field and watching the flowers grow, especially if the person spreading the seeds is a florist.

Entropy is really not the best word, because it implies decay or destruction, but in some references it can imply a sense of equilibrium.  It's that aspect that made me think of it.  Inflow or Convergence might be better words in some respect, I don't know.  Basically, there's an outward flow of technology, extensibility, and support which germinate as a community and grow into a revenue stream and ecosystem.

There's an inward flow of ideas, concepts and features, which help to grow the core of the revenue stream: products and services.  As ideas, concepts and features are plucked from the outside and incorporated into the inside, new ideas, concepts and features rise to the top, and the cycle continues.

The circle of (software) life.  Please don't start singing that movie soundtrack, mmmkay?

I promise future posts will be more coherent and entertaining.

Wednesday, June 27, 2012

Top-Down vs Bottom-Up

I promised "more to come" yesterday, so here goes... (warning: mindless rambling begins now)

In general terms, within larger organizations (e.g. corporate or government environments), there exist two broad approaches to "software development":

  • Top-Down
  • Bottom-Up

Top-Down

When you have the luxury of working in a structured team scenario, especially (possibly ONLY) when it consists of talented people that ALSO get along VERY well, you can take the time to plan and proceed in a logical manner.  By this, I mean: gather requirements, assess the status quo against the desired outcome, determine gaps, resources, timelines, etc.  And you may even have the luxury of dedicated project managers, program managers, developers, architects, test groups, test procedures, CMMI and all that.

This notion of planning ahead, designing everything methodically, testing and more testing, is all part of the "top-down" approach.  This is the approach taught in schools, text books, lectures, and so on.  It's admirable and difficult to find fault in this concept.  But everything has potential drawbacks.

Bottom-Up

When you start coding within a short time of having an idea.  When you are faced with crisis-mode problems that demand your full attention to solve using whatever tools you have at hand.  When you don't have an elaborate structured environment to delegate tasks to.  When you like to create and evolve something, rather than plan it ahead.  All of these reasons, and many more, often lead immediately into a "bottom-up" development process.  Oftentimes this approach ends up at a crossroads with Top-Down ideals, where the developer(s) stop at 2.0 or 3.0 and decide to refactor, clean up, and document everything.  At that point, it often takes on a new direction that feels more like "top-down", even though it didn't start out that way.

So what's the best way to go?  There is no "best way".  There is only the "way" that works for your endeavors.  Sure, logically speaking, it's hard to argue that, with all the right pieces in place, a "top-down" process isn't the better option.  But a lot (repeat: A LOT) of developers do not have such luxuries.  And even more of them have personal leanings towards "bottom-up" because it suits their creative process.  Is that wrong?  Who knows.

I've worked in both camps for quite a bit of time.  There are aspects of each I like and dislike.  Sometimes I compare them to cooking with gas versus a wood fire.  One is simpler to get going, the other has a nicer feel to it.

One subtle, often overlooked, yet serious drawback to the "top-down" approach is the timeline.  With a more rigorous application of metric-oriented planning and execution comes a long duration (start to finish).  While that may seem like an obvious cost risk, the other side of this (the part I propose as being "overlooked") is the budget window constraint.  I've seen plenty of large-scale development projects fall to the cutting room floor because the timeline ran afoul of an ever-scrutinized budget.  Many times it happens before any code has been written.  Great ideas on paper, in a server shared folder, in SharePoint or some other repository, being hashed and vetted and showing immense promise, only to slide unknowingly under the falling axe of a budget cutback.

On the flip-side is the "bottom-up" approach.  Sometimes viewed as the "shooting from the hip" or "wild west show" approach.  Get the code moving sooner and work out the kinks as they come up.  Give and take with the end users.  It is exciting to work in that fold.  I much prefer meeting users face to face to sifting through survey reports, forum threads, and e-mails.

As Chris Curran states*: 
"While Agile and CMMI can coexist, there are limits.  Agile practices can normally function with CMMI levels 1 to 3 but are usually incompatible with the higher maturity levels 4 and 5. At CMMI levels 4 and 5, the intrusion of documentation into the development process over-formalizes Agile’s internal discipline and Agile ceases to be agile."
Everything has limits, obviously.  You can't fit either of these approaches to every situation.  There are many stories involving Facebook, Twitter and other recent major ideas where the nexus of their success was taking an unorthodox or hybrid approach to their entire inception and debut.  Stop and think about every aspect of the way you are currently approaching software projects.  Are there things you wish could be improved?  Eliminated?

I need sleep.  Cheers!

* "Are Agile and CMMI Compatible?" - http://www.ciodashboard.com/it-processes-and-methodologies/agile-cmmi-compatible/

Tuesday, June 26, 2012

Top-Down, or Bottom-Up? The Yin and Yang of Software Development

I plan on digressing into this topic more heavily in the near future.  Having worked in IT for quite a few years, and much of it in a software development capacity, I've developed some personal and professional perspectives on two diametrically opposed approaches to building applications:

  • From the Top - Down
  • From the Bottom - Up

Each of the two has obvious and not-so-obvious advantages, as well as drawbacks.  The weight of each (advantage or drawback) varies by the scale of the environment, and the significance of resources laid upon the providers and the consumers of the application.  Terms come into play like Agility, CMMI, SDLC, Test-driven, and so on.  When do they help?... and when do they not?  How is it that one approach, or the other, becomes inevitable within some organizations?

Stay tuned...

Friday, April 13, 2012

Packaging Exam: Part 3

The stupidity continues on...

You've installed a 32-bit application on a 64-bit Windows 7 Enterprise computer.  You want to track down all the places you would likely find its "footprint".  Which locations would you select?

A. %ProgramFiles%
B. %ProgramFiles(x86)%
C. %CommonProgramFiles%
D. %CommonProgramFiles(x86)%
E. %ProgramData%
F. %WINDIR%\System32
G. Registry: HKLM\SOFTWARE
H. Registry: HKLM\SOFTWARE\WOW6432node
I. Registry: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
J. Registry: HKLM\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall
K. Registry: HKEY_CLASSES_ROOT


Answer:

(I'm not telling)  moo-ha-ha-haaaa!  Post your answers below...