ConfigMgr 2012 SP1 with MDT 2012 Update 1, UDI Task Sequence and Bitlocker Error 6767

Redmond, we have a problem.  The “Client Task Sequence” template when used with Configuration Manager 2012 SP1 does not work too well when UDI is enabled and we want to Bitlocker our devices.

When we are performing a User Driven Installation, the MDT 2012 Update 1 template makes use of the ConfigMgr “Pre-provision Bitlocker” step, which adequately pre-provisions Bitlocker on the Operating System drive on a ‘used space only’ basis.  This is a good thing.  Later on in the Task Sequence we have an MDT-specific step that uses the ZTIBde.wsf script, which SHOULD configure and enable the protectors – but you may find that this will fail, resulting in an exit code of 6767.  Tracing this error in the ZTIBde.wsf script takes us into a failure of the function that enables the Bitlocker protectors.  So what is going wrong?

Let us first understand that ZTIBde.wsf was designed primarily for MDT deployments – with ConfigMgr integration added as an afterthought.  The ZTIBde.wsf script ON ITS OWN is capable of pre-provisioning Bitlocker in Windows PE AND ALSO, later (in the OS phase), configuring and enabling the protectors.  It’s a great little script.  In the MDT Client Task Sequence template, however, the ZTIBde.wsf script is not used within the Windows PE phase to pre-provision Bitlocker; the in-built ConfigMgr function is used instead.  The ConfigMgr function will pre-provision Bitlocker, but it does not then set the variable that the ZTIBde.wsf pre-provisioning function would – a variable that is needed and consumed later by the ZTIBde.wsf script when it comes to enabling the Bitlocker protectors in the OS phase.

The ZTIBde.wsf script has a section which checks whether Bitlocker is enabled on the drive (IsBDE = TRUE) and whether the drive has NOT been pre-provisioned earlier (IsBDEPreProvisioned <> TRUE).  This test is there to accommodate the refresh scenario, where the existing suspended protectors of a Bitlocker drive can simply be turned back on without having to configure any new ones.  In the absence of the IsBDEPreProvisioned variable (or if it doesn’t equal TRUE), this section of code will cause the script to skip over the configuration of any required protectors (specified in the UDI wizard) and simply try to enable any existing protectors – of which there are none – resulting in error 6767 being spat out by the EnableProtectors function.

So what do we need to do about this?  Well, you could amend the script – but that wouldn’t be supported by Microsoft, and your modifications could well be wiped out by any future revision (which may or may not address these shortcomings).  There are a few (non-exhaustive) alternatives that I could propose:

  1. If you are using the UDI Wizard AND you want the Bitlocker options to come through from it, then you should replace the “Pre-provision Bitlocker” step in the task sequence that uses the ConfigMgr function with the ZTIBde.wsf script – the very same script used later on to “Enable Bitlocker”.  This keeps the Pre-provisioning process and the Enable Bitlocker process solely in the MDT camp.  However, if you want the Recovery Key to be backed up to Active Directory then you will need to use a script afterwards to do that, or implement MBAM.
  2. If you are using the UDI Wizard AND are removing the Bitlocker configuration page but still want Bitlocker enabling then replace the “Enable Bitlocker” step in the task sequence with the ConfigMgr function – as you can specify the protectors used and also escrow the Recovery Key to Active Directory automatically.
  3. If you are NOT using the UDI Wizard but want to implement Bitlocker in your Task Sequence, then stick with the ConfigMgr step for pre-provisioning Bitlocker and replace the ZTIBde.wsf method for Enabling Bitlocker with the ConfigMgr function – for the same reasons as above.  In this scenario, you would need to ensure that your partitioning layout is suitable for Bitlocker.
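For option 1, the “script afterwards” that escrows the Recovery Key to Active Directory can be as simple as a couple of manage-bde commands.  A rough sketch only – the protector ID below is a placeholder (pull the real one from the -get output), and AD DS backup must be permitted by the relevant BitLocker Group Policy:

```bat
REM List the recovery password protector(s) on the OS drive and note the ID
manage-bde -protectors -get C: -type RecoveryPassword

REM Back that protector up to Active Directory (placeholder GUID shown)
manage-bde -protectors -adbackup C: -id {DFB478E6-8B3F-4DCA-9576-C1905B49C71E}
```

You could run this as a “Run Command Line” step near the end of the task sequence, once the protectors have been enabled.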

Have fun with that!


ConfigMgr 2012 SP1: KB2801987 to fix MicrosoftPolicyPlatformSetup.msi Authenticode Signature

You should apply this hotfix as soon as possible to your Configuration Manager 2012 SP1 environments;

This is a server-side install only and you do not need to push anything out to the clients. The patch will update the existing MicrosoftPolicyPlatformSetup.msi in the client installation files folder.

Related to an earlier post of mine about having patience, you should VERY MUCH be patient after installing this hotfix, as it will trigger the re-installation of most, if not all, of the server site components in the background after the hotfix install has completed.  Use CMTrace to monitor your sitecomp.log and give it plenty of time to settle down before firing up the ConfigMgr console to do ‘stuff’.  My server took about 30 minutes to complete all the background tasks.  You should also check that your MP has been re-installed correctly by monitoring the MPSetup.log and the mpMSI.log.  My MPSetup.log indicated a 3010 exit code after re-installation, which means a reboot is required – you would be wise to do this immediately after the sitecomp.log has settled.


ConfigMgr MPControl.log: Call to HttpSendRequestSync failed for port 443 with status code 403, text: Forbidden

I was working in my Hyper-V lab this morning trying to PXE boot a client VM into a ConfigMgr Task Sequence but somehow things had just stopped working, overnight.  SMSPXE.log was showing me this;

sending with winhttp failed; 80072f8f
Failed to get information for MP: https://CON-CM1.contoso.local. 80072f8f.
PXE::DB_InitializeTransport failed; 0x80004005
Unspecified error (Error: 80004005; Source: Windows)

My MPControl.log had also, within minutes, gone from this (working);

>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync succeeded for port 443 with status code 200, text: OK

to this (broken);

>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync failed for port 443 with status code 403, text: Forbidden

So what happened here?

First things first, I wanted to isolate whether this was a problem with the Management Point component or the PKI setup – so I simply set the Management Point role to run as HTTP only.  Within minutes I was seeing a working management point in the MPControl.log – so it was certificate related.

I looked on my Windows Server 2008 R2 Certificate Authority and there were no certificate revocations.  Maybe the client certificate is a bit screwed up I thought – so I deleted the Client Authentication Certificate from the Personal Store on the Management Point and tried to request a new one from the CA but received a failure that the Certificate Revocation Server was unavailable.  Weird.  A quick visit back to the CA and I stopped and restarted the CA service and tried the request again from the MP and it went through fine.

I changed the Management Point back to HTTPS and again within a few minutes I was seeing a working Management Point again in the MPControl.log.

Just goes to show that it isn’t always (actually, it isn’t USUALLY) Configuration Manager that is to blame when things aren’t working correctly.


ConfigMgr 2012: 64bit file system redirection bites again…

Even though the ConfigMgr 2012 client is supposedly 64bit now, the issue with 64bit file system redirection is still very much a problem during Task Sequence and even regular package/program deployments when we want to copy things to the ‘Native’ “%ProgramFiles%” or “%WinDir%\System32”.  File system redirection kicks in and we are magically transported to the 32bit “Program Files (x86)” or “Windows\SysWOW64”.  Boo hiss.  I even found a log entry in the client-side execmgr.log which clearly states;

Running "C:\Windows\ccmcache\3\CopyFiles-Temp.cmd" with 32bitLauncher

Why? Why? Why?

In the Task Sequence we can easily get around this problem by choosing to run a “Run Command Line” step instead of an Install Package step; we reference our package and command line and ensure we tick the box to “Disable 64bit file system redirection.”  This is all well and good but what about deployments outside of the Task Sequence?  There simply isn’t the same option, so we need to build this into our script/batch file that we are trying to run.  My borderline Obsessive Compulsiveness dictates that the single solution must ‘just work’ for both 32bit and 64bit operating systems.

The most elegant way I have found for doing this that doesn’t involve re-writing sections of your batch file to cater for the various operating environments, involves taking advantage of being able to invoke the ‘Native’ 64bit command processor (system32\cmd.exe) from within any 32bit command processor running within a 64bit OS if needed.  Here’s a snippet of code you simply add to the very top of your existing batch files…

  IF NOT DEFINED PROCESSOR_ARCHITEW6432 GOTO native
  ECHO "Re-launching Script in Native Command Processor..."
  %SystemRoot%\Sysnative\cmd.exe /c %0 %*
  EXIT /B %ERRORLEVEL%
  :native
  ECHO "Running Script in Native Command Processor..."
  REM Your script starts here

What this will do for you is first detect whether the batch file is being run from within a 32bit command interpreter on a 64bit OS – if it isn’t, then the code jumps to the “:native” label and continues your script as usual.  If we are in a 32bit command interpreter on a 64bit OS, then the script invokes the ‘Native’ 64bit command processor to run this very same batch file, and then exits.  The %PROCESSOR_ARCHITEW6432% condition near the top will simply be evaluated again by the Native command processor and ignored (as we are running a 64bit cmd.exe in a 64bit OS, or a 32bit cmd in a 32bit OS) and your script will happily continue without being subject to redirection.  Neat!
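If you want to see the WOW64 detection at work for yourself, start the 32bit command processor on a 64bit machine (%WinDir%\SysWOW64\cmd.exe) and inspect the two environment variables the trick relies on:

```bat
REM In a 32-bit cmd.exe on a 64-bit OS:
ECHO %PROCESSOR_ARCHITECTURE%
REM   prints x86
ECHO %PROCESSOR_ARCHITEW6432%
REM   prints AMD64 - this variable only exists under WOW64

REM In a native 64-bit cmd.exe (or on a 32-bit OS) the second variable
REM is undefined, which is exactly what the IF NOT DEFINED test keys on.
```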


ConfigMgr 2012 SP1: The Lord Giveth…

..but he’s taken away the ability to add collection variables to the default “Unknown Computers” collection.  Why God, why.

I like the convenience of using this collection to attach the “OSDComputerName” blank variable so that the deployment process for new ‘Unknown’ systems prompts for this information.  I now have to create another collection of my own for this variable, essentially containing the same two resources.

MDT 2012 Update 1: Missing “Request State Store” step causes state restore to fail from State Migration Point in REPLACE Scenario.

I have found that the MDT 2012 Update 1 Client Task Sequence Template is now missing a crucial step in the later stages of the task sequence which is needed to restore captured data from the ConfigMgr State Migration Point when working under the REPLACE scenario – and we need to add this back in.

You simply need to add a “Request State Store” step just above the new “Connect to State Store” step (which appears to just authenticate against the UNC path) and add a condition on the step to run only if the Task Sequence Variable “USMTLOCAL” not equals TRUE.

Also add a “Release State Store” step below the “Restore User State” task, again with the same condition as above.


Patch your ConfigMgr Boot Image for Advanced Format / 512e Drives

Advanced Format (AF or 512e) drives are out there, often fitted randomly from one model to the next.  I won’t go into the technicalities of what they are all about as Google will tell you this, but what I will tell you is that their presence can slow down the deployment rate on an affected system.

Firstly, if you are not sure whether your system is equipped with an AF drive (DELL include a bright orange note with the system, HP just seem to sneak them in) then you can download and run the following tool in the OS or in WindowsPE;

The tool will tell you if an AF drive is fitted and also if the partitions are ‘aligned’.
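If you prefer not to rely on a vendor tool, fsutil can report the sector sizes directly.  Note that the “Bytes Per Physical Sector” figure is only reported on Windows 8, or on Windows 7 SP1 once the hotfix discussed below is installed – on an unpatched OS it shows as “not supported”:

```bat
REM Run from an elevated prompt and look for the two sector size lines
fsutil fsinfo ntfsinfo C:

REM On a 512e Advanced Format drive the relevant output looks like:
REM   Bytes Per Sector  :              512
REM   Bytes Per Physical Sector :      4096
```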

When we use ConfigMgr and WindowsPE boot images to deploy systems with AF drives you may notice quite a slow down, especially in the “Apply Operating System Image” Task Sequence step.  There is a patch to be downloaded from Microsoft that should be installed within a fully patched Windows 7 SP1 OS image AND we must also incorporate this patch into our ConfigMgr boot images;

Once you have obtained the x86 and amd64 versions you can follow my guide below on how to update BOTH of your ConfigMgr boot images.  We will do the x86 boot image first and you just need to repeat the process for the amd64 image.

You should have the Windows Automated Installation Kit (WAIK) installed.  You can undertake this task on the ConfigMgr server as it will have the WAIK installed.  From your Start Menu, find the Microsoft Windows AIK\Deployment Tools Command Prompt and run it as Administrator.

Create the following structure on a drive of your choice with a good few GB free (where X = YourDriveLetter);

X:\WinPE
X:\WinPE\mount
X:\WinPE\patches
Copy both of the downloaded Windows6.1-KB982018-v3-x64.msu and Windows6.1-KB982018-v3-x86.msu to X:\WinPE\patches.  It does not matter that they are together as the patch injection process is clever enough to pick the right one.

Copy the boot.wim file (ignore the boot.xxx12345.wim) from <ConfigMgrInstallDir>\OSD\boot\i386 to X:\WinPE

From the Deployment Tools Command Prompt, run the following commands (replacing X:\ as appropriate);

DISM /Mount-Wim /WimFile:X:\WinPE\boot.wim /MountDir:X:\WinPE\mount /Index:1
DISM /Image:X:\WinPE\mount /Add-Package /PackagePath:X:\WinPE\patches
DISM /Unmount-Wim /MountDir:X:\WinPE\mount /Commit
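If you like, you can sanity-check that the hotfix actually went in before copying the file back – this is my own optional verification step, not part of the original procedure.  Remount the image read-only and list its packages:

```bat
REM Remount without the ability to change anything
DISM /Mount-Wim /WimFile:X:\WinPE\boot.wim /MountDir:X:\WinPE\mount /Index:1 /ReadOnly

REM Look for a Package Identity containing KB982018 in the output
DISM /Image:X:\WinPE\mount /Get-Packages

REM Discard the mount - nothing was changed
DISM /Unmount-Wim /MountDir:X:\WinPE\mount /Discard
```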

Rename the existing <ConfigMgrInstallDir>\OSD\boot\i386 .wim file to .old and copy up your replacement boot.wim from X:\WinPE.

From the ConfigMgr Console, right-click the x86 Boot Image and choose Update Distribution Points.  This will take our new boot.wim, re-integrate the ConfigMgr components, your modifications and drivers, and re-send it out to the DPs.

You should also click the “Reload” button on the Images tab of the boot image properties, so that the size changes will be reflected.

The size increases on the boot.wim files you should be looking for, if all went successfully, should be;
i386 boot.wim should increase by approx 10MB
x64 boot.wim should increase by approx 12MB

With a patched WindowsPE boot image, the “Apply Operating System Image” Task Sequence step, when run on an AF drive, should speed up considerably.  The speed increases I have observed on an HP Touchsmart system were as follows;

BEFORE: AF Drive non-patched took 28 minutes for the “Apply Operating System Image” step to complete (inc. download)
AFTER: AF Drive patched took 13 minutes for the “Apply Operating System Image” step to complete (inc. download)

So you can see, there are significant time savings to be made with a properly patched WindowsPE boot image on an AF drive equipped system.

Oddly, when an AF-equipped drive is partitioned and formatted using the boot images generated by ConfigMgr 2012, the Dell alignment tool states that the partitions are aligned correctly – however, without the patch the disk performance is still poor.