ConfigMgr 2007: The Self Signed Certificate Can Not Be Created Successfully

I’ve been scratching my head for the last hour or so at a customer site, having issues installing the PXE Service Point role onto some new Secondary Site Servers – getting the error “The Self Signed Certificate Can Not Be Created Successfully”.  This problem also affected the ability to extend the expiration time on the self-signed certificate on existing PXE-enabled site systems.

I spent considerable time tracing logs and file/object access, and repeatedly uninstalling/re-installing the PXE service point role.  Google searches for this error only returned results for people experiencing problems when re-installing the PXE service point role where it had previously existed.

When I was down to my last few strands of hair, in a moment of inspired clarity, I remembered a little popup balloon when I logged into the server stating that I was being logged on with a temporary profile (this organisation does not allow roaming profiles on their servers).  I quickly created a local administrative user, gave it some permissions within ConfigMgr, ran a new Console window under those credentials, and was able to add the PXE service point role without a problem.
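The workaround boils down to a couple of standard commands – the account name here is just an example:

```shell
:: Create a local admin account that will get a proper (non-temporary) local profile.
:: "cmadmin" is an example name; you will be prompted for a password.
net user cmadmin * /add
net localgroup Administrators cmadmin /add

:: Start a command prompt under those credentials, then launch the ConfigMgr
:: console from it so the role installation runs with a real cached profile.
runas /user:%COMPUTERNAME%\cmadmin cmd.exe
```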

So I can only assume that during generation of the self-signed certificate it is a requirement that a locally cached profile/folder exists for the user – perhaps only as somewhere to write temporary files.

ConfigMgr 101: Patience dear boy… SMS_SERVER_BOOTSTRAP

In the ConfigMgr world, one must learn patience – in particular when installing hotfixes, service packs and cumulative updates.  It is quite common for the installer GUI to complete and leave you under the false impression that your environment is ready to go again, when in fact the installer has triggered additional tasks which the SMS/ConfigMgr component manager still needs to handle.

I would certainly recommend using trace32/cmtrace to watch sitecomp.log when you are performing updates to your environment, as this will give you an idea of whether the component manager is initiating a re-installation of certain site components following an update.  You may see a flurry of activity mentioning the re-installation of components and the SMS_SERVER_BOOTSTRAP service.  When this log has settled back down to normal, then you can think about returning your environment to normal usage.
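Trace32/cmtrace will tail the log live as component manager works – the install paths below are assumptions, so adjust them to your environment:

```shell
:: Open sitecomp.log in the log viewer; it follows the file as new lines arrive.
:: Both paths are examples - point them at your toolkit and ConfigMgr install dirs.
"C:\Program Files\ConfigMgr 2007 Toolkit V2\Trace32.exe" "C:\Program Files\Microsoft Configuration Manager\Logs\sitecomp.log"
```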

ConfigMgr MPControl.log: Call to HttpSendRequestSync failed for port 443 with status code 403, text: Forbidden

I was working in my Hyper-V lab this morning trying to PXE boot a client VM into a ConfigMgr Task Sequence but somehow things had just stopped working, overnight.  SMSPXE.log was showing me this;

sending with winhttp failed; 80072f8f
Failed to get information for MP: https://CON-CM1.contoso.local. 80072f8f.
PXE::DB_InitializeTransport failed; 0x80004005
Unspecified error (Error: 80004005; Source: Windows)

My MPControl.log had also, within minutes, gone from this (working);

>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync succeeded for port 443 with status code 200, text: OK

to this (broken);

>>> Selected Certificate [Thumbprint 37d4c9502df29c6780a456597b5088d569ceca6b] issued to 'CON-CM1.contoso.local' for HTTPS Client Authentication
Call to HttpSendRequestSync failed for port 443 with status code 403, text: Forbidden

So what happened here?

First things first, I wanted to isolate whether this was a problem with the Management Point component or with the PKI setup – so I simply set the Management Point role to run as HTTP only.  Within minutes I was seeing a working management point in the MPControl.log – so it was certificate related.

I looked on my Windows Server 2008 R2 Certificate Authority and there were no certificate revocations.  Maybe the client certificate is a bit screwed up, I thought – so I deleted the Client Authentication certificate from the Personal store on the Management Point and tried to request a new one from the CA, but received a failure stating that the Certificate Revocation Server was unavailable.  Weird.  A quick visit back to the CA, a stop and restart of the CA service, and the request from the MP went through fine.
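In hindsight, certutil could have confirmed the revocation-checking problem straight away – export the MP’s client certificate to a file (the file name here is just an example) and verify it:

```shell
:: Verify the certificate chain and fetch every CRL/AIA URL listed in the cert.
:: A CRL that cannot be retrieved shows up clearly in the output.
:: mpcert.cer is an exported copy of the MP's client authentication certificate.
certutil -urlfetch -verify mpcert.cer
```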

I changed the Management Point back to HTTPS and again within a few minutes I was seeing a working Management Point again in the MPControl.log.

Just goes to show that it isn’t always (actually, it isn’t USUALLY) Configuration Manager that is to blame when things aren’t working correctly.


PITA – ATI Catalyst Drivers Installation

I love manufacturers who stubbornly refuse to conform to industry standards for driver and software deployment.  ATI and NVIDIA are two such culprits who make the installation of drivers for their products using widely used deployment tools a royal pain in the arse.  The driver .inf files can be easily extracted from the vendor-supplied software; however, when installed using driver injection and Plug and Play during Windows Setup they are not ‘completely’ installed, and if the first user of the system is not an administrator they will receive a prompt for elevation to complete the install.  This is unacceptable, guys!

So, we have to work with the vendor-supplied drivers in the format they were provided, using whatever silent/unattended methods the vendor offers.  ATI do not make this particularly easy with their Catalyst drivers, as they use an installer technology called ‘Monet’ – nope, I’d never heard of it either.  There also seem to be multiple ways to start the installation routine: Setup.exe, ATISetup.exe and InstallManagerApp.exe – so which do we use?

After several hours of mucking around trying to get one of these to install the drivers during a task sequence, I can proudly put my name to a command line that actually works!  Create a standard package that contains the extracted files from the vendor-supplied install files, and create an ‘install.cmd’ file that contains the following;
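A sketch of what install.cmd contains – the exact executable and folder layout here are assumptions based on the extracted Catalyst package, but the key piece is the /UNATTENDED_INSTALL option:

```shell
:: install.cmd - hedged sketch, not necessarily the exact original line.
:: %~dp0 expands to the package's own folder on the distribution point.
:: /UNATTENDED_INSTALL points the Monet installer at the packages to install;
:: ending the path at \Drivers installs the display driver only.
"%~dp0Bin\InstallManagerApp.exe" /UNATTENDED_INSTALL:"%~dp0Packages\Drivers"
```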


Create a program that runs the ‘install.cmd’ file (Run Hidden, Whether or not a user is logged on, Allow TS Deployment) and add this as an ‘Install Package’ step to your Task Sequence.  You should enable the ‘Continue on error’ option on this step, as the ATI installer will exit with a non-zero exit code even if the drivers install successfully.

In the command line above, I am choosing to only install the drivers and not the associated ‘crap’ that comes with them – but if you want more than just the drivers then just amend the /UNATTENDED_INSTALL option and take off the ‘\Drivers’ at the end of the path.


Patch your ConfigMgr Boot Image for Advanced Format / 512e Drives

Advanced Format (AF or 512e) drives are out there, often fitted seemingly at random from one model to the next.  I won’t go into the technicalities of what they are all about, as Google will tell you that, but what I will tell you is that their presence can slow down deployment on an affected system.

Firstly, if you are not sure whether your system is equipped with an AF drive (Dell include a bright orange note with the system; HP just seem to sneak them in), you can download and run the following tool in the OS or in WindowsPE;

The tool will tell you if an AF drive is fitted and also if the partitions are ‘aligned’.
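If you’d rather not download anything, fsutil on Windows 7 SP1 / Server 2008 R2 SP1 can also report the physical sector size:

```shell
:: Query NTFS info for the system drive.  On an Advanced Format (512e) drive,
:: "Bytes Per Physical Sector" reports 4096 while "Bytes Per Sector" stays 512.
:: (Needs Win7 SP1 / 2008 R2 SP1 - older builds report "<Not Supported>".)
fsutil fsinfo ntfsinfo C:
```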

When we use ConfigMgr and WindowsPE boot images to deploy systems with AF drives, you may notice quite a slow-down, especially in the “Apply Operating System Image” Task Sequence step.  There is a patch to download from Microsoft which should be installed within a fully patched Windows 7 SP1 OS image, AND we must also incorporate this patch into our ConfigMgr boot images;

Once you have obtained the x86 and amd64 versions you can follow my guide below on how to update BOTH of your ConfigMgr boot images.  We will do the x86 boot image first and you just need to repeat the process for the amd64 image.

You should have the Windows Automated Installation Kit (WAIK) installed.  You can undertake this task on the ConfigMgr server, as it will have the WAIK installed.  From your Start Menu, find the Microsoft Windows AIK\Deployment Tools Command Prompt and run it as Administrator.

Create the following structure on a drive of your choice with a good few GB free (where X = YourDriveLetter)
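Based on the paths used in the DISM commands further down, the structure is simply:

```shell
:: Working folders used by the DISM steps below (replace X: with your drive).
mkdir X:\WinPE
mkdir X:\WinPE\mount
mkdir X:\WinPE\patches
```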


Copy both of the downloaded Windows6.1-KB982018-v3-x64.msu and Windows6.1-KB982018-v3-x86.msu to X:\WinPE\patches.  It does not matter that they are together as the patch injection process is clever enough to pick the right one.

Copy the boot.wim file (ignore the boot.xxx12345.wim) from <ConfigMgrInstallDir>\OSD\boot\i386 to X:\WinPE

From the Deployment Tools Command Prompt, run the following commands (replacing X:\ as appropriate);

DISM /Mount-Wim /WimFile:X:\WinPE\boot.wim /MountDir:X:\WinPE\mount /Index:1
DISM /Image:X:\WinPE\mount /Add-Package /PackagePath:X:\WinPE\patches
DISM /Unmount-Wim /MountDir:X:\WinPE\mount /Commit

Rename the existing <ConfigMgrInstallDir>\OSD\boot\i386 .wim file to .old and copy up your replacement boot.wim from X:\WinPE.

From the ConfigMgr Console, right-click the x86 Boot Image and choose Update Distribution Points.  This will take our new boot.wim, re-integrate the ConfigMgr components, your modifications and drivers, and re-send it out to the DPs.

You should also click the “Reload” button on the Images tab of the boot image properties, so that the size changes will be reflected.

The size increases you should be looking for on the boot.wim files, if all went successfully, are;
i386 boot.wim should increase by approx 10MB
x64 boot.wim should increase by approx 12MB
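To sanity-check the size before copying the wim back, DISM can report the image details:

```shell
:: Report image details (including size in bytes) for the patched boot image.
DISM /Get-WimInfo /WimFile:X:\WinPE\boot.wim /Index:1
```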

With a patched WindowsPE boot image, the “Apply Operating System Image” Task Sequence step, when run against an AF drive, should speed up considerably.  The speed increases I observed on an HP TouchSmart system were as follows;

BEFORE: AF Drive non-patched took 28 minutes for the “Apply Operating System Image” step to complete (inc. download)
AFTER: AF Drive patched took 13 minutes for the “Apply Operating System Image” step to complete (inc. download)

So you can see, there are significant time savings to be made with a properly patched WindowsPE boot image on an AF drive equipped system.

Oddly, when an AF equipped drive is partitioned and formatted using the boot images generated by ConfigMgr 2012 the Dell Alignment Tool states that the partitions are aligned correctly – however without the patch the disk performance is still poor.

ConfigMgr 101: Compressing Content for Distribution

Here’s a good tip I found somewhere.  When you distribute content from your central site, the distribution manager will compress the content and then transmit it to secondary sites; however, this isn’t beneficial for all types of files.

There is a registry key “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Compression” with a String Value named “DontCompressExts”.  You might want to consider adding to that list any extensions of files which the server should not waste time attempting to compress, such as .wim files, which are already compressed.
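This can be set from a command prompt – the extension shown is only an example, and since the value is a single string list, check the existing value first and append to it rather than overwrite:

```shell
:: Check the current do-not-compress list first so you can append to it.
reg query "HKLM\SOFTWARE\Microsoft\SMS\Compression" /v DontCompressExts

:: Example only: set .wim as a do-not-compress extension (the list format is
:: assumed to be comma-separated - include any existing entries in /d).
reg add "HKLM\SOFTWARE\Microsoft\SMS\Compression" /v DontCompressExts /t REG_SZ /d ".wim" /f
```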

Little things like this all add up to make a ConfigMgr environment more efficient.




Improving Availability of your remote Branch Distribution Points

Whilst I remember, this is a quick post to share a couple of tips which might help improve the availability of any Windows 7 Branch Distribution Points you have operating within your ConfigMgr infrastructure.

  • BIOS Power On Timer – If the BIOS supports it, enable the Power On events each working day to power up the system every morning, ready for business.
  • BIOS Power On after AC Loss – Again, if the BIOS supports it, ensure that in the event of a power failure the system will power up again and boot to the OS (not network!)
  • Windows 7 Recovery Options – If Windows 7 is not shut down correctly, it will default to booting into Recovery mode.  To prevent this and always attempt to boot into the OS, run the following from an administrative command prompt;
    bcdedit /set {current} bootstatuspolicy ignoreallfailures

This should help to make your Branch Distribution Point systems a little more resilient to everyday life at remote offices.