[This topic is pre-release documentation and is subject to change in future releases. Blank topics are included as placeholders.]

These release notes address late-breaking issues and information about the Beta 1 release of Microsoft® HPC Pack 2008 R2.

Microsoft HPC Pack 2008 R2 Beta 1 cannot be installed on a computer that has HPC Pack 2008 already installed

You cannot install HPC Pack 2008 R2 Beta 1 and HPC Pack 2008 side by side on the same computer. If you try to install HPC Pack 2008 R2 Beta 1 on a computer that already has HPC Pack 2008 installed, the installation wizard displays an error.

Workaround

Uninstall HPC Pack 2008 from the computer, and then install HPC Pack 2008 R2 Beta 1.

User preferences are not deleted after uninstalling an edition of HPC Pack

When you uninstall HPC Pack 2008 R2 Beta 1 or a previous release of HPC Pack, some of the user preferences for HPC Cluster Manager are not removed from the computer. For example, any customizations that you made to the heat map view or to charts and reports, or the standard output and standard error folders that you specified, are not deleted.

If you install HPC Pack again on the same computer, the software does not overwrite these user preferences, and they take effect again in HPC Cluster Manager. If you want to completely uninstall HPC Pack, so that a new installation of HPC Pack starts with the default preferences instead of the previous preferences, you must manually delete the file where the user preferences are stored.

Workaround

After you uninstall HPC Pack, delete the .dat file for each user on the computer whose user preferences you want to reset to the defaults. The .dat files are located at the following path:

%SYSTEMDRIVE%\Users\<user_name>\AppData\Local\Microsoft\HPC\<version>\AdminConsole\<head_node_name>.dat

Where <user_name> is the name of the user, <version> is “2.0” for HPC Pack 2008 or “3.0” for HPC Pack 2008 R2, and <head_node_name> is the name of the head node computer.

Note

If some users on the computer prefer to keep their preferences, do not delete the .dat files for those users.
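If you need to reset the preferences for every user on the computer, you can script the deletion instead of removing each file by hand. The following C# sketch is one possible approach, assuming the path pattern described above and an account with permissions to the user profiles; adjust it to skip the profiles of users who want to keep their preferences.

using System;
using System.IO;

class ResetHpcPreferences
{
    static void Main()
    {
        // Root of the user profiles, for example C:\Users.
        string usersRoot = Environment.GetEnvironmentVariable("SystemDrive") + @"\Users";

        // "3.0" is the version folder for HPC Pack 2008 R2; use "2.0" for HPC Pack 2008.
        string relativePath = @"AppData\Local\Microsoft\HPC\3.0\AdminConsole";

        foreach (string profile in Directory.GetDirectories(usersRoot))
        {
            string adminConsole = Path.Combine(profile, relativePath);
            if (!Directory.Exists(adminConsole))
            {
                continue; // This profile has no stored HPC Cluster Manager preferences.
            }

            // Each preferences file is named after a head node, for example HEADNODE01.dat.
            foreach (string datFile in Directory.GetFiles(adminConsole, "*.dat"))
            {
                Console.WriteLine("Deleting " + datFile);
                File.Delete(datFile);
            }
        }
    }
}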

Uninstalling a compute node and reinstalling it as a WCF broker node causes an error when the node template is assigned

If you uninstall HPC Pack 2008 R2 Beta 1 on a compute node to reinstall the node as a Windows® Communication Foundation (WCF) broker node, the provisioning of the WCF broker node fails when the node template is assigned to the node.

Workaround

For the provisioning to complete without errors, you must first delete the compute node. The following procedure explains how to perform this task:

To uninstall a compute node and reinstall it as a WCF broker node
  1. In HPC Cluster Manager, take the node offline: In Node Management, right-click the node, and then click Take Offline. If you are prompted to confirm the action, click Yes.

  2. To delete the node, right-click the node again, and then click Delete. If you are prompted to confirm the action, click Yes.

  3. On the node, uninstall HPC Pack 2008 R2 Beta 1.

  4. On the node, start the Microsoft HPC Pack 2008 R2 Installation Wizard, and on the Select Installation Type page, click Join an existing HPC cluster by creating a new WCF broker node.

  5. To finish installing HPC Pack 2008 R2 Beta 1, continue to follow the steps in the installation wizard.

  6. After HPC Pack 2008 R2 Beta 1 finishes installing, you can use the Add Node Wizard in HPC Cluster Manager to add the node to the cluster: In Node Management, right-click the node, click Add Node, and then click Add compute nodes that have already been configured.

  7. To assign a broker node template to the node and add it to your HPC cluster, continue to follow the steps in the Add Node Wizard, and on the Select New Nodes page, select a broker node template for the node.
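Note

If you prefer to script the offline and delete operations in steps 1 and 2 instead of using HPC Cluster Manager, the Set-HpcNodeState and Remove-HpcNode HPC PowerShell cmdlets typically provide equivalent functionality.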

Deployment of virtual nodes is not supported in this release

Deployment of virtual nodes is not supported in the Beta 1 release of Windows HPC Server 2008 R2. There are two built-in node groups listed in this release (HyperVHostNodes and VirtualNodes), but they are not used.

Workaround

Currently there is no workaround for this issue.

Error 7002 occurs during the deployment of nodes from bare metal

During the deployment of nodes from bare metal, the following error message might appear: Error 7002: This operation has conflicted with another operation running concurrently. Once the operation completes, this operation may be rerun. You might also see the same error message in the Provisioning Log for the nodes.

This error is transient. For example, it can occur if the nodes are turned on while the operation to import them from a node XML file is still in progress and the imported nodes are in the Transitional state.

Workaround

Redeploy the nodes that failed to deploy by assigning them a node template.
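You can assign the node template in HPC Cluster Manager or, if you script your deployments, with the Assign-HpcNodeTemplate HPC PowerShell cmdlet. Before you redeploy, verify that any node import operations have completed.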

Error 4404 occurs during the deployment of nodes

During the deployment of nodes, the following error message might appear: Error 4404: Assigning node template failed for conflicting operations. You might also see the same error message in the Provisioning Log for the nodes.

Any subsequent attempts to deploy the nodes will fail with the same error.

Workaround

Delete the nodes that failed to deploy, and then deploy the nodes again.

Deployment might fail if the system clock on the iSCSI boot nodes is not properly set

As part of the process for deploying iSCSI boot nodes, a deployment task is scheduled on the computer that you select to use as the base node. Later, an image of the base node is created, and this image is used to deploy the iSCSI boot nodes.

If the system clock on the computer that you select to use as a base node is set to a later time than the system clocks on the iSCSI boot nodes that you plan to deploy, the deployment task might never start on the iSCSI boot nodes (that is, the deployment process might time out).

Workaround

Ensure that the system clocks on all the iSCSI boot nodes that you plan to deploy are set to the current time, and that the system clock on the computer that you select to use to create the base node image is set to the same time.
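On domain-joined computers, you can typically synchronize a system clock with the domain time source by running the w32tm /resync command at an elevated command prompt on each computer.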

The operation to delete a node fails if the HPC Job Scheduler Service is not running

When you delete a node, a reconciliation process starts between the cluster management database and the job scheduling database. If the HPC Job Scheduler Service is not running, this reconciliation cannot take place, and the operation to delete the node reverts.

Workaround

Ensure that the HPC Job Scheduler Service is running before you delete nodes. If an operation to delete a node reverts, check the status of the HPC Job Scheduler Service, and then try to delete the node again.
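If you want to script this check, the following C# sketch shows one way to query the service and start it if it is stopped. It assumes that the Windows service name of the HPC Job Scheduler Service is HpcScheduler; verify the name in the Services snap-in on the head node before relying on it.

using System;
using System.ServiceProcess; // Add a reference to System.ServiceProcess.dll.

class CheckScheduler
{
    static void Main()
    {
        // "HpcScheduler" is the assumed service name of the HPC Job Scheduler
        // Service; confirm it in services.msc on the head node.
        using (ServiceController scheduler = new ServiceController("HpcScheduler"))
        {
            Console.WriteLine("Current status: " + scheduler.Status);

            if (scheduler.Status == ServiceControllerStatus.Stopped)
            {
                scheduler.Start();
                scheduler.WaitForStatus(ServiceControllerStatus.Running,
                                        TimeSpan.FromMinutes(1));
                Console.WriteLine("HPC Job Scheduler Service started.");
            }
        }
    }
}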

It takes up to five minutes for a node that has been rebooted to show information again in the heat map

If you reboot a node, the heat map takes up to five minutes to show information about that node again. In the meantime, the node will be marked with an “X”.

Workaround

Currently there is no workaround for this issue.

Two MSMQ counters are refreshed every fifteen minutes

For performance reasons, the following two Microsoft® Message Queuing (MSMQ) counters are refreshed every fifteen minutes: MSMQ Request Queue Length and MSMQ Response Queue Length. When you add these counters as metrics for the list or heat map views in HPC Cluster Manager, there might be a delay in showing changes in the MSMQ queue length.

Workaround

Currently there is no workaround for this issue.

A SOA session broker cannot detect that the client disconnected if EndRequests() was not called

If a client does not call EndRequests() on the service-oriented architecture (SOA) session broker before it calls BrokerClient.Close() or disposes of the client object, the SOA session broker might not detect that the client disconnected. In that case, the broker does not release the resources for the client until the ClientIdle timeout passes, at which point the resources on the broker become available to other clients.

Workaround

Always call EndRequests() before closing or disposing of the client object.
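The following C# sketch shows this calling pattern for an HPC Pack 2008 R2 SOA client. It is a minimal sketch, not a complete sample: the head node name, the service name, and the IEchoService, EchoRequest, and EchoResponse types are hypothetical placeholders for your own service contract and its generated message types.

using System;
using Microsoft.Hpc.Scheduler.Session; // Reference Microsoft.Hpc.Scheduler.Session.dll.

class SoaClient
{
    static void Main()
    {
        // "HEADNODE" and "EchoService" are placeholders for your cluster
        // and your deployed SOA service.
        SessionStartInfo startInfo = new SessionStartInfo("HEADNODE", "EchoService");

        using (Session session = Session.CreateSession(startInfo))
        using (BrokerClient<IEchoService> client =
                   new BrokerClient<IEchoService>(session))
        {
            // IEchoService, EchoRequest, and EchoResponse are hypothetical
            // types generated from your own service contract.
            client.SendRequest<EchoRequest>(new EchoRequest("hello"));

            // Tell the broker that no more requests are coming. Without this
            // call, the broker cannot detect that the client disconnected
            // until the ClientIdle timeout passes.
            client.EndRequests();

            foreach (BrokerResponse<EchoResponse> response in
                         client.GetResponses<EchoResponse>())
            {
                Console.WriteLine(response.Result);
            }

            client.Close();
        }
    }
}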

SOA service configuration files that are edited with Microsoft Service Configuration Editor on Windows 7 cannot be used on Windows HPC Server 2008 nodes

When Microsoft Service Configuration Editor runs on Windows 7, the extendedProtectionPolicy element is added to SOA service configuration files that are edited with this tool. This element is not recognized by nodes running Windows HPC Server 2008.

If you edit a service configuration file by using Microsoft Service Configuration Editor on Windows 7, all SOA sessions that use that service configuration file fail on nodes running Windows HPC Server 2008. Nodes running the Beta 1 release of Windows HPC Server 2008 R2 are not affected.

Workaround

Manually delete the extendedProtectionPolicy element by using a text editor.
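For reference, the element typically appears inside the transport security settings of a binding in the service configuration file, similar to the following illustrative excerpt (the binding details are examples, not the exact contents of your file):

<bindings>
  <netTcpBinding>
    <binding name="ExampleBinding">
      <security mode="Transport">
        <transport clientCredentialType="Windows">
          <!-- Delete this element before using the file on Windows HPC Server 2008 nodes. -->
          <extendedProtectionPolicy policyEnforcement="Never" />
        </transport>
      </security>
    </binding>
  </netTcpBinding>
</bindings>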

If a network error occurs during the creation of the SOA session, the service job runs indefinitely

A service job runs indefinitely if the connection between the client and the SOA session broker is lost while the SOA session is being created. This causes the following exception on the broker: svcHost endpoint not found.

Workaround

Use HPC Job Manager or HPC Cluster Manager to manually cancel the job.
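If you prefer to cancel the service job programmatically, the following C# sketch uses the HPC Pack scheduler API. The head node name and job ID are placeholders; replace them with the values for your cluster and for the service job that is running indefinitely.

using System;
using Microsoft.Hpc.Scheduler; // Reference Microsoft.Hpc.Scheduler.dll.

class CancelServiceJob
{
    static void Main()
    {
        // Placeholders: replace with your head node name and the ID of the
        // service job that is running indefinitely.
        string headNode = "HEADNODE";
        int jobId = 42;

        IScheduler scheduler = new Scheduler();
        scheduler.Connect(headNode);

        scheduler.CancelJob(jobId, "Canceling SOA service job orphaned by a network error.");
        Console.WriteLine("Canceled job " + jobId);
    }
}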

Setting the event logging level in a SOA service configuration file by using the switchName attribute can cause an error

If you edit a SOA service configuration file and, in the <system.diagnostics> section, you add a <switches> section and also add the switchName attribute to a source in the <sources> section, the switchName attribute conflicts with the switchValue attribute that is used to set the event logging level with HPC Cluster Manager.

When you use HPC Cluster Manager to set the event logging level of the SOA service that uses the configuration file that you edited, the configuration file fails to load and the service is no longer listed in HPC Cluster Manager.

Workaround

We are working to resolve this issue in a future release. In the meantime, to correct this issue, manually edit the SOA service configuration file by using a text editor and remove the switchName attribute. Alternatively, you can remove the switchValue attribute and not use HPC Cluster Manager to set the event logging level.
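The following illustrative excerpt shows the conflict; the source, switch, and listener names are examples, not the exact contents of your file. Because switchValue is the attribute that HPC Cluster Manager sets, removing the switchName attribute (and the now-unreferenced <switches> section) lets the configuration file load again:

<system.diagnostics>
  <sources>
    <!-- Remove the switchName attribute; keep switchValue so that
         HPC Cluster Manager can set the event logging level. -->
    <source name="ExampleSoaService" switchName="ExampleSwitch" switchValue="Warning">
      <listeners>
        <add name="ExampleListener" />
      </listeners>
    </source>
  </sources>
  <switches>
    <!-- This section can be removed together with the switchName attribute. -->
    <add name="ExampleSwitch" value="Warning" />
  </switches>
</system.diagnostics>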