FCP Server Configuration for Windows and ESX workflow

The FCP Server Configuration for Windows and ESX workflow enables you to set up the FC service on a Storage Virtual Machine (SVM, formerly known as Vserver), in preparation for provisioning a LUN for use as a datastore through an FC HBA on the host computer (Windows or ESX).

Note: This workflow does not cover FCoE, raw device mapping (RDM) disks, or the use of N-port ID virtualization (NPIV) to provide FC directly to virtual machines.

The following sections provide details about the workflow and how to execute it:

Prerequisites for executing the FCP Server Configuration for Windows and ESX workflow

You must ensure that certain requirements are met before executing the FCP Server Configuration for Windows and ESX workflow, and you must be a cluster administrator to execute it. A scripted spot-check of two of these requirements appears after the list.

  • Your system must be running clustered Data ONTAP.
    Note: This workflow is qualified to work with Data ONTAP 8.2.1 and later.
  • OnCommand Workflow Automation (WFA) 3.0 or later must be installed.
  • You must have added OnCommand Unified Manager 6.1 or later as a data source in WFA and obtained the latest Unified Manager data.
  • The cluster must already be created and the cluster time must be synchronized with an NTP server.
  • You must be using a supported version of Virtual Storage Console for VMware vSphere to configure storage settings for your ESX host and to provision the datastores.
  • You must not be using virtual Fibre Channel (VFC) with Hyper-V guests.
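
The following is a minimal sketch of that spot-check, assuming SSH access to the cluster management LIF and the paramiko Python library; the host name and credentials are hypothetical placeholders.

    # Spot-check two prerequisites over SSH: the Data ONTAP release and
    # the configured NTP servers. Host name and credentials below are
    # hypothetical; replace them with your own.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
    client.connect("cluster1-mgmt.example.com", username="admin", password="netapp123")

    for command in ("version", "system services ntp server show"):
        _, stdout, _ = client.exec_command(command)
        print(f"== {command} ==")
        print(stdout.read().decode())

    client.close()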

What happens when you execute the FCP Server Configuration for Windows and ESX workflow

The FCP Server Configuration for Windows and ESX workflow enables you to create an aggregate, create an SVM, set up an FCP service, create a portset, create an FCP LIF, and then add the LIF to the portset.

The following illustration displays the tasks involved in executing the workflow:

[Illustration: tasks performed in the FCP Server Configuration for Windows and ESX workflow]
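
For reference, the tasks the workflow automates correspond roughly to the clustered Data ONTAP CLI commands below. This is an illustrative sketch only: all object names are hypothetical, and the workflow itself performs additional validation and error handling. It assumes SSH access via the paramiko Python library.

    # Rough CLI equivalents of the workflow tasks, run over SSH.
    # All object names (aggregate, SVM, portset, LIF) are hypothetical.
    import paramiko

    COMMANDS = [
        # Create the aggregate
        "storage aggregate create -aggregate aggr_fcp_01 -nodes cluster1-01 "
        "-diskcount 8 -raidtype raid_dp",
        # Create the SVM
        "vserver create -vserver svm_fcp_01 -rootvolume svm_root "
        "-aggregate aggr_fcp_01 -rootvolume-security-style unix -language C.UTF-8",
        # Set up (start) the FCP service on the SVM
        "vserver fcp create -vserver svm_fcp_01 -status-admin up",
        # Create a portset for the FCP LIFs
        "lun portset create -vserver svm_fcp_01 -portset ps_fcp_01 -protocol fcp",
        # Create an FCP LIF, then add it to the portset
        "network interface create -vserver svm_fcp_01 -lif fcp_lif_01 -role data "
        "-data-protocol fcp -home-node cluster1-01 -home-port 0c",
        "lun portset add -vserver svm_fcp_01 -portset ps_fcp_01 -port-name fcp_lif_01",
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
    client.connect("cluster1-mgmt.example.com", username="admin", password="netapp123")
    for cmd in COMMANDS:
        _, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())
    client.close()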

Executing the FCP Server Configuration for Windows and ESX workflow

Executing the FCP Server Configuration for Windows and ESX workflow from the WFA portal performs the tasks described in the previous section, from creating the aggregate through adding the FCP LIFs to the portset.

Before you begin

  • The workflow pack must be downloaded from the Storage Automation Store.
  • You must have reviewed the prerequisites for executing the workflow.

About this task

You should have the following input parameters available for executing the workflow; a sketch that collects them for validation appears after the list:

  • Cluster name
  • If you want to create a new aggregate:
    • Node name
    • Aggregate name
    • Aggregate RAID type
    • Number of disks to create the aggregate
  • SVM details:
    • SVM name
    • Language
  • FCP details: number of LIFs per node
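
As mentioned above, one simple way to keep these inputs organized is to collect them in a single structure and sanity-check them before starting the run. All values in the sketch below are hypothetical examples.

    # Gather the workflow inputs up front so they can be sanity-checked
    # before the WFA run. All values are hypothetical examples.
    workflow_inputs = {
        "cluster_name": "cluster1",
        "new_aggregate": {          # omit this block to reuse an existing aggregate
            "node_name": "cluster1-01",
            "aggregate_name": "aggr_fcp_01",
            "raid_type": "raid_dp",
            "disk_count": 8,
        },
        "svm": {
            "name": "svm_fcp_01",
            "language": "C.UTF-8",
        },
        "fcp_lifs_per_node": 2,     # the workflow allows at most two FCP LIFs
    }

    assert 1 <= workflow_inputs["fcp_lifs_per_node"] <= 2, "maximum of two FCP LIFs"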

Steps

  1. Log in to WFA by providing the necessary credentials.
  2. Click Portal > Setup > FCP Server Configuration for Windows and ESX.
    Tip: You can use the filter to locate the workflow.
  3. Click the Execute icon.
    The Execute Workflow 'FCP Server Configuration for Windows and ESX' dialog box is displayed.
  4. Select the cluster name.
  5. Provide the aggregate details:

     If you are...                 Do this...
     Using an existing aggregate   Select an appropriate aggregate from the drop-down list.
     Creating a new aggregate      Enter the following values:
                                   • Node name
                                   • Aggregate name
                                   • RAID type
                                   • Disk count
  6. Create a new SVM by providing details such as the SVM name and language.
  7. Select the number of LIFs per node from the drop-down list.
    You can select a maximum of two FCP LIFs.
  8. Optional: Click Preview to validate your workflow before executing it.
  9. Click Execute.
    You can also schedule the workflow for execution at a later date and time by selecting the Choose Date and Time for Execution check box. A scripted alternative that starts the workflow through the WFA REST interface follows these steps.
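
Instead of the portal, a workflow can also be started through WFA's REST interface. The endpoint and XML payload below are assumptions based on the WFA 3.x REST API; verify them against the REST documentation for your WFA version. The host, credentials, workflow UUID, and user input keys are hypothetical placeholders.

    # Trigger the workflow via WFA's REST interface (assumed WFA 3.x API).
    # Look up the workflow UUID first, e.g. with GET /rest/workflows.
    import requests

    WFA = "https://wfa.example.com"                          # hypothetical WFA server
    WORKFLOW_UUID = "00000000-0000-0000-0000-000000000000"   # hypothetical UUID

    payload = """<workflowInput>
      <userInputValues>
        <userInputEntry key="ClusterName" value="cluster1"/>
        <userInputEntry key="SVMName" value="svm_fcp_01"/>
      </userInputValues>
    </workflowInput>"""

    resp = requests.post(
        f"{WFA}/rest/workflows/{WORKFLOW_UUID}/jobs",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "netapp123"),  # hypothetical credentials
        verify=False,                 # lab only; use valid certificates in production
    )
    resp.raise_for_status()
    print(resp.text)  # returned job ID/status can be polled for completion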

After you finish

You must provision a LUN and make it available through an FC HBA on a Windows or ESX host computer by executing the FCP LUN Provisioning for Windows and ESX workflow.
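
For orientation, that next workflow's tasks correspond roughly to the following clustered Data ONTAP CLI commands. This is a hedged sketch only: the volume, LUN, igroup, size, OS type, and initiator WWPN are all hypothetical placeholders, and the WFA workflow remains the supported path.

    # Rough CLI equivalents of FCP LUN provisioning: create a volume and
    # a LUN, create an FCP igroup, and map the LUN to the igroup.
    import paramiko

    COMMANDS = [
        "volume create -vserver svm_fcp_01 -volume vol_fcp_01 "
        "-aggregate aggr_fcp_01 -size 200g",
        "lun create -vserver svm_fcp_01 -path /vol/vol_fcp_01/lun_01 "
        "-size 100g -ostype vmware",
        "lun igroup create -vserver svm_fcp_01 -igroup ig_esx_01 "
        "-protocol fcp -ostype vmware -initiator 20:00:00:25:b5:00:00:0f",
        "lun map -vserver svm_fcp_01 -path /vol/vol_fcp_01/lun_01 -igroup ig_esx_01",
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab convenience only
    client.connect("cluster1-mgmt.example.com", username="admin", password="netapp123")
    for cmd in COMMANDS:
        _, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())
    client.close()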