vRealize Orchestrator

vRealize Automation & Orchestrator API ODATA Filters

This follows on from my recent http://virtualdevops.com/2017/03/22/using-createrestclient-in-vrealize-orchestrator-to-create-vrealize-automation-rest-calls/ blog, as I often hear that people are confused about ODATA filters and, specifically, what filters are available to use in both Orchestrator and the vRealize Automation API.

ODATA is a common feature used in API calls to apply filter queries and obtain a subset of information based on the query passed in to the REST API call. ODATA filters can be used in both vRA REST calls and vRO workflows. This blog explores a couple of different ways to interact with the vRA REST API and vRO workflows to help minimize the data set returned in the response. Before we look in more detail at how to construct these API calls, let's go through some basic ODATA expressions:

Select a range of values

filter=Entry_No gt 610 and Entry_No lt 615

And

filter=Country_Region_Code eq 'ES' and Payment_Terms_Code eq '14 DAYS'

Or

filter=Country_Region_Code eq 'ES' or Country_Region_Code eq 'US'

Less than

filter=Entry_No lt 610

endswith

filter=endswith(VAT_Bus_Posting_Group,'RT')

startswith

filter=startswith(Name, 'S')

These are some of the most common parameters used, but it doesn’t stop there.

Additionally, it is important to append the pagesize and limit to your ODATA filter to limit the dataset returned.

&page=1&limit=50

The page size is important when expecting a large dataset response. Without providing a pagesize or a limit size, the defaults will be used, which might mean the dataset does not include the information you are expecting.

It is also worth noting that the filter expressions are discussed in the API docs found on the vRealize Automation 7.1+ appliance. To access the docs, go to the following URL on the appliance:

https://{{vra-fqdn}}/component-registry/services/docs

With the above information in mind, let’s start looking at some basic ODATA filter expressions that can be used.

I am using POSTMAN as a REST client to interact with the vRA API.

Take this URI example:

https://vra-01.mylab.com/catalog-service/api/consumer/resources/

I only have a few catalog resources, but see the following response:

"metadata": {
    "size": 20,
    "totalElements": 29,
    "totalPages": 2,
    "number": 2,
    "offset": 20
}

This shows that there is a total of 29 elements in the response, 2 pages and a page size of 20. This means that the JSON response has 2 pages, and the 1st page lists the first 20 elements; you would need to go to page 2 to get the other 9. If using POSTMAN, you can see the HREF element that shows you which page and limit defaults are being used.

"href": "https://vra-01.mylab.com/catalog-service/api/consumer/resources/?page=2&limit=20"

By changing the pagesize and limit, you can control how many elements are returned, or you can search through each page to find the element that you want.

To limit the response to 40 items instead of the default 20 in this case:

https://vra-01.mylab.com/catalog-service/api/consumer/resources/?limit=40

or to get to page 2,

https://vra-01.mylab.com/catalog-service/api/consumer/resources/?page=2
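If you need every element, you can also walk the pages programmatically. Here is a minimal vRO sketch, assuming a cafeHost input of type vCACCAFE:VCACHost and using the same catalog client shown later in this post:

// Page through all catalog resources, 20 at a time
var service = cafeHost.createCatalogClient().getCatalogConsumerResourceService();
var page = 1;
var total = 0;
while (true) {
    // an empty query, so only paging is applied
    var odataRequest = new vCACCAFEPageOdataRequest(page, 20, vCACCAFEOdataQuery.query());
    var results = service.getResourcesList(odataRequest);
    if (results == null || results.length == 0) {
        break; // no more pages
    }
    total += results.length;
    page++;
}
System.log("Fetched " + total + " resources over " + (page - 1) + " pages");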

However, even though it is important to understand the page size and limit used, paging through results is not an efficient way to query resources using the API. Often the query is set with a large limit and a page of 1 to accommodate all resources, but that still isn't an efficient way to find a specific resource. What happens if you have thousands of resources, or more resources than your limit is set to? This is where you can use ODATA filter expressions to minimize the result set.

So how do we do this? One way is to read the API docs that come with vRealize Automation; it is worth reading through how ODATA filters work.

A very simple example is to add a query expression that targets a particular name or ID. For example:

https://vra-01.mylab.com/catalog-service/api/consumer/resources/?
$filter=id eq '21da4606-ebfa-440e-9945-6ef9102c6292'

If you look at the metadata, you can see we have only one element returned:

"metadata": {
    "size": 20,
    "totalElements": 1,
    "totalPages": 1,
    "number": 1,
    "offset": 0
}

This is indeed the element that has the ID equal to 21da4606-ebfa-440e-9945-6ef9102c6292.

Just breaking down the filter expression, here is how it worked:

$filter=id eq '21da4606-ebfa-440e-9945-6ef9102c6292'

The $filter parameter is set to an expression; in this case, id eq '21da4606-ebfa-440e-9945-6ef9102c6292'.

You can append the pagesize and limit with a filter. For example:

?page=1&limit=1&$filter=id eq ''

Notice the ? used to append a query to the URI, the & sign used to chain multiple query parameters, and the $filter value, which is the beginning of the ODATA filter query.
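If you are constructing these URIs in code, remember that the filter expression needs URL encoding. A trivial sketch, reusing the ID from the example above:

// Build the query string; encodeURIComponent escapes the spaces in the expression
var id = "21da4606-ebfa-440e-9945-6ef9102c6292";
var query = "?page=1&limit=1&$filter=" + encodeURIComponent("id eq '" + id + "'");
System.log(query);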

We can do additional searches too.

https://vra-01.mylab.com/catalog-service/api/consumer/resources/
?$filter=id eq '21da4606-ebfa-440e-9945-6ef9102c6292' 
or id eq 'f89877f9-7c86-43e1-9956-9cefe22f127e'

Just checking the metadata, we can see the following:

"metadata": {
    "size": 20,
    "totalElements": 2,
    "totalPages": 1,
    "number": 1,
    "offset": 0
}

That matches our query as we see 2 elements returned.

If we supply an invalid ID, it simply doesn’t return an element.

This is all straightforward so far, so how do we create more complex ODATA filters? Well, there are a couple of ways to see how filters are used within vRealize Automation that will help you understand how to append ODATA filters to your API calls.

The first way is to look at the access_log.txt log file that resides in /var/log/vcac/. This log file shows you all the API calls that are being made, either by an end user or by the vRealize Automation system itself. Looking at this log file can help you understand more complex ODATA filters.

You can simply use the UI and execute a search; the ODATA filter used by the UI is then logged in access_log.txt. A good example of this is searching for catalogue items, as we have a nice advanced search box that we can use to trigger an ODATA search.

[Screenshot: advanced search for catalogue items in the vRA UI]

Searching the access_log.txt log file, I can see the following API call, which shows how the search made via the UI translates into a more complex ODATA query.

/catalog-service/api/catalogItems/?$top=30&$skip=0&$orderby=name%20
asc&$filter=substringof%28%27docker%20-%20coreos%27%2Ctolower%28
name%29%29%20and%20substringof%28%27sample%27%2Ctolower%28
description%29%29%20and%20outputResourceType%2Fid%20eq%20%27
composition.resource.type.deployment%27%20and%20providerBinding%2F
provider%2Fid%20eq%20%275a4f1ed5-7a6c-4000-8263-5cbbcc888e84%27%20
and%20status%20eq%20%27PUBLISHED%27%20and%20organization%2F
tenant%20ne%20null

The URL has been encoded but we can simply decode this using an online decoder to give you the format that you require:

/catalog-service/api/catalogItems/?$top=30&$skip=0&$orderby=name 
asc&$filter=substringof('docker - coreos',tolower(name)) and 
substringof('sample',tolower(description)) and outputResourceType/id 
eq 'composition.resource.type.deployment' and 
providerBinding/provider/id eq '5a4f1ed5-7a6c-4000-8263-5cbbcc888e84'
and status eq 'PUBLISHED' and organization/tenant ne null
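You don't need an online decoder, though; the same decoding can be done in a vRO scriptable task. A trivial sketch, with a shortened sample of the encoded filter:

// Decode a URL-encoded ODATA filter taken from access_log.txt
var encoded = "%24filter%3Dsubstringof%28%27docker%27%2Ctolower%28name%29%29";
System.log(decodeURIComponent(encoded)); // $filter=substringof('docker',tolower(name))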

Some interesting filters have been used, specifically new filters we haven't discussed, for example substringof('docker - coreos',tolower(name)).

substringof returns records for catalog items whose names contain the string 'docker - coreos'.

We could also use this:

catalog-service/api/catalogItems/?$filter=startswith(name,'Docker')

This returns 2 results as expected, as we have 2 OOTB catalogue items that have a name that starts with Docker.

You can also use precedence grouping, which enables you to override the default precedence conventions and manipulate your dataset based on the grouping you define.

catalog-service/api/catalogItems/?$filter=substringof
('docker - coreos',tolower(name)) and 
substringof('sample',tolower(description))

You can list all virtual machines that were created between 2 different dates:

/catalog-service/api/consumer/resources/?$filter=(dateCreated 
gt '2017-01-01T00:00:00' and dateCreated le '2017-01-02T00:00:00')

This will display all virtual machines created between '2017-01-01T00:00:00' and '2017-01-02T00:00:00'.

You can also append a further filter to this grouped expression, for example to return only virtual machines that are active.

/catalog-service/api/consumer/resources/?$filter=(dateCreated 
gt '2017-03-29T11:42:00' and dateCreated le '2017-03-29T11:50:00') 
and status eq 'ACTIVE'

To be safe, you could convert the active status to uppercase, just to ensure any programmatically set parameters are capitalized:

catalog-service/api/consumer/resources/?$filter=(dateCreated gt 
'2017-03-29T11:42:00' and dateCreated le '2017-03-29T11:50:00') 
and status eq toupper('active')

Additionally, you can use the $orderby option to sort your queries too.

$orderby=dateSubmitted+desc
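For example, to list the most recent requests first (a hedged example, assuming the consumer requests service, whose responses include a dateSubmitted field):

/catalog-service/api/consumer/requests/?$orderby=dateSubmitted+desc&limit=5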

But how do we know what parameters can be passed in using the ODATA filter?

Well, this is a bit of trial and error. You can take the keys from the JSON response and test filtering on each one to see whether you get the results you are expecting; if you do, the expression is working. So take this JSON response payload:

  {

      "@type": "CatalogResource",
      "id": "21da4606-ebfa-440e-9945-6ef9102c6292",
      "iconId": "cafe_default_icon_genericCatalogItem",
      "resourceTypeRef": {
        "id": "composition.resource.type.deployment",
        "label": "Deployment"
      },
      "name": "vm-app-01-88805337",
      "description": null,
      "status": "ACTIVE",
      "catalogItem": {
        "id": "8e04dceb-f040-4bbc-b312-8a1b0d0a3b95",
        "label": "VM app 01"
      },
      "requestId": "3f49e42a-5e4e-4293-b3f0-eff2a34a5108",
      "providerBinding": {
        "bindingId": "3ee7fe9f-197c-4fac-8e59-cea6ac4f2336",
        "providerRef": {
          "id": "5a4f1ed5-7a6c-4000-8263-5cbbcc888e84",
          "label": "Blueprint Service"
        }
      },
      "owners": [
        {
          "tenantName": "vsphere.local",
          "ref": "oleach@vmware.com",
          "type": "USER",
          "value": "Oliver Leach"
        }
      ],
      "organization": {
        "tenantRef": "vsphere.local",
        "tenantLabel": "vsphere.local",
        "subtenantRef": "8723d842-d6f0-48c0-b0a3-e555eaeecdd5",
        "subtenantLabel": "Corp"
      },
      "dateCreated": "2017-01-00T00:00:00.000Z",
      "lastUpdated": "2017-01-00T00:00:00.000Z",
      "hasLease": true,
      "lease": {
        "start": "2017-01-00T00:00:00.000Z"
      },
      "leaseForDisplay": null,
      "hasCosts": true,
      "totalCost": null,
      "hasChildren": false,
      "operations": null,
      "forms": {
        "catalogResourceInfoHidden": false,
        "details": {
          "type": "external",
          "formId": "composition.deployment.details"
        }
      },
      "resourceData": {
        "entries": []
      },
      "destroyDate": null,
      "pendingRequests": []
    },

We can test setting the filter expression to use any of the above JSON payload elements, although there are some that won't work, and other caveats you need to follow.

For example, take the request ID. You would expect the filter to look like ?$filter=requestId, however that doesn't work. Trial and error led me to use ?$filter=request, as shown here:

catalog-service/api/consumer/resources/?$filter=request 
eq '3f49e42a-5e4e-4293-b3f0-eff2a34a5108'

Additionally, in the vRA API docs there are some caveats to be aware of. See this note in the API docs:

Note: Notice how even though the providerRef object contains a label element, we use provider (and not providerRef) and name (and not label) to craft the filter:

$filter=provider/name eq 'theName'
      "providerBinding": {
        "bindingId": "cd1c0468-cc1a-404d-a2e0-a4e78fc06d4d",
        "providerRef": {
          "id": "2575e506-acfe-487a-b080-9898a30f519f",
          "label": "XaaS"
        }
      },

Therefore, in order to get all providers that are of type XaaS, we need to run the following ODATA filter:

/catalog-service/api/consumer/resources/?$filter=
providerBinding/provider/name eq 'XaaS'

Notice that provider is missing the Ref suffix and that label has become name. However, you can also just use id, for example:

catalog-service/api/consumer/resources/?$filter=
providerBinding/provider/id eq 
'2575e506-acfe-487a-b080-9898a30f519f'

In order to get to an element that is an array, for example the owner information in the above JSON payload, you can simply do this:

catalog-service/api/consumer/resources?$filter=owners/ref 
eq 'oleach@vmware.com'

You can also use Firebug or the Chrome dev tools to see what API calls are being made via the UI, intercept the API call and look at the filters being used. For example, let's redo the search we did above for catalogue item entitlements and sniff the request using the Firebug add-on in Firefox.

Here you can see the output of Firebug:

[Screenshot: Firebug showing the request URL and ODATA filters]

The request URL gives you the URL-encoded format, and you can also see the ODATA filters used.

Additionally, you can also use ODATA filters in vRealize Orchestrator and these principles still apply.

Take this code example:

// Get the catalog consumer resource service from the vCACCAFE host
var service = cafeHost.createCatalogClient().getCatalogConsumerResourceService();

// Build the ODATA filter: match on the provider ID
var filter = new Array();
filter[0] = vCACCAFEFilterParam.substringOf("providerBinding/provider/id",
vCACCAFEFilterParam.string('2575e506-acfe-487a-b080-9898a30f519f'));

// Wrap the filter in a query and request page 1 with a limit of 10000
var query = vCACCAFEOdataQuery.query().addFilter(filter);
var odataRequest = new vCACCAFEPageOdataRequest(1, 10000, query);
var resources = service.getResourcesList(odataRequest);

You can add your filter parameters in the same way as you can using the API ODATA filters.

To add additional parameters, you can build up the filter array with your queries.
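For instance, here is a minimal sketch that combines two conditions with a logical and; vCACCAFEFilterParam.equal and vCACCAFEFilterParam.and are further static methods on the same plugin class used above, and the field names come from the JSON payload shown earlier:

// Combine multiple conditions into a single ODATA filter
var conditions = new Array();
conditions[0] = vCACCAFEFilterParam.equal("status", vCACCAFEFilterParam.string("ACTIVE"));
conditions[1] = vCACCAFEFilterParam.substringOf("name", vCACCAFEFilterParam.string("vm-app"));

var filter = new Array();
filter[0] = vCACCAFEFilterParam.and(conditions);

var query = vCACCAFEOdataQuery.query().addFilter(filter);
var odataRequest = new vCACCAFEPageOdataRequest(1, 100, query);
var resources = service.getResourcesList(odataRequest);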

Remember to consider the caveats discussed above, but by using the API JSON response you can determine what filters to use. Please add comments if you find any caveats regarding ODATA filters not mentioned above.

vRealize Automation – Changing a Virtual Machine’s Business Group in 7.2 and 6.x

Changing business groups in both vRealize Automation v6 and v7 is a common task, and below demonstrates how to automate the process. Luckily, VMware provides an out-of-the-box vRealize Orchestrator workflow in both vRealize Automation 6.x and 7.2; however, the workflow you need to use differs between versions. Let's have a look:

vRealize Automation 7.2

As of v7.2, we have a new workflow that changes the business group of the IaaS virtual machine entity as well as the CAFE deployment entity. The virtual machine entity is still part of the IaaS model; the deployment, however, is a CAFE entity, and therefore there are a few things you need to be aware of before changing the business group. We'll cover those later. First, here is the workflow that you need to use:

The workflow you need to run to change business group can be found in the following location, noting this is under the Library folder:

[Screenshot: workflow location under the Library folder]

You need to provide the following inputs:

[Screenshot: workflow inputs]

There are a few things to be considered.

1 – The owner of the virtual machine must also be a member of the target business group. The business group is determined based on the reservation you select. If the user is not a member of the target business group, the virtual machine IaaS entity business group will get updated, however the deployment CAFE entity business group will not, leaving the 2 entities out of sync. If this happens, your My Items view will also be out of sync, as the deployment will still appear under the original business group but the virtual machine will appear under the new one.

2 – You may have more than one virtual machine within the deployment, so make sure you update all virtual machine entities within the deployment.

3 – This workflow is not available prior to vRealize Automation 7.2, although if you need it, contact VMware GSS as they may be able to help you get your hands on a fix. Or, better still, upgrade to vRealize Automation 7.2.

Based on the above, one way to implement this solution would be to create a day 2 operation on the deployment CAFE entity. Then you can do a couple of things, as follows:

1 – Check that the owner of the deployment is a member of the target business group. If not, flag this on the day 2 XaaS form.

2 – Get all virtual machines that are part of the deployment, so that you update every machine in it.

That's it, but you can enhance the custom day 2 resource operation if you need to.

vRealize Automation 6.x

In my opinion, it is slightly easier to update the virtual machine business group in 6.x, as we don't have to worry about the CAFE deployment entity, which doesn't exist in vRA 6.x. We do have multi-machine blueprints (MMBP), but these are part of the IaaS model, which can be updated; this post concentrates on single machine blueprints. The workflow you need to run to change business groups can be found in the following location, noting this is under the Library folder:

[Screenshot: workflow location under the Library folder]

You can just run the Import vCAC Virtual Machine workflow, and you can rerun it over and over again. It won't register the machine multiple times, but will just update the business group based on the following inputs:

[Screenshot: Import vCAC Virtual Machine workflow inputs]

You can see there are a few more options to supply in vRA 6.x. You can also make this a day 2 operation by wrapping this workflow in another workflow, as the actual code that changes the business group is just this:

vm.register()

Where vm is a vCAC:VirtualMachine type. You can see this in the 'start register workflow' scripting element.

You can see the required parameters using the API library, but here's a screenshot of what you need:

[Screenshot: required parameters shown in the API library]

That’s pretty much it. Happy Business Group Automating.

vRealize Orchestrator 7 VAPI vSphere tags workflow

I created a workflow using the VAPI plugin to assign vSphere tags to a VC:VirtualMachine object.

You will need to import the package, then configure VAPI with your vCenter endpoints by following these instructions:

1 – Run the Import VAPI metamodel workflow

2 – Run the Add VAPI endpoint workflow

Both workflows take the same inputs, but you must run them in that order. A screenshot of the inputs follows:

[Screenshot: inputs for the Import VAPI metamodel and Add VAPI endpoint workflows]

 

Once you have configured your VAPI endpoints, download the following package from GitHub:

com.virtualdevops.tags.package

Import the package, which contains the following workflows and actions:

[Screenshots: the imported workflows and actions]

Run the Add tag to VC vm workflow (highlighted in the package screen shot above), and as long as you have configured VAPI endpoints, you can add vSphere Tags and vSphere Categories to a VC:VirtualMachine object.

[Screenshot: Add tag to VC vm workflow input form]

  • The vCenter Virtual Machine is the VC:VirtualMachine object
  • The VAPI Endpoint drop-down list should contain all your VAPI endpoints
  • The Create new Tag Category option allows you to create a new category
  • The Tag Category list displays the existing tag categories configured on your VAPI endpoint
  • The Tag Name is the name of the tag you wish to create
  • The Tag description is optional

The code is more of an example, as I found the OOTB VAPI plugin support for tags pretty limited. You can move the input parameters to attributes as desired, and pass in values for the tag name using, for example, an API call to ServiceNow or a CMDB.
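To give an idea of what the plugin code looks like, here is a hedged sketch that lists the tag categories on a VAPI endpoint. The endpoint variable is assumed to be a VAPI:VAPIEndpoint input, and the service class name comes from the plugin's generated metamodel, so verify it against your own import:

// List the tag categories available on a VAPI endpoint
var client = endpoint.client();
var categoryService = new com_vmware_cis_tagging_category(client);
var categoryIds = categoryService.list();
for each (var id in categoryIds) {
    var category = categoryService.get(id);
    System.log("Category: " + category.name);
}
client.close();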

Any issues, send in a comment.

 

Updating vRA ASD Service Blueprint workflows

There isn't an easy way to find out which vRealize Orchestrator workflow is associated with a vRealize Automation Service Blueprint. You can query the vPostgres database to find the workflow ID as follows:

1 – Log in to vPostgres and connect to the vcac database

2 – Run the following SQL command

vcac=# \x on;
Expanded display is on.
vcac=# select * from asd_serviceblueprint;

This should output something similar to this:

-[ RECORD 1 ]------------------------------------------------
id | 850b9e17-024e-44f7-9831-a5651a5d6e0f
description | This is a workflow description.
name | My XAAS workflow
status | 1
tenant | vsphere.local
workflowid | 1eea8b02-15e8-41b8-992e-3df1cd8e4b99
baseform |
outputparameter | ea001bf4-d43f-4c74-a010-b6027c7ecbdf
catalogrequestinfohidden | t
version | 1.0.0
-----------------------------------------------------------

You can change the vRealize Orchestrator workflow that gets called by a vRealize Automation service blueprint pretty easily by updating the workflowid value in the vPostgres database:

vcac=# update asd_serviceblueprint 
set workflowid='ad9e2cc2-2efa-44c1-a574-6a02bec2f998'
where id = '9d6510ad-24a6-4f0c-adc8-736cc3e99d77';

This will change what workflow gets executed without redeploying the ASD form. It’s handy when you want to clone the ASD service blueprint and actually change the underlying workflow to test something new, without having to rebuild the ASD form.

You can also find out a workflow's name from its ID by running the below JavaScript in a vRealize Orchestrator workflow:

var wfId = '9d6510ad-24a6-4f0c-adc8-736cc3e99d77';
System.log("Workflow name: " + Server.getWorkflowWithId(wfId).name);

 

All of this can be managed by creating your own service blueprint and publishing it as a catalog item.

How to get vCAC:VirtualMachine object and entity from VC:VirtualMachine

A quick post to show you a couple of ways to get the vCAC:VirtualMachine object and its entity from a VC:VirtualMachine object. Note that in the examples below I have the VC:VirtualMachine as an input parameter called vm.

There are a few ways we can achieve this and one way would be to use a filter on Server.findAllForType. Here’s a snippet of code:

// filter on vm.id

var vraVm = Server.findAllForType("vCAC:VirtualMachine", "ExternalReferenceId eq '" + vm.id + "'");

// end of code

This returns an array. I found it is best to use LinqPad to work out what you can filter on; for example, some of the vCAC:VirtualMachine properties use different casing in the filter and won't return a result if you get the case wrong.

You can also filter using an ‘and’. For example:

// Filter using an 'and'

var vraVm = Server.findAllForType("vCAC:VirtualMachine", "IsDeleted eq false and ExternalReferenceId eq '" + vm.id + "'");

//end of code

Notice that I am using IsDeleted, which is what LinqPad defines, rather than isDeleted, which is how the property is defined on the vCAC:VirtualMachine scripting object.

However, the ExternalReferenceId could match multiple values if you have more than one vCenter endpoint configured. Another value you can filter on if you do have multiple VCs is VMUniqueID.

So, simply as follows:

// filter on vm.config.instanceUuid

var vraVm = Server.findAllForType("vCAC:VirtualMachine", "VMUniqueID eq '" + vm.config.instanceUuid + "'");

// end of code

Notice that we use vm.config.instanceUuid now instead of vm.id. The attribute simply returns a different value.

In this case, vraVm returns an array, as we are doing a findAllForType, which puts the result set into an array. Therefore, we can get the first index to get our vCAC:VirtualMachine object and then use the getEntity() method to get the entity.

// get entity

var vmEntity = vraVm[0].getEntity();

// end of code

Simple stuff. You can use comparison operators, but generally speaking I find that you are matching on specific values returned from the VC object, such as vm.id or vm.config.instanceUuid.

You should also check the length of your array: if it contains more than 1 index, then you know you have multiple matches, which can happen when using multiple VC endpoints.

If you return more than one index, then you can do extra checking, like match on the virtual machine name.
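Putting that together, here is a small sketch that falls back to matching on the virtual machine name when more than one result comes back; virtualMachineName is the name property on the vCAC:VirtualMachine scripting object:

// Guard against multiple matches across vCenter endpoints
var matches = Server.findAllForType("vCAC:VirtualMachine", "VMUniqueID eq '" + vm.config.instanceUuid + "'");
if (matches.length == 0) {
    throw "No vCAC:VirtualMachine found for " + vm.name;
}
var vraVm = matches[0];
if (matches.length > 1) {
    // more than one match, so match on the VM name as well
    for each (var candidate in matches) {
        if (candidate.virtualMachineName == vm.name) {
            vraVm = candidate;
            break;
        }
    }
}
var vmEntity = vraVm.getEntity();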

However, as you may be using this in an action item for a resource action, try to keep the code evaluation down to improve performance. You can use the filter for any other scripting object, for example vCAC:Blueprint, and use LinqPad to work out what you can filter on.

vRealize Orchestrator – using Telnet to test port connectivity

vRealize Orchestrator has a TelnetClient scripting class that you can use to test port connectivity. This is far easier than dropping to a telnet client on a CLI and testing connectivity that way. Combined with a workflow that loops and waits for the port to be open, fails gracefully, or continues but notifies that connectivity wasn't available, the options the TelnetClient scripting class gives you are very handy.

Here is a screenshot of what the scripting class looks like in the API browser:

[Screenshot: TelnetClient scripting class in the API browser]

The constructor looks like this:


var telnet = new TelnetClient(string)

The parameter is a string and takes a terminal type, which by default is VT100. Other terminal types would be ANSI, VT52, VT102, VTNT and so on. VT stands for video terminal, so mostly you can use the default value of VT100.

Here is some simple code to test telnet connectivity:


var port = 22; // number
var target = "localhost"; // string

var connectionSuccess = false;
var telnet = new TelnetClient("vt100");

try {

// testing for port connectivity on the target host
System.log("Trying to connect to port " + port + " on host " + target);
telnet.connect(target, port);
connectionSuccess = telnet.isConnected();
System.log("Connectivity to target host " + target + " is successful");

} catch (e) {

connectionSuccess = false;
System.log("Connection failed: " + e);

} finally {

// only disconnect if a connection was actually established
if (telnet.isConnected()) {
    telnet.disconnect();
}

}

The isConnected() method returns a boolean of true or false depending on the connection result, so together with this block of code and a loop workflow you can test connectivity to a telnet port or wait for a socket to be up. Using a try / catch / finally statement, you catch any errors and disconnect appropriately, although strictly speaking disconnect() can go in the try statement, as you'll only ever connect in the try block so you can only ever disconnect there as well. Skin a cat in several ways on that one. Up to you 🙂
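If you would rather keep the loop in script instead of workflow elements, here is a simple polling sketch; the 12 attempts and 5 second sleep are arbitrary values, so tune them for your environment:

// Poll a port until it opens, for up to a minute
var connected = false;
for (var attempt = 0; attempt < 12 && !connected; attempt++) {
    var telnet = new TelnetClient("vt100");
    try {
        telnet.connect(target, port);
        connected = telnet.isConnected();
        telnet.disconnect();
    } catch (e) {
        System.sleep(5000); // wait 5 seconds before the next attempt
    }
}
System.log("Port " + port + " open: " + connected);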

Here’s a screenshot of a workflow that puts it all together. The decision just checks for true or false which is set depending on the connection response.

[Screenshot: workflow with a decision element checking the connection result]

Obviously, in the sample code above you should configure the inputs appropriately for the target and the port. This is also a good way to check whether a service is available, for example SMTP or a REST API service.

Happy telnetting!

references:

http://goo.gl/f7NqqR
http://goo.gl/v8QZA

workflow:

http://goo.gl/cOLz8d

vRealize Orchestrator – connecting to more than one domain using the Active Directory plugin

The vRealize Orchestrator Active Directory plugin only supports connecting to one Active Directory domain, so what happens if you want to connect to more than one domain to create a user or an organizational unit? Well, it is possible to refocus the Active Directory plugin on the fly to do just that, but you need to consider how this will work in your environment and design your AD workflows appropriately.

Firstly, let's take a look at the vRO Active Directory plugin's 'Configure Active Directory server' workflow, which configures the plugin to connect to a domain.

[Screenshot: Configure Active Directory server workflow]

When you run the workflow to add a domain, you are presented with the following form:

[Screenshot: Configure Active Directory server input form]

Now you can relate these presentation inputs to what is going on inside the workflow. The first scripting element in the workflow determines whether you are using SSL, and the second element imports the certificate if you are.

The reason I mention this is that if you are using SSL for LDAP connectivity, it's worth importing all your domain SSL certificates into vRO, so that vRO can use a valid certificate when you refocus the plugin to connect to a domain over SSL.

Now, if we take a look at the 'Update configuration' scripting element, you can see the following code:

    
var configuration = new AD_ServerConfiguration();
configuration.host = host;
configuration.port = port;
configuration.ldapBase = ldapBase;
configuration.useSSL = useSSL;
configuration.defaultDomain = defaultDomain;
configuration.useSharedSession = useSharedSession;
configuration.sharedUserName = sharedUserName;
configuration.sharedUserPassword = sharedUserPassword;

ConfigurationManager.validateConfiguration(configuration);
ConfigurationManager.updateConfiguration(configuration);

So what you can do is run this specific block of code to connect to another domain each time you want to perform operations in a specific domain in a multi-domain environment. It's best to use a configuration element to store your domain information; with some logic, you can determine which domain you want to use and then use the values in the configuration element to populate the required domain configuration values in the block of code above.
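As a sketch of the configuration element approach, assuming a category path of "Library/AD", an element named "ADDomains" and one attribute per domain (all names here are illustrative):

// Look up per-domain connection settings from a configuration element
var category = Server.getConfigurationElementCategoryWithPath("Library/AD");
var configElement = null;
for each (var element in category.configurationElements) {
    if (element.name == "ADDomains") {
        configElement = element;
    }
}
// each attribute holds the host for one domain, keyed by domain name
var host = configElement.getAttributeWithKey("corp.mylab.com").value;
System.log("AD host for corp.mylab.com: " + host);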

However, there is one key thing to implement here: design your workflows using the LockingSystem scripting class. Any workflow that needs to configure Active Directory objects will need to run the block of code that refocuses the domain connection, then run a workflow to create, update, read or delete AD objects. So you need to let one workflow operation finish before allowing another workflow to change the focus of the domain you are connecting to. You can do this using the LockingSystem scripting class, for example as follows:

[Screenshot: workflow using LockingSystem around the AD operation]

So the set lock would look like this:

    
LockingSystem.lockAndWait("AdOperationLock", workflow.id);
    
    

Then, once the lock is set, the refocus domain code runs and you can change to a different domain, run an AD operation, then remove the lock. So the remove lock would look like this:

    
LockingSystem.unlock("AdOperationLock", workflow.id);
    
    

You can have multiple workflows using this locking system, as long as the lock you use has the same name, in other words "AdOperationLock" in the example above.

Also, you could use an action item to lock and refocus the domain connection, and use that action item in any workflows that run AD operations, making the locking and refocus aspects modular. Remember to unlock the lock and to use the same lock ID name, for example "AdOperationLock".
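Putting the pieces together, a minimal sketch of that pattern looks like this; the host and ldapBase values are placeholders, while everything else is the plugin code already shown above:

// Acquire the lock, refocus the AD plugin, run the operation, release the lock
LockingSystem.lockAndWait("AdOperationLock", workflow.id);
try {
    var configuration = new AD_ServerConfiguration();
    configuration.host = "dc01.corp.mylab.com"; // placeholder value
    configuration.port = 389;
    configuration.ldapBase = "dc=corp,dc=mylab,dc=com"; // placeholder value
    configuration.useSSL = false;
    ConfigurationManager.validateConfiguration(configuration);
    ConfigurationManager.updateConfiguration(configuration);
    // ... run the AD create / update / delete workflow here ...
} finally {
    LockingSystem.unlock("AdOperationLock", workflow.id);
}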

There are limitations with this approach, as you can only ever run one workflow that reads, configures or deletes Active Directory objects at any one time; refocusing cannot happen while existing AD operation workflows are running. However, most Active Directory updates are small operations and the workflows should run pretty quickly, so even if you have a lot of AD operations, the likelihood is that a waiting workflow will only wait a few seconds. Workflows check every 2 seconds to see whether their lock has been released in the vRO database, so the wait will be at most 2 seconds plus the execution time of the currently running workflow. This is still something you need to consider in your environment.

There are other solutions in a multi-domain environment, for example using PowerShell scripts, but if you can keep it all in vRO, you keep all the code in one place and can create modular workflow elements for AD operations; it's one less programming language and dependency to manage in vRO.