Building an automated testing pipeline on Azure DevOps

Recently I came across a not-so-popular beast in the DevOps world: Azure DevOps pipelines. I immediately felt the absence of any clear instructions or articles on the topic. I'm not sure why that is, but Microsoft clearly has some work to do in promoting the tool. Today we will build a pipeline for automated testing inside the Azure cloud.

So, the task:
There is software built with the same Azure DevOps, assembled from a WiX project. (If there is interest, I will write about this tool; in essence, it is a more automated way to build Windows installers that replaces the usual InstallShield.) So, our software builds successfully and produces an artifact, a certain setup.exe, which installs the application on a Windows system. We need to deploy this application to a virtual machine resembling production, copy over the automated tests prepared by the testing team, run them, and collect the results, so the branch can be judged good or bad before merging. Just like in GitLab, only through ... .

As the virtualization environment where we will run our tests, we will obviously use Azure DevTest Labs, an entity within an Azure subscription created precisely for spinning up all sorts of test workloads for reasonable money.

1. Cloud Side Integration


To begin with, we need to integrate our DevTest Labs with Azure DevOps, for which we need a Service Principal, essentially a service account that lets our pipelines reach into the cloud and create/delete resources there on our behalf.

We go into the Azure portal and find the Azure Active Directory service.



We find App Registrations and click New Registration; this creates our Service Principal. I won't go over the settings I chose when creating it, as they may differ between subscriptions.



Now we need to grant rights to our Service Principal. To do this, go to Subscriptions (the icon with a key) and choose our subscription.



Next, in Access Control (IAM), click Add role assignment and find the account we just created by searching for its name. We give it the Contributor role; this is enough.
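If you prefer scripting to clicking through the portal, here is a rough equivalent of the steps above using the classic AzureRM PowerShell module (the display name is arbitrary, and you will still need to create a client secret for the app in the portal):

# a sketch: create the app registration, its service principal, and the role assignment
$app = New-AzureRmADApplication -DisplayName "devtest-pipeline-sp" -IdentifierUris "http://devtest-pipeline-sp"
$sp = New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
# grant the Contributor role on the current subscription, same as in the portal
New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" -ServicePrincipalName $app.ApplicationId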



Next, go back to our Service Principal in Azure AD and open its properties. We will need the IDs listed there (the Application ID and the Tenant ID) later, so save them.

This is where our portal settings end and we go to Azure DevOps.

2. Integration on the Azure DevOps side


First of all, we go into the project settings and select Service Connections. Create a new connection of the Azure Resource Manager type.



Now we need all the IDs that we saved. Click "use the full version of the service connection dialog" and enter the data we received from the Service Principal. Click Verify and, if everything is fine, save the connection. Now our pipelines can use it to connect to the cloud.



3. Creating a pipeline


Now we proceed to the most interesting part: building the pipeline itself. Open the Pipelines → Builds menu.

We are greeted by a dialog for creating a new build, which by default will try to generate a YAML file with a suitable configuration for us. We politely decline and choose the classic editor. Microsoft's desire to do things properly and let you customize pipelines to the fullest via YAML is understandable, but the sparse documentation and the outright brokenness of many modules tell us that it is too early to use this functionality.



From the variety of templates, we need a simple Empty Pipeline. After creating it, we are greeted by an empty editing window, in which we will spend a lot of time.



So, click on + and get into a kind of task marketplace, from which we will need the following components.



Before we proceed with configuring the pipeline tasks, we need to create several files and put them into the project. These are the ARM template of the virtual machine that we will generate in Azure DevTest Labs, a script for getting the machine's IP after it is created, and, optionally, the scripts of our tests or whatever else we want to run on the host.

4. ARM Template Generation


To create a virtual machine, we first need to generate a template for it, a JSON file, which we put into the project code so that the pipeline can read it from there.

We go to our lab, find the Formulas (reusable bases) menu, and click to create a new one.



We are greeted by a long list of images to use as a base and a choice of machine size, all the same as when creating a virtual machine. We will not dwell on this stage and will proceed straight to the last item of the machine's properties, namely artifacts. You can use whatever configurations your environment needs. For example, I join the machine to the domain and add a service account to it as an admin, so that the pipeline can later log into the machine under that account. This can vary, but for successful testing of our code we need one particular artifact, which deserves a closer look.

To install the latest version of the software under test on our machine, we use the Download Azure Pipelines Artifact and Run Script artifact. Remember how at the beginning I said that somewhere a build is producing the application installer? Now we need to tell the virtual machine, or rather the template, to go and fetch that artifact, and not just fetch it but also install it. For this we fill in special fields specifying the project, the build name, and the secret key. The secret key, as in all systems of this kind, is generated in the account, in this case in Azure DevOps, and is stored in Secrets in your lab. There is a small caveat here: saving it in Secrets alone does not help, because the artifact will be downloaded under a different user as part of the pipeline, so we will have to enter the secret key into the template manually once more.

Another artifact that must be included is Configure WinRM; we will need it for subsequent access to the machine. It has only one parameter, the hostname. Since we do not know it in advance, we use the variable %COMPUTERNAME%.
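For reference, this artifact essentially creates a self-signed certificate for the given hostname and an HTTPS listener for WinRM. A rough sketch of that kind of configuration (not the artifact's actual code) looks like this:

# a sketch of what Configure WinRM boils down to; the real artifact script may differ
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.Thumbprint -Force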



So, we have added all the necessary artifacts; now let's get to what we came here for. The generated ARM template is on the Advanced tab of the same formula creation window.



Copy the contents of the page into a VMtemplate.json file and put it in the root of the project. We don't need the cloud anymore; we return to the pipeline.

5. Pipeline configuration


Let's start with the most important and interesting part, creating the virtual machine; it is for this that we did all these integrations and templates. In Azure RM Subscription, we select the service connection that we configured in section 2. Next, the available lab environments pop up. Then we select the JSON we generated and define some mandatory variables. The machine's username and password can be set either directly or through variables, but I'm not at all sure this works: whatever I wrote there, I could not log into the machine with those credentials. The important thing is to set the machine name so that it is always unique; for this I use a build environment variable.
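For illustration, the task's Template field points at the file we committed, and its Parameters field overrides the template's parameters. In my case it looks something like this (newVMName, userName, and password are the parameter names my generated template happened to use, and vmUser/vmPassword are hypothetical pipeline variables; yours may differ):

# Template: $(System.DefaultWorkingDirectory)/VMtemplate.json
-newVMName "test-$(Build.BuildId)" -userName "$(vmUser)" -password "$(vmPassword)"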



Next, we set up another important point. After the machine comes up, we need to know its parameters somehow, or rather, the pipeline needs to know them. To do this, we write a script, for example GetLabVMParams.ps1, and put it there in the project too. I took the text of the script from the Microsoft website but corrected it slightly for my environment, because it retrieved the machine's public IP and FQDN. I have neither, but I do have a private IP, which is not so easy to get, so I added a piece of code.

Param( [string] $labVmId)
$labVmComputeId = (Get-AzureRmResource -Id $labVmId).Properties.ComputeId
# Get lab VM resource group name
$labVmRgName = (Get-AzureRmResource -Id $labVmComputeId).ResourceGroupName
# Get the lab VM Name
$labVmName = (Get-AzureRmResource -Id $labVmId).Name
# Get lab VM public IP address
# $labVMIpAddress = (Get-AzureRmPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).IpAddress
# Get lab VM FQDN
# $labVMFqdn = (Get-AzureRmPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).DnsSettings.Fqdn
# Get lab VM private IP address
$VmNetworkdetails = (((Get-AzureRmVM -ResourceGroupName $labVmRgName -Name $labVmName).NetworkProfile).NetworkInterfaces).Id
$nicname = $VmNetworkdetails.substring($VmNetworkdetails.LastIndexOf("/")+1)
$labVMnetwork = (Get-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $labVmRgName)|Select-Object -ExpandProperty IPConfigurations 
$labVMIpAddress = $labVMnetwork.PrivateIpAddress
# Set a variable labVmRgName to store the lab VM resource group name
Write-Host "##vso[task.setvariable variable=labVmRgName;]$labVmRgName"
# Set a variable labVMIpAddress to store the lab VM IP address
Write-Host "##vso[task.setvariable variable=labVMIpAddress;]$labVMIpAddress"
# Set a variable labVMFqdn to store the lab VM FQDN (stays empty here, since the FQDN lookup above is commented out)
Write-Host "##vso[task.setvariable variable=labVMFqdn;]$labVMFqdn"
Write-Output $labVMIpAddress
Set-Item wsman:\localhost\client\trustedhosts * -Force

Of everything the script reads, we only need the labVMIpAddress variable. Well, that's me; you may need something else, which is why I didn't delete anything and just commented out the excess.

I'll also explain the last line of the script: it allows our build agent to connect to any host via WinRM.
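If the wildcard feels too permissive, the same setting can be scoped down, for example to your lab's subnet (the address range here is an assumption; substitute your own):

# trust only hosts in the lab subnet instead of everything
Set-Item wsman:\localhost\client\trustedhosts -Value "10.0.0.*" -Force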

In the next step, we run our wonderful script. It needs the same connection to the cloud and an input variable with the machine ID, which by that time is already known from the previous step. How? Here we must mention such a wonderful thing as Output Variables. Each step can expose a list of variables that are passed on to subsequent pipeline steps. Accordingly, for our super script such a variable is labVMIpAddress; don't forget to specify it.
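For illustration, assuming the Create VM step exposes its result under the default reference name, the wiring might look like this (field names can differ slightly between task versions):

# Arguments field of the task that runs GetLabVMParams.ps1
-labVmId "$(labVmId)"
# any later task can then reference what the script published via ##vso,
# e.g. use $(labVMIpAddress) as the target machine address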



Further on I do fairly simple things, which, moreover, can vary from case to case. I run a remote script that creates a share, into which I will then upload my scripts.

New-Item "C:\test" -ItemType Directory
New-SmbShare -Name "test" -Path "C:\test" -FullAccess Everyone

As the share's name suggests, we then copy some test script to the machine and execute it in one more step; as the address of the remote machine, our $(labVMIpAddress) variable comes in handy. Next, we use a task that picks files up from the share and copies the script's results to our build environment, and then we save those files in the build artifact with the equally standard task. Once we no longer need the machine, we kill it in the last step. The main difficulty, as the volume of this article shows, is integrating with the cloud and establishing contact with the virtual machine you created; after that you can have as much fun as you like.
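For illustration, here is a minimal sketch of what the "copy and run remotely" steps boil down to if done by hand. The share name test matches the script above, while the script name RunTests.ps1 and the vmUser/vmPassword variables are assumptions from my setup; the credentials belong to the service account that the formula artifact made an admin:

# a sketch: copy the test script to the lab VM's share and run it over WinRM
$ip = "$(labVMIpAddress)"
$pass = ConvertTo-SecureString "$(vmPassword)" -AsPlainText -Force   # hypothetical pipeline variable
$cred = New-Object System.Management.Automation.PSCredential("$(vmUser)", $pass)
Copy-Item -Path ".\RunTests.ps1" -Destination "\\$ip\test\"
Invoke-Command -ComputerName $ip -Credential $cred -ScriptBlock {
    & "C:\test\RunTests.ps1" | Out-File "C:\test\results.txt"
}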

This is my first article, so don't judge too harshly; comments are welcome.
