How to utilize elastic provisioning of Performance Center Load Generators using Kubernetes

Micro Focus Performance Center 12.56 performance testing software introduces the power of Docker for elastic provisioning and de-provisioning of containerized Linux load generator resources, using the Kubernetes container orchestrator.

This approach lets you automate your testing process and make it more efficient by allocating resources on demand, without needing to define or reserve load generators in advance.

Docker, containers, and orchestrators in a nutshell:

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. A container consists of an entire runtime environment: an application, plus all of its dependencies, libraries and other binaries. It also contains all configuration files needed to run it—neatly bundled into one package.
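
For example, pulling an image and starting a container from it takes only two commands. This is a minimal illustration only; the public alpine image below is just a stand-in for any containerized application:

    # Fetch an image (the application plus its dependencies) from a registry
    docker pull alpine:3.8
    # Start a throwaway container from that image with an interactive shell
    docker run --rm -it alpine:3.8 sh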

Over the past few years, Docker containers have become increasingly popular. As a result, the need for automated management of hundreds of containers has led to orchestration solutions such as Docker swarm, Kubernetes and others. Container orchestration refers to the automated arrangement, coordination, and management of software containers.

Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

The following diagram shows Kubernetes high-level architecture:

[Diagram: Kubernetes high-level architecture]

Let’s see how we configure and run elastic load generators in Performance Center with Kubernetes…

Prerequisites:

  • A configured Kubernetes server. For details, see how to configure Kubernetes. Performance Center supports Kubernetes version 1.7 and later. Earlier versions might also be compatible but were not tested.
  • A load generator Docker image. We recommend downloading the image from Docker Hub to a local shared repository for better performance (see the sketch below); if your environment has no internet access, this is a must.
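
On a Docker-enabled host, mirroring the default load generator image into a local repository looks roughly like this. The registry address is a hypothetical placeholder for your own internal registry:

    # Pull the default load generator image from Docker Hub
    docker pull performancetesting/load_generator_linux
    # Re-tag and push it to a local shared repository (placeholder address)
    docker tag performancetesting/load_generator_linux registry.local:5000/performancetesting/load_generator_linux
    docker push registry.local:5000/performancetesting/load_generator_linux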

How to configure a Kubernetes orchestrator in Performance Center Administration:

  1. Log on to Performance Center Administration (http://<pc_server_name>/admin).
  2. Navigate to the Management > Orchestration tab.
  3. Click Add Orchestrator on the toolbar.
  4. Enter the New Orchestrator details:
    • Full URL: Select the connection type (HTTP/HTTPS), and enter the full URL of the Kubernetes server, including the port (https://<server>:<port>).
    • Type: By default this is set to Kubernetes (the only orchestrator type currently supported).
    • Namespace: Kubernetes supports multiple virtual clusters backed by the same physical cluster; these virtual clusters are called namespaces. In this field, specify the namespace in which your containers will be created.
    • Token: Kubernetes uses bearer tokens to authenticate API requests through authentication plugins. Enter the bearer token that Performance Center should use to authenticate with the Kubernetes API server (see the kubectl sketch after these steps).
      • Assign Images: Associate one or more elastic load generator images with the orchestrator, to be used during the test. You can use the default image provided on Docker Hub, or create a custom image.

Click the + button to open the Assign Images to the Orchestrator dialog box, and select the images you want to associate with the orchestrator. Click “Assign”.

[Screenshot: Assign Images to the Orchestrator dialog box]

Note: If no image is assigned, Performance Center uses the default performancetesting/load_generator_linux image from Docker Hub.

    • Use Monitoring: Select this option to enable collecting metrics on the Kubernetes container cluster using the Heapster monitoring solution. Enter the Heapster server URL, including the port, in the format <server_name>:<port>.
    • Resource Limits: You can limit the available memory and CPU resources per container. These limits are displayed when assigning load generators in the Performance Test Designer, and the user cannot exceed them.
    • Use External Storage: Specify an external shared location to store runtime files. This should be a mount path on each physical Linux node. While this setting is not mandatory, we recommend using it to prevent results from being lost or becoming inaccessible if result collation fails (containers are automatically de-provisioned after the run).

5. Assign Projects

In the Linked Projects area, click “Assign Projects” to open the Assign Projects to the Orchestrator dialog box.

Select the projects you want to assign to the orchestrator, and click “Assign”. The selected projects are added to the Linked Projects grid.

6. Click Save
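
As a rough sketch of the Kubernetes-side preparation for the Namespace and Token fields: the namespace and service account names below are placeholders, and the cluster-admin role is used only for brevity (your cluster's RBAC policy may call for a narrower role).

    # Create a namespace for the elastic load generator containers
    kubectl create namespace performance-center

    # Create a service account whose bearer token Performance Center will use
    kubectl create serviceaccount pc-orchestrator -n performance-center

    # Grant it permissions (cluster-admin here only for brevity)
    kubectl create clusterrolebinding pc-orchestrator-binding \
        --clusterrole=cluster-admin \
        --serviceaccount=performance-center:pc-orchestrator

    # Read the token from the service account's auto-generated secret
    SECRET=$(kubectl get serviceaccount pc-orchestrator -n performance-center -o jsonpath='{.secrets[0].name}')
    TOKEN=$(kubectl get secret "$SECRET" -n performance-center -o jsonpath='{.data.token}' | base64 --decode)

    # Sanity-check the URL and token that will be entered in the orchestrator form
    curl -k -H "Authorization: Bearer $TOKEN" https://<server>:<port>/api/v1/namespaces/performance-center/pods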

Assign containers to groups:

  1. Log on to a Performance Center project that has been assigned a Kubernetes orchestrator.
  2. Open a load test in the Performance Test Designer, and click the “Assign LG” button on the toolbar.
  3. Expand the “Elastic” tab to display the list of elastic load generators.

Notes:

These are only logical names. The actual load generator containers are created on test execution, and are automatically terminated once the test finishes.

If you defined different image types or set Memory/CPU limits, you can see your selections here, and these selections will affect all containers.

[Screenshot: elastic load generator image type and memory/CPU limit selections]

    • Select elastic load generators, and the groups to which you want to assign them.
    • Click the “Assign” button. The load generators are assigned to the selected groups. You can expand them to view more information, or to unassign them.
    • Click “OK”. The load generators are updated in the Groups & Workload tab of the Performance Load Test Designer.

[Screenshot: elastic load generators in the Groups & Workload tab]

Run the performance test:

When you start the performance test, the elastic load generators are provisioned:

[Screenshot: elastic load generators being provisioned at the start of the test run]
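
If you want to watch this happening on the Kubernetes side, the usual kubectl commands work. The namespace below matches the placeholder used in the earlier sketch, and the pod name is illustrative:

    # Watch the load generator pods appear in the orchestrator's namespace
    kubectl get pods -n performance-center -w

    # Inspect one of them to confirm the image and any memory/CPU limits applied to it
    kubectl describe pod <elastic_lg_pod_name> -n performance-center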

During the test run, you can see a graph for the Heapster server that was defined for the orchestrator:

[Screenshot: Heapster monitor graph during the test run]
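
You can also query the Heapster endpoint directly to confirm it is reachable and serving metrics, assuming the standard Heapster health and model API endpoints; the server name and port are the ones configured for the orchestrator:

    # Basic health check of the Heapster server
    curl http://<server_name>:<port>/healthz

    # List the cluster-level metrics Heapster currently exposes
    curl http://<server_name>:<port>/api/v1/model/metrics/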

When the test finishes, the results are collated and the elastic load generators are automatically terminated.
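
A quick way to confirm the clean-up on the Kubernetes side (same placeholder namespace as above):

    # After the run, the namespace should no longer contain load generator pods
    kubectl get pods -n performance-center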

After the data has been analysed, all the information is visible in the Summary report, including the Heapster graph:

[Screenshot: Summary report with the Heapster graph]

You can also view the Performance Center and Kubernetes integration video for the full flow.

To learn more about elastic provisioning and de-provisioning of load generators using Kubernetes, see Manage Elastic Load Generators in the Performance Center Help. You can also learn more about Performance Center at the product page.
