LOAD TESTING OF KMS

Shahid Algur, Quality Assurance Architect

Testing WebRTC applications is a complex activity. Everything from basic one-to-one calling to large group video calls must be exercised, and determining how to optimize for these scenarios requires simple yet powerful automation.

Kurento is a WebRTC media server and a set of client APIs that supports the development of advanced video applications for the web and mobile platforms. Kurento Media Server features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows.

Load testing any application requires creating a set of virtual users, mimicking user actions and simulating the load on the server. There is a big difference between load testing traditional web applications and WebRTC applications. JMeter is a popular tool for testing traditional web applications. However, since WebRTC involves peer-to-peer media communication through the browser, it requires a different approach to simulate such a usage pattern. The challenge magnifies as the testing involves sending and receiving Audio and Video streams, where the quality, frame rate and lag of all the streams between the endpoints must be measured, along with client behaviors, server profiling, load balancers and the performance of the KMS server itself. This makes the choice of technology stack crucial.

In order to load test the KMS server, we need to automate test execution using real web browsers (Chrome, Firefox, Safari etc.), since these browsers have built-in WebRTC implementations. We also need many browser instances running across multiple machines. Selenium Grid is the part of the Selenium Suite that specializes in running multiple tests across different browsers, operating systems and machines in parallel. It supports distributed test execution and helps run tests in a distributed test environment.

Hub and node

Selenium Grid has 2 basic entities:

Hub/Master

  • A Selenium Hub is a central point (which can be a local machine) that receives all the test requests and distributes them to the right nodes. The machine that actually triggers the test cases is known as the Hub.
  • There can be only one hub in a Selenium grid.
  • The machine containing the hub triggers the test case, but the actual test execution is carried out on Node machines.

Node/Slaves

  • Nodes are the Selenium instances that will execute the tests that are loaded on the hub.
  • There can be one or more nodes in a grid.
  • Nodes can be launched on multiple machines with different operating systems and browsers.
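As a sketch of how the hub and nodes are wired together, assuming the Selenium Grid 3 standalone jar (the jar version and the hub IP below are placeholders to adapt to your environment):

```shell
# On the hub machine: start the Grid hub (listens on port 4444 by default).
java -jar selenium-server-standalone-3.141.59.jar -role hub

# On each node machine: register the node with the hub.
# 203.0.113.10 is a placeholder for the hub's IP address.
java -jar selenium-server-standalone-3.141.59.jar -role node \
    -hub http://203.0.113.10:4444/grid/register \
    -maxSession 5
```

Once registered, the hub's console (http://hub-ip:4444/grid/console in Grid 3) lists the nodes and the sessions each can accept.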

The test bed can be set up as an on-premises solution or a cloud solution. There are several players in the public cloud / SaaS market, such as Sauce Labs and BrowserStack. Using a cloud service is a good approach, as these providers offer the entire infrastructure with a large range of browser and OS combinations.

In this article, we will look at a Selenium hub and node setup on the AWS cloud. The entire solution needs to be hosted in a single Region, within a single custom VPC. All Selenium Grid components, Hub and Nodes, should fall within the same subnet. The Hub machine acts as the Master and requires a GUI to watch the node instances. Testing nodes can be operated in non-GUI mode and act as Slaves. Tests need to be executed in headless mode, which facilitates faster test execution and lower resource usage on client machines.
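As a minimal sketch of what a client test script sends to the hub, assuming the Selenium Python bindings and a hypothetical hub address: the capabilities below request a headless Chrome session, and the `--use-fake-*` flags (real Chrome switches commonly used for WebRTC testing) make Chrome auto-accept media permission prompts and feed synthetic audio/video instead of requiring real devices.

```python
def headless_chrome_options():
    """Build the Chrome arguments a grid node needs for headless WebRTC tests."""
    return {
        "args": [
            "--headless",                         # no GUI on node machines
            "--disable-gpu",
            "--no-sandbox",
            "--use-fake-ui-for-media-stream",     # auto-accept cam/mic permission prompts
            "--use-fake-device-for-media-stream", # synthetic A/V instead of real hardware
        ]
    }

# Desired capabilities sent to the hub when a session is requested.
capabilities = {
    "browserName": "chrome",
    "goog:chromeOptions": headless_chrome_options(),
}
```

With selenium installed, these capabilities would be passed to `webdriver.Remote(command_executor="http://<hub-ip>:4444/wd/hub", ...)` (hub address hypothetical); the hub then routes the session to a free node.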

 

Testing is categorized into 3 phases:

1. Test Setup phase:

  • This phase includes creating and setting up the test environment, installing software, setting up security policies and opening the required Outgoing/Incoming ports.
  • While forming the grid, proper timeouts and configurations should be specified.
  • Modify the system configuration to increase the file descriptor limit, allowing a large number of socket and API connections on the backend application server.
  • Modify the system configuration to increase DefaultTasksMax on the KMS server. Since the KMS instance has a dedicated process running the KMS server, we can allocate the maximum available tasks to the KMS process itself.
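As an illustration of the last two bullets (the unit name and limit values are assumptions to adapt to your deployment), both tweaks can be applied on the KMS host with a systemd drop-in for the Kurento service:

```ini
# Hypothetical drop-in: /etc/systemd/system/kurento-media-server.service.d/limits.conf
[Service]
# Lift the per-service task cap so the dedicated KMS process can use as many
# tasks as it needs (equivalent to raising DefaultTasksMax for this service).
TasksMax=infinity
# Raise the open-file-descriptor limit for socket-heavy load tests.
LimitNOFILE=65536
```

Run `systemctl daemon-reload` and restart the service after adding the drop-in. On the backend application server, the file descriptor increase can likewise be made via `ulimit -n` or `/etc/security/limits.conf`.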

2. Test Execution phase:

  • This phase includes creating the necessary test data that acts as input for the test scripts.
  • Specify IPs in the Nodelist file and provide appropriate values in the config file.
  • Tests can be executed either sequentially or in parallel depending on the test scenarios.
  • Session duration can be set from a few minutes to hours. Certain actions will be triggered on client browsers to avoid timeout. All such settings should be configured in the property file.
  • Resource profiling of the server should be done while test execution is in progress.
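The timeout-avoidance bullet above can be sketched as a small scheduling helper; the function and its parameters are illustrative, not part of any framework:

```python
def keepalive_schedule(session_secs, idle_timeout_secs, safety_margin_secs=60):
    """Return the offsets (in seconds) at which a client browser should
    perform a trivial action (e.g. a mute/unmute toggle) so the session
    is never idle long enough to be timed out by the server."""
    interval = idle_timeout_secs - safety_margin_secs
    if interval <= 0:
        raise ValueError("safety margin must be smaller than the idle timeout")
    return list(range(interval, session_secs, interval))

# A 30-minute session with a 5-minute idle timeout and a 1-minute margin:
# trigger a client action every 4 minutes.
schedule = keepalive_schedule(1800, 300, 60)
```

The values derived here would live in the property file mentioned above, so the same script can serve short smoke runs and multi-hour soak tests.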

3. Test Analysis phase:

  • This phase includes analyzing the outcome of test execution. Test reports are created to show the status of test execution.
  • Analyze screenshots and client browser logs in case of failure or Audio/Video loss.
  • Create a detailed test report with the status of test execution, server statistics and resource metrics graphs.

 

Types of Testing

Tests are broadly classified on the basis of their objectives:

1. Load Testing

  • Evaluate the number of meetings that can be accommodated on a particular KMS instance by monitoring Memory and CPU utilization while gradually increasing the load during testing.
  • Ensure that all meetings work as functionally intended without any performance degradation by capturing screenshots of the admin portal.
  • Perform test actions based on usage and simulate various user patterns; for example, users can perform actions such as Audio calls, Video calls, Audio calls with Screenshare, Video calls with Screenshare, Annotations, Messages etc.
  • Ensure there are no memory leaks on the backend application server by comparing memory usage before and after execution of the load test.
  • Ensure that all WebSockets are released by the backend application server after test completion.
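The memory-leak check above can be expressed as a simple pre/post comparison; the function name and tolerance are illustrative:

```python
def memory_leak_suspected(pre_mb, post_mb, tolerance_pct=5.0):
    """Compare backend-server memory before and after a load test.
    Growth beyond the tolerance (measured after all sessions are torn
    down and websockets released) is flagged for investigation."""
    growth_pct = (post_mb - pre_mb) / pre_mb * 100.0
    return growth_pct > tolerance_pct

# e.g. 2048 MB before, 2100 MB after -> ~2.5% growth, within tolerance
```

In practice the pre/post samples would come from the resource profiling collected during the execution phase (e.g. RSS of the application server process).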

2. Scalability Tests

There are a few challenges in supporting horizontal scaling of KMS due to the following constraints:

1. We need to ensure that all users of a single meeting are connected to the same KMS instance.

2. The traditional scale-out approach based on CPU and memory won't work due to the constraint mentioned in point #1; instead, we need to reserve room for every meeting based on the maximum number of users supported in a meeting.

3. The scale-in decision also needs to be based on the number of meetings currently held by a KMS instance instead of its CPU and memory consumption.

Due to the above constraints, KMS scalability is achieved by implementing a dedicated Seed Node server which is responsible for redirecting users to the appropriate KMS instance and manages Scale-in and Scale-out of KMS instances. Additionally, there should be a portal to monitor KMS instances and their status, such as CPU, memory consumption, number of meetings etc.
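A minimal sketch of the Seed Node placement logic described above (the class and method names are hypothetical; a real Seed Node would also track CPU/memory thresholds and an idle grace period):

```python
class SeedNode:
    """Routes meetings to KMS instances by meeting count, not raw CPU/memory,
    since every user of a meeting must land on the same instance."""

    def __init__(self, meetings_per_instance):
        self.capacity = meetings_per_instance     # room reserved per instance
        self.instances = {"kms-1": 0}             # instance id -> active meetings

    def place_meeting(self):
        # Route to an instance with spare room; all users of this meeting
        # will be redirected to the same instance.
        for name, count in self.instances.items():
            if count < self.capacity:
                self.instances[name] += 1
                return name
        # Scale OUT: every instance is at its meeting threshold.
        name = f"kms-{len(self.instances) + 1}"
        self.instances[name] = 1
        return name

    def end_meeting(self, name):
        self.instances[name] -= 1
        # Scale IN: an instance with zero meetings is marked idle and can be
        # stopped after a predefined grace period.
        return self.instances[name] == 0
```

The same counters drive the monitoring portal: the per-instance meeting count is exactly the metric the scale-in/scale-out tests below exercise.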

The Seed Node server has configurations such as threshold CPU, Memory, number of meetings etc., and it starts with a batch of KMS instances. The following scenarios detail Scalability testing:

Profiling to determine KMS benchmark

Perform a profiling activity on KMS by increasing the load one meeting at a time and monitoring CPU and Memory usage. This test helps determine the threshold value which acts as the pivot point for the Seed Node configuration.
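The threshold derivation can be sketched as follows, assuming profiling produced (meetings, CPU%, memory%) samples; the limits and sample values are illustrative:

```python
def meeting_threshold(profile, cpu_limit_pct=70.0, mem_limit_pct=80.0):
    """Derive the Seed Node meeting threshold from profiling samples.

    `profile` is a list of (meetings, cpu_pct, mem_pct) tuples taken while
    adding one meeting at a time; the threshold is the highest meeting
    count at which both resources stayed under their limits."""
    ok = [m for m, cpu, mem in profile if cpu < cpu_limit_pct and mem < mem_limit_pct]
    return max(ok) if ok else 0

# Illustrative samples from an incremental profiling run.
samples = [(1, 8, 10), (2, 15, 18), (3, 24, 27), (4, 35, 38),
           (5, 48, 52), (6, 63, 68), (7, 78, 84)]
```

With these samples and limits, the seventh meeting pushes CPU past 70%, so the Seed Node would be configured with a threshold of 6 meetings per instance.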

Profiling KMS by running a load test on the KMS server

  • Scale OUT: Increase the load of meetings and users using the load script so that the threshold value on the Seed Node server is crossed. Verify that beyond this point no further meetings are hosted on the current KMS instance and that the Seed Node server spins up a new KMS instance.

  • Scale IN: Stop the currently running meetings using the load test script. Ensure that when the meeting count on a particular KMS instance reaches zero, it is marked as idle and stopped after a predefined time.

Results

By following the above steps, we are able to determine the performance of the WebRTC application under various load scenarios and verify whether the application adheres to the expected performance requirements for a given load.