Contents

Changes in this test

Compared to the OpenMeetings 140 users test, this run reduces the CPU cost parameter of the Scrypt implementation from 1024 * 8 to 256.
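
For reference, a minimal sketch of what the reduced CPU cost means in code, assuming a Bouncy Castle SCrypt call; the surrounding class, the r/p/key-length values and the constant names are illustrative assumptions, not the actual OpenMeetings implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import org.bouncycastle.crypto.generators.SCrypt;

public class ScryptCostExample {
    // CPU cost (N): previously 1024 * 8 = 8192, reduced to 256 in this test.
    private static final int CPU_COST = 256;
    private static final int MEMORY_COST = 8;   // r (block size); assumed value
    private static final int PARALLELISM = 1;   // p; assumed value
    private static final int KEY_LENGTH = 64;   // derived key length in bytes; assumed value

    public static byte[] hash(String password, byte[] salt) {
        // A lower N means fewer Scrypt iterations and therefore less CPU per login.
        return SCrypt.generate(password.getBytes(StandardCharsets.UTF_8),
                salt, CPU_COST, MEMORY_COST, PARALLELISM, KEY_LENGTH);
    }

    public static void main(String[] args) {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        System.out.println(hash("secret", salt).length); // 64-byte derived key
    }
}
```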

Test Details

The test was run as follows:

  • 140 users
  • staggered to enter over a period of around 5-10 minutes
  • distributed into
    • 10 conference rooms 4x4 = 40 users
    • 5 webinars with 21 users each = 105 users
  • Each test run calls the API to login/createRoomHash and then loads the room URL (plus starts a webcam/audio stream in the conference rooms); see the sketch directly below.
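
A rough sketch of what each simulated user does, using java.net.http; the base URL, the endpoint paths, the parameter names and the omission of response parsing are assumptions for illustration, not the actual test driver:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RoomEntrySketch {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    // Base URL and endpoint paths are assumptions for illustration only.
    private static final String BASE = "http://localhost:5080/openmeetings/services";

    public static void main(String[] args) throws Exception {
        // 1. Login via the web service API; response parsing is omitted and the
        //    body is treated as the session id (SID) for brevity.
        String sid = get(BASE + "/user/login?user=admin&pass=secret");

        // 2. Ask the API for a room hash the user can enter the room with.
        String hash = get(BASE + "/room/hash?sid=" + sid + "&roomId=1");

        // 3. The test driver then loads the room URL in a browser; in the
        //    conference rooms it additionally starts a webcam/audio stream.
        String roomUrl = "http://localhost:5080/openmeetings/hash?secure=" + hash;
        System.out.println("Would open: " + roomUrl);
    }

    private static String get(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```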

Hardware:

  • OpenMeetings: 4 GB RAM, 1 core
  • KMS: 4 GB RAM, 1 core

Test Results

  • The login command now performs without any increase in duration. It is completely gone from being a problematic call.
  • However, StreamProcessor::onMessage now shows a rising curve, and handling a single message in this method suddenly takes almost 1 second. This was not previously the case. It seems we have moved the bottleneck to the next level: we sped up the login command, but because login is now much faster, the next method becomes problematic. A sketch of how this per-message time could be measured follows this list.
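
A minimal sketch of how the per-message duration could be recorded with a Micrometer Timer and exposed to Prometheus; the metric name and the handler are placeholders, not the real StreamProcessor code:

```java
import io.micrometer.core.instrument.Timer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class OnMessageTimingSketch {
    private static final PrometheusMeterRegistry REGISTRY =
            new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

    // One timer per message handler; percentiles make slow single messages visible.
    private static final Timer ON_MESSAGE_TIMER = Timer.builder("streamprocessor.onmessage")
            .publishPercentiles(0.5, 0.95, 0.99)
            .register(REGISTRY);

    public static void onMessage(String msg) {
        // Record how long handling one WebSocket message takes.
        ON_MESSAGE_TIMER.record(() -> handle(msg));
    }

    private static void handle(String msg) {
        // Placeholder for the real message handling work.
    }

    public static void main(String[] args) {
        onMessage("onIceCandidate");
        System.out.println(REGISTRY.scrape()); // Prometheus exposition format
    }
}
```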

Video pods not triggering:

  • There are still lots of video pods not triggering.
  • CPU still spikes to 95%; the profile is the same as before.

Link to dashboard

I will delete these dashboards again shortly

Test1 - Prometheus Dashboard with metrics 

Test2 - Prometheus Dashboard with metrics

Test3 - Prometheus Dashboard with metrics

Graphs

CPU and memory usage

Login web service call is fine; all web service calls are fine

None of the calls takes longer than 0.35 seconds. That seems okay.

None of the DatabaseDao calls that have metrics seems to take long; login is still the longest

Login takes the longest, but it is only around 0.15 seconds, so really nothing to be concerned about, I think.

StreamProcessor single message event takes almost 2 seconds

=> Now this looks strange. That is a single WebSocket message. I don't know if it's specific messages or all of them; I have to look into it in more detail.

Previously, probably because login took so long, this wasn't such a problem.

Adding Listener takes 2 seconds

The method that takes almost 2 sec here is: 

Every broadcast is refreshed - immediately 

On every stream it calls:

  • broadCastStart 
  • broadCastRestart

=> What is the point of restarting every single stream again immediately by default?
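
A purely hypothetical sketch of guarding the restart so that a stream is only restarted when its settings actually changed; the class and method names merely mirror the wording above and are not the real OpenMeetings API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BroadcastGuardSketch {
    // Remembers the last known settings per stream (keyed by stream id).
    private final Map<String, String> lastSettings = new ConcurrentHashMap<>();

    public void onStreamUpdate(String streamId, String settings) {
        String previous = lastSettings.put(streamId, settings);
        if (previous == null) {
            broadCastStart(streamId);          // first time: start the broadcast
        } else if (!previous.equals(settings)) {
            broadCastRestart(streamId);        // restart only if something changed
        }
        // Unchanged settings: no restart, avoiding redundant renegotiation.
    }

    private void broadCastStart(String streamId) { /* placeholder */ }
    private void broadCastRestart(String streamId) { /* placeholder */ }
}
```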

onIceCandidate is called 1600 times

During this initialisation, for ~50 streams, StreamProcessor::onMessage(onIceCandidate) is called 1600 times.

=> Is it really necessary to package all those WebSocket calls up in onIceCandidate? Couldn't there be another way for the client to get the possible iceCandidates?
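
One hypothetical alternative would be to coalesce candidates and send them in batches rather than one WebSocket message per candidate; the sketch below only illustrates that batching pattern and is not based on the actual signalling code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IceCandidateBatcher {
    private final List<String> pending = new ArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public IceCandidateBatcher() {
        // Flush collected candidates every 250 ms instead of sending each one immediately.
        scheduler.scheduleAtFixedRate(this::flush, 250, 250, TimeUnit.MILLISECONDS);
    }

    public synchronized void onIceCandidate(String candidate) {
        pending.add(candidate);
    }

    private synchronized void flush() {
        if (pending.isEmpty()) {
            return;
        }
        // One WebSocket message carrying many candidates instead of ~1600 single messages.
        sendOverWebSocket(new ArrayList<>(pending));
        pending.clear();
    }

    private void sendOverWebSocket(List<String> candidates) {
        System.out.println("sending batch of " + candidates.size() + " candidates");
    }
}
```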

Tomcat threads are better used

This actually looks better than previously, because it now seems to start using more threads to handle the load. But it is still not enough to prevent the CPU spike.







