We’ve benchmarked Huddle01 on both the client side and the server side.

CLIENT SIDE

We're sharing performance videos of Zoom vs Huddle01 recorded in a similar environment. Here's the link.

Huddle01 Performance Tests - Google Drive

It contains 4 videos across two comparisons: Huddle01 Render vs Zoom Render, and Huddle01 CPU vs Zoom CPU.

Here's an explanation of the videos, along with a summary:

  1. CPU Utilization: In the first set of videos, we show the CPU utilization, latency, and other call stats during calls with 8 participants with video on.

    The Zoom call sends a 640p video stream while receiving only a 320p stream, with ~200ms latency. In the same setup, Huddle01 sends and receives a 720p stream with ~50ms latency and roughly the same CPU utilization. This is because Huddle01 uses simulcast and serves calls from load-aware, region-aware servers (a simulcast sketch follows this list).

  2. Re-rendering: In the second set of videos, we show the optimizations we made to how our components render, compared to Zoom's. A re-render essentially means re-mounting/re-creating the HTML nodes inside the web Document Object Model (DOM). Zoom re-renders the whole grid on micro-interactions. Avoiding this remounting, if done smartly, reduces CPU load and gives end users a faster experience (a sketch of this pattern also follows the list).

    The tests show significant improvements over Zoom's web client; the summary is below:

    Summary:
    - Latency: ~200ms (Zoom) vs ~50ms (Huddle01)
    - CPU utilisation (Apple M1 processor): 9.1% (Zoom) vs 7.4% (Huddle01)
    - Browser used: Chrome with a nominal number of extensions

    Great play for SDKs: Since we have a much better grasp on UI (better re-rendering), our custom UI kits coupled with our RTC SDKs will let developers, builders, and communities ship more performant applications.

    To verify the re-rendering behaviour, we used the React Developer Tools profiler; link here.
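To make the simulcast point concrete, here is a minimal sketch of how a publisher offers multiple layers of one track using the browser's standard WebRTC API. The rid labels, scale factors, and bitrate caps are illustrative assumptions, not Huddle01's actual encoder configuration:

```ts
// Minimal simulcast sketch using the standard browser WebRTC API.
// The rid labels, scale factors, and bitrate caps are illustrative
// assumptions, not Huddle01's actual encoder settings.
async function publishSimulcast(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [videoTrack] = stream.getVideoTracks();

  // Publish three spatial layers of the same track. The SFU then forwards
  // whichever layer each subscriber's bandwidth can sustain, instead of
  // transcoding or downgrading everyone to the weakest link.
  pc.addTransceiver(videoTrack, {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'h', maxBitrate: 1_200_000 },                         // full 720p
      { rid: 'm', scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // half res
      { rid: 'l', scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // quarter res
    ],
  });
}
```

Because every layer is already published, a load-aware SFU can switch a subscriber between layers on the fly, without renegotiating with the sender.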
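Similarly, for the re-rendering point, here is a minimal sketch of the memoization pattern that avoids grid-wide re-renders in React. The component and prop names are hypothetical, not taken from Huddle01's UI kit:

```tsx
import React, { memo } from 'react';

type TileProps = { peerId: string; stream: MediaStream | null };

// Hypothetical tile component: memo() skips re-rendering a tile whose
// props are unchanged, so a micro-interaction (e.g. one peer muting)
// re-renders only that peer's tile instead of remounting the whole grid.
const PeerTile = memo(function PeerTile({ stream }: TileProps) {
  return (
    <video
      autoPlay
      playsInline
      ref={(el) => {
        // Attach the stream only when it actually changes.
        if (el && el.srcObject !== stream) el.srcObject = stream;
      }}
    />
  );
});

function Grid({ peers }: { peers: TileProps[] }) {
  // Stable keys let React reconcile tiles in place rather than
  // recreate their DOM nodes when the peer list updates.
  return (
    <div className="grid">
      {peers.map((p) => (
        <PeerTile key={p.peerId} peerId={p.peerId} stream={p.stream} />
      ))}
    </div>
  );
}
```

With memo() and stable keys, a micro-interaction re-renders only the affected tile; the rest of the grid's DOM nodes stay mounted, which is exactly what the re-render highlighting in React Developer Tools makes visible.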

SERVER SIDE

Measuring performance

Huddle01 can scale to many simultaneous rooms by running a distributed setup across multiple nodes. However, each room must fit within a single node.

For this reason, the benchmarks below focus on stressing the number of concurrent users in a single room.

With WebRTC SFUs, the main proxy for how much work the server is doing is total forwarded bitrate.

Total forwarded bitrate = per-track bitrate * tracks published * number of subscribers

This also correlates with the total number of streams the SFU has to manage, i.e., CPU load.

For example, for a room with 6 publishers, each publishing one video stream at 600 kbps, and 100 subscribers, the total forwarded bitrate would be 600 kbps * 6 * 100 ≈ 360 Mbps.
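The same arithmetic as a small sketch (the function and variable names are ours, for illustration only):

```ts
// Sketch of the forwarded-bitrate formula above; names are illustrative.
function totalForwardedBitrateKbps(
  perTrackBitrateKbps: number,
  tracksPublished: number,
  subscribers: number,
): number {
  return perTrackBitrateKbps * tracksPublished * subscribers;
}

// The example room: 6 publishers, each with one 600 kbps video track,
// and 100 subscribers.
const kbps = totalForwardedBitrateKbps(600, 6, 100);
console.log(`${kbps / 1000} Mbps`); // "360 Mbps"
```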

Headless Chrome