Enhancing Dhali with Redis Cache Layer
Dhali has introduced a Redis cache layer to our backend to reduce request latency. Here, we discuss the integration, present our performance results, and explore future improvements.
Platform Behavior as a Proxy
Dhali operates as a sophisticated API proxy, cryptographically validating blockchain transactions in real time, off the ledger. This approach allows us to monetise other providers' APIs in real time, without the traditional bottlenecks associated with ledger closures. However, because we act as a proxy, we necessarily introduce latency.
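In practice, this means each request carries a signed payment-channel claim that we verify locally before forwarding. The sketch below illustrates the flow under stated assumptions; `verify_claim`, `proxy_request`, and the upstream URL are hypothetical placeholders rather than our actual implementation:

```python
import requests

PROVIDER_URL = "https://provider.example.com/api"  # hypothetical upstream API


def verify_claim(claim: bytes) -> bool:
    # Hypothetical stub: a real implementation verifies the claim's
    # signature against the payment channel's public key, entirely
    # off-ledger, so no ledger closure is ever awaited.
    return True


def proxy_request(claim: bytes, payload: dict) -> requests.Response:
    # The added latency is signature verification plus the forwarding
    # hop; there is no blockchain round-trip on the hot path.
    if not verify_claim(claim):
        raise PermissionError("invalid payment claim")
    return requests.post(PROVIDER_URL, json=payload, timeout=10)
```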
Low Latency and Redis' Role
API providers typically value low latency. This is where Redis, an in-memory data structure store, helps. Redis excels in low-latency data access and manipulation, providing sub-millisecond response times. By leveraging Redis for caching, we can store layer-2 transaction data and API metadata temporarily, while our primary storage is asynchronously updated in batches.
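This is essentially a write-behind cache: reads and writes hit Redis first, and a background job flushes accumulated changes to primary storage in batches. Below is a minimal sketch using the `redis-py` client, with hypothetical key names and a stubbed primary-store helper:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)  # assumed co-located Redis

DIRTY_SET = "dirty_channels"  # hypothetical set tracking unflushed writes


def record_claim(channel_id: str, claim: dict) -> None:
    # Write the latest channel state to Redis (sub-millisecond) and mark
    # the channel dirty so the batch flusher picks it up later.
    pipe = r.pipeline()
    pipe.set(f"channel:{channel_id}", json.dumps(claim))
    pipe.sadd(DIRTY_SET, channel_id)
    pipe.execute()


def write_batch_to_primary(batch: list) -> None:
    # Hypothetical stub: persist the batch to the primary database.
    pass


def flush_to_primary() -> None:
    # Runs asynchronously (e.g. on a timer): drain the dirty set and
    # persist everything to primary storage in one batch.
    dirty = r.spop(DIRTY_SET, r.scard(DIRTY_SET)) or []
    batch = [(cid, r.get(f"channel:{cid.decode()}")) for cid in dirty]
    write_batch_to_primary(batch)
```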
Performance Results
We benchmarked the following scenarios to understand the impact of our upgrades:

1. Calls made directly to a dummy API based in us-central1.
2. Calls made to the API in 1. via Dhali with Redis in us-central1.
3. Calls made to the API in 1. via Dhali with our old DB in us-central1.
Our strategy was to ramp up requests (from the UK) over a single payment channel, up to the 70 requests per second (RPS) limit we impose per payment channel. Note that users can achieve higher rates by spreading requests to an API across more than one payment channel.
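To give a flavour of the setup, here is a minimal sketch of such a ramp, assuming a hypothetical endpoint URL and omitting the payment-claim headers a real Dhali request would carry:

```python
import asyncio
import time

import aiohttp

TARGET_URL = "https://dhali.example/run/dummy-api"  # hypothetical endpoint
MAX_RPS = 70  # the per-payment-channel limit described above


async def hit(session: aiohttp.ClientSession, latencies: list) -> None:
    # Time a single request end to end and record the latency in ms.
    start = time.perf_counter()
    async with session.get(TARGET_URL) as resp:
        await resp.read()
    latencies.append((time.perf_counter() - start) * 1000)


async def ramp(duration_s: int = 60) -> list:
    # Ramp linearly from ~1 RPS up to MAX_RPS over duration_s seconds.
    latencies: list = []
    tasks = []
    async with aiohttp.ClientSession() as session:
        for second in range(duration_s):
            rps = max(1, round(MAX_RPS * (second + 1) / duration_s))
            tasks += [asyncio.create_task(hit(session, latencies))
                      for _ in range(rps)]
            await asyncio.sleep(1)
        await asyncio.gather(*tasks)
    return latencies


if __name__ == "__main__":
    results = asyncio.run(ramp())
    print(f"avg={sum(results) / len(results):.0f}ms "
          f"min={min(results):.0f}ms max={max(results):.0f}ms")
```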
Here are our results:
| Method | Average Response Time | Minimum Response Time | Maximum Response Time | Scenario |
|---|---|---|---|---|
| Direct API call | 170ms | 114ms | 1438ms | 1 |
| Dhali with Redis | 279ms | 134ms | 993ms | 2 |
| Dhali with old DB | 1272ms | 215ms | 8197ms | 3 |
Future Latency Improvements
While the integration of Redis has markedly improved our platform's performance, we are still looking for ways to reduce latency further. One avenue is global load balancing, which would route each request through the Dhali instance nearest its API provider, minimizing in-flight delays for our globally distributed API providers.