Engineer for the future of Cloud
June 10-13, 2019
San Jose, CA

Scale data access with app layer caching (sponsored by Salesforce)

Anil Jacob (Salesforce)
1:25pm–2:05pm Thursday, June 13, 2019
Sponsored
Location: LL20 C
Average rating: 4.00 (1 rating)

Level

Intermediate

Prerequisite knowledge

  • A basic knowledge of web applications, application architectures, databases, and networks

What you'll learn

  • Learn how application-layer caching can benefit your business and improve database response times

Description

In applications such as customer relationship management (CRM), many systems and users reference the same commonly used data, which causes contention on the database: it is a shared resource, and the same data is read repeatedly. Accessing data from the database tier is also slower than accessing it in the application server tier. The problem is aggravated when the application or its customizations are not implemented to minimize database calls, and the database becomes a bottleneck under the flood of requests. The result is user frustration, costly hardware additions, and a business that slows down and misses its targets.

Anil Jacob walks you through how to protect shared resources such as databases and other information stores, which do complex work, are costly, and are critical to application health. Protecting them lets applications scale with increasing demand, reduces dependency on hardware and its costs, and keeps businesses and users happy.

When data doesn’t change often but is read frequently between changes, an application cache can hold the data from the database. Whenever data is created or updated in the database, the change can trigger a write to the cache. The application code uses the cache as its data source and retrieves cached data efficiently by key. The cache acts as a layer in front of the database, protecting it from the huge number of user and application requests.
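As a rough illustration of that write-through flow (not code from the session), the sketch below uses an in-memory map as a stand-in for a distributed or platform cache; the AccountRecord and AccountDatabase names are hypothetical placeholders for real record types and data-access code.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal write-through sketch: every database write also refreshes the cache,
// so subsequent reads can be served from the cache keyed by record id.
// ConcurrentHashMap stands in for a shared or platform cache; AccountRecord and
// AccountDatabase are hypothetical placeholders.
public class WriteThroughAccountStore {

    private final Map<String, AccountRecord> cache = new ConcurrentHashMap<>();
    private final AccountDatabase database;

    public WriteThroughAccountStore(AccountDatabase database) {
        this.database = database;
    }

    // A create or update goes to the database first, then the cache is refreshed.
    public void save(AccountRecord record) {
        database.upsert(record);         // the database remains the source of truth
        cache.put(record.id(), record);  // write-through keeps the cached copy current
    }

    // Reads are served from the cache by key; the database is not touched here.
    public Optional<AccountRecord> read(String id) {
        return Optional.ofNullable(cache.get(id));
    }

    public record AccountRecord(String id, String name) {}

    public interface AccountDatabase {
        void upsert(AccountRecord record);
    }
}
```

In a real deployment the map would be replaced by a cache shared across application servers, so every server sees the same entries instead of its own copy.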

Caching needs to be selective. Data that’s updated by more than one thread can run into race conditions, so the application should be tolerant of such situations. Cache invalidation also needs to be taken into account. The application can handle a cache miss by falling back to the database, and the retrieved data is then cached for subsequent requests.
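The read path described here is essentially a cache-aside lookup. Below is a minimal generic sketch under that assumption, with a hypothetical loader function standing in for the database query and a simple TTL standing in for a fuller invalidation strategy.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: on a miss the loader (e.g. a database query) is
// called and the result is cached with a TTL, so stale entries expire and a
// later read falls back to the database again. The loader is a hypothetical
// stand-in for the real data-access call.
public class CacheAsideReader<K, V> {

    private record Entry<V>(V value, Instant expiresAt) {}

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;  // e.g. key -> database lookup
    private final Duration ttl;

    public CacheAsideReader(Function<K, V> loader, Duration ttl) {
        this.loader = loader;
        this.ttl = ttl;
    }

    public V get(K key) {
        Entry<V> entry = cache.get(key);
        if (entry != null && Instant.now().isBefore(entry.expiresAt())) {
            return entry.value();                        // cache hit
        }
        V fresh = loader.apply(key);                     // cache miss: fall back to the database
        cache.put(key, new Entry<>(fresh, Instant.now().plus(ttl)));
        return fresh;
    }

    // Explicit invalidation when the underlying data is known to have changed.
    public void invalidate(K key) {
        cache.remove(key);
    }
}
```

Note that concurrent writers can still race between the database update and the invalidate call, which is why the application has to tolerate briefly stale reads, as the paragraph above points out.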

The concept of an application-layer cache is not new; however, selecting which content to place in the application (or platform) cache is a trade-off between performance efficiency and data consistency. Salesforce has custom applications developed by its customers and partners in which the latest data for cases or chats is checked at high frequency to provide faster customer service. By implementing an application-layer cache, Salesforce has been able to reduce the impact of these use cases on the database by roughly 5x.

Application caching has traditionally been used for static data to improve response times for the end user. As applications increasingly request data from shared resources, scale becomes a growing challenge. Implementing a caching layer in front of critical resources helps applications scale better, reduces cost, and improves the user experience.

This session is sponsored by Salesforce.

Photo of Anil Jacob

Anil Jacob

Salesforce

Anil Jacob is a lead software engineer on the frontier scale team at Salesforce, where he works on large and complex customer implementations and related scale challenges. Previously, he was at Intuit, BEA WebLogic, and Wells Fargo. Anil’s interests are application scale, user experience, UX performance, and application development.

Comments on this page are now closed.

Comments

Lakshmi Kollipara | SR ENG MGR
06/24/2019 8:14am PDT

hi, can you share the slides?