Caching Intro: Making Your API Lightning Fast with In-Memory Cache

via Dev.to Webdev, by Fiyinfoluwa Ojo

Why Caching?

Every database query takes time. If 1,000 users request the same categories list every minute, that's 1,000 database queries for identical data. Caching stores the result in memory after the first query; everyone else gets it instantly.

The Cache Logic

```python
import time

cache = {}
CACHE_TTL = 60  # seconds

def get_from_cache(key: str):
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            print("CACHE HIT")
            return data
        # Entry is stale: drop it and fall through to a miss
        del cache[key]
        print("CACHE EXPIRED")
    print("CACHE MISS")
    return None

def set_cache(key: str, data):
    cache[key] = (data, time.time() + CACHE_TTL)
```

Cache Miss vs Cache Hit

```python
@app.get("/categories")
def get_categories():
    cached = get_from_cache("all_categories")
    if cached:
        return {"source": "cache", "data": cached}
    # Hit the database only on cache miss
    categories = db.query(Category).all()
    set_cache("all_categories", categories)
    return {"source": "database", "data": categories}
```
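As a quick sanity check, the two helpers above can be exercised on their own, without FastAPI or a database. This sketch reuses the same logic (print statements dropped for brevity) with a deliberately short TTL so the expiry path is visible; the one-second TTL and the sample list are just for the demo:

```python
import time

cache = {}
CACHE_TTL = 1  # deliberately short so expiry is observable in the demo

def get_from_cache(key: str):
    # Return the cached value if it exists and has not expired
    if key in cache:
        data, expires_at = cache[key]
        if time.time() < expires_at:
            return data
        del cache[key]  # stale entry: evict it
    return None

def set_cache(key: str, data):
    cache[key] = (data, time.time() + CACHE_TTL)

set_cache("all_categories", ["books", "games"])
print(get_from_cache("all_categories"))  # fresh entry: returns the list
time.sleep(1.1)
print(get_from_cache("all_categories"))  # past the TTL: returns None
```

Note that this cache is a plain module-level dict, so it is per-process and not thread-safe; that is fine for illustrating hit/miss/expiry, but a production service would typically reach for something like Redis.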

Continue reading on Dev.to Webdev


