Posted by Richy George on 29 May, 2024
Like every other programming environment, you need a place to store your data when coding in the browser with JavaScript. Beyond simple JavaScript variables, there are a variety of options ranging in sophistication, from localStorage to cookies to IndexedDB and the service worker cache API. This article is a quick survey of the common mechanisms for storing data in your JavaScript programs.
You are probably already familiar with JavaScript’s set of highly flexible variable types. We don’t need to review them here; they are very powerful and capable of modeling any kind of data from the simplest numbers to intricate cyclical graphs and collections.
The downside of using variables to store data is that they are confined to the life of the running program. When the program exits, the variables are destroyed. Of course, they may be destroyed before the program ends, but the longest-lived global variable will vanish with the program. In the case of the web browser and its JavaScript programs, even a single click of the refresh button annihilates the program state. This fact drives the need for data persistence; that is, data that outlives the life of the program itself.
An additional complication with browser JavaScript is that it runs in a sandboxed environment. It doesn’t have direct access to the operating system the way an installed application does; a JavaScript program relies on the APIs the browser exposes to it.
The other end of the spectrum from using built-in variables to store JavaScript data objects is sending the data off to a server. You can do this readily with a fetch() POST request. Provided everything works out on the network and the back-end API, you can trust that the data will be stored and made available in the future with another GET request.
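As a sketch of that round trip, here is one way to wrap the POST and GET calls in small helpers. The /api/notes endpoint and the payload shape are assumptions for illustration; substitute your back end’s real URL and schema:

```javascript
// Sketch of server-side persistence with fetch(). The /api/notes endpoint
// is a hypothetical URL, not a real API.
async function saveNote(note) {
  const response = await fetch("/api/notes", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(note)
  });
  if (!response.ok) throw new Error("Save failed: " + response.status);
  return response.json(); // e.g., the saved record echoed back
}

async function loadNotes() {
  const response = await fetch("/api/notes");
  if (!response.ok) throw new Error("Load failed: " + response.status);
  return response.json();
}
```

Wrapping fetch() like this keeps the persistence concern in one place, so the rest of the program just awaits saveNote() and loadNotes().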
So far, we’re choosing between the transience of variables and the permanence of server-side persistence. Each approach has a particular profile in terms of longevity and simplicity. But a few other options are worth exploring.
There are two types of built-in “web storage” in modern browsers: localStorage and sessionStorage. These give you convenient access to longer-lived data. Both give you a key-value store, and each has its own lifecycle that governs how data is handled:

localStorage saves a key-value pair that survives across page loads on the same domain.
sessionStorage operates similarly to localStorage, but the data only lasts as long as the page session.

In both cases, values are coerced to strings, meaning that a number will become a string version of itself and an object will become “[object Object]”. That’s obviously not what you want for an object, but you can always serialize with JSON.stringify() on the way in and JSON.parse() on the way out.
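Here is a minimal sketch of that JSON round trip. The in-memory stand-in below mimics localStorage’s string coercion so the snippet runs outside a browser; in the browser you would use the real localStorage global directly:

```javascript
// Round-tripping an object through web storage with JSON.
// The shim is only a stand-in for the browser's localStorage.
const storage = globalThis.localStorage ?? {
  _data: {},
  setItem(key, value) { this._data[key] = String(value); },
  getItem(key) { return key in this._data ? this._data[key] : null; }
};

const user = { name: "Ada", visits: 3 };
storage.setItem("user", JSON.stringify(user));

const restored = JSON.parse(storage.getItem("user"));
console.log(restored.name);   // "Ada"
console.log(restored.visits); // 3
```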
Both localStorage and sessionStorage use setItem and getItem to set and retrieve values:

localStorage.setItem("foo", "bar");
localStorage.getItem("foo"); // returns "bar"
You can most clearly see the difference between the two by setting a value on each, closing the browser tab, then reopening a tab on the same domain and checking for your values. The value saved with localStorage will still exist, whereas the sessionStorage value will be null. You can use the devtools console to run this experiment:

localStorage.setItem("foo", "bar");
sessionStorage.setItem("foo", "bar");
// close the tab, reopen it
localStorage.getItem("foo"); // returns "bar"
sessionStorage.getItem("foo"); // returns null
Whereas localStorage and sessionStorage are tied to the page and domain, cookies give you a longer-lived option tied to the browser itself. They also use key-value pairs. Cookies have been around for a long time and are used for a wide range of cases, including ones that are not always welcome. Cookies are useful for tracking values across domains and sessions. They have specific expiration times, but the user can choose to delete them at any time by clearing their browser history.
Cookies are attached to requests and responses with the server, and can be modified (with restrictions governed by rules) by both the client and the server. Handy libraries like JavaScript Cookie simplify dealing with cookies.
Cookies are a bit funky when used directly, which is a legacy of their ancient origins. They are set for the domain on the document.cookie property, in a format that includes the value, the expiration time (an HTTP date string, as specified in RFC 6265), and the path. If no expiration is set, the cookie vanishes when the browser is closed. The path determines which paths on the domain the cookie is valid for.
Here’s an example of setting a cookie value:
document.cookie = "foo=bar; expires=Thu, 18 Dec 2024 12:00:00 GMT; path=/";
And to recover the value:
function getCookie(cname) {
  const name = cname + "=";
  const decodedCookie = decodeURIComponent(document.cookie);
  const ca = decodedCookie.split(';');
  for (let i = 0; i < ca.length; i++) {
    let c = ca[i];
    while (c.charAt(0) === ' ') {
      c = c.substring(1);
    }
    if (c.indexOf(name) === 0) {
      return c.substring(name.length, c.length);
    }
  }
  return "";
}

const cookieValue = getCookie("foo");
console.log("Cookie value for 'foo':", cookieValue);
In the above, we use decodeURIComponent to unpack the cookie and then split it on its separator character, the semicolon (;), to access its component parts. To get the value, we match on the name of the cookie plus the equals sign.
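The decoding matters because cookie values can’t safely contain semicolons or other delimiter characters. A minimal sketch of the encode/decode round trip (the note name and value are made up for illustration):

```javascript
// Encode a cookie value with encodeURIComponent when writing, and decode it
// when reading, as the getCookie function above does.
const raw = "hello; path=/tricky";
const encoded = encodeURIComponent(raw);
// In the browser: document.cookie = "note=" + encoded + "; path=/";
const decoded = decodeURIComponent(encoded);
console.log(decoded === raw); // true
```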
An important consideration with cookies is security, specifically cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks. (Setting HttpOnly on a cookie makes it only accessible on the server, which increases security but eliminates the cookie’s utility on the browser.)
IndexedDB is the most elaborate and capable in-browser data store. It’s also the most complicated. IndexedDB uses asynchronous calls to manage operations. That’s good because it lets you avoid blocking the thread, but it also makes for a somewhat clunky developer experience.

IndexedDB is really a full-blown object-oriented database. It can handle large amounts of data, modeled essentially like JSON. It supports sophisticated querying, sorting, and filtering. It’s also available in service workers as a reliable persistence mechanism between thread restarts and between the main and worker threads.

When you create an object store in IndexedDB, it is associated with the domain and lasts until the user deletes it. It can be used as an offline datastore to handle offline functionality in progressive web apps, in the style of Google Docs.
To get a flavor of using IndexedDB, here’s how you might create a new store:

let db = null; // A handle for the DB instance
let request = indexedDB.open("MyDB", 1); // Try to open the "MyDB" database (async operation)
request.onupgradeneeded = function(event) { // Fires when MyDB is new or its schema has changed
  db = event.target.result; // Set the DB handle to the result of the event
  if (!db.objectStoreNames.contains("myObjectStore")) { // If myObjectStore doesn't exist yet...
    let objectStore = db.createObjectStore("myObjectStore", { autoIncrement: true }); // ...create it
  }
};
The onsuccess handler fires when the database is successfully opened. It fires without onupgradeneeded firing if the database and object store already exist. In that case, we save the db reference:

request.onsuccess = function(event) {
  db = event.target.result;
};

If an error occurs, onerror fires:

request.onerror = function(event) {
  console.log("Error in db: " + event);
};
The above IndexedDB code is simple, just opening or creating a database and object store, but it gives you a sense of IndexedDB’s asynchronous nature.
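Once the database is open, reads and writes go through transactions. As a sketch, the helpers below take the db handle from onsuccess above and wrap IndexedDB’s event-based API in promises; the store name "myObjectStore" matches the earlier example, while the helper names are made up:

```javascript
// Write a value to the store and resolve with its auto-generated key.
function addRecord(db, value) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction("myObjectStore", "readwrite");
    const request = tx.objectStore("myObjectStore").add(value);
    request.onsuccess = () => resolve(request.result); // the new key
    request.onerror = () => reject(request.error);
  });
}

// Read every value in the store.
function getAllRecords(db) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction("myObjectStore", "readonly");
    const request = tx.objectStore("myObjectStore").getAll();
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```

Promisifying the request objects this way lets calling code use async/await instead of nesting event handlers.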
Service workers include a specialized data storage mechanism called cache. Cache makes it easy to intercept requests, save responses, and modify them if necessary. It’s primarily designed to cache responses (as the name implies) for offline use or to optimize response times. This is something like a customizable proxy cache in the browser that works transparently from the viewpoint of the main thread.
Here’s a look at caching a response using a cache-first strategy, wherein you try to get the response from the cache first and then fall back to the network (saving the response to the cache):
self.addEventListener('fetch', (event) => {
  const request = event.request;
  // Try serving assets from cache first
  event.respondWith(
    caches.match(request)
      .then((cachedResponse) => {
        // If found in cache, return the cached response
        if (cachedResponse) {
          return cachedResponse;
        }
        // If not in cache, fetch from network
        return fetch(request)
          .then((response) => {
            // Clone the response, since a response body can only be read once
            const responseClone = response.clone();
            // Cache the new response for future requests
            caches.open('my-cache')
              .then((cache) => {
                cache.put(request, responseClone);
              });
            return response;
          });
      })
  );
});
This gives you a highly customizable approach because you have full access to the request and response objects.
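A common companion to the fetch handler is pre-populating the cache during the service worker’s install event, so there is something to serve offline on first use. In this sketch, PRECACHE_URLS is an assumed asset list and "my-cache" matches the name used above:

```javascript
// Pre-populate the cache at install time (a sketch; the URL list is made up).
const PRECACHE_URLS = ["/", "/styles.css", "/app.js"];

function precache(cacheStorage) {
  // cacheStorage is the global `caches` object inside a service worker
  return cacheStorage.open("my-cache").then((cache) => cache.addAll(PRECACHE_URLS));
}

// In the service worker:
// self.addEventListener("install", (event) => event.waitUntil(precache(caches)));
```

Passing the work to event.waitUntil() keeps the service worker alive until the cache is filled.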
We’ve looked at the commonly used options for persisting data in the browser, each with its own profile of longevity and complexity. When deciding which one to use, a useful rule of thumb is: what is the simplest option that meets my needs? Another concern is security, especially with cookies.
Other interesting possibilities are emerging around using WebAssembly for persistent storage. Wasm’s near-native performance could give storage-heavy work a boost. We’ll look at using Wasm for data persistence another day.