JavaScript is inherently single-threaded, i.e. it uses only one main thread to execute all of its tasks sequentially. Then why is it so popular for backend services, a use case where handling many requests in parallel is a basic requirement? That is where the event loop steps in. It hands off the I/O tasks written in your code to the system kernel, which can actually perform them in parallel. If you think about it, if we wrote a backend service in JS without the event loop, we would only be able to serve one request/client at a time, since every request would have to be processed sequentially. This would make our server highly inefficient and our clients unhappy.
Popular JS runtimes like Node, Deno and Bun leverage the event loop to perform non-blocking async tasks. How does the event loop help perform non-blocking operations asynchronously? The JS runtime (like Node) constantly polls for tasks, which are added to a queue. Once a task is popped from the queue (say, Task A), it is scheduled to run by handing it off to the kernel. The runtime then moves on to scheduling other tasks (Task B, Task C ...) while the kernel is executing Task A. The runtime later comes back to check on Task A and, if it has completed, executes the callback function associated with it. This happens in an endless loop where tasks are polled, scheduled, checked for completion and their callbacks invoked.
// The timer callback runs after ~2 seconds without blocking anything else
setTimeout(() => {
  console.log("Successfully waited for 2 seconds");
}, 2000);

// The network request is handed off to the runtime; its callbacks run on completion
fetch("https://api.example.com/test")
  .then((response) => response.json())
  .then((data) => {
    console.log("API Response:", data);
  })
  .catch((error) => {
    console.error("API Error:", error);
  });
In the above example, the setTimeout task and the fetch task are handled asynchronously by the event loop, i.e. the execution time of one does not affect the other. Once each task completes, its associated callback (the console.log statement) is executed. Now, let's look at a slightly different example:
function cpuBoundTask() {
  const start = Date.now();
  const size = 300;
  // Build two 300x300 matrices (each row filled with a single random value)
  const matrixA = Array.from({ length: size }, () => Array(size).fill(Math.random()));
  const matrixB = Array.from({ length: size }, () => Array(size).fill(Math.random()));
  const result = Array.from({ length: size }, () => Array(size).fill(0));

  // Naive O(n^3) matrix multiplication
  for (let i = 0; i < size; i++) {
    for (let j = 0; j < size; j++) {
      for (let k = 0; k < size; k++) {
        result[i][j] += matrixA[i][k] * matrixB[k][j];
      }
    }
  }

  // Some additional trigonometric busywork
  let trigSum = 0;
  for (let i = 0; i < 1e7; i++) {
    trigSum += Math.sin(i) * Math.cos(i);
  }

  console.log("CPU-bound task completed in", Date.now() - start, "ms");
}
setTimeout(() => {
console.log("Successfully waited for 200ms");
}, 200);
cpuBoundTask();
I have made two major changes in the above code compared to the previous snippet: 1) replaced the fetch call with a computational task that performs a matrix multiplication and some trigonometric operations, and 2) reduced the timeout from 2 seconds to 200 ms. If I run the above code using node, I get the following result:
CPU-bound task completed in 1474 ms
Successfully waited for 200ms
As you can see, the CPU-bound task's log appears first, even though we would expect the timeout to fire first (200 ms < 1474 ms). This is one of the shortcomings of JS and how it processes CPU-bound tasks. In JS, CPU-bound tasks execute on the main thread and occupy the call stack. This prevents the event loop from doing its job of dispatching I/O operations, polling for results and executing callbacks. Therefore, it's more appropriate to say the event loop allows users to perform non-blocking async "I/O" operations. To be fair, most of the apps we write are I/O bound: they make some API calls to fetch data from somewhere, apply some business logic and return a response. So the slowdown imposed by CPU-bound tasks doesn't really stop people from using JS for their apps, because the work those apps do is mostly I/O bound.
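That said, when an app genuinely does need heavy computation, the picture changes if the work is moved off the main thread. Below is a rough sketch (not part of the snippets above) using Node's built-in worker_threads module, with just the trigonometric loop standing in for the CPU-bound work: because the loop runs in a worker thread, the main thread's event loop stays free and the 200 ms timer fires on time.
const { Worker, isMainThread, parentPort } = require("worker_threads");

if (isMainThread) {
  // Main thread: start the timer, then hand the CPU-bound work to a worker
  setTimeout(() => {
    console.log("Successfully waited for 200ms"); // no longer delayed
  }, 200);

  const worker = new Worker(__filename); // re-runs this same file in a worker thread
  worker.on("message", (ms) => {
    console.log("CPU-bound task completed in", ms, "ms (in a worker)");
  });
} else {
  // Worker thread: run the CPU-bound loop and report how long it took
  const start = Date.now();
  let trigSum = 0;
  for (let i = 0; i < 1e7; i++) {
    trigSum += Math.sin(i) * Math.cos(i);
  }
  parentPort.postMessage(Date.now() - start);
}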
Async/Await - Using callback functions can sometimes get ugly; the code becomes harder to read and hence harder to debug, especially when callbacks are nested. The async/await keywords are basically syntactic sugar. They make asynchronous code read as if it executes synchronously, which makes it easier to follow and easier to wrap in a try/catch. When an async function is called, it returns a promise, i.e. a promise of a value that will be available in the future. The await keyword pauses the async function until that promise settles, either resolving with a value or rejecting with an error. Let's take a look at two blocks of code, one with callbacks and the other with the async/await keywords:
// Code with callbacks
fetch("https://api.example1.com/test")
.then((response1) => response1.json())
.then((data1) => {
console.log("Data from API 1:", data1);
return fetch(`https://api.example2.com/test/${data1.id}`);
})
.then((response2) => response2.json())
.then((data2) => {
console.log("Data from API 2:", data2);
return fetch(`https://api.example3.com/test/${data2.relatedId}`);
})
.then((response3) => response3.json())
.then((data3) => {
console.log("Data from API 3:", data3);
})
.catch((error) => {
console.error("Error occurred:", error);
});
// Same code with async/await
// (note: top-level await like this requires an ES module; otherwise wrap the block in an async function)
try {
const response1 = await fetch("https://api.example1.com/test");
const data1 = await response1.json();
console.log("Data from API 1:", data1);
const response2 = await fetch(`https://api.example2.com/test/${data1.id}`);
const data2 = await response2.json();
console.log("Data from API 2:", data2);
const response3 = await fetch(`https://api.example3.com/test/${data2.relatedId}`);
const data3 = await response3.json();
console.log("Data from API 3:", data3);
} catch (error) {
console.error("Error occurred:", error);
}
I believe we all can agree that the second block of code is more readable.
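One detail worth verifying in code is the claim above that calling an async function returns a promise. Here is a tiny sketch (the getUser function and its URL are placeholders, not part of the examples above):
// Calling an async function returns a Promise right away;
// the value it returns becomes the promise's resolved value.
async function getUser(id) {
  const response = await fetch(`https://api.example.com/users/${id}`); // placeholder URL
  return response.json();
}

const promise = getUser(1);
console.log(promise instanceof Promise); // true

promise
  .then((user) => console.log("User:", user))
  .catch((error) => console.error("Error:", error));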