Top 50 NodeJS Interview Questions and Answers for 2024

Ravi Sharma
44 min read · Jan 9, 2024

Prepare for your NodeJS interview with confidence!

In 2024, Node.js continues to be a dominant force in the world of server-side programming, with its versatility and performance making it an increasingly sought-after skill in the tech industry.

For anyone looking to excel in Node.js interviews this year, having a solid grasp of both fundamental concepts and the latest advancements is crucial.

To assist you in your preparation, we’ve curated a definitive list of the top 50 Node.js interview questions and their comprehensive answers. Let’s embark on a journey to master the essential principles and intricacies of Node.js development!

Level-1: Basic

1. What is NodeJS?

2. What is the Event Loop and how does it work?

3. What is REPL and what are its uses?

4. Difference between package.json and package-lock.json files?

5. Difference between asynchronous and non-blocking code?

6. Difference between pipe and chaining in Node?

7. What is a control flow function?

8. What is a Buffer in Node.js?

9. What is a Stream and what are the different types of streams in Node.js?

10. What are processes and threads, and how do they communicate between multiple threads and processes?

11. Difference between the PUT, POST, and PATCH methods?

12. What is the use of module.exports?

13. What is the node process model?

14. What is CORS?

15. Difference between the crypto and bcrypt modules?

16. Difference between NPM, YARN, and NPX?

17. What is an LTS release?

18. Difference between req.params and req.query?

19. Difference between dependency and dev-dependency?

20. What is the error-first callback function?

21. Difference between Authentication and Authorization?

Level-2: Intermediate

22. How to handle errors in Node?

23. How to resolve unhandled exceptions in Node?

24. What is a cron job?

25. What is the difference between the fork, spawn, and exec methods?

26. What are the global objects of Node.js?

27. What are the different phases of the event loop?

28. What are Reactor Pattern and Demultiplexer in Node?

29. How to handle large file uploads on a Node server? What is highWaterMark?

30. What is middleware and what are the different types of middleware?

31. Difference between readFile and createReadStream?

32. Difference between Hashing and Encryption?

33. What is the use of NodeJS binding?

34. Difference between Cluster and child_process modules?

35. How to read, write, and compress a file in Node?

36. Difference between async/await, Async.series, and Async.parallel?

37. How to achieve localization in Node?

38. Difference between Promises and Observables?

39. How to achieve server-side validation in a Node application?

40. Difference between Emitter and Dispatcher?

Level-3: Expert

41. How to perform a load test of the node app?

42. What are the different types of memory leaks?

43. How can you achieve caching in a Node application?

44. How can you improve the performance of Node applications?

45. What are the security best practices for Node applications?

46. How can we scale a Node.js application?

47. How to prevent DoS and DDoS attacks in Node applications?

48. How to avoid SQL injection attacks in Node applications?

49. How to write test cases for the backend?

50. List out the use of the following npm modules (shrinkwrap, forever, dotenv, nodemon, response-time, multer, body-parser, loadtest)

==============================================================

1. What is NodeJS?

Node.js is an open-source, cross-platform runtime environment for developing server-side and networking applications. It is built on Google Chrome’s V8 JavaScript engine and uses JavaScript as a scripting language.

It allows JavaScript code to be executed on the server (outside of the browser) on any machine. It is neither a language nor a framework; it is a JavaScript runtime environment.

It works on a single-threaded event loop with a non-blocking I/O model, which gives it high throughput since it can handle a large number of concurrent requests.

2. What is the Event Loop and how does it work?

Node.js is a combination of Google’s V8 JavaScript engine, an event loop, and a low-level I/O API(Node API).

Libuv is a multi-platform C library (that implements the event loop) that provides support for asynchronous I/O.

NodeJS Architecture — JavaScript Centric

The Event Loop is central to libuv (a core component of the Node.js runtime). It runs on the main thread, managing tasks and processing I/O events, timers, and callbacks, which makes Node.js applications highly responsive and scalable.

The Event loop is single-threaded, which means that it can only execute one task at a time. However, the event loop is also non-blocking, which means that it can continue to execute other tasks while it is waiting for an I/O operation to complete. This makes Node.js applications very responsive, as they can continue to handle other requests even while they are waiting for a file to be read or a network request to be completed.

Working: The way Libuv and the event loop work is based on the Reactor Pattern. In this pattern, there is an Event queue and an Event demultiplexer. The loop (i.e. the dispatcher) keeps listening for incoming I/O and for each new request, an event is emitted.

The event is received by the demultiplexer and it delegates work to the specific handler (the way these requests are handled differs for each OS). Once the work is done, the registered callback is enqueued on the Event queue. Then the callbacks are executed one by one and if there is nothing left to do, the process will exit.
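
As a quick illustration of this scheduling, consider the following snippet (a minimal sketch; it is wrapped inside an I/O callback because, within the I/O cycle, setImmediate() is guaranteed to fire before timers):

const fs = require('fs');

fs.readFile(__filename, () => {
  // Inside an I/O callback, setImmediate always runs before timers
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
  // process.nextTick callbacks run before the event loop moves on
  process.nextTick(() => console.log('nextTick'));
});
// Expected output: nextTick, immediate, timeout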

3. What is REPL and what are its uses?

REPL stands for Read-Eval-Print Loop. It represents a computer environment like a Windows console or a Unix shell where a command is entered and the system responds with an output.

It is a simple, interactive programming environment that takes single user inputs, evaluates them, and returns the result to the user. It is most often used for experimenting with code, debugging, and learning a new language or framework.
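
For example, running node with no arguments starts the REPL, where each expression is read, evaluated, and printed immediately:

$ node
> 2 + 3
5
> const greet = (name) => `Hello, ${name}!`;
undefined
> greet('Node')
'Hello, Node!'
> .exit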

4. Difference between package.json and package-lock.json files?

In a Node.js project, the package.json file is mandatory, while the package-lock.json file is not mandatory but highly recommended for projects using npm for package management.

package.json is a manifest file that contains metadata about the project and specifies its dependencies. It defines the project configuration. It includes details such as the project name, version, entry point, script commands, and most importantly, the list of dependencies and their versions.

package-lock.json is automatically generated by npm when dependencies are installed or updated. It contains a detailed record of the exact versions of all dependencies that were installed for the project at a specific point in time. This file is crucial for enabling reproducible and consistent builds in a project.
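
A minimal package.json might look like the following (the name, script, and dependency are illustrative):

{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}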

5. Difference between the Asynchronous and non-blocking code?

  • Asynchronous code refers to the style of programming where operations are executed independently of the main program flow. Instead of waiting for each operation to complete before moving on to the next one, asynchronous code initiates an operation and then proceeds to execute subsequent code without waiting for the operation to finish.
  • Asynchronous code typically involves the use of callbacks, promises, async/await, and event-driven mechanisms. It allows the program to achieve concurrency, enabling multiple operations to be executed simultaneously.
  • Non-blocking code refers to the ability of a program to continue executing without being obstructed by long-running operations. In a non-blocking system, when an operation is initiated, the program can continue executing other tasks without waiting for the operation to be completed.
  • Non-blocking behavior is often associated with I/O operations and event-driven architectures, as the sketch below illustrates.
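
The distinction can be sketched with the fs module (the file name is illustrative):

const fs = require('fs');

// Blocking: execution waits here until the whole file has been read
const config = fs.readFileSync('config.json', 'utf8');
console.log('Read synchronously');

// Non-blocking: the read is initiated and execution continues immediately
fs.readFile('config.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('Read asynchronously');
});
console.log('This line runs before the asynchronous read completes');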

6. Difference between pipe and chaining in Node?

Pipe: In Node.js, the “pipe” method is used to direct the output of one stream to the input of another stream. It is commonly used with readable and writable streams to efficiently transfer data from one source to another without explicitly having to manage the data flow.

const fs = require('fs');

const readableStream = fs.createReadStream('input.txt');
const writableStream = fs.createWriteStream('output.txt');

readableStream.pipe(writableStream);

Chaining in Node.js generally refers to method chaining, which is the practice of calling multiple methods on an object in a single statement, with each method returning the object itself, thereby allowing for a sequence of operations to be performed on the object in a linear fashion.

const result = [1, 2, 3, 4, 5]
  .map(num => num * 2)
  .filter(num => num > 5)
  .reduce((acc, num) => acc + num, 0);

console.log(result); // Output: 24

Both techniques are fundamental in Node.js: pipe calls can themselves be chained (for example, readable.pipe(transform).pipe(writable)), which combines the two concepts when working with streams.

7. What is a control flow function?

Control flow functions are used to dictate the order in which specific code blocks or functions are executed. These functions are used to manage the flow of execution within a program, enabling developers to handle asynchronous operations, iterate through collections, handle conditional logic, and more.
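
For instance, async/await can act as a simple control flow mechanism, forcing asynchronous steps to run strictly in sequence (the steps below are illustrative placeholders simulated with timers):

// Hypothetical async steps, simulated with timers
const stepOne = () => new Promise((resolve) => setTimeout(() => resolve('one'), 100));
const stepTwo = () => new Promise((resolve) => setTimeout(() => resolve('two'), 100));

async function run() {
  const first = await stepOne(); // completes before stepTwo starts
  const second = await stepTwo();
  console.log(first, second); // one two
}

run();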

8. What is a Buffer in Node.js?

Buffer is a temporary memory, mainly used by the stream to hold some data until consumed. Buffer is mainly used to store binary data while reading from a file or receiving packets over the network.

It represents a fixed-length sequence of bytes, with memory allocated outside of the V8 heap. (Buffer is not available in the browser’s JavaScript.)

A simple example of converting a string into a buffer:

// a string
const str = "Hey. this is a string!";
// convert string to Buffer
const buff = Buffer.from(str, "utf-8");
console.log(buff); // <Buffer 48 65 79 2e ... 72 69 6e 67 21>

To convert buffer into a string:

// if the buffers contain text
buffer.toString(encoding) // encoding = 'utf-8'
// if you know how many bytes the buffer contains then
buffer.toString(encoding, 0, numberOfBytes) // numberOfBytes = 12

9. What is a Stream and what are the different types of streams in Node.js?

Stream is the object (abstract interface) that allows us to transfer data from source to destination and vice-versa. It enables you to process large amounts of data chunk by chunk, without having to load the entire data set into memory at once.

There are 4 types of streams:

a) Readable Stream: A Readable stream represents a source from which data can be read. It emits “data” events whenever new data becomes available. An example of a Readable stream is reading a file line by line:

const fs = require('fs');
const readline = require('readline');

const readableStream = fs.createReadStream('file.txt');

const readlineInterface = readline.createInterface({
  input: readableStream,
  output: process.stdout
});

readlineInterface.on('line', (line) => {
  console.log(line);
});

b) Writable Stream: A Writable stream represents a destination to which data can be written. It can be used to write data to files, HTTP responses, or any other writable target. Here’s an example of writing data to a file using a Writable stream:

const fs = require('fs');

const writableStream = fs.createWriteStream('output.txt');

writableStream.write('Hello, ');
writableStream.write('World!');
writableStream.end();

c) Duplex Stream: A Duplex stream is both readable and writable, allowing both data input and output. A common example is a TCP socket, where data can be both read from and written to. Here’s a simple echo server using a Duplex stream:

const net = require('net');

const server = net.createServer((socket) => {
  socket.pipe(socket);
});

server.listen(3000);

d) Transform Stream: A Transform stream is a Duplex stream that performs transformations on the data as it is read and written. It allows for data modification or manipulation. An example of a Transform stream is compressing or encrypting data on the fly:

const fs = require('fs');
const zlib = require('zlib');

const readableStream = fs.createReadStream('file.txt');
const writableStream = fs.createWriteStream('file.txt.gz');
const gzipStream = zlib.createGzip();

readableStream.pipe(gzipStream).pipe(writableStream);

10. What are processes and threads, and how do they communicate between multiple threads and processes?

PROCESS: A process in Node.js refers to an instance of the Node.js runtime that can be executed independently. Each Node.js process has its own memory space, global objects, modules, and event loop.

When you run a Node.js application, you’re essentially starting a process. Node.js applications can be single-threaded, meaning they run in a single process, or they can leverage the built-in clustering module to create multiple processes to take advantage of multi-core systems.

Create Multiple Processes: The Cluster module in Node.js is designed specifically to enable efficient load balancing of incoming network connections across multiple processes. It allows a Node.js application to take advantage of multi-core systems by creating multiple instances of the application, each running in its own process.

The primary purpose of the Cluster module is to distribute incoming connection requests (e.g., HTTP requests) across a pool of workers, allowing a Node.js server to handle multiple requests concurrently. This is achieved by forking the main Node.js process into multiple workers using a master/worker architecture.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Create a worker process for each CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  // Listen for when a worker exits and replace it
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died. Restarting...`);
    cluster.fork();
  });
} else {
  // Worker processes create an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello, world!\n');
  }).listen(8000);
}

THREAD: Node.js, by default, uses a single-threaded, event-driven architecture, meaning it utilizes a single thread (event loop) to execute JavaScript code. This single thread processes I/O operations (like file system and network operations) and asynchronous events.

However, Node.js does use additional threads from a thread pool managed by the libuv library to handle certain operations, such as file system operations.

It is also possible to create multiple threads by utilizing Worker Threads, a module provided by Node.js for handling heavy CPU-bound computation and parallel processing.

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // This code is executed in the main thread
  // Create new worker threads
  const worker1 = new Worker(__filename);
  const worker2 = new Worker(__filename);

  // Set up message handlers for worker threads
  worker1.on('message', (msg) => console.log('Message from worker 1:', msg));
  worker2.on('message', (msg) => console.log('Message from worker 2:', msg));

  // Post messages to worker threads
  worker1.postMessage('Hello from main thread to worker 1');
  worker2.postMessage('Hello from main thread to worker 2');
} else {
  // This code is executed in the worker threads
  // Set up message handler for worker thread
  parentPort.on('message', (msg) => {
    console.log('Message from main thread:', msg);
    // Send message back to the main thread
    parentPort.postMessage('Hello from worker thread');
  });
}

Here the main thread creates two worker threads using the Worker class. Each worker thread listens for messages from the main thread using the on('message') event listener and sends messages back to the main thread using parentPort.postMessage().

When running this code, you would see messages exchanged between the main thread and the worker threads, demonstrating the inter-thread communication.

Communication between Threads: you can pass data between threads using the Worker and parentPort objects provided by the Worker Threads module.

// main.js
const { Worker } = require('worker_threads');

// Create a new worker thread and pass initial data
const worker = new Worker('./worker.js', { workerData: { message: 'Hello from main thread' } });

// Listen for messages from the worker thread
worker.on('message', (data) => {
  console.log('Message from worker thread:', data);
});

// worker.js
const { workerData, parentPort } = require('worker_threads');

// Receive initial data from the main thread
console.log('Received data from main thread:', workerData);

// Send a message to the main thread
parentPort.postMessage('Hello from worker thread');

In this example, the main.js file creates a new worker thread using Worker(‘./worker.js’) and passes initial data to the worker using the workerData option. In the worker.js file, the workerData is received from the main thread, and a message is sent back to the main thread using parentPort.postMessage().

When the worker script is executed, calling parentPort.postMessage() will trigger the ‘message’ event in the main thread, and the data passed from the worker thread can be accessed and processed accordingly.

This approach allows threads to communicate with each other and pass data back and forth as needed, enabling efficient multi-threaded data processing in Node.js.

11. Difference between the PUT, POST, and PATCH methods?

The POST method is used to submit data to a server, the PUT method is used to replace or create a resource at a specific URL, and the PATCH method is used to apply partial modifications to a resource.

The POST method is commonly used for creating new resources or triggering operations that are not idempotent, meaning that repeating the same request could result in different outcomes based on the server state or implementation.

The PUT method is used to update or create a resource at a specific URL. It is idempotent, meaning that multiple identical requests should have the same effect as a single request.

The PATCH method is used to apply partial modifications to a resource, specifying the set of changes to be applied, without having to send the entire representation. PATCH is also idempotent, meaning that multiple identical requests should have the same effect as a single request.
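
A minimal Express sketch contrasting the three methods (the routes and handlers are illustrative):

const express = require('express');
const app = express();
app.use(express.json());

// POST: create a new resource (not idempotent)
app.post('/users', (req, res) => {
  res.status(201).json({ id: 1, ...req.body });
});

// PUT: replace the resource at this URL (idempotent)
app.put('/users/:id', (req, res) => {
  res.json({ id: req.params.id, ...req.body });
});

// PATCH: apply a partial update (idempotent)
app.patch('/users/:id', (req, res) => {
  res.json({ id: req.params.id, updatedFields: Object.keys(req.body) });
});

app.listen(3000);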

12. What is the use of module.exports?

module.exports is a special object in Node.js that is used to expose a module’s contents to be consumed by other modules. When a module is required in another module using require, the value returned is the module.exports object of the required module.

// myModule.js
const myFunction = () => {
  console.log('This is my function');
};

const myVariable = 'This is my variable';

module.exports.myFunction = myFunction;
module.exports.myVariable = myVariable;

In the above example, the module.exports object is used to expose the myFunction and myVariable to other modules. Another module can consume myModule.js using require and access myFunction and myVariable like this:

// anotherModule.js
const myModule = require('./myModule');

myModule.myFunction(); // Output: This is my function
console.log(myModule.myVariable); // Output: This is my variable

13. What is the node process model?

The Node.js process model revolves around the event-driven, single-threaded, non-blocking architecture, which optimizes the handling of concurrent operations and I/O-bound activities, making it well-suited for building scalable, high-performance applications.

Here’s a breakdown of the key components:

  • Event-Driven
  • Single-Threaded
  • Non-Blocking I/O
  • Worker Threads (Optional)

14. What is CORS?

CORS stands for Cross-Origin Resource Sharing. It is a security feature implemented in web browsers to restrict web pages from making requests to a different domain than the one that served the web page.

In a Node.js application, you can set up CORS handling using middleware such as the ‘cors’ package or by manually adding the necessary CORS headers to responses. This allows you to control which origins have permission to access resources in your Node.js application.

CORS is a security feature that regulates cross-origin requests, allowing secure communication between different domains while protecting users’ data from unauthorized access and potential security threats.

When you visit a website, let’s say “example.com”, your web browser allows JavaScript code running on “example.com” to make requests to the same domain. This is part of the security protocol called the same-origin policy, and it’s meant to protect users’ data from malicious attacks.

However, sometimes a web page might need to make requests to a different domain. For example, let’s say “example.com” needs to retrieve some data from “api.otherdomain.com”. This is where CORS comes into play.
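
In practice, a common setup uses the cors middleware in Express (the allowed origin here is illustrative):

const express = require('express');
const cors = require('cors');

const app = express();

// Allow cross-origin requests from one specific origin only
app.use(cors({ origin: 'https://example.com' }));

app.get('/data', (req, res) => {
  res.json({ message: 'This response carries CORS headers' });
});

app.listen(3000);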

15. Difference between the crypto and bcrypt modules?

crypto Module: The crypto module in Node.js provides cryptographic functionality, including encryption, hashing, and decryption. It offers a wide range of cryptographic algorithms and tools for handling secure data transformations.

const crypto = require('crypto');

const password = 'mySecurePassword';
const salt = crypto.randomBytes(16).toString('hex'); // Generate a random salt
const hash = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha512').toString('hex'); // Generate a hashed password

console.log('Salt:', salt);
console.log('Hashed Password:', hash);

bcrypt Module: The bcrypt module is specifically designed for password hashing using the bcrypt algorithm. It provides a convenient way to securely hash passwords, a common requirement for user authentication systems. bcrypt automatically handles the generation of salts, which enhances the security of the hashed passwords.

const bcrypt = require('bcrypt');
const saltRounds = 10;
const myPlaintextPassword = 'mySecurePassword';

bcrypt.hash(myPlaintextPassword, saltRounds, function(err, hash) {
  if (!err) {
    console.log('Hashed Password:', hash);
  }
});

16. Difference between NPM, YARN, and NPX?

NPM is the default package manager for Node.js and is used to install, manage, and publish packages. It is bundled with Node.js installation.

Yarn is a popular alternative to NPM for package management in Node.js, developed by Facebook. It offers features similar to NPM’s, with additional capabilities and enhanced performance: Yarn generally installs packages faster than classic NPM thanks to parallel downloads and offline caching.

NPX is a tool that comes with NPM and is used to execute Node.js packages without having to install them globally. It allows you to run packages directly from the NPM registry or execute binaries from local node_modules/.bin folders.

17. What is LTS release?

An LTS (Long Term Support) release is a version of Node.js designated for long-term maintenance: it receives bug fixes and security updates for an extended period (around 30 months) and is the recommended line for production use. Even-numbered major versions (e.g., 18, 20) enter LTS; odd-numbered versions do not.

18. Difference between req.params and req.query?

In the context of a Node.js server using the Express framework, req.params and req.query are used to access different types of parameters passed in the URL.

  • req.params is an object containing properties mapped to the named route parameters. These parameters are part of the URL path and are matched by the route’s path pattern. They are typically used to capture dynamic values from the URL.
// Route definition
app.get('/users/:id', (req, res) => {
  const userId = req.params.id; // Access the "id" parameter from the URL
  // Use userId to retrieve user data
});
  • req.query is an object containing a property for each query string parameter in the route. Query parameters are appended to the URL after a ? and are used to pass additional information or filters for the requested resource.

If you have a route defined as /search, and a client makes a request like /search?city=NewYork&active=true, you can access the query parameters city and active using req.query.city and req.query.active respectively.

// Route definition
app.get('/search', (req, res) => {
  const city = req.query.city; // Access the "city" query parameter
  const active = req.query.active; // Access the "active" query parameter
  // Use city and active for searching
});

19. Difference between dependency and dev-dependency?

Dependencies are the packages that are required for the application to run in the production environment.

DevDependencies are the packages that are only needed for development and testing purposes. These packages include tools, libraries, and utilities that are used during the development, testing, and build process, but are not required for the application to function in the production environment.
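
The install command determines which section a package lands in (package names are illustrative):

# Recorded under "dependencies" (needed at runtime in production)
npm install express

# Recorded under "devDependencies" (needed only for development and testing)
npm install --save-dev jest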

20. What is the error-first callback function?

The “error-first callback” pattern, also known as “Node.js-style callbacks”, is a convention used in Node.js for handling asynchronous operations. In this pattern, callback functions are structured to take an error as the first parameter, allowing the calling code to check for and handle errors in a consistent manner.

const fs = require('fs');

function readFileAndHandleError(path, callback) {
  fs.readFile(path, 'utf8', (err, data) => {
    if (err) {
      // Pass the error to the callback as the first parameter
      callback(err);
    } else {
      // Pass the data to the callback as the second parameter
      callback(null, data);
    }
  });
}

// Example usage of the error-first callback function
readFileAndHandleError('example.txt', (err, data) => {
  if (err) {
    console.error('An error occurred:', err);
  } else {
    console.log('File data:', data);
  }
});

21. Difference between Authentication and Authorization?

Authentication is the process of validating the identity of a user or entity, through the credentials, such as usernames, passwords, biometric data, or security tokens.

Authorization is the process of determining the rights and privileges of a user to access the resources. Authorization typically involves specifying what resources or operations a user can interact with based on their role, group membership, or other relevant attributes.
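
A minimal Express sketch of the distinction (the token handling and roles are illustrative placeholders, not a real implementation):

// Authentication: verifies who the user is
function authenticate(req, res, next) {
  const token = req.headers.authorization;
  if (!token) return res.status(401).send('Not authenticated');
  req.user = { id: 1, role: 'editor' }; // in practice, decoded from the token
  next();
}

// Authorization: verifies what the user is allowed to do
function authorize(role) {
  return (req, res, next) => {
    if (req.user.role !== role) return res.status(403).send('Forbidden');
    next();
  };
}

// Usage: app.delete('/posts/:id', authenticate, authorize('admin'), handler);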

22. How to handle errors in Node?

In Node.js, errors can be handled using a variety of techniques, including try…catch blocks, error-first callbacks, and the use of error event emitters.

Using try-catch:

try {
  // Code that may throw an error
  const result = someFunction();
  console.log(result);
} catch (error) {
  console.error('An error occurred:', error);
}

Using event emitters:

const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();

myEmitter.on('error', (err) => {
  console.error('An error occurred:', err);
});

myEmitter.emit('error', new Error('Oops! Something went wrong.'));

23. How to resolve unhandled exceptions in Node?

In Node.js, unhandled exceptions can be caught using the process.on('uncaughtException') event. By attaching a listener to this event, you can intercept unhandled exceptions and prevent Node.js from terminating abruptly. (For promises, the analogous process.on('unhandledRejection') event catches unhandled rejections.)

process.on('uncaughtException', (err) => {
  console.error('An unhandled exception occurred:', err);
  // Perform cleanup, logging, or any necessary action
  // Avoid attempting to continue with the application as it may be in an inconsistent state
  // Gracefully shut down the application
  process.exit(1); // Exit the process with a failure code (1)
});

// Example of an unhandled exception (for demonstration purposes)
// This code will throw an unhandled exception
setTimeout(() => {
  throw new Error('Intentional unhandled exception');
}, 100);

// Other application logic
// ...

When an unhandled exception occurs, the provided callback function is executed, allowing you to log the error, perform any necessary cleanup, and gracefully shut down the application using process.exit(1).

24. What is a cron job?

A cron job is a time-based job scheduler. It allows users to schedule tasks (commands or scripts) to run periodically at fixed times, dates, or intervals.

In Node.js, you can use the node-cron module to schedule jobs to run at specific times.

const cron = require('node-cron');

// Schedule a job to run every minute
cron.schedule('* * * * *', () => {
  console.log('Running scheduled job every minute');
});

The first argument '* * * * *' is the cron expression for running the job every minute. The second argument is the callback function that will be executed each time the scheduled time is reached.

Run a Job Every Hour:

const cron = require('node-cron');

// Schedule a job to run every hour
cron.schedule('0 * * * *', () => {
  console.log('Running scheduled job every hour');
});

Run a Job Every Day at a Specific Time:

const cron = require('node-cron');

// Schedule a job to run at 8:00 AM every day
cron.schedule('0 8 * * *', () => {
  console.log('Running scheduled job at 8:00 AM every day');
});

25. What is the difference between the fork, spawn, and exec methods?

  • The fork method is specifically designed for creating child processes that run Node.js modules. It is commonly used to create new instances of the Node.js interpreter to run separate Node.js scripts.
  • The spawn method is a more general-purpose function for creating child processes. It is used to spawn new processes and execute commands in the operating system’s shell. With spawn, you can execute non-Node.js programs, such as Python scripts or shell commands.
  • When a child process is created using fork, it automatically sets up a communication channel for inter-process communication (IPC) with the parent process.
  • With spawn, if you need to establish communication between the parent and child processes, you have to manually set up the standard input and output streams to exchange data and messages.
  • The exec method runs a command in a shell and buffers the output. It is suitable for simple commands and scripts where the output is not excessively large.

Example of Spawn method

const { spawn } = require('child_process');

const pythonProcess = spawn('python', ['script.py', 'arg1', 'arg2']);

pythonProcess.stdout.on('data', (data) => {
  console.log(`Python script stdout: ${data}`);
});

pythonProcess.stderr.on('data', (data) => {
  console.error(`Python script stderr: ${data}`);
});

pythonProcess.on('close', (code) => {
  console.log(`Python script child process exited with code ${code}`);
});

Example of Fork method

const { fork } = require('child_process');

const child = fork('child.js');

child.on('message', (message) => {
  console.log('Received message from child:', message);
});

child.send('Hello from parent!');

Example of Exec method

const { exec } = require('child_process');

// Execute a simple shell command to list files in the current directory
exec('ls -l -a', (error, stdout, stderr) => {
  if (error) {
    console.error(`error: ${error.message}`);
    return;
  }
  if (stderr) {
    console.error(`stderr: ${stderr}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});

26. What are the global objects of Node.js?

Node.js Global Objects are objects that are available in all modules. They are built into the runtime and can be used directly in the application without importing any particular module (see the sketch after the list).

  1. global: The global object is the global namespace object in Node.js. It acts as a container for global variables and functions. Any variable or function defined on the global object becomes available across modules.
  2. process: The process object provides information about the current Node.js process and allows you to control the process. It contains properties such as process.env, process.argv, and methods like process.exit().
  3. console: The console object provides a simple debugging console that is similar to the console mechanism provided by web browsers. It includes functions like console.log(), console.error(), and console.warn().
  4. Buffer: The Buffer class provides a way to work with binary data directly. Buffers are used in Node.js to handle raw binary data for tasks such as reading from or writing to the file system, dealing with network operations, or handling binary data in other formats.
  5. __filename: The __filename variable represents the name of the current file.
  6. __dirname: The __dirname variable represents the directory name of the current module.
  7. setTimeout and setInterval: Node.js provides global functions setTimeout and setInterval for scheduling code execution after a specified delay or at regular intervals, similar to their behavior in browsers.
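
A quick sketch touching several of these globals:

// No require needed: these identifiers are available in every module
console.log('File:', __filename);
console.log('Directory:', __dirname);
console.log('Node version:', process.version);

const buf = Buffer.from('hello');
console.log(buf); // <Buffer 68 65 6c 6c 6f>

setTimeout(() => console.log('Runs after ~1 second'), 1000);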

27. What are the different phases of the event loop?

The Event Loop is composed of the following six phases, which are repeated for as long as the application still has code that needs to be executed:

  1. Timers
  2. I/O Callbacks
  3. Waiting / Preparation
  4. I/O Polling
  5. setImmediate() callbacks
  6. Close events

The Event Loop starts at the moment Node.js begins to execute your index.js file, or any other application entry point.

These six phases create one cycle, or loop, which is known as a tick. A Node.js process exits when there is no more pending work in the Event Loop, or when process.exit() is called manually.

Some phases are executed by the Event Loop itself, but for some of them the main tasks are passed to the asynchronous C++ APIs.

Phase 1: timers: The timers phase is executed directly by the Event Loop. At the beginning of this phase, the Event Loop updates its own time. Then it checks a queue, or pool, of timers. This queue consists of all timers that are currently set. The Event Loop takes the timer with the shortest wait time and compares it with the Event Loop’s current time. If the wait time has elapsed, then the timer’s callback is queued to be called once the call stack is empty.

Phase 2: I/O callbacks: This is a phase of non-blocking input/output. The asynchronous I/O request is recorded into the queue and then the main call stack can continue working as expected. In the second phase of the Event Loop the I/O callbacks of completed or errored out I/O operations are processed.

Phase 3: idle / waiting / preparation: This is a housekeeping phase. During this phase, the Event Loop performs internal operations and prepares for upcoming work. It is primarily used for gathering information and planning what needs to be executed during the next tick of the Event Loop.

Phase 4: I/O polling (poll phase): This is the phase in which all the JavaScript code that we write is executed, starting at the beginning of the file, and working down. Depending on the code it may execute immediately, or it may add something to the queue to be executed during a future tick of the Event Loop.

During this phase, the Event Loop is managing the I/O workload, calling the functions in the queue until the queue is empty, and calculating how long it should wait until moving to the next phase. All callbacks in this phase are called synchronously in the order that they were added to the queue, from oldest to newest.

Note: this phase is optional. It may not happen on every tick, depending on the state of your application.

If there are any setImmediate() timers scheduled, Node.js will skip this phase during the current tick and move to the setImmediate() phase.

Phase 5: setImmediate() callbacks: Node.js has a special timer, setImmediate(), and its callbacks are executed during this phase. This phase runs as soon as the poll phase becomes idle. If setImmediate() is scheduled within the I/O cycle it will always be executed before other timers regardless of how many timers are present.

Phase 6: close events: This phase executes the callbacks of all close events. For example, a close event of web socket callback, or when process.exit() is called. This is when the Event Loop is wrapping up one cycle and is ready to move to the next one. It is primarily used to clean the state of the application.

28. What are Reactor Pattern and Demultiplexer in Node?

The Reactor pattern is a design pattern used in event-driven systems, and it forms the foundation of Node.js’ event-driven architecture. In this pattern, an event loop (also known as the Reactor) continuously monitors multiple I/O sources for events, such as incoming network connections, file system operations, or timers. When an event occurs, the Reactor dispatches the corresponding event handler to handle the event.

Demultiplexer in Node.js is part of the underlying libuv library. Developers interact with the Reactor pattern through APIs provided by Node.js, such as event emitters, timers, and networking utilities.

The Demultiplexer (often abbreviated as Demux) is a component in Node.js that works in collaboration with the Reactor pattern. Its primary function is to monitor multiple I/O resources, such as network sockets or files, and notify the Reactor when events occur on those resources.

Together, the Reactor pattern and the Demultiplexer form the core of Node.js’ event-driven, non-blocking I/O model. This architecture allows Node.js to handle a large number of concurrent operations efficiently, making it well-suited for building scalable network applications and servers that require high performance.

29. How to handle large file uploads on a Node server? What is highWaterMark?

To handle large file uploads on a Node.js server, you can use the multer middleware, which is a popular choice for handling multipart/form-data, including file uploads.

const express = require('express');
const multer = require('multer');
const upload = multer({ dest: 'uploads/' }); // Destination directory for uploaded files

const app = express();

app.post('/upload', upload.single('file'), (req, res) => {
  res.send('File uploaded successfully');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Another third-party module that is commonly used for handling file uploads in Node.js is busboy. busboy is a streaming parser for HTML form data for Node.js. It provides a way to handle file uploads and other form data within HTTP requests.

const http = require('http');
const Busboy = require('busboy');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  if (req.url === '/upload' && req.method === 'POST') {
    const busboy = new Busboy({ headers: req.headers });

    busboy.on('file', (fieldname, file, filename, encoding, mimetype) => {
      const saveTo = path.join(__dirname, 'uploads', filename);
      file.pipe(fs.createWriteStream(saveTo));
    });

    busboy.on('finish', () => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('File uploaded successfully');
    });

    req.pipe(busboy);
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not Found');
  }
}).listen(3000, () => {
  console.log('Server is running on port 3000');
});

highWaterMark is an option used in streams to control how much data is buffered internally before the stream applies backpressure (for writable streams, writes beyond this threshold make write() return false until the ‘drain’ event fires). In the context of file uploads, highWaterMark is relevant when dealing with large files, to limit how much data is buffered in memory at any one time.

When using fs.createReadStream or other stream-related operations, you can specify the highWaterMark option to control the buffer size.

const fs = require('fs');
const readStream = fs.createReadStream('largeFile.txt', { highWaterMark: 16 * 1024 }); // Set highWaterMark to 16 KB

30. What is middleware and what are the different types of middleware?

Middleware is a JavaScript function that has full access to the HTTP request and response objects in the request-response cycle. Middleware sits between the incoming request and your business logic. It is mainly used for capturing logs and for rate limiting, routing, authentication, and setting security headers.

There are several types of middleware (a custom example follows below):

a) Application-level middleware: app.use()

b) Router-level middleware: authRoutes.use('*', isAuth);

c) Error-handling middleware: functions with the four-argument (err, req, res, next) signature

d) Built-in middleware: express.json(), express.static()

e) Third-party middleware: cors, helmet, multer, validator
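
A minimal sketch of a custom application-level middleware and an error-handling middleware:

const express = require('express');
const app = express();

// Application-level middleware: runs for every request, then passes control on
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next(); // hand off to the next middleware or route handler
});

app.get('/', (req, res) => res.send('Hello'));

// Error-handling middleware: identified by its four-argument signature
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).send('Something broke!');
});

app.listen(3000);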

31. Difference between readFile and createReadStream?

  • The readFile method is used to asynchronously read the entire contents of a file into memory as a single buffer. It is suitable for small to medium-sized files that can be comfortably held in memory. Once the entire file is read, the file content is available for processing.
const fs = require('fs');

fs.readFile('example.txt', 'utf8', (err, data) => {
  if (err) {
    // Handle error
  } else {
    // Process the file data
  }
});
  • The createReadStream method is used to create a readable stream from a file. This method is suitable for reading large files or for efficiently processing the contents of a file chunk by chunk, without loading the entire file into memory at once. This is particularly useful when dealing with large files, as it helps in optimizing memory usage.
const fs = require('fs');

const readableStream = fs.createReadStream('largeFile.txt', 'utf8');

readableStream.on('data', (chunk) => {
  // Process the chunk of data
});

readableStream.on('end', () => {
  // All data has been read
});

32. Difference between Hashing and Encryption?

Hashing is a one-way process used for data integrity and verification. It converts input data into a fixed-size string of characters, typically a hexadecimal number. The resulting string, known as a hash value or digest, is unique to the input data.

Hashing is deterministic, meaning that for the same input, the resulting hash value will always be the same. This property is what makes hashing useful for data integrity checks and verification.

Example of hashing:

  • When the user sets or changes their password, you would hash their input using bcrypt and then store the resulting hash in your database. When doing this, bcrypt handles the addition of a salt to the password and the hashing process.
const bcrypt = require('bcrypt');
const saltRounds = 10;
const plainTextPassword = 'mySecurePassword';

bcrypt.hash(plainTextPassword, saltRounds, (err, hash) => {
  if (!err) {
    console.log('Hashed Password:', hash);
  }
});
  • When the user attempts to log in, you would retrieve the stored hash from the database and then use bcrypt’s compare function to hash the password input by the user and compare the resulting hash with the stored hash. If they match, the input password is correct.
const bcrypt = require('bcrypt');
const hashedPassword = '...'; // Replace with the actual hashed password
const userInputPassword = 'userInputPassword'; // Replace with the user's input

// Use the bcrypt compare function to check if the userInputPassword matches the hashed password
bcrypt.compare(userInputPassword, hashedPassword, (err, result) => {
  if (err) {
    // Handle error
  }
  if (result) {
    // Passwords match
    console.log('Password is correct');
  } else {
    // Passwords don't match
    console.log('Password is incorrect');
  }
});

Encryption is a two-way process that transforms plaintext (original data) into ciphertext (encrypted data) using an encryption algorithm and an encryption key. The encrypted data can later be transformed back into its original form using a decryption algorithm and the corresponding decryption key. Encryption is designed to protect data confidentiality and ensure that only authorized parties can access the original data.
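
A short symmetric-encryption sketch using the built-in crypto module (AES-256-CBC with a random key and IV; in a real application the key would be derived and stored securely):

const crypto = require('crypto');

const algorithm = 'aes-256-cbc';
const key = crypto.randomBytes(32); // 256-bit key
const iv = crypto.randomBytes(16); // initialization vector

// Encrypt: plaintext -> ciphertext
const cipher = crypto.createCipheriv(algorithm, key, iv);
let encrypted = cipher.update('my secret data', 'utf8', 'hex');
encrypted += cipher.final('hex');

// Decrypt: ciphertext -> plaintext (requires the same key and IV)
const decipher = crypto.createDecipheriv(algorithm, key, iv);
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');

console.log(decrypted); // my secret data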

33. What is the use of NodeJS binding?

In Node.js, bindings, also known as the Node API (Application Programming Interface), serve as the bridge between JavaScript code and C/C++ code. These bindings enable communication between Node.js and low-level system resources and libraries written in C or C++, allowing for interactions with hardware, operating system functionality, and other low-level operations that are not directly accessible from JavaScript.

The use of Node.js bindings is essential for several purposes, including:

  • Accessing System Features
  • Performance Optimization
  • Integration with Existing Libraries
  • System-level Functionality

34. Difference between Cluster and child_process modules?

Cluster module:

  • The cluster module is used to create multiple processes of the same Node.js application in a single machine to take advantage of multi-core systems. It allows for automatic load balancing across the different processes and enables efficient utilization of system resources.
  • The primary purpose of the Cluster module is to distribute incoming connection requests (e.g., HTTP requests) across a pool of workers, allowing a Node.js server to handle multiple requests concurrently.
  • When using the cluster module, a single “master” process is responsible for managing multiple “worker” processes, distributing incoming connections among them, and restarting workers as needed.
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Fork workers for each CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case, an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello World\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}

child_process module:

  • The child_process module is used to spawn new processes to execute external commands, run other Node.js scripts, or perform non-Node.js operations. It provides a way to create and manage child processes from within a Node.js application.
  • The primary purpose of the child_process module is to enable the execution of tasks that are CPU-intensive, I/O-intensive, or require interaction with external programs or scripts.
  • When using the child_process module, the parent process can spawn child processes, communicate with them, and receive their results asynchronously using event-driven mechanisms. This allows for parallel execution of tasks and better utilization of system resources.
const { exec } = require('child_process');

// Execute a command using child_process.exec
exec('ls -la', (error, stdout, stderr) => {
  if (error) {
    console.error(`Error: ${error.message}`);
    return;
  }
  if (stderr) {
    console.error(`stderr: ${stderr}`);
    return;
  }
  console.log(`stdout:\n${stdout}`);
});

35. How to read, write, and compress a file in node?

To read a file in Node.js, you can use the fs module’s readFile function.

const fs = require('fs');

fs.readFile('input.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('Error reading the file:', err);
    return;
  }
  console.log('File content:', data);
});

To write data to a file in Node.js, you can use the fs module’s writeFile function.

const fs = require('fs');

const data = 'This is the data to write to the file';

fs.writeFile('output.txt', data, (err) => {
  if (err) {
    console.error('Error writing to the file:', err);
    return;
  }
  console.log('Data has been written to the file');
});

To compress a file in Node.js, you can use the zlib module to create a GZIP archive.

const fs = require('fs');
const zlib = require('zlib');

const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('input.txt.gz');
const gzip = zlib.createGzip();

readStream.pipe(gzip).pipe(writeStream);

writeStream.on('finish', () => {
  console.log('File has been compressed');
});

36. Difference between async/await, Async.series, and Async.parallel?

Async/await is a modern JavaScript feature that allows you to write asynchronous code in a more synchronous style, making it easier to work with promises and asynchronous operations. With async/await, you can define an async function, and within that function, you can use the await keyword to pause the execution of the function until a promise is resolved. This allows you to write asynchronous code that looks and behaves more like synchronous code, making it easier to manage and reason about.

async function fetchData() {
  try {
    let result = await fetch('https://api.example.com/data');
    let data = await result.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}

Async.series is a method provided by the Async.js library, which is a utility module for working with asynchronous JavaScript. Async.series is used to run multiple asynchronous functions in a specific order, one after the other. Each function is executed only after the previous function has completed. This is typically used when you have several asynchronous tasks that need to be executed in a specific sequence.

const async = require('async');

async.series([
  function(callback) {
    // Perform asynchronous operation 1
    callback(null, 'Result 1');
  },
  function(callback) {
    // Perform asynchronous operation 2
    callback(null, 'Result 2');
  }
], function(err, results) {
  // All tasks have been completed
  console.log(results);
});

Async.parallel is a function from the Async.js library in JavaScript. It is used to run multiple asynchronous functions simultaneously and collect the results when all of them have completed.

const async = require('async');

async.parallel([
  function(callback) {
    // Perform asynchronous operation 1
    callback(null, 'Result 1');
  },
  function(callback) {
    // Perform asynchronous operation 2
    callback(null, 'Result 2');
  }
], function(err, results) {
  // All tasks have been completed
  console.log(results);
});

37. How to achieve localization in Node?

In Node.js, achieving localization involves providing multi-language support for your application. This typically includes translating text messages, date formats, currency symbols, and other locale-specific aspects of your application.

The i18n (Internationalization) module is a popular choice for achieving localization in Node.js. It provides support for multi-language content and facilitates the loading of locale-specific resource files.

  • Install the i18n module
  • Create a directory named “locales” at the root of your project to store the locale-specific translation files.
  • Inside the “locales” directory, create separate JSON files for each supported language. For example, you might have “en.json” for English and “fr.json” for French. Each file should contain key-value pairs for the translations.
{
  "greeting": "Hello",
  "welcome": "Welcome"
}
  • In your Node.js application entry file (e.g., app.js), configure the i18n module.
const i18n = require('i18n');
const express = require('express');
const app = express();

i18n.configure({
  locales: ['en', 'fr'],
  defaultLocale: 'en',
  directory: __dirname + '/locales',
  objectNotation: true
});

app.use(i18n.init);

// Other app configurations and middleware

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
  • You can now use the translations in your routes or views. For example, in an Express route handler.
app.get('/', (req, res) => {
  res.send(req.__('greeting'));
});

38. Difference between Promises and Observables?

Promises and Observables are both used for managing asynchronous operations in JavaScript, but there are several differences between the two:

Single Value vs. Multiple Values:

  • Promise: Represents a single value or the eventual result of an asynchronous operation. Once a promise is resolved or rejected, it can only emit a single value.
  • Observable: Represents a stream of values over time. It can emit multiple values asynchronously, and it also has additional capabilities such as handling errors, completing, and transformation operations.

Eager vs. Lazy Evaluation:

  • Promise: Eagerly evaluates and triggers the asynchronous operation as soon as the promise is created, whether there are consumers interested in the result or not.
  • Observable: Lazily evaluates and does not trigger the asynchronous operation until it has at least one subscriber interested in the emitted values. This allows for more efficient use of resources when dealing with cold observables.

Cancellation:

  • Promise: Once a promise is created, it cannot be canceled. It will eventually resolve or reject, and the consumer has to handle the result accordingly.
  • Observable: Supports cancellation. Subscribers can unsubscribe from receiving further values if they are no longer interested, which can be beneficial in scenarios where resources need to be released early.

Additional Operators and Features:

  • Observable: Provides a rich set of operators for transforming, combining, and working with asynchronous data streams. It also supports higher-order observables, multicast behavior, and backpressure handling in certain implementations.
  • Promise: Offers limited built-in capabilities, primarily focused on handling the eventual resolution or rejection of a single asynchronous operation.

Backward Compatibility:

  • Promise: Has been a part of the JavaScript language since ECMAScript 6 (ES6), making it widely supported in modern environments.
  • Observable: Introduced later through libraries such as RxJS and is not part of the core JavaScript language, although it has gained popularity, especially in the context of reactive programming and complex asynchronous scenarios.

Example of Observable:

// Import the necessary modules from RxJS
import { Observable } from 'rxjs';

// Create an observable
const observable = new Observable((observer) => {
  // Emit three values asynchronously with a delay
  setTimeout(() => {
    observer.next('First value');
  }, 1000);

  setTimeout(() => {
    observer.next('Second value');
  }, 2000);

  setTimeout(() => {
    observer.next('Third value');
    // Complete the observable after emitting the third value
    observer.complete();
  }, 3000);
});

// Subscribe to the observable
observable.subscribe({
  // Handle each emitted value
  next: (value) => console.log(value),
  // Handle errors
  error: (error) => console.error(error),
  // Handle completion
  complete: () => console.log('Observable completed'),
});

39. How to achieve server-side validation in a Node application?

In a Node.js application, server-side validation can be achieved by implementing validation logic on the server to ensure that incoming data from the client is valid before processing it further.

There are several libraries available in the Node.js ecosystem that facilitate validation, such as Joi, Validator.js, Express-validator, and Yup. These libraries provide a rich set of validation functions for checking data against predefined rules and constraints.

const express = require('express');
const Joi = require('joi');

const app = express();
app.use(express.json());

app.post('/validate', (req, res) => {
  const schema = Joi.object({
    username: Joi.string().alphanum().min(3).max(30).required(),
    email: Joi.string().email().required(),
    age: Joi.number().integer().min(18).max(120).required(),
  });

  const { error, value } = schema.validate(req.body);
  if (error) {
    return res.status(400).json({ error: error.details[0].message });
  }

  // If data is valid, continue processing
  // ...

  res.status(200).json({ message: 'Data is valid' });
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

40. Difference between Emitter and Dispatcher?

The EventEmitter is a class that facilitates communication/interaction between objects in Node.js. The EventEmitter class can be used to create and handle custom events.

EventEmitter is at the core of Node’s asynchronous event-driven architecture. Many of Node’s built-in modules inherit from EventEmitter, and prominent frameworks like Express.js build on it as well.

An emitter object basically has two main features:

  • Emitting named events.
  • Registering and unregistering listener functions.

Emitter:

  • An emitter is an object that emits events or signals to indicate that something has happened.
  • It’s a fundamental component in event-driven programming, where it notifies interested parties (listeners or subscribers) when particular events occur.
  • Emitters are commonly used in languages like JavaScript, where objects can be event emitters, and they are central to frameworks like Node.js for handling asynchronous events.
/**
 * Callback Events with Parameters
 */
const events = require('events');
const eventEmitter = new events.EventEmitter();

function listener(code, msg) {
  console.log(`status ${code} and ${msg}`);
}

eventEmitter.on('status', listener); // Register listener
eventEmitter.emit('status', 200, 'ok');

// Output: status 200 and ok

Dispatcher:

  • A dispatcher is an intermediary component responsible for routing and delivering messages or events to their intended recipients.
  • It acts as a message broker that receives messages from emitters and then forwards them to the appropriate handlers, listeners, or components based on predefined criteria.
  • Dispatchers are commonly used in message-passing systems, event-driven architectures, and in the context of software design patterns like the observer pattern.
  • Example: In event-driven systems or message-based architectures, a dispatcher may receive events or messages from various sources and then dispatch or route them to the corresponding event handlers or processing units within the system.

41. How to perform a load test of a Node app?

Loadtest is a Node.js module that allows you to perform load testing on HTTP endpoints. It provides a simple command-line interface for quick testing and also allows for programmatic usage for more complex scenarios.

You can use loadtest to perform a simple load test from the command line:

loadtest -n 100 -c 10 http://yourwebsite.com

In this command:

  • -n specifies the total number of requests to send.
  • -c specifies the number of concurrent clients.
  • Adding the -k flag reuses connections with HTTP keep-alive.

You can also use loadtest programmatically in your Node.js application. Here’s an example of how you can perform a simple programmatic load test:

const loadtest = require('loadtest');

const options = {
  url: 'http://yourwebsite.com',
  maxRequests: 100,
  concurrency: 10,
};

loadtest.loadTest(options, function (error, result) {
  if (error) {
    return console.error('Load Test Error: ', error);
  }
  console.log('Tests run successfully:', result);
});

42. What are the different types of memory leaks?

In Node.js, memory leaks can occur due to various reasons, and addressing them is vital to maintain the performance and stability of Node.js applications. Some common types of memory leaks in Node.js include:

  1. Accidental Global Variables: When a variable is not properly scoped and unintentionally becomes a global, it and any objects it references can never be garbage collected, leading to memory leaks.
  2. Closures: Closures can keep references to local variables even after they are no longer needed, preventing the garbage collector from reclaiming their memory.
  3. Forgotten Timers and Callbacks: Timers and callbacks that are not cleared when they are no longer needed can hold references to objects, preventing their disposal (see the sketch after this list).
  4. Unintentional Circular References: When objects reference each other in a loop, the whole group stays alive as long as any one of them remains reachable, which can keep large object graphs in memory longer than intended.
  5. Uncollected DOM Elements: In code that runs in the browser, DOM elements that are not properly removed when no longer needed continue to consume memory.
  6. Event Emitters: Subscribing to events without ever unsubscribing can keep listener objects alive longer than necessary, creating leaks if emitters are long-lived.
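As a quick illustration of item 3, here is a minimal sketch of a timer leak and its fix (startPolling and the buffer size are invented for this example):

function startPolling() {
  // This large buffer stays reachable for as long as the interval exists.
  const hugeBuffer = Buffer.alloc(10 * 1024 * 1024);

  const timer = setInterval(() => {
    console.log('polling with', hugeBuffer.length, 'bytes of state');
  }, 1000);

  // Returning a cleanup function lets the caller release the timer
  // (and, with it, hugeBuffer) when polling is no longer needed.
  return () => clearInterval(timer);
}

const stopPolling = startPolling();
// ... later, once polling is done:
stopPolling();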

43. How you can achieve caching in node application?

Caching is beneficial for various types of data in a Node.js application, especially when dealing with data that is frequently accessed and relatively static.

Implementing caching in a Node.js application using Redis can significantly improve performance by storing frequently accessed data in memory.

In your Node.js application, create a connection to the Redis server using the redis package:

const redis = require('redis');

// Note: the callback-style API below matches node-redis v3; v4 and later
// are promise-based and require an explicit client.connect() call.
const client = redis.createClient({
  host: 'your-redis-server-hostname',
  port: 6379, // default Redis port
});

Let’s consider an example where we have a function that fetches user data from a database and we want to cache this data using Redis. Here’s a simplified implementation using an Express.js route:

const express = require('express');
const app = express();

app.get('/user/:userId', (req, res) => {
  const { userId } = req.params;
  // Check the cache first
  client.get(`user:${userId}`, (err, userData) => {
    if (err) throw err;
    if (userData) {
      // Data found in cache, return it
      res.json(JSON.parse(userData));
    } else {
      // Data not found in cache
      // Simulate fetching data from a database
      const fetchedUserData = fetchUserDataFromDatabase(userId);
      // Store the fetched data in the cache with an expiration time (in seconds)
      client.setex(`user:${userId}`, 3600, JSON.stringify(fetchedUserData));
      res.json(fetchedUserData);
    }
  });
});

function fetchUserDataFromDatabase(userId) {
  // Simulated database fetch
  return {
    id: userId,
    name: 'John Doe',
    // Other user data
  };
}

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

In this example, when a request is made to fetch user data, the application first checks if the data is present in the Redis cache. If it is, the cached data is returned. If not, the data is fetched from the database, stored in the cache using client.setex(), and then returned to the client.

44. How can you improve the performance of the node applications?

Use Streams: Utilize Node.js streams for data processing, especially when dealing with large volumes of data. Streams enable data to be processed in chunks, reducing memory usage and improving performance.

const fs = require('fs');

const readableStream = fs.createReadStream('input.txt');
const writableStream = fs.createWriteStream('output.txt');

readableStream.pipe(writableStream);

Implement Caching: Cache frequently accessed data using in-memory solutions like Redis to reduce the need for repeated database queries and enhance overall application responsiveness.

const redis = require('redis');
const client = redis.createClient();

// Caching data
const fetchUserData = (userId) => {
  // Simulated data fetch from database
  const userData = { id: userId, name: 'John Doe' };
  // Store data in cache with an expiration time (in seconds)
  client.setex(`user:${userId}`, 3600, JSON.stringify(userData));
  return userData;
};

// Retrieving data from cache
const getUserData = (userId, callback) => {
  client.get(`user:${userId}`, (err, cachedData) => {
    if (err) throw err;
    if (cachedData) {
      return callback(JSON.parse(cachedData));
    }
    const userData = fetchUserData(userId);
    return callback(userData);
  });
};

Utilize Worker Threads: For CPU-intensive tasks, consider using Node.js worker threads to offload processing to separate threads and take advantage of multi-core CPUs without blocking the main event loop.

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Main thread
  const worker = new Worker(__filename);
  worker.on('message', (msg) => {
    console.log('Worker said:', msg);
  });
} else {
  // Worker thread
  parentPort.postMessage('Hello from the worker!');
}

Compression: Enable compression for HTTP responses using middleware like compression in a Node.js web framework such as Express.

const express = require('express');
const compression = require('compression');
const app = express();

app.use(compression()); // Enable compression for all routes

// ... Define your routes and application logic

Scaling:

  • Utilize horizontal scaling by employing load balancers (such as Nginx or HAProxy) to distribute incoming requests across multiple Node.js instances.
  • Implement clustering in Node.js using the built-in cluster module to take advantage of multi-core systems and create child processes to handle incoming requests.
  • Use container orchestration tools like Docker Swarm or Kubernetes to manage and scale Node.js application instances across a cluster of machines.

const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) { // cluster.isPrimary in Node.js 16+
  const numCPUs = os.cpus().length;

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
  });
} else {
  // Code for the worker process
  const app = require('./app');
  app.listen(3000);
}

Optimized Database Queries: Construct efficient database queries, utilize database indexing, and employ query optimization techniques to minimize database load and response times.

// Efficient query with proper indexing (Mongoose-style example)
const users = await User.find({}).limit(10).sort({ createdAt: -1 });

// Index creation (MongoDB shell / driver)
db.collection('users').createIndex({ name: 1 });

Error Handling: Implement effective error handling mechanisms to prevent crashes and ensure the application continues to run smoothly in case of errors.

// Centralized error-handling middleware in an Express.js application
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).send('Something went wrong!');
});

// Handling unhandled promise rejections
process.on('unhandledRejection', (err) => {
  console.error('Unhandled Rejection:', err);
  // Logging and other cleanup tasks
  process.exit(1); // Exit the process in case of unhandled rejections
});

Avoid Synchronous Operations: Minimize the use of synchronous operations, especially in I/O-bound tasks, and prioritize asynchronous operations to prevent blocking the event loop and maximize concurrency.

  • Use Promise.all to execute multiple asynchronous operations concurrently and await their results.

function fetchUserData(userId) {
  // Simulated data fetch from a database or API
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve(`User data for user ${userId}`);
    }, 1000);
  });
}

const userIds = [1, 2, 3];

Promise.all(userIds.map((id) => fetchUserData(id)))
  .then((userData) => {
    console.log(userData);
  })
  .catch((error) => {
    console.error(error);
  });

  • Use Promises or the async/await syntax to handle asynchronous operations in a more readable and maintainable manner.

function fetchData() {
  return new Promise((resolve, reject) => {
    // Simulated asynchronous data fetching
    setTimeout(() => {
      resolve('Data fetched');
    }, 1000);
  });
}

fetchData()
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error(error);
  });

Content Delivery Networks (CDNs):

  • Offload static content delivery to a CDN to reduce the load on the application servers and improve content delivery performance.

45. What are the security best practices for node applications?

Security best practices for Node.js applications encompass various measures to mitigate vulnerabilities and protect against malicious activities. Here are some essential security best practices for Node.js applications:

Dependency Management:

  • Regularly update and patch dependencies to address security vulnerabilities. Utilize tools like npm audit to identify and fix vulnerabilities within dependencies.

Input Validation and Sanitization:

  • Validate and sanitize user input to prevent injection attacks, such as SQL injection, NoSQL injection, and cross-site scripting (XSS). Leveraging libraries like Joi or validator.js can facilitate robust input validation.

Secure Authentication and Authorization:

  • Implement secure authentication mechanisms, such as bcrypt for password hashing, and adopt industry-standard protocols like OAuth or JWT for authorization. Enforce strong password policies and multi-factor authentication (MFA) where applicable (see the sketch below).
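A minimal password-hashing sketch, assuming the bcrypt package (the cost factor of 10 is a common default, not a requirement):

const bcrypt = require('bcrypt');

async function register(password) {
  // Hash the password with a cost factor of 10 before storing it.
  const hash = await bcrypt.hash(password, 10);
  return hash; // store this hash, never the plain-text password
}

async function login(password, storedHash) {
  // compare() hashes the candidate password and checks it against the stored hash.
  return bcrypt.compare(password, storedHash);
}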

Secure Communication:

  • Utilize HTTPS with TLS/SSL to secure communication between clients and the server. Employ strong, up-to-date cipher suites and security protocols to safeguard data transmission.

Avoiding Sensitive Data Exposure:

  • Minimize the exposure of sensitive information, such as API keys, credentials, and encryption keys, by utilizing environment variables and secure storage mechanisms rather than embedding them directly within the codebase (see the sketch below).
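A minimal sketch using the dotenv package to keep secrets out of the codebase (API_KEY is a placeholder variable name):

// .env file (kept out of version control)
// API_KEY=your-secret-key

require('dotenv').config(); // loads variables from .env into process.env

const apiKey = process.env.API_KEY; // read the secret at runtime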

Content Security Policy (CSP):

  • Implement a Content Security Policy to mitigate cross-site scripting (XSS) attacks by defining approved sources of content, scripts, and other resources that the application can load.

Security Headers:

  • Set appropriate security headers, such as HTTP Strict Transport Security (HSTS), X-Frame-Options, X-XSS-Protection, and X-Content-Type-Options, to bolster security and protect against various types of attacks (see the sketch below).
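A minimal sketch, assuming an Express application and the helmet package, which sets a collection of security headers with sensible defaults:

const express = require('express');
const helmet = require('helmet');

const app = express();

// helmet() applies several header-setting middlewares at once,
// including HSTS and X-Content-Type-Options.
app.use(helmet());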

Logging and Monitoring:

  • Implement robust logging and monitoring solutions to track and analyze security events, anomalous behaviors, and potential vulnerabilities within the application.

Secure Session Management:

  • Utilize secure session management practices, such as randomizing session IDs, utilizing HTTPS, and implementing session timeouts to prevent session hijacking and fixation.

Regular Security Audits and Penetration Testing:

  • Conduct periodic security audits and penetration testing to identify and address potential security flaws, vulnerabilities, and weaknesses in the application’s architecture and codebase.

46. How we can scale nodejs application?

Scaling a Node.js application involves increasing its capacity to handle a growing number of users, requests, and data without sacrificing performance. Here are several approaches to scale a Node.js application:

Horizontal Scaling:

  • Use a load balancer to distribute incoming traffic across multiple instances of the Node.js application. Each application instance can run on a separate server or container, allowing the application to handle more concurrent requests.

Vertical Scaling:

  • Upgrade the server hosting the Node.js application with more resources, such as CPU, memory, or storage, to support increased loads and processing requirements.

Clustering:

  • Utilize Node.js’s built-in cluster module to create a cluster of Node.js processes running on a single machine, effectively utilizing multiple CPU cores to handle incoming requests.

Containerization and Orchestration:

  • Use containerization technologies like Docker to package the application and its dependencies into containers. Orchestrate these containers with tools like Kubernetes to manage and scale them across a cluster of machines.

Serverless Architecture:

  • Consider migrating parts of the application to a serverless architecture, where the infrastructure automatically scales to match the application’s demand. Services like AWS Lambda or Azure Functions can be used for executing application code in a serverless manner.

Database Scaling:

  • Scale the database layer independently to handle increased data volume and read/write operations. This can involve sharding, replication, using distributed databases, or utilizing managed database services that support auto-scaling.

Monitoring and Auto-Scaling:

  • Set up monitoring and alerting systems to track the application’s performance and resource usage. Use cloud services that offer auto-scaling based on predefined metrics to automatically adjust the application’s capacity.

47. How to prevent DOS and DDOS attacks in node applications?

Denial of Service (DoS) Attack: In a DoS attack, a single source floods the targeted system with a massive amount of traffic, typically from one machine. The goal is to exhaust the target's resources, such as bandwidth, memory, or processing power, resulting in a denial of service to legitimate users.

Distributed Denial of Service (DDoS) Attack: In a DDoS attack, the malicious traffic comes from multiple sources, making it much harder to mitigate. The attackers use a network of compromised machines (often called a botnet), which can comprise thousands or even millions of computers and internet-connected devices. These machines are controlled remotely by the attackers and are used to flood the target simultaneously, overwhelming its capacity to respond to legitimate requests.

Prevention From DOS and DDOS Attacks: To prevent Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in Node.js applications, there are several strategies and best practices that can be employed.

  1. Rate Limiting: Implement rate limiting to restrict the number of requests from a client within a given time frame. This can be done using middleware such as express-rate-limit in Express.js to prevent an attacker from overwhelming the server with a large number of requests (see the sketch after this list).
  2. Monitoring and Anomaly Detection: Use monitoring tools to track the normal behavior and traffic patterns of your application. Implement anomaly detection to identify and respond to unusual spikes in traffic. Services like Amazon CloudWatch, New Relic, or similar monitoring tools can help in detecting anomalies.
  3. Use of CDN and Proxy Services: Content Delivery Networks (CDNs) and proxy services such as Cloudflare can help absorb and mitigate DDoS attacks by filtering traffic and serving as a protective layer between the attackers and your application.
  4. Web Application Firewall (WAF): Implement a WAF that can detect and filter out malicious traffic, preventing common web-based attacks including those used in DoS and DDoS attacks.
  5. Cloud-Based DDoS Protection Services: Consider using DDoS protection services offered by cloud providers like AWS Shield, Google Cloud Armor, or Azure DDoS Protection. These services are specifically designed to mitigate DDoS attacks and provide a shield for your infrastructure.
  6. HTTP/HTTPS Configuration: Configure your web server to handle and manage incoming requests efficiently. Ensure that your server can handle large numbers of connections without being overwhelmed by minor or malicious requests.
  7. Regular Security Audits and Updates: Regularly audit and update your application and infrastructure for security vulnerabilities. Stay updated with security patches and best practices to mitigate known attack vectors that can be exploited by attackers.
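A minimal rate-limiting sketch, assuming an Express app and the express-rate-limit package (the window and limit values are arbitrary examples):

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Allow at most 100 requests per IP per 15-minute window.
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,
});

app.use(limiter); // every route is now rate limited
app.get('/', (req, res) => res.send('ok'));
app.listen(3000);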

48. How to avoid SQL Injection attacks in node applications?

SQL injection is a type of security vulnerability that occurs when an attacker is able to manipulate input that is passed into a SQL query, leading to the unintended execution of malicious SQL code.

SQL injection attacks typically exploit web applications or other software that interact with a backend database. Attackers can inject malicious SQL code into input fields such as login forms, search boxes, or any other user-controllable data input.

Prevention From SQL Injection Attack:

  1. Use Parameterized Queries: When performing database queries, use parameterized queries or prepared statements provided by database libraries like mysql, pg, or ORM frameworks like Sequelize. Parameterized queries keep user input separate from the SQL text and prevent the insertion of malicious SQL code into queries (see the sketch after this list).
  2. Input Validation and Sanitization: Validate and sanitize user input to ensure that it conforms to the expected format and does not contain any malicious SQL code. Libraries like validator.js or frameworks like Express-validator can be used for input validation and sanitization.
  3. ORMs (Object-Relational Mapping) or Query Builders: Consider using ORM libraries such as Sequelize, TypeORM, or query builders like Knex.js, which provide abstractions for interacting with the database. These tools often handle input sanitization and help prevent direct SQL injection.
  4. Escaping User Input: If you are manually constructing SQL queries, make sure to escape user input using the appropriate escaping functions provided by your database library. This prevents user input from being interpreted as SQL commands.
  5. Least Privilege Principle: Follow the principle of least privilege by ensuring that the database user account used by the application has limited permissions. Avoid using highly privileged accounts for routine application operations.
  6. Regular Security Patching: Keep your database software and related libraries up to date with the latest security patches to mitigate known vulnerabilities that could be exploited by attackers.
  7. Web Application Firewalls (WAF): Implement a WAF to filter and monitor incoming traffic to your application, helping to detect and mitigate SQL injection attempts.
  8. Database Connection Pooling: Use database connection pooling to manage database connections efficiently. Connection pooling can help prevent certain types of SQL injection attacks and improve performance.
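As an illustration of item 1, here is a minimal parameterized-query sketch using the pg package (the table name and columns are placeholders; connection settings are assumed to come from environment variables):

const { Pool } = require('pg');
const pool = new Pool(); // reads PGHOST, PGUSER, etc. from the environment

async function findUserByEmail(email) {
  // The $1 placeholder keeps the user-supplied value out of the SQL text,
  // so it can never be interpreted as SQL code.
  const result = await pool.query(
    'SELECT id, name FROM users WHERE email = $1',
    [email]
  );
  return result.rows[0];
}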

49. How to write test cases for the backend?

To write test cases for the backend in Node.js, you can use a testing framework like Mocha or Jest together with an assertion library like Chai.

Mocha is a testing framework that runs test functions in a specific order and logs their results to the terminal window.

Chai is an assertion library that is often used alongside Mocha. It provides a number of assertion methods that can be used to test the output of functions and code snippets.

Example: Suppose you have a signin function that takes a username and password, and it returns a promise that resolves to a user object if the signin is successful, and rejects with an error if the signin fails.

// Assume this is the signin function defined in signin.js
function signin(username, password) {
  return new Promise((resolve, reject) => {
    // Assume some asynchronous logic to validate the username and password
    if (username === 'user' && password === 'password') {
      resolve({ username: 'user' });
    } else {
      reject(new Error('Invalid username or password'));
    }
  });
}

module.exports = { signin }; // exported so the test file can import it

// This is our test file test.js

// Import the 'assert' module from Chai
const assert = require('chai').assert;

// Import the signin function
const { signin } = require('./signin');

// Describe the test suite
describe('Signin functionality', () => {
  // Test case 1
  it('should return a user object for valid credentials', async () => {
    const user = await signin('user', 'password');
    assert.deepEqual(user, { username: 'user' });
  });

  // Test case 2
  it('should return an error for invalid credentials', async () => {
    try {
      await signin('user', 'wrong-password');
      assert.fail('Expected the signin to reject with an error');
    } catch (error) {
      assert.instanceOf(error, Error);
      assert.equal(error.message, 'Invalid username or password');
    }
  });
});

50. List out the use of the following npm modules (shrinkwrap, forever, dotenv, nodemon, response-time, multer, body-parser, loadtest)

For this one, you can check my other article:
https://medium.com/@javascriptcentric/30-node-js-modules-with-usecases-36950203722f

****************************************************************************

Hope you liked this article. A big thank you for reading.

Follow me on Medium for more guided articles.

Follow me on LinkedIn: https://www.linkedin.com/in/ravics09/

****************************************************************************
