Easily understand some backend concepts

Muktarul Khan Akash
13 min read · Dec 19, 2020

What is a web server?
A web server is server software, or hardware dedicated to running that software, that can satisfy client requests on the World Wide Web. A web server can, in general, host one or more websites, and it processes incoming network requests over HTTP and several related protocols.

Different types of web servers

The five leading web servers in the market are:

1. Apache HTTP Server

2. Microsoft Internet Information Services

3. Lighttpd

4. Nginx Web Server

5. Sun Java System Web Server

Let’s have a look at each of them in detail.


Apache HTTP Server

Apache HTTP Server is one of the most widely used web servers worldwide. The biggest advantage of using this server is that it supports almost all operating systems, such as Windows, Linux, Apple Mac OS, Unix, and others. Around 60% of web server machines worldwide run the Apache Web Server.

Apache HTTP Server is open-source. Being open-source means it is available for free and can easily be accessed through online communities. Thus, a lot of online support is available in case of a problem or an error. This also enables users to modify the server as per their requirements. Apache’s latest version is much more flexible than the previous ones and can handle more requests smoothly.


Microsoft Internet Information Service

IIS is a Microsoft product that offers almost all the features that Apache HTTP Server provides. Microsoft IIS is not open source. This means that it has some development limitations, and users cannot modify it as per their project requirements; the project has to be modified around it. It runs on every Windows OS device, and Microsoft provides customer care and help to its users in case of any issue.


Lighttpd (pronounced ‘Lighty’)

Lighttpd is a combination of ‘light’ and ‘httpd’ and was released in 2003. It is not as popular as Apache and IIS; however, its small CPU load and speed optimizations set it apart from its competitors. It can handle a large number of connections at the same time and even provides facilities like Auth, URL rewriting, flexible virtual hosting, servlet support (AJP), HTTP proxy support, etc., to the user.

All these features along with being lightweight make Lighttpd suitable for servers suffering from load problems.


Nginx Web Server (pronounced ‘engine-x’)

Just like Lighttpd, it is an open-source web server, well known for the performance it provides with low resource usage and minimal configuration. It is mainly used for caching, media streaming, load balancing, handling of static files, auto-indexing, etc. Instead of creating a new process for each request made by the user, Nginx handles requests in a single thread, using an asynchronous approach.

Nginx has been gaining recognition in the market, and about 7.5% of domains worldwide use it.


Sun Java System Web Server (SJSWS)

It is a multi-threaded and multi-process web server that provides high performance, scalability, and reliability to enterprises. It also provides data security and command-line interface (CLI) support. The newest version of this web server (7.0) uses a newly introduced CLI called ‘wadm’.

The 7.0 version of the web server does not support HttpServerAdmin. However, it comes with a built-in migration tool that helps in migrating apps, websites, and their configurations from an older to a newer version of SJSWS hassle-free.

Difference between the PUT, POST, GET, DELETE, and PATCH HTTP verbs:

The most commonly used HTTP verbs, POST, GET, PUT, and DELETE, are similar to the CRUD (Create, Read, Update, and Delete) operations in databases. We write these HTTP verbs in uppercase. Below is the mapping between them.

  1. create — POST
  2. read — GET
  3. update — PUT
  4. delete — DELETE

PATCH: Submits a partial modification to a resource. If you only need to update one field for the resource, you may want to use the PATCH method.
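As a rough sketch, this verb-to-CRUD mapping can be mimicked with a hypothetical in-memory store; the handler names and status codes below are illustrative, not a real server:

```javascript
// A hypothetical in-memory store illustrating the verb-to-CRUD mapping;
// each handler returns the typical HTTP status code.
const store = new Map();

const handlers = {
  POST: (id, data) => { store.set(id, data); return 201; },   // create
  GET: (id) => (store.has(id) ? 200 : 404),                   // read
  PUT: (id, data) => { store.set(id, data); return 200; },    // full update
  PATCH: (id, data) => {                                      // partial update
    store.set(id, { ...store.get(id), ...data });
    return 200;
  },
  DELETE: (id) => { store.delete(id); return 204; },          // delete
};

handlers.POST(1, { title: 'foo' });
handlers.PATCH(1, { body: 'bar' });
console.log(store.get(1)); // { title: 'foo', body: 'bar' }
```

Note how PATCH merges into the existing record, while PUT would replace it wholesale.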


Since POST, PUT, and DELETE modify content, the tests with Fiddler against the URL below only mimic the updates; nothing is actually created, modified, or deleted. We can still check the status codes to verify whether the insertions, updates, and deletions would have occurred.

URL: http://jsonplaceholder.typicode.com/posts/

1) GET:

GET is the simplest type of HTTP request method; the one that browsers use each time you click a link or type a URL into the address bar. It instructs the server to transmit the data identified by the URL to the client. Data should never be modified on the server-side as a result of a GET request. In this sense, a GET request is read-only.

Checking with Fiddler or Postman: We can use Fiddler to check the response. Open Fiddler, select the Compose tab, specify the verb and URL as shown below, and click Execute to check the response.

Verb: GET

URL: http://jsonplaceholder.typicode.com/posts/

Response: You will get a response like:

{ "userId": 1, "id": 1, "title": "sunt aut…", "body": "quia et suscipit…" }

In the “happy” (or non-error) path, GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). In an error case, it most often returns a 404 (NOT FOUND) or 400 (BAD REQUEST).
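The read-only behavior and the two status codes can be sketched with a hypothetical handler; the sample record mirrors the jsonplaceholder response above, and no real network request is made:

```javascript
// Read-only GET sketch: look the resource up and return 200 or 404.
const posts = {
  1: { userId: 1, id: 1, title: 'sunt aut...', body: 'quia et suscipit...' },
};

function handleGet(url) {
  const id = url.split('/').pop();
  // A GET never modifies `posts`; it only reads.
  return posts[id] ? { status: 200, body: posts[id] } : { status: 404, body: null };
}

console.log(handleGet('/posts/1').status);   // 200
console.log(handleGet('/posts/999').status); // 404
```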

2) POST:

The POST verb is mostly used to create new resources. In particular, it is used to create subordinate resources, that is, resources subordinate to some other (e.g., parent) resource.

On successful creation, the server returns HTTP status 201 (Created), along with a Location header linking to the newly created resource.

Checking with Fiddler or Postman: as before, open Fiddler’s Compose tab, specify the verb, URL, and request body shown below, and click Execute to check the response.

Verb: POST

url: http://jsonplaceholder.typicode.com/posts/

Request Body:

data: { title: 'foo', body: 'bar', userId: 1000, id: 1000 }

Response: You would receive the response code as 201.

If we want to check the inserted record with id = 1000, change the verb to GET, use the same URL, and click Execute.

As said earlier, the above URL only mimics writes, so we cannot actually read the newly inserted data back.
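As a sketch of the POST semantics described above, a hypothetical create handler that answers 201 with a Location header (the ids and paths are illustrative):

```javascript
// POST sketch: create a subordinate resource under /posts and answer
// 201 (Created) with a Location header pointing at the new resource.
let nextId = 101;
const posts = [];

function handlePost(body) {
  const id = nextId++;
  posts.push({ id, ...body });
  return { status: 201, headers: { Location: '/posts/' + id } };
}

const res = handlePost({ title: 'foo', body: 'bar', userId: 1000 });
console.log(res.status, res.headers.Location); // 201 /posts/101
```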

3) PUT:

PUT is most often used for update capabilities: we PUT to a known resource URI with the request body containing the newly updated representation of the original resource.

Checking with Fiddler or Postman: as before, open Fiddler’s Compose tab, specify the verb, URL, and request body shown below, and click Execute to check the response.

Verb: PUT

url: http://jsonplaceholder.typicode.com/posts/1

Request Body:

data: { title: 'foo', body: 'bar', userId: 1, id: 1 }

Response: On successful update, PUT returns 200 (or 204 if not returning any content in the body).


4) DELETE:

DELETE is pretty easy to understand. It is used to delete a resource identified by a URI.

On successful deletion, return HTTP status 200 (OK) along with a response body, perhaps a representation of the deleted item (though this often demands too much bandwidth), or return HTTP status 204 (NO CONTENT) with no response body. In other words, a 204 status with no body, or a JSend-style wrapped response with HTTP status 200, are the recommended responses.

Checking with Fiddler or Postman: as before, open Fiddler’s Compose tab, specify the verb and URL shown below, and click Execute to check the response.


Verb: DELETE

URL: http://jsonplaceholder.typicode.com/posts/1

Response: On successful deletion, it returns HTTP status 200 (OK) along with a response body.

An example of the difference between PUT and PATCH


If I had to change my first name, I would send a PUT request with the full updated representation:

{ "first": "Nazmul", "last": "Hasan" }

So, to update just the first name here, we still need to send all the fields of the resource again.


A PATCH request, by contrast, sends only the data we need to modify, without affecting other parts of the resource. For example, if we need to update only the first name, we pass only the first name.
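The contrast can be sketched in a few lines; the put and patch helpers below are hypothetical functions showing replace-versus-merge semantics, not a real HTTP client:

```javascript
// PUT replaces the whole representation; PATCH merges only the supplied fields.
function put(resource, body) {
  return { ...body };
}

function patch(resource, body) {
  return { ...resource, ...body };
}

const user = { first: 'Nazmul', last: 'Hasan' };

console.log(put(user, { first: 'Muktarul' }));   // { first: 'Muktarul' } (the last name is gone)
console.log(patch(user, { first: 'Muktarul' })); // { first: 'Muktarul', last: 'Hasan' }
```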

What is Node.js?

Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

Building blocks & architecture of Node.js

A good start is half the battle, said someone wiser than me. And I can’t think of any quote that would better describe the situation every developer gets into when starting a new project. Laying out a project’s structure in a practical way is one of the hardest parts of the development process and, indeed, a delicate one.

Having already discussed Node.js technologies and how to choose a front-end framework, we can now dig deeper into how to structure our web apps once we have decided on the tech stack to use.

The importance of good architecture

Having a good starting point when it comes to our project architecture is vital for the life of the project itself and how you will be able to tackle changing needs in the future. A bad, messy project architecture often leads to:

  • Unreadable and messy code, making the development process longer and the product itself harder to test
  • Useless repetition, making code harder to maintain and manage
  • Difficulty implementing new features. Since the structure can become a total mess, adding a new feature without messing up existing code can become a real problem

With these points in mind, we can all agree that our project architecture is extremely important, and we can also declare a few points that can help us determine what this architecture must help us do:

  • Achieve clean and readable code
  • Achieve reusable pieces of code across our application
  • Help us to avoid repetitions
  • Make life easier when adding a new feature into our application

Establishing a flow

Now we can discuss what I usually refer to as the application structure flow. The application structure flow is a set of rules and common practices to adopt while developing our applications. These are the results of years of experience working with a technology and understanding what works properly and what doesn’t.

The goal of this article is to create a quick reference guide to establishing the perfect flow structure when developing Node.js applications. Let’s start to define our rules:

Rule #1: Correctly organize our files into folders

Everything has to have its place in our application, and a folder is the perfect place to group common elements. In particular, we want to define a very important separation, which brings us to rule #2:

Rule #2: Keep a clear separation between the business logic and the API routes

See, frameworks like Express.js are amazing. They provide us with incredible features for managing requests, views, and routes. With such support, it might be tempting to put our business logic into our API routes. But this will quickly turn them into giant, monolithic blocks that reveal themselves to be unmanageable, hard to read, and fragile.

Don’t forget, too, that the testability of our application will decrease, with consequently longer development times. At this point, you might be wondering, “How do we solve this problem, then? Where can I put my business logic clearly and intelligently?” The answer is revealed in rule #3.

Rule #3: Use a service layer

This is the place where all our business logic should live. It’s basically a collection of classes, each with its methods, that will be implementing our app’s core logic. The only part you should ignore in this layer is the one that accesses the database; that should be managed by the data access layer.
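A sketch of this separation, with plain functions standing in for Express handlers; PostService and getPostsRoute are hypothetical names:

```javascript
// Rule #3 sketch: the route stays thin, the service owns the business logic.
class PostService {
  constructor(posts = []) {
    this.posts = posts;
  }

  // Business logic lives here, not in the route.
  findByUser(userId) {
    return this.posts.filter((p) => p.userId === userId);
  }
}

// The "route" only parses input and shapes the response.
function getPostsRoute(service, req) {
  const result = service.findByUser(Number(req.params.userId));
  return { status: 200, body: result };
}

const postService = new PostService([{ id: 1, userId: 7, title: 'hello' }]);
console.log(getPostsRoute(postService, { params: { userId: '7' } }));
// { status: 200, body: [ { id: 1, userId: 7, title: 'hello' } ] }
```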

Now that we have defined these three initial rules, we can graphically represent the result like this:

Separating our business logic from our API routes.

The subsequent folder structure, sending us back to rule #1, separates routes, services, scripts, and configuration into dedicated folders.
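As an illustration, such a structure could look like this (the folder names are one common choice, not a requirement):

```
src
├── api          // API routes (rule #2)
├── config       // configuration files (rule #4)
├── services     // business logic (rule #3)
├── models       // data access layer
└── scripts      // long npm scripts (rule #5)
```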

Looking at this structure, we can also establish two other rules:

Rule #4: Use a config folder for configuration files

Rule #5: Have a scripts folder for long npm scripts

Rule #6: Use dependency injection

Node.js is literally packed with amazing features and tools to make our lives easier. However, as we know, working with dependencies can be quite troublesome most of the time due to problems that can arise with testability and code manageability.

There is a solution for that, and it’s called dependency injection.

Dependency injection is a software design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object.

By using this inside our Node applications, we:

  • Have an easier unit testing process, passing dependencies directly to the modules we would like to use instead of hardcoding them
  • Avoid useless modules coupling, making maintenance much easier
  • Provide a faster Git flow: once we define our interfaces, they stay stable, so we can avoid most merge conflicts.

Using Node.js without dependency injection.

This approach is simple, but not very flexible. What happens if we want to run against an example database in a test? We would have to alter our code to adapt it to this new need. Why not pass the database in directly as a dependency instead?
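A sketch of the idea: the repository below receives its database as a constructor argument instead of requiring a concrete one, so a test can inject an in-memory fake (all names are illustrative):

```javascript
// Dependency injection sketch: the repository receives its database
// rather than hardcoding (require-ing) a concrete one.
class UserRepository {
  constructor(db) {
    this.db = db; // injected dependency
  }

  getUser(id) {
    return this.db.find(id);
  }
}

// In a unit test we inject an in-memory stand-in for the real database.
const fakeDb = { find: (id) => ({ id, name: 'test user' }) };
const repo = new UserRepository(fakeDb);
console.log(repo.getUser(42)); // { id: 42, name: 'test user' }
```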

Rule #7: Use unit testing

Now that we have dependency injection under our belt, we can also implement unit testing for our project. Testing is an incredibly important stage in developing our applications. The whole flow of the project, not just the final result, depends on it, since buggy code would slow down the development process and cause other problems.

A common way to test our applications is unit testing, the goal of which is to isolate a section of code and verify its correctness. In procedural programming, a unit may be an individual function or procedure. This process is usually performed by the developers who write the code.

Benefits of this approach include:

Improved code quality

Unit testing improves the quality of your code, helping you to identify problems you might otherwise have missed before the code goes on to other stages of development. It will expose edge cases and make you write better overall code.

Bugs are found earlier

Issues here are found at a very early stage. Since the tests are going to be performed by the developer who wrote the code, bugs will be found earlier, and you will be able to avoid the extremely time-consuming process of debugging

Cost reduction

Fewer flaws in the application mean less time spent debugging it, and less time spent debugging it means less money spent on the project. Time here is an especially critical factor since this precious unit can now be allocated to develop new features for our product

Rule #8: Use another layer for third-party services calls

Often, in our application, we may want to call a third-party service to retrieve certain data or perform some operations. And still, very often, if we don’t separate this call into another specific layer, we might run into an out-of-control piece of code that has become too big to manage.

A common way to solve this problem is to use a pub/sub pattern. This is a messaging pattern where the entities sending messages are called publishers and the entities receiving them are called subscribers.

Publishers won’t program the messages to be sent directly to specific receivers. Instead, they will categorize published messages into specific classes without knowledge of which subscribers, if any, may be dealing with them.

Similarly, the subscribers will express interest in dealing with one or more classes and only receive messages that are of interest to them — all without knowledge of which publishers are out there.

The publish-subscribe model enables event-driven architectures and asynchronous parallel processing while improving performance, reliability, and scalability.

Rule #9: Use a linter

This simple tool will help you to perform a faster and overall better development process, helping you to keep an eye on small errors while keeping the entire application code uniform.

Example of using a linter.
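For example, a minimal ESLint configuration (.eslintrc.json) enabling the recommended rule set might look like this:

```json
{
  "extends": "eslint:recommended",
  "env": { "node": true, "es2021": true },
  "rules": {
    "no-unused-vars": "warn"
  }
}
```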

Rule #10: Use a style guide

Still thinking about how to format your code consistently? Why not adopt one of the excellent style guides that Google or Airbnb have provided? Reading code will become incredibly easier, and you won’t get frustrated trying to understand how to correctly position that curly brace.

Google’s JavaScript style guide.

Rule #11: Always comment your code

Writing a tricky piece of code where it’s difficult to understand what you are doing and, most of all, why? Never forget to comment it. This will be extremely useful for your fellow developers and for your future self, all of whom will wonder why exactly you did something six months after you first wrote it.

Rule #12: Keep an eye on your file sizes

Files that are too long are extremely hard to manage and maintain. Always keep an eye on file length, and if a file becomes too long, try to split it into smaller modules, grouping related files together in a folder.

Rule #13: Always use gzip compression

The server can use gzip compression to reduce payload sizes before sending them to the web browser, which reduces latency and download time.

An example of using gzip compression with Express.

Rule #14: Use promises

Callbacks are the simplest possible mechanism for handling asynchronous code in JavaScript. However, raw callbacks often sacrifice the control flow, error handling, and semantics that were familiar to us in synchronous code. A solution is to use promises in Node.js.

Promises bring in more pros than cons by making our code easier to read and test while still providing functional programming semantics together with a better error-handling platform.

A basic example of a promise.
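A basic sketch: a hypothetical delay helper wraps setTimeout in a promise, and the chained then calls replace nested callbacks:

```javascript
// A hypothetical delay helper: resolves with a value after ms milliseconds.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

delay(10, 'first')
  .then((v) => {
    console.log(v); // first
    return delay(10, 'second');
  })
  .then((v) => console.log(v)); // second
```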

Rule #15: Use promises’ error handling support

Finding yourself in a situation where you have an unexpected error or behavior in your app is not at all pleasant, I can guarantee. Errors are impossible to avoid when writing our code. That’s simply part of being human.

Dealing with them is our responsibility, so we should not only use promises in our applications but also make use of the error handling support they provide through the catch method.
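A sketch of that error handling: the catch at the end of the chain receives any rejection from the steps above it (findPost is a hypothetical function):

```javascript
// The catch at the end of a chain handles rejections from any step above it.
function findPost(id) {
  return id === 1
    ? Promise.resolve({ id: 1, title: 'foo' })
    : Promise.reject(new Error('post ' + id + ' not found'));
}

findPost(99)
  .then((post) => console.log(post.title))
  .catch((err) => console.log('handled:', err.message)); // handled: post 99 not found
```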



Muktarul Khan Akash

Frontend web developer || Web developer || JavaScript developer