Thursday, October 1, 2020

Load Balancer

-> A load balancer is a commonly used component in distributed system design.

-> The principle of load balancing is to distribute incoming requests evenly across servers, balancing the load without sending duplicate requests to the same server.

Ex: A user sends a request over the internet, which reaches the load balancer, and the load balancer routes the traffic to one of the web servers. If any one server is down, another server serves the request.

Load Balancing Algorithms

-> Round Robin (requests are routed to each server in sequence)

-> Least Connection (routes each request to the server with the fewest active connections; useful when we need sticky sessions) [if session data is stored on a particular web server, all requests are routed to that same server until the session ends.]

-> Least Bandwidth (routes traffic to the server currently serving the least amount of traffic, measured in bandwidth)

-> IP Hash (the server is chosen based on a hash of the user's IP address; see the sketch below)
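
A minimal sketch of how two of these algorithms pick a server, assuming a hypothetical pool of three backends (illustrative Python, not tied to any real load balancer):

import itertools
import hashlib

servers = ["web1", "web2", "web3"]   # hypothetical backend pool

# Round Robin: cycle through the servers in sequence.
rr = itertools.cycle(servers)
def round_robin():
    return next(rr)

# IP Hash: the same client IP always maps to the same server,
# which keeps a client pinned to one backend.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print([round_robin() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
print(ip_hash("203.0.113.7"))             # always the same server for this IP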

What if Load Balancer fails?

Run two load balancers in clustered mode. If one fails, the second one picks up the traffic and sends it to the web servers. This provides redundancy for the load balancer itself.
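
A minimal sketch of that active-passive idea, where is_healthy is only a placeholder for a real heartbeat or health probe:

def is_healthy(node):
    return node.get("up", False)   # stand-in for a real health check

def pick_balancer(primary, secondary):
    # Send traffic to the primary while it is healthy; otherwise fail over.
    return primary if is_healthy(primary) else secondary

primary = {"name": "lb1", "up": False}    # simulate lb1 going down
secondary = {"name": "lb2", "up": True}
print(pick_balancer(primary, secondary)["name"])  # lb2 picks up the traffic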

How can we handle disaster recovery?

We can deploy a global load balancer in front of local load balancers in different regions.

Advantages of Load Balancer

-> Faster user experience and uninterrupted service

-> Less downtime and higher throughput

-> Zero downtime while updating web servers

-> Flexibility to scale up and scale down

Message Queue
It takes requests, persists them, assigns them to the correct server, and waits for them to complete.
If a server takes too long to send an acknowledgment, the queue assumes that server is dead and assigns the message to the next server.
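
A minimal sketch of this reassignment behavior; the names, the 2-second timeout, and the simplification of checking the timeout only after the call returns are all assumptions for illustration:

import time

def dispatch(message, servers, ack_timeout=2.0):
    for server in servers:                 # try servers in order
        start = time.monotonic()
        ack = server(message)              # server returns True to acknowledge
        if ack and time.monotonic() - start <= ack_timeout:
            return server.__name__         # acknowledged in time
        # too slow: treat this server as dead and try the next one
    raise RuntimeError("no server acknowledged the message")

def dead_server(msg):
    time.sleep(3)                          # exceeds the timeout
    return True

def healthy_server(msg):
    return True                            # acknowledges immediately

print(dispatch("order-42", [dead_server, healthy_server]))  # healthy_server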

Message queues enable asynchronous processing in a distributed system.

Synchronous Process -> The request is sent to the application, which waits until the response comes back. (The task must be completed or acknowledged then and there.)
Ex: You go to a shop, order your item, and wait there until you get it. You return only after you have received your item.

Asynchronous Process -> Delayed response (callback; different port type & operation)
-> The request is sent to the application; the application validates the input, stores it in the database, and sends an acknowledgment immediately. In the background, a process picks up open requests from the database and processes them via a service; on successful processing, it notifies the user through a notification service.
Ex: You go to the shop and hand over the list of items you want to order, but you don't wait there until the order is ready; you leave and proceed with other tasks. Once the order is ready, the shop owner notifies you, and then you go back to the shop and pick up your order.
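
A minimal sketch contrasting the two models, with Python's in-process queue.Queue standing in for a real queue and process() as a hypothetical slow task:

import queue
import threading
import time

def process(order):
    time.sleep(1)                      # the slow work itself
    return f"{order} ready"

# Synchronous: wait at the shop until the order is ready.
def order_sync(order):
    return process(order)              # blocks for the full second

# Asynchronous: drop off the order, get an acknowledgment, move on.
pending = queue.Queue()

def worker():
    while True:
        order = pending.get()
        result = process(order)
        print("notification:", result)  # stand-in for a notification service
        pending.task_done()

threading.Thread(target=worker, daemon=True).start()

def order_async(order):
    pending.put(order)
    return f"{order} accepted"          # immediate acknowledgment

print(order_sync("coffee"))             # returns after ~1s: coffee ready
print(order_async("cake"))              # returns immediately: cake accepted
pending.join()                          # wait so the demo prints the notification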

Scenario: Suppose 1K requests arrive at the same time and each request takes 10 seconds to process. That puts too much workload on your application server and degrades performance; with too many concurrent requests there is a high chance the system goes down. So how can we improve this situation?

-> Asynchronous Process using message queues

Asynchronous process in applications: 

We have a bunch of servers accepting requests from users, storing them in the database, and responding immediately.
The asynchronous job runs as an independent service.
It picks up all open requests from the database and processes them one by one, sequentially.
This is one way of processing pending requests asynchronously. (It works well for small-scale applications.)
The drawbacks of this approach (see the sketch below):
-> If more and more requests keep piling up, a single async processing node takes too long to work through all pending requests; the workload on that node is too high, and there is a chance the node breaks.
-> If we increase the number of async nodes, there is a high chance that more than one async job reads the same pending request, causing duplicate processing of the same job.
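
A minimal sketch of this database-polling approach, with an in-memory list standing in for the requests table; the comment marks where two workers could collide:

open_requests = [{"id": 1, "status": "OPEN"},
                 {"id": 2, "status": "OPEN"}]

def poll_and_process():
    for req in open_requests:
        if req["status"] == "OPEN":       # read step
            # a second worker could also see status == "OPEN" right here,
            # because the read and the claim below are not atomic
            req["status"] = "PROCESSING"  # claim step
            print("processing request", req["id"])
            req["status"] = "DONE"

poll_and_process()   # fine with one worker; duplicates are possible with many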

In this kind of scenario, we can use a message queue.
Queue: a queue is point-to-point, first in first out (FIFO).
When the application server accepts a request from a user, it first writes it into the database and then also adds a message to the queue. A message may contain the acknowledgment ID or the primary key of the request. One async job reads the pending message, deletes it from the queue, and processes it further.
So there is no duplication, and the system can handle a large number of requests without putting pressure on the application server.
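
A minimal sketch of that point-to-point behavior, using Python's in-process queue.Queue as a stand-in for a real message broker; each message is taken by exactly one consumer, so two async jobs never process the same request:

import queue
import threading

mq = queue.Queue()                      # FIFO, thread-safe, point-to-point

def consumer(name):
    while True:
        request_id = mq.get()           # each message goes to one consumer only
        print(f"{name} processing request {request_id}")
        mq.task_done()                  # "delete" the message after processing

for n in ("async-job-1", "async-job-2"):
    threading.Thread(target=consumer, args=(n,), daemon=True).start()

for request_id in range(5):             # application servers enqueue primary keys
    mq.put(request_id)

mq.join()                               # every request processed exactly once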

Message Queue: a form of asynchronous service-to-service communication used in the microservice architecture of a distributed system. Each message is stored in the queue until it is processed and deleted.
Each message is processed only once, by a single consumer, and is deleted after being read.
A message queue enables one-way communication between two applications.
It's like many-to-one (many producers, one consumer).

Note: Message queues also support clustered mode; in a cluster you have multiple queues, which avoids the risk of a single point of failure, and we can scale at the queue level.
If the target application is down, the message stays in the queue until the application comes back up.
