HOSTING AT GOOGLE AND FACEBOOK

Why are Google and Facebook always available?

Google and Facebook run two of the biggest websites in the world. Every day they each serve hundreds of millions of visitors and generate billions of page views, which entails a gigantic amount of data traffic and computation. Yet Google manages to answer a search query within a few milliseconds, and Facebook immediately lets everyone know that new photos of the last staff party are online.

The heart of the matter: divide & conquer

“Divide” and “conquer” are the two keywords that describe how the Google and Facebook websites are hosted. The concept is very simple: spread the workload as widely as possible, so that the servers work as efficiently as possible and the visitor is served as quickly as possible.

The “divide” part: distributing the servers

Serving billions of page views per day takes many thousands of servers. A small website can get by with a single server, and when visitor numbers rise it is customary to simply add another one. With the websites of Google and Facebook, however, we are talking about billions of page views per day, and naively adding servers in one place would end in catastrophe.

Each page view involves a certain amount of data traffic. If an extra server is placed in the same data center, its traffic has to share the data center's internet connection with the existing traffic. After a while that connection can become overloaded, which hurts the accessibility of the site.

Naively adding servers also creates a single point of failure. If Facebook were to put all of its servers in one big data center and that data center had a problem that made it unreachable, Facebook would temporarily disappear from the internet completely. To guarantee that the website is always accessible, the servers are therefore never all placed in one data center. Instead they are spread over different data centers, preferably as widely around the world as possible. These groups of servers are called clusters.

The “conquer” part: ruling over a cluster of servers

Once all servers are distributed in clusters around the world, one problem still remains. Dividing the servers into clusters ensures that when someone in Europe surfs to Google, the request is handled by a data center in Europe; the same goes for visitors from the United States, Asia and so on. In other words: a request for a page is answered by the cluster closest to the requesting computer.
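To make that concrete, here is a minimal sketch in Python of how such routing could work. The cluster names, coordinates and the distance calculation are all assumptions for illustration; real systems typically achieve this with DNS-based geo-routing or anycast rather than an explicit distance computation.

```python
import math

# Hypothetical cluster locations: name -> (latitude, longitude).
CLUSTERS = {
    "europe":  (50.1, 8.7),    # e.g. Frankfurt
    "us-east": (39.0, -77.5),  # e.g. Virginia
    "asia":    (1.3, 103.8),   # e.g. Singapore
}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_cluster(visitor):
    """Pick the cluster geographically closest to the visitor."""
    return min(CLUSTERS, key=lambda name: distance_km(CLUSTERS[name], visitor))

print(nearest_cluster((52.4, 4.9)))  # a visitor in Amsterdam -> "europe"
```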

However, that cluster itself must work as efficiently as possible. The cluster tries to spread all incoming page requests as evenly as possible over the servers it contains. This distribution is handled by large servers at the head of the cluster; in extreme cases these distribution servers are themselves clusters of servers. In other words: a cluster is assigned a certain area of the world, and it must rule over that area as well as it can.
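As a rough sketch of what such a distribution server does, the snippet below spreads incoming requests over the servers of one cluster in round-robin fashion. The server names are made up, and production load balancers use far more sophisticated strategies (least connections, health checks, weighting), but the core idea is the same.

```python
import itertools

class RoundRobinBalancer:
    """Spread incoming requests over a cluster's servers in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick_server(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["web-01", "web-02", "web-03"])
for request_id in range(5):
    print(f"request {request_id} -> {balancer.pick_server()}")
```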

But what if a cluster becomes unreachable?

Dividing all servers into clusters around the world yields one more major advantage: guaranteed accessibility. Suppose the cluster of servers for Europe becomes unreachable due to a power failure; you might expect that no one in Europe could surf to Facebook or Google for a while. Nothing could be further from the truth. When a cluster goes offline, its work is taken over by the other clusters around the world: if the cluster in Europe is unreachable, its pages are served by the clusters in America, Asia and so on. Each cluster keeps a reserve of computing power and data traffic capacity precisely so it can absorb temporary problems in other clusters.
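The failover itself can be pictured as a simple preference list per region: try the home cluster first, and fall back to the next one when it is down. The sketch below is purely illustrative; the regions, the ordering and the status flags are assumptions, and real traffic steering is far more dynamic.

```python
# Hypothetical failover order per region: try the home cluster first,
# then fall back to the others.
FAILOVER_ORDER = {
    "europe":  ["europe", "us-east", "asia"],
    "us-east": ["us-east", "europe", "asia"],
    "asia":    ["asia", "us-east", "europe"],
}

def route_request(visitor_region, cluster_is_up):
    """Return the first reachable cluster in the visitor's preference list."""
    for cluster in FAILOVER_ORDER[visitor_region]:
        if cluster_is_up.get(cluster, False):
            return cluster
    raise RuntimeError("no cluster reachable")

# The European cluster is down after a power failure: traffic spills over.
status = {"europe": False, "us-east": True, "asia": True}
print(route_request("europe", status))  # -> "us-east"
```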

