shabnam

This is a common approach, especially at web-based companies. For example, LinkedIn has separate "client service APIs" that it uses for many different kinds of requests, including mobile, web, and messaging.
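As a rough illustration of that split (not LinkedIn's actual architecture; the service names and ports here are made up), a front-end process can expose separate client-facing APIs and forward each class of traffic to its own backend service:

```go
// Minimal sketch: route each client-facing API to its own backend service.
// Backend addresses and paths are hypothetical.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo returns a handler that forwards requests to the given backend.
func proxyTo(backend string) http.Handler {
	target, err := url.Parse(backend)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	mux := http.NewServeMux()
	// Each class of client traffic gets its own service behind the front end.
	mux.Handle("/api/mobile/", proxyTo("http://mobile-svc:8081"))
	mux.Handle("/api/web/", proxyTo("http://web-svc:8082"))
	mux.Handle("/api/messaging/", proxyTo("http://messaging-svc:8083"))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```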

vrkrishn

In addition to separating different tasks onto different servers, such as the recommender and advertising services, there are many efforts to parallelize the generation of the static content of the page as well.

I can't claim to be an expert on the Mozilla Servo project, but research work on the browser demonstrates several parallel optimizations we may see in the future. Some of the goals:

Parallelizing rendering of the page - The page can either be split into tiles that are rendered in parallel, or into independent subtrees that can be rendered in parallel (see the sketch after this list). In addition, implementing a DOM tree that is shared between the content and layout tasks allows both to access the tree at the same time.

Parallelizing parsing - of JS, CSS, and HTML

Decoding resources in parallel - independent resources (e.g., separate images) can be decoded concurrently
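To make the "independent subtrees" idea concrete, here is a minimal sketch (not Servo's actual implementation) where the children of a node in a shared, read-only layout tree are processed concurrently:

```go
// Sketch: render independent subtrees of a shared tree in parallel.
// The Node type and the "work" done per node are placeholders.
package main

import (
	"fmt"
	"sync"
)

// Node is a stand-in for a DOM/layout-tree node.
type Node struct {
	Tag      string
	Children []*Node
}

// render processes one subtree; the recursion fans out a goroutine per child.
func render(n *Node, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("rendering", n.Tag) // placeholder for real layout/paint work
	for _, c := range n.Children {
		wg.Add(1)
		go render(c, wg) // independent subtrees proceed in parallel
	}
}

func main() {
	root := &Node{Tag: "body", Children: []*Node{
		{Tag: "header"}, {Tag: "main"}, {Tag: "footer"},
	}}
	var wg sync.WaitGroup
	wg.Add(1)
	render(root, &wg)
	wg.Wait()
}
```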

There is also a research paper at http://www.eecs.berkeley.edu/~lmeyerov/projects/pbrowser/pubfiles/paper.pdf that goes into some of the challenges of parallelizing the computation of CSS attributes.

For small-scale pages, this parallelism might be limited to just one of the web servers, but it is easy to see that for large pages with many objects, the front-end web server can identify independent tasks and send them out to associated worker nodes.
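A sketch of that fan-out, with made-up worker endpoints: the front-end server requests independent page fragments from worker nodes concurrently and assembles the page once all of them have returned.

```go
// Sketch: fetch independent page fragments from worker nodes in parallel.
// Worker URLs are hypothetical.
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// fetchFragment asks one worker node for its piece of the page.
func fetchFragment(url string) string {
	resp, err := http.Get(url)
	if err != nil {
		return "<!-- fragment unavailable -->"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	// Hypothetical worker nodes, each responsible for one part of the page.
	workers := []string{
		"http://recommender-node/fragment",
		"http://ads-node/fragment",
		"http://feed-node/fragment",
	}

	fragments := make([]string, len(workers))
	var wg sync.WaitGroup
	for i, w := range workers {
		wg.Add(1)
		go func(i int, w string) { // all requests are in flight at once
			defer wg.Done()
			fragments[i] = fetchFragment(w)
		}(i, w)
	}
	wg.Wait()

	// Assemble the final page from the independently generated fragments.
	for _, f := range fragments {
		fmt.Println(f)
	}
}
```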