The Client sends a create paste request to the Web Server, running as a reverse proxy
The Web Server forwards the request to the Write API server
The Write API server does the following:
Generates a unique url
Checks if the url is unique by looking at the SQL Database for a duplicate
If the url is not unique, it generates another url
If we supported a custom url, we could use the user-supplied url (also checking for a duplicate)
Saves to the SQL Database pastes table
Saves the paste data to the Object Store
Returns the url
Clarify with your interviewer how much code you are expected to write.
The pastes table could have the following structure:
shortlink char(7) NOT NULL
expiration_length_in_minutes int NOT NULL
created_at datetime NOT NULL
paste_path varchar(255) NOT NULL
PRIMARY KEY(shortlink)
We'll create an index on shortlink and created_at to speed up lookups (log-time instead of scanning the entire table) and to keep the data in memory. Reading 1 MB sequentially from memory takes about 250 microseconds, while reading from SSD takes 4x and from disk takes 80x longer.1
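Putting the write steps and the pastes table together, a minimal sketch of the write path could look like the following. This is illustrative only: sqlite3 stands in for the SQL Database, a plain dict stands in for the Object Store, and the save_paste helper is hypothetical; generating the shortlink itself is covered next.

import sqlite3
import time

# Illustrative stand-ins: sqlite3 for the SQL Database, a dict for the Object Store
db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE pastes (
    shortlink char(7) NOT NULL,
    expiration_length_in_minutes int NOT NULL,
    created_at datetime NOT NULL,
    paste_path varchar(255) NOT NULL,
    PRIMARY KEY(shortlink))""")
# Index on shortlink and created_at to speed up lookups
db.execute('CREATE INDEX pastes_by_created_at ON pastes(shortlink, created_at)')
object_store = {}

def save_paste(shortlink, paste_contents, expiration_length_in_minutes):
    # Check the SQL Database for a duplicate; the caller generates a new
    # shortlink and retries when this returns False
    duplicate = db.execute('SELECT 1 FROM pastes WHERE shortlink = ?',
                           (shortlink,)).fetchone()
    if duplicate:
        return False
    # Save the paste data to the Object Store
    paste_path = 'pastes/' + shortlink
    object_store[paste_path] = paste_contents
    # Save the metadata to the SQL Database pastes table
    db.execute('INSERT INTO pastes VALUES (?, ?, ?, ?)',
               (shortlink, expiration_length_in_minutes,
                time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime()), paste_path))
    db.commit()
    return True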
To generate the unique url, we could:
Take the MD5 hash of the user's ip_address + timestamp
MD5 is a widely used hashing function that produces a 128-bit hash value
MD5 is uniformly distributed
Alternatively, we could also take the MD5 hash of randomly-generated data
Base 62 encode the MD5 hash
Base 62 encodes to [a-zA-Z0-9], which works well for urls and eliminates the need to escape special characters
There is only one hash result for the original input, and Base 62 is deterministic (no randomness involved)
Base 64 is another popular encoding, but it poses issues for urls because of its additional + and / characters
The following Base 62 encoding runs in O(k) time, where k is the number of digits (7 in our case):

def base_encode(num, base=62):
    # Map each remainder onto the [a-zA-Z0-9] alphabet
    alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    digits = []
    while num > 0:
        num, remainder = divmod(num, base)
        digits.append(alphabet[remainder])
    return ''.join(reversed(digits))
Take the first 7 characters of the output, which results in 62^7 possible values and should be sufficient to handle our constraint of 360 million shortlinks in 3 years:
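A minimal sketch tying these steps together, reusing the base_encode function above (the generate_shortlink name and URL_LENGTH constant are illustrative):

import hashlib
import time

URL_LENGTH = 7

def generate_shortlink(ip_address):
    # MD5 of the user's ip_address + timestamp: a uniformly distributed 128-bit value
    digest = hashlib.md5(f'{ip_address}{time.time()}'.encode()).hexdigest()
    # Base 62 encode the hash and keep the first 7 characters
    return base_encode(int(digest, 16))[:URL_LENGTH]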
Use case: Service tracks analytics of pages
Since realtime analytics are not a requirement, we could simply MapReduce the Web Server logs to generate hit counts.
Clarify with your interviewer how much code you are expected to write.
from mrjob.job import MRJob


class HitCounts(MRJob):

    def extract_url(self, line):
        """Extract the generated url from the log line."""
        ...

    def extract_year_month(self, line):
        """Return the year and month portions of the timestamp."""
        ...

    def mapper(self, _, line):
        """Parse each log line, extract and transform relevant lines.

        Emit key value pairs of the form:

            (2016-01, url0), 1
            (2016-01, url0), 1
            (2016-01, url1), 1
        """
        url = self.extract_url(line)
        period = self.extract_year_month(line)
        yield (period, url), 1

    def reducer(self, key, values):
        """Sum values for each key.

            (2016-01, url0), 2
            (2016-01, url1), 1
        """
        yield key, sum(values)
Use case: Service deletes expired pastes
To delete expired pastes, we could simply scan the SQL Database for all entries whose expiration timestamp is older than the current timestamp. All expired entries would then be deleted (or marked as expired) from the table.
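A sketch of that cleanup, reusing the sqlite3 stand-in from the write-path sketch above (the schedule and the exact SQL are assumptions):

# Run periodically, e.g. from a cron job or a background worker
db.execute(
    "DELETE FROM pastes "
    "WHERE datetime(created_at, '+' || expiration_length_in_minutes || ' minutes') "
    "      < datetime('now')")
db.commit()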
Step 4: Scale the design
Identify and address bottlenecks, given the constraints.
Important: Do not simply jump right into the final design from the initial design!
State that you would do this iteratively: 1) Benchmark/Load Test, 2) Profile for bottlenecks, 3) Address bottlenecks while evaluating alternatives and trade-offs, and 4) Repeat. See Design a system that scales to millions of users on AWS as a sample on how to iteratively scale the initial design.
It's important to discuss what bottlenecks you might encounter with the initial design and how you might address each of them. For example, what issues are addressed by adding a Load Balancer with multiple Web Servers? CDN? Master-Slave Replicas? What are the alternatives and Trade-Offs for each?
We'll introduce some components to complete the design and to address scalability issues. Internal load balancers are not shown to reduce clutter.
To avoid repeating discussions, refer to the following system design topics for main talking points, tradeoffs, and alternatives:
The Analytics Database could use a data warehousing solution such as Amazon Redshift or Google BigQuery.
An Object Store such as Amazon S3 can comfortably handle the constraint of 12.7 GB of new content per month.
To address the 40 average read requests per second (higher at peak), traffic for popular content should be handled by the Memory Cache instead of the database. The Memory Cache is also useful for handling the unevenly distributed traffic and traffic spikes. The SQL Read Replicas should be able to handle the cache misses, as long as the replicas are not bogged down with replicating writes.
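As a rough illustration of that cache-aside read path (the Redis client, key format, TTL, and the read_replica_lookup stub are assumptions, not part of the design above):

import json

import redis  # assuming a Redis-backed Memory Cache

cache = redis.StrictRedis(host='memory-cache', port=6379)

def read_replica_lookup(shortlink):
    # Placeholder for a query against a SQL Read Replica
    return None

def get_paste_metadata(shortlink):
    # Serve popular content from the Memory Cache
    cached = cache.get('paste:' + shortlink)
    if cached is not None:
        return json.loads(cached)
    # On a cache miss, fall back to a SQL Read Replica, then warm the cache
    row = read_replica_lookup(shortlink)
    if row is not None:
        cache.setex('paste:' + shortlink, 60 * 60, json.dumps(row))
    return row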
4 average paste writes per second (with higher at peak) should be doable for a single SQL Write Master-Slave. Otherwise, we'll need to employ additional SQL scaling patterns such as federation, sharding, denormalization, and SQL tuning. We should also consider moving some data to a NoSQL Database.
Design and develop Paste Bin Clone
Scope
Out of scope
Constraints and assumptions
State assumptions
Calculate usage
Clarify with your interviewer if you should run back-of-the-envelope usage calculations.
shortlink - 7 bytes
expiration_length_in_minutes - 4 bytes
created_at - 5 bytes
paste_path - 255 bytes
Handy conversion guide:
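A quick back-of-the-envelope check tying these per-paste sizes to the traffic figures used in the scaling discussion (the roughly 1 KB of content per paste is an assumption here):

# 360 million shortlinks over 3 years ~= 10 million new pastes per month
pastes_per_month = 360_000_000 / (3 * 12)

# ~271 bytes of metadata (7 + 4 + 5 + 255) plus ~1 KB of content ~= 1.27 KB per paste
bytes_per_paste = (7 + 4 + 5 + 255) + 1_000

# ~12.7 GB of new paste content per month
print(pastes_per_month * bytes_per_paste / 1e9)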
High level design
Step 3: Design core components
Use case: User enters a block of text and gets a randomly generated link
We could use a relational database as a large hash table, mapping the generated url to a file server and path containing the paste file.
Instead of managing a file server, we could use a managed Object Store such as Amazon S3 or a NoSQL document store.
As an alternative to a relational database acting as a large hash table, we could use a NoSQL key-value store. We should discuss the tradeoffs between SQL and NoSQL. The following discussion uses the relational database approach.
We'll use a public REST API:
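For example, creating a paste could look like the following (the endpoint path and field names are illustrative assumptions):

import requests

paste = {
    'expiration_length_in_minutes': 60,
    'paste_contents': 'Hello World!',
}
response = requests.post('https://pastebin.com/api/v1/paste', json=paste)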
Response:
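The body could return the generated shortlink, for example:

print(response.json())   # e.g. {'shortlink': 'foobar'}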
For internal communications, we could use Remote Procedure Calls.
Use case: User enters a paste's url and views the contents
REST API:
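Fetching a paste could look like this (again, the path and parameter name are assumptions):

import requests

response = requests.get('https://pastebin.com/api/v1/paste',
                        params={'shortlink': 'foobar'})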
Response:
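The body could carry the paste contents and its metadata, for example:

print(response.json())
# e.g. {'paste_contents': 'Hello World!',
#       'created_at': '2016-01-01 00:00:00',
#       'expiration_length_in_minutes': 60}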
Additional talking points
NoSQL
Caching
Asynchronism and microservices
Communications
Security
Refer to the security section.
Latency numbers
See Latency numbers every programmer should know.
Ongoing