Computing power
A tour of CPU concepts like multithreading, multicore, and hyperthreading. Plus how HTTP client-side caching works.
Hi Friends,
Welcome to the 24th issue of the Polymathic Engineer newsletter.
Today we are hitting a significant milestone: we are now 2K+ people interested in algorithms and distributed systems.
Thanks to all the new subscribers and all those who decided to stay. I appreciate it, and I'll try to make this newsletter consistently better.
Today we will talk about:
Multithreading, multicore, and hyperthreading
HTTP client-side caching
Coding challenge
Interesting tweets
Multithreading, multicore, and hyperthreading
In the current age, it's crucial to have a solid grasp of concepts like threads, multithreading, multicore, and hyperthreading.
Understanding their differences helps you appreciate how computing power has evolved and how these innovations have led to improved efficiency and performance.
A thread is a sequence of instructions executed by a processor. Two decades ago, a chip had a single processor with one hardware thread, so it could execute only one set of instructions at a time. Nowadays, CPUs can handle multiple threads.
Multithreading is the ability of a CPU to manage multiple threads concurrently, improving performance and efficiency. The processor can switch between threads, making better use of its resources and reducing idle time.
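To make the idea concrete, here is a minimal Python sketch (the fetch function and its sleep are just placeholders for an I/O-bound operation such as a network call):

from concurrent.futures import ThreadPoolExecutor
import time

def fetch(task_id):
    # Placeholder for an I/O-bound operation, e.g. a network call.
    time.sleep(1)
    return f"task {task_id} done"

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
print(results)
# The four waits overlap, so this takes about 1 second instead of 4.
print(f"elapsed: {time.time() - start:.1f}s")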
Multicore CPUs have multiple processors (or cores) on a single chip. Each core can execute threads independently and work on separate tasks simultaneously. This capability enhances performance, leading to more efficient processing.
Hyperthreading is also known as Simultaneous Multithreading (SMT). It allows a single core to run multiple threads by duplicating certain core parts. While sharing resources, this technology helps improve the core's throughput.
The main difference between multicore and hyperthreading is their approaches to improving performance. Multicore CPUs have multiple distinct processors, while hyperthreading duplicates parts within a single core to handle multiple threads.
Comparatively, multicore CPUs offer better performance than hyperthreading, as each core operates independently. However, hyperthreading can still provide significant performance gains, especially when a single thread doesn't fully utilize a core's resources.
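You can see the difference between physical cores and hardware threads on your own machine with a quick sketch like this (it assumes the third-party psutil package is installed; the standard library alone only reports logical CPUs):

import os
import psutil  # third-party: pip install psutil

logical = os.cpu_count()                    # hardware threads visible to the OS
physical = psutil.cpu_count(logical=False)  # physical cores
print(f"{physical} physical cores, {logical} logical CPUs")
# With hyperthreading/SMT enabled, logical is typically twice physical.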
HTTP client-side caching
HTTP servers host 2 types of resources:
• static, containing data that don't change between two different requests
• dynamic, containing data generated by the server on the fly for each request
Since static resources don't change, they can be cached. Caching static resources has many advantages, such as less load on the servers and lower response times for users.
Usually, the first thing that comes to mind when discussing caching is server-side caching. But the HTTP protocol also makes it easy to implement caching on the client side. All that's required is a few specific HTTP headers.
Suppose the client sends a GET request for a resource that has never been accessed before. The local client cache intercepts the request. Since the resource is not there, the cache requests it from the origin server on behalf of the client.
The server informs the client that a resource can be cached using dedicated HTTP response headers. Cache-Control specifies how long the resource can be cached, while ETag identifies a specific version of the resource.
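For example, a cacheable response could carry headers like these (the values are only illustrative):

HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "abc123"
Content-Type: image/png

Here max-age says the resource can be considered fresh for 3600 seconds, and the ETag is the opaque token identifying this particular version.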
When the cache receives the server's response, it stores the resource and returns it to the client. Now suppose that the client makes the same GET request after some time. If the resource is still valid, the local cache immediately returns it to the client.
The server can update a resource before it expires, so the copies in the cache and on the server are not strongly consistent. But this is usually an acceptable trade-off, and a server can always force clients to fetch an updated resource by changing its version.
If a requested resource has expired, the local cache sends a new request to the origin server. The request carries a conditional HTTP header, such as If-None-Match, specifying the version of the resource stored in the cache.
At this point, there are 2 possibilities. If the server has a new version, it returns it. Otherwise, it replies with a 304 Not Modified status code. In this case, the cache renews the resource freshness and returns the stored resource.
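The revalidation exchange looks roughly like this (again with illustrative values):

GET /logo.png HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
Cache-Control: max-age=3600
ETag: "abc123"

The 304 response carries no body, so the client avoids transferring the full resource and simply refreshes its cached copy.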
Coding challenge
The coding challenge of today is Solving Questions With Brainpower.
It is a fun dynamic programming problem. The input is an array of pairs, each one representing a question.
The first value of a pair is the points earned by solving that question. The second value is the number of subsequent questions you must skip after solving it.
The problem asks you to find the maximum points you can earn. Such optimization problems can usually be solved with dynamic programming.
Finding the maximum points starting from a given question is a subproblem. The key is to find a recurrence relationship between the subproblems.
For each question, you have 2 possibilities: solve or skip the question.
In the first case, you add the points of the question to the total and skip the required number of questions. In the second case, you move on and try to solve the next question.
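Written as a recurrence (dp[i] is the maximum score obtainable starting from question i, and points[i] and skip[i] are my names for the two values of the i-th pair):

dp[i] = max(points[i] + dp[i + skip[i] + 1], dp[i + 1])

where dp[i] = 0 for every index past the last question.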
The recursive formulation of the problem is straightforward but inefficient.
Multiple ways exist to reach each question in the array. So, the corresponding subproblem is solved multiple times.
Caching the result of each computation is the solution. In the implementation, I used an array dp to cache the results.
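A bottom-up version of that idea looks roughly like this in Python (a sketch; the function and variable names are my own):

from typing import List

def most_points(questions: List[List[int]]) -> int:
    n = len(questions)
    # dp[i] = maximum points obtainable starting from question i
    dp = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        points, skip = questions[i]
        take = points + dp[min(n, i + skip + 1)]  # solve question i, skip the next `skip` questions
        leave = dp[i + 1]                         # skip question i entirely
        dp[i] = max(take, leave)
    return dp[0]

print(most_points([[3, 2], [4, 3], [4, 4], [2, 5]]))  # 5

Each dp entry is computed exactly once, which is where the bounds below come from.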
The asymptotic analysis is:
- O(N) for the time complexity
- O(N) for the space complexity
Interesting tweets
I also think that cover letters are not so useful, but I had to learn how to write them. When a company allows you to submit a cover letter, I always prepare one. I use a fixed template, but I always customize the part where I relate my skills to the position.
The reality is what Louie described, but there are still some exceptions. One thing that positively surprised me about the company I work for is the number of people who have been working there for 20+ years. But of course, things can always change, and it's better to be prepared.
Moving from passive listener to active contributor in meetings is one of the things I struggled with the most. But it's a skill like any other, and the more you practice, the better you become.