Margo Seltzer – Automatically Scalable Computation – Code Mesh 2017

Hi everyone, I’m Margo Seltzer, and today I want to talk to you about automatically scalable computation. This topic is near and dear to my heart, and I had the pleasure of discussing it at Code Mesh 2017. In that presentation, I explored the challenges and opportunities of automatically scalable computation, and how we can use it to build more efficient and resilient systems.

When we talk about automatically scalable computation, we mean a system’s ability to adjust its resources automatically in response to changing workloads. This matters because, in modern computing environments, workloads can vary significantly over time. By scaling computational resources automatically, we can meet demand without over-provisioning or sacrificing performance.

One of the key challenges in automatically scalable computation is ensuring that the system can scale both up and down as the workload changes. Scaling up is relatively straightforward: add more resources to handle increased demand. Scaling down is harder, because before removing a resource we must drain its in-flight work, and we must avoid thrashing, where the system repeatedly adds and removes capacity as load hovers near a threshold.
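
One way to avoid that thrashing is hysteresis: use a lower threshold for scaling down than for scaling up, so a load level near the scale-up point does not immediately trigger a scale-down. Here is a minimal sketch in Python; the function name, thresholds, and replica bounds are illustrative assumptions, not any real platform’s API.

```python
def desired_replicas(current, cpu_util, *, scale_up_at=0.8, scale_down_at=0.3,
                     min_replicas=1, max_replicas=10):
    """Return a new replica count for an observed CPU utilization (0.0-1.0).

    Scaling up reacts as soon as utilization crosses the upper threshold.
    Scaling down uses a separate, lower threshold (hysteresis), so load
    hovering in the dead band between the two thresholds changes nothing.
    """
    if cpu_util > scale_up_at:
        return min(current + 1, max_replicas)   # scale up, bounded above
    if cpu_util < scale_down_at:
        return max(current - 1, min_replicas)   # scale down, bounded below
    return current                              # dead band: hold steady

# For example: at 4 replicas, 90% CPU grows the pool, 50% holds it steady,
# and only dropping below 30% shrinks it.
```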

One approach to automatically scalable computation is a container-based architecture. Containers provide a lightweight, efficient way to package and deploy applications, which makes it easier to allocate resources dynamically as the workload changes. With a container orchestrator such as Kubernetes or Docker Swarm, we can automatically scale our applications up or down based on predefined metrics such as CPU utilization or request rate.
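
Kubernetes’ Horizontal Pod Autoscaler, for instance, documents its core scaling rule as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A rough Python sketch of that rule (the function name and the plain-number metric values are illustrative, not the real API):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric):
    """Proportional scaling rule from the Kubernetes HPA documentation:
    grow or shrink the replica count by the ratio of observed metric
    to target metric, rounding up."""
    return math.ceil(current_replicas * current_metric / target_metric)

# With 4 replicas averaging 200m CPU against a 100m target, the pool
# doubles to 8; at 50m against the same target, it shrinks to 2.
```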

Another key aspect of automatically scalable computation is fault tolerance: in a dynamically scaling system, applications must remain available and resilient as instances come and go or fail outright. A microservices architecture helps here, because applications are broken down into smaller, independent components that can fail, restart, and scale independently.
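
In such a system, calls between components must expect transient failures. One common resilience pattern (not specific to any framework mentioned here) is retry with exponential backoff and jitter; a minimal Python sketch, where the attempt count and delays are arbitrary assumptions:

```python
import random
import time

def call_with_retries(fn, *, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call fn(), retrying on any exception up to `attempts` times.

    Between attempts, sleep a random fraction of a doubling backoff
    window ("full jitter"), so many clients retrying at once do not
    hammer the recovering service in lockstep.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The `sleep` parameter is injected so tests can skip the real delays; production code would use the `time.sleep` default.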

During my presentation at Code Mesh 2017, I discussed some real-world examples of automatically scalable computation in action. For example, Netflix uses containerization and microservices to scale its streaming platform automatically in response to demand. By combining technologies such as AWS and Kubernetes, Netflix keeps its platform able to handle the varying workload of millions of users.

Automatically scalable computation also plays a crucial role in cloud computing, where resources are shared among many users. Platforms such as AWS and Google Cloud provide auto-scaling capabilities that adjust a user’s resources based on demand, so applications can keep up with load without overspending on idle capacity.

One of the key takeaways from my presentation is that automatically scalable computation is not a one-size-fits-all solution. Different applications have different requirements and constraints, and it is important to design our systems with scalability in mind from the beginning. By using a combination of containerization, microservices, and cloud platforms, we can build applications that are flexible, efficient, and resilient to failures.

In conclusion, automatically scalable computation is a powerful tool for building more efficient and resilient systems. With containerization, microservices, and cloud platforms, our applications can meet demand without sacrificing performance or wasting resources. I hope that my presentation at Code Mesh 2017 has inspired you to explore the possibilities of automatically scalable computation in your own projects. Thank you for listening!
