With the evolution of cloud computing, IT infrastructure has undergone a rapid revolution. To add more flexibility & scalability to software on the cloud, serverless functions come into the picture. Although I should point out immediately that “serverless” does not actually mean server-less.
In my previous blog, I discussed how future cloud-native applications will stand on a combination of microservices and serverless, often wrapped in Linux containers.

Virtual machines (VMs) and cloud-enabled DevOps work together to optimize technology processes. The dynamic compute and storage capabilities of cloud technologies have made it easier to provision resources.
The idea behind DevOps is that developers no longer need to worry about infrastructure because that’s taken care of in the background by programs such as Ansible, Chef, and Puppet.
Container orchestration is the starting point 
Containers not only speed up your application while using fewer resources than a virtual machine (VM), but multiple colocated containers can also be scaled to deliver services in a production environment.

But extensive use of containers drastically increases the complexity of managing them, including keeping the base operating system patched and up to date, and doing that manually isn’t feasible.
Container orchestration tools such as Docker swarm mode, Mesosphere, and Kubernetes enable you to build application services that span multiple containers, schedule containers across a cluster, scale them, and deploy and manage them at scale.
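For example, here is a minimal sketch of asking an orchestrator to scale a service programmatically, using the official Kubernetes Python client. It assumes a working kubeconfig and an existing Deployment named "web" in the "default" namespace (both hypothetical names):

```python
# Sketch: scaling a containerized service with the Kubernetes Python client.
# Assumes the `kubernetes` package is installed, ~/.kube/config points at a cluster,
# and a Deployment named "web" already exists in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()   # load credentials from the local kubeconfig
apps = client.AppsV1Api()

# Ask the orchestrator for 5 replicas; Kubernetes schedules the extra
# containers across the cluster and keeps them at the desired count.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```

The point is that the orchestrator, not the operator, decides where those containers land and keeps them running.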
Serverless & Microservices are not the same 
Although at first glance serverless may look a lot like microservices, there are substantial differences between the two in terms of pricing, scalability, complexity, granularity & time to market.
Read more here.
Serverless isn’t actually server-less
Instead of using containers to run applications, serverless computing replaces containers with another abstraction layer in which the cloud provider acts as the server, dynamically managing the allocation of machine resources.
Here are the various serverless approaches:
- Serverless is the next step up from cloud technologies such as infrastructure-as-a-service (IaaS), where you don’t need to care about the underlying resources (no physical memory or servers to manage).
- Serverless is not a one-size-fits-all technology. Not every program can run in a serverless environment; it must be written as functions that run in transitory containers, hence the name function-as-a-service (FaaS).
- Another serverless approach is backend-as-a-service (BaaS). In this API-based model, services autoscale, and the containers behind them are invisible to developers and end users.
It began with AWS Lambda; other serverless services include Google Cloud Functions and Azure Functions, where you pay as you go as your application’s usage grows.
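As a minimal illustration of the FaaS model, here is a sketch of an AWS Lambda handler in Python. The event shape (a JSON body with a "name" field) is an assumption for the example; the provider runs the function only when an event arrives:

```python
import json

# Sketch of a FaaS handler for the AWS Lambda Python runtime.
# The platform invokes this function once per event; there is no
# long-running server process owned by the developer.
def lambda_handler(event, context):
    # Assumes the event carries a JSON body with an optional "name" field.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```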
In serverless, horizontal scaling is completely automatic, elastic, and provider managed.
If your system needs to process 100 requests in parallel, the provider handles that without any additional work from the developer, since the underlying servers are managed by a third party.
Also, unlike a traditional server-based application, there is no constantly running server. Compute time is spent only when the function is called to action.
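To make that concrete, here is a hedged sketch of invoking such a function on demand with boto3. The function name "hello-world" is hypothetical and AWS credentials are assumed to be configured; compute is billed only for the duration of each invocation:

```python
import json
import boto3

# Sketch: invoking a deployed Lambda function on demand with boto3.
# Assumes AWS credentials are configured and a function named "hello-world" exists.
lam = boto3.client("lambda")

response = lam.invoke(
    FunctionName="hello-world",
    Payload=json.dumps({"body": json.dumps({"name": "cloud"})}),
)
print(response["Payload"].read().decode())  # the handler's JSON response
```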
However, serverless has its own cons, the first of which is portability: an application built around Lambda, for instance, can’t easily be ported to Azure Functions, resulting in vendor lock-in. Others are added complexity and limits on memory.
On the flip side, container orchestration is platform-independent, so you don’t get stuck with vendor lock-in, and it is less complex than serverless.
Cloud-native computing
All of these technologies (VMs, containers, microservices, serverless) play their role in cloud-native computing, an approach that uses an open source software stack to deploy applications that are:
Containerized – Each part of the application is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
Dynamically orchestrated – To optimize resource utilization, containers are actively scheduled and managed.
Microservices-oriented – Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.
Serverless yields similar results, freeing developers from maintaining the application servers running in the cloud.
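As a sketch of the microservices-oriented piece, a single narrowly scoped service might look like the Flask app below (the "/orders" endpoint and its response are hypothetical). Each such service gets its own container image and can be scaled and deployed independently:

```python
from flask import Flask, jsonify

# Sketch of one small microservice: a single-purpose HTTP API that would be
# packaged in its own container image and scaled independently of other services.
app = Flask(__name__)

@app.route("/orders/<order_id>")
def get_order(order_id):
    # In a real service this would query the service's own datastore.
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```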
What’s the best? 
Complexity arises in monitoring: someone has to monitor multiple microservices, and for each service there might be several instances running in parallel.
Another form of complexity is storage management. To build successful cloud-native applications, developers have to consider new ways of managing storage that separate applications from their data.
Particularly, serverless computing works best when workloads are:
- Asynchronous, concurrent, and easy to parallelize into independent units of work.
- Infrequent or with sporadic demand, with large, unpredictable variance in scaling requirements.
- Stateless and transitory, without a hard requirement for instantaneous cold-start times.
- Highly dynamic, in terms of changing business requirements that drive a need for accelerated developer velocity.
However, as complexity increases, serverless may hold you back in certain cases, whereas microservices/containers will let you survive that complexity thanks to their service-oriented architecture (SOA) roots. Although, again, SOA and microservices are not the same.
With microservices, you’re working with a truly decoupled architecture that is lighter than SOA and thus more flexible. While SOA services are deployed to servers & VMs, microservices are deployed in containers.
To be specific, every IT department should evaluate, depending on its application and infrastructure requirements, how best to integrate VMs, containers & serverless into its software stack.