Do you ever wonder how some companies launch new products effortlessly fast and scale to meet demand without disruption? How Netflix deploys several product updates daily, Spotify streams terabytes of audio per hour, or Facebook users upload millions of pieces of content per minute, yet the systems never break or go 'down for maintenance'? They must have massive Ops teams managing alien technology far ahead of your IT, right? Wrong!
It all boils down to applying technology in a way that removes (or automates away) the obstacles of scale, and understanding that Ops is an integral part of Dev, thus: DevOps.
DevOps in a nutshell
This software engineering culture and practice aims at unifying software development (Dev) and software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives. Source: Wikipedia
Most companies today apply segments of DevOps culture, such as Continuous Development and Continuous Deployment, but not many have mastered the discipline of Continuous Product Delivery.
[Image credit: Kharnagy, Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=51215412]
A typical system often relies on a proprietary platform with many built-in features (modules) useful to a business: a payment module, product inventory module, content module, user access module, profile module, and so on. These modules are built to support their platform, and as the platform does not expose its services (modules) to other applications or services, everything functions well in isolation. We refer to this as a 'Monolithic' software architecture.
The major disadvantage of this legacy approach, other than the lack of resource sharing, is that, as the name suggests, the system is hard to change: a single modification may impact many modules, and fully re-testing and re-deploying the platform simply takes too long. Business owners are then handed cost estimates and timelines that are hard to digest.
Micro-services, on the other hand, are an SOA-style software development technique that structures an application as a collection of loosely coupled, fine-grained services interconnected via APIs and lightweight protocols. Decomposing an application into smaller services improves modularity, makes the application easier to understand, develop, and test, and makes the system more resilient to architecture erosion. It also enables autonomous teams to develop, deploy, and scale their services independently, and allows new services to emerge through continuous refactoring. As there is no 'platform' to speak of, micro-services can be deployed independently without affecting users or other applications. For example, if the business decides to switch to another payment provider's API, only the affected payment service is changed and re-deployed, with no impact on the other services in the system. This approach guarantees fast time to market for changes and new services, and significantly cuts the cost of development, testing, and operations.
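The payment-provider swap above can be sketched in a few lines. This is a hypothetical illustration, not any particular provider's API: the gateway classes and the `charge` signature are invented for the example. The point is that other services only depend on the payment service's contract, so replacing the provider is contained to one deployable unit.

```python
# Illustrative sketch: a payment micro-service behind a small interface.
# Provider names and the charge() signature are invented for the example.
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """Contract every gateway adapter must satisfy."""

    @abstractmethod
    def charge(self, amount_cents: int, token: str) -> str:
        """Charge the card token; return a transaction id."""


class LegacyGateway(PaymentGateway):
    def charge(self, amount_cents: int, token: str) -> str:
        # In reality this would call the old provider's HTTP API.
        return f"legacy-{token}-{amount_cents}"


class NewGateway(PaymentGateway):
    def charge(self, amount_cents: int, token: str) -> str:
        # New provider, same contract -- callers never change.
        return f"new-{token}-{amount_cents}"


class PaymentService:
    """The service's public surface; the rest of the system sees only this."""

    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def pay(self, amount_cents: int, token: str) -> str:
        return self.gateway.charge(amount_cents, token)


# Switching providers is a one-line change, re-deployed in isolation:
service = PaymentService(NewGateway())
print(service.pay(1999, "tok_abc"))  # new-tok_abc-1999
```

In a monolith, the same swap could ripple through every module that touches payments; here the blast radius is one service.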
The term serverless computing is a bit of a misnomer, as there will always be a need for servers to deliver applications. But serverless architecture plays a big role in cost-efficient scaling. When a new application is created, capacity planning is required, and for fast-growing digital products it is near impossible to manage manually. Predicting the bandwidth, memory, storage, and number of servers in a load-balanced cluster needed to meet demand is often miscalculated, so the business owner ends up paying for infrastructure they don't utilize at the time.
Benefits of micro-services, serverless strategy
Deploy in minutes
Serverless offers a programmable way to meet increasing demand or product growth through auto-scaling, elastic computing. I wrote at length about this in the article: https://www.linkedin.com/pulse/developing-cost-efficient-digital-products-2018-damir-mustafic/
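"Programmable" scaling boils down to a rule the platform evaluates for you. The toy function below sketches the idea behind target-tracking auto-scaling: pick enough instances to cover the observed load, clamped between a floor and a ceiling. The numbers and parameter names are illustrative assumptions, not any cloud provider's actual API.

```python
import math


def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 100.0,
                      min_instances: int = 1,
                      max_instances: int = 20) -> int:
    """Target-tracking style rule: enough instances to cover the load,
    clamped to a configured floor and ceiling. All numbers illustrative."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))


print(desired_instances(50))    # 1  (floor applies at low traffic)
print(desired_instances(1250))  # 13
print(desired_instances(5000))  # 20 (ceiling caps the spend)
```

In a managed platform you only declare the target and the bounds; the provider runs the equivalent of this loop continuously, which is exactly the capacity-planning work that is so hard to do by hand.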
Infrastructure cost savings
At a high level, micro-services do not consume any computing power until they are called upon. Based on demand, servers are created and provisioned, process their instructions, then destroy themselves when no longer used. This is the Function-as-a-Service (FaaS) model. For example, if the application is not processing any payments at the moment, the server node handling payments simply does not exist. On the flip side, if demand for payment processing jumps, the serverless architecture may provision several payment-processing nodes to handle the load, then shut them all down afterwards.
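A FaaS "payment node" is really just a function the platform runs on demand. The sketch below uses the handler shape AWS Lambda expects for Python, a function taking `(event, context)`; the `amount_cents` field and the response body are assumptions invented for this example, not a real payment event schema.

```python
# Minimal FaaS-style handler in the (event, context) shape AWS Lambda
# expects for Python. The event fields are illustrative assumptions.
import json


def handler(event, context):
    amount = event.get("amount_cents")
    if amount is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "amount_cents required"})}
    # ... the actual charge would happen here ...
    return {"statusCode": 200,
            "body": json.dumps({"charged": amount})}


# Invoked locally for testing; in production no server exists
# until the platform receives an event and spins one up.
print(handler({"amount_cents": 1999}, None))
```

Between invocations there is nothing to pay for: no idle payment node, no warm cluster, just the function definition waiting for its next event.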
Freedom at scale
AWS Lambda was the first public cloud service to offer this kind of abstract serverless computing and remains the industry leader. Microsoft Azure today has a similar service (Azure Functions), as does IBM (OpenWhisk).
All this scaling, elasticity, and provisioning happens automatically when properly architected, so application developers and ops engineers never need to worry about capacity loads, tremendously cutting cost and resource needs for any organization running a digital business.
Bring your own programming language
Many serverless-capable cloud providers let developers choose their own programming languages and use open-source technologies, so apart from the infrastructure the code runs on, nothing is proprietary and there are no lock-in traps. Contrast that with the Monolithic approach, where all developers are forced to write in the same language to extend a proprietary platform's capabilities, diminishing the IP and brand value of your product.
The trend of product digitization is ever increasing, yet the cost of computing is decreasing... So, when you are asked "how much memory, storage, or how many server blades do you need", you are talking to the wrong people.