Ivana, a reader who’s been following “serverless architectures” for some time emailed in with a question about Lambda’s future for different sizes and types of projects.
That’s a good question. In my opinion, there’s a distinction between “serverless” and “Function as a Service” (FaaS). The infrastructure tools behind FaaS are relatively well-explored in other contexts.
For example, containers as a technology have been in use at companies like Google for years, even before the popularity of Docker. And event-driven systems have been popular in forms like Kafka, Akka, and others.
On the more general “serverless” side, backend-as-a-service tools have existed in the form of Parse and others. Service Oriented Architectures have been in use as a way to make it easier for teams to cooperate across organizational boundaries, manage complex systems, and define contracts between services.
Serverless as a whole combines some of each of these themes: SOA-style integration between functions and with external services (like Auth0, Amazon DynamoDB, IBM Watson, PubNub messaging, etc.) so functions can take advantage of existing services; backend-as-a-service-type providers running custom code in containers; and a very CI-friendly deployment workflow. Together these add up to an attractive package for developers, team leads, and ultimately business people.
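As a rough sketch of what that integration style looks like in practice, here’s a minimal Lambda-style handler in Python that persists an incoming event to a DynamoDB table. The table name (`user-signups`) and the event fields are hypothetical, invented for illustration; the event-shaping logic is pulled into its own function so it can be exercised without AWS credentials:

```python
import json


def build_item(event):
    """Shape an incoming event into a DynamoDB item.

    The field names here are hypothetical -- substitute your own schema.
    """
    return {
        "user_id": event["user_id"],
        "email": event.get("email", ""),
    }


def handler(event, context):
    """Entry point: persist the event to a (hypothetical) DynamoDB table."""
    import boto3  # imported lazily so the shaping logic is testable without AWS

    table = boto3.resource("dynamodb").Table("user-signups")
    item = build_item(event)
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["user_id"]})}
```

The function itself stays small; the heavy lifting (storage, auth, messaging) is delegated to managed services, which is exactly the SOA-flavored appeal.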
That doesn’t mean there aren’t bumps in the technology. Here are a few examples:
- Lambda specifically has unpredictable latencies, making it unsuitable for some uses.
- API Gateway’s usability isn’t the best, though it improved with the release of some new features last month.
- The differences between different platforms (Lambda vs. IBM OpenWhisk vs. iron.io IronWorker vs. PubNub BLOCKS) aren’t totally transparent, because of the different available event sources, latency profiles, and underlying architectures.
- Deployments are challenging, and tools are not yet mature enough to handle the different event sources, function versions, and dependencies/libraries.
The first three challenges are mostly technical: latencies can be reduced, APIs can be improved, and the different offerings can be better measured and compared. But deployments are a tricky problem for a couple of reasons.
First, there’s the process of making sure all the updated code goes out at the same time, or that every function always provides backwards-compatible APIs. If you have 20, 40, or 100 functions in your infrastructure, they can’t all be deployed at once, so they need to be able to talk to older versions of themselves, and to process old messages, since deployments may overlap.
Second, code sharing is a question that comes up frequently. Does all common code go in a library and get shared by all functions? Do different functions have different library version requirements? Do all an organization’s functions go in a single repo?
Third, there’s handling different event sources and function stages. Tooling like the Serverless Framework, Sparta, or Apex is all relatively new, and has only seen “production” use for at most a year or so. Existing deployment tools, such as Ansible, are also working to close the gap, but none of these are complete or mature solutions yet.
So I think tooling for “Lambda, in big project contexts” will continue to improve. I’m optimistic that the current crop of frameworks and tools will grow to meet the needs of projects at different sizes. Look at the number of contributors to the public frameworks, then add however many people are working on closed-source tools inside existing companies.
If you’re in the category of people with bigger projects and you’re worried about the maturity level of “serverless” as a whole, you don’t have to jump in right away and migrate. You can run a small pilot project, or decide to wait until the ecosystem shapes up more.