MetaCall Frequently Asked Questions.
Last update: March 22, 2019
MetaCall helps you build serverless applications using a more fine-grained, scalable, NoOps-oriented Function Mesh instead of the Service Mesh and DevOps approach. MetaCall automagically converts your code into a Function Mesh and auto-scales the individual hot parts, or functions, of your app.
MetaCall not only helps to simplify application development but also speeds up time to market. Developers can focus purely on code and business logic instead of expending expensive development cycles on DevOps.
The purpose of MetaCall is to integrate local development with function-mesh transparently.
MetaCall enables a new, smarter and productive way to develop distributed systems. It is a library that allows you to execute code across boundaries – be it language, process, pod, container, node, or server boundaries. Your code could be in language X and it can invoke functions implemented in languages X, Y, and Z where X, Y, Z are the set of supported languages in MetaCall core.
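The call model can be pictured with a toy sketch. This is not the MetaCall API, only an illustration of call-by-name dispatch across loaded modules, and every name in it is invented for the example:

```javascript
// Toy illustration (NOT the MetaCall API): a registry that dispatches calls
// by function name, the way MetaCall dispatches across language runtimes.
const registry = new Map();

// Stand-in for loading a script: register its exported functions by name.
function load(moduleExports) {
  for (const [name, fn] of Object.entries(moduleExports)) {
    registry.set(name, fn);
  }
}

// Stand-in for a cross-language call: look the function up by name and invoke it.
function call(name, ...args) {
  const fn = registry.get(name);
  if (!fn) throw new Error(`function not found: ${name}`);
  return fn(...args);
}

// Imagine these exports came from a Python or Ruby script instead of JavaScript:
load({ multiply: (a, b) => a * b });
call('multiply', 3, 4); // 12
```

In MetaCall itself, the caller never needs to know which runtime hosts the function; the registry above stands in for that transparency.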
CRUD via HTTP in REST APIs is a form of RPC with a limited set of functions. MetaCall extends the abstraction to any function that can be called remotely and is not limited to the semantics of CRUD. (Note that REST, i.e., CRUD over HTTP, is a sort of degenerate form of RPC.) Over and above this, these functions can be implemented in different programming languages and run on different pods, containers, servers, or nodes distributed geographically.
Function Mesh is a new, transparent way to inter-connect serverless functions. It enables building complex distributed systems while efficiently scaling only the hot functional parts of the system. MetaCall is an enabling technology that builds a function gateway of sorts under the covers. The only thing developers need to care about is writing a small configuration file indicating what code they want to publish; the function gateway is then created automatically by MetaCall. Scaling MetaCall instances up and down happens automatically, governed by configurable limits such as money spent on resources, response time, or latency.
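For illustration only, such a configuration file could be as small as the following. The field names are an assumption modeled on a typical `metacall.json`; check the MetaCall documentation for the exact schema:

```json
{
  "language_id": "node",
  "path": ".",
  "scripts": [ "index.js" ]
}
```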
Function Mesh introduces a new way of development. It is possible to develop a complete monolithic service, and it will be scaled depending on the hot parts of the application. The Function Mesh will grow and subdivide parts of the monolith if needed, or shrink when the workload is lower. At the same time, it is non-intrusive: you can write normal applications without a complete new framework or any re-architecting of the code or infrastructure. That code can be tested and debugged locally, without clusters like Kubernetes or complex environments with many layers of abstraction, using common tools, debuggers, and test runners. You can build a complete distributed system in a single project without worrying about the infrastructure.
Today, Service Mesh is a practical way to interconnect the services in your architecture. You can use tools like Istio (Service Mesh) and OpenFaaS (FaaS) on top of Kubernetes (orchestrator) to build it. Other alternatives are appearing, like OpenFaaS Flow, to interconnect functions.
These solutions have drawbacks. Kubernetes itself is a complex tool with a steep learning curve, and OpenFaaS and Istio introduce even more complexity and require even more knowledge. The resulting architecture can be powerful, but it is really difficult to build. Apart from this, testing the system must be done in the cloud, or on a powerful local machine able to run the whole cluster and absorb the consumption of all these systems. In addition, a Service Mesh injects a sidecar proxy into each pod or service instance, roughly doubling the number of processes the cluster must handle and adding an intermediate layer between the functions.
The idea behind the MetaCall Function Mesh is to unify development. MetaCall offers a non-intrusive way to build this architecture without managing the cluster and complex tooling yourself (no more YAML abuse). In addition, in the Function Mesh each function is a load balancer and a proxy at the same time: it takes advantage of concurrency and can also be scaled horizontally, creating a uniform matrix of interconnected functions. These three levels of scalability can drastically reduce costs, improve performance, and simplify the life of the developer.
MetaCall Core can be seen as a polyglot: a multi-language interpreter that can execute different programming languages at the same time. MetaCall itself is not a meta virtual machine; it only provides foreign function interface (FFI) calls between programming languages. It helps integrate functionality across application components written in different languages. We are using it to build the core of our FaaS, and in this way we achieve high performance in the execution of the calls.
For a complete list of supported languages, refer to MetaCall Language Support - Backend.
For the testing performance of MetaCall calls, refer to MetaCall Benchmarks.
Target Audience: Developers, Solution Architects.
Key Use case:
MetaCall is useful for enterprises that need to migrate legacy or traditional, non-micro-services, architecture-based distributed applications (monolithic, SoA) without refactoring the entire code base. It is also very useful for developer productivity, as it eliminates the need to set up complex K8s clusters to test code as it would run in production.
MetaCall enables developers to test code locally just as it would run in production – this saves time and resources and brings solutions to market faster.
Application Type | Benefits with MetaCall | Comments |
---|---|---|
Application with persistent or long-lived interactions and inter-service connections for performing tasks that can scale up or down quickly | Yes | Need confirmation from V |
Applications with short term intermittent communication across components or micro-services where these interactions can scale up or down fast | Yes | Need confirmation from V |
Applications that are not front-end facing but in the deeper end of the stack and deal with caches, databases and intermediary services | Yes | Need confirmation from V |
Applications with bi-directional data flow | ? | |
CPU-centric applications | ? | |
I/O intensive applications | ? | |
Message Broker-based big data applications? (ETL, Hadoop, Spark, Flink – data warehousing apps) | ? | |
Real-time messaging application for Big Data Analytics | ? | Need confirmation from V Reference: Real-time messaging for analytics |
MetaCall can be used for either of the above. It is not limited to an application architecture type.
Yes, you can secure your APIs with MetaCall or keep them public from the Dashboard.
Here are some of the available benchmark data for MetaCall (Beta). More benchmarking tests are underway and will be updated shortly.
Benchmark Description:
A simple function that merges two strings using the following technologies:
Specs:
Results:
Software | Requests/sec | Transfer/sec | Errors |
---|---|---|---|
Flask | 614.82 | 100.87KB | 1715 Timeouts |
Express | 6433.61 | 1.39MB | 652 Timeouts |
MetaCall (Python) | 11620.65 | 1.80MB | None |
MetaCall (Node.js) | 8190.71 | 1.27MB | None |
NginX | 14224.61 | 2.48MB | None |
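For context, the benchmarked workload is tiny. A sketch of the string-merging function could look like the following (the actual benchmark source is not reproduced in this FAQ, so the name is illustrative; each platform wraps this in an HTTP endpoint):

```javascript
// Illustrative sketch of the benchmarked function: merge two strings.
// In the benchmark, Flask, Express, MetaCall, and Nginx each serve an
// equivalent of this behind HTTP.
function mergeStrings(a, b) {
  return a + b;
}
```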
Conclusions:
Benchmark Description:
Comparison of a Python C API call (hard-coded) versus a MetaCall call (FFI). No network connection was used; this test focuses on the overhead MetaCall adds when executing calls between runtimes. MetaCall was compiled without optimizations (debug mode, all call optimizations disabled). One million calls were made, each taking two long integers as arguments and returning another long integer. The test used a single thread, although the VM had more than one assigned to it.
Specs:
Source:
Results:
Software | Time | Bandwidth | Calls/sec |
---|---|---|---|
Python C API | 544 ms | 42.0545MB/s | 1.75227M items/s |
MetaCall Python Variadic Arguments Call | 988 ms | 23.1689MB/s | 988.54k items/s |
MetaCall Python Array Arguments Call | 903 ms | 25.353MB/s | 1081.73k items/s |
Conclusions:
The following is a placeholder answer; we need to refine it per V's inputs.
MetaCall's implementation uses a higher-level protocol (QUIC / HTTP/3) to reduce RPC overheads, together with a high-performance, multi-core, asynchronous I/O model.
MetaCall has a unique scaling model that is more granular and more compute resource-efficient than micro-services and REST-based API interaction models. MetaCall can scale at 3 levels – per process, per pod and per function-mesh. Basically, it allows you to do more work with fewer resources and enables finer-grained resource utilization. {Any benchmark run numbers here would be useful – say with MetaCall, xyz micro-services benchmark ran 5X times faster than without MetaCall.}
MetaCall is a cross-platform architecture – it is designed to work on multiple platforms. Current tested platforms:
Platform | Version | Architecture |
---|---|---|
Windows | 7, 8, 10 | x86, x86-64 |
Linux | Debian (8, 9, 10), Ubuntu (14.04, 16.04, 18.04) | x86-64 |
macOS | 10.14 | x86-64 |
MetaCall use cases – TBD
MetaCall examples - see Auth Function Mesh and refer to the examples folder in GitHub.
Developer Productivity
Saves time, lowers costs
Simplifies and speeds up Legacy code and cloud technologies integration
To bring Legacy code to the cloud, or to evolve it using newer application architecture models such as micro-services, you need a service like Lambda. Instead, you can use MetaCall and without rewriting legacy code, migrate some or all its functionality to the cloud.
Let us consider this example as shown in the figures below:
Function A calls functions B, C and D, all implemented in PHP |
Let's say all functions are written in PHP, and `Fn A` calls `Fn B`, `Fn C` and `Fn D`. Now you want to update your code with a brand new Node.js function `Fn C'`. With MetaCall, you can progressively transfer your calls from `Fn C` to `Fn C'` with ease, allowing you to migrate legacy code in production. Refer to the figures below, which show the phased migration of PHP code to Node.js:
Function C is implemented in Node.js and the rest code is as-is | Function B is migrated to Node.js | Function B, C and D are all in Node.js now |
Without MetaCall, you would need six months or so to move legacy code to Node.js or another model before you could verify and test it. With MetaCall, you can do it in chunks – it enables phased migration.
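One way to picture the progressive transfer from `Fn C` to `Fn C'` is a weighted router that sends a growing share of calls to the new implementation. This is only an illustration of the idea, not MetaCall's actual mechanism, and all names are invented:

```javascript
// Hypothetical canary router: send `newShare` (0..1) of calls to the new
// implementation and the rest to the legacy one.
function makeCanaryRouter(legacyFn, newFn, newShare) {
  return (...args) =>
    Math.random() < newShare ? newFn(...args) : legacyFn(...args);
}

// Start at 0.1 and raise the share toward 1.0 as confidence in Fn C' grows.
const fnC = makeCanaryRouter(
  (s) => `legacy:${s}`,  // stand-in for the PHP Fn C
  (s) => `new:${s}`,     // stand-in for the Node.js Fn C'
  0.1
);
```

With share 0 every call goes to the legacy function; with share 1 every call goes to the new one, completing the migration.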
As indicated in the figure above, with MetaCall you don't need two teams – one to maintain the legacy code and another to refactor it to work with newer design and application architecture models. This helps agility and reduces time to market (TTM).
With today's FaaS platforms, you can typically deploy only one function per file. MetaCall can handle N functions per file, allowing you to ship multiple functions in the same deployment. You can keep a monolithic or micro-services architecture and MetaCall will slice everything. (Monolith here refers to pre-MSA or SoA application architecture.) {Need to highlight the benefit of this – how does this help the developer, the project, the business}
Answer to be provided by V
MetaCall has plenty of upside; the downside is that a small amount of resources is always consumed in order to avoid cold starts.
MetaCall simplifies testing and debugging, which are a nightmare in distributed application development. Test locally, and the same code runs exactly the same way in production – across process, pod, and container boundaries. Since tests run locally, there is no need to test in dev and then again in production.
Is it self-automated? There is a way to limit resource consumption – say by money spent, number of replicas, etc. How? {Elaborate with the help of V}
MetaCall's implementation has two interfaces. On the caller side, the function gateway uses plain HTTP (for now); in the future this could be pluggable, so it could use GraphQL, XML, or WebSockets. At the other end, for function-level communication within the function gateway, it uses JSON-RPC. That could be pluggable too – QUIC, gRPC, or RabbitMQ – whichever works better for the user in terms of solution performance, required communication bandwidth, and latency.
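For example, a JSON-RPC 2.0 invocation of a published function might look like the following request/response pair. The method name is illustrative, and the exact envelope MetaCall uses may differ:

```json
{ "jsonrpc": "2.0", "method": "hello_world", "params": [], "id": 1 }
```

```json
{ "jsonrpc": "2.0", "result": "Hello", "id": 1 }
```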
Answer TBD
MetaCall is not an industry or application-specific solution. It is a newer, better model for building your Function Mesh. (More details TBD by V)
Application architectures have evolved from traditional monolithic applications and SoA architectures to applications built with micro-services. The next step in this evolution is serverless and functions – FaaS. The granularity of the application 'execution unit' has become finer at each step. Consequently, application complexity has exploded and begun to impact application development and distributed application architectures.
Application granularity and resultant complexity is a disadvantage to distributed application development and testing process and a big drain on computing resources. Ideally, this granularity ought to be transparent to impart its benefits seamlessly to modern applications – for example – efficient and optimal consumption of underlying compute and cloud resources and services. This granularity, ideally, should not reside in the hands of the developer. Instead, it should be part of the underlying infrastructure.
With MetaCall FaaS model, you can maintain your traditional monolithic applications or contemporary micro-services-based architectures, gain the benefit of FaaS granularity without having to bother about application restructuring or developmental complexity. Your legacy code and old big architectures can be migrated to serverless and FaaS models, fully distributed by functions easily with the use of MetaCall and without needing to refactor it, or spend extra resources in development, DevOps or specialized serverless developers.
If you look closely, REST APIs comprise CRUD (Create, Read, Update, Delete) operations over HTTP. This is a sort of RPC limited to the CRUD functions. MetaCall extends this abstraction to any function that can be called remotely and is not limited to the semantics of CRUD. In other words, REST, i.e., CRUD over HTTP, is a sort of degenerate form of RPC. In addition, these functions could be implemented in different programming languages and may run on different pods, containers, nodes, or servers distributed geographically. TBD - How MetaCall is different from RPC or is it
Refer to the picture below. It shows how MetaCall differs from the competition:
MetaCall vs. Competition |
Unlike Lambda, MetaCall does not have cold starts. Refer to the figures below:
Fig 1: Function scaling model with Lambda |
Fig 2: Function scaling model with MetaCall |
With Lambda, you can stack two or more layers of load balancers to scale functions and obtain a function mesh – see the figure. With MetaCall, you get an any-to-any function mesh, and it is much less expensive. In the MetaCall model, each function also acts as a gateway with the help of the MetaCall library. This helps tremendously with scaling, and everything is integrated: no third-party products are required for integrating across languages, platforms, and components, or for routing function calls.
AWS Lambda and Microsoft Azure functions are generic serverless platforms that can be used to create FaaS solutions. Twilio is a cloud-based enterprise contact center software platform that is specialized for communications code (SMS, Text, Voice) and allows the creation of Twilio-based apps for contact center, marketing and customer engagement.
MetaCall is similar to AWS Lambda and Azure functions in the sense that it also enables you to create FaaS based solutions. However, the approach is very different. Unlike AWS or Microsoft, MetaCall does not host and handle these functions via triggers. MetaCall automagically converts your code into a Function Mesh and auto-scales individual hot parts or functions of your app. Both Lambda and Azure Functions are similar in functionality as they propose to segment the application architecture in order to scale only the parts that require scaling, thus making consumption efficient with a pay-per-function-use pricing model. Unlike these two, MetaCall is not cloud provider proprietary as it can work for applications that are distributed across cloud, on-premises and even hybrid applications.
In this question, we will try to answer the following queries related to MetaCall versus other similar technologies:
Applications today are comprised of various moving parts in order to have scalable, efficient systems. Does MetaCall help in any way to replace any or all of these multiple moving parts, or simplify them?
When and how does MetaCall decide to optimize certain application actions?
How does a MetaCall-based application compare with an optimized application system built using various AWS functionalities, including Lambda? Is time to deployment saved? Is one better than the other in the long run? What are the features that separate MetaCall from the rest?
Since it first appeared in 2014, AWS Lambda's function implementation has not changed much, and Microsoft Azure and Google Cloud implemented very similar functions. The main drawbacks of these technologies compared to MetaCall are detailed below. A typical AWS Lambda handler looks like this:
```javascript
exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "text/html"
    },
    body: "Hello",
  };
};
```
Handlers are different from events or functions (callbacks). This handler is similar to the ones that exist in micro-services. When you use AWS Lambda, besides handlers, you need AWS API Gateway to provide a valid REST endpoint for the Lambda handler. This complicates the problem many-fold: although it offers high granularity in terms of resource consumption, it mandates the use of other AWS services to make it work. Eventually, your application or solution has to be split into many handlers, making it harder to manage.
For example, 10 lambda functions require 10 separate deployments and 10 additional entries into the API Gateway. If you consider Database or Message Queue service implementations, then this resource and service dependency list begins to grow even further. Now imagine, if we try to inter-connect other kinds of Lambdas to build a complete application, it just gets more and more complex.
MetaCall deals with callbacks or functions, which are very different from handlers. A callback is a piece of executable code that is passed as an argument to other code, which is expected to call back (execute) the argument at some convenient time. The invocation may be immediate, as in a synchronous callback, or it may happen at a later time, as in an asynchronous callback.
MetaCall functions look like this:
```javascript
exports.hello_world = () => {
  return "Hello";
}
```
Instead of looking like a handler, a MetaCall function looks like a normal function that can be unit-tested locally with common development tools. There is no need for an extra framework or resource deployment in order to debug or test it. Also, you can add more functions in the same file and they will be deployed separately.
If you want to call other functions, you just call them like normal functions:
```javascript
import { other_function } from 'lib';

exports.hello_world = () => {
  return other_function();
}
```
This will build a function mesh for you. The function `other_function` may be executed on another peer; network resolution, scalability, and everything else will be automatically taken care of by MetaCall.
The following is a list of benefits that MetaCall brings to developers and to the modern application development process:
Developers can build monolithic applications that will be fully distributed through MetaCall model.
MetaCall brings in an integrated API Gateway along with the functions, so there is no need for updating API Gateways for each function as in the case of AWS Lambda.
MetaCall allows application developers to write functions that can be tested locally and validated globally in one shot. This is because they are normal functions and after being uploaded to the FaaS they will be automatically scaled. Developers do not have to first test and then verify the deployment in production.
With MetaCall, an organization's development lifecycle can be sped up, and developers encounter less friction when implementing software.
MetaCall abstracts away the nuances and differences across cloud vendors. Developers are not required to understand AWS or any other vendor-specific implementation model, because MetaCall integrates with repositories seamlessly.
MetaCall pricing is transparent and simpler. You will not have to deal with high segmentation and obfuscated costs. You will be able to see all telemetry and consumption of your code.
Here is how MetaCall overcomes some of the drawbacks associated with AWS Lambda:
MetaCall does not have the typical serverless cold-start performance issues.
MetaCall automatically builds all dependencies with your functions in a standard way, using existing package managers.
MetaCall is simple to use as it integrates into normal developer code, without complex frameworks, thus reducing vendor lock-in.
Unlike AWS Lambda or typical serverless models, MetaCall lets you test code locally and later run it in the FaaS with equivalent distributed execution, without having to test twice – once locally and again in production.
MetaCall scales hot parts of your application depending on the workload and the function calls for each function published in the function mesh. The function mesh can be subdivided or compressed depending on the workload.
MetaCall allows persistence through functions just by exporting common objects or arrays in the code.
In short, instead of having to use AWS API Gateway, DynamoDB, Lambda, Elastic Load Balancer, Simple Queue Service... (among others), MetaCall solves application scaling and distribution at the same time, without DevOps. There is no learning curve or new framework needed to use MetaCall. Local testing significantly improves development time and cuts resource usage. The function mesh model allows a more effective way of scaling: horizontally, vertically, and in a new third dimension introduced by code splitting.
All this is taken care of by MetaCall, with no additional cost or new knowledge required for the developer.
Besides the above, with MetaCall, there is an extra benefit: your existing code can be migrated to MetaCall easily because it does not need a new framework. MetaCall can consume classical functions. Migrations to MetaCall based FaaS environment can be done automatically.
One of the core objectives of MetaCall is to simplify application migration to FaaS. Most of the problems related to ‘fitting’ an existing application into the FaaS model are caused by the limitations of current FaaS designs. With the cutting-edge design of MetaCall you will be able to migrate a monolithic application into FaaS easily. Refer to previous answer for more details.