Visit nuclio.io for product information and news, and for a friendly web presentation of the Nuclio documentation.
Nuclio is a high-performance "serverless" framework focused on data, I/O, and compute intensive workloads. It is well integrated with popular data science tools, such as Jupyter and Kubeflow; supports a variety of data and streaming sources; and supports execution over CPUs and GPUs. The Nuclio project began in 2017 and is constantly and rapidly evolving; many start-ups and enterprises are now using Nuclio in production.
You can use Nuclio as a standalone Docker container or on top of an existing Kubernetes cluster; see the deployment instructions in the Nuclio documentation. You can also use Nuclio through a fully managed application service (in the cloud or on-prem) in the Iguazio Data Science Platform, which you can try for free.
If you wish to create and manage Nuclio functions through code - for example, from Jupyter Notebook - see the Nuclio Jupyter project, which features a Python package and SDK for creating and deploying Nuclio functions from Jupyter Notebook. Nuclio is also an integral part of the new open-source MLRun library for data science automation and tracking and of the open-source Kubeflow Pipelines framework for building and deploying portable, scalable ML workflows.
Nuclio is extremely fast: a single function instance can process hundreds of thousands of HTTP requests or data records per second. This is 10-100 times faster than some other frameworks. To learn more about how Nuclio works, see the Nuclio architecture documentation, read this review of Nuclio vs. AWS Lambda, or watch the Nuclio serverless and AI webinar. You can find links to additional articles and tutorials on the Nuclio web site.
Nuclio is secure: Nuclio is integrated with Kaniko to allow a secure and production-ready way of building Docker images at run time.
For further questions and support, click to join the Nuclio Slack workspace.
None of the existing cloud and open-source serverless solutions addressed all of the desired capabilities of a serverless framework.
Nuclio was created to fulfill these requirements. It was intentionally designed as an extendable open-source framework, using a modular and layered approach that supports constant addition of triggers and runtimes, with the hope that many will join the effort of developing new modules, developer tools, and platforms for Nuclio.
The simplest way to explore Nuclio is to run the Nuclio dashboard, its graphical user interface (GUI). All you need to run the dashboard is Docker:
docker run -p 8070:8070 -v /var/run/docker.sock:/var/run/docker.sock --name nuclio-dashboard quay.io/nuclio/dashboard:stable-amd64
Browse to http://localhost:8070, create a project, and add a function. When run outside of an orchestration platform (for example, Kubernetes), the dashboard will simply deploy to the local Docker daemon.
As an example, assuming you are running Nuclio with Docker, create a project and deploy the pre-existing "dates (nodejs)" template.
With docker ps, you should see that the function was deployed in its own container.
You can then invoke your function with curl (check that the port number is correct by using docker ps or the Nuclio dashboard):
curl -X POST -H "Content-Type: application/text" -d '{"value":2,"unit":"hours"}' http://localhost:37975
For a complete step-by-step guide to using Nuclio over Kubernetes, either with the dashboard UI or the Nuclio command-line interface (nuctl), explore the learning pathways in the Nuclio documentation.
"When this happens, do that". Nuclio tries to abstract away all the scaffolding around taking an event that occurred (e.g. a record was written into Kafka, an HTTP request was made, a timer expired) and passing this information to a piece of code for processing. To do this, Nuclio expects the users to provide (at the very least) information about what can trigger an event and the code to run when such an event happens. Users provide this information to Nuclio either via the command line utility (nuctl
), a REST API or visually through a web application.
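As a rough illustration, providing the code (with an HTTP trigger created by default) from the command line could look something like the following sketch; the function name, file name, and registry are placeholders, and the exact nuctl flags may vary between Nuclio versions:
# Deploy a Python handler from main.py; Nuclio creates a default HTTP trigger for the function.
nuctl deploy my-function --namespace nuclio --path main.py --runtime python:3.9 --handler main:handler --registry localhost:5000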
Nuclio takes this information (namely, the function handler and the function configuration) and sends it to a builder. The builder crafts the function's container image, which holds the user's handler together with a piece of software that can execute this handler whenever events are received (more on that in a bit). The builder then "publishes" this container image by pushing it to a container registry.
Once published, the function container image can be deployed. The deployer crafts orchestrator-specific configuration from the function's configuration. For example, when deploying to Kubernetes, the deployer takes configuration parameters such as the number of replicas, auto-scaling timing parameters, and how many GPUs the function requests, and converts them to Kubernetes resource configuration (Deployment, Service, Ingress, and so on).
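For instance, the scaling and GPU parameters mentioned above could be supplied at deploy time roughly as follows; the flag names here are assumptions based on common nuctl usage and may differ between versions:
# Request 1-4 replicas and one GPU for the function (flag names are illustrative).
nuctl deploy my-function --min-replicas 1 --max-replicas 4 --resource-limit nvidia.com/gpu=1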
Note: The deployer does not create Kubernetes native resources directly, but rather creates a "NuclioFunction" custom resource (CRD). A Nuclio service called the "controller" listens to changes on the NuclioFunction CRD and creates/modifies/destroys the applicable Kubernetes native resources (Deployment, Service, etc.). This follows the standard Kubernetes operator pattern.
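On a Kubernetes cluster you can observe this with standard kubectl commands; the "nuclio" namespace below is an assumption, so use whichever namespace Nuclio is installed in:
# List the NuclioFunction custom resources and the native resources the controller created from them.
kubectl get nucliofunctions --namespace nuclio
kubectl get deployments,services --namespace nuclio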
The orchestrator will then spin up containers from the published container images and execute them, providing them the function configuration. The entrypoint of these containers is the "processor", responsible for reading the configuration, listening to event triggers (e.g. connecting to Kafka, listening for HTTP), reading events when they happen and calling the user's handler. The processor is responsible for many, many other things including handling metrics, marshaling responses, gracefully handling crashes, etc.
Once built and deployed to an orchestrator like Kubernetes, Nuclio functions (namely, processors) can process events, scale up and down based on performance metrics, ship logs and metrics - all without the help of any external entity. Once deployed, you can terminate the Nuclio Dashboard and Controller services and Nuclio functions will still run and scale perfectly.
However, scaling to zero is not something functions can do on their own: once scaled to zero, a Nuclio function cannot scale itself back up when a new event arrives. For this purpose, Nuclio has a "Scaler" service, which handles all matters of scaling to zero and, more importantly, from zero.
The following sample function implementation uses the Event and Context interfaces to handle inputs and logs, returning a structured HTTP response (it's also possible to use a simple string as the returned value).
In Go
package handler

import (
    "github.com/nuclio/nuclio-sdk-go"
)

func Handler(context *nuclio.Context, event nuclio.Event) (interface{}, error) {
    context.Logger.Info("Request received: %s", event.GetPath())

    return nuclio.Response{
        StatusCode:  200,
        ContentType: "application/text",
        Body:        []byte("Response from handler"),
    }, nil
}
In Python
def handler(context, event):
    response_body = f'Got {event.method} to {event.path} with "{event.body}"'

    # log with debug severity
    context.logger.debug('This is a debug level message')

    # just return a response instance
    return context.Response(body=response_body,
                            headers=None,
                            content_type='text/plain',
                            status_code=201)
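Once a handler like the Python example above is deployed, you can give it a quick test from the command line; the function name is a placeholder, and the exact nuctl invoke flags may vary between versions:
# Send a test POST request to the deployed function.
nuctl invoke my-function --method POST --body '{"hello":"world"}' --content-type application/json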
More examples can be found in the hack/examples Nuclio GitHub directory.
For support and additional product information, join the active Nuclio Slack workspace.