Writing Custom Knative Event Sources

Murugappan Sevugan Chetty
4 min read · Oct 27, 2020

A Knative Eventing source is the link between an event producer and an event sink.

Event Producer

Examples of traditional event producers are Kafka, GitHub webhooks, NATS, Redis events, etc. Knative currently provides and maintains a list of commonly used event sources, which can be found here.

Event Sink

An event sink is any addressable component, i.e. a Kubernetes resource that can be resolved to a URI, for example a Knative service, a Kubernetes service, or a Knative Eventing channel. The list of addressable resources in a cluster can be retrieved using the discovery API.

Building an Event Source

Now that we have defined the producer, the source, and the sink, let us talk about building a custom event source. As defined earlier, the source is the link between the producer and the sink, and building a custom source comes down to building, distributing, and deploying this link, aka the adapter, on demand.

Events produced by Knative sources are in the CloudEvents format, which provides interoperability when building event consumers (sinks). CloudEvents is an incubating CNCF project.

Knative provides the below options for building sources:

Options

  1. Controller
  2. Sink Binding
  3. Container Source

Controller Approach

This approach involves building a Kubernetes controller, which can be cumbersome for many; fortunately, Knative provides a template project to get one up and running quickly. Below are the components you will be building:

  1. Adapter

This is a dedicated process (a Kubernetes pod) that reaches out to the producer, fetches events, constructs each one as a CloudEvent, and pushes it down to the sink. Each event source can have additional requirements; for example, the KafkaSource adapter has logic for marking/committing messages. This component is deployed by the controller. An important point to note is that this is a standalone process and can be written in any language; all the controller needs is a container image. (A minimal sketch of such an adapter follows this component list.)

  2. Controller

Building the controller starts with defining the Kubernetes custom resource, followed by code generation and writing the reconciliation logic. This component follows the operator pattern to deploy and maintain the adapter.

  3. Webhook (Optional)

This is an optional component used to validate the resources created by the user.
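To make the adapter component concrete, below is a minimal sketch of an adapter written in Go with the CloudEvents SDK (github.com/cloudevents/sdk-go/v2). The producer call, source attribute, and event type are hypothetical placeholders, and the sink URI is assumed to be passed to the adapter pod via the K_SINK environment variable; a real adapter would replace the polling loop with producer-specific logic.

package main

import (
    "context"
    "log"
    "os"
    "time"

    cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
    // Sink URI, assumed to be injected into the adapter pod as K_SINK.
    sink := os.Getenv("K_SINK")

    // HTTP CloudEvents client used to push events to the sink.
    client, err := cloudevents.NewClientHTTP()
    if err != nil {
        log.Fatalf("failed to create cloudevents client: %v", err)
    }
    ctx := cloudevents.ContextWithTarget(context.Background(), sink)

    for {
        // fetchFromProducer is a hypothetical stand-in for producer-specific
        // logic (Kafka poll, GraphQL subscription, webhook queue, etc.).
        payload := fetchFromProducer()

        // Wrap the producer payload in a CloudEvent.
        event := cloudevents.NewEvent()
        event.SetSource("example.com/my-source")     // hypothetical source attribute
        event.SetType("com.example.my-source.event") // hypothetical event type
        if err := event.SetData(cloudevents.ApplicationJSON, payload); err != nil {
            log.Printf("failed to set event data: %v", err)
            continue
        }

        // Push the event down to the sink over HTTP.
        if result := client.Send(ctx, event); cloudevents.IsUndelivered(result) {
            log.Printf("failed to send event: %v", result)
        }

        time.Sleep(time.Second) // placeholder pacing for the polling loop
    }
}

// fetchFromProducer stands in for real producer integration.
func fetchFromProducer() map[string]string {
    return map[string]string{"message": "hello from the producer"}
}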

The sample source template provided by Knative comes with all of these components; we just need to alter it according to our needs. An example of this approach is gql-source, whose goal is to capture GraphQL subscription events and push them down to the sink.

Example +++> gql-source
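For illustration, the user-facing custom resource of such a controller-based source might look roughly like the sketch below. The API group, kind, and producer-specific spec fields are hypothetical (the actual gql-source CRD may differ); only the sink block follows the standard Knative destination shape.

# Hypothetical custom resource for a controller-based source.
apiVersion: sources.example.com/v1alpha1
kind: GqlSource
metadata:
  name: my-gql-source
spec:
  # Hypothetical producer-specific configuration.
  endpoint: wss://api.example.com/graphql
  subscription: |
    subscription { itemAdded { id name } }
  # Standard Knative destination: where the adapter should deliver CloudEvents.
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display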

Sink Binding Approach

The controller approach is well rounded: apart from having a dedicated adapter for event generation, it brings in all the benefits of a Kubernetes operator, such as versioning, seamless upgrades, and validation webhooks. But it might not be suitable for all use cases. For example, there can be scenarios where a Knative service, a Kubernetes service, or a Kubernetes job would like to send an event, or you may want to chain sources; this is where sink binding comes in.

SinkBinding is a custom resource provided and managed by Knative Eventing. On creation of a SinkBinding resource, the eventing controllers inject the below two environment variables into the desired “pod specable” Kubernetes resource:

  1. K_SINK — URI of the addressable service (Knative service, Kubernetes service, etc.)
  2. K_CE_OVERRIDES — overrides to control the output format and modifications of the event sent to the sink

“Pod specable” resources are Kubernetes resources that have a pod spec. The injection is based on labels; for example, if the SinkBinding has the below match label requirement, the environment variables will be injected into pod specable resources carrying the “app: sample-source” label, and the app can use them to construct and send CloudEvents over HTTP.

selector:
  matchLabels:
    app: sample-source
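Putting it together, a complete SinkBinding targeting such resources might look like the sketch below; the subject kind, the resource names, and the sink reference are illustrative.

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: sample-source-binding
spec:
  # Subject: the pod specable resource(s) to inject K_SINK / K_CE_OVERRIDES into.
  subject:
    apiVersion: apps/v1
    kind: Deployment
    selector:
      matchLabels:
        app: sample-source
  # Sink: the addressable resource the events should be delivered to.
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display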

The CloudEvents SDKs can be leveraged to construct the CloudEvents client and send the events.
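For instance, a Kubernetes job or service bound by the SinkBinding above could send a single event roughly as in the minimal Go sketch below; the event type and payload are made up, and K_SINK is read from the environment injected by the eventing controllers.

package main

import (
    "context"
    "log"
    "os"

    cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
    // K_SINK is injected by the SinkBinding controllers.
    sink := os.Getenv("K_SINK")

    client, err := cloudevents.NewClientHTTP()
    if err != nil {
        log.Fatalf("failed to create cloudevents client: %v", err)
    }

    // Hypothetical event emitted once the job finishes its work.
    event := cloudevents.NewEvent()
    event.SetSource("example.com/sample-source") // hypothetical source attribute
    event.SetType("com.example.job.completed")   // hypothetical event type
    if err := event.SetData(cloudevents.ApplicationJSON, map[string]string{"status": "done"}); err != nil {
        log.Fatalf("failed to set event data: %v", err)
    }

    ctx := cloudevents.ContextWithTarget(context.Background(), sink)
    if result := client.Send(ctx, event); cloudevents.IsUndelivered(result) {
        log.Fatalf("failed to send event: %v", result)
    }
}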

This is the simplest approach for sending custom events. One of the main challenges with it is distribution: unlike the controller approach, where you have a custom resource definition, there is no standard way to distribute this source. An example source using this approach is s3-file-source. It makes use of a Knative service and a Kubernetes job to send events; in this case, the Knative service definition becomes the format of distribution.

Example +++> s3-file-source

Container Source Approach

The third option is a hybrid of the first and second: you create a custom resource to get the adapter scheduled in the desired namespace, but the user only needs to build the adapter; the Knative Eventing controller operates it.

ContainerSource is one of the earliest sources provided by Knative. It is a custom resource that lets you specify a container image (for the adapter), arguments to the adapter, and sink information. From the user's standpoint, they need to build an adapter that produces CloudEvents and sends them to the sink; the same sink binding approach described above can be leveraged. The advantage of this option is that you get a custom resource definition for distributing your source, and it is operated by a Knative Eventing controller instead of the user building and maintaining the controller.
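A ContainerSource resource might look like the sketch below; the resource name, adapter image, and arguments are placeholders for whatever adapter you build.

apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: sample-container-source
spec:
  template:
    spec:
      containers:
        - name: adapter
          # Placeholder image: your adapter that produces CloudEvents and sends them to K_SINK.
          image: docker.io/example/adapter:latest
          args:
            - --poll-interval=30s
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display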

Example +++> ftp source.

Final Thoughts

Based on your needs, you can choose one of these approaches; below are my observations (CS = Container Source).

+-------------------------------+------------+--------------+------+
| Feature                       | Controller | Sink Binding | CS   |
+-------------------------------+------------+--------------+------+
| Dedicated Adapter             | Yes        | Depends      | Yes  |
| CRD                           | Yes        | No           | Yes  |
| Language                      | Go         | Any          | Any  |
| Knative controller dependency | No         | Yes          | Yes  |
| Maintained by cluster admin   | Yes        | No           | No   |
| Versioning                    | Yes        | No           | No   |
| Resource Validation           | Yes        | No           | No   |
+-------------------------------+------------+--------------+------+
