
· 7 min read
Sam Sussman

Open AI plugins are an exciting new way of exposing APIs to end consumers. For decades we’ve mostly exposed APIs programmatically or via some form of graphical UI. With the advent of AI, we now have a new form factor emerging - conversational AI. Examples include Open AI Plugins (aka. GPT Plugins) and LangChain.

This new form factor uses conversation to interact with APIs. A smart agent backed by LLMs then works to serve your requests by selecting APIs from a catalog and calling them on your behalf. Instead of pointing and clicking - you just say what you need done and the bot will work to achieve your request by calling APIs step-by-step.

tip

You can find the example code for this blog post here in our example repos.


Building GPT plugins requires three things:

  1. An API hosted somewhere on the internet
  2. An OpenAPI specification describing the API’s operations in detail.
  3. An Open AI Plugin Manifest file served over HTTP

The OpenAPI specification part is especially important - without a well-documented, fully spec'ed-out API, the smart agents cannot reason about when or how to call your API. To dive deeper into the details, visit the Open AI getting started guide.

Eventual’s cloud framework makes building plugins dead simple thanks to its abstractions over AWS serverless and OpenAPI. With just a few lines of code, you can have an API deployed on AWS and a well-documented API specification available for the plugin.

The rest of this blog will walk through the noteworthy steps in building a plugin with eventual. To skip straight to the code - see this example.

First, if you haven’t already, bootstrap a new eventual project (see the Getting Started guide).

pnpm create eventual@latest

This will create a new repository with an eventual project ready to go.

Next, head on over to your service’s NPM package in packages/service. Here we’ll find some sample code that we can just delete.

Now, the simplest GPT plugin has a single API that can be called. To create APIs with eventual, we simply import the command primitive and create a new API.

import { command } from "@eventual/core";

export const helloWorld = command(
  "helloWorld",
  {
    path: "helloWorld/:name",
  },
  async (request: { name: string }) => {
    return { message: `hello ${request.name}` };
  }
);

Now, this is the simplest command possible, but it’s not yet enough for building GPT plugins because we have not yet defined its schema or documented it for generating the OpenAPI spec.

To do that, we import zod and use it to define the input/output schema of the command. This is good practice regardless of whether you’re building GPT plugins, because the schema is used to validate requests and ensure the data matches what you expect.

import { z } from "zod";

const HelloInput = z.object({
  name: z.string(),
});

const HelloOutput = z.object({
  message: z.string(),
});

export const helloWorld = command(
  "helloWorld",
  {
    path: "helloWorld/:name",
    input: HelloInput,
    output: HelloOutput,
  },
  async (request) => {
    return { message: `hello ${request.name}` };
  }
);

We now have an API with request validation and OpenAPI schema generation.

The next step is to document the properties with information that will be read and reasoned over by the smart agents. Unfortunately, this requires that we install an external dependency on @anatine/zod-openapi.

pnpm add @anatine/zod-openapi

We then import the extendApi function from that package. It decorates zod schemas with metadata specific to OpenAPI.

import { extendApi } from "@anatine/zod-openapi";

const HelloInput = z.object({
  name: extendApi(z.string(), {
    description: "name of the person to say hello to",
  }),
});

const HelloOutput = z.object({
  message: extendApi(z.string(), {
    description: "the message greeting the user",
  }),
});

Finally, we add a summary to our command describing the API operation. This description helps Open AI know when to call this API - so make sure to provide a helpful one.

export const helloWorld = command(
  "helloWorld",
  {
    summary: "This API generates a greeting message when given a name",
    path: "helloWorld/:name",
    input: HelloInput,
    output: HelloOutput,
  },
  async (request) => {
    return { message: `hello ${request.name}` };
  }
);

Now that we have our API with an OpenAPI specification, we need to add two endpoints for Open AI:

  1. openapi.json - an HTTP GET endpoint serving the API spec (any path works; the example below uses /spec/openapi.json)
  2. ai-plugin.json - an HTTP GET endpoint at /.well-known/ai-plugin.json serving the plugin manifest

Both of these can be achieved with a command.

Eventual provides an intrinsic ApiSpecification object that can be used at runtime to access the Open API specification. With that, we can simply create a command to serve the openapi.json file.

import { ApiSpecification } from "@eventual/core";
import { OpenAPIObject } from "openapi3-ts";

export const specificationJson = command(
  "specificationJson",
  {
    method: "GET",
    // this can be any path
    path: "/spec/openapi.json",
  },
  (): OpenAPIObject => {
    return ApiSpecification.generate();
  }
);

The ai-plugin.json endpoint can be implemented similarly. It serves a simple static JSON document with some metadata required by Open AI. To see the accepted fields and options, refer to the Open AI Plugin documentation.

export const aiPluginJson = command(
  "aiPluginJson",
  {
    // the URL path for accessing this plugin manifest
    // must be this specific string required by Open AI
    path: "/.well-known/ai-plugin.json",
    // it must be an HTTP GET operation
    method: "GET",
  },
  (_, { service: { serviceUrl } }) => {
    return {
      schema_version: "v1",
      name_for_human: "TODO Plugin",
      name_for_model: "todo",
      description_for_human:
        "Plugin for managing a TODO list. You can add, remove and view your TODOs.",
      description_for_model:
        "Plugin for managing a TODO list. You can add, remove and view your TODOs.",
      auth: {
        type: "none",
      },
      api: {
        type: "openapi",
        url: `${serviceUrl}/spec/openapi.json`,
        is_user_authenticated: false,
      },
      contact_email: "support@example.com",
      legal_info_url: "http://www.example.com/legal",
    };
  }
);

We’re now done with the implementation and can move on to testing and deploying.

Testing and deploying

  1. Run the service locally and test with LangChain or Open AI
  2. Deploy the service to AWS and register the plugin with Open AI

To run locally, run the local command from the root of the repository.

pnpm eventual local

This will stand up a server running on localhost that you can then interact with from LangChain or Open AI. See the OpenAI docs for a detailed how-to guide on that.
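
Before pointing LangChain or Open AI at the server, it can be worth a quick sanity check that the two well-known endpoints are being served. Below is a minimal sketch using Node's built-in fetch - the port is an assumption, so use whatever address the local command prints when it starts.

// sanity-check the locally served plugin manifest (sketch)
// NOTE: the port is an assumption - use the address printed by `pnpm eventual local`
const manifest = await fetch(
  "http://localhost:9000/.well-known/ai-plugin.json"
).then((res) => res.json());

console.log(manifest.name_for_model); // "todo"
console.log(manifest.api.url); // should point at .../spec/openapi.json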

tip

For an example of how to test with LangChain, check out our example repo here.

To deploy our new service to AWS serverless, run the deploy command from the root of the repository.

pnpm run deploy

This will deploy a best-practice architecture to AWS, consisting of API Gateway (with the OpenAPI spec attached), AWS Lambda Functions, and more.

Modifying and maintaining

As you modify your plugin, eventual will always ensure your published schemas are up to date and reflect the latest changes, allowing you to focus on building your service.

Closing

This has been a quick overview of how to build a ChatGPT plugin with eventual. It only scratches the surface of what’s possible - to learn more, such as how to build event-driven services with eventual’s distributed systems primitives, visit https://eventual.ai, follow us on Twitter @eventualAi and star us on GitHub.

· 8 min read
Sam Goodwin

Choreography vs Orchestration

Welcome to the first blog in a series where we explore Event-Driven Architectures (EDA) and Domain Driven Design (DDD) patterns using eventualCloud - a cloud framework designed for building massively scalable distributed systems.

In this series, we'll showcase how Eventual's distributed systems primitives, APIs, Messaging, Durable Workflows (and more), simplify and standardize the implementation of event-driven architectures on AWS with TypeScript.

Each of the code examples in this series will be available in the eventual-eda-patterns repository.

tip

If you're new to EDAs, we highly recommend checking out this resource and this post from Yan Cui (The Burning Monk) for further learning.


What are Choreography and Orchestration

In broad strokes there are two main approaches to organizing communication and collaboration between services in event-driven systems: 1) Choreography and 2) Orchestration.

(source: Choreography vs Orchestration - David Boyne)

Choreography is a decentralized approach where each service is responsible for reacting to events from other services and taking appropriate actions. There is no central coordinator, and the services are loosely coupled.

Orchestration is a centralized approach where a coordinator service (orchestrator) is responsible for directing and managing the communication between services, making decisions, and ensuring the correct order of execution.

Eventual provides first-class primitives in TypeScript to support both of these techniques. They can be used interchangeably and are interoperable.

  1. The event and subscription primitives streamline the creation and configuration of Event Bridge Rules and Lambda Functions, and maintain type safety across publishers and subscribers.
  2. The workflow and task primitives enable the development of long-running, durable workflows using imperative code.

In this blog, we’ll build an example service using Choreography and Orchestration to illustrate their differences and to demonstrate the benefits of eventual.

Example: Order Processing Pipeline

Our example will process an Order in a hypothetical e-commerce service that performs the following steps:

  1. process the Payment
  2. ship the Order to the customer
  3. update the Order status record

Using Choreography

To build this process with choreography, events are published and handled for each step in the process.

  1. When a customer places an order, the Order Service emits an OrderCreated event.
  2. The Payment Service listens for the OrderCreated event, processes the payment, and emits a PaymentProcessed event.
  3. The Shipping Service listens for the PaymentProcessed event, and then ships the order.
  4. The Order Service listens for the OrderShipped event and updates its status

The below diagram depicts this visually - showing three services “ping ponging” messages between each other.

To build this with eventual, we will use the event and subscription primitives.

The event primitive declares an event with a name and schema. Events can then be published and subscribed to.

import { event } from "@eventual/core";
import { z } from "zod";

export const OrderCreated = event(
  "OrderCreated",
  z.object({
    orderId: z.string(),
  })
);

export const PaymentProcessed = event(
  "PaymentProcessed",
  z.object({
    orderId: z.string(),
    paymentId: z.string(),
  })
);

export const OrderShipped = event(
  "OrderShipped",
  z.object({
    orderId: z.string(),
    shipmentId: z.string(),
  })
);

tip

Eventual heavily promotes type safety and schemas. This helps prevent errors early in the development cycle and improves documentation of the system.

The subscription primitive can then be used to “subscribe” to one or more events by specifying:

  • a unique name to identify the subscription
  • a list of events it listens for
  • and a handler function that processes them

First, we’ll subscribe to the OrderCreated event to charge the card associated with the order and then publish a PaymentProcessed event.

export const processOrderPayment = subscription(
  "processOrderPayment",
  {
    events: [OrderCreated],
  },
  async (event) => {
    // process the payment using an API (e.g. stripe)
    const paymentId = await chargeCard(event.orderId);

    // emit an event that the payment was processed
    await PaymentProcessed.emit({
      orderId: event.orderId,
      paymentId,
    });
  }
);

Next, whenever a PaymentProcessed event is received, submit the order for shipping, receive a shipmentId and forward that along as an OrderShipped event.

export const shipOrderAfterPayment = subscription(
  "shipOrderAfterPayment",
  {
    events: [PaymentProcessed],
  },
  async (event) => {
    // call the shipOrder API
    const shipmentId = await shipOrder(event.orderId);

    // emit an event recording that the order has been shipped
    await OrderShipped.emit({
      orderId: event.orderId,
      shipmentId,
    });
  }
);

Finally, whenever an OrderShipped event is received, update the order status to reflect that change.

export const updateOrderStatus = subscription(
  "updateOrderStatus",
  {
    events: [OrderShipped],
  },
  async (event) => {
    await updateOrder(event.orderId, { status: "Shipped" });
  }
);

All of these steps are performed independently of each other in response to events published to a durable AWS Event Bus. This guarantees that events are handled even in the face of intermittent failures and enables elastic scaling.

Eventual creates a best-practice serverless architecture for each Subscription - a dedicated Lambda Function for processing each event, an Event Bridge Rule to route events to your function, and an SQS Dead Letter Queue to catch messages that fail to process.

Using Orchestration

Let’s now simplify this example using Orchestration. Instead of juggling events between subscribers, we will implement a workflow that calls multiple tasks - this allows us to centralize and explicitly control the order of execution of each step.

First, we’ll create each of the individual tasks. Tasks are functions that will be called as part of the workflow to do work such as integrating with a database or another service.

const processPayment = task("processPayment", async (orderId: string) => {
  // (integrate with your payment API, e.g. Stripe)
});

const shipOrder = task("shipOrder", async (orderId: string) => {
  // integrate with the shipping API (etc.)
});

const updateOrderStatus = task(
  "updateOrderStatus",
  async (input: { orderId: string; status: string }) => {
    // update the order database (e.g. DynamoDB)
  }
);

note

Our task implementations only show high-level details - implementation is left to your imagination ✨.

Finally, we’ll implement the processOrder pipeline using eventual’s workflow primitive. This will allow us to express that step-by-step orchestration logic as an imperative program.

export const processOrder = workflow(
  "processOrder",
  async (orderId: string) => {
    const paymentId = await processPayment(orderId);

    const shippingId = await shipOrder(orderId);

    await updateOrderStatus({
      orderId,
      status: "Shipped",
    });

    return {
      orderId,
      paymentId,
      shippingId,
    };
  }
);

When a Workflow calls a Task, it uses an Asynchronous Lambda Invocation to invoke it in a durable and reliable way. The Task, which is hosted in its own dedicated Lambda Function, performs the work and then communicates its result back to the workflow by sending a message to the SQS FIFO queue.

Although workflows appear to be just ordinary asynchronous functions, this is actually an abstraction designed to enable the development of orchestrations in imperative TypeScript (as opposed to DSLs like AWS Step Functions).

info

We use a similar technique to Azure Durable Functions and Temporal.io (called re-entrant processes) except the whole thing runs on AWS Serverless backed by SQS FIFO, S3 and Lambda.

See the Workflow docs for a deeper dive.

A major benefit of implementing workflows with this technique is that they can be tested like any other function. This greatly simplifies maintainability and allows you to really get into the details and ensure your workflow handles all scenarios.

let env: TestEnvironment;

// if there is pollution between tests, call reset()
beforeAll(async () => {
  env = new TestEnvironment();
});

test("shipOrder should not be called if processPayment throws", async () => {
  // mock the processPayment API to throw an error
  env.mockTask(processPayment).fail(new Error("failed to process payment"));

  // start the processOrder workflow
  const execution = await env.startExecution({
    workflow: processOrder,
    input: "orderId",
  });

  // allow the simulator to advance time
  await env.tick();

  // get the status of the workflow
  const status = (await execution.getStatus()).status;

  // assert it failed
  expect(status).toEqual(ExecutionStatus.FAILED);
});

note

See the docs on testing with eventual for more information on how to simulate and unit test your whole application locally.

Summary

You’re free to mix and match each of these approaches. Workflows can publish events and subscriptions can trigger workflows, etc. You should always use the right tool for the job - our goal with eventual is to make choosing and applying these patterns as straightforward as possible by providing an integrated experience.
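
For example, here is a minimal sketch of the two styles working together - a subscription (choreography) that kicks off the processOrder workflow (orchestration) whenever an OrderCreated event arrives. The startExecution call is an assumption about how a workflow is started outside of the test environment:

export const startOrderProcessing = subscription(
  "startOrderProcessing",
  {
    events: [OrderCreated],
  },
  async (event) => {
    // hand the event off to the orchestrated pipeline
    // (assumes workflows expose a `startExecution({ input })` method)
    await processOrder.startExecution({
      input: event.orderId,
    });
  }
);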

Stay tuned for more blogs on EDAs. In the meantime, please star us on GitHub, follow us on Twitter, and come chat with us on Discord.

A final note: don't forget to sign up for the eventualAi wait-list which enables the generation of these types of architectures using natural language and Artificial Intelligence.

· 5 min read
Sam Goodwin

What a year it has been already! It’s only been 3 months and we’ve already seen glimpses of a future that looks nothing like today. A world where real AI (actually intelligent AI) is available to everyone at low-cost. I think we’re all starting to wonder about our place in the world and how we need to change the way we work and what we do.

I’ve always been obsessed with efficiency and abstraction, which is what has drawn me to programming as a profession. For the past 5-6 years I’ve been thinking deeply about how cloud development, specifically on AWS, can be distilled into a much simpler programming model than what we have today. I’ve always wondered - what is that next level of abstraction for developers that will unlock more value? The next “infrastructure as code” (IaC), so to speak. I used to believe strongly that it was new programming languages, but it now seems more likely that the next level of abstraction is natural language!

What is powerful about IaC is the concept of “declarative infrastructure”. Instead of worrying about HOW to update your infrastructure configuration, you simply “declare” WHAT infrastructure you need and the engine takes care of “making it happen”. I love this way of working because it eliminates a whole class of problems and enables new forms of abstractions like templates and component libraries of infrastructure.

Today, I’m excited to announce eventualAi (waitlist) and eventualCloud (public beta), which we believe are the next steps in declarative software development - where business requirements can be declared in natural, conversational language and then fed into an AI system that takes care of making it happen.

eventualAi is a companion autonomous software development team of smart agents who know how to use the eventualCloud framework and apply Domain Driven Design (DDD) principles to translate business requirements into functioning services. Feed it your business problem or domain in natural language and have it “print” on-demand, scalable and fully serverless solutions.



eventualCloud is an open source, high-level framework for building distributed systems on AWS with TypeScript, serverless and IaC. It provides "core abstractions" — including APIs, Transactions, Messaging and long-running, durable Workflows — that shield you from the complexities of distributed systems and ensure a consistent, best-practice serverless architecture. Our goal with eventualCloud is to streamline and standardize the process of building functional services on AWS by providing code-first primitives and patterns that align with business concepts. We won't go deep into the details, for that you can read this dedicated blog post.

In short, we’ve long wanted a better way to build cloud services - one that gives the right level of abstractions for distributed systems in a simple programming model. For example, implementing long-running workflows shouldn’t involve learning weird, archaic domain specific languages when we already have tools like if-else, for-loops, async/await in all the programming languages we know and love. We want to program the cloud in the same way we program servers and local machines - and eventualCloud enables this.

An unforeseen benefit of how we built eventualCloud is that the same solutions that make it easier for human engineers to translate business requirements into solutions, also help AI agents do the same. Because our abstractions map closely with business processes and constrain the problem of “build me a service that does XYZ” down to a few repeatable and scalable patterns, intelligent agents built with LLMs (when given the right instructions and context) can apply these patterns just as effectively.

This new capability ushers in a transformative way for businesses to tackle problem-solving. It is now possible to truly “work backwards” from the customer in a declarative way, focusing on WHAT needs to be solved instead of HOW to solve it. Define your business goals, tenets, and policies in natural language, and then employ an intelligent agent to explore your business domain automatically. This intelligent agent can incorporate feedback, suggest improvements, break down the domain into smaller pieces, and even implement solutions. Additionally, it can maintain and operate production services, providing an all-encompassing solution for businesses.

We believe that we’re entering a new era of software development frameworks where the primary user is an intelligent AI with human supervision. Frameworks already do the job of simplifying and standardizing how problems should be solved, making them ideal “targets” for what we think of as “AI compilers” - i.e. systems of intelligent agents that “compile” business problems described in natural language to functioning solutions.

We’ll be sharing more soon. You can start building and playing with the eventualCloud framework today or sign up for the eventualAi waitlist. Come say hi on Discord, star us on GitHub, and follow us on Twitter. We’d love to hear about your business use-cases to help refine the technology before open sourcing it.

· 4 min read
Sam Goodwin

For years, I’ve dreamed of building serverless, massively distributed micro-services with the simplicity of a local program. I strongly believe there’s no conceptual difference between a laptop and the cloud. Both are complex systems made of many components. However, while we can simply program our laptop with a single language and runtime, unfortunately, the same is not true for the cloud. The developer experience is still at the “hardware level” so to speak - meaning that developers must understand how to configure each individual service and then glue them together into a functioning architecture. This is hard.

While there’s no conceptual difference between a laptop and the cloud, there is a glaringly obvious physical one - the cloud is massively distributed, and distributed systems are subject to an entire category of new problems around timing, failure and consistency. In a way, a large part of AWS’s business is providing solutions to these problems in the form of managed services, such as AWS Step Functions, or Event Bridge.

But, these services come at the cost of a consistent developer experience. They’re severely disjointed. You can’t just write code and run it - instead you have to: 1) hand-configure these services, all with their own domain specific configuration languages, and 2) reason through each of their failure cases and how they affect each other. You also can’t test all this easily because the entire system can’t run locally. This is why the current “best practice” is that you should mostly rely on integration tests. All of this requires specialized knowledge, has a slow developer iteration cycle, and limits composability and re-usability. Over time this accumulates as tech debt.

We believe that developing a scalable and fault tolerant service should be as lean as building a frontend app in a framework like NextJS. Simply make a change and then observe its impact in less than a second, not minutes or hours. The framework should take care of reasoning through complex failure cases so you can just focus on your application.

This is the vision of Eventual.

Eventual is an open source TypeScript framework that offers "core abstractions" — including APIs, Messaging and long-running, durable Workflows — that shield you from the complexities of distributed systems and ensure a consistent, best-practice serverless architecture. The top-level concept of Eventual is a "Service" that has its own API Gateway, Event Bus, and Workflow Engine that you customize and build on using the core abstractions.

Each Service can be thought of as exposing a Service Interface, consisting of APIs and Events, and then performing Event-Driven Orchestration & Choreography with Workflows, Tasks and Subscriptions.

Service Contract

These can then be adopted iteratively, and seamlessly integrate and interact with any external or existing system, whether it be a database or another service. In our opinion, having confidence in (and relying on) these consistent patterns can 10x productivity from a reliability, productivity, maintainability and agility standpoint.

Everything can be written and tested in a single TypeScript code-base. You can run your massively scalable distributed cloud system LOCALLY before you even deploy 🤯. Run, test and iterate locally, then deploy only when it’s working. And here's the kicker - you can even debug production locally. That's right - debug your production system on your local machine.

Developer Iteration Cycle

Eventual is built with (and consumed from) modern Infrastructure-as-Code tooling such as the AWS CDK and Pulumi to give you maximum flexibility and control, as well as total ownership of your data by deploying everything into your own AWS account and within your own security boundaries. We implement best practices and solve edge cases for you so you can spend more time on your application. Best practice security, best practice scalability, best practice operations, best practice cost efficiency etc. - we lay the groundwork for you.

Finally, because it’s all TypeScript, you have full end-to-end type safety for a massively scalable distributed cloud system. We believe that relying on the TypeScript compiler to scream at you when you make mistakes is one of the biggest time savers you can have as a developer. As you make changes, simply follow the red squiggly lines all the way from your frontend → backend service → infrastructure configuration.

In the next part, eventualCloud Part 2 - Features, we'll walk through each of the features offered by Eventual. Be sure to check that out if you want to see some code!

To jump right in, see the Quick Start.

-Sam

· 11 min read
Sam Goodwin

In the previous part, eventualCloud Part 1 - Philosophy, we introduced the philosophy behind Eventual. How we envision a world where programming massively scalable, distributed systems in the cloud is as simple as writing local programs. In this second part, we'll give an overview of Eventual's features and developer experience.

Service

The Service is the top-level Concept of Eventual. It's a totally encapsulated micro-service deployable with a simple Construct that can be instantiated in an AWS CDK or Pulumi application.

It takes only 4 lines of code to deploy an entire micro-service to AWS:

const myService = new Service(this, "Service", {
  name: "my-service",
  // point it at where your backend code NPM package is
  entry: require.resolve("@my/service"),
});

Each Service has its own API Gateway, Event Bus and Workflow engine. And, because it’s all just Infrastructure-as-Code, it can be customized to your heart’s content.

The business logic of the Service is automatically discovered by analyzing the entry point of your code. In there are Commands, Events, Subscriptions, Workflows, Tasks and Signals.

APIs

What would a service be without APIs? Answer: not much. As mentioned, each Service comes with its own API Gateway that you can register routes on using Commands (RPC) or an HTTP router.

Command - i.e. RPC

A Command is simply a function that can be called over HTTP - aka. Remote Procedure Call (RPC). It has a simple input/output contract - it takes one argument as input and returns a value as output.

export const hello = command("hello", async (name: string) => {
  return `hello ${name}`;
});

Each command is automatically added as a route on your Service’s API Gateway and invokes a dedicated, individually tree-shaken AWS Lambda Function. This enables you to tweak and tune the memory, timeout (and any other properties) for individual API routes.

APIs are exposed to the outside world, so it's important to provide a schema to validate requests. Eventual integrates with Zod for defining schemas.

export const hello = command(
  "hello",
  {
    input: z.string(),
  },
  async (name) => {
    return `hello ${name}`;
  }
);

These schemas are then used for runtime validation in your Function, but also to generate an OpenAPI spec and attach it to your API Gateway. This ensures your Lambda Function is only invoked if the data is valid according to the schema - a good practice.

Calling commands from another application, for example your frontend react application, can be achieved without any code generation using the ServiceClient. And it’s all type-safe.

import type * as MyService from "@my/service";

const client = new ServiceClient<typeof MyService>({
  serviceUrl: process.env.SERVICE_URL!,
});

await client.hello("sam");

Simply import the types of your backend into the consuming application and instantiate a client. In this case @my/service points to a separate NPM package containing the service code. You can then directly call commands as if they were in the same code-base, while also promoting sensible separation of concerns.

REST (i.e. raw HTTP)

If you need to register raw HTTP routes, such as GET, PUT, POST, PATCH, etc., you can always use the api router.

api.get("/hello", async (request) => {
  return new Response("OK");
});

Similar to Commands, each route translates to an individual Lambda Function invoked by your API Gateway.

Middleware

Commands and HTTP routes can integrate with middleware chains that perform functions such as validating requests, setting headers, authorizing and fetching user information.

To create a Command with middleware, use the api.use utility to first create a middleware chain, and then finally create the command.

export const hello = api
  .use(cors)
  .use(authorized)
  .command("hello", async (name: string, { user }) => {
    // etc.
  });

Messaging

The next aspect of an event-driven micro-service is Messaging. In Eventual, we provide Events and Subscriptions for passing messages around within and outside a Service.

When something happens in a service, it’s often a good idea to record it as an “event” and emit it to an Event Bus so other parts of your system can react to it. Events are also useful for logging and analytical use-cases, among many others. This is known as “Choreography”.

Subscriptions have the benefit of decoupling the emitter of an event from the subscriber. This simplifies how you evolve your system over time as you can always add more subscribers without disrupting other parts of your service.

Event

In Eventual, you declare Event types:

export const HelloEvent = event("HelloEvent");

You can then emit an event from anywhere using the emit function:

await HelloEvent.emit({ key: "value" });

Sticking with our theme of TypeScript and type-safety, Eventual supports declaring a type for each event - and we highly encourage you to do so. There’s nothing worse than un-typed code.

export const HelloEvent = event<{
  key: string;
}>("HelloEvent");

And for that extra level of safety, you can also use Zod to define a schema for runtime validation.

export const HelloEvent = event("HelloEvent", z.object({
  key: z.string().min(1),
}));

Subscription

To process events, you create a Subscription to one or more event types.

export const onHelloEvent = subscription(
  "onHelloEvent",
  {
    events: [HelloEvent],
  },
  async (event) => {
    console.log(event.key);
  }
);

Each Subscription will automatically create a new Lambda Function, Event Bridge Rule and an SQS Dead Letter Queue.

Your function will be invoked by AWS Event Bridge for each event that matches the selection, and any messages that fail to process will be safely stored in the dead letter queue for you to deal with as part of your operational procedure.
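
To make that decoupling concrete, here is a sketch of a second, independent subscription to the same HelloEvent - added without touching the emitter or the existing onHelloEvent subscription. The recordAnalytics helper is hypothetical:

export const helloEventAnalytics = subscription(
  "helloEventAnalytics",
  {
    events: [HelloEvent],
  },
  async (event) => {
    // recordAnalytics is a hypothetical helper, e.g. a write to an analytics store
    await recordAnalytics("HelloEvent", { key: event.key });
  }
);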

Orchestration

When we talk about programming the cloud like a local machine, there’s just no getting around the distributed nature of it. Everything fails, all the time. So, orchestrating business logic that interacts with people, time and services is a challenging task.

Workflow

The most powerful piece of Eventual is most definitely the Workflow. In Eventual, you can orchestrate long running, durable workflows using plain TypeScript - such as if-else, loops, functions, async/await, and all that goodness. This gives you an expressive, Turing complete way to implement business logic, however complex, distributed or time-dependent it may be.

Workflows are where you put control-flow logic. Eventual ensures your code runs exactly as written, in a fault tolerant way such that you do not need to worry about things like transient failures, race conditions, temporary outages, or runtime duration etc.

For example, the below code implements a workflow that will send an email to a user every day. It will loop forever, sleep for a day and then send an email.

export const emailDaily = workflow("emailDaily", async (email: string) => {
  while (true) {
    await duration(1, "day");

    // send an email to the user every day
    await sendEmail(email);
  }
});

With Eventual, your code can run forever, even sleep forever. We achieve this feat using serverless primitives behind the scenes to allow you to program distributed systems with the mental model of a local machine.

Task

Workflows are not where you do actual work, such as interacting with a database. They are purely for deciding what to do and when. Instead, you separate out side-effects into what are called Tasks.

A task is a function that runs in its own AWS Lambda Function and can be invoked by a Workflow with exactly-once guarantees.

export const getUser = task("getUser", async (userId: string) => {
  return client.getItem({
    TableName: process.env.TABLE_NAME,
    Key: { userId },
  });
});

If you call a task, you can be sure it will run exactly once, which enables you to safely control when you interact and change resources such as database records.

You can also configure things like a retry policy that the platform will enforce, as well as protections such as heartbeats.

task(
  "getUser",
  {
    // require a heartbeat every 30s
    heartbeatTimeout: duration(30, "seconds"),
  },
  async (userId: string, ctx) => {
    await ctx.sendHeartbeat();
  }
);

Signal

Signals are messages that can be sent into a running workflow. They’re useful for integrating other parts of your application into a workflow, for example having a person approve something before continuing.

Creating a Signal is very similar to creating an Event type. All you need is a name and an optional type.

export const userEmailChanged = signal<string>("userEmailChanged");

You can then use expectSignal within a workflow to pause execution until such information is received:

await userEmailChanged.expectSignal();

Or register a callback to be invoked whenever a signal is received:

userEmailChanged.onSignal(async (newAddress) => {
  emailDaily(newAddress);
});

Signals are a powerful tool for building capabilities around workflows, for example human-in-the-loop systems where a UI or CLI can send data into a workflow to influence it.
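
As a sketch of that idea, a command could push a new email address into a running execution. The sendSignal call and its signature (an execution ID plus a payload) are assumptions here rather than a confirmed API:

export const changeEmail = command(
  "changeEmail",
  async (request: { executionId: string; newEmail: string }) => {
    // assumption: signals expose a `sendSignal(executionId, payload)` method
    await userEmailChanged.sendSignal(request.executionId, request.newEmail);
  }
);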

This is barely scratching the surface of workflow orchestration - to learn more visit eventual.ai.

Testing

Testing distributed systems is difficult because of how fragmented the system is physically. It can be impossible or impractical to reproduce timing and race conditions in a real-world system with integration tests.

In Eventual, you can test any function locally. We also provide a TestEnvironment utility that gives fine-grained control over time and the underlying system, so that you can target tests towards those tricky edge cases.

You can write tests for your workflows with per-second granularity, up to extremes such as days, months or even years.

test("workflow should wait 1 second before completing", async () => {
  const execution = await env.startExecution(myWorkflow, "input");

  expect(await execution.getStatus()).toBe("PENDING");

  // advance time by 1 second
  await env.tick(1);

  expect(await execution.getStatus()).toBe("SUCCESS");
});

This test starts a workflow, asserts that it is running, explicitly advances time by 1 second, and then asserts that the workflow completed successfully. This form of control allows you to craft deterministic tests for timing and race conditions.

Local Simulation

An entire Eventual service can be simulated locally. Simply run the eventual local command to stand up a server on localhost:9000 which can be interacted with on your local machine.

eventual local

Set breakpoints in your code and step-through any part of your application.

Even parts that span multiple cloud services, such as APIs emitting Events, that trigger Subscriptions, that then trigger Workflows, and so on.

The entire control flow can be walked through within the context of a single NodeJS runtime.

Debug Time Machine

Imagine the scenario where you’ve been paged at 2am because one of your workflows broke for some unknown reason.

Eventual provides what we call the “Debug Time Machine” that allows you to replay a workflow execution that already ran (or is still running) in production, locally, so you can debug from the comfort of your IDE.

Simply take the workflow execution ID and run the eventual replay CLI command.

eventual replay --execution-id <execution-id>

This will download the workflow’s history and run everything locally. You can then attach your debugger, for example with VS Code, and step through everything that happened as if it’s happening in real-time. Inspect variables, look at the returned values of tasks, identify and fix the bug.

A note on end-to-end Type Safety

This blog is getting a bit long - it’s hard to fit it all in! We’ll finish with a note on how Eventual really goes the extra mile when it comes to “end-to-end type safety”.

We use types to map everything back to the source, from your frontend → to your service implementation → and finally to its infrastructure configuration. This makes refactoring as easy as following those red squiggly lines. If your code compiles, you can be pretty confident it’s working - or at least that there are no stupid mistakes 😉.

As previously mentioned, you can use the ServiceClient to call your Commands without generating any code. Simply import the types of your backend code and instantiate the client.

import type * as MyService from "@my/service";

const client = new ServiceClient<typeof MyService>({
  serviceUrl: process.env.SERVICE_URL!,
});

await client.hello("sam");

The same goes for when you’re configuring infrastructure. Import the types of the backend and then safely customize and integrate with each of the pieces of generated infrastructure.

import type * as MyService from "@my/service";

const service = new Service<typeof MyService>(this, "Service", {
  commands: {
    // safely configure any of the commands
    hello: {
      environment: { .. },
    },
  },
});

// safely access any generated infrastructure

// such as the hello Command's Lambda Function
service.commands.hello;

// or a Subscription's dead letter queue
service.subscriptions.onHelloEvent.deadLetterQueue;

Conclusion

That does it for now. To learn more, visit eventual.ai, star us on GitHub, follow us on Twitter, and please come chat with us on Discord. We’d love to hear from you!

We want to help you build scalable cloud services. And we want it to be fast, and we want it to be fun.