
· 23 min read
Asher Sterkin

Introducing a new programming language creates an opportunity and an obligation to reevaluate existing methodologies, solutions, and the entire ecosystem—from language syntax and toolchain to the standard library—through the lens of first principles.

Simply lifting and shifting existing applications to the cloud has been broadly recognized as risky and sub-optimal. Without proper adaptation, such a transition tends to make applications less secure, less efficient, and more costly. This principle holds for programming languages and their ecosystems as well.

Currently, most cloud platform vendors accommodate mainstream programming languages like Python or TypeScript with minimal adjustments. While leveraging existing languages and their vast ecosystems has certain advantages—given it takes about a decade for a new programming language to gain significant traction—it's constrained by the limitations of third-party libraries and tools designed primarily for desktop or server environments, with perhaps a nod towards containerization.

Winglang is a new programming language pioneering a cloud-oriented paradigm that seeks to rethink the cloud software development stack from the ground up. My initial evaluations of Winglang's syntax, standard library, and toolchain were presented in two prior Medium publications:

  1. Hello, Winglang Hexagon!: Exploring Cloud Hexagonal Design with Winglang, TypeScript, and Ports & Adapters
  2. Implementing Production-grade CRUD REST API in Winglang: The First Steps

Capitalizing on this exploration, I will focus now on the higher-level infrastructure frameworks, often called 'Middleware'. Given its breadth and complexity, Middleware development cannot be comprehensively covered in a single publication. Thus, this publication is probably the beginning of a series where each part will be published as new materials are gathered, insights derived, or solutions uncovered.

Part One of the series, the current publication, provides an overview of Middleware origins, the current state of affairs, and possible directions for Winglang Middleware. Subsequent publications will look at more specific aspects.

With Winglang being a rapidly evolving language, distinguishing the core language features from the third-party Middleware built atop them will remain an unfolding narrative throughout this series. Stay tuned.

Acknowledgments

Throughout the preparation of this publication, I utilized several key tools to enhance the draft and ensure its quality.

The initial draft was crafted with the organizational capabilities of Notion's free subscription, facilitating the structuring and development of ideas.

For grammar and spelling review, the free version of Grammarly proved useful for identifying and correcting basic errors, ensuring the readability of the text.

The enhancement of stylistic expression and the narrative coherence checks were performed using the paid version of ChatGPT 4.0.

I owe a special mention to Nick Gal’s informative blog post for illuminating the origins of the term "Middleware," helping to set the correct historical context of the whole discussion.

While these advanced tools and resources significantly contributed to the preparation process, the concepts, solutions, and final decisions presented in this article are entirely my own, for which I bear full responsibility.

What is Middleware?

The term "Middleware" has come a long way from its inception and formal definitions to its usage in day-to-day software development practice, particularly within web development.

Covering every nuance and variation of Middleware would be a long journey, worthy of a comprehensive volume entitled “The History of Middleware”—a volume still awaiting its author.

In this exploration, we aim to chart the principal course, distilling the essence of Middleware and its crucial role in filling the gap between basic-level infrastructure and the practical needs of cloud-based application development.

Origins of Middleware

The concept of Middleware traces its roots back to an intriguing figure: the Russian-born British cartographer and cryptographer Alexander d’Agapeyeff, at the 1968 NATO Software Engineering Conference.

Despite the scarcity of official information about d’Agapeyeff, his legacy extends beyond the enigmatic d’Agapeyeff Cipher, as he also played a pivotal role in the software industry as the founder and chairman of the "CAP Group." Insights into the early days of Middleware are illuminated by Brian Randell, a distinguished British computer scientist, in his recounting of "Some Middleware Beginnings."

At the NATO Conference d’Agapeyeff introduced his Inverted Pyramid—a conceptual framework positioning Middleware as the critical layer bridging the gap between low-level infrastructure (such as Control Programs and Service Routines) and Application Programs:

Fig 1: Alexander d'Agapeyeff's Pyramid

Here is how A. d’Agapeyeff explains it:

An example of the kind of software system I am talking about is putting all the applications in a hospital on a computer, whereby you get a whole set of people to use the machine. This kind of system is very sensitive to weaknesses in the software, particular as regards the inability to maintain the system and to extend it freely.

This sensitivity of software can be understood if we liken it to what I will call the inverted pyramid... The buttresses are assemblers and compilers. They don’t help to maintain the thing, but if they fail you have a skew. At the bottom are the control programs, then the various service routines. Further up we have what I call middleware.

This is because no matter how good the manufacturer’s software for items like file handling it is just not suitable; it’s either inefficient or inappropriate. We usually have to rewrite the file handling processes, the initial message analysis and above all the real-time schedulers, because in this type of situation the application programs interact and the manufacturers’ software tends to throw them off at the drop of a hat, which is somewhat embarrassing. On the top you have a whole chain of application programs.

The point about this pyramid is that it is terribly sensitive to change in the underlying software such that the new version does not contain the old as a subset. It becomes very expensive to maintain these systems and to extend them while keeping them live.

A. d'Agapeyeff emphasized the delicate balance within this pyramid, noting how sensitive it is to changes in the underlying software that do not preserve backward compatibility. He also warned against the danger of over-generalized software, too often unsuitable for any practical need:

In aiming at too many objectives the higher-level languages have, perhaps, proved to be useless to the layman, too complex for the novice and too restricted for the expert.

Despite improvements in general-purpose file handling and other advancements since d’Agapeyeff's time, the essence of his observations remains relevant.

There is still a big gap between low-level infrastructure, today encapsulated in an Operating System, like Linux, and the needs of final applications. The Operating System layer reflects and simplifies access to hardware capabilities, which are common for almost all applications.

Higher-level infrastructure needs, however, vary between different groups of applications: some prioritize minimizing operational cost, others speed of development, and still others tightened security.

Different implementations of the Middleware layer are intended to fill this gap and to provide domain-neutral services better tailored to the non-functional requirements of various groups of applications.

This consideration also explains why it’s always preferable to keep the core language (in our case, Winglang) and its standard library relatively small and stable, leaving more variability to be addressed by these intermediate Middleware layers.

Patterns, Frameworks, and Middleware

The middleware definition was refined in the “Patterns, Frameworks, and Middleware: Their Synergistic Relationships” paper, published in 2003 by Douglas C. Schmidt and Frank Buschmann. Here, they define middleware as:

software that can significantly increase reuse by providing readily usable, standard solutions to common programming tasks, such as persistent storage, (de)marshaling, message buffering and queueing, request demultiplexing, and concurrency control. Developers who use middleware can therefore focus primarily on application-oriented topics, such as business logic, rather than wrestling with tedious and error-prone details associated with programming infrastructure software using lower-level OS APIs and mechanisms.

To understand the interplay between Design Patterns, Frameworks and Middleware, let’s start with formal definitions derived from the “Patterns, Frameworks, and Middleware: Their Synergistic Relationships” paper Abstract:

Patterns codify reusable design expertise that provides time-proven solutions to commonly occurring software problems that arise in particular contexts and domains.

Frameworks provide both a reusable product-line architecture – guided by patterns – for a family of related applications and an integrated set of collaborating components that implement concrete realizations of the architecture.

Middleware is reusable software that leverages patterns and frameworks to bridge the gap between the functional requirements of applications and the underlying operating systems, network protocol stacks, and databases.

In other words, Middleware is implemented in the form of one or more Frameworks, which in turn apply several Design Patterns to achieve their goals, including future extensibility. It is exactly this combination, when implemented correctly, that ensures Middleware's ability to flexibly address the infrastructure needs of large yet distinct groups of applications.

Let’s take a closer look at the definitions of each element presented above.

Design Patterns

In the realm of software engineering, a Software Design Pattern is understood as a generalized, reusable blueprint for addressing frequent challenges encountered in software design. As defined by Wikipedia:

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

Sometimes, the term Architectural Pattern is used to distinguish high-level software architecture decisions from lower-level, implementation-oriented Design Patterns, as defined in Wikipedia:

An architectural pattern is a general, reusable resolution to a commonly occurring problem in software architecture within a given context. The architectural patterns address various issues in software engineering, such as computer hardware performance limitations, high availability and minimization of a business risk. Some architectural patterns have been implemented within software frameworks.

It is essential to differentiate Architectural and Design Patterns from their implementations in specific software projects. While an Architectural or Design Pattern provides an initial idea for a solution, its implementation may involve a combination of several patterns, tailored to the unique requirements and nuances of the project at hand.

Architectural Patterns, such as Pipe-and-Filters, and Design Patterns, such as the Decorator, are not only about solving problems in code. They also serve as a common language among architects and developers, facilitating more straightforward communication about software structure and design choices. They are also invaluable tools for analyzing existing solutions, as we will see later.

Software Frameworks

In the domain of computer programming, a Software Framework represents a sophisticated form of abstraction, designed to standardize the development process by offering a reusable set of libraries or tools. As defined by Wikipedia:

In computer programming, a software framework is an abstraction in which software, providing generic functionality, can be selectively changed by additional user-written code, thus providing application-specific software.

It provides a standard way to build and deploy applications and is a universal, reusable software environment that provides particular functionality as part of a larger software platform to facilitate the development of software applications, products and solutions.

In other words, a Software Framework is an evolved Software Library that employs the principle of inversion of control. This means the framework, rather than the user's application, takes charge of the control flow. The application-specific code is then integrated through callbacks or plugins, which the framework's core logic invokes as needed.
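To make the principle concrete, here is a minimal TypeScript sketch of inversion of control (all names here are hypothetical, not taken from any real framework): the framework owns the dispatch loop, and application code participates only through registered callbacks.

```typescript
// Minimal sketch of inversion of control: the framework, not the
// application, decides when user code runs. Applications plug in
// their logic via the HttpHandler callback type.

type HttpRequest = { path: string; body: string };
type HttpResponse = { status: number; body: string };
type HttpHandler = (req: HttpRequest) => HttpResponse;

class MiniFramework {
  private routes = new Map<string, HttpHandler>();

  // Applications register callbacks; they never call them directly.
  route(path: string, handler: HttpHandler): void {
    this.routes.set(path, handler);
  }

  // The framework's core loop: dispatching is its responsibility.
  dispatch(req: HttpRequest): HttpResponse {
    const handler = this.routes.get(req.path);
    if (!handler) return { status: 404, body: "not found" };
    return handler(req);
  }
}

const app = new MiniFramework();
app.route("/greet", (req) => ({ status: 200, body: `Hello, ${req.body}!` }));

const res = app.dispatch({ path: "/greet", body: "Winglang" });
// res.status === 200, res.body === "Hello, Winglang!"
```

The application never invokes its own handler; it only registers it. The framework decides when, and whether, the callback runs: this is precisely what distinguishes a framework from a plain library.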

Utilizing a Software Framework as the foundational layer for integrating domain-specific code with the underlying infrastructure allows developers to significantly decrease the development time and effort for complex software applications. Frameworks facilitate adherence to established coding standards and patterns, resulting in more maintainable, scalable, and secure code.

Nonetheless, it's crucial to follow the Clean Architecture guidelines, which mandate that domain-specific code remain decoupled and independent from any framework so that it can evolve independently of any infrastructure. Therefore, an ideal Software Framework should support plugging pure domain code into it without any modification.

Middleware

The Middleware is defined by Wikipedia as follows:

Middleware is a type of computer software program that provides services to software applications beyond those available from the operating system. It can be described as "software glue".

Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

Middleware, however, is not a monolithic entity; it is composed of several distinct layers, as we shall see in the next section.

Middleware Layers

Below is an illustrative diagram portraying Middleware as a stack of such layers, each with its specialized function, as suggested in the Schmidt and Buschmann paper:

Fig 2: Middleware Layers

Fig 2: Middleware Layers in Context

Layered Architecture Clarified

To appreciate the significance of this layered structure, a good understanding of the very concept of Layered Architecture is essential—a concept too often misunderstood and confused with Multitier Architecture, deviating significantly from the original principles laid out by E.W. Dijkstra.

At the “1968 NATO Software Engineering Conference,” E.W. Dijkstra presented a paper titled “Complexity Controlled by Hierarchical Ordering of Function and Variability” where he stated:

We conceive an ordered sequence of machines: A[0], A[1], ... A[n], where A[0] is the given hardware machine and where the software of layer i transforms machine A[i] into A[i+1]. The software of layer i is defined in terms of machine A[i], it is to be executed by machine A[i], the software of layer i uses machine A[i] to make machine A[i+1].

In other words, in a correctly organized Layered Architecture, the higher-level virtual machine is implemented in terms of the lower-level virtual machine. Within this series, we will come back to this powerful technique over and over again.
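Dijkstra's construction can be illustrated with a toy TypeScript sketch (entirely hypothetical, for illustration only): machine A[1] is built strictly out of the primitives of machine A[0], and knows nothing about how A[0] is realized.

```typescript
// Illustrative sketch of Dijkstra's ordered sequence of machines:
// each layer's software uses only the machine below it to construct
// a richer machine above it.

// A[0]: the "hardware" machine, exposing raw numeric cells.
const a0 = {
  cells: new Map<number, number>(),
  poke(addr: number, value: number): void { this.cells.set(addr, value); },
  peek(addr: number): number { return this.cells.get(addr) ?? 0; },
};

// Layer 0 software turns A[0] into A[1]: a machine that stores
// strings, implemented strictly in terms of A[0]'s poke/peek.
const a1 = {
  write(addr: number, text: string): void {
    a0.poke(addr, text.length); // length prefix
    [...text].forEach((ch, i) => a0.poke(addr + 1 + i, ch.charCodeAt(0)));
  },
  read(addr: number): string {
    const len = a0.peek(addr);
    let out = "";
    for (let i = 0; i < len; i++) out += String.fromCharCode(a0.peek(addr + 1 + i));
    return out;
  },
};

a1.write(0, "layered");
// a1.read(0) === "layered": A[1] exists only through A[0]'s primitives.
```

A layer above could build a key-value machine A[2] out of A[1] in the same way; no layer reaches below the machine directly beneath it.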

Back to the Middleware Layers

Right beneath the Applications layer resides the Domain-Specific Middleware Services layer, a notion deserving a separate discussion within the broader framework of Domain-Driven Design.

Within this context, however, we are more interested in the Distribution Middleware layer, which serves as the intermediary between the Host Infrastructure Middleware within a single "box" and the Common Middleware Services layer, which operates across a distributed system's architecture.

As stated in the paper:

Common middleware services augment distribution middleware by defining higher-level domain-independent reusable services that allow application developers to concentrate on programming business logic.

With this understanding, we can now place Winglang Middleware within the Common Middleware Services layer, enabling the implementation of Domain-Specific Middleware Services in terms of its primitives.

To complete the picture, we need more quotes from the “Patterns, Frameworks, and Middleware: Their Synergistic Relationships” article mapped onto the modern cloud infrastructure elements.

Host Infrastructure Middleware

Here is how it’s defined in the paper:

Host infrastructure middleware encapsulates and enhances native OS mechanisms to create reusable event demultiplexing, interprocess communication, concurrency, and synchronization objects, such as reactors; acceptors, connectors, and service handlers; monitor objects; active objects; and service configurators. By encapsulating the peculiarities of particular operating systems, these reusable objects help eliminate many tedious, error-prone, and non-portable aspects of developing and maintaining application software via low-level OS programming APIs, such as Sockets or POSIX pthreads.

In the AWS environment, general-purpose virtualization services such as AWS EC2 (compute), AWS VPC (network), and AWS EBS (storage) play this role.

On the other hand, when speaking about the AWS Lambda execution environment, we may identify AWS Firecracker, AWS Lambda standard and custom Runtimes, AWS Lambda Extensions, and AWS Lambda Layers as also belonging to this category.

Distribution Middleware

Here is how it’s defined in the paper:

Distribution middleware defines higher-level distributed programming models whose reusable APIs and objects automate and extend the native OS mechanisms encapsulated by host infrastructure middleware.

Distribution middleware enables clients to program applications by invoking operations on target objects without hard-coding dependencies on their location, programming language, OS platform, communication protocols and interconnects, and hardware.

Within the AWS environment, fully managed API, Storage, and Messaging services such as AWS API Gateway, AWS SQS, AWS SNS, AWS S3, and DynamoDB would fit naturally into this category.

Common Middleware Services

Here is how it’s defined in the paper:

Common middleware services augment distribution middleware by defining higher-level domain-independent reusable services that allow application developers to concentrate on programming business logic, without the need to write the “plumbing” code required to develop distributed applications via lower-level middleware directly.

For example, common middleware service providers bundle transactional behavior, security, and database connection pooling and threading into reusable components, so that application developers no longer need to write code that handles these tasks.

Whereas distribution middleware focuses largely on managing end-system resources in support of an object-oriented distributed programming model, common middleware services focus on allocating, scheduling, and coordinating various resources throughout a distributed system using a component programming and scripting model.

Developers can reuse these component services to manage global resources and perform common distribution tasks that would otherwise be implemented in an ad hoc manner within each application. The form and content of these services will continue to evolve as the requirements on the applications being constructed expand.

Formally speaking, Winglang, its Standard Library, and its Extended Libraries collectively constitute Common Middleware Services built on top of the cloud platform's Distribution Middleware and its corresponding lower-level Common Middleware Services, represented by the cloud platform SDK for JavaScript and various Infrastructure as Code tools, such as AWS CDK or Terraform.

With Winglang Middleware, we are looking for a higher level of abstraction, built in terms of the core language and its libraries, that facilitates the development of production-grade Domain-Specific Middleware Services and applications on top of it.

Domain-Specific Middleware Services

Here is how it’s defined in the paper:

Domain-specific middleware services are tailored to the requirements of particular domains, such as telecom, e-commerce, health care, process automation, or aerospace. Unlike the other three middleware layers discussed above that provide broadly reusable “horizontal” mechanisms and services, domain-specific middleware services are targeted at “vertical” markets and product-line architectures. Since they embody knowledge of a domain, moreover, reusable domain-specific middleware services have the most potential to increase the quality and decrease the cycle-time and effort required to develop particular types of application software.

To sum up, the objective of Winglang Middleware is to continue the trend set by the Winglang compiler and its standard library of making the development of Domain-Specific Middleware Services less difficult.

Cloud Middleware State of Affairs

Applying the terminology introduced above, the current state of affairs with AWS cloud Middleware could be visualized as follows:

Fig 3: Cloud Middleware State of Affairs

We will look at three leading Middleware Frameworks for AWS:

  1. Middy (TypeScript)
  2. Power Tools for AWS Lambda (Python, TypeScript, Java, and .NET)
  3. Lambda Middleware

Middy

If we dive into the Middy Documentation, we will find that it positions itself as a middleware engine, which is correct if we recall that Frameworks, and Middy is one, are very often called Engines. However, it later claims that “… like generic web frameworks (fastify, hapi, express, etc.), this problem has been solved using the middleware pattern.” This, as we understand by now, is complete nonsense. If we dive further into the Middy Documentation, we will find the following picture:

Fig 4: Middy

Now, we realize that what Middy calls “middleware” is a particular implementation of the Pipe-and-Filters Architecture Pattern via the Decorator Design Pattern. The latter should not be confused with TypeScript Decorators. In other words, Middy decorators are assembled into a pipeline, each performing certain operations before and/or after the HTTP request handling.
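This arrangement can be sketched in a few lines of TypeScript (a hypothetical minimal engine, not Middy's actual API): each filter contributes optional before and after steps, and the engine runs the before steps in list order and the after steps in reverse.

```typescript
// Hypothetical minimal sketch of the Pipe-and-Filters-via-Decorator
// arrangement described above; not Middy's real API.

type ApiEvent = Record<string, unknown>;
type Handler = (event: ApiEvent) => string;
type Filter = {
  before?: (event: ApiEvent) => void;  // runs before the handler, in list order
  after?: (result: string) => string;  // runs after the handler, in reverse order
};

function decorate(handler: Handler, filters: Filter[]): Handler {
  return (event) => {
    for (const f of filters) f.before?.(event);
    let result = handler(event);
    for (const f of [...filters].reverse()) {
      if (f.after) result = f.after(result);
    }
    return result;
  };
}

// Usage: observe the onion-like execution order of two filters.
const order: string[] = [];
const handler = decorate(() => "ok", [
  { before: () => order.push("auth.before"), after: (r) => (order.push("auth.after"), r) },
  { before: () => order.push("parse.before"), after: (r) => (order.push("parse.after"), r) },
]);
handler({});
// order is now: auth.before, parse.before, parse.after, auth.after
```

Note how the filter list behaves like an onion: the first filter's before step runs first, but its after step runs last.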

Perhaps the main culprit of this confusion is the Express.js Framework Guide's use of titles like “Writing Middleware” and “Using Middleware,” even though it internally uses the term middleware function, which is correct.

Middy comes with an impressive list of official middleware decorator plugins plus a long list of 3rd party middleware decorator plugins.

Power Tools for AWS Lambda

Here, the basic building blocks are called Features, which in many cases are Adapters of lower-level SDK functions. The list of features varies across languages, with the Python version having the most comprehensive one. Features can be attached to Lambda Handlers using language decorators, used manually, or, in the case of TypeScript, using Middy. The term middleware pops up here and there and always means some decorator.

Lambda Middleware

This one is also an implementation of the Pipe-and-Filters Architecture Pattern via the Decorator Design Pattern. Unlike Middy, individual decorators are combined into a pipeline using a special Compose decorator, effectively applying the Composite Design Pattern.
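The underlying idea can be sketched as follows (a hypothetical compose function, not the library's actual code): every decorator maps a handler to a handler of the same shape, so a simple fold collapses a whole list of decorators into a single composite one.

```typescript
// Hypothetical sketch of combining decorators with a Compose step;
// not the Lambda Middleware library's actual code.

type LambdaHandler = (event: unknown) => unknown;
type HandlerDecorator = (next: LambdaHandler) => LambdaHandler;

// Folds many decorators into one; the first decorator in the list
// becomes the outermost wrapper.
const compose = (...decorators: HandlerDecorator[]): HandlerDecorator =>
  (handler) => decorators.reduceRight((wrapped, d) => d(wrapped), handler);

// Usage: two trivial decorators that tag the result with a label.
const tag = (label: string): HandlerDecorator =>
  (next) => (event) => `${label}(${next(event)})`;

const decorated = compose(tag("logging"), tag("tracing"))(() => "handler");
// decorated(undefined) === "logging(tracing(handler))"
```

Because the composite has the same shape as an individual decorator, composites can themselves be composed, which is the essence of the Composite Design Pattern.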

Limitations of existing solutions

Apart from using the incorrect terminology, all three frameworks have certain limitations in common, as follows:

  1. The confusing sequence of operations of multiple Decorators. When more than one decorator is defined, the before operations run in the order of the decorators, but the after operations run in reverse order. With a long list of decorators, that might be a source of serious confusion or even conflicts.

  2. Reliance on environment variables. Control over the operation of particular adapters (e.g., Logger) relies solely on environment variables. To make a change, one needs to redeploy the Lambda Function.

  3. A single list of decorators with limited run-time control. There is only one list of decorators per Lambda Function, and if some decorators need to be excluded or replaced depending on the deployment target or run-time environment, a run-time check needs to be performed (look, for example, at how Tracer behavior is controlled in Power Tools for AWS Lambda). This introduces unnecessary run-time overhead and enlarges the potential attack surface.

  4. Lack of support for higher-level crosscut specifications. All middleware decorators are specified for individual Lambda functions. Common specifications at the organization, organization unit, account, or service levels will require some handmade custom solutions.

  5. Too narrow an interpretation of Middleware as a linear implementation of the Pipe-and-Filters and Decorator design patterns. Power Tools for AWS Lambda does slightly better by introducing its Features, also called Utilities, such as Logger, first, and corresponding decorators second. Middy, on the other hand, treats everything as a decorator. In both cases, the decorators are stacked in one linear sequence, such that retrieving two parameters, one from the Secrets Manager and another from the AppConfig, cannot be performed in parallel, while state-of-the-art pipeline builders, such as Marble.js and Async.js, support significantly more advanced control forms.
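To illustrate the last limitation, the following TypeScript sketch (with hypothetical stand-in fetchers rather than real SDK calls) contrasts the sequential retrieval imposed by a linear before-chain with the parallel retrieval a more capable pipeline builder could perform:

```typescript
// Stand-ins for fetching a parameter from Secrets Manager and from
// AppConfig; each takes ~50 ms. Hypothetical, not real SDK calls.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchSecret(): Promise<string> { await delay(50); return "secret"; }
async function fetchConfig(): Promise<string> { await delay(50); return "config"; }

// What a linear decorator chain does: one before-step, then the
// next, so the two fetches take ~100 ms in total.
async function sequentialBeforeChain(): Promise<string[]> {
  const secret = await fetchSecret();
  const config = await fetchConfig();
  return [secret, config];
}

// What a richer, graph-capable pipeline builder could do: run the
// two independent steps in parallel, for ~50 ms in total.
async function parallelPipeline(): Promise<string[]> {
  return Promise.all([fetchSecret(), fetchConfig()]);
}
```

With only two parameters the difference is negligible, but a cold-started cloud function that must fetch many independent parameters pays the sequential penalty on every initialization.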

For the Winglang Common Middleware Services Framework (we can now use its correct full name), this list of limitations will serve as a call to action to look for pragmatic ways to overcome them.

Winglang Middleware Direction

Following the “Patterns, Frameworks, and Middleware: Their Synergistic Relationships” article middleware layers taxonomy, the Winglang Common Middleware Services Framework is positioned as follows:

Fig 5: Winglang Middleware Layer

In the diagram above, the Winglang Middleware Layer, code name Winglang MW, is positioned as an upper sub-layer of Common Middleware Services, built on top of Winglang as a representative of the Infrastructure-from-Code approach, which in turn is built on top of the cloud-specific SDK and IaC solutions that provide convenient access to the cloud Distribution Middleware.

From the feature set perspective, the Winglang MW is expected:

  1. To be on par with leading middleware frameworks such as
    1. Middy (TypeScript)
    2. Power Tools for AWS Lambda (Python, TypeScript, Java, and .NET)
    3. Lambda Middleware
  2. In addition, to provide support for leading open standards such as
    1. OpenID
    2. Open Telemetry
    3. OAuth 2.0
    4. Async API
    5. Cloud Events
  3. To provide built-in support for cross-cut middleware specifications at different levels above individual cloud functions
  4. To support run-time fine-tuning of individual feature parameters (e.g. logging level) without corresponding cloud resources redeployment
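The last expectation can be sketched as follows (hypothetical names; fetchLogLevel stands in for a call to a dynamic configuration service): a feature consults a TTL-cached dynamic value at run time instead of a deploy-time environment variable.

```typescript
// Hypothetical sketch: fine-tuning a feature parameter (here, the
// logging level) at run time by polling a dynamic configuration
// source with a TTL cache, instead of baking the value into an
// environment variable that requires redeployment to change.

type LogLevel = "debug" | "info" | "warn" | "error";

class DynamicLogLevel {
  private cached: LogLevel = "info";
  private fetchedAt = 0;

  constructor(
    // Stand-in for a call to a dynamic config service (e.g. a
    // parameter store); injected so it can be swapped per target.
    private fetchLogLevel: () => Promise<LogLevel>,
    private ttlMs: number = 30_000,
  ) {}

  async get(): Promise<LogLevel> {
    const now = Date.now();
    if (now - this.fetchedAt > this.ttlMs) {
      this.cached = await this.fetchLogLevel(); // refresh from the config source
      this.fetchedAt = now;
    }
    return this.cached; // serve the cached value within the TTL window
  }
}
```

The TTL bounds both the propagation delay of a configuration change and the number of calls to the configuration service, so changing the logging level takes effect within one TTL period with no redeployment.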

Different implementations of Winglang MW will vary in efficiency, ease of use (e.g. middleware pipeline configuration), flexibility, and supplementary tooling such as automatic code generation.

At the current stage, any premature convergence toward a single solution would be detrimental to the evolution of the Winglang ecosystem; running multiple experiments in parallel would be more beneficial. Once certain middleware features prove themselves, they might be incorporated into the Winglang core, but it’s advisable not to rush.

For the same reason, I intentionally called this section Direction rather than Requirements or Problem Statement. I have only a general sense of the desirable direction in which to proceed. Making things too specific could lead to serendipitous alternatives being missed. I can, however, specify some constraints, as follows:

  1. Do not count on advanced Winglang features, such as Generics, arriving soon. Generics may significantly complicate the language syntax and are too often introduced before there is a clear understanding of how much sophistication is actually required. At the early stages of exploration, the lack of Generics support can be compensated for by switching to a general-purpose data type, such as Json, or by code generators, including “C”-style macros.
  2. Stick with Winglang, and switch to TypeScript only for implementing low-level extensions. As a new language, Winglang lacks features taken for granted in mainstream languages; giving it a fair chance requires some faith and writing as much code as possible in it, even if it is slightly less convenient. This is the only way for a new programming language to evolve.
  3. If the development of CLI tools is required, prefer TypeScript over other languages such as Python. I already have the TypeScript toolchain installed on my desktop with all dependencies resolved. It’s always better to limit the number of moving parts in the system to the absolute minimum.
  4. Limit Winglang middleware implementation to a single process of a Cloud Function. Out-of-proc capabilities, such as AWS Lambda Extensions, can improve overall system performance, security, and reuse (see, for example, this blog post). However, they are not currently supported by Winglang out of the box. Also, utilizing such advanced capabilities will increase the system's complexity while contributing little, if any, at the semantic level. Exploring this direction can be postponed to later stages.

What’s Next?

This publication was devoted entirely to clarifying the concept of Middleware, its position within the cloud software stack, and defining a general direction for developing one or more Winglang Middleware Frameworks.

I plan to devote Part Two of this series to exploring different options for implementing the Pipe-and-Filters pattern in Middleware, and after that to start building individual utilities and corresponding filters one by one.

It’s a rare opportunity, one that does not come along every day, to revisit the generic software infrastructure elements from first principles and to explore the most suitable ways of realizing these principles on the leading modern cloud platforms. If you are interested in taking part in this journey, drop me a line.

· 25 min read
Asher Sterkin

Cover Art

Abstract

This is an experience report on the initial steps of implementing a CRUD (Create, Read, Update, Delete) REST API in Winglang, with a focus on addressing typical production environment concerns such as secure authentication, observability, and error handling. It highlights how Winglang's distinctive features, particularly the separation of Preflight cloud resource configuration from Inflight API request handling, can facilitate more efficient integration of essential middleware components like logging and error reporting. This balance aims to reduce overall complexity and minimize the resulting code size. The applicability of various design patterns, including Pipe-and-Filters, Decorator, and Factory, is evaluated. Finally, future directions for developing a fully-fledged middleware library for Winglang are identified.

Introduction

In my previous publication, I reported on my findings about the possible implementation of the Hexagonal Ports and Adapters pattern in the Winglang programming language using the simplest possible GreetingService sample application. The main conclusions from this evaluation were:

  1. Cloud resources, such as API Gateway, play the role of drivers (in-) and driven (out-) Ports
  2. Event handling functions play the role of Adapters leading to a pure Core, which might be implemented in Winglang, TypeScript, or in fact any programming language that compiles to JavaScript and runs on the Node.js runtime engine

Initially, I planned to proceed with exploring possible ways of implementing a more general Staged Event-Driven Architecture (SEDA) architecture in Winglang. However, using the simplest possible GreetingService as an example left some very important architectural questions unanswered. Therefore I decided to explore in more depth what is involved in implementing a typical Create/Retrieve/Update/Delete (CRUD) service exposing standardized REST API and addressing typical production environment concerns such as secure authentication, observability, error handling, and reporting.

To prevent domain-specific complexity from distorting the focus on important architectural considerations, I chose the simplest possible TODO service with four operations:

  1. Retrieve all Tasks (per user)
  2. Create a new Task
  3. Completely Replace an existing Task definition
  4. Delete an existing Task

Using this simple example allowed me to evaluate many important architectural options and to come up with an initial prototype of a middleware library for the Winglang programming language, compatible with and potentially surpassing popular libraries for mainstream programming languages, such as the Middy Node.js middleware engine for AWS Lambda and AWS Powertools for Lambda.

Unlike my previous publication, I will not describe the step-by-step process of how I arrived at the current arrangement. Software architecture and design processes are rarely linear, especially beyond beginner-level tutorials. Instead, I will describe a starting-point solution, which, while far from final, is representative enough to sense the direction in which the final framework might eventually evolve. I will outline the requirements I wanted to address and the current architectural decisions, and highlight directions for future research.

Simple TODO in Winglang

Developing a simple, prototype-level TODO REST API service in Winglang is indeed very easy, and could be done within half an hour, using the Winglang Playground:

Wing Playground

To keep things simple, I put everything in one source file, even though it could, of course, be split into Core, Ports, and Adapters. Let’s look at the major parts of this sample.

Resource (Ports) Definition

First, we need to define the cloud resources, aka Ports, that we are going to use. This is done as follows:

bring ex;
bring cloud;

let tasks = new ex.Table(
  name: "Tasks",
  columns: {
    "id" => ex.ColumnType.STRING,
    "title" => ex.ColumnType.STRING
  },
  primaryKey: "id"
);
let counter = new cloud.Counter();
let api = new cloud.Api();
let path = "/tasks";

Here we define a Winglang Table to keep TODO Tasks with only two columns: task ID and title. To keep things simple, we implement task ID as an auto-incrementing number using the Winglang Counter resource. And finally, we expose the TODO Service API using the Winglang Api resource.

API Request Handlers (Adapters)

Now, we are going to define a separate handler function for each of the four REST API requests. Getting a list of all tasks is implemented as:

api.get(
  path,
  inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
    let rows = tasks.list();
    let var result = MutArray<Json>[];
    for row in rows {
      result.push(row);
    }
    return cloud.ApiResponse{
      status: 200,
      headers: {
        "Content-Type" => "application/json"
      },
      body: Json.stringify(result)
    };
  });

Creating a new task record is implemented as:

api.post(
  path,
  inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
    let id = "{counter.inc()}";
    if let task = Json.tryParse(request.body) {
      let record = Json{
        id: id,
        title: task.get("title").asStr()
      };
      tasks.insert(id, record);
      return cloud.ApiResponse {
        status: 200,
        headers: {
          "Content-Type" => "application/json"
        },
        body: Json.stringify(record)
      };
    } else {
      return cloud.ApiResponse {
        status: 400,
        headers: {
          "Content-Type" => "text/plain"
        },
        body: "Bad Request"
      };
    }
  });

Updating an existing task is implemented as:

api.put(
  "{path}/:id",
  inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
    let id = request.vars.get("id");
    if let task = Json.tryParse(request.body) {
      let record = Json{
        id: id,
        title: task.get("title").asStr()
      };
      tasks.update(id, record);
      return cloud.ApiResponse {
        status: 200,
        headers: {
          "Content-Type" => "application/json"
        },
        body: Json.stringify(record)
      };
    } else {
      return cloud.ApiResponse {
        status: 400,
        headers: {
          "Content-Type" => "text/plain"
        },
        body: "Bad Request"
      };
    }
  });

Finally, deleting an existing task is implemented as:

api.delete(
  "{path}/:id",
  inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
    let id = request.vars.get("id");
    tasks.delete(id);
    return cloud.ApiResponse{
      status: 200,
      headers: {
        "Content-Type" => "text/plain"
      },
      body: ""
    };
  });

We could play with this API using the Winglang Simulator:

Wing Console

We could write one or more tests to validate the API automatically:

bring http;
bring expect;

let url = "{api.url}{path}";

test "run simple crud scenario" {
  let r1 = http.get(url);
  expect.equal(r1.status, 200);
  let r1_tasks = Json.parse(r1.body);
  expect.nil(r1_tasks.tryGetAt(0));
  let r2 = http.post(url, body: Json.stringify(Json{title: "First Task"}));
  expect.equal(r2.status, 200);
  let r2_task = Json.parse(r2.body);
  expect.equal(r2_task.get("title").asStr(), "First Task");
  let id = r2_task.get("id").asStr();
  let r3 = http.put("{url}/{id}", body: Json.stringify(Json{title: "First Task Updated"}));
  expect.equal(r3.status, 200);
  let r3_task = Json.parse(r3.body);
  expect.equal(r3_task.get("title").asStr(), "First Task Updated");
  let r4 = http.delete("{url}/{id}");
  expect.equal(r4.status, 200);
}

Last but not least, this service can be deployed on any supported cloud platform using the Winglang CLI. The code for the TODO Service is completely cloud-neutral, ensuring compatibility across different platforms without modification.

Should there be a need to expand the task details or link them to other system entities, the approach remains largely unaffected, provided the operations adhere to straightforward CRUD logic and can be executed within a 29-second timeout limit.

This example unequivocally demonstrates that the Winglang programming environment is a top-notch tool for the rapid development of such services. If this is all you need, you need not read further. What follows is a kind of White Rabbit hole of multiple non-functional concerns that need to be addressed before we can even start talking about serious production deployment.

You have been warned. The forthcoming text is not for everybody, but rather for seasoned cloud software architects.

Architect

Usability

The TODO sample service implementation presented above belongs to the category of so-called Headless REST APIs. This approach focuses on core functionality, leaving user experience design to separate layers. This is often implemented as Client-Side Rendering or Server-Side Rendering with an intermediate Backend for Frontend tier, or by using multiple narrow-focused REST API services functioning as GraphQL Resolvers. Each approach has its merits for specific contexts.

I advocate for supporting HTTP Content Negotiation and providing a minimal UI for direct API interaction via a browser. While tools like Postman or Swagger can facilitate API interaction, experiencing the API as an end user offers invaluable insights. This basic UI, or what I refer to as an "engineering UI," often suffices.

In this context, anything beyond simple Server-Side Rendering deployed alongside headless protocol serialization, such as JSON, might be unnecessarily complex. While Winglang provides support for the Website cloud resource for web client assets (HTML pages, JavaScript, CSS), utilizing it for such purposes introduces additional complexity and cost.

A simpler solution would involve basic HTML templates, enhanced with HTMX's features and a CSS framework like Bootstrap. Currently, Winglang does not natively support HTML templates, but for basic use cases, this can be easily managed with TypeScript. For instance, rendering a single task line could be implemented as follows:

import { TaskData } from "core/task";

export function formatTask(path: string, task: TaskData): string {
  return `
  <li class="list-group-item d-flex justify-content-between align-items-center">
    <form hx-put="${path}/${task.taskID}" hx-headers='{"Accept": "text/plain"}' id="${task.taskID}-form">
      <span class="task-text">${task.title}</span>
      <input
        type="text"
        name="title"
        class="form-control edit-input"
        style="display: none;"
        value="${task.title}">
    </form>
    <div class="btn-group">
      <button class="btn btn-danger btn-sm delete-btn"
        hx-delete="${path}/${task.taskID}"
        hx-target="closest li"
        hx-swap="outerHTML"
        hx-headers='{"Accept": "text/plain"}'>✕</button>
      <button class="btn btn-primary btn-sm edit-btn">✎</button>
    </div>
  </li>
  `;
}

That would result in the following UI screen:

Asher Sterkin Tasks

Not super-fancy, but good enough for demo purposes.

Even purely Headless REST APIs require strong usability considerations. API calls should follow REST conventions for HTTP methods, URL formats, and payloads. Proper documentation of HTTP methods and potential error handling are crucial. Client and server errors need to be logged, converted into appropriate HTTP status codes, and accompanied by clear explanation messages in the response body.

The need to handle multiple request parsers and response formatters based on content negotiation using Content-Type and Accept headers in HTTP requests led me to the following design approach:

Diagram

Adhering to the Dependency Inversion Principle ensures that the system Core is completely isolated from Ports and Adapters. While there might be an inclination to encapsulate the Core within a generic CRUD framework, defined by a ResourceData type, I advise caution. This recommendation stems from several considerations:

  1. In practice, even CRUD request processing often entails complexities that extend beyond basic operations.
  2. The Core should not rely on any specific framework, preserving its independence and adaptability.
  3. The creation of such a framework would necessitate support for Generic Programming, a feature not currently supported by Winglang.

Another option would be to abandon the Core data types definition and rely entirely on untyped JSON interfaces, akin to a Lisp-like programming style. However, given Winglang's strong typing, I decided against this approach.

Overall, the TodoServiceHandler is quite simple and easy to understand:

bring "./data.w" as data;
bring "./parser.w" as parser;
bring "./formatter.w" as formatter;

pub class TodoHandler {
_path: str;
_parser: parser.TodoParser;
_tasks: data.ITaskDataRepository;
_formatter: formatter.ITodoFormatter;

new(
path: str,
tasks_: data.ITaskDataRepository,
parser: parser.TodoParser,
formatter: formatter.ITodoFormatter,
) {
this._path = path;
this._tasks = tasks_;
this._parser = parser;
this._formatter = formatter;
}

pub inflight getHomePage(user: Json, outFormat: str): str {
let userData = this._parser.parseUserData(user);

return this._formatter.formatHomePage(outFormat, this._path, userData);
}

pub inflight getAllTasks(user: Json, query: Map<str>, outFormat: str): str {
let userData = this._parser.parseUserData(user);
let tasks = this._tasks.getTasks(userData.userID);

return this._formatter.formatTasks(outFormat, this._path, tasks);
}

pub inflight createTask(
user: Json,
body: str,
inFormat: str,
outFormat: str
): str {
let taskData = this._parser.parsePartialTaskData(user, body);
this._tasks.addTask(taskData);

return this._formatter.formatTasks(outFormat, this._path, [taskData]);
}

pub inflight replaceTask(
user: Json,
id: str,
body: str,
inFormat: str,
outFormat: str
): str {
let taskData = this._parser.parseFullTaskData(user, id, body);
this._tasks.replaceTask(taskData);

return taskData.title;
}

pub inflight deleteTask(user: Json, id: str): str {
let userData = this._parser.parseUserData(user);
this._tasks.deleteTask(userData.userID, num.fromStr(id));
return "";
}
}

As you might notice, the code structure deviates slightly from the design diagram presented earlier. These minor adaptations are normal in software design; new insights emerge throughout the process, necessitating adjustments. The most notable difference is the user: Json argument defined for every function. We'll discuss the purpose of this argument in the next section.

Security

Exposing the TODO service to the internet without security measures is a recipe for disaster. Hackers, bored teens, and professional attackers will quickly target its public IP address. The rule is very simple:

any public interface must be protected unless exposed for a very short testing period. Security is non-negotiable.

Conversely, overloading a service with every conceivable security measure can lead to prohibitively high operational costs. As I've argued in previous writings, making architects accountable for the costs of their designs might significantly reshape their approach:

If cloud solution architects were responsible for the costs incurred by their systems, it could fundamentally change their design philosophy.

What we need is reasonable protection of the service API; not less, but not more either. Since I wanted to experiment with a full-stack Server-Side Rendering UI, my natural choice was to enforce user login at the beginning, to produce a JWT token with a reasonable expiration, say one hour, and then to use it for authentication of all subsequent HTTP requests.

Due to Server-Side Rendering specifics, using an HTTP cookie to carry over the session token was a natural choice (to be honest, suggested by ChatGPT). For the Client-Side Rendering option, I might need to use a Bearer Token delivered via the Authorization HTTP request header.

With session tokens now incorporating user information, I could aggregate TODO tasks by user. Although there are numerous methods for integrating session data, including user details, into the domain, I chose to focus on the userID and fullName attributes for this study.

For user authentication, several options are available, especially within the AWS ecosystem:

  1. AWS Cognito, utilizing its User Pools or integration with external Identity Providers like Google or Facebook.
  2. Third-party authentication services such as Auth0.
  3. A custom authentication service fully developed in Winglang.
  4. AWS Identity Center

As an independent software technology researcher, I gravitate towards the simplest solutions with the fewest components, which also address daily operational needs. Leveraging the AWS Identity Center, as detailed in a separate publication, was a logical step due to my existing multi-account/multi-user setup.

After integration, my AWS Identity Center main screen looks like this:

AWS Identity Center Image

That means that in my system, users, whether myself or guests, can use the same AWS credentials for development, administration, and sample or housekeeping applications.

To integrate with AWS Identity Center, I needed to register my application and provide a new endpoint implementing the so-called "Assertion Consumer Service URL (ACS URL)". This publication is not about the SAML standard; suffice it to say that with ChatGPT and Google search assistance, it could be done. Some useful information can be found here. What came in very handy was the TypeScript samlify library, which encapsulates the heavy lifting of the SAML Login Response validation process.

What I’m mostly interested in is how this variability point affects the overall system design. Let’s try to visualize it using a semi-formal data flow notation:

AWS Identity Center 2

While it might seem unusual, this representation reflects with high fidelity how data flows through the system. What we see here is a special instance of the famous Pipe-and-Filters architectural pattern.

Here, data flows through a pipeline, and each filter performs one well-defined task, in effect following the Single Responsibility Principle. Such an arrangement allows me to replace filters should I want to switch to simple Basic HTTP Authentication, to use the HTTP Authorization header, or to apply a different secret-management policy for JWT token building and validation.

If we zoom into Parse and Format filters, we will see a typical dispatch logic using Content-Type and Accept HTTP headers respectively:

Content Diagram

Many engineers confuse design and architectural patterns with specific implementations. This misses the essence of what patterns are meant to achieve.

Patterns are about identifying a suitable approach to balance conflicting forces with minimal intervention. In the context of building cloud-based software systems, where security is paramount but should not be overpriced in terms of cost or complexity, this understanding is crucial. The Pipe-and-Filters design pattern helps with addressing such design challenges effectively. It allows for modularization and flexible configuration of processing steps, which in this case, relate to authentication mechanisms.

For instance, while robust security measures like SAML authentication are necessary for production environments, they may introduce unnecessary complexity and overhead in scenarios such as automated end-to-end testing. Here, simpler methods like Basic HTTP Authentication may suffice, providing a quick and cost-effective solution without compromising the system's overall integrity. The goal is to maintain the system's core functionality and code base uniformity while varying the authentication strategy based on the environment or specific requirements.

Winglang's unique Preflight compilation feature facilitates this by allowing for configuration adjustments at the build stage, eliminating runtime overhead. This capability presents a significant advantage of Winglang-based solutions over other middleware libraries, such as Middy and AWS Power Tools for Lambda, by offering a more efficient and flexible approach to managing the authentication pipeline.

Implementing Basic HTTP Authentication, therefore, only requires modifying a single filter within the authentication pipeline, leaving the remainder of the system unchanged:

Basic HTTP Auth

Due to some technical limitations, it’s currently not possible to implement Pipe-and-Filters in Winglang directly, but it can be simulated quite easily with a combination of the Decorator and Factory design patterns. We will see exactly how shortly. Now, let’s proceed to the next topic.

Operation

In this publication, I’m not going to cover all aspects of production operation. The topic is large and deserves a separate publication of its own. Below is what I consider the bare minimum:

Operation Diagram

To operate a service, we need to know what happens with it, especially when something goes wrong. This is achieved via a Structured Logging mechanism. At the moment, Winglang provides only a basic log(str) function. For my investigation, I needed more and implemented a poor man’s structured logging class:

// A poor man's implementation of a configurable Logger,
// similar to those of Python and TypeScript
bring cloud;
bring "./dateTime.w" as dateTime;

pub enum logging {
  TRACE,
  DEBUG,
  INFO,
  WARNING,
  ERROR,
  FATAL
}

// This is just enough configuration.
// A serious review, including compliance
// with OpenTelemetry and privacy regulations,
// is required. The main insight:
// serverless cloud logging is substantially
// different.
pub interface ILoggingStrategy {
  inflight timestamp(): str;
  inflight print(message: Json): void;
}

pub class DefaultLoggerStrategy impl ILoggingStrategy {
  pub inflight timestamp(): str {
    return dateTime.DateTime.toUtcString(std.Datetime.utcNow());
  }
  pub inflight print(message: Json): void {
    log("{message}");
  }
}

// TBD: probably should go into a separate module
bring expect;
bring ex;

pub class MockLoggerStrategy impl ILoggingStrategy {
  _name: str;
  _counter: cloud.Counter;
  _messages: ex.Table;

  new(name: str?) {
    this._name = name ?? "MockLogger";
    this._counter = new cloud.Counter();
    this._messages = new ex.Table(
      name: "{this._name}Messages",
      columns: Map<ex.ColumnType>{
        "id" => ex.ColumnType.STRING,
        "message" => ex.ColumnType.STRING
      },
      primaryKey: "id"
    );
  }
  pub inflight timestamp(): str {
    return "{this._counter.inc(1, this._name)}";
  }
  pub inflight expect(messages: Array<Json>): void {
    for message in messages {
      this._messages.insert(
        message.get("timestamp").asStr(),
        Json{ message: "{message}" }
      );
    }
  }
  pub inflight print(message: Json): void {
    let expected = this._messages.get(
      message.get("timestamp").asStr()
    ).get("message").asStr();
    expect.equal("{message}", expected);
  }
}

pub class Logger {
  _labels: Array<str>;
  _levels: Array<logging>;
  _level: num;
  _service: str;
  _strategy: ILoggingStrategy;

  new(level: logging, service: str, strategy: ILoggingStrategy?) {
    this._labels = [
      "TRACE",
      "DEBUG",
      "INFO",
      "WARNING",
      "ERROR",
      "FATAL"
    ];
    this._levels = Array<logging>[
      logging.TRACE,
      logging.DEBUG,
      logging.INFO,
      logging.WARNING,
      logging.ERROR,
      logging.FATAL
    ];
    this._level = this._levels.indexOf(level);
    this._service = service;
    this._strategy = strategy ?? new DefaultLoggerStrategy();
  }
  pub inflight log(level_: logging, func: str, message: Json): void {
    let level = this._levels.indexOf(level_);
    let label = this._labels.at(level);
    if this._level <= level {
      this._strategy.print(Json {
        timestamp: this._strategy.timestamp(),
        level: label,
        service: this._service,
        function: func,
        message: message
      });
    }
  }
  pub inflight trace(func: str, message: Json): void {
    this.log(logging.TRACE, func, message);
  }
  pub inflight debug(func: str, message: Json): void {
    this.log(logging.DEBUG, func, message);
  }
  pub inflight info(func: str, message: Json): void {
    this.log(logging.INFO, func, message);
  }
  pub inflight warning(func: str, message: Json): void {
    this.log(logging.WARNING, func, message);
  }
  pub inflight error(func: str, message: Json): void {
    this.log(logging.ERROR, func, message);
  }
  pub inflight fatal(func: str, message: Json): void {
    this.log(logging.FATAL, func, message);
  }
}

There is nothing spectacular here and, as I wrote in the comments, a cloud-based logging system requires a serious revision. Still, it’s enough for the current investigation. I’m fully convinced that logging is an integral part of any service specification and has to be tested with the same rigor as core functionality. For that purpose, I developed a simple mechanism to mock logs and check them against expectations.

For a REST API CRUD service, we need to log at least three types of things:

  1. HTTP Request
  2. Original Error message if something wrong happened
  3. HTTP Response

In addition, depending on needs, the original error message might have to be converted into a standard one, for example, so as not to educate attackers.

How much detail, if any, to log depends on multiple factors: deployment target, type of request, specific user, type of error, statistical sampling, etc. In development and test modes, we will normally opt for logging almost everything and returning the original error message directly to the client screen to ease debugging. In production mode, we might opt to remove some sensitive data because of regulatory requirements, to return a general error message, such as "Bad Request", without any details, and to apply only statistical sample logging to particular types of requests to save cost.

Flexible logging configuration was achieved by injecting four additional filters in every request handling pipeline:

  1. HTTP Request logging filter
  2. Try/Catch Decorator to convert exceptions if any into HTTP status codes and to log original error messages (this could be extracted into a separate filter, but I decided to keep things simple)
  3. Error message translator to convert original error messages into standard ones if required
  4. HTTP Response logging filter

This structure, although not an ultimate one, provides enough flexibility to implement a wide range of logging and error-handling strategies depending on the service and its deployment target specifics.

As with logs, Winglang at the moment provides only a basic throw <str> operator, so I decided to implement my own version of poor man’s structured exceptions:

// Poor man's structured exceptions
pub inflight class Exception {
  pub tag: str;
  pub message: str?;

  new(tag: str, message: str?) {
    this.tag = tag;
    this.message = message;
  }
  pub raise() {
    let err = Json.stringify(this);
    throw err;
  }
  pub static fromJson(err: str): Exception {
    let je = Json.parse(err);
    return new Exception(
      je.get("tag").asStr(),
      je.tryGet("message")?.tryAsStr()
    );
  }
  pub toJson(): Json { // for logging
    return Json{ tag: this.tag, message: this.message };
  }
}

// Standard exceptions, similar to those of Python
pub inflight class KeyError extends Exception {
  new(message: str?) {
    super("KeyError", message);
  }
}
pub inflight class ValueError extends Exception {
  new(message: str?) {
    super("ValueError", message);
  }
}
pub inflight class InternalError extends Exception {
  new(message: str?) {
    super("InternalError", message);
  }
}
pub inflight class NotImplementedError extends Exception {
  new(message: str?) {
    super("NotImplementedError", message);
  }
}
// Two more HTTP-specific, yet useful
pub inflight class AuthenticationError extends Exception {
  // aka HTTP 401 Unauthorized
  new(message: str?) {
    super("AuthenticationError", message);
  }
}
pub inflight class AuthorizationError extends Exception {
  // aka HTTP 403 Forbidden
  new(message: str?) {
    super("AuthorizationError", message);
  }
}

These experiences highlight how the developer community can bridge gaps in new languages with temporary workarounds. Winglang is still evolving, but its innovative features can already be harnessed for progress despite the language's young age.

Now, it’s time to take a brief look at the last production topic on my list, namely

Scale

Scaling is a crucial aspect of cloud development, but it's often misunderstood. Some neglect it entirely, leading to problems when the system grows. Others over-engineer, aiming to be a "FANG" system from day one. The proclamation "We run everything on Kubernetes" is a common refrain in technical circles, regardless of whether it's appropriate for the project at hand.

Neither extreme, neglect nor over-engineering, is ideal. Like security, scaling shouldn't be ignored, but it also shouldn't be over-emphasized.

Up to a certain point, cloud platforms provide cost-effective scaling mechanisms. Often, the choice between different options boils down to personal preference or inertia rather than significant technical advantages.

The prudent path involves starting small and cost-effectively, scaling out based on real-world usage and performance data, rather than assumptions. This approach necessitates a system designed for easy configuration changes to accommodate scaling, something not inherently supported by Winglang but certainly within the realm of feasibility through further development and research. As an illustration, let's consider scaling within the AWS ecosystem:

  1. Initially, a cost-effective and swift deployment might involve a single Lambda Function URL for a full-stack CRUD API with Server-Side Rendering, using an S3 Bucket for storage. This setup enables rapid feedback essential for early development stages. Personally, I favor a "UX First" approach over "API First." You might be surprised how far you can get with this basic technology stack. While Winglang doesn't currently support Lambda Function URLs, I believe it could be achieved with filter combinations and system adjustments. At this level, following Marc Van Neerven's recommendation to use standard Web Components instead of heavy frameworks could be beneficial. This is a subject for future exploration.
  2. Transitioning to an API Gateway or GraphQL Gateway becomes relevant when external API exposure or advanced features like WebSockets are required. If the initial data storage solution becomes a bottleneck, it might be time to consider switching to a more robust and scalable solution like DynamoDB. At this point, deploying separate Lambda Functions for each API request might offer simplicity in implementation, though it's not always the most cost-effective strategy.
  3. The move to containerized solutions should be data-driven, considered only when there's clear evidence that the function-based architecture is either too costly or suffers from latency issues due to cold starts. An initial foray into containers might involve using ECS Fargate for its simplicity and cost-effectiveness, reserving EKS for scenarios with specific operational needs that require its capabilities. This evolution should ideally be managed through configuration adjustments and automated filter generation, leveraging Winglang's unique capabilities to support dynamic scaling strategies.

In essence, Winglang's approach, emphasizing the Preflight and Inflight stages, holds promise for facilitating these scaling strategies, although it may still be in the early stages of fully realizing this potential. This exploration of scalability within cloud software development emphasizes starting small, basing decisions on actual data, and remaining flexible in adapting to changing requirements.

Concluding Remarks

In the mid-1990s, I learned about Commonality Variability Analysis from Jim Coplien. Since then, this approach, alongside Edsger W. Dijkstra's Layered Architecture, has been a cornerstone of my software engineering practices. Commonality Variability Analysis asks: "In our system, which parts will always be the same and which might need to change?" The Open-Closed Principle dictates that variable parts should be replaceable without modifying the core system.

Deciding when to finalize the stable aspects of a system involves navigating the trade-off between flexibility and efficiency, with several stages from code generation to runtime offering opportunities for fixation. Dynamic language proponents might delay these decisions to runtime for maximum flexibility, whereas advocates for static, compiled languages typically secure crucial system components as early as possible.

Winglang, with its unique Preflight compilation phase, stands out by allowing cloud resources to be fixed early in the development process. In this publication, I explored how Winglang enables addressing non-functional aspects of cloud services through a flexible pipeline of filters, though this granularity introduces its own complexity. The challenge now becomes managing this complexity without compromising the system's efficiency or flexibility.

While the final solution is a work in progress, I can outline a high-level design that balances these forces:

Pipeline

This design combines several software Design Patterns to achieve the desired balance. The process involves:

  1. The Pipeline Builder component is responsible for preparing a final set of Preflight components.
  2. The Pipeline Builder reads a Configuration which might be organized as a Composite (think team-wide or organization-wide configuration).
  3. Configurations specify capability requirements for resources (e.g., loggers).
  4. Each Resource has several Specifications, each defining the conditions under which a Factory needs to be invoked to produce the required Filter. Three filter types are envisioned:
    1. Raw HTTP Request/Response Filter
    2. Extended HTTP Request/Response Filter with session information extracted after token validation
    3. Generic CRUD requests filter to be forwarded to Core
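
The process above can be sketched in TypeScript as follows (all names are illustrative, not an actual Winglang or middleware API): specifications pair a condition with a factory, and the builder composes the filters produced by the matching specifications into a single pipeline:

```typescript
// Illustrative types only; not an actual Winglang or middleware API.
type Filter = (request: string) => string;

interface FilterSpec {
  condition: (resource: string) => boolean; // when the filter applies
  factory: () => Filter;                    // how to produce the filter
}

// The Pipeline Builder selects matching specs and composes their filters.
class PipelineBuilder {
  constructor(private specs: FilterSpec[]) {}

  build(resource: string): Filter {
    const filters = this.specs
      .filter((s) => s.condition(resource))
      .map((s) => s.factory());
    return (request) => filters.reduce((req, f) => f(req), request);
  }
}

const specs: FilterSpec[] = [
  { condition: () => true, factory: () => (r) => `[logged] ${r}` },
  {
    condition: (res) => res === "api",
    factory: () => (r) => `[authenticated] ${r}`,
  },
];

const pipeline = new PipelineBuilder(specs).build("api");
console.log(pipeline("GET /users"));
```

In a real implementation the specifications would come from the (possibly composite) Configuration, and the factories would be resolved via dynamic imports.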

This approach shifts complexity towards implementing the Pipeline Builder machinery and the Configuration specification. Experience shows that such machinery can be implemented (as described, for example, in this publication); it normally requires some generic programming and dynamic import capabilities. Coming up with a good configuration data model is more challenging.

Recent advances in generative AI-based copilots raise new questions about achieving the most cost-efficient outcome. To understand the problem, let's revisit the traditional compilation and configuration stack:

Code Generator

This general case may not apply to every ecosystem. Here's a breakdown of the typical layers:

  1. The Core Language is designed to be small (the "C" and Lisp tradition). It may or may not provide support for Reflection.
  2. As many extended capabilities as possible are provided by the Standard Library and third-party libraries and frameworks.
  3. Generic Meta-Programming: Support for features like C++ templates or Lisp macros is introduced early (C++, Rust) or later (Java, C#). Generics are a source of ongoing debate:
    1. Framework developers find them insufficiently expressive.
    2. Application developers struggle with their complexity.
    3. Scala exemplifies the potential downsides of overly complex generics.
  4. Despite criticism, macros (e.g., C preprocessor) persist as a tool for automated code generation, often compensating for generic limitations.
  5. Third-party vendors (often open-source) provide solutions that enhance or compensate for the standard library, typically using external configuration files (YAML, JSON, etc.).
  6. Specialized generators very often use external blueprints or templates.

This complex structure has limitations. Generics can obscure the core language, macros are unsafe, configuration files are poorly disguised scripts, and code generators rely on inflexible static templates. These limitations are why I believe the current trend of Internal Development Platforms has limited growth potential.

As we look forward to the role of generative AI in streamlining these processes, the question becomes: Can generative AI-based copilots not only simplify but also enhance our ability to balance commonality and variability in software engineering?

This is going to be the main topic of my future research to be reported in the next publications. Stay tuned.

· 10 min read
Nathan Tarbert

Cover Art

TL;DR

As the saying goes, there are several ways to skin a cat...in the tech world, there are 5 ways to skin a Lambda Function 🤩

Let's Compare 5 DevTools

Introduction

As developers try to bridge the gap between development and DevOps, I thought it would be helpful to compare Programming Languages and DevTools.

Let's start with the idea of a simple function that would upload a text file to a Bucket in our cloud app.

The next step is to demonstrate several ways this could be accomplished.

Note: In cloud development, managing permissions and bucket identities, packaging runtime code, and handling multiple files for infrastructure and runtime add layers of complexity to the development process.

Let's get started

Let's dive into some code!


1. Wing

After installing Wing, let's create a file: main.w

If you aren't familiar with the Wing Programming Language, please check out the open-source repo HERE


bring cloud;

let bucket = new cloud.Bucket();

new cloud.Function(inflight () => {
  bucket.put("hello.txt", "world!");
});

Let's do a breakdown of what's happening in the code above.

bring cloud is Wing's import syntax

Create a Cloud Bucket: let bucket = new cloud.Bucket(); initializes a new cloud bucket instance.

On the backend, the Wing platform provisions a new bucket in your cloud provider's environment. This bucket is used for storing and retrieving data.

Create a Cloud Function: The new cloud.Function(inflight () => { ... }); statement defines a new cloud function.

This function, when triggered, performs the actions defined within its body.

bucket.put("hello.txt", "world!"); uploads a file named hello.txt with the content world! to the cloud bucket created earlier.

Compile & Deploy to AWS

  • wing compile --platform tf-aws main.w

  • terraform apply

That's it! Wing takes care of the complexity: permissions, getting the bucket identity into the runtime code, packaging the runtime code, and having to write separate files for infrastructure and runtime.

Not to mention it generates IaC (Terraform or CloudFormation), plus JavaScript that you can deploy with existing tools.

Wing Console

But while you develop, you can use the local simulator to get instant feedback and shorten iteration cycles.

Wing even has a playground that you can try out in the browser!

2. Pulumi

Step 1: Initialize a New Pulumi Project

mkdir pulumi-s3-lambda-ts
cd pulumi-s3-lambda-ts
pulumi new aws-typescript

Step 2. Write the code to upload a text file to S3.

This will be your project structure.

pulumi-s3-lambda-ts/
├─ src/
│  ├─ index.ts        # Pulumi infrastructure code
│  └─ lambda/
│     └─ index.ts     # Lambda function code to upload a file to S3
├─ tsconfig.json      # TypeScript configuration
└─ package.json       # Node.js project file with dependencies

Let's add this code to index.ts

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Create an AWS S3 bucket
const bucket = new aws.s3.Bucket("myBucket", {
  acl: "private",
});

// IAM role for the Lambda function
const lambdaRole = new aws.iam.Role("lambdaRole", {
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Action: "sts:AssumeRole",
        Principal: {
          Service: "lambda.amazonaws.com",
        },
        Effect: "Allow",
        Sid: "",
      },
    ],
  }),
});

// Attach the AWSLambdaBasicExecutionRole policy
new aws.iam.RolePolicyAttachment("lambdaExecutionRole", {
  role: lambdaRole,
  policyArn: aws.iam.ManagedPolicy.AWSLambdaBasicExecutionRole,
});

// Policy to allow the Lambda function to access the S3 bucket
const lambdaS3Policy = new aws.iam.Policy("lambdaS3Policy", {
  policy: bucket.arn.apply((arn) =>
    JSON.stringify({
      Version: "2012-10-17",
      Statement: [
        {
          Action: ["s3:PutObject", "s3:GetObject"],
          Resource: `${arn}/*`,
          Effect: "Allow",
        },
      ],
    })
  ),
});

// Attach the policy to the Lambda role
new aws.iam.RolePolicyAttachment("lambdaS3PolicyAttachment", {
  role: lambdaRole,
  policyArn: lambdaS3Policy.arn,
});

// Lambda function
const lambda = new aws.lambda.Function("myLambda", {
  code: new pulumi.asset.AssetArchive({
    ".": new pulumi.asset.FileArchive("./src/lambda"),
  }),
  runtime: aws.lambda.Runtime.NodeJS12dX,
  role: lambdaRole.arn,
  handler: "index.handler",
  environment: {
    variables: {
      BUCKET_NAME: bucket.bucket,
    },
  },
});

export const bucketName = bucket.id;
export const lambdaArn = lambda.arn;

Next, create a lambda/ directory containing an index.ts file for the Lambda function code:

import { S3 } from "aws-sdk";

const s3 = new S3();

export const handler = async (): Promise<void> => {
  const bucketName = process.env.BUCKET_NAME || "";
  const fileName = "example.txt";
  const content = "Hello, Pulumi!";

  const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: content,
  };

  try {
    await s3.putObject(params).promise();
    console.log(
      `File uploaded successfully at https://${bucketName}.s3.amazonaws.com/${fileName}`
    );
  } catch (err) {
    console.log(err);
  }
};

Step 3: TypeScript Configuration (tsconfig.json)

{
  "compilerOptions": {
    "target": "ES2018",
    "module": "CommonJS",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules", "**/*.spec.ts"]
}

After creating a Pulumi project, a Pulumi.yaml file is generated automatically:

name: s3-lambda-pulumi
runtime: nodejs
description: A simple example that uploads a file to an S3 bucket using a Lambda function
template:
  config:
    aws:region:
      description: The AWS region to deploy into
      default: us-west-2

Deploy with Pulumi

Ensure your lambda directory contains the compiled index.js file. Then, run the following command to deploy your infrastructure: pulumi up


3. AWS-CDK

Step 1: Initialize a New CDK Project

mkdir cdk-s3-lambda
cd cdk-s3-lambda
cdk init app --language=typescript

Step 2: Add Dependencies

npm install @aws-cdk/aws-lambda @aws-cdk/aws-s3

Step 3: Define the AWS Resources in CDK

File: index.ts

import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import * as s3 from "@aws-cdk/aws-s3";

export class CdkS3LambdaStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create the S3 bucket
    const bucket = new s3.Bucket(this, "MyBucket", {
      removalPolicy: cdk.RemovalPolicy.DESTROY, // NOT recommended for production code
    });

    // Define the Lambda function
    const lambdaFunction = new lambda.Function(this, "MyLambda", {
      runtime: lambda.Runtime.NODEJS_14_X, // Define the runtime
      handler: "index.handler", // Specifies the entry point
      code: lambda.Code.fromAsset("lambda"), // Directory containing your Lambda code
      environment: {
        BUCKET_NAME: bucket.bucketName,
      },
    });

    // Grant the Lambda function permissions to write to the S3 bucket
    bucket.grantWrite(lambdaFunction);
  }
}

Step 4: Lambda Function Code

Create the same file structure as in the Pulumi example, with the Lambda code in lambda/index.ts:

import { S3 } from 'aws-sdk';
const s3 = new S3();

exports.handler = async (event: any) => {
  const bucketName = process.env.BUCKET_NAME;
  const fileName = 'uploaded_file.txt';
  const content = 'Hello, CDK! This file was uploaded by a Lambda function!';

  try {
    const result = await s3.putObject({
      Bucket: bucketName!,
      Key: fileName,
      Body: content,
    }).promise();

    console.log(`File uploaded successfully: ${result}`);
    return {
      statusCode: 200,
      body: `File uploaded successfully: ${fileName}`,
    };
  } catch (error) {
    console.log(error);
    return {
      statusCode: 500,
      body: `Failed to upload file: ${error}`,
    };
  }
};


Deploy the CDK Stack

First, compile your TypeScript code: npm run build, then

Deploy your CDK to AWS: cdk deploy


4. CDK for Terraform

Step 1: Initialize a New CDKTF Project

mkdir cdktf-s3-lambda-ts
cd cdktf-s3-lambda-ts

Then, initialize a new CDKTF project using TypeScript:

cdktf init --template="typescript" --local

Step 2: Install AWS Provider and Add Dependencies


npm install @cdktf/provider-aws

Step 3: Define the Infrastructure

Edit main.ts to define the S3 bucket and Lambda function:

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import { AwsProvider, s3, lambdafunction, iam } from "@cdktf/provider-aws";

class MyStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new AwsProvider(this, "aws", { region: "us-west-2" });

    // S3 bucket
    const bucket = new s3.S3Bucket(this, "lambdaBucket", {
      bucketPrefix: "cdktf-lambda-",
    });

    // IAM role for Lambda
    const role = new iam.IamRole(this, "lambdaRole", {
      name: "lambda_execution_role",
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "sts:AssumeRole",
            Principal: { Service: "lambda.amazonaws.com" },
            Effect: "Allow",
          },
        ],
      }),
    });

    new iam.IamRolePolicyAttachment(this, "lambdaPolicy", {
      role: role.name,
      policyArn:
        "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    });

    const lambdaFunction = new lambdafunction.LambdaFunction(this, "MyLambda", {
      functionName: "myLambdaFunction",
      handler: "index.handler",
      role: role.arn,
      runtime: "nodejs14.x",
      s3Bucket: bucket.bucket, // Assuming the Lambda code is uploaded to this bucket
      s3Key: "lambda.zip", // Assuming the Lambda code zip file is named lambda.zip
      environment: {
        variables: {
          BUCKET_NAME: bucket.bucket,
        },
      },
    });

    // Grant the Lambda function permissions to write to the S3 bucket.
    // CDKTF resolves `${bucket.bucket}` as a Terraform token, so plain
    // string interpolation is sufficient here.
    new s3.S3BucketPolicy(this, "BucketPolicy", {
      bucket: bucket.bucket,
      policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "s3:*",
            Resource: `arn:aws:s3:::${bucket.bucket}/*`,
            Effect: "Allow",
            Principal: {
              AWS: role.arn,
            },
          },
        ],
      }),
    });
  }
}

const app = new App();
new MyStack(app, "cdktf-s3-lambda-ts");
app.synth();

Step 4: Lambda Function Code

The Lambda function code should be written in TypeScript and compiled into JavaScript, as AWS Lambda natively executes JavaScript. Here's an example index.ts for the Lambda function that you need to compile and zip:

import { S3 } from "aws-sdk";

const s3 = new S3();

exports.handler = async () => {
  const bucketName = process.env.BUCKET_NAME || "";
  const content = "Hello, CDKTF!";
  const params = {
    Bucket: bucketName,
    Key: `upload-${Date.now()}.txt`,
    Body: content,
  };

  try {
    await s3.putObject(params).promise();
    return { statusCode: 200, body: "File uploaded successfully" };
  } catch (err) {
    console.error(err);
    return { statusCode: 500, body: "Failed to upload file" };
  }
};

You need to compile this TypeScript code to JavaScript, zip it, and upload it to the S3 bucket manually or using a script.

Ensure the s3Key in the LambdaFunction resource points to the correct zip file in the bucket.

Compile & Deploy Your CDKTF Project

Compile your project using npm run build

Generate Terraform Configuration Files

Run the cdktf synth command. This command executes your CDKTF app, which generates Terraform configuration files (*.tf.json files) in the cdktf.out directory:

Deploy Your Infrastructure

cdktf deploy

5. Terraform

Step 1: Terraform Setup

Define your AWS provider and S3 bucket. Create a file named main.tf with the following:


provider "aws" {
  region = "us-west-2" # Choose your AWS region
}

resource "aws_s3_bucket" "lambda_bucket" {
  bucket_prefix = "lambda-upload-bucket-"
  acl           = "private"
}

resource "aws_iam_role" "lambda_execution_role" {
  name = "lambda_execution_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_policy" "lambda_s3_policy" {
  name        = "lambda_s3_policy"
  description = "IAM policy for Lambda to access S3"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["s3:PutObject", "s3:GetObject"]
        Effect   = "Allow"
        Resource = "${aws_s3_bucket.lambda_bucket.arn}/*"
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_s3_access" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.lambda_s3_policy.arn
}

resource "aws_lambda_function" "uploader_lambda" {
  function_name = "S3Uploader"

  s3_bucket = "YOUR_DEPLOYMENT_BUCKET_NAME" # Set your deployment bucket name here
  s3_key    = "lambda.zip"                  # Upload your ZIP file to S3 and set its key here

  handler = "index.handler"
  role    = aws_iam_role.lambda_execution_role.arn
  runtime = "nodejs14.x"

  environment {
    variables = {
      BUCKET_NAME = aws_s3_bucket.lambda_bucket.bucket
    }
  }
}

Step 2: Lambda Function Code (TypeScript)

Create a TypeScript file index.ts for the Lambda function:


import { S3 } from 'aws-sdk';

const s3 = new S3();

exports.handler = async (event: any) => {
  const bucketName = process.env.BUCKET_NAME;
  const fileName = `uploaded-${Date.now()}.txt`;
  const content = 'Hello, Terraform and AWS Lambda!';

  try {
    await s3.putObject({
      Bucket: bucketName!,
      Key: fileName,
      Body: content,
    }).promise();

    console.log('Upload successful');
    return {
      statusCode: 200,
      body: JSON.stringify({ message: 'Upload successful' }),
    };
  } catch (error) {
    console.error('Upload failed:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({ message: 'Upload failed' }),
    };
  }
};

Finally, after uploading your Lambda function code to the specified S3 bucket, run terraform apply.


Wrapping it up!

I hope you enjoyed this comparison of five simple ways to write a function in our cloud app that uploads a text file to a Bucket.

As you can see, most of the code becomes very complex, except for Wing.

If you are intrigued about Wing and like how we are simplifying the process of cloud development, please join our community and reach out to us on Twitter.

· 30 min read
Asher Sterkin

Exploring Cloud Hexagonal Design with Winglang, TypeScript, and Ports & Adapters

As I argued elsewhere, automatically generating cloud infrastructure specifications directly from application code represents “The Next Logical Step in Cloud Automation.” This approach, sometimes referred to as “Infrastructure From Code” (IfC), aims to:

Ensure automatic coordination of four types of interactions with cloud services: life cycle management, pre- and post-configuration, consumption, and operation, while making pragmatic choices of the most appropriate levels of API abstraction for each cloud service and leaving enough control to the end-user for choosing the most suitable vendor, based on personal preferences, regulations or brownfield deployment constraints

While analyzing the IfC Technology Landscape a year ago, I identified five attributes essential for analyzing major offerings in this space:

  • Programming Language — is an IfC product based on an existing mainstream programming language(s) or embarks on developing a new one?
  • Runtime Environment — does it still use some existing runtime environment (e.g. NodeJS)?
  • API — is it proprietary or some form of standard/open source? Cloud-specific or cloud-agnostic?
  • IDE — does it assume its proprietary, presumably cloud-based, Integrated Development Environment or could be integrated with one or more of existing IDEs?
  • Deployment — does it assume deployment applications/services to its own cloud account or produced artifacts could be deployed to the customer’s own cloud account?

At that time, Winglang appeared on my radar as a brand-new cloud programming-oriented language running atop the NodeJS runtime. It comes with an optional plugin for VSCode, its own console, and fully supports cloud self-hosting via popular cloud orchestration engines such as Terraform and AWS CDK.

Today, I want to explore how well Winglang is suited for supporting the Clean Architecture style, based on the Hexagonal Ports and Adapters pattern. Additionally, I’m interested in how easily Winglang can be integrated with TypeScript, a representative of mainstream programming languages that can be compiled into JavaScript and run atop the NodeJS runtime engine.

Disclaimer

This publication is a technology research report. While it could potentially be converted into a tutorial, it currently does not serve as one. The code snippets in Winglang are intended to be self-explanatory. The language syntax falls within the common Algol-60 family and is, in most cases, straightforward to understand. In instances of uncertainty, please consult the Winglang Language Reference, Library, and Examples. For introductory materials, refer to the References.

Acknowledgements

Many thanks to Elad Ben-Israel, Shai Ber, and Nathan Tarbert for the valuable feedback on the early draft of this paper.

Table of Contents

  1. Disclaimer
  2. Acknowledgements
  3. Part One: Creating the Core
    3.1 Step Zero: “Hello, Winglang!” Preflight
    3.2 Step One: “Hello, Winglang!” Inflight
    3.3 Step Two: Generalizing Functionality by Accepting the Argument
    3.4 Deciding if the Hexagon Approach is Right for You
  4. Part Two: Encapsulating the Core within Hexagon
    4.1 Step Four: Extracting Core
    4.2 Step Five: Extracting the makeGreeting(name) Request Handler
    4.3 Step Six: Connecting the Handler via Cloud Function Port
    4.4 Step Seven: Reimplementing the Core in TypeScript
    4.5 Step Eight: Implementing the REST API Port
    4.6 Step Nine: Extracting the REST API Request Adapter
    4.7 Step Ten: Testing the REST API Request Adapter
    4.8 Step Eleven: Extracting the GreetingService
    4.9 Step Twelve: Enhancing REST API Request Adapter for Content Negotiation
  5. References
    5.1 Winglang Publications
    5.2 My Publications on “Infrastructure From Code”
    5.3 Hexagonal Architecture

Part One: Creating the Core

Step Zero: “Hello, Winglang!” Preflight

Creating the simplest possible “Hello, World!” application is a crucial, yet often overlooked, validation step in new software technology. Although such an application lacks practical utility, it reveals the general accessibility of the technology to newcomers. As a marketing wit once told me, “We have only one chance to make a first impression.” So, let’s begin with a straightforward one-liner in Winglang.

About Winglang: Winglang is an innovative cloud-oriented programming language designed to simplify cloud application development. It integrates seamlessly with cloud services, offering a unique approach to building and deploying applications directly in the cloud environment. This makes Winglang an intriguing option for developers looking to leverage cloud capabilities more effectively.

Installing Winglang is straightforward, assuming you already have npm and terraform installed and configured on your computer. As a technology researcher, I primarily work with remote desktops. Therefore, I won’t delve into the details of preparing your workstation here. My personal setup, once stabilized, will be shared in a separate publication.

My first step is to create a one-line application that prints the sentence “Hello, Winglang!” In Winglang, this can indeed be done in a single line:

log("Hello, Winglang!");

However, to execute this one line of code, we need to compile it by typing wing compile:

Image1

Winglang adopts an intriguing approach by distinctly separating the phases of programmatic definition of cloud resources during compilation and their use during runtime. This is articulated in Winglang as Preflight and Inflight execution phases.

Simply put, the Preflight phase occurs when application code is compiled into a target orchestration engine template, such as a local simulator or Terraform, while the Inflight phase is when the application code executes within a Cloud Function or Container.

The ability to use the same syntax for programming the compilation phase and even print logs is quite a unique feature. For comparison, consider the ability to use the same syntax for programming “C” macros or C++ templates to print debugging logs of the compilation phase, just as you would program the runtime phase.
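
To make the distinction concrete, the closest mainstream analogy is code that runs when a definition is evaluated versus code that runs when it is invoked. A rough TypeScript sketch (my illustration of the idea, not how Winglang is implemented):

```typescript
// "Preflight": this body runs once, while the resource graph is being defined.
function defineFunction(handler: () => string) {
  console.log("preflight: provisioning a cloud function");
  // "Inflight": the returned closure runs later, on each invocation.
  return () => {
    console.log("inflight: handling an event");
    return handler();
  };
}

const invoke = defineFunction(() => "Hello, Winglang!");
// Provisioning already happened above; only now does the handler body run.
console.log(invoke());
```

The difference is that Winglang makes this two-phase split a first-class language concept, with the compiler checking which values may cross the preflight/inflight boundary.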

Step One: “Hello, Winglang!” Inflight

Now, I aim to create the simplest possible application that prints the sentence “Hello, Winglang!” during runtime, that is during the Inflight phase. In Winglang, accomplishing this requires just a couple of lines, similar to what you’d expect in any mainstream programming language:

bring cloud;

log("Hello, Winglang, Preflight!");

let helloWorld = new cloud.Function(inflight (event: str) => {
  log("Hello, Winglang!");
});

By typing wing it in the VSCode Terminal, you can bring up the Winglang simulator (I prefer the preview in the editor). Click on cloud.Function, then on Invoke, and you will see the following:

Image2

This is pretty cool and Winglang definitely passes the initial smoke test.

Step Two: Generalizing Functionality by Accepting the name Argument

To move beyond simply printing static text, we’re going to slightly modify our initial function to return the greeting “Hello,<name>!”, where <name> is the function’s argument. The updated code, along with the simulator’s output, will look something like this:

Image3

Keep in mind, there’s no need to close the simulator. Simply edit the file, hit CTRL+S to save, and the simulator will automatically load the new version.

In today’s world, a system without test automation support hardly has a right to exist. Let’s add some tests to our simple function (now renamed to makeGreeting):

Image4

Again, there’s no need to close the simulator. The entire process is interactive and flows quite smoothly.

You can also run the tests via the command line in the VSCode Terminal:

Image5

The same test can also be run automatically in the cloud by typing, for example, wing test -t tf-aws. Additionally, the same code can be deployed on a target cloud.

Cloud neutrality support in Winglang is an important and fascinating topic, which will be covered in more detail in the Step Four: Extracting Core section.

Deciding if the Hexagon Approach is Right for You

If all you need is to develop simple Transaction Scripts that:

  • Are triggered by an event happening to a cloud resource, e.g., REST API Gateway.
  • Optionally retrieve data from another Cloud Resource, like a Blob Storage Bucket.
  • Perform some very simple calculations.
  • Optionally send data to another Cloud Resource, such as a Blob Storage Bucket.
  • Can ideally be written once and require minimal maintenance.

Then you may choose to stop here. Explore Winglang Examples to see what can be achieved today, and visit Winglang Issues for insights on current limitations and future plans. However, if you’re interested in exploring how Winglang supports complex software architectures with potentially intricate computational logic and long-term support requirements, you are welcome to proceed to Part Two of this publication.

Part Two: Encapsulating the Core within Hexagon

Hexagonal Architecture, introduced by Alistair Cockburn in 2005, represented a significant shift in the way software applications were structured. Also known as the Ports and Adapters pattern, this architectural style was designed to create a clear separation between an application’s core logic and its external components. It enables applications to be equally driven by users, programs, automated tests, or batch scripts, and allows for development and testing in isolation from runtime devices and databases. By organizing interactions through ‘ports’ and ‘adapters’, the architecture ensures that the application remains agnostic to the nature of external technologies and interfaces. This approach not only prevented the infiltration of business logic into user interface code but also enhanced the flexibility and maintainability of software, making it adaptable to various environments and technologies.
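
In plain TypeScript terms (an illustrative sketch, not tied to any framework), the pattern boils down to a core that knows only about port interfaces, with adapters supplying the concrete technology:

```typescript
// Port: an interface owned and defined by the core.
interface GreetingStore {
  save(greeting: string): void;
}

// Core: pure application logic, unaware of any concrete technology.
class GreetingCore {
  constructor(private store: GreetingStore) {}

  greet(name: string): string {
    const greeting = `Hello, ${name}!`;
    this.store.save(greeting);
    return greeting;
  }
}

// Adapter: binds the port to a concrete technology (here, memory for tests;
// in the cloud it could be a Bucket accessed via the cloud SDK).
class InMemoryStore implements GreetingStore {
  saved: string[] = [];
  save(greeting: string): void {
    this.saved.push(greeting);
  }
}

const store = new InMemoryStore();
const core = new GreetingCore(store);
console.log(core.greet("World"));
```

Because the core sees only the port, the in-memory adapter can be replaced by a cloud-backed one without modifying the core at all.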

While I believe that Alistair Cockburn, like many other practitioners, may have misinterpreted the original intent of layered software architecture as introduced by E.W. Dijkstra in his seminal work, “The Structure of ‘THE’ Multiprogramming System” (a topic I plan to address in a separate publication), the foundational idea he presents remains useful. As I argued in my earlier publication, the Ports metaphor aligns well with cloud resources that trigger specific events, while software modules interacting directly with the cloud SDK effectively function as Adapters.

Numerous attempts (see References) have been made to apply Hexagonal Architecture concepts to cloud and, more specifically, serverless development. A notable example is the blog post “Developing Evolutionary Architecture with AWS Lambda,” which showcases a repository structure closely aligned with what I envision. However, even this example employs a more complex application than what I believe is necessary for initial exploration. I firmly hold that we should fully understand and explore the simplest possible applications, at the “Hello, World!” level, before delving into more complex scenarios. With this in mind, let’s examine how far we can go in building a straightforward Greeting Service.

Step Four: Extracting Core

First and foremost, our goal is to extract the Core and ensure its complete independence from any external dependencies:

bring cloud;

pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    return "Hello, {name}!";
  }
}

At the moment, the Winglang Module System does not support public functions. It does, however, support public static class functions, which are semantically equivalent. Unfortunately, I cannot directly pass a public static inflight function to cloud.Function (it only works for closures), so I need to implement the cloud.IFunctionHandler interface. These limitations are fairly understandable and quite typical for a new programming system.

By extracting the core into a separate module, we can focus on what brings the application to life in the first place. This also enables extensive testing of the core logic independently, as shown below:

bring "./core" as core;
bring expect;

let greeting = new core.Greeting();

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", greeting.handle("World"));
  expect.equal("Hello, Winglang!", greeting.handle("Winglang"));
}

Keeping the simulator up with only the core test allows us to quickly explore application logic and discuss it with stakeholders without worrying about cloud resources. This approach often epitomizes what a true MVP (Minimum Viable Product) is about:

Image6

The main file is now streamlined, focusing on system-level packaging and testing:

bring cloud;
bring "./core" as core;


let greeting = new core.Greeting();
let makeGreeting = new cloud.Function(inflight (name: str): str => {
  log("Received: {name}");
  let result = greeting.handle(name);
  log("Returned: {result}");
  return result;
});


bring expect;

test "it will return 'Hello, `<name>`!'" {
  expect.equal("Hello, Winglang!", makeGreeting.invoke("Winglang"));
}

To consolidate everything, it’s time to introduce a Makefile to automate the entire process:


.PHONY: all test_core test_local test_remote

cloud ?= aws

all: test_remote

test_core:
	wing test test.core.main.w -t sim

test_local: test_core
	wing test main.w -t sim

test_remote: test_local
	wing test main.w -t tf-$(cloud)

Here, I’ve defined a Makefile variable cloud with the default value aws, which specifies the target cloud platform for remote tests. By using Terraform as an orchestration engine, I ensure that the same code and Makefile will run without any changes on any cloud platform supported by Winglang, such as aws, gcp, or azure.

The output of remote testing is worth examining:

Image7

As we can see, Winglang automatically converts the Preflight code into Terraform templates and invokes Terraform commands to deploy the resulting stack to the cloud. It then runs the same test, effectively executing the Inflight code on the actual cloud, aws in this case, and finally deletes all resources. In such cases, I don't even need to access the cloud console to monitor the process. I can treat the cloud as a supercomputer, working with it through Winglang's cross-compilation mechanism.

The project structure now mirrors our architectural intent:


greeting-service/
├── core/
│   └── Greeting.w
├── main.w
├── Makefile
└── test.core.main.w

Step Five: Extracting the makeGreeting(name) Request Handler

The core functionality should be purely computational, stateless, and free from side effects. This is crucial to ensure that the core does not depend on any external framework and can be fully tested automatically. Introducing states or external side effects would generally hinder this possibility. However, we still aim to isolate application logic from the real environment represented by Ports and Adapters. To achieve this, we introduce a separate Request Handler module, as follows:


bring cloud;
bring "../core" as core;

pub class Greeting impl cloud.IFunctionHandler {
  pub inflight handle(name: str): str {
    log("Received: {name}");
    let greeting = core.Greeting.makeGreeting(name);
    log("Returned: {greeting}");
    return greeting;
  }
}

In this case, the GreetingHandler is responsible for logging, which is a side effect. In more complex applications, it would communicate with external databases, message buses, third-party services, etc., via Ports and Adapters.

The core logic is now encapsulated as a plain function and is no longer derived from the cloud.IFunctionHandler interface:


pub class Greeting {
  pub static inflight makeGreeting(name: str): str {
    return "Hello, {name}!";
  }
}

The unit test for the core logic is accordingly simplified:

bring "./core" as core;
bring expect;

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, World!", core.Greeting.makeGreeting("World"));
  expect.equal("Hello, Wing!", core.Greeting.makeGreeting("Wing"));
}

The responsibility of connecting the handler and core logic now falls to the main.w module:

bring cloud;
bring "./handlers" as handlers;
bring expect;

let greetingHandler = new handlers.Greeting();
let makeGreetingFunction = new cloud.Function(greetingHandler);

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingFunction.invoke("Wing"));
}

Once again, the project structure reflects our architectural intent:

greeting-service/
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── main.w
├── Makefile
└── test.core.main.w

It should be noted that for a simple service like Greeting, such an evolved structure could be considered over-engineering, not justified by actual business needs. However, as a software architect, it's essential for me to outline a general skeleton for a fully-fledged service without getting bogged down in application-specific complexities that might not yet be known. By isolating different system components from one another, we make future system evolution less painful, and in many cases feasible at all. In such cases, investing in a preliminary system structure by following best practices is fully justified and necessary. As Grady Booch famously said, "One cannot refactor a doghouse into a skyscraper."

In general, keeping core functionality purely stateless and free from side effects, and isolating stateful application behavior with potential side effects into separate handlers, is conceptually equivalent to the monadic programming style widely adopted in Functional Programming environments.
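To make the parallel concrete, here is a small TypeScript sketch of the same separation: a pure core function and an effect-carrying handler around it. The code is illustrative, not from the post; the injected logger stands in for Winglang's log.

```typescript
// Pure, side-effect-free core: deterministic and trivially testable.
function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// Effectful handler wrapping the core. The logger is injected, so the
// side effect can be observed (or replaced) in tests.
function makeHandler(log: (msg: string) => void) {
  return (name: string): string => {
    log(`Received: ${name}`);
    const greeting = makeGreeting(name);
    log(`Returned: ${greeting}`);
    return greeting;
  };
}

// Usage: effects live at the edge; the core never touches them.
const messages: string[] = [];
const handle = makeHandler((msg) => messages.push(msg));
const result = handle("Wing");
```

Because the core is pure, its tests need no mocks at all; only the thin handler layer needs an observable logger.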

Step Six: Connecting the Handler via Cloud Function Port

We can now remove the direct cloud.Function creation from the main module and encapsulate it into a separate GreetingFunction port as follows:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring expect;

let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingFunction(greetingHandler);

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

The GreetingFunction is defined in a separate module like this:

bring cloud;

pub class GreetingFunction {
  _f: cloud.Function;

  new(handler: cloud.IFunctionHandler) {
    this._f = new cloud.Function(handler);
  }

  pub inflight invoke(name: str): str {
    return this._f.invoke(name);
  }
}

This separation of concerns allows the main.w module to focus on connecting different parts of the system together. Specific port configuration is performed in a separate module dedicated to that purpose. While such isolation of GreetingHandler might seem unnecessary at this stage, it becomes more relevant when considering the nuanced configuration supported by Winglang cloud.Function, including execution platform (e.g., AWS Lambda vs Container), environment variables, timeout, maximum resources, etc. Extracting the GreetingFunction port definition into a separate module naturally facilitates the concealment of these details.

The project structure is updated accordingly:

greeting-service/
├── core/
│   └── Greeting.w
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── main.w
├── Makefile
└── test.core.main.w

The adopted naming convention for port modules also allows for the inclusion of multiple port definitions within the same project, enabling the selection of the required one based on external configuration.

Step Seven: Reimplementing the Core in TypeScript

There are several reasons why a project might consider implementing its core functionality in a mainstream programming language that can still run atop the underlying runtime environment. TypeScript, for example, compiles to JavaScript and can be integrated with Winglang. Here are some of the most common reasons:

  • Risk Mitigation: Preserving the core regardless of the cloud programming environment in use.
  • Available Skills: It’s often easier to find developers familiar with a mainstream language than with a new one.
  • Existing Code Base: Typical brownfield situations.
  • 3rd Party Libraries: Essential for core functionality, such as specific algorithms.
  • Automation Ecosystem Maturity: More options are available for exhaustive testing of core functionality in mainstream languages.
  • Support for Specific Styles: For instance, better support for pure functional programming.

The Greeting service core functionality, redeveloped in TypeScript, would look like this:

export function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

Its unit test, developed using the jest framework, would be:

import { makeGreeting } from "@core/makeGreeting";

describe("makeGreeting", () => {
  it("should return a greeting with the provided name", () => {
    const name = "World";
    const expected = "Hello, World!";
    const result = makeGreeting(name);
    expect(result).toBe(expected);
  });
});

To make it accessible to Winglang language modules, a simple wrapper is needed:

pub inflight class Greeting {
  pub extern "../target/core/makeGreeting.js" static inflight makeGreeting(name: str): str;
}
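A note on what extern expects: the referenced .js file must be a CommonJS module whose exports carry a function with the matching name, which is exactly the shape tsc produces for `export function makeGreeting` when compiling with `"module": "commonjs"`. A runnable stand-in sketch of that export shape (a plain object is used in place of the real module `exports` so the snippet is self-contained):

```typescript
// Stand-in for the CommonJS `exports` object of the compiled file;
// Winglang's extern resolves the function by name on it.
const moduleExports: Record<string, (name: string) => string> = {};

function makeGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// Corresponds to the `exports.makeGreeting = makeGreeting;` line that
// tsc emits for a CommonJS build.
moduleExports.makeGreeting = makeGreeting;
```

If the compiled file did not export a function under this exact name, the Winglang wrapper above would fail to resolve it.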

The main technical challenge is to place the compiled JavaScript version where the Winglang wrapper can find it. For this project, I decided to use the target folder, where the Winglang compiler puts its artifacts. To achieve this, I created a dedicated tsconfig.build.json:

{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./target",
    // ... production-specific compiler options ...
  },
  "exclude": ["core/*.test.ts"]
}
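The Makefile below invokes `npm run test` and `npm run build`; the corresponding package.json scripts are not shown in the post, so the exact commands here are an assumption, but they would look roughly like this:

```json
{
  "scripts": {
    "test": "jest",
    "build": "tsc -p tsconfig.build.json"
  }
}
```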

The Makefile was also modified to automate the process:

.PHONY: all install test_core build_core test_local test_remote

cloud ?= aws

all: test_remote

install:
	npm install

test_core: install
	npm run test

build_core: test_core
	npm run build

test_local: build_core
	wing test main.w -t sim

test_remote: test_local
	wing test main.w -t tf-$(cloud)

The folder structure reflects the changes made:

greeting-service/
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Eight: Implementing the REST API Port

Now, let’s consider making our Greeting service accessible via a REST API. This could be necessary, for instance, to enable demonstrations from a web browser or to facilitate calls from external services that, due to security or technological constraints, cannot communicate directly with the GreetingFunction port. To accomplish this, we need to introduce a new Port definition and modify the main.w module, while keeping everything else unchanged:

bring cloud;
bring http;

pub class GreetingApi {
  pub apiUrl: str;

  new(handler: cloud.IFunctionHandler) {
    let api = new cloud.Api();

    api.get("/greetings", inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
      return cloud.ApiResponse {
        status: 200,
        body: handler.handle(request.query.get("name"))
      };
    });

    this.apiUrl = api.url;
  }

  pub inflight invoke(name: str): str {
    let result = http.get("{this.apiUrl}/greetings?name={name}");
    assert(200 == result.status);
    return result.body;
  }
}

To maintain a consistent testing interface, I implemented an invoke method that functions similarly to the GreetingFunction port. This design choice is not mandatory but rather a matter of convenience to minimize the amount of change.

The main.w module now allocates the GreetingApi port:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring expect;

let greetingHandler = new handlers.Greeting();
let makeGreetingService = new ports.GreetingApi(greetingHandler);

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

Since there is now something to use externally, the Makefile was modified to include deploy and destroy targets, as follows:


.PHONY: all install test_core build_core update test_adapters test_local test_remote compile tf-init deploy destroy

cloud ?= aws
target := target/main.tf$(cloud)

all: test_remote

install:
	npm install

test_core: install
	npm run test

build_core: test_core
	npm run build

update:
	sudo npm update -g wing

test_adapters: update
	wing test test.adapters.main.w -t sim

test_local: build_core test_adapters
	wing test test.main.w -t sim

test_remote: test_local
	wing test test.main.w -t tf-$(cloud)

compile:
	wing compile main.w -t tf-$(cloud)

tf-init: compile
	( \
	  cd $(target) ;\
	  terraform init \
	)

deploy: tf-init
	( \
	  cd $(target) ;\
	  terraform apply -auto-approve \
	)

destroy:
	( \
	  cd $(target) ;\
	  terraform destroy -auto-approve \
	)

The browser screen looks almost as expected, though notice the strange JSON.parse error message (it will be addressed in a forthcoming section):

Image8

The project structure is updated to reflect these changes:

greeting-service/
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Nine: Extracting the REST API Request Adapter

The GreetingApi port implementation introduced in the previous section slightly violates the Single Responsibility Principle, which states: “A class should have only one reason to change.” Currently, there are multiple potential reasons for change:

  1. HTTP Routing Conventions: URL path with or without variable parts.
  2. HTTP Request Processing.
  3. HTTP Response Formatting.

We can generally agree that while HTTP Request Processing and HTTP Response Formatting are closely related, HTTP Routing stands apart. To decouple these functionalities, we introduce an ApiAdapter responsible for converting cloud.ApiRequest to cloud.ApiResponse, thereby extracting this functionality from the GreetingApi port.

To achieve this, we introduce a new IRestApiAdapter interface:

bring cloud;

pub interface IRestApiAdapter {
  inflight handle(request: cloud.ApiRequest): cloud.ApiResponse;
}

The GreetingApiAdapter class is defined as follows:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;

pub class GreetingApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    return cloud.ApiResponse {
      status: 200,
      body: this._h.handle(request.query.get("name"))
    };
  }
}

The modified GreetingApi port class is now:

bring cloud;
bring http;
bring "../adapters/IRestApiAdapter.w" as restApiAdapter;

pub class GreetingApi {
  _apiUrl: str;
  _adapter: restApiAdapter.IRestApiAdapter;

  new(adapter: restApiAdapter.IRestApiAdapter) {
    let api = new cloud.Api();
    this._adapter = adapter;

    api.get("/greetings", inflight (request: cloud.ApiRequest): cloud.ApiResponse => {
      return this._adapter.handle(request);
    });
    this._apiUrl = api.url;
  }

  pub inflight invoke(name: str): str {
    let result = http.get("{this._apiUrl}/greetings?name={name}");
    assert(200 == result.status);
    return result.body;
  }
}

The main.w module is updated accordingly:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring "./adapters" as adapters;
bring expect;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingApiAdapter(greetingHandler);
let makeGreetingService = new ports.GreetingApi(greetingStringAdapter);

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", makeGreetingService.invoke("Wing"));
}

The project structure reflects these changes:

greeting-service/
├── adapters/
│   ├── greetingApiAdapter.w
│   └── IRestApiAdapter.w
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── tsconfig.build.json
└── tsconfig.json

Step Ten: Testing the REST API Request Adapter

Extracting the GreetingApiAdapter from the GreetingApi port might seem like a purist move, performed merely to demonstrate the potential value of Adapters. However, this perspective changes when we consider serious testing. The GreetingApiAdapter implementation from the previous section assumes that the name argument always comes within the query part of the HTTP request. But what happens if it doesn't? The system will crash, whereas according to the HTTP standard it should respond with a 400 (Bad Request) status code in such cases. The modified structure allows us to introduce a separate unit test fully dedicated to testing the GreetingApiAdapter:

bring cloud;
bring expect;
bring "./adapters" as adapters;
bring "./handlers" as handlers;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingStringRestApiAdapter(greetingHandler);

test "it will return 200 and correct answer when name supplied" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("Hello, Wing!", response.body);
}

test "it will return 400 and error message when name is not supplied" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"somethingElse" => "doesNotMatter"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(400, response.status);
  expect.equal("Query name=<name> is missing", response.body);
}

Running this test with the existing implementation will result in failure, necessitating the following changes:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;

pub class GreetingStringRestApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    if let name = request.query.tryGet("name") {
      return cloud.ApiResponse {
        status: 200,
        body: this._h.handle(name)
      };
    } else {
      return cloud.ApiResponse {
        status: 400,
        body: "Query name=<name> is missing"
      };
    }
  }
}

The main lesson from this story is that system complexity can exist in multiple places, not always within the core logic. Separation of concerns aids in managing this complexity through dedicated and isolated test suites.

Step Eleven: Extracting the GreetingService

After all the modifications made, the resulting version of the main.w module has become quite complex, incorporating the logic of wiring system handlers, ports, and adapters. Additionally, maintaining end-to-end system tests within the same module is only feasible up to a point. Different testing and production environments may be necessary to address various security and cost considerations. To tackle these issues, it's advisable to extract the GreetingService configuration into a separate module:

bring "./handlers" as handlers;
bring "./ports" as ports;
bring "./adapters" as adapters;

pub class Greeting {
  pub api: ports.GreetingApi;

  new() {
    let greetingHandler = new handlers.Greeting();
    let greetingStringAdapter = new adapters.GreetingStringRestApiAdapter(greetingHandler);
    this.api = new ports.GreetingApi(greetingStringAdapter);
  }
}

Ideally, the creation of the Greeting service object should be implemented using a static method, following the Factory Method design pattern. However, I encountered difficulties in this approach, as Preflight static functions require a context, which I was unable to determine how to obtain. Nonetheless, even in this form, extracting the Greeting service class opens up multiple possibilities for different configurations in testing and production environments. The main.w module can now be relieved of the testing code:

bring "./service.w" as service;


let greetingService = new service.Greeting();

The system end-to-end test is now placed in its dedicated test.main.w module:

bring "./service.w" as service;
bring expect;

let greetingService = new service.Greeting();

test "it will return 'Hello, <name>!'" {
  expect.equal("Hello, Wing!", greetingService.api.invoke("Wing"));
}

In this case, code duplication is minimal, and as previously mentioned, a real system will have different configurations for test and production environments. The detailed specifications for these will be passed to the Greeting service class constructor.
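As an aside, the Factory Method configuration mentioned above can be sketched in TypeScript terms. Every name below is hypothetical and only illustrates the pattern, not actual Winglang code: a static creator owns the wiring of handlers, adapters, and ports, so callers pick a named configuration rather than concrete classes.

```typescript
interface GreetingPort {
  invoke(name: string): string;
}

class GreetingService {
  // Private constructor forces all creation through the factory method.
  private constructor(private readonly port: GreetingPort) {}

  // Factory method: callers choose an environment, not a wiring.
  static create(env: "test" | "production"): GreetingService {
    const core = (name: string) => `Hello, ${name}!`;
    // A real system would wire genuinely different ports/adapters here;
    // this sketch only adds logging in the production branch.
    const port: GreetingPort =
      env === "production"
        ? {
            invoke: (name) => {
              console.log(`invoking greeting for ${name}`);
              return core(name);
            },
          }
        : { invoke: core };
    return new GreetingService(port);
  }

  greet(name: string): string {
    return this.port.invoke(name);
  }
}

const svc = GreetingService.create("test");
const out = svc.greet("Wing");
```

The private constructor plus static creator is exactly the shape the author wanted for the Winglang service class.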

Step Twelve: Enhancing REST API Request Adapter for Content Negotiation

Now, I aim to put the resulting architecture to the final test by partially implementing HTTP Content Negotiation. Specifically, the Greeting service should support returning a greeting statement as plain text, HTML, or JSON, depending on the client's request. The appropriate way to express these requirements is to modify the GreetingApiAdapter unit test as follows:

bring cloud;
bring expect;
bring "./adapters" as adapters;
bring "./handlers" as handlers;

let greetingHandler = new handlers.Greeting();
let greetingStringAdapter = new adapters.GreetingApiAdapter(greetingHandler);

test "it will return 200 and plain text answer when name is supplied without headers" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("Hello, Wing!", response.body);
  expect.equal("text/plain", response.headers?.get("Content-Type"));
}

test "it will return 200 and json answer when name is supplied with headers Accept: application/json" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    headers: {"Accept" => "application/json"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("application/json", response.headers?.get("Content-Type"));
  let expected = Json.stringify(Json {
    greeting: "Hello, Wing!"
  });
  expect.equal(expected, response.body);
}

test "it will return 200 and html answer when name is supplied with headers Accept: text/html" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"name" => "Wing"},
    headers: {"Accept" => "text/html"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(200, response.status);
  expect.equal("text/html", response.headers?.get("Content-Type"));
  let body = response.body ?? "";
  assert(body.contains("Hello, Wing!"));
}

test "it will return 400 and error message when name is not supplied" {
  let request = cloud.ApiRequest {
    method: cloud.HttpMethod.GET,
    path: "/greetings",
    query: {"somethingElse" => "doesNotMatter"},
    vars: {}
  };
  let response = greetingStringAdapter.handle(request);
  expect.equal(400, response.status);
  expect.equal("Query name=<name> is missing", response.body);
  expect.equal("text/plain", response.headers?.get("Content-Type"));
}

Suddenly, having a separate class for HTTP request/response handling doesn’t seem like a purely theoretical exercise, but rather a very pragmatic architectural decision. To make these tests pass, substantial modifications are needed in the GreetingApiAdapter class:

bring cloud;
bring "./IRestApiAdapter.w" as restApiAdapter;
bring "../core" as core;

pub class GreetingApiAdapter impl restApiAdapter.IRestApiAdapter {
  _h: cloud.IFunctionHandler;

  new(handler: cloud.IFunctionHandler) {
    this._h = handler;
  }

  inflight static _textPlain(greeting: str): str {
    return greeting;
  }

  inflight static _applicationJson(greeting: str): str {
    let responseBody = Json {
      greeting: greeting
    };
    return Json.stringify(responseBody);
  }

  inflight _findContentType(formatters: Map<inflight (str): str>, headers: Map<str>): str {
    let contentTypes = (headers.tryGet("Accept") ?? "").split(",");
    for ct in contentTypes {
      if formatters.has(ct) {
        return ct;
      }
    }
    return "text/plain";
  }

  inflight _buildOkResponse(headers: Map<str>, name: str): cloud.ApiResponse {
    let greeting = this._h.handle(name) ?? ""; // TODO: guard against empty greeting or what??
    let formatters = {
      "text/plain" => GreetingApiAdapter._textPlain,
      "text/html" => core.Greeting.formatHtml,
      "application/json" => GreetingApiAdapter._applicationJson
    };
    let contentType = this._findContentType(formatters, headers);
    return cloud.ApiResponse {
      status: 200,
      body: formatters.get(contentType)(greeting),
      headers: {"Content-Type" => contentType}
    };
  }

  pub inflight handle(request: cloud.ApiRequest): cloud.ApiResponse {
    if let name = request.query.tryGet("name") {
      return this._buildOkResponse(request.headers ?? {}, name);
    } else {
      return cloud.ApiResponse {
        status: 400,
        body: "Query name=<name> is missing",
        headers: {"Content-Type" => "text/plain"}
      };
    }
  }
}
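The Accept-header matching in _findContentType is small enough to unit-test in isolation. Here is a plain-TypeScript mirror of it, purely illustrative: it keeps the same fallback to text/plain, and adds only a trim() because real Accept headers often contain spaces after the commas (the Winglang version above does not trim).

```typescript
// Mirror of the Winglang _findContentType: return the first Accept
// entry we have a formatter for, falling back to "text/plain".
function findContentType(
  formatters: Map<string, (greeting: string) => string>,
  headers: Map<string, string>
): string {
  const accepted = (headers.get("Accept") ?? "").split(",");
  for (const entry of accepted) {
    const ct = entry.trim();
    if (formatters.has(ct)) {
      return ct;
    }
  }
  return "text/plain";
}

const formatters = new Map<string, (greeting: string) => string>([
  ["text/plain", (g) => g],
  ["application/json", (g) => JSON.stringify({ greeting: g })],
]);

const jsonType = findContentType(
  formatters,
  new Map([["Accept", "application/json"]])
);
const fallbackType = findContentType(formatters, new Map());
```

Note that neither version handles quality values (q=) from the full content-negotiation spec; both simply take the first supported media type.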

Notice how quickly the complexity escalates. We’re not done yet, as we need a proper HTML formatter. The easiest way to implement it seemed to be in TypeScript, so I decided to place it in the core package:

export function formatHtml(greeting: string): string {
  return `
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Wing Greeting Service</title>
    <!-- Tailwind CSS Play CDN https://tailwindcss.com/docs/installation/play-cdn -->
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body class="flex items-center justify-center h-screen">
    <div class="text-center" id="greeting">
      <h1 class="text-2xl font-bold">${greeting}</h1>
    </div>
  </body>
</html>
`;
}

There is, of course, a separate unit test for it:

import { formatHtml } from "@core/formatHtml";

describe("formatHtml", () => {
  it("should return a properly formatted HTML greeting page", () => {
    const greeting = "Hello, World!";
    const result = formatHtml(greeting);
    expect(result).toContain(greeting);
  });
});

Placing the HTML response formatter in the core package could be debated as a violation of Hexagonal Architecture principles. Indeed, formatting an HTML response doesn’t seem to belong to the core application logic. Technically, relocating it wouldn’t be too hard, and in a larger real-world system, that’s probably what should be done. However, I chose to place it there to consolidate all TypeScript-related components in one place and to test and build them through the same set of Makefile targets.

Now the browser gets the response in a format it can understand and render properly:

Image9

As stated at the outset, the objective of this technology research report was to explore how well Winglang's module system supports the separation of concerns prescribed by the Hexagonal Ports and Adapters pattern.

The exploration was conducted using the simplest “Hello, World!” application, which evolved into the GreetingService through twelve incremental steps, each introducing a minor modification to the previous code base. This resulted in the following project structure:

greeting-service/
├── adapters/
│   ├── greetingApiAdapter.w
│   └── IRestApiAdapter.w
├── core/
│   ├── Greeting.w
│   ├── makeGreeting.ts
│   └── makeGreeting.test.ts
├── handlers/
│   └── Greeting.w
├── ports/
│   ├── greetingApi.w
│   └── greetingFunction.w
├── jest.config.js
├── main.w
├── Makefile
├── package-lock.json
├── package.json
├── service.w
├── test.adapters.main.w
├── test.main.w
├── tsconfig.build.json
└── tsconfig.json

In my view, this structure reflects the overall service architecture quite well. As a minor improvement, I would consider relocating the TypeScript-related files to a sub-level within the core folder.

Overall, the Winglang Module System passed the initial test, providing substantial support for the separation of concerns as prescribed by the Hexagonal Ports and Adapters pattern. It also offers reasonable interoperability with languages based on the Node.js runtime, such as TypeScript. My wish list for potential improvements includes:

  • Support for Preflight static functions in modules other than main.w, essential for the effective implementation of the Factory Method design pattern, crucial for supporting non-trivial service configurations.
  • Automatic lifting of Inflight static functions in modules other than main.w (this worked for TypeScript external functions), to eliminate the need for some extra boilerplate.
  • Automatic generation of Winglang wrappers for external functions.

This report evaluates the Winglang programming language for implementing one sequential stage of a more general Staged Event-Driven Architecture (SEDA). The assessment of how well Winglang supports the full-fledged Event-Driven part and asynchronous stage implementation (most likely for Handlers) will be the subject of future research. Stay tuned.

References

Winglang Publications

  1. Elad Ben-Israel, “Cloud, why so difficult?”
  2. Pouya Hallaj, “Wing: Programing language for the cloud”
  3. Artem Sokhin, “Revolutionize Cloud Programming with Wing: A New Cloud-Oriented Language”
  4. Jin, “Wing Language: Streamlining Cloud-Oriented Programming for Human-AI Collaboration”
  5. Sebastian Korfmann, “A Cloud Development Troubleshooting Treasure Hunt”
  6. Jesse Warden, “Wing — Programming Language for the Cloud”
  7. Shai Ber, “Winglang: Cloud Development Programming for the AI Era”

My Publications on “Infrastructure From Code”

  1. Asher Sterkin, “If your Computer is the Cloud, what should its Operating System look like?”
  2. Asher Sterkin, “Cloud Application Infrastructure from Code (IfC): The Next Logical Step in Cloud Automation”
  3. Asher Sterkin, “4 Pillars of the “Infrastructure from Code”
  4. Asher Sterkin, “IfC-2023: Technology Landscape”

Hexagonal Architecture

  1. Alistair Cockburn, “Hexagonal architecture”
  2. Robert C. Martin, “Clean Architecture”
  3. Krzysztof Słomka, “Hexagonal Architecture with Nest.js and TypeScript”
  4. Sairyss, “Domain-Driven Hexagon”
  5. Carlos Cunha, “A Hexagonal Approach to Writing Microservices for Scalable and Decentralized Business: How to use Ports and Adapter with TypeScript”
  6. Walid Karray, “Building a Todo App with TypeScript Using Clean Architecture: A Detailed Look at the Directory Structure”
  7. Andy Blackledge, “Hexagonal Architecture with CDK, Lambda, and TypeScript”
  8. Dyarlen Iber, “Hexagonal Architecture and Clean Architecture (with examples)”
  9. Khalil Stemmler, “Clean Node.js Architecture”
  10. James Beswick, Luca Mezzalira, “Developing evolutionary architecture with AWS Lambda”
  11. Adam Fanello, “Hexagonal Architecture by Example (in TypeScript)”
  12. Royi Benita, “Clean Node.js Architecture — With NestJs and TypeScript”

· 16 min read
Hasan Abu-Rayyan

Wow, it's 2024, almost a quarter of the way through the 21st century. If you are reading this, you should probably pat yourself on the back, because you did it! You have survived the crazy roller coaster ride that has lingered over the last several years, ranging from a pandemic to global insecurity with ongoing wars.

So finally 2024 is here, and we all get to ask ourselves, "Is this the year things finally start going back to normal?"... probably not! Though, as we all sit on the edge of our seats waiting for the next global crisis (my bingo card has mole people rising to the surface), we can take solace in one silver lining: Wing Custom Platforms are all the rage, and easier than ever to build!

In this blog series I'm going to walk through how to build, publish, and use your own Wing Custom Platforms. Before we get too deep, and since this is the first installment of what will probably be many procrastinated iterations, let's just do a quick level set.

Let me introduce Wing

A programming language for the cloud.

Wing combines infrastructure and runtime code in one language, enabling developers to stay in their creative flow, and to deliver better software, faster and more securely.

lightbult-moment

Please star ⭐ Wing


What Are Wing Custom Platforms?

The purpose of this post is not to explain all the dry details of Wing Platforms; that's the job of the Wing docs (I'll provide reference links down below). Rather, we want to get into the fun of building one, so I'll explain only briefly.

Wing Custom Platforms offer us a way to hook into a Wing application's compilation process. This is done through various hooks that a custom platform can implement. As of today, some of these hooks include:

  • preSynth: called before the compiler begins to synthesize, and gives us access to the root app in the construct tree.
  • postSynth: called right after artifacts are synthesized, and will give us access to manipulate the resulting configuration. In the case of a Terraform provisioner this is the Terraform JSON configuration.
  • validate: called right after the postSynth hook with the same input; the key difference is that the passed config is immutable, which is important for validation operations.

Several other hooks exist as well, though we won't go into all of them in this blog.
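To get a feel for what these hooks do, here is an illustrative sketch of a postSynth-style transformation: it receives the synthesized Terraform JSON and may return a modified configuration. The function body is hypothetical; in a real platform this would be a method on the Platform class rather than a free function.

```typescript
// Hypothetical postSynth-style hook body: take the synthesized
// Terraform JSON config and return a (possibly modified) copy.
function postSynth(config: Record<string, any>): Record<string, any> {
  config.terraform = config.terraform ?? {};
  // Inject a default local backend if none was configured.
  config.terraform.backend = config.terraform.backend ?? {
    local: { path: "./terraform.tfstate" },
  };
  return config;
}

const synthesized: Record<string, any> = { resource: {} };
const patched = postSynth(synthesized);
```

This is precisely the kind of mutation the platform we build in this series will perform, except it will inject remote backends instead of a local one.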

Let's Get Building!

One more bit of information we need before we start building our very own Custom Platform, and it's kind of important: what is our platform going to do?

I'm glad you asked! We are going to build a Custom Platform that enhances the developer experience when working with Terraform-based platforms, some of which come built in with the Wing installation, such as tf-aws, tf-azure, and tf-gcp.

The specific enhancement we want to add is the ability to configure how Terraform state files are managed, through the use of Terraform backends. By default, all of the builtin Terraform-based platforms use local state file configuration, which is nice for quick experimentation but lacks the rigor required for production-quality deployments.

The Goal

Build and publish a Wing Custom Platform that provides a way to configure your Terraform backend state management.

For brevity, we will focus on three backend types: s3, azurerm, and gcs.
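As a reference point, here is the shape of the configuration our platform will ultimately need to inject into the synthesized Terraform JSON, shown for the s3 backend (the bucket, key, and region values are placeholders):

```json
{
  "terraform": {
    "backend": {
      "s3": {
        "bucket": "my-state-bucket",
        "key": "greeting-service/terraform.tfstate",
        "region": "us-east-1"
      }
    }
  }
}
```

The azurerm and gcs variants have the same structure with backend-specific attributes.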

Required Materials

  • Wing
  • NPM & Node
  • A bit of TypeScript know-how
  • A wish and a prayer

Creating The Project

To begin, let's create a new npm project. I'm going to be a bit more bare-bones in this guide, so I'll just create a package.json and a tsconfig.json.

Below is my package.json file. The only really interesting part is the dev dependency on @winglang/sdk, which lets us use some of the exposed Platform types; we will see an example of that soon.

{
  "name": "@wingplatforms/tf-backends",
  "version": "0.0.1",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "https://github.com/hasanaburayyan/wing-tf-backends"
  },
  "license": "ISC",
  "devDependencies": {
    "typescript": "5.3.3",
    "@winglang/sdk": "0.54.30"
  },
  "files": ["lib"]
}

Here is the tsconfig.json. I've omitted a few details for brevity, since some options are just personal preference. What's worth noting is how I have decided to structure the project: all my code will live in a src folder, and the output of compilation will go to the lib folder. You might set your project up differently, and that's fine, but it's worth explaining if you are following along.

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./lib",
    "lib": ["es2020", "dom"]
  },
  "include": ["./src/**/*"],
  "exclude": ["./node_modules"]
}

Then, to prep our dependencies, we can just run npm install.

Let's Code!

Okay, now that the initial setup is out of the way, it's time to start writing our Platform!

First, I'll create a file src/platform.ts that will contain the main code for our Platform, which is used by the Wing compiler. The bare minimum code required for a Platform looks like this:

import { platform } from "@winglang/sdk";

export class Platform implements platform.IPlatform {
readonly target = "tf-*";
}

Here we create and export our Platform class, which implements the IPlatform interface. All of the platform hooks are optional, so we don't actually have to define anything else for this to be technically valid.

The one required bit is defining target. This mechanism allows a platform to declare the provisioning engine and cloud provider it is compatible with. At the time of this blog post this compatibility is not actually enforced, but... we imagine it works :)

Okay, so we have a bare-bones Platform, but it's not actually useful yet. Let's change that! We will use environment variables to determine which type of backend our users want, as well as the key for the state file.

So we will provide a constructor in our Platform:

import { platform } from "@winglang/sdk";

export class Platform implements platform.IPlatform {
readonly target = "tf-*";
readonly backendType: string;
readonly stateFileKey: string;

constructor() {
if (!process.env.TF_BACKEND_TYPE) {
throw new Error(`TF_BACKEND_TYPE environment variable must be set.`);
}
if (!process.env.TF_STATE_FILE_KEY) {
throw new Error("TF_STATE_FILE_KEY environment variable must be set.");
}

this.backendType = process.env.TF_BACKEND_TYPE;
this.stateFileKey = process.env.TF_STATE_FILE_KEY;
}
}

Cool, now we are starting to get moving. Our Platform requires users to have two environment variables set when compiling their Wing code: TF_BACKEND_TYPE and TF_STATE_FILE_KEY. For now, we just persist this data as instance variables.
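As an aside, since this check-and-read pattern for required environment variables will repeat for every backend we add, it could be pulled into a small helper. Here is a minimal sketch; the requireEnv name is my own suggestion, not part of the Wing SDK:

```typescript
// Hypothetical helper (not part of @winglang/sdk): read a required
// environment variable or fail with a descriptive error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} environment variable must be set.`);
  }
  return value;
}

// The constructor body above could then shrink to:
//   this.backendType = requireEnv("TF_BACKEND_TYPE");
//   this.stateFileKey = requireEnv("TF_STATE_FILE_KEY");
```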

One more housekeeping item: we need to export our Platform code. To do this, let's create an index.ts with a single line:

export * from "./platform";

Testing Our Platform

Before we get much further, I want to show how to test your Platform locally. First, compile it with npx tsc; since we already defined everything in our tsconfig.json, this conveniently produces a lib folder containing all the generated JavaScript code.

Let's create a super simple Wing application to use this Platform with.

// main.w
bring cloud;

new cloud.Bucket();

The above Wing code just imports the cloud library and uses it to create a Bucket resource.

Next, we will run a Wing compile command using our Platform in combination with another Terraform-based Platform; in my case, tf-aws:

wing compile main.w --platform tf-aws --platform ./lib

Note: We are providing two Platforms: tf-aws and a relative path to our compiled Platform, ./lib. The ordering of these Platforms is also important: tf-aws MUST come first, since it is a Platform that implements the newApp() API. We won't dive deeper into that in this post, but the reference materials below provide links if you want to explore further.

Now running this code will result in the following error:

wing compile main.w -t tf-aws -t ./lib

An error occurred while loading the custom platform: Error: TF_BACKEND_TYPE environment variable must be set.

Now before you freak out, just know that's one of them good errors :) We can see our Platform code was indeed loaded and run, because it threw the error requiring the TF_BACKEND_TYPE environment variable. If we rerun the compile command with the required variables, we should get a successful compilation:

TF_BACKEND_TYPE=s3 TF_STATE_FILE_KEY=mystate.tfstate wing compile main.w -t tf-aws -t ./lib

To be extra sure the compilation worked, we can inspect the generated Terraform code in target/main.tfaws/main.tf.json:

{
"//": {
"metadata": {
"backend": "local",
"stackName": "root",
"version": "0.17.0"
},
"outputs": {}
},
"provider": {
"aws": [{}]
},
"resource": {
"aws_s3_bucket": {
"cloudBucket": {
"//": {
"metadata": {
"path": "root/Default/Default/cloud.Bucket/Default",
"uniqueId": "cloudBucket"
}
},
"bucket_prefix": "cloud-bucket-c87175e7-",
"force_destroy": false
}
}
},
"terraform": {
"backend": {
"local": {
"path": "./terraform.tfstate"
}
},
"required_providers": {
"aws": {
"source": "aws",
"version": "5.31.0"
}
}
}
}

We should see that a single Bucket is being created; however, it is still using the local Terraform backend. That's because we still have some work to do!

Implementing The postSynth Hook

Since we want to edit the generated Terraform configuration file after the code has been synthesized, we will implement the postSynth hook. As explained earlier, this hook is called right after synthesis completes and receives the resulting configuration.

What makes this hook even more useful is that it allows us to return a mutated version of the configuration.

To implement this hook, we will update our Platform code like so:

export class Platform implements platform.IPlatform {
// ...
postSynth(config: any): any {
if (this.backendType === "s3") {
if (!process.env.TF_S3_BACKEND_BUCKET) {
throw new Error(
"TF_S3_BACKEND_BUCKET environment variable must be set."
);
}

if (!process.env.TF_S3_BACKEND_BUCKET_REGION) {
throw new Error(
"TF_S3_BACKEND_BUCKET_REGION environment variable must be set."
);
}

config.terraform.backend = {
s3: {
bucket: process.env.TF_S3_BACKEND_BUCKET,
region: process.env.TF_S3_BACKEND_BUCKET_REGION,
key: this.stateFileKey,
},
};
}
return config;
}
}

Now there is some control-flow logic happening here: if the user wants an s3 backend, we need some additional input, such as the name and region of the bucket, which we configure via TF_S3_BACKEND_BUCKET and TF_S3_BACKEND_BUCKET_REGION.

Assuming all of the required environment variables exist, we can then manipulate the provided config object, setting config.terraform.backend to an s3 configuration block. Finally, the config object is returned.
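One small hardening note: the hook above assumes the synthesized config always contains a terraform section. If you want to be defensive about that (my own addition, not something the code above does), the mutation step can guard first:

```typescript
// Sketch of the mutation step with a guard for a missing `terraform` section.
function setBackend(config: any, backendBlock: Record<string, unknown>): any {
  config.terraform = config.terraform ?? {};
  config.terraform.backend = backendBlock;
  return config;
}
```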

Now, to see this all in action, we will need to compile our code (npx tsc) and provide all four required s3 environment variables. To make the commands easier to read, I'll split them across multiple lines:

# compile platform code
npx tsc

# set env vars
export TF_BACKEND_TYPE=s3
export TF_STATE_FILE_KEY=mystate.tfstate
export TF_S3_BACKEND_BUCKET=myfavorites3bucket
export TF_S3_BACKEND_BUCKET_REGION=us-east-1

# compile wing code!
wing compile main.w -t tf-aws -t ./lib

And voilà! We should now be able to look at our Terraform config and see that a remote s3 backend is being used:

// Parts of the config have been omitted for brevity
{
"terraform": {
"required_providers": {
"aws": {
"version": "5.31.0",
"source": "aws"
}
},
"backend": {
"s3": {
"bucket": "myfavorites3bucket",
"region": "us-east-1",
"key": "mystate.tfstate"
}
}
},
"resource": {
"aws_s3_bucket": {
"cloudBucket": {
"bucket_prefix": "cloud-bucket-c87175e7-",
"force_destroy": false,
"//": {
"metadata": {
"path": "root/Default/Default/cloud.Bucket/Default",
"uniqueId": "cloudBucket"
}
}
}
}
}
}

IT'S ALIVE!!!

If you have been following along, pat yourself on the back again! Now, on top of surviving the early 2020s, you have also written your first Wing Custom Platform!

Now, before we go into how to make it available to other Wingnuts, let's make our code a little cleaner and a bit more robust.

Supporting Multiple Backends

In order to live up to its name, tf-backends, it should probably support multiple backends! To accomplish this, let's use some good ol' coding chops to abstract a bit.

We want our Platform to support s3, azurerm, and gcs. To accomplish this, we just have to emit a different config.terraform.backend block based on the desired backend.

To make this work I'm going to create a few more files:

src/backends/backend.ts

// simple interface to define a backend behavior
export interface IBackend {
generateConfigBlock(stateFileKey: string): any;
}

Now, here are several backend classes that implement the interface.

src/backends/s3.ts

import { IBackend } from "./backend";

export class S3 implements IBackend {
readonly backendBucket: string;
readonly backendBucketRegion: string;

constructor() {
if (!process.env.TF_S3_BACKEND_BUCKET) {
throw new Error("TF_S3_BACKEND_BUCKET environment variable must be set.");
}

if (!process.env.TF_S3_BACKEND_BUCKET_REGION) {
throw new Error(
"TF_S3_BACKEND_BUCKET_REGION environment variable must be set."
);
}

this.backendBucket = process.env.TF_S3_BACKEND_BUCKET;
this.backendBucketRegion = process.env.TF_S3_BACKEND_BUCKET_REGION;
}

generateConfigBlock(stateFileKey: string): any {
return {
s3: {
bucket: this.backendBucket,
region: this.backendBucketRegion,
key: stateFileKey,
},
};
}
}

src/backends/azurerm.ts

import { IBackend } from "./backend";

export class AzureRM implements IBackend {
readonly backendStorageAccountName: string;
readonly backendStorageAccountResourceGroupName: string;
readonly backendContainerName: string;

constructor() {
if (!process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME) {
throw new Error(
"TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME environment variable must be set."
);
}

if (!process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME) {
throw new Error(
"TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME environment variable must be set."
);
}

if (!process.env.TF_AZURERM_BACKEND_CONTAINER_NAME) {
throw new Error(
"TF_AZURERM_BACKEND_CONTAINER_NAME environment variable must be set."
);
}

this.backendStorageAccountName =
process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_NAME;
this.backendStorageAccountResourceGroupName =
process.env.TF_AZURERM_BACKEND_STORAGE_ACCOUNT_RESOURCE_GROUP_NAME;
this.backendContainerName = process.env.TF_AZURERM_BACKEND_CONTAINER_NAME;
}

generateConfigBlock(stateFileKey: string): any {
return {
azurerm: {
storage_account_name: this.backendStorageAccountName,
resource_group_name: this.backendStorageAccountResourceGroupName,
container_name: this.backendContainerName,
key: stateFileKey,
},
};
}
}

src/backends/gcs.ts

import { IBackend } from "./backend";

export class GCS implements IBackend {
readonly backendBucket: string;

constructor() {
if (!process.env.TF_GCS_BACKEND_BUCKET) {
throw new Error(
"TF_GCS_BACKEND_BUCKET environment variable must be set."
);
}

this.backendBucket = process.env.TF_GCS_BACKEND_BUCKET;
}

generateConfigBlock(stateFileKey: string): any {
return {
gcs: {
bucket: this.backendBucket,
key: stateFileKey,
},
};
}
}

Now that we have our backend classes defined, we can update our Platform code to use them. My final Platform code looks like this:

import { platform } from "@winglang/sdk";
import { S3 } from "./backends/s3";
import { IBackend } from "./backends/backend";
import { AzureRM } from "./backends/azurerm";
import { GCS } from "./backends/gcs";

// TODO: support more backends: https://developer.hashicorp.com/terraform/language/settings/backends/local
const SUPPORTED_TERRAFORM_BACKENDS = ["s3", "azurerm", "gcs"];

export class Platform implements platform.IPlatform {
readonly target = "tf-*";
readonly backendType: string;
readonly stateFileKey: string;

constructor() {
if (!process.env.TF_BACKEND_TYPE) {
throw new Error(
`TF_BACKEND_TYPE environment variable must be set. Available options: (${SUPPORTED_TERRAFORM_BACKENDS.join(
", "
)})`
);
}
if (!process.env.TF_STATE_FILE_KEY) {
throw new Error("TF_STATE_FILE_KEY environment variable must be set.");
}
this.backendType = process.env.TF_BACKEND_TYPE;
this.stateFileKey = process.env.TF_STATE_FILE_KEY;
}

postSynth(config: any): any {
config.terraform.backend = this.getBackend().generateConfigBlock(
this.stateFileKey
);
return config;
}

/**
* Determine which backend class to initialize based on the backend type
*
* @returns the backend instance based on the backend type
*/
getBackend(): IBackend {
switch (this.backendType) {
case "s3":
return new S3();
case "azurerm":
return new AzureRM();
case "gcs":
return new GCS();
default:
throw new Error(
`Unsupported backend type: ${
this.backendType
}, available options: (${SUPPORTED_TERRAFORM_BACKENDS.join(", ")})`
);
}
}
}

BOOM!! Our Platform now supports all three of the backends we wanted!

Feel free to build and test each one.
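As a design note, the switch in getBackend works fine for three backends, but if you plan to add more, the same dispatch can be expressed as a lookup map of constructors. A standalone, simplified sketch of that idea (the stub classes here stand in for the real backend classes above, which also read their env vars):

```typescript
interface IBackend {
  generateConfigBlock(stateFileKey: string): any;
}

// Stub backends for illustration only.
class S3Stub implements IBackend {
  generateConfigBlock(key: string) { return { s3: { key } }; }
}
class GcsStub implements IBackend {
  generateConfigBlock(key: string) { return { gcs: { key } }; }
}

// Map backend type -> backend class; adding a backend is one map entry.
const BACKENDS: Record<string, new () => IBackend> = {
  s3: S3Stub,
  gcs: GcsStub,
};

function getBackend(type: string): IBackend {
  const ctor = BACKENDS[type];
  if (!ctor) {
    throw new Error(
      `Unsupported backend type: ${type}, available options: (${Object.keys(
        BACKENDS
      ).join(", ")})`
    );
  }
  return new ctor();
}
```

With this shape, the supported-options list in the error message stays in sync with the map automatically, instead of being maintained in a separate constant.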

Publishing Our Platform For Use

Now, I'm not going to explain all the intricate details about how npm packages work, since I would do a poor job of that, as indicated by the fact that the examples below use version 0.0.3 (third time's the charm!).

However, if you have followed along thus far, you will be able to run the following commands. Note: in order to publish this library, you will need to have defined a package name that you are authorized to publish to. If you use mine (@wingplatforms/tf-backends), you're gonna have a bad time.

# compile platform code again
npx tsc

# package your code
npm pack

# publish your package
npm publish

If done right, you should see something along the lines of:

npm notice === Tarball Details ===
npm notice name: @wingplatforms/tf-backends
npm notice version: 0.0.3
npm notice filename: wingplatforms-tf-backends-0.0.3.tgz
npm notice package size: 36.8 kB
npm notice unpacked size: 119.5 kB
npm notice shasum: 0186c558fa7c1ff587f2caddd686574638c9cc4c
npm notice integrity: sha512-mWIeg8yRE7CG/[...]cT8Kh8q/QwlGg==
npm notice total files: 17
npm notice
npm notice Publishing to https://registry.npmjs.org/ with tag latest and default access

Using The Published Platform

With the Platform published, let's try it out. Note: I suggest using a clean directory to play with it.

We'll use the same simple Wing application as before:

// main.w
bring cloud;

new cloud.Bucket();

To use the published Custom Platform, we need to add one more thing: a package.json file, which only needs to define the Platform as a dependency:

{
"dependencies": {
"@wingplatforms/tf-backends": "0.0.3"
}
}

With both those files created, let's install our custom Platform using npm install.

Finally, let's set up all the environment variables for GCS and run our Wing compile command. Note: since we are now using an installed npm library, we provide the package name instead of ./lib!

export TF_BACKEND_TYPE=gcs
export TF_STATE_FILE_KEY=mystate.tfstate
export TF_GCS_BACKEND_BUCKET=mygcsbucket

wing compile main.w -t tf-aws -t @wingplatforms/tf-backends

Now we should be able to see that the generated Terraform config is using the correct remote backend!

{
"terraform": {
"required_providers": {
"aws": {
"version": "5.31.0",
"source": "aws"
}
},
"backend": {
"gcs": {
"bucket": "mygcsbucket",
"key": "mystate.tfstate"
}
}
},
"resource": {
"aws_s3_bucket": {
"cloudBucket": {
"bucket_prefix": "cloud-bucket-c87175e7-",
"force_destroy": false,
"//": {
"metadata": {
"path": "root/Default/Default/cloud.Bucket/Default",
"uniqueId": "cloudBucket"
}
}
}
}
}
}

What's Next?

Now that we have built and published our first Wing Custom Platform, the sky is the limit! Get out there and start building Custom Platforms to your heart's content <3, and keep a lookout for the next addition to this series on Platform building!

In the meantime, make sure to join the Wing Slack community: https://t.winglang.io/slack, and share what you are working on or any issues you run into.

Want to read more about Wing Platforms? Check out the Wing Platform Docs


If you enjoyed this article, please star ⭐ Wing!