
Why WebAssembly Belongs Outside the Browser

by Matt Butcher, Connor Hicks, and Taylor Thomas

As employees of three different WebAssembly (Wasm) startups and creators/maintainers of some of the largest open source Wasm projects, we've been seeing a lot of questions in our communities about why people would choose Wasm. Sometimes the question is whether this is just a repeat of things we tried in the past; other times it is more along the lines of "well, I do everything on cloud Linux servers anyway, so why should I start using this?" To answer many of the questions we see, we thought it would be a good idea to collaborate on a blog post addressing why Wasm is so powerful and useful. But before we get to those answers, we need to start with a little history.

A Brief History of Wasm

In 2015, Luke Wagner made an announcement on his Mozilla blog:

I'm happy to report that we at Mozilla have started working with Chromium, Edge and WebKit engineers on creating a new standard, WebAssembly, that defines a portable, size- and load-time-efficient format and execution model specifically designed to serve as a compilation target for the Web.

With that, the group building WebAssembly set out to achieve two major objectives:

  1. Build a specification for a binary compilation target that could run in the browser
  2. Gain support from all major browsers

In a few years, the team building Wasm had achieved both. Under the auspices of the W3C, the core Wasm specification reached Recommendation status, and all major browsers shipped support.

But for WebAssembly to be successful, languages must be able to compile to WebAssembly. Some languages came along quickly, with C/C++ and Rust leading the way. Now, several years after Luke's initial post, many languages have WebAssembly support.

WebAssembly has enjoyed success in the browser world, with large-scale adopters like Adobe and Figma. But others have noticed WebAssembly's virtues beyond the browser context.

A host of non-browser runtimes like Wasmtime, WAMR, Wasm3, WasmEdge, and Wasmer take the WebAssembly format and apply it to specific use cases beyond the browser. These tools show the flexibility of the specification: some implementations like Wasm3 execute as interpreters, while other runtimes support JIT and AOT compilation as well as caching and optimization features.

While WebAssembly in the browser typically relies on bridges between JavaScript and the Wasm runtime, recent work by the non-profit Bytecode Alliance (of which Cosmonic, Fermyon, and Suborbital are all members) focuses on adding system bindings. The WebAssembly System Interface (WASI) is a good example of this, adding standardized support for interacting with system resources such as file systems, environment variables, clocks, and random number generators.

Today's nascent standards make WebAssembly usable beyond the browser. But is it desirable outside of the browser? We think so. In fact, we think that the very properties that make it good for the browser are what make it even more compelling as a target for the cloud.

Good for the browser, great for the cloud

There are several characteristics that a language runtime in the web browser must have. These same characteristics, though, are also attractive in the cloud.

  • Security: If you are going to run untrusted code in the browser, you want to make sure it is running in isolation. The same is true in the cloud.
  • Cross-platform/Cross-arch: When we build code for the browser, we want to write it once and have it run anywhere. This is also a highly desirable feature for the cloud.
  • Polyglot: A big goal of the WebAssembly project was to extend the browser to many languages. Cloud development is not as JS-centric as browser development, so multi-language support is not optional for us.
  • Speed: Nobody wants to wait for a web page to load. The same is true in the cloud. Instant loading means rapid scaling.
  • Efficiency: Browsers are constrained in how much energy they can consume. On the cloud, the more efficient the runtime, the cheaper it is to operate.
  • Size: Before we can talk about startup speed in the browser, we have to talk about download speed. And that is largely a function of the objects we download. Smaller binaries mean faster download. And in the cloud, that translates to the ability to move these objects around the cloud.

Now let's dive into how these properties are important in practice.

Security

One of the more perplexing aspects of running cloud software is understanding its security properties, its attack surface, and how to keep your organization safe. A recent trend of supply-chain vulnerabilities and a long history of unpatched operating systems have cost companies billions of dollars and immeasurable time lost to security issues. One of the key goals of Wasm is to provide a simple, easily understood surface area so that code can be executed inside a sandbox that accounts for the ways attackers could harm your infrastructure, from the outside in and from the inside out. WebAssembly forces the interactions between the code running inside its sandbox and the operating system outside the sandbox to be explicitly defined and enabled at a fine granularity. At a high level, this means that every "system call" the Wasm bytecode attempts to perform is handled by a set of host functions given to the runtime when it starts up. If you want to disable filesystem access, network access, or even access to the system clock, you can do so by changing the set of host functions made available. Combine this with linear, bounds-checked program memory, and you arrive at a vessel for executing arbitrary, untrusted code that surpasses the security models of VMs and containers in simplicity and attackable surface area.
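
To make that concrete, here is a minimal sketch of a host that grants capabilities explicitly, using Wasmtime's Rust embedding with its WASI support. The "plugin.wasm" path is just a placeholder for an untrusted module, and the exact crate APIs shift between wasmtime releases, so treat this as a sketch rather than copy-paste code:

use anyhow::Result;
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::sync::WasiCtxBuilder;

fn main() -> Result<()> {
    let engine = Engine::default();
    let mut linker = Linker::new(&engine);
    // Only the host functions registered here are callable from the guest.
    // (Crate layout and function names have shifted across wasmtime releases.)
    wasmtime_wasi::add_to_linker(&mut linker, |ctx| ctx)?;

    // Grant stdio and nothing else: no preopened directories and no sockets,
    // so the untrusted module has no way to touch the filesystem or network.
    let wasi = WasiCtxBuilder::new().inherit_stdio().build();
    let mut store = Store::new(&engine, wasi);

    // "plugin.wasm" stands in for whatever untrusted code you need to run.
    let module = Module::from_file(&engine, "plugin.wasm")?;
    linker.module(&mut store, "", &module)?;
    linker
        .get_default(&mut store, "")?
        .typed::<(), ()>(&store)?
        .call(&mut store, ())?;
    Ok(())
}

The important part is the builder: every capability the guest gets is something the host opted into, rather than something the host forgot to take away.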

In the real world, this amounts to an execution environment that lets operators run untrusted code with more confidence, whether that code is an un-audited third-party dependency or user-submitted code such as plugins and User-Defined Functions (UDFs). If users can upload snippets of code meant to extend your software, running that code on current container-based platforms is pretty daunting, since those containers are given quite a lot of leeway to reach out and prod around your internal infrastructure. Consider also that the container base images used to execute user code may well contain vulnerabilities, which becomes a huge headache when hundreds, thousands, or more user-provided containers need to be rebuilt with patched OSes. By removing most of the "OS-like" aspects from running a program, WebAssembly provides a much more controllable and understandable base on which to build secure environments for running code.

Cross-platform/Cross-arch

Possibly the most touted feature of WebAssembly is that it is entirely platform and architecture agnostic. However, this goes way beyond what most people think of with technologies such as the JVM or other "compile once, run anywhere" attempts of the past. Yes, Wasm has the potential to truly compile once and run anywhere, but that is not where the real power of WebAssembly shines. There is no better example of this than the Component Model. The Component Model allows developers to write code and export their APIs as interfaces. For example, say you want to use a key store. An interface for a key store could look like this:

set: func(key: string, value: payload, ttl: option<u32>) -> expected<unit, error>

get: func(key: string) -> expected<payload, error>

delete: func(key: string) -> expected<unit, error>

If we wanted to implement a key store, we could write it in any language that compiles to a Wasm module (let's say Go for this example), as long as we export this interface. Then another developer elsewhere, who wants to consume our implementation, could write their code in an entirely different language and still be able to use the key store implementation we wrote in Go. Essentially, this gives us a universal library or registry of dependencies that can be composed together as needed for each person's use case. So instead of needing a separate client library for every single language, you could write it once and then compose it together with something in an entirely different language. This makes it much easier to share and collaborate, no matter the language, platform, or architecture! If you are curious and want to learn more, several of us have written detailed blog posts about the component model.
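
To make the composition concrete, here is a hedged sketch of the consuming side in Rust. It assumes a bindings generator such as wit-bindgen has turned the key store interface above into a keyvalue module whose implementation is supplied by the Go component; the module, type, and function names below are illustrative stand-ins, not a real generated API:

// Hypothetical consumer of the key store interface above. In practice the
// `keyvalue` module would be generated from the interface and backed by a
// component written in Go (or any other language).
mod keyvalue {
    // Hand-written stubs mirroring the interface, purely for illustration;
    // a real build would generate these and wire them to the component.
    pub type Payload = Vec<u8>;
    #[derive(Debug)]
    pub struct Error(pub String);

    pub fn set(_key: &str, _value: &Payload, _ttl: Option<u32>) -> Result<(), Error> {
        unimplemented!("provided by the Wasm component at composition time")
    }
    pub fn get(_key: &str) -> Result<Payload, Error> {
        unimplemented!("provided by the Wasm component at composition time")
    }
    pub fn delete(_key: &str) -> Result<(), Error> {
        unimplemented!("provided by the Wasm component at composition time")
    }
}

// The Rust caller only sees the interface; it neither knows nor cares that
// the implementation behind it was written in Go.
fn cache_session(id: &str, token: Vec<u8>) -> Result<(), keyvalue::Error> {
    keyvalue::set(id, &token, Some(300))?;
    let _stored = keyvalue::get(id)?;
    keyvalue::delete(id)
}

The point is that the call sites look like ordinary local function calls, while the implementation can be swapped for any component that exports the same interface.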

Beyond the composability offered by Wasm's cross-platform/cross-arch support is the power to be a runtime purpose-built for a post-Kubernetes world. We know that is a bold claim, but stick with us here. People have quickly realized that Kubernetes (and containers) can only be stretched so far. As much as we like to pretend that containers can "run anywhere," if we are being honest with ourselves, we know that is not true. To support different platforms, you need to build a different image for each platform + architecture combination. Containers are also a Linux technology. Yes, some very smart people added containers for Windows, but it is an entirely different set of technology (and it can't run Linux containers, nor can normal container platforms run Windows containers). The overhead of a container runtime (particularly when running Kubernetes) also prevents containers from being effective the further out toward the edge you get. On top of this, there is an ever increasing number of custom processors that need to be targeted. Wasm is a perfect fit here, as it is platform and architecture agnostic as well as small (see the section on Size below for more!). You can have code that runs both on huge cloud servers and on tiny edge devices close to your users, without any recompilation needed.

We acknowledge and are grateful for the forcing factor that containers provided to help people transition to the cloud, but we are also firm believers in the post-Kubernetes future of Wasm.

Polyglot

As we alluded to in the previous section, another key feature of Wasm is that it is polyglot. Because Wasm is a compile target, there is no special buy-in required to use it. All that needs to happen is for a language to add support for compiling to Wasm. Some languages will adopt that sooner than others, but many have already done or started that work. Because of this, you can have different parts of the same application, not just the same service, written in different languages (see the component model example in the previous section)!

We'll follow that up with another bold claim: we think Wasm has the potential to be the last plugin model we'll ever need. Today, writing a plugin for almost any tool is almost guaranteed to be a pain. You either have to write in the same language, set up some sort of communication protocol (like gRPC), or shell out to another binary with an agreed-upon stdin/stdout contract. All of these options are confining and/or inefficient. With Wasm, a plugin could be written in any language and compiled to Wasm. That Wasm module can then be executed by any other language as part of a plugin model, with no shelling out or cross-process communication needed. On top of that, you get the advantages of speed and size discussed below.
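
As a sketch of what the host side of such a plugin model can look like, here is a small Rust example using Wasmtime again. The transform export name and its i32-to-i32 signature are assumptions made up for illustration, and the exact wasmtime API details vary by release:

use anyhow::Result;
use wasmtime::{Engine, Instance, Module, Store};

// Load a plugin that was compiled to Wasm from any language and call its
// exported `transform` function in-process.
fn run_plugin(path: &str, input: i32) -> Result<i32> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, path)?;
    let mut store = Store::new(&engine, ());
    // This hypothetical plugin imports nothing, so we pass no host functions.
    let instance = Instance::new(&mut store, &module, &[])?;
    // No subprocess, no RPC: just a call across the sandbox boundary.
    let transform = instance.get_typed_func::<i32, i32>(&mut store, "transform")?;
    Ok(transform.call(&mut store, input)?)
}

fn main() -> Result<()> {
    // "filter.wasm" is a placeholder path for a plugin artifact.
    println!("{}", run_plugin("filter.wasm", 7)?);
    Ok(())
}

Because the module carries no OS assumptions, the same filter.wasm artifact works whether its author wrote Rust, Go, C, or anything else that compiles to Wasm.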

Speed

The key enabling technologies that made the public cloud possible were VMs and containers. As we've touched on a few times, these are great technologies that have their place in the world of computing, but they are not a magic bullet for every cloud workload. When running code in resource-constrained or extremely high-usage scenarios such as edge computing, IoT, or gigantic data processing clusters, VMs and containers can actually hamper our ability to get maximum performance out of our hardware. Since we get the same (or stronger) isolation guarantees from WebAssembly in many cases, we can remove the underlying "public cloud safety nets" of VMs and containers and take better advantage of the servers and devices our code runs on. Since Wasm is a low-level bytecode that can be compiled to support any hardware architecture and any OS, we can (and should!) run Wasm directly on bare metal. This allows workloads to be packed much more tightly onto the available hardware, which has huge impacts in terms of performance, energy usage, and environmental impact.

These performance benefits are especially evident in highly ephemeral workloads such as cloud functions. Since a Wasm runtime and the code it loads are usually an order of magnitude (or more) smaller than an equivalent container image or VM, they can start up and terminate much more quickly and at much higher replica counts. These are extremely desirable properties in many kinds of cloud deployments, as they allow deployed software to handle spikes in traffic more nimbly and to scale out further to handle higher total amounts of traffic. And since a WebAssembly module is effectively just a single program, rather than a container or VM playing the role of an operating system, the host OS's controls and hardware optimizations can more effectively take advantage of multi-core architectures while maintaining strong isolation.

Size and Efficiency

In today's dominant paradigms, we overconsume cloud resources. We provision enough replicas to meet peak load requirements, and those replicas sit idle, consuming resources most of the time. And because we are optimizing for the highest possible demand, we allocate more CPU, memory, and storage than we typically need, simply to be prepared for traffic spikes. We must be prepared, because today's solutions cannot scale up quickly.

WebAssembly's size and efficiency mean that scaling is not costly. We can scale up nearly instantly... and then scale back down with ease. We can install the same small WebAssembly modules throughout our datacenters or clusters but not execute them until demand is there. As a result, we can cheaply achieve high replica counts without actually running anything until necessary.

And with JIT/AOT runtimes readily at hand, we can make sure that our WebAssembly binaries are pre-optimized for execution, cutting down energy and resource consumption.

In many cases, since we don't have to cart around a plethora of system libraries and file artifacts, the sizes of the objects we are dealing with are considerably smaller than containers.

Combined, all of these point toward a compelling feature of WebAssembly in the cloud: it is cheaper to operate than the container- and VM-based alternatives.

Wrapping up

As you can probably tell, we are excited about the many possibilities WebAssembly affords. However, we know that this is all new and lots of work is still needed. This is where you come in! One of the best things you can do is get involved with the Wasm community and try building things. Our projects, Spin, Sat, and wasmCloud, are good places to get started, but they are far from the only ones. Whatever you do, please share what you build with the community and let us know where the gaps are so we can continue to build toward the future.

Top comments (2)

Ant Weiss

Loved this post! The Component Model sounds exciting - exactly what I've been thinking about! Here's my somewhat more naive take on WASM's potential, which I originally published a few months back: wasm.builders/antweiss/the-promise...
Would love to get your feedback!

Frank Cheung

Hi. I translated this great article into Chinese.