
New OCI Artifacts Project

By Steve Lasker

The OCI Technical Oversight Board (TOB) has approved a new Artifacts project. By utilizing the OCI manifest and OCI index definitions, new artifact types can be stored and served with the OCI distribution-spec without changing the distribution spec itself. The project repository will provide a reference for artifact authors and registry implementers on supporting these new artifact types with existing implementations of distribution.

Registries are a de facto part of a container workflow, streamlining development, deployment, and operations. When a developer wishes to share a built image, they push it to a registry. When a CI/CD solution builds and deploys an image, it’s built FROM a registry and pushed to a registry, where it can be vulnerability scanned and signed. When a container host, such as Kubernetes, is asked to run an image, scale a pod, or replace a failed node, it must pull the image from a registry. Registries aren’t just development resources; they’re production, operationally dependent resources, locked down to meet network and security requirements.

Recognizing the need for vendor neutrality, Docker contributed their work on distribution to the OCI in March of 2018. The OCI distribution-spec provides a vendor-neutral, cloud-agnostic spec to share, secure, and deploy container images. Cloud providers and vendors implemented the OCI distribution-spec, building optimized experiences on a standard set of APIs and enabling this rich end-to-end experience.

There’s More Than Just One Type

Once clients get past single-container deployments, they quickly realize they need additional artifacts to define deployments, such as Kubernetes deployment files, Helm charts, CNAB bundles, and other evolving formats. At the same time, new runtimes and tools are emerging, such as the Singularity project for running high-performance computing workloads and Open Policy Agent (OPA) for declarative, policy-based access control. Leveraging the work in the OCI distribution-spec for these artifacts required investment across the industry and collaboration among members of the Helm, ChartMuseum, CNAB, Singularity, OPA, OCI, and many other communities.

Inverting the Plug-in Model

In evaluating the plug-in model used by many tools, we considered whether each registry should implement tooling specific to each cloud and artifact. This would force each artifact owner to work with each cloud operator and vendor of distribution, fracturing the common docker push/pull experience and the sense of one community focused on the developer and user experience with artifacts.

We wanted each artifact author to own their experience, with their own toolset. By inverting the model, so that each artifact toolset leverages standard registry APIs, authors can tailor the experience as it applies to them. Building on the OCI manifest and OCI index schema formats, artifact owners can define their persistence format, fitting into the manifest, tagging, and layer formats defined in the OCI distribution-spec. As a result, customers will be able to use helm registry login, helm chart push, and helm chart pull with standard content-addressable URLs. We took the experience further by developing an OCI Registry As Storage (ORAS) library for pushing and pulling content to an OCI Artifact registry.
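
To make those standard registry APIs concrete, here is a minimal sketch, in Go and using only the standard library, of the two-step blob push flow the OCI distribution-spec defines: POST to open an upload session, then PUT the content with its digest. The registry URL and repository name are placeholders, and authentication and retries are omitted:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"net/http"
	"strings"
)

// pushBlob uploads one blob using the two-step flow from the OCI
// distribution-spec. It assumes an unauthenticated registry and an
// absolute Location header; real clients must handle much more.
func pushBlob(registry, repo string, content []byte) error {
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(content))

	// Step 1: POST to open an upload session; the registry answers
	// 202 Accepted with a Location header to continue the upload at.
	resp, err := http.Post(registry+"/v2/"+repo+"/blobs/uploads/", "", nil)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("open upload: unexpected status %s", resp.Status)
	}
	location := resp.Header.Get("Location")

	// Step 2: PUT the content, identifying it by its digest.
	sep := "?"
	if strings.Contains(location, "?") {
		sep = "&"
	}
	req, err := http.NewRequest(http.MethodPut,
		location+sep+"digest="+digest, bytes.NewReader(content))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/octet-stream")
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("put blob: unexpected status %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder registry and repository, e.g. a local test registry.
	if err := pushBlob("http://localhost:5000", "hello-artifact", []byte("hi")); err != nil {
		fmt.Println(err)
	}
}
```

Whatever the artifact type (a Helm chart layer, a Singularity image), the toolset ultimately drives this same API underneath.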

What’s Next for OCI Artifacts

We’re thrilled to see OCI Artifacts adopted by the OCI as a means for artifact authors to define their content. The OCI Artifacts repo will evolve to provide artifact owners with information on how to author their types, while registry operators will gain a means to discover well-known artifact types, enabling great customer experiences for browsing, securing, and deploying all artifact types.

Stay tuned, get involved and follow along on social for exciting updates on OCI Artifacts!

Open Container Initiative Explained…with Dolls!

Have you ever tried to explain to a friend, neighbor, or parent what containers are, only to receive a blank or confused look as you try to describe them? Well, OCI community contributor and Stanford software engineer Vanessa Sochat has the solution in her entertaining and creative video series. The first video helps viewers understand what a container is in a simple, easy-to-understand way.

The second video describes what the OCI is, and the upcoming final video in the series will explore the process for contributing to the OCI specifications. While the videos are entertaining and enlightening, they are also meant to highlight documentation Vanessa and the community have created to solve some of the challenges she encountered when getting involved with the OCI community. Vanessa, with a self-described passion for all things containers and a love of the open source community, decided to take action: she started speaking up about those challenges with maintainers and, with their encouragement, put together documentation to help newcomers.

Check out her series, be inspired, find your own open containers passion and contribute to the container community.

OCI 2019 Elections and New TOB Lineup

As mentioned in our first post of 2019, this year is shaping up to be our busiest yet. The OCI community plans to roll out updates to specifications, ship a v1.0 of runc + much more, all in the next few months!

One exciting development we’re ready to share is the selection of the following four board members, each elected to serve a two-year term on the OCI Technical Oversight Board (TOB), a body of independently elected individuals who provide oversight of the technical leadership and serve as a point of appeal:

  • Vincent Batts (Red Hat)
  • Michael Crosby (Docker)
  • Aleksa Sarai (SUSE)
  • Derek McGowan (Docker)

These newest TOB members join the following existing members, who are each in the middle of two-year terms:

  • Taylor Brown (Microsoft)
  • Stephen Day (Cruise)
  • Phil Estes (IBM)
  • Jon Johnson (Google)
  • Mrunal Patel (Red Hat)

The TOB also voted to elect Michael Crosby (Docker) as the 2019 Chair. To learn more about the term limits, the function of the board + more, you can follow TOB activity here on GitHub.

We’d love to extend a big thank you to all of our outgoing TOB members – Vishnu Kannan and Greg Kroah-Hartman – for their commitment to OCI and its growing project community. We look forward to your continued collaboration on all things container standards!

As always, we welcome any + all contributions from the community – our progress this year banks on the support and collaboration of many 👍🏼

If you’re interested in contributing to OCI, please join the OCI developer community. For those who are building products on OCI technology, we recommend joining as a member and visiting https://github.com/opencontainers for more details about releases and specifications in development.

2018’s Biggest Moments + What’s Coming for OCI in 2019

Looking back at 2018, OCI had a banner year for foundational momentum perfect for all our community has planned for 2019 🗓

Some of our biggest moments of last year included the launch of the Distribution Specification project, which standardizes container image distribution based on the specification for the Docker Registry HTTP API V2 protocol. That launch was the result of extensive work from key maintainers Derek McGowan, Stephen Day, and Vincent Batts, with backing from hundreds of OCI contributors and organizations committed to container standardization. Another highlight was the long-awaited announcement of Alibaba Cloud’s membership: the last of the top five major hyperscale clouds to join the initiative, and the largest cloud provider in China.

Additionally, OCI community members Chris Aniszczyk, Jeffrey Borek, Rithu Leena John, & Patrick Chanezon secured a coveted speaking slot at KubeCon + CloudNativeCon North America to present “How Standards, Specifications and Runtimes Make for Better Containers” to a sold-out Seattle crowd. Check out a recording of their session below 🎥

We also started publishing ecosystem features in an effort to highlight how OCI is being leveraged by various projects. These deep dives, which we plan to continue sharing in 2019, were a hit with maintainers and readers alike; a sample of community usage posts shared in 2018 can be found below:

Check back soon for more ecosystem project features, including one from the containerd team!

In 2019, the OCI community plans to roll out updates to specifications, ship a v1.0 of runc + much more. Stay tuned, get involved, and follow along on social for an even bigger year of all things container standards & specs!

Bringing OCI images to the desktop with Flatpak

By Alex Larsson and Owen Taylor

Over the last five years, containers have taken the server world by storm. Many of the same things that make containers well-suited for server-side computing — the ability to test code in an environment that is very similar to the deployment environment, the ability to upgrade application software independently from the host operating system, the ability to deploy applications across multiple host operating systems — also make a lot of sense for desktop applications.

At the same time that container technologies were emerging on the server, the Flatpak project was being developed by a community of contributors, including engineers from Red Hat, Collabora, and Endless Mobile, as a way to improve application deployment on desktop Linux and allow application authors to make their applications available directly to users. Throughout the evolution of Flatpak, leading up to a 1.0 release in August 2018, it has been possible to share technologies with server-side containers, from namespaces, to seccomp, to the OCI Image format.

Containers for the Desktop

You might wonder if it would have been possible to go one step further and use an existing server-side runtime, such as runc, to run containers on the desktop. While it is possible to use server containers for desktop applications and get basic functionality working, the desktop world is pretty different from the server world. Instead of integrating with storage-area networks, network routing, and orchestration, a desktop application deals with USB input devices, geolocation, and desktop environment application menus. Server-side technology might have the ability to provide access to a device, or block it off, but can’t meaningfully handle interactively working with the user to establish fine-grained access control.

For this reason, Flatpak doesn’t use runc, but instead has its own runtime that runs within the user’s desktop session and provides services (called portals) that allow applications to access the desktop under the user’s control. A portal is a service, exposed via the D-Bus IPC protocol, that sits between the application and the resource the application wants to access (local files, printing, geolocation, etc.) and provides a user interface to let the user decide whether to allow access or not. Typically this is not done as a Yes/No question about permissions but instead as a natural part of the operation. For example, instead of asking a user “Allow application X to access your files?”, the user is shown a file selection dialog, and they can either pick a file to pass back to the application, or they can cancel the operation.
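
As an illustration of the portal pattern, here is a hedged Go sketch, using the github.com/godbus/dbus/v5 bindings, of how a sandboxed application would ask the FileChooser portal to open a file. The empty parent-window string and options map are simplifications, and the user’s actual selection is delivered later as a Response signal on the returned request handle, which this sketch does not wait for:

```go
package main

import (
	"fmt"

	"github.com/godbus/dbus/v5"
)

func main() {
	// Portals are exposed on the D-Bus session bus, reachable from
	// inside the sandbox.
	conn, err := dbus.SessionBus()
	if err != nil {
		panic(err)
	}
	portal := conn.Object("org.freedesktop.portal.Desktop",
		"/org/freedesktop/portal/desktop")

	// Ask the FileChooser portal to show an "Open File" dialog.
	// Arguments: parent window id, dialog title, options map.
	var handle dbus.ObjectPath
	err = portal.Call("org.freedesktop.portal.FileChooser.OpenFile", 0,
		"", "Open a file", map[string]dbus.Variant{}).Store(&handle)
	if err != nil {
		panic(err)
	}
	// The chosen file arrives asynchronously as a Response signal
	// on this request handle (not handled in this sketch).
	fmt.Println("request handle:", handle)
}
```

The application never gets blanket filesystem access; it only receives the file the user actually picked.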

In addition to a unique desktop-focused security model, Flatpak has an approach to combining operating system and application content into a single container that was inspired by the requirements of the desktop. The typical model for a server container is that a base image is arbitrarily modified to create the application container. Each application container is its own mini-operating system, and in order to fix a bug or security hole in the base operating system, the application has to be rebuilt and deployed. The OCI layer system potentially optimizes the deployment step, but the rebuilds are still necessary, and production images are often “squashed” for maximum efficiency.

The downsides of having every application independent are minimized in the server environment: we usually have a small number of applications running on a node with abundant disk space and network bandwidth, and hopefully have automation to automatically rebuild applications as necessary, as well as paid sysadmins. On a desktop, we might instead have dozens or hundreds of applications installed on a much more modest device, maintained by individual users. We also don’t want software vendors to have to rebuild their application in order to pick up a fix to the base operating system.

For this reason, when a Flatpak application is executed, two separate filesystems are mounted in its environment – the runtime filesystem is mounted at the path /usr, and the application filesystem is mounted at /app. Library and other search paths in the application’s execution environment are set up to search both directories so that code and resources can be bundled with the runtime or with the application. This way a single runtime can be reused by many applications, and can be updated without having to modify applications. Different applications can use different runtimes, so some applications might use a runtime that is maintained for long-term stability with few changes, and other applications might use a runtime that gets more rapid releases to pick up new library versions.
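
A rough sketch of that dual-mount layout, assuming Linux and a process already inside its own mount namespace, is below; the paths are placeholders, and Flatpak’s real setup does far more (tmpfs roots, read-only remounts, seccomp filters, and so on):

```go
package main

import (
	"fmt"
	"syscall"
)

// mountRuntimeAndApp bind-mounts a shared runtime tree at /usr and an
// application tree at /app, mirroring the layout described above.
// Requires CAP_SYS_ADMIN and a private mount namespace.
func mountRuntimeAndApp(runtimeDir, appDir string) error {
	if err := syscall.Mount(runtimeDir, "/usr", "", syscall.MS_BIND, ""); err != nil {
		return err
	}
	return syscall.Mount(appDir, "/app", "", syscall.MS_BIND, "")
}

func main() {
	// Placeholder paths for one runtime and one application.
	err := mountRuntimeAndApp("/path/to/runtime/files", "/path/to/app/files")
	if err != nil {
		fmt.Println(err)
	}
}
```

With both trees mounted, setting search paths such as PATH and LD_LIBRARY_PATH to cover both /app and /usr lets code and resources live in either tree.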

Flatpaks as OCI Images

The native image format for Flatpaks is OSTree. It is a local storage format that automatically supports deduplication and versioning. It also naturally comes with a distribution framework, which most Flatpak repositories use. However, that is not the only distribution mechanism Flatpak supports.

Organizations seldom deploy only servers or only desktops. Having a unified way to distribute desktop applications and server applications can be highly desirable for sysadmins: they don’t really want to maintain both an OSTree repository for desktops and an OCI registry for servers. Luckily, the OCI format is sufficiently flexible that it is also suitable for storing desktop applications. Flatpak supports installing applications and runtimes from OCI images, and a Flatpak remote can be either an OSTree repository or a registry implementing a tentative version of the OCI distribution spec. At the end-user level the difference is invisible, and all available sources of applications are integrated together and displayed to the user.

One of the basic advantages that the OCI Image format has over older image formats is the concept of annotations – in addition to a compressed filesystem and standard metadata such as the operating system, architecture, and author of the image, an OCI Image provides a set of arbitrary key/value pairs. When we’re storing a Flatpak as an OCI image, these annotations are used to store information like the permissions that the Flatpak requires and its installed size.
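
As a sketch of what this looks like in code, the Go types from the OCI image-spec (github.com/opencontainers/image-spec) expose annotations as a plain string map on the manifest; the annotation keys below are illustrative placeholders, not Flatpak’s actual key names:

```go
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/image-spec/specs-go"
	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// An OCI image manifest carrying arbitrary key/value annotations,
	// such as the permissions and installed size a Flatpak needs.
	manifest := v1.Manifest{
		Versioned: specs.Versioned{SchemaVersion: 2},
		Annotations: map[string]string{
			"org.example.flatpak.metadata":       "[Application]\nname=org.example.App",
			"org.example.flatpak.installed-size": "4708352",
		},
	}
	out, _ := json.MarshalIndent(manifest, "", "  ")
	fmt.Println(string(out))
}
```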

One thing that is still actively under development is browsing available Flatpaks. Typing ‘docker pull postgresql:latest’ may be a good user interface for the server command line, but a desktop user typically wants a nice user interface with icons, human-readable names, and a user-friendly description. These things can be stored in the OCI annotations of the individual images, but it’s also necessary to be able to efficiently download the information for all the images in the registry, without having to download each individual image. Currently, Flatpak supports a draft metadata format and protocol for this. The Flagstate server allows adding this capability to an existing registry. Browsing and searching available images in a registry is useful beyond the desktop, so perhaps this is something that future versions of the OCI distribution specification can address.

As Flatpaks become more commonly used as a way to distribute desktop applications, users can benefit from an expanded set of available applications, with more robust upgrades and enhanced security. Using OCI Images and the OCI distribution mechanism as a deployment technology enables sysadmins to have a unified way of managing and distributing server side and desktop applications within their organization.

Hundreds of applications, from Inkscape and Blender, to LibreOffice, to SuperTuxKart, are already available as Flatpaks. Flatpak is installed by default on current versions of some Linux distributions and can easily be installed on most others. Instructions for getting started can be found on flatpak.org.


Alex Larsson is a Senior Principal Software Engineer at Red Hat and the creator of Flatpak.

Owen Taylor is a Principal Software Engineer at Red Hat, and architect for Red Hat’s desktop and workstation engineering team.

OCI Image Support Comes to Open Source Docker Registry

By Phil Estes & Mike Brown

The Open Container Initiative (OCI) was formed in 2015 as a place to collaborate on the definition of a standard container runtime and image format. By that time, Docker had effectively become the de facto standard for container images, and DockerHub and many other public and private registries were filled with tens of thousands of available container images using the Docker image format.

Given this state of the world in late 2015, the OCI image specification work began in earnest with a strong group of collaborating independent and vendor-associated participants, using the Docker v2.2 image format as a starting point. Fast forward eighteen months, and by summer of 2017, both the runtime and image specifications reached their intended 1.0 milestone release.

That release was a great milestone for the OCI community and container ecosystem at large, but the next step beyond declaring victory on any specification is always: adoption! The runtime specification had a head start here. The default reference implementation within the OCI, runc, was already in use by several implementers, including the Docker engine, by the time the specification was released. For the image specification to be useful to developers and implementers, tools and, most notably, container image registries would have to adopt OCI v1.0. Many tools already existed to operate on Docker’s v2.2 image manifest format, and these tools would now need to adopt the OCI format as well.

This work—enabling OCI images in the core registry used on a daily basis for millions of images—started in 2016 on the Docker distribution project, long before the specification even reached 1.0. This open source project is probably better known as the registry that backs DockerHub as well as many other public and private registries. Now that this pull request has been merged, we’ll take a few minutes to describe a bit more about the OCI image spec and how the open source registry was modified to support OCI in addition to its native Docker image formats.

Background

Container registries today are usually the combination of an HTTP API backed by some form of content store, with the content itself delineated by various media types. In the Docker image specification, you have metadata types (like the image’s Docker runtime configuration) as JSON content, combined with references to image layers, usually tarred and compressed binary blobs which are then stored in the backing filesystem along with their requisite media types. For example, in the Docker v2.2 image world, each container image will have a manifest with the media type application/vnd.docker.distribution.manifest.v2+json. A list of image references (to support multi-platform images) is known as a manifest list, with media type application/vnd.docker.distribution.manifest.list.v2+json. An image layer commonly has the media type application/vnd.docker.image.rootfs.diff.tar.gzip.

The OCI specification took this v2.2 image specification from Docker as a starting point, but as definitions were changed in small ways during the specification process, each one of the Docker image metadata or layer media types developed into an OCI counterpart. Given these efforts, the resulting official OCI media types include application/vnd.oci.image.manifest.v1+json for the image manifest. Manifest lists have been renamed to “indexes” in OCI v1, giving us the media type: application/vnd.oci.image.index.v1+json. A layer type becomes application/vnd.oci.image.layer.v1.tar+gzip in OCI parlance, and so on.
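
The image-spec Go bindings export these OCI media types as constants; here is a small sketch that prints them next to their Docker v2.2 counterparts:

```go
package main

import (
	"fmt"

	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// Docker v2.2 media types paired with their OCI v1 counterparts.
	pairs := [][2]string{
		{"application/vnd.docker.distribution.manifest.v2+json", v1.MediaTypeImageManifest},
		{"application/vnd.docker.distribution.manifest.list.v2+json", v1.MediaTypeImageIndex},
		{"application/vnd.docker.image.rootfs.diff.tar.gzip", v1.MediaTypeImageLayerGzip},
	}
	for _, p := range pairs {
		fmt.Printf("Docker: %s\n   OCI: %s\n\n", p[0], p[1])
	}
}
```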

Implementation

To add OCI v1 support into the open source distribution project meant handling all the new media types from the OCI specification, and appropriately handling the HTTP API interactions when a client of the registry wants to “speak” in OCI media types versus the already supported Docker types. If you think of image manifests as the top-most objects in the registry’s world, then the registry already supported three: the original “schema 1” Docker image format, the “schema 2” single-image Docker format, and the manifest list multi-platform Docker image format (part of the schema 2.2 definition, but treated as a separate object type). Adding the OCI v1 types would mean supporting OCI v1 manifests and indexes—two additional formats, which would cascade into all the OCI media type references from these high-level manifests and indexes.
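
The dispatch problem can be pictured with a toy manifest PUT handler; this is an illustrative Go sketch, not the actual docker/distribution code, and the route and response bodies are placeholders:

```go
package main

import (
	"fmt"
	"net/http"

	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// putManifest picks a manifest handler based on the media type the
// client "speaks" in the Content-Type header.
func putManifest(w http.ResponseWriter, r *http.Request) {
	switch r.Header.Get("Content-Type") {
	case "application/vnd.docker.distribution.manifest.v2+json":
		fmt.Fprintln(w, "stored as a Docker schema 2 manifest")
	case "application/vnd.docker.distribution.manifest.list.v2+json":
		fmt.Fprintln(w, "stored as a Docker manifest list")
	case v1.MediaTypeImageManifest:
		fmt.Fprintln(w, "stored as an OCI v1 manifest")
	case v1.MediaTypeImageIndex:
		fmt.Fprintln(w, "stored as an OCI v1 index")
	default:
		http.Error(w, "unsupported manifest media type",
			http.StatusUnsupportedMediaType)
	}
}

func main() {
	http.HandleFunc("/v2/", putManifest)
	http.ListenAndServe(":5000", nil)
}
```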

The first attempt looked at simply expanding the registry’s handlers for the schema 2 and manifest list types to support OCI v1 manifests and OCI v1 indexes. These two types are quite similar between Docker v2.2 and OCI v1, so it seemed like it might be a quick path to getting support for OCI into the registry codebase.  However, over time it became clear that this would not be an optimal path for long-term OCI image support.

Success

Finally, the approach presented in GitHub pull request #2076 (adding specific new handlers to the open source registry for the main OCI image types, v1 manifests and v1 indexes) was agreed on, and in July of 2018 this PR was merged into the docker/distribution project. This PR adds new handlers for the OCI types as well as new tests, validation code, and integration that crosses various core sections of the registry codebase. As with most PRs that add significant capability, it was not an easy task, and Mike Brown from IBM persisted on the project for over a year before working out all the requested issues, reviews, and bugs found during testing and validation of the OCI support!

Summary

Interested parties can try out the recently available v2.7.0 release candidate with OCI v1 image support. We expect after final release that registries based on the open source distribution, including DockerHub, will update in the near future to adopt these new features. At that point, client tools which today already support the OCI image formats will have interoperability with these registries thanks to the hard work of many involved from Docker, to the OCI, to participating vendors and contributors.

In addition to the growing adoption of the OCI v1 image formats—used natively in other projects like the CNCF containerd project as well as the Moby project’s LinuxKit implementation—we’re also excited about the standardization of the registry API itself coming to OCI this year. The proposal to bring the Docker HTTP registry API into the OCI has already been accepted. This specification will add industry standardization around the protocol for talking to registries, in addition to the existing interoperability brought by the OCI v1 image specification.

It’s great to see growing adoption of the OCI specifications, and with the added support in the open source Docker registry for OCI images, we see this as just the beginning of a whole host of tools, vendor products, and software that will be enabled to utilize the OCI specifications now and in the future.

PouchContainer: How OCI Specifications Power Alibaba

By Allen Sun, Alibaba Group

PouchContainer is an open source container project created by Alibaba Group to be enterprise-ready and promote OCI container standards. The project is a fundamental piece of software in Alibaba’s infrastructure, helping process transactions smoothly across millions of containers.

To become a general container engine for every scenario in production, PouchContainer seeks to support several OCI-compatible container runtimes, making the container service work out of the box:

  • runc: a container runtime based on Linux cgroups and namespaces;
  • Kata Containers: a container runtime based on a hypervisor; and
  • runlxc: a container runtime based on LXC, especially for legacy kernels.

Architecture Based on OCI and Open Source Components


Three OCI-compatible runtimes are shown in the middle-right part of the architecture diagram.

Features

PouchContainer’s most important features are:

  • Rich container: Besides the common ways of running containers, PouchContainer includes a rich container mode, which integrates more services, hooks, and other container internals to guarantee that containers run as usual.
  • Strong isolation: PouchContainer is designed to be secure by default. It includes many security features, like hypervisor-based container technology, lxcfs, directory disk quotas, a patched Linux kernel, and so on.
  • P2P distribution: PouchContainer utilizes Dragonfly, a P2P-based distribution system, to achieve lightning-fast container image distribution.
  • Kernel compatibility: Enables OCI-compatible runtimes to work on old kernel versions, like Linux kernel 2.6.32+.
  • Standard compatibility: PouchContainer keeps embracing the container ecosystem, supporting industry specifications such as CNI, CSI, and so on.
  • Kubernetes native: PouchContainer natively implements the Kubernetes Container Runtime Interface (CRI).

Learn more about PouchContainer

PouchContainer brings many additional features to end-users. Want to learn more? Please visit the PouchContainer GitHub, where the PouchContainer community is currently busy preparing the 1.0.0 GA release.

CRI-O: How Standards Power a Container Runtime

By Joe Brockmeier, Red Hat

The CRI-O project (part of the former Kubernetes incubator) is busy working on the upcoming 1.11 release, which will ship in conjunction with the Kubernetes 1.11 release. It will have some interesting new features, but won’t lose sight of its stated No. 1 goal: to never break Kubernetes. A parallel goal is to run any OCI image from any registry (once the OCI distribution specification is finalized).

Historically, Kubernetes has worked with container runtimes that were designed to do many things: build container images, manage container security, manage container orchestration, inspect container images, etc. CRI-O, on the other hand, was designed just to support the functions Kubernetes needs to actually run containers.

Depending on Standards

CRI-O moves in lock-step with Kubernetes’ Container Runtime Interface (CRI), the API for container runtimes to integrate with a kubelet. CRI-O is aligned with the upstream Kubernetes releases, so any changes to the CRI in Kubernetes are supported in the matching CRI-O release. For example, the most recent CRI-O 1.10 release matches Kubernetes 1.10. CRI-O 1.11 will release with Kubernetes 1.11, and so forth.

Most users these days run Kubernetes with a version of Docker, but some organizations with different business needs might want to use container types that haven’t been implemented there yet, or alternatives like Kata Containers. CRI-O opens the door for this by supporting any OCI-compliant images and runtimes.

Here’s how it works:

  • Kubernetes asks the kubelet to start a pod
  • The kubelet talks to the CRI-O daemon using the CRI
  • CRI-O uses a library that implements the OCI standard to pull the image from a registry
  • CRI-O uses another standard library to unpack the container image for use
  • CRI-O then generates a JSON file that describes how the container is to be run (see the sketch after this list)
  • Next, CRI-O launches an OCI-compatible runtime (currently runc or the Clear Containers runtime) to run the container processes
  • A common process handles logging for the container and monitors the process
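
To make the JSON step concrete, here is a hedged sketch using the Go bindings from the OCI runtime-spec (github.com/opencontainers/runtime-spec/specs-go): the file CRI-O writes is an OCI runtime config.json, and a real one carries far more detail (mounts, namespaces, capabilities, cgroups):

```go
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A minimal OCI runtime-spec config describing how to run a
	// container: its root filesystem and the process to start.
	spec := specs.Spec{
		Version:  specs.Version,
		Root:     &specs.Root{Path: "rootfs", Readonly: true},
		Process:  &specs.Process{Args: []string{"/bin/sh"}, Cwd: "/"},
		Hostname: "example-pod",
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out)) // what an OCI runtime like runc consumes
}
```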

You might also be wondering about networking. Again, the idea is to have flexibility within a standard interface, so CRI-O uses the Container Networking Interface (CNI) to set up networking. Any CNI plugin can be used with CRI-O, giving users flexibility over their container networking stack as well.

CRI-O helps achieve what the OCI specifications and CRI API set out to do – make the container runtime an implementation detail that the end user doesn’t have to worry about. Worry about how Kubernetes works with your application, not how Kubernetes works with the container runtime.

Learning More about CRI-O

Want to learn more about CRI-O? Of course you do! For now, the best resources on CRI-O are the README on GitHub and the accompanying tutorial; be sure to also watch the CRI-O blog.

OCI Member Spotlight: OpenStack (Kata Containers)

The OCI community is comprised of a diverse set of member organizations that are committed to creating open industry standards around a container image format and runtime. This blog series highlights OCI members and their contributions to building an open, portable and vendor neutral specification.

Name: Xu Wang
Title: CTO + Kata Architecture Committee member
Company: Hyper.sh + Kata Containers

What is Kata Containers?

Kata Containers is an open source project hosted by the OpenStack Foundation that provides lightweight virtual machines that feel and perform like containers, while providing the workload isolation and security advantages of traditional VMs.

Why did OpenStack (Kata Containers) join OCI?

The Kata Containers project runs containers specified by the OCI runtime spec in virtual machines. We joined OCI to guarantee compatibility between Kata Containers and the OCI runtime spec, and to help improve the OCI specifications – enabling a more efficient Kata Containers. We look forward to collaborating around tooling, compatibility, and interoperability testing.

How can OCI community members contribute to Kata Containers? 

Many of the Kata community members come from the OCI community, so we look forward to more collaboration on use case sharing, specification discussion, testing, and toolchains. See GitHub for more information: https://github.com/kata-containers/community

How do you anticipate OCI changing the container technology landscape? 

The OCI specs guarantee that container technology can be open, vendor neutral and a cornerstone of future computing infrastructure.

What is the benefit of open standards like OCI for users of Kata Containers?

The open and unified container spec gives users more options and helps Kata to be adopted in more cases.

More and more applications are shipped and run under the OCI specs. With OCI, Kata can enable users to launch unified container applications regardless of whether the runtime isolation technology is namespaces or a VM.

For cloud providers, if a user application has been developed against the OCI specs, they can run the application with Kata Containers directly, which introduces fewer layers than running a container orchestration system on a VM pool.

On the other hand, users who have already invested in VM technology can apply their existing tools to Kata Containers and move their applications to microservices with OCI containers.

Teaming up with Docker to Support a Diverse Container Ecosystem

With a commitment to driving inclusivity in the community, OCI is proud to be an official Diversity Scholarship sponsor for DockerCon 2018.

By actively seeking ways to increase the ecosystem’s diversity, OCI + Docker’s collective goal is to make DockerCon a safe place for everyone to learn and collaborate.

The scholarship program will provide underrepresented members of the global container community with a scholarship to attend the annual event.

To learn more, make sure to check out the selection process, scholarship details and requirements below + don’t forget to submit an application by Wednesday, April 25, 2018 at 5:00PM PST!

Apply Now!

Selection Process:

A committee of Docker community members will review and select the scholarship recipients. Recipients will be notified by the week of May 7, 2018.

What’s included:

Full Access DockerCon Conference Pass

Requirements:

Must be able to attend DockerCon US 2018

Must be 18 years old or older to apply