The Continuous Delivery Summit is a one-day event that brings together the open source CI/CD community. Meet peers and drive the future direction of continuous delivery.
How to register: Pre-registration is required. To register for the Continuous Delivery Summit, add it on to your KubeCon + CloudNativeCon registration.
Apache Samza is a distributed stream processing framework that allows you to process and analyze your data in real time. It has been widely used at LinkedIn and other companies at large scale. Recently, we added Kubernetes as a new scheduler backend for Samza to run in distributed mode. In this talk, we will dive deep into the technical details of how Samza runs natively on Kubernetes by leveraging the primitives Kubernetes provides for scheduling, storage, etc. We will also compare running Samza on Kubernetes with other existing solutions such as YARN and standalone mode. Finally, we will share some practices for running Kubernetes as a container orchestration framework for other big data processing engines.
Weiqing has been working on big data computation frameworks since 2015 and is an Apache Spark/HBase/Hadoop/Samza contributor. She is currently a software engineer on the streaming infrastructure team at LinkedIn, working on Samza, Brooklin, etc. Before that, she worked on the Spark team at...
Raw block PersistentVolumes (PVs) allow applications to consume storage in a new way. In particular, Rook-Ceph now makes use of them to provide the backing store for its clustered storage in a more Kubernetes-like fashion and with improved security. Now we can rethink the notion of how we structure our storage clusters, moving the focus away from static nodes and basing them on more dynamic, resilient devices.
This talk will go over how we incorporated raw block PVs, how the operator manages them, and how we can now define storage clusters. It will also include a demo of the resiliency of these new types of devices. By the end of the talk, you'll know not only how to use raw block PVs but also why and when to use them.
Jose Rivera is a Senior Software Engineer at Red Hat. He's worked in and around storage for over 10 years, with experience spanning multiple networked and software-defined storage projects such as Samba (SMB) and GlusterFS. Currently he works on OpenShift Container Storage...
Accounting is very important in Kubernetes. Better accounting leads to improved node stability, higher density, and more accurate charging of users based on their actual resource utilization. Unfortunately, there are gaps in resource accounting in Kubernetes today, mostly stemming from the fact that running a pod is not actually free.
In Kubernetes 1.16, the PodOverhead feature was introduced to address these gaps.
We’ll dive into the details of a pod’s journey from client CLI to running on a node, touching on kubectl, API server, admission controllers, etcd, scheduler, kubelet, containerd/cri-o, and runtimes like Kata Containers and gVisor. Through this we will highlight the current gaps and how the PodOverhead feature addresses them.
Attend to get a basic understanding of the Pod creation process, and learn what the new PodOverhead feature is and how it can be used to improve cluster stability.
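As a rough sketch of how PodOverhead surfaces to users, a RuntimeClass can declare the fixed per-pod cost of running under a given runtime. The field names below follow the Kubernetes 1.16-era `node.k8s.io/v1beta1` API; the overhead values are illustrative, not measured:

```yaml
# Requires the PodOverhead feature gate (alpha in Kubernetes 1.16).
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata            # CRI handler, e.g. for Kata Containers
overhead:
  podFixed:              # resources charged per pod, on top of its containers
    memory: "120Mi"      # illustrative values; measure for your own runtime
    cpu: "250m"
```

Pods that set `runtimeClassName: kata` then have this overhead added to their accounting by the scheduler and kubelet, closing the gap between requested container resources and what the node actually spends.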
Eric is a senior software engineer at Intel’s Open Source Technology Center, based out of Portland, Oregon. Eric has spent the last several years working on embedded firmware and the Linux kernel. Eric has been a developer and technical lead for the Intel Clear Containers project...
Is Kubernetes a kernel or distribution? Yes! It is necessarily both!
CRDs, out-of-tree cloud providers, and CNI/CSI/CRI abstractions evolve Kubernetes’ core toward an extensible kernel.
At KubeCon NA 2017, Tim Hockin and Michael Rubin started a conversation on formalizing “Kubernetes upstream as a distro”, proposing that we clean up our thinking and processes, define tools and standards, and incentivize distros to stay close to upstream. They argued for a Kubernetes reference distribution focused on correctness and stability.
So where is it?
After a slow start, we have momentum in 2019 to improve conformance and API stability, and to better document support stances. However, to understand why we don’t (yet) have an upstream reference distro, we need to dive deep into build/release/test tooling.
This talk will summarize Kubernetes distro issues/advances and potential contribution areas for individuals and companies.
Stephen Augustus is an active leader in the Kubernetes community. He currently serves as a Special Interest Group Chair (Release, PM), a Release Manager, and a subproject owner for Azure. Stephen leads the Cloud Native Developer Strategy team at VMware, driving meaningful interactions...
Tim Pepper is a Principal Engineer in VMware's Open Source Technology Center with over 25 years in open source, working as an open source developer advocate and contributor to Kubernetes (emeritus Steering Committee elected member, emeritus Code of Conduct Committee elected member...
Recently, OpenEBS was accepted as a CNCF sandbox project. OpenEBS is a block storage provider that is built on top of Kubernetes APIs and extends them to give end users granular control over persistent storage decisions. We welcome the community to join us and innovate in the Container Attached Storage space. In this talk, Amit Das and Vishnu Itta, the core maintainers of OpenEBS, will share the background and design principles behind OpenEBS. Through real-life use cases, Amit and Vishnu will share the experiences of various OpenEBS users in solving their persistent storage needs in Kubernetes environments ranging from home-grown labs to managed cloud platforms to on-premise solutions and other hybrids.
Amit is the director of engineering at MayaData, where he works on various open source projects including OpenEBS and Metacontroller. In his earlier days, he was a contributor to the OpenStack Cinder and Apache CloudStack projects. When not writing code or talking about it, Amit loves...
A developer who is always eager to learn and loves math, algorithms, and programming. He has solid experience in storage protocols, ZFS, FreeBSD internals, Linux, and device drivers. He enjoys playing table tennis and traveling.
KubeVirt is a Kubernetes extension that supports running traditional Virtual Machine workloads side by side with containers.
In this session we will explore the architecture behind KubeVirt and how NVIDIA is leveraging that architecture to power GPU workloads on Kubernetes. Using NVIDIA’s GPU workloads as a case study, we’ll provide a focused view on how host device passthrough is accomplished with KubeVirt as well as providing some performance metrics comparing KubeVirt to standalone KVM. You’ll come away with a high level understanding of what KubeVirt is capable of and the general design principles that drive the project.
Vishesh is a Software Engineer at NVIDIA. He focuses on different aspects of enabling VM workload management on Kubernetes clusters. He is specifically interested in GPU workloads on VMs. He is an active contributor to KubeVirt, a CNCF Sandbox project.
K9P, a virtual file system, exposes the state of a Kubernetes cluster as files. Our terminals have been optimized over the last 40 years towards working with files, kubectl not so much. K9P allows us to carry the mantra of "everything is a file" to the distributed computing extreme.
K9P allows you to integrate Kubernetes resources into an existing workflow, or create new ones. Scale a Deployment by writing to a file. Locate failing Pods with grep. Update configuration in ConfigMaps with sed.
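A sketch of what that workflow might look like, assuming a K9P server mounted at /mnt/k9p and a hypothetical namespace/resource/name directory layout (the exact mount options and file tree depend on K9P itself, so treat every path below as illustrative):

```shell
# Attach the K9P server as a 9p filesystem (hypothetical address and port).
mount -t 9p -o trans=tcp,port=564 127.0.0.1 /mnt/k9p

# Locate failing Pods with grep (hypothetical status files).
grep -l CrashLoopBackOff /mnt/k9p/default/pods/*/status

# Scale a Deployment by writing to a file.
echo 3 > /mnt/k9p/default/deployments/myapp/replicas

# Update configuration in a ConfigMap with sed.
sed -i 's/log_level=info/log_level=debug/' /mnt/k9p/default/configmaps/myapp/config
```

These commands require a running K9P server and cluster, so they are a sketch of the idea rather than a copy-paste recipe: once the cluster is a filesystem, every standard Unix tool becomes a Kubernetes client.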
Software engineer working on scaling bare-metal Kubernetes clusters by day. Builds experiments with esoteric '90s technology by night. Previous talks include an introduction to Kubernetes controllers at KubeCon EU 2018 and Building a Go-based MIDI Player at FOSDEM 2019.
SIG API Machinery is responsible for all generic API topics in Kubernetes, i.e. the generic API server implementation, API CRUD semantics, discovery, the admission control mechanism, conversion, defaulting, persistence with etcd, general controllers like garbage collection, Go client libraries, code generation, and extension points like CustomResourceDefinitions, aggregation, and admission. This session will have two parts: first, a deep dive into a selection of API Machinery topics, probably defining API types in Golang; groups, versions, kinds, and resources; tags; code generation; schemes; different variants of codecs; and how to use all of this with CustomResourceDefinitions and a custom client-go client; second, time for general discussion and API Machinery questions. This session is targeted especially at people using the Kubernetes APIs with client-go who want to understand what is going on behind the scenes, and people extending Kubernetes with APIs using aggregated API servers or CustomResourceDefinitions.
Stefan is a Senior Principal Engineer at Upbound working on Kubernetes-based control plane technology. He contributed a major part of the CustomResourceDefinition features to Kubernetes, lead-architected kcp and is among the top 10 contributors to Kubernetes. Before Upbound he worked...
Virtual-kubelet is an open source kubelet implementation that allows users to extend Kubernetes in multiple, crazy ways. A couple of examples: a provider to order Domino's pizza, or one to spin out workloads to a satellite in space. This talk will go through the inner workings of virtual-kubelet (vk) and how users can build their own providers to leverage the flexibility that vk offers. Contributors to virtual-kubelet have been working on new features past 1.0, and this talk will also give a roadmap of what’s to come. Azure will also share their experiences writing a provider for virtual-kubelet and the use cases associated with it.
In Barcelona, we raced through seven different container runtime setups from Docker to cri-o to containerd, including interesting projects like AWS's Firecracker, Kata Containers, and gVisor. For each, we demonstrated how to allow Kubernetes to use it via either RuntimeClass or standard kubelet CRI configuration parameters, and then gave a quick highlight of its feature set, maturity, and usage in the ecosystem.
While we successfully demoed each runtime, we didn't have time to assess each of them with regard to the "why?" question: why would an operator or user choose one of these runtimes? In this "Part 2" talk we will take the time to walk back through each runtime, cover updates to the projects since May, look at performance and security characteristics, and answer the "why" question for each one!
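For reference, the RuntimeClass mechanism mentioned above boils down to a one-line selection in the pod spec. This sketch assumes a cluster admin has already registered a RuntimeClass named "gvisor" whose handler the node's CRI runtime understands (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # hypothetical RuntimeClass registered by the admin
  containers:
    - name: app
      image: nginx           # this container runs inside the selected sandbox
```

Everything else — which binary implements the sandbox, how it is wired into containerd or cri-o — lives in node-level configuration, which is exactly the trade-off space this talk compares.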
Phil is a Principal Engineer for Amazon Web Services (AWS), focused on core container technologies that power AWS container offerings like Fargate, EKS, and ECS. Phil is currently an active contributor and maintainer for the CNCF containerd runtime project, and participates in the...
This deep dive from the Multi-tenancy Working Group will include an in-depth technical exploration of multi-tenancy in core Kubernetes, as well as the tooling and services the working group has been developing to make multi-tenancy mainstream for Kubernetes users.
Adrian is a software engineer on the Google Kubernetes Engine (GKE) in Kitchener, Ontario, and created the Hierarchical Namespace Controller (HNC). Before Google, he was a developer at Intel’s Programmable Solutions Group (formerly Altera) in Toronto, and specialized in parallel...
Sanjeev Rampal, PhD, is a Principal Engineer in the Cloud Platforms and Solutions group at Cisco Systems where he works on the Cisco Container Platform, an enterprise multi-cloud platform based on Kubernetes and cloud native technologies. He has over 20 years of experience in development...
Baidu has internally improved the performance of large-scale deep learning workloads by using the Volcano project. Volcano's CRD-based computing resource model makes it possible to use resources more efficiently and to configure computing models more flexibly. The project provides a unified abstraction of underlying capabilities such as group scheduling, fair share, priority queues, and job suspend/resume, which makes up for functionality missing from the native Job-based training operators.
After adopting Volcano, Baidu's internal resource utilization increased by 15%, and training task completion speed increased by 10%. This talk will introduce the overall functionality of Volcano, the transformation of the old operator to support Volcano, and a comparison of the performance of deep learning training tasks before and after using Volcano.
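For context, Volcano's CRD-based model looks roughly like the following sketch of a Volcano Job; the queue name, task shapes, replica counts, and image are illustrative rather than Baidu's actual configuration:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: dl-training
spec:
  schedulerName: volcano   # hand placement decisions to Volcano's scheduler
  minAvailable: 3          # gang constraint: schedule all-or-nothing
  queue: default           # fair-share queue the job is charged against
  tasks:
    - name: ps             # parameter-server task group
      replicas: 1
      template:
        spec:
          containers:
            - name: ps
              image: paddlepaddle/paddle   # illustrative training image
    - name: worker         # worker task group
      replicas: 2
      template:
        spec:
          containers:
            - name: worker
              image: paddlepaddle/paddle
```

The gang constraint (`minAvailable`) is what prevents the partial-placement deadlocks that plain per-pod scheduling can cause for distributed training jobs.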
Ti Zhou is a Kubernetes member and LF AI & Data TAC member. He currently serves as a senior architect at Baidu Inc., focusing on the PaddlePaddle deep learning framework and Baidu Cloud Container Engine, and helps developers deploy cloud-native machine learning on private and public clouds.
Join Kubernetes SIG Storage to learn about the areas of our focus, what we are working on currently, and how you can get involved. Veteran SIG Storage members will also present details on projects the SIG is actively working on, and help answer any questions you may have.
Saad Ali is a Staff Software Engineer at Google and member of the CNCF Technical Oversight Committee. He works on the open-source Kubernetes project, and has led the development of the Kubernetes storage and volume subsystem. He serves as a lead of the Kubernetes Storage SIG, and...
Snow works on Square's Traffic & Observability team, focusing on service discovery and all things software proxies. In addition to this, he is also an Envoy maintainer.
Harvey Tuch is a Staff Software Engineer at Google where he leads the Envoy Platform team. He is an Envoy senior maintainer and is a driver of the Universal Dataplane API (UDPA) initiative. His Envoy interests include xDS APIs, security, fuzzing and performance.
Lizan Zhou is a Founding Engineer at Tetrate, leading the mesh backend team. He is a senior maintainer of Envoy and one of the core contributors to Istio. Previously he worked at Google Cloud, where he focused on security and networking for Istio and Cloud Endpoints...