Flannel is an overlay-based technique for networking Docker containers on CoreOS platforms. This tutorial explains the theory behind the mechanism, gives setup instructions, and discusses its limitations.
3. CoreOS
• Lightweight OS based on Gentoo Linux
• Has a distributed key-value store at its core
• Read-only rootfs; writable /etc
• All services run in containers
4. Flannel Basics
• One CIDR subnet per machine, as in Kubernetes
  o Host 1: 10.10.10.0/24
  o Host 2: 10.10.11.0/24
• No Docker port-based mapping
• Containers reach each other directly by IP
• Peer network configs are exchanged over etcd
• Packets are encapsulated using UDP, with VxLAN support coming soon
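To make the etcd exchange concrete, here is a minimal sketch of how the shared config and the per-host subnet leases can be inspected. It assumes etcdctl's v2 syntax and the /coreos.com/network key prefix used later in this deck; the listing is illustrative and the exact key layout may vary by flannel version:

$ # cluster-wide network config written by the administrator
$ etcdctl get /coreos.com/network/config
$ # per-host subnet leases acquired by the flanneld daemons
$ etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.12.224.0-20
/coreos.com/network/subnets/10.13.128.0-20

Each lease key maps a host's /20 subnet to that host's external IP, which is how every flanneld learns where to send encapsulated traffic for a given container address.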
6. Instructions to Run Flannel
1. Build flannel on each host:

$ git clone https://github.com/coreos/flannel.git
$ cd flannel
$ docker run -v `pwd`:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"

2. Set the network config key in etcd:

$ curl -L http://127.0.0.1:4001/v2/keys/coreos.com/network/config -XPUT -d value='{
    "Network": "10.0.0.0/8",
    "SubnetLen": 20,
    "SubnetMin": "10.10.0.0",
    "SubnetMax": "10.99.0.0",
    "Backend": {"Type": "udp", "Port": 7890}}'
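Before starting flanneld, it can be worth reading the key back to confirm the config was stored. A minimal sketch against the same etcd HTTP endpoint used above; the response is abbreviated and illustrative of etcd's v2 JSON format:

$ # read the network config back from etcd
$ curl -L http://127.0.0.1:4001/v2/keys/coreos.com/network/config
{"action":"get","node":{"key":"/coreos.com/network/config","value":"{ \"Network\": \"10.0.0.0/8\", ... }"}}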
7. Instructions to Run Flannel (contd.)
3. Start flannel.
  o A flannel0 device is created and a route is set for the full flat IP range.

$ sudo ./bin/flanneld &

Output:
I1219 17:34:41.159822 00809 main.go:247] Installing signal handlers
I1219 17:34:41.160030 00809 main.go:118] Determining IP address of default interface
I1219 17:34:41.160579 00809 main.go:205] Using 192.168.111.14 as external interface
I1219 17:34:41.212157 00809 subnet.go:83] Subnet lease acquired: 10.12.224.0/20
I1219 17:34:41.217829 00809 main.go:215] UDP mode initialized
I1219 17:34:41.218953 00809 udp.go:239] Watching for new subnet leases
I1219 17:34:41.219349 00809 udp.go:264] Subnet added: 10.13.128.0/20

core@coreos-05 ~ $ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
0.0.0.0         192.168.111.1   0.0.0.0         UG     1024    0    0    eth0
10.0.0.0        0.0.0.0         255.0.0.0       U      0       0    0    flannel0
10.12.224.0     0.0.0.0         255.255.240.0   U      0       0    0    docker0
192.168.111.0   0.0.0.0         255.255.255.0   U      0       0    0    eth0
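flanneld also writes the parameters of the acquired lease to /run/flannel/subnet.env, which is what the next step sources. A quick check, with values illustrative and matching the lease acquired above:

$ # inspect the environment file generated by flanneld for this host
$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.0.0.0/8
FLANNEL_SUBNET=10.12.224.1/20
FLANNEL_MTU=1472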
8. Instructions to Run Flannel (contd.)
4. Restart the docker daemon with the appropriate bridge IP and MTU:

$ source /run/flannel/subnet.env
$ sudo ifconfig docker0 ${FLANNEL_SUBNET}
$ sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
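On CoreOS, rather than launching the daemon by hand, the same bridge settings are commonly applied through a systemd drop-in so they survive reboots. This is a sketch under assumptions not taken from this deck (drop-in path, unit name, and docker binary location); it reuses only the subnet.env variables and flags shown above:

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/40-flannel.conf
[Service]
# pull FLANNEL_SUBNET and FLANNEL_MTU from the file flanneld wrote
EnvironmentFile=/run/flannel/subnet.env
# clear the stock ExecStart, then start docker on the flannel-assigned bridge
ExecStart=
ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
EOF
$ sudo systemctl daemon-reload && sudo systemctl restart docker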
9. Testing Flannel Networking
• Ping between two bash containers on two different hosts succeeds. The traffic on the wire is encapsulated by flanneld.

Topology: host 192.168.111.14 (docker0 10.12.224.1) and host 192.168.111.13 (docker0 10.13.128.1), each running a bash container.

On host 192.168.111.14:
$ docker run -i -t ubuntu /bin/bash
root@36484def3b03:/# ifconfig eth0
eth0  Link encap:Ethernet  HWaddr 02:42:0a:0c:e0:02
      inet addr:10.12.224.2  Bcast:0.0.0.0  Mask:255.255.240.0
root@36484def3b03:/# ping 10.13.128.2
Success!

On host 192.168.111.13:
$ docker run -i -t ubuntu /bin/bash
root@e0b9dd20d146:/# ifconfig eth0
eth0  Link encap:Ethernet  HWaddr 02:42:0a:0d:80:02
      inet addr:10.13.128.2  Bcast:0.0.0.0  Mask:255.255.240.0
10. Packet on the Wire
[Packet captures on the slide show the original ICMP packet between the two containers and the encapsulating UDP header introduced by flannel.]
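To reproduce this capture while the ping is running, one can sniff both the physical interface and the flannel device. A minimal sketch, assuming the UDP backend port 7890 configured earlier in this deck; the filters are standard tcpdump syntax and the output is omitted:

$ # outer view: flannel's UDP encapsulation on the underlay network
$ sudo tcpdump -ni eth0 udp port 7890
$ # inner view: the original ICMP packets crossing the flannel0 device
$ sudo tcpdump -ni flannel0 icmp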
11. Limitations
• IP address overlap is not possible
  o VxLAN is not used to create container groups
• User-space encapsulation and forwarding
  o Potential performance bottleneck