AlexJ's Computer Science Journal

DevConf 2015 – Part 2

[see part 1 first]

Day 2

I went early in the morning to a workshop about the guts of a modern NIC driver and bonding internals. It was very interesting (not for beginners, but also not that exclusive if you had a minimum of exposure to Linux device drivers and Linux networking). We got to compare some code from the veth and virtio drivers, and there was a discussion about the architecture of bonding and bridging in the Linux kernel. Nothing mind-blowing, but not something I see every day.

After that, I went to a presentation about virtualization on secondary architectures, meaning non-{x86, x64, ARM}. Unfortunately it was less about virtualization and more about hardware, so I was out of the loop. I regret not going to another presentation instead, the one about sosreport.

The sosreport presentation (which I watched online later) was both on an interesting topic and delivered in a funny, geeky way by the presenter. sosreport, as I had discovered a couple of weeks earlier, is a project/tool that gathers diagnostic information about a system, which can then be sent to a third party to analyze and find the cause of a crash or malfunction.
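For the curious, generating a report is a one-liner. A minimal sketch (the plugin names after -o are only an illustration; the available plugins and the exact output path vary by version and distribution):

```shell
# Run as root on a RHEL/CentOS/Fedora box.
# --batch skips the interactive questions, -o limits the run to some plugins.
sosreport --batch -o networking,kernel

# The tool prints the path of the resulting tarball (somewhere under
# /var/tmp/), which is what you would attach to a support ticket.
```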

An interesting presentation was the one about Quick Hacks for DevOps. The title says it all: tips and tricks for DevOps. I would say they are tricks for any sysadmin, but mostly useful for people who administer small to medium clusters. There were also some tips useful for any Linux user at the beginning. I am going to add to my shell the tip about coloring your prompt red if the previous process returned an error (simple and useful).
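That prompt trick boils down to checking $? before each prompt is drawn. A minimal sketch for ~/.bashrc (the prompt layout is my own choice, not the exact one from the talk):

```shell
# Turn the prompt red after a failed command, leave it normal otherwise.
__set_prompt() {
    local status=$?                            # exit code of the last command
    if [ "$status" -ne 0 ]; then
        PS1='\[\e[31m\]\u@\h:\w\$\[\e[0m\] '   # red prompt on failure
    else
        PS1='\u@\h:\w\$ '                      # plain prompt on success
    fi
}
PROMPT_COMMAND=__set_prompt                    # bash runs this before each prompt
```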

Next, I was at a workshop about Kubernetes. I didn’t go to the presentation about it a day earlier, which would have given me the needed introduction, but the workshop was educational nonetheless. I had heard about Google’s Kubernetes project a week earlier. It’s Google’s way of managing Docker. It builds on the idea of containers, organizing them into ‘pods’. A pod is a group of containers that work together to provide a service (for example, combining a web server pod with database pods to provide a web service). Kubernetes is a Go-based interface for managing these pods, which run on ‘minions’ (the host machines).
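To make the pod idea concrete, here is a hedged sketch of what defining and launching one looks like. The pod name, labels and images (nginx, redis) are placeholders of mine, and the exact manifest fields depend on the Kubernetes version:

```shell
# A minimal pod grouping two cooperating containers.
cat > webapp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: frontend
    image: nginx
  - name: backend
    image: redis
EOF

kubectl create -f webapp-pod.yaml   # ask the master to schedule the pod
kubectl get pods                    # see which minion/node it landed on
```

The point is that the unit of scheduling is the pod, not the individual container: both containers land on the same host and can talk to each other locally.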

After that I went to a workshop about OpenShift, another technology I had learned about a week earlier. OpenShift is an older project (currently at v2), but version 3 was rewritten on top of Kubernetes. While v2 is written in Ruby, v3 is rewritten in Go, under the name Origin.

At first it seemed that OpenShift and Kubernetes do the same thing, but after a while (and some direct questions to the presenters) I got the idea: Docker is the container technology that does the image management, Kubernetes is the management layer over Docker that provides an infrastructure, and OpenShift is a framework for a Platform as a Service (like Google App Engine).

So, my personal executive summary: containers are the technology within the Linux kernel, Docker is a userspace framework to deploy containers on a host, Kubernetes is an orchestration platform for deploying Docker containers on a cluster of nodes, and OpenShift (v3) is the overlay on top of Kubernetes that gives developers a developer-centric interface for deploying their applications on the infrastructure.

That being said, here is my rant about Docker: container technology has been out there for a while. OpenVZ was one of the first projects that caught on, but it required a custom Linux kernel. When things like cgroups made it into the mainline kernel, projects like LXC could create containers using a normal Linux kernel. Docker didn’t do anything new regarding containers, but it did provide an awesome, git-like image distribution mechanism. What Docker really did, from my point of view, was make containers ‘cool’ for the market. Probably because it had corporate backing and made things more user friendly, it got further than the other projects. And this is why ‘Docker’ has been the buzzword at every conference in the last year. So know what Docker is and what it is not!
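To underline the point that containers are kernel features rather than Docker inventions, util-linux’s unshare can demonstrate the namespace half without Docker or LXC at all. A small sketch, assuming a kernel with unprivileged user namespaces enabled:

```shell
# Namespaces isolate what a process sees; cgroups limit what it may use.
# Create a new user + UTS namespace and change the hostname inside it:
unshare --user --map-root-user --uts sh -c '
    hostname container-demo   # only visible inside this UTS namespace
    hostname
'
hostname                      # the real host name is untouched out here
```

Docker wires together exactly these primitives (plus cgroups, mount namespaces and a layered image format) into something convenient.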

Day 3

I started the last day with a presentation about perf. Perf is one of those tools that do magic, and it’s very nice to hear about. It just doesn’t make a very good presentation topic; it’s better to see it in action, with hands-on examples.
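In that spirit, here is the kind of hands-on example I would have liked to see. These are standard perf subcommands; the workload (gzip) is my own pick, and on many systems you need root or a relaxed perf_event_paranoid setting:

```shell
# Hardware counters for a single command: cycles, instructions,
# branch misses, and so on.
perf stat -- gzip -c /etc/services > /dev/null

# Sample where a workload spends its time, then browse the profile:
perf record -g -- gzip -c /etc/services > /dev/null
perf report      # interactive view of the recorded call graphs
```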

The next presentation was about Linux Bridge, Open vSwitch and DPDK. I knew about DPDK from Ixia: it is a userspace implementation of network drivers that optimizes packet processing by taking the kernel out of the packet path. The point of the presentation was to show performance results for Open vSwitch with DPDK compared to the normal Linux bridge and the normal (in-kernel) Open vSwitch.
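For reference, the three setups being compared look roughly like this on the command line. A sketch only: the interface names are placeholders and the DPDK port options vary between OVS versions:

```shell
# 1. Plain in-kernel Linux bridge:
ip link add br0 type bridge
ip link set eth1 master br0

# 2. Open vSwitch with the default kernel datapath:
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth1

# 3. Open vSwitch with the userspace (DPDK) datapath:
ovs-vsctl add-br ovsbr1 -- set bridge ovsbr1 datapath_type=netdev
ovs-vsctl add-port ovsbr1 dpdk0 -- set Interface dpdk0 type=dpdk
```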

After that, I caught half of a presentation about Fedora Server. I am not a Fedora user and I was surprised that they started delivering Fedora in Server and ‘Cloud’ editions, and apparently I am not the only one. In the Red Hat world you have RHEL Server, the enterprise version, and CentOS, the community-backed rebuild of RHEL; now there is Fedora Server, and nobody really knows what it is going to be. In a server environment you want stability and support: both RHEL and CentOS are supported for many years, while Fedora is supported for about a year. So the idea of Fedora as a cutting-edge but unstable server distribution is strange. They want to keep it as the proving grounds for what RHEL, and then CentOS, will become in a few years.

After two and a half days of ‘containers’, the presentation attendees were really waiting for was the one about super privileged containers. After all the talks about how containers and atomic hosts are good for security and ease of deployment, someone had to focus on the downsides, like the fact that you may want to install something on the host, but you can’t, because from inside a container you don’t have access to it. Enter super privileged containers, which are more than containers but less than hosts. The presentation explained the concept and the current, rather unstable, implementation.
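The commonly shown incantation for a super privileged container gives the container the host’s namespaces and the real root filesystem. A hedged sketch (the image name is illustrative, and the exact set of flags depends on the tooling):

```shell
# Drop the usual isolation: share the host's network, PID and IPC
# namespaces, keep all capabilities, and bind the real root at /host.
docker run -it --privileged --net=host --pid=host --ipc=host \
    -v /:/host fedora chroot /host /bin/sh

# Inside, /host is the host's real filesystem, so you can inspect or
# install tools on an otherwise read-only atomic host.
```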

My last presentations were storage related. I went to a presentation about Ceph, a storage technology I learned about while attending LinuxCon Europe 2013 (actually, the presenter was someone I had talked to at LinuxCon). The company behind Ceph was acquired by Red Hat, and now they are trying to integrate it into the Red Hat Storage solutions. They gave an update on their work on CephFS, a POSIX-compatible file system that works on top of object storage. The architecture is interesting and could prove important for large clusters (aka ‘big data’).
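The POSIX part is literal: a CephFS tree mounts like any other file system. A sketch using the kernel client, where the monitor address, user name and keyring path are placeholders:

```shell
# Mount a CephFS file system via a Ceph monitor.
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

df -h /mnt/cephfs   # from here on it behaves as ordinary POSIX storage
```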

And the last presentation was about lvm2 and the new features available for Logical Volume Management. The important news was cache management for logical volumes, along with some features inspired by mdadm.
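The cache feature lets a small fast device (say, an SSD partition) front a big slow logical volume. A sketch of the workflow; the volume group and LV names, sizes and device are placeholders of mine:

```shell
# Create a cache pool on the fast device within the same volume group:
lvcreate --type cache-pool -L 10G -n lv_cache vg0 /dev/sdb1

# Attach the pool as a cache in front of an existing slow LV:
lvconvert --type cache --cachepool vg0/lv_cache vg0/lv_data

lvs -a vg0    # the cache's internal volumes show up as hidden LVs
```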

To keep this article short, I will leave the conclusions for part 3. I will link videos and slides to the presentations worth watching.

