Kubernetes will take over the world. When and how?


Ahead of DevOpsConf, Vitaly Khabarov interviewed Dmitry Stolyarov (distol), technical director and co-founder of Flant. Vitaly asked Dmitry what Flant does, about Kubernetes, ecosystem development, and support. They discussed why Kubernetes is needed and whether it is needed at all, as well as microservices, Amazon AWS, the "I'm feeling lucky" approach to DevOps, the future of Kubernetes itself - why, when, and how it will take over the world - the prospects of DevOps, and what engineers should prepare for in a bright and near future of simplification and neural networks.

The original interview is available as a podcast: listen to DevOps Deflope, the Russian-language podcast about DevOps. The text version is below.



Here and throughout, the questions are asked by Vitaly Khabarov, an engineer at Express42.

About Flant


- Dima, hello. You are the technical director of Flant and also its co-founder. Please tell us what the company does and what your role in it is.

Dmitry: From the outside it looks like we are the guys who go around installing Kubernetes for everyone and then doing something with it. But that's not the case. We started as a company that deals with Linux, but for a very long time now our main activity has been production and turnkey highload projects. Usually we build the entire infrastructure from scratch and are then responsible for it for a long, long time. So the main work Flant performs, and what it is paid for, is taking responsibility and delivering turnkey production.


As technical director and one of the founders of the company, I work around the clock on figuring out how to increase the availability of production, simplify its operation, make administrators' lives easier, and make developers' lives more pleasant.

About Kubernetes


- Lately I have seen a lot of talks and articles about Kubernetes. How did you come to it?

Dmitry: I have already talked about this many times, but I don't mind repeating it at all. I believe it is right to keep repeating this topic, because there is confusion between cause and effect.

We really needed a tool. We faced a heap of problems, struggled, and overcame them with various crutches, and we felt the need for a tool. We went through many different options, built our own contraptions, and accumulated experience. Gradually we got to the point of using Docker almost as soon as it appeared, around 2013. By the time it appeared, we already had a lot of experience with containers; we had already written our own analogue of Docker, a set of crutches in Python. With the advent of Docker it became possible to throw out the crutches and use a reliable, community-supported solution.

With Kubernetes the story is similar. By the time it began to gain momentum - for us that was version 1.2 - we already had a bunch of crutches in both shell and Chef with which we somehow tried to orchestrate Docker. We looked seriously at Rancher and various other solutions, but then Kubernetes appeared, in which everything was implemented exactly as we would have done it, or even better. There is nothing to complain about.

Yes, it has flaws - it has a lot of flaws, and 1.2 was frankly horrible - but... Kubernetes is like a building under construction: you look at the blueprints and understand that it will be cool. If the building currently has only a foundation and two floors, you understand it is better not to move in yet; with software, though, there are no such problems.

There was no moment when we weighed whether to use Kubernetes or not. We were waiting for it long before it appeared, and in the meantime tried to cobble together analogues ourselves.

Around Kubernetes


- Do you participate directly in the development of Kubernetes itself?

Dmitry: Only indirectly. Rather, we participate in the development of the ecosystem. We send a certain number of pull requests: to Prometheus, to various operators, to Helm - to the ecosystem. Unfortunately, I am not able to keep track of everything we do, so I may be mistaken, but there is not a single pull request from us in the core.

- At the same time, do you develop a lot of your own tools around Kubernetes?

Dmitry: The strategy is this: we go and send pull requests for everything that already exists. If the pull requests are not accepted, we simply fork it and live on our own builds until they are accepted. Then, when the change reaches upstream, we go back to the upstream version.

For example, we have the Prometheus operator, with which we have switched back and forth between upstream and our own build about 5 times already. We need some feature, we have sent a pull request, but we need to roll it out tomorrow and do not want to wait until it is released upstream. Accordingly, we build it ourselves and roll our build, with the features we need, onto all our clusters. Then upstream tells us, for example: "Guys, let's do this for a more general case," we, or someone else, finish it up, and eventually it is merged back in.
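The mechanics of that switching back and forth are ordinary git; a rough sketch, with hypothetical URLs and branch names:

    git clone https://github.com/example/prometheus-operator && cd prometheus-operator
    git remote add fork https://github.com/our-org/prometheus-operator
    git checkout -b needed-feature        # the same patch that was sent upstream as a PR
    git push fork needed-feature          # ship builds from this branch to the clusters
    # ...once upstream merges the (possibly generalized) change:
    git checkout master && git pull origin master   # return to the upstream version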

Everything that already exists, we try to develop. Many elements that do not exist yet, have not been invented yet, or were invented but not yet implemented - those we make ourselves. And not because we enjoy the process itself, or reinventing wheels as an industry, but simply because we need the tool. We are often asked why we made this or that thing. The answer is simple: because we had to move further, solve some practical problem, and we solved it with this tool.

The path is always the same: we search very carefully and, if we do not find any solution for how to make a trolleybus out of a loaf of bread, we make our own loaf and our own trolleybus.

Flant's Tools


- I know that Flant now has the addon-operator, shell-operator, and dapp/werf tools. As I understand it, these are the same instrument in different incarnations. I also understand that there are many more different tools inside Flant. Is that so?

Dmitry: We have a lot more on GitHub. From what I can recall right now, we have statusmap - a panel for Grafana that has caught on widely. It is mentioned in almost every second article about monitoring Kubernetes on Medium. It is impossible to briefly explain what statusmap is - that needs a separate article - but it is a very useful thing for monitoring status over time, since in Kubernetes we often need to show status over time. We also have loghouse - a thing based on ClickHouse and black magic for collecting logs in Kubernetes.

Many utilities! And there will be even more, because a number of internal solutions will be released this year. Among the very large, addon-based ones there is a bunch of addons for Kubernetes, along the lines of how to install cert-manager correctly - a tool for managing certificates - or how to install Prometheus correctly with a pile of attachments: about twenty different binaries that export data and collect things, plus beautiful dashboards and alerts for that Prometheus. All of this is just a set of addons that are installed into a Kubernetes cluster, and it turns from simple into cool, sophisticated, automatic - a cluster in which many issues have already been resolved. Yes, we do a lot.
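For a sense of what that first mile looks like without such bundles, the Helm 2-era commands below install roughly the same building blocks from the then-current stable chart repository (chart names are from that repo; Flant's own addon bundles are internal, so this is only an approximation):

    helm install stable/cert-manager --name cert-manager --namespace kube-system
    helm install stable/prometheus-operator --name monitoring --namespace monitoring
    kubectl get pods -n monitoring   # the pile of exporters and components comes up with it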

Ecosystem Development


- I think this is a very big contribution to the development of this tool and its methods of use. Can you think of anyone else who makes a comparable contribution to the development of the ecosystem?

Dmitry: In Russia, among the companies operating in our market, no one comes close. Of course, that is a bold statement, because there are large players like Mail.ru and Yandex - they also do things with Kubernetes - but even they have not come close to the contribution of companies around the world that do much more than we do. It is hard to compare Flant, with a staff of 80 people, and Red Hat, which has 300 engineers for Kubernetes alone, if I am not mistaken. It is hard to compare. Our R&D department is 6 people, including me, and they saw away at all our tools. 6 people against 300 Red Hat engineers - it is somehow hard to compare.

- Still, when even these 6 people can make something genuinely useful and transferable, when they face a practical problem and give the solution to the community, it is an interesting case. I understand that in big technology companies, which have their own Kubernetes development and support teams, the same kinds of tools could in principle be developed. For them, this is an example of what can be developed and given to the community, giving impetus to the whole community that uses Kubernetes.

Dmitry: I suppose that is the distinctive trait of an integrator. We have many projects and we see many different situations. For us, the main way to create added value is to analyze these cases, find what they have in common, and make it as cheap as possible for ourselves. We are actively engaged in this. It is hard for me to speak for Russia and the world, but we have about 40 DevOps engineers in the company working with Kubernetes. I do not think there are many companies in Russia with a comparable number of specialists who understand Kubernetes, if they exist at all.

I understand all the caveats around the job title "DevOps engineer" - everyone understands them, and we are used to calling DevOps engineers DevOps engineers, so let's not debate it. All these 40 great DevOps engineers face problems every day and solve them; we simply analyze this experience and try to generalize it. We understand that if it stays inside the company, then in a year or two the tool is useless, because somewhere in the community a ready-made tool will appear. There is no point accumulating this experience internally - it is just pouring time and effort into dev/null. And we do not mind at all. We gladly publish everything and understand that it has to be published, developed, promoted, and evangelized so that people use it and add their experience - then everything grows and lives. Then, two years later, the tool does not end up in the trash. It is no pity to keep pouring effort in, because it is clear that someone uses your tool, and after two years everyone uses it.

This is part of our big strategy with dapp/werf. I do not remember when we started it; it seems about 3 years ago. Initially it was all in shell. It was a super proof of concept; we solved some of our particular tasks, and it worked! But shell has a problem: it is impossible to keep building on it; programming in shell at that scale is asking for trouble. We were used to writing in Ruby, so we rewrote it in Ruby and developed, developed, developed it, and then ran into the fact that the community - the crowd - does not say "we want it" or "we don't": it turns its nose up at Ruby, funny as that is. We realized we should write the whole thing in Go, simply to match the first item on the checklist: a DevOps tool should be a static binary. Go or not Go is not that important, but a static binary, preferably written in Go, is better.

We spent the effort, rewrote dapp in Go, and called it werf. dapp is no longer supported or developed; it works in some final version, but there is a complete upgrade path to the new tool, and you can follow it.

Why was dapp created?


- Can you briefly explain why dapp was created and what problems it solves?

Dmitry: The first reason is builds. Initially we had serious build problems, back when Docker could not do multi-stage builds, so we implemented multi-stage on our own. Then we had a heap of questions around cleanup. Anyone who does CI/CD runs, sooner rather than later, into the problem that there is a pile of built images and you need to somehow clean out what is not needed and keep what is.
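To make the cleanup chore concrete, here is the kind of housekeeping a CI/CD pipeline otherwise ends up scripting by hand (registry and tag scheme are hypothetical, and this is not how dapp/werf does it - just an illustration of the problem):

    docker image prune -f                                    # drop dangling build layers
    docker images --format '{{.Tag}}' registry.example.com/app \
      | sort -r | tail -n +11 \
      | xargs -r -I{} docker rmi registry.example.com/app:{}   # keep only the 10 newest tags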

The second reason is deployment. Yes, there is Helm, but it solves only part of the problems. Funny as it is, it is written that "Helm is the Package Manager for Kubernetes." Exactly that: "the". There are also the words "Package Manager" - and what is the usual expectation of a package manager? We say: "Package manager, install the package!" And we expect it to answer: "The package is installed."

The interesting part is that we say, "Helm, install the package," and when it replies that it has installed it, it turns out it has only just started the installation - it pointed Kubernetes at the thing: "Launch this!" Whether it launched or not, whether it works or not, Helm does not address that question at all.

It turns out that Helm is just a text preprocessor that loads data into Kubernetes.

But we, within the framework of any deployment, want to know whether the application rolled out to production or not. Rolled out to production means the application got there, the new version was deployed, and at the very least it does not crash there and responds correctly. Helm does not solve this problem. To solve it, you have to spend a lot of effort, because you need to tell Kubernetes to roll out and then watch what happens - whether it deployed, whether it rolled out. And then there is also a whole pile of tasks connected with deployment, with cleanup, and with builds.
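A minimal sketch of the gap being described, with hypothetical release and label names: Helm (in its default, Helm 2-era behavior) returns once the manifests are submitted, so verifying the rollout is a separate step you bolt on yourself:

    helm upgrade --install myapp ./chart                     # returns when manifests are accepted
    kubectl rollout status deployment/myapp --timeout=120s   # actually wait for the new version
    kubectl get pods -l app=myapp                            # and check the pods are alive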

Plans


This year we will also move toward local development. We want to arrive at what Vagrant once gave: you typed "vagrant up" and a virtual machine spun up. We want to reach a state where there is a project in Git, we type "werf up", and it brings up a local copy of the project, deployed in a local mini-Kubernetes, with all the directories convenient for development mounted in. Depending on the development language this is done differently, but in any case so that local development over mounted files is comfortable.

The next step for us is to invest heavily in developer convenience. So that a project can be deployed locally with one tool, developed, pushed to Git, and rolled out to staging or to tests, depending on the pipelines, and then rolled out to production with the same tool. This unity, unification, and reproducibility of the infrastructure, from the local environment all the way to production, is very important to us. But this is not in werf yet - we are only planning to do it.

But the path to dapp/werf has always been the same as with Kubernetes at the beginning. We ran into problems and solved them with workarounds - we cooked up some solutions for ourselves in shell, in whatever was at hand. Then we tried to straighten these workarounds out, generalize them, and consolidate them into binaries, in this case, which we simply share.

There is another way to look at this whole story, through an analogy.

Kubernetes is a car frame with an engine. There are no doors, no glass, no radio, no little Christmas-tree air freshener - nothing at all. Only the frame and the engine. And there is Helm - that is the steering wheel. Cool, there is a steering wheel, but you also need a steering pin, a steering rack, a gearbox, and wheels, and you cannot do without them.

In the case of werf, this is yet another component for Kubernetes. Only now, in our alpha version of werf, for example, Helm is compiled right into werf, because we got tired of doing that part ourselves. There are many reasons to do so; I will tell in detail why we compiled Helm entirely, together with Tiller, inside werf in my talk at RIT++.

Now werf is a more integrated component. We get a ready steering wheel and steering pin - I am not good with cars, but it is a big block that solves a fairly large range of tasks. We do not have to dig through the catalog ourselves, picking one part to match another and figuring out how to bolt them together. We get a ready combine that solves a big pack of tasks at once. But inside, it is built from the same open source components: it uses Docker for building, Helm for part of the functionality, and several other libraries. It is an integrated tool for getting cool CI/CD out of the box quickly and conveniently.

Is it difficult to maintain Kubernetes?


- You talk about your experience: that for you Kubernetes is a frame with an engine, onto which you can hang many different things - body, steering wheel, pedals, seats. The question is: how hard does supporting Kubernetes turn out to be for you? You have a lot of experience - how much time and resources does supporting Kubernetes take, apart from everything else?

Dmitry: That is a very difficult question, and to answer it we need to understand what support is and what we want from Kubernetes. Maybe you can expand on it?

- As far as I know and as I see it, many teams now want to try Kubernetes. Everyone harnesses themselves to it, slaps it together on their knee. I have a feeling that people do not always understand the complexity of this system.

Dmitry: Exactly.

- How difficult is it to take Kubernetes and stand it up from nothing so that it is production ready?

Dmitry: What do you think - how hard is it to transplant a heart? I understand it is a loaded question. Wielding a scalpel without making a mistake is not that difficult. If you are told where to cut and where to stitch, the procedure itself is simple. What is difficult is guaranteeing, time after time, that it will all work out.

Installing Kubernetes and making it run is simple: click! - installed; there are plenty of installation methods. But what happens when problems arise?

There are always questions - what have we not taken into account yet? What have we not done yet? Which Linux kernel parameters were specified incorrectly? Lord, did we even specify them?! Which Kubernetes components have we installed and which have we not? Thousands of questions arise, and to answer them you need to have stewed in this industry for 15-20 years.
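For flavor, a few of the node-level checks hiding behind "did we even specify them?!" (an illustrative sample, nowhere near exhaustive):

    sysctl net.ipv4.ip_forward                  # must be 1, or pod traffic is not routed
    sysctl net.bridge.bridge-nf-call-iptables   # must be 1 for bridge-based CNI plugins
    sysctl fs.inotify.max_user_watches          # too low, and log tailing quietly breaks
    lsmod | grep br_netfilter                   # the bridge sysctl only exists with this module loaded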

I have a fresh example on this topic that conveys the meaning of the question "is it hard to support Kubernetes?". Some time ago we seriously considered whether we should try introducing Cilium as the network in Kubernetes.

Let me explain what Cilium is. Kubernetes has many different implementations of the network subsystem, and one of them is very cool - Cilium. What is its essence? Some time ago it became possible to write hooks for the kernel (eBPF) that intervene in the network subsystem and various other subsystems and let you bypass large chunks of the kernel.

The Linux kernel historically has ip route, netfilter, bridges, and many other old components that are 15, 20, 30 years old. In general they work, everything is cool, but containers have now been piled on top, and it looks like a tower of 15 bricks stacked on top of one another with you standing on top of it on one leg - a strange feeling. This system developed historically, with many nuances, like an appendix in the body. In some situations there are performance problems, for example.

There is the wonderful eBPF and the ability to write kernel hooks - so the guys wrote their own kernel hooks. A packet arrives at the Linux kernel, they pull it out right at the entrance, process it themselves as needed, without bridges, without TCP, without the IP stack - in short, bypassing everything written into the Linux kernel - and then spit it out into the container.
What do we get? Very cool performance, cool features - simply great! But then we look at it and see that on each machine there is a program that connects to the Kubernetes API and, based on the data it receives from that API, generates C code and compiles binaries, which it loads into the kernel so that the hooks work in kernel space.
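Even just looking at what such a system has loaded requires new tooling, and the thirty-year-old diagnostic reflexes do not apply (bpftool ships with recent kernels):

    bpftool prog show   # eBPF programs currently loaded into the kernel
    bpftool net show    # programs attached to network interfaces
    ip route            # versus the old stack everyone already knows how to poke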

What happens if something goes wrong? We do not know. To understand it, you need to read all of that code, understand all the logic, and it is striking how difficult that is. But, on the other hand, there are those bridges, netfilter, and ip route - I have not read their sources, and neither have the 40 engineers who work at our company. Maybe a few people understand some pieces.

And what is the difference? It would seem: there is ip route and the Linux kernel, and there is the new tool - and we understand neither one nor the other. Yet we are afraid to use the new one - why? Because if a tool is 30 years old, then over 30 years all the bugs have been found, all the rakes have been stepped on, and you do not need to know everything - it works like a black box and always works. Everyone knows which diagnostic screwdriver to stick in where, which tcpdump to run at which moment. Everyone knows the diagnostic tools well and understands how this set of components works in the Linux kernel - not how it is built inside, but how to use it.

And the awesomely cool Cilium is not 30 years old; it has not matured yet. Kubernetes has exactly the same problem, a copy of it. Cilium installs beautifully, Kubernetes installs beautifully - but when something goes wrong in production, can you, in a critical situation, quickly figure out what went wrong?

So when we ask whether it is hard to maintain Kubernetes: no, it is very simple, and at the same time, yes, it is incredibly difficult. Kubernetes works great on its own, but with a billion nuances.

About the “I'm feeling lucky” approach


- Are there companies where these nuances are almost guaranteed to show up? Suppose Yandex suddenly moves all of its services to Kubernetes wholesale; the load there will be considerable.

Dmitry: No, this is not a conversation about load but about the simplest things. For example, we have Kubernetes and we have deployed an application there. How do we know it is working? There is simply no ready-made tool to understand that the application is not crashing. There is no ready-made system that sends alerts - you have to configure those alerts and every dashboard yourself. And meanwhile we are updating Kubernetes.
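In the absence of such a ready-made answer, "is my application healthy" at a minimum means wiring up Prometheus alerts or scripting checks by hand along these lines (label and names hypothetical):

    kubectl get pods -l app=myapp -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].restartCount}{"\n"}{end}'
    # a growing restart count is often the only visible sign that the app keeps dying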

Take Ubuntu 16.04. You could say it is an old version, but we are still on it because it is LTS. There is systemd, whose nuance is that it does not clean up cgroups. Kubernetes launches pods, creates cgroups, then deletes the pods, and somehow it turns out - I do not remember the details, sorry - that systemd slices remain. Over time this leads to any machine slowing down badly. This is not even a question of highload. If pods are started constantly, for example by a CronJob that continually creates them, then a machine with Ubuntu 16.04 starts to slow down within a week. There will be a constantly high load average because of the heap of created cgroups. This is a problem faced by anyone who simply installs Ubuntu 16.04 with Kubernetes on top.

Suppose systemd gets updated somehow, but in Linux kernels before 4.16 it is even funnier: when you delete cgroups, they leak in the kernel and are not actually deleted. Therefore, after a month of work on such a machine, it becomes impossible to look at the memory statistics for the pods. We fetch a single stats file, and reading that one file takes 15 seconds, because the kernel spends a very long time counting over the million cgroups inside itself that seem to be deleted but are not - they leak.
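A quick way to see whether a node is drowning in leaked cgroups (the numbers are illustrative; a healthy node has hundreds, not a million):

    cat /proc/cgroups                            # per-controller totals, including num_cgroups
    find /sys/fs/cgroup/memory -type d | wc -l   # directories visible in the memory hierarchy
    time cat /sys/fs/cgroup/memory/memory.stat   # on an affected node even this one read is slow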

There are plenty of such little things here and there. This is not about giant companies sometimes running into very heavy loads - no, it is a matter of everyday things. People can live like this for months: they set up Kubernetes, deployed the application, and it seems to work. For many, that passes as normal. They will not even find out when that application crashes one day; no alert will come, but to them this is the norm. We used to live on virtual servers without monitoring; now we have moved to Kubernetes, also without monitoring - so what is the difference?

The point is that when we walk on ice, we never know its thickness unless we measure it in advance. Many walk and do not worry, because they have walked there before.

From my point of view, the nuance and the complexity of operating any system is making sure that the thickness of the ice is exactly enough to solve our problems. That is what this is about.

In IT, it seems to me, there are far too many "I'm feeling lucky" approaches. Many people install software and use software libraries in the hope that they will be lucky. In general, many simply are lucky. That is probably why it works.

- My pessimistic assessment is that it looks like this: when the risks are high and the application must work, support is needed - from Flant, perhaps from Red Hat, or an internal team dedicated to Kubernetes and ready to carry it.

Dmitry: Objectively, that is so. Getting into Kubernetes on your own with a small team involves a number of risks.

Do we need containers?


- Can you tell us how widespread Kubernetes is in Russia?

Dmitry: I do not have this data, and I am not sure anyone does. We say "Kubernetes, Kubernetes," but there is another way to look at this question. I do not know how widespread containers are either, but I do know a figure from reports on the Internet: 70% of containers are orchestrated by Kubernetes. It was a reliable source for a fairly large worldwide sample.

Then there is another question: do we need containers at all? My personal feeling, and the position of Flant as a whole, is that Kubernetes is the de facto standard.

There will be nothing but Kubernetes.

It is an absolute game-changer in infrastructure management. Just absolute - that is it, no more Ansible, Chef, virtual machines, Terraform. I am not even talking about the old collective-farm methods. Kubernetes is an absolute game-changer, and now it will be the only way.

Clearly, some will need a couple of years, and others a couple of decades, to realize this. I have no doubt that there will be nothing but Kubernetes and this new outlook: we no longer tinker with the operating system but use infrastructure as code - only not with code but with yml, a declaratively described infrastructure. I have a feeling it will always be like this.
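What "not code, but yml" looks like at its smallest - a standard declarative Kubernetes object, nothing Flant-specific: you state the desired result, and the cluster converges to it:

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: demo
            image: nginx:1.15
    EOF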

- So companies that have not yet switched to Kubernetes will either definitely switch to it or sink into oblivion. Do I understand you correctly?

Dmitry: That is not entirely true either. For example, if our task is to run a DNS server, it can run on FreeBSD 4.10 and work fine for 20 years. Just work, and that is all. Maybe once in 20 years something will need updating. If we are talking about software in the mode of "we launched it and it really runs for years without any updates, without changes," then of course there will be no Kubernetes there. It is not needed there.

Everything related to CI/CD - wherever you need Continuous Delivery, where you need to update versions and push changes actively, wherever you need to build fault tolerance - only Kubernetes.

About microservices


- Here I feel a bit of dissonance. To work with Kubernetes you need external or internal support - that is the first point. Second: when we are just starting development, we are a small startup, we do not have anything yet, and developing for Kubernetes, or for a microservice architecture in general, can be difficult and is not always economically justified. I am interested in your opinion - do startups need to start writing for Kubernetes from scratch right away, or can they still write a monolith and only then come to Kubernetes?

Dmitry: Cool question. I have a talk about microservices, "Microservices: size does matter." Many times I have run into people trying to hammer nails with a microscope. The approach itself is correct; we design our internal software this way ourselves. But when you do this, you need to understand clearly what you are doing. The word I hate most in "microservices" is "micro." Historically that word arose there, and for some reason people think micro means very small, less than a millimeter, like a micrometer. It does not.

For example, there is a monolith written by 300 people, and everyone who participated in its development understands that there are problems and that it should be broken into micro-pieces - about 10 pieces, each of which is written by 30 people at a minimum. That is important, necessary, and cool. But when a startup comes to us where 3 very cool and talented guys have written 60 microservices on their knee, every time I reach for the Corvalol.

It seems to me this has been said thousands of times already - people get a distributed monolith in one incarnation or another. It is not economically justified, and it is very difficult overall, in everything. I have simply seen it so often that it genuinely hurts, so I keep talking about it.

To the initial question - there is a conflict between, on the one hand, Kubernetes being scary to use, because it is unclear what might break there or fail to start working, and, on the other hand, it being clear that everything is heading there and nothing but Kubernetes will remain. The answer is to weigh the volume of benefit that comes in, the volume of tasks you can solve. That is one side of the scales. On the other side are the risks associated with downtime or with degraded response time, with the level of availability, with degraded performance indicators.

Here it is: either we move fast, and Kubernetes lets us do many things much faster and better, or we use reliable, time-tested solutions but move much more slowly. Every company has to make this choice. You can think of it as a trail in the jungle: the first time you walk it, you may meet a snake, a tiger, or a mad badger; after you have walked it 10 times, you have trodden the path, cleared the branches, and walk easily. With each pass the path gets wider. Then it is an asphalt road, and later a beautiful boulevard.

Kubernetes does not stand still. The same question again: Kubernetes is, on the one hand, 4-5 binaries; on the other, it is the whole ecosystem. It is the operating system we have on our machines. What is that? Ubuntu or CentOS? It is the Linux kernel plus a bunch of additional components. All these things: here a poisonous snake was thrown off the road, there a fence was put up. Kubernetes develops very quickly and dynamically, and the volume of risk, the volume of the unexplored, decreases with every month, and accordingly those scales rebalance.

Answering the question of what a startup should do, I would say: come to Flant, pay 150 thousand rubles, and get turnkey DevOps as an easy service. If you are a small startup with a few developers, this works. Instead of hiring your own DevOps engineer, who will have to learn to solve your problems while you pay a salary all that time, you get all the issues solved turnkey. Yes, there are downsides. As an outsourcer, we cannot be as involved or react to changes as quickly. But we have a lot of expertise and ready-made practices. We guarantee that in any situation we will quickly figure things out and pull any Kubernetes back from the dead.

I strongly recommend outsourcing for startups, and for established businesses too, up to the size at which you can dedicate a team of 10 people to operations - before that, there is no point. It categorically makes sense to outsource.

About Amazon and Google


- Can hosted solutions from Amazon or Google be considered outsourcing?

Dmitry: Yes, of course - that solves a number of questions. But again, there are nuances. You still need to understand how to use it. For example, there are a thousand little things in the work of Amazon AWS: the Load Balancer has to be warmed up, or a request written in advance saying "guys, we are about to get traffic, warm up the Load Balancer for us!" You need to know these nuances.

When you go to people who specialize in this, you get almost all the typical things covered. We now have 40 engineers, and by the end of the year there will probably be 60 - and we have definitely encountered all these things. Even if we run into some problem again on another project, we quickly ask each other and know how to solve it.

Probably the answer is: of course, the hosted story makes part of it easier. The question is whether you are ready to trust these hosters, and whether they will solve your problems. Amazon and Google have proven themselves well. For all our cases, for sure. We have no other positive experiences. All the other clouds we have tried to work with create a heap of problems - Azure, and everything there is in Russia, and all sorts of OpenStack in different implementations: Headster, Overage - whatever you like. They all create problems that you do not want to solve.

So the answer is yes, but in reality there are not very many mature hosted solutions.

Who needs Kubernetes?


- And yet, who needs Kubernetes? Who should move to Kubernetes? Who is the typical Flant customer who comes for Kubernetes?

Dmitry: An interesting question, because right now, on the Kubernetes wave, many people come to us saying: "Guys, we know you do Kubernetes - do it for us!" We answer them: "Gentlemen, we do not do Kubernetes, we do production and everything connected with it." Because making a product without doing the whole CI/CD and the whole story around it is currently simply impossible. Everyone has moved away from the division where development is development and then operations is operations.

Our clients expect different things, but everyone is waiting for some kind of miracle: they have certain problems, and now - hop! - Kubernetes will solve them. People believe in miracles. Rationally they understand there will be no miracle, but in their hearts they hope: what if this Kubernetes now solves everything for us - there is so much talk about it! Suddenly - achoo! - a silver bullet; achoo! - and we have 100% uptime, all developers can release to production 50 times a day, and nothing falls over. In short, a miracle!

When such people come to us, we say: "Sorry, but there is no miracle." To be healthy, you need to eat well and exercise. To have a reliable product, it has to be built reliably. To have convenient CI/CD, you have to build it. It is a lot of work that has to be done.

Answering the question of who needs Kubernetes: nobody needs Kubernetes.

Some people have the mistaken feeling that they need Kubernetes. What people need, what they have a deep need for, is to stop thinking about, dealing with, and being interested in all the problems of infrastructure and of running their applications. They want applications to simply work and simply deploy. For them, Kubernetes is the hope of no longer hearing the story that "we were down," or "we cannot roll out," or anything else like it.

Usually it is the technical director who comes to us. Two things are demanded of him: on the one hand, give us features; on the other, stability. We offer to take this on ourselves and do it. The silver bullet - more precisely, the silver-plated one - is that you stop thinking about these issues and wasting time on them. You will have dedicated people who close this question.

The very wording that we, or anyone, "needs Kubernetes" is wrong.

Admins really do need Kubernetes, because it is a very interesting toy you can play with and dig into. Let us be honest: everyone loves toys. We are all children somewhere inside, and when we see a new one, we want to play with it. For some this has been knocked out of them, for example by admin work, because they have already played enough and are fed up to the point where they simply do not want to anymore. But nobody has it knocked out completely. For example, even though I have long been tired of toys in the field of system administration and DevOps, I still love toys and still buy new ones. All people, one way or another, still want some kind of toys.

But there is no need to play with production. Which is what I categorically recommend against doing, and what I now see en masse: "Ah, a new toy!" - they run off to buy it, buy it, and: "Let's take it to school right now and show all our friends." Do not do that. I apologize; I just have children growing up, I constantly see things in children, notice them in myself, and then generalize to others.

Final answer: you do not need Kubernetes. You need to solve your problems.

What you can achieve is that:

  • the product does not go down;
  • even if it tries to go down, we know about it in advance and can put something underneath;
  • we can change it as fast as the business needs, and do it conveniently, without it causing us problems.

There are two real needs: reliability and agility/flexibility of rollout. Anyone doing IT projects of any kind - no matter what the business, even software for making the world a lighter place - who understands this, needs to solve these two needs. Kubernetes, with the right approach, the right understanding, and enough experience, makes it possible to solve them.

About Serverless


- If you look a bit further into the future, then in trying to solve the same problem - no headaches with infrastructure, speed of rollout, speed of application change - new solutions appear, for example serverless. Do you sense any potential in this direction and, let us say, any danger to Kubernetes and similar solutions?

Dmitry: Here I again need to note that I am not a visionary who looks ahead and says "it will be like this!" - although I just did exactly that. I look under my feet and see a heap of problems there - for example, how transistors work in a computer. Ridiculous, right? And yet we run into bugs in CPUs.

Making serverless reasonably reliable, cheap, efficient, and convenient means solving all the issues of the ecosystem. Here I agree with Elon Musk that we need a second planet to provide fault tolerance for humanity. Although I do not know exactly what he says, I understand that I am not ready to fly to Mars myself, and it will not happen tomorrow.

With serverless, it is clearly understood that it is the ideologically correct thing, like fault tolerance for humanity: two planets are better than one. But how do you do it now? Sending one expedition is not a problem if you concentrate your efforts on it. Sending several expeditions and settling a few thousand people there is, I think, also realistic. But building fault tolerance in full, so that half of humanity lives there, seems to me impossible right now, not even worth considering.

With serverless it is one to one: the thing is cool, but it is far from the problems of 2019. Closer to 2030 - let us live to see it. I have no doubt we will live to see it, we will definitely live (repeat it before going to bed), but right now we need to solve other problems. It is like believing in the fairy-tale pony Rainbow. Yes, a couple of percent of cases are solved by it, and solved perfectly, but subjectively serverless is a rainbow... For me this topic is too distant and too incomprehensible. I am not ready to talk about it. In 2019, you cannot write a real application with serverless.

How Kubernetes will evolve


- As we move toward this potentially beautiful distant future, how do you think Kubernetes and the ecosystem around it will develop?

Dmitry: I have thought about this a lot, and I have a clear answer. First, stateful - after all, stateless is easier to do. Kubernetes initially invested more there; it all started with stateless. Stateless works almost perfectly in Kubernetes; there is simply nothing to complain about. With stateful there are still a lot of problems, or rather nuances. Everything already works fine there for us, but that is us. For it to work for everyone, at least another couple of years are needed. This is not a calculated metric, but my feeling from my own head.

In short, stateful must - and will - develop strongly, because all our applications store state; there are no truly stateless applications. That is an illusion: you always need some kind of database and something else. Stateful means straightening out everything possible, fixing all the bugs, improving on all the problems currently faced - let us call it adoption.

The level of the unknown, the level of unsolved problems, the level of the probability of running into something, will fall dramatically. That is an important story. And operators - everything related to codifying administration logic and management logic in order to get an easy service: MySQL as an easy service, RabbitMQ as an easy service, Memcached as an easy service - in general, all the components that we need to get working out of the box, guaranteed. This is exactly the pain of "we want a database, but we do not want to administer it," or "we want Kubernetes, but we do not want to administer it."

This story of the development of operators, in one form or another, will be important over the next couple of years.
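The operator pattern in one gesture (the resource kind and fields below are hypothetical, but real operators, such as the Prometheus operator mentioned earlier, work the same way): you declare the service you want, and the operator performs the administration behind it:

    kubectl apply -f - <<EOF
    apiVersion: example.com/v1
    kind: MySQL
    metadata:
      name: main-db
    spec:
      replicas: 2
      version: "5.7"
    EOF
    kubectl get mysql main-db   # pods, failover, and backups are reconciled behind this object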

I think ease of operation will increase greatly: the box will become blacker and blacker and more and more reliable, with simpler and simpler knobs.

I once listened on YouTube to an old Isaac Asimov interview from the 1980s, on Saturday Night Live - a program like Urgant's, only interesting. He was asked about the future of computers. He said the future lies in simplicity, as it was with the radio. The radio receiver was originally a complicated thing. To catch a wave, you had to twist knobs for 15 minutes, turn dials, and generally know how it all works, understand the physics of radio wave transmission. In the end, the radio was left with a single knob.

And what is a radio now, in 2019? In the car, the radio finds all the stations and their names by itself. The physics of the process has not changed in 100 years; the ease of use has. Now, and not only now - already by 1980, when that interview with Asimov aired - everyone used the radio and nobody thought about how it worked. It always worked - that is a given.

Asimov said then that it would be similar with computers: ease of use will increase. If in 1980 you needed special training to press buttons on a computer, in the future that will not be the case.

I have a feeling that with Kubernetes and with infrastructure, ease of use will also greatly increase. This, in my opinion, is obvious - it lies on the surface.

What to do with the engineers?


- What will happen to the engineers and system administrators who support Kubernetes?

Dmitry: And what happened to accountants after the appearance of 1C? About the same. Before it, they counted on paper; now they count in the program. Labor productivity has increased by orders of magnitude, and the work itself has not disappeared. If it used to take 10 engineers to screw in a light bulb, now one is enough.

The amount of software and the number of tasks, it seems to me, is growing faster than new DevOps engineers appear and efficiency increases. There is a specific shortage in the market right now, and it will last a long time. Later everything will settle into some norm, where work efficiency grows, there is more and more serverless, a neural network gets bolted onto Kubernetes that picks all the resources exactly right, and everything generally does itself what it is supposed to - just walk away and do not interfere.

But decisions will still have to be made by someone. Clearly, the level of qualification and specialization of that person will be higher. Nowadays an accounting department does not need 10 employees keeping ledgers so that their hands do not get tired. That is simply unnecessary. Many documents are automatically scanned and recognized by the electronic document management system. One smart chief accountant, with much greater skills and good understanding, is enough.

In general, that is the way in all industries. It is the same with cars: a car used to come with a mechanic and three drivers. Now driving a car is the simplest process, one we all take part in every day. Nobody thinks a car is something complicated.

DevOps or systems engineering will not go anywhere - the level of the work and its efficiency will increase.

- I have also heard an interesting thought that the amount of work will actually grow.

Dmitry: Of course, one hundred percent! Because the amount of software we write is constantly growing. The number of issues we solve with software is constantly growing. The amount of work is growing. The DevOps market is terribly overheated right now. You can see it in salary expectations. Roughly speaking, without going into details, there should be juniors who want X, middles who want 1.5X, and seniors who want 2X. But if you look at the Moscow DevOps salary market today, a junior wants anywhere from X to 3X, and a senior wants from X to 3X.

Nobody knows what it costs. The salary level is measured by your confidence - a complete madhouse, honestly; a terribly overheated market.

Of course, this situation will change very soon - some saturation has to come. Software development is not like that: although everyone needs developers, and everyone needs good developers, the market understands what they cost - the industry has settled. That is not the case with DevOps right now.

- From what I have heard, I conclude that the current system administrator should not worry too much, but it is time to build up skills and prepare for the fact that tomorrow there will be more work, but it will be more highly skilled.
Dmitry: Absolutely. In general, we live in 2019, and the rule of life is lifelong learning: we learn throughout our lives. It seems to me that everyone already knows and feels this, but knowing is not enough - you have to do it. Every day we have to change. If we do not, then sooner or later we will find ourselves on the sidelines of the profession.

Be ready for a sharp, 180-degree turn. I do not rule out situations where something changes drastically and something new is invented - it happens. Hop! - and now we act differently. It is important to be ready for that and not to fret. It may turn out that tomorrow everything I do becomes unnecessary - fine, I have studied all my life and I am ready to learn something else. It is not a problem. There is no need to fear for your place in the profession, but you do need to be ready to constantly learn something new.

Wishes and a minute of advertising


- Do you have any parting wishes?

Dmitry: Yes, I have a few wishes.

The first, a mercantile one: subscribe on YouTube. Dear readers, go to YouTube and subscribe to our channel. In about a month we will begin an active expansion onto the video service; we will have a lot of educational content about Kubernetes, open and varied: from practical things all the way to labs, to deep fundamental theoretical things, and to how to apply Kubernetes at the level of principles and patterns.

The second mercantile wish: go to GitHub and give us stars, because we feed on them. If you do not give us stars, we will have nothing to eat. It is like mana in a computer game. We do something, keep doing it, keep trying; someone says these are terrible bicycles, someone says everything is wrong entirely - and we keep going and act absolutely honestly. We see a problem, solve it, and share our experience. So give us a star: it will not be lost to you, and it will come to us, because we feed on them.

The third wish, an important one and no longer mercantile: stop believing in fairy tales. You are professionals. DevOps is a very serious and responsible profession. Stop playing around in the workplace. "Let me just click this and see" - imagine coming to a hospital where the doctor is experimenting on you. I understand this may offend someone, but most likely it is not about you, it is about someone else. Tell the others to stop too. It really spoils life for all of us - many are starting to treat operations, admins, and DevOps as the dudes who broke something again. Things are "broken" most often because we went off to play and did not look with a cold mind at whether it was this way here and that way there.

This does not mean you should not experiment. You have to experiment; we do it ourselves. To be honest, we ourselves sometimes play too - it is very bad, of course, but nothing human is alien to us. Let us declare 2019 a year of serious, thoughtful experiments, and not of games in production. Probably so.

- Thank you very much!

Dmitry: Thank you, Vitaly, for your time and for the interview. Dear readers, thank you very much if you have made it to this point. I hope we brought you at least a couple of thoughts.

In the interview Dmitry touched on werf. Today it is a universal Swiss Army knife that solves almost all tasks. But it was not always so. At DevOpsConf at RIT++, Dmitry Stolyarov will talk about this tool in detail. The talk "werf - our tool for CI/CD in Kubernetes" will cover everything: the problems and hidden nuances of Kubernetes, solutions to these difficulties, and the current implementation of werf in detail. Join us on May 27 and 28 - we will create the perfect tools.

Source text: Kubernetes will take over the world. When and how?