WEBVTT

00:00.000 --> 00:16.400
So the next session is about to start. Please take a seat and quiet down. Thank you. So we will

00:16.400 --> 00:25.000
hear about lessons after three years of cloudless Kubernetes. So hello everyone. I see this

00:25.000 --> 00:32.360
pretty packed room. So thanks a lot for attending and let's talk about Kubernetes today.

00:32.360 --> 00:39.800
So my name is Nadia. I like infrastructure. Here you have my website where you can see the

00:39.800 --> 00:45.400
information about me, and now let's jump to the fun stuff. So what are we going to talk

00:45.400 --> 00:51.960
about today? So I'm going to give a brief introduction to self-managed Kubernetes, and then I'm going

00:51.960 --> 00:57.800
to talk about some of the tools that I use that make life easy for me. I hope that this gives you

00:57.800 --> 01:02.280
some pointers in case you want to attempt this. So let's start with a show of hands. We're going to

01:02.280 --> 01:08.120
do one. So please bear with me: please raise your hand and keep it up if you interact with Kubernetes

01:08.120 --> 01:16.280
regularly; use your own definition of regularly. Okay, a good solid 90%. Please keep it up. So lower it, or rather

01:16.360 --> 01:24.120
keep it up still, if you also manage your own control plane. Okay, still pretty good. Good.

01:24.120 --> 01:30.040
So I'm going to still go through the first section, but I'll try to make it speedy. You can lower

01:30.040 --> 01:37.640
all your hands now. Don't get tired. Okay. So let's start with a very quick introduction to self-managed

01:37.640 --> 01:45.000
Kubernetes. And to do that, let's also talk about what I mean when I say I run Kubernetes outside

01:45.000 --> 01:51.080
of the cloud. So outside of the cloud, by my own definition (yours may vary), is when you run Kubernetes

01:51.080 --> 01:56.920
on your own hardware. So let's say you need to care about the hard drives that are in there.

01:56.920 --> 02:01.720
And you need to care about the memory that you have in there. When you manage your own control plane,

02:01.720 --> 02:08.520
that one comes with it, that was a pretty obvious one. And you do not rely, or try not to rely,

02:08.520 --> 02:13.720
on external services provided by others as much as possible. For example, DNS, that's a hint

02:13.720 --> 02:20.760
about the demo we're going to make. So what's a Kubernetes node? Most of you will know this already

02:20.760 --> 02:28.440
because you raised your hand, right? So let's go over it real quick. So a Kubernetes node, and

02:28.440 --> 02:34.200
by that I mean a Kubernetes node, is just a kubelet process that is properly configured and running.

02:34.360 --> 02:38.280
And that's all there is to it. The kubelet will do stuff. It will connect to a Kubernetes

02:38.280 --> 02:42.840
API, it will find some pods that are tagged with its node name, and it will run them.

02:43.400 --> 02:49.240
Now, the interesting thing is that the kubelet binary runs on a typical Linux system. It has some

02:49.240 --> 02:55.000
configuration, you know, flags, files and whatnot. It depends on other binaries, for example

02:55.000 --> 02:59.800
your container runtime, or your CNI. And it should typically be started at

02:59.800 --> 03:07.000
boot, right? So this looks very familiar. This sounds very familiar. This looks like just a Linux

03:07.800 --> 03:15.480
daemon, right? A Linux service. So in the end, what's a Kubernetes node but just a Linux box with

03:15.480 --> 03:20.360
something running on it. So let's talk about control planes now. So what's a control plane?

03:21.320 --> 03:25.800
So very roughly speaking most of you will know this already. It's a series of services that make

03:25.800 --> 03:33.640
Kubernetes work. And we have a bunch of them, but roughly we have an API, a REST API.

03:33.640 --> 03:39.320
We have a database, typically etcd. And we also have a bunch of clients that do stuff with the

03:39.320 --> 03:47.000
API, right? And these services, this API, this database, these other clients, sometimes run inside of

03:47.080 --> 03:55.960
Kubernetes itself. And that's pretty fun. And these services are obviously very critical, right?

03:55.960 --> 04:03.240
So we dedicate a whole machine most of the time, sometimes many, to run these services. So when we talk

04:03.240 --> 04:09.720
about a control plane node, it is just a node that happens to run these services on it. So there is

04:09.720 --> 04:16.040
not a lot of mysticism to it. And I promise this is the last sysadmin bit; from now on, everything on

04:16.120 --> 04:22.360
the slides is going to be YAML. You will feel right at home. So how does a control plane look

04:22.360 --> 04:27.160
in a cluster? You can see the pods in there. I'm not going to go over them; you can see the

04:27.160 --> 04:33.160
CNI, you can see etcd. So you may be wondering: if the control plane lives inside Kubernetes

04:33.160 --> 04:38.200
most of the time, and the control plane is the one that typically creates the pods, because the

04:38.280 --> 04:43.640
kubelet needs to connect to the control plane, to the API: who starts the control plane, right?

04:43.640 --> 04:51.880
That might be a very interesting question. So the answer is static manifests. So the kubelet

04:51.880 --> 04:59.400
has a very specific folder on disk, where it has some YAML files, and it will basically

04:59.400 --> 05:04.920
blindly start these pods when the kubelet starts, without knowing anything, without

05:04.920 --> 05:09.480
actually even trying to connect to the API; it will just read the files and create them.

05:09.480 --> 05:14.360
There are some nuances, there are some limitations, and there are quite a lot of limitations,

05:14.360 --> 05:20.360
in what these static manifests can do, but it is enough to bring up the database, the API and whatnot.
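A sketch of what such a static manifest can look like, assuming a kubeadm-style layout where the kubelet watches /etc/kubernetes/manifests (the image tag and flags below are illustrative, not a complete configuration):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative)
# The kubelet creates this pod blindly at startup, without asking any API.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true  # control plane pods use the node's network directly
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.30.0
      command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379
        # a real setup carries many more flags (certificates, CIDRs, ...)
```

This is how the chicken-and-egg problem is broken: the pod that serves the API is not created through the API.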

05:22.120 --> 05:29.720
So what is Kubernetes really? So it's a set of Linux boxes that you will need to manage, right?

05:29.720 --> 05:35.960
So you will typically need to run Linux, and, if you want to do this, you will probably

05:35.960 --> 05:39.960
need to pick Linux distribution that you are comfortable running, that you are comfortable

05:39.960 --> 05:44.520
troubleshooting, you will encounter fun problems, you will need to install packages, you will need to

05:44.520 --> 05:51.160
upgrade those packages. So my advice on this is: just pick one, pick a distribution that you

05:51.160 --> 05:58.600
are comfortable with. Cool. Enough of the introduction. Let's talk about some tools, about some

05:58.680 --> 06:05.560
things I do. The first one I'm going to talk about is kubeadm. kubeadm is kind of the reference

06:06.120 --> 06:11.160
way to set up a Kubernetes cluster. And don't pay a lot of attention to the YAML on the right; I don't

06:11.160 --> 06:19.080
want you to read that. I would just like you to see that the YAML on the right is everything that

06:19.080 --> 06:24.520
I use to bring up my cluster, that's it. So it may not be very readable, but it fits on one slide,

06:25.160 --> 06:31.080
and I think that's pretty cool. Why do we need these tools? Because we talked about the kubelet

06:31.080 --> 06:37.240
bringing up all these files, all these static manifests and whatnot, right? So that is kind of an

06:37.240 --> 06:42.760
undertaking, right? Those static files, those static manifests need to be populated, need to have

06:42.760 --> 06:49.000
arguments, need to be consistent with each other, right? So kubeadm will take care of all that. Also

06:49.000 --> 06:53.080
other stuff, like creating RBAC objects once the cluster is brought up, you know, creating

06:53.080 --> 06:59.720
TLS certificates for the API, for etcd, deploying add-ons; funny that kubeadm considers

06:59.720 --> 07:07.720
CoreDNS an add-on, which is pretty funny, and it also renews certificates. So I think kubeadm is pretty cool.
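As a hedged sketch of what driving kubeadm from a config file looks like (the version, subnet and endpoint below are placeholders, not the configuration from the slide):

```yaml
# kubeadm.yaml: feed to `kubeadm init --config kubeadm.yaml`
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: control-plane-1          # hypothetical node name
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
controlPlaneEndpoint: 203.0.113.1:6443
networking:
  podSubnet: 10.244.0.0/16       # must agree with what the CNI expects
```

From a file like this, kubeadm renders the static manifests, generates the certificates and creates the RBAC objects discussed above.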

07:07.720 --> 07:13.640
Another thing that I use, that I like very much, is kine. Kine is kind

07:13.640 --> 07:21.640
of a project from the people who make k3s, and the reason, well, kine exists for a couple of reasons,

07:21.720 --> 07:27.320
but one of those is that etcd is great software, built with high availability in mind,

07:27.320 --> 07:34.600
and something I have found is that it is also very hungry for your SSD. It will chew through your

07:34.600 --> 07:43.400
SSD really fast because it goes to disk a lot. So, as anecdotal evidence, etcd took

07:43.400 --> 07:50.200
10% of my SSD's reported lifespan in a year, which is not particularly great. So kine allows you

07:50.200 --> 07:55.720
to replace etcd with something else. For example, SQLite,

07:55.720 --> 08:00.280
which is pretty cool if you are running a pet cluster or a small cluster, or PostgreSQL,

08:00.280 --> 08:06.920
which is what I use, just because. And funny enough, kine

08:06.920 --> 08:11.640
deploys pretty easily with kubeadm. So don't pay a lot of attention to the YAML; I think

08:11.640 --> 08:17.240
you don't need to mind the details, just know that those two snippets,

08:17.320 --> 08:21.480
the one at the bottom is obviously cropped, but the one at the top is legit, are all

08:21.480 --> 08:29.000
you need to use kine with kubeadm. So I think, again, kubeadm is a pretty nice and very flexible tool.
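A sketch of how the kine wiring can look, under the assumption that kine runs next to the API server, speaking the etcd protocol on port 2379 while persisting to Postgres (the DSN and endpoint are illustrative):

```yaml
# ClusterConfiguration fragment: tell kubeadm the "etcd" is external.
# kine itself is started separately, roughly:
#   kine --endpoint postgres://kine:secret@127.0.0.1:5432/kine
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - http://127.0.0.1:2379   # kine's etcd-compatible listener
```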

08:29.000 --> 08:37.320
So, more stuff: Cilium. You may know the acronym CNI,

08:37.320 --> 08:45.240
container network interface. Cilium is a CNI plugin, and it is interesting that,

08:45.960 --> 08:50.520
just like DNS, it is called a plugin, but nothing will work without it, so it is actually

08:50.520 --> 08:56.280
quite important that you pick one. And I really like Cilium, because I think it is very well

08:56.280 --> 09:04.600
documented, it replaces kube-proxy, so that is one thing less to care about, and it is relatively

09:04.600 --> 09:09.960
easy, or as I put it, less hard, to debug than others. That's only my experience, that's

09:09.960 --> 09:16.840
all, my views; yours may vary. And something that I find particularly relevant for this is that

09:16.840 --> 09:21.240
it has a lot of knobs to tweak that are useful for bare metal. For example, you can restrict it to

09:21.240 --> 09:26.680
certain interface names, you can choose which kind of encapsulation it is going to use for

09:26.680 --> 09:32.680
node-to-node traffic, and so on. You can also define custom routes, and get rid of that encapsulation

09:32.680 --> 09:38.280
altogether if everything shares the same L2 network. That is pretty, pretty neat.

09:38.360 --> 09:46.120
Another neat feature of Cilium, actually, is the egress gateway. So if you are running a

09:46.120 --> 09:53.720
Kubernetes cluster and you are kind of in a multi-tenant environment,

09:53.720 --> 10:00.760
and you have more than one IP address, you may want to map tenants to IP addresses, or otherwise

10:00.760 --> 10:06.760
dedicate these IPs to be used by a given tenant; this is for egress IP addresses. And most

10:06.840 --> 10:11.240
of you won't want to care about this, but for some workloads you may need to. For example,

10:11.240 --> 10:18.360
if you are hosting an email server: email is pretty picky about IP reputation,

10:18.360 --> 10:24.760
so, if you run in a multi-tenant environment, you really want that contained,

10:24.760 --> 10:31.480
so if one of your tenants behaves badly, it doesn't impact the others. You can also use this

10:31.480 --> 10:37.400
feature to route egress traffic through specific nodes, which I think is pretty good.
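A sketch of such a policy using Cilium's CRD; the labels and IPs are hypothetical, and field names may differ between Cilium versions:

```yaml
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: tenant-a-egress
spec:
  selectors:
    - podSelector:
        matchLabels:
          tenant: a              # traffic from this tenant's pods...
  destinationCIDRs:
    - 0.0.0.0/0                  # ...to anywhere outside the cluster...
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-node: "true"      # ...leaves through this node...
    egressIP: 203.0.113.20       # ...with this dedicated source IP
```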

10:38.520 --> 10:43.320
So we have talked about egress; let's talk about ingress traffic for a while,

10:43.960 --> 10:50.120
so you will probably need a load balancer. It is not strictly necessary, but you will

10:50.120 --> 10:57.160
really want one, especially if you want to go HA. And why is that? Well, obviously,

10:57.240 --> 11:04.120
pod IPs and service IPs are not reachable from the outside world; that would be annoying,

11:04.120 --> 11:10.600
right, and a security nightmare as well. So your nodes may or may not have public IPs,

11:12.360 --> 11:21.000
but it is not particularly straightforward to map the IPs of your nodes to those services.

11:21.000 --> 11:25.800
There are a couple of things you could do. For example, you can create a service

11:25.880 --> 11:36.760
and use a NodePort, but that will be not very migration friendly. Let's say, if your workload

11:36.760 --> 11:43.240
migrates from one node to another, you may have problems with that, and you will also have problems

11:43.240 --> 11:48.680
with multiple applications that want to use the same port, right, even if you have multiple nodes,

11:48.680 --> 11:53.880
with a NodePort they will fight each other. Same thing goes for external IPs. External IPs,

11:53.880 --> 11:59.400
in short, tell the CNI: everything that comes to this IP address,

11:59.400 --> 12:04.040
one outside of Kubernetes, and to this port, please route it to the service.

12:04.040 --> 12:09.880
This works, I think, a bit better, but you need to pick an existing IP address there, right,

12:10.520 --> 12:15.640
one that is already reachable. And which node are you going to pick? Whichever you pick

12:15.640 --> 12:20.360
is going to become your single point of failure: if that node goes down, your service goes down,

12:20.360 --> 12:25.720
it doesn't matter how many replicas you have, and that is not particularly great. So we can

12:25.720 --> 12:30.920
solve that, we can solve that with a load balancer, and the one that I particularly like is

12:30.920 --> 12:38.440
MetalLB. MetalLB is a load balancer for bare metal. It allows you to define a set of virtual IPs;

12:38.440 --> 12:43.480
your network provider will need to allocate these to you, or grant you permission to use them,

12:43.480 --> 12:49.000
but the neat thing is that you don't need to bind them to any particular machine if you don't want to.

12:49.080 --> 12:54.200
You just create a CRD; that's on the right, these are the two CRDs you need to make one

12:54.200 --> 13:00.120
IP work. And what MetalLB will do is that nodes will take turns

13:00.120 --> 13:06.680
announcing the VIP; there is an internal leader election going on with gossip, and MetalLB

13:06.680 --> 13:14.360
will ensure that one node is broadcasting ARP packets for this virtual IP. So no node has this IP address

13:14.360 --> 13:20.360
assigned: if you go into the nodes and type `ip a`, you won't see it. But at a given point in time,

13:20.360 --> 13:24.920
one node will be announcing it, and that means that for all intents and purposes the address is

13:24.920 --> 13:31.480
reachable. If that node fails, then MetalLB will figure that out, pick a new one to announce the

13:31.480 --> 13:40.280
IP, and everything will work with minimal downtime. Your CNI will typically take care of load balancing

13:40.280 --> 13:44.680
that, so a single node will be the entry point, but your CNI, if you don't tell

13:44.680 --> 13:51.880
it otherwise, will typically distribute that inside the cluster.
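The two CRDs from the slide are roughly these; the pool name and address are placeholders, and this assumes MetalLB's layer 2 mode:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32            # virtual IPs your provider lets you use
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool                # announce this pool via ARP
```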

13:51.880 --> 13:59.240
So let's make a demo, let's see that it works. I have a bunch of terminals here, in a very big font.

14:00.280 --> 14:04.600
This here that you can see on the top right, I'm going to actually make it smaller just

14:04.600 --> 14:09.800
so it fits. Is it still readable? Okay, not a lot. This is a Kubernetes cluster that I

14:09.800 --> 14:15.640
have created for this demo. It has a public IP; it is running on Hetzner Cloud. So I'm going to show

14:15.640 --> 14:21.400
you the things I am going to apply, right, which are here. I'm going to apply a namespace,

14:21.400 --> 14:25.400
pretty trivial. I'm going to apply these here that we just saw; these are the same ones that

14:25.400 --> 14:30.680
were on the slide. This is an nginx deployment; there is not a lot of magic

14:30.680 --> 14:37.160
to it, just a deployment, a container, nginx, nothing special. And this is a service for that,

14:37.160 --> 14:43.400
with labels matching the deployment. This service is of type, what was that again, this service

14:43.400 --> 14:49.560
is of type LoadBalancer, and you can see that it is annotated with this here. This is telling

14:49.560 --> 14:55.480
MetalLB: please note that this service will want to use that IP. Okay, so let's apply that

14:55.560 --> 15:00.360
and see what happens. And before that, so we can see the magic, I'm going to start a watch

15:00.360 --> 15:06.360
curl command. This curl command, I'm going to cancel it so you can see it, is going to try to connect

15:06.360 --> 15:12.760
to this IP address, and right now this IP address does not exist; nobody is answering. This is

15:12.760 --> 15:21.640
actually arriving at Hetzner's data center, but nobody is answering. So let's apply this,

15:22.520 --> 15:28.520
and let's see what happens. So I have created a namespace, I have applied this,

15:28.520 --> 15:42.840
and now the IP address works. It's like magic. Okay, so one note about load balancers and this

15:42.840 --> 15:49.160
neat thing called externalTrafficPolicy. You most likely don't need to care about this,

15:49.160 --> 15:55.240
but it has bitten me a couple of times, so I think it's good to mention. So, load balancers

15:55.240 --> 16:00.760
are great for exposing services, but, as you may expect, they are balancing load, and they are

16:00.760 --> 16:07.640
doing this thing I just told you about: the requests, the ingress traffic, will arrive at one node,

16:07.640 --> 16:11.880
and then it will be distributed by your CNI to other nodes, right?

16:11.880 --> 16:17.960
If you think just for a moment about this: for this to happen, there must be some kind of IP

16:17.960 --> 16:24.280
rewriting, right? Because the IP packet that arrives has a destination IP, and then we're

16:24.280 --> 16:31.480
going to route it to another node; that is, there must be some kind of trickery going on. And the

16:31.480 --> 16:37.000
result of this trickery is that the service, the destination, the pod that is

16:37.000 --> 16:43.720
ultimately handling the request, will not see the source address of this packet. Right, most of the time

16:43.800 --> 16:48.680
you do not really care about this; the application doesn't really care about where it comes from.

16:48.680 --> 16:52.360
It will see, of course, a source address, but it will be one internal to the cluster.

16:53.400 --> 16:57.960
there are some applications that do care about this, some applications that may want to do

16:57.960 --> 17:05.720
some allowlisting for these IPs, or that may have some nasty encryption scheme using the source

17:05.720 --> 17:10.200
IP address. This happens, and this has happened to me, for example, with Transmission, which is

17:10.200 --> 17:17.560
a BitTorrent server. So these kinds of applications, with home-rolled, your-own-encryption stuff,

17:17.560 --> 17:25.880
not TLS, that will care about this. So you can use this property of the service: external traffic

17:25.880 --> 17:32.040
policy. By default it is Cluster; if you set it to Local, this will not happen, and nothing will

17:32.040 --> 17:38.520
be routed inside the cluster. MetalLB will notice this, and will only allow this IP address

17:38.600 --> 17:44.360
to be announced from a node that can actually also handle this load. So it is not free;

17:44.360 --> 17:53.800
there are some nuances to it, but it is an interesting trick.
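The knob sits directly on the Service; a minimal sketch with made-up names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: transmission
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IPs (default: Cluster)
  selector:
    app: transmission
  ports:
    - port: 80
      targetPort: 8080
```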

17:53.800 --> 18:02.200
Now, let's talk about everybody's favorite three-letter acronym, DNS. So, as the number of applications that you are hosting keeps

18:02.280 --> 18:08.600
growing, your number of domains may also keep growing as well, especially if you buy the

18:08.600 --> 18:13.880
domain as the motivation for your project, and then you need to give up on your hopes and dreams

18:13.880 --> 18:21.320
and let the domain rot in there. So one typical thing that you can use is wildcard names:

18:21.320 --> 18:28.200
you just say, okay, this is the address for every subdomain of

18:28.280 --> 18:33.960
the domain. You could do that, why not, but that may fall short, and may also not be

18:33.960 --> 18:42.360
particularly great for multi-tenancy. And besides, you know, Kubernetes already knows this. For example,

18:42.360 --> 18:47.240
if you have an Ingress, Kubernetes knows what the hostname of that Ingress is, right? You write it

18:47.240 --> 18:55.480
in the YAML. So it would be kind of neat if we could use that information just to make DNS records

18:55.560 --> 19:00.440
happen, right? And that is what ExternalDNS does for us. This is another great tool that I like

19:00.440 --> 19:06.040
very much, and it will do exactly this: it will watch your Ingresses, it will watch

19:06.040 --> 19:12.040
your LoadBalancer services that are running in the cluster with a particular annotation, and also custom

19:12.040 --> 19:16.200
resources, in case you want to, I don't know, announce some other names. And it will create the

19:16.200 --> 19:22.840
DNS record for that. Isn't that great?
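For a LoadBalancer Service, the hookup is a single annotation (the hostname here is an example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS watches for this and creates the matching record
    external-dns.alpha.kubernetes.io/hostname: demo.example.org
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
```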

19:22.840 --> 19:30.360
Now, there is a gotcha, right: ExternalDNS is not a DNS server, so it will create these DNS records, but it will not serve them.

19:30.360 --> 19:36.040
The way it actually works is that it will try to register them somewhere else, and that's

19:36.040 --> 19:45.000
a bit inconvenient. But worry not, I have the solution for you. This is a very nice tool,

19:45.080 --> 19:51.400
actually a very nice Helm chart, called stateless-dns, hosted on GitHub in the txqueuelen

19:51.400 --> 19:57.320
organization; there is a joke in there that I'm not going to explain. And stateless-dns is a

19:57.320 --> 20:05.240
Helm chart that combines ExternalDNS with PowerDNS. PowerDNS is a fully fledged DNS server, right?

20:06.520 --> 20:13.720
And the thing, the magic thing, that this chart does, is that it makes PowerDNS

20:13.800 --> 20:19.000
stateless, and that is great, because we love stateless stuff. And my phone is kind of bugging me,

20:20.200 --> 20:26.280
so sorry, one second... there you go. And we love stateless stuff, right? So what will this

20:26.280 --> 20:33.000
thing do? It will create a PowerDNS deployment, it will spin up ExternalDNS next to it,

20:33.000 --> 20:38.200
and it will configure both of them so they talk nicely to each other, right? And again, on the

20:38.200 --> 20:43.800
right, there is a lot of YAML; there is no need to pay a lot of attention to that. Just keep in mind

20:43.800 --> 20:51.720
that this is all you need to make this thing work, and you will see it in just one second. This is

20:52.280 --> 20:57.880
another nice feature of this chart: it allows you to define your DNS zones as code,

20:57.880 --> 21:02.280
which is something that ExternalDNS does not manage; ExternalDNS won't do your zones for you,

21:02.280 --> 21:06.920
but this thing will. If you look at the bottom, there is a bunch of text in there that

21:07.000 --> 21:14.200
may look familiar: this is the BIND text format for DNS zones. PowerDNS can understand it, and this

21:14.760 --> 21:21.000
chart has a nice init script that will parse this and seed the PowerDNS database with it,

21:21.000 --> 21:28.440
so you don't need to care about PowerDNS at all. This is kind of great. It is also highly available, if you wish.
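The zone text is plain BIND zone-file syntax, something like this sketch (domain and addresses are placeholders):

```zone
$ORIGIN example.org.
$TTL 3600
@    IN  SOA ns1.example.org. admin.example.org. (
         2024010101 ; serial
         7200       ; refresh
         3600       ; retry
         1209600    ; expire
         3600 )     ; negative-cache TTL
     IN  NS  ns1.example.org.
ns1  IN  A   203.0.113.10
```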

21:28.440 --> 21:35.160
So let's do a demo of this: a DNS announcement over the internet, what could go

21:35.160 --> 21:41.480
wrong? Cool. So we have the same thing that we had before; this curl is still running.

21:41.480 --> 21:46.520
Just to refresh your memory, this is calling this IP address, and the default nginx

21:46.520 --> 21:52.680
welcome page is being served, got it. And now, something that we may want to do is to have a

21:52.680 --> 21:58.520
domain name for this, right? So I have another curl command here that is basically doing the same,

21:58.520 --> 22:04.760
but this one is doing it against a domain name, and right now you can see it doesn't work. So let's see

22:04.760 --> 22:11.960
one thing. I'm going to apply this demo DNS thing in just a moment, this one, but I want to show

22:11.960 --> 22:16.280
you what the difference is between this and what we applied before, because these demos look

22:16.280 --> 22:21.160
a lot like each other; as you can see, there is quite some boilerplate. The thing you need to pay attention to

22:21.160 --> 22:28.760
is this thing: this apply command will add an annotation with this hostname, and it will tell

22:28.760 --> 22:36.920
ExternalDNS: hey, this service is served on this domain, outofcloud.online. So let's try to

22:37.080 --> 22:47.480
apply it. Okay, so we got it configured, and there might be some time for DNS propagation to do

22:47.480 --> 22:54.840
its magic. I'm going to give this 30 seconds, and if not, I'm going to cheat. Let's see if we can

22:55.480 --> 23:05.080
pay some attention... okay, so this is not answering yet, but now it is, and there you go,

23:05.960 --> 23:21.160
our DNS name is working, awesome. Cool. So, just to close this with some other things that I think might be

23:21.240 --> 23:28.360
interesting, but not really related to the topic, there are some other things I do like,

23:28.360 --> 23:33.640
and that I think are super useful in Kubernetes in general; I thought that I might

23:33.640 --> 23:39.400
as well take this opportunity and voice them. So the first one is GitOps. GitOps is great.

23:39.400 --> 23:45.960
GitOps, and you can quote me on this, is what makes Kubernetes worth it. Do yourself a favor and

23:45.960 --> 23:54.440
offload your brain to Git. This will drastically improve things, and

23:54.440 --> 23:59.640
drastically increase your ability to actually host this stuff. You don't want to remember

23:59.640 --> 24:05.640
every single thing that you are hosting if Git can do it for you. Both Argo CD and

24:05.640 --> 24:12.680
Flux CD, I think, are the two most popular GitOps tools, and I think they are both great.

24:13.640 --> 24:20.680
Renovate, also sometimes called Renovate Bot, is a very nice bot that you can set up.

24:20.680 --> 24:27.320
It will run periodically and watch your Git repo; it understands Kubernetes manifests and also

24:27.320 --> 24:33.160
Helm values, and it will look for images and check if there are new versions of those images,

24:33.160 --> 24:39.560
and if there are, it will create a PR for you. So, in my view, this greatly amplifies

24:39.640 --> 24:45.000
the power of GitOps, right? Now you have something that is able to watch this code and actually improve it.
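Renovate reads its configuration from the repository itself; a minimal sketch (the file pattern is an example, since Renovate's Kubernetes manager has to be told which files to scan):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "kubernetes": {
    "fileMatch": ["^manifests/.*\\.ya?ml$"]
  }
}
```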

24:45.000 --> 24:50.760
Secret management: I have tried a couple of things. I have tried SOPS, I have tried

24:50.760 --> 24:55.320
Sealed Secrets by Bitnami. In the end, what I do (your mileage may vary) is I just have a separate

24:55.320 --> 25:04.520
repository with the YAMLs. It works. And yes, you can judge me, feel free, but it really is, in my view,

25:04.520 --> 25:11.560
the least bad option. For databases that you may want to run in Kubernetes, I find CloudNative

25:11.560 --> 25:16.440
PG nice. It allows you to define a custom resource, and it will provision a database for you.

25:16.440 --> 25:20.680
And more importantly, you know, provisioning a database is easy, anybody can do it; more importantly,

25:20.680 --> 25:27.160
it will perform switchover procedures, migrating the database when a node goes down, which is pretty neat.
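A sketch of the custom resource CloudNativePG acts on (name and sizes are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 3        # one primary plus replicas, with automatic failover
  storage:
    size: 10Gi
```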

25:27.400 --> 25:35.480
About tooling for deploying stuff: I have tried a couple of things. I have tried Helm charts,

25:35.480 --> 25:42.200
I have tried Kustomize. In the end, what I do is I just keep a lot of YAML files. The reason

25:42.200 --> 25:49.960
for this is that, you know, every Helm chart has a different API, like a different way to understand

25:49.960 --> 25:55.640
the values file; my brain just cannot keep up with all those APIs, right? I will always

25:55.640 --> 26:02.200
end up running helm template and doing manual manifest work. For storage, what I use are

26:02.200 --> 26:06.920
just some statically provisioned volumes. That means I have a Bash script that spits out YAML, that creates

26:06.920 --> 26:11.960
some PersistentVolumes, and they are bound to a node, as one would expect, but they just work.
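One of those statically provisioned volumes might look like this sketch; the path, hostname and capacity are placeholders that such a script would fill in:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-node1-disk1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disk1              # pre-created directory on the node
  nodeAffinity:                   # pin the PV to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
```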

26:13.000 --> 26:18.200
And finally, this is my last slide: some things I would like to try. I would like to try

26:18.200 --> 26:22.360
OS updates that, you know, you can roll onto your nodes

26:22.440 --> 26:26.680
automatically; it's pretty, pretty magic. There are distros that do this, like Talos and

26:26.680 --> 26:31.800
Flatcar, but I haven't gotten into that yet. For storage, I want to experiment with

26:31.800 --> 26:37.720
S3 buckets as PVCs, and maybe at some point I will scale my clusters to something

26:37.720 --> 26:42.920
multi-region. So that's it, thanks for your attention, any questions?

26:43.000 --> 26:52.200
Yeah, that's one there.

26:52.200 --> 27:14.600
With regards to secrets, and just as a side note: you said that it's basically just YAMLs. Is it,

27:14.680 --> 27:22.360
do you use encrypted YAML or just plain text YAML, for example? Do I use what, encrypted YAML,

27:22.360 --> 27:28.200
the encrypted YAML files? No, no, it's just plain text YAML, like a regular manifest;

27:28.200 --> 27:32.520
it's just kept in a different Git repository; it's also synced with GitOps and

27:32.520 --> 27:37.560
Argo. The advantage is that you can make your other repository public, and this one obviously

27:37.560 --> 27:43.480
not, and you can also have different ways of controlling access. But it's plain YAML.

27:44.600 --> 27:50.280
Okay, thank you.

27:50.280 --> 27:57.720
Hi, about all the DNS dance that we did: so the idea is that you don't want to rely on the external

27:57.720 --> 28:04.280
DNS provided by the operating system, but you want each pod to see their own records,

28:04.280 --> 28:09.320
because we might be in a multi-tenant environment, so you don't want all pods belonging

28:09.320 --> 28:15.800
to another customer to see a collective list of records that might be accessible. Is that the

28:15.800 --> 28:20.920
idea? So let's see if I got your question; your question was about

28:22.520 --> 28:28.680
having an internal DNS for the pods? Aha, no, this is not an internal DNS, this is a

28:28.680 --> 28:33.960
world-reachable external DNS. This thing I'm doing here, you can do it: you can try to resolve

28:33.960 --> 28:38.760
this name, it will work; this is all over the internet. So this is about how you make

28:38.840 --> 28:44.280
your applications available to users, which will require a domain name, and you can do this in

28:44.280 --> 28:48.920
a self-contained way: this DNS server is running in the same cluster, serving that nginx

28:49.080 --> 29:04.200
page. There is one more in the back; we have time. Why is it always the middle of the row?

29:07.400 --> 29:13.400
You should spread as much as possible, and make the room organizers do some exercise.

29:13.720 --> 29:22.120
I have a question about kine. I really understand why you use it, but

29:23.320 --> 29:28.120
Can you please be a bit quieter, and the person speaking a bit louder? Thank you.

29:28.120 --> 29:34.680
Yes, I am just asking if maybe kine is creating a single point of failure in your environment,

29:34.680 --> 29:40.200
as it runs on Postgres, and if the Postgres server goes down, you won't be able to access

29:40.200 --> 29:46.280
the API. And actually there is some possibility to get some Postgres clustering,

29:46.280 --> 29:50.280
but it's not as strong as it is for etcd, for example.

29:51.240 --> 29:56.600
Okay, so the question was, let's see if I got this correctly, about HA possibilities for Postgres

29:56.600 --> 30:00.440
as the database backing the API server. Was that it?

30:00.440 --> 30:04.200
Yeah, and does it create a single point of failure?

30:04.200 --> 30:06.200
Yes, sir.

