WEBVTT

00:00.000 --> 00:11.380
So, hi everyone, today I'm going to talk about enhancing delivery using a combination

00:11.380 --> 00:21.880
of Kubernetes, Gateway API and Istio, excuse me, this is fine now, so yeah, Kubernetes, Gateway API and

00:21.880 --> 00:26.760
Istio, and how we use these tools for internal deployments at DigitalOcean. A quick

00:26.760 --> 00:33.760
show of hands: how many of us use Kubernetes at work, Gateway API, Istio or that sort

00:33.760 --> 00:41.760
of stuff? Okay, some folks know this stuff already. A bit about me: I work as a software engineer

00:41.760 --> 00:46.400
at DigitalOcean, on the platform engineering team, and that's Sammy, our mascot, so I mean,

00:46.400 --> 00:52.360
we have a couple of Sammys over here if someone wants one, plush toys.

00:52.360 --> 00:58.480
So, it was a recurring theme throughout the entirety of the conference that continuous

00:58.480 --> 01:03.960
deployment and continuous integration are a really hard problem, and the reason it is hard

01:03.960 --> 01:08.560
is because there are a lot of tools that work here, like lots of software components that work

01:08.560 --> 01:14.760
in tandem with each other, and the breaking of any of the components might lead to

01:14.760 --> 01:20.560
a degradation of the entirety of the deployment pipeline and eventually might lead to an incident

01:20.640 --> 01:26.000
or a degradation in SLAs and SLOs, so it's a tough problem. There are a lot of solutions,

01:26.000 --> 01:34.000
but some work, some kind of work, and in cases of normal rollouts, they don't provide a lot

01:34.000 --> 01:38.760
of protection against some failures, especially in cases of false positives, in which, like,

01:38.760 --> 01:43.840
a failure of a container or a readiness check doesn't indicate a failure of the entirety of the pipeline,

01:43.840 --> 01:49.320
and so you don't have a lot of protection in those cases, because it doesn't provide you

01:49.320 --> 01:59.080
a safe framework to roll back to. A bit of an interlude on how we do deployments at DigitalOcean:

01:59.080 --> 02:04.080
we use software called DOCC, it's not an acronym, I think it was an acronym at some point

02:04.080 --> 02:08.880
but we now agree it's not an acronym at this point, and it's just an abstraction

02:08.880 --> 02:16.000
on top of Kubernetes with all the sane defaults, batteries-included defaults provided, and

02:16.000 --> 02:23.240
it does not include all the features of Kubernetes, but it provides like 99% of the features

02:23.240 --> 02:28.040
that make sense for our internal users, so it's, like, a simplified, it's just Kubernetes underneath,

02:28.040 --> 02:35.720
it's the main system, I mean, that we have for the declarative deployments that we do at DigitalOcean.

02:35.720 --> 02:44.280
A bit of architecture of DOCC: simple client-server architecture, we have a DOCC, we have a DOCC

02:44.280 --> 02:50.120
client that talks to a server, the server talks to the kube API server, we have a bunch of controllers

02:50.120 --> 02:56.800
that interact with the kube API server. We use kubectl, but it's mostly for the platform team

02:56.800 --> 03:05.520
to do administrative tasks, because users don't usually care about the low-level details,

03:05.520 --> 03:12.160
but we do provide some read access if someone wants it. And a fun fact: the DOCC server is deployed

03:12.160 --> 03:22.360
on top of DOCC, so it's kind of like a chicken-and-egg thing. A very simple DOCC manifest: it's

03:22.360 --> 03:30.800
declarative, like I said, we use JSON instead of YAML, we got some backlash for YAML at a certain

03:30.800 --> 03:37.920
point. Fairly intuitive: we have a name, there's the scale, a container to define the container

03:38.000 --> 03:44.240
that will be deployed, a maintainer for which team owns it, so that team can be reached out to in case

03:44.240 --> 03:52.720
of a failure. All of this is really intuitive, really simple to use. Getting back to normal

03:52.720 --> 03:59.200
rollouts, we'll take a quick look at how our normal rollouts work and what's the problem

03:59.280 --> 04:08.400
with those approaches, in the end. We'll use a simple nginx deployment to track this, so we

04:08.400 --> 04:16.800
have, we used to have a 1.23 image, we updated it to 1.24, applied that manifest, the update

04:16.800 --> 04:25.760
starts rolling out. So initially we just have all the traffic pointed at 1.23, but as time goes on,

04:26.080 --> 04:31.760
as soon as we apply the manifest, the newer versions will start coming up, so you see here's the

04:31.760 --> 04:39.600
1.24 replica that came up, and the traffic will be redirected to that one, and the older version

04:39.600 --> 04:46.880
will be destroyed. And the same thing goes for all of these; it looks like an in-place upgrade,

04:46.880 --> 04:52.160
but the cycle is always: the newer version gets created, the traffic gets redirected, and the

04:52.240 --> 05:01.360
older one is finished off. Moving on, we can see some of the problems with this. First things

05:01.360 --> 05:06.560
first is like there is no way to control the percentage or the time for the traffic to be rolled

05:06.560 --> 05:13.200
over, so in case of large deployments it can take a while, and in case of small deployments it's

05:13.200 --> 05:18.960
like a couple of seconds, but essentially no one has control over it; it depends on a lot of

05:18.960 --> 05:25.760
factors, none of which are in our control. And second, most important, if you see the last image, in

05:25.760 --> 05:32.080
case of bugs, there are no previous deployments left to roll back to, so if there is a fault

05:32.080 --> 05:36.640
in the newer rollouts you are kind of out of luck because you cannot roll back to anything,

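As an aside, the rolling update being described is just Kubernetes' stock Deployment behavior; a minimal sketch of the nginx example as a plain Kubernetes manifest, not a DOCC one:

```yaml
# A plain Kubernetes Deployment for the nginx example above.
# Kubernetes replaces pods one by one; once the rollout finishes,
# no 1.23 replicas remain to fall back to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24   # was nginx:1.23 before the update
```

Once the last 1.23 pod is replaced, nothing is left to fall back to, which is exactly the gap being described.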
05:38.880 --> 05:45.920
so the solution is progressive rollouts. Progressive rollouts are a simple,

05:46.560 --> 05:52.480
minimal approach of shifting traffic in a periodic way to the newer versions, so that enhances

05:52.480 --> 05:59.120
reliability and gives you a bit more confidence in your application deployment, and the user

05:59.120 --> 06:03.840
experience is much better due to this, because users are much more confident in what they are rolling

06:03.840 --> 06:09.520
out, and you can test on a sample subset of the newer rollouts before

06:10.160 --> 06:21.600
committing to the entire rollout. Among a plethora of rollout methods that are present like

06:21.600 --> 06:29.440
blue-green, A/B testing, canary rollouts, we deployed, for most of our use cases, canary and A/B testing,

06:29.440 --> 06:35.520
because it made the most sense, but there is also scope for improvement as and when we see

06:36.400 --> 06:42.640
there is an actual demand for it, so we will start by looking at how we use canary deployments

06:42.640 --> 06:47.840
and how we created canary deployments using those tools that I mentioned before. So if you

06:47.840 --> 06:54.320
remember our manifest, it will also have a section to define the deployment strategy that will

06:54.320 --> 07:00.960
be employed, so in this case it is a canary deployment with the configuration, so

07:01.280 --> 07:06.640
5% of the traffic will be rolled over every, say, two minutes, pretty simple,

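As a sketch, such a strategy section in the JSON manifest might look like this; the field names here are illustrative, not DOCC's actual schema:

```json
{
  "name": "demo",
  "scale": 4,
  "maintainer": "platform-team",
  "container": { "image": "nginx:1.24" },
  "strategy": {
    "type": "canary",
    "stepPercent": 5,
    "stepInterval": "2m"
  }
}
```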
07:07.600 --> 07:14.480
let's see it in action. So as soon as we do that, there will be a proxy that comes in between,

07:14.480 --> 07:18.960
and that's a load balancing proxy that talks to the DOCC applications, so rather than

07:18.960 --> 07:25.440
the services directly talking to the client and then the services doing some changes

07:25.680 --> 07:34.320
based on those clients, it's the task of the load balancing proxy to divide the traffic,

07:34.880 --> 07:44.960
as per the configuration, into the rollouts. So we can see the benefit of this approach:

07:44.960 --> 07:53.760
we have a lot of control over the traffic, the time that we need to

07:53.760 --> 07:59.760
when we need to roll over to the newer versions, and then in the case of any failure, we obviously have

07:59.760 --> 08:04.720
some traffic still flowing onto the older deployments, so that it can be reverted back in case

08:04.720 --> 08:14.800
of failure, so this obviously gives a lot of confidence moving forward, to achieve this we use

08:14.800 --> 08:21.200
these open source tools: Istio, Envoy proxy, Gateway API. Let's zoom in a bit and see how these

08:21.200 --> 08:28.320
work in synergy and how we use them to apply this. So first things first, Istio is a service mesh;

08:28.320 --> 08:33.280
service meshes are software that essentially separate out the networking part of your

08:34.480 --> 08:38.720
applications and have a separate layer for ingress or egress traffic or in between,

08:38.720 --> 08:47.040
or in between your services, and it provides, excuse me, it provides encryption of the

08:48.000 --> 08:53.920
traffic, traffic splitting and all that cool stuff, and it's essentially used as a

08:53.920 --> 09:03.920
control plane to manage all your proxies. So we use Istio to manage our load balancing

09:03.920 --> 09:12.320
proxies. So Istio provides all the configuration of how and in which architecture the proxies will be

09:12.320 --> 09:17.760
deployed, what will be the configuration of the proxies, the certificate management, security,

09:18.320 --> 09:25.280
all that kind of stuff is provided by Istio to those load balancing proxies, and these proxies, as we will

09:25.280 --> 09:33.600
see, are just Envoy proxies underneath the garb, shocking. So what is Envoy proxy? It's an

09:34.240 --> 09:38.240
open source, high performance edge and service proxy, a proxy designed to facilitate

09:38.240 --> 09:45.440
communication between services. So it essentially intercepts and manages all the traffic,

09:45.440 --> 09:50.480
whether that comes from ingress traffic, maybe in between the services, or egress traffic, so it's all

09:50.480 --> 09:56.960
handled by Envoy proxy, and Envoy is controlled by Istio, so Envoy acts as a control plane and,

09:57.520 --> 10:03.760
sorry, Istio acts as the control plane and Envoy acts as the data plane for our architecture,

10:03.760 --> 10:10.800
and yeah, here we have it, so underneath the garb, it was Envoy proxy all along, managed by

10:10.800 --> 10:23.840
Istio. And to glue this all together, we have Gateway APIs, which are open source Kubernetes-native

10:24.640 --> 10:31.280
resources that provide a consistent framework and interfaces to manage all these

10:31.360 --> 10:39.520
networking-related properties and objects in any Kubernetes-native software application.

10:40.240 --> 10:46.560
It is the successor of the Ingress API; I think there was an attempt to name it Ingress 2.0, but that

10:46.560 --> 10:53.520
did not encapsulate the entirety of what the API stands for; there is a lot more that Gateway API

10:53.520 --> 11:00.320
will provide that Ingress does not. And if any one of us has used Ingresses, they know they provide

11:00.320 --> 11:07.040
a lot of pain, just because different providers have very different objects, very different ways

11:07.040 --> 11:12.080
of doing things. So for example, if you are using nginx, that will have very different

11:12.080 --> 11:19.840
objects or a very different way of managing the traffic than Istio or Linkerd or any of those

11:19.840 --> 11:26.640
service meshes. So Gateway APIs try to ease all that pain and provide a consistent framework

11:27.040 --> 11:33.840
to manage the entirety, so you can quickly switch between different providers without

11:33.840 --> 11:40.000
worrying too much that functionality will break. And another benefit is that because it's

11:40.000 --> 11:45.920
Kubernetes-native, you can create it as a native Kubernetes object and not have to

11:48.800 --> 11:54.720
import or get a third-party software for doing that, so it's all very much in the Kubernetes

11:54.720 --> 12:05.520
ecosystem. Yeah, it provides a couple of objects, but the two I highlighted are kind of

12:05.520 --> 12:16.240
important to us: the Gateway object, that encompasses the service mesh, be it nginx or Istio, and how

12:16.240 --> 12:22.160
that service mesh will be, what will be the behavior of that service mesh, so it will be

12:22.160 --> 12:26.720
different for Istio, it will be different for Linkerd, but essentially every one of these

12:27.600 --> 12:33.840
service meshes will implement a Gateway object, so you don't have to worry about a

12:33.840 --> 12:40.240
feature breaking in case you switch between different service meshes. And it also

12:40.240 --> 12:46.240
splits out the roles, it has a role-based architecture, so Gateways are something that

12:46.240 --> 12:52.000
cluster admins worry about, and HTTPRoutes here, which are also a Gateway API component,

12:52.080 --> 12:59.120
are something that provides traffic management at the application level of detail. So it's not the job,

12:59.120 --> 13:04.640
obviously these two have to collaborate together, but the onus is not on application

13:04.640 --> 13:11.680
managers to maintain the Istio cluster, manage all the configurations, all that stuff, so the

13:11.680 --> 13:19.280
roles kind of separate out and there's very much less friction in doing work. So Gateway API, so Gateway API

13:19.280 --> 13:26.640
provides us a way for managing the ingress traffic, and then that ingress traffic,

13:28.640 --> 13:34.080
on a per-application basis, is split out into different services depending

13:38.160 --> 13:45.680
on the kind of application that we wanted to redirect it to. And so yeah, using all these tools,

13:45.680 --> 13:55.600
we were able to get a very stable progressive rollout. But moving on, we also have A/B

13:55.600 --> 14:05.280
testing. Sometimes you need to validate your stuff before you actually release it, kind of like a

14:05.280 --> 14:13.600
smoke test to validate all those results, and A/B testing helps in those cases. So it's

14:14.160 --> 14:20.960
very similar to how we use canary deployments, where you just point to a deployment strategy

14:20.960 --> 14:26.000
saying that we are using A/B testing, provide a bit of configuration, so enable headers will

14:27.200 --> 14:33.440
enable header-based routing, as we'll see in the next slide, and pause before rollout is because

14:33.440 --> 14:39.040
these two, A/B testing and canary, can work in tandem with each other, and so you can define

14:39.040 --> 14:44.480
canary along with A/B testing, and just say that you should pause before rollout and not start

14:45.520 --> 14:50.880
transferring traffic to the newer version, because we need to test before we do any sort of traffic

14:50.880 --> 15:02.160
rollout. And for this too, HTTPRoute provides a very easy way of doing that, so we can just use

15:03.120 --> 15:11.840
synthetic traffic via headers, so the HTTPRoute reads out the HTTP headers and just

15:12.720 --> 15:20.000
routes the synthetic traffic to whatever application the traffic needs to go to, so

15:20.800 --> 15:27.520
in here, we have the 1.23 version header, that will be routed to only the 1.23 instances, and the 1.24 headers will

15:27.600 --> 15:34.560
be routed to the 1.24 instances, without any real traffic actually being rolled out to any of the

15:34.560 --> 15:43.600
new instances. And also, let's talk about automatic rollbacks, because this is also a really

15:43.600 --> 15:54.240
powerful paradigm and kind of ties together everything we discussed here. I love this webcomic from

15:54.240 --> 16:03.200
xkcd, if you like xkcd like me, but it kind of shows that automation might be a bit

16:03.200 --> 16:10.960
dreaded, but in the end, if used properly, it really saves us some sleep and a lot of worry

16:10.960 --> 16:17.520
that something can go wrong and we have no control over it, and this is what automatic rollbacks

16:17.600 --> 16:25.520
do for us. So this is what it looks like in DOCC: we have failure conditions in which we

16:25.520 --> 16:35.280
define a PromQL query, so here we have just defined that the amount of 500 error codes that we get

16:35.280 --> 16:42.560
out of that particular application should not exceed 5% of traffic, and DOCC scrapes

16:42.800 --> 16:48.320
VictoriaMetrics, or Prometheus, or anything you can use, every fixed amount of time,

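The failure condition being described might look like the following PromQL, with illustrative metric and label names; if the query returns a time series, the error budget is being exceeded:

```promql
# 5xx responses as a fraction of all responses over the last 5 minutes;
# matches (returns a series) only when the ratio exceeds 5%
  sum(rate(http_requests_total{app="demo", code=~"5.."}[5m]))
/ sum(rate(http_requests_total{app="demo"}[5m]))
> 0.05
```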
16:49.280 --> 16:57.440
on the new rollouts, and it tries to see if this is true; if this is true, or if we do not get a

16:57.440 --> 17:04.320
time series at all, we have found a failure, something has gone wrong, and it will automatically

17:04.320 --> 17:11.520
roll back to the previous stable version and stop sending traffic to the newer version altogether.

17:11.600 --> 17:16.640
So this is a really powerful paradigm, because we do not need to worry too much about how things

17:16.640 --> 17:24.560
will go on, and if something were to fail, then we can easily, without any manual intervention,

17:24.560 --> 17:35.600
roll back to the previous instance. And kind of to sum it all up, we have seen the benefit

17:35.680 --> 17:43.680
of all of these: every time we see an instance rolled back automatically, that is a potential

17:43.680 --> 17:50.160
incident that we saved from ever happening, and this is a very powerful paradigm, like we can use it,

17:51.680 --> 17:59.040
it does not need to be specific to just deployments, it can be very useful in CI/CD pipelines

17:59.120 --> 18:05.120
and all that stuff. So really, we are obviously seeing a lot of benefits here, and we also have

18:05.120 --> 18:15.040
some improvements that we think we can implement ourselves as time goes on, but that was mostly

18:15.040 --> 18:23.040
what I have to talk about, some references which are basically the project pages, you can check

18:23.120 --> 18:28.640
them out. I used Excalidraw to make all those diagrams, it's open source, a really good tool,

18:30.240 --> 18:44.480
and yeah, I have time for some questions, if you have some questions.

18:44.480 --> 18:50.720
So, you can sort of automate it a lot, it's like you're using a bit of a system behind it,

18:50.720 --> 18:55.840
it's obviously in your cluster, you most of the time wouldn't, for example, change the default

18:55.840 --> 18:59.840
configuration, so maybe there's a bigger question: you would have two things, the older

18:59.840 --> 19:04.560
deployment and the new one you are creating when rolling out, right? So, would we not really have to

19:04.560 --> 19:09.920
set this one to, say, zero instances, as in your example, and set this one to four, how do you work

19:09.920 --> 19:14.880
around that, do you have a different type of system that manages that for you? So, we don't use

19:14.880 --> 19:21.360
like Argo or all this stuff, so essentially we scrape VictoriaMetrics or Prometheus every n

19:21.360 --> 19:26.960
minutes, like whatever the configuration is for that, so every one minute at least we scrape

19:26.960 --> 19:32.800
that metric, we verify and validate that we get a time series out of that metric,

19:32.800 --> 19:37.840
right, and if that works out, then this condition is essentially false, right, we do not have an

19:38.560 --> 19:43.280
error; as soon as there is an issue in that metric in the VictoriaMetrics instance, we know that

19:43.280 --> 19:48.960
we have had some sort of error, because essentially the data is not generating there, so we do not

19:48.960 --> 19:53.440
have it. Okay, it seems my question was aiming at a bit of a different place, okay, for example,

19:53.440 --> 19:59.520
what I meant was the moment of the rollout, before the route even comes into play, okay,

20:00.160 --> 20:04.800
when you start the rollout, you actually have two services, in a way, for the gateway, you

20:05.760 --> 20:13.360
need to route 5% to one, 95% to the other one, right, basically my question is how do you know

20:13.360 --> 20:17.520
that you have two services instead of operating with one single one?

20:17.520 --> 20:26.400
Oh, so we really use tagging, so like we use tags, we tag every one of the services that

20:26.400 --> 20:32.400
is using canary deployments, and we keep track of those applications for specific purposes,

20:32.480 --> 20:38.240
so app: demo is essentially what we use, yeah, yeah, those kind of need to have these

20:39.120 --> 20:43.680
labels for them to deploy, any more questions?

20:48.080 --> 20:54.480
Yeah, it is an internal tool, but it's not very much different from Kubernetes,

20:54.480 --> 21:00.320
it's just that we have provided certain sane, like certain sanity defaults over it, so rather than

21:00.400 --> 21:04.960
exposing our users to what is a CRD, we just say like this is a service you can just use that,

21:04.960 --> 21:09.840
so you don't have to know about the internal stuff, it's not doing anything really fancy

21:09.840 --> 21:12.160
apart from just exposing a couple of APIs to users.

21:16.960 --> 21:22.720
It's not. At a certain point we thought we should open source it, we still are kind of working on it,

21:22.720 --> 21:27.840
but there is not really a commitment on top of it, but yeah.

21:30.320 --> 21:46.080
I mean it's very much related to our internals, what makes sense for us at our company, it might not

21:46.080 --> 21:51.280
make sense for others, but it's very much opinionated for us, for our use cases.

21:51.280 --> 22:01.760
I'd like to ask: is that HTTP metric generated by the load balancing proxy, or can you talk

22:03.360 --> 22:08.400
about a container, the app itself, where does it come from?

22:08.400 --> 22:16.320
So this expression we provide, like the http_requests_total, oh, so I mean, the load balancer itself,

22:16.400 --> 22:25.360
that's where it comes from, yeah, I mean, you can play with the PromQL there, so, yeah,

22:27.360 --> 22:30.880
yeah, I mean, these are standard, like you can have Prometheus metrics,

22:30.880 --> 22:36.640
sort of, most of these, like this one is, I think, one provided by Prometheus, but you can also have custom

22:36.640 --> 22:45.440
metrics. So it scrapes this from somewhere, where does it scrape it from? Is it the application itself?

22:46.400 --> 22:51.920
I mean, yeah, the proxy, the proxy has all the information from the application, so it gets the

22:51.920 --> 22:56.480
all this traffic information from that application. And so, when it is

22:56.480 --> 23:03.760
evaluating these metrics, is the violation of this expression some kind of indication

23:03.760 --> 23:11.040
of, or does it alert you to let you know? Yes, we do this automatically, so it is basically

23:11.040 --> 23:16.480
for automatic rollback, so as soon as we hit this condition for a certain amount of time,

23:16.480 --> 23:21.040
this is an indication that we are essentially not getting the data we want, and

23:21.040 --> 23:25.360
an indication that we should roll this back. So does it, like, does it notify you with a

23:25.360 --> 23:31.920
hey, we rolled this back? Yeah, I mean, yeah, there is something that we developed, which was developed

23:31.920 --> 23:36.720
after I created these slides, so it will essentially send a Slack notification to you as soon as it rolls back

23:37.680 --> 23:52.720
your service. Yeah, no, so it's, it's an internal tool, you can use it, I mean, it's a tool

23:52.720 --> 24:00.800
you can use, what's the, the word I'm forgetting, but it's kind of an IDP, like you take it,

24:01.600 --> 24:08.000
you take DOCC, or we have the DOCC server deployed, you can use the client to talk to those

24:08.000 --> 24:12.880
servers, and those will deploy; you can also use it in the pipeline as well, if you have a CI/CD pipeline,

24:12.880 --> 24:14.320
to deploy an application.

24:14.320 --> 24:36.320
Yeah, I mean, in case of a failure, we just roll back, so we have to, yeah, yeah,

24:36.400 --> 24:41.120
so like 5%, then 10%, how do you pick it, I'm looking for, at what percent.

24:41.120 --> 24:44.960
Yeah, I mean, in case of a failure, we just roll back the entire traffic to the stable version,

24:46.960 --> 24:51.520
yeah, so if there is no failure, then this is, like, canary, canary-based deployments, so

24:51.520 --> 24:57.520
it is all kind of used alongside canary deployments, so 5%, 10%, you see some sort of

24:57.520 --> 25:02.480
error at 15, going to 15%, you know, it's time for a rollback, back to 100% to the stable version.

25:03.440 --> 25:09.280
So, that it can fall back to, you mean? So yeah, there is a stable version, which is the older version,

25:09.280 --> 25:13.520
and the latest version is where we're trying to go, in case of a failure, we go to the stable version.

25:17.920 --> 25:24.080
So, so now, I mean, you have to, I guess, have to update the weights in those routes,

25:24.080 --> 25:27.920
and you said that this is managed by that DOCC server you mentioned in the beginning.

25:28.800 --> 25:36.160
Is it the application doing the updates to these manifests, or is the automatic, over-time,

25:36.160 --> 25:41.760
like, changing of the percentage over time, something that is already provided by the gateway,

25:41.760 --> 25:48.720
yeah, that's what I mean. So, the traffic splitting, this kind of thing, is

25:48.720 --> 25:51.920
specific to HTTPRoute, it's provided by the gateway.

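To make that concrete, here is the kind of HTTPRoute the DOCC server could be keeping up to date, combining the weighted canary split with the header-based A/B rule discussed earlier; the gateway, service, and header names here are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo
spec:
  parentRefs:
  - name: demo-gateway            # Gateway object owned by cluster admins
  rules:
  # synthetic A/B traffic: the header pins a request to the canary pods
  - matches:
    - headers:
      - name: x-app-version
        value: "1.24"
    backendRefs:
    - name: demo-canary
      port: 80
  # everything else: weighted canary split, stepped up over time
  - backendRefs:
    - name: demo-stable
      port: 80
      weight: 95
    - name: demo-canary
      port: 80
      weight: 5
```

At each step the controller only has to rewrite the weight fields; the split itself is carried out by the Envoy data plane.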
25:51.920 --> 25:56.080
Actually, because over time, you have to keep updating that, so my question is just,

25:56.080 --> 26:01.600
like, this over-time update of weights, is this managed by that server you mentioned, like, the DOCC server?

26:01.600 --> 26:05.680
Yeah, yeah, I mean, yeah, so it has a server that keeps updating the Kubernetes objects.

26:07.040 --> 26:13.520
The server talks to the gateway API, so that provides all the information to that API,

26:13.520 --> 26:18.880
that now the state of this deployment is this, whatever that state is,

26:18.880 --> 26:23.440
and you need to update. So, Kubernetes is like, you have a declarative state,

26:23.440 --> 26:25.440
you have a desired state, and you need to achieve it.

26:25.440 --> 26:30.240
Yeah, but who owns this desired state is what I'm saying, like, you have something updating the desired state,

26:30.240 --> 26:34.880
because you're going up in percentages over time. So, whether it is, like,

26:34.880 --> 26:40.080
these over time updates, these are, this is managed by your thing, or is this something

26:40.080 --> 26:46.320
built into the, into the, I don't know, whatever API, Gateway API, you know, whose controller you're using,

26:46.320 --> 26:50.000
I don't know, is this over-time thing part of the API?

26:50.160 --> 26:56.960
It's part of Gateway API. So, Gateway API allows, so, the Envoy proxy works

26:56.960 --> 27:04.080
to get the traffic split from that, from the server, right? And it's in front of all the applications,

27:04.080 --> 27:09.680
and that manages the splitting of traffic, right? So, essentially, it's the Envoy proxy

27:09.680 --> 27:15.200
doing the splitting of the traffic, and the gateway API managing those steps as time goes on.

27:15.760 --> 27:16.880
Did I answer your question?

27:16.960 --> 27:22.960
So, the thing I don't know is who updates the percentage, so, has it been

27:22.960 --> 27:29.520
an external system? Gateway API doesn't, there's a, yeah, the server, so it's a thing

27:29.520 --> 27:35.120
that talks to what else. What tells the Gateway API, okay, I mean, it's been five minutes,

27:35.120 --> 27:40.880
I've found no failures, move to the next percentage, that's the question there.

27:40.880 --> 27:45.680
Okay. I'm kind of not sure on that, like,

27:47.520 --> 27:55.840
the big question is about what portion is scheduling that update, is that, is that

27:55.840 --> 28:02.480
the, is that the DOCC, is the DOCC doing all the stuff, like, continuously telling

28:04.640 --> 28:09.440
that it's time to do a five percent increase, now the manifest needs to do a five

28:09.440 --> 28:12.160
percent increase, every time, so it's basically the DOCC server.

28:12.160 --> 28:15.520
Okay. All right. Okay.

28:15.520 --> 28:20.240
Okay. So the stack is partly open source things, and what is internal, all the other things, okay?

28:20.240 --> 28:22.880
Okay. I think I, I think I, sorry,

28:22.880 --> 28:29.120
sorry for keeping you any longer.

28:30.480 --> 28:33.840
All right. I think we can, yeah, then. Thank you. Thank you all for coming.

