WEBVTT

00:00.000 --> 00:15.520
My name is Thomas. I work at Red Hat. I'm going to be really hoarse after this. So, this talk

00:15.520 --> 00:22.560
is sort of a sideshow to Project Lilliput. Roman talked about that earlier. When we

00:22.560 --> 00:27.960
were working on Project Lilliput in the last couple of years, we saw we had to shrink the

00:27.960 --> 00:34.520
narrow class pointer down to 22 bits, as Roman has explained. And in the course of that work,

00:34.520 --> 00:40.680
we saw that we may, at some point in time, hit limits in the class-space-based model we have

00:40.680 --> 00:50.200
now. So, last summer, I set out to explore some alternative avenues. And that's the talk basically.

00:50.200 --> 00:55.960
So, you have to remember that the class space has been with us since its inception,

00:55.960 --> 01:01.720
basically forever. And it's fraying around the corners, it's showing its age.

01:02.760 --> 01:07.000
And sometimes it's good to step back and look at what you are doing, whether you should

01:07.000 --> 01:12.440
continue doing it, or whether there are alternatives, and whether the trade-off still makes sense.

01:13.720 --> 01:22.600
Even if you end up doing the same in the end. So, a small recap on the class space: objects

01:22.600 --> 01:29.880
live in the heap, obviously. And if the JVM wants to do something with an object, it needs to

01:29.880 --> 01:34.680
know things about that object. And many of these things are recorded in the class structure.

01:34.680 --> 01:40.520
The Klass structure — written with a capital K — is a native data structure. And it lives in Metaspace,

01:40.520 --> 01:45.720
or class space — two sides of the same coin, really — and contains things like the vtable,

01:45.720 --> 01:51.880
the itable, and so on. So, every object contains a reference to the Klass structure of its class.

01:53.080 --> 01:59.320
And the shape of this reference matters, because it has to be small — it goes into every object.

01:59.320 --> 02:04.760
So, every bit matters. And decoding has to be fast, because we do this all the time, especially

02:04.760 --> 02:14.920
during GC. Now, we could just put the full pointer into the header. We don't do that, because

02:14.920 --> 02:20.760
it's a bit silly, and it wastes memory — at least on 64-bit; on 32-bit we do that. So,

02:24.680 --> 02:28.920
what you do instead — there is a mode, actually, in the JVM; you can enable this if you

02:28.920 --> 02:33.640
deliberately switch off compressed class pointers, and then you have this mode,

02:33.640 --> 02:38.680
but I actually think this mode shouldn't exist anymore, we should deprecate it. And what we

02:38.680 --> 02:43.880
instead do, by default, is a trade-off between size and decoding speed. We have the so-called

02:43.960 --> 02:49.400
narrow class ID or narrow class pointer. The narrow class pointer is a 32-bit offset of the

02:49.400 --> 02:54.840
address of the Klass structure relative to a common base, which we add, and then you have the

02:54.840 --> 03:01.640
class pointer. There is also an optional shift of three, which I will ignore for now, because it's

03:01.640 --> 03:06.680
basically not used anymore, at least not in practical configurations, because it doesn't work with CDS,

03:06.760 --> 03:15.240
and everyone wants CDS. So, using this technique means that you have a four-gigabyte range —

03:15.240 --> 03:20.920
you have to confine the location of your Klass structures to a four-gigabyte range, or the offset

03:20.920 --> 03:27.800
would overflow. And we do that: the range starts at the encoding base, and in this encoding range,

03:27.800 --> 03:35.320
we place the CDS archive and the class space, and all Klass structures live in there. And the distance, the

03:35.320 --> 03:39.480
pointer difference between the Klass structure and the encoding base, is the narrow class ID

03:39.480 --> 03:45.880
we put into the object. And this technique — this is actually pretty smart — because

03:45.880 --> 03:53.160
this technique allows us to translate the narrow class ID into the real class pointer without any

03:53.160 --> 03:58.520
additional loads. It's just one load to load the narrow class ID; everything else can

03:58.520 --> 04:04.520
happen on the CPU because, at least from the point of view of the JIT, the encoding base is a constant,

04:04.600 --> 04:08.760
so we can encode it directly in the instruction stream as some form of immediate,

04:08.760 --> 04:18.200
which we do. With Lilliput, things changed a bit: the narrow class ID shrunk down to 22 bits

04:18.200 --> 04:25.640
and moved into the mark word, as Roman has explained. You can see an advantage of Lilliput right here,

04:25.640 --> 04:32.360
because now we can load both the mark word and the narrow class ID in a single 64-bit load, and that

04:32.360 --> 04:39.960
actually shows in the metrics. That's one of the many advantages. And 22 bits

04:39.960 --> 04:44.600
give you only a coverage of four megabytes, and that would be a small class space. So what we do

04:44.600 --> 04:51.160
instead is we now have an obligatory shift of 10. The reality is a bit more complex, so the shift

04:51.160 --> 04:56.760
can be smaller, but let's go with 10. And every narrow class ID is now shifted by ten;

04:57.720 --> 05:02.840
in effect this means that every class structure has to be located at an address that is

05:02.840 --> 05:08.120
one-kilobyte aligned. So we have an alignment shift of 10 bits, which we then just don't

05:08.120 --> 05:15.640
store. And that works — this is what we ship with Lilliput.
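
The base-plus-shifted-offset arithmetic described above can be sketched in a few lines of Java. This is purely an illustration of the scheme — not HotSpot code; the names are mine.

```java
// Sketch of narrow class pointer encoding/decoding as described in the talk.
// Illustrative only -- not HotSpot's actual implementation.
public class NarrowKlassDecode {

    // Decode: one add (plus possibly a shift). The base is a JIT-time
    // constant, so no extra memory load is needed -- just the one load
    // that fetched the narrow value from the object.
    static long decode(long base, int narrowKlass, int shift) {
        return base + ((long) narrowKlass << shift);
    }

    // Size of the encoding range covered by a given bit width and shift.
    static long range(int bits, int shift) {
        return 1L << (bits + shift);
    }
}
```

With 32 value bits and shift 0 (pre-Lilliput) the covered range is 4 GB; with Lilliput's 22 bits and a shift of 10 it is the same 4 GB, bought at the price of 1 KB alignment.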

05:15.640 --> 05:22.120
The only problem is — well, there are two disadvantages. The first one, as you can see, is that we impose

05:22.120 --> 05:27.000
a cadence of sorts — a one-kilobyte cadence — on the encoding range: every Klass structure lives

05:27.000 --> 05:34.200
there and only there, unless you don't want to decode the pointer. And the first issue is obvious:

05:34.200 --> 05:39.640
you now have alignment waste. Every Klass structure is followed by a block of memory of varying

05:39.640 --> 05:46.040
size — Klass structures vary in size — and we cannot use this memory for Klass structures. So

05:46.040 --> 05:52.360
we solved this problem by recycling this memory for data that would otherwise

05:52.360 --> 06:01.080
live somewhere else, so the net footprint cost is zero — plus-minus zero. And that data is also

06:01.080 --> 06:07.160
metadata; it has the same lifetime and scope as the class, so it's fine. This really works,

06:07.160 --> 06:13.080
so that problem is solved. The other problem is the hyper-alignment problem, and I will come to that.

06:13.640 --> 06:23.320
So why should we change? There are several reasons. Apart from the hyper-alignment issue,

06:23.320 --> 06:29.320
we have a class limit that we are sneaking up on with every further step. For instance, for Lilliput,

06:29.320 --> 06:34.840
we reduced the narrow class ID, and the size of the narrow class ID is really the class limit —

06:34.840 --> 06:39.880
it's not the size of the class space, it's the bit size of the narrow class ID.

06:40.280 --> 06:45.960
So with Lilliput, we are now at four million, and it goes down further, and — I don't know —

06:45.960 --> 06:51.160
it gets tight. It's not a problem yet, so it's not really an urgent problem, but we need to have

06:51.160 --> 06:57.720
a plan, and we have a plan, and I will present that plan at the end. The other problem is that

06:57.720 --> 07:06.600
the narrow class ID setup in CDS, and the class space encoding setup, is really complex. And it's not fun

07:06.600 --> 07:13.400
complex, it's annoying complex, it's brittle, and maintenance heavy, so we have lived with this

07:13.400 --> 07:19.480
problem for quite some while, this code exists for 15 years, we can continue to live with it,

07:20.280 --> 07:26.680
but if the opportunity arises to not have a class space, this would be good; we could shed a lot

07:26.680 --> 07:35.240
of complexity, honestly. The setup is complex for a number of reasons, and unfortunately, when I was

07:35.320 --> 07:39.640
preparing this, I had like six slides on that, and I don't have time for them — talks

07:39.640 --> 07:45.880
are short — so suffice it to say there is a basic problem behind all of them, and the basic problem

07:45.880 --> 07:52.920
is that the numerical shape of the narrow class ID, and of the encoding base is tightly

07:53.640 --> 07:59.480
coupled with allocator mechanics, because the narrow class ID is part of an address, and that

07:59.560 --> 08:06.680
address is the product of an allocator, the metaspace allocator, and that one does allocator things,

08:06.680 --> 08:12.440
like it has a buddy allocator in the middle, it does free-list management, and so on. So it's already

08:12.440 --> 08:17.400
kind of hard to predict and influence the shape of the narrow class ID, if you want to do that,

08:17.400 --> 08:23.080
and we maybe want to do that. Next thing is, below the allocator, there's a virtual memory layer,

08:23.720 --> 08:29.240
and below that an operating system. So in the end, eventually, we have mmap calls to the

08:29.240 --> 08:34.840
operating system, and we are subject to the whims of the operating

08:34.840 --> 08:41.560
system — like address space layout randomization, ASLR, and whatnot. And so if you want to do things

08:42.760 --> 08:48.360
like — say you want to have a certain bit shape for the encoding base, maybe because your

08:48.440 --> 08:57.000
particular JIT, or your particular ISA, struggles with representing arbitrary 64-bit

08:58.680 --> 09:08.280
immediates — then you need to fight this out with the allocator and with the underlying OS. And

09:08.280 --> 09:13.960
it's important to remember that this is performance-driven: we do things the way we do

09:13.960 --> 09:18.520
because they are fast. There is a simpler way, and I will present one.

09:22.120 --> 09:29.240
So, the hyper-alignment problem. Klass structures are aligned to one kilobyte; it's fine, it works,

09:29.240 --> 09:37.560
but the problem is, now, with a typical cache line size of 64 bytes, we lose four bits for selecting a cache line,

09:38.200 --> 09:44.760
and that means we have more cache conflicts now. If you walk the heap and you touch different

09:44.760 --> 09:49.400
Klass structures, you have a higher chance of evicting another Klass structure's lines. We knew —

09:49.400 --> 09:53.960
Roman and I, we knew that this effect existed, we didn't really have time to focus on it,

09:53.960 --> 10:01.960
because Lilliput really needed our attention elsewhere. And clearly — actually,

10:02.280 --> 10:07.960
Roman didn't really touch on that topic, but Lilliput brings a lot of positive effects also

10:07.960 --> 10:13.960
in terms of memory bandwidth, very much so. So clearly, whatever effect the hyper-alignment

10:13.960 --> 10:20.120
has is much more than outweighed by the positive effects of Lilliput, because things in the heap

10:20.120 --> 10:25.160
are now smaller, and you can basically use your memory bandwidth much more efficiently.
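
To make the lost four bits concrete, here is a small sketch (my own illustration, not JVM code): with 1 KB aligned Klass structures and 64-byte cache lines, a Klass can only ever start on every 16th cache line.

```java
import java.util.HashSet;
import java.util.Set;

// Hyper-alignment illustration: 1 KB alignment / 64-byte lines means
// Klass start addresses use only 1 of every 16 cache line offsets
// (10 alignment bits - 6 cache line bits = 4 bits lost).
public class HyperAlignment {
    static final int ALIGNMENT = 1024; // Klass alignment under Lilliput
    static final int LINE = 64;        // typical cache line size

    // Distinct cache-line offsets (within one alignment granule) that the
    // first `count` possible Klass start addresses can occupy.
    static Set<Long> reachableLineSlots(int count) {
        Set<Long> slots = new HashSet<>();
        long linesPerGranule = ALIGNMENT / LINE; // 16
        for (long addr = 0; addr < (long) count * ALIGNMENT; addr += ALIGNMENT) {
            slots.add((addr / LINE) % linesPerGranule);
        }
        return slots; // always just {0}
    }
}
```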

10:25.400 --> 10:34.440
So the question is: how much better could we be? Last summer I set aside some time to

10:34.440 --> 10:39.720
look at this, and first I tried a couple of general purpose benchmarks. The problem here is that

10:39.720 --> 10:45.960
general-purpose benchmarks are not always, but most of the time, they tend to drown out whatever you

10:45.960 --> 10:53.000
want to measure with general mutator and VM noise, and so this was getting me nowhere, and then I was

10:53.080 --> 10:57.720
at a junction of some sort, I could say, okay, you don't see anything in general purpose benchmark,

10:57.720 --> 11:02.920
you can just throw away the class space — and I wasn't really daring enough to do that. There were actually

11:02.920 --> 11:08.200
people who said this, but the problem here is always: you as a JVM vendor, you never know who's

11:08.200 --> 11:13.880
using your stuff. I mean, you never can be sure whether a significant portion of the user base will

11:13.880 --> 11:21.160
be hurt by a change. And there are existing optimizations in the JVM which seem to indicate that

11:21.240 --> 11:30.280
this is kind of important. So what I did was, I wrote a small micro benchmark in order to mimic

11:30.280 --> 11:37.320
the situations in which these optimizations are important. The thing is rather simple, you

11:37.320 --> 11:45.320
populate the heap with a ton of objects, the objects try to mimic the distribution of object features

11:45.320 --> 11:49.960
in a realistic population — like the distribution of object sizes, the distribution of

11:49.960 --> 11:56.040
oop map sizes, and so on. But they belong to a randomly selected class out of a set of classes,

11:56.040 --> 12:01.560
and clearly — the size of the set is the benchmark parameter — the larger the set,

12:01.560 --> 12:06.280
the more adverse the cache effects when you trace the heap, because you touch different classes.

12:07.080 --> 12:12.600
And also clearly the higher the number of classes, the more unrealistic it gets, because

12:13.080 --> 12:19.720
no real heap population is really like this. But it really exposes bad cache behavior really

12:19.720 --> 12:29.640
great. So, the first thing I tried was non-power-of-two alignment. This is a

12:29.640 --> 12:36.920
very simple idea: you make the Klass alignment not a power-of-two value but a non-power-of-two value —

12:37.240 --> 12:41.720
more specifically, you take an odd number of cache line sizes. In my case I just

12:41.720 --> 12:48.920
hard-coded 11 cache lines, 704 bytes. That means if you touch different classes, you touch different cache lines.
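
A sketch of the idea (my illustration — the actual patch surely differs in detail): with a stride of 11 cache lines, 704 bytes, consecutive Klass slots cycle through every cache line index, because 11 is coprime to any power of two.

```java
import java.util.HashSet;
import java.util.Set;

// Non-power-of-two alignment sketch: class slots strided by 11 cache lines.
public class OddAlignment {
    static final int LINE = 64;
    static final int STRIDE = 11 * LINE; // 704 bytes

    // Decode is add + integer multiply instead of add + shift.
    static long decode(long base, int narrowKlass) {
        return base + (long) narrowKlass * STRIDE;
    }

    // How many distinct cache line indices (mod `lines`) do the first
    // n slots touch?
    static int distinctLines(int n, int lines) {
        Set<Long> seen = new HashSet<>();
        for (int i = 0; i < n; i++) {
            seen.add((decode(0, i) / LINE) % lines);
        }
        return seen.size();
    }
}
```

With the old 1 KB stride (16 lines), 64 consecutive slots touch only 4 of 64 line indices; with the 11-line stride they touch all 64.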

12:48.920 --> 12:55.400
Eventually, every cache line gets used; the Klass structures are nicely distributed. The decode is not an add and shift anymore,

12:55.400 --> 13:00.360
it's an add and an integer multiplication, but this is only like three more cycles depending on the

13:00.360 --> 13:07.240
CPU. But I have to say, while I was writing it, I was already kind of hating this patch,

13:07.240 --> 13:15.480
because dealing with non-power-of-two alignment is no fun. It's really awkward. So, the results are

13:15.480 --> 13:23.640
interesting. Blue is the prototype, black

13:23.640 --> 13:27.960
is stock. The level-one misses rise similarly steeply, but for the prototype they start rising at

13:27.960 --> 13:33.160
a later stage. Then I thought: okay, I can work with this. Now I have numbers. Now I know

13:33.880 --> 13:41.240
how much hyper-alignment costs, because everything else was the same. I didn't really like this

13:41.240 --> 13:49.080
patch, though. It has a second problem, which is that deep within Metaspace, we have a power-of-two-based

13:49.080 --> 13:55.080
buddy-style allocator, and if you throw allocation sizes at it that are large —

13:55.080 --> 14:00.600
because Klass structures are large — and with alignment requirements that are not power-of-two-based, it starts

14:00.600 --> 14:05.800
choking, so you get fragmentation. This can be solved — I know I can solve it — but

14:06.600 --> 14:12.920
at that point I was kind of, I have something, I can do it, and it's fine, you know,

14:12.920 --> 14:20.040
if nothing else works. So, the next alternative is a Klass pointer indirection table.

14:20.680 --> 14:26.760
Very quickly — the idea comes up over and over again; Coleen Phillimore, for one,

14:26.760 --> 14:34.360
brings this up. And the idea here is that, instead, we don't have a class space. What we do

14:34.360 --> 14:40.040
is we just place pointers into a lookup table, and every narrow class ID is a slot —

14:40.040 --> 14:46.680
a slot index into the lookup table. Two advantages: first, no class space, a lot less complexity —

14:46.760 --> 14:53.160
this is really good. It solves hyper-alignment as well, almost excellently, because now the

14:53.160 --> 14:58.520
Klass structures can live wherever, so they can be aligned however, whatever. The big disadvantage is now you

14:58.520 --> 15:06.280
have one more load on the hot decode path, and this shows. We can clearly see, with

15:06.280 --> 15:11.320
Lilliput, that first we have a benefit, because our hyper-alignment is avoided, and then it

15:11.320 --> 15:16.920
pivots into negative territory, and then it just gets progressively worse.

15:16.920 --> 15:23.560
So, this was already not good, obviously. At this point, the costs of the load outweigh

15:23.560 --> 15:30.040
the benefits of the avoided hyper-alignment. And for non-Lilliput, of course, you don't have a benefit.

15:30.040 --> 15:34.520
You only have the costs, so it's bad, it's worse from the beginning, and the thing is,

15:34.520 --> 15:38.760
this approach only makes sense if you do it for both Lilliput and non-Lilliput, because otherwise

15:38.760 --> 15:43.480
you don't have a complexity benefit. It's just only more complicated than what we already do.
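
The indirection-table idea can be sketched like this (illustrative Java, not the actual proposal's data structures): the narrow class ID stops being an address offset and becomes a plain slot index, at the cost of one extra dependent load on decode.

```java
// Klass pointer indirection table sketch: narrow class ID as a table index.
public class KlassIndirectionTable {
    // One slot per loaded class. Klass structures can now live anywhere,
    // at any alignment -- no class space, no hyper-alignment.
    // (A real table would have 1 << 22 slots; kept small here.)
    static final long[] TABLE = new long[1 << 16];

    static int register(int slot, long klassAddress) {
        TABLE[slot] = klassAddress;
        return slot; // this index is what the object header stores
    }

    // Decode is now one extra dependent memory load -- the cost that
    // showed up in the measurements.
    static long decode(int narrowClassId) {
        return TABLE[narrowClassId];
    }
}
```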

15:45.080 --> 15:50.600
The number of level-one loads goes up, flat, by 5% — that is the extra load. So, okay,

15:51.480 --> 15:55.720
I didn't like this either. I actually like the idea a lot — I would love to get rid of the class space —

15:55.720 --> 16:02.920
but it's, I think, it's too expensive, at least for now. So, and then I, I was getting impatient,

16:02.920 --> 16:08.520
I didn't have time anymore, and so I tried something different. I looked at how we

16:08.520 --> 16:14.040
actually access metadata during oop iteration. Oop iteration is when the GC traces the heap:

16:14.040 --> 16:18.280
it looks at an object, tries to figure out where the referents are, and then traces these,

16:18.280 --> 16:23.080
and maybe does things with the object. So, it goes: it loads the narrow class

16:23.080 --> 16:28.280
ID, then it decodes it — now we have the class pointer, now we know where the Klass structure is.

16:29.240 --> 16:34.840
Now it loads a ton of data from the class structures, it's distributed, all over the class structure,

16:34.920 --> 16:40.360
and so we hit — we don't always load everything, but we load at least the oop map,

16:40.360 --> 16:44.920
and that one is at the end of a variable-sized section, so we need to figure out where that one is.

16:45.880 --> 16:49.960
And it's a ton of loads from different cache lines, up to 7 different cache lines,

16:49.960 --> 16:55.880
if I counted correctly. And, as I said, more loads — the Klass structure is large, so

16:55.880 --> 17:00.520
you will never hit the same cache line from one class to the other. So, the question is:

17:00.600 --> 17:07.240
can we do this better? To answer: yes, we can. So, the new approach I played around with was:

17:07.240 --> 17:13.320
when you load a class, you pre-compute a little token — a 32-bit token — and that token

17:13.320 --> 17:19.400
contains all the information I need for oop iteration. And that seems weird, because that's

17:19.400 --> 17:23.800
a lot of information — that's like 70 bytes. How do you get that into four bytes? But we can.

17:24.520 --> 17:29.400
The point here is that this compression doesn't have to always work, it's an optimization,

17:29.400 --> 17:33.720
if it doesn't work, the information is still in the Klass structure; you look it up in the class space. So —

17:34.520 --> 17:39.800
but it has to work for most cases to make it worthwhile. And in Java, most objects are small

17:39.800 --> 17:45.320
and simple, so they have a small instance size, and very few oop map entries, really.

17:46.840 --> 17:52.440
So, the patch works, this particular thing, if the instance size can be statically computed,

17:52.440 --> 17:57.960
which is almost always the case, unless it's a java.lang.Class instance, or something like that;

17:58.680 --> 18:03.560
the instance size is no more than 512 bytes; the number of oop map entries is less than 3;

18:03.560 --> 18:07.240
and the class was loaded by one of the three built-in loaders — then we get the bonus.
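
As an illustration of how such a token could be laid out — this is a hypothetical bit layout, not the prototype's actual encoding — a statically known instance size plus up to two small oop map entries fit comfortably into 32 bits:

```java
// Hypothetical 32-bit CLUT token layout:
// [valid:1][sizeWords:9][oopOff0:8][oopCnt0:3][oopOff1:8][oopCnt1:3]
public class ClutToken {

    // Returns 0 (invalid) if the class doesn't fit the fast-path criteria;
    // iteration then falls back to the Klass structure.
    static int pack(int sizeWords, int off0, int cnt0, int off1, int cnt1) {
        if (sizeWords >= (1 << 9) || off0 >= (1 << 8) || off1 >= (1 << 8)
                || cnt0 >= (1 << 3) || cnt1 >= (1 << 3)) {
            return 0;
        }
        return 1 | sizeWords << 1 | off0 << 10 | cnt0 << 18 | off1 << 21 | cnt1 << 29;
    }

    static boolean valid(int token)  { return (token & 1) != 0; }
    static int sizeWords(int token)  { return token >>> 1 & 0x1FF; }
    static int oopOffset0(int token) { return token >>> 10 & 0xFF; }
    static int oopCount0(int token)  { return token >>> 18 & 0x7; }
}
```

Unpacking is pure register arithmetic — no memory access beyond the one load that fetched the token.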

18:09.400 --> 18:15.080
And it turns out this works for like 96 to 99% of all objects in a given heap population,

18:15.080 --> 18:20.280
and I did a ton of different tests, on various different applications, like SPECjbb,

18:20.280 --> 18:24.360
and JetBrains tools, and so on. So, this actually turns out to work well.

18:25.320 --> 18:29.000
There's a little trick involved — for those who wonder about the oop map thing.

18:30.280 --> 18:38.600
So, oop maps describe where in the object the references to other objects are. And if you lay out a normal object,

18:38.600 --> 18:44.680
the JVM clusters the object references together. So, for normal, simple objects, you only have a single oop map entry.

18:45.000 --> 18:52.840
You only get multiple oop map entries when you have a child of a parent class, because the layout of the parent-class part is now immutable.

18:53.720 --> 18:59.960
But what you can do is — for every hierarchy level in this hierarchy, when you build up the class,

18:59.960 --> 19:06.360
in the field layout, we can just alternate the order between oops and non-oops. And now you have —

19:06.360 --> 19:12.680
this is like 10 lines of code — now you have a higher chance of oop sections ending up

19:12.680 --> 19:17.000
near each other, and then the JVM will just create a single oop map entry.

19:17.640 --> 19:20.360
That's how we get the average number of oop map entries down.
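
The alternation trick can be simulated in a few lines (a toy model, not the real field layouter): 'O' is an oop field, 'P' a primitive field, and one oop map entry corresponds to one contiguous run of O's.

```java
// Toy model of the field layout trick: alternating the oops/primitives
// order per hierarchy level lets adjacent levels' oop sections merge.
public class OopMapRuns {

    // levels[i] = {oopCount, primCount} for hierarchy level i.
    static String layout(int[][] levels, boolean alternate) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < levels.length; i++) {
            String oops = "O".repeat(levels[i][0]);
            String prims = "P".repeat(levels[i][1]);
            boolean oopsFirst = !alternate || i % 2 == 1;
            sb.append(oopsFirst ? oops + prims : prims + oops);
        }
        return sb.toString();
    }

    // Number of oop map entries = contiguous runs of 'O'.
    static int oopMapEntries(String layout) {
        int runs = 0;
        for (int i = 0; i < layout.length(); i++) {
            if (layout.charAt(i) == 'O' && (i == 0 || layout.charAt(i - 1) != 'O')) {
                runs++;
            }
        }
        return runs;
    }
}
```

With a parent of two oops and two primitives and a child of one of each, the naive order gives two oop map entries ("OOPPOP"), the alternating order only one ("PPOOOP").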

19:21.320 --> 19:28.120
So, that's the token. We store this thing into a side table. That side table I call CLUT —

19:28.120 --> 19:32.200
a class lookup table — and the index into that table is the narrow class ID.

19:33.560 --> 19:38.520
And iteration now looks like this. We load the narrow class ID from the object,

19:39.080 --> 19:44.040
and we don't need to decode it — no decode needed. What we do is, we load the

19:44.440 --> 19:49.080
token from the CLUT, and then we are done. The token contains all the information

19:49.080 --> 19:54.600
I need for iteration. I need some bit fiddling to get it out, but that can all happen in the CPU —

19:54.600 --> 19:58.600
no second, no third memory load, if you disregard the memory load of the

19:58.600 --> 20:04.600
CLUT base. And that's just two loads from two different cache lines. And there's actually a very good

20:04.600 --> 20:11.800
chance of different loads hitting the same cache line, because 16 —

20:11.960 --> 20:20.440
16 tokens fit into a single cache line. So — results. The metrics look a lot better.

20:20.440 --> 20:26.440
The red line down there is CLUT. So, you can see that the response to a rising number of classes —

20:26.440 --> 20:34.600
the level-one misses — is a lot flatter. And not only that, we also have a much

20:34.600 --> 20:41.080
lower number of level-three loads. And both of them together mean that you do

20:41.160 --> 20:49.320
far fewer loads — much fewer loads — than before. It's like 50 to 60% fewer loads. And of those

20:49.320 --> 20:54.040
loads that actually happen, most of them are satisfied by the level-one cache — a lot more than with

20:54.040 --> 21:01.240
the other approaches. So, I think this handily solves the hyper-alignment issue we had —

21:01.240 --> 21:06.920
oversolves it, really. So, this was very good. GC pauses went down:

21:07.000 --> 21:13.480
in the micro, they went down for all of the approaches, but with CLUT the best. And I was happy to see that,

21:13.480 --> 21:17.960
if you squint really hard, you can actually see some metrics I expected to

21:17.960 --> 21:24.600
go down. There's a grain of salt here, since the standard deviation is 5% to 7%;

21:24.600 --> 21:30.680
the standard deviation for the micro was like 1%. So here, this definitely has to be confirmed

21:30.680 --> 21:36.440
with more measurements, but at least it gives me hope. So, for me, the conclusion: this is the best

21:36.440 --> 21:42.040
performer of all, by far. And it's not that complex, because it

21:42.040 --> 21:49.320
just replaces the oop iteration. So, it's fine. And the important part here is that

21:49.320 --> 21:53.960
it actually complements Coleen's idea — the indirection table — very well,

21:53.960 --> 21:59.720
because that idea, the indirection table, suffers from poor decoding speed because of the extra load.

21:59.800 --> 22:06.600
But if you don't need to decode — who cares? So, of course, this is only for iteration —

22:06.600 --> 22:13.560
oop iteration — and in C++. But maybe we can adjust this for other things.

22:14.600 --> 22:19.400
So: CLUT first. I would love to bring CLUT in; I think there is no argument against it.

22:19.400 --> 22:23.400
And then maybe the indirection table thing — at least it's interesting. And then,

22:23.400 --> 22:27.080
maybe in the end, we don't have a class space. Then: goodbye, class space, after 15 years.

22:30.680 --> 22:34.760
Okay, very quick — running out of time — a completely different topic, of course.

22:35.160 --> 22:38.600
The number of classes is limited by the narrow class ID.

22:39.880 --> 22:46.440
Today, in the stock JVM, it's a 5 to 6 million class limit. Lilliput brings us down to four million;

22:47.640 --> 22:53.320
a hypothetical Lilliput 2, with 19-bit class IDs, would put us at 525k.
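
The quoted limits follow directly from the bit width of the narrow class ID (ignoring reserved values and the additional limits imposed by the class space size itself):

```java
// The class limit is the number of distinct narrow class ID values.
public class ClassLimit {
    static long maxClasses(int narrowClassIdBits) {
        return 1L << narrowClassIdBits;
    }
}
```

2^22 is roughly the four million quoted for Lilliput; 2^19 is the 525k for a hypothetical Lilliput 2.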

22:53.320 --> 22:58.680
Things get tight. Even if you don't do Lilliput 2, you may at some point in time — there's always

22:58.760 --> 23:04.360
the chance that you may want to repurpose narrow class pointer bits for something else,

23:04.360 --> 23:09.800
because the room in the mark word gets smaller. And there is a plan — we have a plan for this,

23:09.800 --> 23:14.120
and this is actually very cool. The plan is not from me, but from John Rose, as far as I recall.

23:14.680 --> 23:21.560
And it's the near-class/far-class idea. Basically, the idea here is that you have near classes,

23:21.560 --> 23:27.000
which are normal classes, and you have the narrow class ID as described. But if you hit the limit —

23:27.080 --> 23:32.520
the limit of what a narrow class ID can represent — you have a way to deal with this by

23:32.520 --> 23:39.160
offloading: injecting the class pointer into the middle of the object. The specifics

23:39.160 --> 23:45.400
of that idea are on the mailing list, and you can read it up there. And I kind of hope, maybe, I get time to

23:45.400 --> 23:50.440
implement a prototype next year, at least so that we know it works, and so that we also can measure

23:50.600 --> 23:57.480
how much it costs. Lastly: thanks, Roman, and thanks, colleagues —

23:57.480 --> 24:03.080
working on Lilliput has been really fun. And, yeah, I'm actually done.

24:04.120 --> 24:05.080
Thank you.

24:05.080 --> 24:09.240
Thank you very much.

