WEBVTT

00:00.000 --> 00:29.080
Okay, hi everyone, I'm Johan, so this will be a presentation on Native Memory Tracking,

00:29.080 --> 00:32.520
a tool, and on extending NMT beyond HotSpot.

00:32.520 --> 00:36.600
So I guess the first question everyone has: native memory tracking, what is that?

00:36.600 --> 00:42.240
Well, native memory tracking is a subsystem that we have in the HotSpot virtual machine today.

00:42.240 --> 00:46.040
We usually shorten it to NMT, so I'll be saying NMT over and over again.

00:46.040 --> 00:47.040
What does it do?

00:47.040 --> 00:50.280
It groups native allocations into categories.

00:50.280 --> 00:56.320
So if you think about it: you perform a malloc call and you supply some category along

00:56.320 --> 00:57.320
with your malloc call.

00:57.320 --> 01:01.760
For example, you can say this category is part of the C2 compiler.

01:01.760 --> 01:06.360
Okay, so we have the native allocations grouped into categories, what do we do with them?

01:06.360 --> 01:11.120
Well, we present statistics regarding these allocations to the user.

01:11.120 --> 01:17.000
You do this by, for example, running jcmd with the native memory tracking command,

01:17.000 --> 01:18.200
and you get the report.

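In practice, the report mentioned here comes from the jcmd tool; a sketch of the invocation (the pid is a placeholder, and NMT must be enabled when the JVM starts):

```shell
# Start the JVM with NMT enabled (summary or detail mode):
java -XX:NativeMemoryTracking=summary -jar app.jar

# Then, from another terminal, ask the running VM for a report:
jcmd <pid> VM.native_memory summary
```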
01:18.200 --> 01:20.080
What do you want to do with the report?

01:20.080 --> 01:26.520
You want to check stuff like my memory usage is increasing a lot, but I'm pretty sure that my

01:26.520 --> 01:29.960
Java application is fine, fully functioning.

01:29.960 --> 01:33.600
So I think it's the VM that's got some sort of memory leak, for example.

01:33.600 --> 01:39.480
Then you can start looking at the native memory allocations to see: hey, is my hypothesis

01:39.480 --> 01:40.760
correct?

01:40.760 --> 01:42.880
So what does it look like?

01:42.880 --> 01:44.640
Well, it looks something like this.

01:44.640 --> 01:49.080
So now I don't have a laser pointer, and I guess you can't see my mouse pointer.

01:49.080 --> 01:54.200
But you can see stuff like the total reserved memory.

01:54.200 --> 01:59.040
You can see the categories here: you have, like, a Java heap category, you can see how

01:59.040 --> 02:06.480
much is mapped in; you have the class category, you can see the number of mallocs

02:06.480 --> 02:07.480
it's done.

02:07.480 --> 02:14.280
It's done, it's done a thousand mallocs, and overall it has malloc'd 424 kibibytes.

02:14.280 --> 02:16.120
You can also see, you know, all that mmap stuff.

02:16.120 --> 02:20.200
And then we have sometimes we have special stuff, like how many classes have you loaded

02:20.200 --> 02:21.200
in?

02:21.200 --> 02:23.200
665 classes.

02:23.880 --> 02:27.280
OK, so we talked about extending it, right?

02:27.280 --> 02:28.360
That's the title of the talk.

02:28.360 --> 02:29.200
So what's going on?

02:29.200 --> 02:34.160
Well, native memory tracking today: we've got HotSpot native memory tracking, which is in the

02:34.160 --> 02:35.960
Java virtual machine, right?

02:35.960 --> 02:39.880
But we don't have it for the core libraries. With the core libraries,

02:39.880 --> 02:46.640
we have a bunch of Java libraries, for which we then write C code, via JNI.

02:46.640 --> 02:50.640
And there you have, for example, the libzip bindings.

02:50.680 --> 02:55.200
There's, you know, maybe you use, like, the Inflater/Deflater stuff in Java.

02:55.200 --> 02:57.280
That's actually calling out to C.

02:57.280 --> 02:58.920
That's stuff that we've written.

02:58.920 --> 03:03.800
We don't track those memory allocations, even though they're native, right?

03:03.800 --> 03:09.360
We've got the new Foreign Function and Memory API, which lets us do this C-Java interop

03:09.360 --> 03:11.800
stuff without JNI, right?

03:11.800 --> 03:15.040
And we don't do any tracking on this either.

03:15.040 --> 03:18.520
Finally, we have the third-party libraries.

03:18.560 --> 03:27.280
So if you think about when you are writing your user application and you write

03:27.280 --> 03:29.400
JNI bindings to something, right?

03:29.400 --> 03:34.320
Then that third-party library that you're interfacing with might be doing native

03:34.320 --> 03:35.320
allocations.

03:35.320 --> 03:37.680
We don't track those either.

03:37.680 --> 03:45.520
So we're going to imagine a new tomorrow where you get this native memory tracking in HotSpot.

03:45.520 --> 03:49.840
You get it for core libraries and you get it for FFM.

03:49.840 --> 03:53.360
But you don't get it for the third-party libraries.

03:53.360 --> 03:58.880
And if you think back to what I was saying earlier, that this whole category thing, that's

03:58.880 --> 04:00.920
something that you were supplying, right?

04:00.920 --> 04:03.960
So you kind of, this is kind of like a cooperative process.

04:03.960 --> 04:08.960
So asking third-party libraries, C libraries which we have no control over, to do that

04:08.960 --> 04:10.360
might not be so simple.

04:10.360 --> 04:13.680
It depends on their design essentially.

04:13.680 --> 04:19.040
So when I try to implement something, I typically ask myself a very simple question.

04:19.040 --> 04:21.440
What's the minimal change required for this feature?

04:21.440 --> 04:27.560
And I kind of want to go on a small journey with you where we kind of try to figure that out.

04:27.560 --> 04:31.840
So in order to implement the feature, you kind of typically have to know some of the

04:31.840 --> 04:34.440
internals of what you're implementing, right?

04:34.440 --> 04:41.640
So I've been talking a lot about categories in NMT in hotspot, we call these mem tags.

04:41.640 --> 04:45.520
So when you're doing a malloc, you supply a mem tag.

04:45.520 --> 04:51.520
These mem tags can also be seen as indices into a statically sized array.

04:51.520 --> 04:57.040
So you're going to allocate some array; each entry in this array has the statistics for

04:57.040 --> 04:59.680
a specific category.

04:59.680 --> 05:04.840
Mem tags, they're put into each malloc at the start via a header data structure.

05:04.840 --> 05:08.560
So you can see that we have this header here, that's 16 bytes long.

05:08.560 --> 05:13.360
Sixteen bytes, specifically, because of what malloc requires your allocations to be.

05:13.360 --> 05:21.520
I think it guarantees that allocations are 16-byte aligned, and we want to keep that guarantee.

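The header layout being described can be sketched roughly like this; the field names are invented for illustration and are not HotSpot's actual MallocHeader fields:

```cpp
#include <cstdint>

// Hypothetical sketch of an NMT-style malloc header. NMT prepends a small
// header to each tracked malloc so the tag (category) can be recovered at
// free(); keeping the header exactly 16 bytes preserves malloc's usual
// 16-byte alignment guarantee for the user payload that follows it.
struct MallocHeaderSketch {
    uint64_t size;      // requested allocation size
    uint32_t mem_tag;   // category index (4 bytes once tags grow past 256)
    uint16_t pool_id;   // example extra bookkeeping field (made up)
    uint16_t canary;    // example corruption-check field (made up)
};

static_assert(sizeof(MallocHeaderSketch) == 16,
              "header must stay 16 bytes to preserve malloc alignment");
```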
05:21.520 --> 05:23.880
So let's say that you want to add your own mem tag.

05:23.880 --> 05:25.520
How do you do that?

05:25.520 --> 05:32.440
Well, you would first go into the appropriate C++ source file and you're going to go into

05:32.440 --> 05:37.600
the enum definition and you're going to add your category and you're going to recompile

05:37.640 --> 05:38.880
your JVM.

05:38.880 --> 05:45.080
Now, if you're interfacing via FFM, I'm pretty sure that you're not going to tell

05:45.080 --> 05:49.200
your users to like, oh, just recompile the JVM with this new category, right?

05:49.200 --> 05:53.200
So we need to find something else to do this with.

05:53.200 --> 05:58.480
Now, all functionality in NMT: there's summary mode and there is a detail mode, we're not going

05:58.480 --> 05:59.480
to go into it.

05:59.480 --> 06:04.200
But it all depends on this: handing out these mem tags, receiving these mem tags, right?

06:04.200 --> 06:12.360
So can we find a way to just give Java and C access to mem tags?

06:12.360 --> 06:18.440
That's the core of the idea, so here's a short plan for attacking the problem.

06:18.440 --> 06:22.840
We're going to ditch mem tags as enum members, because as I said we can't recompile

06:22.840 --> 06:23.840
this stuff.

06:23.840 --> 06:27.240
Instead, we're going to make them dynamically creatable.

06:27.240 --> 06:32.480
Then we can expose them via an interface in JVM.h, which is where we expose all of

06:32.480 --> 06:37.520
the JNI methods for the native libraries, specifically the core library bindings that

06:37.520 --> 06:38.920
we have.

06:38.920 --> 06:43.760
Then what you can do on top of that is just add a Java interface for FFM, so you can have

06:43.760 --> 06:48.880
some interface which does these JNI calls for you.

06:48.880 --> 06:54.560
Okay, so we have a plan, let's enact it.

06:54.560 --> 06:59.160
We are going to need a few things for these dynamically creatable mem tags.

06:59.160 --> 07:03.800
We're going to need lock-free access to the mem tag accounting; this is because

07:03.800 --> 07:08.680
we don't want to do any synchronization when updating these statistics, right?

07:08.680 --> 07:12.000
Of course, we said we want to be able to dynamically add these mem tags, so that's going

07:12.000 --> 07:14.880
to be another part of the solution here.

07:14.880 --> 07:18.520
And one extra requirement here: we should really only use as much memory as

07:18.520 --> 07:20.600
needed for the use case.

07:20.600 --> 07:24.840
And this may seem obvious: of course, we don't want to waste memory, right?

07:24.880 --> 07:29.200
It makes sense when you look at what I've actually done for this.

07:29.200 --> 07:34.440
So what we're going to do is we're going to make mem tags four bytes long instead of one.

07:34.440 --> 07:39.400
So today they are one byte long; that means you get two hundred and fifty-six mem tags, probably

07:39.400 --> 07:44.120
too few, right, if you're exposing it to so many more users.

07:44.120 --> 07:48.480
But two to the power of thirty-two mem tags, that's definitely enough.

07:48.480 --> 07:52.120
The other thing we're going to do is we're going to replace our statically sized array

07:52.120 --> 07:53.640
with something which can grow.

07:53.640 --> 07:58.240
So the thing is, it has to be able to grow in place because it's going to be very difficult

07:58.240 --> 08:04.800
to have an array base that moves around while you're trying to do these lock-free changes.

08:04.800 --> 08:06.200
So what are we going to do?

08:06.200 --> 08:12.440
We're going to do the thing where we take a big chunk of our virtual address space.

08:12.440 --> 08:17.880
And then we implement, we use a linear bump-pointer allocator on top of that.

08:17.880 --> 08:23.360
And that means that we can have very gracious behavior where we grow by just paging in

08:23.360 --> 08:27.120
another small page, which is just like 4k, 64k.

08:27.120 --> 08:30.560
And it allows for quite graceful failure if you run out of memory.

08:30.560 --> 08:35.720
So you try to page in a new page and it says: oh no, you can't do that.

08:35.720 --> 08:36.880
There's not enough memory.

08:36.880 --> 08:42.400
Then I can just return, like, a mem tag standing in for, like, an "other" category.

08:42.400 --> 08:45.760
And this is going to be made known to the user.

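The reserve-then-commit bump allocation described above can be sketched like this (POSIX-specific, and all names here are my own, not HotSpot's):

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

// Sketch of the scheme described above: reserve a large virtual range up
// front so the array base never moves, then commit one small page at a
// time as the bump pointer crosses into uncommitted territory.
class BumpRegion {
    static constexpr size_t kReserve = 64 * 1024 * 1024;  // 64 MiB of address space
    static constexpr size_t kPage    = 4096;              // commit granularity
    uint8_t* base_ = nullptr;
    size_t   used_ = 0;        // bump-pointer offset
    size_t   committed_ = 0;   // bytes already paged in

public:
    bool init() {
        // Reserve address space only; PROT_NONE means nothing is committed yet.
        void* p = mmap(nullptr, kReserve, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) return false;
        base_ = static_cast<uint8_t*>(p);
        return true;
    }

    // Returns nullptr on commit failure -- the caller can then fall back
    // to a pre-existing "other"-style sentinel tag, and no existing data
    // has moved, which is the graceful-failure property from the talk.
    void* alloc(size_t bytes) {
        if (used_ + bytes > kReserve) return nullptr;
        while (used_ + bytes > committed_) {
            if (mprotect(base_ + committed_, kPage, PROT_READ | PROT_WRITE) != 0)
                return nullptr;
            committed_ += kPage;
        }
        void* out = base_ + used_;
        used_ += bytes;
        return out;
    }
};
```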
08:45.880 --> 08:51.520
We're just going to have, like, a mem tag factory, which takes strings and returns mem tags.

08:51.520 --> 08:57.880
So the name is going to be your identifier in this case.

08:57.880 --> 09:00.800
How are we going to implement this mem tag factory?

09:00.800 --> 09:06.000
Well, I thought, you know, let's go for something simple here and just have a dual table.

09:06.000 --> 09:12.040
So we're just going to map the name to the mem tag, the name being a string, and the mem tag to the name.

09:12.040 --> 09:16.920
Simple closed hash tables with some space-saving tricks applied.

09:16.920 --> 09:21.800
So by closed hash table, I'm basically talking about the type of hash table.

09:21.800 --> 09:25.080
you would be implementing in university, right?

09:25.080 --> 09:31.000
You have your buckets, and they go into linked lists, and the linked lists hold the

09:31.000 --> 09:34.360
pointers to the keys and values, right?

09:34.360 --> 09:39.040
And I thought, OK, actually, these things can probably be done under a lock,

09:39.040 --> 09:42.640
because I'm not really expecting that we're going to be creating these mem tags

09:42.640 --> 09:45.920
or finding out what their names are that often.

09:45.920 --> 09:50.640
We're probably going to create a mem tag, store it away somewhere,

09:50.640 --> 09:54.240
and just apply that mem tag over and over.

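The mem tag factory described here, a table keyed by name that hands out tags and can map them back, might look roughly like this; a simplified sketch with the index-based list nodes the talk mentions later, and with all names being mine:

```cpp
#include <cstdint>
#include <string>
#include <vector>
#include <mutex>
#include <functional>

// Sketch of a name<->tag "dual table". Buckets and chain links are 4-byte
// indices into growable arrays rather than 8-byte pointers, and the node
// array doubles as the node allocator. Creation is rare, so a plain lock
// is fine here; only the per-tag statistics need to be lock-free.
class MemTagFactory {
    static constexpr uint32_t kNoIndex = 0xFFFFFFFF;
    static constexpr size_t   kBuckets = 64;

    struct Node {
        uint32_t name_idx;  // index into names_ (here it equals the tag)
        uint32_t tag;
        uint32_t next;      // index of the next node in the same bucket
    };

    std::vector<uint32_t>    buckets_ = std::vector<uint32_t>(kBuckets, kNoIndex);
    std::vector<Node>        nodes_;   // resizable: indices stay valid, pointers wouldn't
    std::vector<std::string> names_;   // tag -> name lookup is just names_[tag]
    std::mutex               lock_;

public:
    // Returns the existing tag for `name`, or creates a fresh one.
    uint32_t tag_if_absent(const std::string& name) {
        std::lock_guard<std::mutex> g(lock_);
        size_t b = std::hash<std::string>{}(name) % kBuckets;
        for (uint32_t i = buckets_[b]; i != kNoIndex; i = nodes_[i].next)
            if (names_[nodes_[i].name_idx] == name)
                return nodes_[i].tag;
        uint32_t tag = static_cast<uint32_t>(names_.size());
        names_.push_back(name);
        nodes_.push_back(Node{tag, tag, buckets_[b]});
        buckets_[b] = static_cast<uint32_t>(nodes_.size() - 1);
        return tag;
    }

    std::string name_of(uint32_t tag) {
        std::lock_guard<std::mutex> g(lock_);
        return names_[tag];
    }
};
```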
09:55.600 --> 10:02.640
For the data structure layout, I was thinking that we are going to have our category statistics.

10:02.640 --> 10:05.200
Because that's really the key thing here, right?

10:05.200 --> 10:07.280
Those are 64 bytes long.

10:07.360 --> 10:11.440
That's exactly one cache line; that's very nice, because it avoids false sharing.

10:12.480 --> 10:16.080
We are going to use four byte indices instead of pointers.

10:16.080 --> 10:19.200
So you can see here that I'm talking about these entry refs,

10:19.200 --> 10:21.440
I'm talking about these string refs, right?

10:21.440 --> 10:28.160
And that's because we can use resizable arrays for stuff when we're under the lock,

10:28.160 --> 10:30.000
because it's fine if they move around.

10:30.000 --> 10:33.440
We don't want to use pointers into them, of course.

10:33.440 --> 10:36.160
And the nice thing is that we can use four bytes.

10:36.240 --> 10:38.320
So that saves us four bytes per reference.

10:38.320 --> 10:41.760
And if you're trying to do some kind of contiguous mapping here,

10:41.760 --> 10:44.240
you kind of want to shave off these small bytes,

10:44.240 --> 10:47.680
so you fit more stuff in, so you get better cache behavior.

10:48.800 --> 10:52.000
A negative with this is that you get a four-

10:52.000 --> 10:55.280
gigabyte limit per array, of course,

10:55.280 --> 10:58.480
because you're not going to be able to index out of it with four bytes.

10:59.280 --> 11:03.680
I find this unlikely to be an issue.

11:03.840 --> 11:07.360
There are ways around this where you can say, well, you know,

11:07.360 --> 11:12.320
actually it's not, you know, indexed as base plus offset.

11:12.320 --> 11:18.080
It's actually, you know, base plus the offset times some alignment and stuff like that.

11:19.120 --> 11:23.920
If you look at this as a whole, this is really where I thought I would have a mouse pointer.

11:23.920 --> 11:26.400
So I'm going to have to be the pointer for you.

11:27.200 --> 11:30.640
Here's our, here's like a function, make tag if absent.

11:30.720 --> 11:34.400
As I said, it's a closed hash table with just hash modulo

11:34.400 --> 11:43.200
the bucket size to get into our quite tight array of buckets, which are entry refs, right?

11:43.200 --> 11:51.760
So they're all four bytes. And they point into this other array where we have our linked lists.

11:51.760 --> 11:56.320
So the nice thing is that, you know, you can just put your linked list nodes into an array.

11:56.320 --> 12:01.840
You can have your array as an allocator. And that makes things quite nice, quite tight.

12:01.840 --> 12:07.840
If you actually do have to iterate through the array, if you know, if your linked list has several nodes,

12:07.840 --> 12:13.200
you might have the next node in cache and so on. So it gives better performance that way.

12:14.160 --> 12:19.760
Finally, you can, well, you can't see that actually,

12:19.760 --> 12:23.680
but there's our string array over there holding all of the names.

12:24.640 --> 12:29.920
Pretty simple stuff. Finally, we've got the stats over here for directly accessing this.

12:31.120 --> 12:35.440
The other thing that you can't see is where we store our statistics.

12:37.360 --> 12:43.200
And that means that you basically can skip this entire table mechanism and just access it directly,

12:43.200 --> 12:49.760
which we need, right? So that's really all of the kind of, if you implement this in code,

12:49.760 --> 12:53.520
it's not a lot of code. You get the dynamically creatable mem tags.

12:53.520 --> 12:58.640
You get everything you need in order to kind of start pushing this out to C, pushing this out to Java.

12:58.640 --> 13:01.760
So let's see what that looks like when we export it to C.

13:03.120 --> 13:09.520
So for the JVM.h API, I was kind of imagining that we're going to have an arena-style API.

13:09.520 --> 13:15.200
So that's when you kind of start thinking about allocations as belonging to an arena now.

13:15.200 --> 13:17.520
In other words, they belong to a mem tag.

13:18.480 --> 13:23.600
I think this kind of abstraction, which is a fairly thin abstraction,

13:23.600 --> 13:29.920
but I think it's going to be quite familiar to devs who are used to the FFM API,

13:29.920 --> 13:32.960
where everything is structured in this arena-based way, OK?

13:34.560 --> 13:38.720
So what does it look like? It's just this very simple thing.

13:39.600 --> 13:43.120
You've got your JVM make-arena call, which gives you an arena back.

13:43.600 --> 13:48.400
You pass in this arena to all of your arena alloc, arena free, and so on.

13:50.160 --> 13:51.200
Super simple stuff.

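The arena-style C API being described might look roughly like this. The function names (JVM_MakeArena and friends) are my reconstruction of what is said in the talk, not the actual JVM.h exports, and the bodies are stubs just to show the shape:

```cpp
#include <cstddef>
#include <cstdlib>

// Sketch of an arena-style C API. An "arena" here is just a handle tying
// each allocation to one mem tag, so the accounting knows its category.
typedef int JvmArena;  // an opaque handle in a real design

// Create an arena for a named category; all allocations made through this
// handle are accounted to that tag.
JvmArena JVM_MakeArena(const char* category_name) {
    (void)category_name;  // a real implementation would call the tag factory
    static int next = 0;
    return next++;
}

void* JVM_ArenaAlloc(JvmArena arena, size_t size) {
    (void)arena;          // a real implementation would bump this tag's stats
    return malloc(size);
}

void JVM_ArenaFree(JvmArena arena, void* p) {
    (void)arena;          // ...and decrement them here
    free(p);
}
```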
13:52.720 --> 13:56.880
So the next part, if you got an API, you better use it, right?

13:56.880 --> 13:58.080
So let's see what that looks like.

14:02.800 --> 14:06.240
So the core thing here, if you're thinking about it, is basically,

14:06.240 --> 14:10.400
I need to get this arena and I just need to replace all my allocation calls.

14:11.040 --> 14:15.120
And you're going to have these kind of interesting problems regarding the lifetimes, right?

14:16.160 --> 14:22.720
You need to make sure that the arena creation is done before you start allocating,

14:22.720 --> 14:26.080
because otherwise you're not going to have a valid arena.

14:26.080 --> 14:28.000
And there are basically two cases.

14:28.000 --> 14:34.960
So what I did is I went into our libzip bindings, and I ported them over to this new API,

14:34.960 --> 14:37.680
and I found that there are basically two ways of doing it.

14:38.560 --> 14:43.840
Here we can see something which actually implements a Java native method,

14:44.720 --> 14:46.800
and it's a static initializer.

14:46.800 --> 14:51.840
So if you have a static initializer, which you know is going to run before everything else,

14:51.840 --> 14:57.840
right? You can just make the arena there, and then you can change all your allocation calls

14:57.840 --> 15:00.240
to use your arena allocation API.

15:01.040 --> 15:02.080
Super simple.

15:02.080 --> 15:04.240
What if it's not that simple then?

15:05.200 --> 15:10.400
Well, then you're going to have your other case, which is this.

15:11.520 --> 15:14.240
That's also quite simple, right? We're just going to say,

15:15.280 --> 15:21.120
when we're accessing the arena, we're going to do so through an accessor, and we're just going to check, you know,

15:22.400 --> 15:29.280
is it not initialized? Then we're going to make the arena, set the initialized flag, and return it.

15:29.840 --> 15:33.440
So we only take the heavy lock-taking code path in the worst case.

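The lazy-accessor pattern described above, sketched in C++ with names of my own. In plain C you would check an initialized flag under a lock, as the talk describes; a C++ function-local static gives the same run-once behavior:

```cpp
// Stand-in for the expensive, lock-taking creation call (hypothetical).
static int make_arena_expensively() { return 42; }

// Route every use of the arena through an accessor that creates it on
// first call. C++11 guarantees the initializer below runs exactly once,
// taking an internal lock only on the first call -- so the heavy path is
// paid only in the worst (first) case, as described above.
static int get_arena() {
    static int arena = make_arena_expensively();
    return arena;
}
```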
15:35.120 --> 15:38.480
One thing that you might be wondering about, which I haven't mentioned, is

15:38.480 --> 15:43.280
is this, like, a global variable or something like that, and the idea I have here is essentially,

15:43.280 --> 15:48.000
well, this is something you're placing in your C file, right?

15:48.000 --> 15:51.120
So it's like a file-global variable, I guess you could say.

15:52.960 --> 15:55.760
Yeah, that's a really basic trick that we've got there.

15:57.440 --> 16:03.280
Okay, so I kind of want to talk to you a bit about FFM also, right?

16:03.360 --> 16:06.960
This is kind of what we really want to have, I think.

16:08.720 --> 16:14.240
So the idea I had there was: we're just going to expose NMT via JNI.

16:14.800 --> 16:17.360
We're going to have a set of native methods.

16:17.360 --> 16:20.240
They're going to correspond to this C API that we looked at, right?

16:20.960 --> 16:25.760
We're going to replace, because when you read the FFM code, when you dig into the

16:25.760 --> 16:29.360
nitty-gritty, it all comes down to a few sun.misc.Unsafe calls, of course.

16:29.920 --> 16:33.440
So if you can just replace the usage of this sun.misc.Unsafe,

16:33.440 --> 16:37.040
and instead use NMT, that would be great.

16:37.760 --> 16:41.360
And then the final thing we really have to do is: we equipped our

16:41.360 --> 16:46.640
Arena with new constructors, which take strings as names, and then we know:

16:47.200 --> 16:52.000
okay, this guy really wants to use NMT, so we let them do that, right?

16:53.760 --> 16:58.480
So this is where I started hitting kind of a wall.

16:58.560 --> 17:04.720
There was a lot of indirection in the Java code that made this quite a bit painful to implement.

17:06.320 --> 17:10.960
But really it all boils down to this, and, okay, the thing is, you know,

17:10.960 --> 17:17.440
if I had known about scoped values before this, this would have been a lot simpler,

17:17.440 --> 17:21.120
and, you know, known that they were really cheap, because then I could put those on the stack

17:21.120 --> 17:24.320
and stuff. A really simple, really simple solution.

17:24.800 --> 17:30.640
What I finally kind of want to talk about is like, okay, so we have this, we've just gone through

17:30.640 --> 17:35.440
like a bunch of code and stuff, right? But what does this mean? Why should we care about this?

17:36.400 --> 17:40.080
And first I want to talk a bit about just HotSpot, the HotSpot VM, right?

17:40.080 --> 17:46.480
I think it's really cool that we have made mem tags, like, cheap, we made them dynamic,

17:46.480 --> 17:50.800
it means that you can do some things like you can have namespacing, right?

17:51.520 --> 17:57.920
So you could say, okay, let's say I want to create a bunch of growable arrays in the C2 compiler.

17:57.920 --> 18:01.920
I'm really worried: how much memory do these growable arrays take up?

18:01.920 --> 18:07.200
You could imagine that you pass in, like, a sub-grouped mem tag, and you can get local

18:07.200 --> 18:11.680
allocation profiling for that. That's really exciting. Another thing I talked about was

18:11.680 --> 18:17.760
namespacing. Now, the sole identifier for a mem tag is its name, so you can start doing things

18:17.840 --> 18:24.400
like adding prefixes, say, well, GC.card_table, for example. And this allows us to do

18:24.400 --> 18:32.160
some mixing and matching here. If you are a HotSpot VM guy, you're probably quite aware that

18:32.160 --> 18:37.600
sometimes you pass in the mem tags as template arguments, that's of course not something that's

18:37.600 --> 18:44.720
going to fly if it's not statically defined, right? So today that's no problem; tomorrow,

18:44.800 --> 18:49.600
there's no answer to that. So for that, I think the thing is just to keep the enum

18:49.600 --> 18:55.840
definitions there. And finally, okay, the final thing here is, like, we're talking so much about

18:55.840 --> 18:59.840
how this is so exciting, we can do so much cool stuff. That's going to require some sort of

18:59.840 --> 19:05.680
consistency rules and things like that. So everyone who's, like, you know, developing HotSpot

19:05.680 --> 19:10.400
kind of has to gather around some campfire and talk this out and figure out

19:10.480 --> 19:16.400
what we should be doing. Finally, I'd just like to talk a little bit about what this

19:16.400 --> 19:21.840
means for the JDK. And I think the best thing is that we're just going to have so much better

19:21.840 --> 19:26.480
analysis of memory issues here, right? We're going to be able to split

19:26.480 --> 19:33.040
out like okay, which area of Open JDK is really doing these allocations, what's going on here.

19:34.000 --> 19:41.360
If we have easier ways of doing foreign function memory stuff in FFM, if you can get more

19:41.360 --> 19:46.560
data from there, I think people are going to be less scared about doing C interop, meaning, like,

19:46.560 --> 19:53.920
maybe it's good to have better memory analysis here. And I'm really excited about the idea that

19:53.920 --> 20:02.000
NMT is going to have, like, more parsable output, right? So if you could switch from our current

20:02.960 --> 20:08.000
reporting format, which is very much meant for humans to read, and you do something like

20:08.000 --> 20:15.360
XML, JSON, CSV, whatever, and kind of just give that to the users, for them to develop tooling.

20:15.360 --> 20:23.760
I think we could see some really cool analysis coming out of this. So that is really all I have today.

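The machine-readable-output idea could be as simple as serializing the per-tag statistics; a sketch with invented field names, not NMT's actual report schema:

```cpp
#include <string>
#include <vector>

// Sketch: emit per-category statistics as JSON for downstream tooling,
// instead of (or alongside) the human-oriented text report.
struct TagStat {
    std::string name;
    long reserved_kb;
    long committed_kb;
};

std::string to_json(const std::vector<TagStat>& stats) {
    std::string out = "[";
    for (size_t i = 0; i < stats.size(); i++) {
        if (i) out += ",";
        out += "{\"tag\":\"" + stats[i].name + "\","
               "\"reserved_kb\":" + std::to_string(stats[i].reserved_kb) + ","
               "\"committed_kb\":" + std::to_string(stats[i].committed_kb) + "}";
    }
    return out + "]";
}
```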
20:23.760 --> 20:28.560
Thank you very much for listening. If there are any questions.

20:32.320 --> 20:39.040
[Audience question, partially inaudible, about how this works for third-party

20:39.040 --> 20:45.280
libraries written in C that we need to include but cannot change.]

20:45.280 --> 20:52.160
Yeah, well, you mean, are you asking, in the code, that...? So you have, you have

20:52.160 --> 20:58.240
to see: you have the HotSpot-level code, right? And you have the C code that JNI is talking

20:58.240 --> 21:08.320
to, that we write. Yeah, so those parts are really hard to switch. So the nice thing is that

21:08.320 --> 21:14.720
if you have something like, you know, zlib, right? They actually expose, like: hey, do you

21:14.720 --> 21:20.800
want to use a custom allocator, just hook in here, and then you get it for free. If you want to do something

21:20.880 --> 21:27.520
else, right? Then you're not going to see it, and switching it out is hard, right? But then you also,

21:28.960 --> 21:33.440
well, okay, so this is actually entirely separate, but what I hope that we could potentially

21:33.440 --> 21:39.360
have is to do some sort of LD_PRELOAD thing. And then maybe we can do some stack walking to

21:39.360 --> 21:45.040
see, because if you have the debugging information, you can kind of automatically infer what memory

21:45.120 --> 21:51.280
tags something should have. So that is what I think is a very cool thing to do on top of this.

21:54.000 --> 22:04.880
You'll have to say that louder. It works with third-party libraries? Exactly, yes. Exactly,

22:04.880 --> 22:10.240
so that's, you know, that's a really, you know, that would be fun if you could extend it to third-party

22:10.240 --> 22:17.840
libraries in that way, exactly, yeah. Any other questions? Yeah? [Audience question about the memory

22:17.840 --> 22:24.960
extension and sharing.] The memory extension, for sharing? Oh, that's the pointer provenance thing, right?

22:29.040 --> 22:29.440
Yeah.

22:29.680 --> 22:40.960
I have no clue. That's a really good question. So that may be, I don't know, where-

22:48.960 --> 22:55.680
Yeah. Yeah, that's a good question. I don't know. That's a good one, yeah. Anything else?

22:56.400 --> 23:02.800
Yeah? Yeah. Yeah.

23:09.040 --> 23:22.160
Yeah? I don't think there's any extensions of the standards. It's just, yeah, it's JVM underscore, right?

23:22.160 --> 23:26.800
So it's no problem, I think. Yeah. Okay.


