WEBVTT

00:00.000 --> 00:20.000
That's what happens when you press left and right. So I'm going to click through it. I'm going to click through.

00:20.000 --> 00:31.000
Now it should work. Now there we go. All right. So I'm doing this on my own. So this is just me presenting.

00:31.000 --> 00:39.000
Something that I'm involved with, but not a core member of. So just bear that in mind when you take this information.

00:39.000 --> 00:44.000
So take this information with a grain of salt. I'm not involved in decision making for the project or anything.

00:44.000 --> 00:54.000
It's just my experience that I'm willing to share at this moment here. So I'm going to give a little introduction.

00:54.000 --> 01:03.000
And then we're going to go through separate topics regarding where the differences between libhybris-based

01:03.000 --> 01:15.000
GNU/Linux adaptations versus mainline Linux phones are. And we're going to go through a list of hardware enablement things during this.

01:16.000 --> 01:33.000
So let's start off with a little graphic here that should ring a bell to many people out there. This is how Halium showcases, basically, the way that we structure an operating system.

01:33.000 --> 01:48.000
It could reuse Android drivers, Android services, HALs, et cetera, usually either using a chroot or LXC, with Android init running in that container.

01:48.000 --> 01:57.000
And there's the libhybris library that is out there, which is also done by a person who happens to sit in here.

01:57.000 --> 02:08.000
And this thing is basically, if I just can do this, yeah. This is basically the Android library loader slash linker, built as a regular glibc-side library.

02:08.000 --> 02:18.000
There are different linkers available, which means libraries from different Android generations are able to be loaded on a GNU/Linux system.
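As a rough analogy for what that loader does, here is the glibc-world equivalent of "load a library at runtime and resolve a symbol", via Python's ctypes. This is a sketch, not libhybris itself: libhybris exposes analogous entry points for bionic-built libraries, which this glibc-only example merely mimics.

```python
# Hypothetical illustration: libhybris ports the Android (bionic) linker so
# bionic-built .so files can be loaded into a glibc process. The glibc-side
# equivalent of dlopen()/dlsym() looks like this:
import ctypes

# Load the symbols of the current process (glibc's dlopen(NULL, ...)).
libc = ctypes.CDLL(None)

# Resolve and call a symbol, just as a linker resolves symbols from a
# loaded Android library.
pid = libc.getpid()
print(pid > 0)  # True
```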

02:18.000 --> 02:28.000
There are also bits and pieces which are just the C++ code from Android, with some C linkage and quote-unquote API guarantees,

02:28.000 --> 02:37.000
so that different multimedia situations can be handled, so that the camera can be handled, et cetera.

02:37.000 --> 02:44.000
There are also wrapper libraries for various other tasks, and they require some Android services.

02:44.000 --> 02:53.000
Most importantly the property service running in this container. So it needs a somewhat fully running Android, minus the Java bits usually.

02:53.000 --> 02:57.000
And this can run in a chroot, this can run in an LXC container.

02:57.000 --> 03:07.000
And it also needs some symlinks for compatibility, so that Android code, which is guaranteed to find libraries in particular paths, actually finds them.

03:07.000 --> 03:14.000
So that's /system, /vendor, /odm and some others for firmware blobs and whatnot.
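The compatibility-symlink idea can be sketched like this. All paths here are stand-ins under a temp directory rather than the real rootfs, and the mount location is an assumption for illustration:

```python
# Illustrative sketch: Android code hardcodes paths like /system and /vendor,
# so a hybris rootfs points them at wherever the Android partitions are
# actually mounted (here: a fake "android-rootfs" under a temp dir).
import os
import tempfile

root = tempfile.mkdtemp()
android_rootfs = os.path.join(root, "android-rootfs")
for part in ("system", "vendor", "odm"):
    os.makedirs(os.path.join(android_rootfs, part))
    # e.g. /system -> /android-rootfs/system
    os.symlink(os.path.join(android_rootfs, part), os.path.join(root, part))

print(os.path.islink(os.path.join(root, "system")))  # True
```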

03:15.000 --> 03:27.000
So we do have Android adaptations, which nowadays very much look like something that GNU Hurd would like to dream of.

03:27.000 --> 03:35.000
There are so many user-space services handling the hardware that there is one for sensors alone.

03:35.000 --> 03:47.000
There is one for various others, so there are different hardware HAL services running per, let's call it, purpose.

03:47.000 --> 03:53.000
And this also needs GNU/Linux-side integration services, something that allows it to talk over D-Bus.

03:53.000 --> 04:05.000
Those are the integration services. And, also very important, vendor blobs are not really pretty most of the time, so we can't use much of that in any other way.

04:05.000 --> 04:14.000
EGL as the graphics abstraction. I'm now going to talk about various

04:14.000 --> 04:21.000
hardware integration things, like for example: how do graphics differ on a libhybris-based Linux distribution versus mainline?

04:21.000 --> 04:29.000
So remember EGL: this is an abstraction for vendor buffer-passing protocols. This basically allows OpenGL ES to work with windowing protocols.

04:29.000 --> 04:41.000
There are implementations from Mesa, from NVIDIA, and also from libhybris. And libhybris supports Wayland natively, because it actually uses the libwayland client side

04:41.000 --> 04:52.000
and puts a Wayland listener on there, so that it can do the buffer passing for an unmodified GNU/Linux application binary.

04:52.000 --> 05:01.000
The only difference there is the libEGL library, which does hook into that whole Wayland socket thing.

05:01.000 --> 05:11.000
And there is the additional protocol that it requires for doing this, so it's a separate protocol XML that needs to be generated into

05:11.000 --> 05:19.000
application code, and then you can use this protocol in any such environment.

05:19.000 --> 05:24.000
Then there's GBM, which is sort of this generic buffer manager, or that's what they call it, the Generic Buffer Manager.

05:24.000 --> 05:30.000
I think there was a talk about GBM last year here at FOSDEM, though not in this room.

05:30.000 --> 05:35.000
And what it does is it basically provides the EGL platform for KMS applications.

05:35.000 --> 05:41.000
Now, kernel mode setting applications basically get full-screen access to the graphics hardware.

05:41.000 --> 05:58.000
This is what GNOME Shell usually uses, or other compositors probably, to display things directly on a DRM-native Linux graphics device.

05:58.000 --> 06:03.000
This differs from hwcomposer, which we use on those libhybris devices.

06:03.000 --> 06:09.000
Hwcomposer used to be a library; now it's its own service, its own user-space HAL again.

06:09.000 --> 06:13.000
And it's managing the display, hotplug, et cetera.

06:13.000 --> 06:18.000
And compositors need to talk to it in order to provide pixels on the screen.

06:19.000 --> 06:27.000
Now we have something coming up which basically allows sidestepping that. Just a little basic information here:

06:27.000 --> 06:33.000
a dma-buf is usually a reference to a graphics buffer in DRM.

06:33.000 --> 06:43.000
It is, in effect, a file descriptor; various different formats can be passed, and it allows zero-copy passing and use of those buffers.

06:43.000 --> 06:54.000
Now we have native handles on the other hand, the Android sort of legacy that we live with in this type of environment.

06:54.000 --> 07:02.000
It's a data structure around graphics buffers, and it contains potentially more than one file descriptor.

07:02.000 --> 07:12.000
So it's adding some additional data there, and it's also got an array of integers for whatever purpose they might need to be passed.

07:12.000 --> 07:18.000
And those are also nowadays backed by dma-bufs, but they don't translate over one-to-one in many cases.

07:18.000 --> 07:31.000
So what we do have as an upcoming idea, and others have also worked on similar things where we can share those code pieces, is simulating or integrating dma-bufs,

07:31.000 --> 07:46.000
where we either create a memfd with some identification, or we actually use the dma-buf that is provided by the gralloc buffer.

07:46.000 --> 07:57.000
That allows using it as a GBM device then, as a GBM platform, using a GBM backend similar to what NVIDIA does on their end.

07:57.000 --> 08:02.000
This would allow bypassing hwcomposer, but it's not there yet.

08:02.000 --> 08:10.000
I would like to talk with a few people who might be interested in that topic so we can get this ball rolling a little quicker.

08:10.000 --> 08:20.000
Now, camera. One big thing that differs very much between mainline and those Android-based devices is usually the camera.

08:20.000 --> 08:29.000
For example, with mainline you very often have a Video4Linux2 device; you may, of course, but may not.

08:29.000 --> 08:46.000
But on Android it's completely uncertain what you're dealing with. They're using their own abstraction, their own APIs, and there are also compatibility layers, as they're called in libhybris speak, that allow accessing the camera and making it usable in the system.

08:46.000 --> 08:54.000
There are two solutions out there. There's the one in the libhybris repository at the moment, the camera compat layer.

08:54.000 --> 08:57.000
There's also one used by Sailfish OS, droidmedia.

08:57.000 --> 09:07.000
I'm not sure, well, I have a rough idea, what could be the reason why this is not integrated or why things are split like that.

09:08.000 --> 09:29.000
Yeah, that's just it. So usually what you do on those Android-based devices is: you get a GL_TEXTURE_EXTERNAL_OES created by the camera service, which can be imported into OpenGL, so that you can show the viewfinder using an OpenGL texture that you just get from the camera service.

09:29.000 --> 09:42.000
And this can also be used for a little hack which makes it look like it's mapping the buffer, that is, mapping it to CPU-accessible memory.

09:42.000 --> 09:54.000
That is: creating a gralloc buffer first, then using GL and EGL functions to copy the contents into this gralloc buffer.

09:54.000 --> 10:00.000
So the texture contents go into this gralloc buffer; the copy happens on the GPU, not on the CPU.

10:00.000 --> 10:15.000
And then locking the gralloc buffer allows reading it from a char pointer, just reading for the length or the size of this buffer, or writing to this buffer.

10:15.000 --> 10:20.000
Yeah, this is one way of achieving this from an application developer's perspective.

10:20.000 --> 10:31.000
And there's also multimedia: with mainline you use VA-API or VDPAU, and on Android you have OMX and Codec2. I think I'm running low on time right now.

10:31.000 --> 10:43.000
So I'm going to breeze through multimedia. Similar to the camera, there are two solutions out there, and they basically allow using the OMX and Codec2 services, nowadays through libstagefright.

10:43.000 --> 10:52.000
And there are also some ideas I've heard about using the NDK API for this in the future, maybe, potentially.

10:52.000 --> 11:06.000
And there are integrations in GStreamer, so that applications that already exist can actually make use of this and do hardware-accelerated video decoding and also encoding easily.

11:07.000 --> 11:27.000
And, also very important: mainline typically does in-process video encoding and decoding. On Android it's out of process, because the OMX service and the Codec2 service handle the multimedia functionality on their own, and you just request: please give me the decoded buffer for this chunk of H.264, for example.

11:28.000 --> 11:39.000
Now, sensors, I'm going to breeze through this quick. On mainline, sensors are usually either kernel devices or something that iio-sensor-proxy can just use.

11:39.000 --> 11:54.000
And on Android it's, again, abstracted away differently: you use Binder IPC to communicate with user-space HALs, and there are things like sensorfw which integrate those using plugins.

11:54.000 --> 12:01.000
This can be the abstraction between the mainline thing and the Android thing, should there be a need to abstract this.

12:01.000 --> 12:10.000
Location: gpsd is one way for the mainline folks, usually. Not sure actually what is used nowadays in, for example, a postmarketOS

12:10.000 --> 12:18.000
type of scenario. May I just ask what's used there?

12:18.000 --> 12:21.000
Yes, I should have put this into the slide.

12:21.000 --> 12:28.000
Okay, yeah. But on Android, again, there are specific implementations running in user space,

12:28.000 --> 12:38.000
using IPC to communicate. And the same sort of thing for NFC: there's neard, and also, for Sailfish OS, there's this nfcd, which also has a D-Bus API that looks like neard's,

12:38.000 --> 12:46.000
so that neard-expecting applications can work seamlessly, and it again talks to Android-side vendor daemons.

12:46.000 --> 12:57.000
And yeah, thank you.

12:57.000 --> 12:59.000
Are there any questions?

12:59.000 --> 13:07.000
Yes.

13:07.000 --> 13:12.000
Oh, there's one.

13:12.000 --> 13:15.000
How far is Ubuntu Touch with mainline?

13:15.000 --> 13:26.000
So the question is how far Ubuntu Touch is with mainline efforts, and the answer to this is: with the PDK we have one mainline target that we use ourselves

13:26.000 --> 13:29.000
in the development team.

13:29.000 --> 13:35.000
I wouldn't call it an officially supported thing, but it is being used, and if something breaks there, it's getting fixed.

13:35.000 --> 13:40.000
So if you want to take a look at something that runs Ubuntu Touch on mainline,

13:40.000 --> 13:51.000
then the PDK image from the UBports CI should do the job for an initial glimpse.

13:51.000 --> 13:55.000
So this is mostly related to the UI.

13:55.000 --> 13:56.000
Okay.


14:17.000 --> 14:24.000
Yeah, yeah, you might have looked at my little private list of projects that I planned but have not started yet.

14:24.000 --> 14:29.000
The question is about VA-API

14:29.000 --> 14:36.000
in the libhybris systems, and bridging the gap between VA-API and

14:36.000 --> 14:44.000
the hybris devices, right? So yeah, I've not tinkered with anything yet.

14:44.000 --> 15:00.000
I do have plans, though: take a VA-API plugin, sort of like the Intel driver, rip out everything that is specific to Intel, and use the same interface to just implement something that could work somehow.

15:00.000 --> 15:10.000
I'd have to take a deeper look into the topic, but it could work that you get VA-API playing along with OMX, for example, in some way, shape or form.

15:10.000 --> 15:15.000
I think there is a project that implements VA-API on top of the...

15:15.000 --> 15:16.000
Uh-huh.

15:16.000 --> 15:19.000
So that might be another...

15:19.000 --> 15:21.000
Could be a starting point.

15:21.000 --> 15:22.000
Yeah, sure.

15:30.000 --> 15:34.000
All right, thanks a lot.

