WEBVTT

00:00.000 --> 00:11.000
Good morning everybody, congratulations to all of you who defied gravity and got out of bed early.

00:11.000 --> 00:20.000
We're listening to Mitchell Baker, who's going to talk to us about what FLOSS means in the AI world,

00:20.000 --> 00:23.000
because I think it's a challenge for all of us.

00:23.000 --> 00:33.000
At the end, we'll have questions, so wait for that moment to ask them; we'll come around with the microphone so the questions can be heard clearly.

00:33.000 --> 00:37.000
The floor is yours. Thank you very much for being here so early too.

00:42.000 --> 00:49.000
Yeah, early is not my best time, but thank you for the help and the extra computer and the slides and all.

00:49.000 --> 01:04.000
So, FLOSS in the AI world. There's one core principle to start and end with, which is that FLOSS must win.

01:04.000 --> 01:14.000
And within our world, there's a lot of controversy about AI: is it good, is it bad, is it terrifying, should we sit out, should we engage?

01:14.000 --> 01:22.000
My own view is it's both: it's fascinating and exciting and terrifying, and we cannot sit out.

01:22.000 --> 01:33.000
I was at one of these events in the Valley with a very storied figure, one of the men at the very center of these things.

01:33.000 --> 01:43.000
He's got a long story about what AI is going to mean personally and how important it is to our own creativity and how your own personal copilot or AI,

01:43.000 --> 01:50.000
whatever you call it, will be with you all the time. It will hear what you hear, it will remember all the things that you don't.

01:50.000 --> 02:00.000
And in this vision, when you're thinking to yourself, oh, I was talking to, you know, Claire last week and I had this really great idea, what was it?

02:00.000 --> 02:04.000
You know, your AI is your external brain and it's going to be reporting that back.

02:04.000 --> 02:11.000
And when you're like, oh, we were talking about X and today I was talking about Y and there's some connection like, what is it?

02:11.000 --> 02:18.000
That this is all going to be processed through your external brain, you know, your own AI.

02:18.000 --> 02:27.000
And I think that's very likely and actually really exciting because external brain power would be awesome.

02:27.000 --> 02:32.000
But, you know, Microsoft can't own that or Google or Sam Altman.

02:32.000 --> 02:39.000
Like those things are much more entwined with who we are and how we actually operate.

02:39.000 --> 02:42.000
So I think it's coming, I don't think there's any sitting out.

02:42.000 --> 02:47.000
And so I think that in AI, FLOSS must win.

02:47.000 --> 02:55.000
There have to be open alternatives, there have to be systems that we can see and understand and tune and test and reproduce.

02:55.000 --> 03:02.000
And there have to be systems that we own, you know, and that have some public benefit built into them.

03:02.000 --> 03:05.000
So, FLOSS must win.

03:05.000 --> 03:11.000
Sometimes I think, well, FLOSS has been really successful in the last 20 years.

03:11.000 --> 03:14.000
So it should be easy, right?

03:14.000 --> 03:18.000
But there's a few things there that aren't quite that way.

03:18.000 --> 03:22.000
First, you know, memories are short.

03:22.000 --> 03:28.000
Ages ago now, but when Firefox first appeared, it was a consumer product.

03:28.000 --> 03:30.000
The first real FLOSS consumer product.

03:30.000 --> 03:33.000
And I did a million interviews about, well, what is FLOSS?

03:33.000 --> 03:35.000
Gosh, why would anybody volunteer?

03:35.000 --> 03:38.000
What do you mean people don't get paid?

03:38.000 --> 03:40.000
What do you mean you share your code?

03:40.000 --> 03:45.000
And at the time, people were very interested and it led to a widespread interest.

03:45.000 --> 03:53.000
You know, we have open everything now, you know, open education, open access, open architecture, open policy, you name it.

03:53.000 --> 03:59.000
And open is such a prevalent idea that now we struggle with open washing of things.

03:59.000 --> 04:08.000
And so there was a period in there, when FLOSS came into the mainstream, when there was real interest in what it actually means.

04:08.000 --> 04:18.000
But as it's gotten settled in the mainstream, the understanding is very broad, you know, open everything, but also pretty shallow.

04:18.000 --> 04:23.000
And people don't really have an idea of what drives FLOSS.

04:23.000 --> 04:26.000
It's easy to use FLOSS today.

04:26.000 --> 04:31.000
You know, you go to some repo and you grab what you need and you're off and running.

04:31.000 --> 04:40.000
But most people don't have the understanding of the depth of what drives it, what free as in freedom actually means.

04:40.000 --> 04:46.000
What it's like to engage and build a community and actually share, and why we do these things.

04:46.000 --> 04:52.000
And why open source or public benefit is a value for its own sake.

04:52.000 --> 05:09.000
And so, those of us who've been in this world, or lived this world, or built this world, have an understanding of the depth and the wave of energy and commitment and ethos and spirit that created the open source and FLOSS movements.

05:09.000 --> 05:14.000
But that's not really well understood now.

05:15.000 --> 05:28.000
And so, we're living in this world and then, you know, the setting changes, generative AI arrives and it is cataclysmic.

05:28.000 --> 05:39.000
And some think it's wonderful and beautiful and everywhere, and, you know, others think it's really terrible, like it's, you know, polluting our world.

05:39.000 --> 05:43.000
And it is everywhere and we should stand against it.

05:43.000 --> 05:48.000
As I've said, I'm not of the view that we're going to stand against it.

05:48.000 --> 05:58.000
I mean, you may, but I don't mean that we're going to be successful and stop this new technological development, because it is powerful and it's new.

05:58.000 --> 06:05.000
And we're in the very, very early phases of it, so I don't think it's going back in the bottle.

06:05.000 --> 06:12.000
But one thing about generative AI is its odd relationship to open.

06:12.000 --> 06:23.000
What I call so-called OpenAI, you know, was originally formed by a handful of the tech giants with a very lofty goal to serve humanity.

06:23.000 --> 06:26.000
Formed as a nonprofit with this great goal.

06:26.000 --> 06:31.000
But over time, of course, you know, it's changed.

06:31.000 --> 06:47.000
And we've seen, step after step — the need for, you know, dollars, for revenue, the lure of the industry, the scale of impact, what's possible — that so-called OpenAI is increasingly closed.

06:47.000 --> 06:54.000
And that the implementations of what comes out of OpenAI and the big tech giants are also closed.

06:54.000 --> 06:58.000
And that you need to be in their systems to use their AIs.

06:58.000 --> 07:08.000
Now to my mind, it's no accident that the architecture of ChatGPT requires vast amounts of wealth.

07:08.000 --> 07:19.000
It is not, I would say, necessary that generative AI use this particular architecture.

07:20.000 --> 07:28.000
But OpenAI came out of the Valley, you know, founded by the PayPal mafia and, you know, a few others.

07:28.000 --> 07:32.000
So founded by billionaires, founded by the very successful.

07:32.000 --> 07:41.000
And so it's no accident that the architecture is accessible only to the very successful with vast amounts of money.

07:42.000 --> 07:46.000
But that is not necessarily required for generative AI.

07:46.000 --> 07:53.000
And so we've, you know, seen that recently with DeepSeek out of China, which I'll talk about a little bit more.

07:53.000 --> 08:01.000
But until then, like the architecture lent itself to increasingly closed systems.

08:01.000 --> 08:10.000
And so that's a setting that looks like pretty rough waters for open source and free software.

08:10.000 --> 08:14.000
And it also leads people to think, and to say sometimes that everything is different.

08:14.000 --> 08:16.000
It's not like before.

08:16.000 --> 08:19.000
Oh, generative AI changes absolutely everything.

08:19.000 --> 08:25.000
And that nothing we've known or learned or experienced about the past is going to be relevant.

08:25.000 --> 08:31.000
And from my perspective of living the past, that is not the case.

08:31.000 --> 08:39.000
And that what's happening now is better understood as, you know, same, same, only different.

08:39.000 --> 08:45.000
And a lot of things that are happening now are not that different from before.

08:45.000 --> 08:52.000
And a lot of the things that people are thinking about AI, are saying about AI, and about why FLOSS or open can't do it —

08:52.000 --> 08:57.000
I mean, that it's hard or it's dangerous — are things that we've lived through before.

08:57.000 --> 09:00.000
There's a lot that's the same where our learnings are really relevant.

09:00.000 --> 09:02.000
And there are some things that are different.

09:02.000 --> 09:08.000
You know, AI now is in this carnival atmosphere where it's kind of crazy.

09:08.000 --> 09:12.000
And people are using it and you know, pouring dollars in it where they can.

09:12.000 --> 09:17.000
Whereas FLOSS software grew up in relative obscurity.

09:17.000 --> 09:27.000
And we had the luxury of working out our systems and our communities and actually having a full operating system and languages that were open.

09:27.000 --> 09:34.000
And even the Apache Foundation and maybe even the Mozilla Foundation before it really came into the mainstream.

09:34.000 --> 09:42.000
And so we learned a lot and figured out a lot of tools and how to operate before the mainstream and certainly the industry.

09:42.000 --> 09:45.000
The business side of the industry paid attention to us.

09:45.000 --> 09:51.000
You know, you may remember how they dismissed us for so many years as, you know, weirdos off on the side.

09:51.000 --> 09:53.000
But that's not true today.

09:53.000 --> 09:58.000
And the world has come to open source and certainly industry understands its power.

09:58.000 --> 10:03.000
And so that public setting is very different.

10:04.000 --> 10:07.000
And the, you know, amounts of money are very different.

10:07.000 --> 10:17.000
But it's not that building open source and floss software was free.

10:17.000 --> 10:20.000
But there were costs — like, Linux took a decade of work.

10:20.000 --> 10:22.000
Like, that's a fair amount of investment.

10:22.000 --> 10:29.000
The browser — like Mozilla, for example — took probably hundreds of millions of dollars from AOL before we even launched.

10:29.000 --> 10:32.000
And so there's a lot of money that has gone in and is required.

10:32.000 --> 10:42.000
And even though with open source software it's true you can pick up a laptop or, you know, a desktop box, you know, and work on it.

10:42.000 --> 10:44.000
You can't create a product that way.

10:44.000 --> 10:48.000
And so this idea that everything about open source software is free and easy.

10:48.000 --> 10:51.000
And AI is expensive and thus impossible.

10:51.000 --> 10:53.000
It really misses a lot of the nuances.

10:53.000 --> 10:58.000
And sure, in the early days of Mozilla, any one of our developers could have a laptop.

10:59.000 --> 11:02.000
But we couldn't build our product on laptops.

11:02.000 --> 11:08.000
And when we came out of AOL, the very last fight I had with AOL was to get, in those days,

11:08.000 --> 11:12.000
you know, Sun Microsystems, big boxes.

11:12.000 --> 11:16.000
It took me 18 months inside of AOL to get AOL to pay for them.

11:16.000 --> 11:18.000
They were really expensive.

11:18.000 --> 11:20.000
Because we couldn't run our release process without them.

11:20.000 --> 11:22.000
And we couldn't run our web server.

11:22.000 --> 11:26.000
But more importantly, you know, we couldn't build our product and do the testing that we needed to.

11:26.000 --> 11:29.000
So the idea that everything about open source is free and easy.

11:29.000 --> 11:35.000
And, you know, one person sitting alone, and cheap — you know, that's not actually really the case.

11:35.000 --> 11:41.000
And so there's a lot about our history and that's really relevant in this case.

11:41.000 --> 11:48.000
So I said it, and I'll say it probably more than once: you know, FLOSS must win.

11:48.000 --> 11:55.000
And so of the things we've learned in the past, what are some of the things that are really important to bring forward?

11:55.000 --> 12:00.000
And you can see, I've got three that I'm going to talk about a little bit today.

12:00.000 --> 12:02.000
First one is tools.

12:02.000 --> 12:05.000
And we have some familiar tools.

12:05.000 --> 12:13.000
In particular, sorry, the free software and open source definition and the licenses.

12:13.000 --> 12:20.000
These, I think, are one of the key tools to our success as a movement.

12:20.000 --> 12:25.000
Certainly, there are practices, you know, open bug systems, for example.

12:25.000 --> 12:27.000
And open repositories are common today.

12:27.000 --> 12:29.000
They're also key tools.

12:29.000 --> 12:37.000
But in terms of organization, and having a voice to the broader world, it's the open source and free software definitions.

12:37.000 --> 12:42.000
And then our licenses, which allowed us to speak with one voice to the world.

12:42.000 --> 12:49.000
And in explaining what open source was, what free software was, we could point to the same things.

12:49.000 --> 12:59.000
And that's how it's written into law in the EU, because we have a community-based definition that we agree on.

12:59.000 --> 13:11.000
And I've put the definition and the licenses separately here because we have different licenses that meet the definition.

13:11.000 --> 13:16.000
And that is going to be very important in the AI world.

13:17.000 --> 13:20.000
And our licenses vary quite a bit.

13:20.000 --> 13:27.000
You know, BSD or MIT or Apache on one end, the LGPL in the middle, and the GNU GPL and the others.

13:27.000 --> 13:33.000
Those are very different versions of community and very different versions of what free means.

13:33.000 --> 13:38.000
And we fought, you know, for many years, about our licenses.

13:38.000 --> 13:45.000
And instead of speaking outwards, we were internal, where the Apache folks are like, no, it's only our license.

13:45.000 --> 13:49.000
And the GNU GPL folks are like, no, it's got to be the GPL.

13:49.000 --> 13:58.000
And we did a bunch of fighting, and then eventually there's the GNU Lesser — sorry, the current name is the Lesser General Public License.

13:58.000 --> 14:07.000
Originally the Library General Public License — to try and figure out what are the different flavors of living within the open source definition.

14:07.000 --> 14:10.000
And that we will have with AI as well.

14:10.000 --> 14:21.000
And so there's the question of how we join as a community, even with people on different parts of the spectrum.

14:21.000 --> 14:31.000
And so OSI has a definition and a draft license, which Mozilla has endorsed, understanding the data problem.

14:31.000 --> 14:38.000
And so certainly in our community, there will be a set of people for whom open must mean really open.

14:38.000 --> 14:41.000
And that must include the data.

14:41.000 --> 14:46.000
Totally logical rationale, good position.

14:46.000 --> 14:52.000
Then there's a set of people who are like, well, there are no real data sets that are being opened.

14:52.000 --> 14:54.000
There's almost nothing that's open.

14:54.000 --> 14:58.000
So we should be more practical and thus you get the OSI version.

14:58.000 --> 15:01.000
We're going to be all along that spectrum.

15:01.000 --> 15:07.000
Because there are some pretty deep philosophical questions about what free as in freedom, or open, or public benefit,

15:07.000 --> 15:10.000
Or reproducibility mean.

15:10.000 --> 15:16.000
And it's easy to forget that it took us a long time in software to figure that stuff out.

15:16.000 --> 15:24.000
And it was a major building block when we figured out how to take all the different licenses and make them compatible.

15:24.000 --> 15:31.000
But it used to be if you used Apache software, you couldn't combine it with Mozilla software, you know, and on and on with these licenses.

15:31.000 --> 15:43.000
And as a community, we did a lot of work to overcome our own niche communities and become a FLOSS community that could use licenses together.

15:43.000 --> 15:51.000
For example, the early version of the MPL had, I think, the first patent clause, patent protection clause.

15:51.000 --> 16:00.000
It also had a really strong copyright protection clause because I believe that individual developers rarely have patents.

16:00.000 --> 16:04.000
And so a patent protection clause works fine for companies in open source.

16:04.000 --> 16:09.000
But if you're an individual developer, most likely you have copyright in what you created.

16:09.000 --> 16:17.000
And the way that individual developers would have more agency in this system is if our IP clauses included copyright.

16:17.000 --> 16:20.000
But we were the only project to do that.

16:20.000 --> 16:29.000
And eventually, you know, I made the decision that that view about developer agency was not as important as interoperability.

16:29.000 --> 16:32.000
Interoperability with the rest of the community.

16:32.000 --> 16:34.000
That was painful to me.

16:34.000 --> 16:46.000
But we all had to make compromises to build a broader community, to get to interoperability, and to work well and speak with one voice.

16:46.000 --> 16:51.000
Even when we're on different flavors of the philosophical spectrum.

16:51.000 --> 16:57.000
And to be effective in open source, you know, that's going to be the same as well.

16:57.000 --> 17:05.000
And so I'm hopeful — you know, it took us a long time to figure out we're actually allies in FLOSS.

17:05.000 --> 17:18.000
And we don't need to fight over our licenses quite so much, but to unify and be able to have a shared presentation to the rest of the world about the underlying values of freedom and openness and empowerment and so on.

17:18.000 --> 17:23.000
So I think that's coming for us in the AI space.

17:23.000 --> 17:35.000
And there's some new tools that just sorely have to be developed, both for open source software, but also in the unknown world of AI.

17:35.000 --> 17:39.000
And you can see I've put hybrid organizations and public money up here.

17:39.000 --> 17:45.000
There's a very strong force for public infrastructure, public AI, public money.

17:45.000 --> 18:01.000
So what the public sector — public AI, government and government agencies — what those look and feel like as community members in a FLOSS world is something to be worked out.

18:01.000 --> 18:07.000
And so, for example, I'm involved with the open medical record systems.

18:07.000 --> 18:14.000
And that's a system in which increasingly governments are becoming part of the community, because governments run national health care systems.

18:14.000 --> 18:22.000
And they've realized they should contribute, but they're trying to figure out what does it look like when government entities are a big part of your community.

18:22.000 --> 18:28.000
Much like, as FLOSS, we figured out what it looks like when a big company is an employer.

18:28.000 --> 18:32.000
And so these new organizations we're going to need them.

18:32.000 --> 18:39.000
I think that further integration of the FLOSS community will be incredibly important.

18:39.000 --> 18:51.000
Each of our projects is good at solving a particular problem, but there are system-wide problems to be solved and that requires greater integration.

18:51.000 --> 19:05.000
So for example, with the internet, we have so many different open source and FLOSS projects, but we never made it easy to build a website.

19:05.000 --> 19:10.000
There's WordPress, but other than that, how do people build websites?

19:10.000 --> 19:13.000
Facebook.

19:13.000 --> 19:18.000
If you're a small business in the last 10 or 15 years and you need a website, it's Facebook.

19:18.000 --> 19:25.000
That's because the FLOSS community wasn't integrated into a full ecosystem that could recognize some of these problems.

19:25.000 --> 19:29.000
How do you pay? How do you identify yourself online? Facebook?

19:29.000 --> 19:32.000
Log in with Facebook, log in with Google.

19:32.000 --> 19:37.000
These are ecosystem problems which require integration to solve.

19:37.000 --> 19:44.000
And so I think we — we didn't win yet in terms of how you identify yourself online.

19:44.000 --> 19:51.000
But we've learned a lot of these things, and those learnings, and carrying them forward into AI, will be really key.

19:51.000 --> 19:55.000
And finally up here, increase support for developers.

19:55.000 --> 20:11.000
It's a great success and it's just wonderful when your project becomes the basis of, or a library used in something that has millions of users, but the burnout factor among those maintainers is really high.

20:11.000 --> 20:27.000
And we still are in the early phases of figuring out what's the sustainability model for these developers that fits with how they work, but also recognizes the importance of ongoing maintenance and security.

20:27.000 --> 20:40.000
The EU has taken some steps with open source software stewards and the Cyber Resilience Act, which is forcing this on the community.

20:40.000 --> 20:49.000
So we always want to comply, but also to go beyond that and figure out what are good models for sustainability.

20:49.000 --> 20:57.000
A second piece you know that we've lived is community.

20:57.000 --> 21:14.000
And there is a real burgeoning AI — well, I want to say, there are really legitimate AI practitioners, like native AI practitioners, who are deeply FLOSS in nature.

21:14.000 --> 21:30.000
And I've put this Builders thing here because this was a Mozilla program — not quite an accelerator, but it was, you know, a set of AI native projects which are deep in the FLOSS ethos.

21:30.000 --> 21:38.000
And what we've learned is that so many of these projects are desperate for community.

21:38.000 --> 21:42.000
There are pockets of people who, like — I'll pick one.

21:42.000 --> 21:49.000
A photo storage site, you know, two, three bucks a month, kind of like what Apple or Google charge.

21:49.000 --> 21:57.000
Why? Because your photos should be yours, and your photos shouldn't be harvested for training data, you know, all of the things that happen.

21:57.000 --> 22:05.000
Or your photo shouldn't be used to surveil you because it turns out like if you follow your photo stream, you know a lot about your life.

22:05.000 --> 22:17.000
And so they have a little business — because, you know, you need one to support yourselves — but the ethos that drives these things is both FLOSS and also AI native.

22:17.000 --> 22:24.000
And so there's a lot of that energy in the world, not yet really formed into coherent community.

22:24.000 --> 22:39.000
And so, you know, there's some of the natives kind of clumping together, but there's also a lot from the FLOSS software community that can make a big difference in the power of open AI.

22:39.000 --> 22:50.000
And so, we have our communities and our projects, and we're, you know, not fighting with each other over, you know, licenses and identity anymore.

22:50.000 --> 23:06.000
But there's the need to really make deeper interconnections among our communities and to be able to welcome in the AI native FLOSS people, even if we're worried about AI.

23:07.000 --> 23:14.000
Even if we think we're more worried than they are — you know, I'd invite you to have some conversations with them, because they're pretty clear-eyed.

23:14.000 --> 23:26.000
But if you believe, as I do, that we're not stopping it, then the ability to welcome and understand and grow and share some learnings matters.

23:26.000 --> 23:34.000
A lot of the AI native communities, for example, they don't know anything really about much of the software that we've done.

23:34.000 --> 23:42.000
And many of them know nothing about what I'd call the web — like, how do you find and deal with information — it's really surprising.

23:42.000 --> 23:46.000
And many of them know nothing about how you build a community.

23:46.000 --> 23:50.000
They really don't. They know that something must be open.

23:50.000 --> 23:56.000
Like, they are as legitimate as it comes, and they're putting their life energy into "this has to be open."

23:56.000 --> 24:12.000
But community — you know, some of these folks, when they put up a library that's interesting and they first experience what it's like to be part of a FLOSS community where people appear and contribute, like, they are just astonished.

24:12.000 --> 24:14.000
It's all new to them.

24:14.000 --> 24:19.000
And so the part I said earlier, about the understanding being really broad but shallow.

24:19.000 --> 24:31.000
It's also true among the whole set of AI native projects who are open but don't have the experience that we have of what to do with that and how to build a project.

24:31.000 --> 24:36.000
And for some of us, how to take a project and build a product out of it or make it sustainable.

24:36.000 --> 24:45.000
And so the connection between these new folks and those who've been living in the FLOSS world for a while is just critical.

24:45.000 --> 24:48.000
And the last piece I put action.

24:48.000 --> 25:00.000
And here I mean everything from hacking, you know, the projects to products, to ecosystems, to engaging with regulators.

25:00.000 --> 25:03.000
Since they're now regulating open source, it's really necessary.

25:03.000 --> 25:08.000
It's the whole range of what FLOSS means.

25:08.000 --> 25:16.000
And so I put a few things up here about what Mozilla is doing, just to give you sort of an idea across the spectrum.

25:16.000 --> 25:23.000
There's Llamafile, which lets you run an LLM as a single executable file across any operating system.

25:23.000 --> 25:35.000
It's a Mozilla project, it's local, designed to say, hey, you know, that architecture that requires the millions or billions of dollars —

25:35.000 --> 25:38.000
There's other ways to engage with AI.

25:38.000 --> 25:50.000
And to, you know, be an example and also to participate in the open source language model.

25:50.000 --> 25:54.000
You know, like the explosion that's happened in the last couple of years.

25:54.000 --> 26:02.000
You know, once the Llama model was released, or leaked, or whatever, you can see the power of FLOSS and open in the kinds of changes.

26:02.000 --> 26:10.000
And the first early challenges to the, you know, model of everything being giant and incredibly expensive and so on.

26:10.000 --> 26:15.000
Common Voice up there is the largest, I think, open source data set.

26:15.000 --> 26:17.000
It's all about voices here.

26:17.000 --> 26:29.000
Really enabled or aimed at making material accessible in whatever language it is that you speak.

26:29.000 --> 26:32.000
And so it crowdsources a range of things.

26:32.000 --> 26:38.000
And of course, it's increasingly savvy about how it is learning and machine learning.

26:38.000 --> 26:43.000
Neither of these are consumer products.

26:43.000 --> 26:58.000
In the dot-dot-dot there, I'm just going to mention Firefox Translate, where we do take local, transparent, safe, understandable, privacy-preserving artificial intelligence and ship it.

26:58.000 --> 27:08.000
And if you're interested in how we think through how do you be privacy preserving and transparent and how does it work and what's different.

27:08.000 --> 27:12.000
You know, come to the dev room or talk to our folks here.

27:12.000 --> 27:27.000
So there is some shipping in products — which obviously, I think, we at Mozilla are a little more averse to: you know, shipping it knowing it's not really ready for prime time.

27:27.000 --> 27:37.000
Don't worry about it — that attitude. So we've spent a lot of time with Firefox Translate to learn, you know, how to do things a little bit differently.

27:37.000 --> 27:51.000
In action, we've also done a bunch of technical convenings here trying to gather a bunch of the actors in the open AI space.

27:51.000 --> 27:57.000
And understand where there's shared common ground, and where there's difference.

27:57.000 --> 28:05.000
Of the people who say they're open, what do they mean by open and what as a community do we think we ought to mean by open.

28:05.000 --> 28:11.000
And so these will continue as well.

28:11.000 --> 28:15.000
And these of course are, you know, the papers are all published and are open and so on.

28:15.000 --> 28:30.000
And so it's increasingly technical, but it's also a form of community building that's not centered directly around code, but around some of the understanding and frameworks.

28:30.000 --> 28:35.000
Another piece of the future is new allies.

28:35.000 --> 28:48.000
And this I think is going to be controversial among our community who are our allies and how philosophically aligned do they need to be.

28:48.000 --> 28:51.000
So I'll give a couple of examples.

28:51.000 --> 28:55.000
One: what about Meta?

28:55.000 --> 29:04.000
Until recently, like, Llama was the resource that allowed the FLOSS community to engage.

29:04.000 --> 29:22.000
And it was, I mean, just a wild, wild explosion — from the "AI has to run on big machines only" to being able to run on a laptop, to being able to run on your phone, to being able to run on a Raspberry Pi.

29:22.000 --> 29:27.000
And all of this activity is in the open source community. So how do we think about that?

29:27.000 --> 29:32.000
Clearly Meta is not aligned with FLOSS values in a whole set of spaces.

29:32.000 --> 29:40.000
They have provided the biggest asset in open AI for 18 months to two years.

29:40.000 --> 29:45.000
So how do we think about them? Are they allies? Are they frenemies? Are they enemies?

29:45.000 --> 29:50.000
And this is an area where once again, we're going to be across the spectrum.

29:50.000 --> 29:55.000
With some set of people very happy to use Llama because it's there and it works.

29:55.000 --> 30:01.000
You know, and others feeling like, no, that's not an organization I want to engage with.

30:01.000 --> 30:16.000
And so as we try to make progress, this question of how we treat others who are on a different spot on the philosophical spectrum will be quite real.

30:16.000 --> 30:21.000
And the second example is, of course, DeepSeek, which has turned everything upside down.

30:21.000 --> 30:32.000
And is another example that the architecture of generative AI does not need to be the original architecture from so-called OpenAI.

30:32.000 --> 30:38.000
So that's an open source piece, with an MIT license across the board.

30:38.000 --> 30:44.000
So as a FLOSS community, what do we make of that? It looks pretty open.

30:45.000 --> 30:55.000
And certainly if you read the interviews with the founder about what open means, and how culturally important it is to their company, and how they think about it,

30:55.000 --> 30:59.000
that language sounds a lot like us.

30:59.000 --> 31:13.000
And yet, at least in the United States — that's all I can speak for — you know, there's this geopolitical change, you know, like constant discussion about the geopolitics of technology.

31:14.000 --> 31:16.000
And so what does that mean?

31:16.000 --> 31:26.000
You know, Eric Schmidt did an op-ed right away saying, okay — I think he said "US open source must win," but meaning our open source must win.

31:26.000 --> 31:29.000
So like, what does that actually mean?

31:29.000 --> 31:34.000
And you know, like Europe is in a slightly different space than the United States about China.

31:34.000 --> 31:41.000
So like those kinds of questions will be part of our world as well.

31:41.000 --> 31:44.000
And then there's the security question.

31:44.000 --> 32:01.000
And by security, I mean both security vulnerabilities, but also the geopolitical question, which comes up in almost every, you know, AI open private discussion that I'm in.

32:02.000 --> 32:12.000
And so the security vulnerability piece is very much same same but different, because we've been through this before.

32:12.000 --> 32:24.000
In the early years of FLOSS, I mean, I answered a million questions about open; people would just say, like, open is less secure.

32:24.000 --> 32:27.000
You're like, no, wait a minute, wait a minute.

32:27.000 --> 32:35.000
Here's why open is more secure, and, oh, by the way, you don't know how insecure all those closed-source products you're using are.

32:35.000 --> 32:40.000
I mean, we do, because we're shipping one and these vulnerabilities are not that different between browsers, right?

32:40.000 --> 32:46.000
So we know a lot, and, you know, you're not particularly safer in proprietary systems.

32:46.000 --> 32:49.000
So those kinds of questions are going to come up again.

32:50.000 --> 32:52.000
We have a lot of experience there.

32:52.000 --> 32:56.000
And then the geopolitical question just seems to be everywhere.

32:56.000 --> 33:01.000
And so that is very new, like, probably unknown to us as a community.

33:01.000 --> 33:07.000
I was on a panel with Mike Milinkovich of the Eclipse Foundation a couple of days ago.

33:07.000 --> 33:11.000
And he was saying how his role now includes law and policy.

33:11.000 --> 33:18.000
Something he paid no attention to before, but with the new act and its open source software steward role,

33:18.000 --> 33:26.000
it's just been added to his role, and paying attention to this in the future is something that's got to be done.

33:26.000 --> 33:30.000
And so these kinds of geopolitical considerations are going to come up, too.

33:30.000 --> 33:40.000
You know, the US now has, like, an exception for the weights of open source models in its export regulations.

33:40.000 --> 33:50.000
That's pretty new in the last, you know, couple months that open source and model weights are in the regulatory discussion.

33:50.000 --> 33:52.000
And so what does export mean?

33:52.000 --> 33:54.000
And are we going to get caught up in it?

33:54.000 --> 33:58.000
Like, all of those things are going to be part of our world.

33:58.000 --> 34:08.000
So there's some of the same from the past on the vulnerability side, but there's something pretty different about this larger set of security questions.

34:09.000 --> 34:18.000
And so we have a lot to bring to making AI more free and more open.

34:18.000 --> 34:28.000
And it's a really necessary piece because it's even closer to us and our thinking than social media.

34:28.000 --> 34:35.000
And so for all of this, you know, floss must win.

34:35.000 --> 34:48.000
And with that, I think it's time for questions. That's my 10 minutes.

34:48.000 --> 34:54.000
Thank you very much. And despite technical hurdles, you still have time for questions.

34:54.000 --> 35:04.000
So I'm watching, there's a, I'm closing the microphone.

35:04.000 --> 35:13.000
Yeah, just one thing to say about this: when we talk about source code, we know what it means to be open source, so we have open source code.

35:13.000 --> 35:23.000
But when you talk about models, any of these kinds, as you said, Llama or any other model that we have now, you don't know what's inside.

35:23.000 --> 35:31.000
You can open it, but, yeah, you don't see much there; it takes time to understand what data went into it.

35:31.000 --> 35:39.000
How can we treat that as open? Can we do some tests to say whether it's open or not open?

35:39.000 --> 35:48.000
Or is it that when you know what was used to train it, then you can say, oh, it's open, or, oh, it's not open?

35:48.000 --> 35:54.000
How do we know that part?

35:54.000 --> 36:04.000
Let's see, do you mean know as in practice, or know as in how we should make our definition?

36:04.000 --> 36:15.000
Okay, yeah, yeah.

36:15.000 --> 36:23.000
So I'll give a personal view now, not a Mozilla view in that sense.

36:23.000 --> 36:37.000
It's my hope that the OSI definition, where open doesn't include data, so you don't know those things, is a moment in time that reflects where we are.

36:37.000 --> 37:00.000
Like, a moment in time, to be relevant to what's actually happening, and that we can get, both philosophically and practically, to a more open requirement for what it means to be open.

37:00.000 --> 37:07.000
Well, so let's just maybe take DeepSeek.

37:07.000 --> 37:17.000
If your code and your weights and your data are all under an MIT license, would that be open?

37:17.000 --> 37:27.000
I don't think so, now. It's just that, yeah, what do we think? Is it really open? That's the whole thing, you know, when we say it's open. Is this now open? I don't know.

37:27.000 --> 37:30.000
Why is this open or not?

37:30.000 --> 37:34.000
Or why would I have said that about Llama, or just in general?

37:34.000 --> 37:43.000
Yeah, so it's a box, like a black box, and then you would say, I use this thing because it helps me with this and that.

37:43.000 --> 37:47.000
Yeah, it's still a black box, you know, a box where you don't know what's inside.

37:47.000 --> 37:54.000
Yes, maybe it's the reproducibility test that we're looking at, right, which comes up: how do you actually implement the open source definition?

37:54.000 --> 38:00.000
Like you have to be able to reproduce it. And if you use that as the practical test, you need the data, right?

38:00.000 --> 38:09.000
I mean, you need all of those things to be able to reproduce it. So I think that's where I would try to get practical about understanding it.
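[The "practical test" described here can be sketched in a few lines. This is a minimal, hypothetical illustration, not any official OSI or Mozilla criterion: the artifact names and the small license allow-list are assumptions for the example.]

```python
# Sketch of the reproducibility-as-openness test: before you can even
# attempt to reproduce a model, all three artifacts -- code, weights,
# and training data -- must be released, each under an open license.
# OPEN_LICENSES is an illustrative allow-list, not an authoritative one.
OPEN_LICENSES = {"MIT", "Apache-2.0", "CC-BY-4.0"}

def openness_report(release: dict) -> dict:
    """Map each required artifact to True if it is openly licensed."""
    required = ("code", "weights", "training_data")
    return {a: release.get(a) in OPEN_LICENSES for a in required}

def is_reproducible_open(release: dict) -> bool:
    """Strict reading: every artifact must pass, or reproduction is blocked."""
    return all(openness_report(release).values())

# A weights-and-code release (MIT on both) still fails the strict test
# if the training data is withheld:
weights_only = {"code": "MIT", "weights": "MIT", "training_data": None}
full_release = {"code": "MIT", "weights": "MIT", "training_data": "CC-BY-4.0"}
```

[Under this toy test, `weights_only` is not "open" even though both of its released artifacts carry an MIT license, which mirrors the black-box concern raised in the question.]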

38:09.000 --> 38:14.000
Then maybe we should come up with some kind of, you know, open check.

38:14.000 --> 38:19.000
So you run it on any of these modern models and say, oh, this one is open, this one is not.

38:19.000 --> 38:29.000
So you throw a kind of series of tests at it, like prompting it again and again.

38:29.000 --> 38:37.000
And when you see the feedback, then you can say, oh, this is really open or not, or something, you know?

38:37.000 --> 38:46.000
I think there's actually a period of trying different things and seeing both how they work and how they feel and what we think at the end of them.

38:46.000 --> 38:59.000
Right, like you might try that one and someone else might try a different reproducibility technique and we're just going to have to learn by doing, I guess.

38:59.000 --> 39:07.000
Any other questions?

39:07.000 --> 39:16.000
Thank you, Mitchell. That was excellent. So my question is about the community and new communities.

39:16.000 --> 39:25.000
There's a lot of communities, like, for example, public service broadcasters, who are struggling with this space.

39:25.000 --> 39:34.000
And do you think that the kind of FLOSS community could do more, you know, to work with them?

39:34.000 --> 39:38.000
And especially around generative AI.

39:38.000 --> 39:50.000
So, for example, the way that it's built is a certain way, and maybe we've been led to believe that that is the only way to do it.

39:50.000 --> 40:06.000
Maybe there are other ways, and the FLOSS community and these other partners, I'll say, could engineer a completely different way of doing it, which is more open and more fitting with the FLOSS ethos.

40:06.000 --> 40:10.000
Does that make sense?

40:10.000 --> 40:13.000
I actually had some trouble hearing.

40:14.000 --> 40:16.000
So, sorry.

40:16.000 --> 40:19.000
So, I guess my question really is, there's two questions really.

40:19.000 --> 40:26.000
One about, you know, new kinds of partners or community members.

40:26.000 --> 40:37.000
And so, there's ones like public service broadcasters, for example, which tend to be not that friendly to the FLOSS community.

40:37.000 --> 40:45.000
And the other one is, could this also bring about new ways of creating generative AI?

40:45.000 --> 40:52.000
So, we've been led to believe that there's only kind of one way to create these models.

40:52.000 --> 41:02.000
So, maybe there's an opportunity to build these models in different ways, which are beneficial for both communities, or ultimately the FLOSS community.

41:07.000 --> 41:13.000
Let's see, let's see if I can get both in one.

41:13.000 --> 41:36.000
I'd say, taking what we've learned and lived in building communities and applying it to different and broader groups in new ways is like a core piece of practicing FLOSS, and extending that, I think, is absolutely critical.

41:36.000 --> 41:40.000
Especially in AI, which, you know, everyone's engaged in right now.

41:40.000 --> 41:45.000
And the regulation is coming so quickly that more people may be able to understand it.

41:45.000 --> 41:50.000
And then secondly, I think there are other ways of building generative AI.

41:50.000 --> 41:56.000
I think the DeepSeek example is just a great one, because it's different, right?

41:56.000 --> 42:01.000
And it works in more resource constrained environments, and we're still really new.

42:01.000 --> 42:07.000
And who knows what else is out there like that, you know, that hasn't come into our world yet.

42:07.000 --> 42:09.000
So, yes, I think, to both.

42:09.000 --> 42:11.000
Thanks.

42:11.000 --> 42:13.000
Thank you.

42:13.000 --> 42:16.000
Any other questions?

42:16.000 --> 42:19.000
I'm going to have to run.

42:25.000 --> 42:28.000
Thank you for running.

42:29.000 --> 42:37.000
You said, and we all know, that the big masses of users went to closed source in the past.

42:37.000 --> 42:44.000
And they'll go to WhatsApp and log in via Google.

42:44.000 --> 42:53.000
Any ideas how to make it better in the future?

42:53.000 --> 42:54.000
Let's see.

42:54.000 --> 43:00.000
Are you asking about the commercial success in the consumer market space?

43:00.000 --> 43:03.000
Like what consumers actually use?

43:03.000 --> 43:20.000
Any idea how to motivate or catch some of the big masses of users to go another way, to use other, ethical models,

43:21.000 --> 43:23.000
to use those solutions.

43:23.000 --> 43:28.000
There are so many great solutions, but people don't go to use them.

43:28.000 --> 43:31.000
And I would just love to see.

43:31.000 --> 43:40.000
You know, I think the industry has pretty strong incentives.

43:40.000 --> 43:49.000
That aren't completely aligned with public benefit.

43:49.000 --> 43:53.000
But there is a lot of power in the market.

43:53.000 --> 44:05.000
And the thing is, competing in the consumer market, which Mozilla does,

44:05.000 --> 44:11.000
It is really hard.

44:11.000 --> 44:29.000
You have to be at the point of the spectrum where you balance what you think is the ideal and the way things should be with what consumers choose.

44:29.000 --> 44:34.000
And that is not for everyone.

44:34.000 --> 44:40.000
It is not the part on the spectrum where you are acting from your purest ideals.

44:40.000 --> 44:49.000
Right, I've always said Mozilla is not the height of purity, because the market is brutal.

44:49.000 --> 44:52.000
And there's nothing like what consumers actually care about.

44:52.000 --> 44:55.000
You can have the power of the market, which is really powerful.

44:55.000 --> 44:57.000
I mean, we've experienced that at Mozilla, too.

44:57.000 --> 45:01.000
It's really powerful when you catch a hold of what consumers do.

45:01.000 --> 45:08.000
But you can't educate them, like as the way out of everything.

45:08.000 --> 45:12.000
You know, most people are trying to pay their bills.

45:12.000 --> 45:17.000
You know, not get thrown out, figure out in the US, not here, how to have health care.

45:17.000 --> 45:20.000
You know, how to keep your kids from being bullied.

45:20.000 --> 45:23.000
Like, that's the reality of life, you know, for most people.

45:23.000 --> 45:29.000
And so the things that we know are important for the health of society are very hard to educate people about.

45:29.000 --> 45:35.000
Like, you know, across everything, I mean, there's your technology, what fish do you eat?

45:35.000 --> 45:38.000
Like, all your sustainability, how do you recycle?

45:38.000 --> 45:45.000
Like, to hope that we can educate all the things that we know, you know, into the average consumer's life.

45:45.000 --> 45:49.000
It's worth trying, but, you know, I personally, and you can see that in Mozilla,

45:49.000 --> 45:53.000
have chosen a different point on the spectrum, which is, yeah.

45:53.000 --> 46:00.000
I'm not as pure, you know.

46:00.000 --> 46:08.000
But I have seen and caught and, hopefully, made good positive use of the power of the market.

46:08.000 --> 46:13.000
So, I don't know if that's helpful or not, but that's the history that I've learned.

46:13.000 --> 46:14.000
Thank you.

46:14.000 --> 46:15.000
Thank you.

46:15.000 --> 46:25.000
Thank you.

