WEBVTT

00:00.000 --> 00:10.280
OK, good morning, everybody.

00:10.280 --> 00:13.200
Well, I'm one of your two speakers today.

00:13.200 --> 00:16.320
I'm Federico Iezzi, working for Google Cloud over there.

00:16.320 --> 00:20.760
I do mostly application modernization and performance

00:20.760 --> 00:21.960
optimization.

00:21.960 --> 00:26.920
And today I'm here to talk about VPP together with...

00:26.920 --> 00:27.800
So hi, everyone.

00:27.800 --> 00:29.440
I'm Mohammed Hawari.

00:29.440 --> 00:32.560
I'm a software engineering tech lead at Cisco Systems.

00:32.560 --> 00:34.960
And I'm working on the VPP project.

00:34.960 --> 00:36.360
I'm a committer on this project.

00:36.360 --> 00:40.120
And I'll be talking about VPP mainly.

00:40.120 --> 00:45.320
OK, so before going into all the gritty details

00:45.320 --> 00:49.600
about what VPP and this work are all about:

00:49.600 --> 00:53.920
around spring of last year, I set

00:53.920 --> 00:58.960
myself a challenge: to test the maximum performance

00:58.960 --> 01:01.480
throughput achievable within a single-zone deployment

01:01.480 --> 01:04.200
on Google Cloud.

01:04.200 --> 01:06.000
I wanted to validate, end to end,

01:06.000 --> 01:08.280
the latest machines available.

01:08.280 --> 01:11.200
I come from the Telco world; I have about 10 years of experience

01:11.200 --> 01:14.320
in fast data paths, specifically DPDK.

01:14.320 --> 01:19.000
And so, staying in my comfort zone, let's leverage a Telco application

01:19.000 --> 01:19.720
like VPP.

01:19.720 --> 01:22.440
So let's see what numbers we can achieve.

01:22.440 --> 01:24.480
That was the question over the last three months:

01:24.480 --> 01:30.160
Can Google Cloud be a robust platform capable of handling

01:30.160 --> 01:31.920
typical VNF

01:31.920 --> 01:34.160
use cases at high packet rates?

01:37.040 --> 01:40.760
So for this presentation, we're going to talk

01:40.760 --> 01:43.120
about software-based fast data path.

01:43.120 --> 01:46.000
So let's quickly mention the technology enablers.

01:46.000 --> 01:52.000
Mainly there are enablers in Linux, which consist of the ability

01:52.000 --> 01:53.920
to implement user-space drivers.

01:53.920 --> 01:57.120
So UIO, VFIO, all these kinds of things.

01:57.120 --> 01:58.720
And minor improvements in the Linux kernel

01:58.720 --> 02:03.760
were done over the years to get better real-time properties,

02:03.760 --> 02:08.200
like the tickless kernels, huge pages, et cetera.

02:08.200 --> 02:10.360
So this is one side of the enablers.

02:10.360 --> 02:12.320
The other side is actually DPDK.

02:12.320 --> 02:14.560
The poll-mode drivers in DPDK

02:14.560 --> 02:17.760
that actually expose device drivers in user space

02:17.760 --> 02:20.640
and allow you to implement networking applications in user space

02:20.640 --> 02:24.920
with a decent amount of performance.
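
NOTE
A minimal sketch of what a DPDK poll-mode receive loop looks like, to illustrate the "user-space driver, no kernel in the data path" point above. This is not code from the talk; the port is assumed to be already configured (rte_eth_dev_configure / rte_eth_rx_queue_setup), and BURST_SIZE is an arbitrary choice.
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#define BURST_SIZE 32
static void rx_loop(uint16_t port_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        /* Poll the NIC from user space: no interrupts, no syscalls per packet. */
        uint16_t n = rte_eth_rx_burst(port_id, 0 /* queue */, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++) {
            /* ... application-specific packet processing on bufs[i] ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}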

02:24.920 --> 02:29.800
So now let's talk about the Vector Packet processor.

02:29.800 --> 02:32.240
So VPP, it's an open source project.

02:32.240 --> 02:32.840
Sorry, let's see.

02:32.840 --> 02:38.400
It has been open source for eight years now.

02:38.400 --> 02:40.520
It's a fast, open-source,

02:40.520 --> 02:42.040
user-space-based networking data plane.

02:42.040 --> 02:46.880
It relies on kernel bypass thanks to DPDK among others.

02:46.880 --> 02:48.080
It's feature-rich.

02:48.080 --> 02:49.560
It's extensible through plugins.

02:49.560 --> 02:51.800
So you can implement your own network programming logic

02:51.800 --> 02:53.800
in it.

02:53.800 --> 02:56.680
And it's especially optimized for high throughput.

02:56.680 --> 03:00.400
High throughput, meaning you can process lots of packets

03:00.400 --> 03:01.400
per second.

03:01.400 --> 03:04.200
So I'll explain how we do that.

03:04.200 --> 03:07.600
Actually, what are the details allowing us

03:07.600 --> 03:10.280
to achieve a very high throughput?

03:10.280 --> 03:11.800
And it's available on Linux.

03:11.800 --> 03:17.640
And since one year, on FreeBSD — I'm looking for Tom.

03:17.640 --> 03:18.640
Oh, he left.

03:18.640 --> 03:22.200
He did the FreeBSD port.

03:22.200 --> 03:23.920
OK, so what's the approach of VPP

03:23.920 --> 03:26.400
to get high performance?

03:26.400 --> 03:29.840
So VPP stands for Vector Packet processor.

03:29.840 --> 03:32.600
So it processes packets in batches instead

03:32.600 --> 03:36.000
of having a per-packet run-to-completion model.

03:36.000 --> 03:39.280
And by processing packets in batches, in vectors,

03:39.280 --> 03:41.640
it allows you to optimize the instruction cache,

03:41.640 --> 03:44.480
because you are doing a relatively

03:44.480 --> 03:47.440
elementary operation on a bunch of packets, which

03:47.440 --> 03:50.240
means that you don't have instruction cache misses

03:50.240 --> 03:51.840
in your CPU.

03:51.840 --> 03:54.240
It relies on kernel bypass and zero copy, as I mentioned.

03:54.240 --> 03:56.320
So packets are DMA'd directly

03:56.320 --> 03:58.920
into the user space memory.

03:58.920 --> 04:01.680
And it relies on explicit prefetches

04:01.680 --> 04:04.680
and what we call dual or quad loops.

04:04.680 --> 04:08.240
I'll focus on it a bit later.

04:08.240 --> 04:10.840
OK, so VPP has a very similar graph

04:10.840 --> 04:14.120
to a previous presentation.

04:14.120 --> 04:16.120
It has the same ideas.

04:16.120 --> 04:18.240
So this is the packet processing graph.

04:18.240 --> 04:22.040
So you can see you have a vector of four packets.

04:22.040 --> 04:25.800
And it goes through the processing graph.

04:25.800 --> 04:29.080
Like here you have IPv4 and IPv6 packets

04:29.080 --> 04:33.720
so the vector is split between IPv4 lookup and IPv6 lookup.

04:33.720 --> 04:36.720
Then forwarding decision is taken.

04:36.720 --> 04:39.000
Some of the packets

04:39.000 --> 04:41.560
belong to the first vector; the first vector

04:41.560 --> 04:43.160
is routed to the first interface,

04:43.160 --> 04:44.800
the others to the second interface.

04:44.800 --> 04:48.960
Vectors are reconstituted and then sent over the wire.

04:48.960 --> 04:54.240
And this architecture allows you to have each node

04:54.240 --> 04:55.920
process multiple packets at a time.

04:55.920 --> 04:59.800
So, fewer instruction cache misses.

04:59.800 --> 05:03.720
Now let's talk about one of the other ideas, explicit prefetches

05:03.720 --> 05:05.320
and quad loops.

05:05.320 --> 05:07.200
So quad loop, how does it work?

05:07.200 --> 05:11.800
So here you have a prototype of the typical VPP node.

05:11.800 --> 05:15.240
So how do we code in VPP?

05:15.240 --> 05:17.960
We process first four packets at a time.

05:17.960 --> 05:19.760
And while we process four packets,

05:19.760 --> 05:23.080
we are sending a prefetch instruction to the CPU

05:23.080 --> 05:26.360
to fetch the four next packets.

05:26.360 --> 05:29.720
So the CPU is going to send a request to the RAM.

05:29.720 --> 05:33.440
and get the next four packets into the cache.

05:33.440 --> 05:37.320
And in the meantime, we are going to process the four packets

05:37.320 --> 05:40.320
right now, the four current packets.

05:40.320 --> 05:42.640
We're going to process them four at a time.

05:42.640 --> 05:44.400
And we are going to interleave the instruction,

05:44.400 --> 05:46.800
like instruction one for packet one, instruction one for packet

05:46.800 --> 05:49.080
two, instruction one for packet three,

05:49.080 --> 05:50.680
instruction one for packet four.

05:50.680 --> 05:54.120
Then instruction two for all these four packets,

05:54.120 --> 05:57.560
which allows you to fully fill the pipelines of the CPU.

05:57.560 --> 06:02.560
And by doing so, you will minimize the number of cache misses

06:02.560 --> 06:06.000
and you get actually quite good performance.
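
NOTE
A minimal sketch of the quad-loop-plus-prefetch pattern just described. This is illustrative pseudo-node code, not actual VPP source; the buffer type and process_one() are placeholders standing in for real per-packet work.
typedef struct { unsigned char *data; } buf_t;            /* placeholder packet buffer */
static void process_one(buf_t *b) { (void)b; /* per-packet work goes here */ }
static void node_process(buf_t **bufs, int n)
{
    int i = 0;
    /* Quad loop: work on four packets while prefetching the next four. */
    while (i + 8 <= n) {
        __builtin_prefetch(bufs[i + 4]->data);
        __builtin_prefetch(bufs[i + 5]->data);
        __builtin_prefetch(bufs[i + 6]->data);
        __builtin_prefetch(bufs[i + 7]->data);
        /* Interleaving the same step across four packets keeps the CPU
           pipelines busy while the prefetched lines travel from RAM. */
        process_one(bufs[i + 0]);
        process_one(bufs[i + 1]);
        process_one(bufs[i + 2]);
        process_one(bufs[i + 3]);
        i += 4;
    }
    /* Single loop for the tail of the vector. */
    for (; i < n; i++)
        process_one(bufs[i]);
}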

06:06.000 --> 06:11.000
And finally, VPP is built with performance in mind.

06:11.000 --> 06:14.160
So unlike lots of open source projects,

06:14.160 --> 06:18.560
and actually lots of proprietary projects,

06:18.560 --> 06:21.640
performance is tested throughout the whole life cycle

06:21.640 --> 06:23.040
of each commit.

06:23.040 --> 06:26.240
So for each commit in VPP, we have a full test bed

06:26.240 --> 06:29.240
that tests the performance on a variety of scenarios

06:29.240 --> 06:30.240
and hardware.

06:30.240 --> 06:32.360
And you can track regressions in the performance

06:33.160 --> 06:36.960
at the granularity of cycles per packet, per node.

06:36.960 --> 06:40.640
And each commit that might introduce a regression

06:40.640 --> 06:45.200
is tracked and we make sure that no regression happens.

06:45.200 --> 06:46.200
That's it.

06:46.200 --> 06:47.360
Federico?

06:47.360 --> 06:49.360
Thank you, Mohammed.

06:49.360 --> 06:51.600
I think it's quite impressive, the level of tuning

06:51.600 --> 06:58.480
and optimization and coding that has been done on VPP,

06:58.480 --> 07:01.880
kind of impressive.

07:01.880 --> 07:05.200
So looking at the testing topologies,

07:05.200 --> 07:06.680
I have two examples for you today.

07:06.680 --> 07:09.560
The first one, it's fairly simple.

07:09.560 --> 07:12.120
Traffic generator and network receiver are just

07:12.120 --> 07:15.080
some testpmd machines.

07:15.080 --> 07:17.280
Generate packets on one end, deliver on the other,

07:17.280 --> 07:20.240
uniflow, excuse me.

07:20.240 --> 07:23.480
A unidirectional type of load, layer 2 traffic.

07:23.480 --> 07:27.280
So nothing particularly weird.

07:27.280 --> 07:31.840
VPP in between routes packets from one leg, or from one network,

07:31.840 --> 07:35.080
to another.

07:35.080 --> 07:37.080
The Linux systems underneath are deeply

07:37.080 --> 07:39.440
tuned for deterministic performance.

07:39.440 --> 07:45.400
So doing all sorts of possible isolations,

07:45.400 --> 07:51.320
tickless kernel, isolating the userland and worker threads.

07:51.320 --> 07:54.480
And packet size, 64 bytes.

07:54.480 --> 08:00.520
And so I chose to bring you two specific tests here.

08:00.520 --> 08:04.680
The first one is with a single PMD thread inside VPP.

08:04.680 --> 08:09.960
And I was able to achieve about 18 million packets per second.

08:09.960 --> 08:14.120
20 — excuse me, 12 gigabits of traffic, roughly.

08:14.120 --> 08:18.120
But the most astonishing detail is the drop rate,

08:18.120 --> 08:21.360
only four parts per billion, which

08:21.360 --> 08:23.240
is, I think, quite remarkable.

08:23.240 --> 08:26.480
Coming from a Telco background,

08:26.480 --> 08:29.200
those numbers are quite crazy, given that this

08:29.200 --> 08:32.800
is on a public cloud shared infrastructure.

08:32.800 --> 08:34.240
Scaling up the number, of course,

08:34.240 --> 08:39.600
and using a much bigger machine with way more resources,

08:39.600 --> 08:43.600
I was able to break the 100 million packet per second

08:43.600 --> 08:47.120
psychological barrier — specifically, 108.

08:47.120 --> 08:48.800
The drop rate was much higher, but still

08:48.800 --> 08:51.960
kind of reasonably low for the type of environment.

08:51.960 --> 08:58.200
And fun fact: 108 million packets per second

08:58.200 --> 09:05.640
are, on a standard IMIX, roughly 300 gigabits of traffic.
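
NOTE
Sanity-checking that fun fact with one common "simple IMIX" definition (64, 594, and 1518-byte frames in a 7:4:1 ratio, average ≈ 362 bytes): 108e6 packets/s × 362 bytes × 8 bits ≈ 313 Gbit/s, i.e. roughly the 300 gigabits quoted.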

09:05.640 --> 09:07.880
Second network topology, everything remains the same.

09:07.880 --> 09:10.880
Except that the traffic generator swaps out testpmd

09:10.880 --> 09:12.680
for Pktgen-DPDK.

09:12.680 --> 09:14.320
Pktgen sends and receives the traffic.

09:14.320 --> 09:16.000
In this case, it's layer 3 traffic,

09:16.000 --> 09:18.880
with both legs being used, sending

09:18.880 --> 09:21.720
and receiving on both interfaces.

09:21.720 --> 09:29.760
Variable destination ports, variable packet size,

09:29.760 --> 09:34.720
and effectively tens of millions of possible flow

09:34.720 --> 09:36.360
combinations.

09:36.360 --> 09:41.000
In this, again, kind of challenging scenario,

09:41.000 --> 09:45.600
I managed to achieve about 20 million packets per second,

09:45.600 --> 09:50.280
only with 3 PMD threads, 25 gigabits of traffic.

09:50.280 --> 09:52.480
I think I reached the limit of the traffic generator

09:52.480 --> 09:57.440
because, although VPP was not dropping packets...

09:57.440 --> 10:02.360
But anyway, this is kind of another situation,

10:02.360 --> 10:09.160
a kind of challenging situation where we see the deep optimizations

10:09.160 --> 10:13.680
done in VPP put to good use.

10:13.680 --> 10:16.440
So what are the technology enablers?

10:16.440 --> 10:19.520
First, the first and foremost, huge shout out

10:19.520 --> 10:23.160
to the VPP community and to the development team,

10:23.160 --> 10:26.520
because they did an incredible job actually optimizing

10:26.520 --> 10:29.040
pretty much everything out of it.

10:29.040 --> 10:32.080
On the Google Cloud side, the form factor is a

10:32.080 --> 10:35.520
third-generation machine that relies on what we call the

10:35.520 --> 10:39.520
Titanium architecture. I will go into details in the next slide.

10:39.520 --> 10:42.120
NUMA topology: on compute-optimized machines,

10:42.120 --> 10:44.880
Google Cloud exposes what is underneath

10:44.880 --> 10:46.200
directly to the guest.

10:46.200 --> 10:50.680
So memory allocation and CPU mapping

10:50.680 --> 10:54.800
happens on the correct NUMA node as per the hardware.

10:54.800 --> 10:59.880
Jupiter is our network fabric within all our Google Cloud

10:59.880 --> 11:01.320
deployments.

11:01.320 --> 11:04.520
This is a 10-something-year effort at this point.

11:04.520 --> 11:08.440
And in the latest iteration, we are able to handle

11:08.440 --> 11:12.080
an aggregated capacity of six petabits of traffic,

11:12.080 --> 11:15.080
connecting tens of thousands of servers with sub-100

11:15.080 --> 11:19.400
millisecond — excuse me, microsecond — latency.

11:19.400 --> 11:22.920
Last but not least, well, this is a compact placement deployment,

11:22.920 --> 11:26.360
not deployed on the same rack — otherwise it

11:26.360 --> 11:30.440
wouldn't make sense — but deployed on racks next to one another

11:30.440 --> 11:35.720
in our zones.

11:35.720 --> 11:39.280
Another interesting bit I didn't mention on the slides:

11:39.280 --> 11:43.160
Everything that you're seeing here, it's all GA features.

11:43.160 --> 11:45.960
All of this is accessible and usable by end users.

11:45.960 --> 11:50.200
There are no alpha APIs or allow lists whatsoever.

11:50.200 --> 11:56.040
All of this is standard regular features.

11:56.040 --> 12:00.920
So, very quickly, on Google Titanium, since I mentioned it a few times.

12:00.920 --> 12:04.120
In the industry, those devices are generally referred to

12:04.120 --> 12:06.200
as DPUs, or data processing units.

12:06.200 --> 12:09.160
We call it an IPU, infrastructure processing unit —

12:09.160 --> 12:12.040
back in the day, SmartNICs.

12:12.040 --> 12:16.440
Well, effectively, we don't just do protocol

12:16.440 --> 12:20.760
forwarding, GRO, or checksums.

12:20.760 --> 12:24.520
The entire virtual NIC runs in hardware and is PCI passed

12:24.520 --> 12:26.520
through inside the guest.

12:26.520 --> 12:32.760
This effectively allows us to achieve line-rate results.

12:32.760 --> 12:34.760
It's a collaboration together with Intel.

12:34.760 --> 12:36.040
We call it Titanium.

12:36.040 --> 12:41.320
The product SKU on their side is the E2000.

12:41.320 --> 12:44.600
And yeah, by the way, I didn't mention the NIC.

12:44.600 --> 12:49.560
The virtual NIC is called GVNIC, Google Virtual NIC.

12:49.560 --> 12:54.120
We moved away from virtio, mostly for performance reasons.

12:54.120 --> 13:00.920
It was built from the ground up with a strong focus on low latency.

13:00.920 --> 13:06.200
And the drivers are mainlined in all the major projects:

13:06.200 --> 13:11.240
Linux, FreeBSD, as well as DPDK.

13:11.240 --> 13:14.440
And so, thanks, Federico.

13:14.440 --> 13:17.240
So let's give a bit of perspective on what we can do with VPP,

13:17.240 --> 13:19.400
especially with this work that proves that we can run

13:19.400 --> 13:23.320
VPP — actually, that proves that there are cloud providers

13:23.320 --> 13:26.920
that can run VPP with optimal performance.

13:26.920 --> 13:30.120
Not all of them can do that, actually.

13:30.120 --> 13:31.560
You should test.

13:31.560 --> 13:33.080
So VPP, what can we do with it?

13:33.080 --> 13:34.200
First, it's plugin-oriented.

13:34.200 --> 13:36.840
So it's easy to customize packet processing

13:36.840 --> 13:39.560
for customized networking behavior.

13:39.560 --> 13:41.880
So this is really what we call, what we can call network

13:41.880 --> 13:43.080
programmability.

13:43.080 --> 13:45.320
And VPP should be seen as a programming framework,

13:45.320 --> 13:47.960
not as a software router, that allows you to implement

13:47.960 --> 13:50.040
your own cloud network features.
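
NOTE
To make the "programming framework" point concrete, here is a rough, from-memory sketch of the shape of a VPP plugin graph node in C. Treat the exact macro fields as approximate rather than authoritative; the node name, next node, and the empty body are placeholders.
#include <vlib/vlib.h>
#include <vnet/vnet.h>
static uword my_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
{
  u32 *from = vlib_frame_vector_args (frame);   /* vector of buffer indices */
  (void) from;
  /* Inspect or rewrite each packet here (typically with the dual/quad-loop
     pattern shown earlier), then enqueue the buffers to the next nodes. */
  return frame->n_vectors;
}
VLIB_REGISTER_NODE (my_node) = {
  .function = my_node_fn,
  .name = "my-custom-node",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_next_nodes = 1,
  .next_nodes = { [0] = "ip4-lookup" },
};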

13:50.040 --> 13:52.760
You have a variety of use cases.

13:52.760 --> 13:54.360
And this is already deployed.

13:54.360 --> 13:59.080
You can use VPP as a tunnel gateway, for example,

13:59.080 --> 14:02.200
to terminate IPsec tunnels, connect multiple clouds

14:02.200 --> 14:06.040
together thanks to IPsec or other tunneling protocols.

14:06.040 --> 14:09.080
You can also use it for high-performance endpoint applications,

14:09.080 --> 14:11.960
not only routers, but also endpoints.

14:11.960 --> 14:14.640
Because there is an embedded, high-performance

14:14.640 --> 14:16.920
TCP/UDP stack in VPP.

14:16.920 --> 14:19.400
So actually, there was an Envoy-VPP integration

14:19.400 --> 14:23.480
that is open source.

14:23.480 --> 14:25.960
And finally, you can see it as a complement

14:26.280 --> 14:31.880
to the Linux kernel to do fast container networking,

14:31.880 --> 14:36.600
to implement a CNI, for example, for Kubernetes.

14:36.600 --> 14:40.760
Typically, we have an example of a CNI called Calico VPP

14:40.760 --> 14:44.360
that actually uses VPP to improve the performance

14:44.360 --> 14:48.200
of Kubernetes networking.

14:48.200 --> 14:50.840
Before closing, I just want to mention

14:50.840 --> 14:54.440
that if you would like to read more about this effort,

14:54.440 --> 14:56.520
everything has been published on Medium,

14:56.520 --> 15:00.760
in the Google Cloud Medium community.

15:00.760 --> 15:02.520
Excuse me.

15:02.520 --> 15:05.320
It goes, of course, into much more detail; actually, it

15:05.320 --> 15:08.600
even removes VPP just to test the overall performance

15:08.600 --> 15:12.200
and see if it is a bottleneck — spoiler: it is not.

15:12.200 --> 15:15.480
And well, I'm also aiming by the end of this month

15:15.480 --> 15:19.320
to publish an updated version with the latest software

15:19.320 --> 15:21.960
versions as well as machine types.

15:21.960 --> 15:23.240
And thank you so much.

15:23.240 --> 15:24.240
Thank you.

15:24.240 --> 15:31.240
Thank you.

15:31.240 --> 15:32.240
Any questions?

15:43.240 --> 15:44.240
Thank you.

15:44.240 --> 15:46.240
So in the beginning of your talk, you showed

15:46.240 --> 15:49.240
these very low packet loss numbers.

15:49.240 --> 15:51.480
Do you then have an idea where this packet loss

15:51.480 --> 15:52.480
still comes from?

15:52.480 --> 15:55.000
Because then at some point, if it becomes so low,

15:55.000 --> 15:57.480
it could come from anywhere, also from hardware,

15:57.480 --> 16:01.480
or... do you happen to have an idea?

16:01.480 --> 16:02.480
Yeah.

16:02.480 --> 16:04.480
So actually, I spent quite a bit of time

16:04.480 --> 16:07.480
understanding packet losses in VPP.

16:07.480 --> 16:10.080
So there's a slide that we skipped that explains how you

16:10.080 --> 16:14.800
optimize the Linux configuration. Basically,

16:14.800 --> 16:17.520
packet loss comes from the fact that the CPU does

16:17.520 --> 16:20.120
something else at some time.

16:20.120 --> 16:25.520
And then VPP loses some packets, misses some packets.

16:25.520 --> 16:29.520
And this can happen from a variety of software reasons.

16:29.520 --> 16:33.920
Like, the Linux kernel de-schedules VPP —

16:33.920 --> 16:36.320
the VPP process.

16:36.320 --> 16:39.400
But even if you optimize absolutely everything,

16:39.400 --> 16:44.520
there are inherent hardware reasons that force the CPU

16:44.520 --> 16:48.880
to do something else for a few microseconds,

16:48.880 --> 16:52.080
called system management interrupts, if you've heard of those.

16:52.080 --> 16:55.680
And basically, because of that, you can never have zero

16:55.680 --> 16:58.280
packet loss.

16:58.280 --> 16:58.880
Yes, time.

17:09.880 --> 17:10.680
Great presentation.

17:10.680 --> 17:11.480
Thanks a lot.

17:11.480 --> 17:15.680
You mentioned you can implement endpoints using the plug-ins

17:15.680 --> 17:18.880
on top of the built-in network stack.

17:18.880 --> 17:22.680
Do you have example of successful endpoint implementation

17:22.680 --> 17:26.080
in production?

17:26.080 --> 17:28.880
Open source or not, either way.

17:28.880 --> 17:29.280
OK.

17:29.280 --> 17:31.680
So in open source, we have Envoy-VPP.

17:31.680 --> 17:32.880
Envoy can be seen as an endpoint.

17:32.880 --> 17:35.280
Even if at layer 7 it is seen as forwarding,

17:35.280 --> 17:37.480
I mean, it's still an endpoint, right?

17:37.480 --> 17:45.480
And then internally, we have at least two projects, three.

17:46.280 --> 17:47.280
Yeah, two and a half.

17:47.280 --> 17:49.280
One of them is being finished.

17:49.280 --> 17:55.680
That uses VPP as a termination for TCP or DTLS, actually.

17:55.680 --> 17:59.280
We actually do DTLS, TLS man-in-the-middle, in VPP.

18:03.280 --> 18:04.280
Thanks for the talk.

18:04.280 --> 18:08.280
In the last slide, you showed VPP as kind of an extension

18:08.280 --> 18:09.480
to eBPF.

18:09.480 --> 18:13.480
Isn't kernel bypass more of an alternative

18:13.480 --> 18:15.880
to eBPF or XDP, for example?

18:21.880 --> 18:29.680
I don't know... let me think about my answer.

18:29.680 --> 18:30.680
In some cases.

18:43.480 --> 18:58.280
Thanks for the talk.

18:58.280 --> 19:02.080
So did you measure the same performance on a bare metal server

19:02.080 --> 19:05.680
when you compared with the GCP cloud?

19:05.680 --> 19:10.280
Did you see any difference with respect to the performance of VPP?

19:11.280 --> 19:15.880
Back in the day, when I used to do another job,

19:15.880 --> 19:18.880
let's say, in the Telco industry.

19:18.880 --> 19:24.080
Yeah, I saw sometimes comparable results.

19:24.080 --> 19:29.480
Back then, I also had the opportunity to be in charge of the entire chain.

19:29.480 --> 19:34.680
So I could see and understand all of the details of the platform.

19:34.680 --> 19:38.280
So performing specific optimizations or troubleshooting

19:38.280 --> 19:42.280
to reduce loss was obtainable while on Google Cloud,

19:42.280 --> 19:46.280
pretty much, well, you follow the blueprint published on Medium

19:46.280 --> 19:49.480
and it just works with those results.

19:49.480 --> 19:54.080
So of course, you are not being given the control of all

19:54.080 --> 19:56.480
the hardware and everything underneath.

19:56.480 --> 20:01.880
So performance-wise, you don't see any difference with respect

20:01.880 --> 20:04.280
to a bare metal server running VPP?

20:04.280 --> 20:05.680
I think, no: on bare metal,

20:05.680 --> 20:08.880
you can achieve much, much higher performance than this.

20:08.880 --> 20:11.880
Again, this is still public cloud; this is shared infrastructure.

20:11.880 --> 20:17.680
The network devices underneath are, if I'm not mistaken, 200 gigabits.

20:17.680 --> 20:22.880
But only in the latest generation, C4, do we give customers access

20:22.880 --> 20:25.680
to the 200 gigabits of throughput.

20:25.680 --> 20:29.080
And it's just one network interface at 200 gigabits.

20:29.080 --> 20:32.480
On bare metal, you can do much more nowadays.

20:32.480 --> 20:34.080
OK, thank you.

20:34.680 --> 20:35.480
OK, thank you.

20:35.480 --> 20:38.280
Speakers, I think we're over time.

