WEBVTT

00:00.000 --> 00:10.000
Our last speaker, and we return to OpenStack, like we started.

00:10.000 --> 00:17.200
Okay, thanks for joining my session. I'm Sonsu from South Korea, and I'm working at

00:17.200 --> 00:20.000
Energy Cloud, based in South Korea.

00:20.000 --> 00:26.400
So today, I'm going to talk about my experience operating OpenStack Swift

00:26.400 --> 00:28.600
in the public cloud over the last eight years.

00:28.600 --> 00:35.280
So as you know, OpenStack Swift is a very old open source project, but it is very

00:35.280 --> 00:41.840
difficult to find production use cases or documentation on Google.

00:41.840 --> 00:51.200
So I hope that my presentation will help others to operate OpenStack Swift.

00:51.200 --> 00:59.800
So first, I will talk about what challenges we faced in a public cloud running OpenStack

00:59.800 --> 01:00.800
Swift.

01:00.800 --> 01:07.600
So as you know, the public cloud can be used by many people.

01:07.600 --> 01:19.280
So we can receive a lot of requests and traffic that is unpredictable and that we cannot control.

01:19.280 --> 01:26.280
So if our OpenStack Swift were an internal service, we could ask the clients to reduce

01:26.280 --> 01:35.280
their requests when our service is suffering performance degradation.

01:35.280 --> 01:41.480
But in the public cloud, I cannot ask the clients to reduce their requests to our service.

01:41.480 --> 01:50.200
So yeah, that is the number one challenge in a public cloud service.

01:50.200 --> 01:56.200
And then the second one is maybe something specific to my production.

01:56.200 --> 02:03.560
Surprisingly, rather than large files, there are a lot of small files

02:03.560 --> 02:05.560
in our production.

02:05.560 --> 02:13.560
Small size means objects of hundreds of kilobytes or a few megabytes.

02:13.560 --> 02:24.680
So this is a very important point, and it becomes a problem later on.

02:24.680 --> 02:35.520
And then the third one is that some clients use our object service as the storage backend

02:35.760 --> 02:39.000
for their own storage service.

02:39.000 --> 02:44.880
For example, Cortex is one of the storage backends for Prometheus.

02:44.880 --> 02:48.000
Cortex also supports the S3 API.

02:48.000 --> 02:56.000
So if one client is using Cortex as the backend for their Prometheus,

02:56.000 --> 03:05.480
they can set up our object storage as the backend storage of Cortex.

03:06.440 --> 03:12.280
And the last one is a very important and very difficult thing when running object

03:12.280 --> 03:14.280
storage in the public cloud.

03:14.280 --> 03:18.280
We must ensure a stable service.

03:18.280 --> 03:24.840
Even if data rebalancing occurs and some nodes fail,

03:24.840 --> 03:28.920
we should ensure a stable service.

03:29.880 --> 03:37.320
In Swift, if some nodes fail or you add a new node to an existing cluster,

03:37.320 --> 03:44.360
data rebalancing will occur, but this operation will impact the service traffic.

03:44.360 --> 03:52.760
So it is very difficult to control the service impact when you are adding a new node to a cluster.

03:53.640 --> 04:00.520
So let me briefly introduce what OpenStack Swift is.

04:00.520 --> 04:06.120
OpenStack Swift is one of the initial projects of OpenStack.

04:06.120 --> 04:12.920
It was originally used as the backend storage of the Glance component.

04:12.920 --> 04:17.960
The Glance component stores the cloud operating system images.

04:17.960 --> 04:26.120
But unlike the other OpenStack components, OpenStack Swift does not have any dependency on the others.

04:26.120 --> 04:34.360
So you can deploy your own OpenStack Swift without any other OpenStack components.

04:34.360 --> 04:39.320
So as far as I know, Twitch, which is a live streaming service,

04:40.200 --> 04:46.040
was using OpenStack Swift as their video storage backend, though I'm

04:46.040 --> 04:47.800
not sure about these days.

04:53.480 --> 04:56.760
And there are three concepts in OpenStack Swift.

04:56.760 --> 05:01.160
The first one is the account; the account is the top-level concept in Swift.

05:01.160 --> 05:05.560
So the account is just a namespace for the containers.

05:05.560 --> 05:08.760
And the container is a namespace for the objects.

05:08.760 --> 05:13.480
So the container does not store the objects physically.

05:13.480 --> 05:16.520
It is just a logical namespace.

05:17.720 --> 05:24.200
And the object is the real object, the binary data itself.

05:24.840 --> 05:29.800
And then the account and the container are handled with databases.

05:34.760 --> 05:40.120
So since Swift supports an HTTP-based REST API,

05:40.120 --> 05:43.800
the basic structure is a URI format.

05:43.800 --> 05:49.080
So /account/container/object is the format of the URI.

05:49.080 --> 05:52.120
And then when we are using Swift,

05:52.120 --> 05:58.120
most people use Keystone as their authentication system.

05:58.120 --> 06:01.960
So the account will be the Keystone project ID.

06:03.400 --> 06:08.840
And then normally Swift adds a prefix to the account.

06:08.840 --> 06:10.520
It is AUTH_.

06:10.520 --> 06:12.520
So you will see AUTH_ plus the project ID, for example AUTH_test;

06:12.520 --> 06:15.640
that will be the full name of the account.
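
To make the URI structure concrete, here is a minimal sketch in Python; the host, the /v1 path prefix, the project ID, and the container and object names are made-up examples, not values from this talk.

```python
# A minimal sketch of a Swift object URI: /account/container/object, where the
# account is the Keystone project ID with the AUTH_ prefix. Host, /v1 prefix,
# project ID, and names below are hypothetical examples.
keystone_project_id = "a1b2c3d4e5f6"        # hypothetical Keystone project ID
account = f"AUTH_{keystone_project_id}"      # Swift account name
container = "photos"                         # logical namespace for objects
obj = "2024/cat.jpg"                         # the actual binary object

url = f"https://swift.example.com/v1/{account}/{container}/{obj}"
print(url)  # https://swift.example.com/v1/AUTH_a1b2c3d4e5f6/photos/2024/cat.jpg
```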

06:20.280 --> 06:25.320
So let's talk about the nodes that make up Swift.

06:25.320 --> 06:28.280
There are four kinds of nodes in Swift.

06:28.280 --> 06:31.640
The first one is the proxy server. The proxy server

06:31.640 --> 06:35.160
handles the incoming user requests

06:35.160 --> 06:38.520
and does additional features like authentication,

06:38.520 --> 06:41.320
logging, encryption, things like that.

06:41.320 --> 06:45.000
And then the account server and the container server

06:45.000 --> 06:47.000
are database servers.

06:47.000 --> 06:52.680
But they are not really database servers like MySQL.

06:52.680 --> 06:54.520
They are web services.

06:54.520 --> 06:59.240
But internally, the account and container servers store

06:59.240 --> 07:05.320
their data in SQLite database files.

07:05.320 --> 07:07.240
So yeah.

07:07.240 --> 07:09.800
And the last one is the object server itself.

07:09.800 --> 07:13.160
It stores the real objects on its disks.

07:18.520 --> 07:24.920
So this is the configuration when I set up a new Swift cluster.

07:24.920 --> 07:29.240
So for the account and container nodes,

07:29.240 --> 07:34.360
it is very important to ensure very fast performance.

07:34.360 --> 07:38.840
So I am using SSDs as their disks.

07:38.840 --> 07:43.480
And those servers don't need too many disks.

07:43.480 --> 07:47.960
So I normally use four disks:

07:47.960 --> 07:52.120
two for the accounts and two for the containers.

07:52.120 --> 07:57.720
And when you configure the object nodes,

07:57.720 --> 08:04.040
it is very important to set up the memory correctly.

08:04.040 --> 08:08.920
Because when data rebalancing occurs and the data

08:08.920 --> 08:12.280
replicates to the other nodes, the object server

08:12.280 --> 08:17.320
uses a lot of memory for its disk caches.

08:17.320 --> 08:22.040
So this is my rule of thumb, not for everyone:

08:22.040 --> 08:25.120
I configure one gigabyte of memory

08:25.120 --> 08:28.600
for every one terabyte of disk.

08:28.600 --> 08:31.240
And then the minimum number of nodes is maybe four,

08:31.240 --> 08:38.520
because every piece of Swift data must have three replicas.

08:38.520 --> 08:43.320
So with at least four nodes, the cluster can handle failures.

08:43.320 --> 08:48.120
And then it is very important to ensure a consistent disk

08:48.120 --> 08:50.520
size across all the nodes.

08:50.520 --> 08:57.000
Because if some node has bigger or larger disks,

08:57.000 --> 09:05.880
more data will be stored on those disks.

09:05.880 --> 09:11.960
So to ensure consistent disk usage,

09:11.960 --> 09:16.120
ensuring a consistent disk size is very important.
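
As a rough illustration of the sizing rules just described (about 1 GB of memory per 1 TB of object disk, at least four nodes for three replicas, uniform disk sizes), here is a small sketch; the 12 x 4 TB example is an assumption, not the speaker's actual hardware.

```python
# Sketch of the sizing rule described above: ~1 GB of RAM per 1 TB of object disk,
# and at least replicas + 1 nodes so the cluster can absorb a node failure.
# The 12 x 4 TB example below is made up for illustration.
REPLICAS = 3
MIN_OBJECT_NODES = REPLICAS + 1   # at least 4 nodes

def object_node_ram_gb(disks_per_node: int, disk_size_tb: int) -> int:
    """Memory budget so replication and rebalancing can use the disk cache well."""
    return disks_per_node * disk_size_tb   # 1 GB per TB of disk

print(object_node_ram_gb(12, 4))   # 48 GB of RAM for a node with 12 x 4 TB disks
print(MIN_OBJECT_NODES)            # 4
```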

09:16.120 --> 09:24.520
But eight years ago,

09:24.520 --> 09:28.120
it was my first year operating OpenStack Swift.

09:28.120 --> 09:35.240
So I tried setting up all these server roles

09:35.240 --> 09:40.360
on the same nodes, because we didn't have anybody

09:40.360 --> 09:46.120
who knew a suitable node configuration. But every year,

09:46.120 --> 09:50.520
the service traffic increased.

09:50.520 --> 09:56.360
The account and container servers

09:56.360 --> 09:57.720
became a bottleneck.

09:57.720 --> 10:00.360
Because the object server processes

10:00.360 --> 10:03.480
spent a lot of CPU time serving requests,

10:03.480 --> 10:08.200
the account and container processes could not

10:08.200 --> 10:10.760
handle the user requests.

10:10.760 --> 10:15.240
So after that, I decided to separate the account and container nodes

10:15.240 --> 10:17.400
and object nodes in our production.

10:21.000 --> 10:24.520
OK, so the next one is the network architecture.

10:24.520 --> 10:27.160
So to operate Swift in production,

10:27.160 --> 10:29.800
you need to set up three types of networks.

10:29.800 --> 10:34.600
The service network can be accessed from the public

10:34.600 --> 10:37.960
and handles the incoming user traffic.

10:37.960 --> 10:41.160
And the data and replication networks are private networks.

10:41.160 --> 10:43.720
They cannot be accessed from the public.

10:43.720 --> 10:47.720
So the data network handles the user requests

10:47.720 --> 10:50.200
internally, and then the replication network

10:50.200 --> 10:53.320
is only for transferring the data being replicated.

10:57.160 --> 11:01.400
Let me talk more about the service network.

11:01.400 --> 11:05.800
At first, I was using an L4 hardware switch

11:05.800 --> 11:13.400
or an ADC switch to load balance the Swift proxy servers.

11:13.400 --> 11:17.720
But as the service traffic increased,

11:17.720 --> 11:24.440
the L4 switch could not carry the traffic.

11:24.440 --> 11:28.920
And it is very difficult to scale up the bandwidth

11:28.920 --> 11:32.440
on production hardware.

11:32.440 --> 11:37.640
So we decided to change the load balancing method

11:37.640 --> 11:39.720
to HAProxy.

11:39.720 --> 11:48.760
So in front of the Swift proxies, I put HAProxy

11:48.760 --> 11:53.400
as a software load balancer, and each HAProxy

11:53.400 --> 11:58.360
has a virtual IP, which is managed by Keepalived.

11:58.360 --> 12:03.160
And so when the client resolves our service domain,

12:03.160 --> 12:07.960
the DNS returns the GSLB domain, and the GSLB returns

12:07.960 --> 12:11.720
one of the virtual IPs in our production.

12:11.720 --> 12:19.000
So after that, I can scale out our load balancers whenever I want.

12:19.000 --> 12:29.160
So the next one is the scale-out of the object nodes.

12:29.160 --> 12:36.520
So the first concept to understand is the ring.

12:36.520 --> 12:39.240
Swift is based on the concept of the consistent hashing

12:39.240 --> 12:43.800
ring, which is common in distributed systems.

12:43.800 --> 12:48.840
And OpenStack Swift has its own ring algorithm.

12:49.000 --> 12:54.120
So the ring is a file, and this file is extremely important.

12:54.120 --> 12:59.720
All nodes in Swift must have the same content

12:59.720 --> 13:01.720
of the ring files.

13:01.720 --> 13:06.520
If one node has different ring files,

13:06.520 --> 13:11.720
maybe a disaster will begin soon.

13:12.200 --> 13:18.920
And there are four components in the ring.

13:18.920 --> 13:21.080
The first one is the partition power.

13:21.080 --> 13:22.680
The partition power is a value

13:22.680 --> 13:25.400
determining the total number of partitions.

13:25.400 --> 13:32.280
A partition is the physical location where the data is stored.

13:32.280 --> 13:37.480
So when we set the partition power

13:37.640 --> 13:42.440
to 10, the total number of partitions will be 1,024.

13:42.440 --> 13:49.560
So it is very important to set the right partition power

13:49.560 --> 13:53.000
when you create the ring.

13:53.000 --> 13:56.280
And the other things I will tell you about later.

14:00.680 --> 14:02.520
The next one is the device list.

14:02.520 --> 14:05.880
The device list is an array of information

14:05.880 --> 14:10.200
about the disks, like IP, port, and device name.

14:10.200 --> 14:16.440
And each index has its own device information.

14:16.440 --> 14:21.720
And Swift uses the device list to create the device lookup table.

14:21.720 --> 14:24.440
It is a two-dimensional table.

14:24.440 --> 14:25.960
So you can see the columns and the rows.

14:25.960 --> 14:29.960
The columns are the total number of partitions,

14:29.960 --> 14:33.400
and the rows are the number of replicas.

14:33.400 --> 14:37.560
So when you create the ring, Swift spreads out

14:37.560 --> 14:42.760
the devices in the device list into this table very evenly.

14:42.760 --> 14:51.240
So if the number of replicas is three,

14:51.240 --> 14:55.960
the first three rows will be the primary nodes.

14:55.960 --> 15:04.120
So those disks will be the locations where the data is stored.
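
Here is a small sketch of the two-dimensional lookup table described above: rows are replicas, columns are partitions, and the entries are indexes into the device list. All the numbers are invented, purely to illustrate the lookup.

```python
# Illustration of the device lookup table: rows = replicas, columns = partitions,
# values = indexes into the device list. All numbers here are made up.
REPLICAS = 3

# lookup_table[replica][partition] -> device index
lookup_table = [
    [4, 7, 1, 9],   # replica 0
    [2, 5, 8, 0],   # replica 1
    [6, 3, 0, 2],   # replica 2
]

def primary_devices(partition: int) -> list[int]:
    """The first REPLICAS rows of a column are the primary disks for that partition."""
    return [lookup_table[replica][partition] for replica in range(REPLICAS)]

print(primary_devices(2))   # [1, 8, 0]
```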

15:08.040 --> 15:13.800
When a primary node fails, a handoff node

15:13.800 --> 15:17.720
is temporarily designated in the ring to store the data.

15:17.720 --> 15:23.160
So we need to predefine the backup nodes.

15:23.160 --> 15:25.480
The backup node means the handoff node.

15:25.480 --> 15:29.880
But the important thing is that the handoff node

15:29.880 --> 15:33.400
is not a completely spare node.

15:33.400 --> 15:37.160
So you can see disk index number one as a handoff node.

15:37.160 --> 15:42.360
But it is also the primary node for partition numbers 0, 1, 2,

15:42.360 --> 15:43.400
here.

15:43.400 --> 15:49.240
So in Swift, every disk operates actively.

15:49.240 --> 15:54.440
One disk can be a primary node for some partitions

15:54.440 --> 15:58.280
and can be a handoff node for other partitions.

15:58.280 --> 16:01.960
So it is very important and difficult

16:01.960 --> 16:05.880
to manage the usage of the disks every day.

16:05.880 --> 16:11.640
Because if some disk fails, Swift

16:11.640 --> 16:16.840
replicates that data to the handoff nodes.

16:16.840 --> 16:24.280
So yeah, it is very important to manage the usage

16:24.280 --> 16:29.960
of the disks.

16:29.960 --> 16:33.960
So how do we figure out where an object should be stored using this table?

16:37.960 --> 16:39.560
The process is very simple.

16:39.560 --> 16:44.120
First, the proxy server gets a request from the client.

16:44.120 --> 16:45.880
And then it computes the MD5 hash

16:45.880 --> 16:47.480
value of the request URI.

16:47.480 --> 16:53.560
So this is the hash value of that URI.

16:53.640 --> 16:57.880
And it takes the modulo with the total number of partitions,

16:57.880 --> 16:58.840
like this.

16:58.840 --> 17:01.720
And the resulting value is the partition number

17:01.720 --> 17:04.280
of that request.

17:04.280 --> 17:12.280
So if the value is 140, the data will be stored on

17:12.280 --> 17:15.560
the disks at the indexes in that column of the lookup table.

17:15.560 --> 17:20.760
Yeah.
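
Here is a rough Python sketch of that lookup. It is only an illustration of the idea as described in the talk: real Swift salts the path with a configured hash prefix/suffix and takes the top bits of the MD5 digest with a bit shift rather than a plain modulo, and the partition power and names below are assumptions.

```python
# Sketch of mapping a request path to a partition number (illustrative only).
# Real Swift adds a hash path prefix/suffix from swift.conf and shifts the top
# 32 bits of the MD5 digest by (32 - partition power) instead of a plain modulo.
import hashlib
import struct

PART_POWER = 10                 # assumed partition power
PARTITIONS = 2 ** PART_POWER    # 1024 partitions

def partition_for(account: str, container: str, obj: str) -> int:
    path = f"/{account}/{container}/{obj}".encode()
    top32 = struct.unpack_from(">I", hashlib.md5(path).digest())[0]
    # That column of the device lookup table then gives the disks for this data.
    return top32 >> (32 - PART_POWER)

print(partition_for("AUTH_test", "photos", "cat.jpg"))
```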

17:20.760 --> 17:22.600
And physically it looks like this.

17:22.600 --> 17:26.600
So if you look inside the mounted disk,

17:26.600 --> 17:32.600
you will usually see the partition directories inside the

17:32.600 --> 17:36.040
mounted disk.

17:36.040 --> 17:38.760
So this is where the data is stored.

17:39.080 --> 17:46.920
Then, how do we add a new object node in Swift?

17:46.920 --> 17:55.160
So actually, there is one more item in the device list.

17:55.160 --> 18:00.120
It is the weight, and the weight is the proportion of the disk

18:00.120 --> 18:02.440
in the Swift cluster.

18:02.440 --> 18:07.320
So a larger weight means that the disk can store

18:07.320 --> 18:09.640
more data than others.

18:09.640 --> 18:15.880
So I normally set the weight to the disk size

18:15.880 --> 18:16.920
in gigabytes.

18:16.920 --> 18:23.080
So if the weight is 4,000, the disk's

18:23.080 --> 18:24.920
physical size is 4 terabytes.
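
As an illustration of a device list entry with the weight convention just described (weight equals the disk size in gigabytes), here is a sketch; the field names follow a typical Swift ring device entry, and all the values are invented.

```python
# Sketch of one entry in the ring's device list. With the convention above,
# weight == disk size in GB, so 4000.0 means a 4 TB disk. All values are made up.
device = {
    "id": 12,            # index in the device list
    "region": 1,
    "zone": 2,
    "ip": "10.0.1.21",   # address on the data network
    "port": 6200,        # object-server port
    "device": "sdb",     # disk mounted under /srv/node/sdb
    "weight": 4000.0,    # proportion of data this disk should hold (4 TB disk)
}
print(device["weight"])
```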

18:31.160 --> 18:34.920
So adding a new node means adding the disk

18:34.920 --> 18:37.000
information to the device list.

18:37.000 --> 18:42.360
And the newly added device must be placed in the device lookup

18:42.360 --> 18:43.000
table.

18:43.000 --> 18:46.520
That means that the new disks will be assigned to some partitions.

18:46.520 --> 18:49.080
That means data migration will occur.

18:49.080 --> 18:58.520
Data migration from one node to another node

18:58.520 --> 19:02.760
consumes a lot of disk I/O and network traffic.

19:02.760 --> 19:07.240
So OpenStack Swift recommends using the replication

19:07.240 --> 19:13.160
network for this traffic, to reduce the load

19:13.160 --> 19:15.400
and avoid impacting the service traffic.

19:20.200 --> 19:23.000
And this is what the Swift ring looks like.

19:23.000 --> 19:29.160
You can see the device ID, IP, port, and weight,

19:29.160 --> 19:35.000
and how many partitions are on each disk here.

19:35.000 --> 19:42.440
So I heard, yes, someone told me

19:42.440 --> 19:46.760
that as a guideline,

19:46.760 --> 19:53.960
100 to 150 is the minimum number of partitions per disk.

19:53.960 --> 20:03.080
You can set it up so that one disk has fewer than 100 partitions,

20:03.080 --> 20:09.080
but it becomes a nightmare when the service traffic

20:09.080 --> 20:15.960
increases and you keep adding new nodes.

20:15.960 --> 20:23.800
So 100 to 150 partitions per disk is the minimum.

20:23.800 --> 20:31.080
So as you keep adding nodes,

20:31.080 --> 20:35.800
you will eventually reach a limit.

20:35.800 --> 20:40.920
So for example, if you set the partition power to 10,

20:40.920 --> 20:45.480
the ring creates 1,024 partitions,

20:45.480 --> 20:51.960
which means your cluster can have only 1,024 disks.

20:51.960 --> 20:56.440
Because one partition cannot be spread across multiple disks.

20:56.440 --> 20:59.240
So yeah, one partition maps to one disk.

20:59.240 --> 21:05.480
So, to solve this problem, OpenStack

21:05.480 --> 21:11.240
Swift provides an increase-partition-power feature.

21:11.240 --> 21:15.560
But this process creates hard links for all objects.

21:15.560 --> 21:24.360
So if a single partition has a lot of objects,

21:24.360 --> 21:26.840
there is a lot of disk I/O.

21:26.840 --> 21:32.200
And the total time to increase the partition power is

21:32.200 --> 21:34.120
maybe one month or two months.

21:34.120 --> 21:43.880
So it is very important to set the partition power correctly

21:43.880 --> 21:48.920
from the beginning. But if you're unsure what value to choose,

21:48.920 --> 21:53.800
you can use this calculator: just put in the number of disks.

21:53.800 --> 21:58.680
And the calculator will let you know what partition power

21:58.680 --> 22:05.080
will be good for your cluster.
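
Here is a hedged sketch of that sizing idea, following the 100-to-150-partitions-per-disk guideline mentioned earlier; it is not the official calculator referred to in the talk, just a rough estimate, and the 200-disk example is an assumption.

```python
# Rough partition-power estimate based on the guideline above:
# aim for at least ~100 partitions per disk at the cluster's expected maximum size.
# This is a sketch, not the official calculator mentioned in the talk.
import math

def suggest_part_power(max_disks: int, parts_per_disk: int = 100) -> int:
    """Smallest partition power giving at least parts_per_disk partitions per disk."""
    return math.ceil(math.log2(max_disks * parts_per_disk))

# Example: a cluster expected to grow to ~200 disks.
power = suggest_part_power(200)
print(power, 2 ** power)   # 15 32768  (about 164 partitions per disk)
```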

22:05.080 --> 22:11.880
And this is an experience I had while adding a new object node.

22:11.880 --> 22:17.240
So even though the disk utilization and the

22:17.240 --> 22:20.280
replication network usage were very low,

22:20.280 --> 22:24.200
and we had enough bandwidth on the replication network,

22:24.200 --> 22:30.200
the replicator replicated the objects very slowly.

22:30.200 --> 22:37.000
So after analyzing the code to find why the replicator

22:37.000 --> 22:39.320
replicated objects so slowly,

22:39.320 --> 22:46.280
I found that if a single partition has a huge number of

22:46.280 --> 22:52.200
files, updating the file that stores the hash values for the partition

22:52.200 --> 22:53.960
takes a lot of time.

22:53.960 --> 22:58.760
In my case, each partition contained tens of millions of

22:58.760 --> 23:01.800
files, very, very small files.

23:01.800 --> 23:05.560
And then processing a single disk with 100 partitions

23:05.560 --> 23:08.360
took about three days.

23:08.360 --> 23:15.560
So even if you set up 10 Gbps or 40 Gbps interfaces on the

23:15.560 --> 23:20.600
replication network, OpenStack Swift cannot

23:20.600 --> 23:29.640
use the whole bandwidth because of this problem.

23:29.640 --> 23:33.720
And then the next one is container database three-copy

23:34.520 --> 23:38.840
mismatches. The container database stores the object list and

23:38.840 --> 23:42.360
container information, and the container database also

23:42.360 --> 23:46.200
has three copies for replication.

23:46.200 --> 23:49.960
And this database file is SQLite.

23:49.960 --> 23:55.480
And one SQLite database file is created for each

23:55.480 --> 23:55.960
container.

23:59.480 --> 24:02.920
So this is the process of updating the object list in the container.

24:03.880 --> 24:07.960
First, the proxy server takes the uploaded object

24:07.960 --> 24:10.600
and stores the object in the object server.

24:10.600 --> 24:16.040
And then it sends an update for the object to the container server

24:16.040 --> 24:20.440
asynchronously, and then the container server also updates the

24:20.440 --> 24:25.080
number of objects and the size in the account server,

24:25.080 --> 24:27.960
also asynchronously.

24:27.960 --> 24:32.280
So why does this happen?

24:32.280 --> 24:42.280
Swift is supposed to ensure that

24:42.280 --> 24:46.840
all three copies of the database data are the same,

24:46.840 --> 24:51.480
but I don't know exactly why this problem

24:51.480 --> 24:56.280
happened in my production.

24:57.240 --> 25:00.760
In my production, there were a lot of

25:00.760 --> 25:04.360
requests to only one container.

25:04.360 --> 25:08.600
And in that case, the container servers

25:08.600 --> 25:13.320
cannot handle all the requests, and at that point,

25:13.320 --> 25:18.040
maybe the database copies become mismatched.

25:18.040 --> 25:23.880
So what happens if the database

25:23.880 --> 25:26.440
three copies are mismatched?

25:26.440 --> 25:32.120
When some client tried downloading a very

25:32.120 --> 25:38.040
large object using multipart download,

25:38.040 --> 25:43.480
the file sometimes could not

25:43.480 --> 25:47.960
be downloaded, because when the user downloads the object,

25:48.920 --> 25:53.960
Swift does an object listing on the container

25:53.960 --> 26:03.960
DB: the proxy server looks for the

26:03.960 --> 26:07.800
list of the parts of the object.

26:07.800 --> 26:13.640
But if the container DB is inconsistent, the list of parts

26:13.640 --> 26:22.200
is not stable, so the object cannot be downloaded.

26:22.200 --> 26:25.800
Then how did we solve this problem?

26:25.800 --> 26:29.640
I'm not sure that this is a good solution, but I

26:29.640 --> 26:33.240
performed a scan of all the object listings in the

26:33.240 --> 26:39.640
three copies and corrected them manually: if

26:39.640 --> 26:45.240
some object was in A and B, but not in C, I

26:45.240 --> 26:47.640
put that object into C manually.

26:47.640 --> 26:54.840
It was a very hard job.
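
As a sketch of that repair idea (not the speaker's actual tool), the snippet below compares the object listings from the three database copies and reports what is missing from each. How the listings are extracted from the SQLite replicas is left out, and all names and data are invented.

```python
# Sketch of the manual repair described above: given the object listings of the
# three DB copies, find objects that exist in some copies but are missing from
# another. Obtaining the listings (e.g. from each replica's SQLite file) is
# omitted; names and data are invented.
def find_missing(listings: dict[str, set[str]]) -> dict[str, set[str]]:
    """Per copy, the objects present somewhere else but absent in that copy."""
    union = set().union(*listings.values())
    return {name: union - objects for name, objects in listings.items()}

copies = {
    "A": {"img/1.jpg", "img/2.jpg"},
    "B": {"img/1.jpg", "img/2.jpg"},
    "C": {"img/1.jpg"},               # missing img/2.jpg
}
print(find_missing(copies))           # {'A': set(), 'B': set(), 'C': {'img/2.jpg'}}
```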

26:54.840 --> 26:57.400
Then the last topic is how to monitor OpenStack

26:57.400 --> 27:00.920
Swift.

27:00.920 --> 27:06.280
There is no overview of metrics for monitoring OpenStack

27:06.280 --> 27:09.960
Swift on the official website.

27:09.960 --> 27:11.960
So there are four main metrics.

27:11.960 --> 27:16.600
System resources and API metrics are very familiar from other

27:16.600 --> 27:21.000
systems.

27:21.000 --> 27:24.920
I'm using Prometheus to collect the system resources

27:24.920 --> 27:27.160
on all of the nodes.

27:27.160 --> 27:31.160
And API metrics are collected to monitor the request

27:31.160 --> 27:32.040
trend.

27:32.040 --> 27:36.920
And then recon is specific to OpenStack

27:36.920 --> 27:41.320
Swift; with it you can check the status of the Swift

27:41.320 --> 27:44.280
daemons.

27:44.280 --> 27:49.800
So for API monitoring, I'm using Elasticsearch and

27:49.800 --> 27:53.160
Logstash and Grafana.

27:53.160 --> 28:02.120
So I'm monitoring how many requests come into the production

28:02.120 --> 28:07.640
and the response code trend and other things.

28:07.640 --> 28:12.600
And recon shows what is happening inside the OpenStack

28:12.600 --> 28:17.240
Swift daemons, like how many partitions

28:17.640 --> 28:22.040
were replicated successfully or failed,

28:22.040 --> 28:28.680
and whether all nodes have the same ring files, things like that.

28:28.680 --> 28:35.480
So you can collect it using the swift-recon command.

28:35.480 --> 28:38.840
And it allows you to view the daemon status for the account,

28:38.840 --> 28:39.880
container, and object servers.

28:39.880 --> 28:41.320
But it is one-time data.

28:41.320 --> 28:44.280
So I made an OpenStack Swift

28:44.280 --> 28:48.600
recon exporter, and Prometheus collects

28:48.600 --> 28:51.240
this recon data every five seconds.

28:51.240 --> 28:58.200
And I use Grafana to monitor these things.

28:58.200 --> 29:00.840
Yeah, so this is what the OpenStack Swift

29:00.840 --> 29:03.960
recon data looks like.

29:03.960 --> 29:07.240
And the last one is StatsD timing monitoring.

29:07.240 --> 29:08.760
Yeah, it is timing data

29:08.760 --> 29:10.920
from inside OpenStack Swift.

29:14.760 --> 29:16.920
And then there are new features in

29:16.920 --> 29:21.640
progress upstream to better understand Swift's

29:21.640 --> 29:22.920
internal operations.

29:22.920 --> 29:25.480
The upstream developers are working on integrating

29:25.480 --> 29:27.960
it with OpenTelemetry tools.

29:27.960 --> 29:41.400
OK, so to conclude my session: in fact, unlike other

29:41.400 --> 29:43.920
famous storage services, OpenStack

29:43.920 --> 29:48.760
Swift does not have many production use cases

29:48.760 --> 29:50.520
or much documentation.

29:50.520 --> 29:54.520
So it is very difficult to identify what the problem was

29:54.520 --> 29:55.960
in my production.

29:55.960 --> 30:00.200
So I think the operator should have a code-level

30:00.200 --> 30:03.880
understanding.

30:03.880 --> 30:07.160
And then there are few monitoring tools

30:07.160 --> 30:09.880
and operational tools.

30:09.880 --> 30:13.240
And the operators should make their own tools

30:13.240 --> 30:15.960
to operate their OpenStack Swift.

30:15.960 --> 30:22.760
And in South Korea, some companies

30:22.760 --> 30:28.760
want security features, like encryption or IP ACLs,

30:28.760 --> 30:29.560
things like that.

30:29.560 --> 30:33.000
But pure OpenStack Swift doesn't

30:33.080 --> 30:35.960
support these features very well.

30:35.960 --> 30:39.080
So we had to develop our own middleware

30:39.080 --> 30:41.640
and our own features in OpenStack Swift.

30:45.080 --> 30:49.320
OK, thanks for listening to my talk.

30:49.320 --> 30:52.760
And if you have any questions, this is my LinkedIn.

30:52.760 --> 30:55.360
So please send a message to me.

30:55.360 --> 30:56.360
Thank you.

