WEBVTT
00:00:04.049 --> 00:00:15.392
So welcome back to our lecture Automotive Vision today.
Um, yeah. We started that chapter on road recognition
00:00:15.392 --> 00:00:25.141
last Monday. So let us continue with it. So just to remind
you, we started to ask the question, how can we detect the
00:00:25.141 --> 00:00:34.456
road and the road markings and the ego lane in a camera
image. And here you see, again, the classical approach, the
00:00:34.456 --> 00:00:43.729
approach that is used in some driver assistance systems that
you can buy, that are produced in mass production. The
00:00:43.729 --> 00:00:53.516
idea is based on the fact that the lane markings are bright
stripes on a relatively dark background and that means
00:00:53.516 --> 00:01:03.748
that at the boundaries of the lane markings there are strong
gradients, as we can see in the image below. And then what
00:01:03.748 --> 00:01:13.535
we can do is we can calculate the derivative, the gradient
image, and scan through the image row by row, detect strong
00:01:13.535 --> 00:01:22.556
changes in the gray level, that means strong gradients, and
then use a set of heuristics to filter out wrong detections,
00:01:22.556 --> 00:01:31.711
as we have seen. So there must be a pair of strong gradients
with opposite direction. The distance between those must be
00:01:31.711 --> 00:01:41.886
between ten and fifteen centimeters. Yeah, and the points must
be located on the ground. So not elevated points, but on the
00:01:41.886 --> 00:01:50.103
ground. And I already said that this is maybe what is used
in mass production. But I think it is not what will be used
00:01:50.103 --> 00:01:58.232
in the future. So there is one development that has changed a
lot in computer vision. That is deep learning, a technique
00:01:58.232 --> 00:02:09.621
that is based on the fact that we collect a lot of example
images, and then tune or train, the official word is train, a
00:02:09.621 --> 00:02:20.210
classifier, a special kind of artificial neural network to
reproduce, yeah, to reproduce the desired output for
00:02:20.210 --> 00:02:28.439
the example images first. And then this artificial neural
network is doing something that is called generalization. It
00:02:28.439 --> 00:02:37.442
is able afterwards, if it is trained properly, to take
arbitrary images and solve the same task that was solved for
00:02:37.442 --> 00:02:47.175
the example images. And we can use that as well for
lane detection. Here are two example images that just show
00:02:47.175 --> 00:02:56.830
you what is possible nowadays. So, the lower image is
what is called semantic segmentation. Here the task for this
00:02:56.830 --> 00:03:08.136
new artificial neural network was to assign each pixel to one
of roughly thirty different categories. And one of the
00:03:08.136 --> 00:03:21.082
categories was um road surface. Another category was
pedestrian walkway, vegetation, trees, sky, buildings,
00:03:21.082 --> 00:03:31.494
vehicles, trucks, whatever, buses, bicyclists, bicycles,
etc. And in the lower image, in which you see the results,
00:03:31.494 --> 00:03:40.348
each color refers to one of these categories. And you see
in which way this algorithm has partitioned the whole image
00:03:40.348 --> 00:03:49.605
into these different categories, into areas which belong to
these categories. And um, of course, this area here that you
00:03:49.605 --> 00:04:01.253
can see here in this kind of color here. That is the area that
has been found to be the road surface. And this is not
00:04:01.253 --> 00:04:09.653
perfect yet, but it is quite good now. So the boundaries
of this road have been detected quite well and
00:04:09.653 --> 00:04:18.004
also the other object boundaries have been detected quite
well. And based on this result, of course, you could start to
00:04:18.004 --> 00:04:26.579
find all the pixels that belong to the road surface, you could
detect the boundaries of the road surface. And based on
00:04:26.579 --> 00:04:35.208
that, you can estimate a geometric model of the road
surface that is the lower image. However, we can even
00:04:35.208 --> 00:04:42.599
more tune these artificial neural networks, not only to be
able to distinguish whether this is road surface or this is something
00:04:42.599 --> 00:04:51.838
else, but we can also train them to decide which of
the pixels belongs to the ego lane, to the lane on which the
00:04:51.838 --> 00:04:59.606
vehicle currently is, and which pixels belong to the
left neighboring lane, or to the right neighboring lane. You
00:04:59.606 --> 00:05:07.439
could even train the neural network to decide whether the
lane on which you are is a lane for going straight ahead at
00:05:07.439 --> 00:05:14.917
an intersection, or for turning right, or turning left, and
the upper image is one result of current research in our
00:05:14.917 --> 00:05:22.920
group, that is actually research of Anika Mare, whom you all
know. And she has trained these neural networks to decide
00:05:22.920 --> 00:05:32.840
which is the ego lane, which is in this case the lane
that is labeled with the green pixels, and which is the
00:05:32.840 --> 00:05:41.840
neighboring lane in this case the right neighboring lane,
which is shown here in blue. And of course, based on that, we
00:05:41.840 --> 00:05:51.010
do not need to detect the lane markings as such, but we can
make the artificial neural network solve the task to decide
00:05:51.010 --> 00:06:00.245
where is the boundary of our ego lane. And based on that, we
can again estimate a geometrical model of the shape of the
00:06:00.245 --> 00:06:08.243
ego lane. Now, that is just to show you what is possible. I
don't want to go into the details of deep learning. That is
00:06:08.243 --> 00:06:15.531
the topic that we discuss in the lecture in winter term, and
the lecture on machine vision. But at least I wanted to show
00:06:15.531 --> 00:06:24.628
you what is possible and what will be, I think, the future
of these lane recognition tasks: not anymore based on these
00:06:24.628 --> 00:06:33.457
gradient-based approaches, but based on these deep learning
methods. Okay, then we said, okay, once we have found the
00:06:33.457 --> 00:06:42.013
boundary pixels of the ego lane. We can try to estimate a
geometrical model of the lane. And we started with straight
00:06:42.013 --> 00:06:51.593
lanes. Now we said, we have some parameters to estimate.
That is the lateral offset of the vehicle on its ego lane and
00:06:51.593 --> 00:06:58.647
the yaw angle, the longitudinal offset, if possible.
And that can be described by two coordinate systems, one
00:06:58.647 --> 00:07:06.328
coordinate system that is fixed to the road, fixed to the
lane and describes the lane, that is our world coordinate
00:07:06.328 --> 00:07:13.501
system, or road coordinate system here, shown in green. And
then we have the ego coordinate system that moves
00:07:13.501 --> 00:07:20.160
together with the vehicle. And what we want to describe is the
relationship, the geometric relationship, between these two
00:07:20.160 --> 00:07:27.087
coordinate systems. Furthermore, we need to describe the
shape of the lane. If we assume a straight lane, the only
00:07:27.087 --> 00:07:34.684
parameter that remains is the width of the lane, which is
unknown a priori. So then we have the lateral and longitudinal
00:07:34.684 --> 00:07:42.971
offset and the yaw angle, so that in total we have four
variables to estimate. And we could say, this is the set of
00:07:42.971 --> 00:07:51.067
unknown parameters, or this is the state vector, which we
want to estimate. Then we did a lot of calculations. So there
00:07:51.067 --> 00:07:57.357
are translations, transformations between road and vehicle
coordinates. That is shown here, and we started to have
00:07:57.357 --> 00:08:05.077
a look at, how can we relate the lane markings to the
position of the vehicle. And we started to first describe the
00:08:05.077 --> 00:08:12.496
position of the lane markings in the road coordinate system.
That is comparatively easy, so we just have to say,
00:08:12.496 --> 00:08:20.629
okay, from the origin of the road coordinate system, we either
have to go to the left by half of the lane width, or to
00:08:20.629 --> 00:08:28.670
the right by half of the lane width, and then we can go
an arbitrary amount to the front or to the back. Now that is
00:08:28.670 --> 00:08:36.849
shown with this part here, t times one zero. That means it
doesn't matter how much we go to the front or to the back.
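The parameterization just described can be sketched in a few lines of code; this is a minimal illustration with my own variable and function names, not the lecture's notation: a lane-marking point sits at plus or minus half the lane width in road coordinates, with an arbitrary longitudinal parameter t, and can then be expressed in vehicle coordinates given the vehicle pose (d_long, d_lat, psi).

```python
import numpy as np

def marking_point_in_vehicle_coords(t, side, B, d_long, d_lat, psi):
    """Sketch of the transform discussed here (symbol names are my own
    assumptions): a lane-marking point lies at (t, +B/2) or (t, -B/2)
    in road coordinates, where t is the arbitrary longitudinal
    parameter, so it does not matter how far to the front or back we
    go.  The vehicle sits at (d_long, d_lat) with yaw angle psi, so we
    shift and rotate to express the point in vehicle coordinates."""
    p_road = np.array([t, 0.5 * B if side == "left" else -0.5 * B])
    shift = p_road - np.array([d_long, d_lat])   # move origin to the vehicle
    c, s = np.cos(psi), np.sin(psi)
    R_inv = np.array([[c, s], [-s, c]])          # rotate by -psi into the vehicle frame
    return R_inv @ shift
```

For a vehicle centered on the lane with zero yaw angle, the left marking of a four-meter-wide lane is found two meters to the left, as expected.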
00:08:36.860 --> 00:08:46.650
Still, we will be hitting points on the lane markings.
But then we were transforming that into the vehicle coordinate
00:08:46.650 --> 00:08:55.069
system. Then we did some calculations. Actually, we
had two equations and we wanted to eliminate the unknown
00:08:55.069 --> 00:09:03.148
variable t here. If we do that, that means, if we resolve
these two equations that we implicitly have in this matrix
00:09:03.148 --> 00:09:10.919
vector equation, and eliminate t. Then
we get this equation here that is written here. And if we
00:09:10.919 --> 00:09:19.281
assume that the yaw angle is relatively small, which
typically applies, of course, in driving. Otherwise, you would
00:09:19.281 --> 00:09:28.825
somehow cross the road or do something like that, then we
can simplify the trigonometric functions here and replace the
00:09:28.825 --> 00:09:38.024
tangent of the yaw angle by the yaw angle itself, as long as
we represent the yaw angle in radians. And as well the cosine
00:09:38.024 --> 00:09:46.874
of the yaw angle, then it can be approximated by one.
That means now we have a kind of line equation, something
00:09:46.874 --> 00:09:58.344
like that: y is equal to some number times x, plus
an offset, a typical line equation. Or, to
00:09:58.344 --> 00:10:08.048
be exact, two line equations: one for the left
lane markings, one for the right lane markings. And now we
00:10:08.048 --> 00:10:18.376
can use this idea to create a least sum of squares
approach, a traditional regression
00:10:18.376 --> 00:10:28.394
approach, where we say, okay, we take a set of observed
points. And we say this equation that we just derived should
00:10:28.394 --> 00:10:37.022
be met by all these sensed points as well as possible.
And that means we want to minimize the remaining error
00:10:37.022 --> 00:10:45.168
in each of these equations for each point; but not the error
itself, but the square of the errors, and this yields this
00:10:45.168 --> 00:10:53.529
kind of an error term. Now, this is just the sum of the squared
errors that we get in each equation for each point.
00:10:53.539 --> 00:11:02.395
Then we calculate the derivatives, set to zero the derivatives
with respect to the unknown variables, that means the yaw
00:11:02.395 --> 00:11:12.013
angle psi, the lateral offset, and the lane width B. And based
on that, we get a system of linear equations that can be
00:11:12.013 --> 00:11:22.561
written in this matrix vector form here. And yeah, now,
as long as the matrix on the left-hand side, so this one here,
00:11:22.561 --> 00:11:33.427
has full rank, which typically applies as soon as we have
two points on one lane marking and one point on the
00:11:33.427 --> 00:11:43.123
other lane marking. At least then, this matrix has full rank.
And then we can resolve this system of equations and get
00:11:43.123 --> 00:11:52.016
a unique solution, and that is then the best estimate,
and in some sense, this is the best estimate of the
00:11:52.016 --> 00:12:02.850
B, d_lat and psi. Now, so from a single image, we
can use this approach to get the best values for B, d_lat
00:12:02.850 --> 00:12:17.832
and psi that we can derive. So just summarizing that in a more
intuitive form. So what do we have to do? First, we start
00:12:17.832 --> 00:12:29.489
with a camera image that looks somehow like that. Then
we transform those points into a top view of the scene.
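The one-shot least-squares estimate described above can be sketched numerically as follows. This is a minimal illustration with my own variable names, assuming the small-angle line equations y = -psi*x + B/2 - d_lat (left) and y = -psi*x - B/2 - d_lat (right) for top-view points in vehicle coordinates:

```python
import numpy as np

def fit_straight_lane(left_pts, right_pts):
    """Least-squares sketch of the one-shot estimate (my own names).
    Each sensed point (x, y) in vehicle coordinates contributes one
    linear equation in the unknowns (psi, d_lat, B), following the
    lecture's small-yaw-angle line equations."""
    rows, ys = [], []
    for x, y in left_pts:
        rows.append([-x, -1.0, +0.5])       # coefficients of (psi, d_lat, B)
        ys.append(y)
    for x, y in right_pts:
        rows.append([-x, -1.0, -0.5])
        ys.append(y)
    # Needs at least two points on one marking and one on the other,
    # otherwise the system loses full rank (as noted in the lecture).
    psi, d_lat, B = np.linalg.lstsq(np.array(rows), np.array(ys), rcond=None)[0]
    return psi, d_lat, B
```

With noise-free synthetic points the estimator recovers the generating parameters exactly, since the stacked system is consistent and has full rank.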
00:12:29.500 --> 00:12:37.211
That means into the vehicle coordinate system. In the
beginning, they are represented in image coordinates. If we
00:12:37.211 --> 00:12:44.063
assume that we have calibrated the camera and know the
relationship between the camera coordinate system and the vehicle
00:12:44.063 --> 00:12:51.737
coordinate system, we can transform them into a top view.
This is shown at the bottom now,
00:12:51.737 --> 00:13:00.415
with the blue coordinate system, that was the vehicle
coordinate system, that is used to describe the position of the
00:13:00.415 --> 00:13:11.362
lane markings. And now what we do is we fit somehow a model
of the road, of a straight road, such that the points that
00:13:11.362 --> 00:13:21.120
we have detected in the camera image and that we have projected
into a top view, and the lane markings where we expect
00:13:21.120 --> 00:13:30.152
them to be according to our road model, fit together as best
as possible. That means there, in this illustration, the
00:13:30.152 --> 00:13:40.279
road, the red lines, and the white lines from the
road model must fit together, must be as close together as
00:13:40.279 --> 00:13:50.088
possible. And by doing that, we somehow found a relationship
between the blue ego coordinate system and the green
00:13:50.088 --> 00:14:00.345
road coordinate system, which yields the result that contains
the unknown variables d_lat and the yaw angle psi. And
00:14:00.345 --> 00:14:10.081
furthermore, the unknown B. Now, what we can see as
well if we have this illustration is that we can shift the
00:14:10.081 --> 00:14:19.153
road model to the front or to the back parallel to the
lane markings. And still, the road model fits to the lane
00:14:19.153 --> 00:14:27.738
markings. And that actually shows that one thing cannot
be revealed with this approach, namely the longitudinal
00:14:27.738 --> 00:14:38.893
offset of the vehicle, because it doesn't matter whether
the vehicle stands here, or is positioned ten meters
00:14:38.893 --> 00:14:50.303
in the forward direction, or ten meters back. Still the
model fits. Now that means this longitudinal position of
00:14:50.303 --> 00:14:58.994
the vehicle cannot be determined from these longitudinal
lane markings alone. Now, that means d_long cannot be
00:14:58.994 --> 00:15:08.073
estimated. We cannot determine it. If you like, you can set
it to zero, but it is better to say we cannot
00:15:08.073 --> 00:15:15.448
estimate it. We cannot say anything about this
longitudinal position of the vehicle. Now, okay,
00:15:15.448 --> 00:15:23.470
so the question would be, what would we need to be able
to determine this longitudinal position. So what
00:15:23.470 --> 00:15:34.071
do you think? What could we do? Just do a little bit
of brainstorming. What could we do? What do we need, actually?
00:15:34.071 --> 00:15:45.221
Any ideas? Maybe we can use the optical flow to
determine our position on the road. So that is a good
00:15:45.221 --> 00:15:53.840
idea. So we could estimate our ego motion. And based on that,
we get information about how much we moved forward. And if
00:15:53.840 --> 00:16:01.209
we knew where we have been, then we can incrementally
determine our position. Of course, with increasing inaccuracy.
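The growing inaccuracy of this purely incremental approach can be illustrated with a toy model (entirely my own construction, not from the lecture): adding up noisy ego-motion increments makes the position error accumulate with every step.

```python
import random

def dead_reckon(increments, noise_std, seed=0):
    """Toy 1-D dead reckoning: sum noisy forward-motion increments.
    Each sensed increment carries independent Gaussian noise, so the
    variance of the summed position grows with the number of steps,
    which is the increasing inaccuracy mentioned in the lecture."""
    rng = random.Random(seed)
    pos = 0.0
    for inc in increments:
        pos += inc + rng.gauss(0.0, noise_std)
    return pos
```

With zero sensor noise the reckoned position is exact; with any nonzero noise the uncertainty after n steps scales with the square root of n, so an absolute correction from landmarks is eventually needed.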
00:16:01.220 --> 00:16:09.339
That is the disadvantage of this method. Any other idea what
we could use to determine our longitudinal position on a
00:16:09.339 --> 00:16:19.007
normal road? So every one of you has already driven on a
road or moving by bus, by car, on a road. You all know that
00:16:19.007 --> 00:16:30.519
What could we use to determine where we are, our longitudinal
position, except this method? What do you do if
00:16:30.519 --> 00:16:40.550
you don't know where you are? Well, if I provide you a map,
a printed map, nothing else, no GPS, whatever, just a printed
00:16:40.550 --> 00:16:52.631
map. And I ask you, where are we? We are on this road. But
where are we? What do you do? Nothing? Switch on the
00:16:52.631 --> 00:17:07.662
mobile phone and ask the navigation system? Yeah, what else
could you do if you don't have a navigation system?
00:17:07.662 --> 00:17:15.010
Feature points. Okay, yeah, if you have a map with feature
points on it, we could use feature points, optical feature
00:17:15.010 --> 00:17:22.630
points, determine feature points and do the self localization
with feature points. What else could we do if we don't
00:17:22.630 --> 00:17:32.565
have these SIFT features, or something like that? Which
other salient objects exist next to a road? If we use
00:17:32.565 --> 00:17:38.700
landmarks on the road. Yeah, which landmarks could it be?
Ah, I don't know how to call them. But ah, there are some
00:17:38.700 --> 00:17:49.132
with... ah, like shields, er, signs. Traffic signs.
Yeah. Traffic signs. That is a good idea. So if you have
00:17:49.132 --> 00:17:56.237
a map with traffic signs. Use traffic signs. Yeah. They
provide to you some information about the longitudinal
00:17:56.237 --> 00:18:03.817
direction. Which other landmarks exist next to the road, eh?
No one else? Haha. I'm not sure if they are called like that:
00:18:03.817 --> 00:18:11.829
milestones. Milestones, yeah, these are next to the
road. Yeah, they exist. Very good idea. Okay, something else.
00:18:11.839 --> 00:18:25.980
So if you cross a village, what is located next to the road?
Houses. For instance, if you know about houses
00:18:25.980 --> 00:18:35.112
which are next to the road, use houses as landmarks. What
else? Which other things exist next to roads? Well,
00:18:35.112 --> 00:18:46.690
trees, for instance, exist. Yeah, and if they are
not cut down, then they still exist next year as well. If
00:18:46.690 --> 00:18:55.212
you have a map with trees, use a map with trees. What
else exists on roads? So what else exists on roads? For
00:18:55.212 --> 00:19:03.043
instance, there are also sometimes lines on the roads which
are not in longitudinal direction, but stop lines. Once
00:19:03.043 --> 00:19:12.664
you meet a stop line and detect a stop line, very good, this
tells you where you are in that direction. What else
00:19:12.664 --> 00:19:23.230
exists on roads? Sometimes there are intersections.
Once you meet an intersection, you can look at the map:
00:19:23.240 --> 00:19:34.412
where is this intersection in the map? Now, this provides
to you some information. What else? Yep, different traffic
00:19:34.412 --> 00:19:43.002
signs. So you can say, okay, I know exactly which traffic
sign is located where, and then you say, okay, you can
00:19:43.002 --> 00:19:51.695
distinguish: this is a stop sign, or this is a speed limit sign.
And then if this is written in the map, then you can look
00:19:51.695 --> 00:19:59.520
up: okay, I have to look for the speed limit eighty
kilometers per hour sign. And this of course is
00:19:59.520 --> 00:20:06.774
reducing the ambiguities that would still exist. Okay? So if
you want to localize yourself in longitudinal direction
00:20:06.774 --> 00:20:14.938
along the road, you must think about these things, you must
think about which landmarks exist next to a road, maybe ones that
00:20:14.938 --> 00:20:23.935
are related to the road, like these poles, or
traffic signs or traffic lights or things like that. But you
00:20:23.935 --> 00:20:35.595
can also think about what typically exists next to roads
in our natural life, on our normal roads, like
00:20:35.595 --> 00:20:45.539
vegetation, trees, houses, whatever structures. Now, as long
as they are not moving, it is okay. So parked cars are not good.
00:20:45.549 --> 00:20:54.801
Of course, that is clear now, but everything else that is
static can be used, if you can recognize it in the camera
00:20:54.801 --> 00:21:04.040
images. Okay, so now let us take this idea that was mentioned
here at the beginning of my question round, namely
00:21:04.040 --> 00:21:11.761
this idea of incrementally estimating the position, using,
for instance, knowledge about the ego motion of the vehicle
00:21:11.761 --> 00:21:21.307
Assume we have sensors on board of a vehicle that
are measuring the revolutions of the wheels, or we have
00:21:21.307 --> 00:21:30.024
sensors that measure the steering angle of the vehicle, or
maybe we determine the ego motion based on visual odometry.
00:21:30.024 --> 00:21:39.297
So we can use that, and use that for an incremental
localization task. So, what we want to estimate is still
00:21:39.297 --> 00:21:46.275
this state vector that contains the lane width, the
longitudinal and lateral position of the vehicle, and the yaw angle. So as
00:21:46.275 --> 00:21:53.944
shown here, but now we assume that the vehicle is moving over
time. So we add to our coordinate system this upper index
00:21:53.944 --> 00:22:01.279
t, which here in this context should not denote the power
of something, but it should denote the point in time.
00:22:01.289 --> 00:22:10.945
So at a certain point in time t, the vehicle coordinate system
looks like that. And at the next point in time, the vehicle
00:22:10.945 --> 00:22:22.036
has moved to a new place. Now the coordinate system is shown
in gray here. And we assume that we know the ego motion of
00:22:22.036 --> 00:22:29.805
the vehicle. That means, first of all, we know the translation,
the shift of the position of the vehicle. But we
00:22:29.805 --> 00:22:37.918
must be a little bit careful, because what we sense with our
on-board sensors is the vehicle motion in the coordinate
00:22:37.918 --> 00:22:47.277
system of the vehicle. And there again, we must be careful
to determine the coordinate system of the vehicle at which
00:22:47.277 --> 00:22:57.875
point in time. Usually we say what we sense is a vector m.
This one here that shows the shift of the coordinate system
00:22:57.875 --> 00:23:06.563
of the vehicle, represented in the coordinate system at the
previous point in time, so at the point in time t here. So
00:23:06.563 --> 00:23:13.431
m should be a vector that is represented in the blue
coordinate system that is shown here. Now, that is not a
00:23:13.431 --> 00:23:23.004
vector in the world coordinate system, it is a vector in the
vehicle coordinate system. So if we want to transform
00:23:23.004 --> 00:23:32.054
that into the world coordinate system and want to know where we
are now. After doing this shift, after doing this movement
00:23:32.054 --> 00:23:41.296
we have to consider that the vehicle coordinate system
is rotated by a certain yaw angle with respect to the world
00:23:41.296 --> 00:23:49.741
coordinate system, and we have to consider that
in order to be able to add this vector m to our previous
00:23:49.741 --> 00:23:58.500
position d_long and d_lat. So this is shown here.
So if this is our previous position, actually this
00:23:58.500 --> 00:24:07.777
point here, and this is the vector here, this one here,
represented in this blue vehicle coordinate
00:24:07.777 --> 00:24:17.332
system. Then this new position here, represented in the world
coordinate system, in the road coordinate system: well, add m
00:24:17.332 --> 00:24:25.927
to the previous position, but consider the rotation of the
coordinate system by the yaw angle. And first, before you
00:24:25.927 --> 00:24:35.793
add this vector m, rotate it by an angle of psi, by the
current yaw angle of the vehicle. Now, this is just
00:24:35.793 --> 00:24:44.155
the transformation of this vector m from the vehicle
coordinate system into the road coordinate system. Okay, with
00:24:44.155 --> 00:24:55.402
that, we get this equation. So the new position is the old
one, plus the rotation matrix times this shift m. If
00:24:55.402 --> 00:25:05.262
we still assume that this yaw angle is
small, that means maybe between zero and ten degrees,
00:25:05.262 --> 00:25:14.530
something like that, then we can use the typical approximation
of the trigonometric functions. Then the cosine of psi can be
00:25:14.530 --> 00:25:24.109
approximated by one again, and the sine of psi can be approximated
by psi itself, as long as we represent psi in radians.
00:25:24.119 --> 00:25:33.367
That means the whole thing simplifies, and we get
this equation here. So what happens with the yaw angle? Of
00:25:33.367 --> 00:25:43.387
course, the vehicle is also somehow turning. Unless the
driver is keeping the steering wheel completely
00:25:43.387 --> 00:25:52.901
constant, driving straight, we have a certain change in the
yaw angle that we get. Let us again assume that we can
00:25:52.901 --> 00:26:00.998
determine that with visual odometry, or with a steering
angle sensor, and determine this change in yaw angle. Let us
00:26:00.998 --> 00:26:09.953
denote it as phi. So phi tells us how this gray
coordinate system, which is just the blue coordinate
00:26:09.953 --> 00:26:17.803
system shifted to this position, and the new vehicle
coordinate system, shown in violet here, are rotated
00:26:17.803 --> 00:26:27.703
against each other. So here we see this angle phi. And for the new yaw
angle, things are easy. The new yaw angle is
00:26:27.703 --> 00:26:38.251
equal to the old yaw angle plus phi, the change of
the yaw angle. So this is really simple. And of course, the fourth
00:26:38.251 --> 00:26:45.693
variable that we did not consider now is the road width.
Here we would assume, typically, that the road width remains
00:26:45.693 --> 00:26:53.059
the same, or the lane width remains the same. That means we
assume that the new lane width is equal to the old lane width.
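The state-transition step just derived can be written down compactly; this is a hedged sketch with my own names and the state ordering (d_long, d_lat, psi, B), which is an assumption on my part:

```python
import numpy as np

def predict_state(state, m_x, m_y, phi):
    """Sketch of the linear state-transition step derived here.
    state = (d_long, d_lat, psi, B); (m_x, m_y) is the vehicle shift
    measured in the previous vehicle frame and phi the measured change
    of the yaw angle.  With the small-angle approximation
    cos(psi) ~ 1, sin(psi) ~ psi, rotating m into road coordinates
    becomes linear in the state variables."""
    d_long, d_lat, psi, B = state
    d_long_new = d_long + m_x - psi * m_y   # shift rotated by the current yaw angle
    d_lat_new  = d_lat + psi * m_x + m_y
    psi_new    = psi + phi                  # new yaw angle = old plus measured change
    B_new      = B                          # lane width assumed constant
    return np.array([d_long_new, d_lat_new, psi_new, B_new])
```

Because every output is a linear function of the state (the measurements m_x, m_y, phi enter as known inputs), this is exactly the linear model shape a Kalman filter expects.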
00:26:53.069 --> 00:27:01.785
So now what we did is we were deriving four equations,
four equations that describe how the new state vector is
00:27:01.785 --> 00:27:11.516
related to the old state vector, how the new state vector
can be calculated from the state vector at the point in time
00:27:11.516 --> 00:27:21.083
before, using some additional information about the ego
motion m and phi that came from some on-board sensors.
00:27:21.083 --> 00:27:30.653
That means what we have is a state transition model, something
that we need for a Kalman filter. And furthermore, we see
00:27:30.653 --> 00:27:39.340
that this state transition model is linear if we use
this approximation, sorry, this approximation here. The
00:27:39.340 --> 00:27:50.208
whole equation is linear in psi, in d_lat,
d_long and B. That means in the four state variables. It is
00:27:50.208 --> 00:27:58.906
linear in the four state variables, so we have a linear state
transition model. That is good, because this means that we
00:27:58.906 --> 00:28:10.499
might use a Kalman filter later on. Now let us have a
look. Now we can rewrite it: if we want to assemble all
00:28:10.499 --> 00:28:19.846
these four equations in one matrix-times-vector equation,
then we can rewrite the whole thing like that. So here,
00:28:19.846 --> 00:28:29.676
the four equations that we had on the last slide are just
assembled into this large matrix-times-state-vector product,
00:28:29.676 --> 00:28:40.547
plus an offset that contains m_x, m_y and phi, these values
that we assume to be able to determine by some on-board
00:28:40.547 --> 00:28:49.075
sensors. Okay, furthermore, that means
this state transition function that we have derived
00:28:49.075 --> 00:28:59.433
now has the shape that we need for a Kalman filter, this
classical linear equation that we need to apply a Kalman
00:28:59.433 --> 00:29:08.759
filter. Of course, we could also argue that there is some noise
in there. There is some sensing noise in m_x and m_y,
00:29:08.759 --> 00:29:17.820
there is some imprecision in the driving of the vehicle. The
width of the lane might also vary a little bit. So we might
00:29:17.820 --> 00:29:26.259
argue that there is some additional random noise term that
must be added to this equation. Okay, so that was the
00:29:26.259 --> 00:29:37.115
state transition model: how to use our knowledge from
one point in time and predict or derive the state vector for
00:29:37.115 --> 00:29:47.509
the next point in time. Now let us have a look at the
observation model. Again, we assume that we observe
00:29:47.509 --> 00:29:57.870
N_L points on the left lane marking and N_R points on the
right marking, represented in vehicle coordinates, of course.
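The observation model that the lecture now derives can be sketched ahead of time (my own names and state ordering (d_long, d_lat, psi, B), both assumptions): for a point sensed at longitudinal distance x, the expected lateral position on the left or right marking is y = -psi*x +/- B/2 - d_lat, which is linear in the state.

```python
import numpy as np

def expected_y(state, x, side):
    """Expected lateral coordinate of a marking point at longitudinal
    distance x, per the lecture's linearized line equations.  Note that
    d_long does not appear: purely longitudinal markings carry no
    information about the longitudinal position."""
    d_long, d_lat, psi, B = state
    half = 0.5 * B if side == "left" else -0.5 * B
    return -psi * x + half - d_lat

def observation_row(x, side):
    """One row of the linear observation matrix H for the state
    ordering (d_long, d_lat, psi, B); the sensed x is treated as an
    exact constant, only y is assumed noisy."""
    half = 0.5 if side == "left" else -0.5
    return np.array([0.0, -1.0, -x, half])
```

Stacking one such row per sensed point gives the linear observation equation z = H s + noise that a Kalman filter update needs.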
00:29:57.890 --> 00:30:11.053
Now, vehicle coordinates that refer to this violet
coordinate system. Now, let us say, okay, we take
00:30:11.053 --> 00:30:26.329
one of these points here now, one of these points,
which are represented in vehicle coordinates. And
00:30:26.329 --> 00:30:37.572
first of all, we draw a line parallel to the y axis, with a
certain distance x from the origin. And this line is shown
00:30:37.572 --> 00:30:48.190
dotted here. So this is the axis here. So this is the
line that is parallel to the y axis of the vehicle and has
00:30:48.190 --> 00:30:58.020
a certain distance X now from the origin. And now we can say
that, based on this, if we know x, we can determine at which points.
00:30:58.029 --> 00:31:10.040
If we know the state vector, so if we know the state vector,
and we assume a certain distance x, we can determine where
00:31:10.040 --> 00:31:23.029
we expect to find y_L and y_R, the lateral
coordinates of the markings that we observe on this line. That is
00:31:23.029 --> 00:31:29.432
actually what we have derived before. When we made the
regression, we had one equation that was creating a
00:31:29.432 --> 00:31:37.000
relationship between the x position of a lane marking and the
y position of the lane marking. Now, this was a linear
00:31:37.000 --> 00:31:48.001
equation, or linearized equation. And again, we can use
that here and say, if we know X, we can calculate where we
00:31:48.001 --> 00:31:56.870
expect to find the respective y value. Now, this was the
equation that we had before. If we resolve this equation
00:31:56.870 --> 00:32:06.450
with respect to y_L and y_R, we get this relationship here. So
y_L is expected to be found at minus psi times x plus
00:32:06.450 --> 00:32:16.937
B half minus d_lat, and y_R at minus psi times x minus B half
minus d_lat. Okay. Now, we have something that tells us
00:32:16.937 --> 00:32:26.106
where we expect to find measurements, and we can compare
these expected positions of the measurements with the sensed
00:32:26.106 --> 00:32:40.868
positions of the measurements now, and use that to improve
our knowledge about the state vector. So this yields one
00:32:40.868 --> 00:32:54.962
equation. Each of these equations that we can create here
establishes one equation in the observation model
00:32:54.962 --> 00:33:07.167
of our system model now. Each observation that
we have creates one of these observation equations. And if
00:33:07.167 --> 00:33:17.659
we collect all these equations in one large linear system of
equations, we get the observation model that we need. Now,
00:33:17.659 --> 00:33:27.508
if we do that, we get a system of equations that looks like that.
So each line here represents one of these equations
00:33:27.508 --> 00:33:40.655
that we just derived, with the y value here depending
on, yeah, the respective row of the of this matrix times
00:33:40.655 --> 00:33:50.822
the state vector. And that is actually our observation model.
It has the typical form that we expect for a
00:33:50.822 --> 00:33:59.186
Kalman filter to be used, namely that the observation vector
that contains all the y values of the observed points equals
00:33:59.186 --> 00:34:07.550
some matrix H that is independent of the state vector
times the state vector itself, plus some noise that always
00:34:07.550 --> 00:34:16.737
occurs if we do some measurements. So what we do is we treat
these x values of the measurements as, say, constants, as
00:34:16.737 --> 00:34:26.129
values about which we do not make stochastic inference, that we
just use as they are, without asking whether they are noisy
00:34:26.129 --> 00:34:37.850
or not. We just trust them. And we assume that the only
imprecision occurs in these y values. And based on that, we
00:34:37.850 --> 00:34:47.200
can establish this observation equation. So now, what do we
have? We have a system model, a linear equation that
00:34:47.200 --> 00:34:56.046
describes the transition from one point in time to the next
point in time. And we have a linear observation model that
00:34:56.046 --> 00:35:04.224
describes in a linear way how the observations are calculated
from the state vector. And based on that, we have a
00:35:04.224 --> 00:35:12.400
linear system. If we assume that all the imprecision and noise
that occurs is Gaussian, then we have a linear Gaussian model,
00:35:12.400 --> 00:35:19.943
and then we can use a Kalman filter to estimate incrementally
the position of the vehicle, including the longitudinal
00:35:19.943 --> 00:35:29.663
position, including the lateral position, the yaw angle
and the lane width. And that is actually this kind of
00:35:29.663 --> 00:35:40.659
incremental localization and road estimation that I
mentioned. Okay, so, that is an alternative.
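As a minimal sketch of this linear observation model, here is one possible implementation. It assumes a state vector [lateral offset, yaw angle, lane width], the expected marking positions y_l = -psi*x + b/2 - y_off and y_r = -psi*x - b/2 - y_off from above, and hypothetical function names; it illustrates the idea, not the lecture's actual code.

```python
import numpy as np

# Assumed state layout: [y_off, psi, b] = lateral offset, yaw angle, lane width.
# Each detected marking point (x_i, y_i, side) contributes one linear equation
#   y_i = H_i @ state + noise,
# where the x values are treated as noise-free constants, as described above.

def observation_model(points):
    """points: list of (x, y, side) with side=+1 (left marking) or -1 (right)."""
    H = np.array([[-1.0, -x, side * 0.5] for x, _, side in points])
    z = np.array([y for _, y, _ in points])
    return H, z

def kalman_innovation(mu, P, H, z, R):
    """Standard Kalman measurement update for z = H @ mu + noise (cov R)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    mu_new = mu + K @ (z - H @ mu)           # corrected state estimate
    P_new = (np.eye(len(mu)) - K @ H) @ P    # corrected covariance
    return mu_new, P_new
```

With several marking points at different distances on both sides, the three state entries are fully observable and a single update already pulls the estimate close to the true lane geometry.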
00:35:40.670 --> 00:35:50.358
What we saw was: from one image alone, we can do this one-shot
road recognition, where we can determine the lateral
00:35:50.358 --> 00:36:00.914
position, the yaw angle and the lane width. If we do not have
just a single image, but a sequence of images, and
00:36:00.914 --> 00:36:11.006
we know the initial longitudinal position of the
vehicle, then we can use this incremental localization and
00:36:11.006 --> 00:36:21.690
road recognition approach to incrementally improve
our knowledge of where we are. So, yeah, the recipe, so
00:36:21.690 --> 00:36:31.866
to say. To summarize what we did: what do we do? We
start with a state vector
00:36:31.866 --> 00:36:40.978
that says, "Okay, my best guess at the moment is that I'm at
a certain position shown near the origin of the coordinate
00:36:40.978 --> 00:36:50.091
system, that I have a certain orientation shown by the
orientation of the x and y axes, and that the lane has a
00:36:50.091 --> 00:36:58.301
certain width." Based on that, what do we do? Well, like
a Kalman filter is always doing, first we do a prediction
00:36:58.301 --> 00:37:08.456
step. We predict: we use our on-board sensors and
apply the state transition model to predict where we are
00:37:08.456 --> 00:37:18.253
one point in time later. This yields, say, this magenta
coordinate system here that describes where we expect
00:37:18.253 --> 00:37:29.651
to be one step later. The next thing is, we take a camera
image. Based on this camera image, we detect all the lane
00:37:29.651 --> 00:37:38.008
markings. Then we transform these lane markings into a
top view, into the current vehicle coordinate system,
00:37:38.008 --> 00:37:47.038
and then we project it, so to say, into our model. This yields
these lines. So according to this violet coordinate
00:37:47.038 --> 00:37:57.982
system: if we project the lane markings that we have seen
in the image into this top view model, the lane
00:37:57.982 --> 00:38:07.963
markings would maybe be located like that. Based on these
estimated top-view lane markings, lane markings in the vehicle coordinate
00:38:07.963 --> 00:38:17.543
system, we apply the innovation step of the Kalman filter, and
that is somehow shifting a little bit the coordinates of
00:38:17.543 --> 00:38:26.364
the vehicle coordinate system, such that afterwards
the lane markings fit better to the model than before. And, of
00:38:26.364 --> 00:38:36.571
course, in this innovation step also the lane width might be
changed a little bit. So that is actually this incremental
00:38:36.571 --> 00:38:45.969
road recognition and localization step for straight roads.
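The predict step of this cycle can be sketched as follows. The small-angle motion model, the state layout [lateral offset, yaw angle, lane width], and all function names are assumptions for illustration, not the lecture's implementation; the innovation step is the standard Kalman measurement update.

```python
import numpy as np

def predict(mu, P, v, yaw_rate, dt, Q):
    """Prediction step from odometry, for state [y_off, psi, b].
    Assumed small-angle model: the lateral offset drifts by v*psi*dt,
    the yaw angle integrates the measured yaw rate, the lane width
    stays constant. Q is the process-noise covariance."""
    F = np.array([[1.0, v * dt, 0.0],
                  [0.0, 1.0,    0.0],
                  [0.0, 0.0,    1.0]])
    u = np.array([0.0, yaw_rate * dt, 0.0])   # control input from gyro
    return F @ mu + u, F @ P @ F.T + Q

def innovate(mu, P, H, z, R):
    """Innovation step: correct the prediction with marking observations
    z = H @ mu + noise (standard Kalman measurement update)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return mu + K @ (z - H @ mu), (np.eye(len(mu)) - K @ H) @ P
```

Each cycle first calls `predict` with the on-board speed and yaw-rate measurements, which also inflates the covariance, and then `innovate` with the marking detections of the new camera image, which shrinks it again.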
Okay. Unfortunately, roads are not always straight.
00:38:45.980 --> 00:38:55.490
So how can we model roads that are not straight? The
first extension would be that we say, "Okay, the road
00:38:55.490 --> 00:39:03.142
follows a curve with constant curvature or constant radius.
That means that the vehicle would drive on a road that
00:39:03.142 --> 00:39:11.692
follows a circle." This situation is shown here. Again,
we can determine a coordinate system that would be
00:39:11.692 --> 00:39:20.300
the green one here. And, oh, sorry, the vehicle coordinate
system would be the blue one here, and a road coordinate
00:39:20.300 --> 00:39:29.131
system that is shown here as the green one. Typically,
however, people in this situation do not use Euclidean
00:39:29.131 --> 00:39:39.805
coordinates for the road coordinate system, but
they use a coordinate system where at least the x axis
00:39:39.805 --> 00:39:49.971
is somehow curved and follows the circular structure.
And so we can represent, again, the position of the
00:39:49.971 --> 00:40:01.154
vehicle, not in x and y, but in something like, well,
the lateral offset, which is not, say, this offset here
00:40:01.154 --> 00:40:13.765
or this offset here, but it is the offset of the vehicle
from the center line of its lane. So this piece here,
00:40:13.765 --> 00:40:25.407
that is the lateral offset. Furthermore, the longitudinal
offset is now not this distance here, but it is the length
00:40:25.407 --> 00:40:36.180
of this arc, from the origin here to this position. Then,
of course, we have the yaw angle, and the yaw angle is
00:40:36.180 --> 00:40:45.694
not the turning between this blue coordinate system and the
green one, but the angle between a radial line here, through
00:40:45.694 --> 00:40:57.797
the origin of the vehicle coordinate system, and the y axis of
the current vehicle coordinate system. So just this angle
00:40:57.797 --> 00:41:06.781
here would be the yaw angle that is typically used. And of
course, still we have the lane width as a variable, as an
00:41:06.781 --> 00:41:14.715
unknown variable. And now we also need to know somehow the
radius of the curve. We can
00:41:14.715 --> 00:41:24.175
represent that either as the radius or as the curvature; the curvature
kappa is just the inverse of the radius. And
00:41:24.175 --> 00:41:32.485
the curvature has the advantage that a straight road has a
curvature of zero, while the radius of a straight road would
00:41:32.485 --> 00:41:40.175
be infinity, which is hard to represent. With the curvature
we can represent a straight road, and with a radius, it
00:41:40.175 --> 00:41:49.470
is not possible. Therefore, people often prefer to use the
curvature here. Both radius and curvature are signed
00:41:49.470 --> 00:41:59.183
numbers: to express whether it is a curve to the left or a curve to
the right, they have a sign to express
00:41:59.183 --> 00:42:08.340
this direction. So once we have this geometric modeling of
such a circular lane, we can represent the whole situation
00:42:08.340 --> 00:42:18.583
with an extended state vector that, again, contains the
lane width and the longitudinal and lateral position, as well as the
00:42:18.583 --> 00:42:27.039
yaw angle, but additionally it also contains either the radius
or the curvature. Yeah, so we have one unknown parameter
00:42:27.039 --> 00:42:36.844
more. Again, we could start to do the same as we did for the
case of a straight road: we could create a geometrical
00:42:36.844 --> 00:42:46.421
description of where we expect to find the lane markings.
Then we could try to translate that into the vehicle
00:42:46.421 --> 00:42:55.888
coordinate system. But things would become more difficult
and more nonlinear, because we have this curved
00:42:55.888 --> 00:43:06.352
coordinate system to consider. And therefore,
what we would end up with would not be a linear model, a
00:43:06.352 --> 00:43:14.823
linear state model or a linear observation model, but a
nonlinear observation model and a nonlinear state
00:43:14.823 --> 00:43:22.970
transition model. And therefore, if we would do that and go
through all the details, we would end up with something
00:43:22.989 --> 00:43:31.797
where we would either need a nonlinear regression, if we want to
have such a one-shot localization, or we would have to use
00:43:31.797 --> 00:43:39.820
the extended Kalman filter or the unscented Kalman filter,
or maybe even a particle filter, to determine or to estimate
00:43:39.820 --> 00:43:48.549
the position of the vehicle and the road geometry. The
basic principle is the same as we had for the straight road.
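For illustration, here is a generic extended-Kalman-filter measurement update that linearizes an arbitrary nonlinear observation function around the current estimate. The finite-difference Jacobian and all function names are assumptions for this sketch; an actual implementation might use an analytic Jacobian of the circular-road observation function.

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of the observation function h at x."""
    z0 = np.atleast_1d(h(x))
    J = np.zeros((z0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(h(x + dx)) - z0) / eps
    return J

def ekf_update(mu, P, h, z, R):
    """Extended Kalman measurement update: linearize the nonlinear
    observation function h around the current estimate mu, then apply
    the usual Kalman correction with the resulting Jacobian H."""
    H = numerical_jacobian(h, mu)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    mu_new = mu + K @ (z - np.atleast_1d(h(mu)))
    P_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, P_new
```

The structure is identical to the linear case; only the fixed matrix H is replaced by a Jacobian that is recomputed around each new estimate, which is exactly why the circular and clothoid road models fit the same filtering scheme at a higher cost.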
00:43:48.570 --> 00:43:56.309
But everything becomes a little bit more nonlinear and a
little bit more complicated by that. So that is still a
00:43:56.309 --> 00:44:04.710
possibility, to represent the road by circles. However, if
we would have a situation like that, and would combine
00:44:04.710 --> 00:44:14.828
roads or build roads in such a way that we only have either
straight parts or parts that follow a circle with fixed
00:44:14.828 --> 00:44:23.654
radius, so combinations like that, then we would run into
problems in practice. First of all, the drivers of the car
00:44:23.654 --> 00:44:33.185
would run into a problem. Why? Because the curvature of such a
road changes at one point from zero, straight, if we
00:44:33.185 --> 00:44:43.150
follow it, to a certain non-zero value. So the curvature
would look like that. And this point, of course, is the
00:44:43.150 --> 00:44:53.279
point here where the straight piece and the curved piece are
touching each other. And here, what would happen? The vehicle
00:44:53.279 --> 00:45:03.911
driver would have to change the steering angle of the vehicle
instantaneously from zero, driving straight, to, well,
00:45:03.911 --> 00:45:12.605
something else, following a curve. So that would be pretty demanding
for drivers. Therefore, roads
00:45:12.605 --> 00:45:22.586
are not built like that, if possible. Of course, in cities
there are houses, and you have to build the roads
00:45:22.586 --> 00:45:31.166
so that they fit between the houses. But on highways,
for instance, and rural roads, roads are built in a
00:45:31.166 --> 00:45:40.239
different way to avoid this phenomenon. Namely, if we have
a straight piece of a road, and afterwards we want to
00:45:40.239 --> 00:45:49.372
drive through a circle with constant curvature, in between
there is a piece in which the curvature changes slowly over
00:45:49.372 --> 00:46:00.238
time. So the curvature is zero on the straight
part, then comes this intermediate part where the curvature
00:46:00.238 --> 00:46:10.545
increases linearly and then we have the circle where
the curvature remains the same. And this piece in between
00:46:10.545 --> 00:46:20.186
where the curvature changes slowly over time, is called a
clothoid. A clothoid is a curve for which the curvature
00:46:20.186 --> 00:46:28.432
changes linearly depending on the arc length, on the
distance that we have travelled on this clothoid. So let us
00:46:28.432 --> 00:46:37.807
have a look at such a clothoid. So here is an example of
a clothoid, starting here with an initial curvature of
00:46:37.807 --> 00:46:50.250
zero. And then we can see that it starts to curve,
and the curvature increases more and more; the radius of this
00:46:50.250 --> 00:47:00.279
curve decreases more and more. And the whole
shape looks like that. For modelling roads, of course,
00:47:00.279 --> 00:47:07.214
these parts here are not that interesting, but especially these
parts here are interesting. These parts at the beginning
00:47:07.214 --> 00:47:16.734
of the clothoid, so to speak. Mathematically, what is
a clothoid? It is a geometric curve for
00:47:16.734 --> 00:47:32.281
which the curvature changes linearly over the arc length. So if L
is the arc length of this shape, then the curvature, depending
00:47:32.281 --> 00:47:51.722
on the arc length, is defined as an initial curvature kappa zero, plus a
change of curvature kappa one times L. At the beginning, the tangent has an
00:47:51.722 --> 00:48:04.921
angle of zero. So what is the angle, the tangent angle,
shown here on the curve? Here is our initial coordinate
00:48:04.921 --> 00:48:16.492
system. Here we fix a certain point on the curve. Oops. Here
the red arrows just show the coordinate system turned
00:48:16.492 --> 00:48:26.399
and shifted to that point, so that this direction is a tangent
to the clothoid, and this is a normal to the clothoid.
00:48:26.400 --> 00:48:36.800
Then we ask: this angle here, which I denoted as psi, what
is psi depending on L? Okay, how can we calculate that? We
00:48:36.800 --> 00:48:49.608
can calculate it incrementally. So we start here at the
beginning, where we know the angle is zero. And then we
00:48:49.608 --> 00:49:01.068
make a small step, consider the curvature for this small
step, add up the change of the tangent angle, and do
00:49:01.068 --> 00:49:09.539
that again and again. And if we make the steps smaller and
smaller, we end up with an integral term, this integral term.
00:49:09.539 --> 00:49:18.924
So what is it? We take the integral from zero to L,
from the initial position here to that position here. And
00:49:18.924 --> 00:49:30.306
throughout, we integrate over the present curvature. Since
this is just a linear function, we can analytically
00:49:30.306 --> 00:49:39.720
calculate this integral. And this becomes then kappa zero
times L plus one half times kappa one times L squared.
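These two formulas, the linearly changing curvature and its integral, the tangent angle, written out in code (function names are my own):

```python
def curvature(L, kappa0, kappa1):
    # Clothoid: the curvature changes linearly with the arc length L.
    return kappa0 + kappa1 * L

def tangent_angle(L, kappa0, kappa1):
    # Integral of the curvature from 0 to L, in closed form (radians):
    # psi(L) = kappa0 * L + 0.5 * kappa1 * L^2.
    return kappa0 * L + 0.5 * kappa1 * L ** 2
```

Summing the curvature over many small steps, as described above, converges to exactly this closed-form expression.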
00:49:39.730 --> 00:49:50.457
And this yields this angle here in radians, so the tangent
angle. The next thing is, of course, to calculate the
00:49:50.457 --> 00:50:01.620
position. So if you give me this arc length from here to
here, I want to determine the position of this point in this
00:50:01.620 --> 00:50:11.568
initial x, y coordinate system. What do I have to do? Well,
what I do is I walk along this clothoid, and at each point I
00:50:11.568 --> 00:50:22.688
make a small step. I add a small step in the tangent
direction of the clothoid to the position where I have
00:50:22.688 --> 00:50:32.171
been, and then I incrementally reduce the step width to
infinitely small steps, so that I get an integral term. That
00:50:32.171 --> 00:50:43.471
is shown here: an integral from zero to L, considering a
lot of very, very small, very tiny steps that I make in the
00:50:43.471 --> 00:50:53.224
direction of the tangent at the present point. And the
tangent here, this tangent, a unit vector into this
00:50:53.224 --> 00:51:02.722
tangent direction, is actually the vector that consists of the cosine
of the angle and the sine of the angle. That
00:51:02.722 --> 00:51:12.034
provides a vector in the tangent direction. Then we
make very small, incremental steps; that is shown here by this
00:51:12.034 --> 00:51:21.282
integral. So now, unfortunately, if I consider this equation,
this integral, of course, it is impossible to analytically
00:51:21.282 --> 00:51:31.973
calculate this integral, because I have the cosine of this
polynomial here, and the sine of the polynomial. And
00:51:31.973 --> 00:51:40.588
there is no analytic solution and no analytic expression to
determine that. And also, we cannot say this is equal to
00:51:40.588 --> 00:51:48.933
something. This can only be solved numerically. And one
way to resolve that numerically is to simplify everything
00:51:48.933 --> 00:51:57.937
for small tangent angles. Again, we assume we want to
model a vehicle on a road, and the tangent angle of the
00:51:57.937 --> 00:52:07.121
vehicle should be small; this angle is actually
the yaw angle of the vehicle, more or less. So that
00:52:07.121 --> 00:52:18.192
means this should be small. And if this is small, then we
can simplify again all the calculations. Yeah, then
00:52:18.192 --> 00:52:30.522
we can say that the cosine of a small angle can
be approximated by one. That means, instead of integrating
00:52:30.522 --> 00:52:40.837
over the cosine of the small angle, we just integrate over
one. And the sine of a small angle can be
00:52:40.837 --> 00:52:48.273
approximated by the angle itself, represented in radians.
That means, instead of integrating over the sine of psi of
00:52:48.273 --> 00:52:59.081
L, we integrate over psi of L. Psi of L is a
polynomial, so integration is now not a big problem. Here we can
00:52:59.081 --> 00:53:10.085
find an analytic expression. And that means, if
we substitute psi of L here by this term here, which we
00:53:10.085 --> 00:53:21.349
just have derived, then we get that x of L is approximately equal
to L; this comes from this term here. And y of L
00:53:21.349 --> 00:53:31.401
is approximately this polynomial, which comes from this
integration here. So what we do is we
00:53:21.349 --> 00:53:31.401
approximate a clothoid, for small tangent angles, by a
polynomial of third order. And polynomials are nice,
00:53:40.142 --> 00:53:49.033
because dealing with polynomials is much easier than dealing
with cosines and sines and integrals over them.
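A small sketch comparing the numerical integration of the clothoid with this third-order polynomial approximation; the step count, parameter values, and function names are illustrative assumptions.

```python
import math

def clothoid_xy_numeric(L, kappa0, kappa1, n=10000):
    """Walk along the clothoid in n small steps of the tangent direction
    (midpoint rule), as in the integral derivation above."""
    x = y = 0.0
    dl = L / n
    for i in range(n):
        l = (i + 0.5) * dl                         # midpoint of the step
        psi = kappa0 * l + 0.5 * kappa1 * l ** 2   # tangent angle psi(l)
        x += math.cos(psi) * dl
        y += math.sin(psi) * dl
    return x, y

def clothoid_xy_small_angle(L, kappa0, kappa1):
    """Third-order polynomial approximation for small tangent angles:
    cos(psi) ~ 1 gives x(L) ~ L, and sin(psi) ~ psi gives
    y(L) ~ kappa0 * L^2 / 2 + kappa1 * L^3 / 6."""
    return L, 0.5 * kappa0 * L ** 2 + kappa1 * L ** 3 / 6.0
```

For gentle curvatures over a typical look-ahead distance, the polynomial stays very close to the numerically integrated curve, which is why it is good enough for road modelling.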
00:53:49.033 --> 00:53:59.861
Now, if we do that, if we want to use these clothoids for
modeling roads, what do we need? Well, we need the
00:53:59.861 --> 00:54:08.930
curvature, namely the initial curvature kappa zero. Then we need
something like
00:54:08.930 --> 00:54:15.697
the change of curvature kappa one. Then we need something
like the lateral offset again of the vehicle, namely the
00:54:15.697 --> 00:54:29.729
lateral offset from the center line of the clothoid, then the longitudinal
position, that means the arc length on the clothoid, and
00:54:29.729 --> 00:54:42.503
the yaw angle of the vehicle on the clothoid, and the lane width. That means, in
this case, we have six variables, six variables in
00:54:42.503 --> 00:54:49.755
the state vector. And the nice thing is that the other
two models that we just saw are just special cases. That
00:54:49.755 --> 00:54:57.704
means, if we assume that kappa one is equal to zero, that
means there is no change of curvature, then what we get is
00:54:57.704 --> 00:55:05.120
just the drive-through-a-circle case. And if we further
assume that kappa zero is zero, that means this initial
00:55:05.120 --> 00:55:14.492
curvature is zero, then we get the special case that we drive
on a straight road. That means with this model with six
00:55:14.492 --> 00:55:24.597
parameters, we can model roads sufficiently well, at least
for rural roads and for highways. Now in cities, things are
00:55:24.597 --> 00:55:34.178
more difficult, because even with the clothoid it is
hard to represent the shapes of roads that exist. But for
00:55:34.178 --> 00:55:43.471
rural roads and highways this is sufficient, and we can
deal with it. And what we could do is we can
00:55:43.471 --> 00:55:51.453
then start to develop an extended Kalman filter model
or a particle filter model to estimate
00:55:51.453 --> 00:56:02.889
this state vector based on observations of
lane markings. Okay. So let us have a look at
00:56:02.889 --> 00:56:11.305
how this works with a small video. That is a video that was
created by , one of the former PhD students in our
00:56:11.305 --> 00:56:18.708
group. What he did: he was simplifying the model a little
bit. He was only tracking the right lane marking, so not the
00:56:18.708 --> 00:56:26.663
right and the left lane marking, but only the right one.
So he was not able to estimate the width of the road,
00:56:26.663 --> 00:56:34.743
but at least the curvature. And this is shown here in a
video that was recorded here in Kassel. In this video
00:56:34.743 --> 00:56:43.653
you find the image here, and the blue line is the projected
model, so the model that is projected into the image
00:56:43.653 --> 00:56:59.139
that was estimated. So let us start this video... let
us start this video now. So, it works on straight parts.
00:56:59.139 --> 00:57:09.480
It also works on curved parts, at least as long as the curvature
is not too strong. Okay, here it was switching from
00:57:09.480 --> 00:57:21.646
one lane marking to the other one. Now, it even deals also
with a little bit of changes of the slope of
00:57:21.646 --> 00:57:40.805
the road. Of course, this is not real time, but, whatever,
eight times real time or so. So, it works
00:57:40.805 --> 00:57:51.555
somehow sufficiently well for our purpose here.
Sometimes it is a little bit confused if the lane markings are
00:57:51.555 --> 00:58:00.980
not clearly visible; that happened here.
And of course, now things are difficult. So you see that
00:58:00.980 --> 00:58:09.890
here in this roundabout the video camera was not even able
to see the lane markings, and therefore the whole approach
00:58:09.890 --> 00:58:18.222
was not able to determine the lane. Here as well, the lane markings
are not visible, and the curvature is pretty strong, and
00:58:18.222 --> 00:58:26.416
therefore the approach failed. But again, as long as
the lane markings are clearly visible, which is hard again
00:58:26.416 --> 00:58:40.958
here, because we have a curbstone and a lane marking, it
works again quite well. Yeah. So then, for instance,
00:58:40.958 --> 00:58:53.579
what Anna is doing in her research work:
she is adapting these things now with a deep learning
00:58:53.579 --> 00:59:01.332
approach, to be able to determine also the
shape of roads in inner-city driving. Now, if you like, you
00:59:01.332 --> 00:59:09.538
can ask her on Friday about what she is
doing. Maybe she tells you a little bit more about what she
00:59:09.538 --> 00:59:17.855
is currently doing, but she is actually also continuing a
little bit in this direction, to detect the lanes and
00:59:17.855 --> 00:59:34.440
to estimate the road geometry in front of the vehicle for
autonomous driving. Hm, yeah. So much for the video. Now:
00:59:34.440 --> 00:59:43.188
"Everything solved?", question mark. Well, as you saw from the video,
at least at that point in time, which was maybe ten
00:59:43.188 --> 00:59:51.353
years ago, not everything was solved. So now we might ask,
"Is everything solved now?" And the answer is no, of course
00:59:51.353 --> 01:00:00.557
not. And the problems are not so much about having, maybe,
other geometrical models of the road, but
01:00:00.557 --> 01:00:09.620
the main challenges are still in interpreting images. Is
this approach able to work with bad weather conditions?
01:00:09.639 --> 01:00:20.470
Is it able to work where you have markings which are
not clearly visible? Does it work in situations
01:00:20.480 --> 01:00:29.279
when we have construction sites and there are
different kinds of markings, which are confusing people,
01:00:29.289 --> 01:00:38.076
yellow ones and white ones? Here in urban driving, we often
don't see the markings, or they are occluded by parking
01:00:38.076 --> 01:00:45.584
cars, or they don't exist, or there are a lot of markings
which are confusing, because these markings are for
01:00:45.584 --> 01:00:53.468
pedestrian crossings or whatever, but not for our own
lane. And all these are actually the problems that are
01:00:53.468 --> 01:01:00.107
challenging at the moment concerning this task of
driving. And of course, sometimes there is night, on average,
01:01:00.107 --> 01:01:08.406
twelve hours per day. And this means also that, of course,
visibility of lane markings is much reduced at night time. And
01:01:08.406 --> 01:01:19.255
to be able to drive at night time is as well a challenge. So,
just to summarize this chapter, what we have discussed:
01:01:19.255 --> 01:01:29.235
in the beginning, we were first discussing how to detect
road lane markings. We introduced this technique based
01:01:29.235 --> 01:01:37.409
on gradients, checking for positions with strong
gradients, and using different heuristics to filter out wrong
01:01:37.409 --> 01:01:46.999
detections. Then we were discussing how we can model lanes
and roads in a geometric way, and which parameters we need
01:01:46.999 --> 01:01:55.935
to estimate. Then we were discussing first how we can do
one-shot estimation just from one image, with a regression
01:01:55.935 --> 01:02:03.950
approach, with a least-sum-of-squares approach. And
afterwards, we extended that to an incremental approach, to
01:02:03.950 --> 01:02:12.808
a framework for incremental localization, where we
were using a Kalman filter, or a variant of the Kalman filter,
01:02:12.808 --> 01:02:20.282
to do the job for us. And finally, we were extending
the geometrical model first to arcs and circular structures,
01:02:20.282 --> 01:02:28.981
and then to clothoids. And at least we touched a little bit on
the geometry of clothoids, so that you have basic ideas
01:02:28.981 --> 01:02:38.538
about what a clothoid is and how to deal with it. So
much for chapter number seven at the moment. Do you have
01:02:38.538 --> 01:02:47.294
questions about chapter number seven at the moment? No? Okay,
then I have a question for you, or several questions for you,
01:02:47.294 --> 01:02:55.990
or better, there are some questions for you, because we again
have the lecture evaluation. So I'm always happy to get
01:02:55.990 --> 01:03:03.993
feedback from you. I take it seriously. Of course, I cannot
change everything, and sometimes also your wishes are
01:03:03.993 --> 01:03:09.939
contradicting each other. So some people say, "We want to
have more math," and others say, "We want to have less math."
01:03:09.960 --> 01:03:18.083
Okay? But I take it seriously, and I'm always happy to
get comments about the lecture, what you like, what you did
01:03:18.083 --> 01:03:26.158
not like. Of course, the numbers that you give are valuable,
but even more, the comments are valuable. So if you find
01:03:26.158 --> 01:03:35.648
something, if you don't like how I pronounce words in English
or whatever, tell it. I try to improve it next year or
01:03:35.648 --> 01:03:45.663
next time. These comments are valuable. So I distribute these
questionnaires. I need one person who
01:03:45.663 --> 01:03:56.186
is willing to collect everything and give it to
the evaluation office on the other side of the Ivan
01:03:56.186 --> 01:04:05.309
Hoof. Who is willing to do that? Yes? Thank
you very much. Okay. And, yeah, then there are some
01:04:05.309 --> 01:04:14.703
additional questions, and I want to use them. I will just
write them on the board so that you can also answer all
01:04:14.703 --> 01:04:26.037
of these additional three questions. So, did
everyone receive a questionnaire, or are there missing somewhere
01:04:26.037 --> 01:04:37.262
questionnaires? No? Okay. Then these additional
questions... So actually, if you are finished with
01:04:37.262 --> 01:04:45.639
your questionnaire, you can also leave; we will not
discuss anything else, but will then continue next week.