[Shape from Texture] Questions from the lecture

by Keith Jun Low -
Number of replies: 1

Slide 19: I don't understand the line at the bottom of the slide: "Train a regressor to predict depth --> Noisy predictions". Does this mean that if we use ML, the results will be noisy?

Slides 20 and 21: I don't understand the use of the Markov Random Field. Based on the slide, I think it means attaching a graph to the image, but I don't understand what the equation means or how it enforces consistency.

This leads to my question about the next slides: what is the difference between Deep Learning with an MRF and without?

In reply to Keith Jun Low

Re: [Shape from Texture] Questions from the lecture

by Andrey Davydov -

Slide 19: Yes, as you can see in the bottom figure. The reason is that we tackle each superpixel separately.

Slides 20 and 21: Here, in contrast, we add a loss that enforces consistency between the predictions of neighboring superpixels. The exact choice of loss is a purely technical detail; even an L2 distance would probably work. What matters most is the improvement from slide 19 to slide 20: adding a spatial consistency loss enforces connectivity and smoothness.
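To make the idea concrete, here is a minimal NumPy sketch (my own illustration, not the lecture's code). The unary term is the per-superpixel regression error from slide 19; the pairwise term penalizes depth differences between neighboring superpixels, which is the MRF-style consistency loss from slides 20-21. The function name, the edge-list format, and the weight `lam` are all my own choices:

```python
import numpy as np

def mrf_depth_loss(depths, targets, edges, lam=0.1):
    """Hypothetical MRF-style loss over superpixel depths.

    depths:  predicted depth per superpixel, shape (N,)
    targets: ground-truth depth per superpixel, shape (N,)
    edges:   list of (i, j) pairs of neighboring superpixels
    lam:     weight of the pairwise consistency term
    """
    # Unary term: each superpixel regressed independently (slide 19).
    unary = np.mean((depths - targets) ** 2)
    # Pairwise term: neighbors should predict similar depths
    # (the spatial consistency loss of slides 20-21, here plain L2).
    i, j = np.array(edges).T
    pairwise = np.mean((depths[i] - depths[j]) ** 2)
    return unary + lam * pairwise
```

With `lam=0` this reduces to the noisy independent-superpixel setup; increasing `lam` trades fidelity to the targets for smoothness across the graph.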

DL with/without MRF: here "DL with MRF" means that you EXPLICITLY treat the image as a graph (or a graph of superpixels) and impose losses on the consistency of neighbors. "Without MRF" means you do it implicitly: process the image in different ways, extract low-res and high-res features, then merge the results and minimize some loss as usual. There is no explicit graph representation in that case. All of these are very technical details.