backpropagation - How to calculate loss and gradient of index in Pytorch

I have a misaligned point cloud and a rotation matrix, and I want to use backpropagation to tune the rotation matrix in the 2D plane.

The problem is that in the forward pass, all I can get from the rotation matrix are indices in the 2D plane, and they contain duplicates, e.g.:

>>> x
[500.99, 500.5, 411.2, 411.02, 411]

But the desired output for this case should be

[500, 411]

while the ground truth might be

[505, 501, 460, 420]

I can't compare the float x and the ground truth directly with MSE because their dimensions are not equal.

Both x and the ground truth are indices into a 512-dimensional vector, but the gradient disappears as soon as I call x.long(), so I can't use them as indices either.
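
For illustration, a minimal toy snippet of where the gradient is lost (not my actual code, just the same pattern):

    import torch

    x = torch.tensor([500.99, 500.5, 411.2, 411.02, 411.0], requires_grad=True)
    idx = x.long()              # integer cast: not differentiable
    print(idx.requires_grad)    # False -- the graph is cut here
    vector = torch.zeros(512)
    vector[idx] = 1             # hard 0/1 write: no gradient flows back to x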

How can I properly calculate the loss for this problem?

Thanks

Edit

It's kind of a weird problem, so I'm not sure I'm explaining it properly.

The output is a set of indices into a vector.

Say the vector is 10-dimensional and the calculated x is

[9.1, 8.2, 8.1, 5.5]

The final output will be the unique integer parts of x, i.e. torch.unique(x.long()):

[9, 8, 5]

which turns the vector into

>>> vector = torch.zeros(10)
>>> vector[x] = 1
>>> vector
[0, 0, 0, 0, 0, 1, 0, 0, 1, 1]
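
One direction I have been considering (only a rough sketch; the soft_indicator helper, the Gaussian kernel, and sigma are my own guesses, not something I have working) is to replace the hard 0/1 write with a soft, differentiable indicator over the vector positions:

    import torch
    import torch.nn.functional as F

    def soft_indicator(x, size=10, sigma=0.5):
        # Each float "index" in x puts a narrow Gaussian bump on the grid 0..size-1.
        # Taking the max over points acts like a soft union, so duplicate indices
        # collapse naturally and the result stays differentiable w.r.t. x.
        grid = torch.arange(size, dtype=x.dtype, device=x.device)                    # (size,)
        bumps = torch.exp(-(grid[None, :] - x[:, None]) ** 2 / (2 * sigma ** 2))     # (N, size)
        return bumps.max(dim=0).values                                               # (size,)

    x = torch.tensor([9.1, 8.2, 8.1, 5.5], requires_grad=True)
    vector = soft_indicator(x)              # close to [0,0,0,0,0,1,0,0,1,1] for small sigma

    target = torch.zeros(10)
    target[torch.tensor([9, 8, 5])] = 1     # hard ground-truth mask built from the true indices
    loss = F.mse_loss(vector, target)
    loss.backward()                         # x.grad is now populated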

Edit2

Well, there are actually two questions in Ivan's comment.

What I have is a point cloud, pictures, and the rotation matrix and camera position at the time each picture was taken.

The problem is that the rotation matrix is wrong for some mysterious reason, and I want to correct it using the pictures I have.

Somebody else is trying a neural-network approach, but I want to see if I can treat the rotation matrix as weights and backprop through it, using MSE between the 2D projection of the point cloud and the pictures.

I tried something like

    def forward(self, pc, out):
        # Rotate the point cloud into camera space with the (learnable) rotation matrix
        inv_rotate_mat = torch.matmul(self.rot_y, self.rotation_matrix).T
        locals_ = pc.float() @ inv_rotate_mat.float()
        # Keep only the points in front of the camera
        front = locals_[locals_[:, 2] <= -1]

        # Project to homogeneous coordinates, then to 512x512 pixel coordinates
        ndc = torch.matmul(torch.cat((front, torch.ones((front.size(0), 1))), dim=1), self.proj_mat)
        w = ndc[:, 3] / 256

        x = ndc[:, 0] / w + 256
        y = -ndc[:, 1] / w + 256
        # Everything from here on goes through .long(), which detaches from the graph
        cond = torch.where((x.long() >= 0) & (x.long() < 512) & (y.long() >= 0) & (y.long() < 512))  # no gradient here
        yx = y.long()[cond]
        xx = x.long()[cond]
        scx = yx, xx
        # out = torch.zeros((512, 512))
        out[scx] = 1  # hard scatter: out no longer depends differentiably on x or y
        return x, y, out

But it obviously doesn't work, since no gradient can flow back through the integer indexing.
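
For what it's worth, here is a rough sketch of how the last few lines might be made differentiable by splatting each projected point as a small Gaussian onto the 512x512 image instead of using a hard integer scatter (the splat_points helper, sigma, and target_image are assumptions on my part, not working code from my project):

    import torch
    import torch.nn.functional as F

    def splat_points(x, y, size=512, sigma=1.0):
        # Differentiable stand-in for `out[y.long(), x.long()] = 1`.
        # Each projected point contributes a Gaussian bump, so gradients flow
        # back through x and y into the rotation matrix. Out-of-range points
        # contribute ~0 mass, which also replaces the boolean bounds filter.
        grid = torch.arange(size, dtype=x.dtype, device=x.device)
        wx = torch.exp(-(grid[None, :] - x[:, None]) ** 2 / (2 * sigma ** 2))   # (N, size)
        wy = torch.exp(-(grid[None, :] - y[:, None]) ** 2 / (2 * sigma ** 2))   # (N, size)
        # Per-point outer products, combined with max as a soft union of points
        return torch.einsum('ni,nj->nij', wy, wx).max(dim=0).values             # (size, size)

    # usage instead of the integer scatter in forward:
    # out = splat_points(x, y)
    # loss = F.mse_loss(out, target_image)   # target_image: 512x512 0/1 mask from the picture

The (N, 512, 512) intermediate can get large for a big point cloud, so the points would probably have to be processed in chunks, or the max replaced with a clamped sum.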

Question from: https://stackoverflow.com/questions/65921569/how-to-calculate-loss-and-gradient-of-index-in-pytorch


1 Answer

Waiting for answers.
