
# [SOT] siameseFC Paper and Code Walkthrough

## 1. Introduction

The idea of SOT (single-object tracking) is this: you draw a bounding box around the target in one frame of a video, and in subsequent frames the tracker locates the target by computing some form of similarity, with no need to re-detect bounding boxes and match them. (The original post illustrated this with an animated GIF of siameseMask, an upgraded version of the siameseFC covered in this chapter.) Classic methods include KCF [2].

As the paper puts it:

> The key contribution of this paper is to demonstrate that this approach achieves very competitive performance in modern tracking benchmarks at speeds that far exceed the frame-rate requirement.

• Inputs: a template image z (127×127×3) and a search image x (255×255×3)
• Output: a score map of size 17×17×1, where 17 = (22 − 6)/1 + 1, consistent with the definition of valid cross-correlation
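These sizes can be verified with a little arithmetic. A minimal sketch, with the (kernel, stride) pairs of every conv and max-pool layer taken from the AlexNet definition below:

```python
def conv_out(size, kernel, stride, padding=0):
    """Output size of a 'valid'-style conv or pool layer."""
    return (size + 2 * padding - kernel) // stride + 1

# (kernel, stride) of each conv and max-pool layer in the SiamFC AlexNet
LAYERS = [(11, 2), (3, 2), (5, 1), (3, 2), (3, 1), (3, 1), (3, 1)]

def embed_size(s):
    for k, st in LAYERS:
        s = conv_out(s, k, st)
    return s

print(embed_size(127))     # 6   - template feature map
print(embed_size(255))     # 22  - search feature map
print(conv_out(22, 6, 1))  # 17  - score map: (22 - 6) / 1 + 1
```

Sliding the 6×6 template embedding over the 22×22 search embedding with stride 1 is exactly what produces the 17×17 score map.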

## 2. siameseFC Network Architecture

The network consists of two parts:

• the AlexNet feature-extraction backbone
• the cross-correlation head

The AlexNet backbone is defined as follows:

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class AlexNet(nn.Module):
    def __init__(self):
        super(AlexNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=2, padding=0),
            nn.BatchNorm2d(96),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0))
        self.conv2 = nn.Sequential(
            nn.Conv2d(96, 256, 5, stride=1, padding=0, groups=2),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0))
        self.conv3 = nn.Sequential(
            nn.Conv2d(256, 384, 3, stride=1, padding=0),
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True))
        self.conv4 = nn.Sequential(
            nn.Conv2d(384, 384, 3, stride=1, padding=0, groups=2),
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True))
        self.conv5 = nn.Sequential(
            nn.Conv2d(384, 256, 3, stride=1, padding=0, groups=2))

    def forward(self, x):
        conv1 = self.conv1(x)
        conv2 = self.conv2(conv1)
        conv3 = self.conv3(conv2)
        conv4 = self.conv4(conv3)
        conv5 = self.conv5(conv4)
        return conv5
```

The full siameseFC module wraps the backbone with the cross-correlation step:

```python
class Siamfc(nn.Module):
    def __init__(self, branch):
        super(Siamfc, self).__init__()
        self.branch = branch
        # 1x1 conv that scales/offsets the raw correlation response
        self.bn_adjust = nn.Conv2d(1, 1, 1, stride=1, padding=0)
        self._initialize_weights()

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def Xcorr(self, x, z):          # x: search features, z: template features
        # correlate each search feature map with its own template
        out = []
        for i in range(x.size(0)):
            out.append(F.conv2d(x[i, :, :, :].unsqueeze(0), z[i, :, :, :].unsqueeze(0)))
        return torch.cat(out, dim=0)

    def forward(self, x, z):        # x: search image, z: template image
        x = self.branch(x)
        z = self.branch(z)
        xcorr_out = self.Xcorr(x, z)
        out = self.bn_adjust(xcorr_out)
        return out
```
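For intuition, `Xcorr` slides each template's 6×6×256 embedding over the matching 22×22×256 search embedding and sums over channels. A plain NumPy sketch of that per-sample operation (independent of the PyTorch code above):

```python
import numpy as np

def xcorr_single(search, template):
    """Cross-correlate one search feature map (C,H,W) with one template
    (C,h,w), summing over channels -- what F.conv2d computes per sample."""
    c, H, W = search.shape
    _, h, w = template.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[:, i:i + h, j:j + w] * template)
    return out

score = xcorr_single(np.random.randn(256, 22, 22), np.random.randn(256, 6, 6))
print(score.shape)  # (17, 17)
```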

## 3. siameseFC Inputs, Outputs, and Loss Function

### 3.1 siameseFC Network Inputs

```python
crop_z = self.crop(
    img_z, bndbox_z, self.exemplarSize
)  # crop the template patch from img_z, then resize to [127, 127]
crop_x = self.crop(
    img_x, bndbox_x, self.instanceSize
)  # crop the search patch from img_x, then resize to [255, 255]
```

```python
# crop an image patch of the specified size - template (127) or search (255)
def crop(self, image, bndbox, out_size):
    center = bndbox[:2] + bndbox[2:] / 2
    size = bndbox[2:]

    context = self.context * size.sum()  # context margin: (w + h) / 2
    patch_sz = out_size / self.exemplarSize * \
        np.sqrt((size + context).prod())

    return crop_pil(image, center, patch_sz, out_size=out_size)
```
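As a numeric sanity check of the patch-size formula (a hypothetical 100×60 target, the usual context factor of 0.5, and `out_size` equal to `exemplarSize` so the leading factor is 1):

```python
import numpy as np

w, h = 100.0, 60.0    # hypothetical target width and height
context_amount = 0.5  # plays the role of self.context above

context = context_amount * (w + h)                 # 2p = (w + h) / 2 = 80
patch_sz = np.sqrt((w + context) * (h + context))  # sqrt(180 * 140)
print(round(patch_sz, 1))  # ~158.7 pixels cropped, then resized to 127
```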

`context` is the width of the added margin, i.e. the 2p in the formula above. The method then calls into `crop_pil`, defined as follows:

```python
def crop_pil(image, center, size, padding='avg', out_size=None):
    # convert the bounding box to corners (top-left and bottom-right)
    size = np.array(size)
    corners = np.concatenate((center - size / 2, center + size / 2))
    corners = np.round(corners).astype(int)

    # pad the original image so the later crop cannot go out of bounds
    pads = np.concatenate((-corners[:2], corners[2:] - image.size))
    npad = max(0, int(pads.max()))

    if npad > 0:  # padding is needed
        image = pad_pil(image, npad, padding=padding)
    corners = tuple((corners + npad).tolist())
    patch = image.crop(corners)

    if out_size is not None:
        if isinstance(out_size, numbers.Number):
            out_size = (out_size, out_size)
        if not out_size == patch.size:
            patch = patch.resize(out_size, Image.BILINEAR)

    return patch
```

1. Padding: if the added margin extends beyond the original image (which typically happens when the bounding box is near the image border), the missing region is filled with the mean pixel value. As the paper states:

> When a sub-window extends beyond the extent of the image, the missing portions are filled with the mean RGB value

2. Rescaling, done here with `resize`.
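A self-contained sketch of that mean-padding behaviour on a NumPy image (the post delegates this to `pad_pil`, which is not shown here, so this is an illustration, not the actual implementation):

```python
import numpy as np

def crop_with_mean_pad(image, y1, x1, y2, x2):
    """Crop image[y1:y2, x1:x2]; any part falling outside the image is
    filled with the per-channel mean, as described in the paper."""
    H, W, C = image.shape
    pad = max(0, -y1, -x1, y2 - H, x2 - W)
    if pad > 0:
        big = np.empty((H + 2 * pad, W + 2 * pad, C), dtype=image.dtype)
        big[...] = image.mean(axis=(0, 1))       # fill with mean RGB
        big[pad:pad + H, pad:pad + W] = image    # paste the original image
        image, y1, x1, y2, x2 = big, y1 + pad, x1 + pad, y2 + pad, x2 + pad
    return image[y1:y2, x1:x2]

# a 50x50 crop whose top-left corner lies 10 px outside the image
patch = crop_with_mean_pad(np.ones((50, 50, 3)), -10, -10, 40, 40)
print(patch.shape)  # (50, 50, 3)
```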

### 3.2 Ground-Truth Labels for siameseFC

```python
labels, weights = self.create_labels()  # create the corresponding labels and weights

# create labels and weights; this part mirrors the Matlab version of SiamFC
def create_labels(self):
    labels = self.create_logisticloss_labels()
    weights = np.zeros_like(labels)

    pos_num = np.sum(labels == 1)
    neg_num = np.sum(labels == 0)
    weights[labels == 1] = 0.5 / pos_num  # balance positives and negatives
    weights[labels == 0] = 0.5 / neg_num
    # weights *= pos_num + neg_num

    labels = labels[np.newaxis, :]
    weights = weights[np.newaxis, :]

    return labels, weights
```

```python
def create_logisticloss_labels(self):
    label_sz = self.scoreSize             # 17
    r_pos = self.rPos / self.totalStride  # 16 / 8 = 2
    r_neg = self.rNeg / self.totalStride  # 0
    labels = np.zeros((label_sz, label_sz))

    for r in range(label_sz):
        for c in range(label_sz):
            dist = np.sqrt((r - label_sz // 2) ** 2 + (c - label_sz // 2) ** 2)
            if dist <= r_pos:
                labels[r, c] = 1
            elif dist <= r_neg:
                labels[r, c] = self.ignoreLabel
            else:
                labels[r, c] = 0

    return labels
```

Running `create_logisticloss_labels` yields the following 17×17 label map:

```
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
```
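The map can be reproduced standalone in a few vectorized lines; with r_pos = rPos / totalStride = 2, exactly 13 cells fall within distance 2 of the center:

```python
import numpy as np

label_sz, r_pos = 17, 16 // 8   # scoreSize, rPos / totalStride
rr, cc = np.mgrid[:label_sz, :label_sz]
dist = np.sqrt((rr - label_sz // 2) ** 2 + (cc - label_sz // 2) ** 2)
labels = (dist <= r_pos).astype(float)
print(int(labels.sum()))  # 13 positive cells, matching the array above
```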

### 3.3 Loss Function

```python
criterion = BCEWeightLoss()                 # define the criterion
output = model(search, template)
loss = criterion(output, labels, weights) / template.size(0)
```
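`BCEWeightLoss` itself is not shown in the post. A plausible, hypothetical reading consistent with how it is called: elementwise sigmoid binary cross-entropy on the raw score map, weighted and summed (since the balanced weights above sum to 1, the sum acts as a balanced mean). A NumPy sketch under that assumption:

```python
import numpy as np

def weighted_bce_with_logits(logits, labels, weights):
    """Hypothetical stand-in for BCEWeightLoss: elementwise sigmoid
    binary cross-entropy, weighted and summed."""
    y = 2.0 * labels - 1.0                 # map {0, 1} labels to {-1, +1}
    loss = np.log1p(np.exp(-y * logits))   # log(1 + exp(-y * score))
    return np.sum(weights * loss)

# at all-zero logits every cell costs log(2); balanced weights sum to 1
labels = np.zeros((17, 17)); labels[7:10, 7:10] = 1
weights = np.where(labels == 1, 0.5 / labels.sum(), 0.5 / (labels == 0).sum())
print(weighted_bce_with_logits(np.zeros((17, 17)), labels, weights))  # ~0.6931 = log(2)
```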

## 4. The Tracking Stage of siameseFC

At inference time, the response map is upsampled with bicubic interpolation, three scales are searched with a penalty on scale change, and the peak of the (displacement-penalized) response is mapped back to frame coordinates:

```python
scores_up = cv2.resize(scores, (config.final_sz, config.final_sz),
                       interpolation=cv2.INTER_CUBIC)   # [257, 257, 3]

scores_ = np.squeeze(scores_up)
# penalize change of scale
scores_[0, :, :] = config.scalePenalty * scores_[0, :, :]
scores_[2, :, :] = config.scalePenalty * scores_[2, :, :]
# find the scale with the highest peak (after the penalty)
new_scale_id = np.argmax(np.amax(scores_, axis=(1, 2)))
# update the scaled sizes
x_sz = (1 - config.scaleLR) * x_sz + config.scaleLR * scaled_search_area[new_scale_id]
target_w = (1 - config.scaleLR) * target_w + config.scaleLR * scaled_target_w[new_scale_id]
target_h = (1 - config.scaleLR) * target_h + config.scaleLR * scaled_target_h[new_scale_id]

# select the response at new_scale_id
score_ = scores_[new_scale_id, :, :]
score_ = score_ - np.min(score_)
score_ = score_ / np.sum(score_)
# apply the displacement penalty
score_ = (1 - config.wInfluence) * score_ + config.wInfluence * penalty
p = np.asarray(np.unravel_index(np.argmax(score_), np.shape(score_)))  # position of the max response
center = float(config.final_sz - 1) / 2                                # center of score_
disp_in_area = p - center
disp_in_xcrop = disp_in_area * float(config.totalStride) / config.responseUp
disp_in_frame = disp_in_xcrop * x_sz / config.instanceSize
pos_y, pos_x = pos_y + disp_in_frame[0], pos_x + disp_in_frame[1]
bboxes[f, :] = pos_x - target_w / 2, pos_y - target_h / 2, target_w, target_h
```
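The last few lines map the response peak back into frame coordinates: a displacement in the upsampled score map is scaled by the network stride, divided by the upsampling factor, then scaled by the crop-to-frame ratio. A small standalone check of that chain (the config values and `x_sz` here are made-up examples mirroring the code above):

```python
import numpy as np

# hypothetical config values: final_sz, totalStride, responseUp, instanceSize
final_sz, total_stride, response_up, instance_size = 257, 8, 16, 255
x_sz = 300.0  # hypothetical current search-region size in frame pixels

def displacement_in_frame(peak_rc):
    center = (final_sz - 1) / 2.0
    disp_in_area = np.asarray(peak_rc, dtype=float) - center
    disp_in_xcrop = disp_in_area * total_stride / response_up
    return disp_in_xcrop * x_sz / instance_size

print(displacement_in_frame((128, 128)))  # peak at the center -> [0. 0.]
print(displacement_in_frame((160, 128)))  # peak 32 px below center -> [~18.8, 0]
```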

## References

1. http://xxx.itp.ac.cn/abs/1606.09549
2. Henriques, J.F., Caseiro, R., Martins, P., Batista, J.: High-speed tracking with kernelized correlation filters. PAMI 37(3) (2015) 583-596
3. https://github.com/rafellerc/Pytorch-SiamFC