
In utils/quant_dorefa.py, line 46, why does it multiply by max_w? #10

Open
nbdyn opened this issue on 8 Sep · 2 comments

Comments

```python
class weight_quantize_fn(nn.Module):
    def __init__(self, w_bit):
        super(weight_quantize_fn, self).__init__()
        assert w_bit <= 8 or w_bit == 32
        self.w_bit = w_bit
        self.uniform_q = uniform_quantize(k=w_bit)

    def forward(self, x):
        if self.w_bit == 32:
            weight_q = x
        elif self.w_bit == 1:
            E = torch.mean(torch.abs(x)).detach()
            weight_q = self.uniform_q(x / E) * E
        else:
            weight = torch.tanh(x)
            max_w = torch.max(torch.abs(weight)).detach()
            weight = weight / 2 / max_w + 0.5
            weight_q = max_w * (2 * self.uniform_q(weight) - 1)
        return weight_q
```

**weight_q = max_w * (2 * self.uniform_q(weight) - 1)**
In utils/quant_dorefa.py, line 46, why does it multiply by max_w?
It does not seem to be needed in the formula?
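For comparison, the DoReFa-Net paper's weight formula is `w_q = 2 * quantize_k(tanh(w) / (2 * max|tanh(w)|) + 1/2) - 1`, which lands in [-1, 1] with no `max_w` factor. A small sketch (with made-up inputs and an assumed `2**k - 1`-level rounding quantizer) contrasts the paper's range with this repository's `max_w`-rescaled range:

```python
import numpy as np

def quantize_k(x, k):
    # assumed k-bit quantizer: 2**k - 1 uniform levels on [0, 1]
    n = float(2 ** k - 1)
    return np.round(x * n) / n

x = np.array([-1.5, -0.3, 0.2, 1.5])          # hypothetical raw weights
w = np.tanh(x)
max_w = np.max(np.abs(w))
u = w / (2 * max_w) + 0.5                     # shared affine map into [0, 1]

paper_q = 2 * quantize_k(u, 4) - 1            # paper formula: range [-1, 1]
code_q = max_w * paper_q                      # line 46: range [-max_w, max_w]
```

So the extra `max_w` factor rescales the quantized weights back to the magnitude of `tanh(w)` instead of leaving them in [-1, 1], which is exactly the discrepancy the question points at.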

