How to decide the bandwidth? #22

Closed
Jiankai-Sun opened this issue Aug 4, 2018 · 5 comments
Comments

Firstly, thank you for the great work!

As the paper mentions: "The maximum frequency b is known as the bandwidth, and is related to the resolution of the spatial grid (Kostelec and Rockmore, 2007)."

I notice that the bandwidth is set to 30 when you generate the new MNIST dataset. For each S2Convolution and SO3Convolution, the bandwidth is different.

I wonder how the parameter bandwidth should be chosen when using S2Convolution or SO3Convolution. Is it also a hyperparameter (an empirical value), or does it need to be calculated carefully?

Why did you set bandwidth = 30 when generating the new MNIST dataset?

Thank you!

Collaborator

Ultimately it is a hyperparameter, similar to the spatial resolution in a planar CNN. In a planar CNN this is purely determined by the stride of the convolutions and the pooling, but in spherical CNNs you have more flexibility: you can in principle choose the resolution freely in each layer.

There are currently no good "best practices" for spherical CNN architecture design, and this includes the bandwidth/resolution, but there are a couple of considerations that would factor into the decision:

  • Higher resolution means you can represent more small details
  • Higher resolution means higher computational cost
  • If you reduce the resolution too quickly, you would ignore units in the input, just like when you use 2D convolution with a stride that is larger than the filter size.
  • If the task is classification, you would typically start with a high resolution and gradually decrease it. The final layer can have very low resolution, and each unit has a receptive field that covers the whole input.
  • If the task is e.g. segmentation, you could try a U-net like architecture.
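To make the classification pattern above concrete, here is a small sketch of a bandwidth schedule that starts at the input resolution and roughly halves it per layer. This is a hypothetical illustration, not a schedule recommended by the authors; the halving factor and the floor `b_min` are assumptions.

```python
# Hypothetical bandwidth schedule for a classification-style spherical CNN:
# start at the input bandwidth and roughly halve it at each layer,
# ending at a low bandwidth whose units see (almost) the whole sphere.
def bandwidth_schedule(b_in, n_layers, b_min=2):
    bandwidths = [b_in]
    for _ in range(n_layers):
        # Halve the bandwidth, but never go below b_min.
        bandwidths.append(max(b_min, bandwidths[-1] // 2))
    return bandwidths

print(bandwidth_schedule(30, 4))  # [30, 15, 7, 3, 2]
```

Reducing by a factor of two per layer mirrors the stride-2 downsampling common in planar CNNs; the list above would be the `b_in`/`b_out` values passed to successive convolution layers.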

We chose bandwidth=30 because it's not too large, but still allows us to represent MNIST digits without losing too much detail. MNIST images are 28x28, but we project them onto only the top of the sphere, so using a spherical grid with 2*b=60 samples per dimension, we can represent them fairly accurately.
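The bandwidth-to-resolution relation above can be sketched directly. This assumes the equiangular sampling used in s2cnn, where a bandwidth of `b` corresponds to `2b` samples per angular dimension; the helper names are illustrative, not part of the s2cnn API.

```python
# Relation between bandwidth b and grid resolution on an equiangular
# sampling grid (assumption: 2b samples per angle, as in s2cnn).
def s2_grid_shape(b):
    """Spherical (beta, alpha) grid shape for bandwidth b."""
    return (2 * b, 2 * b)

def so3_grid_shape(b):
    """SO(3) (beta, alpha, gamma) grid shape for bandwidth b."""
    return (2 * b, 2 * b, 2 * b)

print(s2_grid_shape(30))   # (60, 60) -- room for a 28x28 MNIST digit
print(so3_grid_shape(10))  # (20, 20, 20)
```

With b=30 the 60x60 spherical grid comfortably covers a 28x28 digit projected onto one hemisphere.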

Author

Thank you for your quick reply!

As the paper describes: "We created two instances of this dataset: one in which each digit is projected on the northern hemisphere and one in which each projected digit is additionally randomly rotated." Why do you project only onto the northern hemisphere instead of the entire sphere?

Thank you!

Author

Jiankai-Sun commented Aug 4, 2018
edited

Also, VGG Net contains some nn.MaxPool2d(kernel_size=2, stride=2) layers. Is there an implementation of a MaxPool2d() or MaxPool3d() operation for spherical CNNs? Probably so3_integrate() is one possible solution, as this issue mentioned. However, can we feed the output of so3_integrate() into the next SO3Convolution()? I am afraid the shape is not appropriate. Or can we directly use torch.nn.MaxPool2d / torch.nn.MaxPool3d, as in a standard 2D CNN?

Or perhaps we don't need to think about pooling at all, since there are no kernel_size or stride concepts in s2cnn?

Thank you for your suggestion.

Collaborator

We projected onto the northern hemisphere because that way the digit doesn't get stretched too much. It's just a toy experiment so we didn't think about this too much. Projecting it on the whole sphere would most likely work as well.

Max pooling is a bit tricky. You could just do nn.MaxPool2d or 3d on the array that stores the feature map, but due to the inhomogeneous sampling grid, this would not be equivariant. It would probably still be approximately equivariant, and may work in practice.
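As a minimal sketch of this "pool on the raw array" heuristic: an SO(3) feature map with bandwidth b can be stored as an array of shape (channels, 2b, 2b, 2b), and block-wise max pooling halves each dimension. The function name and numpy implementation are assumptions for illustration; as noted above, this ignores the inhomogeneous grid and is only approximately equivariant.

```python
import numpy as np

# Approximate (non-equivariant) max pooling on an SO(3) feature map stored
# as an array of shape (channels, 2b, 2b, 2b). Pooling by `factor` halves
# the bandwidth when factor=2. Equivalent to nn.MaxPool3d on the raw array.
def so3_max_pool(x, factor=2):
    c, d1, d2, d3 = x.shape
    x = x.reshape(c, d1 // factor, factor,
                  d2 // factor, factor,
                  d3 // factor, factor)
    # Take the max over each factor^3 block.
    return x.max(axis=(2, 4, 6))

b = 4
feat = np.random.randn(8, 2 * b, 2 * b, 2 * b)
print(so3_max_pool(feat).shape)  # (8, 4, 4, 4), i.e. bandwidth 2
```

Because samples near the poles cover much less area than samples near the equator, a rotation of the input does not map pooling blocks onto pooling blocks, which is why this operation breaks exact equivariance.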

so3_integrate() does a global average pooling. If you want to do a local average pooling, you could use a convolution with a fixed Gaussian blur filter, and sample the result on a low-resolution (low-bandwidth) grid.
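A rough sketch of the "blur then subsample" idea on a 2b x 2b spherical grid, using a plain separable Gaussian on the array. This is only an illustration of the principle: a real implementation would use a proper spherical convolution (as suggested above) so that the smoothing stays equivariant. All function names here are hypothetical.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian filter taps, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

# Local average pooling: blur with a fixed Gaussian, then sample the
# result on a lower-resolution (lower-bandwidth) grid.
def blur_and_subsample(signal, factor=2, sigma=1.0):
    k = gaussian_kernel(sigma, radius=2)
    # Separable blur: filter along each axis of the 2D grid in turn.
    for axis in range(signal.ndim):
        signal = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, signal)
    return signal[::factor, ::factor]  # keep every `factor`-th sample

b = 8
grid_signal = np.random.randn(2 * b, 2 * b)
print(blur_and_subsample(grid_signal).shape)  # (8, 8): bandwidth halved
```

Blurring before subsampling acts as an anti-aliasing step, so the low-bandwidth grid sees a locally averaged version of the signal rather than raw point samples.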

Author

Many thanks for your quick reply! Your work is fascinating!
