ggt.models

Submodules

Package Contents

Classes

GGT: Galaxy Group-Equivariant Transformer model.
GGT_no_gconv: Galaxy Group-Equivariant Transformer model.
vgg16_w_stn_drp: Base class for all neural network modules.
vgg16_w_stn_drp_2: Base class for all neural network modules.
vgg16_w_stn_at_drp: Base class for all neural network modules.
vgg16_w_stn_oc_drp: Base class for all neural network modules.

Functions

vgg16(cutout_size, channels, n_out=1, pretrained=True)
model_stats(model)
model_factory(modeltype)
save_trained_model(model, slug)
- class ggt.models.GGT(cutout_size, channels, n_out=1, dropout=0.5)
Bases: torch.nn.Module
Galaxy Group-Equivariant Transformer model.
- setup_stn(input_shape)
- setup_featurizer()
- setup_regression()
- setup_pooling(input_shape=(6, 6))
- setup_dropout(dropout)
- spatial_transform(x)
- forward(x)
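Below is a minimal usage sketch for GGT. The batch size, channel count, and cutout size are illustrative assumptions, not values prescribed by the package; only the constructor signature above is taken from the documentation.

    import torch
    from ggt.models import GGT

    # Build the model for hypothetical 3-channel, 64 x 64 pixel cutouts.
    model = GGT(cutout_size=64, channels=3, n_out=1, dropout=0.5)

    # Forward pass on a random batch of 8 cutouts; for n_out=1 the output
    # is expected to have shape (8, 1).
    x = torch.randn(8, 3, 64, 64)
    y = model(x)

    # spatial_transform applies the spatial transformer (STN) stage on its own.
    x_aligned = model.spatial_transform(x)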
- class ggt.models.GGT_no_gconv(cutout_size, channels, n_out=1)
Bases: torch.nn.Module
Galaxy Group-Equivariant Transformer model.
- spatial_transform(x)
- forward(x)
- ggt.models.vgg16(cutout_size, channels, n_out=1, pretrained=True)
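A short sketch of calling the vgg16 factory function. The argument values below are illustrative, and the exact source of the pretrained weights is an assumption (a standard VGG-16 backbone is a reasonable guess but is not stated here).

    import torch
    from ggt.models import vgg16

    # Build a VGG-16-based model adapted to the cutout shape.
    model = vgg16(cutout_size=64, channels=3, n_out=1, pretrained=True)

    # Forward pass; for n_out=1 the output is expected to have shape (4, 1).
    y = model(torch.randn(4, 3, 64, 64))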
- class ggt.models.vgg16_w_stn_drp(cutout_size, channels, n_out=1, pretrained=True, dropout=False, dropout_rate=0.5)
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
- spatial_transform(x)
- forward(x)
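A usage sketch for vgg16_w_stn_drp. The dropout arguments mirror the constructor signature above; the input shape and the behaviour of spatial_transform (returning STN-transformed cutouts) are assumptions for illustration.

    import torch
    from ggt.models import vgg16_w_stn_drp

    # Enable dropout at the rate given in the constructor.
    model = vgg16_w_stn_drp(
        cutout_size=64,
        channels=3,
        n_out=1,
        pretrained=True,
        dropout=True,
        dropout_rate=0.5,
    )

    x = torch.randn(2, 3, 64, 64)
    x_aligned = model.spatial_transform(x)  # output of the STN stage alone
    y = model(x)                            # full forward pass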
- class ggt.models.vgg16_w_stn_drp_2(cutout_size, channels, n_out=1, pretrained=True, dropout=False, dropout_rate=0.5)
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
- spatial_transform(x)
- forward(x)
- class ggt.models.vgg16_w_stn_at_drp(cutout_size, channels, n_out=1, pretrained=True, dropout=False, dropout_rate=0.5)
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
- spatial_transform(x)
- forward(x)
- class ggt.models.vgg16_w_stn_oc_drp(cutout_size, channels, n_out=1, pretrained=True, dropout=False, dropout_rate=0.5)
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
- spatial_transform(x)
- forward(x)
- ggt.models.model_stats(model)
- ggt.models.model_factory(modeltype)
- ggt.models.save_trained_model(model, slug)
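A sketch of how these helper functions might be combined. The "vgg16" model-type string, the assumption that model_factory returns a model class rather than an instance, and the slug value are all illustrative guesses based on the names above, not documented behaviour.

    import torch
    from ggt.models import model_factory, model_stats, save_trained_model

    # Look up a model class by name (assumed to accept strings matching
    # the class/function names in this module).
    cls = model_factory("vgg16")
    model = cls(cutout_size=64, channels=3, n_out=1)

    # Print summary statistics (e.g. parameter counts) for the model.
    print(model_stats(model))

    # Persist the trained weights under a human-readable slug.
    save_trained_model(model, "demo-run")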