| |
Methods defined here:
- __init__(self, *args, **kwargs)
- build_convo_layers(self, configs_for_all_convo_layers)
- build_fc_layers(self)
- check_a_sampling_of_images(self)
- Displays the first batch_size images in your dataset.
- display_tensor_as_image(self, tensor, title='')
- This method converts the argument tensor into a photo image that you can display
on your terminal screen. It can convert tensors of three different shapes
into images: (3,H,W), (1,H,W), and (H,W), where H stands for the number of
pixels in the vertical direction (the height) and W for the number along the
horizontal direction (the width). When the first element of the shape is 3,
the tensor represents a color image in which each pixel in the (H,W) plane
has three values, one for each color channel. When the first element is 1,
the tensor is shown as a grayscale image. And when the shape is just (H,W),
it is automatically taken to be a grayscale image.
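The shape handling described above can be sketched as follows. This is a minimal illustration of my own (not DLStudio's actual code), assuming NumPy arrays and the (H,W[,3]) channels-last layout that viewers such as matplotlib's imshow() expect:

```python
import numpy as np

def to_displayable(arr):
    """Convert an array shaped (3,H,W), (1,H,W), or (H,W) into the
    (H,W[,3]) layout expected by typical image viewers."""
    if arr.ndim == 3 and arr.shape[0] == 3:
        return np.transpose(arr, (1, 2, 0))   # color: channels first -> channels last
    if arr.ndim == 3 and arr.shape[0] == 1:
        return arr[0]                         # drop the singleton channel: grayscale
    if arr.ndim == 2:
        return arr                            # already a grayscale (H,W) image
    raise ValueError("expected shape (3,H,W), (1,H,W), or (H,W)")
```

For example, a (3,32,32) CIFAR-10 tensor becomes a (32,32,3) array, while (1,32,32) and (32,32) both come out as (32,32) grayscale arrays.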
- imshow(self, img)
- Called by display_tensor_as_image() to display the image.
- load_cifar_10_dataset(self)
- We make sure that the transformations applied to the images end with the images
being normalized. Consider this call to Normalize: "Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))".
The three numbers in the first tuple are the means for the three color channels,
and the three numbers in the second tuple the standard deviations. In this case,
we want the image value in each channel to be changed to:
image_channel_val = (image_channel_val - mean) / std
So with mean and std both set to 0.5 for all three channels, if the image tensor originally
was between 0 and 1.0, after this normalization the tensor will be between -1.0 and +1.0.
If needed, we can invert the normalization with
image_channel_val = (image_channel_val * std) + mean
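The two formulas above can be checked with a small sketch in plain Python (the function names are mine, purely for illustration):

```python
MEAN, STD = 0.5, 0.5   # per-channel mean and std, as in the Normalize call above

def normalize(v, mean=MEAN, std=STD):
    # maps values in [0, 1] to [-1, 1] when mean = std = 0.5
    return (v - mean) / std

def denormalize(v, mean=MEAN, std=STD):
    # inverse normalization: recovers the original channel value
    return v * std + mean
```

With these settings, normalize(0.0) gives -1.0 and normalize(1.0) gives +1.0, and denormalize() undoes the mapping exactly.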
- load_cifar_10_dataset_with_augmentation(self)
- In general, we want to apply data augmentation to the training data.
- parse_config_string_for_convo_layers(self)
- Each collection of 'n' otherwise identical layers in a convolutional network is
specified by a string that looks like:
"nx[a,b,c,d]-MaxPool(k)"
where
n = num of this type of convo layer
a = number of out_channels [in_channels determined by prev layer]
b,c = kernel for this layer is of size (b,c) [b along height, c along width]
d = stride for convolutions
k = maxpooling over kxk patches with stride of k
Example:
"n1x[a1,b1,c1,d1]-MaxPool(k1) n2x[a2,b2,c2,d2]-MaxPool(k2)"
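A parser for strings of this form could be sketched as below. This is a hypothetical illustration of mine, not DLStudio's implementation; the dict keys are my own naming:

```python
import re

# One group per "nx[a,b,c,d]-MaxPool(k)" specifier
LAYER_SPEC = re.compile(r'(\d+)x\[(\d+),(\d+),(\d+),(\d+)\]-MaxPool\((\d+)\)')

def parse_convo_config(config_str):
    """Parse strings like "2x[16,3,3,1]-MaxPool(2)" into per-group dicts."""
    groups = []
    for m in LAYER_SPEC.finditer(config_str):
        n, a, b, c, d, k = map(int, m.groups())
        groups.append({
            'num_layers':   n,       # n identical convo layers of this type
            'out_channels': a,       # in_channels comes from the previous layer
            'kernel':       (b, c),  # (height, width)
            'stride':       d,
            'maxpool':      k,       # kxk pooling with stride k
        })
    return groups
```

For instance, "2x[16,3,3,1]-MaxPool(2) 1x[32,5,5,1]-MaxPool(2)" parses into two groups, the first describing two conv layers with 16 output channels, 3x3 kernels, stride 1, followed by 2x2 max pooling.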
- plot_loss(self)
- run_code_for_testing(self, net)
- run_code_for_training(self, net)
- save_model(self, model)
- Save the trained model to a disk file
- show_network_summary(self, net)
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
Data and other attributes defined here:
- AutogradCustomization = <class 'DLStudio.AutogradCustomization'>
- This class illustrates how you can extend the functionality of Autograd by
following the instructions posted at
https://pytorch.org/docs/stable/notes/extending.html
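The general pattern from those instructions looks like the following. MyReLU here is a toy example of mine following the "Extending PyTorch" notes, not a class that DLStudio itself defines:

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)       # stash the input for the backward pass
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0          # gradient is zero where the input was negative
        return grad_input
```

Invoking MyReLU.apply(x) in a forward pass then lets Autograd call your custom backward() during backpropagation.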
- ExperimentsWithCIFAR = <class 'DLStudio.ExperimentsWithCIFAR'>
- ExperimentsWithSequential = <class 'DLStudio.ExperimentsWithSequential'>
- Demonstrates how to use the torch.nn.Sequential container class
- Net = <class 'DLStudio.Net'>
|