Methods defined here:
- __del__(self)
- # The destructor:
- __init__(self, *args, **kwargs)
- accessing_one_color_plane(self, image_file, n)
- This method shows how you can access the n-th color plane of the argument color image.
- convolutions_with_pytorch(self, image_file, kernel)
- Using torch.nn.functional.conv2d() for demonstrating a single image convolution with
a specified kernel
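What torch.nn.functional.conv2d() computes for a single channel can be sketched in plain Python. This is a toy stand-in for illustration only (assuming stride 1 and no padding); note that conv2d() actually performs cross-correlation, i.e. it does not flip the kernel:

```python
def conv2d_single(image, kernel):
    """Valid-mode 2-D cross-correlation of one single-channel image
    (nested lists) with one kernel -- the per-channel operation that
    torch.nn.functional.conv2d() carries out."""
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1, 1],
       [1, 1]]
print(conv2d_single(img, box))   # each output value is a 2x2 neighborhood sum
```

A 2x2 box kernel over a 3x3 image yields a 2x2 output, one sum per neighborhood.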
- displayImage(self, argimage, title='')
- Displays the argument image. The display stays on for the number of seconds
that is the first argument in the call to tk.after() divided by 1000.
- displayImage2(self, argimage, title='')
- Displays the argument image. The display stays on until the user closes the
window. If you want a display that automatically shuts off after a certain
number of seconds, use the previous method displayImage().
- displayImage3(self, argimage, title='')
- Displays the argument image (which must be of type Image) in its actual size. The
display stays on until the user closes the window. If you want a display that
automatically shuts off after a certain number of seconds, use the method
displayImage().
- displayImage4(self, argimage, title='')
- Displays the argument image (which must be of type Image) in its actual size without
imposing the constraint that the larger dimension of the image be at most half the
corresponding screen dimension.
- displayImage5(self, argimage, title='')
- This does the same thing as displayImage4() except that it also provides for
"save" and "exit" buttons. This method displays the argument image with more
liberal sizing constraints than the previous methods. This method is
recommended for showing a composite of all the segmented objects, with each
object displayed separately. Note that 'argimage' must be of type Image.
- displayImage6(self, argimage, title='')
- For the argimage, which must be of type PIL.Image, this does the same thing as
displayImage3() except that it also provides for "save" and "exit" buttons.
- display_tensor_as_image(self, tensor, title='')
- This method converts the argument tensor into a photo image that you can display
on your terminal screen. It can convert tensors of three different shapes
into images: (3,H,W), (1,H,W), and (H,W), where H, for height, stands for the
number of pixels in the vertical direction and W, for width, for the same
along the horizontal direction. When the first element of the shape is 3,
that means that the tensor represents a color image in which each pixel in
the (H,W) plane has three values for the three color channels. On the other
hand, when the first element is 1, that stands for a tensor that will be
shown as a grayscale image. And when the shape is just (H,W), that is
automatically taken to be for a grayscale image.
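The shape-dispatch rule just described can be sketched as a plain-Python helper (a hypothetical illustration of the logic, not the module's code):

```python
def tensor_display_mode(shape):
    """Map a tensor shape to the display mode described above:
    (3,H,W) -> color, (1,H,W) -> grayscale, (H,W) -> grayscale."""
    if len(shape) == 3 and shape[0] == 3:
        return "color"
    if len(shape) == 3 and shape[0] == 1:
        return "grayscale"
    if len(shape) == 2:
        return "grayscale"
    raise ValueError("unsupported tensor shape: %r" % (shape,))
```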
- extract_data_pixels_in_bb(self, image_file, bounding_box)
- Mainly used for testing
- extract_image_region_interactively_by_dragging_mouse(self, image_name)
- This is one method you can use to apply the selective search algorithm to just a
  portion of your image. This method extracts the portion you want. You click
at the upper left corner of the rectangular portion of the image you are
interested in and you then drag the mouse pointer to the lower right corner.
Make sure that you click on "save" and "exit" after you have delineated the
area.
- extract_image_region_interactively_through_mouse_clicks(self, image_file)
- This method allows a user to use a sequence of mouse clicks in order to specify a
region of the input image that should be subject to further processing. The
mouse clicks taken together define a polygon. The method encloses the
polygonal region by a minimum bounding rectangle, which then becomes the new
input image for the rest of processing.
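The last step, enclosing the clicked polygon in a minimum bounding rectangle, amounts to taking coordinate-wise extremes. A sketch with a hypothetical helper, taking mouse clicks as (x, y) pixel coordinates:

```python
def min_bounding_rectangle(click_points):
    """click_points: mouse clicks as (x, y) pixel coordinates tracing
    a polygon.  Returns the minimum axis-aligned bounding rectangle
    as (x_min, y_min, x_max, y_max) -- the crop box that becomes the
    new input image."""
    xs = [x for x, y in click_points]
    ys = [y for x, y in click_points]
    return (min(xs), min(ys), max(xs), max(ys))
```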
- extract_rectangular_masked_segment_of_image(self, horiz_start, horiz_end, vert_start, vert_end)
- Keep in mind the following convention used in the PIL's Image class: the first
coordinate in the args supplied to the getpixel() and putpixel() methods is for
the horizontal axis (the x-axis, if you will) and the second coordinate for the
vertical axis (the y-axis). On the other hand, in the args supplied to the
array and matrix processing functions, the first coordinate is for the row
index (meaning the vertical) and the second coordinate for the column index
(meaning the horizontal). In what follows, I use the index 'i' with its
positive direction going down for the vertical coordinate and the index 'j'
with its positive direction going to the right as the horizontal coordinate.
The origin is at the upper left corner of the image.
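Under the (i, j) convention just described, building a rectangular mask is two nested loops. A minimal sketch (hypothetical helper, with nested lists standing in for a real matrix, and parameter names matching the method's):

```python
def rectangular_mask(height, width, horiz_start, horiz_end, vert_start, vert_end):
    """1 inside the rectangle, 0 outside.  Index i runs down the
    vertical axis, index j runs right along the horizontal axis, and
    the origin is at the upper left corner of the image."""
    return [[1 if (vert_start <= i < vert_end and horiz_start <= j < horiz_end)
             else 0
             for j in range(width)]
            for i in range(height)]
```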
- gaussian_smooth(self, pil_grayscale_image)
- This method smooths an image with a Gaussian of specified sigma.
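Gaussian smoothing convolves the image with samples of a Gaussian of the chosen sigma. Building the 1-D kernel looks like this (a sketch, assuming the common 3-sigma truncation; the 2-D smoothing is separable into a row pass and a column pass):

```python
import math

def gaussian_kernel_1d(sigma, radius=None):
    """A 1-D Gaussian kernel for the given sigma, normalized to sum
    to 1.  Convolving along the rows and then along the columns with
    this kernel is equivalent to 2-D Gaussian smoothing."""
    if radius is None:
        radius = int(math.ceil(3 * sigma))   # usual 3-sigma truncation
    vals = [math.exp(-(i * i) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]
```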
- graph_based_segmentation(self, image_name, num_blobs_wanted=None)
- This is an implementation of the Felzenszwalb and Huttenlocher algorithm for
graph-based segmentation of images. At the moment, it is limited to working
on grayscale images.
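The heart of the Felzenszwalb-Huttenlocher algorithm is a union-find pass over the edges sorted by weight: two components are merged when the joining edge is no heavier than either component's internal difference plus k divided by its size. A minimal sketch of that criterion (not the module's implementation):

```python
class DisjointSet:
    """Union-find over pixel indices, tracking for each component its
    size and its internal difference (max edge weight inside it)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, ra, rb, w):
        # ra, rb must be roots; w is the weight of the joining edge
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.internal[ra] = max(self.internal[ra], self.internal[rb], w)

def fh_segment(num_pixels, edges, k=1.0):
    """edges: (weight, u, v) triples, weight = pixel dissimilarity.
    Merge two components when the joining edge is no heavier than
    either component's internal difference plus k / component size."""
    ds = DisjointSet(num_pixels)
    for w, u, v in sorted(edges):
        ru, rv = ds.find(u), ds.find(v)
        if ru == rv:
            continue
        if w <= min(ds.internal[ru] + k / ds.size[ru],
                    ds.internal[rv] + k / ds.size[rv]):
            ds.union(ru, rv, w)
    return ds
```

For four pixels in a row with intensities [10, 10, 10, 100] and k = 1, the three edges have weights 0, 0, 90: the first three pixels merge into one segment and the bright pixel stays its own segment.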
- graph_based_segmentation_for_arrays(self, which_one)
- This method is provided to enable the user to play with small arrays when
experimenting with graph-based logic for image segmentation. At the moment, it
provides three small arrays, one under the "which_one==1" option, one under the
"which_one==2" option, and the last under the "which_one==3" option.
- graying_resizing_binarizing(self, image_file, polarity=1, area_threshold=0, min_brightness_level=100)
- This is a demonstration of some of the more basic and commonly used image
transformations from the torchvision.transforms module. The large comment
blocks are meant to serve as a tutorial introduction to the syntax used for invoking
these transformations. The transformations shown can be used for converting a
color image into a grayscale image, for resizing an image, for converting a
PIL.Image into a tensor and a tensor back into a PIL.Image object, and so on.
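Per pixel, the color-to-grayscale conversion those transforms perform reduces to a weighted channel sum. A sketch for illustration, using the ITU-R BT.601 luma weights (the weighting PIL's 'L' mode uses):

```python
def to_gray(r, g, b):
    # ITU-R BT.601 luma weights: green dominates, blue contributes least
    return 0.299 * r + 0.587 * g + 0.114 * b
```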
- histogramming_and_thresholding(self, image_file)
- PyTorch-based experiments with histogramming and thresholding
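One classic way to pick a threshold from a histogram is Otsu's method, which chooses the bin that maximizes the between-class variance. A plain-Python sketch of that standard technique, shown for illustration (not necessarily the rule this method uses):

```python
def otsu_threshold(hist):
    """Return the bin index t that best splits the histogram into a
    'background' class (bins <= t) and a 'foreground' class (bins > t)
    by maximizing the between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0      # running sum of value*count below the threshold
    w_b = 0          # running count below the threshold
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a bimodal histogram with one cluster in low bins and one in high bins, the returned threshold falls at the top of the low cluster.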
- histogramming_the_image(self, image_file)
- PyTorch-based experiments with histogramming the grayscale and the color values in an
  image
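Stripped of the PyTorch machinery, histogramming grayscale values is a binning tally. A minimal sketch with a hypothetical helper:

```python
def gray_histogram(pixels, num_bins=16, max_val=255):
    """Tally grayscale values in [0, max_val] into num_bins
    equal-width bins; returns a list of counts, one per bin."""
    hist = [0] * num_bins
    for p in pixels:
        b = min(p * num_bins // (max_val + 1), num_bins - 1)
        hist[b] += 1
    return hist
```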
- repair_blobs(self, merged_blobs, color_map, all_pairwise_similarities)
- The goal here is to do a final clean-up of the blobs by merging tiny pixel blobs with
an immediate neighbor, etc. Such a cleanup requires adjacency info regarding the
blobs in order to figure out which larger blob to merge a small blob with.
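The clean-up described above can be sketched as a relabeling pass. This is a hypothetical, simplified helper; the real method also uses the color map and the pairwise similarities when choosing merge targets:

```python
def merge_tiny_blobs(blob_sizes, adjacency, min_size):
    """blob_sizes: {label: pixel count}; adjacency: {label: set of
    labels of immediately adjacent blobs}.  Relabel every blob smaller
    than min_size to its largest immediate neighbor."""
    relabel = {}
    for label, size in blob_sizes.items():
        if size < min_size and adjacency.get(label):
            neighbor = max(adjacency[label], key=lambda n: blob_sizes[n])
            relabel[label] = neighbor
    return relabel
```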
- selective_search_for_region_proposals(self, graph, image_name)
- This method implements the Selective Search (SS) algorithm proposed by Uijlings,
van de Sande, Gevers, and Smeulders for creating region proposals for object
detection. As mentioned elsewhere here, that algorithm sits on top of the graph
based image segmentation algorithm that was proposed by Felzenszwalb and
  Huttenlocher. The 'graph' parameter required by the method presented here is
  supposed to consist of the pixel blobs produced by the Felzenszwalb and Huttenlocher
algorithm.
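The bottom-up grouping Selective Search performs can be sketched as greedy pairwise merging. This is a toy illustration only: real SS scores region pairs on color, texture, size, and fill similarity and recomputes the scores after every merge, whereas here the similarity function is supplied as a black box:

```python
def merge_hierarchy(regions, sim):
    """regions: list of frozensets of pixel-blob labels; sim(a, b)
    gives the similarity of two regions.  Greedily merge the most
    similar pair until one region remains; every region ever formed
    is kept as a proposal."""
    proposals = list(regions)
    active = list(regions)
    while len(active) > 1:
        a, b = max(((r1, r2) for i, r1 in enumerate(active)
                    for r2 in active[i + 1:]),
                   key=lambda pair: sim(*pair))
        merged = a | b
        active = [r for r in active if r not in (a, b)] + [merged]
        proposals.append(merged)
    return proposals
```

Starting from n seed regions, the hierarchy yields 2n - 1 proposals in total.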
- visualize_segmentation_in_pseudocolor(self, pixel_blobs, color_map, label='')
- Assigns a random color to each blob in the output of an image segmentation algorithm
- visualize_segmentation_with_mean_gray(self, pixel_blobs, label='')
- Assigns the mean color to each blob in the output of an image segmentation algorithm
- working_with_hsv_color_space(self, image_file, test=False)
- Shows color image conversion to HSV
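The per-pixel math behind RGB-to-HSV conversion is available in the standard library's colorsys module. A sketch of just the pixel-level operation (the method above presumably works on whole image files):

```python
import colorsys

def rgb_pixels_to_hsv(rgb_pixels):
    """Convert a list of (R, G, B) pixels with 0..255 channels to
    (H, S, V) triples with each component in 0..1."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in rgb_pixels]
```

Pure red maps to hue 0 with full saturation and value; pure green maps to hue 1/3.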
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
Data and other attributes defined here:
- canvas = None
- drawEnable = 0
- region_mark_coords = {}
- startX = 0
- startY = 0