Sjostrand, Kealia – Maui Intern 2023

Kealia Sjostrand was born and raised in Kahului, Maui. She is currently attending the University of Oregon, majoring in Data Science with a concentration in Marketing Analytics and minoring in Entrepreneurship, Math, and Sports Business. After college, she hopes to move back to Hawaii and start her own business.

Home Island: Maui

High School: 

Institution when accepted: University of Oregon

Casting Neural Nets on Satellite Waters: Training and Testing a Computer Vision Model

Project Site: Privateer Space, Kihei, HI

Mentors: Joel Walsh & Shaantam Chawla

Project Abstract:

Privateer Space was founded with the goal of making space safer, more sustainable, and more accessible. The company was built on the belief that open access to space-related data will help achieve this goal. As a potential use case, this project involved training a convolutional neural network (loosely based on You Only Look Once, or YOLO, a model capable of detecting objects in real time) to identify and draw bounding boxes around any boats in satellite images. Nearly 200,000 images of the ocean were obtained from Kaggle's Airbus Ship Detection Challenge (2018) and used to train and validate the model. These images were converted to the format the model requires: for each image, an associated text file was created containing the normalized coordinates (x-center, y-center, box width, and box height) of any bounding boxes in the image. The image and label file pairs were kept in cloud storage so that the model could be trained on cloud compute clusters with specialized GPUs. During training, each image is divided into "patches," and the patch that contains the center of a boat predicts the size of its bounding box, which is then drawn on the image. A baseline model achieved a maximum precision of 0.8 and a maximum recall of 0.7 on the validation dataset of satellite images. Testing revealed that the model struggled to identify boats at docks and in residential areas. At this stage there was only one object class (boat), but addressing this domain adaptation issue required more classes and subclasses (such as types of boats). The problem was approached by training a Contrastive Language-Image Pre-training (CLIP) model, which encodes images and text so that matched pairs lie close to each other in a shared vector space. Using CLIP, similarities and patterns in the existing data can be compared to discover additional classes and subclasses. Continued training and testing will increase the model's capability, which could transform it into an environmental and/or actuarial tool, such as a method of enforcing fishing regulations, measuring commerce, or sensing boats remotely.
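
The label format described above can be illustrated with a minimal Python sketch that converts a pixel-space bounding box into a normalized YOLO-style label line. The helper name and the example box are hypothetical, and the sketch assumes boxes have already been extracted as pixel corner coordinates on 768 x 768 images, the size used in the Airbus Ship Detection Challenge.

```python
# Minimal sketch: convert a pixel-space bounding box to a YOLO-style label line.
# Assumes the box is given as (x_min, y_min, x_max, y_max) pixel corners and that
# the image is 768 x 768 pixels; both are illustrative assumptions.

def to_yolo_label(box, img_w=768, img_h=768, class_id=0):
    """Return 'class x_center y_center width height' with values normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a ship spanning pixels (100, 200) to (180, 260).
print(to_yolo_label((100, 200, 180, 260)))
# -> "0 0.182292 0.299479 0.104167 0.078125"
```

One such line is written per boat, and an image with no boats gets an empty text file, which is the convention YOLO-style training pipelines typically expect.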
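
The "patch that contains the center of a boat predicts its box" idea can also be made concrete with a short sketch. The grid size below is an arbitrary illustration, not a value taken from the project; the point is that a YOLO-style model assigns each box to the one grid cell whose bounds contain the box's normalized center.

```python
# Sketch of YOLO-style responsibility assignment: the grid cell ("patch") that
# contains a box's normalized center is the one that predicts that box.
# The 16 x 16 grid is an illustrative assumption.

def responsible_cell(x_center, y_center, grid_size=16):
    """Map a normalized box center in [0, 1] to the (row, col) of the owning grid cell."""
    col = min(int(x_center * grid_size), grid_size - 1)
    row = min(int(y_center * grid_size), grid_size - 1)
    return row, col

# A boat centered at (0.18, 0.30) in normalized coordinates lands in row 4, column 2.
print(responsible_cell(0.18, 0.30))  # -> (4, 2)
```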
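
The CLIP step can be sketched with the Hugging Face transformers implementation of OpenAI's CLIP. The checkpoint name, image path, and candidate subclass prompts below are assumptions for illustration rather than the project's actual configuration; the sketch shows how an image crop and a set of text descriptions are embedded into the same vector space and compared by similarity.

```python
# Illustrative sketch: score a satellite image crop against candidate boat subclasses
# with CLIP. Checkpoint, file path, and prompt wording are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_labels = [
    "a satellite photo of a cargo ship",
    "a satellite photo of a small fishing boat",
    "a satellite photo of a boat docked in a marina",
    "a satellite photo of open ocean with no boats",
]

image = Image.open("patch.png")  # hypothetical crop from a satellite image
inputs = processor(text=candidate_labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image and text are encoded into a shared vector space; logits_per_image holds the
# scaled cosine similarities between the image embedding and each text embedding.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for label, p in zip(candidate_labels, probs.tolist()):
    print(f"{p:.3f}  {label}")
```

Comparing or clustering such embeddings across many crops is one way similarities and patterns in the existing data could surface candidate subclasses, in the spirit of the approach described in the abstract.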