Pretrained remote sensing models for the rest of us.
[Read The Docs] - [Quick Start] - [Website]
Moonshine is a Python package that makes it easier to train models on remote sensing data like satellite imagery. Using Moonshine's pretrained models, you can reduce the amount of labeled data and training compute required.
For more info and examples, read the docs.
Pretrained on multispectral data: Many existing packages are pretrained on ImageNet or similar RGB imagery. With Moonshine you can unlock the full power of satellites that capture many channels of multispectral data.
Pretrained on remote sensing data: Pretraining in the domain of your data is important, and most off-the-shelf pretrained models are trained on natural images such as ImageNet.
Focus on usability: While some academic remote sensing pretrained models are available, they are often difficult to use and lack support. Moonshine is designed to be easy to use and will offer community support via GitHub and Slack.
PyPI version:
pip install moonshine
Latest version from source:
pip install git+https://github.com/moonshinelabs-ai/moonshine
The Moonshine Python package offers a light wrapper around our pretrained PyTorch models. You can load the pretrained weights into your own model architecture and fine-tune them on your own data:
import torch.nn as nn

from moonshine.models.unet import UNet


class SegmentationModel(nn.Module):
    def __init__(self):
        super().__init__()

        # Create a blank model based on the available architectures.
        self.backbone = UNet(name="unet50_fmow_rgb")

        # If we are using pretrained weights, load them here. In
        # general, using the decoder weights isn't preferred unless
        # your downstream task is also a reconstruction task. We suggest
        # trying only the encoder first.
        self.backbone.load_weights(
            encoder_weights="unet50_fmow_rgb", decoder_weights=None
        )

        # Run a per-pixel classifier on top of the output vectors.
        self.classifier = nn.Conv2d(32, 2, (1, 1))

    def forward(self, x):
        x = self.backbone(x)
        return self.classifier(x)
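From here, fine-tuning follows a standard PyTorch training loop. The snippet below is a minimal sketch, assuming a hypothetical train_loader that yields batches of fMoW-style RGB images and per-pixel integer masks; it is not part of the Moonshine API.

import torch

model = SegmentationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, masks in train_loader:  # hypothetical DataLoader of (image, mask) batches
    optimizer.zero_grad()
    logits = model(images)         # (batch, 2, H, W) per-pixel class scores
    loss = loss_fn(logits, masks)  # masks: (batch, H, W) integer class labels
    loss.backward()
    optimizer.step()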
You can also configure data pre-processing to make sure your data is formatted the same way it was during model pretraining.
from moonshine.preprocessing import get_preprocessing_fn
preprocess_fn = get_preprocessing_fn(model="unet", dataset="fmow_rgb")
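As a rough sketch of how the returned function might be used (assuming it accepts an image array; see the docs for the exact input layout the models expect):

import numpy as np

# Hypothetical example image; the fmow_rgb pretrained models use 3-channel imagery.
image = np.random.rand(512, 512, 3).astype(np.float32)

# The preprocessing function formats the image to match the statistics and
# layout used during pretraining.
model_input = preprocess_fn(image)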
If you use Moonshine in your research, please cite:

@misc{Harada:2023,
  Author = {Nate Harada},
  Title = {Moonshine},
  Year = {2023},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/moonshinelabs-ai/moonshine}}
}
This project is licensed under the MIT License.