hammad93 / hurricane-map

A web application that provides a graphical user interface to display archived tropical storms and forecast models.
MIT License

Containerized Global live mosaic #15

Closed hammad93 closed 7 months ago

hammad93 commented 10 months ago

There are some publicly available geostationary satellites from various agencies around the world that can be combined into a global mosaic. The outputs from these satellites are timely, updated every 10, 30, or 60 minutes depending on the source. This issue tracks generating the mosaic; creating a tile server and further enhancements can be captured in another issue.

Other options like NASA Worldview, while free, don't provide output that is good, consistent, and timely. There are also options that meet those criteria but are not free.

hammad93 commented 10 months ago

[image]

Around this area, EUMETSAT has free geostationary satellite imagery available. There are 3 relevant channels for creating a visible output; a sketch of combining them follows the links below. Note that only 1 of these channels is actually in the human visible spectrum.

These links have access, download, and all other info:

Imagery links

High Rate SEVIRI VIS0.6 μm Image - MSG - 0 degree
High Rate SEVIRI IR10.8 μm Image - MSG - 0 degree
High Rate SEVIRI IR3.9 μm Image - MSG - 0 degree

Water Vapor Imagery

There is also a relevant water vapor channel that might help with visualization or serve as an input for meteorological algorithms.

High Rate SEVIRI WV6.2 μm Image - MSG - 0 degree
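
As a hedged sketch of how channels like these can be combined into an RGB-style picture, satpy ships built-in composites for SEVIRI. The .nat filename below is a placeholder for an extracted product like the ones downloaded later in this issue.

from satpy import Scene

# Placeholder filename for an extracted HRSEVIRI native product
scn = Scene(filenames=['MSG4-SEVI-MSG15-0100-NA-example.nat'],
            reader='seviri_l1b_native')
# 'natural_color' is one of satpy's built-in SEVIRI composites; load it and
# save a quick visible-style preview
scn.load(['natural_color'])
scn.save_dataset('natural_color', filename='msg_natural_color.png')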

hammad93 commented 10 months ago

[image]

Himawari 8 is timely and covers an area of the globe that the others don't. The following link is a visual from the operating agency, NICT: https://himawari8.nict.go.jp/

If we navigate to the menu and click download data, we arrive at another interface where we can download full-disk images, hopefully with all the channels. An English option is provided on the lower right of the screen. This link was generated: https://sc-nc-web.nict.go.jp/wsdb_osndisk/shareDirDownload/bDw2maKV?lang=en

This is a write-up on the available channels and other pertinent information: https://www.data.jma.go.jp/mscweb/en/himawari89/space_segment/spsg_ahi.html
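
For quick reference, a hedged summary of the first three AHI bands from that write-up (approximate values; verify against the link):

# Approximate properties of the first three AHI bands (see the JMA link above)
AHI_VISIBLE_BANDS = {
    1: {"name": "Blue",  "central_wavelength_um": 0.47, "resolution_km": 1.0},
    2: {"name": "Green", "central_wavelength_um": 0.51, "resolution_km": 1.0},
    3: {"name": "Red",   "central_wavelength_um": 0.64, "resolution_km": 0.5},
}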

hammad93 commented 10 months ago

[image]

This is still from EUMETSAT but covers more of the Indian Ocean. Links are provided as in the previous comment:

Imagery

High Rate SEVIRI VIS0.6 μm Image - MSG - Indian Ocean
High Rate SEVIRI IR10.8 μm Image - MSG - Indian Ocean
High Rate SEVIRI IR3.9 μm Image - MSG - Indian Ocean

Water Vapor

High Rate SEVIRI WV6.2 μm Image - MSG - Indian Ocean

hammad93 commented 10 months ago

GOES East

[image]

Download: https://www.star.nesdis.noaa.gov/GOES/fulldisk.php?sat=G16

GOES West

[image]

Download: https://www.star.nesdis.noaa.gov/GOES/fulldisk.php?sat=G18
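
These pages link to static latest-image files on the NESDIS CDN; the Config class in the code below uses those URLs directly. For example, fetching the most recent GOES East full disk:

from urllib.request import urlretrieve

# Static latest-image URL on the NESDIS CDN (same URL as in the Config class below)
urlretrieve(
    "https://cdn.star.nesdis.noaa.gov/GOES16/ABI/FD/GEOCOLOR/GOES16-ABI-FD-GEOCOLOR-10848x10848.tif",
    "goes_east_latest.tif")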

hammad93 commented 9 months ago

While we still need to update the issue with import code and installation steps, the main technical hurdle now is to containerize it and output to NetCDF.

hammad93 commented 8 months ago

We will create a notebook that will do all the processing and return the result. The container will run this notebook and exit.
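
As a minimal sketch of that run-once container (base image, notebook filename, and dependency list are assumptions, not the final setup):

# Sketch only: assumes the notebook is exported as generate_netcdf.ipynb and
# that the OSGeo GDAL image provides libgdal plus the Python bindings
FROM ghcr.io/osgeo/gdal:ubuntu-small-latest
RUN apt-get update && apt-get install -y unzip python3-pip && rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir jupyter eumdac rasterio satpy netCDF4==1.6.0
WORKDIR /app
COPY generate_netcdf.ipynb .
# Execute the notebook once and exit; outputs are left in /app
CMD ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", "generate_netcdf.ipynb"]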

hammad93 commented 8 months ago

Although the notebook can do the processing, each source can be part of a class. Required methods for all sources include getRecentData and toNetCDF, as defined in the abstract base class below.

hammad93 commented 8 months ago
# -*- coding: utf-8 -*-
"""generate netcdf file with updated live storms.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1dKQmOltvqANI6f_Rh7NcG7kN4qjQFFWZ

# Global Image
Consisting of the following satellites,
- GOES East
- GOES West
- MSG 0 Degree
- MSG Indian Ocean
- Himawari 8

Downloads must be from a primary source.

# References
https://github.com/hammad93/hurricane-map/issues/15

##Setup
"""

!pip install eumdac

!pip install libnetcdf

!pip install rasterio

!pip install satpy
!pip install netCDF4==1.6.0

import os
import eumdac
import datetime
import shutil
import requests
import dateutil
import subprocess
import satpy
from satpy import Scene  # used by the MSG toNetCDF methods below
import glob
from osgeo import gdal
from PIL import Image
from urllib.request import urlretrieve
from concurrent.futures import ThreadPoolExecutor, as_completed

from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def getRecentData(self):
        """Fetch the most recent data from the source."""
        pass

    @abstractmethod
    def toNetCDF(self, existing_netcdf_path=None):
        """
        Convert the data into NetCDF format.

        Parameters:
        - existing_netcdf_path (str): Path to an existing NetCDF file to which the new data will be added. If None, create a new NetCDF file.
        """
        pass

from google.colab import userdata

class Config:
    GOES_EAST_STATIC_URL = "https://cdn.star.nesdis.noaa.gov/GOES16/ABI/FD/GEOCOLOR/GOES16-ABI-FD-GEOCOLOR-10848x10848.tif"
    GOES_WEST_STATIC_URL = "https://cdn.star.nesdis.noaa.gov/GOES18/ABI/FD/GEOCOLOR/GOES18-ABI-FD-GEOCOLOR-10848x10848.tif"
    h8_link = "https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_={time}"
    eumetsat_consumer_key = userdata.get('eumetsat_pass')
    eumetsat_consumer_secret = userdata.get('eumetsat_secret')
    output_dir = '/content/'

"""## GOES East"""

class GOESEastDataSource(DataSource):
    def __init__(self):
        self.name = "GOES 16 East"
        self.id = "GOES-16"
        self.data_url = Config.GOES_EAST_STATIC_URL

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from GOES East and save it with an optional prefix.

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        filename = f"{file_prefix}[{self.id}]_{os.path.basename(self.data_url)}"
        save_path = os.path.join(Config.output_dir, filename)
        self.recent_path = save_path

        try:
            urlretrieve(self.data_url, save_path)
            print(f"File saved as {save_path}")
            return save_path  # Return the path of the saved file for further processing
        except Exception as e:
            print(f"Failed to download the file: {e}")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        ds = gdal.Translate(self.recent_path[:-len("tif")] + "nc", self.recent_path, format='NetCDF')

"""##GOES West"""

class GOESWestDataSource(DataSource):
    def __init__(self):
        self.name = "GOES 18 West"
        self.id = "GOES-18"
        # Updated to point to the GOES-18 GEOCOLOR image
        self.data_url = Config.GOES_WEST_STATIC_URL

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from GOES West (GOES-18) and save it with an optional prefix.

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        filename = f"{file_prefix}_{os.path.basename(self.data_url)}"
        save_path = os.path.join(Config.output_dir, filename)

        try:
            urlretrieve(self.data_url, save_path)
            print(f"File saved as {save_path}")
            self.recent_path = save_path  # toNetCDF below reads this attribute
            return save_path  # Return the path of the saved file for further processing
        except Exception as e:
            print(f"Failed to download the file: {e}")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        ds = gdal.Translate(self.recent_path[:-len("tif")] + "nc", self.recent_path, format='NetCDF')

"""##MSG 0 Degree

"""

class MSG0DegreeDataSource(DataSource):
    def __init__(self):
        self.consumer_key = Config.eumetsat_consumer_key
        self.consumer_secret = Config.eumetsat_consumer_secret

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from EUMETSAT

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        credentials = (self.consumer_key, self.consumer_secret)
        token = eumdac.AccessToken(credentials)

        print(f"This token '{token}' expires {token.expiration}")

        collection = 'EO:EUM:DAT:MSG:HRSEVIRI'
        datastore = eumdac.DataStore(token)
        selected_collection = datastore.get_collection(collection)
        print(selected_collection.title)

        display(selected_collection.search_options)

        product = selected_collection.search().first()
        print(product)

        try:
            with product.open() as fsrc :
                self.recent_path = f'{Config.output_dir}/{file_prefix}_{fsrc.name}'
                with open(self.recent_path, mode='wb') as fdst:
                  shutil.copyfileobj(fsrc, fdst)
                  print(f'Download of product {product} finished.')

                  # extract zip
                  print('Extracting . . .')
                  subprocess.run(["unzip", self.recent_path, "-d", self.recent_path[:-len('.zip')]])
                  self.recent_path = f"{self.recent_path[:-len('.zip')]}/{self.recent_path.split('_')[-1][:-len('zip')]}nat"  # strip only 'zip'; the trailing dot is kept, avoiding a '..nat' name
                  print('Recent file changed to ' + self.recent_path)

        except eumdac.product.ProductError as error:
            print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
        except requests.exceptions.ConnectionError as error:
            print(f"Error related to the connection: '{error.msg}'")
        except requests.exceptions.RequestException as error:
            print(f"Unexpected error: {error}")

    def toNetCDF(self, existing_netcdf_path=None):
        # read in the .nat
        scn = Scene(
            filenames=[self.recent_path],
            reader='seviri_l1b_native')
        # output to NetCDF
        scn.save_datasets(
            writer="cf",
            groups={
                'default': filter(lambda x: x!='HRV', scn.available_dataset_names()),
                'hrv': ['HRV']})
        pass

"""##MSG Indian Ocean"""

class MSGIndianOceanDataSource(DataSource):
    def __init__(self):
        self.consumer_key = Config.eumetsat_consumer_key
        self.consumer_secret = Config.eumetsat_consumer_secret
    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from EUMETSAT

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        credentials = (self.consumer_key, self.consumer_secret)
        token = eumdac.AccessToken(credentials)

        print(f"This token '{token}' expires {token.expiration}")

        collection = 'EO:EUM:DAT:MSG:HRSEVIRI-IODC'
        datastore = eumdac.DataStore(token)
        selected_collection = datastore.get_collection(collection)
        print(selected_collection.title)

        display(selected_collection.search_options)

        product = selected_collection.search().first()
        print(product)

        try:
            with product.open() as fsrc :
                self.recent_path = f'{Config.output_dir}/{file_prefix}_{fsrc.name}'
                with open(self.recent_path, mode='wb') as fdst:
                  shutil.copyfileobj(fsrc, fdst)
                  print(f'Download of product {product} finished.')

                  # extract zip
                  print('Extracting . . .')
                  subprocess.run(["unzip", self.recent_path, "-d", self.recent_path[:-len('.zip')]])
                  self.recent_path = f"{self.recent_path[:-len('.zip')]}/{self.recent_path.split('_')[-1][:-len('zip')]}nat"  # strip only 'zip'; the trailing dot is kept, avoiding a '..nat' name
                  print('Recent file changed to ' + self.recent_path)

        except eumdac.product.ProductError as error:
            print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
        except requests.exceptions.ConnectionError as error:
            print(f"Error related to the connection: '{error.msg}'")
        except requests.exceptions.RequestException as error:
            print(f"Unexpected error: {error}")

    def toNetCDF(self, existing_netcdf_path=None):
        # read in the .nat
        scn = Scene(
            filenames=[self.recent_path],
            reader='seviri_l1b_native')
        # output to NetCDF
        scn.save_datasets(
            writer="cf",
            groups={
                'default': filter(lambda x: x!='HRV', scn.available_dataset_names()),
                'hrv': ['HRV']})
        pass

"""##Himawari 8"""

class Himawari8DataSource(DataSource):
    def __init__(self):
        self.link = Config.h8_link
        self.name = "Himawari 8"
        self.id = "H8"

    def jpn_get_latest_metadata(self):
      # cache-busting query parameter: current unix time in milliseconds
      time_ms = int(datetime.datetime.now().timestamp() * 1000)
      return requests.get(self.link.format(time=time_ms)).json()

    def jpn_create_urls(self, band):
      dimension = 10
      metadata = self.jpn_get_latest_metadata()
      parsed_time = dateutil.parser.parse(metadata['date'])
      template_url = f"https://himawari8.nict.go.jp/img/FULL_24h/B{band:02}/10d/550/{parsed_time.year}/{parsed_time.strftime('%m')}/{parsed_time.day}/{parsed_time.strftime('%H%M%S')}" #_3_3.png
      result_urls = []
      for i in range(dimension) :
        for j in range(dimension) :
          result_urls.append(template_url + f"_{i}_{j}.png")
      return result_urls

    def download_image(self, file_prefix, band_index, url):
        """
        Download a single image and save it.
        """
        try:
            filename = f"{file_prefix}_{band_index}_{url.split('/')[-1]}"
            print(f"Himawari 8 downloading {filename}")
            save_path = os.path.join(Config.output_dir, filename)
            urlretrieve(url, save_path)
            return save_path
        except Exception as e:
            print(f"Failed to download {filename}: {e}")
            return None

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from Himawari 8 and save it with an optional prefix.
        """
        download_paths = []  # To store paths of successfully downloaded files
        with ThreadPoolExecutor() as executor:
            # Create a list to hold futures
            futures = []
            for band_index in [1, 2, 3]:
                for url in self.jpn_create_urls(band_index):
                    # Schedule the download_image method to be executed and store the future
                    futures.append(executor.submit(self.download_image, file_prefix, band_index, url))

            # as_completed will yield futures as they complete
            for future in as_completed(futures):
                path = future.result()  # Get the result from the future
                if path:
                    download_paths.append(path)

        if download_paths:
            print(f"Files saved: {download_paths}")
            self.file_prefix = f"{file_prefix}[{self.id}]"
            self.download_paths = download_paths  # toNetCDF below reads this attribute
            return download_paths  # Return the paths of the saved files for further processing
        else:
            print("Failed to download any files.")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        # Define the size of each tile and the number of tiles in each dimension
        tile_size = 550
        grid_size = 10

        # Create a blank image for the final combined image
        combined_image = Image.new('LA', (tile_size * grid_size, tile_size * grid_size))

        # Function to parse coordinates from filename
        def get_coordinates(filename):
            parts = filename.split('_')
            x = int(parts[-2])
            y = int(parts[-1].split('.')[0])
            return x, y

        # Loop through the downloaded images and place them in the correct position
        # (iterate the paths recorded by getRecentData; a bare download_dir was
        # undefined here; note this early version pastes all bands into one image)
        for img_path in self.download_paths:
            filename = os.path.basename(img_path)
            if filename.endswith(".png"):
                x, y = get_coordinates(filename)
                img = Image.open(img_path)

                # Ensure the image is in grayscale mode 'LA'
                print(img_path)

                combined_image.paste(img, (x * tile_size, y * tile_size))

        # Save the combined image
        self.combined_path = f"{self.file_prefix}combined_image_greyscale.png"
        combined_image.save(self.combined_path)

        #gdal_translate -a_srs "+proj=geos +h=35785863 +a=6378137.0 +b=6356752.3 +lon_0=140.7 +no_defs" -a_ullr -5500000 5500000 5500000 -5500000 PI_H08_20150125_0230_TRC_FLDK_R10_PGPFD.png temp.tif
        self.recent_GeoTiff = self.combined_path[:-len('.png')]+'.tif'
        gdal.Translate(outputSRS="+proj=geos +h=35785863 +a=6378137.0 +b=6356752.3 +lon_0=140.7 +no_defs",
                       outputBounds=[-5500000, 5500000, 5500000, -5500000],
                       srcDS=self.combined_path,
                       destName=self.recent_GeoTiff
                       )
        #gdalwarp -overwrite -t_srs "+proj=latlong +ellps=WGS84 +pm=140.7" -wo SOURCE_EXTRA=100 temp.tif Himawari8.tif
        gdal.Warp(destNameOrDestDS=self.combined_path[:-len('.png')]+'[preprocessed].tif',
                  srcDSOrSrcDSTab=self.recent_GeoTiff,
                  dstSRS="+proj=latlong +ellps=WGS84 +pm=140.7",
                  warpOptions=['SOURCE_EXTRA=100'])  # GDAL expects a list of KEY=VALUE strings here
        self.recent_GeoTiff = self.combined_path[:-len('.png')]+'[preprocessed].tif'

import time
time.time()

"""##Download"""

# Your satellite sources list
satellites = [
    GOESEastDataSource(),
    GOESWestDataSource(),
    MSG0DegreeDataSource(),
    MSGIndianOceanDataSource(),
    Himawari8DataSource()
]

def fetch_data_for_satellite(satellite, file_prefix):
    """
    Wrapper function to call getRecentData on a satellite object.
    """
    satellite.getRecentData(file_prefix=file_prefix)

def main():
    prefix = f"[{int(time.time())}]"
    # Use ThreadPoolExecutor to run getRecentData concurrently for each satellite
    with ThreadPoolExecutor(max_workers=len(satellites)) as executor:
        # Schedule the executions
        futures = [executor.submit(fetch_data_for_satellite, satellite, prefix) for satellite in satellites]

        # Optionally, wait for all futures to complete and handle results/errors
        for future in futures:
            try:
                # Result handling here if needed
                future.result()  # This will raise exceptions if any occurred
            except Exception as e:
                print(f"An error occurred: {e}")
main()

"""# Appendix"""

from satpy import Scene
scn = Scene(filenames=['/content/_MSG2-SEVI-MSG15-0100-NA-20240223224240.078000000Z-NA/MSG2-SEVI-MSG15-0100-NA-20240223224240.078000000Z-NA.nat'], reader='seviri_l1b_native')

output = scn.load(scn.available_dataset_names(), upper_right_corner='NE')

scn.available_dataset_names()

scn.save_datasets(writer="cf", groups={'default': filter(lambda x: x!='HRV', scn.available_dataset_names()), 'hrv': ['HRV']})

!pip show libnetcdf

test = MSGIndianOceanDataSource()
test.getRecentData()
test.toNetCDF()

from osgeo import gdal
#src = rasterio.open('GOES18-ABI-FD-GEOCOLOR-10848x10848.tif')
#ds = gdal.Translate('GOES18-ABI-FD-GEOCOLOR-10848x10848.nc', 'GOES18-ABI-FD-GEOCOLOR-10848x10848.tif', format='NetCDF')
ds = gdal.Translate('GOES18-ABI-FD-GEOCOLOR-10848x10848.nc', 'merged.tif', format='NetCDF')

import rasterio
from rasterio.merge import merge
from rasterio.plot import show
src1 = rasterio.open('GOES18-ABI-FD-GEOCOLOR-10848x10848.tif')
src2 = rasterio.open('GOES16-ABI-FD-GEOCOLOR-10848x10848.tif')
# Read the data and metadata
data1 = src1.read()
data2 = src2.read()
metadata1 = src1.meta
metadata2 = src2.meta

# Merge the two GeoTIFFs
merged, out_transform = merge([src1, src2])

# Update the metadata with the new dimensions and transform
metadata1.update({
    'height': merged.shape[1],
    'width': merged.shape[2],
    'transform': out_transform
})

# Write the merged GeoTIFF to a new file
with rasterio.open('merged.tif', 'w', **metadata1) as dst:
    dst.write(merged)

# Optionally, you can visualize the merged GeoTIFF
show(merged)

test = src.read()

for i, dtype, nodataval in zip(src.indexes, src.dtypes, src.nodatavals):
    print(i, dtype, nodataval)

from matplotlib import pyplot

pyplot.imshow(test[0], cmap='pink')
pyplot.show()

src.profile

from osgeo import gdal  # modern GDAL only exposes the module under osgeo

## this is for meteosat

!pip install eumdac

# https://github.com/hammad93/hurricane-map/issues/15
# https://eumetsatspace.atlassian.net/wiki/spaces/EUMDAC/pages/1760198661/Python+Library
import eumdac
import datetime
import shutil

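# NOTE: consumer_key and consumer_secret are assumed to be defined earlier in
# this scratch session with the user's EUMETSAT API credentials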
credentials = (consumer_key, consumer_secret)
token = eumdac.AccessToken(credentials)

print(f"This token '{token}' expires {token.expiration}")

datastore = eumdac.DataStore(token)
selected_collection = datastore.get_collection('EO:EUM:DAT:MSG:HRSEVIRI-IODC')
try:
    print(selected_collection.title)
    display(selected_collection.search_options)
except eumdac.datastore.DataStoreError as error:
    print(f"Error related to the data store: '{error.msg}'")
except eumdac.collection.CollectionError as error:
    print(f"Error related to the collection: '{error.msg}'")
except requests.exceptions.RequestException as error:
    print(f"Unexpected error: {error}")

# here we try to get the most recent one for the 0.6 um image
# https://gitlab.eumetsat.int/eumetlab/data-services/eumdac_data_store/-/blob/master/3_Downloading_products.ipynb?ref_type=heads
# https://gitlab.eumetsat.int/eumetlab/data-services/eumdac_data_store/-/blob/master/2_Searching_and_filtering_products.ipynb
collection = 'EO:EUM:DAT:MSG:HRSEVIRI'
datastore = eumdac.DataStore(token)
selected_collection = datastore.get_collection(collection)
print(selected_collection.title)

display(selected_collection.search_options)

product = selected_collection.search().first()
print(product)

try:
    with product.open() as fsrc, \
            open(fsrc.name, mode='wb') as fdst:
        shutil.copyfileobj(fsrc, fdst)
        print(f'Download of product {product} finished.')
except eumdac.product.ProductError as error:
    print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
except requests.exceptions.ConnectionError as error:
    print(f"Error related to the connection: '{error.msg}'")
except requests.exceptions.RequestException as error:
    print(f"Unexpected error: {error}")

# so basically, we need to parse the entire image sent and there are python
# packages that do this
# https://satpy.readthedocs.io/en/latest/api/satpy.readers.seviri_l1b_hrit.html

"""1. get the response from this https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_=1705024461116 with the right unix time in milliseconds query parameter
2. download images based on result and url template for highest resolution
  - https://himawari8.nict.go.jp/img/FULL_24h/B01/10d/550/2024/01/12/014000_3_3.png
    - the B01 is the first channel (blue), B02 is the second (green), and B03 is the visible red
    - 10d is the zoom level, indicates tiling length and width
    - the 550 is the pixels, unknown if configurable
    - 2024/01/12/014000 is the timestamp from step 1
    - _3_3 is the position in the tiles
  - we only need the first 3 bands (blue, green, red)
3. stitch together the images for the mosaic
  - using python, we can stitch the images together to create one image
4. project to geotiff
  - Reference this https://gis.stackexchange.com/questions/188500/georeferencing-himawari-8-in-gdal-or-other
  - each band has its own geotiff and we can have each of these in a netcdf
"""

import datetime
import requests
import dateutil

def jpn_get_latest_metadata():
  # cache-busting query parameter: current unix time in milliseconds
  time_ms = int(datetime.datetime.now().timestamp() * 1000)
  latest_data_url = f"https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_={time_ms}"
  return requests.get(latest_data_url).json()

def jpn_create_urls(band):
  dimension = 10
  metadata = jpn_get_latest_metadata()
  parsed_time = dateutil.parser.parse(metadata['date'])
  template_url = f"https://himawari8.nict.go.jp/img/FULL_24h/B{band:02}/10d/550/{parsed_time.year}/{parsed_time.strftime('%m')}/{parsed_time.day}/{parsed_time.strftime('%H%M%S')}" #_3_3.png
  result_urls = []
  for i in range(dimension) :
    for j in range(dimension) :
      result_urls.append(template_url + f"_{i}_{j}.png")
  return result_urls

jpn_create_urls(band = 1)

!pip install requests pillow

import requests
from PIL import Image
import os

# Define your list of URLs here
urls = [
    # "http://example.com/image_0_0.png",
    # "http://example.com/image_0_1.png",
    # ... add all other URLs here
]
urls = jpn_create_urls(band = 1)

# Directory to save downloaded images (temporary)
download_dir = "/content/downloaded_images"
os.makedirs(download_dir, exist_ok=True)

# Function to download an image
def download_image(url):
    filename = url.split("/")[-1]
    filepath = os.path.join(download_dir, filename)
    response = requests.get(url)
    if response.status_code == 200:
        with open(filepath, 'wb') as f:
            f.write(response.content)
    return filepath

# Download all images
image_files = [download_image(url) for url in urls]

import os
from PIL import Image

# Directory containing the downloaded images
download_dir = "/content/downloaded_images"

# Define the size of each tile and the number of tiles in each dimension
tile_size = 550
grid_size = 10

# Create a blank image for the final combined image in grayscale+alpha mode 'LA'
combined_image = Image.new('LA', (tile_size * grid_size, tile_size * grid_size))

# Function to parse coordinates from filename
def get_coordinates(filename):
    parts = filename.split('_')
    x = int(parts[-2])
    y = int(parts[-1].split('.')[0])
    return x, y

# Loop through the downloaded images and place them in the correct position
for filename in os.listdir(download_dir):
    if filename.endswith(".png"):
        x, y = get_coordinates(filename)
        img_path = os.path.join(download_dir, filename)
        img = Image.open(img_path)

        # Ensure the image is in grayscale mode 'LA'
        print(img_path)

        combined_image.paste(img, (x * tile_size, y * tile_size))

# Save the combined image
combined_image.save("combined_image_greyscale.png")

from PIL import Image
import numpy as np

def image_statistics(image_path):
    # Open the image
    img = Image.open(image_path)

    # Convert the image to a numpy array
    data = np.array(img)

    # Separate the RGB channels
    red, green, blue = data[:,:,0], data[:,:,1], data[:,:,2]

    # Calculate statistics for each channel
    def calculate_statistics(channel):
        min_val = np.min(channel)
        max_val = np.max(channel)
        mean_val = np.mean(channel)
        std_dev = np.std(channel)
        return min_val, max_val, mean_val, std_dev

    red_stats = calculate_statistics(red)
    green_stats = calculate_statistics(green)
    blue_stats = calculate_statistics(blue)

    # Print the statistics
    print("Red Channel Statistics:   Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*red_stats))
    print("Green Channel Statistics: Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*green_stats))
    print("Blue Channel Statistics:  Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*blue_stats))

# Replace 'path_to_image.png' with the path to your image file
image_statistics('combined_image.png')

!rm -rf /content/downloaded_images/
hammad93 commented 7 months ago

All satellite outputs will be in NetCDF, and we will utilize GeoServer to create a mosaic of NetCDF files.

https://github.com/hammad93/hurricane-geoserver
https://docs.geoserver.org/stable/en/user/extensions/netcdf/netcdf.html#mosaic-of-netcdf-files
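
As a hedged sketch of registering that mosaic (host, credentials, workspace, and paths are placeholders; the GeoServer NetCDF extension and a prepared ImageMosaic index directory are assumed):

import requests

GEOSERVER = "http://localhost:8080/geoserver/rest"  # placeholder host
AUTH = ("admin", "geoserver")  # default credentials, change in production

# Register an ImageMosaic coverage store pointing at a directory of NetCDF
# granules; the 'hurricane' workspace must already exist
payload = {
    "coverageStore": {
        "name": "global_mosaic",
        "type": "ImageMosaic",
        "enabled": True,
        "workspace": "hurricane",
        "url": "file:///data/netcdf_mosaic",
    }
}
resp = requests.post(f"{GEOSERVER}/workspaces/hurricane/coveragestores",
                     json=payload, auth=AUTH)
print(resp.status_code, resp.text)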

hammad93 commented 7 months ago

This issue can be closed; here is the solution, which produces a NetCDF file. The container was tested on the other repository, so it meets all requirements. Perhaps history was made.

# -*- coding: utf-8 -*-
"""generate netcdf file with updated live storms.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1dKQmOltvqANI6f_Rh7NcG7kN4qjQFFWZ

# Global Image
Consisting of the following satellites,
- GOES East
- GOES West
- MSG 0 Degree
- MSG Indian Ocean
- Himawari 8

Downloads must be from a primary source.

# References
https://github.com/hammad93/hurricane-map/issues/15

##Setup
"""

!pip install eumdac

!pip install libnetcdf

!pip install rasterio

!pip install satpy
!pip install netCDF4==1.6.0

import os
import eumdac
import datetime
import shutil
import requests
import dateutil
import subprocess
import satpy
from satpy import Scene
from glob import glob
from osgeo import gdal
from PIL import Image
from urllib.request import urlretrieve
from concurrent.futures import ThreadPoolExecutor, as_completed

from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def getRecentData(self):
        """Fetch the most recent data from the source."""
        pass

    @abstractmethod
    def toNetCDF(self, existing_netcdf_path=None):
        """
        Convert the data into NetCDF format.

        Parameters:
        - existing_netcdf_path (str): Path to an existing NetCDF file to which the new data will be added. If None, create a new NetCDF file.
        """
        pass

from google.colab import userdata

class Config:
    GOES_EAST_STATIC_URL = "https://cdn.star.nesdis.noaa.gov/GOES16/ABI/FD/GEOCOLOR/GOES16-ABI-FD-GEOCOLOR-10848x10848.tif"
    GOES_WEST_STATIC_URL = "https://cdn.star.nesdis.noaa.gov/GOES18/ABI/FD/GEOCOLOR/GOES18-ABI-FD-GEOCOLOR-10848x10848.tif"
    h8_link = "https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_={time}"
    eumetsat_consumer_key = userdata.get('eumetsat_pass')
    eumetsat_consumer_secret = userdata.get('eumetsat_secret')
    output_dir = '/content'

"""## GOES East"""

class GOESEastDataSource(DataSource):
    def __init__(self):
        self.name = "GOES 16 East"
        self.id = "GOES-16"
        self.data_url = Config.GOES_EAST_STATIC_URL

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from GOES East and save it with an optional prefix.

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        filename = f"{file_prefix}[{self.id}]_{os.path.basename(self.data_url)}"
        save_path = os.path.join(Config.output_dir, filename)
        self.recent_path = save_path

        try:
            urlretrieve(self.data_url, save_path)
            print(f"File saved as {save_path}")
            self.recent_path = save_path
            return save_path  # Return the path of the saved file for further processing
        except Exception as e:
            print(f"Failed to download the file: {e}")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        self.recent_netcdf_path = self.recent_path[:-len("tif")] + "nc"
        ds = gdal.Translate(self.recent_netcdf_path, self.recent_path, format='NetCDF')
        print(f"Transformed {self.recent_path} to {self.recent_netcdf_path}")

"""##GOES West"""

class GOESWestDataSource(DataSource):
    def __init__(self):
        self.name = "GOES 18 West"
        self.id = "GOES-18"
        # Updated to point to the GOES-18 GEOCOLOR image
        self.data_url = Config.GOES_WEST_STATIC_URL

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from GOES West (GOES-18) and save it with an optional prefix.

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        filename = f"{file_prefix}_{os.path.basename(self.data_url)}"
        save_path = os.path.join(Config.output_dir, filename)

        try:
            urlretrieve(self.data_url, save_path)
            print(f"File saved as {save_path}")
            self.recent_path = save_path
            return save_path  # Return the path of the saved file for further processing
        except Exception as e:
            print(f"Failed to download the file: {e}")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        self.recent_netcdf_path = self.recent_path[:-len("tif")] + "nc"
        ds = gdal.Translate(self.recent_netcdf_path, self.recent_path, format='NetCDF')
        print(f"Transformed {self.recent_path} to {self.recent_netcdf_path}")

"""##MSG 0 Degree

"""

class MSG0DegreeDataSource(DataSource):
    def __init__(self):
        self.consumer_key = Config.eumetsat_consumer_key
        self.consumer_secret = Config.eumetsat_consumer_secret

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from EUMETSAT

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        credentials = (self.consumer_key, self.consumer_secret)
        token = eumdac.AccessToken(credentials)

        print(f"This token '{token}' expires {token.expiration}")

        collection = 'EO:EUM:DAT:MSG:HRSEVIRI'
        datastore = eumdac.DataStore(token)
        selected_collection = datastore.get_collection(collection)
        print(selected_collection.title)

        display(selected_collection.search_options)

        product = selected_collection.search().first()
        print(product)

        try:
            with product.open() as fsrc :
                self.recent_path = f'{Config.output_dir}/{file_prefix}_{fsrc.name}'
                with open(self.recent_path, mode='wb') as fdst:
                  shutil.copyfileobj(fsrc, fdst)
                  print(f'Download of product {product} finished.')

                  # extract zip
                  print('Extracting . . .')
                  subprocess.run(["unzip", self.recent_path, "-d", self.recent_path[:-len('.zip')]])
                  self.recent_path = f"{self.recent_path[:-len('.zip')]}/{self.recent_path.split('_')[-1][:-len('zip')]}nat"
                  print('Recent file changed to ' + self.recent_path)

        except eumdac.product.ProductError as error:
            print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
        except requests.exceptions.ConnectionError as error:
            print(f"Error related to the connection: '{error.msg}'")
        except requests.exceptions.RequestException as error:
            print(f"Unexpected error: {error}")

    def toNetCDF(self, existing_netcdf_path=None):
        '''
        References
        ----------
        https://satpy.readthedocs.io/en/stable/api/satpy.scene.html
        '''
        # read in the .nat
        scn = Scene(
            filenames=[self.recent_path],
            reader='seviri_l1b_native')
        # output to NetCDF
        output = scn.load(scn.available_dataset_names(), upper_right_corner='NE')
        scn.save_datasets(
            writer="cf",
            groups={
                'default': filter(lambda x: x!='HRV', scn.available_dataset_names()),
                'hrv': ['HRV']})
        print(f"Transformed {self.recent_path} into a NetCDF.")
        pass

"""##MSG Indian Ocean"""

class MSGIndianOceanDataSource(DataSource):
    def __init__(self):
        self.consumer_key = Config.eumetsat_consumer_key
        self.consumer_secret = Config.eumetsat_consumer_secret
    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from EUMETSAT

        :param file_prefix: Optional. A string to prepend to the filename on save.
        """
        credentials = (self.consumer_key, self.consumer_secret)
        token = eumdac.AccessToken(credentials)

        print(f"This token '{token}' expires {token.expiration}")

        collection = 'EO:EUM:DAT:MSG:HRSEVIRI-IODC'
        datastore = eumdac.DataStore(token)
        selected_collection = datastore.get_collection(collection)
        print(selected_collection.title)

        display(selected_collection.search_options)

        product = selected_collection.search().first()
        print(product)

        try:
            with product.open() as fsrc :
                self.recent_path = f'{Config.output_dir}/{file_prefix}_{fsrc.name}'
                with open(self.recent_path, mode='wb') as fdst:
                  shutil.copyfileobj(fsrc, fdst)
                  print(f'Download of product {product} finished.')

                  # extract zip
                  print('Extracting . . .')
                  subprocess.run(["unzip", self.recent_path, "-d", self.recent_path[:-len('.zip')]])
                  self.recent_path = f"{self.recent_path[:-len('.zip')]}/{self.recent_path.split('_')[-1][:-len('zip')]}nat"
                  print('Recent file changed to ' + self.recent_path)

        except eumdac.product.ProductError as error:
            print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
        except requests.exceptions.ConnectionError as error:
            print(f"Error related to the connection: '{error.msg}'")
        except requests.exceptions.RequestException as error:
            print(f"Unexpected error: {error}")

    def toNetCDF(self, existing_netcdf_path=None):
        '''
        References
        ----------
        https://satpy.readthedocs.io/en/stable/api/satpy.scene.html
        '''
        # read in the .nat
        scn = Scene(
            filenames=[self.recent_path],
            reader='seviri_l1b_native')
        # output to NetCDF
        output = scn.load(scn.available_dataset_names(), upper_right_corner='NE')
        scn.save_datasets(
            writer="cf",
            groups={
                'default': filter(lambda x: x!='HRV', scn.available_dataset_names()),
                'hrv': ['HRV']})
        print(f"Transformed {self.recent_path} into a NetCDF.")
        pass

"""##Himawari 8"""

class Himawari8DataSource(DataSource):
    def __init__(self):
        self.link = Config.h8_link
        self.name = "Himawari 8"
        self.id = "H8"

    def jpn_get_latest_metadata(self):
      # cache-busting query parameter: current unix time in milliseconds
      time_ms = int(datetime.datetime.now().timestamp() * 1000)
      return requests.get(self.link.format(time=time_ms)).json()

    def jpn_create_urls(self, band):
      dimension = 10
      metadata = self.jpn_get_latest_metadata()
      parsed_time = dateutil.parser.parse(metadata['date'])
      template_url = f"https://himawari8.nict.go.jp/img/FULL_24h/B{band:02}/10d/550/{parsed_time.year}/{parsed_time.strftime('%m')}/{parsed_time.day}/{parsed_time.strftime('%H%M%S')}" #_3_3.png
      result_urls = []
      for i in range(dimension) :
        for j in range(dimension) :
          result_urls.append(template_url + f"_{i}_{j}.png")
      return result_urls

    def download_image(self, file_prefix, band_index, url):
        """
        Download a single image and save it.
        """
        try:
            filename = f"{file_prefix}_{band_index}_{url.split('/')[-1]}"
            print(f"Himawari 8 downloading {filename}")
            save_path = os.path.join(Config.output_dir, filename)
            urlretrieve(url, save_path)
            return save_path
        except Exception as e:
            print(f"Failed to download {filename}: {e}")
            return None

    def getRecentData(self, file_prefix=''):
        """
        Fetch the most recent data from Himawari 8 and save it with an optional prefix.
        """
        download_paths = []  # To store paths of successfully downloaded files
        with ThreadPoolExecutor() as executor:
            # Create a list to hold futures
            futures = []
            for band_index in [1, 2, 3]:
                for url in self.jpn_create_urls(band_index):
                    # Schedule the download_image method to be executed and store the future
                    futures.append(executor.submit(self.download_image, file_prefix, band_index, url))

            # as_completed will yield futures as they complete
            for future in as_completed(futures):
                path = future.result()  # Get the result from the future
                if path:
                    download_paths.append(path)

        if download_paths:
            print(f"Files saved: {download_paths}")
            self.file_prefix = f"{file_prefix}[{self.id}]"
            self.download_paths = download_paths
            return download_paths  # Return the paths of the saved files for further processing
        else:
            print("Failed to download any files.")
            return None

    def toNetCDF(self, existing_netcdf_path=None):
        '''
        https://gis.stackexchange.com/questions/188500/georeferencing-himawari-8-in-gdal-or-other
        '''
        # Define the size of each tile and the number of tiles in each dimension
        tile_size = 550
        grid_size = 10

        # Function to parse coordinates from filename
        def get_coordinates(filename):
            parts = filename.split('_')
            x = int(parts[-2])
            y = int(parts[-1].split('.')[0])
            return x, y

        # Function to get the imagery band (e.g. VIS, IR)
        def get_band(filename):
            parts = filename.split('_')
            band = int(parts[-4])
            return band

        # Loop through the downloaded images and place them in the correct position
        # Satellite band images are combined together
        band_paths = {}
        for filename in self.download_paths:
            band = get_band(filename)
            if band in band_paths.keys():
                band_paths[band].append(filename)
            else:
                band_paths[band] = [filename]  # start with this file; an empty list would drop it

        for band in band_paths.keys() :
          # Create a blank image for the final combined image
          combined_image = Image.new('LA', (tile_size * grid_size, tile_size * grid_size))
          for filename in band_paths[band]:
              if filename.endswith(".png"):
                  x, y = get_coordinates(filename)
                  img = Image.open(filename)

                  # Ensure the image is in grayscale mode 'LA'
                  print(filename)
                  combined_image.paste(img, (x * tile_size, y * tile_size))
              else:
                  print(f"{filename} is not a PNG")

          # Save the combined image
          combined_path = f"{self.file_prefix}_{band}_combined_image_greyscale.png"
          combined_image.save(combined_path)

          #gdal_translate -a_srs "+proj=geos +h=35785863 +a=6378137.0 +b=6356752.3 +lon_0=140.7 +no_defs" -a_ullr -5500000 5500000 5500000 -5500000 PI_H08_20150125_0230_TRC_FLDK_R10_PGPFD.png temp.tif
          recent_GeoTiff = combined_path[:-len('.png')]+'.tif'
          gdal.Translate(outputSRS="+proj=geos +h=35785863 +a=6378137.0 +b=6356752.3 +lon_0=140.7 +no_defs",
                        outputBounds=[-5500000, 5500000, 5500000, -5500000],
                        srcDS=combined_path,
                        destName=recent_GeoTiff)
          #gdalwarp -overwrite -t_srs "+proj=latlong +ellps=WGS84 +pm=140.7" -wo SOURCE_EXTRA=100 temp.tif Himawari8.tif
          gdal.Warp(destNameOrDestDS=combined_path[:-len('.png')]+'[preprocessed].tif',
                    srcDSOrSrcDSTab=recent_GeoTiff,
                    dstSRS="+proj=latlong +ellps=WGS84 +pm=140.7",
                    warpOptions=['SOURCE_EXTRA=100'])  # GDAL expects a list of KEY=VALUE strings here

          preprocessed = combined_path[:-len('.png')]+'[preprocessed].tif'
          print(f"Combined image at {combined_path} and preprocessed at {preprocessed}")
          nc_path = combined_path[:-len('tif')] + 'nc'
          ds = gdal.Translate(nc_path, preprocessed, format='NetCDF')
          print(f"Translated to {nc_path}")

import time
time.time()

"""##Download"""

# Your satellite sources list
satellites = [
    GOESEastDataSource(),
    GOESWestDataSource(),
    MSG0DegreeDataSource(),
    MSGIndianOceanDataSource(),
    Himawari8DataSource()
]

def fetch_data_for_satellite(satellite, file_prefix):
    """
    Wrapper function to call getRecentData on a satellite object.
    """
    satellite.getRecentData(file_prefix=file_prefix)

def main():
    prefix = f"[{int(time.time())}]"
    # Use ThreadPoolExecutor to run getRecentData concurrently for each satellite
    with ThreadPoolExecutor(max_workers=len(satellites)) as executor:
        # Schedule the executions
        futures = [executor.submit(fetch_data_for_satellite, satellite, prefix) for satellite in satellites]

        # Optionally, wait for all futures to complete and handle results/errors
        for future in futures:
            try:
                # Result handling here if needed
                future.result()  # This will raise exceptions if any occurred
            except Exception as e:
                print(f"An error occurred: {e}")
main()

"""##Extract & Transform"""

satellites

for satellite in satellites :
    satellite.toNetCDF()

"""/content/[1710410804]_MSG2-SEVI-MSG15-0100-NA-20240314095740.947000000Z-NA/MSG2-SEVI-MSG15-0100-NA-20240314095740.947000000Z-NA.nat

/content//[1710410804]_MSG3-SEVI-MSG15-0100-NA-20240314095741.930000000Z-NA/MSG3-SEVI-MSG15-0100-NA-20240314095741.930000000Z-NA..nat

# Appendix
"""

from satpy import Scene
scn = Scene(filenames=['/content/[1710413044]_MSG3-SEVI-MSG15-0100-NA-20240314104241.552000000Z-NA/MSG3-SEVI-MSG15-0100-NA-20240314104241.552000000Z-NA.nat'], reader='seviri_l1b_native')

output = scn.load(scn.available_dataset_names(), upper_right_corner='NE')

scn.available_dataset_names()

list(filter(lambda x: x!='HRV', scn.available_dataset_names()))

scn.save_datasets(writer="cf",
                  datasets=list(filter(lambda x: x!='HRV', scn.available_dataset_names())))

!pip show libnetcdf

test = MSGIndianOceanDataSource()
test.getRecentData()
test.toNetCDF()

from osgeo import gdal
#src = rasterio.open('GOES18-ABI-FD-GEOCOLOR-10848x10848.tif')
#ds = gdal.Translate('GOES18-ABI-FD-GEOCOLOR-10848x10848.nc', 'GOES18-ABI-FD-GEOCOLOR-10848x10848.tif', format='NetCDF')
ds = gdal.Translate('GOES18-ABI-FD-GEOCOLOR-10848x10848.nc', 'merged.tif', format='NetCDF')

import rasterio
from rasterio.merge import merge
from rasterio.plot import show
src1 = rasterio.open('GOES18-ABI-FD-GEOCOLOR-10848x10848.tif')
src2 = rasterio.open('GOES16-ABI-FD-GEOCOLOR-10848x10848.tif')
# Read the data and metadata
data1 = src1.read()
data2 = src2.read()
metadata1 = src1.meta
metadata2 = src2.meta

# Merge the two GeoTIFFs
merged, out_transform = merge([src1, src2])

# Update the metadata with the new dimensions and transform
metadata1.update({
    'height': merged.shape[1],
    'width': merged.shape[2],
    'transform': out_transform
})

# Write the merged GeoTIFF to a new file
with rasterio.open('merged.tif', 'w', **metadata1) as dst:
    dst.write(merged)

# Optionally, you can visualize the merged GeoTIFF
show(merged)

test = src.read()

for i, dtype, nodataval in zip(src.indexes, src.dtypes, src.nodatavals):
    print(i, dtype, nodataval)

from matplotlib import pyplot

pyplot.imshow(test[0], cmap='pink')
pyplot.show()

src.profile

from osgeo import gdal  # modern GDAL only exposes the module under osgeo

## this is for meteosat

!pip install eumdac

# https://github.com/hammad93/hurricane-map/issues/15
# https://eumetsatspace.atlassian.net/wiki/spaces/EUMDAC/pages/1760198661/Python+Library
import eumdac
import datetime
import shutil

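# NOTE: token is assumed to carry over from an earlier cell as an
# eumdac.AccessToken built from the user's EUMETSAT credentials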
datastore = eumdac.DataStore(token)
selected_collection = datastore.get_collection('EO:EUM:DAT:MSG:HRSEVIRI-IODC')
try:
    print(selected_collection.title)
    display(selected_collection.search_options)
except eumdac.datastore.DataStoreError as error:
    print(f"Error related to the data store: '{error.msg}'")
except eumdac.collection.CollectionError as error:
    print(f"Error related to the collection: '{error.msg}'")
except requests.exceptions.RequestException as error:
    print(f"Unexpected error: {error}")

# here we try to get the most recent one for the 0.6 um image
# https://gitlab.eumetsat.int/eumetlab/data-services/eumdac_data_store/-/blob/master/3_Downloading_products.ipynb?ref_type=heads
# https://gitlab.eumetsat.int/eumetlab/data-services/eumdac_data_store/-/blob/master/2_Searching_and_filtering_products.ipynb
collection = 'EO:EUM:DAT:MSG:HRSEVIRI'
datastore = eumdac.DataStore(token)
selected_collection = datastore.get_collection(collection)
print(selected_collection.title)

display(selected_collection.search_options)

product = selected_collection.search().first()
print(product)

try:
    with product.open() as fsrc, \
            open(fsrc.name, mode='wb') as fdst:
        shutil.copyfileobj(fsrc, fdst)
        print(f'Download of product {product} finished.')
except eumdac.product.ProductError as error:
    print(f"Error related to the product '{product}' while trying to download it: '{error.msg}'")
except requests.exceptions.ConnectionError as error:
    print(f"Error related to the connection: '{error.msg}'")
except requests.exceptions.RequestException as error:
    print(f"Unexpected error: {error}")

# so basically, we need to parse the entire image sent and there are python
# packages that do this
# https://satpy.readthedocs.io/en/latest/api/satpy.readers.seviri_l1b_hrit.html

"""1. get the response from this https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_=1705024461116 with the right unix time in milliseconds query parameter
2. download images based on result and url template for highest resolution
  - https://himawari8.nict.go.jp/img/FULL_24h/B01/10d/550/2024/01/12/014000_3_3.png
    - the B01 is the first channel (blue), B02 is the second (green), and B03 is the visible red
    - 10d is the zoom level, indicates tiling length and width
    - the 550 is the pixels, unknown if configurable
    - 2024/01/12/014000 is the timestamp from step 1
    - _3_3 is the position in the tiles
  - we only need the first 3 bands (blue, green, red)
3. stitch together the images for the mosaic
  - using python, we can stitch the images together to create one image
4. project to geotiff
  - Reference this https://gis.stackexchange.com/questions/188500/georeferencing-himawari-8-in-gdal-or-other
  - each band has its own geotiff and we can have each of these in a netcdf
"""

import datetime
import requests
import dateutil

def jpn_get_latest_metadata():
  # cache-busting query parameter: current unix time in milliseconds
  time_ms = int(datetime.datetime.now().timestamp() * 1000)
  latest_data_url = f"https://himawari8.nict.go.jp/img/FULL_24h/latest.json?_={time_ms}"
  return requests.get(latest_data_url).json()

def jpn_create_urls(band):
  dimension = 10
  metadata = jpn_get_latest_metadata()
  parsed_time = dateutil.parser.parse(metadata['date'])
  template_url = f"https://himawari8.nict.go.jp/img/FULL_24h/B{band:02}/10d/550/{parsed_time.year}/{parsed_time.strftime('%m')}/{parsed_time.day}/{parsed_time.strftime('%H%M%S')}" #_3_3.png
  result_urls = []
  for i in range(dimension) :
    for j in range(dimension) :
      result_urls.append(template_url + f"_{i}_{j}.png")
  return result_urls

jpn_create_urls(band = 1)

!pip install requests pillow

import requests
from PIL import Image
import os

# Define your list of URLs here
urls = [
    # "http://example.com/image_0_0.png",
    # "http://example.com/image_0_1.png",
    # ... add all other URLs here
]
urls = jpn_create_urls(band = 1)

# Directory to save downloaded images (temporary)
download_dir = "/content/downloaded_images"
os.makedirs(download_dir, exist_ok=True)

# Function to download an image
def download_image(url):
    filename = url.split("/")[-1]
    filepath = os.path.join(download_dir, filename)
    response = requests.get(url)
    if response.status_code == 200:
        with open(filepath, 'wb') as f:
            f.write(response.content)
    return filepath

# Download all images
image_files = [download_image(url) for url in urls]

import os
from PIL import Image

# Directory containing the downloaded images
download_dir = "/content/downloaded_images"

# Define the size of each tile and the number of tiles in each dimension
tile_size = 550
grid_size = 10

# Create a blank image for the final combined image in grayscale+alpha mode 'LA'
combined_image = Image.new('LA', (tile_size * grid_size, tile_size * grid_size))

# Function to parse coordinates from filename
def get_coordinates(filename):
    parts = filename.split('_')
    x = int(parts[-2])
    y = int(parts[-1].split('.')[0])
    return x, y

# Loop through the downloaded images and place them in the correct position
for filename in os.listdir(download_dir):
    if filename.endswith(".png"):
        x, y = get_coordinates(filename)
        img_path = os.path.join(download_dir, filename)
        img = Image.open(img_path)

        # Ensure the image is in grayscale mode 'LA'
        print(img_path)

        combined_image.paste(img, (x * tile_size, y * tile_size))

# Save the combined image
combined_image.save("combined_image_greyscale.png")

from PIL import Image
import numpy as np

def image_statistics(image_path):
    # Open the image
    img = Image.open(image_path)

    # Convert the image to a numpy array
    data = np.array(img)

    # Separate the RGB channels
    red, green, blue = data[:,:,0], data[:,:,1], data[:,:,2]

    # Calculate statistics for each channel
    def calculate_statistics(channel):
        min_val = np.min(channel)
        max_val = np.max(channel)
        mean_val = np.mean(channel)
        std_dev = np.std(channel)
        return min_val, max_val, mean_val, std_dev

    red_stats = calculate_statistics(red)
    green_stats = calculate_statistics(green)
    blue_stats = calculate_statistics(blue)

    # Print the statistics
    print("Red Channel Statistics:   Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*red_stats))
    print("Green Channel Statistics: Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*green_stats))
    print("Blue Channel Statistics:  Min: {}, Max: {}, Mean: {:.2f}, Std Dev: {:.2f}".format(*blue_stats))

# Replace 'path_to_image.png' with the path to your image file
image_statistics('combined_image.png')

!rm -rf /content/downloaded_images/