ctlearn-project / ctlearn

Deep Learning for IACT Event Reconstruction
BSD 3-Clause "New" or "Revised" License

Hexagonal convolution #88

Open TjarkMiener opened 5 years ago

TjarkMiener commented 5 years ago

There are two main ways of dealing with raw IACT images captured by cameras made of hexagonal lattices of photomultipliers. You can either transform the hexagonal camera pixels to square image pixels (https://github.com/ctlearn-project/ctlearn/issues/56), or you can modify your convolution and pooling methods. For the latter, there are packages such as IndexedConv and HexagDLy, which have been shown to improve performance.

shikharras commented 5 years ago

Hi! I would like to learn more about and pick up this issue on hexagonal convolutions. I have worked with Tensorflow and Keras before. How can I begin contributing to this project?

Thanks!

ShreyanshTripathi commented 5 years ago

Hello, I am interested in this project. I have previous experience working with convolution. Please guide me for a good entry point to contribute to this project.

parthpm commented 5 years ago

I am interested in this project. How can I get started?

nietootein commented 5 years ago

Hi @shikharras, @parthpm, @ShreyanshTripathi! Many thanks for your interest in CTLearn and in this issue in particular. Our recommendation would be to get to know our code by installing CTLearn on your system and reading through it, and, if you are already familiar with convolutional neural networks, checking out the couple of packages @TjarkMiener recommends above.

gremlin97 commented 5 years ago

Hello, I am interested in working on this project. I have prior experience in deep learning and would like to start contributing to this issue.

TjarkMiener commented 5 years ago

Dear @shikharras @ShreyanshTripathi @parthpm @gremlin97, I have pushed the code (PR #92) that will create the input images for the two hexagonal convolution packages above. The first step to contribute to this issue would be to understand the changes of #92 and write your own script to test the code! To do so, you should import image_mapping.py and load various ImageMappers with different arguments. Then, you should create a "dummy event" np.concatenate(([0.0], np.arange(0, "number of pixels for the selected telescope", 1))) for different telescopes and expand a dimension (axis=1). Create the final images using the function map_image() and plot them, so that you can compare them (and your script) with test_image_mapper.ipynb.
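For illustration, the dummy-event recipe above can be sketched in plain NumPy. The pixel count of 1039 for MAGICCam and the commented-out ImageMapper import and constructor arguments are assumptions based on PR #92, not verified against the merged code:

```python
import numpy as np

# Assumed pixel count: the MAGIC camera has 1039 PMT pixels.
num_pixels = 1039

# Dummy event: a leading 0.0 followed by each pixel's own index, so that any
# mis-mapping is immediately visible in the plotted image.
dummy_event = np.concatenate(([0.0], np.arange(0, num_pixels, 1)))

# map_image() expects a trailing channel dimension.
dummy_event = np.expand_dims(dummy_event, axis=1)
print(dummy_event.shape)  # (1040, 1)

# The mapping step itself (module/class names as introduced in PR #92):
# from ctlearn.image_mapping import ImageMapper
# mapper = ImageMapper()
# image = mapper.map_image(dummy_event, 'MAGICCam')
```

Because the pixel values equal their own indices, plotting the mapped image directly shows where each camera pixel lands in the output grid.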

shikharras commented 5 years ago

Hello @TjarkMiener, I noticed that in the 'image shifting' section of 'image_mapping.py', we shift the alternate columns by 1 without checking whether they are in the required form, as shown here: https://github.com/ai4iacts/hexagdly/blob/master/notebooks/how_to_apply_adressing_scheme.ipynb

Should our script to test the code contain images that are not aligned in this particular way, or is the input to our CNN always in the correct form? Thank you for the help.
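As a toy NumPy illustration of that addressing scheme (assuming the convention where odd columns are offset by half a pixel pitch), the shifting step simply drops the half-step offset and stores each pixel in a plain 2D array:

```python
import numpy as np

# Toy hex lattice: 4 columns of 3 pixels, odd columns sitting half a pixel lower.
cols, rows = 4, 3
col_idx = np.repeat(np.arange(cols), rows)   # column of each pixel
row_idx = np.tile(np.arange(rows), cols)     # row before shifting
y = row_idx + 0.5 * (col_idx % 2)            # vertical position on the hex lattice

# "Image shifting": truncate the half-step offset so every pixel gets an
# integer (row, column) slot in a square array.
grid = np.full((rows, cols), -1)
for pix, (c, yc) in enumerate(zip(col_idx, y)):
    grid[int(yc), c] = pix
```

On the hexagonal lattice the odd columns sit between the rows of their neighbours; snapping them onto the square grid this way is what allows standard tensor storage, at the price that the convolution kernel must then account for the column parity.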

TjarkMiener commented 5 years ago

That's a good point you are raising, @shikharras! We read the pixel positions of the IACTs from the FITS files in "ctlearn/ctlearn/pixel_pos_files/", which originate from ctapipe-extra. These FITS files also contain rotation information. While reading the pixel positions into CTLearn, we already make sure to perform the right rotation, so the pixel positions have the required form shown above. You therefore don't need to add this check in your script!

BTW @shikharras @ShreyanshTripathi @parthpm @gremlin97, you can actually test your script with an HDF5 file of MAGIC events. This file contains 10 dummy events for the MAGIC telescope. You need to read the HDF5 file and then obtain the image charge values. Be aware of selecting "MAGICCam" in the ImageMapper. You can also select different conversion methods to see the differences.

shikharras commented 5 years ago

Thanks for your response @TjarkMiener! I have written a script similar to 'test_image_mapper.ipynb' for the HDF5 file of MAGIC events using the tables library in Python, which displays the 10 images with different mapper parameters.

However, in the beginning I was trying to use the HDF5DataLoader function from ctlearn.data_loading, and it threw an error while accessing the Array_Info group, as no such group existed in the given file of MAGIC events. The actual group name was MAGIC, and I was able to access the data by writing

for row in f.root.MAGIC.iterrows():
    test_data_magic['MAGICCam'] = np.concatenate(([0.0], np.array(row['image_charge'])))
    test_data_magic['MAGICCam'] = np.expand_dims(test_data_magic['MAGICCam'], axis=1)

instead of using the f.root.Array_Info.iterrows() given in the HDF5DataLoader function here:

def _process_array_info(self, filename):
    # get file handle
    f = self.files[filename]

    telescopes = {}
    for row in f.root.Array_Info.iterrows():
        # note: tel type strings stored in PyTables as byte strings, must be decoded
        tel_type = row['tel_type'].decode('utf-8')

So is the solution to this problem to rename the group to Array_Info before using the HDF5DataLoader, or should the code in the above function be changed to retrieve the correct group name? Thank you for your help.

h3lio5 commented 5 years ago

@TjarkMiener As per your suggestion, I wrote a script to test the images of the MAGIC events. In it, I selected an image charge value and plotted it before and after pre-processing with the image-shifting algorithm.

*(screenshot: raw image before image shifting)*

*(screenshot: image after image shifting)*

After pre-processing, I expanded the dimensions of the image to match the format required for convolution (using Conv2d() from HexagDLy).

*(screenshot: sample outputs after convolution for different stride values)*

Please have a look at my notebook here. Is this what you expected, or have I misunderstood it? Please guide me on how to proceed. Thank you.

TjarkMiener commented 5 years ago

Great that you were able to display the MAGIC events, @shikharras and @h3li05369!

@shikharras Using the tables library is even better than my suggestion and closer to the CTLearn framework! It's also good that you are familiar with the functionality of the tool and tried to solve this problem using CTLearn. Unfortunately, our data format, which is created by DL1 Data Handler, differs from the MAGIC file. I should have mentioned that before. Sorry!

@h3li05369 The event that you are showing in your first two plots looks good to me! This is exactly what I meant by the task! Awesome that you went one step further and got familiar with the usage of HexagDLy. I haven't studied this package in detail, so could you please explain your four plots in more detail? Thanks!

h3lio5 commented 5 years ago

@TjarkMiener After pre-processing the image with the image-shifting addressing scheme (which was recently integrated into CTLearn from HexagDLy by @aribrill), it is fed into the Conv2d layer (also imported from HexagDLy) for four different stride sizes.

*(screenshot: convolution code)*

The four plots are the outputs of a plain hexagonal convolution for four different stride sizes. The following is the code for the plot:

*(screenshot: plotting code)*

The result after the first epoch:

*(screenshot: convolution outputs for the four stride values)*

I hope that I've explained it clearly. Having gained a substantial understanding of HexagDLy, I wanted to ask whether I should start converting the existing HexagDLy codebase from PyTorch to TensorFlow to make it compatible with CTLearn? Or are there any other tasks/suggestions for me? Please guide me. Thank you 😃.
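To make the convolution step concrete, the effect of a size-one hexagonal kernel on the column-shifted grid can be sketched in plain NumPy. This is only a toy stand-in for HexagDLy's Conv2d, and the neighbour convention (odd columns shifted down by half a pixel) is an assumption:

```python
import numpy as np

def hex_neighbor_sum(img):
    """Sum each pixel with its six hexagonal neighbours on a grid that uses
    the column-shifted addressing scheme (odd columns offset by half a pixel).
    Equivalent to a hexagonal kernel of size 1 with all weights equal to 1."""
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for r in range(rows):
        for c in range(cols):
            # the two vertical neighbours are independent of column parity
            offsets = [(0, 0), (-1, 0), (1, 0)]
            if c % 2 == 0:   # even column: diagonal neighbours sit one row up
                offsets += [(0, -1), (-1, -1), (0, 1), (-1, 1)]
            else:            # odd column: diagonal neighbours sit one row down
                offsets += [(0, -1), (1, -1), (0, 1), (1, 1)]
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    out[r, c] += img[rr, cc]
    return out
```

The parity-dependent offsets are exactly why a standard square Conv2d cannot be applied directly to the shifted grid: a real hexagonal convolution layer has to combine two kernels, one per column parity.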

TjarkMiener commented 5 years ago

@h3li05369 Thanks! At the moment there is no need to start converting this package! All interested students should focus on their GSoC application. Don't hesitate to ask for comments and suggestions.

h3lio5 commented 5 years ago

Thank you for replying. It keeps my motivation high 😃.

h3lio5 commented 5 years ago

Where can I find the dataset to train the models? The link provided in the README is asking for authorisation. I need the data to run the models on my system. Thank you.

TjarkMiener commented 5 years ago

@h3li05369 For the time being, we aren't allowed to share CTA private data with you or any other non-CTA member. We haven't found a solution to this problem yet. A workaround would be for you to fork the CTLearn project, make your changes, and then I could set up some runs for you on our GPUs. However, for now the application has the highest priority.

TjarkMiener commented 5 years ago

Hi all @shikharras @ShreyanshTripathi @parthpm @h3li05369 @gremlin97, in addition to your application you have to make a PR in CTLearn. This is a rule from GSoC, not from us! The PR doesn't have to contain any code for now! But any code you wrote is more than welcome and should be included! What has to be included is basically a summary of your application! So just introduce yourself and sum up the project and your progress so far. Feel free to copy-paste some parts of your application. Spend more time on your application than on the PR!

My email is tmiener@ucm.es in case you want to have some feedback for your application. Cheers!

hrueda25 commented 5 years ago

Hi everybody, I am a physics graduate from the Complutense University of Madrid. I am currently studying for an MS in astrophysics, and I have started working on hexagonal convolution. I want to contribute to this CTLearn issue under the GSoC project.

iamarchisha commented 4 years ago

Hi, I am interested in this project and I am a Computer Science student. Can someone help me get started with making contributions to this project?