kaggle-galaxies

Winning solution for the Galaxy Challenge on Kaggle (http://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge).

Documentation about the method and the code is available in doc/documentation.pdf. Information on how to generate the solution file can also be found below.

Generating the solution

Install the dependencies

Instructions for installing Theano and getting it to run on the GPU can be found in the Theano documentation. It should be possible to install NumPy, SciPy, scikit-image and pandas using pip or easy_install. To install pylearn2, simply run:

git clone git://github.com/lisa-lab/pylearn2.git

and add the resulting directory to your PYTHONPATH.
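For example, assuming you cloned pylearn2 into your home directory (the path below is just an illustration; adjust it to wherever you cloned the repository):

export PYTHONPATH=$PYTHONPATH:$HOME/pylearn2

Add this line to your shell startup file (e.g. ~/.bashrc) to make it permanent.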

The optional dependencies listed in the documentation don't have to be installed to reproduce the winning solution: the generated data files are already provided, so they don't have to be regenerated (but of course you can if you want to). If you want to install them, please refer to their respective documentation.

Download the code

To download the code, run:

git clone git://github.com/benanne/kaggle-galaxies.git

A bunch of data files (extracted SExtractor parameters, ID files, training labels in NumPy format, ...) are also included. I decided to include these since generating them is a bit tedious and requires extra dependencies. It's about 20MB in total, so depending on your connection speed it could take a minute. Cloning the repository should also create the necessary directory structure (see doc/documentation.pdf for more info).

Download the training data

Download the data files from Kaggle. Place and extract the files in the following locations:

  • data/raw/training_solutions_rev1.csv
  • data/raw/images_train_rev1/*.jpg
  • data/raw/images_test_rev1/*.jpg

Note that the zip file with the training images is called images_training_rev1.zip, but they should go in a directory called images_train_rev1. This is just for consistency.
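As an illustration, assuming the zip files were downloaded into data/raw/ (the test and solutions archive names below follow Kaggle's naming and are assumptions; adjust them if yours differ):

unzip data/raw/images_training_rev1.zip -d data/raw/
mv data/raw/images_training_rev1 data/raw/images_train_rev1
unzip data/raw/images_test_rev1.zip -d data/raw/
unzip data/raw/training_solutions_rev1.zip -d data/raw/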

Create data files

This step may be skipped: the necessary data files are already included in the git repository. Nevertheless, if you wish to regenerate them (or change how they are generated), here's how to do it; a loop that runs all five steps at once follows the list.

  • create data/train_ids.npy by running python create_train_ids_file.py.
  • create data/test_ids.npy by running python create_test_ids_file.py.
  • create data/solutions_train.npy by running python convert_training_labels_to_npy.py.
  • create data/pysex_params_extra_*.npy.gz by running python extract_pysex_params_extra.py.
  • create data/pysex_params_gen2_*.npy.gz by running python extract_pysex_params_gen2.py.
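If you simply want to regenerate everything, a minimal shell loop like this runs the five scripts above in order:

for script in create_train_ids_file.py create_test_ids_file.py convert_training_labels_to_npy.py extract_pysex_params_extra.py extract_pysex_params_gen2.py; do
    python "$script"
done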

Copy data to RAM

Copy the train and test images to /dev/shm by running:

python copy_data_to_shm.py

If you don't want to do this, you'll need to modify the realtime_augmentation.py file in a few places. Please refer to the documentation for more information.
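Note that /dev/shm is an in-memory filesystem, typically capped at half of your RAM, so it's worth verifying there is enough free space for both image sets before copying:

df -h /dev/shm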

Train the networks

To train the best single model, run:

python try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py

On a GeForce GTX 680, this took about 67 hours to run to completion. The prediction file generated by this script, predictions/final/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.csv.gz, should get you a score that's good enough to land in the #1 position (without any model averaging). You can similarly run the other try_*.py scripts to train the other models I used in the winning ensemble.

If you have more than 2GB of GPU memory, I recommend disabling Theano's garbage collector with allow_gc=False in your .theanorc file or in the THEANO_FLAGS environment variable, for a nice speedup. Please refer to the Theano documentation for more information on how to get the most out of Theano's GPU support.
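For example, to launch the training script with the garbage collector disabled (device=gpu and floatX=float32 are the usual flags for the old Theano GPU backend; adapt them to your setup):

THEANO_FLAGS=device=gpu,floatX=float32,allow_gc=False python try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.py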

Generate augmented predictions

To generate predictions which are averaged across multiple transformations of the input, run:

python predict_augmented_npy_maxout2048_extradense.py

This takes just over 4 hours on a GeForce GTX 680, and will create two files: predictions/final/augmented/valid/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.npy.gz and predictions/final/augmented/test/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.npy.gz. You can similarly run the corresponding predict_augmented_npy_*.py files for the other models you trained.
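As a quick sanity check, each .npy.gz file is just a gzipped NumPy array, so a one-liner like the following should print the shape of the test predictions (assuming the file holds a single plain array rather than a pickled object):

python -c "import gzip, numpy; print(numpy.load(gzip.open('predictions/final/augmented/test/try_convnet_cc_multirotflip_3x69r45_maxout2048_extradense.npy.gz')).shape)"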

Blend augmented predictions

To generate blended prediction files from all the models for which you generated augmented predictions, run:

python ensemble_predictions_npy.py

The script checks which files are present in predictions/final/augmented/test/ and uses this to determine the models for which predictions are available. It will create three files:

  • predictions/final/blended/blended_predictions_uniform.npy.gz: uniform blend.
  • predictions/final/blended/blended_predictions.npy.gz: weighted linear blend.
  • predictions/final/blended/blended_predictions_separate.npy.gz: weighted linear blend, with separate weights for each question.
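The set of models that get blended is thus determined entirely by the directory contents; you can check in advance which ones will be included with:

ls predictions/final/augmented/test/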

Convert prediction file to CSV

Finally, in order to prepare the predictions for submission, the prediction file needs to be converted from .npy.gz format to .csv.gz. Run the following to do so (or similarly for any other prediction file in .npy.gz format):

python create_submission_from_npy.py predictions/final/blended/blended_predictions_uniform.npy.gz
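The same script works for any of the other blends, e.g. the weighted linear blend:

python create_submission_from_npy.py predictions/final/blended/blended_predictions.npy.gz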

Submit predictions

Submit the file predictions/final/blended/blended_predictions_uniform.csv.gz on Kaggle to get it scored. Note that the process of generating this file involves considerable randomness: the weights of the networks are initialised randomly, the training data for each chunk is randomly selected, ... so I cannot guarantee that you will achieve the same score as I did. I did not use fixed random seeds. This might not have made much of a difference though, since different GPUs and CUDA toolkit versions will also introduce different rounding errors.
