PCGNN - Procedural Content Generation with NEAT and Novelty


About

This is a research project for a BSc (Hons) degree at the University of the Witwatersrand, Johannesburg. It's about combining novelty search and NeuroEvolution of Augmenting Topologies (NEAT) for procedural level generation. We also investigate two new metrics for evaluating the diversity and difficulty of levels. This repo contains our code as well as the final report.

If you just want to get started generating or playing levels, please look at how to generate levels or the examples. Also feel free to look at the report or the poster that summarises our approach. For information about the metrics and how to use them, see here.

General structure

The main structure of the code is (hopefully) somewhat understandable. First of all, to run any Python file in this repository, use ./run.sh path/to/python/file instead of calling python directly, because otherwise modules are not recognised.

Most code in here can be categorised into three main archetypes:

  1. General / Method code. This is how the methods were actually implemented; these files don't do anything useful when run on their own.
  2. Runs / Experiment code. This is a large chunk of what is in here: code that runs the methods in some way and generates results. Most of the results we generate are in Python pickle format.
  3. Analysis code. We have a clear separation between experiment code (which runs the methods) and analysis code, which takes in the results and generates usable output, like images, tables and graphs.

File Structure

Most of these are relative to ./src

Method Code
├── novelty_neat     -> Our actual method
├── main
├── baselines
├── games
├── common
├── metrics

Instrumental
├── experiments
├── pipelines
├── runs
├── run.sh
├── scripts
└── slurms

Analysis
├── analysis
├── external

Data
├── levels
├── logs
├── results
├── ../results

Document
├── ../doc/report.pdf

Explanation

The method roughly works as follows:

  1. Evolve a neural network using NEAT (with neat-python)
  2. The fitness function for each neural network is as follows:
    1. Generate N levels per network
    2. Calculate the average solvability of these N levels
    3. Calculate how different these N levels are from each other (called intra-novelty). Calculate the average of this.
    4. Calculate how different these N levels are from the other networks' levels (normal novelty)
    5. Fitness (network) = w1 * Solvability + w2 * Intra-Novelty + w3 * Novelty.
  3. Update the networks using the fitness calculated above and repeat for X generations.

After this 'training' process, take the best network and use it to generate levels in real time.

The way novelty is calculated can be found in the report, or from the original paper by Joel Lehman and Kenneth O. Stanley, here.

We compare levels by considering a few different distance functions, like the normalised Hamming Distance and Image Hashing, but others can also be used.
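
To make the fitness computation concrete, below is a minimal sketch, assuming tile grids of equal size and using the normalised Hamming distance. The function names and weight values are illustrative only (this is not the repository's actual API), and the novelty term is simplified to the mean distance to the other networks' levels rather than the nearest-neighbour comparison used in the novelty search paper.

import numpy as np

def normalised_hamming_distance(level_a, level_b):
    # Fraction of tiles that differ between two equally-sized tile grids.
    a, b = np.asarray(level_a), np.asarray(level_b)
    return float(np.mean(a != b))

def intra_novelty(levels):
    # Average pairwise distance between the N levels generated by one network.
    pairs = [(i, j) for i in range(len(levels)) for j in range(i + 1, len(levels))]
    if not pairs:
        return 0.0
    return float(np.mean([normalised_hamming_distance(levels[i], levels[j])
                          for i, j in pairs]))

def fitness(levels, other_levels, solvability, w1=0.5, w2=0.25, w3=0.25):
    # Fitness(network) = w1 * Solvability + w2 * Intra-Novelty + w3 * Novelty.
    novelty = float(np.mean([normalised_hamming_distance(l, o)
                             for l in levels for o in other_levels]))
    return w1 * solvability + w2 * intra_novelty(levels) + w3 * novelty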

Get started

To get started you will need a Python environment; env.yml is provided to quickly get started with Conda. Use it like: conda env create -f env.yml. There is also another environment that is used specifically for interacting with the gym_pcgrl codebase. If that is something you want to do, create another environment from the env_pcgrl.yml file.

For full functionality, you will also need Java installed. The openjdk 16.0.1 2021-04-20 version worked well.

Additionally, most of the actual experiments used Weights & Biases to log experiments and results, so you would also need to log in using your credentials. The simple entry points described below should not require it.

Entry Points

At the moment, the easiest way to interact with the codebase would be to use the code in src/main/.

Generate Levels

To have a go at generating levels, you can use the functions provided in src/main/main.py. Specifically, you can call the following (remember to be in the src directory before running these commands):

./run.sh main/main.py --method noveltyneat --game mario --mode generate --width 114 --height 14

The above allows you to view some generated levels.

Playing Levels

You can also play the (Mario) levels, or let an agent play them. After generating a level using the above, you can play it by using:

./run.sh main/main.py --game mario --command play-human --filename test_level.txt

Or you can let an A* agent play it using:

./run.sh main/main.py --game mario --command play-agent --filename test_level.txt

Features

Works for Tilemaps

(Example levels: Mario)

Generates arbitrary sized levels without retraining

Mario
(Example Mario levels at widths 28, 56, 114 and 228)

Maze
(Example Maze levels)


Experiments

We ran many different experiments; the version numbers have the following meanings:

Generalisation - Generate Larger levels

  • v206: Mario
  • v104: Maze NEAT
  • v107: Maze DirectGA

Metrics

  • v202: Mario
  • v106: Maze

Method runs

  • v105: Maze NEAT
  • v102: Maze DirectGA
  • v204: Mario NEAT
  • v201: Mario DirectGA

The PCGRL code can be found in ./src/external/gym-pcgrl

Reproducing

The results shown and mentioned in the report are mainly found in src/results/.

The following describes how to reproduce our results. Note that the ordering of the images might differ (e.g. mario-level-0.png and mario-level-1.png may swap), but the set of level images generated should be exactly the same.

The whole process consists of three steps and assumes a Slurm-based cluster scheduler. Please also change the logfile locations: after editing the paths in src/pipelines/replace_all_paths.sh, run it from the repository root; this updates all paths and decompresses some results. Our partition name was batch, so this may also need to be updated in the Slurm scripts.

You need to run the following three scripts in order, and all jobs from one step must have finished before you start the next. Note that timing results will probably differ, and for fairness we recommend using a machine with at least 8 cores, as we usually run multiple seeds in parallel. First of all, cd src/pipelines

  1. ./reproduce_full.sh -> Runs the DirectGA & NoveltyNEAT experiments.
  2. ./analyse_all.sh -> Reruns the metric calculations on the above and saves them in an easy-to-work-with format.
  3. ./finalise_analysis.sh -> Uses the above results to create figures and tables.

The analysis runs (steps 2 and 3) should automatically use the latest results. If you want to change this, then before going from one step to the next you will need to manually update the locations of the .p files. For example, between steps 1 and 2 you need to update:

  • src/analysis/proper_experiments/v200/for_mario_generation_1.py,
  • src/analysis/proper_experiments/v100/for_maze_1.py,
  • src/analysis/proper_experiments/v100/analyse_104.py
  • src/analysis/proper_experiments/v200/analyse_206.py.

Likewise, between steps 2 and 3 you need to update (only if you don't want to analyse the latest runs):

  • src/analysis/proper_experiments/v400/analyse_all_statistical_tests.py and
  • src/analysis/proper_experiments/v400/analyse_all_metrics_properly.py.

For PCGRL, the runs take quite long, so we suggest using our models and results. If you really want to rerun the training, you can look at the Slurm scripts in src/slurms/all_pcgrl/*.batch.

For the PCGRL inference, there are two steps to do, specifically:

  1. Run infer_pcgrl.py
  2. Then run the analysis scripts again, specifically analyse_all.sh and finalise_analysis.sh (remembering to change the PCGRL filepaths in for_mario_generation_1.py and for_maze_1.py).

Note: The models for turtle (both Mario and Maze) were too large for GitHub and are thus not included here, but the wide models are.

Metrics

We also introduce two metrics to measure the diversity and difficulty of levels using A* agents. The code for these metrics is in metrics/a_star/a_star_metrics.py.

A* Diversity Metric

The A* diversity metric uses the trajectory of the agent on two levels to evaluate the diversity. Levels that are solved using different paths are marked as diverse, whereas levels with similar paths are marked as similar.

Largely similar levels (Diversity = 0.08)

Different levels (Diversity = 0.27)

All paths: the green and orange paths are quite similar, leading to low diversity.
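
The sketch below illustrates the idea under some assumptions: it is not the implementation in a_star_metrics.py, the function names are made up, and the trajectory comparison (resampling both paths to a common length and averaging the normalised distance between corresponding points) is a deliberately simplified stand-in for the actual comparison described in the report.

import numpy as np

def resample(path, n=50):
    # Resample a trajectory (list of (x, y) positions) to n evenly spaced points.
    path = np.asarray(path, dtype=float)
    idx = np.linspace(0, len(path) - 1, n)
    lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return path[lo] * (1 - frac) + path[hi] * frac

def trajectory_diversity(path_a, path_b, level_width, level_height, n=50):
    # Mean normalised distance between corresponding points of two A*
    # trajectories: 0 means identical paths, larger values mean the two
    # levels were solved in more different ways.
    a, b = resample(path_a, n), resample(path_b, n)
    scale = np.hypot(level_width, level_height)
    return float(np.mean(np.linalg.norm(a - b, axis=1)) / scale)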

A* Difficulty

This metric measures how much of the search tree of an A* agent needs to be expanded before the agent can solve the level - more expansion indicates more exploration is required and that the level is more difficult.

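A minimal sketch of such a measure, assuming a grid maze where 0 is free and 1 is a wall (this is not the actual code in a_star_metrics.py, and normalising the number of expansions by the number of cells is an assumption made for illustration):

import heapq

def a_star_difficulty(grid, start, goal):
    # Run A* with a Manhattan heuristic and return the fraction of cells
    # expanded before the goal is reached: the more of the search tree the
    # agent must expand, the more difficult the level is by this measure.
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start)]
    best_g = {start: 0}
    expanded = set()
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node in expanded:
            continue
        expanded.add(node)
        if node == goal:
            return len(expanded) / (rows * cols)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + heuristic((nr, nc)), ng, (nr, nc)))
    return 1.0  # unsolvable: the whole reachable search tree was expanded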

Applying the metrics code to levels is done in (among others) src/runs/proper_experiments/v300_metrics.

We also experimented with using RL agents to measure the above characteristics, and results looked promising, but the implementation posed some challenges.

Feel free to look in

  • metrics/rl/tabular/rl_agent_metric.py
  • metrics/rl/tabular/tabular_rl_agent.py
  • metrics/rl/tabular/rl_difficulty_metric.py

for this code.

Assorted

Island Models

There is also some code (not thoroughly tested) that uses multiple island populations and performs regular migration between them; it can be found in novelty_neat/mario/test/island_mario.py, novelty_neat/maze/test/island_model.py and src/runs/proper_experiments/v200_mario/v203_island_neat.py. A minimal sketch of the idea is shown below.
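
This sketch only illustrates the general island-model pattern, several populations evolving independently with periodic ring migration of the best individuals; the function names and the migration rule are illustrative and not taken from the files above.

import random

def evolve_islands(islands, evolve_one_generation, fitness,
                   generations=100, migration_interval=10, migrants=2):
    # islands: list of populations (each a list of genomes).
    # evolve_one_generation(population) -> new population (e.g. one NEAT step).
    # fitness(genome) -> float, used to decide which genomes migrate.
    for gen in range(generations):
        islands = [evolve_one_generation(pop) for pop in islands]
        if (gen + 1) % migration_interval == 0:
            # Ring migration: each island sends its best genomes to the next island.
            best = [sorted(pop, key=fitness, reverse=True)[:migrants] for pop in islands]
            for i, pop in enumerate(islands):
                for genome in best[(i - 1) % len(islands)]:
                    # Replace a random individual with the incoming migrant.
                    pop[random.randrange(len(pop))] = genome
    return islands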

Other repositories and projects used

These can be found in src/external. We did edit and adapt some of the code, but most of it is still original.

Some ideas from here

And some snippets from Stack Overflow, which I've tried to reference where they were used.

Acknowledgements

This work is based on the research supported wholly by the National Research Foundation of South Africa (Grant UID 133358).
