The official implementation of the Hybrid Self-Attention NEAT algorithm

Overview

PUREPLES - Pure Python Library for ES-HyperNEAT

About

This is a library of evolutionary algorithms with a focus on neuroevolution, implemented in pure Python and built on top of the neat-python implementation. It contains a faithful implementation of both HyperNEAT and ES-HyperNEAT, which are briefly described below.

NEAT (NeuroEvolution of Augmenting Topologies) is a method developed by Kenneth O. Stanley for evolving arbitrary neural networks.
HyperNEAT (Hypercube-based NEAT) is a method developed by Kenneth O. Stanley utilizing NEAT. It is a technique for evolving large-scale neural networks using the geometric regularities of the task domain.
ES-HyperNEAT (Evolvable-substrate HyperNEAT) is a method developed by Sebastian Risi and Kenneth O. Stanley utilizing HyperNEAT. It is a technique for evolving large-scale neural networks using the geometric regularities of the task domain. In contrast to HyperNEAT, the substrate used during evolution is able to evolve. This rids the user of some initial work and often creates a more suitable substrate.

The library is designed to be extensible, making it easy to transition between experimental domains.

Getting started

This section briefly describes how to install and run experiments.

Installation Guide

First, make sure you have the dependencies installed: numpy, neat-python, graphviz, matplotlib and gym.
All the above can be installed using pip.
Next, download the source code and install the package by running pip install . from the root folder. Now you're able to use PUREPLES!
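
A quick sanity check that the installation succeeded (nothing PUREPLES-specific is assumed beyond the two imports):

    import neat
    import pureples

    print("neat-python found at:", neat.__file__)
    print("PUREPLES found at:", pureples.__file__)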

Experimenting

How to experiment using NEAT will not be described, since this is the responsibility of the neat-python library.

Setting up an experiment for HyperNEAT:

  1. Define a substrate with input nodes and output nodes as a list of tuples. The hidden nodes are a list of lists of tuples, where each inner list represents a layer: the first list is the topmost layer, the last the bottommost.
  2. Create a configuration file defining the various NEAT-specific parameters which are used for the CPPN.
  3. Define a fitness function setting the fitness of each genome. This is where the CPPN and the ANN are constructed for each generation - use the create_phenotype_network method from the hyperneat module.
  4. Create a population with the configuration file made in (2).
  5. Run the population with the fitness function made in (3) and the configuration file made in (2). The output is the genome solving the task, or the one closest to solving it. (A code sketch of these steps follows below.)
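
A minimal sketch of these steps, loosely following the bundled XOR example; the Substrate class, the module paths, the config file name and the coordinate layout are assumptions that may differ between versions:

    import neat
    from pureples.shared.substrate import Substrate
    from pureples.hyperneat.hyperneat import create_phenotype_network

    # Step 1: substrate with two inputs, one hidden layer of two nodes, one output.
    input_coordinates = [(-1.0, -1.0), (1.0, -1.0)]
    hidden_coordinates = [[(-0.5, 0.0), (0.5, 0.0)]]  # list of lists: one layer
    output_coordinates = [(0.0, 1.0)]
    substrate = Substrate(input_coordinates, output_coordinates, hidden_coordinates)

    # Step 2: NEAT configuration for the CPPN (hypothetical file name).
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config_cppn_xor")

    # Step 3: fitness function that builds the CPPN and the substrate ANN.
    def eval_fitness(genomes, config):
        for _, genome in genomes:
            cppn = neat.nn.FeedForwardNetwork.create(genome, config)
            net = create_phenotype_network(cppn, substrate)
            # Evaluate `net` on the task here (net.activate(...)) and assign
            # the resulting score; 0.0 is only a placeholder.
            genome.fitness = 0.0

    # Steps 4 and 5: create and run the population.
    pop = neat.Population(config)
    winner = pop.run(eval_fitness, 300)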

Setting up an experiment for ES-HyperNEAT: Use the same setup as HyperNEAT except for:

  • Not declaring hidden nodes when defining the substrate.
  • Declaring ES-HyperNEAT specific parameters.
  • Using the create_phenotype_network method residing in the es_hyperneat module when creating the ANN (a sketch follows below).
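
A minimal sketch of the ES-HyperNEAT variant; the ESNetwork constructor order, the parameter names and the values below follow the bundled examples and should be treated as assumptions:

    import neat
    from pureples.shared.substrate import Substrate
    from pureples.es_hyperneat.es_hyperneat import ESNetwork

    # No hidden nodes are declared: only input and output coordinates.
    substrate = Substrate([(-1.0, -1.0), (1.0, -1.0)], [(0.0, 1.0)])

    # ES-HyperNEAT specific parameters (illustrative values).
    params = {"initial_depth": 1, "max_depth": 2, "variance_threshold": 0.03,
              "band_threshold": 0.3, "iteration_level": 1,
              "division_threshold": 0.5, "max_weight": 8.0, "activation": "sigmoid"}

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config_cppn_xor")  # hypothetical config file name

    def eval_fitness(genomes, config):
        for _, genome in genomes:
            cppn = neat.nn.FeedForwardNetwork.create(genome, config)
            network = ESNetwork(substrate, cppn, params)
            net = network.create_phenotype_network()  # es_hyperneat module
            genome.fitness = 0.0  # replace with a task-specific evaluation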

If one is trying to solve an environment defined by OpenAI Gym, experimenting is even easier. In the shared module, a file called gym_runner is able to do most of the work. Given the number of generations, the environment to run, a configuration file, and a substrate, the relevant runner will take care of everything regarding the population, the fitness function, etc.
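
A sketch of the gym_runner route for ES-HyperNEAT; the argument order of run_es is taken from the example scripts, while the environment name, config file name and parameter values are placeholders:

    import gym
    import neat
    from pureples.shared.substrate import Substrate
    from pureples.shared.gym_runner import run_es

    env = gym.make("CartPole-v1")  # placeholder environment

    # Four inputs for CartPole's observation, one output node.
    substrate = Substrate([(-1.0, -1.0), (-0.33, -1.0), (0.33, -1.0), (1.0, -1.0)],
                          [(0.0, 1.0)])
    params = {"initial_depth": 1, "max_depth": 2, "variance_threshold": 0.03,
              "band_threshold": 0.3, "iteration_level": 1,
              "division_threshold": 0.5, "max_weight": 8.0, "activation": "sigmoid"}

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config_cppn_cartpole")  # hypothetical config file name

    # run_es(generations, environment, max_steps, config, params, substrate, ...)
    winner, stats = run_es(100, env, 500, config, params, substrate, max_trials=100)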

Please refer to the sample experiments included for further details on experimenting.

Comments
  • The query_cppn function returns a value of discontinuity range

    Hi,

    I have a small suggestion for improving the query_cppn function in hyperneat.py. In lines 85-88, a value below the threshold is replaced with 0.0, so the value range [-0.2, 0.2] drops out in this implementation.

    However, the original paper (http://axon.cs.byu.edu/Dan/778/papers/NeuroEvolution/stanley3**.pdf) says "The magnitude of weights above this threshold are scaled to be between zero and a maximum magnitude in the substrate." on page 8.

    Thus, I suggest changing the query_cppn function so that it returns a value in the continuous range [-max_val, max_val].
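
    A minimal sketch of the suggested behaviour, as a hypothetical standalone helper rather than the library's query_cppn: magnitudes above the threshold are rescaled onto a continuous range up to max_val while keeping the sign.

        def scale_weight(w, threshold=0.2, max_val=5.0):
            """Map a raw CPPN output in [-1, 1] to a substrate weight in [-max_val, max_val]."""
            if abs(w) < threshold:
                return 0.0
            sign = 1.0 if w > 0 else -1.0
            # Rescale |w| from (threshold, 1.0] onto (0.0, max_val] so there is no gap.
            return sign * (abs(w) - threshold) / (1.0 - threshold) * max_val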

    opened by yamatakeru 14
  • Config always finds 5 inputs. [RuntimeError: Expected 840 inputs, got 5]

     ****** Running generation 0 ******
    
    Traceback (most recent call last):
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\ptvsd_launcher.py", line 48, in <module>
        main(ptvsdArgs)
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 432, in main
        run()
      File "c:\Users\Silver\.vscode\extensions\ms-python.python-2020.2.64397\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 316, in run_file
        runpy.run_path(target, run_name='__main__')
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "g:\Emulators\ML AI open AI\env2.py", line 51, in <module>
        winner = run(200, env)[0]
      File "g:\Emulators\ML AI open AI\env2.py", line 37, in run
        winner, stats = run_es(gens, env, 200, config, params, sub, max_trials=200)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\shared\gym_runner.py", line 50, in run_es
        pop.run(eval_fitness, gens)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\neat\population.py", line 89, in run
        fitness_function(list(iteritems(self.population)), self.config)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\shared\gym_runner.py", line 25, in eval_fitness
        net = network.create_phenotype_network()
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 46, in create_phenotype_network
        hidden_nodes, connections = self.es_hyperneat()
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 151, in es_hyperneat
        root = self.division_initialization((x, y), True)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\es_hyperneat\es_hyperneat.py", line 110, in division_initialization
        c.w = query_cppn(coord, (c.x, c.y), outgoing, self.cppn, self.max_weight)
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\pureples\hyperneat\hyperneat.py", line 84, in query_cppn
        w = cppn.activate(i)[0]
      File "C:\Users\Silver\AppData\Local\Programs\Python\Python37\lib\site-packages\neat\nn\feed_forward.py", line 14, in activate
        raise RuntimeError("Expected {0:n} inputs, got {1:n}".format(len(self.input_nodes), len(inputs)))
    RuntimeError: Expected 840 inputs, got 5
    

    I ran this through the debugger and found that, at some point, random float values replace the number of inputs that was initially set.

    I could even see that at some point during execution the correct number of inputs was actually used.

    I've been fighting to find the cause and I've come to the conclusion that something has to be wrong in the module.

    For some context, I took one of the examples and attempted to configure it to run a Gym Retro env.

    As you can see, though, the only thing stopping me is the inputs being messed up somehow.

    If you need more information please let me know.

    opened by SilverDash 12
  • Question about discrete gym runner observation space

    Hi!

    Very cool project, thanks for making it available. I have a toy project I am working on with Gym for function approximation, which has a discrete-valued observation space consisting of 12 integers; the action space is also discrete-valued, with three integers used to determine the correct agent action based on the sequence of 12 integers.

    So does pureples support discrete observation and action spaces, and would the cartpole experiment make for a good starting point for this?
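
    For what it's worth, a sketch of how such a substrate could be laid out using the list-of-tuples convention described above; spacing the coordinates evenly in [-1, 1] is an assumption, not a library requirement:

        from pureples.shared.substrate import Substrate  # module path as used in the examples

        # 12 discrete-valued inputs along the bottom row, 3 action outputs along the top.
        input_coordinates = [(-1.0 + 2.0 * i / 11, -1.0) for i in range(12)]
        output_coordinates = [(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]
        substrate = Substrate(input_coordinates, output_coordinates)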

    Thanks in advance!

    opened by pablogranolabar 5
  • Line 169 in es_hyperneat.py is different from the algorithm in the original paper

    Hi,

    The following part seems to be different from the algorithm in https://eplex.cs.ucf.edu/papers/risi_alife12.pdf.

    160 | for i in range(self.iteration_level):  # Explore from hidden.
    161 |     for x, y in unexplored_hidden_nodes:
    162 |         root = self.division_initialization((x, y), True)
    163 |         self.pruning_extraction((x, y), root, True)
    164 |         connections2 = connections2.union(self.connections)
    165 |         for c in connections2:
    166 |             hidden_nodes.add((c.x2, c.y2))
    167 |         self.connections = set()
    168 | 
    169 | unexplored_hidden_nodes -= hidden_nodes
    

    According to the pseudocode on page 47, line 169 should be indented one more level. Also, unexplored_hidden_nodes will always end up as the empty set if we remove hidden_nodes from unexplored_hidden_nodes (because hidden_nodes is always a superset of unexplored_hidden_nodes). I think it needs to be corrected as follows.

    160 | for i in range(self.iteration_level):  # Explore from hidden.
    161 |     for x, y in unexplored_hidden_nodes:
    162 |         root = self.division_initialization((x, y), True)
    163 |         self.pruning_extraction((x, y), root, True)
    164 |         connections2 = connections2.union(self.connections)
    165 |         for c in connections2:
    166 |             hidden_nodes.add((c.x2, c.y2))
    167 |         self.connections = set()
    168 | 
    169 - unexplored_hidden_nodes -= hidden_nodes
    169 +     unexplored_hidden_nodes = hidden_nodes - unexplored_hidden_nodes
    
    opened by yamatakeru 3
  • ES-HyperNEAT for OpenAI-Gyms SpaceInvader

    Hey,

    First of all, you did great work, easy to use and understand! What I am trying to do is use ES-HyperNEAT to exploit the geometric information in the pixels of an Atari game. OpenAI Gym gives an observation space of (210, 160, 3); I have downsized it to (84, 84, 1) without colours. That makes 7056 input nodes instead of 100800.

    Now the problem is that the outputs of the substrate's output nodes are always zero.

    The input layout is:

    for y in range(1, 85):
        for x in range(1, 85):
            input_coordinates.append((x, y))
    

    Is there some configuration in the CPPN I should watch out for, is the substrate too large, or is there a maximum range for node placement in the substrate (e.g. just between -1 and 1)?

    Thanks in advance!
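
    For reference, the substrates in the bundled examples keep node coordinates inside [-1, 1]; a minimal sketch of the same 84x84 input layout under that assumption:

        # Map grid indices 0..83 onto [-1, 1] instead of using raw pixel indices.
        input_coordinates = []
        for y in range(84):
            for x in range(84):
                input_coordinates.append((-1.0 + 2.0 * x / 83, -1.0 + 2.0 * y / 83))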

    opened by Multiv4c 3
  • Question about inference with evolved ANN

    Hi @ukuleleplayer,

    I've been working on a PUREPLES-based project with your gym runner, but I can't find any resources on inference with an evolved ANN. It looks like the phenotype gets pickled and the model saved whenever the reward is +1., but what format is that model in, and how do I deploy it for inference tasks?
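
    Assuming the saved file really is a pickled phenotype network (the object returned by create_phenotype_network) written out with pickle.dump, a minimal loading-and-inference sketch could look like this; the filename, environment and action decoding are placeholders, and the classic gym step API is assumed:

        import pickle
        import gym

        with open("winner_network.pkl", "rb") as f:  # placeholder filename
            net = pickle.load(f)

        env = gym.make("CartPole-v1")  # placeholder environment
        observation = env.reset()
        net.reset()  # the phenotype is recurrent and keeps state between activations
        done, total_reward = False, 0.0
        while not done:
            output = net.activate(observation)
            action = round(output[0])  # task-specific action decoding
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode reward:", total_reward)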

    What I want to do is implement an additional loop whenever a +1. reward is found, to test it n more times to see if it has generalized to other examples.

    And does it make sense to restart an episode on each of those saved pickles for subsequent runs?

    TIA!

    opened by pablogranolabar 2
  • Connection's __eq__ does not return a boolean in es_hyperneat.py.

    Hi.

    Connection's __eq__ is expected to return a boolean, but it returns a tuple (float, float, float, bool, float, float, float). However, the library seems to be working correctly at first glance.
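
    A sketch of the kind of fix the PR proposes; the field names are illustrative, not necessarily the library's exact attributes:

        class Connection:
            """Illustrative stand-in for the Connection class in es_hyperneat.py."""

            def __init__(self, x1, y1, x2, y2, weight):
                self.x1, self.y1, self.x2, self.y2, self.weight = x1, y1, x2, y2, weight

            def __eq__(self, other):
                # Compare the defining fields and return an explicit boolean.
                return (self.x1, self.y1, self.x2, self.y2, self.weight) == \
                       (other.x1, other.y1, other.x2, other.y2, other.weight)

            def __hash__(self):
                return hash((self.x1, self.y1, self.x2, self.y2, self.weight))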

    Tentatively, I will create a PR.

    opened by yamatakeru 2
  • Missing list() in es_hyperneat.py / unsupported operand type(s) for +: 'range' and 'range'

    Hi, I think that in es_hyperneat.py on lines 30/31 the ranges for the input and output nodes should be transformed into lists with list().

    Otherwise, return neat.nn.RecurrentNetwork(input_nodes, output_nodes, node_evals) throws an error: unsupported operand type(s) for +: 'range' and 'range'.

    Without that change, scripts like es_hyperneat_xor_large.py do not work.

    The same problem seems to appear in hyperneat.py.
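
    A minimal illustration of the suggested change, with illustrative variable names rather than the library's exact code: on Python 3, range objects cannot be concatenated with +, so the node id sequences need to be materialised as lists first.

        num_inputs, num_outputs = 2, 1  # illustrative sizes

        input_nodes = list(range(num_inputs))  # was a bare range(...)
        output_nodes = list(range(num_inputs, num_inputs + num_outputs))

        combined = input_nodes + output_nodes  # works; range + range raises TypeError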

    opened by DaKnick 2
  • The relationship between ESNetwork.activations and max_depth

    Could anyone please explain the following line of code in es_hyperneat.py?

            # Number of layers in the network.
            self.activations = 2 ** params["max_depth"] + 1
    

    Thank you very much.

    opened by lester1027 1
  • network.create_phenotype_network() executing for more than 30 minutes when input and output sizes are (49360,) and (1024,) respectively

    I have been trying to use ES-HyperNEAT on a custom environment. The input size of the ES network is (49360,) and the output size is (1024,). The "net = network.create_phenotype_network()" call is sometimes taking more than 30 minutes to execute for a single genome. Does this mean that the larger the input and output sizes of the network, the more time it will take to create the network?

    Is there any solution for this?

    opened by Abdul-Wahab-mc 1
  • Multiple activation function support for ES-HyperNEAT?

    Hi @ukuleleplayer

    I've noticed that all of the examples use sigmoid activation functions for ES-HyperNEAT; is the use of multiple activation functions at the per-neuron level possible with PUREPLES?

    Or any activation function other than sigmoid for ES-HyperNEAT?

    TIA

    opened by pablogranolabar 1
  • Question about run_hyper()

    Hi, first of all thank you for your library, it's great! I am going through the code trying to understand what each step does, regarding the pole balancing environment. There is a point that really leaves me confused: in run_hyper(), it seems we create the population and test it for one trial, then again for 10 trials, and then for max_trials trials. Any reason to do that? Thanks

    opened by ValerioB88 0
Releases (v0.0-alpha)

Owner: Adrian Westh, Data Conscious Software Developer