Lucid library adapted for PyTorch


Lucent


PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for any issues and bugs found here.

Usage

Lucent is still in pre-alpha phase and can be installed locally with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")
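
To find valid layer names for the objective string (like "mixed4a" above), the model's layers can be listed with a modelzoo helper; a quick sketch of its use (the exact return format is an assumption):

from lucent.modelzoo.util import get_model_layers

# List the hookable layer names of the model, e.g. "mixed4a"
layer_names = get_model_layers(model)
print(layer_names)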

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

Comments
  • use custom model?


    Hi, I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.
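
    For what it's worth, a minimal sketch of the idea (render_vis accepts any torch.nn.Module; the model class, checkpoint path, and layer name below are hypothetical):

    import torch
    from lucent.optvis import render
    from my_project import MyNet  # hypothetical custom model

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = MyNet()
    model.load_state_dict(torch.load("my_weights.pth", map_location=device))  # hypothetical checkpoint
    model.to(device).eval()

    # Layer names can be listed with lucent.modelzoo.util.get_model_layers(model)
    render.render_vis(model, "layer4:0")  # hypothetical layer:channel objective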

    opened by dvschultz 5
  • Add activation grids notebook


    Issue #3, reproducing activation grids (https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/ActivationGrid.ipynb)

    It's possible to try it here: https://colab.research.google.com/drive/1pEe-KmXeDJcWQYLOHwcMubS69wVVCHLe#scrollTo=xidm-QrXvL2X

    Here are the results so far with inceptionv1 and layer mixed4d:

    • reproduced: https://imgur.com/twmizR4
    • original: https://imgur.com/eaYwEWR

    Some remarks and a question:

    • I added channel_reducer as is from the original repo
    • default transforms in transform.py produce a different size (due to random scaling) each time they are called; resampling to 224 is then done to get a fixed size. Is that the same in Lucid? I need to debug the original repo a little to be sure, but please let me know if you already know the answer. The reason I ask is that in this specific notebook, the cells in the grid are much smaller than 224. I added an argument "fixed_image_size" to handle this specific case where we want a fixed image size (after resampling) that is not 224.
    • Since all layers are computed, and with this commit we can accept smaller images, it is possible to get an exception on higher layers because the image is not big enough; this should be fine as long as the layer we are interested in is computed, so I handled this exception.
    opened by mehdidc 5
  • ValueError in Render


    Hi there,

    I am trying to run the tutorial and am running into the following error:

    >>> import torch
    >>> from lucent.optvis import render, param, transform, objectives
    >>> from lucent.modelzoo import inceptionv1
    >>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    >>> model = inceptionv1(pretrained=True)
    >>> _ = model.to(device).eval()
    >>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
    
      0%|                                                                                       | 0/512 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
        optimizer.step(closure)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
        loss = closure()
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
        model(transform_f(image_f()))
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
        x = transform(x)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
        M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
        raise ValueError("Input scale must be a B tensor. Got {}"
    ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])
    

    I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

    Any help regarding this error would be much appreciated!

    opened by tatkeller 4
  • Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization


    Dear author,

    Thanks so much for implementing Lucid in PyTorch! I really enjoyed using it in my projects leveraging deep neural networks as a way to understand real neurons in visual cortices. In my usage, I want to activate multiple channels together to match the selectivity of a biological neuron or of units in other networks. We can achieve this by adding up the original channel or neuron objectives, but that becomes very inefficient in backprop.

    So here are my 2 cents, in this commit I

    • Add linearly weighted activations of a channel, neuron, or neuron group as objectives, using tensor operations.
    • Add a function in util.py to output the module names, to ease usage with custom networks.
    opened by Animadversio 4
  • Generating a batch of optimal stimuli, one for each unit in a layer


    Hi, I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel, so I figured I would leverage batch processing. As illustrated in the neuron interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function. Here is a toy example of what I want and my approach, with units to be visualized = [10, 20, 30] and layer = 'readout_fc':

    tot_objective = objectives.channel("readout_fc", 10, batch=0) + objectives.channel("readout_fc", 20, batch=1) + objectives.channel("readout_fc", 30, batch=2)
    param_f = lambda: param.image(135, batch=3)
    imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_input_image_size=135)

    The parameter settings work beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach multiple units in parallel (it gives me separate images for each unit). Also, when the number of units is larger, I was hoping to avoid writing the objective out individually or running an explicit for loop. I tried using reduce as below:

    neurons = [10, 20, 30]
    tot_objective = reduce(lambda x, y: x + objectives.channel("readout_fc", y[0], batch=y[1]), list(zip(neurons, np.arange(len(neurons)))), 0)

    Doing so gives me the same image 3 times. So, I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli for multiple units in parallel. Thanks in advance.
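
    A minimal sketch of one way to build the summed objective programmatically (the layer name readout_fc and the batch layout are taken from the issue itself; this is not an official answer):

    from functools import reduce
    from lucent.optvis import render, param, objectives

    units = [10, 20, 30]
    # one objective per unit, each writing into its own batch slot
    objs = [objectives.channel("readout_fc", unit, batch=i) for i, unit in enumerate(units)]
    tot_objective = reduce(lambda a, b: a + b, objs)  # start from an Objective, not from 0

    param_f = lambda: param.image(135, batch=len(units))
    imgs = render.render_vis(model, tot_objective, param_f=param_f,
                             preprocess=False, fixed_input_image_size=135)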

    opened by arnaghosh 3
  • Q: Do you use the same architecture and weights as Clarity does?


    Hi,

    I am looking for a trainable InceptionV1 model which shares the same weights with the ones Clarity team uses. Reading your code, I've found these lines:

    model_urls = {
        # InceptionV1 model used in Lucid examples, converted by ProGamerGov
        'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
    }
    

    Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

    Thanks!

    opened by gergopool 3
  • Add direction and direction_neuron objectives


    Hi @greentfrapp,

    Thanks so much for making this repo! It has been a great help for me. For my use-case I needed the direction and direction_neuron objectives so I added them into lucent. I also included two demo files but let me know if they should be rolled into a docstring instead. This PR also lays the groundwork to reproduce the activation atlas notebooks from lucid. Would love to hear your thoughts :)
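
    A minimal sketch of how the added objective might be called (the signature is assumed to mirror Lucid's direction objective, `model` and `device` are assumed to be set up as in the Quickstart, and the 508-dimensional vector for mixed4a is a made-up example):

    import torch
    from lucent.optvis import render, objectives

    # random direction in mixed4a's channel space (InceptionV1's mixed4a has 508 channels)
    vec = torch.randn(508).to(device)
    obj = objectives.direction("mixed4a", vec)
    _ = render.render_vis(model, obj)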

    opened by ndey96 3
  • Activation Grid Notebook


    Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

    The only new function required seems to be ChannelReducer, which doesn't rely on Tensorflow so it should be relatively simple to port over.

    Help wanted for this!
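
    For context, Lucid's ChannelReducer is essentially a thin wrapper around scikit-learn decompositions; a rough sketch of the reduction the notebook needs, written directly against scikit-learn (the shapes and the NMF choice are illustrative assumptions):

    import numpy as np
    from sklearn.decomposition import NMF

    acts = np.random.rand(14, 14, 528)          # hypothetical [H, W, C] activations, e.g. mixed4d
    flat = acts.reshape(-1, acts.shape[-1])     # [H*W, C]
    reduced = NMF(n_components=6).fit_transform(flat)  # NMF requires non-negative input
    grid_factors = reduced.reshape(acts.shape[0], acts.shape[1], -1)  # [H, W, 6]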

    help wanted good first issue 
    opened by greentfrapp 3
  • get raw_activations


    Hi, thanks for this great library! I'm trying to reproduce the Activation Atlas notebook using lucent, creating grid cells in the end.

    In the notebook, the "raw activations" are available as a numpy.ndarray via "model.layers[7].activations" and are then used in the Dimensionality reduction section. How can I get these raw activations using lucent?

    I did create visualised images with lucent's render_vis first and then flattened them for UMAP's fit method, but I'm not sure this is correct. Any suggestion would be appreciated.
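
    One plain-PyTorch way to capture raw activations from the lucent model is a forward hook; a minimal sketch, assuming `model` and `device` are set up as in the Quickstart (the layer name "mixed4d" and the input size are assumptions for illustration):

    import torch

    activations = {}

    def save_to(name):
        def hook(module, inp, out):
            activations[name] = out.detach().cpu().numpy()
        return hook

    layer = dict(model.named_modules())["mixed4d"]  # adjust the name to the target layer
    handle = layer.register_forward_hook(save_to("mixed4d"))
    with torch.no_grad():
        model(torch.zeros(1, 3, 224, 224).to(device))
    handle.remove()

    raw_acts = activations["mixed4d"]  # numpy array of shape [1, C, H, W]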

    opened by 2nayk 2
  • Temporarily freeze kornia at 0.4.0 to prevent breaking change


    kornia 0.4.1 was released recently and it includes a breaking change to get_rotation_matrix2d.

    I've opened an issue to hopefully address this soon: https://github.com/kornia/kornia/issues/742, but in the meantime I suggest freezing kornia at 0.4.0 so that the random_rotate transform continues working as before.
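
    On the user side, the same stopgap can be applied directly (assuming pip):

    pip install kornia==0.4.0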

    opened by ivanzvonkov 2
  • Suggestion for `lucent.optvis.render.hook_model`


    First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

    1. I initially dun goofed and didn't eval the model (even though the very example notebook I'm using from lucent does lol). Maybe the hook_model function could check for nonetypes and tell the user to eval, if no saved feature maps are found?
    2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and the user'll figure it out quickly enough

    Suggested replacement for this function: https://github.com/greentfrapp/lucent/blob/a2b015ce95f29460a329f750428077bcde5e4e94/lucent/optvis/render.py#L194

    def hook(layer):
        if layer == "input":
            out = image_f()
        elif layer == "labels":
            out = list(features.values())[-1].features
        else:
            # suggestion 2 (ish): tell the user which layers are available
            assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"
            out = features[layer].features
        # suggestion 1: tell the user to eval
        assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."
        return out
    

    *I ran it on resnet18. Gorgeous and worked out of the box btw.

    (four attached result images from the resnet18 run)

    opened by alvinwan 2
  • Code Breaks as GPU Index > 0


    When using GPU, this codebase only works for torch.device('cuda:0') -- the GPU index has to be 0.

    For example, if you choose torch.device('cuda:1'), then when you run the code demo

    import torch
    
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1
    
    # Let's use cuda:1
    device = torch.device("cuda:1")
    model = inceptionv1(pretrained=True)
    model.to(device).eval()
    
    render.render_vis(model, "mixed4a:476")
    

    you will see an error like

    ..........
    File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
        204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
        205     out = features[layer].features
    --> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
        207 return out
    
    AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
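
    One workaround on the user side (not a fix in lucent itself) is to expose only the desired GPU to the process so that it becomes cuda:0; the environment variable must be set before CUDA is initialized:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # physical GPU 1; set before CUDA is initialized

    import torch
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1

    device = torch.device("cuda:0")  # now maps to physical GPU 1
    model = inceptionv1(pretrained=True).to(device).eval()
    render.render_vis(model, "mixed4a:476")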
    
    opened by Haoxiang-Wang 0
  • Lucent handles greyscale images in function view incorrectly


    When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws the following error: TypeError: Cannot handle this data type: (1, 1, 1), |u1. The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the range 0-255 to PIL, but PIL expects greyscale data as a two-dimensional array of integer values. This Stackoverflow answer provides more information.

    Solution: Lucent should check if the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a parameter, e.g., greyscale=True, in the view function.
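
    A sketch of the suggested fix (not lucent's actual view code): squeeze a trailing channel of 1 before handing the array to PIL.

    import numpy as np
    from PIL import Image

    def to_pil(img):
        arr = np.asarray(img)
        if arr.ndim == 3 and arr.shape[-1] == 1:
            arr = arr[..., 0]  # [H, W, 1] -> [H, W] so PIL accepts greyscale
        return Image.fromarray(arr.astype(np.uint8))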

    opened by neoglez 0
  • activation grid for hierarchical custom model


    Hi, is there a way to visualize the activation grid for a custom model with nested modules that are not explicitly named as attributes of the model? E.g., when I call get_model_layers(), you can see the following output for this custom model: (screenshot of get_model_layers() output)

    I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q). For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?). I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, there is an error in the first line of the objective function: in lines 203-206 of render.py, one of the two assertions is thrown, depending on how I choose the layer string. Can you help me with this problem? Many thanks!
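
    A generic way to inspect nested module names in plain PyTorch, independent of lucent's helpers (the example name is hypothetical; the dots reflect the nesting):

    for name, module in model.named_modules():
        print(name)  # e.g. "net.down1.maxpool_conv.1"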

    opened by An-nay-marks 1
  • Support batches for CPPN image representation


    When representing the optimized image with a CPPN network, the current implementation only allows optimizing a single image per run. This limitation prevents using, e.g., "diversity" objectives during optimization. This PR adds support for batching in the CPPN image representation by creating a batch of networks.

    Here's an example of generating a diverse batch of 2 images for the objective "mixed4d_3x3_bottleneck_pre_relu_conv:139" of the Inception network.

    (example output image)

    opened by shaibagon 0
  • Low GPU utilization


    I am trying to use Lucent to visualize deep neurons, but whatever I do the GPU seems under-utilized: examining utilization via nvidia-smi, I see low utilization (~10%) with occasional peaks at ~50%, but never above that. This happens both for the CPPN prior and for the Fourier image representation.

    Any suggestions?

    opened by shaibagon 0