Lolviz - A simple Python data-structure visualization tool for lists of lists, lists, dictionaries; primarily for use in Jupyter notebooks / presentations

lolviz

By Terence Parr. See Explained.ai for more stuff.

A very nice looking JavaScript lolviz port with improvements by Adnan M. Sagar.

A simple Python data-structure visualization tool that started out as a List Of Lists (lol) visualizer but now handles arbitrary object graphs, including function call stacks! lolviz tries to look out for and nicely format common data structures such as lists, dictionaries, linked lists, and binary trees. This package is primarily for use in teaching and presentations with Jupyter notebooks, but could also be used for debugging data structures. It is useful for visualizing machine learning data structures, such as decision trees, as well.

It seems that I'm always trying to describe how data is laid out in memory to students. There are really great data structure visualization tools but I wanted something I could use directly via Python in Jupyter notebooks.

The look and idea was inspired by the awesome Python tutor. The graphviz/dot tool does all of the heavy lifting underneath for layout; my contribution is primarily making graphviz display objects in a nice way.
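To make that division of labor concrete, here is a hedged sketch of the kind of DOT source with HTML-like labels such a tool might hand to graphviz for a horizontal list. The exact markup lolviz emits differs, and list_to_dot is a hypothetical helper for illustration only:

```python
# A hand-rolled sketch of the kind of DOT source a list visualizer
# might hand to graphviz for layout. The exact labels lolviz emits
# differ; list_to_dot is a hypothetical helper, not part of lolviz.

def list_to_dot(data):
    # one <td> cell per element, showing index and repr() of the value
    cells = "".join(
        f'<td cellpadding="5">{i}<br/>{v!r}</td>' for i, v in enumerate(data)
    )
    return (
        "digraph G {\n"
        '  list [shape=plaintext, label=<<table border="0" cellborder="1"'
        f' cellspacing="0"><tr>{cells}</tr></table>>];\n'
        "}\n"
    )

dot = list_to_dot(['hi', 'mom', 3.14])
print(dot)  # feed to `dot -Tpng` (or graphviz.Source) to render
```

Pasting the printed source into the dot command line is a quick way to see graphviz do the actual layout work.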

Functionality

There are currently a number of functions of interest that return graphviz.files.Source objects:

  • listviz(): Horizontal list visualization
  • lolviz(): List of lists visualization with the first list vertical and the nested lists horizontal.
  • treeviz(): Binary trees visualized top-down, as is conventional in computer science.
  • objviz(): Generic object graph visualization that knows how to find lists of lists (like lolviz()) and linked lists. Trees are also displayed reasonably, but with left-to-right orientation instead of top-down (a limitation of graphviz).

  • callsviz(): Visualize the call stack and anything pointed to by globals, locals, or parameters. You can limit the variables displayed by passing in a list of varnames as an argument.
  • callviz(): Same as callsviz() but displays only the current function's frame or you can pass in a Python stack frame object to display.
  • matrixviz(data): Display a numpy ndarray; only 1D and 2D at the moment.
  • strviz(): Show a string like an array.

From generic Python, call the view() method on the returned object to display the visualization. From Jupyter, pass the returned object to IPython.display.display(), or simply make it the last expression in a notebook cell.

Check out the examples.

Installation

First you need graphviz (more specifically the dot executable). On a Mac it's easy:

$ brew install graphviz

Then just install the lolviz Python package:

$ pip install lolviz

or upgrade to the latest version:

$ pip install -U lolviz

Usage

From within generic Python, you can get a window to pop up using the view() method:

from lolviz import *
data = ['hi','mom',{3,4},{"parrt":"user"}]
g = listviz(data)
print(g.source) # if you want to see the graphviz source
g.view() # render and show graphviz.files.Source object

From within Jupyter notebooks you can skip the view() call, because Jupyter knows how to display graphviz.files.Source objects directly.

For more examples that you can cut and paste, please see the Jupyter notebook full of examples.

Preferences

There are global preferences you can set that affect the display for long values:

  • prefs.max_str_len (default 20): how many characters a value's string representation may have before it is abbreviated with ....
  • prefs.max_horiz_array_len (default 70): lists can quickly become too wide and distort the visualization. This preference sets how long the combined string representations of the list values can get before a vertical representation of the list is used.
  • prefs.max_list_elems (default 10): maximum number of elements shown for horizontal and vertical lists and sets.
  • prefs.float_precision (default 5): how many decimal places to show for floats.
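Assuming lolviz exposes these preferences on a module-level prefs object, as the names above suggest, a notebook cell adjusting them might look like the following sketch (a config fragment, not tested against every lolviz version):

```python
import lolviz

# abbreviate value strings after 10 characters instead of 20
lolviz.prefs.max_str_len = 10
# switch to a vertical layout sooner for wide lists
lolviz.prefs.max_horiz_array_len = 40
# show at most 5 elements per list/set
lolviz.prefs.max_list_elems = 5
# two decimal places for floats
lolviz.prefs.float_precision = 2
```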

Implementation notes

Mostly notes for parrt to remember things.

Graphviz

  • Ugh. shape=record means html-labels can't use ports. warning!

  • warning: <td> and </td> must be on same line or row is super wide!
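A hedged sketch of how to respect that second quirk when assembling HTML-like labels by hand: keep every `<td>...</td>` pair on one physical line and only break lines between rows. html_row and html_table are illustrative helpers, not lolviz internals.

```python
# Build an HTML-like graphviz label, keeping each <td>...</td> cell on
# a single physical line so graphviz doesn't render the row absurdly
# wide (the quirk noted above). Illustration only; lolviz's internal
# label builder differs.

def html_row(values):
    # no newlines inside or between a row's cells
    cells = "".join(f"<td>{v}</td>" for v in values)
    return f"<tr>{cells}</tr>"

def html_table(rows):
    # newlines are safe BETWEEN rows, never inside a cell
    body = "\n".join(html_row(r) for r in rows)
    return f'<<table border="0" cellborder="1" cellspacing="0">\n{body}\n</table>>'

label = html_table([["a", "b"], ["c", "d"]])
```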

Deploy

$ python setup.py sdist upload 

Or to install locally

$ cd ~/github/lolviz
$ pip install .

Comments
  • name 'unicode' is not defined with Python 3.6

    Hi, thanks for this great python package.

    I noticed the following error when I do this in Jupyter on Python 3.6.2:

    import lolviz
    lolviz.objviz('1234567890')
    
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-4-7a8f0cba785c> in <module>()
          1 import lolviz
    ----> 2 lolviz.objviz('1234567890')
    
    /usr/lib/python3.6/site-packages/lolviz.py in objviz(o, orientation)
        217 """ % orientation
        218     reachable = closure(o)
    --> 219     s += obj_nodes(reachable)
        220     s += obj_edges(reachable)
        221     s += "}\n"
    
    /usr/lib/python3.6/site-packages/lolviz.py in obj_nodes(nodes)
        229     # currently only making subgraph cluster for linked lists
        230     # otherwise it squishes trees.
    --> 231     max_edges_for_type,subgraphs = connected_subgraphs(nodes)
        232     c = 1
        233     for g in subgraphs:
    
    /usr/lib/python3.6/site-packages/lolviz.py in connected_subgraphs(reachable, varnames)
        785     of sets containing the id()s of all nodes in a specific subgraph
        786     """
    --> 787     max_edges_for_type = max_edges_in_connected_subgraphs(reachable, varnames)
        788 
        789     reachable = closure(reachable, varnames)
    
    /usr/lib/python3.6/site-packages/lolviz.py in max_edges_in_connected_subgraphs(reachable, varnames)
        839     """
        840     max_edges_for_type = defaultdict(int)
    --> 841     reachable = closure(reachable, varnames)
        842     reachable = [p for p in reachable if isplainobj(p)]
        843     for p in reachable:
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure(p, varnames)
        706     from but don't include frame objects.
        707     """
    --> 708     return closure_(p, varnames, set())
        709 
        710 
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure_(p, varnames, visited)
        710 
        711 def closure_(p, varnames, visited):
    --> 712     if p is None or isatom(p):
        713         return []
        714     if id(p) in visited:
    
    /usr/lib/python3.6/site-packages/lolviz.py in isatom(p)
        691 
        692 
    --> 693 def isatom(p): return type(p) == int or type(p) == float or type(p) == str or type(p) == unicode
        694 
        695 
    
    NameError: name 'unicode' is not defined
    
    py2py3 compatibility 
    opened by faultylee 5
  • Hidden values of nested table

    I find that a table in lolviz cannot show values when a cell in one of its rows is itself a list, as in:

    
    T = [
        ['11','12','13','14',['a','b','c'],'16']
    ]
    objviz(T)
    

    Only a, b, and c are shown; 11, 12, 13, 14, and 16 are not. Is this configurable? Thanks!

    question 
    opened by pytkr 2
  • Invalid syntax on Python 3.6 and lolviz 1.2.1

    Contd. from https://github.com/parrt/lolviz/issues/11#issuecomment-326474125

    Traceback (most recent call last):
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
    
      File "<ipython-input-1-ae470ca34f62>", line 1, in <module>
        from lolviz import *
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/lolviz.py", line 442
        print "hashcode =", hashcode(key)
                         ^
    SyntaxError: invalid syntax
    
    
    py2py3 compatibility 
    opened by srid 1
  • Can't install on Python 3

    This package is currently registered as Python 2.7 only in PyPI. This is what happens if you try to install it on Python 3.6:

    (venv) $ pip install lolviz
    Collecting lolviz
      Using cached lolviz-1.2.tar.gz
    lolviz requires Python '<3' but the running Python is 3.6.0
    

    I think it's just a matter of adding the Python 3 classifier in setup.py, because as far as I can see the code is fine for Python 3.

    duplicate 
    opened by miguelgrinberg 1
  • Dictionaries with tuple values are rendered incorrectly

    from lolviz import *
    
    dict_tuple_values = dictviz({'a': (1, 2)})
    dict_tuple_values.render(view=True, cleanup=True)
    

    It is rendered incorrectly (screenshot omitted).

    This only happens with 2-tuples. Any other tuple length is rendered as expected.

    This is the offending line: https://github.com/parrt/lolviz/blob/a6fc29b008a16993738416e793de71c3bff4175d/lolviz.py#L159

    What is the significance of ... and len(el) == 2 ?

    enhancement 
    opened by DeepSpace2 1
  • implement multi child tree

    Hi, I was looking for a package to visualize tree search algorithms recently, and I found this repository and really liked it. But for tree visualization, the treeviz function only supports binary trees whose child fields are named left and right.

    So I made some modifications to support multiple children and configurable child names. I added 2 new parameters to treeviz(): childfields and show_all_children. The variable names in childfields are recognized as child nodes. If show_all_children=False, only the child fields that actually exist are visualized; otherwise all the names in childfields are shown.

    I know you may be busy and this repository hasn't been updated for a long time. You can check these modifications whenever you are free. I would be glad to receive any suggestions from you.

    enhancement 
    opened by sunyiwei24601 5
  • Could we add a "super" display function that chooses the best one based on the datatype?

    Hello @parrt! I used your lolviz project a few years ago, and I rediscovered it today. It's awesome!

    Could we add a "super" display function that chooses the best one based on the datatype?

    When reading the documentation, it shows like 8 different functions, and I don't want to spend my time thinking about the name of one or another function for one or another datatype. What is described in this documentation is almost trivially translated to Python code.

    modes = [ "str", "matrix", "call", "calls", "obj", "tree", "lol", "list" ]
    
    def unified_lolviz(obj, mode=None):
        """ Unified function to display `obj` with lolviz, in Jupyter notebook only."""
    if mode == "str" or isinstance(obj, str):
            return strviz(obj)
        if mode == "matrix" or "<class 'numpy.ndarray'>" == str(type(obj)):
            # can't use isinstance(obj, np.ndarray) without import numpy!
            return matrixviz(obj)
    if mode == "call": return callviz()
    if mode == "calls": return callsviz()
        if mode == "lol" or isinstance(obj, list) and obj and isinstance(obj[0], list):
            # obj is a list, is non empty, and obj[0] is a list!
            return lolviz(obj)
        if mode == "list" or isinstance(obj, list):
            return listviz(obj)
        return objviz(obj)  # default
    

    So I'm opening this ticket: if you think this could be added to the library, can we discuss it here, and then I can take care of writing it, testing it, sending a pull-request, and you can merge, and then update on Pypi! What do you think?

    Regards from France, @Naereen

    enhancement 
    opened by Naereen 11
  • Create typed Class-Structure Diagram

    This isn't a bug, but a question for help / hints.

    I would like to use lolviz to create a graph of my Python class structure. Every class attribute is type annotated, so it should be possible to show their relationships without creating class instances. Here is a minimal example:

    from copy import deepcopy
    from datetime import datetime as dt
    from typing import List
    from lolviz import *
    class Workout:
        def __init__(
            self,
            date: dt.date,
            name: str = "",
            duration: int = 0
        ):
            # assert isinstance(tss, (np.number, int))
            self.date: dt.date = date
            self.name: str = name
            self.duration: int = duration  # in seconds
        self.done: bool = False
    
    class Athlete:
        def __init__(
            self,
            name: str,
            birthday: dt.date,
            sports: List[str]
        ):
            self.name: str = name
            self.birthday: dt.date = birthday
            self.sports: List[str] = sports
    
    
    class DataContainer:
        def __init__(self, 
                     athlete: Athlete, 
                     tasks: List[Workout] = [], 
                     fulfilled: List[Workout] = []):
            self.athelete: Athlete = athlete
            self.tasks: List[Workout] = [w for w in tasks if isinstance(w, Workout)]
            self.fulfilled: List[Workout] = [w for w in fulfilled if isinstance(w, Workout)]
    
    me = Athlete("nico", dt(1990,3,1).date(), sports=["running","climbing"])
    t1 = Workout(date=dt(2020,1,1).date(), name="5k Run")
    f1 = deepcopy(t1)
    f1.done = True
    dc1 = DataContainer(me, tasks=[t1], fulfilled=[f1])
    

    Now I can use objviz(dc1) to create such a diagram of the instance (screenshot omitted).

    What I actually would like to achieve is a command like classviz(DataContainer) which will give me a similar chart, but not with the actual attribute values but their types. For sure there will be other small changes, but that's the basic idea.

    What I already can do is something like:

    def get_types(annotated_class):
        return (annotated_class.__name__, {k: v.__name__ for k,v in annotated_class.__init__.__annotations__.items()})
    
    get_types(Workout)
    

    which gives me something like: ('Workout', {'date': 'date', 'name': 'str', 'duration': 'int'}). However, I can't find a proper way to create similar table elements that contain Workout in the header and the name-to-type mapping in the body.

    Can someone give me a hint on how to create such tables manually? I am also happy to receive any additional advice.

    feature 
    opened by krlng 0
  • visualization fails when variables contain "<" and ">" chars

    from lolviz import objviz
    a = {"hello": "<"}
    objviz(a).render()

    Error: Source.gv: syntax error in line 16 scanning a HTML string (missing '>'? bad nesting? longer than 16384?) String starting:<

    opened by ami-navon 1
    Releases(1.4)
    Owner: Terence Parr, creator of the ANTLR parser generator; professor of computer science and data science at the Univ of San Francisco, now working mostly on machine learning.