Amber Electric Usage Summary

Produces a summary CSV report of an Amber Electric customer's energy consumption and cost data.

Overview

This is a command line tool that produces a summary CSV report of an Amber Electric customer's energy consumption and cost data.

You simply need to provide your Amber API token, and the tool will output a CSV like this for the last 12 months:

CHANNEL                         , 2020-09-01, 2020-09-02, 2020-09-03, ...
B4 (FEED_IN) Usage (kWh)        ,      1.351,      0.463,      0.447, ...
E3 (CONTROLLED_LOAD) Usage (kWh),      2.009,      2.669,      2.757, ...
E4 (GENERAL) Usage (kWh)        ,     20.400,     20.965,     16.011, ...

About Amber Electric

Amber Electric is an innovative energy retailer in Australia that gives customers access to the wholesale energy price as determined by the National Electricity Market. This gives customers the opportunity to reduce their bills and their reliance on fossil fuels by shifting their biggest energy usage to times of the day when energy is cheaper and greener.

Amber's API

Amber gives customers access to a LOT of their own data through their public Application Programming Interface (API).

This tool relies on you having access to Amber's API, which means you need to be an Amber customer, and you need to get an API token. But that's pretty easy. Start here.
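If you're curious what the tool is doing under the hood, here's a minimal Python sketch of fetching your own site and usage data. It assumes Amber's v1 REST endpoints and the requests library; the field names are taken from Amber's published API docs but may differ, so treat the developer documentation as the source of truth:

import requests

API_BASE = "https://api.amber.com.au/v1"  # assumed v1 base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# List the sites attached to your account and take the first one's ID.
sites = requests.get(f"{API_BASE}/sites", headers=HEADERS).json()
site_id = sites[0]["id"]

# Fetch usage records for that site over a date range.
usage = requests.get(
    f"{API_BASE}/sites/{site_id}/usage",
    headers=HEADERS,
    params={"startDate": "2020-09-01", "endDate": "2020-09-03"},
).json()

for record in usage:
    # Field names here are assumptions based on Amber's API docs.
    print(record["channelIdentifier"], record["date"], record["kwh"])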

How To Get The Tool

If you're a programmer comfortable with Git, I'm sure you already know how to get this code onto your machine from GitHub.

If you're not familiar with Git, you can download this code as a Zip file by clicking on this link. Once it's downloaded, unzip the file, which will create a directory containing all the files of this project.

How To Use It

Prerequisites

You'll need Python 3.9+ installed.

And an Amber API token (see above).

Setup

Using a terminal, in the directory of this project:

  1. Create a Python virtual environment with this command:

     python3.9 -m venv venv

  2. Start using the virtual environment with this command:

     source ./venv/bin/activate

  3. Install the required dependencies with this command:

     python -m pip install -r requirements.txt

Running The Tool

Using a terminal, in the directory of this project:

  1. Start using the virtual environment with this command:

     source ./venv/bin/activate

  2. Run the tool with this command, replacing YOUR_API_TOKEN with your own API token:

     python amber_usage_summary.py --api-token YOUR_API_TOKEN > my_amber_usage_data.csv

Running the above will save your summary consumption data for the last year to a file called my_amber_usage_data.csv in the same directory.

Options

Help

Run the script with the -h option to see its help page:

python amber_usage_summary.py -h

API Token File

If you'd prefer not to paste your API token into a terminal command, you can save it in a file called apitoken in the project's directory.
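For example, from a terminal in the project's directory (this assumes the tool tolerates the trailing newline that echo adds, which is typical of token-file readers):

echo "YOUR_API_TOKEN" > apitoken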

Costs Summary

By default, the tool just outputs energy consumption data. If you also want a summary of your cost data, add the --include-cost option:

python amber_usage_summary.py --include-cost
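As with the consumption-only report, you can redirect the output to a CSV file of your choosing:

python amber_usage_summary.py --include-cost > my_amber_usage_and_cost.csv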

Site Selection

If you have multiple sites in your Amber Electric account, you'll need to select one using the --site-id option:

python amber_usage_summary.py --site-id SITE_ID_YOU_WANT_DATA_FOR
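If you're not sure of your site ID, you can list the sites on your account by calling the API directly; for example, assuming Amber's v1 sites endpoint:

curl -H "Authorization: Bearer YOUR_API_TOKEN" https://api.amber.com.au/v1/sites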

Date Range

By default, the report includes the last 12 full calendar months of data, plus all of the current month's data up until yesterday. You can select the date range to include in the output by adding a start date and, optionally, an end date to the command.

python amber_usage_summary.py 2020-07-01 2021-06-30
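If you supply only a start date, the report should run from that date through to the default end date (yesterday):

python amber_usage_summary.py 2021-01-01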

Contributions

I'm open to accepting contributions that improve the tool.

If you're planning on altering the code with the intention of contributing the changes back, it'd be great to have a chat about it first to check we're on the same page about how the improvement might be added. It's probably easiest to create an issue describing your planned improvement (and being clear that you plan to implement it yourself).

License

All files in this project are licensed under the 3-clause BSD License. See LICENSE.md for details.
