nfhs-scraper

for those who don't want to pay $10/month for high school game footage with ads

Overview

Disclaimer: I am in no way responsible for what you choose to do with this script and guide. I do not endorse circumventing paywalls or any related illegal activity; I am simply providing a Python script to those who are interested.

NFHS Network is "the leader in streaming Live and On Demand high school sports."

In short, you need a $10/month subscription to watch these games. As an athlete, I didn't want to pay $10 a month to watch my own games, with ads, so I made this.

Usage

Download the provided main.py file so you can run it yourself. Remember: whatever you do is your choice and your responsibility.

Navigate to https://www.nfhsnetwork.com/, find your school and sport, and select the game video you'd like to download.

You will find the game ID in the last portion of the URL.

e.g. https://www.nfhsnetwork.com/events/cool-high-school-cool-town/gam4576a0f402 -> game ID is gam4576a0f402
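If you'd rather grab it programmatically, a quick one-liner in Python does the trick (using the example URL above):

# Take the last path segment of the event URL - that's the game ID.
url = "https://www.nfhsnetwork.com/events/cool-high-school-cool-town/gam4576a0f402"
game_id = url.rstrip("/").rsplit("/", 1)[-1]
print(game_id)  # -> gam4576a0f402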


In the main.py file, do two things:

  • Change the game_id variable to your game ID.
  • Change the scrub_count variable to match how much of the game you'd like to download. The footage sometimes runs 1-2 hours past the end of the game, so you can usually trim that off by lowering the count.
    • Each scrub is 10 seconds long, so divide your desired video length in seconds by 10 to get the count (see the example below).
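For example, those two lines in main.py might end up looking like this (the game ID is the example from above; pick whatever count covers your game):

game_id = "gam4576a0f402"          # from the event URL
scrub_count = (2 * 60 * 60) // 10  # a 2-hour game at 10 s per scrub = 720 scrubs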

Run the Python file, and let the magic of computers do its thing. It can take a while, but by default the video will be saved to output.mp4 in the project directory.

How it works

With only a bit of reverse engineering, it isn't too hard to understand how NFHS streams video to the user, and why this script works.

NFHS requires a subscription to watch its videos, and with that subscription comes an API key used to fetch the stream playlist: a .m3u8 file that looks a little something like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.000000,
gamed408a95df_000000.ts
#EXTINF:10.000000,
gamed408a95df_000001.ts
#EXTINF:10.000000,
gamed408a95df_000002.ts

...and so on

Here, we notice a few things:

  • a) each media file is 10 seconds long, as specified by EXTINF and EXT-X-TARGETDURATION
  • b) the segment filenames are sequential, meaning we don't need the .m3u8 playlist at all; we can construct the URLs ourselves

In the network tab, while watching a game, I could see my browser requesting these files, which are hosted at https://cfscrubbed.nfhsnetwork.com/. I tried downloading one of the files myself and succeeded with no authentication needed. So, hypothetically, I could download every file and then patch them together into one big video.
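In code, the whole approach boils down to something like the sketch below. The flat URL layout is an assumption based on what the network tab showed; the raw MPEG-TS segments are simply concatenated byte-for-byte, which most players accept (you can remux with ffmpeg if yours doesn't):

import requests

game_id = "gamed408a95df"  # the example ID from the playlist above
scrub_count = 720          # how many 10-second segments to fetch

with open("output.mp4", "wb") as out:
    for i in range(scrub_count):
        url = f"https://cfscrubbed.nfhsnetwork.com/{game_id}_{i:06d}.ts"
        resp = requests.get(url)
        if resp.status_code != 200:
            break  # ran past the end of the stream
        out.write(resp.content)  # .ts segments concatenate cleanly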

Hence, nfhs-scraper. :)


Feel free to star this repo