WordleSolver

An algorithm that can solve the word puzzle Wordle with an optimal number of guesses on HARD mode.

How to use the program


Clone this project with git clone and run python3 solver.py in the terminal.

When you run the program, the algorithm will provide you with an educated guess. Type that guess into Wordle. Once you see which letters were right, enter the result back into the program, and it will give you another guess. This process continues until you have solved the puzzle!

Inputting the result of your guesses is easy. If a character is gray, enter '_', if a character is yellow, enter the lowercase letter, and if a character is green, enter the uppercase letter. For example, if the program told you to guess "aeros" and the result of the guess was:

[Image: Wordle result for "aeros" with "r" yellow and all other letters gray]

You would enter the result as: __r__

Here is another example:

[Image: Wordle result with "D" and "R" green, "k" yellow, and the other letters gray]

You would enter the result as: DR_k_
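
To make the encoding concrete, here is a small hypothetical decoder (not part of solver.py) that reads a result string in this format:

```python
# Hypothetical decoder for the result encoding described above:
# '_' = gray, lowercase = yellow, uppercase = green.
def parse_result(result):
    for pos, ch in enumerate(result, start=1):
        if ch == "_":
            print(f"spot {pos}: gray")
        elif ch.isupper():
            print(f"spot {pos}: '{ch.lower()}' is green")
        else:
            print(f"spot {pos}: '{ch}' is yellow")

parse_result("DR_k_")
# spot 1: 'd' is green
# spot 2: 'r' is green
# spot 3: gray
# spot 4: 'k' is yellow
# spot 5: gray
```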

How the algorithm works

Here's a quick run-down of how the algorithm works. We keep a list of words the answer could be and remove words from it until only one remains or we guess the right answer. Each word has a unique number associated with it. Using this number, we can quickly determine whether a word can still be the answer given the results of previous guesses; if it cannot, it is removed from the list. The key to the accuracy and efficiency of this algorithm is how this unique number is generated.

The number is the product of a few prime numbers, which lets us use modular arithmetic in a clever way! Each letter has six prime numbers associated with it: one "yellow" number and five "green" numbers. We use the yellow number when we know a letter is in the word but not where. We use one of the green numbers when we know a letter is in a specific spot. You can see these prime numbers in charDict.json.

To calculate the number of a word, we multiply together the yellow numbers of the characters that make up the word, as well as certain green numbers. Which green number we multiply by depends on the position the letter appears in. If the letter D appears in the first spot, we multiply by its 1st green number; if it instead appears in the last spot, we multiply by its 5th green number.

The reason we do this is that we can use modulo to check whether a word can be an answer given the result of another guess. For example, if we guessed "aeros" and the word we were trying to find was "drink", we would learn that r is somewhere in the word but not in the third spot. Say a word has number n. If n mod (r's yellow number) is not 0, that word does not contain r, so it cannot be the answer and we remove it from the list. Likewise, if n mod (r's third green number) is 0, the word has r in the third spot, so it cannot be the answer either. Similar logic applies when multiple letters come up yellow or some letters come up green.

The value of each word never changes, so we can process this information once and store it in a text file to be used later, which is what I did in wordList.txt! If you would like to use a different set of words than I did, feel free to change the words.txt file and run process.py to generate a new wordList file.
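
To make this concrete, here is a minimal runnable sketch of the encoding and the divisibility check. The primes below are assigned alphabetically purely for simplicity; the actual assignments live in charDict.json and are ordered by letter frequency, as described in the next section.

```python
# Minimal sketch of the prime encoding. Primes here are assigned
# alphabetically for simplicity; charDict.json orders them by frequency.

def first_primes(n):
    """Return the first n prime numbers."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Each letter gets 6 primes: one "yellow" and five positional "green" ones.
ps = first_primes(26 * 6)
char_dict = {}
for i, c in enumerate("abcdefghijklmnopqrstuvwxyz"):
    yellow, *greens = ps[i * 6:(i + 1) * 6]
    char_dict[c] = {"yellow": yellow, "green": greens}

def word_number(word):
    """Multiply each distinct letter's yellow prime and, for every
    position, the green prime of the letter occupying that spot."""
    n = 1
    for c in set(word):
        n *= char_dict[c]["yellow"]
    for pos, c in enumerate(word):
        n *= char_dict[c]["green"][pos]
    return n

# Guessing "aeros" against "drink" reveals a yellow 'r' in the third spot:
# a candidate must contain 'r' (divisible by r's yellow prime) but must
# not have 'r' in spot 3 (not divisible by r's 3rd green prime).
def survives(word):
    n = word_number(word)
    r = char_dict["r"]
    return n % r["yellow"] == 0 and n % r["green"][2] != 0

print(survives("drink"))  # True:  'r' present, not in the third spot
print(survives("aeros"))  # False: 'r' sits in the third spot
print(survives("plant"))  # False: no 'r' at all
```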

Optimizations

One way to make the algorithm take fewer guesses is to make smarter guesses. So, an optimization I decided to make is to take letter frequency into account: letters that appear more often are assigned smaller prime numbers. The primes associated with each letter aren't chosen arbitrarily and actually tell us some information. "e" is the most common letter, so it gets the six smallest primes. I sort the word list so the algorithm always guesses the word with the smallest number. As a result, the algorithm is more likely to guess a word containing "e" than one containing "q", since words with "e" will tend to have smaller numbers. This is good because "e" is much more likely to be in the answer than "q". Also, I only need to sort the list once in process.py, so there is no significant performance hit!
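
As a rough sketch, the frequency-aware assignment only changes the order in which letters receive their primes. This reuses first_primes and word_number from the sketch above; the ranking string here is a common English frequency ordering used purely for illustration, not the one behind charDict.json.

```python
# Reassign primes by (approximate) letter frequency so common letters get
# the smallest primes. The ranking string is illustrative only.
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

ps = first_primes(26 * 6)
char_dict = {}
for i, c in enumerate(FREQ_ORDER):
    yellow, *greens = ps[i * 6:(i + 1) * 6]
    char_dict[c] = {"yellow": yellow, "green": greens}

# Sorting once (as process.py does) puts words full of common letters first.
word_list = sorted(["queue", "arose", "later"], key=word_number)
print(word_list)  # "queue" sorts last: 'q' carries the largest primes
```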

A drawback of this approach is that words made up of repeated common letters get very low values and are guessed much more often. This is not good because words with repeating letters make it harder to narrow down our potential guesses! For example, consider the word "esses", which is made up of only the two most common letters. It's good that our guess consists of common letters, but bad that we only get information about two distinct letters. The way I fixed this is by multiplying the number of any word with a character repeated two or three times by a much bigger prime, so those words are weighed down and guessed less often.
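
A minimal sketch of that penalty might look like this; the penalty primes here are made up, not the values actually used in process.py.

```python
# Sketch of the repeat-letter penalty, reusing word_number from above.
from collections import Counter

DOUBLE_PENALTY = 7919   # hypothetical large prime for a doubled letter
TRIPLE_PENALTY = 17389  # hypothetical larger prime for a tripled letter

def penalized_number(word):
    n = word_number(word)
    most_repeats = max(Counter(word).values())
    if most_repeats == 2:
        n *= DOUBLE_PENALTY
    elif most_repeats >= 3:
        n *= TRIPLE_PENALTY
    return n

# "esses" is all common letters, but the repeats push its number up
# so it is guessed less often.
print(penalized_number("esses") > word_number("esses"))  # True
```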

Another optimization I made is taking into account how common a word is. The list contains a lot of niche words that are rarely used and are unlikely to be the answer to the puzzle. So, once the possible words have been narrowed down to fewer than a hundred, it makes sense to guess the more common words first. This is why I introduced a second number associated with each word: its frequency in Wikipedia articles. Once there are fewer than 100 words in the list, the list is re-sorted by this second number rather than the first, so each guess is the most common word remaining!
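
Here is a sketch of this two-phase guess selection, assuming each candidate is stored as a hypothetical (word, number, frequency) tuple; the names are illustrative, not taken from solver.py.

```python
# Sketch of the two-phase guess choice: prime product early,
# word frequency once the field has narrowed.
def next_guess(candidates):
    if len(candidates) < 100:
        # Endgame: prefer the most common remaining word.
        return max(candidates, key=lambda entry: entry[2])[0]
    # Otherwise: prefer the word with the smallest prime product.
    return min(candidates, key=lambda entry: entry[1])[0]

candidates = [("aeros", 1_234_567, 12), ("drink", 2_345_678, 340)]
print(next_guess(candidates))  # "drink": most common of the few remaining
```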

Further Optimizations

As I mentioned before, one of the optimizations I made was assigning smaller prime numbers to more common letters and sorting the list of words by each word's number. This is done just once per set of words in process.py and is very computationally efficient. However, if more accuracy is desired, the primes assigned to each letter could be regenerated after each guess, because the letter frequencies among the remaining words change as the list shrinks. This might increase accuracy slightly but would take much longer to process, which is why I opted against it: after each guess, I would have to recount the frequency of each letter, recalculate the value of every word, and then re-sort the entire list based on the new values.
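
For illustration, a per-guess re-ranking along those lines could look like the sketch below, reusing first_primes, char_dict, and word_number from the earlier sketch; rebuild_char_dict is a hypothetical helper, not something that exists in this project.

```python
# Hypothetical per-guess re-ranking, which this project deliberately
# skips because of the processing cost.
from collections import Counter

def rebuild_char_dict(order):
    """Reassign each letter's 6 primes in the given frequency order.
    Letters absent from `order` keep their previous primes."""
    ps = first_primes(26 * 6)
    for i, c in enumerate(order):
        yellow, *greens = ps[i * 6:(i + 1) * 6]
        char_dict[c] = {"yellow": yellow, "green": greens}

def rerank(candidates):
    # 1. Recount letter frequency across the remaining candidates.
    counts = Counter(ch for word in candidates for ch in word)
    # 2. Give the now-common letters the smallest primes.
    rebuild_char_dict("".join(c for c, _ in counts.most_common()))
    # 3. Recompute every word's number and re-sort the whole list.
    candidates.sort(key=word_number)
```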

Sources

  • Wordle is by PowerLanguage
  • List of 5-letter words is based on SOWPODS and was taken from Word Game Dictionary. I suspect that PowerLanguage used the same source for Wordle, as he used a similar source for another project.
  • The frequency of words was taken from lexepedia with a minimum frequency of 1, length of 5, and only includes Wiktionary Words.