# gladvent

An Advent of Code runner for Gleam.
## General Workflow
Where `X` is the day you'd like to add:

- run `gleam run new X`
- add your input to `input/<year>/X.txt`
- add your code to `src/aoc_<year>/day_X.gleam`
- run `gleam run run X`

Note: this method requires all day solutions to be in `src/aoc_<year>/` with filenames `day_X.gleam`, each solution module containing a `fn pt_1(String) -> Int` and a `fn pt_2(String) -> Int`.
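To illustrate the module contract above, here is a minimal hypothetical day module; the logic is a placeholder, not a real solution:

```gleam
import gleam/string

// Placeholder day module: both parts receive the raw input string
// and must return an Int, matching the signatures gladvent expects.
pub fn pt_1(input: String) -> Int {
  string.length(input)
}

pub fn pt_2(input: String) -> Int {
  string.length(input) * 2
}
```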
## Available commands

Gladvent provides two command groups, `new` and `run`:
### New

`new`: creates `src/aoc_<year>/day_<day>.gleam` and `input/<year>/<day>.txt` files that correspond to the specified days.

Format: `gleam run new a b c ...`
### Run

The `run` command expects input files to be in the `input/<year>` directory and code to be in `src/aoc_<year>/` (corresponding to the files created by the `new` command).

- `run`: runs the specified days. Format: `gleam run run a b c ...`
- `run all`: runs all registered days. Format: `gleam run run all`
Note:

- any triggered `assert`, `panic` or `todo` will be captured and printed, for example:

  ```
  Part 1: error: todo - unimplemented in module aoc_2024/day_1 in function pt_1 at line 2
  ```
## Fetching problem inputs

When stubbing out a new day's solution with the `new` command, you can use the `--fetch` flag to tell gladvent to fetch your problem input from the Advent of Code website.

Some things to note:

- The `AOC_COOKIE` environment variable must be set with your Advent of Code session cookie.
- Gladvent will only attempt to fetch your input if the input file for the requested day does not exist. This prevents accidental and redundant calls to the Advent of Code website; there should be no reason to fetch input data for the same day more than once.
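As a sketch of the workflow (exact flag placement may differ in your gladvent version), fetching day 1's input could look like:

```sh
# Your Advent of Code session cookie, copied from your browser.
export AOC_COOKIE="<session cookie>"

# Stub out day 1 and fetch its input in one step.
gleam run new 1 --fetch
```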
## Reusable parse functions
Gladvent supports modules with functions that provide a `pub fn parse(String) -> a`, where the type `a` matches the argument type of the runner functions `pt_1` and `pt_2`.
If this parse function is present, gladvent will pick it up and run it only once, providing its output to both runner functions.
An example looks like this:
```gleam
import gleam/int

pub fn parse(input: String) -> Int {
  let assert Ok(i) = int.parse(input)
  i
}

pub fn pt_1(input: Int) -> Int {
  input + 1
}

pub fn pt_2(input: Int) -> Int {
  input + 2
}
```
Note: gladvent now leverages Gleam's `export package-interface` functionality to type-check your `parse` and `pt_{1|2}` functions to make sure that they are compatible with each other.
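The same pattern scales to structured input types. A hypothetical sketch (not from the gladvent docs) that parses one integer per line and shares the list with both parts:

```gleam
import gleam/int
import gleam/list
import gleam/string

// Parse runs once; both parts receive the resulting List(Int).
pub fn parse(input: String) -> List(Int) {
  input
  |> string.split("\n")
  |> list.filter_map(int.parse)
}

pub fn pt_1(input: List(Int)) -> Int {
  int.sum(input)
}

pub fn pt_2(input: List(Int)) -> Int {
  list.length(input)
}
```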
## Defining expectations for easy refactoring
One of the most satisfying aspects of Advent of Code (for me), second only to that sweet feeling of first solving a problem, is iteration and refactoring.
Gladvent makes it easy to define expected outputs for all your solutions in your `gleam.toml`, giving you the confidence to refactor as much as you want without constantly comparing against your submissions on the Advent of Code website.
### Expectations in `gleam.toml`
Defining expectations is as simple as adding sections to your `gleam.toml` in the following format:

```toml
[gladvent.<year as int>]
1 = { pt_1 = <int or string>, pt_2 = <int or string> }
2 = { pt_1 = <int or string>, pt_2 = <int or string> }
3 = { pt_1 = <int or string>, pt_2 = <int or string> }
...
```
For example, to set the expectations for Dec 1st 2024 (2024 day 1) you would add something like:

```toml
[gladvent.2024]
1 = { pt_1 = 1, pt_2 = 2 }
```
When running, gladvent will detect whether a specific day has its expectations set and, if so, will print out the result for you.
Let's say your computed solution for 2024 day 1 is actually 1 for `pt_1` and 3 for `pt_2`; the output will look like this:
```
Ran 2024 day 1:
Part 1: ✅ met expected value: 1
Part 2: ❌ unmet expectation: got 3, expected 2
```
## Example inputs
Sometimes it's helpful to run Advent of Code solutions against example inputs to verify expectations.
Gladvent now provides an `--example` flag in both the `new` and `run` commands to conveniently support that workflow without needing to modify your actual problem input files.
Example input files will be generated at, and run from, `input/<year>/<day>.example.txt`.

Note: gladvent will not compare your solution output against the expectations defined in `gleam.toml` when running in example mode.
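As a sketch (exact flag placement may vary), the example-input workflow could look like:

```sh
# Create input/<year>/1.example.txt alongside the day 1 stub.
gleam run new 1 --example

# Run day 1 against the example input instead of the real one.
gleam run run 1 --example
```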
## Display execution time

Use the `--timed` flag when running your solutions to display how long each part took to solve.
For example:
```
Ran 2024 day 1:
Part 1: ✅ met expected value: 1579939 (in 885 µs)
Part 2: ✅ met expected value: 20351745 (in 605 µs)
```
Note: as the output of the `parse` function is reused for both parts, its execution time is not included in the displayed time.