Every year on February 7th, math enthusiasts worldwide (should) consider celebrating Euler’s Day or E-day. Among Euler’s many gifts to the (currently known) mathematical universe is the ever-popular number e, the natural logarithm base that is basically the rock star of calculus, complex analysis, continuous growth models, compound interest, and (much) more. That irrational number shows up in places we might or might not expect. This blog post (notebook) explores some formulas and plots related to Euler’s number, e.
js-d3-list-line-plot(10.rand xx 40, background => 'none', stroke-width => 2)
my $title-color = 'Silver';
my $background = '#1F1F1F';
Formulas and computation
Raku has the built-in mathematical constant e (the base of the natural logarithm). Both the ASCII “e” and the Unicode “𝑒” (“MATHEMATICAL ITALIC SMALL E”, U+1D452) can be used:
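Here is a minimal sketch (the printed digits below are what a recent Rakudo shows and may vary slightly):

```raku
say e;         # 2.718281828459045
say 𝑒 == e;    # True — the Unicode spelling names the same constant
say e ** 2;    # 7.38905609893065
say exp(2);    # the same value, via the exp routine
say log(e);    # 1 — e is the base of the natural logarithm
```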
js-d3-list-line-plot(@log-spiral, :$background, color => 'red', title => 'Exponential spiral', |%opts) ~
js-d3-list-line-plot(@arch-spiral, :$background, color => 'blue', title => 'Archimedean spiral', |%opts)
The catenary is the curve that a hanging flexible wire or chain assumes when supported at its ends and acted upon by a uniform gravitational force. It is given by the formula:
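$$ y = a \cosh\left(\frac{x}{a}\right) = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right) $$

(The plot below uses $e^{x} + e^{-x}$, i.e. $2\cosh(x)$, which corresponds to $a = 1$ up to a vertical scaling.)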
Here is a corresponding plot:
#% js
js-d3-list-line-plot((-1, -0.99 ... 1).map({ [$_, e ** $_ + e ** (-$_)] }), :$background, stroke-width => 4, :grid-lines, title => 'Catenary curve', :$title-color)
This document (notebook) shows the transformation of a movie dataset into a form more suitable for making a movie recommender system. (It builds upon Part 1 of the blog post series.)
Remark: All three notebooks feature the same introduction, setup, and references sections in order to make it easier for readers to browse, access, or reproduce the content.
Remark: The series data files can be found in the folder “Data” of the GitHub repository “RakuForPrediction-blog”, [AAr1].
The notebook series can be used in several ways:
Just reading this introduction and then browsing the notebooks
Reading only this (data transformations) notebook in order to see how data wrangling is done
Evaluating all three notebooks in order to learn and reproduce the computational steps in them
Outline
Here are the transformation, data analysis, and machine learning steps taken in the notebook series, [AAn1, AAn2, AAn3]:
Ingest the data — Part 1
Shape size and summaries
Numerical columns transformation
Renaming columns to have more convenient names
Separating the non-uniform genres column into movie-genre associations
Into long format
Basic data analysis — Part 1
Number of movies per year distribution
Movie-genre distribution
Pareto principle adherence for movie directors
Correlation between number of votes and rating
Association Rules Learning (ARL) — Part 1
Converting long format dataset into “baskets” of genres
Most frequent combinations of genres
Implications between genres
E.g., a biography-movie is also a drama-movie 94% of the time
LLM-derived dictionary of most commonly used ARL measures
Recommender system creation — Part 2
Conversion of numerical data into categorical data
Application of one hot embedding
Experimenting / observing recommendation results
Getting familiar with the movie data by computing profiles for sets of movies
Relationships graphs — Part 3
Find the nearest neighbors for every movie in a certain range of years
Make the corresponding nearest neighbors graph
Using different weights for the different types of movie metadata
Visualize largest components
Make and visualize graphs based on different filtering criteria
Comments & observations
This notebook series started as a demonstration of making a “real life” data Recommender System (RS).
The data transformations notebook would not be needed if the data had a “nice” tabular form.
Since the data has aggregated values in its “genres” column, typical long-format transformations have to be done.
On the other hand, the actor names per movie are not aggregated, but spread out over three columns.
Both cases represent a single movie metadata type.
For both, long-format transformations (or similar) are needed in order to make an RS.
After a corresponding Sparse Matrix Recommender (SMR) is made, its sparse matrix can be used to do additional analysis.
Such extensions are: deriving clusters, making and visualizing graphs, making and evaluating suitable classifiers.
In most “real life” data processing, most of the data transformation steps listed above are taken.
ARL can also be used for deriving recommendations if the data is large enough.
The SMR object is based on finding Nearest Neighbors over “bags of tags.”
Latent Semantic Indexing (LSI) tag-weighting functions are applied.
The data does not have movie-viewer data, hence only item-item recommenders are created and used.
One hot embedding is a common technique, which in this notebook is done via cross-tabulation.
The categorization of numerical data means putting numbers into suitable bins or “buckets.”
The bin or bucket boundaries can be on a regular grid or a quantile grid.
For categorized numerical data, one-hot embedding matrices can be processed to increase the similarity between numeric buckets that are close to each other.
Nearest-neighbors based recommenders — like SMR — can be used as classifiers.
These are the so called K-Nearest Neighbors (KNN) classifiers.
Although the data is small (both row-wise & column-wise) we can consider making classifiers predicting IMDB ratings or number of votes.
Using the recommender matrix, similarities between different movies can be computed and a corresponding graph can be made.
Centrality analysis and simulations of random walks over the graph can be made.
Like Google’s “Page-rank” algorithm.
The relationship graphs can be used to visualize the “structure” of the movie dataset.
Alternatively, clustering can be used.
Hierarchical clustering might be of interest.
If the movies had reviews or summaries associated with them, then Latent Semantic Analysis (LSA) could be applied.
SMR can use both LSA-terms-based and LSA-topics-based representations of the movies.
LLMs can be used to derive the LSA representation.
Again, not done in this series of notebooks.
See the video “Raku RAG demo”, [AAv4], for such a demonstration.
Setup
Load packages used in the notebook:
use Math::SparseMatrix;
use ML::SparseMatrixRecommender;
use ML::SparseMatrixRecommender::Utilities;
use Statistics::OutlierIdentifiers;
One way to investigate (browse) the data is to make a recommender system and use it to explore different aspects of the movie dataset, like movie profiles and the distribution of nearest-neighbor similarities.
Make the recommender
In order to make a more meaningful recommender we put the values of the different numerical variables into “buckets” — i.e. intervals derived from the value distribution of each variable. The boundaries of the intervals can form a regular grid, correspond to quantile values, or be specially made. Here we use quantiles:
my @bucketVars = <score votes_count reviews_count>;
my @dsMovieDataLongForm2;
sink for @dsMovieDataLongForm.map(*<TagType>).unique -> $var {
    if $var ∈ @bucketVars {
        my %bucketizer = ML::SparseMatrixRecommender::Utilities::categorize-to-intervals(
            @dsMovieDataLongForm.grep(*<TagType> eq $var).map(*<Tag>)».Numeric,
            probs => (0..6) >>/>> 6,
            :interval-names):pairs;
        @dsMovieDataLongForm2.append(@dsMovieDataLongForm.grep(*<TagType> eq $var).map(*.clone).map({ $_<Tag> = %bucketizer{$_<Tag>}; $_ }))
    } else {
        @dsMovieDataLongForm2.append(@dsMovieDataLongForm.grep(*<TagType> eq $var))
    }
}
Here are the recommender sub-matrices dimensions (rows and columns):
.say for $smrObj.take-matrices.deepmap(*.dimensions).sort(*.key)
# actor => (5043 6256)
# country => (5043 66)
# director => (5043 2399)
# genre => (5043 26)
# language => (5043 48)
# reviews_count => (5043 7)
# score => (5043 7)
# title => (5043 4917)
# votes_count => (5043 7)
# year => (5043 92)
Note that the sub-matrices of “reviews_count”, “score”, and “votes_count” have a small number of columns, corresponding to the number of probabilities specified when categorizing into intervals.
Enhance with one-hot embedding
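Adjacent “year” buckets are made to overlap by “smearing” the one-hot sub-matrix: each year column also contributes, with weight 1/2, to its two neighboring year columns, so that movies from consecutive years are treated as more similar: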
my $mat = $smrObj.take-matrices<year>;

my $matUp = Math::SparseMatrix.new(
    diagonal => 1/2 xx ($mat.columns-count - 1), k => 1,
    row-names => $mat.column-names,
    column-names => $mat.column-names
);

my $matDown = $matUp.transpose;

# mat = mat + mat . matUp + mat . matDown
$mat = $mat.add($mat.dot($matUp)).add($mat.dot($matDown));
This document (notebook) demonstrates the functions of “Graph::RandomMaze”, [AAp1], for generating and displaying random mazes. The methodology and implementations of maze creation based on random rectangular and hexagonal grid graphs are described in detail in the blog post “Day 24 – Maze Making Using Graphs”, [AA1], and in the Wolfram notebook “Maze Making Using Graphs”, [AAn1].
This document (notebook) shows transformations of a movie dataset into a format more suitable for data analysis and for making a movie recommender system. It is the first of a three-part series of notebooks that showcase Raku packages for doing Data Science (DS). The notebook series as a whole goes through this general DS loop:
Remark: All three notebooks feature the same introduction, setup, and references sections in order to make it easier for readers to browse, access, or reproduce the content.
Remark: The series data files can be found in the folder “Data” of the GitHub repository “RakuForPrediction-blog”, [AAr1].
The notebook series can be used in several ways:
Just reading this introduction and then browsing the notebooks
Reading only this (data transformations) notebook in order to see how data wrangling is done
Evaluating all three notebooks in order to learn and reproduce the computational steps in them
Outline
Here are the transformation, data analysis, and machine learning steps taken in the notebook series, [AAn1, AAn2, AAn3]:
Ingest the data — Part 1
Shape size and summaries
Numerical columns transformation
Renaming columns to have more convenient names
Separating the non-uniform genres column into movie-genre associations
Into long format
Basic data analysis — Part 1
Number of movies per year distribution
Movie-genre distribution
Pareto principle adherence for movie directors
Correlation between number of votes and rating
Association Rules Learning (ARL) — Part 1
Converting long format dataset into “baskets” of genres
Most frequent combinations of genres
Implications between genres
E.g., a biography-movie is also a drama-movie 94% of the time
LLM-derived dictionary of most commonly used ARL measures
Recommender system creation — Part 2
Conversion of numerical data into categorical data
Application of one hot embedding
Experimenting / observing recommendation results
Getting familiar with the movie data by computing profiles for sets of movies
Relationships graphs — Part 3
Find the nearest neighbors for every movie in a certain range of years
Make the corresponding nearest neighbors graph
Using different weights for the different types of movie metadata
Visualize largest components
Make and visualize graphs based on different filtering criteria
Comments & observations
This notebook series started as a demonstration of making a “real life” data Recommender System (RS).
The data transformations notebook would not be needed if the data had a “nice” tabular form.
Since the data has aggregated values in its “genres” column, typical long-format transformations have to be done.
On the other hand, the actor names per movie are not aggregated, but spread out over three columns.
Both cases represent a single movie metadata type.
For both, long-format transformations (or similar) are needed in order to make an RS.
After a corresponding Sparse Matrix Recommender (SMR) is made, its sparse matrix can be used to do additional analysis.
Such extensions are: deriving clusters, making and visualizing graphs, making and evaluating suitable classifiers.
In most “real life” data processing, most of the data transformation steps listed above are taken.
ARL can also be used for deriving recommendations if the data is large enough.
The SMR object is based on finding Nearest Neighbors over “bags of tags.”
Latent Semantic Indexing (LSI) tag-weighting functions are applied.
The data does not have movie-viewer data, hence only item-item recommenders are created and used.
One hot embedding is a common technique, which in this notebook is done via cross-tabulation.
The categorization of numerical data means putting numbers into suitable bins or “buckets.”
The bin or bucket boundaries can be on a regular grid or a quantile grid.
For categorized numerical data, one-hot embedding matrices can be processed to increase the similarity between numeric buckets that are close to each other.
Nearest-neighbors based recommenders — like SMR — can be used as classifiers.
These are the so called K-Nearest Neighbors (KNN) classifiers.
Although the data is small (both row-wise & column-wise) we can consider making classifiers predicting IMDB ratings or number of votes.
Using the recommender matrix, similarities between different movies can be computed and a corresponding graph can be made.
Centrality analysis and simulations of random walks over the graph can be made.
Like Google’s “Page-rank” algorithm.
The relationship graphs can be used to visualize the “structure” of the movie dataset.
Alternatively, clustering can be used.
Hierarchical clustering might be of interest.
If the movies had reviews or summaries associated with them, then Latent Semantic Analysis (LSA) could be applied.
SMR can use both LSA-terms-based and LSA-topics-based representations of the movies.
LLMs can be used to derive the LSA representation.
Again, not done in this series of notebooks.
See the video “Raku RAG demo”, [AAv4], for such a demonstration.
Setup
Load packages used in the notebook:
use Math::SparseMatrix;
use ML::SparseMatrixRecommender;
use ML::SparseMatrixRecommender::Utilities;
use Statistics::OutlierIdentifiers;
my $title-color = 'Silver';
my $stroke-color = 'SlateGray';
my $tooltip-color = 'LightBlue';
my $tooltip-background-color = 'none';
my $tick-labels-font-size = 10;
my $tick-labels-color = 'Silver';
my $tick-labels-font-family = 'Helvetica';
my $background = 'White'; #'#1F1F1F';
my $color-scheme = 'schemeTableau10';
my $color-palette = 'Inferno';
my $edge-thickness = 3;
my $vertex-size = 6;
my $mmd-theme = q:to/END/;
%%{
init: {
'theme': 'forest',
'themeVariables': {
'lineColor': 'Ivory'
}
}
}%%
END
my %force = collision => {iterations => 0, radius => 10},link => {distance => 180};
my %force2 = charge => {strength => -30, iterations => 4}, collision => {radius => 50, iterations => 4}, link => {distance => 30};
sink my %opts = :$background, :$title-color, :$edge-thickness, :$vertex-size;
Ingest data
Ingest the movie data:
# Download and unzip: https://github.com/antononcube/RakuForPrediction-blog/raw/refs/heads/main/Data/movie_data.csv.zip
my $fileName = $*HOME ~ '/Downloads/movie_data.csv';
my @dsMovieData = data-import($fileName, headers => 'auto');
deduce-type(@dsMovieData)
#% html
@dsMovieGenreLongForm.head(8)
==> to-html(field-names => <Item TagType Tag>)
| Item | TagType | Tag       |
|------|---------|-----------|
| 0    | genre   | Action    |
| 0    | genre   | Adventure |
| 0    | genre   | Fantasy   |
| 0    | genre   | Sci-Fi    |
| 1    | genre   | Action    |
| 1    | genre   | Adventure |
| 1    | genre   | Fantasy   |
| 2    | genre   | Action    |
Statistics
In this section we compute different statistics that should give us a better idea of what the data is.
Show movie years distribution:
#% js
js-d3-bar-chart(@dsMovieData.map(*<title_year>.Str).&tally.sort(*.head), title => 'Movie years distribution', :$title-color, :1200width, :$background)
~
js-d3-box-whisker-chart(@dsMovieData.map(*<title_year>)».Int.grep(*>1916), :horizontal, :$background)
Show movie genre distribution:
#% js
my %genreCounts = cross-tabulate(@dsMovieGenreLongForm, 'Item', 'Tag', :sparse).column-sums(:p);
js-d3-bar-chart(%genreCounts.sort, title => 'Genre distributions', :$background, :$title-color)
Check Pareto principle adherence for director names:
#% js
pareto-principle-statistic(@dsMovieData.map(*<director_name>))
==> js-d3-list-line-plot(
:$background,
title => 'Pareto principle adherence for movie directors',
y-label => 'probability', x-label => 'index',
:grid-lines, :5stroke-width, :$title-color)
Plot the number of IMDB votes vs IMDB scores:
#% js
@dsMovieData.map({ %( x => $_<num_voted_users>».Num».log(10), y => $_<imdb_score>».Num ) })
==> js-d3-list-plot(
:$background,
title => 'Number of IMDB votes vs IMDB scores',
x-label => 'Number of votes, lg', y-label => 'score',
:grid-lines, point-size => 4, :$title-color)
Association rules learning
It is interesting to see which genres are closely associated with each other. One way to find those associations is to use Association Rules Learning (ARL).
For each movie make a “basket” of genres:
my @baskets = cross-tabulate(@dsMovieGenreLongForm, 'Item', 'Tag').values».keys».List;
@baskets».elems.&tally
Here is a table showing the formulas for the Association Rules Learning measures (confidence, lift, leverage, conviction), along with their minimum value, maximum value, and value of indifference:
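| Measure | Formula | Min | Max | Value of indifference |
|---|---|---|---|---|
| Confidence | $\text{conf}(A \Rightarrow B) = \dfrac{\text{support}(A \cup B)}{\text{support}(A)}$ | $0$ | $1$ | $\text{support}(B)$ |
| Lift | $\text{lift}(A \Rightarrow B) = \dfrac{\text{support}(A \cup B)}{\text{support}(A)\,\text{support}(B)}$ | $0$ | $\infty$ | $1$ |
| Leverage | $\text{lev}(A \Rightarrow B) = \text{support}(A \cup B) - \text{support}(A)\,\text{support}(B)$ | $-0.25$ (depends on supports) | $0.25$ (depends on supports) | $0$ |
| Conviction | $\text{conv}(A \Rightarrow B) = \dfrac{1 - \text{support}(B)}{1 - \text{conf}(A \Rightarrow B)}$ | $0$ | $\infty$ | $1$ |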
Explanation of terms:
support(X) = P(X), the proportion of transactions containing itemset X.
¬A = complement of A (transactions not containing A).
Value of indifference generally means the value where the measure indicates independence or no association.
For Confidence, the baseline is support(B) (probability of B alone).
For Lift and Conviction, 1 indicates no association.
Leverage’s minimum and maximum depend on the supports of A and B.
LLM prompt
Here is the prompt used to generate the ARL metrics dictionary table above:
Give the formulas for the Association Rules Learning measures: confidence, lift, leverage, and conviction. In a Markdown table for each measure give the min value, max value, value of indifference. Make sure the formulas are in LaTeX code.
Export transformed data
Here we export the transformed data in order to streamline the computations in the other notebooks of the series:
That effort requires verifying that the Monad laws apply to certain constructs of the Raku language. This document (notebook) defines the Monad laws and provides several verifications for different combinations of operators and coding styles.
This document (notebook) focuses on built-in Raku features that can be used in monadic programming. It does not cover Raku packages that enhance Raku’s functionality or syntax for monadic programming. Also, since Raku is a feature-rich language, not all approaches to making monadic pipelines are considered — only the main and obvious ones. (I.e. the ones I consider “main and obvious.”)
The examples in this document are very basic. Useful, more complex (yet, elegant) examples of monadic pipelines usage in Raku are given in the notebook “Monadic programming examples”, [AAn1].
Context
Before going further, let us list the applications of monadic programming we consider:
Graceful failure handling
Rapid specification of computational workflows
Algebraic structure of written code
Remark: Those applications are discussed in [AAv5] (and its future Raku version.)
As a tools maker for Data Science (DS) and Machine Learning (ML), I am very interested in Point 1; but as a “simple data scientist” I am mostly interested in Point 2.
That said, a large part of my Raku programming has been dedicated to rapid and reliable code generation for DS and ML by leveraging the algebraic structure of corresponding software monads — i.e. Point 3. (See [AAv2, AAv3, AAv4].) For me, first and foremost, monadic programming pipelines are just convenient interfaces to computational workflows. Often I make software packages that allow “easy”, linear workflows that can have very involved computational steps and multiple tuning options.
Dictionary
Monadic programming A method for organizing computations as a series of steps, where each step generates a value along with additional information about the computation, such as possible failures, non-determinism, or side effects. See [Wk1].
Monadic pipeline Chaining of operations with a certain syntax. Monad laws apply loosely (or strongly) to that chaining.
Uniform Function Call Syntax (UFCS) A feature that allows both free functions and member functions to be called using the same object.function() method call syntax.
Method-like call Same as UFCS. A Raku example: [3, 4, 5].&f1.$f2.
Verifications overview
Raku — as expected — has multiple built-in mechanisms for doing monadic programming. A few of those mechanisms are “immediate”, others require adherence to certain coding styles or very direct and simple definitions. Not all of the Monad law verifications have to be known (or understood) by a programmer. Here is a table that summarizes them:
| Type | Description |
|------|-------------|
| Array and `==>` | Most immediate, clear-cut |
| `&unit` and `&bind` | Definitions according to the Monad laws; programmable semicolon |
| Any and `andthen` | General, built-in monad! |
| Styled OOP | Standard and straightforward |
The verification for each approach is given as an array of hashmaps with keys “name”, “input”, “expected”. The values of “input” are strings which are evaluated with the lines:
use MONKEY-SEE-NO-EVAL;
@tbl .= map({ $_<output> = EVAL($_<input>); $_ });
EVAL is used in order to have easily verifiable “single origin of truth.”
The HTML verification tables are obtained with the function proof-table, which has several formatting options. (See the section “Setup”.)
What is a monad? (informally)
Many programmers are familiar with monadic pipelines, although they might know them under different names. This section has monadic pipeline examples from Unix, R, and Raku that should help with understanding the more formal definitions in the next section.
Unix examples
Most (old and/or Raku) programmers are familiar with Unix programming. Hence, they are familiar with monadic pipelines.
Pipeline (|)
The Unix pipeline semantics and syntax was invented and introduced soon after the first Unix release. Monadic pipelines (or uniform function call) have very similar motivation and syntax.
Here is an example of a Unix pipeline in which the output of one shell program is the input for the next:
# Fri Dec 13 07:59:16 EST 2024
# Tue Dec 24 14:24:20 EST 2024
# Sat Dec 14 07:57:41 EST 2024
That UNIX command:
Finds in the current directory all files with names that end with “nb”
Picks from the list produced by step 1 only the rows that contain the string “chebyshev”
Gives the dates of modification of those files
Reverse-Polish calculator (dc)
One of the oldest surviving Unix programs is dc (desk calculator), which uses reverse-Polish notation. Here is an example of the command 3 5 + 4 * p given to dc that prints out 32, i.e. (3 + 5) * 4:
#% bash
echo '3 5 + 4 * p' | dc
# 32
We can see the dc command as a pipeline:
The numbers are functions that place the corresponding values in the context (which is a stack)
The space between the symbols is the pipeline constructor
Data wrangling
Posit‘s constellation of R packages “tidyverse” facilitates pipeline construction of data wrangling workflows. Here is an example in which columns of the data frame dfTitanic are renamed, then its rows are filtered and grouped, and finally, the corresponding group sizes are shown:
dfTitanic %>%
dplyr::rename(age = passengerAge, sex = passengerSex, class = passengerClass) %>%
dplyr::filter(age > 10) %>%
dplyr::group_by(class, sex) %>%
dplyr::count()
Here is a corresponding Raku pipeline in andthen style (using subs of “Data::Reshapers”, [AAp5]):
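The corresponding Raku code cell is not included here; as a stand-in, here is a minimal plain-Raku sketch of the same rename-filter-group-count workflow chained with andthen (made-up sample data, and no “Data::Reshapers” subs):

```raku
# Made-up sample standing in for dfTitanic (hypothetical data)
my @dfTitanic =
    %(passengerAge => 29, passengerSex => 'female', passengerClass => '1st'),
    %(passengerAge =>  2, passengerSex => 'male',   passengerClass => '3rd'),
    %(passengerAge => 41, passengerSex => 'male',   passengerClass => '2nd'),
    %(passengerAge => 35, passengerSex => 'female', passengerClass => '1st');

# Rename columns, filter rows, group, and count -- chained with andthen
@dfTitanic.map({ %(age => .<passengerAge>, sex => .<passengerSex>, class => .<passengerClass>) }) andthen
    .grep(*<age> > 10) andthen
    .classify({ .<class> ~ ' / ' ~ .<sex> }) andthen
    .map({ .key => .value.elems }).Hash andthen
    .say;
```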
In this document a monad is any combination of a symbol $m$ and two operators, unit and bind, that adhere to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [PW1] and phrased in Raku terms. In order to be brief, we deliberately do not consider the equivalent monad definition based on unit, join, and map (also given in [PW1].)
Here are operators for a monad associated with a certain class M:
monad unit function is unit(x) = M.new(x)
monad bind function is a rule like bind(M:D $x, &f) = &f($x), with &f($x) ~~ M:D giving True.
Note that:
the function bind unwraps the content of M and gives it to the function &f;
the functions given as second arguments to bind (see &f) are responsible for returning instances of the monad class M as results.
Here is an illustration formula showing a monad pipeline:
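$$ M.new(x) ⟹ f_1 ⟹ f_2 ⟹ \cdots ⟹ f_n $$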
From the definition and formula it should be clear that if for the result f(x) of bind the test f(x) ~~ M:D is True then the result is ready to be fed to the next binding operation in monad’s pipeline. Also, it is easy to program the pipeline functionality with reduce:
reduce(&bind, M.new(3), |[&f1, &f2, &f3])
The monad laws
The monad laws definitions are taken from [H1] and [H3]. In the monad laws given below “⟹” is for the monad’s binding operation and x ⟼ expr is for a function in anonymous form.
Here is a table with the laws:
| name | LHS | RHS |
|------|-----|-----|
| Left identity | unit m ⟹ f | f m |
| Right identity | m ⟹ unit | m |
| Associativity | (m ⟹ f) ⟹ g | m ⟹ (x ⟼ f x ⟹ g) |
Setup
Here we load packages for tabulating the verification results:
use Data::Translators;
use Hilite::Simple;
Here is a sub that is used to tabulate the Monad laws proofs:
#| Tabulates Monad laws verification elements.
sink sub proof-table(
    @tbl is copy,               #= Array of hashmaps with keys <name input expected>
    Bool:D :$raku = True,       #= Whether .raku should be invoked in the columns "output" and "expected"
    Bool:D :$html = True,       #= Whether to return an HTML table
    Bool:D :$highlight = True   #= Whether to highlight the Raku code in the HTML table
) {
    if $raku {
        @tbl .= map({ $_<output> = $_<output>.raku; $_ });
        @tbl .= map({ $_<expected> = $_<expected>.raku; $_ });
    }
    return @tbl unless $html;
    my @field-names = <name input output expected>;
    my $res = to-html(@tbl, :@field-names, align => 'left');
    if $highlight {
        $res = reduce( { $^a.subst($^b.trans([ '<', '>', '&' ] => [ '&lt;', '&gt;', '&amp;' ]), $^b.&hilite) }, $res, |@tbl.map(*<input>) );
        $res = $res.subst('<pre class="nohighlights">', :g).subst('</pre>', :g)
    }
    return $res;
}
Array and ==>
The monad laws are satisfied in Raku for:
Every function f that takes an array argument and returns an array
The unit operation being Array
The feed operator (==>) being the binding operation
| Name | Input | Output |
|------|-------|--------|
| Left identity | `Array($a) ==> &f()` | `&f($a)` |
| Right identity | `$a ==> { Array($_) }()` | `$a` |
| Associativity LHS | `Array($a) ==> &f1() ==> &f2()` | `&f2(&f1($a))` |
| Associativity RHS | `Array($a) ==> { &f1($_) ==> &f2() }()` | `&f2(&f1($a))` |
Here is an example:
#% html
# Operators in the monad space
my &f = { Array($_) >>~>> '_0' }
my &f1 = { Array($_) >>~>> '_1' }
my &f2 = { Array($_) >>~>> '_2' }
# Some object
my $a = 5; #[3, 4, 'p'];
# Verification table
my @tbl =
{ name => 'Left identity', :input( 'Array($a) ==> &f()' ), :expected( &f($a) )},
{ name => 'Right identity', :input( '$a ==> { Array($_) }()' ), :expected( $a )},
{ name => 'Associativity LHS', :input( 'Array($a) ==> &f1() ==> &f2()' ), :expected( &f2(&f1($a)) )},
{ name => 'Associativity RHS', :input( 'Array($a) ==> { &f1($_) ==> &f2() }()' ), :expected( &f2(&f1($a)) )}
;
use MONKEY-SEE-NO-EVAL;
@tbl .= map({ $_<output> = EVAL($_<input>); $_ });
@tbl ==> proof-table(:html, :raku, :highlight)
| name | input | output | expected |
|------|-------|--------|----------|
| Left identity | `Array($a) ==> &f()` | `$["5_0"]` | `$["5_0"]` |
| Right identity | `$a ==> { Array($_) }()` | `$[5]` | `5` |
| Associativity LHS | `Array($a) ==> &f1() ==> &f2()` | `$["5_1_2"]` | `$["5_1_2"]` |
| Associativity RHS | `Array($a) ==> { &f1($_) ==> &f2() }()` | `$["5_1_2"]` | `$["5_1_2"]` |
Remark: In order to keep the verification simple I did not want to extend it to cover Positional and Seq objects. In some sense, that is also covered by Any and andthen verification. (See below.)
&unit and &bind
From the formal Monad definition we can define the corresponding functions &unit and &bind and verify the Monad laws with them:
#% html
# Monad operators
my &unit = { Array($_) };
my &bind = { $^b($^a) };
# Operators in the monad space
my &f = { Array($_) >>~>> '_0' }
my &f1 = { Array($_) >>~>> '_1' }
my &f2 = { Array($_) >>~>> '_2' }
# Some object
my $a = (3, 4, 'p');
# Verification table
my @tbl =
{ name => 'Left identity', :input( '&bind( &unit($a), &f)' ), :expected( &f($a) )},
{ name => 'Right identity', :input( '&bind( $a, &unit)' ), :expected( $a )},
{ name => 'Associativity LHS', :input( '&bind( &bind( &unit($a), &f1), &f2)' ), :expected( &f2(&f1($a)) )},
{ name => 'Associativity RHS', :input( '&bind( &unit($a), { &bind(&f1($_), &f2) })' ), :expected( &f2(&f1($a)) )}
;
use MONKEY-SEE-NO-EVAL;
@tbl .= map({ $_<output> = EVAL($_<input>); $_ });
@tbl ==> proof-table(:html, :raku, :highlight)
| name | input | output | expected |
|------|-------|--------|----------|
| Left identity | `&bind( &unit($a), &f)` | `$["3_0", "4_0", "p_0"]` | `$["3_0", "4_0", "p_0"]` |
| Right identity | `&bind( $a, &unit)` | `$[3, 4, "p"]` | `$(3, 4, "p")` |
| Associativity LHS | `&bind( &bind( &unit($a), &f1), &f2)` | `$["3_1_2", "4_1_2", "p_1_2"]` | `$["3_1_2", "4_1_2", "p_1_2"]` |
| Associativity RHS | `&bind( &unit($a), { &bind(&f1($_), &f2) })` | `$["3_1_2", "4_1_2", "p_1_2"]` | `$["3_1_2", "4_1_2", "p_1_2"]` |
To achieve the “monadic pipeline look and feel” with &unit and &bind, certain infix definitions must be implemented. For example: sub infix:<⟹> ($m, &f) { &bind($m, &f) }. Here is a full verification example:
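The original verification cell is not included here; the following minimal sketch illustrates the idea (the operator name ⟹ is an arbitrary choice for this illustration):

```raku
my &unit = { Array($_) };
my &bind = { $^b($^a) };

# A user-defined infix that plays the binding role
sub infix:<⟹> ($m, &f) { &bind($m, &f) }

# Operators in the monad space
my &f1 = { Array($_) >>~>> '_1' };
my &f2 = { Array($_) >>~>> '_2' };

say unit(5) ⟹ &f1 ⟹ &f2;                       # [5_1_2]
say (unit(5) ⟹ &f1 ⟹ &f2) eqv &f2(&f1(5));     # True
```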
The andthen operator returns Empty if the first argument is undefined, otherwise the last argument. The last argument is returned as-is, without being checked for definedness at all. Short-circuits. The result of the left side is bound to $_ for the right side, or passed as arguments if the right side is a Callable, whose count must be 0 or 1.
A main feature of andthen is to return Empty if its first argument is not defined. That is, actually, very “monadic” — graceful handling of errors is one of the main reasons to use Monadic programming. It is also limiting, because the monad failure is “just” Empty. That is mostly a theoretical limitation; in practice Raku has many other elements, like notandthen and orelse, that can shape the workflows to the programmer’s desires.
The Monad laws hold for Any.new as the unit operation and andthen as the binding operation.
#% html
# Operators in the monad space
my &f = { Array($_) >>~>> '_0' }
my &f1 = { Array($_) >>~>> '_1' }
my &f2 = { Array($_) >>~>> '_2' }
# Some object
my $a = (3, 9, 'p');
# Verification table
my @tbl =
{ name => 'Left identity', :input( '$a andthen .&f' ), :expected( &f($a) )},
{ name => 'Right identity', :input( '$a andthen $_' ), :expected( $a )},
{ name => 'Associativity LHS', :input( '$a andthen .&f1 andthen .&f2' ), :expected( &f2(&f1($a)) )},
{ name => 'Associativity RHS', :input( '$a andthen { .&f1 andthen .&f2 }' ), :expected( &f2(&f1($a)) )}
;
use MONKEY-SEE-NO-EVAL;
@tbl .= map({ $_<output> = EVAL($_<input>); $_ });
@tbl ==> proof-table(:html, :raku, :highlight)
| name | input | output | expected |
|------|-------|--------|----------|
| Left identity | `$a andthen .&f` | `$["3_0", "9_0", "p_0"]` | `$["3_0", "9_0", "p_0"]` |
| Right identity | `$a andthen $_` | `$(3, 9, "p")` | `$(3, 9, "p")` |
| Associativity LHS | `$a andthen .&f1 andthen .&f2` | `$["3_1_2", "9_1_2", "p_1_2"]` | `$["3_1_2", "9_1_2", "p_1_2"]` |
| Associativity RHS | `$a andthen { .&f1 andthen .&f2 }` | `$["3_1_2", "9_1_2", "p_1_2"]` | `$["3_1_2", "9_1_2", "p_1_2"]` |
Monad class and method call
Raku naturally supports method chaining using dot notation (.) for actual methods defined on a class or type. Hence, a more “standard” way for doing Monadic programming is to use a monad class, say M, and method call:
M.new(...) plays the monad unit role — i.e. it uplifts objects into monad’s space
$m.f(...) (where $m ~~ M:D) plays the binding role if all methods of M return M:D objects
The axioms verification needs to be done using a particular class definition format (see the example below):
1. Left identity applies:
M.new($x).f means applying M’s method f to the lifted value $x.
2. Right identity applies by using M.new
3. Associativity axiom holds
For RHS, again, method-like call (call as method) is used.
Here is an example:
#% html
# Monad class definition
my class M {
    has $.context;
    multi method new($context) { self.bless(:$context) }
    multi method new(M:D $m) { self.bless(context => $m.context) }
    method f()  { $!context = $!context >>~>> '_0'; self }
    method f1() { $!context = $!context >>~>> '_1'; self }
    method f2() { $!context = $!context >>~>> '_2'; self }
}
# Some object
my $a = 5; #[5, 3, 7];
# Verification table
my @tbl =
{ name => 'Left identity', :input( 'M.new($a).f' ), :expected( M.new($a).f )},
{ name => 'Right identity', :input( 'my M:D $x .= new($a)' ), :expected( M.new($a) )},
{ name => 'Associativity LHS', :input( '(M.new($a).f1).f2' ), :expected( (M.new($a).f1).f2 )},
{ name => 'Associativity RHS', :input( 'M.new($a).&{ $_.f1.f2 }' ), :expected( M.new($a).&{ $_.f1.f2 } )}
;
use MONKEY-SEE-NO-EVAL;
@tbl .= map({ $_<output> = EVAL($_<input>); $_ });
@tbl ==> proof-table(:html, :raku, :highlight)
| name | input | output | expected |
|------|-------|--------|----------|
| Left identity | `M.new($a).f` | `M.new(context => "5_0")` | `M.new(context => "5_0")` |
| Right identity | `my M:D $x .= new($a)` | `M.new(context => 5)` | `M.new(context => 5)` |
| Associativity LHS | `(M.new($a).f1).f2` | `M.new(context => "5_1_2")` | `M.new(context => "5_1_2")` |
| Associativity RHS | `M.new($a).&{ $_.f1.f2 }` | `M.new(context => "5_1_2")` | `M.new(context => "5_1_2")` |
Method-like calls
Instead of M methods f_i(...) we can have corresponding functions &f_i(...) and “method-like call” chains:
M.new(3).&f1.&f2.&f3
That is a manifestation of Raku’s principle “everything is an object.” Here is an example:
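One possible example (the original code cell is not shown; root and cube are ad-hoc subs introduced here for illustration):

```raku
my sub root($x) { $x.sqrt }
my sub cube($x) { $x ** 3 }

say [6, 3, 12].elems.&root.&cube;   # 5.196152422706632
```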
Remark: A simpler version of the code above is: [6, 3, 12].elems.sqrt.&{ $_ ** 3 }.
Conclusion
It is encouraging — both readability-wise and usability-wise — that Raku code can be put into easy-to-read-and-understand, pipeline-like computational steps. Raku supports that in its Functional Programming (FP) and Object-Oriented Programming (OOP) paradigms. The support can also be seen from these programming-idiomatic and design-architectural points of view:
Any computation via:
andthen and ==>
Method-like calls or UFCS
For special functions and (gradually typed) arguments via:
sub and infix
OOP
Caveats
There are a few caveats to be kept in mind when using andthen and ==> (in Raku’s language version “6.d”.)
| does it run? | `andthen` | `==>` |
|---|---|---|
| no | `(^100).pick xx 5 andthen .List andthen { say "max {$_.max}"; $_} andthen $_».&is-prime` | `(^100).pick xx 5 ==> {.List} ==> { say "max {$_.max}"; $_} ==> { $_».&is-prime }` |
| yes | `(^100).pick xx 5 andthen .List andthen { say "max {$_.max}"; $_}($_) andthen $_».&is-prime` | `(^100).pick xx 5 ==> {.List}() ==> { say "max {$_.max}"; $_}() ==> { $_».&is-prime }()` |
[SH2] Sheng Liang, Paul Hudak, Mark Jones, “Monad transformers and modular interpreters”, (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333–343. doi:10.1145/199448.199528.
[PW1] Philip Wadler, “The essence of functional programming”, (1992), 19’th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.
“LLM::Functions” supports high level LLM function calling via llm-synthesize and llm-synthesize-with-tools. (The latter provides more options for the tool invocation process like max-iterations or overriding tool specs.)
At this point “LLM::Functions” supports function calling in the styles of OpenAI’s ChatGPT and Google’s Gemini. If the LLM configuration is not set with the names “ChatGPT” or “Gemini”, then the function calling style used is that of ChatGPT. (Many LLM providers — other than OpenAI and Gemini — tend to adhere to OpenAI’s API.)
Remark: LLM “function calling” is also known as LLM “tools” or “LLM tool invocation.”
In this document, non-trivial Stoichiometry computations are done with the Raku package “Chemistry::Stoichiometry”, [AAp4]. Related plots are done with the Raku package “JavaScript::D3”, [AAp6].
Big picture
Inversion of control is a way to characterize LLM function calling. This means the LLM invokes functions or subroutines that operate on an external system, such as a local computer, rather than within the LLM provider’s environment. See the section “Outline of the overall process” of “LLM function calling workflows (Part 1, OpenAI)”, [AA1].
Remark: The following Software Framework building principles (or mnemonic slogans) apply to LLM function calling:
“Don’t call us, we’ll call you.” (The Hollywood Principle)
“Leave the driving to us.” (Greyhound Lines, Inc.)
The whole series
This document is the fourth of the LLM function calling series, [AA1 ÷ AA4]. The other three show lower-level LLM function calling workflows.
| Section | Description |
|---|---|
| Define package functions as tools | Shows how to define LLM-tools |
| Stoichiometry by LLM | Invoking LLM requests with LLM tools |
| “Thoughtful” response | Elaborated LLM answer based on LLM-tools results |
| Adding plot tools | Enhancing the LLM answers with D3.js plots |
Setup
Load packages:
use JSON::Fast;
use LLM::Functions;
use LLM::Tooling;
use Chemistry::Stoichiometry;
use JavaScript::D3;
Define LLM access configurations:
sink my $conf41-mini = llm-configuration('ChatGPT', model => 'gpt-4.1-mini', :8192max-tokens, temperature => 0.4);
sink my $conf-gemini-flash = llm-configuration('Gemini', model => 'gemini-2.0-flash', :8192max-tokens, temperature => 0.4);
The package “Chemistry::Stoichiometry”, [AAp4], provides element data, a grammar (or parser) for chemical formulas, and subs for computing molecular masses and balancing equations. Here is an example of calling molecular-mass:
molecular-mass("SO2")
# 64.058
Balance chemical equation:
'Al + O2 -> Al2O3'
==> balance-chemical-equation
# [4*Al + 3*O2 -> 2*Al2O3]
Define package functions as tools
Define a few tools based on chemistry computation subs:
sink my @tools =
LLM::Tool.new(&molecular-mass),
LLM::Tool.new(&balance-chemical-equation)
;
Undefined type of parameter ⎡$spec⎦; continue assuming it is a string.
Make an LLM configuration with the LLM-tools:
sink my $conf = llm-configuration($conf41-mini, :@tools);
Remark: When llm-synthesize is given LLM configurations with LLM tools, it hands over the process to llm-synthesize-with-tools. This function then begins the LLM-tool interaction loop.
Stoichiometry by LLM
Here is a prompt requesting to compute molecular masses and to balance a certain chemical equation:
sink my $input = "What are the masses of SO2, O3, and C2H5OH? Also balance: C2H5OH + O2 = H2O + CO2."
The LLM invocation and result:
llm-synthesize(
[$input, llm-prompt('NothingElse')('JSON')],
e => $conf,
form => sub-parser('JSON'):drop)
Remark: In order to see the LLM-tool interaction use the Boolean option (adverb) :echo of llm-synthesize.
“Thoughtful” response
Here is a very informative, “thoughtful” response for a quantitative Chemistry question:
#% markdown
my $input = "How many molecules a kilogram of water has? Use LaTeX for the formulas. (If any.)";
llm-synthesize($input, e => $conf)
==> { .subst(/'\[' | '\]'/, '$$', :g).subst(/'\(' | '\)'/, '$', :g) }() # Make sure LaTeX code has proper fences
Adding plot tools
It would be interesting (or fancy) to add a plotting tool. We can use text-list-plot of “Text::Plot”, [AAp5], or js-d3-list-plot of “JavaScript::D3”, [AAp6]. For both, the automatically derived tool specs — via the sub llm-tool-definition used by LLM::Tool — are somewhat incomplete. Here is the auto-result for js-d3-list-plot:
{
  "function": {
    "strict": true,
    "parameters": {
      "additionalProperties": false,
      "required": [
        "$data",
        ""
      ],
      "type": "object",
      "properties": {
        "$data": {
          "description": "",
          "type": "string"
        },
        "": {
          "description": "",
          "type": "string"
        }
      }
    },
    "type": "function",
    "name": "js-d3-list-plot",
    "description": "Makes a list plot (scatter plot) for a list of numbers or a list of x-y coordinates."
  },
  "type": "function"
}
The automatic tool-spec for js-d3-list-plot can be replaced with this spec:
my $spec = q:to/END/;
{
  "type": "function",
  "function": {
    "name": "js-d3-list-plot",
    "description": "Creates D3.js code for a list-plot of the given arguments.",
    "parameters": {
      "type": "object",
      "properties": {
        "$x": {
          "type": "array",
          "description": "A list of x-coordinates or x-labels",
          "items": {
            "anyOf": [
              { "type": "string" },
              { "type": "number" }
            ]
          }
        },
        "$y": {
          "type": "array",
          "description": "A list of y-coordinates",
          "items": {
            "type": "number"
          }
        }
      },
      "required": ["$x", "$y"]
    }
  }
}
END
my $t = LLM::Tool.new(&text-list-plot);
$t.json-spec = $spec;
Though, it is easier and more robust to define a new function that delegates to js-d3-list-plot — or another plotting function — and does some additional input processing that anticipates LLM-derived argument values:
#| Make a string that represents a list-plot of the given arguments.
my sub data-plot(
Str:D $x, #= A list of comma separated x-coordinates or x-labels
Str:D $y, #= A list of comma separated y-coordinates
Str:D :$x-label = '', #= Label of the x-axis
Str:D :$y-label = '', #= Label of the y-axis
Str:D :$title = '', #= Plot title
) {
my @x = $x.split(/<[\[\],"]>/, :skip-empty)».trim.grep(*.chars);
my @y = $y.split(/<[\[\],"]>/, :skip-empty)».trim».Num;
my @points = (@x Z @y).map({ %( variable => $_.head, value => $_.tail ) });
js-d3-bar-chart(@points, :$x-label, :$y-label, title-color => 'Gray', background => '#1F1F1F', :grid-lines)
}
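As a quick sanity check outside of the LLM loop, the new sub can be called directly with arguments shaped the way an LLM would pass them (hypothetical string values; the masses are approximate):

```raku
data-plot(
    '["SO2", "O3", "C2H5OH"]',
    '[64.06, 48.0, 46.07]',
    x-label => 'molecule',
    y-label => 'mass, g/mol',
    title   => 'Molecular masses'
);
```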
Here we add the new tool to the tool list above:
sink my @tool-objects =
LLM::Tool.new(&molecular-mass),
LLM::Tool.new(&balance-chemical-equation),
LLM::Tool.new(&data-plot);
Here we make an LLM request for calculation of chemical molecule masses and corresponding plotting — note that we require a dictionary of the masses and the plot to be returned:
my $input = q:to/END/;
What are the masses of SO2, O3, Mg2, and C2H5OH?
Make a plot of the obtained quantities: x-axis for the molecules, y-axis for the masses.
The plot has to have appropriate title and axes labels.
Return a JSON dictionary with keys "masses" and "plot".
END
# LLM configuration with tools
my $conf = llm-configuration($conf41-mini, tools => @tool-objects);
# LLM invocation
my $res = llm-synthesize([
$input,
llm-prompt('NothingElse')('JSON')
],
e => $conf,
form => sub-parser('JSON'):drop
);
# Type/structure of the result
deduce-type($res)
sink my %rules =
    poet1 => "Write a short poem about summer.",
    poet2 => "Write a haiku about winter.",
    poet3 => sub ($topic, $style) {
        "Write a poem about $topic in the $style style."
    },
    poet4 => {
        llm-function => { llm-synthesize('You are a famous Russian poet. Write a short poem about playing bears.') },
        test-function => -> $with-russian { $with-russian ~~ Bool:D && $with-russian || $with-russian.Str.lc ∈ <true yes> }
    },
    judge => sub ($poet1, $poet2, $poet3, $poet4) {
        [
            "Choose the composition you think is best among these:\n\n",
            "1) Poem1: $poet1",
            "2) Poem2: $poet2",
            "3) Poem3: {$poet4.defined && $poet4 ?? $poet4 !! $poet3}",
            "and copy it:"
        ].join("\n\n")
    },
    report => {
        eval-function => sub ($poet1, $poet2, $poet3, $poet4, $judge) {
            [
                '# Best poem',
                'Three poems were submitted. Here are the statistics:',
                to-html( ['poet1', 'poet2', $poet4.defined && $poet4 ?? 'poet4' !! 'poet3'].map({ [ name => $_, |text-stats(::('$' ~ $_))] })».Hash.Array, field-names => <name chars words lines> ),
                '## Judgement',
                $judge
            ].join("\n\n")
        },
    }
;
Remark: This is a documentation example — I want it to be seen that $poet4 can be undefined. That hints that the corresponding sub is not always evaluated. (Because of the result of the corresponding test function.)
Make the graph:
my $gBestPoem = LLM::Graph.new(%rules)
Now, to make the execution quicker, we assign the poems (instead of the LLM generating them):
# Poet 1
my $poet1 = q:to/END/;
Golden rays through skies so blue,
Whispers warm in morning dew.
Laughter dances on the breeze,
Summer sings through rustling trees.
Fields of green and oceans wide,
Endless days where dreams abide.
Sunset paints the world anew,
Summer’s heart in every hue.
END
# Poet 2
my $poet2 = q:to/END/;
Silent snowflakes fall,
Blanketing the earth in white,
Winter’s breath is still.
END
# Poet 3
my $poet3 = q:to/END/;
There once was a game on the ice,
Where players would skate fast and slice,
With sticks in their hands,
They’d score on the stands,
Making hockey fans cheer twice as nice!
END
# Poet 4
sink my $poet4 = q:to/END/;
В лесу играют медведи —
Смех разносится в тиши,
Тяжело шагают твердо,
Но в душе — мальчишки.
Плюшевые лапы сильны,
Игривы глаза блестят,
В мире грёз, как в сказке дивной,
Детство сердце охраняет.
END
sink my $judge = q:to/END/;
The 3rd one.
END
Graph evaluation
Evaluate the LLM graph with input arguments and intermediate node results:
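The original evaluation cell is not included here; a sketch of such a call (hypothetical, with the pre-assigned node results passed as named arguments):

```raku
my $res = $gBestPoem(:$poet1, :$poet2, :$poet3, :$poet4, :$judge);
```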
One of the “standard” things to do with an Agentic Artificial Intelligence (AI) system is to summarize (large) texts using different Large Language Model (LLM) agents.
This (computational Markdown) document illustrates how to specify an LLM graph for deriving comprehensive summaries of large texts. The LLM graph is based on different LLM- and non-LLM functions. The Raku package “LLM::Graph” is used, [AAp1].
Using the LLM graph is an alternative to the Literate programming based solutions shown in [AA1, AAn1].
Setup
Load the Raku packages needed for the computations below:
use LLM::Graph;
use LLM::Functions;
use LLM::Prompts;
use LLM::Tooling;
use Data::Importers;
use Data::Translators;
Define an LLM-access configuration:
sink my $conf41-mini = llm-configuration('ChatGPT', model => 'gpt-4.1-mini', temperature => 0.55, max-tokens => 4096);
Procedure outline
For a given URL, file path, or text a comprehensive text summary document is prepared in the following steps (executed in accordance with the graph below):
User specifies an input argument ($_ in the graph)
LLM classifies the input as “URL”, “FilePath”, “Text”, or “Other”
The text is ingested
If the obtained label is different from “Text”
Using asynchronous LLM computations different summaries are obtained
The title of the summary document can be user specified
my $url = 'https://raw.githubusercontent.com/antononcube/RakuForPrediction-blog/refs/heads/main/Data/Graph-neat-examples-in-Raku-Set-2-YouTube.txt';
my $txtFocus = data-import($url);
text-stats($txtFocus)
# (chars => 5957 words => 1132 lines => 157)
Remark: The function data-import is provided by the Raku package “Data::Importers”, [AAp4].
This blog post introduces and exemplifies the Raku package “LLM::Graph”, which is used to efficiently schedule and combine multiple LLM generation steps.
The package provides the class LLM::Graph with which computations are orchestrated.
Creation of an LLM::Graph object in which “node_i” evaluates fun_i with results from parent nodes:
LLM::Graph.new({name_1 => fun_1, ...})
LLM::Graph objects are callables. Getting the result of a graph on input:
LLM::Graph.new(...)(input)
Details and options
An LLM::Graph enables efficient scheduling and integration of multiple LLM generation steps, optimizing evaluation by managing the concurrency of LLM requests.
Using LLM::Graph requires (LLM) service authentication and internet connectivity.
Authentication and internet connectivity are not required if all graph nodes are non-LLM computation specs.
Possible values of the node function spec fun_i are:
| fun_i | Interpretation |
|---|---|
| `llm-function(...)` | an llm-function for LLM submission |
| `sub (...) {...}` | a sub for Raku computation submission |
| `%(key_i => val_i ...)` | a Map with detailed node specifications nodespec |
Possible node specifications keys in nodespec are:
| Key | Interpretation |
|---|---|
| “eval-function” | arbitrary Raku sub |
| “llm-function” | LLM evaluation via an llm-function |
| “listable-llm-function” | threaded LLM evaluation on list input values |
| “input” | explicit list of nodes required as sub arguments |
| “test-function” | whether the node should run |
| “test-function-input” | explicit list of nodes required as test arguments |
Each node must be defined with only one of “eval-function”, “llm-function”, or “listable-llm-function”.
The “test-function” specification makes a node evaluation conditional on the results from other nodes.
Possible “llm-function” specifications prompt_i include:
| prompt_i | Interpretation |
|---|---|
| “text” | static text |
| `["text1", ...]` | a list of strings |
| `llm-prompt("name")` | a repository prompt |
| `sub ($arg1..) {"Some $arg1 text"}` | templated text |
| `llm-function(...)` | an LLM::Function object |
Any “node_i” result can be provided in input as a named argument. input can have one positional argument and multiple named arguments.
LLM::Graph objects have the attribute llm-evaluator that is used as a default (or fallback) LLM evaluator object. (See [AAp1].)
The Boolean option “async” in LLM::Graph.new can be used to specify if the LLM submissions should be made asynchronous.
The class Promise is used.
Usage examples
Three poets
Make an LLM graph with three different poets, and a judge that selects the best of the poet-generated poems:
use LLM::Graph;
use Graph;
my %rules =
    poet1 => "Write a short poem about summer.",
    poet2 => "Write a haiku about winter.",
    poet3 => sub ($topic, $style) {
        "Write a poem about $topic in the $style style."
    },
    judge => sub ($poet1, $poet2, $poet3) {
        [
            "Choose the composition you think is best among these:\n\n",
            "1) Poem1: $poet1",
            "2) Poem2: $poet2",
            "3) Poem3: $poet3",
            "and copy it:"
        ].join("\n\n")
    };
my $gBestPoem = LLM::Graph.new(%rules);
# {eval-function => sub { }, input => [poet1 poet3 poet2], result => I think Poem1 is the best composition among these. Here's the poem:
#
# Golden sun above so bright,
# Warmth that fills the day with light,
# Laughter dancing on the breeze,
# Whispers through the swaying trees.
#
# Fields alive with blooms in cheer,
# Endless days that draw us near,
# Summer’s song, a sweet embrace,
# Nature’s smile on every face., spec-type => (Routine), test-function-input => [], wrapper => Routine::WrapHandle.new}
Further examples
The following notebooks provide more elaborate examples:
Since the very beginning, the functions produced by “LLM::Functions” were actually blocks (Block:D). It had been on my TODO list for a long time to produce functors (function objects) instead of blocks. For “LLM::Graph” that is/was necessary in order to make the node-specs processing more adequate.
So, llm-function produces functors (LLM::Function objects) by default now.
The option “type” can be used to get blocks.
No need for topological sorting
I thought that I should use the graph algorithms for topological sorting in order to navigate node dependencies during evaluation.
It turned out that this is not necessary — simple recursion is sufficient.
From the nodes specs, a directed graph (a Graph object) is made.
Graph‘s method reverse is used to get the directed computational dependency graph.
That latter graph is used in the node-evaluation recursion.
Wrapping “string templates”
It is convenient to specify LLM functions with “string templates.”
Since there are no separate “string template” objects in Raku, subs or blocks are used.
For example:
sub ($country, $year) {"What is the GDP of $country in $year"} (sub)
{"What is the GDP of $^a in $^b?"} (block)
String template subs are wrapped to be executed first and then the result is LLM-submitted.
Since the blocks cannot be wrapped, currently “LLM::Graph” refuses to process them.
It is planned for later versions of “LLM::Graph” to process blocks.
Special graph plotting
Of course, it is nice to have the LLM-graphs visualized.
Instead of the generic graph visualization provided by the package “Graph” (method dot), a more informative graph plot is produced in which the different types of nodes have different shapes.
The graph vertex shapes help distinguish LLM-nodes from just-Raku-nodes.
Also, test function dependencies are designated with dashed arrows.
The shapes in the graph plot can be tuned by the user.
Literate Programming (LP), [Wk1], blends code and documentation into a narrative, prioritizing human readability. Code and explanations are interwoven, with tools extracting code for compilation and documentation for presentation, enhancing clarity and maintainability.
LP is commonly employed in scientific computing and data science for reproducible research and open access initiatives. Today, millions of programmers use literate programming tools.
This document (notebook) discusses executable documents parameterization — or parameterized reports — provided by “Text::CodeProcessing”, [AAp1].
Remark: Providing report parameterization has been in my TODO list since the beginning of programming “Text::CodeProcessing”. I finally did it in order to facilitate parameterized Large Language Model (LLM) workflows. See the LLM template “LLM-comprehensive-summary-Raku.md”.
The document has three main sections:
Using YAML document header to specify parameters
Description and examples
LLM templates with parameters
Operating System (OS) shell execution with specified parameters
Remark: The programmatically rendered Markdown is put within three-dots separators.
Setup
Load packages:
use Text::CodeProcessing;
use Lingua::NumericWordForms;
YAML front-matter with parameters
For a given text or file we can execute that text or file and produce its woven version using:
The sub StringCodeChunksEvaluation in a Raku session
The Command Line Interface (CLI) script file-code-chunks-eval in an OS shell
Consider the following Markdown text (of a certain file):
sink my $txt = q:to/END/;
---
title: Numeric word forms generation (template)
author: Anton Antonov
date: 2025-06-19
params:
sample-size: 5
min: 100
max: 10E3
to-lang: "Russian"
---
Generate a list of random numbers:
```raku
use Data::Generators;
my @ns = random-real([%params<min>, %params<max>], %params<sample-size>)».floor
```
Convert to numeric word forms:
```raku
use Lingua::NumericWordForms;
.say for @ns.map({ $_ => to-numeric-word-form($_, %params<to-lang>) })
```
END
The parameters of that executable document are given in YAML format — similar to “parameterized reports” of R Markdown documents. (Introduced and provided by Posit, formerly RStudio.)
Declaring parameters:
Parameters are declared using the params field within the YAML header of the document.
For example, the text above creates the parameter “sample-size” and assigns it the default value 5.
Using parameters in code:
Parameters are made available within the Raku environment as a read-only hashmap named %params.
To access a parameter in code, call %params<parameter-name>.
Setting parameter values:
To create a report that uses a new set of parameter values add:
%params argument to StringCodeChunksEvaluation
--params argument to the CLI script file-code-chunks-eval
Here is the woven (or executed) version of the text:
use Data::Generators;
my @ns = random-real([100, 10000], 5)».floor
# [3925 6533 3215 2983 1395]
Convert to numeric word forms:
use Lingua::NumericWordForms;
.say for @ns.map({ $_ => to-numeric-word-form($_, 'Russian') })
# 3925 => три тысячи девятьсот двадцать пять
# 6533 => шесть тысяч пятьсот тридцать три
# 3215 => три тысячи двести пятнадцать
# 2983 => две тысячи девятьсот восемьдесят три
# 1395 => одна тысяча триста девяносто пять
Remark: In order to make the results easier to read, the YAML header was removed (with subst).
Here we change parameters — different sample size and language for the generated word forms:
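A sketch of such a re-weaving call (assuming the Markdown text above is in $txt and that the target language is supported by “Lingua::NumericWordForms”):

```raku
#% markdown
StringCodeChunksEvaluation(
    $txt,
    'markdown',
    params => { sample-size => 3, to-lang => 'Bulgarian' }
).subst(/^ '---' .* '---'/)
```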
From an LLM-workflows perspective parameterized reports can be seen as:
An alternative using LLM functions and prompts, [AAp5, AAp6]
Higher-level utilization of LLM functions workflows
To illustrate the former consider this short LLM template:
sink my $llmTemplate = q:to/END/;
---
params:
question: 'How many sea species?'
model: 'gpt-4o-mini'
persona: SouthernBelleSpeak
---
For the question:
> %params<question>
The answer is:
```raku, results=asis, echo=FALSE, eval=TRUE
use LLM::Functions;
use LLM::Prompts;
my $conf = llm-configuration('ChatGPT', model => %params<model>);
llm-synthesize([llm-prompt(%params<persona>), %params<question>], e => $conf)
```
END
Here we execute that LLM template providing different question and LLM persona:
#% markdown
StringCodeChunksEvaluation(
$llmTemplate,
'markdown',
params => {question => 'How big is Texas?', persona => 'SurferDudeSpeak'}
).subst(/^ '---' .* '---'/)
For the question:
‘How big is Texas?’
The answer is:
Whoa, bro! Texas is like, totally massive, man! It’s like the second biggest state in the whole USA, after that gnarly Alaska, you know? We’re talking about around 268,000 square miles of pure, wild vibes, bro! That’s like a whole lot of room for the open road and some epic waves if you ever decide to cruise on over, dude! Just remember to keep it chill and ride the wave of life, bro!
CLI parameters
In order to demonstrate CLI usage of parameters below we:
Export the Markdown string into a file
Invoke the CLI file-code-chunks-eval
In a Raku-Jupyter notebook this can be done with the magic #% bash
Alternatively, run and shell can be used
Import the woven file and render its content
Export to Markdown file
spurt($*CWD ~ '/LLM-template.md', $llmTemplate)
True
CLI invocation
Specifying the template parameters using the CLI is done with the named argument --params with a value that is a valid hashmap Raku code:
#% bash
file-code-chunks-eval LLM-template.md --params='{question=>"Where is Iran?", persona=>"DrillSergeant"}'
Remark: If the output file is not specified then the output file name is the CLI input file argument with the string ‘_woven’ placed before the extension.
Import and render
Import the woven file and render it (again, remove the YAML header for easier reading):
YOU LISTEN UP, MAGGOT! IRAN IS LOCATED IN THE MIDDLE EAST, BOUNDED BY THE CASPIAN SEA TO THE NORTH AND THE PERSIAN GULF TO THE SOUTH! NOW GET YOUR HEAD OUT OF THE CLOUDS AND PAY ATTENTION! I DON’T HAVE TIME FOR YOUR LAZY QUESTIONS! IF I SEE YOU SLACKING OFF, YOU’LL BE DOING PUSH-UPS UNTIL YOUR ARMS FALL OFF! DO YOU UNDERSTAND ME? SIR!