Blog

  • Error Management: Go vs Rust vs Exception-based

I want to write a short reference for error management in Rust and in Go, versus exception-class based languages. Maybe Python deserves a specific section; I will update this later. Basically, the point of exception-based error management is the sub-classing of an Exception class and the try {...} catch(...) {...} block for handling exceptional cases.

    Golang (aka Go language)

    In Go, error is an interface. This is the only fact one must understand about errors. In Go a function can return a tuple that is de-structured by the assignment operator, as in

    a, b, c := GetTuple(1)

    Where

    func GetTuple(i int) (int, int, int) {
       return i, i, i
    }

    It is the developer's responsibility to deal with a function returning an error as the second component of the tuple.

    A func that can generate an error returns, by convention, a tuple of 2 components: (response, error), where either response or error is nil (typically not both, but there is no enforced limitation).

    A custom error type is defined by implementing the error interface:

    type CustomError struct {
      S string
    }
    func (e *CustomError) Error() string {
      return e.S
    }
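
    A minimal sketch of the idiomatic call site, reusing the CustomError above (the run function and its failure condition are invented for illustration):

    package main

    import "fmt"

    // run fails on negative input, returning a non-nil error as the second component
    func run(i int) (int, error) {
      if i < 0 {
        return 0, &CustomError{S: "negative input"}
      }
      return i * 2, nil
    }

    func main() {
      res, err := run(-1)
      if err != nil {
        fmt.Println("error:", err) // fmt uses the Error() method
        return
      }
      fmt.Println("result:", res)
    }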

    An almost complete exposition of error handling is given by the Go tour: https://go.dev/tour/methods/19

    Rust Language

    In Rust things are a bit more complicated. The main reason is the strong type system. A fn in Rust can return a Result, which is an additive type (a sum type), known to C developers as the good old enum (but with a better implementation). So:

    pub enum Result<T, E> {
        Ok(T),
        Err(E),
    }

    There is no bound on E in the definition itself, but error types conventionally implement the std::error::Error trait, which requires fmt::Display (and fmt::Debug).

    Again, it is the developer's responsibility to deal with errors, but there are some important constraints.

    The fact that Result is an additive type means that exactly one case is given: you got a result, or you got an error. Both or neither is impossible, because the fn returns exactly one variant of the enum, be it Ok(T) or Err(E).

    Of course the calling code must handle both enum variants: with match, or if let Ok(r) = … {...} else {...}, or if let Err(e) = … {...} else {...}, or whatever, including the question mark operator to propagate the error up to the caller of the caller.

    These constraints are a gift of the Rust type system, and they make your code more robust by design, at least if you are a responsible developer; in any case one MUST deal with errors, some way, somehow.
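
    A minimal sketch of both styles, the explicit match and the ? operator (the parse_port and connect functions are invented for illustration):

    use std::fmt;

    #[derive(Debug)]
    struct PortError(String);

    // the conventional Display impl, so the error can be printed
    impl fmt::Display for PortError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{}", self.0)
        }
    }

    fn parse_port(s: &str) -> Result<u16, PortError> {
        s.parse().map_err(|_| PortError(format!("bad port: {}", s)))
    }

    // `?` propagates the Err variant up to the caller of the caller
    fn connect(s: &str) -> Result<String, PortError> {
        let port = parse_port(s)?;
        Ok(format!("connected on {}", port))
    }

    fn main() {
        // the caller must handle both variants
        match connect("8080") {
            Ok(msg) => println!("{}", msg),
            Err(e) => eprintln!("error: {}", e),
        }
    }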

    Exception-based languages

    Basically one has an Exception base class, then there are 2 main constructs:

    1. raise or throw exception
    2. try…catch, or try…except block

    If the code does not catch the exception then, depending on the strictness of the language, it may not compile (or run) at all, or just crash at runtime.
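
    A minimal Python sketch of the two constructs (the class and the message are invented for illustration):

    class MyError(Exception):
        """A sub-class of the Exception base class."""

    def risky(flag):
        if flag:
            raise MyError("something exceptional")  # construct 1: raise/throw
        return "ok"

    try:                        # construct 2: try...except (try...catch)
        print(risky(True))
    except MyError as e:
        print("caught:", e)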

    (semi) Conclusion

    I would like to write a more detailed article, but I also want this to remain succinct.

    I want to note here that the Rust type system plays an important role in error handling, and the specific construct given by ? (the question mark) and the functional nature of the language make it really robust. Of course robustness comes with a price charged to the developer: the question mark and the anyhow crate are very practical for postponing error management, but the constraint on the Err(e) type may be annoying at first sight (conventionally it must impl fmt::Display).

    By contrast, the Go language is very relaxed about error handling; still, one is expected to provide a struct implementing the error interface, and this is almost everything one must remember. It is idiomatic to see if err != nil everywhere in Go source code.

    From the performance point of view, both Go and Rust have a very lightweight mechanism for dealing with errors. By contrast, exception-based languages need specific runtime support that is almost hidden from the developer. That is not true for Go, nor for Rust, meaning literally: the error handling cost is not hidden in Go/Rust.

    Adapt language to stack

    If you are interested in adapting the C language to something similar to C++ exception handling, you should read this article:

  • Dive into (useless?) uncovered case of multistage dockerfile

    Looking at the upstream documentation of Multi-stage builds, there are some examples, but this one is missing:

    FROM rust:1.72-alpine as build
    
    RUN ls -ls
    
    FROM alpine
    
    COPY --from=build /usr/local/cargo/bin/rustc /rustc
    
    CMD [ "ls", "-l", "/"]
    

    Ok, I mean, the example by itself is useless, but “it works”. Try:

    docker build -f Dockerfile -t copiedfrom-rust:v0.1 .

    Then:

    docker run -it copiedfrom-rust:v0.1 sh

    / # ls
    bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    rustc  sbin   srv    sys    tmp    usr    var
    / # ./rustc
    error: rustup could not choose a version of rustc to run, because one wasn't specified explicitly, and no default is configured.
    help: run 'rustup default stable' to download the latest stable release of Rust and set it as your default toolchain.
    / #

    Real use case scenario

    Let's get serious now: why?

    Suppose you are inside a shell, inside a devops tool, and you often need to rely on one or more binaries coming from another image; furthermore, you know that the build process for those binaries is time and resource consuming (like Rust, for example? yes).

    So it is better to arrange compilation so that it does not overload the CI tool by re-compiling what did not change (and should be kept separate, instead).

    This hint also applies to Go, and to everything that is CPU intensive.

    Once one gets the power of static analysis, it is time to use it carefully.

    Real life examples

    Porting legacy PHP code into the cloud is difficult: by itself PHP does not provide multi-threaded execution. There were mainly 2 options, and now there is a third:

    1. use the nginx or Apache module to run PHP code: but this does not run the php-cli environment! and this may be a security issue
    2. use an exotic PHP extension (ReactPHP, Workerman, Amp, …): ok, but this is not PHP anymore, every extension imposes some limitation, and it does not provide real “legacy code porting” (let's be honest here, legacy means the same code, and the same code simply … has problems)
    3. create a special executor daemon: this executor daemon can be configured by a YAML file where, for each HTTP URL/verb, there is the path of a PHP script, the working directory, the environment, and whatever else.

    A simple example of that yaml file:

    restapi:
      routers:
        - url: /templates
          method: GET 
          wd: /home/phpusr/templates/
          content-type: "application/json"
          timeout: 30
          cmd: php /home/phpusr/templates/listtemplate.php
          environment:
            - HOME: "/home/phpusr"
        - url: /template-new
          method: POST
          payload-to: body
          escape-shell-args: true
          cmd: php /home/phpusr/templates/addtemplate.php $body
    

    A better defined/refined YAML file is what I am actually using, but for consuming RabbitMQ messages as a worker which runs PHP (where a Unix socket is also provided to get the response back from PHP; and yes, Unix socket support in PHP is really good).

    Before I came up with this “do nothing from image” idea, I needed to spend 4 minutes on each simple change. Simply absurd.

    When you need to deploy soon and fast, is this trick that useless?

    Requirements

    • CI tool with an environment capable of building docker images
    • CI environment where docker has access to a private registry
    • a private registry where the “executor-daemon” is built as an image
    • the private registry must be visible/accessible as https://registry.private.local/v2/_catalog

    In the CI environment, the user running the CI stage must be logged into the private registry, that is:

    docker login registry.private.local

    This command stores in the ~/.docker/config.json file the auth required to access the registry at registry.private.local (I use a .local domain here because typically a private registry is not exposed outside; by default https://registry.private.local:443/v2 is accessed when the registry is specified that way, so this domain must be known).
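
    To make it concrete, a minimal sketch of the multi-stage Dockerfile this setup enables (the image names, paths, and executor-daemon binary location are hypothetical):

    FROM registry.private.local/executor-daemon:v1.2 as daemon

    FROM php:8.2-cli-alpine

    # reuse the pre-built binary instead of re-compiling it on every CI run
    COPY --from=daemon /usr/local/bin/executor-daemon /usr/local/bin/executor-daemon
    COPY ./src /home/phpusr/

    CMD [ "executor-daemon", "--config", "/home/phpusr/restapi.yaml" ]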

    On legacy PHP porting

    With this strategy local development of existing PHP code can proceed in a natural way:

    • port legacy code to Composer as a first step (at the very least)
    • clean the code as usual and provide tests: tests, even integration tests, do not need the executor-daemon to run.
    • it is possible to port code to another language: then move it outside the code base and use another stage during docker build
    • make your legacy code communicate quickly with the cloud environment (docker swarm, kubernetes, or whatever), then clean the code

    The last point is the “jump into the cloud” as soon as possible: it should give legacy developers a new perspective and let them enjoy the cloud-native app for the new powers it provides, instead of blaming it because “it is difficult to understand” (or comments like that).

    There is also a progressive culture upgrade favored by this approach: the developer is not required to understand the details, but s/he can look at the details and choose to enhance the code and the service.

    References

    There are references about this strategy, like https://shahbhargav.medium.com/docker-multi-stage-build-3d1af8868ac0

    But there is no mention of local registry usage.

  • Rust: references and lifetimes

    (… today I learned that … a preamble)

    What “lifetime” means in Rust:

    A lifetime parameter does not allow you to choose how long a value lives, it only allows you to communicate to the compiler that two or more references are “related” to the same memory and are expected to share the same lifetime.

    https://stackoverflow.com/a/28109180/250970

    So we are talking about references (written &T, where T is a type) to a memory location.

    Image

    A reference to a dynamic structure loses validity if the structure mutates

    A concept that already has examples in the Rust book, but that feels natural when you face an actual problem.

    Example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9cf971cce49e7a54665ac2d87928d725

    here the error message seems to be about the lifetime, but in fact the annotation is right: bstr would live long enough, provided that self.names does not change.

    What if I instead apply the lifetime &'a mut self to the function?

    A mut reference with a lifetime that lasts too long locks the structure it refers to

    By applying the lifetime 'a (&'a mut self), mut self stays alive as long as the life expectancy of the HaGraph struct.

    https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=d7ee6e948b7bbfb331ca2dc104bc782b

    this time the error message matches more closely what actually happens:

    error[E0499]: cannot borrow g as mutable more than once at a time

    in fact g is captured “for life” by the &mut self.
    This also explains why a single get_or_push call works correctly: from the compiler's point of view, there is no possibility that self.names changes; this can be checked statically, since it has the same lifetime as the structure (that is, 'a).

    If instead the compiler notices that it changes, as in https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=cf915a88e7a47296e2e786aa735a3f51
    then it immediately reports the error.

    Another example where it is clearer that the problem is the “mut captured for a lifetime”:

    https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=491701b170ccc4727195c0560cc853e9

    it is more correct to call it: “mut reference captured for the whole lifetime”

    What about other languages? Isn't it natural to reason this way?

    What disappointed me a bit, and then surprised me, is that this way of proceeding seemed completely natural to me: I have a list of nodes, I insert them into the graph and into a hashmap, so that I am sure they are not already present, and I use references, so that I avoid duplicating the data needlessly.

    Natural reasoning, but the flaw is in the structure that holds the real data, and in the fact that I am inserting items one at a time.

    At this point, what is natural instead is to create a “static” structure that contains all the strings. I use quotes because String is dynamic, but its container is an “array of pointers” of fixed size. A counter indicating how many positions have been occupied is also needed, but all of this stays encapsulated.
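
    A minimal sketch of that idea (the names and shapes are invented; a Vec filled once stands in for the fixed-size container):

    // The container is filled once and never mutated afterwards, so shared
    // references into it stay valid; `used` is the occupancy counter.
    struct Names {
        storage: Vec<String>, // stands in for the fixed-size "array of pointers"
        used: usize,
    }

    impl Names {
        fn new(all: Vec<String>) -> Self {
            let used = all.len();
            Names { storage: all, used }
        }

        // &self, not &mut self: handing out a reference cannot mutate storage
        fn get(&self, i: usize) -> Option<&str> {
            if i < self.used {
                self.storage.get(i).map(|s| s.as_str())
            } else {
                None
            }
        }
    }

    fn main() {
        let names = Names::new(vec!["a".into(), "b".into()]);
        let first = names.get(0);
        let second = names.get(1); // many shared borrows may coexist
        println!("{:?} {:?}", first, second);
    }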

    The question remains: if I can do it in C++ and it does not crash, why not in Rust?

    The fact that it does not crash in C++ “after several tries” does not mean it will not give a reference error in other situations.

    Conclusion

    If you can do it with other languages, you are probably doing it wrong with other languages.

  • DAGs from Graph

    Image

    Still ChatGPT. But bad experience this time.

    I asked for all possible distinct DAGs given the adjacency matrix of a graph (probably not connected). I insisted that the graph is not connected. But “it” pretended to just make a loop over the nodes and build “all possible DAGs” (so it pretends).

    I must note that the solution I put above is wrong as well: notice that if there are common sub-paths between distinct DAGs, those are just skipped, because they have visited nodes in common (and the dfs method stops as soon as it meets a visited node).

    Fortunately the algorithms I had read just the night before inspired me. It was a list ranking problem (find the distance of each node from the end of a linked list), nothing related, but it was just my imagination to think of a linked list as if it were a graph, with a single path, and connected. (I also thought about the adjacency matrix and the meaning of the algorithm in terms of transformation, but then I fell asleep.)

    Anyway:

    1. Which nodes in a graph have no in-edges? All nodes whose column in the adjacency matrix is 0 in every position.
    2. Which nodes in a graph have no out-edges? All nodes whose row in the adjacency matrix is 0 in every position.

    So far so easy. I can also say that every node i whose row i and column i are all 0 is a “spare node”, or a “single node DAG”.
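
    Point 1 translates directly into code; a minimal sketch of select_starting_nodes (assuming the same struct, with the size field and the contains(row, col) method used by the code further below):

    fn select_starting_nodes(&self) -> Vec<usize> {
        // node j is a starting node if column j of the adjacency matrix
        // is all 0, i.e. no edge (i, j) points into it
        (0..self.size)
            .filter(|&j| (0..self.size).all(|i| !self.contains(i, j)))
            .collect()
    }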

    In fact, I was asking for all DAGs, not almost all.

    Of course I cannot have Vec<Vec<(usize, usize)>> as output if I want _all_ DAGs; I need something more specific.

    I am using fixedbitset::FixedBitSet as the adjacency matrix (it is re-exported from petgraph), but I think Vec<petgraph::graph::Graph> is overkill here. Also, I had already defined a dedicated struct, so I can define _another_ dedicated struct for this, or better an enum, why not? So it exports Vec<MyDAG>, where

    enum MyDAG {
        Path(Vec<(usize,usize)>),
        Single(usize),
    }

    Compared with the code above, the visited Vec must now stay inside the for loop, and the function returns a Vec<MyDAG>:

        fn find_connected_dags(&self) -> Vec<MyDAG> {
            // given the adjacency matrix for each column which has no setted bit
            // do: dfs(on that node)
            let startings = self.select_starting_nodes();
            let mut sub_dags = Vec::new();
            
            for node in startings.iter() {
                let mut visited = vec![false; self.size];
                let mut dag = Vec::new();
                self.dfs( *node, &mut visited, &mut dag);
                if !dag.is_empty() {
                    sub_dags.push(MyDAG::Path(dag));
                } else {
                    sub_dags.push(MyDAG::Single(*node));
                }
            }
            
            sub_dags
        }
        
        fn dfs(&self, node: usize, visited: &mut [bool], dag: &mut Vec<(usize, usize)>) {
            let len = self.getsize();
            visited[node] = true;
            
            for i in 0..len {
                if self.contains(node, i) {
                    dag.push((node, i));
                    if !visited[i] {
                        self.dfs(i, visited, dag);
                    }
                }
            }
        }

    I have just begun to learn petgraph, and maybe these are already in it, but I am exploring stuff.

    ref. <https://github.com/petgraph/fixedbitset>, <https://docs.rs/petgraph/latest/petgraph/index.html>

    UPDATE: fixed the range expression 0..len (the upper bound is not inclusive, and this caused me some headaches)

    Update

    It now has its own crate, https://crates.io/crates/multidigraph, on GitHub https://github.com/danielecr/multidigraph

  • The Power of Windows in Rust: Unleashing the Potential of Sequential Data

    Just catch a glimpse from your windows (it was: “introduction”)

    The windows method in Rust’s standard library is a powerful tool for processing sequential data. It enables efficient iteration over fixed-size, overlapping windows of elements in a collection, offering a functional and concise approach to data manipulation. This article will discuss the usefulness of the windows method, with examples and domain applications to help you grasp its potential.

    Understanding windows

    The windows method is defined on slices and provides a way to create an iterator over overlapping windows of elements within a slice. The syntax for using windows is as follows:

    slice.windows(window_size)

    where slice is the input slice and window_size is the size of the window.

    The method returns an iterator that yields windows of the specified size. The iterator will produce a new window for each step, moving one element forward at a time. It's important to note that the windows method only works with slices, so you might need to access other collections like Vec through a slice, for example via the as_slice() method.

    Example usage

    Consider the following example, which calculates the moving average of a sequence of numbers:

    fn moving_average(numbers: &[f64], window_size: usize) -> Vec<f64> {
        numbers.windows(window_size)
            .map(|window| window.iter().sum::<f64>() / window_size as f64)
            .collect()
    }
    
    fn main() {
        let numbers: Vec<f64> = vec![1.0, 2.0, 3.0, 4.0, 5.0];
        let window_size = 3;
        let output = moving_average(&numbers, window_size);
        println!("{:?}", output); // Output: [2.0, 3.0, 4.0]
    }
    

    In this example, the moving_average function takes a slice of f64 numbers and a window size as input. It uses the windows method to create an iterator over windows of the given size, calculates the average for each window, and collects the results into a Vec.

    Domain applications

    The windows method has numerous applications across various domains, including:

    1. Time series analysis: In finance, economics, and other fields, time series data is often analyzed using rolling or moving window techniques. The windows method can help with calculating moving averages, standard deviations, and other rolling statistics.
    2. Signal processing: Digital signal processing often involves applying filters or transformations to a sequence of samples. The windows method can be used to implement sliding window filters, convolution operations, or other window-based techniques.
    3. Text analysis: When analyzing text data, the windows method can be used to extract n-grams (contiguous sequences of n items) for tasks such as language modeling, text classification, or information retrieval.
    4. Genomics: In bioinformatics, analyzing DNA or protein sequences often involves sliding window techniques to identify motifs, calculate sequence similarity, or perform other analyses.
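
    As a quick illustration of point 3, a minimal sketch extracting word bigrams (the token list is invented):

    fn main() {
        // extract word bigrams (n = 2) from a tokenized sentence
        let tokens = ["the", "quick", "brown", "fox"];
        let bigrams: Vec<String> = tokens
            .windows(2)
            .map(|w| w.join(" "))
            .collect();
        println!("{:?}", bigrams); // ["the quick", "quick brown", "brown fox"]
    }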

    Conclusion

    The windows method in Rust is a powerful tool for processing sequential data, offering an efficient and functional way to work with fixed-size, overlapping windows. Its applicability spans across various domains, such as time series analysis, signal processing, text analysis, and genomics. By understanding and leveraging the power of windows, you can write concise and efficient Rust code for a wide range of data manipulation tasks.

    My own conclusion

    The above article was generated by ChatGPT 4, my prompt was:

    can you write a technical article on the usefulness of windows? including in that article example of its usage and example of domain of application

    This question was “on topic” within the same chat session (which was about Rust's windows() on vectors, for the record)

    windows’ doc is https://doc.rust-lang.org/std/slice/struct.Windows.html

    An existing article on the topic: https://tndl.medium.com/rusts-slice-windows-is-really-cool-70d50cdc74c5 (Austin Tindle), just the first result from Google.

    If you found this article via Google, my hint is: next time consider hiring a language model.

  • How to stream query result with fastify and mysql2

    The aim is to serve data on a REST API as a stream, as soon as it is available. Instead of completely fetching the data set from the DBMS, compressing it, and sending it back to the caller, those steps can be part of a stream pipeline.

    Important note on the client side: if a regular client, like AJAX from ReactJS, is used, it just caches all the streamed data, then renders it when ready. To overcome this behaviour, the content-type should be something other than application/json
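
    A minimal sketch of a client that instead consumes the stream incrementally (using the standard fetch/ReadableStream API; the endpoint and payload shape match the service below):

    async function consumeStream() {
      const response = await fetch('/sql-stream', {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ query: 'SELECT ...', data: [] }),
      });
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        // each chunk is handled as soon as the server flushes it
        console.log('chunk:', decoder.decode(value, { stream: true }));
      }
    }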

    The main goal here is to use less memory.

    For now this is just an idea of the result of the same request made by buffering the whole result set VS streaming out results as soon as they come from the database

    Image
    Stream version VS buffered version

    As of now the difference in time is less than 2 seconds, but there is a big difference in memory usage (about 300MB VS 30MB), and this is sending 8300 records.

    These tests are done with a NodeJS service (v18.10.0), fastify (4.9.2, https://fastify.io/), the node-mysql2 module (v2.3.3, https://www.npmjs.com/package/mysql2), and objstream (https://www.npmjs.com/package/objstream, because fastify reply.send() accepts a stream of bytes, not object mode).

    The code is just a bit more complicated, but not too much:

    const streamResult = async (pool, request, reply) => {
      const {query, data} = request.body;
      const getPrepared = (query,data) => {...
      const {prepared, params} = getPrepared(query,data);
      
      return new Promise((resolve, reject) => {
        
        pool.getConnection( (err, conn) => {
          if(err) {
            // throwing inside this callback would not reject the Promise
            reject(err);
            return;
          }
          let transform = new ObjStream();
    
          let outStream = conn.execute({sql: prepared, rowsAsArray: true}, params).stream();
          
          let piped = outStream.pipe(transform)
          reply.send(piped);
          piped.on('error', (err)=> {
            reject(err);
            pool.releaseConnection(conn);
          });
          piped.on('finish', ()=> {
            pool.releaseConnection(conn);
            console.log("out stream finish");
            resolve(1)
          });
          
        })
      });
      
    }
    
    fastify.post('/sql-stream', async (req,reply) => {
        try {
            await streamResult(pool, req, reply);
            printUsage();
            console.log("END");
        } catch (err) {
            console.log("STREAM error:", err);
            reply.send({error: true});
        }
    })

    I will never finish writing this …

    Actually, in the streamed version I am using the rowsAsArray option set to true; switching it to false makes things worse in terms of time to serve, though memory consumption stays stable anyway.

    Also I am not sure about the node module, and the code is so small that I should have the opportunity to rewrite it with Rocket or something, I think, still using streams.

    Disappointed by @fastify/compress

    I am just a bit disappointed by the unfulfilled promise of the @fastify/compress module, here:

    https://github.com/fastify/fastify-compress#replycompress

    There is an example of the use of reply.compress(), sure, but no, that method does not exist.

    It is strange: I looked at the code, it should work, really. I am going to ask for some hints…

    UPDATE: Clarification arrived from the upstream devs about fastify and compress, quickly and well explained, ref. fastify 4 porting

    UPDATE Nov 2nd, 2025: I implemented the service that way, and it really works. But I am not satisfied with this approach, for a number of reasons.

  • Building multi arch docker image from linux host

    Key concepts

    Container manager. There is an actor, a daemon, or something that pulls and runs the right image when instructed to do so.
    That means you can use docker, docker swarm mode, kubernetes, k3s, or whatever daemon to monitor or just run containers
    based on images taken from some place, and that daemon needs to understand what to pull.

    Architecture: every machine has its own instruction set, devices, etc., and its own binary format.
    This is “architecture” in this context (the context of cross-building a docker image).

    Binary format, binfmt in Linux. There is a lot to say on this topic, much of it too advanced.
    Anyway, the Linux kernel supports a kind of pluggable executor based on magic characters, characters used
    to recognize the architecture; those magics are bound to an executor. Typically the /proc fs or /sys fs is used
    to set up and inspect the kernel runtime configuration, so the current binfmt status can be inspected in /proc/sys/fs/binfmt_misc/*.
    More info in this LWN article: https://lwn.net/Articles/630727/

    Docker registry is a service that runs somewhere and keeps track of images and their tags.

    Image manifest. This was a new concept to me: a tag is not enough to describe an image. An image is identified by
    its manifest, which is a JSON structure that keeps references to the real image tag(s), its architecture, and other info.

    So the manifest somewhat overlaps the concept of the image tag, but in fact that is a trick: when the manifest is not
    handled explicitly, a default manifest with the same tag is created and associated with the unique image tag.

    So a manifest has a tag, like each image has a tag, but a manifest can refer to more than one image.

    Deal explicitly with manifest

    The Docker CLI, the command line interface used to communicate with dockerd, is not intended to deal with manifests.
    I do not know why, but it can do so only after enabling the “experimental” feature, by editing ~/.docker/config.json with

    {"experimental": "enabled"}

    Then it is possible to use it.

    Commands:

    • docker manifest inspect --verbose TAG
    • docker manifest create MANIFESTTAG [IMAGETAG1 [IMAGETAG2 [IMAGETAG3 […]]]]
    • docker manifest push MANIFESTTAG

    inspect is to inspect, create is to create, push is to push the created manifest to the remote registry.

    There are limitations here: for example, one can create a manifest tag and push it to the registry, but if
    the manifest refers to image tags that have not already been pushed to the registry, the service will complain about it
    (or at least I suppose so, I never tried).

    Real building multiarch

    As I understood it as of a few days ago, a Dockerfile refers to a main image in its FROM section.
    But today my idea about it has become more complex and I am a bit confused.
    Maybe it refers to a manifest? And the image is automatically selected during the docker build
    phase from a set of images, based on the matching architecture?

    Anyway, one can explicitly refer to a specific architecture by its name; for example, using the alpine distro
    it is a prefix: ‘amd64/’, ‘arm32v7/’, ‘arm64v8/’, …

    In a Dockerfile it is possible to define arguments and their defaults:

    ARG ARCH=
    FROM ${ARCH}alpine

    and give values to the arguments on the command line during docker build:

    docker build -t IMAGETAGNAME --build-arg ARCH=arm32v7/ .

    at this point the docker daemon tries to build it based on the image referred to in FROM: arm32v7/alpine

    This image contains a filesystem with binaries in a format specific to the arm32v7 architecture, so you get:

    standard_init_linux.go:211: exec user process caused “exec format error”

    it means no magic was matched, so docker tried to execute the binary directly, but by

    docker run --rm --privileged docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64

    the folder /proc/sys/fs/binfmt_misc is populated, i.e.:

    cat /proc/sys/fs/binfmt_misc/qemu-arm

    enabled
    interpreter /usr/bin/qemu-arm
    flags: OCF
    offset 0
    magic 7f454c4601010100000000000000000002002800
    mask ffffffffffffff00fffffffffffffffffeffffff

    and now again

    docker build -t IMAGETAGNAME --build-arg ARCH=arm32v7/ .

    will build the image in a qemu-arm machine, run magically for that job.

    Then the steps are:

    1. build an image tag for each architecture
    2. push the image tag for each architecture
    3. create a manifest, naming it with a tag and referring to all the architectures built in 1. and pushed in 2.
    4. push the manifest created in 3. to the docker registry
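
    A minimal sketch of those four steps (the image tags and registry name are hypothetical):

    docker build -t registry.example.com/myapp:v1-amd64 --build-arg ARCH=amd64/ .
    docker build -t registry.example.com/myapp:v1-arm32v7 --build-arg ARCH=arm32v7/ .

    docker push registry.example.com/myapp:v1-amd64
    docker push registry.example.com/myapp:v1-arm32v7

    docker manifest create registry.example.com/myapp:v1 \
        registry.example.com/myapp:v1-amd64 \
        registry.example.com/myapp:v1-arm32v7
    docker manifest push registry.example.com/myapp:v1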

    There is nothing complex or special, but one just needs to understand the concepts behind the words: image tag, manifest tag, manifest, architecture, registry, binfmt, crossbuild, …

    References

    I found it hard to understand the concepts. My learning path (I spent 5/6 hours) was to check first this document:
    https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/

    From there I needed to understand GitHub Actions and GitHub secrets.

    1. GitHub Actions

    An action is defined by

    i. a trigger (on: push: branch: master, for example)
    ii. and a job, made of: name, environment, steps

    e.g. https://github.com/danielecr/uuid-provider/blob/main/.github/workflows/image.yml

    2. GitHub secrets

    It is possible to define variables bound to the repo; these are secrets and environment variables.

    It is possible to access those variables from a GitHub Action, e.g. via ${{ secrets.MYSECRET }}

    But still I was confused about it.

    I googled more, and continuously found references to the Windows platform, with everyone trying to sell
    buildx as the “easy way”, with the result that I was more confused.

    I finally found https://www.docker.com/blog/getting-started-with-docker-for-arm-on-linux/
    (still a lot of advertising for the buildx script), but it references https://hub.docker.com/r/docker/binfmt/tags?page=1&ordering=last_updated
    and, by guessing from it, https://github.com/docker/binfmt
    and suggests using the newer https://github.com/linuxkit/linuxkit (but I still need to look at it)

    Just out of curiosity, the only graphical description of the build environment I found was in https://www.stereolabs.com/docs/docker/building-arm-container-on-x86/ but it gives little/no info about binfmt, and its content is specific to the Jetson device.

  • React server side rendering as a react based template engine

    Isomorphic JavaScript is a technique used to speed up rendering in the browser; a pure version of it, server-side rendering (SSR), can be used to generate a template from a React app.

    Motivation: create a presentation template that can be converted to PDF and configured with data: produce formatted documents.

    Here:
    https://www.freecodecamp.org/news/server-side-rendering-your-react-app-in-three-simple-steps-7a82b95db82e/

    This is the way the server-generated markup and the client app can work together, using renderToString from:

    import { renderToString } from 'react-dom/server';

    the rendered HTML contains references to window.STATE; hydrate is used to “undry” the app (sort of bringing it to life).

    But by using renderToStaticMarkup from:

    import { renderToStaticMarkup } from 'react-dom/server';

    it is possible to render directly to HTML markup, without any additional information: pure HTML.

    Note

    The trick is in the use of:

    require("@babel/register");
    

    for transpiling the JSX to JavaScript transparently.

    Here is the src/template.js code:

    require("@babel/register");
    let fs = require('fs');
    
    const decoratedStyle = (content) => {
        return '<style type="text/css">\n' + content + '\n</style>\n';
    }
    
    const nodulespath = __dirname.replace(/\/[a-zA-Z]+$/,'')+"/node_modules/";
    
    const importCss = (moduleref) => {
        const solvedPath = moduleref.replace("~",nodulespath).replace("\./",__dirname+'/');
        //console.log('solvedpath', moduleref, solvedPath);
        let styledef = fs.readFileSync(solvedPath);
        return decoratedStyle(styledef);
    }
    
    let render = require('./server');
    
    const template = (state) => {
        let s = {...state};
        delete s.title;
        let content = render(s);
        let page = `<!DOCTYPE html>
                  <html lang="en">
                  <head>
                    <meta charset="utf-8">
                    <title> ${state.title} </title>
                    ${importCss('~bootstrap/dist/css/bootstrap.min.css')}
                    ${importCss('~font-awesome/css/font-awesome.min.css')}
                    ${importCss('./template.css')}
                  </head>
                  <body>
                  <div id="root">
                    ${content}
                    </div>
                  </body>
                  `;
    
      return page;
    }
    
    module.exports = template;
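
    The ./server module required above is not shown; a minimal sketch of what it might contain (the App component and the state shape are hypothetical):

    // src/server.js (hypothetical sketch)
    require("@babel/register");
    const React = require('react');
    const { renderToStaticMarkup } = require('react-dom/server');
    const App = require('./components/App').default;

    // render the app to pure HTML, with no hydration markers
    const render = (state) => renderToStaticMarkup(React.createElement(App, state));

    module.exports = render;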

    and here is a rendering script that is called by a NodeJS service that generates static documents:

    const fs = require('fs');
    const inlineCss = require('inline-css');
    
    let template = require('./template');
    let getState = require('./state/get-state');
    
    let data = process.argv[2];
    //console.log('commna',data);
    let currentselection = {
        tariff_id: 5,
        selected_opt: {"3101":66,"3102":79,"3108":166,"3109":34,"3112":155,"3114":85,"3115":157,"3208":161,"4005":180,"4102":151,"4105":170,"4107":67,"4202":159,"4207":105}
    };
    
    let state = getState(JSON.parse(data));
    let html = template(state);
    
    const output_fn_inlined = __dirname + '/../dist/output.inlined.html';
    
    inlineCss(html, {url: 'http://ssl.starsellersworld.com/'})
    .then(html => {
        //fs.writeFileSync(output_fn_inlined, html);
        let buf = Buffer.from(html);
        //console.log(buf.toString('base64'))
        let res = {"html":buf.toString('base64')};
        //console.log(JSON.stringify(res))
        process.send(JSON.stringify(res));
        //process.send(data);
        process.exit(0);
    }).catch(err=> {
        console.log('error', err);
        process.exit(0);
    });

    If you want to know more, you can consider working with us, drop me an email: dc @ xwave . de ( r e m o v e s p a c e s ! )

  • Security, saved passwords, and Google

    youhavebeenpwned! ( https://haveibeenpwned.com/ )

    Fortunately Google does not just stand and watch, but it cannot act on our behalf.

    • Click on the avatar (top right) to manage the account
    Image
    • Where you land:
    Image
    • Note the warning signal for security problems (red exclamation mark):
    Image
    • Go to the password checkup
    Image
    • Compromised saved passwords
    Image
    • Change every compromised password
    Image
    • Sign in for each account
    Image

    Or take the opportunity with accounts no longer used (for example, acquia.com is no longer useful to me)

  • React Form, my wrong way

    Today I had to face form definition in React, and I was afraid of some kind of repetitive task.
    I looked for it on Google; I found formik and react-redux-form, but I found them too complex, or too heavy.

    I am using reactstrap for the frontend, and maybe there is a premium library for forms, but I am not sure how it plays with redux.

    Also, I do not want a realtime link with the redux store, but an edit mode followed by a submit action. This way I can provide a “cancel” with no pain.

    The first approach was to useRef, that is it. I put them in an array of fieldElements, just fe because I must write it multiple times (no escape here: multiple input elements).

    Also I useSelector for a fieldValues array (again, fv here).

    Image
    Array of useRef() hooks
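
    The screenshots carry the code; a minimal sketch of the pattern they show, as I understand it (the field names and state shape are invented):

    import { useRef } from 'react';
    import { useSelector } from 'react-redux';
    import { Input } from 'reactstrap';

    const FIELDS = ['name', 'email', 'phone']; // one source for refs and values

    export default function MyForm() {
      // array of refs, one per input element; the fixed length keeps the
      // hook order stable (lint rules still complain about hooks in loops)
      const fr = FIELDS.map(() => useRef(null));
      // array of current values from the redux store
      const fv = useSelector((state) => FIELDS.map((f) => state.form[f]));

      const onSubmit = (e) => {
        e.preventDefault();
        // read the edited values only on submit, so "cancel" costs nothing
        const edited = fr.map((r) => r.current.value);
        console.log(edited);
      };

      return (
        <form onSubmit={onSubmit}>
          {FIELDS.map((f, i) => (
            <Input key={f} innerRef={fr[i]} defaultValue={fv[i]} />
          ))}
          <button type="submit">Save</button>
        </form>
      );
    }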

    It worked. Then I defined an array; the same one is used for defining the refs and the values:

    Image
    Image

    Then I cleaned up some code:

    Image

    (fe, field element, became fr, field reference, also)

    In the end I have coupled data between the redux store and the form. Maybe I can take the field list from the store; I am not sure
    what happens with innerRef={undefined}, it should give an error so that I can find it immediately.
    Unfortunately I get no error; I should find some other means of dealing with it.

    There is no validation here; I plan to manage validation in redux-observable eventually, adding some kind of isvalidtagmessage near each field, again using an array and useSelector(), the same way. I will see.