<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Markus Mayer on Medium]]></title>
        <description><![CDATA[Stories by Markus Mayer on Medium]]></description>
        <link>https://medium.com/@sunside?source=rss-f0f0d7cabfa3------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*HzSWHTs_TXSOoXjC2nxn_w.jpeg</url>
            <title>Stories by Markus Mayer on Medium</title>
            <link>https://medium.com/@sunside?source=rss-f0f0d7cabfa3------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 12 Apr 2026 18:03:01 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@sunside/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Trust in Open Source (or don’t): The advent of Crev in Code Reviews]]></title>
            <link>https://medium.com/@sunside/trust-in-open-source-or-dont-the-advent-of-crev-in-code-reviews-27a878769baa?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/27a878769baa</guid>
            <category><![CDATA[crev]]></category>
            <category><![CDATA[code-review]]></category>
            <category><![CDATA[rust]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[devsecops]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Thu, 07 Dec 2023 15:47:28 GMT</pubDate>
            <atom:updated>2025-02-02T14:52:59.362Z</atom:updated>
            <content:encoded><![CDATA[<p>You write your code and you test it, it passes a review, LGTM, squirrels, thumbs up — all set, you’re done. Or are you?</p><p>When it comes to our dependencies, we simply tend to “trust” them — not <em>real</em> trust, of course, but some vague and fuzzy <em>kind of</em> trust, after all they are probably tested and code reviewed. And they’re open source, other people use them. We’re all professionals. Dependencies of dependencies? Potato potato!</p><p>You should probably trust your dependencies as much as you trust your own code. If your trust is high enough, this post may not be for you. If you care about doing the hard work, welcome aboard. Let’s get all our dependencies reviewed ... piece by piece.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*GND2_yv4RbDcA_3n" /><figcaption><a href="https://jakelikesonions.com/post/158707858999/the-future-more-of-the-present">image source</a></figcaption></figure><p>If you’re working on any medium-sized project, it’s entirely likely that you can’t vet all of your dependencies on your own. Rust and Node.js are famous for transiently pulling in gazillions of them and it only takes one bad actor to cause a lot of trouble.</p><h3>Enter Crev: a distributed code review system</h3><p><a href="https://github.com/crev-dev/crev/">Crev</a> aims at helping with that issue by distributing the work and the reviews. The idea is that if everyone only vets a couple of their own dependencies we can build a network of trust (and distrust).</p><p>In short, when Crev scans your dependencies, it’ll inform you about the <strong>consensus of other reviewers</strong> for each dependency. 
You can then <strong>inspect and review individual dependencies</strong> yourself, and Crev will <strong>cryptographically sign your review</strong> and <strong>publish it on GitHub</strong> for others to consume.</p><p>You can then <strong>assign a level of trust</strong> to other people’s reviews, and through that network of trust get a higher review coverage — or, when you start from scratch, <em>some</em> review coverage to begin with.</p><p>At the time of writing this, Crev provides <a href="https://github.com/crev-dev/cargo-crev">cargo-crev</a>, the reference implementation for Rust, and initial implementations of <a href="https://www.npmjs.com/package/crev">npm-crev</a> for Node, as well as <a href="https://github.com/crev-dev/pip-crev">pip-crev</a> for Python.</p><p>Today, I’ll walk you through using cargo-crev to demonstrate the workflow. In its own words, it can (among other things):</p><ul><li>warn you about untrustworthy crates and security vulnerabilities,</li><li>allow you to review the most suspicious dependencies and publish your findings, and</li><li>increase the trustworthiness of your own code.</li></ul><p>You can also find a short version of the information below in the <a href="https://github.com/crev-dev/cargo-crev/blob/main/cargo-crev/src/doc/getting_started.md">Getting Started Guide</a>.</p><h3>A worked example</h3><p>The first step is to get it, of course: cargo install --locked cargo-crev. It’ll take a bit, after which it provides you with the cargo crev command.</p><h4>Creating a proof repository</h4><p>At the heart of it, Crev relies on <a href="https://github.com/crev-dev/crev/wiki/Proof-Repository">proof repositories</a> — Git repositories in which reviews are published. 
You can either <a href="https://github.com/crev-dev/crev-proofs/fork">fork the template</a> on GitHub or create a repo from scratch; the canonical name for the repo is crev-proofs, and it must be publicly available over https, e.g.</p><blockquote><a href="https://github.com/your-username/crev-proofs">https://github.com/your-username/crev-proofs</a></blockquote><p><a href="https://github.com/sunsided/crev-proofs">Here’s mine</a>, if you want to see an example.</p><h4>Setting up an identity</h4><p>Each user of cargo-crev is identified by a unique identity, allowing the same repository to be used by multiple people at the same time. This might be interesting if you’re using Crev in a business or team context, but it has implications if you’re using cargo-crev on multiple machines (see below for more on that).</p><p><strong>Setting up a new identity</strong> involves calling cargo crev id new and cargo crev publish, first creating the identity and then publishing it to the repository. You will be asked to provide a passphrase. Do so, and keep it somewhere safe.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/42117fb2b943d59d908f6d229866d26b/href">https://medium.com/media/42117fb2b943d59d908f6d229866d26b/href</a></iframe><p>At the end of it, you’ll be presented with a summary and asked to write it down. It’s a good idea to keep a backup, but if you don’t, you can always export it again later. Apart from that, you won’t need this information anytime soon.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7d2e1db425c7a68a6d564e653cf567f8/href">https://medium.com/media/7d2e1db425c7a68a6d564e653cf567f8/href</a></iframe><p>If needed, you can create multiple IDs on the same machine using the same process. Use cargo crev id current to see all available IDs.</p><p><strong>Setting up an existing identity</strong> on the other hand <em>does</em> require the above output. 
Take your backup or run cargo crev id export on the existing machine/account to get it back, then run cargo crev id import and paste the entire output.</p><h4>Trust a proof repository</h4><p>On your new setup, start by trusting someone. The guidelines suggest that you <em>highly</em> trust <a href="https://github.com/dpc/crev-proofs">dpc/crev-proofs</a> (a leap of faith in and of itself), but let’s do so by using cargo crev trust:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/412d641f77101d788526a3b53c7e4e35/href">https://medium.com/media/412d641f77101d788526a3b53c7e4e35/href</a></iframe><p>Later on, you will want to <strong>extend your network of trust</strong>. In that case, repeat the cargo crev trust step with a proper trust level and a proof repository of your choice.</p><p>To give an example: if you wanted to trust my reviews (low trust, unless you know me), you’d run the following line.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4b158afe135130cda18ac201b3afbc5c/href">https://medium.com/media/4b158afe135130cda18ac201b3afbc5c/href</a></iframe><h4>Fetching current reviews for your repo</h4><p>To fetch all current review repos, use cargo crev repo fetch all. Depending on when you run it, it will either report that there are no updates or give more information.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f573615260108a1c436c84b9ece5e091/href">https://medium.com/media/f573615260108a1c436c84b9ece5e091/href</a></iframe><h4>Verifying your dependencies</h4><p>To verify your dependencies, switch into your Git repository directory and run cargo crev verify --show-all. 
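Everything up to this point condenses into just a few commands (a sketch based on the steps above; the repository URL is a placeholder for your own fork):

```shell
# Install the tool, then create and publish a new identity
cargo install --locked cargo-crev
cargo crev id new --url https://github.com/your-username/crev-proofs
cargo crev publish

# Bootstrap the network of trust and fetch known reviews
cargo crev trust --level high https://github.com/dpc/crev-proofs
cargo crev repo fetch all

# Verify the dependencies of the current project
cargo crev verify --show-all
```
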
It will fetch your dependencies and then list the review status; this is either none if no reviews exist, pass if there’s some level of trust, or flagged or dangerous for reported issues:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f3d3d6f3620fa9080c245250c1b1227e/href">https://medium.com/media/f3d3d6f3620fa9080c245250c1b1227e/href</a></iframe><p>In the output above, the flag CB informs you about custom builds, while UM would inform you about an unmaintained dependency. The <a href="https://github.com/geiger-rs/cargo-geiger">geiger</a> count informs you about the number of lines of unsafe code in that dependency.</p><p>In a terminal, it looks roughly like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/973/1*u8aW-Bkr7Nr13EkEtpKjzQ.png" /><figcaption>The output of cargo crev verify --show-all</figcaption></figure><h4>Reviewing a dependency</h4><p>The review process consists of two steps: opening the dependency and inspecting it, then writing a review and publishing it.</p><p>To <strong>open the dependency</strong>, call cargo crev open $(name-of-the-crate); this will simply open a directory showing the crate contents.</p><p>To <strong>write a review</strong>, call cargo crev review $(name-of-the-crate). Keep your passphrase ready. Your editor of choice will open with something along these lines:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dc5470bd63b3166c2a70e8178f7884a0/href">https://medium.com/media/dc5470bd63b3166c2a70e8178f7884a0/href</a></iframe><p>In reality, the file is much longer and thankfully self-describing, providing information about each field. 
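Before looking at the fields in detail: the review loop itself condenses to a handful of commands (a sketch; the crate name is a placeholder):

```shell
# Unpack and open the crate sources for inspection
cargo crev open some-crate

# Write, sign and locally commit the review
cargo crev review some-crate

# After a dependency update, inspect and review only the changes
cargo crev crate diff some-crate
cargo crev review --diff some-crate
```
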
The key aspects here, however, are the <strong>thoroughness</strong> of your review, your <strong>understanding</strong> of the dependency code and your overall <strong>rating</strong> of it, plus an additional <strong>comment</strong>.</p><p><strong>Thoroughness</strong> encodes the time and effort spent reviewing and comes in the following flavors:</p><ul><li><strong>high</strong>: This was a long, deep, focused review or even a formal security audit. Rule of thumb: <em>“an hour or more per file”</em>.</li><li><strong>medium</strong>: A standard and focused code review: <em>“15 minutes per file”</em>.</li><li><strong>low</strong>: A low-intensity review: <em>“2 minutes per file”</em>.</li><li><strong>none</strong>: No actual review, just skimming: <em>“seconds per file”</em>. This is your fallback with common, established projects (think <a href="https://tokio.rs/">tokio</a>) or when you just want to flag a dependency.</li></ul><p><strong>Understanding</strong> informs others about how well you actually understand the code:</p><ul><li><strong>high</strong>: You completely understand the code.</li><li><strong>medium</strong>: You have a good understanding of the code.</li><li><strong>low</strong>: Some parts of the code are unclear to you.</li><li><strong>none</strong>: You lack understanding of the code. 
(We’ve all been there.)</li></ul><p>Finally, <strong>rating</strong> informs others about your verdict:</p><ul><li><strong>strong</strong>: The code is secure and good in all aspects, for all applications.</li><li><strong>positive</strong>: The code is secure and okay to use, maybe with some minor issues.</li><li><strong>neutral</strong>: The code is secure but with flaws.</li><li><strong>negative</strong>: The code has severe flaws and is not fit for production use.</li><li><strong>dangerous</strong>: The code is unsafe to use, has severe flaws or possibly malicious intent.</li></ul><p>After checking the dependency’s source repository, you can flag the dependency as either unmaintained or not (the default). If there are alternatives, you can add them in the alternatives section — remove the section if there are none.</p><p>After saving and closing the editor, your review is signed and committed to your (local) proof repository.</p><h4>Re-reviewing a dependency after it changed</h4><p>When the dependency code changes, you do not have to perform a full review again. Instead, you can use cargo crev crate diff and cargo crev review --diff to review only the parts changed since your last review.</p><h3>And then you rinse and repeat</h3><p>Now that you know</p><ul><li>that <a href="https://github.com/crev-dev/crev">crev</a> and <a href="https://github.com/crev-dev/cargo-crev">cargo-crev</a> exist,</li><li>how to set up an identity and proof repository,</li><li>how to trust other people’s reviews (to some degree),</li><li>how to fetch reviews for your dependencies and</li><li>how to publish your own reviews for others to use,</li></ul><p>your work really has just begun.</p><p>While each review is only a little step in and of itself, each individual contribution does help the system grow. Tooling will get easier over time, and even if it never reaches general popularity — a rather unlikely thing, given the nature of the endeavor — it’s still a perfect chance to do some good. 
After all, you did just review one dependency more than you had before. Now on to the next one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*jqGaIU0EIwnU34Q9.jpg" /><figcaption>“I will find you and I will give you feedback”</figcaption></figure><p>Thanks for making it here. Stay safe and healthy.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=27a878769baa" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Embedded Rust development with OpenOCD in JetBrains CLion (2023) on Linux]]></title>
            <link>https://medium.com/@sunside/embedded-rust-development-with-openocd-in-jetbrains-clion-2023-on-linux-7011d754cf31?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/7011d754cf31</guid>
            <category><![CDATA[clion]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[stm32]]></category>
            <category><![CDATA[rust]]></category>
            <category><![CDATA[embedded-systems]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sun, 04 Jun 2023 11:30:43 GMT</pubDate>
            <atom:updated>2023-09-15T19:58:16.363Z</atom:updated>
            <content:encoded><![CDATA[<p>As of writing this (June 4th, 2023), JetBrains CLion has plug-in support for Rust as well as native support for Embedded development, but no native or plugin support for both combined. One can however use the <a href="https://www.jetbrains.com/help/clion/openocd-support.html">OpenOCD support</a> and custom build tooling to get a reasonable building and debugging experience without having to resort to a <a href="https://github.com/berkowski/rust-target-cmake">CMake wrapper</a>.</p><p>The following describes a basic setup for an STM32F3 Discovery board using the STM32F303VCT6 MCU, but should apply to all boards that can be programmed and debugged using OpenOCD. I have recently used it in a toy project <a href="https://github.com/sunsided/stm32f3disco-rust">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/224/1*fFgJy2JLiNOW7zePocuFbA.gif" /><figcaption>No STM32F3 Discovery blog post is complete without a GIF of the LED Roulette example.</figcaption></figure><h3>Preparing the Rust environment</h3><p>This article focuses specifically on CLion, so I recommend following <a href="https://docs.rust-embedded.org/book/">The Embedded Rust Book</a> to get you started. There also is an older version of the book available specifically for the STM32F3 Discovery, which you can read <a href="https://docs.rust-embedded.org/discovery/f3discovery/">here</a>.</p><p>In particular you will need</p><ul><li>The proper Rust target installed (e.g. <em>thumbv7em-none-eabi</em>),</li><li>OpenOCD installed,</li><li>GDB Multi-Arch (or any GDB for your target platform) installed,</li><li>udev rules set up, and</li><li>User permissions to communicate with the UART device.</li></ul><p>The book will guide you through all of this.</p><h3>Set up the target toolchain</h3><p>In CLion, go to <strong>Settings &gt; Build, Execution, Deployment &gt; Toolchains</strong> and create a new toolchain. 
In this window, select the GDB for the hardware, in this case arm-none-eabi-gdb. This step may not be strictly required as we will do a similar selection later on, but it keeps things nice and tidy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dCHm3tl47iOkd9yXNR3MoQ.png" /><figcaption>Setting up the Toolchain (<strong>Settings &gt; Build, Execution, Deployment &gt; Toolchains</strong>)</figcaption></figure><h3>Set up the build targets</h3><p>Go to <strong>Settings &gt; Build, Execution, Deployment &gt; Custom Build Targets</strong> and create a new target for each binary and flavor to build. In the picture below, I have created Cargo Debug and Cargo Release targets and selected the toolchain created above. The <strong>Build</strong> and <strong>Clean</strong> steps will be created right after.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pDCqOiHoMJaUW47ys1Mp0g.png" /><figcaption>Setting up the build targets (<strong>Settings &gt; Build, Execution, Deployment &gt; Custom Build Targets</strong>)</figcaption></figure><p>This configuration is stored in .idea/customTargets.xml.</p><p>Clicking the three-dot button next to <strong>Build</strong> and <strong>Clean</strong> will bring up the <strong>External Tools</strong> dialog.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/216/1*vFxA9RbTMABtw63CUcz5-w.png" /><figcaption>External tools</figcaption></figure><p>Click the + button and create a single target <strong>Clean</strong> as well as multiple <strong>Build</strong> targets, one for each flavor or binary you want to build. This will bring up the <strong>Edit Tool</strong> dialog.</p><p>The <strong>Group</strong> name can be freely specified in the <strong>Edit Tool</strong> window; I selected <strong>Rust Build</strong>. You can choose a generic name or a specific name for each binary you build, e.g. 
to bundle debug and release builds per binary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/492/1*daa-SA8slyEJ8bGrZ9W1XQ.png" /><figcaption>A <strong>cargo clean</strong> tool configuration.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/492/1*TTB7STnVRgxEq0UVhg0_EQ.png" /><figcaption>A <strong>cargo build</strong> tool configuration.</figcaption></figure><p>You can omit the --target ... argument if your project uses a custom .cargo/config.toml with a build.target configuration. My project uses:</p><pre>[build]<br>target = &quot;thumbv7em-none-eabihf&quot; # Cortex-M4 and Cortex-M7 (no FPU)</pre><p>If you do not provide a configuration like this, the <strong>Argument</strong> value must be something along the lines of:</p><pre>build --target thumbv7em-none-eabihf</pre><h4>Cargo Workspaces</h4><p>When setting up the project as a <a href="https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html">Cargo Workspace</a> rather than a single binary, the variable expansions won’t be as useful. For example, $ContentRoot$ refers to the workspace directory, but so does $ProjectFileDir$. To build the right binary, you&#39;ll have to either specify default-members in the Cargo.toml or specifically provide the --bin your-project parameter to the build commands.</p><p>A complete value for <strong>Arguments</strong> would then look like:</p><pre>build --target thumbv7em-none-eabihf --bin led-roulette</pre><h3>Run / Debug Configurations</h3><p>Lastly we need to set up the run configurations. For this, navigate to <strong>Edit Configurations</strong> in your Debug/Run Configuration selection. This, unsurprisingly, brings up the <strong>Run/Debug Configurations</strong> dialog.</p><p>In here, create a configuration of type <strong>OpenOCD Download &amp; Run</strong>.</p><ul><li>For <strong>Target</strong> select the build target created above. 
In the example below, I selected <strong>Cargo Debug</strong>.</li><li>For <strong>Executable binary</strong>, select the built binary artifact from the target directory. In my case, the binary resided in the target/thumbv7em-none-eabihf/debug subdirectory since the project builds for the thumbv7em-none-eabihf target.</li><li>Select the <strong>Debugger</strong> by picking the Toolchain you created above, or by pointing to the correct debugger on disk if you didn’t create a toolchain.</li><li>For <strong>Board config file</strong>, click the <strong>Assist</strong> button. This will bring up the <strong>Select Board Config File</strong> dialog, allowing you to select the relevant configuration.</li><li>Leave <strong>GDB port</strong> and <strong>Telnet port</strong> as they are unless they clash with other configurations.</li><li>For <strong>Download</strong> select <strong>Always</strong> or <strong>If updated</strong>.</li><li>For <strong>Reset</strong> select <strong>Halt</strong>. This provided the best debugging experience for me, anyway.</li><li>For <strong>Before launch</strong>, keep <strong>Build</strong>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Zgwq7ZTfsQ3_5jngQ5wjlA.png" /><figcaption>Run / Debug Configurations</figcaption></figure><p>Select the correct board configuration from the <strong>Select Board Config File</strong> dialog:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/363/1*pvsVWht1jqH3-2H19_7gWw.png" /><figcaption>Board configuration file selection assistant</figcaption></figure><h3>Flashing and running the application</h3><p>When starting the selected Run / Debug Configuration, the <strong>Debug</strong> window should open. 
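Under the hood, the flash-and-run step corresponds roughly to an OpenOCD invocation like the following (a sketch; the ELF path assumes the led-roulette example from above):

```shell
# Flash the ELF via the ST-LINK on the Discovery board, verify, then reset
openocd -f board/stm32f3discovery.cfg \
        -c "program target/thumbv7em-none-eabihf/debug/led-roulette verify reset exit"
```
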
A tooltip will inform you about the firmware being uploaded through OpenOCD, and you can now freely place breakpoints and step through the code as usual.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1jK1o3MI3XDIyh0eNz2ZXA.png" /></figure><p>At times, restarting the debugger results in communication errors. Restarting the debugging session a second time resolves these.</p><h4>System Viewer Description (SVD)</h4><p>The <strong>Peripherals</strong> tab in the <strong>Debug</strong> view allows you to select the SVD file relevant to your board. I have selected mine from the <a href="https://github.com/stm32-rs/stm32-rs">github.com/stm32-rs/stm32-rs</a> repo’s svd subdirectory (<a href="https://github.com/stm32-rs/stm32-rs/tree/e9edcdcfebb73ac81a972c4a00b755d026fff621/svd/vendor">here</a>).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PP3hFRk1NYQwMlFhqVxq6Q.png" /><figcaption>STM32F303 SVD with GPIO3 ODR11 set to toggle an LED high.</figcaption></figure><h4>Register View</h4><p>Similarly, the <strong>Registers</strong> tab in the <strong>Debug</strong> view allows you to inspect the current register values.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-ZytEkCIAQ27bp6eLHDVbQ.png" /><figcaption>Register view of the OpenOCD development experience.</figcaption></figure><h3>That’s all folks</h3><p>It’s not the best development experience in the world, but it’s a reasonable starting point. Have fun, take care.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7011d754cf31" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[STM32F3-Discovery: no 72 MHz clock due to HSE never ready]]></title>
            <link>https://medium.com/@sunside/stm32f3-discovery-no-72-mhz-clock-due-to-hse-never-ready-ef829750741c?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/ef829750741c</guid>
            <category><![CDATA[stm32f3]]></category>
            <category><![CDATA[embedded-systems]]></category>
            <category><![CDATA[stm32]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Mon, 01 Aug 2022 11:56:08 GMT</pubDate>
            <atom:updated>2022-08-01T11:56:08.790Z</atom:updated>
            <content:encoded><![CDATA[<p>In 2015, I unboxed my brand new STM32F3-Discovery, plugged it in — sweet blinking rapture. Compiled my first demo program, played around with the timers, all was <em>so</em> good. Until I had a closer look at the system clock speed: 8 MHz it said. So I dug into the unknown grounds of STM32F3 development, ended up in the generated firmware’s system initialization function in system_stm32f30x.c — which looks like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e3bbc541d8604285d96c1631cf4ce1ff/href">https://medium.com/media/e3bbc541d8604285d96c1631cf4ce1ff/href</a></iframe><p>I did so, only to find out that HSEStatus never switched to 0x01 because the RCC_CR_HSERDY flag was never asserted in the first place.</p><p>Obviously no one else in the whole wide web had trouble with this. Cold water? Let’s dive! Someone at the ST forums pointed me to the trick to output the RCC clock signal to the board’s PA8 pin, which I did like so:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9c6510747b593392d8c6e955c6e669e9/href">https://medium.com/media/9c6510747b593392d8c6e955c6e669e9/href</a></iframe><p>Turned out … well, nothing. Flatline on that pin. So I took my multimeter and went upstream from the oscillator pins. Solder bridge SB12, of course bridged, working fine, SB17 open as requested, and then — silence on RB48. No beeps on my meter, no value, just plain high impedance.</p><p>To make a long story short: That 100 Ω resistor was borked, so I replaced it with some spare parts of an old scanner board I had floating around in the to-do stash. 
I’m not exactly known for massive soldering skills, but <a href="https://www.youtube.com/watch?v=8JM4oCpWnjU">this video</a> helped a lot here.</p><p>Final result:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*j_vLHtzdZZm2zUzk.jpg" /></figure><p>Ugly but effective. Worked like a charm.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ef829750741c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Naismith, Aitken-Langmuir, Tranter and Tobler: Modeling hiking speed]]></title>
            <link>https://medium.com/@sunside/naismith-aitken-langmuir-tranter-and-tobler-modeling-hiking-speed-4ff3937e6898?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/4ff3937e6898</guid>
            <category><![CDATA[mathematical-modeling]]></category>
            <category><![CDATA[hiking]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sat, 16 Jul 2022 11:08:53 GMT</pubDate>
            <atom:updated>2022-07-16T11:08:53.433Z</atom:updated>
            <content:encoded><![CDATA[<p>While planning an eleven-day trekking trip through the Hardangervidda in Norway, I came across the age old problem of estimating the walking time for a given path on the map. While one is easily able to determine the times for the main west-east and north-south routes from a travel guide, there sadly is no information about those self-made problems (i.e. custom routes). Obviously, a simple and correct solution needs to be found.</p><p>Of course, there is no such thing. When searching for hiking time rules, two candidates pop up regularly: <a href="http://en.wikipedia.org/wiki/Naismith&#39;s_rule">Naismith’s rule</a> (including Tranter’s corrections), as well as <a href="http://en.wikipedia.org/wiki/Tobler%27s_hiking_function">Tobler’s hiking function</a>.</p><p>William W. Naismith’s rule — and I couldn’t find a single scientific source — is more a rule of thumb than it is exact. It states:</p><blockquote>For every 5 kilometres, allow one hour. For every 600 metres of ascend, add another hour.</blockquote><p>which reads as</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/286/1*lSJPbQ8mOhG_v5hEsnJlAQ.png" /></figure><p>where |<em>w</em>⃗ | is the walking speed, Δ<em>s</em> the length on the horizontal plane (i.e. “forward”), Δ<em>a</em> the ascend (i.e. 
the difference in height) and <em>θ</em> the slope.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dfd33a0d6dc5d56952ae4f9b5f2c25cd/href">https://medium.com/media/dfd33a0d6dc5d56952ae4f9b5f2c25cd/href</a></iframe><p>That looks like</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/483/0*BW7ezSt5bPk71eOb.png" /><figcaption>Naismith’s rule</figcaption></figure><p>Interestingly, this implies that if you climb a 3 km mountain straight up, it will take you 5 hours.</p><p>By recognising that 5 km/0.6 km ≈ 8.3 ≈ 8, the <em>8 to 1</em> rule can be employed, which allows the transformation of any (Naismith-ish) track to a flat track by calculating</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/251/1*7cwEtPdtYFKjqHwZG8K81A.png" /></figure><p>So a track of 20 km in length with 1 km of ascent would make for 20 km + 8⋅1 km = 28 km of total track length. Assuming an average walking speed of 5 km/h, that route will take 28 km / (5 km/h) = 5.6 h, or 5 hours and 36 minutes. Although quite inaccurate, <em>somebody</em> found this rule to be accurate enough when comparing it against times of men running down hills in Norway. Don’t quote me on that.</p><p>Robert Aitken assumed that 5 km/h might be too much and settled for 4 km/h on all off-track surfaces. Unfortunately, the Naismith rule still didn’t state anything about descent or slopes in general, so Eric Langmuir added some refinements:</p><blockquote><em>When walking off-track, allow one hour for every 4 kilometres (instead of 5 km).</em> When on a small decline of 5 to 12°, subtract 10 minutes per 300 metres (1000 feet). For any steeper decline (i.e. 
over 12°), <em>add</em> 10 minutes per 300 metres of descent.</blockquote><p>Now that’s the stuff wonderfully non-differentiable functions are made of:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/561/0*tfHGLRJDhms7eje9.png" /><figcaption>Naismith’s rule with Aitken-Langmuir corrections</figcaption></figure><p>That is:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/463/1*0hBsy58bbPYXfX5Coki1WA.png" /></figure><p>It should be clear that 12 km/h is a highly unlikely speed, even on roads.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f58ab9ffb440053aa9a177b18f7645a1/href">https://medium.com/media/f58ab9ffb440053aa9a177b18f7645a1/href</a></iframe><p>So Waldo Tobler came along and developed his “hiking function”, an equation that assumes a top speed of 6 km/h with an interesting feature: it — though still non-differentiable — adapts gracefully to the slope of the ground. That function can be found in his 1993 report “<em>Three presentations on geographical analysis and modeling: Non-isotropic geographic modeling; Speculations on the geometry of geography; Global spatial analysis</em>” and looks like the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/472/0*KTguqWii8Su92tyr.png" /><figcaption>Tobler’s hiking function</figcaption></figure><p>It boils down to the following equation of the walking speed |<em>w</em>⃗ | “on footpaths in hilly terrain” (with <em>s</em>=1) and “off-path travel” (with <em>s</em>=0.6):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/199/1*-2_5xX37kUa9aKbirtfVyA.png" /></figure><p>where tan(<em>θ</em>) is the tangent of the slope (i.e. vertical distance over horizontal distance). 
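Both rules are small enough to sketch in Python (the Naismith estimate via the 8-to-1 rule, and Tobler's exponential; the function names are mine):

```python
import math

def naismith_hours(distance_km: float, ascent_km: float, speed_kmh: float = 5.0) -> float:
    """Naismith's rule via the 8-to-1 rule: each km of ascent counts as 8 flat km."""
    return (distance_km + 8.0 * ascent_km) / speed_kmh

def tobler_speed_kmh(slope: float, s: float = 1.0) -> float:
    """Tobler's hiking function: walking speed for a slope given as tan(theta).

    s = 1.0 on footpaths in hilly terrain, s = 0.6 for off-path travel.
    """
    return s * 6.0 * math.exp(-3.5 * abs(slope + 0.05))

# The worked example from above: 20 km with 1 km of ascent at 5 km/h
print(naismith_hours(20.0, 1.0))   # 5.6 hours
# Tobler's top speed of 6 km/h is reached on a slight decline, tan(theta) = -0.05
print(tobler_speed_kmh(-0.05))     # 6.0 km/h
```

The abs(slope + 0.05) term is what shifts the speed maximum to a slight downhill grade rather than flat ground.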
By taking into account the exact slope of the terrain, this function is superior to Naismith’s rule and a much better alternative to the Langmuir bugfix, especially when used on GIS data.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e999e082f891e555d32605c558cb3b8b/href">https://medium.com/media/e999e082f891e555d32605c558cb3b8b/href</a></iframe><p>However, it lacks the one thing that makes the Naismith rule stand out: <strong>Tranter’s corrections</strong> for fatigue and fitness. (Yes, I know it gets weird.) Sadly, these corrections seem to exist only in the form of a mystical table that looks, basically, like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/594/1*2pmE4UnDwjP3ScQR_GVd4A.png" /><figcaption>Tranter’s corrections to Naismith’s rule</figcaption></figure><p>where the minutes are a rather obscure measure of how fast somebody is able to hike up 300 metres over a distance of 800 metres (20°). With that table, the rule is: if you get into nastier terrain, drop one fitness level. If you suck at walking, drop a fitness level. If you carry a 20 kg backpack, drop one level. Sadly, there’s no equation to be found, so I had to make one up myself.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/04696d833a598cc575759754420f52fb/href">https://medium.com/media/04696d833a598cc575759754420f52fb/href</a></iframe><p>By looking at the table and the mesh plot, it seems that each time axis for a given fitness is logarithmic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/501/0*rUXvrGsxEJQVIoon.png" /><figcaption>Tranter’s corrections to Naismith’s rule visualized</figcaption></figure><p>I did a log-log plot, and it turns out that the series not only appear to be logarithmic in time, but also in fitness. 
By deriving the (log-log-)linear regression for each series, the following equations can be found:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/185/1*jri8WvoGQc2QbxcTza-Lmw.png" /></figure><p>Visually, this renders as:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/482/0*UEfxsjVFy9ufPiZk.png" /></figure><p>These early approximations appear to be quite good, as can be seen in the following linear plot. The last three lines <em>t</em>30, <em>t</em>40 and <em>t</em>50, however, begin to drift away. That’s expected for the last two due to the small number of samples, but the <em>t</em>30 line was irritating.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/479/0*y8h8lroMnkC651P2.png" /></figure><p>My first assumption was that the <em>t</em>40 and <em>t</em>50 lines simply are outliers and that the real coefficient for the time variable is the (outlier-corrected) mean of 1.2215±0.11207. This would imply that the intercept coefficient is the variable for fitness.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/487/0*Zze_o593dlfQ7Mwf.png" /></figure><p>Unfortunately, this only seems to make things better in the log-log plot, but makes them a little bit worse in the linear world.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/479/0*GH3KMrN2FWCOa8DC.png" /></figure><p>Equidistant intercept coefficients also did not do the trick. Well, well. In the end, I decided to give the brute-force method a chance and defined several fitting functions for use with genetic-algorithm and pattern-search solvers, including exponential, third-order and sigmoidal forms. The best version I could come up with was</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/371/1*VMK0r6y80mYLxFx8viJM0Q.png" /></figure><p>This function results in a least-squares error of about 21.35 hours over <em>all</em> data points. 
The following shows the original surface from the table and the synthetic surface from the function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/501/0*Ti9TY5QG2M0tDT1w.png" /><figcaption>Tranter’s corrections vs. synthesized corrections</figcaption></figure><p>A maximum deviation of about 1 hour can be seen clearly in the following error plot for the <em>t</em>30 line, which really seems to be an outlier.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/498/0*SqkZBOxvA-tKo3nO.png" /><figcaption>Error surface for the synthesized correction rule</figcaption></figure><p>For comparison, this is the synthetic correction table:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/594/1*7z2ke9BPyIZszp6-I7sfCw.png" /><figcaption>Synthesized values for Tranter’s corrections to Naismith’s rule</figcaption></figure><p>So now you know!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4ff3937e6898" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[git: pushing to multiple remotes at the same time]]></title>
            <link>https://medium.com/@sunside/git-pushing-to-multiple-remotes-at-the-same-time-38e93c135f19?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/38e93c135f19</guid>
            <category><![CDATA[shell]]></category>
            <category><![CDATA[git]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sat, 16 Jul 2022 10:48:09 GMT</pubDate>
            <atom:updated>2022-07-16T10:48:09.468Z</atom:updated>
            <content:encoded><![CDATA[<p>When working on a project on GitHub, I sometimes like to keep an additional copy floating around on my own server for esoteric reasons. While the following is possible:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dd3b0711673249408623bd4d6bb42faf/href">https://medium.com/media/dd3b0711673249408623bd4d6bb42faf/href</a></iframe><p>it is quite annoying to issue the push command twice — advanced git-fu to the rescue. Some dude over at <a href="http://stackoverflow.com/a/14290145/195651">Stack Overflow</a> pointed out that Git supports the notion of a pushurl, an endpoint used when pushing to a given remote. The fun thing is that every remote may have <em>multiple</em> push URLs, which is exactly what I needed.</p><p>It needs to be said that despite the usage of the --add flag in the following snippet, the first push URL added always overwrites the default URL, so adding only <em>one</em> URL results in the original entry being overruled. So, for the situation given in the example above:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/61dd03d61a059655f3d0de3cc3a16895/href">https://medium.com/media/61dd03d61a059655f3d0de3cc3a16895/href</a></iframe><p>And that’s it. When pushing to origin, Git now pushes to both registered URLs.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=38e93c135f19" width="1" height="1" alt="">]]></content:encoded>
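The commands boil down to the following (both repository URLs are placeholders for your own remotes):

```shell
# Demo setup: any repository with an "origin" remote (URLs are placeholders).
cd "$(mktemp -d)" && git init -q .
git remote add origin git@github.com:user/project.git

# Register both push targets. The first --add replaces the default push
# URL, so the original URL has to be added back explicitly as well:
git remote set-url --add --push origin git@github.com:user/project.git
git remote set-url --add --push origin git@my-server.example:user/project.git

# "git remote -v" now lists one fetch URL and two push URLs for origin.
git remote -v
```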
        </item>
        <item>
            <title><![CDATA[Quadratic interpolation given two points and one derivative]]></title>
            <link>https://medium.com/@sunside/quadratic-interpolation-given-two-points-and-one-derivative-61837bfa1e05?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/61837bfa1e05</guid>
            <category><![CDATA[nonlinear-optimization]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[optimization]]></category>
            <category><![CDATA[interpolation]]></category>
            <category><![CDATA[line-search]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sat, 16 Jul 2022 10:44:24 GMT</pubDate>
            <atom:updated>2023-01-02T19:54:27.893Z</atom:updated>
            <content:encoded><![CDATA[<p>Years ago, while reading up on line search algorithms in nonlinear optimization for neural network training, I came across this problem:</p><p>Given a function <em>f</em>(<em>x</em>), find a quadratic interpolant <em>q</em>(<em>x</em>)=<em>ax²</em>+<em>bx</em>+<em>c</em> such that <em>f(x)</em> and <em>q(x)</em> share two points and have the same derivative at the first of them — i.e., the interpolant fulfills the conditions <em>f</em>(<em>x</em>0)=<em>q</em>(<em>x</em>0), <em>f</em>(<em>x</em>1)=<em>q</em>(<em>x</em>1) and <em>f</em>′(<em>x</em>0)=<em>q</em>′(<em>x</em>0). Basically this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/474/0*sX26akgykTeDr3uD.png" /></figure><p>(The tangent looks a bit off, but you get the idea.)</p><p>So I took out my scribbling pad, wrote down some equations and then, after two pages of nonsense, decided it really wasn’t worth the hassle. It turns out that the simple system</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/186/1*Wz7eUUzOCFBqUAkKGv9ACQ.png" /></figure><p>for</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/178/1*YNr8XdvLg2w2Yu9oNFgtOQ.png" /></figure><p>has the solution</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/488/1*31JPFJFFw3yC1mDWQR8myA.png" /></figure><p>Instead of wasting your time on paper, the solution can be obtained more easily in Matlab using</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/480677c4452882049e01c2ca497f7f8b/href">https://medium.com/media/480677c4452882049e01c2ca497f7f8b/href</a></iframe><p>Obviously — given that this is a line search problem — the whole purpose of this operation is to find an approximation to the local minimiser of <em>f</em>(<em>x</em>), i.e. the root of <em>q</em>′(<em>x</em>). 
This gives</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/428/1*EpuvsIylHvGHSXshg9A_ew.png" /></figure><p>We would also need to check the interpolant’s second derivative <em>q</em>′′(<em>xmin</em>) to ensure the approximated minimiser is indeed a minimum of <em>q</em>(<em>x</em>) by requiring <em>q</em>′′(<em>xmin</em>)&gt;0, with the second derivative given as:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/370/1*9vatqMKs15zzrvMYEf7CcQ.png" /></figure><p>The premise of the line search in minimization problems is usually that the search direction is already a direction of descent. With <em>f</em>′(<em>x</em>0)&lt;0 and <em>f</em>′(<em>x</em>1)&gt;0 (as would typically be the case when bracketing the local minimiser of <em>f</em>(<em>x</em>)), the interpolant should always be (strictly) convex. If these conditions do not hold, there might be no solution at all: one obviously won’t be able to find a quadratic interpolant given the initial conditions for a function that is linear to machine precision. 
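The whole procedure fits in a few lines; here is a Rust sketch of it (function and variable names are my own: f0 and f1 are the function values, g0 the derivative at x0), including a guard for the degenerate, near-linear case:

```rust
/// Coefficients (a, b, c) of q(x) = ax² + bx + c with q(x0) = f0,
/// q(x1) = f1 and q'(x0) = g0. Returns None when the data is
/// numerically linear, in which case no quadratic interpolant exists.
fn quad_interp(x0: f64, f0: f64, g0: f64, x1: f64, f1: f64) -> Option<(f64, f64, f64)> {
    let h = x1 - x0;
    let a = (f1 - f0 - g0 * h) / (h * h);
    if !a.is_finite() || a.abs() < f64::EPSILON {
        return None; // degenerate case: avoids the division by zero below
    }
    let b = g0 - 2.0 * a * x0;
    let c = f0 - (a * x0 + b) * x0;
    Some((a, b, c))
}

fn main() {
    // Interpolating f(x) = x² at x0 = 1, x1 = 3 recovers a = 1, b = c = 0;
    // the stationary point -b / (2a) is a minimum whenever q'' = 2a > 0.
    let (a, b, c) = quad_interp(1.0, 1.0, 2.0, 3.0, 9.0).unwrap();
    println!("a = {a}, b = {b}, c = {c}, x_min = {}", -b / (2.0 * a));
}
```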
In that case, watch out for divisions by zero.</p><p>Last but not least, if the objective is to minimize <em>φ</em>(<em>α</em>)=<em>f</em>(<em>x</em>⃗ <em>k</em>+<em>αd</em>⃗ <em>k</em>) using <em>q</em>(<em>α</em>) , where <em>d</em>⃗ <em>k</em> is the search direction and <em>x</em>⃗ <em>k</em> the current starting point, such that</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/158/1*9fRNPUYmV7cz_U2RBIfcEA.png" /></figure><p>then the above formulas simplify to</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/242/1*Wvm-aPyVSu8EOhXHy1uYWQ.png" /></figure><p>and, more importantly, the local (approximated) minimiser at <em>αmin</em> simplifies to</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/245/1*2l884OBvna7RZOjDi5L7hw.png" /></figure><p>If <em>q</em>(<em>α</em>) is required to be strongly convex, then we’ll observe that</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/131/1*X7RLzq0xaA4VMfP9FMvFEg.png" /></figure><p>for an <em>m</em>&gt;0, giving us that <em>a</em> must be greater than zero (or <em>ϵ</em> , for that matter), which is a trivial check. The following picture visualizes that this is indeed the case:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/376/0*jb4J5_jQgZdjRtif.png" /><figcaption>Convexity of a parabola for different highest-order coefficients <em>a</em> with positive <em>b</em> (top), zero <em>b</em> (middle) and negative <em>b</em> (bottom). Lowest-order coefficient <em>c</em> is left out for brevity.</figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=61837bfa1e05" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Summarized: The E-Dimension — Why Machine Learning Doesn’t Work Well for Some Problems?]]></title>
            <link>https://medium.com/@sunside/summarized-the-e-dimension-why-machine-learning-doesnt-work-well-for-some-problems-9b984bf38c7?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/9b984bf38c7</guid>
            <category><![CDATA[emergence]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sat, 16 Jul 2022 10:27:43 GMT</pubDate>
            <atom:updated>2022-07-16T10:27:43.853Z</atom:updated>
            <content:encoded><![CDATA[<h3>Summarized: The E-Dimension — Why Machine Learning Doesn’t Work Well for Some Problems?</h3><p>The article <em>Why Machine Learning Doesn’t Work Well for Some Problems?</em> (Shahab, 2017) describes the effect of <strong>Emergence</strong> as a barrier to predictive inference.</p><p><em>Emergence</em> is the phenomenon of completely new behavior arising (emerging) from interactions of elementary entities, such as life emerging from biochemistry and collective intelligence emerging from social animals.</p><p>In general, effects of emergence cannot be inferred through a priori analysis of a system (or its elementary entities). While <em>weak</em> emergence can still be understood by observing or simulating the system, emergent qualities from <em>strong</em> emergence cannot be simulated with current systems.</p><p>Sheikh-Bahei suggests interpreting emergence — in a predictive context — as an additional dimension, called the <strong>E-Dimension</strong>, where moving across that dimension results in new qualities emerging. Crossing E-Dimensions during inference leads to reduced predictive power, as emergent qualities cannot necessarily be described as a function of the observed features alone.</p><p>The more E-Dimensions are crossed during inference, the lower the prediction success will be — regardless of the amount of feature noise. Current-generation algorithms do not handle this kind of problem well and further research is required in this area.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*1BUr50KJ1vuAxIUA.png" /><figcaption>Hypothetical example of the E-Dimension concept: Emergence phenomena can be considered as a barrier for making predictive inferences. The further away the target is from features along this dimension, the less information the features provide about the target. 
The figure shows an example of predicting organism level properties (target) using molecular and physicochemical properties (feature space). (Shahab, 2017)</figcaption></figure><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/2c4d9de613d419e7b9ab70762f180bea/href">https://medium.com/media/2c4d9de613d419e7b9ab70762f180bea/href</a></iframe><blockquote>Shahab, S.-B. (2017, July 6). The E-Dimension: Why Machine Learning Doesn’t Work Well for Some Problems? Retrieved March 4, 2018, from <a href="https://www.datasciencecentral.com/profiles/blogs/the-e-dimension-why-machine-learning-doesn-t-work-well-for-some">https://www.datasciencecentral.com/profiles/blogs/the-e-dimension-why-machine-learning-doesn-t-work-well-for-some</a></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9b984bf38c7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The  Game of Life vs. Convolutions]]></title>
            <link>https://medium.com/@sunside/the-game-of-life-vs-convolutions-bc495c962de8?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/bc495c962de8</guid>
            <category><![CDATA[game-programming]]></category>
            <category><![CDATA[rust]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[game-of-life]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Thu, 03 Feb 2022 17:29:28 GMT</pubDate>
            <atom:updated>2022-02-03T17:29:28.620Z</atom:updated>
            <content:encoded><![CDATA[<h3>The Game of Life vs. Convolutions</h3><p><a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Conway’s Game of Life</a> is a self-contained simulation game that evolves “living” cells over multiple generations according to four easy rules.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/508/1*wUWx8GOj6GRNyTfAX4VO_Q.gif" /><figcaption>Game of Life in RGB</figcaption></figure><p>While the game is trivially implemented by sequentially evaluating the rules for every cell on the board, a rather beautiful highly parallelizable solution exists that uses 3×3 kernel convolutions and some clever observations. In the following, we will explore that algorithm and see a GPU-accelerated implementation of it in Rust using the <a href="https://github.com/arrayfire/arrayfire">ArrayFire</a> library. At its core, it will look like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c7092486ed50545f3a2fc0b50989dbd9/href">https://medium.com/media/c7092486ed50545f3a2fc0b50989dbd9/href</a></iframe><h4>Recap: The rules to the game.</h4><p>The game state is evolved over multiple generations. Each cell has two distinct states: Alive (1) or dead (0). 
A cell also has a 3×3 <a href="https://en.wikipedia.org/wiki/Moore_neighborhood">Moore neighborhood</a> of eight neighbors from the top left (north-west) to the bottom right (south-east).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/220/0*QNCD46Msn0rPf7Sg.png" /><figcaption>A cell’s 8-connected Moore neighborhood.</figcaption></figure><p>Starting from a randomly initialized board, a single evolution is performed by applying these steps:</p><ol><li>Any living cell with fewer than two neighbors dies out due to underpopulation.</li><li>Any living cell with two or three living neighbors lives on to the next generation due to its stable environment.</li><li>Any living cell with more than three live neighbors dies due to overpopulation.</li><li>Any dead cell with exactly three living neighbors becomes alive due to reproduction.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/0*RFFUZ6uLUofk-4u2.png" /><figcaption>Rules of Conway’s Game of Life, taken from <a href="https://www.researchgate.net/figure/Rules-of-Conways-Game-of-Life_fig5_339605473">here</a>.</figcaption></figure><p>By repeatedly applying these steps, the game may either converge to a steady state, cycle between recurring states or continue to change infinitely.</p><h3><strong>Simplifying the rules</strong></h3><p>After transferring the rules to pseudo-code, it looks somewhat like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b07946560b26ff416c6ff7084ed9118d/href">https://medium.com/media/b07946560b26ff416c6ff7084ed9118d/href</a></iframe><p>From here, we can make some observations:</p><ol><li>If a cell has fewer than two neighbors, the result will <em>always</em> be 0 regardless of the cell’s current value; this is due to the Underpopulation rule.</li><li>If the cell has exactly three neighbors, the result will <em>always</em> be 1 regardless of the cell’s current value; this is due to both the Stable 
Environment and Reproduction rules.</li><li>We do need to check for exactly two neighbors to ensure a cell keeps surviving (1) according to the Stable Environment rule: Note that with exactly two neighbors no new cell is born, so the outcome depends on the cell’s current state.</li><li>More than three neighbors <em>always</em> cause a cell to be 0 regardless of the cell’s current value; this is due to the Overpopulation rule.</li></ol><p>This allows us to ignore a couple of tests for the cell’s previous value; in pseudo code:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/42179cb0159539a584b859e423095c4d/href">https://medium.com/media/42179cb0159539a584b859e423095c4d/href</a></iframe><p>When we ignore the cell’s current value for a moment, we can see the following pattern:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ed155306e8fbd0bf3827fe92539cbc18/href">https://medium.com/media/ed155306e8fbd0bf3827fe92539cbc18/href</a></iframe><p>Upon closer inspection, however, we see that both lines are simply the negation of each other:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7ae979fb28931661d6fb7272e9a707f6/href">https://medium.com/media/7ae979fb28931661d6fb7272e9a707f6/href</a></iframe><p>As a consequence of that, only the Stable Environment and Reproduction rules are required to determine the entire state progression. The missing piece is, of course, the current state of the cell:</p><ul><li>As we know from observation 2 and the Reproduction rule, a cell will always exist if there are exactly three neighbors. We’ll call this the must_exist condition; it is additive (i.e. we “add life” to a cell).</li><li>From observation 3 and the Stable Environment rule, a cell only <em>continues</em> to exist if it existed before. We’ll call this the can_exist condition; it is multiplicative (i.e. 
we “zero out” dead cells).</li></ul><p>Mathematically, we can now express the resulting state as simply a multiplication and addition:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c4ec6b45d9edea9be5a18861f95b1a74/href">https://medium.com/media/c4ec6b45d9edea9be5a18861f95b1a74/href</a></iframe><p>How do we get the number of neighbors of the cell? This is where a 2D convolution enters the stage: Convolving the board with the 8-neighborhood kernel shown below simply sums up all living cells (valued 1) around the center, which itself is ignored.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/379/1*sFMpwBYYPe1a_wPIdBrE_w.png" /><figcaption>An 8-neighborhood kernel.</figcaption></figure><p>The result of that operation is therefore a direct measurement of the neighborhood size. All we need to do then is to compare that value against our thresholds of <em>two</em> and <em>three</em> as established above — and that’s it. If you want a nice refresher on how convolutions work, particularly in image processing, head over to the excellent post <a href="https://setosa.io/ev/image-kernels/">Image Kernels explained visually</a>.</p><h3>Putting it together in Rust and ArrayFire</h3><p>As just explained, we want to determine the neighborhood size using a 2-dimensional convolution with the neighborhood kernel via <a href="https://arrayfire.org/arrayfire-rust/arrayfire/fn.convolve2.html">convolve2</a>. To specify our kernel in ArrayFire we use a 3×3×1×1 dimensionality, which reads approximately as <em>height</em>, <em>width</em>, <em>number of channels</em> and a “<em>batch</em>” dimension we can safely ignore here. 
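Independent of ArrayFire, the same update can be sketched on the CPU in plain Rust (my own sketch, not the article's GPU code; treating cells outside the board as dead is an arbitrary choice for this sketch):

```rust
/// One Game of Life step: count each cell's 8-neighborhood (the manual
/// equivalent of convolving with the 8-neighborhood kernel), then apply
/// new = must_exist + can_exist * old, i.e. (n == 3) + (n == 2) * old.
fn step(board: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let (h, w) = (board.len() as i32, board[0].len() as i32);
    let mut next = vec![vec![0u8; w as usize]; h as usize];
    for y in 0..h {
        for x in 0..w {
            // Sum the eight neighbors; cells outside the board count as dead.
            let mut n = 0u8;
            for dy in -1..=1 {
                for dx in -1..=1 {
                    let (ny, nx) = (y + dy, x + dx);
                    if (dy != 0 || dx != 0) && ny >= 0 && ny < h && nx >= 0 && nx < w {
                        n += board[ny as usize][nx as usize];
                    }
                }
            }
            let old = board[y as usize][x as usize];
            next[y as usize][x as usize] = (n == 3) as u8 + (n == 2) as u8 * old;
        }
    }
    next
}

fn main() {
    // A horizontal blinker turns into a vertical one after a single step.
    let mut board = vec![vec![0u8; 5]; 5];
    (1..4).for_each(|x| board[2][x] = 1);
    let next = step(&board);
    assert_eq!(next[1][2] + next[2][2] + next[3][2], 3);
}
```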
When used against a multi-channel input such as an RGB image, this format will process each color channel independently of the others, as shown in the image in this post’s header.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/439f6d0202a9b8d804cb5b1513c9984a/href">https://medium.com/media/439f6d0202a9b8d804cb5b1513c9984a/href</a></iframe><p>Due to GPU requirements we cannot operate on a binary image directly and instead use a 32-bit floating point array. As a consequence, the addition used by our update will eventually send the cell values outside their allowed range. To mitigate that, we introduce the method clamp_range that limits the value range to the range 0..1 using ArrayFire’s <a href="https://arrayfire.org/arrayfire-rust/arrayfire/fn.clamp.html">clamp</a> function:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cc7da92907020e103216e4b38e72bddb/href">https://medium.com/media/cc7da92907020e103216e4b38e72bddb/href</a></iframe><p>Finally, we update the state using the method you already know by comparing counts exactly using ArrayFire’s <a href="https://arrayfire.org/arrayfire-rust/arrayfire/fn.eq.html">eq</a> function:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6ee99941d847d485e0cac5a4d3263cd5/href">https://medium.com/media/6ee99941d847d485e0cac5a4d3263cd5/href</a></iframe><p>With that, Conway’s Game of Life runs on your GPU.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Ftenor.com%2Fembed%2F17552382&amp;display_name=Tenor&amp;url=https%3A%2F%2Ftenor.com%2Fview%2Fblow-mind-mind-blown-explode-gif-17552382&amp;image=https%3A%2F%2Fmedia.tenor.com%2FtvFWFDXRrmMAAAAM%2Fblow-mind-mind-blown.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=tenor" width="600" height="400" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/aff075b67ed9f15548fb75c0a7694776/href">https://medium.com/media/aff075b67ed9f15548fb75c0a7694776/href</a></iframe><p>As usual, you can find the entire code <a href="https://github.com/sunsided/rust-arrayfire-experiments">on GitHub</a>.</p><p><a href="https://github.com/sunsided/rust-arrayfire-experiments">GitHub - sunsided/rust-arrayfire-experiments: Toying around with ArrayFire in Rust</a></p><p>Take care, stay safe and have fun!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bc495c962de8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Doge, Shiba, FartCoin — What exactly IS a cryptocurrency?]]></title>
            <link>https://medium.com/cryptostars/doge-shiba-fartcoin-what-exactly-is-a-cryptocurrency-58e2bcf96b4c?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/58e2bcf96b4c</guid>
            <category><![CDATA[fart-coin]]></category>
            <category><![CDATA[binance-smart-chain]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[misconception]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Mon, 31 Jan 2022 18:31:02 GMT</pubDate>
            <atom:updated>2022-02-02T16:21:22.229Z</atom:updated>
            <content:encoded><![CDATA[<h3>Doge, Shiba, FartCoin — What exactly IS a cryptocurrency?</h3><p>To the majority of us, currencies and wallets make sense from our everyday experience in the physical realm. As such, I too had a basic idea of what cryptocurrencies are and how they work — buy them, send them, keep your wallet in a vault — but things get slightly weird once you look under the hood.</p><p>So let’s do that real quick.</p><h3>Misconception #1: Those coins are in your wallet.</h3><p>They’re not, and it’s easy to believe that this is how it works since that’s what we’ve been doing all our lives: You open your wallet and put a coin in. Interestingly, with crypto this is not at all how it works.</p><p>When you create a crypto wallet on a blockchain — let’s say Binance Smart Chain, a sibling of Ethereum — you will be handed a “wallet address” that looks somewhat like this:</p><pre>0xC3DF3fe97a0d6054Da7f89262b19285a9eEf3C2A</pre><p>Other blockchains use different formats, but the idea on all of them is the same: This address is the publicly visible part of your wallet, meant to be shared, while the other part (a secret key) is kept private to you, and only you.</p><p>However, cryptocurrency tokens are never actually sent <em>to</em> your wallet — instead it is your wallet address that is sent to the cryptocurrency. Those cryptocurrencies store your wallet, not the other way around.</p><p>Let me explain.</p><h3>Misconception #2: Crypto contracts are complex.</h3><p>Once you start learning about the potential and scope of modern blockchain technology, cryptocurrencies appear to be complicated beasts even to programmers, complex “smart contracts” running on hundreds of systems … doing finance, somehow.</p><p>What I didn’t expect is what actually happens behind the curtain. One recent example of this is a dear project in the making, <a href="https://www.fartcoin.us/">FartCoin</a>. 
A “token” — or cryptocurrency — is simply a small program running “on the blockchain”, and that program is also the piece that knows about your wallet address.</p><p>Like your wallet, the token itself has an address, which in the case of FartCoin looks like this:</p><pre><a href="https://bscscan.com/token/0xdfaa1f2ba4550a3f099ca26ac73e4e4f27cf5ca3">0xdfaa1f2ba4550a3f099ca26ac73e4e4f27cf5ca3</a></pre><p>This address in particular <a href="https://bscscan.com/address/0xdfaa1f2ba4550a3f099ca26ac73e4e4f27cf5ca3">can be investigated</a> on a website called BscScan, a Binance Smart Chain explorer that allows you to view smart contracts, their transactions and wallets on that blockchain. In some cases — as with FartCoin here — it also allows you to inspect the actual source code of the token’s smart contract.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/964/1*S4922FZ9bxcNwRZN74b-rA.png" /><figcaption>The contract code view on BscScan.</figcaption></figure><p>It is this code that represents the entire cryptocurrency aspect of it. Here is the FartCoin code, in all its glory. If you don’t understand the code just yet, bear with me.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8fe70fda857ca1b77a4dd916b8253fe0/href">https://medium.com/media/8fe70fda857ca1b77a4dd916b8253fe0/href</a></iframe><p>That’s all.</p><p>The first thing to take away from this is that the entire smart contract for a valid, working cryptocurrency is no more than 44 lines of code, whitespace included.</p><p>The key pieces here are in lines 4 and 24–26. Line 4 contains a table that stores the number of tokens held by every address, as a positive number. 
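To make the idea concrete, here is a toy model of such a balances table together with a transfer function, written in Rust rather than Solidity and with all names made up:

```rust
use std::collections::HashMap;

/// A toy "token": nothing more than a table mapping addresses to balances.
struct Token {
    balances: HashMap<String, u64>,
}

impl Token {
    /// Move `value` tokens by editing two numbers in the table. No token
    /// ever leaves the contract; only the bookkeeping changes.
    fn transfer(&mut self, from: &str, to: &str, value: u64) -> bool {
        // The caller must hold a high enough balance for the transfer.
        let Some(bal) = self.balances.get_mut(from) else { return false };
        if *bal < value {
            return false;
        }
        *bal -= value;
        *self.balances.entry(to.to_string()).or_insert(0) += value;
        // A real contract would now emit a Transfer event to the chain.
        true
    }
}

fn main() {
    let mut token = Token {
        balances: HashMap::from([("0xAlice".to_string(), 100)]),
    };
    assert!(token.transfer("0xAlice", "0xBob", 40));
    println!("{:?}", token.balances);
}
```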
You can imagine it somewhat like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3e6b5f5d7837de66cf610ebae17dd4d3/href">https://medium.com/media/3e6b5f5d7837de66cf610ebae17dd4d3/href</a></iframe><p>This table is modified by two standard cryptocurrency “functions” of the contract, transfer and transferFrom. Apart from serving two slightly different purposes, they essentially perform the same operation: Transferring “tokens” from one wallet address to another. They do that by changing numbers in the table:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4e0893603ebb808abb916eb30ac9fe80/href">https://medium.com/media/4e0893603ebb808abb916eb30ac9fe80/href</a></iframe><p>Here’s how it works:</p><ol><li>First, the wallet address of the caller is tested for a high enough balance to perform the transfer in the first place.</li><li>Then, the balance of the target wallet is increased by the specified amount (value). This is simply an addition on the value in the table.</li><li>Next, the balance of the caller is decreased by the specified amount. This is a subtraction on the value in the table.</li><li>Lastly, a statement about this change is emitted to the blockchain for the world to see; this acts as proof of the transaction.</li></ol><p>And that’s really all there is to it; this is what makes you “have” a cryptocurrency “in” your wallet.</p><p>Interestingly, no actual token ever “left the contract” or “entered your wallet”. Your wallet balance simply is a number in a table held by that program, and all you do is prove to the program that this number belongs to you.</p><h3>Misconception #3: Your wallet knows your balances.</h3><p>It will eventually, but it has to cheat a bit.</p><p>You might have noticed that your wallet software like MetaMask, TrustWallet or Ledger Live is aware of some cryptocurrencies but not of others. 
Specifically, very-much-altcoins usually require you to first make them known to your wallet by importing a “custom token”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/354/1*SevD3O54SRbmtJdCZKSirg.png" /><figcaption>MetaMask’s dialog to import a custom token.</figcaption></figure><p>The key here is to realize that these tokens were never “in your wallet” in the first place. Instead, your wallet software is aware of some well-known cryptocurrency contracts and will actively query them for any balance assigned to your wallet address. All it does is show these results to you.</p><p>When you import a custom token, you simply make another contract known to the wallet software, which it will then look up for you every once in a while.</p><p>While this is slightly annoying, it is both necessary and safe: As you will find out below, not every cryptocurrency contract is benign. By not being able to see any arbitrary contract in the first place, your wallet protects you from scams by requiring you to actively make the decision to register a “nonstandard” contract.</p><p>That said, even if your wallet did not have that mechanism: There simply are too many cryptocurrencies out there, and scanning all of them for your balance individually is a close to impossible task.</p><h3>Misconception #4: Burning tokens destroys them.</h3><p>Sooner or later you’ll hear someone speak of a “burning event”, where some amount of a cryptocurrency is “burned” to reduce the number of available tokens in circulation. 
It evokes the idea of setting a pile of money on fire in order to remove it for good.</p><p>Let’s revisit the balances table example from above:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3e6b5f5d7837de66cf610ebae17dd4d3/href">https://medium.com/media/3e6b5f5d7837de66cf610ebae17dd4d3/href</a></iframe><p>In here, you’ll see a special address called</p><pre>0x000000000000000000000000000000000000dead</pre><p>On Ethereum-like blockchains such as Binance Smart Chain, this address is commonly known as the “Null Address” or “Burn Address”, and like any other address it can be observed on your blockchain’s explorer. Take <a href="https://bscscan.com/token/0xdfaa1f2ba4550a3f099ca26ac73e4e4f27cf5ca3?a=0x000000000000000000000000000000000000dead">this example</a> from FartCoin:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/980/1*veMSUT08Lpk20GuWUrNZ4w.png" /><figcaption>The Null Address balance for FartCoin</figcaption></figure><p>The trick here is that this address does not belong to any accessible wallet, so any balance sent there is lost for good. However, as the common theme of this story goes, the tokens still exist and therefore the total supply didn’t change; they’re just taken out of circulation as they can’t be transferred anywhere else anymore — that is, if the token’s contract is benign, of course.</p><h3>Let’s talk about safety a bit.</h3><h4>If you lose your wallet, your tokens are gone.</h4><p>That’s the short version.</p><p>What happens under the hood is that the thing you lose is the proof that an address belongs to you, and only by extension that some tokens in a contract’s “balances” table are yours. The “private key” of your wallet always generates the same “public address”, and as long as you have that private key, you can identify yourself to a contract — so keep it safe.</p><p>The “tokens”, in either case, will always remain in the contract, until the end of days. 
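<p>Since burning is just a transfer to an address nobody controls, this invariant can be demonstrated in a few lines. Again, a hedged Python sketch rather than real contract code; only the burn address is the real one from above:</p>

```python
# The well-known burn address; the "owner" entry is a made-up example.
DEAD = "0x000000000000000000000000000000000000dead"
balances = {"0xOwner...": 1_000_000, DEAD: 0}

def burn(caller: str, value: int) -> None:
    # Burning only moves numbers within the table; nothing is deleted.
    assert balances[caller] >= value, "insufficient balance"
    balances[DEAD] += value      # piles up at the inaccessible address
    balances[caller] -= value    # leaves circulation

supply_before = sum(balances.values())
burn("0xOwner...", 250_000)
print(balances[DEAD])                           # 250000
print(sum(balances.values()) == supply_before)  # True: total supply unchanged
```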
A soothing thought, in a way.</p><h4>Airdrop scams: If it’s too good to be true, it’s probably not.</h4><p>As a consequence of the balances table existing, cryptocurrency tokens might still “show up” in your wallet without requiring any interaction on your side. This is what’s called an Airdrop; for it to happen, all it takes is for someone — like the owner — to instruct the contract that your address now has a nonzero balance, and now you “possess” it.</p><p>So far that’s nice; we all like free money. The dangerous part is that scammers do use this to their advantage by luring you into interacting with token contracts you don’t know.</p><p>There are plenty of ways to scam people, but here are some:</p><ul><li>You may think the value of an airdropped cryptocurrency is high, so you buy more. The contract, however, might be forged to only let the <em>owner</em> sell. As people buy, the price goes up and the owner leaves with a fortune. Before you interact with an unknown token, look it up on a blockchain explorer. If you see a bunch of failed transactions like <a href="https://bscscan.com/txs?a=0x9395283aFbDDf72220e64D181Da119Dc282f87Af&amp;f=1">here</a>, steer away.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/968/1*M0vIywxg-q7EI6ewQAXnFw.png" /><figcaption>An example of a forged contract that prevents selling.</figcaption></figure><ul><li>The token might have a “minting” function <em>of any name</em> that allows the owner to create new tokens at will. When enough people have bought the cryptocurrency, the owner can increase his own balance and cash it in, leaving everyone else with a worthless token and their money gone.</li><li>The token name might indicate a website you will then want to visit. A Web3 website might have malicious code that interacts with your wallet software directly after asking you to connect your wallet to it. 
In this case, it is the website itself that will execute contracts on your behalf, like swapping your tokens for cash, then sending it to the scammer. The obvious solution: Never connect your wallet to websites you don’t know or trust, and never blindly sign or approve any transactions without double- or triple-checking their contents.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FkjoczD_kZ0yLcnr8TYZzA.png" /><figcaption>A scam token’s Web3 website that tricks you into connecting your wallet to it.</figcaption></figure><ul><li>Every operation you perform on the cryptocurrency contract yourself — like sending any amount of ETH or BNB <em>to</em> the contract, or executing any other function — can trigger arbitrary code you might not be aware of. This can lead to entirely unexpected results, as the next point will highlight.</li><li>A scammer might instruct you to send a zero-balance (i.e. 0 ETH, 0 BNB, …) payment along with some cryptic data. Since you do not appear to be sending any money here, this seems safe, but the data might just be an encoded instruction. Approaches like this are likely to trick you into actually calling another function of the contract without your knowledge — for example, it could make you accept and perform a swap trade of an arbitrary token you possess such that the contract can drain your wallet on your behalf.</li></ul><h4>Opaque token contracts.</h4><p>Note that a token contract isn’t always clearly visible as in the example I made above. Instead, it might show up as a bunch of hexadecimal values:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/974/1*1BUuR1mEIhCfnhAqhAsjgw.png" /><figcaption>A compiled token contract.</figcaption></figure><p>This isn’t a problem in itself, as <a href="https://etherscan.io/address/0x2a98f128092abbadef25d17910ebe15b8495d0c1#code">DogeCoin’s contract</a> shows. 
Trustworthy creators will, however, generally make an effort to increase transparency by validating and showing the original source code, as is the case in the <a href="https://bscscan.com/address/0xdfaa1f2ba4550a3f099ca26ac73e4e4f27cf5ca3#code">FartCoin</a> example.</p><p>Be careful, however: just because the code is listed doesn’t mean the contract is trustworthy — well-crafted traps are hard to detect even in plain sight, even by professionals, and you might not want to pay for that lesson.</p><p>A prime example of a very well documented contract is that of <a href="https://etherscan.io/address/0x95ad61b0a150d79219dcf64e1e6cc01f0b64c4ce#code">Shiba Inu</a> on the Ethereum blockchain. You will find that it is about 500 lines long, but most of that is comments on the code, explaining the implementation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/858/0*NtLnmuBHTqmK9aeY.png" /></figure><h4>The silver lining. 🙂</h4><p>A contract cannot steal any other crypto tokens from your wallet on its own, because it would have to have your private key to prove itself to the other contract as the rightful owner.</p><p>As a result, receiving a coin typically isn’t dangerous in itself. Different blockchains may have different rules, however; Stellar, for example, requires you to actively establish a “trust line” before you can even receive a new kind of cryptocurrency for exactly that reason.</p><p>So — if you find a token in your wallet that you didn’t expect to be there, simply ignore it. Your wallet is safe.</p><h3>TL;DR</h3><p>A cryptocurrency, and in particular any of the abundant very-much-altcoins in circulation these days, is often nothing more than a very small program shifting around a lot of value. To be on the safe side, when you hear “new cryptocurrency”, think of it as a computer program you just downloaded from the internet. 
You might not want to click that one right away.</p><p>That said, while a token’s contract might just be some forty lines long, it may still change the world. Go <a href="https://www.fartcoin.us/">FartCoin</a>, you little rascal.</p><p>Take care, stay safe and have fun!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=58e2bcf96b4c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/cryptostars/doge-shiba-fartcoin-what-exactly-is-a-cryptocurrency-58e2bcf96b4c">Doge, Shiba, FartCoin — What exactly IS a cryptocurrency?</a> was originally published in <a href="https://medium.com/cryptostars">CryptoStars</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Converting between types in increasingly absurd ways]]></title>
            <link>https://medium.com/@sunside/converting-between-types-in-increasingly-absurd-ways-89414ae6eb7c?source=rss-f0f0d7cabfa3------2</link>
            <guid isPermaLink="false">https://medium.com/p/89414ae6eb7c</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[dotnet-core]]></category>
            <category><![CDATA[source-generator]]></category>
            <category><![CDATA[csharp]]></category>
            <dc:creator><![CDATA[Markus Mayer]]></dc:creator>
            <pubDate>Sun, 09 Jan 2022 23:03:36 GMT</pubDate>
            <atom:updated>2022-01-09T23:18:42.065Z</atom:updated>
<content:encoded><![CDATA[<p>Today I was answering a C# question on StackOverflow that got me thinking. The request was slightly odd, but reasonable: Given a SomeType and a SomeTypeDTO class as well as a corresponding extension method</p><pre>public static <strong>SomeTypeDTO</strong> <strong>ToDTO</strong>(this <strong>SomeType</strong> data) { ... }</pre><p>how can one write a generic method</p><pre>public static <strong>TDTO</strong> <strong>ToDTO&lt;TDTO, TData&gt;</strong>(<strong>TData</strong> data) { ... }</pre><p>that forwards to the right extension method(s)? Some ten minutes into thinking about how to explain why it’s complicated, why it might not even be a good idea to begin with, and how <a href="https://github.com/AutoMapper/AutoMapper">AutoMapper</a> might solve the issue … the OP deleted <a href="https://stackoverflow.com/questions/70642141/calling-an-extension-method-according-to-a-type-in-a-class-where-there-are-sever">the question</a>.</p><p>But sometimes, the only reason to really <em>do</em> something is because you can, and so I set off to explore the solution space. 
In this post, I will go through implementing some approaches using</p><ol><li><a href="#f4f2">Simple Generics</a> (TL;DR: won’t work)</li><li><a href="#3f90">Reflection with </a><a href="#3f90">MethodInfo invocations</a>,</li><li><a href="#49d6">Reflection with runtime compilation of Lambda expressions,</a></li><li><a href="#32df">Compile-time Source Generation</a> and</li><li><a href="#a3b7">Using AutoMapper</a>, for the sake of sanity.</li></ol><p>You can find the source code for this blog post on <a href="https://github.com/sunsided/medium-absurd-conversion">GitHub</a>:</p><p><a href="https://github.com/sunsided/medium-absurd-conversion">GitHub - sunsided/medium-absurd-conversion: Code for the Converting between types in increasingly absurd ways medium post.</a></p><p>To have a starting point, these are the data classes and DTOs the author used:</p><pre>namespace ExtensionMethods70642141;<br><br>public class Student<br>{<br>    public string Name { get; set; }<br>}<br><br>public class StudentDTO<br>{<br>    public string Name { get; set; }<br>}<br><br>public class Teacher<br>{<br>    public string Name { get; set; }<br>}<br><br>public class TeacherDTO<br>{<br>    public string Name { get; set; }<br>}</pre><p>To convert, the author provided these extension methods:</p><pre>namespace ExtensionMethods70642141;<br><br>public static class PeopleExtension<br>{<br>    public static StudentDTO ToDTO(this Student student) =&gt; new()<br>    {<br>        Name = student.Name<br>    };<br><br>    public static TeacherDTO ToDTO(this Teacher teacher) =&gt; new()<br>    {<br>        Name = teacher.Name<br>    };<br>}</pre><p>And to recap, the question is:</p><pre>namespace ExtensionMethods70642141;<br><br>public static class GenericPeopleConversion<br>{<br>    public static TDTO ToDTO&lt;TDTO, TData&gt;(TData data)<br>    {<br>        throw new System.NotImplementedException(&quot;how?&quot;);<br>    }<br>}</pre><p>The code used for testing would look like this:</p><pre>using 
Xunit;<br><br>namespace ExtensionMethods70642141;<br><br>public class SmokeTests<br>{<br>    [Fact]<br>    public void Smoke()<br>    {<br>        var student = new Student { Name = &quot;Student Name&quot; };<br>        var teacher = new Teacher { Name = &quot;Teacher Name&quot; };<br><br>        var studentDto = student.ToDTO();<br>        var teacherDto = teacher.ToDTO();<br><br>        Assert.<em>Equal</em>(student.Name, studentDto.Name);<br>        Assert.<em>Equal</em>(teacher.Name, teacherDto.Name);<br>    }<br>}</pre><h3>Approach #1: Generics won’t help us much</h3><p>At a first glance, one might be tempted to convert the arguments to constrained generics first, such that</p><pre>namespace ExtensionMethods70642141;<br><br>public static class PeopleExtension<br>{<br>    public static StudentDTO ToDTO&lt;<strong>TData</strong>&gt;(<strong>this TData</strong> student) <br>        where TData : Student<br>        =&gt; new()<br>    {<br>        Name = student.Name<br>    };<br><br>    public static TeacherDTO ToDTO&lt;<strong>TData</strong>&gt;(<strong>this TData</strong> teacher) <br>        where TData : Teacher<br>        =&gt; new()<br>    {<br>        Name = teacher.Name<br>    };<br>}</pre><p>But the moment you do that, you’ll be greeted with compiler error <a href="https://docs.microsoft.com/en-us/dotnet/csharp/misc/cs0111">CS0111</a> informing you that there already is a method with the same name and argument:</p><pre>Nope.cs(12, 30): [CS0111] Type &#39;PeopleExtension&#39; already defines a member called &#39;ToDTO&#39; with the same parameter types</pre><p>One way to resolve this issue is to distribute the extension methods across different classes, e.g.</p><pre>public static class <strong>PeopleExtension1</strong><br>{<br>    public static StudentDTO ToDTO&lt;<strong>TData</strong>&gt;(this <strong>TData</strong> student)<br><strong>        where TData : Student</strong><br>        =&gt; new()<br>        {<br>            Name = student.Name<br>       
 };<br>}<br><br>public static class <strong>PeopleExtension2</strong><br>{<br>    public static TeacherDTO ToDTO&lt;<strong>TData</strong>&gt;(this <strong>TData</strong> teacher)<br><strong>        where TData : Teacher</strong><br>        =&gt; new()<br>        {<br>            Name = teacher.Name<br>        };<br>}</pre><p>Older compilers would fail even with this, but at some point in the recent past (as of 2022), a tie breaker was implemented to assist. Now in order to get the TDTO output type, we have to introduce both a new() constraint and one on the proper DTO type, as otherwise we would be unable to create an instance or assign the property:</p><pre>public static class PeopleExtension1<br>{<br>    public static <strong>TDTO</strong> ToDTO&lt;TDTO, TData&gt;(this TData student)<br>        where TData : Student<br><strong>        where TDTO : StudentDTO, new()</strong><br>        =&gt; new()<br>        {<br>            Name = student.Name<br>        };<br>}<br><br>public static class PeopleExtension2<br>{<br>    public static <strong>TDTO</strong> ToDTO&lt;TDTO, TData&gt;(this TData teacher)<br>        where TData : Teacher<br><strong>        where TDTO : TeacherDTO, new()</strong><br>        =&gt; new()<br>        {<br>            Name = teacher.Name<br>        };<br>}</pre><p>Now calling the extension methods still works:</p><pre><strong>[</strong>Fact<strong>]<br></strong>public void Smoke()<br><strong>{<br>    </strong>var student = new Student { Name = &quot;Student Name&quot; };<br>    var teacher = new Teacher { Name = &quot;Teacher Name&quot; };<br><br>    var studentDto = <strong>student.</strong>ToDTO&lt;<strong>StudentDTO, Student</strong>&gt;();<br>    var teacherDto = <strong>teacher.</strong>ToDTO&lt;<strong>TeacherDTO, Teacher</strong>&gt;();<br><br>    Assert.<em>Equal</em>(student.Name, studentDto.Name);<br>    Assert.<em>Equal</em>(teacher.Name, teacherDto.Name);<br><strong>}</strong></pre><p>No, sadly, we didn’t really gain anything: We still have to know the exact type in order to pick the right extension method. The moment we erase the actual types …</p><pre>public TDTO Convert&lt;TDTO, TData&gt;(TData data) =&gt;<br>    data.ToDTO&lt;TDTO, TData&gt;();</pre><p>… the compiler calls us a clown using error <a href="https://docs.microsoft.com/en-us/dotnet/csharp/misc/cs0314">CS0314</a>:</p><pre>SmokeTests.cs(21, 14): [CS0314] The type &#39;TData&#39; cannot be used as type parameter &#39;TData&#39; in the generic type or method &#39;PeopleExtension1.ToDTO2&lt;TDTO, TData&gt;(TData)&#39;. There is no boxing conversion or type parameter conversion from &#39;TData&#39; to &#39;ExtensionMethods70642141.Student&#39;.</pre><p>And since we just split the extension methods into multiple classes, we also cannot call the method directly anymore.</p><p>Scratch that.</p><h3>Approach #2: Reflection with MethodInfo invocations</h3><p>The next best thing to typing out an if cascade of every combination of accepted input and output type is to look types up at runtime. Since reflection is costly, we can cache our lookup results in a static dictionary, and now the only question is whether we want to look up each specific method pessimistically once it is required, or to optimistically look up all ToDTO methods once, and only once. 
Below I will go with the first approach as it is slightly less convoluted.</p><p>The main part here will be the introspection of the PeopleExtension class for all its public static methods, finding every candidate named ToDTO that has an output type matching TDTO and exactly one argument matching the TData type:</p><pre>var methodInfo = <strong>typeof(PeopleExtension)</strong><br>    .<strong>GetMethods</strong>(BindingFlags.<strong><em>Static</em></strong><em> </em>| BindingFlags.<strong><em>Public</em></strong>)<br>    .<strong>Where</strong>(method =&gt; <br>           method.Name.Equals(nameof(PeopleExtension.ToDTO)))<br>    .<strong>Where</strong>(method =&gt; outputType == method.ReturnType)<br>    .<strong>FirstOrDefault</strong>(method =&gt; <br>         inputType == method.GetParameters()<br>                            .SingleOrDefault()?<br>                            .ParameterType);</pre><p>This provides us with the MethodInfo of the relevant method, if one exists (methodInfo would be null otherwise).</p><p>Since we are only looking at one specific class (namely PeopleExtension), there can be at most one Y ToDTO(X data) method per type X as otherwise the methods would differ only in their return type, which is forbidden. We can therefore introduce a static cache dictionary keyed by the concrete type X that stores the relevant MethodInfo, allowing us to skip the reflection call next time around:</p><pre>private static readonly ConcurrentDictionary&lt;Type, MethodInfo&gt; <em>_cache </em>= new();</pre><p>If we were to look up all methods at this point, a simple Type key might still be sufficient. The worst case is a call with unrelated arguments, e.g. ToDTO&lt;TeacherDTO, Student&gt;(), in which case we would happily match with the StudentDTO ToDTO(Student data) method, then fail trying to cast the return values. 
But that is the main issue with runtime lookups in the first place, so the only problematic thing here is that the exception would look weird to the caller — but one could still double-check the retrieved MethodInfo.</p><p>Now the only thing left to do is to call the actual method, and we can achieve this with this beauty:</p><pre>(TDTO)<strong>methodInfo.Invoke</strong>(<strong>null</strong>, new object?[] { <strong>data</strong> })!;</pre><p>The null here informs the Invoke method that we are calling a static method (there is no instance to operate on), and the array we’re passing in contains the arguments.</p><p>To sum it up:</p><pre>private static readonly ConcurrentDictionary&lt;Type, MethodInfo&gt; <em>_cache </em>= new();</pre><pre>private static TDTO ToDTO&lt;TDTO, TData&gt;(TData data)<br><strong>{<br>    </strong>var inputType = typeof(TData);<br>    var outputType = typeof(TDTO);<br>    if (!<strong><em>_cache</em>.TryGetValue</strong>(inputType, out var methodInfo))<br>    {<br>        methodInfo = <strong><em>GetMatchingMethodInfo</em></strong>(outputType, inputType);<br>        if (methodInfo is null)<br>        {<br>            throw new InvalidOperationException($&quot;No conversion from {inputType} to {outputType} was registered&quot;);<br>        }<br><br>        <em>_cache</em>.TryAdd(inputType, methodInfo);<br>    }<br><br>    return (TDTO)<strong>methodInfo.Invoke</strong>(null, new object?[] { data })!;<br>}</pre><pre>private static MethodInfo? 
<em>GetMatchingMethodInfo</em>(<br>    Type outputType, Type inputType) =&gt;<br>    typeof(PeopleExtension)<br>        .GetMethods(BindingFlags.<em>Static </em>| BindingFlags.<em>Public</em>)<br>        .Where(method =&gt;<br>            method.Name.Equals(nameof(PeopleExtension.ToDTO)))<br>        .Where(method =&gt; outputType == method.ReturnType)<br>        .FirstOrDefault(method =&gt; <br>            inputType == method.GetParameters()<br>                               .SingleOrDefault()?.ParameterType);</pre><h3>Approach 3: Reflection with runtime compilation of Lambda expressions</h3><p>The caching already removed the majority of concerns with the reflection-based approach, but we are still left with a dynamic invocation from the MethodInfo. One way to improve on this is to <em>somehow</em> get a delegate (i.e. function pointer) to call the method directly, and we’ll be doing this by wiring up a method call expression, then compiling it — at runtime.</p><p>The spot where we put the scalpel is right between having obtained a MethodInfo and storing it in the dictionary. If we were to write a specialized TDTO ToDTO&lt;TDTO, TData&gt;(TData data) method ourselves, knowing precisely that TDTO and TData are always referring to the specific types StudentDTO and Student, we would probably be doing something like this:</p><pre>private TDTO ToDTO&lt;TDTO, TData&gt;(TData data) =&gt;<br>   (TDTO)ConvertWithTypesErased((object)data);</pre><pre>private object ConvertWithTypesErased(object data)<br>{<br>    var student = (Student)data!;             // unary conversion<br>    var dto = PeopleExtension.ToDTO(student); // method call<br>    return (object)dto;                       // unary conversion<br>}</pre><p>First, we’d have to forget about the TData and TDTO type signatures by indirecting through an object cast. 
If we didn’t, <a href="https://docs.microsoft.com/en-us/dotnet/csharp/misc/cs0030">CS0030</a> would be there to greet us:</p><pre>ReflectedWithDelegateTests.cs(71, 23): [CS0030] Cannot convert type &#39;TData&#39; to &#39;ExtensionMethods70642141.Student&#39;</pre><p>We then call the actual method, and convert back to the generic TDTO type. The ConvertWithTypesErased method above follows a Func&lt;object, object&gt; signature, and this is exactly what we’ll be using for our cache:</p><pre>private static readonly ConcurrentDictionary&lt;Type, Func&lt;object, object&gt;&gt; <em>_cache </em>= new();</pre><p>In order to model the ConvertWithTypesErased method above using System.Linq.Expressions, we will start by defining a function parameter of type object dubbed dataObj. We will use this twice, once for defining the method and once for calling it. We then convert from object to our known type and pass that into a method call expression.</p><pre>var inputObject = <strong>Expression.<em>Parameter</em></strong>(<br>    typeof(object), &quot;dataObj&quot;);</pre><pre>var inputCastToProperType = <strong>Expression.<em>Convert</em></strong>(<br>    inputObject, inputType);</pre><pre>var callExpr = <strong>Expression.<em>Call</em></strong>(<br>    null, methodInfo, inputCastToProperType);</pre><p>When done, we convert the result back to object and wrap the entire tree into a lambda expression (reusing the aforementioned dataObj parameter):</p><pre>var castResultExpr = <strong>Expression.<em>Convert</em></strong>(callExpr, typeof(object));</pre><pre>var lambdaExpr = <strong>Expression.<em>Lambda</em></strong>&lt;Func&lt;object, object&gt;&gt;(<br>    castResultExpr, inputObject);</pre><p>The only thing left to do here is to call Compile on the result and store it in the cache:</p><pre>Func&lt;object, object&gt; toDto = <strong>lambdaExpr.Compile</strong>();<br><em>_cache</em>.TryAdd(inputType, toDto);</pre><p>From that point on, all we do is pass in our TData data value (as 
it trivially casts to object) and map the result back to TDTO when we return. To wrap it up:</p><pre>private static readonly ConcurrentDictionary&lt;Type, Func&lt;object, object&gt;&gt; <em>_cache </em>= new();</pre><pre>private TDTO ConvertToDTO&lt;TDTO, TData&gt;(TData data)<br><strong>{<br>    </strong>var inputType = typeof(TData);<br>    var outputType = typeof(TDTO);<br>    if (<em>_cache</em>.TryGetValue(inputType, out var toDto))<br>    {<br>        return (TDTO)toDto(data!);<br>    }<br><br>    var methodInfo = <em>GetMatchingMethodInfo</em>(outputType, inputType);<br>    if (methodInfo is null)<br>    {<br>        throw new InvalidOperationException($&quot;No conversion from {inputType} to {outputType} was registered&quot;);<br>    }<br><br>    toDto = <em>CompileLambda</em>&lt;TDTO, TData&gt;(inputType, methodInfo);<br>    <em>_cache</em>.TryAdd(inputType, toDto);<br><br>    return (TDTO)toDto(data!);<br><strong>}</strong></pre><pre>private static Func&lt;object, object&gt; <em>CompileLambda</em>(<br>    Type inputType, MethodInfo methodInfo)<br><strong>{<br>    </strong>var inputObject = Expression.<em>Parameter</em>(<br>        typeof(object), &quot;dataObj&quot;);<br>    var inputCastToProperType = Expression.<em>Convert</em>(<br>        inputObject, inputType);<br>    var callExpr = Expression.<em>Call</em>(<br>        null, methodInfo, inputCastToProperType);<br>    var castResultExpr = Expression.<em>Convert</em>(<br>        callExpr, typeof(object));<br>    var lambdaExpr = Expression.<em>Lambda</em>&lt;Func&lt;object, object&gt;&gt;(<br>        castResultExpr, inputObject);<br>    return lambdaExpr.Compile();<br><strong>}</strong></pre><p>Still not good enough.</p><h3>Approach 4: Compile-time Source Generation</h3><p>The fundamental problem with the above approaches is that they all operate at runtime. 
There is nothing stopping us from typing up any wild combination of types only to figure out days later in production — or during testing, possibly — that this doesn’t actually work. There might be structural ways to resolve this (tricks like the one employed by the <a href="https://refactoring.guru/design-patterns/visitor">Visitor Pattern</a> do away very nicely with their compile-time safe double indirection), but one thing is for sure: We would like to make sure right away that impossible combinations never compile.</p><p>Given the nature of the problem, we will not be able to achieve this: The author originally asked for a method where TDTO and TData are strictly generic, and C# doesn’t always give us enough flexibility to work around that. There are still two things we can do here:</p><ul><li>Build an analyzer that sanity-checks all calls and emits a compiler error where needed. I won’t do this here.</li><li>Build a source generator that moves the type lookup logic to compile time, rather than runtime.</li></ul><p>Is the second approach slightly pointless and overkill? Yes. 
Let’s go!</p><p>We first create a new netstandard2.0 project hosting our source generator and reference the Microsoft.CodeAnalysis.CSharp and Microsoft.CodeAnalysis.Analyzers dependencies as private assets.</p><pre>&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;<br><br>    &lt;PropertyGroup&gt;<br>        &lt;TargetFramework&gt;<strong>netstandard2.0</strong>&lt;/TargetFramework&gt;<br>        &lt;ImplicitUsings&gt;enable&lt;/ImplicitUsings&gt;<br>        &lt;Nullable&gt;enable&lt;/Nullable&gt;<br>        &lt;LangVersion&gt;10&lt;/LangVersion&gt;<br>    &lt;/PropertyGroup&gt;<br><br>    &lt;ItemGroup&gt;<br>        &lt;PackageReference Include=&quot;<strong>Microsoft.CodeAnalysis.CSharp</strong>&quot; Version=&quot;4.0.1&quot; PrivateAssets=&quot;all&quot; /&gt;<br>        &lt;PackageReference Include=&quot;<strong>Microsoft.CodeAnalysis.Analyzers</strong>&quot; Version=&quot;3.3.3&quot; PrivateAssets=&quot;all&quot; /&gt;<br>    &lt;/ItemGroup&gt;<br><br>&lt;/Project&gt;</pre><p>We then reference this new project in our original one specifying both OutputItemType=”Analyzer” and ReferenceOutputAssembly=”false”:</p><pre>&lt;ItemGroup&gt;<br>    &lt;ProjectReference <br>        Include=&quot;..\SourceGenerators\SourceGenerators.csproj&quot; <br>        <strong>ReferenceOutputAssembly=&quot;false&quot;</strong><br>        <strong>OutputItemType=&quot;Analyzer&quot;</strong> /&gt;<br>&lt;/ItemGroup&gt;</pre><pre>&lt;PropertyGroup&gt;<br>    &lt;<strong>EmitCompilerGeneratedFiles</strong>&gt;true&lt;/EmitCompilerGeneratedFiles&gt;<br>    &lt;<strong>CompilerGeneratedFilesOutputPath</strong>&gt;<br>        $(BaseIntermediateOutputPath)Generated<br>    &lt;/CompilerGeneratedFilesOutputPath&gt;<br>&lt;/PropertyGroup&gt;</pre><p>If we were to skip ReferenceOutputAssembly, our source generator would end up as a regular runtime dependency, which is not what we want. It does, after all, simply generate source. 
The EmitCompilerGeneratedFiles bit is not required, but helps a lot when debugging the generated code. The generated source files will end up in the obj directory of the project.</p><p>If you’re new to source generators, there’s a lot to untangle and I will only touch on things briefly. There is excellent material both on YouTube (e.g. <a href="https://www.youtube.com/watch?v=052xutD86uI">here</a>) and in blogs, so take a deep breath and dive in.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*_GLfkoC1A9AJpsm_.jpg" /></figure><p>As far as we are concerned, we’ll be using these concepts:</p><ul><li>ISourceGenerator and the [Generator] attribute; this is the bare minimum,</li><li>partial classes and methods,</li><li>ISyntaxReceiver to speed up compilation time,</li><li>MethodDeclarationSyntax syntax nodes and IMethodSymbol from the semantic model to find the main extension methods, as well as</li><li>InvocationExpressionSyntax syntax nodes to detect calls to the conversion method.</li></ul><p>To begin with, we will create the following placeholder type:</p><pre>namespace ExtensionMethods70642141;<br><br>public static <strong>partial</strong> class GenericPeopleConversion<br>{<br>    public static <strong>partial</strong> TDTO ToDTO&lt;TDTO, TData&gt;(TData data);<br>}</pre><p>This declaration matches the method requested by the original StackOverflow author; it is the job of the source generator to actually provide the partial implementation of this method.</p><p>The boilerplate for our source generator will look like the following: We implement the ISourceGenerator interface and tag the class with the [Generator] attribute. In itself, this generator would be called for every piece of code syntax observed by the compiler. 
This might end up being prohibitively slow for large code bases (i.e., not ours), so we assist the compiler by taking note only of very specific pieces of code using an ISyntaxReceiver that we register in the Initialize method of the source generator. The main job of our syntax receiver will be to find possible ToDTO extension methods, but we will go a step further by detecting calls to our TDTO ToDTO&lt;TDTO, TData&gt;(TData data) method as well. This way, we can generate dispatching code specific to the types actually used in the code, and not for everything that might occur. Here’s how it looks so far:</p><pre>using Microsoft.CodeAnalysis;<br>using Microsoft.CodeAnalysis.CSharp.Syntax;<br><br>namespace SourceGenerators;<br><br><strong>[Generator]</strong><br>public sealed class ToDTOGenerator : <strong>ISourceGenerator</strong><br>{<br>    public void <strong>Initialize</strong>(GeneratorInitializationContext context)<br>    {<br>        context.<strong>RegisterForSyntaxNotifications</strong>(() =&gt; <br>            new SyntaxReceiver());<br>    }<br><br>    public void <strong>Execute</strong>(GeneratorExecutionContext context)<br>    {<br>        // ...<br>        context.AddSource(&quot;GenericPeopleConversion.Generated.cs&quot;, <br>                          source: &quot;/* TODO */&quot;);<br>    }<br><br>    private sealed class SyntaxReceiver : <strong>ISyntaxReceiver</strong><br>    {<br>        public HashSet&lt;MethodDeclarationSyntax&gt; CandidateMethods { get; } = new();<br>        public HashSet&lt;InvocationExpressionSyntax&gt; CandidateInvocations { get; } = new();<br><br>        public void <strong>OnVisitSyntaxNode</strong>(SyntaxNode syntaxNode)<br>        {<br>            // TODO ...<br>        }<br>    }<br>}</pre><p>The implementation of the syntax receiver benefits drastically from C# 8+’s pattern matching. 
The OnVisitSyntaxNode method will be called by the compiler for each syntax element of the code, and we will be saving the interesting bits for later. As far as the ToDTO extension methods are concerned,</p><ul><li>the syntax node must resemble a method declaration (MethodDeclarationSyntax); moreover,</li><li>the method declaration must have the identifier ToDTO,</li><li>it must have exactly one argument (the data bit), and</li><li>it must have at least one modifier (static).</li></ul><p>When it comes to calls to our conversion-method-to-be,</p><ul><li>the syntax node must resemble a method call (InvocationExpressionSyntax) with exactly one argument (the data bit), and</li><li>it must resemble an access of the (static) class’ member ToDTO.</li></ul><pre>private sealed class SyntaxReceiver : ISyntaxReceiver<br><strong>{<br>    </strong>public HashSet&lt;MethodDeclarationSyntax&gt; CandidateMethods { get; } = new();<br>    public HashSet&lt;InvocationExpressionSyntax&gt; CandidateInvocations { get; } = new();<br><br><em>    </em>public void OnVisitSyntaxNode(SyntaxNode syntaxNode)<br>    {<br>        // Find candidates for &quot;ToDTO&quot; extension methods.<br>        // We expect exactly one input parameter as<br>        // well as a &quot;static&quot; modifier.<br>        if (syntaxNode is <strong>MethodDeclarationSyntax</strong><br>            {<br>                <strong>Identifier.Text</strong>: &quot;ToDTO&quot;,<br>                ParameterList.Parameters.Count: 1,<br>                Modifiers:<br>                {<br>                    Count: &gt;= 1<br>                } modifiers<br>            } mds &amp;&amp;<br>            modifiers.Any(st =&gt; st.ValueText.Equals(&quot;static&quot;)))<br>        {<br>            CandidateMethods.Add(mds);<br>        }<br><br>        // Likewise, the method invocations must be to a<br>        // &quot;ToDTO&quot; method with exactly one argument.<br>        if (syntaxNode is 
<strong>InvocationExpressionSyntax</strong><br>            {<br>                ArgumentList.Arguments.Count: 1,<br>                Expression: <strong>MemberAccessExpressionSyntax</strong><br>                {<br>                    <strong>Name.Identifier.ValueText</strong>: &quot;ToDTO&quot;,<br>                }<br>            } ie)<br>        {<br>            CandidateInvocations.Add(ie);<br>        }<br>    }<br>}</pre><p>I’ll be upfront: There is currently no sane way to debug any of this, and not everything makes immediate sense. The best way I found here is to repeatedly run dotnet clean and dotnet build and to write fake code (wrapped in /* */ comment blocks) to be inspected in the obj directories — hence the EmitCompilerGeneratedFiles property mentioned earlier on.</p><p>With the syntax receiver in place, we can flesh out the Execute method of the source generator. I’ll show the code first; the explanation follows immediately:</p><pre>public void Execute(GeneratorExecutionContext context)<br>{<br>    var compilation = context.Compilation;<br>    var syntaxReceiver = (SyntaxReceiver)context.SyntaxReceiver!;<br><br>    // Fetch all ToDTO methods.<br>    var extensionMethods = syntaxReceiver.CandidateMethods<br>        .Select(methodDeclaration =&gt; compilation<br>            .<strong>GetSemanticModel</strong>(methodDeclaration.SyntaxTree)<br>            .<strong>GetDeclaredSymbol</strong>(methodDeclaration)!)<br>        .Where(declaredSymbol =&gt; declaredSymbol.<strong>IsExtensionMethod</strong>)<br>        .ToImmutableHashSet&lt;IMethodSymbol&gt;(SymbolEqualityComparer.<em>Default</em>);<br><br>    // Fetch the type arguments of all calls to the ToDTO methods.<br>    var usedTypeArguments = syntaxReceiver.CandidateInvocations<br>        .Select(methodDeclaration =&gt; compilation<br>           .<strong>GetSemanticModel</strong>(methodDeclaration.SyntaxTree)<br>           .<strong>GetSymbolInfo</strong>(methodDeclaration).Symbol as IMethodSymbol)<br>        
.Where(symbol =&gt; symbol?.<strong>TypeParameters</strong>.Length == 2)<br>        .Select(symbol =&gt; <br>            new InputOutputPair(<br>                symbol!.TypeArguments[0], symbol.<strong>TypeArguments</strong>[1]))<br>        .ToImmutableHashSet();<br><br>    var code = <em>GenerateConversionMethodCode</em>(extensionMethods, usedTypeArguments);<br>    context.AddSource(&quot;GenericPeopleConversion.Generated.cs&quot;, code);<br>}</pre><p>As mentioned initially, the goal of a source generator is to generate source code, which is exactly how the method ends. I will get into the GenerateConversionMethodCode method later on; at this point it’s only interesting to know that it returns the generated source code as a string.</p><p>As for the rest: Not everything makes sense on the syntax node level, so the first thing we do is obtain semantic information from the syntax by getting the Compilation from the context and calling its GetSemanticModel method on every interesting node’s SyntaxTree.</p><p>For method declarations, we then grab the IMethodSymbol from said semantic model and check its IsExtensionMethod property; together with the syntax receiver, this ensures that we end up with exactly the methods we need. We pass both sets into the GenerateConversionMethodCode method, allowing it to see all possible conversion methods, as well as all their (actual) usages.</p><p>For the method invocations, things are a bit more involved, but follow the same approach: We get the semantic model, obtain the IMethodSymbol and then verify that the method we are calling has exactly two type parameters, namely TDTO and TData. We stash away all type <em>arguments</em>, leaving us with a set of all used <em>concrete</em> TDTO-TData combinations. 
The InputOutputPair here is simply a readonly struct that uses SymbolEqualityComparer.Default internally:</p><pre>private readonly struct InputOutputPair: IEquatable&lt;InputOutputPair&gt;<br><strong>{<br>    </strong>public InputOutputPair(<br>        ITypeSymbol dtoType, ITypeSymbol dataType)<br>    {<br>        DtoType = dtoType;<br>        DataType = dataType;<br>    }<br><br>    public ITypeSymbol DtoType { get; }<br>    public ITypeSymbol DataType { get; }<br><br><em>    </em>public bool Equals(InputOutputPair other) =&gt; <br>        DtoType.Equals(other.DtoType,<br>                       SymbolEqualityComparer.<em>Default</em>) &amp;&amp; <br>        DataType.Equals(other.DataType,<br>                        SymbolEqualityComparer.<em>Default</em>);<br><br><em>    </em>public override bool Equals(object? obj) =&gt; <br>        obj is InputOutputPair other &amp;&amp; Equals(other);<br><br><em>    </em>public override int GetHashCode()<br>    {<br>        unchecked<br>        {<br>            return (SymbolEqualityComparer.<em>Default<br>                        </em>.GetHashCode(DtoType) * 397) ^ <br>                    SymbolEqualityComparer.<em>Default<br>                        </em>.GetHashCode(DataType);<br>        }<br>    }<br><strong>}</strong></pre><p>Note that HashCode.Combine isn’t available for netstandard2.0, so I’m going with Rider’s default implementation.</p><p>Now for the ugly bit: The GenerateConversionMethodCode method. 
In order to refer to the correct method and type names, we use a handful of helper methods that generate the fully qualified type and method names (Namespace.Type and Namespace.Type.Method) from the IMethodSymbol interfaces:</p><pre>private static string <em>GetMethodFullName</em>(IMethodSymbol methodSymbol)<br>{<br>    var methodReceiverType = methodSymbol.ReceiverType!;<br>    return<br>        $&quot;{methodReceiverType.ContainingNamespace.Name}.{methodReceiverType.Name}.{methodSymbol.Name}&quot;;<br>}<br><br>private static string <em>GetReturnTypeFullName</em>(IMethodSymbol methodSymbol)<br>{<br>    var returnTypeNamespace = methodSymbol.ReturnType<br>        .ContainingNamespace.Name;<br>    var returnTypeName = methodSymbol.ReturnType.Name;<br>    return $&quot;{returnTypeNamespace}.{returnTypeName}&quot;;<br>}<br><br>private static string <em>GetParameterTypeFullName</em>(<br>    IMethodSymbol methodSymbol)<br>{<br>    var parameterTypeNamespace = methodSymbol<br>        .Parameters.Single()<br>        .Type.ContainingNamespace.Name;<br>    var parameterTypeName = methodSymbol<br>        .Parameters.Single()<br>        .Type.Name;<br>    return $&quot;{parameterTypeNamespace}.{parameterTypeName}&quot;;<br>}</pre><p>With that in place, our GenerateConversionMethodCode method simply iterates over all entries in the extensionMethods set, tests whether each is actually referenced by the code, obtains the type names, and builds up the partial method by adding more and more if statements checking the types, then dispatching to the appropriate ToDTO method.</p><pre>private static string <em>GenerateConversionMethodCode</em>(<br>    ImmutableHashSet&lt;IMethodSymbol&gt; extensionMethods,<br>    ImmutableHashSet&lt;InputOutputPair&gt; usedTypeArguments)<br>{<br>    var sb = new StringBuilder();<br><br>    sb.Append(@&quot;<br>using System;<br><br>namespace ExtensionMethods70642141;<br><br>public static partial class GenericPeopleConversion {<br>public static partial TDTO 
ToDTO&lt;TDTO, TData&gt;(TData data)<br>{&quot;);<br><br>    foreach (var methodSymbol in extensionMethods)<br>    {<br>        if (!usedTypeArguments.Contains(<br>                new InputOutputPair(<br>                    methodSymbol.ReturnType, <br>                    methodSymbol.Parameters.Single().Type)))<br>        {<br>            continue;<br>        }</pre><pre>        var methodName = <em>GetMethodFullName</em>(methodSymbol);<br>        var parameterType = <em>GetParameterTypeFullName</em>(methodSymbol);<br>        var returnType = <em>GetReturnTypeFullName</em>(methodSymbol);<br><br>        sb.AppendLine($@&quot;<br>    if (typeof(TData) == typeof({parameterType}) &amp;&amp; typeof(TDTO) == typeof({returnType})) {{<br>        return (TDTO)(object){methodName}(({parameterType})(object)data);<br>    }}&quot;);<br>    }<br><br>    // TODO: Add an analyzer that prevents this from happening. :)<br>    sb.Append(<br>        @&quot;        throw new InvalidOperationException($&quot;&quot;No method found to convert from type {typeof(TData)} to {typeof(TDTO)}&quot;&quot;);&quot;);<br>    sb.AppendLine(@&quot;<br>}<br>}&quot;);<br><br>    return sb.ToString();<br>}</pre><p>It also throws an exception for good measure (I did promise it’s not going to help much!) to ensure no invalid combination is called — and reminds you to write an analyzer.</p><p>The only thing left to do now is to actually call the method:</p><pre>using System;<br>using Xunit;<br><br>namespace ExtensionMethods70642141;<br><br>public class SourceGeneratedTests<br>{<br>    [Fact]<br>    public void Works()<br>    {<br>        var student = new Student { Name = &quot;Student Name&quot; };<br>        var teacher = new Teacher { Name = &quot;Teacher Name&quot; };<br><br>        var studentDto = <strong>GenericPeopleConversion.<em>ToDTO</em></strong>&lt;StudentDTO, Student&gt;(student);<br>        var teacherDto = <strong>GenericPeopleConversion.<em>ToDTO</em></strong>&lt;TeacherDTO, Teacher&gt;(teacher);<br><br>        Assert.<em>Equal</em>(student.Name, studentDto.Name);<br>        Assert.<em>Equal</em>(teacher.Name, teacherDto.Name);<br>    }<br><br>    [Fact]<br>    public void InvalidConversionFails()<br>    {<br>        var student = new Student { Name = &quot;Student Name&quot; };<br><br>        var invalidCall = () =&gt; GenericPeopleConversion.<em>ToDTO</em>&lt;TeacherDTO, Student&gt;(student);<br><br>        Assert.<em>Throws</em>&lt;InvalidOperationException&gt;(invalidCall);<br>    }<br>}</pre><p>and when used like this, the source generator creates a GenericPeopleConversion.Generated.cs with the following content:</p><pre>using System;<br><br>namespace ExtensionMethods70642141;<br><br>public static partial class GenericPeopleConversion <br>{<br>    public static partial TDTO <em>ToDTO</em>&lt;TDTO, TData&gt;(TData data)<br>    {<br>        if (typeof(TData) == typeof(ExtensionMethods70642141.<strong>Teacher</strong>) &amp;&amp; typeof(TDTO) == typeof(ExtensionMethods70642141.<strong>TeacherDTO</strong>)) {<br>            return (TDTO)(object)ExtensionMethods70642141.PeopleExtension.<strong>ToDTO</strong>((ExtensionMethods70642141.Teacher)(object)data);<br>        }<br><br>        if (typeof(TData) == typeof(ExtensionMethods70642141.<strong>Student</strong>) &amp;&amp; typeof(TDTO) == typeof(ExtensionMethods70642141.<strong>StudentDTO</strong>)) {<br>            return (TDTO)(object)ExtensionMethods70642141.PeopleExtension.<strong>ToDTO</strong>((ExtensionMethods70642141.Student)(object)data);<br>        }</pre><pre>        throw new <strong>InvalidOperationException</strong>($&quot;No method found to convert from type {typeof(TData)} to {typeof(TDTO)}&quot;);<br>    }<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*cRNpF-Aep2Q-Hwvm" /></figure><p>Comment out both calls to GenericPeopleConversion.ToDTO and the source generator creates an empty implementation:</p><pre>public static partial class GenericPeopleConversion {<br>    public static partial TDTO <em>ToDTO</em>&lt;TDTO, TData&gt;(TData data)<br>    {    <br>       throw new InvalidOperationException($&quot;No method found to convert from type {typeof(TData)} to {typeof(TDTO)}&quot;);<br>    }<br>}</pre><p>And that’s how to convert between types in increasingly absurd ways. That said, writing an actual Analyzer isn’t going to be that much harder after what we just pulled off, so I’ll leave it as an exercise to the reader.</p><h3>Approach #5: Using AutoMapper</h3><p>Now arguably there are insights to be gained in the above experiments. 
That said, if we make our peace with runtime resolution, then <a href="https://github.com/AutoMapper/AutoMapper">AutoMapper</a> is a superior solution to the problem; in the example below, automatic name-based mapping is applied, but that’s obviously configurable.</p><pre>var configuration = new MapperConfiguration(cfg =&gt;<br>{<br>    cfg.<strong>CreateMap</strong>&lt;Student, StudentDTO&gt;();<br>    cfg.<strong>CreateMap</strong>&lt;Teacher, TeacherDTO&gt;();<br>});<br><br>#if DEBUG<br>configuration.AssertConfigurationIsValid();<br>#endif<br><br>var mapper = configuration.CreateMapper();<br><br>var student = new Student { Name = &quot;Student Name&quot; };<br>var teacher = new Teacher { Name = &quot;Teacher Name&quot; };<br><br>var studentDto = mapper.<strong>Map</strong>&lt;StudentDTO&gt;(student);<br>var teacherDto = mapper.<strong>Map</strong>&lt;TeacherDTO&gt;(teacher);<br>var invalidCall = () =&gt; mapper.Map&lt;TeacherDTO&gt;(student);</pre><pre>Assert.Equal(student.Name, studentDto.Name);<br>Assert.Equal(teacher.Name, teacherDto.Name);<br>Assert.Throws&lt;AutoMapperMappingException&gt;(invalidCall);</pre><p>As usual, use the right tool for the job.</p><p>Thanks for reading, stay safe, stay vaccinated and stay curious!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=89414ae6eb7c" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>