<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Numerical Bits]]></title><description><![CDATA[The official development blog of stdlib, the fundamental numerical computing library for JavaScript and Node.js.]]></description><link>https://blog.stdlib.io/</link><image><url>https://blog.stdlib.io/favicon.png</url><title>Numerical Bits</title><link>https://blog.stdlib.io/</link></image><generator>Ghost 5.83</generator><lastBuildDate>Sat, 04 Apr 2026 04:09:28 GMT</lastBuildDate><atom:link href="https://blog.stdlib.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Connect with the stdlib community on Zulip]]></title><description><![CDATA[Announcing the launch of stdlib's new Zulip chat community]]></description><link>https://blog.stdlib.io/announcing-zulip/</link><guid isPermaLink="false">69422db0ed2315029621f198</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Mara Averick]]></dc:creator><pubDate>Wed, 17 Dec 2025 05:32:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1604881988758-f76ad2f7aac1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNvbnZlcnNhdGlvbnxlbnwwfHx8fDE3NjU5NDQ3NTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1604881988758-f76ad2f7aac1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNvbnZlcnNhdGlvbnxlbnwwfHx8fDE3NjU5NDQ3NTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Connect with the stdlib community on Zulip"><p>As the stdlib community continues to grow and evolve, so has our need for new ways to connect and collaborate (see, for example, our <a 
href="https://blog.stdlib.io/new-ways-to-engage-with-the-stdlib-community">announcement of office hours and a public events calendar</a>). While Gitter&apos;s simple, single-channel interface worked well in the early days, it no longer scales with the range of conversations happening around the project. Today we&apos;re excited to announce our new <a href="https://stdlib.zulipchat.com/?ref=blog.stdlib.io">Zulip chat</a>, which provides a more full-featured, structured, and searchable space for us to interact.</p><h2 id="why-zulip">Why Zulip?</h2><p><a href="https://zulip.com/?ref=blog.stdlib.io">Zulip</a> is open source and generously <a href="https://zulip.com/for/open-source?ref=blog.stdlib.io">supports open-source projects</a> like ours with a free cloud plan. Its channel-and-topic model makes it easier to keep discussions focused, follow ongoing threads, and resurface past knowledge through powerful <a href="https://zulip.com/help/search-for-messages?ref=blog.stdlib.io#search-filters">search features</a>.</p><p>Anyone can browse the web-public channels of <a href="https://stdlib.zulipchat.com/?ref=blog.stdlib.io">stdlib&apos;s Zulip</a> without an account, and you can sign up at any time to join the conversation.</p><h2 id="join-and-get-started">Join and get started</h2><p>The <a href="https://stdlib.zulipchat.com/?ref=blog.stdlib.io"><strong>stdlib Zulip chat</strong></a> is open to all. A welcome bot will greet you when you first join and share some tips specific to stdlib about how to participate effectively. If you&apos;re new to Zulip, their <a href="https://zulip.com/help/getting-started-with-zulip?ref=blog.stdlib.io">getting started guide</a> is an invaluable resource. 
If you&apos;re already familiar with applications such as Slack or Discord, much of the experience will be familiar.</p><p>We encourage you to come say hello in the <a href="https://stdlib.zulipchat.com/?ref=blog.stdlib.io#narrow/channel/546733-introductions"><strong>#introductions</strong></a> channel and take some time to explore other channels and topics that may be of interest to you. If you have any questions about Zulip itself, we&apos;ve got a channel for that too (<a href="https://stdlib.zulipchat.com/?ref=blog.stdlib.io#narrow/channel/546662-zulip"><strong>#zulip</strong></a>).</p><p>The stdlib team is active in the chat, and public messages are the best way to get timely help&#x2014;no need for routine <strong>@-mentions</strong>. Asking questions in public is the fastest way to get a response, as more people can help, <em>plus</em> it&apos;s likely that someone else will benefit from finding out the answer to your question. The stdlib <a href="https://github.com/stdlib-js/stdlib/blob/develop/CODE_OF_CONDUCT.md?ref=blog.stdlib.io">Code of Conduct</a> applies to all community spaces, including stdlib&apos;s Zulip. Should you encounter an issue, Zulip&apos;s <a href="https://zulip.com/help/report-a-message?ref=blog.stdlib.io">reporting tools</a> and our moderation team are available.</p><h2 id="see-you-there">See you there!</h2><p>We&apos;re looking forward to seeing you in the stdlib Zulip instance! 
We welcome questions and suggestions as we continue shaping a space that is useful, inclusive, and genuinely supportive for everyone who wants to learn, build, or contribute.</p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[Using AI in the development of stdlib]]></title><description><![CDATA[A reflection on stdlib's participation in the 2025 METR study on AI's impact on open-source developer productivity.]]></description><link>https://blog.stdlib.io/reflection-on-the-metr-study-2025/</link><guid isPermaLink="false">687949d7ed2315029621f14b</guid><category><![CDATA[Engineering]]></category><dc:creator><![CDATA[Philipp Burckhardt]]></dc:creator><pubDate>Thu, 17 Jul 2025 19:13:54 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2025/07/gen_splash_1120x670.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2025/07/gen_splash_1120x670.png" alt="Using AI in the development of stdlib"><p>I read the results of the recent METR study on <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/?ref=blog.stdlib.io">&quot;Impact of Early-2025 AI on Experienced Open-Source Developer Productivity&quot;</a> with great interest for two reasons. 
First, I was an early adopter of LLM tools: in 2020, I was lucky enough to get access to the private beta of the OpenAI API from then-CTO Greg Brockman and explored the use of AI for education at Carnegie Mellon University. Second, <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> participated in the METR study, so I was personally involved, working on randomized <a href="https://github.com/stdlib-js/metr-issue-tracker?ref=blog.stdlib.io">issues</a> over several months, allowed to use AI for some tasks and forbidden from using it for others.</p><p>Given that <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>&apos;s involvement is central to my perspective, it&apos;s worth providing some context on the project. stdlib is a comprehensive open-source standard library for JavaScript and Node.js, with a specific and ambitious goal: to be the fundamental library for numerical and scientific computing on the web. It is a large-scale project with well over 5 million source lines of JavaScript, C, Fortran, and WebAssembly, composed of thousands of independently consumable packages, bringing the rigor of high-performance mathematics, statistics, and machine learning to the JavaScript ecosystem. Think of it as a foundational layer for data-intensive applications, similar to the role that NumPy and SciPy serve in the Python ecosystem. In short, stdlib isn&apos;t your average JavaScript project.</p><h2 id="a-word-of-thanks">A Word of Thanks</h2><p>Before diving into my reflection, I want to take the opportunity to thank the METR team, and especially Nate Rush, for giving stdlib the chance to participate in this study with two core stdlib developers, <a href="https://github.com/headlessNode?ref=blog.stdlib.io">Muhammad Haris</a> and <a href="https://github.com/Planeshifter?ref=blog.stdlib.io">myself</a>. 
It was a great experience to work with the METR team, and I am eager to see any future studies they will conduct. It is my conviction that, with the entire tech industry being gripped by an AI gold rush, it is incredibly valuable to have a non-profit research institute like METR conduct studies that cut through the noise with actual data.</p><h2 id="the-slowdown">The Slowdown</h2><p>The results of the METR study are surprising, clashing with some previously published and very optimistic study results on the impact of generative AI (e.g., see GitHub and Accenture&apos;s <a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-in-the-enterprise-with-accenture/?ref=blog.stdlib.io#:~:text=Since%20bringing%20GitHub%20Copilot%20to,world%2C%20large%20engineering%20organizations">2023 study on the impact of Copilot on developer productivity</a>). Citing from the Core Result section of the <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/?ref=blog.stdlib.io">METR study page</a>:</p><blockquote>When developers are allowed to use AI tools, they take 19% longer to complete issues&#x2014;a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.</blockquote><p>Rather predictably, the results have led to a lot of discussion on <a href="https://news.ycombinator.com/item?id=44522772&amp;ref=blog.stdlib.io">Hacker News</a> and other social channels, with parties on both sides lining up with their pitchforks.</p><h2 id="the-perception-gap">The Perception Gap</h2><p>I am part of the group of developers who estimated that they were sped up 20%-30% during the study&apos;s exit interview. 
While I like to believe that my productivity didn&apos;t suffer when I used AI for my tasks, it may well have helped me less than I anticipated, or even hampered my efforts.</p><p>But how can that be? Daily, we read about how AI is already revolutionizing the workplace or making software engineers redundant, with companies like Salesforce <a href="https://www.theregister.com/2025/02/27/salesforce_misses_revenue_guidance/?ref=blog.stdlib.io">announcing</a> that they won&apos;t be hiring for software engineering positions anymore or online lender Klarna <a href="https://www.forbes.com/sites/quickerbettertech/2024/03/13/klarnas-new-ai-tool-does-the-work-of-700-customer-service-reps/?ref=blog.stdlib.io">announcing</a> that it was shuttering its entire human customer support operation in favor of AI.</p><p>Many of these stories have turned out to be more hyperbole than reality. Klarna <a href="https://www.independent.co.uk/news/business/klarna-ceo-sebastian-siemiatkowski-ai-job-cuts-hiring-b2755580.html?ref=blog.stdlib.io">still has</a> human support, and Salesforce still has many engineering <a href="https://careers.salesforce.com/en/jobs/?search=engineer&amp;pagesize=20&amp;ref=blog.stdlib.io#results">job listings</a>. Sadly, some of these stories appear influenced by ulterior motives, such as Klarna&apos;s strategic positioning as an &quot;AI-native&quot; company to capture premium valuations ahead of its IPO amid the current AI wave.</p><p>However, I have been using AI tools daily for the past three years, both at work and outside of it, and find them immensely useful. How do I square these benefits with the study results?</p><h2 id="on-study-design">On Study Design</h2><p>When confronted with results that run counter to one&apos;s expectations, it is a natural instinct to attack the study and identify holes that explain away the result. For example, one could point to the small sample size of 16 developers. 
There is also the argument that the study was conducted in a very specific context, with experienced developers working on projects they are intimately familiar with.</p><p>There might also have been a subtle selection effect in the tasks themselves: since project maintainers proposed their own task lists, it is possible that those more experienced with AI subconsciously selected issues they believed were more amenable to an agentic workflow. One could also argue that the developers were subject to the <a href="https://en.wikipedia.org/wiki/Hawthorne_effect?ref=blog.stdlib.io">Hawthorne effect</a>, altering their behavior simply because they knew they were being video-recorded, perhaps over-relying on the AI tools for the sake of the experiment.</p><p>Finally, and perhaps most importantly, the experimental setup of requiring screen recordings and active time tracking for a single task enforced a synchronous workflow. This effectively locked developers into what I call &quot;supervision mode&quot;, where they had to watch the agent work rather than being free to context-switch to another problem.</p><p>Some of these critiques, particularly the enforced &quot;supervision&quot; workflow, could directly contribute to the observed slowdown. But others, such as selecting &quot;AI-friendly&quot; tasks or over-relying on the tool to impress researchers, should have biased the results toward a speedup. This makes the final outcome even more notable. The direction of various potential biases is ambiguous at best, which is why we must look at the study&apos;s core design.</p><p>As a randomized control trial, the study follows the gold standard experimental design for detecting causality. By randomizing individual tasks to &quot;AI-allowed&quot; or &quot;AI-disallowed&quot;, the study isolates the effect of AI tooling. 
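</p><p>As a toy illustration of this randomization step (my own sketch for exposition, not METR&apos;s actual assignment tooling), each issue is independently assigned to a condition, so every developer ends up working under both conditions over the course of the study:</p><pre><code class="language-bash"># Toy illustration of task-level randomization: each issue is
# independently assigned to the AI-allowed or AI-disallowed condition.
for issue in 101 102 103 104 105 106; do
    if [ $((RANDOM % 2)) -eq 0 ]; then
        echo issue $issue: AI-allowed
    else
        echo issue $issue: AI-disallowed
    fi
done</code></pre><p>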
Instead of comparing one group of developers against a control group (where differences in skill could skew the results), it compares each developer against themselves. This &quot;within-subjects&quot; design controls for individual characteristics, from typing speed to experience with the project. With such a study design, results are harder to write off as mere statistical noise, even with a smaller sample size.</p><p>Crucially, the tasks were defined before this randomization. This avoids a common pitfall where AI might simply produce more verbose code or encourage developers to break tasks into smaller pull requests, which can inflate some productivity metrics without representing more work getting done.</p><p>Sixteen developers from several open-source projects might not sound like much, but, in total, we completed 246 tasks. To give a sense of the work <a href="https://github.com/stdlib-js/metr-issue-tracker/issues?q=sort%3Aupdated-desc+is%3Aissue&amp;ref=blog.stdlib.io">involved</a>, the tasks Haris and I worked on were not trivial, while still being hand-scoped to be completed in a few hours or less. They were a mix of core feature development (such as adding new array, string, and BLAS functions), creating custom ESLint rules to enforce project-specific coding standards, enhancing our CI/CD pipelines with new automation, and fixing bugs from our issue tracker.</p><p>And while a single developer&apos;s performance on one task is likely correlated with their performance on another, making the estimates less precise than the raw task count would suggest, it is quite notable that the effect was in the opposite direction from what economists, ML experts, and the developers themselves predicted (with the former two groups forecasting speedups closer to 40%). Moreover, the effect is quite large in magnitude. 
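</p><p>To get a feel for just how large, here is the arithmetic with illustrative numbers (the standard error below is an assumption for exposition, not the study&apos;s reported value), comparing the forecast 40% speedup against the observed 19% slowdown:</p><pre><code class="language-bash"># Rough sketch with made-up precision; only the point estimates
# (-40% forecast, +19% observed) come from the study discussion above.
awk 'BEGIN {
    predicted = -0.40   # forecast: 40% reduction in completion time
    observed  =  0.19   # observed: 19% increase in completion time
    se        =  0.10   # assumed standard error, for illustration only
    z = (observed - predicted) / se
    printf "observed estimate sits %.1f assumed standard errors from the forecast\n", z
}'</code></pre><p>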
A quick back-of-the-envelope calculation reveals that if the true effect were a 40% speedup, the probability of observing a result this far in the opposite direction is astronomically low.</p><p>In light of this, I have no reason to doubt the internal validity of the study and would venture that the effect measured is real within the context of the experiment. If one believed the chatter on social media and the hype merchants who two years ago were all shilling cryptocurrency (and maybe still are!) but have meanwhile all switched over to extolling the amazing speedup AI offers, then increases of 100%, 5x, or even 10x should have been in the cards. But this is definitively not what the study observed.</p><h2 id="embracing-agentic-development">Embracing Agentic Development</h2><p>The more important consideration for squaring my own experience with these results is external validity: how generalizable are the study&apos;s findings? The paper is a great read and touches on many possible criticisms and threats to external validity, and I won&apos;t belabor any of the points raised therein.</p><p>Instead, I will solely focus on my experience as a study participant and how I have been leveraging AI with success. I will also share my own hypotheses for why the performance of the developers in this sample was overall negatively affected by the use of AI.</p><p>To give some context, my main way of incorporating LLMs into my work before participating in this study was twofold. As something of an early adopter, I had used GitHub Copilot for auto-completion and inline suggestions and made heavy use of ChatGPT and Anthropic Claude web apps by assembling relevant context, writing detailed prompts, and copying results back into my editor. Tools such as <a href="https://repomix.com/?ref=blog.stdlib.io" rel="noreferrer">Repomix</a> helped streamline the process of incorporating LLMs into my daily development workflow. 
This general approach allowed me to review changes quickly, iterate on them by asking questions, and have the LLM make follow-up edits directly in a chat interface.</p><p>The METR study subsequently provided an excuse for me to delve into agentic programming and make Cursor an integral part of my workflow. I had used it briefly some time before but didn&apos;t find the AI-generated results compelling enough to let it loose on any codebase I was working on. But Claude Sonnet 3.7 had since come out, which is still one of the most powerful models for coding tasks. Due to some very encouraging results during early testing, I was eager to put it to work on a backlog of tooling that we wanted to build for stdlib, alongside various refactoring and bug fixes.</p><p>One of my first impressions with Cursor this time around was the underlying LLM&apos;s rather impressive ability to follow the very specific coding standards and conventions of the project and, when placed in agent mode, to automatically and reliably fix lint errors and attempt to iteratively resolve errors in unit tests. This felt like another step change in capabilities, just like when OpenAI released GPT-3 Davinci in June 2020, which suddenly made feasible many use cases that would previously have broken down in any realistic scenario.</p><p>While I no longer use Cursor and have since switched to Claude Code (more on that later), I found Cursor straightforward to use, especially given that it is a fork of VSCode, which has been my IDE of choice for many years. I strongly doubt that inexperience with Cursor, which I shared with roughly half of the developers in the study, played a major role in the results. 
While I didn&apos;t have an extensive <code>.cursorrules</code> setup (which has since been deprecated in favor of <a href="https://docs.cursor.com/context/rules?ref=blog.stdlib.io#project-rules">project rules</a>), I did add basic instructions and context about the project and made sure to index the stdlib codebase. Aside from that, further customization was neither possible nor necessary, as the Cursor Agent was able to automatically pull in other files, look up function call signatures, and perform other operations for assembling context.</p><p>My experience with Cursor was largely positive during the study. As an example, I ended up working on several Bash scripts for our CI/CD pipeline, and Cursor definitely sped up my development workflow by sparing me from looking up the man page of <code>jq</code> for the eleventh time, given that I only use this command-line tool for manipulating JSON once in a blue moon. With the AI agent&apos;s help, I could quickly generate a function like this one to check if a GitHub issue has a specific label:</p><pre><code class="language-bash"># Check if an issue has the &quot;Tracking Issue&quot; label.
#
# $1 - Issue number
is_tracking_issue() {
    local issue_number=&quot;$1&quot;
    local response

    debug_log &quot;Checking if issue #${issue_number} is a tracking issue&quot;
    # Get the issue:
    if ! response=$(github_api &quot;GET&quot; &quot;/repos/${repo_owner}/${repo_name}/issues/${issue_number}&quot;); then
        echo &quot;Warning: Failed to fetch issue #${issue_number}&quot; &gt;&amp;2
        return 1
    fi

    # ...

    # Check if the issue has the &quot;Tracking Issue&quot; label:
    if echo &quot;$response&quot; | jq -r &apos;.labels[].name&apos; 2&gt;/dev/null | grep -q &quot;Tracking Issue&quot;; then
        debug_log &quot;Issue #${issue_number} is a tracking issue&quot;
        return 0
    else
        debug_log &quot;Issue #${issue_number} is not a tracking issue&quot;
        return 1
    fi
}
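# The key step in isolation: jq&apos;s &apos;.labels[].name&apos; filter prints one
# label name per line, which grep then matches. For example, given an
# illustrative payload truncated to the relevant field:
#
#   $ echo &apos;{&quot;labels&quot;:[{&quot;name&quot;:&quot;Tracking Issue&quot;},{&quot;name&quot;:&quot;Good First Issue&quot;}]}&apos; | jq -r &apos;.labels[].name&apos;
#   Tracking Issue
#   Good First Issue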
</code></pre><p>The agent correctly assembled the <code>jq -r &apos;.labels[].name&apos;</code> filter to extract the label names from the JSON response&#x2014;something that would have sent me to a documentation page for a few minutes. Each such saving is small, but these moments add up. The AI handled the rote task of recalling obscure syntax, letting me focus on the actual logic.</p><p>My first takeaway is this: current LLMs are very powerful for tasks in domains that you are not intimately familiar with, allowing you to move much more quickly. Agentic tools such as Cursor and Claude Code are also very helpful for quickly navigating and learning your way around a large codebase, allowing you to ask questions and explore it in a natural way. Leveraging &quot;deep research&quot; provides another means to explore a problem space more exhaustively, in a way that the search engines of old simply cannot match.</p><p>On the other hand, some tasks were very frustrating. For example, the Cursor agent wrote one ESLint rule almost fully in one shot, but for another, it ran in circles, unable to figure out the correct algorithm. Multiple attempts to prompt it into fixing the bug were unsuccessful. It would have been better not to fall prey to the <a href="https://en.wikipedia.org/wiki/Sunk_cost?ref=blog.stdlib.io">sunk cost fallacy</a> and instead throw away the code, then either give the agent another shot or write it myself.</p><blockquote>Cursor does have a neat checkpoints feature, which allows you to stop the agent at any time and revert to a prior state, something I wholeheartedly recommend using. 
It is a great way to avoid getting stuck in a loop of the agent trying to fix a bug that it cannot figure out.</blockquote><p>I freely admit that I may have been a bit overeager about using AI for all of the AI-enabled tasks, partly due to my desire to learn to use Cursor productively but also due to my general amazement at what these new technologies unlock. If nothing else, the METR study suggests that the question of whether a task can be completed more efficiently by AI, or whether one would be better off completing it by hand, is far from settled.</p><h2 id="the-blank-slate-problem">The Blank Slate Problem</h2><p>Aside from occasional inefficiencies and outright mistakes in the generated code, coding agents do not have access to all the implicit knowledge and conventions of a large, mature project, much of which might not be written down. In <a href="https://johnwhiles.com/posts/mental-models-vs-ai-tools?ref=blog.stdlib.io">his reflections</a> on the study, John Whiles identifies a core conflict: an expert engineer&apos;s primary value isn&apos;t just writing code; it&apos;s holding a complete, evolving mental model of the entire system in their head. The agent does not have such a mental model. Every interaction starts from a blank slate.</p><p>It is possible that some of this can be mitigated with better, more targeted instructions. As usual, there is no free lunch. One has to actively invest in making one&apos;s codebase more accessible to coding agents. And more generally, memory and learning remain unsolved problems for transformer-based LLMs, and changing that will likely require fundamental architectural advancements.</p><p>The necessity of auditing the agent&apos;s code for mistakes created two major sources of friction: the cognitive drain of &quot;babysitting&quot; the AI and the time spent waiting for and reviewing its output. 
For every minute the agent spent running in circles on that ESLint rule, I was blocked, my attention monopolized by the need to supervise its flawed process. This synchronous, blocking workflow is exhausting and inefficient. It&apos;s the digital equivalent of shoulder-surfing an overconfident junior developer who has memorized everything there is to know about programming but cannot be trusted and who will make subtle mistakes that are hard to spot.</p><p>My advice: stay in the driver&apos;s seat during such pair programming and use the AI as a sparring partner to bounce ideas back and forth instead of yielding agency.</p><h2 id="delegate-dont-supervise">Delegate, Don&apos;t Supervise</h2><p>Partly based on my experiences in the study, my workflow has evolved, and I have subsequently switched to using Anthropic&apos;s Claude Code. This has changed my interaction model from synchronous supervision to asynchronous delegation. I can now define a complex task via Claude Code&apos;s planning mode and then have the agent work on the task in the background. I can then turn my full attention elsewhere, be it attending a meeting, reviewing a colleague&apos;s code, or simply thinking through the next problem without interruption. Claude&apos;s work happens in parallel and is not a blocker to my own. The cognitive cost of babysitting is replaced by the much lower cost of reviewing a completed proposal later; if it didn&apos;t work out, I might just throw away the code and have the model try again, instead of engaging in a fruitless back and forth.</p><p>Claude Sonnet 4 and Opus 4 were not released at the time the METR study was conducted, and, while they mark another improvement, especially with regard to tool use by the model, the dynamics haven&apos;t fundamentally changed. 
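</p><p>Mechanically, this delegation setup can be as simple as giving each task its own Git worktree so that multiple agents can run in parallel without stepping on each other. Here is a sketch (the repository setup lines exist only to make the example self-contained, and the branch names are illustrative; in your own repository you would only run the <code>git worktree</code> commands):</p><pre><code class="language-bash"># Create a throwaway demo repository (illustrative setup only):
cd "$(mktemp -d)"
git init -q repo
cd repo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# One isolated checkout per delegated task (branch names illustrative):
git worktree add -b task/eslint-rule ../task-eslint-rule
git worktree add -b task/ci-cleanup ../task-ci-cleanup

# Each agent (e.g., a Claude Code session) then works in its own
# directory while you turn your attention elsewhere:
git worktree list</code></pre><p>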
The models still make mistakes and do not always implement things in an optimal or sound way, but they are now much better at following instructions and can work uninterrupted for longer periods of time.</p><p>In contrast to those who frame coding agents as mere &quot;stochastic parrots&quot;, I find myself absolutely amazed that we now have a technology that, despite its warts and hiccups, is able, given a set of instructions, to generate a fully formed pull request that correctly implements logic, adheres to style guidelines, and has a passing test suite. And, in the best cases, this can happen without any human intervention.</p><h2 id="the-first-80-percent">The First 80 Percent</h2><p>We still need to reconcile the observed performance decrease with the fact that many developers, including myself, now leverage AI to get tasks done in a fraction of the time, tasks that would previously have taken them hours or days. I believe that the <a href="https://en.wikipedia.org/wiki/Pareto_principle?ref=blog.stdlib.io">Pareto Principle</a> is a helpful yardstick. Named after Italian economist Vilfredo Pareto, it is commonly referred to as the 80/20 rule and posits that roughly 80% of effects come from 20% of the causes. Coding agents can now generate code that mostly works but that might fall short if the goal is 100%.</p><p>In many instances, coding agents can easily accomplish the first 80% of a programming task: generating boilerplate, scaffolding logic, implementing core functionality, and writing a test suite. However, the final 20% of the task is where the complexity lies: handling tricky edge cases, adhering to unwritten architectural conventions, ensuring optimal performance, and avoiding code duplication by reusing existing utilities. This last mile still requires the developer&apos;s deep, stateful mental model of the project. 
The rub here is that, by using the AI agent, one may bypass all the little steps that are necessary for building that mental model.</p><p>But does it matter? When working on a crucial piece of a larger, complex system, it definitely does, and I would be hesitant to rely on generative AI. But when working on a well-defined, isolated piece of code with expected behavior for inputs and outputs, why bother? The marginal cost of writing code (long recognized as only a small part of software engineering) is going to zero. In the event that there is a problem with the code, it can simply be thrown away and rewritten. The code that AI agents now generate is of decent quality, well-documented, and capable of adhering to one&apos;s coding conventions.</p><p>This brings to mind the following quote from <a href="https://tidyfirst.substack.com/p/90-of-my-skills-are-now-worth-0?ref=blog.stdlib.io">Kent Beck</a>:</p><blockquote>The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate.</blockquote><p>This force-multiplier effect is why I am long on AI, even though the METR study is a good reminder that we can all easily fall prey to cognitive biases.</p><p>In <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow?ref=blog.stdlib.io"><em>Thinking, Fast and Slow</em></a>, Daniel Kahneman gives a classic example of a bias driven by the <a href="https://en.wikipedia.org/wiki/Availability_heuristic?ref=blog.stdlib.io">availability heuristic</a>: people overestimate plane crash risks due to vivid media coverage, making such events more &quot;available&quot; to memory than statistically riskier, yet routine, car crashes. Our judgment is swayed not by data, but by the ease of recall. In the case of working with AI agents, observing them build fully-functioning tools in seconds is a very memorable and visceral experience. 
On the other hand, the slow, frustrating &quot;death by a thousand cuts&quot; experience of auditing, debugging, and correcting the AI&apos;s subtle mistakes is the equivalent of the mundane car crash. It&apos;s a distributed cost with no single dramatic moment.</p><p>Nevertheless, I have no reason to believe that this technology will not continue to improve, and I, for one, am excited about the possibilities. For any big and ambitious project, the number of tickets to be completed, features to be implemented, and bugs to be fixed vastly outstrips the time and human bandwidth available to work on them.</p><h2 id="what-future-studies-should-tell-us">What Future Studies Should Tell Us</h2><p>It remains to be seen whether the results of the METR study can be replicated. However, the study clearly demonstrated that experts and developers were overly optimistic about the impact of AI on productivity. This is an important insight that should inform future research.</p><p>In some ways, the study raises more questions than it answers. It looked at a very particular situation: seasoned experts working in the familiar territory of their own large, mature projects. Future studies by METR and others could vary these conditions. What happens when we throw developers into unfamiliar codebases, where, at least per my anecdotal experience, AI agents shine? What about junior developers or new contributors to an established open-source codebase? Under what conditions can AI act as a great equalizer, compressing the skill gap and providing a speed boost rather than a slowdown?</p><p>Furthermore, the current study centered on completion time, but faster isn&apos;t always better. One possible follow-up would be a blinded study where human experts review pull requests without knowing whether AI was involved. We could then measure things like the number of review cycles, the time spent in review, and the long-term maintainability of the code. 
This might shed light on when and how AI-assisted development trades short-term speed for long-term technical debt.</p><p>Finally, the field of AI is still evolving at a rapid pace. The synchronous workflow that the study&apos;s setup encouraged could be fundamentally suboptimal. Exploring different interaction models, such as the asynchronous delegation workflow that I&apos;ve moved to, could yield very different results.</p><h2 id="how-to-work-with-ai-now">How to Work With AI Now</h2><p>What follows are my current recommendations for using AI in your daily workflow, based on my experiences and the METR study.</p><h3 id="adopt-an-asynchronous-workflow">Adopt an Asynchronous Workflow</h3><p>The biggest drain from using AI is the cognitive load of &quot;babysitting&quot; it. Instead of watching the agent work, adopt an asynchronous model:</p><ul><li>Define one or more tasks (e.g., running a set of commands to audit a codebase for lint errors and documentation mistakes), let AI agents work on them in the background (e.g., in separate Git worktrees of your repository), and turn your attention elsewhere.</li><li>Review the completed task(s) later. If the output is flawed, it&apos;s often better to discard it and have the model try again with a better prompt rather than engaging in a frustrating back-and-forth.</li></ul><h3 id="know-what-to-delegate">Know What to Delegate</h3><p>AI can now handle the first 80% of many programming tasks, but the final 20% often requires deep context. The key is to choose the right tasks for AI:</p><ul><li><strong>&quot;Vibe Code&quot; and Prototypes:</strong> use AI for mock-ups or small, isolated tools that can be thrown away. This is where the technology&apos;s speed offers a distinct advantage.</li><li><strong>Verifiable Code:</strong> AI is excellent for tasks that can be fully verified against an existing, robust test suite. 
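Under the asynchronous model described above, a worktree-per-task setup might look like the following sketch; the repository path, branch name, and agent invocation are illustrative rather than prescribed:<pre><code class="language-bash"># Sketch: delegate a verifiable task to an agent in an isolated Git worktree.
# The path, branch name, and claude prompt below are placeholders.
set -e

cd ~/projects/stdlib
git worktree add ../stdlib-lint-fixes -b ai/lint-fixes

# Kick off the agent in the new worktree and turn your attention elsewhere, e.g.:
#   (cd ../stdlib-lint-fixes; claude -p "Audit the codebase for lint errors and fix them")

# Later, verify the work against the existing test suite before reading the diff:
cd ../stdlib-lint-fixes
npm test
git diff develop --stat

# If the output is flawed, discard the worktree and retry with a better prompt:
cd ~/projects/stdlib
git worktree remove --force ../stdlib-lint-fixes
git branch -D ai/lint-fixes</code></pre>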
The tests act as a safety net to catch the subtle mistakes the AI might make.</li><li><strong>Boilerplate Code:</strong> AI can quickly generate boilerplate code, such as REST API endpoints or form validation, and can do so in a way that follows project conventions.</li><li><strong>Learning and Navigation:</strong> use AI to quickly learn your way around a large codebase, document previously undocumented code, or get help with tools you use infrequently. Asking LLMs questions can be much faster than hunting through documentation, particularly if that documentation is split across multiple resources.</li></ul><h3 id="use-and-customize-claude-code">Use and Customize Claude Code</h3><p>For tools such as Claude Code, customization is a helpful means of writing down any implicit knowledge about the project that is not readily accessible from the code alone.</p><ul><li><strong>Provide Proper Context:</strong> drag and drop relevant files (this can include images!) into the Claude Code window for the model to use as context for the task at hand. One approach I have found useful is to add TODO comments in the codebase with the required changes, and then have Claude Code work on them. Use the planning mode to have the model think through the task and generate a plan that you can approve before it jumps into implementation.</li><li><strong>Use Project Memory:</strong> use <code>CLAUDE.md</code> files to give the model project-specific <a href="https://docs.anthropic.com/en/docs/claude-code/memory?ref=blog.stdlib.io#how-claude-looks-up-memories">memory</a>, particularly of the project&apos;s architecture and other unwritten knowledge. You can have multiple <code>CLAUDE.md</code> files in different project sub-directories, and the model will intelligently pick up the most relevant one based on your current context.</li><li><strong>Build Custom Tooling:</strong> use the Claude CLI to build small, automated tools, such as a review bot that flags typos as a daily cron job. 
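As a sketch, such a bot can be a single crontab entry invoking Claude Code&apos;s non-interactive print mode; the schedule, repository path, and prompt below are illustrative placeholders:<pre><code class="language-bash"># Hypothetical crontab entry: every weekday at 9am, run a one-shot review query
# against a local checkout; cron delivers the printed output via local mail.
0 9 * * 1-5 cd /home/me/stdlib; claude -p &quot;Review commits merged to develop in the past day and flag typos or inconsistencies&quot;</code></pre>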
For fuzzy tasks such as pointing out typos or inconsistencies in a PR, it&apos;s best to let Claude generate output that can be verified by a human. For well-defined tasks that can be fully automated, it is better to have Claude produce code that runs deterministically and can be verified.</li><li><strong>Set up Hooks to Automate Actions:</strong> <a href="https://docs.anthropic.com/en/docs/claude-code/hooks?ref=blog.stdlib.io">hooks</a> are a powerful new feature of Claude Code that allows you to run scripts and commands at different points in Claude&apos;s agentic lifecycle.</li><li><strong>Automate Repetitive Actions:</strong> create <a href="https://docs.anthropic.com/en/docs/claude-code/slash-commands?ref=blog.stdlib.io#custom-slash-commands">custom slash commands</a> for frequently performed routine tasks. Below is an example <code>stdlib:review-changed-packages</code> command that I run to flag any possible errors in PRs that were recently merged to our development branch:</li></ul><pre><code class="language-md">- Pull down the latest changes from the develop branch of the stdlib repository.
- Get all commits from the past $ARGUMENTS day(s) that were merged to the develop branch.
- Extract a list of @stdlib packages touched by those commits.
- Review the packages for any typos, bugs, violations of the stdlib style guidelines, or inconsistencies introduced by the changes.
- Fix any issues found during the review.
</code></pre><h2 id="final-thoughts">Final Thoughts</h2><p>It&apos;s natural to attack a study whose results you don&apos;t like. A better response is to ask what they might be telling you. For me, it tells me there is still a lot to learn about how to use this new, powerful, but often deeply weird and unpredictable technology. One mistake is treating it as the driver in a pair programming session that requires your constant attention. 
Instead, treat it like a batch process for grunt work, freeing you to focus on the problems that actually require a human brain.</p><hr><p><em>Philipp Burckhardt is a data scientist and software engineer securing software supply chains at </em><a href="https://socket.dev/?ref=blog.stdlib.io"><em>Socket</em></a><em> and a core contributor of </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. 
Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[GSoC 2025 Projects Announced]]></title><description><![CDATA[We're thrilled to share that stdlib was awarded five slots for Google Summer of Code 2025.]]></description><link>https://blog.stdlib.io/stdlib-gsoc-2025-projects-announced/</link><guid isPermaLink="false">681d399bed2315029621f0a8</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Philipp Burckhardt]]></dc:creator><pubDate>Fri, 09 May 2025 02:24:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1589652717521-10c0d092dea9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI3fHxsYXB0b3B8ZW58MHx8fHwxNzQ2NzU3MDg0fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1589652717521-10c0d092dea9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI3fHxsYXB0b3B8ZW58MHx8fHwxNzQ2NzU3MDg0fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="GSoC 2025 Projects Announced"><p>Today, we are grateful to announce that stdlib, the fundamental numerical library for JavaScript, was awarded five slots in this year&apos;s Google Summer of Code (GSoC). We participated in the program last year for the first time, and had four talented students working on a variety of projects. It was a resounding success, which we hope to surpass this year given all <a href="https://blog.stdlib.io/reflecting-on-gsoc-2024/">that we have learned</a> over the past year and a half.</p><p>This achievement comes after a tremendously productive start to 2025. 
Since January 1st of this year, the stdlib community has:</p><ul><li>Opened 2,000 PRs, with 1,377 successfully merged.</li><li>Welcomed contributions from 88 different contributors.</li><li>Added 3,452 commits to the repository.</li></ul><p>For GSoC, we received 99 excellent applications from enthusiastic students. Ranking proposals involved tough decisions, and we would have loved for a few more projects to be accepted. We are grateful to everyone who applied and encourage those not selected this year to stay connected, to continue contributing to the project, and to apply again next year! In fact, one of this year&apos;s accepted contributors was a repeat applicant, demonstrating how persistence and continued engagement can pay off.</p><p>The accepted projects are listed below. Each project addresses key areas that will expand JavaScript&apos;s potential for technical and scientific applications.</p><p><a href="https://summerofcode.withgoogle.com/programs/2025/projects/opJzlQTz?ref=blog.stdlib.io"><strong>Add LAPACK bindings and implementations for linear algebra</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/aayush0325?ref=blog.stdlib.io">Aayush Khanna</a></p><p>The goal of Aayush&apos;s project is to develop JavaScript and C implementations of LAPACK (<strong>L</strong>inear <strong>A</strong>lgebra <strong>Pack</strong>age) routines. This project aims to extend conventional LAPACK APIs by borrowing ideas from BLIS, thus ensuring easy compatibility with stdlib ndarrays and adding support for both row-major (C-style) and column-major (Fortran-style) storage layouts. 
This work will help overcome LAPACK&apos;s column-major limitation and thus make advanced linear algebra operations more accessible and efficient in JavaScript environments.</p><p><a href="https://summerofcode.withgoogle.com/programs/2025/projects/JYSuqCBs?ref=blog.stdlib.io"><strong>Expanding array-based statistical computation in stdlib</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/gururaj1512?ref=blog.stdlib.io">Gururaj Gurram</a></p><p>Gururaj will advance statistical operations in stdlib by introducing convenience array wrappers for all existing strided APIs, thus improving developer ergonomics for common use cases. Additionally, he will develop specialized ndarray statistical kernels with the aim of facilitating efficient statistical reductions across multi-dimensional data.</p><p><a href="https://summerofcode.withgoogle.com/programs/2025/projects/Td3c9qv2?ref=blog.stdlib.io"><strong>Implement base special mathematical functions in JavaScript and C</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/anandkaranubc?ref=blog.stdlib.io">Karan Anand</a></p><p>Karan will implement and enhance lower-level scalar kernels for special mathematical functions in stdlib. The goal is to complete missing C implementations for existing double-precision packages, develop new single-precision versions, and ensure consistency, accuracy, and IEEE 754 compliance. 
These enhancements will provide developers with the most comprehensive set of high-precision mathematical tools for scientific computing in JavaScript.</p><p><a href="https://summerofcode.withgoogle.com/programs/2025/projects/lKDCoGBz?ref=blog.stdlib.io"><strong>Achieve ndarray API parity with built-in JavaScript arrays</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/headlessNode?ref=blog.stdlib.io">Muhammad Haris</a></p><p>Haris will extend stdlib&apos;s ndarray capabilities by implementing familiar JavaScript array methods like <code>concat</code>, <code>find</code>, <code>flat</code>, <code>includes</code>, <code>indexOf</code>, <code>reduce</code>, and <code>sort</code> for multi-dimensional arrays. The project will develop high-performance C implementations with Node.js native add-ons for compute-intensive operations. These enhancements will allow JavaScript developers to work with multi-dimensional arrays as easily as built-in arrays, significantly expanding JavaScript&apos;s capabilities for scientific and numerical computing.</p><p><a href="https://summerofcode.withgoogle.com/programs/2025/projects/NJC5LuLO?ref=blog.stdlib.io"><strong>Add BLAS bindings and implementations for linear algebra</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/ShabiShett07?ref=blog.stdlib.io">Shabareesh Shetty</a></p><p>Shabareesh will expand stdlib&apos;s BLAS (<strong>B</strong>asic <strong>L</strong>inear <strong>A</strong>lgebra <strong>S</strong>ubprograms) support by implementing missing Level 2 (vector-matrix) and Level 3 (matrix-matrix) operations in JavaScript, C, Fortran, and WebAssembly. The project will focus on key dependencies for LAPACK routines and create performance-optimized APIs that work in both browser and server environments. 
These enhancements will provide essential building blocks for developing high-performance machine learning and statistical analysis applications on the web.</p><p>We&apos;re excited to see these projects develop over the coming months. Each contribution will significantly enhance stdlib&apos;s capabilities and make advanced mathematical and statistical operations more accessible to the JavaScript community. The work done by these talented contributors will help bridge the gap between traditional scientific computing environments and JavaScript, furthering our mission to create a comprehensive, high-performance standard library for JavaScript.</p><p>We&apos;d like to extend thanks to Google for their continued support of open-source development through the Summer of Code program, and we look forward to sharing updates as the above projects progress over the course of this summer. In addition to watching for more posts on this blog, you can follow development by joining our <a href="https://app.gitter.im/?ref=blog.stdlib.io#/room/#stdlib-js_stdlib:gitter.im">community chat</a>. We also hold regular&#xA0;<a href="https://github.com/stdlib-js/meetings/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3A%22Office+Hours%22&amp;ref=blog.stdlib.io">office hours</a>&#xA0;over video conferencing, which is a great opportunity to ask questions, share ideas, and engage directly with the stdlib team.</p><p>We hope that you&apos;ll join us in our mission to advance cutting-edge scientific computation in JavaScript. 
Start by showing your support and starring the project on GitHub today: <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">https://github.com/stdlib-js/stdlib</a>.</p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[Google Summer of Code 2025]]></title><description><![CDATA[stdlib was accepted as a Google Summer of Code mentoring organization for 2025!]]></description><link>https://blog.stdlib.io/announcing-gsoc-2025/</link><guid isPermaLink="false">67be8727ed2315029621f094</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Thu, 27 Feb 2025 18:21:51 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2025/02/gen_splash.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2025/02/gen_splash.png" alt="Google Summer of Code 2025"><p>We are beyond excited to share that <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> has once again been accepted as a mentoring organization for <a href="https://summerofcode.withgoogle.com/?ref=blog.stdlib.io">Google Summer of Code</a> 2025! 
This marks our second consecutive year participating in this incredible program, and we cannot wait to work alongside aspiring open source contributors to push the boundaries of scientific computing on the web.</p><p>Google Summer of Code (GSoC) is a global initiative that introduces new contributors to open source software by offering mentorship and funding for meaningful, long-term projects. Over the years, GSoC has been instrumental in helping open source projects like stdlib grow, while also giving participants valuable real-world software development experience. With our acceptance into GSoC 2025, we are looking forward to welcoming a new wave of enthusiastic contributors who share our vision of making JavaScript and the extended ecosystem of TypeScript, Node.js, Deno, and other JavaScript runtimes first-class environments for numerical and scientific computing.</p><h3 id="reflecting-on-gsoc-2024-a-year-of-growth">Reflecting on GSoC 2024: A Year of Growth</h3><p>Last year marked our first time participating in GSoC, and we could not have asked for a better experience. We had the privilege of mentoring four incredibly talented contributors, each of whom made substantial contributions to the stdlib ecosystem.</p><p>From integrating BLAS bindings and optimizing special mathematical functions to enhancing support for boolean arrays and improving our interactive REPL experience, their work strengthened the foundation of stdlib and paved the way for even greater advancements. 
Beyond just code, their contributions sparked deeper engagement within our community, leading to over <strong>2,000 pull requests from more than 100 contributors</strong> and <strong>3,000+ new commits</strong> to <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> since February 2024.</p><p>If you missed our retrospective on last year&apos;s program, be sure to check out our blog post: <a href="https://blog.stdlib.io/reflecting-on-gsoc-2024/">Reflecting on GSoC 2024</a>.</p><h3 id="whats-in-store-for-gsoc-2025">What&apos;s in Store for GSoC 2025?</h3><p>As we gear up for GSoC 2025, we have a range of exciting project ideas that we hope will inspire potential contributors. Whether you&apos;re passionate about numerical computing, statistical modeling, performance optimization, or developer tooling, there&apos;s something for you. Some areas we&apos;re particularly excited about include:</p><ul><li><strong>BLAS/LAPACK</strong>: continuing to expand stdlib&apos;s coverage of BLAS and LAPACK operations to provide a robust foundation for linear algebra and machine learning in JavaScript and Node.js.</li><li><strong>WebAssembly</strong>: compiling BLAS and statistical kernels to WebAssembly with support for ergonomic inter-operation between WebAssembly and JavaScript.</li><li><strong>ndarray kernels</strong>: implementing lower-level ndarray kernels for efficient element-wise iteration and reduction to improve performance.</li><li><strong>Improving developer tooling</strong>: improving the stdlib development experience by creating better tools for automation, publishing, and managing the stdlib package ecosystem.</li><li><strong>Expanding statistical distributions</strong>: building on previous efforts to provide C implementations for special mathematical functions, thus unlocking a wider range of probability distributions and making stdlib a comparable alternative to SciPy for statistical computing in JavaScript.</li></ul><p>These ideas, 
however, are just the beginning. We believe that innovation comes from collaboration, and we welcome fresh ideas from prospective contributors. If you have a project concept that aligns with our mission and a clear plan for execution, we would love to hear about it. Our current list of ideas is available on our GSoC <a href="https://github.com/stdlib-js/google-summer-of-code/blob/main/ideas.md?ref=blog.stdlib.io">repository</a>, but don&apos;t feel constrained by it&#x2014;great ideas come from all directions!</p><h3 id="how-to-get-involved">How to Get Involved</h3><p>If you&apos;re interested in contributing to stdlib for GSoC 2025, now is the perfect time to get started. Here&apos;s how you can begin your journey:</p><ol><li><strong>Explore stdlib</strong>: familiarize yourself with the project by browsing the project&apos;s <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub repository</a> and reading our documentation.</li><li><strong>Join the conversation</strong>: engage with the stdlib community on <a href="https://gitter.im/stdlib-js/stdlib?ref=blog.stdlib.io">Element</a> to discuss project ideas, ask questions, and connect with mentors.</li><li><strong>Review our guidelines</strong>: carefully read our <a href="https://github.com/stdlib-js/google-summer-of-code/tree/main?ref=blog.stdlib.io">GSoC Application Guidelines</a> to understand what we&apos;re looking for in a proposal.</li><li><strong>Start contributing</strong>: we strongly encourage all applicants to contribute to stdlib before submitting their application. 
This can be in the form of a bug fix, new feature, performance improvement, or some other enhancement to stdlib&apos;s capabilities.</li></ol><p>The official GSoC timeline is as follows:</p><ul><li><strong>February 27 &#x2013; March 24</strong>: prospective contributors discuss project ideas with mentoring organizations.</li><li><strong>March 24 &#x2013; April 8</strong>: application period (final deadline: April 8 at 18:00 UTC).</li><li><strong>May 8</strong>: accepted proposals announced.</li><li><strong>May 8 &#x2013; June 1</strong>: community bonding period.</li><li><strong>June 2 &#x2013; September 1</strong>: standard 12-week coding period.</li></ul><p>For the full timeline, visit the <a href="https://developers.google.com/open-source/gsoc/timeline?ref=blog.stdlib.io">GSoC 2025 Timeline</a>.</p><h3 id="looking-ahead">Looking Ahead</h3><p>As we embark on another exciting GSoC season, we want to extend our deepest gratitude to Google for this opportunity. We are incredibly excited to meet new contributors, explore new ideas, and continue building an open source ecosystem where JavaScript thrives as a language for scientific computing.</p><p>If you&apos;re passionate about building high-quality software and eager to make an impact, we invite you to join us. We can&apos;t wait to see your ideas and begin working together to advance scientific computing in JavaScript. 
Let&apos;s make this year&apos;s GSoC program one to remember!</p><hr><p><em>Athan Reines is a software engineer at </em><a href="https://quansight.com/?ref=blog.stdlib.io"><em>Quansight</em></a><em> and core developer of </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[New ways to engage with the stdlib community!]]></title><description><![CDATA[Announcing office hours and a public events calendar.]]></description><link>https://blog.stdlib.io/new-ways-to-engage-with-the-stdlib-community/</link><guid isPermaLink="false">6785b1ceed2315029621f045</guid><category><![CDATA[News]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Tue, 14 Jan 2025 00:42:08 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2025/01/gen_splash-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2025/01/gen_splash-1.png" alt="New ways to engage with the stdlib community!"><p>Fostering a vibrant and inclusive community is crucial for ensuring the long-term success of open-source software, and stdlib is no exception. 
We believe that collaboration and open communication are key to driving innovation and making scientific computing on the web accessible to everyone. To that end, we&apos;re thrilled to announce two new initiatives designed to make it easier than ever for contributors, users, and maintainers to connect, collaborate, and grow together!</p><h2 id="weekly-office-hours">Weekly Office Hours</h2><p>As part of our efforts to enhance transparency and collaboration, we&apos;re proud to announce weekly office hours! We&apos;ve been running these informally for the past few months, and they&apos;ve been a wonderful success, providing high-bandwidth opportunities to connect with project maintainers, users, and new and existing stdlib contributors.</p><p>To facilitate the coordination of office hours and other public project meetings, we&apos;ve created a public GitHub <a href="https://github.com/stdlib-js/meetings?ref=blog.stdlib.io">repository</a> to serve as a centralized hub where community members can propose agenda topics, review discussion points, and participate in shaping the direction of stdlib. Each week, in advance of the next office hours, we&apos;ll create a new dedicated agenda <a href="https://github.com/stdlib-js/meetings/issues?q=sort%3Aupdated-desc+state%3Aopen+label%3A%22Office+Hours%22&amp;ref=blog.stdlib.io">issue</a>, where you can link issues and pull requests you want to discuss, post questions in advance, and share any pre-reads. Thus far, agendas have run the gamut, from project overviews to live code reviews to discussions about the project roadmap to upcoming events and community announcements.</p><p>In short, if you have questions about stdlib, need help fixing a bug, are figuring out what to do next, or are just looking for feedback, this is your time to shine! Please join our weekly office hours to connect with project maintainers, stay updated on the latest project news, and chat with other community members. 
This is a great opportunity to ask questions, share ideas, and engage directly with the stdlib team.</p><p>Everyone is welcome&#x2014;drop in and say hello!</p><h2 id="public-community-calendar">Public Community Calendar</h2><p>Second, we&apos;re excited to introduce our new public community <a href="https://calendar.google.com/calendar/u/0/embed?src=a72677fe2820c833714b8b9a2aa87393f742bcaf0d0f6c9499eee6661795eae0%40group.calendar.google.com&amp;ref=blog.stdlib.io">calendar</a>, where you can stay up-to-date with all stdlib events, including office hours, project orientations, development meetings, and other important happenings.</p><p>With this <a href="https://calendar.google.com/calendar/u/0/embed?src=a72677fe2820c833714b8b9a2aa87393f742bcaf0d0f6c9499eee6661795eae0%40group.calendar.google.com&amp;ref=blog.stdlib.io">calendar</a>, you can:</p><ul><li>Find the dates and times of upcoming office hours and meetings.</li><li>Add our events to your own calendar for easy reminders.</li><li>Stay informed about new opportunities to engage with the stdlib team and community.</li></ul><h2 id="how-you-can-get-involved">How You Can Get Involved</h2><p>Here are a few ways you can make the most of these new resources:</p><ul><li><strong>Bookmark the </strong><a href="https://calendar.google.com/calendar/u/0/embed?src=a72677fe2820c833714b8b9a2aa87393f742bcaf0d0f6c9499eee6661795eae0%40group.calendar.google.com&amp;ref=blog.stdlib.io"><strong>community calendar</strong></a><strong> or add it to your own.</strong> Be on the lookout for upcoming events, and mark your calendar to join us.</li><li><strong>Engage on GitHub.</strong> Visit our meetings <a href="https://github.com/stdlib-js/meetings?ref=blog.stdlib.io">repository</a> to propose agenda topics or contribute to ongoing discussions.</li><li><strong>Attend Office Hours.</strong> Whether you&apos;re stuck on a problem or curious about the latest project updates, office hours are an excellent opportunity to connect and 
learn.</li><li><strong>Spread the Word.</strong> Help us grow the stdlib community by sharing these updates with anyone who might be interested.</li></ul><h2 id="lets-build-together">Let&apos;s Build Together!</h2><p>We&apos;re committed to creating a supportive and inspiring environment for everyone in the scientific computing ecosystem, and we&apos;re excited to see how these new initiatives will help our community thrive. Needless to say, we can&apos;t wait to connect with you at our next office hours!</p><p>Together, we&apos;re building the future of scientific computing on the web! &#x1F680;</p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. 
Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[2024 Retrospective]]></title><description><![CDATA[A look back at 2024 and a preview of the year ahead for all things stdlib.]]></description><link>https://blog.stdlib.io/2024-retrospective/</link><guid isPermaLink="false">677915a2ed2315029621f024</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Sat, 04 Jan 2025 20:25:13 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2025/01/gen_splash.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2025/01/gen_splash.png" alt="2024 Retrospective"><p>2024 was a <strong>landmark year</strong> for <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>, packed with progress, innovation, and community growth. Looking back, I am struck by the amount of time and effort members of the stdlib community spent refining existing APIs, crafting new functionality, and laying the groundwork for an exciting road ahead. I feel incredibly fortunate to be part of a community that is actively shaping the future of scientific computing on the web, and I am bullish on our continued success in the months to come.</p><p>In this post, I&apos;ll provide a recap of some key highlights and foreshadow what&apos;s in store for 2025. While I&apos;ll be making various shoutouts to individual contributors, none of what we accomplished this year could have happened without the entire stdlib community. The community was instrumental in doing the hard work necessary to make stdlib a success, from finding and patching bugs to reviewing pull requests and triaging issues to diving deep into the weeds of numerical algorithms and software design. 
If I don&apos;t mention you by name, please be sure to know that your efforts are recognized and greatly appreciated. A big thank you to everyone involved and to everyone who&apos;s helped out along the way, in ways both big and small. &#x2764;&#xFE0F;</p><h2 id="tldr">TL;DR</h2><p>This past year was transformative for stdlib, marked by significant growth, innovation, and community contributions. Some key highlights include:</p><ul><li><strong>Community Growth</strong>: 84 new contributors joined stdlib, tripling the size of our developer community and driving over 4,000 commits, 2,200 pull requests, and the release of 500+ new packages.</li><li><strong>Google Summer of Code</strong>: four exceptional contributors helped advance critical projects, including enhanced REPL capabilities, expanded BLAS support, and new mathematical APIs.</li><li><strong>Enhanced Developer Tools</strong>: major strides in automation included automated changelog generation, improved CI workflows, and better test coverage tracking.</li><li><strong>Technical Milestones</strong>: significant progress was made in linear algebra (BLAS and LAPACK), fancy indexing, WebAssembly integrations, and C implementations of mathematical functions, all aimed at making JavaScript a first-class language for scientific computing.</li><li><strong>Future Vision</strong>: looking ahead to 2025, we aim to expand our math libraries, improve REPL interactivity, explore WebGPU, and continue building tools to make scientific computing on the web more powerful and accessible.</li></ul><p>With stdlib&#x2019;s rapid growth and the collective efforts of our global community, we&apos;re shaping the future of scientific computing on the web. Join us as we take the next steps in this exciting journey!</p><h2 id="stats">Stats</h2><p>To kick things off, some high-level year-end statistics. 
This year,</p><ul><li><strong>84</strong> new contributors from across the world joined stdlib, <strong>tripling</strong> our developer community size and bringing new life and fresh perspectives to the project.</li><li>Together, we made over <strong>4000 commits</strong> to the main development branch.</li><li>We opened nearly <strong>2200 pull requests</strong>, with over 1600 of those pull requests merged.</li><li>And we shipped over <strong>500 new packages</strong> in the project, ranging from new linear algebra routines to specialized math functions to foundational infrastructure for multi-dimensional arrays to APIs supporting WebAssembly and other accelerated environments.</li></ul><p>These accomplishments reflect the hard work and dedication of our community. It was a busy year, and we were forced to think critically about how we can effectively scale the project and our community as both continue to grow. This meant investing in tooling and automation, improving our review and release processes, and figuring out ways to quickly identify and upskill new contributors.</p><h2 id="google-summer-of-code">Google Summer of Code</h2><p>The one event which really set things in motion for stdlib in 2024 was our <a href="https://summerofcode.withgoogle.com/programs/2024/organizations/stdlib?ref=blog.stdlib.io">acceptance</a> into Google Summer of Code (GSoC). We had previously applied in 2023, but were rejected. So when we applied in 2024, we didn&apos;t think we had much of a chance. Much to our surprise and delight, stdlib was accepted, thus setting off a mad dash to get our affairs together so that we could handle the influx of contributors to come.</p><p>GSoC ended up being a transformative experience for stdlib, bringing in talented contributors and pushing forward critical projects. As we detailed in our GSoC <a href="https://blog.stdlib.io/reflecting-on-gsoc-2024/">reflection</a>, the road was bumpy, but we learned a lot and came out the other side. 
Needless to say, we were extremely lucky to have four truly excellent GSoC contributors: <a href="https://github.com/orgs/stdlib-js/people/aman-095?ref=blog.stdlib.io">Aman Bhansali</a>, <a href="https://github.com/orgs/stdlib-js/people/gunjjoshi?ref=blog.stdlib.io">Gunj Joshi</a>, <a href="https://github.com/orgs/stdlib-js/people/Jaysukh-409?ref=blog.stdlib.io">Jaysukh Makvana</a>, and <a href="https://github.com/orgs/stdlib-js/people/Snehil-Shah?ref=blog.stdlib.io">Snehil Shah</a>. I&apos;ll have a bit more to say about their work in the sections below.</p><h2 id="repl">REPL</h2><p>The Node.js read-eval-print loop (REPL) is often something of an afterthought in the JavaScript world, both underutilized and underappreciated. From stdlib&apos;s earliest days, we wanted to create a better <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/repl?ref=blog.stdlib.io">REPL</a> experience, with integrated support for stdlib&apos;s scientific computing and data processing functionality. Development of the stdlib REPL has come in fits and starts, but there&apos;s always been a goal of matching the power and feature set of Python&apos;s IPython in order to facilitate interactive exploratory data analysis in JavaScript. We were thus quite excited when <a href="https://github.com/orgs/stdlib-js/people/Snehil-Shah?ref=blog.stdlib.io">Snehil Shah</a> expressed interest in working on the stdlib REPL as part of GSoC.</p><p>Snehil already covered some of his work in a previous blog post on <a href="https://blog.stdlib.io/welcoming-colors-to-the-repl/">&quot;Welcoming colors to the REPL!&quot;</a>, but his and others&apos; work covered so much more. A few highlights:</p><ul><li><strong>Preview completions</strong>: when typing characters matching a known symbol in the REPL, a completion preview is now displayed, helping facilitate auto-completion and saving developers precious keystrokes. 
Shoutout to <a href="https://github.com/tudor-pagu?ref=blog.stdlib.io">Tudor Pagu</a>, in particular, for adding this!</li><li><strong>Multi-line editing</strong>: prior to adding support for multi-line editing, the REPL supported multi-line inputs, but did not support editing previously entered lines, often leading to a frustrating user experience. Now, the REPL supports multi-line editing within the terminal similar to dedicated editor applications.</li><li><strong>Pagination of long outputs</strong>: a longstanding feature request has been to add support for something like <code>less</code>/<code>more</code> to the stdlib REPL. Previously, if a command generated a long output, a user could be confronted with a wall of text. This has now been addressed, with the hope of adding more advanced <code>less</code>-like search functionality in the months ahead.</li><li><strong>Bracketed-paste</strong>: pasting multi-line input into the REPL used to execute the input line-by-line, instead of pasting it as a single prompt. While useful in some cases, this is often not the desired intent, especially when a user wishes to paste and edit multi-line input before execution. With bracketed-paste now supported, pasted input is inserted as a single prompt which can be reviewed and edited before execution.</li><li><strong>Custom syntax-highlighting themes</strong>: developers who are used to developing in IDEs can often feel adrift when moving to a terminal lacking some of the niceties of their favorite editor. One of those niceties is syntax-highlighting. Accordingly, we worked to add support for custom theming, as detailed in Snehil&apos;s <a href="https://blog.stdlib.io/welcoming-colors-to-the-repl/">blog post</a>.</li><li><strong>Auto-pairing</strong>: another common IDE nicety is the automatic closing of brackets and quotation marks, helping save keystrokes and mitigate the dreaded missing bracket. 
Never one to shy away from a difficult task, Snehil implemented support for auto-pairing as one of his first <a href="https://github.com/stdlib-js/stdlib/pull/1680?ref=blog.stdlib.io">pull requests</a> leading up to GSoC.</li></ul><p>Largely thanks to Snehil&apos;s work, we moved much closer to IPython parity in 2024, thus transforming the JavaScript experience for scientific computing. And we&apos;re not done yet. We still have pull requests working their way through the queue, and one thing I am particularly excited about is that we&apos;ve recently started exploring adding support for the Jupyter protocol. Stay tuned for additional REPL news in 2025!</p><h2 id="blas">BLAS</h2><p>Another area of focus has been the continued development of stdlib&apos;s <a href="https://netlib.org/blas/?ref=blog.stdlib.io">BLAS</a> (<strong>B</strong>asic <strong>L</strong>inear <strong>A</strong>lgebra <strong>S</strong>ubprograms) support, which provides fundamental APIs for common linear algebra operations, such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. Coming into 2024, BLAS support in stdlib was rather incomplete, particularly in terms of its support for complex-valued floating-point data types. 
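</p><p>To make the flavor of these APIs concrete, consider the Level 1 operation <code>axpy</code> (<code>y = a*x + y</code>). The sketch below is purely illustrative (stdlib&apos;s actual routines add argument validation, accessor support, and performance optimizations), but it shows the conventional strided interface that BLAS routines share:</p>

```javascript
// Illustrative sketch of the Level 1 BLAS operation `axpy` (y = a*x + y)
// using a conventional strided interface; a negative stride iterates in
// reverse, starting from the end of the array.
function axpy( N, alpha, x, strideX, y, strideY ) {
    let ix = ( strideX >= 0 ) ? 0 : ( 1 - N ) * strideX;
    let iy = ( strideY >= 0 ) ? 0 : ( 1 - N ) * strideY;
    for ( let i = 0; i !== N; i++ ) {
        y[ iy ] += alpha * x[ ix ];
        ix += strideX;
        iy += strideY;
    }
    return y;
}

const x = new Float64Array( [ 1.0, 2.0, 3.0 ] );
const y = new Float64Array( [ 1.0, 1.0, 1.0 ] );

// Compute y = 5*x + y:
axpy( 3, 5.0, x, 1, y, 1 );
// y => [ 6.0, 11.0, 16.0 ]
```

<p>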
The tide began to change with <a href="https://github.com/orgs/stdlib-js/people/Jaysukh-409?ref=blog.stdlib.io">Jaysukh Makvana</a>&apos;s efforts to achieve feature parity of stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex64?ref=blog.stdlib.io"><code>Complex64Array</code></a> and <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex128?ref=blog.stdlib.io"><code>Complex128Array</code></a> data structures with built-in JavaScript typed arrays.</p><p>These efforts subsequently paved the way for adding Level 1 BLAS support for complex-valued typed array data types and the work of <a href="https://github.com/orgs/stdlib-js/people/aman-095?ref=blog.stdlib.io">Aman Bhansali</a>, who set out to further Level 2 and Level 3 BLAS support in stdlib. After focusing initially on lower-level BLAS strided array interfaces, Aman expanded his scope by adding WebAssembly implementations and by adding support for applying BLAS operations to stacks of matrices and vectors via higher-level multi-dimensional array (a.k.a., <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarray</code></a>) APIs.</p><p>In addition to conventional BLAS routines, stdlib includes <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/blas/ext/base?ref=blog.stdlib.io">BLAS-like routines</a> which are not a part of <a href="https://netlib.org/blas/?ref=blog.stdlib.io">reference BLAS</a>. 
These routines include APIs for alternative scalar and cumulative summation algorithms, sorting strided arrays, filling and manipulating strided array elements, explicit handling of <code>NaN</code> values, and other operations which don&apos;t fall neatly under the banner of linear algebra, but are common when working with data.</p><p>During Aman&apos;s BLAS work, we cleaned up and refactored BLAS implementations, and <a href="https://github.com/headlessNode?ref=blog.stdlib.io">Muhammad Haris</a> volunteered to extend those efforts to our <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/blas/ext/base?ref=blog.stdlib.io">extended BLAS</a> routines. His efforts entailed migrating Node.js native add-ons to pure C in order to reduce boilerplate and leverage our extensive collection of C <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/napi?ref=blog.stdlib.io">macros</a> for authoring of native add-ons and further entailed adding dedicated C APIs to facilitate interfacing with stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarrays</code></a>.</p><p>These developments ensure that stdlib continues to lead the way in linear algebra support for JavaScript developers, offering powerful tools for numerical computing. 
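</p><p>To give a flavor of the &quot;alternative summation algorithms&quot; mentioned above, the following sketch shows compensated (Kahan) summation, which tracks the rounding error lost at each step. This is illustrative only; stdlib&apos;s extended BLAS summation routines use related, more refined algorithms:</p>

```javascript
// Illustrative sketch of compensated (Kahan) summation, which reduces
// accumulated floating-point rounding error relative to naive summation.
function kahanSum( x ) {
    let sum = 0.0;
    let c = 0.0; // running compensation for lost low-order bits
    for ( let i = 0; i !== x.length; i++ ) {
        const y = x[ i ] - c;
        const t = sum + y;
        c = ( t - sum ) - y; // recover the low-order bits lost when computing t
        sum = t;
    }
    return sum;
}

const out = kahanSum( [ 0.1, 0.2, 0.3 ] );
// out is within one rounding error of 0.6
```

<p>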
While much has been completed, more work remains, and BLAS will continue to be a focal point in 2025.</p><h2 id="lapack">LAPACK</h2><p>Building on the BLAS work as part of an internship at <a href="https://labs.quansight.org/?ref=blog.stdlib.io">Quansight Labs</a>, <a href="https://github.com/Pranavchiku?ref=blog.stdlib.io">Pranav Goswami</a> worked to lay the foundations for <a href="https://www.netlib.org/lapack/index.html?ref=blog.stdlib.io">LAPACK</a> (<strong>L</strong>inear <strong>A</strong>lgebra <strong>Pack</strong>age) support in stdlib in order to provide higher order linear algebra routines for solving systems of linear equations, eigenvalue problems, matrix factorization, and singular value decomposition. Detailed more fully in his post-internship <a href="https://blog.stdlib.io/lapack-in-stdlib/">blog post</a>, Pranav sought to establish an approach for testing and documentation of added implementations and to leverage the ideas of <a href="https://github.com/flame/blis?ref=blog.stdlib.io">BLIS</a> to create LAPACK interfaces which facilitated interfacing with stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarrays</code></a> and thus minimize data movement and storage requirements. While a good chunk of time was spent working out the kinks and iterating on API design, Pranav made significant headway in adding various implementation utilities and nearly 30 commonly used LAPACK routines. 
Given the enormity of LAPACK (~1700 routines), this work will continue into the foreseeable future, so be on the lookout for more updates in the months ahead!</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">As a quick aside, if you&apos;re interested in learning more about how stdlib approaches interfacing with Fortran libraries, many of which still form the bedrock of modern numerical computing, be sure to check out Pranav&apos;s blog post on <a href="https://blog.stdlib.io/how-to-call-fortran-routines-from-javascript-with-node-js/">calling Fortran routines from JavaScript using Node.js</a>.</div></div><h2 id="c-implementations-of-special-math-functions">C implementations of special math functions</h2><p>One of stdlib&apos;s longstanding priorities is continued development of its vectorized routines for common mathematical and statistical operations. While all scalar mathematical kernels (e.g., transcendental functions, such as <code>sin</code>, <code>cos</code>, <code>erf</code>, <code>gamma</code>, etc., and statistical distribution density functions) have JavaScript implementations, many of the kernels lacked corresponding C implementations, which are needed for unlocking faster performance in Node.js and other server-side JavaScript runtimes supporting native bindings.</p><p><a href="https://github.com/gunjjoshi/?ref=blog.stdlib.io">Gunj Joshi</a> and others sought to fill this <a href="https://github.com/stdlib-js/stdlib/issues/649?ref=blog.stdlib.io">gap</a> and opened over <strong>160</strong> pull requests adding dedicated C implementations. At this point, only a few of the most heavily used double-precision transcendental functions remain (looking at you, <code>betainc</code>!). Efforts have now turned to completing single-precision support and adding C implementations for statistical distribution functions. 
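</p><p>Many of these scalar kernels boil down to carefully derived polynomial and rational approximations. As a purely illustrative sketch (real kernels use rigorously derived coefficients and error bounds), evaluating such an approximation typically relies on Horner&apos;s rule:</p>

```javascript
// Illustrative sketch of Horner's rule for evaluating a polynomial
// with coefficients ordered from lowest to highest degree.
function evalpoly( c, x ) {
    let r = c[ c.length - 1 ];
    for ( let i = c.length - 2; i >= 0; i-- ) {
        r = r * x + c[ i ];
    }
    return r;
}

// p(x) = 1 + 2x + 3x^2, evaluated at x = 2:
const p = evalpoly( [ 1.0, 2.0, 3.0 ], 2.0 );
// p => 17
```

<p>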
We expect this work to continue for the first half of 2025 before turning our attention to higher-level strided array and ndarray APIs, with implementations for both WebAssembly and Node.js native add-ons.</p><h2 id="fancy-indexing">Fancy indexing</h2><p>Another area where we made significant progress is in improving slicing and array manipulation ergonomics. Users of numerical programming languages, such as MATLAB and Julia, and dedicated numerical computing libraries, such as NumPy, have long enjoyed the benefit of concise syntax for expressing operations affecting only a subset of array elements. For example, the following snippet demonstrates setting every other element in an array to zero with NumPy.</p><pre><code class="language-python">import numpy as np

# Create an array of ones:
x = np.ones(10)

# Set every other element to zero:
x[::2] = 0.0
</code></pre><p>As a language, JavaScript does not provide such convenient syntax, forcing users to either use more verbose object methods or manual <code>for</code> loops. We thus sought to address this gap by leveraging <code>Proxy</code> objects to support &quot;fancy indexing&quot;. While the use of <code>Proxy</code> objects does incur some performance overhead due to property indirection, you now need only install and import a single <a href="https://github.com/stdlib-js/array-to-fancy?ref=blog.stdlib.io">package</a> to get all the benefits of Python-style slicing in JavaScript, thus obviating the need for verbose <code>for</code> loops and making array manipulation significantly more ergonomic.</p><pre><code class="language-javascript">import array2fancy from &apos;@stdlib/array-to-fancy&apos;;

// Create a plain array:
const x = [ 1, 2, 3, 4, 5, 6, 7, 8 ];

// Turn the plain array into a &quot;fancy&quot; array:
const y = array2fancy( x );

// Select the first three elements:
let v = y[ &apos;:3&apos; ];
// returns [ 1, 2, 3 ]

// Select every other element, starting from the second element:
v = y[ &apos;1::2&apos; ];
// returns [ 2, 4, 6, 8 ]

// Select every other element, in reverse order, starting with the last element:
v = y[ &apos;::-2&apos; ];
// returns [ 8, 6, 4, 2 ]

// Set all elements to the same value:
y[ &apos;:&apos; ] = 9;

// Create a shallow copy by selecting all elements:
v = y[ &apos;:&apos; ];
// returns [ 9, 9, 9, 9, 9, 9, 9, 9 ]
</code></pre><p>In addition to slice semantics, Jaysukh added support to stdlib for <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/bool?ref=blog.stdlib.io">boolean arrays</a>, thus laying the groundwork for boolean array masking.</p><pre><code class="language-javascript">import BooleanArray from &apos;@stdlib/array-bool&apos;;
import array2fancy from &apos;@stdlib/array-to-fancy&apos;;

// Create a plain array:
const x = [ 1, 2, 3, 4, 5, 6, 7, 8 ];

// Turn the plain array into a &quot;fancy&quot; array:
const y = array2fancy( x );

// Create a shorthand alias for creating an array &quot;index&quot; object:
const idx = array2fancy.idx;

// Create a boolean mask array:
const mask = new BooleanArray( [ true, false, false, true, true, true, false, false ] );

// Retrieve elements according to the mask:
const z = y[ idx( mask ) ];
// returns [ 1, 4, 5, 6 ]
</code></pre><p>We subsequently applied the lessons learned from boolean array masking to add support for integer array indexing.</p><pre><code class="language-javascript">import Int32Array from &apos;@stdlib/array-int32&apos;;
import array2fancy from &apos;@stdlib/array-to-fancy&apos;;

// Create a plain array:
const x = [ 1, 2, 3, 4, 5, 6, 7, 8 ];

// Turn the plain array into a &quot;fancy&quot; array:
const y = array2fancy( x );

// Create a shorthand alias for creating an array &quot;index&quot; object:
const idx = array2fancy.idx;

// Create an integer array:
const indices = new Int32Array( [ 0, 3, 4, 5 ] );

// Retrieve selected elements:
const z = y[ idx( indices ) ];
// returns [ 1, 4, 5, 6 ]
</code></pre><p>While the above demonstrates fancy indexing with built-in JavaScript array objects, we&apos;ve recently extended the concept of fancy indexing to stdlib <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarrays</code></a>, a topic we&apos;ll have more to say about in a future blog post.</p><p>Needless to say, we are particularly excited about these developments because we believe they will significantly improve the user experience of interactive computing and exploratory data analysis in JavaScript.</p><h2 id="test-and-build">Test and build</h2><p>Lastly, 2024 was a year of automation, and I would be remiss if I didn&apos;t mention the efforts of <a href="https://github.com/Planeshifter?ref=blog.stdlib.io">Philipp Burckhardt</a>. Philipp was instrumental in improving our CI build and test infrastructure and improving the overall scalability of the project. His work was prolific, but there are a few key highlights I want to bring to the fore.</p><ul><li><strong>Automatic changelog generation</strong>: Philipp shepherded the project toward using <a href="https://www.conventionalcommits.org/en/v1.0.0/?ref=blog.stdlib.io">conventional commits</a>, which is a standardized way for adding human and machine readable meaning to commit messages, and subsequently built a robust set of tools for performing automatic releases, generating comprehensive changelogs, and coordinating the publishing of stdlib&apos;s ever-growing ecosystem of over <strong>4000</strong> standalone packages. What was once a manual release process can now be done by running a single GitHub workflow.</li><li><strong>stdlib bot</strong>: Philipp created a GitHub pull request bot for automating pull request review tasks, posting helpful messages, and improving the overall maintainer development experience. 
In the months ahead, we&apos;re particularly keen to extend the bot&apos;s functionality to help with new contributor onboarding and flagging common contribution issues.</li><li><strong>Test coverage automation</strong>: with a project of stdlib&apos;s size, running the entire test suite on each commit and for each pull request is simply not possible. It can thus be challenging to stitch together individual package test coverage reports in order to obtain a global view of overall test coverage. Philipp worked to address this problem by creating an automation pipeline for uploading individual test coverage reports to a dedicated <a href="https://github.com/stdlib-js/www-test-code-coverage?ref=blog.stdlib.io">repository</a>, with support for tracking coverage metrics over time and reporting expected test coverage changes for each submitted pull request. Needless to say, this has drastically improved our visibility into test coverage metrics and helped improve our confidence in tests accompanying submitted pull requests.</li></ul><p>While we&apos;ve made considerable strides in our project automation tooling, we never seem to be short of ideas for further automation and tooling improvements. Expect more to come in 2025! &#x1F916;</p><h2 id="look-ahead">Look ahead</h2><p>So what&apos;s in store for 2025?! 
Glad you asked!</p><p>We&apos;ve already alluded to various initiatives in the sections above, but, at a high level, here&apos;s where we&apos;re planning to focus our efforts in the year ahead:</p><ul><li><strong>GSoC 2025</strong>: assuming Google runs its annual Google Summer of Code program and we&apos;re fortunate enough to be accepted again, we&apos;d love to continue supporting the next generation of open source contributors.</li><li><strong>Math and stats C implementations</strong>: expanding our library of scalar math and statistics kernels and ensuring double- and single-precision parity.</li><li><strong>BLAS</strong>: completing our WebAssembly distribution and higher-level APIs for operating on stacks of matrices and vectors.</li><li><strong>LAPACK</strong>: continuing to chip away at the ~1700 LAPACK routines (!).</li><li><strong>FFTs</strong>: adding initial Fast Fourier Transform (FFT) support to stdlib to help unlock algorithms for signal processing.</li><li><strong>Vectorized operations</strong>: automating package creation for vectorized operations over scalar math and statistics kernels.</li><li><strong>ndarray API parity</strong>: expanding the usability and familiarity of <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarrays</code></a> by achieving API parity with built-in JavaScript arrays and typed arrays.</li><li><strong>REPL</strong>: adding Jupyter-protocol support and various user-ergonomics improvements.</li><li><strong>WebGPU</strong>: while we haven&apos;t formally committed to any specific approach, we&apos;re keen on at least exploring support for <a href="https://en.wikipedia.org/wiki/WebGPU?ref=blog.stdlib.io">WebGPU</a>, an emerging web standard that enables webpages to use a device&apos;s graphics processing unit (GPU) efficiently, including for general-purpose GPU computation, in order to provide APIs for accelerated scientific computing on the 
web.</li><li><strong>Project funding</strong>: exploring and hopefully securing funding to accelerate development efforts and support the continued growth of the stdlib community.</li></ul><p>That&apos;s definitely a lot, and it&apos;s going to take a village&#x2014;a community of people dedicated to our mission of making the web a first-class platform for numerical and scientific computing. If you&apos;re ready to help build the future of scientific computing on the web, we&apos;d love for you to join us. Check out our <a href="https://github.com/stdlib-js/stdlib/blob/develop/CONTRIBUTING.md?ref=blog.stdlib.io">contributing guide</a> to see how you can get involved.</p><h2 id="a-personal-note">A personal note</h2><p>As we look ahead, I&apos;d like to share a personal reflection on what this year has meant to me. Given our growth this year, I often felt like I was drinking from a fire hose. And, honestly, it can be hard not to get burned out when you wake up day after day to over <em>100</em> new notifications and messages from folks wanting guidance, answers to questions, and pull requests reviewed. But, when reflecting on this past year, I am awfully proud of what we&apos;ve accomplished, and I am especially heartened when I see contributors new to open source grow and flourish, sometimes using the lessons they&apos;ve learned contributing as a springboard to dream jobs and opportunities. Having the fortune to see that is a driving motivation and a privilege within the greater world of open source that I do my best to not take for granted.</p><p>And with that, this concludes the 2024 retrospective. Looking back on all we&apos;ve achieved together, the future of scientific computing on the web has never been brighter! Thank you again to everyone involved who&apos;s helped out along the way. The road ahead is filled with exciting opportunities, and we can&apos;t wait to see what we will achieve together in 2025. Onward and upward! 
&#x1F680;</p><hr><p><em>Athan Reines is a software engineer at </em><a href="https://quansight.com/?ref=blog.stdlib.io"><em>Quansight</em></a><em> and core developer of </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[LAPACK in your web browser]]></title><description><![CDATA[Adding initial LAPACK support to stdlib to enable high-performance linear algebra on the web.]]></description><link>https://blog.stdlib.io/lapack-in-stdlib/</link><guid isPermaLink="false">6765e94e588c15cb1b916270</guid><category><![CDATA[Engineering]]></category><dc:creator><![CDATA[Pranav Goswami]]></dc:creator><pubDate>Fri, 20 Dec 2024 22:37:36 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2024/12/gen_splash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2024/12/gen_splash.jpg" alt="LAPACK in your web browser"><p><em>This post was originally published on the Quansight Labs </em><a href="https://labs.quansight.org/blog/lapack-in-stdlib?ref=blog.stdlib.io"><em>blog</em></a><em> and has been modified and republished here with Quansight&apos;s permission.</em></p><p>Web applications are rapidly 
emerging as a new frontier for high-performance scientific computation and AI-enabled end-user experiences. Underpinning the ML/AI revolution is linear algebra, a branch of mathematics concerning linear equations and their representations in vector spaces and via matrices. <a href="https://netlib.org/lapack/?ref=blog.stdlib.io">LAPACK</a> (&quot;<strong>L</strong>inear <strong>A</strong>lgebra <strong>Pack</strong>age&quot;) is a fundamental software library for numerical linear algebra, providing robust, battle-tested implementations of common matrix operations. Despite LAPACK being a foundational component of most numerical computing programming languages and libraries, a comprehensive, high-quality LAPACK implementation tailored to the unique constraints of the web has yet to materialize. That is...until now.</p><p>Earlier this year, I had the great fortune of being a summer intern at <a href="https://labs.quansight.org/?ref=blog.stdlib.io">Quansight Labs</a>, the public benefit division of <a href="https://quansight.com/?ref=blog.stdlib.io">Quansight</a> and a leader in the scientific Python ecosystem. During my internship, I worked to add initial LAPACK support to <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>, a fundamental library for scientific computation written in C and JavaScript and optimized for use in web browsers and other web-native environments, such as Node.js and Deno. In this blog post, I&apos;ll discuss my journey, some expected and unexpected (!) challenges, and the road ahead. My hope is that this work, with a little bit of luck, provides a critical building block in making web browsers a first-class environment for numerical computation and machine learning and portends a future of more powerful AI-enabled web applications.</p><p>Sound interesting? 
Let&apos;s go!</p><h2 id="what-is-stdlib">What is stdlib?</h2><p>Readers of this blog who are familiar with LAPACK are unlikely to be intimately familiar with the wild world of web technologies. For those coming from the world of numerical and scientific computing who are familiar with the scientific Python ecosystem, the easiest way to think of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is as an open source scientific computing library in the mold of <a href="https://github.com/numpy/numpy?ref=blog.stdlib.io">NumPy</a> and <a href="https://github.com/scipy/scipy?ref=blog.stdlib.io">SciPy</a>. It provides multi-dimensional array data structures and associated routines for mathematics, statistics, and linear algebra, but uses JavaScript, rather than Python, as its primary scripting language. As such, stdlib is laser-focused on the web ecosystem and its application development paradigms. This focus necessitates some interesting design and project architecture decisions, which make stdlib rather unique when compared to more traditional libraries designed for numerical computation.</p><p>Take NumPy as an example: it is a single monolithic library, where all of its components, outside of optional third-party dependencies such as <a href="https://github.com/OpenMathLib/OpenBLAS?ref=blog.stdlib.io">OpenBLAS</a>, form a single, indivisible unit. One cannot simply install NumPy routines for <a href="https://numpy.org/doc/stable/reference/routines.array-manipulation.html?ref=blog.stdlib.io">array manipulation</a> without installing all of NumPy. If you are deploying an application which only needs NumPy&apos;s <code>ndarray</code> object and a couple of its manipulation routines, installing and bundling all of NumPy means including a considerable amount of <a href="https://en.wikipedia.org/wiki/Dead_code?ref=blog.stdlib.io">&quot;dead code&quot;</a>. 
In web development parlance, we&apos;d say that NumPy is not <a href="https://en.wikipedia.org/wiki/Tree_shaking?ref=blog.stdlib.io">&quot;tree shakeable&quot;</a>. For a normal NumPy installation, this implies at least 30MB of disk space, and at least <a href="https://towardsdatascience.com/how-to-shrink-numpy-scipy-pandas-and-matplotlib-for-your-data-product-4ec8d7e86ee4?ref=blog.stdlib.io">15MB of disk space</a> for a customized build which excludes all debug statements. For SciPy, those numbers can balloon to 130MB and 50MB, respectively. Needless to say, shipping a 15MB library in a web application for just a few functions is a non-starter, especially for developers needing to deploy web applications to devices with poor network connectivity or memory constraints.</p><p>Given the unique constraints of web application development, stdlib takes a bottom-up approach to its design, where every unit of functionality can be installed and consumed independently of unrelated and unused parts of the codebase. By embracing a decomposable software architecture and <a href="https://aredridel.dinhe.net/2016/06/04/radical-modularity/?ref=blog.stdlib.io">radical modularity</a>, stdlib offers users the ability to install and use exactly what they need, with little-to-no excess code beyond a desired set of APIs and their explicit dependencies, thus ensuring smaller memory footprints, bundle sizes, and faster deployment.</p><p>As an example, suppose you are working with two stacks of matrices (i.e., two-dimensional slices of three-dimensional cubes), and you want to select every other slice and perform the common BLAS operation <code>y += a * x</code>, where <code>x</code> and <code>y</code> are <a href="https://stdlib.io/docs/api/latest/@stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarrays</code></a> and <code>a</code> is a scalar constant. To do this with NumPy, you&apos;d first install all of NumPy</p><pre><code class="language-bash">pip install numpy
</code></pre><p>and then perform the various operations</p><pre><code class="language-python"># Import all of NumPy:
import numpy as np

# Define arrays:
x = np.asarray(...)
y = np.asarray(...)

# Perform operation:
y[::2,:,:] += 5.0 * x[::2,:,:]
</code></pre><p>With stdlib, in addition to having the ability to install the project as a monolithic library, you can install the various units of functionality as separate packages</p><pre><code class="language-bash">npm install @stdlib/ndarray-fancy @stdlib/blas-daxpy
</code></pre><p>and then perform the various operations</p><pre><code class="language-javascript">// Individually import desired functionality:
import FancyArray from &apos;@stdlib/ndarray-fancy&apos;;
import daxpy from &apos;@stdlib/blas-daxpy&apos;;

// Define ndarray meta data:
const shape = [4, 4, 4];
const strides = [...];
const offset = 0;

// Define arrays using a &quot;lower-level&quot; fancy array constructor:
const x = new FancyArray(&apos;float64&apos;, [...], shape, strides, offset, &apos;row-major&apos;);
const y = new FancyArray(&apos;float64&apos;, [...], shape, strides, offset, &apos;row-major&apos;);

// Perform operation:
daxpy(5.0, x[&apos;::2,:,:&apos;], y[&apos;::2,:,:&apos;]);
</code></pre><p>Importantly, not only can you independently install any one of stdlib&apos;s over <a href="https://github.com/stdlib-js?ref=blog.stdlib.io">4,000 packages</a>, but you can also fix, improve, and remix any one of those packages by forking an associated GitHub repository (e.g., see <a href="https://github.com/stdlib-js/ndarray-fancy/tree/main?ref=blog.stdlib.io"><code>@stdlib/ndarray-fancy</code></a>). By defining explicit layers of abstraction and dependency trees, stdlib offers you the freedom to choose the right layer of abstraction for your application. In some ways, it&apos;s a simple&#x2014;and, if you&apos;re accustomed to conventional scientific software library design, perhaps unorthodox&#x2014;idea, but, when tightly integrated with the web platform, it has powerful consequences and creates exciting new possibilities!</p><h2 id="what-about-webassembly">What about WebAssembly?</h2><p>Okay, so maybe your interest is piqued; stdlib seems intriguing. But what does this have to do with LAPACK in web browsers? Well, one of our goals this past summer was to apply the stdlib ethos&#x2014;small, narrowly scoped packages which do one thing and do one thing well&#x2014;in bringing LAPACK to the web.</p><p>But wait, you say! That is an extreme undertaking. LAPACK is vast, with approximately 1,700 routines, and implementing even 10% of them within a reasonable time frame is a significant challenge.
Wouldn&apos;t it be better to just compile LAPACK to <a href="https://webassembly.org/?ref=blog.stdlib.io">WebAssembly</a>, a portable compilation target for programming languages such as C, Go, and Rust, which enables deployment on the web, and call it a day?</p><p>Unfortunately, there are several issues with this approach.</p><ol><li>Compiling Fortran to WebAssembly is still an area of active development (see <a href="https://gws.phd/posts/fortran_wasm/?ref=blog.stdlib.io">1</a>, <a href="https://pyodide.org/en/0.25.0/project/roadmap.html?ref=blog.stdlib.io#find-a-better-way-to-compile-fortran">2</a>, <a href="https://github.com/scipy/scipy/issues/15290?ref=blog.stdlib.io">3</a>, <a href="https://github.com/pyodide/pyodide/issues/184?ref=blog.stdlib.io">4</a>, and <a href="https://lfortran.org/blog/2023/05/lfortran-breakthrough-now-building-legacy-and-modern-minpack/?ref=blog.stdlib.io">5</a>). At the time of this post, a common approach is to use <a href="https://netlib.org/f2c/?ref=blog.stdlib.io"><code>f2c</code></a> to compile Fortran to C and then to perform a separate compilation step to convert C to WebAssembly. However, this approach is problematic as <code>f2c</code> only fully supports Fortran 77, and the generated code requires extensive patching. Work is underway to develop an LLVM-based Fortran compiler, but gaps and complex toolchains remain.</li><li>As alluded to above in the discussion concerning monolithic libraries in web applications, the vastness of LAPACK is part of the problem. 
Even if the compilation problem is solved, including a single WebAssembly binary containing all of LAPACK in a web application needing to use only one or two LAPACK routines means considerable dead code, resulting in slower loading times and increased memory consumption.</li><li>While one could attempt to compile individual LAPACK routines to standalone WebAssembly binaries, doing so could result in binary bloat, as multiple standalone binaries may contain duplicated code from common dependencies. To mitigate binary bloat, one could attempt to perform module splitting. In this scenario, one first factors out common dependencies into a standalone binary containing shared code and then generates separate binaries for individual APIs. While suitable in some cases, this can quickly get unwieldy, as this approach requires linking individual WebAssembly modules at load-time by stitching together the exports of one or more modules with the imports of one or more other modules. Not only can this be tedious, but this approach also entails a performance penalty due to the fact that, when WebAssembly routines call imported exports, they now must cross over into JavaScript, rather than remaining within WebAssembly. Sound complex? It is!</li><li>Apart from WebAssembly modules operating exclusively on scalar input arguments (e.g., computing the sine of a single number), every WebAssembly module instance must be associated with WebAssembly memory, which is allocated in fixed increments of 64KiB (i.e., a &quot;page&quot;). And importantly, as of this blog post, WebAssembly memory can only grow and <a href="https://github.com/WebAssembly/memory-control/blob/16dd6b93ab82d0b4b252e3da5451e9b5e452ee62/proposals/memory-control/Overview.md?ref=blog.stdlib.io">never shrink</a>. As there is currently no mechanism for releasing memory to a host, a WebAssembly application&apos;s memory footprint can only increase. 
These two aspects combined increase the likelihood of allocating memory which is never used and the prevalence of memory leaks.</li><li>Lastly, while powerful, WebAssembly entails a steeper learning curve and a more complex set of often rapidly evolving toolchains. In end-user applications, interfacing between JavaScript&#x2014;a web-native dynamically-compiled programming language&#x2014;and WebAssembly further brings increased complexity, especially when having to perform manual memory management.</li></ol><p>To help illustrate the last point, let&apos;s return to the BLAS routine <code>daxpy</code>, which performs the operation <code>y = a*x + y</code> and where <code>x</code> and <code>y</code> are strided vectors and <code>a</code> a scalar constant. If implemented in C, a basic implementation might look like the following code snippet.</p><pre><code class="language-c">void c_daxpy(const int N, const double alpha, const double *X, const int strideX, double *Y, const int strideY) {
    int ix;
    int iy;
    int i;
    if (N &lt;= 0) {
        return;
    }
    if (alpha == 0.0) {
        return;
    }
    if (strideX &lt; 0) {
        ix = (1-N) * strideX;
    } else {
        ix = 0;
    }
    if (strideY &lt; 0) {
        iy = (1-N) * strideY;
    } else {
        iy = 0;
    }
    for (i = 0; i &lt; N; i++) {
        Y[iy] += alpha * X[ix];
        ix += strideX;
        iy += strideY;
    }
    return;
}
</code></pre><p>After compilation to WebAssembly and loading the WebAssembly binary into our web application, we need to perform a series of steps before we can call the <code>c_daxpy</code> routine from JavaScript. First, we need to compile the binary into a WebAssembly module.</p><pre><code class="language-javascript">const binary = new Uint8Array([...]);

const mod = new WebAssembly.Module(binary);
</code></pre><p>Next, we need to define module memory and create a new WebAssembly module instance.</p><pre><code class="language-javascript">// Initialize 10 pages of memory and allow growth to 100 pages:
const mem = new WebAssembly.Memory({
    &apos;initial&apos;: 10,  // 640KiB, where each page is 64KiB
    &apos;maximum&apos;: 100  // 6400KiB (6.25MiB)
});

// Create a new module instance:
const instance = new WebAssembly.Instance(mod, {
    &apos;env&apos;: {
        &apos;memory&apos;: mem
    }
});
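
// Note: in browsers, compilation and instantiation are commonly performed
// in a single step via `WebAssembly.instantiateStreaming` (illustrative
// alternative, not part of the original example; assumes the binary is
// served over HTTP at a hypothetical URL):
//
//     const { instance } = await WebAssembly.instantiateStreaming(
//         fetch( 'daxpy.wasm' ),
//         { 'env': { 'memory': mem } }
//     );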
</code></pre><p>After creating a module instance, we can now invoke the exported BLAS routine. However, if data is defined outside of module memory, we first need to copy that data to the memory instance and always do so in little-endian byte order.</p><pre><code class="language-javascript">// External data:
const xdata = new Float64Array([...]);
const ydata = new Float64Array([...]);

// Specify a vector length:
const N = 5;

// Specify vector strides (in units of elements):
const strideX = 2;
const strideY = 4;

// Define pointers (i.e., byte offsets) for storing two vectors:
const xptr = 0;
const yptr = N * 8; // 8 bytes per double

// Create a DataView over module memory:
const view = new DataView(mem.buffer);

// Resolve the first indexed elements in both `xdata` and `ydata`:
let offsetX = 0;
if (strideX &lt; 0) {
    offsetX = (1-N) * strideX;
}
let offsetY = 0;
if (strideY &lt; 0) {
    offsetY = (1-N) * strideY;
}

// Write data to the memory instance:
for (let i = 0; i &lt; N; i++) {
    view.setFloat64(xptr+(i*8), xdata[offsetX+(i*strideX)], true);
    view.setFloat64(yptr+(i*8), ydata[offsetY+(i*strideY)], true);
}
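
// Note: WebAssembly memory is always little-endian, so, on little-endian
// platforms (the vast majority), one could equivalently overlay typed
// array views on module memory and skip the `DataView` (illustrative
// alternative, not part of the original example):
//
//     const xview = new Float64Array( mem.buffer, xptr, N );
//     for ( let i = 0; i < N; i++ ) {
//         xview[ i ] = xdata[ offsetX + ( i*strideX ) ];
//     }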
</code></pre><p>Now that data is written to module memory, we can call the <code>c_daxpy</code> routine.</p><pre><code class="language-javascript">instance.exports.c_daxpy(N, 5.0, xptr, 1, yptr, 1);
</code></pre><p>And, finally, if we need to pass the results to a downstream library without support for WebAssembly memory &quot;pointers&quot; (i.e., byte offsets), such as D3, for visualization or further analysis, we need to copy data from module memory back to the original output array.</p><pre><code class="language-javascript">for (let i = 0; i &lt; N; i++) {
    ydata[offsetY+(i*strideY)] = view.getFloat64(yptr+(i*8), true);
}
</code></pre><p>That&apos;s a lot of work just to compute <code>y = a*x + y</code>. In contrast, compare to a plain JavaScript implementation, which might look like the following code snippet.</p><pre><code class="language-javascript">function daxpy(N, alpha, X, strideX, Y, strideY) {
    let ix;
    let iy;
    let i;
    if (N &lt;= 0) {
        return;
    }
    if (alpha === 0.0) {
        return;
    }
    if (strideX &lt; 0) {
        ix = (1-N) * strideX;
    } else {
        ix = 0;
    }
    if (strideY &lt; 0) {
        iy = (1-N) * strideY;
    } else {
        iy = 0;
    }
    for (i = 0; i &lt; N; i++) {
        Y[iy] += alpha * X[ix];
        ix += strideX;
        iy += strideY;
    }
    return;
}
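
// (Illustrative addition, not part of the original post.) Negative strides
// are supported directly: the starting index resolves to (1-N)*stride, so
// the vector is traversed in reverse. For example:
const xneg = new Float64Array( [ 1.0, 2.0, 3.0 ] );
const yneg = new Float64Array( [ 0.0, 0.0, 0.0 ] );
daxpy( 3, 2.0, xneg, -1, yneg, 1 );
// yneg => [ 6.0, 4.0, 2.0 ]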
</code></pre><p>With the JavaScript implementation, we can then directly call <code>daxpy</code> with our externally defined data, original strides and all, without the data movement required in the WebAssembly example above.</p><pre><code class="language-javascript">daxpy(N, 5.0, xdata, strideX, ydata, strideY);
</code></pre><p>At least in this case, not only is the WebAssembly approach less ergonomic, but, as might be expected given the required data movement, there&apos;s a negative performance impact, as well, as demonstrated in the following figure.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stdlib.io/content/images/2024/12/daxpy_wasm_comparison_benchmarks_small_white_bkgd.png" class="kg-image" alt="LAPACK in your web browser" loading="lazy" width="1934" height="1042" srcset="https://blog.stdlib.io/content/images/size/w600/2024/12/daxpy_wasm_comparison_benchmarks_small_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/12/daxpy_wasm_comparison_benchmarks_small_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/size/w1600/2024/12/daxpy_wasm_comparison_benchmarks_small_white_bkgd.png 1600w, https://blog.stdlib.io/content/images/2024/12/daxpy_wasm_comparison_benchmarks_small_white_bkgd.png 1934w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Figure 1: Performance comparison of stdlib&apos;s C, JavaScript, and WebAssembly (Wasm) implementations for the BLAS routine </span><i><em class="italic" style="white-space: pre-wrap;">daxpy</em></i><span style="white-space: pre-wrap;"> for increasing array lengths (x-axis). In the </span><i><em class="italic" style="white-space: pre-wrap;">Wasm (copy)</em></i><span style="white-space: pre-wrap;"> benchmark, input and output data is copied to and from Wasm memory, leading to poorer performance.</span></figcaption></figure><p>In the figure above, I&apos;m displaying a performance comparison of stdlib&apos;s C, JavaScript, and WebAssembly (Wasm) implementations for the BLAS routine <code>daxpy</code> for increasing array lengths, as enumerated along the x-axis. The y-axis shows a normalized rate relative to a baseline C implementation. 
In the <code>Wasm</code> benchmark, input and output data is allocated and manipulated directly in WebAssembly module memory, and, in the <code>Wasm (copy)</code> benchmark, input and output data is copied to and from WebAssembly module memory, as discussed above. From the chart, we may observe the following:</p><ol><li>In general, thanks to highly optimized just-in-time (JIT) compilers, JavaScript code, when carefully written, can execute only 2-to-3 times slower than native code. This result is impressive for a loosely typed, dynamically compiled programming language and, at least for <code>daxpy</code>, remains consistent across varying array lengths.</li><li>As data sizes and thus the amount of time spent in a WebAssembly module increase, WebAssembly can approach near-native (~1.5x) speed. This result aligns more generally with expected WebAssembly performance.</li><li>While WebAssembly can achieve near-native speed, data movement requirements may adversely affect performance, as observed for <code>daxpy</code>. In such cases, a well-crafted JavaScript implementation which avoids such requirements can achieve equal, if not better, performance.</li></ol><p>Overall, WebAssembly can offer performance improvements; however, the technology is not a silver bullet and needs to be used carefully in order to realize desired gains. And even when offering superior performance, such gains must be balanced against the costs of increased complexity, potentially larger bundle sizes, and more complex toolchains. For many applications, a plain JavaScript implementation will do just fine.</p><h2 id="radical-modularity">Radical modularity</h2><p>Now that I&apos;ve prosecuted the case against just compiling the entirety of LAPACK to WebAssembly and calling it a day, where does that leave us?
Well, if we&apos;re going to embrace the stdlib ethos, it leaves us in need of radical modularity.</p><p>To embrace radical modularity is to recognize that what is best is highly contextual, and, depending on the needs and constraints of user applications, developers need the flexibility to pick the right abstraction. If a developer is writing a Node.js application, that may mean binding to hardware-optimized libraries, such as OpenBLAS, Intel MKL, or Apple Accelerate, in order to achieve superior performance. If a developer is deploying a web application needing a small set of numerical routines, JavaScript is likely the right tool for the job. And if a developer is working on a large, resource-intensive WebAssembly application (e.g., for image editing or a gaming engine), then being able to easily compile individual routines as part of the larger application will be paramount. In short, we want a radically modular LAPACK.</p><p>My mission was to lay the groundwork for such an endeavor, to work out the kinks and find the gaps, and to hopefully get us a few steps closer to high-performance linear algebra on the web. But what does radical modularity look like? It all begins with the fundamental unit of functionality, the <strong>package</strong>.</p><p>Every package in stdlib is its own standalone thing, containing co-localized tests, benchmarks, examples, documentation, build files, and associated metadata (including the enumeration of any dependencies) and defining a clear API surface with the outside world. In order to add LAPACK support to stdlib, that means creating a separate standalone package for each LAPACK routine with the following structure:</p><pre><code>&#x251C;&#x2500;&#x2500; benchmark
&#x2502;   &#x251C;&#x2500;&#x2500; c
&#x2502;   &#x2502;   &#x251C;&#x2500;&#x2500; Makefile
&#x2502;   &#x2502;   &#x2514;&#x2500;&#x2500; benchmark.c
&#x2502;   &#x251C;&#x2500;&#x2500; fortran
&#x2502;   &#x2502;   &#x251C;&#x2500;&#x2500; Makefile
&#x2502;   &#x2502;   &#x2514;&#x2500;&#x2500; benchmark.f
&#x2502;   &#x2514;&#x2500;&#x2500; benchmark*.js
&#x251C;&#x2500;&#x2500; docs
&#x2502;   &#x251C;&#x2500;&#x2500; types
&#x2502;   &#x2502;   &#x251C;&#x2500;&#x2500; index.d.ts
&#x2502;   &#x2502;   &#x2514;&#x2500;&#x2500; test.ts
&#x2502;   &#x2514;&#x2500;&#x2500; repl.txt
&#x251C;&#x2500;&#x2500; examples
&#x2502;   &#x251C;&#x2500;&#x2500; c
&#x2502;   &#x2502;   &#x251C;&#x2500;&#x2500; Makefile
&#x2502;   &#x2502;   &#x2514;&#x2500;&#x2500; example.c
&#x2502;   &#x2514;&#x2500;&#x2500; index.js
&#x251C;&#x2500;&#x2500; include/*
&#x251C;&#x2500;&#x2500; lib
&#x2502;   &#x251C;&#x2500;&#x2500; index.js
&#x2502;   &#x2514;&#x2500;&#x2500; *.js
&#x251C;&#x2500;&#x2500; src
&#x2502;   &#x251C;&#x2500;&#x2500; Makefile
&#x2502;   &#x251C;&#x2500;&#x2500; addon.c
&#x2502;   &#x251C;&#x2500;&#x2500; *.c
&#x2502;   &#x2514;&#x2500;&#x2500; *.f
&#x251C;&#x2500;&#x2500; test
&#x2502;   &#x2514;&#x2500;&#x2500; test*.js
&#x251C;&#x2500;&#x2500; binding.gyp
&#x251C;&#x2500;&#x2500; include.gypi
&#x251C;&#x2500;&#x2500; manifest.json
&#x251C;&#x2500;&#x2500; package.json
&#x2514;&#x2500;&#x2500; README.md
</code></pre><p>Briefly,</p><ul><li><strong>benchmark</strong>: a folder containing micro-benchmarks to assess performance relative to a reference implementation (i.e., reference LAPACK).</li><li><strong>docs</strong>: a folder containing auxiliary documentation including REPL help text and TypeScript declarations defining typed API signatures.</li><li><strong>examples</strong>: a folder containing executable demonstration code, which, in addition to serving as documentation, helps developers sanity check implementation behavior.</li><li><strong>include</strong>: a folder containing C header files.</li><li><strong>lib</strong>: a folder containing JavaScript source implementations, with <code>index.js</code> serving as the package entry point and other <code>*.js</code> files defining internal implementation modules.</li><li><strong>src</strong>: a folder containing C and Fortran source implementations. Each modular LAPACK package should contain a slightly modified Fortran reference implementation (F77 to free-form Fortran). C files include a plain C implementation which follows the Fortran reference implementation, a wrapper for calling the Fortran reference implementation, a wrapper for calling hardware-optimized libraries (e.g., OpenBLAS) in server-side applications, and a native binding for calling into compiled C from JavaScript in Node.js or a compatible server-side JavaScript runtime.</li><li><strong>test</strong>: a folder containing unit tests for testing expected behavior in both JavaScript and native implementations. 
Tests for native implementations are written in JavaScript and leverage the native binding for interoperation between JavaScript and C/Fortran.</li><li><strong>binding.gyp/include.gypi</strong>: build files for compiling Node.js native add-ons, which provide a bridge between JavaScript and native code.</li><li><strong>manifest.json</strong>: a configuration file for stdlib&apos;s internal C and Fortran compiled source file package management.</li><li><strong>package.json</strong>: a file containing package metadata, including the enumeration of external package dependencies and a path to a plain JavaScript implementation for use in browser-based web applications.</li><li><strong>README.md</strong>: a file containing a package&apos;s primary documentation, which includes API signatures and examples for both JavaScript and C interfaces.</li></ul><p>Given stdlib&apos;s demanding documentation and testing requirements, adding support for each routine is a decent amount of work, but the end result is robust, high-quality, and, most importantly, modular code suitable for serving as the foundation for scientific computation on the modern web. But enough with the preliminaries! Let&apos;s get down to business!</p><h2 id="a-multi-phase-approach">A multi-phase approach</h2><p>Building on <a href="https://github.com/stdlib-js/stdlib/pulls?q=sort%3Aupdated-desc+label%3ABLAS&amp;ref=blog.stdlib.io">previous efforts</a> which added BLAS support to stdlib, we decided to follow a similar multi-phase approach when adding LAPACK support in which we first prioritize JavaScript implementations and their associated testing and documentation and then, once tests and documentation are present, backfill C and Fortran implementations and any associated native bindings to hardware-optimized libraries.
This approach allows us to put some early points on the board, so to speak, quickly getting APIs in front of users, establishing robust test procedures and benchmarks, and investigating potential avenues for tooling and automation before diving into the weeds of build toolchains and performance optimizations. But where to even begin?</p><p>To determine which LAPACK routines to target first, I parsed LAPACK&apos;s Fortran source code to generate a call graph. This allowed me to infer the dependency tree for each LAPACK routine. With the graph in hand, I then performed a topological sort, thus helping me identify routines without dependencies and which will inevitably be building blocks for other routines. While a depth-first approach in which I picked a particular high-level routine and worked backward would enable me to land a specific feature, such an approach might cause me to get bogged down trying to implement routines of increasing complexity. By focusing on the &quot;leaves&quot; of the graph, I could prioritize commonly used routines (i.e., routines with high <em>indegrees</em>) and thus maximize my impact by unlocking the ability to deliver multiple higher-level routines either later in my efforts or by other contributors.</p><p>With my plan in hand, I was excited to get to work. For my first routine, I chose <a href="https://www.netlib.org/lapack/explore-html/d1/d7e/group__laswp_ga5d3ea3e3cb61e32750bf062a2446aa33.html?ref=blog.stdlib.io#ga5d3ea3e3cb61e32750bf062a2446aa33"><code>dlaswp</code></a>, which performs a series of row interchanges on a general rectangular matrix according to a provided list of pivot indices and which is a key building block for LAPACK&apos;s LU decomposition routines. And that is when my challenges began...</p><h2 id="challenges">Challenges</h2><h3 id="legacy-fortran">Legacy Fortran</h3><p>Prior to my Quansight Labs internship, I was (and still am!) 
a regular contributor to <a href="https://lfortran.org/?ref=blog.stdlib.io">LFortran</a>, a modern interactive Fortran compiler built on top of LLVM, and I was feeling fairly confident in my Fortran skills. However, one of my first challenges was simply understanding what is now considered <a href="https://fortranwiki.org/fortran/show/Modernizing+Old+Fortran?ref=blog.stdlib.io">&quot;legacy&quot; Fortran code</a>. I highlight three initial hurdles below.</p><h4 id="formatting">Formatting</h4><p>LAPACK was originally written in FORTRAN 77 (F77). While the library was moved to Fortran 90 in version 3.2 (2008), legacy conventions still persist in the reference implementation. One of the most visible of those conventions is formatting.</p><p>Developers writing F77 programs did so using a fixed form layout inherited from punched cards. This layout had strict requirements concerning the use of character columns:</p><ul><li>Comments occupying an entire line must begin with a special character (e.g., <code>*</code>, <code>!</code>, or <code>C</code>) in the first column.</li><li>For non-comment lines, 1) the first five columns must be blank or contain a numeric label, 2) column six is reserved for continuation characters, 3) executable statements must begin at column seven, and 4) any code beyond column 72 was ignored.</li></ul><p>Fortran 90 introduced the free form layout which removed column and line length restrictions and settled on <code>!</code> as the comment character. The following code snippet shows the reference implementation for the LAPACK routine <a href="https://www.netlib.org/lapack/explore-html/da/dcf/dlacpy_8f_source.html?ref=blog.stdlib.io"><code>dlacpy</code></a>:</p><pre><code class="language-fortran">      SUBROUTINE dlacpy( UPLO, M, N, A, LDA, B, LDB )
*
*  -- LAPACK auxiliary routine --
*  -- LAPACK is a software package provided by Univ. of Tennessee,    --
*  -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
*
*     .. Scalar Arguments ..
      CHARACTER          UPLO
      INTEGER            LDA, LDB, M, N
*     ..
*     .. Array Arguments ..
      DOUBLE PRECISION   A( LDA, * ), B( LDB, * )
*     ..
*
*  =====================================================================
*
*     .. Local Scalars ..
      INTEGER            I, J
*     ..
*     .. External Functions ..
      LOGICAL            LSAME
      EXTERNAL           lsame
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          min
*     ..
*     .. Executable Statements ..
*
      IF( lsame( uplo, &apos;U&apos; ) ) THEN
         DO 20 j = 1, n
            DO 10 i = 1, min( j, m )
               b( i, j ) = a( i, j )
   10       CONTINUE
   20    CONTINUE
      ELSE IF( lsame( uplo, &apos;L&apos; ) ) THEN
         DO 40 j = 1, n
            DO 30 i = j, m
               b( i, j ) = a( i, j )
   30       CONTINUE
   40    CONTINUE
      ELSE
         DO 60 j = 1, n
            DO 50 i = 1, m
               b( i, j ) = a( i, j )
   50       CONTINUE
   60    CONTINUE
      END IF
      RETURN
*
*     End of DLACPY
*
      END
</code></pre><p>The next code snippet shows the same routine, but implemented using the free form layout introduced in Fortran 90.</p><pre><code class="language-fortran">subroutine dlacpy( uplo, M, N, A, LDA, B, LDB )
    implicit none
    ! ..
    ! Scalar arguments:
    character :: uplo
    integer :: LDA, LDB, M, N
    ! ..
    ! Array arguments:
    double precision :: A( LDA, * ), B( LDB, * )
    ! ..
    ! Local scalars:
    integer :: i, j
    ! ..
    ! External functions:
    logical LSAME
    external lsame
    ! ..
    ! Intrinsic functions:
    intrinsic min
    ! ..
    if ( lsame( uplo, &apos;U&apos; ) ) then
        do j = 1, n
            do i = 1, min( j, m )
               b( i, j ) = a( i, j )
            end do
        end do
    else if( lsame( uplo, &apos;L&apos; ) ) then
        do j = 1, n
            do i = j, m
               b( i, j ) = a( i, j )
            end do
        end do
    else
        do j = 1, n
            do i = 1, m
               b( i, j ) = a( i, j )
            end do
        end do
    end if
    return
end subroutine dlacpy
</code></pre><p>As may be observed, by removing column restrictions and moving away from the F77 convention of writing specifiers in ALL CAPS, modern Fortran code is more visibly consistent and thus more readable.</p><h4 id="labeled-control-structures">Labeled control structures</h4><p>Another common practice in LAPACK routines is the use of labeled control structures. For example, consider the following code snippet in which the label <code>10</code> must match a corresponding <code>CONTINUE</code>.</p><pre><code class="language-fortran">      DO 10 I = 1, 10
          PRINT *, I
   10 CONTINUE
</code></pre><p>Fortran 90 obviated the need for this practice and improved code readability by allowing one to use <code>end do</code> to end a <code>do</code> loop. This change is shown in the free form version of <code>dlacpy</code> provided above.</p><h4 id="assumed-size-arrays">Assumed-size arrays</h4><p>To allow flexibility in handling arrays of varying sizes, LAPACK routines commonly operate on arrays having an assumed size. In the <code>dlacpy</code> routine above, the input matrix <code>A</code> is declared to be a two-dimensional array having an assumed size according to the expression <code>A(LDA, *)</code>. This expression declares that <code>A</code> has <code>LDA</code> number of rows and uses <code>*</code> as a placeholder to indicate that the size of the second dimension is determined by the calling program.</p><p>One consequence of using assumed-size arrays is that compilers are unable to perform bounds checking on the unspecified dimension. Thus, <a href="https://fortran-lang.discourse.group/t/matrix-index-pointer-confusion/8453/5?ref=blog.stdlib.io">current best practice</a> is to use explicit interfaces and assumed-shape arrays (e.g., <code>A(:,:)</code>) in order to prevent out-of-bounds memory access. That said, the use of assumed-shape arrays can be problematic when needing to pass sub-matrices to other functions, as doing so requires slicing which often results in compilers creating internal copies of array data.</p><h4 id="migrating-to-fortran-95">Migrating to Fortran 95</h4><p>Needless to say, it took me a while to adjust to LAPACK conventions and adopt a LAPACK mindset. However, being something of a purist, if I was going to be porting over routines anyway, I at least wanted to bring those routines I did manage to port into a more modern age in hopes of improving code readability and future maintenance.
So, after discussing things with stdlib maintainers, I settled on migrating routines to Fortran 95, which, while not the latest and greatest Fortran version, seemed to strike the right balance between maintaining the look-and-feel of the original implementations, ensuring (good enough) backward compatibility, and taking advantage of newer syntactical features.</p><h3 id="test-coverage">Test Coverage</h3><p>One of the problems with pursuing a bottom-up approach to adding LAPACK support is that explicit unit tests for lower-level utility routines are often non-existent in LAPACK. LAPACK&apos;s test suite largely employs a hierarchical testing philosophy in which testing higher-level routines is assumed to ensure that their dependent lower-level routines are functioning correctly as part of an overall workflow. One can argue that favoring integration testing over unit testing for lower-level routines is reasonable, as adding tests for every routine could increase the maintenance burden and complexity of LAPACK&apos;s testing framework. Nevertheless, it meant that we couldn&apos;t readily rely on prior art for unit testing and would have to come up with comprehensive standalone unit tests for each lower-level routine on our own.</p><h3 id="documentation">Documentation</h3><p>In a similar vein to test coverage, finding real-world documented examples showcasing the use of lower-level routines outside of LAPACK itself was challenging. While LAPACK routines are consistently preceded by a documentation comment describing input arguments and possible return values, without code examples, visualizing and grokking expected input and output values can be difficult, especially when dealing with specialized matrices. And while neither the absence of unit tests nor the lack of documented examples is the end of the world, it meant that adding LAPACK support to stdlib would be more of a slog than I expected. 
Writing benchmarks, tests, examples, and documentation was simply going to require more time and effort, potentially limiting the number of routines I could implement during the internship.</p><h3 id="memory-layouts">Memory layouts</h3><p>When storing matrix elements in linear memory, one has two choices: either store columns contiguously or rows contiguously (see Figure 2). The former memory layout is referred to as <strong>column-major</strong> order and the latter as <strong>row-major</strong> order.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stdlib.io/content/images/2024/12/row_vs_column_major_white_bkgd.png" class="kg-image" alt="LAPACK in your web browser" loading="lazy" width="1234" height="650" srcset="https://blog.stdlib.io/content/images/size/w600/2024/12/row_vs_column_major_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/12/row_vs_column_major_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/2024/12/row_vs_column_major_white_bkgd.png 1234w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Figure 2: Schematic demonstrating storing matrix elements in linear memory in either (a) column-major (Fortran-style) or (b) row-major (C-style) order. The choice of which layout to use is largely a matter of convention.</span></figcaption></figure><p>The choice of which layout to use is largely a matter of convention. For example, Fortran stores elements in column-major order, and C stores elements in row-major order. Higher-level libraries, such as NumPy and stdlib, support both column- and row-major orders, allowing you to configure the layout of a multi-dimensional array during array creation.</p><pre><code class="language-javascript">import asarray from &apos;@stdlib/ndarray-array&apos;;

// Create a row-major array:
const x = asarray([1.0, 2.0, 3.0, 4.0], {
    &apos;shape&apos;: [2, 2],
    &apos;order&apos;: &apos;row-major&apos;
});

// Create a column-major array:
const y = asarray([1.0, 3.0, 2.0, 4.0], {
    &apos;shape&apos;: [2, 2],
    &apos;order&apos;: &apos;column-major&apos;
});
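
// Both arrays represent the same logical 2x2 matrix; only the arrangement
// of elements in linear memory differs:
const v1 = x.get(0, 1);
// returns 2.0

const v2 = y.get(0, 1);
// returns 2.0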
</code></pre><p>While neither memory layout is inherently better than the other, arranging data to ensure sequential access in accordance with the conventions of the underlying storage model is critical for optimal performance. Modern CPUs process sequential data more efficiently than non-sequential data, primarily because CPU caches exploit spatial locality of reference.</p><p>To demonstrate the performance impact of sequential vs. non-sequential element access, consider the following function, which copies all elements from an <code>MxN</code> matrix <code>A</code> to another <code>MxN</code> matrix <code>B</code>, assuming that matrix elements are stored in column-major order.</p><pre><code class="language-javascript">/**
* Copies elements from `A` to `B`.
*
* @param {integer} M - number of rows
* @param {integer} N - number of columns
* @param {Array} A - source matrix
* @param {integer} strideA1 - index increment to move to the next element in a column
* @param {integer} strideA2 - index increment to move to the next element in a row
* @param {integer} offsetA - index of the first indexed element in `A`
 * @param {Array} B - destination matrix
* @param {integer} strideB1 - index increment to move to the next element in a column
* @param {integer} strideB2 - index increment to move to the next element in a row
* @param {integer} offsetB - index of the first indexed element in `B`
*/
function copy(M, N, A, strideA1, strideA2, offsetA, B, strideB1, strideB2, offsetB) {
    // Initialize loop bounds:
    const S0 = M;
    const S1 = N;

    // For column-major matrices, the first dimension has the fastest changing index.
    // Compute &quot;pointer&quot; increments accordingly:
    const da0 = strideA1;                  // pointer increment for innermost loop
    const da1 = strideA2 - (S0*strideA1);  // pointer increment for outermost loop
    const db0 = strideB1;
    const db1 = strideB2 - (S0*strideB1);

    // Initialize &quot;pointers&quot; to the first indexed elements in the respective arrays:
    let ia = offsetA;
    let ib = offsetB;

    // Iterate over matrix dimensions:
    for (let i1 = 0; i1 &lt; S1; i1++) {
        for (let i0 = 0; i0 &lt; S0; i0++) {
            B[ib] = A[ia];
            ia += da0;
            ib += db0;
        }
        ia += da1;
        ib += db1;
    }
}
</code></pre><p>Let <code>A</code> and <code>B</code> be the following <code>3x2</code> matrices:</p><p>$$A = \begin{bmatrix}1 &amp; 2 \\3 &amp; 4 \\5 &amp; 6\end{bmatrix},\ B = \begin{bmatrix}0 &amp; 0 \\0 &amp; 0 \\0 &amp; 0\end{bmatrix}$$</p><p>When both <code>A</code> and <code>B</code> are stored in column-major order, we can call the <code>copy</code> routine as follows:</p><pre><code class="language-javascript">const A = [1, 3, 5, 2, 4, 6];
const B = [0, 0, 0, 0, 0, 0];

copy(3, 2, A, 1, 3, 0, B, 1, 3, 0);
</code></pre><p>If, however, <code>A</code> and <code>B</code> are both stored in row-major order, the call signature changes to</p><pre><code class="language-javascript">const A = [1, 2, 3, 4, 5, 6];
const B = [0, 0, 0, 0, 0, 0];

copy(3, 2, A, 2, 1, 0, B, 2, 1, 0);
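
// Sequential access can be restored via loop interchange: swap the roles
// of the dimensions and the per-dimension strides so that the innermost
// loop once again walks contiguous memory (a stride trick revisited below):
copy(2, 3, A, 1, 2, 0, B, 1, 2, 0);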
</code></pre><p>Notice that, in the latter scenario, we fail to access elements in sequential order within the innermost loop, as <code>da0</code> is <code>2</code> and <code>da1</code> is <code>-5</code> and similarly for <code>db0</code> and <code>db1</code>. Instead, the array index &quot;pointers&quot; repeatedly skip ahead before returning to earlier elements in linear memory, with <code>ia = {0, 2, 4, 1, 3, 5}</code> and <code>ib</code> the same. In Figure 3, we show the performance impact of non-sequential access.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stdlib.io/content/images/2024/12/dlacpy_row_vs_column_major_comparison_benchmarks_small_white_bkgd.png" class="kg-image" alt="LAPACK in your web browser" loading="lazy" width="1934" height="1042" srcset="https://blog.stdlib.io/content/images/size/w600/2024/12/dlacpy_row_vs_column_major_comparison_benchmarks_small_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/12/dlacpy_row_vs_column_major_comparison_benchmarks_small_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/size/w1600/2024/12/dlacpy_row_vs_column_major_comparison_benchmarks_small_white_bkgd.png 1600w, https://blog.stdlib.io/content/images/2024/12/dlacpy_row_vs_column_major_comparison_benchmarks_small_white_bkgd.png 1934w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Figure 3: Performance comparison when providing square column-major versus row-major matrices to </span><i><em class="italic" style="white-space: pre-wrap;">copy</em></i><span style="white-space: pre-wrap;"> when </span><i><em class="italic" style="white-space: pre-wrap;">copy</em></i><span style="white-space: pre-wrap;"> assumes sequential element access according to column-major order. The x-axis enumerates increasing matrix sizes (i.e., number of elements). 
All rates are normalized relative to column-major results for a corresponding matrix size.</span></figcaption></figure><p>From the figure, we may observe that column- and row-major performance is roughly equivalent until we operate on square matrices having more than 1e5 elements (<code>M = N = ~316</code>). For 1e6 elements (<code>M = N = ~1000</code>), providing a row-major matrix to <code>copy</code> results in a greater than 25% performance decrease. For 1e7 elements (<code>M = N = ~3160</code>), we observe a greater than 85% performance decrease. The significant performance impact may be attributed to decreased locality of reference when operating on row-major matrices having large row sizes.</p><p>Given that it is written in Fortran, LAPACK assumes column-major access order and implements its algorithms accordingly. This presents issues for libraries, such as stdlib, which not only support row-major order, but make it their default memory layout. Were we to simply port LAPACK&apos;s Fortran implementations to JavaScript, users providing row-major matrices would experience adverse performance impacts stemming from non-sequential access.</p><p>To mitigate these impacts, we borrowed an idea from <a href="https://github.com/flame/blis?ref=blog.stdlib.io">BLIS</a>, a BLAS-like library supporting both row- and column-major memory layouts in BLAS routines, and decided, when porting routines from Fortran to JavaScript and C, to create modified LAPACK implementations that explicitly accommodate both column- and row-major memory layouts through separate stride parameters for each dimension. 
For some implementations, such as <code>dlacpy</code>, which is similar to the <code>copy</code> function defined above, incorporating separate and independent strides is straightforward, often involving stride tricks and loop interchange, but, for others, the modifications turned out to be much less straightforward due to specialized matrix handling, varying access patterns, and combinatorial parameterization.</p><h3 id="ndarrays">ndarrays</h3><p>LAPACK routines primarily operate on matrices stored in linear memory and whose elements are accessed according to specified dimensions and the stride of the leading (i.e., first) dimension. Dimensions specify the number of elements in each row and column, respectively. The stride specifies how many elements in linear memory must be skipped in order to access the next element of a row. LAPACK assumes that elements belonging to the same column are always contiguous (i.e., adjacent in linear memory). Figure 4 provides a visual representation of LAPACK conventions (specifically, schematics (a) and (b)).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stdlib.io/content/images/2024/12/lapack_vs_ndarray_conventions_white_bkgd.png" class="kg-image" alt="LAPACK in your web browser" loading="lazy" width="1500" height="484" srcset="https://blog.stdlib.io/content/images/size/w600/2024/12/lapack_vs_ndarray_conventions_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/12/lapack_vs_ndarray_conventions_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/2024/12/lapack_vs_ndarray_conventions_white_bkgd.png 1500w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Figure 4: Schematics illustrating the generalization of LAPACK strided array conventions to non-contiguous strided arrays. a) A 5-by-5 contiguous matrix stored in column-major order. b) A 3-by-3 non-contiguous sub-matrix stored in column-major order. 
Sub-matrices can be operated on in LAPACK by providing a pointer to the first indexed element and specifying the stride of the leading (i.e., first) dimension. In this case, the stride of leading dimension is five, even though there are only three elements per column, due to the non-contiguity of sub-matrix elements in linear memory when stored as part of a larger matrix. In LAPACK, the stride of the trailing (i.e., second) dimension is always assumed to be unity. c) A 3-by-3 non-contiguous sub-matrix stored in column-major order having non-unit strides and generalizing LAPACK stride conventions to both leading and trailing dimensions. This generalization underpins stdlib&apos;s multi-dimensional arrays (also referred to as &quot;ndarrays&quot;).</span></figcaption></figure><p>Libraries, such as NumPy and stdlib, generalize LAPACK&apos;s strided array conventions to support</p><ol><li>non-unit strides in the last dimension (see Figure 4 (c)). LAPACK assumes that the last dimension of a matrix always has unit stride (i.e., elements within a column are stored contiguously in linear memory).</li><li>negative strides for any dimension. LAPACK requires that the stride of a leading matrix dimension be positive.</li><li>multi-dimensional arrays having more than two dimensions. LAPACK only explicitly supports strided vectors and (sub)matrices.</li></ol><p>Support for non-unit strides in the last dimension ensures support for O(1) creation of non-contiguous views of linear memory without requiring explicit data movement. These views are often called &quot;slices&quot;. As an example, consider the following code snippet which creates such views using APIs provided by stdlib.</p><pre><code class="language-javascript">import linspace from &apos;@stdlib/array-linspace&apos;
import FancyArray from &apos;@stdlib/ndarray-fancy&apos;;

// Define a two-dimensional array similar to that shown in Figure 4 (a):
const x = new FancyArray(&apos;float64&apos;, linspace(0, 24, 25), [5, 5], [5, 1], 0, &apos;row-major&apos;);
// returns &lt;FancyArray&gt;

// Create a sub-matrix view similar to that shown in Figure 4 (b):
const v1 = x[&apos;1:4,:3&apos;];
// returns &lt;FancyArray&gt;

// Create a sub-matrix view similar to that shown in Figure 4 (c):
const v2 = x[&apos;::2,::2&apos;];
// returns &lt;FancyArray&gt;

// Assert that all arrays share the same underlying memory buffer:
const b1 = (v1.data.buffer === x.data.buffer);
// returns true

const b2 = (v2.data.buffer === x.data.buffer);
// returns true
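
// The view spans non-contiguous memory and has non-unit strides in both
// dimensions:
const s2 = v2.strides;
// returns [ 10, 2 ]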
</code></pre><p>Without support for non-unit strides in the last dimension, returning a view from the expression <code>x[&apos;::2,::2&apos;]</code> would not be possible, as one would need to copy selected elements to a new linear memory buffer in order to ensure contiguity.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.stdlib.io/content/images/2024/12/flip_and_rotate_stride_tricks_white_bkgd.png" class="kg-image" alt="LAPACK in your web browser" loading="lazy" width="717" height="1300" srcset="https://blog.stdlib.io/content/images/size/w600/2024/12/flip_and_rotate_stride_tricks_white_bkgd.png 600w, https://blog.stdlib.io/content/images/2024/12/flip_and_rotate_stride_tricks_white_bkgd.png 717w"><figcaption><span style="white-space: pre-wrap;">Figure 5: Schematics illustrating the use of stride manipulation to create flipped and rotated views of matrix elements stored in linear memory. For all sub-schematics, strides are listed as </span><code spellcheck="false" style="white-space: pre-wrap;"><span>[trailing_dimension, leading_dimension]</span></code><span style="white-space: pre-wrap;">. Implicit for each schematic is an &quot;offset&quot;, which indicates the index of the first indexed element such that, for a matrix </span><i><em class="italic" style="white-space: pre-wrap;">A</em></i><span style="white-space: pre-wrap;">, the element </span><i><em class="italic" style="white-space: pre-wrap;">A</em></i><i><sub style="white-space: pre-wrap;"><em class="italic">ij</em></sub></i><span style="white-space: pre-wrap;"> is resolved according to </span><code spellcheck="false" style="white-space: pre-wrap;"><span>i&#x22C5;strides[1] + j&#x22C5;strides[0] + offset</span></code><span style="white-space: pre-wrap;">. a) Given a 3-by-3 matrix stored in column-major order, one can manipulate the strides of the leading and trailing dimensions to create views in which matrix elements along one or more axes are accessed in reverse order. 
b) Using similar stride manipulation, one can create rotated views of matrix elements relative to their arrangement within linear memory.</span></figcaption></figure><p>Support for negative strides enables O(1) reversal and rotation of elements along one or more dimensions (see Figure 5). For example, to flip a matrix top-to-bottom and left-to-right, one need only negate the strides. Building on the previous code snippet, the following code snippet demonstrates reversing elements about one or more axes.</p><pre><code class="language-javascript">import linspace from &apos;@stdlib/array-linspace&apos;
import FancyArray from &apos;@stdlib/ndarray-fancy&apos;;

// Define a two-dimensional array similar to that shown in Figure 5 (a):
const x = new FancyArray(&apos;float64&apos;, linspace(0, 8, 9), [3, 3], [3, 1], 0, &apos;row-major&apos;);

// Reverse elements along each row:
const v1 = x[&apos;::-1,:&apos;];

// Reverse elements along each column:
const v2 = x[&apos;:,::-1&apos;];

// Reverse elements along both columns and rows:
const v3 = x[&apos;::-1,::-1&apos;];

// Assert that all arrays share the same underlying memory buffer:
const b1 = (v1.data.buffer === x.data.buffer);
// returns true

const b2 = (v2.data.buffer === x.data.buffer);
// returns true

const b3 = (v3.data.buffer === x.data.buffer);
// returns true
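
// Reversal is achieved purely through stride negation and an adjusted
// index offset; no element data is moved or copied:
const s3 = v3.strides;
// returns [ -3, -1 ]

const o3 = v3.offset;
// returns 8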
</code></pre><p>Implicit in the discussion of negative strides is the need for an &quot;offset&quot; parameter which indicates the index of the first indexed element in linear memory. For a strided multi-dimensional array <em>A</em> and a list of strides <em>s</em>, the index corresponding to element <em>A<sub>ij&#x22C5;&#x22C5;&#x22C5;n</sub></em> can be resolved according to the equation</p><p>$$\textrm{idx} = \textrm{offset} + i \cdot s_0 + j \cdot s_1 + \ldots + n \cdot s_{N-1}$$</p><p>where <em>N</em> is the number of array dimensions and <em>s<sub>k</sub></em> corresponds to the <em>k</em>th stride.</p><p>In BLAS and LAPACK routines supporting negative strides&#x2014;support which is limited to strided vectors (e.g., see <code>daxpy</code> above)&#x2014;the index offset is computed using logic similar to the following code snippet:</p><pre><code class="language-c">if (stride &lt; 0) {
    offset = (1-M) * stride;
} else {
    offset = 0;
}
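
/* For example, for a vector having M = 5 elements and stride = -2,
   offset = (1-5) * -2 = 8, so iteration begins at the last indexed
   element and walks backward through linear memory, visiting indices
   8, 6, 4, 2, and 0. */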
</code></pre><p>where <code>M</code> is the number of vector elements. This implicitly assumes that a provided data pointer points to the beginning of linear memory for a vector. In languages supporting pointers, such as C, in order to operate on a different region of linear memory, one typically adjusts a pointer using pointer arithmetic prior to function invocation, which is relatively cheap and straightforward, at least for the one-dimensional case.</p><p>For example, returning to <code>c_daxpy</code> as defined above, we can use pointer arithmetic to limit element access to five elements within linear memory beginning at the eleventh and sixteenth elements (note: zero-based indexing) of an input and output array, respectively, as shown in the following code snippet.</p><pre><code class="language-c">// Define data arrays:
const double X[] = {...};
double Y[] = {...};

// Specify the indices of the elements which begin a desired memory region:
const int xoffset = 10;
const int yoffset = 15;

// Limit the operation to only elements within the desired memory region:
c_daxpy(5, 5.0, X+xoffset, 1, Y+yoffset, 1);
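
/* Equivalent to computing Y[i+15] += 5.0 * X[i+10] for i = 0..4;
   no other elements are accessed. */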
</code></pre><p>However, in JavaScript, which does not support explicit pointer arithmetic for binary buffers, one must <a href="https://github.com/stdlib-js/stdlib/tree/1c56b737ec018cc818cebf19e5c7947fa684e126/lib/node_modules/%40stdlib/strided/base/offset-view?ref=blog.stdlib.io">explicitly instantiate</a> new typed array objects having a desired <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray?ref=blog.stdlib.io#parameters">byte offset</a>. In the following code snippet, in order to achieve the same results as the C example above, we must resolve a typed array constructor, compute a new byte offset, compute a new typed array length, and create a new typed array instance.</p><pre><code class="language-javascript">/**
* Returns a typed array view having the same data type as a provided input typed
* array and starting at a specified index offset.
*
* @param {TypedArray} x - input array
* @param {integer} offset - starting index
* @returns {TypedArray} typed array view
*/
function offsetView(x, offset) {
    return new x.constructor(x.buffer, x.byteOffset+(x.BYTES_PER_ELEMENT*offset), x.length-offset);
}

// ...

const x = new Float64Array([...]);
const y = new Float64Array([...]);

// ...

daxpy(5, 5.0, offsetView(x, 10), 1, offsetView(y, 15), 1);
</code></pre><p>For large array sizes, the cost of typed array instantiation is negligible compared to the time spent accessing and operating on individual array elements; however, for smaller array sizes, object instantiation can significantly impact performance.</p><p>Accordingly, in order to avoid adverse object instantiation performance impacts, stdlib decouples an ndarray&apos;s data buffer from the location of the buffer element corresponding to the beginning of an <a href="https://github.com/stdlib-js/stdlib/tree/1c56b737ec018cc818cebf19e5c7947fa684e126/lib/node_modules/%40stdlib/ndarray/base/min-view-buffer-index?ref=blog.stdlib.io">ndarray view</a>. This allows the slice expressions <code>x[2:,3:]</code> and <code>x[3:,1:]</code> to return new ndarray views <strong>without</strong> needing to instantiate new buffer instances, as demonstrated in the following code snippet.</p><pre><code class="language-javascript">import linspace from &apos;@stdlib/array-linspace&apos;
import FancyArray from &apos;@stdlib/ndarray-fancy&apos;;

const x = new FancyArray(&apos;float64&apos;, linspace(0, 24, 25), [5, 5], [5, 1], 0, &apos;row-major&apos;);

const v1 = x[&apos;2:,3:&apos;];
const v2 = x[&apos;3:,1:&apos;];

// Assert that all arrays share the same typed array data instance:
const b1 = (v1.data === x.data);
// returns true

const b2 = (v2.data === x.data);
// returns true
</code></pre><p>As a consequence of decoupling a data buffer from the beginning of an ndarray view, we similarly sought to avoid having to instantiate new typed array instances when calling into LAPACK routines with ndarray data. This meant creating modified LAPACK API signatures supporting explicit offset parameters for all strided vectors and matrices.</p><p>For simplicity, let&apos;s return to the JavaScript implementation of <code>daxpy</code>, which was defined above.</p><pre><code class="language-javascript">function daxpy(N, alpha, X, strideX, Y, strideY) {
    let ix;
    let iy;
    let i;
    if (N &lt;= 0) {
        return;
    }
    if (alpha === 0.0) {
        return;
    }
    if (strideX &lt; 0) {
        ix = (1-N) * strideX;
    } else {
        ix = 0;
    }
    if (strideY &lt; 0) {
        iy = (1-N) * strideY;
    } else {
        iy = 0;
    }
    for (i = 0; i &lt; N; i++) {
        Y[iy] += alpha * X[ix];
        ix += strideX;
        iy += strideY;
    }
    return;
}
</code></pre><p>As demonstrated in the following code snippet, we can modify the above signature and implementation such that the responsibility for resolving the first indexed element is shifted to the API consumer.</p><pre><code class="language-javascript">function daxpy_ndarray(N, alpha, X, strideX, offsetX, Y, strideY, offsetY) {
    let ix;
    let iy;
    let i;
    if (N &lt;= 0) {
        return;
    }
    if (alpha === 0.0) {
        return;
    }
    ix = offsetX;
    iy = offsetY;
    for (i = 0; i &lt; N; i++) {
        Y[iy] += alpha * X[ix];
        ix += strideX;
        iy += strideY;
    }
    return;
}
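
// A sketch (not the actual stdlib wrapper) showing how the conventional
// interface can be recovered by resolving index offsets on behalf of the
// caller and delegating to the generalized implementation:
function daxpy(N, alpha, X, strideX, Y, strideY) {
    let ox;
    let oy;
    if (strideX &lt; 0) {
        ox = (1-N) * strideX;
    } else {
        ox = 0;
    }
    if (strideY &lt; 0) {
        oy = (1-N) * strideY;
    } else {
        oy = 0;
    }
    return daxpy_ndarray(N, alpha, X, strideX, ox, Y, strideY, oy);
}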
</code></pre><p>For ndarrays, offset resolution happens during ndarray instantiation, so invoking <code>daxpy_ndarray</code> with ndarray data is simply a matter of passing along the associated ndarray metadata. This is demonstrated in the following code snippet.</p><pre><code class="language-javascript">import linspace from &apos;@stdlib/array-linspace&apos;;
import FancyArray from &apos;@stdlib/ndarray-fancy&apos;;

// Create two ndarrays:
const x = new FancyArray(&apos;float64&apos;, linspace(0, 24, 25), [5, 5], [5, 1], 0, &apos;row-major&apos;);
const y = new FancyArray(&apos;float64&apos;, linspace(0, 24, 25), [5, 5], [5, 1], 0, &apos;row-major&apos;);

// Create a view of `x` corresponding to every other element in the 3rd row:
const v1 = x[&apos;2,1::2&apos;];

// Create a view of `y` corresponding to every other element in the 3rd column:
const v2 = y[&apos;1::2,2&apos;];

// Operate on the vectors:
daxpy_ndarray(v1.length, 5.0, v1.data, v1.strides[0], v1.offset, v2.data, v2.strides[0], v2.offset);
</code></pre><p>Similar to BLIS, we saw value in both conventional LAPACK API signatures (e.g., for backward compatibility) and modified API signatures (e.g., for minimizing adverse performance impacts), and thus, we settled on a plan to provide both conventional and modified APIs for each LAPACK routine. To minimize code duplication, we aimed to implement a common lower-level &quot;base&quot; implementation which could then be wrapped by higher-level APIs. While the changes for the BLAS routine <code>daxpy</code> shown above may appear relatively straightforward, the transformation of a conventional LAPACK routine and its expected behavior to a generalized implementation was often much less so.</p><h2 id="dlaswp">dlaswp</h2><p>Enough with the challenges! What does a final product look like?!</p><p>Let&apos;s come full circle and bring this back to <code>dlaswp</code>, a LAPACK routine for performing a series of row interchanges on an input matrix according to a list of pivot indices. The following code snippet shows the reference LAPACK <a href="https://www.netlib.org/lapack/explore-html/d7/d6b/dlaswp_8f_source.html?ref=blog.stdlib.io">Fortran implementation</a>.</p><pre><code class="language-fortran">SUBROUTINE dlaswp( N, A, LDA, K1, K2, IPIV, INCX )
*
*  -- LAPACK auxiliary routine --
*  -- LAPACK is a software package provided by Univ. of Tennessee,    --
*  -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
*
*     .. Scalar Arguments ..
      INTEGER            INCX, K1, K2, LDA, N
*     ..
*     .. Array Arguments ..
      INTEGER            IPIV( * )
      DOUBLE PRECISION   A( LDA, * )
*     ..
*
* =====================================================================
*
*     .. Local Scalars ..
      INTEGER            I, I1, I2, INC, IP, IX, IX0, J, K, N32
      DOUBLE PRECISION   TEMP
*     ..
*     .. Executable Statements ..
*
*     Interchange row I with row IPIV(K1+(I-K1)*abs(INCX)) for each of rows
*     K1 through K2.
*
      IF( incx.GT.0 ) THEN
         ix0 = k1
         i1 = k1
         i2 = k2
         inc = 1
      ELSE IF( incx.LT.0 ) THEN
         ix0 = k1 + ( k1-k2 )*incx
         i1 = k2
         i2 = k1
         inc = -1
      ELSE
         RETURN
      END IF
*
      n32 = ( n / 32 )*32
      IF( n32.NE.0 ) THEN
         DO 30 j = 1, n32, 32
            ix = ix0
            DO 20 i = i1, i2, inc
               ip = ipiv( ix )
               IF( ip.NE.i ) THEN
                  DO 10 k = j, j + 31
                     temp = a( i, k )
                     a( i, k ) = a( ip, k )
                     a( ip, k ) = temp
   10             CONTINUE
               END IF
               ix = ix + incx
   20       CONTINUE
   30    CONTINUE
      END IF
      IF( n32.NE.n ) THEN
         n32 = n32 + 1
         ix = ix0
         DO 50 i = i1, i2, inc
            ip = ipiv( ix )
            IF( ip.NE.i ) THEN
               DO 40 k = n32, n
                  temp = a( i, k )
                  a( i, k ) = a( ip, k )
                  a( ip, k ) = temp
   40          CONTINUE
            END IF
            ix = ix + incx
   50    CONTINUE
      END IF
*
      RETURN
*
*     End of DLASWP
*
      END
</code></pre><p>To facilitate interfacing with the Fortran implementation from C, LAPACK provides a two-level C interface called <a href="https://netlib.org/lapack/lapacke.html?ref=blog.stdlib.io">LAPACKE</a>, which wraps Fortran implementations and makes accommodations for both row- and column-major input and output matrices. The middle-level interface for <code>dlaswp</code> is shown in the following code snippet.</p><pre><code class="language-c">lapack_int LAPACKE_dlaswp_work( int matrix_layout, lapack_int n, double* a,
                                lapack_int lda, lapack_int k1, lapack_int k2,
                                const lapack_int* ipiv, lapack_int incx )
{
    lapack_int info = 0;
    if( matrix_layout == LAPACK_COL_MAJOR ) {
        /* Call LAPACK function and adjust info */
        LAPACK_dlaswp( &amp;n, a, &amp;lda, &amp;k1, &amp;k2, ipiv, &amp;incx );
        if( info &lt; 0 ) {
            info = info - 1;
        }
    } else if( matrix_layout == LAPACK_ROW_MAJOR ) {
        lapack_int lda_t = MAX(1,k2);
        lapack_int i;
        for( i = k1; i &lt;= k2; i++ ) {
            lda_t = MAX( lda_t, ipiv[k1 + ( i - k1 ) * ABS( incx ) - 1] );
        }
        double* a_t = NULL;
        /* Check leading dimension(s) */
        if( lda &lt; n ) {
            info = -4;
            LAPACKE_xerbla( &quot;LAPACKE_dlaswp_work&quot;, info );
            return info;
        }
        /* Allocate memory for temporary array(s) */
        a_t = (double*)LAPACKE_malloc( sizeof(double) * lda_t * MAX(1,n) );
        if( a_t == NULL ) {
            info = LAPACK_TRANSPOSE_MEMORY_ERROR;
            goto exit_level_0;
        }
        /* Transpose input matrices */
        LAPACKE_dge_trans( matrix_layout, lda_t, n, a, lda, a_t, lda_t );
        /* Call LAPACK function and adjust info */
        LAPACK_dlaswp( &amp;n, a_t, &amp;lda_t, &amp;k1, &amp;k2, ipiv, &amp;incx );
        info = 0;  /* LAPACK call is ok! */
        /* Transpose output matrices */
        LAPACKE_dge_trans( LAPACK_COL_MAJOR, lda_t, n, a_t, lda_t, a, lda );
        /* Release memory and exit */
        LAPACKE_free( a_t );
exit_level_0:
        if( info == LAPACK_TRANSPOSE_MEMORY_ERROR ) {
            LAPACKE_xerbla( &quot;LAPACKE_dlaswp_work&quot;, info );
        }
    } else {
        info = -1;
        LAPACKE_xerbla( &quot;LAPACKE_dlaswp_work&quot;, info );
    }
    return info;
}
</code></pre><p>When called with a column-major matrix <code>a</code>, the wrapper <code>LAPACKE_dlaswp_work</code> simply passes along provided arguments to the Fortran implementation. However, when called with a row-major matrix <code>a</code>, the wrapper must allocate memory, explicitly transpose and copy <code>a</code> to a temporary matrix <code>a_t</code>, recompute the stride of the leading dimension, invoke <code>dlaswp</code> with <code>a_t</code>, transpose and copy the results stored in <code>a_t</code> to <code>a</code>, and finally free allocated memory. That is a fair amount of work and is common across most LAPACK routines.</p><p>The following code snippet shows the reference LAPACK implementation <a href="https://github.com/stdlib-js/stdlib/blob/1c56b737ec018cc818cebf19e5c7947fa684e126/lib/node_modules/%40stdlib/lapack/base/dlaswp/lib/base.js?ref=blog.stdlib.io">ported</a> to JavaScript, with support for leading and trailing dimension strides, index offsets, and a strided vector containing pivot indices.</p><pre><code class="language-javascript">// File: base.js

// ...

const BLOCK_SIZE = 32;

// ...

function base(N, A, strideA1, strideA2, offsetA, k1, k2, inck, IPIV, strideIPIV, offsetIPIV) {
    let nrows;
    let n32;
    let tmp;
    let row;
    let ia1;
    let ia2;
    let ip;
    let o;

    // Compute the number of rows to be interchanged:
    if (inck &gt; 0) {
        nrows = k2 - k1;
    } else {
        nrows = k1 - k2;
    }
    nrows += 1;

    // If the order is row-major, we can delegate to the Level 1 routine `dswap` for interchanging rows...
    if (isRowMajor([strideA1, strideA2])) {
        ip = offsetIPIV;
        for (let i = 0, k = k1; i &lt; nrows; i++, k += inck) {
            row = IPIV[ip];
            if (row !== k) {
                dswap(N, A, strideA2, offsetA+(k*strideA1), A, strideA2, offsetA+(row*strideA1));
            }
            ip += strideIPIV;
        }
        return A;
    }
    // If the order is column-major, we need to use loop tiling to ensure efficient cache access when accessing matrix elements...
    n32 = floor(N/BLOCK_SIZE) * BLOCK_SIZE;
    if (n32 !== 0) {
        for (let j = 0; j &lt; n32; j += BLOCK_SIZE) {
            ip = offsetIPIV;
            for (let i = 0, k = k1; i &lt; nrows; i++, k += inck) {
                row = IPIV[ip];
                if (row !== k) {
                    ia1 = offsetA + (k*strideA1);
                    ia2 = offsetA + (row*strideA1);
                    for (let n = j; n &lt; j+BLOCK_SIZE; n++) {
                        o = n * strideA2;
                        tmp = A[ia1+o];
                        A[ia1+o] = A[ia2+o];
                        A[ia2+o] = tmp;
                    }
                }
                ip += strideIPIV;
            }
        }
    }
    if (n32 !== N) {
        ip = offsetIPIV;
        for (let i = 0, k = k1; i &lt; nrows; i++, k += inck) {
            row = IPIV[ip];
            if (row !== k) {
                ia1 = offsetA + (k*strideA1);
                ia2 = offsetA + (row*strideA1);
                for (let n = n32; n &lt; N; n++) {
                    o = n * strideA2;
                    tmp = A[ia1+o];
                    A[ia1+o] = A[ia2+o];
                    A[ia2+o] = tmp;
                }
            }
            ip += strideIPIV;
        }
    }
    return A;
}
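
// The following hypothetical helper (not part of the stdlib source) illustrates
// the tiling arithmetic used above: `n32` is the number of columns covered by
// full BLOCK_SIZE tiles, with the final loop handling the remaining columns...
function tiledColumns( N, blockSize ) {
    // Compute the number of columns covered by full tiles:
    return Math.floor( N / blockSize ) * blockSize;
}

// For N = 70 and blockSize = 32, full tiles cover the first 64 columns and the
// cleanup loop handles the remaining 6:
// tiledColumns( 70, 32 ) // returns 64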
</code></pre><p>To provide an API whose behavior is consistent with conventional LAPACK, I then wrapped the above implementation, adapting the input arguments to those expected by the &quot;base&quot; implementation, as shown in the following code snippet.</p><pre><code class="language-javascript">// File: dlaswp.js

// ...
const base = require(&apos;./base.js&apos;);

// ...

function dlaswp(order, N, A, LDA, k1, k2, IPIV, incx) {
    let tmp;
    let inc;
    let sa1;
    let sa2;
    let io;
    if (!isLayout(order)) {
        throw new TypeError(format(&apos;invalid argument. First argument must be a valid order. Value: `%s`.&apos;, order));
    }
    if (order === &apos;row-major&apos; &amp;&amp; LDA &lt; max(1, N)) {
        throw new RangeError(format(&apos;invalid argument. Fourth argument must be greater than or equal to max(1,%d). Value: `%d`.&apos;, N, LDA));
    }
    if (incx &gt; 0) {
        inc = 1;
        io = k1;
    } else if (incx &lt; 0) {
        inc = -1;
        io = k1 + ((k1-k2) * incx);
        tmp = k1;
        k1 = k2;
        k2 = tmp;
    } else {
        return A;
    }
    if (order === &apos;column-major&apos;) {
        sa1 = 1;
        sa2 = LDA;
    } else { // order === &apos;row-major&apos;
        sa1 = LDA;
        sa2 = 1;
    }
    return base(N, A, sa1, sa2, 0, k1, k2, inc, IPIV, incx, io);
}
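
// The following hypothetical helper (not part of the stdlib source) shows how
// the strides chosen above map a matrix element (i,j) to a linear index in the
// flat array backing `A`:
function linearIndex( order, LDA, i, j ) {
    if ( order === 'column-major' ) {
        // sa1 = 1, sa2 = LDA: consecutive rows are adjacent in memory...
        return i + ( j * LDA );
    }
    // sa1 = LDA, sa2 = 1: consecutive columns are adjacent in memory...
    return ( i * LDA ) + j;
}

// For a row-major 2x3 matrix with LDA = 3, element (1,2) resides at index 5:
// linearIndex( 'row-major', 3, 1, 2 ) // returns 5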
</code></pre><p>I subsequently wrote a separate but similar <a href="https://github.com/stdlib-js/stdlib/blob/1c56b737ec018cc818cebf19e5c7947fa684e126/lib/node_modules/%40stdlib/lapack/base/dlaswp/lib/ndarray.js?ref=blog.stdlib.io">wrapper</a> which provides an API mapping more directly to stdlib&apos;s multi-dimensional arrays and which performs some special handling when the direction in which to apply pivots is negative, as shown in the following code snippet.</p><pre><code class="language-javascript">// File: ndarray.js

const base = require(&apos;./base.js&apos;);

// ...

function dlaswp_ndarray(N, A, strideA1, strideA2, offsetA, k1, k2, inck, IPIV, strideIPIV, offsetIPIV) {
    let tmp;
    if (inck &lt; 0) {
        offsetIPIV += k2 * strideIPIV;
        strideIPIV *= -1;
        tmp = k1;
        k1 = k2;
        k2 = tmp;
        inck = -1;
    } else {
        offsetIPIV += k1 * strideIPIV;
        inck = 1;
    }
    return base(N, A, strideA1, strideA2, offsetA, k1, k2, inck, IPIV, strideIPIV, offsetIPIV);
}
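
// The following self-contained sketch (hypothetical; not part of the stdlib
// source) captures the row-interchange semantics implemented above: for each
// pivot index, row `i` is swapped with row `IPIV[ i ]` of a row-major matrix
// having `ncols` columns:
function applyPivots( nrows, ncols, A, IPIV ) {
    let tmp;
    let row;
    for ( let i = 0; i !== nrows; i++ ) {
        row = IPIV[ i ];
        if ( row !== i ) {
            for ( let j = 0; j !== ncols; j++ ) {
                tmp = A[ ( i*ncols ) + j ];
                A[ ( i*ncols ) + j ] = A[ ( row*ncols ) + j ];
                A[ ( row*ncols ) + j ] = tmp;
            }
        }
    }
    return A;
}

// Swapping rows 0 and 2 of a 3x2 matrix:
// applyPivots( 3, 2, [ 1, 2, 3, 4, 5, 6 ], [ 2, 1, 2 ] ) // returns [ 5, 6, 3, 4, 1, 2 ]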
</code></pre><p>A few points to note:</p><ol><li>In contrast to the conventional LAPACKE API, the <code>matrix_layout</code> (order) parameter is not necessary in the <code>dlaswp_ndarray</code> and <code>base</code> APIs, as the order can be inferred from the provided strides.</li><li>In contrast to the conventional LAPACKE API, when an input matrix is row-major, we don&apos;t need to copy data to temporary workspace arrays, thus reducing unnecessary memory allocation.</li><li>In contrast to libraries such as NumPy and SciPy, which interface with BLAS and LAPACK directly, when calling LAPACK routines in stdlib, we don&apos;t need to copy non-contiguous multi-dimensional data to and from temporary workspace arrays before and after invocation, respectively. Except when interfacing with hardware-optimized BLAS and LAPACK, this approach minimizes data movement and helps ensure good performance in resource-constrained browser applications.</li></ol><p>For server-side applications hoping to leverage hardware-optimized libraries such as OpenBLAS, we provide separate wrappers which adapt generalized signature arguments to their optimized API equivalents. In this context, at least for sufficiently large arrays, creating temporary copies can be worth the overhead.</p><h2 id="current-status-and-next-steps">Current status and next steps</h2><p>Despite the challenges, unforeseen setbacks, and multiple design iterations, I am happy to report that, in addition to <code>dlaswp</code> above, I was able to open <a href="https://github.com/stdlib-js/stdlib/pulls?q=sort%3Aupdated-desc+is%3Apr+author%3APranavchiku+label%3ALAPACK+&amp;ref=blog.stdlib.io">35 PRs</a> adding support for various LAPACK routines and associated utilities. Obviously not quite 1,700 routines, but a good start! :)</p><p>Nevertheless, the future is bright, and we are quite excited about this work. There&apos;s still plenty of room for improvement and additional research and development. 
In particular, we&apos;re keen to</p><ol><li>explore tooling and automation.</li><li>address build issues when resolving the source files of Fortran dependencies spread across multiple stdlib packages.</li><li>roll out C and Fortran implementations and native bindings for stdlib&apos;s existing LAPACK packages.</li><li>continue growing stdlib&apos;s library of modular LAPACK routines.</li><li>identify additional areas for performance optimization.</li></ol><p>While my Quansight Labs internship has ended, my plan is to continue adding packages and pushing this effort along. Given the immense potential and LAPACK&apos;s fundamental importance, we&apos;d love to see this initiative of bringing LAPACK to the web continue to grow, so, if you are interested in helping drive this forward, please don&apos;t hesitate to reach out! And if you are interested in sponsoring development, the folks at <a href="https://quansight.com/?ref=blog.stdlib.io">Quansight</a> would be more than happy to chat.</p><p>And with that, I would like to thank Quansight for providing this internship opportunity. I feel incredibly fortunate to have learned so much. Being an intern at Quansight was long a dream of mine, and I am very grateful to have fulfilled it. I want to extend a special thanks to <a href="https://github.com/kgryte?ref=blog.stdlib.io">Athan Reines</a> and to <a href="https://github.com/melissawm?ref=blog.stdlib.io">Melissa Mendon&#xE7;a</a>, who is an amazing mentor and all around wonderful person! And thank you to all the stdlib core developers and everyone else at Quansight for helping me out in ways both big and small along the way.</p><p>Cheers!</p><hr>
<!--kg-card-begin: html-->
<p class="dev-theme-author-blurb">
    <em>Pranav Goswami is a developer of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> and a computer science graduate who&apos;s passionate about technology, algorithms, compilers, and epic roadtrips.</em>
</p>
<!--kg-card-end: html-->
<hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions help ensure the project&apos;s long-term success, and your continued support is greatly appreciated!</p><hr>
<!--kg-card-begin: html-->
<h2>License</h2>
<details>
    <summary>All code is licensed under <a href="http://www.apache.org/licenses/LICENSE-2.0?ref=blog.stdlib.io">Apache License, Version 2.0</a>.</summary>
    <pre><code class="language-text hljs">
Copyright (c) 2024 The Stdlib Authors.

Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
    </code></pre>
</details>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Reflecting on GSoC 2024]]></title><description><![CDATA[We share key achievements, challenges, and tips for future success for both participating organizations and aspiring contributors.]]></description><link>https://blog.stdlib.io/reflecting-on-gsoc-2024/</link><guid isPermaLink="false">66ff3dabd8eb7fcd9a9615fc</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Philipp Burckhardt]]></dc:creator><pubDate>Fri, 04 Oct 2024 03:04:04 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2024/10/gen_splash2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2024/10/gen_splash2.png" alt="Reflecting on GSoC 2024"><p>An exciting summer has come to a close for stdlib with our first participation in Google Summer of Code (GSoC). GSoC is an annual program run by Google and a highlight within the open source community. It brings together passionate contributors and mentors to collaborate on open source projects. Selected contributors receive a stipend for their hard work, while organizations benefit from new features, improved project visibility, and the potential to cultivate long-term contributors.<br><br>stdlib (/&#x2C8;st&#xE6;nd&#x259;rd l&#x26A;b/ &quot;standard lib&quot;) is a fundamental numerical library for JavaScript. Our mission is to create a scientific computing ecosystem for JavaScript and TypeScript, similar to what NumPy and SciPy are for Python. This year, we were granted four slots in GSoC, marking a significant milestone for us as a first-time participating organization.<br><br>The purpose of this post is to share our GSoC experiences to help future organizations and contributors prepare more effectively. 
We aim to provide insights into what worked well, what challenges we faced, and advice for making the most out of this incredible program.</p><h2 id="highlights-of-the-program">Highlights of the Program</h2><p>While we certainly encountered bumps along the way (more on that in a second), overall, our participation in GSoC was packed with standout moments. Our accepted contributors successfully completed their four GSoC projects.<br><br>To illustrate the impact of our participation, here are some key statistics and accomplishments from our community since the GSoC organization announcement in February:</p><ul><li>Over 1,000 PRs opened </li><li>More than 100 unique PR contributors </li><li>Over 2,000 new commits to the codebase</li></ul><p>We had a range of successful contributions that significantly advanced stdlib. Specifically, our four GSoC contributors worked on the following projects:</p><ul><li>Aman Bhansali worked on BLAS bindings, overcoming the challenge of integrating complex numerical libraries into JavaScript. </li><li>Gunj Joshi developed C implementations for special mathematical functions, significantly improving the performance of our library. </li><li>Jaysukh Makvana added support for Boolean arrays, enhancing the library&apos;s functionality and usability and paving the way for NumPy-like array indexing in JavaScript. </li><li>Snehil Shah worked on enhancing the stdlib REPL for scientific computing in Node.js, making it easier for users to interact with our library and perform data analysis in their terminals.</li></ul><p>Each project addressed critical areas in our mission to create a comprehensive numerical library for JavaScript and the web platform. <br><br>Finally, we already see a glimpse of the project attracting long-term contributors from both GSoC participants and the broader community.</p><h2 id="an-unexpected-challenge">An Unexpected Challenge</h2><p>Despite the many positives, our journey wasn&apos;t without its share of challenges. 
Early on, we faced an unexpected incident that seemed straight out of a movie plot. A prospective contributor tried to sabotage a fellow applicant by impersonating them on Gitter, the open source instant messaging and chat room service where we engage with the community. After signing up via a fake Twitter/X account, he started sending unhinged messages to several of the project&apos;s core contributors. While it quickly became clear that we were communicating with an impersonator, it was an unsettling experience nonetheless. The impersonator even ended up copying the real applicant&apos;s proposal and later attempted to claim the work as their own on GitHub after the conclusion of GSoC.<br><br>In light of this experience, we advise any organizations participating in GSoC to keep in mind that competition for slots can be fierce, and that some individuals may be tempted to use subterfuge or actively jeopardize others&apos; applications. One must be vigilant and expect the unexpected. We also recommend having a Code of Conduct (CoC) in place to address such unethical behavior and raising awareness among GSoC contributors of its existence, such as having a CoC acknowledgment checkbox on pull requests and when submitting proposals.</p><h2 id="lessons-learned-and-advice-for-future-participants">Lessons Learned and Advice for Future Participants</h2><h3 id="engage-early-with-the-community">Engage Early with the Community</h3><p>First and foremost, it is crucial to encourage potential contributors to start interacting with the community and codebase well before the application period. This helps build familiarity and commitment. Although we were aware of this, we could have done more to encourage early engagement and provide clearer guidance on how to get started. 
Going through all onboarding steps afresh may help uncover outdated information in documentation or other inconsistencies.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Community Outreach: </strong></b>Actively promote your participation through social media, blogs, and coding forums. Use platforms like X/Twitter, LinkedIn, and relevant forums to announce your participation and engage with potential contributors.</div></div><h3 id="handling-community-queries">Handling Community Queries</h3><p>After our participation was announced, we were quickly bombarded with a seemingly non-stop barrage of messages on Gitter and other communication channels, and with dozens of PRs opened each day. As the core stdlib team is not working on the project full-time, it was very challenging to keep up. We learned that it&apos;s essential to set clear expectations and boundaries early on to manage the influx of new contributors.</p><h3 id="managing-the-onboarding-process">Managing the Onboarding Process</h3><p>Answering the same questions repeatedly can be time-consuming, so having frequently asked questions (FAQs) and a well-documented onboarding process will prove to be invaluable. We also started a weekly office hour for people to drop by. This had a decent turnout and proved valuable: because only individuals genuinely interested in the project attended, it helped weed out those who were just making &quot;drive-by&quot; contributions. 
In addition to the weekly office hours, we also held two informational sessions during the application period specifically focused on GSoC so we could answer any questions prospective contributors had.</p><p>Since the conclusion of GSoC, we have continued to hold weekly office hours, which have been a great way to keep the community engaged!</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Communication Channels:</strong></b> Clearly outline the primary communication channels (e.g., mailing lists, chat platforms like Gitter, etc.) and how to use them.</div></div><h3 id="good-first-issues">Good First Issues</h3><p>What worked less well were the &quot;good first issues&quot; we had opened and labeled as such on GitHub. We found that issues we thought were good first ones, such as updating documentation and examples, resulted in a very high number of low-quality submissions, often suffering from hallucinated content due to AI generation or other issues, which caused more work for reviewers. On the other hand, tasks such as refining TypeScript definitions were often too complex and challenging for newcomers.</p><p>We learned that the best first issues are those that are well-scoped, have clear instructions, and are easy to test and verify. Having a bunch of trivial issues provides a weak signal; you want to see contributors progressively tackle more complicated tasks as they become more acquainted with the project. To aid in this progression, one would be well served to have enough issues of varying difficulty that prospective contributors can tackle. If possible, it may be ideal to have issues build on top of each other and take the contributor on a journey toward mastery. 
Similarly, it may be good to create open issues that are related to each of the potential GSoC project topics, so that contributors can get familiar with the parts of the codebase they would be working on during the GSoC program. And lastly, consider creating issue templates specifically for GSoC participants, which include detailed instructions, links to relevant documentation, and expected outcomes. This reduces ambiguity and helps set clear expectations for newcomers.</p><p>Going forward, we plan to focus on creating well-defined, incremental issues that serve as stepping stones for new contributors to build familiarity and gradually take on more complex tasks.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Starter Issues and Mini-Projects:</strong></b> Offer beginner-friendly issues and smaller tasks early on to help newcomers familiarize themselves with the codebase. Fixing existing bugs or writing tests can be a good starting point.</div></div><h3 id="the-role-of-ai">The Role of AI</h3><p>I think it&apos;s fair to conclude that Generative AI has emerged as both a blessing and a curse in the world of open source contributions. Personally, I am an avid user of LLMs and happy about the innovation they have sparked in the developer tooling space. They can assist non-native English speakers in better communicating their ideas, provide a conversation partner equipped with vast knowledge of even quite remote topics, and can increase developer productivity through code completions and code generation. However, AI has also led to a flood of low-quality PRs generated by AI tools, often filled with hallucinated code or content that doesn&apos;t align with the project&apos;s actual requirements. 
While writing code can feel more rewarding than the often tedious task of reviewing it&#x2014;especially when the code isn&apos;t your own&#x2014;reviewer fatigue becomes a real issue when faced with a barrage of poorly constructed or misaligned PRs.<br><br>Contributors must recognize that AI is an assistant, not a replacement for personal responsibility and craftsmanship. We have by now invested significant effort in automation to filter out low-effort submissions before they even reach the review stage. Besides workflows that close PRs which don&apos;t adhere to basic contribution conventions, we have added jobs that post helpful comments on how to set up a development environment or which remind contributors that they have to accept the project&apos;s contribution guidelines before their PR can be reviewed. This significantly reduces the burden on reviewers and ensures contributors are aware of expectations from the beginning.</p><h3 id="contributor-triage">Contributor Triage</h3><p>Another important takeaway is to watch out for contributors claiming multiple issues without completing them. We found that it&apos;s best to avoid assigning issues to anyone via the respective GitHub feature and instead focus on encouraging quality contributions over sheer quantity. Additionally, be prepared to manage contributors who may place unrealistic demands on review times, such as insisting on immediate feedback.<br><br>One has to be ruthless in prioritizing contributions. This approach ensures that contributors who show genuine interest and effort receive the attention they deserve, leading to higher-quality interactions and outcomes for both the project and the contributor. Reviewer time is a limited resource, and it&apos;s simply not feasible to provide equal attention to every contributor.<br><br>At the end of the day, contributors must invest the time necessary to familiarize themselves with a project&apos;s conventions, guidelines, and best practices. 
If they don&apos;t meet this minimum threshold and do not show genuine effort, it&apos;s not worth allocating the finite resources of the core team. This may sound harsh, but it&apos;s necessary to ensure there is enough time to focus on the high-quality contributions. Otherwise, one ends up in a position where everybody is unhappy with your responsiveness. This may be less of an issue for organizations in niches requiring specialized skills and which may not have as wide an audience as a JavaScript library.</p><h3 id="provide-clear-documentation">Provide Clear Documentation</h3><p>Ensure that your project documentation is comprehensive and up-to-date. This includes installation guides, contribution guidelines, and a clear roadmap. Poor documentation can be a significant barrier to entry. During the community bonding period, we found that our documentation was outdated in some areas and that there were issues arising from our setup instructions not working on all operating systems. Providing a <code>devcontainer</code> setup for Visual Studio Code helped to mitigate these issues and streamline the onboarding process.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Contribution Guides:</strong></b> Providing detailed guides on setting up the development environment, navigating the codebase, and submitting contributions is crucial.</div></div><h3 id="mentor-selection-and-training">Mentor Selection and Training</h3><p>Choose experienced and committed mentors who can provide guidance and support throughout the program. Consider providing mentor training sessions and setting clear expectations around time commitments and responsibilities to better prepare mentors for their roles. 
Expect mentoring to be more demanding than envisioned.<br><br>We found that having weekly stand-ups allowed contributors to get to know each other and share their progress. We had also, early on, decided to have weekly 1:1s between contributors and mentors, combined with active conversations on PRs, RFC issues, and our project-internal Slack. All these channels helped to keep the communication flowing and ensure that everyone was on the same page. However, it&apos;s crucial to try to be responsive. Personally, I could have been better at responding to PRs and questions given how quickly the time flies by, with GSoC being over before you know it!</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Encourage mentors to actively communicate with each other about their experiences and challenges, so they can offer consistent advice and collaborate on strategies for effectively supporting contributors.</div></div><h2 id="post-gsoc-engagement-strategies">Post-GSoC Engagement Strategies</h2><p>After GSoC ends, it&apos;s essential to keep contributors engaged in order to build a sustainable community. Continue holding regular office hours, offer additional project ideas, or even invite selected GSoC contributors to mentor the next round of participants. This will go a long way toward creating a sense of belonging and long-term commitment.</p><h2 id="common-pitfalls-to-avoid">Common Pitfalls to Avoid</h2><ul><li><strong>Overwhelming Newcomers: </strong>Don&apos;t assign tasks that are too complex or lacking adequate documentation.</li><li><strong>Inadequate Support:</strong> Ensure mentors are available and can provide adequate guidance.</li><li><strong>Poor Documentation:</strong> Avoid outdated or incomplete documentation which can create barriers to entry. 
</li><li><strong>Insufficient Community Interaction:</strong> Foster a sense of community and two-way communication.</li></ul><p>To provide an illustrative example of where we fell prey to the pitfalls above, a number of contributors working on Windows machines initially struggled with setting up their local development environment. Because the core stdlib team primarily develops on macOS and Linux, we were largely unaware of the needs and constraints of Windows users, and our contributing guidelines largely reflected that ignorance. Needless to say, telling people to just use the Ubuntu shell was not sufficient. We could have saved ourselves a lot of back and forth by (a) providing preconfigured dev containers, (b) investing the time necessary to create more comprehensive documentation, and (c) having a quick onboarding session over a higher-bandwidth medium than chat.</p><h2 id="advice-for-contributors">Advice for Contributors</h2><ul><li><strong>Early Engagement:</strong> Interact with the community and start working on beginner-friendly issues early on. If you start contributing before the application period and show your commitment to the project, you will stand out as a proactive candidate during the selection process. This is probably the biggest hack to get selected for GSoC.</li><li><strong>Invest in Project Familiarity Early On:</strong> Before contributing code, take time to read through old issues, PR discussions, and any architectural documentation available. Understanding the project&apos;s historical context can help avoid misunderstandings and improve the relevance of your contributions.</li><li><strong>Prioritize Code Quality and Documentation:</strong> Don&apos;t rush to make as many contributions as possible. Take your time to write high-quality code and back it up with sufficient documentation and test cases. 
Especially in stdlib, we place a high priority on ensuring consistency throughout the codebase, so the more your contributions look and feel like stdlib, the more likely your contributions will be accepted. This attention to detail will set you apart from others who may focus solely on quantity and ignore project conventions. </li><li><strong>Clear Communication:</strong> Don&apos;t hesitate to ask questions and seek guidance from mentors and the community. Organizations may be overwhelmed with applications, so stepping up and answering questions on the community forums can help you stand out as well.</li><li><strong>Ask for Feedback:</strong> Throughout the GSoC program, ask for and incorporate feedback from project mentors. During the GSoC application phase, contributors who clearly demonstrate an ability to receive and act on feedback will stand out. It can be frustrating for project reviewers to repeat the same feedback across multiple PRs, especially concerning project style and conventions. Make it a goal to reduce the number of reviewer comments on each PR. Clean PRs requiring little-to-no review feedback significantly improve the odds of you setting yourself apart from the pack.</li><li><strong>Respect Maintainer Time:</strong> Be respectful of maintainer time. GSoC can be highly competitive, and, for many, GSoC acceptance is a meaningful resum&#xE9; item. Recognize, however, that maintainers often have obligations and jobs outside of their open source work. Sometimes it just isn&apos;t possible to immediately review your PR or answer your question, especially toward the end of the GSoC application period. You can significantly improve the likelihood of a response if you heed the advice above; namely, invest in project familiarity early on, prioritize code quality and documentation, and incorporate feedback. 
Maintainers are human, and they are more likely to invest in you, the more you show you care about them.</li><li><strong>Time Management:</strong> Plan your time effectively to meet project milestones and deadlines. The time will fly by, and you don&apos;t want to be scrambling to complete your project at the last minute. Break down your project into smaller tasks, and set realistic goals for each week. Where possible, be strategic in your planning, such that, if one task becomes blocked, you can continue making progress by working on other tasks in parallel. If you encounter obstacles, reach out for help sooner rather than later. Being proactive not only ensures you stay on track but also demonstrates your commitment and initiative.</li><li><strong>Participate Beyond Code:</strong> Engage in discussions beyond code contributions.  Once you have familiarized yourself with the project, gotten up to speed on how to contribute, and successfully made contributions to the codebase, help other newcomers by participating in community channels, answering questions, and directing them to appropriate resources. Not only does this show that you are invested in the community, but it also helps reduce maintainer burden&#x2014;something which is unlikely to go unnoticed.</li><li> <strong>Be Adaptive and Open to Change:</strong> Sometimes your initial project plan may not work out as expected. Be flexible and willing to adjust your project scope or approach based on feedback and evolving project priorities.</li></ul><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Remember that valuable contributions aren&apos;t limited to code alone. 
Participating in community discussions, improving documentation, and offering support to other newcomers are all meaningful ways to contribute and demonstrate commitment to the project.</div></div><h2 id="acknowledgments">Acknowledgments</h2><p>Our heartfelt thanks go out to everyone involved in this year&apos;s GSoC, from the mentors and contributors to the broader community, and last but not least, to Google.  We&apos;re excited to build on the momentum from this summer and look forward to seeing what the future holds for stdlib!</p><p>If you&apos;re interested in becoming a part of our growing community or exploring the opportunities GSoC can provide, visit our <a href="https://github.com/stdlib-js/google-summer-of-code?ref=blog.stdlib.io">Google Summer of Code</a> repository and join the conversation on our community channels. We&apos;re always excited to welcome new contributors!</p><p>And if you&apos;re just generally interested in contributing or staying updated, be sure to check out the project <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">repository</a>. Don&apos;t be shy, and come say hi. 
We&apos;d love for you to be a part of our community!</p><hr><p><em>Philipp Burckhardt is a data scientist and software engineer securing software supply chains at </em><a href="https://socket.dev/?ref=blog.stdlib.io" rel="noreferrer"><em>Socket</em></a><em> and a core contributor of </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions help ensure the project&apos;s long-term success, and your continued support is greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[Welcoming colors to the REPL!]]></title><description><![CDATA[The REPL now supports syntax highlighting and custom theming!]]></description><link>https://blog.stdlib.io/welcoming-colors-to-the-repl/</link><guid isPermaLink="false">66ba5fe4d8eb7fcd9a9615af</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Snehil Shah]]></dc:creator><pubDate>Mon, 19 Aug 2024 01:04:03 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2024/08/showcase.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2024/08/showcase.png" alt="Welcoming colors to the REPL!"><p>The stdlib REPL (Read-Eval-Print Loop) is an interactive interpreter environment for executing JavaScript and enabling easy prototyping, testing, debugging, and programming. 
With syntax highlighting now added, editing in the REPL becomes way more intuitive and fun.</p><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/typing.gif" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="386" height="98"></figure><p>How to get your hands on the new hotness? Download the latest package from <a href="https://www.npmjs.com/package/@stdlib/repl?ref=blog.stdlib.io">npm</a>, fire it up, and just start typing.</p><pre><code class="language-bash">$ npm install -g @stdlib/repl
$ stdlib-repl
</code></pre><p>We have various themes to get started with. But if you want to make the REPL your own, you can also customize it. We explore customization later in this post.</p><h2 id="stdlib">stdlib</h2><p>A brief segue about <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>. stdlib is a standard library for numerical and scientific computation for use in web browsers and in server-side runtimes supporting JavaScript. The library provides high-performance and rigorously tested APIs for data manipulation and transformation, mathematics, statistics, linear algebra, pseudorandom number generation, array programming, and a whole lot more.</p><p>We&apos;re on a mission to make JavaScript (and TypeScript!) a preferred language for numerical computation. If this sounds interesting to you, check out the project on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a>, and be sure to give us a star &#x1F31F;!</p><p>Moving on... &#x1F3C3;&#x1F4A8;</p><h2 id="themes">Themes</h2><p>Where were we? Ah, yes, themes! The REPL comes with the following themes built-in.</p><div class="kg-card kg-callout-card kg-callout-card-green"><div class="kg-callout-emoji">&#x1F680;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Pro tip</strong></b>: You can always use the <code spellcheck="false" style="white-space: pre-wrap;">themes()</code> REPL command to list available themes.</div></div><ul><li><strong>stdlib-ansi-basic</strong>: The classic. The default.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/ansi-basic-blur-bg.png" class="kg-image" alt="Welcoming colors to the REPL!" 
loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/ansi-basic-blur-bg.png 600w, https://blog.stdlib.io/content/images/2024/08/ansi-basic-blur-bg.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>stdlib-ansi-light</strong>: For the light mode users.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/ansi-light-blur-bg-1.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="960" height="489" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/ansi-light-blur-bg-1.png 600w, https://blog.stdlib.io/content/images/2024/08/ansi-light-blur-bg-1.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>stdlib-ansi-dark</strong>: For the normal users.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/ansi-dark-blur-bg.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/ansi-dark-blur-bg.png 600w, https://blog.stdlib.io/content/images/2024/08/ansi-dark-blur-bg.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>stdlib-ansi-strong</strong>: Expressive and bold.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/ansi-strong-blur-bg.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/ansi-strong-blur-bg.png 600w, https://blog.stdlib.io/content/images/2024/08/ansi-strong-blur-bg.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>solarized</strong>: My personal favorite.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/solarized-blur-bg-2.png" class="kg-image" alt="Welcoming colors to the REPL!" 
loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/solarized-blur-bg-2.png 600w, https://blog.stdlib.io/content/images/2024/08/solarized-blur-bg-2.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>minimalist</strong>: Enough said.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/minimalist-blur-bg-1.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/minimalist-blur-bg-1.png 600w, https://blog.stdlib.io/content/images/2024/08/minimalist-blur-bg-1.png 960w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>monokai</strong>: The one and only.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/monokai-blur-bg.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="960" height="491" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/monokai-blur-bg.png 600w, https://blog.stdlib.io/content/images/2024/08/monokai-blur-bg.png 960w" sizes="(min-width: 720px) 720px"></figure><p>In order to change to the theme of your choice, use the REPL <code>settings()</code> command.</p><pre><code class="language-javascript">In [1]: settings( &apos;theme&apos;, &apos;solarized&apos; )
</code></pre><h2 id="customization">Customization</h2><p>You can create your own syntax highlighting themes using a theme definition. A theme definition is an object mapping each token type to its corresponding color. The following code snippet shows the theme definition for the <code>monokai</code> theme.</p><pre><code class="language-javascript">const monokai = {
    // Keywords:
    &apos;control&apos;: &apos;brightRed&apos;,
    &apos;keyword&apos;: &apos;italic brightCyan&apos;,
    &apos;specialIdentifier&apos;: &apos;brightMagenta&apos;,

    // Literals:
    &apos;string&apos;: &apos;brightYellow&apos;,
    &apos;number&apos;: &apos;brightBlue&apos;,
    &apos;literal&apos;: &apos;brightBlue&apos;,
    &apos;regexp&apos;: &apos;underline yellow&apos;,

    // Identifiers:
    &apos;command&apos;: &apos;bold brightGreen&apos;,
    &apos;function&apos;: &apos;brightGreen&apos;,
    &apos;object&apos;: &apos;italic brightMagenta&apos;,
    &apos;variable&apos;: null,
    &apos;name&apos;: null,

    // Others:
    &apos;comment&apos;: &apos;brightBlack&apos;,
    &apos;punctuation&apos;: null,
    &apos;operator&apos;: &apos;brightRed&apos;
}
</code></pre><p>For the full list of supported tokens, see the REPL <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/repl?ref=blog.stdlib.io#replprototypeaddtheme-name-theme-">documentation</a>.</p><div class="kg-card kg-callout-card kg-callout-card-green"><div class="kg-callout-emoji">&#x1F680;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Pro tip</strong></b>: Use the <code spellcheck="false" style="white-space: pre-wrap;">getTheme()</code> REPL command to find out how a theme was built.</div></div><p>Currently, the REPL supports ANSI colors, such as <code>black</code>, <code>red</code>, <code>green</code>, <code>yellow</code>, <code>blue</code>, <code>magenta</code>, <code>cyan</code>, and <code>white</code>, and their brighter variants, such as <code>brightBlack</code> and <code>brightRed</code>.</p><p>For more expressive themes, you can use styles, such as <code>bold</code>, <code>italic</code>, <code>underline</code>, <code>strikethrough</code>, and <code>reversed</code>, and background colors, such as <code>bgRed</code> and <code>bgBrightRed</code>.</p><p>Lastly, you can go wild by mixing and matching any of the above colors, styles, and background colors. So something like the following works:</p><pre><code class="language-text">italic red bgBrightGreen
</code></pre><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/ridiculous.png" class="kg-image" alt="Welcoming colors to the REPL!" loading="lazy" width="376" height="88"></figure><p>Some might say this looks ridiculous, but good to know the REPL supports the ridiculousness!</p><h2 id="adding-your-own-theme">Adding your own theme</h2><p>To add your theme, use the <code>addTheme()</code> REPL command, as shown in the following REPL snippet.</p><pre><code class="language-javascript">In [1]: const theme = {
    &apos;string&apos;: &apos;italic red bgBrightGreen&apos;,
    &apos;keyword&apos;: &apos;bold magenta&apos;,
    // Be the artist...
};

In [2]: addTheme( &apos;bestThemeEver&apos;, theme )
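
// Then activate the new theme using the settings() command shown earlier:
In [3]: settings( &apos;theme&apos;, &apos;bestThemeEver&apos; )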
</code></pre><p>Changed your mind and added something you don&apos;t like? No worries. Just use the <code>deleteTheme()</code> REPL command to send the theme into oblivion, as in the following REPL snippet.</p><pre><code class="language-javascript">In [5]: deleteTheme( &apos;worstThemeEver&apos; )
</code></pre><p>Want to call your theme something different? We&apos;ve got you covered. Just use the <code>renameTheme()</code> REPL command, as in the following REPL snippet.</p><pre><code class="language-javascript">In [6]: renameTheme( &apos;bestThemeEver&apos;, &apos;secondBestThemeEver&apos; )
</code></pre><p>If you prefer spooky <a href="https://en.wikipedia.org/wiki/Action_at_a_distance?ref=blog.stdlib.io">action at a distance</a>, simply use the corresponding REPL prototype methods for the above operations. Refer to the REPL <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/repl?ref=blog.stdlib.io">documentation</a> for the full list of REPL commands and prototype methods related to syntax highlighting and everything else.</p><h2 id="lets-wrap-this-up">Let&apos;s wrap this up</h2><p>Time to end this post with a quote:</p><blockquote>&quot;Coding without syntax highlighting is like trying to read a book with all the words in the wrong order&#x2014;frustrating, confusing, and not nearly as fun!&quot;<br><br>&#x2014; ChatGPT 4o mini</blockquote><p>Boy ain&apos;t that the truth!</p><p>The stdlib REPL is in constant development, so feel free to reach out with new ideas and identified issues. Your feedback is appreciated and hugely important!</p><p>We&apos;ve got some more REPL news and notes in the pipeline, so stay tuned for the drip. Until next time, cheers and happy REPLing!</p><hr><p><em>Snehil Shah is a computer science undergrad, an audio nerd, and a contributor to </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. 
Your contributions help ensure the project&apos;s long-term success, and your continued support is greatly appreciated!</p><hr>
<!--kg-card-begin: html-->
<h2>License</h2>
<details>
    <summary>All code is licensed under <a href="http://www.apache.org/licenses/LICENSE-2.0?ref=blog.stdlib.io">Apache License, Version 2.0</a>.</summary>
    <pre><code class="language-text hljs">
Copyright (c) 2024 Snehil Shah.

Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
    </code></pre>
</details>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[The Accessor Protocol]]></title><description><![CDATA[An introduction to the accessor protocol for generalized array-like object iteration, with application to exotic array types, sparse array computation, lazy materialization, and much more!]]></description><link>https://blog.stdlib.io/introducing-the-accessor-protocol-for-array-like-objects/</link><guid isPermaLink="false">64fbf52387c5db24b8c5a893</guid><category><![CDATA[Engineering]]></category><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Tue, 06 Aug 2024 20:09:12 GMT</pubDate><media:content url="https://blog.stdlib.io/content/images/2024/08/gen_splash.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.stdlib.io/content/images/2024/08/gen_splash.png" alt="The Accessor Protocol"><p>In this post, I&apos;ll introduce you to the <strong>accessor protocol</strong> for generalized array-like object iteration. First, I&apos;ll provide an overview of built-in array-like objects, along with example usage. I&apos;ll then show you how you can create your own custom array-like objects using the same element access syntax. Next, we will explore why you might want to go beyond vanilla array-like objects to create more &quot;exotic&quot; variants to accommodate sparsity, deferred computation, and performance considerations. Following this, I&apos;ll introduce the accessor protocol and how it compares to possible alternatives. Finally, I&apos;ll showcase various example applications.</p><p>Sound good?! Great! Let&apos;s go!</p><h2 id="tldr">TL;DR</h2><p>The <strong>accessor protocol</strong> (also known as the <strong>get/set protocol</strong>) defines a standardized way for non-indexed collections to access element values. In order to be accessor protocol-compliant, an array-like object must implement two methods having the following signatures:</p><pre><code class="language-typescript">function get&lt;T&gt;( index: number ): T {...}
function set&lt;T, U&gt;( value: T, index: number ): U {...}
</code></pre><p>The protocol allows implementation-dependent behavior when an <code>index</code> is out of bounds and, similar to built-in array bracket notation, only requires that implementations support nonnegative <code>index</code> values. In short, the protocol prescribes a minimal set of behavior in order to support the widest possible set of use cases, including, but not limited to, sparse arrays, arrays supporting &quot;lazy&quot; (or deferred) materialization, shared memory views, and arrays which clamp, wrap, or constrain <code>index</code> values.</p><p>The following code sample provides an example class whose instances are array-like objects implementing the accessor protocol and supporting <a href="https://en.wikipedia.org/wiki/Stride_of_an_array?ref=blog.stdlib.io">strided access</a> over a linear data buffer.</p><pre><code class="language-javascript">/**
* Class defining a strided array.
*/
class StridedArray {
    // Define private instance fields:
    #length; // array length
    #data;   // underlying data buffer
    #stride; // step size (i.e., the index increment between successive values)
    #offset; // index of the first indexed value in the data buffer

    /**
    * Returns a new StridedArray instance.
    *
    * @param {integer} N - number of indexed elements
    * @param {ArrayLikeObject} data - underlying data buffer
    * @param {number} stride - step size
    * @param {number} offset - index of the first indexed value in the data buffer
    * @returns {StridedArray} strided array instance
    */
    constructor( N, data, stride, offset ) {
        this.#length = N;
        this.#data = data;
        this.#stride = stride;
        this.#offset = offset;
    }

    /**
    * Returns the array length.
    *
    * @returns {number} array length
    */
    get length() {
        return this.#length;
    }

    /**
    * Returns the element located at a specified index.
    *
    * @param {number} index - element index
    * @returns {(void|*)} element value
    */
    get( index ) {
        return this.#data[ this.#offset + index*this.#stride ];
    }

    /**
    * Sets the value for an element located at a specified index.
    *
    * @param {*} value - value to set
    * @param {number} index - element index
    */
    set( value, index ) {
        this.#data[ this.#offset + index*this.#stride ] = value;
    }
}

// Define a data buffer:
const buf = new Float64Array( [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 ] );

// Create a strided view over the data buffer:
const x1 = new StridedArray( 4, buf, 2, 1 );

// Retrieve the second element:
const v1 = x1.get( 1 );
// returns 4.0

// Mutate the second element:
x1.set( v1*10.0, 1 );

// Retrieve the second element:
const v2 = x1.get( 1 );
// returns 40.0

// Create a new strided view over the same data buffer, but reverse the elements:
const x2 = new StridedArray( 4, buf, -2, buf.length-1 );

// Retrieve the second element:
const v3 = x2.get( 1 );
// returns 6.0

// Mutate the second element:
x2.set( v3*10.0, 1 );

// Retrieve the second element:
const v4 = x2.get( 1 );
// returns 60.0

// Retrieve the third element from the first array view:
const v5 = x1.get( 2 );
// returns 60.0
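
// The payoff of accessor methods is array-agnostic code. As a hypothetical
// sketch (not an API from this post), a sum routine can dispatch on the
// presence of a `get` method and thus support both indexed collections and
// accessor protocol-compliant objects:
function sumAny( arr ) {
    const accessors = ( typeof arr.get === 'function' );
    let total = 0;
    for ( let i = 0; i !== arr.length; i++ ) {
        total += ( accessors ) ? arr.get( i ) : arr[ i ];
    }
    return total;
}

const t1 = sumAny( [ 1.0, 2.0, 3.0 ] );
// returns 6.0

const t2 = sumAny( { 'length': 3, 'get': function ( i ) { return i + 1.0; } } );
// returns 6.0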
</code></pre><p>As shown in the code sample above, a strided array is a powerful abstraction over built-in arrays and typed arrays, as it allows for arbitrary views having custom access patterns over a single buffer. In fact, strided arrays are the conceptual basis for multi-dimensional arrays, such as NumPy&apos;s <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html?ref=blog.stdlib.io"><code>ndarray</code></a> and stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io"><code>ndarray</code></a>, which are the fundamental building blocks of modern numerical computing. Needless to say, the example above speaks to the utility of going beyond built-in bracket syntax and providing APIs for generalized array-like object iteration.</p><p>To learn more about the accessor protocol and its use cases, continue reading the rest of the post below! &#x1F680;</p><h2 id="stdlib">stdlib</h2><p>A brief overview about <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>. stdlib is a standard library for numerical and scientific computation for use in web browsers and in server-side runtimes supporting JavaScript. The library provides high-performance and rigorously tested APIs for data manipulation and transformation, mathematics, statistics, linear algebra, pseudorandom number generation, array programming, and a whole lot more.</p><p>We&apos;re on a mission to make JavaScript (and TypeScript!) a preferred language for numerical computation. If this sounds interesting to you, check out the project on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a>, and be sure to give us a star &#x1F31F;!</p><h2 id="introduction">Introduction</h2><p>In JavaScript, we use bracket notation to access individual array elements. 
For example, in the following code sample, we use bracket notation to retrieve the second element in an array.</p><pre><code class="language-javascript">const x = [ 1, 2, 3 ];

// Retrieve the second element:
const v = x[ 1 ];
// returns 2
</code></pre><p>This works for both generic array and typed array instances. In the next code sample, we repeat the previous operation on a typed array.</p><pre><code class="language-javascript">const x = new Float64Array( [ 1, 2, 3 ] );
// returns &lt;Float64Array&gt;

// Retrieve the second element:
const v = x[ 1 ];
// returns 2
</code></pre><p>Similarly, one can use bracket notation for built-in array like objects, such as strings. In the next code sample, we retrieve the second UTF-16 code unit in a string.</p><pre><code class="language-javascript">const s = &apos;beep boop&apos;;

// Retrieve the second UTF-16 code unit:
const v = s[ 1 ];
// returns &apos;e&apos;
</code></pre><p>In order to determine how many elements are in an array-like object, we can use the <code>length</code> property, as shown in the following code sample.</p><pre><code class="language-javascript">const x = [ 1, 2, 3 ];

const len = x.length;
// returns 3
</code></pre><p>Arrays and typed arrays are referred to as <strong>indexed collections</strong>, where elements are ordered according to their index value. An array-like object is thus an ordered list of values that one refers to using a variable name and index.</p><p>While JavaScript arrays and typed arrays have many methods (e.g., <code>forEach</code>, <code>map</code>, <code>filter</code>, <code>sort</code>, and more), the only required property that any array-like object (built-in or custom) must have is a <code>length</code> property. The <code>length</code> property tells us the maximum number of elements for which we can apply an operation. Without it, we&apos;d never know when to stop iterating in a <code>for</code> loop!</p><h2 id="custom-array-like-objects">Custom array-like objects</h2><p>We can create our own custom array-like objects using vanilla object literals. For example, in the following code sample, we create an object having numbered keys and a <code>length</code> property and retrieve the value associated with the key <code>1</code> (i.e., the second element).</p><pre><code class="language-javascript">const x = {
    &apos;length&apos;: 3,
    &apos;0&apos;: 1,
    &apos;1&apos;: 2,
    &apos;2&apos;: 3
};

// Retrieve the second element:
const v = x[ 1 ];
// returns 2
</code></pre><p>Notice that we&apos;re able to use numeric &quot;indices&quot;. This is because, per the ECMAScript Standard, any non-symbol value used as a key is <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Working_with_objects?ref=blog.stdlib.io#accessing_properties">first converted</a> to a string before performing property-value look-up. As a result, so long as downstream consumers don&apos;t assume the existence of specialized methods and stick to indexed iteration alone, they can remain agnostic as to the kind of array-like object they are given.</p><p>For example, suppose we want to compute the sum of all elements in an array-like object. We could define the following function which accepts, as its sole argument, any object having a <code>length</code> property and supporting value access via numeric indices.</p><pre><code class="language-javascript">function sum( x ) {
    let total = 0;
    for ( let i = 0; i &lt; x.length; i++ ) {
        total += x[ i ];
    }
    return total;
}
</code></pre><p>We can then provide all manner of array-like objects and <code>sum</code> is none-the-wiser, being capable of handling them all. In the following code sample, we separately provide a generic array, a typed array, and an array-like object, and, for each input value, the <code>sum</code> function readily computes the sum of all elements.</p><pre><code class="language-javascript">const x1 = [ 1, 2, 3 ];
const s1 = sum( x1 );
// returns 6

const x2 = new Int32Array( [ 1, 2, 3 ] );
const s2 = sum( x2 );
// returns 6

const x3 = {
    &apos;length&apos;: 3,
    &apos;0&apos;: 1,
    &apos;1&apos;: 2,
    &apos;2&apos;: 3
};
const s3 = sum( x3 );
// returns 6
</code></pre><p>This is great! So long as downstream consumers make minimal assumptions regarding the existence of prototype methods, preferably avoiding the use of methods entirely, we can create functional APIs capable of operating on any indexed collection.</p><p>But wait, what about those scenarios in which we want to use alternative data structures, such that property-value pairs are not so neatly aligned, or we want to leverage deferred computation, or create views on existing array-like objects? How can we handle those use cases?</p><h2 id="motivating-use-cases">Motivating use cases</h2><h3 id="sparse-arrays">Sparse arrays</h3><p>Up until this point, we&apos;ve concerned ourselves with &quot;dense&quot; arrays (i.e., arrays in which all elements can be stored sequentially in a contiguous block of memory). In JavaScript, in addition to dense arrays, we have the concept of &quot;sparse&quot; arrays. The following code sample demonstrates sparse array creation by setting an element located at an index which vastly exceeds the length of the target array.</p><pre><code class="language-javascript">const x = [];

// Convert `x` into a sparse array:
x[ 10000 ] = 3.14;

// Retrieve the second element:
const v1 = x[ 1 ];
// returns undefined

// Retrieve the last element:
const v10000 = x[ 10000 ];
// returns 3.14

// Retrieve the number of elements:
const len = x.length;
// returns 10001
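
// Note: holes read back as `undefined`, which poisons arithmetic. For
// example, given a fresh sparse array constructed in the same way:
const sparse = [];
sparse[ 10000 ] = 3.14;

const poisoned = sparse[ 0 ] + sparse[ 10000 ];
// returns NaN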
</code></pre><p>Suffice it to say that, because we did not use the <code>Array.prototype.push</code> method to sequentially fill in values up to element <code>10000</code>, JavaScript engines responsible for compiling and optimizing your code treat the array as if it were a normal object, which is a reasonable optimization for avoiding unnecessary memory allocation. Creating a sparse array in this fashion is often referred to as converting an array into &quot;dictionary-mode&quot;, where an array is stored in a manner similar to a regular object instance. The above code sample is effectively equivalent to the following code sample where we explicitly define <code>x</code> to be an array-like object containing a single defined value at index <code>10000</code>.</p><pre><code class="language-javascript">const x = {
    &apos;length&apos;: 10001,
    &apos;10000&apos;: 3.14
};

// Retrieve the second element:
const v1 = x[ 1 ];
// returns undefined

// Retrieve the last element:
const v10000 = x[ 10000 ];
// returns 3.14
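
// A hypothetical sketch: an accessor protocol-compliant sparse array whose
// `get` method maps holes to `0`, allowing numerical routines to treat
// sparse and dense arrays alike:
const sparseZero = {
    'length': 10001,
    '10000': 3.14,
    'get': function ( index ) {
        const v = this[ index ];
        return ( v === void 0 ) ? 0.0 : v;
    }
};

const z1 = sparseZero.get( 1 );
// returns 0

const z2 = sparseZero.get( 10000 );
// returns 3.14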
</code></pre><p>Creating sparse arrays in this manner is fine for many use cases, but less than optimal in others. For example, in numerical computing, we&apos;d prefer that the &quot;holes&quot; (i.e., undefined values) in our sparse array would be <code>0</code>, rather than <code>undefined</code>. This way, the <code>sum</code> function we defined above could work on both sparse and dense arrays alike (setting aside, for the moment, any performance considerations).</p><h3 id="deferred-computation">Deferred computation</h3><p>Next up, consider the case in which we want to avoid materializing array values until they are actually needed. For example, in the following snippet, we&apos;d like the ability to define an array-like object without any pre-defined values and which supports &quot;lazy&quot; materialization such that values are materialized upon element access.</p><pre><code class="language-javascript">const x = {
    &apos;length&apos;: 3
};

// Materialize the first element:
const v0 = x[ 0 ];
// returns 1

// Materialize the second element:
const v1 = x[ 1 ];
// returns 2

// Materialize the third element:
const v2 = x[ 2 ];
// returns 3
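
// Note: as written, plain bracket accesses on the object above would
// actually return `undefined`; with the accessor protocol, however, "lazy"
// materialization becomes straightforward, as values can be computed on
// demand inside `get`. A hypothetical sketch:
const lazy = {
    'length': 3,
    'get': function ( index ) {
        // Compute the value only when it is requested:
        return index + 1;
    }
};

const w1 = lazy.get( 1 );
// returns 2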
</code></pre><p>To implement lazy materialization in JavaScript, we could utilize the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols?ref=blog.stdlib.io">Iterator protocol</a>; however, iterators are not directly &quot;indexable&quot; in a manner similar to array-like objects, and they don&apos;t generally have a <code>length</code> property indicating how many elements they contain. To know when they finish, we need to explicitly check the <code>done</code> property of the iterated value. While we can use the built-in <code>for...of</code> statement to iterate over Iterables, this requires either updating our <code>sum</code> implementation to use <code>for...of</code>, and thus require that all provided array-like objects also be Iterables, or introducing branching logic based on the type of value provided. Neither option is ideal, with both entailing increased complexity, constraints, performance-costs, or, more likely, some combination of the above.</p><h3 id="shared-memory-views">Shared memory views</h3><p>For our next motivating example, consider the case of creating arbitrary views over the same underlying block of memory. While typed arrays support creating contiguous views (e.g., by providing a shared <code>ArrayBuffer</code> to typed array constructors), situations may arise where we want to define non-contiguous views. In order to avoid unnecessary memory allocation, we&apos;d like the ability to define arbitrary iteration patterns which allow accessing particular elements within an underlying linear data buffer.</p><p>In the following snippet, we illustrate the use case of an array-like object containing complex numbers which are stored in memory as interleaved real and imaginary components. 
To allow accessing and manipulating just the real components within the array, we&apos;d like the ability to create a &quot;view&quot; atop the underlying data buffer which accesses every other element (i.e., just the real components). We could similarly create a &quot;view&quot; for only accessing the imaginary components.</p><pre><code class="language-javascript">// Define a data buffer of interleaved real and imaginary components:
const buf = [ 1.0, -2.0, 3.0, -4.0, 5.0, -6.0, 7.0, -8.0 ];

// Create a complex number array:
const x = new ComplexArray( buf );

// Retrieve the second element:
const z1 = x[ 1 ];
// returns Complex&lt;3.0, -4.0&gt;

// Create a view which only accesses real components:
const re = x.reals();

// Retrieve the real component of the second complex number:
const r = re[ 1 ];
// returns 3.0

// Mutate the real component:
re[ 1 ] = 10.0;

// Retrieve the second element of the complex number array:
const z2 = x[ 1 ];
// returns Complex&lt;10.0, -4.0&gt;
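// Similarly, create a view which only accesses imaginary components
// (hypothetical API mirroring reals()):
const im = x.imags();

// Retrieve the imaginary component of the second complex number:
const i2 = im[ 1 ];
// returns -4.0

// Under the hood, view element i maps to buf[ offset + i*stride ];
// for the real view, stride is 2 and offset is 0, and, for the
// imaginary view, stride is 2 and offset is 1.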
</code></pre><p>To implement such views, we&apos;d need three pieces of information: (1) an underlying data buffer, (2) an <strong>array stride</strong> which defines the number of locations in memory between successive array elements, and (3) an <strong>offset</strong> which defines the location in memory of the first indexed element. For contiguous arrays, the array stride is unity, and the offset is zero. In the example above, for a real component view, the array stride is two, and the offset is zero; for an imaginary component view, the array stride is also two, but the offset is unity. Ideally, we could define a means of generalized element access such that array-like objects which abstract element indexing can also be passed to array-agnostic APIs, such as <code>sum</code> above.</p><h3 id="backing-data-structures">Backing data structures</h3><p>As a final example, consider the case where we&apos;d like to compactly store an ordered sequence of boolean values. While we could use generic arrays (e.g., <code>[true,false,...,true]</code>) or <code>Uint8Array</code> typed arrays for this, doing so would not be the most memory-efficient approach. Instead, a more memory-efficient data structure would be a <a href="https://en.wikipedia.org/wiki/Bit_array?ref=blog.stdlib.io"><strong>bit array</strong></a> composed of a sequence of integer words in which, for each word of <em>n</em> bits, a 1-bit indicates a value of <code>true</code> and a 0-bit indicates a value of <code>false</code>.</p><p>The following code snippet provides a general idea of mapping a sequence of boolean values to bits, along with desired operations for setting and retrieving boolean elements.</p><pre><code class="language-javascript">const seq = [ true, false, true, ..., false, true, false ];
// bit array:    1      0     1  ...      0     1      0    =&gt; 101...010

const x = new BooleanBitArray( seq );

// Retrieve the first element:
const v0 = x[ 0 ];
// returns true

// Retrieve the second element:
const v1 = x[ 1 ];
// returns false

// Retrieve the third element:
const v2 = x[ 2 ];
// returns true

// Set the second element:
x[ 1 ] = true;

// Retrieve the second element:
const v3 = x[ 1 ];
// returns true
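// Under the hood (hypothetical layout), element i maps to bit i % n of
// word floor( i/n ), so the backing store needs only ceil( x.length / n )
// n-bit words, rather than one array element per boolean value.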
</code></pre><p>In JavaScript, we could attempt to subclass array or typed array built-ins in order to allow setting and getting elements via bracket notation; however, this approach would prove limiting, as subclassing alone does not allow intercepting property access (e.g., <code>x[i]</code>), which would be needed in order to map an index to a specific bit. We could try to combine subclassing with <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy?ref=blog.stdlib.io"><code>Proxy</code></a> objects, but this would come with a steep performance cost due to property accessor indirection&#x2014;something which we&apos;ll revisit later in this post.</p><h2 id="accessor-protocol">Accessor Protocol</h2><p>To accommodate the above use cases and more, we&apos;d like to introduce a conceptually simple, but very powerful, new protocol: the <strong>accessor protocol</strong> for generalized element access and iteration of array-like objects. The protocol doesn&apos;t require new syntax or built-ins; it only defines a standard way to get and set element values.</p><p>Any array-like object can implement the accessor protocol (also known as the <strong>get/set protocol</strong>) by following two conventions.</p><ol><li><strong>Define a <code>get</code> method.</strong> A <code>get</code> method accepts a single argument: an integer value specifying the index of the element to return. Similar to bracket notation for built-in array and typed array objects, the protocol requires that the <code>get</code> method be defined for integer values that are nonnegative and within array bounds. Protocol-compliant implementations may choose to support negative index values, but that behavior should not be considered portable. Similarly, how implementations choose to handle out-of-bounds indices is implementation-dependent; implementations may return <code>undefined</code>, raise an exception, wrap, clamp, or exhibit some other behavior. 
By not placing restrictions on out-of-bounds behavior, the protocol can more readily accommodate a broader set of use cases.</li><li><strong>Define a <code>set</code> method.</strong> A <code>set</code> method accepts two arguments: the value to set and an integer value specifying the index of the element to replace. Similar to the <code>get</code> method, the protocol requires that the <code>set</code> method be defined for integer indices that are nonnegative and within array bounds. And similarly, protocol-compliant implementations may choose to support negative index values, but that behavior should not be considered portable, and how implementations choose to handle out-of-bounds indices is implementation-dependent.</li></ol><p>The following code sample demonstrates an accessor protocol-compliant array-like object.</p><pre><code class="language-javascript">// Define a data buffer:
const data = [ 1, 2, 3, 4, 5 ];

// Define a minimal array-like object supporting the accessor protocol:
const x = {
    &apos;length&apos;: 5,
    &apos;get&apos;: ( index ) =&gt; data[ index ],
    &apos;set&apos;: ( value, index ) =&gt; data[ index ] = value
};

// Retrieve the third element:
const v1 = x.get( 2 );
// returns 3

// Set the third element:
x.set( 10, 2 );

// Retrieve the third element:
const v2 = x.get( 2 );
// returns 10
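// Out-of-bounds behavior is implementation-dependent; this minimal
// implementation simply forwards the index to the underlying buffer:
const v3 = x.get( 10 );
// returns undefined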
</code></pre><p>Three things to note about the above code sample.</p><ol><li>The above example demonstrates another potential use case&#x2014;namely, an array-like object which doesn&apos;t own the underlying data buffer and, instead, acts as a proxy for element access requests.</li><li>The signature for the <code>set</code> method may seem counter-intuitive, as one might expect the arguments to be reversed. The rationale for <code>value</code> being the first argument and <code>index</code> being the second argument is to be consistent with built-in typed array <code>set</code> method conventions, where the first argument is an array from which to copy values and the second argument is optional and specifies the offset at which to begin writing values from the first argument. While one could argue that <code>set(v,i)</code> is not ideal, given the argument order precedent found in built-ins, the protocol follows that precedent in order to avoid confusion.</li><li>In contrast to the built-in typed array <code>set</code> method which expects an array (or typed array) for the first argument, the accessor protocol only requires that protocol-compliant implementations support a single element value. Protocol-compliant implementations may choose to support first arguments which are array-like objects and do so in a manner emulating arrays and typed arrays; however, such behavior should not be considered portable.</li></ol><p>In short, in order to be accessor protocol-compliant, an array-like object only needs to support single element retrieval and mutation via dedicated <code>get</code> and <code>set</code> methods, respectively.</p><p>While built-in typed arrays provide a <code>set</code> method, they are <strong>not</strong> accessor protocol-compliant, as they lack a dedicated <code>get</code> method, and built-in arrays are also <strong>not</strong> accessor protocol-compliant, as they lack both a <code>get</code> and a <code>set</code> method. 
Their lack of compliance is expected and, from the perspective of the protocol, by design, in order to distinguish indexed collections from accessor protocol-compliant array-like objects.</p><p>Array-like objects implementing the accessor protocol should be expected to pay a small, but likely non-negligible, performance penalty relative to indexed collections using bracket notation for element access. As such, we expect that performance-conscious array-agnostic APIs will maintain two separate code paths: one for indexed collections and one for collections implementing the accessor protocol. Hence, the presence or absence of <code>get</code> and <code>set</code> methods provides a useful heuristic for determining which code path to take. In general, for indexed collections which are also accessor protocol-compliant, the <code>get</code> and <code>set</code> methods should <strong>always</strong> take precedence over bracket notation.</p><p>The following code sample refactors the <code>sum</code> API defined above to accommodate array-like objects supporting the accessor protocol.</p><pre><code class="language-javascript">function isAccessorArray( x ) {
    return ( typeof x.get === &apos;function&apos; &amp;&amp; typeof x.set === &apos;function&apos; );
}

function sum( x ) {
    let total = 0;
    
    // Handle accessor protocol-compliant collections...
    if ( isAccessorArray( x ) ) {
        for ( let i = 0; i &lt; x.length; i++ ) {
            total += x.get( i );
        }
        return total;
    }
    // Handle indexed collections...
    for ( let i = 0; i &lt; x.length; i++ ) {
        total += x[ i ];
    }
    return total;
}
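
// Both code paths compute the same result:
const s1 = sum( [ 1, 2, 3 ] );
// returns 6

const s2 = sum({
    &apos;length&apos;: 3,
    &apos;get&apos;: ( i ) =&gt; i + 1,
    &apos;set&apos;: ( v, i ) =&gt; v
});
// returns 6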
</code></pre><p>For array-agnostic APIs which prefer brevity over performance optimization, one can refactor the previous code sample to use a small, reusable helper function which abstracts array element access and allows loop consolidation. A demonstration of this refactoring is shown in the following code sample.</p><pre><code class="language-javascript">function isAccessorArray( x ) {
    return ( typeof x.get === &apos;function&apos; &amp;&amp; typeof x.set === &apos;function&apos; );
}

function array2accessor( x ) {
    if ( isAccessorArray( x ) ) {
        return x;
    }
    return {
        &apos;length&apos;: x.length,
        &apos;get&apos;: ( i ) =&gt; x[ i ],
        &apos;set&apos;: ( v, i ) =&gt; x[ i ] = v
    };
}

function sum( x ) {
    let total = 0;
   
    x = array2accessor( x );
    for ( let i = 0; i &lt; x.length; i++ ) {
        total += x.get( i );
    }
    return total;
}
</code></pre><p>As before, we can then provide all manner of array-like objects, including those supporting the accessor protocol, and <code>sum</code> is none the wiser, being capable of handling them all. In the following code sample, we separately provide a generic array, a typed array, an array-like object, and a &quot;lazy&quot; array implementing the accessor protocol, and, for each input value, the <code>sum</code> function readily computes the sum of all elements.</p><pre><code class="language-javascript">const x1 = [ 1, 2, 3 ];
const s1 = sum( x1 );
// returns 6

const x2 = new Int32Array( [ 1, 2, 3 ] );
const s2 = sum( x2 );
// returns 6

const x3 = {
    &apos;length&apos;: 3,
    &apos;0&apos;: 1,
    &apos;1&apos;: 2,
    &apos;2&apos;: 3
};
const s3 = sum( x3 );
// returns 6

const x4 = {
    &apos;length&apos;: 3,
    &apos;get&apos;: ( i ) =&gt; i + 1,
    &apos;set&apos;: ( v, i ) =&gt; x4[ i ] = v
};
const s4 = sum( x4 );
// returns 6
</code></pre><h2 id="alternatives">Alternatives</h2><p>At this point, you may be thinking that the accessor protocol seems useful, but why invent something new? Doesn&apos;t JavaScript already have mechanisms for inheriting indexed collection semantics (subclassing built-ins), supporting lazy materialization (iterators), proxying element access requests (<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy?ref=blog.stdlib.io"><code>Proxy</code></a> objects), and accessing elements via a method (<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/at?ref=blog.stdlib.io"><code>Array.prototype.at</code></a>)?</p><p>Yes, JavaScript does have built-in mechanisms for supporting, at least partially, the use cases outlined above; however, each approach has limitations, which I&apos;ll discuss below.</p><h3 id="subclassing-built-ins">Subclassing built-ins</h3><p>In the early days of the web, prior to built-in subclassing support, third-party libraries would commonly add methods directly to the prototypes of built-in global objects in order to expose functionality missing from the JavaScript standard library&#x2014;a practice which was, and still remains, frowned upon. After standardization of ECMAScript 2015, JavaScript gained support for subclassing built-ins, including arrays and typed arrays. By subclassing built-ins, we can create specialized indexed collections which not only extend built-in behavior, but also retain the semantics of bracket notation for indexed collections. Subclassing can be particularly beneficial when we want to augment inherited classes with new properties and methods.</p><p>The following code sample demonstrates extending the built-in <code>Array</code> class to support in-place element-wise addition.</p><pre><code class="language-javascript">/**
* Class which subclasses the built-in Array class.
*/
class SpecialArray extends Array {
    /**
    * Performs in-place element-wise addition.
    *
    * @param {ArrayLikeObject} other - input array
    * @throws {RangeError} must have the same number of elements
    * @returns {SpecialArray} the mutated array
    */
    add( other ) {
        if ( other.length !== this.length ) {
            throw new RangeError( &apos;Must provide an array having the same length.&apos; );
        }
        for ( let i = 0; i &lt; this.length; i++ ) {
            this[ i ] += other[ i ];
        }
        return this;
    }
}

// Create a new SpecialArray instance:
const x = new SpecialArray( 10 );

// Call an inherited method to fill the array:
x.fill( 5 );

// Retrieve the second element:
const v1 = x[ 1 ];
// returns 5

// Create an array to add:
const y = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];

// Perform element-wise addition:
x.add( y );

// Retrieve the second element:
const v2 = x[ 1 ];
// returns 7
</code></pre><p>While powerful, subclassing built-ins has several limitations.</p><ol><li>With respect to the use cases discussed above, subclassing built-ins only satisfies the desire for preserving built-in bracket notation semantics. Subclassing does not confer support for lazy materialization, separate backing data structures, or shared memory views.</li><li>Subclassing built-ins imposes a greater implementation burden on subclasses. Particularly for more &quot;exotic&quot; array types, such as read-only arrays, subclasses may be forced to override and re-implement parent methods in order to ensure consistent behavior (e.g., returning a collection having a desired instance type).</li><li>Subclassing built-ins imposes an ongoing maintenance burden. As the ECMAScript Standard evolves and built-in objects gain additional properties and methods, those properties and methods may need to be overridden and re-implemented in order to preserve desired semantics.</li><li>Subclassing built-ins influences downstream user expectations. If a subclass inherits from a <code>Float64Array</code>, users will likely expect that any subclass satisfying an <code>instanceof</code> check supports all inherited methods, some of which may not be possible to support (e.g., for a read-only array, methods supporting mutation). 
Distinct (i.e., non-coupled) classes which explicitly own the interface contract will likely be better positioned to manage user expectations.</li><li>While subclassing built-ins can encourage reuse, object-oriented programming design patterns can more generally lead to code bloat (read: increased bundle sizes in web applications), as the more methods are added or overridden, the less likely any one of those methods is actually used in a given application.</li></ol><p>For the reasons listed above, inheriting from built-ins is generally <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/extends?ref=blog.stdlib.io#subclassing_built-ins">discouraged</a> in favor of composition due to non-negligible performance and security impacts. One of the principal aims of the accessor protocol is to provide the smallest API surface area necessary in order to facilitate generalized array-like object iteration. Subclassing built-ins is unable to fulfill that mandate.</p><h3 id="iterators">Iterators</h3><p>Objects implementing the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols?ref=blog.stdlib.io">iterator protocol</a> can readily support deferred computation (i.e., &quot;lazy&quot; materialization), but, for several of the use cases outlined above, the iterator protocol has limited applicability. More broadly, relying on the iterator protocol has three limitations.</p><p>First, as alluded to earlier in this post, the iterator protocol does not require that objects have a <code>length</code> property, and, in fact, iterators are allowed to be infinite. As a consequence, for operations requiring fixed memory allocation (e.g., as might be the case when needing to materialize values before passing a typed array from JavaScript to C within a Node.js native add-on), the only way to know how much memory to allocate is by first materializing all iterator values. 
Doing so may require first filling a temporary array before values can be copied to a final destination. This process is likely to be inefficient.</p><p>Furthermore, operations involving multiple iterators can quickly become complex. For example, suppose I want to perform element-wise addition for two iterators <code>X</code> and <code>Y</code> (i.e., <code>x0+y0</code>, <code>x1+y1</code>, ..., <code>xn+yn</code>). This works fine if <code>X</code> and <code>Y</code> have the same &quot;length&quot;, but what if they have different lengths? Should iteration stop once one of the iterators ends? Or should a fill value, such as zero, be used? Or maybe this is unexpected behavior, and we should raise an exception? Accordingly, generalized downstream APIs accepting iterators may require tailored options to support various edge cases which simply aren&apos;t as applicable when working with array-like objects.</p><p>We could, of course, define a protocol requiring that iterators have a <code>length</code> property, but that leads us to the next limitation: iterators do not support random access. In order to access the <code>n</code>-th iterated value, one must materialize the previous <code>n-1</code> values. This is also likely to be inefficient.</p><p>Lastly, in general, code paths operating on iterators are significantly slower than equivalent code paths operating on indexed collections. While the accessor protocol does introduce overhead relative to using bracket notation due to explicitly needing to call a method, the overhead is less than that introduced by iterators.</p><p>The following code sample defines three functions: one for computing the sum of an indexed collection, one for computing the sum of an array-like object implementing the accessor protocol, and a third for computing the sum of an iterator using JavaScript&apos;s built-in <code>for...of</code> syntax.</p><pre><code class="language-javascript">function indexedSum( x ) {
    let total = 0;
    for ( let i = 0; i &lt; x.length; i++ ) {
        total += x[ i ];
    }
    return total;
}

function accessorSum( x ) {
    let total = 0;
    for ( let i = 0; i &lt; x.length; i++ ) {
        total += x.get( i );
    }
    return total;
}

function iteratorSum( x ) {
    let total = 0;
    for ( const v of x ) {
        total += v;
    }
    return total;
}
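
// Generic arrays are both indexed collections and iterable, so the same
// input can be provided to either code path:
const data = [ 1, 2, 3 ];

const t1 = indexedSum( data );
// returns 6

const t2 = iteratorSum( data );
// returns 6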
</code></pre><p>To assess the performance of each function, I ran benchmarks on an Apple M1 Pro running macOS and Node.js v20.9.0. For a set of array lengths ranging from ten elements to one million elements, I repeated benchmarks three times and used the maximum observed rate for subsequent analysis and chart display. The results are provided in the following grouped column chart.</p><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/iterator_benchmarks_white_bkgd.png" class="kg-image" alt="The Accessor Protocol" loading="lazy" width="2000" height="1078" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/iterator_benchmarks_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/08/iterator_benchmarks_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/size/w1600/2024/08/iterator_benchmarks_white_bkgd.png 1600w, https://blog.stdlib.io/content/images/2024/08/iterator_benchmarks_white_bkgd.png 2000w" sizes="(min-width: 720px) 720px"></figure><p>In the chart above, columns along the x-axis are grouped according to input array/iterator length. Accordingly, the first group of columns corresponds to input arrays/iterators having <code>10</code> elements, the second group to input arrays/iterators having <code>100</code> elements, and so on. The y-axis corresponds to normalized rates relative to the performance observed for indexed collections. For example, if the maximum observed rate when summing over an indexed collection was <code>100</code> iterations per second and the maximum observed rate when summing over an iterator was <code>70</code> iterations per second, the normalized rate is <code>70/100</code>, or <code>0.7</code>. Hence, a rate equal to unity indicates an observed rate equal to that of indexed collections. 
Anything less than unity indicates an observed rate less than that of indexed collections (i.e., summation involving a given input was slower than using built-in bracket notation). Anything greater than unity indicates an observed rate greater than that of indexed collections (i.e., summation involving a given input was faster than using built-in bracket notation).</p><p>From the chart, we can observe that, for all array lengths, neither accessor protocol-compliant array-like objects nor iterators matched or exceeded the performance of indexed collections. Array-like objects implementing the accessor protocol were 15% slower than indexed collections, and iterators were 30% slower than indexed collections. These results confirm that the accessor protocol introduces an overhead relative to indexed collections, but not nearly as much as the overhead introduced by iterators.</p><p>In short, the accessor protocol is both more flexible and more performant than using iterators.</p><h3 id="computed-properties">Computed properties</h3><p>Another alternative to the accessor protocol is to use defined properties having property accessors. Prior to ECMAScript standardization of <code>Proxy</code> objects and support for <code>Array</code> subclassing, property descriptors were the primary way to implement lazy materialization and proxied element access while emulating the built-in bracket notation of indexed collections.</p><p>The following code sample shows an example class returning an array-like object emulating built-in bracket notation by explicitly defining property descriptors for all elements. Each property descriptor defines an accessor property with specialized <code>get</code> and <code>set</code> accessors.</p><pre><code class="language-javascript">/**
* Class emulating built-in bracket notation for lazy materialization without subclassing.
*/
class LazyArray {
    // Define private instance fields:
    #data; // memoized value cache

    /**
    * Returns a new fixed-length &quot;lazy&quot; array.
    *
    * @param {number} len - number of elements
    * @returns {LazyArray} lazy array instance
    */
    constructor( len ) {
        Object.defineProperty( this, &apos;length&apos;, {
            &apos;configurable&apos;: false,
            &apos;enumerable&apos;: false,
            &apos;writable&apos;: false,
            &apos;value&apos;: len
        });
        for ( let i = 0; i &lt; len; i++ ) {
            Object.defineProperty( this, i, {
                &apos;configurable&apos;: false,
                &apos;enumerable&apos;: true,
                &apos;get&apos;: this.#get( i ),
                &apos;set&apos;: this.#set( i )
            });
        }
        this.#data = {};
    }

    /**
    * Returns a getter.
    *
    * @private
    * @param {number} index - index
    * @returns {Function} getter
    */
    #get( index ) {
        return get;

        /**
        * Returns an element.
        *
        * @private
        * @returns {*} element
        */
        function get() {
            const v = this.#data[ index ];
            if ( v === void 0 ) {
                // Perform &quot;lazy&quot; materialization:
                this.#data[ index ] = index; // note: toy example
                return index;
            }
            return v;
        }
    }

    /**
    * Returns a setter.
    *
    * @private
    * @param {number} index - index
    * @returns {Function} setter
    */
    #set( index ) {
        return set;

        /**
        * Sets an element value.
        *
        * @private
        * @param {*} value - value to set
        * @returns {boolean} boolean indicating whether a value was set
        */
        function set( value ) {
            this.#data[ index ] = value;
            return true;
        }
    }
}

// Create a new &quot;lazy&quot; array:
const x = new LazyArray( 10 );

// Print the list of elements:
for ( let i = 0; i &lt; x.length; i++ ) {
    console.log( x[ i ] );
}
</code></pre><p>There are several issues with this approach:</p><ol><li>Explicitly defining property descriptors is <strong>very</strong> expensive. Thus, especially for large arrays, instantiation can become prohibitively slow.</li><li>Creating separate accessors for each property requires significantly more memory than the accessor protocol. The latter only needs two methods to serve all elements. The former requires two methods for every element.</li><li>Element access is orders of magnitude slower than both built-in bracket notation and the accessor protocol.</li></ol><p>In the following grouped column chart, I show benchmark results for computing the sum over an array-like object which emulates built-in bracket notation by using property accessors. The chart extends the previous grouped column chart by including the same column groups as the previous chart and adding a new column to each group corresponding to property accessor performance results. As can be observed, using property accessors is more than one hundred times slower than either indexed collection built-in bracket notation or the accessor protocol.</p><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/property_accessor_benchmarks_white_bkgd.png" class="kg-image" alt="The Accessor Protocol" loading="lazy" width="2000" height="1078" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/property_accessor_benchmarks_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/08/property_accessor_benchmarks_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/size/w1600/2024/08/property_accessor_benchmarks_white_bkgd.png 1600w, https://blog.stdlib.io/content/images/2024/08/property_accessor_benchmarks_white_bkgd.png 2000w" sizes="(min-width: 720px) 720px"></figure><h3 id="proxies">Proxies</h3><p>The <a 
href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy?ref=blog.stdlib.io"><code>Proxy</code></a> object allows you to create a proxy for another object. During its creation, the proxy can be configured to intercept and redefine fundamental object operations, including getting and setting properties. While proxy objects are commonly used for logging property accesses, validation, formatting, or sanitizing inputs, they enable novel and extremely powerful extensions to built-in behavior. One such extension&#x2014;implementing Python-like indexing in JavaScript&#x2014;will be the subject of a future post.</p><p>The following code sample defines a function for creating proxied array-like objects which intercept the operations for getting and setting property values. The proxy is created by providing two parameters:</p><ol><li><code>target</code>: the original object we want to proxy.</li><li><code>handler</code>: an object defining which operations to intercept and how to redefine those operations.</li></ol><pre><code class="language-javascript">/**
* Tests whether a string contains only integer values.
*
* @param {string} str - input string
* @returns {boolean} boolean indicating whether a string contains only integer values
*/
function isDigitString( str ) {
    return /^\d+$/.test( str );
}

/**
* Returns a proxied array-like object.
*
* @param {number} len - array length
* @returns {Proxy} proxy object
*/
function lazyArray( len ) {
    const target = {
        &apos;length&apos;: len
    };
    return new Proxy( target, {
        &apos;get&apos;: ( target, property ) =&gt; {
            if ( isDigitString( property ) ) {
                return parseInt( property, 10 ); // note: toy example
            }
            return target[ property ];
        },
        &apos;set&apos;: ( target, property, value ) =&gt; {
            target[ property ] = value;
            return true;
        }
    });
}

// Create a new &quot;lazy&quot; array:
const x = lazyArray( 10 );

// Print the list of elements:
for ( let i = 0; i &lt; x.length; i++ ) {
    console.log( x[ i ] );
}
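
// Non-index properties are not intercepted as element accesses and are
// instead forwarded to the target:
const len = x.length;
// returns 10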
</code></pre><p>While proxy objects avoid many of the issues described above for subclassing, iterators, and property accessors, including random access, instantiation costs, and general complexity, their primary limitation at the time of this blog post is performance.</p><p>The following grouped column chart builds on the previous column charts by adding a new column to each group corresponding to proxy object results. As can be observed, using proxy objects fares no better than the property accessor approach described above. Performance is on par with property accessors and more than one hundred times slower than either indexed collection built-in bracket notation or the accessor protocol.</p><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/08/proxy_benchmarks_white_bkgd.png" class="kg-image" alt="The Accessor Protocol" loading="lazy" width="2000" height="1078" srcset="https://blog.stdlib.io/content/images/size/w600/2024/08/proxy_benchmarks_white_bkgd.png 600w, https://blog.stdlib.io/content/images/size/w1000/2024/08/proxy_benchmarks_white_bkgd.png 1000w, https://blog.stdlib.io/content/images/size/w1600/2024/08/proxy_benchmarks_white_bkgd.png 1600w, https://blog.stdlib.io/content/images/2024/08/proxy_benchmarks_white_bkgd.png 2000w" sizes="(min-width: 720px) 720px"></figure><h3 id="using-at-rather-than-get">Using &quot;at&quot; rather than &quot;get&quot;</h3><p>The 2022 revision of the ECMAScript Standard added an <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/at?ref=blog.stdlib.io"><code>at</code></a> method to array and typed array prototypes which accepts a single integer argument and returns the element at that index, allowing for both positive and negative integers. Why, then, do we need another method for retrieving an array element as proposed in the accessor protocol? 
This question seems especially salient given that the protocol&apos;s <code>get</code> method only requires support for nonnegative integer arguments, making the <code>get</code> method seem less powerful.</p><p>There are a few reasons why the accessor protocol chooses to use <code>get</code>, rather than <code>at</code>.</p><ol><li>The name <code>get</code> has symmetry with the name <code>set</code>.</li><li>The <code>at</code> method does not have a built-in method equivalent for setting element values. The <code>set</code> method is only present on typed arrays, not generic arrays, and does not support negative target offsets.</li><li>The <code>at</code> method does not match built-in bracket notation semantics. When using a negative integer within square brackets, the integer value is serialized to a string <strong>before</strong> property lookup (i.e., <code>x[-1]</code> is equivalent to <code>x[&apos;-1&apos;]</code>). Unless negative integer properties are explicitly defined, <code>x[-1]</code> will return <code>undefined</code>. In contrast, <code>x.at(-1)</code> is equivalent to <code>x[x.length-1]</code>, which, for non-empty arrays, will return the last array element. Accordingly, the <code>get</code> method of the accessor protocol allows protocol-compliant implementations to match built-in bracket notation semantics exactly.</li><li>The accessor protocol does <strong>not</strong> specify the behavior of out-of-bounds index arguments. In contrast, when an index argument is negative, the <code>at</code> method normalizes a negative index value according to <code>index + x.length</code>. This, however, is not the only reasonable behavior, depending on the use case. For example, an array-like object implementation may want to clamp out-of-bounds index arguments, such that indices less than zero are clamped to zero (i.e., the first index) and indices greater than <code>x.length-1</code> are clamped to <code>x.length-1</code> (i.e., the last index). 
Alternatively, an array-like object implementation may want to wrap out-of-bounds index arguments using modulo arithmetic. Lastly, an array-like object implementation may want to raise an exception when an index is out-of-bounds. In short, the <code>at</code> method prescribes a particular mode of behavior, which may not be appropriate for all use cases.</li><li>By only requiring support for nonnegative integer arguments, the accessor protocol allows protocol-compliant implementations to minimize branching and ensure better performance. While convenient, support for negative indices is not necessary for generalized array-like object iteration.</li><li>As the ECMAScript Standard does not define a <code>get</code> method for arrays and typed arrays (at least not yet!), the presence or absence of a <code>get</code> method in combination with a <code>set</code> method and <code>length</code> property allows for distinguishing indexed collections from array-like objects implementing the accessor protocol. The combination of <code>at</code>, <code>set</code>, and <code>length</code> would not be sufficient for making such a distinction. This ability is important in order to allow downstream array-like object consumers to implement optimized code paths and ensure optimal performance.</li></ol><p>For these reasons, an <code>at</code> method is not a suitable candidate for use in generalized array-like object iteration.</p><h2 id="examples">Examples</h2><p>Now that we&apos;ve considered the alternatives and established the motivation and need for the accessor protocol, what can we do with it?! Glad you asked! To answer this question, I provide several concrete implementations below.</p><h3 id="complex-number-arrays">Complex number arrays</h3><p>Complex numbers have applications in many scientific domains, including signal processing, fluid dynamics, and quantum mechanics. 
We can extend the concept of typed arrays to the realm of complex numbers by storing real and imaginary components as interleaved values within a real-valued typed array. In the following code sample, I define a minimal immutable complex number constructor and a complex number array class implementing the accessor protocol.</p><pre><code class="language-javascript">/**
* Class defining a minimal immutable complex number.
*/
class Complex {
    // Define private instance fields:
    #re; // real component
    #im; // imaginary component

    /**
    * Returns a new complex number instance.
    *
    * @param {number} re - real component
    * @param {number} im - imaginary component
    * @returns {Complex} complex number instance
    */
    constructor( re, im ) {
        this.#re = re;
        this.#im = im;
    }

    /**
    * Returns the real component of a complex number.
    *
    * @returns {number} real component
    */
    get re() {
        return this.#re;
    }

    /**
    * Returns the imaginary component of a complex number.
    *
    * @returns {number} imaginary component
    */
    get im() {
        return this.#im;
    }
}

/**
* Class defining a complex number array implementing the accessor protocol.
*/
class Complex128Array {
    // Define private instance fields:
    #length; // array length
    #data;   // underlying data buffer

    /**
    * Returns a new complex number array instance.
    *
    * @param {number} len - array length
    * @returns {Complex128Array} complex array instance
    */
    constructor( len ) {
        this.#length = len;
        this.#data = new Float64Array( len*2 ); // accommodate interleaved components
    }

    /**
    * Returns the array length.
    *
    * @returns {number} array length
    */
    get length() {
        return this.#length;
    }

    /**
    * Returns an array element.
    *
    * @param {integer} index - element index
    * @returns {(Complex|void)} element value
    */
    get( index ) {
        if ( index &lt; 0 || index &gt;= this.#length ) {
            return;
        }
        const ptr = index * 2; // account for interleaved components
        return new Complex( this.#data[ ptr ], this.#data[ ptr+1 ] );
    }

    /**
    * Sets an array element.
    *
    * @param {Complex} value - value to set
    * @param {integer} index - element index
    * @returns {void}
    */
    set( value, index ) {
        if ( index &lt; 0 || index &gt;= this.#length ) {
            return;
        }
        const ptr = index * 2; // account for interleaved components
        this.#data[ ptr ] = value.re;
        this.#data[ ptr+1 ] = value.im;
    }
}

// Create a new complex number array:
const x = new Complex128Array( 10 );
// returns &lt;Complex128Array&gt;

// Retrieve the second element:
const z1 = x.get( 1 );
// returns &lt;Complex&gt;

const re1 = z1.re;
// returns 0.0

const im1 = z1.im;
// returns 0.0

// Set the second element:
x.set( new Complex( 3.0, 4.0 ), 1 );

// Retrieve the second element:
const z2 = x.get( 1 );
// returns &lt;Complex&gt;

const re2 = z2.re;
// returns 3.0

const im2 = z2.im;
// returns 4.0
</code></pre><p>If you are interested in a concrete implementation of complex number arrays, see the <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex128?ref=blog.stdlib.io"><code>Complex128Array</code></a> and <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex64?ref=blog.stdlib.io"><code>Complex64Array</code></a> packages provided by <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>. We&apos;ll have more to say about these packages in future blog posts.</p><h3 id="sparse-arrays-1">Sparse arrays</h3><p>Applications of sparse arrays commonly arise in network theory, numerical analysis, natural language processing, and other areas of science and engineering. When data is &quot;sparse&quot; (i.e., most elements are zero), sparse array storage can be particularly advantageous in reducing required memory storage and in accelerating the computation of operations involving only non-zero elements.</p><p>In the following code sample, I define a minimal accessor protocol-compliant sparse array class using the dictionary of keys (DOK) format and supporting arbitrary fill values. Support for arbitrary fill values is useful as it extends the concept of sparsity to any array having a majority of elements equal to the same value. For such arrays, we can compress an array to a format which stores a single fill value and only those elements which are not equal to the repeated value. This approach is implemented below.</p><pre><code class="language-javascript">/**
* Class defining a sparse array implementing the accessor protocol.
*/
class SparseArray {
    // Define private instance fields:
    #length; // array length
    #data;   // dictionary containing array elements
    #fill;   // fill value

    /**
    * Returns a new sparse array instance.
    *
    * @param {number} len - array length
    * @param {*} fill - fill value
    * @returns {SparseArray} sparse array instance
    */
    constructor( len, fill ) {
        this.#length = len;
        this.#data = {};
        this.#fill = fill;
    }

    /**
    * Returns the array length.
    *
    * @returns {number} array length
    */
    get length() {
        return this.#length;
    }

    /**
    * Returns an array element.
    *
    * @param {number} index - element index
    * @returns {*} element value
    */
    get( index ) {
        if ( index &lt; 0 || index &gt;= this.#length ) {
            return;
        }
        const v = this.#data[ index ];
        if ( v === void 0 ) {
            return this.#fill;
        }
        return v;
    }

    /**
    * Sets an array element.
    *
    * @param {*} value - value to set
    * @param {number} index - element index
    * @returns {void}
    */
    set( value, index ) {
        if ( index &lt; 0 || index &gt;= this.#length ) {
            return;
        }
        this.#data[ index ] = value;
    }
}

// Create a new sparse array:
const x = new SparseArray( 10, 0.0 );

// Retrieve the second element:
const v1 = x.get( 1 );
// returns 0.0

// Set the second element:
x.set( 4.0, 1 );

// Retrieve the second element:
const v2 = x.get( 1 );
// returns 4.0
</code></pre><h3 id="lazy-arrays">Lazy arrays</h3><p>While less broadly applicable, situations may arise in which you want an array-like object supporting lazy materialization and random access. For example, suppose each element is the result of an expensive computation, and you want to defer the computation of each element until first accessed.</p><p>In the following code sample, I define a class supporting lazy materialization of randomly generated element values. When an element is accessed, a class instance eagerly computes all un-computed element values up to and including the accessed element. Once an element value is computed, the value is memoized and can only be overridden by explicitly setting the element.</p><pre><code class="language-javascript">/**
* Class defining an array-like object supporting lazy materialization of random values.
*/
class LazyRandomArray {
    // Define private instance fields:
    #data;   // underlying data buffer

    /**
    * Returns a new lazy random array.
    *
    * @returns {LazyRandomArray} new instance
    */
    constructor() {
        this.#data = [];
    }

    /**
    * Materializes array elements.
    *
    * @private
    * @param {number} len - array length
    */
    #materialize( len ) {
        for ( let i = this.#data.length; i &lt; len; i++ ) {
            this.#data.push( Math.random() );
        }
    }

    /**
    * Returns the array length.
    *
    * @returns {number} array length
    */
    get length() {
        return this.#data.length;
    }

    /**
    * Returns an array element.
    *
    * @param {number} index - element index
    * @returns {*} element value
    */
    get( index ) {
        if ( index &lt; 0 ) {
            return;
        }
        if ( index &gt;= this.#data.length ) {
            this.#materialize( index+1 );
        }
        return this.#data[ index ];
    }

    /**
    * Sets an array element.
    *
    * @param {*} value - value to set
    * @param {number} index - element index
    * @returns {void}
    */
    set( value, index ) {
        if ( index &lt; 0 ) {
            return;
        }
        if ( index &gt;= this.#data.length ) {
        // Materialize `index+1` elements in order to ensure &quot;fast&quot; elements:
            this.#materialize( index+1 );
        }
        this.#data[ index ] = value;
    }
}

// Create a new lazy array:
const x = new LazyRandomArray();

// Retrieve the tenth element:
const v1 = x.get( 9 );
// returns &lt;number&gt;

// Set the tenth element:
x.set( 4.0, 9 );

// Retrieve the tenth element:
const v2 = x.get( 9 );
// returns 4.0

// Return the number of elements in the array:
const len = x.length;
// returns 10
</code></pre><h2 id="stdlib-1">stdlib</h2><p>While array-like objects implementing the accessor protocol are useful in their own right, they become all the more powerful when combined with functional APIs which are accessor protocol-aware. Fortunately, <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> treats accessor protocol-compliant objects as first-class citizens, providing support for them throughout its codebase.</p><p>For example, the following code sample uses <a href="https://github.com/stdlib-js/array-put?ref=blog.stdlib.io"><code>@stdlib/array-put</code></a> to replace the elements of an accessor protocol-compliant strided array at specified indices.</p><pre><code class="language-javascript">const put = require( &apos;@stdlib/array-put&apos; );

/**
* Class defining a strided array.
*/
class StridedArray {
    // Define private instance fields:
    #length; // array length
    #data;   // underlying data buffer
    #stride; // step size (i.e., the index increment between successive values)
    #offset; // index of the first indexed value in the data buffer

    /**
    * Returns a new StridedArray instance.
    *
    * @param {integer} N - number of indexed elements
    * @param {ArrayLikeObject} data - underlying data buffer
    * @param {number} stride - step size
    * @param {number} offset - index of the first indexed value in the data buffer
    * @returns {StridedArray} strided array instance
    */
    constructor( N, data, stride, offset ) {
        this.#length = N;
        this.#data = data;
        this.#stride = stride;
        this.#offset = offset;
    }

    /**
    * Returns the array length.
    *
    * @returns {number} array length
    */
    get length() {
        return this.#length;
    }

    /**
    * Returns the element located at a specified index.
    *
    * @param {number} index - element index
    * @returns {(void|*)} element value
    */
    get( index ) {
        return this.#data[ this.#offset + index*this.#stride ];
    }

    /**
    * Sets the value for an element located at a specified index.
    *
    * @param {*} value - value to set
    * @param {number} index - element index
    */
    set( value, index ) {
        this.#data[ this.#offset + index*this.#stride ] = value;
    }
}

// Define a data buffer:
const buf = new Float64Array( [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 ] );

// Create a strided view over the data buffer:
const x1 = new StridedArray( 4, buf, 2, 1 );

// Retrieve the second element:
const v1 = x1.get( 1 );
// returns 4.0

// Retrieve the fourth element:
const v2 = x1.get( 3 );
// returns 8.0

// Replace the second and fourth elements with new values:
put( x1, [ 1, 3 ], [ -v1, -v2 ] );

// Retrieve the second element:
const v3 = x1.get( 1 );
// returns -4.0

// Retrieve the fourth element:
const v4 = x1.get( 3 );
// returns -8.0
</code></pre><p>In addition to supporting accessor protocol-compliant array-like objects in <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array?ref=blog.stdlib.io">utilities</a>, <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/blas?ref=blog.stdlib.io">linear algebra operations</a>, and other vectorized APIs, stdlib has leveraged the accessor protocol to implement typed arrays supporting data types beyond real-valued numbers. To see this in action, see stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex128?ref=blog.stdlib.io"><code>Complex128Array</code></a>, <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/complex64?ref=blog.stdlib.io"><code>Complex64Array</code></a>, and <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/boolean?ref=blog.stdlib.io"><code>BooleanArray</code></a> typed array constructors.</p><p>In short, the accessor protocol is a powerful abstraction which is not only performant, but can accommodate new use cases with minimal effort.</p><h2 id="conclusion">Conclusion</h2><p>In this post, we dove deep into techniques for array-like object iteration. Along the way, we discussed the limitations of current approaches and identified opportunities for a lightweight means for element retrieval that can flexibly accommodate a variety of use cases, including strided arrays, arrays supporting deferred computation, shared memory views, and sparse arrays. We then learned about the accessor protocol which provides a straightforward solution for accessing elements in a manner consistent with built-in bracket notation and having minimal performance overhead. 
With the power and promise of the accessor protocol firmly established, we wrapped up by showcasing a few demos of the accessor protocol in action.</p><p>In short, we covered a lot of ground, but I hope you learned a thing or two along the way. In future posts, we&apos;ll explore more applications of the accessor protocol, including in the implementation of complex number and boolean typed arrays. We hope that you&apos;ll continue to follow along as we share our insights and that you&apos;ll join us in our mission to realize a future where JavaScript and the web are preferred environments for numerical and scientific computation. &#x1F680;</p><hr><p><em>Athan Reines is a software engineer at </em><a href="https://quansight.com/?ref=blog.stdlib.io"><em>Quansight</em></a><em> and core developer of </em><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io"><em>stdlib</em></a><em>.</em></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p><hr><p>If you&apos;d like to view the code covered in this post on GitHub, please visit the source code <a href="https://github.com/stdlib-js/blog-introducing-the-accessor-protocol-1/tree/main?ref=blog.stdlib.io">repository</a>.</p>
<!--kg-card-begin: html-->
<h2>License</h2>
<details>
    <summary>All code is licensed under <a href="http://www.apache.org/licenses/LICENSE-2.0?ref=blog.stdlib.io">Apache License, Version 2.0</a>.</summary>
    <pre><code class="language-text hljs">
Copyright (c) 2024 Athan Reines.

Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
    </code></pre>
</details>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[How to call Fortran routines from JavaScript with Node.js]]></title><description><![CDATA[A tour de force introduction to authoring Node.js native add-ons which support calling Fortran routines from JavaScript and usher in a new era of high-performance computation for the web.]]></description><link>https://blog.stdlib.io/how-to-call-fortran-routines-from-javascript-with-node-js/</link><guid isPermaLink="false">66616910d8eb7fcd9a961258</guid><category><![CDATA[Engineering]]></category><dc:creator><![CDATA[Pranav Goswami]]></dc:creator><pubDate>Sun, 21 Jul 2024 10:32:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1613905780946-26b73b6f6e11?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDkyfHxtYXRofGVufDB8fHx8MTcyMTU1NTAyMHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1613905780946-26b73b6f6e11?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDkyfHxtYXRofGVufDB8fHx8MTcyMTU1NTAyMHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="How to call Fortran routines from JavaScript with Node.js"><p><a href="https://fortran-lang.org/?ref=blog.stdlib.io">Fortran</a> is a commonly used language for numerical and scientific computation, underpinning many of the higher-level numerical libraries and programming languages in use today. 
Since Fortran&apos;s original development in 1957, researchers and software developers have used Fortran as a primary language for high-performance computation and authored thousands of high-performance programs and libraries for astronomy, climate modeling, computational chemistry, fluid dynamics, simulation, weather prediction, and more.</p><p>Rather than attempt to re-implement the entirety of the Fortran ecosystem, programming languages, such as R, MATLAB, and Julia, and numerical libraries, such as NumPy and SciPy, have opted to provide language-specific wrappers around Fortran functionality. Despite significant interest in numerical computing on the web, no one has developed comprehensive JavaScript bindings for Fortran libraries. That is, until now.</p><p>In this post, we&apos;ll begin laying the groundwork for authoring high-performance Fortran bindings and explore how to call Fortran routines from JavaScript using <a href="https://nodejs.org/?ref=blog.stdlib.io">Node.js</a>. We&apos;ll start with a brief introduction to Fortran, followed by writing and compiling a simple Fortran program. We&apos;ll then discuss how to use <a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io">Node-API</a> to link a compiled Fortran routine to the Node.js runtime. And we&apos;ll conclude by demonstrating how to use <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> to simplify the authoring of Node.js bindings.</p><p>By the end of this post, you&apos;ll have a good understanding of how to call Fortran routines from JavaScript using Node.js.</p><h2 id="prerequisites">Prerequisites</h2><p>Throughout this post, we&apos;ll be writing sample programs and performing various steps to compile and run Fortran programs. We&apos;ll assume that you have some familiarity with using the terminal, executing commands, and running JavaScript programs. For the most part, terminal commands will assume a Linux-based operating system. 
Some modifications may be required to successfully run commands and perform compilation steps on Windows.</p><p>If you&apos;re hoping to follow along, you&apos;ll need the following prerequisites:</p><p>1) You&apos;ll want to make sure you&apos;ve installed the latest stable <a href="https://nodejs.org/?ref=blog.stdlib.io">Node.js</a> version. To check whether Node.js is already installed</p><pre><code class="language-bash">$ node --version
</code></pre><p>where <code>$</code> is the terminal prompt and <code>node --version</code> is the entered command.</p><p>2) We&apos;ll be using <a href="https://www.npmjs.com/package/npm?ref=blog.stdlib.io">npm</a> for installing Node.js dependencies, but you should be able to adapt any installation commands to your preferred JavaScript package manager (e.g., Yarn, pnpm, etc).</p><p>3) In order to generate build files appropriate for your operating system (OS), we&apos;ll be using <a href="https://github.com/nodejs/node-gyp?ref=blog.stdlib.io">node-gyp</a>, which, in turn, has varying prerequisites depending on your OS, including the availability of Python. For more details, see the node-gyp <a href="https://github.com/nodejs/node-gyp?ref=blog.stdlib.io#installation">installation instructions</a>.</p><p>4) In order to compile Fortran programs, you&apos;ll need a Fortran compiler. In this post, we&apos;ll be using <a href="https://gcc.gnu.org/fortran/?ref=blog.stdlib.io">GNU Fortran</a> (GFortran) to compile Fortran code. GFortran is an implementation of the Fortran programming language in the widely used <a href="https://gcc.gnu.org/?ref=blog.stdlib.io">GNU Compiler Collection</a> (GCC), an open-source project maintained under the umbrella of the GNU Project. To check whether GFortran is already installed</p><pre><code class="language-bash">$ gfortran --version
</code></pre><p>5) And finally, we&apos;ll be using GCC to compile and link C source code. To check whether GCC is already installed</p><pre><code class="language-bash">$ gcc --version
</code></pre><p>If you don&apos;t have one or more of the above installed, you&apos;ll want to go ahead and install those now.</p><h2 id="introduction-to-fortran">Introduction to Fortran</h2><p>Fortran is a compiled, imperative programming language well-suited to numerical and scientific computation. Known for its high performance, versatility, and ease of use, Fortran is natively parallel and has built-in support for array handling. This makes Fortran a popular choice for scientific computing.</p><p>Many fundamental libraries for numerical computation, such as <a href="https://netlib.org/blas/?ref=blog.stdlib.io">BLAS</a> (<strong>b</strong>asic <strong>l</strong>inear <strong>a</strong>lgebra <strong>s</strong>ubprograms), <a href="https://www.netlib.org/lapack/?ref=blog.stdlib.io">LAPACK</a> (<strong>l</strong>inear <strong>a</strong>lgebra <strong>pack</strong>age), <a href="https://www.netlib.org/slatec/?ref=blog.stdlib.io">SLATEC</a>, and <a href="https://www.netlib.org/minpack/?ref=blog.stdlib.io">MINPACK</a>, among many others, are written in Fortran. These libraries serve as the foundation of popular open-source numerical computation libraries, such as NumPy and SciPy, and numerical programming languages, such as R, MATLAB, and Julia.</p><p>Given Fortran&apos;s widespread usage and decades of development, one could argue that most modern numerical programming languages and libraries are simply fancy wrappers around Fortran routines. 
Therefore, enabling JavaScript to call Fortran routines not only leverages these high-performance libraries but also positions JavaScript as a viable language for machine learning and other computation-intensive tasks.</p><p>Now, let&apos;s get started by compiling our first Fortran program!</p><h3 id="compiling-our-first-fortran-program">Compiling our first Fortran program</h3><p>Recognizing that some readers of this post may not be familiar with Fortran, let&apos;s kick things off by writing a &quot;Hello world&quot; program in Fortran for adding two numbers and printing the result. To begin, open up a text editor and create the file <code>add.f90</code> containing the following code which contains a function definition for adding two integers and a <code>main</code> program which calls that function and prints the result.</p><pre><code class="language-fortran">! file: add.f90

!&gt;
! Adds two integer values.
!
! @param {integer} x - first input value
! @param {integer} y - second input value
!&lt;
integer function add( x, y )
    ! Define the input parameters:
    integer, intent(in) :: x, y
    ! ..
    ! Compute the sum:
    add = x + y
end function add

!&gt;
! Main execution sequence.
!&lt;
program main
    ! Local variables:
    character(len=999) :: str, tmp
    ! ..
    ! Intrinsic functions:
    intrinsic adjustl, trim
    ! ..
    ! Define a variable for storing the sum:
    integer :: res
    ! ..
    ! Compute the sum:
    res = add( 12, 15 )
    ! ..
    ! Print the results:
    write (str, &apos;(I15)&apos;) res
    tmp = adjustl( str )
    print &apos;(A, A)&apos;, &apos;The sum of 12 and 15 is &apos;, trim( tmp )
end program
</code></pre><p>There are a few things to note in the above program. The first is that, in general, Fortran routines pass arguments by reference. A common practice is to define and pass output variables for storing results&#x2014;something that we&apos;ll revisit later in this post.</p><p>Second, a best practice is to specify the <code>intent(xx)</code> of a variable. In the code above, <code>intent(in)</code> indicates that an argument must not be redefined or become undefined during the execution of a subroutine. Similarly, <code>intent(out)</code> indicates that an argument must be defined before the argument is referenced within a subroutine.</p><p>Third, in order to print formatted results, we need to perform various string manipulation steps, including writing to character buffers (<code>write</code>), adjusting alignment (<code>adjustl</code>), and trimming results (<code>trim</code>).</p><p>For the purposes of getting something working, our program defines a single variable <code>res</code>, which receives the result of passing two number literals to an <code>add</code> function. To test whether the code works, we first need to see if it compiles, and, to do this, we&apos;ll use the <a href="https://gcc.gnu.org/fortran/?ref=blog.stdlib.io">GNU Fortran</a> (GFortran) compiler, which is part of the GNU Compiler Collection (GCC). While other Fortran compilers exist, such as the Intel Fortran Compiler, LLVM Flang, and LFortran, GFortran is one of the most widely used Fortran compilers, and what we cover in this post should readily translate elsewhere.</p><p>In a terminal, navigate to the directory containing <code>add.f90</code>, and execute the following command</p><pre><code class="language-bash">$ gfortran add.f90 -o add.out &amp;&amp; ./add.out
</code></pre><p>where <code>add.f90</code> is the file path of the file to be compiled and <code>add.out</code> is the file path to use for storing a generated executable. If all went according to plan, you should see the following text as output</p><pre><code class="language-text">The sum of 12 and 15 is 27
</code></pre><h3 id="defining-another-fortran-subroutine">Defining another Fortran subroutine</h3><p>In <code>add.f90</code>, we defined a self-contained Fortran program which adds two numbers and prints the result. But what if we want to call Fortran functions and subroutines from another Fortran file or from outside of Fortran, such as from JavaScript running in Node.js?</p><p>To see how this is done, let&apos;s begin by creating another Fortran file <code>mul.f90</code>, this time containing a subroutine for multiplying two integers and returning an integer result.</p><pre><code class="language-fortran">! file: mul.f90

!&gt;
! Multiplies two integer values.
!
! @param {integer} x - first input value
! @param {integer} y - second input value
! @param {integer} res - output argument for storing the result
!&lt;
subroutine mul( x, y, res )
    integer, intent(in) :: x, y
    integer, intent(out) :: res
    res = x * y
end subroutine mul
</code></pre><p>Similar to <code>add</code>, <code>mul</code> takes two input parameters <code>x</code> and <code>y</code>, but this time <code>mul</code> is a subroutine which takes an output parameter <code>res</code> for storing the result.</p><p>If we try compiling <code>mul.f90</code> as we did with <code>add.f90</code>,</p><pre><code class="language-bash">$ gfortran mul.f90 -o mul.out
</code></pre><p>we&apos;ll encounter an error message similar to the following</p><pre><code class="language-text">Undefined symbols for architecture arm64:
  &quot;_main&quot;, referenced from:
      &lt;initial-undefines&gt;
ld: symbol(s) not found for architecture arm64
collect2: error: ld returned 1 exit status
</code></pre><p>In order to successfully generate a standalone executable, Fortran code must have a <code>main</code> program providing an entry point for execution. Without this entry point, a Fortran compiler does not know where to begin executing code or where to look to identify the procedures and functions necessary to run a program.</p><p>For <code>mul.f90</code>, we don&apos;t want Fortran to drive execution; instead, we want to define an entry point outside of Fortran which will enable a JavaScript runtime to drive execution. This means that we need to figure out a way to establish a bridge between a JavaScript runtime exposing native APIs and Fortran code containing APIs which we want to use. In order to establish such a bridge, we need to disentangle two build phases: compilation and linking.</p><h2 id="linking">Linking</h2><p>At a high level, compilation is the process of translating code written in one programming language into another programming language. Often this means taking expressions written in a higher-level language, such as Fortran, and translating them to a lower-level language, such as machine code, in order to create an executable program that a machine can natively understand. The output of compilation is one or more object files, which typically have <code>.o</code> or <code>.obj</code> filename extensions.</p><p>Linking is the process of taking one or more object files and combining them into a single executable file. 
During linking, a &quot;linker&quot; performs several tasks:</p><ul><li><strong>symbol resolution</strong>: resolving references to functions and variables across different object files.</li><li><strong>address binding</strong>: assigning final memory addresses to a program&apos;s functions and variables.</li><li><strong>library inclusion</strong>: including code from static or dynamic libraries as required.</li><li><strong>executable creation</strong>: producing the final executable file that can be run on a target system.</li></ul><p>When we ran the GFortran command above</p><pre><code class="language-bash">$ gfortran mul.f90 -o mul.out
</code></pre><p>the compiler attempted to perform both compilation and linking. However, if we&apos;re trying to combine compiled Fortran code with a separate library (or a runtime such as Node.js), we need to split compilation and linking into separate steps.</p><p>Accordingly, in order to just generate the object file, we can amend the previous command as follows</p><pre><code class="language-bash">$ gfortran -c mul.f90
</code></pre><p>where the <code>-c</code> flag instructs the compiler to compile, but not to link. After running this command from the same directory as <code>mul.f90</code>, you should see a <code>mul.o</code> (or <code>mul.obj</code>) file containing the compiled source code.</p><h3 id="linking-fortran-files">Linking Fortran files</h3><p>To demonstrate linking as a separate phase, create a <code>mul_script.f90</code> file containing a <code>main</code> program which calls the <code>mul</code> subroutine and prints the result.</p><pre><code class="language-fortran">! file: mul_script.f90

!&gt;
! Main execution sequence.
!&lt;
program main
    implicit none
    ! ..
    ! Local variables:
    character(len=999) :: str, tmp
    ! ..
    ! Intrinsic functions:
    intrinsic adjustl, trim
    ! ..
    ! Define a variable for storing the product:
    integer :: res
    ! ..
    ! Call the `mul` subroutine to compute the product:
    call mul( 4, 5, res )
    ! ..
    ! Print the results:
    write (str, &apos;(I15)&apos;) res
    tmp = adjustl( str )
    print &apos;(A, A)&apos;, &apos;The product of 4 and 5 is &apos;, trim( tmp )
end program
</code></pre><p>We can then perform the same compilation step as we did for <code>mul.f90</code>.</p><pre><code class="language-bash">$ gfortran -c mul_script.f90
</code></pre><p>At this point, we should have two object files: <code>mul.o</code> and <code>mul_script.o</code> (or <code>mul.obj</code> and <code>mul_script.obj</code>, respectively). To link them into a single executable, we can run the following command in which we define the path of the output executable and pass in the paths of the object files we wish to link.</p><pre><code class="language-bash">$ gfortran -o mul_script.out mul.o mul_script.o
</code></pre><p>Once linked, we can test that everything works by running the generated executable.</p><pre><code class="language-bash">$ ./mul_script.out
</code></pre><p>If all went according to plan, you should see the following text as output</p><pre><code class="language-text">The product of 4 and 5 is 20
</code></pre><p>At this point, we&apos;ve successfully compiled and linked together separate Fortran source files, and we can now turn our attention to linking compiled Fortran to non-Fortran code.</p><h3 id="linking-fortran-and-c">Linking Fortran and C</h3><p>A common scenario in numerical computing is exposing libraries written in Fortran as C functions. C also happens to be the programming language used by Node.js to expose APIs for building native add-ons (i.e., extensions to the Node.js runtime). Accordingly, if we can figure out how to link Fortran to C, we&apos;ll be well on our way to creating a Node.js native add-on capable of calling Fortran routines.</p><h4 id="writing-fortran-wrappers">Writing Fortran wrappers</h4><p>While the <code>mul</code> subroutine defined above can be used in conjunction with other Fortran files, we cannot simply call Fortran routines from C as we do in Fortran because Fortran expects arguments to be passed by reference rather than by value. It&apos;s also worth mentioning that, because Fortran functions can only return scalar values and not, e.g., pointers to arrays, general best practice is to expose Fortran functions as subroutines, which are the equivalent of C functions returning <code>void</code> and which allow passing pointers for storing output return values.</p><p>While <code>mul</code> is already a subroutine, if we wanted to expose <code>add</code> to C, we&apos;d first need to wrap <code>add</code> as a subroutine in a manner similar to the following code snippet containing the subroutine wrapper <code>addsub</code> which forwards input arguments to <code>add</code> and assigns the result to an output argument <code>res</code>.</p><pre><code class="language-fortran">!&gt;
! Wraps `add` as a subroutine.
!
! @param {integer} x - first input value
! @param {integer} y - second input value
! @param {integer} res - output argument for storing the result
!&lt;
subroutine addsub( x, y, res )
    implicit none
    ! ..
    ! External functions:
    interface
        integer function add( x, y )
            integer :: x, y
        end function add
    end interface
    ! ..
    integer, intent(in) :: x, y
    integer, intent(out) :: res
    ! ..
    res = add( x, y )
    return
end subroutine addsub
</code></pre><h4 id="defining-function-prototypes-in-c">Defining function prototypes in C</h4><p>With those preliminaries out of the way, to help the C compiler reason about functions defined elsewhere (e.g., in a Fortran library or in other source files), we need to define function prototypes for any functions we plan to use before we use them. For our use case of calling a single Fortran routine, we can create a <code>mul_fortran.h</code> header file containing a single function declaration for the <code>mul</code> subroutine.</p><pre><code class="language-c">// file: mul_fortran.h

#ifndef MUL_FORTRAN_H
#define MUL_FORTRAN_H

#ifdef __cplusplus
extern &quot;C&quot; {
#endif

void mul( const int *x, const int *y, int *res );

#ifdef __cplusplus
}
#endif

#endif
</code></pre><p>One thing to note is that, in the above header file, we prevent <a href="https://en.wikipedia.org/wiki/Name_mangling?ref=blog.stdlib.io">name mangling</a> by using <code>extern &quot;C&quot;</code>. This is common practice in order to facilitate interoperation of C and C++, and preventing name mangling helps avoid compiler errors if we decide to use <code>mul</code> in C++ in the future.</p><h4 id="calling-fortran-routines-from-c">Calling Fortran routines from C</h4><p>Next, similar to how we created a Fortran program for calling a Fortran function defined in a separate file, we can create a <code>main.c</code> file containing a <code>main</code> function which calls <code>mul</code> and prints the result.</p><pre><code class="language-c">// file: main.c

#include &quot;mul_fortran.h&quot;
#include &lt;stdio.h&gt;

int main( void ) {
    int x = 4;
    int y = 5;
    int res;

    // Compute the product, passing arguments by reference:
    mul( &amp;x, &amp;y, &amp;res );

    printf( &quot;The product of %d and %d is %d\n&quot;, x, y, res );
    return 0;
}
</code></pre><h4 id="compiling-c-and-fortran">Compiling C and Fortran</h4><p>To compile our C program, we can run the following command</p><pre><code class="language-bash">$ gcc -I . -c main.c
</code></pre><p>where <code>-I .</code> adds the current directory to the compiler&apos;s header search path, allowing the compiler to find the <code>mul_fortran.h</code> header file we created above.</p><p>Before linking <code>main.o</code> and <code>mul.o</code>, we first need to recompile <code>mul.f90</code>, making sure to instruct GFortran to not modify function names by appending underscores during compilation. This ensures that the name used in our C code matches the exported symbol from compiled Fortran. One should be careful, however, as non-mangled names may conflict with existing symbols defined in C.</p><p>To prevent GFortran from appending underscores to symbol names, we set the <a href="https://gcc.gnu.org/onlinedocs/gfortran/Code-Gen-Options.html?ref=blog.stdlib.io#index-fno-underscoring"><code>-fno-underscoring</code></a> compiler option when calling GFortran.</p><pre><code class="language-bash">$ gfortran -fno-underscoring -c mul.f90
</code></pre><p>Now that we&apos;ve compiled our source files, it&apos;s time to generate an executable!</p><pre><code class="language-bash">$ gcc -o main.out main.o mul.o
</code></pre><p>Depending on your operating system, if the previous command errors, you may need to modify the previous command to</p><pre><code class="language-bash">$ gcc -o main.out main.o mul.o -lgfortran
</code></pre><p>where <code>-lgfortran</code> instructs GCC to link to the standard Fortran libraries. And finally, to test that everything works, we run the executable by entering the following command</p><pre><code class="language-bash">$ ./main.out
</code></pre><p>If successful, you should see the following text as output</p><pre><code class="language-text">The product of 4 and 5 is 20
</code></pre><p>Phew! If you&apos;re new to Fortran and C, congratulations on making it this far!</p><p>Now that we&apos;ve successfully managed to link Fortran and C code, we can turn our attention to using Node.js native add-ons to call Fortran routines from JavaScript.</p><h2 id="node-api">Node-API</h2><p><a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io">Node-API</a> is an API for building Node.js native add-ons (i.e., extensions to the Node.js JavaScript runtime). There&apos;s a long history of add-on evolution and development in Node.js, of which I&apos;ll spare you the <a href="https://nodesource.com/blog/NAN-to-Node-API-migration-a-short-story/?ref=blog.stdlib.io">details</a>. The real benefit of Node-API is in providing a stable Application Binary Interface (ABI), which insulates add-ons from changes in the underlying JavaScript engine (namely, V8) and which allows modules compiled for one version of Node.js to run on later versions of Node.js without recompilation. In short, Node-API provides the glue code, in the form of C APIs, necessary for us to extend Node.js capabilities with C/C++ code written and compiled independently of Node.js itself.</p><p>In order to access Node-API APIs, we need to do two things:</p><ol><li>Include the <code>&lt;node_api.h&gt;</code> header in our C files.</li><li>Compile C source files using Node-API APIs with <a href="https://github.com/nodejs/node-gyp?ref=blog.stdlib.io">node-gyp</a>, a build system based on Google&apos;s <a href="https://gyp.gsrc.io/?ref=blog.stdlib.io">GYP</a>, a meta-build system for generating other build systems.</li></ol><p>So without further ado...</p><h3 id="creating-an-add-on-file">Creating an add-on file</h3><p>Let&apos;s start by creating an <code>addon.c</code> file which will serve as an entry point for our native add-on. 
In this file, we&apos;ll define two functions&#x2014;<code>addon</code> and <code>Init</code>&#x2014;and register a Node-API module which exports a function in a manner similar to how we&apos;d export a function if writing a module in vanilla JavaScript.</p><pre><code class="language-c">// file: addon.c

#include &lt;node_api.h&gt;
#include &lt;assert.h&gt;

/**
* Receives JavaScript callback invocation data.
*
* @param env    environment under which the function is invoked
* @param info   callback data
* @return       Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {

    // NOTE: we&apos;ll add code here later in this post

    return NULL;
}

/**
* Defines the Node.js module &quot;exports&quot; object for the native add-on.
*
* @param env      environment under which the function is invoked
* @param exports  exports object
* @return         Node-API value
*/
static napi_value Init( napi_env env, napi_value exports ) {
    napi_value fcn;

    // Export the add-on function as a &quot;default&quot; export:
    napi_status status = napi_create_function( env, &quot;exports&quot;, NAPI_AUTO_LENGTH, addon, NULL, &amp;fcn );

    // Verify that we successfully wrapped the `addon` function as a JavaScript function object:
    assert( status == napi_ok );

    // Return the JavaScript function object to allow registering with the JavaScript runtime:
    return fcn;
}

/**
* Register a Node-API module which exports a function.
*/
NAPI_MODULE( NODE_GYP_MODULE_NAME, Init )
</code></pre><p>The <code>addon.c</code> file comprises three parts:</p><ol><li><code>addon</code>: this function receives JavaScript invocation data. If we assume <code>foo()</code> is a JavaScript function exposed by a native add-on, <code>env</code> is the environment in which the JavaScript code runs and <code>info</code> is an opaque object which can be used to retrieve function arguments and other contextual data when <code>foo</code> is invoked.</li><li><code>Init</code>: similar to how <a href="https://nodejs.org/api/modules.html?ref=blog.stdlib.io#moduleexports"><code>module.exports</code></a> defines the APIs a Node.js module exposes to other Node.js modules, this function defines the &quot;exports&quot; object and initializes exported values. In this context, initialization typically means wrapping C APIs as JavaScript objects so that a JavaScript engine can pass data back and forth between JavaScript and native code.</li><li><code>NAPI_MODULE</code>: this is a <a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io#module-registration">macro</a> exposed by Node-API for registering a Node-API module with the Node.js JavaScript runtime.</li></ol><p>At this point, we&apos;re starting to accumulate a number of moving parts: Fortran source files, GFortran, C source files, GCC, Node-API, and the heretofore mentioned, but not yet explained, node-gyp.</p><figure class="kg-card kg-image-card"><img src="https://blog.stdlib.io/content/images/2024/07/build_diagram.png" class="kg-image" alt="How to call Fortran routines from JavaScript with Node.js" loading="lazy" width="667" height="585" srcset="https://blog.stdlib.io/content/images/size/w600/2024/07/build_diagram.png 600w, https://blog.stdlib.io/content/images/2024/07/build_diagram.png 667w"></figure><p>As may be observed in the diagram above, a key component which we have yet to cover, but which is necessary to allow building a Node.js native add-on in a manner that is portable across platforms, is the 
<code>binding.gyp</code> file. It&apos;s this file and node-gyp that we&apos;ll dive into next.</p><h2 id="node-gyp">node-gyp</h2><p><a href="https://github.com/nodejs/node-gyp?ref=blog.stdlib.io">node-gyp</a> is a build system based on Google&apos;s <a href="https://gyp.gsrc.io/?ref=blog.stdlib.io">GYP</a>, which, in turn, is a meta-build system for generating other build systems. The key idea behind GYP is the generation of build files, such as Makefiles, Ninja build files, Visual Studio projects, and Xcode projects, which are tailored to the platform on which a project is being compiled. Once GYP scaffolds a project in a manner tailored to the host platform, GYP can then perform build steps which replicate as closely as possible the way that one would have set up a native build of the project were one writing the project build system from scratch. node-gyp subsequently extends GYP by providing the configuration and tooling specific to developing Node.js native add-ons.</p><h3 id="configuring-how-to-build-an-add-on">Configuring how to build an add-on</h3><p>In order to describe the configuration necessary to build a Node.js native add-on, one needs to provide a <code>binding.gyp</code> file. This file is written in a JSON-like format and is placed at the root of a JavaScript package alongside a package&apos;s <code>package.json</code> file. GYP configuration files can be awkward to write, and, unfortunately, GYP has long been abandoned by the Google team responsible for its creation. Adding insult to injury, good documentation for authoring GYP files can be hard to come by, as the GYP documentation is incomplete and finding real-world examples doing exactly what you want to do can be a time-consuming task, especially when authoring <code>binding.gyp</code> files requiring specialized configuration (e.g., as might be needed when compiling CUDA, OpenCL, or Fortran).</p><p>Nevertheless, persist we shall! 
Fortunately, writing a minimal <code>binding.gyp</code> file capable of supporting Fortran compilation is within reach. Start by creating a <code>binding.gyp</code> file specifying various configuration parameters, including build targets, source files, compiler flags, and rules for how to process files having a particular filename extension.</p><pre><code class="language-python"># file: binding.gyp

# A `.gyp` file for building a Node.js native add-on.
#
# [1]: https://gyp.gsrc.io/docs/InputFormatReference.md
# [2]: https://gyp.gsrc.io/docs/UserDocumentation.md
{
  # Define variables to be used throughout the configuration for all targets:
  &apos;variables&apos;: {
    # Set variables based on the host OS:
    &apos;conditions&apos;: [
      [
        &apos;OS==&quot;win&quot;&apos;,
        {
          # Define the object file suffix on Windows:
          &apos;obj&apos;: &apos;obj&apos;,
        },
        {
          # Define the object file suffix for other operating systems (e.g., Linux and macOS):
          &apos;obj&apos;: &apos;o&apos;,
        }
      ],
    ],
  },

  # Define compilation targets:
  &apos;targets&apos;: [
    # Define a target to generate an add-on:
    {
      # The target name should match the add-on export name (see addon.c above):
      &apos;target_name&apos;: &apos;addon&apos;,

      # List of source files:
      &apos;sources&apos;: [
        # Relative paths should be relative to this configuration file...
        &apos;./addon.c&apos;,
        &apos;./mul.f90&apos;,
      ],

      # List directories which contain relevant headers to include during compilation:
      &apos;include_dirs&apos;: [
        # Relative paths should be relative to this configuration file...
        &apos;./&apos;,
      ],

      # Define settings which should be applied when a target&apos;s object files are used as linker input:
      &apos;link_settings&apos;: {
        # Define linker flags for libraries against which to link (e.g., &apos;-lm&apos;, &apos;-lblas&apos;, etc):
        &apos;libraries&apos;: [],

        # Define directories in which to find libraries to link to (e.g., &apos;/usr/lib&apos;):
        &apos;library_dirs&apos;: []
      },

      # Define custom build actions for particular source files:
      &apos;rules&apos;: [
        {
          # Define a rule name:
          &apos;rule_name&apos;: &apos;compile_fortran&apos;,

          # Define the filename extension for which this rule should apply:
          &apos;extension&apos;: &apos;f90&apos;,

          # Set a flag specifying whether to process generated output as sources for subsequent steps:
          &apos;process_outputs_as_sources&apos;: 1,

          # Define the pathnames to be used as inputs when performing processing:
          &apos;inputs&apos;: [
            # Full path of the current input:
            &apos;&lt;(RULE_INPUT_PATH)&apos;,
          ],

          # Define the outputs produced during processing:
          &apos;outputs&apos;: [
            # Store an output object file in a directory for placing intermediate results (only accessible within a single target):
            &apos;&lt;(INTERMEDIATE_DIR)/&lt;(RULE_INPUT_ROOT).&lt;(obj)&apos;,
          ],

          # Define the command-line invocation:
          &apos;action&apos;: [
            &apos;gfortran&apos;,
            &apos;-fno-underscoring&apos;,
            &apos;-c&apos;,
            &apos;&lt;@(_inputs)&apos;,
            &apos;-o&apos;,
            &apos;&lt;@(_outputs)&apos;,
          ],
        },
      ],
    },
  ],
}
</code></pre><p>A few comments:</p><ol><li>GYP configuration files support variables, conditionals, and expressions. In the configuration file above, <code>&lt;(RULE_INPUT_PATH)</code>, <code>&lt;(INTERMEDIATE_DIR)</code>, and <code>&lt;(RULE_INPUT_ROOT)</code> are predefined variables provided by the GYP generator module. Variables such as <code>&lt;@(_inputs)</code> and <code>&lt;@(_outputs)</code> represent variable expansions and correspond to variables which should be expanded in list contexts.</li><li>While GYP attempts to automate and abstract away the generation of build files tailored to the operating system on which to compile, this doesn&apos;t absolve us from needing to consider platform variability. For example, the configuration file above includes a conditional for resolving an appropriate object file filename extension based on the target operating system.</li><li>Configuration files can quickly become complex depending on operating system variability, including the availability of specialized compilers, such as GFortran, and the need for bespoke rules for varying input file types.</li></ol><h3 id="building-an-add-on">Building an add-on</h3><p>Now that we have a GYP configuration file, it&apos;s time to install <a href="https://github.com/nodejs/node-gyp?ref=blog.stdlib.io">node-gyp</a>. In your terminal, run</p><pre><code class="language-bash">$ npm install --no-save node-gyp
</code></pre><p>The node-gyp executable will subsequently be available in the <code>./node_modules/.bin</code> directory. To generate the appropriate project build files for the current platform, run the following command</p><pre><code class="language-bash">$ ./node_modules/.bin/node-gyp configure
</code></pre><p>This will generate a <code>./build</code> directory containing platform-specific build files. To build the native add-on, we can run</p><pre><code class="language-bash">$ ./node_modules/.bin/node-gyp build
</code></pre><p>which will generate an <code>addon.node</code> file in a <code>./build/Release</code> sub-folder. To remove generated files, run</p><pre><code class="language-bash">$ ./node_modules/.bin/node-gyp clean
</code></pre><p>As we continue to iterate on our <code>addon.c</code> file, we&apos;ll want to perform the <code>clean-configure-build</code> sequence each time we make changes. Accordingly, we can consolidate the above steps into a single command</p><pre><code class="language-bash">$ ./node_modules/.bin/node-gyp clean &amp;&amp; \
  ./node_modules/.bin/node-gyp configure &amp;&amp; \
  ./node_modules/.bin/node-gyp build
</code></pre><h2 id="calling-a-fortran-routine-from-javascript">Calling a Fortran routine from JavaScript</h2><p>At this point, we&apos;ve got almost all of the core building blocks for calling a Fortran routine from JavaScript. We&apos;re only missing two things:</p><ol><li>Logic in <code>addon.c</code> which calls the Fortran routine.</li><li>A JavaScript file which invokes the function exposed by our native add-on.</li></ol><h3 id="updating-the-add-on-file">Updating the add-on file</h3><p>To start, let&apos;s revisit our <code>addon.c</code> file. In this file, we need to make four changes:</p><ol><li>Retrieve provided arguments.</li><li>Convert from JavaScript objects to native C types.</li><li>Add logic to call our Fortran routine <code>mul</code>.</li><li>Return a result as a JavaScript object.</li></ol><p>Luckily, we already have experience with (3) when we wrote <code>main.c</code> and linked against our compiled Fortran routine. As in <code>main.c</code>, we want to include the <code>mul_fortran.h</code> header, which we can do by making the following change in <code>addon.c</code></p><pre><code class="language-diff">// file: addon.c

+ #include &quot;mul_fortran.h&quot;
#include &lt;node_api.h&gt;
#include &lt;assert.h&gt;
</code></pre><p>Next, we&apos;ll want to modify the <code>addon</code> function in <code>addon.c</code> to include logic for calling the <code>mul</code> Fortran routine. In the snippet below, we copy the invocation logic used in <code>main.c</code> into the implementation of the <code>addon</code> function.</p><pre><code class="language-c">/**
* Receives JavaScript callback invocation data.
*
* @param env    environment under which the function is invoked
* @param info   callback data
* @return       Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {

    // ...

    // Call the Fortran routine:
    int res;
    mul( &amp;x, &amp;y, &amp;res );

    // ...

    return NULL;
}
</code></pre><p>Now on to argument munging. Fortunately, Node-API provides several APIs for converting from JavaScript objects to native C data types. In particular, we&apos;re interested in converting JavaScript numbers to C integers. The following code snippet defines the number of expected input arguments, retrieves those arguments from the provided callback info using <a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io#napi_get_cb_info"><code>napi_get_cb_info</code></a>, and converts the JavaScript values to native C data types using <a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io#napi_get_value_int32"><code>napi_get_value_int32</code></a>.</p><pre><code class="language-c">/**
* Receives JavaScript callback invocation data.
*
* @param env    environment under which the function is invoked
* @param info   callback data
* @return       Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {
    napi_status status;

    // Define the expected number of input arguments:
    size_t argc = 2;

    // Retrieve the input arguments from the callback info:
    napi_value argv[ 2 ];
    status = napi_get_cb_info( env, info, &amp;argc, argv, NULL, NULL );
    assert( status == napi_ok );

    // Convert each argument to a signed 32-bit integer:
    int x;
    status = napi_get_value_int32( env, argv[ 0 ], &amp;x );
    assert( status == napi_ok );

    int y;
    status = napi_get_value_int32( env, argv[ 1 ], &amp;y );
    assert( status == napi_ok );

    // Call the Fortran routine:
    int res;
    mul( &amp;x, &amp;y, &amp;res );

    // ...

    return NULL;
}
</code></pre><p>And finally, we need to convert the integer result to a JavaScript object for use within JavaScript. The following code snippet adds logic for converting a C signed 32-bit integer to an opaque object representing a JavaScript number using <a href="https://nodejs.org/api/n-api.html?ref=blog.stdlib.io#napi_create_int32"><code>napi_create_int32</code></a>.</p><pre><code class="language-c">/**
* Receives JavaScript callback invocation data.
*
* @param env    environment under which the function is invoked
* @param info   callback data
* @return       Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {
    napi_status status;

    // Define the expected number of input arguments:
    size_t argc = 2;

    // Retrieve the input arguments from the callback info:
    napi_value argv[ 2 ];
    status = napi_get_cb_info( env, info, &amp;argc, argv, NULL, NULL );
    assert( status == napi_ok );

    // Convert each argument to a signed 32-bit integer:
    int x;
    status = napi_get_value_int32( env, argv[ 0 ], &amp;x );
    assert( status == napi_ok );

    int y;
    status = napi_get_value_int32( env, argv[ 1 ], &amp;y );
    assert( status == napi_ok );

    // Call the Fortran routine:
    int res;
    mul( &amp;x, &amp;y, &amp;res );

    // Convert the result to a JavaScript object:
    napi_value out;
    status = napi_create_int32( env, res, &amp;out );
    assert( status == napi_ok );

    return out;
}
</code></pre><p>Putting it all together, we have the following <code>addon.c</code> file which defines the entirety of our native add-on bindings.</p><pre><code class="language-c">// file: addon.c

#include &quot;mul_fortran.h&quot;
#include &lt;node_api.h&gt;
#include &lt;assert.h&gt;

/**
* Receives JavaScript callback invocation data.
*
* @param env    environment under which the function is invoked
* @param info   callback data
* @return       Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {
    napi_status status;

    // Define the expected number of input arguments:
    size_t argc = 2;

    // Retrieve the input arguments from the callback info:
    napi_value argv[ 2 ];
    status = napi_get_cb_info( env, info, &amp;argc, argv, NULL, NULL );
    assert( status == napi_ok );

    // Convert each argument to a signed 32-bit integer:
    int x;
    status = napi_get_value_int32( env, argv[ 0 ], &amp;x );
    assert( status == napi_ok );

    int y;
    status = napi_get_value_int32( env, argv[ 1 ], &amp;y );
    assert( status == napi_ok );

    // Call the Fortran routine:
    int res;
    mul( &amp;x, &amp;y, &amp;res );

    // Convert the result to a JavaScript object:
    napi_value out;
    status = napi_create_int32( env, res, &amp;out );
    assert( status == napi_ok );

    return out;
}

/**
* Defines the Node.js module &quot;exports&quot; object for the native add-on.
*
* @param env      environment under which the function is invoked
* @param exports  exports object
* @return         Node-API value
*/
static napi_value Init( napi_env env, napi_value exports ) {
    napi_value fcn;

    // Export the add-on function as a &quot;default&quot; export:
    napi_status status = napi_create_function( env, &quot;exports&quot;, NAPI_AUTO_LENGTH, addon, NULL, &amp;fcn );

    // Verify that we successfully wrapped the `addon` function as a JavaScript function object:
    assert( status == napi_ok );

    // Return the JavaScript function object to allow registering with the JavaScript runtime:
    return fcn;
}

/**
* Register a Node-API module which exports a function.
*/
NAPI_MODULE( NODE_GYP_MODULE_NAME, Init )
</code></pre><p>To confirm that our Node.js add-on still compiles, we can re-run our build sequence defined above.</p><pre><code class="language-bash">$ ./node_modules/.bin/node-gyp clean &amp;&amp; \
  ./node_modules/.bin/node-gyp configure &amp;&amp; \
  ./node_modules/.bin/node-gyp build
</code></pre><h3 id="creating-a-javascript-file-importing-the-native-add-on">Creating a JavaScript file importing the native add-on</h3><p>We&apos;re here! The moment that we&apos;ve been waiting for! Time to create a JavaScript file which loads our Node.js native add-on and calls its public API. &#x1F941;</p><p>Thankfully, loading a native add-on is just like loading any other JavaScript module. To see this in action, let&apos;s create a <code>mul.js</code> file which imports the native add-on module, calls the function exposed by the add-on, and prints the result.</p><pre><code class="language-javascript">// file: mul.js

// Import the native add-on module:
const addon = require( &apos;./build/Release/addon.node&apos; );

// Compute the product of two integers:
const res = addon( 5, 10 );
console.log( &apos;The product of %d and %d is %d&apos;, 5, 10, res );
</code></pre><p>To test whether everything works as expected, we can run the script by passing the script&apos;s file path to the Node.js executable.</p><pre><code class="language-bash">$ node ./mul.js
</code></pre><p>If all went according to plan, you should see the following text as output:</p><pre><code class="language-text">The product of 5 and 10 is 50
</code></pre><p>That&apos;s it! We did it. &#x1F605;</p><p>Barring any platform quirks or dreaded compiler errors, we successfully called a Fortran routine from JavaScript. &#x1F64C;</p><h2 id="simplifying-add-on-authoring-with-stdlib">Simplifying add-on authoring with stdlib</h2><p>Depending on API complexity, authoring Node.js native add-ons can be verbose and error-prone. This verbosity largely stems from the need for argument validation logic and status checks. For example, when handling typed arrays, one needs to perform multiple steps: verifying that an input argument is a typed array, verifying that it is a typed array of the correct type, resolving its length, converting the JavaScript object to a C pointer which points to the start of the underlying typed array memory, and, for applications involving strided arrays, ensuring that typed array properties are consistent with other input arguments, such as strides and offsets.</p><p>While some validation logic can be performed in JavaScript or omitted entirely, a general best practice is to include such logic in order to ensure data integrity when calling APIs outside of Node-API and to avoid hard-to-track-down bugs leading to segmentation faults and buffer overflows. Furthermore, best practice dictates checking the <code>napi_status</code> return value after each invocation of a Node-API function to ensure that the JavaScript engine was able to successfully perform the requested operation. As a consequence, lines of code add up, and you find yourself writing the same logic over and over.</p><h3 id="macros-for-module-registration-and-data-type-conversion">Macros for module registration and data type conversion</h3><p>To simplify add-on authoring, <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> provides several utilities, both functional APIs and macros, which abstract away common boilerplate.
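</p><p>To make that boilerplate concrete, here is a sketch of what validating a single <code>Float64Array</code> argument might look like in raw Node-API. This fragment is illustrative only and is not drawn from stdlib; it assumes that <code>env</code>, <code>argv</code>, and <code>status</code> are in scope, as in the <code>addon.c</code> example above.</p><pre><code class="language-c">// Verify that the first argument is a typed array:
bool is_typedarray; // requires &lt;stdbool.h&gt;
status = napi_is_typedarray( env, argv[ 0 ], &amp;is_typedarray );
assert( status == napi_ok );
assert( is_typedarray );

// Verify the typed array data type and resolve the length and data pointer:
napi_typedarray_type vtype;
size_t len;
void *data;
status = napi_get_typedarray_info( env, argv[ 0 ], &amp;vtype, &amp;len, &amp;data, NULL, NULL );
assert( status == napi_ok );
assert( vtype == napi_float64_array );

// Reinterpret the data pointer as a pointer to the underlying typed array memory:
double *x = (double *)data;
</code></pre><p>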
For example, we can refactor the <code>addon.c</code> file defined above to use stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/napi?ref=blog.stdlib.io"><code>napi</code></a> macros for retrieving input arguments, handling conversion to and from native C data types, and initializing and registering an exported function with Node.js.</p><pre><code class="language-c">// file: addon2.c

#include &quot;mul_fortran.h&quot;
#include &quot;stdlib/napi/create_int32.h&quot;
#include &quot;stdlib/napi/argv_int32.h&quot;
#include &quot;stdlib/napi/argv.h&quot;
#include &quot;stdlib/napi/export.h&quot;
#include &lt;node_api.h&gt;

static napi_value addon( napi_env env, napi_callback_info info ) {
    STDLIB_NAPI_ARGV( env, info, argv, argc, 2 ); // retrieve function arguments
    STDLIB_NAPI_ARGV_INT32( env, x, argv, 0 );    // convert to C data type
    STDLIB_NAPI_ARGV_INT32( env, y, argv, 1 );    // convert to C data type
    int res;
    mul( &amp;x, &amp;y, &amp;res );
    STDLIB_NAPI_CREATE_INT32( env, res, out );    // convert to JavaScript object
    return out;
}

STDLIB_NAPI_MODULE_EXPORT_FCN( addon )
</code></pre><h3 id="specialized-macros-for-common-function-signatures">Specialized macros for common function signatures</h3><p>The use case explored in this post&#x2014;namely, calling a C/Fortran function which operates on and returns scalar values&#x2014;is something that we do quite often in stdlib, especially for testing native C APIs and sharing test logic across JavaScript and C implementations. Accordingly, stdlib provides several more <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/math/base/napi?ref=blog.stdlib.io">macro abstractions</a> which encapsulate all argument retrieval, argument validation, and module registration logic for certain input/output data type combinations.</p><pre><code class="language-c">// file: addon3.c

#include &quot;mul_fortran.h&quot;
#include &quot;stdlib/math/base/napi/binary.h&quot;

static int multiply( int x, int y ) {
    int res;
    mul( &amp;x, &amp;y, &amp;res );
    return res;
}

STDLIB_MATH_BASE_NAPI_MODULE_II_I( multiply )
</code></pre><p>Two comments regarding the code above:</p><ol><li><code>STDLIB_MATH_BASE_NAPI_MODULE_II_I</code> is a macro for registering a Node-API module for an exported function accepting two signed 32-bit integer input arguments and returning a signed 32-bit integer output value. This signature is encoded in the macro name as <code>II_I</code>.</li><li>We need to wrap the Fortran routine in a C function, as the module registration macro assumes that a registered <code>II_I</code> function expects arguments to be passed by value, not by reference, and returns a scalar value.</li></ol><h3 id="learning-from-real-world-examples-in-stdlib">Learning from real-world examples in stdlib</h3><p>For more details on how we author Node-API native add-ons and leverage macros and various utilities for simplifying the add-on creation process, the best place to start is by browsing stdlib <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">source code</a>. For the examples explored in this post, we&apos;ve brushed aside some of the complexity in ensuring cross-platform configuration portability (looking at you, Windows!) and in specifying compiler options for optimizing compiled code. For those interested in learning more, you&apos;ll find many more examples throughout the codebase, and, if you have questions, don&apos;t be afraid to stop by and say hi! &#x1F44B;</p><h2 id="conclusion">Conclusion</h2><p>In this post, we explored several aspects of authoring Node.js native add-ons, with a particular eye toward calling Fortran routines from JavaScript. This effort involved compilation and linking, writing C interfaces, module registration, and build configuration. Along the way, we relied on a variety of tools for generating build artifacts, including Fortran and C compilers, Node-API, and node-gyp. 
We touched on best practices and potential pitfalls, and we observed how <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> can make authoring Node.js native add-ons much easier.</p><p>All in all, it was a lot, with several moving parts and complex toolchains. But our exploration was well worth the effort. By leveraging Fortran&apos;s high-performance capabilities within Node.js, you can significantly enhance and accelerate your numerical and scientific computing tasks. With Node.js native add-ons, you can bridge the gap between modern web technologies and established scientific computing practices, providing a powerful toolset for you and others and opening the door to new and more powerful Node.js applications.</p><p>In future posts, we&apos;ll explore more complex use cases, including the ability to leverage hardware-optimized routines for linear algebra and machine learning. There&apos;s still a lot to learn and more ground to cover. We hope that you&apos;ll continue to follow along as we share our insights and that you&apos;ll join us in our mission to realize a future where JavaScript and the web are preferred environments for numerical and scientific computation. &#x1F680;</p><hr>
<!--kg-card-begin: html-->
<p class="dev-theme-author-blurb">
    <em>Pranav Goswami is a developer of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> and a computer science graduate who&apos;s passionate about technology, algorithms, compilers, and epic roadtrips.</em>
</p>
<!--kg-card-end: html-->

<!--kg-card-begin: html-->
<p class="dev-theme-author-blurb">
    <em>Athan Reines is a software engineer at <a href="https://quansight.com/?ref=blog.stdlib.io">Quansight</a> and core developer of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>.</em>
</p>
<!--kg-card-end: html-->
<hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p><hr><p>If you&apos;d like to view the code covered in this post on GitHub, please visit the source code <a href="https://github.com/stdlib-js/blog-calling-fortran-from-nodejs-1/tree/main?ref=blog.stdlib.io" rel="noreferrer">repository</a>.</p>
<!--kg-card-begin: html-->
<h2>License</h2>
<details>
    <summary>All code is licensed under <a href="http://www.apache.org/licenses/LICENSE-2.0?ref=blog.stdlib.io">Apache License, Version 2.0</a>.</summary>
    <pre><code class="language-text hljs">
Copyright (c) 2024 Pranav Goswami and Athan Reines.

Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
    </code></pre>
</details>
<!--kg-card-end: html-->
]]></content:encoded></item><item><title><![CDATA[Community Survey]]></title><description><![CDATA[If you're interested in all things web and number crunching, we'd love to hear from you!]]></description><link>https://blog.stdlib.io/community-survey/</link><guid isPermaLink="false">6684f10dd8eb7fcd9a96130e</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Wed, 03 Jul 2024 06:58:03 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1522202176988-66273c2fd55f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNvbW11bml0eSUyMHN1cnZleSUyMHNvZnR3YXJlfGVufDB8fHx8MTcxOTk5MzIwOHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1522202176988-66273c2fd55f?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNvbW11bml0eSUyMHN1cnZleSUyMHNvZnR3YXJlfGVufDB8fHx8MTcxOTk5MzIwOHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Community Survey"><p>We&apos;re running a community <a href="https://stdlib.io/survey?ref=blog.stdlib.io" rel="noreferrer">survey</a> to learn more about how people are interested in using web technologies for numerical and scientific computation. And we&apos;d love to get your input regarding what you identify as the biggest gaps in the numerical JavaScript ecosystem, what applications you&apos;re interested in, and where you think the future is headed.</p><p>The survey should take about 10-15 minutes and can be found at the following link:</p><p><a href="https://stdlib.io/survey?ref=blog.stdlib.io" rel="noreferrer">https://stdlib.io/survey</a></p><p>As developers of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io" rel="noreferrer">stdlib</a>, we spend much of our time deep in the weeds, trying to develop performant algorithms and write high quality code. 
While writing code and solving problems is what we love to do, we recognize that sometimes we need to take a moment and ensure our efforts are aligned with community needs and forward looking vision. And that&apos;s why we&apos;re running this survey.</p><p>Our hope is that, through this survey, we&apos;ll gain a better understanding of pain points, gaps, and use cases, which can be tricky to glean from issue trackers and pull requests alone. So, if you have opinions, here is your chance! Take the survey and tell us more! We&apos;d love to hear from you. &#x2764;&#xFE0F;</p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, head on over to the project repository and give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a>. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item><item><title><![CDATA[GSoC Projects Announced]]></title><description><![CDATA[We're thrilled to share that stdlib has been selected for Google Summer of Code, and we've been granted four slots this year!  
As a first-time GSoC organization, this is a significant milestone for us, and we couldn't be more grateful for this opportunity.]]></description><link>https://blog.stdlib.io/stdlib-gsoc-participants-announced/</link><guid isPermaLink="false">6634439387c5db24b8c5ab18</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Philipp Burckhardt]]></dc:creator><pubDate>Fri, 03 May 2024 03:09:38 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1453928582365-b6ad33cbcf64?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE2fHxzdW1tZXIlMjBjb21wdXRlciUyMGNvZGV8ZW58MHx8fHwxNzE5OTkyOTQwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1453928582365-b6ad33cbcf64?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE2fHxzdW1tZXIlMjBjb21wdXRlciUyMGNvZGV8ZW58MHx8fHwxNzE5OTkyOTQwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="GSoC Projects Announced"><p>stdlib is a fundamental numerical library for JavaScript. 
Our goal is to create a scientific computing ecosystem for JavaScript and TypeScript akin to NumPy and SciPy for Python, with a special focus on the unique features and constraints of the web.</p><p>The following four <a href="https://summerofcode.withgoogle.com/programs/2024/organizations/stdlib?ref=blog.stdlib.io" rel="noreferrer">projects</a>, which will all be instrumental in achieving our vision, were selected for this year&apos;s <a href="https://summerofcode.withgoogle.com/programs/2024/organizations/stdlib?ref=blog.stdlib.io" rel="noreferrer">Google Summer of Code program</a>:</p><p><a href="https://summerofcode.withgoogle.com/programs/2024/projects/gxSf9XqK?ref=blog.stdlib.io"><strong>Add BLAS bindings and implementations for linear algebra</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/aman-095?ref=blog.stdlib.io">Aman Bhansali</a><br><strong>Mentors:</strong> <a href="https://github.com/kgryte?ref=blog.stdlib.io">Athan Reines</a>, <a href="https://github.com/czgdp1807?ref=blog.stdlib.io">Gagandeep Singh</a></p><p>Aman will work on BLAS routines and their C, Fortran, and JavaScript implementations for linear algebra. His efforts will include the creation of Node.js bindings to hardware-optimized BLAS implementations. 
This work is key in making Node.js a viable option for data intensive computation.</p><p><a href="https://summerofcode.withgoogle.com/programs/2024/projects/PVwSgrbG?ref=blog.stdlib.io"><strong>Develop C implementations for base special mathematical functions</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/gunjjoshi?ref=blog.stdlib.io">Gunj Joshi</a><br><strong>Mentors:</strong> <a href="https://github.com/Planeshifter?ref=blog.stdlib.io">Philipp Burckhardt</a>, <a href="https://github.com/rreusser?ref=blog.stdlib.io">Ricky Reusser</a></p><p>Gunj&apos;s project focuses on developing C implementations for special mathematical functions in stdlib, enhancing performance, and enabling seamless integration of JavaScript and C implementations. This work will provide a critical component for stdlib&apos;s &quot;ufuncs&quot; (universal functions), which enable efficient element-wise computation on stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/ndarray/ctor?ref=blog.stdlib.io">n-dimensional arrays</a>.</p><p><a href="https://summerofcode.withgoogle.com/programs/2024/projects/8zUO9AU0?ref=blog.stdlib.io"><strong>Add support for Boolean arrays in stdlib</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/Jaysukh-409?ref=blog.stdlib.io">Jaysukh Makvana</a><br><strong>Mentors:</strong> <a href="https://github.com/Pranavchiku?ref=blog.stdlib.io">Pranav Goswami</a>, <a href="https://github.com/kgryte?ref=blog.stdlib.io">Athan Reines</a></p><p>Jaysukh will integrate support for boolean arrays in stdlib, enhancing functionality and expanding integration opportunities throughout various namespaces and APIs. 
This work will be especially important for enabling boolean array indexing in stdlib&apos;s <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/array/to-fancy?ref=blog.stdlib.io">fancy arrays</a>!</p><p><a href="https://summerofcode.withgoogle.com/programs/2024/projects/KRRdM6F8?ref=blog.stdlib.io"><strong>A better Node.js REPL for Numerical and Scientific Computing</strong></a><br><strong>Contributor:</strong> <a href="https://github.com/Snehil-Shah?ref=blog.stdlib.io">Snehil Shah</a><br><strong>Mentors:</strong> <a href="https://github.com/steff456?ref=blog.stdlib.io">Stephannie Jim&#xE9;nez Gacha</a>, <a href="https://github.com/Planeshifter?ref=blog.stdlib.io">Philipp Burckhardt</a></p><p>Snehil aims to build a better Node.js REPL tailored for scientific computing and data analysis using stdlib. With features like fuzzy auto-completion and syntax highlighting, the <a href="https://github.com/stdlib-js/stdlib/tree/develop/lib/node_modules/%40stdlib/repl?ref=blog.stdlib.io">enhanced REPL</a> will provide an interactive environment for data exploration.</p><p>Needless to say, this summer is shaping up to be a busy one for stdlib, and we&apos;re super excited to work with an absolutely stellar group of GSoC contributors to further our mission of pushing the web forward.</p><p>We hope that you&apos;ll join us in our mission to advance cutting-edge scientific computation in JavaScript. 
You can start showing your support by starring the project on GitHub today: <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">https://github.com/stdlib-js/stdlib</a>.<br></p><hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and please consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p><p></p>]]></content:encoded></item><item><title><![CDATA[Our Mission]]></title><description><![CDATA[We believe in a future in which the web is a preferred environment for numerical computation. 
To help realize this future, we've built stdlib.]]></description><link>https://blog.stdlib.io/our-mission/</link><guid isPermaLink="false">62be8bab85a14f4556819cdd</guid><dc:creator><![CDATA[Athan Reines]]></dc:creator><pubDate>Tue, 22 Aug 2023 05:52:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1587620962725-abab7fe55159?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGNvZGV8ZW58MHx8fHwxNjkyNzU4OTg3fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1587620962725-abab7fe55159?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGNvZGV8ZW58MHx8fHwxNjkyNzU4OTg3fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Our Mission"><p>With <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>, we believe in a future in which Node.js and the browser are the <strong>preferred</strong> environments for numerical and scientific computation. And we believe that you should be empowered to use precisely what you want and how you want it.</p><p>To this end, we&apos;ve built <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>, a standard library with an emphasis on numerical and scientific computation, written in JavaScript (and C) for execution in browsers and in Node.js. The library is fully decomposable, being architected in such a way that you can swap out and mix and match APIs and functionality to cater to your exact preferences and use cases.</p><p>Have a faster dispatch algorithm for <a href="https://stdlib.io/docs/api/latest/@stdlib/ndarray/array?ref=blog.stdlib.io">ndarray</a> loop selection? Great! 
Swap out our <a href="https://stdlib.io/docs/api/latest/@stdlib/ndarray/dispatch?ref=blog.stdlib.io">implementation</a> and use yours!</p><p>Have a more robust <a href="https://stdlib.io/docs/api/latest/@stdlib/random/base/randu?ref=blog.stdlib.io">pseudorandom number generator</a> (PRNG)? Super! Plug your PRNG into our APIs for generating pseudorandom number variates from various <a href="https://stdlib.io/docs/api/latest/@stdlib/random/base/normal?ref=blog.stdlib.io">statistical distributions</a>.</p><p>Developed an algorithm for more accurately computing the <a href="https://stdlib.io/docs/api/latest/@stdlib/math/base/special/riemann-zeta?ref=blog.stdlib.io">Riemann zeta function</a>? That&apos;s awesome! Leverage our infrastructure to create vectorized APIs supporting efficient array computation.</p><p>At every level, you are empowered to take control and build your own numerical computing functionality. We recognize that we don&apos;t have a monopoly on expertise. Some of our algorithms may not be the fastest. They may not be the most accurate. Nor are they always the most succinct and clever.</p><p>But we can assure you that everything that we do is built with a deep empathy for <strong>you</strong>, the consumer of stdlib. 
That&apos;s made evident in the quality of what we write, both in <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">code</a> and an abundance of <a href="https://stdlib.io/docs/api?ref=blog.stdlib.io">documentation</a>, in a painstaking attention to detail, in the prioritization of backward compatibility and stability, even when it means more work and cost to ourselves, and in the dogged pursuit of reworking everything until we get it right.</p><p>With stdlib, we <strong>never</strong> take shortcuts.</p><p>When you use stdlib, you can be absolutely certain that you are using the most thorough, rigorous, well-written, studied, documented, tested, measured, and high-quality code out there.</p><p>There&apos;s never any doubt when you&apos;re using stdlib. <strong>Everything</strong> we do is consistent with who we are and what we believe, forming a consistent whole having one voice and purpose: <strong>to write high-quality software to help realize our vision.</strong></p><p>So thank you for believing in what we believe and helping bring scientific computing to the web.</p><hr>
<!--kg-card-begin: html-->
<p class="dev-theme-author-blurb">
    <em>Athan Reines is a software engineer at <a href="https://quansight.com/?ref=blog.stdlib.io">Quansight</a> and core developer of <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a>.</em>
</p>
<!--kg-card-end: html-->
<hr><p><a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">stdlib</a> is an open source software project dedicated to providing a comprehensive suite of robust, high-performance libraries to accelerate your project&apos;s development and give you peace of mind knowing that you&apos;re depending on expertly crafted, high-quality software.</p><p>If you&apos;ve enjoyed this post, give us a star &#x1F31F; on <a href="https://github.com/stdlib-js/stdlib?ref=blog.stdlib.io">GitHub</a> and please consider <a href="https://opencollective.com/stdlib?ref=blog.stdlib.io">financially supporting</a> the project. Your contributions and continued support help ensure the project&apos;s long-term success and are greatly appreciated!</p>]]></content:encoded></item></channel></rss>