<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Jonathan Reem on Medium]]></title>
        <description><![CDATA[Stories by Jonathan Reem on Medium]]></description>
        <link>https://medium.com/@jreem?source=rss-8454f4b26020------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*37M3WkRK6-tPVKU7.jpeg</url>
            <title>Stories by Jonathan Reem on Medium</title>
            <link>https://medium.com/@jreem?source=rss-8454f4b26020------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 11:05:33 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@jreem/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Rust Patterns: Using traits for function overloading]]></title>
            <link>https://medium.com/@jreem/advanced-rust-using-traits-for-argument-overloading-c6a6c8ba2e17?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/c6a6c8ba2e17</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Mon, 12 Jan 2015 04:36:01 GMT</pubDate>
            <atom:updated>2015-01-12T05:54:05.250Z</atom:updated>
            <content:encoded><![CDATA[<p>Rust doesn’t have function or method overloading built in, but you can use the extremely flexible trait system to get something very much like it.</p><p>Let’s imagine we are writing a simple HTTP server framework and we’d like to expose a method on our Response representation to set the body of the response. Since our Response stores the body as a Reader trait object, we can just write a simple method which accepts a Reader:</p><pre>impl Response {<br>    pub fn set_body(&amp;mut self, reader: Box&lt;Reader + Send&gt;) {<br>        self.body = reader;<br>    }<br>}</pre><p>That was easy — but, before we get ahead of ourselves, let’s try some typical use cases:</p><p>Reading from an arbitrary reader:</p><pre>res.set_body(Box::new(get_reader()));</pre><p>Reading from a file:</p><pre>use std::io::File;<br><br>res.set_body(<br>    Box::new(File::open(Path::new(&quot;./file.html&quot;)).unwrap())<br>);</pre><p>Reading from a string literal, Vec&lt;u8&gt;, or String:</p><pre>use std::io::MemReader;<br><br>res.set_body(<br>    Box::new(MemReader::new(&quot;bytes&quot;.as_bytes().to_vec()))<br>);</pre><p>It doesn’t take a lot of playing around to see that our API is incredibly verbose to use and unergonomic in several common cases — we can do better.</p><p>How can we solve this problem? 
We’d like to shield our users from the complexity of getting a Reader over some type, so we’ll just write a trait for converting to that type.</p><pre>trait IntoReader {<br>    type OutReader: Reader;<br>    <br>    fn into_reader(self) -&gt; Self::OutReader;<br>}</pre><p>Then, we’ll change our Response method to be generic over this trait:</p><pre>impl Response {<br>    pub fn set_body&lt;I&gt;(&amp;mut self, data: I)<br>    where I: IntoReader, I::OutReader: Send {<br>        self.body = Box::new(data.into_reader());<br>    }<br>}</pre><p>Now our set_body method will accept any of a multitude of types; all that’s left is to implement the IntoReader trait for the types we want to accept:</p><pre>use std::io::MemReader;<br><br>impl IntoReader for String {<br>   type OutReader = MemReader;<br><br>   fn into_reader(self) -&gt; MemReader {<br>       MemReader::new(self.into_bytes())<br>   }<br>}<br><br>impl IntoReader for Path {<br>   type OutReader = File;<br><br>   fn into_reader(self) -&gt; File {<br>       File::open(self).unwrap()<br>   }<br>}<br><br>// Some more impls for Vec&lt;u8&gt;, &amp;str, &amp;[u8], Box&lt;Reader&gt; etc.</pre><p>Now we can test our earlier use cases:</p><p>Reading from an arbitrary Reader:</p><pre>res.set_body(reader);</pre><p>Reading from a file:</p><pre>res.set_body(Path::new(&quot;./file.html&quot;));</pre><p>Reading from a string literal, Vec&lt;u8&gt;, etc.:</p><pre>res.set_body(data);</pre><p>By using a trait and a little bit of hidden polymorphism, we’ve managed to turn our cruddy API into a lean, efficient, and very easy-to-use one. The result of this transformation is very similar to what you’d get from method overloading, but much more extensible since downstream users can implement IntoReader for their own types!</p><p>In Iron, we took this trick to its logical extreme and created <a href="https://github.com/reem/rust-modifier">rust-modifier</a> for use with editing Responses. 
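</p><p>Pulled together, the pattern reads like this in today’s Rust, where std::io::Read and Cursor play the roles of the pre-1.0 Reader and MemReader (a sketch with illustrative names, not Iron’s actual API):</p>

```rust
use std::io::{Cursor, Read};

// Illustrative conversion trait in the spirit of the post's IntoReader.
trait IntoReader {
    type Out: Read;
    fn into_reader(self) -> Self::Out;
}

impl IntoReader for String {
    type Out = Cursor<Vec<u8>>;
    fn into_reader(self) -> Self::Out {
        Cursor::new(self.into_bytes())
    }
}

impl IntoReader for &'static str {
    type Out = Cursor<Vec<u8>>;
    fn into_reader(self) -> Self::Out {
        Cursor::new(self.as_bytes().to_vec())
    }
}

// Stand-in for the framework's Response type.
struct Response {
    body: Box<dyn Read + Send>,
}

impl Response {
    // One method, many accepted argument types: overloading via a trait bound.
    fn set_body<I>(&mut self, data: I)
    where
        I: IntoReader,
        I::Out: Send + 'static,
    {
        self.body = Box::new(data.into_reader());
    }
}

fn main() {
    let mut res = Response { body: Box::new(Cursor::new(Vec::new())) };
    res.set_body("hello");               // &str
    res.set_body(String::from("hello")); // String
    let mut out = String::new();
    res.body.read_to_string(&mut out).unwrap();
    assert_eq!(out, "hello");
}
```

<p>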
Modifiers allow setting the body much like the API we built above, but also allow users to set all sorts of other attributes and even define arbitrary new modifiers in downstream code.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Road to Rust Web 1.0]]></title>
            <link>https://medium.com/@jreem/the-road-to-rust-web-1-0-b712a5b7a973?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/b712a5b7a973</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Fri, 19 Sep 2014 00:55:54 GMT</pubDate>
            <atom:updated>2014-09-19T02:04:52.990Z</atom:updated>
<content:encoded><![CDATA[<p>Amidst all of the talk surrounding the <a href="http://blog.rust-lang.org/2014/09/15/Rust-1.0.html">Road to Rust 1.0</a>, I thought it would be fitting to talk about the corner of Rust with which I am most familiar, and what it will take to get Rust web dev to “1.0.”</p><p>First things first, let’s celebrate what we’ve accomplished!</p><p>Rust web dev has come a very long way in the past months. With the development of frameworks and libraries like <a href="https://github.com/iron/iron">Iron</a>, <a href="https://github.com/nickel-org/nickel.rs">Nickel</a> and <a href="https://github.com/conduit-rust/conduit">Conduit</a>, and of new fundamental HTTP implementations like <a href="https://github.com/teepee/teepee">Teepee</a> and <a href="https://github.com/hyperium/hyper">Hyper</a>, developing Rust web applications has become much easier.</p><p>Many Rust web projects are still under active development, with new features and improvements landing every day. For instance, Iron has landed a huge refactor, Nickel has landed middleware, and Hyper has landed a working beta.</p><p>Even with all this progress, we still have a ways to go. Most of these projects are still in beta, and all are pre-1.0 and short of any promise of stability. To get to the point where application developers will be ready to use Rust to design applications, these are the biggest things that have to stabilize:</p><ol><li>Application Frameworks</li><li>HTTP + HTTPS Server Implementations</li><li>HTTP + HTTPS Client Implementations</li><li>Database bindings</li></ol><p>That’s it. You will notice that this list has much in common with what we have now — but stabilized. I feel that many of the other things that some consider necessities can be free to grow on top of these fundamental layers without further support from them — things like an ORM, email, etc.</p><p>Getting to that stage is still hard. 
These are the biggest blockers I perceive on the way to the desired level of stability:</p><ol><li>Server-Side SSL and HTTPS</li><li>A non-blocking IO abstraction for building C10k servers</li><li>Confidence</li></ol><p>Server-Side SSL is relatively self-explanatory; we want HTTPS because we need secure connections. This is really a basic requirement for building non-trivial applications and it’s a serious wart that we don’t have it already.</p><p>Rust, not just web Rust, <em>needs</em> non-blocking IO to reach the pinnacle of its performance potential and for us to build massively concurrent systems — spawning 10k threads for 10k connections will not work out in the long run.</p><p>The blocking, threaded model that Rust exposes today is a pleasure to program in and is much better than something like node’s CPS, but it is also limiting. Spawning and scheduling threads is and will always be expensive, whereas low-level bindings over kqueue and epoll are cheap and will remain so. It’s a tradeoff, but this is a choice that users of Rust have to be able to make.</p><p>There is a lot of good discussion of this issue in the RFCs (<a href="https://github.com/rust-lang/rfcs/pull/219">one</a> and <a href="https://github.com/rust-lang/rfcs/pull/230">two</a>) for the removal of libgreen and the runtime, so I won’t drone on about it here.</p><p>Lastly, we need <em>confidence — </em>confidence in our APIs, our decisions, our tools, and our libraries. At some point we will need to decide, like Rust, what our “1.0” looks like, and we will need confidence in that decision if we want to drive Rust web development forward.</p><p>I will be spending some time this week considering what stability means for the projects I lead (you can expect more posts to come). 
I encourage everyone else with a name in the game to do the same.</p><p>Discuss on <a href="http://www.reddit.com/r/rust/comments/2gtfei/the_road_to_rust_web_10/">Reddit</a></p><p>Discuss on <a href="https://news.ycombinator.com/item?id=8338503">HN</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From The Forge — Rebuilding Iron]]></title>
            <link>https://medium.com/@jreem/from-the-forge-rebuilding-iron-953146828cc6?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/953146828cc6</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Mon, 25 Aug 2014 22:36:06 GMT</pubDate>
            <atom:updated>2014-08-25T22:37:59.042Z</atom:updated>
            <content:encoded><![CDATA[<h3>From The Forge — Rebuilding <a href="https://github.com/iron/iron">Iron</a></h3><p>New and improved!</p><p>Iron was originally designed, implemented, documented and released in three weeks, the first week of which was mostly me and my teammates learning Rust in the first place! We have come a long way since then, and so has Iron.</p><p>Today, I’m re-releasing Iron, version 0.0.<strong>1. </strong>I know, exciting, isn’t it. This is the first time I’m really comfortable putting a version mark down, because it’s the first time I think we’ve really gotten it <em>right.</em></p><p>The new Iron is a much more flexible framework that has had much more thought go into it than 0.0.0. It has an all-new middleware system, allocates much less, is better documented, and makes significantly more <em>sense</em>.</p><p>Without further ado, the major changes:</p><h3>#1: enter/exit =&gt; before/after</h3><p>enter/exit middleware are gone, replaced by the more familiar and more semantically meaningful before/after middleware system. In the new Iron, the old Middleware trait is replaced by:</p><pre>pub trait BeforeMiddleware {<br>    fn before(&amp;self, &amp;mut Request) -&gt; IronResult&lt;()&gt;;<br>    // some methods omitted<br>}</pre><pre>pub trait AfterMiddleware {<br>    fn after(&amp;self, &amp;mut Request, &amp;mut Response) -&gt; IronResult&lt;()&gt;;<br>    // some methods omitted<br>}</pre><pre>pub trait AroundMiddleware: Handler {<br>    fn around&lt;H: Handler&gt;(&amp;mut self, H);<br>}</pre><p>That’s right, not one but <strong>three</strong> traits. One of the things we learned this time is the importance of making Traits <em>composable (+) </em>rather than <em>inheritable </em>(:).</p><p>This refactor allows Middleware to be both Before and After if they want, but does not force them to implement both in cases where it is meaningless. 
It also allows Before and After middleware to be kept separately, and have their semantics be altered in important ways.</p><p>The third trait, AroundMiddleware, introduces around-middleware to Iron. AroundMiddleware must themselves be Handlers, and they are used to wrap <em>around </em>an existing handler and add functionality to it. What is a handler? Well, this brings us to our second change — the introduction of the Handler trait.</p><h3>#2 Handler</h3><p>This change introduces a new trait, Handler, and removes special treatment of Chain from the framework.</p><pre>pub trait Handler {<br>    fn call(&amp;self, &amp;mut Request) -&gt; IronResult&lt;Response&gt;;<br>}</pre><p>Each instance of Iron has a single Handler, and that Handler is responsible for doing one thing: handling a request by producing either a Response or an Error, which is translated into a 500 response for the client.</p><p>Handlers can be elegantly nested by using a Chain, since Chain now requires Handler as a bound. You can then wrap Handlers in other Handlers using AroundMiddleware, and the whole thing allows Chains to be easily used in any place that expects a Handler, such as in the controllers of a Router.</p><p>This also, as a side effect, takes care of the problem of generating defaults for a Response object. On the other hand, it also necessitates moving some of the constructors previously relegated to <a href="https://github.com/reem/iron-test">iron-test</a> into core, so that Handlers can conveniently create Response structs.</p><h3>#3 Clone =&gt; Send + Sync</h3><p>You may have noticed in the last few traits that most methods receive &amp;self rather than &amp;mut self. This is a result of another major change in Iron — Middleware and Handlers are stored behind an Arc so they are not copied for each Request. 
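</p><p>A minimal sketch of that sharing in today’s Rust, with toy types standing in for Iron’s Request, Response, and Handler (the signatures here are illustrative, not Iron’s real API):</p>

```rust
use std::sync::Arc;
use std::thread;

// Toy stand-ins for the framework types discussed above.
struct Request { path: String }
struct Response { status: u16 }

trait Handler: Send + Sync {
    // &self, not &mut self: the handler is shared, never copied per request.
    fn call(&self, req: &mut Request) -> Response;
}

struct Hello;
impl Handler for Hello {
    fn call(&self, _req: &mut Request) -> Response {
        Response { status: 200 }
    }
}

fn main() {
    // One handler behind an Arc, shared across request-serving threads.
    let handler: Arc<dyn Handler> = Arc::new(Hello);
    let workers: Vec<_> = (0..4)
        .map(|i| {
            let h = Arc::clone(&handler);
            thread::spawn(move || {
                let mut req = Request { path: format!("/{}", i) };
                h.call(&mut req).status
            })
        })
        .collect();
    for w in workers {
        assert_eq!(w.join().unwrap(), 200);
    }
}
```

<p>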
This is a huge win performance-wise, as copying large Middleware with complex clone semantics, such as Router, was a performance bottleneck.</p><p>Middleware which need to have data associated with them for every request can instead store that data in Request::extensions. With a private type as key, this will also keep the data accessible to only that Middleware.</p><p>This is a huge ergonomic win as well, as it means that Middleware no longer need to be Clone, which was sometimes difficult or impossible to implement for interesting types. In the worst case, Middleware can put themselves behind RwLocks so they become Send and Sync if they are not already.</p><h3>#4 Alloy =&gt; Plugins</h3><p>In most web frameworks, including the micro-frameworks in dynamic languages that early Iron was heavily inspired by, request and response extensions are just added as new fields and methods dynamically and in a specific order. This means that if you link your body-parsing middleware after your authentication, you will get repeated runtime errors.</p><p>For Request and Response plugins which do not modify control flow, Iron has a new approach: it uses <a href="https://github.com/reem/rust-plugin">rust-plugin</a> to provide typesafe, lazily evaluated extensions with a guaranteed interface that are automatically cached and order-independent.</p><pre>let body = req.alloy.find::&lt;ParseBody&gt;().unwrap().unwrap();</pre><p>has become:</p><pre>let body = req.get::&lt;Body, Json&gt;().expect(&quot;No Body&quot;);</pre><p>The new form parses the body by-need and only once, providing automatic caching. 
This makes Iron applications significantly more robust than before, as they will no longer implode dramatically if you forget to link a middleware — in fact, under the new system, bringing the Body type into scope would be enough to enable the above behavior; you don’t even need to link it to your chain.</p><h3>#5 AnyMap =&gt; TypeMap</h3><p>AnyMap served us extremely well for the first iteration of Iron, but a better abstraction that allowed values to be a different type than their keys was needed. In the newest Iron, AnyMap has been replaced by <a href="https://github.com/reem/rust-typemap">TypeMap</a>, an actual <em>key-value</em> store which is keyed by Types and can have many values of the same type.</p><p>TypeMap allows:</p><pre>struct Key;</pre><pre>let num = req.extensions.get::&lt;Key, uint&gt;();</pre><p>whereas with AnyMap you had to do:</p><pre>struct Value(uint);</pre><pre>let Value(num) = req.extensions.get::&lt;Value&gt;();</pre><p>which becomes tricky with common types like uint, String, or Url.</p><h3>#6 Request and Response</h3><p>In the earliest versions of Iron, Request and Response were just aliases for the equivalent types provided by rust-http. They were relatively low-level implementations and were not pleasant to work with on an application level.</p><p>Iron now has its own Request and Response representations that expose a high-level, extensible interface. 
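</p><p>The idea behind TypeMap, a map keyed by types where each key type declares the type of its value, can be sketched in a few lines of modern Rust (a toy version, not the actual rust-typemap crate):</p>

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// A key is a type; its associated Value is what the map stores under it.
trait Key: Any {
    type Value: Any;
}

// Toy type-keyed map in the spirit of TypeMap (not the real crate).
struct TypeMap {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl TypeMap {
    fn new() -> TypeMap {
        TypeMap { map: HashMap::new() }
    }
    fn insert<K: Key>(&mut self, value: K::Value) {
        self.map.insert(TypeId::of::<K>(), Box::new(value));
    }
    fn get<K: Key>(&self) -> Option<&K::Value> {
        self.map
            .get(&TypeId::of::<K>())
            .and_then(|boxed| boxed.downcast_ref::<K::Value>())
    }
}

// Two distinct keys may share a value type, unlike a plain AnyMap.
struct Retries;
impl Key for Retries { type Value = u32; }

struct Timeout;
impl Key for Timeout { type Value = u32; }

fn main() {
    let mut extensions = TypeMap::new();
    extensions.insert::<Retries>(3);
    extensions.insert::<Timeout>(30);
    assert_eq!(extensions.get::<Retries>(), Some(&3));
    assert_eq!(extensions.get::<Timeout>(), Some(&30));
}
```

<p>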
Both Request and Response contain an extensions TypeMap, meaning they can be used for automatic Plugins, and they are full of useful fields, such as a parsed Url struct, or a Reader for the Response body as opposed to a String.</p><p>This is a <strong>massive</strong> ergonomics win that makes Iron significantly more usable as an application framework.</p><h3>#7 Errors and Handling Errors</h3><p>Iron’s initial release had no error-handling at all — the only thing a middleware could do to signal something had gone wrong was to abort and unwind the stack.</p><p>To remedy this, I introduced a new variant to the now-removed Status enum: Error(Box&lt;Show&gt;). This had slightly different behavior than Unwind and allowed other handlers to at least know that an error had been thrown and perhaps, well, <em>show</em> it.</p><p>However, that is not a production-ready approach. It does not allow deep introspection of Errors or allow for the possibility of recovery. Iron now uses the error type from <a href="https://github.com/reem/rust-error">rust-error</a>, which will be at least convertible to and from stdlib errors. In addition to a new Error type, Iron now allows errors to occur at any time during the handling of a Request and has a full system for propagating, handling, and recovering from errors.</p><p>The hidden methods from the BeforeMiddleware, AfterMiddleware, and Handler traits all have to do with creating, propagating, and recovering from errors. I will defer to the <a href="http://ironframework.io/doc/src/iron/src/middleware.rs.html#1-62">documentation</a> to explain the full system.</p><p>Trivia: There is not a single call to fail!, {Option, Result}::unwrap, or a single unsafe block in the entirety of iron core.</p><h3>#8–100 Various things about the Place</h3><p>Many other small fixes have been made to Iron. 
It no longer clones Chain twice and it no longer blocks on listen, to take two tiny examples.</p><p>Generally, Iron is a much more hospitable place now. Many of the middleware still need refactoring and updating to bring them in line with Iron’s new approach, but the core framework has matured and evolved enormously into something we can proudly call version 0.0.1.</p><p>I hope you’ll take a second look. Thanks for listening.</p><p>Discuss on <a href="http://www.reddit.com/r/rust/comments/2ekmf2/rebuilding_iron/">Reddit</a></p><p>Discuss on <a href="https://news.ycombinator.com/item?id=8224622">Hacker News</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What should you get from an Error?]]></title>
            <link>https://medium.com/@jreem/what-should-you-get-from-an-error-6704dbdc4895?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/6704dbdc4895</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Wed, 20 Aug 2014 21:33:05 GMT</pubDate>
            <atom:updated>2014-08-20T21:35:16.615Z</atom:updated>
<content:encoded><![CDATA[<p>When crafting a general error type for <a href="https://github.com/iron/iron">Iron</a>, I came to a conclusion about the two different roles an error can serve: errors can be crafted for reporting, or they can be crafted for handling.</p><p>Iron has rather demanding needs from an error type — it needs to be extensible not only within Iron itself and its modules, but across library boundaries, and more importantly it has to be possible to write generic handlers that can <em>handle</em> those errors across library boundaries.</p><p>One way to solve this problem is to simply ignore the second requirement, that errors be handle-able, and stick with something optimized for reporting. Right now this looks like:</p><pre>fn error_producer&lt;T&gt;() -&gt; Result&lt;T, Box&lt;Show&gt;&gt;</pre><p>This is very limiting — the only thing we can do with a Box&lt;Show&gt; error type is show it. That gives us basic reporting capabilities, but since it’s impossible to do anything else with this error we might as well just forgo error handling entirely and just print all errors as soon as they happen.</p><p>We can do a little better with a specific error trait to give us a bit more information:</p><pre>pub trait Error {<br>   fn kind(&amp;self) -&gt; &amp;&#39;static str;<br>   fn description(&amp;self) -&gt; Option&lt;String&gt;;<br>   fn cause(&amp;self) -&gt; Box&lt;Error&gt;;<br>}<br><br>fn error_producer&lt;T&gt;() -&gt; Result&lt;T, Box&lt;Error&gt;&gt;</pre><p>Now we can actually do a tiny bit of work: we can print the kind of the error and its description separately, but more importantly we can track a stack of errors if we need to, which can be very helpful in disambiguating exactly where something started to go wrong.</p><p>But, there’s still a major problem with this. 
The only way we can <em>handle</em> an error with this scheme is by matching the string returned by Error::kind, which is error-prone and extremely sub-optimal for a language with a type system as powerful as Rust’s. We can do better.</p><p>One way around this is to use some phantom types:</p><pre>pub trait Error&lt;Mark: &#39;static, Cause&gt; { <br>   fn name(&amp;self) -&gt; &amp;&#39;static str;<br>   fn description(&amp;self) -&gt; Option&lt;String&gt;;<br>   fn cause(&amp;self) -&gt; Cause;<br>   fn is&lt;Other: &#39;static&gt;(&amp;self) -&gt; bool {<br>       TypeId::of::&lt;Other&gt;() == TypeId::of::&lt;Mark&gt;()<br>   }<br>}</pre><p>This Error trait makes use of a phantom type — Mark — to tell us what kind of Error it is in a more type-safe way than a string could. Now we can at least attempt to handle errors by checking if they are an error we can handle using is, and anyone can implement a new Error with a new Mark, so this remains extensible and movable across abstraction layers.</p><p>There’s one remaining problem with this scheme — let’s say I catch an Error&lt;ParseError, Original&gt; and now know I have a Parse Error and can handle it. There remains no way for me to get back to the original Parse Error representation. 
We’re back to our original goal: handling errors across library boundaries.</p><p>My original rust-error implementation dealt with this issue rather inelegantly, but I think that the solution here is a much better one — we can solve this by shifting around the way we represent Mark:</p><pre>pub trait Error: &#39;static {<br>    fn name(&amp;self) -&gt; &amp;&#39;static str;<br>    fn description(&amp;self) -&gt; Option&lt;String&gt;;<br>    fn is&lt;O: &#39;static&gt;(&amp;self) -&gt; bool {<br>        (self as &amp;Any).is::&lt;O&gt;()<br>    }<br>    fn cause(&amp;self) -&gt; Option&lt;Box&lt;Error&gt;&gt;;<br>    fn downcast&lt;O: &#39;static&gt;(&amp;self) -&gt; Option&lt;&amp;O&gt; {<br>        (self as &amp;Any).downcast_ref::&lt;O&gt;()<br>    }<br>}</pre><p>Currently 'static bounds on traits don’t work, but theoretically this would allow us to have safe downcasting to errors that we can actually work with, by allowing access to the actual error. This gives us runtime checking and handling of errors in an extensible way that allows us to, as our original goal states, <em>handle errors across library boundaries</em>.</p><p>I’ve implemented a working version of this proposal that compiles with today’s Rust nightly and doesn’t rely on 'static bounds <a href="https://github.com/reem/rust-error">here</a>. 
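</p><p>Viewed from today: std::error::Error eventually grew a very similar shape, with is and downcast_ref available on dyn Error trait objects. A minimal sketch against modern std (ParseError here is a hypothetical library error, not a real type):</p>

```rust
use std::error::Error;
use std::fmt;

// A hypothetical error a parsing library might surface.
#[derive(Debug)]
struct ParseError { line: usize }

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parse error on line {}", self.line)
    }
}

impl Error for ParseError {}

// A generic handler sees only Box<dyn Error>, yet can recover the concrete type.
fn handle(err: Box<dyn Error>) -> String {
    if let Some(parse) = err.downcast_ref::<ParseError>() {
        format!("recovered: retry from line {}", parse.line)
    } else {
        format!("unhandled: {}", err)
    }
}

fn main() {
    let err: Box<dyn Error> = Box::new(ParseError { line: 12 });
    assert_eq!(handle(err), "recovered: retry from line 12");
}
```

<p>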
That implementation exposes an API very similar to this one, but is <em>slightly</em> modified to work with the current state of rustc (specifically the lack of DST).</p><p>I propose that this error representation be used in Iron and, maybe, as Rust’s universal error type to enhance error interoperability throughout both the standard library and third-party libraries.</p><p>Discuss on <a href="http://www.reddit.com/r/rust/comments/2e4ch0/what_should_you_get_from_an_error/">Reddit</a>.</p><p>Discuss on <a href="https://news.ycombinator.com/item?id=8204789">Hacker News</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Explainer’s Fallacy]]></title>
            <link>https://medium.com/@jreem/the-explainers-fallacy-e20f8755c269?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/e20f8755c269</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Sat, 07 Jun 2014 20:41:01 GMT</pubDate>
            <atom:updated>2014-06-07T20:41:01.763Z</atom:updated>
<content:encoded><![CDATA[<p>In the mind of the classic explainer, learning has two steps: beginning and understanding — bridged first and foremost by a single realization brought about by the perfect explanation.</p><p>In reality, learning is exponential. It is a process that demands enormous investment and initially provides mediocre returns. It is not easy and it is not quick. Our understanding is not easily transformed and we often have very little idea of how we arrived at our current state.</p><p>Often, this makes us weak explainers of what we know. We sometimes struggle to connect with those who have yet to reach our level of understanding because we forget the first 80 percent of the learning process — when you have no idea what’s going on.</p><p>Getting through that first 80 percent is usually the hardest part of learning something new. It’s especially easy to feel as if every explanation is targeted at someone who knows more than you, and you can only parse a tiny fraction of the explanations which others deride as embarrassingly simple.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Zero Cost]]></title>
            <link>https://medium.com/@jreem/zero-cost-4202a034199b?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/4202a034199b</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Sat, 07 Jun 2014 20:26:28 GMT</pubDate>
            <atom:updated>2014-06-07T20:26:28.630Z</atom:updated>
<content:encoded><![CDATA[<h4>or why Rust</h4><p>Rust achieves Speed, Safety <em>and </em>Usability through the power of Zero Cost Abstractions — primarily robust pointer and type systems that offer safety and expressiveness with as close to zero performance penalty as possible.</p><p>I recently read a Quora question that asked why we don’t write all software in C or C++ as they are the fastest options. Most of the answers targeted the trade-off between computer hours and developer hours and how, today, developer hours are significantly more expensive than computer hours, and it usually makes very little sense to waste developer hours to save a few computer hours.</p><blockquote>Zero Cost Abstractions get you the best of both worlds; you get expressiveness and speed.</blockquote><p>Rust tries as hard as possible to make all of its features fit this category. For instance, Rust’s entire pointer system, which guarantees memory safety without a garbage collector, compiles to code that looks like the equivalent C. All pointers are C pointers; there is no tagging, no runtime checks, and generally just no runtime overhead for the safety the borrow checker gets you.</p><p>The same is true for Rust’s other complex abstractions: polymorphic functions, traits, type inference, etc. Rustc specializes and optimizes all polymorphic functions and traits and infers all types at compile time, offering you C-like performance with code that looks almost like Haskell.</p><p>Basically, Rust is full of win and you should try it. You get all the advantages and <em>almost</em> none of the drawbacks; it’s very much worth it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Promises and Callbacks]]></title>
            <link>https://medium.com/@jreem/promises-and-callbacks-77e55f00da83?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/77e55f00da83</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Sat, 07 Jun 2014 18:31:29 GMT</pubDate>
            <atom:updated>2014-06-07T18:31:29.316Z</atom:updated>
<content:encoded><![CDATA[<p>Promises are cool primarily for two reasons:</p><ul><li>Promises encapsulate the idea of a value that doesn’t exist yet or may never exist.</li><li>Promises define what it means to chain computations on values that don’t exist yet.</li></ul><p>Together, these two qualities allow code that uses promises to be much clearer and more flexible than code that uses callbacks. To make this clear, let’s write a cool function both using callbacks and using promises: asynchronous map.</p><p>Before we get started, following TDD, let’s write some simple tests so we know what our end goal is:</p><pre>describe('Callback Map', function () { <br> var data;<br> before(function () {<br>   data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];<br> });</pre><pre>  it('should provide the results of mapping an asynchronous operation over an array', function (done) {<br>   callbackMap(data, function (datum, callback) {<br>   setTimeout(function () {<br>     callback(datum * 2);<br>   });<br>   }, function (results) {<br>     for (var i = 0; i &lt; results.length; i++) {<br>       expect(results[i]).to.equal(data[i] * 2);<br>     }<br>     done();<br>   });<br> });<br>});</pre><pre>describe('Promises Map', function () { <br> var data;<br> before(function () {<br>   data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];<br> });</pre><pre> it('should provide the results of mapping an asynchronous operation over an array', function (done) {<br> promiseMap(data, function (datum) {<br>   return Q(datum * 2);<br> }).then(function (results) {<br>   for (var i = 0; i &lt; results.length; i++) {<br>     expect(results[i]).to.equal(data[i] * 2);<br>   }<br>   done();<br>   });<br> });<br>});</pre><p>The function we are creating is going to allow us to abstract away the concept of applying an asynchronous operation to many elements and then collecting the result asynchronously. 
We’re already starting to see a tiny bit of the advantages of promises in this test code, but it’s really hard to argue that it’s much better when it follows almost the same structure.</p><p>I think that the majority of objections to promises come up because the easiest examples to write are the ones that look like this — where the translation from callbacks to promises and back is extremely mechanical.</p><p>The utility of promises becomes much clearer when you write functions that make the mapping between callbacks and promises much murkier, and those are usually the places where promises look much better.</p><p>Let’s talk a little bit about the code we’d like to write:</p><pre>var callbackMap = function (data, op, cb) { <br> when (asyncData = map(data, op)) {<br>   cb(asyncData)<br> } <br>};</pre><p>I’ve used a fictional when block here to simplify the code, and just so we can imagine what the best possible implementation would look like. Unfortunately, there are some problems with this. Let’s enumerate them:</p><ul><li>when doesn’t exist</li><li>map(data, op) doesn’t do what we want it to</li></ul><p>We can overcome the lack of a “when” with some trickery, but it’s the second problem where we are going to start feeling the pain — the fictional map that we used here is actually the function we are trying to implement, and here we’ve arrived at the crux of the issue.</p><p>When you use pure callbacks, you can’t reuse any existing non-callback oriented higher-order functions or combinators because they aren’t designed to work with callbacks. 
This is extremely problematic!</p><p>To make this work, we need three things:</p><ul><li>A way to apply op to all the values in data</li><li>A way to collect all of the results of those applications</li><li>A way to tell when we are done</li></ul><p>Let’s get started:</p><pre>var callbackMap = function (data, op, cb) {<br>  // where we will keep the results<br>  var results = [];<br>  // apply op to all the values in data, and collect the results<br>  specialEach(data, op, results.push.bind(results));</pre><pre>  // Wait to be done, then call cb on the results.<br>  when(function () {<br>    // results is fully populated<br>    return results.length === data.length;<br>  }, 5, function () {<br>    cb(results);<br>  });<br>};</pre><p>This looks pretty awesome, right? Only a few lines of code and very close to what we had originally. We’re applying op to all of our data, collecting the results, and doing it all asynchronously.</p><p>The problem is that none of these magic helper functions, like when and specialEach, actually exist, and, more importantly, implementing them (especially when) is dirty and specialized.</p><p>It’s not really worth going into here, but I’ve created a GitHub repo with the full implementations and tests for these two functions that I’ll link to at the bottom of this post. Suffice it to say that when requires using setInterval to poll the condition, and if you try to get around that by firing events and registering listeners you are basically implementing promises yourself anyway.</p><p>For the promises version, this would be ideal:</p><pre>var promiseMap = function (data, op) {<br>  return squash(_.map(data, op));<br>};</pre><p>That’s it! 
Since op is just a regular function that takes a value and returns a value we can just use our existing map function to create a list of promises, then all we have to do is squash that list of promises into a single promise of a list.</p><p>Squash is actually a surprisingly elegant function to implement:</p><pre>var squash = function (data) { <br> return _.reduce(data, function (accPromise, promise) {<br>   return accPromise.then(function (acc) {<br>     return promise.then(function (val) {<br>       acc.push(val);<br>       return acc;<br>     });<br>   });<br> }, Q([]));<br>};</pre><p>Since promises have already abstracted away the idea of chaining, it’s extremely easy for us to just “then” two promises together into a single promise, and from there we can just use reduce to apply that combinator to a whole list of promises and voila, we have squash.</p><p>When we use promises, we actually get to write our ideal code. That’s awesome to me, at least.</p><p>I hope this has been helpful/informative/not-droning. Here’s the github repo I referenced earlier: <a href="https://github.com/reem/promises-vs-callbacks">Promises-vs-Callbacks</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=77e55f00da83" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Async and Parallel, a Restaurant Model]]></title>
            <link>https://medium.com/@jreem/async-and-parallel-a-restaurant-model-81f3c309c3c5?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/81f3c309c3c5</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Sat, 07 Jun 2014 18:27:04 GMT</pubDate>
            <atom:updated>2014-06-07T18:27:04.732Z</atom:updated>
            <content:encoded><![CDATA[<p>Let’s talk about waiters, or really, rather, one waiter.</p><p>See, there’s this pretty terrible restaurant with this one waiter in it, and every time he takes an order from a customer he goes to the kitchen, tells one of the cooks there’s an order, and then just stands by the counter and waits for that order to be ready.</p><p>In the meantime, he does pretty much nothing, and tons and tons of customers start piling in because, for some completely unknown reason, this terrible restaurant is really popular. By the time the first meal is ready and he brings it back to the first customer, enough new people have come to the restaurant that the fire marshals just come in and shut down the place.</p><p>That’s the single-threaded, synchronous model, and from that example it looks pretty bad. We’re waiting around and wasting a lot of time if we run our programs synchronously and sequentially.</p><p>Let’s explore some alternatives:</p><p>As restaurant owners (what, I didn’t tell you we own this place?), we have to figure out what the main inefficiencies in this system are and how we can make it go faster.</p><p>The obvious thing to do without increasing cost is to make our waiter not be a complete idiot and not wait at the kitchen for every single order. Now, after taking an order and talking to the chef, our waiter will go back and take more orders, so they are working as efficiently as they can and we get as much out of our one waiter as possible.</p><p>Instead of waiting for something to finish, every time they go back to the kitchen they just make a quick check to see if anything is ready, and if it is they bring it back to whoever ordered it. No waiting around, no customers piling in, and no fire marshals shutting down our amazing restaurant.</p><p>This is the asynchronous version. 
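Translated into a rough code sketch (a hypothetical illustration; the names and shapes are mine, not from the article, with cooking stubbed out):

```javascript
// The blocking waiter: takes one order, waits at the counter until the
// meal is cooked, serves it, and only then takes the next order.
function blockingWaiter(orders, cook) {
  var served = [];
  orders.forEach(function (order) {
    served.push(cook(order)); // stands idle while the kitchen works
  });
  return served;
}

// The asynchronous waiter: hands every order to the kitchen right away
// and delivers each meal whenever the kitchen signals it is ready.
function asyncWaiter(orders, cookAsync, deliver) {
  orders.forEach(function (order) {
    cookAsync(order, deliver); // no waiting at the counter
  });
}
```

Same single waiter in both sketches; only the second one keeps moving while the kitchen is busy.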
We still have one waiter, but now they don’t block and wait on expensive operations like cooking a meal; instead they fit that in the gaps and do work in the meantime — our waiter gets near 100% efficiency, but there’s still only one waiter.</p><p>Since our waiter can now actually serve customers, our restaurant has exploded in popularity. We now need to handle thousands of customers a day, and our one waiter just can’t handle it. Even the short time it takes them to get an order and bring it to the kitchen is too much.</p><p>The solution seems obvious at this point: just hire more waiters. This comes with some hidden drawbacks, though. Increasing the efficiency of our single waiter was free — we get more bang for the same buck — but this isn’t the same. Adding more waiters is going to cost us more, but has the potential to make our restaurant scale indefinitely, unlike our single, but very efficient, waiter.</p><p>Now we’ve got tons of waiters, all running around and taking and delivering orders with ridiculous efficiency. We’ve solved two problems: serving our current customers and figuring out a plan to scale to many, many more in the future.</p><p>These two patterns, parallel and asynchronous, are often framed in an adversarial relationship, but it doesn’t have to be that way.</p><p>It’s pretty clear that the most efficient way to run a restaurant is with many non-blocking waiters, not a single efficient one or many blocking ones. Both of those last two have obvious shortcomings that could be easily fixed by adopting the principles of the other paradigm.</p><p>I challenge you to write your code like you would run a restaurant: efficiently and scalably. 
Watch out for blocking operations that make your code stand still, burning clock cycles, and watch out for opportunities to scale your code to use multiple waiters.</p><p>Both approaches are valid and important; we should never ignore the possible benefits of each one.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=81f3c309c3c5" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Functional Programming is Black Magic]]></title>
            <link>https://medium.com/@jreem/functional-programming-is-black-magic-310084308678?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/310084308678</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Sun, 03 Nov 2013 21:32:11 GMT</pubDate>
            <atom:updated>2013-11-03T21:34:55.388Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/700/1*gSYc0TX8hAEo93QSE7QYgg.jpeg" /></figure><p>I promise that if you’ve worked entirely in OO languages, Haskell is unlike anything you have ever seen before. Even if you’ve worked in languages with some functional features, like Python or Ruby, Haskell will still thoroughly blow your mind.</p><p>To be honest, most programming languages are pretty similar to each other. I mean, you have some speed and syntax differences, but ultimately if you broaden your focus it’s not that hard to argue that writing code in one of many imperative or OO languages is a pretty similar experience to writing code in other OO or imperative languages — and that covers a HUGE number of languages, everything from Python to C# to JavaScript.</p><p>Haskell, like other purely functional languages, is completely different. They don’t just have syntax differences or little changes like duck typing versus static typing; they are based on <em>entirely different models of computation, </em>and that’s <strong>scary.</strong></p><p>Remember the first time you ever wrote any code? It was weird, hard, and more than a little strange. Learning Haskell is <em>just like that — </em>weird and hard, with an extra dose of strange.</p><p>Ultimately, though, Haskell is awesome. I’m not going to show you code here, just because I don’t think random Haskell code — no matter how awesome it looks in practice — will actually convince you to learn Haskell. Instead, here are just a few reasons why Haskell is amazing and functional programming is a dark art.</p><h3><strong>Higher-Order Functions</strong></h3><p>This is arguably the most important trait of Haskell and what makes it a functional language in the first place. 
This covers stuff like map, filter, reduce, etc., which are all going to be radically different from any tools you have seen before if you are coming from something exclusively OO, like Java.</p><p><strong>These are functions that take <em>other functions </em>as arguments — basically, they let you control, using very tight syntax, how functions are applied.</strong></p><p>Haskell lets you write functions that control other functions, and then you can write functions that, you guessed it, control <em>those</em> functions. By the time you are even two levels deep, you’ve invented map-reduce — which is powerful enough to power Google, and which you should, coincidentally, Google.</p><p>For instance, and you should Google this also, you can implement a linear-time, recursive Fibonacci algorithm by picking from an infinite list generated using zipWith, a higher-order function, in a single line.</p><h3><strong>Laziness</strong></h3><p>Haskell is a lazy language. I like to imagine different programming languages as having different personalities — Ruby is this eager, nerdy guy who will do anything you ask; Python is kind of demanding, he wants to do things <em>his </em>way, but he’ll do whatever you want as long as you let him grumble about it; Haskell, though, is a condescending asshole.</p><p>Ask Haskell to do <em>anything </em>at all and it’s just like, “Meh. Maybe later.” Haskell will only ever evaluate an expression when it absolutely has to, like when you ask it to print the result to the screen. Otherwise, it just adds whatever you want from it to its to-do list and moves on.</p><p>This seems kind of dumb: why wouldn’t you want your code to actually be evaluated when you want it to be evaluated? Well, I challenge you to do something like this in a strict language like Python:</p><pre>&gt; x = [1, x !! 0 + x !! 2, 3]<br>=&gt; [1, 4, 3]</pre><p>That’s right, I used the list… inside the declaration for the list. 
That, right there, that’s black magic, and it’s just one of the cooler things you can do with lazy evaluation. You can also operate on infinite lists or other data types, which makes for some equally crazy code that Just Works.</p><h3><strong>Purity</strong></h3><p>Purity is the bomb, and a pretty simple concept. Basically, for a function to be pure, it just has to return the same value every time you pass it the same arguments — it can’t rely on any kind of state contained outside of its definition. This is an informal explanation; the real formal ideas behind purity are harder to pin down.</p><p>What you need to know is that purity makes your program work. When you write impure functions, which is literally every single piece of code you have ever written in OO style, you are creating problems. If your code is impure, its behavior depends on stuff that could be going on hundreds of lines away or long before it is called. If your code is pure, all you need to know is what is going on inside the function.</p><p>Purity makes the all-important “does this function actually work?” question definitively answerable in a way that it just isn’t for impure code. Compared to reasoning about pure, functional code, reasoning about stateful OO code is like having to go through high school again.</p><h3><strong>And More</strong></h3><p>There’s absolutely <strong>tons </strong>of stuff I haven’t covered here that makes Haskell even cooler, but if you are very curious about Haskell (I hope you are!) 
here’s some suggested reading:<br><strong>To get more motivation:</strong></p><ul><li><a href="http://dave.fayr.am/posts/2011-08-19-lets-go-shopping.html">Functional Programming Is Hard, That’s Why It’s Good</a></li><li><a href="http://www.joelonsoftware.com/items/2006/08/01.html">Can Your Programming Language Do This?</a></li><li><a href="http://stackoverflow.com/questions/36504/why-functional-languages">Why functional languages?</a></li></ul><p><strong>To actually do it:</strong></p><ul><li><a href="http://learnyouahaskell.com/">Learn You a Haskell for Great Good!</a></li><li><a href="http://book.realworldhaskell.org/">Real World Haskell</a></li><li>If you like pain: <a href="http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Way/">Learn Haskell Fast and Hard</a></li></ul><p>I wish you luck on your journey in the fascinating enigma that is functional programming.<br><em>You should follow me on Quora here: </em><a href="http://www.quora.com/Jonathan-Reem">Jonathan Reem</a> <em>and on Twitter here: </em><a href="http://www.twitter.com/jreem"><em>@jreem</em></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=310084308678" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Something we all really wish would go away.]]></title>
            <link>https://medium.com/i-m-h-o/something-we-all-really-wish-would-go-away-8a143189eccf?source=rss-8454f4b26020------2</link>
            <guid isPermaLink="false">https://medium.com/p/8a143189eccf</guid>
            <dc:creator><![CDATA[Jonathan Reem]]></dc:creator>
            <pubDate>Tue, 10 Sep 2013 04:19:20 GMT</pubDate>
            <atom:updated>2013-09-10T04:19:20.198Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DxQDsoPJpOu0FsLa.jpeg" /><figcaption>Credit: Gates Foundation</figcaption></figure><h4>And nobody wants to talk about.</h4><p>Click <a href="https://www.againstmalaria.com/Donate.aspx">here</a> to save a life.</p><p>Did you click? Did you give? I know you did, but just in case: go ahead, it costs just $5 to give another human being a desperately needed mosquito net. Now did you give?</p><p>Awesome, I trust you, you gave $5 and another person with thoughts and hopes and dreams is going to get an item critical to their survival. Now do it again. Why not? It’s just $5. You probably (definitely) spent more last time you went to Starbucks.</p><p>Is it worth it (I know those yogurts are just <em>delicious</em>)? We’re talking about people’s lives here. You could afford to skimp on the coffee, right? Give another $5. Give another person security in their health.</p><p>I know you did, because you’re a good person, right? But like, those organic fruits you are eating are <em>just a few </em>bucks more than the conventional kind. Maybe you buy the conventional kind and give another $5 and contribute to saving another living, breathing human being.</p><p>Maybe.</p><p>Maybe that Uber ride, or that Ikea couch, or that ice cream, or that anything, isn’t worth it. Maybe it isn’t worth trading another human’s safety for a grande iced caramel macchiato even though you are just <em>so tired </em>and it’s just <em>soooo </em>delicious. Trust me, I’ve been there. So maybe you make the sacrifice: you donate another $50. Help save another <em>ten</em> people.</p><p>Maybe.</p><p>But probably not. Let’s be honest: the vast, overwhelming majority of my readers aren’t going to give a single cent to Against Malaria. (I implore you to click <a href="https://www.againstmalaria.com/Donate.aspx">here</a> and prove me devastatingly wrong.) 
I haven’t.</p><p>Here’s the easy question with the easy answer that doesn’t change how we live our lives:</p><blockquote>If it is within our power to stop the suffering of another human being without causing greater suffering, should we?</blockquote><p>Here’s the hard question with a hard answer that makes our lives harder:</p><blockquote>If it is within our power to stop the suffering of another human being without causing greater suffering, and we do not, are we in the wrong?</blockquote><p>It seems, to me at least, to be completely impossible to argue against the hard answer: <strong>Unequivocally, yes.</strong></p><p>There is no room in this answer or in the question for a clause based on distance or cultural group or religious group or political group. If you have the power to help another human being, you should, and if you consciously make the decision not to, you are in the wrong.</p><p>This is obvious in simple examples. If you walk down a street and a man comes out and tells you that he is going to shoot another person unless you ask him nicely not to, it is wrong of you to not ask him nicely not to (assuming, of course, he is telling the truth).</p><p>In this situation, it’s clear that while the man is in the wrong, if you do not intervene when it costs you nothing, so are you. If it is within our power to stop the death of another person and we sit idly by and allow it to happen, we share in the burden of guilt.</p><p>Now what if the man about to get shot is 3,000 miles away?</p><p>Oh shit.</p><blockquote>If idly allowing others to die implicates us in their death, then our conscious decision not to give an inconsequential amount of money to help save the life of another human being implicates us in their death.</blockquote><p>How far do we take this? 
Does it mean I should sell my TV and my couch and my bed and move to a cheaper house and sell everything I don’t absolutely need to continue giving and give all the money to help save other lives?</p><p>More importantly, would not doing so be wrong?</p><p>If we follow our moral compasses strictly, if we don’t allow the value of our own comforts to be multiplied exponentially when compared to the very lives of others, then <strong>yes.</strong></p><p>Again: shit.</p><p>There is no easy resolution. There is no philosophical trick to be played to excuse our lack of giving, our lack of massive, overwhelming intervention.</p><p>All there is is our guilt, our small, or large, part in the deaths of millions.</p><p>So give. <strong>Give first</strong>, then <strong>please recommend</strong> <strong>this</strong> so others give too.</p><p><em>My name is Jonathan Reem and I am as guilty as the rest of us.</em></p><p><em>I wrote this because the answers to the questions I have posed here are unsettling to me, and cause me to question the very way I and many others live our lives.</em></p><p><em>As I said, I have no answers, no solutions to excuse our behavior. The best I can do is to start discussion and provoke a conversation that many are unwilling to have. I hope I’ve succeeded.</em></p><p>Seeing as Medium doesn’t have a comments section (and I think this is a perfect example of why there should be an option to add one), please continue the discussion with me on Twitter @jreem or on Quora in the comments on this duplicate post <a href="http://jonathanreem.quora.com/Something-nobody-wants-to-hear-about">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8a143189eccf" width="1" height="1" alt=""><hr><p><a href="https://medium.com/i-m-h-o/something-we-all-really-wish-would-go-away-8a143189eccf">Something we all really wish would go away.</a> was originally published in <a href="https://medium.com/i-m-h-o">I. M. H. 
O.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>