<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Teo Voinea on Medium]]></title>
        <description><![CDATA[Stories by Teo Voinea on Medium]]></description>
        <link>https://medium.com/@TeoVoinea?source=rss-7bf9fb960149------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*l1LsAaDRXhPE6p6vua3tAg.jpeg</url>
            <title>Stories by Teo Voinea on Medium</title>
            <link>https://medium.com/@TeoVoinea?source=rss-7bf9fb960149------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 07:21:39 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@TeoVoinea/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Imitating Rust’s trait system in JavaScript]]></title>
            <link>https://medium.com/@TeoVoinea/imitating-rusts-trait-system-in-javascript-c198f2440f17?source=rss-7bf9fb960149------2</link>
            <guid isPermaLink="false">https://medium.com/p/c198f2440f17</guid>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[generics]]></category>
            <category><![CDATA[rust]]></category>
            <category><![CDATA[traits]]></category>
            <category><![CDATA[safety]]></category>
            <dc:creator><![CDATA[Teo Voinea]]></dc:creator>
            <pubDate>Fri, 24 Nov 2017 15:59:36 GMT</pubDate>
            <atom:updated>2017-11-24T15:59:36.524Z</atom:updated>
<content:encoded><![CDATA[<p>Generics and traits are great for DRY code. They let you write code once and have it work (almost) everywhere. Unfortunately, Rust doesn’t have built-in support for sharing the same implementation across different concrete structs. As a workaround, Rust developers have come up with solutions like macros to avoid repetition.</p><h4>Traits in Rust</h4><p>Here’s a simple example of how Rust deals with traits:</p><p><a href="https://medium.com/media/2cc2ba9d64aadee0f57c0f0d3244706a/href">https://medium.com/media/2cc2ba9d64aadee0f57c0f0d3244706a/href</a></p><p>What happened there? I created two structs, Foo and Bar, and for each of them I implemented a function X() that prints their a field. This works because both Foo and Bar have a field a. Afterwards, I implemented a function Y() that prints the b field of Bar, since only Bar has a field named b. If I tried the same macro trick to implement Y() for Foo, the compiler would yell at me that Foo doesn’t have a field b.</p><h4>Let’s see how to do this in JavaScript</h4><p><a href="https://medium.com/media/0b2613a0ca32e4290c7396c7cd235dc9/href">https://medium.com/media/0b2613a0ca32e4290c7396c7cd235dc9/href</a></p><p>What happened here? Functions in JavaScript are passed a context argument when they are called. It’s assigned automatically when you invoke a function as X() or Y(), but if you invoke it with .call() you can specify your own context. The context is available as this inside the function. This lets us model a Rust struct as a plain JavaScript object; implementing a function for a struct is then simply a matter of passing our JavaScript object as the context to the function.</p><h4>Pros and Cons</h4><p>Ultimately, it all boils down to safety vs. flexibility.
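</p><p>As a rough sketch, the macro trick described above might look like this (hypothetical code reconstructed from the post’s description, not the original gist; the method names x and y are lowercased to follow Rust convention):</p>

```rust
// Two structs that share a field `a`; only Bar has `b`.
struct Foo { a: i32 }
struct Bar { a: i32, b: i32 }

// Macro that implements the same method body for several structs at once.
macro_rules! impl_x {
    ($($t:ty),*) => {
        $(impl $t {
            fn x(&self) { println!("a is {}", self.a); }
        })*
    };
}

impl_x!(Foo, Bar);

// `y` can only be implemented for Bar; expanding it for Foo would fail
// to compile because Foo has no field `b`.
impl Bar {
    fn y(&self) { println!("b is {}", self.b); }
}

fn main() {
    Foo { a: 1 }.x();
    let bar = Bar { a: 2, b: 3 };
    bar.x();
    bar.y();
}
```

<p>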
Rust guarantees at compile time that the struct supports the function being implemented. Using the earlier example, implementing Y() for Foo is impossible because Foo doesn’t contain the field b. In JavaScript this is entirely possible: Y.call(foo); would simply print a is 1 followed by b is undefined. On the other hand, JavaScript doesn’t require a “solution” like a macro to implement the same function for multiple structs.</p><p>In a future post I will expand on the similarities of function overriding between Rust and JavaScript.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SignKit-Learn: Using Machine Learning to converse with a bot in American Sign Language]]></title>
            <link>https://medium.com/@TeoVoinea/signkit-learn-using-machine-learning-to-converse-with-a-bot-in-american-sign-language-e122ae8da1b8?source=rss-7bf9fb960149------2</link>
            <guid isPermaLink="false">https://medium.com/p/e122ae8da1b8</guid>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[sign-language]]></category>
            <dc:creator><![CDATA[Teo Voinea]]></dc:creator>
            <pubDate>Tue, 14 Nov 2017 16:26:18 GMT</pubDate>
            <atom:updated>2017-11-14T16:26:18.227Z</atom:updated>
<content:encoded><![CDATA[<p>This was a 36-hour project at <a href="https://medium.com/u/b6ceba09c8ee">HackPrinceton</a>. Our goal was to bridge the communication gap between those with hearing and those without by creating a tool that makes it easier for people to practice their sign language skills with a bot. The app takes pictures of the user as they make gestures, analyzes each image using machine learning, converts the gesture to the corresponding word as text, and then sends that text to Microsoft’s chatbot. The chatbot then responds with an appropriate message to keep the conversation going.</p><h3>Picking a dataset</h3><p>We used Microsoft’s <a href="http://customvision.ai">CustomVision</a> to train our classifier and handle the API calls for classifying images. Our first plan was to teach the AI the whole ASL alphabet. We found a <a href="https://medium.freecodecamp.org/weekend-projects-sign-language-and-static-gesture-recognition-using-scikit-learn-60813d600e79">dataset</a> online with almost 2,000 images, but they were all of the same person in the same setting. This made it very difficult to classify our own signs, because we were different people and our pictures were taken in a different setting. Additionally, we had a limit of 1,000 training images, and with 26 classes that left fewer than 40 pictures per letter. We had an 8% prediction rate with this dataset.</p><p>Our second approach was to restrict our domain to only a few words. This would allow us to have more training images for each class. We would also generate our own dataset using a variety of people (other hackers at the hackathon) in a variety of settings (against a white wall, against a green wall, with people walking in the background, etc.) to build a more robust classifier.
Our training set has 1,000 pictures; here are a few of them:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*C93Z0JNvuu7r1Dxupl6IgQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-2VOKaQMHJaePxSHrqjp7A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Cv4OhAc83TsIFmDmhk0a-A.png" /><figcaption>Yes, Weather, Here (respectively)</figcaption></figure><p>With our new dataset ready and trained, we got an 80% prediction rate!</p><h4>Sign language isn’t static, so what about signs that involve movement?</h4><p>The time constraint didn’t allow for developing a context-aware solution to sign movement, so we had to improvise. Our solution was to select “key frames” of a particular sign’s movement and train our AI on those.</p><p>Here’s how you say “no” in ASL:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*N_EW3qCk_-cplnKZeyaggQ.gif" /><figcaption>For more ASL-related material, visit <a href="http://www.lifeprint.com">www.lifeprint.com</a>.</figcaption></figure><p>We chose 3 key frames for this sign:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*r90mWeCcsJlIsKFguJyAGQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*xB-OaFeeZcCykKLQMAEcCg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*IsNcoOUDyGrlLWY8k-0kfw.png" /></figure><p>This allowed us to capture any point during the motion of the sign and still classify it correctly.</p><h3>Not a perfect solution</h3><p>ASL (like any language) is much more complex than what you can capture in the span of 36 hours.
Here are a few of the limitations we identified:</p><ul><li>Languages are contextual, and the meaning of a sign can change based on what was said earlier</li><li>We had a <em>very</em> limited vocabulary</li><li>People have different “accents” when they sign</li><li>The signs we trained on mostly “looked” different, so we didn’t have to handle the minuscule changes in movement that can change a sign’s meaning</li><li>Many signs also involve facial expressions (like the man shaking his head for “no”), which we didn’t focus on in our training data</li></ul><h3>Communicating with the bot</h3><p>We used Microsoft’s BotFramework to develop a conversational bot. After parsing a sign into text, the Python server would prompt the bot API for a response. The bot was designed to hold a simple conversation with you by answering questions and asking a few of its own.</p><h3>A quick walkthrough</h3><p>Your webcam captures you signing the word “hello”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*wUjDkmWgwszgGGAotQ4vzg.png" /><figcaption><a href="https://medium.com/u/9864b6f12fe8">Jaison Loodu</a></figcaption></figure><p>The image is sent to CustomVision for classification, which returns a tag. That tag is sent to the bot, and the bot responds accordingly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/481/1*DKjBI1LN66yUTn1hKrH6gQ.png" /></figure><h3>Looking forward</h3><p>Since no special hardware is required (just a webcam) and all of the hard work is done by Microsoft’s servers, we could easily extend this into a mobile phone app.</p><p>Finally, we would love to extend our dataset and train on a greater variety of people to improve our classifier.
Some variables we would like to consider:</p><ul><li>Varying skill levels (all the images were of beginners)</li><li>Different lighting/camera conditions</li><li>Multiple people signing in one image</li></ul><p><a href="https://github.com/Rumsha7/signkit-learn">Rumsha7/signkit-learn</a></p><p>Special thanks: <a href="https://github.com/Jailoodu">Jaison Loodu</a>, <a href="https://github.com/sophiatao">Sophia Tao</a>, <a href="https://github.com/Rumsha7">Rumsha Siddiqui</a>, <a href="https://github.com/dreamInCoDeforlife">Aman Adhav</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From hacky Python to performant Rust]]></title>
            <link>https://medium.com/@TeoVoinea/from-hacky-python-to-performant-rust-a572187162b7?source=rss-7bf9fb960149------2</link>
            <guid isPermaLink="false">https://medium.com/p/a572187162b7</guid>
            <category><![CDATA[rust]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[steganography]]></category>
            <category><![CDATA[rustlang]]></category>
            <dc:creator><![CDATA[Teo Voinea]]></dc:creator>
            <pubDate>Mon, 18 Sep 2017 15:18:33 GMT</pubDate>
            <atom:updated>2017-09-18T15:18:33.838Z</atom:updated>
<content:encoded><![CDATA[<h4>How I turned a 24-hour Python project into Rust’s first stable <a href="http://github.com/teovoinea/steganography">steganography</a> library</h4><h3>How it got started</h3><p>The idea for this project came while I was at a hackathon in Montreal, Canada with <a href="https://medium.com/u/c1bb9204eeb3">Justin Leger</a> and <a href="https://twitter.com/icefalcn">@icefalcn</a>. Our <a href="http://github.com/icefalcn/pgp2img">plan</a> was to hide sensitive files inside unassuming ones like vacation photos and PDFs. While we succeeded in hiding the files behind an easy-to-use system, the implementation was, well… hacky. Our solution was to encrypt a sensitive file and append it to the end of an image. Looking at the image in a viewer wouldn’t give away the encrypted file tacked onto the end, but looking at the file size, or cat-ing the image, definitely would. With another hackathon come and gone, I set out to implement it properly.</p><h3>Why Rust</h3><p>I chose Rust because it had good library support for <a href="https://github.com/PistonDevelopers/image">image manipulation</a>, it worked seamlessly on both my Chromebook and my Windows boxes, and, most importantly, I wanted to gain expertise with the language.</p><h3>Getting to grips</h3><p>The library started out simply reading and writing buffers to the alpha channel of an image. It grew as I added utility functions for converting a String to an array of bytes and reading or writing that. Then I added converting whole files to byte arrays. And then I realized something: my library was becoming a library for encoding things as byte arrays to read and write into the alpha channel, when it should have been about actual steganographic methods.</p><h3>The Big Refactor AKA: 1.0.0 (stable)</h3><p>I factored the program into three modules: <strong>Encoder</strong>, <strong>Decoder</strong> and <strong>Util</strong>.
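</p><p>As a minimal sketch, the alpha-channel read/write idea might look like this (hypothetical code, not the crate’s actual API; it operates on a raw RGBA buffer where every fourth byte is a pixel’s alpha value):</p>

```rust
// Hide one payload byte in the alpha channel (every 4th byte) of each pixel.
fn encode_alpha(pixels: &mut [u8], payload: &[u8]) {
    for (i, &byte) in payload.iter().enumerate() {
        pixels[i * 4 + 3] = byte;
    }
}

// Read `len` bytes back out of the alpha channel.
fn decode_alpha(pixels: &[u8], len: usize) -> Vec<u8> {
    (0..len).map(|i| pixels[i * 4 + 3]).collect()
}

fn main() {
    // A fully opaque 32-pixel RGBA buffer.
    let mut img = vec![255u8; 32 * 4];
    let msg = b"This is a steganography demo!";
    encode_alpha(&mut img, msg);
    assert_eq!(decode_alpha(&img, msg.len()), msg.to_vec());
}
```

<p>A real implementation would also need to store the payload length (or a terminator) so the decoder knows where to stop.</p><p>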
I moved all the “added functionality” of converting things to byte arrays into the <strong>Util</strong> module. The core of the library, the actual steganography, now lives in <strong>Encoder</strong> and <strong>Decoder</strong>. With the library’s functionality nicely separated, it was very easy to add some new features! Alongside encoding bytes into the alpha channel of images, I also added encoding bytes as an image.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*BjlFG5GEz85tPjp6gQG7nw.png" /><figcaption>“This is a steganography demo!” is encoded into the alpha channel of the first 31 pixels</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/246/1*rtZL-Eh-Wk4aTNKnDeHWxg.png" /><figcaption>notepad.exe (Binary size: 237 KB, Image size: 194 KB)</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*E4nIuzX5zP6P4DF3jBGOWA.png" /><figcaption>mspaint.exe (Binary size: 6.35 MB, Image size: 3.16 MB)</figcaption></figure><p>If you look closely at the bottom of mspaint.exe, there are some nice 🌈s.</p><h3>Next steps</h3><p>Short term, adding a command line interface is the top priority.</p><p>It can be easy to spot <em>fishiness</em> when encoding bytes straight into the alpha channel, so I’d like to add some established steganographic methods like F5, JSteg and LSB.</p><p>The crate should be (and is!) easy to use; that’s how I got sidetracked into spending so much time on utility functions at the beginning. I want to keep the focus on steganography, but I see a big benefit in adding some simple encryption functions to the <strong>Util</strong> module.</p><p>Steganography isn’t limited to just images… The next big step will be adding methods to discreetly encode data in audio and video.</p><p>A performant binary that runs on many platforms and offers advanced steganographic methods through a simple interface could have a profound impact on privacy.
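</p><p>For reference, the LSB method mentioned above can be sketched as follows (hypothetical code: each payload bit is hidden in the least significant bit of one carrier byte, which is far less conspicuous than overwriting whole alpha bytes):</p>

```rust
// LSB steganography sketch: one payload byte spreads across the least
// significant bits of 8 carrier bytes, most significant bit first.
fn lsb_encode(carrier: &mut [u8], payload: &[u8]) {
    for (i, &byte) in payload.iter().enumerate() {
        for bit in 0..8 {
            let b = (byte >> (7 - bit)) & 1;
            let idx = i * 8 + bit;
            carrier[idx] = (carrier[idx] & 0xFE) | b;
        }
    }
}

// Reassemble `len` payload bytes from the carrier's least significant bits.
fn lsb_decode(carrier: &[u8], len: usize) -> Vec<u8> {
    (0..len)
        .map(|i| (0..8).fold(0u8, |acc, bit| (acc << 1) | (carrier[i * 8 + bit] & 1)))
        .collect()
}

fn main() {
    let mut carrier = vec![0xFFu8; 8 * 2];
    lsb_encode(&mut carrier, b"Hi");
    assert_eq!(lsb_decode(&carrier, 2), b"Hi".to_vec());
}
```

<p>Each carrier byte changes by at most 1, so the visual impact on an image is essentially imperceptible.</p><p>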
I imagine a DSLR camera that can hide pictures inside other pictures on the fly, giving reporters the freedom to report the truth without fear.</p><p><a href="https://teovoinea.com/steganography">steganography</a></p>]]></content:encoded>
        </item>
    </channel>
</rss>