<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Allan Lei on Medium]]></title>
        <description><![CDATA[Stories by Allan Lei on Medium]]></description>
        <link>https://medium.com/@allanlei?source=rss-ac908026f10e------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*2Xedb2VixURITs74.jpeg</url>
            <title>Stories by Allan Lei on Medium</title>
            <link>https://medium.com/@allanlei?source=rss-ac908026f10e------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 11 Apr 2026 19:08:33 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@allanlei/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Pulsating Fade with FFmpeg]]></title>
            <link>https://medium.com/@allanlei/pulsating-fade-with-ffmpeg-bb2cedbe6305?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/bb2cedbe6305</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[ffmpeg]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Wed, 07 Sep 2022 18:34:02 GMT</pubDate>
            <atom:updated>2022-09-07T18:34:02.009Z</atom:updated>
<content:encoded><![CDATA[<h4>How to apply a continuous fade to a video source</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*xpMhOIj-X058fEy2E6019A.gif" /></figure><p>The goal of this walkthrough is to create a pulsating fade effect that can be continuously applied to an input stream whose duration is unknown, like a live source.</p><h3>geq to the rescue</h3><p>The geq filter allows <a href="https://www.ffmpeg.org/ffmpeg-all.html#geq">an equation to be applied to each pixel</a>. We can utilize this filter with the T variable, which references the time of the current frame.</p><p>For our example, we will combine it with a simple sin to get a cyclic effect with a multiplier.</p><pre>geq=a=&#39;abs(255*sin(T))&#39;</pre><ul><li>a / alpha_expr cannot be set alone. Either lum or one of r, g, b must be set alongside it. A hacky solution is to set rgb to its original value via r=r(X,Y)</li><li>abs is used here to simplify handling of sin’s negative cycle</li><li>The layer must be converted to have an alpha channel so that the alpha channel can be changed</li></ul><h4>Results</h4><p>Combining some knowledge from the previous post on <a href="https://medium.com/swag/blur-out-videos-with-ffmpeg-92d3dc62d069">Blur Out Videos with FFmpeg</a>, we get the end result. This takes one input source, generates a blurred version of it, adjusts the alpha according to time, then overlays the result on the base.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9a80b56eb359e7af9a91ec656fca8712/href">https://medium.com/media/9a80b56eb359e7af9a91ec656fca8712/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*XduiHJiPnh_IlvAJncJKHw.gif" /><figcaption>30s of Big Buck Bunny</figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bb2cedbe6305" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Presentations Day #1]]></title>
            <link>https://medium.com/swag/presentations-day-1-e4eaf8d97571?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/e4eaf8d97571</guid>
            <category><![CDATA[continuous-delivery]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[presentations]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Thu, 24 Feb 2022 04:09:34 GMT</pubDate>
            <atom:updated>2022-02-24T04:09:34.796Z</atom:updated>
<content:encoded><![CDATA[<h4>Winter 2021</h4><p>Not long ago, our RD teammates wrapped up a round of brilliant sharing sessions! This time, four colleagues presented topics they found interesting:</p><ul><li>Lego CI/CD with GitHub Actions</li><li>The wet codebase</li><li>Enlarge eyes by Apple face detection and Metal shader</li><li>Smooth skin principle and implementation</li></ul><p>Come take a look at this round of presentations!</p><h3>Enlarge eyes by Apple face detection and Metal shader</h3><ul><li>Introduce Metal and compare it to OpenGL ES</li><li>Introduce Vision</li><li>Enlarge eyes algorithm</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FP8MtZadt8Fs%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DP8MtZadt8Fs&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/473fc8ebcdcb19bbe464790becfce1e9/href">https://medium.com/media/473fc8ebcdcb19bbe464790becfce1e9/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fembed%3Fid%3D1iwpEVZXekjWRKPskZ9Pfra6xsw0-ceZLFB5xxRVfiSQ%26size%3Dl&amp;display_name=Google+Docs&amp;url=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fd%2F1iwpEVZXekjWRKPskZ9Pfra6xsw0-ceZLFB5xxRVfiSQ%2Fedit%3Fusp%3Dsharing&amp;image=https%3A%2F%2Flh6.googleusercontent.com%2Fko4UnMk11OLKiR7e6b6JyRcHB8MV6VhLyxVuLSIwxlIvNTAM2LzHEuqCGsg_ql9_jkZXEMjMHGGE-A%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="700" height="559" frameborder="0" scrolling="no"><a href="https://medium.com/media/9ff6fbca6d98f4b8101799d3b46a6065/href">https://medium.com/media/9ff6fbca6d98f4b8101799d3b46a6065/href</a></iframe><h3>Smooth skin principle and implementation</h3><p>This talk introduces a commonly used skin-smoothing filter in GPUImage, and explains in depth why a Gaussian blur filter is used there and how it works.</p><ul><li>Overview of filter pipeline</li><li>Box blur vs. 
Gaussian blur and how to implement them with a shader</li><li>Why Gaussian blur is suitable for smooth skin (Fourier transform, low-pass filter)</li><li>Filter pipeline principle summary</li><li>Directions to improve performance and combine with face detection</li></ul><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FMLGhE64gkWQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DMLGhE64gkWQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FMLGhE64gkWQ%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/2869ba03164e7bff9578f474cc0c2fe6/href">https://medium.com/media/2869ba03164e7bff9578f474cc0c2fe6/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fembed%3Fid%3D1N4Od8XfDZn93oMd1r3-Wnv_cbxPhiyqIRQjxo0eOurU%26size%3Dl&amp;display_name=Google+Docs&amp;url=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fd%2F1N4Od8XfDZn93oMd1r3-Wnv_cbxPhiyqIRQjxo0eOurU%2Fedit%3Fusp%3Dsharing&amp;image=https%3A%2F%2Flh6.googleusercontent.com%2Fg4yR7TlTESidymmJzSX6iMcMeRiyuhBugLg5uyl2JaDPRqGxD8HxYKLqJQQkEKhDXSVHNgDplbm4OQ%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="700" height="559" frameborder="0" scrolling="no"><a href="https://medium.com/media/e678e7542e23d32ddaaaa98f7f773e68/href">https://medium.com/media/e678e7542e23d32ddaaaa98f7f773e68/href</a></iframe><h3>Lego CI/CD with GitHub Actions</h3><p>Using features on GitHub (GitHub Actions, GitHub Apps, GitHub Deployments) to implement a GitOps flow that makes it easier for application developers and DevOps engineers to work together, and how this flow improves the development experience.</p><iframe 
src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FAXlIFU0PeA0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DAXlIFU0PeA0&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/513b308b0f2092f2b9a10ddcba115736/href">https://medium.com/media/513b308b0f2092f2b9a10ddcba115736/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fembed%3Fid%3D1-l31Z6MDAE1HfoMy9JyUcKMVghskdozXSm5BKMASW_s%26size%3Dl&amp;display_name=Google+Docs&amp;url=https%3A%2F%2Fdocs.google.com%2Fpresentation%2Fd%2F1-l31Z6MDAE1HfoMy9JyUcKMVghskdozXSm5BKMASW_s%2Fedit%3Fusp%3Dsharing&amp;image=https%3A%2F%2Flh5.googleusercontent.com%2F7NkI9JN4lNSZfGqqOEybno-muFkz34Y7zbsn__1yJb94sFKMPIZCNPUMksnlFs2enqPhf_Kvpf7xZA%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="700" height="559" frameborder="0" scrolling="no"><a href="https://medium.com/media/1b32860b4a9abf88df7ec5a4b81aea18/href">https://medium.com/media/1b32860b4a9abf88df7ec5a4b81aea18/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e4eaf8d97571" width="1" height="1" alt=""><hr><p><a href="https://medium.com/swag/presentations-day-1-e4eaf8d97571">Presentations Day #1</a> was originally published in <a href="https://medium.com/swag">SWAG</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Custom Installing NVIDIA Drivers for GKE]]></title>
            <link>https://medium.com/@allanlei/custom-installing-nvidia-drivers-for-gke-636ef6e7847e?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/636ef6e7847e</guid>
            <category><![CDATA[nvidia]]></category>
            <category><![CDATA[gke]]></category>
            <category><![CDATA[ffmpeg]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Fri, 26 Jun 2020 05:46:32 GMT</pubDate>
            <atom:updated>2020-06-30T11:36:25.293Z</atom:updated>
<content:encoded><![CDATA[<p>I was recently working with ffmpeg and NVIDIA T4 GPUs on GKE for an encoding pipeline. To get started with GPUs on GKE, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers">the NVIDIA drivers need to be installed on the nodes</a>. After installing, ffmpeg should be able to access NVIDIA GPU capabilities like nvenc, nvdec, yadif_cuda, etc. One of the filters we needed was scale, which has a GPU-accelerated version called scale_npp.</p><h3>The problem</h3><p>Unfortunately, scale_npp produced corrupt video when used.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5506b00e379107f417165cc16961d975/href">https://medium.com/media/5506b00e379107f417165cc16961d975/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3ESetX1UJx5Mm1i3MoAV_Q.png" /></figure><p>It turns out that the driver installed by the daemonset provided by GKE is version 410.79, which has some problems with NVIDIA T4 GPUs. Running the same commands on an NVIDIA Quadro RTX 5000 with the same drivers produced non-corrupt video.</p><p>The incompatibility seems to surface when scale_npp actually needs to do scaling. If the source dimensions equal the output dimensions, the video is non-corrupt. When the source dimensions do not equal the output dimensions, the video is corrupt.</p><h3>The Fix</h3><p>Taking a look at the <a href="https://github.com/GoogleCloudPlatform/container-engine-accelerators">daemonset</a>, the driver installation process is</p><ol><li>Attempt to download and install a precompiled driver</li><li>Fall back to compiling and installing (this always errored)</li></ol><p><a href="https://github.com/GoogleCloudPlatform/cos-gpu-installer/blob/master/cos-gpu-installer-docker/gpu_installer_url_lib.sh">The precompiled driver was hardcoded to a preformatted download location on a Google Cloud Storage bucket</a>. It would take the region and the default driver version specified in a script and attempt to download it.</p><p>Fortunately, the driver version could be configured from the env of the daemonset, but the exact driver version needed to be provided.</p><p>Running gsutil ls gs://nvidia-drivers-asia-public/tesla lists the downloadable drivers by version number.</p><pre>gs://nvidia-drivers-asia-public/tesla/384.183/<br>gs://nvidia-drivers-asia-public/tesla/390.116/<br>gs://nvidia-drivers-asia-public/tesla/396.26/<br>gs://nvidia-drivers-asia-public/tesla/396.37/<br>gs://nvidia-drivers-asia-public/tesla/396.44/<br>gs://nvidia-drivers-asia-public/tesla/396.82/<br>gs://nvidia-drivers-asia-public/tesla/410.104/<br>gs://nvidia-drivers-asia-public/tesla/410.72/<br>gs://nvidia-drivers-asia-public/tesla/410.79/<br>gs://nvidia-drivers-asia-public/tesla/418.126.02/<br>gs://nvidia-drivers-asia-public/tesla/418.40.04/<br>gs://nvidia-drivers-asia-public/tesla/418.67/<br>gs://nvidia-drivers-asia-public/tesla/418.87.00/<br>gs://nvidia-drivers-asia-public/tesla/418.87.01/<br>gs://nvidia-drivers-asia-public/tesla/440.64.00/</pre><p>After choosing the driver version you want, 440.64.00 in our case, make a copy of the daemonset, set the environment variable NVIDIA_DRIVER_VERSION=440.64.00, and re-install the daemonset.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6370a713b4a05b21562fe18df3e7e009/href">https://medium.com/media/6370a713b4a05b21562fe18df3e7e009/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=636ef6e7847e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Blur Out Videos with FFmpeg]]></title>
            <link>https://medium.com/swag/blur-out-videos-with-ffmpeg-92d3dc62d069?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/92d3dc62d069</guid>
            <category><![CDATA[media-processing]]></category>
            <category><![CDATA[ffmpeg]]></category>
            <category><![CDATA[engineering]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Sat, 07 Sep 2019 08:26:02 GMT</pubDate>
            <atom:updated>2019-09-07T08:33:44.909Z</atom:updated>
<content:encoded><![CDATA[<h4>Or how to utilize filter_complex</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*SKR4tOqHQ7KVKkKFQ57yxA.gif" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*6yiVdG4NBk5687mrQe4t2w.gif" /><figcaption>Original vs Blur Out</figcaption></figure><h3>Harnessing filter_complex</h3><p>FFmpeg’s filter_complex works in a similar fashion to <a href="https://www.tldp.org/LDP/GNU-Linux-Tools-Summary/html/c1089.htm">Unix pipes</a>. Take an input, modify it, output, then rinse and repeat. A filtergraph contains one or more filterchains, each of which contains an ordered series of filters to be applied to the input source. Like Unix pipes, you can get pretty creative and generate some great results.</p><h4>Splitting Outputs</h4><p>First, we will split a single input into 2 outputs with scaled resolutions.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/75a9180e10682d192c9dd09cc4e07ac4/href">https://medium.com/media/75a9180e10682d192c9dd09cc4e07ac4/href</a></iframe><ul><li>0:v: Select the first input source’s video stream only</li><li>split=2: Split the stream into 2 and store them as 360p and 720p</li><li>[360p]scale=-2:360[360p] and [720p]scale=-2:720[720p] scale the input to 360px and 720px height respectively while preserving aspect ratio, and send the output back to the same input name. The -2 tells the scaler to scale to an even number, as some formats do not support an odd number of pixels</li><li>-map &quot;[360p]&quot; 360p.mp4 and -map &quot;[720p]&quot; 720p.mp4 take each respective output and encode it to a file.</li></ul><p>The result will be 2 files with different resolutions. Keep in mind, all outputs created in the filtergraph must be connected to an output.</p><h4>Blurring</h4><p>The next component is to generate a blurred video. 
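</p><p>To see what the -2 in the scale step above amounts to, here is a small Python sketch of mine (not from the original post) of a proportional scale rounded to an even width; ffmpeg’s exact rounding may differ:</p>

```python
def even_scaled_width(src_w: int, src_h: int, target_h: int) -> int:
    """Proportionally rescale src_w for a new height, rounded to an even
    number -- roughly what scale=-2:<target_h> asks the scaler to do."""
    proportional = src_w * target_h / src_h
    return 2 * round(proportional / 2)

print(even_scaled_width(1920, 1080, 360))  # 640: a 16:9 source stays clean
print(even_scaled_width(850, 480, 360))    # 638: 637.5 rounded to an even width
```

<p>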
There are many blurring algorithms available, but for this task, we will be using <a href="https://ffmpeg.org/ffmpeg-filters.html#boxblur">boxblur</a>. This blur has 3 tunable components, each with a radius (box radius to apply to the frame) and a power (how many times to apply it to the frame):</p><ul><li>luma (luma_radius and luma_power): Brightness</li><li>chroma (chroma_radius and chroma_power): Color</li><li>alpha (alpha_radius and alpha_power): Transparency</li></ul><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ef3854acd203ad5edee5420957fc70f1/href">https://medium.com/media/ef3854acd203ad5edee5420957fc70f1/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*SKR4tOqHQ7KVKkKFQ57yxA.gif" /><figcaption>Original</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*arm62-mG8ncLW_dbR_z0tw.gif" /><figcaption>luma_radius=10:chroma_radius=10:luma_power=1</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*x5QlT5qejWbo6eCNSONXjg.gif" /><figcaption>luma_radius=50:chroma_radius=25:luma_power=1</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*yrg1FR2a1dJ_omXn1VgQqg.gif" /><figcaption>luma_radius=min(w\,h)/5:chroma_radius=min(cw\,ch)/5:luma_power=1</figcaption></figure><h4>Fading In</h4><p>The last part we need is a fade. This filter is fairly simple, as it takes a start and an end with either a fade in or a fade out. It supports both frames and time/duration. 
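</p><p>As a rough model (my sketch, not from the original post), a time-based fade in is just a linear ramp of the blend factor; the start time and duration below are illustrative:</p>

```python
def fade_in_alpha(t: float, st: float = 1.0, d: float = 3.0) -> float:
    """Blend factor at time t for a fade in starting at `st` seconds and
    lasting `d` seconds -- a linear ramp clamped to [0, 1]."""
    return min(max((t - st) / d, 0.0), 1.0)

print(fade_in_alpha(0.5))  # 0.0 (before the fade starts)
print(fade_in_alpha(2.5))  # 0.5 (halfway through the ramp)
print(fade_in_alpha(5.0))  # 1.0 (fully faded in)
```

<p>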
For our case, we will use time/duration and skip calculating frames.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ab88a792cf098797d1dae17bccc278cc/href">https://medium.com/media/ab88a792cf098797d1dae17bccc278cc/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*SKR4tOqHQ7KVKkKFQ57yxA.gif" /><figcaption>Original</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*OUl9Al9Y0-rkAEAKyCS3CA.gif" /></figure><h3>Peanut Butter Jelly Time</h3><p>Now that we have all the tools we need, let’s put this all together. The plan is to combine each of the filters above to create a Blur Out effect. A Blur Out is essentially 2 videos (the original and a blurred version) overlaid on top of each other with one of the videos fading.</p><p>For this example, we will choose the original video as the base and the blurred video as the fade in.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b625cc4d20f8b6fd8fada384a63bb705/href">https://medium.com/media/b625cc4d20f8b6fd8fada384a63bb705/href</a></iframe><ol><li>We will need to split the input into 2, base and blurred, so we use the split filter</li><li>Next we need to generate a blurred version using boxblur</li><li>Taking the blurred input, we add a fade in with start time 1s and duration 3s. The extra alpha=1 mentioned here is to allow the video to be transparent when faded out; otherwise the background color will be black</li><li>The final step is to use overlay to put the input sources on top of each other (like an onion). 
The alpha=1 from above would then allow the base to show through the blurred layer as it is being faded in.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*6yiVdG4NBk5687mrQe4t2w.gif" /><figcaption>Final results</figcaption></figure><h4>References</h4><ul><li><a href="https://trac.ffmpeg.org/wiki/FilteringGuide">FFmpeg Filtering Guide</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=92d3dc62d069" width="1" height="1" alt=""><hr><p><a href="https://medium.com/swag/blur-out-videos-with-ffmpeg-92d3dc62d069">Blur Out Videos with FFmpeg</a> was originally published in <a href="https://medium.com/swag">SWAG</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Compiling the SIMD version of python-rapidjson]]></title>
            <link>https://medium.com/@allanlei/compiling-the-simd-version-of-python-rapidjson-c1958bcc171a?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/c1958bcc171a</guid>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Mon, 07 Jan 2019 16:17:01 GMT</pubDate>
            <atom:updated>2019-01-07T16:17:01.108Z</atom:updated>
<content:encoded><![CDATA[<p>A recent task had me taking a look at alternative JSON libraries for performance. One of them was <a href="https://github.com/Tencent/rapidjson">python-rapidjson</a>, which offers support for SIMD.</p><h4>Compiling</h4><p>To get python-rapidjson to compile with SIMD, <a href="http://rapidjson.org/group___r_a_p_i_d_j_s_o_n___c_o_n_f_i_g.html#ga0ccf72f3ebc4b3306ab669f95ca5c64b">we need to define one of the SIMD macros</a>: either RAPIDJSON_SSE2, RAPIDJSON_SSE42, or RAPIDJSON_NEON.</p><p>The chosen flag then needs to be passed to pip during install via CFLAGS. Depending on the flag, you may have to pass some additional options.</p><ul><li>SSE2: CFLAGS=&quot;-DRAPIDJSON_SSE2=1&quot;</li><li>SSE4.2: CFLAGS=&quot;-DRAPIDJSON_SSE42=1 -msse4.2&quot;</li></ul><p>One-liner to re-install the currently installed version as the SIMD version.</p><pre>CFLAGS=&quot;-DRAPIDJSON_SSE42=1 -msse4.2&quot; pip -v install --force-reinstall --no-binary python-rapidjson $(pip freeze | grep python-rapidjson)</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c1958bcc171a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Moving forward with Live Stream Video Delivery]]></title>
            <link>https://medium.com/swag/moving-forward-with-live-stream-video-delivery-ee36176f4216?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/ee36176f4216</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[rtmp]]></category>
            <category><![CDATA[http2]]></category>
            <category><![CDATA[mpeg-dash]]></category>
            <category><![CDATA[livestream]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Sun, 09 Dec 2018 06:07:08 GMT</pubDate>
            <atom:updated>2018-12-09T06:10:49.005Z</atom:updated>
<content:encoded><![CDATA[<p>Many popular apps, like Twitch and Youtube Live, offer live streaming as a feature. With ever more users drawn towards video-based services, as seen with the <a href="https://socialblade.com/youtube/compare/t-series/pewdiepie">race to become the top-subscribed Youtube channel</a>, the technology behind video delivery is equally increasing in importance. While most services will integrate video as VOD, like much of <a href="https://youtube.com">Youtube</a> or <a href="https://www.douyin.com/">DouYin</a>, the challenge lies with live video delivery.</p><h3>So what’s available?</h3><p>Many solutions are available and the choice depends on use case, but there are mainly 2 options: RTMP and DASH/HLS. While HLS has the much larger market share compared to DASH, DASH is vastly superior in terms of feature offering and is able to support the majority of HLS’s features. <a href="https://www.encoding.com/files/2018-Global-Media-Formats-Report.pdf">Many workflows are also switching to, or dual supporting, DASH alongside HLS</a>.</p><h3>What’s the difference?</h3><h4>RTMP</h4><ul><li><strong>Persistent TCP connection</strong><br>Requiring a persistent TCP connection is a double-edged sword: it removes the connection overhead of a segment-based delivery protocol over HTTP, but it cannot lean on standard HTTP caching infrastructure.</li><li><strong>Low Latency<br></strong>Being push-based allows RTMP to achieve very low latency, with the potential to go as low as 0.5s.</li></ul><h4>DASH</h4><ul><li><strong>HTTP Based<br></strong>Being a segment-based delivery over HTTP is a huge plus (mostly). This means that DASH can take advantage of existing HTTP knowledge and infrastructure and any advancement in the HTTP spec. It can also take advantage of persistent connections in HTTP/1.1 or HTTP/2.0.</li><li><strong>Adaptive Streaming<br></strong>One of DASH’s main goals was to support multiple streams and the ability to seamlessly switch between them. 
The purpose for this was to support multiple bit rates so that the client can use the best fit for its case, reducing bandwidth requirements.</li><li><strong>Wider Container/Codec Support</strong><br>DASH is designed to be flexible, supporting MP4 and MPEG-TS and newer codecs such as HEVC and AV1.</li><li><strong>Discontinuity<br></strong>While mainly used for ad insertion, it is very straightforward to break a stream in progress.</li><li><strong>Content Protection<br></strong>Applications wanting to control content on devices have FairPlay, Widevine, and PlayReady.</li><li><strong>Controlling Group<br></strong>Unlike RTMP, where Adobe is the sole developer, DASH was created through the cooperation of many groups (Google, Apple, and Microsoft to name a few) and is <a href="https://dashif.org/about/">not owned by any single company</a>.</li></ul><h3>Roadblocks</h3><p>RTMP, being a proprietary protocol developed by Adobe (Macromedia) which uses FLV, requires Flash to be usable in web browsers. With Flash being<em> long gone</em>, there is no straightforward way to support RTMP in browsers. Instead, support in modern browsers is via FLV over HTTP (via <a href="https://github.com/Bilibili/flv.js">flv.js</a>), resulting in increased management overhead. Technology momentum is also very important. Like any technology, it should improve over time, but advancements to RTMP are nearly non-existent, and it has lost momentum over the years to other protocols like DASH, HLS and WebRTC.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/764/1*CvCdRMpEByPAkwcXk30P0g.jpeg" /><figcaption>2018 Encoding.com Video Developer Report</figcaption></figure><p>DASH also has areas that need to be addressed, with latency being one of the biggest. For applications that require near-realtime/ultra-low latency, DASH is not quite there yet. 
CMAF aims to improve this by introducing a specification for the encoding and transfer process.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*2c3KVkNZE8BukDTDmjCU5Q.png" /><figcaption><a href="https://bitmovin.com/cmaf-low-latency-streaming">https://bitmovin.com/cmaf-low-latency-streaming</a></figcaption></figure><h3>Paving the Road Ahead</h3><p><strong>HTTP/2.0 Server Push<br></strong>While Server Push is relatively unused, in terms of video delivery it can help move DASH even closer to RTMP in terms of latency by pushing segments immediately after encoding has finished.</p><p><strong>QUIC (aka HTTP/3.0)<br></strong>As mentioned above, being HTTP is a huge plus. With QUIC being standardized as HTTP/3.0, this means DASH will in essence receive free UDP support, in addition to better network switching and a more efficient TLS connection.</p><p>To be fair, <a href="https://zhuanlan.zhihu.com/p/33698793?fbclid=IwAR2N5GxyXH5gcessWsRf-ipD5IldL01TumwY0aF4OCaZ531jgJlZNkSzCXM">there has been some work done by QiNiu (a China-based RTMP CDN) to integrate RTMP over QUIC</a>.</p><p><strong>CMAF Low Latency<br></strong>Common Media Application Format for Low Latency is an encoding and transfer specification aimed at reducing latency by essentially sending IDRs as soon as they are encoded, without waiting for the entire segment to encode.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*ffRL9jcecIT2e0GIyjy7gA.png" /></figure><h3>Thoughts</h3><p>While RTMP still has clear use cases in publishing and low-latency viewing, it is quickly being replaced. 
Without a clear roadmap for RTMP and with DASH closing the gap on low latency, it seems very likely that DASH will emerge as the solution of choice in the coming years.</p><p><strong>References</strong></p><ul><li><a href="https://www.encoding.com/files/2018-Global-Media-Formats-Report.pdf">https://www.encoding.com/files/2018-Global-Media-Formats-Report.pdf</a></li><li><a href="https://blogs.akamai.com/2018/10/best-practices-for-ultra-low-latency-streaming-using-chunked-encoded-and-chunk-transferred-cmaf.html">https://blogs.akamai.com/2018/10/best-practices-for-ultra-low-latency-streaming-using-chunked-encoded-and-chunk-transferred-cmaf.html</a></li><li><a href="https://bitmovin.com/cmaf-low-latency-streaming">https://bitmovin.com/cmaf-low-latency-streaming</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ee36176f4216" width="1" height="1" alt=""><hr><p><a href="https://medium.com/swag/moving-forward-with-live-stream-video-delivery-ee36176f4216">Moving forward with Live Stream Video Delivery</a> was originally published in <a href="https://medium.com/swag">SWAG</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Auto-labeling GKE Nodes for XFS support]]></title>
            <link>https://medium.com/@allanlei/auto-labeling-gke-nodes-for-xfs-support-f873a2f7dc81?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/f873a2f7dc81</guid>
<category><![CDATA[xfs]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[gke]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Sun, 16 Sep 2018 15:30:18 GMT</pubDate>
            <atom:updated>2018-09-16T15:30:18.322Z</atom:updated>
<content:encoded><![CDATA[<p>To use XFS with Persistent Volumes, the host node needs to have the command xfs_mkfile available so disks can be created and formatted. The problem comes when needing to do this on GKE, where there are 2 OS images available: ubuntu, which has xfsprogs installed by default, and cos, which does not. Also, before ubuntu was available, container-vm was used for XFS, but <a href="https://medium.com/@allanlei/mounting-xfs-on-gke-adcf9bd0f212">required a separate install</a>.</p><p>The solution I took was to add an XFS support node label at node pool creation time when using the OS image ubuntu. While this sort of works, there are a couple of problems.</p><ul><li>The need to <strong>remember </strong>to add an XFS node label when creating a new node pool</li><li>The assumption that ubuntu has xfsprogs installed</li><li>The assumption that cos does <strong>not</strong> have xfsprogs installed. (It doesn’t right now, but you never know)</li></ul><h4>Detecting XFS host support</h4><p>Since provisioning a PersistentVolume is done on the host before the Pod is running, this complicates things: support needs to be checked on the host, but the detection is done in a Pod.</p><p>For this, we will use <a href="http://man7.org/linux/man-pages/man1/nsenter.1.html">nsenter</a>. nsenter allows running a process in a different namespace, in our case, the host. 
To use nsenter properly, we need to set hostPID: true and privileged: true, allowing us to <em>break out</em> of the pod into the host.</p><pre>hostPID: true<br>volumes:<br>- name: tmp<br>  emptyDir: {}<br>initContainers:<br>- name: detect<br>  image: wardsco/nsenter<br>  command: [&quot;sh&quot;, &quot;-eo&quot;, &quot;pipefail&quot;, &quot;-c&quot;]<br>  args: [&quot;nsenter -t 1 -m -u -i -n -p -- sh -c &#39;command -v xfs_mkfile&#39; &amp;&amp; touch /tmp/xfs_mkfile || true&quot;]<br>  securityContext:<br>    privileged: true<br>  volumeMounts:<br>  - name: tmp<br>    mountPath: /tmp/</pre><p>The command command -v xfs_mkfile runs on the host and detects whether xfs_mkfile is available.</p><h4>Labeling the node for support</h4><p>The /tmp volume mount passes the detection result into the labeling container, so that privileged: true can be dropped as soon as possible.</p><pre>initContainers:<br>- name: label<br>  image: wardsco/kubectl:1.11<br>  command: [&quot;sh&quot;, &quot;-eo&quot;, &quot;pipefail&quot;, &quot;-c&quot;]<br>  args: [&quot;kubectl label node --overwrite $NODE_NAME fs.type/xfs=$(test -e /tmp/xfs_mkfile &amp;&amp; echo &#39;true&#39; || echo &#39;false&#39;)&quot;]<br>  env:<br>  - name: NODE_NAME<br>    valueFrom:<br>      fieldRef:<br>        fieldPath: spec.nodeName<br>  volumeMounts:<br>  - name: tmp<br>    mountPath: /tmp/<br>    readOnly: true</pre><p>Using the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/">Downward API</a>, we can pass the node name into the pod. This container then labels the node it runs on with fs.type/xfs=true or fs.type/xfs=false, indicating support. With this, you can schedule pods with nodeAffinity.</p><p><em>Note:</em> kubectl label node requires extra permissions, which won’t be covered here. 
As a shortcut, running in the kube-system namespace with spec.serviceAccountName: node-controller provides these permissions.</p><h4>Putting it together</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/832ce586bf48746905b2b4cd0c383082/href">https://medium.com/media/832ce586bf48746905b2b4cd0c383082/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f873a2f7dc81" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Disabling Transparent Huge Pages in Kubernetes]]></title>
            <link>https://medium.com/@allanlei/disabling-transparent-huge-pages-in-kubernetes-9e8f60141fb1?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/9e8f60141fb1</guid>
            <category><![CDATA[gke]]></category>
            <category><![CDATA[redis]]></category>
            <category><![CDATA[kubernetes-engine]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[mongo]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Sat, 15 Sep 2018 21:40:38 GMT</pubDate>
            <atom:updated>2018-09-16T15:31:30.691Z</atom:updated>
<content:encoded><![CDATA[<p>I’ve recently needed to revisit some of our deployments that were created in the early days of GKE, when some useful features were not available. One component revisited was disabling the kernel setting <strong>Transparent Huge Pages (THP)</strong>, as recommended for <strong>mongo</strong> and <strong>redis</strong>.</p><p>The solution at the time was to use a DaemonSet running a startup script with <a href="https://github.com/kubernetes/contrib/blob/master/startup-script/startup-script.yml">gcr.io/google-containers/startup-script:v1</a>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/461d4cb82bd75f0321959d8b45447e19/href">https://medium.com/media/461d4cb82bd75f0321959d8b45447e19/href</a></iframe><p>There are a couple of areas that could be improved:</p><ul><li>hostPID and securityContext seemed excessive</li><li>No check that the setting actually changed</li><li>gcr.io/google-containers/startup-script:v1 is a relatively large image</li><li>Timing conflicts with pod scheduling</li></ul><h4>hostPID and securityContext</h4><p>Instead of using hostPID and privileged: true, we can mount the host’s /sys into the pod as a volume.</p><pre>volumes:<br>- name: sys<br>  hostPath:<br>    path: /sys</pre><pre>volumeMounts:<br>- name: sys<br>  mountPath: /rootfs/sys</pre><h4>Checking if settings applied</h4><p>This part is straightforward. We simply grep the property and return an appropriate exit code.</p><pre>grep -q -F &#39;[never]&#39; /sys/kernel/mm/transparent_hugepage/enabled<br>grep -q -F &#39;[never]&#39; /sys/kernel/mm/transparent_hugepage/defrag</pre><h4>Large Images</h4><p>This one is not a critical problem. gcr.io/google-containers/startup-script is 12.5 MB, but since we are essentially just running a shell script, it can be swapped for a slimmer image like busybox, which is 1.15 MB. 
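</p><p>As a rough sketch of where this is headed (the container name and busybox tag are illustrative), the same script fits in a busybox initContainer that writes the settings through the mounted host /sys and fails if they did not stick:</p><pre>initContainers:<br>- name: thp-disable<br>  image: busybox:1.36<br>  command: [&quot;sh&quot;, &quot;-ec&quot;]<br>  args:<br>  - |<br>    echo never &gt; /rootfs/sys/kernel/mm/transparent_hugepage/enabled<br>    echo never &gt; /rootfs/sys/kernel/mm/transparent_hugepage/defrag<br>    grep -q -F &#39;[never]&#39; /rootfs/sys/kernel/mm/transparent_hugepage/enabled<br>    grep -q -F &#39;[never]&#39; /rootfs/sys/kernel/mm/transparent_hugepage/defrag<br>  volumeMounts:<br>  - name: sys<br>    mountPath: /rootfs/sys</pre><p>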
Of course, busybox lacks the startup functionality of gcr.io/google-containers/startup-script. For this, we can use initContainers, which were unavailable at the time.</p><h4>Pod Scheduling Conflicts</h4><p>This problem refers to a dependency conflict where redis or mongo can be scheduled on a node where the kernel tuner has not yet completed. Since the process started before the setting was applied, it will not pick up the updated kernel settings and would need a restart.</p><p>For this problem, we can use node labels in conjunction with nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. This holds pod scheduling until a node with the proper labels exists.</p><p>To label a node from within a pod, there are some prerequisites.</p><ul><li>kubectl label node needs RBAC permission (skip if it is not required). In my case, I used the service account node-controller, which is created by default in the kube-system namespace on GKE, by setting serviceAccountName: node-controller</li><li>The pod needs to know the name of the node it lives on, via the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/">Downward API</a></li></ul><pre>initContainers:<br>- name: label-node<br>  image: swaglive/kubectl:1.11<br>  command: [&quot;kubectl&quot;]<br>  args: [&quot;label&quot;, &quot;node&quot;, &quot;--overwrite&quot;, &quot;$(NODE_NAME)&quot;, &quot;sysctl/mm.transparent_hugepage.enabled=never&quot;, &quot;sysctl/mm.transparent_hugepage.defrag=never&quot;]<br>  env:<br>  - name: NODE_NAME<br>    valueFrom:<br>      fieldRef:<br>        fieldPath: spec.nodeName</pre><p>Now to add the label restriction to the pods that need it.</p><pre>affinity:<br>  nodeAffinity:<br>    requiredDuringSchedulingIgnoredDuringExecution:<br>      nodeSelectorTerms:<br>      - matchExpressions:<br>        - key: sysctl/mm.transparent_hugepage.enabled<br>          operator: In<br>          values:<br>          - 
&quot;never&quot;<br>        - key: sysctl/mm.transparent_hugepage.defrag<br>          operator: In<br>          values:<br>          - &quot;never&quot;</pre><h3>Putting it all together</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c38146003c459c6ddbfc4ed4a2acdbef/href">https://medium.com/media/c38146003c459c6ddbfc4ed4a2acdbef/href</a></iframe><p>Using it with a redis deployment</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b19905dfd50cab707b3c8f7b7e97a03d/href">https://medium.com/media/b19905dfd50cab707b3c8f7b7e97a03d/href</a></iframe><ul><li><a href="https://gist.github.com/allanlei/17363387182a9e4fd5d0a987f2574600">Cluster Role Bindings</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9e8f60141fb1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Installing kubectl in a Kubernetes Pod]]></title>
            <link>https://medium.com/@allanlei/installing-kubectl-in-a-kubernetes-pod-a0d35655fc0f?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0d35655fc0f</guid>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Sun, 28 Jan 2018 15:07:58 GMT</pubDate>
            <atom:updated>2018-01-28T15:07:58.250Z</atom:updated>
<content:encoded><![CDATA[<h4>Without creating custom images</h4><p><strong><em>tl;dr</em></strong><em> </em>Use an empty volume, initContainers, and subPath to copy and mount kubectl.</p><h4>The Why</h4><p>I needed access to the Kubernetes API from within a pod so that the pod can label itself.</p><p>For example, I am currently working with redis and redis-sentinel. When sentinel triggers a reconfigure script, I want the pod to re-label itself to role=master or role=slave. I didn’t want to create a custom redis image that includes kubectl, as it would be another component to maintain.</p><p>Also, what if I needed to work with other images requiring kubectl? It seemed like a lot of maintenance going the custom-image route.</p><h4>The How</h4><p>First, create an empty volume to hold the kubectl binary.</p><pre>volumes:<br>- name: kubectl<br>  emptyDir: {}</pre><p>Next, using initContainers, copy the kubectl binary out of a Docker image into the volume. In this case, allanlei/kubectl is an image containing a static binary from <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl">kubernetes</a>.</p><pre>initContainers:<br>- name: install-kubectl<br>  image: allanlei/kubectl<br>  volumeMounts:<br>  - name: kubectl<br>    mountPath: /data<br>  command: [&quot;cp&quot;, &quot;/usr/local/bin/kubectl&quot;, &quot;/data/kubectl&quot;]</pre><p>Finally, mount the kubectl volume into the container using subPath. Without subPath, the entire mount path would be overridden or mounted as a directory, which is not the goal. subPath <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath">allows us to mount specific paths from the volume</a>.</p><pre>volumeMounts:<br>- name: kubectl<br>  subPath: kubectl<br>  mountPath: /usr/local/bin/kubectl</pre><p>You’re ready to go! 
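</p><p>For instance (the label key and value are illustrative, and the service account must be allowed to label pods), a reconfigure script can now have the pod re-label itself, since a pod’s hostname defaults to its pod name:</p><pre>kubectl label pod &quot;$(hostname)&quot; role=master --overwrite</pre><p>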
The container can now run kubectl, which is automatically authenticated via the pod’s Service Account.</p><h4><strong>Full Example:</strong></h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1296271abe70ca40a1de9c6915388769/href">https://medium.com/media/1296271abe70ca40a1de9c6915388769/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a0d35655fc0f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Running cron in Docker the Easy Way]]></title>
            <link>https://medium.com/@allanlei/running-cron-in-docker-the-easy-way-4f779bfa9ca7?source=rss-ac908026f10e------2</link>
            <guid isPermaLink="false">https://medium.com/p/4f779bfa9ca7</guid>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[cron]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[alpine]]></category>
            <dc:creator><![CDATA[Allan Lei]]></dc:creator>
            <pubDate>Thu, 06 Apr 2017 09:17:47 GMT</pubDate>
            <atom:updated>2017-04-06T09:17:47.325Z</atom:updated>
<content:encoded><![CDATA[<p>There are some situations where you just need to run a simple, single-command cronjob. This is where the alpine Docker image comes in very handy: it ships a simple yet flexible cron implementation via busybox.</p><h4>Single Line</h4><p>For very simple jobs, a single command using the default alpine image will do.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3a9e20115b41be1bf2c5114538b2300b/href">https://medium.com/media/3a9e20115b41be1bf2c5114538b2300b/href</a></iframe><h4>Mounted Volume</h4><p>For more complex crontabs, mount a crontab file!</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/213efc0e3517020ac1994ac1f5800396/href">https://medium.com/media/213efc0e3517020ac1994ac1f5800396/href</a></iframe><h4>Compiled Docker Image</h4><p>Building your own image with an entrypoint makes it cleaner to run.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/277f76ecb6453816dccabe0e0c9691b9/href">https://medium.com/media/277f76ecb6453816dccabe0e0c9691b9/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4f779bfa9ca7" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>