<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Braden Riggs on Medium]]></title>
        <description><![CDATA[Stories by Braden Riggs on Medium]]></description>
        <link>https://medium.com/@bdriggs?source=rss-6b8dc7a69e8f------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*pDKh8zCGJPkSQmDPsX7s_A.jpeg</url>
            <title>Stories by Braden Riggs on Medium</title>
            <link>https://medium.com/@bdriggs?source=rss-6b8dc7a69e8f------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 13:15:37 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@bdriggs/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Turning 127 Million Data Points Into an Industry Report]]></title>
            <link>https://bdriggs.medium.com/turning-127-million-data-points-into-an-industry-report-1d097b9c14d6?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/1d097b9c14d6</guid>
            <category><![CDATA[sql]]></category>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[appsec]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Thu, 26 Mar 2026 20:22:55 GMT</pubDate>
            <atom:updated>2026-03-26T20:33:28.178Z</atom:updated>
            <content:encoded><![CDATA[<h4><strong>What I learned about data wrangling, segmentation, and storytelling while building an application security report from scratch</strong></h4><p>Earlier this year, I published an industry report called<a href="https://semgrep.dev/resources/remediation-at-scale/"> Remediation at Scale</a> analyzing how application security (AppSec) teams fix vulnerabilities in their code. The dataset: tens of thousands of repositories, a full year of scan data, and organizations ranging from startups to enterprises. In total, north of 127 million data points spanning individual findings, scan events, and remediation actions across two types of security scanning (SAST and SCA).</p><p>I’m a Senior Technical PMM at<a href="https://semgrep.dev/"> Semgrep</a> with a background in computer science, data science, and solutions engineering. I like building things. This project let me combine all of that in a single motion: writing the SQL, building scripts to manage the analysis, parsing and cleaning the data, finding the story the data is telling, and shipping the final polished asset.</p><p>This post walks through five lessons I picked up along the way. If you’ve ever had to take a massive dataset, find the narrative inside it, and turn it into something a technical and non-technical audience can act on, some of this might be useful.</p><h3>1. Start with the data, not the story</h3><p>The temptation with any data project is to decide your narrative first, then go looking for numbers to back it up. I did the opposite.</p><p>I spent weeks in pure exploration mode. Querying Snowflake, looking at distributions, running aggregations across different dimensions. No hypothesis, no angle. Just trying to understand what the data actually showed.</p><p>This was uncomfortable. Stakeholders wanted to know what the report would say. I didn’t have an answer yet.</p><p>But it turned out to be the most important phase of the entire project. 
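To make “exploration mode” concrete, here is a minimal sketch of the kind of distribution-and-coverage pass I mean — synthetic records and standard-library Python only; the real exploration ran as SQL against Snowflake, so treat the field names and values as purely illustrative:

```python
import statistics

# Synthetic scan records -- the real data lived in Snowflake, and the
# field names here are illustrative. None models a metric with partial
# coverage, exactly the kind of gap worth catching before the analysis starts.
records = [
    {"repo": "repo-a", "findings": 12, "parse_rate": 0.91},
    {"repo": "repo-b", "findings": 340, "parse_rate": None},
    {"repo": "repo-c", "findings": 7, "parse_rate": None},
    {"repo": "repo-d", "findings": 55, "parse_rate": 0.88},
]

def profile(rows, field):
    """Report coverage and a rough distribution for one metric."""
    values = [r[field] for r in rows if r[field] is not None]
    return {
        "coverage": len(values) / len(rows),  # how many rows have it?
        "median": statistics.median(values),  # central tendency
        "max": max(values),                   # any obvious outliers?
    }

for field in ("findings", "parse_rate"):
    print(field, profile(records, field))
```

Nothing sophisticated — the point is to look at every metric’s shape and coverage before trusting it in an aggregate.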
The data told a story I wouldn’t have guessed: the gap between top-performing security teams and everyone else wasn’t about tooling. It was about systematic follow-through on remediation. I never would have landed on that framing if I’d started with a thesis.</p><p>You also have to be willing to kill your darlings. There were several findings I wanted to be true that the data didn’t support. On the flip side, some of the most interesting insights came from places I wasn’t looking. I used local LLMs via Ollama to classify 10,000+ text-based triage records into 20 thematic categories. What emerged was a clear pattern: the most common themes were about test files, framework protections, and trusted services. That told a story about how teams actually use triage tooling that I never would have found by looking at aggregate metrics.</p><p>A few things that helped during exploration:</p><ul><li><strong>Run diagnostic queries first.</strong> I built a set of 12+ data quality checks before touching the analysis. One of them caught that a key metric (parse_rate) only had coverage for a fraction of repos. I switched to an alternative field (NUM_BYTES_SCANNED) with 90%+ coverage. Without that diagnostic, the entire findings-per-lines-of-code analysis would have been mis-computed.</li><li><strong>Build checkpoint/resume into your pipeline.</strong> I had 108+ SQL queries across multiple report sections. I wrote a shell script that auto-discovered .sql files, tracked which ones had already produced output CSVs, and skipped them on re-runs. When queries failed midway through (and they did), I could pick up right where I left off instead of re-running everything.</li><li><strong>Document as you go.</strong> Every interesting result, every dead end, every assumption. 
That running log became the backbone of the report’s methodology section and saved me weeks when I needed to retrace my steps.</li></ul><figure><img alt="Image shows the Author’s pipeline for running queries against the database automatically. The image highlights the 108 total queries as well as the diagnostic queries." src="https://cdn-images-1.medium.com/max/1024/1*7HtezG_7r8-Eyof61Kt1hg.png" /><figcaption>Shell script for auto-discovering and running queries for the report. Image by Author.</figcaption></figure><h3>2. Become the domain expert</h3><p>You can’t tell a story about data you don’t understand. Before I could write a single section, I needed to know how static analysis scanners work, how remediation flows operate in practice, and what metrics actually matter to security teams.</p><p>Several companies in the space publish annual reports on similar topics. I collected and read as many as I could find. Not to copy, but to understand the format, the depth, and the expectations. Reading them gave me a sense of:</p><ul><li>What the industry expects from this kind of resource</li><li>What’s already well-covered</li><li>Where there’s room to say something new</li></ul><p>This also helped me spot gaps. Most reports focus on detection volume. Very few dig into what happens after detection. That became our angle.</p><p>Skipping this phase would have meant writing a report full of surface-level observations that didn’t differentiate it from the strong content others in the space had already published.</p><h3>3. Talk to your target audience early and often</h3><p>Early versions of the analysis just showed averages. Average fix rate, average time to remediate, average findings per repo. The numbers were fine. The story was boring.</p><p>The breakthrough came after talking to actual practitioners: the security engineers, AppSec leads, and CISOs who would be reading the final product.
Everyone wanted to answer one question: <em>how do I compare to teams that are doing this well?</em></p><p>That feedback directly shaped two of the biggest decisions in the report.</p><p>First, it led to a cohort-based segmentation. I split organizations into two groups: the top 15% by fix rate (“leaders”) and everyone else (“the field”). This is similar to how survey-based reports segment by maturity level, except I was using behavioral data rather than self-reported responses. Suddenly the data had contrast:</p><ul><li>Leaders fix 2–3x more vulnerabilities</li><li>They resolve findings caught during code review 9x faster than findings from full repository scans</li><li>They adopt workflow automation features at higher rates and extract more value from them</li></ul><p>The segmentation was the difference between “here are some numbers” and “here is something you can act on.”</p><figure><img alt="Bar chart showing the difference in code vulnerability fix rates between “Leader” cohorts and “Field” cohorts." src="https://cdn-images-1.medium.com/max/886/1*MMdejbFZR_cYOBBki-fkeQ.png" /><figcaption>Splitting cohorts into leaders and field gives the reader a frame of reference for where their program stands. It also helps frame talking points and findings. Image by Author.</figcaption></figure><p>Second, it reshaped the report’s structure. People didn’t just want benchmarks. They wanted to know what to do about them. “Great, the leader cohort fixes more code security vulnerabilities. How do I become a leader?” That feedback led me to add an evidence-based recommendations section organized by implementation speed:</p><ul><li>Quick wins for this week</li><li>Process changes for this quarter</li><li>Strategic investments for the half</li></ul><p>The final report reads as much like a playbook as it does a benchmark. None of that would have happened without putting early drafts in front of actual readers.</p><h3>4. 
Get design involved early</h3><p>This one I almost learned too late. Data reports live or die on how they look. A wall of charts with no visual hierarchy is just as bad as no data at all.</p><p>I brought in our design team earlier than I normally would and spent time walking them through the domain. What does “reachability analysis” mean? Why does the cohort split matter? When the designers understood the story, they made choices (color coding for cohorts, callout boxes for key insights, before/after code examples) that reinforced it without me having to explain in text.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EU8B2CQo4mbdJUnUdcVAOQ.png" /><figcaption>Unused proof-of-concept rendering of the report cover graphic. Note the 2.4x Remediation Gap. Image used with permission.</figcaption></figure><h3>5. Give yourself time</h3><p>This project took months. The data exploration alone was weeks. Then there were iterations on the analysis as I found new angles, design cycles, legal reviews, and rounds of feedback from stakeholders across the company.</p><p>If I had tried to ship this in a quarter, the result would have been forgettable.</p><h3>Where it landed</h3><p>Looking back, the two things I’d change are both about speed. I’d write down every definition and assumption on day one. Things like “what counts as an active repository” or “how do we calculate fix rate” seem obvious at the start. They become contested fast. I eventually created a formal definitions document covering 40+ metrics, but doing it earlier would have saved several rounds of rework. And I’d bring in a second set of eyes during exploration. Working solo meant no one to gut-check whether a finding was interesting or just noise.</p><p>The report itself,<a href="https://semgrep.dev/resources/remediation-at-scale/"> Remediation at Scale</a>, covers six evidence-backed patterns that separate high-performing security teams from the rest. 
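To make the leader/field mechanics concrete, here is a toy sketch — synthetic numbers and standard-library Python only; the actual segmentation ran in SQL over behavioral data, and the real percentile and tie handling may differ — of cutting a population at the 85th percentile of fix rate:

```python
# Synthetic fix rates for 100 hypothetical organizations (0.00-0.99).
fix_rates = {f"org-{i}": i / 100 for i in range(100)}

# "Leaders" are the top 15% of organizations by fix rate;
# everyone else is "the field".
cutoff = sorted(fix_rates.values())[int(len(fix_rates) * 0.85)]
leaders = {org for org, rate in fix_rates.items() if rate >= cutoff}
field = set(fix_rates) - leaders

print(f"cutoff={cutoff:.2f}, leaders={len(leaders)}, field={len(field)}")
# -> cutoff=0.85, leaders=15, field=85
```

Everything downstream — the 2–3x fix-rate gap, the 9x code-review speedup — falls out of comparing aggregates across these two groups.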
If you’ve tackled a similar data-heavy reporting project, I’d be curious to hear what you learned along the way.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1d097b9c14d6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Broadcast a WebRTC Stream on Twitch]]></title>
            <link>https://bdriggs.medium.com/how-to-broadcast-a-webrtc-stream-on-twitch-238c96b1556c?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/238c96b1556c</guid>
            <category><![CDATA[live-streaming]]></category>
            <category><![CDATA[streaming]]></category>
            <category><![CDATA[obs]]></category>
            <category><![CDATA[twitch]]></category>
            <category><![CDATA[webrtc]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Wed, 06 Mar 2024 20:42:33 GMT</pubDate>
            <atom:updated>2024-03-06T20:42:33.546Z</atom:updated>
            <content:encoded><![CDATA[<h4>Learn how to create a WebRTC stream in OBS and broadcast it to Twitch</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BvlX7zDbf03VDKsP" /><figcaption>Photo by <a href="https://unsplash.com/@libby_penner?utm_source=medium&amp;utm_medium=referral">Libby Penner</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Recently, while exploring <a href="https://docs.dolby.io/streaming-apis/docs/webrtc-whip">syndicating Dolby.io WebRTC</a> streams, I learned that <a href="https://www.linkedin.com/posts/sean-dubois_twitch-activity-7053056800861933568-TTPW/">Twitch has added support for WebRTC Ingest</a>, or <a href="https://datatracker.ietf.org/doc/draft-ietf-wish-whip/">WHIP</a> as it is known in the industry.</p><p>WebRTC is an exciting choice for streaming because it can decrease stream latency compared to traditional protocols such as RTMP and HLS. Keep in mind that once ingested, Twitch transmuxes the WebRTC stream into a format the platform supports (HLS), which adds some of that latency back to the feed.</p><p>With that said, WHIP support is a great step for the community, and with OBS now adding support for WebRTC, I had to try it out.</p><p>In this guide, we’ll showcase how to stream WebRTC from OBS into Twitch.</p><h3>Setting up OBS for WebRTC</h3><p>The core OBS project <a href="https://dolby.io/blog/obs-studio-adds-native-webrtc-streaming-with-whip/">has recently added WebRTC support</a> (with <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-01.html">WHIP</a>)! 
You can download the OBS project <a href="https://obsproject.com/">here</a>.</p><p>Once downloaded, extract the project and install it.</p><h3>Streaming WebRTC from OBS to Twitch</h3><p>With the project installed and launched, navigate to:<br>Settings -&gt; Stream</p><p>Inside Stream, select WHIP as your service:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*iMIAxTHVdtSmKoNP.png" /><figcaption>Image by author.</figcaption></figure><p>To start a WebRTC stream to Twitch, you need the Server path and your Stream key.</p><h3>The Twitch WHIP server</h3><p>The server is (<em>currently</em>) the same for everyone:<br>https://g.webrtc.live-video.net:4443/v2/offer</p><p><strong><em>Note:</em></strong><em> This server currently only supports H264 and Opus-encoded streams.</em></p><h3>Getting Your Twitch Stream Key</h3><p>Your Twitch Stream Key can be found on your <a href="https://dashboard.twitch.tv/">dashboard</a> once you’ve logged in, under:<br>Settings -&gt; Stream</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*i64T9JpjIn0BsCsU.png" /><figcaption>Image by author.</figcaption></figure><p>Copy both the Server URL and the Stream Key into the Server and Bearer Token inputs within OBS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*JnXishf3ZjUDqRMI.png" /><figcaption>Image by author.</figcaption></figure><p>Click Apply, set up OBS as usual, and click Start Stream to begin your WebRTC broadcast to Twitch.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*AltiKNZk4cS8Qr8q.png" /><figcaption>Image by author.</figcaption></figure><h3>Learn More</h3><p>Broadcasting a WebRTC stream to Twitch is a great feature for the site as it allows people to easily <a href="https://docs.dolby.io/streaming-apis/docs/syndication">syndicate their WebRTC streams</a> to a popular platform. 
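If you’re curious what OBS is doing under the hood, WHIP boils down to a single HTTPS exchange: the client POSTs an SDP offer to the endpoint with the stream key as a bearer token, and a successful response carries back the SDP answer. A rough Python sketch of the request OBS sends — the stream key and SDP below are placeholders, and OBS handles all of this for you:

```python
import urllib.request

# Placeholder values: use your real stream key, and an SDP offer
# generated by an actual WebRTC stack.
WHIP_ENDPOINT = "https://g.webrtc.live-video.net:4443/v2/offer"
STREAM_KEY = "live_123_abc"                           # hypothetical key
sdp_offer = "v=0\r\no=- 0 0 IN IP4 127.0.0.1\r\n..."  # truncated SDP

req = urllib.request.Request(
    WHIP_ENDPOINT,
    data=sdp_offer.encode(),
    method="POST",
    headers={
        "Content-Type": "application/sdp",
        "Authorization": f"Bearer {STREAM_KEY}",
    },
)

print(req.get_method(), req.full_url)
```

The sketch only builds the request; actually sending it with urllib.request.urlopen(req) requires a valid key and a real SDP offer, at which point the server’s answer completes the WebRTC handshake.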
Because Twitch transmuxes the WebRTC stream, some delay is added, so if you’re looking for an end-to-end white-label real-time streaming solution, check out <a href="https://dolby.io/products/real-time-streaming/">Dolby.io Real-time Streaming</a>.</p><p><em>A special shout out to </em><a href="https://www.linkedin.com/in/sean-dubois/"><em>Sean DuBois</em></a><em> for his work on both the OBS project and Twitch’s WHIP support.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=238c96b1556c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[No Headset, No Problem: Building Social Gaming Experiences for Every Device]]></title>
            <link>https://medium.com/hacking-media/no-headset-no-problem-building-social-gaming-experiences-for-every-device-f008f3080686?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/f008f3080686</guid>
            <category><![CDATA[unity]]></category>
            <category><![CDATA[unreal-engine]]></category>
            <category><![CDATA[game-design]]></category>
            <category><![CDATA[game-development]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Thu, 18 May 2023 18:07:05 GMT</pubDate>
            <atom:updated>2023-05-18T18:07:05.683Z</atom:updated>
            <content:encoded><![CDATA[<h4>Learn how Dolby.io and PubNub are powering in-game communication with immersive plugins for Unity and Unreal.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*eINKle6Z1C7ddOKf" /><figcaption>Photo by <a href="https://unsplash.com/@screenpost?utm_source=medium&amp;utm_medium=referral">SCREEN POST</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Social gaming has exploded in popularity thanks to leaps in technology and innovation in the communication and gaming space. For <a href="https://gdconf.com/">GDC 2023</a>, <a href="https://dolby.io/">Dolby.io</a> and <a href="https://www.pubnub.com/">PubNub</a> discussed the social gaming landscape and how both companies are powering innovations in immersive communication.</p><p>To learn more about Dolby.io and PubNub, join Developer Advocates <a href="https://dolby.io/blog/author/brigg/">Braden Riggs</a> (Dolby.io) and <a href="https://www.linkedin.com/in/oliverfcarson/">Oliver Carson</a> (PubNub) as they dive into the world of social gaming in their talk: <br><strong><em>No Headset, No Problem: Building Social Gaming Experiences for Every Device</em></strong></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fw5t8nvJX45Q%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dw5t8nvJX45Q&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fw5t8nvJX45Q%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/657e83c9c547144a6221b2e140733c46/href">https://medium.com/media/657e83c9c547144a6221b2e140733c46/href</a></iframe><p>If you are interested in trying out the demo app from the talk <a href="https://github.com/PubNubDevelopers/dolbyio-pubnub-gdc2023-unity">head over to the project 
on GitHub</a> or learn more about the <a href="https://docs.dolby.io/communications-apis/docs/unity-overview">Dolby.io Spatial Audio</a> and <a href="https://docs.dolby.io/streaming-apis/docs/unreal-player-plugin">Streaming Plugins</a> for Unity and Unreal. For building social experiences on the Web check out this guide showing how to build a <a href="https://dolby.io/blog/adding-pubnub-in-app-chat-to-your-webrtc-live-stream-app/">WebRTC streaming app with PubNub and Dolby.io</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f008f3080686" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hacking-media/no-headset-no-problem-building-social-gaming-experiences-for-every-device-f008f3080686">No Headset, No Problem: Building Social Gaming Experiences for Every Device</a> was originally published in <a href="https://medium.com/hacking-media">Hacking Media</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Real-Time Streaming App with WebRTC and Flutter 3]]></title>
            <link>https://bdriggs.medium.com/building-a-real-time-streaming-app-with-webrtc-and-flutter-3-72a0c5ffeff0?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/72a0c5ffeff0</guid>
            <category><![CDATA[android]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[webrtc]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Fri, 30 Sep 2022 01:01:15 GMT</pubDate>
            <atom:updated>2022-10-25T19:02:18.283Z</atom:updated>
            <content:encoded><![CDATA[<h4>With the Flutter 3 WebRTC streaming SDK you can build real-time streaming solutions for Android, iOS, Web, and Desktop from a single code base.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*th7ks1MB1wyegPfn" /><figcaption>Photo by <a href="https://unsplash.com/@zmachacek?utm_source=medium&amp;utm_medium=referral">Zdeněk Macháček</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Streaming, especially the low latency kind, has become a popular medium to engage with an audience, host live events, and connect people virtually. For developers building streaming apps, however, there is just one issue. If we are interested in connecting to a wide audience, we need to develop for a wide range of platforms such as Android, iOS, Web, and even desktop native apps, which can quickly become a heavy lift for any team. This is where <a href="https://flutter.dev/?gclid=CjwKCAjw4c-ZBhAEEiwAZ105RYihY2PWVmum6IojgwCKgGWKZg9IOYmyhWlapji_zIYo_FpW-vW8tRoCoKcQAvD_BwE&amp;gclsrc=aw.ds">Flutter 3</a> comes in. Released in May of 2022, Flutter 3 takes cross-platform development to the next level, allowing users to “<em>build for any screen</em>” from a single code base. Rather than building three separate apps for iOS, Android, and Web, you can build just one. 
To further sweeten the deal,<a href="http://dolby.io/"> Dolby.io</a> has recently released its <a href="https://docs.dolby.io/streaming-apis/docs/flutter">WebRTC real-time streaming SDK for Flutter</a>, allowing users to build cross-platform streaming apps that combine scalability and ultra-low delay.</p><p>In this guide, we’ll be exploring how to build a cross-platform real-time streaming app that works on Android, iOS, Desktop Native, and Web using the Dolby.io Streaming SDK for Flutter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EC5Pm1301fPACnAf.jpg" /></figure><h3>Getting Started with the Real-Time Streaming SDK</h3><p>Before we begin, make sure you have the <a href="https://docs.flutter.dev/get-started/install">latest version of Flutter installed</a> and set up on your machine. To get started with building a streaming app, we need to install the <a href="http://dolby.io/">Dolby.io</a> Streaming SDK for Flutter 3 via the terminal.</p><pre>flutter pub add millicast_flutter_sdk</pre><p>Then run the following command in the terminal to download the dependencies:</p><pre>flutter pub get</pre><p>With the Flutter Streaming SDK installed, you can start by creating a <a href="https://docs.flutter.dev/get-started/test-drive?tab=vscode">vanilla Flutter app</a> and adding the most recent version of flutter_webrtc to your project&#39;s pubspec.yaml. You should also see that the Dolby.io Millicast Flutter SDK has been automatically added.</p><pre>flutter_webrtc: ^x.x.x<br>millicast_flutter_sdk: ^x.x.x</pre><p>Then inside main.dart, import flutter_webrtc alongside any other dependencies your project may have.</p><pre>import &#39;package:flutter_webrtc/flutter_webrtc.dart&#39;;</pre><p>In addition to installing the SDK, you’ll also need to <a href="https://dashboard.dolby.io/signup/">create a free </a><a href="http://dolby.io/">Dolby.io</a><a href="https://dashboard.dolby.io/signup/"> Account</a>. 
The free account offers 50 Gigabytes of data transfer a month, which will be plenty for building and testing out the real-time streaming app.</p><blockquote>Interested in following along with a project that already has the SDK installed and set up? <a href="https://github.com/dolbyio-samples/blog-streaming-flutter-app/tree/main/streaming_app">Check out this GitHub repository</a>, which contains a completed version of this app.</blockquote><h3>Building the Real-Time Streaming App with Flutter</h3><p>Building a WebRTC Flutter streaming app can be complicated, so to get started we first need to divide the app into a series of features that come together to support a real-time streaming experience. For the app to connect to the<a href="http://dolby.io/"> Dolby.io</a> servers, we must include a way for the user to input the streaming credentials and tokens used for authentication.</p><h4>Taking in the WebRTC Stream Credentials</h4><p>To publish and view a WebRTC stream with the<a href="http://dolby.io/"> Dolby.io</a> Flutter SDK we need three things: an account ID, a stream name, and a publishing token. 
<a href="https://docs.dolby.io/streaming-apis/docs/about-dash">These credentials can be found on your Dolby.io dashboard</a> and need to be input by the user, which we can capture with a TextFormField widget that updates a TextEditingController variable on change.</p><pre>Container(<br>                            width:MediaQuery.of(context).size.width,<br>                              constraints: const BoxConstraints(<br>                                  minWidth: 100, maxWidth: 400),<br>                              child: TextFormField(<br>                                maxLength: 20,<br>                                controller: accID,<br>                                decoration: const InputDecoration(<br>                                  labelText: &#39;Enter Account ID&#39;,<br>                                ),<br>                                onChanged: (v) =&gt; accID.text = v,<br>                              )),</pre><p><em>Note: In production, you don’t need to have users input these credentials; instead, you could use a custom login and serve the users a temporary login token. 
For learning more about Dolby.io tokens </em><a href="https://dolby.io/blog/secure-token-authentication-with-dolby-io-millicast-streaming-webrtc/"><em>check out this blog on creating and securing tokens</em></a><em>.</em></p><p>Because we need three inputs to publish a WebRTC stream to the Dolby.io server, we can repeat this code for each input.</p><pre>Container(<br>                              width:MediaQuery.of(context).size.width,<br>                              constraints: const BoxConstraints(<br>                                  minWidth: 100, maxWidth: 400),<br>                              child: TextFormField(<br>                                maxLength: 20,<br>                                controller: streamName,<br>                              onChanged: (v) =&gt; streamName.text = v,<br>                                decoration: const InputDecoration(<br>                                  labelText: &#39;Enter Stream Name&#39;,<br>                                ),<br>                              )),<br>                          Container(<br>                              width:MediaQuery.of(context).size.width,<br>                              constraints: const BoxConstraints(<br>                                  minWidth: 100, maxWidth: 400),<br>                              child: TextFormField(<br>                                controller: pubTok,<br>                                maxLength: 100,<br>                                onChanged: (v) =&gt; pubTok.text = v,<br>                                decoration: const InputDecoration(<br>                                labelText: &#39;Enter Publishing Token&#39;,<br>                                ),<br>                              )),</pre><p>Additionally, we can add an ElevatedButton for the user to press once they have added their credentials.</p><pre>ElevatedButton(<br>               style: ElevatedButton.styleFrom(<br>                              primary: Colors.deepPurple,<br> 
                           ),<br>                            onPressed: publishExample,<br>                            child: const Text(&#39;Start Stream&#39;),<br>                          ),</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/0*cDKkxnvkZPKFRDBD.jpg" /></figure><h4>Authentication and Publishing Streams from Flutter</h4><p>You’ll notice that the ElevatedButton triggers a function via its onPressed parameter. This function, called publishExample, checks if the credentials are valid and authenticates the stream. First, the function checks that the user has input a value for each field.</p><pre>void publishExample() async {<br>    if (pubTok.text.isEmpty || streamName.text.isEmpty || accID.text.isEmpty) {<br>      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(<br>        backgroundColor: Colors.grey,<br>        content: Text(<br>            &#39;Make sure Account ID, Stream Name, and Publishing Token all include values.&#39;),<br>      ));<br>    }</pre><p>Then the function calls publishConnect, an asynchronous function that takes in streamName, pubTok, and a third object called localRenderer. localRenderer is an RTCVideoRenderer object included with the flutter_webrtc package.</p><pre>final RTCVideoRenderer localRenderer = RTCVideoRenderer(); <br>publish = await publishConnect(localRenderer, streamName.text, pubTok.text);</pre><p>Using these three parameters, we have everything we need to authenticate and begin publishing a stream. 
Inside the publishConnect function, we need to generate a temporary publishing token using the streamName and pubTok:</p><pre>Future publishConnect(RTCVideoRenderer localRenderer, String streamName, String pubTok) async { </pre><pre>// Setting publisher options <br>DirectorPublisherOptions directorPublisherOptions = DirectorPublisherOptions(token: pubTok, streamName: streamName); </pre><pre>/// Define callback to generate a new token <br>tokenGenerator() =&gt; Director.getPublisher(directorPublisherOptions); <br>... }</pre><p>With the temporary publishing token created, we can then use it to create a publish object. Using this publish object we could start the stream; however, we wouldn&#39;t be able to see or hear anything, because we haven&#39;t specified what kind of stream we are creating or which devices we will connect to. To do this, we need to specify whether the stream will include audio, video, or audio <em>and</em> video, and then pass these constraints into the getUserMedia function, which will map the constraints to the default audio capture device and the default video capture device.</p><pre>{ ... <br>Publish publish = Publish(streamName: &#39;your-streamname&#39;, tokenGenerator: tokenGenerator); <br>final Map&lt;String, dynamic&gt; constraints = &lt;String, bool&gt;{ <br>&#39;audio&#39;: true, <br>&#39;video&#39;: true <br>}; <br>MediaStream stream = await navigator.mediaDevices.getUserMedia(constraints); <br>... }</pre><p>Using this stream object, we can also provide a feed to the user in the form of a viewer. To do this, we need to assign our input devices to localRenderer as sources.</p><pre>{ ... <br>localRenderer.srcObject = stream; <br>... }</pre><p>Finally, we can map the stream object and pass it as an option to the connect function, which is inherited from publish.</p><pre>{ ... 
<br>//Publishing Options <br>Map&lt;String, dynamic&gt; broadcastOptions = {&#39;mediaStream&#39;: stream}; </pre><pre>// Start connection to publisher <br>await publish.connect(options: broadcastOptions); <br>return publish; }</pre><p>With our stream connected, we can now look at setting up the viewer using localRenderer.</p><h4>In-App WebRTC Stream Viewer</h4><p>Now that our stream is authenticated and publishing, we need to add a viewer object so the streamer can see themselves streaming. This can be done with <a href="https://pub.dev/documentation/flutter_webrtc/latest/flutter_webrtc/RTCVideoView-class.html">an RTCVideoView object</a>, which takes in our localRenderer object and is constrained by a container.</p><pre>Container(<br>                margin: const EdgeInsets.all(30),<br>                constraints: const BoxConstraints(<br>                    minWidth: 100, maxWidth: 1000, maxHeight: 500),<br>                width: MediaQuery.of(context).size.width,<br>                height: MediaQuery.of(context).size.height / 1.7,<br>                decoration:<br>                    const BoxDecoration(color: Colors.black54),<br>                child: RTCVideoView(_localRenderer, mirror: true),<br>)</pre><h4>Sharing the Real-time Stream</h4><p>With the stream authenticated and live, we want to share our content with the world. We can do this via a URL formatted with our streamName and our accountID, which we collected as inputs. 
Using the example app as a template, we can create a function called shareStream which formats the URL to share and copies it to the clipboard.</p><pre>void shareStream() {<br>    Clipboard.setData(ClipboardData(<br>        text:<br>            &quot;https://viewer.millicast.com/?streamId=${accID.text}/${streamName.text}&quot;));<br>    ScaffoldMessenger.of(context).showSnackBar(const SnackBar(<br>      backgroundColor: Colors.grey,<br>      content: Text(&#39;Stream link copied to clipboard.&#39;),<br>    ));<br>  }</pre><h4>Unpublishing a WebRTC Stream</h4><p>To unpublish the stream, we call stop on the publish object returned from our asynchronous publishConnect function, killing the connection with the <a href="http://dolby.io/">Dolby.io</a> server.</p><pre>publish.stop();</pre><h4>Flutter 3 is Truly Cross Platform</h4><p>The power of Flutter is taking one code base and having it work across multiple platforms. Here we can see examples of the app working on Android, Windows, and Web:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/463/0*JZld5ybKqZeHdx6E.jpg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PogAD2txtF0eNdvu.jpg" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/0*jyGVzB0QFQn9m0FW.jpg" /></figure><p>Building in this cross-platform framework saves both time and resources, allowing you to get started building real-time streaming apps without having to worry about which platform works for your users. These apps are perfect for streaming live and virtual events to the widest range of audiences, allowing for high-quality interactive experiences. 
If you are interested in learning more about our Flutter streaming SDK, <a href="https://docs.dolby.io/streaming-apis/docs/flutter">check out our documentation</a> and play around with the full project on<a href="https://github.com/dolbyio-samples/blog-streaming-flutter-app/tree/main/streaming_app"> this GitHub repository</a>.</p><p>Feedback or questions? Leave a <em>Medium comment</em> or <a href="https://twitter.com/BradenRiggs1">reach out to me on Twitter</a>.</p><p><em>Originally published at </em><a href="https://dolby.io/blog/building-a-real-time-streaming-app-with-webrtc-and-flutter-3/"><em>https://dolby.io</em></a><em> on September 30, 2022.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=72a0c5ffeff0" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[3 Things to Know Before Building with PyScript]]></title>
            <link>https://medium.com/data-science/3-things-you-must-know-before-building-with-pyscript-245a0a82f2c3?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/245a0a82f2c3</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[pyscript]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[data-visualization]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Wed, 25 May 2022 15:21:12 GMT</pubDate>
            <atom:updated>2022-05-26T16:58:45.958Z</atom:updated>
<content:encoded><![CDATA[<h4>After recently running into a few blockers, bugs, and quirks, I wanted to make a guide for building with PyScript, Python, and HTML</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8rqqNslFC0cdboyv" /><figcaption>Photo by <a href="https://unsplash.com/@davidclode?utm_source=medium&amp;utm_medium=referral">David Clode</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>For anyone who hasn’t already heard of it, <a href="https://pyscript.net/">PyScript</a>, which debuted at <a href="https://us.pycon.org/2022/">PyCon 2022</a>, is a browser-embedded Python environment built on top of an existing project called<a href="https://pyodide.org/en/stable/"> Pyodide</a>. This project, to the shock of long-term Pythonistas and web developers, seamlessly blends (<em>well, almost</em>) JavaScript and Python in a bi-directional environment, allowing developers to utilize Python staples such as<strong> </strong><a href="https://numpy.org/"><strong>NumPy</strong></a><strong> or </strong><a href="https://pandas.pydata.org/"><strong>Pandas</strong></a><strong> in the browser</strong>.</p><p>After playing with the project for a few days, I wanted to share a few learnings and gotchas that tripped me up on my journey to master PyScript.</p><p><strong>Prelude</strong>: <a href="#a820">A Crash Course in PyScript</a><br><strong>1</strong>. <a href="#e1f7">Package Indentation Matters!</a><br><strong>2</strong>. <a href="#4958">Local File Access</a><br><strong>3</strong>. <a href="#2e74">DOM Manipulation</a></p><h4>A Crash Course in PyScript</h4><p>To get started using PyScript, we first have to link our HTML file with the PyScript script as we would for any ordinary JavaScript file. 
Additionally, we can link the PyScript style sheet to improve usability.</p><pre><strong>&lt;head&gt;</strong><br>    &lt;link rel=&quot;stylesheet&quot; href=&quot;https://pyscript.net/alpha/pyscript.css&quot; /&gt;<br>    &lt;script defer src=&quot;https://pyscript.net/alpha/pyscript.js&quot;&gt;&lt;/script&gt;<br><strong>&lt;/head&gt;</strong></pre><p>With PyScript imported in the head of our HTML file, we can now utilize the <strong>&lt;py-script&gt;</strong> tag in the body of our HTML to write Python code.</p><pre><strong>&lt;body&gt;</strong><br>    &lt;py-script&gt;<br>        for i in [&quot;Python&quot;, &quot;in&quot;, &quot;html?&quot;]:<br>            print(i)<br>    &lt;/py-script&gt;<br><strong>&lt;/body&gt;</strong></pre><p>Yep! It really is just that simple to get started. Now, where do things get tricky?</p><h4>Package Indentation Matters</h4><p>One of the big advantages of using PyScript is the ability to import Python libraries such as NumPy or Pandas. Packages are first declared in the <em>head</em> using the <em>&lt;py-env&gt;</em> tag and then imported inside the <em>&lt;py-script&gt;</em> tag just like in regular Python.</p><pre><strong>&lt;head&gt;</strong><br>    &lt;link rel=&quot;stylesheet&quot; href=&quot;https://pyscript.net/alpha/pyscript.css&quot; /&gt;<br>    &lt;script defer src=&quot;https://pyscript.net/alpha/pyscript.js&quot;&gt;&lt;/script&gt;</pre><pre>    &lt;py-env&gt;<br>- numpy<br>- pandas<br>    &lt;/py-env&gt;<br><strong>&lt;/head&gt;</strong></pre><pre><strong>&lt;body&gt;</strong><br>    &lt;py-script&gt;<br>        <strong>import pandas as pd</strong><br>    &lt;/py-script&gt;<br><strong>&lt;/body&gt;</strong></pre><p>On the surface, this may seem straightforward, <strong>but note the indentation of the packages</strong> within <em>&lt;py-env&gt;</em>.</p><pre>    &lt;py-env&gt;<br><strong>- numpy<br>- pandas</strong><br>    &lt;/py-env&gt;</pre><p>It turns out that if there is <a 
href="https://github.com/pyscript/pyscript/issues/136">any indentation</a> you’ll receive a <strong><em>ModuleNotFoundError</em></strong><em>: No module named ‘pandas’ </em>or<em> </em><strong><em>ModuleNotFoundError</em></strong><em>: No module named ‘numpy’</em> from PyScript. This error caught me off guard initially, since indentation in Python is usually so important.</p><h4>Local File Access</h4><p>JavaScript handles file access very differently compared to Python, as it should, given web development’s relationship with privacy and security. Hence, vanilla JavaScript does not have direct access to local files. Since the PyScript project is built on top of JavaScript, <strong>your Python code won&#39;t be able to access local files</strong> like you might be used to.</p><p>PyScript does offer a solution to file access in the &lt;py-env&gt; tag. In addition to importing packages, you can also import files such as CSVs or XLSXs.</p><pre>    &lt;py-env&gt;<br>- numpy<br>- pandas<br><strong>- paths:<br>    - /views.csv</strong><br>    &lt;/py-env&gt;</pre><p>Again, <strong>note the indentation</strong>: in this case the CSV must be indented under the paths key.</p><p>With the file included in the path, you can read it within your &lt;py-script&gt; code.</p><pre>&lt;py-script&gt;<br>    import pandas as pd<br>    df = pd.read_csv(&quot;<strong>views.csv</strong>&quot;)<br>&lt;/py-script&gt;</pre><h4>DOM Manipulation</h4><p>Anyone who has worked in web development should be familiar with the DOM, or Document Object Model. DOM manipulation is common in most web applications, as developers typically want their websites to interact with users, reading inputs and responding to button clicks. In the case of PyScript, this raises an interesting question: how do buttons and input fields interact with the Python code?</p><p>Again, PyScript has a solution to this; however, it mightn’t be what you expect. 
Here are a few (of many) examples of the functionality PyScript provides:</p><ol><li>For buttons, you can include the <em>pys-onClick=”your_function”</em> parameter to trigger Python functions when clicked.</li><li>From within the <em>&lt;py-script&gt;</em> tag, <em>document.getElementById(‘input_obj_id’).value </em>can retrieve a user’s input value.</li><li>And finally, <em>pyscript.write(“output_obj_id”, data) </em>can write output to a tag from within the <em>&lt;py-script&gt;</em> tag.</li></ol><p>We can see these three DOM manipulation techniques put together into one web application that lets users check if a CSV has been added to the PyScript path:</p><pre>&lt;body&gt;<br>   &lt;form onsubmit = &#39;return false&#39;&gt;<br>   &lt;label for=&quot;fpath&quot;&gt;filepath&lt;/label&gt;<br>   &lt;input type=&quot;text&quot; id=&quot;fpath&quot; name=&quot;filepath&quot; placeholder=&quot;Your name..&quot;&gt;<br>   &lt;input <strong>pys-onClick=&quot;onSub&quot;</strong> type=&quot;submit&quot; id=&quot;btn-form&quot; value=&quot;submit&quot;&gt;<br>    &lt;/form&gt;</pre><pre>&lt;div <strong>id=&quot;outp&quot;</strong>&gt;&lt;/div&gt;</pre><pre>    &lt;py-script&gt;<br>        import pandas as pd</pre><pre>        def onSub(*args, **kwargs):<br>            file_path = <strong>document.getElementById(&#39;fpath&#39;).value</strong><br>            df = pd.read_csv(file_path)<br>            <strong>pyscript.write(&quot;outp&quot;,df.head())</strong><br>    &lt;/py-script&gt;<br>&lt;/body&gt;</pre><p>These examples aren’t comprehensive, as the project also supports <a href="https://github.com/pyscript/pyscript/blob/main/docs/tutorials/getting-started.md">visual component tags</a>.</p><h4><strong>Conclusion</strong></h4><p>PyScript is a wonderful step in the right direction for bringing some excellent Python packages into the web development space. 
With that said, it still has a bit of growing to do, and many improvements need to be made before the project sees widespread adoption.</p><blockquote>Show some support to the team working on this <strong>awesome</strong> project: <a href="https://github.com/pyscript">https://github.com/pyscript</a></blockquote><p><strong>Leave a comment with any other insights or gotchas that you might have experienced working with PyScript and I’ll make a part 2.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*oD1E1b6pcLx-tgtW" /><figcaption>Photo by <a href="https://unsplash.com/@honza_kahanek?utm_source=medium&amp;utm_medium=referral">Jan Kahánek</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=245a0a82f2c3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/data-science/3-things-you-must-know-before-building-with-pyscript-245a0a82f2c3">3 Things to Know Before Building with PyScript</a> was originally published in <a href="https://medium.com/data-science">TDS Archive</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing Hacking Media]]></title>
            <link>https://medium.com/hacking-media/introducing-hacking-media-324cdfdaf70e?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/324cdfdaf70e</guid>
            <category><![CDATA[audio]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[medium]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Sun, 22 May 2022 18:25:26 GMT</pubDate>
            <atom:updated>2022-05-23T17:59:31.754Z</atom:updated>
<content:encoded><![CDATA[<h4>Interested in learning about software development in the audio, video, and media space? Come join Hacking Media and learn together.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bfqUA4rsmfUYgxme" /><figcaption>Photo by <a href="https://unsplash.com/@ngeshlew?utm_source=medium&amp;utm_medium=referral">Lewis Kang&#39;ethe Ngugi</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>The Hacking Media publication aims to bring high-quality content relating to software development and research in the media space, be that video or audio correction software, data generation tools, visualization tools, and so forth.</p><h4>Publishing with Hacking Media</h4><p>Currently, we are open to article submissions. If your article is in line with Medium’s terms of service and rules, <a href="https://tpij02cs5mk.typeform.com/to/XCjisMD4"><strong>you can make a submission here</strong></a><strong>.</strong></p><blockquote>You always own and control the rights to your content, so feel free to remove or change publications as you please.</blockquote><h4>Editors and Publishers</h4><p><a href="https://twitter.com/bradenriggs1"><strong>Braden Riggs</strong></a>: Braden is a developer advocate with Dolby specializing in data generation and media post-production tools.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=324cdfdaf70e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hacking-media/introducing-hacking-media-324cdfdaf70e">Introducing Hacking Media</a> was originally published in <a href="https://medium.com/hacking-media">Hacking Media</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Searching Audio to Find Loudness and Music Sections]]></title>
            <link>https://medium.com/hacking-media/searching-audio-to-find-loudness-and-music-sections-66fe7efc9ed4?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/66fe7efc9ed4</guid>
            <category><![CDATA[api]]></category>
            <category><![CDATA[medium]]></category>
            <category><![CDATA[audio]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[music]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Thu, 19 May 2022 17:10:31 GMT</pubDate>
            <atom:updated>2022-05-19T17:10:31.509Z</atom:updated>
<content:encoded><![CDATA[<h4>A JavaScript guide to extracting data, loudness, content, and music sections from audio with the Analyze Media API for the Cue Sheet SOCAN hackathon.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YbXLM3rTOb4JsvBaaCQoWw.jpeg" /><figcaption>Analyze Audio. Image by Thomas Christiansen. Used with author permission.</figcaption></figure><p>Ever since the inception of cinema, <em>scores</em>, the musical compositions within a film, have become synonymous with the medium and a crucial staple in the experience of enjoying film or TV. As the industry has grown and matured, so too has the score, with many productions having hundreds of tracks spanning many genres and artists. These artists can be anyone from an orchestra drummer all the way up to a sellout pop star sensation, each composing, producing, or performing a variety of tracks. The challenge with this growing score complexity is ensuring that every artist is paid for their fair share and contribution to the overall film.</p><p>The industry presently tackles this challenge with a tool known as a “<a href="https://www.bmi.com/creators/detail/what_is_a_cue_sheet">Cue Sheet</a>“, a spreadsheet that identifies exactly where a track is played and for how long. The issue with Cue Sheets is that their creation and validation is an immensely manual process, consuming hundreds of hours spent confirming that every artist is accounted for and compensation is awarded accordingly. It was this inefficiency that attracted<a href="http://dolby.io/"> Dolby.io</a> to help support the<a href="https://cue-sheet-palooza.devpost.com/"> Cue Sheet Palooza Hackathon</a>, a Toronto-based event that challenged musicians and software engineers to work and innovate together to reduce the time spent creating Cue Sheets. 
The event was sponsored by the<a href="https://www.socan.com/"> Society of Composers, Authors and Music Publishers of Canada, or SOCAN</a> for short, an organization that helps ensure composers, authors, and music publishers are correctly compensated for their work.</p><p>Many of the hackers utilized the<a href="http://dolby.io/"> Dolby.io</a><a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide"> Analyze Media API</a> to help detect loudness and music within an audio file and timestamp exactly where music is included. In this guide, we will highlight how you can build your own tool for analyzing music content in media, just like the SOCAN hackathon participants.</p><h3>So what is the Analyze Media API?</h3><p>Before we explain how the hackers used the API, we need to explain what Analyze Media is and what it does. The <a href="http://dolby.io/">Dolby.io</a><a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide"> Analyze Media API</a> generates insights from the underlying audio signal, producing data such as loudness, content classification, noise, and musical instrument or genre classification. This makes the API useful for detecting which sections of an audio file contain music, along with some qualities of the music in each section.</p><p>The Analyze Media API adheres to the Representational State Transfer (REST) style, meaning that it is language-agnostic and can be built into any existing framework that includes tools to interact with a server. This is useful, as it means the API can adapt to the use case. In the Cue Sheet example, many teams wanted to build a web application, as that was what was most accessible to the SOCAN community, and hence relied heavily on HTML, CSS, and JavaScript to build out the tool.</p><p>In this guide, we will be highlighting how the participants implemented the API and why it proved useful for audio media. 
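Because the API is plain REST, the same job-start call can be written in any language with an HTTP client. Here is a minimal Python sketch of starting an analyze job (the endpoint, header names, and body fields mirror the JavaScript examples in this guide; the helper name build_analyze_request and the placeholder values are my own):

```python
import json
from urllib import request

ANALYZE_URL = "https://api.dolby.com/media/analyze"

def build_analyze_request(api_key, input_url, output_url):
    """Build the pieces of the job-start request.

    Kept separate from the network call so the request construction
    is easy to inspect and test on its own.
    """
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "x-api-key": api_key,
    }
    body = {"input": input_url, "output": output_url}
    return ANALYZE_URL, headers, json.dumps(body).encode()

def start_job(api_key, input_url, output_url):
    # POST the job and return the job_id from the response.
    url, headers, body = build_analyze_request(api_key, input_url, output_url)
    req = request.Request(url, data=body, headers=headers, method="POST")
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["job_id"]
```

The returned job_id is then used to poll for completion, exactly as in the JavaScript walkthrough.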
If you want to follow along you can sign up for a free<a href="http://dolby.io/"> Dolby.io</a> account which includes plenty of trial credits for experimenting with the Analyze Media API.</p><h3>A QuickStart with the Analyze Media API in JavaScript:</h3><p>There are <strong>four</strong> steps to using the Analyze Media API on audio:</p><ol><li>Store the media on the cloud.</li><li>Start an Analyze Media job.</li><li>Monitor the status of that job.</li><li>Retrieve the result of a completed job.</li></ol><p>The first step, storing media on the cloud, depends on your use case for the Analyze Media API. If your media/audio is already stored on the cloud (Azure, AWS, GCP) you can instead move on to <em>step 2</em>. However, if your audio file is stored locally you will first have to upload it to a cloud environment. For this step, we upload the file to the<a href="http://dolby.io/"> Dolby.io</a> Media Cloud Storage using the local file and our<a href="http://dolby.io/"> Dolby.io</a> Media API key.</p><pre>async function uploadFile() {<br>    //Uploads the file to the Dolby.io server<br>    let fileType = YOUR_FILE_TYPE;<br>    let audioFile = YOUR_LOCAL_MEDIA_FILE;<br>    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;<br> <br>    let formData = new FormData();<br>    var xhr = new XMLHttpRequest();<br>    formData.append(fileType, audioFile);<br> <br>    const options = {<br>        method: &quot;POST&quot;,<br>        headers: {<br>            Accept: &quot;application/json&quot;,<br>            &quot;Content-Type&quot;: &quot;application/json&quot;,<br>            &quot;x-api-key&quot;: mAPIKey,<br>        },<br>        // url is where the file will be stored on the Dolby.io servers.<br>        body: JSON.stringify({ url: &quot;dlb://file_input.&quot;.concat(fileType) }),<br>    };<br> <br>    let resp = await fetch(&quot;https://api.dolby.com/media/input&quot;, options)<br>        .then((response) =&gt; response.json())<br>        .catch((err) =&gt; 
console.error(err));<br> <br>    xhr.open(&quot;PUT&quot;, resp.url, true);<br>    xhr.setRequestHeader(&quot;Content-Type&quot;, fileType);<br>    xhr.onload = () =&gt; {<br>        if (xhr.status === 200) {<br>            console.log(&quot;File Upload Success&quot;);<br>        }<br>    };<br>    xhr.onerror = () =&gt; {<br>        console.log(&quot;error&quot;);<br>    };<br>    xhr.send(formData);<br>    // Completion is signaled asynchronously by the onload/onerror handlers above;<br>    // a busy-wait on xhr.readyState would block the event loop and never finish.<br>}</pre><p>For this file upload, we have chosen to use <a href="https://www.sitepoint.com/xmlhttprequest-vs-the-fetch-api-whats-best-for-ajax-in-2019/">XMLHttpRequest for handling our client-side file upload</a>, although packages like Axios are available. This was a deliberate choice, as our web app adds functionality for progress tracking and timeouts during the audio upload.</p><p>With our audio file uploaded and stored on the cloud, we can start an Analyze Media API job using the location of our cloud-stored media file. If your file is stored on a cloud storage provider such as AWS, you can use the pre-signed URL for the file as the input. 
In this example, we are using the file stored on<a href="http://dolby.io/"> Dolby.io</a> Media Cloud Storage from <em>step 1</em>.</p><pre>async function startJob() {<br>    //Starts an Analyze Media Job on the Dolby.io servers<br>    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;<br>    //fileLocation can either be a pre-signed URL to a cloud storage provider or the URL created in step 1.<br>    let fileLocation = YOUR_CLOUD_STORED_MEDIA_FILE;<br> <br>    const options = {<br>        method: &quot;POST&quot;,<br>        headers: {<br>            Accept: &quot;application/json&quot;,<br>            &quot;Content-Type&quot;: &quot;application/json&quot;,<br>            &quot;x-api-key&quot;: mAPIKey,<br>        },<br>        body: JSON.stringify({<br>            content: { silence: { threshold: -60, duration: 2 } },<br>            input: fileLocation,<br>            output: &quot;dlb://file_output.json&quot;, //This is the location we&#39;ll grab the result from.<br>        }),<br>    };<br> <br>    let resp = await fetch(&quot;https://api.dolby.com/media/analyze&quot;, options)<br>        .then((response) =&gt; response.json())<br>        .catch((err) =&gt; console.error(err));<br>    console.log(resp.job_id); //We can use this jobID to check the status of the job<br>}</pre><p>When startJob resolves, we should see a job_id returned.</p><pre>{&quot;job_id&quot;:&quot;b49955b4-9b64-4d8b-a4c6-2e3550472a33&quot;}</pre><p>Now that we’ve started an Analyze Media job, we need to wait for the job to resolve. Depending on the size of the file, the job could take a few minutes to complete and hence requires some kind of progress tracking. 
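One way to implement that tracking is a simple poll loop. A hedged Python sketch of the same status check (the helper names and the five-second interval are my own; the endpoint and status values come from the sample responses in this guide):

```python
import json
import time
from urllib import request

STATUS_URL = "https://api.dolby.com/media/analyze?job_id="

def status_url(job_id):
    # Build the GET URL for checking a job's status.
    return STATUS_URL + job_id

def is_done(status_payload):
    # The job is finished once it leaves the in-flight states.
    return status_payload.get("status") not in ("Pending", "Running")

def wait_for_job(api_key, job_id, interval=5.0):
    """Poll the status endpoint until the job resolves, then return the final payload."""
    headers = {"Accept": "application/json", "x-api-key": api_key}
    while True:
        req = request.Request(status_url(job_id), headers=headers)
        with request.urlopen(req) as resp:
            payload = json.loads(resp.read())
        if is_done(payload):
            return payload
        time.sleep(interval)  # pause between polls instead of hammering the API
```

The sleep between requests matters: the job can take minutes, and polling in a tight loop wastes requests without making it finish sooner.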
We can check the status of the job using the job_id created in <em>step 2</em>, along with our Media API key.</p><pre>async function checkJobStatus() {<br>    //Checks the status of the created job using the jobID<br>    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;<br>    let jobID = ANALYZE_JOB_ID; //This job ID is output in the previous step when a job is created.<br> <br>    const options = {<br>        method: &quot;GET&quot;,<br>        headers: { Accept: &quot;application/json&quot;, &quot;x-api-key&quot;: mAPIKey },<br>    };<br> <br>    let result = await fetch(&quot;https://api.dolby.com/media/analyze?job_id=&quot;.concat(jobID), options)<br>        .then((response) =&gt; response.json());<br>    console.log(result);<br>}</pre><p>The checkJobStatus function may need to be run multiple times, depending on how long the Analyze Media job takes to resolve. Each time you query the status, you should get a result where progress ranges from 0 to 100.</p><pre>{<br>  &quot;path&quot;: &quot;/media/analyze&quot;,<br>  &quot;status&quot;: &quot;Running&quot;,<br>  &quot;progress&quot;: 42<br>}</pre><p>Once we know the job is complete, we can download the resulting JSON, which contains all the data and insights generated regarding the input media.</p><pre>async function getResults() {<br>    //Gets and displays the results of the Analyze job<br>    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;<br> <br>    const options = {<br>        method: &quot;GET&quot;,<br>        headers: { Accept: &quot;application/octet-stream&quot;, &quot;x-api-key&quot;: mAPIKey },<br>    };<br> <br>    //Fetch from the output.json URL we specified in step 2.<br>    let json_results = await fetch(&quot;https://api.dolby.com/media/output?url=dlb://file_output.json&quot;, options)<br>        .then((response) =&gt; response.json())<br>        .catch((err) =&gt; console.error(err));<br> <br>    console.log(json_results)<br>}</pre><p>The resulting output JSON 
includes music data broken down by section. These sections contain an assortment of useful data points:</p><ul><li><strong>Start (seconds)</strong>: The starting point of this section.</li><li><strong>Duration (seconds)</strong>: The duration of the segment.</li><li><strong>Loudness (decibels)</strong>: The intensity of the segment at the threshold of hearing.</li><li><strong>Beats per minute (bpm)</strong>: The number of beats per minute and an indicator of tempo.</li><li><strong>Key</strong>: The pitch/scale of the music segment, along with a confidence score of 0.0–1.0.</li><li><strong>Genre</strong>: The distribution of genres, including confidence scores of 0.0–1.0.</li><li><strong>Instrument</strong>: The distribution of instruments, including confidence scores of 0.0–1.0.</li></ul><p>Depending on the complexity of the audio file, there can sometimes be hundreds of music segments.</p><pre>&quot;music&quot;: {<br>    &quot;percentage&quot;: 34.79,<br>    &quot;num_sections&quot;: 35,<br>    &quot;sections&quot;: [<br>        {<br>            &quot;section_id&quot;: &quot;mu_1&quot;,<br>            &quot;start&quot;: 0.0,<br>            &quot;duration&quot;: 13.44,<br>            &quot;loudness&quot;: -16.56,<br>            &quot;bpm&quot;: 222.22,<br>            &quot;key&quot;: [<br>                [<br>                    &quot;Ab major&quot;,<br>                    0.72<br>                ]<br>            ],<br>            &quot;genre&quot;: [<br>                [<br>                    &quot;hip-hop&quot;,<br>                    0.17<br>                ],<br>                [<br>                    &quot;rock&quot;,<br>                    0.15<br>                ],<br>                [<br>                    &quot;punk&quot;,<br>                    0.13<br>                ]<br>            ],<br>            &quot;instrument&quot;: [<br>                [<br>                    &quot;vocals&quot;,<br>                    0.17<br>                ],<br>   
             [<br>                    &quot;guitar&quot;,<br>                    0.2<br>                ],<br>                [<br>                    &quot;drums&quot;,<br>                    0.05<br>                ],<br>                [<br>                    &quot;piano&quot;,<br>                    0.04<br>                ]<br>            ]<br>        }</pre><p>This snippet of the output only shows the results as they relate to the Cue Sheet use case; the API generates even more data, including audio defects, loudness, and content classification. I recommend reading<a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide"> this guide</a>, which explains the content of the output JSON in depth.</p><p>With the final step resolved, we have successfully used the Analyze Media API and gained insight into the content of the audio file. In the context of the Cue Sheet Palooza Hackathon, the participants were only really interested in the loudness and music content of the media, and hence filtered the JSON to just show the music data, similar to the example output.</p><h3>Building an app for creating Cue Sheets</h3><p>Of course, not every musician or composer knows how to program, and hence part of the hackathon was building a user interface for SOCAN members to interact with during the Cue Sheet creation process. The resulting apps used a variety of tools, including the <a href="http://dolby.io/">Dolby.io</a> API, to format the media content data into a formal Cue Sheet. These web apps took a variety of shapes and sizes with different functionality and complexity.</p><p>It’s one thing to show how the Analyze Media API works, but it’s another to highlight how the app might be used in a production environment, like for a Cue Sheet. 
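The music sections in the Analyze output map naturally onto cue-sheet-style rows. A small illustrative Python sketch (the field names come from the sample output above; the row format and function names are my own, not a formal cue sheet):

```python
def _mmss(seconds):
    # Format a time in seconds as m:ss for a cue sheet timestamp.
    m, s = divmod(int(round(seconds)), 60)
    return f"{m}:{s:02d}"

def to_cue_rows(analysis):
    """Flatten the music sections of an Analyze result into cue rows.

    Each row records the cue id, in/out timestamps, and the
    highest-confidence genre for that section.
    """
    rows = []
    for section in analysis.get("music", {}).get("sections", []):
        start = section["start"]
        end = start + section["duration"]
        # Genres arrive as [name, confidence] pairs; take the top one.
        top_genre = max(section.get("genre", []),
                        key=lambda g: g[1], default=(None, 0.0))[0]
        rows.append({
            "cue": section["section_id"],
            "in": _mmss(start),
            "out": _mmss(end),
            "genre": top_genre,
        })
    return rows
```

Running this over the sample section above would yield a row for mu_1 spanning 0:00 to 0:13 with hip-hop as the top genre.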
Included <a href="https://github.com/dolbyio-samples/blog-analyze-music-web"><strong>in this repo is an example</strong></a> I built using the Analyze Media API that takes audio and decomposes the signal to highlight which parts of the media contain music. <br>Here is a picture of the user interface, which takes in your Media API key and the location of a locally stored media file.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sPwKk-H60QibhoDe.png" /><figcaption>The starting screen of the Dolby.io Analyze API Music Data Web app, found here: https://github.com/dolbyio-samples/blog-analyze-music-web</figcaption></figure><p>For showcasing the app, I used a downloaded copy of a music review podcast where the host samples a range of songs across a variety of genres. The podcast includes 30 tracks, which are played over 40% of its 50-minute runtime. If you want to try out the app with a song, you can use the public domain version of “Take Me Out to the Ball Game”, originally recorded in 1908, <a href="https://dolby.io/blog/using-music-mastering-on-take-me-out-to-the-ball-game/">which I had used for another project relating to music mastering</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BgaDYftaksYtiJk3.png" /><figcaption>The Dolby.io Analyze API Music Data Web app after running the analysis on a 50-minute music podcast.</figcaption></figure><p><a href="https://github.com/dolbyio-samples/blog-analyze-music-web"><strong>Feel free to clone the repo and play around with the app yourself.</strong></a></p><h3>Conclusion:</h3><p>At the end of the hackathon, participating teams were graded and awarded prizes based on how useful and accessible their Cue Sheet tool would be for SOCAN members. The sample app demoed above represents a very rudimentary version of what many of the hackers built and how they utilized the Analyze Media API. 
If you are interested in learning more about their projects, the winning team published a <a href="https://github.com/rudolfolah/hackathon_cuesheets">GitHub repo with their winning entry</a>, where you can see how they created a model to recognize music and how they used the Dolby.io Analyze Media API to supplement the Cue Sheet creation process.</p><p>If the Dolby.io Analyze Media API is something you’re interested in learning more about, check out our<a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide"> documentation</a> or explore our other tools, including APIs for <a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide">Algorithmic Music Mastering</a>, <a href="https://docs.dolby.io/media-apis/docs/enhance-api-guide">Enhancing Audio</a>, and <a href="https://docs.dolby.io/media-apis/docs/transcode-api-guide">Transcoding Media</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=66fe7efc9ed4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hacking-media/searching-audio-to-find-loudness-and-music-sections-66fe7efc9ed4">Searching Audio to Find Loudness and Music Sections</a> was originally published in <a href="https://medium.com/hacking-media">Hacking Media</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[“Take Me Out to the Ball Game” Algorithmically Remastered in Python]]></title>
            <link>https://medium.com/hacking-media/take-me-out-to-the-ball-game-algorithmically-remastered-in-python-841732e75e12?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/841732e75e12</guid>
            <category><![CDATA[baseball]]></category>
            <category><![CDATA[music]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Tue, 12 Apr 2022 18:14:24 GMT</pubDate>
            <atom:updated>2022-04-12T18:14:24.651Z</atom:updated>
<content:encoded><![CDATA[<h4>In this guide, we will explore how to algorithmically remaster the classic baseball anthem “Take Me Out to the Ball Game” with the Music Mastering API.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hok3j8VoS1Hc_yJWcI643g.jpeg" /><figcaption>Image by Thomas Christiansen. Used with author permission.</figcaption></figure><p>The 1908<a href="https://en.wikipedia.org/wiki/Jack_Norworth"> Jack Norworth</a> and<a href="https://en.wikipedia.org/wiki/Albert_Von_Tilzer"> Albert Von Tilzer</a> song “Take Me Out to the Ball Game” has been a staple across ballparks, becoming synonymous with America’s favorite pastime. The original version of “Take Me Out to the Ball Game”, recorded on a <a href="https://en.wikipedia.org/wiki/Phonograph_cylinder">two-minute Edison Wax Cylinder</a> by singer and performer <a href="https://en.wikipedia.org/wiki/Edward_Meeker">Edward Meeker</a> over 114 years ago, quickly became a beloved classic. With the baseball season getting underway this <a href="https://www.mlb.com/news/mlb-revised-2022-regular-season-schedule">April 7th</a>, we thought it was about time to dust off the gloves, pick up our bats, step up to our Python environments, and get to work algorithmically remastering the classic anthem with the <a href="http://dolby.io/">Dolby.io</a><a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide"> Music Mastering API</a>.</p><p>Typically performed by audio engineers, mastering is a labor-intensive post-production process usually applied as the last step in creating a song; it is the final polish that takes a track from good to great. 
Because “Take Me Out to the Ball Game” was recorded and produced in 1908, when mastering and post-production technology was very limited, it is interesting to explore the impact of applying a music mastering algorithm to the original recording, and the effect that has on the palatability of the track.</p><h4><strong>Picking a Version</strong></h4><p>Before we can get started remastering “Take Me Out to the Ball Game”, we first need to pick a version of the song. Whilst we often hear the catchy tune played during the middle of the seventh inning, that version isn’t the original and is subject to copyright protection. For this project, we will be using the 1908 version found <a href="https://ia802605.us.archive.org/26/items/TakeMeOutToTheBallGame_243/TakeMeOuttotheBallGame_edmeeker.mp3">here</a>, as it is now available in the <a href="http://publicdomainaudiovideo.blogspot.com/2010/04/take-me-out-to-ball-game.html">public domain and free to use</a>. Unfortunately, the highest-quality version of the 1908 song is stored as an MP3. Whilst this works with the API, Free Lossless Audio Codec (FLAC) or other lossless file types are preferred, as they produce the best results during the mastering post-production process.</p><h4><strong>The Music Mastering API</strong></h4><p>With our song in hand, it’s time to introduce the tool that will be doing the majority of the heavy lifting. <a href="https://dolby.io/products/music-mastering/">The </a><a href="http://dolby.io/">Dolby.io</a><a href="https://dolby.io/products/music-mastering/"> Music Mastering API </a>is a music enhancement tool that allows users to programmatically master files via a number of sound profiles specific to certain genres and styles. 
The API isn’t free; however, Dolby.io offers trial credits if you sign up, and additional trial credits if you add your credit card to the platform.</p><p>For this project, the trial tier, which is available if you <a href="https://dashboard.dolby.io/signup">sign up here</a>, will be sufficient.</p><p>Once you have created an account and logged in, navigate over to the applications tab, select “my_first_app”, and locate your <strong>Media APIs API Key</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*qEvLM71cck3KK_Nx.png" /><figcaption>Screenshot of the Dolby.io dashboard page. Image by Author.</figcaption></figure><p>It’s important to note that all <a href="http://dolby.io/">Dolby.io</a> media APIs adhere to the<a href="https://www.redhat.com/en/topics/api/what-is-a-rest-api#:~:text=A%20REST%20API%20(also%20known,by%20computer%20scientist%20Roy%20Fielding."> REST framework,</a> meaning they are language agnostic. For the purposes of this project, I will be using the tool in Python; however, it works in any other language.</p><h4><strong>Adding it to the </strong><a href="http://dolby.io/"><strong>Dolby.io</strong></a><strong> Server</strong></h4><p>To utilize the Music Mastering API, we first need to store the MP3 file in the cloud. This can be done with either a cloud service provider such as <a href="https://docs.dolby.io/media-apis/docs/aws-s3">AWS</a>, or you can use the <a href="http://dolby.io/">Dolby.io</a> Media Storage platform. 
For simplicity, we will use the <a href="http://dolby.io/">Dolby.io</a> platform, which can be accessed via a REST API call.</p><p>To get started, we need to import the Python “Requests” package and specify a path to the MP3 file on our local machine.</p><pre><strong>import</strong> requests # Requests is useful for making HTTP requests and interacting with REST APIs</pre><pre>file_path <strong>=</strong> &quot;Take-Me-Out-to-the-Ball-Game.mp3&quot;</pre><p>Next, we need to specify the URL we want the Requests package to interact with, specifically the <a href="http://dolby.io/">Dolby.io</a> Media Input address. In addition to the input URL, we also need to format a header that will authenticate our request to the <a href="http://dolby.io/">Dolby.io</a> server with our API key.</p><pre>url = &quot;<a href="https://api.dolby.com/media/input">https://api.dolby.com/media/input</a>&quot;<br>headers = {<br>    &quot;x-api-key&quot;: &quot;YOUR DOLBY.IO MEDIA API KEY&quot;,<br>    &quot;Content-Type&quot;: &quot;application/json&quot;,<br>    &quot;Accept&quot;: &quot;application/json&quot;,<br>}</pre><p>Finally, we need to format a body that specifies the name we want to give our file once it is added to the server.</p><pre>body = {<br>    &quot;url&quot;: &quot;dlb://input-example.mp3&quot;,<br>}</pre><p>With the URL, header, and body all formatted correctly, we can use the Requests package to create a pre-signed URL to which we can upload our MP3 file.</p><pre>response = requests.post(url, json=body, headers=headers)<br>response.raise_for_status()<br>presigned_url = response.json()[&quot;url&quot;]<br> <br>print(&quot;Uploading {0} to {1}&quot;.format(file_path, presigned_url))<br>with open(file_path, &quot;rb&quot;) as input_file:<br>    requests.put(presigned_url, data=input_file)</pre><h4><strong>Starting a Mastering Job</strong></h4><p>Once the audio file has been moved to the cloud, we can begin calling a mastering job. 
The Music Mastering API includes a number of predefined “profiles” which match up to a selection of audio genres such as Hip Hop or Rock. For the best results, a Rock song should be mastered with the Rock profile; however, picking a profile can require a bit of experimentation.</p><p>Because matching creative intent with different sound profiles can take a few trials, the API offers a “preview version” where you can master a 30-second segment of a song with 3 different profiles. We format the body of this request to include this information as well as when we want the segment to begin.</p><pre>body = {<br>    &quot;inputs&quot;: [<br>        {&quot;source&quot;: &quot;dlb://input-example.mp3&quot;, &quot;segment&quot;: {&quot;start&quot;: 36, &quot;duration&quot;: 30}} # 36 seconds is the start of the iconic chorus.<br>    ],<br>    &quot;outputs&quot;: [<br>        {<br>            &quot;destination&quot;: &quot;dlb://example-master-preview-l.mp3&quot;,<br>            &quot;master&quot;: {&quot;dynamic_eq&quot;: {&quot;preset&quot;: &quot;l&quot;}} # Master with the Vocal profile<br>        },<br>        {<br>            &quot;destination&quot;: &quot;dlb://example-master-preview-m.mp3&quot;,<br>            &quot;master&quot;: {&quot;dynamic_eq&quot;: {&quot;preset&quot;: &quot;m&quot;}} # Master with the Folk profile<br>        },<br>        {<br>            &quot;destination&quot;: &quot;dlb://example-master-preview-n.mp3&quot;,<br>            &quot;master&quot;: {&quot;dynamic_eq&quot;: {&quot;preset&quot;: &quot;n&quot;}} # Master with the Classical profile<br>        }<br>         <br>    ]<br>}</pre><p>The header stays the same as the one we used to upload the file to the <a href="http://dolby.io/">Dolby.io</a> server, and the URL changes to match the Music Mastering endpoint.</p><pre>url = &quot;<a href="https://api.dolby.com/media/master/preview">https://api.dolby.com/media/master/preview</a>&quot;<br>headers = {<br>    
&quot;x-api-key&quot;: &quot;YOUR DOLBY.IO MEDIA API KEY&quot;,<br>    &quot;Content-Type&quot;: &quot;application/json&quot;,<br>    &quot;Accept&quot;: &quot;application/json&quot;,<br>}</pre><p>We can use the Requests package to deliver our profile selections and start the mastering job.</p><pre>response = requests.post(url, json=body, headers=headers)<br>response.raise_for_status()<br>print(response.json())<br>job_id = response.json()[&quot;job_id&quot;]</pre><p>This process can take a minute to complete. To check the status of the job, we can send another request to the same URL with the job_id included as a query parameter.</p><pre>url = &quot;<a href="https://api.dolby.com/media/master/preview">https://api.dolby.com/media/master/preview</a>&quot;<br>headers = {<br>        &quot;x-api-key&quot;: &quot;YOUR DOLBY.IO MEDIA API KEY&quot;,<br>        &quot;Content-Type&quot;: &quot;application/json&quot;,<br>        &quot;Accept&quot;: &quot;application/json&quot;,<br>    }<br>params = {&quot;job_id&quot;: job_id}<br>response = requests.get(url, params=params, headers=headers)<br>response.raise_for_status()<br>print(response.json())</pre><p>The response from the request reports the progress of the job between 0% and 100%.</p><h4><strong>Downloading the Mastered File</strong></h4><p>With our file mastered, it’s time to download the three master previews so we can hear the difference. The workflow for downloading files mirrors that of the rest of the <a href="http://dolby.io/">Dolby.io</a> APIs. 
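In practice, you would repeat that status check until the job finishes. Here is a rough sketch of such a polling loop; `wait_for_job` and the fake response sequence are illustrative stand-ins of my own, where the callable stands in for the `requests.get` status call shown above.

```python
import time

def wait_for_job(get_status, poll_seconds=0, max_polls=120):
    """Call `get_status()` repeatedly until the job reports 100% progress."""
    for _ in range(max_polls):
        status = get_status()
        if status.get("progress") == 100:
            return status
        time.sleep(poll_seconds)  # wait between polls to avoid hammering the API
    raise TimeoutError("mastering job did not finish in time")

# Stand-in for the real status endpoint: "finishes" on the third poll.
fake_responses = iter([{"progress": 40}, {"progress": 80}, {"progress": 100}])
final_status = wait_for_job(lambda: next(fake_responses))
```

Passing the fetch as a callable keeps the loop logic separate from the HTTP call, which also makes it easy to test.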
Much like uploading a file or starting a job, we format a header with our API key and parameters that point to the mastering output on the Dolby.io server.</p><pre>import shutil # File operations package useful for downloading files from a server.<br> <br>url = &quot;<a href="https://api.dolby.com/media/output">https://api.dolby.com/media/output</a>&quot;<br>headers = {<br>        &quot;x-api-key&quot;: &quot;YOUR DOLBY.IO MEDIA API KEY&quot;,<br>        &quot;Content-Type&quot;: &quot;application/json&quot;,<br>        &quot;Accept&quot;: &quot;application/json&quot;,<br>    }<br> <br>for profile in [&quot;l&quot;,&quot;m&quot;,&quot;n&quot;]:<br> <br>    output_path = &quot;out/preview-&quot; + profile + &quot;.mp3&quot;<br> <br>    preview_url = &quot;dlb://example-master-preview-&quot; + profile + &quot;.mp3&quot;<br>    args = {&quot;url&quot;: preview_url}<br> <br>    with requests.get(url, params=args, headers=headers, stream=True) as response:<br>        response.raise_for_status()<br>        response.raw.decode_content = True<br>        print(&quot;Downloading from {0} into {1}&quot;.format(response.url, output_path))<br>        with open(output_path, &quot;wb&quot;) as output_file:<br>            shutil.copyfileobj(response.raw, output_file)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*RG0RpO9lPCQRmeaT.png" /><figcaption>Summary of mastering job workflow from locating the file to downloading results. 
Image by Author.</figcaption></figure><p>With the mastered files downloaded locally, we can listen to both and hear the difference between the original and one of our Masters.</p><p><a href="https://dolby.io/wp-content/uploads/2022/04/Take-Me-Out-to-the-Ball-Game.mp3">Original file</a></p><p><a href="https://dolby.io/wp-content/uploads/2022/04/preview-n.mp3">Mastered Version</a></p><p>We can also hear the subtle differences between the Masters.</p><p><a href="https://dolby.io/wp-content/uploads/2022/04/preview-l.mp3">Mastered 1</a></p><p><a href="https://dolby.io/wp-content/uploads/2022/04/preview-m.mp3">Mastered 2</a></p><p><a href="https://dolby.io/wp-content/uploads/2022/04/preview-n-1.mp3">Mastered 3</a></p><p>For the purposes of this demo, we only mastered with the last three profiles; however, there are <a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide">14 different music mastering profiles</a> to pick from. From my testing, I like the “Classical” profile (Profile n) the best, but everyone is different, so try it out yourself.</p><h4><strong>A More Modern Example</strong></h4><p>Whilst the classic still doesn’t sound modern, remastering the track does make it a little clearer and hence more enjoyable to listen to. Typically, the <a href="http://dolby.io/">Dolby.io</a> Music Mastering API is built for contemporary samples recorded on more modern equipment in lossless formats such as FLAC, and is not designed to be an audio restoration tool. For the purposes of this investigation, we wanted to see the impact post-production mastering would have on the track rather than attempting to outright “fix” the original.</p><p>Currently, the Dolby.io team has a <a href="https://sxsw-music-mastering.netlify.app/">demo hosted here</a> that lets you listen to before and after examples of licensed contemporary tracks, which better exemplifies the use case of the API. 
Because Dolby.io owns the licenses to those songs, they are allowed to host the content, whereas for this project we wanted to pick a track in the public domain so anyone with an interest can test it out for themselves without fear of infringing on copyright law.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*d7Fyen9af-1daMHz.png" /><figcaption>The Dolby.io Music Mastering demo, <a href="https://static.dolby.link/demos/music-mastering/index.html">available here</a>. Image by Author.</figcaption></figure><p>If the Music Mastering API is something you are interested in exploring further, check out the <a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide">Dolby.io documentation</a> around the API or the <a href="https://sxsw-music-mastering.netlify.app/">live demo mentioned above</a>; otherwise, let&#39;s get excited for an awesome baseball season ahead and “<em>root, root, root for the home team”</em>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=841732e75e12" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hacking-media/take-me-out-to-the-ball-game-algorithmically-remastered-in-python-841732e75e12">“Take Me Out to the Ball Game” Algorithmically Remastered in Python</a> was originally published in <a href="https://medium.com/hacking-media">Hacking Media</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Extracting Features from Audio Samples for Machine Learning]]></title>
            <link>https://medium.com/hacking-media/extracting-features-from-audio-samples-for-machine-learning-7b6a9271984?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/7b6a9271984</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[audio]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Thu, 16 Dec 2021 20:00:24 GMT</pubDate>
            <atom:updated>2021-12-21T23:52:52.660Z</atom:updated>
<content:encoded><![CDATA[<h4>Creating an effective classifier relies on extracting useful features from the underlying data. In this tutorial, we outline a tool that optimizes audio feature extraction.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VLdfhUNjA6XAyvjs.jpg" /><figcaption>Image by Thomas Christiansen. Used with author permission.</figcaption></figure><p>Humans are great at classifying noises. We can hear a chirp and surmise that it belongs to a bird; we can hear an abstract noise and classify it as speech with a particular meaning and definition. This relationship between humans and audio classification forms the basis of speech and human communication as a whole. Translating this incredible ability to computers, on the other hand, can be a difficult challenge, to say the least.</p><p>Whilst we can naturally decompose signals, how do we teach computers to do this, and how do we show what parts of the signal matter and what parts of the signal are irrelevant or noisy? This is where PyAudio Analysis comes in. <a href="https://github.com/tyiannak/pyAudioAnalysis/wiki">PyAudio Analysis</a> is an open-source Python project by <a href="https://tyiannak.github.io/">Theodoros Giannakopoulos</a>, a Principal Researcher of multimodal machine learning at the <a href="https://labs-repos.iit.demokritos.gr/MagCIL/index.html">Multimedia Analysis Group of the Computational Intelligence Lab (MagCIL)</a>. The package aims to simplify the feature extraction and classification process by providing a number of helpful tools that can sift through the signal and create relevant features. 
These features can then be used to train models for classification or segmentation tasks.</p><h3>So How Does it Work?</h3><p>To get started with PyAudio Analysis, we first have to install the package, which can be done through the pip command in the command line:</p><pre>pip install pyAudioAnalysis</pre><p>Next, we can use the functionality of the package to extract features. With the feature extraction, there are two main methodologies we need to understand: short-term features and mid-term features.</p><ul><li><strong>Short-term features</strong> are features calculated on a user-defined “frame”. The signal is split into these frames, and the package then computes a number of features for each frame, outputting a feature vector for the whole signal.</li><li><strong>Mid-term features</strong> are features calculated on short-term feature sequences and include common statistics such as mean and standard deviation for each short-term sequence.</li></ul><p>Altogether, the feature extraction creates 34 features for each frame that fits within the provided audio signal. For example, if we have one minute of audio and set a frame length of 0.025 seconds, the resulting matrix will be 34 features by 2400 frames. These features draw on a variety of signal processing nomenclature and are briefly described in the table provided by the PyAudio Analysis wiki below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1JuvzYbnSdH7SXMP_KBNFA.png" /><figcaption>Image by Author. Using definitions provided by the <a href="https://github.com/tyiannak/pyAudioAnalysis/wiki/3.-Feature-Extraction">PyAudio Analysis Wiki</a>.</figcaption></figure><p>These features can be generated for a series of audio samples through the command line. 
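The frame arithmetic above generalizes to any clip with a quick sliding-window count. This is a sketch under the assumption that the step equals the window length (non-overlapping frames), as in the one-minute example:

```python
def num_frames(duration_s, window_s, step_s):
    """Number of frames produced when sliding a window of `window_s` seconds
    with a hop of `step_s` seconds over a signal lasting `duration_s` seconds."""
    return round((duration_s - window_s) / step_s) + 1

# One minute of audio with non-overlapping 0.025 s frames -> 2400 frames,
# i.e. a 34 x 2400 feature matrix.
frames = num_frames(60.0, 0.025, 0.025)
```

Using `round` rather than integer division avoids floating-point truncation when the durations do not divide exactly.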
In this case, we have four parameters to specify in relation to feature creation: the mid-term window size (mw), the mid-term step size (ms), the short-term window size (sw), and the short-term step size (ss).</p><pre>python3 audioAnalysis.py featureExtractionDir -i data/ -mw 1.0 -ms 1.0 -sw 0.050 -ss 0.050</pre><p>Using these created features, we can then train a classifier with the inbuilt features of the package. In this case, we are going to train a classifier that can distinguish between two contrasting genres: sports and business. For this particular example, we will use a k-Nearest-Neighbor (kNN) model, which informs classification based on the relationship of the surrounding “k” neighbors, and we’ll train on a dataset of 20 20-minute sports podcasts and 10 20-minute business podcasts. The model will then be evaluated on a reserve dataset of 10 20-minute sports podcasts and 5 20-minute business podcasts. Under optimal conditions, we would use a significantly larger dataset of hundreds of audio samples; however, due to the limits of our catalog of audio samples, we are only experimenting with 45 total podcast samples.</p><pre>python3 audioAnalysis.py trainClassifier -i raw_audio/sport/ raw_audio/business --method knn -o data/knnSM</pre><p>The model takes about 15 minutes to train before spitting out results relating to the training in the form of a Confusion Matrix and a Precision, Recall, F1, and Accuracy Matrix. 
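As a brief aside before looking at the results, the majority-vote rule behind kNN can be sketched in a few lines. This is a toy illustration on made-up two-dimensional "feature vectors" standing in for the 34-dimensional audio features, not pyAudioAnalysis's actual implementation:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up 2-D features: two "sports" clips near the origin, two "business" clips far away.
train = [([0.1, 0.2], "sports"), ([0.15, 0.25], "sports"),
         ([0.9, 0.8], "business"), ([0.85, 0.75], "business")]
label = knn_predict(train, [0.12, 0.22], k=3)
```

The choice of k matters, which is exactly why the results below are reported at several neighbor counts.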
The Confusion Matrix highlights a highly effective model with 100% accuracy, despite the imbalance of the data, with 66% of the samples belonging to “Sports” and 33% belonging to “Business”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/428/0*1PKcRN8i2TjPSJXv.png" /><figcaption>An auto-generated <a href="https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/#:~:text=A%20confusion%20matrix%20is%20a,the%20true%20values%20are%20known.&amp;text=The%20classifier%20made%20a%20total,the%20presence%20of%20that%20disease).">confusion matrix</a> where the top left represents sports podcasts correctly classified as sports podcasts and the bottom right represents business podcasts correctly classified as business podcasts. A good model should only have values in the top left and the bottom right. Image by Author.</figcaption></figure><p>We can also look at the Precision, Recall, F1, and Accuracy Matrix to see how the model performed at different neighbor counts (c), with both 1-NN and 3-NN performing the most accurately and performance dropping off as more neighbors are considered.</p><p>With the model trained, we can now evaluate its performance on unseen audio samples:</p><pre>from pyAudioAnalysis import audioTrainTest as aT<br>aT.file_classification(&quot;file_loc/sports_podcast.wav&quot;, &quot;data/knnSM&quot;, &quot;knn&quot;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wOCue4JMs9rlG37w.png" /><figcaption>The model has correctly classified a sports podcast as belonging to the sports category. Image by Author.</figcaption></figure><pre>aT.file_classification(&quot;file_loc/business_podcast.wav&quot;, &quot;data/knnSM&quot;, &quot;knn&quot;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*M1c629g4SZfRQ1KU.png" /><figcaption>The model has correctly classified a business podcast as belonging to the business category. 
Image by Author.</figcaption></figure><p>As we can see, the model is performing well; in fact, if we evaluate it on the remaining 15 test podcast samples, we get an accuracy of 100%. This indicates that the model is generally effective and that the feature extraction process created relevant data points that are useful for training models. This experiment on business and sports podcasts serves more as a proof of concept, however, as the training and test sets were relatively small. Despite the limitations, this example highlights the effectiveness of feature extraction from audio samples using PyAudio Analysis.</p><h3>In Summary</h3><p>Being able to extract relevant and useful data points from raw unstructured audio is an immensely useful process, especially for building effective classification models. PyAudio Analysis takes this feature extraction process and simplifies it into just a few lines of code you can execute on a directory of audio files to build your own classification models. 
If you are interested in learning more about PyAudio Analysis the Dolby.io team presented a <a href="https://zoom.us/rec/play/O9NYrLk4ZndslxJ2_ijIYVSeAIsEjxbWHny4pQ3eEKnS1KxQmf7fBgWtCWdwIcALEeqfIr8m7vui0skq.Pm2hVyYmg2gh6-Bs?startTime=1635606149000&amp;_x_zm_rtaid=LdnvL0qCQF6bI7hS8vVh5Q.1635638331701.c3802c6048b02acbb4e8082f84565a4c&amp;_x_zm_rhtaid=916&amp;fbclid=IwAR1qxxdne6qyNcxjEbnrICEb7Mk2pDB0YiZBV240gDF-s2fsGIhGK3OGr4g">tutorial on audio data extraction at PyData Global</a> that included an introduction to the package, or you can read more about the package on <a href="https://github.com/tyiannak/pyAudioAnalysis">its wiki here.</a></p><p><em>Originally published at </em><a href="https://dolby.io/blog/creating-audio-features-with-pyaudio-analysis/"><em>https://dolby.io</em></a><em> on December 16, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7b6a9271984" width="1" height="1" alt=""><hr><p><a href="https://medium.com/hacking-media/extracting-features-from-audio-samples-for-machine-learning-7b6a9271984">Extracting Features from Audio Samples for Machine Learning</a> was originally published in <a href="https://medium.com/hacking-media">Hacking Media</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Three Tips I Learned Presenting at My First Tech Conference]]></title>
            <link>https://medium.com/swlh/three-tips-i-learned-presenting-at-my-first-data-science-conference-6047528e96c?source=rss-6b8dc7a69e8f------2</link>
            <guid isPermaLink="false">https://medium.com/p/6047528e96c</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[pydata]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[public-speaking]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Braden Riggs]]></dc:creator>
            <pubDate>Wed, 03 Nov 2021 18:55:18 GMT</pubDate>
            <atom:updated>2021-11-10T14:52:02.193Z</atom:updated>
<content:encoded><![CDATA[<h4>Presenting at PyData Global and some tricks to make your content resonate with the audience</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*59fWnrB1lRjW1tz2" /><figcaption>Photo by <a href="https://unsplash.com/@ianharber?utm_source=medium&amp;utm_medium=referral">Ian Harber</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>From October 28th to the 30th, I participated in <a href="https://pydata.org/global2021/">PyData Global 2021</a>, an event hosted by the non-profit educational program PyData that focuses on helping connect, educate, and grow the data science community. Although Python is in the name, the event welcomes developers and researchers from all different backgrounds to share their tools and experience with the wider community.</p><p>This was my first data science event. Having only graduated this year, I had missed out on a number of conferences due to the pandemic, and I was thrilled to finally be able to attend one, albeit virtually. Not only would I be attending the event, I would also be representing my company, Dolby, and presenting two webinars: a short 10-minute lightning talk and a full-length 90-minute tutorial.</p><p>As a first-time data science speaker and a recent graduate, jumping headfirst into the community was a daunting task, and I wanted to share three things I learned along the way for anyone looking to get involved as a speaker.</p><h4>What Was I Presenting?</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zCAjqNj23RO-Ja24OYpanQ.png" /><figcaption>A very candid shot of me presenting on sports podcasts. Image by author.</figcaption></figure><p>For my presentation, I focused on educating and building the community around audio data. Audio data is an immensely rich and common data source; however, it can be difficult to use and has a steep learning curve. 
This learning curve can turn away beginners, so I wanted to create a presentation highlighting some tools, tips, and tricks, as well as some rewards of working in the audio data space. To give the presentation more of a theme, I chose to focus my investigation on podcasts, specifically a 20-episode sports series where we explored what we could learn from the podcast recordings. If you are interested in watching a recording of the presentation, <a href="https://zoom.us/rec/play/O9NYrLk4ZndslxJ2_ijIYVSeAIsEjxbWHny4pQ3eEKnS1KxQmf7fBgW[…]lid=IwAR3WOYyo7erQHyqYQ7Ptdl2xHzFFH0cEhuNySyGnH14DAO3O9bqdEtelyX8"><strong>check it out here</strong></a><strong>.</strong></p><h4>Do Your Research on the Community</h4><p>The first thing I learned, and probably the most important, is to always do your research on the event, even after your CFP (call for presentations) has been accepted.</p><p>If you haven&#39;t attended the event before, there may be some unseen rules or nuances to how presentations are conducted and how the community receives them. A great way to check this out is to watch past recordings of the events on YouTube, where you can get a feel for the tone and atmosphere of the community. Some events are more serious in nature and others are more casual; some are more academic and research-focused, and some are more industry-focused. 
Knowing the distinction can help make sure your work resonates with the audience’s expectations.</p><p>Here are a few good questions to ask when preparing a presentation:</p><ol><li>How much code am I expected to show and where should I show it?</li><li>What level of aptitude am I expecting from my audience?</li><li>Should I prepare a script or adlib for a more casual feel to the presentation?</li><li>Is my presentation applicable to the wider community or just a subset?</li><li>How can I make sure my figures and code are legible to viewers?</li></ol><p>Depending on the answers to these questions, you might format your content in a certain way to better cater to the audience.</p><h4>Interact with the Audience</h4><p>If you have ever watched a livestream on Twitch, you’ll know that a good streamer interacts with their audience regularly. This isn’t just to kill time; the connection between the audience and host is an important one and can make or break live content, especially content that is hours long.</p><p>This relationship between the viewer and the host is no different at tech conferences and can be even more important depending on the difficulty of the content.</p><p>There are a number of different tactics I employed to help my audience feel more connected to the content and the presentation. Some of these tactics are obvious, such as stopping to answer questions or reading comments out loud, which can help the audience engage with the content as well as demonstrate your own command of the material you are presenting. If you want to really engage your audience, you can explore ways to make the viewers think critically about your content. In my audio data presentation, I linked a Typeform for attendees to fill out with their own project ideas for how they could use audio data. To further incentivize responses, I offered bonus usage minutes for the <a href="https://docs.dolby.io/media-apis/docs">Dolby.io media APIs</a> for any ideas I liked and wanted to see explored. 
This helped viewers feel more connected to the content and the presentation, and it allowed me to pad out the session with more casual discussion of some of the ideas.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/957/1*YdnUmDABtvnP5W5itjFA4w.png" /><figcaption>Question #3 in our Typeform. Image by author.</figcaption></figure><p>Also included in the Typeform was a section to gather feedback on the presentation, where I got some great comments from an awesome audience.</p><h4>Share Your Code!</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WdMw9YkMLlAYuwlmBtaEYw.png" /><figcaption>A screenshot from my analysis notebook, including some terminology and links for anyone looking to learn more. Image by Author.</figcaption></figure><p>The final major takeaway from my first tutorial was to have code ready to share. Even if you don’t include much code in your actual presentation, having some demos ready to share with the audience is a must, especially at data science events, where the audience expects <a href="https://ipython.org/notebook.html">IPython notebooks</a>.</p><p>Sharing code can help the audience in a number of ways, from catering to different kinds of learners to reiterating key components of the presentation. I opted to include two code resources for the tutorial:</p><ol><li><a href="https://github.com/Briggs599/audio_data_DolbyIO">My Presentation Code</a>: this repo includes two notebooks that create the data and a notebook that analyzes it.</li><li><a href="https://github.com/DolbyIO/awesome-audio">Awesome Audio</a>: this repo includes a number of audio data resources, including how-to guides, datasets, audio samples, and much more. 
Check it out if you are interested in contributing.</li></ol><p>These resources improved the reproducibility of my results and presentation, and they gave the audience a starting point if they want to do something similar.</p><h4>Final Comments</h4><p>Public speaking, even if virtual, can be a daunting experience no matter your background. My advice to anyone looking to get started, beyond what I mentioned above, is to get involved with the community. It&#39;s incredibly supportive and full of people happy to pass on advice. Who knows, you might even get some kudos along the way:</p><blockquote>Fantastic talk. It is unfair to judge but if I had to it would be my in top 3 out of 35 or so talks that I listened to. Thank you.</blockquote><hr><p><a href="https://medium.com/swlh/three-tips-i-learned-presenting-at-my-first-data-science-conference-6047528e96c">Three Tips I Learned Presenting at My First Tech Conference</a> was originally published in <a href="https://medium.com/swlh">The Startup</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>