<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Shaun VanWeelden on Medium]]></title>
        <description><![CDATA[Stories by Shaun VanWeelden on Medium]]></description>
        <link>https://medium.com/@successengineer?source=rss-c1a49b7bd016------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*YgpJyQWRNw-KBarckBoeow.jpeg</url>
            <title>Stories by Shaun VanWeelden on Medium</title>
            <link>https://medium.com/@successengineer?source=rss-c1a49b7bd016------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 21:40:29 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@successengineer/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How to Automatically Turn Off your EC2 Instance in 2021]]></title>
            <link>https://successengineer.medium.com/how-to-automatically-turn-off-your-ec2-instance-in-2021-b73374e51090?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/b73374e51090</guid>
            <category><![CDATA[amazon-web-services]]></category>
            <category><![CDATA[amazon]]></category>
            <category><![CDATA[ec2-instance]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[ec2]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Sun, 28 Feb 2021 18:45:43 GMT</pubDate>
            <atom:updated>2021-02-28T18:45:43.694Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KefUpgflMMiOGSjcNHI12g.jpeg" /></figure><p><em>Ever crash after a long night of coding only to wake up realizing you didn’t turn off your EC2 instances and now you owe AWS $10? This is surprisingly easy to avoid in 2021.</em></p><p>This guide will show you how to automatically turn off your instance after a period of inactivity.</p><h4>1. Find your EC2 Instance and click “+” in the Alarm Status column</h4><p>Our first step is to find the EC2 instance we want to automatically turn off.</p><p>Clicking the “+” icon in the Alarm Status column will open up the page to manage CloudWatch Alarms (the feature we’ll be using).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jE3koFxVCEB1yXjkNt1WaA.png" /><figcaption>Navigate to the “Create Alarm” page</figcaption></figure><p>You should now be on the <strong>Manage CloudWatch Alarms</strong> page.</p><h4>2. Set the Alarm Action to Stop your Instance</h4><p>When our EC2 instance becomes inactive, we want it to be stopped. To specify the action to take, toggle the Alarm Action section to be on, then select <strong>Stop</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MqwQxyJLLeHvOfVdtzRHTw.png" /><figcaption>Specify the Alarm Action</figcaption></figure><h4>3. Set the Alarm Thresholds as Needed</h4><p>Like a lot of AWS, in my opinion, this section is powerful but not the most intuitive.</p><p>Below are the settings I recommend for an inactivity filter. Essentially, when the maximum CPU usage hasn’t been above 2% for an hour, turn off the instance. I also gave my alarm a more readable name.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bWyFE2Ncf2wU5C9fzSFViQ.png" /><figcaption>Set up your Alarm Thresholds as needed</figcaption></figure><h4>4. 
Click Create and get more Peace of Mind</h4><p>It really is that easy. Whenever I create an EC2 instance, I now do these steps to ensure I don’t end up with a big bill at the end.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pbo3MNHXPNS07Ej5b2m17g.png" /><figcaption>Click Create to save your new Alarm</figcaption></figure><h4>5. Optional: Add an Email Alert when your instance shuts down</h4><p>If you’d like to receive an email for extra peace of mind when your instance shuts down, there is an Alarm Notification section that asks you to type in the name of a new SNS topic to create an email notification.</p><p>I actually couldn’t get this to create a new, usable SNS topic / notification. Feel free to comment about your own experience below.</p><p>If you do want to set this up, I found that after I created my alarm, I could go back to the EC2 instance overview and click on the <strong>Alarm Status </strong>cell for my EC2 instance, then go to the full edit mode.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fCEpD7IBne5m7KXOq_9k9A.png" /><figcaption>Click on the text in the cell for Alarm Status</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FS7aZybuMK1vmVNTYMxXQQ.png" /><figcaption>Open up the full edit page</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0Q1SbbuS99p3bFJ3JH_4sA.png" /><figcaption>Go to the Edit page for the Alarm</figcaption></figure><p>Once on the Edit page, we’ll want to scroll down and click <strong>Next</strong> to go to the <strong>Step 2: Configure Actions</strong> section.</p><p>Click <strong>Add Notification</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8VkP7teaszijE5amc8kpuw.png" /><figcaption>Click “Add Notification”</figcaption></figure><p>Lastly, we’re going to create a new topic by giving it a topic name and specifying your email 
address.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PbwqG_eOqNInUGAe3TdO8w.png" /></figure><p>Click <strong>Create topic</strong> and click through the confirmation email you’ll receive.</p><p>You’re now all set to get email alerts the next time your instance shuts down.</p><h4>Additional Reading</h4><p>This is truly just the tip of the iceberg — For more information, check out the resources below:</p><p><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html">AWS Guide on CloudWatch Alarm Creation for EC2 instances</a></p><p><a href="https://aws.amazon.com/solutions/implementations/instance-scheduler/">AWS Overview on their Instance Scheduler </a>— a way to automatically turn on and off your EC2 instance at set intervals</p><p>Have any other tips to more efficiently use your EC2 instances? Leave them in the comments below!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b73374e51090" width="1" height="1" alt="">]]></content:encoded>
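For readers who prefer the command line, the same alarm can also be created with the AWS CLI. Below is a minimal sketch; the instance ID and region are placeholders you would substitute with your own values:

```shell
# Stop the instance when its maximum CPU utilization stays at or below 2%
# for a full hour. The instance ID and region below are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name "ec2-stop-when-idle" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum \
  --period 3600 \
  --evaluation-periods 1 \
  --threshold 2 \
  --comparison-operator LessThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop
```

The arn:aws:automate:REGION:ec2:stop ARN is the built-in stop action, the same one the console configures when you toggle the Alarm Action to Stop.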
        </item>
        <item>
            <title><![CDATA[Why is California’s Prop 16 so confusing?]]></title>
            <link>https://successengineer.medium.com/why-is-californias-prop-16-so-confusing-96e23977354d?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/96e23977354d</guid>
            <category><![CDATA[california]]></category>
            <category><![CDATA[affirmative-action]]></category>
            <category><![CDATA[elections]]></category>
            <category><![CDATA[prop-16]]></category>
            <category><![CDATA[election-2020]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Mon, 05 Oct 2020 02:06:41 GMT</pubDate>
            <atom:updated>2020-10-05T02:06:41.558Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t1pCfQ7gvm8gNxbqaqchqw.jpeg" /></figure><p><em>Prop 16 is on the ballot and it’s confusing as heck. Ensuring racial equality shouldn’t be this twisted. Here’s my breakdown.</em></p><p>In a few weeks, California will be voting on Prop 16, a repeal of Prop 209 which banned discrimination or preferences “on the basis of race, sex, color, ethnicity, or national origin in the operation of public employment, public education, or public contracting.”</p><p>Since Prop 209 passed in 1996, California has been 1 of 9 states that ban “affirmative action”, a reference to a <a href="https://en.wikipedia.org/wiki/Executive_Order_10925">1961 executive order from JFK</a> ordering federal contractors to “take affirmative action to ensure that applicants are employed and that employees are treated during employment without regard to their race, creed, color, or national origin”</p><h3>Well, this is easy, I’m against discrimination so why would I vote “yes”?</h3><p>Like most things in the world, nothing is ever as easy as it seems. 
This is no exception.</p><h4>1996, when it all started</h4><p>Since we’re considering repealing Prop 209, let’s dive into how Prop 209 was passed in the first place.</p><p>Prior to Prop 209, California set goals for how many contracts it awarded to women-owned and minority-owned businesses, and evaluated race as a contributing, but not exclusive, factor in university admissions.</p><p>Quota systems were not part of the original proposition, nor of the current one; they have been banned by the Supreme Court since 1978 and won’t be making a comeback regardless of the outcome of this vote.</p><p>Prop 209 was passed in 1996 with the support of 54.5% of that year’s voters, nearly three-quarters of whom were white and voted for it 63% to 37%.</p><h4>Understanding both sides</h4><p><a href="https://calmatters.org/election-2020-guide/proposition-16-affirmative-action/">CalMatters.org</a> is an excellent resource to understand both sides of a given ballot measure. They have summarized the arguments on both sides of Prop 16 below:</p><p>Supporters of Prop 16 say:</p><blockquote>Structural racism exists and to preach a color-blind philosophy is to be blind to the impacts of racism. Instead, for example, principals should be able to specifically seek to employ qualified Latino teachers in a school where most teachers are white but most students are Latino. Public universities should be able to consider a student’s race as one of numerous admissions factors, including grades and school work.</blockquote><p>Opponents of Prop 16 say:</p><blockquote>Allowing schools and government offices to make decisions based on race, ethnicity or sex is its own kind of prejudice. Equal rights mean everyone is treated equally. The claim that America is systemically racist is a false narrative that “fuels racial paranoia, division and hatred.” The state already has made strides in diversity. And it’s legal now to give preference to students who really need it — those who grew up in low-income families. 
As for who gets into the public universities, the fault lies with inadequate K-12 schooling.</blockquote><h4>Prop 16 boils down to a vote on how you recognize and want to solve systemic discrimination based on race and gender.</h4><p>Systemic racism and discrimination are not comfortable, easy, nor straightforward topics.</p><p>Systemic racism is a form of racism that is embedded as a normal practice within society or an organization. I would hope we can agree that throughout America’s history, there have consistently been laws, policies, and regulations meant to keep groups of people at a disadvantage, whether that’s women, African Americans, Latinos, Native Americans, Asians, or any other minority group. To varying degrees throughout history, every non-white group has had to overcome challenges that were a direct result of white men wanting to stay in power.</p><p>Tying this back to Prop 16, the naive argument to vote no usually goes something like these comments <a href="https://www.youtube.com/watch?v=lsR3s-rCmxM">pulled from an educational video on Prop 16</a>.</p><blockquote>You should be hired on your qualifications not the color of your skin</blockquote><blockquote>I’m voting NO. I say to the best qualified individual. 
It motivates all people to try harder to get what they want rather given a hand me down or pass for race.</blockquote><blockquote>The only systemic racism taking place is that towards white people.</blockquote><p>If you don’t recognize that systemic discrimination has played a large role in America’s history and that the ramifications are still being felt today, then each of these arguments makes perfect sense.</p><p>It is indeed much simpler and easier to imagine that the world is fair to everyone and that everyone has an equal opportunity to succeed in America; it certainly feels American to say that.</p><h4>Is Affirmative Action the right response to Systemic Discrimination?</h4><p>Frankly, I find the naive arguments of “Vote No because everyone has an equal opportunity, they just need to work harder” relatively easy to counter by pointing out simple, indisputable facts about how groups have been systemically discriminated against.</p><p>It does get more nuanced once you do recognize that systemic racism exists and we look to the possible solutions to reconcile those imbalances.</p><p>To me, this issue has two main options:</p><p><strong>Option 1:</strong> Do not leverage Affirmative Action and trust that those in power today will, on their own, introduce policies and initiatives that correct the historical discrimination women and minority groups have faced — and within a timeline competitive with other options seeking to right these discriminations.</p><p><strong>Option 2:</strong> Allow the possibility of Affirmative Action to level the playing field by considering more minority contenders for university admissions and government contracts. — Affirmative action isn’t a slam dunk, repercussion-free option. 
By definition, it would raise the bar for white men to compete for opportunities and it’s a bit of a “fighting fire with fire” approach that can feel uncomfortable to some.</p><p>To me, Affirmative Action is not an amazing choice, but it is better than the alternatives. It in no way addresses the heart of the issues, say, quality K-12 education that gives everyone an equal opportunity to compete during and after their high school years, or the myriad other causes of systemic discrimination.</p><h4>You don’t want discrimination to be legal, do you?</h4><p>Like many things in our political system today, there’s a game to play when introducing laws and policies. Wording a measure so that, at the surface level, it seems great has been done over and over again; it’s practically in the playbook.</p><p>In this case, it manifests itself as “You don’t want discrimination to be legal, do you?” Then vote “No”.</p><p>And sure enough, it seems successful.</p><blockquote>Watching a focus group with Black voters from Los Angeles, they all said no we won’t vote for this as it was read to them,” said Eva Patterson, who co-chairs the Yes on 16 campaign. “Then we explained that it was in favor of affirmative action and equal opportunity, and they all said, ‘Of course we’ll vote for this.’ <br>(<a href="https://abc7news.com/whats-prop-16-what-is-ca-propostion-california-2020-polls/6439595/">ABC News</a>)</blockquote><p>The team introducing Prop 209 back in 1996 expertly knew how to word this to kill affirmative action in California and ensure that efforts to bring it back would be extremely difficult. 
They seem to be counting on an uneducated voter base.</p><h4>Who’s voting “Yes” on Prop 16?</h4><ul><li>Governor Gavin Newsom</li><li>Mayors of Los Angeles, San Francisco, and San Diego</li><li>Many Democratic senators, representatives, and other legislators</li><li>ACLU, Anti-Defamation League, California Hispanic Chambers of Commerce, California Teachers Association, many other groups</li><li>Kaiser Permanente and PG&amp;E</li></ul><h4>Who’s voting “No” on Prop 16?</h4><ul><li>2 Republican State Senators</li><li>Ward Connerly, the person who introduced Prop 209 in 1996.</li><li>Organizations helping Asian students get into California’s premier schools, like <a href="http://www.ivymax.com/#features">IvyMax</a>.</li></ul><h4>Where can I get just the facts, no opinions?</h4><p><a href="https://ballotpedia.org/California_Proposition_16,_Repeal_Proposition_209_Affirmative_Action_Amendment_(2020)">BallotPedia</a></p><p><a href="https://calmatters.org/election-2020-guide/proposition-16-affirmative-action/">CalMatters.org</a></p><p>It took me a really long time to unpack what was going on with this proposition and to get to the heart of the issues we’ll be voting on.</p><p>My goal in writing this is to provide a breakdown of the issue as I’ve learned about it in hopes that you’ll be able to form a more educated and precise opinion on the matters at hand.</p><p>Do your own research. I don’t have all the answers myself. I’d love to understand your perspective.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=96e23977354d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Lane Finding with Computer Vision Techniques]]></title>
            <link>https://medium.com/swlh/lane-finding-with-computer-vision-techniques-bad24828dbc0?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/bad24828dbc0</guid>
            <category><![CDATA[computer-vision]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[self-driving-cars]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[udacity]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Sun, 05 Jan 2020 10:53:14 GMT</pubDate>
            <atom:updated>2020-01-09T19:28:24.652Z</atom:updated>
<content:encoded><![CDATA[<p><em>The opportunity of self-driving cars presents so many fun and unique technical challenges.</em></p><p><em>Since joining </em><a href="http://scale.com/"><em>Scale AI</em></a><em> as a Customer Engineer, I’ve been diving deep into all things self-driving and trying to learn as much as I can about AI, Computer Vision, and anything remotely related to those things.</em></p><p><em>I recently signed up for Udacity’s </em><a href="https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013"><em>Self-Driving Car Nanodegree</em></a><em> program and found the first project around Lane Finding with Computer Vision techniques to be very engaging.</em></p><h4>The Problem:</h4><p>The goal is to take videos of highway driving and annotate the vehicle’s lane markings by creating an image pipeline to identify and ultimately draw lane markings.</p><p>In the end, our output will look like this:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fwktvv1Ptpvk%3Ffeature%3Doembed&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dwktvv1Ptpvk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fwktvv1Ptpvk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0789cddd498b3823c96ce48fceb8a302/href">https://medium.com/media/0789cddd498b3823c96ce48fceb8a302/href</a></iframe><h4>My Approach:</h4><p>We’ll be going into the details below, but at the highest level, the approach I developed was:</p><ol><li>Extract only the white and yellow-related pixels from a given video frame</li><li>Convert image to grayscale and apply a slight Gaussian blur to remove noise</li><li>Calculate edges using a Canny edge detector</li><li>Apply a region-specific mask to focus only on the area in front of the car</li><li>Run a Hough transform over the remaining edges to 
identify straight lines</li><li>Split lines into “left” and “right” by their slope, calculate median slope and y-intercept for all “left” and “right” lines</li><li>Calculate line intersection point and x-intercept points</li><li>Keep track of the last few frames’ line intersection points and x-intercept points as well for smoothing from frame to frame</li><li>Draw smoothed intersection and x-intercept points on the frame</li><li>Grab some popcorn and enjoy the smooth-as-butter lane annotations</li></ol><p>I then end this blog with some reflection and thoughts about enhancements.</p><h4>1. Extracting the White and Yellow Pixels</h4><p>To start off, I want to explain why this step is a game-changer.</p><p>If you were to jump straight into detecting edges and calculating lines without first extracting the color hues you care about, your resulting edges would look something like what’s below for this image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QV0bZCEgqsq9VKfIFWPuRw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/496/1*NygBLXjggLOqGjKlBmQe5Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/496/1*Nsd3v7oxEpKJpWJsrShDrw.png" /><figcaption>Without Color Isolation / With Color Isolation</figcaption></figure><p>Without color isolation, if you look closely, the long line on the right isn’t even on the actual lane marker; it’s on the boundary where the road goes from shadow to sun.</p><p>With color isolation, the yellow and white line markings come through clearly in the image.</p><p><strong>Enter the HSV Color Space</strong></p><p>We usually think about colors in terms of Red, Green, and Blue or RGB values, where different ratios of Red, Green, and Blue make up the color a pixel has.</p><p>With RGB, a pixel gets its color from how bright or intense each of the three colors is, but that becomes an issue when you instead want to find colors of varying brightness.</p><p>HSV flips RGB on its head and 
separates a pixel’s color into Hue, Saturation, and Value. All the “Yellow”-ish pixels would have about the same hue value regardless of whether it was a dark yellow or a really bright yellow, which is exactly what we want since roads and lighting conditions change a lot!</p><p><strong>Extracting the Colors:</strong></p><p>I had to do some playing around with an <a href="https://www.rapidtables.com/convert/color/hsv-to-rgb.html">online tool</a> myself to figure out how variations of yellow and white were represented, and then applied it to my image by creating a mask of where the colors I want are and removing everything else.</p><pre># Convert image to HSV colorspace<br>    hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)</pre><pre># Define range of colors in HSV<br>    lower_yellow = np.array([30,80,80])<br>    upper_yellow = np.array([255,255,255])<br>    lower_white = np.array([0,0,200])<br>    upper_white = np.array([255,20,255])</pre><pre># Threshold the HSV image to get only the yellow and white colors<br>    yellow_mask = cv2.inRange(hsv_image, lower_yellow, upper_yellow)<br>    white_mask = cv2.inRange(hsv_image, lower_white, upper_white)<br>    mask = cv2.bitwise_or(yellow_mask, white_mask)</pre><pre># Bitwise-AND mask the original image<br>    color_isolated = cv2.bitwise_and(image, image, mask=mask)</pre><p><em>A quick tactical note: you need to convert HSV values into a scale of 0 to 255 instead of, say, 0–360 degrees for a Hue or “70%” Saturation.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/495/1*ik7twDq-p2FCKmuPRlwFkA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/494/1*h-EDwWvrXPec_TC45zwU9w.png" /><figcaption>An image after extracting the yellow and white pixels from it</figcaption></figure><h4>2. 
Converting Image to Grayscale and Blurring</h4><p>Blurring is a common technique to remove “noise” from particular pixels; this helps lines and edge transitions look smooth instead of jagged.</p><pre># Convert to Grayscale<br>    gray = cv2.cvtColor(color_isolated,cv2.COLOR_RGB2GRAY)</pre><pre># Define a kernel size and apply Gaussian smoothing<br>    kernel_size = 5<br>    blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/495/1*Kmzoeb7GD4uaX-EOzMUXwg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/492/1*MY8KSnmEKD-rgS4ZLUuRmg.png" /><figcaption>Converting to Grayscale, then subtly blurring — We’ll want to do this same thing for just our isolated colors</figcaption></figure><h4>3. Edge Detection</h4><p>We want to detect all the edges around the colored regions to extract the specific lines in our image; we’ll use those in a few steps to approximate the lane lines.</p><p>I used the Canny Edge Detector algorithm with some reasonable detection thresholds.</p><pre># Define our parameters for Canny and apply<br>    low_threshold = 50<br>    high_threshold = 150<br>    edges = cv2.Canny(blur_gray, low_threshold, high_threshold)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/493/1*UQOopvbhiGUFuxsl2TG1Cw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/490/1*uIZVxRFTAtutCwMZZH2uXw.png" /><figcaption>Results of running our edge detection, including a zoomed in view of the right lane</figcaption></figure><h4>4. 
Regional Masking</h4><p>You may notice there’s a bunch of noise hanging out on the far left and right of the image.</p><p>Given the fixed camera position on the vehicle, it is safe to assume that the lanes in front of the vehicle that we care about will always be <em>roughly</em> in the same parts of the image.</p><p>We can remove other potential lines and noise in our image by applying a regional mask that allows only pixels in the region to exist, making all others black.</p><p>I came up with this region to limit my lane searching to:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/495/1*dua4-UARQ9PYhh6RDwqhyg.png" /></figure><p>We use OpenCV’s fillPoly method to create a regional mask and apply it to our image.</p><pre># Next we&#39;ll create a masked edges image using cv2.fillPoly()<br>    mask = np.zeros_like(edges)   <br>    ignore_mask_color = 255</pre><pre># Define a four sided polygon to mask<br>    imshape = image.shape<br>    vertices = np.array([<br>        [<br>            (0,imshape[0]), # bottom left<br>            (imshape[1] * .35, imshape[0] * .6),  # top left <br>            (imshape[1] * .65, imshape[0] * .6), # top right<br>            (imshape[1],imshape[0]) # bottom right<br>        ]<br>    ], dtype=np.int32)<br>    <br># Do the Masking<br>    cv2.fillPoly(mask, vertices, ignore_mask_color)<br>    masked_edges = cv2.bitwise_and(edges, mask)</pre><p>We now have an image looking like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/493/1*6JkDkhK_mJW7TXo05gMgFQ.png" /></figure><h4>5. 
Get lines with a Hough Transform</h4><p>A Hough transform is one of the cooler and more mind-blowing things in Computer Vision for me.</p><p>I won’t be able to explain it fully here, but it essentially maps every pixel to a linear equation, where the 2D axes go from “x” and “y” to “m” and “b”, the parameters of the equation for a line, y = mx + b. We then put everything in polar coordinates instead of Cartesian coordinates to avoid divide-by-zero errors. Lots of fun!</p><p>The important part is that we get a set of points, x1, y1, x2, y2, representing the start and end points of the lines it found in our image.</p><pre># Define the Hough transform parameters<br>    rho = 2 # distance resolution in pixels<br>    theta = np.pi/180 # angular resolution in radians<br>    threshold = 30 # minimum number of votes <br>    min_line_length = 20 # minimum number of pixels making up a line<br>    max_line_gap = 1 # maximum gap in pixels between lines</pre><pre># Run Hough on edge detected image<br>    # Output &quot;lines&quot; is an array containing endpoints of lines<br>    lines = cv2.HoughLinesP(<br>        masked_edges, rho, theta, threshold, <br>        np.array([]), min_line_length, max_line_gap<br>    )</pre><h4>6. Identify our Lane Equations</h4><p>Our Hough transform gave us a set of line endpoints for any lines that it found in our last image above. Hopefully, most of these lines represent the long edges of the lane markings. 
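As a quick aside on that polar-coordinates trick: here is a minimal sketch of my own (not part of the original pipeline) showing how the polar form, rho = x*cos(theta) + y*sin(theta), happily represents a perfectly vertical line that slope-intercept form cannot express:

```python
import numpy as np

# A perfectly vertical line (x = 5) has infinite slope in y = m*x + b form.
# In the polar form used by the Hough transform, the same line is simply
# theta = 0, rho = 5, with no division by zero anywhere.
theta, rho = 0.0, 5.0

# Recover x for a few y values on the line: x = (rho - y*sin(theta)) / cos(theta)
xs = [(rho - y * np.sin(theta)) / np.cos(theta) for y in (0, 10, 100)]
# every point on the line sits at x = 5.0
```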
There can be 5, 10, or even 20+ lines returned sometimes depending on the level of noise and lane markings captured.</p><p><strong>Step 1:</strong> Break out the middle school math and calculate the m and b parts for every line with our handy rise/run formula.</p><pre># Iterate over the output &quot;lines&quot; to calculate m&#39;s and b&#39;s <br>    mxb = np.array([[0,0]])<br>    for line in lines:<br>        for x1,y1,x2,y2 in line:<br>            if x1 == x2: # skip vertical lines to avoid dividing by zero<br>                continue<br>            m = (y2-y1) / (x2-x1)<br>            b = y1 - m * x1<br>            mxb = np.vstack((mxb, [m, b]))</pre><p><strong>Step 2</strong>: Separate line equations into the left and right lanes and calculate the median slope and median y-intercept.</p><p>I chose to use the median to reduce the effect of outliers; outliers are actually quite possible if we happened to draw a line for another part of the road or scene that doesn’t relate to a lane marker directly.</p><p>An important realization is that the left lane and right lane markers will inherently have opposite slopes, such that one is positive and one is negative.</p><p>We’ll do some fancy array indexing to calculate our median slope and intercept for the left and right lines of our lane.</p><pre>median_right_m = np.median(mxb[mxb[:,0] &gt; 0,0])<br>median_left_m = np.median(mxb[mxb[:,0] &lt; 0,0])</pre><pre>median_right_b = np.median(mxb[mxb[:,0] &gt; 0,1])<br>median_left_b = np.median(mxb[mxb[:,0] &lt; 0,1])</pre><p>Awesome, we have our linear equations for our lines!</p><h4>7. Calculate Line Intersection Point and X-Intercept Points</h4><p>Now that we have our two lines, let’s figure out where they intersect with each other, as well as where each crosses the bottom of the image. 
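To make the algebra concrete, here is a small worked example with invented median slopes and intercepts (the numbers are illustrative only, not taken from the project videos):

```python
# Hypothetical median slopes and intercepts (invented for illustration only)
median_left_m, median_left_b = -0.7, 650.0
median_right_m, median_right_b = 0.6, -120.0
frame_height = 540  # assumed image height in pixels

# Setting m1*x + b1 = m2*x + b2 and solving for x gives the intersection
x_intersect = (median_left_b - median_right_b) / (median_right_m - median_left_m)
y_intersect = median_right_m * x_intersect + median_right_b

# Where each line crosses the bottom of the frame: x = (y - b) / m
left_bottom = (frame_height - median_left_b) / median_left_m
right_bottom = (frame_height - median_right_b) / median_right_m

# Both line equations agree at the intersection point
assert abs((median_left_m * x_intersect + median_left_b) - y_intersect) < 1e-9
```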
This will be useful when we want to draw them on our original image.</p><p>For a refresher, here are some equations from <a href="https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection#Given_the_equations_of_the_lines">Wikipedia</a>, given ax+c = bx+d.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*UQIv4SM2xSg3gplMgKWkww.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/346/1*R-EVfI3bLf5CCpgrlCnCOQ.png" /></figure><pre># Calculate the Intersect point of our two lines<br>    x_intersect = (median_left_b - median_right_b) / (median_right_m - median_left_m)<br>    y_intersect = median_right_m * (median_left_b - median_right_b) / (median_right_m - median_left_m) + median_right_b</pre><pre># Calculate the X-Intercept Points<br>    # x = (y - b) / m<br>    left_bottom = (imshape[0] - median_left_b) / median_left_m<br>    right_bottom = (imshape[0] - median_right_b) / median_right_m</pre><h4>8. Tracking Line History for Smoothing</h4><p>If we stopped here, we would have lines drawn on our images, but when you put these annotations into a video, you’ll see a lot of jumpiness, as slight differences in the line equations and in how the edge detection highlights pixels cause minor variations in our equation variables from frame to frame.</p><p>The solution is to sacrifice a small amount of responsiveness for a smoothed transition from the previously seen line equations.</p><p>When used in a video setting, my algorithm takes in, utilizes, and returns a history of previously seen line equations. 
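As a compact sketch of that history idea, here is my own deque-based variant of the same rolling-median smoothing (not the post's actual implementation, which passes a fixed-size NumPy array around):

```python
from collections import deque
import numpy as np

# Keep the last N frames' [left_bottom, right_bottom, x_intersect, y_intersect]
# and report the per-column median as the smoothed estimate.
NUM_FRAMES = 19
history = deque(maxlen=NUM_FRAMES)  # old frames fall off automatically

def smooth(new_points):
    if not np.isnan(new_points).any():  # ignore frames where detection failed
        history.append(new_points)
    return np.median(np.array(history), axis=0)

# Noisy-but-stable measurements settle on a steady median
for frame in range(30):
    pts = smooth([100 + frame % 3, 900.0, 480.0, 320.0])
```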
We’ll see what that looks like further below.</p><pre># Create a History array for smoothing<br>    num_frames_to_median = 19<br>    new_history = [left_bottom, right_bottom, x_intersect, y_intersect]</pre><pre>    if (history.shape[0] == 1): # First time, create larger array<br>        history = new_history<br>        for i in range(num_frames_to_median):<br>            history = np.vstack((history, new_history))</pre><pre>    elif (not(np.isnan(new_history).any())): <br>        history[:-1,:] = history[1:]<br>        history[-1, :] = new_history</pre><pre># Calculate the smoothed line points<br>    left_bottom_median = np.median(history[:,0])<br>    right_bottom_median = np.median(history[:,1])<br>    x_intersect_median = np.median(history[:,2])<br>    y_intersect_median = np.median(history[:,3])</pre><p>It turns out lanes move relatively slowly from one video frame to the next so we can have a relatively high count of how many frames we’re taking the median of to calculate the current line.</p><p>After some experimentation, I chose 19 frames to take the median of, which at 25 frames per second, amounts to a theoretical delay of (19 / 2) / 25, or 0.38 seconds. When doing this, you balance smoothness with delay.</p><h4>9. 
Drawing Lines on Video Frames</h4><p><strong>Step 1:</strong> Once we have our final line points, we can draw lines on a single image.</p><pre># Create a blank image to draw lines on<br>line_image = np.copy(image) * 0</pre><pre># Create our Lines<br>cv2.line(<br>    line_image,<br>    (np.int_(left_bottom_median), imshape[0]),<br>    (np.int_(x_intersect_median), np.int_(y_intersect_median)),<br>    (255, 0, 0), 10<br>)</pre><pre>cv2.line(<br>    line_image,<br>    (np.int_(right_bottom_median), imshape[0]),<br>    (np.int_(x_intersect_median), np.int_(y_intersect_median)),<br>    (0, 0, 255), 10<br>)</pre><pre># Draw the lines on the image<br>lane_edges = cv2.addWeighted(image, 0.8, line_image, 1, 0)</pre><p><strong>Step 2:</strong> Process an entire Video</p><pre>history = np.array([[0, 0, 0, 0]]) # Initialize History<br>images_list = []</pre><pre># Read in Base Video Clip<br>base_clip = VideoFileClip(input_filename)</pre><pre># Process each video frame, note how &quot;history&quot; is passed around<br>for frame in base_clip.iter_frames():<br>    [image, history] = draw_lanes_on_img(frame, history)<br>    images_list.append(image)</pre><pre># Create a new ImageSequenceClip, in other words, our video!<br>clip = ImageSequenceClip(images_list, fps=base_clip.fps)</pre><pre># Save our video<br>%time clip.write_videofile(output_filename, audio=False)</pre><h4>10.
Enjoy some smooth Lane Annotations</h4><p>Below are some example videos processed with the above code:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FchR8d0VJcz0%3Ffeature%3Doembed&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DchR8d0VJcz0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FchR8d0VJcz0%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/8ffd84db7ebf4d914e6de61b8ad0e385/href">https://medium.com/media/8ffd84db7ebf4d914e6de61b8ad0e385/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FoV--bJSdMz0%3Ffeature%3Doembed&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DoV--bJSdMz0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FoV--bJSdMz0%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/7b97846808ffe80ee94b16cd5124fb2b/href">https://medium.com/media/7b97846808ffe80ee94b16cd5124fb2b/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fwktvv1Ptpvk%3Ffeature%3Doembed&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dwktvv1Ptpvk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fwktvv1Ptpvk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0789cddd498b3823c96ce48fceb8a302/href">https://medium.com/media/0789cddd498b3823c96ce48fceb8a302/href</a></iframe><h3>Reflections</h3><h4>What could be done still:</h4><p>I had three ideas I didn’t yet implement in the above code.</p><p><strong>Creating a Custom Kernel for Edge Detection</strong></p><p>In Step 2 and 3, we applied a blur to our image 
to remove noise and then found edges. Both of these operations <em>convolve</em> the image with a kernel, and there are all kinds of different <em>kernels</em> you can build to highlight, lessen, or otherwise modify a pixel’s value based on the pixels around it.</p><p>In theory, since lanes have somewhat fixed angles, we can build a custom kernel that highlights pixels where the pixels above and below them form an angled line.</p><p>A <a href="https://www.youtube.com/watch?v=uihBwtPIBxM">Sobel filter</a> is another kind of edge detector, one that emphasizes edges running in a specific direction. We could borrow similar concepts from how Sobel filters work to create an edge detector for angled lines at the approximate angles lane lines follow.</p><p><strong>Filtering Hough Transform Lines</strong></p><p>While most lines coming back from our Hough transform are hopefully related to the lane markers, we can engineer out any obvious lines where that is not the case.</p><p>Directly in the Hough transform, or in the processing of the lines, we should be able to filter out, say, any horizontal lines, as we know those could not be related to a lane marker.</p><p><strong>Leverage Parallel Lines</strong></p><h4>Limitations:</h4><p><strong>No Lane Switching<br></strong>This algorithm assumes you are already in a lane and will stay in that lane. When you are in the middle of switching lanes, your line angles become near-vertical, and I think this algorithm would struggle with that, given how I am separating lanes into “left” and “right”. A better approach would be to draw and track ALL lanes seen and then compute the car’s position relative to the lanes.</p><p><strong>Lane Markers must be Present and Explicit<br></strong>Typically, lane markers on highways are pretty explicit. If a lane marker was simply not painted, covered over by something, or if a highway exit showed up, this algorithm wouldn’t know what to do.</p><h3>All in All</h3><p>I had so much fun working on this project.
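</p><p>To make the line-filtering idea above concrete, here is a small, hedged Python sketch of an angle test for the segments that come back from cv2.HoughLinesP. The degree thresholds are illustrative guesses, not tuned values.</p><pre>
```python
import numpy as np

def filter_hough_lines(lines, min_angle_deg=20, max_angle_deg=70):
    """Keep only segments whose slope is plausible for a lane line.

    `lines` is shaped like cv2.HoughLinesP output: (N, 1, 4) rows of
    (x1, y1, x2, y2). Near-horizontal and near-vertical segments are
    discarded; the thresholds are illustrative guesses, not tuned values.
    """
    kept = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        # Angle of the segment, folded into the [0, 90] degree range
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        angle = min(angle, 180 - angle)
        if angle >= min_angle_deg and max_angle_deg >= angle:
            kept.append((x1, y1, x2, y2))
    return np.array(kept)
```
</pre><p>Running the Hough output through a filter like this before taking the slope medians would drop horizontal shadows or road seams before they can skew the medians.</p><p>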
There are far better ways to solve this in 2020, but getting back to basics and seeing how far you can take classic computer vision techniques was a blast!</p><p>As always, feel free to comment with your thoughts, connect with me on <a href="https://www.linkedin.com/in/svanweelden/">LinkedIn</a>, or email me at shaun.t.vanweelden [at] gmail.com</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bad24828dbc0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/swlh/lane-finding-with-computer-vision-techniques-bad24828dbc0">Lane Finding with Computer Vision Techniques</a> was originally published in <a href="https://medium.com/swlh">The Startup</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Support Tool + Christmas Lights = Festive Fun]]></title>
            <link>https://successengineer.medium.com/support-tool-christmas-lights-festive-fun-c0078bfe3f53?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/c0078bfe3f53</guid>
            <category><![CDATA[api]]></category>
            <category><![CDATA[iot]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[support]]></category>
            <category><![CDATA[smart-home]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Sun, 24 Nov 2019 19:22:21 GMT</pubDate>
            <atom:updated>2019-11-24T19:22:21.118Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/379/1*0kWD30CLXJ-FhgTN_Wze_w.png" /></figure><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fz600vlprLrU%3Ffeature%3Doembed&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dz600vlprLrU&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fz600vlprLrU%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0f4dd4f1c68758a0ffc67bd7ea40cfa3/href">https://medium.com/media/0f4dd4f1c68758a0ffc67bd7ea40cfa3/href</a></iframe><p>I don’t know about your team, but at least on the team I’m on, we have a lot of crazy, silly ideas that are very fun to discuss but, for the sanity of those around us, likely shouldn’t ever happen.</p><p>After seeing our sales team celebrating closed deals with a smash of the carnival hammer game, and our CSMs celebrating closed renewals with a gong-type thing, I’ll admit we were feeling a bit left out.</p><p>We started brainstorming what we could do, and our conversation about “how everyone looks cool dancing in strobe lights no matter what” turned into “We should have a fog machine and strobe light for a few seconds every time we close a conversation.”</p><p>Alas, the smoke machine wasn’t meant to be.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/682/1*J0ZEzn1sC8CBUk4lczZCxA.png" /></figure><p>Despite not having a smoke machine, I <strong>REALLY</strong> wanted to do something now that I was brainstorming.</p><p>I arrived at Christmas lights blinking on and off every time we closed a conversation, which seemed both subtle enough to have in the office and timely for the season.</p><h3>System Overview</h3><p>At a high level, we need a few things to happen to make this work:</p><ol><li>A smart plug that can be reached via an API endpoint</li><li>A
support tool that can fire a Webhook when a conversation is closed (most have this, more below)</li><li>Some kind of server or service that can receive the webhook from the support tool and hit the smart plug’s API to toggle the state</li></ol><h3>Smart Plug Selection</h3><p>I do not have a ton of experience in the “Internet of Things” or “Smart Home” space, so this was my first real introduction.</p><p>Every smart plug comes with a paired mobile app, and you control the plug primarily through that app, or you hook it up to Alexa or something like <a href="https://ifttt.com/">IFTTT</a> (If This Then That — a very consumer-ized version of Zapier for better <em>and</em> worse).</p><p>Since no one has built a Support Tool -&gt; Smart Plug service before, there isn’t anything out of the box for us to use, and we’ll need our own way to connect to it.</p><blockquote><strong>You must buy a Smart Plug that you can toggle on and off with your code or service of choice. Determine how you’ll toggle it with your code or service BEFORE deciding which plug to get.</strong></blockquote><p>For myself, I already had a Node.js server up and connected to our Support Tool for some other operational things. The first thing I googled was “smart plug node sdk” and I found the link for <a href="https://www.npmjs.com/package/tplink-cloud-api">https://www.npmjs.com/package/tplink-cloud-api</a></p><p>This one seemed (and was) perfect because I’d be able to use my smart-plug app credentials to authenticate and mimic the requests I’d make from the app on my phone.</p><p>In conjunction with this Node.js library, I bought this Smart Plug on Amazon:</p><p><a href="https://www.amazon.com/Smart-outlet-Google-Simple-Required/dp/B07TXM4MT3">https://www.amazon.com/Smart-outlet-Google-Simple-Required/dp/B07TXM4MT3</a></p><h3>Support Tool Webhook</h3><p>A webhook is a way for a service such as Help Scout or Zendesk to let you know that something just happened on their platform.
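</p><p>To give a feel for how little logic that server piece needs when a webhook ping arrives, here is a hedged sketch in Python (the author’s real implementation is Node.js; <em>set_state</em> is a hypothetical stand-in for your smart plug SDK’s on/off call):</p><pre>
```python
import threading
import time

class LightBlinker:
    """Blink a smart plug once per closed conversation.

    A sketch, not the author's code: `set_state` is a hypothetical
    stand-in for whatever smart-plug SDK call you use (the author
    used the tplink-cloud-api Node.js package). The lock serializes
    overlapping blinks so the lights always land back off.
    """

    def __init__(self, set_state, on_seconds=2.0):
        self._set_state = set_state
        self._on_seconds = on_seconds
        self._lock = threading.Lock()

    def blink(self):
        with self._lock:  # one blink at a time
            self._set_state(True)
            time.sleep(self._on_seconds)
            self._set_state(False)
```
</pre><p>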
When an event occurs within the platform, it triggers a call to be made to a URL you specify, usually pointing to an API endpoint you’ve designed. The call it makes to your URL contains the data you need to take action.</p><p>In our case, we want a call to be triggered when a conversation is closed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/436/1*gjjoRUwhYKjLz3odHZW3HA.png" /><figcaption>A nuance in Help Scout is that an “Agent Reply” can close the conversation too, so merely tracking “Status Updated” isn’t enough.</figcaption></figure><p>Now that we have a webhook set up, our server will get a ping every time a conversation is closed.</p><h3><strong>Putting it Together</strong></h3><p>With a Smart Plug that you can toggle on and off with code, and a way to get notified when a customer conversation is closed, we just need to put these two pieces together and we’ll be set.</p><p>Since I was using Node, I built out a helper class like this following the <a href="https://www.npmjs.com/package/tplink-cloud-api">Node SDK</a> for this Smart Plug:</p><p><a href="https://medium.com/media/19d261789e3801ef5751f819557ef5e7/href">https://medium.com/media/19d261789e3801ef5751f819557ef5e7/href</a></p><p>Next, add a Help Scout Webhook function like this:</p><p><a href="https://medium.com/media/26fd28b44b274985e3380b8f52555bab/href">https://medium.com/media/26fd28b44b274985e3380b8f52555bab/href</a></p><p>My Node.js app simply uses Express.js to handle the routing and all of that:</p><p><a href="https://medium.com/media/1b251b670f91663a5d6078dd9ac9575c/href">https://medium.com/media/1b251b670f91663a5d6078dd9ac9575c/href</a></p><p>Depending on the plug you buy and your preference for hosting code, your mileage may vary.</p><h3>Learnings so
far:</h3><ol><li>Easy-to-work-with APIs for “Smart Home” devices seem to be really hard to find. It makes sense given how consumer-driven and non-technical the market is, but when you want to be technical… it’s hard.</li><li>Similarly, I was really hoping to abstract this project out so it wouldn’t be as technical. I explored services like AWS Lambda, Zapier, and IFTTT to see if I could get to a more “Plug n Play” model for other support leaders wanting to do this. I haven’t had a lot of luck so far, but if you have ideas, I’d love to hear them!</li><li>It’s not the most robust lighting system ever — if it gets two “close” requests at about the same time, it will get off its toggle track and the lights will stay on indefinitely. Having the mobile app handy to turn them off when that happens is nice. :)</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c0078bfe3f53" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Download Instagram Videos Without any App]]></title>
            <link>https://successengineer.medium.com/how-to-download-instagram-videos-without-any-app-55eec068797f?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/55eec068797f</guid>
            <category><![CDATA[growth-hacking]]></category>
            <category><![CDATA[instagram]]></category>
            <category><![CDATA[social-media]]></category>
            <category><![CDATA[growth]]></category>
            <category><![CDATA[social-media-marketing]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Tue, 22 Oct 2019 03:39:14 GMT</pubDate>
            <atom:updated>2019-10-22T03:39:46.771Z</atom:updated>
            <content:encoded><![CDATA[<p>There are hundreds of apps built to help you download your pics and videos from Instagram in exchange for your email address.</p><p>Most people don’t realize this is something you can do by yourself in about 30 seconds with no tools, apps, or money needed.</p><p>There are only <strong>4</strong> steps needed to download public or private videos to your computer.</p><ol><li>With Google Chrome, use the actual Instagram website to get to the page where the image or video you want to download is.<br>If the page is private, you will need to log in to your Instagram account first.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/990/1*1iXuaj_i7viLlirSUQIcsA.png" /></figure><p><strong>2. </strong>Click on the Image or Video you want to download to make it “full screen”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1011/1*JXu7YNLdS93nbm8DsX7PGQ.png" /></figure><p><strong>3.</strong> Right-click directly on the Image or Video, then click <strong>“Inspect”</strong></p><p>This crazy-looking HTML should pop up:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*glrK_3asC5F_4YSDDl2THg.png" /></figure><p><strong>4.</strong> Get the URL to your Image or Video</p><p>You’ll see an HTML bit starting with “&lt;div …” highlighted on the right.</p><h4><strong>For Images:</strong></h4><p>There’s a line right above the highlighted line, also starting with “&lt;div …”.</p><p>Click the “▶” next to the “&lt;div …” that’s above what’s highlighted.</p><p>You’ll see a bunch of URL-looking things; the one we want to grab is the bottom one. It starts with src=”long link”.</p><p>Copy that URL and paste it into a new tab.
You can do whatever you want with the image now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/462/1*WfU1O-_oOOfjzbXS1ZDw2g.gif" /></figure><h4><strong>For Videos:</strong></h4><p>There are two lines right above the highlighted line, also starting with “&lt;div …”.</p><p>Click the “▶” next to the “&lt;div …” that’s two lines above what’s highlighted.</p><p>Keep clicking the “▶” button until you can’t anymore and instead see URLs.</p><p>Find where it says <em>type=”video/mp4&quot; src = “ … “</em> — copy the URL for <em>src</em> and paste it in a new tab.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/438/1*BEIbBc8Kf7FRDbfbcb9zSg.gif" /></figure><p>Click the three dots, then “Download”, and you’re done!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/528/1*Ct2IJsl2pRhaCEHJuwFl_A.gif" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=55eec068797f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Normalize Websites in Salesforce]]></title>
            <link>https://successengineer.medium.com/how-to-normalize-websites-in-salesforce-d8f2ddd6804d?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/d8f2ddd6804d</guid>
            <category><![CDATA[data]]></category>
            <category><![CDATA[excel]]></category>
            <category><![CDATA[operations]]></category>
            <category><![CDATA[awesomeadmin]]></category>
            <category><![CDATA[salesforce]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Sun, 08 Sep 2019 00:53:16 GMT</pubDate>
            <atom:updated>2019-09-08T00:54:13.546Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*14ENmiiDJGhoCrHPDTPv2g.png" /></figure><h3>How to Normalize Websites in Salesforce and Excel</h3><p>I was recently tasked with seeing how many customers on our list also appeared on lists others were providing us.</p><p>No problem, I thought, I’ll just export the report in Salesforce, do a quick VLOOKUP or <a href="https://exceljet.net/formula/range-contains-specific-text">COUNTIF</a> based on the Account Website in Excel, and voilà, I’ll have my list!</p><p>I exported my report and realized it wasn’t going to be quite as easy… we had all types of values in our “Website” field, and the lists to compare against also had different domain formats.</p><p>I needed a way to normalize everything together:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/605/1*DDGicJaJLOUHjLKWB_zMVg.png" /><figcaption>Results of Normalizing Different Domain Formats</figcaption></figure><h4>In this post, I’ll share how I made:</h4><p>1. A <strong>Normalized Website</strong> Formula Field in Salesforce</p><p>2. The <strong>Excel</strong> version of that Formula field so you can normalize other lists</p><p>3. A <strong>Process Builder</strong> flow so the Website field is always normalized, and you don’t need to create a second Website field.</p><h3>The Recipe</h3><p>First things first, I made a list of what had to be normalized:</p><ol><li>Remove https://</li><li>Remove http://</li><li>Remove anything after the first remaining “/” (including the slash itself)</li><li>Remove “www.”</li><li>Make it all lowercase</li></ol><p>The one thing you <em>could</em> argue should also be removed is any additional subdomains.
“blog.google.com” should in theory be “google.com”.</p><p>This specific case becomes quite challenging once you consider domains like “blog.gsuite.google.co.uk”, to the point where I find it’s not worth the complexity.</p><p>Luckily for us, the probability that website fields include sub-domains is very low, and those cases can generally be dealt with manually or written off as an acceptable margin of error.</p><h3>Salesforce Formula for Normalized Website</h3><pre>SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(<br>  LEFT(<br>    LOWER(Website),<br>    IF(FIND(&quot;/&quot;, Website, FIND(&quot;.&quot;, Website)) = 0,<br>      LEN(Website),<br>      FIND(&quot;/&quot;, Website, FIND(&quot;.&quot;, Website)) - 1<br>    )<br>  ),<br>&quot;https://&quot;, &quot;&quot;), &quot;http://&quot;, &quot;&quot;), &quot;www.&quot;, &quot;&quot;)</pre><p>If you copy and paste this into a new text-based formula field on the Account, you’ll have a lovely normalized website field and be all set.</p><h4>Break it down</h4><p>If, like me, you find longer formulas interesting and fun, here’s a breakdown of what’s happening, starting from the middle outwards.</p><p>We start off with <strong>LOWER</strong> to ensure casing never gets in our way of comparing websites.</p><p>Next, we need a way to see if there’s a “/” or things after a “/” at the end of our lower-case domain to remove. We’ll use <strong>LEFT</strong> to get everything to the left of our potential “/”. <strong>LEFT</strong> takes in two arguments: <br>1. Text to potentially cut off<br>2. The Nth letter in the text where it should cut things off</p><p>To get the Nth letter in our website for <strong>LEFT</strong>, we need to see <strong>IF</strong> there’s a “/” somewhere in the domain. To <strong>FIND</strong> if there is a “/”, <br>1. We pass in our search text, “/”<br>2. We pass in the text to look at, Website<br>3. We pass in the first place in that text we want to look for our slash.
In this case, we want to look after the first “.” we see because we need to avoid finding a “/” in “http://” and we can count on there always being a “.” before a final slash because of the “.com” or similar part.</p><p><strong>IF</strong> there is not a “/” in Website (as indicated by <strong>FIND</strong> returning 0), return the entire Website by getting the length of the Website field.<br><strong>IF</strong> there was a “/”, only return the first N letters of the Website field before the “/”</p><p>Now with our <strong>LOWER</strong>-cased text and everything to the <strong>LEFT</strong> of a potential “/” at the end of our website, <strong>SUBSTITUTE</strong> out the “www”, “http”, or “https” part of the domain with an empty string.</p><p>🎉 You made it!</p><h3>Excel Formula for Normalized Website</h3><p>Luckily for us, Salesforce and Excel formulas are quite similar. We can use essentially the exact same logic in Excel as Salesforce.</p><pre>=SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(<br>  LEFT(<br>    LOWER(A2),<br>    IF(ISERROR(FIND(&quot;/&quot;, A2, FIND(&quot;.&quot;, A2))), <br>      LEN(A2),<br>      FIND(&quot;/&quot;, A2, FIND(&quot;.&quot;, A2)) - 1<br>    )<br>  ),<br>&quot;https://&quot;, &quot;&quot;), &quot;http://&quot;, &quot;&quot;), &quot;www.&quot;, &quot;&quot;)</pre><p>If you’re paying attention, the only change here is that we need to swap out our <strong>IF</strong> statement to use <strong>ISERROR</strong> instead of “= 0” because if text can’t be found in Excel, it returns an error instead of just 0.</p><h3>Normalized Website Process Builder</h3><p>If you too like to avoid making unnecessary fields but don’t want to give up this normalization efficiency, you can get a bit creative in Process Builder.</p><h4>Goal:</h4><p>On any changes to an Account’s Website field, re-normalize the value in the Website field. 
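</p><p>If you also massage lists outside of Salesforce and Excel, the same recipe ports to a few lines of Python. This is a sketch following the recipe’s steps (the ordering differs slightly from the formula above, but the output matches for typical values):</p><pre>
```python
def normalize_website(raw):
    """Apply the normalization recipe: lowercase, strip the scheme
    and "www.", and drop everything from the first path slash on."""
    site = raw.strip().lower()
    for prefix in ("https://", "http://"):
        if site.startswith(prefix):
            site = site[len(prefix):]
    if site.startswith("www."):
        site = site[len("www."):]
    return site.split("/", 1)[0]
```
</pre><p>For example, “https://www.Google.com/about” and “google.com” both normalize to “google.com”.</p><p>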
This works well when doing data imports from different vendors.</p><h4>Create a New Process</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/851/1*K1dYunL1i1YFDFu5aCyO3w.png" /></figure><h4>Our Process is for the Account object</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/461/1*JuHaVVnnuS8QkM-A6fse-w.png" /><figcaption>Make sure it fires when a record is edited as well</figcaption></figure><h4>Define our Criteria</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/685/1*3CBNaJGYgfkQCOybBwoVqQ.png" /></figure><p>If our Website field changed, let’s go update it!</p><h4>Define our Action</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*TgRs__1Q7NzDwqqojYqyNQ.png" /></figure><p>Have the Website be set to our formula above with one small twist: we need to explicitly tell it what object to pull our fields from (Account).</p><pre>SUBSTITUTE(SUBSTITUTE(SUBSTITUTE(<br>  LEFT(<br>    LOWER([Account].Website),<br>    IF(FIND(&quot;/&quot;, [Account].Website, FIND(&quot;.&quot;, [Account].Website)) = 0,<br>      LEN([Account].Website),<br>      FIND(&quot;/&quot;, [Account].Website, FIND(&quot;.&quot;, [Account].Website)) - 1<br>    )<br>  ),<br>&quot;https://&quot;, &quot;&quot;), &quot;http://&quot;, &quot;&quot;), &quot;www.&quot;, &quot;&quot;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/687/1*2ZtUa9DYqOT7Eiua_dydKg.png" /></figure><p>Click <strong>Use this Formula</strong> and <strong>Save</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*DmWrTQ3kbAvPcTPgVjUtKA.png" /></figure><h4>Turn it On</h4><p>Click <strong>Activate</strong>, and maybe do a quick data load to set all existing websites to their normalized version, and you’ll be all set!</p><h3>Oh, The Places You’ll Go</h3><p>Hopefully this saves you a bit of time, gives you some inspiration, and helps you get back to making the most of your data instead of fighting domain variances.</p><p>If
you have ideas, I’d love to hear them! Please leave a comment or find me on <a href="https://www.linkedin.com/in/svanweelden/">LinkedIn</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d8f2ddd6804d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Setup Heroku with GoDaddy]]></title>
            <link>https://successengineer.medium.com/how-to-setup-heroku-with-godaddy-d8e936d10849?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/d8e936d10849</guid>
            <category><![CDATA[dns]]></category>
            <category><![CDATA[heroku]]></category>
            <category><![CDATA[custom-domain]]></category>
            <category><![CDATA[godaddy]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Mon, 06 May 2019 00:19:14 GMT</pubDate>
            <atom:updated>2019-09-07T23:00:21.599Z</atom:updated>
            <content:encoded><![CDATA[<p>I don’t know about you, but setting up domain configurations is one of the least fun parts about building and launching a site for me.</p><p>Setting up GoDaddy and Heroku was a particularly frustrating experience and I thought I’d try and help you avoid the same pitfalls.</p><p><strong>As it turns out, it’s impossible to get Heroku and GoDaddy to play together perfectly, but below is as close as you can get.</strong></p><h4><strong>Why connecting GoDaddy and Heroku is challenging:</strong></h4><ol><li>Most platforms give you a CNAME record to point your www subdomain to, and then some IP Addresses for your A records to point to. Heroku only provides CNAME targets, plus ANAME or ALIAS records, which GoDaddy does not support.</li><li>You can add Custom Domains on the Heroku site, but the domain format isn’t clear: should I add the www subdomain, the “naked” domain, or both?</li><li>When you do add a Custom Domain in Heroku, the domain to point to is cut off on the Heroku website.</li><li>SSL requires you to upgrade to a Hobby Plan in Heroku, and then https still needs to be managed somewhat carefully on the DNS side.</li></ol><h4>Initial Goals:</h4><p>Going to:<br>http://yoursite.com<br>https://yoursite.com<br>http://www.yoursite.com<br>https://www.yoursite.com</p><p>All work and end up, without scary warnings, at <a href="https://www.yoursite.com">https://www.yoursite.com</a></p><h4>GoDaddy does NOT Support Heroku</h4><p>Getting “https://yoursite.com” to go to your website without a scary warning is not something that is possible today when using GoDaddy.</p><p>Heroku documents why over here:<br><a href="https://help.heroku.com/NH44MODG/my-root-domain-isn-t-working-what-s-wrong">https://help.heroku.com/NH44MODG/my-root-domain-isn-t-working-what-s-wrong</a></p><p>Thank you <a href="http://twitter.com/Aidlab">@Aidlab</a> for providing the link and context.</p><h4><strong>Final Solution Preview:</strong></h4><p><strong>In
Heroku:</strong><br>1. The ONE domain you need to set up is the WWW version, <a href="http://www.your-site.com">www.your-site.com</a>; do not set up your-site.com<br>2. Pay $7 a month to upgrade to the Hobby plan because your site will load faster and you can have https://</p><p><strong>In GoDaddy:</strong><br>1. Add a CNAME record for “www” pointing to the site Heroku provides<br>2. Set up Domain Forwarding to point to the “https” version of your “www” domain.</p><h3>Step by Step Guide:</h3><h4>In Heroku:</h4><p><strong>Step 1: Add your Custom Domain</strong><br>Use the Heroku CLI to add your domain instead of doing it from the web application. Ensure you add the “www” version instead of just “yourdomain.com”</p><blockquote>heroku domains:add www.yourdomain.com</blockquote><p><strong>Step 2:</strong> <strong>Retrieve the DNS Target</strong><br>If it’s not already there, you can always type the following to get a list of your domains</p><blockquote>heroku domains</blockquote><p><strong>Step 3: Double check SSL is enabled</strong><br>If you’re like me and weren’t on a paid plan when setting up your custom domain, you need to log in to Heroku, go to your Application Settings page, and enable SSL.</p><h4>In GoDaddy:</h4><p><strong>Step 0: Capture Original State</strong><br>Before making changes, always take a screenshot or note of what the settings looked like</p><p><strong>Step 1: Remove Extraneous A Records</strong><br>Remove any A records pointing to other IP Addresses from past projects or the GoDaddy defaults.</p><p><strong>Step 2: Add a CNAME Record</strong><br>Your CNAME record will have<br>Name = www<br>Value = Heroku domain from Step 2 above</p><p><strong>Step 3: Set up Forwarding</strong><br>Heroku mentions setting up ANAME or ALIAS records; GoDaddy does not have those. Instead, that is handled through Forwarding.
I personally spent way too long here trying to figure that one out…</p><p>You want to forward with the “https://” protocol to your “www” sub-domain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/418/1*4etP32-ADBUHE-yzDhsUuQ.png" /></figure><h4>In Conclusion:</h4><p>Hopefully this saved you a few clicks and minutes. Like all domain stuff, there can be some nerve-wracking minutes while things propagate, but in the end everything should make much more sense!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d8e936d10849" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SaaS Support is moving to a “Texting” Model]]></title>
            <link>https://successengineer.medium.com/saas-support-is-moving-to-a-texting-model-c4577a7f2ccc?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/c4577a7f2ccc</guid>
            <category><![CDATA[customer-service]]></category>
            <category><![CDATA[support]]></category>
            <category><![CDATA[intercom]]></category>
            <category><![CDATA[saas]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Tue, 15 Jan 2019 09:31:00 GMT</pubDate>
            <atom:updated>2019-01-25T03:35:56.089Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/0*1nV0fJfCIsJ8WXus.jpg" /></figure><p>Traditionally, there have been 3 main models for doing customer support for a SaaS-based business: Email, Phone, and Chat.</p><p>When I get asked how Engagio supports customers, I have always struggled to put it in one of those buckets.</p><p>At Engagio, we, like many SaaS-based startups, turned to tools like Intercom to offer our customers an easy way to get in touch with us, ask questions, and get support.</p><p>The premise of Intercom (and other similar tools) is pretty simple: customers click the chat bubble, they open it up, and type their question. We get back to them, and everyone is happy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/488/1*51M2yi_aeA8kJ2KnMkKLQQ.gif" /><figcaption>Intercom in Action</figcaption></figure><p>So it seems like we have a chat experience going on; heck, we can even add emojis.</p><p>One issue. There’s a little line below the friendly faces.</p><blockquote>The Team typically replies in a few hours.</blockquote><p>Clearly, we’re not about to have a full chat agent experience, and I probably shouldn’t be expecting a reply in seconds.</p><h3>So it’s not Chat</h3><p>I think it’s fair to say that using support tools like Intercom is not typically a full-blown chat experience with wait times measured in seconds.
At the very least, it does not have to be that model of support.</p><p>Many SaaS support teams, including Engagio’s, simply don’t have the bandwidth to fully support a “chat” experience all day every day, but we want customers to have a seamless experience with us and to communicate quickly back and forth when we can.</p><p>When I reply late at night, you’re not in the app, and you get an email copy of my response that you reply to the next day; we’re now very much emailing about your question more than anything else.</p><p>While some customers may have a few-hour wait period throughout the week, if you send a question over to us at 2pm on Friday when it’s quieter, it’s very possible we’ll reply 18 seconds later with an answer and can help you figure things out in real-time.</p><h3>So it’s not Email either</h3><p>Going back and forth in a matter of seconds with a user experience modeled after iMessage, to the point where “___ is currently typing …” appears alongside buttons for GIFs, doesn’t really feel like email either.</p><p>From both an agent and a customer perspective, it can very much operate like you’re chatting with a friend.
Something tells me that this experience has been a consistent part of Intercom’s success.</p><h3>So if it’s not Chat, and it’s not Email, what is it?</h3><p>It took me a long time to see it, longer than I’d like to admit, but it’s like texting or messaging.</p><blockquote>When you text a friend, you know they can get back to you in 15 seconds, but more than likely, there’s going to be a few-minute to a few-hour delay between responses because they have their own things they are working on and so do you.</blockquote><p>It’s mutually understood that you’ll both do your best to get back to each other as quickly as you can, and the conversation can go on in an ad-hoc way as you both get more context and get things figured out.</p><p>After using Intercom for over two years as we grew our support team with the coincidental “messaging” approach, I’ve been reasonably satisfied with its scalability.</p><p>As you can imagine, handling 50 different text conversations with friends in a given day would be error-prone and hard to stay on top of: things would get lost, and there would be a lot of cherry-picking of the easy stuff. Setting up workflows, teams, and multiple inboxes to handle the increased load is critical, and this is where I feel the tools start to differentiate themselves quite a bit.</p><h3>Is the “Messaging” Support Model here to stay?</h3><p>I honestly think so, especially in the software and startup space. I think it strikes a good balance between timeliness, experience and accessibility, and scalability.</p><p>As society becomes more text- and messenger-driven, I feel we are already accustomed to the experience and willing to take it into our work-life as well.</p><h4>What are your thoughts? Do you have a better name for this support model? Does it deserve its own channel?</h4><h4>Leave a comment or get in touch!</h4><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c4577a7f2ccc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to “Export” Bulk Data Load Jobs data in Salesforce]]></title>
            <link>https://successengineer.medium.com/how-to-export-bulk-data-load-jobs-data-in-salesforce-600de5608f83?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/600de5608f83</guid>
            <category><![CDATA[bulk]]></category>
            <category><![CDATA[admin]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[salesforce]]></category>
            <category><![CDATA[monitor]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Tue, 15 Jan 2019 08:05:31 GMT</pubDate>
            <atom:updated>2019-01-15T08:05:31.506Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/1*UoY5hdVDaL0UkNtUZTHAag.png" /><figcaption>Tips on Being an Awesome Admin</figcaption></figure><p>If you’re a Salesforce Admin, there’s a good chance you’ve checked out the “Bulk Data Load Jobs” admin page to monitor the status of a big batch update or debug why an integration push didn’t go as expected.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/789/1*Ex2mJZTMjsULtvgKwgXKpw.png" /></figure><p>This page is unique: it’s the ONLY place in Salesforce to see:<br> ➤ What Object was Updated<br> ➤ # of Records Processed<br> ➤ # of Records Failed<br> ➤ Job Start Time</p><p>For a lot of debugging, you’ll want to count the total number of records updated in a given time window, or figure out how many records a specific user submitted for updates.</p><p>This is all super helpful data, but there’s a catch.</p><p>The table view has two main issues:<br>1) It only shows ~50 records per page in Lightning and 110 per page in Classic<br>2) Clicking “View more records” on the page only adds 10 records, 10!</p><blockquote>There is no way to export this data, no way to query it with SOQL or Apex, and it’s a total pain in the ass to page through it manually.</blockquote><p>Lucky for you, I was determined to get the full set of our batch jobs into Excel.</p><h3>My Attempts to Solve This</h3><h4>Write a SOQL query</h4><p>My end goal was to filter this data down by the parameters in the table, and then export it. A SOQL query seemed perfect.
I found the <a href="https://developer.salesforce.com/docs/atlas.en-us.object_reference.meta/object_reference/sforce_api_objects_asyncapexjob.htm"><strong>AsyncApexJob</strong></a> object, which seemed to roughly mirror what the table was showing, but it was missing a few critical fields, namely the User who submitted the request and the # of Records updated.</p><h4><strong>Write JavaScript to click the “View More” button programmatically</strong></h4><p>Trying to view the last 3,000 batch jobs by slowly paging through wasn’t going to happen manually.</p><p>I do know how to write some code, though, so I thought: let me try clicking the “View More” button with JavaScript every few seconds. I’ll let it sit for an hour and come back to the full table view. It sounded great.</p><p>It didn’t go very well…</p><ol><li>In Lightning Mode, the table is actually loaded as an iframe on the page holding a VisualForce component, and trying to click on things programmatically in an iframe that reloads itself every time it’s clicked was proving harder than imagined.</li><li>In Classic, the page refreshed itself every time the “View More” or “Next Page” button was clicked, so looping through with JavaScript wasn’t working out well.
This ultimately proved to be key to the solution.</li></ol><h3>The Breakthrough</h3><p>When testing the JavaScript method in Salesforce Classic, I realized the URL had pagination parameters that would appear in the query string only after clicking the “View More” and “Next Page” links.</p><p>If you were to click those a few times, you’d get an ugly link like the one below, but it has two very important components:</p><p>https://na1.salesforce.com/750?<strong>j_id0%3Aj_id6%3Arowsperpage=140</strong>&amp;<strong>j_id0%3Aj_id6%3Alsr=280</strong>&amp;retURL=%2Fui%2Fsetup%2FSetup%3Fsetupid%3DJobs&amp;setupid=AsyncApiJobStatus</p><p><strong>Rows per Page</strong><br>The first parameter, ending in <strong>“rowsperpage=140”</strong>, is how many rows that page should return.</p><p><strong>Page Number<br></strong>The second highlighted parameter, ending in <strong>“lsr=280”</strong>, is how many records it should skip, or what page it’s on.</p><p>In my example, I’m returning 140 batch jobs per page, and I’m on page 3 because I’m skipping the first two pages (140 * 2 = 280 records).</p><h3>Let’s Optimize</h3><p>After some testing, I found you can return a maximum of 1,000 records per page.
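</p><p>The URL pattern is easy to generate programmatically. Here is a minimal sketch in Python; note that the base URL and the j_id0/j_id6 parameter prefixes are from my org and are assumptions, so check the URL your own org produces after paging once:</p>

```python
# Sketch: build paginated "Bulk Data Load Jobs" URLs for Salesforce Classic.
# The base URL and the j_id0/j_id6 prefixes below are examples from my org;
# yours may differ, so copy them from a URL your own org generates.
BASE = "https://na1.salesforce.com/750"
ROWS_PER_PAGE = 1000  # the maximum I observed per page

def job_page_url(page):
    """Return the jobs-list URL for a 1-indexed page of results."""
    skip = ROWS_PER_PAGE * (page - 1)  # the "lsr" param: records to skip
    url = f"{BASE}?j_id0%3Aj_id6%3Arowsperpage={ROWS_PER_PAGE}"
    if skip:
        url += f"&j_id0%3Aj_id6%3Alsr={skip}"
    return url

for page in range(1, 4):
    print(job_page_url(page))
```

<p>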
This means you can click on the below URLs to get the full set of the data and follow the pattern:</p><p><strong>First Page:</strong><br><a href="https://na1.salesforce.com/750?j_id0%3Aj_id6%3Arowsperpage=1000">https://na1.salesforce.com/750?j_id0%3Aj_id6%3A<strong>rowsperpage=1000</strong></a></p><p><strong>Second Page:</strong><br><a href="https://na1.salesforce.com/750?j_id0%3Aj_id6%3Arowsperpage=1000&amp;j_id0%3Aj_id6%3Alsr=1000">https://na1.salesforce.com/750?j_id0%3Aj_id6%3A<strong>rowsperpage=1000</strong>&amp;j_id0%3Aj_id6%3A<strong>lsr=1000</strong></a></p><p><strong>Third Page:</strong><br><a href="https://na1.salesforce.com/750?j_id0%3Aj_id6%3Arowsperpage=1000&amp;j_id0%3Aj_id6%3Alsr=2000">https://na1.salesforce.com/750?j_id0%3Aj_id6%3A<strong>rowsperpage=1000</strong>&amp;j_id0%3Aj_id6%3A<strong>lsr=2000</strong></a></p><p>This <strong>MUST</strong> be done in Salesforce Classic because Lightning doesn’t let you pass in the pagination parameters.</p><h3>Getting into Excel</h3><p>Alright, so this part isn’t quite as sexy, but it’s surprisingly effective.</p><p>Salesforce is rendering the table as, well, an actual HTML table. This means when we copy it, it can be nicely pasted into Excel.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IYVEXapaN119K02oEBdR4A.gif" /></figure><p>Doing this, I was able to make a pivot table of record updates submitted by user, by object, for the last 5,000 batch jobs in about 5 minutes. It was great!</p><h3>Do you have a better way to do this?</h3><p>I’d love to hear it! While the approach above seems a lot better than the alternatives, it’s still not as sexy as I’d like it to be.
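</p><p>One scriptable alternative to the copy/paste step, as a sketch: save the Classic page locally and parse its HTML table with Python’s standard library. The toy table below stands in for the saved page, and the column names are just examples:</p>

```python
# Sketch: parse an HTML table (like the saved Bulk Data Load Jobs page)
# into CSV-ready rows. The toy table and column names below are examples.
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect the text of every <td>/<th> cell, grouped by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

html = """<table>
<tr><th>Object</th><th>Records Processed</th></tr>
<tr><td>Lead</td><td>1000</td></tr>
</table>"""

parser = TableParser()
parser.feed(html)
print(parser.rows)  # [['Object', 'Records Processed'], ['Lead', '1000']]
```

<p>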
If there are some cool SOQL queries you can run to avoid all this completely, I’d love to highlight that here.</p><p>Leave a comment or email me at shaun.t.vanweelden [at] gmail.com</p><h4>Helpful Links:</h4><p><a href="https://help.salesforce.com/articleView?id=monitoring_async_api_jobs.htm&amp;type=5">Monitoring Bulk Data Load Jobs Help Article</a></p><p><a href="https://developer.salesforce.com/docs/atlas.en-us.object_reference.meta/object_reference/sforce_api_objects_asyncapexjob.htm">Salesforce Reference on AsyncApexJob object</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=600de5608f83" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Thoughts on Managing Job Titles in Salesforce]]></title>
            <link>https://successengineer.medium.com/thoughts-on-managing-job-titles-in-salesforce-304e3c5a7654?source=rss-c1a49b7bd016------2</link>
            <guid isPermaLink="false">https://medium.com/p/304e3c5a7654</guid>
            <category><![CDATA[data-management-platform]]></category>
            <category><![CDATA[title]]></category>
            <category><![CDATA[job-titles]]></category>
            <category><![CDATA[data-management]]></category>
            <category><![CDATA[salesforce]]></category>
            <dc:creator><![CDATA[Shaun VanWeelden]]></dc:creator>
            <pubDate>Wed, 26 Dec 2018 20:20:50 GMT</pubDate>
            <atom:updated>2018-12-26T20:20:50.120Z</atom:updated>
            <content:encoded><![CDATA[<p>Data Management and Data Quality are two things I’ve become very passionate about as I’ve dived into Salesforce.</p><p>One of the earliest and most consistent pain points for us was managing job titles. We have more than a handful of “title”-related fields in our Salesforce instance; some examples would be</p><ul><li>The default “Title” field</li><li>Title derived from Outsourced Work (UpWork, etc.)</li><li>2 LinkedIn Title fields from two different apps</li><li>4–5 Data Vendor fields (Clearbit, Leadspace, Reachforce, etc.)</li></ul><h4>The Problem</h4><p>Most people and applications only look at and use the default “Title” field.</p><p>Not touching the Title field when augmenting data seems like a waste of potentially viable data. This weekend, I found over 15% of our leads had a job title in some field, but nothing in the default Title field everyone uses in their reports and our app leverages for segmentation. That sucks.</p><p>On the other hand, I trust some of these sources significantly more than others. If an SDR goes through and populates a bunch of job titles manually, but I overwrite their changes in the next weekly import from a 3rd party, I will have a frustrated team that doesn’t trust the data and doesn’t want to make updates.</p><p>The last problem today is that many platforms and integrations do update the default title field with their debatably trustworthy information. In a perfect world, I’d trust data we manually updated on a lead or contact, or paid someone to look up on LinkedIn, over a tradeshow upload that populates the title field. Forget about other fields, even the title field itself isn’t always trustworthy.</p><h4>What we do Today</h4><p>To partially solve this, we came up with a “Best Title” formula field in Salesforce that is simply a rank-ordered precedence of title fields. If the top trusted field is blank, go to the next.
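</p><p>The precedence logic is easy to sketch outside of Salesforce. Here is a minimal Python sketch; the custom field API names are hypothetical examples, and your own field names will differ:</p>

```python
# Sketch of the "Best Title" rank-ordered precedence logic.
# The custom field API names below are hypothetical examples.
TITLE_FIELDS_BY_PRECEDENCE = [
    "UpWork_Title__c",       # manual / outsourced research (most trusted)
    "Title",                 # the default Salesforce field
    "DiscoverOrg_Title__c",
    "Clearbit_Title__c",
    "Leadspace_Title__c",
]

def best_title(lead):
    """Return the first non-blank title field, walking the precedence list."""
    for field in TITLE_FIELDS_BY_PRECEDENCE:
        value = (lead.get(field) or "").strip()
        if value:
            return value
    return ""

lead = {"Title": "", "Clearbit_Title__c": "VP of Marketing"}
print(best_title(lead))  # falls through to the Clearbit field
```

<p>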
Our key marketing programs and Salesforce flows use this Best Title field for segmentation.</p><p>This solves the second part of the problem above, trusting the data, but leaves the first part of the problem woefully unsolved. Other applications have no idea we have a custom title field, and for better or worse, the field is read-only as it’s a formula field. It’s not even on our page layouts currently; it’s just used by those “in the know” behind the scenes.</p><h4>A Potentially Better Way</h4><p>To solve this problem well, you seemingly would need to be able to establish a “title source” precedence and then capture any update to a title-related field and see if the default “Title” field should be updated or not based on that precedence.</p><p>I think someone (including myself) could build exactly this in Salesforce today.</p><p>We could create a trigger to intercept any lead or contact update that contains a title field we’re tracking. Instead of simply letting the title field update go through, we look at what field updates have been made in the past and see if the update that’s being pushed through should take precedence on the title field. If so, update the “title” field; if not, leave the “title” field alone and either drop the update or let it update the 3rd party title field.</p><p><strong>An Example —</strong></p><p>Our field precedence may look something like this:<br> 1. <em>UpWork / manual field<br> 2. Title field<br> 3. DiscoverOrg field <br> 4. Clearbit field <br> 5. Leadspace field</em></p><p>DiscoverOrg creates a new lead and populates their DiscoverOrg Job Title field with the title it came up with. The trigger would then note that the DiscoverOrg title was populated with the initial value, and then it would populate the default “Title” field with the DiscoverOrg value. <br> <br> 2 Months later, this lead is at a trade show and our marketing team does a list upload and updates this person’s “title” field to be what they found at the trade show.
Because the update was to the default “Title” field, we know that takes higher precedence than the DiscoverOrg field, and so we populate the “Title” field with the tradeshow value.</p><p>The next month, we get Clearbit to augment job titles. They populate their job title field with the value they came up with. Since we know from our behind-the-scenes tracking that this lead has had its “title” populated via the default “title” field in the past, we leave the default “title” field alone.</p><p><strong>Quick notes on Logistics as they are in my head now</strong><br>The behind-the-scenes historical tracking would be done in a hidden textarea field on the lead/contact that is populated and managed by the trigger.</p><p>An app with a VisualForce component to set the field precedence would be helpful so no one needs to write code.</p><p>If someone manually updated the “title” field, that should always take the highest precedence. An easy, though not foolproof, way to distinguish this case from a list upload using the default “title” field is by looking at how many records are included in the change. If it’s just 1, it’s likely a manual change from someone. If more than 1, it’s a bulk import/update.</p><h3>Do you have this problem too?</h3><p>If so, I’d love to hear if the above solution resonates with you and if you’d be interested in such a Salesforce application. If you see any holes, let me know!</p><p>Email: shaun.t.vanweelden [at] gmail.com<br>LinkedIn: <a href="https://www.linkedin.com/in/svanweelden/">Shaun VanWeelden</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=304e3c5a7654" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>