<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Matt on Medium]]></title>
        <description><![CDATA[Stories by Matt on Medium]]></description>
        <link>https://medium.com/@matt_huff?source=rss-6dbf573ee5a6------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*0bvmSAqEtjrdiEqCibjHSg.jpeg</url>
            <title>Stories by Matt on Medium</title>
            <link>https://medium.com/@matt_huff?source=rss-6dbf573ee5a6------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 16:21:29 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@matt_huff/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Vibe Code a Dashboard from Scratch]]></title>
            <link>https://medium.com/dataai/vibe-code-a-dashboard-from-scratch-3eeb3c89ff21?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/3eeb3c89ff21</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[data-science]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Sun, 05 Apr 2026 19:48:36 GMT</pubDate>
            <atom:updated>2026-04-06T12:40:36.546Z</atom:updated>
            <content:encoded><![CDATA[<h3>Vibe Coding a Dashboard</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KXvibbhe-G66IUdx2SPeSg.png" /><figcaption>Generated by Gemini</figcaption></figure><p>Recently, Jeffrey Shaffer posted about an experience he had building a dashboard that was generated through AI prompting. Naturally, I thought that seemed like a fun challenge and something that would be worth experimenting with.</p><p>I’ll be the first to admit that my end output wasn’t nearly as good a final product, but I had some very interesting takeaways from this project. If you are in the data visualization or data storytelling space, I would highly recommend you try this process at least once.</p><p>I relied exclusively on AI tools to develop ALL of the code that was used in my final output. Yes, that caused a headache at many points along the way, but this led to a lot of learning. Some of my key takeaways and insights are below:</p><ol><li>Your AI tool may get stuck and can’t find a solution. When this happens, try a different tool.</li><li>The end-to-end development process is likely going to be far slower than building something directly in a tool like Power BI or Tableau.</li><li>The AI tools have a limited ability to identify and implement visualization best practices (without additional prompting).</li></ol><p>Basically, at this point, you probably don’t need to worry about AI building best-in-class visualizations with minimal effort. As the data viz practitioner, it will continue to be important to understand data viz best practices and design standards.</p><h3>Tech and Data Overview</h3><p>First of all, if you are trying to replicate something like I did, it’s helpful to have the context of the tools I was using to build out my dashboard. When it comes to the data, I used an API feed of San Antonio road pavement quality.
My initial goal was to use traffic data, but I had to pivot as the AI tools struggled to get those feeds set up properly.</p><p>I built the dashboard itself with the Databricks Free Edition and d3.js, hosting it locally. The goal here was to use only free and readily available tools. The Databricks Free Edition is a great way to play around with the Databricks UI while connecting to real data. I’ve also been hearing a lot about d3.js dashboards and wanted to experiment with them. Effectively, two birds, one stone.</p><p>The AI tools that I used were Claude in a VERY limited capacity and Google’s Gemini. For Claude, I just used the free version with limited tokens. Gemini was the powerhouse behind getting the process built out and was limited to Gemini 3 Fast. I suspect I could have benefited from using a more advanced version of Gemini, but again, the goal was to keep this as accessible to a broad-reaching audience as possible.</p><h3>Setting up the API call</h3><p>This part was executed within Databricks. The goal here is to set up the initial data feed from the San Antonio pavement data API endpoint. Here is the prompt I used to set up the API feed.</p><blockquote>My goal is to create an end to end data visualization tool. This tool will visualize San Antonio pavement data patterns using geospatial maps and trend lines over time. The data must be sourced via API. The output visualization must be deployed via a D3 web dashboard.</blockquote><blockquote>Generate code that can be used within the Databricks Free Edition to collect the initial data via the API connection to the Texas department of transportation. Assume I have limited experience connecting to API endpoints.</blockquote><p>After repeated prompting, I kept getting errors with the API call. Many of these errors were due to ill-formatted queries against the API feed. Ultimately, this is where I pivoted to Claude to get the starting code seed.
Claude was able to build code that connected on the first try, and then I refined that code in Gemini after I ran out of Claude tokens.</p><p>In the Databricks Free Edition there are limitations to the way that data feeds can be structured for downstream use. Ultimately, after A LOT of prompting and iteration, I was able to generate the following code, which sets up the initial API call and then saves a JSON data feed. Below is the code (you’ll notice the references to traffic data from my initial research that I pivoted away from):</p><pre>import pandas as pd<br>import json<br>import base64<br>from pyspark.sql.functions import col, lit, when<br><br># 1. Convert to Spark<br>df = spark.createDataFrame(traffic_data)<br><br># 2. DATA REPAIR: Using Backticks to resolve the column name error<br># Spark requires ` ` when a column name contains a dot .<br>target_col = &quot;`attributes.PCI`&quot;<br><br>if &quot;attributes.PCI&quot; in df.columns:<br>    # Cast to integer for D3 math<br>    df_clean = df.withColumn(&quot;pci_score&quot;, col(target_col).cast(&quot;int&quot;))<br>else:<br>    # Fallback if the column is truly missing<br>    df_clean = df.withColumn(&quot;pci_score&quot;, lit(0))<br><br># 3. Create &#39;Health&#39; categories for high-contrast D3 colors<br>df_clean = df_clean.withColumn(&quot;health&quot;, <br>    when(col(&quot;pci_score&quot;) &gt;= 85, &quot;Excellent&quot;)<br>    .when(col(&quot;pci_score&quot;) &gt;= 70, &quot;Good&quot;)<br>    .when(col(&quot;pci_score&quot;) &gt;= 50, &quot;Fair&quot;)<br>    .when(col(&quot;pci_score&quot;) &gt; 0, &quot;Poor&quot;)<br>    .otherwise(&quot;Unknown&quot;)<br>)<br><br># 4. Final Prep (Ensuring we keep geometry for the map!)<br>final_pd = df_clean.toPandas()<br><br># 5.
GENERATE DOWNLOAD BUTTON<br>json_string = final_pd.to_json(orient=&#39;records&#39;)<br>b64 = base64.b64encode(json_string.encode()).decode()<br>payload = f&quot;data:application/json;base64,{b64}&quot;<br><br>html_button = f&#39;&#39;&#39;<br>&lt;div style=&quot;background-color: #121212; padding: 20px; border: 1px solid #00ff41; border-radius: 8px; text-align: center; font-family: sans-serif;&quot;&gt;<br>    &lt;h3 style=&quot;color: #00ff41; margin-top: 0;&quot;&gt;Step 1: PCI Data Extracted&lt;/h3&gt;<br>    &lt;p style=&quot;color: #e0e0e0;&quot;&gt;Resolved Column Conflict using Backticks.&lt;/p&gt;<br>    &lt;a href=&quot;{payload}&quot; download=&quot;pavement_data.json&quot; <br>       style=&quot;background-color: #00ff41; color: #000; padding: 12px 25px; <br>              text-decoration: none; border-radius: 4px; font-weight: bold; <br>              display: inline-block; margin-top: 10px;&quot;&gt;<br>       DOWNLOAD REPAIRED JSON<br>    &lt;/a&gt;<br>&lt;/div&gt;<br>&#39;&#39;&#39;<br>displayHTML(html_button)</pre><h3>Building the Dashboard</h3><p>Now comes the really fun part. Unfortunately, I was out of Claude tokens at this point because I used them all to get the API feed stood up… Setting up the d3.js dashboard was a bit tricky because of the general setup that is needed across multiple files.</p><p>To set up a d3.js dashboard you need a dedicated folder that houses 4 different files: an index file, plus files that tell your web app what data to reference, how to render the visuals, and how to style the overall dashboard. Coordinating the code across each of these files will lead to the final dashboard output.
However, as with all code, the different files could also be the source of a lot of code discrepancies.</p><p>Below is the code I used to generate my final dashboard:</p><h4>index.html</h4><pre>&lt;!DOCTYPE html&gt;<br>&lt;html lang=&quot;en&quot;&gt;<br>&lt;head&gt;<br>    &lt;meta charset=&quot;UTF-8&quot;&gt;<br>    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt;<br>    &lt;title&gt;San Antonio | Pavement Infrastructure Analysis&lt;/title&gt;<br>    <br>    &lt;script src=&quot;https://d3js.org/d3.v7.min.js&quot;&gt;&lt;/script&gt;<br>    <br>    &lt;link href=&quot;https://fonts.googleapis.com/css2?family=Inter:wght@300;400;600&amp;family=JetBrains+Mono&amp;display=swap&quot; rel=&quot;stylesheet&quot;&gt;<br>    <br>    &lt;link rel=&quot;stylesheet&quot; href=&quot;style.css&quot;&gt;<br>&lt;/head&gt;<br>&lt;body&gt;<br><br>    &lt;header class=&quot;app-header&quot;&gt;<br>        &lt;div class=&quot;header-content&quot;&gt;<br>            &lt;h1&gt;San Antonio | &lt;span&gt;Pavement Infrastructure Analysis&lt;/span&gt;&lt;/h1&gt;<br>            &lt;div class=&quot;status-badge&quot;&gt;<br>                &lt;span class=&quot;indicator&quot;&gt;&lt;/span&gt; Real-time Condition Monitor<br>            &lt;/div&gt;<br>        &lt;/div&gt;<br>    &lt;/header&gt;<br><br>    &lt;main class=&quot;dashboard-container&quot;&gt;<br>        <br>        &lt;aside class=&quot;sidebar&quot;&gt;<br>            &lt;section class=&quot;control-group&quot;&gt;<br>                &lt;h2&gt;Controls&lt;/h2&gt;<br>                &lt;p class=&quot;instruction&quot;&gt;Click legend items to toggle visibility&lt;/p&gt;<br>                <br>                &lt;div id=&quot;map-legend&quot; class=&quot;legend-container&quot;&gt;<br>                    &lt;/div&gt;<br>            &lt;/section&gt;<br><br>            &lt;section class=&quot;kpi-group&quot;&gt;<br>                &lt;div class=&quot;kpi-card&quot;&gt;<br>              
      &lt;label&gt;Visible Segments&lt;/label&gt;<br>                    &lt;div id=&quot;kpi-miles&quot; class=&quot;kpi-value&quot;&gt;0&lt;/div&gt;<br>                &lt;/div&gt;<br>                &lt;div class=&quot;kpi-card&quot;&gt;<br>                    &lt;label&gt;Predominant Health&lt;/label&gt;<br>                    &lt;div id=&quot;kpi-type&quot; class=&quot;kpi-value excellent&quot;&gt;--&lt;/div&gt;<br>                &lt;/div&gt;<br>            &lt;/section&gt;<br>        &lt;/aside&gt;<br><br>        &lt;div class=&quot;viz-content&quot;&gt;<br>            <br>            &lt;div class=&quot;viz-card map-card&quot;&gt;<br>                &lt;div class=&quot;card-header&quot;&gt;<br>                    &lt;h3&gt;Spatial Distribution: Road Health&lt;/h3&gt;<br>                &lt;/div&gt;<br>                &lt;div id=&quot;map-viz&quot; class=&quot;svg-container&quot;&gt;<br>                    &lt;div class=&quot;loader&quot;&gt;Loading Geospatial Data...&lt;/div&gt;<br>                &lt;/div&gt;<br>            &lt;/div&gt;<br><br>            &lt;div class=&quot;viz-card trend-card&quot;&gt;<br>                &lt;div class=&quot;card-header&quot;&gt;<br>                    &lt;h3&gt;PCI Score Distribution (Pavement Condition Index)&lt;/h3&gt;<br>                &lt;/div&gt;<br>                &lt;div id=&quot;trend-viz&quot; class=&quot;svg-container&quot;&gt;<br>                    &lt;/div&gt;<br>            &lt;/div&gt;<br><br>        &lt;/div&gt;<br>    &lt;/main&gt;<br><br>    &lt;script src=&quot;dashboard.js&quot;&gt;&lt;/script&gt;<br>&lt;/body&gt;<br>&lt;/html&gt;</pre><h4>style.css</h4><pre>/* --- 1. 
GLOBAL RESET &amp; FONTS --- */<br>:root {<br>    --bg-dark: #0f0f0f;<br>    --card-bg: #1a1a1a;<br>    --border-color: #333;<br>    --text-main: #e0e0e0;<br>    --text-dim: #888;<br>    <br>    /* Condition Colors matched to D3 */<br>    --excellent: #00ff41;<br>    --good: #00e5ff;<br>    --fair: #ffff00;<br>    --poor: #ff4500;<br>    --unknown: #444444;<br>}<br><br>body {<br>    background-color: var(--bg-dark);<br>    color: var(--text-main);<br>    font-family: &#39;Inter&#39;, -apple-system, BlinkMacSystemFont, sans-serif;<br>    margin: 0;<br>    display: flex;<br>    flex-direction: column;<br>    height: 100vh;<br>    overflow: hidden;<br>}<br><br>/* --- 2. HEADER --- */<br>.app-header {<br>    height: 60px;<br>    padding: 0 24px;<br>    display: flex;<br>    align-items: center;<br>    background: #000;<br>    border-bottom: 1px solid var(--border-color);<br>}<br><br>.app-header h1 {<br>    font-size: 1.2rem;<br>    font-weight: 600;<br>    letter-spacing: -0.5px;<br>}<br><br>.app-header span {<br>    color: var(--text-dim);<br>    font-weight: 300;<br>}<br><br>/* --- 3. LAYOUT ENGINE --- */<br>.dashboard-container {<br>    display: grid;<br>    grid-template-columns: 320px 1fr; /* Fixed Sidebar, Fluid Viz */<br>    gap: 20px;<br>    padding: 20px;<br>    flex-grow: 1;<br>    box-sizing: border-box;<br>}<br><br>/* Sidebar Styling */<br>.sidebar {<br>    display: flex;<br>    flex-direction: column;<br>    gap: 20px;<br>}<br><br>.control-group, .kpi-group {<br>    background: var(--card-bg);<br>    border: 1px solid var(--border-color);<br>    border-radius: 12px;<br>    padding: 20px;<br>}<br><br>/* Viz Content Stacking (Map on top of Trend) */<br>.viz-content {<br>    display: flex;<br>    flex-direction: column;<br>    gap: 20px;<br>    min-width: 0; /* Prevents D3 from overflowing container */<br>}<br><br>/* --- 4. 
CARDS &amp; VISUALS --- */<br>.viz-card {<br>    flex: 1;<br>    background: var(--card-bg);<br>    border: 1px solid var(--border-color);<br>    border-radius: 12px;<br>    padding: 20px;<br>    display: flex;<br>    flex-direction: column;<br>    position: relative;<br>}<br><br>.card-header h3 {<br>    margin: 0 0 15px 0;<br>    font-size: 0.9rem;<br>    text-transform: uppercase;<br>    letter-spacing: 1px;<br>    color: var(--text-dim);<br>}<br><br>.svg-container {<br>    flex-grow: 1;<br>    width: 100%;<br>    min-height: 0;<br>}<br><br>/* --- 5. LEGEND &amp; CONTROLS --- */<br>.legend-container {<br>    margin-top: 15px;<br>}<br><br>.legend-item {<br>    display: flex;<br>    align-items: center;<br>    padding: 6px 8px;<br>    border-radius: 6px;<br>    transition: background 0.2s;<br>}<br><br>.legend-item:hover {<br>    background: rgba(255, 255, 255, 0.05);<br>}<br><br>.instruction {<br>    font-size: 0.8rem;<br>    color: var(--text-dim);<br>    margin-bottom: 10px;<br>}<br><br>/* --- 6. KPI STYLING --- */<br>.kpi-card {<br>    margin-bottom: 15px;<br>}<br><br>.kpi-card:last-child { margin-bottom: 0; }<br><br>.kpi-card label {<br>    display: block;<br>    font-size: 0.75rem;<br>    color: var(--text-dim);<br>    text-transform: uppercase;<br>    margin-bottom: 5px;<br>}<br><br>.kpi-value {<br>    font-family: &#39;JetBrains Mono&#39;, monospace;<br>    font-size: 1.8rem;<br>    font-weight: 600;<br>}<br><br>/* Dynamic KPI Colors */<br>.kpi-value.excellent { color: var(--excellent); text-shadow: 0 0 15px rgba(0,255,65,0.2); }<br>.kpi-value.good { color: var(--good); }<br>.kpi-value.fair { color: var(--fair); }<br>.kpi-value.poor { color: var(--poor); }<br>.kpi-value.unknown { color: var(--unknown); }<br><br>/* --- 7. 
D3 SPECIFIC OVERRIDES --- */<br>path.road-path {<br>    transition: stroke-width 0.2s;<br>}<br><br>path.road-path:hover {<br>    stroke-width: 4px !important;<br>    cursor: crosshair;<br>}<br><br>.domain, .tick line {<br>    stroke: #444; /* Axis colors */<br>}<br><br>.tick text {<br>    fill: #888;<br>    font-size: 10px;<br>}</pre><h4>dashboard.js</h4><pre>// 1. GLOBAL STATE<br>let globalData = [];<br>let filteredData = [];<br>const colorScheme = {<br>    &quot;Excellent&quot;: &quot;#00ff41&quot;, <br>    &quot;Good&quot;: &quot;#00e5ff&quot;,      <br>    &quot;Fair&quot;: &quot;#ffff00&quot;,      <br>    &quot;Poor&quot;: &quot;#ff4500&quot;,      <br>    &quot;Unknown&quot;: &quot;#444444&quot;<br>};<br><br>// Initialize with ALL filters active<br>let activeFilters = new Set([&quot;Excellent&quot;, &quot;Good&quot;, &quot;Fair&quot;, &quot;Poor&quot;, &quot;Unknown&quot;]);<br><br>// 2. DATA LOADING ENGINE<br>d3.json(&#39;pavement_data.json&#39;).then(data =&gt; {<br>    // This will bring back your sample observation for debugging<br>    console.log(&quot;Sample Observation:&quot;, data[0]);<br><br>    globalData = data.map(d =&gt; {<br>        // Convert Unix Timestamp (ms) to Year for the trend chart<br>        const rawDate = d[&#39;attributes.InstallDate&#39;];<br>        const yearInt = rawDate ? new Date(rawDate).getFullYear() : 0;<br><br>        return {<br>            ...d,<br>            health: d.health || &quot;Unknown&quot;,<br>            pci: d.pci_score || 0,<br>            year: yearInt,<br>            geometry: { <br>                type: &quot;LineString&quot;, <br>                coordinates: d[&#39;geometry.paths&#39;] ? d[&#39;geometry.paths&#39;][0] : [] <br>            }<br>        };<br>    }).filter(d =&gt; d.geometry.coordinates.length &gt; 0); // Remove empty geometries<br><br>    filteredData = [...globalData];<br>    initDashboard();<br>}).catch(err =&gt; console.error(&quot;Loading Error:&quot;, err));<br><br>// 3. 
INITIALIZATION<br>function initDashboard() {<br>    renderLegend();<br>    updateKPIs();<br>    <br>    // Give CSS 100ms to calculate the grid before drawing SVGs<br>    setTimeout(() =&gt; {<br>        renderMap();<br>        renderTrend();<br>    }, 100);<br>}<br><br>// 4. MAP RENDER (Using Auto-Fit)<br>function renderMap() {<br>    const container = d3.select(&quot;#map-viz&quot;);<br>    const width = container.node().clientWidth;<br>    const height = container.node().clientHeight;<br>    <br>    const svg = container.html(&quot;&quot;).append(&quot;svg&quot;)<br>        .attr(&quot;width&quot;, width)<br>        .attr(&quot;height&quot;, height);<br>    <br>    const g = svg.append(&quot;g&quot;);<br><br>    // 1. Create a FeatureCollection for D3 to measure<br>    const geojson = {<br>        type: &quot;FeatureCollection&quot;,<br>        features: filteredData.map(d =&gt; ({<br>            type: &quot;Feature&quot;,<br>            geometry: d.geometry<br>        }))<br>    };<br><br>    // 2. AUTO-SCALE: This forces the data to fill the box<br>    const projection = d3.geoMercator()<br>        .fitSize([width, height], geojson); // &lt;--- THIS IS THE MAGIC LINE<br><br>    const pathGen = d3.geoPath().projection(projection);<br><br>    g.selectAll(&quot;path&quot;)<br>        .data(filteredData)<br>        .enter().append(&quot;path&quot;)<br>        .attr(&quot;d&quot;, d =&gt; pathGen(d.geometry))<br>        .attr(&quot;stroke&quot;, d =&gt; colorScheme[d.health] || &quot;#888&quot;)<br>        .attr(&quot;stroke-width&quot;, 1.5)<br>        .attr(&quot;fill&quot;, &quot;none&quot;)<br>        .style(&quot;opacity&quot;, 0.8);<br><br>    // Add Zoom<br>    svg.call(d3.zoom().on(&quot;zoom&quot;, (event) =&gt; g.attr(&quot;transform&quot;, event.transform)));<br>}<br><br>// 5. 
LEGEND &amp; FILTERING<br>function renderLegend() {<br>    const legend = d3.select(&quot;#map-legend&quot;).html(&quot;&quot;);<br>    <br>    Object.keys(colorScheme).forEach(category =&gt; {<br>        const item = legend.append(&quot;div&quot;)<br>            .attr(&quot;class&quot;, &quot;legend-item&quot;)<br>            .style(&quot;display&quot;, &quot;flex&quot;)<br>            .style(&quot;align-items&quot;, &quot;center&quot;)<br>            .style(&quot;margin-bottom&quot;, &quot;8px&quot;)<br>            .style(&quot;cursor&quot;, &quot;pointer&quot;)<br>            .style(&quot;opacity&quot;, activeFilters.has(category) ? 1 : 0.3)<br>            .on(&quot;click&quot;, function() {<br>                if (activeFilters.has(category)) activeFilters.delete(category);<br>                else activeFilters.add(category);<br>                <br>                filteredData = globalData.filter(d =&gt; activeFilters.has(d.health));<br>                d3.select(this).style(&quot;opacity&quot;, activeFilters.has(category) ? 1 : 0.3);<br>                <br>                renderMap();<br>                renderTrend();<br>                updateKPIs();<br>            });<br><br>        item.append(&quot;span&quot;)<br>            .style(&quot;width&quot;, &quot;12px&quot;).style(&quot;height&quot;, &quot;12px&quot;)<br>            .style(&quot;background-color&quot;, colorScheme[category])<br>            .style(&quot;margin-right&quot;, &quot;10px&quot;).style(&quot;border-radius&quot;, &quot;2px&quot;);<br><br>        item.append(&quot;label&quot;).text(category).style(&quot;cursor&quot;, &quot;pointer&quot;).style(&quot;color&quot;, &quot;#eee&quot;);<br>    });<br>}<br><br>// 6. 
TREND CHART (PCI Distribution)<br>function renderTrend() {<br>    const container = d3.select(&quot;#trend-viz&quot;);<br>    const width = container.node().clientWidth - 80;<br>    const height = Math.max(container.node().clientHeight - 80, 150);<br><br>    const svg = container.html(&quot;&quot;).append(&quot;svg&quot;)<br>        .attr(&quot;width&quot;, &quot;100%&quot;)<br>        .attr(&quot;height&quot;, &quot;100%&quot;)<br>        .append(&quot;g&quot;)<br>        .attr(&quot;transform&quot;, `translate(50, 20)`);<br><br>    // 1. Define X Scale<br>    const x = d3.scaleLinear().domain([0, 100]).range([0, width]);<br><br>    // 2. Generate Bins<br>    const histogram = d3.bin()<br>        .value(d =&gt; d.pci)<br>        .domain(x.domain())<br>        .thresholds(10);<br><br>    const bins = histogram(filteredData);<br><br>    // 3. Define Y Scale <br>    const y = d3.scaleLinear()<br>        .domain([0, d3.max(bins, d =&gt; d.length) || 10])<br>        .range([height, 0]);<br><br>    // 4. 
Draw Grouped/Stacked Bars<br>    bins.forEach(bin =&gt; {<br>        // Group the data inside this specific bin by health status<br>        const counts = d3.rollup(bin, v =&gt; v.length, d =&gt; d.health);<br>        let yOffset = 0;<br><br>        // Draw a sub-rect for every health type present in this PCI range<br>        Array.from(counts).forEach(([health, count]) =&gt; {<br>            const barHeight = height - y(count);<br>            <br>            svg.append(&quot;rect&quot;)<br>                .attr(&quot;x&quot;, x(bin.x0) + 1)<br>                .attr(&quot;width&quot;, Math.max(0, x(bin.x1) - x(bin.x0) - 1))<br>                .attr(&quot;y&quot;, y(count) - yOffset) <br>                .attr(&quot;height&quot;, barHeight)<br>                .attr(&quot;fill&quot;, colorScheme[health] || &quot;#888&quot;) // &lt;--- Syncs with Map!<br>                .attr(&quot;opacity&quot;, 0.8)<br>                .append(&quot;title&quot;) // Tooltip for clarity<br>                .text(`${health}: ${count} segments`);<br><br>            yOffset += barHeight; // Stack them if multiple types fall in one bin<br>        });<br>    });<br><br>    // Axes<br>    svg.append(&quot;g&quot;).attr(&quot;transform&quot;, `translate(0,${height})`).call(d3.axisBottom(x));<br>    svg.append(&quot;g&quot;).call(d3.axisLeft(y).ticks(5));<br>}<br><br>// 7. KPIs<br>function updateKPIs() {<br>    const total = filteredData.length;<br>    <br>    // Group and find the most common health status<br>    const counts = d3.rollup(filteredData, v =&gt; v.length, d =&gt; d.health);<br>    const topEntry = d3.greatest(counts, d =&gt; d[1]);<br>    <br>    // Safety check: if no data, default to N/A / unknown<br>    const topHealth = topEntry ? 
topEntry[0] : &#39;Unknown&#39;;<br>    const healthClass = topHealth.toLowerCase();<br><br>    // Update the HTML<br>    d3.select(&#39;#kpi-miles&#39;).text(total.toLocaleString());<br>    <br>    d3.select(&#39;#kpi-type&#39;)<br>        .text(topHealth)<br>        // This removes old classes and adds the new one for the color sync<br>        .attr(&quot;class&quot;, `kpi-value ${healthClass}`);<br>}<br><br></pre><p>After the code was created and synced across each of the files, I opened a command-line terminal on my laptop to set up a locally hosted version of the dashboard. In that terminal I navigated to the folder containing these 4 files and ran the following command (<em>python -m http.server 8000</em>) to stand up a locally hosted web app. Then I opened a browser and entered <em>localhost:8000</em> to load the dashboard.</p><h3>Conclusion</h3><p>At the end of the project, I ended up with the following dashboard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZVEoAfDHP9DneQVSLMa6JA.png" /></figure><p>Keep in mind, all of this was built by AI directly. If I were building this on my own for a real work project, this would be far from the desired final output. However, this approach does show a ton of promise and is something I’ll be exploring further. I also relied entirely on Gemini to derive the design and all relevant KPIs.</p><p>At this point, the development process was significantly slower and much less iterative than what I’m used to in Tableau. If I needed to get something out to stakeholders fast, this isn’t the approach I would use, but building a generalizable framework on top of this is very appealing.</p><p>If you’re even remotely curious about doing something like this, let me know.
I’m curious to see how other people approach a similar problem and to hear about your successes.</p><p>— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —</p><p><em>If this post was helpful, leave me a comment here or connect with me on </em><a href="https://www.linkedin.com/in/mhuff04/"><em>LinkedIn</em></a><em>.</em></p><p><em>Note that this post was not written by AI (except for the code… obviously).</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3eeb3c89ff21" width="1" height="1" alt=""><hr><p><a href="https://medium.com/dataai/vibe-code-a-dashboard-from-scratch-3eeb3c89ff21">Vibe Code a Dashboard from Scratch</a> was originally published in <a href="https://medium.com/dataai">DataAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Let’s get social at conferences!]]></title>
            <link>https://medium.com/@matt_huff/lets-get-social-at-conferences-daed91570710?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/daed91570710</guid>
            <category><![CDATA[community]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tableau]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Tue, 24 Mar 2026 20:56:01 GMT</pubDate>
            <atom:updated>2026-03-24T20:56:01.441Z</atom:updated>
            <content:encoded><![CDATA[<p>It’s that time of year again. You guessed it… it’s conference season. With so many to choose from, it can be hard to know which ones to attend. Not only do you need to figure out which, if any, you should attend, but once you decide you need to know how to get the most out of your experience.</p><p>The Tableau Conference has been one I have attended multiple times. Obviously, this is a vendor-sponsored conference, and vendor-sponsored conferences are going to be far from unbiased reflections of reality. However, exposure at this conference has opened many doors I wouldn’t have expected.</p><p>In this post, I’ll cover my conference experiences within the context of the Tableau Conference while outlining ways to get the most out of your experiences regardless of which conferences you attend. There is basically a little bit for everyone in this post.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4ByWSXY6H9e4w76BtNpEdQ.jpeg" /><figcaption>Matt Huff and Blake Feiza</figcaption></figure><h3>My Tableau conference experiences</h3><p>I attended my first Tableau Conference in 2019. I’ve always been excited about the prospects of what Tableau could offer as a tool, but had never really had any formal training. My experience was all learned by doing and was limited at best. I had attended a couple of user group meetings in Cincinnati, but again, I was far from a regular. At my first Tableau Conference, I sat in on as many training sessions as I could. If something was remotely interesting and I had nothing better planned, I was there. This was a great opportunity to learn a little bit about a lot of topics.</p><p>My first conference, most importantly, was where I learned about community projects. Makeover Monday was then, and still is now, one of the largest community projects for data visualization anywhere.
After attending the conference, I started participating in Makeover Monday, though I wasn’t sharing my work publicly.</p><p>That first year taught me what data visualization was and introduced me to the tools and resources available to the community.</p><h3>My second (and all COVID) conferences</h3><p>At these conferences, I still heavily indexed my time toward participating in as many sessions as I possibly could. As I grew in my career, though, I started to find that the technical sessions just weren’t enough for what I needed. The technical sessions were still fun and a great learning opportunity, but I found that industry sessions, or stories from representatives at other companies about how they used Tableau, became much more powerful. In these sessions, I learned how to build best-in-class templates or visualization guides.</p><p>These were incredibly helpful as a point of reference, but ultimately were still pretty limited in scope. That being said, I still reference relevant presentations today and frequently apply key learnings to my workflows and to strategic recommendations for my leadership.</p><h3>Tableau Conference 2025</h3><p>This conference was a complete pivot from my experiences at Tableau Conferences past. In late 2024, I started getting much more involved in the broader Tableau community.
I started leading the Retail and Consumer Goods Tableau User Group and was given a chance to speak about our design process for analytics within HEB Retail Media.</p><p>In all honesty, I wasn’t sure how beneficial it would be to attend this particular conference, given that my work had shifted towards more data-science-heavy workflows and because I was already “good enough” with Tableau to more than meet the expectations of my current role (I lead a team of analysts and data scientists and rarely have bandwidth to build any practical reporting for the organization anymore).</p><p>However, because I was speaking and leading a user group, I ended up having a ton of opportunities to connect with people both one-on-one and in broader group settings. I ended up helping run meetups for retail professionals and presenting. Not only that, but I had countless opportunities to meet other industry professionals with a strong data background.</p><p>The best part was that, since I had started to engage with the broader Tableau user community, I had begun to build genuine friendships with other professionals with similar interests. This is probably the biggest benefit of attending a conference: if done right, you can start to build meaningful relationships.</p><h3>How can you get the most out of conferences?</h3><p>The best way to approach conferences is to be social. As most analysts are introverted, this can be a big ask, but it’s totally worth it.</p><p>Before the conference, research who is presenting. Connect with them on socials and let them know you’re excited for their presentation. Start connecting early and, importantly, be genuine. You shouldn’t be expecting anything from these individuals, but this is a great way to start building those connections.</p><p>Next, when you are at the conference, attend the sessions and go up to the presenters afterwards (if it’s appropriate for the conference).
Ask meaningful questions about their presentations and introduce yourself.</p><p>Then, shortly after the conference ends, send the people you met at presentations and throughout your daily interactions a note on a social network like LinkedIn. Again, your intent isn’t to look for a new job or to get anything from these people. You’re just trying to build a genuine connection.</p><p>The other recommendation I have is to go where the people are and talk with people in common gathering areas. At Tableau Conference there is a data village where vendors present the capabilities of their tools. There are also inspirational meetup spaces, like an area with data visualization submissions where you can get inspired by Tableau Public, and broader meetup breakout areas. Go to these places and spend time there. You will meet some great people. At Tableau Conference this also happens to be where a lot of community thought leaders tend to congregate, so don’t underestimate the value of spending time there.</p><h3>Summary</h3><p>Conferences aren’t the only way to get connected with like-minded peers. However, they can be an excellent way to build meaningful connections, get inspired, solve problems you didn’t know you had, and genuinely have fun at the same time. My biggest piece of advice for anyone attending any conference is to not be afraid to get social.</p><p>— — — — — — — — — — — — — — — — — — — — — — — — — — — — —</p><p>If you found this helpful, let me know. Add a comment telling me what resonated. If you have other advice for conferences, share that in the comments too.</p><p>Feel free to connect with me on <a href="https://www.linkedin.com/in/mhuff04/">LinkedIn </a>or follow my work on <a href="https://public.tableau.com/app/profile/matt.huff/vizzes">Tableau Public</a> as well.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Telling a BETTER data story]]></title>
            <link>https://medium.com/retail-tableau-user-group/telling-a-better-data-story-319b3e9cfecc?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/319b3e9cfecc</guid>
            <category><![CDATA[business]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tableau]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Tue, 10 Mar 2026 14:31:10 GMT</pubDate>
            <atom:updated>2026-03-10T21:15:27.862Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OC9xmZzjTaNy9X40zULrtA.png" /><figcaption>Gemini</figcaption></figure><blockquote>Dashboards are a horrible place to tell a story.</blockquote><blockquote>- Steve Wexler</blockquote><p>Think about that for a second… You don’t look at your watch or speedometer and expect to get a rich body of insights, do you? A dashboard in its simplest form is intended to communicate data quickly. Dashboards can do the following:</p><ul><li>Communicate changing performance metrics</li><li>Create frameworks for communicating complex data</li></ul><p>Dashboards are intended to quickly and easily share information. They don’t have to communicate just descriptive performance data either. They can communicate predictive or modeled data as well.</p><p>However, they don’t tell a story. You shouldn’t assume they would.</p><p>The next logical question is: how do we then blend a dashboard with a story? Enter the analyst, data scientist, data engineer, or other data professional. This post will guide you through some tactical best practices you can implement to better tell data stories.</p><h3>Why the design matters</h3><p>As data practitioners engage in more technical efforts (think advanced data science), it can become easy to discount the importance of dashboard design. That disregard ultimately comes at the expense of your ability to communicate and influence others with your work.</p><p>Whether you’re attempting to communicate why a sales trend is going up or down, or trying to show why a neural network is the correct approach for the problem at hand, you are doing the same thing. You are taking data and trying to convince someone else that your approach is valid.</p><p>Too many times I have seen the hard work of others (and myself) get set aside because the analysis, model, dashboard, etc. was too hard to understand. 
This is ultimately why data practitioners need to have an eye towards design. Design and a solution-centered approach are key to driving adoption of your proposed idea or solution.</p><h3>What resources are available?</h3><p>In the context of this article I’ll focus on using Tableau Public as a resource to help you understand design best practices. It is worth noting, though, that Tableau Public is not the only place to find data design best practices (nor necessarily the best). It is just where I go to get inspired and to find examples I can reverse engineer. I recommend you find the place(s) that work best for you and your needs.</p><p>On Tableau Public you can think of the visualizations in two different contexts:</p><ul><li>Business-friendly dashboards</li><li>Recreational dashboards</li></ul><p>Business-friendly dashboards are just what they sound like: dashboards or parts of dashboards on Tableau Public that could easily be repurposed for business reporting.</p><p>Recreational dashboards are typically created by community members to experiment with Tableau. These dashboards frequently include elements that you should NOT share in corporate dashboards. You can tell which dashboards are recreational by the subject matter, use of visualization best practices, etc. Often these dashboards are very “artistic” in nature. Generally, you will not want to recreate them for storytelling in a corporate setting. Some components of these dashboards are totally fine in the context of business reporting, but exercise caution before you pull them into a business dashboard.</p><h3>What does good look like?</h3><p>One recommendation I have for anyone looking to get better at data storytelling and analytics in general is to read “Storytelling with Data” by Cole Nussbaumer Knaflic. 
This book is an excellent primer on data storytelling and how to use charts to communicate your research and perspective.</p><p>The other callout I have after participating in and hosting Makeover Monday is that there is no single “best” dashboard that should be created with your data. Don’t expect to build the perfect dashboard, report, or data story. What is more important is understanding what you are trying to communicate and then crafting your visualizations to support that story (after doing your research… obviously). The following questions are typically where I start building my data story:</p><ol><li>What key insights or takeaways did my research uncover? (at most 2–3… or they probably aren’t main takeaways)</li><li>Are your insights or takeaways new or impactful?</li><li>Is there a logical order to your insights or takeaways?</li><li>Do your insights or takeaways roll up to some other initiative or project that wasn’t previously known?</li></ol><p>As you build your data story you need to keep in mind that your research is not the same as the work you should share. As the data practitioner, your job is to pull apart and stress test the data as part of your research process. Often data practitioners do excellent research and then just share everything they’ve pulled together with little thought to how their work is presented. This is one of the fastest ways to ruin credibility, as your stakeholders will get bored and/or won’t understand the nuance. At this point it is very easy to get lost in the details.</p><h3>Building an effective data story</h3><p>A data story can take many shapes. You shouldn’t be afraid to call a simple email with 1 or 2 data-backed insights a data story. Sometimes that’s all you need. The main takeaway is to meet your audience where they are.</p><h4>Identify the key takeaways</h4><p>Assuming you need to build something more robust than an email, I’ll ground my story in the key takeaways (no more than 3). 
These should be simple bullets communicating data-backed insights. Try to keep the text to a minimum, as the goal is for your insights to be simple and impactful to strategy. I also recommend using text formatting to further emphasize your takeaways. This has the benefit of focusing the reader on the absolute minimum they need to understand while also giving you a visual cue when presenting. Below are some examples of how to execute this approach:</p><ul><li>Chuck was a popular TV show with a strong fan following that struggled to receive network investment but was ultimately saved for 3 extra seasons due to direct investment by Subway is an example of advertising opportunities (bad)</li><li>The popular TV show Chuck survived due to direct brand advertising by Subway (better)</li><li>Direct advertising investment in TV shows builds brand loyalty by reaching customers where they are (even better)</li><li><strong>Direct advertising investment in TV shows builds brand loyalty</strong> by reaching customers where they are (best)</li></ul><p>As you compile your key takeaways you will want to consider the flow of how you communicate them. You will generally want to start with the “what happened?”, then “so what?” or “why should I care?”, and finally “now what?” or “what should I do after knowing this?”.</p><h4>Sculpt your presentation</h4><p>Once you have your key takeaways and flow built out, you can start crafting your presentation. Regardless of whether you use PowerPoint or Tableau to craft your story, you should only use charts that directly support your key takeaways. By support I specifically mean charts that communicate the findings clearly and concisely to your stakeholders.</p><p>In a PowerPoint presentation I recommend using the slide title to communicate points directly related to the key takeaways. 
The chart you share should then link directly to the slide title, and any supporting text you include should help explain the chart and/or provide additional context. The richer your research, the more you can expand on the additional context you provide.</p><p>I often do something similar in dashboards that I compile. I’ll frequently add questions or callouts in the different sections of a long-form (one-time-use) dashboard. This tells my audience how to interpret the dashboard and what to take from each chart.</p><h4>Build your tool kit</h4><p>When building your charts I recommend going to a resource like Tableau Public to see how other people are communicating points similar to yours. This is also where having a standard toolkit of charts really comes in handy. Participating in community projects like MakeoverMonday, Back2Viz Basics, GamesNightViz, WorkoutWednesday, and others is important because they expose you to unfamiliar data models and chart types and require you to think about presenting data differently.</p><p>An example where I’ve found a toolkit of chart types handy is when trying to communicate distributions. While box and whisker plots are great for technical teams and academia (really, only those that use them frequently), it can be hard to communicate what they represent to non-technical stakeholders. This is where something like a jitter plot (tape plot, etc.) or a violin plot really comes in handy, as they can communicate distributions in a cleaner way even if you lose some precision. Remember, the main goal is to communicate your insights. 
If you are worried about precision, remember you can always include the box and whisker in the appendix.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oLatS0w3YPIdIGoWjOMFow.png" /><figcaption>Tape plot: <a href="https://public.tableau.com/app/profile/matt.huff/viz/AreCatsLazy/MOM2025_wk47">link</a></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/839/1*qyuefH3WhAIpCRntn9BkpQ.png" /><figcaption>Violin plot with box and whisker by Numa Begum: <a href="https://public.tableau.com/app/profile/numa.begum/viz/ViolinPlotInspiration/Visualisation">link</a></figcaption></figure><h4>Communicate only what you want to communicate</h4><p>Now that you have the structure in place for your presentation, you need to consider how you will drive attention to the places where you want the audience to focus. This is where text formatting, color, size, and chart choice become important.</p><p>Take the electricity viz below as an example. Text formatting was used to specifically highlight key takeaways and drive home the emotional impact of electricity access.</p><p>Yellow was used because it is a color frequently associated with electricity. Note that only 2 colors were used in the dashboard. Grey was meant to give context, but since it isn’t tied to the key takeaways I needed to de-emphasize that data so you can focus on the emerging economies (which is where electricity access is most problematic). The strategic use of color here helps to focus the message clearly and quickly. If I had a different color for each country it would be almost impossible to quickly figure out what is going on (I know… that’s the default and it took forever to parse out). Also, note that color is used consistently throughout the presentation. 
What I label with yellow ALWAYS refers to the same thing in this dashboard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/988/1*NGSbmglG0dByoVaBCZPDuQ.png" /><figcaption><a href="https://public.tableau.com/app/profile/matt.huff/viz/GlobalAccesstoElectricityMakeoverMonday/Dashboard1">Link</a></figcaption></figure><p>One callout when it comes to communicating what you want is that chart type matters. Bar charts are the de facto chart in a corporate setting. However, don’t be afraid to deviate from the default. Sometimes a more complicated chart type with clean labels, colors, etc. will communicate your story more effectively.</p><h3>Summary</h3><p>As you begin your data storytelling journey, remember that as a data practitioner your job is to deliver clean, concise, memorable data stories that help drive decisions. Your goal is to drive speed to insight while reducing the analytical burden on your audience. Focus on the fundamentals and on building your toolkit, and you will be surprised at the value you can start to deliver.</p><h4>Main takeaways</h4><ul><li>Meet the audience where they are</li><li>Focus on the key takeaways</li><li>Charts should only be used if they support the key takeaways</li><li>Charts should use visualization best practices (size, color, font, etc.)</li></ul><p>— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —</p><p>Want to learn more?</p><ul><li>Connect with me on LinkedIn (<a href="https://www.linkedin.com/in/mhuff04/">link</a>)</li><li>Follow me on Tableau Public (<a href="https://public.tableau.com/app/profile/matt.huff/vizzes">link</a>)</li></ul><hr><p><a href="https://medium.com/retail-tableau-user-group/telling-a-better-data-story-319b3e9cfecc">Telling a BETTER data story</a> was originally published in <a 
href="https://medium.com/retail-tableau-user-group">Retail Tableau User Group</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Makeover Monday: Become an Analytics Expert]]></title>
            <link>https://medium.com/@matt_huff/makeover-monday-become-an-analytics-expert-332e580cc7e1?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/332e580cc7e1</guid>
            <category><![CDATA[learning]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[tableau]]></category>
            <category><![CDATA[data-visualization]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Mon, 05 Jan 2026 13:32:53 GMT</pubDate>
            <atom:updated>2026-01-05T13:32:53.292Z</atom:updated>
            <content:encoded><![CDATA[<p>Have you ever felt like you are too far behind in the data space? Have you felt that there are people much better at [fill in your desired skillset]?</p><p>The good news is that there are things you can do to close those skill gaps. These activities aren’t free. They require effort, but they can be fun too. Ultimately, everything comes down to practice (and a little bit of play).</p><p>That’s the beauty of the Makeover Monday data project. It was originally designed as a way to help connect the community of Tableau users, share best practices, and think about and explore data concepts. In this post I’ll outline what the Makeover Monday data project is and how you can get involved.</p><h3>What is Makeover Monday?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*lXjVKE_ysDExohW-" /></figure><p>Makeover Monday is simply a weekly data challenge. Every Monday a data set is published for use within the community. These datasets cover a wide range of topics, from social initiatives to fun data sets in the news cycle. Along with each dataset is an image of a chart that is sourced from the data.</p><p>Your mission (should you choose to accept it) is to take that data set and, using whatever software you prefer (e.g. Tableau, Power BI, Excel, pen and paper… you get the idea), do one of the following:</p><ol><li>Recreate the original image</li><li>Improve upon the original image</li></ol><p>Once you’ve worked through either of these approaches, the only ask is that you share your work. You can share it with people you work with, on LinkedIn (tagging the Makeover Monday leaders), or any other way you prefer. 
If you work through Makeover Monday with Tableau we strongly recommend uploading your work to Tableau Public and filling out the Makeover Monday tracker <a href="https://docs.google.com/forms/d/e/1FAIpQLScOV9H0Ly19xOJuKDtbGt--0y0bq6DVcycHb9nr6LPdKSFIRg/viewform">here </a>to track your progress in this <a href="https://public.tableau.com/app/profile/chimdi.nwosu/viz/MakeoverMondaySubmissionTracker_17501488943530/Submissions">viz</a>.</p><h3>Avoid perfection (it doesn’t exist anyways)</h3><p>When starting your work don’t worry about being perfect. Your work won’t be perfect, but that’s kinda the point. Makeover Monday can be viewed as a sandbox project where you’re able to try out new capabilities.</p><p>In the context of Tableau, if you haven’t used dynamic zone visibility, spatial parameters, or any variety of other newer capabilities, use Makeover Monday to practice and learn what works or doesn’t.</p><p>You should view Makeover Monday as a low-stress option to upskill and get inspired by others in the community.</p><h3>Why the random data sets?</h3><p>This is the hidden value behind Makeover Monday. Imagine you’re an analyst focusing on profit and loss statements. You probably have a couple of go-to methods for displaying your reports. If you were to practice building analytics skills only on data you’re familiar with, you’d hit a growth ceiling quickly.</p><p>Makeover Monday introduces you to large-ish and small data sets. You’ll also be introduced to mapping data and simple tabular data. You’ll be exposed to topics you’ve never considered. This exposure forces you to think about the data and the best way to visualize it.</p><p>A lot of analysts I’ve worked with didn’t want to work with data that wasn’t directly applicable to their everyday work. However, to be a well-rounded and versatile analyst you need that broad data exposure.</p><h3>Why share your work?</h3><p>Think of sharing as your personal commitment device. 
Since this project is live every week, there is always something new to learn. You will pick up simple tips and tricks over time just by participating.</p><p>Sharing is also a way to pay it forward to others. I’ve been participating in Makeover Monday for the past 5+ years, and sometimes I get stuck. When that happens I find other people’s work can be a great source of inspiration that I can try to replicate.</p><p>The other benefit of sharing (especially to somewhere like Tableau Public) is that it gives you a large portfolio of data visualization assets you can refer back to. I can’t tell you how many times I go back to a dashboard I previously posted on Tableau Public because I forgot how to do something and can reverse engineer my old work.</p><h3>I’m convinced, but don’t have enough time</h3><p>I mentioned earlier that any community project is going to require an investment. You won’t get better unless you invest the time up front. This has the hidden benefit of helping you work faster in the long run too.</p><p>My recommendation is to start small. Set a time limit for what you are willing to do each week. If you can set aside 30 minutes each week, you’d be shocked at what you learn over months of consistent and targeted practice.</p><p>Remember, you’re just trying to get better than you were last week. Get inspired by the work the community is doing. Learn from the community. Share with the community. You’ll be amazed at what you can do with a little bit of dedicated effort. 
Most importantly, though…</p><p>HAVE FUN!</p><p>— — — — — — — — — — — — — —</p><p>Still have questions?</p><ul><li>Join the Tableau community Makeover Monday channel (via <a href="https://www.tableau.com/community/slack">Slack</a>).</li><li>Reach out to your Makeover Monday leaders (Chimdi Nwosu, Harry Beardon, Ojoswi Basu, Blake Feiza, and myself) and other active participants for help.</li></ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Image magic in Tableau with maps]]></title>
            <link>https://medium.com/retail-tableau-user-group/image-magic-in-tableau-with-maps-098086ec0e13?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/098086ec0e13</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[analytics]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Tue, 02 Dec 2025 00:07:31 GMT</pubDate>
            <atom:updated>2025-12-02T00:08:20.860Z</atom:updated>
            <content:encoded><![CDATA[<p>One of the great things about Tableau Public is the inspiration the platform provides. Need a way to visualize a financial statement? Done. How about a way to understand e-commerce sales trends? Done. Ever wonder if Link could be vegan in Zelda: Breath of the Wild? Done.</p><p>Will Sutton, an Iron Viz winner a couple of years back, built a dashboard analyzing the different food sources in the game. You can find his dashboard <a href="https://public.tableau.com/app/profile/wjsutton/viz/BreathoftheWildVeganModeTheLegendofZeldaGamesNightViz/LongForm">here</a>. The concept behind this dashboard is a lot of fun. What I found most interesting was that he added a single subtle feature to his dashboard that, in my opinion, was pretty powerful. He added a button to the dashboard that would swap out the background colors. None of the data changed, but this ability to simply swap out the backgrounds of a dashboard is something I have thought about A LOT ever since seeing this viz.</p><p>I finally had a use case to test out this trick on my own when building a viz on tips and tricks for the Portland Tableau User Group. I thought it would be fun to showcase the differences between data from the University of Oregon and Oregon State University. While I’m a huge University of Oregon fan, I wanted to showcase some fun facts about each university. To do this I wanted to use Will Sutton’s background image swap so I could have a branded view of the data for each school in the same dashboard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cuC_qEpZqdd3Rsnf" /><figcaption>Generated via Google Gemini</figcaption></figure><h3>The problem</h3><p>I needed a single dashboard with a single filter. This filter needed to adjust the underlying data in each chart for the chosen university. I also needed that same filter to change the background image for the dashboard overall. 
This would ensure that I’m not showing University of Oregon branding on Oregon State University data (eww…).</p><p>Naturally, I also downloaded Will’s dashboard to see how he did the swap. His approach was very similar to something I’ve done before when adding explanations to a dashboard. However, it turns out this solution wasn’t going to work for my dashboard.</p><p>The other piece to keep in mind is that when I collected the data I had multiple data sources (this is a hint at the solution for those reading ahead). Each of these data sources had a slightly different structure to support the specific visualizations I wanted for the dashboard.</p><h3>The (not so) solution</h3><p>The simple approach is to add an image to your dashboard. This image will simply sit behind everything you lay on top of it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ISQkmpM1dQ3zoRzAca_rOw.png" /></figure><p>Next, you can add a floating container sized to the specific dimensions of your dashboard. In this container you can add your second background image. You can then add a button for the container, which allows you to turn this image on or off. Remember the first-layer image we put down? If we turn off the container, the first-layer image will show.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/254/1*Ty5qKptNn-W3Uth2F6MeZQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/232/1*6gTHN6j2UyK5Afz70z9j2g.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HqCAPAdv9mdBoWdXG1fm0w.gif" /></figure><p>This works great if all you want to do is change the background image. However, I obviously don’t want to share University of Oregon data on Oregon State University’s view and vice versa. I could ask my stakeholders to click a button in one spot to change the image and then interact with a different filter somewhere else. 
However, that is far from the kind of experience I want for my stakeholders.</p><h3>The (real) solution</h3><p>Remember when I said we had multiple data sources? That was a hint that the solution we need is going to rely on some kind of parameter. So let’s start by creating a parameter that lets us pick the university (either UO or OSU).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/705/1*xG1vFbtI43rVPUHN-RgKYA.png" /></figure><p>Now, the trick we used earlier for swapping out the backgrounds won’t work here because a container button isn’t capable of changing a parameter. If you know a way to get that to work, let me know. I did figure out a workaround though. In this case, let’s use maps to do the heavy lifting for us.</p><p>To start, create a new worksheet for one of the school’s images. Connect it to one of your datasets and place a calculation of AVG(0) on both the Rows and Columns shelves.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5F4rmoeJoD6M1qjIUAFr9g.png" /></figure><p>Next, change the default mark type to Map, like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/194/1*tLyOVLRsmRAzuFY3R_yL3A.png" /></figure><p>Then, in your Map dropdown, select the data set you are using, following the pattern below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/484/1*_HW5SOkZ0NwgMtl_hspRzA.png" /></figure><p>Then choose to add an image and change the specifications to fit the size of your dashboard. In my case I want the image to start at (0,0) and span 850 pixels across and 1500 pixels vertically.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/537/1*8_obB0mupLwW06mpqCs-xQ.png" /></figure><p>You should then get something like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/855/1*UYoAsKWiwYvJq9byZFmIUg.png" /></figure><p>With some reformatting to remove the axis labels, sheet titles, etc., 
you should now be able to add this directly into a floating container on your dashboard as a background image.</p><p>The trick to this approach is that your image is now a worksheet object instead of an image or container object. You can then use either standard sheet-swapping or dynamic zone visibility approaches to show or hide the background images using the same parameter you use to show or hide the underlying data. Now you have a dynamic background image on your dashboard that can be powered by a parameter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/844/1*tRNjEMQqhtnBny192xvNmA.gif" /></figure><p>You can check out the final dashboard <a href="https://public.tableau.com/app/profile/matt.huff/viz/UOvsOSU/OregonSchoolComparison">here</a>. Another bonus tip in this dashboard is that the numbers laid out at the top were created using map layers, which gives you flexibility in where you show them. Accomplishing something like this without map layers would require you to build multiple sheets and would cause Tableau to trigger multiple queries to render your views.</p><h3>So what?</h3><p>Why is this approach useful? Imagine you have different stakeholders who each have different internal branding needs. This approach allows you to swap the view that is shown to your stakeholders dynamically. If your data includes references identifying the views that only certain users need, you can selectively show each stakeholder only the data and branding they need to see. 
This allows you to create multiple dynamic views that can all be controlled via parameters, reducing your overall dashboard development by consolidating multiple stakeholder views into a single dashboard.</p><hr><p><a href="https://medium.com/retail-tableau-user-group/image-magic-in-tableau-with-maps-098086ec0e13">Image magic in Tableau with maps</a> was originally published in <a href="https://medium.com/retail-tableau-user-group">Retail Tableau User Group</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why you should write on Medium]]></title>
            <link>https://medium.com/retail-tableau-user-group/why-you-should-write-on-medium-15000b44cde0?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/15000b44cde0</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[writing]]></category>
            <category><![CDATA[data-analysis]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Sat, 25 Oct 2025 19:15:16 GMT</pubDate>
            <atom:updated>2025-10-25T19:16:52.265Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XrguvwASihnD1pJqyll-zg.jpeg" /></figure><blockquote>What’s the point? I don’t have anything to contribute. There isn’t anything that I can offer to somebody else. It’s a lot of work for little benefit.</blockquote><p>You’ve probably had thoughts similar to those above. However, each of us has a unique perspective and point of view. I’ll be the first to admit that your perspective might not be valuable for everyone, BUT it will be valuable for someone.</p><p>Spoiler alert!</p><p>You are probably the person who will benefit the most. Writing is an excellent way to clear your thoughts and develop frameworks. It is especially powerful as a mechanism for refining ideas and consolidating your thinking.</p><p>In a technical context, writing also works as a reinforcement mechanism. It becomes a process for you to explain concepts in the way you understand best. For highly complex topics you now have a reference point you can use the next time you need to do something similar. Think of technical writing as bread crumbs you leave for yourself.</p><p>The best part of writing is that the best practices you develop by practicing on Medium apply to work as well. In a corporate setting, writing internal wikis is a powerful way to communicate your perspective and anchor a conversation. This anchoring is a powerful approach to implementing change, as peers can all start from the same baseline.</p><h3>Getting started</h3><p>The first and most important step in writing on Medium is simple. Think of a topic and start writing. Your writing doesn’t need to be perfect (guaranteed it won’t be). You might not even like your first post. Doesn’t matter… publish and share it!</p><p>A bias for action is the critical piece of this first step. I see a lot of people passively consume but rarely contribute. If you want to get better you need to contribute. 
Set personal goals and have fun.</p><h3>What to write</h3><p>I especially enjoy writing about Tableau. In a work context I encounter many people that have varying levels of capability with Tableau. It isn’t uncommon that people who are newer in their journey ask questions. These questions are gold! Document and share solutions to their questions.</p><p>Writing solutions to these questions gives you a repository of tips and tricks you can refer back to. It also has the sneaky benefit of meaning you now have a playbook for answering the same questions in the future.</p><h3>Getting social</h3><p>Once you’ve done the hard part of writing now you need to socialize. While this part is optional I highly encourage sharing. I consider sharing optional because Medium will share your work organically. However, sharing is important as it gives you the opportunity to connect with other members of the same community. Within the Tableau community there are plenty of places to share. You can volunteer to present at a Tableau User Group (they’re always looking for speakers). You can share it to social platforms like LinkedIn, Tableau Public (if there’s a corresponding visualization), Twitter/X, or Blue Sky. This is your opportunity to give back to everyone that has shared their ideas before.</p><p>Within Medium you can also work with publications. For example, the Retail and Consumer Goods Tableau User Group recently began a publication focused on all things Tableau, data viz, analytics, and retail. As a non-paying Medium contributor you can still write and submit your work to a publication (follow the publication’s rules and recommendations though). This is a great way for you to connect with other like minded professionals.</p><p>This should go without saying but as you create content be sure to share it with your colleagues at work. It’s safe to say that the reason you were writing in the first place was to solve a business problem you faced. 
It is all but guaranteed that some of your peers have a very similar problem to what you solved. My preference is to share my writing via direct message (where applicable) or within social channels on Slack that have peers working on similar projects.</p><h3>Conclusion</h3><p>Regardless of your experience level or technical capabilities YOU have a story worth sharing. Your approach is exactly what someone else needs. By contributing to the community you’ll be forging lasting relationships and friendships. This is an opportunity you won’t want to miss.</p><p>— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —</p><p><em>If you made it this far and found this valuable let’s connect! Share what you liked in the comments. You can also find me on </em><a href="https://www.linkedin.com/in/mhuff04/"><em>LinkedIn </em></a><em>and </em><a href="https://public.tableau.com/app/profile/matt.huff/vizzes"><em>Tableau Public</em></a><em>.</em></p><p><em>I’m also a 2025 Tableau Ambassador and leader of the Retail &amp; Consumer Good Tableau User Group (</em><a href="https://usergroups.tableau.com/retail-and-consumer-goods-tableau-user-group/"><em>here</em></a><em>). I’d love to connect in one of our meetups.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=15000b44cde0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/retail-tableau-user-group/why-you-should-write-on-medium-15000b44cde0">Why you should write on Medium</a> was originally published in <a href="https://medium.com/retail-tableau-user-group">Retail Tableau User Group</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reinventing the Humble Bar Chart]]></title>
            <link>https://medium.com/retail-tableau-user-group/reinventing-the-humble-bar-chart-6c6541de9f9e?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c6541de9f9e</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tutorial]]></category>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[training]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Sun, 12 Oct 2025 20:15:19 GMT</pubDate>
            <atom:updated>2025-10-13T13:39:56.770Z</atom:updated>
            <content:encoded><![CDATA[<p>Recently I was asked to create some very specific formatting for a bar chart. My initial reaction was that this would be easy to build. As I was digging into trying to replicate the visualization in Tableau, it became clear that this wasn’t going to be a simple remake. The ask ended up being more complicated than expected, but the visualization has a lot of merit: it’s simpler and easier to read. Below is a screenshot of the bar chart, but you can also find the dashboard <a href="https://public.tableau.com/app/profile/matt.huff/viz/CustomLabeledBarCharts/CustomLabeledBarCharts">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/665/1*ehenBfYfKCLwo5sgL-lvog.png" /></figure><h3>Step 1: Data setup</h3><p>Let’s dive right into the build. This dashboard uses the Superstore dataset that comes with Tableau. After connecting to Superstore, we need to move sub-category onto the rows shelf and filter to the two most recent years. In my example I’m using the years 2023 and 2024.</p><p>Then add sales to the columns shelf and the order date year to the marks shelf. This gives us sales for the two years, which is not quite what we want. However, we can add a simple calculation to get what we need. Use the following formula to create a field called “1. Sales Growth”:</p><blockquote>SUM(if year([Order Date]) = 2024 then [Sales] else null end) <br>/<br>SUM(if year([Order Date]) = 2023 then [Sales] else null end)<br>- 1</blockquote><p>You can then sort sub-category by sales growth, descending. You should get something like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1003/1*NgajkZFwDvrYrL2s2t8gSA.png" /></figure><p>At this point, I highly recommend you play around with this view to see if you can get something similar to the final output. 
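If you want to sanity-check the growth numbers outside Tableau, here is a minimal pandas sketch of the same calculation. The dataframe values are hypothetical stand-ins; the column names mirror the Superstore fields used above:

```python
import pandas as pd

# Hypothetical slice of the Superstore data: one row per order line.
df = pd.DataFrame({
    "Sub-Category": ["Fasteners", "Fasteners", "Machines", "Machines"],
    "Order Date": pd.to_datetime(["2023-06-01", "2024-06-01", "2023-06-01", "2024-06-01"]),
    "Sales": [100.0, 600.0, 500.0, 250.0],
})

# Sum sales by sub-category and year, then compute 2024/2023 - 1,
# mirroring the "1. Sales Growth" calculated field.
yearly = (
    df.assign(year=df["Order Date"].dt.year)
      .groupby(["Sub-Category", "year"])["Sales"].sum()
      .unstack("year")
)
growth = (yearly[2024] / yearly[2023] - 1).sort_values(ascending=False)
print(growth)
```

With these made-up numbers, Fasteners shows 500% growth and Machines a 50% decline, which is the kind of spread the rest of the walkthrough works with.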
You’ll quickly learn that standard tricks won’t work all that well here.</p><h3>Step 2: Create a dual axis workaround</h3><p>The first thing that becomes apparent is that a dual axis bar chart is not capable of getting the formatting to match exactly. This is a huge bummer, as that would make this chart trivial to create. Instead we’re going to have to pull out some tricks we learn in Workout Wednesday (see a sample problem <a href="https://workout-wednesday.com/2025w15tab/">here</a>).</p><p>This means we’ll need to get tricky ourselves. Let’s duplicate “1. Sales Growth” twice and rename both fields so we have “1. Sales Growth”, “1. Sales Growth Left”, and “1. Sales Growth Right”. All sorting should still be occurring on “1. Sales Growth”. This will help us keep track of the makeshift dual axis chart we’ll create. Then make sure both copies are on the columns shelf so that we get something like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Dskqs6pv4FXH3iX_2-XKAg.png" /></figure><p>Now let’s refine the calculations used to generate the left and right sides of the sales growth. 
Let’s adjust the left side calculation to:</p><blockquote>if (<br> SUM(if year([Order Date]) = 2024 then [Sales] else null end) <br> /<br> SUM(if year([Order Date]) = 2023 then [Sales] else null end)<br> - 1 ) &gt;= 0 then 0</blockquote><blockquote>else (<br> SUM(if year([Order Date]) = 2024 then [Sales] else null end) <br> /<br> SUM(if year([Order Date]) = 2023 then [Sales] else null end)<br> - 1 )<br>end</blockquote><p>The right side of the equation should be:</p><blockquote>if (<br>SUM(if year([Order Date]) = 2024 then [Sales] else null end) <br>/<br>SUM(if year([Order Date]) = 2023 then [Sales] else null end)<br>- 1<br>) &lt;= 0 then 0</blockquote><blockquote>else (<br>SUM(if year([Order Date]) = 2024 then [Sales] else null end) <br>/<br>SUM(if year([Order Date]) = 2023 then [Sales] else null end)<br>- 1)<br>end</blockquote><p>Adjusting these calculations will give you only negative values in the left side calculation and only positive values in the right side calculation. Then fix the axis for the left side to be similar to the following.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/665/1*k6NDK8tID9WO2qmj2Z7eYw.png" /></figure><p>We then need to do the same for the right side axis.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/667/1*bdMxjbHPGNa2AF-LppjPwQ.png" /></figure><p>You should then have something that looks like this. At this stage, double-check that your sort is still working as expected.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NmG3C0-0KcrRgqp7ykE5Ww.png" /></figure><h3>Step 3: Scale your axis value</h3><p>This is looking pretty promising so far, but there’s something odd going on. The Machines and Envelopes sales declines aren’t really comparable to the 500% growth we see for Fasteners, but the bars suggest the same order of magnitude. This tells us that we have an issue with the scaling that is happening on the axes. 
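The left/right split is easy to verify outside Tableau as well; here is a minimal pandas sketch with hypothetical growth values, mirroring the two calculated fields (declines only on the left, growth only on the right):

```python
import pandas as pd

# Hypothetical growth values per sub-category (2024 vs. 2023).
growth = pd.Series({"Fasteners": 5.0, "Binders": 0.2, "Machines": -0.5})

# Mirror the "Left"/"Right" calculated fields: the left measure keeps
# only declines (zero otherwise); the right measure keeps only growth.
left = growth.where(growth < 0, 0.0)
right = growth.where(growth >= 0, 0.0)
print(left.to_dict(), right.to_dict())
```

Every sub-category ends up with a non-zero value on exactly one side, which is what lets the two axes sit back to back without overlapping bars.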
So let’s fix the issue.</p><p>To fix the scaling, let’s identify the maximum growth on the right and the maximum decline on the left side. We’ll then pass these values into parameters that can update on workbook load so we can have dynamically updating axes.</p><p>We’ll use the following formula, “1. Sales Growth Right Max”, to calculate the maximum value on the right side:</p><blockquote>-1 * max(<br>(<br>{fixed [Sub-Category]:SUM(if year([Order Date]) = 2024 then [Sales] else null end)} <br>/<br>{fixed [Sub-Category]: SUM(if year([Order Date]) = 2023 then [Sales] else null end)}<br>- 1 <br>))</blockquote><p>We’ll also create “1. Sales Growth Left Min” to calculate the maximum sales decline on the left:</p><blockquote>-1 * min(<br>(<br>{fixed [Sub-Category]:SUM(if year([Order Date]) = 2024 then [Sales] else null end)} <br>/<br>{fixed [Sub-Category]: SUM(if year([Order Date]) = 2023 then [Sales] else null end)}<br>- 1 <br>))</blockquote><p>We then want to create two parameters based on these values. You can right-click on the calculated fields to create these parameters. After doing this you should have two new parameters. You will then add these parameters as reference lines for your chart. Be sure to suppress any labeling and line formatting so they are invisible. You will want something that looks like this for both the maximum sales growth and maximum sales decline.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/668/1*MI1VDgYnZMAby5pfwzQJ0w.png" /></figure><p>Once done, you should have a view that looks like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0slHqx0WgbzQMmFfDnzwxQ.png" /></figure><h3>Step 4: Label the observations</h3><p>Perfect! We are getting close. You can take my word for it or you can try to build labels yourself, but bar charts just aren’t going to work here. This is because we are looking for a very specific kind of labeling. 
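To see what those two parameter calculations actually produce, here is a minimal pandas sketch with hypothetical growth values; note that both are negated with -1, mirroring the formulas above:

```python
import pandas as pd

# Hypothetical per-sub-category growth values (2024 vs. 2023).
growth = pd.Series({"Fasteners": 5.0, "Binders": 0.2, "Machines": -0.5})

# Mirror the "Right Max" / "Left Min" parameter calculations: both are
# negated so they can anchor the fixed/reversed axes in the view.
right_max = -1 * growth.max()   # -1 * largest growth
left_min = -1 * growth.min()    # -1 * largest decline
print(right_max, left_min)
```

Feeding these two numbers into parameters gives each axis a bound tied to the actual data, so a 50% decline no longer renders as long as a 500% gain.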
However, gantt bars are going to work perfectly here. I created something similar for a walkthrough on waterfall charts, which you can check out <a href="https://medium.com/@matt_huff/chasing-waterfalls-a141fd0572d2">here</a>.</p><p>To start with, let’s change the default chart type from bars to gantt. Doing so will yield this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XXDvQPbbNGpN-niH0JSGTw.png" /></figure><p>Now, let’s hack these gantt marks to get a bar. For the right side you will ctrl drag the sales growth right field from columns onto size in the marks shelf. This will add the sales growth onto your gantt marks like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OYiGD1QNFsWA5WOv3C4AMw.png" /></figure><p>This is close to what we want but not quite right, so let’s apply the same trick we used for building waterfalls: use the negative of the sales growth on the marks shelf to reorient the location of the bar. Do the same for the left side and you’ll get the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WV98eXuJMobd9tqDtYtJjQ.png" /></figure><p>We now have gantt bars disguised as a bar chart, as desired. Now let’s create the labels. I’ll give you the formulas, but because we want to mix showing a number and text we’ll have to apply a couple of cumbersome calculations to the growth values. If we don’t account for the number formatting we’ll end up with numbers that have a LOT of decimal places, suggesting a level of precision we shouldn’t be comfortable with.</p><p>Here is the calculation for the left label:</p><blockquote>if {fixed [Sub-Category]:SUM(if year([Order Date]) = 2023 then [Sales] else null end)} <br> &lt; {fixed [Sub-Category]: SUM(if year([Order Date]) = 2024 then [Sales] else null end)}<br> then [Sub-Category]</blockquote><blockquote>else str(int([1. Sales Growth]))<br> +<br> ‘.’<br> +<br> str(int(<br> (<br> [1. Sales Growth]<br> -<br> floor([1. 
Sales Growth])<br> )*10)<br> )<br> +<br> str(int(<br> ((<br> [1. Sales Growth]<br> -<br> floor([1. Sales Growth])<br> )*100) %10)<br> )<br> + ‘%’<br>end</blockquote><p>For the right label we’ll use the following:</p><blockquote>if {fixed [Sub-Category]:SUM(if year([Order Date]) = 2023 then [Sales] else null end)} <br> &gt;= {fixed [Sub-Category]: SUM(if year([Order Date]) = 2024 then [Sales] else null end)}<br> then [Sub-Category]</blockquote><blockquote>else str(int([1. Sales Growth]))<br> +<br> ‘.’<br> +<br> str(int(<br> (<br> [1. Sales Growth]<br> -<br> floor([1. Sales Growth])<br> )*10)<br> )<br> +<br> str(int(<br> ((<br> [1. Sales Growth]<br> -<br> floor([1. Sales Growth])<br> )*100) %10)<br> )<br> + ‘%’<br>end</blockquote><p>Then add the left label to the left side text field on the marks shelf and the right label for the right side. You’ll then get the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9g1KrhZd07nCSJHklzlTww.png" /></figure><p>Then with a little touch-up and formatting we can get something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HDzoBfQb3nj6n5U2w-SNrw.png" /></figure><h3>Summary</h3><p>While it isn’t necessarily worth building this out every time I need to create a bar chart, this view is powerful and concise. I like how it cleanly shows the data next to the bar while alternating the labels on/off depending on whether the sub-category is growing or not.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6c6541de9f9e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/retail-tableau-user-group/reinventing-the-humble-bar-chart-6c6541de9f9e">Reinventing the Humble Bar Chart</a> was originally published in <a href="https://medium.com/retail-tableau-user-group">Retail Tableau User Group</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scaling data science workflows]]></title>
            <link>https://medium.com/dataai/scaling-data-science-workflows-d67e5a1de953?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/d67e5a1de953</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[analytics]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Wed, 08 Oct 2025 22:45:23 GMT</pubDate>
            <atom:updated>2025-11-04T22:17:06.353Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SKLVAVB2KH6YygVw" /><figcaption>Generated by Google Gemini</figcaption></figure><p>Imagine you’re a data scientist or technical analyst that has a collection of reports that you run regularly. None of these reports are incredibly difficult to run but they all require some finesse. Essentially they require some form of manual input. Maybe that input is a date range, list of products, or list of locations. Wouldn’t it be nice to automate the collection of these inputs?</p><p>In this post I’ll walk through an LLM based approach and general framework you can use to automate these pipelines. It helps to walk through a tactical example so I’ll look at a simple example where I want to have an automated process to generate behavior based audiences at scale.</p><h3>Process overview</h3><p>To incorporate this process you will need to have key pieces in place. Below is a diagram outlining each necessary component to the proposed workflow.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2Tb7rC-9083YuLrGOyPiNg.png" /></figure><p>Keep in mind that this proposal is intended to be a bare essentials process. This is a framework that you can add to or take away from at your leisure. For example, I’m only recommending a single LLM model run, but you could just as easily swap this out with a more agentic approach where you run multiple LLM’s simultaneously to get multiple recommendations that could then be evaluated by another agent. For this purpose though a lot of those complexities will just muddy the key points we need to review.</p><h3>User intake</h3><p>This should be as low-touch as possible for you. Ideally, you have an established intake process that non-technical stakeholders can access. An example, would be a simple Slack bot that you set up that will let your stakeholders input custom text for whatever audience they want. 
This allows your stakeholders to explain in detail what they are looking to accomplish with the audience they need.</p><p>Once you have your Slack bot stood up, you will need to have that output into a table in your respective data lake. As part of this process you will want to have clearly defined metadata as well for this particular request. For example, you would want to ensure that the Slack bot would output the following features:</p><ul><li>Request ID (possibly linked to a work ticket in Jira or other ticket management software)</li><li>Urgency</li><li>User description (will be passed to the LLM)</li></ul><h3>LLM model</h3><p>During this phase you will dig deep in tuning your model. Ideally, you have a backlog of requests you could use to manually test your prompt.</p><p>As you build your prompt you will want to keep the following in mind:</p><ul><li>Clearly define the structure of your inputs</li><li>Specify that your outputs should ONLY be in a table structure (or other data structure that you plan to ingest in workflows later on)</li><li>Clearly define all data requirements you need to use in your downstream models (e.g. date ranges, etc.)</li><li>Defined lists of models that the model can choose from</li></ul><p>Here is a sample prompt you can use as a starting point. Note that this prompt needs further refinement to restrict output to a single usable table format with variables that need to be evaluated.</p><blockquote>You are a senior level data science practitioner. Your purpose is to take a custom explanation of audience profiles as described by your stakeholders and translate those descriptions into filtering and model recommendations for defining the relevant audiences. The output you define must consist of exactly one model recommendation and your confidence, between 0 and 1, in your model choice. You must also provide any recommendations for minimum or maximum spend and visit thresholds. 
You must also generate the ideal look back window for your recommendation.</blockquote><h3>Model recommendation</h3><p>In this phase you would have a workflow that regularly picks up new audience requests in the data lake. Then you would iteratively loop through all new audience requests against the LLM you defined above. The goal here is to generate a simple table or series of tables that can again be picked up by other downstream processes.</p><p>At the end of this phase you will have tables with clear thresholds for features needed to support your model along with a specific model recommendation. For example, you would have clearly defined date ranges or geographic locations you want to use in defining your audiences. Remember that you will have defined the key features you need to pass into the assortment of audience models you can choose from.</p><h3>Models</h3><p>At this point you should have an output table with the relevant request ID’s and other model recommendations. Within your data platform you can then set up specific jobs that begin executing after your model recommendations have been made. The audience segmentation process can then be executed as individual workflows for each model type you have available or as a single, broad workflow through a directed acyclic graph (DAG).</p><p>The important piece here is that you will have a process that takes the output from the LLM and that will be used to define features and criteria that are needed for your audience segmentation frameworks. You can then add new methods for segmenting audiences over time as needed.</p><h3>Reporting layer</h3><p>The reporting layer is critical to the success of this automation process. This is where we introduce a human-in-the-loop process to evaluate the outputs of the LLM recommendation and assess whether or not the audience that was generated is tactically relevant. 
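The intake-to-recommendation loop described above can be sketched as follows. The `recommend` function is a stand-in for the real LLM call, and every field name here is a hypothetical illustration rather than a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AudienceRequest:
    request_id: str       # e.g. linked to a Jira ticket
    urgency: str
    description: str      # free text captured by the Slack bot

def recommend(request: AudienceRequest) -> dict:
    """Stand-in for the LLM call: return one model recommendation plus
    thresholds, matching the output shape the prompt asks for."""
    return {
        "request_id": request.request_id,
        "model": "rfm_segmentation",   # chosen from a fixed model list
        "confidence": 0.8,
        "min_spend": 25.0,
        "lookback_days": 90,
    }

# Pick up new requests from the data lake and build the output table
# that downstream segmentation jobs will consume.
new_requests = [
    AudienceRequest("REQ-1", "high", "recent Nike brand shoppers"),
]
recommendations = [recommend(r) for r in new_requests]
print(recommendations)
```

The point of the fixed output shape is that downstream jobs can treat the recommendations as just another table, regardless of which model was picked.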
One consideration here is that you will want a low-touch process that can be used to re-evaluate the initial model through the above process.</p><p>Additionally, this is another space where you can introduce agentic frameworks to evaluate the outputs and provide recommendations on next steps. This isn’t recommended for initial deployments, though, until you have refined the above framework and have a good understanding of where gaps may exist.</p><h3>Application</h3><p>In Google’s Gemini I tested a couple of sample audiences that my hypothetical stakeholders may request. It is important to note that these examples are purely hypothetical and don’t actually reflect any segmentations I have created. Also, I used the prompt that I shared above to create these model recommendations.</p><h4>Example #1</h4><blockquote>I need an audience that can be used to target ads for recent Nike brand shoppers at my company.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/879/1*7hMsqcCw_5xbF2RufSildw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/571/1*yliyXJ2tNnMxYEKNbqqFOA.png" /></figure><h4>Example #2</h4><blockquote>I need to create an audience that identifies loyal shoppers of Pepsi that have a high likelihood of switching to coke if presented with an ad.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/894/1*G8fkWW_bVmE5rsnlEgEopg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/879/1*7cp-TXye7ThSeJrdu8XGZw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/891/1*JM1naOqjsBD27njTC5rPTg.png" /></figure><h3>Summary</h3><p>While there’s still plenty of tuning needed to support this particular hypothetical audience example, it should be clear that there is a lot of merit to this approach. 
This approach works for other processes as well, especially when you simply need to gather parameters for an analysis: the analysis code itself may be fine, but manually collecting the requests can’t scale, while automating them can.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d67e5a1de953" width="1" height="1" alt=""><hr><p><a href="https://medium.com/dataai/scaling-data-science-workflows-d67e5a1de953">Scaling data science workflows</a> was originally published in <a href="https://medium.com/dataai">DataAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Technical Communities]]></title>
            <link>https://medium.com/dataai/building-technical-communities-90acfaa41741?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/90acfaa41741</guid>
            <category><![CDATA[data]]></category>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[community]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Mon, 22 Sep 2025 12:48:04 GMT</pubDate>
            <atom:updated>2025-09-24T11:43:31.193Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*6z-psrW5BfYGI8GoBoDhTw.jpeg" /><figcaption>Source: Google Gemini</figcaption></figure><p>In a previous post I discussed how I have benefited from participating in data communities. Maybe you don’t feel comfortable working with communities outside of your organization. You’ll be missing out on building a rich network and friendships. That’s totally fine though because you can build them within your organization and get a lot of the same benefits.</p><p>In this post I’ll be discussing what you can do to build those communities and the benefits you can look to receive from participation. The journey isn’t short and there will be ups and downs but it’s definitely worth it.</p><h3>What is a data community?</h3><p>Whether it is openly acknowledged or not every organization needs to have some form of data community. The needs of those communities may differ widely. For example, the data community at a small, local shop may just be the people running the accounting and sales reports. For a larger organization there may be many different communities each focused on a specific data science or analytics&#39; topic.</p><p>Ultimately, it doesn’t matter how big the community is. The community is there to provide guidance and best practices for the data you are using.</p><h3>What are the benefits of a data community?</h3><p>Let’s start by asking if you’ve ever been stuck trying to find a dataset or how to do a certain type of standard analysis. Sure, you can do a quick search online or maybe even pass your questions through an LLM. However, it’s very likely that the answers you find won’t be tailored to your specific use cases. It would be nice to just be able to ask a colleague who knows the nuance of what you are looking to do. This is the main benefit of data communities. 
These are the daily practitioners that have a shared analytics background.</p><p>While there are a lot of benefits from these communities I’ll list out some of the main benefits I’ve experienced.</p><ul><li>Learning data best practices</li><li>Creating a portfolio that showcases technical skill growth</li><li>Reducing the friction to get answers to questions I have</li><li>Get visibility into blind spots in my applied technical knowledge</li><li>Building relationships with people on other teams that I might not work with daily</li><li>Gain confidence with sharing your work and incorporating feedback</li><li>Gain confidence in giving feedback</li></ul><p>This is definitely not an exhaustive list of benefits but should give you a flavor of the types of benefits you can expect to see.</p><h3>My experience building data communities</h3><p>Over the past five years I’ve stood up many different data communities. Most of them have actually been internal communities focused on specific needs of the organization or colleagues.</p><p>I’ve set up multiple Tableau communities that focus on developing more advanced technical skills or sharing simple tips and tricks to help users operate more efficiently. I’ve also built out book clubs reviewing technical and leadership skills. Externally, I’ve helped build the Retail and Consumer Goods Tableau User Group.</p><p>Each community is a bit different with different needs and goals. Some communities we’ve sunset others have been very successful. Below I’ll share some best practices that I’ve discovered over the past five years of building communities.</p><h3>Building your own data communities</h3><p>I’ll share some examples of what you can do today to start building your own data community. First, and most important, make this your own. Being creative, having fun, and providing value will give your participants a reason to attend and engage. 
This will likely be a side-of-desk project for you, but it can help establish you as a thought leader in your community.</p><p>Now let’s get into the practical work you need to do. I’ll provide recommendations for a larger company, but a lot of the same steps still apply.</p><h4>Step 1: Identify a need</h4><p>In this phase you need to identify a community that is needed and that you are passionate about. For me, Tableau checked both boxes. Tableau seems simple to start with but the learning curve is deceptively steep. Additionally, we have a lot of new Tableau users who need to upskill quickly.</p><p>For you this may be integrating AI into workflows, mastering Databricks or Snowflake, applying statistical measurement, or whatever else you are passionate about and others need.</p><h4>Step 2: Get feedback</h4><p>Don’t spend a lot of time on this step, but you should connect with peers and other people you think might be interested and ask if they want to participate. This will confirm whether or not a community is needed. You’re also starting to seed potential participants at the ground level.</p><h4>Step 3: Build infrastructure</h4><p>You’ve decided it’s worth pursuing your ideas for a data community. Now you need to start creating the infrastructure behind what you want to accomplish. Below are key pieces you should consider implementing:</p><ul><li>Slack or Teams channels for communication</li><li>Recurring community meetings on the calendar</li><li>A clear definition of what you will be doing</li><li>Internal wikis to record thought leadership and meetings</li></ul><p>One key consideration is that people learn best by practice or application. How will your community give people the opportunity to practice? In the past I’ve used external community projects like the Makeover Monday project for Tableau. You can also use Kaggle data or even other internal data. 
My recommendation is to use external data because this forces you to think outside the context of what you’re already doing. This makes the learning more sticky. It also means you can share that work externally if you want.</p><h4>Step 4: Kick off</h4><p>Now you need to kick off the community. This is where you start to market your community and let anyone and everyone know about it.</p><p>Now comes the long slog. You will be on the hook for making sure that you have quality content (another reason to use external projects), which can be a heavy lift if you aren’t careful. Regardless of whether you are the only participant or you have fifty participants, make sure you’ve done your homework every time and honor your commitments to the community.</p><h4>Step 5: Refine</h4><p>After you’ve been running the community for a while, get some feedback from your participants. Ask them what could be better.</p><p>Anything and everything should be on the table during refinement. I’ve sunset communities that I’ve set up in the past, and that’s ok. We all got value while they were running but decided the effort was too high and the return on personal investment just wasn’t there.</p><h3>Conclusion</h3><p>As you look to upskill, there is no reason to go it alone. If you want to develop quickly, lean on those around you and give back. You will learn fastest by practicing and teaching others. Doing this with others makes the whole journey a lot more fun as well.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=90acfaa41741" width="1" height="1" alt=""><hr><p><a href="https://medium.com/dataai/building-technical-communities-90acfaa41741">Building Technical Communities</a> was originally published in <a href="https://medium.com/dataai">DataAI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Chasing waterfalls]]></title>
            <link>https://medium.com/retail-tableau-user-group/chasing-waterfalls-a141fd0572d2?source=rss-6dbf573ee5a6------2</link>
            <guid isPermaLink="false">https://medium.com/p/a141fd0572d2</guid>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[tableau]]></category>
            <category><![CDATA[data-analysis]]></category>
            <dc:creator><![CDATA[Matt]]></dc:creator>
            <pubDate>Mon, 08 Sep 2025 12:01:43 GMT</pubDate>
            <atom:updated>2025-10-12T20:25:20.581Z</atom:updated>
            <content:encoded><![CDATA[<p>In Tableau, the default viz options you see in Show Me are incredibly powerful. They do a great job of covering the fundamentals you will need for most of your dashboards. However, there are charts your stakeholders will find informative that aren’t included in Show Me.</p><p>Waterfall charts are standard charts for finance and accounting. They convey a lot of information in a relatively small space, and most stakeholders have seen this chart type enough to understand how to interpret the data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lwKO7p6Z9uaUw4UEmKuqUw.png" /><figcaption>AI generated image from Google Gemini</figcaption></figure><p>However, waterfall charts in Tableau can be tricky to build. Once you understand the fundamentals, though, they teach key principles for navigating and building more advanced vizzes.</p><h3>What is a waterfall chart?</h3><p>Waterfall charts are a viz type that shows the cumulative effect of sequential increases or decreases in the key metric you are looking to monitor.</p><p>For example, if you were looking to understand the effect of sales, costs, and any strategic initiatives on total profitability, then a waterfall chart is probably the easiest and fastest way to show the contribution of each initiative.</p><h3>Data Preparation</h3><p>To start with, this is the viz we are going to build. You’ll notice that this is just a bar chart with some sneaky missing pieces. However, the trick to building these charts is to realize that, despite what it looks like, these are not actually bar charts.</p><p>In this example we’ll use the Superstore data to build out this chart. Our goal is to look at Superstore’s profitability for the most recent year. However, the Superstore data isn’t in the correct structure to easily support a waterfall chart (see <a href="https://medium.com/dataai-heb/from-idea-to-insight-5a3fb82762e9">here</a> for an explanation). 
In this case, I’ll filter the data to only the last year (2024 at the time of this writing). I’ll also pivot the measures that I want to include in the waterfall so that I have a unique row for each measure instead of a unique column for each. I’ll also calculate cost ( sum([Sales]) - sum([Profit]) ). You can see the structure below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/948/1*1DWqNdG1AGgPAuGPAaMR6g.png" /></figure><p>For simplicity, I’m going to copy this data and paste it into Excel to create a new data source that is a little easier to work with. If you are considering implementing a waterfall chart, I highly recommend pivoting your data before ingestion into Tableau through a platform like Databricks or Snowflake, especially if you are already using them in your development cycles. This can also be done in Tableau by pivoting the columns in the edit data source menu. With some simple edits in Excel, I can get something in this format.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/392/1*Nbgs4Ff5TiK90-AJAEWIOA.png" /></figure><p>Now I need to save that data somewhere I can easily access it via Tableau. Then I need to connect to the data in Tableau.</p><h3>Build the Waterfall</h3><p>Phew! Data prep is done. Random rant, but Tableau can be very sensitive to data structures. Some vizzes are easy to do with tall data but impossible with wide data. That is the case here. We shifted our data model from a wide view to a tall view, and that will make all the difference.</p><p>Now let’s get to work. I’ll add the category to the columns shelf and exclude the “Grand Total” values (at this point those are only going to be used for quality checks). 
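</p><p>As an aside, the data prep described above can be sanity-checked outside Tableau. Below is a minimal pandas sketch of the wide-to-tall pivot and the cost calculation; the column names and values are illustrative, not the real Superstore schema.</p>

```python
import pandas as pd

# Hypothetical wide, Superstore-style extract: one row per category,
# one column per measure (values are made up for illustration).
wide = pd.DataFrame({
    "Category": ["Furniture", "Office Supplies", "Technology"],
    "Sales": [220000.0, 240000.0, 270000.0],
    "Profit": [20000.0, 120000.0, 140000.0],
})

# Cost mirrors the sum([Sales]) - sum([Profit]) calculation.
wide["Cost"] = wide["Sales"] - wide["Profit"]

# Pivot to a tall layout: one row per (Category, Metric) pair, which is
# the structure the waterfall needs. Profit is dropped because the
# waterfall will recompute it as a running total.
tall = wide.drop(columns="Profit").melt(
    id_vars="Category", var_name="Metric", value_name="Value"
)
```

<p>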
Let’s also remove “Profit” from our metrics, as we’ll calculate that through the waterfall.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IenTGbYaP-t0fPjSyPCE-Q.png" /></figure><h4>Step 1: Create a height variable</h4><p>You’re probably thinking that this is just a bar chart and that you were promised something different. Now we’ll get wild and crazy. Let’s create a calculated field called “Waterfall Height”.</p><pre>sum(<br>    if [Metric] = &#39;Sales&#39; then [Value] <br>    else -[Value] <br>    end<br>)</pre><p>All this formula does is create a negative value for cost and a positive value for sales. Let’s add this to the rows shelf.</p><p>As color will be helpful for telling us what is going on, let’s also add “Waterfall Height” to colors on the marks shelf. We’ll also edit the color to be a 2-step diverging palette centered on zero, like this (I’ll be editing the default color choices because I don’t like the defaults, but use what works best for you).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/494/1*Bu8SHF5H36rC3bWSkJmUQg.png" /></figure><p>You should now have something that looks like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sCc0u6ieDPQJcni4LM-fbQ.png" /></figure><h4>Step 2: Create a running sum</h4><p>Now we’ll need to add a table calculation on “Waterfall Height” in the rows shelf. We will want to set it to a running total and can otherwise keep the default options.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/357/1*FDd-JAwKZYBU86OCbSaz3A.png" /></figure><p>You should get something that looks like this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZBbSalIc2xemlC10v8HIhg.png" /></figure><p>Now, if you’re still thinking that this isn’t a waterfall chart, you’re not wrong. Hang with me, though, as we’re almost there.</p><p>Next, change the mark type from Automatic to Gantt. 
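</p><p>The “Waterfall Height” field and the running-total table calculation we just set up can be replicated outside Tableau as a sanity check. Here is a rough pandas equivalent, using made-up values rather than the real Superstore numbers.</p>

```python
import pandas as pd

# Tall data: one row per category/metric pair (illustrative values).
df = pd.DataFrame({
    "Category": ["Furniture", "Furniture", "Office Supplies", "Office Supplies"],
    "Metric": ["Sales", "Cost", "Sales", "Cost"],
    "Value": [220000.0, 200000.0, 240000.0, 120000.0],
})

# Equivalent of the "Waterfall Height" calculated field: Sales stays
# positive, everything else is negated.
df["Waterfall Height"] = df["Value"].where(df["Metric"] == "Sales", -df["Value"])

# Equivalent of Tableau's running-total table calculation.
df["Running Total"] = df["Waterfall Height"].cumsum()
```

<p>Each running-total value is where the corresponding mark will sit on the axis, which is exactly what the Gantt marks use next.</p><p>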
Earlier I mentioned that a waterfall is not a bar chart in Tableau, and this is why. The running total we calculated earlier ended up taking sales and subtracting our cost for furniture, taking the remainder, adding office supplies sales, then subtracting office supplies cost, etc. If you think about what a waterfall chart does, it does just that. It takes your first observation and then creates a running tally of where you’re at. By changing to a Gantt chart, you’ll get this.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lKBufYXTMlSvQAlQ1zsBkA.png" /></figure><p>But there aren’t any bars yet! We’re going to apply a trick with Gantt charts. You can add a measure to size on the Gantt, which will create (in this case) vertical depth for your Gantt bar. Now, if we look at the first observation for furniture sales, we want the bar to go from zero to the height of the Gantt. We already have that distance, though: it’s just the sales value we calculated earlier in Waterfall Height. However, if we just add Waterfall Height to size, we’ll get a bar from ~$220K to ~$440K, which is not what we need. Instead, we need to add the negative of the Waterfall Height metric to size. This will shift the height of the Gantt down to the axis like we need. Let’s do that now by moving Waterfall Height to size, double-clicking, and adding a minus to the front of the calculation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bC3d_gDCtQhECoLmse4_rQ.png" /></figure><p>Now, this is looking like a waterfall chart, but we don’t have the total yet. Let’s add the row totals and edit the grand totals column name to show profit, as that is what should be reported now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wsneq1F5bgmAnV4PMx6-sg.png" /></figure><p>You now have your very first waterfall chart. Congratulations! 
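</p><p>To make the Gantt sizing trick concrete, here is a small sketch (plain Python, made-up values) of where each bar ends up: every mark is drawn at the running total, and sizing by the negative of Waterfall Height stretches the bar back to where the previous bar ended.</p>

```python
# Waterfall Height per mark: sales positive, cost negative (illustrative values).
heights = [220000.0, -200000.0, 240000.0, -120000.0]

running = 0.0
bars = []  # (bottom, top) of each Gantt bar on the axis
for h in heights:
    running += h
    # The Gantt mark sits at the running total; size = -h extends it
    # back to the previous running total.
    start, end = running, running - h
    bars.append((min(start, end), max(start, end)))
```

<p>Each bar starts exactly where the previous one ended, which is what makes the chart read as a waterfall.</p><p>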
If you’d like to explore or reverse-engineer my work, check out the viz <a href="https://public.tableau.com/app/profile/matt.huff/viz/SuperstoreWaterfall_17572142948680/WaterfallTutorial">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a141fd0572d2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/retail-tableau-user-group/chasing-waterfalls-a141fd0572d2">Chasing waterfalls</a> was originally published in <a href="https://medium.com/retail-tableau-user-group">Retail Tableau User Group</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>