<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Ekohe - Medium]]></title>
        <description><![CDATA[Designers | Developers | Creators - Medium]]></description>
        <link>https://medium.com/ekohe?source=rss----3a74bbd19b9f---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Ekohe - Medium</title>
            <link>https://medium.com/ekohe?source=rss----3a74bbd19b9f---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 21:08:52 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/ekohe" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The “Density” Paradox: Decoding Japanese vs. Global UX]]></title>
            <link>https://medium.com/ekohe/the-density-paradox-decoding-japanese-vs-global-ux-8ed286130b5c?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/8ed286130b5c</guid>
            <category><![CDATA[japanese-culture]]></category>
            <category><![CDATA[design-philosophy]]></category>
            <category><![CDATA[ui-design]]></category>
            <category><![CDATA[ux-writing]]></category>
            <category><![CDATA[ux-design]]></category>
            <dc:creator><![CDATA[Mai Sugahara]]></dc:creator>
            <pubDate>Tue, 27 Jan 2026 07:41:12 GMT</pubDate>
            <atom:updated>2026-01-27T07:41:11.244Z</atom:updated>
            <content:encoded><![CDATA[<h3>1. The Audit Gap: Global Standards vs. Japanese Reality</h3><p>Throughout my career, I have conducted numerous UX audits, primarily for e-commerce platforms. One persistent challenge I’ve faced is the overwhelming disparity in information volume between Japanese and international websites.</p><p>Even when dealing with the same global brand, a site designed for Japan can appear “cluttered” from a Western perspective. Conversely, a Western site can seem “unhelpful or lacking” to a Japanese user. Why is it that the global standard of “Simple = Best” doesn’t always hold true in Japan?</p><h3>2. Case Studies: Analyzing the Divide</h3><p><strong>Case 1: Toyota (Dynamic Experience vs. Process of Consensus)</strong></p><ul><li><strong>Toyota Japan:</strong> <br>Uses static images and meticulous explanations to facilitate a “careful deliberation” process. It provides gentle guidance to detailed pages via clear CTAs.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yIOmVqJzOhbFJKojIStEew.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*01BH-u1ZouMVagDLcj9jMQ.png" /></figure><ul><li><strong>Toyota USA:</strong> <br>Focuses on “immediate gratification” with high-energy videos and instant price transparency. It utilizes cards that allow users to see the model and price at a glance.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Jv9Xn52rL5Id8htrDtebcA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WyoILSi7CwQ4NaDQEV0gLw.png" /></figure><p><strong>Case 2: Zurich (Emotional Hook vs. Logical Trust)</strong></p><ul><li><strong>Zurich Japan:</strong> <br>Builds “logical peace of mind” by listing important notices and multiple CTAs. 
It uses a broader color palette compared to the US version.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2mMqOvguoL0JEK6axs2yQg.png" /></figure><ul><li><strong>Zurich USA:</strong> <br>Uses family-oriented visuals to create an “emotional hook” of safety. The CTA is streamlined, focusing primarily on inquiries.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Oihq83Nsy-YD4OAVHRcJfw.png" /></figure><p><strong>Case 3: Rakuten (Curated Catalog vs. Sea of Information)</strong></p><ul><li><strong>Rakuten Japan: <br></strong>Features multiple main and sub-carousels. It builds a “mountain of conviction” to eliminate any reason not to buy, covering everything from country of origin and materials to extensive styling photos.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o8LTnBDebYhuxwRq5TA_eQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZpwtWmWloPPNWof_eqxNcA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tX7WoeipndtbgflgYLvbIQ.png" /></figure><ul><li><strong>Rakuten France: <br></strong>Minimizes visual noise by containing information within clean cards. Each card typically features only five photos and essential product details.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*54qb9N-AqcQedD4hsue9jA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*f8IzyKuoIPiWHQReb8gz5A.png" /></figure><h3>3. The Psychology of Density: Why Information Matters</h3><p>The high information density found on Japanese websites is not merely a matter of aesthetic preference. It is deeply rooted in the unique decision-making processes of Japanese culture and their specific definition of “safety.”</p><h4>① “Intuitive” West vs. 
“Convinced” Japan</h4><p>There is a fundamental difference in how decisions are made:</p><ul><li><strong>Western Markets:</strong> <br>Prefer <strong>“Intuitive and Efficient Decision-Making,”</strong> where users judge value from minimal information and move quickly to the next action.</li><li><strong>Japanese Market:</strong> <br>Prioritizes <strong>“Careful and Comprehensive Decision-Making,”</strong> where users scrutinize every piece of information provided to eliminate risk.</li></ul><h4>② Data Proving the “Fear of Failure”</h4><p>This difference is clearly illustrated by social psychologist Geert Hofstede’s <strong>Cultural Dimensions Theory</strong>.</p><p><strong>Dimension: Uncertainty Avoidance</strong></p><ul><li><strong>Japan:</strong> 92</li><li><strong>USA:</strong> 46</li></ul><p>(<em>Source: </em><a href="https://www.researchgate.net/publication/381377185_Doing_Business_in_Japan_International_Perspectives#pf5"><em>Hofstede Insights — Country Comparison</em></a>)</p><p>Japan’s score is exactly double that of the US. This massive gap indicates that Japanese users feel significant anxiety toward “information voids.” The desire for meticulous explanation is a cultural manifestation of wanting to <strong>minimize uncertain risks</strong>.</p><h4>③ Defining “Safety” and “Sincerity” in Japan</h4><p>Consumer research echoes this need for comprehensive data. 
In a survey of 1,000 e-commerce users, the top request for improvement (cited by 60% of both men and women) was: <strong>“Please provide more detailed product information.”</strong> (Source: <a href="https://linestep.jp/2024/11/05/survey-ec/"><em>LINE STEP — E-commerce Consumer Survey</em></a>)</p><p>In Japan, “Safety” is proportional to “Information Volume.” Details such as country of origin, material specifics, precise size charts, and exhaustive FAQs are viewed as <strong>proof of the provider’s sincerity</strong>, forming the bedrock of trust.</p><h4>④ Linguistic Traits: “Visual Scanning” through Kanji</h4><p>Beyond psychological factors, the inherent nature of the Japanese language has shaped this unique digital evolution.</p><ul><li><strong>High-Speed Visual Recognition:</strong> <br>Kanji are “ideograms” that convey meaning in a single character. While “phonograms” (like the alphabet) require the brain to convert sounds into meaning, Kanji allow the brain to grasp meaning the moment the shape is seen.</li><li><strong>Information Scanning:</strong> <br>Japanese users are adept at “scanning” text composed of Kanji, Hiragana, and Katakana as if they were images. What looks like a “wall of text” to a Westerner functions as an <strong>efficient interface</strong> for a Japanese user to find necessary information instantly without excessive scrolling.</li></ul><h3>4. Being Human-Centric Requires a Shift in Design Philosophy</h3><p>At Ekohe, we pursue a <strong>Human-Centric</strong> design philosophy. 
Our goal is neither to simply “strip away information for simplicity” nor to “display information chaotically.”</p><p>As our analysis shows, the UX approach to building “trust” differs fundamentally between Japan and the rest of the world.</p><ul><li><strong>Cultural Translation:</strong> <br>We use global design systems as a foundation while restructuring information density to satisfy the Japanese user’s need for deep conviction.</li><li><strong>AI Optimization:</strong> <br>We leverage evolving AI technologies to organize vast amounts of data, personalizing information to ensure the right details are delivered at the right time.</li><li><strong>Visualizing Sincerity:</strong> <br>We manifest a company’s integrity through design, supporting the creation of robust trust between the brand and the user.</li></ul><p>What is required today is not to force a fit into one style or the other through simple “localization”; it is to deeply understand the “Shape of Trust” inherent in each culture and optimize it through technology. This represents an elevation of Design Philosophy.</p><p>As a partner leading this shift, Ekohe balances global quality with local safety, transforming intangible brand values into reliable, high-quality user experiences.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8ed286130b5c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/the-density-paradox-decoding-japanese-vs-global-ux-8ed286130b5c">The “Density” Paradox: Decoding Japanese vs. Global UX</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Prompt is the New Pixel]]></title>
            <link>https://medium.com/ekohe/the-prompt-is-the-new-pixel-594980b3044e?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/594980b3044e</guid>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[ui-ux-design]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[context-designer]]></category>
            <category><![CDATA[design-tools]]></category>
            <dc:creator><![CDATA[Qianyu Luo (Joey)]]></dc:creator>
            <pubDate>Thu, 23 Oct 2025 08:59:49 GMT</pubDate>
            <atom:updated>2025-10-23T08:59:45.674Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*j12XcjtgwkOyX3xUeHAXkg.png" /><figcaption><em>This work, by Joey Luo, is licensed under </em><a href="https://creativecommons.org/licenses/by/4.0/"><em>CC BY 4.0</em></a><em>. Adapted from “Nebula — AI prompt marketplace” by Kishu Raj Tyagi</em></figcaption></figure><p>When working on client projects, I keep running into the same old trio of trouble: <strong>pressing timelines, tight budgets,</strong> and preventable rework caused by <strong>communication gaps</strong>. The traditional flow (Figma ↦ Slide deck ↦ Presentation) often feels like moving through molasses — slow to iterate, inefficient, and prone to misunderstandings. Worse, many clients and engineering teams have limited patience for narrative decks. By the time a demo is actually needed, the startup cost is high and resources aren’t aligned — so <strong>plenty of good ideas stall at the design stage</strong>.</p><h4>Replit + UX: A Rapid-Validation Engine</h4><p>Recently, I started experimenting with AI tools for rapid prototyping and found a different path. After trying Replit, I now think of it as a <strong>rapid-validation engine</strong> for designers: low cost, fast feedback, and almost zero barrier to entry.</p><p>On a recent project, I spun up a minimal MVP in Replit. The result? Surprisingly good. Stakeholders immediately grasped the interaction logic, alignment happened quickly, and there was far less of that dreaded <em>“imagination gap.”</em> Many design decisions were made naturally during the build.</p><p>Replit doesn’t require a local dev environment. You can generate components and page structures in natural language, deploy online, and share a link so clients can actually click through. 
For designers, it shifts the task from <strong><em>“writing code”</em></strong> to <strong><em>“writing prompts.”</em></strong> That’s both a superpower — and a new kind of challenge. Prompt-craft is fast becoming one of our core design skills.</p><h4>The Design Language Is Expanding to Text</h4><p>Historically, our design language has been visual: layout, graphics, and flows. But today it’s <strong>expanding to text</strong>. Directing AI with prompts has become part of how we assemble interactions.</p><p>Writing good prompts isn’t about sprinkling in magic keywords; it’s about clarity, context, intent, and constraints. We can’t just say “make a pretty dashboard” — we need to teach the AI how we define “pretty” and what content to display on the dashboard. We need to use words precisely, with systems thinking, framing problems in ways AI can actually process. In the age of AI, our <strong>empathy extends</strong> not only to people but also <strong>to machines</strong>.</p><p>When written well, prompts don’t stay at the level of abstract ideas. They act as the bridge between intent and working prototypes. Instead of chasing visual polish, prompts emphasize <strong>structure &amp; behavior</strong> — so validation becomes faster, and A/B tests that used to wait until “next sprint” can now happen in “ten minutes.”</p><p>This shift naturally nudges our role forward. We’re moving <strong>from drawing pixels to authoring context</strong>: not just designing interfaces, but designing the very environment in which AI works. Text, visuals, and data — all of it becomes the context AI needs to generate meaningful output.</p><h4>Vibe Coding: A New Design Loop</h4><p>Learning some code doesn’t mean becoming an engineer. 
Most of the time, AI writes the bulk; our job is to <strong>review, steer, and adjust</strong>. The rhythm looks like this: <strong>Generate</strong> ↦ <strong>Judge</strong> ↦ <strong>Tune</strong>.</p><p>This loop gives designers more control over whether ideas actually run. It breaks us out of the old linear model (mock-up ↦ hand-off ↦ wait for release) and puts us into <strong>faster cycles of build-test-refine</strong>.</p><p>Think of it as vibe coding. It’s still design. But instead of handing off static artifacts, we’re <strong>building living, running experiences</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QYSVk7cuQdzZDOLTKf0RAw.png" /><figcaption><em>This work, by Joey Luo, is licensed under </em><a href="https://creativecommons.org/licenses/by/4.0/"><em>CC BY 4.0</em></a><em>. Adapted from “Nebula — AI prompt marketplace” by Kishu Raj Tyagi</em></figcaption></figure><h4>Keep the Essence While Embracing the Tools</h4><p>AI’s design capabilities are evolving quickly — which can feel unsettling: the tools are always changing, and every tool shift feels like a paradigm shift. Yet the essence of design doesn’t change: <strong>understand users and business context, clarify the logic, and structure the problem</strong>.</p><p>Only now, the order can change: AI supports generating content quickly and with little effort. Designers can then define the problem — and judge what’s good. Where in the past we advocated for waiting to understand, now we can iterate on a best guess.</p><p>If you’d like to try this AI-plus-code workflow, <strong>start small</strong>. Pick a simple problem and make a guess at your solution. Use prompts to generate a first pass. Don’t chase perfection; just get it running. Then iterate on logic and styles. Treat it as an <strong>alignment sandbox</strong> with your client, not the final product.</p><p>AI is reshaping the design process. 
We don’t have to stay locked in a linear pipeline; iteration is faster and easier now, so we can create right away — instead of waiting for alignment and understanding.</p><p>Designers don’t need to fear AI — we have to learn to use it, and treat prompts as a new design language. That means being precise in <strong>how we frame problems</strong>, <strong>treating context</strong> (text, visuals, data) <strong>as design material</strong>, and <strong>working in loops of generate, judge, and tune</strong>.</p><p>Maybe the future really will be “the mind is the interface.” Either way: AI extends our intent; <strong>design remains the art of thinking</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=594980b3044e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/the-prompt-is-the-new-pixel-594980b3044e">The Prompt is the New Pixel</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Designing Emotion:]]></title>
            <link>https://medium.com/ekohe/designing-emotion-a94b6fa71bef?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/a94b6fa71bef</guid>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[creativity]]></category>
            <category><![CDATA[ux-ui-design-thinking]]></category>
            <category><![CDATA[human-emotions]]></category>
            <dc:creator><![CDATA[Johanne Chen Min Tao]]></dc:creator>
            <pubDate>Mon, 28 Jul 2025 07:11:47 GMT</pubDate>
            <atom:updated>2025-07-28T07:11:47.228Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>Why the Human Touch Still Matters in the Age of AI</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/790/1*WV9PCpyOnbaj8z1XQi_BQw.png" /><figcaption>Emotional, abstract artworks “wrong Lane, Bring Lights” #ovawhelm</figcaption></figure><p><em>Based on my creative practice as a designer, artist, and observer of how AI is being integrated into our workflow</em></p><p>I’m a UI designer. I’m also a painter.</p><p>In my free time, I create emotional, abstract artworks — visual responses to feelings too complex or quiet for words. It’s not just a hobby. It’s how I stay in touch with something I fear we’re starting to overlook: the human instinct to feel, interpret, and express meaning beyond efficiency.</p><p>And this isn’t just about my art. It deeply shapes how I see design, especially now that AI is becoming so embedded in our daily work. While there’s real excitement around what AI can do — and some genuinely helpful tools emerging — I’ve also been asking myself: what are we quietly giving up in the process?</p><h3>A Personal Reflection, Not a Process Map</h3><p>Let me be clear: this isn’t a manifesto about how we’ve nailed the balance between AI and creativity at Ekohe. It’s not even about what we currently do. It’s about what I notice, what I feel, and what I think we should be talking about more openly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ToTMHeK4g5ocT4AnQkySWA.jpeg" /></figure><p>I’ve seen AI improve workflows. I’ve used it to prototype UI ideas, generate content, and speed up repetitive tasks. But I’ve also seen how easy it is to let its speed and volume obscure nuance. And more than once, I’ve had the gut feeling that something was “off” — too cold, too generic, too disconnected from the end user’s emotional reality.</p><p>That kind of feedback doesn’t usually show up in Gitlab tickets or Figma comments. But I think it should. 
And I don’t think AI is ready to sense it for us.</p><h3>The Fal Experiment — and Its Limits</h3><p>Lately we’ve been experimenting with a LoRA-trained model on FAL to generate visuals in line with our brand. It’s a smart tool, and the results are impressive <em>at first glance</em>. But they still need a lot of human adjustment. In fact, the further I push into using these tools, the more I notice this: what they generate isn’t wrong — it’s just missing something.</p><p>That “something” is often emotion, tone, subtlety, cultural context. And that’s the part I believe designers — and especially emotionally-attuned ones — still need to protect.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tTftUKUiHTV4yqQmOKHksQ.jpeg" /></figure><p>This makes me wonder:<br> Should we continue refining these models to match our brand style, or should we be building workflows where human review is not just a final step, but an essential layer of interpretation?<br> Could we define what emotional curation means, and make space for it?<br> Could a design QA process include a human <em>feeling-check</em>, not just technical accuracy?</p><h3>Is It Time to Question the Direction?</h3><p>If I’m honest, I think some of our current AI integration is leaning more toward efficiency than experience. And I get why: fast delivery, client expectations, fewer resources. But the tradeoff we’re making isn’t always visible in the short term. 
It shows up later, when users disengage, when interfaces feel sterile, or when products feel interchangeable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DyywrhkgMj8fdvdAkYs9cQ.jpeg" /></figure><p>From where I stand, it feels like a good time to ask:<br> Are we building tools that help us design better — or just faster?<br> Are we making space for critique, refinement, and intuition — or are we over-automating the parts of design that need the most human thought?</p><p>If our long-term vision is to stay creative, distinctive, and emotionally resonant — especially across cultures and industries — I believe we need to slow down and protect the part of design that AI doesn’t understand: feeling.</p><h3>Creativity Isn’t a Feature — It’s a Value</h3><p>This isn’t a call to stop using AI. It’s a call to reframe how we use it. To stop assuming that “human-first” just means giving a person the final review. To question whether we’ve built the right systems to support emotion, depth, and imperfection — the things that make design <em>feel</em> human in the first place.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BONCRTmipgZbVEi5SeFMkw.jpeg" /></figure><p>In my own creative practice, nothing meaningful ever happens when I rush. A painting only works when I stay open, critical, and curious. I think Design is the same. And I think our current moment is asking us to decide whether we want to lead the integration of AI thoughtfully — or let it quietly flatten the things that make our work meaningful.</p><p>This is just my perspective. 
But if we say we value the human touch, now might be the time to prove it — not just in our output, but in how we build our process.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a94b6fa71bef" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/designing-emotion-a94b6fa71bef">Designing Emotion:</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI Coding Assistant: TONGYI Lingma Vs MarsCode]]></title>
            <link>https://medium.com/ekohe/ai-coding-assistant-tongyi-lingma-vs-marscode-bae34fd5b59f?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/bae34fd5b59f</guid>
            <category><![CDATA[vscode-extension]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[bytedance]]></category>
            <category><![CDATA[alibabacloud]]></category>
            <category><![CDATA[ai-coding-assistant]]></category>
            <dc:creator><![CDATA[Benedict Chan]]></dc:creator>
            <pubDate>Wed, 27 Nov 2024 21:51:42 GMT</pubDate>
            <atom:updated>2024-11-27T21:50:40.678Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/978/0*RDaKz-TzqjKCeSXf" /><figcaption>(Modified from the “Average Fan vs Average Enjoyer” meme)</figcaption></figure><h3>Introduction</h3><p><strong>Disclaimer:</strong></p><p>This post was written with assistance from LLMs. It is not sponsored by either of the companies behind TONGYI Lingma and MarsCode.</p><p>Large Language Models (LLMs) have rapidly transformed the landscape of software development. AI coding assistants, such as Cursor IDE and Copilot extensions, have become invaluable tools for developers. In this post, we’ll delve into a comparative analysis of two prominent Chinese GenAI coding assistants: MarsCode and TONGYI Lingma.</p><p>By examining benchmarks, cost considerations, and hands-on experiences, we aim to provide insights into their target audiences and use cases.</p><h3>TONGYI Lingma</h3><p>TONGYI Lingma is an advanced AI-powered coding assistant developed by Alibaba Cloud. 
Launched in October 2023, it has rapidly gained popularity, surpassing two million downloads and securing a significant market share in China.</p><p>Key features and capabilities of TONGYI Lingma include:</p><ul><li><strong>Code Generation:</strong> Generates code snippets based on natural language prompts, accelerating development processes.</li><li><strong>Debugging and Optimization:</strong> Identifies potential errors and suggests improvements, enhancing code quality and performance.</li><li><strong>Task Automation:</strong> Automates repetitive tasks, such as test case generation, significantly reducing manual effort.</li><li><strong>Seamless Integration:</strong> Integrates with popular IDEs like IntelliJ, making it accessible to developers of all experience levels.</li><li><strong>Broad Language Support:</strong> Supports over 200 programming languages, including widely used languages like Java and Python.</li></ul><p>By leveraging Tongyi, Alibaba Cloud’s powerful large language model, TONGYI Lingma empowers developers to work more efficiently and effectively.</p><h3>MarsCode</h3><p>MarsCode is an innovative cloud-based Integrated Development Environment (IDE) designed by ByteDance to revolutionize the software development process. 
It is also available as an AI-powered coding assistant plug-in for mainstream IDEs such as VS Code and the JetBrains family. By seamlessly integrating advanced AI capabilities, MarsCode empowers developers to write code more efficiently, accurately, and creatively.</p><p><strong>Key Features of MarsCode:</strong></p><ul><li><strong>AI-Powered Assistance:</strong> Leverage the power of AI for tasks such as code completion, generation, and bug detection.</li><li><strong>Comprehensive Development Environment:</strong> Support for multiple programming languages, including Python, JavaScript, Java, and C++.</li><li><strong>Cloud-Based Accessibility:</strong> Access your projects from any device with an internet connection.</li><li><strong>Integrated Tools for Serverless Functions:</strong> Effortlessly build and deploy serverless applications.</li><li><strong>MarsCode Agent for Automated Bug Fixing:</strong> Automatically identify and fix bugs in your code, improving code quality and reducing development time.</li></ul><p>By streamlining the development workflow and providing intelligent assistance, MarsCode significantly enhances developer productivity and enables the creation of high-quality software applications.</p><h3>Comparison</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*40m8OX0vi6hcNBv-ci6nzA.png" /></figure><p>When we compare the <strong>functions</strong> of TONGYI Lingma and MarsCode, there do not seem to be many significant differences. As a result, I spent an entire workday working with both AI coding assistants.</p><h3>Impression</h3><p>From my experience, the accuracy of the generated code often depends on how up to date the relevant packages are and on the clarity of your prompts. Given the limited number of questions I asked, it would be unfair to judge which one is more accurate.</p><p>In this section, I will compare my impressions of each extension in VS Code. 
This comparison is unbiased and free from sponsorship.</p><p>I compare both tools as VS Code extensions, since it would be unfair to pit MarsCode’s cloud-based IDE against the TONGYI Lingma extension.</p><p>Please note that some confidential information has been redacted in the screenshots, and I haven’t explored all the features in the documentation.</p><h3>Installation</h3><p>In my opinion, the registration procedure for TONGYI Lingma is more time-consuming than that of MarsCode.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fphk6kRLulTVna-oz3DHDQ.png" /></figure><p>Of course, there are some agreements to be accepted, but I have omitted them for brevity. For more detailed information, please refer to the respective documentation. The links are available in the Reference section.</p><h3>Interface</h3><h3>TONGYI Lingma</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sFoMeXuES3ahb0wi" /><figcaption>(A screenshot illustrating the guidelines for TONGYI Lingma Chat and its corresponding settings)</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/784/0*cBYsIdPBhhNgvEGl" /><figcaption>(A screenshot illustrating the “Explain using TONGYI Lingma” feature in the terminal)</figcaption></figure><p>By default, TONGYI Lingma is set to Chinese, not auto-language, as evidenced by the Chinese comments generated. 
If you want all replies in English, go to the extension settings page and switch the default language.</p><p>The “Explain using TONGYI Lingma” feature helps a lot when you encounter errors in the terminal.</p><h3>MarsCode</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*xVXmd3_qD6M-epyy" /><figcaption>(A screenshot illustrating the guidelines for MarsCode Chat and its corresponding settings)</figcaption></figure><p>On the other hand, MarsCode operates in auto-language mode by default.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/710/0*l21Z6c-H5T8qlUdS" /><figcaption>(A screenshot illustrating MarsCode in the right-click menu)</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/432/0*DqxoN8IQm1FPDBDO" /><figcaption>(A screenshot illustrating the quick command within the code)</figcaption></figure><p>There are fewer default shortcuts and commands compared to TONGYI Lingma.</p><p>You can apply the commands either via the right-click menu or by directly clicking the quick command displayed above any function in the code.</p><p>The quick command feature, which appears above all functions, is a particularly innovative approach, as it eliminates the need to highlight code and execute commands through clicks.</p><h3>Documentation</h3><h3>TONGYI Lingma</h3><p>The documentation for TONGYI Lingma is remarkably comprehensive, even including an article about evaluating the benefits of using AI-powered coding assistants.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CJm5-wo5fjeDeVIA" /><figcaption>(A screenshot of the article on evaluating the benefits of an AI-powered coding assistant)</figcaption></figure><p>Besides, the documentation illustrates how one can use the enterprise version to maintain deployment code with TONGYI Lingma.</p><p>Unfortunately, I didn’t have much time to experiment with this.</p><p>The potential value is very high if you know how to utilize this AI assistant effectively.</p><p>So 
far, I’ve only found the Chinese documentation. If anyone has found an English version, please share it.</p><h3>MarsCode</h3><p>Since MarsCode’s primary focus is the cloud-based IDE, there doesn’t seem to be much content specifically for the AI assistant part.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*4KFoxBM39iP4xFvy" /><figcaption>(A screenshot illustrating the side chat)</figcaption></figure><p>But upon closer inspection, we found animations that clearly illustrate step-by-step instructions. It’s simple and easy to understand.</p><p>Please note that I skipped the chat part in both plugins, as they are more or less the same from a user-experience standpoint.</p><h3>Review</h3><p>My impressions of the TONGYI Lingma and MarsCode extensions are summarized below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FVr-eQS2lFvjHk1PZKZOdw.png" /></figure><p><em>*Potential refers to the ability to enhance work efficiency.</em></p><h3>Conclusion</h3><p>After exploring the extensions of TONGYI Lingma and MarsCode, we can see that technology companies in China are working hard to improve user experience to differentiate themselves from competitors in the market for AI-powered programming assistants.</p><p>In a nutshell, there’s no better AI-powered tool than the one you’re familiar with to maximize output with minimal effort.</p><p>You should continue to experiment and learn as you always have.</p><p>See you next time!</p><h3>References</h3><ol><li><strong>1AI.</strong> (n.d.). <em>AI-assisted programming insights</em>. Available:<a href="https://www.1ai.net/en/13962.html"> https://www.1ai.net/en/13962.html</a>.</li><li><strong>Aliyun.</strong> (n.d.). <em>How to measure the benefits of AI-assisted programming</em>.
Available:<a href="https://help.aliyun.com/zh/lingma/use-cases/how-to-measure-the-benefits-of-ai-assisted-programming"> https://help.aliyun.com/zh/lingma/use-cases/how-to-measure-the-benefits-of-ai-assisted-programming</a>.</li><li><strong>Aliyun Tongyi.</strong> (n.d.). <em>Tongyi AI platform</em>. Available:<a href="https://tongyi.aliyun.com/lingma/"> https://tongyi.aliyun.com/lingma/</a>.</li><li><strong>Marscode.</strong> (n.d.). <em>Home page</em>. Available:<a href="https://www.marscode.com/home"> https://www.marscode.com/home</a>.</li><li><strong>Marscode.</strong> (n.d.). <em>Use AI capabilities</em>. Available:<a href="https://docs.marscode.com/docs/use-ai-capabilities?_lang=en"> https://docs.marscode.com/docs/use-ai-capabilities?_lang=en</a>.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bae34fd5b59f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/ai-coding-assistant-tongyi-lingma-vs-marscode-bae34fd5b59f">AI Coding Assistant: TONGYI Lingma Vs MarsCode</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlocking Quick Wins: AI for Process and Knowledge Efficiency]]></title>
            <link>https://medium.com/ekohe/unlocking-quick-wins-ai-for-process-and-knowledge-efficiency-90b3131f258c?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/90b3131f258c</guid>
            <category><![CDATA[process-automation]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[generative-ai-solution]]></category>
            <category><![CDATA[google]]></category>
            <category><![CDATA[knowledge-management]]></category>
            <dc:creator><![CDATA[Doug Dyson]]></dc:creator>
            <pubDate>Tue, 08 Oct 2024 22:25:52 GMT</pubDate>
            <atom:updated>2024-10-08T22:42:07.469Z</atom:updated>
            <content:encoded><![CDATA[<p>Are you looking for quick wins with AI to make your business processes faster, get your team on the same page, and increase the reach of company knowledge?</p><p>If so, this article is for you. I’ll show you how to leverage AI to achieve immediate impact in streamlining processes, improving team alignment, and boosting the quality and reach of knowledge in your organization.</p><p>This is the first in a series where we break down a cost-effective, rapidly implementable AI framework. We’ll explore the system architecture I’ve built, which integrates familiar tools with a central knowledge server to optimize workflows — quickly adaptable to evolving business needs.</p><h3>Understanding the Core Components</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H0jZvmLgmO1G_QMfuKb8kg.jpeg" /></figure><p>I designed this architecture to streamline workflows and knowledge management by combining widely adopted tools with targeted, custom AI-powered solutions.</p><h3><strong>Here’s a breakdown of the major components:</strong></h3><p><strong>FastAPI Knowledge Server</strong> <br>The <a href="https://fastapi.tiangolo.com/">FastAPI</a> Knowledge Server is the central hub of the system, connecting all components. It manages the flow of Large Language Model (LLM) requests between tools and ensures seamless access to information. It also integrates company values, AI policies, and culture, acting as a safeguard to align automated processes with organizational goals.</p><p><strong>Knowledge Base</strong> <br>The knowledge base stores key information, including procedures, project overviews, templates, contact lists, and company updates. It is continuously curated, expanded, and refined, ensuring that teams always have access to up-to-date information.</p><p><strong>Values, AI Policies, Culture</strong> <br>The knowledge server incorporates prompts into every request, reflecting company values and integrating AI policies. 
This ensures that automated processes or decision-making tasks remain in line with the company’s culture and ethical standards. By embedding these values within the knowledge base and tools, the architecture maintains alignment in every request, ensuring consistency across all workflows.</p><p><strong>Google Workspace Add-Ons</strong> <br><a href="https://developers.google.com/workspace/add-ons/overview">Google Add-Ons</a>, written with <a href="https://developers.google.com/apps-script">Google Apps Script</a>, automate tasks like client research, proposal generation, and project setup. By integrating these Add-Ons, users can seamlessly generate and collaborate on important documents without switching platforms, ensuring efficiency and real-time collaboration.</p><p><strong>Slack Knowledge Bot</strong> <br>The <a href="https://api.slack.com/docs">Slack bot</a> brings the knowledge base directly into team communication channels. Users can ask questions and receive answers without leaving their workflow. The bot retrieves information from the knowledge base via the knowledge server, ensuring quick, accurate responses.</p><p><strong>GitLab Summarizer Chrome Extension</strong> <br><a href="https://docs.gitlab.com/ee/integration/">GitLab</a> is Ekohe’s line of business system, where all code and projects are managed. The AI architecture includes a <a href="https://developer.chrome.com/docs/extensions">custom Chrome extension</a> that summarizes GitLab issues. This simplifies onboarding by providing detailed summaries of complex issues, allowing users to quickly understand an issue without navigating long histories.</p><p><strong>Google Sheets: Knowledge Base Scoring</strong> <br>Google Sheets, together with Google Apps Script, plays a crucial role in maintaining knowledge base quality. A feedback loop is established where questions and responses are scored based on accuracy and helpfulness. 
This scoring system utilizes <a href="https://build.nvidia.com/nvidia/nemotron-4-340b-reward">Nemotron-4-340B-Reward</a> to maintain high standards and continually refine the content.</p><h3><strong>Knowledge Server: The Heart of the System</strong></h3><p>At the core of the architecture is the FastAPI Knowledge Server, which provides LLM services to all components. I genuinely enjoy building this central piece.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4eXgmZz6ubvcudDmXwyHyw.jpeg" /></figure><p><strong>Control and Visibility Over the Question-Response Loop</strong> <br>Unlike generalized AI solutions like ChatGPT, which often operate as a black box, the knowledge server provides full transparency of the question-response loop. This transparency allows continuous improvement of the knowledge base through scoring responses and refining content based on feedback.</p><p><strong>Vendor Flexibility: Adaptable to Different LLMs</strong> <br>The knowledge server is designed with vendor flexibility in mind, allowing for easy switching between different LLMs. This adaptability ensures we stay current with AI advancements or shift to alternative models based on specific needs, without disrupting the overall system.</p><p><strong>Centralized LLM Calls</strong> <br>By centralizing LLM calls, the knowledge server provides data and insights into AI adoption rates and associated costs. The secure endpoint is OpenAI-compatible, enabling seamless integration with other AI solutions.</p><p><strong>Full Control Over Prompts and Values Integration</strong> <br>The knowledge server embeds company values, AI policies, and workflows directly into the system to ensure consistency across interactions. 
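As a concrete illustration of that prompt-composition step, here is a minimal sketch. The prompt text, function names, and the injected client interface are my own illustrative assumptions, not the actual Ekohe implementation:

```python
# Minimal sketch of the knowledge server's prompt composition.
# COMPANY_VALUES and all names below are illustrative placeholders.

COMPANY_VALUES = (
    "You are the company knowledge assistant. Follow company AI policies: "
    "answer from the knowledge base, protect client data, and stay factual."
)

def build_messages(question: str, kb_context: str) -> list:
    """Wrap every request with the values system prompt plus any
    retrieved knowledge-base context, so alignment is enforced centrally."""
    return [
        {"role": "system", "content": COMPANY_VALUES},
        {"role": "system", "content": "Knowledge base context:\n" + kb_context},
        {"role": "user", "content": question},
    ]

def ask(question: str, kb_context: str, llm_client):
    """Forward the composed messages to any OpenAI-compatible backend.
    The client is injected, so LLM vendors can be swapped freely."""
    return llm_client.chat(messages=build_messages(question, kb_context))
```

Because every tool calls through this one path, the values prompt cannot be skipped, and switching vendors reduces to passing in a different client.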
This consistency will be increasingly important as AI systems play more autonomous roles in decision-making and processes.</p><h3>Knowledge Base</h3><p>The knowledge base contains a wide range of essential information, such as procedures, project overviews, contact lists, templates, company updates, and other critical documentation. This centralized resource allows both teams and AI agents to access the information they need to perform tasks efficiently.</p><p>For example, when onboarding new team members or launching a project, the knowledge base provides step-by-step procedures, templates, and contact details for key stakeholders, reducing time spent searching for information and allowing teams to focus on their core work.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TReSJmX-84aex2JIcMbmjw.jpeg" /></figure><p><strong>Curating Quality Knowledge</strong> <br>Maintaining a high-quality knowledge base is crucial. Content in traditional knowledge bases can become stale, reducing user trust. By focusing on adding only valuable, high-quality information and leveraging feedback loops, we ensure the knowledge base remains relevant and reliable.</p><p>It’s important to be selective about what content is included. Adding unnecessary information can dilute the quality of responses and erode trust. This approach emphasizes content that adds real value, using scoring systems to regularly review and improve the knowledge base.</p><p><strong>Feedback Improvement Loop</strong> <br>A key benefit of centralizing LLMs via the knowledge server is the ability to evaluate responses comprehensively. Using scoring models, we score responses based on helpfulness, correctness, coherence, complexity, and verbosity. 
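A rough sketch of that scoring step follows; the `rate` interface on the reward model and the threshold values are assumptions for illustration, not production code:

```python
# Sketch of the feedback-scoring loop. The reward-model interface is
# stubbed out and the thresholds are illustrative, not production values.

DIMENSIONS = ("helpfulness", "correctness", "coherence", "complexity", "verbosity")

def score_response(question: str, answer: str, reward_model) -> dict:
    """Have a reward model (e.g. one served behind an OpenAI-compatible
    endpoint) rate the answer on each dimension; returns {dimension: score}."""
    return reward_model.rate(question=question, answer=answer)  # assumed interface

def flag_for_review(scores: dict, min_helpfulness: float = 2.5,
                    min_correctness: float = 3.0) -> bool:
    """Flag low-scoring responses so the underlying knowledge-base
    entry gets reviewed and refined."""
    return (scores["helpfulness"] < min_helpfulness
            or scores["correctness"] < min_correctness)
```

Centralizing LLM calls in the knowledge server is what makes this loop possible: every question-response pair passes through one place where it can be scored and logged.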
This process helps identify areas for improvement and guides updates to the knowledge base.</p><p><strong>Data Sets: Proactive Knowledge Improvement &amp; Fine Tuning</strong> <br>By proactively generating new questions, we can test responses against the knowledge base and evaluate their scores. This helps identify gaps early on, before users encounter low-quality responses.</p><p>High-quality responses from both user interactions and synthesized data contribute to a fine-tuning dataset. By combining these sources, we ensure that all high-quality data contributes to improving model performance. Fine-tuning enables organizations to tailor an LLM to their needs by training it on internal knowledge and best practices.</p><p>Using LLMs with permissible licenses, such as Meta’s Llama and NVIDIA’s Nemotron, allows us to utilize generated data freely for model fine-tuning while ensuring compliance with operational goals.</p><p><strong>Knowledge Bases vs ChatGPT GPTs</strong> <br>While <a href="https://help.openai.com/en/articles/8554407-gpts-faq">ChatGPT’s GPTs</a> are great for generating content and providing high-quality responses, they lack the critical features needed for effective knowledge base management. Specifically, GPTs do not support continuous curation, and they lack feedback mechanisms and analytics. Without these insights, it’s difficult to determine how often a GPT is being used, what questions are being asked, or whether the provided answers are meeting user needs.</p><p>Additionally, GPTs do not facilitate proactive knowledge improvement, making it impossible to identify gaps or refine the model based on real usage. GPTs can only be created by paying users and free tier users cannot leave feedback for the GPT creator. 
The lack of a feedback loop means content can become outdated, reducing user trust.</p><p>Finally, GPTs impose rate limits on free-tier users, forcing anyone who exceeds the threshold to wait until the limit resets.</p><p>For these reasons, GPTs are not suitable for building a comprehensive knowledge base. They lack the capabilities required for scalable, integrated knowledge management.</p><h3><strong>Process Automation with Widely Adopted Tools</strong></h3><p>The architecture supports a variety of process automations, using widely adopted tools such as Slack, Google Docs, Google Slides, and Google Workspace to streamline workflows. Here are four examples of how this system automates business processes:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vNOv2T-y7WOHn0YNOrOHog.jpeg" /></figure><p><strong>Client Research Agent (Google Docs)<br></strong>This Google Add-On for Docs automates client research and market intelligence gathering, performing competitive analysis and generating market insights for a client’s industry.</p><ul><li><em>Data Retrieval:</em> I use the Tavoli API to run web searches that gather market insights and information on client competitors.</li><li><em>Natural Language Processing: </em>With OpenAI’s GPT models, we analyze text, generate questions, and provide well-formed answers.</li><li><em>Data Processing: </em>Our system identifies key competitors, checks the data for consistency, and organizes insights.</li><li><em>Report Generation:</em> Finally, all the processed data is compiled into a report that includes sections for client overview, competitors, market trends, and strategic insights.</li></ul><p><strong>Proposal Generator (Google Slides)<br></strong>I use a streamlined approach to generate the first draft of a proposal, leveraging a few key tools to ensure efficiency.</p><ul><li><em>GitLab Integration:</em> We automatically pull in relevant details directly from GitLab, including 
opportunity requirements and client information. This allows us to keep everything up to date and reduces manual data entry.</li><li><em>Google Slides Population: </em>Using a flexible Google Slides template, the system creates new proposals automatically, populated with relevant project information. This saves time and ensures consistency across all proposals.</li><li><em>OpenAI-Powered Responses: </em>For areas where additional content or suggestions are needed, we use OpenAI models to generate context-aware responses based on the opportunity data. This includes generating concise descriptions or filling in specific details automatically.</li><li><em>Draft Proposal:</em> Once everything is pulled together, the system generates a first draft deck in Google Slides, ready for review and revision by the team.</li></ul><p><strong>Project Setup Automation (Google Workspace App Script)<br></strong>This automation is designed to make starting a new project quick and seamless by creating folders, copying templates, and organizing documents automatically.</p><ul><li><em>Project Folder Creation:</em> With just one click, the system generates a new project folder based on a predefined directory template. Each project gets its own folder structure, keeping everything organized from the start.</li><li><em>Template Automation:</em> The system automatically copies essential subfolders and files (like contracts, reports, and project templates) into the new project folder. 
This saves time and ensures that every project follows a consistent structure.</li></ul><p>This eliminates the need for manual folder creation and document organization, letting users focus on the actual work while ensuring everything is stored in the right place.</p><p><strong>GitLab Issue Summarization (Chrome Extension)<br></strong>The GitLab issue summarizer Chrome extension simplifies the onboarding of new team members and issue management by generating plain-language summaries of complex GitLab tickets:</p><ul><li><em>Issue Summarization: </em>a comprehensive summary of the issue, including its status, key points, and major developments.</li><li><em>Priority Identification:</em> highlights the most pressing tasks and important aspects of the issue.</li><li><em>Progress Tracking:</em> details who is working on what, including specific contributions and ongoing work.</li><li><em>Question Generation:</em> generates critical questions to ensure clarity and remove potential blockers.</li><li><em>Action Suggestions: </em>suggests impactful next steps to resolve the issue efficiently.</li></ul><h3>The Line of Business System: GitLab</h3><p>Ekohe uses GitLab as our line of business system to manage projects, track requirements, handle code repositories, and store initial client information and requirements. It provides a comprehensive view of both our internal workflows and client-related data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7n1Y7n8nEOaIYEHyWkCSvg.jpeg" /></figure><p>The GitLab Chrome extension and proposal generator feed directly from this system, enabling informed project management and proposal creation. 
Because I rely on an API-driven approach, this system is adaptable to other platforms with integration capabilities, such as JIRA, Confluence, Asana, and Salesforce, making it highly flexible.</p><h3><strong>How All the Components Work Together</strong></h3><p>The architecture integrates various tools and components to automate workflows, ensuring smooth transitions across different project stages. Here’s how these components work together in real-world scenarios:</p><ul><li><strong>Understanding a New Potential Client with the Research Agent</strong>: The research agent gathers market insights and competitor information, pulling data from external sources and the knowledge base to compile a research document in Google Docs, providing a clear understanding of the client’s industry.</li><li><strong>Generating a Draft Proposal for the New Potential Client</strong>: The proposal generator creates a Google Slides draft, using relevant data from GitLab to tailor the proposal to the client’s needs.</li><li><strong>Kicking Off a Project for the New Client with Google Workspace Automation</strong>: After proposal acceptance, the system automates project setup, creating folders, populating templates, and organizing documents in Google Drive, ensuring the project team can start immediately.</li><li><strong>Team Member Onboarding with Knowledge Bot and Chrome Extension</strong>: New team members can quickly get up to speed using the Slack knowledge bot and the GitLab Issue Summarizer, ensuring they understand key details without sifting through long threads.</li></ul><h3><strong>Summary</strong></h3><p>This AI architecture is designed to streamline workflows, enhance knowledge management, and drive process automation using familiar tools.</p><p>By integrating systems like GitLab, Google Workspace, and Slack with our custom FastAPI Knowledge Server, I can automate critical tasks, manage client insights, and improve the quality and reach of our knowledge base.</p><p>The 
benefits are clear:</p><ul><li><strong>Faster business processes</strong></li><li><strong>Reduced manual effort</strong></li><li><strong>Deeper client insights</strong></li><li><strong>Flexibility to evolve with business needs</strong></li><li><strong>Greater consistency across projects</strong></li></ul><p>By embedding company values and AI policies directly into the system, we ensure every decision and task aligns with your organization’s goals. The architecture also offers flexibility, allowing adaptation to different line-of-business systems and ensuring smooth integration.</p><p><strong>In upcoming posts, we’ll dive deeper into:</strong></p><ul><li>Process automation</li><li>Effective knowledge base management</li><li>AI research agents</li><li>Client proposal generation</li><li>Values-based prompting</li></ul><p>Each of these areas plays a key role in bringing AI-driven efficiency and intelligence to day-to-day operations.</p><h3><strong>Call To Action</strong></h3><p>I love building AI-powered tools that help streamline processes, deepen insights, and continuously improve the quality and reach of knowledge.</p><p>Coding these solutions myself provides me with a hands-on perspective of AI’s vast potential in the workplace and the unique challenges of AI deployment. Super exciting times ahead!</p><p>Are you working on an AI project? Automating business processes? Managing organizational knowledge? I’d love to hear about it. What business processes are you automating? 
What tasks do you want to automate?</p><p>Please reach out if you want to collaborate or need help with your AI project!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=90b3131f258c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/unlocking-quick-wins-ai-for-process-and-knowledge-efficiency-90b3131f258c">Unlocking Quick Wins: AI for Process and Knowledge Efficiency</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Integrating AI in UI Design: A Creative Director’s Perspective]]></title>
            <link>https://medium.com/ekohe/integrating-ai-in-ui-design-a-creative-directors-perspective-dda84c1427f5?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/dda84c1427f5</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[ui-design]]></category>
            <category><![CDATA[creativity]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Johanne Chen Min Tao]]></dc:creator>
            <pubDate>Thu, 08 Aug 2024 10:28:28 GMT</pubDate>
            <atom:updated>2024-08-08T10:28:28.046Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_1dyBnqULiTMekTEaaC2pA.png" /></figure><p>When ChatGPT first came out, I was skeptical. I thought it was just another automated tool that would end up frustrating me, like those impersonal robots we deal with on chat services or phone systems. The lack of human interaction and genuine contact was disheartening. But within a few weeks, I realized how quickly this AI became my best buddy in my day-to-day life as a Creative Director. It didn’t just help me; it became a trusted sidekick for all the designers on my team.</p><p>We all have our flaws and moments of self-doubt. Some of us second-guess ourselves when challenged, while others need to double-check details before presenting them to their manager or superior. And then there are those who are too shy to ask for advice. Well, that era is over now that we’ve found this reliable ally. Sure, it might say some weird things sometimes, and a double check on the info might be advised, but most of the time, it gives us that little boost of confidence we need to thrive.</p><p>Realizing this made me see the potential of integrating AI into the creative process, a journey I knew would be both exciting and complex. In this article, I’ll share how, as a Creative Director, I use AI to amp up my UI designs and why, despite its helpfulness, I still believe it can’t fully replace the creative mind.</p><p><strong>The Role of AI in My Creative Process</strong></p><p>AI has become an indispensable tool in my creative process, especially when it comes to UI design. Here’s how I incorporate it into my work:</p><p><strong><em>Generating Ideas and Inspiration:</em></strong> One of the toughest parts of design is coming up with fresh, innovative ideas. We’ve all been there — staring at a blank page with zero inspiration. AI tools are gold for this! They can crunch loads of design data and trends to spark new ideas and inspiration. 
For example, by feeding it certain keywords or themes, AI can suggest design elements that I might not have considered. This kick-starts the creative process and gives me a solid foundation to build on. It’s a real game changer; think about that shy designer who can now kick off their own brainstorming session to find new ideas and come to their superior more confidently for advice.</p><p><strong><em>Creating Content:</em></strong> We all know that client — the one with a brilliant idea but no content to bring it to life. That’s where AI tools swoop in to save the day, helping me whip up images or videos in a flash, so I can start designing right away, no waiting necessary. They even spark new ideas for clients that hadn’t crossed their minds. Oh, the time I save with this!</p><p>Just a heads-up though: while these tools are impressive and getting better all the time, they’re not flawless in the finer details yet. So, if you’re aiming for something super polished or distinctive, your own stellar designer skills might still be the best bet 😉</p><p><strong><em>Streamlining Repetitive Tasks and UI Design:</em> </strong>Designing UI elements often involves repetitive tasks that eat up time. AI tools can automate these tasks — resizing images, tweaking color schemes, suggesting layout improvements — you name it. This not only saves time but also lets me focus on the fun, creative parts of design. 
Many AI tools are now built into popular programs like Figma, which makes experimenting with the team a blast.</p><p><strong><em>Creating UI designs for inspiration or quick tweaks to speed up page creation:</em> </strong>Let’s face it, a login page and process aren’t the most thrilling parts of design, but AI tools make this process way quicker, cutting costs for clients and boosting all our efficiency.</p><p>Here’s a tip: if you’re a designer who can’t stand it when things aren’t perfectly aligned or there are tiny inconsistencies (which I think all designers should strive to be, 😅), you might want to fine-tune those generated pages manually. These tools speed things up, but they’re not perfect, and a human touch still matters.</p><p>At Ekohe, our design team’s unique perspective, combined with the power of AI, brings immense value to client projects. By leveraging AI tools, we can streamline processes and generate fresh ideas quickly, allowing us to focus on the creative and strategic aspects of design. This ensures that our designs are not only innovative and visually stunning but also delivered faster and more efficiently.</p><p><strong>The Limitations of AI in Design</strong></p><p>While AI is a powerful tool, it’s got its limits. Here’s why I believe AI can’t fully replace the creative mind:</p><p><strong><em>Lack of Emotional Intelligence:</em></strong> Design is all about creating experiences that resonate emotionally with users. Human designers understand this deeply — they use their knowledge of human emotions and behaviors, along with color psychology and layout, to craft designs that evoke the right feelings. AI can help suggest colors, but it’s the designer who knows the emotional response they want to evoke in users. At Ekohe, I want our designers to create designs that truly connect with users on a deeper level. 
Whether it’s a sense of safety, excitement, youthfulness, or seriousness, our designers make intentional, human choices to ensure the design aligns perfectly with the brand’s desired emotions.</p><p>AI, on the other hand, is great at crunching numbers and spotting patterns but struggles with empathy and understanding the deeper emotional context of design choices. It can suggest design elements based on user preferences or past interactions, but it can’t intuitively grasp how those choices will emotionally impact a diverse group of users. Emotional intelligence means picking up on subtle cues, reading non-verbal feedback, and adapting designs to create real connections — things human designers excel at.</p><p><strong><em>Creativity and Innovation:</em> </strong>Real creativity comes from pushing boundaries, testing wild ideas, and shaking up the status quo. Human designers have a knack for this — they draw on their experiences, gut feelings, and unique perspectives to create designs that aren’t just eye-catching but groundbreaking.</p><p>AI, on the flip side, relies heavily on existing data and trends to make suggestions. It can help generate ideas and optimize designs based on what’s hot, but it struggles to dream up truly original concepts that break away from the norm. The ability to take risks, explore weird solutions, and imagine new possibilities is a distinctly human talent that drives innovation in design.</p><p><strong><em>Understanding Context:</em></strong><em> </em>Design decisions are heavily influenced by context — culture, society, and history all play a role. Human designers understand this — they bring cultural sensitivity and context awareness to their designs, tailoring them to resonate with diverse audiences and cultural norms. At Ekohe, understanding cultural contexts is crucial because we work with clients from all over the world. We know that a design for a French audience will be culturally different from a design for a Japanese audience. 
This awareness helps us create designs that truly connect with people, no matter where they are.</p><p>AI, for all its number-crunching prowess, can miss these subtle contextual cues. A design that nails it in one cultural context might totally flop in another due to different symbols, color meanings, or user behaviors. Human designers use their cultural smarts and empathy to navigate these nuances, crafting designs that hit home with real impact.</p><p>So yes, AI amps up efficiency, streamlines processes, and gives killer insights in UI design, but it’s no replacement for the full creative package human designers bring. Mixing AI’s smarts with human creativity, emotional savvy, and cultural understanding — that’s where the magic happens, creating designs that aren’t just sleek and smart but deeply meaningful to people worldwide.</p><p>At Ekohe, our understanding of AI’s strengths and limitations means we can use it to enhance the creative process without losing the human touch that makes designs truly resonate. This blend of creativity, efficiency, and emotional intelligence helps us exceed client expectations and deliver projects that stand out in the market.</p><p><strong>Conclusion</strong></p><p>AI has become my go-to tool as a Creative Director, speeding up the design process in all sorts of ways. It’s an amazing ally for brainstorming, speeding up tasks, and cranking out UI designs quickly. But even with all its perks, AI can’t outshine the creativity, emotional insight, and cultural understanding that human designers bring to the table.</p><p>At Ekohe, we mix AI’s brainpower with our team’s creativity and emotional intelligence to create designs that are both functional and deeply meaningful. Our unique approach allows us to deliver innovative and visually stunning projects faster and more efficiently, exceeding client expectations and making their ideas come to life. 
😊</p><p>By teaming up AI with human creativity, we’re not just making things work — we’re making designs that truly resonate with people. That’s where the real magic happens, creating designs that hit people in the feels, big time. ✨</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dda84c1427f5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/integrating-ai-in-ui-design-a-creative-directors-perspective-dda84c1427f5">Integrating AI in UI Design: A Creative Director’s Perspective</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Guide to Effective Long-Text Summarization with ChatGPT]]></title>
            <link>https://medium.com/ekohe/a-guide-to-effective-long-text-summarization-with-chatgpt-e52fe786ccfa?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/e52fe786ccfa</guid>
            <category><![CDATA[summarization]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[nlp]]></category>
            <category><![CDATA[llm]]></category>
            <dc:creator><![CDATA[Benedict Chan]]></dc:creator>
            <pubDate>Fri, 02 Aug 2024 08:18:26 GMT</pubDate>
            <atom:updated>2024-08-12T09:47:59.089Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*qBIHNaTixmteHj3-qUM26g.jpeg" /><figcaption>An AI-generated image created by prompting <a href="https://deepai.org/">DeepAI</a></figcaption></figure><h3>Overview</h3><p>This guide explores techniques for summarizing lengthy text passages using ChatGPT, a large language model. Summarizing long texts can be a challenging task, but with the right approach, ChatGPT can generate concise and coherent summaries that capture the key points of the original text. This guide will take you through the steps to effectively use ChatGPT for summarizing long texts, offering practical tips and strategies to ensure the best results.</p><h3>Core Steps</h3><p>The core steps for summarizing lengthy texts with ChatGPT involve providing the text you want summarized and specifying a desired length for the summary, whether it is a word count or a sentence count. ChatGPT will then generate a concise version of the text, highlighting the main ideas and essential information. However, due to character limitations, if your text is too lengthy, you may need to split it into smaller chunks and summarize each chunk individually. Once you have summaries for all the chunks, you can then provide these summaries to ChatGPT again to create a comprehensive summary of the entire text.</p><h3>Step-by-Step Process</h3><p><strong>1. Initial Input:</strong> Provide ChatGPT with the text you want to summarize. Clearly state the desired length for the summary, either in words or sentences. For example, you could say, “Please summarize the following text in 200 words.” If you are not confident in your prompt, ask ChatGPT directly to improve the content or shorten it to meet the token limit.</p><p><strong>2. Splitting Long Texts</strong>: If the text is too lengthy to be processed in one go, split it into smaller, manageable chunks. 
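This split-summarize-combine flow can be sketched in a few lines of Python. It is a rough sketch rather than a prescribed implementation: the chunk size, the prompt wording, and the `ask_chatgpt` helper are illustrative placeholders for whichever ChatGPT client or interface you use.

```python
# Rough sketch of the split -> summarize -> combine workflow described above.
# `ask_chatgpt` is a stand-in for whatever ChatGPT client you use; the chunk
# size and prompt wording are illustrative, not prescriptive.

def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into chunks at paragraph boundaries, each under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def summarize_long_text(text: str, ask_chatgpt, chunk_words: int = 100) -> str:
    """Summarize each chunk, then summarize the combined chunk summaries."""
    partials = [
        ask_chatgpt(f"Summarize the following section in {chunk_words} words:\n\n{c}")
        for c in chunk_text(text)
    ]
    combined = "\n\n".join(partials)
    return ask_chatgpt(f"Please summarize the following text in 200 words:\n\n{combined}")
```

The final call mirrors the last core step: the combined chunk summaries are sent back to ChatGPT for one comprehensive pass.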
Each chunk should be within the character limit that ChatGPT can handle effectively.</p><p><strong>3. Summarizing Chunks:</strong> Summarize each chunk individually using ChatGPT. Provide clear instructions for each chunk, such as, “Summarize the following section in 100 words.”</p><p><strong>4. Combining Summaries:</strong> Once you have individual summaries for all the chunks, combine these summaries into a single text. You can then ask ChatGPT to summarize this combined text to create a comprehensive summary of the entire original document.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_XO9qZfwic7Y_VoUGwGvWQ.png" /></figure><h3><strong>Prompt Engineering</strong></h3><p>One of the most crucial aspects of using ChatGPT effectively is prompt engineering. The way you frame your instructions can significantly impact the quality of the summary. Here are some tips for crafting effective prompts:</p><ul><li><strong>Be Clear and Specific</strong>: Clearly specify what you want ChatGPT to do. For example, “Please summarize the following text in 150 words” is more effective than a vague instruction like “Summarize this.” If the text contains specialized terms, give a clear definition or description of each.</li><li><strong>Use Templates</strong>: Providing a template or structure for the summary can help guide ChatGPT. For example, “Please summarize the following text according to the output template: Introduction, Main Points, Conclusion.”</li><li><strong>Highlight Key Points</strong>: If there are specific aspects of the text you want to be highlighted in the summary, mention them in your prompt. For example, “Focus on the key arguments and the conclusion.”</li></ul><h3><strong>Iterative Refinement</strong></h3><p>ChatGPT allows for continued conversation, which means you can iteratively refine the summary until it meets your needs. 
Here’s how to do it:</p><ul><li><strong>Review and Adjust</strong>: After receiving the initial summary, review it and identify any areas that need improvement or additional detail. If your budget is limited, you can test your prompt on a single chunk directly in the ChatGPT chat interface to save costs.</li><li><strong>Provide Feedback</strong>: Use the conversation feature to give ChatGPT feedback and ask for specific adjustments. For example, “Can you add more details about the methodology discussed in the second paragraph?”</li><li><strong>Repeat as Necessary</strong>: Continue this process until you are satisfied with the summary. Each iteration helps fine-tune the output to better match your requirements.</li></ul><h3>Chunking Strategies</h3><p>When dealing with very long texts, effective chunking strategies are essential. Here are some methods to consider:</p><ul><li><strong>Logical Sections</strong>: Split the text based on logical sections, such as chapters, headings, or paragraphs. This ensures each chunk is coherent and easier to summarize.</li><li><strong>Equal Lengths</strong>: Divide the text into equal lengths based on character or word count. This approach ensures that no single chunk exceeds the processing limit.</li><li><strong>Key Themes</strong>: Identify key themes or topics within the text and chunk it based on these themes. This helps in creating more focused summaries for each theme.</li></ul><h3><strong>Combining Summaries</strong></h3><p>Once you have individual summaries for each chunk, combining them effectively is crucial for creating a comprehensive summary. Here are some tips:</p><p>1. <strong>Sequential Order</strong>: Combine the summaries in the same order as the original text. 
This helps maintain the logical flow and coherence of the summary.</p><p>2. <strong>Highlight Connections</strong>: Ensure that the combined summary emphasizes the connections between different sections, possibly by incorporating transition sentences or phrases.</p><p>3. <strong>Final Summarization</strong>: After combining the summaries, ask ChatGPT to summarize the combined text. This step helps in refining the summary and ensuring it captures the overall essence of the original document.</p><h3><strong>Practical Example</strong></h3><p><strong>Example 1: Summarizing a Research Paper</strong></p><p>Imagine you have a lengthy research paper that you need to summarize. Here’s how you could approach it:</p><p>1. <strong>Split the Paper</strong>: Divide the paper into sections, such as Introduction, Methods, Results, and Discussion. You can make use of the headers to find all the sections.</p><p>2. <strong>Summarize Each Section</strong>: Use ChatGPT to summarize each section individually. For example, “Summarize the Introduction section in 100 words.”</p><p>3. <strong>Combine Summaries</strong>: Combine the summaries of each section into a single text within the token limit of GPT.</p><p>4. <strong>Final Summary</strong>: Ask ChatGPT to summarize the combined text to create a concise summary of the entire research paper.</p><p>Here’s the <a href="https://chatgpt.com/share/3a38f14c-72c1-471d-afcb-4b628c310f13">link</a> to demonstrate the prompt testing in this example.</p><h3><strong>Conclusion</strong></h3><p>By following these strategies, you can leverage ChatGPT to effectively summarize long documents and articles, saving you valuable time and effort. Prompt engineering, iterative refinement, and chunking techniques are key to creating high-quality summaries. 
Whether you’re summarizing research papers, articles, or any other lengthy texts, ChatGPT can be a powerful tool to help you condense information into a more digestible form.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e52fe786ccfa" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/a-guide-to-effective-long-text-summarization-with-chatgpt-e52fe786ccfa">A Guide to Effective Long-Text Summarization with ChatGPT</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Designer Insights: AI Summarization for Better Information Retrieval]]></title>
            <link>https://medium.com/ekohe/designer-insights-ai-summarization-for-better-information-retrieval-ab735bf9a1e0?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/ab735bf9a1e0</guid>
            <category><![CDATA[text-summarization]]></category>
            <category><![CDATA[ux-design]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[design-thinking]]></category>
            <category><![CDATA[ui-design]]></category>
            <dc:creator><![CDATA[Qianyu Luo (Joey)]]></dc:creator>
            <pubDate>Tue, 30 Jul 2024 09:59:57 GMT</pubDate>
            <atom:updated>2024-07-30T15:20:14.662Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*aZkXffAGlNIGx-MR" /><figcaption>Image by the author via Midjourney | AI Summarization</figcaption></figure><p>In a recent project I’m working on, there’s a classic AI feature involved: using Large Language Models (LLMs) to quickly summarize the main content of multiple articles, helping users efficiently obtain information and make subsequent decisions. In today’s fast-paced world, we’re easily swamped by an overload of information and rarely have enough time to read everything that interests us. Text summarization and information retrieval are precisely where LLMs shine, and at Ekohe, we have ample experience and a stellar team to back us up. Whether it’s designing related products or utilizing them as daily tools, hopefully some of my thoughts from a design perspective can spark some new ideas for you.</p><h4>Overview &amp; Application</h4><p>AI summarization, the automatic generation of concise content summaries using AI technology, involves summarizing <strong>existing content</strong> rather than creating entirely new content, which doesn’t quite fall under Generative AI. Whether used for quickly browsing news, research reports, or keeping up with meeting minutes, smart summarization significantly boosts information processing efficiency. Its applications extend beyond text to include media like videos and audio, providing comprehensive summarization services through automatic transcription and content analysis. Introducing smart summarization not only saves users valuable time but also supports <strong>more efficient decision-making and knowledge acquisition</strong> in an era of information overload.</p><p>As a low-risk mode, AI summarization can be <strong>pre-introduced</strong> by default, in addition to offering preset or freely input prompts. 
A common example: Loom automatically transcribes and segments video content after recording and provides a summary. Without affecting the core functionality (video recording and sharing in this case), users often embrace these auxiliary features, especially when they come as a pleasant surprise. Even with some flaws, users tend to be more forgiving of minor errors.</p><p>Speaking of which, I recently read one of Nielsen’s articles, “<a href="https://www.nngroup.com/articles/ai-paradigm/">AI: First New UI Paradigm in 60 Years</a>,” which mentions that AI is introducing the third user interface paradigm in computing history. This new interaction mechanism has users <strong>telling the computer what they want rather than how to do it</strong> — fundamentally reversing the control source. These AI summaries, whether presented by default or generated by user-selected or input prompts, theoretically still involve users telling the computer what they want via prompts. By repeatedly adjusting prompts, we can get closer to the precise results we expect. But indeed, when the intermediate process becomes a black box, users have very limited control over the results. Figuring out how to express intent precisely, use prompts correctly, and properly guide users in using them may give rise to the profession of prompt engineer. However, I agree with Nielsen’s view that this profession won’t last long — <strong>making AI usable by everyone with better usability will still be the long-term trend</strong>.</p><h4>Potential Challenges &amp; Solutions</h4><p>Using AI to summarize large amounts of text can indeed improve efficiency, but it also has some potential downsides. Even though AI has made significant progress in language understanding in recent years, it still struggles with complex semantics in specific cultural or situational contexts. 
There may also be biases or misunderstandings due to algorithm limitations or training data biases, which can prevent users from gaining a comprehensive and diverse perspective or cause them to miss crucial background information. Especially when users don’t know how something is done, it becomes harder to identify or correct problems. These are all factors that we need to take into consideration while designing.</p><p>Therefore, when designing such a feature, we often bring in Retrieval-Augmented Generation (RAG) technology to have AI provide inline citations to its sources. These citations help users trace back to the original articles, delve deeper into a topic, or verify if the quoted material is relevant and valid. This way, users won’t entirely relinquish control over content accuracy to the AI search engine.</p><p>Another more profound negative impact is that long-term reliance on AI-generated summaries may weaken our own reading and comprehension skills, particularly for handling long texts or complex content. If you frequently use such auxiliary tools, please do remember that <strong>AI summaries make it easier to access important information, but they cannot replace the information itself</strong> — traditional learning and comprehension involve deep reading, critical thinking, and synthesizing information from multiple sources, which develop cognitive skills and a thorough understanding and retention of the material. Relying solely on AI summaries prevents those mechanisms from happening, as users may miss out on the nuances and context provided by the full text — this also underscores the importance of preserving links to the original text to engage with the content more deeply.</p><p>Beyond technical and cognitive challenges, AI-generated summaries also raise significant ethical concerns, especially in critical industries like healthcare, human resources, finance, etc. 
For example, in healthcare, they might omit crucial patient information, leading to misdiagnoses — be more mindful when designing products in those fields. This introduces another layer of complexity: AI might generate summaries based on incomplete or biased data. While RAG can help by providing verification sources, it is not a complete solution. Speaking of data sources, I have noticed that more websites are being protected from scraping recently, which directly leads to skewed results in AI search applications like Perplexity. Ensuring ethical AI use requires oversight and accountability, with <strong>user-centric designers often acting as the “last line of defence”</strong> on software development teams. Ultimately, we hope to highlight the need to adhere to ethical guidelines on our teams and push for stringent review processes to prevent possible ethical pitfalls from over-reliance on AI summaries. This includes addressing issues related to data accessibility and accuracy, and ensuring that the AI’s sources are comprehensive and representative.</p><h4>Evolving Role of Design in AI Features</h4><p>In designing AI-related features, personally I feel that visible, “seen” designs are required less and less often. The third AI user interface paradigm is simple and highly convergent, but the considerations behind each design decision have increased. Although the second UI paradigm may not dominate the future any more, it will continue to exist (<em>“clicking or tapping on-screen content remains an intuitive and important way of user interaction”</em>). Still using our project feature as a simple example, questions like: Should we let users click to get summaries? Should we show the AI model used? Should we collect subsequent user feedback (such as thumbs up/down)? Should we support saving outputs temporarily or permanently? These and similar questions are still things we need to consider before making decisions. 
As designers, we serve as bridges — we strive to <strong>provide users with sufficient information to build trust while optimizing performance and avoiding cognitive overload</strong>. At Ekohe, we integrate user-centric design principles and are involved throughout the development process. We advocate for transparent explanations of AI recommendations to foster user trust, and our close collaboration with engineers &amp; data scientists ensures our AI solutions are user-friendly and reliable.</p><p>We are fortunate to witness firsthand this shift in user interface paradigms in computing history. This is undoubtedly an exciting yet somewhat anxiety-inducing change. I firmly believe that the way to counter anxiety is to adapt to development, embrace change, and maintain thoughtful consideration — they will be our <strong>most powerful tools</strong> as designers and our <strong>greatest confidence</strong> in ensuring a good user experience.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab735bf9a1e0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/designer-insights-ai-summarization-for-better-information-retrieval-ab735bf9a1e0">Designer Insights: AI Summarization for Better Information Retrieval</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why Your Product is Never Finished: Understanding Continual Development in Software Engineering…]]></title>
            <link>https://medium.com/ekohe/why-your-product-is-never-finished-understanding-continual-development-in-software-engineering-7016af8c0962?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/7016af8c0962</guid>
            <category><![CDATA[project-management]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[business-development]]></category>
            <dc:creator><![CDATA[Marianne Ylilehto]]></dc:creator>
            <pubDate>Mon, 03 Jun 2024 02:55:55 GMT</pubDate>
            <atom:updated>2024-05-07T20:07:21.272Z</atom:updated>
            <content:encoded><![CDATA[<h3>Why Your Product is Never Finished: Understanding Continual Development in Software Engineering from Project Manager’s Perspective</h3><p>In software engineering, the saying <strong>“a product is never finished”</strong> holds truer than ever before. Unlike traditional tangible products, digital creations undergo a continuous evolution: you’ve got users wanting different things, tech moving at warp speed, and the market doing its own dance. So, imagine<em> being a project manager in the middle of all that</em>! This constant state of development poses both challenges and opportunities for project managers tasked with navigating the complexities of product life cycles for their clients.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H0xgOXJqK_a-YUiJJEn98A.jpeg" /></figure><p>Let’s explore why your software product is never truly finished and how project managers can effectively react to ensure its continued success.</p><p><strong>User Feedback Loop:</strong> Software products thrive on user feedback. As users interact with the product, they provide invaluable insights into its strengths, weaknesses, and areas for improvement. This constant stream of feedback necessitates ongoing iterations and updates to meet evolving user expectations and preferences. Project managers must actively solicit, analyze, and prioritize user feedback to drive product enhancements and ensure its relevance in the marketplace.</p><p><strong>Technological Advancements:</strong> The rapid pace of technological innovation means that what is cutting-edge today may become obsolete tomorrow. New programming languages, frameworks, tools, and methodologies constantly emerge, offering opportunities to enhance product functionality, performance, and scalability. 
Project managers must stay on top of technological trends and proactively integrate relevant advancements into the product roadmap to maintain its competitiveness.</p><p><strong>Market Dynamics:</strong> Market conditions and industry trends are in a perpetual state of flux. New competitors enter the arena, consumer preferences shift, and regulatory requirements evolve. To remain competitive and capture market share, software products must adapt to changing landscapes and seize emerging opportunities. Project managers can help conduct regular market analyses, competitor assessments, and risk evaluations to inform strategic decision-making and steer the product in the right direction.</p><p><strong>Bug Fixes and Maintenance:</strong> No software product is immune to bugs, glitches, and technical issues. Even after rigorous testing and quality assurance measures, unforeseen problems may arise post-deployment. Addressing these issues requires ongoing maintenance, debugging, and optimization efforts. Project managers must allocate resources for bug fixes, prioritize issues based on severity and impact, and implement robust monitoring mechanisms to detect and resolve issues proactively.</p><p><strong>Scalability and Growth</strong>: Successful software products attract a growing user base over time. With increased usage comes the need for enhanced scalability, performance, and reliability. Project managers must anticipate scalability challenges, design scalable architectures, and implement strategies to accommodate growing user loads seamlessly. Additionally, they can help support avenues for monetization, expansion into new markets, and diversification of product offerings to fuel sustainable growth and revenue generation.</p><p><strong>Continuous Improvement Culture:</strong> To maintain relevance and competitiveness, software development teams and clients must embrace a culture of continuous improvement. 
This entails fostering innovation, encouraging experimentation, and promoting a growth mindset across the team. Project managers play a pivotal role in nurturing this culture for their clients, empowering team members to challenge the status quo, explore new ideas, and strive for excellence in all endeavors.</p><p>But here’s the kicker: what often happens is that clients fall into the trap of believing that a product is done after reaching the Minimum Viable Product (MVP) stage. However, it’s crucial to recognize that <strong>even after achieving MVP status, the product journey is far from over</strong>. Project managers often face the challenge of navigating when and how to initiate planning for subsequent phases post-MVP.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eWxfbe94tim7l-KOYwhTdg.jpeg" /></figure><p>Luckily, there are several strategies that project managers can adopt to ensure the continuity and success of their clients’ projects:</p><p><strong>Align with Business Goals:</strong> Ensure that post-MVP development efforts align with the overarching business goals and strategic objectives of the organization. Prioritize initiatives that contribute to revenue growth, customer acquisition, competitive differentiation, or other client priorities.</p><p><strong>Engage Stakeholders:</strong> Foster open communication and collaboration with stakeholders, including customers, investors, and internal teams. Solicit their input and involvement in the post-MVP planning process to ensure alignment and buy-in.</p><p><strong>Create a Strategic Roadmap: </strong>The strategic roadmap serves as a guide for decision-making and resource allocation, ensuring that all efforts are aligned with the overarching vision of the organization. 
It provides stakeholders with a clear understanding of the path forward and what is expected on a given timeline.</p><p><strong>Evaluate MVP Feedback:</strong> Gather and analyze feedback from early adopters and users to identify strengths, weaknesses, and areas for improvement. Use this feedback to inform the prioritization of features and enhancements for future iterations.</p><p><strong>Prioritized Backlog Management</strong>: Maintain a prioritized backlog of features, enhancements, and bug fixes based on MVP discussions, user feedback, business value, and strategic objectives. Regularly reassess priorities with the client and adjust the backlog accordingly.</p><p><strong>Invest in Scalability: </strong>Anticipate scalability challenges and design the product architecture to accommodate future growth and expansion. Implement scalable infrastructure, robust performance monitoring, and automation tools to support ongoing scalability efforts.</p><p>In a nutshell, it’s essential to view software product development as a marathon rather than a sprint. <strong>Project managers serve as important facilitators</strong>, fostering a culture of innovation and continual improvement for their team and clients. So, <em>let’s embrace the journey of perpetual improvement</em>, knowing that with the right guidance, collaboration and strategy, success is not just attainable — it’s inevitable.</p><p>Ready to embark on your product journey with us? 
<a href="https://ekohe.com/contact">Reach out to Ekohe</a> today to explore how we can partner for your success.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7016af8c0962" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/why-your-product-is-never-finished-understanding-continual-development-in-software-engineering-7016af8c0962">Why Your Product is Never Finished: Understanding Continual Development in Software Engineering…</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Exploring Vector Databases with PGVector]]></title>
            <link>https://medium.com/ekohe/exploring-vector-databases-with-pgvector-d6fcd523bf2b?source=rss----3a74bbd19b9f---4</link>
            <guid isPermaLink="false">https://medium.com/p/d6fcd523bf2b</guid>
            <category><![CDATA[vector-database]]></category>
            <category><![CDATA[nlp]]></category>
            <category><![CDATA[retrieval-augmented-gen]]></category>
            <dc:creator><![CDATA[Mai Duy Dũng]]></dc:creator>
            <pubDate>Thu, 30 May 2024 08:23:11 GMT</pubDate>
            <atom:updated>2024-05-30T08:31:30.599Z</atom:updated>
<content:encoded><![CDATA[<p>In software engineering, the way we store and query data is crucial. Traditional databases have served us well, but as the complexity and volume of data grow, so does the need for more advanced solutions. Enter vector databases, a revolutionary approach to handling high-dimensional data. PGVector, an extension of PostgreSQL, integrates vector storage and querying seamlessly. In this article, we’ll explore vector databases and how PGVector is transforming data handling.</p><h3>Applications</h3><p>Vector databases offer powerful capabilities that transform how we handle and analyze data. Below are some key applications where vector databases like PGVector excel, demonstrating their practical value in real-world scenarios.</p><ul><li><strong>Retrieval-Augmented Generation (RAG)</strong>: Vector DBs are crucial in RAG systems, helping LLMs retrieve related information quickly while maintaining integration with relational databases.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2DtDKQ--YCb5RR0y" /><figcaption>Source: <a href="https://www.anyscale.com/blog/a-comprehensive-guide-for-building-rag-based-llm-applications-part-1">Building RAG-based LLM Applications for Production</a></figcaption></figure><ul><li><strong>Recommendation System</strong>: Similarity search on vector databases forms the basis of content-based recommendation systems, showing items similar to those a user has liked or selected.</li><li><strong>Training data gathering</strong>: Let’s say you’re building an NLP classifier. Instead of downloading or purchasing training data, you can gather it by manually selecting a few samples, then looking for samples that are similar to the ones you just selected. 
For example, to gather training data for a spam classifier, you only need to pick a spam email with poor grammar, too-good-to-be-true offers, a suspicious sender, etc., and look for similar emails.</li></ul><h3>What are vector databases?</h3><p>Vector databases (DBs) are specialized databases designed to store and retrieve high-dimensional vectors. Unlike traditional databases that excel at handling structured data, vector databases are built to manage unstructured data, such as images, text, and audio, which are often represented as vectors. These vectors can capture complex relationships and similarities that are not easily discernible through traditional database queries.</p><p>Traditional DBs will store their records like the following example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pWD7QDSScoxTA32h52naOg.png" /></figure><p>To convert those records to a suitable format for a vector DB, we typically use an embedding model. For my projects, I use <em>sentence-transformers/distiluse-base-multilingual-cased-v2</em>.<br>This is because traditional text representations, like bag-of-words or TF-IDF, focus on the occurrence of words but often fail to capture the semantic meaning and context of the text. Embedding models are designed to understand the context and meaning behind words and sentences and convert them into dense vectors.</p><p>However, now our records in the DB will look drastically different.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kwpKCjwbRha0W12BBezItg.png" /></figure><p>It’s obvious that those records are not human-readable and should go through other processing steps before we can make use of them. We can apply a distance metric to measure how far apart these embeddings are from each other. 
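To make that concrete, here is a minimal pure-Python sketch of one such metric, using toy 3-dimensional vectors in place of the 512-dimensional embeddings produced by the model above:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance = 1 - cosine similarity; near 0 means similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy embeddings: v1 and v2 point in similar directions, v3 does not.
v1 = [0.1, 0.9, 0.0]
v2 = [0.2, 0.8, 0.1]
v3 = [0.9, 0.0, 0.1]

assert cosine_distance(v1, v2) < cosine_distance(v1, v3)
```

In practice the database computes this for you; the point is simply that semantically similar records end up a short cosine distance apart.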
A commonly used metric is <a href="https://en.wikipedia.org/wiki/Cosine_similarity#:~:text=.-,Cosine%20distance,-%5Bedit%5D">cosine distance</a>.</p><p>When we need to compare a record against millions or even hundreds of millions of others, comparing them one by one and sorting the distances is incredibly inefficient. This is why we need something that combines the robust features of a relational DB with specialized functions to store, index, and execute vector operations efficiently.</p><h3>PGVector</h3><p>PGVector is an extension of PostgreSQL that provides these capabilities. It enables the storage, indexing, and querying of vector data within PostgreSQL, allowing for seamless integration of vector operations in a relational database environment.</p><p>Key features:</p><ul><li>Vector Data Type: PGVector introduces a new data type for storing high-dimensional vectors.</li><li>Efficient Similarity Search: Perform fast similarity searches using vector operations like cosine similarity.</li><li>Indexing: Create indexes on vector columns to speed up search queries.</li><li>SQL Integration: Use standard SQL commands to manage and query vector data.</li></ul><p>You can either create the table with plain PSQL as usual, or use the <a href="https://github.com/ankane/neighbor">Neighbor</a> gem if you’re using Rails. For simplicity’s sake, I’m using PSQL:</p><pre>CREATE TABLE companies_vectors (<br> id bigserial PRIMARY KEY,<br> company_id int8 REFERENCES companies,<br> embeddings vector(512),<br> created_at TIMESTAMP,<br> updated_at TIMESTAMP<br>);</pre><p>You also need to create an index, using either IVFFlat or HNSW. 
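</p><p><em>(A setup note not in the original post: both of these index types, and the vector column type itself, come from the pgvector extension, which has to be enabled once per database before the statements above and below will work:)</em></p><pre>-- Enable the pgvector extension (one-time, per database)<br>CREATE EXTENSION IF NOT EXISTS vector;</pre><p>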
Here I used IVFFlat, but if you have memory to spare, HNSW is a very good choice with excellent query times.</p><pre>CREATE INDEX ON companies_vectors USING ivfflat (embeddings vector_cosine_ops);</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*U1iMCwBndJM3NyKR" /><figcaption>Source: <a href="https://www.pinecone.io/learn/series/faiss/vector-indexes/">Nearest Neighbor Indexes for Similarity Search</a></figcaption></figure><p>Now we can start querying. I’m looking for companies with a description similar to that of BBC.com. The two tables I used have 30 million and 3 million rows respectively. Note that this query finishes in only 0.8s.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*l7Hy1fMyb7PZ8Mb1" /></figure><h3>Conclusion</h3><p>PGVector revolutionizes data handling by combining relational database features with efficient vector operations. This makes it ideal for applications like recommendation systems, retrieval-augmented generation, and training data gathering. As data complexity grows, tools like PGVector will be essential for efficient and meaningful data analysis.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d6fcd523bf2b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/ekohe/exploring-vector-databases-with-pgvector-d6fcd523bf2b">Exploring Vector Databases with PGVector</a> was originally published in <a href="https://medium.com/ekohe">Ekohe</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>