Responsive Design Guidelines


  • View profile for Mohd Sahil

    UI/UX Designer - Designing Tomorrow’s Experiences Today

    1,326 followers

    Ever wonder why the iPhone features two ways to answer calls? A UX perspective.

    Sometimes you have to swipe to answer a call, and other times just tap. It's intentional, and it's a brilliant example of context-aware UX design.

    When your phone is locked, you swipe to answer. This prevents accidental touches when your phone is in your pocket or bag. Sliding is a deliberate action.

    When your phone is unlocked, you tap to answer. Tapping is easier and quicker when you're already interacting with the screen; efficiency matters here.

    This subtle shift in interaction is Apple quietly thinking ahead:
    - Understanding user context
    - Prioritizing convenience
    - Preventing unintended actions

    This small but thoughtful difference highlights how good UI/UX anticipates context and adapts to prevent friction or frustration. Design isn't just how it looks. It's how it works, and sometimes the smallest micro-interactions make the biggest difference.

    It's a reminder that great UX isn't always loud. Sometimes it's just a seamless moment you don't even notice.

    Have you noticed any other subtle design choices that changed how you interact with your device?

    #UXDesign #UserExperience #iOSDesign #ProductDesign #AppleUX #DesignThinking #MicroInteractions
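    The lock-state rule described above can be sketched as a one-line context check. This is a hypothetical illustration of the design principle, not Apple's actual implementation:

    ```python
    def answer_gesture(phone_locked: bool) -> str:
        """Pick the call-answer gesture based on device context.

        Locked screen  -> deliberate swipe (guards against pocket touches).
        Unlocked screen -> quick tap (user is already engaged with the screen).
        Hypothetical sketch of the principle, not Apple's code.
        """
        return "swipe" if phone_locked else "tap"

    print(answer_gesture(True))   # locked: swipe
    print(answer_gesture(False))  # unlocked: tap
    ```

    The point is that the same action (answering) maps to different gestures depending on a single piece of context the system already knows.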

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    222,847 followers

    🎩 “How We Designed a Multi-Brand Design System” (https://lnkd.in/erc3mA4i), a fantastic case study by Ness Grixti on the pains of maintaining multiple systems and how to make a design system work seamlessly across multiple brands, with a multi-system token infrastructure in Figma, applied everywhere.

    Most teams eventually go through a “consolidation” effort, and that’s where struggles emerge, as different systems have slightly different needs. The Wise team created a system where the entire library is controlled by a top brand layer, which contains nested libraries for type, spacing and colors.

    I love Wise’s approach of one system with two tracks to avoid duplication: typography and spacing on one track, and color on the other. Both pull from shared primitives and come together in a single Global Token library that houses all brands. As a result, designers can manage responsive type, spacing and interaction states across all color themes from one place.

    🧱 Raw values core → for color, type, spacing without brand associations
    🧅 Layered structure → primitives, scaling/device, sentiment, brands
    🌈 Sentiment themes → for alerts, neutral, warning, success, proposition
    📏 Accessible dynamic scaling → for type and spacing values
    📦 Nested variables → scaling lives within responsive device library
    ♻️ Avoiding token explosion → tokens shared across brands + diffs

    If you’d like to dive deeper, I highly recommend taking a look at the Multi-Brand Design System Figma Kit (https://lnkd.in/eShgnPnW) by Pavel Kiselev, a practical guide and Figma kit on how to set up a design system in Figma for multiple brands, platforms or products, with full control over colors, typography and visual styles.

    And huge kudos to Ness Grixti (along with colleagues Henrique Gusso and Willem Purdy and the wonderful Wise team) for sharing the challenges, the failures and the successes for all of us to learn from! 👏🏼👏🏽👏🏾

    #ux #design
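    The layered structure described in the case study, where brand-level tokens alias shared primitives, can be sketched as a small lookup chain. All token names and values below are hypothetical illustrations, not Wise's actual tokens:

    ```python
    # Hypothetical sketch of layered token resolution: each brand's semantic
    # tokens are aliases into one shared primitives layer, so brands differ
    # only in which primitive they point at (avoiding token explosion).
    PRIMITIVES = {
        "green.500": "#2F5711",
        "blue.600": "#163300",
        "space.4": "16px",
    }

    BRANDS = {
        "brand_a": {"color.action": "green.500", "space.gutter": "space.4"},
        "brand_b": {"color.action": "blue.600",  "space.gutter": "space.4"},
    }

    def resolve(brand: str, token: str) -> str:
        """Follow a brand's semantic alias down to its raw primitive value."""
        alias = BRANDS[brand][token]
        return PRIMITIVES[alias]

    # Same semantic token, different raw value per brand; shared spacing stays shared.
    print(resolve("brand_a", "color.action"))
    print(resolve("brand_b", "color.action"))
    ```

    Because both brands share the spacing primitive, a change to `space.4` propagates everywhere, which is the duplication-avoidance benefit the post describes.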

  • View profile for Prukalpa ⚡
    Prukalpa ⚡ is an Influencer

    Founder & Co-CEO at Atlan | Forbes30, Fortune40, TED Speaker

    51,059 followers

    15 people sent me the same article in the last 24 hours: OpenAI's announcement of how they built their own in-house data agent. Why does everyone think I need to see this?

    Beyond just being interesting, it validates something I've been saying for years: The model isn't the hard part. Context is.

    When we started talking about the idea of context being king for AI at Atlan, people would sometimes respond with blank stares: "Why are you building a context platform? Just plug in GPT." Finally, I can send them this article from OpenAI as a response. As they put it:

    "CONTEXT IS EVERYTHING. High-quality answers depend on rich, accurate context. Without context, even strong models can produce wrong results, such as vastly misestimating user counts or misinterpreting internal terminology. To avoid these failure modes, the agent is built around multiple layers of context that ground it in OpenAI’s data and institutional knowledge."

    To make their data agent successful, OpenAI needed to unify many different types of context from different sources, both within and beyond their data platform. They call it "multilayered contextual grounding." Here's what that means:

    → Table usage: Going beyond table names to understand how data flows and gets used (e.g. table schemas, relationships, lineage, usage patterns, and historical queries)
    → Human annotations: Pulling domain-expert knowledge for each table that goes beyond metadata (e.g. semantics, business meaning, and known caveats)
    → Codex enrichment: Examining the code behind each data table to understand insights like scope and granularity, which can highlight important differences between tables that look similar on the surface
    → Institutional knowledge: Pulling context from Slack, Google Docs, and Notion to understand company specifics (e.g. launches, reliability incidents, internal codenames, key metrics)
    → Memory: Saving and learning from prior user corrections and agent discoveries over time via saved, editable memories
    → Runtime context: Live queries to the data warehouse or other data platform systems when context is missing or stale

    Can't wait for the next time someone tells me that context is easy. I'll just send them this article!

    Great work by Bonnie Xu, Aravind Suresh and Emma Tang.
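    The layering idea above can be sketched as a prompt assembled from labeled context blocks, one per layer, before the question reaches the model. The layer contents and the helper below are invented for illustration; OpenAI's actual agent architecture is described only at the level of the post:

    ```python
    # Hypothetical sketch of "multilayered contextual grounding": each layer
    # contributes a labeled block of context ahead of the user's question.
    def build_context(question: str, layers: dict[str, str]) -> str:
        """Concatenate non-empty context layers, then the question."""
        blocks = [f"## {name}\n{content}" for name, content in layers.items() if content]
        blocks.append(f"## Question\n{question}")
        return "\n\n".join(blocks)

    prompt = build_context(
        "How many weekly active users did feature X have last month?",
        {
            "Table usage": "events.feature_x joins users via user_id; lineage: raw -> daily_agg",
            "Human annotations": "'active' = at least one 30s session; excludes internal accounts",
            "Institutional knowledge": "Feature X launched May 2024; internal codename withheld",
            "Memory": "Prior correction: query daily_agg, not raw events",
        },
    )
    print(prompt)
    ```

    Grounding the question in table semantics and prior corrections is exactly what prevents the failure modes the quote mentions, like misestimating user counts.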

  • View profile for Rishav Gupta
    Rishav Gupta is an Influencer

    The “Why” behind the “How” | Product @ ETS

    12,175 followers

    Quick exercise:

    "User engagement is down 20%"
    Add context ↓
    "...among power users"
    Add more context ↓
    "...after we simplified the advanced features"
    Add wisdom ↓
    "We optimized for new users at the cost of power user productivity"

    This is the power of context in product decisions. It transforms:

    DATA: Raw numbers without meaning
    "Engagement dropped by 20%"

    INFORMATION: Numbers with identified relationships
    "Power user engagement dropped after the UI refresh"

    KNOWLEDGE: Patterns that tell a story
    "Every time we simplify advanced features, we see power user drop-off"

    WISDOM: Principles that guide decisions
    "Sustainable growth requires balancing simplicity for new users with power user productivity"

    Without context, data is just noise. With context, data drives informed decisions. The best product decisions aren't made by those with the most data, but by those who best understand its context.

    #ProductManagement #Leadership #ProductStrategy

  • View profile for Santhana Lakshmi Ponnurasan

    Microsoft MVP Data Platform | Power BI World Championship 2025 & 2026 Finalist | Microsoft Certified Power BI Data Analyst | Bringing Data to Life, One Visualization at a Time

    24,505 followers

    Most billing dashboards show numbers. This one shows structure.

    Instead of forcing users to read through tables or guess proportions, the design answers two questions instantly: What’s the total billing? Where is it coming from?

    Here are the intentional design decisions that make this work:

    1. Total anchored at the center: The total sits in the middle, exactly where the eye goes first. Users don’t need to calculate anything. The main KPI is established before the breakdown even begins.

    2. Visual hierarchy shows contribution instantly: Surgery has the largest arc, followed by Consultation, then Diagnostics. Even without numbers, the ranking is obvious. That’s good hierarchy doing its job.

    3. Color builds association, not decoration: Each category has one clear color repeated across icon and segment. No gradients. No unnecessary accents. Color helps recognition, not distraction.

    4. Legend acts like a precision layer: The right panel doesn’t just name categories, it shows exact dollar values. Executives get a quick scan. Finance teams get accurate numbers. One layout serves both.

    This KPI focuses on distribution, not trends, not comparisons, not growth. Because the scope is clear, the message stays strong.

    Love this? Follow #TheVisualBreakdown and hit the bell so you don’t miss the next one.
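    The center-total-plus-arcs composition boils down to simple arithmetic: the KPI is the sum, and each category's arc is its share of 360°. The billing figures below are illustrative, not taken from the dashboard in the post:

    ```python
    # Hypothetical billing data (illustrative numbers only).
    billing = {"Surgery": 48_000, "Consultation": 27_000, "Diagnostics": 15_000}

    # The value anchored at the donut's center: users never calculate it themselves.
    total = sum(billing.values())

    # Arc size per category: share of total mapped onto 360 degrees, so the
    # largest contributor visibly dominates the ring (visual hierarchy).
    arcs = {k: round(v / total * 360, 1) for k, v in billing.items()}

    print(f"Center KPI: ${total:,}")
    print(arcs)  # Surgery > Consultation > Diagnostics, mirroring the ranking
    ```

    The legend's "precision layer" is just the raw `billing` values printed next to each color, so one layout serves both the quick scan and the exact numbers.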

  • View profile for Melissa Perri
    Melissa Perri is an Influencer

    Board Member | CEO | CEO Advisor | Author | Product Management Expert | Instructor | Designing product organizations for scalability.

    103,489 followers

    We talk a lot about user intent in product design. But are we actually paying attention?

    I recently spoke with Lucie Buisson, Chief Product Officer at Contentsquare, about how understanding user intent can transform digital experience design. Her insights were eye-opening.

    Ten years ago, the focus was on simple visual contrasts. A predictive heat map could tell you that users would notice an orange button next to a dark blue area. But that approach misses something big: context. What a user notices or chooses to engage with depends on why they're there in the first place.

    Think about the difference between a shopper in a rush and one casually browsing. If you walk into a store in a hurry, the staff doesn’t try to explain the brand’s values. They help you get what you need and move on. That’s good service, and it's based on reading the situation.

    So, why do websites often fail to adapt? Because they don't read the room. They offer the same experience regardless of the user's purpose. That's not just inefficient, it’s a missed opportunity.

    Lucie made a great point: understanding intent goes beyond personas or demographics. It's about tuning into what users actually need in the moment. Without that, websites and apps will miss the mark, delivering one-size-fits-all experiences.

    This conversation made me rethink how we approach product management. It's a call to dig into the 'why' behind user actions. To truly innovate, we need to move beyond best practices and focus on real-time user data and context.

    How does your team adapt to shifting user intent? Let me know in the comments!

  • View profile for Nicholas Nouri

    Founder | Author

    132,576 followers

    In product development, understanding user feedback is akin to observing how a frog interacts with its little home. This approach not only brings products closer to user needs but also ensures efficient resource use.

    𝐓𝐡𝐞 𝐏𝐫𝐨𝐜𝐞𝐬𝐬:
    >> MVP creation: Start with a Minimum Viable Product, a simple version of the product that meets the basic needs of your users. Think of it as building a basic shelter for a frog, just enough to call it a home.
    >> User observation: Watch how the user (or frog) uses the MVP. Do they seem comfortable? What modifications do they seem to need?
    >> Feedback collection: Through surveys, interviews, or direct observation, gather insights. For our frog, this means noting what changes make it hop happily or what features cause a croak of discomfort.
    >> Iterative design: Use the feedback to refine and enhance the product. Prioritize new features that directly respond to the user’s needs.
    >> Continuous improvement: Keep the loop going. More feedback leads to further refinements, creating a product that evolves with its users.

    𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬:
    >> User-centric: Products finely tuned to user needs and preferences.
    >> Resource efficiency: Focuses development on features that matter, reducing waste.
    >> Adaptability: Encourages flexibility in product plans based on actual user behavior.

    𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬:
    >> Commitment to feedback: Requires a genuine commitment to integrating user insights continuously.
    >> Handling failure: Must be prepared to pivot or discard features based on user response.

    Just like monitoring a frog in its habitat can teach us what makes a suitable home, listening to user feedback helps build products that truly resonate with their audience.

    🔄 Have you ever had to pivot drastically based on user feedback? How do you integrate user observations into your development process?

    #innovation #technology #future #management #startups
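    The iterate-on-feedback loop above can be sketched as one function applied repeatedly to a feature set. The feature names and the "add:/drop:" note format are invented purely for illustration:

    ```python
    # Minimal sketch of one turn of the MVP feedback loop described above.
    # Feedback notes use a hypothetical "add:<feature>" / "drop:<feature>" format.
    def iterate(product: set[str], feedback: list[str]) -> set[str]:
        """Apply one round of user feedback: add requested features, drop rejected ones."""
        for note in feedback:
            action, feature = note.split(":", 1)
            if action == "add":
                product.add(feature)
            elif action == "drop":
                product.discard(feature)
        return product

    mvp = {"basic-shelter"}
    # The frog wants a water dish and rejects the original shelter: a pivot.
    mvp = iterate(mvp, ["add:water-dish", "drop:basic-shelter"])
    print(mvp)
    ```

    Continuous improvement is just this loop run again after every round of observation, so the product's feature set keeps tracking what users actually do.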

  • View profile for Jason Spielman

    Co-Founder Huxe | Prev. Design Lead NotebookLM (Google Labs)

    10,017 followers

    Today, after thousands of iterations, I am super excited to share the reimagined NotebookLM UX. Here’s what I learned designing a novel, AI-first experience:

    1️⃣ UIs will become more dynamic: AI has unlocked unprecedented possibilities within a single tool. With NotebookLM, we aimed to create a seamless experience that supports the entire user journey: Reading → Chat → Creation. Designing for this complexity requires rethinking traditional UI paradigms. The result? A dynamic, adaptive interface that intuitively adjusts to your needs, ensuring fluidity and focus throughout your workflow.

    2️⃣ UIs will become more contextually aware: As AI expands possibilities, the challenge is managing them without overwhelming users. Context-aware interfaces help by proactively suggesting actions to reduce cognitive load. In NotebookLM’s case, some users may come to create an audio overview, while others come to chat with their sources. Anticipating these needs and surfacing the right tools and actions at the right moment streamlines workflows and minimizes friction.

    3️⃣ We are in a transition period: Right now, chat remains the most intuitive and digestible way for users to engage with AI, acting as a familiar anchor that bridges the gap between traditional interfaces and the emerging possibilities of AI-first design. The adaptive UX ensures users can safely rely on chat when they need it, while exploring new experiences like the new Audio Overview Interactive Mode.

    Speaking of… Starting today, you can join the conversation and interact directly with the hosts 🙂. The team stepped up to make this happen and bring this to life at incredible speed: Krystal Kallarackal, Nick McGinnis, Oliver King, Raiza Martin, Yi Y., Biao (Frank) Wang, Kenechi E., Justin Pacione, Corbin Cunningham, PhD, Josh Woodward, Feel Hwang. Shout out to Usama Bin Shafqat and Michael Chen for their continued work on voice exp.
(👋Yes, I’m still leaving Google to build something new — follow along at werebuilding.ai. That said, I’m incredibly excited to finally share this with the world and deeply grateful to the amazing NotebookLM team!)

  • View profile for Shubham Saboo

    Senior AI Product Manager @ Google | Awesome LLM Apps (#1 AI Agents GitHub repo with 101k+ stars) | 3x AI Author | Community of 350k+ AI developers | Views are my Own

    84,750 followers

    Introducing the Agent-to-User Interface by Google 🔥

    A2UI lets your AI agents generate native, interactive UIs on the fly. And it's 100% open source.

    Here's the problem with AI chat interfaces: they hit the ceiling fast.

    User: "Book me a flight to Tokyo"
    Agent: "Here are 47 options with dates, prices, layovers, seat classes..." walls of text... walls of text

    Nobody wants to parse that. You want buttons. Cards. Filters. A real interface. But hardcoding UIs for every possible agent action? Impossible.

    A2UI flips the entire paradigm. Your agent doesn't just respond with text. It streams native UI components directly to the user. Flight cards. Booking forms. Interactive carousels. Date pickers. Generated on the fly. Rendered instantly. Zero client-side code changes.

    Here's how it works:

    🔌 Built on A2A Protocol: Secure transport layer. Your agent streams JSONL payloads describing UI intent.
    🏗️ Truly framework-agnostic: Write once. Render on React, Flutter, Android, Web Components. Any surface.
    ⚡ Real-time streaming: Progressive rendering as the agent thinks. No waiting for complete responses.
    🔒 Secure by design: Clean separation of structure and data. UI injection risks mitigated.

    The old way: User navigates menus → clicks through forms → waits for confirmations
    The A2UI way: User asks → agent streams the exact interface needed → user acts immediately

    Chat interfaces were step one. Generative UI is step two. Stop building static dashboards that gather dust. Start letting your agents build the UI.

    This is Day 15 of 25 in Google Cloud's Advent of Agents. Missed previous days? The archive is live. Catch up anytime.

    ♻️ Repost to share this free and interactive course with your network. And follow this space to stay updated on what's to come.
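    The "streams JSONL payloads describing UI intent" idea can be sketched as an agent emitting one JSON object per line, each describing a renderable component. The component schema below (`card`, `title`, `actions`, ...) is invented for illustration; consult the actual A2UI specification for real payload shapes:

    ```python
    import json

    # Hypothetical sketch in the spirit of A2UI: the agent yields UI intent as
    # JSONL (one JSON object per line), and any client renders it natively.
    def stream_flight_ui(flights):
        """Yield one JSONL line per flight result, as a hypothetical card component."""
        for f in flights:
            yield json.dumps({
                "component": "card",
                "title": f"{f['origin']} → {f['dest']}",
                "body": f"${f['price']} · {f['stops']} stop(s)",
                "actions": [{"type": "button", "label": "Book", "id": f["id"]}],
            })

    lines = list(stream_flight_ui([
        {"id": "f1", "origin": "SFO", "dest": "NRT", "price": 820, "stops": 0},
        {"id": "f2", "origin": "SFO", "dest": "HND", "price": 745, "stops": 1},
    ]))
    print("\n".join(lines))  # the JSONL UI stream the client progressively renders
    ```

    Because each line is self-describing structure (not markup with embedded data), a client can render cards as they arrive, which is the progressive-rendering and injection-safety point in the post.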

  • View profile for Muskan Jai

    1000+ Ad Creatives Built for Meta & Google | Helping Brands Reduce CAC & Drive Sales

    5,936 followers

    If your design isn’t grabbing attention, it’s not a creativity problem, it’s a hierarchy problem.

    Visual hierarchy is the invisible system that decides what people notice first, second, and last. Master it, and even simple designs start communicating with power.

    Here are 12 principles every beginner designer should know:

    1. Size: Bigger gets seen first.
    2. Perspective: Closer feels more important.
    3. Color/Contrast: Bright grabs attention.
    4. Fonts: Type hierarchy guides reading.
    5. Space: Empty space adds emphasis.
    6. Proximity: Close = related.
    7. Negative Space: Separation creates focus.
    8. Alignment: Order and clarity.
    9. Rule of Odds: Center stands out.
    10. Repetition: Consistency unifies.
    11. Lines: Direct the eye.
    12. Grids: Structure and balance.

    Master these and your designs won’t just look good, they’ll communicate clearly.

    #VisualHierarchy #DesignPrinciples #GraphicDesign #DesignTips #Creativity #UIUX #BrandDesign #VisualDesign #CreativeProcess #DesignEducation #LearnDesign #DesignForBeginners
