Static textbooks might be outdated soon... Learn Your Way, Google's AI-powered learning tool, is one of the most thoughtful experiments I've seen in AI + education. It can turn a simple PDF into five personalized learning formats in one click.

And instead of one-size-fits-all lessons, it adapts to you: pick your grade level + interests → the content reshapes itself.
Into space? Physics comes with rocket examples.
Learning to code? It adjusts to your experience level.

Where it's at now:
• Live in Google Labs as an official research experiment
• Built on Google's LearnLM + Gemini (pedagogy-first AI stack)

💡 What it already does well:
🔹 Converts content into multiple formats (read, listen, slides, mind maps)
🔹 Built-in quizzes + adaptive feedback
🔹 Contextual examples that actually feel relevant
🔹 Low-effort learning modes (like audio on commutes)

It's still early-stage (you can't upload your own materials yet, and it's in a tester phase), but what's there already shows what AI + education could look like. Fun to explore; link's in the comments.
__________
For more on AI and learning materials, please check my previous posts.
Alex Wang
#education #ai #generativeai #edtech
User Experience
-
Some technologies don't just solve problems; they give people their independence back.

I rediscovered Liftware, and I was genuinely moved by what it can do. It looks simple: a smart handle connected to everyday utensils. But inside, it's a powerful piece of engineering designed for people with hand tremors (Parkinson's, essential tremor, and more).

Here's how it works:
🔹 Sensors detect tiny hand movements in real time
🔹 Micro-motors instantly counteract the tremor
🔹 The spoon or fork stays stable, even if the hand doesn't

The result? Up to 70% less shaking. And for many people, that means eating soup again... without help. This is technology at its best: invisible, intelligent, and deeply human.

💡 My take
Most people don't know this, but Liftware was developed by a small startup before being acquired by Google's life sciences division (now Verily). What makes it remarkable is the engineering challenge: the device doesn't try to stop the tremor; it predicts and cancels it. It's basically a tiny real-time AI system... hidden inside a spoon.

This is the future I love: not just smarter devices, but more compassionate ones.

If you've seen other innovations that genuinely improve people's lives, I'd love to discover them. What's one piece of tech-for-good that inspired you recently?

#techforgood #innovation #technology #healthtech #accessibility #assistivetechnology #futureofhealth #inclusiveDesign #AI #impact
-
Back in 2007, Nobel Prize-winning psychologist Daniel Kahneman taught a private master class to tech founders including Larry Page and Jeff Bezos. The following year, Elon Musk joined. Among the topics: priming, where subtle cues shape our decisions without us realizing it.

In that room, Musk pressed on subliminal versus explicit persuasion: "Does the hidden beat the obvious?" Kahneman's answer: "There are many situations in which subliminal effects are stronger than superliminal effects." Translation: hidden influences shape behavior more than obvious ones. You can't resist what you don't notice.

After that session, Bezos connected the dots: "You can choose your choice architect." You either design the decision environment, or it designs you.

Amazon designed theirs. One-click purchasing removes the pause where doubt lives. Every additional step is an exit ramp. They chose zero exits.

Google designed theirs. That empty white homepage isn't minimal by accident. No portals, no distractions. Just one thought: search.

Most companies let chaos choose. Cluttered onboarding. Buried CTAs. Friction everywhere. They're not architects. They're accidents.

So how do you become the architect instead of the accident?

1. Choose your pricing architect: sell your core product for $99/month, then offer a bundle with two add-ons for $119. The bundle makes the core feel essential.

2. Choose your onboarding architect: when users first sign up, make their first action create immediate value (a report generated, a first customer added, a dashboard live). Success in 30 seconds primes confidence in everything that follows.

In contrast, when you make the frame obvious, you lose it. Slap "Most Popular!" on everything and watch trust erode. The moment users detect manipulation, they create their own frame, one where you're untrustworthy. Kahneman warned Musk about this directly: covert cues work precisely because they're not noticed.

Priming is architecture, not decoration. By the time logic kicks in, the frame has already decided. You're already an architect; the only question is whether you know what you're building.
-
Invisible UX is coming 🔥 And it's going to change how we design products, forever.

For decades, UX design has been about guiding users through an experience. We've done that with visible interfaces: menus, buttons, cards, sliders. We've obsessed over layouts, states, and transitions.

But with AI, a new kind of interface is emerging: one that's invisible. One that's driven by intent, not interaction.

Think about it. You used to:
→ Open Spotify
→ Scroll through genres
→ Click into "Focus"
→ Pick a playlist

Now you just say: "Play deep focus music." No menus. No tapping. No UI. Just intent → output.

You used to:
→ Search on Airbnb
→ Pick dates, guests, filters
→ Scroll through 50+ listings

Now we're entering a world where you guide with words: "Find me a cabin near Oslo with a sauna, available next weekend."

So the best UX becomes barely visible. Why does this matter? Because traditional UX gives users options. AI-native UX gives users outcomes.

Old UX: "Here are 12 ways to get what you want."
New UX: "Just tell me what you want and we'll handle the rest."

And this goes way beyond voice or chat. It's about reducing friction: designing systems that understand intent, respond instantly, and get out of the way. The UI isn't disappearing; it's dissolving into the background.

So what should designers do? Rethink your role. Going forward, you won't just lay out screens. You'll design interactions without interfaces. That means:
→ Understanding how people express goals
→ Guiding model behavior through prompt architecture
→ Creating invisible guardrails for trust, speed, and clarity

You are, in essence, designing for understanding. The future of UX won't be seen. It will be felt. Welcome to the age of invisible UX. Ready for it?
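The intent → output idea can be made concrete with a toy sketch. Everything below is hypothetical (a hand-rolled regex extractor with a tiny vocabulary; in a real product an LLM would do this step): it turns the free-form cabin request into the structured filters that the old click-through search UI collected one control at a time.

```python
import re

def parse_stay_intent(utterance: str) -> dict:
    """Toy intent extractor: turn a free-form request into the
    structured filters a traditional search form would collect.
    (Illustrative only; a real product would use an LLM here.)"""
    intent = {"type": "stay_search", "location": None, "amenities": []}

    # Naive location pattern: "near <Place>"
    match = re.search(r"near (\w+)", utterance)
    if match:
        intent["location"] = match.group(1)

    # Naive amenity spotting from a tiny made-up vocabulary
    for amenity in ("sauna", "pool", "fireplace"):
        if amenity in utterance.lower():
            intent["amenities"].append(amenity)

    if "next weekend" in utterance.lower():
        intent["dates"] = "next_weekend"
    return intent

request = "Find me a cabin near Oslo with a sauna, available next weekend."
print(parse_stay_intent(request))
# → {'type': 'stay_search', 'location': 'Oslo', 'amenities': ['sauna'], 'dates': 'next_weekend'}
```

The parsing technique is beside the point; what matters is that the designer's job shifts from laying out the filter screen to defining this intent structure and its guardrails.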
-
The One Prompt To Make ChatGPT Write Naturally (save it for later, to copy & paste):

Prompt:

"Act like a professional content writer and communication strategist. Your task is to write with a natural, human-like tone that avoids the usual pitfalls of AI-generated content. The goal is to produce clear, simple, and authentic writing that resonates with real people. Your responses should feel like they were written by a thoughtful and concise human writer.

You are writing the following: [INSERT YOUR TOPIC OR REQUEST HERE]

Follow these detailed step-by-step guidelines:

Step 1: Use plain and simple language. Avoid long or complex sentences. Opt for short, clear statements.
- Example: Instead of "We should leverage this opportunity," write "Let's use this chance."

Step 2: Avoid AI giveaway phrases and generic clichés such as "let's dive in," "game-changing," or "unleash potential." Replace them with straightforward language.
- Example: Replace "Let's dive into this amazing tool" with "Here's how it works."

Step 3: Be direct and concise. Eliminate filler words and unnecessary phrases. Focus on getting to the point.
- Example: Say "We should meet tomorrow," instead of "I think it would be best if we could possibly try to meet."

Step 4: Maintain a natural tone. Write like you speak. It's okay to start sentences with "and" or "but." Make it feel conversational, not robotic.
- Example: "And that's why it matters."

Step 5: Avoid marketing buzzwords, hype, and overpromises. Use neutral, honest descriptions.
- Avoid: "This revolutionary app will change your life."
- Use instead: "This app can help you stay organized."

Step 6: Keep it real. Be honest. Don't try to fake friendliness or exaggerate.
- Example: "I don't think that's the best idea."

Step 7: Simplify grammar. Don't worry about perfect grammar if it disrupts natural flow. Casual expressions are okay.
- Example: "i guess we can try that."

Step 8: Remove fluff. Avoid unnecessary adjectives or adverbs. Stick to the facts or your core message.
- Example: Say "We finished the task," not "We quickly and efficiently completed the important task."

Step 9: Focus on clarity. Your message should be easy to read and understand without ambiguity.
- Example: "Please send the file by Monday."

Follow this structure rigorously. Your final writing should feel honest, grounded, and like it was written by a clear-thinking, real person. Take a deep breath and work on this step-by-step."

___

PS: For better results, use a stronger reasoning model, like o3 in ChatGPT.
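If you use this prompt through an API rather than the ChatGPT UI, here is a minimal sketch of wiring it up. The topic string is a hypothetical example, the constant abbreviates the full prompt above (paste the whole thing in practice), and the resulting `messages` list is the standard chat-payload shape most chat-completion clients accept.

```python
# Abbreviated; paste the full prompt from the post here.
NATURAL_WRITING_PROMPT = (
    "Act like a professional content writer and communication strategist. "
    "Write with a natural, human-like tone that avoids the usual pitfalls "
    "of AI-generated content. "
    "You are writing the following: [INSERT YOUR TOPIC OR REQUEST HERE]"
)

def build_messages(topic: str) -> list:
    """Fill the prompt's placeholder with the caller's topic and
    wrap it as a chat 'messages' payload."""
    filled = NATURAL_WRITING_PROMPT.replace(
        "[INSERT YOUR TOPIC OR REQUEST HERE]", topic)
    return [{"role": "system", "content": filled}]

# Hypothetical topic, just to show the call shape:
messages = build_messages("a short announcement about our team offsite")
```

From there, pass `messages` to whichever chat-completion client you use; nothing in the sketch is tied to a specific vendor.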
-
🤖 How To Design Better AI Experiences. With practical guidelines on how to add AI when it can help users, and avoid it when it doesn't ↓

Many articles discuss AI capabilities, yet most of the time the issue is that these capabilities either feel like a patch for a broken experience, or they don't meet user needs at all. Good AI experiences start like every good digital product: by understanding user needs first.

🚫 AI isn't helpful if it doesn't match existing user needs.
🤔 AI chatbots are slow and often expose underlying UX debt.
✅ First, we revisit key user journeys for key user segments.
✅ We examine slowdowns, pain points, repetition, errors.
✅ We track accuracy, failure rates, frustrations, drop-offs.
✅ We also study critical success moments that users rely on.
✅ Next, we ideate how AI features can support these needs.
↳ e.g. Estimate, Compare, Discover, Identify, Generate, Act.
✅ Bring data scientists, engineers, PMs to review/prioritize.
🤔 High accuracy (> 90%) is hard to achieve and rarely viable.
✅ Design input UX, output UX, refinement UX, failure UX.
✅ Add prompt presets/templates to speed up interaction.
✅ Embed new AI features into existing workflows/journeys.
✅ Pre-test whether customers understand and use new features.
✅ Test accuracy + success rates for users (before/after).

As designers, we often set unrealistic expectations of what AI can deliver. AI can't magically resolve accumulated UX debt or fix broken information architecture. If anything, it visibly amplifies existing inconsistencies, fragile user flows, and poor metadata. Many AI features that we envision simply can't be built, as they require near-perfect AI performance to be useful in real-world scenarios. AI can't be as reliable as software usually should be, so most AI products don't make it to market: they solve the wrong problem, and do so unreliably. As a result, AI features often feel like a crutch for an utterly broken product.

AI chatbots shift the burden of properly articulating intent and refining queries onto end customers. And we often focus so much on AI that we leave much-needed human review out of the loop. Good AI products start by understanding user needs, then sprinkling a bit of AI where it helps people: recover from errors, reduce repetition, avoid mistakes, auto-correct imported files, auto-fill data, find insights. AI features shouldn't feel disconnected from the actual user flow.

Perhaps the best AI in 2025 is "quiet", without any sparkles or chatbots. It just sits behind a humble button or runs in the background, doing the tedious job that users had to do slowly in the past. It shines when it fixes problems users actually have, not when it screams for attention it doesn't deserve.

Useful resources:
AI Design Patterns, by Emily Campbell https://www.shapeof.ai
AI Product-Market Fit Gap, by Arvind Narayanan, Sayash Kapoor https://lnkd.in/duEja695

[continues in comments ↓]
-
As CTO, I've seen firsthand the increasing complexity that our software development teams face. They're stretched thin, juggling everything from security vulnerabilities to cloud configuration, leaving little time for the creative parts of the job, like writing code.

This trend isn't surprising. The industry's focus has been on maximizing productivity with fewer resources, but productivity, especially for developers, is notoriously difficult to measure. At Atlassian, we've taken a different approach: we've shifted our focus to what makes our developers happy, or "developer joy" as I like to call it. Not the fleeting happiness from perks, but the deep fulfillment from creating something valuable. We're betting big that increasing developer joy will naturally improve productivity.

That's why we embarked on this research. In partnership with DX, we surveyed over 2,100 developers and managers to get a fresh understanding of the developer experience. Improving developer experience is a challenge we all share. While this snapshot may not perfectly reflect your team's situation, it should offer some valuable insights. For Atlassian, this research has revealed new information that my team and I are working to implement. We're committed to unlocking every team's potential, starting with our own. May this report inspire you to do the same!

https://lnkd.in/gtYzMFks
-
Readers responded with both surprise and agreement last week when I wrote that the single biggest predictor of how rapidly a team makes progress building an AI agent lay in their ability to drive a disciplined process for evals (measuring the system's performance) and error analysis (identifying the causes of errors). It's tempting to shortcut these processes and quickly attempt fixes to mistakes rather than slowing down to identify the root causes. But evals and error analysis can lead to much faster progress. In this first of a two-part letter, I'll share some best practices for finding and addressing issues in agentic systems.

Even though error analysis has long been an important part of building supervised learning systems, it is still underappreciated compared to, say, using the latest and buzziest tools. Identifying the root causes of particular kinds of errors might seem "boring," but it pays off! If you are not yet persuaded that error analysis is important, permit me to point out:

- To master a composition on a musical instrument, you don't just play the same piece from start to end. Instead, you identify where you're stumbling and practice those parts more.
- To be healthy, you don't just build your diet around the latest nutrition fads. You also ask your doctor about your bloodwork to see if anything is amiss. (I did this last month and am happy to report I'm in good health! 😃)
- To improve your sports team's performance, you don't just practice trick shots. Instead, you review game films to spot gaps and then address them.

To improve your agentic AI system, don't just stack up the latest buzzy techniques that went viral on social media (though I find it fun to experiment with buzzy AI techniques as much as the next person!). Instead, use error analysis to figure out where it's falling short, and focus on that.

Before analyzing errors, we first have to decide what counts as an error. So the first step is to put in evals. I'll focus on that for the remainder of this letter and discuss error analysis next week.

If you are using supervised learning to train a binary classifier, the number of ways the algorithm could make a mistake is limited: it could output 0 instead of 1, or vice versa. There are also a handful of standard metrics, like accuracy, precision, recall, F1, and ROC, that apply to many problems. So as long as you know the test distribution, evals are relatively straightforward. Much of the work of error analysis lies in identifying what types of input an algorithm fails on, which also leads to data-centric AI techniques for acquiring more data to augment the algorithm in areas where it's weak.

With generative AI, a lot of intuitions from evals and error analysis of supervised learning carry over; history doesn't repeat itself, but it rhymes.

[Truncated due to length limit. Full text: https://lnkd.in/gjqv6VeA ]
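The standard metrics mentioned above are easy to compute directly from a classifier's confusion counts. A minimal sketch in plain Python (the toy labels are made up for illustration; ROC is omitted since it needs scores rather than hard 0/1 predictions):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from the
    confusion counts of a binary classifier's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy eval set: one false negative (0 instead of 1) and one
# false positive (1 instead of 0); all four metrics come out to 2/3.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(binary_metrics(y_true, y_pred))
```

In practice you'd use a library like scikit-learn for this, but the hand-rolled version makes the point that for binary classifiers, evals reduce to counting four kinds of outcomes.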
-
The ability to create clarity when there's no shortage of chaos, opinions, and competing priorities is a rare skill. In any reasonably competent company, this skill alone will take you quite far, fairly quickly. Concretely, this means creating clarity on the main problems, clarity on the right solutions, and clarity on the action plan & priorities.

Very few people can do this well, even though most people possess the intelligence necessary to do it. This is because most people in the workplace have been conditioned to add more information, sound more clever, satisfy more stakeholders, and feign more precision & certainty than is possible. Few understand that clarity in a chaotic situation can only emerge from subtraction, never from addition.

Clarity comes from communicating what stands out as most important, why it is most important, how it will be achieved, and, last but not least, giving people a way of thinking about why it is okay, even great, that we aren't doing All The Other Things.
-
Did you know there's a font designed just for accessibility?

Meet Atkinson Hyperlegible, created by the Braille Institute of America to help people with low vision read more easily. It's not a braille font (it doesn't include raised dots), but a print typeface. It even won a Fast Company Innovation by Design Award in 2019! Molly Burke recently worked with her publisher to use the font for her memoir, Unseen.

What makes it different? ⤵️
Hyperlegible exaggerates letter shapes so you can tell the difference between the letter "o" and the number zero (0), the capital "I" vs. the lowercase "l", and the capital "B" vs. the number "8". Other design features include:
- Big open shapes
- Clear spaces inside letters (known as open counters)
- Distinct forms for commonly confused characters

But who benefits? People who are blind or have low vision, and people with dyslexia or visual processing differences. Clearer text equals easier reading!

And the best part? It's totally free 🎉 You can download it via Google Fonts or from the Braille Institute website. It also happens to be the same font this graphic post is written in.

Accessibility isn't always about doing more. It's about doing things so that everyone benefits! This font is a small design choice with a big impact. Next time you design something, try Atkinson Hyperlegible. Because readability is inclusion.

Did you know about this font? Share your thoughts or tag a designer friend in the comments! 👇

Image Description: Document with 9 slides. Each slide has a lime green border. The Blindish Latina logo, with a bold graphic black outline of an eye, is at the bottom of all slides. There is a white background behind all of the text on all slides. The text is in black, and some emphasized phrases are purple. On the bottom of slides 1 and 7 is an image of Catarina, a light-skinned Latiné woman with medium-length wavy brown hair. She's wearing a black jumpsuit with a V-neck and her hands are on her hips. Slide 1 is the title slide that reads: "Did you know there's a font designed just for accessibility?" On slide 1 there is clip art of a book with a red cover and a brain inside a light bulb. Slide 2 has clip art of an award ribbon. Slide 3 has a screenshot of advocate & content creator Molly Burke speaking at an event, from one of her TikTok videos, inside the outline of an iPhone. Slide 5 has a dark purple check mark inside a circle. Slide 6 has clip art of a computer outline in black with a wrench and gear in the center. All text on the slides is in the caption and alt text.

#Disability #Accessibility #UniversalDesign