<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Selected thoughts and experiments - Medium]]></title>
        <description><![CDATA[A collection of personal thoughts and experiments - Medium]]></description>
        <link>https://humber.to?source=rss----dd58d698be4c---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Selected thoughts and experiments - Medium</title>
            <link>https://humber.to?source=rss----dd58d698be4c---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 19 Apr 2026 08:31:56 GMT</lastBuildDate>
        <atom:link href="https://humber.to/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The New Shape of Agency (II)]]></title>
            <link>https://humber.to/the-new-shape-of-agency-ii-7216fd907da1?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/7216fd907da1</guid>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[futurism]]></category>
            <category><![CDATA[ai-governance]]></category>
            <category><![CDATA[openclaw]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Tue, 17 Feb 2026 05:07:47 GMT</pubDate>
            <atom:updated>2026-02-17T05:07:47.146Z</atom:updated>
            <content:encoded><![CDATA[<h4>Part 2: Norms of the agent jungle</h4><p>Last month, I wrote about how <a href="https://humber.to/the-new-shape-of-agency-30bafe81667a">“showing up” is changing</a> as delegation gets cheaper and more capable. Since then, OpenClaw, an <a href="https://openclaw.ai/">open-source personal AI assistant</a>, has surged into the spotlight as a poster child for agentic emergence. The broader trend does not depend on any single product, but OpenClaw accelerates the conversation around an unavoidable question: which rules will govern agent agency inside systems traditionally built around direct human input?</p><p>Online interactions are starting to fork into two lanes. One is a hands-on lane, operating through interactive interfaces that implicitly assume a person is at the controls: forms, inboxes, portals, phone trees. The other is a hands-off lane, operating through explicit delegation: permissioned access, whitelisted agents, and rate limits designed for automation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*24Lv66eh8yZxmmKHHLytAg.png" /></figure><p>OpenClaw’s local setup makes it well-suited to the hands-on lane even when its behavior is agentic, which is both an opportunity and a source of tension. A system can tolerate automation when it is labeled and bounded. It reacts differently when automation enters through the same doors as hands-on participation. And because these interfaces carry social meaning, the boundary is not only technical. It is also cultural.</p><p>That distinction matters because many platforms, systems, and interfaces were designed around an implied unitary person: one account, one cursor, and a recognizable pace of activity. The hands-on lane inherited social assumptions about intent, effort, and responsibility. Those assumptions also fed design principles, security considerations, and product licensing. 
The hands-off lane, by contrast, was constrained by technical and policy limitations: permissions, quotas, and acceptable use policies. The boundary never held perfectly, but it remained the organizing principle that made abuse detection, fairness expectations, and accountability manageable.</p><p>Now the boundary is under pressure from multiple directions at once. A new generation of tools can navigate the same browser interfaces, fill the same forms, and manage the same inboxes that a person would, but at a non-human pace and scale. Some are autonomous. Others sit alongside a person, drafting while they review and acting while they decide, until authorship becomes a blended state rather than a clean category. The old binary of “human or bot” loses explanatory power when “automated” can mean anything from accessibility aids to personal assistants to industrial-scale swarms.</p><p>If hands-on spaces are going to remain workable, they will need mechanisms that restore legibility, intent, and attribution. In identity-bearing contexts, interpretation depends on whether the “presence” behind an action reflects a person at the controls, an agent acting on their behalf, or something in between. In any system with consequences, the first question becomes: who commissioned an action, under what authority, and who owns the outcome?</p><p>The deeper issue is not whether actions are labeled, but what happens when scarce opportunities meet automated attempts in systems built for human-paced participation. Consider apartment hunting. In competitive markets, “being on it” used to mean watching listings, touring quickly, assembling documents, and applying fast. When search agents can continuously monitor inventory, pre-fill applications, and submit the moment a unit goes live, speed stops reflecting attentiveness and starts reflecting automation.</p><p>Then the loop escalates. Renters deploy agents to find and apply. Property managers deploy agents to filter, rank, and schedule. 
Listing platforms optimize for throughput and conversion. Soon the market becomes a cacophony of automated attempts and automated defenses, and the effective “lane” is clogged by machine-paced traffic. In that world, a non-agentic search can start to feel like showing up to a ticket drop on human-time.</p><p>And at that point, a fair response is: good. The current system is already brittle, opaque, and exhausting. If the old rubrics in many of these systems were never really fair, maybe they deserve to break. The question is whether what replaces them is better in a broadly accessible way, or better only for those who can afford the best tooling and priority.</p><p>One path is a managed transition: more interaction shifts into explicitly delegated modes, with clear scope on when, where, and which agents can act, and what still requires a person. That is the optimistic version of “hands-off,” where authority and responsibility can be traced.</p><p>Yet scarcity and the pursuit of advantage are where optimism gets stress-tested. The law of the agent jungle can be terrifying. Wherever there is a bid, a slot, a ticket, a response, or a chance to be seen, escalating cheap attempts trigger an arms race and rules harden. When that happens, the system does not converge on a clean standard. It fractures into a patchwork of private regimes, each defining “allowed,” “preferred,” and “blocked” in its own interest.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qfbRvDr3yu27FM2PU8hNPw.png" /></figure><p>And there is a darker irony here: most participants do not actually win an arms race. Renters do not benefit when apartment search becomes a machine-speed contest, and property owners do not benefit when their inbox fills with automated noise and automated screening. The equilibrium is not efficiency. It is higher volume, higher friction, higher spend, and thinner trust. 
In practice, the consistent winners are the infrastructure providers and gatekeepers who sell the shovels, the priority lanes, and the filtering and “verification” that the arms race makes feel unavoidable.</p><p>At one extreme, that patchwork becomes a road network designed for extraction. Agentic systems operate inside hands-on interfaces under rules that are mostly invisible to the people subject to them. Platforms certify favored agents, strike quiet alliances, and sell priority as “safety” or “quality.” The advantage is not just better tooling, but privileged right-of-way. Human-paced participation becomes the slow lane by default, and fairness becomes something you are told about rather than something you can see.</p><p>At the other extreme is a harder, rarer outcome: agentic infrastructure that treats scale as congestion to be governed, not leverage to be auctioned. Something closer to air traffic control for scarce opportunities and interactions, where priority is legible, bounded, and contestable, and where rate and access norms prevent escalation from becoming the baseline. That outcome is not automatic. It requires explicit norms and enforceable governance, not just better interfaces.</p><p>Those are extremes, but they clarify what is at stake. 
Scaled agency can become a maze of invisible tolls and alliances, or it can become a new social contract: infrastructure and rules that make room for machine-scale capability without dissolving the human expectations that make participation feel legitimate in the first place.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7216fd907da1" width="1" height="1" alt=""><hr><p><a href="https://humber.to/the-new-shape-of-agency-ii-7216fd907da1">The New Shape of Agency (II)</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The New Shape of Agency (I)]]></title>
            <link>https://humber.to/the-new-shape-of-agency-30bafe81667a?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/30bafe81667a</guid>
            <category><![CDATA[internet]]></category>
            <category><![CDATA[culture]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[automation]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Tue, 20 Jan 2026 05:28:12 GMT</pubDate>
            <atom:updated>2026-02-17T04:51:48.472Z</atom:updated>
            <content:encoded><![CDATA[<h4><strong>Part 1: Presence and effort in an age of scaled capability</strong></h4><p>For much of history, ‘showing up’ was constrained by human-scale agency. Action and presence were bound by our singular bodies, which can be in only one place at a time, and by the limited bandwidth of our attention.</p><p>This anchored the ground rules for human interaction, served as a guardrail against unfairness, and signaled legitimate intent and investment. As the old adage goes, “<a href="https://quoteinvestigator.com/2013/06/10/showing-up/">80 percent of life is showing up</a>.” But as we delve deeper into the AI-enhanced information age, agency augmented by virtual, autonomous delegates threatens to outrun what pre-existing societal norms are prepared for.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*k8nl8tPSeHVynXq9pSFfxw.png" /></figure><p>Social acknowledgement of time and effort remains a <a href="https://www.sciencedirect.com/science/article/pii/S1051137724000238">powerful signal we associate with fairness</a>, even as the mechanisms for communication and coordination have evolved.</p><p>One-to-one assisted telepresence evolved from the written letter to the phone call to the text message, and a loose hierarchy emerged to interpret the depth and intent implicit in each mode of outreach and interaction.</p><p>Speaking and acting “through” someone else <a href="https://www.sfu.ca/~wainwrig/Econ400/jensen-meckling.pdf">is an old pattern</a>. People have always asked others to speak, negotiate, or act on their behalf. But historically, that indirection carried <a href="https://www.jstor.org/stable/258191">friction and risk</a>: it cost time and money, required trust, and invited error or miscommunication. Even in the best case, a delegate was an imperfect avatar for direct contact. 
Steps were removed, and proximity became a proxy for value and care.</p><p>Over the past few decades, the internet has gradually revised the rules of participation. The early web lowered the cost of knowing. The social and mobile web reduced the cost of <a href="https://techofcomm.wordpress.com/wp-content/uploads/2015/11/here_comes_everybody_power_of_organizing_without_organizations.pdf">communication and coordination</a>. Generative AI lowered the <a href="https://www.microsoft.com/en-us/worklab/podcast/ai-lowers-the-cost-of-expertise">cost of producing competent work</a>. The emerging agentic web lowers the <a href="https://blogs.microsoft.com/blog/2025/05/19/microsoft-build-2025-the-age-of-ai-agents-and-building-the-open-agentic-web/">cost of action</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rb5DhwKZ4uGyliVx19ckzA.png" /></figure><p>The reason is a new intelligence layer that can be inserted into the path between intention and execution. By reducing decision friction, it turns many formerly human-limited acts into <a href="https://a16z.com/the-rise-of-computer-use-and-agentic-coworkers/">processes that can run</a> continuously, adaptively, and at scale. Agents emulate human interaction (including memory, context, and persistence) as discrete instances and can replicate that “presence” across countless parallel threads. This is materially different from ordinary delegation or simple automation.</p><p>For some people, this will feel like being more agentic than ever. For others, it will feel like the bar moved overnight. Capability increases, but unevenly, and that unevenness forces us to revisit how we evaluate effort and legitimacy in everything from attention and outreach to access and opportunity.</p><p>The simulation of personal-investment signals threatens to undermine their legitimate expression. When the cost of “showing up” collapses, we lose not just fairness, but legibility, trust, and shared expectations. 
If effort signals can be simulated, what do they still prove?</p><p>Automation can pair with replication. A single person’s intent can now be instantiated across many parallel threads: monitoring, drafting, following up, applying, retrying, escalating. What used to be a human-limited act becomes something continuous and scalable.</p><p>Many online platforms were implicitly designed around the unitary person: one cursor, one attention stream, one identity, one “reasonable” rate of action. They can tolerate bots at the margins, but their core interaction loops (forms, feeds, inboxes, marketplaces) assume human pacing and human intent. When agents enter these loops, it’s not just more automation; it’s a mismatch between a human-shaped interface and non-human-scale behavior. The web is adapting to this unevenly, with some systems becoming explicitly <a href="https://aaif.io/">machine-navigable</a> and others resisting it.</p><p>The erosion of effort signals matters most wherever scarce opportunities are allocated: a response, a slot, a ticket, a chance to be seen. In those environments, the old rubrics posed their own challenges, but they were understandable, and even their randomness enabled serendipity. Presence and attention were often naturally scarce on both sides of opportunity.</p><p>Take <a href="https://www.columbia.edu/~ww2040/4615S13/Psychology_of_Waiting_Lines.pdf">physical queues as an example</a>. A unitary person can’t be in multiple lines at once. But when “being there” can be replicated through agents watching, refreshing, submitting, and retrying, queues quietly transform into “virtual lines,” where priority, strategy, and leverage can outrank simple arrival.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fYpKRH9g7WXxXkZckIZ9HQ.png" /></figure><p>As more interfaces become machine-navigable, “trying” becomes cheap and easy to repeat. 
A single rideshare request becomes multiple parallel attempts across apps, pickup points, or timing windows. On the other side, drivers <a href="https://www.businessinsider.com/uber-lyft-gigu-mystro-rideshare-apps-2025-7">use their own ad hoc tooling</a> to maximize offers and minimize dead time. The people and systems we interact with start to look like they’re engaged in continuous optimization. Once scalable trying becomes normal, opting out starts to feel strangely artisanal.</p><p>It’s also not only online. Even a neighborhood restaurant becomes swept up, balancing walk-ins with reservation platforms, call-ahead lists, and now automated voice agents that can place and re-place holds with relentless persistence.</p><p>With that complexity comes a creeping sense of unfairness: advantage accrues to those who can scale their attempts or discover workarounds. In arenas with high personal stakes, such as <a href="https://www.ft.com/content/30a032dd-bdaa-4aee-bc51-754867abbde0">job applications</a>, college admissions, or even dating apps, the same dynamics become particularly critical.</p><p>As a result, the seemingly democratizing flattening of access to information, automation, and cognition is actually hiding an unresolved normative problem. A realignment is underway as human-scale assumptions erode. 
To strike a balance that evolves at a more digestible pace, society will have to define the norms, markets, and laws that reconcile notions of fairness with new capabilities.</p><p><strong><em>Coming in Part 2: Which Norms Scale?</em></strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=30bafe81667a" width="1" height="1" alt=""><hr><p><a href="https://humber.to/the-new-shape-of-agency-30bafe81667a">The New Shape of Agency (I)</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[An Expanded Context at TED AI ‘24]]></title>
            <link>https://humber.to/an-expanded-context-at-ted-ai-24-44661d86e76b?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/44661d86e76b</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[embodied-ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ted]]></category>
            <category><![CDATA[large-language-models]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Tue, 29 Oct 2024 04:57:33 GMT</pubDate>
            <atom:updated>2024-10-29T04:57:15.776Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aXXGPVAdJdUDo8Iy-5wbZw.png" /></figure><p>At TED AI 2024 last week in San Francisco, a consistent theme emerged: a shift away from placing LLMs at the center of AI. Instead, there was a renewed focus on revisiting the foundational principles that initially made the technology transformative and on seeking other analogous discoveries that could revolutionize new domains.</p><h4>Language, Time, and Context</h4><p>Setting the stage, the morning talks on the first day included several reminders of how humans construe language, cognition, and even time and how different these are from how AI sees them.</p><p>Physicist <a href="https://www.santafe.edu/people/profile/carlo-rovelli">Carlo Rovelli</a> illustrated that our sense of ‘now’ is a blend of past memories and future expectations, challenging the linear notion of time. Linguist Jessica Coon (who was a consultant on the alien language <a href="https://tedai-sanfrancisco.ted.com/speakers/jessica-coon/">in the movie “Arrival”</a>) explained that the words we use are surface manifestations of deeper, complex mental hierarchies, which AI struggles to replicate authentically. 
Through examples of sperm whale communication, Pratyusha Sharma highlighted that <a href="https://www.nature.com/articles/s41467-024-47221-8">non-human species</a> develop unique representations tailored to their environmental interactions and survival needs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZmD1B2xMyIenTq0bFK0-QA.jpeg" /></figure><p>While humans and animals process language and context intuitively, AI relies on vast datasets and computational power, making it significantly less efficient: it consumes orders of magnitude more energy than biological systems to apply techniques such as <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)">transformers</a> and <a href="https://en.wikipedia.org/wiki/Retrieval-augmented_generation">retrieval-augmented generation</a> at scale.</p><p>Yet, as researcher <a href="https://tedai-sanfrancisco.ted.com/speakers/max-jaderberg/">Max Jaderberg</a> of Isomorphic Labs pointed out in a later panel, there are limits to attempting to tokenize everything you want to solve.
Those looking for new directions, including <a href="https://a16z.com/author/surya-ganguli/">Surya Ganguli</a>, have set their sights on multidisciplinary approaches that can better advance the “science of intelligence,” integrating insights from neuroscience, psychology, and physics to develop AI that more closely mirrors natural intelligence.</p><p>Depending on your favored interpretation of these advances, they could lead to an astounding level of biological-machine interconnectivity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oraWfwaBLWC4OHK0l0eAMQ.png" /></figure><p>However, the earlier sessions on the intricacies of language throw at least a bit of cold water on the notion of an effortless interface between AI and biological brains, and throughout the conference the limits of the state of the art were on display.</p><p><strong>AI, Out and About</strong></p><p>One heavily hyped topic during the conference was “Embodied AI.” Manifested most visibly in the robots on display on stage and dancing at the first evening party, embodied AI involves linking AI with physical, typically robotic manifestations (humanoid and non-humanoid) that can navigate real-world environments autonomously.</p><p>As one speaker put it, “a robot for every job,” and in a panel suitably titled “The Robots Are Coming,” <a href="https://www.ted.com/speakers/sebastien_de_halleux">Sebastien de Halleux</a> of Field AI presented a vision of multiple robots combining their own vantage points into a shared environmental awareness. Whether the robots were pitched as helpers or as ominous replacements, the conversations set high expectations of finally seeing examples on display.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dV5Sa0AQ61Rk0WDAK1U2eg.jpeg" /></figure><p>However, the robots on display seemed to manifest their intelligence only partially.
Several of the demos failed onstage, and the robots at the party seemed confused by the loud and bright environment. The presenters acknowledged that they knew this was a risky live test.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7XwW86sNV-9h_9siTsShGw.png" /></figure><p>While the robots didn’t fully meet expectations regarding their autonomy, some were aesthetically impressive. However, the environments during the event were considerably less challenging than the complex settings showcased in the demo videos, leaving some of us wondering what we were supposed to expect. As an attendee, I wanted to believe in the advances, but I couldn’t ignore the glitchy robot unceremoniously parked in the corner.</p><p><strong>Bracing for AI Today</strong></p><p>While we wait for true embodied AI and robotic advances, we must also address the immediate issues related to AI as it exists today. Workers, particularly in creative industries, are already facing significant waves of change. As Grammy CEO Harvey Mason put it, creatives need a “<a href="https://www.bizjournals.com/sanfrancisco/news/2024/10/23/grammy-ceo-ai-survival-guide-musicians-artists.html">survival guide</a>” to coping with GenAI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1iRiGphqIadTSpxzXJAE5A.jpeg" /></figure><p>The practical tips on understanding and leveraging AI make sense. However, they seem like just one piece of a larger puzzle that could be complemented by other efforts — such as highlighting models that are <a href="https://www.fairlytrained.org/">“fairly” trained</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h8z39i8USuNjzd10AcakZA.png" /></figure><p>Various perspectives from industry, government, and policy sectors were well represented, discussing copyright, originality, and creativity from multiple angles. 
However, it was harder to perceive a truly embodied representation of the individual creatives — the long tail — who could be most impacted by these issues.</p><p><strong>Going Somewhere Too Fast…Or Not Fast Enough</strong></p><p>A lingering question during the event was: “Where are we going, and are we heading there at the right speed?”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zM840Vqj2CLLTbxmvqxk5w.png" /></figure><p>One of the most entertaining panels showcased a tug-of-war between acceleration and caution, featuring Guillaume Verdon (of <a href="https://en.wikipedia.org/wiki/Effective_accelerationism">effective accelerationism</a> fame) and Igor Kurganov (a known <a href="https://en.wikipedia.org/wiki/Igor_Kurganov">proponent of risk awareness</a>). The conversation led to no definitive conclusion, as expected, but it felt like a speculative sideshow considering the issues discussed in other panels.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2-1EtpQa8usMQKiTEB1tZg.png" /></figure><p>If AI’s current trajectory encounters scaling limitations, as some panelists bluntly asserted, then the question of accelerating AI development might become moot. This brings us back to the importance of moderating our expectations of LLMs and similar technologies.</p><p>While new methods of using LLMs within frameworks that enhance their capabilities and compensate for their deficiencies are promising, even AI agents are not cure-alls.
OpenAI’s <a href="https://venturebeat.com/ai/openai-noam-brown-stuns-ted-ai-conference-20-seconds-of-thinking-worth-100000x-more-data/">Noam Brown</a>, who emphasized the value of “<a href="https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking">system two thinking</a>” (the slow, deliberate kind), also referenced Rich Sutton’s influential essay <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a>, which suggests that long-term progress in AI is driven by leveraging increased computational power, and that more handcrafted or creative methods struggle to achieve comparable advancements.</p><p><strong>Asking And Answering The Right Questions</strong></p><p>In the conference’s more philosophical moments, a compelling debate unfolded between science and intuition. Perplexity CEO <a href="https://time.com/7012698/aravind-srinivas/">Aravind Srinivas</a> urged attendees to prepare for a future where all the world’s answers are readily available, shifting the crucial task to formulating the right questions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jdxXiju9sdd-kA7tnkrCkw.png" /></figure><p>While the practice of “relentless questioning” resonates with many, consensus on which questions matter most — and how to approach them — remains elusive. <a href="https://www.cnbc.com/2024/07/12/inceptive-ceo-jakob-uszkoreit-says-ai-will-transform-pharmaceuticals.html">Jakob Uszkoreit</a>, co-founder of Inceptive, challenged the constraints of traditional academia, advocating for new paradigms that transcend conventional boundaries.
Meanwhile, <a href="https://tedai-sanfrancisco.ted.com/speakers/max-jaderberg/">Max Jaderberg</a> highlighted how AI could compress billions of years of research into accelerated, in-silico exploration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mapsY86gx2hY7-jlK3NO5A.png" /></figure><p>This discourse brings us back to envisioning virtual, AI-enhanced worlds that could unlock discoveries with profound implications for our reality. It’s a vision straddling the line between reality and science fiction — one where we are already witnessing tangible progress, yet the extent and pace of its potential remain subjects of debate.</p><p>Ultimately, regardless of how swiftly these advancements unfold, one thing is clear: these technologies empower us to ask better questions and embark on new avenues of inquiry. Even as we await more sophisticated iterations of our “dancing robots,” the journey toward understanding and innovation continues — a journey that promises to enrich our world in ways we are only beginning to fathom.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=44661d86e76b" width="1" height="1" alt=""><hr><p><a href="https://humber.to/an-expanded-context-at-ted-ai-24-44661d86e76b">An Expanded Context at TED AI ‘24</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Terminal to Totality]]></title>
            <link>https://humber.to/terminal-to-totality-b74f83b938d5?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/b74f83b938d5</guid>
            <category><![CDATA[american-airlines]]></category>
            <category><![CDATA[eclipse]]></category>
            <category><![CDATA[2024-eclipse]]></category>
            <category><![CDATA[air-travel]]></category>
            <category><![CDATA[travel]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Tue, 16 Apr 2024 05:02:15 GMT</pubDate>
            <atom:updated>2024-04-16T05:04:19.757Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l8Nf8sNX1ctB3cRn726P9g.jpeg" /></figure><p>On the morning of April 8th, high above North Texas, I anxiously re-<a href="https://www.nytimes.com/interactive/2024/science/solar-eclipse-cloud-cover-forecast-map.html">checked the weather forecast</a>, just as many others in the path of the “<a href="https://science.nasa.gov/eclipses/future-eclipses/eclipse-2024/where-when/">Great North American Eclipse</a>” did that day. While some enthusiasts had planned their viewing months or <a href="https://www.cnn.com/travel/chasing-the-eclipse-for-decades/index.html">even decades</a> in advance, my approach was somewhat more haphazard.</p><p>A one-way ticket from SFO to DFW, purchased in early February, was the first kernel of a plan. I wasn’t sure how likely it would be for me to make the trip. Work or other commitments could easily force me to cancel, but absent that scenario, Dallas seemed the best location. The combination of its location on the path of totality, as well as the onward flight connection possibilities sealed the deal. I could determine later where to go from there, but at least I knew that getting there on the morning of the 8th would guarantee an eclipse view…in theory.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zBAs0YUswHivNOod1ZyZ8w.png" /></figure><p>The weather was a wildcard, and as the day approached, the forecast became less favorable, settling somewhere around 40–60% cloud coverage. With just a few hours to go, the dip to 20–40% cloud coverage was a glimmer of hope.</p><blockquote>“Not great, not terrible.”</blockquote><p>Watching the eclipse from the airport also held a special charm in my view. 
Airports are perhaps the ultimate <a href="https://en.wikipedia.org/wiki/Liminal_space_(aesthetic)">liminal spaces</a>, and DFW airport is by far the airport I’ve transited through most often, so it seemed suitable that I would watch the lunar transit across the sun from there.</p><p>After landing, as we waited to deplane, a couple seated in the next row gave away extra eclipse glasses. A gentleman on the other side of the aisle anxiously called a friend to inquire about a rental car that would take them to “wherever the weather was best.” My plan was different. With <a href="https://www.nytimes.com/2024/04/04/upshot/eclipse-hotel-prices.html">hotels in short supply</a> and a need to be elsewhere the following day, my trip was strictly a fly-in, fly-out affair. Given the practical need to stay inside the airport, the question emerged: Where exactly should I watch it from?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JSJsiJoQ6KUCcU-e4byd7g.png" /></figure><p>The potentially obvious answer, “Just look out any window,” was complicated by the fact that the terminals I was arriving and departing from faced the wrong way and, for obvious reasons, weren’t exactly positioned to take in the Texas sun.</p><p>Days before, I had looked through airport maps, photographs, and reviews, exploring my options (including just <a href="https://www.dfwairport.com/explore/plan/connect/">riding Skylink</a> during the eclipse), and concluded that the American Airlines Admiral’s Club <a href="https://loungereview.com/lounges/american-airlines-admirals-club-dfw-terminal-e/">in satellite Terminal E</a> was ideal. 
The smallest of American’s six lounges at the airport has large windows facing just the right way.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JdKRQ0BG6AXiudy-gr651A.jpeg" /></figure><p>I had a scheduled one-hour video meeting after landing, leaving me just enough time to sprint from one terminal to another to catch the main event. While on the Skylink, I could see dozens of people gathering on the top floor of the airport parking structures, and in the open terminal spaces, quite a few people were peering through windows at odd angles trying to catch a glimpse of the sun. Arriving at the lounge at around 25 minutes to totality, I found mostly bright skies, then proceeded to fling open my luggage and pull out my assortment of photographic goodies.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DPTs7viYO-BBUosGmGkHtA.jpeg" /></figure><p>At the lounge, anticipation was building among the few people there, although much of it seemed focused on the local newscast showing a close-up of the partial eclipse overhead. A few people were using safety glasses to look upwards, and I also had some extras to share.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DJRBdJ-CTx03aZZtH9oj3Q.jpeg" /></figure><p>Through those safety glasses and a 6-dollar photo filter, I could see the outline of the partial eclipse as totality approached. As much as I wish it were a highly precise operation, amidst the excitement, I kept fumbling through multiple devices, filters, shutter settings, and more, trying to determine which worked best in the rapidly evolving conditions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*esGOPRwd33z0OXbbE8Z-sg.jpeg" /></figure><p>Sometimes, even finding the sun was a challenge. As the sky darkened, clouds moved in, and seeing anything turned into trying to catch elusive glimpses between clouds. 
The possibility dawned on me that a thick cloud cover might be moving in at the worst possible time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uJjFfI9DxuvYiQfIHUNK9A.jpeg" /></figure><p>Luckily, this was not to be the case. Although clouds glided in and out, the definitive outline of totality became clearer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dJFygwfNCvOBGmX_UIUekA.jpeg" /></figure><p>As the sky went completely dark, I could no longer tell where the clouds were, but the eclipsed sun was clearly in sight.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rJDYPOawq_18TW_Cb2ZnHw.jpeg" /></figure><p>The people nearby and I marveled at what we were looking at. The mood was jovial, punctuated by my neighbor’s vocal requests to “turn out the lights!” At one point the navigation lights of an airliner glided past the eclipse. Clearly, others were also enjoying the view.</p><p>Totality lasted two minutes and forty-three seconds, but it felt much longer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BnK8rA6Z7LwJd-zjjt4nDQ.jpeg" /></figure><p>I have seen countless pictures and videos of solar eclipses over the years, but one thing that caught me by surprise, and a highlight of watching this eclipse, was seeing the red <a href="https://www.nbcdfw.com/solar-eclipse/solar-prominence-dallas-total-solar-eclipse/3510433/#:~:text=Did%20you%20see%20those%20red,outward%20from%20the%20sun&#39;s%20surface.">eruptive prominences</a> emerge from the sun with my own eyes, making the sun come to life in a way I’d never seen before.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//twitter.com/humbertomoreira/status/1777422734005338174&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a
href="https://medium.com/media/3ebada4afad17034d6ba459e96c712b1/href">https://medium.com/media/3ebada4afad17034d6ba459e96c712b1/href</a></iframe><p>My less eventful transit continued once the sun and moon’s transit concluded. Mission accomplished. I couldn’t resist buying a commemorative hat just before boarding my flight out.</p><p>Watching the eclipse from the airport might not have been the most dramatic setting, but after the experience, I am even more convinced that such a place, one that leads to both temporary and long-enduring connections, made the already amazing experience particularly special.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b74f83b938d5" width="1" height="1" alt=""><hr><p><a href="https://humber.to/terminal-to-totality-b74f83b938d5">Terminal to Totality</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Perils of a Cognitive Safety Net]]></title>
            <link>https://humber.to/perils-of-a-cognitive-safety-net-545518d3b852?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/545518d3b852</guid>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[learning]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <category><![CDATA[language]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Wed, 28 Feb 2024 21:06:44 GMT</pubDate>
            <atom:updated>2024-02-28T21:06:22.892Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FEACjZPU1oLowmHCUmDVtg.png" /></figure><p>Steve Jobs considered the personal computer a <a href="https://www.goodreads.com/quotes/9281634-i-think-one-of-the-things-that-really-separates-us">bicycle for the mind</a>. However, aside from minimal safety measures, a bike can be used in open-ended ways, retaining the risk of falls and other mishaps. What if there were never any falling off your bike?</p><p>Generative AI can provide digital assistance in many endeavors, but the implications of providing adaptive training wheels are worth considering. This cognitive safety net could unwittingly become a straitjacket in an AI-mediated future.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eGLeNow6EwhykQUucNDIpg.png" /></figure><p><strong>Riding on the jagged frontier</strong></p><p>Harvard professor <a href="https://d3.harvard.edu/our-team/karim-r-lakhani/">Karim Lakhani</a> sees Generative AI in professional contexts converging on a “<a href="https://www.cxotalk.com/episode/centaurs-and-cyborgs-navigating-the-jagged-edge-of-generative-ai-productivity">cyborgs and centaurs model</a>,” where knowledge workers either intertwine their efforts with AI or else distinctly divide tasks between those for AI and those for humans.</p><p>Key to this concept is the notion that Generative AI overall is “<a href="https://www.cxotalk.com/episode/centaurs-and-cyborgs-navigating-the-jagged-edge-of-generative-ai-productivity">lowering the cost of cognition</a>” but retains a <a href="https://www.oneusefulthing.org/p/strategies-for-an-accelerating-future">jagged frontier</a> that makes it hard to delineate whether humans, AI, or a combination of the two work best for a particular case and context.</p><p>Even combining humans and AI does not automatically lead to better outcomes. 
For example, when AI is involved, research shows a tendency towards overconfidence from skilled workers. In one BCG study, workers who could do well at certain activities tended to trust erroneous LLM output, <a href="https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai">leading to overall worse performance</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/728/0*z2KTOLZ377nZZgOl.png" /><figcaption>From “<a href="https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai">How People Create and Destroy Value with Generative AI</a>” by BCG Henderson Institute</figcaption></figure><p>This highlights the importance of applying AI conscientiously and not overlooking basics such as requisite changes in processes and functions.</p><p>These studies also point to Generative AI markedly helping less skilled workers produce work of a higher baseline quality. This is a supercharged version of what we have seen with other technologies, even with elements as basic as spell check and, more broadly, productivity software such as Microsoft Office in the 1990s. 
It’s striking to look back at how revolutionary something like Excel seemed at the time:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FkOO31qFmi9A&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DkOO31qFmi9A&amp;image=http%3A%2F%2Fi.ytimg.com%2Fvi%2FkOO31qFmi9A%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b115b281e97912f7d0590a85464213fa/href">https://medium.com/media/b115b281e97912f7d0590a85464213fa/href</a></iframe><p>With ever-more sophisticated AI co-pilots in our workflows, the justifications for work products not meeting a minimum quality level appear to vanish.</p><p><strong>The End of Mediocrity?</strong></p><p>As much as it might seem like this rising tide lifts all boats, there could be a lost learning opportunity in the averted failures. Just as a generation of students became<a href="https://www.bbc.com/news/education-18158665"> used to spell-check</a>, a new generation’s learning process across domains will incorporate AI-based cognitive assistance early in their training and careers.</p><p>The potential ensuing scenarios are many. Today, aides sometimes correct politicians with “what the candidate meant to say” interjections. In the future, we could see this apply more broadly with pre-emptive autocorrect in much of our written or even spoken output.</p><p>One view is to look at AI co-pilots as <a href="https://medium.datadriveninvestor.com/every-ai-startup-is-building-you-a-second-brain-for-your-personal-life-but-why-8cbc9d72cf63">complementary brains</a> to be nurtured and consulted. 
Even before the rise of Generative AI, some, like Tiago Forte, championed the notion of a <a href="https://www.buildingasecondbrain.com/">second brain</a> to organize knowledge.</p><p>Still, there is a difference between <a href="https://fortelabs.com/blog/test-driving-a-new-generation-of-second-brain-apps-obsidian-tana-and-mem/">building a second “brain</a>” and outsourcing your thoughts. After all, the original second brain concept involved conscious curation and organization by an individual. With new models incorporating <a href="https://www.oneusefulthing.org/p/strategies-for-an-accelerating-future">hundreds of pages of context</a>, there can be the temptation of just feeding everything into a black box and letting a system decide what is most relevant.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DmyYOh-2j1SEK_-BI1JKxQ.png" /></figure><p><strong>The past and future co-pilots</strong></p><p>A counter-argument for some of the fears is that we have long had co-pilots of one form or another in our modern interdependent world. We delegate to humans or machines work for which we are not best suited. We do not need to know intricate details of the vast and complex systems of the modern world to live and thrive in it.</p><p>Just as calculators ultimately helped <a href="https://www.ucl.ac.uk/ioe/news/2018/mar/calculators-can-help-boost-childrens-maths-skills-research-suggests#:~:text=%22When%20integrated%20into%20the%20teaching,better)%20use%20of%20them.%22">boost math skills</a>, proper use of Generative AI could boost learning processes in a collaborative model between human contributions and AI ones.</p><p>There’s nothing inherent in AI that would doom it to be bland or unimaginative, and factors like experimentation and failure could be built into training protocols and surfaced as needed. 
However, this will not happen independently; therefore, these considerations should be top of mind for those developing core AI building blocks such as <a href="https://research.ibm.com/blog/what-are-foundation-models">foundation models</a>.</p><p>Techniques to optimize Large Language Models, such as <a href="https://aws.amazon.com/what-is/reinforcement-learning-from-human-feedback/#:~:text=Reinforcement%20learning%20from%20human%20feedback%20(RLHF)%20is%20a%20machine%20learning,making%20their%20outcomes%20more%20accurate.">reinforcement learning from human feedback</a>, reward “good” outputs. Using such techniques to recognize the kernel of a good idea at its nascent stage, weigh it against other ideas at different stages of their evolution, and naturally <em>guide</em> a human creator in evolving their work seems like a far more challenging task.</p><p><strong>Does great work start with mediocre first drafts?</strong></p><p>If Generative AI tends to favor output with a higher level of quality, this future could mean that few “substandard” outputs (in terms of what we traditionally consider substandard) will be created. For AI-generated art, for example, a rough draft might look glossy and high quality yet not match the hoped-for result.</p><p>We must ask whether certain ultimately great works need to start at a “rough” level and what a “rough draft” will mean in a Generative AI context. 
AI has been criticized for being a bad photocopy or “<a href="https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web">blurry JPEG of the Web</a>,” as <em>The New Yorker</em>’s Ted Chiang put it, yet many creative works undeniably grew out of lower-quality works in progress.</p><p>Seeing <a href="https://en.wikipedia.org/wiki/Stable_Diffusion#/media/File:X-Y_plot_of_algorithmically-generated_AI_art_of_European-style_castle_in_Japan_demonstrating_DDIM_diffusion_steps.png">Generative AI “at work” in generating outputs</a> differs from seeing a writer’s progressive set of drafts or an artist at an easel. Even though “<a href="https://www.promptingguide.ai/techniques/cot">chain of thought</a>” prompting exists to a degree, seeing it in practice shows very <a href="https://medium.com/@alexcarltully/midjourney-in-tandem-with-dall-e-3-and-chain-of-thought-prompting-in-gpt-9323ed365adb">different work-in-process</a> than would have existed in pre-AI workflows.</p><p>Context is important, and current popular platforms such as ChatGPT don’t differentiate between middle school student and Ph.D. student queries (at least not without <a href="https://www.linkedin.com/pulse/prompt-engineering-guide-college-students-prabhu-stanislaus-dfihc/">extra prompt engineering</a>). Ideally, AI systems could evaluate work-in-process in the same way that a human teacher might recognize a first draft of a talented student who needs assistance. Iteration on a work in progress can improve creative outputs and hone human skills.</p><p>If current LLMs have limitations on managing this, there is evidence that tooling around them can help. <a href="https://notebooklm.google/">NotebookLM</a>, the experimental Google tool spearheaded by <a href="https://stevenberlinjohnson.com/">author Steven Johnson</a>, provides a source-grounded writing workflow. 
It is an advanced co-creation tool and seems more intended to take a writer from deep thought to more profound thought, but it indicates how building tooling around LLMs can help overcome their limitations in handling in-progress content.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CgxA7a10IQYNpGwg3S-n6w.png" /></figure><p><strong>Seasoning for taste</strong></p><p>The failure modes of some combinations of human+AI collaboration along the jagged frontier of AI effectiveness have the upside of keeping us on our toes as we continue to question which parts of the human experience we feel comfortable having AI influence.</p><p>Not long ago, AI (particularly as it relates to automation) was supposed to take over “<a href="https://www.cnbc.com/2019/01/25/these-workers-face-the-highest-risk-of-losing-their-jobs-to-automation.html">boring and repetitive</a>” jobs, but instead, it has now aimed at some of our most creative pursuits. In the early 90’s, Robert Reich coined the notion of <a href="https://scott.london/reviews/reich.html">symbolic analysts</a> to cover a range of intellectually-focused jobs representing the core of high-value work in the 21st century. As more of them become AI-mediated, what is left for human differentiation? Scott Belsky comes at this from an interesting angle:</p><blockquote>Until now, skills have been a major differentiator for humanity. However, in the age of AI, taste will become more important than skills as much of skill-based work and productivity is offloaded to compute. Taste seems more scarce these days, and increasingly differentiating in the age of AI. This assertion makes me think about the development of taste, and how we nurture taste for the next generation of humans? My initial thoughts: We must study the history of art and the creative choices and sources of the greats. 
We must expose people to unique and admirable demonstrations of taste and celebrate it.</blockquote><p>It’s worth questioning whether hypothetical tastemakers need to experience more of the “not so great,” which might end up being in shorter and shorter supply. Additionally, a world led by tastemakers sounds too much like an influencer-led world (and one still quite <a href="https://arstechnica.com/ai/2023/12/ai-created-virtual-influencers-are-stealing-business-from-humans/">vulnerable to LLM substitution</a>).</p><p>Still, it does point to the value of human curation, which seems particularly important if we still trust people more than we trust AI.</p><p>That may turn out to be a big <em>if</em>. Kyla Scanlon’s fascinating essay on the <a href="https://kyla.substack.com/p/why-we-dont-trust-each-other-anymore">downfall of trust</a> pointed out the importance of language and the decline of a shared understanding of words, which, from an AI point of view, seems ironic, considering LLMs operate in <a href="https://en.wikipedia.org/wiki/Latent_space">a latent space</a> that is somewhat alien to human notions of language. Is this part of what we are giving up as we offload more language production to AI, or will LLMs help us converge on a more common shared language?</p><p>As Generative AI helps us improve and create, we must remain mindful of the trade-offs between assistance, influence, and autonomy underlying human agency. Just as adaptive training wheels can support a novice cyclist, they should not become a permanent fixture that constrains our journey.</p><p>Assuming we believe in human-driven AI, we come full circle back to humans being the underlying co-pilots keeping a hand on the bike handlebars, albeit in collaboration with AI. 
In that case, even in this new world, perhaps the safety net was people all along!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=545518d3b852" width="1" height="1" alt=""><hr><p><a href="https://humber.to/perils-of-a-cognitive-safety-net-545518d3b852">Perils of a Cognitive Safety Net</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI Crawls A Fragmented Web]]></title>
            <link>https://humber.to/ai-crawls-a-fragmented-web-760f8b3ea802?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/760f8b3ea802</guid>
            <category><![CDATA[innovation]]></category>
            <category><![CDATA[internet]]></category>
            <category><![CDATA[medium]]></category>
            <category><![CDATA[intellectual-property]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Thu, 06 Jul 2023 18:31:44 GMT</pubDate>
            <atom:updated>2023-07-06T18:35:23.823Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LzLoA3TaUnTaHxuH_sXnpw.png" /><figcaption>Robotic eye of Newton</figcaption></figure><p>“<em>If I have seen further than others, it is by standing upon the shoulders of giants</em>,” Newton <a href="https://discover.hsp.org/Record/dc-9792/Description">famously wrote</a>.</p><p>New creative works draw inspiration and insight from their precursors, a fact that the <a href="https://www.newworldencyclopedia.org/entry/Hyperlink#History_of_the_hyperlink">visionaries</a> and <a href="https://en.wikipedia.org/wiki/Hyperlink#:~:text=Tim%20Berners%2DLee%20saw%20the,hypertext%20mark%2Dup%20language%20HTML.">architects</a> of the early Web considered when devising what we know today as the humble hyperlink.</p><p>Decades later, softly enforced <a href="https://josipmisko.com/posts/rest-api-rate-limiting">policies</a> and <a href="https://en.wikipedia.org/wiki/Robots.txt">voluntary guardrails</a> have allowed the core Web to retain a relatively cohesive online experience despite the <a href="https://www.oreilly.com/library/view/restful-web-services/9780596529260/ch01.html">evolution of web services</a>, a proliferation of <a href="https://en.wikipedia.org/wiki/Closed_platform">walled garden ecosystems</a>, and the rise of social, mobile, and streaming.</p><p>However, the emergence of generative AI has altered the calculus of the value chains that govern our information and media industries. 
Now <a href="https://variety.com/2023/music/news/universal-music-streaming-services-block-ai-1235582612/">platforms</a> and <a href="https://www.nytimes.com/2023/06/23/business/media/meta-google-canada-news-facebook-instagram.html">nations</a> alike are raising drawbridges as the value of originality is re-appraised, straining the human-centric information-sharing norms that still underpin the Internet.</p><p>Is AI on its way to “<a href="https://www.theverge.com/2023/6/26/23773914/ai-large-language-models-data-scraping-generation-remaking-web">killing the old web</a>,” as The Verge suggests, or is another shift occurring?</p><h3><strong>Hungry, Hungry Models</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_9DOA0WLxxCL0R6PgH3ZEQ.png" /><figcaption>AI Read-A-Thon</figcaption></figure><p>This past weekend, Twitter users found their browsing imperiled by rate limits imposed due to what CEO Elon Musk described as <a href="https://www.bbc.com/news/technology-66087172">extremely high levels of data scraping</a>.</p><blockquote>“Almost every company doing AI, from start-ups to some of the biggest corporations on Earth, was scraping vast amounts of data”</blockquote><p>Although the specific Twitter explanation <a href="https://mashable.com/article/twitter-rate-limit-exceeded-elon-musk">has its skeptics</a>, AI-related scraping is undoubtedly <a href="https://www.vice.com/en/article/dy3vmx/an-ai-scraping-tool-is-overwhelming-websites-with-traffic">becoming an issue for content owners and hosts</a> from both operational as well as intellectual property perspectives.</p><p>While OpenAI lists some <a href="https://cs.stackexchange.com/questions/159361/as-far-as-we-know-what-does-gpt-4s-training-data-look-like">sources explicitly</a> for its GPT models, others fall under “data licensed from third-party providers.” Google has updated its privacy policy to say it uses “<a 
href="https://www.theverge.com/2023/7/5/23784257/google-ai-bard-privacy-policy-train-web-scraping">publicly available information</a>” in training its models. Others in the industry are training proprietary models on <a href="https://www.niemanlab.org/2023/04/what-if-chatgpt-was-trained-on-decades-of-financial-news-and-data-bloomberggpt-aims-to-be-a-domain-specific-ai-for-business-news/">domain-specific data</a>, creating unique models for private use. As the methods for creating models become <a href="https://bdtechtalks.com/2023/05/08/open-source-llms-moats/">more accessible</a>, incorporating data not included in competing models becomes a key differentiator.</p><p>This has made platforms emphasize that they do not want to unwittingly give away valuable IP. Stack Overflow, for example, plans to <a href="https://gizmodo.com/stack-overflow-charging-ai-companies-for-training-data-1850362500">explicitly charge</a> for using its data in training models. The debate has also progressed into the <a href="https://digiday.com/media-buying/as-regulatory-pressure-mounts-for-artificial-intelligence-new-lawsuits-want-to-take-openai-to-court/">legal action sphere</a>.</p><p>Beyond model training, the fact that some models can supplement pre-trained models with data accessed on the spot has added to the debate. After the ability of ChatGPT to bypass <a href="https://www.wired.com/story/news-publishers-are-wary-of-the-microsoft-bing-chatbots-media-diet/">content access rules and paywalls</a> drew criticism, OpenAI <a href="https://help.openai.com/en/articles/8077698-how-do-i-use-chatgpt-browse-with-bing-to-search-the-web">temporarily disabled</a> the browsing feature:</p><blockquote>We have learned that the ChatGPT Browse beta can occasionally display content in ways we don’t want. 
For example, if a user specifically asks for a URL’s full text, it might inadvertently fulfill this request.</blockquote><blockquote>As of July 3, 2023, we’ve disabled the Browse with Bing beta feature out of an abundance of caution while we fix this in order to do right by content owners.</blockquote><p>Major AI platform providers tend to have content providers as customers and are motivated to be vigilant, but the same may not apply throughout the ecosystem.</p><p>In a world where original, quality content is valuable, the data-hungry habits of <a href="https://en.wikipedia.org/wiki/Large_language_model">LLMs</a>, particularly the <a href="https://arxiv.org/abs/2108.07258">foundation models</a> trained on massive amounts of general-purpose data, can come across as voracious.</p><h3>What’s Fair and What’s Right</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*x6yML_6-iio2_qNyn5ug6g.png" /></figure><p>In the United States, the <a href="https://www.copyright.gov/help/faq/faq-fairuse.html#:~:text=Under%20the%20fair%20use%20doctrine,news%20reporting%2C%20and%20scholarly%20reports.">“fair use” doctrine</a> has allowed for limited unlicensed use of copyrighted works under certain circumstances. The guidelines are not cut and dried but instead are said to depend on <a href="https://www.copyright.gov/fair-use/index.html">four factors</a> according to the U.S. 
Copyright Office:</p><ul><li><em>Purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes</em></li><li><em>Nature of the copyrighted work</em></li><li><em>Amount and substantiality of the portion used in relation to the copyrighted work as a whole</em></li><li><em>Effect of the use upon the potential market for or value of the copyrighted work</em></li></ul><p>A group of Stanford scholars recently published a very insightful paper, “<a href="https://hai.stanford.edu/news/reexamining-fair-use-age-ai?utm_source=Stanford+HAI&amp;utm_campaign=01178e94ce-Mailchimp_HAI_Newsletter_June+2023_2&amp;utm_medium=email&amp;utm_term=0_aaf04f4a4b-01178e94ce-64474323">Foundation Models and Fair Use</a>,” that deeply explores the intersection of new generative AI technologies and fair use.</p><p>Their research finds potential minefields but also intriguing solutions.</p><p>For example, a similarity model trained on a corpus of copyrighted works that is not used generatively would likely fall under fair use, they suggest, citing Stanford Law scholars <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3528447">Lemley and Casey</a>, but they find generative use cases to be more complex:</p><blockquote>This is because these models are usually capable of generating content similar to copyrighted data, and deploying them can potentially impact economic markets that benefit the original data creators. 
For these scenarios, legal scholars argue that fair use may not apply (Lemley &amp; Casey, 2020; Sobel, 2017; Levendowski, 2018)</blockquote><p>Ultimately, the Stanford authors propose establishing a middle ground between realizing the positive impact of foundation models and protecting intellectual property rights through a series of mitigation strategies.</p><p>Several options they examine are interesting, including data filtering at both the input and output levels to detect cases where the data is linked to a source in a way that might trigger infringement. They also consider relying on techniques such as <a href="https://en.wikipedia.org/wiki/Differential_privacy">differential privacy</a> and adjusting the <a href="https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback">human feedback</a> components of training to incorporate mitigation.</p><p>Perhaps the most intriguing option mentioned is instance attribution — a means through which one could theoretically derive a granular list of the contribution level of different inputs for a given output. 
There are issues with high computational cost and accuracy, but theoretically, this could be an objective way to slice the value pie.</p><h3>The Web of Value</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sVtQei2Arkdd1rCtxfNRKQ.png" /><figcaption>There’s gold in the hills of data</figcaption></figure><p>Technical solutions aside, the intellectual property guidelines may depend on the outcome of legislation and litigation, including in-progress legal cases, some <a href="https://news.artnet.com/art-world/class-action-lawsuit-ai-generators-deviantart-midjourney-stable-diffusion-2246770">raised by creative communities</a> and some by well-known artists and their estates, such as the <a href="https://www.nytimes.com/2023/05/18/us/supreme-court-warhol-copyright.html">recent Andy Warhol case</a>.</p><p>The media world is no stranger to the challenges of attribution.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FUODO2O5k_94%3Fstart%3D374%26feature%3Doembed%26start%3D374&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DUODO2O5k_94&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUODO2O5k_94%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/3d636d417278a298eefb346db637e18e/href">https://medium.com/media/3d636d417278a298eefb346db637e18e/href</a></iframe><p>In one of many examples, a four-note flute riff from a folk song, identified decades after the release of Men at Work’s famous “Down Under” tune, led to a <a href="https://www.techdirt.com/2010/02/04/australian-court-says-men-at-works-down-under-infringes-on-folk-song-only-took-decades-to-notice/">lengthy court battle over royalties.</a></p><p>Analogously, even if a mechanism were found to objectively determine the degree of overall attribution within an AI/LLM-generated work, it is 
much more difficult to determine causality of value.</p><p>In the case of a song, is it possible to mark definitively which portion tipped the scales in making it a hit, and which other works inspired those particular lyrics or rhythms? In the case of code, movies, or books, it is similarly complicated to determine which parts are most important.</p><p>The challenges in allocating value fairly are a reason why not only attribution but negotiation is key in distributing value. In media such as TV shows and movies, for example, <a href="https://en.wikipedia.org/wiki/Residual_(entertainment_industry)">residual payments</a> have habitually been afforded to some creators and contributors through complicated formulas. Inevitably, the impact of AI has now become a key topic of the current <a href="https://www.theguardian.com/us-news/2023/may/26/hollywood-writers-strike-artificial-intelligence">Hollywood writers’ strike</a> as industry participants worry about replacement and fair compensation.</p><p>In the absence of better alternatives, coarse measures such as the so-called “link tax” in <a href="https://www.nytimes.com/2021/02/17/business/media/australia-google-pay-for-news.html">Australia</a> and <a href="https://arstechnica.com/tech-policy/2023/06/google-tells-canada-it-wont-pay-link-tax-will-pull-news-links-from-search/">Canada</a> have emerged, but the subsequent implementation measures and responses, including funds going to an <a href="https://www.reuters.com/technology/facebook-set-finance-regional-australia-newspaper-fund-2021-07-02/">innovation fund</a> in Australia and <a href="https://www.cbc.ca/radio/frontburner/google-meta-to-block-news-in-canada-1.6896144">Google and Meta removing Canadian news from their sites and apps</a>, seem unsatisfactory.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XsgjQ5BDWsiCdUllziU01w.png" /><figcaption>Pay-per-view</figcaption></figure><p>Throughout this debate, the question remains of how granular a unit of
information needs to be accounted for. If four notes turned out to be worth millions, what about four words? As the CEO of Reddit (which is also going <a href="https://www.bloomberg.com/news/articles/2023-06-21/reddit-blackout-previews-social-media-s-future-in-the-ai-era">through its own</a> AI-related strike) <a href="https://www.gpb.org/news/2023/06/16/reddit-ceo-steve-huffman-its-time-we-grow-and-behave-adult-company#:~:text=%22We%20are%20not%20in%20the,to%20kill%20third%2Dparty%20apps.">Steve Huffman told NPR</a>,</p><blockquote>“Reddit represents one of the largest data sets of just human beings talking about interesting things,” Huffman said. “We are not in the business of giving that away for free.”</blockquote><p>With companies like Twitter and Reddit asserting the value of their conversations and feeds, does the value of each post need to be tracked, assessed, and its value distributed? It might not be practical today, but if it became so in the future, it could lead to a massive shift.</p><p>Some years ago, the notion of data being the “<a href="https://medium.com/geekculture/stop-saying-data-is-the-new-oil-a2422727218c">new oil</a>” was popular. More recently, Tim O’Reilly’s notion of “<a href="https://www.theinformation.com/articles/data-is-the-new-sand?rc=loujyr">data is the new sand</a>” — valuable only in the aggregate — became more widespread.
Now there’s a new wave of prospectors seeing more than plain sand on the beach and sifting for buried treasure with heavy machinery.</p><p>Any gold rush will have disputed claims, but to realize the advantages of new technologies without further fracturing the Internet, we need to address attribution intricacies in a manageable way and establish a framework for fair data transactions within the digital ecosystem.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=760f8b3ea802" width="1" height="1" alt=""><hr><p><a href="https://humber.to/ai-crawls-a-fragmented-web-760f8b3ea802">AI Crawls A Fragmented Web</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Agents, Plugins, and the Future of Discovery]]></title>
            <link>https://humber.to/agents-plugins-and-the-future-of-discovery-dc40f692aca4?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/dc40f692aca4</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[google]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[chatgpt]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Wed, 24 May 2023 06:09:56 GMT</pubDate>
            <atom:updated>2023-05-24T06:09:33.371Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MuIVU2qiAVRpuOLhYQPXsA.png" /></figure><p>The landscape of Generative AI shifts quickly, but as of May 2023, three emerging trends in the space - plugins, agents, and chains - are bringing to light how the future of discovery, search, and user interfaces is in play.</p><p><a href="https://openai.com/blog/chatgpt-plugins">ChatGPT plugins</a>, announced in March and released broadly by OpenAI to paying users <a href="https://help.openai.com/en/articles/6825453-chatgpt-release-notes">this month</a>, allow its language models to connect to third-party services, invoke limited forms of web access, and execute sandboxed code. Google announced plans to enable <a href="https://blog.google/technology/ai/google-bard-updates-io-2023/">similar functionality for its Bard “experiment”</a> during Google I/O a couple of weeks ago, an event during which the company doubled down on its <a href="https://www.cnet.com/tech/computing/with-ai-and-new-gadgets-google-gets-some-mojo-back-at-google-io/">“AI first” nature</a>.</p><p>Partners announced by Google and OpenAI include popular consumer services such as Kayak, Opentable, and Instacart, marking the platforms’ intention to provide a more holistic experience through their prompt interfaces. Microsoft went even broader this week, announcing a commitment to a shared plugin platform with OpenAI that will work across Microsoft offerings.</p><h3>Jordi Ribas on Twitter: &quot;As part of this shared platform, @Bing is adding to its support for plugins. In addition to previously announced @OpenTable and @Wolfram_Alpha, today we&#39;re thrilled to welcome @Expedia, @Instacart, @KAYAK, @Klarna, @Redfin, @Tripadvisor, and @zillow to the Bing ecosystem. (2/3) pic.twitter.com/YnljblbxpN / Twitter&quot;</h3><p>As part of this shared platform, @Bing is adding to its support for plugins. 
In addition to previously announced @OpenTable and @Wolfram_Alpha, today we&#39;re thrilled to welcome @Expedia, @Instacart, @KAYAK, @Klarna, @Redfin, @Tripadvisor, and @zillow to the Bing ecosystem. (2/3) pic.twitter.com/YnljblbxpN</p><p>Large language models’ ability to access and incorporate ad hoc data and software is growing, and this trend can be seen as giving LLMs an “outside line” of sorts. At the same time, ways to invoke LLMs are also becoming more sophisticated.</p><p><a href="https://mashable.com/article/autogpt-ai-agents-how-to-get-access">Auto-GPT, BabyAGI, and similar projects</a> are built around the concepts of agents - software modules that autonomously or semi-autonomously spin up sessions (in this case, LLM and other workflow-related sessions) as needed to pursue a goal. These projects, along with <a href="https://hackernoon.com/a-comprehensive-guide-to-langchain-building-powerful-applications-with-large-language-models">frameworks such as Langchain</a> that allow the combining of agent components into sophisticated applications, show third parties are eager to extend the capabilities of LLMs without the <a href="https://towardsdatascience.com/behind-the-millions-estimating-the-scale-of-large-language-models-97bd7287fb6b">required time and expense</a> of large-scale training or <a href="https://medium.com/@atmabodha/pre-training-fine-tuning-and-in-context-learning-in-large-language-models-llms-dd483707b122">fine-tuning</a>.</p><p>Microsoft’s announcement this week also included the introduction of <a href="https://blogs.windows.com/windowsdeveloper/2023/05/23/bringing-the-power-of-ai-to-windows-11-unlocking-a-new-era-of-productivity-for-customers-and-developers-with-windows-copilot-and-dev-home/">Windows Copilot</a>, effectively a centralized agent at the desktop/operating system level that can invoke AI and other services.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gQ96x_TDO4L_kc54rP3DCQ.png" /></figure><p>A can 
plug into B, which plugs into C, which can itself sometimes plug back into A.</p><p>The leading AI platforms are now in heated competition, and new entrants (<a href="https://www.marketwatch.com/story/ibm-rolls-into-ai-arms-race-with-watsonx-f434574c">and new alliances</a>) continue to emerge. How can we best understand how these pieces fit together, and which connection points make the most sense?</p><p>As always, history gives us some clues. Similar themes have emerged during other battles earlier in the Internet’s history.</p><p><strong>Platform Tugs of War</strong></p><p>Some experts compare the new plugin platforms to the iconic “<a href="https://www.fastcompany.com/90870842/did-openai-just-have-its-app-store-moment">app store moment</a>” introduced with the iPhone in 2008, but the analogy is limited. During the mobile app boom, individual popular apps came to loom large in the mobile user experience as focus destinations and set off competitive dynamics for rankings, popularity, and monetization.</p><p>Currently, plugins and extensions serve more as bridges and facilitators calling upon third-party services, either upon the user’s request or when an LLM determines they are apt for a specific task. Their role is more comparable to an interstitial element than a full-fledged application, and many of the most interesting use cases for plugins involve outputs from one service becoming inputs for other services under the coordination of an underlying agent, something that, to this day, monolithic mobile applications are ill-suited for.</p><p>However, the callback to an app store moment brings to mind the sometimes tense dynamic between technology platforms and their partners and users.
This dance has played out numerous times in consumer technology sectors, as everyone seasoned enough to recall “Intel Inside” and the <a href="https://en.wikipedia.org/wiki/Browser_wars">browser wars</a> can attest.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5ZRkv51PbFB2SFEi.png" /></figure><p>Newcomers seek opportunities to leverage leading platforms to carve their own niches. At the same time, incumbents balance fostering ecosystems that increase the value of their own platforms against the risk of nurturing future competitors.</p><p><strong>What is different now?</strong></p><p>At first glance, LLM plugins may echo the past short-lived excitement over Alexa skills or <a href="https://www.theverge.com/2018/1/8/16856654/facebook-m-shutdown-bots-ai">AI assistants</a>, both of which ran into <a href="https://arstechnica.com/gadgets/2022/11/amazon-alexa-is-a-colossal-failure-on-pace-to-lose-10-billion-this-year/">significant challenges</a>. However, there are important differences.</p><p>LLM-based conversational interfaces can overcome the limitations of prior interfaces <a href="https://rasa.com/blog/breaking-free-from-intents-a-new-dialogue-model/">based on intents and rules</a>, and an interesting element of this new ecosystem is that its utility is heightened by systems communicating well with each other, not just with their human end users.</p><p>Some basic examples in OpenAI’s plugin documentation <a href="https://platform.openai.com/docs/plugins/examples">highlight this distinction</a>:</p><blockquote>“<strong>name_for_human</strong>”: “Sport Stats”,<br> “<strong>name_for_model</strong>”: “sportStats”,<br> “<strong>description_for_human</strong>”: “Get current and historical stats for sport players and games.”,<br> “<strong>description_for_model</strong>”: “Get current and historical stats for sport players and games.
Always display results using markdown tables.”,</blockquote><p>That said, while LLMs are proficient at consuming and evaluating static content, their effectiveness at assessing dynamic agents and the services they offer remains to be seen in the wild. The unfolding plugin ecosystem will likely favor plugins best comprehended and utilized by other software agents.</p><p>Computer systems have talked to each other for decades, typically using structured data and well-defined APIs and protocols. LLMs’ propensity for creativity and hallucination could add validation challenges to system interconnections, although it may also allow for unexpected innovations. There is even precedent for cases in which AI agents have come up with <a href="https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/">surprisingly innovative ways of communicating with each other</a>, as in this Facebook example from 2017:</p><blockquote>At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.”</blockquote><p>The reality may have been <a href="https://towardsdatascience.com/the-truth-behind-facebook-ai-inventing-a-new-language-37c5d680e5a7">less dramatic</a>, but the concept is intriguing.</p><p>Much as websites and their creators in the 1990s had to keep up with optimizing for different browsers and browser versions, similar complexity might emerge in optimizing plugins and extensions for various agents.
However, if standards like the recently announced one shared by Microsoft and OpenAI become widespread, they could set a common framework.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3xF8voxELptnyIad.png" /></figure><p>What may ultimately emerge is a more sophisticated, conversational form of API communication, including elements of the long-envisioned Semantic Web. This fits particularly well, considering <a href="https://dylancastillo.co/semantic-search-elasticsearch-openai-langchain/">how complementary</a> semantic search use cases are with LLMs.</p><p><strong>Open Questions</strong></p><p>The future landscape of the agent and plugin ecosystems will also depend on how the balance between closed and open systems turns out.</p><p>We could see integrated partnerships and platforms, such as Google’s platform or the Microsoft/OpenAI partnership, facilitate highly coordinated clusters of services with external interfaces limited by guardrails. From an open ecosystem standpoint, we could see a variety of agents, plugins, and coordinating mechanisms surfacing, possibly governed by a (presumably human-led) standards body.</p><p>It’s early to tell which side of this balance will turn out to be the most innovative, and they may each be best suited for different kinds of innovation.</p><p>Finally, agent service discovery remains a particularly vexing problem. Early plugin demos currently require explicit user activation from a curated pool. 
However, with a proliferation of agents, plugins, and services, a method for discovery and ranking will need to emerge to ensure that a software agent’s selections and actions align well <a href="https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem">with its human principal</a>’s goals and intentions.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dc40f692aca4" width="1" height="1" alt=""><hr><p><a href="https://humber.to/agents-plugins-and-the-future-of-discovery-dc40f692aca4">Agents, Plugins, and the Future of Discovery</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Right Dose of Machine Humanity]]></title>
            <link>https://humber.to/the-right-dose-of-machine-humanity-768030527464?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/768030527464</guid>
            <category><![CDATA[interface-design]]></category>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[automation]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Tue, 14 Mar 2023 05:20:09 GMT</pubDate>
            <atom:updated>2023-03-14T05:24:53.898Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0C91XMc29tPNabNGaNHXXA@2x.jpeg" /><figcaption>Furby gazing at AI-created art — <a href="https://www.misalignmentmuseum.com/">Misalignment Museum, San Francisco</a></figcaption></figure><p>As we continue to interact <a href="https://www.unimelb.edu.au/caide/research/beyond-anthropomorphism-the-ai-machine-human-animal-continuum">more intricately with artificial entities</a> in our daily lives, the norms and expectations of these interactions seem to blur and shift. It has become more challenging for <a href="https://www.theverge.com/2023/2/24/23608961/tiktok-creator-bot-accusation-prove-theyre-human">actual people to differentiate themselves from bots</a>, while sights and sounds derived from real sources yet <a href="https://studioamelia.medium.com/hyperreal-4fbd1c193528">detached from reality</a> continue to proliferate.</p><p>Certain forms of AI can now be raw and even demanding at times <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">(as Kevin Roose found out)</a>, while others <a href="https://www.businessinsider.com/ai-chatgpt-love-us-back-dangerous-quest-2023-2">provide questionable comfort</a> in one-sided relationships. Although it seems more possible than ever to speak with machines as if they were a person, the question remains…just how human should we make our machines?</p><h3>Ali Khosh on Twitter: &quot;On the internet, nobody knows you&#39;re an AI. pic.twitter.com/1WEy1oS52t / Twitter&quot;</h3><p>On the internet, nobody knows you&#39;re an AI. pic.twitter.com/1WEy1oS52t</p><p><strong>Why Make Machines Humanlike ?</strong></p><p>Anthropomorphism, the attribution of human characteristics and behaviors to <a href="https://en.wikipedia.org/wiki/Anthropomorphism">non-human entities</a>, dates back far into prehistory, and it’s likely that the christening of inanimate tools emerged early on as well. 
As pointed out in <em>The Atlantic</em>, naming machines serves a <a href="https://www.theatlantic.com/technology/archive/2014/06/why-people-give-human-names-to-machines/373219/">dual purpose of endearing us to them</a> as well as asserting human control:</p><blockquote>Giving something a human name is ultimately, then, a way of exerting control over it — a reminder that it works for you, that it exists within a human construct, even when the machine itself is wholly indifferent. This is why we give human names to all sorts of things we can’t control in nature — Hurricane Hugo and Jack Frost and “Tommy long legs,” the popular nickname for the spiders many people now call “Daddy long legs.”</blockquote><p>Unlike <a href="https://en.wikipedia.org/wiki/Big_Ben">Big Ben</a>, though, newer and more sophisticated machines have presented a more distinctly human interaction cadence.</p><p>In the recent past, Amazon’s Alexa, released in 2014 with Echo devices, represented a mass market milestone in this trend.
While Alexa was innovative in its use of conversational interfaces (even inciting some <a href="https://medium.com/codex/the-consequences-of-giving-a-machine-a-human-name-13d09d411085">exploration of the consequences of human naming</a>), it ultimately was considered to <a href="https://arstechnica.com/gadgets/2022/11/amazon-alexa-is-a-colossal-failure-on-pace-to-lose-10-billion-this-year/">not reach its full promise</a>, as might also be said for <a href="https://embeddedcomputing.com/technology/ai-machine-learning/computer-vision-speech-processing/the-invention-of-apple-s-siri-and-other-virtual-assistants">Apple’s Siri and others.</a></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FYvT_gqs5ETk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DYvT_gqs5ETk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FYvT_gqs5ETk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/76f269485bf04070d5577e5f64bb1d45/href">https://medium.com/media/76f269485bf04070d5577e5f64bb1d45/href</a></iframe><p><strong>Tools to Teammates</strong></p><p>The Alexa platform is a purpose-driven narrow interface. The degree of anthropomorphism is clever, but Echo and similar systems are generally used as a tool for spot requests.</p><p>From a broad “system” standpoint, a<a href="https://www.semanticscholar.org/paper/Trust-in-Automation%3A-Designing-for-Appropriate-Lee-See/7dd86508438657ac7a704a5d952a2a4422808975"> model illustrated by John Lee and Katrina See in 2004</a> is helpful in showing an ideal ratio between trust in a system and its capabilities. 
Systems should be designed so we naturally calibrate our level of trust to their true capabilities; otherwise, we over-trust them (and misuse them) or unduly distrust them (and disuse them).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Js5ngJFHp812__vG.png" /><figcaption>From “<a href="https://www.semanticscholar.org/paper/Trust-in-Automation%3A-Designing-for-Appropriate-Lee-See/7dd86508438657ac7a704a5d952a2a4422808975">Trust in Automation: Designing for Appropriate Reliance</a>” by John D. Lee, Katrina A. See (2004) — Human Factors: The Journal of the Human Factors and Ergonomics Society</figcaption></figure><p>This broad model can be useful even for inanimate tools, but as we layer on systems that emulate humanlike behavior, the question arises as to whether seeing them as human can lead us to trust them beyond their true capabilities (the so-called <a href="https://en.wikipedia.org/wiki/ELIZA_effect">ELIZA effect</a>).</p><p>This tendency has led to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563212003287">calls for caution</a> in several studies that point out interfaces can evoke “depictions of reasoning” that may be misleading to unaccustomed users.
There are already some voices suggesting that we should <a href="https://arxiv.org/abs/2112.01281">instill distrust</a> in AI systems to guard against overconfidence.</p><p>Some researchers have gone deeper into looking at how humans establish trust with machine interfaces and found a “<a href="https://pdf.sciencedirectassets.com/271441/1-s2.0-S0003687020X00081/1-s2.0-S0003687020302982/am.pdf?X-Amz-Security-Token=IQoJb3JpZ2luX2VjEPL%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIQDCUIN6v7eVk%2BPCWTKZ27oPz5UfPGu5hoZaD%2FwplDf1LQIga%2BYLK2SKe8IQFddEzSpRYESuqmbtMTN%2B%2Fj0OoPV4z%2BQqvAUIq%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAFGgwwNTkwMDM1NDY4NjUiDGCOoABsAr8iIYO%2BISqQBWq4FXX3kPFKYgMaqc8iT%2FC5DGWsgwlxbYdFbW7%2BWm6xSUiIaxjHgfxg02VnX4hCa%2Fk4KMld9%2FOvDo7FlQOr9Xrs%2FxQDtanPxsKqJGeiZqr%2FtdazTg8p9uJvBR%2F7cr0Na7JaOm6TTu%2BPr0joU28NnWh0Vq4YCQAi6yxnfFdDEwNwYL3xqz%2FhBRUjEXzzWd%2BZAyvqPYe2QrklJafProtTQ8XbzLeSIlo%2FpnGTzRfrBfUXuLm%2FwwhVwffUIzqc2Ow8%2BRp3VBmRwvCmI%2Fielg0h0O2J8NLu8W3QdWNLq6cpYYVfNi9k1T6lkv6WvrAnoDSVZTflU5QZ0sV6KmmhQbj6ao4TQkluYmSvfjRMcRPPUmLGjF8flNIgl5wYYxKTlkRJTK71WzWIF8wTlZxaU%2BEwsnw853MQLT0tL125dHxavudQLWlZ51zJL%2BYhR7wJA8%2FKAOYlpeku%2Bfgv5K81JeH6%2Fz9WvyGm%2BSytPCs0fKFVjqQIudBA7xYFsGv87sf3cwz2yZHHB2eH61blzxJfE4y3r%2B7CkQTrfoAT%2BHRH%2FFsnIq6nkM8sizEf3LwU0FC7yxmGAEhGKZA5jKunGuNLLd2PK2KehuywuBqT9eHgVq%2BCmdCRPl28cXoaPrqw4uftPv2nOgQgRmqaeAztRHphTIp2ITpXoccyZZ9JAfVER7FiEVUhHd1eoJFuTlsX%2FrBtALDQCnENDZks4KF3qW2v0PmBXg%2BQIx8uxV32idey48I3GAVZVeP6Kjj52YXxyiQlribbWOqedFU6ncUS%2Fs7bPjxMeF0tynJWAgu%2FUJ%2FeywkRTOI1U2zQDF3ZJXhoGSi%2FDQ9OFzTysb73P6hDQCOLP2dgT%2F%2Fpl%2BTprgtTDc1XjWAyH69vMIeiuKAGOrEBe1Z7FuVB1GYkujMwPJbwGFrdQAQXvnAPpby6JdssC07D1jCPYBPQKAc%2BJ0ezYcTpD%2FcbJ1imJO%2Fku7J79Ywe7mTGy0VZ9O5LPoSd8oaRlnW585jmEEyRGotZWKiENlqQEBVvbeXIJBYJ2x1l9jYHd%2F6FG6iIUKEcdCnRne1FmCcr75FSDl8C7eGUg8jtCSnFX6frvz8EUe%2B%2F7bwiqE9ZRhEdYwiKbFs8atdXQbstdV%2BC&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Date=20230312T192952Z&amp;X-Amz-SignedHeaders=host&amp;X-Amz-Expires=30
0&amp;X-Amz-Credential=ASIAQ3PHCVTY7NY66IKR%2F20230312%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Signature=474edfab62372e82fdba849a90c007baf90e0f08f69cbc9640e5512391c49d07&amp;hash=2dd5d2a4df81832bfb3da76581b96078c2574ae2f9b103a5796a02ecbba0683d&amp;host=68042c943591013ac2b2430a89b270f6af2c76d8dfd086a07176afe7c76c2c61&amp;pii=S0003687020302982&amp;tid=pdf-82cc3c78-a762-4937-882a-91d34437410f&amp;sid=f882ea0161cd444e655a8015f90665f36fc4gxrqa&amp;type=client">tool vs. team-mate</a>” dichotomy as well as specific <a href="https://par.nsf.gov/servlets/purl/10344099">team interaction dynamics models</a> that include machines, just as models exist for <a href="https://www.amazon.com/Five-Dysfunctions-Team-Leadership-Fable/dp/0787960756/ref=asc_df_0787960756/?tag=hyprod-20&amp;linkCode=df0&amp;hvadid=266023323049&amp;hvpos=&amp;hvnetw=g&amp;hvrand=8550208599351692850&amp;hvpone=&amp;hvptwo=&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9031944&amp;hvtargid=pla-487653304767&amp;psc=1">purely human teams.</a> “<em>The Five Dysfunctions of a Team”</em> may well need an AI-aware edition soon.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YsB4pXw6eEiDKUK7v4969Q.png" /><figcaption>“Robot reading a book in a conference room during its lunch hour; watercolor” via DALL-E 2</figcaption></figure><p>This notion of appropriate trust has been appreciated by many in the field, as in this <a href="https://www.philips.com/a-w/about/news/archive/blogs/innovation-matters/2021/20210419-why-ai-in-healthcare-needs-human-centered-design.html">post from Philips</a> using this same framework to call for human-centered AI in the context of physician-AI collaboration in radiology.</p><p>A particularly interesting study from Carnegie-Mellon looks distinctly at <a href="https://www.sciencedirect.com/science/article/pii/S0747563222003569">AI teammates</a> in the context of objective tasks (chess matches), overall finding better outcomes come from 
<strong>not</strong> deceiving humans and acknowledging that the AI teammate is artificial. However, it does find an interesting trend:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/691/0*aMiRDInYzqDOEDG8.jpg" /><figcaption>Fig. 3 — Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Guanglu Zhang, Leah Chong, et al. <a href="https://www.sciencedirect.com/science/article/pii/S0747563222003569">Computers in Human Behavior, Vol. 139, Feb 2023</a></figcaption></figure><blockquote>[…] participants report their low-performing teammate to be more competent and helpful when they are told that they work with another human participant (with deception) rather than an AI teammate (without deception).</blockquote><p>If their AI teammate was competent, it didn’t matter whether participants knew it was an AI, but when competency was lower, humans seemed more willing to give <strong>other humans</strong> some slack in their evaluations than they gave artificial counterparts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*Jreetk4M8nsEh0eELB3j1w@2x.jpeg" /><figcaption>AI Art, safely abstract — MOMA, New York City (2022)</figcaption></figure><p><strong>Honest Bots vs Robot Catfish</strong></p><p>The question of how to incorporate and acknowledge automation or AI elements is a complex one, and it has been debated in the context of Honest and “<a href="https://www.semanticscholar.org/paper/Robot-Eyes-Wide-Shut%3A-Understanding-Dishonest-Leong-Selinger/c7727f3251ab57aaf6fa1c44215d54452478a6ce/figure/0">Dishonest Anthropomorphism</a>”.
The notion of dishonest anthropomorphism is interesting in that it applies even in cases where it would be obvious that the entity is artificial — for example, adding emotional cues to robot voices in a way that might be construed as manipulative.</p><h3>jw on Twitter: &quot;Just got an email from @duolingo with the subject &quot;You made Duolingo sad&quot;I love Duolingo, but isn&#39;t there enough going on in 2023 for us to not have to worry about the emotional well being of inanimate software?Maybe dial it down a dozen, green cartoon owl. / Twitter&quot;</h3><p>Just got an email from @duolingo with the subject &quot;You made Duolingo sad&quot;I love Duolingo, but isn&#39;t there enough going on in 2023 for us to not have to worry about the emotional well being of inanimate software?Maybe dial it down a dozen, green cartoon owl.</p><p>People tend to recoil at feeling manipulated or deceived. However, one interesting notion about human-AI interactions (pointed out by Vulture’s Nicholas Quah in a review of the <a href="https://www.vulture.com/2023/03/bot-love-podcast-artificial-intelligence.html">Radiotopia podcast “Bot Love”</a>) is that awareness of artificiality is not a barrier to a perceived connection:</p><blockquote>One of the more interesting threads explored in <em>Bot Love </em>is the notion that human beings can form genuine bonds with AI chatbots with the full knowledge of their artifice. They may, at some point, arrive at a subjective feeling (or fantasy) that there is a ghost in the machine, but they initiate these relationships <em>knowing</em> that it’s simply a machine.</blockquote><p>This theme also emerged in the recent <em>New Yorker</em> piece covering the implications of the <a href="https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness">use of AI therapeutically</a> in mental health situations.</p><blockquote>I knew that I was talking to a computer, but in a way I didn’t mind.
The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself.</p><p>If a user clearly perceives that a machine is on the other end or that it’s merely a simple reflection of their own thoughts, there is a sense of reassurance; it’s the ambiguity about exactly what is on the other end and what its capabilities are that leads to problems.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FhShY6xZWVGE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DhShY6xZWVGE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FhShY6xZWVGE%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/c83ec34f7af4c7aa6bf7708b814044d0/href">https://medium.com/media/c83ec34f7af4c7aa6bf7708b814044d0/href</a></iframe><p>Could AI be a true friend…or just a friend-displacer? Even cursory attempts to translate <a href="https://academic.oup.com/hcr/article/48/3/404/6572120">frameworks for friendship to AI</a> find concepts that obviously do not exist in such interactions: Voluntariness and Reciprocity; Intimacy and Similarity; Self Disclosure, Empathy, Trust — all these notions only exist as simulations or “<a href="https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)">hallucinations</a>”.</p><p>The evolution of these trends will have effects that may ultimately not be as radical as those depicted in science fiction (e.g.
<a href="https://en.wikipedia.org/wiki/Her_(film)">Her</a>, <a href="https://en.wikipedia.org/wiki/Be_Right_Back#:~:text=Martha%20(Hayley%20Atwell%2C%20right),Episode%20no.&amp;text=The%20episode%20tells%20the%20story,killed%20in%20a%20car%20accident.">Black Mirror</a>, <a href="https://en.wikipedia.org/wiki/M3GAN">M3GAN</a>, and so on) but could nonetheless turn into an entirely new kind of parasocial relationship.</p><p>As AI becomes more involved in our lives by providing guidance, co-creation, and even intermediation, the question of <a href="https://lifehacker.com/how-to-tell-if-you-re-chatting-with-a-bot-1848733021">whether one truly is talking to a human or a machine</a> will become even harder to answer than it currently is.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*odYlW0G0LQcLn3YEpNJBPw.png" /><figcaption>“Two people talking with each other with a robot in the middle; oil painting” via DALL-E 2</figcaption></figure><p>A recent <a href="https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/">failed attempt at enabling a “robot lawyer”</a> (effectively an AI-assisted lawyer) shows natural resistance in some contexts; however, contextual social norms, best practices, and regulations will take time to develop.</p><p>Short of observing a “<em>minimum level of human</em>” in our communications and activities, distinguishing between human actions and those influenced by AI, which itself learns from human actions, will become challenging.</p><h3>Marc Andreessen on Twitter</h3><p>Talking to an LLM is talking to the assembled souls of everyone who ever wrote anything down.</p><p>Ultimately, the question of how human we make our machines will need to be paired with the question of how mechanized we are comfortable in allowing ourselves to become.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=768030527464" width="1" height="1" alt=""><hr><p><a href="https://humber.to/the-right-dose-of-machine-humanity-768030527464">The Right Dose of Machine Humanity</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Driverless Cruising with Apricot]]></title>
            <link>https://humber.to/driverless-cruising-with-apricot-192f9a4a2451?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/192f9a4a2451</guid>
            <category><![CDATA[transportation]]></category>
            <category><![CDATA[autonomous-cars]]></category>
            <category><![CDATA[ride-sharing-app]]></category>
            <category><![CDATA[san-francisco]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Sun, 08 Jan 2023 05:14:01 GMT</pubDate>
            <atom:updated>2023-01-08T05:14:01.325Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zg_3nPDd3nvUba0TXNplCQ.png" /><figcaption>Prompt “A robot in a taxi on a starry night driving through San Francisco” / via <a href="https://labs.openai.com/"><strong>DALL·E</strong></a></figcaption></figure><p>Driverless taxi rides have been looming in “any day now” territory for years, iterating through issues spanning <a href="https://www.nytimes.com/2011/05/11/science/11drive.html">technical testing</a> to <a href="https://www.cnbc.com/2022/12/16/us-safety-regulators-probe-gms-cruise-self-driving-cars.html">legal and regulatory</a> scrutiny. A <a href="https://www.theguardian.com/technology/2016/dec/21/uber-cancels-self-driving-car-trial-san-francisco-california">canceled pilot</a> in 2016 by Uber and a <a href="https://www.wired.com/story/uber-self-driving-car-fatal-crash/">2018 tragedy</a> seemed to slow the pace of roll-outs and reinforce the importance of prioritizing safety.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FOKJK3_XIGD4%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DOKJK3_XIGD4&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FOKJK3_XIGD4%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a940116369eff0991f5cf608f63e5a9a/href">https://medium.com/media/a940116369eff0991f5cf608f63e5a9a/href</a></iframe><p>As 2023 opens, however, this mode of transportation has finally <a href="https://www.sfchronicle.com/bayarea/article/Cruise-poised-to-offer-driverless-taxi-rides-to-17657865.php">started to become consistently available</a> to the public in San Francisco, Austin, Phoenix, and other cities through services such as <a href="https://getcruise.com/">Cruise</a> and <a 
href="https://waymo.com/faq/">Waymo</a>.</p><p>When it comes to Cruise, the service area is still limited (although it <a href="https://www.sfchronicle.com/bayarea/article/Cruise-poised-to-offer-driverless-taxi-rides-to-17657865.php">may soon expand</a>) and the hours (10 p.m. through 5:30 a.m.) are not the most practical, but this past Friday evening I decided to give the service a try after finally getting off the waitlist.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/585/1*850_6N2tmJUubfnJvkX_Mw@2x.jpeg" /><figcaption>Much of San Francisco is still off-limits</figcaption></figure><p><strong>The Journey</strong></p><p>I picked an innocuous short diner-to-diner trip from <a href="https://www.maxsoperasf.com/">Max’s Opera Cafe</a> <em>near</em> City Hall to the <a href="https://pinecrestsf.com/">Pinecrest Diner</a> <em>near</em> Union Square (the actual City Hall and Union Square are outside the current service area).</p><p>Normally this trip would take about 8 minutes via ride share on a Friday evening.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/577/1*h5CrpasswTl-gZa7mtzFpA@2x.jpeg" /></figure><p>The app is simple to use and its mechanics should be familiar to any ride-share user. One notable difference is that Cruise is very particular about its pick-up spots.
At first this seems like a limitation, but as someone accustomed to having ride-share vehicles consistently dispatched to the opposite side of the street from my apartment building, I can appreciate the specificity as beneficial.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/585/1*AnkVe__aolV5JirIgY5ACA@2x.jpeg" /></figure><p><strong>The Ride</strong></p><p>Once the vehicle arrives, it unlocks itself and allows for easy boarding into the cozy Chevy Bolt (Cruise is reportedly the “<a href="https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)">largely autonomous</a>”…autonomous…subsidiary of GM), and after a quick safety briefing and seatbelt check the vehicle sets off smoothly back into traffic. My vehicle for the evening was named “Apricot”, and apparently naming vehicles is a <a href="https://www.theverge.com/2017/2/1/14476226/gm-chevy-volt-self-driving-codename-animal-marvel">whole GM tradition</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/933/1*AlO-w03vFTX8X2bEgRMjxA@2x.jpeg" /></figure><p>The best way I can describe the “feel” of the ride is, well, “surprisingly human” (or at least, organic). It doesn’t feel like an electric tram following a fixed route; instead, its motions (and even its steering wheel handling) convey that the vehicle is constantly reacting and adjusting. As Cruise themselves <a href="https://getcruise.com/technology/">describe it</a>:</p><blockquote>Cruise cars consider multiple paths per second, constantly choosing the best ones for unexpected events and changes in road conditions.
Cruise cars tell their wheels and other controls how to move along the selected path and react to changes in it.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*lPP4UK-_TjaGdlFXvxaLdA.gif" /></figure><p>About halfway through the trip, one of those multiple paths per second decided to take the vehicle unexpectedly on an odd detour, going all the way up Nob Hill and then all the way back down past Bush Street. This added about 8 minutes to the ride and is still something of a mystery.</p><p>At first I thought it might have reacted to a potential obstacle and then followed rules regarding when and how to turn. However, this detour was “all left turns”, which is different from <a href="https://theconversation.com/why-ups-drivers-dont-turn-left-and-you-probably-shouldnt-either-71432">what is typical of other driving guidelines</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*R-d928EYMxRMV5L7LaBSFg@2x.jpeg" /></figure><p>During the ride we encountered people unexpectedly on the street a couple of times, and the vehicle handled the situation well, although not exactly in a “shy” manner when it came to humans. At first glance, it seemed to prefer passing the exception situation instead of waiting for it to resolve itself.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*Jz9Ev2XEJz4-MHlX3E2lDA.gif" /></figure><p>The final few minutes of the ride went smoothly. The car’s system announced the impending arrival and then unlocked the doors when it had reached its “spot”.
It was interesting that both on pick-up and on drop-off it preferred to (briefly) double-park next to its favorite spot instead of trying to find a nearby clear spot.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/585/1*tM-3IZuI0M_9M2ZEmvpU1w@2x.jpeg" /><figcaption>14 minutes vs. the 8-minute plan</figcaption></figure><p><strong>The Verdict</strong></p><p>On the positive side, the ride experience is very comparable to traditional ride-share with an added element of novelty. The dispatch itself is lightning quick, since there is obviously no driver ride-select cycle, and at least in this test the ETA of pickup was very accurate (vs. other ride-share experiences that can involve ETAs increasing as a driver speeds away before cancelling).</p><p>However, some elements of routing still seem to need work, as evidenced by the detour I experienced, and inaccuracy there will lead to less predictable transit times compared to a human driver or a knowledgeable passenger (perhaps, ironically, enhanced by their own mapping apps) who might be able to select better alternate routes.</p><p>Finally, the “at curb” experience can be less than ideal during pick up and drop off. It is hard to say for certain after only one trip, but it seems the vehicle opts for its own wide berth (which has been <a href="https://www.sfgate.com/local/article/driverless-cruise-cars-block-SF-traffic-17467985.php">reported in the media as well</a>).</p><p>Handling those last 5–15 feet is vital and curbs will continue to present challenging situations, especially in congested downtown areas with vehicles as well as people arranged in “noisy” configurations.
I do wonder if in some of these situations — low speed, high complexity — having a human resolve the ambiguity (in real-time, remotely, based on cameras and sensor data) might be an option.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4Pz17TfzTStqk3UEJDMcyw@2x.jpeg" /><figcaption>Apricot</figcaption></figure><p>All that said, I enjoyed my Cruise with Apricot!</p><p>The current state of the service is a good start, and although the system needs some work before it can really be considered something beyond a novelty alternative, I look forward to it being one more option in the transportation landscape.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=192f9a4a2451" width="1" height="1" alt=""><hr><p><a href="https://humber.to/driverless-cruising-with-apricot-192f9a4a2451">Driverless Cruising with Apricot</a> was originally published in <a href="https://humber.to">Selected thoughts and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Crafting the Hyperreal]]></title>
            <link>https://humber.to/crafting-the-hyperreal-8cf2afae12cc?source=rss----dd58d698be4c---4</link>
            <guid isPermaLink="false">https://medium.com/p/8cf2afae12cc</guid>
            <category><![CDATA[art]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[generative-art]]></category>
            <category><![CDATA[creativity]]></category>
            <category><![CDATA[stable-diffusion]]></category>
            <dc:creator><![CDATA[Humberto Moreira]]></dc:creator>
            <pubDate>Sun, 18 Sep 2022 20:10:36 GMT</pubDate>
            <atom:updated>2022-09-18T20:09:57.709Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jue8auTpCLaltVcI-dbbqA.png" /><figcaption>A photo of the Palace of Fine Arts + a few iterations of Dall-E 2 outpainting</figcaption></figure><p>Creativity holds a special place in the relationship between humans and machines. We consider it to be a uniquely human ability, differentiated even from those <a href="https://www.amnh.org/explore/news-blogs/research-posts/human-creativity">of other living things</a>. Yet when it comes to technology’s role, although we relish its power to enhance our capabilities, we draw a sharp distinction between creators and their tools. Is the distinction as concrete as we think?</p><p>Last month, a video game designer <a href="https://www.smithsonianmag.com/smart-news/artificial-intelligence-art-wins-colorado-state-fair-180980703/">won a regional digital arts competition</a> with a work created on <a href="https://www.midjourney.com/home/">Midjourney</a>, one of several recently popular platforms (also including <a href="https://openai.com/dall-e-2/">DALL-E 2</a> and <a href="https://stability.ai/blog/stable-diffusion-public-release">Stable Diffusion</a>) that leverage ML-based text-to-image models.</p><p><a href="https://www.instagram.com/p/Ch-6osFtfoM">Colorado State Fair on Instagram: &quot;The digital art category at the Fine Arts competition has people talking! At the Colorado State Fair, we think this brings up a great conversation. With advancing technology, the discussion of AI and art helps the Fair evolve from year to year.&quot;</a></p><p>The piece incited commentary online that is reflective of a simmering debate.
Some of the headlines implied that the art was “created by an AI”, although the process was more involved than that:</p><blockquote>Allen created<em> Théâtre D’opéra Spatial</em> by entering various words and phrases into Midjourney, which then produced more than 900 renderings for him to choose from. He selected his three favorites, then continued adjusting them in Photoshop until he was satisfied. He boosted their resolution using a tool called Gigapixel and printed the works on canvas.</blockquote><p>In this case it is clear that there was active human work and creativity involved, including the writing of the prompt, the selection amongst intermediate artifacts, and the refinement of the piece towards a final form.</p><p>However, we can envision a scenario where several of these steps could be turned over to an algorithm, up to and including the selection of the topic, the contest, or even the decision to enter a contest.</p><p>If the prompt given to an AI-based agent were <em>“go win a series of digital art contests online until you make $1,000”</em> and a system made all of the intermediate decisions, would the originator still be considered an artist? Would the winning work be considered an original creation?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qDPF-9PrQ5bSiQrGMwIAjg.png" /><figcaption>“An infinite number of robots at typewriters in 3d” using Dall-E 2</figcaption></figure><h4>Not your grandparents’ Photoshop</h4><p>State fair art contests aside, it’s worth asking whether this trend is actually novel and what it portends.</p><p>The complexity of authorship in creative works is not new, and our tendency to portray individuals (who <a href="https://magazine.artland.com/5-artists-who-have-a-helping-hand-or-two/">sometimes have quite a bit of help</a>) as singular authors obfuscates the diffuse and collaborative nature of creativity.
Separating out authorship amongst writers, animators, producers, and entrepreneurs <a href="https://www.kcur.org/history/2021-05-22/walt-disney-didnt-actually-draw-mickey-mouse-meet-the-kansas-city-artist-who-did">is tricky</a>, but provides a level of lineage at human scale.</p><p>From a technology standpoint, recognizable precursors of today’s art generation technology go back to <a href="https://news.artnet.com/art-world/artificial-intelligence-art-history-2045520">at least the middle of the 20th century</a>, but it’s been only recently that machine learning models trained on vast datasets have generated eerily impressive results. In the late 2010s, <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network">Generative Adversarial Networks</a> began to be used to <a href="https://news.artnet.com/market/9-artists-artificial-intelligence-1384207">create marketable art</a>, and this year it is <a href="https://www.assemblyai.com/blog/how-dall-e-2-actually-works/">diffusion models</a> which are driving the <a href="https://www.louisbouchard.ai/latent-diffusion-models/">latest boom</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N55cWCV8cghvMoqnihseyg.png" /><figcaption>(My sketch is on the left) — output on the right via Stable Diffusion</figcaption></figure><p>As with many disruptive innovations of the Internet age, the difference is not in the concept, but in a dramatic shift in quality, accessibility, scale, and cost.</p><p>Artists are no strangers to shifting technology, but some of the new techniques introduce a particularly large degree of separation between an originator and their creation, especially through having a basis in conditioning inputs (often <a href="https://medium.com/merzazine/the-art-to-start-designing-prompts-for-gpt-3-introduction-89848c208007">text “prompts”</a>) that produce good results with minimal configuration and sometimes very good results after some effort.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*6jQcukynpiZgRo4laWGt2Q.png" /><figcaption>Stable Diffusion via a web interface (local on Windows 11 PC)</figcaption></figure><p>It’s also worth noting that although images have been making the rounds recently, related technologies continue to evolve for <a href="https://www.newyorker.com/culture/culture-desk/the-new-poem-making-machinery">generative text works</a>, and the technology for video is <a href="https://infinite-nature.github.io/">rapidly catching up</a>. The same trend is emerging across different kinds of media and is making transitions between them fluid. An <a href="https://medium.com/@KunalSavvy/generating-poems-for-any-given-image-using-multi-modal-machine-learning-2be35b72f50a">image converted to a poem</a> can potentially be converted into <a href="https://www.inputmag.com/culture/this-ai-turns-poetry-into-creepy-video-art">a video</a> which can then be further transformed.</p><p>Where these outputs fall in terms of original artistry, production, curation, and commission is fiercely debatable, but what is certain is that they are starting to cause ripples within creative communities.</p><p>A photograph is not a lesser relative of a watercolor, but it <em>is</em> different, and it is the indeterminate sense of provenance of these works and the possibility of misrepresentation and misattribution that seems to cause the most noise.</p><h3>✨Lexi✨ on Twitter</h3><p>Honestly angry that someone charged people $150 for AI background assets and people paid for them while artists on this site struggle getting someone to buy their art commission.</p><h4>The Fair, the Practical, and the Legal</h4><p>Notions of fairness loom large in these debates, not only due to the ease of creation, but also due to the particularities of the generation methodology itself. Datasets used to generate the ML models feed on existing art, and images can be generated in the <a href="https://medium.com/mlearning-ai/artists-up-in-arms-over-new-ai-model-that-can-generate-similar-works-883b25552636">style of existing artists</a> without any attribution or credit. This extends to including likenesses of real people or characters in images, raising thorny copyright and even privacy issues.</p><p>A key question is whether these creations constitute original creative works. Recent legal cases, including one involving a “<a href="https://cyber.harvard.edu/events/2018/luncheon/01/monkeyselfie">monkey selfie”</a> and another where <a href="https://www.smithsonianmag.com/smart-news/us-copyright-office-rules-ai-art-cant-be-copyrighted-180979808/">“AI” was denied copyright</a> in the absence of human authorship, are testing the waters around this topic. The issue <a href="https://www.theregister.com/2022/08/14/ai_digital_artwork_copyright/">seems yet unsettled</a> (and by some accounts has been debated <a href="https://www.ipwatchdog.com/2022/02/20/sorry-nft-worthless-copyright-generative-art-problem-nft-collections/id=146163/">at least since 1965</a>), but at ground level, some online art communities are increasingly <a href="https://waxy.org/2022/09/online-art-communities-begin-banning-ai-generated-images/">banning or placing restrictions on AI-generated images</a>.
This tension is perhaps not unlike what was seen in the 19th century with the advent of photography, except in accelerated and broader form.</p><blockquote>Within a decade of Louis Daguerre’s publication of the process in 1839, half a million plates had been sold in Paris alone. Many artists embraced the technology, while <a href="https://www.nytimes.com/2016/06/16/arts/international/how-photography-cast-new-light-on-art.html">some remained reticent about the extent to which they employed it in their own work</a>.</blockquote><blockquote>Monet, who owned at least four cameras, responded tetchily to the suggestion that he had used a photograph of the Houses of Parliament for one of his famous series of paintings of the Thames, writing to a friend that “whether my cathedrals, my Londons and other canvases are painted from nature or not is nobody’s business and is of no importance.”</blockquote><p>The notion of an effect “within a decade” is a notable reminder of how quickly these new shifts are taking place. When it comes to generative art, <a href="https://arxiv.org/pdf/2112.10752.pdf">core academic papers</a> have segued into <a href="https://openai.com/blog/dall-e-now-available-in-beta/">beta releases</a> of platforms and subsequently to <a href="https://towardsdatascience.com/stable-diffusion-best-open-source-version-of-dall-e-2-ebcdf1cb64bc#:~:text=Stable%20Diffusion%20is%20an%20open,get%20a%20sample%20of%20interest.">open source versions</a> and <a href="https://promptbase.com/">commercial marketplaces</a> all in the space of less than six months.</p><h4>Synthetic Coexistence vs. 
Displacement</h4><p>In the Star Trek universe, replicator technology is able to synthesize a variety of foods and beverages on command from a vast database of recipes, providing future humans with delicious food on demand.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FqD4EVXkfe0w%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DqD4EVXkfe0w&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FqD4EVXkfe0w%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/e5529841d0b16c3d1e5fe6f79526fbd5/href">https://medium.com/media/e5529841d0b16c3d1e5fe6f79526fbd5/href</a></iframe><p>Less clear is what happened to the 24th century’s farmers and cooks and why <a href="https://www.reddit.com/r/startrek/comments/poaks/why_are_there_so_many_farmers_in_star_trek/">the future still seems to have a lot of them</a>. Closer to the 21st century, we are aware that automation in the production and supply chains of anything, from clothing to <a href="https://theconversation.com/fast-food-is-comforting-but-in-low-income-areas-it-crowds-out-fresher-options-136227">fresh food</a>, has unintended consequences.</p><p>All this brings to mind the issue of coexistence, complementarity, and displacement when it comes to new technologies. Did ATMs actually lead to more bank tellers than before? <a href="https://www.vox.com/2017/5/8/15584268/eric-schmidt-alphabet-automation-atm-bank-teller">Yes and no</a>. Is the visual effects industry much larger today than it would have been without the CGI boom? <a href="https://medium.com/@jordangowanlock/is-the-visual-effect-industry-unstable-a804ed1e273b">Most certainly</a>. Are computer-generated effects always better than practical effects? 
<a href="https://fandomwire.com/10-reasons-movies-that-prefer-practical-effects-will-always-be-better-than-cgi-movies/">Most certainly not</a>.</p><p>The notions of supply and demand are complicated, but generally speaking at the core of the question is whether this new ML-assisted creative “supply” will be creating more competition for the same attention (possibly crowding some creators out) or whether it will actually be helping to make the space larger and richer for all.</p><p>As an example, if new ways of generating artwork for computer games allow more and better games to be created, then this could become particularly <a href="https://medium.com/@woodenfox/ai-generated-card-game-artwork-a-boon-to-small-game-developers-9f2651b6c670">beneficial for small game studios</a> who would otherwise have struggled to source art. More creators is very likely a better outcome, and better onramps to creativity are unequivocally a good thing.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FWO3bMZIqBZA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DWO3bMZIqBZA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FWO3bMZIqBZA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/4ec2b2991895b9a995b888471e63e467/href">https://medium.com/media/4ec2b2991895b9a995b888471e63e467/href</a></iframe><p>Art has always been accessible in theory but challenging in practice. 
Ease of onboarding will ideally facilitate more equitable access to expression.</p><h4>What can we expect from the future?</h4><p>With such a fast-moving space, it is hard to predict what even the next few months will bring, but it is very likely that we will be able to understand the provenance of creative works in a more nuanced way.</p><p>Whether based in something truly grounded or merely in effective marketing, there is a special legitimacy afforded to the original, artisanal, and close to the source, be it for <a href="https://www.ledgerinsights.com/de-beers-diamond-provenance-blockchain-tracr-launched-at-scale/">diamonds</a> or <a href="https://www.ferrybuildingmarketplace.com/farmers-market/">radishes</a>. As AI-enhanced generative creative works increase in prevalence, there may be more of an emphasis on highlighting the chain of provenance and the details of the creative inputs that went into a particular work, akin to how we can delve into <a href="https://www.sfmoma.org/watch/how-diego-rivera-made-his-murals/">understanding the techniques</a> and process involved in <a href="https://en.wikipedia.org/wiki/Pentimento">more traditional art</a>.</p><p>It will likely be possible, to a degree, to detect whether works have employed ML in their generation, much in the same way <a href="https://www.media.mit.edu/projects/detect-fakes/overview/">deep fake detection</a> is staying a step ahead of deep fake generation. Works closer to real sources could keep a certain differentiation.
However, if ML-based generative techniques become pervasive, then even though some may prefer not to use them or to avoid works created with them, their absence may become <a href="https://en.wikipedia.org/wiki/Stop_motion">the exception rather than the rule</a> over time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*d90NC1jjrNjqZ_v-fpqqAg.png" /><figcaption>Many Dall-E 2 iterations later…</figcaption></figure><p>A slice of the <a href="https://www.instagram.com/p/Chz_sVbvXtQ/">original photo</a> from the Palace of Fine Arts remains in the image above, but after several ML outpainting iterations, the generated landscape increasingly becomes someplace else. And even the “real” Palace of Fine Arts is a <a href="https://www.palaceoffinearts.org/history">renovated reconstruction</a> of a <a href="https://www.instagram.com/p/CYpPM0xPrmD/?utm_source=ig_embed&amp;ig_rid=a069c789-3b61-4daf-8ae9-33b2b1178e9a">real ruin</a> inspired by <a href="https://www.swanngalleries.com/news/prints-and-drawings/2018/05/ancient-then-ancient-now-piranesi-views-of-rome/">an engraving</a> of actual ruins.</p><p>Given the way ML-based generative models can take real inputs and create new works that are increasingly distant from concrete origins, we may be taking another step in the transition suggested by Jean Baudrillard’s <a href="https://web.stanford.edu/class/history34q/readings/Baudrillard/Baudrillard_Simulacra.html">notion of hyperreality</a>, where representations lose their link to an underlying reality.</p><p>So whatever we call this latest trend, it’s steps removed from both reality and our imagination, but it still has humans at the controls…for now.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8cf2afae12cc" width="1" height="1" alt=""><hr><p><a href="https://humber.to/crafting-the-hyperreal-8cf2afae12cc">Crafting the Hyperreal</a> was originally published in <a href="https://humber.to">Selected thoughts
and experiments</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>